Are we witnessing the rise of a different, adaptive artificial intelligence (AI) that collaborates with humans and supports them with smart decisions? The computer scientist Niao He is investigating how such an AI can theoretically be secured so that it is really useful.
As a researcher, Niao He has both people and technology in mind. Her inaugural lecture was typical: in a few words, she outlined how adaptive computer software affects our daily lives: “We are on the threshold of a new age of artificial intelligence and are all amazed at what AI is already able to do today and how it is already changing our everyday lives.” In the lecture, the computer science professor at the Institute for Machine Learning compared the current state of development of artificial intelligence with the dawn: the dawn promises great things, and we feel that there is still a lot of work to do.
The image of the dawn alone reflects the striking rise that research and development in the field of machine learning and artificial intelligence has experienced in recent years. New mathematical and algorithmic insights, enormous increases in hardware performance, freely available AI software modules and huge amounts of data with which artificial intelligence can be trained have expanded the application possibilities of AI by leaps and bounds.
Today, computers are capable of machine learning using statistical and data-driven methods. They supplement people’s knowledge by automatically extracting patterns and regularities from huge data sets that are too complex and extensive for humans. For example, AI can use this to discover new protein structures and thus contribute to the development of new drugs. Niao He wants to go one step further and develop an AI that can do more than recognize patterns. In line with the values of the ETH AI Center, to whose core team she belongs, she has in mind a trustworthy and cooperative artificial intelligence that does not work in competition with humans, but works together with them and supports them with intelligent and comprehensible decisions.
Towards Trustworthy AI – Niao He explores how AI can contribute to intelligent operational decisions. (Source: ETH Zurich / Nicole Davidson)
AI as companion and advisor
The long-term goal of her research is an adaptable AI that – like people themselves – adapts quickly and flexibly to changing environmental conditions and supports people like an advisor when they make unusually difficult decisions. Niao He is also encouraged by her own life experience: After completing her bachelor’s degree in China, she studied and researched in Georgia and Illinois (USA) for a good ten years before coming to Switzerland in 2020. With each move, she adapted to a different culture and acquired new skills – she learned to drive in Illinois, for example, and German in Zurich. “When I encountered a completely new environment, I often wished that an AI would help me to make the best possible decisions.”
In her research, Niao He deals with optimization, automation and intelligent decision-making in organizations. Her group investigates the principles according to which algorithms – i.e. the calculation rules on the basis of which intelligent software is programmed – can be designed in a mathematically sound manner, so that AI delivers reliable, data-based problem solving and intelligent decisions at all times. To ensure that AI actually complements human work instead of replacing it, He’s team is looking for new AI approaches and alternative methods of machine learning. “Today we almost routinely develop intelligent programs to solve real-world problems of extremely high complexity with huge amounts of data,” she explains.
Learning to deal with uncertainty and the unknown
In fact, most AI methods today improve the quality of their results by learning what works from large training data sets, thereby increasing their reliability. “In day-to-day operations, however, the problems that AI is supposed to solve are subject to many uncertainties,” Niao He points out. These uncertainties can be technical or human in nature. They can affect data and data security as well as the use of shared platforms or systematic human bias.
“In order for artificial intelligence to work reliably even under uncertainty and changing conditions, it is important that we formulate the uncertainties mathematically and integrate them into our learning algorithms. That’s what we’re working on,” says Niao He. “We need AI systems that make consistent decisions over time, that learn to deal with uncertainties or unfamiliar environments, and that can adapt to new tasks.” A promising approach that could lead to adaptive AI is reinforcement learning: an intelligent agent improves the reliability of its results through repeated interaction with its environment. Niao He’s team also extends this approach to cases where data is sparse or human experience is lacking.
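To make the idea of learning through repeated interaction concrete, here is a minimal sketch of tabular Q-learning on a hypothetical five-state chain environment. The environment, reward, and hyperparameters are illustrative assumptions for this sketch, not taken from He’s research:

```python
import random

# Hypothetical toy environment: a 5-state chain in which the agent moves
# left (0) or right (1) and earns a reward of 1 only on reaching the last state.
N_STATES = 5
ACTIONS = [0, 1]  # 0 = left, 1 = right

def step(state, action):
    """Deterministic transition; returns (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: states x actions
    for _ in range(episodes):
        state = 0
        done = False
        while not done:
            # Epsilon-greedy: mostly exploit the current estimate, occasionally explore.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            next_state, reward, done = step(state, action)
            # Temporal-difference update toward the bootstrapped target.
            target = reward + gamma * max(q[next_state])
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q

q = q_learning()
# After training, "right" should dominate "left" in every non-terminal state.
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

The agent starts with no knowledge of the environment and, purely through repeated interaction and reward feedback, converges on the reliable policy of always moving right – the core mechanism that, in far richer settings, underlies the adaptive behavior described above.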
Reliability over time creates trust
For Niao He, one thing is clear: “AI must be human-centric, expressing our values and working in a trustworthy manner.” She shares the view that values such as trustworthiness, transparency, privacy, fairness, ethics, responsibility and liability should serve as guiding principles for the use of AI in practice. “Trustworthy AI works reliably over a long period of time. Reliability over time creates trust,” she explains.
She loves theory, and she is aware that talk of trustworthiness is ambiguous: “For me, ‘trustworthy AI’ is almost a magic word, because it has been overloaded with many meanings. For me, trustworthiness is a question of methodology.” She adds resolutely: “We should develop AI from scratch in such a way that it is mathematically and theoretically sound before it is used in practice. Unfortunately, the theoretical aspects of AI are too often missing from the picture of trustworthy AI.”
Knowing limits, increasing diversity
Since it is comparatively easy for computer designers and programmers today to try something out on AI platforms, algorithms now and then appear in practice that deliver very plausible results, but when an error occurs, it remains unclear why it arose. For Niao He, “theory” means that the mathematical and algorithmic foundations of AI are understood to such an extent that it can be explained at any time how an algorithm really works and how its results come about. “Theoretical understanding is above all about understanding the fundamental limits of these problems and the fundamental limits of the algorithms.”
Developing AI in a human-centric way requires counterfactual thinking, says Niao He: what would have happened if I had made a different choice, or if someone of a different gender or ethnicity had made the choice in my place? Diversity is important for AI research: “Especially when it comes to fairness, trust and cooperation with AI, it is very important that different people contribute their perspectives to the development of AI. I would therefore like to encourage all female students to get involved in AI research, so that AI works in a cooperative, ethical, fair and trustworthy manner in the future.”
In her inaugural lecture, Niao He compared the current development status of artificial intelligence with the dawn: the dawn promises great things, and we feel that there is still a lot of work to do. (Source: ETH Zurich / Department of Computer Science)
This article first appeared on ETH News.