Remember when the consumer version of Google Glass launched in 2014? It was hailed as the dawn of a new era in human-computer interfaces: people would go about their daily lives with all the information they needed right in front of their eyes. Eight years later, how many people do you see walking around in smart glasses?
Stanford professor Elizabeth Gerber argues that technology only reaches people if they actually want it. Speaking at the fall conference of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), she said: “We didn’t want to wear Google Glass because it invaded our privacy. We didn’t want it because it changed human interaction. Think of Google Glass when you think about what AI can do – people have to want it.”
“Making AI that people want is just as important as making sure it works,” Gerber said. Another cautionary tale is the failed use of AI-based tutors, which ended up distracting children from the subject matter. The same goes for workers who have to use AI-driven systems, she added. Developing human-centric AI requires more interaction with people across the enterprise, and it is often hard work to get everyone involved to agree on which systems are genuinely helpful and worth the investment for the business.
“Having the right people in the room is no guarantee of consensus, and indeed results often arise from disagreements and unease. We need to deal with discomfort and use it productively,” said Genevieve Bell, a professor at the Australian National University and a speaker at the HAI event. “How do you teach people to navigate a place they are uncomfortable with?”
It could even mean that no AI at all is better than some AI, as Gerber pointed out. “Remember, when you develop, to take this human-centric approach and design for people’s work – sometimes all you need is a script. Instead of taking an AI-centric approach, put people first. Iteratively design and test with people to increase their job satisfaction and engagement.”
Counterintuitive as it may seem, developers should not try to make AI more human-like, for example by using natural language processing for conversational interfaces. In the process, the functionality that actually helps people be more productive can be diluted or lost altogether. “Look at what happens when someone who doesn’t understand it designs the prompt system,” said Ben Shneiderman, a professor at the University of Maryland. “Why is it a conversational thing? Why is it a natural language interface when it’s a great place to design a structured prompt containing the various components, designed along the semantics?”
“The idea that human-computer interaction should be based on human-human interaction is suboptimal — it’s bad design,” Shneiderman said. “Human-human interaction is not the best model. We have better ways to design, and moving away from natural language interaction is an obvious possibility. There are many ways we can overcome this model and shift to the idea of designing tools—super tools, telebots, and active devices.”
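To make the contrast concrete, here is a minimal sketch of what a structured prompt could look like – a hypothetical Python example written for this article, not Shneiderman’s design or any particular product. The user fills in named components such as task, audience, tone and length instead of typing a free-form conversational request:

from dataclasses import dataclass, field

@dataclass
class StructuredPrompt:
    """A prompt assembled from explicit, user-editable components."""
    task: str                      # what the system should do
    source_text: str               # the material it should work on
    audience: str = "general reader"
    tone: str = "neutral"
    max_words: int = 150
    constraints: list = field(default_factory=list)

    def render(self) -> str:
        """Compose the named components into one prompt string."""
        parts = [
            f"Task: {self.task}",
            f"Audience: {self.audience}",
            f"Tone: {self.tone}",
            f"Length: at most {self.max_words} words",
        ]
        if self.constraints:
            parts.append("Constraints: " + "; ".join(self.constraints))
        parts.append("Input:\n" + self.source_text)
        return "\n".join(parts)

prompt = StructuredPrompt(
    task="Summarize the meeting notes",
    source_text="(notes pasted here)",
    audience="engineering managers",
    constraints=["keep every action item", "flag open questions"],
)
print(prompt.render())

Because every component is visible and editable, users steer the system through explicit controls rather than guessing which phrasing a chatbot will respond to.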
“We don’t know how to design AI systems to have a positive impact on humans,” said James Landay, associate director of Stanford HAI and host of the conference. “There is a better way to develop AI.”
The following recommendations were made at the conference:
Reimagining and redefining human-centric design
Panelists proposed a new definition of human-centric AI that emphasizes the need for systems to improve human life and challenges the problematic incentives currently driving the development of AI tools. “Current efforts are based on a denial of human competence,” Shneiderman said. “Yes, people make mistakes, but they are also remarkable in their creativity and in their ability to be experts. What we really need to do is build machines that make smart people smarter. We want to improve their skills. We have understood this in many designs by incorporating limits, guardrails and interlocks.
“These are all things that have been described in the human factors literature for 70 years – how we prevent mistakes. When your self-cleaning oven is above 600 degrees, you can no longer open the door, okay? And that’s built into many technologies. This is design at work. That’s the right kind of design that we need to build more of. And we need to improve human competence while reducing the chance of error.”
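The oven analogy translates directly into code. The sketch below is purely illustrative – the class name and threshold are assumptions, not a real appliance API – but it shows the principle: the interlock makes the unsafe action impossible rather than merely warning about it.

class OvenDoorInterlock:
    """Refuses to open the door while the oven is too hot."""

    SAFE_OPEN_TEMPERATURE = 600  # degrees; above this the door stays locked

    def __init__(self, temperature: float):
        self.temperature = temperature

    def open_door(self) -> str:
        if self.temperature > self.SAFE_OPEN_TEMPERATURE:
            # The design prevents the error; it does not rely on a warning label.
            return f"Door locked: {self.temperature} degrees exceeds the safe limit."
        return "Door opened."

print(OvenDoorInterlock(temperature=850).open_door())  # stays locked
print(OvenDoorInterlock(temperature=150).open_door())  # opens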
Find multiple perspectives
This requires multidisciplinary teams made up of workers, managers, software developers and others with differing perspectives, according to Jodi Forlizzi, a professor at Carnegie Mellon University. In addition, said Saleema Amershi, senior principal research manager at Microsoft Research, “we need to redesign some of our processes. Even when there are people like designers, or people who are knowledgeable about human-centric principles, a lot of them aren’t in the room where the decisions are being made about what’s going to be built. We need to rethink our entire processes and make sure those people are working with the technologists and the AI people early on.”
Rethink AI success metrics
“Most of the time we ask what these models can do, but we really should ask what people can do with these models,” Amershi said. “Currently, we measure AI by optimizing accuracy, but accuracy isn’t the only measure of value. Developing human-centric AI requires human-centric metrics.”
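As a rough illustration of that shift, the hypothetical report card below places model accuracy next to measures of what people actually accomplish with the system. The metric names and values are assumptions made for this sketch, not a standard proposed at the conference.

from dataclasses import dataclass

@dataclass
class HumanCenteredReport:
    model_accuracy: float        # share of model outputs judged correct
    task_completion_rate: float  # share of users who finished their task
    minutes_saved: float         # average time saved versus the manual workflow
    satisfaction: float          # average user rating, normalized to 0..1

    def summary(self) -> dict:
        """Report what people could do with the model, not just the model itself."""
        return {
            "accuracy": self.model_accuracy,
            "task_completion": self.task_completion_rate,
            "minutes_saved": self.minutes_saved,
            "satisfaction": self.satisfaction,
        }

print(HumanCenteredReport(0.91, 0.78, 12.5, 0.64).summary())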
Keep the human in the loop – and the AI easy to override
“We want AI models that are understandable, predictable, and controllable,” Shneiderman said. “It’s still the enduring notion that you’re in charge and that you can override it. We rely on reliable, safe and trusted things like our cameras to set the shutter, sharpness and color balance. But if we see that the focus is wrong, we can change that. The mental model should be that users have a panel that gets them what they want, and then the system gives them some previews, some choices, but they can override it.”
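Shneiderman’s camera analogy suggests a simple pattern: the system proposes settings, shows a preview, and the user can override any part before committing. The short sketch below is a hypothetical illustration of that pattern, not code from any camera or AI product.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class CameraSettings:
    shutter_speed: str = "1/125"
    focus: str = "auto"
    color_balance: str = "daylight"

def auto_suggest() -> CameraSettings:
    """Stand-in for the automation: propose a sensible default configuration."""
    return CameraSettings()

suggested = auto_suggest()
print("Preview:", suggested)

# The user stays in charge: override only the part that looks wrong.
final = replace(suggested, focus="manual: subject at 2 m")
print("Applied:", final)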