It is a curious coincidence that ChatGPT has been electrifying the media and the public for the past few weeks, right around the centenary of a birthday closely associated with it, writes Chris Kaiser on January 8, 2023. Joseph Weizenbaum would have turned 100 on this day. A good opportunity to revisit a text I wrote in 2007. At the time, I managed to win Weizenbaum over for a panel discussion in Berlin. Here is the article that appeared 15 years ago:
In the not too distant future, a dialogue between a person and their car could look like this: “I would like to drive to the office!” – “Sure,” replies a computer voice, “there are only three liters of petrol left in the tank. The nearest petrol station is five kilometers away. Would you like to stop there on the way to the office?” The car calculates the route to the office and announces in good time when the petrol station is coming up. Machines and computer systems that appear to speak, act and react like a human being are the results of research in “artificial intelligence” (AI): science promises refrigerators that reorder groceries on their own, vacuum-cleaning robots, virtual assistants in libraries, hospitals and other staff-intensive services, as well as computer systems that anyone can simply talk to. Representatives of so-called ‘hard AI’ even predict a coming generation of robots that can think, learn and feel. Such intelligent systems will be far more capable than humans and will therefore make them superfluous, claims Hans Moravec. The professor of robotics at Carnegie Mellon University in Pittsburgh, USA, believes in a post-biological age in which robots will replace humans.
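To make the scenario concrete, here is a minimal sketch in Python of the kind of rule-based dialogue step described above: the assistant checks the fuel level and, if it is running low, offers a stop at a petrol station on the way. The function name, the reserve threshold and the phrasing are illustrative assumptions, not part of any real in-car system.

```python
# Minimal sketch of a rule-based in-car dialogue step (illustrative only).
FUEL_RESERVE_LITRES = 5  # assumed threshold below which a stop is suggested

def handle_request(destination: str, fuel_litres: float, nearest_station_km: float) -> str:
    """Return the assistant's reply to 'I would like to drive to <destination>'."""
    if fuel_litres < FUEL_RESERVE_LITRES:
        return (
            f"Sure. There are only {fuel_litres:.0f} liters left in the tank. "
            f"The nearest petrol station is {nearest_station_km:.0f} km away. "
            f"Would you like to stop there on the way to the {destination}?"
        )
    return f"Sure, starting navigation to the {destination}."

print(handle_request("office", fuel_litres=3, nearest_station_km=5))
```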
Artificial intelligence – is it a blessing or a curse for mankind, and where are the limits of research? “It’s clearly a blessing,” says Wolfgang Wahlster, Director of the German Research Center for Artificial Intelligence (DFKI). His thesis: intelligent systems will make life easier for people. At the Call Center World trade congress in Berlin, Wahlster, computer scientist Joseph Weizenbaum and host Lupo Pape, Managing Director of the Berlin company SemanticEdge, discussed the pros and cons of artificial intelligence. “We are investigating specific application-oriented questions in order to support everyday life with human-friendly services,” according to Wahlster. DFKI is a leader in the development of semantic technologies: computer systems that ‘understand’ natural spoken language. “This is one of the greatest challenges for computer science in the coming years,” emphasized Wahlster. In the future, computers and robots should master everyday language and gestures, instead of requiring interaction via complicated artificial languages with keyboard and mouse. In Japan, people are already talking about the one-button computer: “on and off, everything else happens via language, facial expressions and gestures,” explained the director of the DFKI. Cars are another exciting field of application for semantic speech recognition.
This is where Lupo Pape came in. “Software systems must become more intelligent so that they can better understand what people want from them and, conversely, make themselves easier for people to understand,” Pape demanded, using speech dialog systems as his example. To make this possible, linguists, psychologists and computer scientists are continually working on new speech dialogues. SemanticEdge’s development work centers on recognizing spoken language in the context of the application and on handling the user’s wishes and answers more intelligently and more flexibly. The aim of these dialogues is to come as close as possible to people’s expectations. “In a dialogue with human traits, the caller will feel more accepted than in one with rigid menu navigation and requests for specific answers,” said Pape. Even a caller who cannot remember the name of a business partner can be put through to the desired connection via a search function, by naming the industry and location, says the speech dialogue expert.
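The fallback Pape describes can be sketched in a few lines: if the caller cannot name the business partner, the dialogue falls back to a directory search by industry and location. The directory, its entries and the field names below are invented for illustration; a real system would of course sit behind a speech recognizer.

```python
# Toy directory lookup with a fallback search by industry and location.
DIRECTORY = [
    {"name": "Müller & Co.", "industry": "plumbing", "city": "Berlin", "number": "030-111111"},
    {"name": "Schmidt GmbH", "industry": "plumbing", "city": "Hamburg", "number": "040-222222"},
    {"name": "Krause KG",    "industry": "printing", "city": "Berlin", "number": "030-333333"},
]

def find_connection(name=None, industry=None, city=None):
    """Look up by name first; otherwise search by industry and location."""
    if name:
        return [e for e in DIRECTORY if e["name"].lower() == name.lower()]
    return [e for e in DIRECTORY
            if (industry is None or e["industry"] == industry)
            and (city is None or e["city"] == city)]

# The caller does not remember the name, but knows industry and city:
print(find_connection(industry="plumbing", city="Berlin"))
```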
But what is understanding? “We can only conduct speech dialogues with machines in very narrow contexts,” says Weizenbaum, who developed the ELIZA program at the Massachusetts Institute of Technology (MIT) in the 1960s, a system that seemingly enabled a dialogue between human and computer and is the prototype of today’s chatbots. A sentence, he argued, always has a context of meaning, an experiential background, which the computer cannot develop. For example, the car stops at the word “Halt”. But if the driver says “Ich bin halt müde” (“I’m just tired”), where “halt” is merely a filler word, then the car will also stop, possibly in the middle of the freeway. “What is said is not always what is meant,” says Weizenbaum. “The human recipient interprets the sentence in the light of his entire life story”; a machine will never be able to do that.
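A minimal sketch makes Weizenbaum’s point visible: a controller that reacts to the surface token “halt” cannot tell the command “Halt!” from the filler particle in “Ich bin halt müde” and would brake in both cases. This is purely illustrative; no real speech recognizer or car interface is involved, and the function is invented for this example.

```python
# Naive keyword spotting: matches the token "halt" with no notion of meaning.
def naive_car_controller(utterance: str) -> str:
    tokens = utterance.lower().replace("!", "").split()
    if "halt" in tokens:          # surface match only, no context
        return "BRAKING"
    return "DRIVING ON"

print(naive_car_controller("Halt!"))              # BRAKING - intended
print(naive_car_controller("Ich bin halt müde"))  # BRAKING - not intended
```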
“Why do we actually need cars that we can talk to?” asked Weizenbaum. And indeed, it is hard to see why saying “stop” should be easier than stepping on the brakes. Feelings are generally attributed to living beings, not machines. But are emotions programmable at all? “To a limited extent,” says Wahlster. “For example, with a lot of training, a virtual call center agent can ‘learn’ that a loud and excited voice means trouble, and respond accordingly. To do this, however, a large number of behavioral patterns must be stored. Researchers still know too little about what happens in the human brain during emotions to program operating instructions for joy or fear. It will certainly take centuries,” Wahlster is convinced.
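The kind of pattern Wahlster mentions can be caricatured in a few lines: a loud, fast voice is mapped to “agitated” and the agent adapts its reply. Here the “learning” is reduced to a hand-written rule over two acoustic features with invented thresholds; a real system would derive such rules from many stored behavioral patterns.

```python
# Crude caricature of voice-based emotion detection in a call-center agent.
def classify_caller(loudness_db: float, words_per_minute: float) -> str:
    if loudness_db > 75 and words_per_minute > 180:
        return "agitated"          # loud and fast: likely an upset caller
    return "calm"

def respond(state: str) -> str:
    if state == "agitated":
        return "I am sorry for the inconvenience. Let me connect you to a colleague."
    return "How can I help you today?"

print(respond(classify_caller(loudness_db=82, words_per_minute=200)))
```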
For the time being, there are robots like Elvis from Gothenburg, which is supposed to learn to walk on its own. Elvis’ program discards the unsuccessful attempts that make him keel over and keeps only the successful program variants. Scientists at MIT go one step further. Several robots currently ‘live’ at the world’s leading technology university that not only look human-like but are also supposed to react and ‘learn’ like small children. Over ten years ago, AI expert Rodney Brooks brought the robot children Cog and Kismet to life. Equipped with some basic skills, the two are supposed to learn human behavior by interacting with team members, much as young children do. Cog can hear, speak and see. By now the robot recognizes its caretakers and is shy with strangers. With Kismet, the researchers want to transfer human moods to the machine. Kismet becomes sad when no one speaks to it for a long time and smiles when someone stops and looks at it. The robot Lazlo is being given a face with which to train human facial expressions.
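The learning scheme attributed to Elvis, trying a variant of the walking program, discarding it if the robot keels over and keeping it if it gets further than before, amounts to simple trial-and-error search. The sketch below assumes a made-up scoring function standing in for a physics trial; everything in it is illustrative, not Elvis’ actual software.

```python
import random

def walk_distance(params):
    """Pretend trial: returns meters walked, or None if the robot keels over."""
    if max(abs(p) for p in params) > 1.0:            # too extreme a gait -> falls
        return None
    return sum(1.0 - abs(p - 0.3) for p in params)   # arbitrary smooth score

best = [0.0, 0.0, 0.0]
best_score = walk_distance(best) or 0.0
for _ in range(1000):
    candidate = [p + random.gauss(0, 0.1) for p in best]
    score = walk_distance(candidate)
    if score is not None and score > best_score:     # keep only successful variants
        best, best_score = candidate, score

print(f"best gait parameters: {[round(p, 2) for p in best]}, score: {best_score:.2f}")
```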
“Megalomania!” counters Weizenbaum.
“The fantasy that we are building robots that look and act like humans is simply madness. The limits of artificial intelligence have been reached here, and we should not cross them even if we could,” warned the computer dissident. “People are fooling themselves when they think the computer smiles because it is happy. The joy is just programmed,” says Weizenbaum. When a certain set of circumstances comes together, people tend to be happy and smile. The robot calculates that; it cannot really feel it. Even with his ELIZA program, Weizenbaum, then a professor at MIT, was appalled that people genuinely believed ELIZA was responding to their questions and problems. In reality, Weizenbaum had merely programmed a certain number of question-and-answer patterns. The more of them there are, the better the illusion that the computer ‘understands’ people.
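In the spirit of what Weizenbaum describes, a handful of ELIZA-style rules already produce the illusion: a fixed list of question-and-answer patterns and canned fallbacks, nothing more. The patterns below are illustrative and are not Weizenbaum’s original DOCTOR script.

```python
import random
import re

# A few ELIZA-style pattern/response rules; more rules improve the illusion.
RULES = [
    (r"\bI am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bmy (mother|father)\b", ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
    (r"\byes\b", ["You seem quite sure.", "I see."]),
]
FALLBACK = ["Please go on.", "What does that suggest to you?"]

def eliza_reply(utterance: str) -> str:
    for pattern, responses in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(FALLBACK)

print(eliza_reply("I am unhappy at work"))   # e.g. "Why do you say you are unhappy at work?"
print(eliza_reply("It rained yesterday"))    # falls back to a canned prompt
```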
Marvin Minsky, who co-organized the Dartmouth Conference fifty years ago, at which the term artificial intelligence was coined, and who headed the Artificial Intelligence Laboratory at MIT for many years, claims that a database must hold a million records, including contradictory ones, in order to approximate common sense. At the end of 2006 he published the book “The Emotion Machine”, in which he deals with the question of how feelings can be programmed. “There are states called emotions that people think are some sort of mystical complement to rational thinking. My take on things is that each emotional state is just another way of thinking,” Minsky said. He also said: “The brain is just a machine made of flesh.”
So can the human being be fully computed, broken down into bits and bytes, captured precisely by a certain number of algorithms, little instruction manuals? Weizenbaum finds the “contempt for biological life” particularly questionable. Minsky, Moravec and other proponents of so-called ‘hard AI’ describe humans as ‘malfunctioning’: weak and prone to failure. Weizenbaum observes an almost religious belief in science: not only the organs, but even the brain, it is assumed, can be replaced by artificial ones.
Wahlster and Pape also consider the post-biological age undesirable. “We don’t need robots that feel,” says Wahlster. Robots should do useful things. “Cleaning robots, for example, will also look very different from humans, because they should be flat enough to clean under tables and chairs. Robots are already being used in hospitals to run errands and serve meals, hopefully with a 100% hit rate. The intelligent helpers take over the routine work and free up room for people. Humanoid robots are not a topic for the DFKI. We see artificial intelligence as an engineering science, take a pragmatic approach and pursue application-oriented questions, for example in vehicle technology for companies such as BMW or Mercedes. Our technology is already in there,” summed up Wahlster.