He was of course distraught when, in the spring of 2023, he heard about a case in which a man had taken his own life, possibly as a result of contact with an AI that suggested that option.
But he recognised the pattern. He came to Stanford as a PhD student in 1966. His supervisor was Professor John McCarthy, known for having coined the term Artificial Intelligence (AI).
It is now more than 50 years since Erik Sandewall first came into contact with a program that could “talk” to its user.
“Such programs adopt a compassionate personality. Simple systems of this type, constructed at MIT and Stanford in the late 1960s, were a bit like Apple’s digital assistant Siri. One such system was developed by a guy called Joe Weizenbaum.”
Together with his colleagues at Stanford University, Erik Sandewall had fun “talking” to one of these programs. They often tried to provoke entertaining replies.
Humorous dialogues
“You could understand what it was doing. It was a sophisticated mirror. If you said: ‘I’m sad about something’, the program would mirror this in its reply: ‘Is there anything else that makes you sad?’”

Any humorous dialogues generated were posted on the notice board.
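The mirroring he describes is essentially the mechanism of Weizenbaum’s ELIZA: match a keyword pattern, swap first- and second-person words, and reflect the sentence back as a question. A minimal sketch of that idea in Python (the rules, templates and fallback below are illustrative, not taken from the original system):

```python
import re

# Swap first- and second-person words so a reply mirrors the user.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Illustrative keyword rules; the real ELIZA scripts were far more elaborate.
RULES = [
    (re.compile(r"i'?m (\w+)", re.IGNORECASE),
     "Is there anything else that makes you {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE),
     "Why do you feel {0}?"),
]

def reflect(fragment):
    """Swap pronouns in a captured fragment ('my job' -> 'your job')."""
    words = fragment.rstrip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in words)

def reply(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(reflect(match.group(1)))
    # Fallback when no rule matches; compare the anecdote below.
    return "Please go ahead."

print(reply("I'm sad about something"))
# -> Is there anything else that makes you sad?
```

The fallback line is the joint in the machinery that Sandewall’s anecdote below exposes: when no rule matched, the system simply produced a stock phrase.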
How did this contact affect you and your colleagues?
“You got carried away by the dialogue, although intellectually you knew what it was. This was an Aha! moment... realising that contact with a machine could make you react like that. It drew you in. If we were caught that easily, you understand that it’s possible to develop a much stronger relation, for instance if you’re mentally unstable. And of course it has all been far more fine-tuned over the last 50 years.”
Erik Sandewall once decided to give an over-the-top reply. He typed something like: “If you can’t behave decently, I’ll beat you to death”.
“‘Please go ahead’, came the reply. This was a common fallback when the system had no reply of its own. Then I typed ‘ctrl/c/kjob’ and ended the job. That slip of paper on the notice board gave many a good laugh!”
The command for ending a process was “ctrl/c/kjob”, where “kjob” simply means “kill job”.
New day, new opportunities
Erik Sandewall has a playful, curious glint in his eyes, a sense that every day brings new opportunities. He has been firmly planted in computer science since the 1960s and is often described as Sweden’s first AI professor. He came to Linköping University as a new professor in 1975, and in 1985 he was one of the founders of the Department of Computer and Information Science (IDA) in its present form.

“It’s been interesting, the whole time. Recruitment went really well. We had a modern organisation. We could offer young talents a post as associate professor and their own research group. It was a winning concept,” he says.
Interactivity
Some of his research has been associated with the development of various types of business and administrative systems. He and his colleagues realised that such systems needed the ability to interact with the user; the existing systems were complicated.

“We understood that administrative systems should be required to let the user enter them and ask questions, and receive an immediate reply from the system. We quickly realised that AI could be the solution.”
Erik Sandewall also believed that it would not be possible to separate AI from other computer science at a Swedish university, as resources were limited.
“It would have been an enormous mistake to have had research and education within AI and then programming on the side. We needed research in AI, programming systems, databases and, for example, programming languages. Languages with certain properties for AI also have great advantages in other program development.”
Intelligence is very complicated; Erik Sandewall stresses this several times. Still, it is tempting to think that computer science might recreate it.
“The problem is that intelligence is so much more complicated than ChatGPT, which is currently described as something very intelligent. It uses AI technology and an ability to handle large amounts of information, and assembles something that sounds reasonable. But you can’t draw the conclusion that such a system is intelligent in a general sense. It’s all about certain aspects of intelligence and certain kinds of learning.”
One way of looking at intelligence is as the ability to draw conclusions and make assessments based on the situation at hand.
Intuition
“If you see intelligence in humans as exactly this, then I can’t see any reason why you wouldn’t be able to get a computer system to do this also. But then there’s the difficult concept of intuition. It is thought that humans have the ability to reach conclusions and make assessments without reasoning, and without relying on any abstract principles. It just clicks. Maybe it’s a combination. But this type of intuition seems to be an important part of the moral compass in us. The question is how to recreate this in our computers. There are theories on this, but no good answers, not that I know of, anyway.”

Why is that important?
“It’s an important piece of the puzzle, if we are to recreate human intelligence. It’s also a big part of the mystery, and it’s still nowhere in sight. I think the solution to this is generations away.”
Why would a computer need morals?
“A computer system must in some sense understand the tasks it is to perform, and this often includes a moral aspect. It can’t just be superficial: ‘You can’t do this, you can’t do that.’ The system needs to be able to assess the consequences in relation to a set of moral principles.”
Many people do not consider the fact that AI technology is a collection of various technologies, such as learning, pattern recognition and the handling of large data sets. It also includes diagnosis and planning: being able to identify a series of actions that reaches a given outcome, to learn from mistakes and to revise the plan.
“Each ability on its own may work perfectly for its purpose. That’s when we are tricked into believing that AI is extremely intelligent and will soon come to threaten humanity. But so far, no one has come anywhere near a combination of all the abilities of the human brain.”
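Planning in the sense described above, finding a sequence of actions that reaches a given outcome, is one of the oldest and best-understood items in that collection. A minimal sketch of the idea as breadth-first search over a toy state space (the rooms and actions below are invented for illustration, not from any system mentioned here):

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search for a shortest action sequence from start to goal.

    `actions` maps a state to a list of (action_name, next_state) pairs.
    Returns a list of action names, or None if the goal is unreachable.
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, nxt in actions.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# A toy domain: a robot moving between rooms (purely illustrative).
ACTIONS = {
    "hall":    [("go-kitchen", "kitchen"), ("go-office", "office")],
    "kitchen": [("go-hall", "hall")],
    "office":  [("go-lab", "lab"), ("go-hall", "hall")],
}

print(plan("hall", "lab", ACTIONS))
# -> ['go-office', 'go-lab']
```

In this simple picture, learning from mistakes and changing the plan can be as little as calling plan again from whatever state the failure left you in; each such ability works well in isolation, which is exactly the point made above.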
Machines taking over
He refers to the American Psychological Association and its definition of intelligence as a combination of many abilities.

“Attempts to categorise all the abilities most often end up with too much to take into account. The individual abilities can be tested and separated. Some years ago, there was talk about AI as a ‘big switch’, where you connected one ability at a time. But it’s very easy to construct cases where you need several abilities simultaneously. Also, it’s still not known whether human abilities are interconnected. Do they reinforce one another?”
The worst-case scenario for many people is an intelligence that suddenly acts independently, makes its own decisions and extends itself; that machines will take over.
“This concern keeps recurring. The concept of an AI winter is based on it. In the 1970s, there were many such concerns that made researchers hold back, and funding was reduced. Fears that machines would take over, and the criticism that followed, grew too large.”
Did the AI winter affect you too?
“Yes, we were also affected by those currents. Research continued, but perhaps in a more cautious way. As a matter of fact, the positive effects of the research can be considerably greater than the negative ones,” he says.
What could lead us into a new AI winter?
“A loss of trust in the technology, should something go terribly wrong. Or, for example, chatbots in a webshop. As a customer, you may prefer human contact, because the chatbot gives you inadequate or strange answers. This could turn people against such applications and, by extension, against AI in general.”
What are your own fears for the future?
“The totalitarian tendencies in China and Russia give cause for concern. What Putin is doing now, he can do without artificial intelligence. But should AI come into the picture, things will probably get even worse. Facial recognition is one example of things a totalitarian state can use against its citizens. Research must go on, but its applications should probably be controlled.”
In your view, should AI be regulated by governments or by industry?
“My immediate answer would be governments... Otherwise there’s a risk that things are neglected. There must be national regulations to build on. But the question is whether a society where businesses do not take responsibility would be politically tenable. Large businesses must have an ethical stance of their own that they take seriously.”
He is very familiar with the corridors on Campus Valla. He is currently working on sorting thousands of documents from his many years as a researcher at Linköping University. He has had time to think. And he is fully convinced that moral considerations must be at the top of the agenda in IT development.
“Ethics means having approaches ensuring that you won’t have to regret anything later,” he says, letting the words sink in.