Feature

When is something artificial intelligence?

Is artificial intelligence really as intelligent as we think? PhD student Jonas Vistrup from the Department of Mathematics and Computer Science, SDU, challenges our perception of AI in this feature article. From ELIZA to ChatGPT, gain insight into what really makes a machine intelligent.

By Jonas Vistrup, 3/13/2024

AI is everywhere. AI is discussed in the news, used in businesses, and completes students’ assignments. Companies around the world are trying to incorporate AI into almost every part of the job market. AI has become a buzzword that is dropped into countless business conversations and company descriptions almost exclusively to persuade investors to throw money at the idea being sold. To understand what is actually being sold, one needs to be able to see through the buzzword AI, and that requires an understanding of the word.

AI is an acronym for Artificial Intelligence. Thus, it refers to something artificial that is intelligent. Humans are intelligent but not artificial, and paperclips are artificial but not intelligent. Since everyone agrees that technologies like computers, phones, calculators, or robot vacuums are artificial, the debate about what is artificial is somewhat moot. When something is intelligent, however, is an entirely different matter.

To answer this question, we can look at the examples of artificial intelligence that are available today. ChatGPT is the most prominent, and many are also familiar with image-generating programs like DALL-E and Midjourney.

These types of programs are called data-driven artificial intelligence because they are trained on data. As a rule of thumb, the more data they are trained on, the better they become at their tasks. Therefore, it can be said that data-driven artificial intelligence learns from data. And if it can learn, it is intelligent. One can take a very stringent version of this approach: if a program learns, it is intelligent; if it does not learn, it is not intelligent.
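As a rough illustration of what "learning from data" means, here is a minimal sketch in Python of a data-driven program: a nearest-neighbour guesser with made-up example data. It contains no hand-written rules about its subject, only examples, and adding more representative examples typically makes its answers better.

```python
# A toy illustration of a data-driven program: it contains no hand-written
# rules about animals, only examples, and its answer is derived entirely from
# those examples. More (representative) examples generally give better answers.
# The data and names are made up for this illustration.

def nearest_neighbour(examples: list[tuple[float, str]], length: float) -> str:
    """Guess a label for `length` by copying the closest training example."""
    return min(examples, key=lambda example: abs(example[0] - length))[1]

# Toy training data: body length in metres -> animal.
training_data = [(0.3, "cat"), (0.5, "dog"), (1.7, "human"), (2.5, "horse")]
print(nearest_neighbour(training_data, 1.5))   # prints "human"
```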


This definition can be very narrow. Take one’s elderly family members, for example: one might find that they need help almost every time they use technology more complex than a remote control. It seems as though they have reached an age where it is difficult to learn new things. But if one starts to call them unintelligent because they have difficulty learning, one can be sure not to be invited home for Easter.

Even if the definition is only applied to the world of programs, it still runs into problems. In 1997, the program Deep Blue defeated the world’s then-best chess player, Garry Kasparov. Deep Blue is not data-driven: nothing in it was learned from data.

The techniques used to build Deep Blue are far from the techniques used to build ChatGPT. However, it would be odd to call Deep Blue non-intelligent. Deep Blue was better at chess than the world champion, and we can hardly call Kasparov’s chess skills unintelligent.
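Programs like Deep Blue draw their strength from searching the game tree and judging positions with hand-written rules rather than from anything learned. As a rough sketch of that idea, here is plain minimax on a toy take-away game in Python; the game and all names are invented for illustration, and a real chess engine is far more elaborate.

```python
# A toy illustration of search-based (non-data-driven) play: plain minimax on
# a small take-away game. Nothing is learned from data; the program's skill
# comes purely from looking ahead through the game tree. This is only an
# illustration of the general idea, not how Deep Blue itself was built.

def best_move(stones: int, my_turn: bool = True) -> tuple[int, int]:
    """Return (outcome, move) for a pile of `stones` under optimal play.

    Toy rules: players alternate removing 1-3 stones; whoever takes the last
    stone wins. Outcome is +1 if the player to move first wins, -1 otherwise.
    """
    if stones == 0:
        # The previous player took the last stone and has already won.
        return (-1 if my_turn else 1), 0
    best = None
    for move in (1, 2, 3):
        if move > stones:
            break
        outcome, _ = best_move(stones - move, not my_turn)
        better = best is None or (outcome > best[0] if my_turn else outcome < best[0])
        if better:
            best = (outcome, move)
    return best

outcome, move = best_move(10)
print(f"With 10 stones, take {move} stones (predicted outcome: {outcome:+d})")
```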

But why do we even call ChatGPT artificial intelligence? There is a worldwide consensus that ChatGPT is artificial intelligence. What creates this consensus?

If you have used ChatGPT before, you have probably experienced the feeling that you are writing with something intelligent. ChatGPT feels human and smart. Can this feeling perhaps be a good indicator of what is and is not AI?

In 1966, the program ELIZA was released. ELIZA performed a kind of basic psychotherapy: it communicated with the user through text, the user could talk about their problems, and ELIZA would ask follow-up questions about them. ELIZA seemed so realistic that several test users did not believe they were communicating with a program; they were convinced they were writing with another human.

These users did not merely have the feeling that ELIZA was intelligent; they believed they were talking to a human intelligence. In reality, ELIZA’s messages mostly consisted of taking part of the last thing the user wrote and reflecting it back as a question, chosen from a prepared catalog of questions.

For example, a user might write "My partner says I am depressed all the time" to which ELIZA would respond "I am sorry to hear that you are depressed" or "Can you explain why you are depressed?".
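As a rough illustration of that mechanism, here is a minimal ELIZA-like responder sketched in Python. The patterns and replies are invented for this example and are far simpler than the real ELIZA’s script, but the principle is the same: fragments of the user’s message are reflected back as questions from a prepared catalog.

```python
import random
import re

# Illustrative pattern catalog: each entry pairs a regular expression with
# reply templates. The real ELIZA used a much larger, more elaborate script;
# everything here is made up for this example.
PATTERNS = [
    (re.compile(r"\bI am (\w+)", re.IGNORECASE),
     ["I am sorry to hear that you are {0}.",
      "Can you explain why you are {0}?"]),
    (re.compile(r"\bmy (\w+) says\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

def eliza_like_reply(message: str) -> str:
    """Reflect fragments of the user's message back as a prepared question."""
    for pattern, templates in PATTERNS:
        match = pattern.search(message)
        if match:
            return random.choice(templates).format(*match.groups())
    # Fallback when no pattern matches: a generic prompt to keep talking.
    return "Please, go on."

print(eliza_like_reply("My partner says I am depressed all the time"))
# Prints e.g. "I am sorry to hear that you are depressed."
```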

If these users learned how ELIZA works, they would probably no longer see the program as intelligent. ELIZA does not change when one understands its mechanics, but our feeling that it is intelligent does. This is not an isolated case.

In the field of artificial intelligence, people are often willing to call something artificial intelligence, but only until they understand how it works. The feeling works much like a magic show: it feels magical only as long as one does not know how the trick was performed. If one learns that the card was up the sleeve, the magic disappears, and likewise, once one understands the program, its magic disappears.

Fortunately, there is a more practical approach that combines two viewpoints: intelligence as something humans have, and intelligence as something that is clever. Something can be called intelligent if it either thinks or behaves like humans, and it can be called intelligent if it either thinks or behaves cleverly.

Do not decide whether a program is intelligent based on whether it resembles ChatGPT; instead, think about what the program does. Does it act cleverly? Does it behave like a human? Is what it does useful? In this way, one can break free of the shackles of the buzzword AI and instead think critically about what is being offered.

ChatGPT was the first program to break the ice and made us aware that artificial intelligence has entered everyday life. Now that the ice is broken, similar programs are flooding in.

This rapid development has taken the world by storm. In response to the development, the EU is in the process of introducing the AI Act, which is intended to protect Europe against the misuse of artificial intelligence.

This law has been discussed many times in the EU Parliament, partly because of the question of what AI encompasses. The EU has ended up with a definition that is even more inclusive than the one presented here.

Why is that? Because when we think about artificial intelligence, it is not enough to think about today’s artificial intelligence; we must think about all possible future artificial intelligence.

This article was also published on Erhverv+ (in Danish only).

Meet the researcher

Jonas Vistrup is a PhD student at the Department of Mathematics and Computer Science at SDU. Jonas is affiliated with the section Artificial Intelligence, Cybersecurity and Programming Languages, where he researches artificial intelligence for traffic cases. He is also a member of SDU’s Digital Democracy Centre.

