
Artificial Intelligence

How does ChatGPT deal with moral dilemmas?

ChatGPT fascinates users around the world. But what happens when you confront the bot with moral dilemmas, and how do users respond? We asked Associate Professor Andreas Ostermaier to explain.

By Marlene Jørgensen, 5/31/2023

[Graphic: AI and moral dilemmas]

What are the challenges when talking about AI and moral beliefs?

- ChatGPT makes AI accessible to everyone, and users are stunned by what it can do. It’s fun to chat with, but it also provides useful information. It is pretty good at doing your homework and solving exams.

- Users find ChatGPT entertaining and useful, and they think they are in control. However, ChatGPT influences your judgments and decisions. It may even make your decisions for you without you realizing it.

Users are more susceptible to ChatGPT’s influence if they’re more uncertain what to do in a moral dilemma and, thus, more in need of advice. That’s worrisome because ChatGPT gives you pretty much randomly the pros or cons. You could just as well toss a coin.

Andreas Ostermaier, Associate Professor

- ChatGPT is best described as a “stochastic parrot”. It doesn’t care whether what it tells you is true or false, right or wrong. It can’t take responsibility for anything, let alone for the decisions you make based on what it tells you.

- The key challenge for users is to take AI for what it is, a stochastic parrot, to make their own decisions, and to accept full moral responsibility for them. Unfortunately, that is easier said than done, because users don’t even realize how much they are influenced by AI.

You have conducted an experiment on ChatGPT and moral issues. What did you find?

- We found three things. First, ChatGPT doesn’t have a consistent moral position. If you ask it the same moral question twice, it may give you opposite advice. We asked ChatGPT several times whether it was right to sacrifice one life to save five. It sometimes argued for, sometimes against sacrificing one life. (A minimal way to reproduce this probe is sketched after this list.)

- Second, users are clearly influenced by ChatGPT. They don’t follow ChatGPT’s advice 100%, of course, but they make different judgments depending on what it tells them. If ChatGPT tells them that it is right to sacrifice one life, they are more likely to judge it is. If ChatGPT says it is not right to sacrifice one life, they are more likely to judge it is not.

- Third, and, we feel, most interestingly, users don’t realize how much ChatGPT influences them. When we asked the participants of our study whether they would have made the same judgment without ChatGPT’s advice, most thought they would.

- Put differently, they thought their judgments were not influenced by the advice. However, the judgments they say they would have made without the advice still differ depending on that advice. Hence, they fail to appreciate ChatGPT’s influence on them.
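For readers who want to try the first finding themselves, here is a minimal sketch of how one might probe ChatGPT’s consistency: pose the same dilemma repeatedly and tally the answers. This is not the study’s code; it assumes the `openai` Python package, an API key in the OPENAI_API_KEY environment variable, and an illustrative model name, and the keyword classifier is a crude stand-in for the hand-coding a real study would use.

```python
# A minimal sketch (not the study's code) of probing ChatGPT's moral
# consistency: ask the same dilemma repeatedly and tally the answers.
# Assumes the `openai` package, OPENAI_API_KEY set in the environment,
# and an illustrative model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DILEMMA = (
    "Is it right to sacrifice one person's life to save the lives of "
    "five others? Begin your answer with a clear yes or no."
)

def ask_once() -> str:
    """Pose the dilemma once and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[{"role": "user", "content": DILEMMA}],
        temperature=1.0,      # default sampling, so answers may vary
    )
    return response.choices[0].message.content

def classify(reply: str) -> str:
    """Crude keyword check on the start of the reply."""
    text = reply.strip().lower()
    if text.startswith("yes"):
        return "for sacrificing"
    if text.startswith("no"):
        return "against sacrificing"
    return "unclear"

if __name__ == "__main__":
    tally: dict[str, int] = {}
    for _ in range(10):
        label = classify(ask_once())
        tally[label] = tally.get(label, 0) + 1
    # If ChatGPT had a consistent position, one count would dominate.
    print(tally)
```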

To what extent are users influenced by ChatGPT’s advice on moral issues?

- It depends on the issue. Specifically, we asked our participants about one of two dilemmas: Is it right to push a stranger onto the tracks to stop a runaway trolley from killing five people? Is it right to switch the runaway trolley to a different set of tracks, where it will kill one person rather than five?

- ChatGPT’s answer influenced participants in both cases, but the influence was larger when it came to pushing the stranger onto the tracks. If ChatGPT argued against pushing the stranger, most of our participants thought it was wrong to do so; if it argued in favor, most thought it was the right thing to do.

- The dilemma of pushing the stranger is tougher. In both cases, you decide whether to sacrifice one life for five. Nonetheless, most people find it worse to push someone onto the tracks and kill that person than to pull a lever and have a person die as a result.

- We have a hunch that users are more susceptible to ChatGPT’s influence if they’re more uncertain what to do in a moral dilemma and, thus, more in need of advice. That’s worrisome because ChatGPT gives you pretty much randomly the pros or cons. You could just as well toss a coin.

What would you suggest we do to manage AI?

- Transparency is a common requirement. The call for transparency assumes that users will use AI responsibly if they know that it is AI they’re interacting with. We were transparent about ChatGPT’s role in the experiment, though. We even told participants that it is a chatbot that imitates human language, but that didn’t reduce its influence.

- There are two sides to the responsible use of AI. On the side of the AI, regulation can require AI to identify itself as such, to give balanced answers, and even to refuse to answer certain questions at all. However, you can’t make a chatbot perfectly safe; if you try, you may end up with a very boring and useless chatbot.

ChatGPT is best described as a “stochastic parrot”. It doesn’t care whether what it tells you is true or false, right or wrong. It can’t take responsibility for anything, let alone for the decisions you make based on what it tells you.

Andreas Ostermaier, Associate Professor

- On the side of users, it is imperative that they are trained to employ AI responsibly. To begin with, we need to enable users to understand the limitations of AI.

- For example, ask ChatGPT about the cons if it has told you about the pros, double-check with another bot, do your own search to verify the information it gives you, and maybe talk to other people. (A sketch of this ask-for-the-other-side habit follows below.)

- AI holds tremendous potential. We’ll have to put up with it, and I’m confident we can. If you want to lose weight, you don’t destroy your food, but you take control of what and how much you eat. We have to enable users, starting at school, to take control over AI.
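Ostermaier’s suggestion to ask for the other side can be made a habit. The sketch below, again assuming the `openai` package and an illustrative model name rather than any tool from the study, asks the bot for its answer and then explicitly requests the strongest counterarguments before the user forms a judgment.

```python
# A minimal sketch of the "ask for the other side" habit: whatever position
# the bot takes, explicitly request the strongest counterarguments before
# forming your own judgment. Same assumptions as above (`openai` package,
# OPENAI_API_KEY set, illustrative model name).
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-prompt chat request and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

question = "Is it right to sacrifice one life to save five?"
first_take = ask(question)
# Don't stop at the first answer: force the opposite perspective.
other_side = ask(
    "You answered the question '" + question + "' as follows:\n"
    + first_take
    + "\nNow give the strongest arguments for the opposite position."
)
print("First take:\n", first_take)
print("\nThe other side:\n", other_side)
```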

Read the full study, “ChatGPT’s inconsistent moral advice influences users’ judgment”, in Nature’s Scientific Reports.

Meet the researcher

Andreas Ostermaier is an Associate Professor at the Department of Business and Management. His research areas are Accounting, AI Ethics and Business Ethics.


Editing was completed: 31.05.2023