
WTF are AI hallucinations?

Since ChatGPT became mainstream at the end of 2022, it’s become clear that its outputs are not always 100% factually accurate.

In fact, people have been having fun with the generative artificial intelligence bot, created by OpenAI, testing how far it will go in making things up. For example, there was a recent trend where people would prompt the bot with a simple “who is” question to see what kind of biography it would invent for an individual. People shared the embellished bios on Twitter, complete with schools they never attended, awards they never received, incorrect hometowns and so on.

When the bot makes things up, that’s called a hallucination. And while GPT-4, OpenAI’s newest language model, is 40% more likely than its predecessor to produce factual responses, according to OpenAI, it’s not all the way there.

We spoke to experts to learn more about what AI hallucinations are, the potential dangers and safeguards that can be put into place.

What exactly are AI hallucinations?

AI models can generate unexpected and often bizarre outputs that aren’t supported by their training data, particularly when they don’t know the answer. When a generative AI bot – like ChatGPT – hasn’t ingested the data it needs to give the right answer, whatever it spits out as the closest approximation of the truth it can muster is called a hallucination. And they’re currently quite common.

“The hallucinations start when you have a hole in the knowledge and the system goes with the most statistically close information,” said Pierre Haren, CEO and co-founder at software development company Causality Link. “It’s the closest it has, but not close enough.”

Hallucinations can be caused by various factors, but the most common is when the model has limited training data on a topic and hasn’t been taught to say “I don’t know that answer.” Instead of saying that, it makes something up that seems like it could plausibly be the answer, and it works hard to make it seem realistic.

“The hallucinations start when you have a hole in the knowledge and the system goes with the most statistically close information.”
Pierre Haren, CEO and co-founder at software development company Causality Link.

“It’s saying something very different from reality, but it doesn’t know it’s different from reality as opposed to a lie,” said Haren. “The question becomes, what is the truth for a ChatGPT brain? Is there a way the system knows if it’s deviating from the truth or not?”

Arvind Jain, CEO of AI-powered workplace search platform Glean, says that it’s important to remember “the most likely answer isn’t always the correct answer.” 

However, that is dangerous because it can lead to the spread of misinformation if the output isn’t fact-checked by a human.

What are the potential dangers of AI hallucinations?

Spreading misinformation is at the top of the list here. If an AI system produces hallucinations that seem realistic but are completely false, they could spread on a large scale.

Additionally, if it gives you incorrect information about something high stakes, the error could have a domino effect. That’s why experts say to avoid using it for anything that can’t afford mistakes.

“Right now, I don’t think creators of large language models are excited about the idea that somebody would drive a nuclear power plant with their system,” said Haren. “Right now, there is so much to be done with helping complete small tasks. I’m not sure they want to get into the business of medical advice or stuff like that.”

But even in a work situation, if it outputs misinformation that is then used for a marketing campaign, it could ruin a company’s reputation. Or, if you are asking for financial advice and it guides you in the wrong direction, it could also have damaging knock-on effects.  

All in all, AI hallucinations highlight the need for responsible AI development and deployment that takes into account the broader ethical and safety implications of these systems.

What safeguards can be put in place?

“How do I ensure that this response is factually correct?” said Cliff Jurkiewicz, vp of strategy at global HR tech company Phenom. “That’s the question that is left for human beings. But technology companies are putting safeguards in place that allow that relationship to be explained in a transparent way.”

By slowing down, precautions can be put in place to ensure that people are using AI safely. Jain says it’s especially important to take a step back because when a large language model is correct most of the time, humans will trust that it’s right all the time, when that’s not necessarily the case.

“When models are right, people trust them,” said Jain. “That’s the challenge with AI. How do you prevent that phenomenon from happening where humans build too much trust?”

One step is ensuring these models cite the sources their output information comes from. Some AI providers are already strict about that. For example, DISCO provides AI-powered legal solutions that include legal document review, legal hold and case management for enterprises, law firms, legal services providers and governments.

“It’s [providing sourcing] really critical,” said Kevin Smith, chief product officer at DISCO. “I always want to understand where the information came from. If I’m studying a case, I want to read the case law from where the information came from so I can understand the nuance of the answer. When you have high stakes use cases, where there is nuance and expert craft involved, being able to cite sources is really critical.”

Jurkiewicz agrees. “If you are going to provide an output, there should be an opportunity to examine the homework,” said Jurkiewicz. “My 7-year-old can get the answer really quick to a math problem, but he has to provide the work. How did you get to that result? Generative AI should be no different. We should hold it to the same standard as we do human beings. You need to show us the work.”
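Neither company has published the specifics of how its systems cite sources, but the general idea of “showing the work” can be sketched in a few lines: restrict the model to a set of supplied documents and ask it to cite which one backs each claim. Here is a minimal sketch, assuming the OpenAI Python library (pre-1.0 chat interface); the documents, model name and prompt wording are hypothetical illustrations, not any vendor’s actual implementation.

```python
# Minimal sketch of source-grounded prompting (illustrative only): the model is
# restricted to supplied documents and asked to cite which one backs each claim.
# Assumes the pre-1.0 OpenAI Python library and an OPENAI_API_KEY env variable.
import openai

documents = {
    "doc1": "Acme Corp reported Q1 2023 revenue of $12M, up 8% year over year.",
    "doc2": "Acme Corp opened its first European office in Dublin in March 2023.",
}

# Label each document so the model has an ID it can cite.
context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in documents.items())

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # hypothetical model choice
    temperature=0,          # keep the answer close to the supplied sources
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using only the documents below. After each claim, cite "
                "the document ID it came from, e.g. [doc1]. If the documents do "
                "not contain the answer, say so instead of guessing.\n\n" + context
            ),
        },
        {"role": "user", "content": "How did Acme Corp do in the first quarter of 2023?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```

The point of the sketch is the constraint, not the code: when every claim has to point back to a labeled source, a human reviewer can check the homework.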

A good motto to follow, experts say, is to trust and verify. Without the verify piece, it circles back to spreading misinformation.

“My 7-year-old can get the answer really quick to a math problem, but he has to provide the work. How did you get to that result? Generative AI should be no different. We should hold it to the same standard as we do human beings. You need to show us the work.”
Cliff Jurkiewicz, vp of strategy at global HR tech company Phenom.

“Unless we have that level of verification, we will allow it to become very much like the Mandela Effect, it will become like the truth simply because no one’s verified it and everyone is saying it and repeating it,” said Jurkiewicz. “That’s the danger. It can be very serious and I think you’re hearing that from a lot of technology pioneers who are encouraging us to slow down.”

Another simple tip is asking the AI to tell you when it doesn’t know an answer. Smith says that a lot of large language models are told that they don’t get credit if they don’t give an answer, which is why they always try to spit something out. 

“You have to specifically instruct it that a non-answer is an acceptable answer if the information cannot be found in its training data and the context you provided,” said Smith. “There are lots of models that have started getting good at this and there are ways to help with the hallucination issue.”
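What that instruction looks like in practice varies by product, but a minimal sketch, again assuming the pre-1.0 OpenAI Python library with hypothetical prompt wording and model name, might read something like this:

```python
# Minimal sketch of telling a model that a non-answer is acceptable (illustrative
# only). The system prompt explicitly permits "I don't know" instead of guessing.
import openai

system_prompt = (
    "You are a careful assistant. If the answer is not clearly supported by the "
    "context the user provides or by information you are confident about, reply "
    'exactly: "I don\'t know." Do not guess or invent details.'
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # hypothetical model choice
    temperature=0,
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Which awards did Jane Doe win in 2021?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```

None of this eliminates hallucinations, but it gives the model an explicit alternative to making something up.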

Additionally, if a human does catch a hallucination or error, there should be an easy mechanism for them to report that it was incorrect. On ChatGPT, you can give a response a thumbs down and provide feedback on how it can be improved, including flagging whether it was harmful or unsafe, untrue or unhelpful.

“Taking a responsible approach to it, through those methods, is a mandatory and even call it an ethical and responsible step, to ensure that human beings stay at the center of these technologies,” said Jurkiewicz.