How are Humans and AI Hallucinations Different?

There has been a lot of buzz about the release of ever more capable large language models (LLMs) like GPT-3.5. However, users' trust in these models has dwindled as they have discovered that, much like humans, the models can make mistakes. An LLM that produces inaccurate information is said to be "hallucinating," and a major research effort is aimed at reducing this effect and understanding how it affects the accuracy of the LLMs we build.

By recognizing the link between AI's hallucinatory tendencies and our own, researchers can begin to design wiser AI systems that may eventually help reduce human error. It's no secret that people fabricate information. Sometimes we do it on purpose, and sometimes we don't. The latter is the product of cognitive biases, also known as heuristics: mental shortcuts developed through previous experience.

As a result, our brains rely on learned associations to fill in the blanks and respond promptly to whatever question or quandary is before us. In other words, based on our limited information, our brains guess what the correct answer might be. This is referred to as "confabulation" and is an example of human bias.

However, these biases can lead to poor judgment, causing us to overlook mistakes and even act on erroneous information.

The halo effect is another key heuristic, in which our first impression of something colors our later encounters with it. There's also the fluency bias, which describes how we favor information that's presented in an easy-to-read way.

The bottom line is that human thinking is frequently shaped by cognitive biases and distortions, and these "hallucinatory" tendencies occur largely outside of our awareness.

How are hallucinations treated in an LLM environment?

Hallucinating means something different in an LLM context. An LLM is not trying to conserve scarce mental resources in order to make sense of the world more efficiently. Here, "hallucinating" simply refers to a failed attempt to predict a suitable response to an input. Nonetheless, there is some overlap in how humans and LLMs hallucinate, because LLMs, too, are "filling in the gaps."

LLMs generate a response by predicting which word in a sequence is most likely to appear next, given what has come before and relationships learned through training.
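To make this concrete, here is a minimal sketch of next-token prediction, using the small open-source GPT-2 model via the Hugging Face transformers library as a stand-in for a larger LLM (the model choice and prompt are illustrative assumptions, not a description of any particular production system):

```python
# Minimal sketch: an LLM scores every token in its vocabulary as a candidate
# "next word" given the text so far, then continues with one of the likeliest.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# Convert the scores for the final position into probabilities over the vocabulary.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

# The model ranks continuations by statistical likelihood, not by factual truth.
for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id.item()):>12}  p = {prob:.3f}")
```

The key point the sketch illustrates is that the model simply ranks continuations by how likely they are given the training data; nothing in this process checks whether the chosen continuation is true, which is exactly where hallucinations creep in.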

LLMs, like humans, attempt to predict the most likely response. Unlike humans, though, they do this without understanding what they're saying. This is how they end up producing rubbish.

There are several reasons why LLMs hallucinate. An important one is being trained on faulty or insufficient data. Other factors include how the system is designed to learn from that data and how its behavior is reinforced through additional human feedback.
