AI hallucinations are a phenomenon in which AI models generate information that is not grounded in the input data, but instead reflects the model's own biases and assumptions. This can lead to inaccurate or misleading results.
The term "hallucination" was first used in the context of AI by researchers who observed that some models were generating responses that were not grounded in reality. For example, in one study, a language model was asked to describe a picture of a cat, but instead generated a response about a cat playing the piano.
AI hallucinations can have serious consequences, such as in medical diagnosis, where a model might misinterpret a patient's symptoms and suggest an incorrect treatment. Models have also been found to generate false information about a patient's medical history, which could lead to a misdiagnosis.
AI hallucinations are often the result of the model's overfitting to the training data, which can lead to the model generating responses that are not representative of the real world.
What Are AI Hallucinations?
AI hallucinations are a type of error that can occur in artificial intelligence systems.
They happen when a chatbot or AI content generator provides information that is not based in reality but presents it as fact.
For example, a chatbot might confidently give a factually inaccurate answer, or an AI content generator might fabricate details and present them as truth.
AI hallucinations can have serious consequences, such as spreading misinformation and eroding trust in AI systems.
Types of AI Hallucinations
AI hallucinations can manifest in various forms, including sentence contradictions, prompt contradictions, factual contradictions, and irrelevant or random hallucinations.
These types of hallucinations can be quite puzzling, and they often arise from the limitations and biases of the AI tools we use. For instance, an LLM might generate a sentence that contradicts a previous sentence, or it might produce fictitious information that's presented as fact.
Here are some examples of AI hallucinations:
- Sentence contradiction: An LLM generates a sentence that contradicts a previous sentence.
- Prompt contradiction: A sentence contradicts the prompt used to generate it.
- Factual contradiction: Fictitious information is presented as a fact.
- Irrelevant or random hallucinations: Random information with little or no relation to the input is generated.
Prompt Contradictions
Prompt contradictions are a type of AI hallucination that can be quite frustrating. They occur when an AI tool generates a response that doesn't match the prompt given, sometimes completely ignoring it.
One example of a prompt contradiction is asking an LLM a question and getting a completely different answer than you expected. This can happen for various reasons, including the tool generating responses that are only loosely related to the input.
Prompt contradictions are closely related to sentence contradictions, in which an LLM generates a sentence that contradicts one of its own earlier sentences. The difference lies in what the output contradicts:
- Sentence contradiction: the output contradicts a previous sentence in the same response.
- Prompt contradiction: the output contradicts, or simply ignores, the prompt it was given.
In many cases, prompt contradictions can be caused by the way prompts are encoded, which can lead to nonsensical outputs in the generated text. Understanding the reasons for prompt contradictions is essential to addressing this issue and improving AI tools.
Object Detection
Object detection can be a tricky business, especially when it comes to adversarial hallucinations. Various researchers have classified these hallucinations as a high-dimensional statistical phenomenon, or attributed them to insufficient training data.
In object detection, some "incorrect" AI responses classified by humans as "hallucinations" may actually be justified by the training data. For example, an AI may detect tiny patterns in an image that a human wouldn't notice, even if it looks like an ordinary image of a dog to us.
This highlights the importance of understanding the limitations of AI training data. As Wired noted in 2018, consumer gadgets and automated systems are susceptible to adversarial attacks that can cause AI to hallucinate. This can lead to some pretty surprising results, like a stop sign rendered invisible to computer vision.
The models used in object detection can be biased toward superficial statistics, which means adversarial training may not be robust in real-world scenarios. Even an AI trained on a large dataset may still struggle to accurately detect objects in certain situations.
It's not just images that can be manipulated - audio clips can also be engineered to sound innocuous to humans, but be transcribed as something entirely different by software. For example, an audio clip might be transcribed as "evil dot com" when it sounds harmless to us.
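To make the adversarial idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. The toy classifier, random "image", label, and epsilon value are all illustrative stand-ins, not a real deployed vision system.

```python
# Minimal FGSM sketch in PyTorch: nudge an input in the direction that increases
# the loss, which can flip the predicted class while the change stays tiny.
# The model, "image", and label below are illustrative stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in image
true_label = torch.tensor([3])                         # stand-in label
epsilon = 0.01                                         # perturbation budget

loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```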
Text-to-Audio Generative
Text-to-audio generative AI, which includes text-to-speech (TTS) synthesis, can produce inaccurate and unexpected results.
These inaccuracies can be quite surprising, and I've seen firsthand how they catch people off guard.
Inaccurate results can range from slight mispronunciations to complete misinterpretations of the original text.
Causes of AI Hallucinations
AI hallucinations are a fascinating phenomenon, and understanding their causes is crucial to developing more reliable and trustworthy AI systems. AI hallucinations occur when a model generates information that is not supported by its training data or by the input it was given.
The main data-related cause of hallucination is source-reference divergence, which can arise as an artifact of heuristic data collection or from the nature of some natural language generation (NLG) tasks. When a model is trained on divergent data, it learns to generate text that is not necessarily faithful to the provided source.
Poor data quality is a significant contributor to hallucinations. Hallucinations might occur when there is bad, incorrect, or incomplete information in the data used to train the LLM. LLMs rely on a large body of training data to produce output that's relevant and accurate to the user who provided the input prompt.
Generative AI models function like advanced autocomplete tools, designed to predict the next word or sequence based on observed patterns. Their goal is to generate plausible content, not to verify its truth. This can lead to content that sounds reasonable but is inaccurate.
Pre-training of models on a large corpus can result in the model memorizing knowledge in its parameters, creating hallucinations if the system is overconfident in its hardwired knowledge. In systems such as GPT-3, an AI generates each next word based on a sequence of previous words, causing a cascade of possible hallucinations as the response grows longer.
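To make the "advanced autocomplete" behavior concrete, the sketch below (assuming the Hugging Face transformers library and the public GPT-2 checkpoint) prints the model's most likely next tokens for a prompt. Nothing in this loop checks whether a continuation is true; the model only ranks what is statistically plausible.

```python
# Minimal sketch: inspect a causal language model's next-token probabilities.
# Assumes the Hugging Face `transformers` library and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first image of an exoplanet was taken by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# The model ranks statistically plausible continuations; it never verifies facts.
for prob, token_id in zip(top.values, top.indices):
    print(f"p={prob.item():.3f}  {tokenizer.decode([token_id.item()])!r}")
```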
Here are some of the key factors that contribute to AI hallucinations:
- Poor data quality
- Limitations of generative models
- Inherent challenges in AI design
- Insufficient training data
- Improperly encoded prompts
Examples
Google's chatbot Bard, now called Gemini, incorrectly claimed that the James Webb Space Telescope took the first image of a planet outside the solar system. In fact, the first images of an exoplanet were taken in 2004, according to NASA, and the James Webb Space Telescope was not launched until late 2021.
The claim appeared in a February 2023 promotional video for the chatbot, which at the time was still branded as Bard.
Meta demoed Galactica, an open-source LLM trained on millions of pieces of scientific information, in late 2022. The system generated inaccurate, suspicious, or biased results, and many users reported that it invented citations and research when prompted to perform a literature review.
OpenAI's ChatGPT has also been embroiled in numerous hallucination controversies since its public release in November 2022. In June 2023, a radio host in Georgia brought a defamation suit against OpenAI, accusing the chatbot of making malicious and potentially libelous statements about him.
ChatGPT also fabricated a story about a real law professor, alleging that he harassed students on a school trip, which never happened. This kind of misinformation can be damaging to the people involved, through no fault of their own.
Generative AI tools like ChatGPT, Copilot, and Gemini have all been found to present fabricated data as if it were authentic; outputs like these are what earned the name "hallucinations."
Prevention and Detection
To minimize the occurrence of AI hallucinations, users can use clear and specific prompts, which can guide the model to provide the intended and correct output.
Filtering and ranking strategies, such as tuning the temperature parameter or using Top-K, can also help reduce hallucinations. Multishot prompting, where users provide several examples of the desired output format, can help the model accurately recognize patterns and generate more accurate output.
One way to prevent AI hallucinations is to let the tool know what output you don't want to get. This can be done by asking AI to exclude certain facts or data.
Researchers have proposed various mitigation measures, including getting different chatbots to debate one another until they reach consensus on an answer, and actively validating the correctness of the model's output using web search results.
Here are some common mitigation methods:
- Data-related methods: building a faithful dataset, cleaning data automatically, and information augmentation, i.e., adding external information to the inputs.
- Model and inference methods: changes in the architecture, changes in the training process, and post-processing methods that can correct hallucinations in the output.
By following these tips and staying vigilant, you can reduce the occurrence of AI hallucinations and make the most out of AI-generated content.
How to Spot
Spotting AI hallucinations can be a challenge, but there are ways to catch them. Most AI hallucinations are spotted because they don't pass the "common sense" test.
For example, an AI might confidently tell you that you can cross the English Channel on foot, but it's not hard to realize that's impossible. Asking a follow-up question can sometimes help: the AI may apologize and admit it gave the wrong answer. The best way to spot AI hallucinations, however, is to do your own fact-checking.
Credible resources like books, news articles, and academic papers can help you verify the accuracy of AI-generated information. Our respondents agree that cross-checking with credible resources is the best way to notice if something is off with AI responses. To minimize the occurrence of hallucinations, you can also check out the tips in the next section.
Here are some common characteristics of AI hallucinations that might raise a red flag:
- Impossible or implausible scenarios
- Lack of credible sources
- Inconsistent or contradictory information
- Unusual or unexplained facts
How to Prevent
Prevention is key when it comes to AI hallucinations. One way to prevent them is by using clear and specific prompts. This can guide the model to provide the intended and correct output. Some examples of clear and specific prompts include providing additional context and using unambiguous language.
Filtering and ranking strategies can also help minimize hallucinations. This can be done by tuning parameters such as the temperature parameter, which controls output randomness, and the top-K parameter, which restricts sampling to the K most likely next tokens.
Multishot prompting is another technique that can help prevent hallucinations. This involves providing several examples of the desired output format to help the model accurately recognize patterns and generate more accurate output.
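As a rough sketch of what those knobs look like in practice, the example below uses the Hugging Face transformers text-generation pipeline with the public GPT-2 checkpoint. The prompt, model choice, and parameter values are illustrative; swap in whatever model and serving stack you actually use.

```python
# Sketch: conservative decoding settings plus a multishot (few-shot) prompt.
# Assumes the Hugging Face `transformers` library and the public "gpt2" checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Multishot prompt: a few examples of the exact format we expect back.
prompt = (
    "Q: What is the capital of France?\n"
    "A: Paris\n"
    "Q: What is the capital of Japan?\n"
    "A: Tokyo\n"
    "Q: What is the capital of Canada?\n"
    "A:"
)

output = generator(
    prompt,
    max_new_tokens=5,
    do_sample=True,
    temperature=0.3,  # lower temperature -> less random, more repeatable output
    top_k=40,         # sample only from the 40 most likely next tokens
)
print(output[0]["generated_text"])
```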
Grounding the model with relevant data is also crucial. This can be done by training the model on industry-specific data, which can enhance its understanding and enable it to generate answers based on context rather than just hallucinating.
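One common way to ground a model at inference time is to put the relevant data directly into the prompt and instruct the model to answer only from it. Below is a minimal, illustrative sketch; the documents and prompt wording are placeholders for your own retrieval layer and model client.

```python
# Illustrative sketch of grounding at inference time: the model is told to answer
# only from the supplied context instead of its parametric memory. The documents
# here are hard-coded placeholders for a real retrieval layer.
documents = [
    "The first images of an exoplanet were taken in 2004.",
    "The James Webb Space Telescope launched in December 2021.",
]
question = "When were the first images of an exoplanet taken?"

context = "\n- ".join(documents)
grounded_prompt = (
    "Answer the question using ONLY the context below. If the context does not "
    'contain the answer, reply "I don\'t know."\n\n'
    f"Context:\n- {context}\n\n"
    f"Question: {question}\nAnswer:"
)

print(grounded_prompt)  # send this prompt to whichever model endpoint you use
```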
Here are some common mitigation methods:
- Building a faithful dataset
- Cleaning data automatically
- Information augmentation, i.e., adding external information to the inputs
- Changes in the architecture, such as modifying the encoder, attention, or decoder
- Changes in the training process, such as using reinforcement learning
- Post-processing methods that can correct hallucinations in the output
Researchers have also proposed using web search results to validate the correctness of the model's output. This can be done by actively checking low-confidence parts of the model's output against web search results. Additionally, having different chatbots debate one another until they reach consensus on an answer can also help mitigate hallucinations.
Mitigation Methods
Mitigation methods are being developed to address the issue of AI hallucinations.
Researchers have proposed various methods to mitigate hallucinations, which can be categorized into data-related methods and modeling and inference methods.
Data-related methods include building a faithful dataset, cleaning data automatically, and information augmentation by adding external information to the inputs.
Another approach is to get different chatbots to debate each other until they reach consensus on an answer.
The web search mitigation method actively validates the model's output against web search results, and an extra layer of logic-based rules can be added that uses differently ranked web pages as a knowledge base.
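The snippet below is an illustrative sketch of that idea, not a production fact-checker: it weights keyword overlap between a claim and each search snippet by the result's rank, so higher-ranked pages count for more. The hard-coded results stand in for a real search API, and real systems would use stronger checks such as entailment models.

```python
# Illustrative sketch of rank-weighted web-search validation. The hard-coded
# results stand in for a real search API; keyword overlap stands in for a
# stronger check such as an entailment model.
def support_score(claim_terms: set[str], search_results: list[tuple[int, str]]) -> float:
    """Sum rank-weighted keyword overlap between a claim and each result snippet."""
    score = 0.0
    for rank, snippet in search_results:
        overlap = len(claim_terms & set(snippet.lower().split()))
        score += overlap / rank  # higher-ranked pages count for more
    return score

claim = "the james webb space telescope took the first image of an exoplanet"
results = [
    (1, "The first images of an exoplanet were taken in 2004"),
    (2, "The James Webb Space Telescope launched in December 2021"),
]

print(f"support score: {support_score(set(claim.split()), results):.2f}")
# Downstream logic would flag low-scoring, low-confidence claims for review.
```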
Training or reference guiding for language models is another category, which involves strategies like employing control codes or contrastive learning to guide the generation process.
Nvidia's NeMo Guardrails, launched in 2023, can be configured to hard-code certain responses via script instead of leaving them to the LLM.
Tools like SelfCheckGPT and Aimon have emerged to aid in the detection of hallucination in offline experimentation and real-time production scenarios.
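As a simplified sketch of the sampling-consistency idea behind detectors such as SelfCheckGPT (this is an illustration of the concept, not the library's actual API), one can resample several answers and measure how well they agree with the original statement; low agreement suggests a possible hallucination.

```python
# Simplified sketch of sampling-consistency hallucination detection: if
# independently resampled answers disagree with the original statement, the
# statement is more likely hallucinated. Lexical similarity is used here only
# for illustration; real detectors use stronger measures (e.g., NLI models).
from difflib import SequenceMatcher

def consistency_score(statement: str, samples: list[str]) -> float:
    """Average lexical similarity between the statement and resampled answers."""
    sims = [SequenceMatcher(None, statement.lower(), s.lower()).ratio() for s in samples]
    return sum(sims) / len(sims)

statement = "The first image of an exoplanet was taken in 2004."
resampled = [
    "Astronomers captured the first exoplanet image in 2004.",
    "The first exoplanet was directly imaged in 2004.",
    "The James Webb Space Telescope took the first exoplanet image.",  # the odd one out
]

print(f"consistency={consistency_score(statement, resampled):.2f}")
# A low consistency score would be flagged for human review.
```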
Researchers have shown that a generated sentence is hallucinated more often when the model has already hallucinated in its previously generated sentences for the input.
Impact and Risks
AI hallucinations can cause real-world consequences for businesses and individuals alike, from eroding trust in legitimate information sources to creating legal liability and compliance failures. The sections below look at both of these risks.
Loss of Trust
The echoes of AI hallucinations can have a profound impact on our trust in information sources. Fill the internet with enough misinformation, and you have a self-perpetuating cycle of inaccurate content.
This pollution of the information ecosystem makes it harder for us to trust the things we should be able to trust. It's like trying to find a needle in a haystack: this kind of misinformation is really hard to detect and mitigate.
According to Bender, these systems produce non-information that looks authoritative and like it's produced by humans. This mixing of synthetic non-information with legitimate sources is a real problem.
If people don't think generative AI outputs are factual or based on real data, they may avoid using the technology altogether. That could be bad news for companies innovating with and adopting it.
If we don't solve hallucinations, it's definitely going to hurt adoption, as Orlick pointed out. This is a serious issue that needs to be addressed.
Legal and Compliance Risks
One third of businesses across all industries already use AI in some form, and the numbers are rising.
If AI tools hallucinate, businesses might be in for some trouble, because incorrect output can cause real-world consequences.
Businesses might have to deal with legal liability due to AI hallucinations.
Many industries have strict compliance requirements, and a hallucinating AI tool can put a business in violation of those standards, leading to significant losses.
The potential harm caused by AI hallucinations affects both businesses and individuals.
To minimize the potential harm, we need to learn to spot when AI hallucinates.
History and Current Status
History
The concept of hallucinations in AI has a fascinating history. The term "hallucination" was first used in 2000, in a paper published in the Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, where it carried a positive meaning in computer vision.
In 2018, Google DeepMind researchers applied the term to machine translation errors, describing hallucinations as "highly pathological translations that are completely untethered from the source material." This marked a shift toward viewing hallucination as a negative phenomenon.
A 2022 report on hallucinations in natural language generation highlighted the tendency of deep learning-based systems to "hallucinate unintended text." This has significant implications for real-world applications.
The release of ChatGPT in late 2022 made large language models (LLMs) more accessible, but it also put the issue of hallucinations in AI squarely in the spotlight.
Current Status
AI companies are actively working on addressing AI hallucinations, a problem that's been widespread and worrisome. OpenAI has announced a new method to tackle the issue, called "process supervision", which trains AI models to reward correct reasoning steps instead of just the end-answer.
This approach has the potential to drastically lower the rates of hallucinations, making AI models more capable of solving challenging reasoning problems. Research has shown that users have encountered AI hallucinations in generative AI tools of all kinds, from ChatGPT and Bard to Siri and Alexa.
The issue is concerning, as most people are either intrigued, annoyed, or anxious about AI hallucinations, which can undermine trust in AI. Users have experienced AI hallucinations in various forms, from harmless word mishaps to wrong math calculations to concerning factual inconsistencies.
Fortunately, AI hallucinations can usually be spotted when something doesn't seem realistic, and people often intuitively disregard or cross-check with other resources. However, unless actions are taken, AI hallucinations may become more intricate and harder to spot, which could mislead many more people.
Terminologies and Concepts
The term "hallucination" is used to describe a phenomenon in AI, but it's not without controversy. Statistician Gary N. Smith argues that LLMs "do not understand what words mean" and that the term "hallucination" anthropomorphizes the machine.
Some people use the term "hallucination" to describe a model's tendency to invent facts in moments of uncertainty. OpenAI defines it as "a tendency to invent facts in moments of uncertainty" and also as "a model's logical mistakes".
The term "hallucination" is often used interchangeably with "confabulation", which is a process that involves "creative gap-filling". Journalist Benj Edwards suggests using "confabulation" as an analogy for this process.
In natural language processing, a hallucination is often defined as "generated content that appears factual but is ungrounded". This can be divided into intrinsic and extrinsic hallucinations, depending on whether the output contradicts the source or cannot be verified from the source.
Here's a breakdown of the different types of hallucinations:
- Intrinsic hallucination: Output contradicts the source
- Extrinsic hallucination: Output cannot be verified from the source
Depending on whether the output contradicts the prompt, hallucinations can also be divided into closed-domain and open-domain hallucinations, respectively.
Frequently Asked Questions
Is AI hallucination fixable?
While AI hallucinations can be reduced, they are unlikely to be completely eliminated. This is because AI systems, like humans, can provide inaccurate information, even with advanced education and training.
How often do AI hallucinations happen?
AI hallucinations occur in a significant percentage of cases, ranging from around 3% to 5% depending on the technology used. However, the frequency of hallucinations can vary widely among leading AI companies.
How to test AI for hallucinations?
To test AI for hallucinations, compare its output to one or more ground truths or expected values through automated comparison. This helps identify when the AI generates information not supported by the available data.
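As a rough illustration (not any specific product's evaluation API), such an automated comparison can score each model answer against a ground-truth reference and flag low-similarity answers for review; the questions, answers, and threshold below are made up for the example.

```python
# Rough illustration of automated ground-truth comparison: score each model
# answer against a reference answer and flag those below a threshold. The
# data and threshold are illustrative; real evaluations use stronger metrics.
from difflib import SequenceMatcher

ground_truth = {
    "Which year were the first exoplanet images taken?": "2004",
    "When did the James Webb Space Telescope launch?": "December 2021",
}
model_answers = {
    "Which year were the first exoplanet images taken?": "2004",
    "When did the James Webb Space Telescope launch?": "It launched in 2018.",
}

for question, expected in ground_truth.items():
    answer = model_answers[question]
    similarity = SequenceMatcher(None, expected.lower(), answer.lower()).ratio()
    status = "OK" if similarity >= 0.6 else "POSSIBLE HALLUCINATION"
    print(f"{status}: {question} -> {answer!r} (similarity={similarity:.2f})")
```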
Sources
- https://www.tidio.com/blog/ai-hallucinations/
- https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
- https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/
- https://builtin.com/artificial-intelligence/ai-hallucination
- https://www.techtarget.com/whatis/definition/AI-hallucination