Understanding AI Hallucinations and Their Impact

Posted Nov 6, 2024

AI hallucinations are a phenomenon where AI models generate information that is not grounded in the input data, but instead reflects the model's own patterns, biases, and assumptions. This can lead to inaccurate or misleading results.

The term "hallucination" was first used in the context of AI by researchers who observed that some models were generating responses that were not grounded in reality. For example, in one study, a language model was asked to describe a picture of a cat, but instead generated a response about a cat playing the piano.

AI hallucinations can have serious consequences, such as in medical diagnosis, where a model might misinterpret a patient's symptoms and suggest an incorrect treatment. Models have also been found to generate false information about a patient's medical history, which could lead to a misdiagnosis.

AI hallucinations are often the result of the model's overfitting to the training data, which can lead to the model generating responses that are not representative of the real world.

What Are AI Hallucinations?

AI hallucinations are a type of error that can occur in artificial intelligence systems.

They happen when a chatbot or AI content generator provides information that's not based in reality, but is presented as fact.

A chatbot giving an answer that is factually inaccurate is one of the most common examples of an AI hallucination.

An AI content generator fabricating information and presenting it as truth is also a form of AI hallucination.

AI hallucinations can have serious consequences, such as spreading misinformation and eroding trust in AI systems.

Types of AI Hallucinations

AI hallucinations can manifest in various forms, including sentence contradictions, prompt contradictions, factual contradictions, and irrelevant or random hallucinations.

These types of hallucinations can be quite puzzling, and they often arise from the limitations and biases of the AI tools we use. For instance, an LLM might generate a sentence that contradicts a previous sentence, or it might produce fictitious information that's presented as fact.

Here are some examples of AI hallucinations (a simple automated check for the first type is sketched just after the list):

  • Sentence contradiction: An LLM generates a sentence that contradicts a previous sentence.
  • Prompt contradiction: A sentence contradicts the prompt used to generate it.
  • Factual contradiction: Fictitious information is presented as a fact.
  • Irrelevant or random hallucinations: Random information with little or no relation to the input is generated.
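
The first type in the list, sentence contradiction, is also the easiest to check for automatically. The sketch below runs pairs of sentences through an off-the-shelf natural language inference (NLI) model; the specific model name, threshold, and example sentences are illustrative assumptions, not tooling referenced in this article.

```python
# Minimal sketch: flag self-contradictions by scoring consecutive sentences with an
# off-the-shelf NLI model. Model name, threshold, and examples are illustrative.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def contradicts(previous_sentence: str, new_sentence: str, threshold: float = 0.8) -> bool:
    # The pipeline accepts a premise/hypothesis pair as a text/text_pair dict.
    result = nli({"text": previous_sentence, "text_pair": new_sentence})
    if isinstance(result, list):  # some versions wrap single inputs in a list
        result = result[0]
    return result["label"] == "CONTRADICTION" and result["score"] >= threshold

print(contradicts(
    "The Eiffel Tower is located in Paris.",
    "The Eiffel Tower is located in Rome.",
))  # expected: True
```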

Prompt Contradictions

Prompt contradictions are a type of AI hallucination that can be quite frustrating. They occur when an AI tool generates a response that doesn't match the prompt given, sometimes completely ignoring it.

One example of a prompt contradiction is asking an LLM a question and getting an answer to something else entirely. This can happen for several reasons, most of them tied to how the model interprets and encodes the prompt.

Prompt contradictions are closely related to sentence contradictions, where an LLM generates a sentence that contradicts one of its own earlier sentences. In both cases, the model loses track of constraints it should be honoring and ends up producing contradictory information.

Here are some types of prompt contradictions:

  • Sentence contradiction: An LLM generates a sentence that contradicts a previous sentence.
  • Prompt contradiction: An LLM generates a response that doesn't match the prompt given.

In many cases, prompt contradictions can be caused by the way prompts are encoded, which can lead to nonsensical outputs in the generated text. Understanding the reasons for prompt contradictions is essential to addressing this issue and improving AI tools.

Object Detection

Object detection can be a tricky business, especially when it comes to adversarial hallucinations. Various researchers have classified these hallucinations as a high-dimensional statistical phenomenon, or attributed them to insufficient training data.

In object detection, some "incorrect" AI responses classified by humans as "hallucinations" may actually be justified by the training data. For example, an AI may detect tiny patterns in an image that a human wouldn't notice, even if it looks like an ordinary image of a dog to us.

This highlights the importance of understanding the limitations of AI training data. As Wired noted in 2018, consumer gadgets and automated systems are susceptible to adversarial attacks that can cause AI to hallucinate. This can lead to some pretty surprising results, like a stop sign rendered invisible to computer vision.

The models used in object detection can be biased towards superficial statistics, leading adversarial training to not be robust in real-world scenarios. This means that even if an AI is trained on a large dataset, it may still struggle to accurately detect objects in certain situations.
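
To make the adversarial angle concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch. It nudges every pixel slightly in the direction that increases the classifier's loss, which is often enough to change the prediction while the image still looks normal to a person. The model choice, perturbation size, and input file are illustrative assumptions.

```python
# Minimal FGSM sketch: add a small, human-imperceptible perturbation that can flip an
# image classifier's prediction. Model, epsilon, and the input image are placeholders.
import torch
import torch.nn.functional as F
from torchvision import models
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = models.ResNet18_Weights.DEFAULT.transforms()

image = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)  # placeholder image
image.requires_grad_(True)

logits = model(image)
original_class = logits.argmax(dim=1)

# Take a gradient step that *increases* the loss for the current prediction.
loss = F.cross_entropy(logits, original_class)
loss.backward()

epsilon = 0.02  # perturbation size; small enough to be hard for a human to notice
adversarial = image + epsilon * image.grad.sign()

adversarial_class = model(adversarial).argmax(dim=1)
print("original:", original_class.item(), "adversarial:", adversarial_class.item())
```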

It's not just images that can be manipulated - audio clips can also be engineered to sound innocuous to humans, but be transcribed as something entirely different by software. For example, an audio clip might be transcribed as "evil dot com" when it sounds harmless to us.

Text-to-Audio Generative

Text-to-Audio generative AI can produce inaccurate and unexpected results.

These inaccuracies can be quite surprising, and I've seen firsthand how they can catch people off guard. Text-to-audio generative AI is often referred to as text-to-speech (TTS) synthesis when the output is spoken audio.

Inaccurate results can range from slight mispronunciations to complete misinterpretations of the original text.

Causes of AI Hallucinations

AI hallucinations are a fascinating phenomenon, and understanding their causes is crucial to developing more reliable and trustworthy AI systems. Hallucinations occur when a model generates information that is not supported by its training data or by the input it was given.

The main data-related cause of hallucination is source-reference divergence, which can arise as an artifact of heuristic data collection or from the nature of certain natural language generation (NLG) tasks. When a model is trained on data whose references diverge from their sources, it learns to generate text that is not faithful to the source it is given.

Poor data quality is a significant contributor to hallucinations. Hallucinations might occur when there is bad, incorrect, or incomplete information in the data used to train the LLM. LLMs rely on a large body of training data to produce output that's relevant and accurate to the user who provided the input prompt.

Generative AI models function like advanced autocomplete tools, designed to predict the next word or sequence based on observed patterns. Their goal is to generate plausible content, not to verify its truth. This can lead to content that sounds reasonable but is inaccurate.

Pre-training of models on a large corpus can result in the model memorizing knowledge in its parameters, creating hallucinations if the system is overconfident in its hardwired knowledge. In systems such as GPT-3, an AI generates each next word based on a sequence of previous words, causing a cascade of possible hallucinations as the response grows longer.
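
A minimal illustration of that next-word process, using GPT-2 through Hugging Face's transformers library (the prompt and model are purely illustrative): the model only produces a probability distribution over plausible next tokens, and nothing in that step checks whether the sampled continuation is true.

```python
# Sketch of autoregressive next-token prediction: the model scores which tokens are
# *plausible* next, it never verifies whether the continuation is factually correct.
# GPT-2 and the prompt are used purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "The first telescope to photograph an exoplanet was"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    next_token_logits = model(input_ids).logits[0, -1]   # scores for the next token only

temperature = 0.9                                        # <1 sharpens, >1 flattens the distribution
probs = torch.softmax(next_token_logits / temperature, dim=-1)

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  p={p.item():.3f}")
```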

Here are some of the key factors that contribute to AI hallucinations:

  • Poor data quality
  • Limitations of generative models
  • Inherent challenges in AI design
  • Insufficient training data
  • Improperly encoded prompts

Examples

In February 2023, Google's chatbot Bard (since renamed Gemini) incorrectly claimed in a promotional video that the James Webb Space Telescope took the very first image of a planet outside our solar system. This is wrong: the first images of an exoplanet were taken in 2004, according to NASA, years before the JWST launched in December 2021.

Meta demoed Galactica, an open-source LLM trained on millions of pieces of scientific information, in late 2022. The system generated inaccurate, suspicious, or biased results, and many users reported that it invented citations and research when prompted to perform a literature review.

OpenAI's ChatGPT has also been embroiled in numerous hallucination controversies since its public release in November 2022. In June 2023, a radio host in Georgia brought a defamation suit against OpenAI, accusing the chatbot of making malicious and potentially libelous statements about him.

ChatGPT fabricated a story about a real law professor, alleging that he harassed students on a school trip that never happened. This kind of misinformation can damage the reputations of the people involved, through no fault of their own.

Generative AI tools like ChatGPT, Copilot, and Gemini have all been found to present fabricated information to users as if it were authentic, and it is these fabrications that have earned the name "hallucinations."

Prevention and Detection

To minimize the occurrence of AI hallucinations, users can use clear and specific prompts, which can guide the model to provide the intended and correct output.

Filtering and ranking strategies, such as tuning the temperature parameter or using top-K sampling, can also help reduce hallucinations. Multishot prompting, where users provide several examples of the desired output format, can help the model recognize the intended pattern and generate more accurate output.

One way to prevent AI hallucinations is to let the tool know what output you don't want to get. This can be done by asking AI to exclude certain facts or data.

Researchers have proposed various mitigation measures, including getting different chatbots to debate one another until they reach consensus on an answer, and actively validating the correctness of the model's output using web search results.

Here are some common mitigation methods:

  • Data-related methods: building a faithful dataset, cleaning data automatically, and information augmentation by augmenting the inputs with external information.
  • Model and inference methods: changes in the architecture, changes in the training process, and post-processing methods that can correct hallucinations in the output.

By following these tips and staying vigilant, you can reduce the occurrence of AI hallucinations and make the most out of AI-generated content.

How to Spot

Spotting AI hallucinations can be a challenge, but there are ways to catch them. Most AI hallucinations are spotted because they don't pass the "common sense" test.

For example, an AI might confidently tell you that you can cross the English Channel on foot, but it's not hard to realize that's impossible. Asking a follow-up question can sometimes help; the AI may apologize and admit it gave the wrong answer. However, the best way to spot AI hallucinations is by doing your own fact-checking.

Credible resources like books, news articles, and academic papers can help you verify the accuracy of AI-generated information. Our respondents agree that cross-checking with credible resources is the best way to notice if something is off with AI responses. To minimize the occurrence of hallucinations, you can also check out the tips in the next section.

Here are some common characteristics of AI hallucinations that might raise a red flag:

  • Impossible or implausible scenarios
  • Lack of credible sources
  • Inconsistent or contradictory information
  • Unusual or unexplained facts

How to Prevent

Prevention is key when it comes to AI hallucinations. One way to prevent them is by using clear and specific prompts. This can guide the model to provide the intended and correct output. Some examples of clear and specific prompts include providing additional context and using unambiguous language.

Filtering and ranking strategies can also help minimize hallucinations. This can be done by tuning parameters such as the temperature parameter, which controls output randomness, and the top-K parameter, which limits sampling to the K most probable tokens at each step.
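
As a concrete illustration of those two knobs, here is how they appear in Hugging Face's generate API; the model and prompt are illustrative, and hosted chatbot APIs expose similar parameters under similar names.

```python
# Sketch of the sampling parameters discussed above. A low temperature and a small
# top_k keep generation on high-probability tokens, which tends to reduce (but does
# not eliminate) hallucinated content. Model and prompt are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Q: In what year did the James Webb Space Telescope launch?\nA:",
                   return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    temperature=0.3,                      # low randomness
    top_k=40,                             # sample only from the 40 most likely tokens
    pad_token_id=tokenizer.eos_token_id,  # silences a padding warning for GPT-2
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```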

Multishot prompting is another technique that can help prevent hallucinations. This involves providing several examples of the desired output format to help the model accurately recognize patterns and generate more accurate output.
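
A minimal sketch of what a multishot prompt looks like in practice; the example pairs and the final question are made up for illustration.

```python
# Minimal sketch of multishot (few-shot) prompting: show the model several
# input/output pairs in the desired format before asking the real question.
# The examples and the final question are illustrative.
examples = [
    ("Who wrote 'Pride and Prejudice'?", "Jane Austen (1813). Source: publisher records."),
    ("What is the boiling point of water at sea level?", "100 °C (212 °F). Source: standard reference."),
]

question = "When was the James Webb Space Telescope launched?"

prompt_lines = []
for q, a in examples:
    prompt_lines.append(f"Q: {q}\nA: {a}")
prompt_lines.append(f"Q: {question}\nA:")

prompt = "\n\n".join(prompt_lines)
print(prompt)  # send this string to whichever LLM API you are using
```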

Grounding the model with relevant data is also crucial. This can be done by training the model on industry-specific data, which can enhance its understanding and enable it to generate answers based on context rather than just hallucinating.
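
Here is a stripped-down sketch of that grounding idea: retrieve the most relevant passages from your own data and put them in front of the question. The tiny in-memory "document store" and keyword scoring below are stand-ins for a real vector database and embedding search.

```python
# Minimal grounding sketch: retrieve relevant passages from your own data and
# prepend them to the prompt so the model answers from context instead of memory.
documents = [
    "Policy 12.3: Refunds are issued within 14 days of purchase with a valid receipt.",
    "Policy 8.1: Warranty claims require the original order number.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # naive keyword-overlap ranking; a production system would use embeddings
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

question = "How long do customers have to request a refund?"
context = "\n".join(retrieve(question, documents))

prompt = (
    "Answer using ONLY the context below. If the answer is not in the context, say so.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)
print(prompt)  # pass this prompt to your LLM of choice
```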

Here are some common mitigation methods:

  • Building a faithful dataset
  • Cleaning data automatically
  • Information augmentation by augmenting the inputs with external information
  • Changes in the architecture, such as modifying the encoder, attention, or decoder
  • Changes in the training process, such as using reinforcement learning
  • Post-processing methods that can correct hallucinations in the output

Researchers have also proposed using web search results to validate the correctness of the model's output. This can be done by actively checking low-confidence portions of the model's output against web search results. Additionally, having different chatbots debate one another until they reach consensus on an answer can also help mitigate hallucinations.

Mitigation Methods

Mitigation methods are being developed to address the issue of AI hallucinations.

Researchers have proposed various methods to mitigate hallucinations, which can be categorized into data-related methods and modeling and inference methods.

Data-related methods include building a faithful dataset, cleaning data automatically, and information augmentation by adding external information to the inputs.

Another approach is to get different chatbots to debate each other until they reach consensus on an answer.

The web search mitigation method involves actively validating the correctness of the model's output against web search results, with an extra layer of logic-based rules that treat differently ranked web pages as a tiered knowledge base.
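
A rough sketch of that validation loop is below. The web_search and sentence_confidence helpers are hypothetical stand-ins for a real search API and for the model's own token-level confidence scores; the rank-based rule is one example of the "extra layer of logic" described above.

```python
# Sketch of the web-search validation idea: for low-confidence sentences, check
# whether top-ranked search results support the claim. The helpers below are
# hypothetical stand-ins, not a real API.
def web_search(query: str) -> list[dict]:
    """Hypothetical: returns [{'rank': 1, 'snippet': '...'}, ...] from a search API."""
    raise NotImplementedError

def sentence_confidence(sentence: str) -> float:
    """Hypothetical: average token probability the model assigned to this sentence."""
    raise NotImplementedError

def validate(sentences: list[str], confidence_threshold: float = 0.6) -> list[str]:
    flagged = []
    for sentence in sentences:
        if sentence_confidence(sentence) >= confidence_threshold:
            continue  # only fact-check what the model itself was unsure about
        results = web_search(sentence)
        # simple logic-based rule: trust higher-ranked pages more
        supported = any(
            r["rank"] <= 3 and sentence.lower() in r["snippet"].lower() for r in results
        )
        if not supported:
            flagged.append(sentence)
    return flagged  # sentences to revise, remove, or regenerate
```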

Training or reference guiding for language models is another category, which involves strategies like employing control codes or contrastive learning to guide the generation process.

Nvidia's NeMo Guardrails, launched in 2023, can be configured to hard-code certain responses via scripts instead of leaving them to the LLM.

Tools like SelfCheckGPT and Aimon have emerged to aid in the detection of hallucination in offline experimentation and real-time production scenarios.
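
The core idea behind sampling-based detectors such as SelfCheckGPT can be sketched in a few lines (this is the general approach, not the library's actual API; ask_model is a hypothetical wrapper around whatever LLM is being checked).

```python
# Sketch of sampling-based consistency checking: ask the same question several times
# and flag answers the model cannot reproduce. `ask_model` is a hypothetical wrapper
# around your LLM, with sampling enabled.
from collections import Counter

def ask_model(question: str) -> str:
    """Hypothetical: one sampled answer from your LLM."""
    raise NotImplementedError

def consistency_check(question: str, n_samples: int = 5) -> tuple[str, float]:
    answers = [ask_model(question).strip().lower() for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples
    # Low agreement across samples is a useful hallucination warning sign:
    # facts the model actually "knows" tend to be reproduced consistently.
    return answer, agreement
```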

Researchers have shown that a generated sentence is hallucinated more often when the model has already hallucinated in its previously generated sentences for the input.

Impact and Risks

One third of businesses across all industries already use AI in some form, and the numbers are rising. This means that the potential harm from AI hallucinations is a growing concern.

AI hallucinations can cause real-world consequences, leading to legal liability and losses for businesses. Multiple industries have strict compliance requirements, making AI hallucinations a serious issue.

If enough misinformation is spread online, it can create a self-perpetuating cycle of inaccurate content, making it harder to trust legitimate information sources. This can also hurt adoption of generative AI technology.

Loss of Trust

AI hallucinations can have a profound impact on our trust in information sources. Fill the internet with enough misinformation, and you have a self-perpetuating cycle of inaccurate content.

This pollution of the information ecosystem makes it harder for us to trust the things we should be able to trust. It's like trying to find a needle in a haystack – it's really hard to detect and mitigate.

According to Bender, these systems produce non-information that looks authoritative and like it's produced by humans. This mixing of synthetic non-information with legitimate sources is a real problem.

If people don't believe generative AI outputs are factual or based on real data, they may avoid using the technology altogether. That could be bad news for companies innovating with and adopting it.

If we don't solve hallucinations, it's definitely going to hurt adoption, as Orlick pointed out. This is a serious issue that needs to be addressed.

Businesses have plenty at stake here too: one third of businesses across all industries already use AI in some form, and the numbers are rising.

If AI tools hallucinate, businesses might be in for some trouble, because wrong output can have real-world consequences.

Businesses might have to deal with legal liability due to AI hallucinations.

Multiple industries also have strict compliance requirements, and a hallucinating AI tool can put a business in breach of those standards and lead to significant losses.

The potential harm caused by AI hallucinations affects businesses and individuals alike.

To minimize the potential harm, we need to learn to spot when AI hallucinates.

History and Current Status

The term "hallucination" was first used in an AI context in 2000, in a paper published in Proceedings: Fourth IEEE International Conference on Automatic Face and Gesture Recognition, where it carried a positive meaning in computer vision.

In 2018, Google researchers applied the term to machine translation, describing hallucinations as "highly pathological translations that are completely untethered from the source material." This marked the beginning of a more widespread, and distinctly negative, understanding of AI hallucinations.

In 2022, a report called "Survey of Hallucination in Natural Language Generation" highlighted the tendency of deep learning-based systems to "hallucinate unintended text", affecting performance in real-world scenarios.

History

The concept of hallucinations in AI has a fascinating history. The term "hallucinations" was first used in 2000 in a paper on computer vision, where it carried a positive meaning.

Google researchers applied the term to machine translation in 2018, describing hallucinations as "highly pathological translations that are completely untethered from the source material." This marked a shift in the way researchers viewed hallucinations: as a negative phenomenon.

A 2022 report on hallucinations in natural language generation highlighted the tendency of deep learning-based systems to "hallucinate unintended text." This has significant implications for real-world applications.

The release of ChatGPT in late 2022 made large language models (LLMs) accessible to a mainstream audience, and it also put the issue of hallucinations in AI squarely in the public eye.

Current Status

AI companies are actively working on addressing AI hallucinations, a problem that's been widespread and worrisome. OpenAI has announced a new method to tackle the issue, called "process supervision", which trains AI models to reward correct reasoning steps instead of just the end-answer.

This approach has the potential to drastically lower the rates of hallucinations, making AI models more capable of solving challenging reasoning problems. Research has shown that users have encountered AI hallucinations in generative AI tools of all kinds, from ChatGPT and Bard to Siri and Alexa.

The issue is concerning, as most people are either intrigued, annoyed, or anxious about AI hallucinations, which can undermine trust in AI. Users have experienced AI hallucinations in various forms, from harmless word mishaps to wrong math calculations to concerning factual inconsistencies.

Fortunately, AI hallucinations can usually be spotted when something doesn't seem realistic, and people often intuitively disregard or cross-check with other resources. However, unless actions are taken, AI hallucinations may become more intricate and harder to spot, which could mislead many more people.

Terminology and Concepts

The term "hallucination" is used to describe a phenomenon in AI, but it's not without controversy. Statistician Gary N. Smith argues that LLMs "do not understand what words mean" and that the term "hallucination" anthropomorphizes the machine.

Some people use the term "hallucination" to describe a model's tendency to invent facts in moments of uncertainty. OpenAI defines it as "a tendency to invent facts in moments of uncertainty" and also as "a model's logical mistakes".

The term "hallucination" is often used interchangeably with "confabulation", which is a process that involves "creative gap-filling". Journalist Benj Edwards suggests using "confabulation" as an analogy for this process.

In natural language processing, a hallucination is often defined as "generated content that appears factual but is ungrounded". This can be divided into intrinsic and extrinsic hallucinations, depending on whether the output contradicts the source or cannot be verified from the source.

Here's a breakdown of the different types of hallucinations:

  • Intrinsic hallucination: Output contradicts the source
  • Extrinsic hallucination: Output cannot be verified from the source

Note that hallucinations can also be divided into closed-domain and open-domain hallucinations, depending on whether the output is judged against content supplied in the prompt or against general world knowledge, respectively.

Frequently Asked Questions

Is AI hallucination fixable?

While AI hallucinations can be reduced, they are unlikely to be completely eliminated. This is because AI systems, like humans, can provide inaccurate information, even with advanced education and training.

How often do AI hallucinations happen?

AI hallucinations occur in a significant percentage of cases, ranging from around 3% to 5% depending on the technology used. However, the frequency of hallucinations can vary widely among leading AI companies.

How to test AI for hallucinations?

To test AI for hallucinations, compare its output to one or more ground truths or expected values through automated comparison. This helps identify when the AI generates information not supported by the available data.
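
A minimal sketch of that kind of automated comparison is shown below; the test cases and the ask_model wrapper are illustrative placeholders for your own evaluation set and model client.

```python
# Minimal sketch of automated hallucination testing: compare model answers against
# known ground truths. Test cases and the `ask_model` wrapper are illustrative.
test_cases = [
    {"question": "When was the James Webb Space Telescope launched?", "ground_truth": "2021"},
    {"question": "Who was the first person to walk on the Moon?", "ground_truth": "Neil Armstrong"},
]

def ask_model(question: str) -> str:
    """Hypothetical wrapper around the LLM under test."""
    raise NotImplementedError

def run_eval(cases: list[dict]) -> float:
    correct = 0
    for case in cases:
        answer = ask_model(case["question"])
        # crude containment check; real evaluations often use exact match,
        # embedding similarity, or an LLM-as-judge comparison
        if case["ground_truth"].lower() in answer.lower():
            correct += 1
    return correct / len(cases)  # fraction of answers supported by the ground truth
```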

Sources

  1. ChatGPT ‘may make up facts,’ OpenAI’s chief technology officer says (businessinsider.com)
  2. Both humans and AI hallucinate — but not in the same way (theconversation.com)
  3. ChatGPT: What Are Hallucinations And Why Are They A Problem For AI Systems (bernardmarr.com)
  4. ChatGTP and the Generative AI Hallucinations (medium.com)
  5. What Is AI Hallucination, and How Do You Spot It? (makeuseof.com)
  6. When A.I. Chatbots Hallucinate (nytimes.com)
  7. OpenAI is pursuing a new way to fight A.I. ‘hallucinations’ (cnbc.com)
  8. Hallucinations in Neural Machine Translation (research.google)
  9. Survey of Hallucination in Natural Language Generation (arxiv.org)
  10. Chatbots sometimes make things up. Is AI’s hallucination problem fixable? (apnews.com)
  11. What are AI hallucinations and how do you prevent them? (zapier.com)
  12. The hilarious & horrifying hallucinations of AI (sify.com)
  13. Hallucinations of ChatGPT-4: Even the most powerful tool has a weakness (flyingbisons.com)
  14. AI Hallucinations: Understanding the Phenomenon and Exploring Potential Solutions (artificialcorner.com)
  15. How to Tell When an Artificial Intelligence Is ‘Hallucinating’ (lifehacker.com)
  16. Generative AI hallucinations: Why they occur and how to prevent them (telusinternational.com)
  17. "Nvidia has a new way to prevent A.I. chatbots from 'hallucinating' wrong facts" (cnbc.com)
  18. 2305.15852 (arxiv.org)
  19. "Reducing Quantity Hallucinations in Abstractive Summarization" (aclanthology.org)
  20. 2212.10400 (arxiv.org)
  21. 2104.08455 (arxiv.org)
  22. 2401.08358 (arxiv.org)
  23. 2307.03987 (arxiv.org)
  24. "ChatGPT 'hallucinates.' Some researchers worry it isn't fixable" (washingtonpost.com)
  25. 10.1145/3571730 (doi.org)
  26. 2202.03629 (arxiv.org)
  27. 10.18653/v1/2022.naacl-main.387 (doi.org)
  28. "On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?" (aclanthology.org)
  29. 10.18653/v1/P19-1256 (doi.org)
  30. "A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation" (aclanthology.org)
  31. 2401.11817 (arxiv.org)
  32. "Google apologizes for "missing the mark" after Gemini generated racially diverse Nazis" (theverge.com)
  33. 2303.13336 (arxiv.org)
  34. 10.23915/distill.00019.1 (doi.org)
  35. "Artificial Intelligence May Not 'Hallucinate' After All" (wired.com)
  36. "Google's AI chatbot Bard makes factual error in first demo" (theverge.com)
  37. "OpenAI is pursuing a new way to fight A.I. 'hallucinations'" (cnbc.com)
  38. "An AI that can "write" is feeding delusions about how smart artificial intelligence really is" (salon.com)
  39. 10.1038/s41537-023-00379-4 (doi.org)
  40. 10.1038/s41746-023-00819-6 (doi.org)
  41. 10.1016/j.amjmed.2023.06.012 (doi.org)
  42. "OpenAI faces defamation suit after ChatGPT completely fabricated another lawsuit" (arstechnica.com)
  43. "Lawyers have real bad day in court after citing fake cases made up by ChatGPT" (arstechnica.com)
  44. "Federal judge: No AI in my courtroom unless a human verifies its accuracy" (arstechnica.com)
  45. "Lawyer apologizes for fake court citations from ChatGPT" (cnn.com)
  46. "Google cautions against 'hallucinating' chatbots, report says" (reuters.com)
  47. "How come GPT can seem so brilliant one minute and so breathtakingly dumb the next?" (substack.com)
  48. "Finally, an A.I. Chatbot That Reliably Passes 'the Nazi Test'" (slate.com)
  49. "OpenAI invites everyone to test ChatGPT, a new AI-powered chatbot—with amusing results" (arstechnica.com)
  50. "We Asked ChatGPT Your Questions About Astronomy. It Didn't Go so Well" (discovermagazine.com)
  51. "We asked an AI questions about New Brunswick. Some of the answers may surprise you" (cbc.ca)
  52. "New Meta AI demo writes racist and inaccurate scientific literature, gets pulled" (arstechnica.com)
  53. 2211.09085 (arxiv.org)
  54. 2303.08774 (arxiv.org)
  55. 2401.01313 (arxiv.org)
  56. "What are AI chatbots actually doing when they 'hallucinate'? Here's why experts don't like the term" (northeastern.edu)
  57. "'Hallucinate' chosen as Cambridge dictionary's word of the year" (theguardian.com)
  58. "When A.I. Chatbots Hallucinate" (nytimes.com)
  59. "Meta warns its new chatbot may forget that it's a bot" (zdnet.com)
  60. 2301.12867 (arxiv.org)
  61. "AI Has a Hallucination Problem That's Proving Tough to Fix" (wired.com)
  62. "Hallucinations in Neural Machine Translation" (research.google)
  63. "AI Hallucinations: A Misnomer Worth Clarifying" (arxiv.org)
  64. "Microsoft's Bing A.I. made several factual errors in last week's launch demo" (cnbc.com)
  65. 10.1016/j.nlp.2023.100024 (doi.org)
  66. 2304.08637 (arxiv.org)
  67. "Chatbots May 'Hallucinate' More Often Than Many Realize" (nytimes.com)
  68. "Survey of Hallucination in Natural Language Generation" (acm.org)
  69. 2005.00661 (arxiv.org)
  70. "On Faithfulness and Factuality in Abstractive Summarization" (aclanthology.org)
  71. "Definition of HALLUCINATION" (merriam-webster.com)
  72. "Shaking the foundations: delusions in sequence models for interaction and control" (deepmind.com)
  73. When AI Gets It Wrong: Addressing AI Hallucinations and ... (mit.edu)
  74. sanctioned (reuters.com)
  75. fabricated a story (washingtonpost.com)
  76. falsely claimed (yahoo.com)
  77. gaslighted and insulted (fastcompany.com)
  78. doubtful (cnbc.com)
  79. spread misinformation (computerweekly.com)
  80. This approach (cnbc.com)
  81. in 2018 (research.google)

Keith Marchal

Senior Writer

Keith Marchal is a passionate writer who has been sharing his thoughts and experiences on his personal blog for more than a decade. He is known for his engaging storytelling style and insightful commentary on a wide range of topics, including travel, food, technology, and culture. With a keen eye for detail and a deep appreciation for the power of words, Keith's writing has captivated readers all around the world.
