
Am I Seeing Things? - Hallucinations in AI

Image: DALL·E 3

Large Language Models (LLMs) like GPT-4 and their contemporaries are at the forefront of AI technology, capturing the imagination with their ability to generate human-like text. Despite their capabilities, they're not without flaws, and one of the most notable is the tendency to 'hallucinate'—to produce content that sounds right but isn't factually accurate.


The crux of why LLMs hallucinate lies in their design. These models are trained on vast datasets of text drawn from across the internet. That text varies widely in reliability, and the models aren't inherently equipped to discern truth from falsehood. Instead, they learn to predict the next word in a sequence based on patterns observed in their training data. When a prompt resembles misleading or incorrect patterns the model has learned, it may reproduce those inaccuracies in its output.
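To make that concrete, here is a deliberately tiny, made-up sketch in Python. It is not code from any real model, and the counts are invented purely for illustration; the point is that if a wrong answer dominates the training data, the most probable next word is the wrong one.

```python
# Toy illustration (not any real model): a "language model" reduced to counts
# of which word followed the phrase "the capital of Australia is" in an
# imaginary training set. The counts below are made up. If the wrong answer
# dominates the data, the most likely next word is wrong -- confidently so.
next_word_counts = {
    "Sydney": 70,    # common misconception, over-represented online
    "Canberra": 25,  # the correct answer
    "Melbourne": 5,
}

total = sum(next_word_counts.values())
probabilities = {word: count / total for word, count in next_word_counts.items()}

prediction = max(probabilities, key=probabilities.get)
print(f"Predicted next word: {prediction} (p = {probabilities[prediction]:.2f})")
# Predicted next word: Sydney (p = 0.70)
```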


Another factor contributing to hallucinations is the context window of LLMs. They can only consider a finite piece of text at any given time, which means they can lose track of earlier parts of a conversation or document. This limited attention span can result in the model 'forgetting' crucial details that ensure accuracy, leading to responses that may diverge from factual correctness.
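A rough sketch of how that limit bites, assuming a toy "window" measured in whole words (real models use subword tokenisers and far larger limits, so treat the numbers here as illustrative only):

```python
# Hypothetical sketch of a context window, using whole words as "tokens"
# for simplicity. Real models count subword tokens and allow thousands of
# them, but the principle is the same: older text simply falls out of view.
CONTEXT_WINDOW = 12  # number of "tokens" the model is allowed to see

conversation = [
    "My dog is called Biscuit and she is a beagle.",
    "She was born in March 2019.",
    "What breed is my dog?",
]

tokens = " ".join(conversation).split()
visible = tokens[-CONTEXT_WINDOW:]  # everything earlier is dropped

print("Model sees:", " ".join(visible))
# Model sees: beagle. She was born in March 2019. What breed is my dog?
# "beagle" only just survives here; with a slightly longer chat it would
# fall outside the window, and the model would have to guess the answer.
```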


Moreover, the way LLMs are trained—to maximize the likelihood of the next word—means that they prioritize fluent language generation over factual accuracy. They do not possess an internal model of the world or an understanding of the truth of statements. This absence of a 'reality check' can lead them to make confident assertions that are entirely baseless.
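The training objective itself makes this plain. Below is a minimal sketch of the standard next-word loss, with made-up probabilities for illustration: the loss only measures agreement with the training text, and contains no term at all for whether the resulting statement is true.

```python
import math

def cross_entropy(predicted_probs, target_word):
    """Standard next-word training loss: -log p(target)."""
    return -math.log(predicted_probs[target_word])

# Hypothetical model outputs for the blank in "The moon is made of ___".
predicted_probs = {"rock": 0.2, "cheese": 0.7, "dust": 0.1}

# If the training sentence happened to end in "cheese", the loss is *lower*
# for the false continuation than for the true one -- the objective rewards
# matching the text, not matching reality.
print(round(cross_entropy(predicted_probs, "cheese"), 3))  # 0.357
print(round(cross_entropy(predicted_probs, "rock"), 3))    # 1.609
```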


The challenge of hallucinations is not just a technical issue; it reflects deeper questions about how AI understands and interacts with human language and knowledge. LLMs don't have a grasp of real-world concepts and consequences the way humans do. They don't understand the implications of false information, nor can they evaluate the credibility of their sources. They are, in essence, sophisticated pattern matchers, not thinkers or knowers.


Researchers and developers use various methods to try to reduce the rate of hallucinations.

One approach is fine-tuning models with more reliable datasets or using reinforcement learning techniques to steer models towards more accurate responses. Another method is to implement safety layers that attempt to flag or correct potentially incorrect information before it reaches the user.
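As a very rough illustration of the "safety layer" idea, here is a hypothetical sketch that flags sentences with little overlap with a trusted reference text. Real systems use retrieval and dedicated fact-checking models rather than simple word overlap; this only shows the shape of the approach.

```python
# Deliberately simple, hypothetical "safety layer": before an answer is shown,
# check whether each sentence shares enough words with trusted reference text,
# and flag the ones that don't. The reference and threshold are made up.

TRUSTED_REFERENCE = (
    "Canberra is the capital of Australia. "
    "It became the capital in 1913."
)

def flag_unsupported(answer: str, reference: str, min_overlap: int = 3) -> list[str]:
    reference_words = set(reference.lower().split())
    flagged = []
    for sentence in answer.split(". "):
        overlap = len(set(sentence.lower().split()) & reference_words)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged

answer = "Canberra is the capital of Australia. It was founded by the Romans"
print(flag_unsupported(answer, TRUSTED_REFERENCE))
# ['It was founded by the Romans']
```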


A benchmarking study by Vectara provides a quantitative look at this issue, pitting various LLMs against each other to evaluate their propensity for factual inaccuracies. It shows that even the most advanced models like GPT-4, while impressive in their abilities, are not immune to these errors. The differences in hallucination rates among models reflect the different architectures, training regimes, and design philosophies behind each one.
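For readers unfamiliar with the metric, a "hallucination rate" in a benchmark like this is simply the share of model outputs judged unsupported by their source material. The sketch below uses invented judgements to show the arithmetic; Vectara's actual benchmark relies on its own evaluation model, not this toy calculation.

```python
# Rough sketch of what a "hallucination rate" measures: each output is judged
# as factually consistent with its source document (True) or not (False), and
# the rate is the share judged inconsistent. All values below are made up.
judgements = {
    "model_a": [True, True, True, True, False],
    "model_b": [True, False, True, False, True],
}

for model, results in judgements.items():
    hallucination_rate = results.count(False) / len(results)
    print(f"{model}: {hallucination_rate:.0%} hallucination rate")
# model_a: 20% hallucination rate
# model_b: 40% hallucination rate
```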


In practical terms, this means that while we can be astonished by the fluency and apparent intelligence of these AI systems, we should also approach their outputs with a critical eye. The AI field is still in a state of evolution, working towards models that can not only mimic human language but also align closely with our understanding of facts and reality.


Understanding the nature of hallucinations in LLMs is crucial as these models become more prevalent in our lives. It reminds us to maintain a healthy scepticism, even as we marvel at the latest advancements. As we progress, the hope is that LLMs will become not only more linguistically adept but also more factually reliable, reducing the gap between artificial and human intelligence.




Written by Kara Webb, Head of Technology at England Marketing and OpenAI Development Panel Member

