On AI hallucinations

Bartek Kuban

11/19/2024

2 min read


What Does It Mean When AI “Hallucinates”?

What exactly are we talking about when we say AI “hallucinates”?

It’s a term that’s become ubiquitous, thrown around with a mix of fascination and frustration. But the reality is far more nuanced.

A Closer Look at the Term

Consider an example: an AI confidently stating that humans first landed on the Moon in 1979. We immediately label this a “hallucination,” but is it really? Or is it simply a reflection of gaps or inconsistencies in the training data?

An LLM doesn’t generate text at random; it follows probabilistic patterns learned from its training data. Unless we could examine the entire dataset the model was trained on (we can’t), we cannot say for certain what caused that output.
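To make that concrete, here is a toy sketch of next-token generation. The distribution below is entirely invented for illustration: if contradictory or flawed documents in the training data were to skew these learned probabilities, the model would confidently emit the wrong year, with no notion of "truth" involved at any step.

```python
import random

# Invented next-token distribution for the prompt
# "Humans first landed on the Moon in 19__".
# These numbers are illustrative only, not from any real model.
next_token_probs = {
    "69": 0.90,  # the correct continuation dominates a clean dataset
    "79": 0.07,  # contradictory sources could inflate a wrong year
    "59": 0.03,
}

def sample_next_token(probs: dict, rng: random.Random) -> str:
    """Sample a continuation the way an LLM does: in proportion to
    learned probability, not by consulting any source of truth."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

year = "19" + sample_next_token(next_token_probs, random.Random(0))
print(year)  # usually "1969" -- but sometimes, confidently, not
```

Whether the output is "correct" is invisible to the sampling step itself; it depends entirely on the distribution the training data produced.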

Hallucinations in Real-World Applications

In practice, what we usually call “hallucinations” are often a symptom of how we contextualize LLMs and construct systems around them.

Context Matters More Than the Model

Think about an application that’s built on top of a language model. When such a system generates an incorrect statement — say, it offers a customer a discount that violates company policy — we’re not looking at an inherent LLM problem. We’re seeing a specific failure in how additional context was structured and integrated.
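As a minimal sketch of that idea: the names below (`DISCOUNT_POLICY`, `validate_offer`) are hypothetical, not any real API. The point is that policy lives in the application layer, so the application layer, not the model, is the right place to enforce it.

```python
from dataclasses import dataclass

# Hypothetical policy context owned by the application, not the model.
DISCOUNT_POLICY = {"max_percent": 15}

@dataclass
class Offer:
    discount_percent: float

def validate_offer(offer: Offer) -> Offer:
    """Clamp a model-proposed discount to what policy actually allows.

    The model may confidently propose anything; the surrounding system
    is responsible for reconciling that output with business rules.
    """
    allowed = DISCOUNT_POLICY["max_percent"]
    if offer.discount_percent > allowed:
        return Offer(discount_percent=allowed)
    return offer

# The model "hallucinates" a 40% discount; the guardrail catches it.
safe = validate_offer(Offer(discount_percent=40))
print(safe)  # Offer(discount_percent=15)
```

When the guardrail fires, that is a signal about missing or poorly structured context, which is exactly the kind of diagnostic the next section argues for.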

From Counting Hallucinations to Understanding Them

The most interesting exploration in applied AI isn’t counting hallucinations, but understanding why they occurred. Are they actually random noise, or do they point to specific contextual deficiencies? At Quickchat, we’re convinced that what are often called “hallucinations” aren’t fundamental limitations of language models, but signposts directing us to improve how we structure information around these models.

Shifting the Focus: Pragmatic Solutions

This perspective shifts our focus from theoretically “solving” hallucinations to pragmatically improving application design. The challenge is no longer about creating a perfect language model, but about conscientiously solving real user problems.

The Real Measure of an AI System

The real measure of an AI system might not be whether it hallucinates, but how quickly and effectively we can identify, understand, and correct these issues. It’s about building systems that are not just intelligent, but intelligently self-correcting.