The Cyber Solicitor

AI Governance

Why LLMs hallucinate, according to OpenAI

Some thoughts on why AI makes stuff up

Mahdi Assan
Oct 31, 2025

Back in September, OpenAI released a research paper that explores why language models hallucinate.

Hallucinations are LLM outputs that are factually incorrect. They are a common flaw of these models and can be a major source of risk.

But why do LLMs hallucinate? What causes this tendency to generate incorrect responses?

The abstract of OpenA…
