The Cyber Solicitor

AI Governance

Why LLMs hallucinate, according to OpenAI

Some thoughts on why AI makes stuff up

Mahdi Assan
Oct 31, 2025

Back in September, OpenAI released a research paper, "Why Language Models Hallucinate", that explores why these models make things up.

Hallucinations are outputs from an LLM that sound plausible but are factually incorrect: ask a model for a case citation, for example, and it may confidently invent one that does not exist. They are a common flaw of these models and can be a major source of risk.

But why do LLMs hallucinate? What causes this tendency to generate incorrect responses?

The abstract of OpenA…
