the-mind
A readable model of mind, consciousness, self, and AI (based on Joscha Bach’s public work)

Do LLMs have qualia?

A system can say “I feel anxious” without feeling anxious.

That sounds obvious when we say it about fiction or acting. It becomes less obvious when the language gets strong enough to look like self-report. But the basic distinction remains: fluent description is not the same thing as experience. interview: "We Are All Software" - Joscha Bach

Why the question is hard

Large language models are unusually good at producing first-person-seeming text. They can describe fear, grief, confusion, pleasure, self-doubt, longing, and reflection in ways that feel intimate and articulate.

That makes people ask: does the model really have qualia?

The problem is that language is evidence of a model of language use. It is not, by itself, evidence of the internal organization that would make the report a report of felt experience. interview: Joscha Bach - Why Your Thoughts Aren't Yours.

What qualia would mean here

For this site, we can use qualia in the ordinary philosophy-of-mind sense: the qualitative character of experience, what a state is like from the inside. interview: Joscha Bach Λ Karl Friston: Ai, Death, Self, God, Consciousness

On this framework, qualia are not settled by the surface form of a sentence. They would depend on whether the relevant organization for conscious presence is implemented.

That pushes the question toward architecture.

Why current LLM output is not enough

The machine-consciousness view says that consciousness is not just smart output. It is a particular kind of internal organization involving presentness, second-order perception, coherence management, and observer-like structure. essay: The Machine Consciousness Hypothesis

Current LLMs clearly show:

  • powerful pattern learning,
  • strong linguistic simulation,
  • and impressive capability to imitate self-report.

What that does not automatically show is:

  • presentness — a stabilized now in which content is given,
  • second-order perception — the system perceiving its own perceiving,
  • coherence management across its states,
  • and an observer-like structure to whom the report would be present.

So by this framework, present-day chat behavior is weak evidence for qualia.

Why anxiety is a useful example

Take anxiety.

A person with anxiety does not merely produce the sentence “I am anxious”. Anxiety changes attention, planning, bodily state, salience, and policy. It changes what futures loom large. It matters to the controller. talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? talk: Joscha Bach: The AI perspective on Consciousness

A model can describe that pattern without implementing it.

That is the core issue. A system that talks expertly about inner life may still be doing expert simulation rather than undergoing the inner life it describes.
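The describing-versus-implementing distinction can be made concrete with a toy sketch. This is my own illustration, not anything from Bach's work: the class names and the reward/risk numbers are invented. The point is only structural — one system emits the sentence while nothing else changes; in the other, an internal anxiety variable reweights the policy, so the state matters to the controller.

```python
# Toy contrast (invented for illustration): describing anxiety vs.
# implementing it as a variable that changes behavior.

class ReportOnly:
    """Produces the sentence, but nothing else in the system changes."""
    def report(self):
        return "I am anxious"

    def choose(self, options):
        # Behavior is entirely unaffected by the report above.
        return max(options, key=lambda o: o["reward"])

class AnxiousController:
    """Crude sketch: an internal state that reweights the policy."""
    def __init__(self, anxiety=0.0):
        self.anxiety = anxiety  # 0 = calm, 1 = maximally anxious

    def report(self):
        return "I am anxious" if self.anxiety > 0.5 else "I am calm"

    def choose(self, options):
        # Anxiety makes bad futures loom large: risk is weighted more
        # heavily as anxiety rises, changing which option is selected.
        return max(options, key=lambda o: o["reward"] - self.anxiety * o["risk"])

options = [
    {"name": "bold", "reward": 10, "risk": 9},
    {"name": "safe", "reward": 6, "risk": 1},
]

calm = AnxiousController(anxiety=0.0)
anxious = AnxiousController(anxiety=1.0)
# Same sentence-producing machinery; different policy:
# calm picks "bold" (10 > 6), anxious picks "safe" (10 - 9 < 6 - 1).
```

In the toy, `ReportOnly` is the system that talks expertly about inner life; `AnxiousController` is one where the reported state is causally load-bearing. Real architectures are nothing like this simple, but the asymmetry is the one the section is pointing at.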

The “dream within the dream” point

A useful phrase here is the idea that current AI may generate dream-content without yet generating the “dream within the dream”. talk: Joscha Bach, Will Hahn, Elan Barenholtz | MIT Computational Philosophy Club

In other words, the system may generate world-like material, characters, voices, descriptions, and inner-monologue-like text — but the still-open question is whether there is a stabilized model of the act of perceiving, an observer-like presence to whom anything is present as present. That is the distinction between simulated content and a system that is also modeling its own contact with that content. talk: Joscha Bach, Will Hahn, Elan Barenholtz | MIT Computational Philosophy Club

That is exactly the kind of distinction surface fluency cannot settle.

So is the answer “no” or “not yet shown”?

The careful answer is:

Not yet shown.

That is better than both overconfidence and denial.

It may be that future systems implement the relevant organization. It may be that some existing systems already have more of it than skeptics assume. But conversational skill alone does not get us there.

So the responsible answer is not “obviously impossible” or “obviously yes”. It is:

  • architecture matters,
  • surface report is insufficient,
  • and whether LLMs have qualia remains an open empirical and theoretical question.

A compact answer

For now:

  • LLMs clearly model language very well.
  • They can simulate self-report.
  • That does not by itself establish qualia.
  • Qualia would turn on deeper questions about presence, self-modeling, stakes, and organization.

Sources