Claim Ledger
Template (copy/paste for new claims):
CLM-XXXX: <claim statement>
- Status: candidate | verified | contested
- Confidence: low | medium | high
- Supports:
- <source_id> @ <HH:MM:SS>
- Dependencies: (optional)
- CLM-XXXX
- Notes: (optional)
- <ambiguities, alternate phrasings, context>
CLM-0001: A mind can be described as an adaptive control system that builds and uses a world-model to predict and steer outcomes.
- Status: verified
- Confidence: medium
- Supports:
- talk: Joscha Bach - Agency in an Age of Machines - How AI Will Change Humanity @ 00:02:07
- talk: Joscha Bach: How to Build a Conscious Artificial Agent @ 00:02:11
- talk: Mind from Matter (Lecture By Joscha Bach) @ 00:24:59
- Notes:
- Treat as a functional description, not a claim about substrate.
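The control framing in CLM-0001 can be sketched in code. This is a minimal illustration of the functional description only, not Bach's formalism; the names `run_agent`, `true_gain`, and `model_gain` are assumptions for the sketch. The agent's world-model is a single learned gain: it predicts the effect of its actions, acts to steer the state toward a goal, and adapts the model from prediction error.

```python
# Illustrative sketch of CLM-0001 (not Bach's formalism): a mind as an
# adaptive controller. The world-model here is one learned parameter;
# the agent predicts outcomes, steers toward a goal, and updates the
# model whenever its prediction misses.

def run_agent(goal: float, steps: int = 50, true_gain: float = 2.0,
              lr: float = 0.1) -> tuple[float, float]:
    state, model_gain = 0.0, 1.0        # world-model starts out wrong
    for _ in range(steps):
        # steer: pick the action the model predicts will reach the goal
        action = (goal - state) / model_gain
        predicted = state + model_gain * action
        state = state + true_gain * action       # environment responds
        # adapt: gradient step on the squared prediction error
        model_gain += lr * (state - predicted) * action
    return state, model_gain
```

Even with a mis-specified model, the predict-act-adapt loop drives the state to the goal, which is the point of the claim: control runs through the model, and learning repairs the model through control.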
CLM-0002: Representation is required for control; action selection depends on internal models rather than direct access to reality.
- Status: verified
- Confidence: medium
- Supports:
- talk: Joscha Bach: How to Build a Conscious Artificial Agent @ 00:06:33
- talk: Synthetic Sentience @ 00:18:19
- talk: Mind from Matter (Lecture By Joscha Bach) @ 00:26:35
- Notes:
- Distinguish representation from mere storage.
CLM-0003: Learning updates both model and policy; understanding is compression that supports control, not memorization.
- Status: verified
- Confidence: medium
- Supports:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:42:39
- interview: Self Learning Systems - Joscha Bach @ 00:28:59
- talk: The Ghost in the Machine @ 00:19:06
- Notes:
- Emphasize compression as usable abstraction.
CLM-0004: Valence is not identical to reward; reward is a signal, value is a learned structure in the model.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Ghost in the Machine @ 00:37:19
- talk: Computational Meta-Psychology @ 00:24:50
- interview: How to Engineer Consciousness | Joscha Bach and Lex Fridman @ 00:07:03
- Notes:
- Keep phenomenology distinct from signal.
CLM-0005: Emotions function as control modulators that reconfigure policy, attention, and learning in specific contexts.
- Status: verified
- Confidence: medium
- Supports:
- interview: Joscha Bach on the Bible, emotions and how AI could be wonderful. @ 00:59:16
- talk: How a Spiritual Worldview Explains Consciousness - Joscha Bach @ 00:40:05
- Notes:
- Avoid collapsing emotion into a single scalar.
CLM-0006: The self is a self-model constructed for stability and coordination; it is a representation, not an entity behind the representation.
- Status: verified
- Confidence: medium
- Supports:
- talk: Self Models of Loving Grace @ 00:01:14
- talk: Mind from Matter (Lecture By Joscha Bach) @ 00:16:40
- talk: Joscha Bach: The AI perspective on Consciousness @ 00:15:44
- Notes:
- Guard against metaphysical reification.
CLM-0007: Consciousness is a functional interface that coordinates mental subsystems and stabilizes global coherence.
- Status: verified
- Confidence: medium
- Supports:
- talk: Synthetic Sentience @ 00:11:15
- talk: Self Models of Loving Grace @ 00:10:34
- Notes:
- Often described as a conductor role.
CLM-0008: Consciousness is temporally anchored to the present and can be described as a second-order perception (perception of perception).
- Status: verified
- Confidence: medium
- Supports:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:34:08
- interview: Geist und Künstliche Intelligenz - Vortrag von Dr. Dr. h. c. Joscha Bach @ 00:31:19
- Notes:
- Connect to observer construction.
CLM-0009: There is no straightforward behavioral test for consciousness; it requires interpretation of internal organization rather than performance alone.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Machine Consciousness Hypothesis @ 00:20:01
- Notes:
- Contrast with the Turing test.
CLM-0010: Intelligence and consciousness are not identical; consciousness is a particular organization of intelligence.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Machine Consciousness Hypothesis @ 00:20:39
- Notes:
- Useful disambiguation in AI contexts.
CLM-0011: In biological systems, consciousness likely appears early as a learning scaffold that stabilizes self-organization.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Machine Consciousness Hypothesis @ 00:18:17
- interview: "We Are All Software" - Joscha Bach @ 00:27:38
- talk: Self Models of Loving Grace @ 00:10:34
- Notes:
- Hypothesis-level claim.
CLM-0012: Machine consciousness is a hypothesis about reproducing self-organization conditions, not about current task performance.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Machine Consciousness Hypothesis @ 00:32:23
- Notes:
- Separate biological and machine hypotheses.
CLM-0013: Social cognition and language extend individual models into multi-agent coordination systems.
- Status: verified
- Confidence: medium
- Supports:
- interview: Joscha Bach - Why Your Thoughts Aren't Yours. @ 01:06:10
- interview: Joscha Bach Λ Karl Friston: Ai, Death, Self, God, Consciousness @ 01:58:03
- Notes:
- Language as shared compression.
CLM-0014: Culture and norms are control structures that regulate multi-agent coordination at scale.
- Status: verified
- Confidence: medium
- Supports:
- interview: Joscha Bach - Why Your Thoughts Aren't Yours. @ 01:06:10
- talk: The Ghost in the Machine @ 00:28:18
- Notes:
- Treat as emergent control layers.
CLM-0015: Alignment is best framed as value learning plus governance, not just optimization of a fixed utility.
- Status: verified
- Confidence: medium
- Supports:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 01:29:16
- talk: Joscha Bach - Agency in an Age of Machines - How AI Will Change Humanity @ 00:49:42
- Notes:
- Downstream implication, not core of the mind model.
CLM-0016: Attention is resource allocation and selection; it is distinct from consciousness though often correlated.
- Status: verified
- Confidence: medium
- Supports:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:10:33
- talk: Synthetic Sentience @ 00:12:04
- Notes:
- Prevents common conflation.
CLM-0017: Mechanism, function, and phenomenology must be kept distinct to avoid category errors.
- Status: verified
- Confidence: high
- Supports:
- talk: The Machine Consciousness Hypothesis @ 00:20:01
- Notes:
- Central methodological rule.
CLM-0018: The explanatory gap arises when mechanism is expected to directly yield phenomenology without a functional bridge.
- Status: verified
- Confidence: low
- Supports:
- talk: The Machine Consciousness Hypothesis @ 00:01:37
- Notes:
- Needs careful wording to avoid overclaim.
CLM-0019: A world-model is a constructed representation optimized for control, not a literal copy of the world.
- Status: verified
- Confidence: medium
- Supports:
- talk: Mind from Matter (Lecture By Joscha Bach) @ 00:24:59
- talk: Synthetic Sentience @ 00:18:19
- Notes:
- Essential for model-territory distinction.
CLM-0020: The agent is an abstraction over the organism; organisms implement agents but are not identical to them as models.
- Status: verified
- Confidence: medium
- Supports:
- talk: Joscha Bach - Agency in an Age of Machines - How AI Will Change Humanity @ 00:02:07
- Notes:
- Helps avoid category errors in biology.
CLM-0021: Consciousness can be treated as an operator on mental states that increases global coherence.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Machine Consciousness Hypothesis @ 00:26:51
- talk: The Machine Consciousness Hypothesis @ 00:27:12
- Notes:
- This is a functional claim about what consciousness does in the architecture.
CLM-0022: Coherence is a control-relevant property: it makes the system's subsystems agree enough to act as one agent rather than as competing local processes.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Machine Consciousness Hypothesis @ 00:27:12
- talk: Synthetic Sentience @ 00:11:15
- Notes:
- Often expressed with the "conductor" metaphor.
CLM-0023: The self is a virtual construct (a model of being a person) that supports control, explanation, and coordination.
- Status: verified
- Confidence: medium
- Supports:
- interview: Joscha Bach Λ Karl Friston: Ai, Death, Self, God, Consciousness @ 01:59:34
- talk: Self Models of Loving Grace @ 00:01:14
- Notes:
- Phrase carefully: "fiction/virtual" in the technical sense of a representational construct.
CLM-0024: A narrative self is often a post-hoc explanatory artifact; it can be coherent without being a faithful causal story.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Ghost in the Machine @ 00:32:21
- Notes:
- This supports a consistent distinction between control explanations and introspective narratives.
CLM-0025: Minimal consciousness can be framed as perception of perception; higher-order awareness can include explicit awareness of self and even deconstruction of the observer model.
- Status: verified
- Confidence: low
- Supports:
- talk: The Machine Consciousness Hypothesis @ 00:16:38
- talk: Synthetic Sentience @ 00:40:15
- Notes:
- Phrase carefully: the ordering is a pedagogical device, not a formal theory of levels.
CLM-0026: Computationalist functionalism is an epistemological stance: objects are constructed from observations and defined by the functional differences their presence makes in how the world evolves.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Machine Consciousness Hypothesis @ 00:08:24
- Notes:
- This is about how we construct objects/knowledge, not the slogan "today's computers think."
CLM-0027: Strong computationalism takes implementable representational languages to have bounded expressive power (Church-Turing thesis), and treats "hypercomputational objects" as non-referenceable for realizable systems.
- Status: verified
- Confidence: low
- Supports:
- talk: Synthetic Sentience @ 00:08:58
- Notes:
- "Bounded expressive power" here is about realizable systems/languages, not a claim that all computation is simple.
CLM-0028: The experienced world is a generated model ("a dream of reality"); the self is a character inside that model rather than a physics-level entity.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Ghost in the Machine @ 00:04:02
- interview: Joscha Bach Λ Karl Friston: Ai, Death, Self, God, Consciousness @ 01:59:34
- Notes:
- The point is representationalism (model vs territory), not mysticism.
CLM-0029: Consciousness can be described as a coherence-maintaining control process that monitors disharmonies among subsystems and coordinates them (the "conductor of a mental orchestra" metaphor).
- Status: verified
- Confidence: medium
- Supports:
- interview: "We Are All Software" - Joscha Bach @ 00:19:16
- talk: Self Models of Loving Grace @ 00:10:34
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:41:09
- Notes:
- This frames consciousness by function: what it does in the architecture.
CLM-0030: Consciousness is a reflexive second-order perception that creates a bubble of "nowness" (the subjective present) as a stabilized model state.
- Status: verified
- Confidence: medium
- Supports:
- interview: "We Are All Software" - Joscha Bach @ 00:19:16
- talk: The Machine Consciousness Hypothesis @ 00:30:34
- Notes:
- "Nowness" is a modeled present, not a metaphysical property of time.
CLM-0031: Consciousness can be framed as a real-time control model of attention (attention schema): it tracks what the system attends to and supports global regulation/coherence.
- Status: verified
- Confidence: medium
- Supports:
- talk: Mind from Matter (Lecture By Joscha Bach) @ 00:19:13
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:41:09
- talk: Self Models of Loving Grace @ 00:10:40
- Notes:
- This is presented as convergent with (and partly inspired by) attention schema theory.
CLM-0032: Conscious suffering is interpreted as dysregulation (a failure of the mind to maintain coherent regulation); "more consciousness" corresponds to building better models for regulation.
- Status: verified
- Confidence: low
- Supports:
- interview: "We Are All Software" - Joscha Bach @ 00:45:15
- Notes:
- This is a functional framing of suffering as a control/constraint-violation phenomenon.
CLM-0033: The "conductor/coherence" framing is presented as convergent with multiple consciousness theories (e.g., global workspace, Cartesian theater, attention schema) rather than as a unique doctrine.
- Status: verified
- Confidence: medium
- Supports:
- interview: "We Are All Software" - Joscha Bach @ 00:19:16
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:41:09
- Notes:
- Treat as a bibliographic/navigation claim: he points to multiple traditions.
CLM-0034: Free will is framed as the ability to do what one believes is right; it is not opposed to determinism or coercion. Its opposite is compulsion (acting despite knowing better).
- Status: verified
- Confidence: high
- Supports:
- talk: The Ghost in the Machine @ 00:36:40
- Notes:
- This reframes free-will debates as category errors about control architectures.
CLM-0035: LLMs can simulate aspects of a conscious interaction partner (including self-report) when prompted, without being required to maintain an underlying coherent first-person simulation to produce outputs.
- Status: verified
- Confidence: low
- Supports:
- interview: "We Are All Software" - Joscha Bach @ 00:25:22
- Notes:
- This is about functional requirements: what the system must implement to output the behavior.
CLM-0036: Some "agentic" behavior attributed to LLMs can be understood as a simulated agent: the simulation can act as a stand-in for agency even if the underlying driver is next-token prediction.
- Status: verified
- Confidence: low
- Supports:
- interview: Joscha Bach - Why Your Thoughts Aren't Yours. @ 01:06:10
- Notes:
- The claim is not "LLMs have no agency," but that agency can be simulated as a layer.
CLM-0037: Typical software runs on a substrate engineered to be highly deterministic and is forced to follow its algorithm exactly; this differs from biological self-organization and matters for machine-consciousness conjectures.
- Status: verified
- Confidence: low
- Supports:
- interview: Joscha Bach - Why Your Thoughts Aren't Yours. @ 00:57:15
- Notes:
- This motivates why "same performance" need not imply "same internal organization."
CLM-0038: A system can be conscious without a first-person perspective; consciousness is often experienced at the interface between self-model and world-model but can occur in dreams and other states.
- Status: verified
- Confidence: medium
- Supports:
- talk: Joscha Bach: The AI perspective on Consciousness @ 00:08:46
- Notes:
- This distinguishes "consciousness" from "first-person perspective" as separate representational conditions.
CLM-0039: Valence is what gives meaning and color to perception; it depends on preferences and works alongside norms to shape control objectives.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Ghost in the Machine @ 00:27:50
- Notes:
- This is a bridge between perception (model constraints) and control (what matters).
CLM-0040: The meaning of internal signals (pain/pleasure) is not imposed by physics; it is generated by the mind's interpretation and valuation.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Ghost in the Machine @ 00:56:55
- Notes:
- This is consistent with the "dream/model" framing: meaning is a modeled property.
CLM-0041: Pleasure can be interpreted as moving a feedback loop closer to a target value; pain corresponds to being off-target and noticing the error signal.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Ghost in the Machine @ 00:59:20
- Notes:
- This treats hedonic tone as a control-theoretic signal rather than a primitive metaphysical quality.
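The feedback framing in CLM-0041 can be made concrete with a small sketch (illustrative only, a control-theoretic reading rather than a model of phenomenology; the function name `valence_trace` is an assumption): treat "pain" as the magnitude of the off-target error and "pleasure" as the step-to-step reduction of that error.

```python
# Illustrative sketch of CLM-0041: for a regulated variable, "pain"
# tracks how far the system is off-target, and "pleasure" tracks how
# much the feedback loop moved toward the target since the last step.

def valence_trace(readings: list[float], target: float) -> list[tuple[float, float]]:
    """Return (pain, pleasure) pairs, one per reading."""
    trace: list[tuple[float, float]] = []
    prev_error = None
    for value in readings:
        error = abs(value - target)          # pain ~ off-target magnitude
        gain = 0.0 if prev_error is None else max(0.0, prev_error - error)
        trace.append((error, gain))          # pleasure ~ error reduction
        prev_error = error
    return trace
```

For readings converging on the target, pleasure is positive while the error falls and drops to zero once the target is held; moving away from the target registers as pain with no pleasure.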
CLM-0042: Reward hacking or self-modification is a generic risk for sufficiently capable agents; even humans can learn to modify their own reward function.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Ghost in the Machine @ 00:38:16
- Notes:
- This motivates caution around "alignment by fixed reward."
CLM-0043: Social systems have reward infrastructures (e.g., money/finance) that regulate resource allocation; misaligned incentives can dominate behavior at scale.
- Status: verified
- Confidence: medium
- Supports:
- talk: The Ghost in the Machine @ 00:50:31
- Notes:
- Used as a cautionary analogy for AI governance and institutional design.
CLM-0044: Norms can be described as beliefs "without priors": desired truths that constrain behavior beyond immediate pleasure/pain, enabling coordination.
- Status: verified
- Confidence: low
- Supports:
- talk: The Ghost in the Machine @ 00:27:50
- Notes:
- The phrasing is rhetorically sharp; treat it as a functional characterization.
CLM-0045: The first-person perspective is a representational content; it is not strictly required for consciousness in all states.
- Status: verified
- Confidence: medium
- Supports:
- talk: Joscha Bach: The AI perspective on Consciousness @ 00:08:46
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:41:09
- Notes:
- This is a key disambiguation for the "consciousness vs self" chapters.
CLM-0046: "Enlightenment" can be described as recognizing that the self/observer is a representation and being able to deconstruct that representational stance (rather than as moral purity).
- Status: verified
- Confidence: medium
- Supports:
- interview: "We Are All Software" - Joscha Bach @ 00:15:15
- Notes:
- Use this as a model-level framing: enlightenment is a representational insight about the self-model, not an ethical badge.
CLM-0047: For machine consciousness, what may be missing in current AI is "the dream within the dream": a model of the act of perceiving (second-order perception / observer construction), not just high-resolution generated content.
- Status: verified
- Confidence: low
- Supports:
- talk: Joscha Bach, Will Hahn, Elan Barenholtz | MIT Computational Philosophy Club @ 00:01:58
- Notes:
- This connects the "generated dreams" metaphor to the observer/second-order-perception framing.
CLM-0048: A desirable outcome for AI is to extend human agency (and potentially consciousness) onto new substrates, rather than building "silicon golems" that dominate and control humans.
- Status: verified
- Confidence: medium
- Supports:
- interview: Existential Hope Salon: Joscha Bach x Lou de K @ 00:52:19
- Notes:
- Keep this as a design/policy framing: capability is not destiny; architecture and incentives matter.
CLM-0049: A hopeful social trajectory is "universal basic intelligence": each person has access to personal AI that increases competence and can be understood as extending the user's cognitive agency (rather than merely providing income).
- Status: verified
- Confidence: medium
- Supports:
- talk: Joscha Bach: The Operation of Consciousness | AGI-25 @ 00:50:27
- Notes:
- Phrase carefully: "extending consciousness into AI" is his rhetoric; interpret in the book as extending agency via shared modeling/control loops.