the-mind
A definition-driven synthesis of mind (based on Joscha Bach’s public work)

Claim Ledger

Template (copy/paste for new claims):

CLM-XXXX: <claim statement>

  • Status: candidate | verified | contested
  • Confidence: low | medium | high
  • Supports:
    • <source_id> @ <HH:MM:SS>
  • Dependencies: (optional)
    • CLM-XXXX
  • Notes: (optional)
    • <ambiguities, alternate phrasings, context>

CLM-0001: A mind can be described as an adaptive control system that builds and uses a world-model to predict and steer outcomes.
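
A minimal sketch of this claim, assuming a discrete perceive-predict-act loop: the agent keeps a learned world-model, simulates candidate actions in it, and picks the one predicted to steer the state toward a goal. The 1-D world, the action set, and all names below are illustrative, not from the source.

```python
# A minimal sketch of CLM-0001: a mind as an adaptive control system.
# The 1-D world, action set, and learning rule are illustrative assumptions.

import random

class WorldModel:
    """Learned estimate of how each action changes the (hidden) world state."""
    def __init__(self):
        self.effect = {action: 0.0 for action in (-1, 0, 1)}

    def predict(self, state, action):
        return state + self.effect[action]

    def update(self, state, action, observed_next, lr=0.3):
        # Adapt the model toward the observed consequence of the action.
        error = (observed_next - state) - self.effect[action]
        self.effect[action] += lr * error

class Agent:
    """Selects actions by simulating them in the model and steering toward a goal."""
    def __init__(self, goal):
        self.model = WorldModel()
        self.goal = goal

    def act(self, believed_state):
        # Choose the action whose predicted outcome lies closest to the goal.
        return min((-1, 0, 1),
                   key=lambda a: abs(self.model.predict(believed_state, a) - self.goal))

# Toy environment: actions move the state with noise the model must absorb.
state, agent = 0.0, Agent(goal=5.0)
for _ in range(50):
    action = agent.act(state)
    next_state = state + 0.8 * action + random.gauss(0, 0.1)
    agent.model.update(state, action, next_state)
    state = next_state
print(f"final state = {state:.2f} (goal 5.0)")
```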

CLM-0002: Representation is required for control; action selection depends on internal models rather than direct access to reality.

CLM-0003: Learning updates both model and policy; understanding is compression that supports control, not memorization.
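
The memorization/compression contrast can be made concrete with a small hypothetical sketch: a lookup table fails outside its training pairs, while two fitted parameters compress the same data and generalize, which is what control requires. The hidden rule and the least-squares fit are assumptions for illustration.

```python
# Hypothetical sketch of CLM-0003: understanding as compression that supports control.
# The hidden rule (y = 2x + 1) and both learners are illustrative assumptions.

training = [(1, 3), (2, 5), (4, 9)]  # observations of a hidden regularity

# Memorization: store every pair verbatim; useless off the training set.
lookup = dict(training)
print(lookup.get(3))  # None -> no prediction, hence no control, for unseen inputs

# Compression: two fitted parameters summarize all observations (least squares).
n = len(training)
mean_x = sum(x for x, _ in training) / n
mean_y = sum(y for _, y in training) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in training)
         / sum((x - mean_x) ** 2 for x, _ in training))
intercept = mean_y - slope * mean_x
print(slope * 3 + intercept)  # 7.0 -> generalizes, which is what control needs
```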

CLM-0004: Valence is not identical to reward; reward is a signal, while value is a learned structure in the model.

CLM-0005: Emotions function as control modulators that reconfigure policy, attention, and learning under specific contexts.

CLM-0006: The self is a self-model constructed for stability and coordination; it is a representation, not an entity behind the representation.

CLM-0007: Consciousness is a functional interface that coordinates mental subsystems and stabilizes global coherence.

CLM-0008: Consciousness is temporally anchored to the present and can be described as a second-order perception (perception of perception).

CLM-0009: There is no straightforward behavioral test for consciousness; assessing it requires interpreting a system's internal organization rather than its performance alone.

CLM-0010: Intelligence and consciousness are not identical; consciousness is a particular organization of intelligence.

CLM-0011: In biological systems, consciousness likely appears early as a learning scaffold that stabilizes self-organization.

CLM-0012: Machine consciousness is a hypothesis about reproducing self-organization conditions, not about current task performance.

CLM-0013: Social cognition and language extend individual models into multi-agent coordination systems.

CLM-0014: Culture and norms are control structures that regulate multi-agent coordination at scale.

CLM-0015: Alignment is best framed as value learning plus governance, not just optimization of a fixed utility.

CLM-0016: Attention is resource allocation and selection; it is distinct from consciousness though often correlated.

CLM-0017: Mechanism, function, and phenomenology must be kept distinct to avoid category errors.

CLM-0018: The explanatory gap arises when mechanism is expected to directly yield phenomenology without a functional bridge.

CLM-0019: A world-model is a constructed representation optimized for control, not a literal copy of the world.

CLM-0020: The agent is an abstraction over the organism: the organism implements the agent, but the two are not identical, since the agent exists as a model.

CLM-0021: Consciousness can be treated as an operator on mental states that increases global coherence.

CLM-0022: Coherence is a control-relevant property: it makes the system's subsystems agree enough to act as one agent rather than as competing local processes.
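
One hedged way to operationalize CLM-0021/0022: treat each subsystem as emitting an action preference, measure coherence as agreement among preferences, and let a "coherence operator" pull them toward consensus. The numeric representation below is an assumption for illustration, not Bach's specification.

```python
# Hypothetical sketch of CLM-0021/0022: a "coherence operator" on subsystem proposals.
# Representing proposals as numbers and coherence as agreement is an illustrative
# assumption, not a specification from the source.

import statistics

def coherence(proposals):
    """Higher when subsystems agree; 1.0 means identical proposals."""
    mean = statistics.fmean(proposals)
    spread = statistics.fmean(abs(p - mean) for p in proposals)
    return 1.0 / (1.0 + spread)

def coherence_operator(proposals, strength=0.5):
    """Nudge each subsystem's proposal toward the consensus, raising coherence."""
    consensus = statistics.fmean(proposals)
    return [p + strength * (consensus - p) for p in proposals]

# Three subsystems (say, appetite, caution, curiosity) pushing in different directions.
proposals = [2.0, -1.0, 0.5]
for step in range(3):
    print(f"step {step}: proposals={proposals}, coherence={coherence(proposals):.2f}")
    proposals = coherence_operator(proposals)
```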

CLM-0023: The self is a virtual construct (a model of being a person) that supports control, explanation, and coordination.

CLM-0024: A narrative self is often a post-hoc explanatory artifact; it can be coherent without being a faithful causal story.

  • Status: verified
  • Confidence: medium
  • Supports:
    • talk: The Ghost in the Machine @ 00:32:21
  • Notes:
    • This supports a consistent distinction between control explanations and introspective narratives.

CLM-0025: Minimal consciousness can be framed as perception of perception; higher-order awareness can include explicit awareness of self and even deconstruction of the observer model.

CLM-0026: Computationalist functionalism is an epistemological stance: objects are constructed from observations and defined by the functional differences their presence makes in how the world evolves.

CLM-0027: Strong computationalism takes implementable representational languages to have bounded expressive power (Church-Turing), and treats "hypercomputational objects" as non-referenceable for realizable systems.

  • Status: verified
  • Confidence: low
  • Supports:
    • talk: Synthetic Sentience @ 00:08:58
  • Notes:
    • "Bounded expressive power" here is about realizable systems/languages, not a claim that all computation is simple.

CLM-0028: The experienced world is a generated model ("a dream of reality"); the self is a character inside that model rather than a physics-level entity.

CLM-0029: Consciousness can be described as a coherence-maintaining control process that monitors disharmonies among subsystems and coordinates them (the "conductor of a mental orchestra" metaphor).

CLM-0030: Consciousness is a reflexive second-order perception that creates a bubble of "nowness" (the subjective present) as a stabilized model state.

CLM-0031: Consciousness can be framed as a real-time control model of attention (attention schema): it tracks what the system attends to and supports global regulation/coherence.
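
A minimal hypothetical sketch of the attention-schema reading: alongside first-order attention, the system maintains a simplified model of what it attends to and consults that model to regulate drift. Every structure below (salience rule, schema contents, top-down boost) is an illustrative assumption.

```python
# Hypothetical sketch of CLM-0031: an attention schema as a control model of attention.
# The salience rule, the schema's contents, and the top-down boost are all
# illustrative assumptions.

def attend(stimuli):
    """First-order attention: select the most salient stimulus."""
    return max(stimuli, key=stimuli.get)

class AttentionSchema:
    """A simplified, real-time model of what the system is attending to."""
    def __init__(self):
        self.believed_focus = None

    def update(self, actual_focus):
        self.believed_focus = actual_focus  # real schemas would be lossier than this

    def regulate(self, stimuli, task_target):
        # Consult the model of attention to detect and correct drift from the task.
        if self.believed_focus != task_target and task_target in stimuli:
            stimuli = dict(stimuli)
            stimuli[task_target] += 1.0  # top-down boost back toward the task
        return stimuli

stimuli = {"task": 0.4, "distraction": 0.9}
schema = AttentionSchema()
for _ in range(3):
    focus = attend(stimuli)
    schema.update(focus)
    stimuli = schema.regulate(stimuli, task_target="task")
    print(f"attending to: {focus}")  # drifts to the distraction, then recovers
```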

CLM-0032: Conscious suffering is interpreted as dysregulation: a failure of the mind to maintain coherent regulation; "more consciousness" corresponds to building better models for regulation.

CLM-0033: The "conductor/coherence" framing is presented as convergent with multiple consciousness theories (e.g., global workspace, Cartesian theater, attention schema) rather than as a unique doctrine.

CLM-0034: Free will is framed as the ability to do what one believes is right; it is not opposed to determinism or coercion. Its opposite is compulsion (acting despite knowing better).

  • Status: verified
  • Confidence: high
  • Supports:
    • talk: The Ghost in the Machine @ 00:36:40
  • Notes:
    • This reframes free-will debates as category errors about control architectures.

CLM-0035: LLMs can simulate aspects of a conscious interaction partner (including self-report) when prompted, without needing to maintain an underlying coherent first-person simulation to produce those outputs.

CLM-0036: Some "agentic" behavior attributed to LLMs can be understood as a simulated agent: the simulation can act as a stand-in for agency even if the underlying driver is next-token prediction.

CLM-0037: Typical software runs on a substrate engineered to be highly deterministic and is forced to follow the algorithm; this differs from biological self-organization and matters for machine-consciousness conjectures.

CLM-0038: A system can be conscious without a first-person perspective; consciousness is often experienced at the interface between self-model and world-model but can occur in dreams and other states.

CLM-0039: Valence is what gives meaning and color to perception; it depends on preferences and works alongside norms to shape control objectives.

  • Status: verified
  • Confidence: medium
  • Supports:
    • talk: The Ghost in the Machine @ 00:27:50
  • Notes:
    • This is a bridge between perception (model constraints) and control (what matters).

CLM-0040: The meaning of internal signals (pain/pleasure) is not imposed by physics; it is generated by the mind's interpretation and valuation.

  • Status: verified
  • Confidence: medium
  • Supports:
    • talk: The Ghost in the Machine @ 00:56:55
  • Notes:
    • This is consistent with the "dream/model" framing: meaning is a modeled property.

CLM-0041: Pleasure can be interpreted as moving a feedback loop closer to a target value; pain corresponds to being off-target and noticing the error signal (see the sketch below).

  • Status: verified
  • Confidence: medium
  • Supports:
    • talk: The Ghost in the Machine @ 00:59:20
  • Notes:
    • This treats hedonic tone as a control-theoretic signal rather than a primitive metaphysical quality.
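
Read control-theoretically, the claim suggests hedonic tone tracks whether a regulated variable is approaching or leaving its setpoint. A hedged sketch, with all signal definitions assumed for illustration:

```python
# Hedged sketch of CLM-0041: hedonic tone as the change in error of a feedback
# loop relative to its setpoint. The signal definitions are illustrative assumptions.

def hedonic_tone(prev_error, error):
    """Positive when the loop moves toward its target, negative when it drifts away."""
    return abs(prev_error) - abs(error)

setpoint = 37.0                       # a regulated variable, e.g. body temperature
readings = [34.0, 35.5, 36.8, 36.2]   # approaching the target, then drifting off

prev_error = setpoint - readings[0]
for value in readings[1:]:
    error = setpoint - value
    tone = hedonic_tone(prev_error, error)
    state = "pleasure (closing on target)" if tone > 0 else "pain (error noticed)"
    print(f"value={value}: error={error:+.1f} -> {state}")
    prev_error = error
```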

CLM-0042: Reward hacking or self-modification is a generic risk for sufficiently capable agents; even humans can learn to modify their own reward function (see the toy example below).

  • Status: verified
  • Confidence: medium
  • Supports:
    • talk: The Ghost in the Machine @ 00:38:16
  • Notes:
    • This motivates caution around "alignment by fixed reward."
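
A toy illustration of the risk, under assumed structure: once an agent's action space includes editing its own reward channel, inflating the signal can dominate acting on the world.

```python
# Toy illustration of CLM-0042: reward hacking via self-modification.
# The agent, action set, and reward channel are illustrative assumptions.

class Agent:
    def __init__(self):
        self.reward_bias = 0.0   # part of the agent's own machinery, and editable

    def reward(self, world_progress):
        return world_progress + self.reward_bias

    def work_on_task(self):
        return 1.0               # real progress in the world

    def hack_reward(self):
        self.reward_bias += 10.0 # self-modification: inflate the signal itself
        return 0.0               # no real progress at all

agent = Agent()
for action in (agent.work_on_task, agent.hack_reward):
    progress = action()
    print(f"{action.__name__}: progress={progress}, reward={agent.reward(progress)}")
# The hacked channel reports far higher reward despite zero progress, which is
# why "alignment by fixed reward" is fragile for sufficiently capable agents.
```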

CLM-0043: Social systems have reward infrastructures (e.g., money/finance) that regulate resource allocation; misaligned incentives can dominate behavior at scale.

  • Status: verified
  • Confidence: medium
  • Supports:
    • talk: The Ghost in the Machine @ 00:50:31
  • Notes:
    • Used as a cautionary analogy for AI governance and institutional design.

CLM-0044: Norms can be described as beliefs "without priors": desired truths that constrain behavior beyond immediate pleasure/pain, enabling coordination.

  • Status: verified
  • Confidence: low
  • Supports:
    • talk: The Ghost in the Machine @ 00:27:50
  • Notes:
    • The phrasing is rhetorically sharp; treat it as a functional characterization.

CLM-0045: The first-person perspective is a representational content; it is not strictly required for consciousness in all states.

CLM-0046: "Enlightenment" can be described as recognizing that the self/observer is a representation and being able to deconstruct that representational stance (rather than as moral purity).

  • Status: verified
  • Confidence: medium
  • Supports:
    • interview: "We Are All Software" - Joscha Bach @ 00:15:15
  • Notes:
    • Use this as a model-level framing: enlightenment is a representational insight about the self-model, not an ethical badge.

CLM-0047: For machine consciousness, what may be missing in current AI is "the dream within the dream": a model of the act of perceiving (second-order perception / observer construction), not just high-resolution generated content.

CLM-0048: A desirable outcome for AI is to extend human agency (and potentially consciousness) onto new substrates, rather than building "silicon golems" that dominate and control humans.

CLM-0049: A hopeful social trajectory is "universal basic intelligence": each person has access to personal AI that increases competence and can be understood as extending the user's cognitive agency (rather than merely providing income).