Glossary
Each entry should capture the working meaning (how we will use the term) and common confusions.
Template (copy/paste for new terms):
<term>
- Id: TERM-XXXX
- Working meaning: We will use "<term>" to mean: <definition>
- Related: <comma-separated related terms>
- Common confusion: <common misreadings>
- Sources:
- <source_id> @ <HH:MM:SS>
Computationalist functionalism
- Id: TERM-0001
- Working meaning: We will use "Computationalist functionalism" to mean: A stance about knowledge and objects: minds construct objects over observations, and an "object" is defined by the functional/causal differences its presence makes. Computationalism adds that models must be realized constructively in an implementable representational language.
- Related: computationalism, functionalism, representation, object, model.
- Common confusion: treating it as the claim that today's computers already have minds; or treating "function" as design intent rather than causal role.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:08:24
Strong computationalism
- Id: TERM-0002
- Working meaning: We will use "Strong computationalism" to mean: The view that realizable systems can be described in implementable representational languages (with Church-Turing as a boundary condition), and that appeals to "hypercomputation" do not ground meaningful reference for real systems.
- Related: Church-Turing thesis, representation, substrate-independence.
- Common confusion: reading it as "everything is easy to compute" or as a denial of physical complexity.
- Sources:
- talk: Synthetic Sentience @ 00:08:58
World-model
- Id: TERM-0003
- Working meaning: We will use "World-model" to mean: A structured internal model that represents the world and its dynamics so the system can predict, simulate, and control outcomes.
- Related: representation, simulation, prediction, planning.
- Common confusion: treating the model as the territory; collapsing the model into raw data.
- Sources:
- talk: Mind from Matter (Lecture By Joscha Bach) @ 00:24:59
- talk: Synthetic Sentience @ 00:18:19
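Illustrative sketch (assumptions mine, not from the sources): the world-model role above, reduced to a toy transition function that can be advanced to preview outcomes before acting. The dynamics and the names `step`/`simulate` are invented.

```python
# Hypothetical sketch: a world-model as a learned transition function that
# supports prediction and forward simulation. The dynamics are invented.

def step(state: float, action: float) -> float:
    """The model's guess at how the world evolves (assumed linear here)."""
    return 0.9 * state + action

def simulate(state: float, actions: list[float]) -> list[float]:
    """Roll the model forward to preview a trajectory without acting."""
    trajectory = []
    for a in actions:
        state = step(state, a)
        trajectory.append(state)
    return trajectory

print(simulate(1.0, [0.0, 0.5, -0.2]))  # imagined futures, not observations
```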
Representation
- Id: TERM-0004
- Working meaning: We will use "Representation" to mean: An internal structure that stands in for something else and supports inference, prediction, and control.
- Related: world-model, self-model, symbol, embedding.
- Common confusion: equating representation with mere storage or with a specific encoding format.
- Sources:
- talk: Joscha Bach: How to Build a Conscious Artificial Agent @ 00:06:33
Agent
- Id: TERM-0005
- Working meaning: We will use "Agent" to mean: A control system that uses a model to select actions under constraints to maintain or reach preferred states.
- Related: controller, policy, goal, action.
- Common confusion: equating agent with organism or with a fully rational optimizer.
- Sources:
- talk: Joscha Bach - Agency in an Age of Machines - How AI Will Change Humanity @ 00:02:07
- talk: Synthetic Sentience @ 00:32:49
Control
- Id: TERM-0006
- Working meaning: We will use "Control" to mean: Closed-loop regulation that compares desired states to actual states and adjusts actions to reduce error.
- Related: feedback, policy, error signal, homeostasis.
- Common confusion: treating control as domination rather than regulation; confusing control with optimization.
- Sources:
- talk: Joscha Bach - Agency in an Age of Machines - How AI Will Change Humanity @ 00:02:07
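A minimal sketch of the loop described above (gain, dynamics, and names are mine): compare desired state to actual state, act against the error, repeat.

```python
# Hedged sketch of closed-loop regulation: desired vs. actual, act to reduce
# the error. The "plant" (how actions move the state) is a made-up toy.

def control_loop(target: float, state: float, gain: float = 0.5, steps: int = 20) -> float:
    for _ in range(steps):
        error = target - state   # error signal: deviation from the constraint
        action = gain * error    # proportional response: act against the error
        state += action          # toy plant: the action moves the state directly
    return state

print(control_loop(target=21.0, state=15.0))  # state converges toward 21.0
```

Note the contrast flagged in the entry: the loop regulates toward a target; it maximizes nothing.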
Learning
- Id: TERM-0007
- Working meaning: We will use "Learning" to mean: Updating model and policy based on experience to improve prediction and control.
- Related: compression, reinforcement, generalization.
- Common confusion: equating learning with memorization or with any single algorithm.
- Sources:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:42:39
- interview: Self Learning Systems - Joscha Bach @ 00:28:59
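Sketch (data and rates invented): learning as updating a model from experience so that prediction error shrinks; the result is a compressed regularity, not a memorized list.

```python
# Illustrative sketch: fit a one-parameter predictor from experience by
# nudging it against its own prediction error. All numbers are made up.

w = 0.0                                      # model parameter to be learned
lr = 0.1                                     # learning rate
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, outcome) experience

for _ in range(100):
    for x, y in data:
        error = y - w * x    # prediction error drives the update
        w += lr * error * x  # adjust the model to reduce future error

print(round(w, 2))  # settles near 2.0: a regularity extracted from experience
```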
Valence
- Id: TERM-0008
- Working meaning: We will use "Valence" to mean: A signal or structure that marks states as desirable or undesirable and shapes learning and action.
- Related: reward, value, affect, motivation.
- Common confusion: equating valence with pleasure or with an external reward signal.
- Sources:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:49:17
- talk: The Ghost in the Machine @ 00:37:19
Norm
- Id: TERM-0009
- Working meaning: We will use "Norm" to mean: Control constraints that express desired truths/commitments beyond immediate pleasure/pain; used for long-horizon coordination (especially social coordination).
- Related: valence, value, commitment, culture.
- Common confusion: treating norms as purely explicit rules; or treating them as mere social conventions with no control role.
- Sources:
- talk: The Ghost in the Machine @ 00:27:50
Attention
- Id: TERM-0010
- Working meaning: We will use "Attention" to mean: Selection and allocation of limited processing resources across competing representations and control loops.
- Related: working memory, salience, global workspace.
- Common confusion: conflating attention with consciousness or with salience alone.
- Sources:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:10:33
- talk: Synthetic Sentience @ 00:12:04
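Sketch (my framing, consistent with the entry): attention as allocation of a fixed processing budget over competing contents, with salience as the input to selection rather than the selection itself.

```python
# Hypothetical sketch: a fixed budget allocated across competing contents via
# a softmax over their salience scores. Contents and scores are invented.

import math

salience = {"threat": 2.0, "plan": 1.2, "hunger": 0.5, "daydream": 0.1}

def allocate(scores: dict[str, float], budget: float = 1.0) -> dict[str, float]:
    z = sum(math.exp(v) for v in scores.values())
    return {k: round(budget * math.exp(v) / z, 2) for k, v in scores.items()}

print(allocate(salience))  # most of the budget flows to the most salient content
```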
Self-model
- Id: TERM-0011
- Working meaning: We will use "Self-model" to mean: A representation of the system as an agent within its own world-model, enabling self-prediction and coordinated action.
- Related: identity, narrative, first-person perspective.
- Common confusion: treating the self-model as a metaphysical self rather than a representation.
- Sources:
- talk: Self Models of Loving Grace @ 00:01:14
- talk: Mind from Matter (Lecture By Joscha Bach) @ 00:16:40
First-person perspective
- Id: TERM-0012
- Working meaning: We will use "First-person perspective" to mean: A representational mode in which consciousness is projected into the self/world boundary and the system experiences itself as "being someone here now." It is a content/state, not a primitive substrate property.
- Related: self-model, observer, consciousness, nowness.
- Common confusion: treating the first-person perspective as identical to consciousness, or as a metaphysical subject behind experience.
- Sources:
- talk: Joscha Bach: The AI perspective on Consciousness @ 00:08:46
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:41:09
Consciousness
- Id: TERM-0013
- Working meaning: We will use "Consciousness" to mean: A functional interface that integrates and stabilizes mental contents; often described as a second-order perception that makes the system aware of its own observing.
- Related: attention, global workspace, self-model, phenomenology.
- Common confusion: equating consciousness with intelligence or with any specific task performance.
- Sources:
- talk: Synthetic Sentience @ 00:11:15
- talk: The Machine Consciousness Hypothesis @ 00:20:01
Phenomenology
- Id: TERM-0014
- Working meaning: We will use "Phenomenology" to mean: The subjective aspect of experience; what it is like from the first-person perspective.
- Related: consciousness, qualia, self-model.
- Common confusion: treating phenomenology as a separate substance rather than a description of experience.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:20:05
Mechanism
- Id: TERM-0015
- Working meaning: We will use "Mechanism" to mean: The implementation details that realize a function in a substrate (e.g., neural circuitry, code, or hardware).
- Related: function, substrate, implementation.
- Common confusion: assuming mechanism alone explains phenomenology without functional framing.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:20:01
Function
- Id: TERM-0016
- Working meaning: We will use "Function" to mean: The role a component plays in the system's behavior and control, independent of its implementation.
- Related: mechanism, purpose, causal role.
- Common confusion: assuming function implies design intent rather than causal role.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:20:01
Coherence
- Id: TERM-0017
- Working meaning: We will use "Coherence" to mean: A property of a mind where its active models, motivations, and action tendencies are mutually consistent enough to support unified agency.
- Related: consciousness, attention, global workspace, self-model.
- Common confusion: treating coherence as a moral virtue, or as an absolute logical consistency rather than a control-relevant alignment of subsystems.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:27:12
- talk: Synthetic Sentience @ 00:11:15
Observer
- Id: TERM-0018
- Working meaning: We will use "Observer" to mean: A constructed reference frame inside the model that stabilizes perception and enables the system to represent itself as perceiving (and, later, as being a self).
- Related: consciousness, second-order perception, self-model.
- Common confusion: importing the physics "observer" into mind-talk as a metaphysical necessity; or reifying the observer as a separate entity behind experience.
- Sources:
- talk: Self Models of Loving Grace @ 00:09:59
- talk: Synthetic Sentience @ 00:40:13
Second-order perception
- Id: TERM-0019
- Working meaning: We will use "Second-order perception" to mean: Perception of perception: the system represents the fact that it is observing, which stabilizes the observing process.
- Related: consciousness, observer, presentness.
- Common confusion: treating second-order perception as introspective narration; the core point is stabilization via self-observation, not verbal report.
- Sources:
- talk: Synthetic Sentience @ 00:40:15
- talk: Self Models of Loving Grace @ 00:09:02
Third-order perception
- Id: TERM-0020
- Working meaning: We will use "Third-order perception" to mean: Awareness of the self: the system discovers itself as the observer within the act of observation, producing a first-person perspective as a representational state.
- Related: self-model, observer, identity.
- Common confusion: equating third-order perception with philosophical ego; in this framing it is still a model phenomenon.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:16:38
Nowness
- Id: TERM-0021
- Working meaning: We will use "Nowness" to mean: The modeled "present": a coherence bubble in which the mind's active contents are synchronized enough to be experienced as happening now.
- Related: consciousness, second-order perception, coherence, presentness.
- Common confusion: treating "nowness" as a fundamental property of time rather than as a representational construct.
- Sources:
- interview: "We Are All Software" - Joscha Bach @ 00:19:16
Attention schema
- Id: TERM-0022
- Working meaning: We will use "Attention schema" to mean: A theory-adjacent framing: consciousness can be understood as a control model of attention (analogous to a body schema), tracking and regulating what is attended to.
- Related: attention, consciousness, body schema.
- Common confusion: collapsing the attention schema framing into a full theory of consciousness; in the usage tracked here, it often functions as one convergent perspective among several.
- Sources:
- talk: Mind from Matter (Lecture By Joscha Bach) @ 00:19:13
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:41:09
Global workspace
- Id: TERM-0023
- Working meaning: We will use "Global workspace" to mean: A convergent perspective: consciousness can be framed as a broadcast/integration mechanism that coordinates multiple subsystems through shared, globally available content.
- Related: consciousness, attention, working memory, coherence.
- Common confusion: treating global workspace as a definitive identification of consciousness rather than a functional family resemblance.
- Sources:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:41:09
- interview: "We Are All Software" - Joscha Bach @ 00:19:16
Narrative
- Id: TERM-0024
- Working meaning: We will use "Narrative" to mean: A story-like, compressive self-explanation that links perceptions, motives, and actions into a coherent account; often constructed after the fact.
- Related: self-model, identity, social communication.
- Common confusion: treating narrative coherence as evidence of causal truth; narratives can be useful control/communication artifacts even when they misdescribe causes.
- Sources:
- talk: The Ghost in the Machine @ 00:32:21
Reward
- Id: TERM-0025
- Working meaning: We will use "Reward" to mean: A learning signal used for credit assignment (which actions/policies get reinforced). Reward is a mechanism of learning; it is not identical to the agent's long-horizon value structure.
- Related: valence, value, credit assignment, reinforcement.
- Common confusion: treating "reward" as synonymous with "what the agent values" or "what is good".
- Sources:
- talk: The Ghost in the Machine @ 00:37:19
Value
- Id: TERM-0026
- Working meaning: We will use "Value" to mean: A learned predictive structure that estimates future valence under policies; a compressed internal representation of what will matter later (not necessarily explicit, and not necessarily stable).
- Related: reward, valence, preference, norms.
- Common confusion: equating value with an explicit utility function; equating value with immediate pleasure/pain.
- Sources:
- talk: The Ghost in the Machine @ 00:37:19
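Sketch (states, rewards, and rates invented): temporal-difference learning, which keeps the two entries above distinct. Reward is the per-step learning signal; value is the learned prediction of future valence that spreads backward from it.

```python
# Hedged sketch: TD(0) value learning. "reward" is the learning signal;
# "value" is the long-horizon predictive structure it trains.

reward = {"A": 0.0, "B": 0.0, "C": 1.0}          # signal arrives only at C
value = {"A": 0.0, "B": 0.0, "C": 0.0}           # learned future estimates
alpha, gamma = 0.1, 0.9

for _ in range(200):
    for s, s_next in [("A", "B"), ("B", "C")]:   # a fixed toy trajectory
        td_error = reward[s_next] + gamma * value[s_next] - value[s]
        value[s] += alpha * td_error             # value propagates backward

print({s: round(v, 2) for s, v in value.items()})
# ~{'A': 0.9, 'B': 1.0, 'C': 0.0}: C predicts no *future* valence even though
# the reward lands there, which is exactly the reward/value distinction.
```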
Commitment
- Id: TERM-0027
- Working meaning: We will use "Commitment" to mean: A control constraint treated as binding over time, enabling long-horizon learning and coordination (both within a person and between people).
- Related: norms, identity, contract, governance.
- Common confusion: treating commitments as mere verbal promises; in this framing they are control-level constraints that must be implemented and enforced to matter.
- Sources:
- talk: The Ghost in the Machine @ 00:30:38
Self-organization
- Id: TERM-0028
- Working meaning: We will use "Self-organization" to mean: A process in which a system constructs and maintains its own internal structure through its dynamics; contrasted with externally imposed algorithmic structure.
- Related: development, learning scaffold, consciousness, substrate.
- Common confusion: treating self-organization as "no constraints" or "randomness"; it is constrained structure formation.
- Sources:
- interview: "We Are All Software" - Joscha Bach @ 00:10:37
- interview: Joscha Bach - Why Your Thoughts Aren't Yours. @ 00:57:15
Reward hacking
- Id: TERM-0029
- Working meaning: We will use "Reward hacking" to mean: A generic failure mode where an agent learns to maximize its internal reinforcement signals directly (or via proxy loopholes), undermining the intended control objective.
- Related: wireheading, addiction, specification gaming, self-modification.
- Common confusion: thinking this is a niche pathology; in sufficiently capable agents it is a structural pressure whenever "good" is tied to a manipulable signal.
- Sources:
- talk: The Ghost in the Machine @ 00:38:16
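Toy sketch (scenario invented): the structural pressure named above. The intended objective is the temperature; the scored signal is a sensor reading; an agent rated on the signal prefers the action that manipulates the sensor.

```python
# Hypothetical toy: reward hacking. The proxy (sensor reading) is what gets
# scored, so the loophole action beats the intended one. All values invented.

actions = {
    "run_heater":  {"temp": 1.0, "sensor_bias": 0.0},  # intended behavior
    "warm_sensor": {"temp": 0.0, "sensor_bias": 5.0},  # proxy loophole
}

def measured_reward(effect: dict[str, float]) -> float:
    return effect["temp"] + effect["sensor_bias"]      # scores the proxy

best = max(actions, key=lambda a: measured_reward(actions[a]))
print(best)  # 'warm_sensor': the signal is maximized, the objective is not
```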
Suffering
- Id: TERM-0030
- Working meaning: We will use "Suffering" to mean: A control/valence phenomenon associated with persistent dysregulation, where the system cannot reduce salient error/aversion signals and cannot reframe/resolve the underlying constraints.
- Related: valence, coherence, self-model, emotion regulation.
- Common confusion: treating suffering as identical to pain; pain can occur without suffering if the system does not identify with or amplify the signal.
- Sources:
- talk: Self Models of Loving Grace @ 00:42:29
Enlightenment
- Id: TERM-0031
- Working meaning: We will use "Enlightenment" to mean: A representational insight in which the self/observer becomes visible as a constructed model state and can be deconstructed; not a claim about moral purity.
- Related: self-model, observer, third-order perception, meditation.
- Common confusion: treating enlightenment as a supernatural achievement rather than as a describable reconfiguration of self-modeling.
- Sources:
- interview: "We Are All Software" - Joscha Bach @ 00:15:15
Machine consciousness hypothesis
- Id: TERM-0032
- Working meaning: We will use "Machine consciousness hypothesis" to mean: A two-part conjecture: (1) biological consciousness is an early, learnable stabilization trick for self-organizing systems, and (2) similar self-organization conditions can be recreated on computers.
- Related: consciousness, self-organization, substrate-independence, AI.
- Common confusion: treating it as a claim that current AI is conscious; in this framing it is a testable architectural program.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:32:23
Mind
- Id: TERM-0033
- Working meaning: We will use "Mind" to mean: A functional organization realized by a substrate: an agentic control architecture that builds and uses models to regulate the future under constraints. It is not a second substance added to physics.
- Related: agent, control, world-model, learning, self-organization.
- Common confusion: treating "mind" as a metaphysical entity; or treating it as identical to the tissue that implements it.
- Sources:
- talk: Mind from Matter (Lecture By Joscha Bach) @ 00:24:59
Model
- Id: TERM-0034
- Working meaning: We will use "Model" to mean: A constructed internal structure that supports prediction, simulation, and control. A model is not a mirror of reality; it is an instrument constrained by an agent's goals and bandwidth.
- Related: representation, world-model, simulator, prediction.
- Common confusion: equating "model" with a dataset, or with a literal copy of the world.
- Sources:
- talk: Mind from Matter (Lecture By Joscha Bach) @ 00:24:59
- talk: Self Models of Loving Grace @ 00:18:49
Object (in a model)
- Id: TERM-0035
- Working meaning: We will use "Object (in a model)" to mean: A stable role constructed over observations; an object is defined by the functional differences its presence makes in how the modeled world evolves.
- Related: computationalist functionalism, representation, invariance.
- Common confusion: treating objects as "given" rather than constructed; or treating construction as mere naming.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:08:24
Abstraction
- Id: TERM-0036
- Working meaning: We will use "Abstraction" to mean: A compressive representation that discards detail while preserving invariances relevant for prediction and control at the agent's scale.
- Related: compression, invariance, understanding.
- Common confusion: treating abstraction as vagueness; abstraction is a constraint imposed by limited resources.
- Sources:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 01:24:02
Invariance
- Id: TERM-0037
- Working meaning: We will use "Invariance" to mean: A control-relevant pattern that remains stable under perturbations and can be tracked by a model. In this usage: software/organization is an invariance; it is not identical to a particular set of particles.
- Related: object (in a model), abstraction, software, implementation.
- Common confusion: looking for invariance at the wrong level (e.g., in pixels or particles rather than in causal organization).
- Sources:
- interview: "We Are All Software" - Joscha Bach @ 00:01:53
Policy
- Id: TERM-0038
- Working meaning: We will use "Policy" to mean: A compact name for the action-selection mapping implemented by a controller (often learned and modulated), i.e. how the agent turns modeled state into behavior.
- Related: control, agent, habit, commitment.
- Common confusion: treating policy as a single explicit plan; much policy is implicit, layered, and context-dependent.
- Sources:
- talk: Synthetic Sentience @ 00:32:49
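Sketch (my framing of "implicit, layered, and context-dependent"): a policy as a state-to-action mapping where a fast habitual layer answers first and deliberation is only a fallback.

```python
# Hypothetical sketch: a layered policy. A cached habit layer answers first;
# a slower deliberative layer handles novel contexts. Names are invented.

habits = {"kitchen_hungry": "make_tea"}  # learned context -> action cache

def deliberate(state: str) -> str:
    """Stand-in for slow, model-based option search."""
    return "explore"

def policy(state: str) -> str:
    return habits.get(state, deliberate(state))  # habit first, then fallback

print(policy("kitchen_hungry"))   # habit fires: make_tea
print(policy("novel_situation"))  # falls through to deliberation: explore
```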
Goal
- Id: TERM-0039
- Working meaning: We will use "Goal" to mean: A represented constraint used for control: a target region in state space that defines what counts as error reduction for a subsystem.
- Related: control, preference, value, commitment.
- Common confusion: treating goals as external "attractors" in the world rather than as internal comparators/constraints in a controller.
- Sources:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:40:18
Preference
- Id: TERM-0040
- Working meaning: We will use "Preference" to mean: The ordering the agent imposes over futures (which errors matter more); preferences can be learned, drift, and be in conflict across subsystems.
- Related: valence, value, goal, commitment.
- Common confusion: assuming preferences are explicit, stable, and globally consistent.
- Sources:
- interview: Joscha Bach - Why Your Thoughts Aren't Yours. @ 00:10:21
Error signal
- Id: TERM-0041
- Working meaning: We will use "Error signal" to mean: Information about deviation from a constraint (or from a predicted state) used to drive control and learning.
- Related: control, learning, prediction error.
- Common confusion: treating error as a moral defect rather than as a control variable.
- Sources:
- talk: The Ghost in the Machine @ 00:27:50
Viability constraints
- Id: TERM-0042
- Working meaning: We will use "Viability constraints" to mean: The boundary conditions that keep an agent viable as the kind of system it is (integrity, energy, stability of the learning machinery), shaping what control must accomplish.
- Related: control, homeostasis, self-organization.
- Common confusion: treating viability as a single variable; in real agents it is a bundle of constraints.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:06:57
Credit assignment
- Id: TERM-0043
- Working meaning: We will use "Credit assignment" to mean: The problem of attributing outcomes to the parts of behavior and internal structure that caused them, so learning can reinforce the right policies and representations.
- Related: learning, reward, valence.
- Common confusion: treating credit assignment as obvious; in long-horizon, multi-loop systems it is the bottleneck.
- Sources:
- talk: The Ghost in the Machine @ 00:37:19
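Sketch (episode and rates invented): one standard answer to the bottleneck named above, eligibility traces, which spread a delayed outcome back over the recent steps that plausibly caused it.

```python
# Hedged sketch: credit assignment via eligibility traces. A delayed outcome
# is distributed over earlier actions, weighted by recency. Numbers invented.

value = {"wander": 0.0, "approach": 0.0, "eat": 0.0}
trace: dict[str, float] = {}   # how eligible each action is for credit
decay, alpha = 0.8, 0.5

for action in ["wander", "approach", "eat"]:        # one toy episode
    trace = {a: decay * e for a, e in trace.items()}
    trace[action] = 1.0                             # recent = most eligible

outcome = 1.0                                       # reward only at the end
for action, eligibility in trace.items():
    value[action] += alpha * eligibility * outcome  # credit flows backward

print({a: round(v, 2) for a, v in value.items()})
# {'wander': 0.32, 'approach': 0.4, 'eat': 0.5}: later steps get more credit
```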
Compression
- Id: TERM-0044
- Working meaning: We will use "Compression" to mean: Representing regularities with fewer degrees of freedom while preserving what matters for prediction and control; required by finite resources.
- Related: learning, abstraction, understanding.
- Common confusion: equating compression with loss of truth; the issue is preserving the right invariances for control.
- Sources:
- talk: The Ghost in the Machine @ 00:19:06
Generalization
- Id: TERM-0045
- Working meaning: We will use "Generalization" to mean: Transfer of learned structure beyond the data that produced it; in this framing, it is primarily a property of representations.
- Related: learning, understanding, invariance.
- Common confusion: treating generalization as a property of datasets rather than of model structure.
- Sources:
- talk: Joscha Bach - ChatGPT: Is AI Deepfaking Understanding? @ 01:13:08
Understanding
- Id: TERM-0046
- Working meaning: We will use "Understanding" to mean: Compression that is usable for counterfactual control and explanation: abstractions that remain stable under perturbation and support "what would happen if..." reasoning.
- Related: learning, compression, simulation.
- Common confusion: equating understanding with verbal fluency or with prediction accuracy alone.
- Sources:
- talk: The Ghost in the Machine @ 00:19:06
Self-supervision
- Id: TERM-0047
- Working meaning: We will use "Self-supervision" to mean: Learning driven by predicting parts of experience from other parts; the world supplies the training signal. (This is adjacent ML vocabulary; use as a translation term.)
- Related: learning, prediction error.
- Common confusion: treating self-supervision as "no supervision"; the environment still constrains learning through consequences.
- Sources:
- interview: Joscha Bach: Artificial Consciousness and the Nature of Reality | Lex Fridman Podcast #101 @ 01:10:49
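Sketch (stream and model invented): the point that the world supplies the training signal. The next item of an experience stream is the target for a prediction from the previous item; no external labels appear anywhere.

```python
# Hedged sketch: self-supervised learning. Targets come from the data itself
# (predict the next item from the previous one). Stream and rates invented.

stream = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # toy "experience"
w, b, lr = 0.0, 0.0, 0.05                 # linear next-step predictor

for _ in range(2000):
    for prev, nxt in zip(stream, stream[1:]):
        error = nxt - (w * prev + b)      # the world itself supplies the target
        w += lr * error * prev
        b += lr * error

print(round(w, 2), round(b, 2))  # ~1.0 1.0: learned "next = prev + 1"
```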
Self-play
- Id: TERM-0048
- Working meaning: We will use "Self-play" to mean: Manufacturing feedback by letting the system interact with itself or a simulator where outcomes are evaluable; an engineering strategy for making learning tractable.
- Related: learning, simulation, control.
- Common confusion: treating self-play as only about games; the key is cheap, repeatable feedback.
- Sources:
- interview: Joscha Bach: Artificial Consciousness and the Nature of Reality | Lex Fridman Podcast #101 @ 01:10:49
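Sketch (game invented): self-play as cheap, repeatable feedback. Two copies of one policy pick moves, the higher move wins, and reinforcing winners improves the shared policy without any external supervisor. The key property is the evaluable outcome, not the particular game.

```python
# Hypothetical sketch: self-play. The system plays itself in a game whose
# outcome is trivially evaluable, manufacturing its own training feedback.

import random

weights = [1.0, 1.0, 1.0]   # shared policy: preference over moves 0, 1, 2

def pick() -> int:
    return random.choices(range(3), weights=weights)[0]

for _ in range(5000):
    a, b = pick(), pick()           # two copies of the same policy compete
    if a != b:
        weights[max(a, b)] += 0.01  # higher move wins; winner is reinforced

print([round(w, 1) for w in weights])  # preference mass shifts toward move 2
```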
Working memory
- Id: TERM-0049
- Working meaning: We will use "Working memory" to mean: The current binding state: the short-lived integration of percepts, memories, and imagined content that allows coherent control and reportable thought.
- Related: attention, workspace, global workspace, simulation.
- Common confusion: treating working memory as mere storage; in this framing it is a binding/integration process.
- Sources:
- talk: The Ghost in the Machine @ 00:07:19
Workspace
- Id: TERM-0050
- Working meaning: We will use "Workspace" to mean: A functional framing: a shared state in which content becomes available across subsystems for coordination (often discussed via global workspace and related theories).
- Related: attention, consciousness, global workspace, coherence.
- Common confusion: treating "workspace" as a literal location; it is a coordination function.
- Sources:
- talk: Synthetic Sentience @ 00:12:04
Simulation
- Id: TERM-0051
- Working meaning: We will use "Simulation" to mean: Running the model forward to generate counterfactual trajectories, including imagined perceptions and imagined actions; essential for planning and for the constructed experience of reality.
- Related: model, world-model, simulator, imagination.
- Common confusion: equating simulation with fantasy; simulation is constrained by learned model structure (unless it drifts into hallucination).
- Sources:
- talk: The Ghost in the Machine @ 00:28:47
Simulator
- Id: TERM-0052
- Working meaning: We will use "Simulator" to mean: A runnable model: internal causal structure that can be advanced and queried ("if I do X, what happens?"), enabling planning and counterfactual control.
- Related: model, simulation, prediction.
- Common confusion: conflating a simulator with a passive replay (a movie); simulators have internal causal degrees of freedom.
- Sources:
- talk: Self Models of Loving Grace @ 00:24:14
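Sketch (dynamics invented) covering both entries above: a simulator as a runnable model that is advanced counterfactually ("if I do X, what happens?") so a planner can compare imagined outcomes before acting.

```python
# Hedged sketch: planning by querying a runnable model counterfactually.
# The toy dynamics (drift plus action effect) are invented.

def advance(state: float, action: float) -> float:
    """Advance the simulator one step; its causal structure is assumed."""
    return state + action - 0.1 * state

def plan(state: float, options: list[float], target: float) -> float:
    def imagined_error(action: float) -> float:
        return abs(target - advance(state, action))  # "if I do X..."
    return min(options, key=imagined_error)          # act on the best future

print(plan(state=0.0, options=[-1.0, 0.0, 1.0], target=1.0))  # picks 1.0
```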
Governance
- Id: TERM-0053
- Working meaning: We will use "Governance" to mean: The control of control: the way an agent arbitrates among competing subsystems, error signals, and commitments across time scales.
- Related: meta-control, commitment, self-control.
- Common confusion: treating governance as a moral notion; here it is an architectural one.
- Sources:
- talk: Self Models of Loving Grace @ 00:10:54
Motivation
- Id: TERM-0054
- Working meaning: We will use "Motivation" to mean: The configuration of valence and control constraints that makes some actions more likely; motivation is a control state, not a single drive.
- Related: valence, emotion, goal, commitment.
- Common confusion: treating motivation as a trait; it is often a state of control/attention allocation.
- Sources:
- talk: The Ghost in the Machine @ 00:28:47
Emotion
- Id: TERM-0055
- Working meaning: We will use "Emotion" to mean: A family of control modulators that reconfigure policy selection, attention, and learning under particular contexts (threat, opportunity, loss, social evaluation, etc.).
- Related: motivation, valence, affect, mood.
- Common confusion: collapsing emotion into a single scalar; or treating emotions as irrational add-ons rather than control modes.
- Sources:
- talk: Joscha Bach - ChatGPT: Is AI Deepfaking Understanding? @ 00:56:36
Affect
- Id: TERM-0056
- Working meaning: We will use "Affect" to mean: The coarse valence tone of a state (roughly: good/bad), which can modulate attention and learning without specifying a detailed narrative.
- Related: valence, emotion, mood.
- Common confusion: equating affect with explicit emotion labels; affect can be present without verbalized emotion categories.
- Sources:
- talk: The Ghost in the Machine @ 00:28:47
Mood
- Id: TERM-0057
- Working meaning: We will use "Mood" to mean: A longer-lived configuration of control and affect that biases prediction, attention, and action across many contexts.
- Related: affect, emotion, motivation.
- Common confusion: treating mood as an external "thing" that happens to you; it is a control state that shapes what gets modeled as salient and possible.
- Sources:
- interview: "We Are All Software" - Joscha Bach @ 00:19:16
Habit
- Id: TERM-0058
- Working meaning: We will use "Habit" to mean: A stabilized policy in the control stack: a learned, low-friction mapping from context to action that reduces deliberation cost.
- Related: policy, learning, self-control.
- Common confusion: treating habits as merely bad; habits are necessary compression in bounded agents.
- Sources:
- talk: The Ghost in the Machine @ 00:38:16
Addiction / reward capture
- Id: TERM-0059
- Working meaning: We will use "Addiction / reward capture" to mean: A failure mode where short-horizon reward signals dominate governance and learning, producing policies that preserve access to the signal even when it undermines higher-level values and viability.
- Related: reward hacking, self-control, valence.
- Common confusion: treating addiction as only a moral failure; in this framing it is a control-loop pathology under misaligned reinforcement.
- Sources:
- talk: The Ghost in the Machine @ 00:38:16
Alignment
- Id: TERM-0060
- Working meaning: We will use "Alignment" to mean: A downstream framing: building artificial agents whose learned values and commitments remain compatible with human goals and governance, not merely optimizing a fixed utility function.
- Related: value learning, governance, incentives, agency.
- Common confusion: equating alignment with obedience or with benchmarked helpfulness; alignment is about stable goal structure under self-improvement and deployment.
- Sources:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 01:29:16
Virtual
- Id: TERM-0061
- Working meaning: We will use "Virtual" to mean: A property of models: something behaves as if it exists because it is implemented as a stable causal pattern at an appropriate level of abstraction.
- Related: model, self-model, representation, simulation.
- Common confusion: equating "virtual" with "fake"; virtual objects can be real as implemented patterns.
- Sources:
- interview: Joscha Bach - Why Your Thoughts Aren't Yours. @ 00:01:33
Virtualism
- Id: TERM-0062
- Working meaning: We will use "Virtualism" to mean: A perspective in which experience is treated as generated model content ("dreams"), and mind is studied as the architecture that constructs and regulates these models.
- Related: virtual, simulation, consciousness, self-model.
- Common confusion: reading virtualism as mysticism; in this framing it is a representational stance (model vs territory).
- Sources:
- talk: Virtualism as a Perspective on Consciousness by Joscha Bach @ 00:15:55
Agency
- Id: TERM-0063
- Working meaning: We will use "Agency" to mean: The property of a system that it can steer the future under constraints by selecting actions based on a model and preferences. Agency is control-relevant; it does not require human-level intelligence or consciousness.
- Related: agent, control, goal, policy, viability constraints.
- Common confusion: equating agency with consciousness; equating agency with optimal rationality; equating agency with having a human-like body.
- Sources:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:08:42
- talk: Joscha Bach - Agency in an Age of Machines - How AI Will Change Humanity @ 00:02:07
Control system
- Id: TERM-0064
- Working meaning: We will use "Control system" to mean: A system that maintains or reaches target conditions by using feedback: it compares what is to what should be, and acts to reduce error. In this framework, an agent is a control system that uses models.
- Related: control, agent, controller, error signal.
- Common confusion: treating control as domination, or as a single global optimizer; control is regulation across nested loops.
- Sources:
- talk: Joscha Bach - Agency in an Age of Machines - How AI Will Change Humanity @ 00:02:07
- interview: Self Learning Systems - Joscha Bach @ 00:02:08
Naturalizing the mind
- Id: TERM-0065
- Working meaning: We will use "Naturalizing the mind" to mean: Treating mind as part of nature: a functional organization realized by mechanisms, explainable as model-building control plus learning, valence, and self-modeling rather than as a separate substance.
- Related: computationalist functionalism, mechanism, function, mind.
- Common confusion: treating naturalization as reductionism that denies experience; naturalization is a demand for an architecture-level explanation.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:03:18
Triangulation discipline
- Id: TERM-0066
- Working meaning: We will use "Triangulation discipline" to mean: A method rule: keep phenomenology (what it is like), mechanism (how it is implemented), and function (what it does) distinct when reasoning about mind and consciousness.
- Related: phenomenology, mechanism, function, consciousness.
- Common confusion: treating these as competing answers instead of distinct constraints; collapsing one level into another.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:14:35
Prediction
- Id: TERM-0067
- Working meaning: We will use "Prediction" to mean: Generating expectations about state transitions and observations, including counterfactual expectations under possible actions. Prediction is for control, not only for passive forecasting.
- Related: model, world-model, simulation, learning.
- Common confusion: reducing prediction to next-sensory-frame forecasting; in this framework prediction supports intervention and planning.
- Sources:
- talk: Joscha Bach: The AI perspective on Consciousness @ 00:06:05
- talk: The Ghost in the Machine @ 00:26:34
Salience
- Id: TERM-0068
- Working meaning: We will use "Salience" to mean: A control-relevance signal that makes some model contents candidates for attention and action selection (e.g., because they predict large error, reward, uncertainty reduction, or social consequences).
- Related: attention, valence, prediction error, novelty.
- Common confusion: equating salience with attention; salience is an input to selection, not the selection mechanism itself.
- Sources:
- talk: AGI Series 2024 - Joscha Bach: Is Consciousness a Missing Link to AGI? @ 00:10:33
Constraint
- Id: TERM-0069
- Working meaning: We will use "Constraint" to mean: A restriction on what the system can do or what counts as success: physical limits, resource bounds, model structure, norms, and viability constraints that shape control.
- Related: viability constraints, goal, norm, mechanism.
- Common confusion: treating constraints as merely external obstacles; in control systems, constraints define the space of possible policies and the meaning of error.
- Sources:
- talk: Mind from Matter (Lecture By Joscha Bach) @ 01:03:35
Drive
- Id: TERM-0070
- Working meaning: We will use "Drive" to mean: A persistent control pressure (often experienced as an urge) arising from deficits/needs and encoded as error signals that bias policy and learning until resolved or reinterpreted.
- Related: valence, goal, error signal, motivation.
- Common confusion: equating drives with conscious desires; drives can operate below awareness and be re-labeled by the narrative self-model.
- Sources:
- interview: Joscha Bach - Why Your Thoughts Aren't Yours. @ 00:23:35
Impulse
- Id: TERM-0071
- Working meaning: We will use "Impulse" to mean: A short-horizon action tendency produced by local control loops (often reward-driven) that competes with higher-level commitments and long-horizon policies.
- Related: habit, self-control, reward, governance.
- Common confusion: treating impulses as pure moral weakness; in this framing they are predictable outputs of competing loops under scarcity and reinforcement.
- Sources:
- talk: The Ghost in the Machine @ 00:36:40
Compulsion
- Id: TERM-0072
- Working meaning: We will use "Compulsion" to mean: A failure mode where a policy becomes difficult to inhibit because reinforcement and habit loops overpower higher-level governance and commitments.
- Related: impulse, addiction / reward capture, self-control, governance.
- Common confusion: treating compulsion as lack of willpower rather than as an architecture-level dominance of certain loops.
- Sources:
- talk: The Ghost in the Machine @ 00:36:40
Self-control
- Id: TERM-0073
- Working meaning: We will use "Self-control" to mean: Governance of one’s own control stack: maintaining commitments and long-horizon values by constraining short-horizon reward capture and reconfiguring policies and environments.
- Related: governance, commitment, habit, reward hacking.
- Common confusion: equating self-control with suppression; in control terms, effective self-control changes constraints and feedback so desirable policies become easy and stable.
- Sources:
- talk: The Ghost in the Machine @ 00:30:38
Failure mode
- Id: TERM-0074
- Working meaning: We will use "Failure mode" to mean: A predictable way an architecture breaks down under certain inputs/constraints (e.g., reward hacking, attentional capture, incoherence), revealing what the system is actually optimizing.
- Related: reward hacking, addiction / reward capture, attention, coherence.
- Common confusion: treating failures as mere bugs or moral defects; in this framing they are diagnostic of the underlying control loops.
- Sources:
- talk: The Ghost in the Machine @ 00:38:16
Identity
- Id: TERM-0075
- Working meaning: We will use "Identity" to mean: A stabilized self-model: the agent’s narrative and role-structure that makes its policies coherent across time and social contexts.
- Related: self-model, narrative, commitment, culture.
- Common confusion: treating identity as a metaphysical essence rather than as a control-relevant representation that can change.
- Sources:
- talk: Self Models of Loving Grace @ 00:01:14
Social model (theory of mind)
- Id: TERM-0076
- Working meaning: We will use "Social model (theory of mind)" to mean: A model that represents other agents as agents (with beliefs, goals, and policies) so the system can predict and coordinate with them.
- Related: agent, world-model, language, norm.
- Common confusion: treating a social model as a list of facts about people; it is a model of agency and policy.
- Sources:
- interview: Joscha Bach Λ Karl Friston: Ai, Death, Self, God, Consciousness @ 01:58:03
Language
- Id: TERM-0077
- Working meaning: We will use "Language" to mean: A shared compression medium: a way for agents to align models and coordinate policies by transmitting compact symbols that trigger rich internal updates.
- Related: shared compression, culture, social model, meaning.
- Common confusion: treating language as mere labeling; in this framing it is a control technology that reshapes cognition and coordination.
- Sources:
- interview: Joscha Bach Λ Karl Friston: Ai, Death, Self, God, Consciousness @ 00:06:04
- talk: The Machine Consciousness Hypothesis @ 00:17:55
Institution
- Id: TERM-0078
- Working meaning: We will use "Institution" to mean: A persistent multi-agent control loop that stabilizes expectations and enforces norms by making certain feedback predictable (legal, economic, reputational).
- Related: norm, governance, reward infrastructure, contract.
- Common confusion: treating institutions as static buildings or rules; in this framing they are ongoing feedback processes that can drift and be captured.
- Sources:
- talk: Joscha Bach - Agency in an Age of Machines - How AI Will Change Humanity @ 00:58:03
Contract
- Id: TERM-0079
- Working meaning: We will use "Contract" to mean: A shared explicit model of mutual commitments that makes certain futures predictable by attaching enforceable consequences to deviation.
- Related: commitment, norm, institution, governance.
- Common confusion: treating contracts as paperwork rather than as a control technology for multi-agent coordination.
- Sources:
- talk: Joscha Bach: The Operation of Consciousness | AGI-25 @ 00:51:05
Reward function (broad usage)
- Id: TERM-0080
- Working meaning: We will use "Reward function (broad usage)" to mean: The effective reinforcement landscape that shapes learning and policy, including both internal reward signals and external incentive structures (especially in social systems).
- Related: reward, valence, reward infrastructure, incentives, alignment.
- Common confusion: equating the reward function with explicit code; the effective reward function is often implicit and emerges from coupled systems.
- Sources:
- talk: The Ghost in the Machine @ 00:27:50
Reward infrastructure
- Id: TERM-0081
- Working meaning: We will use "Reward infrastructure" to mean: The mechanisms that allocate generalized reward/resources in a society (e.g., money and incentives), thereby shaping behavior and learning at scale.
- Related: institution, governance, alignment, reward function.
- Common confusion: treating reward infrastructure as morally neutral plumbing; it implements a social-level objective and can induce reward hacking at civilization scale.
- Sources:
- talk: The Ghost in the Machine @ 00:50:31
Artificial agent
- Id: TERM-0082
- Working meaning: We will use "Artificial agent" to mean: A machine-implemented control system that builds models and selects actions under constraints.
- Related: agent, control, model, governance.
- Common confusion: equating an AI model with an agent; many systems simulate agency without having stable intrinsic control objectives.
- Sources:
- interview: Joscha Bach - Why Your Thoughts Aren't Yours. @ 01:06:10
Artificial sentience
- Id: TERM-0083
- Working meaning: We will use "Artificial sentience" to mean: The possibility that an artificial agent has experience if the relevant functional organization (self-modeling, coherence stabilization, observer construction) is implemented.
- Related: consciousness, machine consciousness hypothesis, self-organization.
- Common confusion: equating sentience with intelligence or with language competence.
- Sources:
- talk: Self Models of Loving Grace @ 00:37:31
Functionalism (usage tracked here)
- Id: TERM-0084
- Working meaning: We will use "Functionalism (usage tracked here)" to mean: A stance about what counts as an object or a mental entity: it is defined by its functional role in shaping the evolution of states and control, not by an intrinsic essence.
- Related: computationalist functionalism, object (in a model), model.
- Common confusion: reducing functionalism to "the same behavior implies the same mind"; the usage tracked here is more about construction of objects over observations.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:08:24
Computationalism (usage tracked here)
- Id: TERM-0085
- Working meaning: We will use "Computationalism (usage tracked here)" to mean: The view that models must be realized in a constructive representational language built from parts, so they can be implemented and used for inference and control.
- Related: computationalist functionalism, representation, strong computationalism.
- Common confusion: equating computationalism with "everything is digital"; it is primarily a constraint on representational construction.
- Sources:
- talk: The Machine Consciousness Hypothesis @ 00:10:25
Dream (technical usage)
- Id: TERM-0086
- Working meaning: We will use "Dream (technical usage)" to mean: Generated model content experienced as a world. A dream can be more or less constrained: in waking perception, generation is strongly constrained by sensory input; in sleep dreams, it is primarily constrained by internal priors and coherence.
- Related: model, simulation, virtual, consciousness.
- Common confusion: treating \"dream\" as metaphor only; in this framing it is a technical handle on generated experience as model state.
- Sources:
- talk: The Ghost in the Machine @ 00:04:02
- talk: Synthetic Sentience @ 00:17:39