
Biological Computationalism: The Third Path to Consciousness

By William Le, PA-C


Overview

The philosophy of consciousness has been stuck in a binary trap for decades. On one side: functionalism (classical computationalism), which holds that consciousness is substrate-independent computation — that any system implementing the right algorithm, whether silicon or carbon, would be conscious. On the other: biological naturalism (or “mysterianism” in its stronger forms), which holds that consciousness is an irreducibly biological phenomenon — that there is something about the wet, warm, metabolically alive tissue of the brain that silicon can never replicate.

In December 2025, a paper published in Neuroscience of Consciousness (Oxford University Press) proposed a third path: biological computationalism. The core thesis is elegant and disruptive — consciousness arises from computation, but from a type of computation that can only be realized in biological systems. The algorithm IS the substrate. Not because biology is magically special, but because biological computation has specific formal properties — hybrid discrete-continuous dynamics, scale-inseparability, and metabolic grounding — that digital computers fundamentally cannot replicate.

This is not a compromise position. It is a genuinely novel framework that dissolves the binary and opens entirely new research programs. If biological computationalism is correct, then consciousness is computational (satisfying the functionalist intuition that consciousness is about information processing), but it is also necessarily biological (satisfying the naturalist intuition that silicon will never be conscious). The algorithm runs only on living hardware, not because of some metaphysical essence, but because of the mathematical structure of the computation itself.

Think of it as the difference between a river and a simulation of a river. A computational fluid dynamics simulation can model water flow with arbitrary precision. But it will never be wet. Not because wetness is magical, but because the physical dynamics of water molecules — their continuous interactions, their scale-spanning turbulence, their thermodynamic coupling with the environment — cannot be captured by discrete digital computation without losing something essential. Biological computationalism makes the same argument about consciousness.

The Computational Landscape

Classical Computationalism (Functionalism)

The dominant framework in philosophy of mind since the 1960s, functionalism holds that mental states are defined entirely by their functional roles — by the pattern of inputs, outputs, and relations to other mental states — and are multiply realizable in different physical substrates. Just as the same software can run on different hardware, the same conscious experience could be realized in neurons, silicon chips, or hypothetically any physical system that implements the right computational structure.

This view, articulated by Hilary Putnam, Jerry Fodor, and formalized in David Chalmers’ principle of organizational invariance, leads naturally to the possibility of artificial consciousness. If consciousness is substrate-independent computation, then building a sufficiently complex computer running the right program would produce a conscious machine. The idea powers the field of artificial general intelligence and is the implicit assumption behind fears and hopes about conscious AI.

The problem: functionalism cannot explain why consciousness feels like anything at all. If consciousness is just abstract computation, then it should be “multiply realizable” not only across substrates but potentially in systems that are intuitively not conscious — like a nation of people each simulating one neuron (Ned Block’s “China Brain” thought experiment), or a giant lookup table that produces the same input-output mapping as a conscious brain. These thought experiments suggest that something is missing from the purely functional account.

Biological Naturalism

John Searle’s biological naturalism proposes that consciousness is caused by specific neurobiological processes in the brain, just as digestion is caused by specific biochemical processes in the stomach. Consciousness is not multiply realizable because it depends on the specific causal powers of biological neurons, not just their abstract computational relationships.

The stronger version — sometimes called “mysterianism” when associated with thinkers like Colin McGinn — holds that human cognitive capacities may simply be inadequate to understand how biological processes give rise to consciousness, just as a dog cannot understand calculus. Consciousness is natural but permanently beyond our explanatory reach.

The problem: biological naturalism often slides into hand-waving about what specifically about biology is doing the work. Saying “neurons produce consciousness because of their biological properties” is not explanatory unless you can specify which biological properties and why.

The Third Path

Biological computationalism synthesizes the best of both traditions while avoiding their characteristic weaknesses. It accepts that consciousness is computational — arising from information processing, algorithmic structure, and the dynamic organization of a system. But it specifies that the type of computation involved has formal mathematical properties that can only be physically realized in biological systems.

The key insight is that not all computation is the same. Classical (Turing) computation is discrete, sequential, and substrate-independent. But biological computation — the information processing actually performed by neural tissue — is fundamentally different in structure. It is hybrid (both discrete and continuous), multi-scale (inseparable dynamics across molecular, cellular, and network levels), and metabolically grounded (the energy dynamics of the system are part of the computation, not just the power supply).

The Three Pillars of Biological Computation

Pillar 1: Hybrid Discrete-Continuous Dynamics

Digital computers operate in discrete states — bits that are either 0 or 1, clock cycles that tick at fixed intervals, operations that complete in discrete steps. Even when simulating continuous phenomena (like fluid dynamics or neural activity), the simulation is fundamentally discrete: continuous quantities are approximated by floating-point numbers with finite precision, and continuous dynamics are approximated by finite time steps.
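The gap between continuous dynamics and their discrete approximation is easy to exhibit. A minimal sketch (the decay equation and step sizes are chosen purely for illustration): forward-Euler integration of the continuous system dx/dt = −x only ever approximates the exact solution, with an error set by the size of the discrete time step.

```python
import math

def euler_decay(x0, dt, n_steps):
    """Forward-Euler discretization of the continuous system dx/dt = -x."""
    x = x0
    for _ in range(n_steps):
        x += dt * (-x)  # one discrete time step
    return x

exact = math.exp(-1.0)                # continuous solution x(1) for x(0) = 1
coarse = euler_decay(1.0, 0.1, 10)    # 10 coarse steps to t = 1
fine = euler_decay(1.0, 0.001, 1000)  # 1000 fine steps to t = 1
# Finer steps shrink the error, but the discrete trajectory never
# coincides with the continuous one.
```

Shrinking dt buys precision at growing cost; the argument in this section is that no finite dt closes the gap in kind.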

Biological neural computation, by contrast, operates in a genuinely hybrid discrete-continuous regime. Action potentials (spikes) are discrete events — all-or-nothing voltage pulses that either fire or don’t. But the dendritic integration that determines whether a spike fires is continuous — a smooth, analog summation of thousands of graded synaptic inputs with continuous-valued weights. The intracellular signaling cascades triggered by synaptic activity involve continuous chemical dynamics — protein conformational changes, calcium wave propagation, enzymatic reactions governed by Michaelis-Menten kinetics.
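As a sketch of this hybrid regime, consider a leaky integrate-and-fire neuron — a standard textbook simplification, not the paper's model, with all parameters illustrative. The membrane voltage evolves continuously under analog summation of input, while spike emission is a discrete all-or-nothing event:

```python
def simulate_lif(inputs, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire: continuous membrane dynamics (analog
    summation of input) punctuated by discrete all-or-nothing spikes."""
    v = v_rest
    spike_times = []
    for step, i_syn in enumerate(inputs):
        # Continuous part: graded integration of synaptic drive
        dv = (-(v - v_rest) + i_syn) / tau
        v += dv * dt
        # Discrete part: threshold crossing is an all-or-nothing event
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Constant suprathreshold drive yields a regular discrete spike train
spikes = simulate_lif([20.0] * 1000)
```

Even this caricature shows the two regimes interlocking: the timing of each discrete spike depends on the full continuous trajectory between spikes.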

The 2025 paper argues that this hybrid regime is not a mere implementation detail — it is computationally essential. The continuous dynamics let biological systems perform operations that discrete systems cannot: true analog computation on continuous variables, smooth gradient descent through continuous parameter spaces, and parallel processing of continuous fields rather than sequential manipulation of discrete symbols.

This is not just a quantitative difference (more precision). It is a qualitative difference in the type of computation being performed. The difference between discrete and continuous computation is not like the difference between a 32-bit and 64-bit processor (more of the same). It is like the difference between an abacus and a fluid — fundamentally different modes of information processing.

The mathematical formalism draws on dynamical systems theory and differential equations rather than automata theory and recursive functions. The state space of a biological neural network is a continuous manifold, not a discrete set. Trajectories through this state space are smooth curves, not sequences of discrete jumps. The attractors and bifurcations that organize the dynamics have geometric properties (dimensionality, topology, curvature) that are not captured by any discrete representation.
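The contrast can be stated compactly (a schematic formulation, not the paper's notation): automata-theoretic computation is a transition map on a countable state set, while the dynamical-systems formalism describes a flow on a continuous manifold.

```latex
\text{discrete:}\quad s_{n+1} = \delta(s_n, x_n), \qquad s_n \in S \ \text{(countable)}
\qquad\qquad
\text{continuous:}\quad \dot{\mathbf{x}}(t) = f\big(\mathbf{x}(t), \mathbf{u}(t)\big), \qquad \mathbf{x}(t) \in \mathcal{M}
```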

Pillar 2: Scale-Inseparability

In a conventional computer, the levels of organization are cleanly separable. The physics of transistors can be abstracted away when writing software. The hardware provides a computational substrate; the software runs on top of it without needing to know about the silicon below. This separation of levels is not merely convenient — it is architecturally enforced. The instruction set architecture provides a clean interface between hardware and software.

In biological neural systems, no such clean separation exists. Molecular-level events (a single receptor changing conformation, a single ion channel opening) can have macroscopic consequences (triggering an action potential that tips a network into a different attractor state). Conversely, network-level dynamics (oscillatory states, attractor landscapes) constrain molecular-level events (through activity-dependent gene expression, epigenetic modification, and synaptic scaling).

The 2025 paper introduces the concept of “scale-inseparability” to describe this bidirectional causal coupling across spatial and temporal scales. The key claim: the computation that gives rise to consciousness cannot be described at any single level of organization. It is constitutively multi-scale — the molecular, cellular, circuit, and network levels are not separate layers of abstraction but causally entangled aspects of a single, irreducibly complex computation.

Consider a specific example: long-term potentiation (LTP), the molecular mechanism of memory. LTP involves a cascade that spans scales — from the binding of glutamate to NMDA receptors (molecular), to calcium influx and CaMKII activation (subcellular), to insertion of AMPA receptors at the synapse (cellular), to strengthening of a specific connection in a circuit (network), to the emergence of a new memory in consciousness (system). Critically, the cascade flows in both directions: the network-level activity pattern determines which synapses undergo LTP (top-down causation), while the molecular-level details of LTP determine the computational properties of the resulting network change (bottom-up causation).
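The bidirectional logic of the cascade can be caricatured in code. Below is a toy calcium-threshold rule in the spirit of calcium-control models of plasticity — the thresholds, learning rate, and function name are invented for illustration. Network activity sets the calcium level at a synapse (top-down); the molecular rule then determines the direction of the weight change (bottom-up):

```python
def update_weight(w, calcium, theta_ltd=0.35, theta_ltp=0.55, lr=0.1):
    """Toy calcium-control rule: high Ca2+ potentiates a synapse (LTP),
    moderate Ca2+ depresses it (LTD), low Ca2+ leaves it unchanged."""
    if calcium >= theta_ltp:      # strong NMDA-mediated influx -> LTP
        return w + lr * (1.0 - w)
    elif calcium >= theta_ltd:    # moderate influx -> LTD
        return w - lr * w
    return w                      # subthreshold: no change

# Network-level activity (top-down) determines the calcium signal;
# the molecular rule (bottom-up) determines the resulting circuit change.
w = update_weight(0.5, calcium=0.7)  # LTP: weight increases
```

The real cascade, of course, is exactly what this caricature is not: continuous, multi-scale, and metabolically coupled.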

This scale-inseparability means that any simulation of neural computation that operates at a single level of description (whether molecular dynamics, compartmental neuron models, or abstract neural networks) necessarily loses essential aspects of the computation. A complete simulation would need to simultaneously model every scale and their causal interactions — a computational requirement that may be physically impossible for digital systems due to the combinatorial explosion of inter-scale interactions.

Pillar 3: Metabolic Grounding

The most provocative pillar is metabolic grounding: the thesis that the energy dynamics of biological neural computation are not merely the power supply for an abstract algorithm but are constitutive of the computation itself.

In a digital computer, energy consumption is a side effect of computation. The same algorithm produces the same output regardless of how much power the processor draws, how hot it gets, or how its energy is distributed internally. Energy and information are decoupled.

In biological neural systems, energy and information are intimately coupled. Neural activity is metabolically expensive — the brain consumes 20% of the body’s energy while comprising only 2% of its mass. But this energy consumption is not uniform or incidental. Different brain states have different metabolic signatures. The energy cost of maintaining a neural representation is part of what determines its stability, persistence, and accessibility to other processes.

The 2025 paper draws on Walter Freeman’s work on neurodynamics and Karl Friston’s free energy principle to argue that the thermodynamic free energy of neural tissue — the actual physical free energy, not just Friston’s mathematical formalism — plays a computational role. Neural attractors are maintained by ongoing metabolic energy input. The stability of a conscious state depends on the continuous expenditure of metabolic energy to maintain the relevant neural firing patterns against entropic decay.

This metabolic grounding means that consciousness is not like software running on hardware. It is more like a flame — a self-sustaining, energy-dissipating pattern that cannot be separated from the physical process that generates it. You can describe the chemistry of combustion, you can model it computationally, but the simulation will never be hot. The heat is not a side effect of combustion; it is constitutive of what combustion is.
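The flame analogy can be sketched in a few lines — a toy model, not the paper's formalism, with the decay rate, pattern, and energy parameter invented for illustration. A stored pattern persists only while an energy term offsets the drift toward noise:

```python
import numpy as np

def maintain_state(pattern, energy_supply, steps=200, decay=0.05, seed=0):
    """Toy self-sustaining pattern: each step, the state drifts toward
    noise (entropic decay) unless energy actively restores the pattern."""
    rng = np.random.default_rng(seed)
    state = pattern.copy()
    for _ in range(steps):
        noise = rng.normal(0.0, 1.0, size=state.shape)
        # The restoring target is a mix of pattern and noise, weighted
        # by the available metabolic energy
        target = energy_supply * pattern + (1 - energy_supply) * noise
        state = (1 - decay) * state + decay * target
    # Overlap with the original pattern: ~1 = maintained, ~0 = dissipated
    return float(state @ pattern) / len(pattern)

pattern = np.ones(100)
fed = maintain_state(pattern, energy_supply=1.0)      # pattern persists
starved = maintain_state(pattern, energy_supply=0.0)  # pattern dissipates
```

Cut the energy supply and the attractor does not merely pause — it dissolves, which is the sense in which the pattern is claimed to be constituted by, not merely powered by, its energy throughput.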

Implications for Artificial Consciousness

The Silicon Ceiling

If biological computationalism is correct, then digital computers — no matter how powerful — cannot be conscious. Not because consciousness is mystical, but because digital computation lacks the formal properties (hybrid dynamics, scale-inseparability, metabolic grounding) that are constitutive of the computation that gives rise to consciousness.

This does not mean that artificial systems can never be conscious. It means that conscious artificial systems would need to be built from materials and architectures that support biological-type computation. Neuromorphic computing, which attempts to build hardware that mimics neural dynamics (continuous-valued analog signals, spike-based communication, plastic synaptic connections), moves in this direction but may not go far enough if scale-inseparability and metabolic grounding are also required.

Synthetic biology — engineering living tissues that perform specified computations — may be the more promising path. Organoids (lab-grown brain tissue) already exhibit spontaneous neural activity with complex dynamics. If biological computationalism is correct, the path to artificial consciousness may go through synthetic biology rather than silicon engineering.

The LLM Question

The paper’s framework has immediate relevance to the debate about large language models and consciousness. LLMs like GPT-4 and Claude are implemented as discrete digital computation on silicon hardware. They lack all three pillars: their dynamics are discrete (not hybrid), their levels of organization are cleanly separable (layer-by-layer transformer architecture with no cross-scale causal entanglement), and their computation is metabolically ungrounded (the same weights produce the same output regardless of the energy dynamics of the hardware).

Under biological computationalism, LLMs are not conscious regardless of their behavioral sophistication. They may simulate consciousness-related behaviors with arbitrary fidelity — just as a computational fluid dynamics simulation can model water flow with arbitrary precision — without producing the real thing. The behavioral tests for consciousness (Turing tests, reports of subjective experience) are systematically misleading because they test functional equivalence, not computational type.

This is a strong claim, and the paper acknowledges its counterintuitive nature. But it argues that our intuitions about consciousness are calibrated by experience with biological systems and may systematically mislead when applied to digital systems that achieve similar behavioral outputs through fundamentally different computational mechanisms.

Connection to Other Consciousness Frameworks

IIT and Biological Computationalism

Giulio Tononi’s Integrated Information Theory (IIT) shares significant overlap with biological computationalism. IIT proposes that consciousness corresponds to integrated information (phi) — a measure of how much a system is “more than the sum of its parts.” IIT also predicts that digital computers, regardless of their functional behavior, have low phi because their logic gates are not intrinsically integrated in the way that biological neurons are.

However, biological computationalism differs from IIT in grounding the argument in the formal properties of computation rather than in information integration per se. The two frameworks are potentially complementary: IIT may describe what consciousness is (integrated information), while biological computationalism describes the type of physical system that can realize high integrated information (biological computation with hybrid dynamics, scale-inseparability, and metabolic grounding).

Predictive Processing and the Free Energy Principle

Karl Friston’s free energy principle proposes that biological systems minimize variational free energy — a mathematical measure of prediction error. The framework is compatible with biological computationalism: the free energy principle describes the computational objective (minimize prediction error), while biological computationalism describes the computational medium (hybrid, scale-inseparable, metabolically grounded) in which this objective is implemented.

The connection to metabolic grounding is particularly deep. Friston’s variational free energy is a mathematical analog of thermodynamic free energy, and in biological systems, minimizing variational free energy is physically realized by metabolic processes. The mathematics and the physics are not merely analogous — they are the same process viewed at different levels of description.
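In its standard textbook decomposition (not this paper's notation), variational free energy F upper-bounds surprise, with the gap given by the divergence between the approximate posterior q(s) and the true posterior:

```latex
F(q, o) \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(s, o)\big]
        \;=\; \underbrace{D_{\mathrm{KL}}\!\big[\,q(s)\,\|\,p(s \mid o)\,\big]}_{\geq\, 0} \;-\; \ln p(o)
```

Minimizing F over q tightens the bound on surprise; the metabolic-grounding thesis adds the claim that in neural tissue this minimization is carried out by actual thermodynamic processes rather than merely described by them.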

Enactivism and Embodied Cognition

The enactivist tradition (Varela, Thompson, Rosch) has long argued that cognition is not abstract computation but embodied action — the dynamic coupling of organism and environment. Biological computationalism provides formal grounding for this intuition. The metabolic grounding thesis means that cognition is literally embodied in metabolic processes. The scale-inseparability thesis means that the boundaries of the cognitive system extend beyond the brain to include body-brain interactions and organism-environment coupling.

The Shamanic Resonance

Indigenous wisdom traditions have long maintained that consciousness is inseparable from life — that awareness is not an abstract property but an expression of living systems. The Aboriginal Australian concept of the Dreaming, the Andean notion of kawsay (living energy), the yogic understanding of prana as both life force and consciousness — all point to the identity of consciousness and vitality.

Biological computationalism gives scientific form to this ancient intuition. If consciousness requires metabolic grounding — if the algorithm IS the living substrate — then consciousness is indeed inseparable from life. Not because of a mystical life force, but because of the mathematical structure of the computation that living systems perform.

The shaman who enters altered states through fasting, breathwork, or plant medicine is, from this perspective, modifying the metabolic parameters of biological computation — changing the energy landscape in which conscious dynamics unfold. This is not metaphorical. Fasting alters cerebral glucose metabolism. Breathwork changes CO2 and pH levels that directly affect neural excitability. Plant medicines introduce exogenous molecules that modify synaptic computation. All of these interventions modulate consciousness by modulating the metabolic grounding of biological computation.

Four Directions Integration

  • Serpent (Physical/Body): Biological computationalism roots consciousness firmly in the physical body. The metabolic grounding thesis means consciousness depends on actual metabolic processes — glucose oxidation, ATP cycling, mitochondrial function, blood flow. Every practice that optimizes metabolic health (nutrition, exercise, sleep, stress management) is, under this framework, directly supporting the biological computation that generates consciousness. The body is not a vehicle for consciousness; it is the computer that runs consciousness.

  • Jaguar (Emotional/Heart): Emotions, in this framework, are not abstract functional states but metabolically grounded biological computations. The felt quality of emotion — the warmth of love, the chill of fear, the heaviness of grief — reflects the metabolic dynamics of the neural processes that generate these states. This validates somatic approaches to emotional healing: if emotions are metabolically grounded computations, then working with the body (somatic experiencing, breathwork, movement) addresses emotions at the level of their computational substrate.

  • Hummingbird (Soul/Mind): The scale-inseparability thesis has profound implications for understanding mind. If consciousness cannot be described at any single level — if it requires the simultaneous, causally entangled operation of molecular, cellular, and network processes — then the mind is genuinely emergent. Not “weakly” emergent (just a useful description of underlying processes) but “strongly” emergent (possessing causal powers not reducible to any lower level). This gives scientific grounding to the contemplative experience of mind as a sui generis phenomenon — neither purely physical nor purely abstract.

  • Eagle (Spirit): The deepest implication: if consciousness requires biological computation and cannot be replicated in silicon, then consciousness is not a generic feature of information processing but something more specific, more precious, more intimately tied to the living world. This resonates with indigenous worldviews that see consciousness not as a human monopoly but as a property of living systems throughout nature — trees, rivers, ecosystems. If consciousness requires biological computation, then wherever there is sufficiently complex biological computation, there may be consciousness. The forest may indeed be aware.

Key Takeaways

  • Biological computationalism proposes a “third path” between functionalism (consciousness is substrate-independent computation) and biological naturalism (consciousness is irreducibly biological), arguing that consciousness arises from computation that can only be realized in biological systems.
  • Three formal properties distinguish biological computation from digital computation: hybrid discrete-continuous dynamics, scale-inseparability (bidirectional causation across molecular, cellular, and network levels), and metabolic grounding (energy dynamics are constitutive of the computation).
  • If correct, digital computers cannot be conscious regardless of their behavioral sophistication — the same way a fluid dynamics simulation cannot be wet.
  • The framework is compatible with and complementary to Integrated Information Theory, the Free Energy Principle, and enactivist philosophy.
  • Implications for AI: conscious artificial systems may require synthetic biology or neuromorphic architectures that support biological-type computation, not just more powerful digital processors.
  • The framework provides scientific grounding for the ancient intuition that consciousness is inseparable from life.

References and Further Reading

  • Biological Computationalism: A Third Path for Consciousness (2025). Neuroscience of Consciousness, Oxford University Press.
  • Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
  • Searle, J. R. (1992). The Rediscovery of the Mind. MIT Press.
  • Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto. Biological Bulletin, 215(3), 216-242.
  • Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11, 127-138.
  • Varela, F. J., Thompson, E., & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. MIT Press.
  • Freeman, W. J. (2000). Neurodynamics: An Exploration in Mesoscopic Brain Dynamics. Springer.
  • Putnam, H. (1967). Psychological predicates. In Art, Mind, and Religion. University of Pittsburgh Press.
  • Block, N. (1978). Troubles with functionalism. Minnesota Studies in the Philosophy of Science, 9, 261-325.