Different Brains


Created by Aubrey Lieberman in collaboration with ChatGPT 5.2 turbo — December 2025



One of the foundations of practicing medicine—human or veterinary—is the study of anatomy. As a now-retired neurologist, I remember vividly how beautiful and absorbing this pursuit was, especially when it came to the central and peripheral nervous system. 


From the classical anatomical drawings to the later digitized images and layered reconstructions, the brain revealed itself as an object of extraordinary elegance. By the end of that long apprenticeship, I had acquired an essential tool. And yet it felt oddly like standing on the surface of the moon: I could navigate neurological disease with reasonable confidence, but I did not understand how intelligence worked. I had mastered a map, but not a mechanism. The very reason I had been drawn to the anatomy in the first place—the nature of mind—remained elusive. 


When people think about intelligence, they usually picture a brain, typically a human one: large, folded, layered, and undeniably impressive. From there, it is a short step to assuming that the shape of a brain explains the intelligence it produces.


The deeper story is simpler and more general. What matters is not neuroanatomy, but computation: how information is transformed over time. What matters is purpose: the need to act in the service of survival, reproduction, or stability. And what matters is constraint: the limits imposed by a physical body that must live in a dangerous, energy-limited world.


Brains are solutions.


Comparative neuroanatomy is therefore fascinating but secondary. It is epic in the way cathedral architecture is epic. It tells us what evolution built to fit a particular situation. It does not tell us how cognition works.


Evolution teaches us relentlessly that machinery must be contrived to fit. Wings can be feathers or membranes. Eyes can be camera-like or compound. Control can be centralized or distributed. But evolution does not explain the underlying computation. It shows us that something works, not how it works.


From this perspective, intelligence is not a ladder with humans at the top. It is a landscape of solutions to the same basic problem: how can a physical system embedded in the world learn enough about that world to act before it is too late?


Neuroscience sharpens this picture rather than overturning it.


When we examine brains closely, we find that thinking does not reside in structures. It resides in processes. Neurons are slow, noisy, unreliable, and metabolically expensive. But they never work alone. Collectively, they produce perception, memory, emotion, and action.


Cognition is distributed. No single neuron understands anything. No region contains meaning by itself. Intelligence emerges from patterns of interaction across large populations of neurons unfolding over time.


Brains are not passive receivers of information. They are prediction machines. They constantly generate expectations about what will happen next and adjust when those expectations fail. Sensory input functions less like raw data and more like an error signal: something unexpected occurred, update the internal model.
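The idea can be made concrete with a toy sketch. Assuming nothing biological, only the bare logic of the paragraph above, an internal estimate is nudged toward reality by a fraction of the prediction error; the function name and learning rate here are illustrative inventions, not a model of any real neural circuit.

```python
# Minimal sketch of a prediction machine: an internal estimate is
# updated only by the error between expectation and observation.
# The update rule and numbers are illustrative, not biological.

def update(estimate, observation, learning_rate=0.5):
    """Move the internal model toward the observation by a fraction
    of the prediction error."""
    error = observation - estimate          # the "surprise" signal
    return estimate + learning_rate * error

estimate = 0.0
for observation in [10, 10, 10, 10]:        # a stable feature of the world
    estimate = update(estimate, observation)

# After repeated exposure the estimate converges and the error shrinks:
# once the model predicts well, the same input carries little news.
```

The point of the sketch is the asymmetry: the system does not store the input, it stores an expectation, and only the mismatch drives change.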


Memory is not storage in the ordinary sense. Synapses strengthen and weaken. Networks become biased toward patterns that mattered in the past.


A brain remembers because of this inherent capacity to alter combinations of neuronal interactions, to create a multitude of potential networks from a fixed number of neurons, and to bias or weight neuronal receptivity and firing, properties aptly characterized collectively as neural plasticity.
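The same weighting idea can be sketched in a few lines. This is a toy Hebbian-style rule, an assumption chosen for illustration rather than a claim about actual synaptic chemistry: a connection strengthens whenever two units are active together, so history is reflected as bias rather than as a stored record.

```python
# Illustrative Hebbian-style sketch: a connection weight is not a
# stored record but a bias, strengthened by coincident activity.
# The rule and the numbers are toy values, not a biological model.

def hebbian_step(weight, pre, post, rate=0.1):
    """Strengthen the connection in proportion to coincident activity."""
    return weight + rate * pre * post

weight = 0.0
for pre, post in [(1, 1), (1, 1), (1, 0), (1, 1)]:   # co-activation history
    weight = hebbian_step(weight, pre, post)

# The network is now biased toward a pattern that recurred in the past;
# nothing was "written down", yet the history shows up in the weight.
```

Scaled up to billions of connections, this is all "remembering" needs to be: a reshaping of which patterns the system is disposed to produce.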


Energy constraints shape everything. The human brain accounts for roughly two percent of body weight yet consumes about twenty percent of the body's resting energy. Every signal costs something. This pressure favors shortcuts, approximations, and heuristics. Brains are not optimized for truth. They are optimized for survival under constraint.


Emotion and motivation are not decorations added to cognition. They are control systems. They determine what is worth computing at all. Attention, fear, curiosity, and reward allocate limited resources to what matters now.


Seen this way, neuroanatomy becomes a historical record. It shows how evolution packaged computation into blood supply, muscle control, sensory organs, and lifespan. The same computational principles appear again and again, regardless of the shape of the machinery.


Comparative brain anatomy makes this clearer rather than more confusing.


Octopuses solve problems with a nervous system that is largely distributed into their arms. Birds plan, learn, and remember with compact brains lacking a layered cortex. Insects navigate, communicate, and adapt with neural hardware that would seem impossibly small by human standards. Humans rely heavily on abstraction, language, and cultural memory, off-loading cognition into tools, symbols, and shared institutions.


These systems look radically different, but their capabilities overlap far more than anatomy would predict.


The explanation is not that evolution accidentally reinvented intelligence multiple times. The explanation is that the same computational principles are implemented under different bodily constraints. Bodies differ. Environments differ. Energy budgets differ. The machinery adapts accordingly.


Comparative anatomy is therefore explanatory at the level of constraint, not mechanism. It tells us why systems look the way they do. It does not define what intelligence is.


Different brains are particular answers to particular survival challenges.


If intelligence is computation in the service of goals under constraint, and nothing in that definition is inherently biological, then biology is not a mandatory requirement. Artificial systems process information, pursue objectives set by human users, and operate within the limits of energy, data, and design.


Different brains, biological and artificial, reveal a quiet universality beneath their surface diversity. 


Intelligence is not defined by shape. It is defined by what a system can do, and how and why it does it. The drama is not anatomical. It is the shared logic beneath the anatomy.




