It’s commonplace now to think of living things as machines. A plant is a machine that converts sunlight, water, and carbon dioxide into oxygen via photosynthesis; an animal is a machine that takes oxygen and food and outputs carbon dioxide and water. While there are parallels between machines and living beings, most people, especially people who primarily work with machines (i.e. engineers), take this analogy too far. There is a crucial difference between the two that is not captured when thinking about inputs and outputs. In living things, unlike in machines, unreliable parts make a reliable whole.
All of us are made of building blocks which have an autonomy of their own, but which nonetheless work together to create an integrated system that functions consistently. Cells, proteins, chemical cascades, and neurons all have an element of chaos to them—they don’t behave in easily predictable ways—and yet your heart beats three billion times, continuously, until you die.
We are unlike machines in that our parts are not reliable: if you give the same neuron the same stimulus, it will not always respond in the same way. We are also unlike machines in that our parts don’t have a perfect fit: people often talk about proteins binding to each other like a “lock and key”, but it’s more like a handshake, where two proteins modify each other’s shape as they interact, connecting based on a level of “binding affinity” that can be higher or lower but never perfect.
It’s hard to overstate just how different this is from an integrated circuit, whose parts are precision-manufactured to fit, and are built for consistency. If you set a bit in a CPU register to a 0 or 1, it will hold that state with near-total reliability: on the rare occasions it fails, the culprit is often a cosmic ray causing a bit flip. It takes high-energy particles from deep space to throw the building blocks of computers out of sync, and even that only happens about once every quintillion operations.
The unreliability of biological building blocks is not a bug, it’s a feature. Cells are better thought of as active agents that pursue goals than as passive mechanical parts. This means that on the one hand, they are less reliable, because they may sometimes take actions that are misaligned with the goals of the larger system (e.g. cancer). On the other hand, this unreliability comes with a creative flexibility: when one part of the system makes a mistake, another part of the system can step in and compensate. When neurons in your visual cortex are damaged, neighboring neurons can reorganize to take over their function, restoring partial vision. As cells in every part of your body die, other cells come in to take their place. Fault-tolerance is built into every level of the system, because every level of the system is partly unreliable. The unreliability of the lower levels, rather than being a nuisance, can actually be harnessed, like when bacteria increase the amount of randomness in their gene expression when stressed, to increase their odds of survival.
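As an aside, this principle—that redundancy among unreliable parts can yield a reliable whole—has a classic engineering demonstration in von Neumann’s work on building reliable computation from unreliable components. A minimal sketch (my own illustration, not from the essay; the component names and failure rate are made up for the example) shows how a majority vote over many noisy copies of a component drives the error rate toward zero:

```python
import random

def noisy_gate(x, p_fail=0.1):
    """An unreliable part: returns the wrong bit with probability p_fail."""
    return x if random.random() > p_fail else 1 - x

def redundant_gate(x, copies=25, p_fail=0.1):
    """A reliable whole: majority vote over many unreliable copies."""
    votes = sum(noisy_gate(x, p_fail) for _ in range(copies))
    return 1 if votes > copies / 2 else 0

random.seed(0)
trials = 10_000
single_errors = sum(noisy_gate(1) != 1 for _ in range(trials))
voted_errors = sum(redundant_gate(1) != 1 for _ in range(trials))
print(f"single-component error rate: {single_errors / trials:.3f}")
print(f"majority-vote error rate:    {voted_errors / trials:.5f}")
```

With each part failing 10% of the time, a vote over 25 copies fails only when 13 or more fail at once, which is astronomically unlikely—the composite is far more dependable than any of its parts. Biology’s version is messier than a clean vote, of course, but the arithmetic of redundancy is the same.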
This is the root of the difference between biology and machines, at least in their current form: biology is grown, machines are designed.1 Machines are built by teams of people, and as such they need to be shaped in such a way as to be easily understood and manipulated by teams of people. The principles that apply in an engineering context—separation of concerns, reducing the system down to simple building blocks, each part having a well-defined function, each part being made to be as reliable as possible—apply only partially in biology, forever restrained by the lack of any conscious thinking minds behind the development of life. The world of biology is not a world of easily distinguishable objects and well-defined relationships, the kind you’d find in an architecture diagram. The world of biology is one of interleaving, self-perpetuating, ever-evolving processes.
Ultimately, this difference between organisms and machines, like everything else, is more a matter of degree than a strict binary. There are evolutionary aspects to the development of technology; a software codebase consisting of millions of lines of code is not designed top-down, but emerges bottom-up as thousands of individual engineers contribute to it. The weights in a neural network that powers a chatbot are not “designed”; they are grown as part of a training process, bringing them closer to biology than traditional software.
But there is a useful difference to keep in mind here, which is less a difference between groups of objects (cells vs computers, plants vs cranes) than between mindsets of understanding and design. There is the mechanical mindset – which emphasizes reduction into simple parts and total control over every aspect of the system – and the organismal mindset – which emphasizes creative autonomy over predictability. What we need to be careful about, especially in a society overrun by machines, is letting the mechanical mindset dominate the other. To view predictability and control as an unalloyed good, and to think of randomness as a problem to be eradicated. To forget that resilience emerges not from rigidity but from flexible adaptation. It’s easy to look at our computers and skyscrapers and admire them for their elegance, contrasting them with the messiness of a neural wiring diagram or a biochemical pathway. It’s harder to see the wisdom in nature’s messiness.
Thanks James for feedback on drafts.
There are, in fairness, newer approaches to robotics that take a biologically inspired view, see e.g. Josh Bongard’s work.
This is a pithy banger of a piece. Really appreciate it as it helps me think through the biological underpinning of Internal Family Systems.
It also seems important to recognize that unreliability or conflict within systems can be goods in themselves, not just ways of providing fault tolerance or adaptability. The example of subunits acting against the goals of the larger system can be destructive, as in cancer, but it can also be transformative. Sometimes misaligned actions are actually better actions. I’m thinking of conversion moments: a self-described selfish person experiencing an unpremeditated act of kindness. The larger organism’s goal (selfishness) is disrupted by the smaller unit’s behavior (kindness), but that misalignment creates the possibility of reflection and a new cohesion built on deeper values.
In fact, I think all moments of creativity, insight, grace, novelty, revelation, and transformation depend on something being introduced into the larger system that was not there before—something the system itself could not predict or control. At the limit, all learning, growth, and creativity require a new, “unpredictable” attitude arising within the larger system. If the world were entirely mechanized, predictable, and controllable, then there could be no more of those good things: no insight, no grace, no transformation, etc.