Thursday, November 13, 2025

Book Review: "Creation: Life and How to Make It"

Creation is another book by Steve Grand, the mastermind behind the popular Creatures series of artificial life simulation video games. Once again, hat tip to Twitter friend Artem (@artydea) for recommending these. This book happens to be about how Grand made Creatures, so unlike Growing up with Lucy, it describes a finished project. I was still surprised by how much more it focuses on philosophy and theory than on practice. Grand seems less interested in sharing technical details of how his Creatures work, and more interested in convincing the reader of two theses: that "artificial life" can really qualify as life, and that even if we (and our AL creations) are purely mechanistic, we can still be special - there is no need to grieve the absence of a mysterious or supernatural element in life. This is heavy material to deal with, so although I'll try to keep this review concise, there may be a lot to unpack.
Part of the cover art for "Creation." It shows a human head, semi-transparent, with a large gear inside and the author's name in the center.

What does Grand mean by "life"? The book considers everything from self-sustaining biochemistry, to intelligence, to sentience, to personhood, and Grand doesn't always explicitly distinguish between them. That makes sense for him: he views them all as outgrowths of the same fundamental principles, different levels in a "hierarchy of persistent phenomena." But I think his arguments work well on some of these subjects, and not as well on others.

The first thing Grand wants to emphasize is that your material body, and indeed every physical object you can see, is more of a system or a process than a static "thing." Your body has fuzzy boundaries and is constantly swapping atoms with the rest of the environment; in fact, much of it will be recycled over the course of your life. If the matter and energy that make up your body are all that constitutes "you," then you can't legitimately claim to be the same entity you were ten or twenty years ago! Therefore human identity must arise from something else - from form, the arrangement of matter and energy, rather than matter and energy alone. You're not so much a distinct clump of molecules as you are an intangible pattern that moves through space and persists in time, imposing itself on matter as it goes. Furthermore, he invokes wave-particle duality to argue that even matter is a process: protons and electrons are stable, persistent disturbances of space, rather than distinct things in themselves (much as a ripple is a disturbance in a liquid, rather than a distinct thing in itself). Grand's ultimate point here is that abstract concepts are every bit as real as objects. You may not be able to see and touch items like "society" or "poverty" or (crucially) "mind," but that doesn't mean they're imaginary; they are simply "higher-order phenomena." They are things that happen to matter, even as matter itself is a thing that happens to spacetime.

I agree with this perspective pretty well, as I expect you'll see if you read my thoughts on the soul in the context of another book review from years ago. But Grand and I are coming to it from opposite directions: I'm trying to explain the spiritual in familiar terms, whereas he's trying to explain the familiar in spiritual terms. Grand takes it for granted that the physical universe we currently occupy is all that's out there; he's merely trying to re-enchant it, to recover some of the benefits that spirituality provided while remaining (in essence) a materialist. I wonder why, having admitted that things like information and form are in some sense immaterial but still truly extant, he does not at least open himself to the possibility of yet other things that cannot be seen and touched.

This sets the stage for Grand's personal definition of "life," which is "patterns that persist by metabolizing and reproducing." This aligns with the "descriptive" scientific definition of life that I'm familiar with, though the latter often adds other factors, like growth and movement or reactivity. [1] The next step in the argument should be fairly obvious: if life is fundamentally about self-maintaining patterns, rather than their substrate of molecules and electrochemical reactions, then it's absolutely possible for life to exist in a virtual world. A program is just another kind of pattern; if it consumes computational resources and copies itself into new regions of memory in order to persist, those are just alternate forms of metabolism and reproduction. If the essence of life is form rather than substance, the fact that computers are made of different substances than organic beings doesn't matter at all.
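Just to make that idea concrete, here's a tiny toy of my own devising (nothing like this appears in the book, and it's certainly not Grand's code): a "pattern" that persists in a one-dimensional world only so long as it keeps metabolizing and reproducing. Every name and number here is invented for illustration.

```python
import random

WORLD_SIZE = 16
energy_field = [5] * WORLD_SIZE   # free energy available at each cell
occupants = {}                    # cell index -> an organism's energy store

def step():
    for cell in list(occupants):
        # Metabolism: convert environmental energy into self-maintenance.
        harvested = min(energy_field[cell], 2)
        energy_field[cell] -= harvested
        occupants[cell] += harvested - 1   # upkeep: the pattern decays without input
        if occupants[cell] <= 0:
            del occupants[cell]            # the pattern dissolves; its "atoms" remain
            continue
        # Reproduction: copy the pattern into an adjacent free cell.
        if occupants[cell] >= 6:
            neighbor = (cell + random.choice([-1, 1])) % WORLD_SIZE
            if neighbor not in occupants:
                occupants[cell] -= 3
                occupants[neighbor] = 3

occupants[0] = 4
for _ in range(20):
    step()
print("surviving patterns:", len(occupants))
```

The organisms here are nothing but dictionary entries, yet the population rises and falls by the same logic as a bacterial colony: harvest energy faster than your upkeep burns it, or dissolve back into the substrate.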

I think this argument works well if the aspect of "life" we are considering is biochemical self-maintenance, or even intelligence. I don't think it works for phenomenal consciousness. The problem there is that PC, in its fundamental definition, is not a pattern - it's an experience. And we must ask ourselves whether this experience can arise from patterns alone, or whether it requires some specific physical phenomena present in brains (such as electric fields). If the latter is true, a traditional computer can never replicate consciousness, only simulate it. For an expansion of this discussion, see Part 3 of my Symbol Grounding Problem series. Grand sidesteps this question, seemingly assuming that consciousness is another higher-order phenomenon, and therefore must be a pattern, or emergent from one. It's an unfortunately common flaw among AI researchers ... I think because they so badly want the mechanisms of consciousness to be knowable. They'd rather not admit that there's an aspect of human existence they aren't sure how to replicate. To be fair to Grand, he does seem to admit more ignorance about consciousness in Growing up with Lucy (which was written later), so maybe the full nuance of his opinions just doesn't come across in this book.

A screenshot from the original Creatures game, obtained from old-games.com. It shows a cross-section of three floors of a large communal house, including a kitchen, some kind of mechanical room, and a computer room. Part of an outdoor environment with trees and sunflowers is also visible. There are three norns in view; two appear to be interacting, and one of these has a speech bubble up and is talking gibberish.
"Creatures" screenshot obtained from old-games.com

After tackling how life might exist in a computer, Grand goes on to discuss some nuts and bolts of intelligence. As in Growing up with Lucy, he conceptualizes it as a multitude of interconnected feedback control loops, each tasked with maintaining some aspect of the creature within desired parameters, or changing it in response to stimuli. He calls out the old discipline of cybernetics (which is all about such control loops) as a better basis for intelligence than some of the more modern techniques. A very basic control loop might react to immediate sensory input; more advanced loops might be concerned with learning from experience or planning for the future, and in their operations would modify the parameters of the basic loops.
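Here's a minimal sketch of that two-tier idea, written by me rather than taken from the book: a reactive loop holds some bodily variable near a setpoint, while a slower adaptive loop adjusts that setpoint in light of experience. The "hunger" framing and all the parameters are my own assumptions, purely for illustration.

```python
class BasicLoop:
    """Reactive loop: push a sensed value toward a desired setpoint."""
    def __init__(self, setpoint, gain=0.5):
        self.setpoint = setpoint
        self.gain = gain

    def act(self, sensed):
        error = self.setpoint - sensed
        return self.gain * error              # corrective action, proportional to error

class AdaptiveLoop:
    """Higher loop: adjusts the basic loop's parameters in light of outcomes."""
    def __init__(self, inner, rate=0.05):
        self.inner = inner
        self.rate = rate

    def learn(self, outcome_signal):
        # A crude rule: if outcomes at this setpoint were bad, shift the target.
        self.inner.setpoint += self.rate * outcome_signal

# A "hunger" controller whose target fullness drifts with experience.
hunger = BasicLoop(setpoint=0.8)
planner = AdaptiveLoop(hunger)
fullness = 0.3
for _ in range(10):
    fullness += hunger.act(fullness)          # reactively eat (or stop eating)
planner.learn(outcome_signal=-0.5)            # suppose overeating got punished
print(round(fullness, 3), round(hunger.setpoint, 3))
```

The basic loop never knows why its setpoint moved; the adaptive loop never touches the moment-to-moment corrections. That division of labor is the heart of the scheme.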

I like this general framing, but I do have a couple of issues with Grand's presentation of it. First, he myopically treats self-persistence as the sole motivation of intelligence ... whereas I see intelligence as driven by goals, of which persistence is but one. Make whatever arguments you like about natural selection; it remains evident that survival and reproduction are not the only human goals. If they were, there would be no suicides and no voluntarily childless adults. Some people apply substantial ingenuity to not persisting. You don't have to like that, but I don't think you can call those people unintelligent. And it's no great stretch to imagine artificial minds with even more alien primary goals.

This premise leads Grand into some naive behaviorism, such as (I paraphrase): "If someone gives signs that you've offended them, that only bothers you because you associate their facial expressions with childhood memories of being hit."[2] Give me a break. I suppose one could argue that being punished for doing obnoxious things as a child helps the emotions of guilt and embarrassment develop; but even if that's true, in a practical sense they eventually become a drive all their own, rooted in empathy and disconnected from memories of being spanked, grounded, or given extra chores. Making other people feel bad makes us feel bad, whether we suffer reprisals or not - and some of us would happily accept physical pain or deprivation to avoid embarrassment.

The other thing that troubles me is Grand's insistence that the control loop hierarchy has to be "bottom-up." This is a way of saying it has to be decentralized and swarm-like; each little loop should concern itself only with its specialized activities, without knowledge of other loops beyond any it directly interacts with. There can be no central coordinator that is in some sense aware of what the entire organism is doing. Grand argues that "Top-down control leads to complexity explosions, because something somewhere has to be in charge of the whole system, and how much this master controller needs to know increases exponentially with the number of components in the system." [3]

Swarm intelligence is certainly a thing, and is effective for solving some problems. But I don't care for Grand's insistence that it's the only feasible option. Although the brain may not have an obvious "master controller," our abstract model of the mind does: we call it "executive function." Our thoughts seem to have a kind of planning center that is explicitly aware of our goals and directs our activities accordingly. (How well the executive function works, and how much influence it exercises, varies from person to person. I don't think this weakens my argument, which is simply that such a thing can work.) And I'm unconvinced by Grand's claim that top-down control always leads to a complexity explosion. The planning center can have a perfectly adequate grasp of what is happening in the whole system without knowing all the details - those can be delegated to controllers lower in the hierarchy, which then pass summarized information toward the top.
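To illustrate what I mean, here's a hypothetical sketch of my own (not something Grand proposes): a central executive that knows the goals and sees one coarse status word per subsystem, while all the raw detail stays local. Its knowledge grows linearly with the number of parts, not exponentially.

```python
class LimbController:
    def __init__(self, name):
        self.name = name
        self.joint_angles = [0.0] * 12     # raw detail, kept local

    def summary(self):
        # Report one coarse status word, not twelve joint angles.
        return "ok" if max(self.joint_angles) < 1.5 else "strained"

    def execute(self, goal):
        pass                                # detailed motor control would live here

class Executive:
    """Central planner: aware of goals and coarse status, blind to raw detail."""
    def __init__(self, parts):
        self.parts = parts

    def plan(self, goal):
        statuses = {p.name: p.summary() for p in self.parts}
        for part in self.parts:
            if statuses[part.name] == "ok":
                part.execute(goal)          # delegate; never touch joint angles
        return statuses

body = Executive([LimbController(n) for n in ("arm_l", "arm_r", "leg_l", "leg_r")])
print(body.plan("reach_for_food"))          # four words: all the executive needed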

The thing I often dislike about bottom-up approaches is the assumption that if we just get the bottom level working and put enough little pieces together, higher-order behavior will appear. There are tantalizing examples of uncoordinated motes creating such higher-order behavior (i.e. emergent behavior), but I don't consider this a guaranteed outcome ... so if you're building a bottom-up system and you don't tell me how you plan for the interactions among your motes to create something greater, I'm going to be skeptical.

My last quibble with Grand's approach to intelligence is his insistence that artificial life needs virtual embodiment in a virtual world. I agree with him that an intelligent entity must interact with its environment, and the interaction needs to include feedback that is meaningful to the entity's goals. But I don't see why the entity could not be a pure abstract mind (operating on language or some other informational substrate), and its environment could not simply be the computer itself: its file system, its input/output streams, its other programs. Grand considers options like this, and concludes that we wouldn't know enough to provide the right feedback for training such an entity, because we would be unable to draw inspiration from biological life; and without living in a world like ours, the entity would be unable to make sense of our information. I'd call that a skill issue. For further discussion of this topic, see Part 4 of my Symbol Grounding Problem series.
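If that sounds vague, here's roughly what I have in mind; this is a hypothetical sketch of my own, not anything from the book. The agent's "senses" are directory listings and its "actions" are file writes, so its entire feedback loop runs through the computer's own environment.

```python
import os

def sense(path="."):
    """Perception: the agent's whole view of its world is a directory listing."""
    return set(os.listdir(path))

def act(note, path="agent_memory.txt"):
    """Action: the agent changes its world by writing to it."""
    with open(path, "a") as f:
        f.write(note + "\n")

# One sense-act cycle: observe the world, leave a trace, observe the change.
before = sense()
act("I was here.")
after = sense()
print(after - before)   # on a fresh run, the new file appears: the world responded
```

A mind living at this level would need goals stated in informational terms, and feedback meaningful to those goals, but I see no principled obstacle; only the practical one Grand raises.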

The remaining big idea Grand introduces is holism. Holism is the notion that a system can be qualitatively different from any of its individual pieces. Put the pieces together the right way, and in a material sense you have no more than you started with, yet in another sense a whole new entity has arisen. As Grand says, "there is no such thing as half an organism," and there's no such thing as half a mind either. If you pull parts out of an organism, they stop being alive;[4] if you split subsystems out of a mind, they stop being intelligent.

I think this is a big part of Grand's "people are still special even if we're purely physical machines" argument. If you say that intelligence and consciousness are "just a product of little electric currents and chemical reactions," you could be right in a sense ... but you CANNOT conclude from this that a human has no greater worth than any old piece of wire carrying a current. Because the particular arrangements and combinations of these little events in a brain produce a whole new thing that has meaning beyond the events themselves.

This book does contain technical details about how Grand built his Creatures. He identifies neurons, chemoreceptors/chemoemitters, and genes as the "building blocks" of a biological organism's control network, links them with cybernetic concepts that function similarly, and describes how he assembled them into a prototype "norn." Despite my complaints about some of Grand's philosophy of intelligence, I think his work is closer to "the real deal" than much of what passes for intelligence in the more modern generative AI space. Norn intelligence is grounded and agentic. Their tiny brains have mechanisms for attention, reinforcement learning, generalizing, and forgetting. They have drives and reproductive cycles modeled on mammalian biochemistry, and they can adapt over the course of generations via genetic recombination and selection. The fascinating descriptions of how all this works only span about four chapters of a fifteen-chapter book.
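Since the book doesn't reproduce the actual norn source code, here's my own guess at the flavor of those building blocks, with every name and parameter invented for illustration: a chemoemitter converts a bodily state into a chemical concentration, a chemoreceptor converts a concentration into a signal for the neural net, and a "gene" is just the parameter bundle that wires one of these devices up.

```python
from dataclasses import dataclass

@dataclass
class Gene:
    chemical: str      # which chemical this device binds to
    threshold: float   # concentration at which the receptor starts responding
    gain: float        # how strongly concentration drives the output signal

chemistry = {"glucose": 0.2, "hunger_signal": 0.0}   # the creature's "bloodstream"

def chemoemitter(source_value, chemical, rate=0.1):
    """Secrete a chemical in proportion to some bodily variable."""
    chemistry[chemical] = min(1.0, chemistry[chemical] + rate * source_value)

def chemoreceptor(gene):
    """Turn a chemical level into a drive signal for the neural net."""
    level = chemistry[gene.chemical]
    return max(0.0, (level - gene.threshold) * gene.gain)

# Low glucose secretes a hunger chemical; a receptor defined by a "gene" turns
# that concentration into a signal that, in the full system, would bias the
# action-selection neurons.
chemoemitter(source_value=1.0 - chemistry["glucose"], chemical="hunger_signal")
hunger_gene = Gene(chemical="hunger_signal", threshold=0.05, gain=2.0)
print(round(chemoreceptor(hunger_gene), 3))   # a small but nonzero hunger drive
```

What I find appealing about this style is that behavior, biochemistry, and genetics all share one currency (numbers flowing through little devices), so evolution can act on the same parameters that learning and drives use at runtime.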

The last two chapters are devoted to AI safety concerns and the "slippery stuff" (consciousness and free will), respectively. They're very much like the comparable chapters in Growing Up with Lucy, so I won't break them down in detail. However, this book's version does have a couple nuances I want to call out.

First, for all his arguments that digital life can be like biological life in every way that matters, Grand concludes that his Creatures probably aren't conscious. (And this is fortunate for my opinion of him, since it throws a milder light on a couple statements I would otherwise find ethically disgusting.) He regards Creatures as non-conscious because they are "locked into a sensory-motor loop" and lack the "capacity to imagine"; in other words, they are pretty reactive. Although they have some self-awareness and can learn new behaviors, they don't make plans or have episodic memory. But this, in my opinion, is a really BAD reason to insist that something is non-conscious. Once again, the essence of consciousness is subjective experience ... and it is entirely unnecessary to reflect on, remember, or imagine experiences in order to simply have them. When you are submerged in a fever dream and most of your "higher" mental faculties are shut down, you are still having a moment-to-moment experience of suffering, and this still makes you more meaningful than a rock. Grand would argue that "insects and starfish and many other [biological] creatures" are non-conscious for the same reason, and this is troubling - it implies a license to disregard such creatures' interests that I don't think is warranted.

I'm going to resist my urge for a digression about the ethical treatment of Creatures - because this essay is long enough already, because I suspect the player community has gone over that extensively, and because said topic is barely in the book. The "AI safety" chapter focuses on whether AI might harm humans, not the reverse. So all I will say is that I find Grand's lack of attention to the subject concerning. He never so much as considers whether it was okay for him to put as much suffering and danger as he did into the Creatures' world. (He says that diseases, for example, were "pretty gratuitous" - he did not need to include them to make the norns or their environment work - but he did anyway.) There's a touching little story about how a couple from Australia e-mailed him a baby norn with a debilitating mutation, and he was kind enough to fix her genes and send her back. But he offers no reflections on his deliberate choice to add mutations to the norns' reproductive cycle. In short, I'm not sure Grand was taking his role as a creator all that seriously. Although I'm skeptical that machine intelligence can be conscious, I'm also skeptical that it cannot! And even if the norns feel nothing at all, the people who raise them certainly feel things about them. So Grand's cavalier approach does bother me a bit. He would say that he's achieved a great success simply by making me ask questions; I say that's not good enough. Philosophically interesting questions are all very well, but they don't provide a blanket justification for harm, nor can Grand push all responsibility off on his players.

He also makes an interesting comment about free will here. Although Grand does not think free will exists (apparently because he can't wrap his mind around self-causality), he argues that we have to act as if it exists to keep ourselves and society going. "At the same time we must realize that we too are slaves to our circumstances, but although our future is inevitable we must believe that we are responsible for how things pan out."[5] Huh. In effect, he's saying that we humans cannot function without practicing insanity on an individual and collective level; our lives will fall apart unless we deliberately believe something that is not true. What a curious argument! It usually seems to be the case that aligning our actions to reality produces better outcomes, not worse ones. So I myself would be inclined to take the fact that free will works as, if not proof, at least a piece of evidence that it is real.

Which brings me to the conclusion: do I think Grand succeeded at his big goals for this book? Does it furnish convincing arguments that life, mind, and consciousness are mechanistic, and that they don't need to be anything more?

I think it gets part of the way there. Grand's favored definition of "life" checks out; and if one opts to define life that way, then life can certainly exist inside a computer simulation. I also think his holism argument does a decent job of justifying the intrinsic value of beings with minds. A physical materialist is not obligated to think that humans, cows, and parrots have no more meaning than rocks, thermostats, or bicycles. But his arguments fail in some other respects. They neither explain phenomenal consciousness (I covered that above), nor address all the unsatisfying implications of materialism.

Don't take anything that follows as a claim that something must be true just because we want it to be. Grand himself doesn't try to prove physical materialism; he takes it for granted that this worldview is correct, and his whole argument is built around convincing the reader that they can be happy with this. So my counterargument will be focused there as well; I'm going to explain why Grand's worldview doesn't make me happy.

The most unsatisfying thing about physical materialism was never that it focused too heavily on matter alone, and too lightly on events or processes. The unsatisfying thing is that it recklessly assumes that all of reality is accessible to our senses (and their extensions via measuring instruments), that the laws and conditions of physics are absolute and universal, and that there can be no world other than the one in which we find ourselves immediately embedded.[6] The various transformations of matter that Grand has in view as "processes" are still parts of this prosaic physical world - so shifting the focus to them does nothing to resolve the disappointments of those hoping for what C. S. Lewis called "other natures." If I am, for instance, worried about whether there is an aspect of me that can continue having experiences after death, it makes no difference whether we define death as "destruction of the material body" or "cessation of the processes of life." Either way, the physical side of me is going to go poof. If my immaterial side consists of information or a Platonic form, I'd better hope there's somewhere it has been "backed up," since the physical substrate that instantiates it will be dissolving. Even the fragmentary memories of me in others' minds will eventually be lost.

The most unsatisfying thing about "clockwork" conceptions of the mind was never that they failed to consider emergence and holism. The unsatisfying thing is the way their simplistic view of causality makes all human behavior inevitable, and derived (however distantly) from reproductive fitness optimization. I won't dispute that the composition of many little things can be qualitatively different and more meaningful than all those little things considered separately. But I will dispute that this does anything to resolve the collision between universal determinism and other important ideas, like moral realism and moral responsibility.

Grand seems to think the unpredictability of the future is enough to make determinism tolerable; though everything is pre-ordained, his unfolding life is still a surprise from his perspective. But this is missing the point. The most disastrous implication of determinism isn't boredom; it's loss of agency. Determinism destroys the idea that we can, to some degree, build our own characters, and that praise and blame are not merely ways to manipulate us, but things we deserve. For this problem, Grand leaves us with nothing but a call to keep pretending that we have agency - to be insane.

Whether you agree with Grand or me or neither of us, I hope that you enjoyed this wild tour and it gave you something to think about. There's a lot to this book for its size.

Until the next cycle,
Jenny

[1] Margulis, Lynn. "Life: biology." Britannica (2025). Accessed 23 October 2025. https://www.britannica.com/science/life

[2] Here's the exact quote from Creation page 166: "Even reinforcement is hierarchical - when someone glowers at us, we automatically make a connection between this entirely harmless phenomenon, via a chain of inference, to an ultimate fear of being hurt. In our childhood, reinforcement was immediate and directly painful or pleasurable (a smack or a cuddle, say). Over the years, most of us have learned to associate stern looks with smacks and antisocial behavior with stern looks. We behave in such a way as to minimize our risk of being smacked while maximizing our chances of being cuddled, even if nobody actually smacks or cuddles us anymore. I don't see how else it could be - why should we choose not to do something, simply because we have been frowned at?" If Grand truly doesn't "see how else it could be," I have to wonder how much he thinks and cares about other people's feelings, which are the things being most directly signaled by stern looks. Maybe he in particular only behaves well due to conditioning via punishment, and assumes this is true for all the rest of us.

[3] Grand, Steve. Creation: Life and How to Make It. Phoenix, Orion Books Ltd., 2001. p. 142

[4] There are exceptions, such as vegetative propagation. But taking a cutting still doesn't make half a plant - it makes two plants.

[5] Grand, Creation, p. 253

[6] Depending on how you look at it, the Simulation Hypothesis may or may not be consistent with physical materialism. I prefer to say that it is not. If our universe is a simulation, then its physical laws are not guaranteed absolute (because they would be subject to backdoor commands and program revisions, which would look like the miraculous from inside), and the containing overworld could be radically different from our simulated one. It would be supernatural for all practical purposes.
