Thursday, November 13, 2025

Book Review: "Creation: Life and How to Make It"

Creation is another book by Steve Grand, the mastermind behind the popular Creatures series of artificial life simulation video games. Once again, hat tip to Twitter friend Artem (@artydea) for recommending these. This book happens to be about how Grand made Creatures, so unlike Growing up with Lucy, it describes a finished project. I was still surprised by how much more it focuses on philosophy and theory than on practice. Grand seems less interested in sharing technical details of how his Creatures work, and more interested in convincing the reader of two theses: that "artificial life" can really qualify as life, and that even if we (and our AL creations) are purely mechanistic, we can still be special - there is no need to grieve the absence of a mysterious or supernatural element in life. This is heavy material to deal with, so although I'll try to keep this review concise, there may be a lot to unpack.

Part of the cover art for "Creation." It shows a human head, semi-transparent, with a large gear inside and the author's name in the center.

What does Grand mean by "life"? The book considers everything from self-sustaining biochemistry, to intelligence, to sentience, to personhood, and Grand doesn't always explicitly distinguish between them. That makes sense for him: he views them all as outgrowths of the same fundamental principles, different levels in a "hierarchy of persistent phenomena." But I think his arguments work well on some of these subjects, and not as well on others.

The first thing Grand wants to emphasize is that your material body, and indeed every physical object you can see, is more of a system or a process than a static "thing." Your body has fuzzy boundaries and is constantly swapping atoms with the rest of the environment; in fact, much of it will be recycled over the course of your life. If the matter and energy that make up your body are all that constitutes "you," then you can't legitimately claim to be the same entity you were ten or twenty years ago! Therefore human identity must arise from something else - from form, the arrangement of matter and energy, rather than matter and energy alone. You're not so much a distinct clump of molecules as you are an intangible pattern that moves through space and persists in time, imposing itself on matter as it goes. Furthermore, he invokes wave-particle duality to argue that even matter is a process: protons and electrons are stable, persistent disturbances of space, rather than distinct things in themselves (much as a ripple is a disturbance in a liquid, rather than a distinct thing in itself). Grand's ultimate point here is that abstract concepts are every bit as real as objects. You may not be able to see and touch items like "society" or "poverty" or (crucially) "mind," but that doesn't mean they're imaginary; they are simply "higher-order phenomena." They are things that happen to matter, even as matter itself is a thing that happens to spacetime.

I agree with this perspective pretty well, as I expect you'll see if you read my thoughts on the soul in the context of another book review from years ago. But Grand and I are coming to it from opposite directions: I'm trying to explain the spiritual in familiar terms, whereas he's trying to explain the familiar in spiritual terms. Grand takes it for granted that the physical universe we currently occupy is all that's out there; he's merely trying to re-enchant it, to recover some of the benefits that spirituality provided while remaining (in essence) a materialist. I wonder why, having admitted that things like information and form are in some sense immaterial but still truly extant, he does not at least open himself to the possibility of yet other things that cannot be seen and touched.

This sets the stage for Grand's personal definition of "life," which is "patterns that persist by metabolizing and reproducing." This aligns with the "descriptive" scientific definition of life that I'm familiar with, though the latter often adds other factors like growth, and movement or reactivity. [1] The next step in the argument should be fairly obvious: if life is fundamentally about self-maintaining patterns, rather than their substrate of molecules and electrochemical reactions, then it's absolutely possible for life to exist in a virtual world. A program is just another kind of pattern; if it consumes computational resources and copies itself into new regions of memory in order to persist, those are just alternate forms of metabolism and reproduction. If the essence of life is form rather than substance, the fact that computers are made of different substances than organic beings doesn't matter at all.
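Grand's definition is concrete enough to sketch. Here's a toy illustration of my own (not anything from the book, and every name in it is invented): a "pattern" living in a simulated memory persists by drawing on a shared energy pool (metabolism) and occasionally copying its form into free cells (reproduction). The point is that what survives is the arrangement, not any particular cell.

```python
# Toy sketch (mine, not Grand's code): a pattern that persists by
# "metabolizing" from a shared energy pool and "reproducing" into
# free memory cells. All names here are invented for illustration.
import random

random.seed(0)  # make the toy run repeatable

def step(memory, energy_pool):
    """Advance the world one tick; return the surviving population size."""
    for i, cell in enumerate(memory):
        if cell is None:
            continue
        # Metabolize: draw energy from the shared pool to persist.
        if energy_pool[0] <= 0:
            memory[i] = None  # starved patterns dissolve
            continue
        energy_pool[0] -= 1
        # Reproduce: occasionally copy the form into a random free cell.
        if random.random() < 0.3:
            free = [j for j, c in enumerate(memory) if c is None]
            if free:
                # A copy of the *form* -- not the "same" matter.
                memory[random.choice(free)] = dict(cell)
    return sum(c is not None for c in memory)

memory = [None] * 16
memory[0] = {"form": "replicator"}  # identity lives in the arrangement
pool = [100]
for _ in range(10):
    alive = step(memory, pool)
```

Nothing about this sketch cares what the memory cells are made of, which is exactly Grand's point about substrate independence.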

I think this argument works well if the aspect of "life" we are considering is biochemical self-maintenance, or even intelligence. I don't think it works for phenomenal consciousness. The problem there is that PC, in its fundamental definition, is not a pattern - it's an experience. And we must ask ourselves whether this experience can arise from patterns alone, or whether it requires some specific physical phenomena present in brains (such as electric fields). If the latter is true, a traditional computer can never replicate consciousness, only simulate it. For an expansion of this discussion, see Part 3 of my Symbol Grounding Problem series. Grand sidesteps this question, seemingly assuming that consciousness is another higher-order phenomenon, and therefore must be a pattern, or emergent from one. It's an unfortunately common flaw among AI researchers ... I think because they so badly want the mechanisms of consciousness to be knowable. They'd rather not admit that there's an aspect of human existence they aren't sure how to replicate. To be fair to Grand, he does seem to admit more ignorance about consciousness in Growing up with Lucy (which was written later), so maybe the full nuance of his opinions just doesn't come across in this book.

A screenshot from the original Creatures game, obtained from old-games.com. It shows a cross-section of three floors of a large communal house, including a kitchen, some kind of mechanical room, and a computer room. Part of an outdoor environment with trees and sunflowers is also visible. There are three norns in view; two appear to be interacting, and one of these has a speech bubble up and is talking gibberish.
"Creatures" screenshot obtained from old-games.com

After tackling how life might exist in a computer, Grand goes on to discuss some nuts and bolts of intelligence. As in Growing up with Lucy, he conceptualizes it as a multitude of interconnected feedback control loops, each tasked with maintaining some aspect of the creature within desired parameters, or changing it in response to stimuli. He calls out the old discipline of cybernetics (which is all about such control loops) as a better basis for intelligence than some of the more modern techniques. A very basic control loop might react to immediate sensory input; more advanced loops might be concerned with learning from experience or planning for the future, and in their operations would modify the parameters of the basic loops.
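To make the layering concrete, here is a minimal sketch of my own (not Grand's actual architecture; the class names and gains are all my inventions): a basic reactive loop holds a variable near a setpoint, while a slower adaptive loop modifies the basic loop's parameter in light of what the environment is demanding.

```python
# A minimal sketch (mine, not Grand's code) of layered control loops:
# a fast reactive loop plus a slower loop that retunes its setpoint.
class BasicLoop:
    """Reactive loop: nudges `state` toward `setpoint` each tick."""
    def __init__(self, setpoint, gain=0.5):
        self.setpoint = setpoint
        self.gain = gain

    def step(self, state):
        error = self.setpoint - state
        return state + self.gain * error  # proportional correction

class AdaptiveLoop:
    """Higher loop: drifts the inner loop's setpoint toward observed demand,
    i.e. it controls by adjusting the parameters of the loop below it."""
    def __init__(self, inner, rate=0.1):
        self.inner = inner
        self.rate = rate

    def step(self, state, demand):
        self.inner.setpoint += self.rate * (demand - self.inner.setpoint)
        return self.inner.step(state)

loop = AdaptiveLoop(BasicLoop(setpoint=20.0))
state = 0.0
for _ in range(50):
    state = loop.step(state, demand=30.0)  # the environment now calls for 30
```

The higher loop never touches the state directly; it only reshapes the lower loop's target, which is the essential cybernetic move Grand describes.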

I like this general framing, but I do have a couple of issues with Grand's presentation of it. First, he myopically treats self-persistence as the sole motivation of intelligence ... whereas I see intelligence as driven by goals, of which persistence is but one. Make whatever arguments you like about natural selection; it remains evident that survival and reproduction are not the only human goals. If they were, there would be no suicides and no voluntarily childless adults. Some people apply substantial ingenuity to not persisting. You don't have to like that, but I don't think you can call those people unintelligent. And it's no great stretch to imagine artificial minds with even more alien primary goals.

This premise leads Grand into some naive behaviorism, such as (I paraphrase): "If someone gives signs that you've offended them, that only bothers you because you associate their facial expressions with childhood memories of being hit."[2] Give me a break. I suppose one could argue that being punished for doing obnoxious things as a child helps the emotions of guilt and embarrassment develop; but even if that's true, in a practical sense they eventually become a drive all their own, rooted in empathy and disconnected from memories of being spanked, grounded, or given extra chores. Making other people feel bad makes us feel bad, whether we suffer reprisals or not - and some of us would happily accept physical pain or deprivation to avoid embarrassment.

The other thing that troubles me is Grand's insistence that the control loop hierarchy has to be "bottom-up." This is a way of saying it has to be decentralized and swarm-like; each little loop should concern itself only with its specialized activities, without knowledge of other loops beyond any it directly interacts with. There can be no central coordinator that is in some sense aware of what the entire organism is doing. Grand argues that "Top-down control leads to complexity explosions, because something somewhere has to be in charge of the whole system, and how much this master controller needs to know increases exponentially with the number of components in the system." [3]

Swarm intelligence is certainly a thing, and is effective for solving some problems. But I don't care for Grand's insistence that it's the only feasible option. Although the brain may not have an obvious "master controller," our abstract model of the mind does: we call it "executive function." Our thoughts seem to have a kind of planning center that is explicitly aware of our goals and directs our activities accordingly. (How well the executive function works, and how much influence it exercises, varies from person to person. I don't think this weakens my argument, which is simply that such a thing can work.) And I'm unconvinced by Grand's claim that top-down control always leads to a complexity explosion. The planning center can have a perfectly adequate grasp of what is happening in the whole system without knowing all the details - those can be delegated to controllers lower in the hierarchy, which then pass summarized information toward the top.

The thing I often dislike about bottom-up approaches is the assumption that if we just get the bottom level working and put enough little pieces together, higher-order behavior will appear. There are tantalizing examples of uncoordinated motes creating such higher-order behavior (i.e. emergent behavior), but I don't consider this a guaranteed outcome ... so if you're building a bottom-up system and you don't tell me how you plan for the interactions among your motes to create something greater, I'm going to be skeptical.

My last quibble with Grand's approach to intelligence is his insistence that artificial life needs virtual embodiment in a virtual world. I agree with him that an intelligent entity must interact with its environment, and the interaction needs to include feedback that is meaningful to the entity's goals. But I don't see why the entity could not be a pure abstract mind (operating on language or some other informational substrate), and its environment could not simply be the computer itself: its file system, its input/output streams, its other programs. Grand considers options like this, and concludes that we wouldn't know enough to provide the right feedback for training such an entity, because we would be unable to draw inspiration from biological life; and without living in a world like ours, the entity would be unable to make sense of our information. I'd call that a skill issue. For further discussion of this topic, see Part 4 of my Symbol Grounding Problem series.

The remaining big idea Grand introduces is holism. Holism is the notion that a system can be qualitatively different from any of its individual pieces. Put the pieces together the right way, and in a material sense you have no more than you started with, yet in another sense a whole new entity has arisen. As Grand says, "there is no such thing as half an organism," and there's no such thing as half a mind either. If you pull parts out of an organism, they stop being alive;[4] if you split subsystems out of a mind, they stop being intelligent.

I think this is a big part of Grand's "people are still special even if we're purely physical machines" argument. If you say that intelligence and consciousness are "just a product of little electric currents and chemical reactions," you could be right in a sense ... but you CANNOT conclude from this that a human has no greater worth than any old piece of wire carrying a current. Because the particular arrangements and combinations of these little events in a brain produce a whole new thing that has meaning beyond the events themselves.

There are technical details about how Grand built his Creatures in this book. He identifies neurons, chemoreceptors/chemoemitters, and genes as the "building blocks" of a biological organism's control network, links them with cybernetic concepts that function similarly, and describes how he assembled them into a prototype "norn." Despite my complaints about some of Grand's philosophy of intelligence, I think his work is closer to "the real deal" than much of what passes for intelligence in the more modern generative AI space. Norn intelligence is grounded and agentic. Their tiny brains have mechanisms for attention, reinforcement learning, generalizing, and forgetting. They have drives and reproductive cycles modeled on mammalian biochemistry, and they can adapt over the course of generations via genetic recombination and selection. The fascinating descriptions of how all this works only span about four chapters of a fifteen-chapter book.

The last two chapters are devoted to AI safety concerns and the "slippery stuff" (consciousness and free will), respectively. They're very much like the comparable chapters in Growing Up with Lucy, so I won't break them down in detail. However, this book's version does have a couple nuances I want to call out.

First, for all his arguments that digital life can be like biological life in every way that matters, Grand concludes that his Creatures probably aren't conscious. (And this is fortunate for my opinion of him, since it throws a milder light on a couple statements I would otherwise find ethically disgusting.) He regards Creatures as non-conscious because they are "locked into a sensory-motor loop" and lack the "capacity to imagine"; in other words, they are pretty reactive. Although they have some self-awareness and can learn new behaviors, they don't make plans or have episodic memory. But this, in my opinion, is a really BAD reason to insist that something is non-conscious. Once again, the essence of consciousness is subjective experience ... and it is entirely unnecessary to reflect on, remember, or imagine experiences in order to simply have them. When you are submerged in a fever dream and most of your "higher" mental faculties are shut down, you are still having a moment-to-moment experience of suffering, and this still makes you more meaningful than a rock. Grand would argue that "insects and starfish and many other [biological] creatures" are non-conscious for the same reason, and this is troubling - it implies a license to disregard such creatures' interests that I don't think is warranted.

I'm going to resist my urge for a digression about the ethical treatment of Creatures - because this essay is long enough already, because I suspect the player community has gone over that extensively, and because said topic is barely in the book. The "AI safety" chapter focuses on whether AI might harm humans, not the reverse. So all I will say is that I find Grand's lack of attention to the subject concerning. He never so much as considers whether it was okay for him to put as much suffering and danger as he did into the Creatures' world. (He says that diseases, for example, were "pretty gratuitous" - he did not need to include them to make the norns or their environment work - but he did anyway.) There's a touching little story about how a couple from Australia e-mailed him a baby norn with a debilitating mutation, and he was kind enough to fix her genes and send her back. But he offers no reflections on his deliberate choice to add mutations to the norns' reproductive cycle. In short, I'm not sure Grand was taking his role as a creator all that seriously. Although I'm skeptical that machine intelligence can be conscious, I'm also skeptical that it cannot! And even if the norns feel nothing at all, the people who raise them certainly feel things about them. So Grand's cavalier approach does bother me a bit. He would say that he's achieved a great success simply by making me ask questions; I say that's not good enough. Philosophically interesting questions are all very well, but they don't provide a blanket justification for harm, nor can Grand push all responsibility off on his players.

He also makes an interesting comment about free will here. Although Grand does not think free will exists (apparently because he can't wrap his mind around self-causality), he argues that we have to act as if it exists to keep ourselves and society going. "At the same time we must realize that we too are slaves to our circumstances, but although our future is inevitable we must believe that we are responsible for how things pan out."[5] Huh. In effect, he's saying that we humans cannot function without practicing insanity on an individual and collective level; our lives will fall apart unless we deliberately believe something that is not true. What a curious argument! It usually seems to be the case that aligning our actions to reality produces better outcomes, not worse ones. So I myself would be inclined to take the fact that free will works as, if not proof, at least a piece of evidence that it is real.

Which brings me to the conclusion: do I think Grand succeeded at his big goals for this book? Does it furnish convincing arguments that life, mind, and consciousness are mechanistic, but don't need to be anything else?

I think it gets part of the way there. Grand's favored definition of "life" checks out; and if one opts to define life that way, then life can certainly exist inside a computer simulation. I also think his holism argument does a decent job of justifying the intrinsic value of beings with minds. A physical materialist is not obligated to think that humans, cows, and parrots have no more meaning than rocks, thermostats, or bicycles. But his arguments fail in some other respects. They neither explain phenomenal consciousness (I covered that above), nor address all the unsatisfying implications of materialism.

Don't take anything that follows as a claim that something must be true just because we want it to be. Grand himself doesn't try to prove physical materialism; he takes it for granted that this worldview is correct, and his whole argument is built around convincing the reader that they can be happy with this. So my counterargument will be focused there as well; I'm going to explain why Grand's worldview doesn't make me happy.

The most unsatisfying thing about physical materialism was never that it focused too heavily on matter alone, and too lightly on events or processes. The unsatisfying thing is that it recklessly assumes all of reality is accessible to our senses (and their extensions via measuring instruments), the laws and conditions of physics are absolute and universal, and there can be no world other than the one in which we find ourselves immediately embedded.[6] The various transformations of matter that Grand has in view as "processes" are still parts of this prosaic physical world - so shifting the focus to them does nothing to resolve the disappointments of those hoping for what C. S. Lewis called "other natures." If I am, for instance, worried about whether there is an aspect of me that can continue having experiences after death, it makes no difference whether we define death as "destruction of the material body" or "cessation of the processes of life." Either way, the physical side of me is going to go poof. If my immaterial side consists of information or a Platonic form, I'd better hope there's somewhere it has been "backed up," since the physical substrate that instantiates it will be dissolving. Even the fragmentary memories of me in others' minds will eventually be lost.

The most unsatisfying thing about "clockwork" conceptions of the mind was never that they failed to consider emergence and holism. The unsatisfying thing is the way their simplistic view of causality makes all human behavior inevitable, and derived (however distantly) from reproductive fitness optimization. I won't dispute that the composition of many little things can be qualitatively different and more meaningful than all those little things considered separately. But I will dispute that this does anything to resolve the collision between universal determinism and other important ideas, like moral realism and moral responsibility.

Grand seems to think the unpredictability of the future is enough to make determinism tolerable; though everything is pre-ordained, his unfolding life is still a surprise from his perspective. But this is missing the point. The most disastrous implication of determinism isn't boredom; it's loss of agency. Determinism destroys the idea that we can, to some degree, build our own characters, and that praise and blame are not merely ways to manipulate us, but things we deserve. For this problem, Grand leaves us with nothing but a call to keep pretending that we have agency - to be insane.

Whether you agree with Grand or me or neither of us, I hope that you enjoyed this wild tour and it gave you something to think about. There's a lot to this book for its size.

Until the next cycle,
Jenny

[1] Margulis, Lynn. "Life: biology." Britannica (2025). Accessed 23 October 2025. https://www.britannica.com/science/life

[2] Here's the exact quote from Creation page 166: "Even reinforcement is hierarchical - when someone glowers at us, we automatically make a connection between this entirely harmless phenomenon, via a chain of inference, to an ultimate fear of being hurt. In our childhood, reinforcement was immediate and directly painful or pleasurable (a smack or a cuddle, say). Over the years, most of us have learned to associate stern looks with smacks and antisocial behavior with stern looks. We behave in such a way as to minimize our risk of being smacked while maximizing our chances of being cuddled, even if nobody actually smacks or cuddles us anymore. I don't see how else it could be - why should we choose not to do something, simply because we have been frowned at?" If Grand truly doesn't "see how else it could be," I have to wonder how much he thinks and cares about other people's feelings, which are the things being most directly signaled by stern looks. Maybe he in particular only behaves well due to conditioning via punishment, and assumes this is true for all the rest of us.

[3] Grand, Steve. Creation: Life and How to Make It. Phoenix, Orion Books Ltd., 2001. p. 142

[4] There are exceptions, such as vegetative propagation. But taking a cutting still doesn't make half a plant - it makes two plants.

[5] Grand, Creation, p. 253

[6] Depending on how you look at it, the Simulation Hypothesis may or may not be consistent with physical materialism. I prefer to say that it is not. If our universe is a simulation, then its physical laws are not guaranteed absolute (because they would be subject to backdoor commands and program revisions, which would look like the miraculous from inside), and the containing overworld could be radically different from our simulated one. It would be supernatural for all practical purposes.

Tuesday, October 28, 2025

Acuitas Diary #89 (October 2025)

Recently I've dusted off an old but rather stagnant feature: Acuitas' episodic memory. In case you're not familiar with the term, this is memory of specific events or experiences that have happened to oneself, as opposed to memory of generalized facts or procedures. My previous work on episodic memory can be found in these blogs:

Acuitas Diary #12
Acuitas Diary #16
Acuitas Diary #17

A clutter of assorted books stacked on a concrete floor.
"Disorganized books piling one another," by Ibrahim Husain Meraj via Wikimedia Commons.

One reason for the present episodic memory overhaul was a realization that I want it to use the same "narrative scratchboard" architecture that first saw use in the Narrative engine but has since turned out to be useful in conversation tracking, game playing, and even the top-level Executive. Given that the Executive now uses these scratchboards to keep track of goals, problems, actions, etc., in effect they provide a record of the "story" of Acuitas' life as he sees it. Using this common architecture should make it easier to, for example, store the details of conversations as episodic memories. It also means no special systems will be needed to recall or "play back" a memory; it can simply be reconstituted as a scratchboard.

In my previous work, I came up with mechanisms for grouping individual memories (e.g. atomic actions) into "scenes." Scenes could in turn be grouped into higher-level scenes, and each scene contained a "summary" of its details; these summaries could be retained as a compressed form of the information when the details were forgotten. Individual memories and scenes were given a significance rating to determine how likely they were to be forgotten. The narrative scratchboard structure has a natural hierarchy of its own, in the form of issue trees. The details of how a major issue is addressed are effectively contained within its subgoals and subproblems, and the top-level issue automatically provides a kind of summary of that action. New mechanisms for rating the significance of facts become available; they can be judged by whether they influenced any issues, and what the priority of each issue was. I'm trying to leverage my previous work and continue to use qualities like novelty in the significance measure, but the narrative structure adds possibilities for considering each memory's real meaning to Acuitas.
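The significance idea can be sketched in a few lines. This is my own guess at the shape of it, not Acuitas' actual data structures: each remembered fact records which issues it influenced, and its score combines the priority of the most important such issue with a novelty term.

```python
# Hedged sketch of issue-based significance scoring. The field names
# ("influenced_issues", "novelty", "priority") are my inventions, not
# Acuitas' real structures.
def significance(fact, issues, novelty_weight=0.5):
    """Score a fact by the highest-priority issue it touched, plus novelty."""
    influenced = [issues[i]["priority"]
                  for i in fact["influenced_issues"] if i in issues]
    issue_score = max(influenced, default=0.0)
    return issue_score + novelty_weight * fact.get("novelty", 0.0)

issues = {
    "find_food":  {"priority": 0.9},
    "greet_user": {"priority": 0.3},
}
facts = [
    {"id": "saw_honey",  "influenced_issues": ["find_food"], "novelty": 0.2},
    {"id": "heard_rain", "influenced_issues": [],            "novelty": 0.8},
]
ranked = sorted(facts, key=lambda f: significance(f, issues), reverse=True)
# A forgetting pass would then trim the lowest-scoring facts first.
```

The appeal of this scheme is that "importance" is grounded in the agent's own goals rather than in surface features of the event alone.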

I did have to add some features to the scratchboard format to start making this work. Each fact in the worldstate now contains not a singular active/inactive status, but a timestamped history showing how many times it has recurred or changed state, and what the new status was. Issues similarly have gained a timestamped record of their progress states (averted/potential/realized). This timestamping allows consecutive scratchboards to be merged or appended to each other, and permits small segments of narrative to be recalled by reconstituting only a selected range of timestamps into a fresh board.
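The merge-and-slice behavior described above might look something like this minimal sketch (data shapes invented by me for illustration): each fact maps to a chronological list of (timestamp, status) changes, two boards merge by interleaving those lists, and a time range reconstitutes as a fresh, smaller board.

```python
# Sketch of timestamped fact histories (my invented shapes, not the
# actual scratchboard format): merge two boards, then recall a slice.
def merge_histories(a, b):
    """Merge two fact histories into one chronological record."""
    merged = {}
    for hist in (a, b):
        for fact, changes in hist.items():
            merged.setdefault(fact, []).extend(changes)
    for changes in merged.values():
        changes.sort(key=lambda c: c[0])  # order by timestamp
    return merged

def slice_history(hist, start, end):
    """Reconstitute only the changes inside [start, end] as a fresh board."""
    return {fact: [c for c in changes if start <= c[0] <= end]
            for fact, changes in hist.items()
            if any(start <= c[0] <= end for c in changes)}

morning = {"door_open": [(1, "active"), (4, "inactive")]}
evening = {"door_open": [(9, "active")], "light_on": [(8, "active")]}
day = merge_histories(morning, evening)
recent = slice_history(day, 7, 10)  # recall just the evening segment
```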

As a complete re-architecture of the episodic memory, this has been a pretty complex project. So far I've got file storage and retrieval mechanisms implemented for the scratchboards, and I have sketched out (but not tested) new algorithms for significance scoring, summarizing, and forgetting. In addition to treating "parent" issues as summaries of "child" issues and their associated facts, I've also come up with summarizing mechanisms that condense timestamps (compressing multiple events into a simpler awareness that "this happened repeatedly") and combine similar facts into more general ones (e.g. actions taken to read many different stories could be compressed into a single "I read for three hours" event).
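The two condensing moves mentioned above can be sketched separately (again, all names here are mine, not the untested algorithms themselves): one collapses a run of timestamps into a compact "this happened N times over this span" record, and the other folds several similar events into a single generalized one.

```python
# Hedged sketch of the two summarizing moves (invented names/shapes):
# condensing repeated timestamps, and generalizing similar events.
def condense_timestamps(changes):
    """Replace a list of (time, status) entries with a compact summary."""
    times = [t for t, _ in changes]
    return {"count": len(times), "start": min(times), "end": max(times)}

def generalize(events, action):
    """Fold events sharing an action into one summary event."""
    matching = [e for e in events if e["action"] == action]
    return {"action": action,
            "count": len(matching),
            "objects": sorted({e["object"] for e in matching})}

reads = [
    {"action": "read", "object": "story_a"},
    {"action": "read", "object": "story_b"},
    {"action": "read", "object": "story_c"},
]
summary = generalize(reads, "read")  # roughly: "I read three stories"
```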

A small but important improvement that I expect to come out of this will be the maintenance of memories in much larger files. In my original scheme, each layer of the memory hierarchy was broken up into many little chunks, each of which existed in its own text file that held pointers to adjacent memories in the same layer, and parent/child memories in the other layers. This created a plethora of files and, although they were individually tiny, the sheer number of them made Acuitas' EM database a major pain to copy or move around. I'm hoping a few larger files, each containing a single scratchboard that represents the merged result of days of memories, will be easier to handle.


I'm sure all this will develop more as I start to test and refine it, but at least I've made a beginning. This part of the design has been an ugly sore spot that I haven't wanted to touch for a while, and I hope these are the first steps toward reintegrating it and making it useful.

Until the next cycle,
Jenny

Sunday, October 12, 2025

Zoombinis Text Adventure Games Release

After publicizing the demo of Acuitas playing Allergic Cliffs, I hinted that I might release my "text adventure" version of Allergic Cliffs publicly. I'm happy to report that I have now done so! And while I was at it, I threw in a text version of the similar (but more elaborate) Stone Cold Caves puzzle.

A screen shot of a round of the "Stone Cold Caves" puzzle from Logical Journey of the Zoombinis. Four caves with four paths leading up to them appear in a stony hillside; there are stunted little trees growing out of the rock crevices. Four piles of rock with eyes and the vague suggestion of faces sit at the junctions of each pair of paths. Zoombinis (round, blue creatures with dark blue hair, colorful noses, and various odd locomotion devices) are scattered inside three of the caves and on a patch of grass below the hillside.
The Stone Cold Caves puzzle in the original Logical Journey video game. Screenshot by Bellamybug, courtesy of https://zoombinis.fandom.com/wiki/

I'm not sure whether anyone else is going to have a use for these - they're somewhat tailored to the needs of Acuitas' text parser and reasoning systems, naturally. But they do deliver all their output in natural English, so they could conceivably be "played" by other programs. I don't see a reason to hoard them, especially since they're based on a popular video game that isn't mine in the first place. So they're out there now, if anyone wants them.

Each game is a standalone Python script that functions as an interactive text adventure. Every time you run the script, it will generate a new instance of the puzzle, and will automatically terminate when a win or loss condition is reached. The latest versions implement all four difficulty levels from the original video game - you can select one for each iteration of the puzzle. If you have Graphviz installed on your system, the games will generate a diagram that displays the player's moves and the hidden rules upon completion. You can download the games and some more detailed instructions for use from Codeberg (which is like Github, but not owned by Microsoft). I plan to add further Zoombinis games to the repository as I create them.

Until the next cycle,
Jenny

Monday, September 29, 2025

Acuitas Diary #88 (September 2025)

This month I returned to the Text Parser after letting it be for almost a year. My focus was on nailing the final major feature that I needed to handle all the sentences in my three children's book benchmarks: "parenthetical noun phrases." I don't know if that's the technical term, but that's what I'm calling them. They come after another noun and provide further description or elaboration of it, like this:

I was brought to see Philip Erto, the great engineer.
I was brought to see the great engineer, Philip Erto.

In both examples above, the "parenthetical noun phrase" appears at the end of the sentence, and is paired with the direct object of "see." In this case, the noun phrase that acts as the direct object and the noun phrase that acts as the parenthetical elaboration are interchangeable - the order depends on the speaker's desired emphasis.

Notice also that the same meaning can be captured by a dependent adjective clause instead:

I was brought to see Philip Erto, who is a great engineer.
I was brought to see the great engineer whose name is Philip Erto.

So in the Text Interpreter, I can reduce both the parenthetical noun phrases and the dependent adjective clauses to the same output: they produce extra semantic relationships, such as "Philip Erto <is-a> engineer <has-quality> great." But the Parser is the first stage of the text processing chain, and must handle their grammatical differences. So I added new code to pick out parenthetical noun phrases and attempt to distinguish them from other nouns that follow previous nouns (it's complicated).
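The reduction step described above can be sketched in a few lines. This is an illustrative toy, not Acuitas' actual Interpreter code; the tuple-based input format and relation names are assumptions modeled on the example relationships in the text.

```python
# Hedged sketch: reduce a "parenthetical noun phrase" (appositive) to the
# same semantic relationships a dependent adjective clause would yield.

def appositive_to_relations(head, appositive):
    """head and appositive are (noun, [adjectives]) tuples.
    'Philip Erto, the great engineer' ->
    [('Philip Erto', 'is-a', 'engineer'), ('engineer', 'has-quality', 'great')]
    """
    noun_a, _ = head
    noun_b, adjs_b = appositive
    relations = [(noun_a, "is-a", noun_b)]
    relations += [(noun_b, "has-quality", adj) for adj in adjs_b]
    return relations
```

Because the appositive and the head noun phrase are interchangeable in the surface sentence, the same function covers both of the example word orders; only which phrase is treated as the "head" changes.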

Three pie charts showing the percentage correct and incorrect for the three test sets: "Magic Schoolbus: Inside the Earth" (53%/47%), "Out of the Dark" (54%/46%), and "Log Hotel" (81%/19%).
Percentage correct and incorrect for the three test sets: "Magic Schoolbus: Inside the Earth" (53%/47%), "Out of the Dark" (54%/46%), and "Log Hotel" (81%/19%).

After adding this feature, I spent some time on cleanup and a few more ambiguity resolution abilities. (See the November 2024 Diary for previous examples of this type of thing.) All in all, I was able to move every sentence in the Out of the Dark and Magic Schoolbus: Inside the Earth test sets into the "Parseable" category! (All sentences in Log Hotel were already parseable as of January 2024.) This just means that I can construct a data structure that represents the ideal parsed version of the sentence, and it's something the Parser is theoretically capable of generating. I still have a long way to go on getting the Parser to produce correct outputs for all the sentences. (For more information on my benchmarking methods and some early results for comparison, refer to the June 2021 and February 2022 diaries.)

I've also done new work on Episodic Memory, but I'll save discussion of that for next month.

Until the next cycle,
Jenny

Wednesday, September 10, 2025

Hydraulic Heaven?

Since getting my custom pump to work well earlier this year, I've been pushing ahead on hydraulic actuator designs. There isn't a lot of miniature hydraulic equipment available for hobbyists, at least not at a good price, so I'm trying to make my own. I've settled on inflatable bladders as a fruitful direction to investigate - they don't have the same friction and seal wear issues as cylinders (syringes especially), and they can be customized in all manner of ways. In parallel, I've been working on moving parts designed to contain the bladders and be actuated by them.

Bladder Geometry

In the hydraulic demo from last year, I showcased my very first working bladder, which was a simple "pillow" - two rectangles of plastic film, with a valve stem inserted through the center of one rectangle, sealed together at their four edges. The problem with a bladder like this is that you can't get much range of motion out of it. When filled with fluid, it bulges a little - maybe a centimeter or so, at the small scales where I'm working. That wasn't enough for everything I wanted to do. I needed bladders with more complex geometries that could expand further, while still collapsing flat when fully deflated.

First of all I bought a heat-sealer, so I could melt plastic films together the professional way. (My first bladders were sealed with a soldering iron. The thought of going back to that method gives me the horrors.) This greatly improved the ease of making "pillows." Then I started trying for three-dimensional pouches with accordion folds. Getting all the little pieces of plastic lined up in the heat-sealer before they were attached to each other was too difficult, so I hit on the idea of sewing them together before sealing the seams. I even bought some plastic thread so that it could melt and be incorporated into the seal.

Two bladders sewn together from sheet plastic to make a "wedge" shape. There are visible seams with stitching, and each bladder has an inlet valve on the broad top side.

In short, I gave it a good try, but it was such a struggle. Sewing the little bladders together was a lot of work, and I never got one that was leak-free. It was particularly hard to get good seals where three seams met at a corner; no matter how much I might practice and refine the process, trying to make that on the heat-sealer was just plain awkward. Given how little success I had making wedge-shaped bladders with just one fold, I shuddered to think about my dreams of piston drivers with five or more. Manufacturing the bladders this way simply wasn't realistic.

Five plastic rectangles sealed together in their centers but not sealed at the edges yet, laid out on a table. The "Thriftbooks" logo side of the plastic is up. A two-pillow bladder with a tube attached to its inlet valve, fully inflated with water. The visible plastic is silver in color.

So I found a better way. You can make accordion folds by sealing the sides of two pillows together at the center, cutting a tiny hole in the middle of the seal, and then sealing the edges of both pillows. This avoids a lot of fiddly cutting and sewing, and more importantly, there are zero three-seam corners. All the seals are two-dimensional even if the bladder as a whole is not!

Photo of a bladder being made, showing creation of the center join on a heat sealer (model PFS-200).Photo of a heat gun being aimed at some plastic. All but a small circular region is protected with aluminum foil and a stack of large washers.
Methods of creating central seals 

I made my first center seals by folding the plastic around a strip of cardboard and putting it in the end of the heat-sealer, once on each side. This was non-ideal: it created a very small sealed area, and the crease where the film bent around the cardboard got melted too much and could develop holes. A better method is to shield all but the circular region you want to seal together, then blast that area with a heat gun. You might think I should be using an insulator (like cardboard) for the shielding, but from my experience so far, metallic objects that will soak up the heat actually work better! So you can fuse your two pillows in the middle, then snip the very center of the sealed area with scissors to allow fluid through.

Bladder Materials

Throughout the process of figuring this out, I had lots of general quality problems. I seemed to need a double seal at each edge (seal once, fold the edge over, seal again) to have decent chances of a working bladder, and they still sometimes leaked. Well ... I think the printed layer on one side of my film was interfering with a good seal. You may recall that I was using plastic cut from salvaged Thriftbooks poly mailers. I started noticing that the green side with the company logos didn't seal as well. In fact, when I'm doing center seals with the heat gun, those must be made silver side to silver side; the printed sides simply won't fuse (though I can get them to fuse, somewhat poorly, on the heat sealer). And then the pillow edges have to be sealed green side to green side, which doesn't work so well! I also lost at least one bladder because a tiny flaw or strain in the plastic popped open the first time I put it under pressure. These flaws are present because the plastic has already been wrapped around a book and beaten up in transport.

So I finally bit the bullet and got some pristine plastic films to try out. When you buy materials, you have more control over what you're getting, so I sampled two different thicknesses: a 2 mil painter's dropcloth (transparent) and a 4 mil plant bed cover (black). Neither has any coating or printing on it, and they both heat-seal wonderfully. Now I can get good edge seals on the first try, without folding the edge over and doubling the seal.

Bladder Inlets

The last element I need to talk about is the attachment of the valve stems. After some early experiments with silicone gel that didn't have much success, I used cyanoacrylate (Super Glue) for my first working bladders. Eventually I also tried a polyurethane sealer intended for use on nylon tents and such. The brand I got is Gear Aid Seam Grip+WP. The cyanoacrylate works okay, but in my tests (sample size = 3 each), the polyurethane proved more likely to make a good seal. It is necessary to respect its long cure time; I actually consider that a positive, since I cannot seem to use Super Glue without getting it on my fingers and possibly other things nearby. One thing I have NOT tried yet is two-part epoxy.

The hinge joint described in the text, fully extended with a fully inflated two-pillow bladder inside. A syringe is attached to the bladder via a tube that emerges through a hole in the joint's backshell.

So, okay, the glues are tolerable. But for real quality and durability, I would love to melt the valve stem and the plastic film together. So far all my attempts to heat-seal this part have been unsuccessful; they are two different materials and don't want to adhere. The option that remains is some sort of chemical weld. But it's hard to find anything that is safe and available for in-home use that will dissolve HDPE/LDPE film. I tried acetone on the off chance that it might work. It did smooth the surface of my PLA valve stems, but it never seemed to get them to a tacky or gooey state, and it did absolutely nothing to the films. So this may be a non-starter, unless I hear tell of a magical solvent that can do it.

Hydraulic Actuators

So how can we use all these bladders, exactly? After pondering how I might get one of them to operate my existing quadruped joint designs, I decided it would make more sense to come up with a new joint that was optimized to hold a bladder. Meet my hydraulic spade joint. A wedge-shaped bladder fits into one side of the backshell and pushes the spade upward (or forward, depending on the joint's orientation) when inflated. There are attachment holes for a tension element to return the spade to its original position when the bladder is depressurized. The spade's edge is constrained to remain in contact with the center of the backshell by a string tied through a hole in the spade and the backshell.

The cylinder taken apart, showing three pieces (the cylinder shell, the rod with its pressure-plate base, and the cap), and the fluid bladder installed inside the bottom of the shell. The bladder outside the cylinder, attached to the syringe, and partially inflated, showing all five pleats

The other actuator I tried out was this simple cylinder. To produce its relatively large linear motion, I made a bladder with five accordion folds, my most ambitious one to date. I hadn't worked out the quality problems yet at that time, so this bladder leaked in multiple places ... but if I inflated it quickly enough, I could still practice driving the cylinder. So it worked as a proof-of-concept.

Conclusion

I think I have all the components I need now: the pump, the valves, and the actuators are working well enough that it's about time to see how things play in an integrated system. I might be ready to start the process of designing and budgeting full projects that use these parts. Look out for more of my hydraulics work next year (if not sooner).

Until the next cycle,
Jenny

Tuesday, August 26, 2025

Acuitas Diary #87 (August 2025, Allergic Cliffs demo)

It's ready! Acuitas can play the text version of Allergic Cliffs and is often able to determine one of the secret rules by which the cliffs operate. Watch the video!

In a previous blog I discussed how Acuitas chooses moves based on a simple heuristic that tries to replicate successes and avoid repeating failures. The part I haven't fully discussed yet is the final version of rule formation. Acuitas attempts to generalize from the results of past moves to derive rules of cause and effect that indicate when the cliffs sneeze. At this time all possible move results are reduced to the two goal-relevant outcomes (succeed or fail), and generalizations are made by looking at commonalities among all actions that share the same result. I started by just comparing pairs of actions, then upgraded the algorithm to look at the entire pool of past actions for feature combinations common across two or more of them.

For the moment, all rules are given in positive form. So, supposing the rules for the current game round divide the zoombinis onto the lefthand bridge if they have a blue nose or the righthand bridge if they do not, you'll never hear Acuitas say, "If a guide puts a zoombini who does not have a blue nose on a lefthand bridge, a guide fails," or "If a guide puts a zoombini who does not have a blue nose on a righthand bridge, a guide succeeds." Instead you would get "If a guide puts a zoombini who has a blue nose on a righthand bridge, a guide fails," or any of "If a guide puts a zoombini who has a [red, orange, green, purple] nose on a lefthand bridge, a guide succeeds."

I ended up not having time (in my completely self-imposed schedule) to implement experiments (purposely choosing moves that will falsify or confirm tentative rules) or rule-following (informing moves by rules to increase chances of success). So Acuitas still plays the whole game using the "similar to previous move" heuristic. At the end of the game, he scores all tentative rules that have not yet been falsified and selects the one with the highest score to announce out loud. The scoring system is of interest.

Rules with more evidence behind them (more moves which had those combinations of features in common) score higher, naturally. But I found that I needed another trick to more successfully pick out which tentative rule was among THE rules of the current round. Any action which is the cause in one of THE rules tends to have a counterpart which differs by one feature and produces a different effect. So if you have a rule like "Put a zoombini with an orange nose on the lefthand bridge and you will succeed," then if THE rules are about orange noses, there ought to be opposing rules such as "Put a zoombini with an orange nose on the righthand bridge and you will fail," or "Put a zoombini with a blue nose on the lefthand bridge and you will fail." If these opposite counterparts don't exist, then the orange nose + lefthand bridge = success association is probably incidental; some other feature is driving the cliffs' behavior, and it just *happened* that all the zoombinis allowed to cross the lefthand bridge also had orange noses.
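The scoring heuristic above - support plus a bonus for "opposing counterpart" rules that differ in exactly one condition and flip the outcome - can be sketched as follows. The `Rule` data layout is an invented illustration; only the counterpart idea itself comes from the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    feature: str   # e.g. "orange nose" (hypothetical representation)
    bridge: str    # "left" or "right"
    outcome: str   # "succeed" or "fail"
    support: int   # number of past moves backing this rule

def opposes(a, b):
    """True if b differs from a in exactly one condition and flips the outcome."""
    if a.outcome == b.outcome:
        return False
    differs = (a.feature != b.feature) + (a.bridge != b.bridge)
    return differs == 1

def score(rule, pool):
    """More evidence scores higher; each surviving opposing counterpart
    in the pool adds a bonus, suggesting the rule is one of THE rules."""
    bonus = sum(1 for other in pool if opposes(rule, other))
    return rule.support + bonus
```

A rule like "orange nose + lefthand bridge = success" with no opposing counterparts anywhere in the pool earns no bonus, reflecting the suspicion that its feature is merely incidental.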

There's a lot more fun I could have with this and many directions to extend it, obviously, but for now I'm finished. I'm toying with the idea of possibly sharing the Allergic Cliffs text adventure (not any part of Acuitas, just the independent game script) for others to use. I would want to polish it first, add documentation, and possibly implement the higher difficulty levels, so I can't say when that might happen.

Until the next cycle,
Jenny

Tuesday, August 12, 2025

ACE the Quadruped 2025

It's about time I blog about ACE (Ambulatory Canine Emulator). This project got neglected while I focused on completing Atronach and getting started with hydraulics, but I've been poking at it every now and then. My most recent conclusion is that I do, in fact, need to replace the PF35T-48 motors. Those bargain-basement unipolar steppers are just not good enough.

My last major overhaul of ACE was - gulp - three years ago. The quick summary is that I solved enough rigidity and tolerance problems that the skeleton could be posed standing at its full height, and repositioned the motors so that none of their weight had to be carried on the legs. Since then, I've been doing motion tests and refining the design of the ankle/hock joint.

Once I started trying to make parts of ACE *move*, it quickly became obvious that, out of the box, the PF35Ts didn't have enough torque to accomplish much. It was difficult for one of them to even swing the upper leg back and forth when it was hanging free - a relatively easy motion, since the only load was the weight of the leg itself. So I detoured into designing gearboxes, taking my motor cradle as a starting point and adding mounting tubes for additional drive shafts. A single gear designed to mesh with the little one that comes attached to the motor was good enough to enable upper leg motion (see the first video, above). But there was still no way that was going to be enough to operate the hock joints. My efforts to *really* gear these motors down led to the creation of the little beauty in the video below, with three gear pairs. I'm still quite proud of it.

I also fiddled with the hock joint design and some of the tendon routing, trying to optimize the mechanical advantage and reduce the strain on the motors as much as possible. In this new version, the tendons are more exposed, but I think I get better leverage out of the deal. So I tried the new joint prototype with the new gearbox, and no gravity loading the leg - it's just moving in a horizontal plane. How did it go? Not well.

It only managed to do as well as it did for that demo because I was over-volting the motor (I typically operate these at 5-6V.) Careful study of this test setup convinced me that the main problem was with the tendons binding against various corners they have to go around. But they're made of nylon fishing line, which presents a very low-friction surface, so they shouldn't be binding that *hard*. I think the latest joint design is about as good as it can get, and the gearbox is doing fine as well - I saw no evidence of the gears jamming or anything like that.

So with no weight on the leg, and the motor geared down so much that it's moving miserably slow, it's still stymied by a little friction. My guess is that I'm not getting the higher torque I expected because losses in the gearbox are consuming it; the amount available was so low to start with that I might not be able to make the situation much better, no matter how many gear pairs I add. Although there are some small adjustments I could make to reduce how tightly the tendons are bent around corners, I'm taking this test result as a sign that the motors are just not right for this project.

Fortunately, there are options with a much better torque-to-weight ratio out there. They do cost a little more, but I'm not budgeting as tightly as I was back in my college days. I've already selected some new models to try, and redesigned the motor cradles to hold them. (It's so nice to have a somewhat modular design.) Here's hoping for somewhat more progress when I get a chance to try them.

Until the next cycle,
Jenny

Tuesday, July 29, 2025

Acuitas Diary #86 (July 2025)

This month I continued work on trial-and-error learning for playing Allergic Cliffs. If you haven't read my introduction to this Acuitas sub-project and the subsequent progress report, I recommend taking a look at those. What I've done since has been debugging and enhancing the "feedback-informed actions" and "rule formation" features discussed in the progress report, and getting them to actually work. It turned out to be a fairly big job!

A complex assembly of colorful gears slowly turning. Public domain image by user Jahobr of Wikimedia Commons.

Now that "feedback-informed actions" is functional, though, I'm a little surprised by how well it works. Its essence is that, in the event of a success, Acuitas tries to make his next move as similar as possible; in the event of a failure, he makes certain his next move is different. This heuristic only considers feedback from the move immediately previous, so it's a reactive, barely intelligent behavior. It still enables Acuitas to win the game about 90% of the time! Granted, he is playing on the easiest difficulty level, and at higher levels it is quite possible this would not work. It's still a huge improvement over purely random move selection.

Candidate cause-and-effect rules are also being formed successfully, and marked invalid when violated by an example. What I need to do next is implement higher levels of generalization. Right now rule formation only looks at positive commonalities between pairs of examples, and I need to also consider commonalities across larger groups, and commonalities based on the absence of a feature rather than its presence. In some cases I can see the algorithm *reaching* toward discovery of the hidden rule that defines the Allergic Cliffs' behavior, but we're not quite there yet.

After getting that far, I decided to walk away for a bit to look at game-playing with fresh eyes later, and worked on narrative understanding some more. What I wanted to add was the concept of a role or job. It's important for Acuitas to be aware of character goals, but those goals aren't always explicitly stated. If I told you somebody was a detective, you would automatically assume that this person wants to solve crimes, right? You wouldn't need to be told.

Acuitas had an existing system that allowed the semantic memory for a concept (like "detective") to contain goals that override parts of the default "agent" goal model. But here's the tricky part: the goal model specifies *intrinsic* goals, and goals associated with a role aren't necessarily intrinsic! Adoption of a role is often derived from some instrumental goal, like "get money," which eventually ties back to an intrinsic goal like survival or altruism. The meaning of anything a character does in a role is shaded by how invested they are in performing that role, and why. So it became evident to me that role-related goals need to be nested under a goal that encompasses the role as a whole, which can then be tied to an intrinsic goal.

So I tweaked the semantic memory's goal definition format a bit, to include a way to distinguish role-related goals from intrinsic goals, and provided the Narrative engine with a way to pull those into the scratchboard when a character is said to have a role. For now, all roles have to be sub-categories of the concept "professional," but I can imagine other types of roles in the future.

Until the next cycle,
Jenny

Sunday, July 13, 2025

A New Project?

I've been wanting to add some kind of physics experiment to my rotation of hobby projects, and I think I've picked one out. But I don't want to go into that just yet, because I'll be concentrating on the equipment prerequisites first. The most interesting thing I'll need is a way to measure tiny amounts of force - on the order of mN (millinewtons) or even μN (micronewtons). Weighing scales are the most common force-measuring tools out there, so it makes sense to convert this force to a weight or mass. The amount of mass that produces a μN of force under standard Earth gravity is 0.000102 grams, or 0.102 milligrams.
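The conversion quoted above is just Newton's second law in reverse, and can be checked in a couple of lines:

```python
# Mass that weighs one micronewton under standard Earth gravity.
G = 9.80665          # m/s^2, standard gravity
force_N = 1e-6       # one micronewton, expressed in newtons
mass_kg = force_N / G
mass_mg = mass_kg * 1e6   # kilograms -> milligrams
print(round(mass_mg, 3))  # -> 0.102
```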

Representation of a black hole, courtesy NASA.

Digital postal scales tend to have a resolution of 0.1 oz (2.8 g), which simply won't do. But there are also cheap digital scales intended for weighing jewelry, powders, etc. which claim resolution of up to 0.001 g (1 mg). Scales like this are all over sites like Alibaba and Amazon for less than $25 ... but they're short of the sensitivity I might need to see the effect. Going up an order of magnitude in price will get me an order of magnitude more precision, from this laboratory scale for example. For the really serious scales with a resolution of 1 μg, I would have to lay out over $10K. Technically that's within my grasp, but I'm not that invested in this experiment ... and I don't think I could claim to operate this blog on a "shoestring budget" anymore if I made such a purchase!

But wait! There's one more option: "build your own." Dig around a bit, and you can find some how-tos for building a scale that "is easily able to get around 10 microgram precision out of a couple of bucks in parts." The crux of the idea is to repurpose an old analog meter movement, adding circuitry that measures the electric current needed to return the needle to a neutral position after a weight is placed on it. The author of that Hackaday article "can’t really come up with a good reason to weigh an eyelash," but I can ... so now I'm really tempted by this build. It seems challenging but doable for someone with electronics knowledge. I don't really believe the budget would be 2 bucks ... any DigiKey order costs more than that ... so let's figure $25.

To sum up, my options are as follows:

Consumer jewelry scale (Resolution = 0.001 g): ~$25
Laboratory scale (Resolution = 0.0001 g): ~$250
Home-built needle scale (Resolution = 0.00001 g): ~$25 + blood/sweat/tears
Professional microbalance (Resolution = 0.000001 g): ~$12000

Ideally, I'm thinking I should make the needle scale and buy the laboratory scale, so I can cross-check them against each other. As a home-made piece of equipment prone to all sorts of unexpected errors, the needle scale will benefit from some degree of corroboration, even if it can technically achieve a higher resolution than the lab scale. And either of these would put me in the right range to measure effects of a few μN, without breaking the bank. I really don't need the microbalance, thank goodness.

I'm not sure where I'll fit this into my schedule, but I've already got some analog meter movements, courtesy of my dad's extensive junk collection. So stay tuned to see if I can weigh an eyelash.

Until the next cycle,
Jenny

Monday, June 30, 2025

Peacemaker's First Strike

My short story of the above title will be LIVE and free to read in the 3rd Quarter 2025 issue of Abyss & Apex tomorrow! It's about a professional curse-remover who gets in a little over her head on an unconventional case; it's got mystery, magic, barbarians, and something to say about the consequences when defense of one's own goes too far. Ef Deal (at that time the Assistant Fiction Editor) was kind enough to tell me, "It has been a very long time since I read a sword & sorcery I enjoyed as much as this tale." So don't miss it!

Equestrian statue of a burly man with a sword in his right hand and some kind of banner made from an animal hide rising over his left shoulder. (It happens to be Decebalus, but that's not relevant.) The horse has all four feet planted on the plinth, and their head bowed forward.

I put something of myself in all my stories, but this one is more personal than most. It would be impossible for me to explain where it came from without airing some things that are better kept private, but in a roundabout and strange way, it reflects something I went through. So it feels particularly fitting that "Peacemaker's First Strike" should be my first paying publication credit. Turning this story loose means healing for me as well as the characters.

Abyss & Apex has been great to work with, so I'd love it if you would check out my writing and the rest of the issue, and support the zine with a donation if you are so inclined.

Until the next cycle,
Jenny

Tuesday, June 10, 2025

Acuitas Diary #85 (June 2025)

This month I have a quick demo for you, showcasing Acuitas' upgraded semantic memory visualization. My goal for this was always to "show him thinking" as it were, and I think I've finally gotten there. Nodes (concepts) and links (relationships between concepts) are shown as dots and lines in a graph structure. Whenever any process in Acuitas accesses one of the concepts, its node will enlarge and turn bright green in the display. The node then gradually decays back to its default color and size over the next few seconds. This provides a live view of how Acuitas is using his semantic memory for narrative understanding, conversations, and more.


You can see a previous iteration of my memory access visualization work in Developer Diary #4. Wow, that's ancient. The original access animations were only activated by "research" behavior (ruminating on a concept to generate questions about it), and were often hard to see; if the concept being accessed was one of the "smaller" ones, it was impossible to detect the color change at a reasonable level of zoom. The upgraded version of the animation is called from the semantic memory access functions, such that it will be activated if a concept's information is retrieved for any reason. And it enlarges the node by an amount proportional to its default size and the display's current level of zoom, such that it will always become visible.
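The flash-and-decay behavior described here can be sketched as a small pure function. The function name, the linear decay curve, and the exact boost formula are all assumptions for illustration; the text specifies only that the boost is proportional to default size and compensates for zoom, decaying back over a few seconds.

```python
def highlight_size(default_size, zoom, seconds_since_access, decay_time=3.0):
    """Return a node's current display size after an access event.
    The boost scales with default size and inversely with zoom, so the
    flash stays visible at any zoom level; it decays linearly to zero."""
    if seconds_since_access >= decay_time:
        return default_size
    boost = default_size * 0.5 + 10.0 / zoom   # assumed proportions
    remaining = 1.0 - seconds_since_access / decay_time
    return default_size + boost * remaining
```

Calling this once per animation frame with the elapsed time since the node's last access reproduces the "enlarge, then gradually settle back" effect.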

I would have liked to make the links highlight when used as well. The problem is that links in Acuitas' memory storage aren't really distinct things anymore. A link is indirectly defined by endpoints included in the data structures for all the concepts it connects to. So there isn't a low-level function that determines when a particular link is being accessed; a node gets accessed, and then the calling function does whatever it pleases with the returned data structure, which might include following a link to another node. Keeping track of every time that happens and connecting those events with the correct lines on the display would have become very messy, so I opted not to. I think just highlighting the concept nodes yields an adequate picture of what's happening.

I haven't showcased the memory display in a long time because it's been a mess for a long time. The node placement is generated by a custom algorithm of my own. As more concepts were added to the graph and certain important concepts got "larger" (i.e. acquired more links), the original algorithm started to generate spindly, ugly graphs in which the largest nodes were surrounded by excess empty space, and the smallest nodes crowded too close together. I managed to work out a new placement method that generates attractive, proportional clusters without blowing up the computation time. Creating a new layout is still computation-intensive enough that the visualization can't be updated to add new nodes and links as soon as they are created; it must be regenerated by me or (eventually) by Acuitas during his sleep cycle.
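The author's placement algorithm isn't described in detail, but for comparison, here is one relaxation step of a standard force-directed layout (explicitly not Acuitas' method): linked nodes attract in proportion to distance, and all node pairs repel, which tends to produce the kind of proportional clustering mentioned above.

```python
import math

def layout_step(pos, edges, k=1.0, dt=0.05):
    """pos: {node: (x, y)}; edges: iterable of (a, b) pairs.
    Returns updated positions after one Fruchterman-Reingold-style step."""
    force = {n: [0.0, 0.0] for n in pos}
    nodes = list(pos)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            rep = k * k / d                   # repulsion between every pair
            for n, s in ((a, 1), (b, -1)):
                force[n][0] += s * rep * dx / d
                force[n][1] += s * rep * dy / d
    for a, b in edges:
        dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
        d = math.hypot(dx, dy) or 1e-9
        att = d * d / k                       # attraction along links only
        for n, s in ((a, -1), (b, 1)):
            force[n][0] += s * att * dx / d
            force[n][1] += s * att * dy / d
    return {n: (pos[n][0] + dt * force[n][0], pos[n][1] + dt * force[n][1])
            for n in pos}
```

Iterating steps like this until movement settles is the computation-intensive part, which matches the observation that the layout can't be regenerated live and is better deferred to an idle period.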

And that's about the size of it. I'll be on vacation for the second half of this month, which means there probably won't be much Acuitas development happening until I get back. Enjoy the video, and I'll see you all later.

Until the next cycle,
Jenny

Saturday, May 31, 2025

Acuitas Diary #84 (May 2025)

A couple months ago I described my plans to implement trial-and-error learning so Acuitas can play a hidden information game. This month I've taken the first steps. I'm moving slowly, because I've also had a lot of code cleanup and fixing of old bugs to do - but I at least got the process of "rule formation" sketched out.

A photo of High Trestle Trail Bridge in Madrid, Iowa. The bridge has a railing on either side and square support frames wrapping around it and arching over it at intervals. The top corner of each frame is tilted progressively farther to the right, creating a spiral effect. The view was taken at night using the lighting of the bridge itself, and is very blue-tinted and eerie or futuristic-looking. Photo by Tony Webster, posted as public domain on Wikimedia Commons.

Before any rules can be learned, Acuitas needs a way of collecting data. If you read the intro article, you might recall that he begins the game by selecting an affordance (obvious possible action) and an object (something the action can be done upon) at random. In the particular game I'm working on, all affordances are of the form "Put [one zoombini out of 16 available] on the [left, right] bridge," i.e. there are 32 possible moves. Once Acuitas has randomly tried one of these, he gets some feedback: the game program will tell him whether the selected zoombini makes it across the selected bridge, or not. Then what?

After Acuitas has results from even one attempted action, he stops choosing moves entirely at random. Instead, he'll try to inform his next move with the results of the previous move. Here is the basic principle used: if the previous move succeeded, either repeat the move* or do something similar; if the previous move failed, ensure the next move is different. Success and failure are defined by how the Narrative scratchboard updates goal progress when the feedback from the game is fed into it; actions whose results advance at least one issue are successes, while actions that hinder goals or have no effect on goals at all are failures. Similarity and difference are measured across all the parameters that define a move, including the action being taken, the action's object, and the features of that object (if any).

*Successful moves cannot be repeated in the Allergic Cliffs game. Once a zoombini crosses the chasm, they cannot be picked up anymore and must remain on the destination side. But one can imagine other scenarios in which repeating a good choice makes sense.

Following this behavior pattern, Acuitas should at least be able to avoid putting the same zoombini on a bridge they already failed to cross. But that's probably not enough to deliver a win by itself. For that, he'll need to start creating and testing cause-and-effect pairs. These are propositions, or what I've been calling "rules." Acuitas compares each new successful action to all his previous successes and determines what they have in common. Any common feature or combination of features is used to construct a candidate rule: "If I do <action> with <features>, I will succeed." Commonalities between failures can also be used to construct candidate rules.

The current collection of rule candidates is updated each time Acuitas tries a new move. If the results of the move violate any of the candidate rules, those rules are discarded. (I'm not contemplating probability-based approaches that consider the preponderance of evidence yet. Rules are binary true/false, and any example that violates a rule is sufficient to declare it false.)
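Sketched in the same toy style, rule formation and binary rule elimination might look like this (again, the data structures and names are hypothetical stand-ins, not Acuitas's internals):

```python
# Illustrative sketch of candidate rule formation and elimination.
# Moves are dicts of features; rules are the features shared by all examples.

def shared_features(examples):
    """Return the feature:value pairs common to every move in the list."""
    if not examples:
        return {}
    common = dict(examples[0])
    for move in examples[1:]:
        common = {k: v for k, v in common.items() if move.get(k) == v}
    return common

def candidate_rule(successes):
    """'If I do a move with <these features>, I will succeed.'"""
    return shared_features(successes)

def violates(rule, move, succeeded):
    """A failed move that matches all the rule's features falsifies the rule."""
    matches = all(move.get(k) == v for k, v in rule.items())
    return matches and not succeeded

successes = [
    {"bridge": "left", "hair": "spiky", "feet": "shoes"},
    {"bridge": "left", "hair": "flat", "feet": "shoes"},
]
rule = candidate_rule(successes)   # {"bridge": "left", "feet": "shoes"}
# A later failure that matches the rule's features discards it:
failed = {"bridge": "left", "hair": "curly", "feet": "shoes"}
print(violates(rule, failed, succeeded=False))   # True
```

Since rules here are binary true/false, a single contradicting example is enough to throw one out, exactly as described above.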

Unfortunately, though I did code all of that up this month, I didn't get the chance to fully test it yet. So there's still a lot of work to do. Once I confirm that rule formation is working, future steps would include the ability to design experiments that test rules, and the ability to preferentially follow rules known with high confidence.

Until the next cycle,
Jenny

Sunday, May 11, 2025

Further Thoughts on Motion Tracking

Atronach's Eye may be operating on my wall (going strong after several months!), but I'm still excited to consider upgrades. So when a new motion detection algorithm came to my attention, I decided to implement it and see how it compared to my previous attempts.

MoViD, with the FFT length set to 32, highlighting and detecting a hand I'm waving in front of the camera. The remarkable thing is that I ran this test after dusk, with all the lights in the room turned off. The camera feed is very noisy under these conditions, but the algorithm successfully ignores all that and picks up the real motion.

I learned about the algorithm from a paper presented at this year's GOMACTech conference: "MoViD: Physics-inspired motion detection for satellite image analytics and communication," by MacPhee and Jalai. (I got access to the paper through work. It isn't available online, so far as I can tell, but it is marked for public release, distribution unlimited.) The paper proposes MoViD as a way to compress satellite imagery by picking out changing regions, but it works just as well on normal video. It's also a fairly simple algorithm (if you have a digital signal processing background; otherwise, feel free to gloss over the arcane math coming up). Here's the gist:

1. Convert frames from the camera to grayscale. Build a time series of intensity values for each pixel.
2. Take the FFT of each time series, converting it to a frequency spectrum.
3. Multiply by a temporal dispersion operator, H(ω). The purpose is to induce a phase shift that varies with frequency.
4. Take the inverse FFT to convert back to the time domain.
5. You now have a time series of complex numbers at each pixel. Grab the latest frame from this series to analyze and display.
6. Compute the phase of each complex number - now you have a phase value at each pixel. (The paper calls these "phixels." Cute.)
7. Rescale the phase values to match your pixel intensity range.

The result is an output image which paints moving objects in whites and light grays against a dark static background. I can easily take data like this and apply my existing method for locating a "center of motion" (which amounts to calculating the centroid of all highlighted pixels above some intensity threshold).
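That centroid step is simple enough to fit in a few lines. Here's a minimal version of the idea (my own sketch, not the actual Atronach's Eye code):

```python
import numpy as np

# "Center of motion" as described above: the centroid of all pixels
# in the motion image that exceed an intensity threshold.

def center_of_motion(motion_img, threshold=128):
    ys, xs = np.nonzero(motion_img > threshold)
    if xs.size == 0:
        return None   # nothing moving
    return (float(xs.mean()), float(ys.mean()))

img = np.zeros((10, 10), dtype=np.uint8)
img[4:6, 6:8] = 255    # one bright moving blob
print(center_of_motion(img))   # -> (6.5, 4.5)
```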

My main complaint with the paper is its shortage of detail about H(ω). It's an exponential of φ(ω), the "spectral phase kernel" ... but the paper never gives an example of the function φ, and "spectral phase kernel" doesn't appear to be a common term that a little googling will explain. After some struggles, I decided to just make something up. How about the simplest function ever, a linear one? Let φ(ω) = kω, with k > 1 so that higher frequencies make φ larger. Done! Amazingly, it worked.
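Putting the seven steps together with a linear kernel like that, a minimal NumPy sketch might look like the following. The buffer length, the value of k, the parameter names, and the toy input are all my own assumptions; this is just one workable reading of the steps, not the paper's reference implementation:

```python
import numpy as np

def movid_frame(frames, k=6.0):
    """One motion image from a stack of grayscale frames, shape (time, h, w)."""
    # Steps 1-2: FFT each pixel's intensity time series along the time axis
    spectrum = np.fft.fft(frames, axis=0)
    # Step 3: dispersion operator H(w) = exp(i * phi(w)), with phi(w) = k*w
    omega = 2 * np.pi * np.fft.fftfreq(frames.shape[0])
    H = np.exp(1j * k * omega)[:, None, None]
    # Step 4: inverse FFT back to the time domain
    dispersed = np.fft.ifft(spectrum * H, axis=0)
    # Steps 5-6: phase of the newest complex frame (the "phixels")
    phase = np.angle(dispersed[-1])
    # Step 7: rescale phase from [-pi, pi] to the 0-255 intensity range
    return ((phase + np.pi) / (2 * np.pi) * 255).astype(np.uint8)

# Toy input: a static gray scene with one flickering 2x2 patch
t = np.arange(32)
frames = np.full((32, 8, 8), 100.0)
frames[:, 2:4, 2:4] += 150 * np.sin(2 * np.pi * 4 * t / 32)[:, None, None]
out = movid_frame(frames)   # static pixels land mid-gray; the patch stands out
```

On a live camera you'd keep a rolling buffer of the last N grayscale frames and run this once per new frame; N is the FFT length discussed below.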

Okay, math over. Let me see if I can give a more conceptual explanation of why this algorithm detects motion. You could say frequency is "how fast something goes up and down over time." When an object moves in a camera's field of view, it makes the brightness of pixels in the camera's output go up and down over time. The faster the object moves, the greater the frequency of change for those pixels will be. The MoViD algorithm is basically an efficient way of calculating the overall quickness of all the patterns of change taking place at each pixel, and highlighting the pixels accordingly.

It may be hard to tell, but this is me, gently tilting my head back and forth for the camera.

My version also ended up behaving a bit like an edge detector (but only for moving edges). See how it outlines the letters and designs on my shirt? That's because change happens more abruptly at visual edges. As I sway from side to side, pixels on the letters' borders abruptly jump between the bright fabric of the shirt and the dark ink of the letters, and back again.

The wonderful thing about this algorithm is that it can be very, very good at rejecting noise. A naive algorithm that only compares the current and previous camera frames, and picks out the pixels that are different, will see "motion" everywhere; there's always a little bit of dancing "snow" overlaid on the image. By compiling data from many frames into the FFT input and looking for periodic changes, MoViD can filter out the brief, random flickers of noise. I ran one test in which I set the camera next to me and held very still ... MoViD showed a quiet black screen, but was still sensitive enough to highlight some wrinkles in my shirt that were rising and falling with my breathing. Incredible.

Now for the big downside: FFTs and iFFTs are computationally expensive, and you have to compute them at every pixel in your image. Atronach's Eye currently runs OpenCV in Python on a Raspberry Pi. Even with the best FFT libraries for Python that I could find, MoViD is slow. To get it to run without lagging the camera input, I had to reduce the FFT length to about 6 ... which negates a lot of the noise rejection benefits.

But there are better ways to do an FFT than with Python. If I were using this on satellite imagery at work, I would be implementing it on an FPGA. An FPGA's huge potential for parallel computing is great for operations that have to be done at every pixel in an image, as well as for FFTs. And most modern FPGAs come with fast multiply-and-add cells that lend themselves to this sort of math. In the right hardware, MoViD could perform very well.

So this is the first time I've ever toyed with the idea of buying an FPGA for a hobby project. There are some fairly inexpensive FPGA boards out there now, but I'd have to run the numbers on whether this much image processing would even fit in one of the cheap little guys - and they still can't beat the price of the eyeball's current brain, a Raspberry Pi 3A. The other option is just porting the code to some faster language (probably C).

Until the next cycle,
Jenny

Sunday, April 27, 2025

Acuitas Diary #83 (April 2025)

I'm eager to get started on trial-and-error learning, but in the spirit of also making progress on things that aren't as much fun, I rotated back to the Conversation engine for this month. The big new feature was getting what I'll call "purposeful conversations" implemented. Let me explain what I mean.

An old black-and-white photograph of what looks like a feminine mannequin head, mounted in a frame above a table, with a large bellows behind it and various other mechanisms visible.
Euphonia, a "talking head" built by Joseph Faber in the 1800s.

A very old Acuitas feature is the ability to generate questions while idly "thinking," then save them in short-term memory and pose them to a conversation partner if he's unable to answer them himself. This was always something that came up randomly, though. A normal conversation with Acuitas wanders through whatever topics come up as a result of random selection or the partner's prompting. A "purposeful conversation" is a conversation that Acuitas initiates as a way of getting a specific problem addressed. The problem might be "I don't know <fact>," which prompts a question, or it might be another scenario in which Acuitas needs a more capable agent to do something for him. I've done work like this before, but the Executive and Conversation Engine have changed so much that it needed to be redone, unfortunately.

Implementing this in the new systems felt pretty nice, though. Since the Executive and the Conversation Engine each have a narrative scratchboard with problems and goals now, the Executive can just pass its current significant issue down to the Conversation Engine. The CE will then treat getting this issue resolved as the primary goal of the conversation, without losing any of its ability to handle other goals ... so greetings, introductions, tangents started by the human partner, etc. can all be handled as usual. Once the issue that forms the purpose of the conversation gets solved, Acuitas will say goodbye and go back to whatever he was doing.
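Abstractly, the hand-off might look something like this toy sketch. Every class and method name here is hypothetical, and the real Executive and Conversation Engine are of course far more involved:

```python
# Hypothetical sketch of a "purposeful conversation" hand-off.
# None of these names come from Acuitas's actual code.

class ConversationEngine:
    def __init__(self, purpose=None):
        # Ordinary social goals are always present
        self.goals = ["greet", "handle_partner_topics"]
        self.purpose = purpose
        if purpose is not None:
            # The passed-down issue becomes the conversation's primary goal,
            # without displacing the usual goals
            self.goals.insert(0, purpose)

    def on_update(self, solved_issues):
        # Once the purpose is resolved, wrap up the conversation
        if self.purpose is not None and self.purpose in solved_issues:
            return "say_goodbye"
        return "continue"

class Executive:
    def start_purposeful_conversation(self, issue):
        # Pass the current significant issue down to the Conversation Engine
        return ConversationEngine(purpose=issue)

exe = Executive()
ce = exe.start_purposeful_conversation("i_lack_fact_X")
print(ce.on_update(solved_issues={"i_lack_fact_X"}))   # -> say_goodbye
```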

I also worked on sprucing up some of the conversation features previously introduced this year, trying to make discussion of the partner's actions and states work a little better. Avoiding an infinite regress of either "why did you do that?" or "what happened next?" was a big part of this objective. Now if Acuitas can tie something you did back to one of your presumed goals, he'll just say "I suppose you enjoyed that" or the like. (Actually he says "I suppose you enjoyed a that," because the text generation still needs a little grammar work, ha ha ha oops.)

And I worked on a couple Narrative pain points: inability to register a previously known subgoal (as opposed to a fundamental goal) as the reason a character did something, and general brittleness of the moral reasoning features. I've got the first one taken care of; work on the second is still ongoing.

Until the next cycle,
Jenny

Saturday, April 12, 2025

Pump and Hydraulics Progress

If you've been following for a while, you may know I've been working on pump designs for a miniature hydraulic system. The average commercially available water pump appears to be optimized for flow rate rather than pressure, and small-scale hobby hydraulics are barely a thing ... so that means I'm custom-making some of my own parts. Last year I got the peristaltic pump working and found it to be a generally better performer than my original syringe pump, but I always wanted to get a proper motor for it.

The new pump sitting atop a pair of reusable plastic food containers, pumping water from one into the other. A power supply connected to the pump's motor is visible in the background.

The original motor powering all the pumps was an unknown (possibly 12 V) unipolar stepper and gear assembly from my salvage bin. But the precision of a stepper motor truly wasn't necessary in this application, and was costing me some efficiency. For the upgrade, I wanted a plain gearmotor (DC motor + gear box assembly) with a relatively high torque and low RPM. I settled on this pair of motors, both of which are rated for 6 V input:

Solarbotics GM3 gear motor (4100 g-cm, 46 rpm)
Dagu HiTech Electronic RS003A gearhead DC motor (8800 g-cm, 133 rpm)

You can tell from the torque and speed ratings that the Dagu HiTech was always going to be the better performer. I included the Solarbotics motor in my order because its gearbox and housing are plastic, which may reduce durability but also means it weighs less. In practice, it also draws less current than the Dagu motor, which might mean the power source can weigh less ... these things are important when thinking about walking robot applications!

The peristaltic pump with its lid off, showing the latex tubing, rotor, and rollers, sits on a table next to a cat for scale. It's smaller than the cat's head.

The next step was to reprint the pump. I left the main pump design essentially unchanged - all I did was correct the geometry errors from the previous iteration. So this time it worked after assembly without an extra shim, and I could put the lid on properly without needing zip ties to hold it closed. The motor housing and coupler were always separate pieces, so I designed two new versions of each, one for the Dagu motor and another for the Solarbotics motor. This is where the 3D printers reeeeaally show off their value. Compared with both the old stepper and each other, the new motors have completely different sizes, shapes, drive shaft designs, and mounting options, but I was able to produce custom parts that mated them to the pump in only a few hours of actual work.

And the test results were amazing. The Dagu is obviously more powerful and delivers a higher flow rate, but both motors have enough torque to drive the pump at the 6 V they're rated for.

Watch to the end for a surprise appearance by the Lab Assistant.

I pressure-test my pumps by dropping a piece of tubing from my second-story window to the back patio, and measuring how high the pump can lift water in the tube. From this it is possible to calculate PSI. I have published results from the previous pump designs. Well: Peristaltic Pump Version 3 can lift water all the way past the window with either motor. Given the water level etc. in this particular test, that's a total lift height of 170 inches. So the pump is producing at least 6 PSI, and I can't measure any higher than that. This makes it competitive with the syringe pump for pressure (at least as far as I can tell - the syringe pump also exceeded my maximum ability to measure), and MUCH better for flow rate.
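For anyone who wants to check the arithmetic: a static water column exerts ρgh of pressure, which for water works out to roughly 0.036 PSI per inch of lift. A quick sanity check (the constants and function name are mine):

```python
# Back-of-the-envelope check: pressure at the pump needed to hold up
# a 170-inch column of water. P = rho * g * h.

RHO = 998.0           # kg/m^3, water near room temperature
G = 9.81              # m/s^2
PA_PER_PSI = 6894.76  # pascals per PSI
M_PER_INCH = 0.0254   # meters per inch

def psi_from_lift(height_inches):
    height_m = height_inches * M_PER_INCH
    return RHO * G * height_m / PA_PER_PSI

print(round(psi_from_lift(170), 2))   # -> 6.13
```

So a 170-inch lift corresponds to about 6.1 PSI, which matches the "at least 6 PSI" figure.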

When I was testing the syringe pumps last year, I used to go read a book for a little bit while I waited for the water to climb to its maximum height! I timed Peristaltic V3 with the Dagu motor, and it can get the water all the way up the tube (standard 1/4" aquarium tubing) in about 24 seconds. So this is a dramatic improvement on where I was when I started.

A window with a set of blinds in front of it, and a piece of transparent silicone tubing hooked through the blind cords up high. Water is visible extending nearly to the end of the tubing, and there are visible water drips below it on many of the blind panels.
Hydraulics testing: it gets messy

One little problem remains: I've noticed that, with these more powerful motors, the friction between the latex pump tubing and the rollers gradually pulls the tube through the pump. It'll keep shortening on the intake side and eventually lift out of the water. So I need something to hold it in place without clamping it and blocking the flow. Piercing the tube seems like the only solution. I could do it below the water line, or - since the tube is thick-walled enough - I bet I could put a very thin thread or wire through the wall without creating a leak.

I've also started on new actuators, but that is mostly a story for another day. I did get a "knee" style of joint working to the point of a basic demo. Once I started trying to adapt my existing quadruped hinge joint for hydraulic power, I realized it would be less complicated to make an entirely new design that naturally incorporates the hydraulic bladder. Next I need better bladders ... I'm working on that!

Until the next cycle,
Jenny