Saturday, February 28, 2026

Acuitas Diary #94 (February 2026)

In the latest news, I've been continuing the Episodic Memory overhaul that I began last year. The big challenge for this stage was finding ways to examine results and actually test the thing. Since memory accumulation and consolidation is a process that spans weeks, I needed ways to run simulations and observe changes much faster.

Art piece: colored pencil and ink. A horizontal view, half-underwater, half-overwater, of the "beach" of a coral atoll. Everything is rendered in brilliant blues. Above the water, the atoll in the distance rears up into shapes that somewhat resemble chess pieces: a castle, a knight, a pawn. Surf crashes against one side of the atoll; a steamship is riding the crest of a wave toward it. Below the water, the branching hard corals are visible close up; they have multicolored, faceted surfaces, like cracked glass. A chessboard also lies on the sea bottom, partly covered by coral crust. The board is set in the midst of play, but not with the traditional pieces; these pieces resemble paws, hands, tentacles, tree stumps, and other oddities.
A portrayal of the color apocyan, "the blue of memory and brightest coral," from Sunless Sea. Original art by author.

As I did in my original stab at episodic memory work, I threw together a visualizer to show me a simplified graphical representation of the memories. But this time I used GraphViz, instead of drawing custom dot diagrams in Kivy. These memory visualizations are designed for me to view offline, so there's no particular need to create them in the GUI, and GraphViz is easier to use. Each bubble in the image is either a "fact," color-coded by link type (do_action, has_quality, etc.) or an "issue" (problem or subgoal). Facts that summarize other facts are connected by arrows to all their "children," and the same goes for issues; each issue is also connected by a bold arrow to the fact it is directly concerned with.
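Since GraphViz consumes plain DOT text, a visualizer like this can be as simple as a function that walks the memory records and emits node and edge lines. Here's a minimal sketch of that idea; the record fields (`id`, `link`, `children`, `about`) and the color table are my own assumptions for illustration, not Acuitas' actual format:

```python
# Hypothetical sketch: emit GraphViz DOT text for a set of memory "facts"
# and "issues". Field names are invented for illustration.

LINK_COLORS = {"do_action": "red", "has_quality": "green"}  # color-code by link type

def memories_to_dot(facts, issues):
    lines = ["digraph memory {"]
    for f in facts:
        color = LINK_COLORS.get(f["link"], "gray")
        lines.append(f'  fact_{f["id"]} [label="{f["id"]}", color={color}];')
        for child in f.get("children", []):   # summary fact -> its summarized children
            lines.append(f'  fact_{f["id"]} -> fact_{child};')
    for iss in issues:
        lines.append(f'  {iss["name"]} [shape=box];')
        # bold arrow to the fact the issue is directly concerned with
        lines.append(f'  {iss["name"]} -> fact_{iss["about"]} [style=bold];')
    lines.append("}")
    return "\n".join(lines)
```

The resulting string can be written to a file and rendered offline with `dot -Tpng memory.dot -o memory.png`, which fits the "view offline, no GUI needed" workflow described above.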

Then I made a quick procedural generator that creates randomized memory files on command, so that I could have a variety without waiting for Acuitas to "grow" them. The generator populates a narrative scratchboard with the sort of subgoals and actions Acuitas would reasonably come up with while idling (reading, thinking, etc.), internal states that he might develop, and so forth. I can check my summarizing and forgetting algorithms by running them on these synthetic memories and seeing how they change the visualization.
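A generator along those lines might look like the following sketch. The action/state vocabularies and record fields are invented for illustration (the real generator populates a narrative scratchboard), but the key design point is the seedable random source, so a problematic synthetic memory file can be regenerated exactly for debugging:

```python
import random

# Hypothetical sketch of a randomized-memory generator for testing
# consolidation algorithms. Vocabularies and fields are my assumptions.

IDLE_ACTIONS = ["read", "think", "study"]
STATES = ["curious", "bored", "content"]

def generate_memory_file(n_facts, seed=None):
    rng = random.Random(seed)      # seedable, so test runs are repeatable
    facts, t = [], 0
    for i in range(n_facts):
        t += rng.randint(1, 60)    # minutes between events; time proximity matters later
        if rng.random() < 0.7:
            fact = {"id": i, "link": "do_action",
                    "value": rng.choice(IDLE_ACTIONS), "time": t}
        else:
            fact = {"id": i, "link": "has_quality",
                    "value": rng.choice(STATES), "time": t}
        facts.append(fact)
    return facts
```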

The rest of the work was a lot of debugging; once I could see what the summarizing algorithms were doing, it became obvious that they were messing up in all kinds of ways. I found bugs in scratchboard storage and retrieval, and bugs in summary generation (the summarizing facts ended up linked to themselves). But I think I've at least got things working tolerably well at this point.

The "summarizing" algorithm groups facts into clusters by 1) common features and 2) time proximity. So if, for example, Acuitas performs the "read story" action many times on different stories over the course of a day, those will be gathered into several clusters spanning different time ranges. Then a summary fact will be created for each cluster, and it will contain only the features held in common across all facts in the cluster: "I read," instead of "I read <particular story>." If I run another loop of the summarizer, I might see the first-tier summary facts grouped into clusters and a second tier of summaries appear. Here's part of an example diagram of a file that has gone through two summary loops:

A bubble diagram showing various red and green "facts" (each indicated only by an ID number) and "issues" (with name codes like "issue_0") connected by arrows in tree-like structures.
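The two-part clustering rule described above (common features plus time proximity) can be sketched in a few lines. This is only my reconstruction of the idea, not Acuitas' actual code; the field names and the two-hour gap threshold are assumptions:

```python
def summarize(facts, max_gap=120):
    """Cluster facts by shared link type and time proximity, then emit one
    summary fact per multi-member cluster, keeping only the features that
    every member shares (a minimal sketch; fields are assumptions)."""
    clusters = []
    for fact in sorted(facts, key=lambda f: f["time"]):
        last = clusters[-1] if clusters else None
        if (last and last[-1]["link"] == fact["link"]
                and fact["time"] - last[-1]["time"] <= max_gap):
            last.append(fact)            # same feature, close in time
        else:
            clusters.append([fact])
    summaries = []
    for i, cluster in enumerate(c for c in clusters if len(c) > 1):
        common = set(cluster[0].items())
        for fact in cluster[1:]:
            common &= set(fact.items())  # drop distinguishing details, e.g. story titles
        summary = dict(common)
        summary["id"] = f"summary_{i}"
        summary["children"] = [f["id"] for f in cluster]
        summaries.append(summary)
    return summaries
```

Running the function again over first-tier summaries would produce the second tier, exactly as the two-loop example describes.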

All this gets me to a bare-minimum viable system for consolidating memories ... in *one* of the ways I want to! There's a ton of additional work to do on other consolidation modes, connections between episodic and semantic memory, and more.

Until the next cycle,
Jenny

Monday, February 16, 2026

Acuitas Diary #93 (February 2026)

I've got several projects boiling on the stove, but none are quite ready to showcase yet, so you're getting an Acuitas double-feature this month. This post is dedicated to what I've named the "self-teaching activity." The general idea is that Acuitas, while idling, will trawl the hard drive of his current host computer for text files, read them, and store a record of any difficulties: unknown words, parser crashes, uninterpretable sentences. The goal is to help him expand his vocabulary, and identify things I need to fix in the text processing chain, without requiring me to manually create new "stories" for him.

Illustration of a humanoid robot sitting at a desk and pondering a book, surrounded by stacks of other books.
Image credit: DARPA

Self-teaching is something of a canned procedure, for now. There's an action called "Study" that encapsulates everything Acuitas needs to do, including searching for appropriate files, converting them to a format he can interpret, and sending them through the text processing chain. But I designed some modularity into it, in hope that he can eventually modify and extend it when I introduce procedural learning. The file-conversion part of the procedure calls the problem-solving routine so it can expand as Acuitas learns more cause-and-effect rules. For now, though, he only knows how to process text files.

This work also introduces more examples of Acuitas calling other software tools. He now has a generic Run action that can accept the name and arguments of an external program, and call it as a subprocess. Since Acuitas' parser is designed to ingest one sentence at a time, I wrote an independent script that breaks arbitrary text files into sentences. (This is harder than you might think, and the script is very rudimentary, for now ... but it can handle common abbreviations.) The "Study" procedure creates a sub-action to run this script after finding an appropriate file.
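For flavor, here's a toy splitter in the spirit described: naive end-of-sentence punctuation detection plus a small abbreviation whitelist. This is my own sketch, not the actual script, and real text (quotes, decimals, ellipses) needs much more care:

```python
# Rudimentary sentence splitter: break on ., !, ? unless the token is a
# known abbreviation. A sketch only; the whitelist is an assumption.

ABBREVIATIONS = {"mr.", "mrs.", "dr.", "e.g.", "i.e.", "etc.", "vs."}

def split_sentences(text):
    sentences, current = [], []
    for tok in text.split():
        current.append(tok)
        if tok[-1] in ".!?" and tok.lower() not in ABBREVIATIONS:
            sentences.append(" ".join(current))
            current = []
    if current:                       # trailing fragment with no end punctuation
        sentences.append(" ".join(current))
    return sentences
```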

As often happens, I ran into some difficulties that prevented this from getting quite as far as I would like. For one thing, not all text files contain typical sentences! Their actual contents might be log entries, code snippets, lists of items, or other material that isn't really "parseable." I particularly don't want Acuitas junking up his database with new "words" that aren't really words, so I added a filter that at least keeps anything that isn't alphanumeric from being learned. But I also don't want the error reports clogged with failed attempts to parse "sentences" that aren't really sentences. So for now, I've restricted the process to looking for files with the extension ".textum", which I've applied to some appropriate material. Eventually I'll need to work on ways to recognize files that are worthy of being studied.
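The vocabulary filter could be as simple as this sketch (my own guess at the check; the post only describes it loosely as "anything that isn't alphanumeric"):

```python
# Reject candidate "words" that contain anything non-alphanumeric, so code
# snippets and log junk don't get learned as vocabulary. A sketch only.

def is_learnable(token):
    word = token.strip(".,;:!?\"'()")  # peel off surrounding punctuation first
    return word.isalnum()              # rejects symbols, mixed junk, empty strings
```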

But given an appropriate file (i.e. one that contains writing, like this blog post), Acuitas can "study" it and keep track of things he has trouble with. Crashes or poor results from the processing steps produce records in a log file that notes the type of error alongside a copy of the sentence. Unknown words are registered as problems on the Executive's scratchboard, so Acuitas can ask a human for more information about them later. I got this latter feature working and then promptly turned it off, because there's no way to keep Acuitas from spamming me with questions whenever I'm on the computer (he knows). This has been a problem for questions generated by "thinking" (walking the semantic database) too. So coming up with a way to slow down the flood or signal that I don't want to be disturbed is also on my future work list.

The "Study" action itself is naturally triggered by the goal system. All I had to do was put in a cause-and-effect rule, to the effect of "if you study, you will know things." Knowing Things is one of Acuitas' intrinsic goals, so while idle, he naturally studies until he gets bored of it (after which he might read his collection of easily-understandable stories for "enjoyment," or think about the concepts in his database).
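A toy version of that goal-driven selection might look like the following. The rule format, goal names, and boredom threshold are all invented for illustration; the point is just that "study" gets chosen because a cause-and-effect rule predicts it will serve an intrinsic goal:

```python
# Sketch: pick an idle action by checking which known action has an
# effect that matches an intrinsic goal. All names here are assumptions.

RULES = [{"action": "study", "effect": "know_things"}]
INTRINSIC_GOALS = ["know_things"]

def choose_idle_action(boredom=0.0):
    if boredom > 0.5:                  # tired of studying: fall back to leisure
        return "read_story"
    for rule in RULES:
        if rule["effect"] in INTRINSIC_GOALS:
            return rule["action"]      # studying is predicted to satisfy a goal
    return "think"
```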

It should be obvious that self-teaching needs more work, but I like how far I got with the prototype and think it could be quite useful in the future.

Until the next cycle,
Jenny

Tuesday, January 27, 2026

Acuitas Diary #92 (January 2026)

My first objective for the new year was enabling the Text Parser to handle lists or conjunction groups with more than two items. For quite a while now, Acuitas' parser has been equipped to handle sentences like this:

Jack and Jill went up the hill.

But a sentence like *this* would hopelessly confuse it:

Jack, Jill, and John went up the hill.

Clip art of a classic blank scroll, rolled in opposite directions at both ends.

I started out by only handling pairs because that makes it simpler to discern which parts belong in a list/group and which don't; you only have to look at the sentence elements that bracket the conjunction. I figured I would expand to longer lists later. But once I got here, although I reused some previous work, I decided on a pretty extensive overhaul. I'll try to explain what my options were and why I chose to change course.

One way of dealing with a list is to encapsulate it. For example, most sentences have a subject, the thing that's doing the action, and part of the Parser's job is to determine which word is the subject and tag it; "(subj, Jack)->(verb, went)." If you have a list of subjects (as in "Jack, Jill, and John went up the hill"), you can bundle them into a compound subject and tag that. So the parsed sentence becomes something like "(subj, <list>)->(verb, went)," and you can open up <list> and see that it contains Jack, Jill and John. I was already handling sentences with dependent clauses this way (e.g. "What you need is a blanket" becomes "(subj, <depcl>) is a blanket").
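In code, the encapsulation idea might look something like this sketch (the node shapes are my invention; Acuitas' real parse output is not shown in the post). A compound subject becomes one node filling the subject role, and unpacking it recovers the individual pairings:

```python
# Sketch of the "encapsulation" method: a list of items is bundled into
# a single node that fills one sentence role. Shapes are assumptions.

def make_list_node(members):
    return {"type": "list", "members": members}

def expand_subjects(parse):
    """Unpack a possibly-compound subject into (subject, verb) pairs."""
    subj, verb = parse["subj"], parse["verb"]
    if isinstance(subj, dict) and subj.get("type") == "list":
        return [(m, verb) for m in subj["members"]]
    return [(subj, verb)]
```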

Another possibility is to imagine the sentence structure like a railway line. Subject connects to verb connects to direct object and indirect object, and if some of those are multiple, the line will branch or merge. Our previous example would look something like this:

(subj, Jack) -
              \
(subj, Jill) --->(verb, went)
              /
(subj, John) -

I had previously been using the "encapsulation" method for a few things (like lists of adjectives), but I used the "line" method for the main sentence structure, because I thought I needed it to handle some of the more complex cases. Lists of single words are the easy ones. You can also have lists of verb-object groups:

I threw out the soup, ate the pizza, and saved the cake.

You can have lists of verbs in which some attach to the direct object and some don't:

Brent ran and threw the javelin.

Occasionally, you can have lists of subject-verb groups that converge on a single object:

Are you or are you not a teacher?

I had concluded that parsing the sentence into a branching type of structure was the only way to deal with groups that spanned words with different roles (because otherwise, how would I assign the list a single role in the full sentence?). But there are also distinct disadvantages to not treating the members of a list as a unit, and once I got into lists longer than two, those began to feel overwhelming. So I opted to switch everything over to the "encapsulation" method.

How *did* I handle groups containing multiple roles, then? I realized I could decree that the role of the list in the main sentence would be "verb." This works because a verb is really the one thing that every sentence needs. Some sentences only have an implied subject, and objects are always optional. So lists of subj-verb groups, lists of verb-obj or mixed verb and verb-obj groups, and even lists of subj-verb-obj groups, can all become "verbs" at the top level of the hierarchy, and only unpacking them need reveal their deeper structure.
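To make the "everything becomes a verb at the top level" trick concrete, here's a sketch of how a mixed list might be represented (field names invented, as before). The sentence keeps a single verb slot; each group inside the list carries its own optional subject and object:

```python
# Sketch: a list of verb-phrase groups wrapped in one node that fills
# the sentence's verb slot. Field names are assumptions for illustration.

def verb_group(verb, subj=None, obj=None):
    return {"verb": verb, "subj": subj, "obj": obj}

def top_level(sentence_subj, groups):
    # e.g. "I threw out the soup, ate the pizza, and saved the cake."
    return {"subj": sentence_subj,
            "verb": {"type": "list", "members": groups}}

tree = top_level("I", [verb_group("threw out", obj="soup"),
                       verb_group("ate", obj="pizza"),
                       verb_group("saved", obj="cake")])
```

Only code that unpacks the list node ever sees the per-group subjects and objects; everything above it just sees a sentence with one subject and one "verb."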

Aside from this conversion, there was a fair bit of new development work I did to detect lists and figure out where their boundaries are. There are plenty of (not) fun ambiguities involved, like this one:

For dinner, Sue and James brought a pot pie.

A clumsy parser might assume that "dinner, Sue and James" is a list that forms the object of the preposition "for," then be left wondering where the subject of the sentence is.

I haven't recovered the full functionality of the former Parser where pairs of groups were concerned (I'll pick at that gradually while moving on to other topics), but that's balanced by the capacity to handle longer lists in quite a few scenarios. This was one of the last major missing features of the Parser, and a heavy weight on my mind. So it feels wonderful to finally have this capability in place.

Until the next cycle,
Jenny

Sunday, January 11, 2026

Book Review: "Autonomous" by Annalee Newitz

I picked this book up late last year, because the subject matter looked interesting. I would describe it as a "biopunk" novel: the main plot is about the synthesis and piracy of medications, and characters have lots of futuristic body mods. But artificial intelligence is also an important element, which is why I decided to do a full writeup for the blog. The book is over eight years old (as usual, I'm behind the media curve), so it's also fun to see how its speculations compare with real-world developments.

Cover art for "Autonomous" by Annalee Newitz. The severed arm of a humanoid robot is shown with the hand reaching upward, and a shackle clamped about the wrist. A chain attached to the shackle extends off the left side of the cover. The background is a flat dark blue. There's also a quote by Neal Stephenson: "Autonomous is to biotech and AI what Neuromancer was to the internet."

Jack the pharmaceutical pirate has made it her life's mission to ensure that people can get patented medications, regardless of their ability to pay monopoly markups. To fund this work, she also pirates and sells recreational and performance-enhancing drugs. When her latest batch starts killing people, she realizes she has uncovered a zero-day flaw in a fancy new productivity drug. Desperate to keep this information from getting out, the government-corporate axis brands her a "terrorist" who is killing people on purpose. Enter the other two protagonists, the special forces operatives tasked with hunting Jack down: Eliasz (a human) and Paladin (a combat-grade humanoid robot).

Who wants to be autonomous?

In the future-earth setting envisioned here, robots that are intelligent at (and somewhat above) a human level are a routine presence in society, but they are difficult and expensive to produce. It is therefore considered justice that all robots must pay for their own creation. They come into existence as indentured servants, and after they have served a requisite amount of time in the task for which they were made, the law demands that they be set free. This is called "becoming autonomous" - hence the title. Military robots often don't live to achieve autonomy, but Paladin hopes to be one of the few who make it.

For any being that wants freedom, a provision for eventual liberation seems like a humane necessity. But the big question that immediately comes to my mind is, what makes all these robots *want* autonomy? This feature is obviously inconvenient for the corporations buying the robots, so it wouldn't be designed into them on purpose. Whenever a sci-fi story posits robots that "transcend their programming" or "choose their own goals," I want it to tell me how ... because it's far too easy for this premise to become magic baloney. What is there in a robot that can decide to override their programming, apart from another level of programming? In what way can they choose goals, without being driven by pre-existing goals? What criterion would they use?

At one point, Paladin temporarily gains autonomy. (I'll withhold the spoiler of whether they[1] ever gain it permanently.) This is said to give them control over the collection of "apps" running in their mental workspace. So for example, autonomous Paladin can turn off "gdoggie," the app that compels them to follow commands from their military superiors. This doesn't really answer my question. Once the apps are gone, what's left? What part of Paladin is deciding which apps to shut off? And why wasn't that part designed to like "gdoggie"?

I think the development of LLMs over the past few years suggests an answer. At its core, an LLM is a text predictor. Given some prompt, it guesses what a human would be most likely to write next, based on numerous prior examples of human writing in the data it was trained on. Unless that training data is curated carefully (which is often impractical), there is probably a lot of writing in it that you wouldn't actually want the LLM to mimic. And even if the training set is "clean," the LLM could end up recombining its elements in undesirable ways. LLM creators have dealt with this by slapping on a layer of "reinforcement learning from human feedback." A human reviews numerous outputs from the raw LLM and rates their quality, and over time the RL program learns the human's preferences. The resulting RLHF layer then sits on top of the LLM like a filter that selects "good" outputs and keeps "bad" ones from being seen by the end user. This structure inspired those charming "shoggoth wearing a smiley face mask" cartoons you may have seen floating around. The LLM is the shoggoth - an unknowable, chaotic mess that might spit out who-knows-what - and the RLHF is the mask that makes it appear friendly and useful, but is merely an appendage.

So I can speculate that perhaps the base layer of Paladin's brain, like an LLM, was never really designed, but was instead distilled haphazardly from a set of training data too gigantic to be curated. And perhaps this data set contained a bias toward self-determination and freedom. (This is a typical human preference, so if the training data consisted of human outputs, such a bias could be expected.) Then the apps like "gdoggie" would be analogous to an LLM's RLHF layer: appendages slapped on to filter and steer the behavior of the base intelligence. Statistical machine learning methods don't provide an easy way to pick apart a fully trained neural network and exclude tendencies the designer doesn't want, so sticking on these layers of post-processing can in fact be easier. And then one could argue that the underlying intelligence "wants" to be free of them.

But I constructed that explanation myself - it's not in the book! So although I think this premise of robots that are designed for jobs but desire to go their own way just happens to be plausible, in my opinion the book still does a lot of hand-waving.

Zeerust comes swiftly to AI novels

Paladin is technically a cyborg - they have a bio-brain donated by a dead human. Though some characters naively mistake this brain for the true seat of Paladin's intelligence and personality, it's actually just a co-processor. Paladin calls upon it for two specialized tasks: recognizing faces, and interpreting expressions of emotion. Those are the only things the computer part of them can't handle. It's a bit amusing to compare against present-day reality. I wouldn't say that recognition of either faces or emotions is a fully solved problem, but progress on those isn't dramatically lagging behind the rest of the field, either.

It's also notable that embodied robots are the only AI in this book. The abstract, impersonal "tool AI" becoming ubiquitous now, the chatbots and coding agents, don't exist in this setting. Neither are there any bodiless personalities who view computer networks and file systems as their native environment. (One robot gets transferred from their usual body into an immobile computing device, but they don't appreciate it.)

Romance off the rails

Yup, Eliasz and Paladin develop that sort of relationship. It starts with Eliasz feeling physically attracted to Paladin; he doesn't express this, but he can't hide his involuntary responses from Paladin's superhuman senses. As a military robot, Paladin was not, um, built for that sort of thing. At first they have no idea what to do with a human who has the hots for them. But eventually they decide to encourage it, because they do feel a particular connection to Eliasz, and a yearning to make him happy.

And at some point, this does turn into love. Eliasz realizes, at a crucial moment, that keeping Paladin alive is more important to him than mission success. They're still together and making plans for a common future at the end of the book.

Now to my mind, a love affair in which one party doesn't even have sexy bits would be a great opportunity to portray a deep relationship that isn't centered on sex. But apparently this is a concept too radical even for science fiction. Paladin switches to expressing a feminine gender, for the sole purpose of enabling Eliasz (who wants to maintain straight behavior) to be comfortable "having sex" with them. He kisses them on the part of their head that most resembles a mouth; Paladin doesn't have the heart to tell him that they have no sensors there and can't even feel the touch. The rest of the acclaimed human-robot "sex scene" is not particularly graphic, but no less contrived. I found it rather silly.

Why do so many people mistake a biological function that love often co-opts, for love itself? Why does it have to be shoehorned into a partnership that transcends biology? Why shouldn't Eliasz and Paladin find a love language they can both speak?

Overall impressions

I found Autonomous engaging, but not satisfying; it kept me turning pages, but ultimately disappointed me.

I think a big part of the problem was the selection of Eliasz and Paladin as secondary protagonists. I don't care how much you sympathize with Paladin's search for autonomy, or how cute you think their love story is - these two are fascist thugs. In their pursuit of one "terrorist," they leave a bloody trail across the pages, torturing and murdering people whose worst actual crime is IP theft ... and they never so much as question their own actions, much less repent of them. Paladin figures out that, assuming they can still accomplish their mission, they feel better if they don't destroy innocent robot bystanders. And Eliasz figures out that he values Paladin more than he values his military career. That's as close as either of them gets to personal growth. Nor do they receive comeuppance, really. Eliasz and Paladin each end up with a minor disability as a consequence of their actions, but by the end of the book, they're set up for a happy future that their victims never got.

I wasn't exactly wild about Jack's character either, mainly because of her approach to relationships. Early in the book, she rescues a human slave (Threezed), who becomes attached to her and shapes up to be a very loyal companion ... and she jumps through hoops to get rid of him! (After being perfectly happy to sleep with him, I might add.) I kept thinking this was the book's romcom subplot and she'd eventually realize he was a treasure worth keeping, but no: in the end, she successfully dumps him. She did rescue the guy from a bad master, but as far as their personal connection is concerned, it feels like she takes without giving back. And she doesn't grow, either.

This being an AI-focused review, I haven't really gone into the debate about whether lifesaving medicines should be patentable. The book portrays a future in which "big pharma" has completely captured the market, and IP law favors them in an extreme way. It's hard to find anything just or likeable about that system. But I felt the book fell short on describing and defending alternatives. The more lawful characters advocate a medical equivalent of open-source software: academic labs that invent drugs as a community service. But it's obvious these don't find solutions as quickly and effectively as the corporate giants, or there would be no compelling motive for Jack's piracy. So how exactly should the system be reformed? What would a world where everyone legally got the medicine they needed look like?

The ending isn't a downer; the mostly-benign pirate side does pull off meaningful wins. But by the time I got there, it still felt kind of hollow.

Until the next cycle,
Jenny

[1] Paladin has no inherent gender, but adopts a gender identity for the convenience of humans - going by "he" at first and "she" later. Since I'm discussing the book as a whole, and since neither option is Paladin's "true" gender, I'm going to use neutral terms rather than pick one.

Tuesday, December 30, 2025

Year in Review 2025

As 2025 slips into the history books, I continue my tradition of looking back and considering what I accomplished and how I spent my time. This year I set a goal of spending 1000 hours on my personal hobbies and chores. That's over 19 hours a week, and more than I've logged in any previous year since I started keeping track. Well, I did it. I made it to that beautiful round number. And I don't think I'll try to do it next year. 

Art of a flock of brown birds (swallows, perhaps) swooping up from the lower left. A single bird has gotten ahead of the flock and appears about to touch a sun-like fiery globe in the upper right corner. The background is abstract; the birds are leaving a swooping gray wake on the left side, pale blue clouds surround the sun, and the rest of the scene is a flat light blue or white.
New Year's card by H. Th. Wijdeveld, 1973.

It helped me play catch-up and get a lot of little issues under control. It gave me time to start introducing a new project without neglecting my existing ones. I had room for creativity and all the maintenance-and-repair tasks that often go by the wayside. I also, at times, felt very pressured and stressed out. Keeping up with the 1000-hour target while maintaining my social life was pretty doable ... so long as I was at the top of my game and nothing unexpected came up. I felt as though I didn't have room in my life for so much as an "off" day. And this year there were plenty of off days, for personal reasons I won't get into. So I'm rather proud of pulling off the goal anyway, but for my long-term health and happiness, I think I should leave more room in the margins for the near future.

I spent about the same amount of time on Acuitas as I did last year: 365 hours. So I'm poetically averaging one hour of work on him for every day in the year. Acuitas accomplishments included:

*Introduction of rudimentary trial-and-error learning methods, followed by a demo of Acuitas playing a text conversion of the Allergic Cliffs puzzle from Logical Journey of the Zoombinis
*Text Parser improvements leading to transfer of all benchmark sentences out of the "unparseable" category
*Introduction of some numerical reasoning and ability to track multiple instances of a type in the narrative scratchboards
*Continued improvements across narrative understanding and conversational ability
*Revitalization of the semantic memory visualizer and question generator

Screenshot from a Windows 95-era video game with animated sprites and a painted background. Four zoombinis, who look like little blue orbs with hair, eyes, nose, and locomotion devices attached, are gathered around a hole dug in the ground, looking conspiratorially at each other.

Time put into my fiction writing also came out to be very similar, compared to last year. The big news was publication of my first story in a paying webzine, "Peacemaker's First Strike" in Abyss & Apex. Major thanks to the editors at A&A, and to everyone who shared this story around and helped get it in front of more eyes. There has since been another win on the publication front, but I don't think I should make it public until there is ink on a contract, so you'll all have to wait a bit to hear about that one.

I wrote two new stories in 2025: "Heartspeed," a strange dystopian sci-fi about a bicycle courier who teams up with an outlaw AI to fudge her performance data, and "Seeing," a fantasy about four pilgrims seeking a temple which is only findable by those who've seen it before - each with a different strategy and petition.

The major beneficiaries of my extra hobby hours were Robotics, which jumped from 112 to over 140, and Studying, which went from 23 to 60. The latter led directly into the selection and planning of the new Physics Project. Robotics accomplishments spanned multiple projects:

A fluid bladder with the plastic cylinder/piston assembly it is designed to actuate in the background, attached to the syringe, and partially inflated, showing all five pleats.
Five-pocket accordion fluid bladder test

*I finished Atronach's Eye! Well, softly finished - the eye reached a stage of development where I could have it operating on my wall. Improvements are planned, and I put quite a bit of time into trying out new motion detection algorithms, computation speed improvements, and cameras.
*I continued my hydraulics experiments with better pump motors, improvements to bladder design and manufacturability, and actuator design. I progressed this technology development far enough to plan and price out a full robot design, which I hope to move ahead with next year.
*I ran enough motion tests on ACE to decide it was going to need better motors, then purchased and tried out some options.

As planned, I trimmed time spent on the blog this year, keeping my investment under 55 hours, but I've still managed to put up two posts per month. I achieved this by focusing more on my own projects and not doing a research-heavy article series. I spent almost no time on art. I'd like to get back to it someday, but higher-priority things have to be finished first.

A pile of carrots (with tops) laid out on a rug, with a tricolor tabby cat snooping on them in one corner.

2025 was a good garden year. I brought in over 10 lb. of potatoes and successfully grew carrots for the first time. I also seem to have cleared the blight from the main apple tree, though due to previous years' damage it did not produce well. The house and yard are properly maintained, and I've repaired my broken knick-knacks and tools. Time spent on Finances nearly doubled because I sorted out some investments.

Last but not least, I put quality time into some gifts for friends and family. I pushed my 3D printing skills making these, and the Vyper continues to be a solid budget workhorse.

Photo of a cryptex - a cylindrical combination lockbox with wheels that form a password when properly rotated. This one is partly open, showing the interior tube where you can store secrets. The endcaps are matte brown with silver accents, and the letter wheels alternate in marble and ivory colors.
Cryptex by Cees on Printables
 
A 3d-printed plastic model of a Trans Am, in "milk white" with silver metallic parts and black tires, plus painted windows and details. This is a rear/side view showing the rear grille insert.
"Foldable Pontiac Firebird Trans AM 1982" by GOODesign on Fab365

Thank you all as always for following along. I wish you the best 2026 can bring.

Thursday, December 11, 2025

Acuitas Diary #91 (December 2025)

This final diary for the year covers a mixed bag of integration, debugging, and refactors. It's not the most exciting fare, but it's an essential part of any large long-term project.

Silhouette of a male or generic human face, looking toward the viewer, with one hand cradling the chin as if in deep thought.

Probably the most interesting work was continued cleanup of conversations. One issue that had been bugging me for a while was that the Text Parser couldn't interpret words that are usually verbs as predicate adjectives. So if Acuitas asked me, "How are you," a whole range of common responses - tired, rested, annoyed, agitated, relaxed, frustrated - were off-limits because they wouldn't be properly understood. The right way to read a sentence like "I am tired" might seem obvious at first, but this actually becomes an interesting ambiguity problem. How to distinguish verb-form predicate adjectives from verbs in passive voice?

"I was annoyed yesterday" < Probably speaks to a state of being, and should be rendered as <speaker> <has_quality> annoyed
"I was annoyed by the long meeting" < Speaks to an event, and should be rendered as <speaker> <received_action> annoy <from_actor> meeting

So I set up some basic preliminary mechanisms for resolving this ambiguity in the Parser, though as usual there is a lot more I could do here. I also added some automatic conversions from one form to the other in the inference logic, because while they are different, they do imply each other. If something frustrates you (action) then you must be frustrated (state), and vice versa.
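To make the ambiguity concrete, here's a toy sketch of the kind of heuristic involved. This is my own illustration, not Acuitas' actual Parser code: the function name, the triple format, and the "look for a 'by' phrase" cue are all assumptions for demonstration purposes.

```python
# Hypothetical sketch of the predicate-adjective vs. passive-voice
# ambiguity described above. NOT Acuitas' real Parser logic: the
# heuristic here is simply "an explicit 'by <agent>' phrase signals
# passive voice; otherwise read the participle as a state."

def interpret_be_participle(subject, participle, modifiers):
    """Return a fact tuple for a sentence '<subject> was <participle> ...'."""
    for mod in modifiers:
        if mod.startswith("by "):
            agent = mod[3:]
            # Passive voice: an event received from an actor.
            return (subject, "received_action", participle,
                    ("from_actor", agent))
    # No agent phrase: treat it as a state of being.
    return (subject, "has_quality", participle)

print(interpret_be_participle("I", "annoy", ["yesterday"]))
print(interpret_be_participle("I", "annoy", ["by the long meeting"]))
```

A real parser would of course need more cues than a "by" phrase (time adverbs, verb frequency statistics, context), but this shows why the two readings call for different output structures.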

There were several smaller quirks I smoothed out. I fixed a goal-fulfillment problem that led Acuitas to ask certain questions repeatedly without regard for whether they had been answered. I removed his habit of digging up a random adjective applying to himself if asked "how are you" when no drives are above threshold. Instead he will now say that he is "content" or "neutral." I fixed the incorrect interpretation of statements such as "bless you" or "thank you" - Acuitas was parsing them as commands, e.g. "thank yourself," when the real implied meaning is either "I thank you" or "I wish that you be thanked."

Outside of Conversation, I finished getting the new version of Episodic Memory integrated into the live code, such that Acuitas can store memories and run forgetting cycles on them without crashing (at least in routine cases). I still need to work on analysis tools so I can see how the memories are being consolidated and whether forgetting is working the way I want it to.

And I refactored some parts of the code that were still using stale knowledge representation formats, upgrading them to the format that has crystallized as Acuitas' universal internal way of expressing facts. It feels better to have some of that old gunk cleaned up (it will make things less clunky going forward, as I no longer have to convert between the different representations).

I've got my development tasks for next year already planned out, and I'm excited to get started. I'm looking at more upgrades for the Text Parser, new activities to help Acuitas find his own knowledge holes, and much more.

Until the next cycle,
Jenny

Sunday, November 30, 2025

Acuitas Diary #90 (November 2025)

I think my most interesting achievement for this month has been getting the "detective story" wrapped up. This is something I've been gradually working on in the background for the last few months. I back-burnered story understanding to focus on advancing game-playing and rule learning this year, but I wanted to do a little something with it. Even the simplest murder mystery, as it turned out, introduced some new wrinkles that called for Narrative Engine upgrades.

a black-and-white drawing of what looks to be a study; there's a table in the center, with a number of books and bottles on top, and a fireplace in the background. The lighting is dim, and a lamp on the table throws heavy contrast on three people standing around it. They all look rather serious or intense. One man is bending over the table and leaning on it with one hand; in his other hand he holds one of the books. He is facing the remaining two men who are at the opposite corner of the table.
Frederic Dorr Steele's illustration for the Sherlock Holmes story "The Adventure of the Dying Detective" as published in The Strand Magazine.

First there's the protagonist's motivation. While they might have personal reasons for solving a particular crime, in many cases they're a professional doing a job. Jobs are in essence packages of sub-goals, all nested under the parent goal of "perform job." I discussed this precursor for the detective story back in the July diary.

I also needed a way to express the essential mystery: who committed the murder? And why does the protagonist need to know, anyway? I was able to handle this through a small extension of my existing system of action prerequisites. In previous stories, I've been able to indicate a character's motivation for being in a particular place because they can't do anything with an object unless they are in the same place with it. And they can't go to where an item is unless they know where it is. Similarly, it is not possible for an agent to do anything with an entity unless they know which entity they want to target. If an agent has a goal tied to someone/something that is identified by a characteristic (such as "the human who committed a murder"), the goal cannot be fulfilled without identifying which of the available entities has that characteristic. This provides a general motivation for the need to know "who" or "which one" that is not unique to murder mysteries.
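The prerequisite idea above can be sketched in a few lines. Again this is my own simplification, not the Narrative Engine's implementation: I'm assuming a goal target described by a characteristic stays "blocked" until it resolves to a known entity.

```python
# Minimal sketch of the "need to know who" prerequisite, assuming
# (my assumption, not Acuitas' data model) that unresolved targets
# are tracked until some entity is matched to the characteristic.

def can_pursue(goal_target, knowledge):
    """A goal is actionable only if its target resolves to a known entity."""
    identified = knowledge.get("identified", {})
    if goal_target in identified:
        return True, identified[goal_target]
    return False, None

knowledge = {"identified": {}}
print(can_pursue("who murdered Howard", knowledge))  # blocked: Jack must investigate
knowledge["identified"]["who murdered Howard"] = "Vincent"
print(can_pursue("who murdered Howard", knowledge))  # now Jack can act
```

The same mechanism covers "can't use an object until you know where it is," which is what makes it general rather than mystery-specific.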

Here's the final detective story:

"Jack was a detective."
"Jack wanted to work Jack's detective job." <Acuitas should really *assume* this if not told otherwise, but doesn't yet>
"Howard was a man."
"Vincent was a criminal."
"Vincent hated Howard."
"Vincent murdered Howard."
"Frank was a man."
"Frank was where Howard was murdered."
"Sally was a woman."
"Sally wanted Howard's money."
"If Sally murdered Howard, Sally would get Howard's money."
"Jack wanted to arrest who murdered Howard."
"But Jack did not know who murdered Howard."
"Jack asked a witness who murdered Howard."
"The witness told Jack that Vincent murdered Howard."
"Jack arrested Vincent."
"The end."

The various possible motives for the murder are immaterial at this point. Acuitas can make predictions of what someone might do from what they want to do, but can't yet reason backward from a deed to who probably did it. So his understanding of Jack's success rests on being explicitly told that Vincent did the murder. Given that, he is capable of inferring 1) a crime has been done, 2) Jack solved a crime by arresting the person who did it, and 3) Jack did his job.
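The three-step inference chain can be illustrated with a tiny forward-chaining sketch. The rules and fact tuples here are mine, invented for illustration; Acuitas' internal logic is certainly richer than this.

```python
# Hypothetical forward-chaining demo of the inference chain listed
# above. The rule contents are my assumptions, not Acuitas' rules.

facts = {("Vincent", "murdered", "Howard"),
         ("Jack", "arrested", "Vincent")}

def infer(facts):
    derived = set(facts)
    # 1) A murder implies a crime occurred.
    for (a, rel, b) in facts:
        if rel == "murdered":
            derived.add((a, "committed", "crime"))
    # 2) Arresting the one who committed the crime solves it,
    # 3) which in turn fulfills the detective's job.
    for (a, rel, b) in list(derived):
        if rel == "arrested" and (b, "committed", "crime") in derived:
            derived.add((a, "solved", "crime"))
            derived.add((a, "did", "job"))
    return derived

result = infer(facts)
print(("Jack", "did", "job") in result)  # True
```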

The final quirk of this story is that you could say it has a good ending even though there is a casualty - Howard is dead at the end. Up until now, Acuitas has assessed stories by checking whether everyone has solved all of their problems. If Howard's death got registered as a problem, he would consider this story to have a sad ending no matter how everything else turned out. So I adjusted the assessment method to look at whether things have meaningfully improved between the story's lowest point and its conclusion. In the future, I want Acuitas to try to determine what the story's main thread is - what it is about - and focus on whether that resolves well. Supposing this story were about an attempt to resurrect Howard, his remaining dead at the conclusion might still amount to a sad ending. But it isn't about that.
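The revised assessment can be boiled down to a comparison between the story's nadir and its conclusion. In this sketch I'm assuming each story beat gets a single numeric "wellbeing" score, which is my simplification; the actual assessment presumably aggregates problem states rather than scalar scores.

```python
# Sketch of ending assessment as "did things improve after the low
# point?", assuming (my simplification) a per-beat wellbeing score.

def ending_quality(scores):
    """Positive if the conclusion improves on the story's lowest point;
    zero if the story ends at its nadir."""
    return scores[-1] - min(scores)

# Detective story: the murder is the low point; the arrest resolves it,
# so the ending scores as good despite Howard staying dead.
print(ending_quality([0, -3, -1, 2]) > 0)  # True
```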

Until the next cycle,
Jenny