Saturday, February 28, 2026

Acuitas Diary #94 (February 2026)

In the latest news, I've been putting more work into the Episodic Memory overhaul that I began last year. The big challenge at this stage was finding ways to examine results and actually test the thing. Since memory accumulation and consolidation is a process that spans weeks, I needed ways to run simulations and observe changes much faster.

Art piece: colored pencil and ink. A horizontal view, half-underwater, half-overwater, of the "beach" of a coral atoll. Everything is rendered in brilliant blues. Above the water, the atoll in the distance rears up into shapes that somewhat resemble chess pieces: a castle, a knight, a pawn. Surf crashes against one side of the atoll; a steamship is riding the crest of a wave toward it. Below the water, the branching hard corals are visible close up; they have multicolored, faceted surfaces, like cracked glass. A chessboard also lies on the sea bottom, partly covered by coral crust. The board is set in the midst of play, but not with the traditional pieces; these pieces resemble paws, hands, tentacles, tree stumps, and other oddities.
A portrayal of the color apocyan, "the blue of memory and brightest coral," from Sunless Sea. Original art by author.

As I did in my original stab at episodic memory work, I threw together a visualizer to show me a simplified graphical representation of the memories. But this time I used GraphViz, instead of drawing custom dot diagrams in Kivy. These memory visualizations are designed for me to view offline, so there's no particular need to create them in the GUI, and GraphViz is easier to use. Each bubble in the image is either a "fact," color-coded by link type (do_action, has_quality, etc.) or an "issue" (problem or subgoal). Facts that summarize other facts are connected by arrows to all their "children," and the same goes for issues; each issue is also connected by a bold arrow to the fact it is directly concerned with.
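The fact/issue graph structure described above could be emitted as GraphViz DOT source along these lines. This is a minimal sketch under my own assumptions: the data shapes (fact dicts with a "link" type and "children" list, issues pointing at a fact) and the color palette are invented for illustration, not Acuitas' actual data model.

```python
# Sketch: turn a simplified memory structure into GraphViz DOT source.
# Data model and names are illustrative guesses, not the real internals.

LINK_COLORS = {"do_action": "red", "has_quality": "green"}  # assumed palette

def memories_to_dot(facts, issues):
    """facts: {fact_id: {"link": str, "children": [fact_id, ...]}}
    issues: {issue_id: {"about": fact_id}}
    Returns DOT text that the GraphViz `dot` tool can render."""
    lines = ["digraph memories {"]
    for fid, fact in facts.items():
        color = LINK_COLORS.get(fact["link"], "gray")
        lines.append(f'  {fid} [label="{fid}", color={color}];')
        for child in fact.get("children", []):
            lines.append(f"  {fid} -> {child};")  # summary fact -> summarized fact
    for iid, issue in issues.items():
        lines.append(f"  {iid} [shape=box];")
        # Bold arrow from each issue to the fact it directly concerns.
        lines.append(f'  {iid} -> {issue["about"]} [penwidth=3];')
    lines.append("}")
    return "\n".join(lines)
```

Rendering is then just a matter of piping the returned string to `dot -Tpng`, which is much less work than laying out nodes by hand in a GUI toolkit.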

Then I made a quick procedural generator that creates randomized memory files on command, so that I could have a variety without waiting for Acuitas to "grow" them. The generator populates a narrative scratchboard with the sort of subgoals and actions Acuitas would reasonably come up with while idling (reading, thinking, etc.), internal states that he might develop, and so forth. I can check my summarizing and forgetting algorithms by running them on these synthetic memories and seeing how they change the visualization.
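A generator of this sort might look something like the following sketch. The action and state vocabulary and the record fields are my own placeholders; the point is just that each synthetic record gets a fake timestamp so the consolidation algorithms have time proximity to work with.

```python
import random

# Illustrative vocabulary; not Acuitas' actual action/state lists.
IDLE_ACTIONS = ["read story", "think about concept", "study file"]
INTERNAL_STATES = ["curious", "bored", "content"]

def generate_memory_file(n_events, seed=None):
    """Produce a chronological list of synthetic memory records.
    A fake clock (minutes) advances by a random step between events,
    so clustering-by-time-proximity has something to chew on."""
    rng = random.Random(seed)
    records, clock = [], 0
    for i in range(n_events):
        clock += rng.randint(1, 60)
        records.append({
            "id": f"fact_{i}",
            "time": clock,
            "action": rng.choice(IDLE_ACTIONS),
            "state": rng.choice(INTERNAL_STATES),
        })
    return records
```

Seeding the generator makes a given "random" memory file reproducible, which helps when chasing a bug through the summarizer.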

The rest of the work was a lot of debugging; once I could see what the summarizing algorithms were doing, I could see that they were messing up in all kinds of ways. I found bugs in scratchboard storage and retrieval, and bugs in summary generation (the summarizing facts ended up linked to themselves). But I think I've at least got things working tolerably well at this point.

The "summarizing" algorithm groups facts into clusters by 1) common features and 2) time proximity. So if, for example, Acuitas performs the "read story" action many times on different stories over the course of a day, those will be gathered into several clusters spanning different time ranges. Then a summary fact will be created for each cluster, and it will contain only the features held in common across all facts in the cluster: "I read," instead of "I read <particular story>." If I run another loop of the summarizer, I might see the first-tier summary facts grouped into clusters and a second tier of summaries appear. Here's part of an example diagram of a file that has gone through two summary loops:

A bubble diagram showing various red and green "facts" (each indicated only by an ID number) and "issues" (with name codes like "issue_0") connected by arrows in tree-like structures.
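The clustering and summarizing steps described above can be sketched roughly as follows. This is my own simplified reading of the algorithm: I'm assuming clusters are seeded by a shared action and a maximum time gap, and that a summary keeps only the key/value features every member agrees on. The real criteria and record shapes are surely richer.

```python
def cluster_by_time(facts, max_gap):
    """Group facts (dicts with "time" and "action") that share an action
    and fall within max_gap of the cluster's most recent member."""
    clusters = []
    for fact in sorted(facts, key=lambda f: f["time"]):
        for cluster in clusters:
            last = cluster[-1]
            if (fact["action"] == last["action"]
                    and fact["time"] - last["time"] <= max_gap):
                cluster.append(fact)
                break
        else:
            clusters.append([fact])
    return clusters

def summarize(cluster):
    """Build a summary fact containing only features held in common:
    many "I read <particular story>" facts collapse to one "I read"."""
    keys = set(cluster[0]) - {"id", "time"}
    common = {k: cluster[0][k] for k in keys
              if all(f.get(k) == cluster[0][k] for f in cluster)}
    common["children"] = [f["id"] for f in cluster]  # links to summarized facts
    common["time"] = cluster[0]["time"]  # stamp with the cluster's start
    return common
```

Running the pair again over the first-tier summary facts is what produces the second tier of summaries shown in the diagram.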

All this gets me to a bare-minimum viable system for consolidating memories ... in *one* of the ways I want to! There's a ton of additional work to do on other consolidation modes, connections between episodic and semantic memory, and more.

Until the next cycle,
Jenny

Monday, February 16, 2026

Acuitas Diary #93 (February 2026)

I've got several projects boiling on the stove, but none are quite ready to showcase yet, so you're getting an Acuitas double-feature this month. This post is dedicated to what I've named the "self-teaching activity." The general idea is that Acuitas, while idling, will trawl the hard drive of his current host computer for text files, read them, and store a record of any difficulties: unknown words, parser crashes, uninterpretable sentences. The goal is to help him expand his vocabulary, and identify things I need to fix in the text processing chain, without requiring me to manually create new "stories" for him.

Illustration of a humanoid robot sitting at a desk and pondering a book, surrounded by stacks of other books.
Image credit: DARPA

Self-teaching is something of a canned procedure, for now. There's an action called "Study" that encapsulates everything Acuitas needs to do, including searching for appropriate files, converting them to a format he can interpret, and sending them through the text processing chain. But I designed some modularity into it, in hope that he can eventually modify and extend it when I introduce procedural learning. The file-conversion part of the procedure calls the problem-solving routine so it can expand as Acuitas learns more cause-and-effect rules. For now, though, he only knows how to process text files.

This work also introduces more examples of Acuitas calling other software tools. He now has a generic Run action that can accept the name and arguments of an external program, and call it as a subprocess. Since Acuitas' parser is designed to ingest one sentence at a time, I wrote an independent script that breaks arbitrary text files into sentences. (This is harder than you might think, and the script is very rudimentary, for now ... but it can handle common abbreviations.) The "Study" procedure creates a sub-action to run this script after finding an appropriate file.
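A rudimentary splitter of the kind described might look like this sketch. The abbreviation list and the regex heuristic (split on `.`, `!`, or `?` followed by whitespace and a capital) are my own assumptions about one reasonable approach, not the actual script; real text has plenty of cases this would still get wrong.

```python
import re

# Abbreviations that end in a period without ending a sentence (assumed list).
ABBREVIATIONS = {"mr.", "mrs.", "dr.", "st.", "e.g.", "i.e.", "etc.", "vs."}

def split_sentences(text):
    """Naively split text into sentences at ., !, or ? followed by
    whitespace and a capital letter, skipping known abbreviations."""
    sentences = []
    start = 0
    for match in re.finditer(r'[.!?]+(?=\s+[A-Z"])', text):
        end = match.end()
        # If the word before the terminator is a known abbreviation,
        # this period doesn't end a sentence; keep scanning.
        last_word = text[start:end].split()[-1].lower()
        if last_word in ABBREVIATIONS:
            continue
        sentences.append(text[start:end].strip())
        start = end
    tail = text[start:].strip()
    if tail:
        sentences.append(tail)
    return sentences
```

Even this toy version shows why the problem is harder than it looks: abbreviations, quotations, and ellipses all reuse the same characters that mark sentence boundaries.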

As often happens, I ran into some difficulties that prevented this from getting quite as far as I would like. For one thing, not all text files contain typical sentences! Their actual contents might be log entries, code snippets, lists of items, or other material that isn't really "parseable." I particularly don't want Acuitas junking up his database with new "words" that aren't really words. I added a filter that at least keeps anything that isn't alphanumeric from being learned. But I also don't want the error reports clogged with failed attempts to parse "sentences" that aren't really sentences. So for now, I've restricted the process to looking for files with the extension ".textum", which I've applied to some appropriate material. Eventually I'll need to work on ways to recognize files that are worthy of being studied.
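The two filters mentioned here (the ".textum" extension restriction and the alphanumeric check on candidate words) could be as simple as the sketch below; the function names and exact rules are my own illustration.

```python
from pathlib import Path

def find_study_files(root):
    """Collect candidate study files by extension; ".textum" marks
    material that has been hand-approved as parseable prose."""
    return sorted(Path(root).rglob("*.textum"))

def is_learnable(word):
    """Reject tokens containing non-alphanumeric characters, so code
    fragments and log junk never enter the vocabulary database."""
    return word.isalnum()
```

A custom extension is a blunt instrument, but it cleanly separates "worth studying" from everything else until smarter file-recognition exists.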

But given an appropriate file (i.e. one that contains writing, like this blog post), Acuitas can "study" it and keep track of things he has trouble with. Crashes or poor results from the processing steps produce records in a log file that notes the type of error alongside a copy of the sentence. Unknown words are registered as problems on the Executive's scratchboard, so Acuitas can ask a human for more information about them later. I got this latter feature working and then promptly turned it off, because there's no way to keep Acuitas from spamming me with questions whenever I'm on the computer (he knows). This has been a problem for questions generated by "thinking" (walking the semantic database) too. So coming up with a way to slow down the flood or signal that I don't want to be disturbed is also on my future work list.

The "Study" action itself is naturally triggered by the goal system. All I had to do was put in a cause-and-effect rule, to the effect of "if you study, you will know things." Knowing Things is one of Acuitas' intrinsic goals, so while idle, he naturally studies until he gets bored of it (after which he might read his collection of easily-understandable stories for "enjoyment," or think about the concepts in his database).

It should be obvious that self-teaching needs more work, but I like how far I got with the prototype and think it could be quite useful in the future.

Until the next cycle,
Jenny