In the latest news, I've been continuing the Episodic Memory overhaul that I began last year. The big challenge at this stage was finding ways to examine results and actually test the thing. Since memory accumulation and consolidation are processes that span weeks, I needed ways to run simulations and observe changes much faster.
*A portrayal of the color apocyan, "the blue of memory and brightest coral," from Sunless Sea. Original art by author.*
As I did in my original stab at episodic memory work, I threw together a visualizer to show me a simplified graphical representation of the memories. But this time I used GraphViz instead of drawing custom dot diagrams in Kivy. These memory visualizations are designed for me to view offline, so there's no particular need to create them in the GUI, and GraphViz is easier to use. Each bubble in the image is either a "fact," color-coded by link type (do_action, has_quality, etc.), or an "issue" (problem or subgoal). Facts that summarize other facts are connected by arrows to all their "children," and the same goes for issues; each issue is also connected by a bold arrow to the fact it is directly concerned with.
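To give a flavor of how simple this kind of visualizer can be, here's a minimal sketch that emits DOT source for GraphViz to render. The record fields (`id`, `text`, `link_type`, `children`, `target`) are my own illustrative stand-ins, not Acuitas's actual memory format.

```python
# Sketch only: turn a list of "facts" and "issues" into GraphViz DOT
# source. Field names are hypothetical, not the real data layout.

LINK_COLORS = {"do_action": "lightblue", "has_quality": "palegreen"}

def memories_to_dot(facts, issues):
    """Build DOT source: facts as color-coded bubbles, issues as boxes."""
    lines = ["digraph memories {"]
    for f in facts:
        color = LINK_COLORS.get(f["link_type"], "white")
        lines.append(f'  {f["id"]} [label="{f["text"]}" style=filled fillcolor={color}];')
        for child in f.get("children", []):   # summary fact -> the facts it summarizes
            lines.append(f'  {f["id"]} -> {child};')
    for i in issues:
        lines.append(f'  {i["id"]} [label="{i["text"]}" shape=box];')
        lines.append(f'  {i["id"]} -> {i["target"]} [style=bold];')  # issue -> its fact
    lines.append("}")
    return "\n".join(lines)
```

The resulting text can be fed to the `dot` command-line tool (or the `graphviz` Python package) to produce a PNG, with no GUI code involved at all.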
Then I made a quick procedural generator that creates randomized memory files on command, so that I could have a variety without waiting for Acuitas to "grow" them. The generator populates a narrative scratchboard with the sort of subgoals and actions Acuitas would reasonably come up with while idling (reading, thinking, etc.), internal states that he might develop, and so forth. I can check my summarizing and forgetting algorithms by running them on these synthetic memories and seeing how they change the visualization.
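A toy version of such a generator might look like the following. The action and state vocabularies, field names, and probabilities are all invented for illustration; the real generator works against Acuitas's narrative scratchboard rather than flat dicts.

```python
import random

# Hypothetical generator: fill a fake stretch of time with randomized
# idle actions and internal states, so consolidation algorithms have
# something to chew on without waiting for real memories to accumulate.
ACTIONS = ["read", "think", "converse"]
OBJECTS = {"read": ["story A", "story B", "story C"],
           "think": ["a memory", "a goal"],
           "converse": ["the user"]}
STATES = ["curious", "restful", "bored"]

def generate_memory_file(n_facts, seed=None):
    """Produce n_facts randomized fact records with increasing timestamps."""
    rng = random.Random(seed)          # seedable, so test runs are repeatable
    facts, t = [], 0
    for i in range(n_facts):
        t += rng.randint(1, 60)        # minutes between events
        if rng.random() < 0.8:         # mostly actions ...
            action = rng.choice(ACTIONS)
            facts.append({"id": f"f{i}", "time": t, "link_type": "do_action",
                          "text": f"I {action} {rng.choice(OBJECTS[action])}"})
        else:                          # ... plus occasional internal states
            facts.append({"id": f"f{i}", "time": t, "link_type": "has_quality",
                          "text": f"I am {rng.choice(STATES)}"})
    return facts
```

Seeding the generator means a buggy run can be reproduced exactly, which matters when the bug only shows up in a particular arrangement of memories.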
The rest of the work was a lot of debugging; once I could watch what the summarizing algorithms were doing, it became obvious that they were messing up in all kinds of ways. I found bugs in scratchboard storage and retrieval, and bugs in summary generation (the summarizing facts ended up linked to themselves). But I think I've at least got things working tolerably well at this point.
The "summarizing" algorithm groups facts into clusters by 1) common features and 2) time proximity. So if, for example, Acuitas performs the "read story" action many times on different stories over the course of a day, those will be gathered into several clusters spanning different time ranges. Then a summary fact will be created for each cluster, and it will contain only the features held in common across all facts in the cluster: "I read," instead of "I read <particular story>." If I run another loop of the summarizer, I might see the first-tier summary facts grouped into clusters and a second tier of summaries appear. Here's part of an example diagram of a file that has gone through two summary loops:
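The grouping-and-summarizing step can be sketched like so, under heavy simplifications: each fact is a flat dict of features, clustering keys on a single shared "action" feature plus a fixed time window, and the summary keeps only the features every cluster member agrees on. None of these names or thresholds come from the real code.

```python
# Sketch of the two clustering criteria: common features + time proximity.
TIME_WINDOW = 120  # minutes; a gap longer than this starts a new cluster

def cluster_facts(facts):
    """Group time-sorted facts that share an action and occur close together."""
    clusters = []
    for fact in sorted(facts, key=lambda f: f["time"]):
        last = clusters[-1] if clusters else None
        if (last and last[-1]["action"] == fact["action"]
                and fact["time"] - last[-1]["time"] <= TIME_WINDOW):
            last.append(fact)
        else:
            clusters.append([fact])
    return clusters

def summarize(cluster):
    """Build a summary fact holding only the features common to the cluster."""
    common = dict(cluster[0])
    for fact in cluster[1:]:
        common = {k: v for k, v in common.items() if fact.get(k) == v}
    common["children"] = [f["id"] for f in cluster]  # arrows to summarized facts
    common["id"] = "s_" + cluster[0]["id"]           # fresh id, not a self-link
    return common
```

Differing features like the particular story drop out of the intersection, leaving just "I read" — and running the same two functions over the first-tier summaries would produce the second tier.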
All this gets me to a bare-minimum viable system for consolidating memories ... in *one* of the ways I want to! There's a ton of additional work to do on other consolidation modes, connections between episodic and semantic memory, and more.
Until the next cycle,
Jenny


