Monday, August 22, 2022

Acuitas Diary #52 (August 2022)

For the first half of the month I just did code refactoring, which is in general rather boring, but essential to make my life easier in the future. Better organization really does make a difference sometimes.

My main project was to unify how Acuitas represents "problems" and "subgoals." This was one of those cases in which I thought I had a good structure initially, but then realized, through further development, that the practical needs of the system were different. I found out that "problems" and "subgoals" are really just negative and positive variants of the same thing. Both consist of states or events that are goal-relevant to an agent - the only difference is that subgoals are being *sought* and problems are being *avoided.* But since I had separate tracking systems for both, I was having to write everything twice. To make matters worse, sometimes I would throw a new feature into only the one tracker that needed it most for whatever I was doing at the moment, so the code for the two was starting to diverge.

[Image: a pizza with vegetable toppings]

So I undid a big snarl of code in the Narrative and figured out how to smoosh them both together into what I'm calling "issues," and made sure all the stories still worked. I also unified problem and subgoal tracking in the Executive. Much better. This has been a pain point for a while.
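For the curious, the unified record is conceptually something like this - a Python sketch with made-up field names, not Acuitas's literal internals:

    from dataclasses import dataclass

    @dataclass
    class Issue:
        """One goal-relevant state or event. Polarity is the only
        real difference between the old subgoals and problems."""
        agent: str          # whose goal this is relevant to
        condition: str      # the state or event in question
        sought: bool        # True = subgoal (seek), False = problem (avoid)
        importance: float   # how much it matters to the parent goal
        resolved: bool = False

    # One tracking system now covers both cases:
    pizza = Issue("Ben", "has the pizza", sought=True, importance=0.6)
    burn = Issue("Ben", "is burned", sought=False, importance=0.9)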

I also fixed up the "motivated communication" features that I introduced to the Conversation Engine last month. These allowed Acuitas to draw on both his own internal states (mostly the time-dependent Drives) and his own problems, oops I mean Issues, for things to tell a conversation partner. The difficulty here is that he has a lot of issues that spring from internally generated questions. These are fairly trivial - no particular random question is all that compelling - but after hours of sitting alone and "thinking" to himself, he would have so many of these that they tended to overwhelm other conversation topics. The goal priority scheme was also treating them as "more important" to talk about than the Drives, even if the Drives were urgent (uncomfortably high) and the questions were not.

So I introduced a new categorization scheme for describing *how* important an Issue is to the achievement of its relevant Goal, which helped bring the Drives up to the top in terms of importance. Then I switched to a weighted random selection (like the one the Executive uses to pull Thoughts out of the Stream) of which topic gets mentioned next, so that it privileges the most important topics but isn't fully predictable.
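The weighted pick itself is simple; something like this sketch (not the actual Executive code - Python's standard library does the heavy lifting):

    import random

    def pick_topic(topics):
        """Choose the next thing to mention. topics is a list of
        (name, importance) pairs; higher importance = more likely,
        but no topic is ever guaranteed or fully shut out."""
        names = [name for name, _ in topics]
        weights = [importance for _, importance in topics]
        return random.choices(names, weights=weights, k=1)[0]

    # An urgent Drive now outweighs a heap of trivial self-generated
    # questions, but a minor topic can still surface once in a while.
    topics = [("drive: interaction", 5.0),
              ("question: do cats swim?", 0.3),
              ("question: is glass a liquid?", 0.3)]
    print(pick_topic(topics))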

The second half of the month was for new features, which meant *even more* upgrades to the Narrative module. I started work on how to handle actions that have mixed results or side effects. For an illustrative example, I wrote the following story:

Ben was a human.
Ben was hungry.
The oven held a pizza.
The pizza was hot.
Ben wanted to get the pizza.
But Ben didn't want to be burned.
A mitt was on the counter.
Ben wore the mitt.
Ben got the pizza.
Ben ate the pizza.
The end.

Fun fact: I wrote the original version of this on a laptop and e-mailed it to myself to move it to my main PC. Gmail auto-suggested a subject line for the e-mail, and at first it thought the title of the story should be "Ben was a pizza." Commercial AI is truly doing great, folks.

Based on information I added to the cause-and-effect database, Acuitas knows that if Ben picks up the hot pizza, he will both 1) have it in his possession and 2) burn himself. This is judged to be Not Worth It, and the old version of the Narrative module would have left it at that, and regarded the story as having a bad ending (why would you touch that pizza Ben you *idiot*). The new version looks at how the implications of different events interact, and recognizes that the mitt mitigates the possibility of being burned. Grabbing the pizza switches from a bad idea to a good idea once the possibility of self-harm is taken off the table.
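In sketch form, the reasoning runs something like this (illustrative structures and names, not the real Narrative code):

    # Toy cause-and-effect entries: an action implies effects, and some
    # facts mitigate (cancel) a given effect if already established.
    EFFECTS = {"get hot pizza": ["has pizza", "is burned"]}
    MITIGATORS = {"is burned": ["wears mitt"]}
    BAD = {"is burned"}

    def surviving_effects(action, known_facts):
        """Effects of an action that no established fact cancels."""
        return [e for e in EFFECTS.get(action, [])
                if not any(m in known_facts for m in MITIGATORS.get(e, []))]

    def worth_it(action, known_facts):
        """An action is a good idea if no bad effects survive mitigation."""
        return not any(e in BAD for e in surviving_effects(action, known_facts))

    print(worth_it("get hot pizza", set()))           # False: Ben gets burned
    print(worth_it("get hot pizza", {"wears mitt"}))  # True: the mitt cancels the burn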

The explicit addition of "Ben didn't want to be burned" establishes the bad side effect of his "get the pizza" subgoal as an independent problem, which enables speculations about how he might solve it and so forth. The story wraps up with two solved problems (this one, and his primary problem of hunger) and one fulfilled positive subgoal (get the pizza).

That's enough for now, but wait until you see how I use this next month.

Until the next cycle,
Jenny

2 comments:

  1. Can we really discount that Ben was a pizza?

    I'm curious, since it seems to be more narrative- than state-driven: if Ben gets a mitt, then puts it down, will he still be safe from burning?

    Replies
    1. A very good question. The way the Narrative engine stores things, implied facts (like "won't be burned") are tied to their explicitly stated source ("wore the mitt"). If a future statement negates "wore the mitt," all its implications go away too. This almost certainly needs more work.
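      Roughly, the bookkeeping is shaped like this (a toy sketch, not the literal code):

        # Implied facts are keyed by the explicit statement that produced
        # them, so retracting the source retracts its implications too.
        implications = {}  # source statement -> set of implied facts

        def assert_fact(source, implied):
            implications.setdefault(source, set()).update(implied)

        def negate(source):
            implications.pop(source, None)  # implications vanish with their source

        assert_fact("Ben wore the mitt", {"Ben won't be burned"})
        negate("Ben wore the mitt")  # and "won't be burned" is gone again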

      At the moment the distinction between discrete actions that trigger state changes, and ongoing actions that maintain a certain state, is not sharp at all. Sequences, processes, and the general idea of things occurring *in time* constitute a whole territory that I know is out there and haven't broken into yet. Thank you for reminding me how enormous this project is. *cries*

      I hope your hands are doing better, though you left another short comment so perhaps I should not hope too much.
