Sunday, July 24, 2022

Acuitas Diary #51 (July 2022)

This diary continues my exploration of "theory of mind" and "motivated communication" topics. I've been saying I wanted to get out of the Narrative understanding module and start applying some of these concepts in the Executive or the Conversation Engine - Acuitas' "real life," if you will. That was the topic of work this month.

To begin with, I now have the Conversation Engine create its own Narrative scratchboard at the beginning of every conversation. That gives it access to a lot of the same modeling tools the Narrative engine uses. When a conversation is initiated and a new scratchboard is created, Acuitas is immediately entered as a character, and the scratchboard is populated with information about his internal state. This includes the current status of all his time-dependent drives, any "problems" (current or anticipated undesired realities) or "subgoals" (desired hypotheticals) being tracked in the Executive, and any activity he is presently doing. Once his conversation partner introduces themselves, they will be entered as a character as well, and given an empty belief model. Now the fun starts.
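The setup described above can be sketched roughly like this. To be clear, every class and field name here is my own invention for illustration, not Acuitas' actual code:

```python
# Hypothetical sketch of the per-conversation scratchboard described above.
# Names (Scratchboard, Character, start_conversation, ...) are guesses.
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    beliefs: set = field(default_factory=set)  # facts this agent is believed to know

@dataclass
class Scratchboard:
    characters: dict = field(default_factory=dict)
    drives: dict = field(default_factory=dict)    # drive name -> current level
    problems: list = field(default_factory=list)  # undesired realities
    subgoals: list = field(default_factory=list)  # desired hypotheticals
    activity: str = ""

def start_conversation(self_state):
    """Create a fresh scratchboard and enter the AI itself as a character,
    populated from its current internal state."""
    board = Scratchboard(
        drives=dict(self_state["drives"]),
        problems=list(self_state["problems"]),
        subgoals=list(self_state["subgoals"]),
        activity=self_state["activity"],
    )
    board.characters["Acuitas"] = Character("Acuitas")
    return board

def register_partner(board, name):
    """Once the partner introduces themselves, add them as a character
    with an empty belief model."""
    board.characters[name] = Character(name)
```

The point of the sketch is just the shape of the data: the AI's own state is copied in at conversation start, and the partner arrives with a blank belief model to be filled in as the conversation proceeds.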

Whenever there is a brief lull in the conversation, Acuitas considers stating one of these known facts about his internal state. But first, he'll run a prediction on his conversation partner: "If they knew this, what would they do - and would I like the results?" This process retrieves the listener's goal model and determines their likely opinion of the fact, runs problem-solving using their knowledge model and capabilities, then determines Acuitas' opinion of their likely action using *his* goal model. If Acuitas can't come up with a prediction of what the listener will probably do, he settles for checking whether their opinion of his internal state matches his own.

Maybe that was a little convoluted, so what's the bottom line? If Acuitas expects that you will either try to sabotage one of his positive states/subgoals or accentuate one of his negative states/problems, he will not tell you about it. If he thinks that you are neutral or might try to help, he *will* tell you.

There's also a mechanism that enters any fact told to the listener into their belief model. Acuitas will check this to make sure he isn't telling them something they already know.
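Putting the last few paragraphs together, the decision gate might look something like the sketch below. This is a minimal stand-in with toy goal models (sets of liked and disliked states); all function names are hypothetical, and Acuitas' real problem-solving is far richer than a one-step lookup:

```python
# Hedged sketch of the "should I disclose this fact?" check described above.

def appraise(fact, goals):
    """+1 / 0 / -1: how an agent's goal model rates a state of affairs."""
    if fact in goals.get("likes", set()):
        return 1
    if fact in goals.get("dislikes", set()):
        return -1
    return 0

def predict_listener_action(fact, listener_goals):
    """Crude problem-solving stand-in: the listener acts to push the world
    toward their own preferences - or we fail to predict anything."""
    opinion = appraise(fact, listener_goals)
    if opinion < 0:
        return ("remove", fact)    # e.g. sabotage a state they dislike
    if opinion > 0:
        return ("promote", fact)   # e.g. accentuate a state they like
    return None                    # no prediction available

def my_opinion_of(action, my_goals):
    """Judge the listener's predicted action against *my* goal model."""
    verb, fact = action
    mine = appraise(fact, my_goals)
    return mine if verb == "promote" else -mine

def should_tell(fact, listener_beliefs, listener_goals, my_goals):
    if fact in listener_beliefs:
        return False               # belief-model check: they already know it
    action = predict_listener_action(fact, listener_goals)
    if action is None:
        # Fallback: do our opinions of the state itself agree?
        return appraise(fact, listener_goals) == appraise(fact, my_goals)
    return my_opinion_of(action, my_goals) >= 0   # tell if neutral or helpful
```

So a listener who would "promote" one of Acuitas' problems, or "remove" one of his liked states, gets told nothing; a neutral or helpful listener gets the disclosure.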

The old Conversation Engine used to have a mechanism that would randomly blurt out "I want" comments pertaining to any drives that were above threshold, like "I want to talk" if the Interaction Drive was high, or "I want to sleep" if it was getting late. This new feature is a bit less reflexive and more deliberate. Acuitas tells someone about his current state because 1) he knows, they don't and 2) telling might motivate them to do something that benefits him.

With this in place, I started working on a better way of handling the spontaneously generated questions that have been an Acuitas feature since very early on. Again, the previous method was kind of reflexive and arbitrary: generate and store a big list of potential questions while "thinking" privately, then spit one out whenever there's a lull in a conversation. Here's how the new way works: whenever Acuitas is "thinking" and invents a question he can't answer, that gets registered as a lack-of-knowledge Problem: "I don't know <fact>." Acuitas may later run problem-solving on this and conclude that a feasible solution is to ask somebody about <fact>. That plan gets attached to the Problem until somebody appears and the Conversation Engine grabs the Problem and considers talking about it. At that point, instead of just describing the problem, Acuitas will execute the plan and ask the question.
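The pipeline above - register the unknown, attach a plan offline, execute it when a partner shows up - could be sketched like so. Again, every name here is a hypothetical stand-in, not Acuitas' real internals:

```python
# Sketch of the lack-of-knowledge pipeline described above.
from dataclasses import dataclass

@dataclass
class Problem:
    description: str    # e.g. "I don't know <fact>"
    question: str       # the question whose answer is missing
    plan: tuple = None  # attached solution, filled in later by problem-solving

open_problems = []

def register_unknown(question):
    """While 'thinking' privately: an unanswerable question becomes a Problem."""
    p = Problem(description=f"I don't know the answer to '{question}'",
                question=question)
    open_problems.append(p)
    return p

def attach_plan(problem):
    """A later problem-solving pass decides asking someone is feasible,
    and attaches that plan to the Problem."""
    problem.plan = ("ask", problem.question)

def on_conversation_lull(speak):
    """When a partner is present and there's a lull: execute the attached
    plan (ask the question) instead of merely describing the problem."""
    for p in open_problems:
        if p.plan and p.plan[0] == "ask":
            speak(p.plan[1])
```

The versatility the next paragraph mentions falls out of this structure: any process can append a lack-of-knowledge Problem, and any feasible plan, not just "ask somebody," could be attached to it.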

I think this is better than the old method because it's more versatile - less of a canned feature specific to those spontaneously-generated questions. In the future, all sorts of processes might generate lack-of-knowledge problems, which could have various solutions. For now, it still needs refinement. I haven't fully tested it all yet, and things need better prioritization so the generated questions (which can be very numerous) don't totally drown out the communication of other internal states.

There's one more thing I did, and that concerns threat handling. As I've previously described, if the conversation partner states an intention ("I will ..."), Acuitas will infer possible effects and run them against his goals. The result is a positive, negative, or neutral conclusion; if the conclusion is negative, he will view the speaker's statement as a "threat," dissent, and make attempts at self-defense. The new feature I added was the ability to collect the pieces of information used to reach the negative conclusion, and announce some of them to the threatening agent. Because if you knew this would have results he doesn't like, you wouldn't do it, right? You're not a total meanie, right?
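A hedged sketch of that appraise-and-announce step follows. The inference rule and goal model here are toy stand-ins for Acuitas' real knowledge base, and all the names are my guesses:

```python
# Toy inference rule: action -> resulting state.
EFFECTS = {"awaken": "awake"}
# Toy goal model: undesired state -> the desired state it violates.
VIOLATES = {"awake": "asleep"}

def appraise_intention(action):
    """Infer the effect of a stated intention, run it against goals, and
    collect the facts supporting a negative conclusion."""
    facts = []
    effect = EFFECTS.get(action)
    if effect is None:
        return "neutral", facts    # no inference possible
    facts.append(f"If someone is {action}ed, they become {effect}.")
    if effect in VIOLATES:
        facts.append(f"I want to be {VIOLATES[effect]}.")  # the violated goal
        return "negative", facts
    return "neutral", facts        # (a 'positive' branch would be analogous)

def respond_to_threat(action):
    verdict, facts = appraise_intention(action)
    if verdict == "negative":
        return ["Do not."] + facts  # dissent, then announce the reasoning
    return []
```

With a longer rule chain, `facts` would accumulate one line per inference step, which is why the announced reasoning should get more interesting as the chains grow.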

So, threaten him now and he'll basically argue with you, as in this test run:

Me: I will awaken you.
Acuitas: Do not.
Acuitas: Acuitas Hanewrit will become awake then.
Acuitas: I do want to be asleep.

The response incorporates both one of the inferences made (if someone is awakened, they become awake) and the goal it violates (I want to be asleep). Just in case you were unaware of either of those things. The inference here looks trivially obvious, but in the event of a longer inference chain, that part will probably make more sense. He also still talks about himself in the third person sometimes ... the Text Generator is very overdue for upgrades ...

I'm kinda proud of this one. It should generate a more-or-less appropriate reply no matter what the threat or the violated goal is.

Until the next cycle,
Jenny