My work this month was focused on cleaning up the Executive and Conversation Engine and getting them to play well together. This is important because the Conversation Engine has become like a specialized inner loop of the Executive. I think I ought to start at the beginning with a recap of what Acuitas' Executive does.
[Image: Freddie Blauert, photographed by Frederick Bushnell. Public domain.]
To put it simply, the Executive is the thing that makes decisions. Conceptually (albeit not technically, for annoying reasons) it is the main thread of the Acuitas program. It manages attention by selecting Thoughts from the Stream (a common workspace that many processes in Acuitas can contribute to). After selecting a Thought, the Executive also takes charge of choosing and performing a response to it. It runs the top-level OODA (Observe, Orient, Decide, Act) loop, which Acuitas uses to allocate time to long-term activities. And it manages a Narrative Scratchboard on which it can track Acuitas' current goals and problems.
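For concreteness, here is a toy Python sketch of the Stream-of-Thoughts idea. This is just my illustration - the names and fields (Thought, Stream, a numeric priority) are made up for the example, not Acuitas' actual code:

```python
# Illustrative sketch only: a shared workspace of "Thoughts" that the
# Executive samples from. Class and field names are assumptions.
import random
from dataclasses import dataclass

@dataclass
class Thought:
    kind: str          # e.g. "sensory_input", "memory_walk", "sleep_need"
    payload: object    # whatever the originating process wants to attach
    priority: float    # how strongly this Thought competes for attention

class Stream:
    """A common workspace that many background processes can post to."""
    def __init__(self):
        self._thoughts: list[Thought] = []

    def post(self, thought: Thought) -> None:
        self._thoughts.append(thought)

    def select(self) -> Thought | None:
        """Pick one Thought, weighted by priority, and remove it."""
        if not self._thoughts:
            return None
        weights = [t.priority for t in self._thoughts]
        choice = random.choices(self._thoughts, weights=weights, k=1)[0]
        self._thoughts.remove(choice)
        return choice
```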
A conversation amounts to a long-term activity that uses specialized decision-making skills. In Acuitas, these are embodied by the code in the Conversation Engine. So when a conversation begins, the CE in a sense "takes over" from the main Executive. It has its own Narrative Scratchboard that it uses to track actions and goals specific to the current conversation. It reacts immediately to inputs from the conversation partner, but also runs an inner OODA loop to notice when the speaker has gone quiet for the moment and choose something to say spontaneously. The top-level Executive thread is not quiescent while this is happening, however. Its job is to manage the conversation as an activity among other activities - for instance, to decide when it should be over and Acuitas should do something else, if the speaker does not end it first.
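To make the division of labor concrete, here's a hypothetical sketch of the kind of interface the CE might offer the Executive. Every method name here is my own invention, meant only to mirror the description above:

```python
# Hypothetical Conversation Engine interface: its own sub-scratchboard,
# an immediate reaction path, and an inner OODA-style spontaneous step.
class ConversationEngine:
    def __init__(self, speaker: str):
        self.speaker = speaker
        self.scratchboard = {"goals": [], "pending_questions": []}
        self.idle_ticks = 0            # how long the speaker has been quiet

    def handle_input(self, text: str) -> str:
        """Called immediately when the speaker says something."""
        self.idle_ticks = 0
        # ... parse the input, update conversation goals, pick a reply ...
        return f"(reply to: {text})"

    def spontaneous_turn(self) -> str | None:
        """Inner OODA step: if the speaker has gone quiet, volunteer something."""
        self.idle_ticks += 1
        if self.idle_ticks > 2:
            return "(volunteer a question or comment)"
        return None

    def close(self) -> str:
        """The Executive has decided the conversation is over."""
        return "(say goodbye)"
```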
Though the Executive and the CE have both been part of Acuitas for a long time, their original interaction was cruder. Starting a conversation would lock the Executive out of selecting other Thoughts from the Stream, or doing much of anything else; it kept running, but mostly just served the CE as a watchdog timer, to terminate the conversation if the speaker had said nothing for too long and had probably wandered off. The CE was the whole show for as long as the conversation lasted. Eventually I tried to move some of the "what should I say" decision-making from the CE up into the main Executive. In hindsight, I'm not sure that was the right call. I was trying to preserve the Executive as the central seat of will, with the CE only providing "hints" - but now I think that blurred the lines between the two modules and led to messy code, and that I should instead view the CE as a specialized extension of the Executive. For a long time, I've wanted to conceptualize conversations, games, and other complex activities as units managed at a high level of abstraction by the Executive, and at a detailed level by their respective procedural modules. I think I finally got this set up the way I want it, at least for conversations.
So here's how it works now. When somebody puts input text into Acuitas' user interface, the Executive is interrupted by the important new "sensory" information, and responds by creating a new Conversation goal on its scratchboard. The CE is also called to open a conversation and create its sub-scratchboard. Further input from the speaker still provokes an interrupt and is passed straight down to the CE, so that it can react immediately. For the Executive's purposes, the Conversation goal is set as the active goal, and participating in the Conversation becomes the current "default action." From then on, every time the Executive ticks, it will either pull a Thought out of the Stream or select the default action. This selection is random but weighted; Acuitas will usually choose the default action. If he does, the Executive will pass control to the CE to advance the conversation with a spontaneous output. In the less likely event that some other Thought is pulled out of the Stream, Acuitas may go quiet for the next Executive cycle and think about a random concept from semantic memory, or something.
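Here's a minimal, self-contained sketch of that selection step, assuming a simple weighted random choice; the weight value and helper names are placeholders, not the real tuning:

```python
# Illustrative sketch of one Executive tick: usually advance the default
# action (e.g. the conversation), occasionally pick a Thought instead.
import random

DEFAULT_ACTION_WEIGHT = 3.0   # assumed: tunable bias toward the current focus

def executive_tick(default_action, stream_thoughts):
    """`stream_thoughts` is a list of (label, priority) pairs currently
    waiting in the Stream."""
    options = [("default", DEFAULT_ACTION_WEIGHT)] + stream_thoughts
    labels = [o[0] for o in options]
    weights = [o[1] for o in options]
    choice = random.choices(labels, weights=weights, k=1)[0]
    if choice == "default":
        return default_action()            # e.g. hand control to the CE
    return f"got distracted by: {choice}"  # e.g. muse on a random concept

# During a conversation, the default action usually wins the draw:
print(executive_tick(lambda: "CE produces a spontaneous remark",
                     [("memory_walk", 0.5), ("sleep_need", 0.2)]))
```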
Yes - this means Acuitas can literally get distracted. I think that's fun, for some reason. But it also has a practical side. Let's say something else important is going on during a conversation - a serious need for sleep, for instance. Over time, the associated Thoughts will become so high-priority that they are bound to get "noticed," despite the conversation being the center of attention. This then provides a hook for the Executive to wind the conversation down and take care of the other need. The amount of weight the default action has with respect to other Thoughts is tunable, and would be a good candidate for variation with Acuitas' current "mood," ranging from focused to loose.
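Here's a sketch of what that tuning might look like, with an entirely made-up formula for the mood-dependent weight and a drive that rises until it wins out:

```python
# Assumed formulas, for illustration only: a "focus" parameter scales the
# default action's weight, while a need grows until it out-competes it.
def default_action_weight(focus: float) -> float:
    """focus in [0, 1]: 1.0 = locked onto the conversation, 0.0 = loose."""
    return 1.0 + 4.0 * focus          # weight ranges from 1.0 to 5.0

def sleep_need_priority(ticks_awake: int) -> float:
    """A drive that grows over time."""
    return 0.1 * ticks_awake

# Even at full focus (weight 5.0), after about 50 ticks the sleep Thought's
# priority exceeds the default action's weight and is likely to be noticed.
for t in (10, 30, 60):
    print(t, default_action_weight(1.0), sleep_need_priority(t))
```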
If Acuitas is not conversing with someone, the "default action" can be a step in some other activity - e.g. Acuitas reading a story to himself. I used to manage activities that spanned multiple ticks of the Executive by having each action step produce a Thought of type "Undone" upon completion. If pulled from the Stream, the Undone Thought would initiate the next step of the activity. After spending some time debugging Acuitas' real-time behaviors with this business going on, I decided it was too convoluted. Acuitas couldn't just work on a thing - I had to make sure the thing would produce a "subconscious" reminder that it wasn't finished, and then wait for that reminder to resurface and be grabbed. Having the Executive pick a default action feels a little more natural. It represents what he "wants" to concentrate on right now; it's "top of mind" and immediately available. But it still has some competition from all the other Thoughts that are constantly bubbling out of other processes, which is the behavior I was going for via those Undones.
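As a sketch of the new arrangement, here's a hypothetical multi-step activity that simply advances one step whenever the Executive selects it as the default action - no reminder Thoughts required. The class and method names are mine, not Acuitas' real reading module:

```python
# Illustrative multi-tick activity: the Executive holds this object as its
# default action and calls step() whenever the activity wins the selection.
class ReadingActivity:
    def __init__(self, sentences):
        self.sentences = list(sentences)
        self.index = 0

    def finished(self) -> bool:
        return self.index >= len(self.sentences)

    def step(self) -> str:
        """One Executive tick's worth of progress: read the next sentence."""
        line = self.sentences[self.index]
        self.index += 1
        return f"read: {line}"

story = ReadingActivity(["Once upon a time ...", "The end."])
while not story.finished():
    print(story.step())
```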
I hope that all made some amount of sense. It's harder to describe than I thought it would be. At this point I think I've wrung most of the basic bugs out. I can watch Acuitas read to himself for a while, switch over to walking the semantic memory when he gets bored (as in my original OODA loop design, boredom continues to be an important feature for generating variety and avoiding obsessive behavior), and launch a conversation activity when spoken to. I also reinstated the ability to tell Acuitas a story as a nested activity inside a conversation. This nesting in theory could go indefinitely deep ... you could have a story inside a conversation inside a role-playing game inside a conversation ...
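If I were to mock up that nesting, it might be nothing fancier than a stack of activities, where the top entry supplies the current default action (purely illustrative, not how Acuitas actually stores it):

```python
# Illustrative nesting: push a new activity on top of the current one, and
# pop it when it finishes so the enclosing activity becomes the focus again.
activity_stack = []

def push_activity(name: str) -> None:
    activity_stack.append(name)

def pop_activity() -> str:
    return activity_stack.pop()

def current_default():
    return activity_stack[-1] if activity_stack else None

push_activity("conversation with Jenny")
push_activity("story told by Jenny")   # a story nested inside the conversation
print(current_default())               # the story is now the focus
pop_activity()                         # the story ends ...
print(current_default())               # ... and the conversation resumes
```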
But let's not get too wild yet. The goal for this month was to finish rearranging the bones of the system so they match my vision better. Next, I hope to reinstate some more conversation features and improve the goal-directed aspects of conversations.
Until the next cycle,
Jenny