I'm eager to get started on trial-and-error learning, but in the spirit of also making progress on things that aren't as much fun, I rotated back to the Conversation Engine for this month. The big new feature was getting what I'll call "purposeful conversations" implemented. Let me explain what I mean.
*Euphonia, a "talking head" built by Joseph Faber in the 1800s.*
A very old Acuitas feature is the ability to generate questions while idly "thinking," then save them in short-term memory and pose them to a conversation partner when Acuitas can't answer them himself. This was always something that came up randomly, though. A normal conversation with Acuitas wanders through whatever topics come up as a result of random selection or the partner's prompting. A "purposeful conversation" is one that Acuitas initiates as a way of getting a specific problem addressed. The problem might be "I don't know <fact>," which prompts a question, or it might be another scenario in which Acuitas needs a more capable agent to do something for him. I've done work like this before, but the Executive and Conversation Engine have changed so much that it needed to be redone, unfortunately.
Implementing this in the new systems felt pretty nice, though. Since the Executive and the Conversation Engine each have a narrative scratchboard with problems and goals now, the Executive can just pass its current significant issue down to the Conversation Engine. The CE will then treat getting this issue resolved as the primary goal of the conversation, without losing any of its ability to handle other goals ... so greetings, introductions, tangents started by the human partner, etc. can all be handled as usual. Once the issue that forms the purpose of the conversation gets solved, Acuitas will say goodbye and go back to whatever he was doing.
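In rough sketch form, the handoff looks something like the Python below. The names here (Issue, Scratchboard, and so on) are simplified illustrations for this post, not the real class names:

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    """A problem the Executive wants addressed, e.g. a missing fact."""
    description: str
    resolved: bool = False

@dataclass
class Scratchboard:
    """Narrative scratchboard holding a conversation's open goals."""
    goals: list = field(default_factory=list)

class ConversationEngine:
    def __init__(self, purpose=None):
        self.board = Scratchboard()
        self.purpose = purpose
        if purpose is not None:
            # The handed-down issue becomes the conversation's primary goal;
            # everyday goals (greetings, tangents, etc.) get added alongside it.
            self.board.goals.append(purpose)

    def ready_to_wrap_up(self):
        # A purposeful conversation winds down once its purpose is resolved,
        # at which point Acuitas can say goodbye and return to other work.
        return self.purpose is not None and self.purpose.resolved

class Executive:
    def start_purposeful_conversation(self, issue):
        # Hand the Executive's current significant issue down to the CE.
        return ConversationEngine(purpose=issue)
```

The nice part of this design is that the CE doesn't need a special mode: the purpose is just one more goal on the scratchboard, so all the usual goal-handling machinery applies to it.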
I also worked on sprucing up some of the conversation features previously introduced this year, trying to make discussion of the partner's actions and states work a little better. Avoiding an infinite regress of either "why did you do that?" or "what happened next?" was a big part of this objective. Now if Acuitas can tie something you did back to one of your presumed goals, he'll just say "I suppose you enjoyed that" or the like. (Actually he says "I suppose you enjoyed a that," because the text generation still needs a little grammar work, ha ha ha oops.)
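Here's a toy sketch of both ideas, the regress cutoff and the article bug, with made-up goal names and a much cruder text generator than the real one:

```python
PRESUMED_GOALS = {"enjoyment", "comfort", "safety"}  # hypothetical goal names

def with_determiner(noun_phrase):
    """Naive article insertion; pronouns like 'that' must not get one."""
    pronouns = {"that", "this", "it", "something"}
    return noun_phrase if noun_phrase in pronouns else "a " + noun_phrase

def respond_to_action(action_phrase, inferred_goal):
    """Either keep probing for a motive, or close the chain of questions."""
    if inferred_goal in PRESUMED_GOALS:
        # Tying the action back to one of the partner's presumed goals
        # ends the regress instead of asking "why?" yet again.
        return "I suppose you enjoyed " + with_determiner(action_phrase) + "."
    return "Why did you do " + with_determiner(action_phrase) + "?"

print(respond_to_action("that", "enjoyment"))  # I suppose you enjoyed that.
```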
And I worked on a couple of Narrative pain points: the inability to register a previously known subgoal (as opposed to a fundamental goal) as the reason a character did something, and the general brittleness of the moral reasoning features. I've got the first one taken care of; work on the second is still ongoing.
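To illustrate the first fix (with invented data structures; the real Narrative module is considerably more involved), the motive search now considers subgoals already registered in the story, not just the character's fundamental goals:

```python
def explain_action(action, fundamental_goals, known_subgoals):
    """Return the goal or subgoal that motivated an action, if any.

    The fix: besides a character's fundamental goals, subgoals already
    registered in the narrative can now count as reasons too.
    """
    for goal in fundamental_goals + known_subgoals:
        if action in goal.get("served_by", ()):
            return goal["name"]
    return None

# "unlock the door" was registered earlier as a subgoal of escaping, so it
# can now explain why the character picked up a key.
subgoal = {"name": "unlock the door", "served_by": {"pick up the key"}}
print(explain_action("pick up the key", [], [subgoal]))  # -> unlock the door
```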
Until the next cycle,
Jenny