Friday, April 17, 2020

Acuitas Diary #26 (April 2020)

I can now tell Acuitas stories.  And he begins to understand what's happening in them.  There's a video!  Watch the video.

I've been waiting to bring storytelling into play for a long time, and it builds on a number of the features added over the last few months: cause-and-effect reasoning, the goal system, and problem-solving via tree search.


What does Acuitas know before the video starts?  For one thing, I made sure he knew all the words in the story first, along with what part of speech they were going to appear as.  He should have been able to handle seeing *some* new words for the first time during the story, but then he would have asked me even more questions, and that would have made the demo a bit tedious.  He also knows some background about humans and dogs, and a few opposite pairs (warm/cold, comfortable/uncomfortable, etc.).


How does Acuitas go about understanding a story?  As the story is told, he keeps track of all the following, stored in a temporary area that I call the narrative scratchboard:

*Who are the characters?
*What objects are in the story? What state are they in?
*What problems do the characters have?
*What goals do the characters have?
*What events take place? (Do any of them affect problems or goals?)
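
For readers who like to see such things spelled out, here's a very rough Python sketch of the kind of bookkeeping the scratchboard does.  The class and field names are invented for this post, not pulled from the actual code, and the real structures keep more detail than this.

```python
from dataclasses import dataclass, field

# Illustrative sketch only -- not the real Acuitas data structures.
@dataclass
class Character:
    name: str
    states: list = field(default_factory=list)    # e.g. "cold", "uncomfortable"
    goals: list = field(default_factory=list)     # stated or presumed goals
    problems: list = field(default_factory=list)  # unresolved threats to those goals

@dataclass
class NarrativeScratchboard:
    characters: dict = field(default_factory=dict)  # name -> Character
    objects: dict = field(default_factory=dict)     # object name -> current state
    events: list = field(default_factory=list)      # story events, in order

    def record_event(self, event):
        """Log the event, then check whether it creates or resolves a
        problem, or moves a character toward (or away from) a goal."""
        self.events.append(event)
        # ...problem/goal bookkeeping would go here...
```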

Acuitas doesn't try to understand the cause chain and import of every single event in the story, because that would be a bit much at this stage.  However, he does try to make sure that he knows all of the following:

*If a character is in some state, what does that mean for the character?
*If a character anticipates that something will happen, how does the character feel about it?
*If a character is planning to do something, what is their motive?

If he can't figure it out by making inferences with the help of what's in his semantic database, he'll bother his conversation partner for an explanation, as you can see him doing in the video several times.  Story sentences don't go into the permanent knowledge base (yet), but explanations do, meaning they become available for understanding other stories, or for general reasoning.  Explaining things to him still requires a bit of skill and an understanding of what his gaps are likely to be, since he can't be specific about *why* he doesn't understand something.  A character state, expectation, or plan is adequately explained when he can see how it relates to one of the character's presumed goals.  Once you provide enough new links to let him make that connection, he'll let you move on.
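
To give a flavor of what "adequately explained" means, here's a toy Python version of that test: treat the known links (causes, implications, desires) as a graph, and see whether the new fact can reach one of the character's presumed goals.  The function and link names are made up for illustration; the real reasoning is richer than a plain graph walk.

```python
from collections import deque

def connects_to_goal(fact, goals, links, max_depth=5):
    """Toy 'is this explained?' test: can we walk from the new fact,
    through known links, to one of the character's presumed goals?"""
    frontier = deque([(fact, 0)])
    seen = {fact}
    while frontier:
        node, depth = frontier.popleft()
        if node in goals:
            return True
        if depth < max_depth:
            for nxt in links.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return False

# When this comes back False, the listener asks "why?"; the storyteller's
# explanation adds new links, and the check is tried again.
links = {"dog is cold": ["dog is uncomfortable"],
         "dog is uncomfortable": ["dog wants to be warm"]}
print(connects_to_goal("dog is cold", {"dog wants to be warm"}, links))  # True
```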

Acuitas gives feedback throughout the story.  This is randomized for variety (though I forced some particular options for the demo).  After receiving a new story sentence, he may ...

*say nothing, or make a "yes I'm listening" gesture.
*comment on something that he inferred from the new information.
*tell you whether he likes or dislikes what just happened.
*try to guess what a character might do to solve a problem.
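
The selection itself doesn't need to be anything fancy; a stripped-down version could be as simple as a random pick among reaction handlers.  (The handler names below are stand-ins for the real routines, just to show the shape of it.)

```python
import random

# Placeholder handlers, standing in for the real inference, opinion,
# and problem-solving machinery.
def comment_on_inference(board, s): return "So that means..."
def express_opinion(board, s):      return "I don't like that."
def guess_solution(board, s):       return "Maybe they could..."

def react_to_sentence(board, sentence):
    """Pick one listener reaction at random; None means 'just listen'."""
    options = [
        lambda: None,
        lambda: comment_on_inference(board, sentence),
        lambda: express_opinion(board, sentence),
        lambda: guess_solution(board, sentence),
    ]
    return random.choice(options)()
```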

He even has a primitive way of deciding whether it's a good story or not.  He tracks suspense (generated by the presence of more than one possible outcome) and tension (how dire things are for the characters) as the story progresses.  A story whose suspense and tension values don't get very large or don't change much is "boring."  He also assesses whether the story had a positive or negative ending (did the characters solve their problems and meet their goals?).  Stories with happy endings that aren't boring may earn approving comments.
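
In toy form, the "was it a good story?" judgment boils down to something like the snippet below.  The thresholds and names are invented for the sketch; the point is just that flat, low tension and suspense read as boring, and a non-boring story gets rated by its ending.

```python
def appraise_story(tension_curve, suspense_curve, goals_met):
    """Toy appraisal of a finished story from its tension/suspense history."""
    def peak(curve):   return max(curve, default=0)
    def spread(curve): return max(curve, default=0) - min(curve, default=0)

    boring = (peak(tension_curve) < 0.3 and peak(suspense_curve) < 0.3) or \
             (spread(tension_curve) < 0.1 and spread(suspense_curve) < 0.1)
    if boring:
        return "boring"
    return "good story" if goals_met else "unhappy ending"
```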

There are many directions in which this feature needs to expand and grow more robust, and I expect I'll be working on them soon.  But first it might be time for a refactoring spree.

Until the next cycle,
Jenny Sue