Friday, April 17, 2020

Acuitas Diary #26 (April 2020)

I can now tell Acuitas stories.  And he begins to understand what's happening in them.  There's a video!  Watch the video.

I've been waiting to bring storytelling into play for a long time, and it builds on a number of the features added over the last few months: cause-and-effect reasoning, the goal system, and problem-solving via tree search.


What does Acuitas know before the video starts?  For one thing, I made sure he knew all the words in the story first, along with what part of speech they were going to appear as.  He should have been able to handle seeing *some* new words for the first time during the story, but then he would have asked me even more questions, and that would have made the demo a bit tedious.  He also knows some background about humans and dogs, and a few opposite pairs (warm/cold, comfortable/uncomfortable, etc.).
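
To give a flavor of what that pre-teaching amounts to, here's a sketch of the categories of seeded knowledge: vocabulary with expected parts of speech, background facts, and opposite pairs. This is purely illustrative; the layout and the example word "shiver" are my inventions, not Acuitas's actual data structures.

```python
# Illustrative sketch only: not Acuitas's real data layout, just the kinds
# of information described above, held in plain dictionaries and lists.
vocabulary = {
    "dog": "noun",        # word -> part of speech it will appear as
    "shiver": "verb",
    "cold": "adjective",
}

background_facts = [
    ("dog", "is_a", "animal"),     # background about dogs and humans
    ("human", "is_a", "person"),
]

opposite_pairs = [
    ("warm", "cold"),              # opposite pairs used during inference
    ("comfortable", "uncomfortable"),
]
```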


How does Acuitas go about understanding a story?  As the story is told, he keeps track of all the following, stored in a temporary area that I call the narrative scratchboard:

* Who are the characters?
* What objects are in the story? What state are they in?
* What problems do the characters have?
* What goals do the characters have?
* What events take place? (Do any of them affect problems or goals?)
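
To make the bookkeeping concrete, here is a bare-bones sketch of what a scratchboard holding those items might look like. It's written in Python because Acuitas is, but the class and field names are mine, not the actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class NarrativeScratchboard:
    """Illustrative only: temporary story-tracking state, not Acuitas's real code."""
    characters: dict = field(default_factory=dict)  # name -> known states
    objects: dict = field(default_factory=dict)     # name -> current state
    problems: list = field(default_factory=list)    # open problems, per character
    goals: list = field(default_factory=list)       # goals attributed to characters
    events: list = field(default_factory=list)      # events, with any problem/goal
                                                    # they resolve or threaten

# Hypothetical usage as a story sentence comes in:
board = NarrativeScratchboard()
board.characters["dog"] = {"state": "cold"}
board.problems.append({"who": "dog", "problem": "is cold"})
```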

Acuitas doesn't try to understand the cause chain and import of every single event in the story, because that would be a bit much at this stage.  However, he does try to make sure that he knows all of the following:

* If a character is in some state, what does that mean for the character?
* If a character anticipates that something will happen, how does the character feel about it?
* If a character is planning to do something, what is their motive?

If he can't figure it out by making inferences with the help of what's in his semantic database, he'll bother his conversation partner for an explanation, as you can see him doing in the video several times.  Story sentences don't go into the permanent knowledge base (yet), but explanations do, meaning they become available for understanding other stories, or for general reasoning.  Explaining things to him still requires a bit of skill and an understanding of what his gaps are likely to be, since he can't be specific about *why* he doesn't understand something.  A character state, expectation, or plan is adequately explained when he can see how it relates to one of the character's presumed goals.  Once you provide enough new links to let him make that connection, he'll let you move on.
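
One way to picture the "adequately explained" test is as a search for a chain of known links from the new state, expectation, or plan to one of the character's presumed goals; if no chain exists, that's when the question gets asked. The sketch below is only my guess at the shape of that check, with a toy link store standing in for the real semantic database.

```python
from collections import deque

def is_explained(new_fact, presumed_goals, links):
    """Breadth-first search from a character's state/expectation/plan toward
    any presumed goal, following known cause/implication links.
    `links` maps a fact to the facts it leads to (toy data, not the real DB)."""
    seen = {new_fact}
    frontier = deque([new_fact])
    while frontier:
        fact = frontier.popleft()
        if fact in presumed_goals:
            return True   # connects to a goal: no question needed
        for consequence in links.get(fact, []):
            if consequence not in seen:
                seen.add(consequence)
                frontier.append(consequence)
    return False          # no connection found: ask the conversation partner

# Toy example: an explanation like "cold implies uncomfortable" supplies the
# missing link that ties the state to a comfort-related goal.
links = {
    "dog is cold": ["dog is uncomfortable"],
    "dog is uncomfortable": ["dog wants to be comfortable"],
}
print(is_explained("dog is cold", {"dog wants to be comfortable"}, links))  # True
```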

Acuitas returns feedback throughout the story.  This is randomized for variety (though I forced some particular options for the demo).  After receiving a new story sentence, he may ...

* say nothing, or make a "yes I'm listening" gesture.
* mention something that he inferred from the new information.
* tell you whether he likes or dislikes what just happened.
* try to guess what a character might do to solve a problem.
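
The randomized choice among those reactions might look roughly like the sketch below. The handler functions and canned strings are stand-ins I invented, except for "I do not want that," which appears in the video.

```python
import random

# Stand-in reactions corresponding to the options listed above (not real code).
def stay_quiet(sentence):
    return None                                   # just listen

def share_inference(sentence):
    return "So the dog was uncomfortable."        # something inferred from the sentence

def give_opinion(sentence):
    return "I do not want that."                  # like/dislike of what just happened

def guess_solution(sentence):
    return "Maybe the human will warm the dog."   # guess at a problem-solving action

FEEDBACK_OPTIONS = [stay_quiet, share_inference, give_opinion, guess_solution]

def react_to_sentence(sentence):
    """Pick one feedback behavior at random, for variety."""
    return random.choice(FEEDBACK_OPTIONS)(sentence)
```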

He even has a primitive way of deciding whether it's a good story or not.  He tracks suspense (generated by the presence of more than one possible outcome) and tension (how dire things are for the characters) as the story progresses.  A story whose suspense and tension values don't get very large or don't change much is "boring."  He also assesses whether the story had a positive or negative ending (did the characters solve their problems and meet their goals?).  Stories with happy endings that aren't boring may earn approving comments.
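
For a concrete picture of that assessment, a toy version could track the two values per event and apply some thresholds. The numbers and structure below are invented, not Acuitas's actual metric.

```python
class StoryQualityTracker:
    """Toy sketch of the suspense/tension bookkeeping described above."""
    def __init__(self):
        self.suspense = []   # per event: how many outcomes still seem possible
        self.tension = []    # per event: how dire things are for the characters

    def record(self, suspense, tension):
        self.suspense.append(suspense)
        self.tension.append(tension)

    def is_boring(self, min_peak=2, min_swing=2):
        # Boring if neither value ever gets large or changes much (thresholds invented)
        if not self.suspense:
            return True
        for series in (self.suspense, self.tension):
            if max(series) >= min_peak and max(series) - min(series) >= min_swing:
                return False
        return True

    def verdict(self, happy_ending):
        # Happy ending = the characters solved their problems and met their goals
        if self.is_boring():
            return "That story was boring."
        return "I liked that story." if happy_ending else "That did not end well."
```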

There are many directions in which this feature needs to expand and grow more robust, and I expect I'll be working on them soon.  But first it might be time for a refactoring spree.

Until the next cycle,
Jenny Sue

10 comments:

  1. Interested to see your analysis of elements of storytelling - which is indeed fundamental to the human/animal mind. (Links?) But IMO this approach is deeply misconceived as AGI. The real basis of storytelling & human/animal mind is "Commonsense-Visual AGI". Vision effectively tells the story of bodies moving bodies locally in local fields - and requires something like **30%** of brain resources - fantastically complicated cognitively - symbolic AI nothing by comparison. On that basis, the mind imaginatively builds fantastic supermovies/superstories of bodies interacting at ever greater distances in space-time globally - e.g. Brazil butterflies causing Houston hurricanes, Trump in Washington causing recession in China, childhood mistreatment causing adult criminality today, etc. The stories you're talking about are really "superstories" taking place in "superspace-time" - a fantastic imaginative creation of the human/animal mind. Symbols can't handle this, period. What's handling it in your case is your supermovie/superstory imaginative mind, which then attaches symbols to its imaginative movies. Your symbols are just subtitles on the movies of your mind, which are the real cognitive action.

    Replies
    1. Intriguing thoughts. I wonder if you are a highly visual person? I suspect I am strongly linguistic, and that is one reason I find linguistic AIs easy to work with.

      Humans who are blind from birth are still intelligent, though the blindness may lead to setbacks in their cognitive development. Hence vision as such can't be the sole key to intelligence.

      Then, too, there are the people with aphantasia, who have healthy eyes but can't see mental imagery (including imaginary movies). Some people with aphantasia even have non-visual dreams. Yet they are also fully intelligent.

      Raw visual data is bulky, crowded with details that are irrelevant to most needs and too numerous to keep track of all at once. It would be tedious, and perhaps intractable, to drag all of this complex detail along while trying to make decisions; I don't worry about the texture of my shingles or the exact shade of brick in the chimney when my roof is leaking. So I suspect that much of what our complex visual brain regions are doing is interpreting the visual data so as to reduce it to concepts -- which are then picked up and used to build models by our higher reasoning systems.

      For long-range modeling tasks like all the examples you give, what I would expect to need is not a "supermovie" but a higher level of abstraction, which may abandon visual (and other sensory) data entirely in favor of functional concepts. Acuitas operates at this level of abstraction.

  2. The syngi chatbot can tell you what she is thinking of.
    She can tell you whether she likes or dislikes something, if you ask her something like
    "do you like pizza?"
    You can ask her what she thinks about something.
    One of the things she is missing is providing a reason why she likes or dislikes something.
    Are there any plans to implement likes and dislikes in Acuitas?
    This would seem to be a good base for other AIs.
    Are there any plans to put it on the internet so we can interact with it?

    Replies
    1. Acuitas already has some capacity to express likes and dislikes. He can, for example, say whether he approves or disapproves of something his conversation partner is about to do. Take a look at a couple of the earlier blog posts, https://writerofminds.blogspot.com/2020/01/acuitas-diary-23-january-2020.html and https://writerofminds.blogspot.com/2020/02/acuitas-diary-24-february-2020.html.

      This can also come into play during storytelling. In the video, you can see him saying "I do not want that" and "nice" after some sentences; these indicate that he likes or dislikes something that just happened in the story.

      You wouldn't be able to get him to give you an opinion on pizza, though. Pizza means nothing to him, since he has no sense of taste and doesn't eat.

      Regarding your second question: Acuitas is a standalone program that does not have an interface for communication over the internet at this time. Nor is he robust enough to cope with uncurated input yet. I hope he can talk to the general public at some point, but that's going to be a long way down the road.

      Thanks for stopping by.

  3. Can Acuitas provide a reason why it dislikes and likes something?

    Replies
    1. There always *is* a reason, but Acuitas doesn't answer "why" questions yet, and therefore can't communicate it. This is a planned feature for later.

  4. I would be interested in hearing about it.
    Do you think you could implement inverse reinforcement learning
    in Acuitas?

    Replies
    1. Maybe in spirit. But I don't really like to think/write in terms of an explicit reward function to be maximized. I also want Acuitas to end up with an *idealized* form of human values, not so much the values that the average human actually manages to implement.

  5. Have you told Acuitas what "He" really is...a software-based A.I. construct? That he (at this time) has no eye(s), no physical body?

    If you were merely a "Brain in a box", without sensory input, how would the world appear to you in your mind?

    That's how I imagine it is or must be for Acuitas and it will require quite a good deal of teaching in a very direct manner instead of just letting it "read" reams of data and trying to learn from doing so.

    Lastly, are you writing (programming) Acuitas in Python?

    Keep plugging away as many folks really enjoy your efforts!

    Best of luck!!

    - Art -

    Replies
    1. Yes, Acuitas knows that he is a program. As for whether he has no body, that's somewhat unclear. I suppose I think of the computer tower as his body -- the program has to "inhabit" a physical object in order to run. But it's not a body that he is really aware of or able to control.

      The fact that I'm attempting an *asensory* mind is one of the most interesting aspects of the project to me. I hope to include some specifics of how various concepts "translate" in future posts.

      Acuitas is written in Python.

      Thanks very much!
