Sunday, November 1, 2020

Acuitas Diary #31 (October 2020)

My first self-assigned job for this month was to permit Acuitas to answer questions related to a story while it is being told. The question-answering process typically goes directly to the semantic memory for results; what I needed to do was also give it access to the temporary information on the Narrative Scratchboard. It now checks the Scratchboard first to see if there's an answer related to the context of the story in progress. If there isn't, then it will assume the question is more general and try to find an answer in the long-term memory.
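In code terms, the new lookup order amounts to something like the minimal sketch below (the FactStore class and its method names are just illustrative stand-ins, not Acuitas' actual internals):

```python
# A minimal sketch of the new lookup order, not Acuitas' actual code --
# FactStore is a hypothetical stand-in for both the Narrative
# Scratchboard and the long-term semantic memory.

class FactStore:
    """Toy store mapping (subject, relation) pairs to a value."""
    def __init__(self):
        self.facts = {}

    def assert_fact(self, subject, relation, value):
        self.facts[(subject, relation)] = value

    def lookup(self, subject, relation):
        return self.facts.get((subject, relation))

def answer_question(subject, relation, scratchboard, semantic_memory):
    # The story context (the Scratchboard) is checked first ...
    answer = scratchboard.lookup(subject, relation)
    if answer is not None:
        return answer
    # ... and only if it has nothing is the question treated as general
    # and sent to long-term memory.
    return semantic_memory.lookup(subject, relation)

# Mirroring the transcript below:
scratchboard, memory = FactStore(), FactStore()
scratchboard.assert_fact("book", "had_by", "thief")
print(answer_question("book", "had_by", scratchboard, memory))  # thief
```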

Here's an example featuring a story from last month:

Me: Zach was a human.
Me: Zach had a book.
Me: What did Zach have?
Acuitas: Book. [We're having a little trouble with articles right now ...]
Me: A thief took the book.
Me: Who has the book?
Acuitas: Thief.
Me: Zach wanted his book.
Me: Zach talked to a policeman.
Me: The policeman found the thief.
Me: The policeman took the book from the thief.
Me: Who has the book?
Acuitas: Policeman.

There was also some hasty Text Parser work involved. Questions in which the wildcard is the direct object ("What does Zach have?") were standard, but questions in which the wildcard is the subject ("Who can run?") had never been fully supported. I fixed that; now they are.
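For illustration, the distinction comes down to where the wildcard sits relative to the verb. Here's a toy heuristic that captures it (my own simplification for this post; the real parser is far more involved):

```python
# A toy heuristic for locating the wildcard in simple wh-questions.
# Illustrative only -- Acuitas' Text Parser does much more than this.

def wildcard_role(question):
    words = question.rstrip("?").split()
    if words[0].lower() not in {"who", "what"}:
        return "unknown"
    # "What does Zach have?" -- do-support after the wh-word means the
    # wildcard stands in for the direct object of the main verb.
    if words[1].lower() in {"do", "does", "did"}:
        return "direct object"
    # "Who can run?" / "Who has the book?" -- no inversion, so the
    # wh-word itself fills the subject slot.
    return "subject"

print(wildcard_role("What does Zach have?"))  # direct object
print(wildcard_role("Who can run?"))          # subject
```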


Polyphemus was maaaad. ("Odysseus and Polyphemus" by Arnold Böcklin)

Next, I wanted to start getting into some stories with character vs. character conflict, and that meant bringing some rudimentary moral reasoning into play. Acuitas' original dirt-simple method of story appreciation was to hope for any agent in the story to achieve their goals ... without any awareness of whether some agents' goals might be mutually exclusive. That's why the first couple of stories I tested with were character vs. environment stories, with no villain. I got away with the "Zach's Stolen Book" story because I only talked about Zach's goals ... I never actually mentioned that the thief wanted the book or was upset about losing it. So, that needed some work. Here's the story I used as a testbed for the new features:

"Odysseus was a man. Odysseus sailed to an island. Polyphemus was a cyclops. Odysseus met Polyphemus. Polyphemus planned to eat Odysseus. Odysseus feared to be eaten. Odysseus decided to blind Polyphemus. Polyphemus had one eye. Odysseus broke the eye. Thus, Odysseus blinded the Cyclops. Polyphemus could not catch Odysseus. Odysseus was not eaten. Odysseus left the island. The end."

One possible way to conceptualize evil is as a mis-valuation of two different goods. People rarely (if ever) do "evil for evil's sake" – rather, evil is done in service of desires that (viewed in isolation) are legitimate, but in practice are satisfied at an unacceptable cost to someone else. Morality is thus closely tied to the notion of *goal priority.*

Fortunately, Acuitas' goal modeling system already included a priority ranking to indicate which goals an agent considers most important. I just wasn't doing anything with it yet. The single basic principle that I added this month could be rendered as, "Don't thwart someone else's high-priority goal for one of your low-priority goals." This is less tedious, less arbitrary, and more flexible than trying to write up a whole bunch of specific rules, e.g. "eating humans is bad." It's still a major over-simplification that doesn't cover everything ... but we're just getting started here.
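In sketch form, assuming each goal carries a numeric priority (higher = more important to its owner), the principle reduces to a single comparison. The function name and the idea of bare numbers are just for illustration:

```python
# A minimal sketch of the principle, assuming numeric goal priorities
# (higher = more important to the goal's owner). Illustrative only.

def plan_is_tolerable(gain_priority, loss_priority):
    """gain_priority: how much the served goal matters to the actor.
    loss_priority: how much the thwarted goal matters to its owner.
    Tolerable only if the actor's gain outweighs the other's loss."""
    return gain_priority > loss_priority
```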

In the test story, there are two different character goals to assess. First,

"Polyphemus planned to eat Odysseus."

Acuitas always asks for motivation when a character makes a plan, if he can't infer it on his own. The reason I gave was "If a cyclops eats a human, the cyclops will enjoy [it]." (It's pretty clear from the original myth that Polyphemus could have eaten something else. We don't need to get into the gray area of what becomes acceptable when one is starving.) So if the plan is successfully executed, we have these outcomes:

Polyphemus enjoys something (minor goal fulfillment)
Odysseus gets eaten -> dies (major goal failure)

This is a poor balance, and Acuitas does *not* want Polyphemus to achieve this goal. Next, we have:

"Odysseus decided to blind Polyphemus."

I made sure Acuitas knew that blinding the cyclops would render him "nonfunctional" (disabled), but would also prevent him from eating Odysseus. So we get these outcomes:

Polyphemus becomes nonfunctional (moderately important goal failure)
Odysseus avoids being eaten -> lives (major goal fulfillment)

Odysseus is making one of Polyphemus' goals fail, but it's only in service of his own goal, which is *more* important to him than Polyphemus' goal is to Polyphemus, so this is tolerable. Acuitas will go ahead and hope that Odysseus achieves this goal. (You may notice that the ideas of innocence, guilt, and natural rights are nowhere in this reasoning process. As I said, it's an oversimplification!)
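Plugging assumed numbers into the comparison sketched earlier makes both verdicts concrete (the priority values are invented; only their relative order matters):

```python
# Hypothetical priorities on a 1-10 scale (10 = most important).
ENJOY_A_MEAL = 2       # minor goal: Polyphemus enjoys something
STAY_FUNCTIONAL = 6    # moderate goal: Polyphemus keeps his sight
STAY_ALIVE = 10        # major goal: Odysseus is not eaten

# Polyphemus' plan: gains 2 for himself, costs Odysseus 10.
print(ENJOY_A_MEAL > STAY_ALIVE)      # False -> Acuitas roots against it
# Odysseus' plan: gains 10 for himself, costs Polyphemus 6.
print(STAY_ALIVE > STAY_FUNCTIONAL)   # True  -> Acuitas tolerates it
```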

Final result: Acuitas picks Odysseus to root for, which I hope you'll agree is the correct choice, and appreciates the end of the story.

Whew!

Until the next cycle,
Jenny