Tuesday, May 24, 2022

Acuitas Diary #49 (May 2022)

As I hinted last month, the goal this time was to keep expanding on knowledge modeling. There were still a lot of things I needed to do with the King's Quest I derivative story, and many different directions I could branch out in. I ended up picking three of them.

The first one was "use a character's knowledge model when making predictions of their behavior." One long-standing feature of the Narrative module has Acuitas run his own problem-solving search whenever a new Problem is logged for a character, to see if he can guess what they're going to do about it. The search utilizes not only "common knowledge" facts in Acuitas' own databases, but also facts from the story domain that have been collected on the Narrative scratchboard. Using the knowledge models was a relatively simple extension of this feature: when running problem-solving for a specific character, input the general facts from the story domain AND the things this character knows - or thinks they know (more on that later).
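In rough pseudo-Python (a sketch only - the names here are stand-ins, not the real code), the change boils down to widening the fact pool the solver gets to search:

```python
# Rough sketch with made-up names -- not the real Narrative module.
# Facts are simple (subject, relation, object) triples.

scratchboard_facts = [("water", "is_in", "valley")]   # narrator-stated facts
knowledge_models = {
    "Altan": [("Altan", "is_on", "plateau")],         # what Altan knows
}

def facts_available_to(character):
    """Combine the story's general facts with what this character knows,
    or thinks they know, before running the problem-solving search."""
    return scratchboard_facts + knowledge_models.get(character, [])

print(facts_available_to("Altan"))
# [('water', 'is_in', 'valley'), ('Altan', 'is_on', 'plateau')]
```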

A page from a Persian manuscript "The Benefits of Animals," showing two mountain rams locking horns on the left leaf and two gazelles running on the right leaf.

But before I could do that, I quickly found out that I needed to do a little more work on Problems. I needed any thwarted prerequisite on a Subgoal to become a new Problem: both for a more thorough understanding of the story, and to get the solution prediction to run. The King's Quest story was almost too complicated to work with, so to make sure I got this right, I invented the simplest bodily-needs story I could think of: an animal has to go to the water hole. And I made multiple versions of it, to gradually work up to where I was going.

Variant 0:
0:"Altan was a gazelle."
1:"Altan was thirsty."
2:"Altan decided to drink water, but Altan did not have water."
3:"Altan decided to get water."
4:"There was water in the valley."
5:"Altan was not in the valley."
6:"Altan went to the valley."
7:"Altan got water."
8:"Altan drank the water."
9:"The end."

This version just sets up the subgoal-problem-subgoal chaining. We get our first problem on Line 1; "thirsty" implies "uncomfortable," which is a basic goal failure. Acuitas then guesses that Altan will drink some water. As expected, he decides to do so on the next line (this enters "drink water" as a subgoal for Altan), but to drink any water you must "have" it, and right away we find the subgoal is blocked. This registers "Altan doesn't have water" as a new problem. Acuitas runs another prediction; will he get water? Yes, Line 3; yet another new subgoal. But remember that "get <object>" has a prerequisite of "be where <object> is."

Since this prerequisite is relative, Acuitas needs two pieces of information to find out that it's blocked: where the water is, and where Altan is. By Line 5 he has both, and can enter Altan's incorrect location as yet another problem. After which he predicts: "Maybe Altan will go to the valley." And Altan does ... we solve the problems and fulfill the subgoals in reverse order on Lines 6, 7, and 8. All done!
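Laid out as data (again just a sketch, with invented names), the chain Acuitas builds for Variant 0 looks something like this:

```python
# Toy illustration of the problem -> subgoal -> new-problem chain in Variant 0.
# Each predicted subgoal has a prerequisite; if a known fact contradicts the
# prerequisite, that contradiction becomes the next problem.

chain = [
    # problem                        predicted subgoal     blocked prerequisite
    (("Altan", "is", "thirsty"),     ("drink", "water"),   ("Altan", "has", "water")),
    (("Altan", "not_has", "water"),  ("get", "water"),     ("Altan", "is_in", "valley")),
    (("Altan", "not_in", "valley"),  ("go_to", "valley"),  None),  # nothing blocks this
]

for problem, subgoal, blocked_prereq in chain:
    print(f"problem {problem} -> predict {subgoal}", end="")
    if blocked_prereq:
        print(f" -> blocked prerequisite {blocked_prereq} is the next problem")
    else:
        print(" -> achievable directly")
```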

Variant 1:
0:"Altan was a gazelle."
1:"Altan was on the plateau."
2:"Altan was thirsty."
3:"Altan decided to drink water, but Altan did not have water."
4:"There was water in the valley."
5:"Altan decided to get water."
6:"Altan went to the valley."
7:"Altan got water."
8:"Altan drank the water."
9:"The end."

This time, instead of saying bluntly, "Altan was not in the valley," I talk about him being somewhere else (the plateau). Technically, two non-specific locations are not guaranteed to be mutually exclusive; one could be inside the other (a plateau in a valley? probably not, but Acuitas doesn't know this). But I want him to go ahead and make a soft assumption that they're two different places unless told otherwise. So in this story, "Altan was on the plateau" substitutes for "Altan was not in the valley."
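The assumption itself is a tiny rule - something like the following sketch (placeholder names), applied only when the story hasn't said that one location contains the other:

```python
# Soft-assumption sketch: two named locations count as different places
# unless the story says one is inside the other. Placeholder names.

def softly_excludes(current_location, other_location, containment_facts):
    """Being in current_location lets us assume 'not in other_location'...
    unless they're the same place or one is stated to contain the other."""
    if current_location == other_location:
        return False
    if (current_location, "is_in", other_location) in containment_facts:
        return False
    return True

# "Altan was on the plateau" -> softly assume "Altan was not in the valley."
print(softly_excludes("plateau", "valley", containment_facts=set()))   # True
```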

I've also played with the order. I introduce the information that will tell us the "get water" subgoal is blocked *before* Acuitas registers the subgoal on Line 5. This may not seem like a big deal to your flexible human mind, but in code, things like that are a big deal if you're not careful. New problems have to be checked for, both when new subgoals are introduced (do any previously mentioned facts block the new subgoal?) and when new facts are introduced (are any previously mentioned subgoals blocked by the new fact?). Failing to look at both makes story understanding uncomfortably order-dependent.
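In sketch form (invented names again), that means every new subgoal and every new fact runs through its own check, and either arrival order lands on the same problem:

```python
# Order-independence sketch (placeholder names): problems are checked for
# both when a subgoal arrives and when a fact arrives.

subgoals = []    # (character, prerequisite) pairs still open
facts = set()    # facts stated so far
problems = []    # problems discovered

def negation_of(fact):
    subject, relation, obj = fact
    return (subject, "not_" + relation, obj)

def on_new_subgoal(character, prerequisites):
    """Do any previously mentioned facts block the new subgoal?"""
    for prereq in prerequisites:
        subgoals.append((character, prereq))
        if negation_of(prereq) in facts:
            problems.append((character, prereq))

def on_new_fact(fact):
    """Are any previously mentioned subgoals blocked by the new fact?"""
    facts.add(fact)
    for character, prereq in subgoals:
        if fact == negation_of(prereq):
            problems.append((character, prereq))

# Variant 1 order: the blocking fact shows up before the subgoal does.
on_new_fact(("Altan", "not_in", "valley"))            # via the soft assumption above
on_new_subgoal("Altan", [("Altan", "in", "valley")])  # prerequisite of "get water"
print(problems)   # [('Altan', ('Altan', 'in', 'valley'))]
```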

A group of gazelles (Antilope dorcas) are gathered near a water hole. Black-and-white woodcut or ink drawing.
From Brehm's Life of Animals, via Internet Archive.

Variant 2:
0:"Altan was a gazelle."
1:"Altan was on the plateau."
2:"Altan was thirsty."
3:"Altan decided to drink water, but Altan did not have water."
4:"Altan decided to get water."
5:"Altan knew that there was water in the valley."
6:"Altan went to the valley."
7:"Altan found water in the valley."
8:"Altan got water."
9:"Altan drank the water."
10:"The end."

Now we get to the endgame: replace a blunt statement of fact with a statement about Altan's *knowledge* of that fact, entering it in his knowledge model rather than the main scratchboard. Derive the same results: 1) Altan has a location problem and 2) he will probably solve it by going to the valley.

Lack of knowledge is important here too; if we are explicitly told that there is water in the valley but Altan *doesn't* know this, then the scratchboard fact "water is in the valley" should be canceled out and unavailable when we're figuring out what Altan might do. I haven't implemented this half of it yet - there was just too much to do, and it slipped my mind - but it should be easy enough to add later.
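Roughly, it should amount to filtering the scratchboard facts through the character's "doesn't know" entries before the solver sees them - something like this sketch (placeholder names):

```python
# Sketch of the not-yet-implemented half: hide narrator-stated facts that a
# character explicitly does NOT know before predicting their actions.

def facts_visible_to(character, scratchboard_facts, unknown_to):
    """Drop any fact the story says this character doesn't know."""
    hidden = unknown_to.get(character, set())
    return [fact for fact in scratchboard_facts if fact not in hidden]

scratchboard_facts = [("water", "is_in", "valley")]
unknown_to = {"Altan": {("water", "is_in", "valley")}}   # "Altan did not know that..."
print(facts_visible_to("Altan", scratchboard_facts, unknown_to))   # []
```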

And that was only the first thing! The second was to bring in the idea of knowledge uncertainty, and the possibility of being mistaken. So I converted the "facts" in the knowledge model into more generic propositions, with two new properties attached: 1) is it true (by comparison with facts stated by the story's omniscient narrator), and 2) does the agent believe it? For now, these have ternary values ("yes," "no," and "unknown").
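As a rough data-structure sketch (not the real implementation; the names are made up), a knowledge-model entry now looks something like a proposition with those two ternary flags hanging off it, with None standing in for "unknown":

```python
# Sketch of a knowledge-model entry with the two new ternary properties.
# None plays the role of "unknown"; names are placeholders.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Proposition:
    fact: Tuple[str, str, str]        # e.g. ("chest", "is_at", "gingerbread house")
    true: Optional[bool] = None       # checked against narrator-stated facts
    believed: Optional[bool] = None   # does this agent believe it?

# A claim as it first enters a character's knowledge model:
claim = Proposition(("chest", "is_at", "gingerbread house"))
print(claim.true, claim.believed)     # None None -- both still unknown
```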

A variant of the classic "I want to believe" poster, but instead of a flying saucer, the sky holds the words "P=NP"
Lest someone not get the joke: https://news.mit.edu/2009/explainer-pnp

Truth is determined by checking the belief against the facts on the Narrative scratchboard, as noted. Belief level can be updated by including "<agent> believed that <proposition>" or "<agent> didn't believe that <proposition>" statements in the story. Belief can also be modified by perception, so sentences such as "<agent> saw that <proposition>" or "<agent> observed that <proposition>" will set belief in <proposition> to a yes, or belief in its inverse to a no.
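Sketching just the belief side of that (here a knowledge model is nothing but a dictionary of fact → belief, with None for "unknown"; all names invented):

```python
# Belief-update sketch (placeholder names). A knowledge model here is just
# {fact: believed}, where believed is True / False / None ("unknown").

def invert(fact):
    """Toggle a 'not_' prefix on the relation, e.g. is_in <-> not_is_in."""
    subject, relation, obj = fact
    if relation.startswith("not_"):
        return (subject, relation[len("not_"):], obj)
    return (subject, "not_" + relation, obj)

def apply_belief_statement(model, fact, believes):
    """'<agent> believed / didn't believe that <proposition>'."""
    model[fact] = believes

def apply_perception(model, fact):
    """'<agent> saw/observed that <proposition>': belief in the proposition
    goes to yes, and belief in its inverse goes to no."""
    model[fact] = True
    model[invert(fact)] = False

# "Altan saw that there was water in the valley."
altans_model = {}
apply_perception(altans_model, ("water", "is_in", "valley"))
print(altans_model)
# {('water', 'is_in', 'valley'): True, ('water', 'not_is_in', 'valley'): False}
```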

For the third update, I wanted to get knowledge transfer working. So if Agent A tells Agent B a <proposition>, that propagates a belief in <proposition> into Agent B's knowledge model. Agent B's confidence level in this proposition is initially unknown to the Narrative, but again, this can be updated with "belief" statements. So now we're ready to go back to a slightly modified version of the "Search for the Chest" story:

0:"Graham was a knight."
1:"Graham served a king."
2:"The king wanted the Chest of Gold."
3:"The king brought Graham to his castle."
4:"The king told Graham to get the Chest of Gold."
5:"Graham wanted to get the chest, but Graham did not know where the chest was."
6:"Graham left the castle to seek the chest."
7:"Graham went to the lake, but Graham did not find the chest."
8:"Graham went to the dark forest, but Graham did not find the chest."
9:"Graham asked of a troll where the chest was."
10:"The troll didn't know where the chest was."
11:"The troll told to Graham that the chest was at the gingerbread house."
12:"Graham believed that the chest was at the gingerbread house."
13:"Graham went to the gingerbread house."
14:"Graham saw that the chest was not at the gingerbread house."
15:"A witch was at the gingerbread house."
16:"The witch wanted to eat Graham."
17:"Graham ran and the witch could not catch Graham."
18:"Finally Graham went to the Land of the Clouds."
19:"In the Land of the Clouds, Graham found the chest."
20:"Graham got the chest and gave the chest to the king."
21:"The end."

The ultimate goal of all this month's upgrades was to start figuring out how lying works. If that seems like a sordid topic - well, there's a story I want to introduce that really needs it. Both of its villains are telling a Big Lie, and that's almost the whole point. Getting back to the current story: now Line 11 actually does something. The "tell" statement means that the proposition "chest <is_at> gingerbread house" has been communicated to Graham and goes into his knowledge model. At this point, Acuitas will happily predict that Graham will try going to the gingerbread house. (Whether Graham believes the troll is unclear, but the possibility that he believes it is enough to provoke this guess.) On Line 12, we learn that Graham does believe the troll and his knowledge model is updated accordingly. But on Line 14, he finds out for himself that what the troll told him was untrue, and his belief level for that statement is switched to "no."
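Sketched out (stand-in names as usual), the "tell" and "believe" handling from Lines 11 and 12 goes something like this: the proposition lands in the listener's knowledge model with belief left at "unknown," and a later belief statement firms it up.

```python
# Knowledge-transfer sketch (placeholder names). Each knowledge model maps
# fact -> believed, where believed is True / False / None ("unknown").

knowledge_models = {"Graham": {}, "troll": {}}

def handle_tell(speaker, listener, fact):
    """'<speaker> told <listener> that <fact>': the proposition enters the
    listener's model, but their belief in it starts out unknown."""
    knowledge_models[listener].setdefault(fact, None)

def handle_belief(agent, fact, believes):
    """'<agent> believed / did not believe that <fact>'."""
    knowledge_models[agent][fact] = believes

# Line 11: the troll tells Graham the chest is at the gingerbread house.
handle_tell("troll", "Graham", ("chest", "is_at", "gingerbread house"))
# Line 12: Graham believes it.
handle_belief("Graham", ("chest", "is_at", "gingerbread house"), True)
print(knowledge_models["Graham"])
# {('chest', 'is_at', 'gingerbread house'): True}
```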

The story never explicitly says that the troll lied, though. Can we infer that? Yes - from a combination of Lines 10 and 11. If an agent claims something while not believing it, that's a lie. Since the troll doesn't know where the chest is, he's just making stuff up here (replacing Line 10 with "The troll knew that the chest was not at the gingerbread house" also works; that's even more definitely a lie). To get the Narrative module to generate this inference, I had to put in sort of a ... complex verb definition detector. "If <agent> did <action> under <circumstances>, then <agent> <verb>." We've got enough modeling now that the Narrative module can read this story, see that the troll told somebody else a proposition that was marked as a non-belief in the troll's knowledge model, and spit out the implication "The troll lied."
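For this one case, the inference boils down to something like the following (a sketch with invented names, not the general verb-definition machinery):

```python
# Lie-inference sketch (placeholder names). speakers_model maps
# fact -> believed (True / False / None).

def told_a_lie(speakers_model, told_fact):
    """An agent who tells someone a proposition they don't themselves
    believe -- whether they believe the opposite or simply have no idea
    and are making it up -- is lying."""
    return speakers_model.get(told_fact) is not True

# Line 10: the troll doesn't know where the chest is ...
trolls_model = {}
# Line 11: ... but tells Graham it's at the gingerbread house.
print(told_a_lie(trolls_model, ("chest", "is_at", "gingerbread house")))   # True
```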

The missing piece to the puzzle is "Why bother lying? For that matter, why bother telling the truth? Why would these characters communicate anything?" But the answer's close - Acuitas can run those problem-solving predictions and find out that putting beliefs in other agents' knowledge models *changes their behavior.* From there, it's not too hard to figure out how giving beliefs to other people might help or hinder your goals or theirs. But all that has to come later because I'm out of time for the month, and this blog got looong. Sorry everyone. It all makes my head spin a little so if you're feeling confused, I don't blame you.

Until the next cycle,
Jenny
