Tuesday, May 24, 2022

Acuitas Diary #49 (May 2022)

As I hinted last month, the goal this time was to keep expanding on knowledge modeling. There were still a lot of things I needed to do with the King's Quest I derivative story, and many different directions I could branch out in. I ended up picking three of them.

The first one was "use a character's knowledge model when making predictions of their behavior." One long-standing feature of the Narrative module has Acuitas run his own problem-solving search whenever a new Problem is logged for a character, to see if he can guess what they're going to do about it. The search utilizes not only "common knowledge" facts in Acuitas' own databases, but also facts from the story domain that have been collected on the Narrative scratchboard. Using the knowledge models was a relatively simple extension of this feature: when running problem-solving for a specific character, input the general facts from the story domain AND the things this character knows - or thinks they know (more on that later).
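
To make this a bit more concrete, here's a minimal sketch in Python of what "widen the fact pool for a specific character" might look like. The names and data structures are illustrative stand-ins, not Acuitas' actual internals.

```python
def facts_for_character(story_facts, knowledge_models, character):
    """Gather the facts one character's problem-solving search may use.

    Hypothetical structures: 'story_facts' stands in for the general facts on
    the Narrative scratchboard, and 'knowledge_models' maps each character to
    a list of {"statement": ..., "believed": ...} entries.
    """
    pool = list(story_facts)  # facts from the story domain
    for entry in knowledge_models.get(character, []):
        if entry.get("believed") != "no":  # what they know, or think they know
            pool.append(entry["statement"])
    return pool

# The per-character prediction then runs the usual problem-solving search
# over this pool instead of over the scratchboard facts alone.
```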

A page from a Persian manuscript "The Benefits of Animals," showing two mountain rams locking horns on the left leaf and two gazelles running on the right leaf.

But before I could do that, I quickly found out that I needed a little more work on Problems. I needed any thwarted prerequisite on a Subgoal to become a new Problem: both for a more thorough understanding of the story, and to get the solution prediction to run. The King's Quest story was almost too complicated to work with, so to make sure I got this right, I invented the simplest bodily-needs story I could think of: an animal has to go to the water hole. And I made multiple versions of it, to gradually work up to where I was going.

Variant 0:
0:"Altan was a gazelle."
1:"Altan was thirsty."
2:"Altan decided to drink water, but Altan did not have water."
3:"Altan decided to get water."
4:"There was water in the valley."
5:"Altan was not in the valley."
6:"Altan went to the valley."
7:"Altan got water."
8:"Altan drank the water."
9:"The end."

This version just sets up the subgoal-problem-subgoal chaining. We get our first problem on Line 1: "thirsty" implies "uncomfortable," which is a basic goal failure. Acuitas then guesses that Altan will drink some water. As expected, he decides to do so on the next line (this enters "drink water" as a subgoal for Altan), but to drink any water you must "have" it, and right away we find the subgoal is blocked. This registers "Altan doesn't have water" as a new problem. Acuitas runs another prediction; will he get water? Yes, on Line 3 - yet another new subgoal. But remember that "get <object>" has a prerequisite of "be where <object> is."

Since this prerequisite is relative, Acuitas needs two pieces of information to find out that it's blocked: where the water is, and where Altan is. By Line 5 he has both, and can enter Altan's incorrect location as yet another problem. After which he predicts: "Maybe Altan will go to the valley." And Altan does ... we solve the problems and fulfill the subgoals in reverse order on Lines 6, 7, and 8. All done!
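
As a rough illustration of that chaining (not Acuitas' actual code; the data structures are made up for the example), the bookkeeping looks something like this:

```python
def chain_new_subgoal(character, subgoal, prereqs, known_false, problems):
    """Sketch of the subgoal-to-problem chaining from Variant 0.

    'prereqs' maps a subgoal to its prerequisite conditions (e.g. "drink water"
    requires "have water"); 'known_false' is the set of conditions the story
    has already contradicted; 'problems' collects the new Problems.
    """
    new_problems = []
    for condition in prereqs.get(subgoal, []):
        if condition in known_false:
            # A thwarted prerequisite becomes a new Problem, which then gets
            # its own round of solution prediction.
            new_problems.append((character, condition))
    problems.extend(new_problems)
    return new_problems

# Variant 0 walkthrough: "drink water" needs "have water" (false), so
# "Altan doesn't have water" is logged; the predicted fix "get water" needs
# "be where water is" (also false), so the location problem is logged next.
problems = []
chain_new_subgoal("Altan", "drink water",
                  {"drink water": ["have water"]}, {"have water"}, problems)
```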

Variant 1:
0:"Altan was a gazelle."
1:"Altan was on the plateau."
2:"Altan was thirsty."
3:"Altan decided to drink water, but Altan did not have water."
4:"There was water in the valley."
5:"Altan decided to get water."
6:"Altan went to the valley."
7:"Altan got water."
8:"Altan drank the water."
9:"The end."

This time, instead of saying bluntly, "Altan was not in the valley," I talk about him being somewhere else (the plateau). Technically, two non-specific locations are not guaranteed to be mutually exclusive; one could be inside the other (a plateau in a valley? probably not, but Acuitas doesn't know this). But I want him to go ahead and make a soft assumption that they're two different places unless told otherwise. So in this story, "Altan was on the plateau" substitutes for "Altan was not in the valley."
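
A sketch of that soft assumption, purely illustrative: two named locations count as different places unless the story has stated an equivalence.

```python
def probably_different_places(loc_a, loc_b, known_same_pairs):
    """Soft assumption: distinct location names mean distinct places, unless a
    stated equivalence in 'known_same_pairs' (hypothetical) says otherwise."""
    if loc_a == loc_b:
        return False
    return ((loc_a, loc_b) not in known_same_pairs
            and (loc_b, loc_a) not in known_same_pairs)

# So "Altan was on the plateau" can stand in for "Altan was not in the valley."
```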

I've also played with the order. I introduce the information that will tell us the "get water" subgoal is blocked *before* Acuitas registers the subgoal on Line 5. This may not seem like a big deal to your flexible human mind, but in code, things like that are a big deal if you're not careful. New problems have to be checked for, both when new subgoals are introduced (do any previously mentioned facts block the new subgoal?) and when new facts are introduced (are any previously mentioned subgoals blocked by the new fact?). Failing to look at both makes story understanding uncomfortably order-dependent.
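
In sketch form (again with made-up structures), the symmetry looks like this: every new subgoal gets checked against the facts already on record, and every new fact gets checked against the subgoals already on record.

```python
def on_new_subgoal(subgoal, facts, subgoals, problems, blocks):
    """Do any previously mentioned facts block the incoming subgoal?"""
    subgoals.append(subgoal)
    for fact in facts:
        if blocks(fact, subgoal):
            problems.append((subgoal, fact))

def on_new_fact(fact, facts, subgoals, problems, blocks):
    """Does the incoming fact block any previously mentioned subgoal?"""
    facts.append(fact)
    for subgoal in subgoals:
        if blocks(fact, subgoal):
            problems.append((subgoal, fact))

# 'blocks' is a stand-in predicate for "this fact thwarts a prerequisite of
# this subgoal." With both hooks in place, Variants 0 and 1 produce the same
# problems regardless of which order the sentences arrive in.
```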

A group of gazelles (Antilope dorcas) are gathered near a water hole. Black-and-white woodcut or ink drawing.
From Brehm's Life of Animals, via Internet Archive.

Variant 2:
0:"Altan was a gazelle."
1:"Altan was on the plateau."
2:"Altan was thirsty."
3:"Altan decided to drink water, but Altan did not have water."
4:"Altan decided to get water."
5:"Altan knew that there was water in the valley."
6:"Altan went to the valley."
7:"Altan found water in the valley."
8:"Altan got water."
9:"Altan drank the water."
10:"The end."

Now we get to the endgame: replace a blunt statement of fact with a statement about Altan's *knowledge* of that fact, entering it in his knowledge model rather than the main scratchboard. Derive the same results: 1) Altan has a location problem and 2) he will probably solve it by going to the valley.

Lack of knowledge is important here too; if we are explicitly told that there is water in the valley but Altan *doesn't* know this, then the scratchboard fact "water is in the valley" should be canceled out and unavailable when we're figuring out what Altan might do. I didn't actually implement this half of it - there was just too much to do, and I forgot. It should be easy enough to add later.
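
For what it's worth, that missing half could be as simple as filtering the pool from the earlier sketch. This is just a guess at how it might look once added, not something Acuitas does yet.

```python
def visible_facts(story_facts, explicitly_unknown):
    """Drop any scratchboard fact a character is explicitly said NOT to know.
    Purely illustrative; this cancellation isn't implemented in Acuitas yet."""
    return [fact for fact in story_facts if fact not in explicitly_unknown]
```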

And that was only the first thing! The second was to bring in the idea of knowledge uncertainty, and the possibility of being mistaken. So I converted the "facts" in the knowledge model into more generic propositions, with two new properties attached: 1) is it true (by comparison with facts stated by the story's omniscient narrator), and 2) does the agent believe it? For now, these have ternary values ("yes," "no," and "unknown").
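
Something like the following captures the shape of it (a sketch only; the field names are mine, not Acuitas', and the real proposition objects surely carry more).

```python
from dataclasses import dataclass

TERNARY = ("yes", "no", "unknown")  # the values both new properties can take

@dataclass
class Proposition:
    """A knowledge-model entry with the two new properties attached."""
    statement: tuple           # e.g. ("chest", "is_at", "gingerbread house")
    true: str = "unknown"      # compared against the narrator's scratchboard facts
    believed: str = "unknown"  # does the agent believe it?
```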

A variant of the classic "I want to believe" poster, but instead of a flying saucer, the sky holds the words "P=NP"
Lest someone not get the joke: https://news.mit.edu/2009/explainer-pnp

Truth is determined by checking the belief against the facts on the Narrative scratchboard, as noted. Belief level can be updated by including "<agent> believed that <proposition>" or "<agent> didn't believe that <proposition>" statements in the story. Belief can also be modified by perception, so sentences such as "<agent> saw that <proposition>" or "<agent> observed that <proposition>" will set belief in <proposition> to "yes," or belief in its inverse to "no."
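
Building on the Proposition sketch above (still illustrative, with 'model' as a per-agent dictionary keyed by statement), the update rules amount to something like:

```python
def apply_belief_statement(model, statement, believes):
    """'<agent> believed / didn't believe that <proposition>'."""
    prop = model.setdefault(statement, Proposition(statement))
    prop.believed = "yes" if believes else "no"

def apply_perception_statement(model, statement, negated=False):
    """'<agent> saw/observed that <proposition>': belief in the proposition is
    set to "yes" - or, if what was seen is the proposition's inverse, belief
    in the original proposition is set to "no"."""
    prop = model.setdefault(statement, Proposition(statement))
    prop.believed = "no" if negated else "yes"
```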

For the third update, I wanted to get knowledge transfer working. So if Agent A tells Agent B a <proposition>, that propagates a belief in <proposition> into Agent B's knowledge model. Agent B's confidence level in this proposition is initially unknown to the Narrative, but again, this can be updated with "belief" statements. So now we're ready to go back to a slightly modified version of the "Search for the Chest" story:

0:"Graham was a knight."
1:"Graham served a king."
2:"The king wanted the Chest of Gold."
3:"The king brought Graham to his castle."
4:"The king told Graham to get the Chest of Gold."
5:"Graham wanted to get the chest, but Graham did not know where the chest was."
6:"Graham left the castle to seek the chest."
7:"Graham went to the lake, but Graham did not find the chest."
8:"Graham went to the dark forest, but Graham did not find the chest."
9:"Graham asked of a troll where the chest was."
10:"The troll didn't know where the chest was."
11:"The troll told to Graham that the chest was at the gingerbread house."
12:"Graham believed that the chest was at the gingerbread house."
13:"Graham went to the gingerbread house."
14:"Graham saw that the chest was not at the gingerbread house."
15:"A witch was at the gingerbread house."
16:"The witch wanted to eat Graham."
17:"Graham ran and the witch could not catch Graham."
18:"Finally Graham went to the Land of the Clouds."
19:"In the Land of the Clouds, Graham found the chest."
20:"Graham got the chest and gave the chest to the king."
21:"The end."

The ultimate goal of all this month's upgrades was to start figuring out how lying works. If that seems like a sordid topic - well, there's a story I want to introduce that really needs it. Both villains are telling a Big Lie and that's almost the whole point. Getting back to the current story: now Line 11 actually does something. The "tell" statement means that the proposition "chest <is_at> gingerbread house" has been communicated to Graham and goes into his knowledge model. At this point, Acuitas will happily predict that Graham will try going to the gingerbread house. (Whether Graham believes the troll is unclear, but the possibility that he believes it is enough to provoke this guess.) On Line 12, we learn that Graham does believe the troll and his knowledge model is updated accordingly. But on Line 14, he finds out for himself that what the troll told him was untrue, and his belief level for that statement is switched to "no."
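
The "tell" handling itself is small, at least in sketch form (building on the Proposition sketch above; the structures are still stand-ins).

```python
def apply_tell_statement(models, speaker, listener, statement):
    """'<speaker> told <listener> that <proposition>': the proposition enters
    the listener's knowledge model; whether they believe it starts "unknown".
    (The speaker's own belief in the statement matters for the lie check.)"""
    listener_model = models.setdefault(listener, {})
    listener_model.setdefault(statement, Proposition(statement))

# In the story: Line 11 puts ("chest", "is_at", "gingerbread house") into
# Graham's model with believed="unknown", Line 12 flips believed to "yes",
# and Line 14 (he sees otherwise) flips it to "no".
```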

The story never explicitly says that the troll lied, though. Can we infer that? Yes - from a combination of Lines 10 and 11. If an agent claims something while not believing it, that's a lie. Since the troll doesn't know where the chest is, he's just making stuff up here (replacing Line 10 with "The troll knew that the chest was not at the gingerbread house" also works; that's even more definitely a lie). To get the Narrative module to generate this inference, I had to put in sort of a ... complex verb definition detector. "If <agent> did <action> under <circumstances>, then <agent> <verb>." We've got enough modeling now that the Narrative module can read this story, see that the troll told somebody else a proposition that was marked as a non-belief in the troll's knowledge model, and spit out the implication "The troll lied."
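
In the same sketchy terms, the inference itself reduces to a small check (illustrative only; the real verb-definition machinery is more general than this).

```python
def infer_lie(models, speaker, statement):
    """If an agent claims something while not believing it, that's a lie."""
    own = models.get(speaker, {}).get(statement)
    if own is not None and own.believed == "no":
        return f"{speaker} lied."
    return None

# For the troll, "The troll didn't know where the chest was" (or the stronger
# "The troll knew that the chest was not at the gingerbread house") ends up
# marking the claim as a non-belief, so the check fires after Line 11.
```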

The missing piece to the puzzle is "Why bother lying? For that matter, why bother telling the truth? Why would these characters communicate anything?" But the answer's close - Acuitas can run those problem-solving predictions and find out that putting beliefs in other agents' knowledge models *changes their behavior.* From there, it's not too hard to figure out how giving beliefs to other people might help or hinder your goals or theirs. But all that has to come later because I'm out of time for the month, and this blog got looong. Sorry everyone. It all makes my head spin a little so if you're feeling confused, I don't blame you.

Until the next cycle,
Jenny

Thursday, May 12, 2022

A Minor Academic Outreach Disaster

For the mid-month blog this time, I've decided I'll tell a silly story about me, from back when I was getting my Master's degree and working as a Research Assistant. I hope it entertains and reveals that we aren't always as polished as we look.

Oh, no. Is that *me*? Yeah, that's me. *Hides under the table* At the Museum of the Rockies "Science Night," I think, with our primitive early version of the robot and its bean mining play area. 

At the time this story takes place, my advising professor/principal investigator is also supervising our university's team for the Lunabotics Mining Competition. I myself had been on the team back when I was an undergraduate, and he thinks he can take a load off the crop of students building this year's robot by assigning me to do some of the side activities -- in particular, public outreach. In the technical professions, the goal of "outreach" is to enhance public interest in our work, partly so that we can encourage kids to study it when they grow up.

The Lunabotics Mining Competition is about building a robot that can dig up, transport, and deposit as much simulated moon dirt[1] as possible within a time limit. So my professor's bright idea for outreach is to build a kid-friendly version of this. We'll use a LEGO Mindstorms kit plus a custom part or two[2] to create a rough model of the design team's robot. We'll make a miniature arena to put the robot in, and fill it with dry beans. Then we'll hand the controller over to kids at the outreach events and let them "mine" the beans. Very cute.

I do some work on the LEGO robot, and my professor makes the arena, and soon everything is ready. There are several places we want to run the demonstration, one of which is the Billings Clinic Research Center Science Expo. We're in Bozeman - so my professor tasks me with going to Billings and running our booth all by myself. I accept.

Narrator: "Sometimes Jenny Sue is bad at estimating the relative sizes of objects."

The LunArena in its pristine state.

The night before I'm supposed to leave for the event, I bring the "LunArena" down to my little Honda Civic and realize it doesn't fit. The arena has a roof, to represent the fact that the real competition arena is contained in a tent. So it's like a little house, almost, and I just can't stuff it into my car's back seat. It's made of foam-core board and dowels and fabric, it's all glued together, and there's no way to take it apart or fold it up.

"Well," I think to myself, "I said I would take this thing to Billings, so that is what I'm going to do." And I proceed to tie it onto the roof of my car.

Narrator: "However smart Jenny Sue may look, she sometimes displays a poor grasp of real-world physics."

The next morning I get up before sunrise and set off. My car must look absurd with that arena on top of it. But all goes well -- until I make it onto the highway. I gather speed gradually, testing the situation, and hear ominous sounds as the wind begins to tug at the big, flimsy object on the roof. I know I am in danger ... but I also know I can't putt-putt all the way to Billings at 30 miles per hour, and there are no cars behind me. So I start going faster, and faster.

And then I hear the big, scraping, clattering noise as the whole thing rips apart and goes tumbling off my car roof.

My saving grace is the lack of traffic. Montana highways can be lonely, and in the pre-dawn hours I have this one almost all to myself. So I pull over, hop out of my car, and hurry back to grab the wreckage before somebody else runs over it. Several pieces have pulled apart, one of the dowels is broken, and it's generally a mess ... but now that it has collapsed, it fits in my car! So I shove all the pieces in there and proceed to Billings.

I arrive at the outreach event with some time to spare. I repair the arena with what I have on hand (mostly masking tape), and actually get the thing to stand up and look passable. And then I run the booth like nothing ever went wrong, and show a bunch of little kids how to collect and unload beans with the robot all morning. When the event is over, I, uh, "disassemble" the arena so I can put it back in my car.

A photo from the Billings event (I cropped out the kids' faces for their privacy). You can see the Arena is ... leaning a bit.

It all turned out well enough, and my professor didn't even seem mad ... though I had to help with a more permanent repair of the arena later. We even won second place in the outreach category of the competition! You can read a version of the year's successes *without* my inglorious background details here: MSU Students Earn Medals.

In hindsight, tying that arena onto my car roof was so stupid that I almost can't believe I did it. I guess I couldn't see any better option at the time. It goes to show that an apparently professional and successful project can get ridiculous behind the scenes.

[1] The fancy word for this is "regolith."
[2] We made the excavation drum from a Pringles can.