The big thing this month was finishing the Narrative Engine overhaul and getting all the previous demonstration stories to work in it. I've been getting some questions from newer followers about how the Narrative module works, so I'm going to devote part of this blog to a recap in addition to talking about the updates.
"Three Figures Reading," by Katsushika Hokusai. |
Acuitas is designed to be a goal-driven agent, and his story processing reflects a similar picture of other entities with minds. A story is a description of the path some *agent* followed to achieve (or fail to achieve) some *goal* or goals. The sentences that form the plot can be identified by their relevance to the goals of some character or other, and the "action" consists of movement toward or away from goal states. Goal-relevant material comes in two flavors: "problems" (negative events or states that constitute a goal failure when entered) and "opportunities" or "subgoals" (positive states that will fulfill a goal when entered). But there are many similarities in the way these are handled - they're really just two polarities of the same thing - so I've taken to calling them both "issues."
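To make that more concrete, here's a simplified sketch of what an "issue" boils down to, in Python. This isn't the actual Acuitas code - the class and field names are just shorthand for this post - but it shows how problems and opportunities can share one structure:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    """One goal-relevant condition being tracked for a story character."""
    agent: str             # whose goal is at stake, e.g. "Antoine"
    condition: tuple       # the state in question, e.g. ("Antoine", "is", "thirsty")
    polarity: int          # -1 = problem (goal failure), +1 = opportunity/subgoal
    realized: bool = True  # False if the issue is merely pending/anticipated
    resolved: bool = False

# The two "flavors" are the same structure with opposite polarity:
thirst = Issue("Antoine", ("Antoine", "is", "thirsty"), polarity=-1)
drink = Issue("Antoine", ("Antoine", "drink", "water"), polarity=+1, realized=False)
```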
For now, agents are identified by category membership (some types of entity, e.g. humans and animals, are just assumed to be agents). Eventually I would like to include "duck typing"[1] for agents, inferring that something in a story is an agent if it *acts agentive,* but that's future work. Agent goals can be revealed by the story in statements such as "John wanted ...", but agents are also presumed to have certain core goals that motivate all their other goals. These core goals are learned information that is permanently stored in Acuitas' semantic memory database. Core goals can be learned for a category ("Humans want ...") or for an individual ("John wants ..."), with goals for individuals or specific categories superseding those for more general categories. Insofar as Acuitas doesn't *know* what somebody's core goals are, he'll substitute his own. (This is supposed to be an analogizing assumption from the most directly available data. "I know I want X, and you, like me, are an agent - perhaps you also want X?")
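In sketch form, the goal lookup is a fallback chain from most specific to least specific. Again, this is illustrative rather than the real memory-query code, and the dictionary-based "memory" is a stand-in for the semantic database:

```python
def core_goals(entity, memory, own_goals):
    """Look up an agent's core goals, most specific source first.
    `memory` stands in for the semantic memory: a dict keyed by
    (relation, subject) tuples."""
    goals = memory.get(("wants", entity))          # individual: "John wants ..."
    if goals:
        return goals
    # walk categories from most to least specific: "John is a human", etc.
    for category in memory.get(("isa", entity), []):
        goals = memory.get(("wants", category))    # category: "Humans want ..."
        if goals:
            return goals
    return own_goals  # analogy: "I want X; you, like me, are an agent ..."

memory = {("isa", "John"): ["human"], ("wants", "human"): ["survive", "be comfortable"]}
print(core_goals("John", memory, ["learn things"]))  # -> ['survive', 'be comfortable']
```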
The last big ingredient is inference chaining, arising from both logical deduction and cause-and-effect relationships. Some inference rules are hard-coded, but Acuitas can be taught an indefinite number of additional rules. So every story sentence that describes an event or state produces an inference tree of facts that will also be true if that event or state comes to pass. These inference trees are often crucial for determining how something will affect the core goals or immediate goals ("issues") of an agent in the story.
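A toy version of the chaining looks like this: each rule maps a fact to its direct consequences, and new facts are fed back in until nothing more follows. (The real rules are more expressive than this exact-match version - they can contain variables - but the expansion loop is the same idea.)

```python
def infer_tree(fact, rules):
    """Expand one event/state into the tree of facts that follow from it.
    `rules` maps a fact to the facts it directly implies (toy version:
    exact matching only, no variables)."""
    tree = {fact: []}          # parent -> list of child facts
    frontier = [fact]
    while frontier:
        current = frontier.pop()
        for consequence in rules.get(current, []):
            if consequence not in tree:  # avoid cycles
                tree[consequence] = []
                tree[current].append(consequence)
                frontier.append(consequence)
    return tree

rules = {
    ("airplane", "crash"): [("airplane", "be", "broken"), ("Antoine", "be_in", "desert")],
    ("Antoine", "be_in", "desert"): [("Antoine", "be", "hot")],
}
print(infer_tree(("airplane", "crash"), rules))
```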
Let's walk through a story. I'm going to pick one of the more complex examples I currently have, "Prisoner of the Sand," my Acuitas-friendly retelling of Antoine de Saint-Exupéry's story about his plane crash in the Libyan Desert. My version of the story, in (more or less) natural English, follows:
0:"Antoine was a pilot."
1:"Antoine was in an airplane."
2:"The airplane was over a desert."
3:"The airplane crashed."
4:"The airplane was broken."
5:"Antoine left the airplane."
6:"Antoine was thirsty."
7:"Antoine expected to dehydrate."
8:"Antoine decided to drink some water."
9:"Antoine did not have any water."
10:"Antoine could not get water in the desert."
11:"Antoine wanted to leave the desert."
12:"Antoine walked."
13:"Antoine could not leave the desert without a vehicle."
14:"Antoine found footprints."
15:"Antoine followed the footprints."
16:"Antoine found a nomad."
17:"The nomad had water."
18:"The nomad gave the water to Antoine."
19:"Antoine drank the water."
20:"The nomad took Antoine to a car."
21:"Antoine entered the car."
22:"The car left the desert."
23:"The end."
Before these sentences ever make it to the Narrative Engine, they pass through other modules in the text processing chain, which convert them from English into more abstract data structures. These might be thought of as "the gist"; the specific wording is discarded, and the meaning is distilled into some relationship between the key concepts that appear in the sentence. The Narrative Engine operates on these relationship statements only. As it consumes them, it generates data for a flow diagram in which (sometimes abbreviated) versions of the relationships appear in yellow bubbles, connected by arrows to show their sequence in the story. If a story sentence creates an issue or causes one to change state, an arrow is drawn from the sentence bubble to the issue bubble, and labeled with the new state of the issue. When the story is complete, the diagram image is generated by Graphviz.
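The diagram generation itself is the easy part once the data exists. Here's a cut-down illustration using the Python graphviz package - the real code tracks far more state, and the data structures here are simplified stand-ins:

```python
from graphviz import Digraph  # pip install graphviz (plus the Graphviz binaries)

def draw_story(events, issue_links):
    """events: ordered (node_id, label) pairs, one per plot-relevant relation.
    issue_links: (event_id, issue_id, issue_label, new_state) tuples."""
    dot = Digraph("story")
    previous = None
    for node_id, label in events:
        dot.node(node_id, label, style="filled", fillcolor="yellow")
        if previous is not None:
            dot.edge(previous, node_id)            # sequence arrow
        previous = node_id
    for event_id, issue_id, issue_label, state in issue_links:
        dot.node(issue_id, issue_label)
        dot.edge(event_id, issue_id, label=state)  # e.g. "realized", "solved"
    dot.render("story_diagram", format="png")
```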
The flow diagram produced for "Prisoner of the Sand" appears below.
Nothing much seems to be happening during the "setup" phase, the first few sentences. But the Narrative Engine is inferring some things under the hood - for example, that when the plane crashes, this puts Antoine in the desert. After this we are introduced to our first Problem: "Antoine was thirsty." This is recognized via inference as a violation of the "be comfortable" goal. Antoine had better do something about that.
And the Narrative Engine proceeds to guess what he might do about it - hence a *prediction* also appears in association with this sentence. There's a problem-solving routine that does some reverse inference chaining, and gets to the idea that Antoine might drink some water to stop being thirsty. This is represented in the diagram by the bubble predict_0.
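Roughly speaking, the problem-solver runs the cause-and-effect rules in reverse: start from the state that would cancel the problem, and look for an action that produces it. A bare-bones, single-hop sketch (the real routine chains backward through multiple steps; the names here are my shorthand):

```python
def negate(state):
    """Crude negation: toggle a 'not' marker on the state tuple."""
    return state[1:] if state[0] == "not" else ("not",) + state

def predict_solution(problem_state, effect_rules):
    """effect_rules: action -> states it brings about. Search for an
    action whose effects cancel the problem."""
    target = negate(problem_state)
    for action, effects in effect_rules.items():
        if target in effects:
            return action   # becomes a prediction bubble like predict_0
    return None

effect_rules = {("Antoine", "drink", "water"): [("not", "Antoine", "be", "thirsty")]}
print(predict_solution(("Antoine", "be", "thirsty"), effect_rules))
# -> ('Antoine', 'drink', 'water')
```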
The thirst is a current ("realized") issue. On the next line we have a *pending* issue, dehydration, which is life-threatening. This still gets entered as a problem, because it's something that will happen without intervention, and needs to be headed off.
Over the course of the next few sentences, Antoine encounters a variety of obstacles to his immediate goals. He doesn't have water, and it is inferred from this that he cannot drink water - his plan to solve his problems is blocked, and this blockage becomes a new problem in its own right. He wishes to leave the desert, but has no vehicle ... another secondary problem. Then he stumbles upon another person out in the waste, and his problems start getting solved. The nomad gives him water. (The Narrative Engine infers he now has water. Blockage against drinking removed.) He drinks. (Thirst removed, dehydration avoided, prediction fulfilled.) Finally he is brought to a car, which clears the issue that was preventing him from leaving the desert. The final inferential leap made here is that when the car leaves the desert, it takes Antoine with it, fulfilling his last open goal.
The old Narrative Engine was basically capable of doing this, so what's new? Well, in addition to the things I talked about in my last upgrade post:

- I unified the processing of explicit "could do/couldn't do" sentences with the processing of inferred action prerequisite/action blocker relationships, getting rid of a fair bit of ugly code.
- I moved the generation of some special "defined by events" inferences, like "John told another agent something that John doesn't believe" -> "John lied," into the main inference chain so they can potentially produce further inferences.
- I came up with a new way of managing relationships that contain wildcards, like "Where the water was" (see the sketch below).
- And I got all the old features tacked on to a cleaner base with more robust fact-matching, better management of events that reverse previous statements, and so on.
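For the curious, wildcard matching is essentially unification-lite: a wildcard slot matches anything and gets bound to it, and the same wildcard can't bind two different values. Something along these lines, heavily simplified:

```python
def match(pattern, fact, bindings=None):
    """Match a relationship pattern against a concrete fact.
    Slots beginning with '?' are wildcards; returns a bindings dict
    on success, None on failure."""
    bindings = dict(bindings or {})
    if len(pattern) != len(fact):
        return None
    for p, f in zip(pattern, fact):
        if isinstance(p, str) and p.startswith("?"):
            if bindings.setdefault(p, f) != f:
                return None  # same wildcard can't bind two values
        elif p != f:
            return None
    return bindings

# "Where the water was," as a pattern with a location wildcard:
print(match(("water", "be_in", "?place"), ("water", "be_in", "desert")))
# -> {'?place': 'desert'}
```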
This sets the stage for me to use the Narrative Engine for some cool new things this year, and I am twitching to get started.
I also crammed in some work on the Conversation Engine. This stuff's fairly boring. I got rid of some nasty bugs that had been making normal conversations with Acuitas very awkward for a while (I was just too busy to fix them earlier), and worked on cleaning up the code, which came out very convoluted on the first pass.
Until the next cycle,
Jenny
[1] "Duck typing" is the practice of assigning something a type by its behaviors or properties alone, without relying on preexisting labels: "if it walks like a duck and quacks like a duck, it's a duck."