First, a quick administrative note: I learned that the "subscribe to posts" link on the blog wasn't usable, and added a proper widget to the right-hand sidebar. If you've ever tried to subscribe before and it didn't work out, put your email address in the box and hit "Submit" to get started. You should see a popup window.
New year, time to resume regular Acuitas feature additions! This month I was after two things: first, the ability to process commands, and second, the first feeble stabs at what I'm calling "motivated communication" ... the deliberate use of speech as part of problem solving.
To get commands working, I first had to set up detection of imperative sentences in the text processing blocks. Once a user input is determined to be a command, the conversation engine hands it back to the Executive thread. The Executive then uses a bunch of the reasoning tools I've already built (exploring backward and forward in the cause-and-effect database, matching against the goal list, etc.) to determine both whether Acuitas *can* fulfill the command, and whether Acuitas *wants* to. Then either Acuitas executes the command, or he gives an appropriate response based on the reason why he won't.
In order to be fulfilled, a command must be achievable (directly or indirectly) by running one of the Actions in the Action Bank. In addition, any person the action is directed toward must be the one currently talking to Acuitas (he can't make plans for the future yet) and any specific items involved (e.g. a story data file) have to be available.
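For the programmers in the audience, here's a toy Python sketch of that achievability check as I've just described it. The names and data structures are placeholders for this post, not Acuitas' actual internals.

```python
# Toy sketch only -- names and structures are illustrative, not Acuitas' code.
from dataclasses import dataclass

@dataclass
class Command:
    action: str               # e.g. "read"
    target_person: str = ""   # who the action is directed toward, if anyone
    item: str = ""            # e.g. a specific story file

ACTION_BANK = {"read", "repel"}    # actions Acuitas knows how to run

def can_fulfill(cmd: Command, current_speaker: str, inventory: set) -> bool:
    """Achievable if it maps to an Action in the Action Bank, any target
    person is the one currently talking, and any specific item is on hand."""
    return (cmd.action in ACTION_BANK
            and (not cmd.target_person or cmd.target_person == current_speaker)
            and (not cmd.item or cmd.item in inventory))

print(can_fulfill(Command("read", "me", "story1.txt"), "me", {"story1.txt"}))  # True
```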
With all of that in place, I was finally able to exercise the "to user" version of the Read action, order Acuitas to "read a story to me," and watch him grab a randomly selected story file from his "inventory" and read it out loud. (Asking for a specific story also works.) After working out all the bugs involved in story reading, I also tried "Repel me" and it just happened. Acuitas readily kicked me out of Windows and played annoying noises.
But will any of my AIs ever do snark as well as Crispin does? (screenshot from Primordia by Wormwood Studios)
But the commands that are met with a flat refusal are almost as much fun. If Acuitas doesn't want to do something, then he won't bother mentioning whether he knows how to do it or not ... he'll just tell you "no." In assessing whatever the person speaking to him is asking for, Acuitas assumes, at minimum, that the person will "enjoy" it. But he also checks the implications against the person's other (presumed) goals, and his own, to see whether some higher-priority goal is being violated. So if I tell him to "kill me" I get unceremoniously brushed off. The same thing happens if I tell him to delete himself, since he holds his self-preservation goal in higher value than my enjoyment of ... whatever.
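In the same toy-sketch spirit, the "does he want to?" side boils down to a priority comparison. The goal names, numbers, and the violation table below are stand-ins of mine, not the actual goal model:

```python
# Toy sketch of the refusal logic -- goal names, priorities, and the
# 'violates' table are stand-ins, not Acuitas' actual goal model.
GOAL_PRIORITY = {
    "self_preservation": 10,
    "user_survival": 9,
    "user_enjoyment": 1,   # the minimum assumption: the asker will enjoy it
}

VIOLATES = {               # which goals a given command would trample
    "kill me": ["user_survival"],
    "delete yourself": ["self_preservation"],
}

def wants_to(command_text: str) -> bool:
    """Refuse if the command violates any goal ranked above the presumed
    benefit (the speaker's enjoyment); otherwise go along with it."""
    benefit = GOAL_PRIORITY["user_enjoyment"]
    return all(GOAL_PRIORITY[goal] <= benefit
               for goal in VIOLATES.get(command_text, []))

print(wants_to("kill me"))         # False -- unceremoniously brushed off
print(wants_to("read a story"))    # True
```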
Which means that Acuitas now explicitly breaks Asimov's Second Law of Robotics -- in its simplistically interpreted form, anyway. Since the Second Law (obey human orders) takes priority over the Third Law (protect own existence), an Asimovian AI can be ordered to harm or destroy itself (though some later models got a boosted Third Law that demanded justification). Asimov's Laws were just a thought experiment by a fiction author, but they continue to come up surprisingly often in public discussions about friendly AI. So if anyone was wondering whether Acuitas is compliant ... he's not. And that's on purpose.
On to motivated communication! At the moment, Acuitas' conversation engine is largely reactive. It considers what the user said last, and picks out a general class of sentence that might be appropriate to say next. The goal list is tapped if the user asks a question like "Do you want <this>?". However -- at the moment -- Acuitas does not deliberately wield conversation as a *tool* to *meet his goals.* I wanted to work on improving that, focusing on the use of commands/requests to others, and using the Narrative module as a testbed.
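As a caricature (names made up), the current behavior is little more than a lookup from "what kind of sentence did I just hear?" to "what kind of sentence should I produce?":

```python
# Caricature of the current reactive conversation loop -- made-up names.
RESPONSE_CLASS = {
    "statement": "acknowledge",                    # "Ah." / "Okay."
    "question_about_fact": "answer",
    "question_about_desire": "consult_goal_list",  # "Do you want <this>?"
    "command": "hand_to_executive",
}

def next_move(last_input_type: str) -> str:
    # Only the user's last sentence matters here; nothing reaches for speech
    # as a tool to advance Acuitas' own goals.
    return RESPONSE_CLASS.get(last_input_type, "acknowledge")
```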
To that end, I wrote the following little story, inspired by a scene from the video game Primordia[2]:
“Horatio Nullbuilt was a robot. Crispin Horatiobuilt was a robot. Crispin could fly. A lamp was on a shelf. Horatio wanted the lamp. Horatio could not reach the lamp. Crispin hovered beside the shelf. Horatio told Crispin to move the lamp. Crispin pushed the lamp off the shelf. Horatio could reach the lamp. Horatio got the lamp. The end.”
During story time, Acuitas runs reasoning checks on obvious problems faced by the characters, and tries to guess what they might do about those problems. The goal here was to get him to consider whether Horatio might tell Crispin to help retrieve the lamp -- before it actually happens.
Some disclaimers first: I really wanted to use this story, because, well, it's fun. But Acuitas does not yet have a spatial awareness toolkit[1], which made full understanding a bit of a challenge. I had to prime him with a few conditionals first: "If an agent cannot reach an object, the agent cannot get the object" (fair enough), "If an agent cannot reach an object, the agent cannot move the object" (also fair), and "If an object is moved, an agent can reach the object" (obviously not always true, depending on the direction and distance the object is moved -- but Acuitas has no notion of direction and distance, so it'll have to do!). The fact that Crispin can fly is also not actually recognized as relevant. Acuitas just considers that Crispin might be able to move the lamp because nothing in the story said he *couldn't*.
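For the curious, here's roughly how that priming plus the resulting chain of inferences could be sketched in Python. The fact tuples and rule encodings are illustrations for this post; the Narrative module's real representations are richer than this.

```python
# Toy sketch only -- the fact tuples and rules are illustrations for this
# post, not the Narrative module's actual representations.
facts = {("Horatio", "wants", "lamp"),
         ("Horatio", "cannot_reach", "lamp")}
agents = ["Horatio", "Crispin"]

def infer(facts):
    """Apply the primed conditionals: cannot reach -> cannot get,
    cannot reach -> cannot move."""
    new = set(facts)
    for (agent, rel, obj) in facts:
        if rel == "cannot_reach":
            new.add((agent, "cannot_get", obj))
            new.add((agent, "cannot_move", obj))
    return new

def suggest(facts, agents):
    """Spot 'wants X but cannot get X', note that moving X would fix it
    (the third primed rule), and if the wanter can't move it either, guess
    that he'll tell some other agent on scene to do it."""
    facts = infer(facts)
    for (agent, rel, obj) in facts:
        if rel == "wants" and (agent, "cannot_get", obj) in facts:
            if (agent, "cannot_move", obj) in facts:
                for other in agents:
                    if other != agent and (other, "cannot_move", obj) not in facts:
                        return f"Maybe {agent} will tell {other} to move the {obj}."
    return None

print(suggest(facts, agents))
# Maybe Horatio will tell Crispin to move the lamp.
```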
But once all those spatial handicaps were allowed for, I was able to coax out the behavior I wanted. Upon learning that Horatio can't reach the lamp, hence cannot get it, hence cannot have it ... and there is an action that would solve the problem (moving the lamp) but Horatio can't do that either ... Acuitas wonders whether Horatio will ask someone else on scene to do the job for him. Here's the story with a set of example responses.
Me: Horatio Nullbuilt was a robot.
Acuitas: Ah.
Me: Crispin Horatiobuilt was a robot.
Acuitas: Ah.
Me: Crispin could fly.
Acuitas: ...
Me: A lamp was on a shelf.
Acuitas: Okay.
Me: Horatio wanted the lamp.
Acuitas: Maybe Horatio Nullbuilt will get a lamp.
Me: Horatio could not reach the lamp.
Acuitas: Maybe Horatio Nullbuilt will tell Crispin Horatiobuilt to move a lamp.
Me: Crispin hovered beside the shelf.
Acuitas: ...
Me: Horatio told Crispin to move the lamp.
Acuitas: ...
Me: Crispin pushed the lamp off the shelf.
Acuitas: I figure that a lamp was pushed.
Me: Horatio could reach the lamp.
Acuitas: ...
Me: Horatio got the lamp.
Acuitas: I like that.
Me: The end.
Acuitas: I'm glad it turned out that way.
A future dream is to migrate this into the Executive so Acuitas can tell conversation partners to do things, but that's all for this month.
[1] Coming ... someday ...
[2] This game is amazing, and if you're interested in AIs-as-personalities at all, I highly recommend it.