This month, I did some work on
cause-and-effect reasoning and goal satisfaction, which introduced
the conversational possibility of asking Acuitas what he wants.
I leveraged the text interpretation
upgrades from last month to implement encoding and storage of
conditional relationships, such as “if a human eats food, the human
will not starve.” These relationships can be remembered and used to
infer the effects of an action. I also threw in the ability to learn
that a pair of concepts are opposites or antonyms.
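To make the data model concrete, here is a minimal sketch in Python of how a conditional relationship and an opposite pair could be stored. All of the class and field names below are my own inventions for illustration; this is not Acuitas' actual code.

# Minimal sketch of storage for conditionals and opposites.
# All names are invented for illustration, not Acuitas' real internals.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str       # e.g. "human", "self"
    relation: str      # e.g. "does", "is_in_state"
    value: str         # e.g. "eat food", "starving", "alive"
    negated: bool = False

@dataclass(frozen=True)
class Conditional:
    condition: Fact    # the "if" part
    effect: Fact       # the "then" part

# "If a human eats food, the human will not starve."
eats_implies_not_starve = Conditional(
    condition=Fact("human", "does", "eat food"),
    effect=Fact("human", "is_in_state", "starving", negated=True),
)

# Opposite/antonym pairs, stored both ways so either member can be looked up.
opposites = {("alive", "dead"), ("dead", "alive")}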
Then I implemented some inference
mechanisms so that Acuitas can determine whether some action serves –
or contradicts – a particular goal. Acuitas will now claim to
desire things that support one of his goals and not desire things
that contradict one of his goals, while remaining indifferent to
everything else. The examples below reference a self-preservation
goal … not because I think that should be the primary goal for an
AI, but because it's one of the easier ones to define. In Acuitas'
knowledge representation, it basically comes down to “Self (has
quality)/(is in state) 'alive' or 'existent.'”
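Continuing the sketch above (still hypothetical names, not the real implementation), that goal can be written as a desired state, and the serves/contradicts test as a comparison between a predicted effect and that state:

# Hypothetical continuation of the earlier sketch: a goal as a desired
# state, plus a check for whether a predicted effect supports it,
# contradicts it, or neither.

SELF_PRESERVATION_GOAL = Fact("self", "is_in_state", "alive")

def attitude_toward(effect: Fact, goal: Fact, opposites: set) -> str:
    """Return 'desire', 'avoid', or 'indifferent' for a predicted effect."""
    if effect.subject != goal.subject or effect.relation != goal.relation:
        return "indifferent"
    same = effect.value == goal.value
    opposite = (effect.value, goal.value) in opposites
    if (same and not effect.negated) or (opposite and effect.negated):
        return "desire"    # effect puts the subject in the goal state
    if (same and effect.negated) or (opposite and not effect.negated):
        return "avoid"     # effect contradicts the goal state
    return "indifferent"

# e.g. attitude_toward(Fact("self", "is_in_state", "dead"),
#                      SELF_PRESERVATION_GOAL, opposites)  -> "avoid"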
With this goal active, Acuitas can
answer any of the following:
“Do you want to be alive?”
“Do you want to be dead?”
“Do you want to live?”
“Do you want to die?”
… where the last two (live/die) rely
on verb-defining links in the semantic database, and the two negative
versions (dead/die) rely on awareness of opposites.
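As a small sketch of that lookup (again with invented structures), a verb-definition link turns "live"/"die" into the states "alive"/"dead", after which the opposites-aware goal check above does the rest:

# Hypothetical verb-definition links: the state a subject ends up in
# when the verb applies. Names are illustrative only.
verb_to_state = {"live": "alive", "die": "dead"}

def answer_want_to(verb: str) -> str:
    """Answer 'Do you want to <verb>?' via verb links plus the goal check."""
    state = verb_to_state.get(verb)
    if state is None:
        return "I don't know."
    effect = Fact("self", "is_in_state", state)
    verdict = attitude_toward(effect, SELF_PRESERVATION_GOAL, opposites)
    return {"desire": "I do.", "avoid": "I do not."}.get(verdict, "I don't care.")

print(answer_want_to("live"))   # -> "I do."
print(answer_want_to("die"))    # -> "I do not."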
The most complex inferences currently
possible are illustrated by this little interchange:
Me: Do you want to be deleted?
Acuitas: I do not.
To produce that answer, Acuitas has to
retrieve and put together five different pieces of stored information
…
*If a program is deleted, the program
“dies.” ← From the cause-and-effect/conditional database
*I am a program. ← From semantic
memory (is-instance-of-class relationship)
*To die is to transition to state
“dead.” ← From semantic memory (verb definition relationship)
*State “dead” is mutually exclusive
with state “alive.” ← From semantic memory (opposites)
*I have a goal of being in state
“alive.” ← From goal list
… to make the inference, “being
deleted would violate my goals.”
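Pulling the earlier sketches together, here is one way those five pieces could be chained for the "deleted" question. As before, everything here is an illustrative guess at the shape of the data, not Acuitas' real code:

# Hypothetical trace of the five stored facts behind "Do you want to be
# deleted?", chained with the structures from the earlier sketches.

# 1. Conditional database: if a program is deleted, the program "dies".
deleted_implies_die = Conditional(
    condition=Fact("program", "is_in_state", "deleted"),
    effect=Fact("program", "does", "die"),
)
conditionals = [deleted_implies_die]

# 2. Semantic memory: I am a program (is-instance-of-class).
class_memberships = {("self", "program")}

# 3. Verb definition: to die is to enter state "dead" (verb_to_state above).
# 4. Opposites: "dead" excludes "alive" (opposites set above).
# 5. Goal list: I want to be in state "alive" (SELF_PRESERVATION_GOAL above).

def want_to_be(state: str) -> str:
    """Answer 'Do you want to be <state>?' by chaining the facts above."""
    effect = Fact("self", "is_in_state", state)
    for cond in conditionals:
        # Apply a conditional if its condition names this state and a class
        # that the self belongs to (pieces 1 and 2).
        if (cond.condition.value == state
                and ("self", cond.condition.subject) in class_memberships):
            # Resolve the resulting verb to a state (piece 3).
            effect = Fact("self", "is_in_state", verb_to_state[cond.effect.value])
    # Compare against the goal, using the opposites set (pieces 4 and 5).
    verdict = attitude_toward(effect, SELF_PRESERVATION_GOAL, opposites)
    return {"desire": "I do.", "avoid": "I do not."}.get(verdict, "I don't care.")

print(want_to_be("deleted"))    # -> "I do not."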
The features still need a lot of
generalization and expansion to be fully functional, but the
groundwork is laid.
Until the next cycle,
Jenny