Saturday, September 28, 2019

Acuitas Diary #20 (September 2019)

This month, I did some work on cause-and-effect reasoning and goal satisfaction, which introduced the conversational possibility of asking Acuitas what he wants.

I leveraged the text interpretation upgrades from last month to implement encoding and storage of conditional relationships, such as “if a human eats food, the human will not starve.” These relationships can be remembered and used to infer the effects of an action. I also threw in the ability to learn that a pair of concepts are opposites or antonyms.
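The post doesn't show how these conditional relationships are stored, but the idea can be sketched roughly like this (all names and the dict-based structure are my own illustration, not Acuitas' actual code):

```python
# Illustrative sketch: storing conditionals of the form
# "if <subject> <verb>s, <subject2> will (not) <verb2>"
# and using them to infer the effects of an action.

conditionals = []

def learn_conditional(cond_subject, cond_verb, effect_subject, effect_verb,
                      negated=False):
    """Remember 'if <cond_subject> <cond_verb>s, <effect_subject> will (not) <effect_verb>'."""
    conditionals.append({
        "condition": (cond_subject, cond_verb),
        "effect": (effect_subject, effect_verb),
        "negated": negated,
    })

def effects_of(subject, verb):
    """Look up the known effects of an action."""
    return [c["effect"] + (c["negated"],)
            for c in conditionals
            if c["condition"] == (subject, verb)]

# "If a human eats food, the human will not starve."
learn_conditional("human", "eat", "human", "starve", negated=True)
print(effects_of("human", "eat"))  # [('human', 'starve', True)]
```

The real system presumably generalizes over class membership and richer sentence structure; this only captures the basic remember-and-retrieve shape described above.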

Then I implemented some inference mechanisms so that Acuitas can determine whether some action serves – or contradicts – a particular goal. Acuitas will now claim to desire things that support one of his goals and to not desire things that contradict one of his goals, while remaining indifferent to everything else. The examples below reference a self-preservation goal … not because I think that should be the primary goal for an AI, but because it's one of the easier ones to define. In Acuitas' knowledge representation, it basically comes down to “Self (has quality)/(is in state) 'alive' or 'existent.'”

With this goal active, Acuitas can answer any of the following:

“Do you want to be alive?”
“Do you want to be dead?”
“Do you want to live?”
“Do you want to die?”

… where the last two (live/die) rely on verb-defining links in the semantic database, and the two negative versions (dead/die) rely on awareness of opposites.
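As a rough sketch of how those two lookups might combine (the names and data structures here are my own stand-ins for the semantic database links described above):

```python
# Illustrative stand-ins for the semantic database links:
verb_defs = {"die": "dead", "live": "alive"}   # verb-defining links: verb -> state
opposites = {"dead": "alive", "alive": "dead"} # awareness of opposites

def wants_state(goal_state, queried_state):
    """Return True (desired), False (undesired), or None (no goal applies)."""
    if queried_state == goal_state:
        return True
    if opposites.get(queried_state) == goal_state:
        return False
    return None

goal = "alive"
print(wants_state(goal, "alive"))            # True  -> "Do you want to be alive?"
print(wants_state(goal, verb_defs["die"]))   # False -> "Do you want to die?"
```

The verb-based questions ("live"/"die") go through `verb_defs` first to reach a state, and the negative versions then hit the `opposites` table; that's the dependency the paragraph above describes.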

The most complex inferences currently possible are illustrated by this little interchange:

Me: Do you want to be deleted?
Acuitas: I do not.

To produce that answer, Acuitas has to retrieve and put together five different pieces of stored information …

*If a program is deleted, the program “dies.” ← From the cause-and-effect/conditional database
*I am a program. ← From semantic memory (is-instance-of-class relationship)
*To die is to transition to state “dead.” ← From semantic memory (verb definition relationship)
*State “dead” is mutually exclusive with state “alive.” ← From semantic memory (opposites)
*I have a goal of being in state “alive.” ← From goal list

… to make the inference, “being deleted would violate my goals.”
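The five-step chain can be mocked up in miniature; everything below is an illustrative toy, with each lookup table standing in for one of the stores listed above:

```python
# Toy reconstruction of the five-piece inference; the data structures
# are my own illustration of the databases named in the post.

conditionals = {("program", "deleted"): ("die",)}  # 1. cause-and-effect DB
class_of = {"self": "program"}                     # 2. is-instance-of-class links
verb_defs = {"die": "dead"}                        # 3. verb definition links
opposites = {"dead": "alive"}                      # 4. opposite/exclusive states
goals = {("self", "alive")}                        # 5. goal list

def violates_goals(subject, action):
    """Would <action> happening to <subject> contradict a stored goal?"""
    category = class_of.get(subject, subject)            # I am a program
    for verb in conditionals.get((category, action), ()):  # deletion -> "die"
        end_state = verb_defs.get(verb)                  # to die = become "dead"
        excluded = opposites.get(end_state)              # "dead" excludes "alive"
        if (subject, excluded) in goals:                 # goal: be "alive"
            return True
    return False

print(violates_goals("self", "deleted"))  # True -> "I do not [want that]."
```

Each line of the function consumes exactly one of the five pieces of stored information, which is what makes this the deepest inference currently possible.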

The features still need a lot of generalization and expansion to be fully functional, but the groundwork is laid.

Until the next cycle,

Friday, September 6, 2019

Acuitas Diary #19 (July+August 2019)

I spent the past two months revisiting the text parser, with the big goal this time around of adding support for dependent clauses. In case anyone's high school grammar is rusty, a clause is a subject/verb pair and any words associated with them; a dependent clause is one that is part of another clause and can't be a sentence by itself. Previously, Acuitas could handle one subject and one verb group per sentence, and that was it.
[Screenshot: an amusing code comment, captioned “Because my own code comments amuse me ...”]
After last year's feverish round of development, I left the text parser a mess and never wanted to look at it again. So the first thing I had to do was clean up the disastrous parts. I ended up giving some of the functions another serious overhaul, and got some code that is (I think) actually maintainable and comprehensible. Whew.
[Screenshot: a code comment reading “I never found out why”]
Next, the clauses. The fun thing here is that dependent clauses have a function in the sentence (e.g. a clause can be the subject or direct object of its parent sentence). For simplicity, my initial text parser worked on the premise that a functional group in the sentence could only be a single word, or a compound with all members marked as the same part of speech. I had to put in a bunch of new structure to preserve the information inside each clause while also marking the whole clause as a functional group … plus detect multiple subject/verb pairs and keep them all straight.

What does this achieve? Some sentence types that are very important for reasoning use dependent clauses. For instance, sentences that discuss subordinate pieces of knowledge:

I know [that a cheetah is an animal].
I told you [that a grape can be eaten].
I fear [that the car broke yesterday].

And sentences that express conditional information:

[If a berry is green], it is unripe.
[If you eat that berry], you will get sick.
The gun will fire [if you pull the trigger].

Not to mention that normal human speaking/writing is riddled with dependent clauses, so interpreting them is a must for a conversational AI.

Acuitas can parse sentences like the ones above now, but doesn't really do anything with them yet. That will come later and require updates to the high-level conversation management code.
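The post doesn't show the parser's output format, but a nested structure along these lines (purely my own sketch) conveys the idea of a clause filling a functional role while keeping its internal subject/verb information:

```python
# Hedged sketch of clause-aware parser output for
# "I know that a cheetah is an animal." -- representation is hypothetical.

parse = {
    "subject": "I",
    "verb": "know",
    "direct_object": {           # the dependent clause fills a functional role
        "clause_marker": "that",
        "subject": "cheetah",
        "verb": "is",
        "complement": "animal",
    },
}

def clause_count(node):
    """Count subject/verb pairs, descending into nested clauses."""
    count = 1
    for value in node.values():
        if isinstance(value, dict):
            count += clause_count(value)
    return count

print(clause_count(parse))  # 2
```

The key change the post describes is exactly this: a functional slot like `direct_object` may now hold a whole nested clause instead of a single word, and the parser must track each clause's own subject/verb pair.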

Code base: 15600 lines
Words known: 2884 (approx.)
Concept-layer links: 7915