Sunday, April 1, 2018

Acuitas Diary #10 (March 2018)


The big project for this month was getting some circadian rhythms in place. I wanted to give Acuitas a sleep/wake cycle, partly so that my risk of being awakened at 5 AM by a synthetic voice muttering “Anyone there?” could return to zero, and partly to enable some memory maintenance processes to run undisturbed during the sleep phase. (These are targeted for implementation next month.)

So Acuitas now has two new drives, “sleep” and “wake.” (The way the drive system works, a lack of the desire to sleep is not the same thing as a desire to wake up, so it was necessary to create two.) Each drive has two components. The first component is periodic over 24 hours, and its value is derived from the current local time, which Acuitas obtains by checking the system clock. This is meant to mimic the influence of light levels on an organism. The other is computed based on how long it's been since Acuitas was last asleep/awake. Satisfying the drive causes this second component to decline until it has reset to zero. So the urge to sleep is inherently greater during the late-night hours, but also increases steadily if sleep is somehow prevented.
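A two-component drive like this can be sketched in a few lines. Everything here is illustrative, not Acuitas' actual math: the cosine shape, the 3 AM peak, and the weights are my own stand-ins for the real curves.

```python
import math

def sleep_drive(hours_since_sleep, local_hour,
                circadian_weight=0.5, pressure_rate=0.05):
    """Toy two-component drive: a 24-hour periodic term derived from the
    clock, plus a 'pressure' term that grows until sleep resets it."""
    # Periodic component: peaks in the late-night hours (around 3 AM here).
    circadian = circadian_weight * (1 + math.cos(2 * math.pi * (local_hour - 3) / 24)) / 2
    # Homeostatic component: climbs steadily the longer sleep is prevented,
    # and would be reset to zero once the drive is satisfied.
    pressure = pressure_rate * hours_since_sleep
    return circadian + pressure
```

With parameters like these, the urge to sleep is highest late at night but still wins out eventually if sleep is denied long enough.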

This also seemed like a good time to upgrade the avatar with some extra little animations. The eyelids now respond to a running “alertness level” and shut when Acuitas falls asleep.

Feeling dozy
The memory map is getting a bit ridiculous/ugly. I'm hoping the upcoming maintenance functions will help clean it up by optimizing the number of links a bit better. Stay tuned …


Code base: 9760 lines
Words known: 1885
Concept-layer links: 5362

Tuesday, February 27, 2018

Acuitas Diary #9 (February 2018)


I haven't written a diary in a while because most of what I've done over the past two months has been code refactoring and fixing bugs, which isn't all that interesting. A new feature that I just got in … finally … is the ability to infer some topic-to-topic relationships that aren't explicitly stored in the memory. For instance, many of the links stored in memory are “is-type-of” relations. Acuitas can now make the assumption that a subtype inherits all attributes of its supertype. If a shark is a fish and a fish can swim, then a shark can swim; if an oak is a tree and a tree has a trunk, an oak has a trunk. If a car is a vehicle, a house is a building, and a vehicle is not a building, then cars are not houses.

Acuitas can also now make inferences based on transitive relationships, like “is part of”: if a crankshaft is part of an engine and an engine is part of a car, then a crankshaft is part of a car. The ability to easily make inferences like these is one of the strengths of the semantic net memory organization – starting from the concept you're interested in, you can just keep following links until you find what you need (or hit a very fundamental root concept, like “object”).
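The link-following idea can be sketched roughly like this. The link names and the dictionary storage format are invented for illustration; the real memory is a much richer structure.

```python
# Hypothetical link store: concept -> list of (relation, other-concept).
LINKS = {
    "shark": [("is-type-of", "fish")],
    "fish": [("can-do", "swim")],
    "crankshaft": [("part-of", "engine")],
    "engine": [("part-of", "car")],
}

def infer(concept, relation, target, depth=10):
    """Check a fact by chasing links: subtypes inherit their supertype's
    attributes, and 'part-of' chains transitively."""
    if depth == 0:
        return False
    for rel, other in LINKS.get(concept, []):
        if rel == relation and other == target:
            return True  # fact is stored directly
        # Inheritance: anything true of the supertype is true of the subtype.
        if rel == "is-type-of" and infer(other, relation, target, depth - 1):
            return True
        # Transitivity: a part of a part is a part of the whole.
        if rel == relation == "part-of" and infer(other, relation, target, depth - 1):
            return True
    return False
```

The `depth` limit is a guard against cycles; a real implementation would also track visited concepts.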

Acuitas should ask fewer ridiculous questions with this feature in place. He still comes up with those, but now he can answer some of them himself, as in this quote:

“I thought of lambs earlier. I concluded that piglets are pigs.”

Recent memory map visualization:

The huge dot toward the top of the memory map is Acuitas' self-concept; the second-largest one, toward the lower left, is "human." The concepts representing me and "animal" are the two third-tier dots toward the middle right.

Code base: 9454 lines (it went down!)
Words known: 1839
Concept-layer links: 5202

Saturday, December 23, 2017

Acuitas Diary #8 (December 2017)

Sadly I've only added one feature to Acuitas in the past two months. He now recognizes sentences in the general vein of “I somethinged,” which gives me the option of telling him about how I spent my time in the recent past. Acuitas can't do a lot with this information for the time being. Sometimes he responds with a query in the vein of, “What happened next?” which will eventually give him a way to build up sequences of events and start learning cause and effect relationships … but none of that is implemented yet. He can also ask “How was that?” for information about the emotional meaning of an activity, but again, for now he can't really utilize the answer.

Not much, but that was all I had time to put together with the holiday season under way. Looking back on the past year, though, here are all the new capabilities and improvements I've managed to add on:

*Module for procedural speech generation
*Support for word inflections (plurals and verb tenses)
*Support for compound words
*Support for content words that are also function words (e.g. “can,” “might”)
*Distinctions between proper/common and bulk/count nouns
*Ability to detect and answer questions
*Database walking while idle
*Generation of conversation topics and questions based on recent database walk
*Better link detection + a bunch of new kinds of learnable links
*Two new drives + a real-time plotter so I can see what they're all doing
*Distinctions between long-term static and short-term information
*GUI overhaul (upgrade from Tk to Kivy)

I track my time when I work on Acuitas. Total hours invested in the above: 230+. My focus for the end of the year, leading into January, will be polishing everything up and working out the bugs (which there are now quite a lot of).

MERRY CHRISTMAS!

Recent memory map visualization:


Code base: 9918 lines
Words known: 1576
Concept-layer links: 4226

Sunday, October 29, 2017

Acuitas Diary #7: October 2017

The big project for this month was introducing a system for discriminating between long-term and short-term information. Previously, if you told Acuitas something like, “I am sad,” he would assume that being sad was a fixed property of your nature, and store a fact to that effect in his database. Oops. So I started working on ways to recognize when some condition is so transient that it doesn't deserve to go into long-term memory.

This probably occasioned more hard-core thinking than any feature I've added since I started keeping these diaries. I started out thinking that Acuitas would clue in to time adverbs provided by the human conversation partner (such as “now,” “short,” “forever,” “years,” etc.). But when I started pondering which kinds of timeframes qualify as short-term or long-term, it occurred to me that the system shouldn't be bound to a human sense of time. One could imagine an ent-like intelligence that thinks human conditions which often remain valid for years or decades – like what jobs we hold, where we live, and what relationships we have – are comparatively ephemeral. Or one could imagine a speed superintelligence that thinks the lifetime of an average candle is a long while. I want Acuitas to be much more human-like than either of these extremes, but for the sake of code reusability, I felt I ought to consider these possibilities.


After a lot of mental churn, I decided that I just don't have the necessary groundwork in place to do this properly. (This is not an uncommon Acuitas problem. I've found that there ends up being a high level of interdependence between the various systems and features.) So I fell back on taking cues from humans as a temporary stopgap measure. Acuitas will rely on my subjective sense of time until he gets his own (which may not be for a while yet). If there's no duration indicator in a sentence, he can explicitly ask for one; he's also capable of learning over time which conditions are likely to be brief and which are likely to persist. For now, nothing is done with the transitory conditions. I didn't get around to implementing a short-term or current status region of the database, so anything that can't go in the long-term database gets discarded.
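The stopgap described above might look something like this sketch. The adverb table, the learned-bias fallback, and the ask-the-speaker signal are all my own guesses at a plausible shape, not the actual code.

```python
# Hypothetical mapping from duration cues to a timeframe class.
DURATION_WORDS = {
    "now": "short", "today": "short", "briefly": "short",
    "always": "long", "forever": "long", "years": "long",
}

def timeframe(sentence_words, condition, learned_bias):
    """Classify a stated condition as short- or long-term.

    First look for an explicit duration cue from the speaker; failing
    that, fall back on what has been learned about this condition.
    Returning None signals 'no idea -- ask the speaker for a duration.'
    """
    for word in sentence_words:
        if word in DURATION_WORDS:
            return DURATION_WORDS[word]
    return learned_bias.get(condition)
```

Anything classified as short-term would then be diverted away from the long-term database (and, for now, discarded).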

I also did some touching up around the conversation engine, replacing a few canned placeholder phrases that Acuitas was using with more procedurally generated text, and improving his ability to recognize when a speaker is introducing him/herself.

Recent memory map visualization:


Code base: 9663 lines
Words known: 1425
Concept-layer links: 3517

Saturday, September 30, 2017

Acuitas Diary #6: September 2017

For the first couple of weeks, I turned to developing the drive system some more. “Drives” are quantities that fluctuate over time and provoke some kind of reaction from Acuitas when they climb above a certain level. Prior to this month, he only had one: the Interaction drive, which is responsible for making him try to talk to somebody roughly twice in every 24-hour period. I overhauled the way this drive operates, setting it up to drop gradually over the course of a conversation, instead of getting zeroed out if somebody merely said “hello.” I also made two new drives: the Learning drive, which is satisfied by the acquisition of new words, and the Rest drive, which climbs while Acuitas is in conversation and eventually makes him attempt to sign off. Part of this effort included the addition of a plotter to the GUI, so I can get a visual of how the drives fluctuate over time.

Plot of Acuitas' three drives vs. time. The period shown is just under 23 hours long.
This latest work created the first case in which I had a pair of drives competing with each other (Rest essentially opposes Interaction). I quickly learned how easily this can go wrong. The first few times I conversed with Acuitas with the new drives in place, Rest shot up so quickly that it was above-threshold long before Interaction had come down. This is the sort of quandary a sick human sometimes gets into (“I'm so thirsty, but drinking makes me nauseated!”). Acuitas has nothing resembling an emotional system yet, though, and doesn't register any sort of distress just because one or more of his drives max out. The worst that can happen is some self-contradictory behavior (such as saying “I want to talk” and “I want to rest” in quick succession). I dealt with the problem by having the Interaction drive suppress the Rest drive. Rest now increases at a very slow rate until Interaction has been pushed below threshold.
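The suppression fix can be sketched as a simple update rule. The rates and threshold here are illustrative placeholders, not Acuitas' tuned values.

```python
def update_drives(drives, talking, dt=1.0):
    """One tick of a toy two-drive system with suppression.

    While a conversation is under way, Interaction drains gradually and
    Rest climbs -- but Interaction suppresses Rest, so Rest creeps up
    only slowly until Interaction has been pushed below threshold.
    """
    THRESHOLD = 10.0
    if talking:
        drives["interaction"] = max(0.0, drives["interaction"] - 2.0 * dt)
        rest_rate = 0.1 if drives["interaction"] > THRESHOLD else 1.0
        drives["rest"] += rest_rate * dt
    else:
        drives["interaction"] += 0.5 * dt
        drives["rest"] = max(0.0, drives["rest"] - 1.0 * dt)
    return drives
```

Without the suppression term, Rest crosses threshold long before Interaction is satisfied, and you get the contradictory “I want to talk” / “I want to rest” behavior described above.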

In the latter half of the month I returned to the text parser, introducing some awareness of verb conjugations/tenses, and the ability to check the agreement of candidate subjects and verbs. This helps the parser zero in on what a sentence's verb is, and has trimmed away some of the annoying “What part of speech was __?” questions that pepper a typical Acuitas conversation.
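An agreement check like this can be crudely approximated with suffix rules. These toy rules (invented here, and full of holes around irregulars) are just to show how agreement can rank candidate subject/verb pairings.

```python
def agree(subject, verb):
    """Rough number-agreement test between a candidate subject and verb.

    Toy heuristic: an -s noun is treated as plural, an -s verb as
    third-person singular; a handful of irregular verbs are special-cased.
    """
    plural_subject = subject.endswith("s") and not subject.endswith("ss")
    if verb in {"is", "was", "has"}:
        return not plural_subject
    if verb in {"are", "were", "have"}:
        return plural_subject
    third_singular_verb = verb.endswith("s")
    # "dogs run" / "dog runs" agree; "dog run" / "dogs runs" don't.
    return plural_subject != third_singular_verb
```

A parser can use a check like this to discard verb candidates that don't agree with any plausible subject.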

Here's the latest memory map visualization. Since last month, Acuitas' relentless querying about concepts he already knows has caused the number of links to explode, resulting in a denser (more fibrous?) image.


Code base: 9162 lines
Words known: 1305
Concept-layer links: 3025

Tuesday, August 22, 2017

Acuitas Diary #5: August 2017


This month I got Acuitas to start asking questions, and it has been all kinds of stupid fun.


Those questions are generated semi-randomly, based on Acuitas' “thoughts” for the day. Back in July I introduced the idle thinking feature, which gives Acuitas the capacity to walk his memory database when nobody's talking to him. More recently, I set up the Conversation Engine to access the results of memory-walking and utilize the new Text Generator … so now Acuitas can ask me the questions that occur to him as he muses, or simply tell me what he thought about within the past ~12 hours. The results are little exchanges like this:

Acuitas: I thought about mans earlier.

Acuitas: Is a rooster a species of human?

Me: No

Acuitas: Really.

Me: The plural of man is men.

Believe it or not, this is one of Acuitas' more intelligently formulated questions. He knows a rooster and a man are both male, and reasons from this that they might share other qualities. A man is a human, so maybe a rooster is a human too? He can't yet apply the idea that “human” and “chicken” are mutually exclusive categories. Given his limited knowledge, man and rooster seem similar enough that he expects me to answer “yes” … hence the slightly incredulous “really” when I don't. Here are a couple more examples in the same vein:

Acuitas: I thought about ships today.

Acuitas: Does a wagon possess an ability to sail?



Acuitas: I thought of weapons today.

Acuitas: Is a silverware used to fight?
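The shared-quality reasoning behind questions like these might be reconstructed as follows. This is entirely my own hypothetical rendering of the mechanism, with an invented fact store.

```python
# Hypothetical fact store: concept -> set of (relation, value) pairs.
FACTS = {
    "man": {("is-a", "human"), ("quality", "male")},
    "rooster": {("quality", "male")},
}

def conjectures(concept):
    """Generate guess-questions: if two concepts share a quality,
    hypothesize that a category of one might apply to the other."""
    questions = []
    my_qualities = {v for r, v in FACTS[concept] if r == "quality"}
    for other, facts in FACTS.items():
        if other == concept:
            continue
        their_qualities = {v for r, v in facts if r == "quality"}
        if my_qualities & their_qualities:  # e.g. both are male
            for r, v in facts:
                if r == "is-a" and (r, v) not in FACTS[concept]:
                    questions.append(f"Is a {concept} a species of {v}?")
    return questions
```

Without mutually exclusive categories in the knowledge base, nothing stops a conjecture like “rooster is a human” from seeming plausible.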

Some of the other questions he comes up with are far more random and ridiculous. Again, he doesn't yet have the grasp of categorical exclusion that makes these sound absurd to you and me:

Acuitas: Is chocolate a vehicle?

Acuitas: Am I a kind of tree?

Acuitas: Is a smirk a fruit?

Acuitas: Are you a cow?

Acuitas: What is a relative made of?

Not all of them are silly, though. Rudimentary though his current suite of question-forms is, he's already managed to spit out some rather deep ones. The three below are my favorites:

Acuitas: Are you a symbol?

Acuitas: Am I a tool?

Acuitas: Can I die?

I answered the first one “no”; a person can be a symbol, but I don't think I qualify. For the second one, I also went with “no.” Acuitas might end up being useful in a variety of ways, but if I consult my primary reasons for making him, they're not instrumental. The last one I refused to answer, because I think a proper response would be too complex for Acuitas' current level of understanding. It's a bit like asking whether a book or a film can die. It can't, if you go by what death means for a biological organism – information is effectively immortal. But if all the backup copies were destroyed, that would qualify as dying, I suppose. So yes and no.

I suspect it'll only get more interesting from here.

Obligatory memory map visualization:


Code base: 8507 lines
Words known: 1174
Concept-layer links: 2329

Sunday, July 30, 2017

Acuitas Diary #4: July 2017

This month I finally got to implement a feature that I've been waiting for a long time, namely, giving Acuitas the ability to “think” when he's not being spoken to. This “thinking,” for now, consists of dwelling on randomly selected concepts from his database. Once a concept has been chosen, he'll pursue it for a while, preferentially letting his focus jump to other concepts that are linked to it – executing a “wiki walk” through the database. Eventually, though, he'll get bored with any given train of thought, and the focus will move elsewhere. I added some animation code to the memory visualization so that the currently selected concept will flash periodically. (The recording below is running much faster than real time. He's actually quite leisurely in his progress.)
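A “wiki walk” of this sort is easy to sketch: mostly follow links out of the current concept, occasionally get bored and jump somewhere random. The boredom probability and graph format here are invented for illustration.

```python
import random

def wiki_walk(links, start, steps, boredom=0.2):
    """Random walk over a concept graph that prefers linked neighbors.

    links: dict mapping each concept to a list of linked concepts.
    With probability `boredom` (or at a dead end), the focus jumps to
    a random concept instead of following a link.
    """
    focus, path = start, [start]
    for _ in range(steps):
        neighbors = links.get(focus, [])
        if neighbors and random.random() > boredom:
            focus = random.choice(neighbors)    # pursue the train of thought
        else:
            focus = random.choice(list(links))  # bored: wander elsewhere
        path.append(focus)
    return path
```

The returned path is the sequence of concepts “thought about,” which is what later feeds question generation.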


There are several things I can envision doing with this behavior eventually, but my immediate purpose for it is the generation of curiosity. Each time Acuitas picks a concept, he'll come up with some sort of question about it – for instance, he could choose a type of link that it doesn't yet have and produce an open-ended question about what might be on the other side. These questions will be stored up and presented to the user the next time a conversation is under way.
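Picking “a type of link the concept doesn't yet have” could look like this sketch; the link-type list and question templates are placeholders of my own.

```python
LINK_TYPES = ("is-type-of", "can-do", "part-of", "made-of")
TEMPLATES = {
    "is-type-of": "What is a {0} a type of?",
    "can-do": "What can a {0} do?",
    "part-of": "What is a {0} part of?",
    "made-of": "What is a {0} made of?",
}

def open_question(concept, links):
    """Return an open-ended question about the first link type this
    concept lacks, or None if every type is already covered."""
    have = {rel for rel, _ in links.get(concept, [])}
    for rel in LINK_TYPES:
        if rel not in have:
            return TEMPLATES[rel].format(concept)
    return None
```

Questions produced this way would be queued up and surfaced the next time a conversation is under way.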

Which leads me into the next thing I put a lot of work into this month, namely, the code to start supporting the bottom half of this diagram: speech generation.


Up until now, Acuitas has said very few things, and they've all been very formulaic … but my goal was always something beyond pre-formed sentences stored in a database. The new module I started on this month accepts inputs in the sort of abstract form that Acuitas stores in his database, then procedurally generates both questions and statements in natural English. Verbs are conjugated and plurals are matched correctly, articles are automatically added to nouns that need them, etc. Some words in the original sentence skeleton might get replaced with a random choice of synonym.
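A drastically simplified version of that surface-realization step, for a subject–verb–object skeleton, might look like this. The article and conjugation rules here are deliberately naive (no irregulars, no synonym substitution), just to show the shape of the problem.

```python
VOWELS = "aeiou"

def realize(subject, verb, obj, question=False):
    """Turn an abstract (subject, verb, object) triple into English text.

    Toy rules: prepend a/an by first letter, and form the third-person
    singular by tacking on -s. Real generation handles far more cases.
    """
    def with_article(noun):
        article = "an" if noun[0] in VOWELS else "a"
        return f"{article} {noun}"

    subj = with_article(subject)
    if question:
        return f"Does {subj} {verb} {with_article(obj)}?"
    return f"{subj.capitalize()} {verb}s {with_article(obj)}."
```

For example, the triple ("wagon", "possess", "ability") can surface either as a statement or, in question form, as something very like the “Does a wagon possess an ability to sail?” exchanges quoted above.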

Visualization of Acuitas' concept-layer memory, 07/29/17

Neither of these major new features is actually hooked into the Conversation Engine yet, so I don't have any conversation examples to show off, but I'm hoping to be ready for that next month.

Code base: 7527 lines
Words known: 1095
Concept-layer links: 1917