Saturday, September 26, 2020

Acuitas Diary #30 (September 2020)

My job this month was to improve the generality of the cause-and-effect database, and then build up the concept of “possessions” or “inventory.”

Well, month, really. Image via @NoContextTrek on Twitter.

The C-E database, when I first threw it together, would only accept two types of fact or sentence: actions (“I <verb>,” “I am <verb-ed>”) and states (“I am <adjective>”). Why? Well, when you're putting something together for the first time, getting it to work on a limited number of cases is sometimes easier than trying to plan for everything. Obviously there are a lot more facts out there … so this month, I made revisions to allow just about any type of link relationship that Acuitas recognizes to be used in C-E relationships. Since “X has-a Y” is one of those, this upgrade was an important lead-in to the inventory work.
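To make the generalization concrete, here's a minimal sketch of what a generalized C-E entry could look like. The Link/CauseEffect layout and the example rules below are illustrative stand-ins, not Acuitas' actual internals:

```python
from dataclasses import dataclass

@dataclass
class Link:
    subject: str
    relation: str  # any recognized link type: "is", "does", "has-a", ...
    obj: str

@dataclass
class CauseEffect:
    cause: Link
    effect: Link

# With generic links on both sides, rules are no longer limited to
# actions and states:
rules = [
    CauseEffect(Link("X", "is", "cold"), Link("X", "is", "uncomfortable")),
    CauseEffect(Link("X", "takes", "Y"), Link("X", "has-a", "Y")),
]
```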

(If you look back at the narrative demo video, you may notice that I was awkwardly getting around the limitations of the original C-E code by using the word “possess” rather than “have.” Acuitas didn't know to recognize this as a possible synonym for “have,” so it was interpreted as a generic action rather than a “has-a” link, which made it admissible.)

So with that taken care of, how to get the concept of “having” into Acuitas? Making him the owner of some things struck me as the natural way to tie this idea to reality. Acuitas is almost bodiless, a process running in a computer, and therefore can't have physical objects. But he can have data. So I decided that his first two possessions would be the two test stories that I used in the video. I wrote them up as data structures in Acuitas' standard format, with the actual story sentences in a “content” field and another field to indicate the data type, saved those as text files, and put them in a hard drive folder that the program can access.
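For illustration, one of those story files might hold something roughly like this. Only the “content” field and the presence of a data-type field come from the actual format; the JSON encoding, the other field names, the folder path, and the placeholder sentences are all my assumptions:

```python
import json
import os

os.makedirs("possessions", exist_ok=True)  # hypothetical inventory folder

story = {
    "type": "story",                  # data-type field, so categories can be queried
    "title": "To Build a Fire",
    "content": [
        "A man walked in the snow.",  # placeholder sentences, not the real file
        "The man was cold.",
    ],
}

with open("possessions/tobuildafire.txt", "w") as f:
    json.dump(story, f, indent=2)
```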

Doing things with these owned data files is a planned future behavior. For now, Acuitas can just observe the folder's contents to answer “What do you have?” questions. You can ask with a one-word version of the title (“Do you have Tobuildafire?”) or ask about categories (“Do you have a story?”, “Do you have a datum?”).
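The folder scan behind those answers could be as simple as this sketch, which reuses the hypothetical file layout from the previous snippet (none of these names are the real code's):

```python
import json
import os

INVENTORY_DIR = "possessions"  # hypothetical location of the owned files

def have(query: str) -> bool:
    """True if any owned file matches the query as a title or a category."""
    for name in os.listdir(INVENTORY_DIR):
        with open(os.path.join(INVENTORY_DIR, name)) as f:
            item = json.load(f)
        one_word_title = item["title"].replace(" ", "").lower()
        if query.lower() in (one_word_title, item["type"]):
            return True
    return False

print(have("Tobuildafire"))  # True
print(have("story"))         # True
```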

In addition to implementing that, I extended the C-E database code with some specific relationships about item possession and transfer. I could have just tried to express these as stored items in the database, but they're so fundamental that I thought it would be worth burying them in the code itself. (Additional learned relationships will be able to extend them as necessary.) These hard-coded C-E statements include things like “If X gives Y to Z, Z has Y,” and furthermore, “If Y is a physical object, X doesn't have Y.”
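Here's a toy version of how those built-in rules might update an inventory map as a story plays out. The function and the state layout are mine; the two rules are the ones quoted above:

```python
def transfer(state: dict, source: str, item: str, dest: str,
             physical: set) -> None:
    """Apply "if X gives Y to Z, Z has Y" plus the physical-object corollary."""
    state.setdefault(dest, set()).add(item)
    if item in physical:  # data can be copied; physical objects cannot
        state.setdefault(source, set()).discard(item)

inventory = {"Zach": {"book"}}
# "A thief took the book" is also a transfer, just initiated by the receiver:
transfer(inventory, "Zach", "book", "thief", physical={"book"})
# inventory is now {"Zach": set(), "thief": {"book"}}
```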

I made up another test story to exercise this. I can now tell this to Acuitas and watch the narrative engine make entries for the different characters and keep track of who's got the thing:

“Zach was a human. Zach had a book. A thief took the book. Zach wanted his book. Zach talked to a policeman. The policeman found the thief. The policeman took the book from the thief. The policeman gave the book to Zach. Zach read the book. Zach was happy. The end.”

There's actually a lot of ambiguity in the notion of “having” something. If “I have a book,” does that mean that I …

… am holding it?
… am keeping it in an accessible location, like my backpack or bookshelf?
… am its legal owner?
… stand in some relationship to it, e.g. am its author?

“Have” can also be used to talk about parts or aspects of the self (“I have toes”, “I have intelligence”) or temporary conditions of the self (“I have a disease,” “I have anger”). Throw in the more action-oriented versions of “have” (“I'm having a baby,” “I'm having dinner,” “I'm having friends over”) and this little word starts to get pretty complicated.

But these are thoughts for the future. At the moment, Acuitas blurs all of these possibilities into one generic idea of “having.”

Until the next cycle,

Jenny

Sunday, August 23, 2020

Acuitas Diary #29 (August 2020)

This month, I returned to the good old text parser to put in proper support for adverbs. They'd been included in a half-hearted way from very early on, mainly because the word "not" was particularly crucial, but the implementation was hacky and left a lot to be desired.

I set up proper connections between adverbs and the verbs they modify, so that in a sentence like "Do you know that you don't know that you know that a cheetah is not an animal?" the "not's" get associated with the correct clauses and the question can be answered properly. I added support for adverbs that modify adjectives or other adverbs, enabling constructions like "very cold" and "rather slowly." Acuitas can also now guess that a known adjective with "-ly" tacked on the end is probably an adverb, even if he hasn't seen that particular adverb before.
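The "-ly" guess, in particular, boils down to a heuristic like the following sketch (the lexicon and function name here are illustrative, not the real parser's):

```python
KNOWN_ADJECTIVES = {"quick", "slow", "happy", "cold"}  # stand-in lexicon

def guess_adverb(word: str) -> bool:
    """Guess that an unknown word is an adverb if it's a known adjective + -ly."""
    if not word.endswith("ly"):
        return False
    stem = word[:-2]
    if stem.endswith("i"):  # handle spelling changes like happily -> happy
        stem = stem[:-1] + "y"
    return stem in KNOWN_ADJECTIVES

print(guess_adverb("slowly"))   # True
print(guess_adverb("happily"))  # True
print(guess_adverb("fly"))      # False: "f" is not a known adjective
```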

All of this went pretty quickly and left me some time for refactoring, so I converted the Episodic Memory system over to the new common data file format and cleaned up some random bugs.

That's not much to talk about, but it should reduce future development pain. I'm hoping next month will be more interesting.

Until the next cycle,

Jenny

Monday, July 13, 2020

Acuitas Diary #28 (July 2020)

This month's improvements involved a dive back into some really old code ... so old I don't think I've ever properly talked about it, because it was written before I started keeping developer diaries. So I'm going to start by describing those parts of the architecture, then get to the changes.

Acuitas isn't just a conversational agent that answers when spoken to; he's designed to run constantly and do his own business when not being interacted with. In technical terms, Acuitas is a real-time multi-threaded program. The code that accomplishes this has several major parts:

*The Stream. This is a kind of mental notice board. All sorts of processes within Acuitas can dump Thoughts (actually just data structures) into the Stream. Thoughts have a priority rating which affects how likely they are to be noticed; this decays gradually over time, until it drops to zero and the Thought is discarded.

Examples of processes that feed Thoughts into the Stream include the Spawner thread (randomly crawls the semantic memory for topics to generate questions about), the drive system (generates the "desires" to talk to someone, go to sleep, awaken, etc.) and the user interface (text input gets packaged into a Thought).

*The Executive. This thread's job is to grab a Thought out of the Stream every so often, and react to it appropriately. The Executive prefers high-priority Thoughts, but there's some randomness involved in its choice. The Thought priority values and the Executive selection, together, function as an attention assignment system.

Thoughts that can't wait for a response (like text input from the user) can interrupt the Executive and get taken for processing immediately. Otherwise, it consumes a Thought every ten seconds as Acuitas idles.

*Actions. These are behaviors that are under the direct control of the Executive, and can be selected as responses to Thoughts. Examples include "generate questions about this concept" or "say this sentence." Some Actions are processes that can take multiple ticks of the Executive to finish.
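As a rough sketch of how these pieces fit together: the decaying priorities, the weighted-random pick, and the ten-second idle tick below come from the description above, while every name, number, and data layout is a stand-in of mine:

```python
import random
import time
from dataclasses import dataclass

@dataclass
class Thought:
    content: str
    priority: float  # decays over time; the Thought is discarded at zero

class Stream:
    def __init__(self):
        self.thoughts = []

    def add(self, thought):
        self.thoughts.append(thought)

    def decay(self, amount=0.1):
        for t in self.thoughts:
            t.priority -= amount
        self.thoughts = [t for t in self.thoughts if t.priority > 0]

    def pick(self):
        # Higher-priority Thoughts are more likely to be chosen, but the
        # choice is randomized -- together this acts as attention assignment.
        if not self.thoughts:
            return None
        chosen = random.choices(
            self.thoughts, weights=[t.priority for t in self.thoughts])[0]
        self.thoughts.remove(chosen)
        return chosen

def executive_loop(stream):
    while True:
        thought = stream.pick()
        if thought is not None:
            print("Reacting to:", thought.content)  # dispatch an Action here
        stream.decay()
        time.sleep(10)  # consume one Thought every ten seconds while idling
```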

Again, all of this code was pretty old, and I'd developed some new ideas about how I wanted it to work. In particular, I wanted to tie the Conversation Handler back to the Executive more thoroughly, so that the Executive could be in charge of some conversational decision-making ... e.g. choosing whether or not to answer a question. Those renovations are still in progress.

I re-designed the Actions to have a more generalized creation process, so that Acuitas can more easily pick one arbitrarily and pass it whatever data it needs to run. This improves the code for dealing with threats. I also added an "Action Bank" that tracks which Actions are currently running. This in turn enables support for the question "What are you doing?" (Sometimes he answers "Talking," like Captain Obvious.)

Lastly, I added support for the question "Are you active/alive?" When determining whether he is active, Acuitas checks whether any Actions are currently running. Barring errors, the answer will *always* be yes, because checking for activity is itself an Action! 
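A toy illustration of why the answer is self-confirming (the ActionBank class and method names here are hypothetical, not the real code's):

```python
class ActionBank:
    """Tracks which Actions are currently running."""
    def __init__(self):
        self.running = set()

    def start(self, name):
        self.running.add(name)

    def finish(self, name):
        self.running.discard(name)

    def am_i_active(self):
        self.start("check_activity")    # the check is itself an Action...
        active = len(self.running) > 0  # ...so this is always True
        self.finish("check_activity")
        return active

bank = ActionBank()
print(bank.am_i_active())  # True, barring errors
```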

The word "active" is thus attached to a meaning: "being able to perceive yourself doing something," where "something" can include "wondering whether you are active." In Acuitas' case, I think of "alive" as meaning "capable of being active," so "I am alive" can be inferred from "I am active." This grounds an important goal-related term by associating it with an aspect of the program's function.

Until the next cycle,

Jenny


Monday, June 8, 2020

Hissing Silence Shell 3D Print

Since January, I've been working on my first foray into 3D print design. I wanted to recreate the “Hissing Silence Shell” ghost drone design from Destiny 2. Though I had the 3D files from the game's mobile app to work with via http://www.destinystlgenerator.com/, a massive amount of conversion was still required to make them into working printable objects. This gave me a great opportunity to learn how to use my CAD program of choice, DesignSpark Mechanical.
3D printed Hissing Silence Ghost Shell

I've posted the completed design on Thingiverse: https://www.thingiverse.com/thing:4436583

A screenshot of the original HSGS from Destiny 2

I studied the landscape of free CAD programs before settling on DesignSpark Mechanical as the first one to try. My feelings about it are mixed. It's an incredibly frustrating program, but I have no idea whether the alternatives are any better. I like its overall concept of directly manipulating the geometry by pushing and pulling faces, cutting objects with other objects, creating “blends” between lines, and so forth. In practice, it often reacts to commands with “this too complicated, I can't even” and spits out an error message that tells you nothing about how to fix the problem.

Exploded 3D model in DesignSpark
Final 3D model of all the clamshell pieces, in DesignSpark
Some of my models ended up with tiny gaps between faces that DesignSpark absolutely refused to fill in; I had to fix them in MeshMixer after exporting them as STLs. Others had to be repaired in MeshMixer because they were inside-out. Either of these problems seems to prevent surfaces from turning into solids, which makes some types of manipulation more difficult. Some operations make DesignSpark bog down or even crash (rounding curved edges was especially bad in this regard). It doesn't have a real mirror tool. And though YouTube has video tutorials for the basics, there were still a lot of important things I had to figure out by blundering around.
One of the original Destiny 3D files

The initial models from the game were mostly hollow surfaces with holes in them. To get to a proper printable design, I had to …

*Slice the original objects up into reasonable pieces
*Reconstruct missing surfaces and close up all the gaps
*Replace low-poly (blocky) geometry with smooth curves
*Manually re-create surface details, since all of these turned out to be part of the texture rather than the model
*Add all of the pegs and holes so the parts would fit together

The core

By the time I started on the core, I had gotten a fair bit of DesignSpark practice under my belt, and I wanted to print a smoother sphere (the original was an approximate sphere made of triangles). So I used the model from the game as a sizing reference only, and created the core parts from scratch. Part of the back half is converted into a knob that lets you turn the LED light on and off without disassembling the ghost.

Clamshell interior

Disassembled core

The final product has 34 distinct pieces that interlock, press, or snap-fit together. It took a long time to print, but it's a beauty.


Next I need to take this new CAD knowledge and do some work on Atronach. The ghost is more or less an eyeball, so it might serve as a helpful starting point.
Until the next cycle,
Jenny

Thursday, May 28, 2020

Acuitas Diary #27 (May 2020)


Last month was a big deal, and that means that this month gets to be REALLY BORING. I worked on refactoring, bug fixes, and code cleanup the whole time. There's still a lot to do, but I cleaned out some cobwebs that have been getting in my face for a long while now. 

From The Editorial Board of the University Society Boys and Girls Bookshelf, via Wikimedia Commons
For instance, I …

*Universalized the inverse and complement forms of concept-relationships
*Improved the format of verb transport from the Parser to the Interpreter, to handle compound verbs better
*Generalized Text Interpreter output formatting
*Fixed an issue with verb conjugation learning
*Fixed a Text Parser bug that was causing some words to be tagged as the wrong part-of-speech
*Fixed the TP and TI simulators so they won't crash when an unknown word is used
*Improved and generalized the code that produces clarifying question loops in a conversation
*Fixed a bad bug related to parsing of possessives
*Cleaned up and merged several different cause-and-effect database search functions

And it all took a lot longer than it probably sounds like it should have.

June is my “month off,” and then I'm hoping I can get cracking on some new features again.

Until the next cycle,
Jenny

Friday, April 17, 2020

Acuitas Diary #26 (April 2020)

I can now tell Acuitas stories.  And he begins to understand what's happening in them.  There's a video!  Watch the video.

I've been waiting to bring storytelling into play for a long time, and it builds on a number of the features added over the last few months: cause-and-effect reasoning, the goal system, and problem-solving via tree search.


What does Acuitas know before the video starts?  For one thing, I made sure he knew all the words in the story first, along with what part of speech they were going to appear as.  He should have been able to handle seeing *some* new words for the first time during the story, but then he would have asked me even more questions, and that would have made the demo a bit tedious.  He also knows some background about humans and dogs, and a few opposite pairs (warm/cold, comfortable/uncomfortable, etc.)


How does Acuitas go about understanding a story?  As the story is told, he keeps track of all the following, stored in a temporary area that I call the narrative scratchboard:

*Who are the characters?
*What objects are in the story? What state are they in?
*What problems do the characters have?
*What goals do the characters have?
*What events take place? (Do any of them affect problems or goals?)
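One plausible shape for that scratchboard, with field names guessed from the list above rather than taken from the real code:

```python
from dataclasses import dataclass, field

@dataclass
class Scratchboard:
    """Temporary record of one story in progress."""
    characters: dict = field(default_factory=dict)  # name -> facts about them
    objects: dict = field(default_factory=dict)     # name -> current state
    problems: list = field(default_factory=list)    # open problems, per character
    goals: list = field(default_factory=list)       # goals attributed to characters
    events: list = field(default_factory=list)      # events as they arrive
```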

Acuitas doesn't try to understand the cause chain and import of every single event in the story, because that would be a bit much at this stage.  However, he does try to make sure that he knows all of the following:

*If a character is in some state, what does that mean for the character?
*If a character anticipates that something will happen, how does the character feel about it?
*If a character is planning to do something, what is their motive?

If he can't figure it out by making inferences with the help of what's in his semantic database, he'll bother his conversation partner for an explanation, as you can see him doing in the video several times.  Story sentences don't go into the permanent knowledge base (yet), but explanations do, meaning they become available for understanding other stories, or for general reasoning.  Explaining things to him still requires a bit of skill and an understanding of what his gaps are likely to be, since he can't be specific about *why* he doesn't understand something.  A character state, expectation, or plan is adequately explained when he can see how it relates to one of the character's presumed goals.  Once you provide enough new links to let him make that connection, he'll let you move on.

Acuitas returns feedback throughout the story.  This is randomized for variety (though I forced some particular options for the demo).  After receiving a new story sentence, he may ...

*say nothing, or make a "yes I'm listening" gesture.
*comment something that he inferred from the new information.
*tell you whether he likes or dislikes what just happened.
*try to guess what a character might do to solve a problem.

He even has a primitive way of deciding whether it's a good story or not.  He tracks suspense (generated by the presence of more than one possible outcome) and tension (how dire things are for the characters) as the story progresses.  A story whose suspense and tension values don't get very large or don't change much is "boring."  He also assesses whether the story had a positive or negative ending (did the characters solve their problems and meet their goals?).  Stories with happy endings that aren't boring may earn approving comments.
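A toy version of that "boring" test might look like the sketch below: track suspense and tension over the course of the story, and call the story boring if both curves stay small or flat.  The thresholds and the exact combination rule are invented for illustration:

```python
def is_boring(suspense, tension, min_peak=0.5, min_range=0.3):
    """Boring = neither the suspense nor the tension track ever does anything."""
    def dull(values):
        # a track is dull if it never gets large, or never changes much
        return max(values) < min_peak or (max(values) - min(values)) < min_range
    return dull(suspense) and dull(tension)

print(is_boring([0.1, 0.1, 0.2], [0.0, 0.1, 0.1]))  # True: flat and small
print(is_boring([0.2, 0.8, 0.1], [0.1, 0.9, 0.2]))  # False: builds and releases
```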

There are many directions in which this feature needs to expand and grow more robust, and I expect I'll be working on them soon.  But first it might be time for a refactoring spree.

Until the next cycle,
Jenny Sue

Saturday, March 28, 2020

Acuitas Diary #25 (March 2020)

Normally there would be a project update here, but I'm working on something a little bigger and more involved than usual.  It's not done yet, and it doesn't lend itself to being displayed half-finished.  So instead, please enjoy a little info about the current state of AI development in general, courtesy of the head of a failed startup: The End of Starsky Robotics

Read any article about AI being developed by a present-day academic or corporate research team, and there's a good chance that it's nothing like Acuitas.  Today's most popular AIs are based on artificial neural networks, whose special ability is learning categories, procedures, etc. from the statistics of large piles of data.  But as Stefan says, "It isn’t actual artificial intelligence akin to C-3PO, it’s a sophisticated pattern-matching tool."  At best, it only implements one of the skills a complete mind needs.  Starsky Robotics tried to make up for AI's weaknesses by including a human teleoperator, so that their trucks were "unmanned," but not fully self-driving.

Academic and corporate teams have far more man-hours to work on AI than I do, but they're also pouring some of their efforts down a rabbit hole of diminishing returns.  "Rather than seeing exponential improvements in the quality of AI performance (a la Moore’s Law), we’re instead seeing exponential increases in the cost to improve AI systems — supervised ML seems to follow an S-Curve. The S-Curve here is why Comma.ai, with 5–15 engineers, sees performance not wholly different than Tesla’s 100+ person autonomy team."

The debate rages around what we should do next to innovate our way off that S-Curve. As a symbolic AI, Acuitas is something of a throwback to even older techniques that were abandoned before the current wave of interest in ANNs. But tackling AI at this conceptual level is the approach that comes most naturally to me, so I want to put my own spin on it and see how far I get.

A thing I've observed from a number of AI hopefuls by now is what I'll call "the search for the secret intelligence sauce."  They want to find one simple technique, principle, or structure (or perhaps a small handful of these) that, when scaled up or repeated millions of times, will manifest the whole of what we call "intelligence."  Put enough over-simplified neurons in a soup kettle and intelligent behavior will "emerge."  Something from almost-nothing.  Hey, poof!  My own intuition is that this is not at all how it works.  I suspect rather that intelligence is a complex system, and can only be invented by the heavy application of one's own intelligence.  Any method that tries to offload most of the work to mathematical inevitabilities, or to the forces of chance, is going to be unsatisfactory.  (If you want a glimpse of how devastatingly complicated brains are, here is another fascinating article: Why the Singularity is Nowhere Near)

This is of course my personal opinion, and we shall see!

So far Project Acuitas has not been adversely impacted by COVID-19. The lab assistant and I are getting along famously.

Until the next cycle,
Jenny Sue