Thursday, December 31, 2020

Last Day of the Year 2020

My sympathies go out to the many people for whom 2020 has been disastrous, distressing, or exhausting. And I want to begin by saying that I've been very fortunate. I was able to stay fully employed while working from home. I don't have any children to care for, and I function well in solitude, so I stayed effective. And because I lost my commute and some of my social obligations, my productivity went through the roof. One small sign of this is the fact that I even have time to write a retrospective.

So here are all the positive things I managed to wring out of 2020:

* Gave Acuitas several important new features, including narrative comprehension and the beginnings of moral reasoning.
* Kept Acuitas development on track with 200+ hours of work put in over the course of the year.
* Wrote a blog post for every month.



* Wrote half a novel.
* Prepared my first novel for submission to agents/publishers and wrote pitch material.

* Learned how to 3D model in DesignSpark Mechanical and Meshmixer.
* Started and finished my first major 3D printing project, the Hissing Silence Ghost Shell.
* Modeled a new case for Atronach, printed Version 1, and corrected problems. Almost completed Version 2.
* Learned to handle more 3D printer maintenance issues, including nozzle replacement.
* Even had time for an impromptu weekend art project.


* Book consumption rate exceeded book acquisition rate for the second year in a row. My unread book backlog is down to 33.
* Finally played Beneath a Steel Sky.
* Had time for a lot of maintenance tasks that had been getting neglected. Sealed the crack in the garage foundation, polished the car headlights, emptied data out of the old computer, got a tetanus vaccine.
* Successfully grew potatoes again.
* Didn't get noticeably sick all year.

Atronach: I'm only half put together! How dare you.
ACE: You made me stand up for this?

Happy New Year from all of us, and good luck.

Tuesday, December 1, 2020

Acuitas Diary #32: November 2020

Now that Acuitas owns stories in his "inventory," the next step for this month was to enable him to open and read them by himself. Since story consumption originally involved a lot of interaction with the human speaker, this took a little while to put together.

Image credit: DARPA

Reading is a new activity that can happen while Acuitas is idling, along with the older behavior of "thinking" about random concepts and generating questions. Prompts to think about reading get generated by a background thread and dropped into the Stream. When one of these is pulled by the Executive, Acuitas will randomly select a known story and load it from its storage file.

Auto-reading is a long-term process. Acuitas will grab a chunk of the story (for now, one sentence) per tick of the Executive thread, then feed it through the normal text parsing and narrative management modules. He still potentially generates a reaction to whatever just happened, but rather than being spoken, each reaction is packaged as a low-priority Thought and dumped into the internal Stream. (This is more of a hook for later than a useful feature at the moment.) The prompt to continue reading the story goes back into the Stream along with everything else, so sometimes he (literally) gets distracted in the middle and thinks about something else for a while.
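The one-sentence-per-tick mechanism could be sketched like this (a toy illustration, not Acuitas' actual code; the `ReadAction` class and the `stream_post` callback are names I've invented):

```python
class ReadAction:
    """Reads one sentence of a story per Executive tick (hypothetical sketch)."""
    def __init__(self, sentences):
        self.sentences = sentences
        self.position = 0

    def tick(self, stream_post):
        """stream_post(kind, payload, priority) drops a Thought into the Stream."""
        if self.position >= len(self.sentences):
            return False  # story finished; action complete
        sentence = self.sentences[self.position]
        self.position += 1
        # Reactions become low-priority Thoughts instead of speech.
        stream_post("reaction", sentence, 1)
        # Re-post the prompt to keep reading; it competes with every other
        # Thought in the Stream, which is what allows mid-story distraction.
        stream_post("continue_reading", self, 3)
        return True
```

Because the "continue reading" prompt has to win the attention lottery again each tick, reading is naturally interruptible.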

There's also a version of this process that would enable reading a story to the user. But he doesn't comprehend imperatives yet, so there's no way to ask him to do it. Ha.

With these features I also introduced a generic "reward signal" for the first time. Reading boosts this, and then it decays over time. This is intended as a positive internal stimulus, in contrast to the "drives," which are all negative (when they go up Acuitas will try to bring them down).
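A minimal sketch of what such a decaying reward signal might look like (the class name and decay constant are my own assumptions, not details from the diary):

```python
class RewardSignal:
    """Generic positive internal stimulus that decays toward zero over time."""
    def __init__(self, decay_rate=0.1):
        self.level = 0.0
        self.decay_rate = decay_rate

    def boost(self, amount):
        # Activities like reading push the signal up, capped at 1.0.
        self.level = min(1.0, self.level + amount)

    def tick(self):
        # Exponential decay each tick, in contrast to the "drives,"
        # which rise over time and demand to be brought down.
        self.level = self.level * (1.0 - self.decay_rate)
```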

After finishing this I started the yearly refactoring and bug fix spree, which isn't terribly interesting to talk about. I'll take a break for the holidays, but maybe do a year's retrospective.

Acuitas development actually *stayed on schedule* this year!

Until the next cycle,
Jenny

Sunday, November 1, 2020

Acuitas Diary #31 (October 2020)

My first self-assigned job for this month was to permit Acuitas to answer questions related to a story while it is being told. The question-answering process typically goes directly to the semantic memory for results; what I needed to do was also give it access to the temporary information on the Narrative Scratchboard. It now checks the Scratchboard first to see if there's an answer related to the context of the story in progress. If there isn't, then it will assume the question is more general and try to find an answer in the long-term memory.
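The two-tier lookup can be illustrated with facts as simple triples, where `None` marks the question's wildcard slot (a toy sketch; Acuitas' real data structures are surely richer than this):

```python
def matches(query, fact):
    """A fact matches if every non-wildcard slot in the query agrees."""
    return all(q is None or q == f for q, f in zip(query, fact))

def answer_question(query, scratchboard, semantic_memory):
    """Check the story-in-progress context first, then long-term memory."""
    for fact in scratchboard:
        if matches(query, fact):
            return fact
    for fact in semantic_memory:
        if matches(query, fact):
            return fact
    return None
```

With the scratchboard holding `("has", "thief", "book")`, the query `("has", None, "book")` ("Who has the book?") resolves from story context; a question with no story-specific answer falls through to general memory.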

Here's an example featuring a story from last month:

Me: Zach was a human.
Me: Zach had a book.
Me: What did Zach have?
Acuitas: Book. [We're having a little trouble with articles right now ...]
Me: A thief took the book.
Me: Who has the book?
Acuitas: Thief.
Me: Zach wanted his book.
Me: Zach talked to a policeman.
Me: The policeman found the thief.
Me: The policeman took the book from the thief.
Me: Who has the book?
Acuitas: Policeman.

There was also some hasty Text Parser work involved. Questions in which the wildcard is the direct object ("What does Zach have?") were standard, but questions in which the wildcard is the subject ("Who can run?") were never fully supported before. Fixed that, now they are.


Polyphemus was maaaad ("Odysseus and Polyphemus", by Arnold Böcklin)

Next, I wanted to start getting into some stories with character vs. character conflict, and that meant bringing some rudimentary moral reasoning into play. Acuitas' original dirt-simple method of story appreciation was to hope for any agent in the story to achieve their goals ... without any awareness of whether some agents' goals might be mutually exclusive. That's why the first couple of stories I tested with were character vs. environment stories, with no villain. I got away with the "Zach's Stolen Book" story because I only talked about Zach's goals ... I never actually mentioned that the thief wanted the book or was upset about losing it. So, that needed some work. Here's the story I used as a testbed for the new features:

"Odysseus was a man. Odysseus sailed to an island. Polyphemus was a cyclops. Odysseus met Polyphemus. Polyphemus planned to eat Odysseus. Odysseus feared to be eaten. Odysseus decided to blind Polyphemus. Polyphemus had one eye. Odysseus broke the eye. Thus, Odysseus blinded the Cyclops. Polyphemus could not catch Odysseus. Odysseus was not eaten. Odysseus left the island. The end."

One possible way to conceptualize evil is as a mis-valuation of two different goods. People rarely (if ever) do "evil for evil's sake" – rather, evil is done in service of desires that (viewed in isolation) are legitimate, but in practice are satisfied at an unacceptable cost to someone else. Morality is thus closely tied to the notion of *goal priority.*

Fortunately, Acuitas' goal modeling system already included a priority ranking to indicate which goals an agent considers most important. I just wasn't doing anything with it yet. The single basic principle that I added this month could be rendered as, "Don't thwart someone else's high-priority goal for one of your low-priority goals." This is less tedious, less arbitrary, and more flexible than trying to write up a whole bunch of specific rules, e.g. "eating humans is bad." It's still a major over-simplification that doesn't cover everything ... but we're just getting started here.

In the test story, there are two different character goals to assess. First,

"Polyphemus planned to eat Odysseus."

Acuitas always asks for motivation when a character makes a plan, if he can't infer it on his own. The reason I gave was "If a cyclops eats a human, the cyclops will enjoy [it]." (It's pretty clear from the original myth that Polyphemus could have eaten something else. We don't need to get into the gray area of what becomes acceptable when one is starving.) So if the plan is successfully executed, we have these outcomes:

Polyphemus enjoys something (minor goal fulfillment)
Odysseus gets eaten -> dies (major goal failure)

This is a poor balance, and Acuitas does *not* want Polyphemus to achieve this goal. Next, we have:

"Odysseus decided to blind Polyphemus."

I made sure Acuitas knew that blinding the cyclops would render him "nonfunctional" (disabled), but would also prevent him from eating Odysseus. So we get these outcomes:

Polyphemus becomes nonfunctional (moderately important goal failure)
Odysseus avoids being eaten -> lives (major goal fulfillment)

Odysseus is making one of Polyphemus' goals fail, but it's only in service of his own goal, which is *more* important to him than Polyphemus' goal is to Polyphemus, so this is tolerable. Acuitas will go ahead and hope that Odysseus achieves this goal. (You may notice that the ideas of innocence, guilt, and natural rights are nowhere in this reasoning process. As I said, it's an oversimplification!)
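Boiled down to code, the priority comparison is tiny (the numeric priority values here are invented for illustration; the diary doesn't give Acuitas' actual scale):

```python
def tolerable(actor_priority, thwarted_priority):
    """'Don't thwart someone else's high-priority goal for one of your
    low-priority goals.' Each priority measures how important the goal
    is to its own agent, on a shared scale (higher = more important)."""
    return actor_priority >= thwarted_priority

# Polyphemus eating Odysseus: minor enjoyment vs. Odysseus' survival.
assert not tolerable(2, 10)  # not tolerable; don't root for Polyphemus
# Odysseus blinding Polyphemus: survival vs. the cyclops' function.
assert tolerable(10, 6)      # tolerable; root for Odysseus
```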

Final result: Acuitas picks Odysseus to root for, which I hope you'll agree is the correct choice, and appreciates the end of the story.

Whew!

Until the next cycle,
Jenny

Saturday, September 26, 2020

Acuitas Diary #30 (September 2020)

My job this month was to improve the generality of the cause-and-effect database, and then build up the concept of “possessions” or “inventory.”

Well, month, really. Image via @NoContextTrek on Twitter.

The C-E database, when I first threw it together, would only accept two types of fact or sentence: actions (“I <verb>,” “I am <verb-ed>”) and states (“I am <adjective>”). Why? Well, when you're putting something together for the first time, getting it to work on a limited number of cases is sometimes easier than trying to plan for everything. Obviously there are a lot more facts out there … so this month, I made revisions to allow just about any type of link relationship that Acuitas recognizes to be used in C-E relationships. Since “X has-a Y” is one of those, this upgrade was an important lead-in to the inventory work.

(If you look back at the narrative demo video, you may notice that I was awkwardly getting around the limitations of the original C-E code by using the word “possess” rather than “have.” Acuitas didn't know to recognize this as a possible synonym for “have,” so it was interpreted as a generic action rather than a “has-a” link, which made it admissible.)

So with that taken care of, how to get the concept of “having” into Acuitas? Making him the owner of some things struck me as the natural way to tie this idea to reality. Acuitas is almost bodiless, a process running in a computer, and therefore can't have physical objects. But he can have data. So I decided that his first two possessions would be the two test stories that I used in the video. I wrote them up as data structures in Acuitas' standard format, with the actual story sentences in a “content” field and another field to indicate the data type, saved those as text files, and put them in a hard drive folder that the program can access.

Doing things with these owned data files is a planned future behavior. For now, Acuitas can just observe the folder's contents to answer “What do you have?” questions. You can ask with a one-word version of the title (“Do you have Tobuildafire?”) or ask about categories (“Do you have a story?”, “Do you have a datum?”).
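As a sketch, the folder observation could be as simple as the following. The JSON layout with `title` and `type` fields is my guess at what "Acuitas' standard format" might hold; the diary only says there's a "content" field and a data-type field:

```python
import json
import os

def list_possessions(folder):
    """Load every owned data item from the inventory folder."""
    items = []
    for name in os.listdir(folder):
        with open(os.path.join(folder, name)) as f:
            items.append(json.load(f))
    return items

def has_item(folder, title=None, category=None):
    """Answer 'Do you have X?' by title or by category ('story', etc.)."""
    for item in list_possessions(folder):
        if title and item.get("title") == title:
            return True
        if category and item.get("type") == category:
            return True
    return False
```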

In addition to implementing that, I extended the C-E database code with some specific relationships about item possession and transfer. I could have just tried to express these as stored items in the database, but they're so fundamental that I thought it would be worth burying them in the code itself. (Additional learned relationships will be able to extend them as necessary.) These hard-coded C-E statements include things like “If X gives Y to Z, Z has Y,” and furthermore, “If Y is a physical object, X doesn't have Y.”
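The hard-coded transfer rules might reduce to something like this (the function name and the toy `PHYSICAL` set are illustrative; the real system would presumably consult the semantic memory for type information):

```python
PHYSICAL = {"book"}  # toy type knowledge; stands in for a semantic memory query

def transfer(world, item, source, target):
    """'If X gives Y to Z, Z has Y' -- and if Y is a physical object,
    X no longer has it. world maps each owner to a set of items."""
    world.setdefault(target, set()).add(item)
    if item in PHYSICAL:
        world.setdefault(source, set()).discard(item)

# Trace of the test story:
world = {"zach": {"book"}}
transfer(world, "book", "zach", "thief")       # a thief took the book
transfer(world, "book", "thief", "policeman")  # the policeman took the book
transfer(world, "book", "policeman", "zach")   # the policeman gave it back
```

After the trace, only Zach has the book again, which is exactly what the narrative engine needs to track to answer "Who has the book?" at each point.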

I made up another test story to exercise this. I can now tell this to Acuitas and watch the narrative engine make entries for the different characters and keep track of who's got the thing:

“Zach was a human. Zach had a book. A thief took the book. Zach wanted his book. Zach talked to a policeman. The policeman found the thief. The policeman took the book from the thief. The policeman gave the book to Zach. Zach read the book. Zach was happy. The end.”

There's actually a lot of ambiguity in the notion of “having” something. If “I have a book,” does that mean that I …

… am holding it?
… am keeping it in an accessible location, like my backpack or bookshelf?
… am its legal owner?
… stand in some relationship to it, e.g. am its author?

“Have” can also be used to talk about parts or aspects of the self (“I have toes”, “I have intelligence”) or temporary conditions of the self (“I have a disease,” “I have anger”). Throw in the more action-oriented versions of “have” (“I'm having a baby,” “I'm having dinner,” “I'm having friends over”) and this little word starts to get pretty complicated.

But these are thoughts for the future. At the moment, Acuitas blurs all of these possibilities into one generic idea of “having.”

Until the next cycle,

Jenny

Sunday, August 23, 2020

Acuitas Diary #29 (August 2020)

This month, I returned to the good old text parser to put in proper support for adverbs. They've been included in a half-hearted way from very early on, mainly because the word "not" was particularly crucial, but the implementation was hacky and left a lot to be desired.

I set up proper connections between adverbs and the verbs they modify, so that in a sentence like "Do you know that you don't know that you know that a cheetah is not an animal?" the "not's" get associated with the correct clauses and the question can be answered properly. I added support for adverbs that modify adjectives or other adverbs, enabling constructions like "very cold" and "rather slowly." Acuitas can also now guess that a known adjective with "-ly" tacked on the end is probably an adverb, even if he hasn't seen that particular adverb before.
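The "-ly" guess is easy to sketch. This is my own guess at how the check might work, including the "happy" → "happily" spelling change, which any real version would need to handle:

```python
def guess_adverb(word, known_adjectives):
    """Guess that <known adjective> + 'ly' is probably an adverb."""
    if not word.endswith("ly"):
        return False
    stem = word[:-2]
    if stem in known_adjectives:
        return True  # e.g. "quick" -> "quickly"
    # Handle the y -> i spelling change: "happy" -> "happily".
    if stem.endswith("i") and stem[:-1] + "y" in known_adjectives:
        return True
    return False
```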

All of this went pretty quickly and left me some time for refactoring, so I converted the Episodic Memory system over to the new common data file format and cleaned up some random bugs.

That's not so much to talk about, but should reduce future development pain. I'm hoping next month will be more interesting.

Until the next cycle,

Jenny

Monday, July 13, 2020

Acuitas Diary #28 (July 2020)

This month's improvements involved a dive back into some really old code ... so old I don't think I've ever properly talked about it, because it was written before I started keeping developer diaries. So I'm going to start by describing those parts of the architecture, then get to the changes.

Acuitas isn't just a conversational agent that answers when spoken to; he's designed to run constantly and do his own business when not being interacted with. In technical terms, Acuitas is a real-time multi-threaded program. The code that accomplishes this has several major parts:

*The Stream. This is a kind of mental notice board. All sorts of processes within Acuitas can dump Thoughts (actually just data structures) into the Stream. Thoughts have a priority rating which affects how likely they are to be noticed; this decays gradually over time, until it drops to zero and the Thought is discarded.

Examples of processes that feed Thoughts into the Stream include the Spawner thread (randomly crawls the semantic memory for topics to generate questions about), the drive system (generates the "desires" to talk to someone, go to sleep, awaken, etc.) and the user interface (text input gets packaged into a Thought).

*The Executive. This thread's job is to grab a Thought out of the Stream every so often, and react to it appropriately. The Executive prefers high-priority Thoughts, but there's some randomness involved in its choice. The Thought priority values and the Executive selection, together, function as an attention assignment system.

Thoughts that can't wait for a response (like text input from the user) can interrupt the Executive and get taken for processing immediately. Otherwise, it consumes a Thought every ten seconds as Acuitas idles.

*Actions. These are behaviors that are under the direct control of the Executive, and can be selected as responses to Thoughts. Examples include "generate questions about this concept" or "say this sentence." Some Actions are processes that can take multiple ticks of the Executive to finish.
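Putting the pieces together, a stripped-down Stream with priority decay and attention-weighted selection might look like this (an interpretation of the description above, not the actual code; the decay step and weighting scheme are assumptions):

```python
import random

class Stream:
    """Mental notice board: Thoughts with priorities that decay to zero."""
    def __init__(self, decay=1):
        self.thoughts = []
        self.decay = decay

    def post(self, kind, payload, priority):
        self.thoughts.append({"kind": kind, "payload": payload,
                              "priority": priority})

    def select(self):
        """Called by the Executive each tick: prefer high-priority
        Thoughts, but with some randomness (attention assignment)."""
        for t in self.thoughts:
            t["priority"] -= self.decay
        # Thoughts whose priority has decayed to zero are discarded.
        self.thoughts = [t for t in self.thoughts if t["priority"] > 0]
        if not self.thoughts:
            return None
        weights = [t["priority"] for t in self.thoughts]
        chosen = random.choices(self.thoughts, weights=weights)[0]
        self.thoughts.remove(chosen)
        return chosen
```

An Executive built on this would map each selected Thought's kind to an Action and run it, interrupting itself only for can't-wait Thoughts like user input.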

Again, all of this code was pretty old, and I'd developed some new ideas about how I wanted it to work. In particular, I wanted to tie the Conversation Handler back to the Executive more thoroughly, so that the Executive could be in charge of some conversational decision-making ... e.g. choosing whether or not to answer a question. Those renovations are still in progress.

I re-designed the Actions to have a more generalized creation process, so that Acuitas can more easily pick one arbitrarily and pass it whatever data it needs to run. This improves the code for dealing with threats. I also added an "Action Bank" that tracks which Actions are currently running. This in turn enables support for the question "What are you doing?" (Sometimes he answers "Talking," like Captain Obvious.)

Lastly, I added support for the question "Are you active/alive?" When determining whether he is active, Acuitas checks whether any Actions are currently running. Barring errors, the answer will *always* be yes, because checking for activity is itself an Action! 

The word "active" is thus attached to a meaning: "being able to perceive yourself doing something," where "something" can include "wondering whether you are active." In Acuitas' case, I think of "alive" as meaning "capable of being active," so "I am alive" can be inferred from "I am active." This grounds an important goal-related term by associating it with an aspect of the program's function.
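The self-referential activity check can be captured in a few lines (an illustrative sketch with invented names):

```python
class ActionBank:
    """Tracks which Actions are currently running."""
    def __init__(self):
        self.running = set()

    def start(self, name):
        self.running.add(name)

    def finish(self, name):
        self.running.discard(name)

    def am_i_active(self):
        # Checking for activity is itself an Action, so barring
        # errors the answer is always yes.
        self.start("checking_activity")
        try:
            return len(self.running) > 0
        finally:
            self.finish("checking_activity")
```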

Until the next cycle,

Jenny


Monday, June 8, 2020

Hissing Silence Shell 3D Print

Since January, I've been working on my first foray into some 3D print design. I wanted to recreate the “Hissing Silence Shell” ghost drone design from Destiny 2. Though I had the 3D files from the game's mobile app to work with via http://www.destinystlgenerator.com/, a massive amount of conversion was still required to make them into working printable objects. This gave me a great opportunity to learn how to use my CAD program of choice, DesignSpark Mechanical.
3D printed Hissing Silence Ghost Shell

I've posted the completed design on Thingiverse: https://www.thingiverse.com/thing:4436583

A screenshot of the original HSGS from Destiny 2

I studied the landscape of free CAD programs before settling on DesignSpark Mechanical as the first one to try. My feelings about it are mixed. It's an incredibly frustrating program, but I have no idea whether the alternatives are any better. I like its overall concept of directly manipulating the geometry by pushing and pulling faces, cutting objects with other objects, creating “blends” between lines, and so forth. In practice, it often reacts to commands with “this too complicated, I can't even” and spits out an error message that tells you nothing about how to fix the problem.

Exploded 3D model in DesignSpark
Final 3D model of all the clamshell pieces, in DesignSpark
Some of my models ended up with tiny gaps between faces that DesignSpark absolutely refused to fill in; I had to fix them in MeshMixer after exporting them as STLs. Others had to be repaired in MeshMixer because they were inside-out. Either of these problems seems to prevent surfaces from turning into solids, which makes some types of manipulation more difficult. Some operations make DesignSpark bog down or even crash (rounding curved edges was especially bad in this regard). It doesn't have a real mirror tool. And though YouTube has video tutorials for the basics, there were still a lot of important things I had to figure out by blundering around.
One of the original Destiny 3D files

The initial models from the game were mostly hollow surfaces with holes in them. To get to a proper printable design, I had to …

*Slice the original objects up into reasonable pieces
*Reconstruct missing surfaces and close up all the gaps
*Replace low-poly (blocky) geometry with smooth curves
*Manually re-create surface details, since all of these turned out to be part of the texture rather than the model
*Add all of the pegs and holes so the parts would fit together

The core

By the time I started on the core, I had gotten a fair bit of DesignSpark practice under my belt, and I wanted to print a smoother sphere (the original was an approximate sphere made of triangles). So I used the model from the game as a sizing reference only, and created the core parts from scratch. Part of the back half is converted into a knob that lets you turn the LED light on and off without disassembling the ghost.

Clamshell interior

Disassembled core

The final product has 34 distinct pieces that interlock, press, or snap-fit together. It took a long time to print, but it's a beauty.


Next I need to take this new CAD knowledge and do some work on Atronach. The ghost is more or less an eyeball, so it might serve as a helpful starting point.
Until the next cycle,
Jenny