Thursday, December 31, 2020

Last Day of the Year 2020

My sympathies go out to the many people for whom 2020 has been disastrous, distressing, or exhausting. And I want to begin by saying that I've been very fortunate. I was able to stay fully employed while working from home. I don't have any children to care for, and I function well in solitude, so I stayed effective. And because I lost my commute and some of my social obligations, my productivity went through the roof. One small sign of this is the fact that I even have time to write a retrospective.

So here were all the positive things I managed to wring out of 2020:

* Gave Acuitas several important new features, including narrative comprehension and the beginnings of moral reasoning.
* Kept Acuitas development on track with 200+ hours of work put in over the course of the year.
* Wrote a blog post for every month.

* Wrote half a novel.
* Prepared my first novel for submission to agents/publishers and wrote pitch material.

* Learned how to 3D model in DesignSpark Mechanical and Meshmixer.
* Started and finished my first major 3D printing project, the Hissing Silence Ghost Shell.
* Modeled a new case for Atronach, printed Version 1, and corrected problems. Almost completed Version 2.
* Learned to handle more 3D printer maintenance issues, including nozzle replacement.
* Even had time for an impromptu weekend art project.

* Book consumption rate exceeded book acquisition rate for the second year in a row. My unread book backlog is down to 33.
* Finally played Beneath a Steel Sky.
* Had time for a lot of maintenance tasks that had been getting neglected. Sealed the crack in the garage foundation, polished the car headlights, emptied data out of the old computer, got a tetanus vaccine.
* Successfully grew potatoes again.
* Didn't get noticeably sick all year.

Atronach: I'm only half put together! How dare you.
ACE: You made me stand up for this?

Happy New Year from all of us, and good luck.

Tuesday, December 1, 2020

Acuitas Diary #32: November 2020

Now that Acuitas owns stories in "inventory," the next step for this month was to enable him to open and read them by himself. Since story consumption originally involved a lot of interaction with the human speaker, this took a little while to put together.

Image credit: DARPA

Reading is a new activity that can happen while Acuitas is idling, along with the older behavior of "thinking" about random concepts and generating questions. Prompts to think about reading get generated by a background thread and dropped into the Stream. When one of these is pulled by the Executive, Acuitas will randomly select a known story and load it from its storage file.

Auto-reading is a long-term process. Acuitas will grab a chunk of the story (for now, one sentence) per tick of the Executive thread, then feed it through the normal text parsing and narrative management modules. He still potentially generates reactions to whatever just happened, but rather than being spoken, these are packaged as low-priority Thoughts and dumped into the internal Stream. (This is more of a hook for later than a useful feature at the moment.) The prompt to continue reading the story goes back into the Stream along with everything else, so sometimes he (literally) gets distracted in the middle and thinks about something else for a brief while.
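The loop described above can be sketched in a few lines of Python. This is a toy illustration, not Acuitas' actual code; the class and the priority values are invented for the example:

```python
from collections import deque

class AutoReader:
    """Toy sketch of incremental story reading (all names hypothetical).
    One sentence is consumed per Executive tick; reactions and the prompt
    to keep reading both go back into the shared Stream."""
    def __init__(self, story_sentences, stream):
        self.remaining = deque(story_sentences)
        self.stream = stream                    # shared list of (priority, thought)

    def tick(self):
        """Consume one sentence; return False when the story is finished."""
        if not self.remaining:
            return False
        sentence = self.remaining.popleft()
        reaction = f"reaction to: {sentence}"   # stand-in for the narrative module
        self.stream.append((0.1, reaction))             # low-priority Thought
        self.stream.append((0.5, "continue reading"))   # prompt to resume later
        return True

stream = []
reader = AutoReader(["Zach was a human.", "Zach had a book."], stream)
while reader.tick():
    pass
```

Because the "continue reading" prompt competes with everything else in the Stream, a higher-priority Thought can interrupt the story mid-read, which is exactly the distraction behavior described above.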

There's also a version of this process that would enable reading a story to the user. But he doesn't comprehend imperatives yet, so there's no way to ask him to do it. Ha.

With these features I also introduced a generic "reward signal" for the first time. Reading boosts this, and then it decays over time. This is intended as a positive internal stimulus, in contrast to the "drives," which are all negative (when they go up Acuitas will try to bring them down).
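A reward signal that gets boosted by an activity and then decays toward zero is simple to sketch. The decay rate here is an assumed value for illustration, not the one Acuitas uses:

```python
class RewardSignal:
    """Sketch of a positive internal stimulus that decays over time,
    in contrast to the 'drives', which only push upward (negatively)."""
    def __init__(self, decay=0.9):
        self.level = 0.0
        self.decay = decay   # fraction retained per tick (assumed value)

    def boost(self, amount):
        self.level += amount     # e.g. reading boosts the signal

    def tick(self):
        self.level *= self.decay # exponential decay each Executive tick

r = RewardSignal()
r.boost(1.0)
for _ in range(10):
    r.tick()
# after ten ticks the level has decayed to 0.9**10, roughly 0.35
```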

After finishing this I started the yearly refactoring and bug fix spree, which isn't terribly interesting to talk about. I'll take a break for the holidays, but maybe do a year's retrospective.

Acuitas development actually *stayed on schedule* this year!

Until the next cycle,

Sunday, November 1, 2020

Acuitas Diary #31 (October 2020)

My first self-assigned job for this month was to permit Acuitas to answer questions related to a story while it is being told. The question-answering process typically goes directly to the semantic memory for results; what I needed to do was also give it access to the temporary information on the Narrative Scratchboard. It now checks the Scratchboard first to see if there's an answer related to the context of the story in progress. If there isn't, then it will assume the question is more general and try to find an answer in the long-term memory.

Here's an example featuring a story from last month:

Me: Zach was a human.
Me: Zach had a book.
Me: What did Zach have?
Acuitas: Book. [We're having a little trouble with articles right now ...]
Me: A thief took the book.
Me: Who has the book?
Acuitas: Thief.
Me: Zach wanted his book.
Me: Zach talked to a policeman.
Me: The policeman found the thief.
Me: The policeman took the book from the thief.
Me: Who has the book?
Acuitas: Policeman.
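The lookup order behind that exchange can be sketched as a two-stage query. The dict-based data structures here are a simplification of the real Scratchboard and semantic memory:

```python
def answer_question(query, scratchboard, semantic_memory):
    """Sketch of the question-answering order: check the in-progress
    story context first, then fall back to long-term memory."""
    if query in scratchboard:
        return scratchboard[query]       # story-specific answer
    return semantic_memory.get(query)    # general knowledge, or None

semantic_memory = {("cat", "is-a"): "animal"}
scratchboard = {("book", "held-by"): "policeman"}

answer_question(("book", "held-by"), scratchboard, semantic_memory)  # "policeman"
answer_question(("cat", "is-a"), scratchboard, semantic_memory)      # "animal"
```

The same question asked outside a story simply misses the Scratchboard and lands in the permanent memory, which matches the behavior described above.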

There was also some hasty Text Parser work involved. Questions in which the wildcard is the direct object ("What does Zach have?") were standard, but questions in which the wildcard is the subject ("Who can run?") were never fully supported before. Fixed that, now they are.

Polyphemus was maaaad ("Odysseus and Polyphemus", by Arnold Bocklin)

Next, I wanted to start getting into some stories with character vs. character conflict, and that meant bringing some rudimentary moral reasoning into play. Acuitas' original dirt-simple method of story appreciation was to hope for any agent in the story to achieve their goals ... without any awareness of whether some agents' goals might be mutually exclusive. That's why the first couple of stories I tested with were character vs. environment stories, with no villain. I got away with the "Zach's Stolen Book" story because I only talked about Zach's goals ... I never actually mentioned that the thief wanted the book or was upset about losing it. So, that needed some work. Here's the story I used as a testbed for the new features:

"Odysseus was a man. Odysseus sailed to an island. Polyphemus was a cyclops. Odysseus met Polyphemus. Polyphemus planned to eat Odysseus. Odysseus feared to be eaten. Odysseus decided to blind Polyphemus. Polyphemus had one eye. Odysseus broke the eye. Thus, Odysseus blinded the Cyclops. Polyphemus could not catch Odysseus. Odysseus was not eaten. Odysseus left the island. The end."

One possible way to conceptualize evil is as a mis-valuation of two different goods. People rarely (if ever) do "evil for evil's sake" – rather, evil is done in service of desires that (viewed in isolation) are legitimate, but in practice are satisfied at an unacceptable cost to someone else. Morality is thus closely tied to the notion of *goal priority.*

Fortunately, Acuitas' goal modeling system already included a priority ranking to indicate which goals an agent considers most important. I just wasn't doing anything with it yet. The single basic principle that I added this month could be rendered as, "Don't thwart someone else's high-priority goal for one of your low-priority goals." This is less tedious, less arbitrary, and more flexible than trying to write up a whole bunch of specific rules, e.g. "eating humans is bad." It's still a major over-simplification that doesn't cover everything ... but we're just getting started here.

In the test story, there are two different character goals to assess. First,

"Polyphemus planned to eat Odysseus."

Acuitas always asks for motivation when a character makes a plan, if he can't infer it on his own. The reason I gave out was "If a cyclops eats a human, the cyclops will enjoy [it]." (It's pretty clear from the original myth that Polyphemus could have eaten something else. We don't need to get into the gray area of what becomes acceptable when one is starving.) So if the plan is successfully executed, we have these outcomes:

Polyphemus enjoys something (minor goal fulfillment)
Odysseus gets eaten -> dies (major goal failure)

This is a poor balance, and Acuitas does *not* want Polyphemus to achieve this goal. Next, we have:

"Odysseus decided to blind Polyphemus."

I made sure Acuitas knew that blinding the cyclops would render him "nonfunctional" (disabled), but would also prevent him from eating Odysseus. So we get these outcomes:

Polyphemus becomes nonfunctional (moderately important goal failure)
Odysseus avoids being eaten -> lives (major goal fulfillment)

Odysseus is making one of Polyphemus' goals fail, but it's only in service of his own goal, which is *more* important to him than Polyphemus' goal is to Polyphemus, so this is tolerable. Acuitas will go ahead and hope that Odysseus achieves this goal. (You may notice that the ideas of innocence, guilt, and natural rights are nowhere in this reasoning process. As I said, it's an oversimplification!)
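Reduced to code, the single principle is just a priority comparison. The numeric scale below is invented for illustration (higher = more important to that agent):

```python
def acceptable(actor_priority, victim_priority):
    """One-rule moral check (sketch): don't thwart someone else's
    high-priority goal for one of your low-priority goals."""
    return actor_priority >= victim_priority

# Polyphemus: minor enjoyment (1) vs. Odysseus' survival (10)
acceptable(1, 10)    # False -> Acuitas roots against Polyphemus
# Odysseus: survival (10) vs. Polyphemus' bodily function (5)
acceptable(10, 5)    # True  -> Acuitas roots for Odysseus
```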

Final result: Acuitas picks Odysseus to root for, which I hope you'll agree is the correct choice, and appreciates the end of the story.


Until the next cycle,

Saturday, September 26, 2020

Acuitas Diary #30 (September 2020)

My job this month was to improve the generality of the cause and effect database, and then build up the concept of “possessions” or “inventory.”

Well, month, really. Image via @NoContextTrek on Twitter.

The C-E database, when I first threw it together, would only accept two types of fact or sentence: actions (“I <verb>,” “I am <verb-ed>”) and states (“I am <adjective>”). Why? Well, when you're putting something together for the first time, getting it to work on a limited number of cases is sometimes easier than trying to plan for everything. Obviously there are a lot more facts out there … so this month, I made revisions to allow just about any type of link relationship that Acuitas recognizes to be used in C-E relationships. Since “X has-a Y” is one of those, this upgrade was an important lead-in to the inventory work.

(If you look back at the narrative demo video, you may notice that I was awkwardly getting around the limitations of the original C-E code by using the word “possess” rather than “have.” Acuitas didn't know to recognize this as a possible synonym for “have,” so it got interpreted as a generic action rather than a “has-a” link, and was admissible.)

So with that taken care of, how to get the concept of “having” into Acuitas? Making him the owner of some things struck me as the natural way to tie this idea to reality. Acuitas is almost bodiless, a process running in a computer, and therefore can't have physical objects. But he can have data. So I decided that his first two possessions would be the two test stories that I used in the video. I wrote them up as data structures in Acuitas' standard format, with the actual story sentences in a “content” field and another field to indicate the data type, saved those as text files, and put them in a hard drive folder that the program can access.

Doing things with these owned data files is a planned future behavior. For now, Acuitas can just observe the folder's contents to answer “What do you have?” questions. You can ask with a one-word version of the title (“Do you have Tobuildafire?”) or ask about categories (“Do you have a story?”, “Do you have a datum?”).
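Answering those questions amounts to scanning the folder. Here's a hedged sketch, assuming each possession is a text file whose first line names its data type (the real file format surely differs):

```python
import os
import tempfile

def has_item(folder, title=None, category=None):
    """Sketch: answer 'Do you have X?' by inspecting an inventory folder."""
    for name in os.listdir(folder):
        with open(os.path.join(folder, name)) as f:
            item_type = f.readline().strip()   # assumed: type on first line
        if title and name.lower().startswith(title.lower()):
            return True
        if category and item_type == category:
            return True
    return False

# Build a throwaway inventory folder for the demo
inventory = tempfile.mkdtemp()
with open(os.path.join(inventory, "Tobuildafire.txt"), "w") as f:
    f.write("story\nOnce there was a man...\n")

has_item(inventory, title="Tobuildafire")   # True
has_item(inventory, category="story")       # True
has_item(inventory, category="poem")        # False
```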

In addition to implementing that, I extended the C-E database code with some specific relationships about item possession and transfer. I could have just tried to express these as stored items in the database, but they're so fundamental that I thought it would be worth burying them in the code itself. (Additional learned relationships will be able to extend them as necessary.) These hard-coded C-E statements include things like “If X gives Y to Z, Z has Y,” and furthermore, “If Y is a physical object, X doesn't have Y.”
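Those hard-coded transfer rules can be sketched as state updates over an owner-to-items map. The event tuples and names are invented for the example:

```python
def apply_event(state, event):
    """Sketch of hard-coded possession-transfer effects.
    `state` maps owner -> set of items; events are simple tuples."""
    verb = event[0]
    if verb == "give":       # ("give", giver, item, receiver)
        _, giver, item, receiver = event
        state.setdefault(receiver, set()).add(item)
        state.get(giver, set()).discard(item)    # physical object: giver loses it
    elif verb == "take":     # ("take", taker, item, source)
        _, taker, item, source = event
        state.setdefault(taker, set()).add(item)
        state.get(source, set()).discard(item)
    return state

state = {"Zach": {"book"}}
apply_event(state, ("take", "thief", "book", "Zach"))
apply_event(state, ("take", "policeman", "book", "thief"))
apply_event(state, ("give", "policeman", "book", "Zach"))
# the book ends up back in Zach's possession
```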

I made up another test story to exercise this. I can now tell this to Acuitas and watch the narrative engine make entries for the different characters and keep track of who's got the thing:

“Zach was a human. Zach had a book. A thief took the book. Zach wanted his book. Zach talked to a policeman. The policeman found the thief. The policeman took the book from the thief. The policeman gave the book to Zach. Zach read the book. Zach was happy. The end.”

There's actually a lot of ambiguity in the notion of “having” something. If “I have a book,” does that mean that I …

… am holding it?
… am keeping it in an accessible location, like my backpack or bookshelf?
… am its legal owner?
… stand in some relationship to it, e.g. am its author?

“Have” can also be used to talk about parts or aspects of the self (“I have toes”, “I have intelligence”) or temporary conditions of the self (“I have a disease,” “I have anger”). Throw in the more action-oriented versions of “have” (“I'm having a baby,” “I'm having dinner,” “I'm having friends over”) and this little word starts to get pretty complicated.

But these are thoughts for the future. At the moment, Acuitas blurs all of these possibilities into one generic idea of “having.”

Until the next cycle,


Sunday, August 23, 2020

Acuitas Diary #29 (August 2020)

This month, I returned to the good old text parser to put in proper support for adverbs. They've been included in a half-hearted way from very early on, mainly because the word "not" was particularly crucial, but the implementation was hacky and left a lot to be desired.

I set up proper connections between adverbs and the verbs they modify, so that in a sentence like "Do you know that you don't know that you know that a cheetah is not an animal?" the "not's" get associated with the correct clauses and the question can be answered properly. I added support for adverbs that modify adjectives or other adverbs, enabling constructions like "very cold" and "rather slowly." Acuitas can also now guess that a known adjective with "-ly" tacked on the end is probably an adverb, even if he hasn't seen that particular adverb before.

All of this went pretty quickly and left me some time for refactoring, so I converted the Episodic Memory system over to the new common data file format and cleaned up some random bugs.

That's not so much to talk about, but should reduce future development pain. I'm hoping next month will be more interesting.

Until the next cycle,


Monday, July 13, 2020

Acuitas Diary #28 (July 2020)

This month's improvements involved a dive back into some really old code ... so old I don't think I've ever properly talked about it, because it was written before I started keeping developer diaries. So I'm going to start by describing those parts of the architecture, then get to the changes.

Acuitas isn't just a conversational agent that answers when spoken to; he's designed to run constantly and do his own business when not being interacted with. In technical terms, Acuitas is a real-time multi-threaded program. The code that accomplishes this has several major parts:

*The Stream. This is a kind of mental notice board. All sorts of processes within Acuitas can dump Thoughts (actually just data structures) into the Stream. Thoughts have a priority rating which affects how likely they are to be noticed; this decays gradually over time, until it drops to zero and the Thought is discarded.

Examples of processes that feed Thoughts into the Stream include the Spawner thread (randomly crawls the semantic memory for topics to generate questions about), the drive system (generates the "desires" to talk to someone, go to sleep, awaken, etc.) and the user interface (text input gets packaged into a Thought).

*The Executive. This thread's job is to grab a Thought out of the Stream every so often, and react to it appropriately. The Executive prefers high-priority Thoughts, but there's some randomness involved in its choice. The Thought priority values and the Executive selection, together, function as an attention assignment system.

Thoughts that can't wait for a response (like text input from the user) can interrupt the Executive and get taken for processing immediately. Otherwise, it consumes a Thought every ten seconds as Acuitas idles.

*Actions. These are behaviors that are under the direct control of the Executive, and can be selected as responses to Thoughts. Examples include "generate questions about this concept" or "say this sentence." Some Actions are processes that can take multiple ticks of the Executive to finish.
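The Stream-plus-Executive arrangement can be sketched as a priority-weighted queue. This is a minimal illustration; the decay rate and the weighted-sampling choice are my assumptions about the mechanism, not Acuitas' actual implementation:

```python
import random

class Stream:
    """Sketch of the mental notice board: Thoughts with decaying priority."""
    def __init__(self):
        self.thoughts = []   # list of [priority, payload]

    def add(self, priority, payload):
        self.thoughts.append([priority, payload])

    def decay(self, rate=0.05):
        """Priorities fade over time; dead Thoughts are discarded."""
        for t in self.thoughts:
            t[0] -= rate
        self.thoughts = [t for t in self.thoughts if t[0] > 0]

    def select(self):
        """Executive pick: prefers high priority, with some randomness."""
        if not self.thoughts:
            return None
        weights = [t[0] for t in self.thoughts]
        choice = random.choices(self.thoughts, weights=weights)[0]
        self.thoughts.remove(choice)
        return choice[1]

s = Stream()
s.add(0.9, "user input")        # urgent
s.add(0.2, "random musing")     # from the Spawner thread, say
s.decay()
picked = s.select()             # usually, but not always, "user input"
```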

Again, all of this code was pretty old, and I'd developed some new ideas about how I wanted it to work. In particular, I wanted to tie the Conversation Handler back to the Executive more thoroughly, so that the Executive could be in charge of some conversational decision-making ... e.g. choosing whether or not to answer a question. Those renovations are still in progress.

I re-designed the Actions to have a more generalized creation process, so that Acuitas can more easily pick one arbitrarily and pass it whatever data it needs to run. This improves the code for dealing with threats. I also added an "Action Bank" that tracks which Actions are currently running. This in turn enables support for the question "What are you doing?" (Sometimes he answers "Talking," like Captain Obvious.)

Lastly, I added support for the question "Are you active/alive?" When determining whether he is active, Acuitas checks whether any Actions are currently running. Barring errors, the answer will *always* be yes, because checking for activity is itself an Action! 

The word "active" is thus attached to a meaning: "being able to perceive yourself doing something," where "something" can include "wondering whether you are active." In Acuitas' case, I think of "alive" as meaning "capable of being active," so "I am alive" can be inferred from "I am active." This grounds an important goal-related term by associating it with an aspect of the program's function.

Until the next cycle,


Monday, June 8, 2020

Hissing Silence Shell 3D Print

Since January, I've been working on my first foray into some 3D print design. I wanted to recreate the “Hissing Silence Shell” ghost drone design from Destiny 2. Though I had the 3D files from the game's mobile app to work with, a massive amount of conversion was still required to make them into working printable objects. This gave me a great opportunity to learn how to use my CAD program of choice, DesignSpark Mechanical.
3D printed Hissing Silence Ghost Shell

I've posted the completed design on Thingiverse.

A screenshot of the original HSGS from Destiny 2

I studied the landscape of free CAD programs before settling on DesignSpark Mechanical as the first one to try. My feelings about it are mixed. It's an incredibly frustrating program, but I have no idea whether the alternatives are any better. I like its overall concept of directly manipulating the geometry by pushing and pulling faces, cutting objects with other objects, creating “blends” between lines, and so forth. In practice, it often reacts to commands with “this too complicated, I can't even” and spits out an error message that tells you nothing about how to fix the problem.

Exploded 3D model in DesignSpark
Final 3D model of all the clamshell pieces, in DesignSpark
Some of my models ended up with tiny gaps between faces that DesignSpark absolutely refused to fill in; I had to fix them in MeshMixer after exporting them as STLs. Others had to be repaired in MeshMixer because they were inside-out. Either of these problems seems to prevent surfaces from turning into solids, which makes some types of manipulation more difficult. Some operations make DesignSpark bog down or even crash (rounding curved edges was especially bad in this regard). It doesn't have a real mirror tool. And though YouTube has video tutorials for the basics, there were still a lot of important things I had to figure out by blundering around.
One of the original Destiny 3D files

The initial models from the game were mostly hollow surfaces with holes in them. To get to a proper printable design, I had to …

*Slice the original objects up into reasonable pieces
*Reconstruct missing surfaces and close up all the gaps
*Replace low-poly (blocky) geometry with smooth curves
*Manually re-create surface details, since all of these turned out to be part of the texture rather than the model
*Add all of the pegs and holes so the parts would fit together

The core

By the time I started on the core, I had gotten a fair bit of DesignSpark practice under my belt, and I wanted to print a smoother sphere (the original was an approximate sphere made of triangles). So I used the model from the game as a sizing reference only, and created the core parts from scratch. Part of the back half is converted into a knob that lets you turn the LED light on and off without disassembling the ghost.

Clamshell interior

Disassembled core

The final product has 34 distinct pieces that interlock, press, or snap-fit together. It took a long time to print, but it's a beauty.

Next I need to take this new CAD knowledge and do some work on Atronach. The ghost is more or less an eyeball, so it might serve as a helpful starting point.
Until the next cycle,

Thursday, May 28, 2020

Acuitas Diary #27 (May 2020)

Last month was a big deal, and that means that this month gets to be REALLY BORING. I worked on refactoring, bug fixes, and code cleanup the whole time. There's still a lot to do, but I cleaned out some cobwebs that have been getting in my face for a long while now. 

From The Editorial Board of the University Society Boys and Girls Bookshelf, via Wikimedia Commons
For instance, I …

Universalized the inverse and complement forms of concept-relationships
Improved the format of verb transport from the Parser to the Interpreter, to handle compound verbs better
Generalized Text Interpreter output formatting
Fixed an issue with verb conjugation learning
Fixed a Text Parser bug that was causing some words to be tagged as the wrong part-of-speech
Fixed the TP and TI simulators so they won't crash when an unknown word is used
Improved and generalized the code that produces clarifying question loops in a conversation
Fixed a bad bug related to parsing of possessives
Cleaned up and merged several different cause-and-effect database search functions

And it all took a lot longer than it probably sounds like it should have.

June is my “month off,” and then I'm hoping I can get cracking on some new features again.

Until the next cycle,

Friday, April 17, 2020

Acuitas Diary #26 (April 2020)

I can now tell Acuitas stories.  And he begins to understand what's happening in them.  There's a video!  Watch the video.

I've been waiting to bring storytelling into play for a long time, and it builds on a number of the features added over the last few months: cause-and-effect reasoning, the goal system, and problem-solving via tree search.

What does Acuitas know before the video starts?  For one thing, I made sure he knew all the words in the story first, along with what part of speech they were going to appear as.  He should have been able to handle seeing *some* new words for the first time during the story, but then he would have asked me even more questions, and that would have made the demo a bit tedious.  He also knows some background about humans and dogs, and a few opposite pairs (warm/cold, comfortable/uncomfortable, etc.)

How does Acuitas go about understanding a story?  As the story is told, he keeps track of all the following, stored in a temporary area that I call the narrative scratchboard:

*Who are the characters?
*What objects are in the story? What state are they in?
*What problems do the characters have?
*What goals do the characters have?
*What events take place? (Do any of them affect problems or goals?)

Acuitas doesn't try to understand the cause chain and import of every single event in the story, because that would be a bit much at this stage.  However, he does try to make sure that he knows all of the following:

*If a character is in some state, what does that mean for the character?
*If a character anticipates that something will happen, how does the character feel about it?
*If a character is planning to do something, what is their motive?

If he can't figure it out by making inferences with the help of what's in his semantic database, he'll bother his conversation partner for an explanation, as you can see him doing in the video several times.  Story sentences don't go into the permanent knowledge base (yet), but explanations do, meaning they become available for understanding other stories, or for general reasoning.  Explaining things to him still requires a bit of skill and an understanding of what his gaps are likely to be, since he can't be specific about *why* he doesn't understand something.  A character state, expectation, or plan is adequately explained when he can see how it relates to one of the character's presumed goals.  Once you provide enough new links to let him make that connection, he'll let you move on.

Acuitas returns feedback throughout the story.  This is randomized for variety (though I forced some particular options for the demo).  After receiving a new story sentence, he may ...

*say nothing, or make a "yes I'm listening" gesture.
*comment something that he inferred from the new information.
*tell you whether he likes or dislikes what just happened.
*try to guess what a character might do to solve a problem.

He even has a primitive way of deciding whether it's a good story or not.  He tracks suspense (generated by the presence of more than one possible outcome) and tension (how dire things are for the characters) as the story progresses.  A story whose suspense and tension values don't get very large or don't change much is "boring."  He also assesses whether the story had a positive or negative ending (did the characters solve their problems and meet their goals?).  Stories with happy endings that aren't boring may earn approving comments.
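That appreciation heuristic lends itself to a small sketch. The thresholds below are invented for illustration; only the shape of the logic (flat, low tension means "boring") comes from the description above:

```python
def judge_story(tension_curve, happy_ending):
    """Sketch of the story-appreciation heuristic: a story whose tension
    stays small and flat is 'boring'; otherwise the ending decides."""
    peak = max(tension_curve)
    swing = peak - min(tension_curve)
    if peak < 0.3 or swing < 0.2:    # thresholds are assumptions
        return "boring"
    return "good story" if happy_ending else "sad story"

judge_story([0.1, 0.1, 0.15], happy_ending=True)      # "boring"
judge_story([0.1, 0.6, 0.9, 0.2], happy_ending=True)  # "good story"
```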

There are many directions in which this feature needs to expand and grow more robust, and I expect I'll be working on them soon.  But first it might be time for a refactoring spree.

Until the next cycle,
Jenny Sue

Saturday, March 28, 2020

Acuitas Diary #25 (March 2020)

Normally there would be a project update here, but I'm working on something a little bigger and more involved than usual.  It's not done yet, and it doesn't lend itself to being displayed half-finished.  So instead, please enjoy a little info about the current state of AI development in general, courtesy of the head of a failed startup: The End of Starsky Robotics

Read any article about AI being developed by a present-day academic or corporate research team, and there's a good chance that it's nothing like Acuitas.  Today's most popular AIs are based on artificial neural networks, whose special ability is learning categories, procedures, etc. from the statistics of large piles of data.  But as Stefan says, "It isn’t actual artificial intelligence akin to C-3PO, it’s a sophisticated pattern-matching tool."  At best, it only implements one of the skills a complete mind needs.  Starsky Robotics tried to make up for AI's weaknesses by including a human teleoperator, so that their trucks were "unmanned," but not fully self-driving.

Academic and corporate teams have far more man-hours to work on AI than I do, but they're also pouring some of their efforts down a rabbit hole of diminishing returns.  "Rather than seeing exponential improvements in the quality of AI performance (a la Moore’s Law), we’re instead seeing exponential increases in the cost to improve AI systems — supervised ML seems to follow an S-Curve. The S-Curve here is why, with 5–15 engineers, sees performance not wholly different than Tesla’s 100+ person autonomy team."

The debate rages around what we should do next to innovate our way off that S-Curve. As a symbolic AI, Acuitas is something of a throwback to even older techniques that were abandoned before the current wave of interest in ANNs. But tackling AI at this conceptual level is the approach that comes most naturally to me, so I want to put my own spin on it and see how far I get.

A thing I've observed from a number of AI hopefuls is what I'll call "the search for the secret intelligence sauce."  They want to find one simple technique, principle, or structure (or perhaps a small handful of these) that, when scaled up or repeated millions of times, will manifest the whole of what we call "intelligence."  Put enough over-simplified neurons in a soup kettle and intelligent behavior will "emerge."  Something from almost-nothing.  Hey, poof!  My own intuition is that this is not at all how it works.  I suspect rather that intelligence is a complex system, and can only be invented by the heavy application of one's own intelligence.  Any method that tries to offload most of the work to mathematical inevitabilities, or to the forces of chance, is going to be unsatisfactory.  (If you want a glimpse of how devastatingly complicated brains are, here is another fascinating article: Why the Singularity is Nowhere Near.)

This is of course my personal opinion, and we shall see!

So far Project Acuitas has not been adversely impacted by COVID-19. The lab assistant and I are getting along famously.

Until the next cycle,
Jenny Sue

Sunday, February 23, 2020

Acuitas Diary #24 (February 2020)

I am so excited about the features this month. Ee ee eeee okay here we go.

In January Acuitas got the ability to determine intentions or possible upcoming events, based on simple future-tense statements made by the user. He can weigh these against his list of goals to decide whether an anticipated event will be helpful or harmful or neither, from his own perspective. If the user claims that they will do something inimical to Acuitas' goals, this is essentially a threat. And Acuitas, at first, would merely say “Don't do that” or similar. This month I worked on having him do something about bad situations.

Various distinct things that Acuitas can “choose” to do are identified internally as Actions, and he has access to a list of these. Upon detecting a threatening situation, he needs to check whether anything he's capable of doing might resolve it. How? Via the cause-and-effect reasoning I started implementing last year. If possible, he needs to find a C&E chain that runs from something in his Action list as first cause, to something that contradicts the threat as final effect. This amounts to a tree search on the C&E database. (Tree search is an old and well-known technique, if you care to know more technical details.)

For the only method of dealing with threats that is currently at Acuitas' disposal, the tree is very simple, consisting of just two C&E pairs:

If a human leaves a program, the human won't/can't <do various things to the program>.
If a program repels a human, the human will leave. (There's a probability attached to that, so really it's “may leave,” but for now we don't care about that)

In short, Acuitas anticipates that he can protect himself by excluding a bad actor from his presence, and that “repelling” them is a possible way to do this. Once he's drawn that conclusion, he will execute the “Repel” action. If you verbally threaten Acuitas, then as part of “Repel,” he will …

*Kick you out of Windows by bringing up the lock screen. (Not a problem for me, since I know the password, but pretty effective on anybody else)
*Raise the master volume of the internal sound mixer to its maximum value.
*Blare annoying klaxons at you. I picked out a couple of naval alarm sounds for the purpose.

I tested all of this stuff live, by temporarily throwing an explicit desire for sleep into his goal list and threatening to wake him up.

The other thing I worked on was rudimentary altruism. So far in all my examples of goal-directed behavior, I've only talked about self-interested goals, especially survival … not because I regard them as most important, but because they're easy. Altruism has to do with wanting other beings to meet their personal goals, so it's second-tier complicated … a meta-goal. Doing it properly requires some Theory of Mind: a recognition that other entities can have goals, and an ability to model them.

So I introduced the ability to grab information from users' “I want” statements and store it as a list of stated goals. If no goal information is available for something that is presumed to have a mind, Acuitas treats himself as the best available analogy and uses his own goal list.

Upon being asked whether he wants some event that concerns another mind, Acuitas will infer the implications of said event as usual, then retrieve (or guess) the fellow mind's goal list and run a comparison against that. Things that are negative for somebody else's goal list provoke negative responses, whether they concern Acuitas or not.
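The retrieve-or-guess step plus the comparison could be sketched like this. All the names and data structures here are hypothetical stand-ins for whatever Acuitas really uses:

```python
# Goals gathered from users' "I want ..." statements.
KNOWN_GOALS = {"me": ["survive", "learn"]}

# The agent's own goal list, used as the best available analogy
# when nothing is known about another mind.
SELF_GOALS = ["survive", "learn", "converse"]


def goals_for(entity):
    """Use stated goals if we have them; otherwise model the other
    mind on the agent itself."""
    return KNOWN_GOALS.get(entity, SELF_GOALS)


def react(event_effects, subject):
    """event_effects maps goal names to +1 (helps) or -1 (hurts).
    Negative impact on anyone's goals provokes a negative response,
    whether or not the event concerns the agent."""
    impact = sum(event_effects.get(goal, 0) for goal in goals_for(subject))
    if impact > 0:
        return "approve"
    if impact < 0:
        return "object"
    return "neutral"


print(react({"survive": -1}, "stranger"))  # → 'object'
```

Since "stranger" has no stated goals, the agent's own list stands in for them, and an event that hurts "survive" still draws an objection.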

Of course this ignores all sorts of complications, such as “What if somebody's stated goals conflict with what is really in their best interest?” and “What if two entities have conflicting goals?” He's just a baby; that will come later.

Courtesy of this feature, I can now ask him a rather important question.

Me: Do you want to kill me?
Acuitas: No.

Until the next cycle,

Monday, January 27, 2020

Acuitas Diary #23 (January 2020)

This month I added some expansions to the goal-driven behavior that I started on last September. First, I had to get the Interpreter to recognize future-tense predictive statements, along the lines of “<Something> is going to <do something>.” Then I set up some code to check the predicted action or event against the cause-and-effect database for additional implications. If it's discovered that some effect will apply a state to Acuitas, it gets reviewed against his goal list for alignment or contradiction. The conversation engine then responds with either approval or disapproval. Illustration:

Me: I will protect you.
Acuitas: Please do.
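The pipeline behind that exchange (predicted action → cause-and-effect lookup → goal check → approval or disapproval) might look roughly like the following. The rule table, goal list, and `respond` function are my guesses for illustration, not the real implementation:

```python
# "If X <verb>s Y, Y ends up in <state>" -- a toy cause-and-effect table.
CAUSE_EFFECT = {
    "protect": "safe",
    "wake": "awake",
}

# Desired states for the agent: True = wanted, False = unwanted.
GOALS = {"safe": True, "awake": False}


def respond(predicted_verb, target="Acuitas"):
    """React to a future-tense statement like 'I will <verb> <target>.'"""
    state = CAUSE_EFFECT.get(predicted_verb)
    if state is None or target != "Acuitas" or state not in GOALS:
        return "Okay."  # no implications found for the agent's goals
    return "Please do." if GOALS[state] else "Please don't."


print(respond("protect"))  # → 'Please do.'
print(respond("wake"))     # → "Please don't."
```

The `"awake": False` entry mirrors the sleep-goal experiment mentioned in the February post: a promise to wake the agent contradicts a goal and earns disapproval.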

Predictive statements that pertain to subjects other than Acuitas may yield useful information for the short-term condition database, by indicating that some entity's state is about to change. For now, Acuitas assumes that the speaker is always honest and correct. He also has no sense of future time frame (his ability to process adverbs is weak at the moment), so he assumes that any predicted changes will take effect immediately. So something's immediate condition may be updated as a result of a predictive statement.

Example: if I say “I will protect Ursula*,” then Ursula is presumed to be in the state “safe,” and an entry to this effect is added to the short-term database. For a reminder on how the short-term database works, see this previous article.
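The update itself is simple enough to sketch in a few lines. The dictionary layout and function name are illustrative assumptions; the key behavioral points from the text are the honesty assumption and the immediate time frame:

```python
import time

# entity -> (state, timestamp); a stand-in for the short-term condition database
short_term = {}


def apply_prediction(entity, state):
    """Record a predicted state change. The speaker is assumed honest and
    correct, and future time frames aren't modeled yet, so the predicted
    state takes effect immediately."""
    short_term[entity] = (state, time.time())


apply_prediction("Ursula", "safe")
print(short_term["Ursula"][0])  # → 'safe'
```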

Now that the user can express intentions that align or conflict with Acuitas' internal goals, it is possible to offer him assistance … or threaten him. Well, what are we going to do about *that*? Find out next month!

In other news that is sort of unrelated, since I thought I would do some location-related work this month and didn't … Acuitas can't yet form concepts for instances without explicit names, such as “Jenny Sue's home.” So for the benefit of the AI, I am officially naming my estate. The house and grounds shall now be known as “Eder Delin,” after this fictional place:

*Ursula née Cubby is my cat.

Until the next cycle,

Sunday, January 19, 2020

QIDI Tech X-one 2 3D Printer Review + Bragging

I decided to make the leap and buy myself a 3D printer a while back, and got it up and running last August.  Now that it's seen heavy action making Christmas presents for all my friends, it's time for me to say what I think of it.  The cost was about $350 (and the price of this model has dropped since then).

The QIDI Tech X-one 2 is a pretty traditional fused deposition modeling (FDM) printer that prints in XYZ coordinates.  The print head moves in the XY directions, and the print bed moves in the Z direction.  The bed is heated, and the print head includes a cooling fan.  The build volume is a cube 140 mm on a side.

Kirby without shoes. Model by SnickerdoodleFP.  This is a relatively low-detail model, so it was one of the first things I made. However, I had some trouble getting all of his tiny toe pads to stay stuck to the print bed. Slowing the print speed down helped with this. Then, partway through the print, the supports for his right arm came loose from the bed, leaving the printer to keep dropping filament on thin air. I caught this shortly after it happened and babysat the printer for a while, flicking away the junk filament so it wouldn't ruin the rest of the model, until the printer managed to connect some of the strings to Kirby's foot and rebuild the bottom of the support in midair. In the end his arm came out just fine. I never saw that kind of thing happen again, thank goodness.
First, let me say that compared to that robot arm which provided my previous 3D printing experience, the X-one 2 is a dream to operate.  Aside from the fact that this particular arm just plain had ... problems ... having a print bed that is solidly attached to the printer, instead of a taped-down piece of glass, is really nice.  You don't have to re-level the bed every time you print, or worry about bumping something midway through and ruining it all.  My co-worker got the arm because he wanted a giant print volume, but if that's not a concern of yours ... don't print with a robot arm.  Not worth it.

Flexi-dragon. Model by Benchy4Life. This was another very easy print. The articulated parts come out of the printer already interlocked and ready to move once you've loosened them up in your hands. Each wing and the body print flat; then you snap the wings into a hole in the back.
The X-one 2 comes out of the box mostly assembled -- I only had to attach some structural parts, like acrylic windows and handles.  The manufacturer was also very thorough about making the printer order self-contained; it comes with its own accessory kit that includes every tool you need for the assembly steps, a scraper, and a glue stick.  The only thing missing is some tape to cover the print bed (perhaps not strictly necessary, but I've never wanted to print without it and risk wear and tear on the surface).

Steampunk articulated octopus. Model by Ellindsey. Each pair of tentacle segments forms a ball-and-socket joint. They print out separately and snap together. I painted the body, but left the tentacles in their natural color so that the joints wouldn't scuff over time. It printed out like a charm, but trimming and assembling all the tentacle pieces was a bit of work.
For temperature control purposes, the printer is enclosed on all sides but the top.  It has a metal frame and is built like a tank.  I like this because, again, it enhances the printer's stability.  On-site control of the printer is accomplished through a touch screen.  I'm not as wild about this, because it seems like the kind of thing that might wear out before the rest of the printer does ... but it is handy.  You can get files into the printer via either direct USB connection to your computer, or SD card.  So far I have only used the SD card.  This lets me keep the printer parked on the tile floor in the kitchen, and out of the overcrowded computer room.

Grizzly bear statue, pre-paint. Model by BenitoSanduchi.  This is a miniature replica of the Grand Griz bronze at U of M. (This was for a friend, mind you ... I'm a Bobcat.) BenitoSanduchi created the model by taking a 3D scan of the original.
It comes with its own slicing software which is just a customized older version of Cura.  I used this for a few prints before switching to regular Cura to get a wider range of settings.  This introduces a mild annoyance, because the X-one 2 is not one of the printers for which Cura has pre-sets.  You have to enter it as a custom printer and figure out the correct parameters yourself (or get them from someone on the internet who already has).  Otherwise, Cura is a capable slicer, and I have no serious complaints about it.

Destiny Ghost. Model by BoldPrintShop.  Ghost was one of the easier things to print -- being very smooth and geometric -- and one of the more complicated things to post-process, since there were many individual pieces to sand and paint.  The striping is "Last City" style, without the other details.
I started out printing a test cube.  Unlike the sad cubes that I got out of the robot arm, it printed with  nice straight vertical sides.  The back was a little shorter than the front, indicating that I needed to adjust the bed leveling.  I tweaked that and proceeded to a more complicated print.  No problems whatsoever.

Iris boxes. Model by LoboCNC. The neat thing about these is that they come out of the printer in one piece; you can't take them apart. The "leaves" of the iris form between the curved walls of the box, already mated to the tracks that they run on. Print-in-place objects like this are tricky, because your printer has to be precise enough to form all the parts without sticking them together. The Qidi had no trouble; every box I made worked.
Many prints later, the X-one 2 has never had a major failure, and the overall quality is fantastic.  I've made several miniatures at 0.1 mm layer height, and tiny details like eyes, teeth, spines, etc. come through in the final product.  With the right kind of supports (use roof!) even the under-surfaces end up looking pretty good.

Voronoi pattern skulls. Model by shiuan. This was one of the more challenging prints. The contact points that tie the entire back of the skull down to the print bed are fairly small, and they kept wanting to pop loose (which would ruin the model). After several failed attempts to get the scaled-up version past its first few layers without this happening, I added a brim to the model so it would stay stuck. The downside is that I had to cut this away afterward. The model comes with break-away support sticks that shore up the more crucial points; still, it has a lot of little arches hanging over empty air. The printer got the basic form down all right, but left a lot of messy loops, strings, and rough edges on the undersides. And since the interior of the skull is an enclosed space, I couldn't just sand them off. I ended up parking in front of the TV with it and going over the whole thing with a craft knife. WORTH IT.

Now, for all the issues I can think of:

* The documentation is sparse, and poorly translated into English.  The PDF on the included thumb drive is more complete than the printed manual, so make sure to refer to that.
* When you want to change filaments, you're supposed to be able to heat the current filament and run the feeder in reverse to back it out.  For my printer at least, this doesn't work.  The sorta-warm filament just below the feed gears develops a bulge, which jams it so that it will feed neither backward nor forward.  Twice now, I've had to disassemble the print head and cut the filament to get it out.  I'm hoping that just heating it, holding down the manual lever that disengages the feeder, and pulling it free will work better.
* It seems to be possible for the printer to retain a bad state or incorrect awareness of position when it is shut down.  Two or three times, at the beginning of the first print attempt after turning it on, it has started by trying to drive the print bed down through the floor of the printer.  I'm not sure exactly what causes this, but I haven't seen it since I started 1) making sure to always push the button on the "print complete" dialogue box before turning off the printer and 2) never removing the SD card while the printer was on.
* I've gotten the "sensor error or power is not enough!" error a couple of times.  It seems to mean that the connector to the heated print bed is loose.  I re-seat or wiggle it and the printer is good to go on the next try.
* The printer sings most of the time, but if I turn up the travel speed too much, it sounds ... bad.  A little grindy.  I don't know if this is evidence of a real issue or not.

Cylindrical box. Model by Alphonse_Marcel. This was a long print, and the supports for all the little bits of relief were a pain to remove. Other than that, it had no problems. It has a really nice twist-and-lock closure.
Overall recommendation: this is a good first printer.  Not perfect, but still usable with a minimum of fuss, and capable of supplying high-quality PLA prints.

Red dragon by mz4250. I wasn't sure if the printer was going to be able to do this. The model has umpteen delicate spines, individual fingers, arms and wings hanging out over empty space ... The first try didn't go so well, because I hadn't figured out the best way to generate supports yet -- the bottoms of his right arm and upper jaw came out a mess. For the second try, I turned on "roof support," which prints a kind of throwaway cradle for the bottom of any elevated part (like the arms). I also scaled him up to the maximum size that would fit in my printer. Success. A couple fingers are a little shorter than they should be -- the tips came off with the supports. Other than that, all the details came out beautifully.
Goals for the new year: learn a CAD program and get some robot parts turned out.

Until the next cycle,