Thursday, May 28, 2020

Acuitas Diary #27 (May 2020)


Last month was a big deal, and that means that this month gets to be REALLY BORING. I worked on refactoring, bug fixes, and code cleanup the whole time. There's still a lot to do, but I cleaned out some cobwebs that have been getting in my face for a long while now. 

For instance, I …

*Universalized the inverse and complement forms of concept-relationships
*Improved the format of verb transport from the Parser to the Interpreter, to handle compound verbs better
*Generalized Text Interpreter output formatting
*Fixed an issue with verb conjugation learning
*Fixed a Text Parser bug that was causing some words to be tagged as the wrong part-of-speech
*Fixed the TP and TI simulators so they won't crash when an unknown word is used
*Improved and generalized the code that produces clarifying question loops in a conversation
*Fixed a bad bug related to parsing of possessives
*Cleaned up and merged several different cause-and-effect database search functions

And it all took a lot longer than it probably sounds like it should have.

June is my “month off,” and then I'm hoping I can get cracking on some new features again.

Until the next cycle,
Jenny

Friday, April 17, 2020

Acuitas Diary #26 (April 2020)

I can now tell Acuitas stories.  And he begins to understand what's happening in them.  There's a video!  Watch the video.

I've been waiting to bring storytelling into play for a long time, and it builds on a number of the features added over the last few months: cause-and-effect reasoning, the goal system, and problem-solving via tree search.


What does Acuitas know before the video starts?  For one thing, I made sure he knew all the words in the story first, along with what part of speech they were going to appear as.  He should have been able to handle seeing *some* new words for the first time during the story, but then he would have asked me even more questions, and that would have made the demo a bit tedious.  He also knows some background about humans and dogs, and a few opposite pairs (warm/cold, comfortable/uncomfortable, etc.)


How does Acuitas go about understanding a story?  As the story is told, he keeps track of all the following, stored in a temporary area that I call the narrative scratchboard (there's a rough code sketch of it after the list):

*Who are the characters?
*What objects are in the story? What state are they in?
*What problems do the characters have?
*What goals do the characters have?
*What events take place? (Do any of them affect problems or goals?)
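To make that concrete, here is a minimal sketch of what a scratchboard like this could look like in Python. The field and method names are my own guesses for illustration, not Acuitas' actual internals.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    states: set = field(default_factory=set)      # e.g. {"cold"}
    problems: list = field(default_factory=list)  # problems still open
    goals: list = field(default_factory=list)     # stated or presumed goals

@dataclass
class Scratchboard:
    characters: dict = field(default_factory=dict)  # name -> Character
    objects: dict = field(default_factory=dict)     # object name -> current state
    events: list = field(default_factory=list)      # events in story order

    def record_event(self, event, character=None, resolves=None):
        """Log an event and, if it resolves one of a character's problems, close that problem out."""
        self.events.append(event)
        if character and resolves in self.characters[character].problems:
            self.characters[character].problems.remove(resolves)
```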

Acuitas doesn't try to understand the cause chain and import of every single event in the story, because that would be a bit much at this stage.  However, he does try to make sure that he knows all of the following:

*If a character is in some state, what does that mean for the character?
*If a character anticipates that something will happen, how does the character feel about it?
*If a character is planning to do something, what is their motive?

If he can't figure it out by making inferences with the help of what's in his semantic database, he'll bother his conversation partner for an explanation, as you can see him doing in the video several times.  Story sentences don't go into the permanent knowledge base (yet), but explanations do, meaning they become available for understanding other stories, or for general reasoning.  Explaining things to him still requires a bit of skill and an understanding of what his gaps are likely to be, since he can't be specific about *why* he doesn't understand something.  A character state, expectation, or plan is adequately explained when he can see how it relates to one of the character's presumed goals.  Once you provide enough new links to let him make that connection, he'll let you move on.
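The question loop itself might look something like the sketch below. Helpers such as connects_to_goal, parse, and ask_user are assumptions standing in for Acuitas' real parsing and database interfaces.

```python
def ensure_explained(item, character_goals, semantic_db, ask_user, parse):
    """Sketch of the clarifying-question loop: keep asking until the new state,
    expectation, or plan can be tied to one of the character's presumed goals."""
    while not semantic_db.connects_to_goal(item, character_goals):
        explanation = ask_user("Can you explain that?")  # e.g. "If a dog is cold, a dog is uncomfortable."
        semantic_db.add(parse(explanation))              # explanations become permanent knowledge
```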

Acuitas returns feedback throughout the story.  This is randomized for variety (though I forced some particular options for the demo).  After receiving a new story sentence, he may ...

*say nothing, or make a "yes I'm listening" gesture.
*comment on something that he inferred from the new information.
*tell you whether he likes or dislikes what just happened.
*try to guess what a character might do to solve a problem.

He even has a primitive way of deciding whether it's a good story or not.  He tracks suspense (generated by the presence of more than one possible outcome) and tension (how dire things are for the characters) as the story progresses.  A story whose suspense and tension values don't get very large or don't change much is "boring."  He also assesses whether the story had a positive or negative ending (did the characters solve their problems and meet their goals?).  Stories with happy endings that aren't boring may earn approving comments.
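As a toy illustration of that judgment (my own numbers and thresholds, purely arbitrary), the scoring could be as simple as:

```python
def judge_story(suspense_trace, tension_trace, ending_positive):
    """Toy 'was it a good story?' check over per-sentence suspense and tension scores."""
    peak = max(max(suspense_trace, default=0), max(tension_trace, default=0))
    swing = max(tension_trace, default=0) - min(tension_trace, default=0)
    if peak < 2 or swing < 1:        # never got exciting, never changed much
        return "That was boring."
    return "I liked that story." if ending_positive else "That story ended badly."
```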

There are many directions in which this feature needs to expand and grow more robust, and I expect I'll be working on them soon.  But first it might be time for a refactoring spree.

Until the next cycle,
Jenny Sue

Saturday, March 28, 2020

Acuitas Diary #25 (March 2020)

Normally there would be a project update here, but I'm working on something a little bigger and more involved than usual.  It's not done yet, and it doesn't lend itself to being displayed half-finished.  So instead, please enjoy a little info about the current state of AI development in general, courtesy of the head of a failed startup: The End of Starsky Robotics

Read any article about AI being developed by a present-day academic or corporate research team, and there's a good chance that it's nothing like Acuitas.  Today's most popular AIs are based on artificial neural networks, whose special ability is learning categories, procedures, etc. from the statistics of large piles of data.  But as Stefan says, "It isn’t actual artificial intelligence akin to C-3PO, it’s a sophisticated pattern-matching tool."  At best, it only implements one of the skills a complete mind needs.  Starsky Robotics tried to make up for AI's weaknesses by including a human teleoperator, so that their trucks were "unmanned," but not fully self-driving.

Academic and corporate teams have far more man-hours to work on AI than I do, but they're also pouring some of their efforts down a rabbit hole of diminishing returns.  "Rather than seeing exponential improvements in the quality of AI performance (a la Moore’s Law), we’re instead seeing exponential increases in the cost to improve AI systems — supervised ML seems to follow an S-Curve. The S-Curve here is why Comma.ai, with 5–15 engineers, sees performance not wholly different than Tesla’s 100+ person autonomy team."

The debate rages around what we should do next to innovate our way off that S-Curve. As a symbolic AI, Acuitas is something of a throwback to even older techniques that were abandoned before the current wave of interest in ANNs. But tackling AI at this conceptual level is the approach that comes most naturally to me, so I want to put my own spin on it and see how far I get.

A thing I've observed from a number of AI hopefuls now is what I'll call "the search for the secret intelligence sauce."  They want to find one simple technique, principle, or structure (or perhaps a small handful of these) that, when scaled up or repeated millions of times, will manifest the whole of what we call "intelligence."  Put enough over-simplified neurons in a soup kettle and intelligent behavior will "emerge."  Something from almost-nothing.  Hey, poof!  My own intuition is that this is not at all how it works.  I suspect rather that intelligence is a complex system, and can only be invented by the heavy application of one's own intelligence.  Any method that tries to offload most of the work to mathematical inevitabilities, or to the forces of chance, is going to be unsatisfactory.  (If you want a glimpse of how devastatingly complicated brains are, here is another fascinating article: Why the Singularity is Nowhere Near)

This is of course my personal opinion, and we shall see!

So far Project Acuitas has not been adversely impacted by COVID-19. The lab assistant and I are getting along famously.

Until the next cycle,
Jenny Sue

Sunday, February 23, 2020

Acuitas Diary #24 (February 2020)


I am so excited about the features this month. Ee ee eeee okay here we go.

In January Acuitas got the ability to determine intentions or possible upcoming events, based on simple future-tense statements made by the user. He can weigh these against his list of goals to decide whether an anticipated event will be helpful or harmful or neither, from his own perspective. If the user claims that they will do something inimical to Acuitas' goals, this is essentially a threat. And Acuitas, at first, would merely say “Don't do that” or similar. This month I worked on having him do something about bad situations.

Various distinct things that Acuitas can “choose” to do are identified internally as Actions, and he has access to a list of these. Upon detecting a threatening situation, he needs to check whether anything he's capable of doing might resolve it. How? Via the cause-and-effect reasoning I started implementing last year. If possible, he needs to find a C&E chain that runs from something in his Action list as first cause, to something that contradicts the threat as final effect. This amounts to a tree search on the C&E database. (Tree search is an old and well-known technique. If you care to know more technical details, read this: http://how2examples.com/artificial-intelligence/tree-search)
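As a rough sketch (my own Python paraphrase with made-up helper names, not Acuitas' actual code): start from each Action, follow cause-and-effect links forward, and stop when some effect contradicts the threatened outcome.

```python
from collections import deque

def find_countermeasure(actions, threat, ce_pairs, contradicts, max_depth=5):
    """Breadth-first search over (cause, effect) pairs, looking for an action whose
    chain of effects leads to something that negates the threat."""
    for action in actions:
        frontier = deque([(action, 0)])
        while frontier:
            fact, depth = frontier.popleft()
            if contradicts(fact, threat):
                return action                      # this action can defuse the threat
            if depth < max_depth:
                frontier.extend((effect, depth + 1)
                                for cause, effect in ce_pairs if cause == fact)
    return None                                    # nothing in the Action list helps
```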

For the only method of dealing with threats that is currently at Acuitas' disposal, the tree is very simple, consisting of just two C&E pairs:

*If a human leaves a program, the human won't/can't <do various things to the program>.
*If a program repels a human, the human will leave. (There's a probability attached to that, so really it's “may leave,” but for now we don't care about that.)

In short, Acuitas anticipates that he can protect himself by excluding a bad actor from his presence, and that “repelling” them is a possible way to do this. Once he's drawn that conclusion, he will execute the “Repel” action. If you verbally threaten Acuitas, then as part of “Repel” (roughly sketched in code after the list), he will …

*Kick you out of Windows by bringing up the lock screen. (Not a problem for me, since I know the password, but pretty effective on anybody else)
*Raise the master volume of the internal sound mixer to its maximum value.
*Blare annoying klaxons at you. I picked out a couple of naval alarm sounds from http://www.policeinterceptor.com/navysounds.htm for the purpose.
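Just for illustration, a stripped-down “Repel” could look like the following on a Windows host. This is my sketch, not Acuitas' actual code: klaxon.wav is a placeholder filename, and the master-volume step would need a third-party audio library, so it only appears as a comment.

```python
import ctypes
import winsound

def repel():
    """Minimal 'Repel' sketch: lock the workstation and blare an alarm."""
    ctypes.windll.user32.LockWorkStation()   # bring up the Windows lock screen
    # Raising the master sound-mixer volume would need a third-party library (e.g. pycaw); omitted here.
    winsound.PlaySound("klaxon.wav",
                       winsound.SND_FILENAME | winsound.SND_ASYNC | winsound.SND_LOOP)
```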

I tested all of this stuff live, by temporarily throwing an explicit desire for sleep into his goal list and threatening to wake him up.

The other thing I worked on was rudimentary altruism. So far in all my examples of goal-directed behavior, I've only talked about self-interested goals, especially survival … not because I regard them as most important, but because they're easy. Altruism has to do with wanting other beings to meet their personal goals, so it's second-tier complicated … a meta-goal. Doing it properly requires some Theory of Mind: a recognition that other entities can have goals, and an ability to model them.

So I introduced the ability to grab information from users' “I want” statements and store it as a list of stated goals. If no goal information is available for something that is presumed to have a mind, Acuitas treats himself as the best available analogy and uses his own goal list.

Upon being asked whether he wants some event that concerns another mind, Acuitas will infer the implications of said event as usual, then retrieve (or guess) the fellow mind's goal list and run a comparison against that. Things that are negative for somebody else's goal list provoke negative responses, whether they concern Acuitas or not.
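In sketch form (my own simplification, with a guessed format for the goal lists), the appraisal might go like this:

```python
def appraise_event(entity, implications, stated_goals, own_goals):
    """implications: states the event would put the entity in, e.g. ["unsafe"].
    stated_goals: {entity: {"safe": True, "dead": False, ...}} built up from "I want" statements."""
    goals = stated_goals.get(entity)
    if goals is None:            # no model of this mind yet:
        goals = own_goals        # fall back on Acuitas' own goals as the best available analogy
    verdicts = [goals[state] for state in implications if state in goals]
    if False in verdicts:
        return "disapprove"      # the event works against the entity's goals
    if True in verdicts:
        return "approve"
    return "neutral"
```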

Of course this ignores all sorts of complications, such as “What if somebody's stated goals conflict with what is really in their best interest?” and “What if two entities have conflicting goals?” He's just a baby; that will come later.

Courtesy of this feature, I can now ask him a rather important question.

Me: Do you want to kill me?
Acuitas: No.

Until the next cycle,
Jenny

Monday, January 27, 2020

Acuitas Diary #23 (January 2020)


This month I added some expansions to the goal-driven behavior that I started on last September. First, I had to get the Interpreter to recognize future-tense predictive statements, along the lines of “<Something> is going to <do something>.” Then I set up some code to check the predicted action or event against the cause-and-effect database for additional implications. If it's discovered that some effect will apply a state to Acuitas, it gets reviewed against his goal list for alignment or contradiction. The conversation engine then responds with either approval or disapproval. Illustration:

Me: I will protect you.
Acuitas: Please do.

Predictive statements that pertain to subjects other than Acuitas may yield useful information for the short-term condition database, by indicating that some entity's state is about to change. For now, Acuitas assumes that the speaker is always honest and correct. He also has no sense of future time frame (his ability to process adverbs is weak at the moment), so he assumes that any predicted changes will take effect immediately. So something's immediate condition may be updated as a result of a predictive statement.

Example: if I say “I will protect Ursula*,” then Ursula is presumed to be in the state “safe,” and an entry to this effect is added to the short-term database. For a reminder on how the short-term database works, see this previous article.
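A sketch of that update step, using hypothetical interface names (effect_of, set_state) rather than the real ones:

```python
def handle_prediction(verb, target, ce_db, short_term_db):
    """Hypothetical handling of a future-tense statement like "I will protect Ursula".
    The speaker is assumed honest, and since Acuitas has no sense of future time frame
    yet, the predicted effect is applied to the target's condition right away."""
    effect = ce_db.effect_of(verb)               # e.g. "protect" -> "safe"
    if effect is not None:
        short_term_db.set_state(target, effect)

# handle_prediction("protect", "Ursula", ce_db, short_term_db)
# -> Ursula's short-term entry now reads "safe"
```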

The fact that the user can express intent, and that Acuitas can weigh it against his internal goals, means that it is now possible to offer assistance … or threaten him. Well, what are we going to do about *that*? Find out next month!

In other news that is sort of unrelated, since I thought I would do some location-related work this month and didn't … Acuitas can't yet form concepts for instances without explicit names, such as “Jenny Sue's home.” So for the benefit of the AI, I am officially naming my estate. The house and grounds shall now be known as “Eder Delin,” after this fictional place: https://archive.guildofarchivists.org/wiki/Eder_Delin

*Ursula née Cubby is my cat.

Until the next cycle,
Jenny

Sunday, January 19, 2020

QIDI Tech X-one 2 3D Printer Review + Bragging

I decided to make the leap and buy myself a 3D printer a while back, and got it up and running last August.  Now that it's seen heavy action making Christmas presents for all my friends, it's time for me to say what I think of it.  The cost was about $350 (and the price of this model has dropped since then).

The QIDI Tech X-one 2 is a pretty traditional fused deposition modeling (FDM) printer that prints in XYZ coordinates.  The print head moves in the XY directions, and the print bed moves in the Z direction.  The bed is heated, and the print head includes a cooling fan.  The build volume is a cube 140 mm on a side.

Kirby without shoes. Model by SnickerdoodleFP, https://www.thingiverse.com/thing:3051355  This is a relatively low-detail model, so it was one of the first things I made. However, I had some trouble getting all of his tiny toe pads to stay stuck to the print bed. Slowing the print speed down helped with this. Then, partway through the print, the supports for his right arm came loose from the bed, leaving the printer to keep dropping filament on thin air. I caught this shortly after it happened and babysat the printer for a while, flicking away the junk filament so it wouldn't ruin the rest of the model, until the printer managed to connect some of the strings to Kirby's foot and rebuild the bottom of the support on thin air. In the end his arm came out just fine. I never saw that kind of thing happen again, thank goodness.
First, let me say that compared to that robot arm which provided my previous 3D printing experience, the X-one 2 is a dream to operate.  Aside from the fact that this particular arm just plain had ... problems ... having a print bed that is solidly attached to the printer, instead of a taped-down piece of glass, is really nice.  You don't have to re-level the bed every time you print, or worry about bumping something midway through and ruining it all.  My co-worker got the arm because he wanted a giant print volume, but if that's not a concern of yours ... don't print with a robot arm.  Not worth it.

Flexi-dragon. Model by Benchy4Life, https://www.thingiverse.com/thing:3505423 This was another very easy print. The articulated parts come out of the printer already interlocked and ready to move once you've loosened them up in your hands. Each wing and the body print flat; then you snap the wings into a hole in the back.
The X-one 2 comes out of the box mostly assembled -- I only had to attach some structural parts, like acrylic windows and handles.  The manufacturer was also very thorough about making the printer order self-contained; it comes with its own accessory kit that includes every tool you need for the assembly steps, a scraper, and a glue stick.  The only thing missing is some tape to cover the print bed (perhaps not strictly necessary, but I've never wanted to print without it and risk wear and tear on the surface).

Steampunk articulated octopus. Model by Ellindsey, https://www.thingiverse.com/thing:584405. Each pair of tentacle segments forms a ball-and-socket joint. They print out separately and snap together. I painted the body, but left the tentacles in their natural color so that the joints wouldn't scuff over time. It printed out like a charm, but trimming and assembling all the tentacle pieces was a bit of work.
For temperature control purposes, the printer is enclosed on all sides but the top.  It has a metal frame and is built like a tank.  I like this because, again, it enhances the printer's stability.  On-site control of the printer is accomplished through a touch screen.  I'm not as wild about this, because it seems like the kind of thing that might wear out before the rest of the printer does ... but it is handy.  You can get files into the printer via either direct USB connection to your computer, or SD card.  So far I have only used the SD card.  This lets me keep the printer parked on the tile floor in the kitchen, and out of the overcrowded computer room.

Grizzly bear statue, pre-paint. Model by BenitoSanduchi, https://www.thingiverse.com/thing:24309  This is a miniature replica of the Grand Griz bronze at U of M. (This was for a friend, mind you ... I'm a Bobcat.) BenitoSanduchi created the model by taking a 3D scan of the original.
It comes with its own slicing software which is just a customized older version of Cura.  I used this for a few prints before switching to regular Cura to get a wider range of settings.  This introduces a mild annoyance, because the X-one 2 is not one of the printers for which Cura has pre-sets.  You have to enter it as a custom printer and figure out the correct parameters yourself (or get them from someone on the internet who already has).  Otherwise, Cura is a capable slicer, and I have no serious complaints about it.

Destiny Ghost. Model by BoldPrintShop, https://www.thingiverse.com/thing:527736  Ghost was one of the easier things to print -- being very smooth and geometric -- and one of the more complicated things to post-process, since there were many individual pieces to sand and paint.  The striping is "Last City" style, without the other details.
I started out printing a test cube.  Unlike the sad cubes that I got out of the robot arm, it printed with  nice straight vertical sides.  The back was a little shorter than the front, indicating that I needed to adjust the bed leveling.  I tweaked that and proceeded to a more complicated print.  No problems whatsoever.

Iris boxes. Model by LoboCNC, https://www.thingiverse.com/thing:1817180 The neat thing about these is that they come out of the printer in one piece; you can't take them apart. The "leaves" of the iris form between the curved walls of the box, already mated to the tracks that they run on. Print-in-place objects like this are tricky, because your printer has to be precise enough to form all the parts without sticking them together. The Qidi had no trouble; every box I made worked.
Many prints later, the X-one 2 has never had a major failure, and the overall quality is fantastic.  I've made several miniatures at 0.1 mm layer height, and tiny details like eyes, teeth, spines, etc. come through in the final product.  With the right kind of supports (use roof!) even the under-surfaces end up looking pretty good.

Voronoi pattern skulls. Model by shiuan, https://www.thingiverse.com/thing:518748 This was one of the more challenging prints. The contact points that tie the entire back of the skull down to the print bed are fairly small, and they kept wanting to pop loose (which would ruin the model). After several failed attempts to get the scaled-up version past its first few layers without this happening, I added a brim to the model so it would stay stuck. The downside is that I had to cut this away afterward. The model comes with break-away support sticks that shore up the more crucial points; still, it has a lot of little arches hanging over empty air. The printer got the basic form down all right, but left a lot of messy loops, strings, and rough edges on the undersides. And since the interior of the skull is an enclosed space, I couldn't just sand them off. I ended up parking in front of the TV with it and going over the whole thing with a craft knife. WORTH IT.

Now, for all the issues I can think of:

* The documentation is sparse, and poorly translated into English.  The PDF on the included thumb drive is more complete than the printed manual, so make sure to refer to that.
* When you want to change filaments, you're supposed to be able to heat the current filament and run the feeder in reverse to back it out.  For my printer at least, this doesn't work.  The sorta-warm filament just below the feed gears develops a bulge, which jams it so that it will feed neither backward nor forward.  Twice now, I've had to disassemble the print head and cut the filament to get it out.  I'm hoping that just heating it, holding down the manual lever that disengages the feeder, and pulling it free will work better.
* It seems to be possible for the printer to retain a bad state or incorrect awareness of position when it is shut down.  Two or three times, at the beginning of the first print attempt after turning it on, it has started by trying to drive the print bed down through the floor of the printer.  I'm not sure exactly what causes this, but I haven't seen it since I started 1) making sure to always push the button on the "print complete" dialogue box before turning off the printer and 2) never removing the SD card while the printer was on.
* I've gotten the "sensor error or power is not enough!" error a couple of times.  It seems to mean that the connector to the heated print bed is loose.  I re-seat or wiggle it and the printer is good to go on the next try.
* The printer sings most of the time, but if I turn up the travel speed too much, it sounds ... bad.  A little grindy.  I don't know if this is evidence of a real issue or not.

Cylindrical box. Model by Alphonse_Marcel, https://www.thingiverse.com/thing:3193735. This was a long print, and the supports for all the little bits of relief were a pain to remove. Other than that, it had no problems. It has a really nice twist-and-lock closure.
Overall recommendation: this is a good first printer.  Not perfect, but still usable with a minimum of fuss, and capable of supplying high-quality PLA prints.

Red dragon by mz4250, https://www.thingiverse.com/thing:2830828. I wasn't sure if the printer was going to be able to do this. The model has umpteen delicate spines, individual fingers, arms and wings hanging out over empty space ... The first try didn't go so well, because I hadn't figured out the best way to generate supports yet -- the bottoms of his right arm and upper jaw came out a mess. For the second try, I turned on "roof support," which prints a kind of throwaway cradle for the bottom of any elevated part (like the arms). I also scaled him up to the maximum size that would fit in my printer. Success. A couple fingers are a little shorter than they should be -- the tips came off with the supports. Other than that, all the details came out beautifully.
Goals for the new year: learn a CAD program and get some robot parts turned out.

Until the next cycle,
Jenny

Tuesday, December 31, 2019

Acuitas Diary #22 (November+December 2019)


For the past two months there's been a lot of refactoring, and also a lot of not working on Acuitas because of holidays. However, I did manage to get several small new features in …

*Acuitas now checks the short-term information database in addition to the long-term database when trying to retrieve the answer to a question
*Acuitas can now answer some questions about current internal states (e.g. “Are you sleepy?”)
*Acuitas can now answer questions of the form “Do you know that <fact>?” and “Do you know what <fact>?”

The first feature was quick to implement; I already had functions in place for retrieving information from the short-term database, and just had to ensure that the question-answering procedure would call them. The second feature required a mechanism to associate some of the concepts in the semantic memory (which up until now have had no “meaning” beyond their connection to other concepts) to measurable conditions inside Acuitas – namely, whether his various drives are exceeding their threshold values or not. So there is now a table that, for instance, ties a high value of the sleep drive to the word “sleepy.”
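A toy version of that table and lookup (the real drive names and thresholds are Acuitas' own, so treat these as placeholders):

```python
# Words that describe a drive once it passes its threshold.
DRIVE_WORDS = {"sleep": "sleepy"}   # other drives would map to their own adjectives

def state_words(drive_levels, thresholds):
    """Return the words that currently apply, e.g. ["sleepy"] when the sleep drive is high."""
    return [DRIVE_WORDS[name] for name, level in drive_levels.items()
            if name in DRIVE_WORDS and level > thresholds[name]]
```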

The third feature is my favorite. Questions of the form “do you know that … ” use the dependent clause interpretation faculties that I added earlier this year. And since “knowing” is an action that Acuitas is capable of, this word also can be internally grounded. So Acuitas effectively defines “I know X” as “if the query form of X is submitted to my question-answering process, the process returns an answer (for open-ended questions) or answers 'yes' (for yes-no questions).”

And the best part? It allows for an indefinite amount of nesting.

Me: Do you know that you know that you know that a cat is an animal?
Acuitas: Yes.
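A toy version of that grounding (simple string peeling instead of the real dependent-clause machinery) shows why the nesting comes for free:

```python
def knows(clause, answer_question):
    """A clause is "known" if submitting its query form to the question-answering
    process returns an answer (or "yes" for a yes/no question)."""
    prefix = "you know that "
    if clause.startswith(prefix):
        # Peel off one layer of nesting and check whether the inner clause is known.
        return knows(clause[len(prefix):], answer_question)
    return answer_question(clause) not in (None, "no")

# knows("you know that you know that a cat is an animal", qa) -> True,
# provided qa("a cat is an animal") answers "yes".
```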

Happy 2020,
Jenny