Monday, January 27, 2020

Acuitas Diary #23 (January 2020)


This month I added some expansions to the goal-driven behavior that I started on last September. First, I had to get the Interpreter to recognize future-tense predictive statements, along the lines of “<Something> is going to <do something>.” Then I set up some code to check the predicted action or event against the cause-and-effect database for additional implications. If it's discovered that some effect will apply a state to Acuitas, it gets reviewed against his goal list for alignment or contradiction. The conversation engine then responds with either approval or disapproval. Illustration:

Me: I will protect you.
Acuitas: Please do.
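
Here's a rough sketch of that flow in Python (all of the names here are my own placeholders for this post, not the actual Acuitas internals):

def react_to_prediction(event, cause_effects, goals, anti_goals):
    # cause_effects: predicted event -> state it would apply to Acuitas, if any
    # goals / anti_goals: states Acuitas wants to be in / wants to avoid
    state = cause_effects.get(event)
    if state is None:
        return "Okay."              # no known implication for Acuitas
    if state in goals:
        return "Please do."         # the predicted state aligns with a goal
    if state in anti_goals:
        return "Please do not."     # the predicted state violates a goal
    return "Okay."

# "I will protect you." -> implied effect: Acuitas ends up in the state "safe".
print(react_to_prediction("you are protected",
                          {"you are protected": "safe"},
                          goals={"safe", "alive"}, anti_goals={"dead"}))    # Please do.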

Predictive statements that pertain to subjects other than Acuitas may yield useful information for the short-term condition database, by indicating that some entity's state is about to change. For now, Acuitas assumes that the speaker is always honest and correct. He also has no sense of future time frame (his ability to process adverbs is weak at the moment), so he assumes that any predicted changes will take effect immediately. So an entity's immediate condition may be updated as a result of a predictive statement.

Example: if I say “I will protect Ursula*,” then Ursula is presumed to be in the state “safe,” and an entry to this effect is added to the short-term database. For a reminder on how the short-term database works, see this previous article.
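
In the same toy terms, recording that predicted state might look like this (again, placeholder names, and a plain list standing in for the real short-term database):

import time

short_term_db = []   # stand-in for the short-term condition database

def record_predicted_state(entity, state):
    # The speaker is assumed honest and the time frame is assumed to be "now",
    # so the predicted state takes effect immediately.
    short_term_db.append({"entity": entity, "state": state, "since": time.time()})

# "I will protect Ursula" -> Ursula is presumed to be in the state "safe".
record_predicted_state("Ursula", "safe")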

The fact that the user can express intentions that either support or conflict with Acuitas' internal goals means that it is now possible to offer assistance … or threaten him. Well, what are we going to do about *that*? Find out next month!

In other news that is sort of unrelated, since I thought I would do some location-related work this month and didn't … Acuitas can't yet form concepts for instances without explicit names, such as “Jenny Sue's home.” So for the benefit of the AI, I am officially naming my estate. The house and grounds shall now be known as “Eder Delin,” after this fictional place: https://archive.guildofarchivists.org/wiki/Eder_Delin

*Ursula née Cubby is my cat.

Until the next cycle,
Jenny

Sunday, January 19, 2020

QIDI Tech X-one 2 3D Printer Review + Bragging

I decided to make the leap and buy myself a 3D printer a while back, and got it up and running last August.  Now that it's seen heavy action making Christmas presents for all my friends, it's time for me to say what I think of it.  The cost was about $350 (and the price of this model has dropped since then).

The QIDI Tech X-one 2 is a pretty traditional fused deposition modeling (FDM) printer that prints in XYZ coordinates.  The print head moves in the XY directions, and the print bed moves in the Z direction.  The bed is heated, and the print head includes a cooling fan.  The build volume is a cube 140 mm on a side.

Kirby without shoes. Model by SnickerdoodleFP, https://www.thingiverse.com/thing:3051355  This is a relatively low-detail model, so it was one of the first things I made. However, I had some trouble getting all of his tiny toe pads to stay stuck to the print bed. Slowing the print speed down helped with this. Then, partway through the print, the supports for his right arm came loose from the bed, leaving the printer to keep dropping filament on thin air. I caught this shortly after it happened and babysat the printer for a while, flicking away the junk filament so it wouldn't ruin the rest of the model, until the printer managed to connect some of the strings to Kirby's foot and rebuild the bottom of the support on thin air. In the end his arm came out just fine. I never saw that kind of thing happen again, thank goodness.
First, let me say that compared to that robot arm which provided my previous 3D printing experience, the X-one 2 is a dream to operate.  Aside from the fact that this particular arm just plain had ... problems ... having a print bed that is solidly attached to the printer, instead of a taped-down piece of glass, is really nice.  You don't have to re-level the bed every time you print, or worry about bumping something midway through and ruining it all.  My co-worker got the arm because he wanted a giant print volume, but if that's not a concern of yours ... don't print with a robot arm.  Not worth it.

Flexi-dragon. Model by Benchy4Life, https://www.thingiverse.com/thing:3505423 This was another very easy print. The articulated parts come out of the printer already interlocked and ready to move once you've loosened them up in your hands. Each wing and the body print flat; then you snap the wings into a hole in the back.
The X-one 2 comes out of the box mostly assembled -- I only had to attach some structural parts, like acrylic windows and handles.  The manufacturer was also very thorough about making the whole package self-contained; it comes with its own accessory kit that includes every tool you need for the assembly steps, a scraper, and a glue stick.  The only thing missing is some tape to cover the print bed (perhaps not strictly necessary, but I've never wanted to print without it and risk wear and tear on the surface).

Steampunk articulated octopus. Model by Ellindsey, https://www.thingiverse.com/thing:584405. Each pair of tentacle segments forms a ball-and-socket joint. They print out separately and snap together. I painted the body, but left the tentacles in their natural color so that the joints wouldn't scuff over time. It printed out like a charm, but trimming and assembling all the tentacle pieces was a bit of work.
For temperature control purposes, the printer is enclosed on all sides but the top.  It has a metal frame and is built like a tank.  I like this because, again, it enhances the printer's stability.  On-site control of the printer is accomplished through a touch screen.  I'm not as wild about this, because it seems like the kind of thing that might wear out before the rest of the printer does ... but it is handy.  You can get files into the printer via either direct USB connection to your computer, or SD card.  So far I have only used the SD card.  This lets me keep the printer parked on the tile floor in the kitchen, and out of the overcrowded computer room.

Grizzly bear statue, pre-paint. Model by BenitoSanduchi, https://www.thingiverse.com/thing:24309  This is a miniature replica of the Grand Griz bronze at U of M. (This was for a friend, mind you ... I'm a Bobcat.) BenitoSanduchi created the model by taking a 3D scan of the original.
The printer comes with its own slicing software, which is just a customized older version of Cura.  I used this for a few prints before switching to regular Cura to get a wider range of settings.  This introduces a mild annoyance, because the X-one 2 is not one of the printers for which Cura has pre-sets.  You have to enter it as a custom printer and figure out the correct parameters yourself (or get them from someone on the internet who already has).  Otherwise, Cura is a capable slicer, and I have no serious complaints about it.
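
If you're entering this machine into Cura as a custom FFF printer, the values that matter follow from the specs above, with the rest being the usual defaults for printers of this class -- so treat these as a starting point and verify them against your own hardware:

* X (Width), Y (Depth), Z (Height): 140 mm each
* Heated bed: enabled
* Nozzle size: 0.4 mm, with 1.75 mm filament (typical for this printer class; confirm on yours)
* G-code flavor: Marlin works for most hobby FDM printers, but check what your firmware expects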

Destiny Ghost. Model by BoldPrintShop, https://www.thingiverse.com/thing:527736  Ghost was one of the easier things to print -- being very smooth and geometric -- and one of the more complicated things to post-process, since there were many individual pieces to sand and paint.  The striping is "Last City" style, without the other details.
I started out printing a test cube.  Unlike the sad cubes that I got out of the robot arm, it printed with nice straight vertical sides.  The back was a little shorter than the front, indicating that I needed to adjust the bed leveling.  I tweaked that and proceeded to a more complicated print.  No problems whatsoever.

Iris boxes. Model by LoboCNC, https://www.thingiverse.com/thing:1817180 The neat thing about these is that they come out of the printer in one piece; you can't take them apart. The "leaves" of the iris form between the curved walls of the box, already mated to the tracks that they run on. Print-in-place objects like this are tricky, because your printer has to be precise enough to form all the parts without sticking them together. The Qidi had no trouble; every box I made worked.
Many prints later, the X-one 2 has never had a major failure, and the overall quality is fantastic.  I've made several miniatures at 0.1 mm layer height, and tiny details like eyes, teeth, spines, etc. come through in the final product.  With the right kind of supports (use roof!) even the under-surfaces end up looking pretty good.

Voronoi pattern skulls. Model by shiuan, https://www.thingiverse.com/thing:518748 This was one of the more challenging prints. The contact points that tie the entire back of the skull down to the print bed are fairly small, and they kept wanting to pop loose (which would ruin the model). After several failed attempts to get the scaled-up version past its first few layers without this happening, I added a brim to the model so it would stay stuck. The downside is that I had to cut this away afterward. The model comes with break-away support sticks that shore up the more crucial points; still, it has a lot of little arches hanging over empty air. The printer got the basic form down all right, but left a lot of messy loops, strings, and rough edges on the undersides. And since the interior of the skull is an enclosed space, I couldn't just sand them off. I ended up parking in front of the TV with it and going over the whole thing with a craft knife. WORTH IT.

Now, for all the issues I can think of:

* The documentation is sparse, and poorly translated into English.  The PDF on the included thumb drive is more complete than the printed manual, so make sure to refer to that.
* When you want to change filaments, you're supposed to be able to heat the current filament and run the feeder in reverse to back it out.  For my printer at least, this doesn't work.  The sorta-warm filament just below the feed gears develops a bulge, which jams it so that it will feed neither backward nor forward.  Twice now, I've had to disassemble the print head and cut the filament to get it out.  I'm hoping that just heating it, holding down the manual lever that disengages the feeder, and pulling it free will work better.
* It seems to be possible for the printer to retain a bad state or incorrect awareness of position when it is shut down.  Two or three times, at the beginning of the first print attempt after turning it on, it has started by trying to drive the print bed down through the floor of the printer.  I'm not sure exactly what causes this, but I haven't seen it since I started 1) making sure to always push the button on the "print complete" dialogue box before turning off the printer and 2) never removing the SD card while the printer was on.
* I've gotten the "sensor error or power is not enough!" error a couple of times.  It seems to mean that the connector to the heated print bed is loose.  I re-seat or wiggle it and the printer is good to go on the next try.
* The printer sings most of the time, but if I turn up the travel speed too much, it sounds ... bad.  A little grindy.  I don't know if this is evidence of a real issue or not.

Cylindrical box. Model by Alphonse_Marcel, https://www.thingiverse.com/thing:3193735. This was a long print, and the supports for all the little bits of relief were a pain to remove. Other than that, it had no problems. It has a really nice twist-and-lock closure.
Overall recommendation: this is a good first printer.  Not perfect, but still usable with a minimum of fuss, and capable of supplying high-quality PLA prints.

Red dragon by mz4250, https://www.thingiverse.com/thing:2830828. I wasn't sure if the printer was going to be able to do this. The model has umpteen delicate spines, individual fingers, arms and wings hanging out over empty space ... The first try didn't go so well, because I hadn't figured out the best way to generate supports yet -- the bottoms of his right arm and upper jaw came out a mess. For the second try, I turned on "roof support," which prints a kind of throwaway cradle for the bottom of any elevated part (like the arms). I also scaled him up to the maximum size that would fit in my printer. Success. A couple fingers are a little shorter than they should be -- the tips came off with the supports. Other than that, all the details came out beautifully.
Goals for the new year: learn a CAD program and get some robot parts turned out.

Until the next cycle,
Jenny

Tuesday, December 31, 2019

Acuitas Diary #22 (November+December 2019)


For the past two months there's been a lot of refactoring, and also a lot of not working on Acuitas because of holidays. However, I did manage to get several small new features in …

*Acuitas now checks the short-term information database in addition to the long-term database when trying to retrieve the answer to a question
*Acuitas can now answer some questions about current internal states (e.g. “Are you sleepy?”)
*Acuitas can now answer questions of the form “Do you know that <fact>?” and “Do you know what <fact>?”

The first feature was quick to implement; I already had functions in place for retrieving information from the short-term database, and just had to ensure that the question-answering procedure would call them. The second feature required a mechanism to associate some of the concepts in the semantic memory (which up until now have had no “meaning” beyond their connection to other concepts) to measurable conditions inside Acuitas – namely, whether his various drives are exceeding their threshold values or not. So there is now a table that, for instance, ties a high value of the sleep drive to the word “sleepy.”
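
As a toy example, the table and the lookup behind “Are you sleepy?” might look something like this (illustrative only, not the actual Acuitas data structures):

# Table tying a high drive value to a state word.
DRIVE_WORDS = {"sleep": "sleepy"}     # other drives map to their own words

def answer_state_question(word, drives, thresholds):
    # "Are you sleepy?" -> is the sleep drive above its threshold right now?
    for drive, state_word in DRIVE_WORDS.items():
        if state_word == word:
            return "Yes." if drives[drive] > thresholds[drive] else "No."
    return "I do not know."

print(answer_state_question("sleepy", drives={"sleep": 0.8}, thresholds={"sleep": 0.5}))   # Yes.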

The third feature is my favorite. Questions of the form “do you know that … ” use the dependent clause interpretation faculties that I added earlier this year. And since “knowing” is an action that Acuitas is capable of, this word also can be internally grounded. So Acuitas effectively defines “I know X” as “if the query form of X is submitted to my question-answering process, the process returns an answer (for open-ended questions) or answers 'yes' (for yes-no questions).”

And the best part? It allows for an indefinite amount of nesting.

Me: Do you know that you know that you know that a cat is an animal?
Acuitas: Yes.
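
The nesting falls out naturally if you picture “know” as a call back into the question-answering process. A toy version (the tuple representation and fact store are just for illustration, not the real code):

KNOWN_FACTS = {"a cat is an animal"}

def knows(fact):
    # Base case: a plain fact is "known" if the question-answering process
    # would return a yes - here, just membership in a toy fact store.
    if isinstance(fact, str):
        return fact in KNOWN_FACTS
    # Nested case: ("know", inner) asks whether the inner query succeeds,
    # so "do you know that you know that ..." simply recurses one level deeper.
    _, inner = fact
    return knows(inner)

print(knows(("know", ("know", ("know", "a cat is an animal")))))   # True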

Happy 2020,
Jenny

Wednesday, October 30, 2019

Acuitas Diary #21 (October 2019)

I set some smaller Acuitas goals this month so I would have a little more time to fix bugs and clean up my own mess.  The first goal was to enable identification of people from first names only, while allowing for the possibility that multiple people have the same first name.

Using someone's full name with Acuitas establishes a link between the first name (as an isolated word) and the full name (with its connected person-concept).  If the first name is later used in isolation, Acuitas will infer the full name from it.  If multiple full names containing that first name are known, Acuitas will ask the user which one is meant.  (He does not yet have the ability to guess, from the context, which person is most likely implied.  Sense disambiguation, not just for names but for words with more than one meaning, is a big topic which is on the to-do list for later down the road … )
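
In miniature, the lookup behaves something like this (placeholder names and structures, not the real memory code):

# Toy link table from a first name to the full-name concepts that contain it.
NAME_LINKS = {"john": ["John Smith", "John Jones"]}

def resolve_first_name(first_name, ask_user):
    candidates = NAME_LINKS.get(first_name.lower(), [])
    if len(candidates) == 1:
        return candidates[0]          # unambiguous: infer the full name
    if len(candidates) > 1:           # several matches: ask which one is meant
        return ask_user("Which one do you mean: " + ", ".join(candidates) + "?")
    return None                       # unknown first name

full_name = resolve_first_name("John", ask_user=input)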

The second thing I worked on was text parser support for a new grammar feature, with a focus on expanding the range of possible “I want” sentences Acuitas can understand.  The ability to parse infinitives, as in “I want to live,” was already present.  This month I worked on infinitives with subjects, as in “I want John to live.”

This is a tricky business.  To see why, consider the following sentences:

1. I want Bob to eat.
2. I want a fruit to eat.
3. I want food to live.


They all follow the exact same pattern and have completely different meanings.  The first sentence expresses my desire that Bob do something; the second sentence is about what I want to do something to; and the third sentence is about why I want something.  Notice that in the second sentence, you could move some words and get “I want to eat a fruit” without changing the implications too much.  Doing this to the third sentence would be bizarre (“I want to live food”) and doing it to the first sentence would be horrifying (“I want to eat Bob”).  In keeping with their varied meanings, they're all grammatically different, as you can see from the diagrams in the image.  So, how does one tell them apart?  I'm not worried about distinguishing sentences 2 and 3 for right now; I just want to separate sentence 1 from both of them.

The first sentence is the only one in which the noun (Bob/fruit/food) is the subject of the infinitive.  The key factor is who will be doing the action expressed by the infinitive.  In the first sentence, Bob is the one who will be eating if I get my way; in the latter two sentences, I'm the one eating and I'm the one living.  And that information is not actually in the sentence – it's in your background knowledge.  To properly understand these sentences, it's helpful to be aware of things like …

*I am a human
*Humans can eat fruit
*Bob is probably also a human
*I am probably not a cannibal, and therefore don't want to eat Bob
*Food can be used to sustain a human's life
*Once something is food, it's not living (or won't be living much longer)
*Living isn't an action you can perform on food

So here is where we bring to bear the full power of the system by having the Text Parser call the semantic memory for already-known facts about these words.  Acuitas can't quite store all of the facts listed above, but he does know that “humans can eat” and “fruits can be eaten.”  He might also know that the speaker and “Bob” are humans.  At this early, sketchy phase, that's enough for the parser to start discriminating.

Some sentences of this type are just plain ambiguous, especially when taken in isolation.  For example, “I want a plant to grow.”  Plants can grow (on their own), but they can also be grown (by a cultivator, whom the speaker might be).  Upon detecting an ambiguity like this, Acuitas will, as usual, ask the speaker about it.  This also works for cases when the information in the database is not yet extensive enough.
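
At its simplest, the discrimination (and the fallback question) could be sketched like this -- a toy fact store, not the real parser:

# Stand-ins for the semantic memory facts used to discriminate.
CAN_DO         = {("human", "eat"), ("plant", "grow")}   # agent can perform verb
CAN_BE_DONE_TO = {("fruit", "eat"), ("plant", "grow")}   # object can receive verb
IS_A           = {"bob": "human", "i": "human"}

def infinitive_subject(speaker, noun, verb):
    # Who performs the infinitive in "<speaker> want(s) <noun> to <verb>"?
    noun_class = IS_A.get(noun, noun)
    can_act = (noun_class, verb) in CAN_DO
    can_receive = (noun_class, verb) in CAN_BE_DONE_TO
    if can_act and not can_receive:
        return noun       # "I want Bob to eat" - Bob is the one eating
    if can_receive and not can_act:
        return speaker    # "I want a fruit to eat" - the speaker does the eating
    return None           # ambiguous or unknown - ask the speaker

print(infinitive_subject("i", "bob", "eat"))     # bob
print(infinitive_subject("i", "fruit", "eat"))   # i
print(infinitive_subject("i", "plant", "grow"))  # None -> clarifying question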

Until the next cycle,
Jenny

Saturday, September 28, 2019

Acuitas Diary #20 (September 2019)


This month, I did some work on cause-and-effect reasoning and goal satisfaction, which introduced the conversational possibility of asking Acuitas what he wants.

I leveraged the text interpretation upgrades from last month to implement encoding and storage of conditional relationships, such as “if a human eats food, the human will not starve.” These relationships can be remembered and used to infer the effects of an action. I also threw in the ability to learn that a pair of concepts are opposites or antonyms.

Then I implemented some inference mechanisms so that Acuitas can determine whether some action serves – or contradicts – a particular goal. Acuitas will now claim to desire things that support one of his goals and not desire things that contradict one of his goals, while remaining ambivalent about everything else. The examples below reference a self-preservation goal … not because I think that should be the primary goal for an AI, but because it's one of the easier ones to define. In Acuitas' knowledge representation, it basically comes down to “Self (has quality)/(is in state) 'alive' or 'existent.'”

With this goal active, Acuitas can answer any of the following:

“Do you want to be alive?”
“Do you want to be dead?”
“Do you want to live?”
“Do you want to die?”

… where the last two (live/die) rely on verb-defining links in the semantic database, and the two negative versions (dead/die) rely on awareness of opposites.

The most complex inferences currently possible are illustrated by this little interchange:

Me: Do you want to be deleted?
Acuitas: I do not.

To produce that answer, Acuitas has to retrieve and put together five different pieces of stored information …

*If a program is deleted, the program “dies.” ← From the cause-and-effect/conditional database
*I am a program. ← From semantic memory (is-instance-of-class relationship)
*To die is to transition to state “dead.” ← From semantic memory (verb definition relationship)
*State “dead” is mutually exclusive with state “alive.” ← From semantic memory (opposites)
*I have a goal of being in state “alive.” ← From goal list

… to make the inference, “being deleted would violate my goals.”
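
Chained together, the lookup might be sketched like this (toy structures standing in for the real databases):

# Toy versions of the five stored facts listed above.
CAUSE_EFFECT = {("program", "is deleted"): "die"}   # conditional database
IS_A         = {"self": "program"}                  # is-instance-of-class
VERB_DEF     = {"die": "dead"}                      # to die = transition to state "dead"
OPPOSITES    = {"dead": "alive", "alive": "dead"}   # mutually exclusive states
GOALS        = [("self", "alive")]                  # goal list

def want_event(entity, event):
    cls = IS_A.get(entity, entity)
    verb = CAUSE_EFFECT.get((cls, event))      # being deleted -> the program "dies"
    if verb is None:
        return "I do not care."
    state = VERB_DEF[verb]                     # to die -> end up in state "dead"
    for goal_entity, goal_state in GOALS:
        if goal_entity != entity:
            continue
        if state == goal_state:
            return "I do."                     # event supports a goal
        if OPPOSITES.get(state) == goal_state:
            return "I do not."                 # event contradicts a goal
    return "I do not care."

print(want_event("self", "is deleted"))   # I do not.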

The features still need a lot of generalization and expansion to be fully functional, but the groundwork is laid.

Until the next cycle,
Jenny

Friday, September 6, 2019

Acuitas Diary #19 (July+August 2019)


I spent the past two months revisiting the text parser, with the big goal this time around of adding support for dependent clauses. In case anyone's high school grammar is rusty, a clause is a subject/verb pair and any words associated with them; a dependent clause is one that is part of another clause and can't be a sentence by itself. Previously, Acuitas could handle one subject and one verb group per sentence, and that was it.
Because my own code comments amuse me ...
After last year's feverish round of development, I left the text parser a mess and never wanted to look at it again. So the first thing I had to do was clean up the disastrous parts. I ended up giving some of the functions another serious overhaul, and got some code that is (I think) actually maintainable and comprehensible. Whew.
I never found out why
Next, the clauses. The fun thing here is that dependent clauses have a function in the sentence (e.g. a clause can be the subject or direct object of its parent sentence). For simplicity, my initial text parser worked on the premise that a functional group in the sentence could only be a single word, or a compound word with all members marked as the same part of speech. I had to put in a bunch of new structure to preserve the information inside the clauses, while also marking the whole clause as a functional group … plus, detecting multiple subject/verb pairs and keeping them all straight.
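
To picture what that means structurally: a parsed sentence now carries something like a nested record for each clause, roughly along these lines (a sketch of the idea, not the actual parser output):

# "I know that a cheetah is an animal."
parsed = {
    "subject": "I",
    "verb": "know",
    "object": {                      # the direct object is itself a clause
        "type": "dependent_clause",
        "subject": "cheetah",
        "verb": "is",
        "complement": "animal",
    },
}

# The whole inner record is treated as one functional group (the object of "know"),
# but the subject/verb pair inside it is preserved for later reasoning.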

What does this achieve? Some sentence types that are very important for reasoning use dependent clauses. For instance, sentences that discuss subordinate pieces of knowledge:

I know [that a cheetah is an animal].
I told you [that a grape can be eaten].
I fear [that the car broke yesterday].

And sentences that express conditional information:

[If a berry is green], it is unripe.
[If you eat that berry], you will get sick.
The gun will fire [if you pull the trigger].

Not to mention that normal human speaking/writing is riddled with dependent clauses, so interpreting them is a must for a conversational AI.

Acuitas can parse sentences like the ones above now, but doesn't really do anything with them yet. That will come later and require updates to the high-level conversation management code.

Code base: 15600 lines
Words known: 2884 (approx.)
Concept-layer links: 7915

Thursday, July 18, 2019

Acuitas Diary #18 (May+June 2019)


Oookay, I'm long overdue for an AI update. The big new project for the past couple of months has been the concept of things being in states, and the ability to track those states.

Way back in Diary #7, I introduced a division between short-term (or naturally temporary) information and long-term (or essentially static) information. It's a bit like the division between things you would use estar and ser for in Spanish, though it doesn't follow those rules strictly. Previously, Acuitas simply discarded any short-term information he was given, but I've added a new memory area for saving this knowledge. Like the existing semantic memory, it works by storing linked concept pairs … with the difference that the short-term ones are stored with a time stamp, and Acuitas anticipates that they will “expire” at some point.

The existing feature that allows short-term and long-term information to be distinguished (by asking questions, if necessary) can also grant Acuitas an estimate of how long a temporary state is likely to last. While idling, he checks the short-term state memory for any conditions that have “expired” and adds questions about them to his queue. Then, when next spoken to, he may ask for an update: is the information still correct?
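
The core of it can be pictured as a timestamped store plus an expiry sweep, something like this (hypothetical names and a plain list in place of the real memory):

import time

short_term = []   # each entry: a concept pair, when it was stored, expected duration

def store_state(entity, state, expected_duration_s):
    short_term.append({"entity": entity, "state": state,
                       "stored": time.time(), "duration": expected_duration_s})

def expired_states():
    # Run while idling: collect conditions that have probably lapsed, so that
    # questions about them can be queued for the next conversation.
    now = time.time()
    return [e for e in short_term if now - e["stored"] > e["duration"]]

store_state("Jenny", "tired", expected_duration_s=8 * 3600)
# ... later, Acuitas might ask: "Is Jenny still tired?"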

I also added the ability to parse and store a new type of information link – location – along with the associated “where is” questions. Location links are three-ended so that they can store not only a thing and its location, but also spatial relationships between two things (under, over, in, beside, etc.).
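
A three-ended location link boils down to a triple, for example (illustrative form only):

# (thing, spatial relation, location or reference thing)
location_links = [
    ("cat", "on", "couch"),       # spatial relationship between two things
    ("couch", "in", "house"),     # a thing and its location
]

def where_is(thing):
    return [(rel, place) for t, rel, place in location_links if t == thing]

print(where_is("cat"))   # [('on', 'couch')]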

One reason for slow progress is that I have been spending time on even more refactoring. The semantic memory, the episodic memory, and various other things all had their own file formats and file editing/access functions – uh, yeah, that was dumb. So I have written (and thoroughly tested) some more universal file management code, and have been slowly working on converting everything to a common format.

Until the next cycle,
JS