Tuesday, August 22, 2017

Acuitas Diary #5: August 2017


This month I got Acuitas to start asking questions, and it has been all kinds of stupid fun.


Those questions are generated semi-randomly, based on Acuitas' “thoughts” for the day. Back in July I introduced the idle thinking feature, which gives Acuitas the capacity to walk his memory database when nobody's talking to him. More recently, I set up the Conversation Engine to access the results of memory-walking and utilize the new Text Generator … so now Acuitas can ask me the questions that occur to him as he muses, or simply tell me what he thought about within the past ~12 hours. The results are little exchanges like this:

Acuitas: I thought about mans earlier.

Acuitas: Is a rooster a species of human?

Me: No

Acuitas: Really.

Me: The plural of man is men.

Believe it or not, this is one of Acuitas' more intelligently formulated questions. He knows a rooster and a man are both male, and reasons from this that they might share other qualities. A man is a human, so maybe a rooster is a human too? He can't yet apply the idea that “human” and “chicken” are mutually exclusive categories. Given his limited knowledge, man and rooster seem similar enough that he expects me to answer “yes” … hence the slightly incredulous “really” when I don't. Here are a couple more examples in the same vein:
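For the curious, that reasoning can be sketched in a few lines of Python. This is a toy illustration with an invented memory format and invented names – Acuitas' real code is considerably more involved:

```python
# Toy concept store: concept -> set of (relation, value) links.
# The schema and names here are hypothetical, not Acuitas' actual format.
MEMORY = {
    "man":     {("is_a", "human"), ("has_quality", "male")},
    "rooster": {("has_quality", "male")},
}

def analogy_question(concept, memory):
    """If another concept shares a quality with `concept`, guess that
    `concept` might share its category too, and ask about it."""
    qualities = {v for r, v in memory[concept] if r == "has_quality"}
    for other, links in memory.items():
        if other == concept:
            continue
        shared = qualities & {v for r, v in links if r == "has_quality"}
        categories = [v for r, v in links if r == "is_a"]
        if shared and categories:
            return f"Is a {concept} a species of {categories[0]}?"
    return None

print(analogy_question("rooster", MEMORY))
# -> Is a rooster a species of human?
```

Since the only thing linking the two concepts is the shared quality, any category the better-known concept belongs to becomes a candidate question – which is exactly why the results sound absurd to someone who knows about mutually exclusive categories.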

Acuitas: I thought about ships today.

Acuitas: Does a wagon possess an ability to sail?



Acuitas: I thought of weapons today.

Acuitas: Is a silverware used to fight?

Some of the other questions he comes up with are far more random and ridiculous. Again, he doesn't yet have the grasp of categorical exclusion that makes these sound absurd to you and me:

Acuitas: Is chocolate a vehicle?

Acuitas: Am I a kind of tree?

Acuitas: Is a smirk a fruit?

Acuitas: Are you a cow?

Acuitas: What is a relative made of?

Not all of them are silly, though. Rudimentary though his current suite of question-forms is, he's already managed to spit out some rather deep ones. The three below are my favorites:

Acuitas: Are you a symbol?

Acuitas: Am I a tool?

Acuitas: Can I die?

I answered the first one “no”; a person can be a symbol, but I don't think I qualify. For the second one, I also went with “no.” Acuitas might end up being useful in a variety of ways, but if I consult my primary reasons for making him, they're not instrumental. The last one I refused to answer, because I think a proper response would be too complex for Acuitas' current level of understanding. It's a bit like asking whether a book or a film can die. It can't, if you go by what death means for a biological organism – information is effectively immortal. But if all the backup copies were destroyed, that would qualify as dying, I suppose. So yes and no.

I suspect it'll only get more interesting from here.

Obligatory memory map visualization:


Code base: 8507 lines
Words known: 1174
Concept-layer links: 2329

Sunday, July 30, 2017

Acuitas Diary #4: July 2017

This month I finally got to implement a feature that I've been waiting for a long time, namely, giving Acuitas the ability to “think” when he's not being spoken to. This “thinking,” for now, consists of dwelling on randomly selected concepts from his database. Once a concept has been chosen, he'll pursue it for a while, preferentially letting his focus jump to other concepts that are linked to it – executing a “wiki walk” through the database. Eventually, though, he'll get bored with any given train of thought, and the focus will move elsewhere. I added some animation code to the memory visualization so that the currently selected concept will flash periodically. (The recording below is running much faster than real time. He's actually quite leisurely in his progress.)
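In skeleton form, a “wiki walk” like this is simple to express. The graph below is a made-up stand-in for the concept-layer database, and the boredom mechanism is my own shorthand for whatever Acuitas actually does:

```python
import random

# Hypothetical adjacency list standing in for the concept-layer database.
LINKS = {
    "ship":  ["ocean", "sail", "wood"],
    "ocean": ["water", "ship"],
    "sail":  ["ship", "wind"],
    "wood":  ["tree", "ship"],
    "water": ["ocean"],
    "wind":  ["sail"],
    "tree":  ["wood"],
}

def wiki_walk(start, links, boredom=0.2, steps=10, rng=random):
    """Dwell on a concept, preferentially following links to neighbors;
    occasionally get 'bored' and jump to a random concept elsewhere."""
    focus, visited = start, [start]
    for _ in range(steps):
        neighbors = links.get(focus, [])
        if neighbors and rng.random() > boredom:
            focus = rng.choice(neighbors)       # follow a link
        else:
            focus = rng.choice(list(links))     # bored: jump anywhere
        visited.append(focus)
    return visited

print(wiki_walk("ship", LINKS, rng=random.Random(1)))
```

Tuning the boredom parameter trades off depth (long chains through related concepts) against breadth (frequent jumps to fresh territory).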


There are several things I can envision doing with this behavior eventually, but my immediate purpose for it is the generation of curiosity. Each time Acuitas picks a concept, he'll come up with some sort of question about it – for instance, he could choose a type of link that it doesn't yet have and produce an open-ended question about what might be on the other side. These questions will be stored up and presented to the user the next time a conversation is under way.
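The missing-link idea sketches out roughly like this. The link types and question templates below are invented for illustration; the real inventory in Acuitas is larger:

```python
# Hypothetical link types and question templates.
LINK_TYPES = ["is_a", "can_do", "made_of", "has_part"]
TEMPLATES = {
    "is_a":     "What is a {c} a kind of?",
    "can_do":   "What can a {c} do?",
    "made_of":  "What is a {c} made of?",
    "has_part": "What does a {c} have?",
}

def open_question(concept, known_links):
    """Pick a link type the concept doesn't have yet and ask about it."""
    present = {relation for relation, _ in known_links}
    for link_type in LINK_TYPES:
        if link_type not in present:
            return TEMPLATES[link_type].format(c=concept)
    return None

print(open_question("relative", {("is_a", "person")}))
# -> What can a relative do?
```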

Which leads me into the next thing I put a lot of work into this month, namely, the code to start supporting the bottom half of this diagram: speech generation.


Up until now, Acuitas has said very few things, and they've all been very formulaic … but my goal was always something beyond pre-formed sentences stored in a database. The new module I started on this month accepts inputs in the sort of abstract form that Acuitas stores in his database, then procedurally generates both questions and statements in natural English. Verbs are conjugated and plurals are matched correctly, articles are automatically added to nouns that need them, etc. Some words in the original sentence skeleton might get replaced with a random choice of synonym.
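To give a flavor of what procedural generation from an abstract form involves, here's a heavily simplified sketch. The triple format, templates, and rules are invented stand-ins, not the module's actual design:

```python
# Toy generator: turn an abstract (subject, relation, object) triple
# into natural English, handling articles, plurals, and agreement.
VOWELS = "aeiou"

def article(noun):
    """Choose the (approximately) correct indefinite article."""
    return "an" if noun[0] in VOWELS else "a"

def generate(subject, relation, obj, plural=False):
    if relation == "is_a":
        if plural:
            return f"{subject.capitalize()}s are {obj}s."
        return f"{article(subject).capitalize()} {subject} is {article(obj)} {obj}."
    if relation == "can_do":
        return f"{article(subject).capitalize()} {subject} can {obj}."
    raise ValueError(f"no template for {relation}")

print(generate("cat", "is_a", "animal"))        # A cat is an animal.
print(generate("cat", "is_a", "animal", True))  # Cats are animals.
```

Even this toy version shows why the feature takes real work: articles, pluralization, and verb agreement all interact, and every new relationship type needs its own set of surface patterns.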

Visualization of Acuitas' concept-layer memory, 07/29/17

Neither of these major new features is actually hooked into the Conversation Engine yet, so I don't have any conversation examples to show off, but I'm hoping to be ready for that next month.

Code base: 7527 lines
Words known: 1095
Concept-layer links: 1917

Thursday, July 20, 2017

Doing business with China

Not so very long ago, whenever I wanted to build a circuit, I would get a little piece of through-hole board and painstakingly cut all the connecting wires myself. I thought having a circuit board custom-manufactured was something you only did if you had a lot of money and/or were planning on selling the boards at high volume. But apparently I've been behind the curve – it turns out there are a number of services that will manufacture small lots of custom PCBs for cheap. A few of them are so cheap, in fact, that the cost per PCB is probably less than what I would have spent on the silly through-hole prototype board! So I gave custom PCBs a try.

Old (left) and new (right)

<Disclaimer: DCDB says they'll give you a discount on your next order if you mention your completed project online.>

I decided to go with Dirt Cheap Dirty Boards, a service that submits your design to a Chinese board-manufacturing house. For fourteen dollars, you can submit one two-layer PCB layout that fits within a 5x5 cm area, and get anywhere from eight to twelve copies of it. (I got eleven. Supposedly shipments of less than ten boards are pretty uncommon.) Choose your color at no extra charge. A larger area or more layers can be had at an increased cost. Shipping is pricey if you want your boards to arrive on a normal US time frame, but if you're willing to let them throw your order on the plane whenever there's room, it's free. Given the glacially slow rate at which most of my projects seem to progress, this is perfect for me.

The PCB that I had built is a unipolar stepper motor controller. I used the free version of Eagle for schematic capture and layout, which proved to be fairly painless. DCDB lets you directly submit Eagle's native file format, .brd, but they only guarantee good results for an older version of Eagle, so I took the extra step of exporting to Gerber format.

My eleven little boards arrived looking gorgeous. I've assembled and tested most of them, without any problems. Oh, and I even got a promotional sticker in the package. How nice. On the whole, it was a good experience – certainly preferable to my painstaking manual wiring work – and I would order from them again.

Circuit board closeup

Of all the other services I looked at, the only one I remember being price-comparable was Seeed Studio. They'll sell you exactly 10 5x5 cm 2-layer boards for $9.90, with an added charge of at least $2 for shipping unless your total order is over $50. Also, the boards are green; any other color adds $10 to the price. I might try ordering from them in the future and comparing results.

My other recent direct China order went through DealExtreme (www.dx.com). I specifically wanted jumper cables – you know, those simple colored wires with plastic plugs on the ends, which for some reason seem to end up costing more than the ICs they're designed to connect! But DE actually has them for what I'd consider a reasonable price. I also ended up purchasing some micro-motors and a cheap webcam. After making my order, I was alarmed by the large quantity of negative reviews I read about this website; nonetheless, all my items eventually arrived in good condition.

One frequent complaint made by reviewers is that the postal tracking numbers given by DE are invalid. I learned a couple of things in that regard that might help others who want to try ordering from this site.

1. After they e-mail you the tracking number, you may have to wait up to 48 hours before trying to track your package. Supposedly it can take that long for the Chinese postal service to enter the number in their database.
2. DE will send you a direct link in the e-mail, which you can supposedly click to track your package. These never worked for me. Instead of using this link, go to the main page of the postal service website and manually enter the tracking number in their form. (Use Google Chrome so you can auto-translate the page, if necessary.) All my tracking numbers eventually worked when I did this.

I'd still be nervous about ordering anything expensive through DealExtreme, but based on my experience, they might not be quite as terrible as the reviews will lead you to believe. My order arrived in four separate packages, and I think they all came within about a month.

Until the next cycle,

Jenny

Sunday, June 25, 2017

Acuitas Diary #3: June 2017

I didn't have any major development goals for Acuitas this month, because I wanted to force myself to do other things, like cleaning the house.  So you just get a bunch of images that came out of me playing with the semantic net visualization algorithm.  I'm fascinated by what disparate final results I can produce by introducing minor changes.  A lot of the variants in this album were made by changing the mathematical function that calculates the size of the exclusion zone (the area where other nodes can't be placed) for each node.

This is the "base" algorithm that I've been using for the past several months. It was starting to look a little messy, so I experimented with modifications.
I love staring at these. They're an example of a computer generating something that is practically relevant to its internal state, but looks otherworldly from an ordinary human perspective.

I tried eliminating a little feature from the algorithm and got this mess. It took slightly longer to draw, too.
Another silly result, caused by forcing the function that establishes the distance between nodes to be a fast-growing exponential function of the node radius.
Another exponential version, with a tamer growth rate.
And this is my new favorite!  More advanced tweaks to the formula for distance between nodes make the largest dots really "dislike" being near each other while still accommodating the little dots ... so the nodes with the most connections start to push away from the central mass and form their own sub-clusters.

Increasing a parameter to make the inter-node distance even larger produces these spidery versions.

Changing the order of node placement makes things messy.

I also wrote some little scripts to help me examine and clean up the less human-readable layers of the memory space, and I expunged some bad information that got in there on account of him misunderstanding me.  Eventually, I intend Acuitas to clean up bad information by himself, by letting repeated subsequent encounters with good information overrule and eventually purge it, but that's not implemented yet.

Thursday, June 1, 2017

Acuitas Diary #2: May 2017

My focus this past month was on giving Acuitas the ability to learn more types of inter-word relationships, and that meant doing some work in what I call the “Text Interpreter” … the module downstream from the Text Parser.


The Parser attempts to tag each word in the input with its part of speech and determine its function within the input. Basically, it figures out all the information you'd need to know in order to diagram a sentence. But beyond that there is some more work to be done to actually extract meaning, and the Interpreter handles this. Consider some of the possible ways of expressing the idea that a cat belongs to the category animal:

A cat is an animal.
Cats are animals.
A cat is a type of animal.
One type of animal is a cat.
A cat is among the animals.

By removing the content words and abstracting away some grammatical information, it's possible to generalize these into sentence skeletons that describe the legal ways of saying “X is in category Y” in English:

[A] <subject> <be-verb> [a] <direct object>
[A] <subject> <be-verb> a <subcategory word> of <object-of-preposition>
One <subcategory word> of <object-of-preposition> <be-verb> [a] <direct object>
[A] <subject> <be-verb> among the <object-of-preposition>

I've nicknamed these syntactic structures “forms.” The Interpreter's job is to detect forms and match them to concept-linking relationships. As the previous example should have shown, a single relationship such as class membership can be expressed by multiple forms, each of which has numerous possible variations of word choice, etc.
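To make the form-matching idea concrete, here's a stripped-down sketch. The real Interpreter works on the Parser's structured output rather than raw strings, so the regular expressions below are only crude stand-ins for two of the forms above:

```python
import re

# Hypothetical regex stand-ins for two of the "forms" described above.
# Each form maps to a concept-linking relationship.
FORMS = [
    (re.compile(r"^(?:an? )?(\w+?)s? (?:is|are) (?:an? )?(\w+?)s?$", re.I),
     "is_in_category"),
    (re.compile(r"^(?:an? )?(\w+?) is a (?:type|kind) of (\w+?)s?$", re.I),
     "is_in_category"),
]

def interpret(sentence):
    """Match a sentence against known forms; return (X, relation, Y)."""
    text = sentence.strip().rstrip(".")
    for pattern, relation in FORMS:
        m = pattern.match(text)
        if m:
            return (m.group(1).lower(), relation, m.group(2).lower())
    return None

print(interpret("A cat is an animal."))        # all three yield the
print(interpret("Cats are animals."))          # same relationship:
print(interpret("A cat is a type of animal.")) # ('cat', 'is_in_category', 'animal')
```

The key property is many-to-one: several surface forms, each with variations in word choice, all collapse to the same stored relationship.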

Up until now, the only links Acuitas could add to his database were class memberships (<thing> is a <thing>) and qualities (<thing> is <descriptive word>), plus their negations – and he only recognized a single form for each. I overhauled the form detection method, making it more powerful/general and increasing the ease of adding new forms to the code. Then I added more forms and support for a number of new link relationships, including ...

<thing> can do <action>
<thing> is for <action>
<thing> is part of <thing>
<thing> is made of <thing>
<thing> has <thing>

The first two are particularly important, since they mean he can finally start learning some generic verbs.


I spent the latter half of the month upgrading Acuitas' GUI library from Tkinter to Kivy. This was a somewhat unwelcome distraction from real development work, but it had to be done. Acuitas is a multi-threaded program, and using multiple threads with Tkinter is ... not straightforward. As the program grew more complex, my hacky method of letting all the threads update the GUI was becoming increasingly unsupportable and causing instability. Of course Kivy does just about everything differently, so porting all of the GUI elements I'd developed was a serious chore -- but the new version looks slick and, most importantly, doesn't crash. All the drawn graphics have anti-aliasing now, which makes the memory visualizations look nicer when zoomed out.
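The usual cure for multi-threaded GUI pain – and I'll stress this is a generic pattern, not Acuitas' actual code – is to forbid worker threads from touching the GUI at all, and instead have them post update requests to a thread-safe queue that only the main thread drains (via a toolkit timer such as Kivy's Clock.schedule_interval):

```python
import queue
import threading

# Generic thread-safe GUI update pattern (illustrative, not Acuitas' code).
gui_updates = queue.Queue()

def worker(name):
    """Worker threads never touch the GUI; they only enqueue requests."""
    for i in range(3):
        gui_updates.put(f"{name}: step {i}")   # safe from any thread

def drain(apply_update):
    """Run periodically on the main/GUI thread; applies queued updates."""
    while True:
        try:
            msg = gui_updates.get_nowait()
        except queue.Empty:
            break
        apply_update(msg)                      # GUI is touched here only

threads = [threading.Thread(target=worker, args=(n,))
           for n in ("parser", "memory")]
for t in threads:
    t.start()
for t in threads:
    t.join()

drain(print)   # in a real app, a timer calls this; here we call it once
```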

Code base: 6361 lines
Words known: 896
Concept-layer links: 1474

Monday, May 15, 2017

DoBot Magician User Experience

Earlier this year I had the opportunity to try out the DoBot Magician robotic arm. First I want to mention that this arm, being worth over 1000 USD, doesn't really fall within this blog's usual purview of robotics on a shoestring budget! I got to try it out on loan from a coworker; he was hoping to use it as part of a project on a tight schedule, and wanted my help to get it running properly. He was primarily interested in its 3D-printing capabilities, so that's what I'll be focusing on in the review that follows.

From a hardware quality perspective, the Magician seemed very nice: solidly built, precise, and attractive. When I first heard about the notion of using an arm to 3D print, I had some doubts … but DoBot's hardware seems to have the resolution needed to turn out decent prints. It comes with a cooling fan for the cold end of the print head, though it does not have fans to direct air down to the previous layers of your print. The only mechanical issue I noticed was that the plastic housing around the base of the arm seemed misaligned, such that it was rubbing against the moving parts on one side; this left a visible streak of abraded plastic after the arm had been running for a while.

The Magician during my first try at printing with it.
The DoBot Magician is intended to be multi-use, so a little bit of assembly was required to mount the 3D printing accessories on the arm. This proved fairly straightforward – apart from my own nervousness at even handling a $1000+ piece of equipment that didn't belong to me. Getting the software set up and linking the DoBot to my Windows 7 desktop was mostly straightforward as well. Once or twice I plugged in the USB cable and it would not connect to the computer for no apparent reason … only to start working again later, after being plugged and unplugged multiple times. I did not run into any particularly nasty driver issues, however.

The Magician comes with custom controller software provided by the manufacturer, but switches over to Repetier Host when you want to do 3D printing. A copy of Repetier is bundled with the DoBot software. The first time I requested 3D printing and was auto-swapped to Repetier, it asked me if I would like to upgrade to the latest version, as opposed to the one that came with the Magician … and I almost did. However, I ended up deciding not to meddle before I tried for my first print. I did make sure the firmware on the arm was upgraded to the manufacturer's latest version. One or both of these things might have helped me avoid seeing the problem my coworker experienced when he tried to print for the first time: namely, the arm's coordinate system seemed to be totally messed up, such that any movement upward was also accompanied by such a dramatic XY movement that it soon walked right off the print bed. I followed the documentation carefully when it came to choosing my starting settings and homing the arm, and my first prints weren't nearly so catastrophic.

I struggled through some classic 3D printing hitches, like trouble getting the first layer of the print to adhere to the bed properly. I won't dwell on these, because I suppose they would be common to most any model of 3D printer. However, there were a couple of issues that raised questions about the DoBot specifically.

The first obnoxious problem was that the Magician's print head would travel in the Z direction as it moved in the X and Y directions – not a lot, perhaps only a few millimeters, but certainly enough to ruin a 3D print. In essence, it was trying to lay down filament in a plane tilted a few degrees off horizontal. I was forced to compensate for this by carefully putting shims underneath either the print bed or the base of the arm, to get that plane back into proper alignment with the print bed. Homing the printer at any point after I did this would tilt the printing plane even further and require me to make a whole new set of adjustments, which tells me that this wasn't a simple mechanical offset; the Magician was using sensor feedback to actively fight my efforts. This made setting up for a print a very fiddly process, and of course I couldn't compensate for the angle perfectly … meaning that all my printed objects had a small but noticeable amount of XZ and YZ skew. For prints intended to be used as engineering parts, this could well be unacceptable. I looked for some way to calibrate the Magician that would work out better than shoving Popsicle sticks under it, and came up empty-handed. There doesn't seem to be a way to input measurements from your print results and have it compensate. The dimensions of the test “cube” I printed were also a millimeter or two off.

Several objects that I printed with the DoBot Magician. Skew is most noticeable on the unfinished test cube in the center, but they all have some degree of it.

The other problem was an apparent bug that only happened once. Two-thirds or so of the way through printing a test cube, the Magician stopped moving. The extruder retracted the entire filament, then switched directions and started continuously forcing plastic out, creating a giant melted blob at the location of the now-stationary print head. I had to forcibly terminate the print job, which could not then be resumed where it left off. I re-printed the exact same test cube later (after re-slicing it), and the job ran to completion without reproducing this issue. I don't know whether the root cause of the problem would be in Repetier, in the slicing software, in Magician's firmware, or in some combination of them rubbing each other the wrong way.

As I tried to troubleshoot these and other problems, I ran into a difficulty which isn't really the Magician's fault, but is pertinent nonetheless: there doesn't seem to be a critical mass of people using it. I uncovered a few product reviews, but very little content that featured other users actually struggling through problems with the arm and publishing advice. For English speakers, the Magician comes with somewhat poorly-translated documentation, and dealing with DoBot tech support involves communicating through a language barrier as well. They did respond to most of my coworker's queries in a fairly timely fashion, but started ignoring us when we requested the code for the firmware, even though the Kickstarter campaign that produced the Magician claimed it would be open-source.

Aside from wanting to correct the faulty calibration/XZ skew issue, my coworker was eventually hoping to modify the arm to extend its reach, meaning we would have to get into the firmware code and edit the inverse kinematic calculations. He ended up deciding to pull the Magician's guts out, replacing them with the electronics needed to run some open-source, third-party firmware. In the process of doing this, we discovered a final annoyance: the DoBot doesn't seem designed to be disassembled by the user for modification or maintenance. The base plate is tightly retained by the rest of the case, so after you remove the fasteners, you have to literally pry it out with a knife – hopefully not damaging the electronics inside in the process. The mechanics inside the base are supposedly not even accessible.

The arm has been returned to its owner, and I'm not sure if he's tried to print with the new electronics and firmware, but perhaps I'll add an update here if I ever hear how it goes. My conclusion is that the DoBot Magician had a lot of potential, but didn't really deliver as a user-friendly 3D printing solution. I definitely wouldn't recommend it to someone who wants an easy, works-out-of-the-box sort of experience. And when I get around to buying my own 3D printer, I'll make a point of choosing something with a robust user community that I can lean on for support.


Until the next cycle …

Sunday, April 30, 2017

Acuitas Diary #1: April 2017

I've been continuing to make improvements to Acuitas' text parser, adding support for interrogative forms and negative statements. He's now capable of learning that something does not belong to some category or have some quality – rather important, really! And now that question comprehension is in place, I can not only put more information into the database but also call it back out. Responses are still very formulaic, because text comprehension has been receiving far more of my development effort than text generation. Ask him a lot of yes-or-no questions in a row, and he starts to sound like Bit from Tron (though he does have one up on Bit – he's got the ability to answer “I don't know”).
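The three-way response – yes, no, or “I don't know” – falls out naturally once negative statements are stored alongside positive ones. A toy illustration, with an invented fact format that isn't Acuitas' real schema:

```python
# Learned facts: triple -> True (positive statement) or False (negative).
# Anything absent from the store is simply unknown.
FACTS = {
    ("cat", "is_a", "animal"): True,
    ("cat", "is_a", "plant"): False,
}

def answer(subject, relation, obj):
    """Answer a yes-or-no question from stored links."""
    verdict = FACTS.get((subject, relation, obj))
    if verdict is True:
        return "Yes."
    if verdict is False:
        return "No."
    return "I don't know."

print(answer("cat", "is_a", "animal"))   # Yes.
print(answer("cat", "is_a", "plant"))    # No.
print(answer("cat", "is_a", "vehicle"))  # I don't know.
```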

That furnishes a pretty sensible explanation for Bit, come to think of it. Somebody wrote a program with a fully capable speech parser and a really, really primitive speech generator.

I also threw in some rough support for contractions. Previously the sentence tokenizer would have treated the word isn't as three separate “words,” [isn, ', t], which would have made no sense. I fixed that. Contractions now get pre-processed into whatever their constituent words are (isn't = [is, not]) before the sentence goes for parsing. Only one possible combination is picked, however. Resolving contractions that can be ambiguous (such as “they'd,” which could mean “they had” or “they would”) is something I'm leaving for later. Getting verb conjugation detection put in before I do that will be a big help.
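The pre-processing step is essentially a lookup table applied during tokenization. Something in this spirit (the table and function names are mine, for illustration):

```python
# Hypothetical contraction table. Ambiguous cases like "they'd" get a
# single arbitrary expansion for now, as described above.
CONTRACTIONS = {
    "isn't": ["is", "not"],
    "don't": ["do", "not"],
    "they'd": ["they", "would"],   # could also mean "they had"
}

def tokenize(sentence):
    """Split on whitespace, strip edge punctuation, and expand
    contractions before the words go on to parsing."""
    tokens = []
    for raw in sentence.split():
        word = raw.strip(".,!?").lower()
        tokens.extend(CONTRACTIONS.get(word, [word]))
    return tokens

print(tokenize("A rooster isn't a human."))
# -> ['a', 'rooster', 'is', 'not', 'a', 'human']
```

Note that the apostrophe is deliberately left alone by the punctuation strip, so the contraction survives intact long enough to be looked up – instead of being shattered into nonsense fragments like [isn, ', t].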

I reserved the last week and a half of the month for a code cleanup and refactoring spree, trying to make sure the text parser and meaning extraction areas are as neat and bug-free as possible before I leave them for a while to work on other things. I've been buried in the text parser for so long now that I wonder if I quite remember what all of Acuitas' other bits and pieces do.

Code base: 5633 lines
Words known: 797

Concept-layer links: 1274

Memory visualization as of 04/29/2017