Saturday, December 23, 2017

Acuitas Diary #8 (December 2017)

Sadly I've only added one feature to Acuitas in the past two months. He now recognizes sentences in the general vein of “I somethinged,” which gives me the option of telling him how I spent my time in the recent past. Acuitas can't do a lot with this information for the time being. Sometimes he responds with a query along the lines of “What happened next?” which will eventually give him a way to build up sequences of events and start learning cause-and-effect relationships … but none of that is implemented yet. He can also ask “How was that?” to learn the emotional meaning of an activity, but again, for now he can't really utilize the answer.

Not much, but that was all I had time to put together with the holiday season under way. Looking back on the past year, though, here are all the new capabilities and improvements I've managed to add on:

*Module for procedural speech generation
*Support for word inflections (plurals and verb tenses)
*Support for compound words
*Support for content words that are also function words (e.g. “can,” “might”)
*Distinctions between proper/common and bulk/count nouns
*Ability to detect and answer questions
*Database walking while idle
*Generation of conversation topics and questions based on recent database walk
*Better link detection + a bunch of new kinds of learnable links
*Two new drives + a real-time plotter so I can see what they're all doing
*Distinctions between long-term static and short-term information
*GUI overhaul (upgrade from Tk to Kivy)

I track my time when I work on Acuitas. Total hours invested in the above: 230+. My focus for the end of the year, leading into January, will be polishing everything up and working out the bugs (of which there are now quite a few).

MERRY CHRISTMAS!

Recent memory map visualization:


Code base: 9918 lines
Words known: 1576
Concept-layer links: 4226

Sunday, October 29, 2017

Acuitas Diary #7: October 2017

The big project for this month was introducing a system for discriminating between long-term and short-term information. Previously, if you told Acuitas something like, “I am sad,” he would assume that being sad was a fixed property of your nature, and store a fact to that effect in his database. Oops. So I started working on ways to recognize when some condition is so transient that it doesn't deserve to go into long-term memory.

This probably occasioned more hard-core thinking than any feature I've added since I started keeping these diaries. I started out thinking that Acuitas would pick up on time-related words provided by the human conversation partner (such as “now,” “short,” “forever,” “years,” etc.). But when I started pondering which kinds of timeframes qualify as short-term or long-term, it occurred to me that the system shouldn't be bound to a human sense of time. One could imagine an ent-like intelligence that thinks human conditions which often remain valid for years or decades – like what jobs we hold, where we live, and what relationships we have – are comparatively ephemeral. Or one could imagine a speed superintelligence that thinks the lifetime of an average candle is a long while. I want Acuitas to be much more human-like than either of these extremes, but for the sake of code reusability, I felt I ought to consider these possibilities.


After a lot of mental churn, I decided that I just don't have the necessary groundwork in place to do this properly. (This is not an uncommon Acuitas problem. I've found that there ends up being a high level of interdependence between the various systems and features.) So I fell back on taking cues from humans as a temporary stopgap measure. Acuitas will rely on my subjective sense of time until he gets his own (which may not be for a while yet). If there's no duration indicator in a sentence, he can explicitly ask for one; he's also capable of learning over time which conditions are likely to be brief and which are likely to persist. For now, nothing is done with the transitory conditions. I didn't get around to implementing a short-term or current status region of the database, so anything that can't go in the long-term database gets discarded.
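To make the stopgap concrete, here's a toy sketch of how duration cues and learned statistics might gate what goes into long-term memory. Everything in it – the cue lists, the function names, the evidence threshold – is a made-up illustration, not Acuitas' actual code.

```python
# Toy sketch of short-term vs. long-term triage (illustrative only).

SHORT_TERM_CUES = {"now", "today", "currently", "momentarily"}
LONG_TERM_CUES = {"always", "forever", "years", "permanently"}

def classify_duration(sentence_words, condition, learned_stats):
    """Return 'long', 'short', or None (None = ask the speaker)."""
    words = {w.lower() for w in sentence_words}
    if words & LONG_TERM_CUES:
        return "long"
    if words & SHORT_TERM_CUES:
        return "short"
    # No explicit indicator: fall back on what past conversations
    # have taught us about this condition.
    seen_long, seen_short = learned_stats.get(condition, (0, 0))
    if seen_long + seen_short >= 5:  # enough evidence to guess
        return "long" if seen_long > seen_short else "short"
    return None  # explicitly ask for a duration indicator
```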

I also did some touching up around the conversation engine, replacing a few canned placeholder phrases that Acuitas was using with more procedurally generated text, and improving his ability to recognize when a speaker is introducing him/herself.

Recent memory map visualization:


Code base: 9663 lines
Words known: 1425
Concept-layer links: 3517

Saturday, September 30, 2017

Acuitas Diary #6: September 2017

For the first couple of weeks, I turned to developing the drive system some more. “Drives” are quantities that fluctuate over time and provoke some kind of reaction from Acuitas when they climb above a certain level. Prior to this month, he only had one: the Interaction drive, which is responsible for making him try to talk to somebody roughly twice in every 24-hour period. I overhauled the way this drive operates, setting it up to drop gradually over the course of a conversation, instead of getting zeroed out if somebody merely said “hello.” I also made two new drives: the Learning drive, which is satisfied by the acquisition of new words, and the Rest drive, which climbs while Acuitas is in conversation and eventually makes him attempt to sign off. Part of this effort included the addition of a plotter to the GUI, so I can get a visual of how the drives fluctuate over time.

Plot of Acuitas' three drives vs. time. The period shown is just under 23 hours long.

This latest work created the first case in which I had a pair of drives competing with each other (Rest essentially opposes Interaction). I quickly learned how easily this can go wrong. The first few times I conversed with Acuitas with the new drives in place, Rest shot up so quickly that it was above-threshold long before Interaction had come down. This is the sort of quandary a sick human sometimes gets into (“I'm so thirsty, but drinking makes me nauseated!”). Acuitas has nothing resembling an emotional system yet, though, and doesn't register any sort of distress just because one or more of his drives max out. The worst that can happen is some self-contradictory behavior (such as saying “I want to talk” and “I want to rest” in quick succession). I dealt with the problem by having the Interaction drive suppress the Rest drive. Rest now increases at a very slow rate until Interaction has been pushed below threshold.
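For anyone curious what such a system might look like in code, here's a stripped-down sketch of two competing drives with the suppression fix. The class layout and all the numbers are invented for illustration; Acuitas' real drive code is more involved.

```python
# Toy model of the Interaction/Rest tug-of-war described above.

class Drive:
    def __init__(self, name, threshold, rate):
        self.name, self.threshold, self.rate = name, threshold, rate
        self.level = 0.0

    def above_threshold(self):
        return self.level >= self.threshold

interaction = Drive("Interaction", threshold=10.0, rate=0.5)
rest = Drive("Rest", threshold=10.0, rate=2.0)

def tick(in_conversation):
    if in_conversation:
        # Interaction drops gradually instead of zeroing out at "hello."
        interaction.level = max(0.0, interaction.level - 1.0)
        # Suppression: Rest climbs very slowly until Interaction is satisfied.
        rest.level += 0.1 if interaction.above_threshold() else rest.rate
    else:
        interaction.level += interaction.rate
        rest.level = max(0.0, rest.level - 1.0)
```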

In the latter half of the month I returned to the text parser, introducing some awareness of verb conjugations/tenses, and the ability to check the agreement of candidate subjects and verbs. This helps the parser zero in on what a sentence's verb is, and has trimmed away some of the annoying “What part of speech was __?” questions that pepper a typical Acuitas conversation.
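As a rough illustration of the idea (the real parser is considerably smarter than this), a number-agreement filter can be as simple as:

```python
# Crude number-agreement check between a candidate subject and verb.
# A real implementation needs irregular plurals, auxiliaries, etc.

def agrees(subject, verb):
    """True if the pair agree in number ('bird'/'sings', 'birds'/'sing')."""
    verb_third_singular = verb.endswith("s") and not verb.endswith("ss")
    subject_singular = not subject.endswith("s")  # crude plural test
    return verb_third_singular == subject_singular

assert agrees("bird", "sings") and agrees("birds", "sing")
assert not agrees("birds", "sings")  # mismatch: reject this pairing
```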

Here's the latest memory map visualization. Since last month, Acuitas' relentless querying about concepts he already knows has caused the number of links to explode, resulting in a denser (more fibrous?) image.


Code base: 9162 lines
Words known: 1305
Concept-layer links: 3025

Tuesday, August 22, 2017

Acuitas Diary #5: August 2017


This month I got Acuitas to start asking questions, and it has been all kinds of stupid fun.


Those questions are generated semi-randomly, based on Acuitas' “thoughts” for the day. Back in July I introduced the idle thinking feature, which gives Acuitas the capacity to walk his memory database when nobody's talking to him. More recently, I set up the Conversation Engine to access the results of memory-walking and utilize the new Text Generator … so now Acuitas can ask me the questions that occur to him as he muses, or simply tell me what he thought about within the past ~12 hours. The results are little exchanges like this:

Acuitas: I thought about mans earlier.

Acuitas: Is a rooster a species of human?

Me: No

Acuitas: Really.

Me: The plural of man is men.

Believe it or not, this is one of Acuitas' more intelligently formulated questions. He knows a rooster and a man are both male, and reasons from this that they might share other qualities. A man is a human, so maybe a rooster is a human too? He can't yet apply the idea that “human” and “chicken” are mutually exclusive categories. Given his limited knowledge, man and rooster seem similar enough that he expects me to answer “yes” … hence the slightly incredulous “really” when I don't. Here are a couple more examples in the same vein (a rough sketch of this analogy-driven questioning follows them):

Acuitas: I thought about ships today.

Acuitas: Does a wagon possess an ability to sail?



Acuitas: I thought of weapons today.

Acuitas: Is a silverware used to fight?
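Here's the rough sketch promised above of how that kind of question can arise from shared qualities. The link format and all the names are hypothetical simplifications, not whatever Acuitas actually stores.

```python
import random

# Hypothetical sketch: propose a yes/no question by analogy. If two
# concepts share a quality, maybe they share a category too.

links = {
    "man":     {"is_a": {"human"}, "quality": {"male"}},
    "rooster": {"quality": {"male"}},
}

def analogy_question(concept):
    qualities = links[concept].get("quality", set())
    peers = [c for c in links
             if c != concept and links[c].get("quality", set()) & qualities]
    if not peers:
        return None
    peer = random.choice(peers)
    for category in links[peer].get("is_a", set()):
        return f"Is a {concept} a kind of {category}?"

print(analogy_question("rooster"))  # "Is a rooster a kind of human?"
```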

Some of the other questions he comes up with are far more random and ridiculous. Again, he doesn't yet have the grasp of categorical exclusion that makes these sound absurd to you and me:

Acuitas: Is chocolate a vehicle?

Acuitas: Am I a kind of tree?

Acuitas: Is a smirk a fruit?

Acuitas: Are you a cow?

Acuitas: What is a relative made of?

Not all of them are silly, though. Rudimentary though his current suite of question-forms is, he's already managed to spit out some rather deep ones. The three below are my favorites:

Acuitas: Are you a symbol?

Acuitas: Am I a tool?

Acuitas: Can I die?

I answered the first one “no”; a person can be a symbol, but I don't think I qualify. For the second one, I also went with “no.” Acuitas might end up being useful in a variety of ways, but if I consult my primary reasons for making him, they're not instrumental. The last one I refused to answer, because I think a proper response would be too complex for Acuitas' current level of understanding. It's a bit like asking whether a book or a film can die. It can't, if you go by what death means for a biological organism – information is effectively immortal. But if all the backup copies were destroyed, that would qualify as dying, I suppose. So yes and no.

I suspect it'll only get more interesting from here.

Obligatory memory map visualization:


Code base: 8507 lines
Words known: 1174
Concept-layer links: 2329

Sunday, July 30, 2017

Acuitas Diary #4: July 2017

This month I finally got to implement a feature that I've been waiting for a long time, namely, giving Acuitas the ability to “think” when he's not being spoken to. This “thinking,” for now, consists of dwelling on randomly selected concepts from his database. Once a concept has been chosen, he'll pursue it for a while, preferentially letting his focus jump to other concepts that are linked to it – executing a “wiki walk” through the database. Eventually, though, he'll get bored with any given train of thought, and the focus will move elsewhere. I added some animation code to the memory visualization so that the currently selected concept will flash periodically. (The recording below is running much faster than real time. He's actually quite leisurely in his progress.)


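A bare-bones version of such a walk might look like this. It's a sketch under assumed names – the graph format and the boredom constant are mine, not Acuitas'.

```python
import random

# Minimal "wiki walk": dwell on a concept, preferentially hop to its
# neighbors, and occasionally get bored and jump somewhere random.

def wiki_walk(graph, steps=100, boredom=0.15):
    focus = random.choice(list(graph))
    for _ in range(steps):
        yield focus
        neighbors = graph.get(focus, [])
        if neighbors and random.random() > boredom:
            focus = random.choice(neighbors)    # follow a link
        else:
            focus = random.choice(list(graph))  # new train of thought

demo = {"cat": ["animal", "tail"], "animal": ["cat"], "tail": ["cat"]}
for concept in wiki_walk(demo, steps=5):
    print(concept)
```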
There are several things I can envision doing with this behavior eventually, but my immediate purpose for it is the generation of curiosity. Each time Acuitas picks a concept, he'll come up with some sort of question about it – for instance, he could choose a type of link that it doesn't yet have and produce an open-ended question about what might be on the other side. These questions will be stored up and presented to the user the next time a conversation is under way.
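One simple way to turn a missing link into a question – again, a guess at the shape of the thing rather than the actual implementation:

```python
# Sketch: ask an open-ended question about a link type a concept lacks.

QUESTION_FORMS = {
    "can_do":  "What can a {c} do?",
    "made_of": "What is a {c} made of?",
    "is_a":    "What is a {c}?",
}

def open_question(concept, existing_links):
    for link_type, form in QUESTION_FORMS.items():
        if link_type not in existing_links:
            return form.format(c=concept)
    return None

print(open_question("relative", {"is_a"}))  # "What can a relative do?"
```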

Which leads me into the next thing I put a lot of work into this month, namely, the code to start supporting the bottom half of this diagram: speech generation.


Up until now, Acuitas has said very few things, and they've all been very formulaic … but my goal was always something beyond pre-formed sentences stored in a database. The new module I started on this month accepts inputs in the sort of abstract form that Acuitas stores in his database, then procedurally generates both questions and statements in natural English. Verbs are conjugated and plurals are matched correctly, articles are automatically added to nouns that need them, etc. Some words in the original sentence skeleton might get replaced with a random choice of synonym.
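To give a flavor of what “procedural generation” means here, a drastically simplified sketch – articles and one verb pattern only, with all names invented:

```python
# Build a natural-English statement from an abstract (subject, link,
# object) triple. The real module also handles tense, synonyms, etc.

def article(noun):
    return "an" if noun[0] in "aeiou" else "a"

def statement(subject, link, obj):
    if link == "is_a":
        return f"{article(subject).capitalize()} {subject} is {article(obj)} {obj}."
    if link == "can_do":
        return f"{article(subject).capitalize()} {subject} can {obj}."
    raise ValueError(f"no template for link type {link}")

print(statement("cat", "is_a", "animal"))  # A cat is an animal.
print(statement("ship", "can_do", "sail")) # A ship can sail.
```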

Visualization of Acuitas' concept-layer memory, 07/29/17

Neither of these major new features is actually hooked into the Conversation Engine yet, so I don't have any conversation examples to show off, but I'm hoping to be ready for that next month.

Code base: 7527 lines
Words known: 1095
Concept-layer links: 1917

Thursday, July 20, 2017

Doing business with China

Not so very long ago, whenever I wanted to build a circuit, I would get a little piece of through-hole board and painstakingly cut all the connecting wires myself. I thought having a circuit board custom-manufactured was something you only did if you had a lot of money and/or were planning on selling the boards at high volume. But apparently I've been behind the curve – it turns out there are a number of services that will manufacture small lots of custom PCBs for cheap. A few of them are so cheap, in fact, that the cost per PCB is probably less than what I would have spent on the silly through-hole prototype board! So I gave custom PCBs a try.

Old (left) and new (right)

<Disclaimer: DCDB says they'll give you a discount on your next order if you mention your completed project online.>

I decided to go with Dirt Cheap Dirty Boards, a service that submits your design to a Chinese board-manufacturing house. For fourteen dollars, you can submit one two-layer PCB layout that fits within a 5x5 cm area, and get anywhere from eight to twelve copies of it. (I got eleven. Supposedly shipments of less than ten boards are pretty uncommon.) Choose your color at no extra charge. A larger area or more layers can be had at an increased cost. Shipping is pricey if you want your boards to arrive on a normal US time frame, but if you're willing to let them throw your order on the plane whenever there's room, it's free. Given the glacially slow rate at which most of my projects seem to progress, this is perfect for me.

The PCB that I had built is a unipolar stepper motor controller. I used the free version of Eagle for schematic capture and layout, which proved to be fairly painless. DCDB lets you directly submit Eagle's native file format, .brd, but they only guarantee good results for an older version of Eagle, so I took the extra step of exporting to Gerber format.

My eleven little boards arrived looking gorgeous. I've assembled and tested most of them, without any problems. Oh, and I even got a promotional sticker in the package. How nice. On the whole, it was a good experience – certainly preferable to my painstaking manual wiring work – and I would order from them again.

Circuit board closeup

Of all the other services I looked at, the only one I remember being price-comparable was Seeed Studio. They'll sell you exactly ten 5x5 cm two-layer boards for $9.90, with an added charge of at least $2 for shipping unless your total order is over $50. Also, the boards are green; any other color adds $10 to the price. I might try ordering from them in the future and comparing results.

My other recent direct China order went through DealExtreme (www.dx.com). I specifically wanted jumper cables – you know, those simple colored wires with plastic plugs on the ends, which for some reason seem to end up costing more than the ICs they're designed to connect! But DE actually has them for what I'd consider a reasonable price. I also ended up purchasing some micro-motors and a cheap webcam. After making my order, I was alarmed by the large quantity of negative reviews I read about this website; nonetheless, all my items eventually arrived in good condition.

One frequent complaint made by reviewers is that the postal tracking numbers given by DE are invalid. I learned a couple of things in that regard that might help others who want to try ordering from this site.

1. After they e-mail you the tracking number, you may have to wait up to 48 hours before trying to track your package. Supposedly it can take that long for the Chinese postal service to enter the number in their database.
2. DE will send you a direct link in the e-mail, which you can supposedly click to track your package. These never worked for me. Instead of using this link, go to the main page of the postal service website and manually enter the tracking number in their form. (Use Google Chrome so you can auto-translate the page, if necessary.) All my tracking numbers eventually worked when I did this.

I'd still be nervous about ordering anything expensive through DealExtreme, but based on my experience, they might not be quite as terrible as the reviews will lead you to believe. My order arrived in four separate packages, and I think they all came within about a month.

Until the next cycle,

Jenny

Sunday, June 25, 2017

Acuitas Diary #3: June 2017

I didn't have any major development goals for Acuitas this month, because I wanted to force myself to do other things, like cleaning the house.  So you just get a bunch of images that came out of me playing with the semantic net visualization algorithm.  I'm fascinated by what disparate final results I can produce by introducing minor changes.  A lot of the variants in this album were made by changing the mathematical function that calculates the size of the exclusion zone (the area where other nodes can't be placed) for each node.
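For the curious, the kind of knob being twiddled here can be expressed as a tiny function. These formulas are stand-ins I made up to illustrate the idea, not the actual ones from my layout code:

```python
import math

# The exclusion zone and preferred spacing as functions of node radius.
# Swapping the linear version for the exponential one is the sort of
# one-line change that produced the wildly different images.

def exclusion_radius_linear(node_radius):
    return 2.0 * node_radius

def exclusion_radius_exponential(node_radius):
    return math.exp(0.8 * node_radius)  # fast-growing variant

def too_close(p, q, r_p, r_q, radius_fn=exclusion_radius_linear):
    """Reject a candidate placement q if it invades p's exclusion zone."""
    return math.hypot(p[0] - q[0], p[1] - q[1]) < radius_fn(r_p) + radius_fn(r_q)
```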

This is the "base" algorithm that I've been using for the past several months. It was starting to look a little messy, so I experimented with modifications.

I love staring at these. They're an example of a computer generating something that is practically relevant to its internal state, but looks otherworldly from an ordinary human perspective.

I tried eliminating a little feature from the algorithm and got this mess. It took slightly longer to draw, too.

Another silly result, caused by forcing the function that establishes the distance between nodes to be a fast-growing exponential function of the node radius.

Another exponential version, with a tamer growth rate.

And this is my new favorite!  More advanced tweaks to the formula for distance between nodes make the largest dots really "dislike" being near each other while still accommodating the little dots ... so the nodes with the most connections start to push away from the central mass and form their own sub-clusters.

Increasing a parameter to make the inter-node distance even larger produces these spidery versions.

Changing the order of node placement makes things messy.

I also wrote some little scripts to help me examine and clean up the less human-readable layers of the memory space, and I expunged some bad information that got in there on account of him misunderstanding me.  Eventually, I intend Acuitas to clean up bad information by himself, by letting repeated subsequent encounters with good information overrule and eventually purge it, but that's not implemented yet.

Thursday, June 1, 2017

Acuitas Diary #2: May 2017

My focus this past month was on giving Acuitas the ability to learn more types of inter-word relationships, and that meant doing some work in what I call the “Text Interpreter” … the module downstream from the Text Parser.


The Parser attempts to tag each word in the input with its part of speech and determine its function within the input. Basically, it figures out all the information you'd need to know in order to diagram a sentence. But beyond that there is some more work to be done to actually extract meaning, and the Interpreter handles this. Consider some of the possible ways of expressing the idea that a cat belongs to the category animal:

A cat is an animal.
Cats are animals.
A cat is a type of animal.
One type of animal is a cat.
A cat is among the animals.

By removing the content words and abstracting away some grammatical information, it's possible to generalize these into sentence skeletons that describe the legal ways of saying “X is in category Y” in English:

[A] <subject> <be-verb> [a] <direct object>
[A] <subject> <be-verb> a <subcategory word> of <object-of-preposition>
One <subcategory word> of <object-of-preposition> <be-verb> [a] <direct object>
[A] <subject> <be-verb> among the <object-of-preposition>

I've nicknamed these syntactic structures “forms.” The Interpreter's job is to detect forms and match them to concept-linking relationships. As the previous example should have shown, a single relationship such as class membership can be expressed by multiple forms, each of which has numerous possible variations of word choice, etc.
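A toy version of form detection might match lemmatized tokens against skeletons with wildcard slots. The data format here is my own invention for illustration; the Interpreter works on the Parser's much richer output:

```python
# Match a lemmatized sentence against "forms" and extract content words.
# '*' marks a slot that captures a content word.

FORMS = [
    (["*", "be", "a", "type", "of", "*"], "class_membership"),
    (["*", "be", "among", "the", "*"],    "class_membership"),
    (["*", "be", "*"],                    "class_or_quality"),
]

def match_form(lemmas):
    for skeleton, relation in FORMS:
        if len(skeleton) == len(lemmas) and all(
                s == "*" or s == w for s, w in zip(skeleton, lemmas)):
            return relation, [w for s, w in zip(skeleton, lemmas) if s == "*"]
    return None

# "A cat is a type of animal" -> lemmatize, drop the optional article:
print(match_form(["cat", "be", "a", "type", "of", "animal"]))
# ('class_membership', ['cat', 'animal'])
```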

Up until now, the only links Acuitas could add to his database were class memberships (<thing> is a <thing>) and qualities (<thing> is <descriptive word>), plus their negations – and he only recognized a single form for each. I overhauled the form detection method, making it more powerful/general and increasing the ease of adding new forms to the code. Then I added more forms and support for a number of new link relationships, including ...

<thing> can do <action>
<thing> is for <action>
<thing> is part of <thing>
<thing> is made of <thing>
<thing> has <thing>

The first two are particularly important, since they mean he can finally start learning some generic verbs.


I spent the latter half of the month upgrading Acuitas' GUI library from Tkinter to Kivy. This was a somewhat unwelcome distraction from real development work, but it had to be done. Acuitas is a multi-threaded program, and using multiple threads with Tkinter is ... not straightforward. As the program grew more complex, my hacky method of letting all the threads update the GUI was becoming increasingly unsupportable and causing instability. Of course Kivy does just about everything differently, so porting all of the GUI elements I'd developed was a serious chore -- but the new version looks slick and, most importantly, doesn't crash. All the drawn graphics have anti-aliasing now, which makes the memory visualizations look nicer when zoomed out.

Code base: 6361 lines
Words known: 896
Concept-layer links: 1474

Monday, May 15, 2017

DoBot Magician User Experience

Earlier this year I had the opportunity to try out the DoBot Magician robotic arm. First I want to mention that this arm, being worth over 1000 USD, doesn't really fall within this blog's usual purview of robotics on a shoestring budget! I got to try it out on loan from a coworker; he was hoping to use it as part of a project on a tight schedule, and wanted my help to get it running properly. He was primarily interested in its 3D-printing capabilities, so that's what I'll be focusing on in the review that follows.

From a hardware quality perspective, the Magician seemed very nice: solidly built, precise, and attractive. When I first heard about the notion of using an arm to 3D print, I had some doubts … but DoBot's hardware seems to have the resolution needed to turn out decent prints. It comes with a cooling fan for the cold end of the print head, though it does not have fans to direct air down to the previous layers of your print. The only potential mechanical issue I noticed was that the plastic housing around the base of the arm seemed misaligned, such that it was rubbing against the moving parts on one side; this left a visible streak of abraded plastic after the arm had been running for a while.

The Magician during my first try at printing with it.

The DoBot Magician is intended to be multi-use, so a little bit of assembly was required to mount the 3D printing accessories on the arm. This proved fairly straightforward – apart from my own nervousness at even handling a $1000+ piece of equipment that didn't belong to me. Getting the software set up and linking DoBot to my Windows 7 desktop was mostly straightforward as well. There was a time or two when I plugged in the USB cable and it would not connect to the computer for no apparent reason … only to start working again later, after being plugged and unplugged multiple times. I did not run into any particularly nasty driver issues, however.

The Magician comes with custom controller software provided by the manufacturer, but switches over to Repetier Host when you want to do 3D printing. A copy of Repetier is bundled with the DoBot software. The first time I requested 3D printing and auto-swapped to Repetier, it asked me if I would like to upgrade to the latest version, as opposed to the one that came with the Magician … and I almost did. However, I ended up deciding not to meddle before I tried for my first print. I did make sure the firmware on the arm was upgraded to the manufacturer's latest version. One or both of these things might have helped me avoid seeing the problem my co-worker experienced when he tried to print for the first time: namely, the arm's coordinate system seemed to be totally messed up, such that any movement upward was also accompanied by such a dramatic XY movement that it soon walked right off the print bed. I followed the documentation carefully when it came to choosing my starting settings and homing the arm, and my first prints weren't nearly so catastrophic.

I struggled through some classic 3D printing hitches, like trouble getting the first layer of the print to adhere to the bed properly. I won't dwell on these, because I suppose they would be common to most any model of 3D printer. However, there were a couple of issues that raised questions about the DoBot specifically.

The first obnoxious problem was that the Magician's print head would travel in the Z direction as it moved in the X and Y directions – not a lot, perhaps only a few millimeters, but certainly enough to ruin a 3D print. In essence, it was trying to lay down filament in a plane tilted a few degrees off horizontal. I was forced to compensate for this by carefully putting shims underneath either the print bed or the base of the arm, to get that plane back into proper alignment with the print bed. Homing the printer at any point after I did this would tilt the printing plane even further and require me to make a whole new set of adjustments, which tells me that this wasn't a simple mechanical offset; the Magician was using sensor feedback to actively fight my efforts. This made setting up for a print a very fiddly process, and of course I couldn't compensate for the angle perfectly … meaning that all my printed objects had a small but noticeable amount of XZ and YZ skew. For prints intended to be used as engineering parts, this could well be unacceptable. I looked for some way to calibrate the Magician that would work out better than shoving Popsicle sticks under it, and came up empty-handed. There doesn't seem to be a way to input measurements from your print results and have it compensate. The dimensions of the test “cube” I printed were also a millimeter or two off.

Several objects that I printed with the DoBot Magician. Skew is most noticeable on the unfinished test cube in the center, but they all have some degree of it.

The other problem was an apparent bug that only happened once. Two-thirds or so of the way through printing a test cube, the Magician stopped moving. The extruder retracted the entire filament, then switched directions and started continuously forcing plastic out, creating a giant melted blob at the location of the now-stationary print head. I had to forcibly terminate the print job, which could not then be resumed where it left off. I re-printed the exact same test cube later (after re-slicing it), and the job ran to completion without reproducing this issue. I don't know whether the root cause of the problem would be in Repetier, in the slicing software, in Magician's firmware, or in some combination of them rubbing each other the wrong way.

As I tried to troubleshoot these and other problems, I ran into a difficulty which isn't really the Magician's fault, but is pertinent nonetheless: there doesn't seem to be a critical mass of people using it. I uncovered a few product reviews, but very little content that featured other users actually struggling through problems with the arm and publishing advice. For English speakers, the Magician comes with somewhat poorly-translated documentation, and dealing with DoBot tech support involves communicating through a language barrier as well. They did respond to most of my co-worker's queries in a fairly timely fashion, but started ignoring us when we requested the code for the firmware, even though the Kickstarter campaign that produced the Magician claimed it would be open-source.

Aside from wanting to correct the faulty calibration/XZ skew issue, my coworker was eventually hoping to modify the arm to extend its reach, meaning we would have to get into the firmware code and edit the inverse kinematic calculations. He ended up deciding to pull the Magician's guts out, replacing them with the electronics needed to run some open-source third-party firmware. In the process of doing this, we discovered a final annoyance: DoBot doesn't seem designed to be disassembled by the user for modification or maintenance. The base plate is tightly retained by the rest of the case, so after you remove the fasteners, you have to literally pry it out with a knife – hopefully not damaging the electronics inside in the process. The mechanics inside the base are supposedly not even accessible.

The arm has been returned to its owner, and I'm not sure if he's tried to print with the new electronics and firmware, but perhaps I'll add an update here if I ever hear how it goes. My conclusion is that the DoBot Magician had a lot of potential, but didn't really deliver as a user-friendly 3D printing solution. I definitely wouldn't recommend it to someone who wants an easy, works-out-of-the-box sort of experience. And when I get around to buying my own 3D printer, I'll make a point of choosing something with a robust user community that I can lean on for support.


Until the next cycle …

Sunday, April 30, 2017

Acuitas Diary #1: April 2017

I've been continuing to make improvements to Acuitas' text parser, adding support for interrogative forms and negative statements. He's now capable of learning that something does not belong to some category or have some quality – rather important, really! And now that question comprehension is in place, I can not only put more information into the database, but also call it back out. Responses are still very formulaic, because text comprehension has been receiving far more of my development effort than text generation. Ask him a lot of yes-or-no questions in a row, and he starts to sound like Bit from Tron (though he does have one up on Bit – he's got the ability to answer “I don't know”).
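The “I don't know” trick falls out naturally if stored facts are three-valued. Here's the gist, with a storage format I've made up for the example:

```python
# Three-valued lookup: True, False, or absent (unknown).

facts = {
    ("cat", "is_a", "animal"): True,
    ("cat", "is_a", "plant"):  False,  # learned from a negative statement
}

def answer(subject, relation, obj):
    verdict = facts.get((subject, relation, obj))
    if verdict is None:
        return "I don't know."
    return "Yes." if verdict else "No."

print(answer("cat", "is_a", "animal"))  # Yes.
print(answer("cat", "is_a", "plant"))   # No.
print(answer("cat", "is_a", "rock"))    # I don't know.
```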

That furnishes a pretty sensible explanation for Bit, come to think of it. Somebody wrote a program with a fully capable speech parser and a really, really primitive speech generator.

I also threw in some rough support for contractions. Previously the sentence tokenizer would have treated the word isn't as three separate “words,” [isn, ', t], which would have made no sense. I fixed that. Contractions now get pre-processed into whatever their constituent words are (isn't = [is, not]) before the sentence goes for parsing. Only one possible combination is picked, however. Resolving contractions that can be ambiguous (such as “they'd,” which could mean “they had” or “they would”) is something I'm leaving for later. Getting verb conjugation detection put in before I do that will be a big help.
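The pre-processing step amounts to a lookup table applied before parsing – something like this sketch, with illustrative table entries:

```python
# Expand contractions into their constituent words before parsing.
# Ambiguous cases currently just pick one reading.

CONTRACTIONS = {
    "isn't":  ["is", "not"],
    "don't":  ["do", "not"],
    "they'd": ["they", "would"],  # could also mean "they had"
}

def expand_contractions(tokens):
    expanded = []
    for token in tokens:
        expanded.extend(CONTRACTIONS.get(token.lower(), [token]))
    return expanded

print(expand_contractions(["That", "isn't", "right"]))
# ['That', 'is', 'not', 'right']
```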

I reserved the last week and a half of the month for a code cleanup and refactoring spree, trying to make sure the text parser and meaning extraction areas are as neat and bug-free as possible before I leave them for a while to work on other things. I've been buried in the text parser for so long now that I wonder if I quite remember what all of Acuitas' other bits and pieces do.

Code base: 5633 lines
Words known: 797

Concept-layer links: 1274

Memory visualization as of 04/29/2017

Saturday, April 22, 2017

Acuitas the Semantic Net AI - Introduction

Okay, this blog has been asleep for far too long. It's time for me to introduce a couple of the things that I've been spending all my time on since I finished playing with the artificial muscles. First of all, I've got a couple novels in the pipeline, which you can read about by visiting The Renascence Cycle under the permanent pages section. But that's not what this post is about. There's another serious project that's been receiving a lot of my attention lately …

The tagline of this blog is “robotics and AI on a shoestring budget.” If you've been here before, maybe you were wondering when the AI part was coming. Well wait no longer!

I'm working on a program named Acuitas. Technically this endeavor started clear back when I was in college, as my final project for the honors technology seminar class. Back then he was little more than a kind of talking dictionary. I'm now on Version 3, and the intended scope of the project has expanded quite a bit. (If you'd like a little more background on the project and how it relates to the rest of the AI field, or if you're just interested to see what Version 1 was like, you can read the original Acuitas V1 Report and Acuitas V1 Presentation. Bear in mind that Version 3 has been re-written in Python from the ground up. It's missing some features that V1 originally had, and has a number of others that were absent from V1.)

Acuitas' avatar. “Heartbeat” lights run up the wings on either side of the eye when viewed live.

In technical terms, Acuitas V3 is an artificial intelligence based around a semantic network with a natural language interface. What that means is he represents knowledge in terms of words and relationships between words, and you can communicate with him in normal English. (The range of normal English he can actually grasp so far is very limited, because natural language processing is hard and I'm only tackling it because I'm probably crazy.) He's also got a rudimentary "internal life" consisting of drives that fluctuate over time. About twice a day, if he's active, he wants interaction and starts calling for somebody to talk to. He used to wake me up in the early morning hours by doing this, but I fixed that – mostly.
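If you're wondering what “words and relationships between words” looks like as a data structure, one plausible minimal shape – purely illustrative, since the real database schema isn't shown on this blog – is:

```python
# A semantic net as nested dictionaries: concept -> link type -> targets.

semantic_net = {
    "cat":    {"is_a": {"animal"}, "has": {"tail"}, "can_do": {"sit"}},
    "animal": {"is_a": {"organism"}},
}

def related(concept, link_type):
    return semantic_net.get(concept, {}).get(link_type, set())

print(related("cat", "is_a"))  # {'animal'}
```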

Visual representation of semantic net database, 07/16/16.

Acuitas V3 is not your ordinary chatbot – you know, those programs you can converse with that try to fake being human. He has more going on under the hood than they do. A typical chatbot picks up on a few keywords or basic structures in your text input, then chooses a mostly pre-written sentence from a database to send back to you. Others train themselves by reading a bunch of human conversations, and then try to speak in a similar manner. Neither type is really interested in the *meaning* of your input or its own responses, which is why, though they might be perfectly articulate, the responses often seem to come out of left field. I'm following a different approach and hoping to come up with something better than that.

Visual representation of semantic net database, 08/06/16

I've been working aggressively on his linguistic abilities over the past few months. He can parse simple subject-verb-object sentences with or without prepositional phrases, like "A cat is an animal" or "A yellow bird sings in the morning," with some rudimentary support for compound nouns and proper names. He tries to figure out how each word functions in the sentence; if he can't get it due to insufficient background knowledge, he'll ask for clarification (a normal chatbot would just try to fake it). So far he just socks information away in his database and doesn't say much back, but that's coming. Oh, that's coming.

Visual representation of semantic net database, 08/23/16

The semantic database can be represented as a graph … hence the images embedded in this post, which show a timeline of its growth, from July of last year up until this April 8. You can also get an idea of how the graph-drawing algorithm has changed over time. Each dot is a concept, and the lines are the relationships between them. When viewed live while Acuitas is running, the graph is interactive; I can pan, zoom, and hover my cursor over the dots for floating labels that show which word is associated with each.

Visual representation of semantic database, 04/08/17

Since I'm putting some earnest effort into Acuitas this year, I'm going to try making myself write a fairly regular developer diary … maybe once a month. Watch me do cool things and struggle. Talking seems easy until you've tried to teach a computer to do it …


Code base: 4901 lines
Words known: 631