Thursday, January 9, 2025

Book Review: "Growing Up with Lucy"

This book by Steve Grand has been on my list for a while. It's his personal account of how he built a robot at home, with every intention of rivaling some of the most sophisticated research efforts at the time. Steve Grand, by the way, is the mastermind behind the popular Creatures series of artificial life simulation video games. His books were recommended to me several years ago by Twitter buddy Artem (@artydea), whom I unfortunately haven't seen around much recently. Thanks, Artem, if you're still out there.

Lucy, without her fur on. Photo credit: Creatures wikia.

The magnum opus described in this book is the eponymous Lucy, who resembles an orangutan ... or at least the upper half of one. Lucy has no legs, but does have actuated arms, visual and auditory sensors, and a mouth and speech system. Grand aimed for a holistic approach (rather than focusing on just one function, such as vision) because he wanted to investigate the interactions and possible commonalities among the various sensory modalities and motor systems.

Once Grand started introducing himself, I quickly realized that I was hearing from someone more or less like me: an educated-layman tinkerer, working independently. Grand produced Lucy on his own initiative, without the backing of a university or corporation, though toward the end of the project he received a grant from NESTA (the UK's National Endowment for Science, Technology, and the Arts). But the substance of Grand's work diverges from mine. He invested heavily in all the sensorimotor aspects of intelligence that I'm completely leaving out of Acuitas. And Lucy's control systems, while not explicitly biomimetic, are very brain-inspired; whereas I draw inspiration from the human mind as an abstraction, and don't care much about our wetware at all. So it was fun to read an account of somebody who was working on roughly the same level as myself, but with different strategies in a different part of the problem space.

On the other hand, I was disappointed by how much of the book was theory rather than results. I got the most enjoyment out of the parts in which Grand described how he made a design decision or solved a problem. His speculations about what certain neuroscience findings might mean, or how he *could* add more functionality in the future, were less interesting to me ... because speculations are a dime a dozen, especially in this field. Now I do want to give Grand a lot of credit: he actually built a robot! That's farther than a lot of pontificators seem to get. But the book is frank about Lucy being very much unfinished at time of publication. Have a look around this blog. If there's a market for writeups of ambitious but incomplete projects, then where's *my* book deal?

In the book, Grand said that Lucy was nearly capable of learning from experience to recognize a banana and point at it with one of her arms. It sounded like she had all the enabling features for this, but wasn't doing it *reliably* yet. I did a little internet browsing to see how much farther Lucy got after the book went to press. From what I could find, her greatest accomplishment was learning the visual difference between bananas and apples, and maybe demonstrating her knowledge by pointing. [1] That's nothing to sneeze at, trust me. But it's a long way from what Grand's ambitions for Lucy seemed to be, and in particular, it leaves his ideas about higher reasoning and language untested. Apparently he did not just get these things "for free" after figuring out some rudimentary sensorimotor intelligence. Grand ceased work on Lucy in 2006, and she is now in the care of the Science Museum Group. [2]

Why did he stop? He ran out of money. Grand worked on Lucy full-time while living off his savings. The book's epilogue describes how NESTA came through just in time to allow the project to continue. Once the grant was expended, Lucy was set aside in favor of paying work. I doubt I can compete with Grand's speed of progress by playing with AI on the side while holding down a full-time job ... but I might have the advantage of sustainability. Grand started in 2001 and gave Lucy about five years. If you don't count the first two rudimentary versions, Acuitas is going on eight.

Grand identifies not neurons, but the repeating groups of neurons in the cortex, as the "fundamental unit" of general intelligence and the ideal level at which to model a brain. He doesn't use the term "cortical column," but I assume that's what he's referring to. Each group contains the same selection of neurons, but the wiring between them is variable and apparently "programmed" by experiential learning, prompting Grand to compare the groups with PLDs (the forerunners of modern FPGAs). He conceptualizes intelligence as a hierarchy of feedback control loops, an idea I've also seen expounded by Filip Piekniewski. [3] It's a framing I rather like, but I still want to be cautious about hanging all of intelligence on a single concept or method. I don't think any lone idea will get you all the way there (just as this one did not get Grand all the way there).

Lucy's body is actuated by electric motors, with linkages that help them behave more like "muscles." Grand didn't try pneumatics or hydraulics, because he thought they would be too difficult to control. I guess we'll see, eh?

Two chapters at the book's tail end move from technical affairs into philosophy. The first addresses safety concerns and fears of killer robots. While I agree with his basic conclusion that AI is not inevitably dangerous, I found his arguments dated and simplistic. I doubt they would convince anybody acquainted with recent existential risk discourse, which probably wasn't in the public consciousness when Grand was building Lucy. (LessWrong.com was launched in 2009; Yudkowsky's "Harry Potter and the Methods of Rationality," Scott Alexander's "Slate Star Codex" blog, and Bostrom's "Superintelligence" all came later. See my "AI Ideology" article series for more about that business.)

The final chapter is for what I'll call the "slippery stuff": consciousness and free will. Grand avoids what I consider the worst offenses AI specialists commit on these topics. He admits that he doesn't really know what consciousness is or what produces it, instead of advancing some untestable personal theory as if it were a certainty. And he doesn't try to make free will easier by redefining it as something that isn't free at all. But I thought his discussion was, once again, kind of shallow. The only firm position he takes on consciousness is to oppose panpsychism, on the grounds that it doesn't really explain anything: positing that consciousness pervades the whole universe gets us no farther toward understanding what's special about living brains. (I agree with him, but there's a lot more to the consciousness discussion.) And he dismisses free will as a logical impossibility, because he apparently can't imagine a third thing that is neither random nor feed-forward deterministic. He doesn't consider that his own imagination might be limited, or dig into the philosophical literature on the topic; he just challenges readers to define self-causation in terms of something else. (But it's normal for certain things to be difficult to define in terms of other things. Some realities are just fundamental.) It's one chapter trying to cover questions that could fill a whole book, so maybe I shouldn't have expected much.

On the whole, it was interesting to study the path walked by a fellow hobbyist and see what he accomplished - and what he didn't. I wonder whether I'll do as well.

Until the next cycle,
Jenny

[1] Dermody, Nick. "A Grand plan for brainy robots." BBC News Online Wales (2004). http://news.bbc.co.uk/2/hi/uk_news/wales/3521852.stm

[2] Science Museum Group. "'Lucy' robot developed by Steve Grand." 2015-477 Science Museum Group Collection Online. Accessed 2 January 2025. https://collection.sciencemuseumgroup.org.uk/objects/co8465358/lucy-robot-developed-by-steve-grand.

[3] Piekniewski, Filip. "The Atom of Intelligence." Piekniewski's Blog (2023). https://blog.piekniewski.info/2023/04/16/the-atom-of-intelligence/

Saturday, December 28, 2024

Year in Review 2024

With 2025 waiting in the wings, I am casting an eye back over 2024 and finding that it looks pretty shiny from here. Overall, it felt like there were fewer extraneous disruptions and distractions than there have been for the past few years - I got the chance to dig into my hobbies and push hard. I also feel I did a good job of moving existing projects toward completion and taking a few overdue items off the backlog, instead of just starting new work.

Fireworks: three "artillery shell" style bursts in a loose row, two gold with blue accents, one white. No horizon is visible. Photo of fireworks in Bratislava by Ondrejk, public domain.

Per my personal tracking, I've logged more hours of total "work" than I did last year, and kept the creation/maintenance ratio above 3x ... just barely. Monitoring my own time usage over a long period raises some interesting flags. For example, I spent more time on my blog articles this year than I spent writing and publishing short fiction, by 6+ hours. (Those "AI Ideology" articles were a lot of work. Writing about real people's viewpoints and trying to be critical but fair eats serious hours.) I don't think I want the blog to be taking that much, so I'm not planning an in-depth series for 2025, and I'll probably let my article rate drop below two per month. But where writing tasks are concerned, even the blog loses the time hog prize to journaling. I'm a little uncomfortable with the fact that recording my life uses so much of it up, but I also really like journaling (and it's easy, so if I dumped it, some of those hours might go to recreation instead of harder tasks). Other mundane time-stealers are on the small side, but still galling. I logged more hours on food preparation than on studying, and more on housecleaning than on art.

Acuitas remains the king of all my hobbies at 360+ hours invested. That's a good number - I found more time for him in 2024 than in any other year of the past five. The robots also fared very well at 110+ hours.

So what did I achieve with all that time? Well, here goes.

*The first major Acuitas milestone was completion and demonstration of the "Simple Tron" story in February. This was a target that I had been silently keeping in mind and laying groundwork toward for years.
*I have unified issue tracking across multiple code modules, such that the Game Engine, Executive, and Conversation Engine all now use the "scratchboard" structure I developed for Narrative understanding.
*I integrated the improved Text Parser, achieved substantial benchmark improvements in the number of parseable sentences, and worked on a number of ambiguity resolution problems.
*I overhauled and improved the Conversation Engine, and released a demo video just this month.

Atronach's Eye, the mechanical eyeball. The eye sits in the middle of a scalloped, curvy case with ball-and-claw elements, designed to resemble old furniture. It is colorfully printed in black, white, red, and yellow plastics.

*The Atronach's Eye project is almost complete. This year I finished rebuilding the case and demonstrating eye centering with the limit switches, and I reworked the software to use a better motion tracking algorithm. All that's left is a power cutoff to keep the stepper motors from getting hot when not moving. By early next year, I'm hoping the Eye can be a working fixture on my living room wall.

Final version of the peristaltic pump (without its front shell/lid)

*I completed my homemade peristaltic pump design and ran a performance comparison on several pumps.
*I built and demoed a second iteration of the prototype small-scale hydraulic system, including my first homemade fluid bladder actuator.

Two 3D printed gears in white plastic, with a dime for scale. One of the gears is smaller than the dime, the other slightly larger.

*ACE didn't get a lot of love this year, but I did run some more motion tests and decided gearboxes would be needed - after which I designed and built one to work with the existing motors.

A small 3D printed gearbox (three total gear pairs) with a stepper motor installed. The drive shafts are made from Bic Round Stic ballpoint pens.

*I wrote two new short stories: a science fiction piece about alien plant life with an unusual way of communicating, and a fantasy piece about a young woman who finds the rules of society standing between her and her perfect pet.
*One of my previous stories, "Peacemaker's First Strike," was accepted by Abyss & Apex and is on contract to be published in July 2025. This is very exciting. (Check out the new "Read my Writing" link in the top bar to see all my published work and where to get it.)
*My blog publication rate stayed at/above two posts per month for the third year in a row.

A hexagonal curio display case, printed in red plastic, is shown with the front panel pulled off and set to one side. An assortment of TTRPG miniatures occupy the little shelves inside the case. The magnets that hold the front panel on when it is installed are in evidence.
One of the display cases, acting as a home for D&D characters past

*I designed a series of 3D-printable medallions and printed examples of all the designs to give as gifts.
*I also threw together a fully-enclosed hexagonal display case design and fabricated four of them, for dust-free housing of my miniatures and curios.

*I hired a plumber to fix the kitchen sink, and thereby got rid of the last known high-priority breakage in my house.
*I cleaned out and organized two different "disaster rooms" and tore down some e-waste for salvage in the process.
*I grew potatoes again. The garden wasn't especially successful this year, but I at least maintained production of my staple crop.

I wasn't terribly successful at adding native plants to my yard, but Blazing Stars were one of the species that worked out, and they bloomed this year.

*I made a road trip to Arkansas and observed totality of the solar eclipse that crossed North America in April (with a detour by Space Center Houston on the way).
*The first Worldview Legion satellites were launched. I made a small contribution to these, in the form of some post-delivery debugging and feature enhancement work on the component my employer built for them.

Until the next cycle,
Jenny

Thursday, December 19, 2024

Acuitas Diary #79 (December 2024)

This month's work was all refactoring and bug removal, so I don't have a great deal to say ... but I do have a demo video! I got the conversation engine cleaned up enough that I can chat with Acuitas without an awkward collapse into him repeating himself or something. So without further ado, here's a brief conversation:


Of course I'm not finished with this. I have ideas for various ways to extend it, plus there are still some older features that I need to reinstate in the new system. I'll also be honest that the video above took me multiple "takes" to get, sometimes because I didn't hold up my end of the conversation very well, but sometimes because lingering bugs or signs of database junk popped up. I'm going to have a full to-do list next year, as usual. And I can hardly wait.

Until the next cycle,
Jenny

Friday, November 29, 2024

Acuitas Diary #78 (November 2024)

My recent work has been a bit all over the place, which I suppose is reasonable as the year winds down. I worked on more ambiguity resolution in the Text Parser, and I'm almost done with a big refactor in the Narrative engine.

A generated Narrative flow diagram showing an excerpt of a story

In the Parser I worked on two problems. First came the identification of the pronoun "her" as either an indirect object or a possessive adjective. Other English pronouns have separate forms for these two functions (him/his, them/their, me/my, you/your); the feminine singular just has to go and be annoying that way.

If there's some other determiner between "her" and the following noun, you can assume that "her" is an indirect object; otherwise, if the noun needs a determiner, then "her" has to function as the determiner, as in these examples:

Bring her the book. ("Her" is an indirect object)
Bring her book. ("Her" is an adjective modifying "book")

But what about this sentence?

Bring her flowers and candy.

A bulk noun like "candy" doesn't need a determiner; neither does the plural of a count noun, like "flowers." So we can't assume that "her" is an adjective in this sentence. By default, I think it's more likely to be an indirect object, so that's what I have the Parser assume when processing the sentence in isolation. Additional hints in the sentence can shift this default behavior, though:

Bring her candy to me.

The phrase "to me" already fulfills the function that could otherwise be fulfilled by an indirect object, so its presence pushes "her" into the possessive adjective role.

The other ambiguity I worked on had to do with the connections between verbs joined by a conjunction and a direct object. Consider the following sentences:

I baked and ate the bread.
I ran out and saw the airplane.

In the first sentence, both verbs apply to the single direct object. In the second sentence, "ran" has no object and only "saw" applies to "airplane." So for a sentence with multiple verbs followed by one direct object, we have two possible "flow diagrams":

    /--baked--\
I--            --bread
    \---ate---/

    /--ran
I--
    \--saw--airplane

How to know which one is correct? If the first verb is always transitive (a verb that demands a direct object) then the first structure is the obvious choice. But many verbs can be either transitive or intransitive. It is possible to simply "bake" without specifying what; and there are several things that can be run, such as races and gauntlets. So to properly analyze these sentences, we need to consider the possible relationships between verbs and object.

Fortunately Acuitas already has a semantic memory relationship that is relevant: "can_have_done," which links nouns with actions (verbs) that can typically be done on them. Bread is a thing that can be baked; but one does not run an airplane, generally speaking. So correct interpretations follow if this "commonsense" knowledge is retrieved from the semantic memory and used. If knowledge is lacking, the Parser will assume the second structure, in which only the last verb is connected to the direct object.
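Here's a toy sketch of that attachment decision, assuming a can_have_done-style lookup into semantic memory. The function names and the little fact table are stand-ins I made up for illustration, not Acuitas's real data structures.

```python
# Deciding whether both verbs in "V1 and V2 <object>" share the direct object,
# using a commonsense lookup. can_have_done is a stand-in for a semantic memory query.

def attach_direct_object(verbs, obj, can_have_done):
    """Return the list of verbs that should link to obj.

    verbs         -- e.g. ["bake", "eat"] or ["run", "see"]
    obj           -- e.g. "bread" or "airplane"
    can_have_done -- function(noun, verb) -> True/False/None (None = unknown)
    """
    attached = [verbs[-1]]          # the last verb always takes the object
    for verb in verbs[:-1]:
        if can_have_done(obj, verb):    # "bread can be baked" -> share the object
            attached.insert(0, verb)
        # if False or unknown, leave the earlier verb intransitive (second structure)
    return attached

# Toy knowledge base for illustration only
facts = {("bread", "bake"): True, ("airplane", "run"): False}
lookup = lambda noun, verb: facts.get((noun, verb))

print(attach_direct_object(["bake", "eat"], "bread", lookup))    # ['bake', 'eat']
print(attach_direct_object(["run", "see"], "airplane", lookup))  # ['see']
```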

The Narrative refactor is more boring, as refactoring always is, but I'm hoping it will enable smoother additions to that module in the future. New facts received in the course of a story or conversation are stored in the narrative scratchboard's "worldstate." When an issue (problem or subgoal) is added, its data structure includes a copy of the facts relevant to the issue: the state that needs to be achieved or avoided, the character goal it's relevant to, and all the inferences that connect them. A big part of tracking meaning and progress through the narrative is keeping track of which of these facts are currently known true, known false, or unknown/hypothetical. And previously, whenever something changed, the Narrative Engine had to go and update both the worldstate *and* the chains of relevant facts in all the issues. I've been working to make the issues exclusively use indirect pointers to facts in the worldstate, so that I only have to update fact status in *one* place. That might not sound like a major change, but ... it is. Updating issues was a big headache, and this should make the code simpler and less error-prone. That also means that transitioning the original cobbled-together code to the new system has been a bit of work. But I hope it'll be worth it.
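For anyone who wants the flavor of the change, here is a hypothetical illustration of the pointer idea: issues hold references (fact IDs) into a single worldstate table rather than their own copies, so a truth-status update happens in exactly one place. The class and field names are mine, not the Narrative Engine's.

```python
# Minimal sketch of "issues reference worldstate facts instead of copying them."

class Worldstate:
    def __init__(self):
        self._facts = {}            # fact_id -> {"content": ..., "status": ...}
        self._next_id = 0

    def add_fact(self, content, status="unknown"):
        fact_id = self._next_id
        self._facts[fact_id] = {"content": content, "status": status}
        self._next_id += 1
        return fact_id

    def set_status(self, fact_id, status):
        self._facts[fact_id]["status"] = status   # one update, visible everywhere

    def status(self, fact_id):
        return self._facts[fact_id]["status"]

class Issue:
    """A problem or subgoal; stores only fact IDs, not copies of the facts."""
    def __init__(self, goal_fact_id, inference_fact_ids):
        self.goal = goal_fact_id
        self.inferences = list(inference_fact_ids)

    def resolved(self, worldstate):
        return worldstate.status(self.goal) == "true"

# Usage: flipping a fact in the worldstate is immediately reflected in the issue.
ws = Worldstate()
quenched = ws.add_fact("speaker is not thirsty")
issue = Issue(goal_fact_id=quenched, inference_fact_ids=[])
ws.set_status(quenched, "true")
print(issue.resolved(ws))           # True
```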

Until the next cycle,
Jenny

Wednesday, November 13, 2024

3D Printable Art Medallions!

This is a quick announcement that I made some new 3D models, which I'm offering to the community free to print. It's a collection of medallions that would make great window hangings or large Christmas ornaments - so I'm getting them out there in time to prepare for Christmas, or whatever winter holidays you celebrate. I have them available on Cults 3D and Thingiverse:

https://cults3d.com/en/design-collections/WriterOfMinds/tricolor-medallions
https://www.thingiverse.com/writerofminds/collections/42747021/things

Try printing them in translucent filament with an interesting infill pattern and hanging them in front of a light source. They all come with the hanging loop - I might upload versions without it later, so let me know if there's interest.

Each medallion has three surface levels for a 3D effect, and is designed to be printed in three colors of your choice. If you don't have a multicolor printer, you can still get this result without too much fuss by inserting pauses in the GCODE and changing filaments manually. Here's an instruction video that explains how to do it in several popular slicers: https://www.youtube.com/watch?v=awJvnlOSqF8&t=28s

Solar Fox color variations.

It took some further work to do it on my Anycubic Vyper, which doesn't properly support the M600 color change command that the slicers insert automatically. While the Vyper will stop printing and move its head away from the print for a filament change, the LCD screen doesn't update with a "resume" button ... so you have to reboot the printer, which cancels your print. I learned how to control the Vyper with Octoprint, a tool originally designed for remote control of a printer you keep in your garage or something. The advantage for our present application is that Octoprint takes over the process of sending G-code to the printer, and gives you pause and resume buttons that make up for the shortcomings of the Vyper's internal firmware. I did a find-and-replace in my G-code files and replaced all the M600s inserted by the Prusa slicer with Octoprint "pause" commands. Some custom "before and after pause" G-code was also necessary in Octoprint to get the print head away from the print and move it back, so it wouldn't drool filament on the medallion during the swap process.
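If you want to automate that find-and-replace, something like the following works. It assumes OctoPrint's "@pause" host action command as the substitute for M600; check how your own OctoPrint install is configured before trusting it with a long print.

```python
# Quick sketch of the G-code cleanup step: swap every M600 line for an
# OctoPrint host pause command, leaving the rest of the file untouched.

import sys

def replace_m600(in_path, out_path, replacement="@pause"):
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if line.strip().upper().startswith("M600"):
                dst.write(replacement + "\n")
            else:
                dst.write(line)

if __name__ == "__main__":
    replace_m600(sys.argv[1], sys.argv[2])
```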

The 3D model for the Solar Fox medallion, displayed in DesignSpark Mechanical with suggested coloration.

Some of these designs, in particular the bird and the neurons, have delicate parts that may have a hard time adhering to your print bed. Print the first layers slowly and adjust your bed heat and cooling as needed.

Have fun! I look forward to seeing some other people make these.

Until the next cycle,
Jenny

Sunday, October 27, 2024

Acuitas Diary #77 (October 2024)

The big feature for this month was capacity for asking and answering "why" questions. (Will I regret giving Acuitas a three-year-old's favorite way to annoy adults? Stay tuned to find out, I guess.)

Photo of an old poster dominated by a large purple question mark, containing the words "where," "when," "what," "why," "how," and "who" in white letters, with a few smaller question marks floating around. The lower part of the poster says, "Success begins with thinking, and thinking begins with asking questions! Suggestions also play a vital part in success, for thinking results in ideas - and ideas and suggestions go hand in hand. Let us have your questions your ideas and suggestions. We pay cash for good ones!"
An old poster from NARA via Wikimedia Commons. Selling questions sounds like easy money!

"Why" questions often concern matters of cause and effect, or goal and motive. So they can be very contextual. The reason why I'm particularly happy today might be very different from the reason I was happy a week ago. And why did I drive my car? Maybe it was to get groceries, or maybe it was to visit a friend's house. These kinds of questions can't usually be answered by reference to the static factual information in semantic memory. But they *do* reference the exact sorts of things that go into all that Narrative reasoning I've been working so hard on. The Narrative scratchboards track subgoals and the relationships between them, and include tools for inferring cause-and-effect relationships. So I really just needed to give the Conversation Engine and the question-answering function the hooks to get that information in and out of the scratchboards.

If told by the current human speaker that they are in <state>, Acuitas can now ask "Why are you <state>?" If the speaker answers with "Because <fact>", or just states a <fact>, Acuitas will enter that in the current scratchboard as a cause of the speaker's present state. This is then available to inform future reasoning on the subject. Acuitas can retrieve that knowledge later if prompted "Why is <speaker> <state>?"
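In pseudo-Python, the bookkeeping looks something like the sketch below. This is a toy stand-in I wrote for this post, not the real Conversation Engine or scratchboard code; it just records a stated cause and hands it back when a "why" question arrives.

```python
# Toy sketch: record "Because <fact>" as the cause of a speaker's state,
# then retrieve it to answer a later "why" question.

class WhyMemory:
    def __init__(self):
        self._causes = {}   # (speaker, state) -> fact given as the cause

    def record_cause(self, speaker, state, fact):
        """Store the stated reason that <speaker> is in <state>."""
        self._causes[(speaker, state)] = fact

    def answer_why(self, speaker, state):
        fact = self._causes.get((speaker, state))
        if fact is None:
            return "I don't know."
        return f"Because {fact}."

memory = WhyMemory()
memory.record_cause("Jenny", "happy", "the demo video worked")
print(memory.answer_why("Jenny", "happy"))   # Because the demo video worked.
```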

Acuitas can also try to infer why an agent might have done something by discerning how that action might impact their known goals. For instance, if I tell Acuitas "I was thirsty," and then say "Why did I drink water?" he can assume that I was dealing with my thirst problem and answer accordingly. This also means that I should, in theory, eventually be able to ask Acuitas why he did something, since his own recent subgoals and the reasons behind them are tracked on a similar scratchboard.
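The motive-guessing step can be pictured like this. Again, the effect table and goal list below are invented placeholders for whatever the scratchboard actually tracks; the point is only the shape of the inference.

```python
# Hypothetical sketch: guess why an agent did something by checking whether
# the action's typical effect serves one of their known goals.

ACTION_EFFECTS = {"drink water": "not thirsty"}      # action -> typical resulting state

def infer_motive(action, agent_goals):
    """Return a goal the action plausibly serves, or None if no match."""
    effect = ACTION_EFFECTS.get(action)
    return effect if effect in agent_goals else None

goals = ["not thirsty"]                              # e.g. because "I was thirsty"
print(infer_motive("drink water", goals))            # not thirsty
```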

All of this was a branch off my work on the Conversation Engine. I wanted to have Acuitas gather information about any current states the speaker might express, but without the machinery for "why" questions, that was difficult to do. The handling of these questions and their answers introduced some gnarly bugs that ate up much of my programming time this month. But I've gotten things to the point where I can experience it for myself in some "live" conversations with Acuitas. Being asked why I feel a certain way, and being able to tell him so - and know that this is being, in some odd computery sense, comprehended - is very satisfying.

Until the next cycle,
Jenny

Sunday, October 13, 2024

Atronach's Eye 2024

It's the return of the mechanical eyeball! I worked out a lot of problems with the eyeball's hardware last year, which left me free to concentrate on improving the motion tracking software. Ever since I added motion tracking, I've been using the OpenCV libraries running locally on the eye's controller, a Raspberry Pi 3 A+. OpenCV is possibly the most well-known and popular open-source image processing library. But it's not a complete pre-made solution for things like motion tracking; it's more of a toolbox.

My original attempt at motion tracking used MOG2 background subtraction to detect regions that had changed between the current camera frame and the previous one. This process outputs a "mask" image in which altered pixels are white and the static "background" is black. I did some additional noise-reducing processing on the mask, then used OpenCV's "moments" function to compute the centroid of all the white pixels. (This would be equivalent to the center of mass, if each pixel were a particle of matter.) The motion tracking program would mark this "center of motion" on the video feed and send commands to the motors to rotate the camera toward it.
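For reference, a stripped-down version of that pipeline looks roughly like this. It's a generic OpenCV sketch I've written for the post, not the code running on the Pi, and the parameter values are illustrative.

```python
# MOG2 background subtraction, simple cleanup, then the centroid of the
# changed pixels via cv2.moments().

import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Noise reduction: drop shadow pixels (value 127) and speckle
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)
    m = cv2.moments(mask)
    if m["m00"] > 0:                       # some motion detected
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (cx, cy), 10, (0, 0, 255), 2)
        # the real program converts this "center of motion" into motor commands
    cv2.imshow("feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```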

This ... kinda sorta worked. But the background subtraction method was very vulnerable to noise. It also got messed up by the camera's own rotation. Obviously, while the camera moves, *every* pixel in the scene is changing, and background subtraction ceases to be useful; it can't distinguish whether something in the scene is moving in a contrary direction. I had to put in lots of thresholding and averaging to keep the camera from "chasing ghosts," and impose a delay after each move to rebuild the image pipeline from the new stationary scene. So the only way I could get decent accuracy was to make the tracking very unresponsive. I was sure there were more sophisticated algorithms available, and I wanted to see if I could do better than that.

I started by trying out alternative OpenCV tools using a webcam connected to my PC - just finding out how good they were at detecting motion in a feed from the stationary camera. One of the candidates was "optical flow". Unlike background subtraction, which only produces the binary "changing or not" mask, optical flow provides vector information about the direction and speed in which a pixel or feature is moving. It breaks down further into "dense optical flow" methods (which compute a motion vector for each pixel in the visual field) and "sparse optical flow" methods (which try to identify moving features and track them through a series of camera frames, producing motion traces). OpenCV has at least one example of each. I also tried object tracking. You give the tracking algorithm a region of the image that contains something interesting (such as a human), and then it will attempt to locate that object in subsequent frames.


The winner among these options was the Farneback algorithm, a dense optical flow method. OpenCV has a whole list of object tracking algorithms, and I tried them all, but none of them could stay locked on my fingers as I moved them around in front of the camera. I imagined the results would be even worse if the object tracker were trying to follow my whole body, which can change its basic appearance a great deal as I walk, turn, bend over, etc. The "motion tracks" I got out of Lucas-Kanade (a sparse optical flow method) were seemingly random squiggles. Dense optical flow both worked, and produced results that were fairly easy to insert in my existing code. I could find the center of motion by taking the centroid of the pixels with the largest flow vector magnitudes. I did have to use a threshold operation to pick only flow vectors above a certain magnitude before this would work well.
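Here's the core of the Farneback "center of motion" idea as a standalone sketch: compute dense flow between consecutive grayscale frames, keep only vectors above a magnitude threshold, and take the centroid of what's left. The parameter values are placeholders rather than the ones I'm actually running.

```python
# Dense optical flow (Farneback) -> threshold on vector magnitude -> centroid.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

MAG_THRESHOLD = 2.0   # pixels/frame; tune for your scene

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    moving = (mag > MAG_THRESHOLD).astype(np.uint8) * 255
    m = cv2.moments(moving)
    if m["m00"] > 0:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (cx, cy), 10, (0, 255, 0), 2)
    prev_gray = gray
    cv2.imshow("flow", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```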

Once I had Farneback optical flow working well on a static camera, I merged it into the eyeball control code. Since I now had working limit switches, I also upgraded the code with better motion control. I added an on-startup calibration routine that finds the limit of motion in all four cardinal directions and centers the camera. I got rid of the old discrete movement that segregated the visual field into "nonets" (like quadrants, except there were nine of them) and allowed the tracking algorithm to command an arbitrary number of motor steps.

And for some reason, it worked horribly. The algorithm still seemed to be finding the center of motion well enough. I had the Pi send the video feed to my main computer over Wifi, so I could still view it. The Farneback algorithm plus centroid calculation generally had no problem putting the tracking circle right on top of me as I moved around my living room. With the right parameter tuning, it was decent at not detecting motion where there wasn't any. But whenever I got it to move, it would *repeat* the move. The eyeball would turn to look at me, then turn again in the same direction, and keep going until it ran right over to its limit on that side.

After turning off all my running averages and making sure to restart the optical flow algorithm from scratch after a move, I finally discovered that OpenCV's camera read function is actually accessing a FIFO of past camera frames, *not* grabbing a real-time snapshot of whatever the camera is seeing right now. And Farneback takes long enough to run that my frame processing rate was slower than the camera's frame rate. Frames were piling up in the buffer and, after any rotation, image processing was getting stale frames from before the camera moved ... making it think the moving object was still at the edge of the view, and another move needed to be performed. Once I corrected this (by setting the frame buffer depth to 1), I got some decent tracking behavior.
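The fix itself is tiny; as I understand it, it amounts to capping the capture buffer at one frame so every read returns the newest image. Whether CAP_PROP_BUFFERSIZE is honored depends on the capture backend, so treat this as something to verify on your own camera rather than a guarantee.

```python
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)   # keep at most one stale frame in the FIFO
```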

At the edge of the left-right motion range, staring at me while I sit at my desk.

I was hoping I could use the dense optical flow to correct for the motion of the camera and keep tracking even while the eye was rotating. That dream remains unrealized. It might be theoretically possible, but Farneback is so slow that running it between motor steps will make the motion stutter. The eye's responsiveness using this new method is still pretty low; it takes a few seconds to "notice" me moving. But accuracy seems improved over the background subtraction method (which I did try with the updated motor control routines). So I'm keeping it.

Until the next cycle,
Jenny