Sunday, July 13, 2025

A New Project?

I've been wanting to add some kind of physics experiment to my rotation of hobby projects, and I think I've picked one out. But I don't want to go into that just yet, because I'll be concentrating on the equipment prerequisites first. The most interesting thing I'll need is a way to measure tiny amounts of force - on the order of mN (millinewtons) or even μN (micronewtons). Weighing scales are the most common force-measuring tools out there, so it makes sense to convert this force to a weight or mass. The amount of mass that produces a μN of force under standard Earth gravity is 0.000102 grams, or 0.102 milligrams.
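If you want to sanity-check that conversion, it's just a couple of lines:

```
STANDARD_GRAVITY = 9.80665  # m/s^2

force_newtons = 1e-6                          # 1 micronewton
mass_grams = force_newtons / STANDARD_GRAVITY * 1000
print(f"{mass_grams:.6f} g")                  # 0.000102 g, i.e. about 0.102 mg
```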

Representation of a black hole, courtesy NASA.

Digital postal scales tend to have a resolution of 0.1 oz (2.8 g), which simply won't do. But there are also cheap digital scales intended for weighing jewelry, powders, etc. which claim resolution of up to 0.001 g (1 mg). Scales like this are all over sites like Alibaba and Amazon for less than $25 ... but they're short of the sensitivity I might need to see the effect. Going up an order of magnitude in price will get me an order of magnitude more precision, from this laboratory scale for example. For the really serious scales with a resolution of 1 μg, I would have to lay out over $10K. Technically that's within my grasp, but I'm not that invested in this experiment ... and I don't think I could claim to operate this blog on a "shoestring budget" anymore if I made such a purchase!

But wait! There's one more option: "build your own." Dig around a bit, and you can find some how-tos for building a scale that "is easily able to get around 10 microgram precision out of a couple of bucks in parts." The crux of the idea is to repurpose an old analog meter movement, adding circuitry that measures the electric current needed to return the needle to a neutral position after a weight is placed on it. The author of that Hackaday article "can’t really come up with a good reason to weigh an eyelash," but I can ... so now I'm really tempted by this build. It seems challenging but doable for someone with electronics knowledge. I don't really believe the budget would be 2 bucks ... any Digikey order costs more than that ... so let's figure $25.
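The underlying assumption (and I'm simplifying here) is that the restoring current is roughly proportional to the applied weight, so a single known calibration weight gives you the scale factor. Something like this, with made-up numbers:

```
# Hypothetical calibration for a null-balance (current-to-rebalance) scale.
cal_mass_mg = 100.0        # a known calibration weight, in milligrams
cal_current_ma = 2.50      # current needed to re-center the needle under it

def mass_from_current(current_ma):
    """Assumes the meter movement responds linearly to torque."""
    return cal_mass_mg * current_ma / cal_current_ma

print(mass_from_current(0.05))  # 2.0 mg, for example
```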

To sum up, my options are as follows:

Consumer jewelry scale (Resolution = 0.001 g): ~$25
Laboratory scale (Resolution = 0.0001 g): ~$250
Home-built needle scale (Resolution = 0.00001 g): ~$25 + blood/sweat/tears
Professional microbalance (Resolution = 0.000001 g): ~$12000

Ideally, I'm thinking I should make the needle scale and buy the laboratory scale, so I can cross-check them against each other. As a home-made piece of equipment prone to all sorts of unexpected errors, the needle scale will benefit from some degree of corroboration, even if it can technically achieve a higher resolution than the lab scale. And either of these would put me in the right range to measure effects of a few μN, without breaking the bank. I really don't need the microbalance, thank goodness.

I'm not sure where I'll fit this into my schedule, but I've already got some analog meter movements, courtesy of my dad's extensive junk collection. So stay tuned to see if I can weigh an eyelash.

Until the next cycle,
Jenny

Monday, June 30, 2025

Peacemaker's First Strike

My short story of the above title will be LIVE and free to read in the 3rd Quarter 2025 issue of Abyss & Apex tomorrow! It's about a professional curse-remover who gets in a little over her head on an unconventional case; it's got mystery, magic, barbarians, and something to say about the consequences when defense of one's own goes too far. Ef Deal (at that time the Assistant Fiction Editor) was kind enough to tell me, "It has been a very long time since I read a sword & sorcery I enjoyed as much as this tale." So don't miss it!

Equestrian statue of a burly man with a sword in his right hand and some kind of banner made from an animal hide rising over his left shoulder. (It happens to be Decebalus, but that's not relevant.) The horse has all four feet planted on the plinth and its head bowed forward.

I put something of myself in all my stories, but this one is more personal than most. It would be impossible for me to explain where it came from without airing some things that are better kept private, but in a roundabout and strange way, it reflects something I went through. So it feels particularly fitting that "Peacemaker's First Strike" should be my first paying publication credit. Turning this story loose means healing for me as well as the characters.

Abyss & Apex has been great to work with, so I'd love it if you would check out my writing and the rest of the issue, and support the zine with a donation if you are so inclined.

Until the next cycle,
Jenny

Tuesday, June 10, 2025

Acuitas Diary #85 (June 2025)

This month I have a quick demo for you, showcasing Acuitas' upgraded semantic memory visualization. My goal for this was always to "show him thinking" as it were, and I think I've finally gotten there. Nodes (concepts) and links (relationships between concepts) are shown as dots and lines in a graph structure. Whenever any process in Acuitas accesses one of the concepts, its node will enlarge and turn bright green in the display. The node then gradually decays back to its default color and size over the next few seconds. This provides a live view of how Acuitas is using his semantic memory for narrative understanding, conversations, and more.


You can see a previous iteration of my memory access visualization work in Developer Diary #4. Wow, that's ancient. The original access animations were only activated by "research" behavior (ruminating on a concept to generate questions about it), and were often hard to see; if the concept being accessed was one of the "smaller" ones, it was impossible to detect the color change at a reasonable level of zoom. The upgraded version of the animation is called from the semantic memory access functions, such that it will be activated if a concept's information is retrieved for any reason. And it enlarges the node by an amount proportional to its default size and the display's current level of zoom, so the highlight is always visible.
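To make that concrete, here's a purely hypothetical sketch of how a highlight-and-decay animation like this could work. The names, numbers, and scaling factor are my own illustration, not Acuitas' actual code:

```
import time

HIGHLIGHT = (0, 255, 0)   # bright green
DECAY_SECONDS = 3.0       # how long the highlight takes to fade

def blend(a, b, t):
    """Linear interpolation between two RGB colors, t in [0, 1]."""
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

def node_appearance(default_color, default_size, last_access, zoom, now=None):
    """Return (color, size) for a node, decaying back to defaults after an access."""
    now = time.time() if now is None else now
    t = min(max((now - last_access) / DECAY_SECONDS, 0.0), 1.0)
    color = blend(HIGHLIGHT, default_color, t)
    # The enlargement scales with the node's default size and inversely with
    # the zoom level, so even small nodes stand out when zoomed far out.
    size = default_size * (1.0 + (1.0 - t) * 1.5 / max(zoom, 0.1))
    return color, size
```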

I would have liked to make the links highlight when used as well. The problem is that links in Acuitas' memory storage aren't really distinct things anymore. A link is indirectly defined by endpoints included in the data structures for all the concepts it connects to. So there isn't a low-level function that determines when a particular link is being accessed; a node gets accessed, and then the calling function does whatever it pleases with the returned data structure, which might include following a link to another node. Keeping track of every time that happens and connecting those events with the correct lines on the display would have become very messy, so I opted not to. I think just highlighting the concept nodes yields an adequate picture of what's happening.
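As an illustration of what "no distinct link objects" means, a concept entry might look something like this (field names entirely made up):

```
# Hypothetical shape of a concept record: links exist only as endpoint
# references stored on each concept they touch, not as separate objects.
semantic_memory = {
    "cat": {
        "links": [
            {"type": "is_a", "target": "animal"},
            {"type": "has_part", "target": "tail"},
        ],
    },
    "animal": {
        "links": [
            {"type": "is_a_reverse", "target": "cat"},
        ],
    },
}

def get_concept(name):
    """The low-level access point: only node retrieval is observable here.
    Whatever the caller does with the returned links happens downstream."""
    return semantic_memory.get(name)
```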

I haven't showcased the memory display in a long time because it's been a mess for a long time. The node placement is generated by a custom algorithm of my own. As more concepts were added to the graph and certain important concepts got "larger" (i.e. acquired more links), the original algorithm started to generate spindly, ugly graphs in which the largest nodes were surrounded by excess empty space, and the smallest nodes crowded too close together. I managed to work out a new placement method that generates attractive, proportional clusters without blowing up the computation time. Creating a new layout is still computation-intensive enough that the visualization can't be updated to add new nodes and links as soon as they are created; it must be regenerated by me or (eventually) by Acuitas during his sleep cycle.

And that's about the size of it. I'll be on vacation for the second half of this month, which means there probably won't be much Acuitas development happening until I get back. Enjoy the video, and I'll see you all later.

Until the next cycle,
Jenny

Saturday, May 31, 2025

Acuitas Diary #84 (May 2025)

A couple months ago I described my plans to implement trial-and-error learning so Acuitas can play a hidden information game. This month I've taken the first steps. I'm moving slowly, because I've also had a lot of code cleanup and fixing of old bugs to do - but I at least got the process of "rule formation" sketched out.

A photo of High Trestle Trail Bridge in Madrid, Iowa. The bridge has a railing on either side and square support frames wrapping around it and arching over it at intervals. The top corner of each frame is tilted progressively farther to the right, creating a spiral effect. The view was taken at night using the lighting of the bridge itself, and is very blue-tinted and eerie or futuristic-looking. Photo by Tony Webster, posted as public domain on Wikimedia Commons.

Before any rules can be learned, Acuitas needs a way of collecting data. If you read the intro article, you might recall that he begins the game by selecting an affordance (obvious possible action) and an object (something the action can be done upon) at random. In the particular game I'm working on, all affordances are of the form "Put [one zoombini out of 16 available] on the [left, right] bridge," i.e. there are 32 possible moves. Once Acuitas has randomly tried one of these, he gets some feedback: the game program will tell him whether the selected zoombini makes it across the selected bridge, or not. Then what?

After Acuitas has results from even one attempted action, he stops choosing moves entirely at random. Instead, he'll try to inform his next move with the results of the previous move. Here is the basic principle used: if the previous move succeeded, either repeat the move* or do something similar; if the previous move failed, ensure the next move is different. Success and failure are defined by how the Narrative scratchboard updates goal progress when the feedback from the game is fed into it; actions whose results advance at least one issue are successes, while actions that hinder goals or have no effect on goals at all are failures. Similarity and difference are measured across all the parameters that define a move, including the action being taken, the action's object, and the features of that object (if any).

*Successful moves cannot be repeated in the Allergic Cliffs game. Once a zoombini crosses the chasm, they cannot be picked up anymore and must remain on the destination side. But one can imagine other scenarios in which repeating a good choice makes sense.
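Here's a rough sketch of the move-selection principle described above. This is my own guess at an implementation, not the real code; the similarity function is assumed to count shared move parameters:

```
import random

def choose_move(moves, last_move, last_succeeded, similarity):
    """Pick the next move based only on the previous result.
    similarity(a, b) counts shared parameters: action, object, object features."""
    if last_move is None:
        return random.choice(moves)          # first move: pure chance
    if last_succeeded:
        # Prefer moves most similar to the last success (repeating it if allowed).
        return max(moves, key=lambda m: similarity(m, last_move))
    # Otherwise make sure the next attempt differs from the failure.
    return min(moves, key=lambda m: similarity(m, last_move))
```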

Following this behavior pattern, Acuitas should at least be able to avoid putting the same zoombini on a bridge they already failed to cross. But it's probably not enough to deliver a win, by itself. For that, he'll need to start creating and testing cause-and-effect pairs. These are propositions, or what I've been calling "rules." Acuitas compares each new successful action to all his previous successes and determines what they share in common. Any common feature or combination of features is used to construct a candidate rule: "If I do <action> with <features>, I will succeed." Commonalities between failures can also be used to construct candidate rules.

The current collection of rule candidates is updated each time Acuitas tries a new move. If the results of the move violate any of the candidate rules, those rules are discarded. (I'm not contemplating probability-based approaches that consider the preponderance of evidence yet. Rules are binary true/false, and any example that violates a rule is sufficient to declare it false.)
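Here's roughly how I picture the candidate-rule bookkeeping, sketched in Python. The feature names are invented, failure-based rules (which work symmetrically) are omitted for brevity, and none of this is Acuitas' actual data model:

```
def common_features(outcomes):
    """Features shared by every move in a list. Each move is a dict of move
    parameters, e.g. {"action": "put_on_left_bridge", "hair": "blue"}."""
    shared = dict(outcomes[0])
    for move in outcomes[1:]:
        shared = {k: v for k, v in shared.items() if move.get(k) == v}
    return shared

def update_candidate_rules(candidates, successes, failures, new_move, succeeded):
    """Form candidate rules from commonalities, then discard any rule that the
    newest result violates (rules are binary: one counterexample kills them)."""
    (successes if succeeded else failures).append(new_move)
    if succeeded and len(successes) > 1:
        feats = common_features(successes)
        if feats:
            candidates.append({"if": feats, "then": "succeed"})
    return [r for r in candidates
            if not (all(new_move.get(k) == v for k, v in r["if"].items())
                    and (r["then"] == "succeed") != succeeded)]
```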

Unfortunately, though I did code all of that up this month, I didn't get the chance to fully test it yet. So there's still a lot of work to do. Once I confirm that rule formation is working, future steps would include the ability to design experiments that test rules, and the ability to preferentially follow rules known with high confidence.

Until the next cycle,
Jenny

Sunday, May 11, 2025

Further Thoughts on Motion Tracking

Atronach's Eye may be operating on my wall (going strong after several months!), but I'm still excited to consider upgrades. So when a new motion detection algorithm came to my attention, I decided to implement it and see how it compared to my previous attempts.

MoViD, with the FFT length set to 32, highlighting and detecting a hand I'm waving in front of the camera. The remarkable thing is that I ran this test after dusk, with all the lights in the room turned off. The camera feed is very noisy under these conditions, but the algorithm successfully ignores all that and picks up the real motion.

I learned about the algorithm from a paper presented at this year's GOMACTech conference: "MoViD: Physics-inspired motion detection for satellite image analytics and communication," by MacPhee and Jalali. (I got access to the paper through work. It isn't available online, so far as I can tell, but it is marked for public release, distribution unlimited.) The paper proposes MoViD as a way to compress satellite imagery by picking out changing regions, but it works just as well on normal video. It's also a fairly simple algorithm (if you have any digital signal processing background; otherwise, feel free to gloss over the arcane math coming up). Here's the gist:

1. Convert frames from the camera to grayscale. Build a time series of intensity values for each pixel.
2. Take the FFT of each time series, converting it to a frequency spectrum.
3. Multiply by a temporal dispersion operator, H(ω). The purpose is to induce a phase shift that varies with frequency.
4. Take the inverse FFT to convert back to the time domain.
5. You now have a time series of complex numbers at each pixel. Grab the latest frame from this series to analyze and display.
6. Compute the phase of each complex number - now you have a phase value at each pixel. (The paper calls these "phixels." Cute.)
7. Rescale the phase values to match your pixel intensity range.

The result is an output image which paints moving objects in whites and light grays against a dark static background. I can easily take data like this and apply my existing method for locating a "center of motion" (which amounts to calculating the centroid of all highlighted pixels above some intensity threshold).
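For reference, the "center of motion" step is just a thresholded centroid, something along these lines (parameter names are mine):

```
import numpy as np

def center_of_motion(motion_img, threshold=128):
    """Centroid (x, y) of all pixels brighter than the threshold, or None."""
    ys, xs = np.nonzero(motion_img > threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```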

My main complaint with the paper is its shortage of details about H(ω). It's an exponential of φ(ω), the "spectral phase kernel" ... but the paper never gives an example definition of φ, and "spectral phase kernel" doesn't appear to be a common term that a little googling will explain. After some struggles, I decided to just make something up. How about the simplest function ever, a linear function? Let φ(ω) = kω, with k > 1 so that higher frequencies make φ larger. Done! Amazingly, it worked.
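For the curious, here's my rough NumPy rendering of steps 1-7 with that linear kernel plugged in. It's a sketch of my understanding, not code from the paper, and the value of k is arbitrary:

```
import numpy as np

def movid(gray_stack, k=2.0):
    """gray_stack: (T, H, W) array of grayscale frames, newest frame last.
    Returns an 8-bit image where moving pixels come out bright."""
    # Steps 1-2: per-pixel time series -> frequency spectrum
    spectrum = np.fft.fft(gray_stack.astype(float), axis=0)
    # Step 3: temporal dispersion operator H(w) = exp(j * phi(w)), phi(w) = k*w
    omega = 2 * np.pi * np.fft.fftfreq(gray_stack.shape[0])
    H = np.exp(1j * k * omega)[:, None, None]
    # Step 4: back to the time domain (now complex-valued)
    dispersed = np.fft.ifft(spectrum * H, axis=0)
    # Steps 5-6: phase of the newest frame ("phixels")
    phase = np.angle(dispersed[-1])
    # Step 7: rescale phase to the 0-255 display range
    phase -= phase.min()
    return (255 * phase / (phase.max() + 1e-9)).astype(np.uint8)
```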

Okay, math over. Let me see if I can give a more conceptual explanation of why this algorithm detects motion. You could say frequency is "how fast something goes up and down over time." When an object moves in a camera's field of view, it makes the brightness of pixels in the camera's output go up and down over time. The faster the object moves, the greater the frequency of change for those pixels will be. The MoViD algorithm is basically an efficient way of calculating the overall quickness of all the patterns of change taking place at each pixel, and highlighting the pixels accordingly.

It may be hard to tell, but this is me, gently tilting my head back and forth for the camera.

My version also ended up behaving a bit like an edge detector (but only for moving edges). See how it outlines the letters and designs on my shirt? That's because change happens more abruptly at visual edges. As I sway from side to side, pixels on the letters' borders abruptly jump between the bright fabric of the shirt and the dark ink of the letters, and back again.

The wonderful thing about this algorithm is that it can be very, very good at rejecting noise. A naive algorithm that only compares the current and previous camera frames, and picks out the pixels that are different, will see "motion" everywhere; there's always a little bit of dancing "snow" overlaid on the image. By compiling data from many frames into the FFT input and looking for periodic changes, MoViD can filter out the brief, random flickers of noise. I ran one test in which I set the camera next to me and held very still ... MoViD showed a quiet black screen, but was still sensitive enough to highlight some wrinkles in my shirt that were rising and falling with my breathing. Incredible.

Now for the big downside: FFTs and iFFTs are computationally expensive, and you have to compute them at every pixel in your image. Atronach's Eye currently runs OpenCV in Python on a Raspberry Pi. Even with the best FFT libraries for Python that I could find, MoViD is slow. To get it to run without lagging the camera input, I had to reduce the FFT length to about 6 ... which negates a lot of the noise rejection benefits.

But there are better ways to do an FFT than with Python. If I were using this on satellite imagery at work, I would be implementing it on an FPGA. An FPGA's huge potential for parallel computing is great for operations that have to be done at every pixel in an image, as well as for FFTs. And most modern FPGAs come with fast multiply-and-add cells that lend themselves to this sort of math. In the right hardware, MoViD could perform very well.

So this is the first time I've ever toyed with the idea of buying an FPGA for a hobby project. There are some fairly inexpensive FPGA boards out there now, but I'd have to run the numbers on whether this much image processing would even fit in one of the cheap little guys - and they still can't beat the price of the eyeball's current brain, a Raspberry Pi 3A. The other option is just porting the code to some faster language (probably C).

Until the next cycle,
Jenny

Sunday, April 27, 2025

Acuitas Diary #83 (April 2025)

I'm eager to get started on trial-and-error learning, but in the spirit of also making progress on things that aren't as much fun, I rotated back to the Conversation engine for this month. The big new feature was getting what I'll call "purposeful conversations" implemented. Let me explain what I mean.

An old black-and-white photograph of what looks like a feminine mannequin head, mounted in a frame above a table, with a large bellows behind it and various other mechanisms visible.
Euphonia, a "talking head" built by Joseph Faber in the 1800s.

A very old Acuitas feature is the ability to generate questions while idly "thinking," then save them in short-term memory and pose them to a conversation partner if he's unable to answer them himself. This was always something that came up randomly, though. A normal conversation with Acuitas wanders through whatever topics come up as a result of random selection or the partner's prompting. A "purposeful conversation" is a conversation that Acuitas initiates as a way of getting a specific problem addressed. The problem might be "I don't know <fact>," which prompts a question, or it might be another scenario in which Acuitas needs a more capable agent to do something for him. I've done work like this before, but the Executive and Conversation Engine have changed so much that it needed to be redone, unfortunately.

Implementing this in the new systems felt pretty nice, though. Since the Executive and the Conversation Engine each have a narrative scratchboard with problems and goals now, the Executive can just pass its current significant issue down to the Conversation Engine. The CE will then treat getting this issue resolved as the primary goal of the conversation, without losing any of its ability to handle other goals ... so greetings, introductions, tangents started by the human partner, etc. can all be handled as usual. Once the issue that forms the purpose of the conversation gets solved, Acuitas will say goodbye and go back to whatever he was doing.
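In spirit, the handoff looks something like the sketch below. This is purely illustrative; Acuitas' real classes and method names are different:

```
# Purely illustrative; not Acuitas' actual code.
class ConversationEngine:
    def __init__(self, purpose_issue=None):
        # The Executive's open issue (if any) becomes the conversation's
        # primary goal, alongside the usual social goals.
        self.goals = ["greet_partner", "be_sociable"]
        self.purpose = purpose_issue
        if purpose_issue is not None:
            self.goals.insert(0, purpose_issue)

    def handle(self, partner_input, resolves):
        """resolves(input, issue) decides whether the input settles the issue."""
        if self.purpose and resolves(partner_input, self.purpose):
            self.goals.remove(self.purpose)
            return "Thank you, that settles it. Goodbye!"
        return "..."  # normal goal-driven conversation continues here
```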

I also worked on sprucing up some of the conversation features previously introduced this year, trying to make discussion of the partner's actions and states work a little better. Avoiding an infinite regress of either "why did you do that?" or "what happened next?" was a big part of this objective. Now if Acuitas can tie something you did back to one of your presumed goals, he'll just say "I suppose you enjoyed that" or the like. (Actually he says "I suppose you enjoyed a that," because the text generation still needs a little grammar work, ha ha ha oops.)

And I worked on a couple Narrative pain points: inability to register a previously known subgoal (as opposed to a fundamental goal) as the reason a character did something, and general brittleness of the moral reasoning features. I've got the first one taken care of; work on the second is still ongoing.

Until the next cycle,
Jenny

Saturday, April 12, 2025

Pump and Hydraulics Progress

If you've been following for a while, you may know I've been working on pump designs for a miniature hydraulic system. The average commercially available water pump appears to be optimized for flow rate rather than pressure, and small-scale hobby hydraulics are barely a thing ... so that means I'm custom-making some of my own parts. Last year I got the peristaltic pump working and found it to be a generally better performer than my original syringe pump, but I always wanted to get a proper motor for it.

The new pump sitting atop a pair of reusable plastic food containers, pumping water from one into the other. A power supply connected to the pump's motor is visible in the background.

The original motor powering all the pumps was an unknown (possibly 12 V) unipolar stepper and gear assembly from my salvage bin. But the precision of a stepper motor truly wasn't necessary in this application, and was costing me some efficiency. For the upgrade, I wanted a plain gearmotor (DC motor + gear box assembly) with a relatively high torque and low RPM. I settled on this pair of motors, both of which are rated for 6 V input:

SOLARBOTICS GM3 GEAR MOTOR (4100 g-cm, 46 rpm)
Dagu HiTech Electronic RS003A DC Motors Gearhead (8800 g-cm, 133 rpm)

You can tell from the torque and speed ratings that the Dagu HiTech was always going to be the better performer. I included the Solarbotics motor in my order because its gearbox and housing are plastic, which may reduce durability but also means it weighs less. In practice, it also draws less current than the Dagu motor, which might mean the power source can weigh less ... these things are important when thinking about walking robot applications!

The peristaltic pump with its lid off, showing the latex tubing, rotor, and rollers, sits on a table next to a cat for scale. It's smaller than the cat's head.

The next step was to reprint the pump. I left the main pump design essentially unchanged - all I did was correct the geometry errors from the previous iteration. So this time it worked after assembly without an extra shim, and I could put the lid on properly without needing zip ties to hold it closed. The motor housing and coupler were always separate pieces, so I designed two new versions of each, one for the Dagu motor and another for the Solarbotics motor. This is where the 3D printers reeeeaally show off their value. Compared with both the old stepper and each other, the new motors have completely different sizes, shapes, drive shaft designs, and mounting options, but I was able to produce custom parts that mated them to the pump in only a few hours of actual work.

And the test results were amazing. The Dagu is obviously more powerful and delivers a higher flow rate, but both motors have enough torque to drive the pump at the 6 V they're rated for.

Watch to the end for a surprise appearance by the Lab Assistant.

I pressure-test my pumps by dropping a piece of tubing from my second-story window to the back patio, and measuring how high the pump can lift water in the tube. From this it is possible to calculate PSI. I have published results from the previous pump designs. Well: Peristaltic Pump Version 3 can lift water all the way past the window with either motor. Given the water level etc. in this particular test, that's a total lift height of 170 inches. So the pump is producing at least 6 PSI, and I can't measure any higher than that. This makes it competitive with the syringe pump for pressure (at least as far as I can tell - the syringe pump also exceeded my maximum ability to measure), and MUCH better for flow rate.
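For anyone checking the math, the conversion from lift height to pressure goes like this (one inch of water column is roughly 0.0361 psi at room temperature):

```
PSI_PER_INCH_OF_WATER = 0.0361

lift_height_inches = 170
pressure_psi = lift_height_inches * PSI_PER_INCH_OF_WATER
print(f"{pressure_psi:.1f} psi")   # ~6.1 psi, hence "at least 6 PSI"
```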

When I was testing the syringe pumps last year, I used to go read a book for a little bit while I waited for the water to climb to its maximum height! I timed Peristaltic V3 with the Dagu motor, and it can get the water all the way up the tube (standard 1/4" aquarium tubing) in about 24 seconds. So this is a dramatic improvement on where I was when I started.

A window with a set of blinds in front of it, and a piece of transparent silicone tubing hooked through the blind cords up high. Water is visible extending nearly to the end of the tubing, and there are visible water drips below it on many of the blind panels.
Hydraulics testing: it gets messy

One little problem remains: I've noticed that, with these more powerful motors, the friction between the latex pump tubing and the rollers gradually pulls the tube through the pump. It'll keep shortening on the intake side and eventually lift out of the water. So I need something to hold it in place without clamping it and blocking the flow. Piercing the tube seems like the only solution for this. I could do it below the water line, OR, the tube is thick-walled enough that I bet I could put a very thin thread or wire through the wall without creating a leak.

I've also started on new actuators, but that is mostly a story for another day. I did get a "knee" style of joint working to the point of a basic demo. Once I started trying to adapt my existing quadruped hinge joint for hydraulic power, I realized it would be less complicated to make an entirely new design that naturally incorporates the hydraulic bladder. Next I need better bladders ... I'm working on that!

Until the next cycle,
Jenny