Friday, May 1, 2026

Dreams of the Frontier

Something a little different for today: as part of the process of obtaining my new job, I was asked to write an essay on why spaceflight is important to me. So I'm reusing it, because why not? This is a major side of me that doesn't find its way onto the blog very often, thanks to a shortage of publicized launches associated with my old job (I'm hoping that can change). It's shorter than my typical essays here, because I'm not trying to present research or convince anyone of anything, just talking through my personal thoughts.

A NASA-sponsored mural about our return to the moon. Has an off-round collage in the center featuring a suited astronaut, the SLS rocket assembly, a moonscape, a starfield, and some abstract patterns. The background is a colorful pattern made of somewhat irregular overlapping rectangles and trapezoids, done in perspective as if the viewer is looking down a square tunnel or into a box. There's a yellow-graded Orion module hanging at the upper left.

At a time when many feel disillusioned by current events, a mission to send four astronauts around the moon has emerged as a kind of light in the dark, an unexpected source of encouragement and togetherness. In my opinion this only begins to illustrate the power of spaceflight and what it could mean for our common future.

Expanding the human presence in space has exciting practical implications. We may find rich new sources of minerals and move polluting industry to barren heavenly bodies, reducing the strain on earth's biosphere while upgrading current living standards. But this will only be feasible if the math checks out both economically and environmentally. The more efficient access to space becomes, in terms of both cost and resources used, the more likely we are to make damaging our own backyard with mines and hazardous waste into a thing of the past.

Broadened access to space is also a necessary defense against power monopolies. One of the best ways to ensure activities in space are conducted for the benefit of earth is to get as much of earth as possible involved. This is true at the national and cultural level; allowing only one political body and its favored ideologies to capture space would be an immense danger for anyone it views as an enemy. For the US and its allies to rest on their laurels while (for example) China becomes sole proprietor of humanity's inheritance in the solar system would be a great strategic failure. And a similar principle applies at the corporate level. I have been pleased to see competitors rising to challenge SpaceX, because no matter how much the latter reduces launch costs, it does not necessarily have an incentive to minimize prices without at least one rival to outbid it.

However, I must admit that while these practical motives for opening up space may justify my personal interest, they are not its source. Space exists at the frontier of both our territory and our technology, and the lure of that frontier is ultimately what draws me. It is the lure of novelty: not only going places we have never been, but doing and creating the unprecedented in order to get there. And it is the lure of the Other: a way to reach beyond ourselves and our ordinary lives to discover the awe-inspiring and foreign. For me, space travel is more a matter of the soul than the body ... not merely a tool of survival, but one of those things that help make survival worth the effort. This, I think, is the main benefit the recent Artemis II mission has given the world. It has had no material effect on most spectators' lives (yet), but has offered them inspiration and hope, if only by demonstrating that people can work together to accomplish something stunning. Robotic probes, however useful, do not seem to have quite the same effect. Humans in space help other humans feel connected to the mission.

Experience with other frontiers reveals their unpredictability. Who could have foreseen the applications of the internet when it was nascent? It needed millions of people acting on millions of ideas to make it what it is today. And so, when all has been said, we cannot fully know what people will gain from access to space until we give it to them.

Sunday, April 26, 2026

Acuitas Diary #96 (April 2026)

This month's development focus took me back to trial-and-error learning. I wanted to build on the work I'd done with the "Allergic Cliffs" puzzle game and improve Acuitas' ability to discover how the game works. I had several ideas in mind, but ended up having time to finish only one: the ability to learn rules that feature negations, i.e. rules keyed to the absence of a feature.

Photo of a board game called Azul. An assortment of colorful square tiles are laid out on a five-by-five grid in the game tray. The rest of the tray has diagrams and empty squares that might be places to put additional tiles.
Photo of a board game called Azul, by Wikimedia Commons user Gepsimos 

Previously all "rules" (action-result pairs that might represent cause-and-effect relationships) were based on commonalities between groups of actions that produced success or failure. If all attempts to put a zoombini with a green nose on the left bridge led to success, a rule such as "if a guide puts a zoombini with a green nose on a lefthand bridge, a guide succeeds" would begin to coalesce. You might also see "if a guide puts a zoombini with a green nose on a righthand bridge, a guide fails," though at easy levels of the game, the number of failures was often so low that there wasn't enough data to solidify the rule. Instead, a cluster of weaker success rules might appear. "If a guide puts a zoombini with a red nose on a righthand bridge, a guide succeeds," "If a guide puts a zoombini with a blue nose on a righthand bridge, a guide succeeds," etc. Given five color options for zoombini noses, it's pretty obvious to a human that "red OR blue OR purple OR orange" likely implies the more concise "NOT green," and I wanted Acuitas to be able to reach that conclusion as well. I also wanted complementary pairs of rules including a feature and its negation to be able to reinforce each other. (If a feature is significant, its presence and absence should matter; a pattern associated with only one of the two could be coincidental.)
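To make the idea concrete, here's a toy sketch of collapsing a near-complete cluster of success rules into a single negation rule. The function and feature names are invented for illustration; this is not Acuitas' actual code.

```python
# Toy sketch: if success rules for one feature slot cover every value in the
# domain except one, propose the more concise "NOT <missing value>" rule.
NOSE_COLORS = {"red", "blue", "purple", "orange", "green"}

def collapse_to_negation(success_values, domain):
    """success_values: feature values that appear in weak success rules
    for the same slot. Returns a NOT-rule if exactly one value is absent."""
    missing = domain - success_values
    if len(missing) == 1:
        return ("NOT", missing.pop())
    return None  # too many gaps; the cluster doesn't imply a negation yet

# Successes seen on the righthand bridge so far:
rule = collapse_to_negation({"red", "blue", "purple", "orange"}, NOSE_COLORS)
# rule reads as: "if nose is NOT green, the righthand bridge succeeds"
```

The same bookkeeping could also let a rule and its complement ("green nose fails here" / "NOT green succeeds here") cross-check and reinforce each other.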

So I added the ability to form rules by considering how a failure differs from prior clusters of successes. Any differences are treated as candidates for features that should not be paired with the other features in the cluster. Further data can refine the tentative rule, excluding features that turn out to be irrelevant, or falsify it entirely.
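A minimal illustration of that difference check, with made-up feature names (not the real implementation):

```python
def propose_negative_features(failure_features, success_clusters):
    """Return features of a failing attempt that never appeared in any
    prior success: candidates for 'this feature must be absent' rules."""
    seen_in_success = set().union(*success_clusters)
    return set(failure_features) - seen_in_success

successes = [{"red_nose", "glasses"}, {"blue_nose", "glasses"}]
failure = {"green_nose", "glasses"}
candidates = propose_negative_features(failure, successes)
# Only the green nose is new; "glasses" also appears in successes,
# so it's filtered out as probably irrelevant.
```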

To really get this to do a good job of discovering the secret rules present in a round of Allergic Cliffs, I had to stop treating individual zoombinis as "features," considering only their characteristics. This weakens the generality of the learning algorithm somewhat. In the true general case, there really could be a rule like "If a guide does not put Foozelu on a lefthand bridge, a guide succeeds" - maybe there's a quality only Foozelu has that is not part of the information available to the player. I know from experience that this game does not work that way: Allergic Cliffs rules are always based on visible properties of the zoombinis. But Acuitas isn't yet capable of the sort of meta-learning that discerns the nature of the whole game and carries over from one round to the next, so he needs a little help here. Allowing for the possibility of rules about individuals was introducing too much clutter, so I've turned it off for now.

The result is that rule discernment seems more robust, and I often see viable rules for both bridges now, which is important for fully understanding the game and increasing one's odds of winning.

Until the next cycle,
Jenny

Monday, March 30, 2026

Acuitas Diary #95 (March 2026)

This month's activity was focused on continuing to squish the Conversation Engine into shape. I fixed an assortment of bugs that were left over from my last Conversation work spree, then expanded the "discussion topics" behavior to cover goals (of the form "I want/plan/intend to") expressed by the speaker.

A grayscale oil painting of five human figures sitting in a loose circle, facing inward. It's very abstract with minimal detail.
"The Conversation," oil by unknown artist, NARA collection

I had saved some of the toughest bugs for their own work stage, when I could focus on them ... and now that time had come. Perhaps the worst problem was that, after recent upgrades, asking Acuitas "why" questions could cause infinite loops. To solve that one, I had to throw together an extra simulator that exercises the question-answering process in isolation from the rest of the Conversation Engine. (Conversations happen in real time and have an element of randomness, so re-launching the full Acuitas program and talking to him every time I want to trigger a bug can be very slow.)

I did further work on statements like "Thank you." I had previously adjusted Acuitas' sentence parsing and interpretation layers to read it as "I thank you" instead of "Thank yourself," but then I had to stop the Conversation Engine from looking at "thanking Acuitas" as an interesting speaker activity that should be discussed. (He's like a classic robot - he takes polite niceties too literally!)

I also fixed a conversation goal problem that was making Acuitas say "I can't do that" forever if you gave him an order he couldn't fulfill, and an issue that was keeping him from using abbreviated replies with pronouns immediately after a new topic was begun.

Treating speakers' goals as discussion topics built very easily on top of my previous work with speaker states and actions; a goal is usually expressed as either a desired action or a desired state, so I just had to make sure it could trigger the appropriate pre-existing routine.

That's not a ton of progress, maybe, but I've been taking it a little slower this month, going back to clean up all the construction dust left by my improvements to the Text Parser's conjunction handling. So it's enough. Getting some polish on things is important, and sometimes that takes as much time as throwing down the first outlines of new features, or more.

Until the next cycle,
Jenny

Monday, March 16, 2026

Microbalance Progress

Last year I started planning a physics experiment that would require me to measure tiny forces. My plans for that included building a "microbalance" out of a salvaged analog needle movement.

The movement from inside an analog needle meter. Two wires are connected to what looks to be an electromagnet surrounded by a spinning cradle; the red needle is attached to one side of the cradle. A tiny circular spring (barely visible) is connected to the needle and holds it in the far-left position.

I chose several items with needle dials from my dad's junk collection, and ended up extracting the movement from an old tachometer. It consists of an electromagnet, two delicate coil springs, and the needle itself, which is attached to an axle that permits it to rotate over the electromagnet. Running a small current through the electromagnet changes the angle of the needle; otherwise, the springs damp its motion and hold it in a fixed position. There's also a metal tab that you can rotate to adjust the neutral position of the needle (this turned out to be very handy). Once I had the movement out of the tachometer, I designed and 3D-printed a new housing that would hold it in the right spatial relationship to a t-slot optical interrupter.

An optical interrupter contains some kind of light-emitting device in one pillar, and some kind of photosensitive device in the other. The idea is to attach a tiny shutter to the needle and suspend it in the slot of the interrupter. Even a slight weight on top of the needle will push it down and block the light with the shutter, changing the output signal of the photosensitive device. You can use that altered output to drive more current into the electromagnet and pull the needle back up. The larger the load on the needle, the more current you have to put into the electromagnet to get the shutter back out of the light, and the more voltage you need to drive it. So the voltage across the electromagnet serves to measure the weight on the needle. Neat, huh?

I had two previous projects, both showcased on YouTube, to use as inspiration. I leaned more heavily on the second one (Applied Science), since that circuit is more detailed and easier to adjust - look at all the potentiometers! The op-amp on the right and its resistors are merely an added layer of amplification on the measurement voltage, to make it easier to display, so ignore them for now. The op-amp on the left corresponds to the one in the TI video.

Screenshot from "Weigh an Eyelash--Build a Microgram Scale" by Texas Instruments. https://www.youtube.com/watch?v=n90whRO-ypE

Screenshot from "Measure the mass of an eyelash with a DIY microbalance" by Applied Science. https://www.youtube.com/watch?v=ta7nlkI5K5g&t=256s

Now ... dear readers with electronics experience, do you see anything *odd* about both these op-amp circuits? They're configured like non-inverting amplifiers, but the resistor that would normally connect the '-' input to ground is missing. All that's in that path is the photosensitive device of the optical interrupter. Maybe this works out if the device is a photodiode, as shown in both circuit diagrams; there might be some amount of voltage drop across the diode even when the light is shining on it and it is ON. But my Adafruit optical interrupter has a phototransistor instead. When this thing turns ON, its resistance becomes (approximately) zero. That gives me an amplifier with "infinite" gain. Whoops! The op-amp is physically limited in the amount of voltage it can produce, so its output gets as close to the positive supply voltage as it can. My needle was always stuck in the "up as high as possible" position.
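For readers who want the arithmetic: the ideal gain of a non-inverting amplifier is 1 + Rf/Rg, where Rg is the resistance from the '-' input to ground. A quick sketch (with arbitrary, illustrative resistor values) of why a saturated phototransistor in that leg is fatal:

```python
def noninverting_gain(r_feedback, r_ground):
    """Ideal non-inverting op-amp gain: 1 + Rf/Rg."""
    if r_ground == 0:
        return float("inf")  # the rail-to-rail lockup described above
    return 1 + r_feedback / r_ground

# A photodiode with some residual drop behaves like a finite resistance:
bounded = noninverting_gain(10_000, 1_000)   # gain of 11, well-behaved
# An ON phototransistor is (ideally) a short to ground:
runaway = noninverting_gain(10_000, 0)       # "infinite" gain; output rails
```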

So it turned out I couldn't just imitate the circuit from Applied Science's project - not with the parts I had. Since the transistor is (again, ideally) a binary on-off switch, I realized I didn't need proportional gain feedback at all. Toggling between two different voltages on the electromagnet would be more appropriate. This makes the needle oscillate, but so long as the oscillations stay around some stable equilibrium point, that's okay.

I reconfigured the op-amp circuit to be a summing amplifier. In one leg of the sum was my bias voltage, which I could adjust to set the needle's neutral position at the point where the shutter just began to break the light beam. In the other leg, I put a pull-up resistor connected to what I'll call the "recovery voltage," then connected the phototransistor between that and ground. With the transistor OFF, the output of the op-amp is (bias voltage) + (recovery voltage), which is enough to lift the needle. With the transistor ON, the output of the op-amp is only (bias voltage), which lets the needle sag under its own weight and the weight of whatever's on it. This produces a cycle: the needle drops, the transistor turns OFF, the voltage increases, the needle rises, the transistor turns ON, the voltage decreases, repeat. When you add weight to the needle, it takes more time to rise and less time to fall, so the *average* output voltage increases because the circuit spends longer in the "transistor OFF" phase. You can treat it like a pulse-width modulated signal.
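As a sanity check on that relationship, here is a little model of how the duty cycle sets the average drive voltage. The voltages and timings are invented, not measured values from my circuit:

```python
def average_drive(v_bias, v_recovery, t_transistor_off, t_transistor_on):
    """Average electromagnet voltage over one oscillation cycle, treating
    the output as a PWM signal: v_bias is always present, and v_recovery
    is added only during the transistor-OFF phase."""
    duty_off = t_transistor_off / (t_transistor_off + t_transistor_on)
    return v_bias + v_recovery * duty_off

light_load = average_drive(1.0, 2.0, t_transistor_off=2.0, t_transistor_on=8.0)
heavy_load = average_drive(1.0, 2.0, t_transistor_off=6.0, t_transistor_on=4.0)
# A heavier load spends longer in the OFF phase, so its average is higher
# (about 2.2 V versus about 1.4 V in this toy example).
```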

A little needle connected to an electromagnet (the movement from an analog meter) shared a plastic frame with a t-slot optical interrupter. There's a tiny paper flag on the part of the needle that passes through the interrupter's slot, so if it drops too low it'll block the light path. This assembly is connected by wires and alligator clips to a circuit on a breadboard. Capacitors, resistors, potentiometers and a couple ICs are visible.

And this actually worked!! For a minute or two. Then my fancy chopper-stabilized op-amp mysteriously died. My best guess is that current into one of the pins exceeded its unusually low "operating" limit of 100 uA. (When not "operating," it has a 10 mA limit like a more normal op-amp, and that's the level of protection I was providing with my resistors.) And my spare op-amp was killed by electrostatic discharge - this is the first time I've seen that ruin a part in real life, but take it from me, you DO have to worry about it! Especially if you live in a nasty dry climate (grumble).

I thought I was stuck until I could order more parts ... but then I realized that since I wasn't doing proportional control anymore, I didn't really need an op-amp at all. I adjusted the mechanical bias on the needle movement until it would hang in a good neutral position when unpowered. Then I set up the simplest feedback mechanism possible: transistor OFF powers the electromagnet, transistor ON cuts the power (via a pullup resistor). With no delicate op-amps to suddenly give up, this version worked long enough for me to get a demo video.


For the future, I should find a way to reintroduce a bias voltage, so I don't have to depend on moving the mechanical bias point to calibrate the scale. I should also add some kind of averaging or filtering circuit on the output to smooth the measurement voltage, and amplify it to something bigger than a few mV. But I've got the basic idea in hand! As seen in the video, it can detect a bit of polyester carpet fuzz which is so light that it electrostatically sticks to the needle.

Until the next cycle,
Jenny

Saturday, February 28, 2026

Acuitas Diary #94 (February 2026)

In latest news, I've been putting more work into the Episodic Memory overhaul that I began last year. The big challenge for this stage was finding ways to examine results and actually test the thing. Since memory accumulation and consolidation is a process that spans weeks, I needed ways to run simulations and observe changes much faster.

Art piece: colored pencil and ink. A horizontal view, half-underwater, half-overwater, of the "beach" of a coral atoll. Everything is rendered in brilliant blues. Above the water, the atoll in the distance rears up into shapes that somewhat resemble chess pieces: a castle, a knight, a pawn. Surf crashes against one side of the atoll; a steamship is riding the crest of a wave toward it. Below the water, the branching hard corals are visible close up; they have multicolored, faceted surfaces, like cracked glass. A chessboard also lies on the sea bottom, partly covered by coral crust. The board is set in the midst of play, but not with the traditional pieces; these pieces resemble paws, hands, tentacles, tree stumps, and other oddities.
A portrayal of the color apocyan, "the blue of memory and brightest coral," from Sunless Sea. Original art by author.

As I did in my original stab at episodic memory work, I threw together a visualizer to show me a simplified graphical representation of the memories. But this time I used GraphViz, instead of drawing custom dot diagrams in Kivy. These memory visualizations are designed for me to view offline, so there's no particular need to create them in the GUI, and GraphViz is easier to use. Each bubble in the image is either a "fact," color-coded by link type (do_action, has_quality, etc.) or an "issue" (problem or subgoal). Facts that summarize other facts are connected by arrows to all their "children," and the same goes for issues; each issue is also connected by a bold arrow to the fact it is directly concerned with.
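Generating such diagrams can be as simple as emitting DOT text for Graphviz to render. Here's a stripped-down sketch; the node/link schema and color coding are invented for illustration, not Acuitas' real memory format:

```python
def memory_to_dot(facts, issues):
    """facts: {fact_id: (link_type, child_ids)}; issues: {issue_name: fact_id}.
    Returns DOT source; render with e.g. 'dot -Tpng memory.dot -o memory.png'."""
    palette = {"do_action": "green", "has_quality": "red"}  # made-up coding
    lines = ["digraph memory {"]
    for fid, (link_type, children) in facts.items():
        lines.append(f'  {fid} [color={palette.get(link_type, "black")}];')
        for child in children:
            lines.append(f"  {fid} -> {child};")  # summary fact -> child fact
    for name, fid in issues.items():
        lines.append(f'  "{name}" -> {fid} [style=bold];')  # issue -> its fact
    lines.append("}")
    return "\n".join(lines)

dot = memory_to_dot({"f1": ("do_action", []), "f2": ("do_action", ["f1"])},
                    {"issue_0": "f2"})
```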

Then I made a quick procedural generator that creates randomized memory files on command, so that I could have a variety without waiting for Acuitas to "grow" them. The generator populates a narrative scratchboard with the sort of subgoals and actions Acuitas would reasonably come up with while idling (reading, thinking, etc.), internal states that he might develop, and so forth. I can check my summarizing and forgetting algorithms by running them on these synthetic memories and seeing how they change the visualization.

The rest of the work was a lot of debugging; once I could see what the summarizing algorithms were doing, it was obvious they were messing up in all kinds of ways. I found bugs in scratchboard storage and retrieval, and bugs in summary generation (the summarizing facts ended up linked to themselves). But I think I've at least got things working tolerably well at this point.

The "summarizing" algorithm groups facts into clusters by 1) common features and 2) time proximity. So if, for example, Acuitas performs the "read story" action many times on different stories over the course of a day, those will be gathered into several clusters spanning different time ranges. Then a summary fact will be created for each cluster, and it will contain only the features held in common across all facts in the cluster: "I read," instead of "I read <particular story>." If I run another loop of the summarizer, I might see the first-tier summary facts grouped into clusters and a second tier of summaries appear. Here's part of an example diagram of a file that has gone through two summary loops:
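In toy form, the clustering-and-intersection step might look like this (the fact schema is invented for the example, not Acuitas' real one):

```python
def summarize(facts, max_gap):
    """Group time-sorted facts into clusters of overlapping features within
    max_gap of each other, then keep only each cluster's common features."""
    facts = sorted(facts, key=lambda f: f["time"])
    clusters, current = [], [facts[0]]
    for fact in facts[1:]:
        shares_features = fact["features"] & current[-1]["features"]
        if shares_features and fact["time"] - current[-1]["time"] <= max_gap:
            current.append(fact)
        else:
            clusters.append(current)
            current = [fact]
    clusters.append(current)
    return [set.intersection(*(f["features"] for f in c)) for c in clusters]

facts = [{"time": 1, "features": {"I", "read", "story_a"}},
         {"time": 2, "features": {"I", "read", "story_b"}},
         {"time": 9, "features": {"I", "read", "story_c"}},
         {"time": 10, "features": {"I", "read", "story_d"}}]
# Two clusters (t=1-2 and t=9-10); each summary keeps only "I read".
summaries = summarize(facts, max_gap=3)
```

Running the summarizer again on the summary facts themselves would produce the second tier described above.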

A bubble diagram showing various red and green "facts" (each indicated only by an ID number) and "issues" (with name codes like "issue_0") connected by arrows in tree-like structures.

All this gets me to a bare-minimum viable system for consolidating memories ... in *one* of the ways I want to! There's a ton of additional work to do on other consolidation modes, connections between episodic and semantic memory, and more.

Until the next cycle,
Jenny

Monday, February 16, 2026

Acuitas Diary #93 (February 2026)

I've got several projects boiling on the stove, but none are quite ready to showcase yet, so you're getting an Acuitas double-feature this month. This post is dedicated to what I've named the "self-teaching activity." The general idea is that Acuitas, while idling, will trawl the hard drive of his current host computer for text files, read them, and store a record of any difficulties: unknown words, parser crashes, uninterpretable sentences. The goal is to help him expand his vocabulary, and identify things I need to fix in the text processing chain, without requiring me to manually create new "stories" for him.

Illustration of a humanoid robot sitting at a desk and pondering a book, surrounded by stacks of other books.
Image credit: DARPA

Self-teaching is something of a canned procedure, for now. There's an action called "Study" that encapsulates everything Acuitas needs to do, including searching for appropriate files, converting them to a format he can interpret, and sending them through the text processing chain. But I designed some modularity into it, in hope that he can eventually modify and extend it when I introduce procedural learning. The file-conversion part of the procedure calls the problem-solving routine so it can expand as Acuitas learns more cause-and-effect rules. For now, though, he only knows how to process text files.

This work also introduces more examples of Acuitas calling other software tools. He now has a generic Run action that can accept the name and arguments of an external program, and call it as a subprocess. Since Acuitas' parser is designed to ingest one sentence at a time, I wrote an independent script that breaks arbitrary text files into sentences. (This is harder than you might think, and the script is very rudimentary, for now ... but it can handle common abbreviations.) The "Study" procedure creates a sub-action to run this script after finding an appropriate file.
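The real script isn't shown here, but a rudimentary splitter in the same spirit (break on terminal punctuation unless the token is a known abbreviation) might look like this, with an abbreviation list invented for the example:

```python
ABBREVIATIONS = {"mr.", "mrs.", "dr.", "st.", "e.g.", "i.e.", "etc."}

def split_sentences(text):
    """Very naive sentence splitter: ends a sentence at ., !, or ?
    unless the word is a common abbreviation. No handling of quotes,
    decimals, or initials; just a stand-in for the real thing."""
    sentences, current = [], []
    for token in text.split():
        current.append(token)
        if token.endswith((".", "!", "?")) and token.lower() not in ABBREVIATIONS:
            sentences.append(" ".join(current))
            current = []
    if current:  # trailing fragment without terminal punctuation
        sentences.append(" ".join(current))
    return sentences

split_sentences("Dr. Smith read a story. Was it good? Yes.")
```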

As often happens, I ran into some difficulties that prevented this from getting quite as far as I would like. For one thing, not all text files contain typical sentences! Their actual contents might be log entries, code snippets, lists of items, or other material that isn't really "parseable." I particularly don't want Acuitas junking up his database with new "words" that aren't really words. I added a filter that at least keeps anything that isn't alphanumeric from being learned. But I also don't want the error reports clogged with failed attempts to parse "sentences" that aren't really sentences. So for now, I've restricted the process to looking for files with the extension ".textum", which I've applied to some appropriate material. Eventually I'll need to work on ways to recognize files that are worthy of being studied.

But given an appropriate file (i.e. one that contains writing, like this blog post), Acuitas can "study" it and keep track of things he has trouble with. Crashes or poor results from the processing steps produce records in a log file that notes the type of error alongside a copy of the sentence. Unknown words are registered as problems on the Executive's scratchboard, so Acuitas can ask a human for more information about them later. I got this latter feature working and then promptly turned it off, because there's no way to keep Acuitas from spamming me with questions whenever I'm on the computer (he knows). This has been a problem for questions generated by "thinking" (walking the semantic database) too. So coming up with a way to slow down the flood or signal that I don't want to be disturbed is also on my future work list.

The "Study" action itself is naturally triggered by the goal system. All I had to do was put in a cause-and-effect rule, to the effect of "if you study, you will know things." Knowing Things is one of Acuitas' intrinsic goals, so while idle, he naturally studies until he gets bored of it (after which he might read his collection of easily-understandable stories for "enjoyment," or think about the concepts in his database).

It should be obvious that self-teaching needs more work, but I like how far I got with the prototype and think it could be quite useful in the future.

Until the next cycle,
Jenny

Tuesday, January 27, 2026

Acuitas Diary #92 (January 2026)

My first objective for the new year was enabling the Text Parser to handle lists or conjunction groups with more than two items. For quite a while now, Acuitas' parser has been equipped to handle sentences like this:

Jack and Jill went up the hill.

But a sentence like *this* would hopelessly confuse it:

Jack, Jill, and John went up the hill.

Clip art of a classic blank scroll, rolled in opposite directions at both ends.

I started out by only handling pairs because that makes it simpler to discern which parts belong in a list/group and which don't; you only have to look at the sentence elements that bracket the conjunction. I figured I would expand to longer lists later. But once I got here ... well, I reused some of that previous work, but I also decided on a pretty extensive overhaul. I'll try to explain what my options were and why I chose to change course.

One way of dealing with a list is to encapsulate it. For example, most sentences have a subject, the thing that's doing the action, and part of the Parser's job is to determine which word is the subject and tag it; "(subj, Jack)->(verb, went)." If you have a list of subjects (as in "Jack, Jill, and John went up the hill"), you can bundle them into a compound subject and tag that. So the parsed sentence becomes something like "(subj, <list>)->(verb, went)," and you can open up <list> and see that it contains Jack, Jill and John. I was already handling sentences with dependent clauses this way (e.g. "What you need is a blanket" becomes "(subj, <depcl>) is a blanket").
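As a rough illustration (this is an invented representation, not Acuitas' actual output format), the encapsulated parse of the three-subject sentence might look like:

```python
def list_node(conjunction, members):
    """Bundle several same-role elements into one compound node."""
    return {"type": "list", "conj": conjunction, "members": members}

# "Jack, Jill, and John went up the hill."
parsed = {
    "subj": list_node("and", [("noun", "Jack"), ("noun", "Jill"), ("noun", "John")]),
    "verb": "went",
    "mods": [("prep", "up", ("noun", "hill"))],
}
# The top level sees a single subject slot; opening the node
# reveals the three members inside.
```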

Another possibility is to imagine the sentence structure like a railway line. Subject connects to verb connects to direct object and indirect object, and if some of those are multiple, the line will branch or merge. Our previous example would look something like this:

(subj, Jack) -
              \
(subj, Jill) --->(verb, went)
              /
(subj, John)-

I had previously been using the "encapsulation" method for a few things (like lists of adjectives), but I used the "line" method for the main sentence structure, because I thought I needed it to handle some of the more complex cases. Lists of single words are the easy ones. You can also have lists of verb-object groups:

I threw out the soup, ate the pizza, and saved the cake.

You can have lists of verbs in which some attach to the direct object and some don't:

Brent ran and threw the javelin.

Occasionally, you can have lists of subject-verb groups that converge on a single object:

Are you or are you not a teacher?

I had concluded that parsing the sentence into a branching type of structure was the only way to deal with groups that spanned words with different roles (because otherwise, how would I assign the list a single role in the full sentence?). But there are also distinct disadvantages to not treating the members of a list as a unit, and once I got into lists longer than two, those began to feel overwhelming. So I opted to switch everything over to the "encapsulation" method.

How *did* I handle groups containing multiple roles, then? I realized I could decree that the role of the list in the main sentence would be "verb." This works because a verb is really the one thing that every sentence needs. Some sentences only have an implied subject, and objects are always optional. So lists of subj-verb groups, lists of verb-obj or mixed verb and verb-obj groups, and even lists of subj-verb-obj groups, can all become "verbs" at the top level of the hierarchy, and only unpacking them need reveal their deeper structure.
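For instance, in an invented notation (not Acuitas' real data structures), the subj-verb groups of "Are you or are you not a teacher?" could be packaged into the sentence's single verb slot:

```python
# Hypothetical representation: the whole group occupies the verb role,
# each member carries its own subject, and the object is shared.
sentence = {
    "verb": {
        "type": "list",
        "conj": "or",
        "members": [
            {"subj": "you", "verb": "are", "negated": False},
            {"subj": "you", "verb": "are", "negated": True},
        ],
    },
    "obj": ("noun", "teacher"),
}
# Only unpacking the verb-slot list reveals the per-member subjects.
```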

Aside from this conversion, there was a fair bit of new development work I did to detect lists and figure out where their boundaries are. There are plenty of (not) fun ambiguities involved, like this one:

For dinner, Sue and James brought a pot pie.

A clumsy parser might assume that "dinner, Sue and James" is a list that forms the object of the preposition "for," then be left wondering where the subject of the sentence is.

I haven't recovered the full functionality of the former Parser where pairs of groups were concerned (I'll pick at that gradually while moving on to other topics), but that's balanced by the capacity to handle longer lists in quite a few scenarios. This was one of the last major missing features of the Parser, and a heavy weight on my mind. So it feels wonderful to finally have this capability in place.

Until the next cycle,
Jenny