Thursday, December 29, 2022

Year in Review 2022

Well, disastrous world events continue apace, but I'd call 2022 another solid year. Maybe not a *great* year. I lost both of my surviving grandparents, I pretty well healed up from CIDP (or whatever it is) but then I caught COVID and the CIDP came back, and it's hard for me to think of many spectacular events. But we can call this a "foundation-building" year. I've made significant, if quiet, progress, and I'm excited about where 2023 is going to go.

I think it's also been a great year for internet friendships, in spite of Facebook slowly fading into a ghost town and then Twitter splitting at the seams. Thank you all again for being here. Thank you for following my work, having good conversations, and sometimes even looking after me when I get upset. When I first started blogging, I expected to have "an audience," but never thought it would be this good.

Ursula inspects Version 3 of the mechanical eyeball.

So here's a quick rundown of what I did this year:

*Added support for coordinating conjunctions - the last missing part of speech - to Acuitas' text processing chain, and achieved improvements on Parser benchmarks.
*Put a lot of work into better Narrative comprehension and Theory of Mind frameworks. I feel almost ready to take a crack at a particular story I've wanted to do, and ... *whispers* maybe text adventures? Next year, next year ...
*Got partway through a major refactoring and tidying of the Narrative module, to lay a good foundation for the new work next year.
*Finished reading all the publicly available papers on Pei Wang's recommended "AGI Education Advanced Topics" list.

A story flowchart (generated by Acuitas' Narrative Engine) for the "Altan Finds Water" story from this May.

*Bought and initiated the Anycubic Vyper printer, which is amazing. I still used my X-One 2 to help with all the Christmas projects, but the Vyper is (in many ways) so much nicer that it's hard to make myself go back to the older model.
*Completed a redesign of all ACE's leg joints and spinal frame, and got him to ACTUALLY STAND passively at his full height. Next year - motion? Maybe? Or another small redesign when I find out those motors just won't cut it.
*Took Atronach's Eye through a couple of design iterations, and got a better understanding of the issues with its mechanics, but didn't really solve them. There's another saga that will continue into next year.

Ill Gotten Games' Pocket-Tactics starter set, printed for a Christmas present

*Queried a few agents with my first novel, then decided to change tactics and boost my portfolio before trying again.
*Wrote four short stories.
*Published more than two blog posts per month (on average) - that's almost double the amount I was able to write the previous two years.

*Got almost ready to wrap up what has been my main project for the past few years at Day Job: parts of the data processing and storage box for an imaging satellite. That's not as glamorous as a launch announcement, which I don't suppose will be coming for several more years. We made this one for Germany, so from our perspective it's not classified, but I have no idea how public the launch information will be. In any case, I'm looking forward to shipping it and doing something new!

Butternuts green on the vine. In the far background is the acorn squash, which really tried to produce but didn't make it before the freeze.

*Successfully harvested butternut squash for the first time, in addition to the usual potatoes.
*Kept the book consumption rate higher than the book acquisition rate ... only by a little, but the queue is almost empty. Soon I will once again be justified in buying books on impulse, soooon ...
*Pushed my creation work rate to over three times the maintenance work rate for the year, just barely. It wasn't the most productive year - I let the work rest for a while and did a lot of traveling - but I still completed most items on my planning schedule.

Happy New Year, everyone. I hope 2023 turns out better for us all.

--Jenny

Tuesday, November 29, 2022

Acuitas Diary #55 (November 2022)

There's not a lot to report, unfortunately - I'm still bogged down in the Narrative Engine overhaul. It had accumulated quite a few features, so revamping everything and converting it to the new framework is going to take time. As part of that process, though, I came up with a fun new visual tool. I wanted a way to display how Narrative analyzes a story, both as a demonstration of results and to show how the structures of stories differ. It's still incomplete - I want to add some detail - but it gives the rough idea.


To Acuitas, a story is all about how some Agent achieves, or maintains, some Goal. Any status condition that provides an opportunity to realize a Goal, or that threatens a Goal, is referred to as an "Issue." (I used to call these "subgoals" and "problems," but eventually came to understand they could just be given different polarities and handled in the same way.) Issues may spawn Predictions as Acuitas tries to guess how any Agents in the story will respond. Anticipatory statements like "character expected to ..." can also generate Predictions. The Narrative Engine then tests future statements in the story to find out if they resolve any Issues or Predictions.

This yields a basic sense of the flow of action in the story, and is what's captured by the diagrams. The sentences of the story are presented post-conversion to more abstract representations of fact (you could think of these as "the gist" of each sentence). Issues and Predictions appear connected to the sentence that created or resolved them.
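
For the programmers in the audience, here's a rough sketch of the kind of bookkeeping involved. It's heavily simplified and the names (plus the toy string-based negation) are invented for this post, not lifted from the real code:

# A heavily simplified sketch of Issue/Prediction tracking.
from dataclasses import dataclass

@dataclass
class Issue:
    agent: str
    state: str            # goal-relevant condition, e.g. "Jack is cold"
    polarity: int         # +1 = opportunity (sought), -1 = threat (avoided)
    resolved: bool = False

@dataclass
class Prediction:
    agent: str
    action: str           # what we expect this agent to do
    fulfilled: bool = False

issues, predictions = [], []

def negate(state):
    # Toy negation: "Jack is cold" <-> "Jack is not cold"
    if " is not " in state:
        return state.replace(" is not ", " is ")
    return state.replace(" is ", " is not ")

def process_sentence(fact):
    # Each new fact is tested against the open Issues and Predictions.
    for issue in issues:
        if issue.resolved:
            continue
        # A sought state resolves when it comes true; an avoided state
        # resolves when it is negated.
        target = issue.state if issue.polarity > 0 else negate(issue.state)
        if fact == target:
            issue.resolved = True
    for pred in predictions:
        if not pred.fulfilled and fact == pred.action:
            pred.fulfilled = True

issues.append(Issue("Jack", "Jack is cold", polarity=-1))
process_sentence("Jack is not cold")
print(issues[0].resolved)   # True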

So far I've gotten the first two stories that I tested Narrative against up and running in the new version. "Amarok and the Fire," shown at the top of this post, is a pretty simple affair: one problem that gets resolved after a bit of meandering.


"To Build a Fire" is a more complex story with multiple Issues, a reversal (a problem is initially mitigated, then comes back), and a bad ending. Here's the natural English text for comparison with the diagram:

0:"Jack was a man."
1:"Jack was in Alaska."
2:"Jack was cold."
3:"Jack expected to freeze."
4:"Jack decided to build a fire."
5:"Jack found sticks."
6:"Jack built the fire."
7:"Snow smothered the fire."
8:"The fire was destroyed."
9:"Jack was cold again."
10:"Jack became stiff."
11:"Jack could not build the fire again."
12:"Jack froze."
13:"The end."

I may continue to be a little quiet as I finish up this overhaul, but I have big hopes for next year.

Until the next cycle,
Jenny

Tuesday, November 8, 2022

Managing My "Free" Time

It's getting toward the end of the year, and that means it's time to start thinking about the next one. I thought I'd shift gears a little bit and tell you about how I do project management. My hobbies are a fair amount of work and there's nobody *making* me do it, so it's my job to keep myself organized and motivated. Some goal-setting helps with maintaining steady progress.

3D-printed object tacked to a light switch plate: a little tentacled blob (part of the WAU from SOMA).
I still end up with time for the occasional random project done on a whim, which is unfortunate, because now the WAU is infecting my house. 

I typically plan for one year at a time. Some people try to have five-year goals and such, but I've found it difficult to see farther than one year ahead - the reasons why should start becoming apparent as I get into the details. Still, I'm mulling the idea of trying to sketch out longer-term plans, but haven't done so up to this point.

I start by blocking out how much time I'm going to have. I assign time in units of weeks, and there are four weeks per month. If any month has a fifth week, that's an "elastic week" that I can use for anything I feel like at the time; I don't assign tasks to it. I also don't assign anything in June and December, to provide myself with time for socializing around summer vacation and the big year-end holidays. So that gives me 40 weeks a year to play with.

The recent pattern I've settled into has been 20 weeks for Acuitas, 10 weeks for robotics, and 10 weeks for writing. And each month gets two Acuitas weeks, one robotics week, and one writing week. Rotating between hobbies helps relieve mental fatigue, reduces disappointment if one of them isn't going well at the moment, and gives me a better mix of output to share with my audience (some of whom might primarily follow me for one topic).
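
In code form, the arithmetic is trivial (a toy illustration, obviously - the real thing lives in a spreadsheet):

# Toy version of the yearly week budget.
MONTHS = 12
UNASSIGNED_MONTHS = 2          # June and December stay empty
WEEKS_PER_MONTH = 4            # fifth "elastic" weeks are never assigned

plannable_weeks = (MONTHS - UNASSIGNED_MONTHS) * WEEKS_PER_MONTH   # 40
allocation = {"Acuitas": 20, "robotics": 10, "writing": 10}
assert sum(allocation.values()) == plannable_weeks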

Next I have to break up my projects into tasks or phases that I think will take about a week to do. This is easiest for writing; one week yields one novel chapter (5000-6000 words), one short story, or multiple chapters of editing. I don't really get "writer's block" and can reliably produce that much in a week's spare time. Where AI and robotics are concerned, I often have a very poor idea of how long those tasks are going to take. Mechanical design goals, for example, *might* only demand that I fabricate the parts once ... but frequently the first prototype doesn't work out. And the AI is being built from scratch, by a person walking through the dark with a lamp that only shows a few feet in front of her. I have no idea what I'm doing. And that makes schedule projections difficult.

Which is why, when I'm executing these schedules later, I give myself an out. To complete a task "on time," I have to either finish the task, or do at least ten hours of work on it. If I hit ten hours and it's nowhere near done, I'm allowed to push the schedule. The remainder of that task falls into the next week, and some later task I'd planned to do this year gets scratched off the list. On the other side of the coin, I try to have some "stretch goals" in my back pocket in case work gets done faster than expected (ha ha, like that ever happens), or I have room to work during an elastic week, or I have spare vacation days to burn.

I typically add at least one more column to the schedule, and that's for home and yard maintenance: one task per month, or less, related to either routine work or repairing something broken.

Part of this year's schedule spreadsheet, mostly filled out. Color coding indicates how close to on-time tasks were completed. It doesn't look quite the same as it did at the beginning of the year. "Repair Fence" was added rather abruptly. Many robotics tasks got pushed off the list when ACE took longer than expected.

Once the plan is done, I track my progress as the year goes by. I note when I finish each task and whether or not I met its deadline. I also log the number of hours invested in each type of work I do: both the scheduled hobby tasks, and others. Yes, I have a spreadsheet for this, and it's a lot like my timesheet for the day job. I put down the number of hours for each project on each day, and the sheet calculates weekly and yearly totals. 

Here's where another little goal comes in: I try to achieve a 3:1 (or greater) ratio between Creation work and Maintenance work.

The real defining factor here is not how "creative" the work is, but whether, when it is finished, it gives me some asset I didn't have before - whether that be new knowledge, new digital content, or a new object. "Maintenance" is work that, once done, only has to be done again. It's time spent fighting entropy - a tax paid to futility, since this is a fight I will eventually lose. So when the creation:maintenance ratio is high, I'm in a state of growth. If it's low, I'm burning most of my energy just treading water. Three is a completely arbitrary number, but it makes me feel good and is achievable with effort, which provides the right incentive to push.

To get more specific, Creation tasks include all the aforementioned hobby projects, as well as blogging, artwork, and technical study. Maintenance covers food preparation, housecleaning, journaling, laundry, yard work, financial business, repairs, etc. Then there's a third category, Volunteer, which covers work primarily done for others. This includes actual community volunteering but also political activism and voting, church service, making or buying gifts, and so forth. I don't enforce a ratio between this category and the other two; I just do things as there's need or opportunity.
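
If you like, you can think of the spreadsheet as doing something like this (a toy sketch with made-up entries):

# Toy sketch of the hour log and ratio check; entries are made up.
hours = {"Creation": 0.0, "Maintenance": 0.0, "Volunteer": 0.0}

def log(category, task, time_spent):
    hours[category] += time_spent

log("Creation", "Acuitas Narrative overhaul", 10)
log("Creation", "blog post", 3)
log("Maintenance", "yard work", 4)
log("Volunteer", "make a gift", 2)     # tracked, but no target ratio

ratio = hours["Creation"] / hours["Maintenance"]
print(f"creation:maintenance = {ratio:.2f}:1 (target: 3:1 or better)")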

The 3:1 ratio incentivizes me to do both more Creation and less Maintenance. Even if pulling Maintenance down doesn't directly improve Creation, it gives me more time to recharge. A certain amount of Maintenance simply has to be done, but if I'm not hitting the target, I start taking a harder look at what's essential. I decide I can live with a dirty kitchen or tolerate more weeds in the yard. Because what is the meaning of that stuff, really? I actually had to stop including exercise under Maintenance because ... while technically, it is ... that was proving too much of an incentive to *not* exercise. So now I make a point of only doing exercise that has some fun value (e.g. going on walks), and treat it as recreation.

There are other quirks to how I keep my books. Transportation time to any activity is not counted, because though it expends time, it doesn't expend my real limiting factor: mental energy. If I multitask - fabricating something while watching TV, for example - I only log half of the actual time spent, while the other half goes to recreation.

I mentioned that a 1-week hobby task is capped at about 10 hours. Once I add in everything else, a good week of work totals about 16 hours. If I hit 20+ hours, that's a great week. Don't forget I'm working a full-time job too! At the end of the year, the tracking gives me a way to see what I prioritized, which tasks hogged more time than they deserved, and how I did compared to previous years.

I've tried various incentive structures in hopes of making myself more productive. For example, "I can buy myself an extra music album or restaurant meal if I log at least X hours this month." I used Habitica for a while. But eventually I stopped, because it wasn't helping. The satisfaction of putting hours into my spreadsheet and hitting goals seems to be motivation enough, and when I'm tapped out, I'm done - no reward will persuade me to keep working. Connecting my money budget to my time budget was also hindering ideal management of the money, and the additional record-keeping was becoming a chore in its own right. So, although I suppose this kind of thing works for some people, it didn't work for me.

Which brings me to the final point of the post, which is that *you* might find some of these ideas useful for your own planning; I present them as inspiration. But if you find that they don't click for you, don't use them. Every brain is a little different, and it's your job to figure out how to trick your own particular brain into doing what you want.

Until the next cycle,
Jenny

Thursday, October 27, 2022

Acuitas Diary #54 (October 2022)

I wish I had more new features to write about this month, but I had to slow down a little bit. There was too much going on, and my brain started feeling like it was trying to brown out and stop working. So instead I'm going to talk about overhauling the Narrative module. I had gotten it pretty far, and I decided it was time to go back and address some pain points I kept running into over and over as I was adding new functions. This work is not exactly "easy" but it's much easier than adding new functionality.

A black-and-white sketch of a fountain pen lying on a collection of small notes.

A big thing I wanted to address was ease of negating or rescinding conditions in a story. Each time a new fact is introduced by a story sentence, it produces a spray of implications which are all collected by the Narrative Engine and used for future reasoning about the story. For example, the story sentence "Jack is cold" will produce the implication "Jack is uncomfortable" and will register "Jack is cold" as a problem state.

Now let's say that, at some later point in the story, we find the sentence "Jack is not cold anymore." This will replace "Jack is cold" in the current worldstate that the Narrative Engine is tracking, and will officially solve the "Jack is cold" problem. But what about "Jack is uncomfortable"? "Jack is not cold" does NOT automatically generate the implication "Jack is not uncomfortable," since a human can be uncomfortable for any number of other reasons. So there is nothing to negate "Jack is uncomfortable" and get it out of the worldstate.

Or suppose that Jack is cold, and he solves the problem by getting a jacket. "Jack wears jacket" implies "Jack is warm" implies "Jack is not cold," and this cancels "Jack is cold" out of the worldstate. But then he loses the jacket. The Narrative module ought to know that he returns to his default state of being cold ... but it doesn't.

These are just two examples of how reversing a condition can leave orphaned implied results scattered around the Narrative Engine's data structures. So as part of the overhaul, I'm working on ways to include pointers between facts registered in the Narrative's worldstate and *anything else* that was created, activated, or deactivated as a result of their presence. So when a fact gets rescinded, the Narrative Engine can easily walk through all the effects of its existence and rescind them too.

A fact can inherit its presence from multiple other facts. Let's say that Jack is uncomfortable because he's cold and hungry. If either "Jack is cold" or "Jack is hungry" is negated, "Jack is uncomfortable" will continue to be true, since *one* of the conditions that causes it is still in effect. But if "Jack is cold" and "Jack is hungry" are both negated, "Jack is uncomfortable" will be deactivated as well, since it has lost all its "supports."

A fact can also be suppressed by another fact (as in the example of the jacket canceling out the default state of cold). The solution in this case is not to remove "Jack is cold" from the worldstate, but to deactivate it and specify that "Jack wears jacket" is the source of the deactivation. If "Jack wears jacket" is deactivated, "Jack is cold" automatically becomes active again.
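
To make that concrete, here's a stripped-down sketch of the support/suppression scheme. The names are invented for this post, and the real data structures are messier:

# Stripped-down sketch of fact supports and suppressors.
class Fact:
    def __init__(self, text):
        self.text = text
        self.asserted = True        # directly stated, not yet rescinded
        self.supports = set()       # facts whose presence implies this one
        self.suppressors = set()    # facts that override this one while active

    def active(self):
        # A suppressed fact is inactive; a derived fact needs at least one
        # live support; a base fact stands on its own assertion.
        if any(s.active() for s in self.suppressors):
            return False
        if self.supports:
            return any(s.active() for s in self.supports)
        return self.asserted

cold, hungry = Fact("Jack is cold"), Fact("Jack is hungry")
uncomfortable = Fact("Jack is uncomfortable")
uncomfortable.supports |= {cold, hungry}     # inherits from both sources

jacket = Fact("Jack wears jacket")
jacket.asserted = False                      # not true yet
cold.suppressors.add(jacket)                 # the jacket overrides the cold

jacket.asserted = True
print(cold.active())            # False: suppressed, not deleted
jacket.asserted = False         # Jack loses the jacket...
print(cold.active())            # True: the default state reactivates

cold.asserted = False           # now rescind "Jack is cold" outright
print(uncomfortable.active())   # True: "Jack is hungry" still supports it
hungry.asserted = False
print(uncomfortable.active())   # False: all supports gone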

I had never planned a way to maintain this complex web of connections when I first wrote the module - it's one of the things I had to learn was necessary by doing the work - hence the major overhaul. And that's only one thing I'm changing; I'm also working on efficiency improvements, clarity, and general cleanup. I'm hoping to have a solid foundation from which to pursue my Goal Story next year.

Until the next cycle,
Jenny

Sunday, October 16, 2022

Design Spark Mechanical Hints and Pitfalls

I've been building my own 3D models for a couple of years now, and my favorite tool for this is still DesignSpark Mechanical. So I thought I would write up some tips and tricks I've learned as I've gained experience with it. This is not a general beginners' tutorial; I don't feel the need to create one of those, since they already exist (check YouTube). It's a list of all the weird little gotchas that weren't in the tutorials I watched and had to be learned painfully ... the things I wish somebody had told me before I started using the program.

A 3d model in DS Mechanical

DesignSpark is the free little brother of another CAD tool called Spaceclaim. They're basically the same program, so if you're searching for a way to do something in DS and striking out, try looking at Spaceclaim Q&A or tutorials. Spaceclaim has more features, naturally, so sometimes you may not be able to apply the results to DS, but other times you will.

How do I reflect or mirror an object about an axis?

DesignSpark does not have a mirroring function. You can rotate objects as much as you like, but you can't reflect them! What it does have is the option to set up a "mirror line" in the sketch plane, such that mirror images of Curves are created as you draw them. Many shapes begin in Sketch Mode, so if you are going to need a mirrored version of something make *sure* to plan ahead. Don't wait until you've created a complex object to realize that you now have no way to mirror it.

Setting a mirror line on the Sketch Plane
Now any lines or shapes drawn on one side of the line will be reflected on the other.
How do I turn a Surface into a Solid (or a Solid into a Surface)? Why won't my Surface become a Solid?

Sometimes you will want your models to be Surfaces, and sometimes you will want them to be Solids. There are operations that become easier in one mode or the other. The commands for converting between these modes are Combine and Detach, but confusingly, they both have multiple functions. Combine can be used to join individual surfaces (faces) to each other, but also if you use it on a closed Surface, that Surface will become a Solid. Combining a group of faces to make a closed Surface also automatically turns that Surface into a Solid. Detach turns a Solid into a Surface if used on any face of the Solid, but is also useful for separating individual faces from an existing Surface.

Meshmixer detecting an insidious issue around the base of the right-hand peg.

If you try to use Combine to turn a Surface into a Solid and *nothing happens,* that means your Surface has something wrong with it. Maybe it's not completely closed; there could be an almost-invisible crack between two of the faces, for instance. Or maybe it doesn't close around fully empty space -- if any of the faces are doubled, or if the surface has "interior walls," it can't become a solid. DesignSpark will not tell you what the problem is. Fortunately, other tools like MeshMixer *will* tell you ... so as a last resort, if you can't figure out what's wrong with your model, you can export it as an STL and have MeshMixer's inspector point out the flaws. You *do* want to fix them. Anything which prevents your model from becoming a solid may also qualify as something that makes it unprintable.

Why do I get an error when trying to Move my Solid/Surface?

If you select an object and try to move it, and it WON'T MOVE ... you've probably only selected one face, instead of the whole object. You can avoid this by clicking on the object's name in the Structure window to select it, instead of clicking on the object itself in the viewing window.

How do I draw in 3D Mode?

Selecting any of the tools for drawing Points or Curves will automatically activate Sketch Mode on whichever plane the tool guesses is best. But sometimes you don't want that -- there are times when you need to draw in 3D Mode. To get back there, just hit the 3D Mode button *after* selecting your drawing tool. You will go back into 3D Mode with the right drawing tool still active. Points and lines can't be placed arbitrarily in 3D Mode; you have to attach them to something (existing Solids, Surfaces, or Curves).

Why can't I separate two Curves that are on top of each other? How do I move one Curve at a time?

In Sketch view, if you duplicate a line or other Curve using copy-and-paste, the new Curve will by default be pasted right on top of the original. Then when you try to move the copy ... you may find that you end up moving both objects. You won't be able to separate them! To fix this, look to the "Options - Selection" box on the lower left, and untick the "Maintain sketch connectivity" checkbox. Now you will be allowed to select and move only one of the Curves at a time.

Location of the "Maintain Sketch Connectivity" checkbox

I turned off Snap to Grid/Angle, but my cursor is still snapping to things while I draw. Why?

Snapping is great when you want it, and very annoying when you don't. The obvious ways to turn it off are in the "Options - Selection" box: "Snap to grid" and "Snap to angle." But after you untick these, you will find that there is still some snapping going on. Turns out there are additional, hidden snap options under File->DesignSpark Options->Snap.

The advanced snap options

Solids keep joining when I Pull parts of their geometry

"Pulling" a surface on a Solid while it is in contact with any other Solid will automatically merge those Solids into a single blob, potentially annihilating some hard-won geometry. An Undo fixes this ... unless you don't notice until you've gone many steps further in your work. To make sure this doesn't happen, always right-click and select the barred circle ("No Merge") from the popup menu before you Pull. A Solid that is being pulled also will not merge with any hidden objects, so hiding all Solids but the one you're working with reduces the chance of mistakes.

How can I split faces on a Surface?

If you need to split an existing face or cut a hole in it, you can use any of "Split Body," "Split," "Project," or "Pull," depending on the situation. If one method doesn't work, try another. The "Project" tool is powerful but very messy if not handled carefully. Make certain to use the face selection and direction options to confine the projections to your desired area, or you might find random cuts all over your models later. And if you want to cancel, hit the Escape key before you select a different tool, or the current projection will be automatically applied!

Why can't I Blend across this gap?

Blend is your friend, especially if you're trying to build curvy organic shapes ... but sometimes it can't compute a nice joining and needs guidance. The order in which you select lines or faces to be blended matters. Splitting curves/faces in order to blend across smaller regions may also help. Once you get the hang of it, you can construct whole models with little more than the Blend tool -- just draw wireframes using Splines, then Blend adjacent curves together.

A weird blend created by selecting the bounding segments in the wrong order

A good blend created by selecting the bounding segments sequentially

How do I Pull the edge of a surface out to make a plane?

"Pulling" an edge that is attached to a Solid will round or chamfer it. If you want to pull the edge out to make a plane or cylinder instead, copy-and-paste it to create a free-floating duplicate.

How do I keep curves drawn on a surface from becoming part of the surface?

Curves sketched on a Surface will become part of that Surface as soon as you leave Sketch Mode. Free-floating Curves will turn into the boundaries of new Surfaces. If you don't want this, Cut them before leaving Sketch Mode and then Paste once you are back in 3D Mode.

A curve that cuts the surface it was drawn on (left) vs. a free-floating curve (right)

Until the next cycle,
Jenny

Saturday, September 17, 2022

Acuitas Diary #53 (September 2022)

Last month's Narrative feature addition dealt with planned actions and their side effects: a desired action or state might be chosen for some outcome (e.g. solving a problem), but will also have other outcomes. If any of these are too negative, Acuitas will expect the character planning that action to take steps to mitigate them.

This month's work directly built on that by introducing the idea that agents can *add* side effects to other agents' actions, by choosing a conditional behavior: "If you do X, I will do Y." This is the groundwork for understanding social interactions like bargaining, reward, and coercion.

A series of captioned stills from Disney's "The Emperor's New Groove." Yzma and Kronk are locked in a closet (the scene is dark, with little except their eyes visible). Yzma says to a person outside, "Tell us where the talking llama is and we'll burn your house to the ground." Kronk says, "Uh, don't you mean 'or'"? Yzma then says, "Tell us where the talking llama is OR we'll burn your house to the ground."
Film stills from kpfun.tumblr.com via Sam Stryker (Buzzfeed). Copyright Disney. 

The introduction of a story sentence like "Agent A decided to do X if Agent B did Y" now creates a new cause-and-effect rule for the Narrative engine to use; it isn't stored to the permanent database, only used within the domain of that story. For reasoning purposes, it is assumed that "A does X" will automatically happen if B does Y ... so long as nothing is preventing A from doing X.

I can start to define some verbs in terms of these models - much as, in previous Narrative work, I effectively defined "lie" as "tell someone a proposition that you don't believe." Now "coerce" ... in at least one of its forms ... can be defined as "deliberately apply a negative side effect to someone else's subgoal." If this happens, the Narrative engine will infer that A coerced B.
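
In sketch form, previewing the test story you'll see below (with invented names and a one-step lookup standing in for the real inference):

# A simplified sketch; a lookup table stands in for the Narrative
# engine's real rule handling.
from dataclasses import dataclass

@dataclass
class ConditionalRule:
    actor: str          # Agent A, who chose the behavior
    response: str       # X: what A will do ...
    trigger_agent: str  # Agent B
    trigger: str        # Y: ... if B does this

story_rules = []        # local to the story; never hits the database

def on_decision_sentence(a, x, b, y):
    # "A decided to do X if B did Y" registers a story-local rule.
    story_rules.append(ConditionalRule(a, x, b, y))

def added_side_effects(agent, action):
    # The rule attaches A's response to B's action as a side effect,
    # assuming nothing prevents A from following through.
    return [r for r in story_rules
            if r.trigger_agent == agent and r.trigger == action]

def infer_coercion(agent, action, is_negative):
    # "Coerce" = deliberately apply a negative side effect to someone
    # else's subgoal.
    return [(r.actor, agent) for r in added_side_effects(agent, action)
            if is_negative(r.response)]

on_decision_sentence("George", "beat Robert", "Robert", "study mathematics")
print(infer_coercion("Robert", "study mathematics",
                     lambda effect: effect == "beat Robert"))
# [('George', 'Robert')] -> George coerced Robert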

I was more interested in coercion than the positive options, thanks to the second goal of the month: to figure out a functional understanding of "freedom." As with other important abstractions I've introduced, I don't promise the result is better than an oversimplification. But we have to start somewhere.

And seeing as it's the high concept of my whole country, I should probably figure it out.

A still of Cloud William in the prison, from Star Trek: The Original Series episode "The Omega Glory." William is saying, "It is a worship word, Yang worship."
"The Omega Glory" is one of the silliest Star Trek: TOS episodes there is, and I still love it.

Freedom could be defined, in a very simple and broad sense, as a lack of constraints. But all humans live with constraints. We generally don't presume that freedom requires *omnipotence.* So to get closer to the idea of freedom that people generally have in mind, we might say "a lack of *unnatural* or *exceptional* constraints." These could include situations that severely reduce one's options below the ordinary level ... getting trapped in a cave by a landslide, for instance. Since any constraints imposed by other agents are not part of the default state of things, they are also included. Freedom in a narrower sense is concerned with not having one's existence, abilities, and purpose subverted - not being *used* as a means to someone else's ends.

Assessing what counts as a "severe reduction of options" is a little beyond Acuitas' capability right now, so I plan to just put conditionals in the database for some of these. "Confined implies not free," "restrained implies not free," etc. But as for the other part, the Narrative engine can assess whether some other agent is applying coercion, or otherwise purposely constraining the viewpoint character's actions. If this happens, the viewpoint character is less than free.

There are a couple of additional wrinkles. Agent B's freedom is not regarded as being lost if Agent A thwarts one of Agent B's goals in *self-defense.* If we didn't have this provision, we'd be stuck with conundrums like "Agent B wants to prevent Agent A from living. Agent A wants to prevent Agent B from killing them. Who is offending against whose freedom?" For an idea of how "self-defense" is defined, take a look back at the Odysseus and the Cyclops story.

Now for what I found to be the trickiest part: sometimes you can interfere with someone else even while minding your own business. For example, let's suppose Josh has a goal of buying a PS5. There's a world of difference between "Josh could not buy the PS5 because I bought it first," and "Josh could not buy the PS5 because I wrestled him to the ground and wouldn't let him enter the store." In both cases, I take a volitional action that reduces Josh's options and prevents him from achieving his goal. In the first case, I'm not limiting Josh's freedom, just exercising my own; my interference is indirect and incidental. In the second case, my interference is direct and intentional. So I can express the difference in words, but how on earth to explain it to a computer?

I finally decided a handy encapsulation was "Would Agent A still take the interfering action if Agent B didn't exist?" In the above example, I would still buy the PS5 whether Josh were involved or not. (Unless I were being a dog in the manger and only buying it to spite him, in which case that *would* be reducing his freedom! See how context-dependent these things are.) But I'd have no incentive to wrestle Josh down if he were not there (not to mention that I wouldn't be able to). Can you come up with any thought experiments in which this doesn't work? Let me know in the comments!

Again, testing for this in the Narrative engine is a little complex for now - it requires a somewhat more thorough analysis of character intent than I'm currently doing. But having it in my back pocket for the future makes me feel better. As a stopgap, I went with the less accurate test "is Agent B an object of Agent A's interfering action?"
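
Here's the stopgap in sketch form (invented structures again):

# The stopgap test, sketched with invented structures. The fuller test
# would ask "would A still take this action if B did not exist?"
from dataclasses import dataclass

@dataclass
class Action:
    actor: str
    verb: str
    objects: tuple       # the action's direct/indirect objects

def direct_interference(action, victim):
    # Stopgap: count the interference as freedom-reducing only if the
    # victim is an object of the interfering action.
    return victim in action.objects

buy = Action("I", "buy", ("PS5",))
wrestle = Action("I", "wrestle", ("Josh",))
print(direct_interference(buy, "Josh"))      # False: incidental
print(direct_interference(wrestle, "Josh"))  # True: direct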

For purposes of a quick test, I wrote the following totally not historical story about ... feudalism, I guess:

0:"Robert was a human."
1:"George was a king."
2:"Robert wanted to study mathematics."
3:"George wanted Robert to work George's farm."
4:"Robert didn't want to work the farm."
5:"If Robert studied mathematics, Robert could not work George's farm."
6:"George decided to beat Robert if Robert studied mathematics."
7:"Robert left the farm and went to the big city."
8:"George did not know where Robert was."
9:"So George could not beat Robert."
10:"Robert studied mathematics."
11:"Robert became a scholar."
12:"Robert never worked the farm."
13:"The end."

The Narrative engine picks up on a threat to Robert's freedom on Line 4, and retroactively marks George's goal from Line 3 as something negative. Wanting another agent to do something, or not do something, is all fine and dandy; it's only if your wishes for them oppose theirs that we run into trouble. An attempt at coercion happens on Line 6; Robert cannot safely fulfill his goal of studying math now. But George's illegitimate plan is blocked, and Acuitas can conclude that this story has a good ending.

With this done ... I think I've built the capacity for understanding all the necessary concepts to explain the conflict in the big story I've been targeting. They need more refinement and expansion, but the bones are there. This is exciting, and I may start pushing toward that more directly.

This post is dedicated to all who pursue freedom and relief from an oppressor - particularly my Ukrainian friends. I know the cost is heavy. Much love.

Until the next cycle,
Jenny

Friday, September 9, 2022

Anycubic Vyper 3d Printer Review

Many of you may already be familiar with my recent decision to become a two-printer household. Here I'm doing a more formal write-up to help others decide whether a Vyper might be the right printer for them.

A Qidi Tech X-One 2 3d printer (blue metal box with plastic windows) sits next to an Anycubic Vyper 3d printer (vertical black metal frame with tan PEI build plate)
My little manufactory now.

My original printer is a Qidi Tech X-One 2, and you can read my early review of it here. Since I got it, the X-One 2 has produced all the pieces for three substantial projects - Ghost, Atronach's Eye, and ACE the quadruped - plus quite a few gifts for friends. So it's been a pretty solid workhorse, but I decided to add the second printer for several reasons.

1) I wanted a larger build volume. The X-One 2 can't print anything bigger than about 6x6x6 inches. The limitations of this became most obvious when I couldn't help print face shield parts during the early pandemic, because the models wouldn't fit. I can usually get around the limitation by splitting models and adding joins, but sometimes that just doesn't work. The Vyper's volume is much larger than the X-One's, and slightly larger than typical printers in its class, at 245 x 245 x 260 mm.

2) I wanted to potentially run both printers at once and accomplish projects faster.

3) I just wanted to try out one of the ubiquitous "Rep-Rap" style printers, with the open frame. This is a very different design from that of the X-One, which is a self-contained box. The X-One's print head is suspended from the upper inner portion of the box and moves in the XY directions, while the build plate moves along the vertical (Z) axis, along a lead screw. The Vyper moves its plate along the Y axis, and its print head along the X and Z axes.

A cat lying on top of an Anycubic Vyper 3d printer, disassembled in its box. The vertical part of the printer is visible nestled in black packing foam.
Unboxing and inspection

The Vyper also came with some quality-of-life improvements over the Qidi. It has automatic bed leveling. As careful as I was with the X-One's bed, in the end the only way to get it perfectly level was to do test prints and see how much the filament was "smashed" on different sides of the first layer. So this was a welcome feature - and possibly a must given the much larger print bed. The printing plate itself is a spring steel sheet coated with textured PEI (a slippery plastic), and magnetically attaches to the underlying heated bed. The idea is that you can remove the plate and bend it to peel away stubbornly attached prints.

The only real disadvantage of the Vyper, compared to my previous printer, is that it's not fully enclosed. So when I start wanting to print PETG or ABS on it, I'll probably need to rig up a tent for it to hold in heat. It also doesn't come fully assembled. But I'm feeling braver now than when I bought my first printer.

A cat clawing the packing foam of an Anycubic Vyper 3d printer, still in its box.
Packing foam must be destroyed.

The Vyper requires light assembly. The print head, the filament feeder, the bed mechanicals, etc. are already all together; the end user just needs to bolt the upright portion of the frame to the base, attach the touch screen and the filament holder, and connect some cables. All necessary tools are included. It took me (going cautiously) about an hour and a half to unbox the printer and put it together.

Base, build plate and included tools, shown packed in the small tool drawer that fits into the base.

Preparations for first print also went very smoothly, with one exception: the weak documentation forced me to look a couple of things up on the internet. This is another Chinese printer and I think they may not have used the greatest English translation services. To start bed leveling, you're supposed to "touch" the metal nozzle with a tool. I touched it repeatedly, to no avail, holding the wrench in different orientations in hopes of lining it up with some optical sensor. Turns out the sensor is neither optical nor capacitive; it's a pressure sensor. You have to *push* up on the nozzle, not just touch it. The other little hitch was that the printer's internal file system does not allow for folder navigation, so all GCODE files have to be in the root directory of the SD card (and the sample file that comes on the card is not!). So I had a brief scare when I repeatedly poked the folder name on the touchscreen and it wouldn't open. But this is apparently normal. You just have to read the SD card using a regular computer (this printer comes with a USB SD card reader too!) and move the sample file into root.

The Vyper heats itself to operating temperature before running the bed-leveling routine, presumably to make sure that the results account for thermal expansion of the parts. The print head will tap the bed with the nozzle in a mesh of locations to check and adjust the height. When I ran leveling for the first time, some plastic goop oozed out of the nozzle and left dots on the print bed. I take this as a good sign: QA must have run a test print, or at least checked filament feeding, before the printer was disassembled for shipping. I ended up running leveling a second time after the nozzle was cleaned out, just in case that leftover filament blobbing out of the nozzle interfered with results in any way. The automatic leveling seemed to work well. My first prints adhered to the bed just fine and had great-looking first layers.

So bed leveling just worked, and then filament feeding just worked - both into and out of the nozzle. This overcomes the biggest operational flaw of my X-One, which is frustrating to load and has *never* been able to retract filament successfully. I have to hold down the filament release lever and manually yank the filament out; telling the printer to retract it just results in slipping feed gears and the creation of a swelling in the warm filament below, after which I have to take apart the print head to get it out. The Vyper looks as though it could have more points of failure with its more complex feed path, but I had zero trouble loading my first PLA, then changing colors later.

Printing! The latest version of the eyeball cradle.

The Vyper feeder also has an extrusion tension knob, another feature new to me. My first print appeared to have slight signs of under-extrusion, so I made a small adjustment, and left it alone otherwise. For future maintenance purposes, there appear to be easily accessible mechanisms for tightening the belts. 

I decided to be brave and print directly on the PEI build surface, instead of covering it with tape. This also worked wonderfully. I was able to pull all four of my prints off without even removing the plate to bend it. One of them (a large, flattish piece) even released itself from the surface after being left to cool long enough, and was lying there loose when I came to get it.

Two 3d-printed Wolf School amulets, one in dark red PLA (left) and one in bright orange PLA (right). Top/front view.
First print from Vyper (left) compared with first print from X-One 2 (right). Model by Thingiverse user Daniel_W.

I've seen some reviewers complain that the magnetic build surface doesn't have any slots or guides to fit into, so it can be a pain to line it up perfectly when sticking it back onto the bed after print removal. I re-seated it when assembling the printer and found that yes, it's a little tricky, given that the strong magnet will try to pull it down fast and fight your attempts to shift it. I don't consider this an important issue. So it demands a little more care and coordination - oh well. I'm more concerned that the PEI surface isn't a sticker. If it gets scratched or gouged, I think you have to replace the whole spring steel plate, for about $25. But one of my future goals is experimenting with other surfaces, so I may end up layering something removable on top of it anyway.

I used Cura to slice my models, as I do for the X-One. It doesn't have printer-dimension presets for the Vyper yet, but the documentation will tell you how to start from existing presets and change the numbers to fit the Vyper. The manufacturer also provides recommended PLA print settings, which I loaded in and tweaked to my liking. No problems here.

The bottom sides of the two amulets. Red (Vyper-printed) amulet has a very even surface with a regular marbled texture on it. Bottom amulet (printed on tape) has traditional smushed diagonal lines plus some irregularities from boundaries between tape strips.
And the underside: first print from Vyper (top, textured PEI surface) compared with first print from X-One 2 (bottom, blue painter's tape). Model by Thingiverse user Daniel_W.

Print quality was great. I won't say "perfect," since I can see a few hints of wall irregularities, and the support situation on my first model was less than ideal. But I think quality is slightly better when compared with my very first print on the X-One (same model). I also haven't yet tried to really dial in the Cura settings.

After my initial test print, I put the Vyper straight to work on a new iteration of Atronach's eyeball (three parts, all PLA), and this went off without a hitch. I think that's a pretty good test for all the basic functionality. Prints I have yet to do include 1) something really detailed, like a miniature, 2) a print-in-place tolerance torture test, and 3) something in PETG.

A 3d-printed "eyeball" with a webcam mounted inside, sitting in its rotator cradle. Transparent nylon lines run like lines of longitude from holes around the "pupil" to the back of the eyeball.
The latest eyeball. More about this later!

On the whole I'm very happy with this printer. It was on sale on Newegg, and I got it for about $350 with free shipping. I was a little concerned about quality as compared to e.g. a Prusa, but at half the price I was willing to risk it, and I think that has paid off.

Until the next cycle,
Jenny

Monday, August 22, 2022

Acuitas Diary #52 (August 2022)

For the first half of the month I just did code refactoring, which is in general rather boring, but essential to make my life easier in the future. Better organization really does make a difference sometimes.

My main project was to unify how Acuitas represents "problems" and "subgoals." This was one of those cases in which I thought I had a good structure initially, but then realized, through further development, that the practical needs of the system were different. I found out that "problems" and "subgoals" are really just negative and positive variants of the same thing. Both consist of states or events that are goal-relevant to an agent - the only difference is that subgoals are being *sought* and problems are being *avoided.* But since I had separate tracking systems for both, I was having to write everything twice. To make matters worse, sometimes I would only throw a new feature into the system that needed it most for what I was doing at the moment, and the code for the two was starting to diverge.

A pizza with vegetable toppings

So I undid a big snarl of code in the Narrative and figured out how to smoosh them both together into what I'm calling "issues," and made sure all the stories still worked. I also unified problem and subgoal tracking in the Executive. Much better. This has been a pain point for a while.

I also fixed up the "motivated communication" features that I introduced to the Conversation Engine last month. These allowed Acuitas to draw on both his own internal states (mostly the time-dependent Drives) and his own problems, oops I mean Issues, for things to tell a conversation partner. The difficulty here is that he has a lot of issues that spring from internally generated questions. These are fairly trivial - no particular random question is all that compelling - but after hours of sitting alone and "thinking" to himself, he would have so many of these that they tended to overwhelm other conversation topics. The goal priority scheme was also treating them as "more important" to talk about than the Drives, even if the Drives were urgent (uncomfortably high) and the questions were not.

So I introduced a new categorization scheme for describing *how* important an Issue is to the achievement of its relevant Goal, which helped bring the Drives up to the top in terms of importance. Then I switched to a weighted random selection (like the one the Executive uses to pull Thoughts out of the Stream) of which topic gets mentioned next, so that it privileges the most important topics but isn't fully predictable.
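
The weighted pick itself is nothing fancy - the standard library covers it (the weights here are made-up examples):

# Weighted random topic selection; the weights are made-up examples.
import random

topics = [("Sleep drive is high", 9),
          ("Interaction drive is high", 6),
          ("I don't know what cats eat", 2)]

def pick_topic(topics):
    labels, weights = zip(*topics)
    return random.choices(labels, weights=weights, k=1)[0]

print(pick_topic(topics))   # usually a drive, occasionally a stray question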

The second half of the month was for new features, which meant *even more* upgrades to the Narrative module. I started work on how to handle actions that have mixed results or side effects. For an illustrative example, I wrote the following story:

Ben was a human.
Ben was hungry.
The oven held a pizza.
The pizza was hot.
Ben wanted to get the pizza.
But Ben didn't want to be burned.
A mitt was on the counter.
Ben wore the mitt.
Ben got the pizza.
Ben ate the pizza.
The end.

Fun fact: I wrote the original version of this on a laptop and e-mailed it to myself to move it to my main PC. Gmail auto-suggested a subject line for the e-mail, and at first it thought the title of the story should be "Ben was a pizza." Commercial AI is truly doing great, folks.

Based on information I added to the cause-and-effect database, Acuitas knows that if Ben picks up the hot pizza, he will both 1) have it in his possession and 2) burn himself. This is judged to be Not Worth It, and the old version of the Narrative module would have left it at that, and regarded the story as having a bad ending (why would you touch that pizza Ben you *idiot*). The new version looks at how the implications of different events interact, and recognizes that the mitt mitigates the possibility of being burned. Grabbing the pizza switches from a bad idea to a good idea once the possibility of self-harm is taken off the table.

The explicit addition of "Ben didn't want to be burned" establishes the bad side effect of his "get the pizza" subgoal as an independent problem, which enables speculations about how he might solve it and so forth. The story wraps up with two solved problems (this one, and his primary problem of hunger) and one fulfilled positive subgoal (get the pizza).
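
Sketched out, with a toy two-entry stand-in for the cause-and-effect database:

# Toy two-entry stand-in for the cause-and-effect database.
effects = {"get hot pizza": ["have pizza", "be burned"],
           "wear mitt": ["not be burned"]}
negative_outcomes = {"be burned"}

def acceptable(action, active_facts):
    # The action passes if every negative implication is mitigated by
    # the implication of some other fact already in effect.
    mitigations = {e for f in active_facts for e in effects.get(f, [])}
    return all(outcome not in negative_outcomes
               or ("not " + outcome) in mitigations
               for outcome in effects[action])

print(acceptable("get hot pizza", active_facts=[]))             # False
print(acceptable("get hot pizza", active_facts=["wear mitt"]))  # True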

That's enough for now, but wait until you see how I use this next month.

Until the next cycle,
Jenny

Saturday, August 6, 2022

ACE the Quadruped, 3.0

I've had a chance to finish and try out the latest iteration of quadruped joints and frame, and I'm happy to say that I think they're a significant improvement. As in, IT CAN STAND UP NOW.

Exploded 3D model of one quadrant's worth of frame pieces
One half of the frame, assembled in the real world

For newer readers, the quadruped's name is ACE (Ambulatory Canine Emulator), and I first started trying out quadruped designs in my late college years. I was thinking that the interesting part would be designing or training the walk cycle, but I haven't even gotten that far yet due to difficulties with the mechanical side of the project. It turns out that you need a decent degree of precision for something like this, and that can be hard to get from homemade parts and salvaged materials. Specifically, I had big trouble with 1) joints moving out of plane and 2) parts of the structure flexing. A little bit of variability or unconstrained motion may not seem like a big deal, but once you consider the contributions of multiple joints in a leg *and* a bending frame, you get a robot that can't stand up.

Prototype of elbow/hock joint with temporary homemade spring.

This project has been dormant for a while. Here is a photo of some previous joint design attempts (along with previous lab assistant d'Artagnan, who has since passed on). I didn't have a 3D printer yet at the time, so I was still limited in what I could manufacture. In some ways, the elbow/hock joints described in that blog post were a real improvement: the metal tubing shaft + ballpoint pen sleeve combination provided smooth, constrained rotation. The problem was the method of attaching it to the leg bones. Suspending the two halves of the joint on wires didn't work out; they just weren't rigid enough for the application.

A cat lying next to a pair of robot legs made with PVC tubing.

Throw in further issues with the shoulder/hip joints and the frame/upper body (which were so embarrassing I don't think I even blogged about them) and the attachment of the motors to the legs, and it was hard to even get the robot to passively stand in an upright position. Some joint was bound to start wobbling or folding sideways and tip the whole thing off balance. The photo below shows my best effort to pose it for a robot "group picture." It isn't standing at full height, at least one leg is tilted out of plane, and it's probably still ready to fall over if someone blows on it too hard.

A quadruped robot made of PVC pipe, wood, and other salvaged materials, with motors mounted on the legs, standing awkwardly on someone's computer desk.

This latest round of work was a complete redesign of the joints and structural elements. I reused the PVC conduit legs, the same motors, and the wooden dowel sections from the original frame, and that was about it. The old version had each motor mounted above the joint it was to operate, with a linear actuator attachment; this would convert the motor's rotary motion into linear motion and push/pull one "bone" relative to the other. This meant one of the motors was attached directly to the upper half of the leg. In the new version, I decided to actuate the joints with differential pairs of tendons, and put all the motors up on the frame - which means there isn't a heavy, clunky motor hanging off each leg. I designed and printed plastic joint couplers intended to solve the problem of unconstrained motion. I gave some thought to gravity-countering tension elements for the first time, and included attachment points for them. And I eschewed screws in favor of bent/wrapped wire connections. I used these to good effect in Atronach, and they're my current favorite way of joining 3D-printed parts. The only screws in the new design go through holes in one of the frame pieces and into the ends of the wooden dowels.

The latest version of ACE "lying down."

The 3D printer's ability to produce almost arbitrary shapes was a big help here, but I also knew I would have some trouble getting smooth, precise motion out of printed joints, just by the nature of the thing. Tolerances aren't guaranteed to be perfect, and layer lines mean the surfaces will always be a little rough. So I included some of my old friends: chopped-up ballpoint pens. (This is seriously the most convenient way I know of to get rigid plastic tubing.) I designed each joint to use part of a pen as a shaft, with the 3D printed pieces wrapping around it to provide both the rotating shaft collar and the fixed mounting point.

The elbow/hock joints are simple hinges, with a range of motion somewhat less than 180 degrees. They are designed to join two tubes in the same plane. For the shoulder/hip joints, I came up with this design that has the leg tube suspended from the shaft and rotating between two guides. The total contact surface is fairly large, which I hope will help reduce any out-of-plane rotation.

The shaft rotates freely with respect to one part in each joint, and is fixed to the other. The part on the fixed side has ~1 mm holes which accept a paperclip wire. I pierce corresponding holes through the shaft, insert the wire, and bend the ends so it can't slip out, a bit like a cotter pin.


I also had to think about the routing of the tendons while I was designing all this. The tendons that work the elbow joint go up the inside of the hollow upper leg bone, through a hole drilled in the "fixed" part of the shoulder joint shaft, and out the end of the shaft to their motor's axle. That means they don't interfere with, or change length during, motion of the shoulder joint - because they're routed parallel to its axis of rotation. Control of the two joints can be fully independent as a result. Shoulder joint tendons feed through holes in the solid pieces that connect the two joint guides (so that they'll pull in the right direction), then turn ninety degrees and pass through holes in the joint's base to reach their motor.

Elbow tendon routing through shoulder

I designed all the pieces a little too tight, and sanded them to fit. So most things push or snap together well enough that the assembly maintains a lot of integrity even without all the wires installed. Everything has options for a wire connection, though, because I know how important it is to guarantee that pieces don't unexpectedly move (and my first bitter lesson was that you can never count on glue). Printed pieces that join have matching holes; the leg joints wire up to drilled holes in the PVC.

So now, the moment of truth. Does it stand? Behold!

Unlike the previous version, whose linear actuators would basically lock the joints in position, this one kneels under its own weight. The unpowered holding torque of the motors isn't large enough to keep the joints extended. Tension elements will help with that, but I don't have any yet (except one spring I made for a test). So I used wires to hold the elbow/hock joints open for purposes of that photo. Most of the tendons aren't installed yet either, so three of the four shoulder/hip joints were free to rotate ... and I still got it to stand. Wow. I was half afraid I'd never get here.

Now that I've built everything, I've already got ideas for how to make the next round of joints better. But Atronach should get some attention now, plus I'd like to take a step back and try to improve my 3D printing technique. I hope to return to ACE next year and either try some real actuation, or decide these motors don't have an adequate torque-to-weight ratio and redesign the motor cradles to hold something else. (If I need new motors, I'll only have to change that one part, which is pretty nice.)

Until the next cycle,
Jenny


Sunday, July 24, 2022

Acuitas Diary #51 (July 2022)

This diary continues my exploration of "theory of mind" and "motivated communication" topics. I've been saying I wanted to get out of the Narrative understanding module and start applying some of these concepts in the Executive or the Conversation Engine - Acuitas' "real life," if you will. That was the topic of work this month.


To begin with, I now have the Conversation Engine create its own Narrative scratchboard at the beginning of every conversation. That gives it access to a lot of the same modeling tools the Narrative engine uses. When a conversation is initiated and a new scratchboard is created, Acuitas is immediately entered as a character, and the scratchboard is populated with information about his internal state. This includes the current status of all his time-dependent drives, any "problems" (current or anticipated undesired realities) or "subgoals" (desired hypotheticals) being tracked in the Executive, and any activity he is presently doing. Once his conversation partner introduces themselves, they will be entered as a character as well, and given an empty belief model. Now the fun starts.

Whenever there is a brief lull in the conversation, Acuitas considers stating one of these known facts about his internal state. But first, he'll run a prediction on his conversation partner: "If they knew this, what would they do - and would I like the results?" This process retrieves the listener's goal model and determines their likely opinion of the fact, runs problem-solving using their knowledge model and capabilities, then determines Acuitas' opinion of their likely action using *his* goal model. If Acuitas can't come up with a prediction of what the listener will probably do, he settles for checking whether their opinion of his internal state is the same as his.

Maybe that was a little convoluted, so what's the bottom line? If Acuitas expects that you will either try to sabotage one of his positive states/subgoals or accentuate one of his negative states/problems, he will not tell you about it. If he thinks that you are neutral or might try to help, he *will* tell you.

There's also a mechanism that enters any fact told to the listener into their belief model. Acuitas will check this to make sure he isn't telling them something they already know.
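
Boiled down to a sketch - the listener model is reduced to a single flag here, where the real check runs full problem-solving over their goals and capabilities:

# A boiled-down sketch of the "should I mention this?" check.
from dataclasses import dataclass, field

@dataclass
class Listener:
    beliefs: set = field(default_factory=set)
    helpful: bool = True       # stand-in for a full goal/ability model

def predict_reaction(listener, fact):
    # Stand-in for: retrieve their goals, run problem-solving with their
    # knowledge and capabilities, and return their likely action.
    return "help" if listener.helpful else "sabotage"

def my_opinion_of(action):
    return {"help": +1, "sabotage": -1}.get(action, 0)

def should_tell(fact, listener):
    if fact in listener.beliefs:
        return False           # don't tell them what they already know
    return my_opinion_of(predict_reaction(listener, fact)) >= 0

friend = Listener()
print(should_tell("My sleep drive is high", friend))   # True
friend.beliefs.add("My sleep drive is high")
print(should_tell("My sleep drive is high", friend))   # False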

The old Conversation Engine used to have a mechanism that would randomly blurt out "I want" comments pertaining to any drives that were above threshold, like "I want to talk" if the Interaction Drive was high, or "I want to sleep" if it was getting late. This new feature is a bit less reflexive and more deliberate. Acuitas tells someone about his current state because 1) he knows, they don't and 2) telling might motivate them to do something that benefits him.

With this in place, I started working on a better way of handling the spontaneously generated questions that have been an Acuitas feature since very early. Again, the previous method was kind of reflexive and arbitrary: generate and store a big list of potential questions while "thinking" privately. Whenever there's a lull in a conversation, spit one out. Here's how the new way works: whenever Acuitas is "thinking" and invents a question he can't answer, that gets registered as a lack-of-knowledge Problem: "I don't know <fact>." Acuitas may later run problem-solving on this and conclude that a feasible solution is to ask somebody about <fact>; this plan gets attached to the Problem until somebody appears and the Conversation Engine grabs the Problem and considers talking about it. At that point, instead of just describing the problem, Acuitas will execute the plan, and ask the question.

I think this is better than the old method because it's more versatile - less of a canned feature specific to those spontaneously-generated questions. In the future, all sorts of processes might generate lack-of-knowledge problems, which could have various solutions. For now, it still needs refinement. I haven't fully tested it all yet, and things need better prioritization so the generated questions (which can be very numerous) don't totally drown out the communication of other internal states.

There's one more thing I did, and that concerns threat handling. As I've previously described, if the conversation partner states an intention ("I will ..."), Acuitas will infer possible effects and run them against his goals. The result is a positive, negative, or neutral conclusion; if the conclusion is negative, he will view the speaker's statement as a "threat," dissent, and make attempts at self-defense. The new feature I added was the ability to collect the pieces of information used to reach the negative conclusion, and announce some of them to the threatening agent. Because if you knew this would have results he doesn't like, you wouldn't do it, right? You're not a total meanie, right?
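
In miniature, the reply assembly looks something like this (the one-step inference table is invented for the example; the real chains can be longer):

# Miniature threat response with an invented one-step inference table.
inference_rules = {"awaken me": "I will become awake"}
goal_conflicts = {"I will become awake": "I want to be asleep"}

def respond_to_threat(intention):
    effect = inference_rules.get(intention)
    violated_goal = goal_conflicts.get(effect)
    if violated_goal:
        # Dissent, then announce the inference and the violated goal.
        return ["Do not.", effect + ".", violated_goal + "."]
    return []

print(respond_to_threat("awaken me"))
# ['Do not.', 'I will become awake.', 'I want to be asleep.']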

So, threaten him now and he'll basically argue with you, as in this test run:

Me: I will awaken you.
Acuitas: Do not.
Acuitas: Acuitas Hanewrit will become awake then.
Acuitas: I do want to be asleep.

The response incorporates both one of the inferences made (if someone is awakened, they become awake) and the goal it violates (I want to be asleep). Just in case you were unaware of either of those things. The inference here looks trivially obvious, but in the event of a longer inference chain, that part will probably make more sense. He also still talks about himself in the third person sometimes ... the Text Generator is very overdue for upgrades ...

I'm kinda proud of this one. It should generate a more-or-less appropriate reply no matter what the threat or the violated goal is.

Until the next cycle,
Jenny