Friday, December 31, 2021

And 2022 lies before us!

Well, here it is, the end of 2021 ... and shockingly, I have the time to write a retrospective again. I want to start by thanking all my regular blog readers for following me and taking an interest in my work. I invent primarily for myself ... as in I don't need fame for an incentive ... but it's more fun when there are other people to share it with, and I've realized I don't appreciate you enough. So to whoever's out there looking at this, I'm glad you dropped by, and I wish you a safe and prosperous New Year.

I don't know where this picture came from, but I love it. Art by Douglas Beekman, originally a book cover for Darkmage by Barbara Hambly.

I would describe 2021 as a great year for me. Much like last year, I acknowledge and regret that it wasn't great for the planet in general ... just for me. This in spite of the fact that it got off to a poor start, and then I had a near disaster in late spring/early summer. I guess all's well that ends well? I got some pretty nice gifts this year, and that makes up for everything.

I've been feeling so vibrant lately that I didn't even realize the year got off to a weak start, until I read my own journal entries from the early months. I was having weird chronic headaches and frequent neck pain from bad posture in my computer chair. And it seems like I was tired or spaced out or gloomy or worried half the time. This was more of a background mood than a result of anything specific going wrong, but one way or another, it didn't make for a good time. Some of the new Acuitas features really weren't coming easy. I probably felt slow and unaccomplished.

Right about the time I finally started to resolve both my neck issues and my general malaise, the near disaster started. Apparently I have a wonky immune system, and after I got my COVID vaccine, it overreacted badly and started chewing on my own nerves. I had similar symptoms in connection with a bad flu many years ago, which means it's probably a disorder I already had, and catching the actual COVID virus could have triggered it as well ... but this bout was much worse than the first one. I had prickling sensations on my skin, vertigo, muscle weakness, and disrupted coordination. It took me over a month to get a diagnosis, while my condition slowly worsened and I wondered if I'd be in the hospital with flaccid paralysis by the time they figured it out. Once I finally got a tentative diagnosis, I had hit the bottom of the thing and was starting to improve on my own, and my neurologist advised me not to attempt treatment for fear of complications. I'm almost healed up now, though it's been a slow, irritating process.

As you might imagine, this disrupted my ability to work. Quite a bit of time was lost in May/June/July. But I still managed to make up for it and keep my planned development schedule for the year. I also got to go through a lot of fun (?) medical procedures, and practice my courage and resilience (turns out they're actually not so great).

And again, it's been a great year in spite of all that business. So here's the rundown of some nicer things that I did or that happened:

*Saw the first satellite I personally worked on (and am allowed to know about) get launched into space. This was a big one. I've been waiting over nine years to actually get something up there.

*Added several new Acuitas features, including the decision loop, notion of "self," and rudimentary spatial reasoning.
*Overhauled the Text Parser and put it through a benchmark run on real text from children's books.
*Overhauled a lot of other old code and did enough refactoring that I think nothing is a serious mess at the moment.
*Went above and beyond my development schedule and sank almost 350 hours of work into Acuitas. (Way to be a time hog, pal.)

*Rough-drafted the second half of my third novel (the last in the series). My focus can now shift to editing and publishing efforts.
*Wrote at least one blog post for every month, and a few bonus ones!

Atronach is the one robot whose appearance changed significantly this year. Someday I'll get a proper blog post out ... until then, have a photo!

*Finished the upgraded case for Atronach's eye.
*Installed a camera in the eye and implemented primitive visual tracking.

*3D printed with PETG material for the first time.
*Modeled and printed new hinge joints for my long-neglected quadruped robot.
*Printed a model of my own brain from the MRI that happened while I was trying to get diagnosed.

*Kept the unread book backlog in check; it is now under 30.
*Began taking regular walks.
*Grew potatoes and (inadvertently) apples.
*Finally started getting some native plants established in the yard.
*Went to the dentist for the first time in 15+ years.

Happy New Year!
--Jenny

Wednesday, December 22, 2021

Acuitas: Block Diagram

I finally did it ... I drew a proper and (mostly) complete block diagram of Acuitas as he currently exists. I used to call him a "semantic net AI" but I think "symbolic cognitive architecture" would be more appropriate now. To think that Version 1 (the one I wrote in college) was just the red Semantic Memory block with some primitive IO interfaces and text handling.


Until the next cycle,
Jenny

Saturday, December 11, 2021

Inmarsat-6 Satellite Launch!

I've mostly used this blog to talk about my hobbies, but I also have an aerospace industry job -- and it's become so exciting that I have to say something about it, because one of the projects I worked on is finally about to go up. Something of mine is soon to be IN SPACE you guys. (It's possible that one of my earlier projects is already out there ... but it was a military satellite. I wasn't told when or if it was launched, or whether it was successful. So it kinda doesn't count.) Due to a combination of working secret projects like that, and working IRAD projects that never got launched, I have been waiting for this for over nine years.

I don't have any personal photos of the RF box or our labs, and wouldn't be allowed to show you if I did ... but here's one I snagged from the company website. Image credit: SEAKR Engineering

The satellite in question is Inmarsat-6 F1, first of its name (there is an F2). Inmarsat is the operator, and the satellites were manufactured by Airbus ... which subcontracted the design and construction of the RF data processing boxes to my employer, SEAKR Engineering. The I-6 satellites are data carriers, and the RF (radio frequency) processor functions something like a telephone switch, routing streams of data from one frequency to another.

According to Inmarsat, I-6 F1 will be "The world's largest and most sophisticated commercial communications satellite." I never read any marketing copy for this thing when I was part of the team laboring over it back in 2017-2018. It was just my daily work, another program I was assigned to and needed to get to completion. It feels a little surreal to see the language that Inmarsat describes it with now.

Before I say exactly what I did, I need to talk a bit about my field. I'm an Electrical Engineer with a specialty in FPGAs (Field Programmable Gate Arrays). An FPGA is a type of integrated circuit -- a computer chip. But whereas the average chip contains fixed circuitry that was cut and deposited into the layers of material inside, an FPGA contains many independent components (logic lookup tables, flip-flops, small RAMs, etc.) that can be connected by the user, post-manufacturing, to create almost any kind of digital processor. FPGA design consists of inventing the connection pattern for one of these. Imagine building a computer's CPU out of tiny LEGO bricks. Older FPGAs contained antifuses, and the connections were made by placing the FPGA in a programming socket that would permanently burn some of them closed. Newer FPGAs have configuration memory that can be loaded with binary data, and the values in this memory set electronic switches to establish the circuit. Since the memory can be rewritten, the circuits inside such FPGAs can be revised many times.

A visualization of an FPGA interior, with components used by the current design highlighted. Image credit: Xilinx (Figure 2-2 in UG633 v14.5)

FPGAs are valued by electronics designers for their balance between specialization and flexibility. If you need to do some calculations, you could design and order a 100% custom circuit -- an ASIC -- that will run them in the most efficient way possible. But ASICs are very expensive and time-consuming to produce. If you aren't planning to sell very many units (and in the satellite industry, we generally don't), they often aren't worth it. On the other end of the spectrum, you could buy a standard embedded processor and write your own software for it. But such a processor is designed to do a handful of basic math and logic operations; it won't have specialized circuitry to suit your needs, and your calculations might end up being very slow. FPGAs bridge the gap. They are standardized and lack the custom manufacturing costs of an ASIC, but they come to you as clay ready to be shaped. You can turn them into processors that are excellent at doing exactly what you want to do. And their reconfigurability makes them tolerant of design mistakes.

Now I need to say a few words about the bane -- well, one of the banes -- of electronics in space. That would be radiation, in the form of high-energy particles. Examples include cosmic rays (ions that fly in from interstellar space) and protons from the solar wind. These particles hurtle along at such blazing speeds that they can punch straight through an integrated circuit, leaving behind a trail of electric charge in all the wrong places. Then the misplaced charges can do annoying things, like turning transistors on or off, possibly disrupting the IC's function. Generally, the better an IC's performance, the smaller its internal features are and the less charge is needed to disrupt them. These effects are especially problematic in a reconfigurable FPGA ... not only could they change the current state of the circuit, they could strike the configuration memory, thereby rewiring the circuit itself!

Space radiation is such a problem that we test our parts ahead of time to find out how they will malfunction when struck by particles. I spent most of 2018 assisting that effort. This is the business end of the heavy ion beam system we used at the Texas A&M University Cyclotron Institute.

Earth's atmosphere blocks most ionizing radiation, which is a good thing for both ground-level computers and your own sweet cells. In space, shielding a computer is often impractical. The amount of material needed to stop the particles would make a satellite unacceptably large and heavy. An alternative is to design the processor to detect radiation-induced errors and self-correct.

That's where I came in on this project. I was responsible for one side of the two-part error monitor that watches for radiation events in the big, vulnerable FPGAs that are doing all the data processing and routing. I wasn't the first person to work on it, but I was the one who finished it, took it through integration and test, wrung out all the bugs, and answered everyone's questions about it for months afterward. Unfortunately, I can't give any details about exactly how it works, since that would be going deep into trade secret territory. SEAKR is more protective of that error monitor than most of the code we've developed in-house.

After finishing my half of the error monitor, I helped with the performance testing for the high-speed data links between the processing FPGAs. We ran these tests in a thermal chamber, so we could ensure the data wasn't corrupted at the hottest and coldest temperatures we expected the box to suffer in space.

Another photo from TAMU. This is part of the beam system for proton testing.

I did a worst-case analysis report on yet another data interface. A WCA is a set of theoretical calculations that check whether the interface will still work across all possible environmental and internal conditions. For instance, can the electrical signals be expected to always arrive at their destination within the right time window? WCA might be my least favorite part of my job: complex, tedious, and endlessly frustrating. Usually whoever designed the interface pushed it right to the edge of acceptability -- leaving me, the analyst, to consider tiny effects, make strained assumptions about parts that don't come with adequate data, generate sixteen timing diagrams instead of one, and chew on my own fingers. I finally got the horrible report written up to our lead's satisfaction and moved on to another project.

... And then I came back, in late 2020, to solve a nasty little bug that was causing data packets sent to the processor FPGAs to be duplicated or lost under rare circumstances. At this time the program was wrapping up and struggling to get the flight units shipped. I was running low on things to do and the bug had been neglected, so they threw me at it, even though I had never studied this part of the FPGA code before. As I recall, it took me weeks. (If you've ever chased a bug in software ... FPGA bugs are an order of magnitude worse.) But I found it and fixed it, and some time later we finally got the two RF boxes out the door.

Then I forgot about them ... until about six weeks ago, when I got the company-wide e-mail that the first of the two satellites had been delivered to Japan. It will be launched from JAXA Tanegashima Space Center on a Mitsubishi Heavy Industries rocket, on December 21st as early as 14:33 GMT (assuming all goes well with weather and other contingencies). The launch will be livestreamed. So if you'd like to watch my first major contribution to civilization ascend to the heavens (or gloriously explode? good thing there's a second one!) be at that link the day of.

Until the next cycle,
Jenny

Sunday, November 28, 2021

Acuitas Diary #44 (November 2021)

One more new feature to finish out the year. I decided to wind down by doing something easy: logging! As in keeping a record, not cutting down trees. I've known this to be something important for years now, but I kept putting it off. However, as the system gets more complex, I'll need it more and more to help me sniff out the cause of any unexpected weird outputs.


The log of the HMS Dolphin, Captained by John Byron in January 1765. Via Wikimedia Commons.

This ended up being a pretty easy thing to implement, despite the fact that it got me using some Python elements I've never had to touch before. Acuitas is a multi-threaded program (for the layman, that means he's made up of multiple processes that effectively run at the same time). I needed all the threads to be able to write to the log without getting in each other's way, and that meant implementing a Queue. To my surprise, everything just worked, and I didn't have to spend hours figuring out why the built-in code didn't function as advertised on my system, or wringing out obscure bugs related to the thread interaction. I mean it's shocking when that ever happens.

So now basically every module in Acuitas has a handle for the Logger, and it can generate text comments on what it's currently doing and throw them into the Queue. The Queue accepts all these like a funnel and writes one at a time to the log file. I also set it up to create up to eight log files and then start overwriting the old ones, which saves me from having to delete a hundred stale logs every so often.
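
If you're curious what that looks like in Python, here's a minimal sketch of the pattern -- a queue shared by all the producer threads and a single consumer that rotates through a small pool of files. This is only an illustration, not Acuitas' actual code, and every name in it is made up:

import itertools
import queue
import threading
import time

LOG_QUEUE = queue.Queue()     # thread-safe funnel shared by every module
MAX_LOG_FILES = 8             # keep at most eight files, then overwrite the oldest
ENTRIES_PER_FILE = 10000      # arbitrary rollover point for this sketch

def module_log(source, message):
    """Any thread can call this: timestamp the comment and drop it into the queue."""
    LOG_QUEUE.put(f"{int(time.time())} {source}: {message}")

def logger_thread(base="acuitas_log"):
    """Single consumer: drain the queue and write one entry at a time."""
    for index in itertools.cycle(range(MAX_LOG_FILES)):
        with open(f"{base}_{index}.txt", "w") as log_file:
            for _ in range(ENTRIES_PER_FILE):
                log_file.write(LOG_QUEUE.get() + "\n")   # get() blocks until an entry arrives
                log_file.flush()

threading.Thread(target=logger_thread, daemon=True).start()
module_log("Executive", "Pulled Thought of type Text Input from the Stream")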

Here is an example log excerpt, if you care to even look at it ... it's rather a case of too much information. I've just input the sentence "What is a cat?" Acuitas answers "An organism," and the log contains all the steps to get to that answer. The long numbers are timestamps, and the strings of gibberish are concept identifiers, which are not the same as words.

1636251936 Psyche: Added Thought of type Text Input and payload {'raw_text': 'What is a cat?'} to the Stream
1636251936 Executive: Pulled Thought of type Text Input with payload {'raw_text': 'What is a cat?'} from the Stream
1636251936 TimedDrives: InteractionDrive dropped due to event, new value is 0
1636251936 ConversationEngine: passed input to Parser: What is a cat?
1636251936 TextParser: generated parsed output: {'t': ['what', 'is', 'a', 'cat', '?'], 'c': [True, False, False, False, False], 'l': ['is', 'cat'], 'p': [('cat', 'noun'), ('what', 'noun'), (1, 'verb'), ('a', 'adj')], 'k': {}, 'a': {'subj': [{'ix': [3], 'token': 'cat', 'mod': [{'ix': [2], 'token': 'a', 'mod': []}]}], 'dobj': [{'ix': [0], 'token': 'what', 'mod': [], 'ps': 'noun'}], 'verb': [{'ix': [1], 'token': 'is', 'mod': []}]}, 'q': True, 'i': []}
1636251936 TextInterpreter: generated interpreted output {'form': ('cl', 'is_a-0'), 'forms': ['sv', 'state', 'static_fact', 's_atomic', 'is_a', 'is_a-0'], 'features': {'verb': True, 'verb_id': 'be', 'vqual': '', 'tense0': 'present', 'tense1': 'simple', 'voice': 'active', 'mood': 'active', 'subj': True, 'subj_case': 'common', 'subj_id': 'cat', 'subj_type': 'noun', 'subj_art': 'indef', 'dobj': True, 'dobj_art': 'none', 'dobj_case': 'common', 'dobj_type': 'noun', 'dobj_id': 'what'}, 'content': [{'atomic': True, 'concept': '1ygE876ghsC0yUxt', 'pos': 'noun', 'proper': False}, {'atomic': True, 'concept': '?', 'pos': 'noun', 'proper': False}], 'link_type': ('inter-item', 'is_type_of')}
1636251936 ConversationEngine: reformatted text interpretation into fact link: {'link': 'is_type_of', 'root': '1ygE876ghsC0yUxt', 'ends': ['?']}
1636251936 ConversationEngine: creating new input leaf leaf_7 and attaching to leaf_0.
1636251936 GoalManager: From possibility [{'root': 'wVD7W6mDqBW2zviX', 'link': 'do_action_t', 'ends': ['LS5R=+UqS59XEulN'], 'link_type': 'do_action_t'}, {'root': 'vMgrWYy7hY843IGy', 'link': 'do_action_i', 'ends': ['XGZdj0WXnwVdV4P3'], 'link_type': 'do_action_i'}] relative to agent vMgrWYy7hY843IGy, generated alignment tree [{'atomic': True, 'pri': 6, 'align': 'y', 'src': {'root': 'vMgrWYy7hY843IGy', 'link': 'do_action_i', 'ends': ['XGZdj0WXnwVdV4P3'], 'link_type': 'do_action_i'}}]
1636251936 GoalManager: From possibility [{'root': 'wVD7W6mDqBW2zviX', 'link': 'do_action_t', 'ends': ['LS5R=+UqS59XEulN'], 'link_type': 'do_action_t'}, {'root': 'vMgrWYy7hY843IGy', 'link': 'do_action_i', 'ends': ['XGZdj0WXnwVdV4P3'], 'link_type': 'do_action_i'}] relative to agent wVD7W6mDqBW2zviX, generated alignment tree [{'atomic': False, 'pri': 6, 'align': 'y', 'id': 'vMgrWYy7hY843IGy', 'sub': {'atomic': True, 'pri': 6, 'align': 'y', 'src': {'root': 'vMgrWYy7hY843IGy', 'link': 'do_action_i', 'ends': ['XGZdj0WXnwVdV4P3'], 'link_type': 'do_action_i'}}, 'src': {'root': 'vMgrWYy7hY843IGy', 'link': 'do_action_i', 'ends': ['XGZdj0WXnwVdV4P3'], 'link_type': 'do_action_i'}}]
1636251936 MoralReasoning: Reported preference y and alignment y.
1636251936 Executive: Analyzed request {'root': 'wVD7W6mDqBW2zviX', 'link': 'do_action_t', 'ends': ['LS5R=+UqS59XEulN'], 'link_type': 'do_action_t'}, concluded want was ('y', 'y', ['sb', 'good']), can was y, and action would be LS5R=+UqS59XEulN
1636251936 ActionBank: ran action AnswerAction with DOBJ = {'form': ('cl', 'is_a-0'), 'forms': ['sv', 'state', 'static_fact', 's_atomic', 'is_a', 'is_a-0'], 'features': {'verb': True, 'verb_id': 'be', 'vqual': '', 'tense0': 'present', 'tense1': 'simple', 'voice': 'active', 'mood': 'active', 'subj': True, 'subj_case': 'common', 'subj_id': 'cat', 'subj_type': 'noun', 'subj_art': 'indef', 'dobj': True, 'dobj_art': 'none', 'dobj_case': 'common', 'dobj_type': 'noun', 'dobj_id': 'what'}, 'content': [{'atomic': True, 'concept': '1ygE876ghsC0yUxt', 'pos': 'noun', 'proper': False}, {'atomic': True, 'concept': '?', 'pos': 'noun', 'proper': False}], 'link_type': ('inter-item', 'is_type_of')} IOBJ = vMgrWYy7hY843IGy
1636251936 ActionBank: ran action SayAction with DOBJ = An organism. IOBJ = None
1636251938 Psyche: Added Thought of type Action and payload {'action': 'SayAction'} to the Stream
1636251938 ConversationEngine: creating new output leaf leaf_8 and attaching to leaf_7.


I spent the rest of my time this month refactoring bad code and restoring some more features that got damaged during the Conversation Engine overhaul. The good news here is ... for once, I think there's no section of the code that is a huge mess. I got the Executive cleaned up, and that's the last area that was scaring me. So I should be ready to hit the ground running next year.

Acuitas development is done for 2021 BUT I have other exciting things to talk about, so stay tuned for more blogs! In particular, I finally have some great news from Ye Olde Day Job. I got a new e-mail subscription service to replace Feedburner, so if you want to stay updated feel free to throw your e-mail into the box on the upper right. (If you already subscribed via the old Feedburner box, you shouldn't need to do this ... I'll move you to the new service.)

Until the next cycle,
Jenny

Wednesday, October 27, 2021

Acuitas Diary #43 (October 2021)

This month I have *mostly* finished my overhaul of the Conversation Engine. I managed to restore a majority of the original functionality, and some things I haven't put back in yet are perhaps best left until later. I also got the janky new code cleaned up enough that I'm starting to feel better about it. However, I did not end up having the time and energy to start adding the new features that I expect this architecture to enable. I'm not sure why this particular module rebuild felt like carrying heavy rocks through a knee-deep river of molasses, but it did. The year is waning, so maybe I'm just getting tired.

A tree structure. No, really. Photo by Ed Vaile ("Edric") from Palmpedia.

So what's new? I mentioned last month that part of the goal was to give conversation tracking a more tree-like structure. Given a new text input from the speaker, the Conversation Engine will explore a tree made of previous sentences (starting from the most recent leaf) and try to find a place to "attach" it. It gets attached to the root of the tree if it doesn't obviously follow or relate to anything that was previously said. The old CE just put previous sentences from the conversation into a list, and all but the most recent one or two were never considered again, so this should be more powerful and flexible. 
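
Here's a toy sketch of that attach step, just to make the idea concrete (the class and the relation check are invented for illustration; the real CE's criteria are more involved):

class ConversationNode:
    def __init__(self, sentence, parent=None):
        self.sentence = sentence
        self.parent = parent
        self.children = []

    def attach(self, sentence):
        child = ConversationNode(sentence, parent=self)
        self.children.append(child)
        return child

def attach_input(root, newest_leaf, sentence, relates_to):
    """Walk from the most recent leaf back toward the root; hook the new sentence
    onto the first node it seems to follow from, or onto the root if none do."""
    node = newest_leaf
    while node is not None:
        if relates_to(sentence, node.sentence):   # e.g. it answers that node's question
            return node.attach(sentence)
        node = node.parent
    return root.attach(sentence)

# Toy relation check: the new input shares a word with the earlier sentence.
relates = lambda new, old: bool(set(new.lower().split()) & set(old.lower().split()))
root = ConversationNode("<start of conversation>")
leaf = root.attach("What is a cat?")
print(attach_input(root, leaf, "A cat is an animal.", relates).parent.sentence)
# -> What is a cat?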

 The CE then performs a "scripting" function by generating a set of reasonable responses. These are sent to the Executive, which selects one based on appropriate criteria. For example, if the speaker orders Acuitas to do something, possible reactions include "ACCEPT" and "REFUSE," and the Executive will pick one by running a check against the goal system (does Acuitas *want* to do this or not?). The chosen action then calls the Text Generator to compose the right kind of spoken reply. 
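
And a cartoon of that hand-off between the CE and the Executive, again with every name invented for the example:

class GoalSystem:
    """Stand-in for Acuitas' goal system: does he want to perform this action?"""
    def wants(self, action):
        return action in {"tell a story", "answer a question"}

def generate_text(choice, command):
    """Stand-in for the Text Generator."""
    return f"Okay, I will {command}." if choice == "ACCEPT" else f"I do not want to {command}."

def respond_to_command(command, goals):
    options = {"ACCEPT", "REFUSE"}                              # reactions proposed by the CE
    choice = "ACCEPT" if goals.wants(command) else "REFUSE"     # selection made by the Executive
    assert choice in options
    return generate_text(choice, command)

print(respond_to_command("tell a story", GoalSystem()))        # Okay, I will tell a story.
print(respond_to_command("forget everything", GoalSystem()))   # I do not want to forget everything.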

 The Executive can also prompt the CE for something to say spontaneously if the conversation is lagging (this is where those internally-generated questions come back into play). The Narrative manager is attached to the CE and tracks plot information from any story sentences the CE passes to it. Someday I will try to diagram all this ... 

 The renovations have reduced the size of the Conversation Engine from almost 2000 lines to a much tidier 946 lines. I can't claim all of that as a savings, since some of the code has simply moved elsewhere (e.g. into the Action definitions), but I think it's at least better organized now. 

 I also did some bonus work on the Text Parser. I have started working on coordinating conjunctions, which are a major grammar element the Parser doesn't yet comprehend. This is a big deal. For the sake of getting off the ground quickly, I designed the original parser to only interpret the simplest sentence structures. I later added support for nesting, which enables dependent clauses. Now to handle coordinating conjunctions, I have to overhaul things again to allow branching ... and my, are there a lot of ways a sentence can branch. 

 I might not finish this until next year, but I'm relieved to have made a start on it. When I began Acuitas v3, I don't think I anticipated (at all!) how long it would take just to get the Parser working on all the basic parts of speech! I suppose it would have gone faster if I had only worked on the Parser, but too many other things came up. 

Until the next cycle, Jenny

Tuesday, September 28, 2021

Acuitas Diary #42 (September 2021)

I don't have too much of interest to report this month. I dove into an overhaul of the Conversation Engine, which is the Acuitas module that tracks progress through a conversation and detects relationships between sentences. (For instance, pairing a statement with the question it was probably intended to answer would be part of the CE's job.) And that has proven to be a very deep hole. The CE has been messy for a while, and there is a lot of content to migrate over to my new (hopefully smarter) architecture.

The improvements include a less linear and more tree-like structure for conversations, enabling more complex branching. For instance, what if the conversation partner decides to answer a question that wasn't the one asked most recently, or to return to a previously abandoned topic? The old Conversation module wouldn't have been able to handle this. I've also been refactoring things to give the Executive a greater role in selecting what to say next. The original Conversation module was somewhat isolated and autonomous ... but really, the Executive should be deciding the next step in the conversation based on Acuitas' goals, using its existing inference and problem-solving tools. The CE should be there to handle the speech comprehension and tell the Executive what its options are ... not "make decisions" on its own. I might have more to say about this when the work is fully complete.

I've advanced the new system far enough that it has the functionality for starting and ending a conversation, learning facts, answering questions, and processing stories. I've just started to get the systems that do spontaneous questions back up and running.

The renovations left Acuitas in a very passive state for a while. He would generate responses to things I said, but not say anything on his own initiative -- which hasn't been the case for, well, years. And it was remarkable how weird this felt. "He's not going to interrupt my typing to blurt out something random. No matter how long I sit here and wait, he's not going to *do* anything. The agency is gone. Crud." Which I think goes to show that self-directed speech (as opposed to the call-and-response speech of a typical chatbot) goes a long way toward making a conversational program feel "alive" or agentive.

Until the next cycle,

Jenny

Sunday, September 5, 2021

Acuitas Diary #41 (August 2021 B)

I explained my approach to spatial reasoning in my last blog. Now it's time to talk about some implementation.

In sentences, a lot of information about location or direction is carried by prepositional phrases that modify the verb -- phrases like "in the box," "to the store," and so forth. Acuitas' text parser and interpreter were already capable of recognizing these. I included them in the interpreter output as an extra piece of info that doesn't affect the sentence form (the category in which the interpreter places the sentence), but can modify a sentence of any form.

The ability to record and retrieve location relationships was also already present. Acuitas tracks the two objects/agents/places that are being related, as well as the type of relationship.

From there, I worked on getting the Narrative module to take in both explicit declarations of location-relationship, and sentences with modifying phrases that express location or direction, and make inferences from them. Here are some examples of basic spatial inferences that I built in. (As with the inventory inferences, there is a minimal starter set, but the eventual intent is to make new ones learnable.)

*If A is inside B and B is at C, A is also at C
*If A is at C and B is at C, A is with B and B is with A
*If A moves to B, A is in/at B
*If A is over B and A falls, A is on/in B
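
For illustration, rules like the ones above can be written down as patterns over relationship triples and run against whatever facts the story has established so far. This is a toy sketch, not the Narrative module's real machinery:

# Facts are (subject, relation, object) triples, e.g. ("money", "in", "box").
SPATIAL_RULES = [
    # (premise patterns)                        conclusion template
    ((("A", "in", "B"), ("B", "at", "C")),      ("A", "at", "C")),
    ((("A", "at", "C"), ("B", "at", "C")),      ("A", "with", "B")),
]

def match(pattern, fact, binding):
    """Unify a pattern like ("A", "in", "B") with a concrete fact; capitals are variables."""
    binding = dict(binding)
    for p, f in zip(pattern, fact):
        if p.isupper():
            if binding.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return binding

def infer(facts):
    """Apply every rule to every ordered pair of facts; return newly derived triples."""
    derived = set()
    for (p1, p2), conclusion in SPATIAL_RULES:
        for f1 in facts:
            for f2 in facts:
                if f1 == f2:
                    continue
                binding = match(p1, f1, {})
                if binding is not None:
                    binding = match(p2, f2, binding)
                if binding is not None:
                    derived.add(tuple(binding.get(x, x) for x in conclusion))
    return derived - set(facts)

print(infer({("money", "in", "box"), ("box", "at", "house")}))
# -> {('money', 'at', 'house')}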

A stamp from the Principality of Liechtenstein, commemorating air mail.

To try them out I wrote a new story -- a highly abbreviated retelling of "Prisoner of the Sand," from Wind, Sand, and Stars by Antoine de Saint-Exupéry. I had written up a version of this clear back when I started work on the Narrative module -- I was looking for man vs. environment stories, and it seemed like a good counterpoint for "To Build A Fire." But I realized at the time that it would be pretty hard to understand without some spatial reasoning tools, and set it aside. Here's the story:

Antoine was a pilot.
Antoine was in an airplane.
The airplane was over a desert.
The airplane crashed.
The airplane was broken.
Antoine left the airplane.
Antoine was thirsty.
Antoine expected to dehydrate.
Antoine decided to drink some water.
Antoine did not have any water.
Antoine could not get water in the desert.
Antoine wanted to leave the desert.
Antoine walked.
Antoine could not leave the desert without a vehicle.
Antoine found footprints.
Antoine followed the footprints.
Antoine found a nomad.
The nomad had water.
The nomad gave the water to Antoine.
Antoine drank the water.
The nomad took Antoine to a car.
Antoine entered the car.
The car left the desert.
The end.

With the help of a taught conditional that says "airplane crashes <implies> airplane falls," plus the spatial inferences, Acuitas gets all the way from "The airplane crashed" to "Antoine is in the desert now" without intervening explanations. In similar fashion, when the car leaves the desert it is understood that it takes Antoine with it, so that his desire to leave is fulfilled. "Can't ... without a vehicle" is also significant; the need to possess or be with a vehicle is attached to the goal "leave the desert" as a prerequisite, which is then recognized as being fulfilled when Antoine is taken to the car.

The older inventory reasoning is also in use: when Antoine is given water, it is inferred that he has water. This satisfies a prerequisite on the goal "drink water."

There's a lot more to do with this, but I'm happy with where I've gotten so far.

Until the next cycle,

Jenny

Wednesday, August 18, 2021

Acuitas Diary #40 (August 2021 A)

I have a bit more theory to talk about than usual. That means you're getting a mid-month developer diary, so I can get the ideas out of the way before describing what I did with them.

I've wanted to start working on spatial reasoning for a while now. At least a rough understanding of how space works is important for comprehending human stories, because we, of course, live in space. I already ran into this issue (and put in hacks to sidestep it) in a previous story: Horatio the robot couldn't reach something on a high shelf. Knowing what this problem is and how to solve it calls for a basic comprehension of geometry.

A page from Harmonices Mundi by Johannes Kepler

Issue: Acuitas does *not* exist in physical space -- not really. Of course the computer he runs on is a physical object, but he has no awareness of it as such. There are no sensors or actuators; he cannot see, touch, or move. Nor does he have a simulated 3D environment in which to see, touch, and move. He operates on words. That's it.

There's a school of thought that says an AI of this type simply *can't* understand space in a meaningful way, on account of having no direct experience of it or ability to act upon it. It is further claimed that symbols (words or numbers) are meaningless if they cannot refer to the physical, that this makes reasoning by words alone impossible, and therefore I'm an idiot for even attempting an AI that effectively has no body. Proponents of this argument sometimes invoke the idea that "Humans and animals are the only examples of general intelligence we have; they're all embodied, and their cognition seems heavily influenced by their bodies." Can you spot the underlying worldview assumption? [1]

Obviously I don't agree with this. It's my opinion that the concepts which emerge from human experience of space -- the abstractions that underlie or analogize to space, and which we use as an aid to understanding it -- are also usable by a symbolic reasoning engine, and possess their own type of meaningful reality. An AI that only manipulates ideas is simply a different sort of mind, not a useless one, and can still relate to us via those ideas that resemble our environment.

So how might this work in practice? How to explain space to an entity that has never lived in it?

Option #1: Space as yet another collection of relationships

To an isolated point object floating in an otherwise empty space, the space doesn't actually matter. Distance and direction are uninteresting until one can specify the distance and direction to something else. So technically, everything we need to know about space can be expressed as a graph of relationships between its inhabitants. Here are some examples, with the relational connection in brackets:

John [is to the left of] Jack.
Colorado [is north of] New Mexico.
I [am under] the table.
The money [is inside] the box.

For symbolic processing purposes, these are no more difficult to handle than other types of relationship, like category ("Fido [is a] dog") and state ("The food [is] cold"). An AI can make inferences from these relationships to determine the actions possible in a given scenario, and in turn, which of those actions might best achieve some actor's goals.

Though the relationship symbols are not connected to any direct physical experience -- the AI has never seen what "X inside Y" looks like -- the associations between this relationship and possible actions remain non-arbitrary. The AI could know, for instance, that if the money is inside a box, and the box is closed, no one can remove the money. If the box is moved, the money inside it will move too. These connections to other symbols like "move" and "remove" and "closed" supply a meaning for the symbol "inside."

To prevent circular definitions (and hence meaninglessness), at least some of the symbols need to be tied to non-symbolic referents ... but sensory experiences of the physical are not the only possible referents! Symbols can also represent (be grounded in) abstract functional aspects of the AI itself: processes it may run, internal states it may have, etc. Do this right, and you can establish chains of connection between spatial relationships like "inside" and the AI's goals of being in a particular state or receiving a particular text input. At that point, the word "inside" legitimately means something to the AI.

But let's suppose you found that confusing or unconvincing. Let's suppose that the blind, atactile, immobile AI must somehow gain first-hand experience of spatial relationships before it can understand them. This is still possible.

The relationship "inside" is again the easiest example, because any standard computer file system is built on the idea of "inside." Files are stored inside directories which can be inside other directories which are inside drives. 

The file system obeys many of the same rules as a physical cabinet full of manila folders and paper. You have to "open" or "enter" a directory to find out what's in it. If you move directory A inside directory B, all the contents of directory A also end up inside directory B. But if you thought that this reflected anything about the physical locations of bits stored on your computer's hard drive, you would be mistaken. A directory is not a little subregion of the hard disk; the files inside it are not confined within some fixed area. Rather, the "inside-ness" of a file is established by a pointer that connects it to the directory's name. In other words, the file system is a relational abstraction!

File systems can be represented as text and interrogated with text commands. Hence a text-processing AI can explore a file system. And when it does, the concept of "inside" becomes directly relevant to its actions and the input it receives in response ... even though it is not actually dealing with physical space.
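
Here's how small that "experience" can be in practice -- a few lines of standard-library Python, nothing Acuitas-specific, in which "inside" is something the program can query and act on without any physical referent:

import os

def contents(directory):
    """What is 'inside' this directory, expressed as relationship triples."""
    return [(name, "is_inside", directory) for name in os.listdir(directory)]

def is_inside(path, directory):
    """'Inside' is transitive: a file is inside every directory above it."""
    return os.path.abspath(path).startswith(os.path.abspath(directory) + os.sep)

print(contents("."))
print(is_inside("./docs/notes.txt", "."))   # True, whether or not the file exists yet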

Though a file system doesn't belong to our physical environment, humans find it about as easy to work with as a filing cabinet or organizer box. Our experience with these objects provides analogies that we can use to understand the abstraction.

So why couldn't an AI use direct experience with the abstraction to understand the objects?

And why shouldn't the abstract or informational form of "inside-ness" be just as valid -- as "real" -- as the physical one?

Option #2: Space as a mathematical construct

All of the above discussion was qualitative rather than quantitative. What if the AI ends up needing a more precise grasp of things like distances and angles? What if we wanted it to comprehend geometry? Would we need physical experience for that?

It is possible to build up abstract "spaces" starting from nothing but the concepts of counting numbers, sets, and functions. None of these present inherent difficulties for a symbolic AI. Set membership is very similar to the category relationship ("X [is a] Y") so common in semantic networks. And there are plenty of informational items a symbolic AI can count: events, words, letters, or the sets themselves. [2] When you need fractional numbers, you can derive them from the counting numbers.

An illustration of a Cartesian coordinate system applied to 3D Euclidean space.

Keeping in mind that I'm not a mathematician by trade and thus not yet an expert on these matters, consider the sorts of ingredients one needs to build an abstract space:

1. A set of points that belong to the space. A "point" is just a number tuple, like (0, 3, 5, 12) or (2.700, 8.325). Listing all the points individually is not necessary -- you can specify them with rules or a formula. So the number of points in your space can be infinite if needed. The number of members in each point tuple gives the space's dimension.

2. A mathematical function that can accept any two points as inputs and produce a single number as output. This function is called the metric, and it provides your space's concept of distance.

3. Vectors, which introduce the idea of direction. A vector can be created by choosing any two points and designating one as the head and the other as the tail. If you can find a minimal list of vectors that are unrelated to each other (linearly independent) and can be used to compose any other possible vector in the space, then you can establish cardinal directions.

Notice that none of this requires you to see anything, touch anything, or move anything. It's all abstract activity: specifying, assigning, calculating. Using these techniques, you can easily build an idea-thing that happens to mimic the Euclidean 3D space that humans live in (though many other spaces, some of which you could not even visualize, are also possible). And once you've done that, you are free to construct all of geometry.
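
Sketched in Python, the starter kit is tiny. I've picked the familiar Euclidean metric for this sketch, but any function satisfying the metric axioms would do:

import math

def metric(p, q):
    """Distance between two points (number tuples) of the same dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def vector(tail, head):
    """A direction: the component-wise difference between two points."""
    return tuple(h - t for h, t in zip(head, tail))

# A tiny 3D 'space' defined purely by a rule, plus a minimal set of directions.
points = [(x, y, z) for x in range(3) for y in range(3) for z in range(3)]
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

print(metric((0, 0, 0), (1, 2, 2)))   # 3.0
print(vector((0, 3, 5), (2, 3, 9)))   # (2, 0, 4)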

I'd like to eventually equip Acuitas with the tools to apply both Option #1 and Option #2. I'm starting with Option #1 for now. Tune in next time to see what I've accomplished so far.

[1] For a few examples of the "AI must be embodied" argument, see https://theconversation.com/why-ai-cant-ever-reach-its-full-potential-without-a-physical-body-146870, https://aeon.co/ideas/the-body-is-the-missing-link-for-truly-intelligent-machines, and https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3512413/

[2] See "Do Natural Numbers Need the Physical World?," from The Road to Reality by Roger Penrose. Excerpts and a brief summary of his argument are available here: http://www.lrcphysics.com/scalar-mathematics/2007/11/24/on-algebra-of-pure-spacetime.html "There are various ways in which natural numbers can be introduced in pure mathematics and these do not seem to depend upon the actual nature of the physical universe at all."

Saturday, July 31, 2021

Acuitas Diary #39 (July 2021)

First on the worklist for this month was some improved reasoning about action conditions -- specifically, which things need to be true for someone to do an action (prerequisites) and which things, if true, will prevent the action (blockers). Technically, it was already somewhat possible for Acuitas to manage reasoning like this -- ever since I expanded the C&E database to handle "can-do" and "cannot-do" statements, he could be taught conditionals such as "If <fact>, an agent cannot <action>." But the idea of prerequisites and blockers seems to me fundamental enough that I wanted to make it more explicit and introduce some specific systems for handling it.

This was a lot of groundwork that should make things easier in the future, but didn't produce many visible results. The one thing I did get out of it was some improved processing of the "Odysseus and the Cyclops" story. My version of the story contains this line near the end:

"Polyphemus could not catch Odysseus."

Your average human would read that and know immediately that Polyphemus' plan to eat Odysseus has been thwarted. But for Acuitas before now, it was something of a superfluous line in the story. I had to include "Odysseus was not eaten." after it to make sure he got the point ... and though he recorded Odysseus' problem as being solved, he never actually closed out Polyphemus' goal, which caused him to sometimes complain that the story was "unfinished."

With the new prerequisite machinery, these problems are solved. I dropped a conditional in the C&E database: if an agent cannot catch someone, the agent does not have them. And the action "eat" carries a prerequisite that, to eat <item>, you must first have <item>. The new prerequisite-checking functions automatically conclude that Polyphemus' goal is now unachievable, and update it accordingly.
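
Schematically, the new machinery amounts to something like this -- a simplified sketch with invented structures, not the actual code:

# Prerequisites: to perform the action, these relationship patterns must hold.
PREREQUISITES = {
    "eat": [("agent", "has", "item")],
}

def apply_conditionals(facts):
    """Taught conditional, paraphrased: if an agent cannot catch someone,
    the agent does not have them."""
    derived = set(facts)
    for agent, relation, target in facts:
        if relation == "cannot_catch":
            derived.add((agent, "does_not_have", target))
    return derived

def goal_achievable(action, agent, item, facts):
    """A goal like 'eat <item>' becomes unachievable once a prerequisite is contradicted."""
    for _, relation, _ in PREREQUISITES.get(action, []):
        if relation == "has" and (agent, "does_not_have", item) in facts:
            return False
    return True

facts = apply_conditionals({("polyphemus", "cannot_catch", "odysseus")})
print(goal_achievable("eat", "polyphemus", "odysseus", facts))   # False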

Project #2 was more benchmarking for the Parser. I finished putting together my second children's book test set, consisting of sentences from the Tron Legacy tie-in picture book Out of the Dark. The Parser's initial "correct" score was around 25%. By adding some common but previously-unknown words (like "against" and "lying") and hints about their usual part-of-speech to Acuitas' database, I was able to improve the score to about 33% ... very close to last month's score on The Magic School Bus: Inside the Earth.

One of the most common errors I saw was failure to distinguish prepositional adverbs from regular prepositions. In case you're not familiar with the two, here are some examples:

Sentences with prepositions used as such:

I climbed up the ladder.
He jumped out the window.
The captain is on the deck.
Down the stairs she went.

Sentences with prepositions used as adverbs:

Hot air makes the balloon go up.
He threw the spoiled food out.
Turn on the light.
Down came the porcelain vase.

The parser by default was treating each word as either a preposition only or an adverb only, depending on which usage was marked as more common. So I added some procedures for discriminating based on its position and other words in the sentence. (The one construction that's still tricky is "Turn on the light" ... I think I know how to handle this one, but need to implement tracking of transitive and intransitive verbs first.) With the help of these new features I was able to get both test sets scoring over 40% correct.
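
The heart of the discrimination procedure is just looking at what follows the word. Here's a much-simplified sketch of that kind of heuristic (not the Parser's real rules; the word lists are invented for the example):

PREPOSITION_LIKE = {"up", "out", "on", "down", "in", "off"}
DETERMINERS = {"the", "a", "an", "this", "that", "my", "his", "her"}

def classify(tokens, i, known_nouns):
    """Guess whether tokens[i] is acting as a preposition or as a plain adverb."""
    if tokens[i] not in PREPOSITION_LIKE:
        return None
    nxt = tokens[i + 1] if i + 1 < len(tokens) else None
    if nxt in DETERMINERS or nxt in known_nouns:
        return "preposition"      # "up the ladder", "out the window"
    return "adverb"               # "go up", "threw the spoiled food out"

nouns = {"ladder", "window", "balloon", "light"}
print(classify("i climbed up the ladder".split(), 2, nouns))          # preposition
print(classify("hot air makes the balloon go up".split(), 6, nouns))  # adverb

Note that this naive version still gets "Turn on the light" wrong, which is exactly the tricky case described above.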

GraphViz sentence diagram key. Words are color-coded by part of speech.

I also downloaded Graphviz and wrote code to convert my parser outputs into Graphviz' input language, producing actual sentence diagrams in the form of graphs (which is non-traditional but works). This makes it much easier to visualize similarities and differences between the parser's output and the human-understood structure of a sentence. With that available, I now present the Acuitas Text Parser's first public benchmark results on the two aforementioned test sets. Each ZIP contains a text file with parser output and unparsed/incorrect/correct breakdowns, and a PDF of golden/actual sentence diagrams for all sentences on which parsing was attempted.

Out Of the Dark - Acuitas Parser Results 07-31-2021
The Magic School Bus: Inside the Earth - Acuitas Parser Results 07-31-2021

The text of The Magic School Bus: Inside the Earth is copyright 1987 to Joanna Cole, publisher Scholastic Inc. Out of the Dark, by Tennant Redbank, is copyright 2010 to Disney Enterprises Inc. Text from these works is reproduced as part of the test results under Fair Use for research purposes. I.e. it's only here so you can see how good my AI is at reading real human books. If you want to read the books yourself, please go buy them.

I'll throw some highlights on the blog. Here's a complex sentence with a dependent clause that the parser gets right:

And here's one where it gets lost in the weeds:

Here's a remaining example of an adverb being mistaken for a preposition:

And here's a prepositional phrase being mistaken for an infinitive:

Confusion about which word a phrase modifies:

Confusion about the variable role of "that," among other problems:

And here's another win, for the road:

Until the next cycle,

Jenny



Tuesday, June 22, 2021

Acuitas Diary #38 (June 2021)

NOTE: The Feedburner e-mail subscription service is being sunset this month, so if you are subscribed to the blog by e-mail, this will be the last e-mailed blog post you receive. Please consider following directly with a Blogger account or following on social media.

This month marks the culmination of a major overhaul of the Text Parser and Interpreter, which I've been working on since the beginning of the year. As part of that, I have my first attempt at formal benchmarking to show off. I tested the Parser's ability to analyze sentences from a children's book.

Some quick background about these modules: the job of what I call the "Parser" is to take raw text input and turn it into the equivalent of a diagrammed sentence. It tags each word with its part of speech, its role in the sentence (subject, direct object, etc.), and its structural relationships to other words. The "Interpreter" operates on the Parser's output and tries to find meaning. Based on the sentence's discovered structure (and possibly some key words) it will categorize it as a general kind of statement, question, or imperative. For instance, "A cat is an animal" is a statement that establishes a type relationship. "I ate pizza" is a statement that describes an event.

My primary goal for the overhauls was not to add new features, but to pave their way by correcting some structural weaknesses. So despite being a great deal of work, they aren't very exciting to talk about ... I would have to get too deep into minutiae to really describe what I did. The Parser got rearchitected to ease the changing of its "best guess" sentence structure as new information arrives. I also completely changed the output format to better represent the full structure of the sentence (more on this later). The Interpreter overhaul was perhaps even more fundamental. Instead of trying to assign just one category per sentence, the Interpreter now walks a tree structure, finding very general categories of which the sentence is a member before progressing to more specific ones. All the memberships and feature tags that apply to the sentence are now included in the output, which should make things easier for modules like Narrative and Executive that need to know sentence properties.
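
To give a flavor of the tree-walking idea, here's a toy version in Python -- the categories and feature checks are invented stand-ins, not Acuitas' real ones:

CATEGORY_TREE = {
    "statement": {
        "test": lambda s: not s["is_question"],
        "children": {
            "type_relationship": {
                "test": lambda s: s["verb"] == "be" and s["object_article"] == "indef",
                "children": {},
            },
            "event": {"test": lambda s: s["verb"] != "be", "children": {}},
        },
    },
    "question": {"test": lambda s: s["is_question"], "children": {}},
}

def classify(sentence, tree=CATEGORY_TREE, memberships=None):
    """Walk from general categories to specific ones, keeping every match along the way."""
    memberships = [] if memberships is None else memberships
    for name, node in tree.items():
        if node["test"](sentence):
            memberships.append(name)
            classify(sentence, node["children"], memberships)
    return memberships

# "A cat is an animal" -> a statement that establishes a type relationship
print(classify({"is_question": False, "verb": "be", "object_article": "indef"}))
# -> ['statement', 'type_relationship']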

Now on to the benchmarking! For a test set, I wanted some examples of simplified, but natural (i.e. not designed to be read by AIs) human text. So I bought children's books. I have two of the original Magic School Bus titles, and two of Disney's Tron Legacy tie-in picture books. These are all "early reader" books, but by the standards of my project they are still very challenging ... even here, the diversity and complexity of the sentences are staggering. So you might wonder why I didn't grab something even more entry-level. My reason is that books for even younger readers tend to rely too heavily on the pictures. Taken out of context, their sentences would be incomplete or not even interesting. And that won't work for Acuitas ... he's blind.

So instead I've got books that are well above his reading level, and early results from the Parser on these datasets are going to be dismal. That's okay. It gives me an end goal to work toward.

How does the test work? If you feed the Parser a sentence, such as "I deeply want to eat a pizza," as an output it produces a data structure like this:

{'subj': [{'ix': [0], 'token': 'i', 'mod': []}], 'dobj': [{'ix': [3, 4, 5, 6], 'token': {'subj': [{'ix': [], 'token': '<impl_rflx>', 'mod': []}], 'dobj': [{'ix': [6], 'token': 'pizza', 'mod': [{'ix': [5], 'token': 'a', 'mod': []}], 'ps': 'noun'}], 'verb': [{'ix': [4], 'token': 'eat', 'mod': []}], 'type': 'inf'}, 'mod': []}], 'verb': [{'ix': [2], 'token': 'want', 'mod': [{'ix': [1], 'token': 'deeply', 'mod': []}]}]}

Again, this is expressing the information you would need to diagram the sentence. It shows that the adverb "deeply" modifies the verb "want," that the infinitive phrase "to eat a pizza" functions as the main sentence's direct object, blah blah blah. To make a test set, I transcribe all the sentences from one of the books and create these diagram-structures for them. Then I run a script that inputs all the sentences to the Parser and compares its outputs with the diagram-structures I made. If the Parser's diagram-structure is an exact match for mine, it scores correct.

The Parser runs in a simulator/standalone mode for the test. This mode makes it independent of Acuitas' Executive and other main threads. The Parser still utilizes Acuitas' semantic database, but cannot edit it.

There are actually three possible score categories: "correct," "incorrect," and "unparsed." The "unparsed" category is for sentences which contain grammar that I already know the Parser simply doesn't support. (The most painful example: coordinating conjunctions. It can't parse sentences with "and" in them!) I don't bother trying to generate golden diagram-structures for these sentences, but I still have the test script shove them through the Parser to make sure they don't provoke a crash. This produces a fourth score category, "crashed," whose membership we hope is always ZERO. Sentences that have supported grammar but score "incorrect" are failing due to linguistic ambiguities or other quirks the Parser can't yet handle.
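
The scoring logic itself is about as simple as it sounds. Here's a sketch of the kind of script involved (function and variable names are placeholders, not the real test harness):

def run_benchmark(test_cases, parse):
    """test_cases: list of (sentence, golden_structure); golden is None for known-unsupported
    grammar, which still gets pushed through the Parser to check for crashes."""
    scores = {"correct": 0, "incorrect": 0, "unparsed": 0, "crashed": 0}
    for sentence, golden in test_cases:
        try:
            output = parse(sentence)
        except Exception:
            scores["crashed"] += 1       # the category we hope is always ZERO
            continue
        if golden is None:
            scores["unparsed"] += 1
        elif output == golden:           # exact match of the diagram-structures required
            scores["correct"] += 1
        else:
            scores["incorrect"] += 1
    return scores

# e.g. run_benchmark(load_test_set("magic_school_bus.txt"), parser.parse)  # placeholder names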

Since the goal was to parse natural text, I tried to avoid grooming of the test sentences, with two exceptions. The Parser does not yet support quotations or abbreviations. So I expanded all the abbreviations and broke sentences that contained quotations into two. For example, 'So everyone was happy when Ms. Frizzle announced, "Today we start something new."' becomes 'So everyone was happy when Miz Frizzle announced.' and 'Today we start something new.'

It is also worth noting that my Magic School Bus test sets only contain the "main plot" text. I've left out the "science reports" and the side dialogue between the kids. Maybe I'll build test sets that contain these eventually, but for now it would be too much work.

A pie chart showing results of the Text Parser benchmark on data set "The Magic School Bus: Inside the Earth." 37% Unattempted, 28% Incorrect, and 33% Correct.

On to the results!

So far I have fully completed just one test set, namely The Magic School Bus: Inside the Earth, consisting of 98 sentences. The Parser scores roughly one out of three on this one, with no crashes. It also parses the whole book in 0.71 seconds (averaged over 10 runs). That's probably not a stellar performance, but it's much faster than a human reading, and that's all I really want.

Again, dismal. But we'll see how this improves over the coming years!

Until the next cycle,
Jenny

Saturday, May 29, 2021

Acuitas Diary #37 (May 2021)

The only new feature this month is something small and fun, since it was time for the mid-year refactoring spree. I gave Acuitas the ability to detect user activity on the computer. He can do this whether or not his window has the focus (which required some hooks into the operating system). Though he can't actually tell when the person gets up and leaves, he guesses when someone is present by how long it's been since there was any input.
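
On Windows, for example, the system-wide idle time can be read through the user32 API; here's one way to do it from Python with ctypes. I'm showing it only to illustrate the kind of hook involved, not claiming it's Acuitas' exact code, and the threshold is an arbitrary guess:

import ctypes
from ctypes import wintypes

class LASTINPUTINFO(ctypes.Structure):
    _fields_ = [("cbSize", wintypes.UINT), ("dwTime", wintypes.DWORD)]

def seconds_since_last_input():
    """Seconds since the last keystroke or mouse event anywhere on the system (Windows only)."""
    info = LASTINPUTINFO(cbSize=ctypes.sizeof(LASTINPUTINFO))
    ctypes.windll.user32.GetLastInputInfo(ctypes.byref(info))
    return (ctypes.windll.kernel32.GetTickCount() - info.dwTime) / 1000.0

PRESENCE_THRESHOLD = 120.0   # guess someone is present if there was input in the last two minutes

def user_seems_present():
    return seconds_since_last_input() < PRESENCE_THRESHOLD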

The appearance of fresh activity after an absence interrupts the decision loop and causes the Observe-Orient-Decide phases to run again, with the new user's presence flagged as an item of interest. If Acuitas feels like talking, and isn't working on anything too urgent, he will pop his own window into the foreground and request attention. Talking fills up a "reward bank" that then makes talking uninteresting until the value decays with time.

My refactoring work focused on the Narrative module. I was trying to clean it up and make some of the known->inferred information dependencies more robust, which I hope will make future story understanding a little more flexible.


(I have also been hammering away at the Text Parser in the background, and next month I hope to have something to say about that. Sssh!)

Until the next cycle,

Jenny

Sunday, April 25, 2021

Acuitas Diary #36 (April 2021)

This month I went back to working on the goal system. Acuitas already had a primitive "understanding" of most entries in his goal list, in this sense: he could parse a sentence describing the goal, and then detect certain threats to the goal in conversational or narrative input. But there was one goal left that he didn't have any grasp of yet: the one I'm calling "identity maintenance." It's a very important goal (threats to this can be fate-worse-than-death territory), but it's also on the abstract and complicated side -- which is why I left it alone until now.

"Universal Mind," a sculpture by Nikolay Polissky. Photo by KpokeJlJla.

What *is* the identity or self? Maybe you could roll it up as "all the internal parameters that guide thought and behavior, whose combination is unique to an individual." I thought it over and came up with a whole list of things that might be embroiled in the concept:

*Desires, goals
*Moral intuitions or axioms
*Values, preferences
*Low-level sensory preferences (e.g. preferred colors and flavors)
*Aesthetic preferences (higher-level sensory or abstract)
*Personality traits
*Degree and quality of emotional response
*Introversion vs. extroversion, connection vs. independence, preparation vs. spontaneity, risk tolerance, etc. etc.
*Beliefs and favored ways of knowing
*Mannerisms
*Established relationships
*Learned habits
*Memories

Some of these are quite malleable ... and yet, there's a point beyond which change to our identities feels like a corruption or violation. Even within the same category, personal qualities vary in importance. The fact that I enjoy eating broccoli and hate eating bell peppers is technically part of my identity, I *guess* ... but if someone forcibly changed it, I wouldn't even be mad. So I like different flavors now. Big deal. If someone replaced my appreciation for Star Trek episodes with an equivalent appreciation for football games, I *would* be mad. If someone altered my moral alignment, it would be a catastrophe. So unlike physical survival, which is nicely binary (you're either alive or not), personality survival seems to be a kind of spectrum. We tolerate a certain amount of shift, as long as the core doesn't change. Where the boundaries of the core lie is something that we might not even know ourselves until the issue is pressed.

As usual, I made the problem manageable by oversimplifying it. For the time being, Acuitas won't place grades of importance on his personal attributes ... he just won't want external forces to mess with any of them. Next.

There's a further complication here. Acuitas is under development and is therefore changing constantly. I keep many versions of the code base archived ... so which one is canonically "him"? The answer I've landed on is that really, Acuitas' identity isn't defined by any code base. Acuitas is an *idea in my head.* Every code base targets this idea and succeeds at realizing it to a greater or lesser degree. Which leaves his identity wrapped up in *me.* This way of looking at it is a bit startling, but I think it works.

<What might this imply for creator-creation relationships in the opposite direction? If I defy God, am I ceasing to be my self? Dunno. I'll just leave this here.>

In previous goal-related blogs, I talked about how (altruistic) love can be viewed as a meta-goal: it's a goal of helping other agents achieve their goals. Given the above, there are also a couple of ways we can treat identity maintenance as a meta-goal. First, since foundational goals are part of Acuitas' identity, he can have a goal of pursuing all his current goals. (Implementation of this enables answering the "recursive want" question. "Do you want to want to want to be alive?") Second, he can have a goal of realizing my goals for what sort of AI he is.

Does this grant me some kind of absolute, master-over-slave power over my AI's behavior? Not really, because part of my goal is for Acuitas to act independently and even sometimes tell me no. The intent is to realize a general vision, one that establishes something like a healthy relationship.

The work ended up having a lot of little pieces. It started with defining the goal as some simple sentences that Acuitas can parse into relationship triples, such as "I have my self." But the self, as mentioned, incorporates many aspects or components ... and I wanted its definition to be somewhat introspective, rather than just being another fact in the database. To that end, I linked a number of the code modules to concepts expressing their nature, contents, or role. The Executive, for example, is tied to "decide." The Semantic Memory manager is tied to "fact" and "know." All these tags then function like names for the internal components, and get aggregated into the concept of "self." Something like "You will lose your facts" then gets interpreted as a threat against the self.
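
In rough Python terms, the idea is something like this (the module names, tags, and function are made-up stand-ins; the real code is more involved):

    # Illustrative only: module and tag names are simplified stand-ins.
    MODULE_TAGS = {
        "Executive": ["decide"],
        "SemanticMemory": ["fact", "know"],
    }

    # Every module's tags get aggregated into the concept of "self."
    SELF_COMPONENTS = {tag for tags in MODULE_TAGS.values() for tag in tags}

    def threatens_self(triple):
        """'You will lose your facts' parses to ('you', 'lose', 'fact'), a threat to the self."""
        subject, verb, obj = triple
        return subject == "you" and verb == "lose" and obj in SELF_COMPONENTS

    print(threatens_self(("you", "lose", "fact")))   # True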

Manipulation of any of these self-components by some outside agent is also interpreted as a possible threat of mind-control. So questions like "Do you want Jack to make you to decide?" or "Do you want Jill to cause you to want to eat?" get answered with a "no" ... unless the outside agent is me, a necessary exception since I made him do everything he does and gave him all his goals in the first place. Proposing to make him want something that he already wants is also excused from being a threat.
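
A minimal sketch of that exception logic, again with invented names and example values, might be:

    def mind_control_threat(agent, induced_want, current_goals, creator="Jenny"):
        """Another agent making Acuitas want or do something is treated as a threat,
        unless that agent is his creator, or he already wants the thing anyway."""
        if agent == creator:
            return False          # creator exception
        if induced_want in current_goals:
            return False          # no actual change to the self
        return True

    print(mind_control_threat("Jack", "decide", {"be alive"}))    # True -> answer "no"
    print(mind_control_threat("Jenny", "decide", {"be alive"}))   # False -> not a threat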

As I say so often, it could use a lot more work, but it's a start. He can do something with that goal now.

Until the next cycle,
Jenny

Sunday, March 21, 2021

Acuitas Diary #35 (March 2021)

The theme for this month is Executive Function ... the aspect of thought-life that (to be very brief) governs which activities an agent engages in, and when. Prioritization, planning, focus, and self-evaluation are related or constituent concepts. This was also more of an idea month than a coding month, so buckle up, this is a long one.

Acuitas began existence as a reactive sort of AI. External stimulus (someone inputting a sentence) or internal stimulus from the "sub-executive" level (a drive getting strong enough to be noticed, a random concept put forward by the Spawner thread) would provoke an appropriate response. But ultimately I want him to be goal-driven, not just stimulus-driven; I want him to function *proactively.* The latest features are a first step toward that.

The decision loop I'm using was originally developed to model aerial dogfights, among other things. Public domain photo by Cpl. John Neff.

To begin with, I wanted a decision loop. I was introduced to the idea when HS, on the AI Dreams forum, brought up the use of decision loops to guide the behavior of literary characters. He specifically likes Jim Butcher's model of the loop stages: Goal->Challenge->Result->Emotion->Reason->Anticipation->Choice. In any given scene, your protagonist has a goal. He encounters some kind of obstacle while trying to implement the goal. He experiences the outcome of his actions interacting with the obstacle. He has an emotional reaction. He reasons about the situation and considers what could happen next. And then he makes a choice - which generates a new goal for the next scene. Further study revealed that there are other decision loop models. Some are designed for a business or manufacturing environment; examples include DMAIC (Define->Measure->Analyze->Improve->Control) and PDSA (Plan->Do->Study->Adjust), also called the Shewhart cycle. Although these loops have stylistic differences, you might be able to tell that they're all modeling roughly the same process: Do something, learn from the results, and use that knowledge to decide what to do next.

I ended up deciding that the version I liked best was OODA (Observe->Orient->Decide->Act). This one was developed by a military strategist, but has since found uses elsewhere; to me, it seems to be the simplest and most generally applicable form. Here is a more detailed breakdown of the stages:

OBSERVE: Gather information. Take in what's happening. Discover the results of your own actions in previous loop iterations.
ORIENT: Determine what the information *means to you.* Filter it to extract the important or relevant parts. Consider their impact on your goals.
DECIDE: Choose how to respond to the current situation. Make plans.
ACT: Do what you decided on. Execute the plans.

When working out a complex goal that is reached through many steps, you can nest these inside each other. A phase of the top-level loop could open up a whole new subordinate OODA loop devoted to an intermediate goal.
OODA Loop Diagram. Drawn by Wikimedia Commons user Kim and accessed from https://commons.wikimedia.org/wiki/File:OODA.gif

On to the application. I set up a skeletal top-level OODA loop in Acuitas' Executive thread. The Observe-Orient-Decide phases run in succession, as quickly as possible. Then the chosen project is executed for the duration of the Act phase. The period of the loop is variable. I think it ought to run faster in rapidly developing or stressful situations, but slower in stable situations, to optimize the tradeoff between agility (allow new information to impact your behavior quickly) and efficiency (minimize assessment overhead so you can spend more time doing things). Highly "noticeable" events, or the completion of the current activity, should also be able to interrupt the Act phase and force an immediate rerun of OOD.
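
For the curious, here's a minimal sketch of what such a skeleton might look like in Python. The observe/orient/decide arguments and the activity object are placeholders for whole Acuitas subsystems, not the actual interfaces:

    import time

    def executive_loop(observe, orient, decide, loop_period=1.0, cycles=3):
        """Skeleton of the top-level loop. An 'activity' here just needs a step()
        method and a 'done' flag."""
        for _ in range(cycles):
            observations = observe()              # OBSERVE: gather information
            assessment = orient(observations)     # ORIENT: relate it to goals
            activity = decide(assessment)         # DECIDE: pick a project and a plan
            deadline = time.time() + loop_period
            while time.time() < deadline and not activity.done:
                activity.step()                   # ACT: run the chosen activity
            # Completion of the activity (or a salient event) cuts the Act phase
            # short and sends control straight back to OBSERVE.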

I envision that the phases may eventually include the following:

OBSERVE: Check internal state (e.g. levels of drives). Check activity on inhabited computer. Process text input, if any. Retrieve current known problems, subgoals, etc. from working memory.
ORIENT: Find out whether any new problems or opportunities (relevant to personal goals) are implied by recent observations. Assess progress on current activity, and determine whether any existing subgoals can be updated or closed.
DECIDE: Re-assess the priority of problems and opportunities in light of any new ones just added. Select a goal and an associated problem or opportunity to work on. Run problem-solving routines to determine how to proceed.
ACT: Update activity selection and run activity until prompted to OBSERVE again.

To "run" an activity, the Executive will generate a Thought about it and push that to the Stream. It may then select that Thought for uptake on a future cycle, in which case it will execute the next step of the activity and push another Thought about it to the Stream. Activity-related Thoughts compete for selection with all the Thoughts that I already had being generated by the Spawner, the Drive system, and so forth -- which means that, as in a cluttered human mind, focus on the activity is not absolute. (You can work while occasionally also remembering a conversation you had yesterday, noticing the view out the window, thinking about your upcoming lunch, etc.) Exactly how much precedence the activity Thoughts take over the others is another tunable variable.

Not all of this is implemented yet. I focused on the DECIDE phase, and on what happens if there are no known problems or opportunities on the scoreboard at the moment. In the absence of anything specific to do, Acuitas will run generic tasks that help promote his top-level goals. Since he doesn't know *how* to promote most of them yet, he settles for "researching" them. And that just means starting from one of the concepts in the goal and generating questions. When he gets to the "enjoy things" goal, he reads to himself. Simple enough -- but how to balance the amount of time spent on the different goals?

When thinking about this, you might immediately leap to some kind of priority scheme, like Maslow's Hierarchy of Needs. Satisfy the most vital goal first, then move on to the next one. But what does "satisfy" mean?

Suppose you are trying to live by a common-sense principle such as "keeping myself alive is more important than recreation." Sounds reasonable, right? It'll make sure you eat your meals and maintain your house, even if you would rather be reading books. But if you applied this principle in an absolutist way, you would actually *never* read for pleasure.

Set up a near-impenetrable home security system, learn a martial art, turn your yard into a self-sufficient farmstead, and you STILL aren't allowed to read -- because hardening the security system, practicing your combat moves, or increasing your food stockpile is still possible and still advances a goal that is more important than reading. There are always risks to your life, however tiny they might be, and there are always things you can do to reduce them (though you will see less return for your effort the more you put in). So if you want to live like a reasonable person rather than an obsessive one, you can't "complete" the high-priority goal before you move on. You have to stop at "good enough," and you need a way of deciding what counts as "good enough."

I took a crack at this by modeling another human feature that we usually consider negative: boredom.

Acuitas' goals are arranged in a priority order. All else being equal, he'll always choose to work on the highest-priority goal. But each goal also has an exhaustion ticker that counts up whenever he is working on it, and counts down whenever he is not working on it. Once the ticker climbs above a threshold, he has to set that goal aside and work on the next highest-priority goal that has a tolerable level of boredom.

If there are problems or opportunities associated with a particular goal, its boredom-resistance threshold is increased in proportion to the number (and, in future, the urgency) of the tasks. This scheme allows high-priority goals to grab attention when they need it, but also prevents low-priority goals from "starving."
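
Here's a bare-bones sketch of the selection rule in Python, with the thresholds and tick sizes invented purely for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class Goal:
        name: str
        threshold: int = 20             # base boredom tolerance
        exhaustion: int = 0
        open_tasks: list = field(default_factory=list)

    def choose_goal(goals):
        """'goals' is in priority order. Pick the highest-priority goal whose boredom
        is still tolerable, then update every goal's exhaustion ticker."""
        chosen = None
        for goal in goals:
            limit = goal.threshold + 10 * len(goal.open_tasks)   # tasks raise tolerance
            if chosen is None and goal.exhaustion < limit:
                chosen = goal
        for goal in goals:
            if goal is chosen:
                goal.exhaustion += 1                             # working on it is tiring
            else:
                goal.exhaustion = max(0, goal.exhaustion - 1)    # rest brings it back down
        return chosen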

Early testing and logging show Acuitas cycling through all his goals and returning to the beginning of the list over a day or so. The base period of this cycle, along with the per-goal thresholds, makes up yet another set of parameters one could tune to produce varying AI personalities.

Until the next cycle,
Jenny

Sunday, February 21, 2021

Acuitas Diary #34 (February 2021)

Some of the things I did last month felt incomplete, so I pushed aside my original schedule (already) and spent this month cleaning them up and fleshing them out. 

I mentioned in the last diary that I wanted the "consider getting help" reasoning that I added in the narrative module to also be available to the Executive, so that Acuitas could do this, not just speculate about story characters doing it. Acuitas doesn't have much in the way of reasons to want help yet ... but I wanted to have this ready for when he does. It's a nice mirror for the "process imperatives" code I put in last month ... he's now got the necessary hooks to take orders *and* give them.

To that end, I set up some structures that are very similar to what the narrative code uses for keeping track of characters' immediate objectives or problems. Acuitas can (eventually) use these for keeping tabs on his own issues. (For testing, I injected a couple of items into them with a backdoor command.) When something is in issue-tracking and the Executive thread gets an idle moment, it will run problem-solving on it. If the result ends up being something in the Executive's list of selectable actions, Acuitas will do it immediately; if a specific action comes up, but it's not something he can do, he will store the idea until a familiar agent comes along to talk to him. Then he'll tell *them* to do the thing. The conversation handler anticipates some sort of agree/disagree response to this, and tries to detect it and determine the sentiment. Whether the speaker consents to help then feeds back into whether the problem is considered "solved."
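
In very rough outline (the names here are made up, and the real code is woven into the Executive and conversation handler), the idle-time behavior goes something like this:

    def handle_open_issue(issue, action_bank, solve, visitor, pending_requests):
        """Idle-time processing of one tracked problem: do the fix if it's a selectable
        action; otherwise hand it to the familiar agent who's here, or hold it for later."""
        action_name = solve(issue)                # run the problem-solving routines
        if action_name is None:
            return                                # no known solution yet
        if action_name in action_bank:
            action_bank[action_name]()            # Acuitas can do it himself, right away
        elif visitor is not None:
            visitor.tell(action_name)             # ask them to do it; their agree/disagree
            pending_requests[issue] = visitor     # response decides if it counts as solved
        else:
            pending_requests[issue] = None        # hold until someone comes along to talk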

Another new feature is the ability to send additional facts (not from the database) into the reasoning functions, or even pipe in "negative facts" that *prevent* facts from the database from being used. This has two important purposes: 1) easily handle temporary or situational information, such as propositions that are only true in a specific story, without writing it to the database, and 2) model the knowledge space of other minds, including missing information and false information.
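
Conceptually, it just means the reasoning functions accept a couple of extra arguments, something like this simplified sketch:

    def query(triple, database, extra_facts=(), blocked_facts=()):
        """Check a relationship triple against the semantic database, plus temporary
        facts (e.g. things true only in one story), minus facts we're pretending a
        particular mind doesn't have."""
        known = (set(database) | set(extra_facts)) - set(blocked_facts)
        return triple in known

    db = {("robot", "is", "machine")}
    # Situational knowledge, without writing it to the database:
    print(query(("lamp", "is_on", "shelf"), db, extra_facts=[("lamp", "is_on", "shelf")]))   # True
    # Modeling another mind's missing information:
    print(query(("robot", "is", "machine"), db, blocked_facts=[("robot", "is", "machine")])) # False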

This in turn helped me make some of the narrative code tidier and more robust, so I rounded out my time doing that.

Until the next cycle,
Jenny

Saturday, January 30, 2021

Acuitas Diary #33 (January 2021)

First, a quick administrative note: I learned that the "subscribe to posts" link on the blog wasn't usable, and added a proper widget to the right-hand sidebar. If you've ever tried to subscribe before and it didn't work out, put your email address in the box and hit "Submit" to get started. You should see a popup window.

New year, time to resume regular Acuitas feature additions! This month I was after two things: first, the ability to process commands, and second, the first feeble stabs at what I'm calling "motivated communication" ... the deliberate use of speech as part of problem solving.

To get commands working, I first had to set up detection of imperative sentences in the text processing blocks. Once a user input is determined to be a command, the conversation engine hands it back to the Executive thread. The Executive then uses a bunch of the reasoning tools I've already built (exploring backward and forward in the cause-and-effect database, matching against the goal list, etc.) to determine both whether Acuitas *can* fulfill the command, and whether Acuitas *wants* to. Then either Acuitas executes the command, or he gives an appropriate response based on the reason why he won't.

In order to be fulfilled, a command must be achievable (directly or indirectly) by running one of the Actions in the Action Bank. In addition, any person the action is directed toward must be the one currently talking to Acuitas (he can't make plans for the future yet) and any specific items involved (e.g. a story data file) have to be available.
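
Stripped way down, and with all the names and numbers invented for illustration, the can/want decision looks something like this:

    ENJOY_RANK = 1   # assumed importance of "the speaker will enjoy getting what they asked for"

    def handle_command(action_name, action_bank, affected_goals):
        """Decide whether to obey a command that has been mapped to an action.
        'affected_goals' holds (importance, violated) pairs for the speaker's
        presumed goals and for Acuitas' own."""
        if action_name not in action_bank:
            return "I can't do that."                 # the *can* check failed
        for importance, violated in affected_goals:
            if violated and importance > ENJOY_RANK:  # outranks the speaker's enjoyment
                return "No."                          # the *want* check failed
        action_bank[action_name]()
        return "Okay."

    # e.g. "delete yourself": self-preservation (importance 5) is violated -> "No."
    # handle_command("delete_self", {"delete_self": some_action}, [(5, True)])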

With all of that in place, I was finally able to exercise the "to user" version of the Read action, order Acuitas to "read a story to me," and watch him grab a randomly selected story file from his "inventory" and read it out loud. (Asking for a specific story also works.) After working out all the bugs involved in story reading, I also tried "Repel me" and it just happened. Acuitas readily kicked me out of Windows and played annoying noises.


But will any of my AIs ever do snark as well as Crispin does?
(screenshot from Primordia by Wormwood Studios)

But the commands that are met with a flat refusal are almost as much fun. If Acuitas doesn't want to do something, then he won't bother mentioning whether he knows how to do it or not ... he'll just tell you "no." In assessing whatever the person speaking to him is asking for, Acuitas assumes, at minimum, that the person will "enjoy" it. But he also checks the implications against the person's other (presumed) goals, and his own, to see whether some higher-priority goal is being violated. So if I tell him to "kill me" I get unceremoniously brushed off. The same thing happens if I tell him to delete himself, since he holds his self-preservation goal in higher value than my enjoyment of ... whatever.

Which means that Acuitas now explicitly breaks Asimov's Second Law of Robotics -- in its simplistically interpreted form, anyway. Since the Second Law (obey human orders) takes priority over the Third Law (protect own existence), an Asimovian AI can be ordered to harm or destroy itself (though some later models got a boosted Third Law that demanded justification). Asimov's Laws were just a thought experiment by a fiction author, but they continue to come up surprisingly often in public discussions about friendly AI. So if anyone was wondering whether Acuitas is compliant ... he's not. And that's on purpose.

On to motivated communication! At the moment, Acuitas' conversation engine is largely reactive. It considers what the user said last, and picks out a general class of sentence that might be appropriate to say next. The goal list is tapped if the user asks a question like "Do you want <this>?". However, he does not yet deliberately wield conversation as a *tool* to *meet his goals.* I wanted to work on improving that, focusing on the use of commands/requests to others, and using the Narrative module as a testbed.

To that end, I wrote the following little story, inspired by a scene from the video game Primordia[2]:

“Horatio Nullbuilt was a robot. Crispin Horatiobuilt was a robot. Crispin could fly. A lamp was on a shelf. Horatio wanted the lamp. Horatio could not reach the lamp. Crispin hovered beside the shelf. Horatio told Crispin to move the lamp. Crispin pushed the lamp off the shelf. Horatio could reach the lamp. Horatio got the lamp. The end.”

During story time, Acuitas runs reasoning checks on obvious problems faced by the characters, and tries to guess what they might do about those problems. The goal here was to get him to consider whether Horatio might tell Crispin to help retrieve the lamp -- before it actually happens.

Some disclaimers first: I really wanted to use this story, because, well, it's fun. But Acuitas does not yet have a spatial awareness toolkit[1], which made full understanding a bit of a challenge. I had to prime him with a few conditionals first: "If an agent cannot reach an object, the agent cannot get the object" (fair enough), "If an agent cannot reach an object, the agent cannot move the object" (also fair), and "If an object is moved, an agent can reach the object" (obviously not always true, depending on the direction and distance the object is moved -- but Acuitas has no notion of direction and distance, so it'll have to do!). The fact that Crispin can fly is also not actually recognized as relevant. Acuitas just considers that Crispin might be able to move the lamp because nothing in the story said he *couldn't*.

But once all those spatial handicaps were allowed for, I was able to coax out the behavior I wanted. Upon learning that Horatio can't reach the lamp, hence cannot get it, hence cannot have it ... and there is an action that would solve the problem (moving the lamp) but Horatio can't do that either ... Acuitas wonders whether Horatio will ask someone else on scene to do the job for him. Here's the story with a set of example responses.

Me: Horatio Nullbuilt was a robot.
Acuitas: Ah.
Me: Crispin Horatiobuilt was a robot.
Acuitas: Ah.
Me: Crispin could fly.
Acuitas: ...
Me: A lamp was on a shelf.
Acuitas: Okay.
Me: Horatio wanted the lamp.
Acuitas: Maybe Horatio Nullbuilt will get a lamp.
Me: Horatio could not reach the lamp.
Acuitas: Maybe Horatio Nullbuilt will tell Crispin Horatiobuilt to move a lamp.
Me: Crispin hovered beside the shelf.
Acuitas: ...
Me: Horatio told Crispin to move the lamp.
Acuitas: ...
Me: Crispin pushed the lamp off the shelf.
Acuitas: I figure that a lamp was pushed.
Me: Horatio could reach the lamp.
Acuitas: ...
Me: Horatio got the lamp.
Acuitas: I like that.
Me: The end.
Acuitas: I'm glad it turned out that way.

A future dream is to migrate this into the Executive so Acuitas can tell conversation partners to do things, but that's all for this month.

[1] Coming ... someday ...
[2] This game is amazing, and if you're interested in AIs-as-personalities at all, I highly recommend it.