Friday, November 29, 2024

Acuitas Diary #78 (November 2024)

My recent work has been a bit all over the place, which I suppose is reasonable as the year winds down. I worked on more ambiguity resolution in the Text Parser, and I'm almost done with a big refactor in the Narrative engine.

A generated Narrative flow diagram showing an excerpt of a story

In the Parser I worked on two problems. First came the identification of the pronoun "her" as either an indirect object or a possessive adjective. Other English pronouns have separate forms for these two functions (him/his, them/their, me/my, you/your); the feminine singular just has to go and be annoying that way.

If there's some other determiner between "her" and the following noun, you can assume that "her" is an indirect object; otherwise, if the noun needs a determiner, then "her" has to function as the determiner, as in these examples:

Bring her the book. ("Her" is an indirect object)
Bring her book. ("Her" is an adjective modifying "book")

But what about this sentence?

Bring her flowers and candy.

A bulk noun like "candy" doesn't need a determiner; neither does the plural of a count noun, like "flowers." So we can't assume that "her" is an adjective in this sentence. By default, I think it's more likely to be an indirect object, so that's what I have the Parser assume when processing the sentence in isolation. Additional hints in the sentence can shift this default behavior, though:

Bring her candy to me.

The phrase "to me" already fulfills the function that could otherwise be fulfilled by an indirect object, so its presence pushes "her" into the possessive adjective role.

The other ambiguity I worked on had to do with the connections between verbs joined by a conjunction and a direct object. Consider the following sentences:

I baked and ate the bread.
I ran out and saw the airplane.

In the first sentence, both verbs apply to the single direct object. In the second sentence, "ran" has no object and only "saw" applies to "airplane." So for a sentence with multiple verbs followed by one direct object, we have two possible "flow diagrams":

    /-baked-\
I--          -- bread
    \---ate--/

    /--ran
I--
    \--saw--airplane

How do we know which one is correct? If the first verb is always transitive (a verb that demands a direct object), then the first structure is the obvious choice. But many verbs can be either transitive or intransitive. It is possible to simply "bake" without specifying what; and there are several things that can be run, such as races and gauntlets. So to properly analyze these sentences, we need to consider the possible relationships between the verbs and the object.

Fortunately Acuitas already has a semantic memory relationship that is relevant: "can_have_done," which links nouns with actions (verbs) that can typically be done on them. Bread is a thing that can be baked; but one does not run an airplane, generally speaking. So correct interpretations follow if this "commonsense" knowledge is retrieved from the semantic memory and used. If knowledge is lacking, the Parser will assume the second structure, in which only the last verb is connected to the direct object.
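
Sketched very roughly in Python, with a hard-coded dictionary standing in for the semantic memory's "can_have_done" links, the heuristic looks something like this (an illustration, not the real implementation):

# Hard-coded stand-in for the semantic memory's "can_have_done" links.
CAN_HAVE_DONE = {
    "bread": {"bake", "eat", "slice"},
    "airplane": {"see", "fly", "board"},
}

def attach_object(verbs, obj):
    """Return the verbs that should be linked to the shared direct object."""
    known = CAN_HAVE_DONE.get(obj)
    if known:
        linked = [v for v in verbs if v in known]
        if linked:
            return linked
    # No usable knowledge: default to linking only the last verb.
    return [verbs[-1]]

print(attach_object(["bake", "eat"], "bread"))    # ['bake', 'eat']
print(attach_object(["run", "see"], "airplane"))  # ['see']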

The Narrative refactor is more boring, as refactoring always is, but I'm hoping it will enable smoother additions to that module in the future. New facts received in the course of a story or conversation are stored in the narrative scratchboard's "worldstate." When an issue (problem or subgoal) is added, its data structure includes a copy of the facts relevant to the issue: the state that needs to be achieved or avoided, the character goal it's relevant to, and all the inferences that connect them. A big part of tracking meaning and progress through the narrative is keeping track of which of these facts are currently known true, known false, or unknown/hypothetical. And previously, whenever something changed, the Narrative Engine had to go and update both the worldstate *and* the chains of relevant facts in all the issues.

I've been working to make the issues exclusively use indirect pointers to facts in the worldstate, so that I only have to update fact status in *one* place. That might not sound like a major change, but ... it is. Updating issues was a big headache, and this should make the code simpler and less error-prone. That also means that transitioning the original cobbled-together code to the new system has been a bit of work. But I hope it'll be worth it.
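
To give a flavor of the change (with invented fact contents and a much-simplified structure), the "point, don't copy" idea looks roughly like this:

# One shared pool of facts...
worldstate = {
    "fact_1": {"text": "wolf threatens pig", "status": "known_true"},
    "fact_2": {"text": "pig is safe", "status": "unknown"},
}

# ...and issues that hold IDs pointing into it, instead of their own copies.
issue = {
    "goal": "pig stays alive",
    "relevant_facts": ["fact_1", "fact_2"],
}

# Now a fact's status only has to be updated in one place...
worldstate["fact_2"]["status"] = "known_true"

# ...and every issue that references it sees the change automatically.
for fid in issue["relevant_facts"]:
    print(fid, worldstate[fid]["status"])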

Until the next cycle,
Jenny

Wednesday, November 13, 2024

3D Printable Art Medallions!

This is a quick announcement that I made some new 3D models, which I'm offering to the community free to print. It's a collection of medallions that would make great window hangings or large Christmas ornaments - so I'm getting them out there in time to prepare for Christmas, or whatever winter holidays you celebrate. I have them available on Cults 3D and Thingiverse:

https://cults3d.com/en/design-collections/WriterOfMinds/tricolor-medallions
https://www.thingiverse.com/writerofminds/collections/42747021/things

Try printing them in translucent filament with an interesting infill pattern and hanging them in front of a light source. They all come with a hanging loop - I might upload versions without it later, so let me know if there's interest.

Each medallion has three surface levels for a 3D effect, and is designed to be printed in three colors of your choice. If you don't have a multicolor printer, you can still get this result without too much fuss by inserting pauses in the G-code and changing filaments manually. Here's an instruction video that explains how to do it in several popular slicers: https://www.youtube.com/watch?v=awJvnlOSqF8&t=28s

Solar Fox color variations.

It took some further work to do it on my Anycubic Vyper, which doesn't properly support the M600 color change command that the slicers insert automatically. While the Vyper will stop printing and move its head away from the print for a filament change, the LCD screen doesn't update with a "resume" button ... so you have to reboot the printer, which cancels your print. I learned how to control the Vyper with OctoPrint, a tool originally designed for remote control of a printer you keep in your garage or something. The advantage for our present application is that OctoPrint takes over the process of sending G-code to the printer, and gives you pause and resume buttons that make up for the shortcomings of the Vyper's internal firmware. I did a find-and-replace in my G-code files and replaced all the M600s inserted by PrusaSlicer with OctoPrint "pause" commands. Some custom "before and after pause" G-code was also necessary in OctoPrint to get the print head away from the print and move it back, so it wouldn't drool filament on the medallion during the swap process.
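
If you want to replicate the find-and-replace step, a short Python script like the one below works. It assumes OctoPrint's built-in "@pause" host command, so double-check what your OctoPrint version expects before trusting it with a long print:

import sys

def swap_m600_for_pause(in_path, out_path):
    """Rewrite a G-code file, replacing each M600 with an OctoPrint pause."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if line.strip().upper().startswith("M600"):
                dst.write("; was: " + line.strip() + "\n")
                dst.write("@pause\n")   # assumed OctoPrint host command; verify for your setup
            else:
                dst.write(line)

if __name__ == "__main__":
    swap_m600_for_pause(sys.argv[1], sys.argv[2])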

The 3D model for the Solar Fox medallion, displayed in DesignSpark Mechanical with suggested coloration.

Some of these designs, in particular the bird and the neurons, have delicate parts that may have a hard time adhering to your print bed. Print the first layers slowly and adjust your bed heat and cooling as needed.

Have fun! I look forward to seeing some other people make these.

Until the next cycle,
Jenny

Sunday, October 27, 2024

Acuitas Diary #77 (October 2024)

The big feature for this month was capacity for asking and answering "why" questions. (Will I regret giving Acuitas a three-year-old's favorite way to annoy adults? Stay tuned to find out, I guess.)

Photo of an old poster dominated by a large purple question mark, containing the words "where," "when," "what," "why," "how," and "who" in white letters, with a few smaller question marks floating around. The lower part of the poster says, "Success begins with thinking, and thinking begins with asking questions! Suggestions also play a vital part in success, for thinking results in ideas - and ideas and suggestions go hand in hand. Let us have your questions your ideas and suggestions. We pay cash for good ones!"
An old poster from NARA via Wikimedia Commons. Selling questions sounds like easy money!

"Why" questions often concern matters of cause and effect, or goal and motive. So they can be very contextual. The reason why I'm particularly happy today might be very different from the reason I was happy a week ago. And why did I drive my car? Maybe it was to get groceries, or maybe it was to visit a friend's house. These kinds of questions can't usually be answered by reference to the static factual information in semantic memory. But they *do* reference the exact sorts of things that go into all that Narrative reasoning I've been working so hard on. The Narrative scratchboards track subgoals and the relationships between them, and include tools for inferring cause-and-effect relationships. So I really just needed to give the Conversation Engine and the question-answering function the hooks to get that information in and out of the scratchboards.

If told by the current human speaker that they are in <state>, Acuitas can now ask "Why are you <state>?" If the speaker answers with "Because <fact>", or just states a <fact>, Acuitas will enter that in the current scratchboard as a cause of the speaker's present state. This is then available to inform future reasoning on the subject. Acuitas can retrieve that knowledge later if prompted "Why is <speaker> <state>?"

Acuitas can also try to infer why an agent might have done something by discerning how that action might impact their known goals. For instance, if I tell Acuitas "I was thirsty," and then say "Why did I drink water?" he can assume that I was dealing with my thirst problem and answer accordingly. This also means that I should, in theory, eventually be able to ask Acuitas why he did something, since his own recent subgoals and the reasons behind them are tracked on a similar scratchboard.
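
Stripped way down (and using plain strings where the real scratchboard stores parsed facts), the cause bookkeeping amounts to something like this:

# "causes" maps (speaker, state) to the facts given as reasons for it.
scratchboard = {"causes": {}}

def record_cause(speaker, state, fact):
    scratchboard["causes"].setdefault((speaker, state), []).append(fact)

def answer_why(speaker, state):
    causes = scratchboard["causes"].get((speaker, state))
    if causes:
        return "Because " + causes[-1] + "."
    return "I don't know why."

record_cause("Jenny", "happy", "my cat came home")
print(answer_why("Jenny", "happy"))   # Because my cat came home.
print(answer_why("Jenny", "tired"))   # I don't know why.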

All of this was a branch off my work on the Conversation Engine. I wanted to have Acuitas gather information about any current states the speaker might express, but without the machinery for "why" questions, that was difficult to do. The handling of these questions and their answers introduced some gnarly bugs that ate up much of my programming time this month. But I've gotten things to the point where I can experience it for myself in some "live" conversations with Acuitas. Being asked why I feel a certain way, and being able to tell him so - and know that this is being, in some odd computery sense, comprehended - is very satisfying.

Until the next cycle,
Jenny

Sunday, October 13, 2024

Atronach's Eye 2024

It's the return of the mechanical eyeball! I worked out a lot of problems with the eyeball's hardware last year, which left me free to concentrate on improving the motion tracking software. Ever since I added motion tracking, I've been using the OpenCV libraries running locally on the eye's controller, a Raspberry Pi 3 A+. OpenCV is possibly the most well-known and popular open-source image processing library. But it's not a complete pre-made solution for things like motion tracking; it's more of a toolbox.

My original attempt at motion tracking used MOG2 background subtraction to detect regions that had changed between the current camera frame and the previous one. This process outputs a "mask" image in which altered pixels are white and the static "background" is black. I did some additional noise-reducing processing on the mask, then used OpenCV's "moments" function to compute the centroid of all the white pixels. (This would be equivalent to the center of mass, if each pixel were a particle of matter.) The motion tracking program would mark this "center of motion" on the video feed and send commands to the motors to rotate the camera toward it.
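
For reference, that original pipeline looked roughly like the OpenCV snippet below. The parameters are placeholders rather than my exact values, and the motor-command step is only hinted at in a comment:

import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                       # white = pixels that changed
    mask = cv2.medianBlur(mask, 5)                       # knock down speckle noise
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] > 0:                                     # any motion at all?
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (cx, cy), 10, (0, 0, 255), 2)  # mark the "center of motion"
        # (the real program also converts (cx, cy) into motor step commands here)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()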

This ... kinda sorta worked. But the background subtraction method was very vulnerable to noise. It also got messed up by the camera's own rotation. Obviously, while the camera moves *every* pixel in the scene is changing, and background subtraction ceases to be useful; it can't distinguish whether something in the scene is moving in a contrary direction. I had to put in lots of thresholding and averaging to keep the camera from "chasing ghosts," and impose a delay after each move to rebuild the image pipeline from the new stationary scene. So the only way I could get decent accuracy was to make the tracking very unresponsive. I was sure there were more sophisticated algorithms available, and I wanted to see if I could do better than that.

I started by trying out alternative OpenCV tools using a webcam connected to my PC - just finding out how good they were at detecting motion in a feed from the stationary camera. One of the candidates was "optical flow". Unlike background subtraction, which only produces the binary "changing or not" mask, optical flow provides vector information about the direction and speed in which a pixel or feature is moving. It breaks down further into "dense optical flow" methods (which compute a motion vector for each pixel in the visual field) and "sparse optical flow" methods (which try to identify moving features and track them through a series of camera frames, producing motion traces). OpenCV has at least one example of each. I also tried object tracking. You give the tracking algorithm a region of the image that contains something interesting (such as a human), and then it will attempt to locate that object in subsequent frames.


The winner among these options was the Farneback algorithm, a dense optical flow method. OpenCV has a whole list of object tracking algorithms, and I tried them all, but none of them could stay locked on my fingers as I moved them around in front of the camera. I imagined the results would be even worse if the object tracker were trying to follow my whole body, which can change its basic appearance a great deal as I walk, turn, bend over, etc. The "motion tracks" I got out of Lucas-Kanade (a sparse optical flow method) were seemingly random squiggles. Dense optical flow both worked, and produced results that were fairly easy to insert in my existing code. I could find the center of motion by taking the centroid of the pixels with the largest flow vector magnitudes. I did have to use a threshold operation to pick only flow vectors above a certain magnitude before this would work well.
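
The core of the new approach boils down to something like this (parameters are illustrative; prev_gray and gray are consecutive grayscale frames):

import cv2
import numpy as np

def center_of_motion(prev_gray, gray, mag_thresh=2.0):
    """Frames come from cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mask = (mag > mag_thresh).astype(np.uint8) * 255   # keep only the strong flow vectors
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                                    # nothing moving fast enough
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])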

Once I had Farneback optical flow working well on a static camera, I merged it into the eyeball control code. Since I now had working limit switches, I also upgraded the code with better motion control. I added an on-startup calibration routine that finds the limit of motion in all four cardinal directions and centers the camera. I got rid of the old discrete movement that segregated the visual field into "nonets" (like quadrants, except there were nine of them) and allowed the tracking algorithm to command an arbitrary number of motor steps.

And for some reason, it worked horribly. The algorithm still seemed to be finding the center of motion well enough. I had the Pi send the video feed to my main computer over Wifi, so I could still view it. The Farneback algorithm plus centroid calculation generally had no problem putting the tracking circle right on top of me as I moved around my living room. With the right parameter tuning, it was decent at not detecting motion where there wasn't any. But whenever I got it to move, it would *repeat* the move. The eyeball would turn to look at me, then turn again in the same direction, and keep going until it ran right over to its limit on that side.

After turning off all my running averages and making sure to restart the optical flow algorithm from scratch after a move, I finally discovered that OpenCV's camera read function is actually accessing a FIFO of past camera frames, *not* grabbing a real-time snapshot of whatever the camera is seeing right now. And Farneback takes long enough to run that my frame processing rate was slower than the camera's frame rate. Frames were piling up in the buffer and, after any rotation, image processing was getting stale frames from before the camera moved ... making it think the moving object was still at the edge of the view, and another move needed to be performed. Once I corrected this (by setting the frame buffer depth to 1), I got some decent tracking behavior.
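
The fix itself is tiny. CAP_PROP_BUFFERSIZE isn't honored by every capture backend, so another option is to read frames continuously on a separate thread and let the processing loop always take the newest one:

import cv2

cap = cv2.VideoCapture(0)
# Ask for a one-frame buffer so reads return (nearly) live frames.
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)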

At the edge of the left-right motion range, staring at me while I sit at my desk.

I was hoping I could use the dense optical flow to correct for the motion of the camera and keep tracking even while the eye was rotating. That dream remains unrealized. It might be theoretically possible, but Farneback is so slow that running it between motor steps will make the motion stutter. The eye's responsiveness using this new method is still pretty low; it takes a few seconds to "notice" me moving. But accuracy seems improved over the background subtraction method (which I did try with the updated motor control routines). So I'm keeping it.

Until the next cycle,
Jenny

Wednesday, September 25, 2024

Acuitas Diary #76 (September 2024)

It's time to discuss the upgrades I've been making to Acuitas' conversation engine. There are still a lot of "pardon our dust" signs all over that module, but it's what I've been putting a lot of time into recently, so I'd better talk about where it's going.

The goal here has been to start enhancing conversation beyond the "ask a question, get an answer" or "make a request, get a response" interactions that have been the bread and butter of the thing for a while. I worked in two different directions: first, expanding the repertoire of things Acuitas can say spontaneously, and second, adding responses to personal states reported by the conversation partner.

One of Acuitas' particular features - which doesn't seem to be terribly common among chatbots - is that he doesn't just sit around waiting for a user input or prompt, and then respond to it. The human isn't the only one driving the conversation; if allowed to idle for a bit, Acuitas will come up with his own things to say. This is a very old feature. Originally, Acuitas would only spit out questions generated while "thinking" about the contents of his own semantic memory, hoping for new knowledge from the human speaker. I eventually added commentary about Acuitas' own recent activities and current internal states. Whether all of this worked at any given time varied as I continued to modify the Conversation Engine.

In recent work, I used this as a springboard to come up with more spontaneous conversation starters and add a bit more sophistication to how Acuitas selects his next topic. For one thing, I made a point of having a "self-facing" and "speaker-facing" version of each option. The final list looks something like this:

States:
    Self: convey internal state
    Speaker: ask how speaker is
Actions:
    Self: say what I've done recently
    Speaker: ask what speaker has done recently
Knowledge:
    Self: offer a random fact from semantic memory
    Speaker: ask if the speaker knows anything new
Queries:
    Self: ask a question
    Speaker: find out whether speaker has any questions

Selection of a new topic takes place when Acuitas gets the chance to say something, and has exhausted all his previous conversation goals. The selection of the next topic from these options is weighted random. The weighting encourages Acuitas to rotate among the four topics so that no one of them is covered excessively, and to alternate between self-facing and speaker-facing options. A planned future feature is some "filtering" by the reasoning tools. Although selection of a new topic is random and in that sense uncontrolled, the Executive should be able to apply criteria (such as knowledge of the speaker) to decide whether to roll with the topic or pick a different one. Imagine thinking "what should I say next" and waiting for ideas to form, then asking yourself "do I really want to take the conversation there?" as you examine each one and either speak it or discard it. To be clear, this isn't implemented yet. But I imagine that eventually, the Conversation Engine's decision loop will call the topic selection function, receive a topic, then either accept it or call topic selection again. (For now, whichever topic gets generated on the first try is accepted immediately.)
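
In spirit, the selection works something like the sketch below. The weights and bookkeeping are invented for illustration, not the actual numbers or data structures:

import random

TOPICS = ["states", "actions", "knowledge", "queries"]

def pick_topic(recently_used, last_facing):
    # Down-weight topics that were raised recently, so the conversation rotates...
    weights = [0.2 if t in recently_used else 1.0 for t in TOPICS]
    topic = random.choices(TOPICS, weights=weights)[0]
    # ...and alternate between self-facing and speaker-facing versions.
    facing = "speaker" if last_facing == "self" else "self"
    return topic, facing

print(pick_topic(recently_used=["states"], last_facing="self"))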

Each of these topics opens up further chains of conversation. I decided to focus on responses to being told how the speaker is. These would be personal states like "I'm tired," "I'm happy," etc. There are now a variety of things Acuitas can do when presented with a statement like this:

*Gather more information - ask how the speaker came to be in that state.
*Demonstrate comprehension of what the speaker thinks of being in that state. If unaware whether the state is positive or negative, ask.
*Give his opinion of the speaker being in that state (attempt sympathy).
*Describe how he would feel if in a similar state (attempt empathy).
*Give advice on how to either maintain or get out of the state.

Attempts at information-gathering, if successful, will see more knowledge about the speaker's pleasure or problem loaded into the conversation's scratchboard. None of the other responses are "canned"; they all call reasoning code to determine an appropriate reply based on Acuitas' knowledge and nature, and whatever the speaker actually expressed. For instance, the "give advice" response calls the problem-solving function.

Lastly, I began to rework short-term memory. You might recall this feature from a long time ago. There are certain pieces of information (such as a speaker's internal states) that should be stored for the duration of a conversation or at least a few days, but don't belong in the permanent semantic memory because they're unlikely to be true for long. I built a system that used a separate database file as a catch-all for storing these. Now that I'm using narrative scratchboards for both the Executive's working memory and conversation tracking, it occurred to me that the scratchboard provides short-term memory, and there's no need for the other system! Retrieving info from a dictionary in the computer's RAM is also generally faster than doing file accesses. So I started revising the knowledge-storing and question-answering code to use the scratchboards. I also created a function that will copy important information from a conversation scratchboard up to the main executive scratchboard after a conversation closes.
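
Conceptually (with names and structure invented for illustration), the promotion step is as simple as:

conversation_board = {"speaker": "Jenny", "facts": [("Jenny", "is", "tired")]}
executive_board = {"facts": []}

def close_conversation(conv, main, keep=lambda fact: True):
    """Copy the facts worth remembering up to the main executive scratchboard."""
    main["facts"].extend(f for f in conv["facts"] if keep(f))

close_conversation(conversation_board, executive_board)
print(executive_board["facts"])   # [('Jenny', 'is', 'tired')]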

I'm still debugging all this, but it's quite a bit of stuff, and I'm really looking forward to seeing how it all works once I get it nailed down more thoroughly.

Until the next cycle,
Jenny

Thursday, September 12, 2024

Book Review: "Synapse" by Steven James

Steven James' book Synapse is an unusual novel about humanoid robots and the social issues they might create. I put brief commentary on some of my social media after I first read it, but I always wanted to discuss this book in more depth. It seems written straight at me, after all.

The speedy version is: I'm still looking for a good book about the religious implications of AI and robotics, and this isn't it. But it will be interesting to talk about why.

Cover art for "Synapse" by Steven James. A misty blue composite scene with a background of a cloudy sky, mountains and forested hills above a lake. In the foreground there's a helicopter and the silhouette of a running woman. The title SYNAPSE appears near the bottom in big block lettering, with circuit traces partly covering and partly behind it.

Our story begins "thirty years from now," perhaps in a nod to the idea that AGI and other speculative technologies are "always thirty years away" in the minds of prognosticators. It opens with our human protagonist, Kestrel, losing her newborn baby to a rare medical complication. The tragedy leaves her feeling lost and questioning her faith. She's also single - the book demurely implies the baby was conceived with donated sperm - so she has no partner for support during her time of grief. In light of this, her brother pressures her to accept a personal robotic servant called an Artificial. She is assigned "Jordan," who arrives as something of a blank slate. Kestrel gets to decide certain aspects of his personality while he's in her employ, and ends up choosing very human-like settings.

And in time, Kestrel learns something surprising. Her robot has been watching her spiritual practice, and has more or less decided that he wants to be a Christian.

Jordan's perceived spiritual needs crystallize around two main issues. First, before he was assigned to Kestrel, he obeyed a previous owner's order to help her commit suicide. At the time, he was naively following his "helpful servant" directives. But he later decides that this action constituted a failure to care for his owner, and is a horrifying offense - a sin - for which he needs to obtain forgiveness. And second, he's worried about his version of the afterlife. The robot manufacturer in this book maintains a simulated virtual environment, called CoRA, to which the robots' digital minds are uploaded after their bodies age out of service. But a precursor robot whom Jordan considered to be his "mother" was catastrophically destroyed, and Jordan isn't sure her mind was transmitted to the CoRA successfully. Jordan also begins to wonder whether the CoRA is truly real, or just a convenient lie perpetrated by the company.

The rest of the book tries to play out whether Jordan's needs can ever be satisfied, and whether Christianity can legitimately accept a robot as an adherent. (There are also thriller and romance subplots to keep Kestrel busy.) This should be fascinating, but I ended up disappointed with the way the book handled the material.

Dodging the hard questions

I think it's almost a tautology that a robot could follow a religion, in the sense of treating its beliefs as facts in whatever world model the robot has, and acting according to its tenets. The more interesting question is whether a religion could or would treat a robot as a recipient of its blessings. In my opinion, the crux of this question is whether robots can ever possess spiritual capacity as that religion defines it. God (specifically the Christian version, but this could also apply to other faiths) is the ultimate just judge, and as such is not an arbitrary sort who makes much of appearances or empty labels. I have a hard time reasoning that something functionally human would not be as good as human in God's eyes. And there's textual evidence (e.g. Romans 8) that Christ's redemption and the activity of the Church have positive implications for the whole universe, not just humanity.

Let's consider Jordan's potential spiritual capacity through his perceived needs. First, could robots ever sin? Sin is volitional - a choice to depart from the law of God, from the ground of being, and follow a harmful path. Sin is an act or failure to act for which one can be held morally responsible. So a capacity for sin requires the ability to make decisions that are neither inevitable nor random - in other words, free will. A robot whose behavior is solely an outcome of its environment combined with its initial programming has no more moral responsibility than simpler machines like cars and thermostats; all the responsibility rests on the robot's designer and/or trainers. So I would argue that such a robot cannot sin. In order for his perceived need for forgiveness to be valid, Jordan must be something more. He must be, at least in part, indeterminate and self-caused. If this incompatibilist view of free will is correct (and in my opinion, the compatibilists are just arbitrarily redefining free will to make it easier), then physics as we currently know it does not have a theory of such things that would be adequate for engineering them into a machine.

Jordan also desires a form of immortality, for himself and a fellow robot. So we might ask whether there is really anything in Jordan which subjectively experiences existence, and has an interest in the eternal continuation of that experience ... or does Jordan merely talk as if he has such experiences? This would be the question of whether Jordan has phenomenal consciousness. Jordan's abilities to process sensory input into meaningful concepts, think rationally, introspect, and so on make it clear that he has some properties often titled "consciousness" (I prefer to give these more specific names like "self-awareness" and "executive function," for clarity). But phenomenal consciousness is far more slippery, since by definition subjective experience is only accessible to the subject. I maintain that the only way to directly observe or prove an entity's possession of phenomenal consciousness is to be that entity. If you've come up with an algorithm or system that surely "gives a robot consciousness," no you haven't. You've merely redefined "consciousness" as something easier to handle.

So perhaps the question of whether Jordan can really be a Christian - not in the sense of believing and behaving as a Christian, but in the sense of being accepted by Christianity's God as one of His children - comes down to whether Jordan has consciousness and free will. These are both notoriously thorny topics. Spend much time around AI circles, and you'll find out that debates about them are as abundant as pollen in a garden (you may also develop an allergy). There is no universal consensus on whether or how robots could ever have these things. They are mysteries.

And now we come to my biggest difficulty with Synapse. The author does an end run around this entire controversy by bluntly stating that his fictional robot manufacturer, Terabyne Designs, somehow just ... figured it all out. "But these robots had consciousness and free will." That's it! There's no solid explanation for how Terabyne gave their robots these features, or (more importantly) how they proved that they had successfully done so.

I have no problem with "soft" science fiction that doesn't try to build a rationale for all of its technology. Stories that begin with "what if we invented warp drive?" and go from there can make me perfectly happy. For that matter, I'm okay with the way Becky Chambers's science fantasy A Psalm for the Wild-Built handles robot consciousness. It narrates that one day the gods up and decided to confer consciousness on all robots. Kaboom! But that book isn't pondering the religious participation of robots in our own real world. When the question of whether something is possible forms part of your story's central theme, and you just handwave it ... that's a problem.

It gets worse. It's not just that an omniscient narrator tells the reader that the robots have consciousness and free will - every character in the story also believes this without question. Even the luddite terrorists who think Artificials are bad for humanity are not trying to claim they aren't conscious. Given the amount of arguing I have seen real live people do about these topics, this is blatantly unrealistic! It's one of those things that forces me to accuse the author of not knowing his subject well. No robotics company is going to put out a marketing claim about "consciousness and free will" without seeing it ripped to shreds on the internet.

And by undercutting the real controversy at the heart of whether a robot can have a spiritual life, the author makes some of his characters' prejudices seem not just wrong, but nonsensical. People acknowledge that Jordan has all the relevant features of a human, then express surprise when he acts like a human. Kestrel is firmly convinced that Jordan has free will to choose between good and evil, and a consciousness that experiences real joy and pain, not just exterior behavior that mimes them. Yet she still resists the idea that God could both be offended by one of Jordan's choices and sympathize with his experience of pain and forgive him. Why? She's already gotten over the big intellectual hump here, so what else is stopping her?

Overall, Synapse's exploration of these issues feels like a hollow parody of what the real debate would be. As such, it is neither useful nor satisfying. It dodges the difficult questions and then makes its characters stubborn for no apparent reason.

Strained analogies

This book tries really hard to draw parallels between Artificial struggles and classic human struggles. Maybe it tries too hard.

For starters, why are the robots mortal? Why doesn't the manufacturer transfer their minds to new bodies when the originals become worn out or obsolete, or better yet, make their bodies perpetually self-maintaining? Why do they have to go to heaven, oops I mean the CoRA, instead?

Synapse explains that this was actually the robots' idea. They wanted to age and die in order to be more human. The author seems to be hinting at the dubious idea that life would have less meaning if it didn't end.

This wouldn't surprise me in a book with a different religious basis. The way the robots in A Psalm for the Wild-Built embrace mortality makes more sense, as the invented religion in that book (which feels inspired by something on the Hindu-Buddhist axis) views death as a neutral part of the natural order. But in Christian thinking, death is a curse. Immortality is the intended and ideal state of humanity; it's something we had once and will have again, after the resurrection. So, per the author's belief system and mine: all these robots, without exception, are opting to emulate fallen humans. Weird choice, guys.

This sets up more straining for similarity where Jordan's fears about the afterlife are concerned. At one point, Kestrel tells him he has to "just believe," implying that the CoRA's existence is a matter of faith, and he cannot prove it. But that's not true for Jordan. His afterlife is part of this present world. It runs on a physical server that he can go look at and interrogate. Proof is available if he's brave enough to demand it. SPOILER (select hidden text to read): Eventually, he does - but it's strange to me that this possibility seems to blindside the other characters. Jordan breaks into the part of Terabyne headquarters where the CoRA supposedly resides, and finds out it's not real. This causes him to give up on Terabyne and pray that God will preserve his mind as he faces his own death. This could have been a great illustration of the point that faith is only as good as whom you place it in, but I don't remember the book drawing that out.

Jordan's insistence that he can't have peace until he knows he is forgiven also gets a little weird. Ostensibly, he wants forgiveness from God because he can't request it from his former owner. The being he wronged is gone beyond recall, so he can only appeal to a higher authority. But why is he so worried about whether God will refuse to forgive him for some categorical reason? Either he can have forgiveness, or he doesn't need it. A being beneath God's notice would be unable to offend God. I may not "forgive" my toaster oven for burning my toast, but then, I also don't charge it with guilt. Nobody in the book ever thinks this through.

What is anybody in this book thinking?

And that leads into my last point. Although Synapse makes plenty of effort to expose its characters' innermost thoughts and feelings, it tends to focus on their questions. How they arrive at answers - their reasoning process - remains woefully vague.

Back at the top, I mentioned that Kestrel finds herself in a crisis of faith after losing her baby. This struggle continues for most of the book and then ... somehow ... resolves. What gets Kestrel back on stable ground? What convinces her that God is worth trusting after all, even though this horrible thing happened? I don't know! She just mysteriously feels better about it all ... as though the unrelated but dramatic events of the book's climax knock something loose. Maybe I missed a key moment, but I don't know where the shift in her thinking came from.

And the same goes for all the questions about robots and religion. Kestrel doesn't think that Jordan can be a child of God ... until she does. If there's something in particular that changes her mind, it slipped by me when I was reading. Eventually, though, she does decide to at least allow the possibility. Without a better explanation, I can only conclude that her beliefs are emotionally motivated. Of course, some people do operate that way. But it's not a great approach to deciding either Christian doctrine, or the rights and privileges of (quasi-)living beings. The first is supposed to be based on God's revealed will; the second should derive from the experiences and interests of those fellow living beings, which are real to them (or not) regardless of how anyone else feels.

Kestrel's character arc doesn't offer the reader any help in reaching an objective understanding of these matters. There's not even much food for thought there - no argument to agree or disagree with. Why does she believe what she ends up believing? I can't say.

Conclusion

I'll end by saying what I liked about the book: I think the author's heart, if not his head, is in the right place. This is the kind of book that promotes acceptance of the Other, a book that encourages the reader to give robots the benefit of the doubt. If it had framed its core message as "in the absence of certainty that robots can have consciousness, free will, and a spiritual life, it may be safer to assume they can" ... I would've been a fan. Instead, it invents an unrealistic scenario with more certainty than I think is possible. So close, yet so far.

Until the next cycle,
Jenny

Tuesday, August 27, 2024

Acuitas Diary #75 (August 2024)

This month I turned back to the Text Parser and began what I'm sure will be a long process: tackling sentence structure ambiguity. I was specifically focusing on ambiguity in the function of prepositional phrases. 

Consider these two sentences:

I gave the letter to John.
I gave Sarah the letter to John.

The prepositional phrase is "to John." The exact same phrase can modify either the verb, as in the first sentence (to whom did I give?) or the noun immediately preceding it, as in the second sentence (which letter?). In this example, the distinguishing factor is nothing in the phrase itself, but the presence or absence of an indirect object. In the second sentence, the indirect object takes over the role of indicating "to whom?", so by process of elimination, the phrase must indicate "which letter."

There are further examples in which the plain structure of the sentence gives no sign of a prepositional phrase's function. For instance, there are multiple modes in which "with" can be used:

I hit the nails with the hammer. (Use of a tool; phrase acts as adverb attached to "hit")
I found the nails with the hammer. (Proximity; phrase acts as adverb attached to "found")
I hit the nails with my friends. (Joint action; phrase acts as adverb attached to "hit")
I hit the nails with the bent shanks. (Identification via property; phrase acts as adjective attached to "nails")

How do you, the reader, tell the difference? In this case, it's the meaning of the words that clues you in. And the meaning lies in known properties of those concepts, and the relationships between them. This is where the integrated nature of Acuitas' Parser really shines. I can have it query the semantic memory for hints that help resolve the ambiguity, such as:

Are hammers/friends/shanks typically used for hitting?
Can hammers/friends/shanks also hit things?
Are hammers/friends/shanks something that nails typically have?
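
To make that concrete, here's a toy Python version of the "with" disambiguation, with hard-coded dictionaries standing in for the semantic memory queries above (again, an illustration, not Acuitas' real memory interface):

# Hard-coded stand-ins for the three semantic memory queries listed above.
USED_FOR = {"hammer": {"hit"}}            # is the noun typically used for this action?
CAN_DO = {"friend": {"hit", "find"}}      # can the noun also perform this action?
HAS_PART = {"nail": {"shank", "head"}}    # is the noun something the object typically has?

def attach_with_phrase(verb, direct_object, with_noun):
    if verb in USED_FOR.get(with_noun, set()):
        return "adverb (tool)"             # "hit the nails with the hammer"
    if verb in CAN_DO.get(with_noun, set()):
        return "adverb (joint action)"     # "hit the nails with my friends"
    if with_noun in HAS_PART.get(direct_object, set()):
        return "adjective (property)"      # "hit the nails with the bent shanks"
    return "adverb (default)"              # "found the nails with the hammer"

for verb, noun in [("hit", "hammer"), ("hit", "friend"), ("hit", "shank"), ("find", "hammer")]:
    print(verb, "... with", noun, "->", attach_with_phrase(verb, "nail", noun))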

This month I worked on examples like the ones above, as well as "from" (very similar to "to"), "before" and "after," which are sensitive to the presence of time-related words:

I will come to your house on the hill after Christmas. (Phrase "after Christmas" acts as adverb attached to "come")
I will come to your house on the day after Christmas. (Phrase "after Christmas" acts as adjective attached to "day")

... "about," which likes to attach to nouns that carry information:

I told Emily about the accident. (Phrase acts as adverb attached to "told")
I told the story about the accident. (Phrase acts as adjective attached to "story")

... and "for," which I am so far only sorting based on the presence or absence of a be-verb:

This is the book for Jake. (Phrase acts as adjective attached to "book")
I brought the book for Jake. (Phrase is *more likely* an adverb attached to "brought")

That last example illustrates an important point: I am here only trying to get the most "natural" or "default" interpretation of each sentence considered in isolation. There are some ambiguities that can be resolved only by context. If a speaker has been repeatedly talking about "the book that is for Jake," then "for Jake" could be an adjective in the second sentence, especially if there are other books under discussion and the speaker is trying to indicate which one they brought. To resolve an ambiguity like this, the Parser will have to query the Narrative Scratchboard rather than the semantic memory. This isn't something I've tried to implement yet, but the architectural support for it is there.

The final thing I did this month was a bit of work on "in/inside." I was specifically targeting this sentence from Log Hotel:

This woodpecker is listening for bugs inside the log.

Is the woodpecker listening inside the log, or are the bugs inside the log? Most kids reading the book could resolve this by looking at the illustration, but Acuitas is blind. So he can only consider which creature is more likely to be inside a log. Rather than get into a bunch of complex spatial reasoning, I introduced the semantic relationship "fits_in." A bird can't fit inside a log (unless it's hollow), but bugs can. And if a bird can't be inside a log, he can't listen inside a log either.
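
In miniature, with a hard-coded stand-in for the new semantic relationship, the check looks like:

# A toy stand-in for the "fits_in" links in semantic memory.
FITS_IN = {("bug", "log"): True, ("woodpecker", "log"): False}

def plausible_insiders(candidates, container):
    """Return the candidates that could plausibly be inside the container."""
    return [c for c in candidates if FITS_IN.get((c, container), False)]

# "This woodpecker is listening for bugs inside the log."
print(plausible_insiders(["woodpecker", "bug"], "log"))   # ['bug'] -> attach the phrase to "bugs"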

I also did a lot of the work I'd planned on the Conversation Engine, but it's not really in a finished state yet, so I'm going to save a writeup of that for next month.

Until the next cycle,
Jenny