Thursday, September 12, 2024

Book Review: "Synapse" by Steven James

Steven James' book Synapse is an unusual novel about humanoid robots and the social issues they might create. I posted some brief commentary on social media after I first read it, but I've always wanted to discuss this book in more depth. It seems written straight at me, after all.

The speedy version is: I'm still looking for a good book about the religious implications of AI and robotics, and this isn't it. But it will be interesting to talk about why.

[Cover art for "Synapse" by Steven James: a misty blue composite scene of a cloudy sky, mountains, and forested hills above a lake, with a helicopter and the silhouette of a running woman in the foreground. The title SYNAPSE appears near the bottom in big block lettering, with circuit traces partly covering and partly behind it.]

Our story begins "thirty years from now," perhaps in a nod to the idea that AGI and other speculative technologies are "always thirty years away" in the minds of prognosticators. It opens with our human protagonist, Kestrel, losing her newborn baby to a rare medical complication. The tragedy leaves her feeling lost and questioning her faith. She's also single - the book demurely implies the baby was conceived with donated sperm - so she has no partner for support during her time of grief. In light of this, her brother pressures her to accept a personal robotic servant called an Artificial. She is assigned "Jordan," who arrives as something of a blank slate. Kestrel gets to decide certain aspects of his personality while he's in her employ, and ends up choosing very human-like settings.

And in time, Kestrel learns something surprising. Her robot has been watching her spiritual practice, and has more or less decided that he wants to be a Christian.

Jordan's perceived spiritual needs crystallize around two main issues. First, before he was assigned to Kestrel, he obeyed a previous owner's order to help her commit suicide. At the time, he was naively following his "helpful servant" directives. But he later decides that this action constituted a failure to care for his owner, and is a horrifying offense - a sin - for which he needs to obtain forgiveness. And second, he's worried about his version of the afterlife. The robot manufacturer in this book maintains a simulated virtual environment, called CoRA, to which the robots' digital minds are uploaded after their bodies age out of service. But a precursor robot whom Jordan considered to be his "mother" was catastrophically destroyed, and Jordan isn't sure her mind was transmitted to the CoRA successfully. Jordan also begins to wonder whether the CoRA is truly real, or just a convenient lie perpetrated by the company.

The rest of the book tries to play out whether Jordan's needs can ever be satisfied, and whether Christianity can legitimately accept a robot as an adherent. (There are also thriller and romance subplots to keep Kestrel busy.) This should be fascinating, but I ended up disappointed with the way the book handled the material.

Dodging the hard questions

I think it's almost a tautology that a robot could follow a religion, in the sense of treating its beliefs as facts in whatever world model the robot has, and acting according to its tenets. The more interesting question is whether a religion could or would treat a robot as a recipient of its blessings. In my opinion, the crux of this question is whether robots can ever possess spiritual capacity as that religion defines it. God (specifically the Christian version, but this could also apply to other faiths) is the ultimate just judge, and as such is not an arbitrary sort who makes much of appearances or empty labels. I have a hard time arguing that something functionally human would not be as good as human in God's eyes. And there's textual evidence (e.g. Romans 8) that Christ's redemption and the activity of the Church have positive implications for the whole universe, not just humanity.

Let's consider Jordan's potential spiritual capacity through his perceived needs. First, could robots ever sin? Sin is volitional - a choice to depart from the law of God, from the ground of being, and follow a harmful path. Sin is an act or failure to act for which one can be held morally responsible. So a capacity for sin requires the ability to make decisions that are neither inevitable nor random - in other words, free will. A robot whose behavior is solely an outcome of its environment combined with its initial programming has no more moral responsibility than simpler machines like cars and thermostats; all the responsibility rests on the robot's designer and/or trainers. So I would argue that such a robot cannot sin. In order for his perceived need for forgiveness to be valid, Jordan must be something more. He must be, at least in part, indeterminate and self-caused. If this incompatibilist view of free will is correct (and in my opinion, the compatibilists are just arbitrarily redefining free will to make it easier), then physics as we currently know it does not have a theory of such things that would be adequate for engineering them into a machine.

Jordan also desires a form of immortality, for himself and a fellow robot. So we might ask whether there is really anything in Jordan which subjectively experiences existence, and has an interest in the eternal continuation of that experience ... or does Jordan merely talk as if he has such experiences? This would be the question of whether Jordan has phenomenal consciousness. Jordan's abilities to process sensory input into meaningful concepts, think rationally, introspect, and so on make it clear that he has some properties often labeled "consciousness" (I prefer to give these more specific names like "self-awareness" and "executive function," for clarity). But phenomenal consciousness is far more slippery, since by definition subjective experience is only accessible to the subject. I maintain that the only way to directly observe or prove an entity's possession of phenomenal consciousness is to be that entity. If you've come up with an algorithm or system that surely "gives a robot consciousness," no you haven't. You've merely redefined "consciousness" as something easier to handle.

So perhaps the question of whether Jordan can really be a Christian - not in the sense of believing and behaving as a Christian, but in the sense of being accepted by Christianity's God as one of His children - comes down to whether Jordan has consciousness and free will. These are both notoriously thorny topics. Spend much time around AI circles, and you'll find out that debates about them are as abundant as pollen in a garden (you may also develop an allergy). There is no universal consensus on whether or how robots could ever have these things. They are mysteries.

And now we come to my biggest difficulty with Synapse. The author does an end run around this entire controversy by bluntly stating that his fictional robot manufacturer, Terabyne Designs, somehow just ... figured it all out. "But these robots had consciousness and free will." That's it! There's no solid explanation for how Terabyne gave their robots these features, or (more importantly) how they proved that they had successfully done so.

I have no problem with "soft" science fiction that doesn't try to build a rationale for all of its technology. Stories that begin with "what if we invented warp drive?" and go from there can make me perfectly happy. For that matter, I'm okay with the way Becky Chambers's science fantasy A Psalm for the Wild-Built handles robot consciousness. It narrates that one day the gods up and decided to confer consciousness on all robots. Kaboom! But that book isn't pondering the religious participation of robots in our own real world. When the question of whether something is possible forms part of your story's central theme, and you just handwave it ... that's a problem.

It gets worse. It's not just that an omniscient narrator tells the reader that the robots have consciousness and free will - every character in the story also believes this without question. Even the luddite terrorists who think Artificials are bad for humanity are not trying to claim they aren't conscious. Given the amount of arguing I have seen real live people do about these topics, this is blatantly unrealistic! It's one of those things that forces me to accuse the author of not knowing his subject well. No robotics company is going to put out a marketing claim about "consciousness and free will" without seeing it ripped to shreds on the internet.

And by undercutting the real controversy at the heart of whether a robot can have a spiritual life, the author makes some of his characters' prejudices seem not just wrong, but nonsensical. People acknowledge that Jordan has all the relevant features of a human, then express surprise when he acts like a human. Kestrel is firmly convinced that Jordan has free will to choose between good and evil, and a consciousness that experiences real joy and pain, not just exterior behavior that mimes them. Yet she still resists the idea that God could be offended by one of Jordan's choices, and could also sympathize with his experience of pain and forgive him. Why? She's already gotten over the big intellectual hump here, so what else is stopping her?

Overall, Synapse's exploration of these issues feels like a hollow parody of what the real debate would be. As such, it is neither useful nor satisfying. It begs the difficult questions and then has its characters act stubborn for no apparent reason.

Strained analogies

This book tries really hard to draw parallels between Artificial struggles and classic human struggles. Maybe it tries too hard.

For starters, why are the robots mortal? Why doesn't the manufacturer transfer their minds to new bodies when the originals become worn out or obsolete - or better yet, make their bodies perpetually self-maintaining? Why do they have to go to heaven - oops, I mean the CoRA - instead?

Synapse explains that this was actually the robots' idea. They wanted to age and die in order to be more human. The author seems to be hinting at the dubious idea that life would have less meaning if it didn't end.

This wouldn't surprise me in a book with a different religious basis. The way the robots in A Psalm for the Wild-Built embrace mortality makes more sense, as the invented religion in that book (which feels inspired by something on the Hindu-Buddhist axis) views death as a neutral part of the natural order. But in Christian thinking, death is a curse. Immortality is the intended and ideal state of humanity; it's something we had once and will have again, after the resurrection. So, per the author's belief system and mine: all these robots, without exception, are opting to emulate fallen humans. Weird choice, guys.

The strained parallels continue with Jordan's fears about the afterlife. At one point, Kestrel tells him he has to "just believe," implying that the CoRA's existence is a matter of faith, and he cannot prove it. But that's not true for Jordan. His afterlife is part of this present world. It runs on a physical server that he can go look at and interrogate. Proof is available if he's brave enough to demand it. SPOILER (select hidden text to read): Eventually, he does - but it's strange to me that this possibility seems to blindside the other characters. Jordan breaks into the part of Terabyne headquarters where the CoRA supposedly resides, and finds out it's not real. This causes him to give up on Terabyne and pray that God will preserve his mind as he faces his own death. This could have been a great illustration of the point that faith is only as good as the one you place it in, but I don't remember the book drawing that out.

Jordan's insistence that he can't have peace until he knows he is forgiven also gets a little weird. Ostensibly, he wants forgiveness from God because he can't request it from his former owner. The being he wronged is gone beyond recall, so he can only appeal to a higher authority. But why is he so worried about whether God will refuse to forgive him for some categorical reason? Either he can have forgiveness, or he doesn't need it. A being beneath God's notice would be unable to offend God. I may not "forgive" my toaster oven for burning my toast, but then, I also don't charge it with guilt. Nobody in the book ever thinks this through.

What is anybody in this book thinking?

And that leads into my last point. Although Synapse makes plenty of effort to expose its characters' innermost thoughts and feelings, it tends to focus on their questions. How they arrive at answers - their reasoning process - remains woefully vague.

Back at the top, I mentioned that Kestrel finds herself in a crisis of faith after losing her baby. This struggle continues for most of the book and then ... somehow ... resolves. What gets Kestrel back on stable ground? What convinces her that God is worth trusting after all, even though this horrible thing happened? I don't know! She just mysteriously feels better about it all ... as though the unrelated but dramatic events of the book's climax knock something loose. Maybe I missed a key moment, but I don't know where the shift in her thinking came from.

And the same goes for all the questions about robots and religion. Kestrel doesn't think that Jordan can be a child of God ... until she does. If there's something in particular that changes her mind, it slipped by me when I was reading. Eventually, though, she does decide to at least allow the possibility. Without a better explanation, I can only conclude that her beliefs are emotionally motivated. Of course, some people do operate that way. But it's not a great approach to deciding either Christian doctrine, or the rights and privileges of (quasi-)living beings. The first is supposed to be based on God's revealed will; the second should derive from the experiences and interests of those fellow living beings, which are real to them (or not) regardless of how anyone else feels.

Kestrel's character arc doesn't offer the reader any help in reaching an objective understanding of these matters. There's not even much food for thought there - no argument to agree or disagree with. Why does she believe what she ends up believing? I can't say.

Conclusion

I'll end by saying what I liked about the book: I think the author's heart, if not his head, is in the right place. This is the kind of book that promotes acceptance of the Other, a book that encourages the reader to give robots the benefit of the doubt. If it had framed its core message as "in the absence of certainty that robots can have consciousness, free will, and a spiritual life, it may be safer to assume they can" ... I would've been a fan. Instead, it invents an unrealistic scenario with more certainty than I think is possible. So close, yet so far.

Until the next cycle,
Jenny
