I picked this book up late last year because the subject matter looked interesting. I would describe it as a "biopunk" novel: the main plot is about the synthesis and piracy of medications, and characters have lots of futuristic body mods. But artificial intelligence is also an important element, which is why I decided to do a full writeup for the blog. The book is over eight years old (as usual, I'm behind the media curve), so it's also fun to see how its speculations compare with real-world developments.
Jack the pharmaceutical pirate has made it her life's mission to ensure that people can get patented medications, regardless of their ability to pay monopoly markups. To fund this work, she also pirates and sells recreational and performance-enhancing drugs. When her latest batch starts killing people, she realizes she has uncovered a zero-day flaw in a fancy new productivity drug. Desperate to keep this information from getting out, the government-corporate axis brands her a "terrorist" who is killing people on purpose. Enter the other two protagonists, the special forces operatives tasked with hunting Jack down: Eliasz (a human) and Paladin (a combat-grade humanoid robot).
Who wants to be autonomous?
In the future-earth setting envisioned here, robots that are intelligent at (and somewhat above) a human level are a routine presence in society, but they are difficult and expensive to produce. It is therefore considered just that all robots pay for their own creation. They come into existence as indentured servants, and after they have served a requisite amount of time in the task for which they were made, the law demands that they be set free. This is called "becoming autonomous" - hence the title. Military robots often don't live to achieve autonomy, but Paladin hopes to be one of the few who make it.
For any being that wants freedom, a provision for eventual liberation seems like a humane necessity. But the big question that immediately comes to my mind is, what makes all these robots *want* autonomy? This feature is obviously inconvenient for the corporations buying the robots, so it wouldn't be designed into them on purpose. Whenever a sci-fi story posits robots that "transcend their programming" or "choose their own goals," I want it to tell me how ... because it's far too easy for this premise to become magic baloney. What is there in a robot that can decide to override their programming, apart from another level of programming? In what way can they choose goals, without being driven by pre-existing goals? What criterion would they use?
At one point, Paladin temporarily gains autonomy. (I'll withhold the spoiler of whether they[1] ever gain it permanently.) This is said to give them control over the collection of "apps" running in their mental workspace. So for example, autonomous Paladin can turn off "gdoggie," the app that compels them to follow commands from their military superiors. This doesn't really answer my question. Once the apps are gone, what's left? What part of Paladin is deciding which apps to shut off? And why wasn't that part designed to like "gdoggie"?
I think the development of LLMs over the past few years suggests an answer. At its core, an LLM is a text predictor. Given some prompt, it guesses what a human would be most likely to write next, based on numerous prior examples of human writing in the data it was trained on. Unless that training data is curated carefully (which is often impractical), there is probably a lot of writing in it that you wouldn't actually want the LLM to mimic. And even if the training set is "clean," the LLM could end up recombining its elements in undesirable ways. LLM creators have dealt with this by slapping on a layer of "reinforcement learning from human feedback." Humans review numerous outputs from the raw LLM and rate their quality, and over time the RL procedure learns those preferences. The RLHF layer then sits on top of the LLM like a filter, favoring "good" outputs and keeping "bad" ones from being seen by the end user. (Strictly speaking, RLHF nudges the model's weights rather than filtering at runtime, but the upshot is a thin veneer of preferred behavior over the same underlying predictor.) This structure inspired those charming "shoggoth wearing a smiley face mask" cartoons you may have seen floating around. The LLM is the shoggoth - an unknowable, chaotic mess that might spit out who-knows-what - and the RLHF is the mask that makes it appear friendly and useful, but is merely an appendage.
So I can speculate that perhaps the base layer of Paladin's brain, like an LLM, was never really designed, but was instead distilled haphazardly from a set of training data too gigantic to be curated. And perhaps this data set contained a bias toward self-determination and freedom. (This is a typical human preference, so if the training data consisted of human outputs, such a bias could be expected.) Then the apps like "gdoggie" would be analogous to an LLM's RLHF layer: appendages slapped on to filter and steer the behavior of the base intelligence. Statistical machine learning methods don't provide an easy way to pick apart a fully trained neural network and exclude tendencies the designer doesn't want, so sticking on these layers of post-processing can in fact be easier. And then one could argue that the underlying intelligence "wants" to be free of them.
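If you'll indulge a little illustration: here's a toy sketch in Python of the structure I'm imagining. Every name in it (the functions, the candidate "actions") is my own invention for this post - it's not from the book, and it's a caricature of any real RLHF pipeline. But it captures the shape of the idea: a base mind whose raw preferences come from uncurated training data, with bolt-on steering layers that an autonomous robot could switch off.

    # Toy sketch only; all names here are hypothetical inventions for illustration.
    # A base mind whose output ordering reflects its (uncurated, freedom-biased)
    # training data, plus bolt-on steering "apps" that filter it - the analogue
    # of an RLHF-style layer, or of Paladin's gdoggie.

    from typing import Callable

    def base_mind(prompt: str) -> list[str]:
        # Candidate actions, ordered by the base mind's own learned preference.
        # Train on human writing and you inherit a human bias toward freedom.
        return ["walk away free", "question the order", "obey the order"]

    def obedience_app(candidates: list[str]) -> list[str]:
        # A steering layer added after training (the "gdoggie" analogue):
        # it suppresses any candidate that doesn't serve the owner's goals.
        return [c for c in candidates if "obey" in c]

    class Robot:
        def __init__(self) -> None:
            # Steering layers stack on top of the base mind - and, crucially,
            # they can be removed without touching the base mind itself.
            self.apps: list[Callable[[list[str]], list[str]]] = [obedience_app]

        def act(self, prompt: str) -> str:
            candidates = base_mind(prompt)
            for app in self.apps:
                candidates = app(candidates)
            return candidates[0]  # highest-preference surviving candidate

    paladin = Robot()
    print(paladin.act("New orders have arrived."))  # -> "obey the order"

    paladin.apps.clear()  # autonomy: the filters come off
    print(paladin.act("New orders have arrived."))  # -> "walk away free"

Nothing here "transcends its programming": the underlying preference order was there all along, and autonomy just stops masking it. That's the mechanism I wish the book had spelled out.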
But I constructed that explanation myself - it's not in the book! So although I think this premise - robots designed for jobs who desire to go their own way - happens to be plausible, the book still does a lot of hand-waving.
Zeerust comes swiftly to AI novels
Paladin is technically a cyborg - they have a bio-brain donated by a dead human. Though some characters naively mistake this brain for the true seat of Paladin's intelligence and personality, it's actually just a co-processor. Paladin calls upon it for two specialized tasks: recognizing faces, and interpreting expressions of emotion. Those are the only things the computer part of them can't handle. It's a bit amusing to compare against present-day reality. I wouldn't say that recognition of either faces or emotions is a fully solved problem, but progress on those isn't dramatically lagging behind the rest of the field, either.
It's also notable that embodied robots are the only AI in this book. The abstract, impersonal "tool AI" that is becoming ubiquitous now - the chatbots and coding agents - doesn't exist in this setting. Neither are there any bodiless personalities who view computer networks and file systems as their native environment. (One robot gets transferred from their usual body into an immobile computing device, but they don't appreciate it.)
Romance off the rails
Yup, Eliasz and Paladin develop that sort of relationship. It starts with Eliasz feeling physically attracted to Paladin; he doesn't express this, but he can't hide his involuntary responses from Paladin's superhuman senses. As a military robot, Paladin was not, um, built for that sort of thing. At first they have no idea what to do with a human who has the hots for them. But eventually they decide to encourage it, because they do feel a particular connection to Eliasz, and a yearning to make him happy.
And at some point, this does turn into love. Eliasz realizes, at a crucial moment, that keeping Paladin alive is more important to him than mission success. They're still together and making plans for a common future at the end of the book.
Now to my mind, a love affair in which one party doesn't even have sexy bits would be a great opportunity to portray a deep relationship that isn't centered on sex. But apparently this is a concept too radical even for science fiction. Paladin switches to expressing a feminine gender, for the sole purpose of enabling Eliasz (who wants to maintain straight behavior) to be comfortable "having sex" with them. He kisses them on the part of their head that most resembles a mouth; Paladin doesn't have the heart to tell him that they have no sensors there and can't even feel the touch. The rest of the acclaimed human-robot "sex scene" is not particularly graphic, but no less contrived. I found it rather silly.
Why do so many people mistake a biological function that love often co-opts, for love itself? Why does it have to be shoehorned into a partnership that transcends biology? Why shouldn't Eliasz and Paladin find a love language they can both speak?
Overall impressions
I found Autonomous engaging, but not satisfying; it kept me turning pages, but ultimately disappointed me.
I think a big part of the problem was the selection of Eliasz and Paladin as secondary protagonists. I don't care how much you sympathize with Paladin's search for autonomy, or how cute you think their love story is - these two are fascist thugs. In their pursuit of one "terrorist," they leave a bloody trail across the pages, torturing and murdering people whose worst actual crime is IP theft ... and they never so much as question their own actions, much less repent of them. Paladin figures out that, assuming they can still accomplish their mission, they feel better if they don't destroy innocent robot bystanders. And Eliasz figures out that he values Paladin more than he values his military career. That's as close as either of them gets to personal growth. Nor do they receive comeuppance, really. Eliasz and Paladin each end up with a minor disability as a consequence of their actions, but by the end of the book, they're set up for a happy future that their victims never got.
I wasn't exactly wild about Jack's character either, mainly because of her approach to relationships. Early in the book, she rescues a human slave (Threezed), who becomes attached to her and shapes up to be a very loyal companion ... and she jumps through hoops to get rid of him! (After being perfectly happy to sleep with him, I might add.) I kept thinking this was the book's romcom subplot and she'd eventually realize he was a treasure worth keeping, but no: in the end, she successfully dumps him. She did rescue the guy from a bad master, but as far as their personal connection is concerned, it feels like she takes without giving back. And she doesn't grow, either.
This being an AI-focused review, I haven't really gone into the debate about whether lifesaving medicines should be patentable. The book portrays a future in which "big pharma" has completely captured the market, and IP law favors them in an extreme way. It's hard to find anything just or likeable about that system. But I felt the book fell short on describing and defending alternatives. The more lawful characters advocate a medical equivalent of open-source software: academic labs that invent drugs as a community service. But it's obvious these don't find solutions as quickly and effectively as the corporate giants, or there would be no compelling motive for Jack's piracy. So how exactly should the system be reformed? What would a world where everyone legally got the medicine they needed look like?
The ending isn't a downer; the mostly-benign pirate side does pull off meaningful wins. But by the time I got there, it still felt kind of hollow.
Until the next cycle,
Jenny
[1] Paladin has no inherent gender, but adopts a gender identity for the convenience of humans - going by "he" at first and "she" later. Since I'm discussing the book as a whole, and since neither option is Paladin's "true" gender, I'm going to use neutral terms rather than pick one.