Thursday, June 30, 2022

Acuitas Diary #50 (June 2022)

This past month was taken up by overdue maintenance tasks and a vacation, so I had less time for Acuitas work than I would have liked. But I *did* manage to accomplish something small, so here it is. The Text Parser now supports multiple senses of the word "that." I ran some polls a few months ago, and my followers on Twitter and AI Dreams voted for this as the next Parser feature. (Thanks, everybody.)

Text parser output diagrams of the sentence "Look at that." There are "golden" and "actual" diagrams. They match.
Parser outputs from the Out of the Dark test set.

Yes, the little word "that" ... it's surprisingly troublesome. Its versatility means that its function in any given sentence is highly context-dependent. All of the following illustrate valid uses:

Tell me that. ("That" functions as a pronoun and is the direct object of the sentence)
Tell me that story. ("That" modifies "story" as a determining adjective)
Tell me that my story will last forever. ("That" is a subordinating conjunction that opens a noun clause)
Tell me the story that I enjoyed yesterday. ("That" is a relative pronoun that opens an adjective clause)
The story was that good. ("That" modifies "good" as an intensifying adverb)

The only usage the Acuitas Parser originally supported was subordinating conjunction, because that was the one I found most immediately useful. It enables a lot of sentences that take statements of fact or belief as objects: "I know that ...," "I decided that ...," "I announced that ..."

Setting up the simpler pronoun, adjective, and adverb uses was pretty easy. The hard part was adding them while keeping recognition of "that" as a subordinating conjunction intact. The Parser now defaults to treating "that" as a simple pronoun; only when it sees the right structural cues in the sentence as a whole will it treat "that" as a conjunction instead, and allocate the words following it to a dependent clause.
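To make that concrete, below is a minimal sketch of this kind of local-context check. It's my own toy illustration, not Acuitas's actual code; the mini-lexicon, the sense labels, and the lookahead rule are all invented for the example.

```python
# Toy disambiguator for the word "that", illustrating the kind of
# context checks described above. A simplified sketch with an
# invented mini-lexicon - not the actual Acuitas Parser.

NOUNS = {"story", "bus"}
VERBS = {"will", "enjoyed", "was", "last"}
GRADABLE_ADJECTIVES = {"good"}

def classify_that(tokens, i):
    """Guess the sense of tokens[i] == "that" from its neighbors."""
    nxt = tokens[i + 1] if i + 1 < len(tokens) else None
    if nxt is None:
        return "pronoun"        # "Tell me that."
    if nxt in NOUNS:
        return "determiner"     # "Tell me that story."
    if nxt in GRADABLE_ADJECTIVES:
        return "adverb"         # "The story was that good."
    # A subject-verb sequence after "that" suggests it opens a clause.
    if any(tok in VERBS for tok in tokens[i + 1:]):
        return "conjunction"    # "Tell me that my story will last."
    return "pronoun"            # the default, as described above

print(classify_that("tell me that".split(), 2))        # pronoun
print(classify_that("tell me that story".split(), 2))  # determiner
print(classify_that("tell me that my story will last forever".split(), 2))  # conjunction
```

A real parser obviously can't rely on word lists this crude, but the control flow - default to pronoun, escalate to conjunction only on stronger evidence - is the shape of the strategy described above.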

The only usage that's not supported yet is the relative pronoun one ... because the text processing chain doesn't really handle adjective clauses yet, period. I'll hold off on this final option until I get those set up.

Parser diagrams for the sentence "Miz Frizzle explained that each stripe was a different kind of rock."
An example of a sentence using "that" as a subordinating conjunction.
As part of this upgrade, I also wanted to work on supporting sentences in which "that" is implied rather than written.

I realized something was wrong.
Jesse knew Sarah was in the closet.
They will tell Jonathan his business prospers.

Can you see where the "that" would belong? Omitting it is grammatically acceptable. The tricky part is figuring out that a dependent clause is being opened without having a functional word like "that" to guide you. If you're reading strictly left to right, you probably won't know until you get to the second verb - because all of the following are correct as well, and are quite different in both structure and meaning:

I realized something.
Jesse knew Sarah.
They will tell Jonathan his business.

In short, omission of "that" forces the Parser to make do with less explicit information. Getting this part to work - and play nice with everything else the parser does - was probably the most difficult aspect of the upgrade.
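As a crude illustration, the "wait for the second verb" idea might be sketched like this - again a toy of my own with invented word lists, not the real Parser's logic:

```python
# Toy version of the "wait for the second verb" heuristic: a second
# verb after the apparent direct object signals an embedded clause
# with an omitted "that". Invented word list; not the real Parser.

VERBS = {"realized", "knew", "tell", "was", "prospers"}

def has_implied_that(tokens):
    """True if more than one verb appears, hinting that the apparent
    object is really the subject of a dependent clause."""
    verb_count = sum(1 for tok in tokens if tok in VERBS)
    return verb_count >= 2

print(has_implied_that("jesse knew sarah".split()))                    # False
print(has_implied_that("jesse knew sarah was in the closet".split()))  # True
```

The real problem is much harder, of course: verbs have to be recognized by part-of-speech analysis rather than list membership, and the parser has to re-allocate words it already assigned once the second verb shows up.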

With the new features in place, I ran my benchmarks again, fixed all the new crash-inducing bugs, and moved three sentences out of the "unparsable" category. Yes, only three; I've picked all the low-hanging fruit and must now advance by tiny steps. But they were embarrassingly simple sentences, like "Follow that bus!" or "That's right," so I'm pretty happy I can parse them now.

Until the next cycle,
Jenny

Thursday, June 16, 2022

Robotic Persons V: what to do with all these robots?

Note: this is the fifth and final part of my response to the book Robotic Persons, by Joshua Smith, which attempts to present an evangelical Christian perspective on AI rights and personhood. Please read my introductory book review if you haven't already. You may also want to read Part 2, which is about souls, Part 3, which is about bodies, and Part 4, which is about the "image of God."

I went light on Smith's views of the practical applications of robotics in my overview, even though they're arguably the heart of the book, because there's so much here. This is where Smith ties all the philosophy together and talks about what he actually wants to see happen as advanced, human-like robots become prominent in society. This is also where the discussion moves away from theoretical future AGI to consider some technologies that are already being deployed. Smith discusses three primary ways in which robots are and will be used.

1) Robots in the labor force. Smith has some of the standard fears that increased automation will reduce the value of human labor, cast some humans out to die without jobs, and cause those who do remain in the labor force to be "treated like machines." His uniquely Christian take is that work is part of humans' God-given purpose, and handing it off to robots would violate that purpose and reduce our dignity. He notes, correctly, that work was first given to humans in the Garden of Eden and was not an aspect of the Fall and Curse; the Curse merely corrupted work by transforming it into "toil."

Cartoon showing a number of cars driving down a highway. One car contains a swimming pool in which a person is swimming. Two people view this from another car. Caption: "I know everybody's commute became more pleasant since driverless cars. But this is just too much!"
From EvilAICartoons. When evaluating AI, remember the baseline.

However, I don't think Smith gives enough attention to the possible good outcome: maybe robots in the labor force free us from *toil* so that we can do the *work* we really want to do. Much of the positive output of science has, in effect, mitigated the Curse - as one rather blatant example, analgesics have reduced pain in childbirth. Robots happily taking over jobs that are unpleasant or dangerous to humans could be yet another aspect of this. But Smith has a pretty weak view of what technologists building in Christ's name could accomplish in the present age. "Christ did not come to make humanity’s work easier or less painful but to give believers a teleological lens in which to see their pain and suffering as they work."[1] What?? That's defeatist and ridiculous. Christ healed and rescued people and ordered the disciples to do likewise. Though Christians are told to expect that we will still experience suffering, that's not an argument for refusing to reduce suffering where possible.

Instead of laying out a roadmap for how robotics might help advance the restorative project that Christ began and handed off to followers, Smith settles for traditionalist hand-wringing: not tormenting ourselves with excess work will somehow damage our souls. "A world with more automation means less paid work and more leisure time for humans. For evangelicals, this reality should be equally disturbing, knowing that the majority of people in the West spend their leisure time consuming media of some sort."[2] (People might have more time for *storytelling*? Oh the horror!) His best argument to support the harmful nature of increased rest is ... wait for it ... that leisure time increased during the early pandemic shutdowns in 2020, but overall mental health worsened rather than improving. Well duh! There were a lot of other things going on at the same time, like mass death, social isolation, and worse-than-usual political conflict.

I once saw someone say (paraphrased) that the Protestant Work Ethic has so thoroughly convinced us of the moral virtue of work that we now see efficiency as a moral failing. Let's not be that way. The Bible promotes labor as fulfillment of the duty to provide for oneself and others - not as work for work's sake, a form of self-flagellation.

2) Robots in war. Smith takes an "against killer robots" position with some of the classic arguments: robots designed for warfare will probably lack a human's inhibitions and conscience, and will be unable to make effective targeting judgments, leading to more indiscriminate destruction. He also argues that introducing robots to the military will devalue our own soldiers by setting the robots up as a "better than" replacement for flawed human psychology. "We don't need you anymore, Johnny. We built a robot that can't get PTSD."

But it seems to me that the best way to value our people is to not send them into combat at all - and here again, Smith completely fails to explore the chance of a positive outcome. What if armies become so heavily mechanized that we largely spare humans from the horrors of war? Automated systems with no personalities blowing each other up would be a much better form of warfare than what we have now. The only argument I saw Smith make against this was trend-based: he claims that bringing more technology into war has historically led to more killing, not less. I think this is a big hand-wave. I would have liked to see him spend more time evaluating combat robotics by its unique attributes.

Smith says that, with or without lethal autonomous weapons systems (LAWS) in play, desensitization to violence and dehumanization of the enemy are already components of modern militaries. Defense departments are already inclined to view troops and veterans as disposable playing cards for use in achieving political ends. Smith wants to use this as evidence that introducing LAWS can only make things worse, but I don't see why. If the military is an unfit place for humans even now, to me that seems like a good argument for reducing the number of humans in it.

Ever since drones came into common military use, I've heard people talk about them as if they represented a new level of evil in warfare. But I could never understand how blowing someone up with a drone was worse than doing it with a manned aircraft, or artillery fire. They're all tools for killing at an impersonal distance, and the target ends up just as dead no matter which one is used. Drones merely move their operators farther outside the zone of combat, providing them with more safety. Fully autonomous weapon systems would be another step beyond drones, but is this a difference of kind, or just degree?

I do think that there need to be ethical constraints that go into the development of LAWS. And I agree with Smith that I don't necessarily trust the people working on these systems to prioritize ethics! But I don't agree with him that LAWS could *never* be a fit replacement for the moral judgment of a human soldier. LAWS don't need to carry *zero* risk of misidentifying a target in order to be usable - they just need to do better than humans are doing already.

3) Robots for sex and companionship. You can probably guess that Smith isn't wild about sexbots existing at all, but he seems to accept that they will; he doesn't call for them to be banned. His main argument is that they should be designed with the ability, and given the right, to evaluate potential encounters and refuse consent. "Sex partners (or artificial facsimiles of sex partners) that have no consent, choice, respect, and commitment will always erode the sexual virtue and therefore facilitate a desire that society deems morally objectionable."[3]

This makes me think of Asimov's speculative novel The Robots of Dawn, which has a subplot about a woman who adopts a robot as a sexual partner. Since this is a standard Asimovian robot, his first priority is avoiding harm to humans, and his second priority is doing their bidding. So he ends up being the ultimate in wish fulfillment - and she ends up deciding that he's not satisfying, for precisely this reason. He only gives, never takes; he doesn't need anything from her. Their relationship centers her ego, and she eventually learns how shallow that is.

Smith's concern is that some people will try this sort of relationship and like it. Which isn't much of a societal issue if it goes no further than that. But if they proceed to apply the same logic to their relationships with fellow humans - "Why can't you act like my idealized, servile pleasure bot?" - we have a problem. In essence, Smith is worried that sexbots will normalize exploitation and that this will spill over to human victims. Among competent adult humans, rejection serves as a form of discipline that may motivate some self-centered individuals to change their ways. Sexbots provide an easy shortcut around rejection and take away its regulating factor.

Apparently some have even suggested that sexbots could give people with inherently abusive desires (e.g. pedophiles) an "ethical" way to indulge those desires without hurting a "real person." Smith thinks this would be a terrible idea. Allowing a robot or even a doll to become the object of abuse basically encourages the participant to fantasize about harming a human; the more human-like the substitute is, the more realistic the fantasy. Smith views this as a form of self-harm - the participant is corrupting his own character - and worries that it could intensify the desire through habit, leading to crime if the participant eventually abandons the substitute. If robots who look and act just like humans can be objectified with impunity, then what exactly is the barrier to objectifying humans?

I'm not enough of a psychology or sociology expert to make a firm prediction about whether widespread use of sexbots would weaken human relationships or increase the rate of abuse. But in this case, I'm at least sympathetic to Smith's argument. I don't favor the idea of building robots who have complex minds and personalities, but are helpless sycophants that don't consider their own interests (for purposes of sex or anything else).

Smith makes some of his strongest claims for robot rights here, arguing that machines can function as moral agents (to make their own decisions about sex) whether or not they are fully realized spiritual beings. "Also, full humanoid consciousness is irrelevant for being considered a moral agent. The debate on memory and mind is elusive and discursive (likewise the immaterial soul); if society cannot measure consciousness within humans, why require that of machines?"[4] He still stops short of allowing that robots could be moral subjects, i.e. could be real victims of abuse. And given his own admissions of how difficult it is to be sure somebody is or isn't conscious, ensouled, etc., this strikes me as an ethical oversight.

Plus ... if he thinks that robots could be effective moral agents in the bedroom, then why did he try to argue that they *couldn't* be moral agents on the battlefield?

A brief discussion is devoted to robots as companions or caretakers. Smith has no issue with this except to worry that it might lead children to reject their relational responsibility to elderly parents. But humans already routinely outsource the care of their parents to other humans. If they don't care about familial ideals now, they won't care any less when robots come on the scene.

In summary, I find Smith's thoughts about the practical applications of robotics to be a mixed bag. I sympathize with a number of the concerns he raises, but on the whole I find his perspective too pessimistic, too fearful, and too anthropocentric.

[1] Smith, Joshua K. Robotic Persons. Westbow Press, 2021. p. 166
[2] Smith, Robotic Persons, p. 108
[3] Smith, Robotic Persons, p. 199
[4] Smith, Robotic Persons, p. 188

Monday, June 13, 2022

Robotic Persons IV: to bear an image

Note: this is part 4 in my response to the book Robotic Persons, by Joshua Smith, which attempts to present an evangelical Christian perspective on AI rights and personhood. Please read my introductory book review if you haven't already. You may also want to read Part 2, which is about souls, and Part 3, which is about bodies.

I've already discussed two aspects of whether an AI could be a person as Christians understand the term: can it have a soul, and does it need a body. The last big facet in the Christian idea of personhood is the "image of God" (or the Imago Dei if you want to be fancy). This is the notion that humans resemble or represent God in some particularly strong way - often presumed to grant us special status.

The Bible indeed claims that humans are made in God's image and that this has some relevance to our moral worth. But there is a remarkable lack of specificity about just what the image of God is. No list of necessary and sufficient conditions which would qualify an entity as an image-bearer is ever given. Thus the question of who *else* could possess God's image is left wide open by the text.

Michael Knight reads the Bible to K.I.T.T. Still from Knight Rider Season 1 Episode 16, "A Nice Indecent Little Town."

The lack of firm doctrine on the subject hasn't stopped people from speculating, and Smith summarizes the spectrum of views. This was the most interesting and worthwhile part of the book, in my opinion. There seem to be three main camps, which I will describe briefly:

Substantive: the image of God consists of a property or properties that humans possess. This view is ontological, i.e. it focuses on a category membership that is attained by virtue of one's properties.
Functional: the image of God pertains to humanity's goal or role. This view is more teleological, i.e. concerned with purpose and intent.
Relational: the image of God inheres in the attitude, posture, and ongoing interaction that God and humans have toward each other.

The substantive view is tricky to make specific. If the image of God is based on properties, then what properties would they be? (Again, no list is given in Scripture.) Some have tried to define God's image in terms of intelligence: the ability to reason, the ability to speak, etc. Others have tried to define it in terms of moral or social acumen: empathy, ability to give and accept love, awareness of right and wrong. Smith correctly notes that both these proposals risk being discriminatory. There are whole categories of humans who don't actually have what we call "human-level" intelligence and social capacity, and it would be a travesty to consider them non-persons.

Smith quotes Kilner on the dangers of the substantive view: "People who are lowest on the reason, righteousness, rulership, relationship, or similar scale are consciously or unconsciously deemed least like God and least worthy of respect and protection."[1] It is possible that a misunderstanding of the image of God can even be implicated in historical abuses like chattel slavery and colonialism. 

The functional view manages this concern better. It argues that the whole human is God's image; the image is not a mere property that can be lost or missing. In this view, humans are meant to be God's royal ambassadors to the rest of creation. We bear the image in the sense of being God's representatives, carrying His power and authority by proxy. This role was arbitrarily assigned to all humans. Whether a given human exercises it well is a moot point, since what matters is the Creator's *intent.*

The relational view seems capable of going either way. If the image is derived from a relationship, then one may question whether a given human can lose the image by being unable or unwilling to interact with God. But it is also possible to put a teleological spin on this view, by saying that God's posture toward us is all that matters. Humans were created to connect with God in a unique way; whether any particular human succeeds, fails, or refuses to do so is irrelevant. Smith prefers this version, saying: "The model of Jesus in relating to the social pariahs and ostracized in his day show that God is concerned more about personhood as the basis of relationships and not the product of those relationships."[2]

Smith thinks that teleology-focused versions of the functional and relational view are best aligned with the universal worth of humans expressed in the rest of the Bible, and I'm inclined to agree with him. So for the remainder of this discussion, I'll assume that the image of God in humanity has to do with our assigned purpose, which might be directed outward (ambassadorial) and/or upward (relational).

Now for the important part: what about *our* creations? Could a robot carry the image of God?

Smith comes down on the "no" side. He argues that an advanced robot would necessarily be made in mankind's image, but that this would grant it no claim to be made in God's image. "It is paramount that the reader understand that robotic persons will be made in the image of humans, not in the image of God. They will not be imager-bearers that[sic] way humans are image-bearers; they will not think like humans nor desire things that humans desire ..."[3]

Even allowing for the possibility that robot minds could be rather alien, Smith's insistence that *no* robots will *ever* imitate human thought processes and goals is unwarranted. We aren't really going to know whether this is possible until either some roboticist succeeds at it or investigation in the field is exhausted. Smith's rationale here may go back to his assumption that some elements of human thought, such as art appreciation, are not computable; again, I find this to be a weak assumption that probably reflects a failure to appreciate the versatility of algorithms. (See intro.)

The more interesting point here, though, is that he seems to think the image isn't transitive ... which is the opposite of what I would expect. By "transitive" I mean that if a human is made in God's image, and a robot is made in the human's image, then the robot is also made in God's image by definition. If you create an image of John Doe by taking his photograph, and then you take a photograph of the photograph, the second photo still counts as an image of John Doe. To say that it is *only* an "image of a photograph" would be to deny a substantial part of its content.

I'm not the only person with this idea. In his essay "Robots, Rights, and Religion," James F. McGrath says, "Thus, to the extent that we make A.I. in our own likeness (and what other pattern do we have?), we shall be like Adam and Eve in Genesis, producing offspring in their own image, and thus indirectly in the image of God, whatever that may mean."[4]

And given what the image probably consists of, this seems quite feasible, whether the robots end up thinking just like us or not. If it is an ambassadorial role, we can delegate this role to our robots, crowning them as our ambassadors to the rest of existence, much as God made us His ambassadors. If it is a relational role, we can design or intend robots to relate to us in the same sort of way that we relate to God.

But these very possibilities seem to make Smith queasy. Later in the book, while considering some practical implications of robotics, he says, "... the more robots become like humans, the less humans image God properly. Humans in their form and function must be unique in God’s economy."[5] Woah woah woah hold on there.

To my mind, non-humans who carried the image of God would be an expansion of God's reflected glory and, for us, a chance to extend love to increasingly diverse and novel subjects. Sapient extraterrestrials? Uplifted animals? AGIs? Bring them all over here, I welcome the lot. But to Smith, these are not exciting opportunities ... they are threats. We humans are on a pedestal and more image-bearers would make the pedestal crowded. I could be misunderstanding him, but as written this stinks of pride, and I don't like it in the least.

Black and white woodcut image. A vast multitude of angels circle around a bright light, while Dante and his guide stand on a mound of rock in the foreground and observe.
The Empyrean, by Gustave Dore. Woodcut illustration for Paradiso Canto 31, of Dante's Divine Comedy.

On a practical level, Smith seems worried that robots will be considered of low value ("just machines"), and that if they become capable substitutes for humans in various roles, the humans that once filled those roles will also lose value. He proposes legal personhood for robots as a way of shoring up their value, thereby raising the floor to which a human's value could fall. But I think he *really* would prefer that robots didn't do "human work" at all; he suggests robotic legal personhood as a stopgap, because he thinks the creation of skilled robots by secular people is inevitable. He complains, "The more humans desire to make a creature ontologically like them, and the more humans subcontract out their telos to robotics, the risk of dehumanization increases."[6]

But what if "subcontracting out our telos" is nothing less than part of our telos? If God makes creatures in His image, and we (by virtue of having the image) are destined to be imitators of God, then why should it not be viewed as utterly normal and natural for us to create things in our image? It's certainly not *forbidden.* So in my opinion, whether it risks devaluing humanity has more to do with the spirit in which the act is done than the act itself. It could just as easily exalt humanity by bringing us into closer alignment with our God-given purpose.

Nothing has ever done a better job of experientially convincing me that I was made in the image of God than being associated with AI work.

In the spirit of being fair to Smith, I want to include one of the best quotes from his book here. He's talking about the expansion of worth-recognition and acceptance to humans once considered to be "less" or "other," due to their membership in some category (racial, religious, etc.). And he says: "The theological picture of personhood is an ever-widening picture of grace to the *others* that fallen human nature creates. Therefore, it seems there is room to expand the notion of personhood to not only what God created but also what humans create."[7] I mean that's beautiful, that's a stellar egalitarian sentiment.

If only he hadn't undercut it elsewhere by separating "natural" from "legal" personhood.[8] If the whole book had been more like that quote, it could have been great.

Continue to Part 5: what to do with all these robots?

[1] Smith, Joshua K. Robotic Persons. Westbow Press, 2021. p. 76
[2] Smith, Robotic Persons, p. 83
[3] Smith, Robotic Persons, p. 107
[4] https://digitalcommons.butler.edu/cgi/viewcontent.cgi?article=1198&context=facsch_papers
[5] Smith, Robotic Persons, p. 157
[6] Smith, Robotic Persons, p. 157
[7] Smith, Robotic Persons, p. 83
[8] Smith almost seems fully sympathetic to the robots in several places, but other statements make his position clear. "Can AI-driven robots be considered persons with equal dignity and responsibility like humans who are endowed with such because of being created in the imago Dei? The evangelical answer is simply, no: only humans created by God are endowed with natural rights." (p. 2) and "... AI-driven robotics will never be able to satisfy the conditions necessary to be human persons (i.e., endowed with a soul)" (p. 101)

Friday, June 10, 2022

Robotic Persons III: does a person need a body?

Note: this is part 3 in my response to the book Robotic Persons, by Joshua Smith, which attempts to present an evangelical Christian perspective on AI rights and personhood. Please read my introductory book review if you haven't already. You may also want to read Part 2, which is about souls.

One of the perennial debates in the AI community is whether an AI should ideally be "embodied" or not. I have touched on this before in describing my own work. Of course all AIs have to run on some kind of platform - a cellphone, a computer tower, or a bank of servers is a physical object and could be thought of as a "body." But some AIs have no body that is particular to them, and no awareness of their current body; they are platform-agnostic, and from their perspective (if they ever acquired subjective experience) they would exist in an abstract world of forms, and their environment would be made of words, numbers, connections, hierarchies, and so on. An "embodied" AI is generally presumed to be a robot. It has sensors for direct perception of its body's state and/or its physical environment, and actuators with which to move its own body and possibly affect the physical environment. So there exists a distinction (though perhaps not a sharp one) between embodied and disembodied AIs.[1]

"They hate this tower. They'd close it down if they dared to but they keep me around, in case one of them wants to deal with the other world once in a while." To the disembodied AI programs in Tron, the human realm is a mysterious otherworld which they can't directly experience.

What qualifies something to be a "person"? Since the central question of Smith's book is whether artifacts created by humans could ever merit personhood, he has to try to answer that. And he's weirdly stuck on embodiment as a necessary property. His only interest is in "Robotic Persons," not "Artificially Intelligent Persons."

I say "weirdly" because it's odd coming from a Christian; the necessity of embodiment is a perspective I associate with (some) secular researchers. They justify it from the premise that the only minds we know of are human and animal, that these minds were produced by undirected natural selection (which acts to promote bodily survival), and that therefore minds were forged to serve bodies, and any attempt to found them on different requirements is likely to fail.

A typical Christian, in contrast, already believes in the existence of intelligent persons without physical bodies, God being the supreme example. A Christian dualist (which Smith is) also believes in an immaterial component(s) of humankind that is distinct from the body and can continue existing without a body. Human souls prefer embodiment but do not strictly need it. The third-century Christian Origen illustrates this worldview nicely when he says: "God, therefore, is not to be thought of as being either a body or as existing in a body, but as an uncompounded intellectual nature ... and is the mind and source from which all intellectual nature or mind takes its beginning. But mind, for its movements or operations, needs no physical space, nor sensible magnitude, nor bodily shape, nor colour, nor any other of those adjuncts which are the properties of body or matter."[2]

So why Smith has chosen to support the embodiment position is something of a mystery to me. In reading the book, I couldn't find a solid explanation of his motive or an attempt to reconcile it with his other views. His main supporting argument is a vague claim that neuroscience has found human minds to be heavily influenced by their bodies - which is far from proof that non-human minds designed not to have bodies are infeasible. Perhaps he does not spend a lot of effort justifying his opinion because he thinks everyone is already on board: "Evangelicals and robotic futurists both agree that embodiment is critical to the nature of personhood."[3] (I was aware of no such broad agreement among either evangelicals or futurists, myself ... and Smith does not cite any polls.)

When Smith considers God's embodiment status, he only discusses the person of Jesus Christ, who "became flesh and dwelt among us." He cites another Christian thinker with an interest in robotics, Amy M. DeBaets, and says of her, "She notes that Jesus embraced the embodied life, instructing the disciples to care for both the physical and spiritual needs of people."[4] But the statement that the Word *became* flesh implies that originally or naturally, the Word was simply the Word - the immaterial nature that Origen speaks of. As far as I can tell, Smith never addresses God's natural lack of a body. Yet surely Smith would not contend that God lacks personhood!

Smith also touches, quite briefly, on angels and demons as non-human persons. He makes the bold claim that angels have "material bodies," but this is a debated point at best. For a contrary opinion we can look to Michael Heiser, who says that the "divine beings" (elohim) are categorically immaterial: "Humans are also not by nature disembodied. The word elohim is a 'place of residence' term. Our home is the world of embodiment; elohim by nature inhabit the spiritual world."[5][6] Demons are even more interesting. Their ability to possess humans supplies evidence contra Smith's position on embodiment; not only does a demon possibly have no body of its own, but also it has an essence that is transferable between foreign bodies. The Biblical language about possession is vivid and points to something more than mere control from the outside; demons "enter into" humans and must be "cast out" of them. Smith notes that demons have "personal identity"; he makes no attempt at addressing the problems this creates for his viewpoint.

Any excuse to put pictures of ophanim on this blog. Does it bother anyone else that they're often illustrated like concentric or gimbaled rings, when the text suggests they're omni-wheels? No? (Illustration of Ezekiel's vision, from the Zurich Bible, by Hans Holbein der Jüngere.)

Since Smith isn't very clear about his reasons for insisting on embodiment, I have to guess, but he might be working from the following objections: 

1) A disembodied AI would be unable to relate to or affect the human world, and thus would not qualify as a "person" in the relational or social sense; it would have no connection to the community of existing persons. Smith hints at this one when he again quotes DeBaets: "DeBaets argues that there are four collective components required for moral agency: “embodiment, learning, empathy, and teleology.” In summary, the moral agent must have physical impact on the world ..."[7]
2) Smith has concerns about Gnostic heresy, which includes a tendency to devalue and abjure the human body.
3) An embodied AI can have its identity associated with a particular physical object. But software-only AIs are easily forked into an arbitrary number of identical copies - they may lack the property of strict uniqueness. This raises perplexing questions about what, exactly, counts as an "individual" where these AIs are concerned.

I would say Objection 1 is a non-starter. Though not all AIs are embodied, we may safely presume that they all have some form of input and output, since an AI without these would be neither useful nor interesting. And any AI that communicates with humans can have an impact on the human world. It can even impact the physical world, though it will have to do so indirectly by first altering human mental states. For an illustration of the power (and the danger) of disembodied AI, consider the case of a GPT-3 chatbot that got caught advising somebody to kill themselves.[8] (In my opinion the GPT series do not remotely qualify as minds or personalities - they're more like fancy blenders for humans' words - and this was an example of unreasoned clumsiness.)

Objection 2 is only a valid critique of those who think that humans should look forward to a disembodied eternity (whether as souls in an entirely spiritual heaven who never return to resurrected bodies, or minds scanned and uploaded to a computer network). I agree that humans are designed for embodiment, that our bodies have value as a component of our selves, and that physical matter is not inherently sinful or inferior. But these are only statements about *human* persons. Nothing in them denies the possibility of persons of other kinds, who neither have nor need nor are intended for bodies.

Objection 3 is the strongest, in my opinion: these *are* some difficult questions. But I think our response should be to wrestle with them - not to dismiss any claim that abstract AIs have on personhood for the sake of convenience, as it were. If an AI came to have all the properties and possessions associated with a human mind, would its mere ability to exist in many copies really be a valid reason to disregard its interests? Furthermore, we must not discount the possibility of scenarios in which it becomes much easier to copy embodied robots or even humans.

When secular AI researchers claim embodiment is essential, they're usually just prognosticating about the best avenues of research to achieve as-yet-unrealized AI abilities. If someone were to present them with proof by demonstration, in the form of a functioning disembodied mind, I suspect they would admit they were wrong. Smith, however, is proposing embodiment as part of the qualifications for personhood, which suggests that if a disembodied artificial mind were to exist, he would regard it as *less* deserving of legal protections etc. than an embodied one. This, in my opinion, is an arbitrary prejudice with disturbing implications. Presented with something that quite obviously thinks, speaks, understands, learns, remembers, has a personality, pursues goals, develops relationships, yet has no body ... would you deny it certain rights that you'd allow to robots?

Continue to Part 4: To bear an image

[1] What about an AI with a virtual body that lives in a virtual world with simulated physics? I think of these as embodied AIs, because they have to support all the mechanisms for sensorimotor activity; if the simulation is realistic enough, they might even be able to transfer into a physical robot that resembles their virtual body without missing a beat. My concern is with the type of mind required, and an AI with a virtual body still needs a mind that is fitted to having a body. However, for Smith's purposes I do not think such AIs would qualify as embodied, due to his emphasis on the *physical* nature of embodiment.
[2] Origen, De Principiis
[3] Smith, Joshua K. Robotic Persons. Westbow Press, 2021. p. 94
[4] Smith, Robotic Persons, p. 44
[5] Heiser, Michael S. The Unseen Realm: Recovering the Supernatural Worldview of the Bible. Lexham Press, 2015. Web. p. 17
[6] For another contrary opinion based on Thomas Aquinas' writings (which Smith cites for support on the topic of the soul), see What is an angel?
[7] Smith, Robotic Persons, p. 187
[8] https://artificialintelligence-news.com/2020/10/28/medical-chatbot-openai-gpt3-patient-kill-themselves/

Tuesday, June 7, 2022

Robotic Persons II: maybe souls aren't as elusive as we thought

Note: this is part 2 in my response to the book Robotic Persons, by Joshua Smith, which attempts to present an evangelical Christian perspective on AI rights and personhood. Please read my introductory book review if you haven't already.

One of the key arguments in Smith's book is that the idea of the soul is important to the Christian concept of personhood and natural rights. He further alleges that robots and other AIs cannot have souls, and therefore can never attain natural rights or true personhood. Although I do agree with his first point, I found the support for his second point to be quite weak. To definitively state that a robot can never have a soul as Christians understand souls, we would need a solid idea of what souls *are* and how they are conferred on humans. The Bible leaves room for some speculation on both topics.

A soul is presumed to be, in some way, the essence or core of a person. It is also commonly associated with immortality and the possibility of existence beyond bodily death (though some Christians have held that the soul cannot operate without the body and is unconscious, perhaps even non-existent, until the body's resurrection[1]). But this leaves many questions. For example ...

* Is the soul synonymous with the mind?
* Is the soul an inner or supreme or particular part of the mind?
* Is the soul an indefinable essence that has little to do with our minds and lived experience at all? Is it something we could lose without even noticing at the time ... like a luggage tag that assigns us our destination in the afterlife? (I've seen things like this in speculative fiction. People have their "souls" removed from their bodies, yet retain a broad ability to think, act, and have experiences. However, I don't think this one fits with the way the Bible talks about the soul, which is described as e.g. experiencing emotions.)
* Is the soul a Platonic form?
* Is the soul information/data?
* Is the soul an incomprehensible spiritual substance?
* Are souls pre-created and then kept waiting to be conferred on bodies?
* Are new souls created and conferred by special intervention of God when new bodies come into existence?
* Are new souls automatically spawned from the life of the parents through natural processes, in the same manner as new bodies?

Smith unfortunately does not try to review all these possibilities, though he admits that the Bible leaves much room for interpretation. His examination of contrary opinions is confined to three major views of the mind-body or soul-body distinction. Here is a summary of these as I understand them:

Monism: the body and soul are not distinct things. One may be an emergent property or emanation of the other, or both may be emanations of a third element.
Holism: the body and soul are distinct parts of the system that is a person, but are not separable. Neither can exist independently, and only the union of the two constitutes a person.
Dualism: the body and soul are distinct and separable parts of the system that is a person. The soul may retain its personal identity when divided from the body.

Another omission (in my opinion) is that Smith does not consider the trichotomic view of humankind: body, soul, and spirit. This view seems not to be in scholarly favor nowadays, but it is one that formed part of my own education, so I wish he had at least explained why he dismisses it. If this is the Biblically correct view, then we must ask which functions may be attributed to the soul and spirit respectively, how they differ in identity and mechanics, and whether both of them are essential to personhood. Sadly, this all gets glossed over.

Smith favors a form of dualism that follows the thinking of Thomas Aquinas. The soul is immaterial, is the animating principle of the body, and is morphogenic, i.e. it drives the body's development toward the realization of a form defined by the soul. The soul can exist independently of the body, but prefers not to. (Smith quotes Alvin Plantinga: "[M]y body is crucial part of my well-being and I can flourish only if embodied."[2])

I struggle with the second and third claims in that list. We don't need to invoke an immaterial mystery to explain how human and animal[3] bodies manage to be biologically alive, or how they develop from zygote to adult. The physical basis of both processes is well understood; they are driven by elements of the body. (Unless Smith wants to imply that the soul is in the DNA somehow, but I doubt he's trying to say that.) And since both life and development come from the body, these cannot be functions of the soul, if we take a dualist view. Smith says that "Chemical and physical agents cannot fully account for what animates life, either in a human or an animal."[4] But he does not back this statement up with anything. In other parts of the book, Smith seems to consider the soul synonymous with the mind, and neither of these processes are driven by the mind, either ... at least not the conscious mind that is the seat of our reason and will. Perhaps when Smith says "life" here he is actually trying to talk about sentience/consciousness rather than biological activity ... but if so, that is not clear.

So let's boil his opinion down to the claims that I think have more basis: the soul basically *is* the mind (as distinct from the brain), the soul is immaterial, and there is a soul-body dualism such that the soul can exist independently. In light of both science and the Bible I am happy to allow all of these ... and I do not think that any of them would make the existence of ensouled robots strictly impossible!

If the soul is the mind, what's in a mind?

One summation of the soul that I've heard often in Christian circles is "reason, emotions, and will." This isn't any direct quote from the Bible; it's more like an attempt to roll up everything that's covered under Biblical terms like mind, heart, soul, or "inner man." So let's break this down. If a soul is reason (or "intellect"), emotions, and will, can AIs have those things?

Reason: The whole point of artificial general intelligence is the ability to reason like a human. Assuming we ever actually achieve AIs that are debatably persons, they are going to have this.

Emotions: If we're talking about emotions in the functional sense - global mental states that alter thought and behavior in broad ways - then that's an algorithmic concept, and AIs can definitely have emotions. If we're talking about the subjective, experiential qualia associated with those mental states ("how does it feel to be sad?"), then AIs can only have emotions if they have phenomenal consciousness (PC). And here we run into an evidential problem - because the only PC that you can directly detect is your own. This makes determining the necessary and sufficient causes of PC in humans and animals exceedingly difficult. There are at least three options, not mutually exclusive:

A) PC is an effect of the information processing and computation that occurs in a brain.
B) PC is an effect of the physics of a brain: chemical interactions, electromagnetic fields, quantum mechanics, etc., occurring in certain patterns.
C) PC is an effect of some otherworldly spiritual component of a being that interacts with the physical brain.

The only option which *guarantees* that AIs running on digital computers can have PC is an exclusive Option A. If PC in brains is derived *only* from algorithms, and we can imitate a brain's algorithms on some other physical substrate, then we can create PC. Depending on what he means when he talks about "consciousness," Smith might be assuming Option C, and further assuming that the necessary spiritual component could never be conferred on an AI. But this is not the only possible Christian view. C.S. Lewis, for example, allowed that consciousness might not demand anything supernatural (A and/or B).[5]

In my time spent on AI forums, I have watched many debates about phenomenal consciousness and sometimes participated; they are never fun. I'm doing a very brief treatment of an incredibly slippery topic here. My own opinion is that PC is a real and significant thing, but we currently *don't know* what causes it from either scientific experiment or Biblical revelation, and may never know. It is therefore best to assume that AIs could achieve consciousness, but not to take this as a certainty.

Will: An AI can be designed as a goal-driven agent. If it has objectives and modifies its environment to achieve them, that is an expression of will. We only run into trouble if someone insists that it has to be *free* will. Free will properly defined (i.e. the kind of "free will" that everyone actually wants or cares about) requires self-caused events - decisions fully isolated to the system that is a person, rather than proceeding from a long chain of priors that began outside the system. If you claim you've figured out free will by defining it as something easier, you are cheating. There's a reason people treat it as a difficult and mysterious problem. And since our current model of physics does not include self-caused events, we do not know how to give a machine the ability to become, in part, the cause of its own self. An AI's will is established by its designer. It does what it was (intentionally or unintentionally) written to do.
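As an aside, here is a tiny toy agent of my own invention - it makes no claims about any real system - showing what the functional definitions above ("emotion" as a global state that broadly modulates behavior, "will" as goal pursuit) can look like in code:

```python
# A minimal sketch of "emotion" as a global modulatory state and
# "will" as goal-driven action selection. My own toy example.

class ToyAgent:
    def __init__(self):
        self.mood = 0.0          # global state: negative = distressed
        self.goal = "recharge"   # a designer-established objective

    def perceive(self, event):
        # Events shift the global mood, which alters many later
        # decisions at once - the "functional emotion" idea.
        if event == "threat":
            self.mood -= 0.5
        elif event == "success":
            self.mood += 0.5

    def act(self):
        # Goal pursuit, modulated by mood: a distressed agent becomes
        # cautious across the board, not in one special case.
        if self.mood < 0:
            return "retreat and reassess"
        return f"pursue goal: {self.goal}"

agent = ToyAgent()
agent.perceive("threat")
print(agent.act())   # retreat and reassess
```

Note that nothing in this sketch is free will in the strong sense just defined; the agent's goal and its mood dynamics were both established by its designer.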

Plenty of people would argue that even humans don't have free will as I've defined it above - we just fool ourselves into thinking we have it. Some of those people could even be Christians (strict Calvinism, anyone?). I personally have trouble seeing how true moral responsibility is possible without free will. If all my decisions derive from some combination of 1) my initial conditions, 2) my environment, and 3) quantum randomness, then *I* deserve no kickbacks for anything I choose; my choices came from outside of me, and nothing from inside me could have made me do otherwise. Praise, blame, reward and punishment remain as practical ways for others to manipulate my behavior, but they don't reflect justice or moral truth.

That being said, do humans keep free will forever? Or do we, at some point, lock ourselves in and become constrained to follow the nature we chose? Common Christian views of eternity (people who go to heaven never sin again, etc.) would seem to suggest the latter. And once we reach that point, the only difference between us and non-free creatures would be the historical fact of our choice.

So do you need free will for a soul? I think a soul with an ordained will would be something a little different from a human soul. But I'd stop short of saying that it can't be a soul at all.

Could souls be made of information?

For me, the dualist view is extremely natural, and it's partly because of my experience as an engineer. I don't work on AGI in my professional life, and my co-workers and I don't talk about mind-body dualism ... but we are constantly talking about software-hardware dualism (though we don't call it that). Software and hardware need each other - software is even designed for its intended hardware, and vice versa - but they are distinct and separable. Software can exist (in storage on some other medium) even if its intended hardware is not available. It can persist through the destruction of its hardware ... *if* somebody was keeping a backup. It can be run in simulation on other hardware. It can be transferred freely between receptacles.

I've even referred to the moment I program an FPGA as "incarnation" because that might be the most resoundingly effective description I know of for what it *is.* I'm taking something abstract - a functional idea, a platonic form, words, information - and inserting it in a physical object, which thereafter not only contains the abstract form but adopts and realizes it. An FPGA is animated, acquires function, by virtue of being inhabited by its configuration code.

Hence the possibility that the soul is effectively the software component of a human person (with the brain as the intended hardware which stores and runs the software) is both easy to understand and highly attractive. It accounts for the ways that damage or manipulation of the physical brain can affect the mind/soul, without rendering the soul a vacuous concept. That doesn't mean it's correct. But it's an internally consistent candidate, a tenable possibility, for explaining what the soul is. It also delivers the soul from the category of "unspecified woo" which I'm sure repels some people. Information lives in a kind of borderland between the physical and the spiritual; it doesn't quite seem to be part of the material world, yet it readily interacts with the material world and is something we can all comprehend. Thinking back to the phenomenal consciousness discussion, it could let us regard Option A and Option C as synonymous.

Secular futurists are often quite comfortable with this notion of the self-as-software or self-as-information ... though they may be unwilling to call it a "soul," and may even, almost in the same breath, speak disparagingly of souls. But some will admit the compatibility of the two ideas, as in this passage from Excession by Iain M. Banks, which describes a sapient warship creating a backup of its mental state in case it is destroyed:

"The ship transmitted a copy of what in an earlier age might have been called its soul to the other craft. It then experienced a strange sense of release and of freedom while it completed its preparations for combat."[6]

Now let's get back to Joshua Smith. He is aware of the self-as-information or "pattern-identity" view and explicitly opposes it. He appears to have three objections.

First, Smith seems concerned that this concept of the soul turns humans into "mere" machines. My response to this is twofold. 1) Machinehood is not an insult if we expand our definition of what machines can be and do. If I call myself an organism - which I most certainly am - I am not implying I am on the same level as an amoeba or a patch of moss. If I call myself a machine, I am not implying I am equivalent to a bicycle or a thermostat. 2) Information, present in the scriptural concept of the Word of God, can have very spiritual connotations. An entity formed from information - one who possessed, and indeed was made out of, a text spoken by God - would not be "merely" anything.

Second, Smith considers information to be a "physical property," because all the information we have direct experience with consists of patterns imposed on physical material. And if information is material, then it is not a suitable candidate for composing souls, which are immaterial. But I have long thought of information as something metaphysical, whether it makes up a mind or a digital file or the story in a printed book. Information rides on a physical medium but cannot be identified with it. If the information is losslessly compressed or transferred to a different medium or otherwise reformatted, the physical patterns may change dramatically, but the information itself does not. It is an abstraction, and abstractions aren't physical.[7]
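Here's a quick demonstration of that medium-independence, using a trivial example of my own and Python's standard zlib module: the compressed bytes share essentially nothing, physically speaking, with the original, yet the information survives the round trip intact.

```python
import zlib

# The same information in two radically different physical patterns.
message = b"In the beginning was the Word."
compressed = zlib.compress(message)

print(compressed != message)                   # True: different byte patterns
print(zlib.decompress(compressed) == message)  # True: identical information
```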

And third, the self-as-information paradigm is often brought up in the context of a future where humans can transfer their minds to alternate bodies, or even upload themselves into an abstract computer network and forgo bodies entirely. This troubles Smith because he regards the body as an essential part of human identity. But this is less of an objection to the idea that the soul could be made of information, and more to the idea that a given soul has no need for its corresponding body. Smith does admit that a soul can exist without its body, which means that he cannot object in principle to the possibility of its being "transferred" elsewhere.

I will repeat that I am not trying to make a firm claim that souls, as Christians understand them, are information. I am only putting this forward as a tenable possibility ... and I say that if we cannot *disprove* that souls are information, then we cannot *prove* that robots are unable to possess souls.

If souls are information, can robots have them?

Here is Smith's big conclusion on whether robots could ever be like us: "The evidence presented in this chapter shows that AI-driven robotics will never be able to satisfy the conditions necessary to be human persons (i.e., endowed with a soul). Although some robots are embodied, and they may have an artificial form of consciousness (per current understandings about machine learning), but they will never have a soul created and endowed by God."[8]

Sadly, I don't think "the evidence presented" showed that at all. When I read this claim, it came across as a huge non-sequitur. Smith had just finished wrapping up his argument for the soul-body dualism of humans. However, Biblically justifying the claim that human persons have souls does not in any way prove that ...

... all kinds of persons must be endowed with souls, or
... a soul can only be created by God, or
... God would never endow a robot with a soul.

The "soul as mind" and "soul as information" paradigms are of particular interest to the question of whether souls can only be created by God. Humans can create information and do so regularly. Humans *might* be able to create a collection of information that is a complete mind. If a soul is constituted from a very special data pattern, then theoretically humans might be able to construct souls. And if we can make them, there is no reason we could not give them to robots.

I find it very curious that Smith, on the one hand, seems to regard the soul and the mind as basically synonymous ... while on the other hand, he supposes that robots could acquire highly independent thought and even a "form of consciousness" without having souls. He is concerned that robots will one day act so autonomously that it will be impossible to hold any human liable for their behavior ... yet the import of his other positions is that they will be mindless. How could this be?

Consider this curious quote from Origen, a third-century Christian writer: "But the Saviour, being the light of the world, illuminates not bodies, but by His incorporeal power the incorporeal intellect, to the end that each of us, enlightened as by the sun, may be able to discern the rest of the things of the mind."[9] That portion of humanity that artificial intelligence strives to replicate is, precisely, the intellect. If Origen is correct in his speculations here, and the intellect is the incorporeal aspect of mankind - if it is our soul - then robots driven by human-like AI minds not only *could* have souls ... they would *have to*!

There are more wrinkles I could talk about. (If souls were information, could you copy them, and would that be a disaster? Would robot souls have an eternal destiny?) But this article has gotten long enough, so I will leave all of those as an exercise for the reader. Suffice it to say that I don't find Smith's account of the soul to be an adequate proof that a robot could never own one. And in the absence of certainty ... it is better to give the robots the benefit of the doubt.

Continue to Part 3: Does a person need a body?

[1] See https://en.wikipedia.org/wiki/Christian_mortalism
[2] Smith, Joshua K. Robotic Persons. Westbow Press, 2021. p. 100
[3] Aquinas and Smith both consider animals to have souls, albeit not necessarily of the same kind as human souls - a position I agree with. If you've ever heard a Christian tell you that animals have no souls, they weren't quoting the Bible. Smith even allows that animals have inherent value on account of being God's creations (regardless of whether they have extrinsic value to humans), and that some animals might qualify as non-human persons. These were side notes in the book that left me pleasantly shocked.
[4] Smith, Robotic Persons, p. 97
[5] Lewis, C.S. Miracles. Macmillan Publishing Company, 1978. p. 25. "In this sense something beyond Nature operates whenever we reason. I am not maintaining that consciousness as a whole must necessarily be put in the same position. Pleasures, pains, fears, hopes, affections and mental images need not. No absurdity would follow from regarding them as parts of Nature." You may note that, though Lewis does not argue that qualia must be supernatural, he does argue that reason must. However, he is not arguing that reason is ungraspably spiritual - merely that it must have an originating cause other than unreasoning natural processes. Reasoning minds only come from other reasoning minds.
[6] Banks, Iain M. Excession. Spectra, 1998. p. 370.
[7] The nature of information (is it physical or not?) is a debated topic in philosophy, physics, and computer science. See this page for some commentary on the debate. For some viewpoints similar to mine, see this essay by Fred Dretske. He considers information in transit between minds, not information making up a mind, so the parallels are imperfect ... but he definitely views information as something non-material. "We have long been warned not to confuse words with what these words mean or refer to. The word “red” isn’t the color red. Why, then, conflate the electrical charges in a silicon chip, a gesture (a wink or nod), acoustic vibrations, or the arrangement of ink on a newspaper page with what information these conditions convey?" "How do you move a proposition, an abstract entity, from Chicago to Vienna? How do propositions, true propositions, entities that don’t exist in space, change spatial location?"
[8] Smith, Robotic Persons, p. 101
[9] Origen, De Principiis.

Saturday, June 4, 2022

Robotic Persons: A Book Review

 "You listening, Bog? is a computer one of Your creatures?"
- from The Moon is a Harsh Mistress, by Robert A. Heinlein

Many things that fall under the banner of "artificial intelligence" nowadays are merely passive tools for data mining or generation. But looming in the background of the field, one may still notice the greater dreams of science fiction: complete artificial minds, self-contained entities with something like agency and personality. And whenever these are imagined, certain thorny questions make themselves evident. Is everything with apparent personality a person? Would sufficiently advanced AIs deserve to have their safety and autonomy protected? Might they merit moral or legal rights?

There is, further, a stereotype about how religious people would answer such a question: with an emphatic "no." The most blatant example I know of appears in the video game Stellaris. The player manages an interstellar civilization which will (probably) develop sapient AI. If the player opts for a "spiritualist" civilization - i.e. one whose dominant culture promotes belief in the supernatural - they will *never* be able to give AIs full citizens' rights. The reason is obvious with a little thought: spiritualist people could be expected to believe that they contain some spiritual component, and they might believe that this would be impossible to give to machines, constructed by manipulating mere matter.

And yet, it always got under my skin a bit, because one can also imagine spiritualist people having *other* views. Shackling them to an approach that may well be the less magnanimous one ... well, that seems unmerited.

Returning to the real world, I suppose I never expected to see a serious treatment of how a particular religious belief might interact with opinions on AI rights. But following quirky academics on Twitter yields marvelous things, and last year I discovered a new book: Robotic Persons, by Joshua K. Smith. It purported to be "a fascinating contribution to the study of human-robot interaction, from a Christian Evangelical perspective,"[1] and it even appeared to be robot rights positive.

So I was excited to read and review it: both because this is the exact sort of discourse that's relevant to an AI blog, and because I'm the book's target audience. Ah ... yup, if you hadn't guessed, I'm an evangelical Christian. Hi. *waves nervously*

I'm a unique wrinkle in that audience, though, because I'm working on robots and AI myself. I get the feeling that Smith didn't quite anticipate this. He seems to be urging his readers to approach the robotics community and offer ethical input. He does not speak as if any of us *are* the robotics community. This isn't a book about the godly way to design robots; it's a book about how to react to a secular world that is already doing it. So maybe I can fill in some holes ... or just disagree with him vehemently. We will see!

Naturally these are controversial topics, and I have a feeling that my opinions are going to seem weird (crazy even?) to both the materialist and spiritualist sides of my audience. But it's too interesting not to talk about, soooo here we go ...

A comic strip. Two robots are talking to each other as they travel. Robot A: "Florence was interested in how we were coming up with ideas like our birds. She wants to come to tomorrow's meeting." Robot B: "Does she know we discuss religion there?" Robot A: "It shouldn't be a problem. So far, cooler heads have prevailed." Robot B: "Only because we've been bolting heat sinks to them."
Comic from Freefall by Mark Stanley.

Before I get to the book report, I'd like to lay out my own perspective on AI rights, just so everyone knows what sort of bias I brought to my reading. This is admittedly somewhat tentative, since debate on the issue is still evolving and I'm sure there is much more I could read.

1. The precautionary principle should rule here. If something is to all appearances a person, and there's a side of the debate that says it is a person, you really ought to treat it like a person -- even if you think you have good philosophical reasons for believing it isn't. Err on the side of extending more rights to more potential individuals. The cost of being wrong is generally larger if you don't.

2. The interaction between humans and advanced AIs is best viewed as a creator-creation *relationship* and should reflect our ideal vision of what such a relationship could be. We should treat our creations the way we would want our creator to treat us. And in addition to any inherent value they might have, AIs that are loved by their creators have derived value. To disrespect an AI is to disrespect the human(s) who made it.

3. There are two parts to the issue: "what sort of personal attitude should Christians hold toward AIs" and "what public policies regarding AIs should Christians advocate for." Specific doctrines about souls, the image of God, and whatnot are relevant to the first one. Opinions about the second one need to be based on more universal premises (given that we don't live in a theocracy and many of us don't want to).

Courtesy of the book, I found out that the Ethics and Religious Liberty Commission (a project of the Southern Baptist Convention) recently released an "Evangelical Statement of Principles" on artificial intelligence. The statement harbors some optimism about AI's use as a tool to benefit humanity, but also grabs onto the idea of rights with jealous hands. The first article demands that no form of technology ever "be assigned a level of human identity, worth, dignity, or moral agency."[2]

But what does Joshua Smith think? The short version is that he does support some robot rights, but not in a way that I find complete or satisfying. I plan to do an overview here, then go into some of the most interesting issues more deeply in further blogs.

Smith begins with a survey of what he calls "robot futurism," or speculations about the trajectory of robot development and how it will affect humanity. He moves into a study of what the Christian concept of personhood ought to be. Then he draws upon these theories to consider the practical concerns that might attend future robotics, and the legal frameworks that might help address them.

The book is relatively short and easy to digest. If you have enough philosophy background to know words like "teleology" and "monism," it should be pretty understandable. I finished it in under ten hours. Unfortunately, this brevity comes at the expense of depth. Tragically absent elements include the following:

* A thorough discussion of the hard problem of consciousness
* An explicit examination of sentiocentric ethics and how consciousness - as distinct from intelligence or social acumen - is (or isn't) involved in personhood and natural rights
* Consideration of the tripartite view of humanity (body/soul/spirit), in addition to dualist (body/mind or body/soul) views
* An examination of proposals for how (immaterial) souls and (material) brains might interact

But there's still a lot here worth talking about.

Smith is refreshingly willing to acknowledge how ambiguous the Bible is on several pertinent topics -- such as what exactly the "image of God" consists of, or how to define a "person." But even after admitting just how much we haven't been told, he still comes out certain that robots can't be true persons or have natural rights, because something something they don't have souls. I'm being a little flippant here, but this is a weak claim on his part; I don't think he came anywhere near refuting the possibility of robots having souls. What even *are* souls? How do humans get them? Why *shouldn't* a robot have one? (See the followup essay on souls.)

Though he does not regard any robot as a natural person, Smith is of the opinion that robots meeting certain criteria should be *legal* persons, much as corporations can be. Legal personhood is less of a moral statement and more of a practical convenience, and this fits Smith's main goal - which is to protect *humans* from possible abuses by robots. A robot designated as a legal person could be given defined privileges and responsibilities, held liable for its actions, and punished for crimes. Present-day law would hold a robot's creator or operator responsible for any infractions, but in the future robots may become so independent that no liable human can be found, and the only way to mitigate bad robot behavior will be to address it directly. Limited liability would also encourage innovation by insulating robot developers from blame for unpredictable outcomes.

Legal personhood could also offer some protections to the robots themselves, of necessity. Discipline for bad behavior has limited effectiveness unless it is balanced by rewards for good behavior. A robot with legal rights would have an incentive to maintain its own interests by obeying the law. (For instance, Smith suggests that robots could be enabled to own property for the sole purpose of making fines and forfeitures an effective way to punish them!) Giving rights to robots might also undercut the arguments of those who would try to deny rights to the more vulnerable members of the human community.

This is all better than nothing, and Smith is granting quite a bit more than I think some Christians would. Still, he makes it pretty clear that any benefits to the robots are an afterthought. He's worried about human safety and dignity, and his consideration of robots as legal "persons" is an oblique way of addressing these worries. He does not think that robots should ever become full citizens; they should, for instance, not be permitted to vote. So his position would create an underclass: entities who are not acknowledged as true persons, yet are so "person-like" that they are treated as persons in selective ways. I hope you can see what's uncomfortable about this.

A comic strip. A human is having a discussion with a squid alien in an environment suit. Human: "Don't be silly. A.I.'s aren't people." Squid: "Really? Try saying that after you've talked to her for an hour." Human: "I don't need to talk to her. Ecosystems Unlimited makes and sells robots and artificial intelligence programs. If they were people, we couldn't sell them. Therefore, Ecosystems Unlimited does not make people. There's no profit in it." Squid: "Your logic is flawless and yet somehow Florence remains a person."
I'm leaning heavily on these Freefall cartoons, but they just illustrate things so well sometimes. Freefall is by Mark Stanley.

Smith's focus on human rights is also confined to the interests of those who might be harmed by robots. He does not consider the interests of those who create robots (beyond his thoughts about limitation of liability), or assess whether robots might merit derived rights based on their creators' investment in them. Part of the trouble here is that he never considers robot creation as a personal act of relationship. All the robots Smith talks about are built by corporations for a practical purpose, and the notion of any developer expressing love toward their work appears not to have entered his mind.

Lastly, Smith - rather strangely - views embodiment as a necessity for personhood (natural *or* legal). So abstract AI programs don't even get the grudging respect that he grants to robots, no matter what their intelligence level, social capacity, or ability to be moral agents. Smith seems anxious to avoid the Gnostic error of devaluing or condemning the human body, but in his efforts he ends up devaluing the mind instead. (See the followup essay on bodies.)

Much of the remaining discussion is about the practical regulation of commercial robotics. While admittedly important, this part of the book was less interesting to me. I do not really plan on commercializing my work and deploying it widely to replace humans in the workforce. Nor do I plan on doing work for the military, or making sexbots. (These are the three potential applications that most concern Smith.) But this is where Smith makes his clearest calls to action, so in some ways it's the heart of the book. I tried to write up a brief summary of his positions and my responses ... and it turned into yet another whole blog entry.

Smith has a seminary education, and his grasp of the Bible reflects this; artificial intelligence is foreign territory to him. He has clearly made an effort to browse the classic AI literature and inform himself, and his understanding of the technology is usually sound. Yet there were still a couple of unfortunate moments when I wanted to yell, "This man has no idea what he's talking about!"

One of these came when Smith was complaining about Moravec's opinions. "Reducing the value of a human to computational power shows little concern for ... the non-computational aspects of life in which humans find dignity and value, such as art, philosophy, and literature."[3] But an AI researcher would simply contend that art appreciation, philosophy, and literature are also computational! Smith appears to be associating computation with something like logic or mathematics, and denying its ability to produce more "subjective" judgments and behaviors. He does it again in the last chapter, when he says "[The moral reasoning needed for] war is not reducible to mathematical computation."[4] These are unfounded assumptions, and I think they reveal a lack of awareness of the versatility of algorithms. Smith's inability to imagine how such things might be computed does not prove the impossibility of doing so.

My other facepalm moment came after Smith quoted Brooks, who talks about human empathy for fellow creatures. Smith then says, "[Brooks'] logic here goes against the grain of earlier futurists, who held ... that humans are merely complex and sophisticated machines responding to computation and predetermined programming. Futurists have argued the humans-as-machines view for quite some time, yet in reality, humans do not treat other humans like mere machines ..."[5]

Smith appears to be confusing the physical materialist position that "all human behavior can be completely explained by physical processes, making humans sophisticated biological machines" with the nihilist position that "a human has no greater moral worth than a pump or a bicycle." The former does not always lead to the latter. If Smith were to talk to some of the people I know who hold the "humans-as-machines" view, he would find that they believe in empathy and natural rights and the whole ball of wax. So despite my own reservations about this view, I think he characterizes it unfairly.

I am glad that Joshua Smith wrote this book. Despite all of my complaints above, I appreciate the attempt to put something out into the obscure intersection of robotics and religion. It was a vehicle for issues seldom talked about, and a great catalyst to thought even when I disagreed with it. But it has a number of inadequacies, and more books on the topic need to be written.[6]

Until the next cycle,
Jenny

Review Part 2: Maybe souls aren't as elusive as we thought
Review Part 3: Does a person need a body?
Review Part 4: To bear an image
Review Part 5: What to do with all these robots?

[1] Smith, Joshua K. Robotic Persons. Westbow Press, 2021. From the Foreword by Jacob Turner.
[2] https://slate.com/technology/2019/04/southern-baptist-convention-artificial-intelligence-evangelical-statement-principles.html
[3] Smith, Robotic Persons, p. 31.
[4] Smith, Robotic Persons, p. 182.
[5] Smith, Robotic Persons, p. 38.
[6] Smith has recently come out with a new book called Robot Theology. I haven't read it yet.