Monday, March 11, 2024

AI Ideology III: Policy and Agendas

I'm in the midst of a blog series on AI-related ideology and politics. In Part II, I introduced the various factions that I have seen operating in the AI research and deployment sphere, including the loosely-related group nicknamed TESCREAL and a few others. I recommend reading it first to become familiar with the factions and some of their key people. In this Part III, I hope to get into a few more details about the political landscape these movements create, and what it could mean for the rest of us.

A rather ornate two-pan balance scale.

Balancing risks and rewards

"The problem with extinction narratives is that almost everything is worth sacrificing to avoid it—even your own mother." [1]

Some Doomers become so convinced of the dangers of advanced AI that they would almost rather kill people than see it developed. Yudkowsky himself has infamously proposed an international treaty to slow the development of AI, enforceable by bombing unauthorized datacenters. (To Yudkowsky, a nuclear exchange is a smaller risk than the development of ASI - because a global nuclear war could leave at least a few pathetic human survivors in charge of their own destiny, whereas he believes ASI would not.) [2] Others in the community mutter about "pivotal acts" to prevent the creation of hostile AGI. A "pivotal act" can be completely benign, but sabotage and threats of violence are also possible strategies. [3]

On the milder but still alarming end of things, Nick Bostrom once proposed mass surveillance as a way to make sure nobody is working on hostile AGI (or any other humanity-destroying tech) in their basement. [4]

But the most wild-eyed pessimists seem to have little political traction, so hopefully they won't be taken seriously enough to make the nukes come out. More realistic downsides of too much Doom include diversion of resources to AI X-risk mitigation (with corresponding neglect of other pressing needs), and beneficial technology development slowed by fear or over-regulation.

What about E/ACC, then? Are they the good guys? I wouldn't say so. They're mostly just the opposite extreme, substituting wild abandon and carelessness for the Doomers' paranoia, and a lack of accountability for over-regulation. The "all acceleration all the time" attitude gets us things like the experimental Titanic submersible disaster. [5] I agree with Jason Crawford: "You will not find a bigger proponent of science, technology, industry, growth, and progress than me. But I am here to tell you that we can’t yolo our way into it. We need a serious approach, led by serious people." [6] What precludes E/ACC from qualifying as "serious people"? It's not their liberal use of memes. It's the way they seem to have embraced growth and technological development as ends in themselves, which leaves them free to avoid the hard thinking about whether any particular development serves the ends of life.

So the ideal, in my opinion, is some sort of path between the E/ACC Scylla and the Doomer Charybdis. Both factions are tolerable so long as they're fighting each other, but I would rather see neither of them seize the political reins.

Oversight, Regulatory Capture, and Hype

Efforts to regulate commercial AI products are already underway in both the USA and the EU. So far, I wouldn't say these have addressed X-risk in any way its proponents consider meaningful; they are more focused on "mundane" issues like data privacy and the detection of misinformation. I have yet to hear of any government seriously considering a temporary or permanent moratorium on AI development, or a requirement that AI developers be licensed. Since I first drafted this article, India has adopted something kind of like licensing: companies are advised to obtain approval from the government before deploying new AI models, with a goal of avoiding output that is biased or would meddle with democratic elections. But the advisory is not legally binding and applies only to "significant tech firms," whatever that means - so to me it seems pretty toothless at this stage. [7]

Another thing we're seeing is a wave of lawsuits filed against AI companies for unauthorized duplication and uncompensated exploitation of copyrighted material. Possibly the biggest player to enter this fray so far is the New York Times, which brought a suit against OpenAI and Microsoft, alleging that 1) by using Times articles to train its LLMs, OpenAI was unfairly profiting from the Times' investment in its reporting, 2) LLM tools had memorized sections of Times articles and could be made to reproduce them almost verbatim, and 3) LLM tools were damaging the Times' reputation by sometimes inventing "facts" and falsely citing the Times as their source. Suits have also been brought by Getty Images and the Authors Guild. [8][9] Whatever the outcome of the court battles, AI models may force revisions to copyright law, to make explicit how copyrighted data in public archives may or may not be used for AI training.

As I hinted earlier, companies with no sincere interest in Doomer concerns may still be able to exploit them to their advantage. That sounds counter-intuitive: claiming that your organization is developing a dangerous product seems to run against self-interest. But what about "my team is developing a dangerous product, and is the only team with the expertise to develop it safely"? That's a recipe for potential control of the market. And while concerns about copyright and misinformation have a concrete relationship with the day-to-day operation of these companies, Doom scenarios don't have to. When a danger is hypothetical and the true nature of the risk is hotly contested, it's easier to steer the supposed solution toward something convenient for you.

Four figures sit at a table as if panelists at a conference. They are a lion (representing the United Kingdom), a lady in a robe with a necklace of stars and a headband that says "Europa" (representing the European Union), an Eastern dragon (representing China), and Uncle Sam (representing the United States). They are saying "We declare that AI poses a potentially catastrophic risk to human-kind," while thinking "And I cannot wait to develop it first."
Political cartoon originally from The Economist, at https://www.economist.com/the-world-this-week/2023/11/02/kals-cartoon

Another feature of the current situation is considerable uncertainty and debate about whether present-day ANI is anywhere close to becoming AGI or not. OpenAI is publicly confident that they are working to create AGI. [10] But this is great marketing talk, and that could easily be all it ever amounts to. The companies in the arena have a natural incentive to exaggerate the capabilities of their current (and future) products, and/or downplay competitors' products. Throw in the excited fans eager to believe that the Singularity is just around the corner, and it gets difficult to be sure that assessments of AI capabilities are objective.

Personally I think that a number of the immediate concerns about generative AI are legitimate, and though the companies deploying the AI give lip service to them, they are not always doing an adequate job of self-regulating. So I'm supportive of current efforts to explore stricter legal controls, without shutting down development in a panic. I do want to see any reactive regulation define "AI" narrowly enough to exempt varieties that aren't relevant, since dramatically different architectures don't always share the same issues.

Bias and Culture Wars

None of the factions I discussed in Part II have explicit political associations. But you can see plenty of people arguing about whether the output of a given machine learning model is too "offensive" or too "woke," whether outputs that favor certain groups are "biased" or "just the facts," and whether the model's creators and operators are doing enough to discourage harmful applications of the tool. Many of these arguments represent pre-existing differences about the extent of free speech, the definition of hateful material, etc., imported into the context of tools that can generate almost any kind of image or writing on command.

I will be discussing model bias in much more depth in the next article in this series. What I want to make clear for now is that none of the AI models popular today have an inherent truth-seeking function. Generative AI does not practice an epistemology. It reproduces patterns found in the training data: true or false, good or bad. So when an AI company constrains the output of a model to exclude certain content, they are not "hiding the truth" from users. What they are probably doing is trying to make the output conform with norms common among their target user base (which would promote customer satisfaction). A company with ideologically motivated leaders might try to shift the norms of their user base by embedding new norms in a widely utilized tool. But either way - nothing says the jumble of fact, fiction, love, and hate available from an unconstrained model is any more representative of real truth than the norms embodied in the constraints are. So there's no duel between pure objective reality and opinion here; there's only the age-old question of which opinions are the best.

Uh-oh, Eugenics?

Some material I've read and referenced for these articles [11] implies that Transhumanism is inherently eugenicist. I do not agree with this claim. Transhumanism is inextricably tied to the goal of "making humanity better," but this does NOT have to involve eliminating anyone, preventing anyone from reproducing, or altering people without their consent. Nor does there need to be any normative consensus on what counts as a "better" human. A tech revolution that gave people gene-editing tools to use on themselves as they please would still qualify as Transhumanist, and tying all the baggage of eugenics to such voluntary alterations feels disingenuous. CRISPR therapy to treat sickle-cell anemia [12] doesn't belong in the same mental bucket with racial discrimination and forced sterilization. And Transhumanist goals are broader than genetics. Hate the idea of messing with your own code? You can still be Transhumanist in other ways.

And I would not accuse the general membership of any other letters in TESCREAL of being eugenicist, either. I went over the core goals of each ideology in Part II; none of them are about creating a master race or sorting humanity along a genetic spectrum.

But. There is an ugly eugenicist streak that sometimes crops up within the TESCREAL movements. And it's associated with prominent figures, not just rogue elements.

It begins with the elevation of intelligence (however one chooses to define that) above other positive human traits as THE desirable skill or quality. To some of these people, intelligence is simultaneously the main thing that makes humans human, our only hope of achieving a post-Singularity utopia, and the power that could enable ASI to doom us. It is the world's greatest source of might, progress, and excellence. All other strengths are secondary, and we should be putting our greatest resources into growing smarter. Taken far enough, this can have the side effect of treating the most intelligent people alive now as a kind of elite, while others are implicitly devalued. According to a leaked document, the Centre for Effective Altruism once considered ranking conference attendees by their "Potential Expected Long-Term Instrumental Value" and including IQ as part of the measure. [13]

And where does it end? Well. Here's a quote from Nick Bostrom's Superintelligence:

"Manipulation of genetics will provide a more powerful set of tools than psychopharmacology. Consider again the idea of genetic selection: instead of trying to implement a eugenics program by controlling mating patterns, one could use selection at the level of embryos or gametes. Pre-implantation genetic diagnosis has already been used during in vitro fertilization procedures to screen embryos produced for monogenic disorders ... the range of traits that can be selected for or against will expand greatly over the next decade or two. ... Any trait with a non-negligible heritability - including cognitive capacity - could then become susceptible to selection." [14]

Bostrom goes on to estimate that selection of 1 out of 1000 human embryos (which means killing the other 999, just to be clear) could produce an IQ increase of 24.3 points in the population - or we could get up to 100 points if the project were extended across multiple generations.
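For the curious, here's a rough sanity check on where a number like 24.3 could come from. If we assume (these are my illustrative assumptions, not necessarily the exact model in Bostrom's sources) an additive heritability of about 0.5, a within-batch genetic spread of roughly 7.5 IQ points among embryos from the same parents, and a perfectly accurate genetic predictor, then "pick the best of 1000" is just an order-statistics question about the normal distribution. A minimal Python simulation under those assumptions lands close to the published figure:

```python
import numpy as np

# A minimal Monte Carlo sketch of "pick the best embryo out of a batch."
# The heritability figure and within-batch SD are illustrative assumptions,
# not necessarily the exact model used in the sources cited above.
rng = np.random.default_rng(0)

POP_SD = 15.0        # population IQ standard deviation
ADDITIVE_H2 = 0.5    # assumed additive heritability (illustrative)
# Sibling embryos share parents, so they differ by roughly half the additive
# genetic variance: sqrt(0.5 * 0.5) * 15 = 7.5 IQ points.
WITHIN_BATCH_SD = POP_SD * np.sqrt(0.5 * ADDITIVE_H2)

def expected_gain(batch_size: int, trials: int = 10_000) -> float:
    """Mean IQ gain from selecting the highest-scoring embryo in each batch,
    assuming a perfectly accurate genetic predictor."""
    draws = rng.normal(0.0, WITHIN_BATCH_SD, size=(trials, batch_size))
    return draws.max(axis=1).mean()

for n in (2, 10, 100, 1000):
    print(f"best of {n:>4}: ~{expected_gain(n):.1f} IQ points")
# "best of 1000" comes out near 24 points, close to the quoted 24.3.
```

The point of the sketch is only to show that the headline number follows mechanically from a few optimistic modeling assumptions; it says nothing about whether the procedure is feasible or ethical.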

Since I happen to be publishing this article in the middle of a major controversy about IVF, I feel the need to say that IVF does not inherently have to involve the deliberate destruction of embryos, or the selection of embryos for traits the parents arbitrarily decide are "best." I'm acquainted with someone who did embryo adoption: she chose to be implanted with other couples' "leftover" embryos, and she and her husband are now raising those who survived to birth as their own children. But this sort of thing becomes impossible if one decides to use IVF for "improving" the human gene pool. In that case, "inferior" embryonic humans must be treated like property and disposed of.

When I first read that passage in Superintelligence years ago, it didn't bother me too much, because Bostrom didn't seem to be advocating this plan. The book presents it without an obvious opinion on its desirability, as one of many possible paths to a form of superintelligence. Bostrom even comments that, if this procedure were available, some countries might ban it due to a variety of ethical concerns. These include "anticipated impacts on social inequality, the medical safety of the procedure, fears of an enhancement 'rat race,' rights and responsibilities of parents vis-à-vis their prospective offspring, the shadow of twentieth-century eugenics, the concept of human dignity, and the proper limits of states' involvement in the reproductive choices of their citizens." [15] One reason to anticipate impacts on social inequality is the fact that IVF costs a lot of money - so a voluntary unfunded implementation of this selection program would concentrate the "enhanced" children among wealthy parents. Funded implementations would concentrate them among whichever parents are seen as worthy to receive the funding, which could provide an inroad for all sorts of bias. Bostrom seems to think that people would only have concerns about the deaths of the unselected embryos on religious grounds, but this is far from true. [16][17]

More recently, I've learned that Bostrom actually does like this idea, in spite of all those doubts and downsides that he totally knows about. He thinks, at the very least, that parents doing IVF should be allowed to use genetic testing to choose embryos with certain "enhancements" at the expense of others. The receipts are, ironically, in his lukewarm apology for a racist e-mail that he sent long ago [18]. There's also the fact that he co-authored a paper on the feasibility of increasing intelligence via embryo selection [19]. The paper has much the same information as the book, but here embryo selection is not being presented as one of many options that unscrupulous people might pursue to develop superintelligence; it's being individually showcased.

Is this eugenic interest just a special quirk of Bostrom's? It would seem not. In a recent article by Richard Hanania, which is partly a defense of Effective Altruist ideas and partly a strategy for how best to promote them, I ran across this offhand comment:

"One might have an esoteric and exoteric version of EA, which to a large extent exists already. People in the movement are much more eager to talk to the media about their views on bringing clean water to African villages than embryo selection." [20]

So apparently eugenic embryo selection is a somewhat routine topic in the Effective Altruism community, but they know better than to damage their image by broadcasting this to the general public. Stinky. It would seem that the EA rejection of prejudice only extends so far.

And lest I leave anything out, Bostrom is also infamous for an article in which he contemplated whether less intelligent, but more fertile, groups of people could destroy technological civilization [21]. Hmmm, whom could he have in mind? Part of the reason I'm focusing on Bostrom is that he was my personal introduction to the whole TESCREAL culture. However, he is far from the only prominent person associated with TESCREAL who has dabbled in "scientific racism" or ableism in some way. Scott Alexander, Peter Singer, and Sam Harris have all been implicated too. [22]

Again, the mere association with this nastiness doesn't mean that all TESCREAL ideas have to be rejected out of hand. But I think it is important for people both within and outside of TESCREAL to be aware of this particular fly in the ointment. And be especially alert to attempts to fly "esoteric" policies under the radar while putting on an uncontroversial public face.

In Part IV, I want to take a deeper look at one of several contentious issues in the mundane AI ethics space - algorithmic bias - before I turn to a more serious examination of X-risk.

[1] Anslow, Louis. "AI Doomers Are Starting to Admit It: They're Going Too Far." The Daily Beast. https://www.thedailybeast.com/nick-bostrom-and-ai-doomers-admit-theyre-going-too-far

[2] Yudkowsky, Eliezer. "Pausing AI Developments Isn’t Enough. We Need to Shut it All Down." TIME Magazine. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

[3] Critch, Andrew. "'Pivotal Act' Intentions: Negative Consequences and Fallacious Arguments." LessWrong. https://www.lesswrong.com/posts/Jo89KvfAs9z7owoZp/pivotal-act-intentions-negative-consequences-and-fallacious

[4] Houser, Kristin. "Professor: Total Surveillance Is the Only Way to Save Humanity." Futurism. https://futurism.com/simulation-mass-surveillance-save-humanity

[5] McHardy, Martha. "CEO of Titanic sub joked ‘what could go wrong’ before disaster, new documentary reveals." The Independent. https://www.independent.co.uk/news/world/americas/titanic-submarine-implosion-stockton-rush-b2508145.html

[6] Crawford, Jason. "Neither EA nor e/acc is what we need to build the future." The Roots of Progress. https://rootsofprogress.org/neither-ea-nor-e-acc

[7] Singh, Manish. "India reverses AI stance, requires government approval for model launches." TechCrunch. https://techcrunch.com/2024/03/03/india-reverses-ai-stance-requires-government-approval-for-model-launches/

[8] Grynbaum, Michael M., and Mac, Ryan. "The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work." The New York Times. https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html

[9] Metz, Cade, and Weise, Karen. "Microsoft Seeks to Dismiss Parts of Suit Filed by The New York Times." The New York Times. https://www.nytimes.com/2024/03/04/technology/microsoft-ai-copyright-lawsuit.html

[10] Altman, Sam. "Planning for AGI and beyond." OpenAI Blog. https://openai.com/blog/planning-for-agi-and-beyond

[11] "If transhumanism is eugenics on steroids, cosmism is transhumanism on steroids." Torres, Èmile P. "The Acronym Behind Our Wildest AI Dreams and Nightmares." TruthDig. https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/

[12] "FDA Approves First Gene Therapies to Treat Patients with Sickle Cell Disease." US Food and Drug Administration press release. https://www.fda.gov/news-events/press-announcements/fda-approves-first-gene-therapies-treat-patients-sickle-cell-disease

[13] Cremer, Carla. "How effective altruists ignored risk." Vox. https://www.vox.com/future-perfect/23569519/effective-altrusim-sam-bankman-fried-will-macaskill-ea-risk-decentralization-philanthropy

[14] Bostrom, Nick. Superintelligence. Oxford University Press, 2016. pp. 44-48

[15] Bostrom, Superintelligence. pp. 334-335

[16] Acyutananda. "Was “I” Never an Embryo?" Secular Pro-Life. https://secularprolife.org/2023/12/was-i-never-an-embryo/

[17] Artuković, Kristina. "Embryos & metaphysical personhood: both biology & philosophy support the pro-life case." Secular Pro-Life. https://secularprolife.org/2021/10/embryos-metaphysical-personhood-both/

[18] Thorstad, David. "Belonging (Part 1: That Bostrom email)." Ineffective Altruism Blog. https://ineffectivealtruismblog.com/2023/01/12/off-series-that-bostrom-email/

[19] Shulman, Carl and Bostrom, Nick. "Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?" Published in Global Policy, Vol. 5, No. 1, and now hosted on Bostrom's personal website. https://nickbostrom.com/papers/embryo.pdf

[20] Hanania, Richard. "Effective Altruism Thinks You're Hitler." Richard Hanania's Newsletter. https://www.richardhanania.com/p/effective-altruism-thinks-youre-hitler

[21] Bostrom, Nick. "Existential Risks." Published in Journal of Evolution and Technology, Vol. 9, No. 1, and now hosted on Bostrom's personal website. https://nickbostrom.com/existential/risks

[22] Torres, Émile P. "Nick Bostrom, Longtermism, and the Eternal Return of Eugenics." Truthdig. https://www.truthdig.com/articles/nick-bostrom-longtermism-and-the-eternal-return-of-eugenics-2/
