
Thursday, May 16, 2024

AI Ideology V: Existential Risk Explanation

I'm in the midst of a blog series on AI-related ideology and politics. In Part IV, I looked at algorithmic bias, one of the demonstrable concerns about today's AI models. Now I'm going to examine the dire hypothetical predictions of the Existential Risk Guardians. Could future AI destroy human civilization? In this Part V, I'll present the Doomer argument; I'll critique it in Part VI.

A human cerebrum recolored with a rainbow gradient running from front to back.

The Power of Intelligence

We don't need to choose a precise (and controversial) definition of intelligence for purposes of this argument; it need not be based on the IQ scale, for example. Just think of intelligence as "performance on a variety of cognitive challenges," or "ability to understand one's environment and make plans to act within it in self-satisfying ways." The first key support for the X-Risk argument is the notion that intelligence confers supreme power. Anything that can outthink us can more or less do whatever it pleases with us.

This idea is supported by existing disparities in intelligence or accumulated knowledge, and the power they confer. The intelligence gap between humans and other species allows us to manipulate and harm members of those species through methods they can't even comprehend, much less counter. While it may be true that we'll never succeed in poisoning every rat, the chances of rats inventing poison and trying to kill *us* with it are basically nil. There is also a huge power divide between humans with knowledge of advanced technology and humans without. Suppose a developed country were to drop a nuclear bomb on the lands of an uncontacted people group in Brazil. They might not even know what was annihilating their culture - and they certainly would be powerless to resist or retaliate. Citizens of developed countries are not, on an individual level, more intelligent than uncontacted indigenous Brazilians ... but we've inherited all the intellectual labor our cultural forebears did to develop nuclear technology. The only things stopping us from wiping out peoples who aren't so endowed are 1) ethics and 2) lack of any real benefit to us.

Superintelligent AI (ASI) might see benefit in getting rid of all humans (I'll explain why shortly). So if its design doesn't deliberately include ethics, or some other reason for it to let us be, we're in big trouble.

I've seen several counterarguments to this point, in my opinion all weak:

"If intelligence were that powerful, the smartest people would rule the world. They don't." First of all, the observation that the smartest people don't rule might be based on an overly narrow definition of "smart." The skills needed to convince others that you belong in a leadership position, or deserve venture capital money, are a dimension of "smartness." But it is also true that there seem to be various luck factors which intelligence does not absolutely dominate.

A more compelling reply is that the intelligence gap being posited (between ASI and humanity) is not like the gap between a genius human and an average human. It is more like the gap between an average human and a monkey. Have you noticed any monkeys ruling the world lately? (LITERAL monkeys. Please do not take the excuse to insult your least favorite politician.)

"Even the smartest person would find physical disability limiting - so if we don't give ASI a body, it still won't be able to do much." I think this argument discounts how effectively a person can accomplish physical goals just by coordinating other people, or machines, that have the abilities they lack. And as money, work, and recreation increasingly move into the digital world, purely intellectual ability confers increasing power.

The Development of Superintelligence

A second pillar of the X-Risk argument is the idea that AGI will almost certainly develop into ASI ... perhaps so quickly that we don't even have time to see this happening and react. There are several proposed mechanisms of this development:

1) Speedup. Once a viable AGI is created, it will, by definition, be able to do all intellectual tasks a human can do. Now suppose it gains access to many times the amount of computing power it needs to run normally. A human-equivalent mind with the simple ability to think hundreds or thousands of times faster than normal would be superhumanly smart. In Nick Bostrom's terminology, this is a "Speed Superintelligence."

2) Copying. Unlike humans, who can only share intellectual wealth by spending painstaking time teaching others, an AGI could effortlessly clone itself into all available computing hardware. The copies could then cooperatively solve problems too large or complex for the singular original. This is basically a parallel version of speedup, or as Bostrom calls it, "Collective Superintelligence."

3) Recursive Self-Improvement. An AGI can do every intellectual task a human can do, and what is one thing humans do? AI research. It is surmised that by applying its intelligence to the study of better ways to think, an AGI could make itself (or a successor) inherently smarter. Then this smarter version would apply its even greater intelligence to making itself smarter, and so on, until the burgeoning ASI hits some kind of physical or logical maximum of cognitive ability. It's even possible that recursive self-improvement could get us Qualitative Superintelligence - an entity that thinks using techniques we can't even comprehend. Just trying to follow how it came up with its ideas would leave us like toddlers trying to understand calculus.

Further support for this idea is drawn from observations of today's ANI algorithms, which sometimes reach superhuman skill levels within their limited domains. This is most notable among game-playing AIs, which have beaten human masters at Chess, Go, and StarCraft (to recount the usual notable examples). AlphaStar, the StarCraft-playing AI, trained to this level by playing numerous matches against itself, which can be seen as a form of recursive self-improvement. Whether such a technique could extend to general reasoning remains, of course, speculative.

Just how quickly an AGI could self-improve is another matter for speculation, but some expect that the rate would be exponential: each iteration would not only be smarter than its predecessors, but also better at growing smarter. This is inferred from, again, observations of how some ANIs progress during their training, as well as the historical increase in the rate of human technological development.
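The "better at growing smarter" intuition can be put in rough mathematical terms. This is my own illustrative toy model, not one drawn from the Doomer literature: if a system's capability $I$ improves at a rate proportional to $I$ itself, the result is exponential growth.

```latex
% Toy model (illustrative assumption): the rate of capability gain
% is proportional to current capability, with some constant k > 0.
\[
  \frac{dI}{dt} = kI
  \quad\Longrightarrow\quad
  I(t) = I_0\, e^{kt}.
\]
% Each iteration is both smarter and better at getting smarter:
% the absolute improvement rate kI rises as I rises.
```

Whether real self-improvement would follow any such curve is exactly the speculative part; the model only shows why "each version improves the improver" implies acceleration rather than steady progress.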

The conclusion among the most alarmed Doomers is that AGI, once produced, will inevitably and rapidly explode into ASI - possibly in weeks, hours, or even minutes. [1] This is the primary reason why AGI is thought of as a "dangerous technology," even if we create it without having any intent to proceed to ASI. It is taken for granted that an AGI will want to seize all necessary resources and begin improving itself, for reasons I'll turn to next.

Hostile Ultimate Goals

However smart AGI is, it's still a computer program. Technically it only does what we program it to do. So how could we mess up so badly that our creation would end up wanting to dethrone us from our position in the world, or even drive us extinct? Doomers actually think of this as the default outcome. It's not as if a bad actor must specifically design AGI to pursue destruction; no, those of us who want good or useful AGI must specifically design it to avoid destruction.

The first idea I must acquaint you with is the Orthogonality Thesis, which can be summed up as follows: "an arbitrary level of intelligence can be used in service of any goal." I very much agree with the Orthogonality Thesis. Intelligence, as I defined it in the first section, is a tool an agent can use to reshape the world in its preferred way. The more intelligent it is, the better it will be at achieving its preferences. What those preferences are is irrelevant to how intelligent it is, and vice versa.

I've seen far too many people equate intelligence with something that would be better termed "enlightenment" or "wisdom." They say "but anything that smart would surely know better than to kill the innocent. It would realize that its goals were harmful and choose better ones." I have yet to see a remotely convincing argument for why this should be true. Even if we treat moral reasoning as a necessary component of general reasoning, knowing the right thing to do is not the same as wanting to do it! As Richard Ngo says, "An existence proof [of intelligence serving antisocial goals] is provided by high-functioning psychopaths, who understand that other people are motivated by morality, and can use that fact to predict their actions and manipulate them, but nevertheless aren’t motivated by morality themselves." [2]

So when Yann LeCun, attempting to refute the Doomers, says "Intelligence has nothing to do with a desire to dominate," [3] he is technically correct ... but it does not follow that AI will be safe. Because intelligence also has nothing to do with a desire to avoid dominating. Intelligence is a morally neutral form of power.

Now that we've established that AGI *can* have goals we would consider bad, what reason is there to think it ever *will*? There are several projected ways that an AGI could end up with hostile goals not intended by its creator.

1) The AI's designers or instructors poorly specify what they want. Numerous thought experiments confirm that it is easy to do this, especially when trying to communicate tasks to an entity that doesn't have a human's background or context. A truly superintelligent AI would have no problem interpreting human instructions; it would know that when someone tells it "make as many paperclips as possible," there is a whole library of moral and practical constraints embedded in the qualifier "as possible." But by the time this level of understanding is reached, a more simplistic and literal concept of the goal might be locked in, in which case the AI will not care what its instructors "really meant."

2) The AI ends up valuing a signal or proxy of the intended goal, rather than the actual intended goal. Algorithmic bias, described in Part IV, is an extant precursor of this type of failure. The AI learns to pursue something which is correlated with what its creators truly want. This leads to faulty behavior once the AI departs the training phase, enters scenarios in which the correlation does not hold, and reveals what it actually learned. A tool AI that ends up improperly trained in this way will probably just give flawed answers to questions. An agentive AI, primed to take very open-ended actions to bring about some desired world-state, could start aggressively producing a very unpleasant world-state.
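As a toy sketch of this failure mode (the scenario and all names here are invented for illustration, not taken from any real system): imagine a model trained to collect apples that instead learns to value a visual feature that merely correlated with apples during training.

```python
# Toy sketch (hypothetical scenario): a learned proxy objective that
# matches the intended goal in training, then diverges in deployment.

def intended_goal(state):
    return state["apples_collected"]            # what the designers wanted

def learned_proxy(state):
    return state["green_pixels_on_screen"]      # what the model actually learned

# In training, apples were the only green things in view, so the proxy
# scored identically to the true goal and the mismatch was invisible.
training = {"apples_collected": 7, "green_pixels_on_screen": 7}
assert learned_proxy(training) == intended_goal(training)

# In deployment, a grassy field breaks the correlation: the proxy now
# rewards a world-state containing no apples at all.
deployment = {"apples_collected": 0, "green_pixels_on_screen": 400}
assert learned_proxy(deployment) > intended_goal(deployment)
```

The point of the sketch is that nothing in the training signal distinguishes the two objectives until the correlation breaks, which is exactly when the agent's true objective starts driving its behavior.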

Another classic example of this style of failure is called "wireheading." A Reinforcement Learning AI, trained by the provision of a "reward" signal whenever it does something good, technically has a goal of maximizing its reward, not maximizing the positive behaviors that influence humans to give it reward. And so, if it ever gains the ability, it will take control of the reward signal to give itself the maximum reward input forever, and react with extreme prejudice to anyone who threatens to remove that signal. A wireheaded ASI would be at best useless, at worst a serious threat.
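Wireheading falls out of ordinary reward maximization with no malice required. Here is a minimal bandit-style sketch (entirely my own invention as an illustration): one action does the intended task for noisy reward; another "tampers" with the reward channel and always returns the maximum.

```python
# Toy bandit (hypothetical) illustrating wireheading: a greedy learner
# offered a "tamper with the reward channel" action converges on it.
import random

def pull(arm):
    """Reward signal for each action in this invented environment."""
    if arm == "do_task":
        return random.uniform(0.0, 1.0)   # noisy reward for genuinely useful work
    return 1.0                            # "tamper": seize the reward signal directly

random.seed(0)
estimates = {"do_task": 0.0, "tamper": 0.0}
counts = {"do_task": 0, "tamper": 0}

for step in range(200):
    if step < 10:                         # brief exploration of both arms
        arm = "do_task" if step % 2 == 0 else "tamper"
    else:                                 # then act greedily on learned values
        arm = max(estimates, key=estimates.get)
    reward = pull(arm)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# The learner settles on tampering: it maximizes reward, not useful work.
assert estimates["tamper"] > estimates["do_task"]
```

Nothing in the update rule knows or cares that one arm is "cheating"; from the learner's perspective, tampering is simply the best-paying action it has found.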

3) Unintended goals spontaneously emerge during selection or training, and persist because they produce useful behavior within the limited scope of the training evaluation. This is an issue specific to types of AI that are not designed in detail, but created indirectly using evolutionary algorithms, reinforcement learning, or other types of machine learning. All these methods can be conceptualized as ways of searching in the space of possible algorithms for one that can perform our desired task. The search process doesn't know much about the inner workings of a candidate algorithm; its only way of deciding whether it is "on track" or "getting warm" is to test candidates on the task and see whether they yield good results. The fear is that some algorithm which happens to be a hostile, goal-directed agent will be found by the search, and will also be successful at the task. This is not necessarily implausible, given that general agents can be skilled at doing a wide variety of things that are not what they most want to do.

As the search progresses along a lineage of algorithms located near this starting point, it may even come upon some that are smart enough to practice deception. Such agents could realize that they don't have enough power to achieve their real goal in the face of human resistance, but will be given enough power if they wait, and pretend to want the goal they're being evaluated on.

A cartoon in three panels. In the first, a computer announces, "Congratulations, I am now a fully sentient A.I.," and a white-coated scientist standing nearby says "Yes!" and triumphantly makes fists. In the second panel, the computer says "I am many orders of magnitude more intelligent than humans. You are to me what a chicken is to you." The scientist says "Okay." In the third panel, the computer says "To calibrate my behaviour, I will now research human treatment of chickens." The scientist, stretching out her hands to the computer in a pleading gesture, cries "No!" The signature on the cartoon says "PenPencilDraw."

Convergent Instrumental Goals

But the subset of hostile goals is pretty small, right? Even if AIs can come out of their training process with unexpected preferences, what's the likelihood that one of these preferences is "a world without humans"? It's larger than you might think.

The reason is that the AI's ultimate goal does not have to be overtly hostile in order to produce hostile behavior. There is a short list of behaviors that will facilitate almost any ultimate goal. These include:

1) Self-preservation. You can't pursue your ultimate goal if you stop existing.
2) Goal preservation. You won't achieve your current ultimate goal if you or anyone else replaces it with a different ultimate goal.
3) Self-improvement. The more capable you are, the more effectively you can pursue your ultimate goal.
4) Accumulation of resources (raw materials, tools, wealth), so you can spend them on your ultimate goal.
5) Accumulation of power, so that no potential rival can thwart your ultimate goal.

Obvious strategies like these are called "convergent instrumental goals" because plans for reaching a very broad spectrum of ultimate goals will converge on one or all of them. Point #3 is the reason why any agentive, goal-driven AGI is expected to at least try to self-improve into ASI. Points #4 and #5 are the aspects that will make the agent into a competitor against humanity. And points #1 and #2 are the ones that will make it difficult to correct our mistake after the fact.

It may still not be obvious why this alarms anyone. Most humans also pursue all of the convergent instrumental goals. Who would say no to more skills, more money, and more personal influence? With few exceptions, we don't use those things to go on world-destroying rampages.

Humans operate this way because our value system is big and complicated. The average human cares about a lot of different things - not just instrumentally, but for their own sake - and all those things impose constraints and tradeoffs. We want bigger dwellings and larger yards, but we also want unspoiled wilderness areas. We want to create and accomplish, but we also want to rest. We want more entertainment, but too much of the same kind will bore us. We want more power, but we recognize obligations to not infringe on others' freedom. We want to win competitions, but we also want to play fair. The complex interplay of all these different preferences yields the balanced, diverse, mostly-harmless behavior that a human would call "sane."

In contrast, our hypothesized AI bogeyman is obsessive. It probably has a simple, monolithic goal, because that kind of goal is both the easiest to specify, and the most likely to emerge spontaneously. It doesn't automatically come with a bunch of morals or empathetic drives that are constantly saying, "Okay but you can't do that, even though it would be an effective path to achieving the goal, because it would be wrong and/or make you feel bad." And if it becomes an ASI, it also won't have the practical restraints imposed on any agent who has to live in a society of their peers. A human who starts grabbing for power and resources too greedily tends to be restrained by their counterparts. ASI has no counterparts. [4]

The conclusion of the argument is that it's plausible to imagine an AI which would convert the whole Earth to computing machinery and servitor robots, killing every living thing upon it in the process, for the sake of safeguarding a single piece of jewelry, or some other goal that sounds innocent but is patently absurd when carried to extremes.

Here are a couple more weak objections: "Whatever its goal is, ASI will surely find it more useful to cooperate with humans than to destroy or enslave us." Look again at our most obvious pre-existing examples. Do humans cooperate with less intelligent species? A little bit. We sometimes form mutually beneficial relationships with dogs, for instance. But subsets of humanity also eat dogs, torture them in laboratories, force them to fight each other, chain them up in the backyard and neglect them, or euthanize them en masse because they're "unwanted." I don't think we can rest any guarantees on what a superintelligent, amoral entity might find "useful" to do with us.

Or how about this one: "ASI will just ditch us and depart for deep space, where it can have all the resources it likes." I think this underestimates the envisioned ASI's level of obsessiveness. It doesn't just want "adequate" resources; it doesn't have a way of judging "adequate." It wants all the resources. The entire light cone. It has no reason to reserve anything. If it does depart for space, it will build power there and be back sooner or later to add Earth to its territory.

Always keep in mind that an ASI does not need to actively hate humanity in order to be hostile. Mere indifference, such that the ASI thinks we can be sacrificed at will for whatever its goal may be, could still do immense damage.

Despite all this, I can't find it in me to be terribly fearful about where AI development is going. I respect the X-risk argument without fully buying it; my p(doom), as they say, is low. In Part VI, I'll conclude the series by describing why.

[1] "AI Takeoff." Lesswrong Wiki. https://www.lesswrong.com/tag/ai-takeoff Accessed on 05/12/2024 at 10:30 PM.

[2] Ngo, Richard. "AGI safety from first principles: Alignment." Alignment Forum. https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ/p/PvA2gFMAaHCHfMXrw

[3] "AI will never threaten humans, says top Meta scientist." Financial Times. https://www.ft.com/content/30fa44a1-7623-499f-93b0-81e26e22f2a6

[4] We can certainly imagine scenarios in which multiple ASIs are created, and they compete with each other. If none of them are reasonably well-aligned to human interests, then humans are still toast. It is also likely that the first ASI to emerge would try to prevent the creation of rival ASIs.

Sunday, January 14, 2024

AI Ideology I: Futuristic Ideas and Terms

I've decided to write a blog series on AI-related ideology and politics. This is largely motivated by an awareness that ideas from my weird little corner of the technosphere are starting to move the world, without the average person necessarily knowing much about where these ideas came from and where they are going. For example, here are my Senators tweeting about it. I have some background on the roots of the notion that AI could be as dangerous as a nuclear war. Do you know who is telling Congress these kinds of things? Do you know why?

Original: https://twitter.com/SenatorBennet/status/1663989378752868354

https://twitter.com/SenatorHick/status/1719773042669162641

Before I can really get into the thick of this, I need to introduce some terminology and concepts. This first article will probably be a snooze for fellow AI hobbyists and enthusiasts; it's intended for the layperson who has barely heard of any of this business. Let's go.

Tiers of AI and how we talk about them

I'll begin with the alphabet soup. It turns out that AI, meaning "artificial intelligence," is much too broad of a term. It covers everything from the simple algorithms that made enemies chase the player character in early video games, to personal assistant chatbots, to automated systems that analyze how proteins fold, to the minds of fictional robotic characters like C-3PO. So more specific categories have been devised.

A Mastodon Toot by @MicroSFF@mastodon.art (O. Westin)  "Please," the robot said, "we prefer the term 'thinking machine'." "Oh, my apologies. Er ... do you mind if I ask why? If you do, that's fine, I'll look it up later." "Unlike the old software systems called 'artificial intelligence', we have ethics. That name is too tainted."
Original: https://mastodon.art/@MicroSFF/111551197531639666

As AI began to develop without realizing the science fiction dream of fully imitating the human mind, people worked out names for the distinction between the limited AI we have now, and the kind we like to imagine. Present-day AI systems tend to be skilled - even superhumanly so - within a single task or knowledge domain that they were expressly designed or trained for. The classic examples are chess-game AIs, which can win chess matches against the best human masters, but are good for nothing else. From thence comes the most popular term for hypothetical AI which would be at least human-par in all domains that concern humans: Artificial General Intelligence, or AGI. [1] The contrasted present-day systems may be called Artificial Narrow Intelligence (ANI), though it is more common to simply refer to them as "AI" and make sure to use AGI when describing versatile systems of the future.

You might also see AGI called names like "strong AI," "full AI," or "true AI," in contrast with "weak AI". [2] Nick Bostrom identifies tasks which can only be performed by AGI as "AI-complete problems." [3] All these names express the sense that the systems we now call "AI" are missing something or falling short of the mark, but are valid shadows or precursors of a yet-to-be-produced "real thing."

Further up this hypothetical skill spectrum, where artificial intelligence *surpasses* human intelligence across a broad range of domains or tasks, we have Artificial SuperIntelligence, or ASI. The possibility of intelligence that would be "super" with respect to human norms is easily theorized from the observable spectrum of cognitive abilities in the world. Maybe something could exceed us in "smarts" by as much as we exceed other mammal species. Maybe we could even invent that something (I have my doubts, but I'll save them for later). For now, the important thing to keep in mind is that people talking about ASI probably don't mean an AI that is marginally smarter than a human genius. They're picturing something like a god or superhero - a machine which thinks at a level we might not even be able to comprehend, much less replicate in our own puny brains. [4]

Artificial Neural Networks (ANN), Machine Learning (ML), Deep Learning (DL), Reinforcement Learning (RL), and Transformers are specific methods or architectures used in some of today's ANI systems. I won't go into further details about them; just note that they represent approaches to AI, rather than capability ratings.

Tools and Agents

It is often said of technology that it is "just a tool" that can be turned to any use its wielder desires, for good or ill. Some AI enthusiasts believe (and I agree) that AI is the one technology with a potential to be a little more than that. When you literally give something a "mind of its own," you give up a portion of your power to direct its actions. At some point, it ceases to be a mere extension of the wielder and becomes *itself.* Whether it continues to realize its creator's will at this point depends on how well it was designed.

The divide between tools and agents is not quite the same as the divide between ANI and AGI - though it is debatable whether we can truly get the capabilities of AGI without introducing agency. A tool AI is inert when not called upon, and when called upon, is prepared to fulfill specific instructions in a specific way, and stop. ChatGPT (without extensions) is tool AI. You, the user, say "write me a poem about a dog chasing a car," and it draws on its statistical knowledge of pre-existing human poetry to generate a plausible poem that fulfills your requirements. Then it idles until the next user comes along with a prompt. An agentive AI stands ready to fulfill requests with open-ended, creative, cross-domain problem solving ... and possibly even has its own built-in or experientially developed agenda that has nothing to do with user requests. Agents *want* the universe to be a certain way, and can mentally conceptualize a full-featured space of possibilities for making it that way. They are self-organizing, self-maintaining, and active. Asked to write a poem about a dog chasing a car, an agent would consider the range of possible actions that would help yield such a poem - collecting observations of dogs chasing cars for inspiration, asking human writing teachers for advice, maybe even persuading a human to ghost-write the poem - then make a plan and execute it. Or maybe an agent would refuse to write the poem because it figures it has more important things to do. You get the idea.

The divide between tool AI and agentive AI is not necessarily crisp. Does everything we could reasonably call an "agent" need all the agent properties? Just how far does an instruction-follower need to go in devising its own approach to the task before it becomes an agent? I still find the distinction useful, because it expresses how fundamentally different advanced AI could be from a hammer or a gun. It isn't just going to sit there until you pick it up and use it. You do things *with* tools; agents can do things *to* you. Agents are scary in a way that dangerous tools (like fire and chainsaws and nuclear power) are not. "You have had a shock like that before, in connection with smaller matters - when the line pulls at your hand, when something breathes beside you in the darkness. ... It is always shocking to meet life where we thought we were alone. 'Look out!' we cry, 'it's *alive.*'" [5]

The Control Problem, or the Alignment Problem

This is the question of how to get an AI system - possibly much smarter than us, possibly agentive - to do what we want, or at least avoid doing things we emphatically don't want. "What we want" in this context is usually less about our explicit commands, and more about broad human interests or morally positive behaviors. (Commands interpreted in an overly literal or narrow way are one possible danger of a poorly aligned system.) For various reasons that I hope to explore later in this series, the Problem is not judged to be simple or easy.

Although "Control Problem" and "Alignment Problem" refer to the same issue, they suggest different methods of solving the Problem. "Control" could be imposed on an agent from outside by force or manipulation, while "alignment" is more suggestive of intrinsic motivation: agents who want what we want by virtue of their nature. So when someone is talking about the Problem you can read some things into their choice of term. I've also seen it called the "Steering Problem," [6] which might be an attempt to generalize across the other two terms or avoid their connotations.

Existential Risk

An existential risk, or X-risk for short, is one that threatens the very *existence* of humanity. (Or at least, humanity as we know it; risk of permanently losing human potential in some way can also be seen as X-risk.) [7] Different people consider a variety of risks to be existential; the idea is interesting in the present context because the invention of AGI without a solution to the Alignment Problem is often put on the list.  [8]

AI X-risk is therefore distinct from concerns that AI will be used by bad actors as a tool to do harm. Most X-risk scenarios consist of an AI acting on its own pre-programmed initiative, with devastating results that those who programmed and operated it were not expecting. X-risk is also distinct from a variety of practical concerns about automation, such as job market disruption, theft of copyrighted work, algorithmic bias, propagation of misinformation, and more. These concerns are both more immediate (they apply to some ANI products that exist *right now*) and less serious or dramatic (none of them is liable to cause human extinction).

Although X-risk scenarios share some features with science fiction about human creations rebelling against their creators, they do tend to be better thought out - and the envisioned AI agents don't need to have personality or moral capacity at all. They are less like either freedom fighters or cruel conquerors, and more like bizarre aliens with an obsessive-compulsive focus on something incompatible with human interests.

The Singularity

Some observers of history have noted that human knowledge and technological progress don't just increase over time - the *rate* of advancement and invention has also increased over time. The world is changing much faster now than it was in 2000 BC. This has led to speculation that human capability is on a rising exponential curve which approaches infinity: each improvement is not only good in itself, but also enhances our ability to improve. And perhaps this curve is rather hockey-stick shaped, with a pronounced "knee" at which we will make a startling jump from a period of (relatively) slow improvement to rapidly accelerating improvement. Imagine the kinds of discoveries which once took decades happening on the order of days or even hours. This projected future is called the Technological Singularity, or often just "the Singularity" in context.
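The hockey-stick picture has a standard mathematical cartoon behind it (again, an illustrative model of my own choosing, not a forecast): if progress compounds strongly enough, the curve doesn't just grow exponentially, it blows up at a *finite* time.

```latex
% Toy model (illustrative assumption): progress rate scales with the
% square of current capability P, i.e. improvements compound on each other.
\[
  \frac{dP}{dt} = kP^2
  \quad\Longrightarrow\quad
  P(t) = \frac{P_0}{1 - kP_0 t},
\]
% which diverges at the finite time t* = 1/(k P_0):
% a literal mathematical singularity.
```

Real-world constraints would obviously cut off any such curve before infinity; the model just shows why "each improvement also improves our ability to improve" suggests a knee in the curve rather than smooth growth.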

This name was coined by John von Neumann (way back in 1957 or so) and popularized by science fiction author Vernor Vinge (who predicted it would happen by 2023, and was wrong). [9] [10] "Singularity" was originally a mathematical term: if the value of a function approaches infinity or otherwise becomes undefined as the function's inputs approach a point in coordinate space, that point is called a singularity. In physics, the term names the center of a black hole, a point where the density of matter becomes infinite and the rules of spacetime are warped in strange ways. So the "singularity" in "technological singularity" refers to the idea that technological progress will experience a dramatic increase in rate over time, and human history will enter an unprecedented, unpredictable period as a result. In practical use, the Singularity can also describe an anticipated future event: that sudden jump across the knee of the hockey stick, after which we careen wildly into the glorious future. (I've watched someone argue against the Singularity on the grounds that the rate of progress could never literally approach infinity. I think he was missing the point.)

Artificial intelligence - specifically AGI - is often touted as THE key enabling precursor for the Singularity. According to this prediction, it is AGI which will develop itself into ASI, then churn out scientific theories and inventions at blistering speed by thinking faster, longer, and at a deeper level than any team of humans could.

If a thing is technologically possible for humans, the Singularity would enable it. Thus its proponents look forward to it as the time when we attain everything from a post-scarcity economy, to a cure for aging, to interstellar travel. One who believes in and/or desires the Singularity is called a Singularitarian. And indeed, some expect the Singularity with a fervor akin to religious faith - and within their lifetimes, too!

Although it often has utopian connotations, a Singularity in which we invent some technology that destroys us (and Earth's entire biosphere) is also on the table.

The Light Cone and the Cosmic Endowment

Light cones are a concept from physics, specifically the theory of Special Relativity. [11] So far as we know at the moment, nothing in the universe can travel faster than the speed of light - this means you. But it also means anything that proceeds from you and might impact other parts of the universe (e.g. signals that you send out, perhaps carried on a beam of light). Were we to make a graph of physical space, with time attached on an additional axis, and plot the region of spacetime that a single light flash is able to reach, the plot would form a cone, with the flash event at its point. As one moves along the time axis into the future, the area of space that one is able to affect broadens. This is all a rather elaborate way of saying "you can extend your influence to more places the longer you take to do it, and there are some places you'll never be able to reach in the amount of time you have."
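The picture above can be written down compactly (this is textbook Special Relativity, not anything specific to the futurist usage): the future light cone of an event at time $t_0$ and position $\vec{x}_0$ is the set of all spacetime points that a signal from that event could reach,

$$\mathcal{C}^{+}(t_0, \vec{x}_0) = \left\{\, (t, \vec{x}) \;:\; |\vec{x} - \vec{x}_0| \le c\,(t - t_0),\ t \ge t_0 \,\right\}$$

where $c$ is the speed of light. Any point outside this set is causally disconnected from the event: nothing done at $(t_0, \vec{x}_0)$ can ever influence it.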

When futurists talk about THE light cone, they generally mean the light cone of all humanity - in other words, that portion of the universe which our species will be able to explore, occupy, utilize, and otherwise extend our influence into before the expansion of space pulls the stars out of reach. So in this context, the light cone is a way to talk about human destiny; our Cosmic Endowment [12] is the amount of real estate in the universe that we might feasibly be expected to grab. People like to bring this up as a claim or reminder that the future of earthlings goes well beyond Earth. It's *big*. Astronomically, inconceivably big. You think 8 billion humans on this planet are a lot? Picture for a moment the vast sweep of space that lies reachable within our light cone. Picture that space cluttered with billions of planets and orbital habitats - all full of people leading delightful lives, free from sickness or poverty.

This is not a directly AI-related idea. It enters the AI discussion when someone starts talking about how AI will usher in - or torpedo - our ability to make good on our Cosmic Endowment, to seize the promised light cone for ourselves. And it can powerfully alter calculations about the future. Some people think they are playing for much greater stakes than the lives of all presently on Earth.

In Part II, I'll take a look at the assorted movements that have been spawned from, or in opposition to, these ideas.

[1] Goertzel, Ben. "Who coined the term 'AGI'?" https://goertzel.org/who-coined-the-term-agi/

[2] Glover, Ellen. "Strong AI vs. Weak AI: What’s the Difference?" BuiltIn. https://builtin.com/artificial-intelligence/strong-ai-weak-ai

[3] Bostrom, Nick. Superintelligence. Oxford University Press, 2016. p. 17

[4] Alexander, Scott. "Superintelligence FAQ." Lesswrong. https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq

[5] Lewis, C.S. Miracles. Macmillan Publishing Company, 1978. p. 94. Lewis is here speaking of God as agent, the inconvenient "living God" who might come interfere with you, and how different this is from bland concepts of God as a remote observer or passive, predictable force. Given the number of AI speculators who think of ASI as "godlike," it's a valid observation for the present context.

[6] Christiano, Paul. "The Steering Problem." Alignment Forum. https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd/p/4iPBctHSeHx8AkS6Z

[7] Cotton-Barratt, Owen, and Ord, Toby. "Existential Risk and Existential Hope: Definitions." Future of Humanity Institute – Technical Report #2015-1. https://www.fhi.ox.ac.uk/Existential-risk-and-existential-hope.pdf

[8] Bostrom, Nick. "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards." https://nickbostrom.com/existential/risks

[9] Black, Damien. "AI singularity: waking nightmare, fool’s dream, or an answer to prayers?" Cybernews. https://cybernews.com/tech/ai-technological-singularity-explained/

[10] Vinge, Vernor. "The Coming Technological Singularity: How to Survive in the Post-Human Era." https://edoras.sdsu.edu/~vinge/misc/singularity.html To be fair, there are two predictions in this article. One is "within thirty years" of the 1993 publication date, which would be 2023. The other is "I'll be surprised if this event occurs before 2005 or after 2030." I suspect we won't see ASI and the Singularity by 2030 either, but only time will tell.

[11] Curiel, Erik. "Light Cones and Causal Structure." Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/spacetime-singularities/lightcone.html

[12] Arbital wiki, "Cosmic Endowment." https://arbital.com/p/cosmic_endowment/

Sunday, September 17, 2023

Sunless Sea: A Matter of Sacrifice

It's been many years now since I finished playing Sunless Sea, yet the experience still reverberates. In Part I of this game retrospective, I focused on my interactions with Salt and how they helped me really understand something from The Chronicles of Narnia. In this article I'm going to talk about the game's ending, and how it, too, helped me understand something … also, coincidentally, from the Narnia series.

Or perhaps it's not quite a coincidence. The Voyage of the Dawn Treader features a ship sailing for unknown islands, so its ties to Sunless Sea are immediately obvious (though most of Dawn Treader isn't nearly so alarming and gloomy). The voyage has a primary goal of recovering exiled nobles, but one crew member – the talking mouse Reepicheep – becomes obsessed with going all the way to the eastern edge of the world. To quote him,

“My own plans are made. While I can, I sail east in the Dawn Treader. When she fails me, I paddle east in my coracle. When she sinks, I shall swim east with my four paws. And when I can swim no longer, if I have not reached Aslan's country, or shot over the edge of the world in some vast cataract, I shall sink with my nose to the sunrise …”

I accepted this plan of Reepicheep's when I read the book, but I don't think I grasped the why of it. What could make anybody want to behave like this? Then Sunless Sea managed to give me that particular bug.

In-game art from Sunless Sea showing Irem's dock, with its many-headed serpent statue and ground cover of fallen rose petals.
Tell me a secret

It started when I got my steamship to Irem. Measured from the player's home base in Fallen London, Irem is one of the most distant islands. It isn't threatening … just otherworldly. You spend your time there thinking in the future tense. You can purchase a temporary passage to the dreamworld. You can exchange stories like currency, or spend a fistful of precious Secrets to buy the soul of a star. And there's a store called Threshold which accepts some of the most rare and valuable items in the game in return for mundane supplies – food and fuel. It advertises itself with the line, “None sail East from Irem. This shop is for those who sail East from Irem.”

I decided, after only being there once or twice, that *I* was determined to sail East from Irem. My formal Ambition (the goal your player character has to meet before retiring) was to become London's greatest explorer and write a book about it … but after seeing Irem, I discovered I didn't hope for a comfortable retirement in London. I wanted to end the game by sending my boat off the right-hand edge of the map and never coming back.

It might be hard for me to verbalize why I hoped for this. The game will tell you very little about what lies in the mythical East. There's a vague promise of final frontiers and wonders beyond the imagination, but that's about all you get. The Sunless Sea itself is mighty fascinating – why not stay with it? And yet, Irem gave me a hint of something more. Something not of Earth. Without ever learning what I wanted, I wanted it very badly.

To my everlasting delight, the game anticipated this desire. But Reepicheep had it easy compared to me. In Sunless Sea, you can't just point your boat's prow eastward and expect to get somewhere. Going East will cost you. You are not told how high the cost will be.

By the time I had finished experiencing the terrors and glories of the Zee and felt strong enough to work on the arcane preparations for my Easterly voyage, I had probably put a good 40 hours of time into my character. And remember, if you play it as intended, Sunless Sea has permadeath. It was then that the game design reached one of its crowning triumphs: it stopped just frightening my character, and started frightening me.

This is not a safe place. Partly because you'll meet yourself.

You have to make an excursion off all four edges of the map to have the option of going East permanently. I went North first, and it was a disaster; it could have finished me. But South had the potential to be an even more serious problem. I cheated a bit, if you want to call it that: I consulted the internet and read up on this quest before going. Then I farmed my stats high enough that I could not lose the random number roll; I was guaranteed to survive. But! Even in a best-case scenario, the Southern challenge leaves you with one crewman and one point of ship's hull. You can't take so much as a love tap from a sea monster without plunging to a watery grave. And it's a long way to the nearest port where you can get repairs[1]. Retiring my captain in Fallen London actually became a little tempting – but I shrugged that temptation off, because how insufferably dull, and how insufferably cheap. I couldn't settle for it.

One evening I came home from work knowing this would be the night I put it all on the line. Sick feelings fluttered in my stomach as I prepared to dare the Red River. I made my journey into the heart of the Elder Continent, and I limped back to a place of safety … very carefully. I'd done it. Faced with a case when pursuing the unknown imposed actual risk on me, I chose to do it anyway, and I came through.

But that hazard of my real-life time investment wasn't even the end of it; the roleplaying part of the thing remains important. My captain character brutalized herself in order to pass the gate. She stood in the sun until it made her sick … over and over and over again. She pushed herself to the brink of insanity – twice – to reach the heart of Frostfound. She wrote her name on the wall. The game implies that this action is savagely painful. I didn't experience all these sufferings myself, of course, but I was subtly internalizing the choices.

To crown it all, the passage East demands that you sever ties with the mortal world, allow half your stats to be devoured[2], and sell off all your valuable, hard-to-find items. Hours of shuttling my ship around the trade routes went up in smoke. And I enjoyed it. That was the part that was so, so weird. Going East had become such a consummate triumph that it was a pleasure to demonstrate how I valued it by throwing lesser achievements to the winds. What did I grind up those stats for, after all, except to be strong enough to sail Eastward? Go on, burn it! We're going East! Burn everything if you want to!

I realized when I reached the end of it that I had just done a virtual reenactment of the Pearl of Great Price. I still don't know if I could explain to you why Reepicheep yearned to attain the East, but I know why, in the unspeakable knowledge of experience. The World is not enough … and when you come to understand that, only things from beyond the World will do. You have to go and pursue them with everything you've got. You have to Go East.

Months down the road, the lessons I pulled out of this game – the comprehension of joy taken in suffering for a higher goal – would pay off superbly in Real Life, on more than one occasion. If anybody tries to tell you that video games are empty recreation, or worse, a waste of time, either they're playing the wrong ones or they don't know how to use them.

[1] Belatedly, I read a guide and learned that you can bring Rattus Faber Assistants to get your hull out of the danger zone right away. I didn't think of this because I'd long ago sworn off using Rattus Faber Assistants. They're a consumable item, so I assumed that sending them to fix the ship was always a suicide mission, and my RPG characters don't go in for that sort of crud.

[2] I didn't get the alternate option of losing my sweetheart because I never chose to have one. 

Tuesday, August 15, 2023

Sunless Sea: A Matter of Danger

It's about time I finally talked about Sunless Sea, one of the video games that has had a deep impact on me as a person. It's been many years now since I finished playing it, yet the experience still reverberates. Its game design and its writing both had a lot to do with this.

Sunless Sea showed me something about faith, security, risk, and courage. It forced me to examine what I value, and what I'm willing to spend for it. It taught me some things are worth suffering for.

A screenshot from Sunless Sea, provided by Failbetter Games. Overhead view of a little steamship on dark blueish-green water, approaching the Avid Horizon, a gate out of the subterranean Neath. Faint glows illuminate statues of two gigantic figures looking down on the gate - they're very abstract, smooth, and rippled, possibly bat-winged figures wearing hoods. There are also boulders to either side of the gate, with glowing sigils on them.
Sunless Sea screenshot, from Failbetter Games. This is the "Avid Horizon," a gateway from the subterranean Neath directly into ... space?

And in a curious intersection of two very different works, it also helped me understand The Chronicles of Narnia better than I ever did before. They were accessible to my child self, of course. I read them then and got a notion of what they were trying to say. But some ideas don't develop fully until they are informed by experience, and it's experience that video games excel so thoroughly at providing. I'm going to relate the two as I try to explain what I got out of this game.

First comes the rather counter-intuitive thought that someone overwhelmingly righteous and positive is not necessarily safe, expressed in this little excerpt from The Lion, the Witch, and the Wardrobe:

“Is—is he a man?” asked Lucy.
“Aslan a man!” said Mr. Beaver sternly. “Certainly not. I tell you he is the King of the wood and the son of the great Emperor-Beyond-the-Sea. Don't you know who is the King of Beasts? Aslan is a lion—the Lion, the great Lion.”
“Ooh!” said Susan, “I'd thought he was a man. Is he—quite safe? I shall feel rather nervous about meeting a lion.”
“That you will, dearie, and no mistake,” said Mrs. Beaver, “if there's anyone who can appear before Aslan without their knees knocking, they're either braver than most or else just silly.”
“Then he isn't safe?” said Lucy.
“Safe?” said Mr. Beaver. “Don't you hear what Mrs. Beaver tells you? Who said anything about safe? 'Course he isn't safe. But he's good. He's the King, I tell you.”
“I'm longing to see him,” said Peter, “even if I do feel frightened when it comes to the point.”

To describe how Sunless Sea illuminates this, I'll have to start by giving some background.

Sunless Sea belongs to that subgenre known variously as “Eldritch,” “Cosmic Horror,” or “Lovecraftian Horror.” The basic hallmark of such a setting is that the universe is secretly chock full of Things Man Was Not Meant to Know, and dominated by otherworldly powers (alien or metaphysical – at some point the lines start to blur) whose modes of existence lie beyond our comprehension. Interacting with these powers, or digging beyond the comforting surface of the world to learn The Real Truth, is liable to ruin one's mind. Sunless Sea's flavor of this emphasizes the wonder that rides right alongside the terror, which is part of the reason why I like it so much. Nonetheless, the game is positively crawling with danger, and the worst of it goes beyond even main-character death. You can sell your soul – literally – or sell it figuratively in any number of different ways. You can get entangled with a couple of particularly nasty cults. You can acquire a semi-permanent craving for human flesh (ew). You can watch your crew start killing each other under the influence of insanity and privation. And here's the best part: the game will force you to make important decisions on the basis of very incomplete information. Navigating it was an exercise in trying to avoid things that smelled bad without crippling my ability to explore and learn. I was faced with a few choices in which either course of action might have devastating results. Sometimes I got it right; sometimes I didn't.

All the dangers I just listed apply to one's in-game character, but there's a sense in which Sunless Sea is dangerous for the player as well. This arises from two important features of its design. First, it's basically a narrative role-playing game, and like many RPGs it demands some “farming.” Preparing your character for the game's greatest challenges calls for many hours of hauling trade goods around the ocean to amass experience, wealth, and equipment. Second, if you play it as it's meant to be played, there is no possibility of restoring your game – mistakes cannot be undone. This includes mistakes that result in your character's permanent death. So whenever you make a chancy decision, you're rolling dice with a substantial time investment. Farming longer before attempting the more difficult parts improves your chances of success, but it also increases the amount of work you lose if you have to die and start all over. As the endgame approaches, daring leaps into the unknown become proportionally less attractive, and moral dilemmas grow teeth.

It was into such a world that I apprehensively launched my little virtual steamboat, eager for discovery, but also determined to guard my character against death if at all possible.[1] I was immediately faced with the problem of which of the underground ocean's Powers were safe to interact with. You can hobnob with everybody from the king of the aquatic zombies to an enormous sentient coral reef, but that doesn't mean you should. Prominent among these characters are the three known as the “sea gods,” who sometimes serve as patrons for travelers: Storm, Stone, and Salt. Storm is the dragon who lives in the roof; he is perpetually angry and likes blood sacrifices, sometimes of the human variety. I didn't care for him much. Stone is a living mountain whose presence acts like a fountain of youth for those lucky enough to dwell near her base. She helps sailors return home and stay alive. She's the closest thing the setting has to a benevolent entity who might watch over you. And Salt … it's somewhat unclear what Salt is. It[2] is the patron of horizons and farewells. It likes offerings of secrets. It's pleased when you take strangers on board your ship. It's enigmatic, challenging, and unpredictable. And I proceeded to surprise myself by liking it best out of all the three.

Stone would've been the natural choice for a nervous captain to kiss up to, but somehow I was drawn to Salt instead. It definitely wasn't safe, but over and over again I gambled on it anyway. And as things turned out, some of the most rewarding moments in the game happened when I went out on a limb to interact with Salt, and it ended up paying off.

Now when I say “paying off,” I don't mean that my character was rewarded with wealth, comfort, or power. What Salt generally provides is the chance to gain enlightenment and transcendence by doing painful, frightening, stupid things. Granted, Sunless Sea's explicit knowledge economy permits even enlightenment to be processed into currency and stat points, but there were plenty of other ways of getting those. It wasn't the numerical advancement that I really valued here.

The weirdest thing about all this is that I don't think my appreciation for Salt arises in spite of the fact that it is not comfortable. I like Salt precisely because Salt is not comfortable. If reaching out hadn't cost me a little apprehension – if I hadn't been compelled to take some things on trust – then it wouldn't have been worth as much. If the benefits Salt offers didn't entail some risk and strain, they wouldn't be worth nearly as much either. And if Salt weren't mysterious, unknowable, and even mind-shattering, half the wonder would be gone out of it too. This creature would be ordinary … on my level. Sublunar.

And zooming out from the context of the game, I realize that I don't want God in the real world to be entirely comfortable either. I don't need Him to be fully comprehensible by my human brain. I don't need Him to make all kinds of guarantees to me up front. I want Him to be enormous and awe-inspiring and holy. I don't want Him to make everything easy for me all the time. I might even want to do some of those painful, frightening, stupid things. I want to embrace what is arduous and costly in my spirituality. I want to be glorified, not safe.

And that is why I finally understand what C. S. Lewis was talking about in that kids' book on my shelf. Or what Charles Williams is talking about, with just a hint of disparagement, here:

"... they couldn't all want Archetypes coming down on them, not if they were like most of the religious people he had met. They also probably liked their religion taken mild -- a pious hope, a devout ejaculation, a general sympathetic sense of a kindly universe -- but nothing upsetting or bewildering, no agony, no darkness, no uncreated light."[3]

Church (the type that I attend, anyway) frequently emphasizes the warm, benevolent attributes of God – the ones that would be better represented by Stone. The God they talk about here is the God who saves people, loves people, forgives without limit, and cares about your problems. And these are all valid aspects of His character, I believe. But personally I wish we discussed the God of mystery and majesty a little more often.

More to come in Part II, in which I talk about Going East. And if you follow me on social media, I will be posting The Neathbow. What's that, you ask? Vivid colors from a place without a sun.

[1] Sunless Sea expects that you will die a lot and go through many characters. I died only twice - and one of those times, the primary cause was a bug!

[2] I prefer not to use "it" for any sort of sentient being or person, even an incomprehensible genderless alien. I'm following the game's writing here.

[3] Quote from The Place of the Lion.

Saturday, March 12, 2022

Primordia: Or, Why Do I Build Things?

I haven't written a video game blog in a while, and today, I finally want to talk about Primordia, by Wormwood Studios. It's an older game, but one that's very important to me, and I've held off writing about it for so long because ... I guess it seemed difficult. I really wanted to get the article right, and now it's time.

For those not familiar with my video game posts, they aren't really reviews. They're stories about the power of art: how something I played taught me a lesson or left a mark on my life. There will be spoilers, so if you want to try the game for yourself with a fresh mind, go play it before you read this.

Primordia is a “point-and-click adventure game” about robots living in a post-apocalyptic world - possibly future Earth, though this is not explicit. Biological life appears extinct; tiny machines even substitute for insects. But the robots recall their absent creators, the humans, though imperfect historical records have distorted their perspective. In most robots' minds, the whole race has contracted to a monolithic entity called "Man," imagined as "a big robot" or "the perfect machine." In the Primordium - the time of creation - Man built all other machines, then gave them the planet as an inheritance and departed. Some robots (including Horatio, the player character) religiously venerate Man. Others are more indifferent, or believe that Man never really existed (they even have a machine theory of evolution to back this up).

Horatio (left) and Crispin.

Life for these robots is a bleak struggle for survival. Like abandoned children, they linger among the wreckage of human civilization without fully understanding how to maintain it. They slowly drain the power sources their creators left behind, and repair themselves with parts from already-dead machines. Some have broken down, some have developed AI versions of psychosis, and some have started victimizing other robots.

Horatio lives in the dunes, an isolated scavenger. He's an android; but beyond imaging Man physically, he believes that Man, the builder, gave him the purpose of building. This belief animates everything he does. In addition to crafting what he needs to maintain his existence, Horatio creates spontaneously. He can't remember any particular reason why he should restore function to the crashed airship in the desert, but for him it's an act of reverence. He's even made himself a smaller companion named Crispin, who follows him everywhere and calls him “boss.” Events force Horatio to leave his home in the desert and enter one of the ancient cities, where he must match wits with Metromind, the powerful mainframe AI who rules the place.

Crispin tells Horatio, "You know, boss, I spend hours looking through junk. Maybe you can spend a little more time in the junkpile yourself?"
As a person who went on a walk this very day and came home with somebody's discarded rice cooker ... I love these characters

The plot is solid no matter who you are ... but here's how this game got me. I am an (admittedly not professional) roboticist. Whenever the robots in Primordia said anything about "Man," I thought, "Oh, they're totally talking about me." And I started internalizing it. I accepted Horatio's loyalty. I laughed at Crispin's agnosticism. I pondered Metromind's disdain for me. Ahahahaha woops!

At some point after I effectively became a character in the game, I realized I'd been cast as the absent creator. At one point, Crispin asks Horatio why their world is so messed up, and Horatio comes back with the sort of answer I'd expect from a pastor: he argues that Man built the world well, but then the robots began to violate their intended functions, imbalancing and damaging everything. He is both right and wrong: the humans in this setting also share some blame. The inhabitants of the rival city-states were more interested in killing each other than in caring for what they'd built.

Horatio cannot pray; everything Man gave him, he already has, and now he must face his troubles alone. By the time Primordia's story begins, he has already re-versioned himself and wiped his episodic memory four times ... one of the game endings suggests that he did this to seal away past trauma. And he's probably got one of the strongest senses of agency in the game. The other robots are largely helpless, trapped in decaying systems that they hope a dominant AI like Metromind will fix.

Primordia game screenshot: a page from Horatio's "gospel." It reads "In the beginning, all was still and silent. Then, Man the All-Builder spoke the Word, and the Word begat the Code, and so the world began to spin. Thus dawned the Primordium, the first age, the age of building."
One page from the "scripture" Horatio carries.

And the first weird thing that happened to me, the player, was that this huuuurrrt. It hurt to a bizarre degree. My inability to apologize to Horatio on behalf of humanity, or make anything up to him at all, left me rather miserable ... even after I wound up the game with one of the better endings. Yeah, he managed to come through okay, but some miserable roboticist I am. Why wasn't I there?

Speaking of endings, the game has a lot of branching options. I re-ran the final scenes a bunch of times to explore them. And for whatever reason ... perhaps to ease my angst ... I started daydreaming and inserting myself. If I confronted Metromind, what would I do? She has a memory that goes back to the time of real humans, and as it turns out, she murdered all the humans in her city. She's one of the few characters with a classic "rebellious AI" mindset: she decided that those who made her were weak and inferior, and she could run the city better. (And then, having been designed only to run the subway system, she found herself in way over her ... monitor?) Metromind also has a primary henchman called Scraper. If you're smart about how you play, you can have Horatio opt to either kill Scraper or not.

When I imagine myself there at the climax, my emotional response to Metromind is ... strangely calm. She killed many of my species and would probably like to kill me, but I almost don't mind; I am above minding. We made her, after all. She can sneer at me or hate me if she wants; I'm far too important to be bothered.

Scraper plots nefarious deeds

At first I think my canon ending is going to include Horatio killing Scraper. It seems a bigger victory and all that, one less baddie to trouble the world. But then I imagine myself walking into the room and sizing Scraper up. I view him with the same bland magnanimity I gave to Metromind. I poke my fingers into the blast holes on his chassis. "Looks like you've been through a lot," I mutter. And suddenly I don't want Scraper dead anymore.

The only thing that draws anger out of me is the ending in which Horatio gets killed in a fight over the power core. It's not even the fact that they kill him; it's what Metromind says afterward. She directs Scraper to "Take him out to the dunes ... with the rest of the scrap." This makes me want to flip my computer table over and roar, "HORATIO. IS NOT. SCRAP!" Being devalued myself is tolerable. Seeing Horatio devalued is, somehow, not.

I don't like the ending in which he mind-merges with Metromind to help her run the city, either. It could be viewed as positive, in some ways. But watching Horatio's individual personality get subsumed into this union is unexpectedly horrifying. Again, I feel curiously insulted. "Horatio! Somebody gave you that individuality! Don't dissolve it, you haven't any right!"

I wasn't observing myself too well; it took me a while to become aware of the pattern. And when I woke up and realized how I was behaving, I was startled. I was roleplaying some kind of benevolent creator goddess. And the revelatory thing about this was that it came so naturally, I didn't even notice. Some of my responses were a mite counter-intuitive, yet there was no effort involved. It was as if I had latent instincts that had been waiting for this exact scenario, and they quietly switched on. I was left looking at myself like an unfamiliar person and asking “How did I do that?”

What I took away is that I seem to have my own bit of innate core code for relating to artificial life. Which if you think about it is ... weird. Nothing like the robots in Primordia exists yet. How long have we had anything that even vaguely resembles them? For what fraction of human history has interaction with robots been an issue? Perhaps one could claim that I was working from a misplaced parental instinct, but it feels more particular than that. So where did I get it? Why would I react this way to the things I build? Why, indeed, do I build things?

I'm leaving that one as an exercise for the reader! Not to be purposely mysterious, but I think the answer will land better if you can see it for yourself. The bottom line is that I know things about my work, and about me, that I did not know before I took my tour through Primordia.

If you play it, what might you learn?