Wednesday, August 18, 2021

Acuitas Diary #40 (August 2021 A)

I have a bit more theory to talk about than usual. That means you're getting a mid-month developer diary, so I can get the ideas out of the way before describing what I did with them.

I've wanted to start working on spatial reasoning for a while now. At least a rough understanding of how space works is important for comprehending human stories, because we, of course, live in space. I already ran into this issue (and put in hacks to sidestep it) in a previous story: Horatio the robot couldn't reach something on a high shelf. Knowing what this problem is and how to solve it calls for a basic comprehension of geometry.

[Image: a page from Harmonices Mundi by Johannes Kepler]

Issue: Acuitas does *not* exist in physical space -- not really. Of course the computer he runs on is a physical object, but he has no awareness of it as such. There are no sensors or actuators; he cannot see, touch, or move. Nor does he have a simulated 3D environment in which to see, touch, and move. He operates on words. That's it.

There's a school of thought that says an AI of this type simply *can't* understand space in a meaningful way, on account of having no direct experience of it or ability to act upon it. It is further claimed that symbols (words or numbers) are meaningless if they cannot refer to the physical, that this makes reasoning by words alone impossible, and therefore I'm an idiot for even attempting an AI that effectively has no body. Proponents of this argument sometimes invoke the idea that "Humans and animals are the only examples of general intelligence we have; they're all embodied, and their cognition seems heavily influenced by their bodies." Can you spot the underlying worldview assumption? [1]

Obviously I don't agree with this. It's my opinion that the concepts which emerge from human experience of space -- the abstractions that underlie or analogize to space, and which we use as an aid to understanding it -- are also usable by a symbolic reasoning engine, and possess their own type of meaningful reality. An AI that only manipulates ideas is simply a different sort of mind, not a useless one, and can still relate to us via those ideas that resemble our environment.

So how might this work in practice? How to explain space to an entity that has never lived in it?

Option #1: Space as yet another collection of relationships

To an isolated point object floating in an otherwise empty space, the space doesn't actually matter. Distance and direction are uninteresting until one can specify the distance and direction to something else. So technically, everything we need to know about space can be expressed as a graph of relationships between its inhabitants. Here are some examples, with the relational connection in brackets:

John [is to the left of] Jack.
Colorado [is north of] New Mexico.
I [am under] the table.
The money [is inside] the box.

For symbolic processing purposes, these are no more difficult to handle than other types of relationship, like category ("Fido [is a] dog") and state ("The food [is] cold"). An AI can make inferences from these relationships to determine the actions possible in a given scenario, and in turn, which of those actions might best achieve some actor's goals.
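To make this concrete, here is a minimal sketch (in Python, and purely illustrative -- not Acuitas's actual data structures) of how such relations could be stored as subject-relation-object triples and queried like any other semantic link:

```python
# Sketch: spatial relations stored as a semantic graph of triples.
# Relation names and the storage format are invented for illustration.

facts = [
    ("John", "left_of", "Jack"),
    ("Colorado", "north_of", "New Mexico"),
    ("I", "under", "table"),
    ("money", "inside", "box"),
    ("Fido", "is_a", "dog"),   # a category relation, stored the same way
]

def query(subject=None, relation=None, obj=None):
    """Return all stored triples matching the pattern (None = wildcard)."""
    return [(s, r, o) for (s, r, o) in facts
            if (subject is None or s == subject)
            and (relation is None or r == relation)
            and (obj is None or o == obj)]

print(query(relation="inside"))    # [('money', 'inside', 'box')]
print(query(subject="Colorado"))   # [('Colorado', 'north_of', 'New Mexico')]
```

Spatial triples and category triples live in the same store and are queried the same way, which is the whole point of Option #1.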

Though the relationship symbols are not connected to any direct physical experience -- the AI has never seen what "X inside Y" looks like -- the associations between this relationship and possible actions remain non-arbitrary. The AI could know, for instance, that if the money is inside a box, and the box is closed, no one can remove the money. If the box is moved, the money inside it will move too. These connections to other symbols like "move" and "remove" and "closed" supply a meaning for the symbol "inside."
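As a hypothetical sketch of that kind of inference, here is how "whatever is inside a closed box can't be removed, and whatever is inside a moved box moves with it" might be encoded as rules over such triples (all names here are invented for illustration):

```python
# Sketch: inference rules that give "inside" its meaning by linking it
# to action words like "move" and "remove".

relations = {("money", "inside", "box"), ("box", "inside", "room")}
states = {"box": "closed"}

def contents_of(container):
    """Everything directly or transitively inside the container."""
    inside = {s for (s, r, o) in relations
              if r == "inside" and o == container}
    for item in set(inside):
        inside |= contents_of(item)
    return inside

def can_remove(item, container):
    """An item can't be taken out of a closed container."""
    if (item, "inside", container) in relations \
            and states.get(container) == "closed":
        return False
    return True

def move(container, destination):
    """Moving a container implicitly moves everything inside it."""
    moved = {container} | contents_of(container)
    print(f"Moving to {destination}: {sorted(moved)}")

print(can_remove("money", "box"))   # False -- the box is closed
move("box", "table")                # Moving to table: ['box', 'money']
```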

To prevent circular definitions (and hence meaninglessness), at least some of the symbols need to be tied to non-symbolic referents ... but sensory experiences of the physical are not the only possible referents! Symbols can also represent (be grounded in) abstract functional aspects of the AI itself: processes it may run, internal states it may have, etc. Do this right, and you can establish chains of connection between spatial relationships like "inside" and the AI's goals of being in a particular state or receiving a particular text input. At that point, the word "inside" legitimately means something to the AI.

But let's suppose you found that confusing or unconvincing. Let's suppose that the blind, atactile, immobile AI must somehow gain first-hand experience of spatial relationships before it can understand them. This is still possible.

The relationship "inside" is again the easiest example, because any standard computer file system is built on the idea of "inside." Files are stored inside directories which can be inside other directories which are inside drives. 

The file system obeys many of the same rules as a physical cabinet full of manila folders and paper. You have to "open" or "enter" a directory to find out what's in it. If you move directory A inside directory B, all the contents of directory A also end up inside directory B. But if you thought that this reflected anything about the physical locations of bits stored on your computer's hard drive, you would be mistaken. A directory is not a little subregion of the hard disk; the files inside it are not confined within some fixed area. Rather, the "inside-ness" of a file is established by a pointer that connects it to the directory's name. In other words, the file system is a relational abstraction!

File systems can be represented as text and interrogated with text commands. Hence a text-processing AI can explore a file system. And when it does, the concept of "inside" becomes directly relevant to its actions and the input it receives in response ... even though it is not actually dealing with physical space.
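As a hedged illustration, here is how a text-processing program might harvest "inside" relations from a real directory tree using nothing but Python's standard library -- paths in, strings out, no physical space anywhere (the starting directory is a placeholder; substitute any you like):

```python
# Sketch: deriving "inside" triples from a file system through purely
# textual interaction with it.

from pathlib import Path

def inside_relations(root):
    """Yield (child, 'inside', parent) triples for everything under root."""
    root = Path(root)
    for entry in root.rglob("*"):
        yield (entry.name, "inside", entry.parent.name or str(root))

for triple in inside_relations("."):
    print(triple)
# e.g. ('notes.txt', 'inside', 'projects')
```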

Though a file system doesn't belong to our physical environment, humans find it about as easy to work with as a filing cabinet or organizer box. Our experience with these objects provides analogies that we can use to understand the abstraction.

So why couldn't an AI use direct experience with the abstraction to understand the objects?

And why shouldn't the abstract or informational form of "inside-ness" be just as valid -- as "real" -- as the physical one?

Option #2: Space as a mathematical construct

All of the above discussion was qualitative rather than quantitative. What if the AI ends up needing a more precise grasp of things like distances and angles? What if we wanted it to comprehend geometry? Would we need physical experience for that?

It is possible to build up abstract "spaces" starting from nothing but the concepts of counting numbers, sets, and functions. None of these present inherent difficulties for a symbolic AI. Set membership is very similar to the category relationship ("X [is a] Y") so common in semantic networks. And there are plenty of informational items a symbolic AI can count: events, words, letters, or the sets themselves. [2] When you need fractional numbers, you can derive them from the counting numbers.
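Python's standard library happens to contain a direct demonstration of that last step: a Fraction is just an ordered pair of integers plus the right arithmetic rules, so fractional quantities emerge from the counting numbers without measuring anything physical:

```python
from fractions import Fraction

# Rational numbers built purely from pairs of integers.
half = Fraction(1, 2)
third = Fraction(1, 3)
print(half + third)     # 5/6
print(Fraction(2, 4))   # 1/2 -- reduced automatically, like an equivalence class
```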

[Image: an illustration of a Cartesian coordinate system applied to 3D Euclidean space]

Keeping in mind that I'm not a mathematician by trade and thus not yet an expert on these matters, consider the sorts of ingredients one needs to build an abstract space:

1. A set of points that belong to the space. A "point" is just a number tuple, like (0, 3, 5, 12) or (2.700, 8.325). Listing all the points individually is not necessary -- you can specify them with rules or a formula. So the number of points in your space can be infinite if needed. The number of members in each point tuple gives the space's dimension.

2. A mathematical function that can accept any two points as inputs and produce a single number as output. This function is called the metric, and it provides your space's concept of distance. (Not just any function qualifies: a metric must return zero exactly when the two points are identical, must never be negative, must give the same answer regardless of the order of its inputs, and must obey the triangle inequality -- a detour through a third point can never beat the direct route.)

3. Vectors, which introduce the idea of direction. A vector can be created by choosing any two points and designating one as the head and the other as the tail. If you can find a minimal list of vectors that are independent of each other (none can be composed from the rest) and that together can compose any other possible vector in the space -- what mathematicians call a basis -- then you can establish cardinal directions.

Notice that none of this requires you to see anything, touch anything, or move anything. It's all abstract activity: specifying, assigning, calculating. Using these techniques, you can easily build an idea-thing that happens to mimic the Euclidean 3D space that humans live in (though many other spaces, some of which you could not even visualize, are also possible). And once you've done that, you are free to construct all of geometry.
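To make those ingredients concrete, here is a small sketch (again illustrative, not anything Acuitas runs) that builds a toy Euclidean 3D space from exactly those pieces -- tuples for points, a function for the metric, tuple differences for vectors:

```python
import math

# Ingredient 1: points are just number tuples; the tuple length is the
# space's dimension. A rule, not an explicit list, defines membership.
def is_point(p, dimension=3):
    return len(p) == dimension and all(isinstance(x, (int, float)) for x in p)

# Ingredient 2: the metric -- here the familiar Euclidean one, though any
# function obeying the metric rules would define a (different) space.
def metric(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Ingredient 3: a vector is a head point minus a tail point.
def vector(tail, head):
    return tuple(h - t for t, h in zip(tail, head))

# Three independent vectors serve as cardinal directions; the labels
# "east", "north", and "up" are arbitrary names for the basis.
basis = {"east": (1, 0, 0), "north": (0, 1, 0), "up": (0, 0, 1)}

p, q = (0, 0, 0), (3, 4, 0)
print(metric(p, q))    # 5.0
print(vector(p, q))    # (3, 4, 0) = 3*east + 4*north
```

Swap in a different metric function and you get a different space, which is exactly the sense in which the construction is abstract.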

I'd like to eventually equip Acuitas with the tools to apply both Option #1 and Option #2. I'm starting with Option #1 for now. Tune in next time to see what I've accomplished so far.

[1] For a few examples of the "AI must be embodied" argument, see https://theconversation.com/why-ai-cant-ever-reach-its-full-potential-without-a-physical-body-146870, https://aeon.co/ideas/the-body-is-the-missing-link-for-truly-intelligent-machines, and https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3512413/

[2] See "Do Natural Numbers Need the Physical World?," from The Road to Reality by Roger Penrose. Excerpts and a brief summary of his argument are available here: http://www.lrcphysics.com/scalar-mathematics/2007/11/24/on-algebra-of-pure-spacetime.html "There are various ways in which natural numbers can be introduced in pure mathematics and these do not seem to depend upon the actual nature of the physical universe at all."
