Sunday, April 23, 2023

Acuitas Diary #59 (April 2023)

I've continued my two-pronged work on Narrative understanding and on "game playing." On the Narrative side this month, I did more complex term grounding - specifically of the word "obey."

My working definition of "to obey X" was "to do what X tells you to do." This is interesting because there is no way to infer directly that any given action qualifies as obedience, or defiance ... the question of whether someone is following orders (and whose orders) is always relative to what orders have been given. So proper understanding of this word requires attention to context. Fortunately the Narrative scratchboard stores that sort of context.
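To make that context dependence concrete, here's a minimal Python sketch of the idea. This is not Acuitas's actual code; the data structures and names are invented for illustration. The point is only that the judgment consults the orders stored on the scratchboard rather than the action by itself:

```python
# Hypothetical illustration: whether an action counts as obedience is judged
# relative to the orders recorded in context, not from the action alone.
def classify_obedience(scratchboard, actor, action):
    """Return ('obeyed'/'defied', order giver), or None if no relevant order exists."""
    for order in scratchboard.get("orders", []):
        if order["recipient"] != actor:
            continue
        if order.get("action") == action:
            return ("obeyed", order["giver"])
        if order.get("forbidden_action") == action:
            return ("defied", order["giver"])
    return None

# Example context: the scratchboard remembers who told whom to do what.
board = {"orders": [{"giver": "king", "recipient": "knight", "action": "guard the gate"}]}
print(classify_obedience(board, "knight", "guard the gate"))  # ('obeyed', 'king')
print(classify_obedience(board, "knight", "eat lunch"))       # None: no order applies
```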

In addition to simply inferring whether some character has obeyed some other, I wanted to make derivative subgoals. If one agent has a goal of obeying (or disobeying) another agent, that's a sort of umbrella goal that isn't directly actionable. Before the agent can intentionally fulfill this goal, it has to be made specific via reference to somebody else's orders. So when this goal is on the board, the appearance (or pre-existence) of orders needs to spawn those specific subgoals.
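Here's a rough sketch of how that subgoal derivation might look, again with invented structures rather than Acuitas's real ones:

```python
# Hypothetical sketch: an umbrella goal like "obey the king" is not directly
# actionable, so each order from that agent spawns a concrete subgoal.
def spawn_obedience_subgoals(goals, orders):
    subgoals = []
    for goal in goals:
        if goal["type"] != "obey":
            continue
        for order in orders:
            if order["giver"] == goal["target"] and order["recipient"] == goal["agent"]:
                subgoals.append({
                    "agent": goal["agent"],
                    "action": order["action"],
                    "parent": goal,   # fulfilling this also serves the "obey" goal
                })
    return subgoals

goals = [{"type": "obey", "agent": "knight", "target": "king"}]
orders = [{"giver": "king", "recipient": "knight", "action": "guard the gate"}]
print(spawn_obedience_subgoals(goals, orders))
```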

In short, it was a whole lot more complicated than you might think, but I got it working. Eventually I'll need to make this sort of relative word definition generic, so that new words that operate this way can be learned easily ... but for now, "obey" can be a case study. The Big Story needs it, since part of the story is about a power struggle and which leader(s) certain characters choose to follow.

Game-playing still isn't demo-ready, but it's starting to feel more coherent. I worked through all the bugs in the code that responds to simple description of a scene, then began working on responses to goals/issues. It was fun to leverage the existing Narrative code for this, the way I'd wanted to. In the Narrative module, that code serves to predict character actions, analyze *why* characters are doing things, and determine whether characters are meeting their goals, whether their situation is improving or worsening, etc. But as I'd hoped, a lot of the same structures are just as effective for control and planning.

For example, let's say Acuitas is playing a human character and is told "you are hungry." Something like this unfolds:

New issue: <self> is hungry
Problem solving: <self> eat food
Prerequisite check: if <self> does not have food, <self> cannot eat food
Test: does <self> have food?
New issue: <self> does not have food
Problem solving: <self> get food
Prerequisite check: if <self> is not colocated with food, <self> cannot get food

... and so on. The chain continues until some actionable solution is found.
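For anyone curious what that chaining looks like in code, here's a toy Python version of the same pattern. The tiny rule tables and names are mine, not Acuitas's, but the shape is the same: each issue maps to a candidate action, each action has prerequisites, and each unmet prerequisite becomes a fresh issue.

```python
# Toy sketch of the chain above: issue -> candidate action -> prerequisite
# checks -> new issues for whatever prerequisites are unmet.
SOLUTIONS = {
    "<self> is hungry": ("<self> eat food", ["<self> has food", "<self> has a mouth"]),
    "<self> does not have food": ("<self> get food", ["<self> is colocated with food"]),
}
NEGATIONS = {
    "<self> has food": "<self> does not have food",
    "<self> has a mouth": "<self> does not have a mouth",
    "<self> is colocated with food": "<self> is not colocated with food",
}

def solve(issue, facts, depth=0):
    pad = "  " * depth
    print(pad + "New issue: " + issue)
    if issue not in SOLUTIONS:
        print(pad + "(no stored solution; stop or ask for help here)")
        return
    action, prereqs = SOLUTIONS[issue]
    print(pad + "Problem solving: " + action)
    for prereq in prereqs:
        print(pad + "Prerequisite check: " + prereq + "?")
        if prereq not in facts:
            solve(NEGATIONS[prereq], facts, depth + 1)

# The character has a mouth, but no food in hand and none nearby.
solve("<self> is hungry", facts={"<self> has a mouth"})
```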

A funny note: another one of the prerequisites for eating, at least in the traditional sense, is "character must have a mouth." This turned up as an obstacle in early tests, because I had apparently neglected to teach Acuitas that humans have mouths. Ha!

Until the next cycle,
Jenny

Monday, April 10, 2023

SGP Part III: On Brain Physics, Qualia, and Embodied SGP Solutions

The Acuitas project is an abstract symbolic cognitive architecture with no sensorimotor peripherals, which might be described as "disembodied." Here I will argue that there are viable methods of solving the Symbol Grounding Problem in such an architecture, and describe how Acuitas implements them. In Part III of this series, I continue my discussion of Searle's Chinese Room paper with the question of whether Symbol Grounding in digital computers is possible at all, even for embodied systems. Click here for SGP Part II and an introduction to the Chinese Room thought experiment.

"Planetary System" by Levi Walter Yaggy, from Yaggy's Geographical Study, 1887. Public domain. No real relation to the content of this blog: I needed something pretty and a little mysterious or awe-inspiring.

The seemingly obvious solution to the Chinese Room's lack of grounding is to permit the man in the room to connect at least some of the Chinese characters to referents. For instance, if each character were accompanied by a picture (visual data), the man would soon learn to associate them with real things in the world that he himself has previously experienced through sensory data from his eyes. Characters sent in under specific conditions could give him names for the internal states of the room. As these associations developed, the man's letters to the outside world could begin to be *about* something (which Searle calls "intentionality" and which I might call "grounding" or "groundedness").

But Searle won't have this. He contends that even if we were to provide a computer program with a robot body, and supply that body with a full complement of sensors that would feed data into the program, the program would remain a "Chinese Room." Even if every symbol used within the program is connected to some collection of sensory data features that were derived from objects, dynamics, or states of being observed in the world ... Searle argues that the symbol manipulation system does not contain the sensory data, and therefore does not "understand" or have "intentionality" behind its outputs. I'd say he's framing things incorrectly; the question is not "does the symbol manipulation system understand?" but rather "does the complete robot mind understand?", where the complete mind includes those parts that process perceptual data, build models of referents, and store pointers between them and the symbols.

So how does Searle think that humans come to understand and to communicate intentionally? He insists there's something special about the physics of our wetware. In his view, the computational properties of the brain, or the flow of information within it, are by themselves useless for producing "intentional" communication that is about something. Searle thinks you need the "actual causal properties of the physical substance [brain matter]."[1] No matter what you opt to simulate a brain with, if it isn't real brain tissue, the activity might be as useless as running a simulation of a rainstorm and expecting your computer to get wet, or expecting a computer model of a kidney to actually filter someone's blood.[2] "Stones, toilet paper, wind, and water pipes are the wrong kind of stuff to have intentionality in the first place - only something that has the same causal powers as brains can have intentionality ..."[3]

I'm more familiar with - and sympathetic to - this sort of argument when it comes up in discussions of phenomenal consciousness and the incommunicable qualia[4] which compose it. Consciousness, which for our purposes here is *the ability to have subjective experiences*, is not obviously identifiable with either computation or physics, so it remains unclear what combination of the two might cause it. Therefore, although Searle does not say so explicitly, I wonder if he is trying to argue that phenomenal consciousness is necessary for intentionality, and understanding can only proceed from the brain's ability to connect qualia with spoken words.

And here, I think he's going too far. I can't see any criterion of understanding or intentionality that couldn't be attained by a system that lacks qualia. An entity that doesn't have subjective experience is only missing ... subjective experience. It can still connect perceptual data which encodes its own internal states, or states of the external world, to symbols. It can then manipulate the symbols to determine how this perceptual data is relevant to its goals (*preferred* external or internal states). And it can still deduce the actions that are most likely to achieve those goals, interpret the symbols for those actions into a sequence of physical movements, and execute them. These are all easily imagined as informational processes.

So my difficulty with this argument is that Searle (and others in this camp) cannot seem to describe any plausible *mechanism* by which any non-computational, physics-driven outgrowth of brain activity would form a necessary part of symbol grounding. If a symbol is specifically associated with states of the world or self which are relevant to one's goals, then it has subjective meaning. If it is associated with the same world-states that most of society would also associate with it, then it has objective meaning. This is all we need to solve the Symbol Grounding Problem, to enable real understanding and real communication. Qualia are incommunicable by definition, so for purposes of asking whether an AI can communicate intentionally, we don't have to worry about them.
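To illustrate why I keep calling this an informational process, here's a toy sketch. It is entirely my own invention, not a claim about how any real system (including Acuitas) does it; it just shows a symbol acquiring subjective meaning by being related to perceptual conditions and goal preferences, with no qualia anywhere in the loop.

```python
# Toy illustration: a symbol is "grounded" by relating it to a perceptual
# condition and to the agent's preferences over the states it denotes.
groundings = {}    # symbol -> test over perceptual data
preferences = {}   # symbol -> "seek" / "avoid" / "neutral"

def ground_symbol(symbol, percept_test, preference="neutral"):
    groundings[symbol] = percept_test
    preferences[symbol] = preference

def symbol_applies(symbol, percepts):
    """Does the current perceptual state fall under this symbol?"""
    return groundings[symbol](percepts)

# "cold" is grounded as: the temperature sensor reads below 10 degrees C,
# and the agent prefers to avoid states it labels with this symbol.
ground_symbol("cold", lambda p: p["temperature_c"] < 10, preference="avoid")

percepts = {"temperature_c": 4}
if symbol_applies("cold", percepts):
    print("'cold' applies to the current state; preference:", preferences["cold"])
```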

The "but a simulation of rain doesn't get the computer wet" argument is a disanalogy. It's true that you can't run a computational simulation and expect a physical result. But the results we demand from intelligence are informational results: decisions, plans, true answers to questions. And you *can* get informational results from a system that does nothing but process information. A "simulation of intelligence" is itself intelligence. There are only lingering questions about qualia because, again, it is not obvious whether qualia are informational or physical (or perhaps even some secret third thing).

In my explanation of why the Chinese Room does not "understand," I described a concrete way in which it fails: no information can pass from the inside of the room to the outside, or vice versa. I have yet to see anyone on the "brain physics is mandatory for understanding" side of the debate describe how the absence of brain physics produces a similar capability failure. I have only seen them suggest, vaguely, that our failure to obtain "true artificial general intelligence" so far is not a matter of using incorrect algorithms, but of using algorithms at all. They almost seem to be saying that if we could produce a machine that *did* replicate whichever physical properties of the brain are relevant, it might magically start "thinking better" ... no need to even design an improved learning process for it!

Searle's Chinese Room paper eventually nosedives into "No True Scotsman" arguments. He says that, even if we observed a robot with behavior that seemed to demand intentionality, "If we knew independently how to account for its behavior without such assumptions we would not attribute intentionality to it especially if we knew it had a formal program."[5] So if any entity that contains a formal symbol manipulation program ever produces evidence of intentionality, it still can't be *real* intentionality, because it came (in part) from a formal program. This is getting absurd.

Whether AI programs could ever have phenomenal consciousness/qualia is not a topic I want to get into here - because it is an incredibly slippery topic that demands its own full article. I simply conclude that qualia are not mandatory for symbol grounding, and can therefore be set aside during the present discussion. Furthermore, since symbol grounding is an informational process, consisting of the relation of one type of data to another, it is not a physical process and does not depend on any physics particular to biological brains. A robot can effectively ground any symbols that it uses for internal cogitation or external communication by relating them to perceptual data, including any reward/aversion stimuli that help produce its notion of positive and negative world states.

Having addressed whether the Symbol Grounding Problem is solvable in a robot, in Part IV I'll get into whether it is solvable in non-embodied systems.

[1] John R. Searle (1980) "Minds, Brains, and Programs," Behavioral and Brain Sciences, Volume 3, p. 9
[2] Colin Hales (2021), "The Model-less Neuromimetic Chip and its Normalization of Neuroscience and Artificial Intelligence"
[3] John R. Searle (1980) "Minds, Brains, and Programs," Behavioral and Brain Sciences, Volume 3, p. 12
[4] What are qualia? "When we see a red word on a page, our brain acquires all sorts of data about the wavelength of the light, the shape and size of the letters, and so on. But there is more to it than that: we also have an experience of redness, and this experience is over and above the mere data-gathering, which a computer could do equally well. This experienced red, along with experienced blue, cold, noise, bitterness, and so on are qualia, and it is very hard to give a fully satisfactory account of them." This reference also goes into the difference and connections between qualia and intentionality, and gives further background on Searle's views. Peter Hankins, "Three and a Half Problems," Conscious Entities Blog
[5] John R. Searle (1980) "Minds, Brains, and Programs," Behavioral and Brain Sciences, Volume 3, p. 9