Friday, March 31, 2023

Acuitas Diary #58 (March 2023)

This is going to be one of those months when my diary is short and boring, because I've been busy laying foundations and have little that is finished to talk about or demonstrate. I worked on two things - continued improvements to the Narrative module, and the beginnings of Narrative game-playing.

A set of blue polyhedral dice lying atop a printed rule sheet for a tabletop roleplaying game
Photo by Spc. Anthony Zane, public domain

The Narrative improvements cover continued work on character motivation, as well as some bug fixes. I added some capacity to interpret "because" statements as commentary about goals. E.g. a statement like "John decided to flee the battle because he wanted to live" supplies a motive for the instrumental goal of fleeing the battle, which (to Acuitas at least) might not be immediately obvious otherwise. 
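
To make that concrete, here is a rough Python sketch of how a "because" clause might attach a motive to an instrumental goal. This is not Acuitas's actual code - every function and field name below is invented purely for illustration:

```python
# Illustrative sketch only: a "because" statement links an action (instrumental
# goal) to the character goal that motivates it. All names are hypothetical.

def interpret_because(main_clause, because_clause, character_model):
    """Record that the action in main_clause serves the goal in because_clause."""
    action = main_clause["action"]      # e.g. "flee the battle"
    motive = because_clause["goal"]     # e.g. "live"
    # File the action as an instrumental goal in service of the motive.
    character_model.setdefault("goals", {}).setdefault(motive, []).append(action)
    return character_model

# "John decided to flee the battle because he wanted to live."
john = interpret_because(
    {"agent": "John", "action": "flee the battle"},
    {"agent": "John", "goal": "live"},
    {},
)
print(john)  # {'goals': {'live': ['flee the battle']}}
```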

I also tweaked Narrative's built-in tiny ontology a bit. Distinctions are made, for modeling and processing purposes, between "agents" (individuals that can reasonably act as characters in a story), "objects," "locations," and "organizations." I realized I needed to add another category, for abstract nouns or "concepts," when I noticed that a "purpose" was being treated as a physical object. Whoops! I also decided to add a "system" category, to cover such entities as computer networks. At first I was thinking of a network as a "location," but I realized that it's more than that.
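
For readers who like code, here is one hypothetical way that category list could be rendered. The category names are the ones described above; the enum and the lookup table are just an illustrative sketch, not Acuitas's internal representation:

```python
# Hypothetical rendering of the Narrative module's tiny ontology categories.
from enum import Enum, auto

class EntityCategory(Enum):
    AGENT = auto()         # individuals that can act as story characters
    OBJECT = auto()        # physical things that can be held, moved, used
    LOCATION = auto()      # places where agents and objects can be
    ORGANIZATION = auto()  # groups such as companies or armies
    CONCEPT = auto()       # abstract nouns, e.g. "purpose" (newly added)
    SYSTEM = auto()        # entities like computer networks (newly added)

# Toy classification table; a real system would consult semantic memory
# instead of a hard-coded dictionary.
CATEGORY_OF = {
    "john": EntityCategory.AGENT,
    "sword": EntityCategory.OBJECT,
    "castle": EntityCategory.LOCATION,
    "purpose": EntityCategory.CONCEPT,
    "network": EntityCategory.SYSTEM,
}
```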

Now, for game-playing! I am very excited about this. The goal is to make Acuitas capable of navigating "text adventure" style games by leveraging a lot of the existing narrative and reasoning capabilities. Given the machinery for modeling characters in stories, and predicting what they may do, it is not such a difficult step to imagine *oneself* as one of those characters, and then decide how to interact with the environment ... I'm also hoping that experimenting in these game scenarios will give me ideas for how to improve the main Executive module.

I actually sketched out the code to support roleplaying earlier this year, and just began integrating it this month. I was hoping to have something demo-worthy, but the integration took longer than expected and is still in progress. So far I've gotten Acuitas to register a character as "his" and to log issues for that character - but I haven't wrung out enough bugs to make it through a full input-output loop yet. Hopefully I'll have more interesting and concrete details to share within the next month or two. More info coming later!
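
In the meantime, for the curious, the loop I'm aiming for looks roughly like the sketch below. It's purely illustrative - the class and method names are invented for this post, not lifted from Acuitas's code:

```python
# Speculative sketch of a text-adventure roleplay loop: register a character,
# log issues noticed in the game text, and pick an action. Toy logic only.

class RoleplaySession:
    def __init__(self, character_name):
        self.character = character_name   # the character registered as "mine"
        self.issues = []                  # problems logged for that character

    def log_issue(self, description):
        self.issues.append(description)

    def choose_action(self):
        # Placeholder policy: work on the oldest unresolved issue.
        return f"address: {self.issues[0]}" if self.issues else "look around"

def one_turn(session, read_game_text, send_command):
    """One pass of the input-output loop: read game text, notice issues, act."""
    text = read_game_text()              # e.g. "You are in a dark cave."
    if "dark" in text.lower():           # toy stand-in for real issue detection
        session.log_issue("it is dark here")
    send_command(session.choose_action())

session = RoleplaySession("the adventurer")
one_turn(session, lambda: "You are in a dark cave.", print)  # prints: address: it is dark here
```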

Until the next cycle,
Jenny

Sunday, March 12, 2023

SGP Part II: The Need for Grounding

The Acuitas project is an abstract symbolic cognitive architecture with no sensorimotor peripherals, which might be described as "disembodied." Here I will argue that there are viable methods of solving the Symbol Grounding Problem in such an architecture, and describe how Acuitas implements them. In this installment, Part II of the series, I address the question of whether Symbol Grounding is truly important. Click here for SGP Part I.

Some people believe the entire Symbol Grounding Problem (SGP for short) can be sidestepped. Useful intelligence is possible without grounding, they argue. An AI doesn't need to understand what words mean - or at any rate, it can have some kind of effective understanding without needing an explicit connection between any word and its referent. And indeed, some of the most popular linguistic AIs today are arguably not grounded.

An example word association graph from the incredibly titled paper "Beauty and Wellness in the Semantic Memory of the Beholder." [1]

Linguistic AIs without any grounding get their results by learning and exploiting relationships between symbols. These relationships could be semantic (e.g. category relationships, such as ["cat" is "animal"]) or statistical (e.g. the frequency with which two words are found next to each other in a body of human-written text). For an example of an ungrounded semantic AI project, see Cyc [2]. For statistical examples, see the GPT lineage and other Large Language Models (LLMs).
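
To make the distinction concrete, here are toy Python versions of both flavors. Neither is how Cyc or an LLM actually stores things; they just show the kind of symbol-to-symbol information involved:

```python
# Toy contrast between semantic relations and statistical co-occurrence.
from collections import Counter
from itertools import combinations

# Semantic style: explicit labeled relations between symbols.
semantic_facts = [
    ("cat", "is_a", "animal"),
    ("rose", "is_a", "flower"),
    ("rose", "has_property", "sweet-smelling"),
]

# Statistical style: co-occurrence counts harvested from raw text.
corpus = ["the cat sat on the mat", "the cat chased the dog"]
cooccurrence = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        cooccurrence[tuple(sorted((a, b)))] += 1

print(cooccurrence[("cat", "the")])  # 2: the pair appears in both sentences
```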

The problem with trying to define all symbols exclusively by their relationship to other symbols is that such definitions are circular. An encyclopedia only works for someone who already knows a lot of words in the target language; new words are defined by relating them not just to other words, but to known (grounded) words *and their referents.* If you knew none of the words in an encyclopedia, it would still be internally consistent, highly structured ... and useless to you. No matter how large a net of connected symbols you build, if not even one symbol in the net is tied down to something "real," isn't the whole thing meaningless?

There are at least two possible counterpoints. One begins by noting that even if every symbol is arbitrary, their relationships are not. Imagine the words in a language as nodes (like locations) and the relationships between them as links (like paths running from one node to another), perhaps with a numerical weight to express how strong the connection is. The result is an abstraction called a graph. Even if we changed the name of every concept expressible in the English language, the graph topology of the relations between English symbols would not change. A rose by any other name would still be linked to "flower" by some other name, would still have the property of "sweet-smelling" by some other name, and so on. If you could zoom out and look at the pattern created by the whole network, it wouldn't be different at all.
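
A tiny demonstration of that point: rename every node in a toy relation graph, and the structure (summarized here by the degree sequence) comes out identical. This is only an illustration of graph relabeling, not a model of any particular language:

```python
# Renaming every symbol leaves the relational structure untouched.
graph = {
    "rose":           {"flower", "sweet-smelling"},
    "flower":         {"rose", "plant"},
    "plant":          {"flower"},
    "sweet-smelling": {"rose"},
}

# Swap every name for an arbitrary new one.
rename = {"rose": "X1", "flower": "X2", "plant": "X3", "sweet-smelling": "X4"}
relabeled = {rename[n]: {rename[m] for m in nbrs} for n, nbrs in graph.items()}

degree_sequence = lambda g: sorted(len(nbrs) for nbrs in g.values())
print(degree_sequence(graph) == degree_sequence(relabeled))  # True
```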

Here's a graph without any words attached to the nodes (the Kneser graph K(7,3), a graph theory construct). If we pretend that the heptagon on each node is a linguistic symbol, do their graphed relationships confer any meaning on them?

Next the defender of ungrounded models must argue that these inter-symbol relationships encode real information about the universe. Sure, maybe it wouldn't do you any good to know that a "muip" is a "weetabiner," but if you look at the *sum total* of connections "muip" has, and how they differ from the connections of every other symbol in the database, there is a sense in which you know what "muip" means. Not the traditional human sense of connecting it to sensory data, but *a* sense. [3]
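
Here's a toy illustration of that claim: "knowing" a nonsense word purely by comparing its connection profile against other words' profiles. The overlap measure below is just a stand-in, not any real semantic model:

```python
# Compare symbols by neighborhood overlap (Jaccard similarity), with no
# referents anywhere in sight. The words are nonsense on purpose.
links = {
    "muip":    {"weetabiner", "florp", "dax"},
    "snorf":   {"weetabiner", "florp", "dax"},
    "quizzle": {"blick"},
}

def jaccard(a, b):
    return len(links[a] & links[b]) / len(links[a] | links[b])

print(jaccard("muip", "snorf"))    # 1.0 - identical connection profiles
print(jaccard("muip", "quizzle"))  # 0.0 - nothing in common
```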

I think there is a grain of truth in this argument. However, if it goes some way toward permitting objective meaning, it does not enable subjective meaning. If I can't establish a connection between any of the symbols in the encyclopedia and the referents that happen to matter to *me* - sensations, emotions, desires, etc. - then it still doesn't help me communicate. I might be able to regurgitate some information from the graph to a person who *did* understand the symbols, and this person might even praise my knowledge. But I wouldn't be able to use that knowledge to tell them anything about *me* - information I know personally that isn't in the encyclopedia.

This leads into the second counterargument: ungrounded linguistic AIs can be quite successful. They can give correct answers to questions, follow commands, and generate coherent stories or essays. "How could they do all that," their proponents cry, "if they didn't really understand anything?" Some have even argued that human authorial talents are no more than a capacity to remix older literary content in a statistically reasonable way, implying that grounded symbols either don't really exist in the human mind or aren't that important. [4]

In my opinion, ungrounded statistical language models have no understanding in themselves. When they achieve such apparently good results, they do it by piggybacking on human understanding. Let's say an LLM manages to give you a good piece of advice. The LLM doesn't have a clue about anything but the fact that the words in its output could be reasonably expected to appear after words like the ones in your input. But for the humans who wrote answers to similar questions that got included in the LLM's training data, those words *were* grounded - so the answer might be correct. And for you, the words in the output are grounded, which means you can relate them to real things in your world and make use of them. But where would the AI be without you? If it had goals of its own that lay outside the "produce coherent text" paradigm, would all its knowledge of inter-word relationships help it accomplish them? Nope. It would need the missing piece: a way to tie at least *some* words to the substance of those goals.

(And that's leaving aside the possibility that the advice or answer you get from the LLM is flat wrong. I think this has less to do with the lack of grounding, and more to do with the LLM statistical approach not being a great knowledge representation scheme.)

I would be remiss not to mention Searle's famous Chinese Room paper [5]. Anyone who's deep in the AI weeds has probably heard of it, but it might be new to some of you. In it Searle proposes a thought experiment. Imagine a man closed up in a small room, whose only communication with the outside world is a means of passing written messages. The man does not know Chinese; but people outside the room send him messages in Chinese, and he has a rulebook which explains how to transform any received strings of Chinese characters into a response. So he writes messages back. Lo and behold, to the people outside, the man in the room appears to understand Chinese. They can pass in a message that says "How are you?" and he will respond with the Chinese equivalent of "Doing well, how about yourselves?" They can ask "Who wrote Journey to the West?" and he can answer "Reputedly, Wu Cheng'en." They can pass in a story, ask follow-up questions about the story, and get accurate results. Given a large enough rulebook, this isolated room containing the man may even come to resemble a fluent Chinese writer. It might produce whole novels.
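
Structurally, the Room is nothing but a symbol-in, symbol-out mapping. Here is a deliberately tiny caricature of it - a real rulebook (or an LLM) is astronomically larger and more flexible, but the shape is the same:

```python
# The Chinese Room reduced to a toy lookup table. Nothing here consults the
# man's hunger, the room's temperature, or any other fact inside the room.
RULEBOOK = {
    "你好吗？": "很好，你们呢？",            # "How are you?" -> "Doing well, how about yourselves?"
    "《西游记》是谁写的？": "据说是吴承恩。",  # "Who wrote Journey to the West?" -> "Reputedly, Wu Cheng'en."
}

def room_reply(message: str) -> str:
    return RULEBOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room_reply("你好吗？"))  # 很好，你们呢？
```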

Now answer for yourself this question: does the Chinese Room know Chinese?

Searle's answer is "no," and so is mine. Judging by recent reactions to LLMs (whose operations resemble the thought experiment rather strongly, if we assume the man constructs his own rulebook by studying a large number of sample input messages) quite a few people would disagree with me! "But empirically, doesn't the Chinese Room system resemble a regular Chinese-speaker in every respect? How can you claim it's different without invoking some kind of unobservable special sauce, some assumed human exceptionalism, some woo?"

I offer as a counter-argument the fact that there is something the Chinese Room cannot do: it cannot transfer any real *information* from the people outside the room to the man inside, or from him to them. The outside observers may write "your brother is sick, and your dog has had a litter of puppies" to the man, in Chinese of course. And he may give a fully appropriate response. A scene seemingly inspired by the occasion could even appear in his next novel. But he will feel no emotion, nor update his personal model of the world. He will continue to think of himself as having a healthy brother and only one dog. Similarly, the inside of the room might be hot or cold or humid or smelly; the man may feel contented, or tired, or ill; he may appreciate or resent the type of food delivered down the maintenance chute. And he has no way whatsoever to indicate such things to the people outside. He knows how his Chinese characters are related to each other, but not how they might relate to any of the things that impact his existence inside the room. If there is a poster on an interior wall, he cannot describe it.

The conclusion is that, while the Chinese Room might *talk* very well, it cannot *communicate*. [6]

What follows is a screenshot and transcript of my attempt to get Chat-GPT (a statistical LLM tuned by reinforcement learning, with other additions) to tell me about its current state and recent history. The information I want relates to the program's own functions - I'm asking about digital realities like "how many times today was a thumbs-up button on an instance of the web UI clicked" - so Chat-GPT shouldn't be hindered by its lack of sense or motor organs. But it resists answering or insists it doesn't know. What it probably *is* being hindered by is that the information I'm requesting isn't hooked up to the mass of symbol relationships in the chat engine.



Having leaned on Searle for support to argue that grounding is necessary, now I have to fight him. Because Searle believes his Chinese Room proves machine intelligence - of the kind we're considering, anyway - is impossible. All computer algorithms are analogous to the rules by which the man produces his replies, Searle says; and if these do not produce true understanding, if the Room does not "know Chinese," then nothing that is a computer algorithm alone can either. It is people on this side of the debate, who view the Symbol Grounding Problem as essentially unsolvable, whom I must address in Part III.

[1] Yoed N. Kenett, Lyle Ungar, and Anjan Chatterjee (2021) "Beauty and Wellness in the Semantic Memory of the Beholder," Frontiers in Psychology, Volume 12.

[2] Cyc Platform Description

[3] "This contrasts with the simple distributional semantics (or use theory of meaning) of modern empirical work in NLP, whereby the meaning of a word is simply a description of the contexts in which it appears. Some have suggested that the latter is not a theory of semantics at all but just a regurgitation of distributional or syntactic facts. I would disagree ... I suggest that meaning arises from understanding the network of connections between a linguistic form and other things, whether they be objects in the world or other linguistic forms. If we possess a dense network of connections, then we have a good sense of the meaning of the linguistic form." Christopher D. Manning (2022) "Human Language Understanding and Reasoning," Daedalus, Volume 151

[4] "I’m not trying to play up GPT-2 or say it’s doing anything more than anyone else thinks it’s doing. I’m trying to play down humans. We’re not that great." Scott Alexander (2019) "GPT-2 as Step toward General Intelligence," Slate Star Codex

[5] John R. Searle (1980) "Minds, Brains, and Programs," Behavioral and Brain Sciences, Volume 3

[6] "It’s the same with these “conversations”–a large language model is, effectively, trying to predict both sides of the conversation as it goes on. It’s only allowed to actually generate the text for the “AI participant,” not for the human; but that doesn’t mean that it is the AI participant in any meaningful way. It is the author of a character in these conversations, but it’s as nonsensical to think the person you’re talking to is real as it is to think that Hamlet is a real person. The only thing the model can do is to try to predict what the participant in the conversation will do next." Ben Schmidt (2023) "You've never talked to a language model."