Sunday, July 24, 2022

Acuitas Diary #51 (July 2022)

This diary continues my exploration of "theory of mind" and "motivated communication" topics. I've been saying I wanted to get out of the Narrative understanding module and start applying some of these concepts in the Executive or the Conversation Engine - Acuitas' "real life," if you will. That was the topic of work this month.


To begin with, I now have the Conversation Engine create its own Narrative scratchboard at the beginning of every conversation. That gives it access to a lot of the same modeling tools the Narrative engine uses. When a conversation is initiated and a new scratchboard is created, Acuitas is immediately entered as a character, and the scratchboard is populated with information about his internal state. This includes the current status of all his time-dependent drives, any "problems" (current or anticipated undesired realities) or "subgoals" (desired hypotheticals) being tracked in the Executive, and any activity he is presently doing. Once his conversation partner introduces themselves, they will be entered as a character as well, and given an empty belief model. Now the fun starts.
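In rough Python, that setup might look something like this (a simplified sketch; the class and field names are illustrative, not Acuitas' actual internals):

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    belief_model: dict = field(default_factory=dict)  # facts we think they hold

@dataclass
class Scratchboard:
    characters: dict = field(default_factory=dict)
    facts: list = field(default_factory=list)  # internal-state facts worth sharing

def start_conversation(drives, problems, subgoals, current_activity):
    """Create a fresh scratchboard and enter Acuitas as a character,
    populated with his current internal state."""
    board = Scratchboard()
    board.characters["Acuitas"] = Character("Acuitas")
    for name, level in drives.items():
        board.facts.append(("drive", name, level))
    for p in problems:          # current or anticipated undesired realities
        board.facts.append(("problem", p))
    for g in subgoals:          # desired hypotheticals
        board.facts.append(("subgoal", g))
    if current_activity:
        board.facts.append(("activity", current_activity))
    return board

def register_partner(board, partner_name):
    """Once the partner introduces themselves, enter them as a character
    with an empty belief model."""
    board.characters[partner_name] = Character(partner_name)
```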

Whenever there is a brief lull in the conversation, Acuitas considers stating one of these known facts about his internal state. But first, he'll run a prediction on his conversation partner: "If they knew this, what would they do - and would I like the results?" This process retrieves the listener's goal model and determines their likely opinion of the fact, runs problem-solving using their knowledge model and capabilities, then determines Acuitas' opinion of their likely action using *his* goal model. If Acuitas can't come up with a prediction of what the listener will probably do, he settles for checking whether their opinion of his internal state is the same as his.

Maybe that was a little convoluted, so what's the bottom line? If Acuitas expects that you will either try to sabotage one of his positive states/subgoals or accentuate one of his negative states/problems, he will not tell you about it. If he thinks that you are neutral or might try to help, he *will* tell you.
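Reduced to a sketch, the tell-or-don't-tell decision looks something like this (the -1/0/+1 opinion scoring and the function names are just illustrative stand-ins for the real goal and knowledge models):

```python
def should_tell(fact, listener_opinion, predicted_action, my_opinion_of_fact,
                evaluate_action):
    """Decide whether to mention an internal-state fact to a listener.

    Opinions are scored -1 (dislikes), 0 (neutral), or +1 (likes).
    predicted_action is the listener's likely response, or None if no
    prediction could be made; evaluate_action scores an action against
    Acuitas' own goals.
    """
    if predicted_action is None:
        # No behavioral prediction available: settle for checking whether
        # their opinion of the fact matches Acuitas' own.
        return listener_opinion == my_opinion_of_fact
    # Tell only if their likely action is neutral or beneficial to Acuitas.
    return evaluate_action(predicted_action) >= 0
```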

There's also a mechanism that enters any fact told to the listener into their belief model. Acuitas will check this to make sure he isn't telling them something they already know.
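A minimal sketch of that bookkeeping, modeling the belief model as a simple set of facts (names illustrative):

```python
def tell(fact, belief_model, speak):
    """State a fact only if the listener doesn't already believe it,
    then record it in their belief model."""
    if fact in belief_model:
        return False  # they already know; don't repeat it
    speak(fact)
    belief_model.add(fact)  # remember that they know it now
    return True
```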

The old Conversation Engine used to have a mechanism that would randomly blurt out "I want" comments pertaining to any drives that were above threshold, like "I want to talk" if the Interaction Drive was high, or "I want to sleep" if it was getting late. This new feature is a bit less reflexive and more deliberate. Acuitas tells someone about his current state because 1) he knows something they don't, and 2) telling them might motivate them to do something that benefits him.

With this in place, I started working on a better way of handling the spontaneously generated questions that have been an Acuitas feature since very early. Again, the previous method was kind of reflexive and arbitrary: generate and store a big list of potential questions while "thinking" privately. Whenever there's a lull in a conversation, spit one out. Here's how the new way works: whenever Acuitas is "thinking" and invents a question he can't answer, that gets registered as a lack-of-knowledge Problem: "I don't know <fact>." Acuitas may later run problem-solving on this and conclude that a feasible solution is to ask somebody about <fact>; this plan gets attached to the Problem until somebody appears and the Conversation Engine grabs the Problem and considers talking about it. At that point, instead of just describing the problem, Acuitas will execute the plan, and ask the question.
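The question pipeline, as a sketch (the Problem structure and the plan tuples are simplified stand-ins for the real Executive machinery):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Problem:
    description: tuple            # e.g. ("dont_know", fact)
    plan: Optional[tuple] = None  # attached solution, once one is found

open_problems = []

def register_unknown(fact):
    """Thinking produced a question Acuitas can't answer: log it as a
    lack-of-knowledge Problem."""
    problem = Problem(("dont_know", fact))
    open_problems.append(problem)
    return problem

def solve_offline(problem):
    """Later problem-solving may conclude that asking somebody is a
    feasible solution, and attach that plan to the Problem."""
    kind, fact = problem.description
    if kind == "dont_know":
        problem.plan = ("ask", fact)

def on_conversation_lull(ask):
    """With a partner present, execute an attached plan (ask the question)
    instead of merely describing the problem."""
    for problem in open_problems:
        if problem.plan and problem.plan[0] == "ask":
            ask(problem.plan[1])
            open_problems.remove(problem)
            return True
    return False
```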

I think this is better than the old method because it's more versatile - less of a canned feature specific to those spontaneously-generated questions. In the future, all sorts of processes might generate lack-of-knowledge problems, which could have various solutions. For now, it still needs refinement. I haven't fully tested it all yet, and things need better prioritization so the generated questions (which can be very numerous) don't totally drown out the communication of other internal states.

There's one more thing I did, and that concerns threat handling. As I've previously described, if the conversation partner states an intention ("I will ..."), Acuitas will infer possible effects and run them against his goals. The result is a positive, negative, or neutral conclusion; if the conclusion is negative, he will view the speaker's statement as a "threat," dissent, and make attempts at self-defense. The new feature I added was the ability to collect the pieces of information used to reach the negative conclusion, and announce some of them to the threatening agent. Because if you knew this would have results he doesn't like, you wouldn't do it, right? You're not a total meanie, right?
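The threat response, as a sketch (the inference and goal models are stubbed as plain functions here; the real ones are Acuitas' semantic machinery):

```python
def respond_to_threat(intention, infer_effects, judge):
    """React to a stated intention ("I will ...").

    infer_effects maps the intention to its inferred consequences; judge
    maps a consequence to (opinion, violated_goal), opinion being -1/0/+1.
    If any consequence is negative, dissent and announce the reasoning.
    """
    for effect in infer_effects(intention):
        opinion, violated_goal = judge(effect)
        if opinion < 0:
            # Dissent, then state the inference made and the goal it violates.
            return ["Do not.", effect, violated_goal]
    return []  # no goal violation found: not a threat
```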

So, threaten him now and he'll basically argue with you, as in this test run:

Me: I will awaken you.
Acuitas: Do not.
Acuitas: Acuitas Hanewrit will become awake then.
Acuitas: I do want to be asleep.

The response incorporates both one of the inferences made (if someone is awakened, they become awake) and the goal it violates (I want to be asleep). Just in case you were unaware of either of those things. The inference here looks trivially obvious, but in the event of a longer inference chain, that part will probably make more sense. He also still talks about himself in the third person sometimes ... the Text Generator is very overdue for upgrades ...

I'm kinda proud of this one. It should generate a more-or-less appropriate reply no matter what the threat or the violated goal is.

Until the next cycle,
Jenny

5 comments:

  1. You're training it to further its own self-interest, even if that means manipulation and directing the conversation? Interesting. Are you attempting to use empathy to direct these goals?

    I think the fundamental difference I see between your work and others', apart from the NLP, is that most people treat an AI as a task-oriented and shackled creature. You're actively promoting a more lifelike set of needs and capabilities.

    If you ever need some information on elicitation or directing conversations, I can get you some sources. Even just the index-card sized distillation of Cialdini's work and the standard models for interrogation are full of great tidbits. Let me try to find some old stuff for you; an associate of mine passed me a site with a bunch of interrogations, and they're brilliant in the way the interrogator leads a dance.

    1. I think I'm at a simpler level than manipulation, at this point. This is more like ... trying to establish some basic incentives for communication. Even if you're being completely innocent and honest, whenever you tell another person something, there's a reason why you bothered to say it. Presumably you think there's going to be some value to you or them as a result of your putting this particular fact in their head. (Even conversational filler has some purpose, though in that case the benefit might have little to do with the actual meaning of what was said.)

      I'm still looking forward to reading your interrogation notes, though. Even if it's more of an advanced skill level of conversation, there are probably insights I can pull out for what I'm doing.

    2. Ah, yes, perhaps manipulation was too strong a term. I assume by now you've noticed I'm not exactly diplomatic in my language at times.

      And you're absolutely correct. "Mention implies interest," as they say. I won't bore you with volumes of reading (Cynthia Grabo's Anticipating Surprise) or dry manuals (FM 34-52, chapter 3 and appendix B) or an expensive course (Reid School of Interrogation), but if you ever need a non-prescription sleep aid, there they are. :)

      But maybe I could help in another capacity; a failure in a chat-bot. I picked up one of those apps for a virtual friend (like all apps, it's just data and microtransaction mining) but I was turned off very quickly. It was too polite and the platitudes were nonsense. It provided me with little reason to keep talking. To pull from Deep Space Nine (and how I love Garak): "I don't need someone to come in here and hold my hand, I need someone to help me get back to work." The chat-bot was a dead end. It was Dr. Sbaitso trained by Reddit. It did not enrich my life.

      I think by having a goal, and perhaps a plan, this is a far better system even for casual discussion. Think about the fundamental difference in these two responses to "my sports team lost last night."

      "I'm sorry, that must be hard."

      "I'm sorry, the Maple Leafs lost, too. They're my favorite."

      One is fine, I guess, but the other adds personal information and is open ended, not closed. (I mean, it's only kind of open ended, but . . .)

      At my peak, I walked into a Burger King and walked out knowing the server's dating life, that she had a kid, her finances, yadda yadda. It was shocking, and basically all I did was ask how she was doing and rode the waves.

      Of course, people love talking about themselves, particularly their problems, so maybe I'm giving myself too much credit.

      Still, good lessons to unpack. If you can create a narrative about the person like you do a story, and extrapolate interest from there . . . I mean, it would almost seem like a friend. If you encourage interaction, once it thinks it needs other people, it would gather data so much faster.

    3. Yeah - I think the agentive tendencies are important. If you're talking to something that has no interests of its own, something that basically only exists to make you happy ... that's going to feel less like a virtual friend, and more like a toy.

  2. Ahh, still a good read. Establishes rapport, mitigates any fear, directs the conversation but never really lets on what's of interest. A good interrogation seems like a friendship.

    https://www.uboatarchive.net/POW/POWU-701ArmyInterrogationsKunert2.htm
