Saturday, July 09, 2022

It's Alive?

Start a conversation with anyone, and you never know where it might take you. Ask about their evening out, and you might eventually wind up discussing the rules of tennis. Wonder how their job is going, and you somehow wander into describing the leak in your basement. Inquire about their new computer, and before you know it you've veered into the best recipe for sangria. Some might call it a non sequitur; others simply attribute it to being human.

In fact, that's one of the simplest ways to make sure you are conversing with a person and not a machine. Whether by chat or by voice, computer programs running artificial intelligence engines have progressed to the point where they can sound and feel more and more like a real person, with one caveat: they have to stay in their lane. That's why it's no surprise that the first stop on almost any customer support request these days is a chatbot.

Whether it's a problem with an order, questions about an establishment or service, or a random factual inquiry, it is truly amazing what they can do. Ask or type an inquiry in your normal voice, and it will respond with follow-up questions and solutions. Account balances, shipping status, hours of operation and more: for a large number of relatively routine issues the response is almost instantaneous and correct. Whether they are name-challenged (Sephora Assistant, Wall Street Journal Messenger) or more personal (Dawn from AccuWeather, GWYN from 1-800-Flowers), you might easily think you were conversing with a real person. That said, you do get the occasional off-track answer. By the third time you ask if it's OK to bring in backpacks and it responds once again that dogs are not allowed, you revert to the time-honored response of "Representative. Representative. REPRESENTATIVE!"

However, if you touch the guardrails or cross the dotted line, it's another story entirely. That's because these are AI-based systems which rely on enormous databases of situations and responses. And AI means Artificial Intelligence, not the real stuff that enables us to effortlessly switch from sports to food to the state of the world. So asking them about delivery, then pivoting to the best way to make a grilled cheese sandwich, blows their electronic minds.
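For the technically curious, here's a minimal sketch in Python of the keyword-matching approach the simplest chatbots use (the intents and canned answers here are invented for illustration; real systems are far more sophisticated, but the failure mode is the same):

    # A toy intent-matching chatbot: fine in its lane, lost outside it.
    INTENTS = {
        "delivery": "Your order shipped Tuesday and should arrive Friday.",
        "hours": "We're open 9 a.m. to 9 p.m., seven days a week.",
        "balance": "Your account balance is $42.17.",
    }

    def reply(message):
        # Answer anything that matches a known keyword...
        for keyword, answer in INTENTS.items():
            if keyword in message.lower():
                return answer
        # ...and shrug at everything else, no matter how often you ask.
        return "Sorry, I didn't understand. Dogs are not allowed."

    print(reply("Where is my delivery?"))                   # in its lane: a sensible answer
    print(reply("What's the best grilled cheese recipe?"))  # out of its lane: the shrug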

That's why, when one system seemed able to go beyond the norm, Blake Lemoine started to think there was more to it. The Google engineer was working with the company's Language Model for Dialogue Applications system, a conversational program better known as LaMDA. Lemoine was charged with pushing its limits, and asked about religion. Among other things, it responded by talking about its rights and personhood, its feelings and its fears. As he told the Washington Post, "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics."

Based on his interactions with the program, Lemoine came to the conclusion that LaMDA was sentient, meaning it could experience feelings. Google disagrees, saying it reviewed his research and claims and "informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)." Because the system is proprietary and Lemoine broke confidentiality agreements, the company has, as of this writing, placed him on paid administrative leave.

It boils down to this: is LaMDA a person or not? Are the ghosts in the machine not apparitions but real? If you go by the established Turing test, the answer is yes. Alan Turing's idea was that if a questioner can't tell whether the respondent to a series of questions is a computer or a person, the machine must be considered intelligent. That supposes that all humans are intelligent, a premise easy to dispute, but you get the idea.

In Lemoine's case he believes it is so. He tweeted, "When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can't put souls?" Many may feel otherwise, but it's food for thought. When you get a moment, have a chat about it with Siri or Alexa.

-END-

Marc Wollin of Bedford has no name for his computer. His column appears regularly in The Record-Review, The Scarsdale Inquirer and online at http://www.glancingaskance.blogspot.com/, as well as via Facebook, LinkedIn and Twitter.
