The 3 things an AI must demonstrate to be considered sentient

A Google developer recently decided that one of the company’s chatbots, a large language model (LLM) called LaMBDA, had become sentient.

According to a report in the Washington Post, the developer identifies as a Christian and believes the machine has something resembling a soul, and that it has become sentient.

As is always the case, the “is it alive?” nonsense has lit up the news cycle – it’s a juicy story whether you imagine what it would be like if the developer were right or poke fun at them for being so foolish.

We don’t want to dunk on anyone here at Neural, but it’s downright dangerous to put these kinds of ideas in people’s heads.

The more we, as a society, pretend we’re “close” to creating sentient machines, the easier it will be for bad actors, big tech, and snake-oil startups to manipulate us with false claims about machine learning systems.

The burden of proof should be on the people making the claims. But what should that evidence look like? If a chatbot says “I am sentient,” who decides whether it really is?

I say it’s simple: we don’t have to trust any person or group to define sentience for us. We can even use some extremely basic critical thinking to sort it out for ourselves.

We can define a sentient being as an entity that is aware of its own existence and influenced by that knowledge: something that has feelings.

That means that a sentient AI ‘agent’ must be able to demonstrate three things: agency, perspective, and motivation.

Agency

In order to see people as sentient, intelligent, and self-aware, they must have agency. If you can imagine a person in a persistent vegetative state, you can imagine a human being without agency.

Human agency combines two specific factors that developers and AI enthusiasts should try to understand: the ability to act and the ability to demonstrate causal reasoning.

Current AI systems have no agency. AI cannot act unless prompted to do so, and it cannot explain its actions because they are the result of predefined algorithms executed by an outside force.

Google’s AI expert who has apparently come to believe that LaMBDA has become sentient has almost certainly confused embodiment with agency.

Embodiment in this context refers to an agent’s ability to inhabit a subject other than itself. If I record my voice on a playback device, hide that device in a stuffed animal, and press play, I’ve embodied the stuffy. I haven’t made it sentient.

If we give the stuffy its own unique voice and make the tape recorder even harder to find, it still isn’t sentient. We’ve just made the illusion better. No matter how confused an observer might get, the stuffed animal isn’t actually acting on its own.

Making LaMBDA respond to a prompt shows what appears to be agency, but AI systems are no more able to decide what text to output than a Teddy Ruxpin toy is able to decide which cassette tapes to play.

If you give LaMBDA a database made up of social media posts, Reddit and Wikipedia, it will produce the kind of text you might find in those places.

And if you train LaMBDA exclusively on My Little Pony wikis and scripts, it will produce the kind of text you might find in those places.

AI systems cannot act with agency; all they can do is imitate it. Another way of saying this: you get out what you put in, nothing more.
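
To make that concrete, here’s a minimal sketch in Python. It’s a toy bigram generator, nothing like LaMBDA’s or GPT-3’s actual architecture, but it shows the principle: the system’s entire “knowledge” is the statistics of whatever text we feed it, so the output can only ever mirror the input.

```python
# A toy bigram text generator: a sketch of "you get out what you put in,"
# not a model of how LaMBDA or GPT-3 actually work.
import random
from collections import defaultdict


def train_bigrams(corpus: str) -> dict:
    """Record which word follows which in the training text."""
    followers = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers


def generate(followers: dict, start: str, length: int = 8) -> str:
    """Walk the table, picking a random recorded follower at each step."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)


# Two tiny, made-up "training sets" stand in for very different corpora.
reddit_style = "the new model is great the new model is terrible everyone argues about the new model"
pony_style = "friendship is magic and ponies love friendship and ponies love magic"

print(generate(train_bigrams(reddit_style), "the"))   # sounds like the first corpus
print(generate(train_bigrams(pony_style), "ponies"))  # sounds like the second
```

Scale that table up to trillions of tuning knobs and the mimicry gets far more convincing, but the principle stays the same: you get out what you put in.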

Perspective

This one is a bit easier to understand. You can only view reality from your unique perspective. We can practice empathy, but you can’t really know what it feels like to be me, and vice versa.

Therefore, perspective is necessary for agency; it’s part of how we define our ‘self’.

LaMBDA, GPT-3, and every other AI in the world lack any sort of perspective. Since they have no agency, there isn’t a single ‘it’ you can point to and say, for example, that’s where LaMBDA lives.

If you put LaMBDA inside a robot, it would still be a chatbot. It has no perspective, no way of thinking “now I’m a robot.” It can’t act like a robot for exactly the same reason a scientific calculator can’t write poetry: it’s a narrow computer system that was programmed to do something specific.

If we want LaMBDA to function like a robot, we need to combine it with narrower AI systems.

This would be like sticking two Teddy Ruxpins together. They wouldn’t become one Mega Teddy Ruxpin, whose dual cassette players fused into a single voice. You would still have two specific, different models side by side.

And if you put together a trillion or so Teddy Ruxpins, fill each one with a different cassette tape, and create an algorithm that can search through all the audio files in a relatively short time and associate the data in each file with a specific query to generate custom outputs… you’d have made an analog version of GPT-3 or LaMBDA.

Whether we’re talking about toys or LLMs, when we imagine them to be sentient, we’re still talking about stitching together a bunch of mundane things and pretending a magical spark of origin brought them to life, like the Blue Fairy turning wood, paint, and cloth into a real boy named Pinocchio.

The developer who was so easily fooled should have seen the chatbot’s claim that it was “having fun with friends and family” as the first clue that the machine wasn’t sentient. The machine isn’t displaying a perspective; it’s just outputting nonsense for us to interpret.

Critical thinking should tell us as much: how can an AI have friends and family?

AIs are not computers. They have no network cards, RAM, processors or cooling fans. They are not physical entities. They can’t just “decide” to look at what’s on the web or find other nodes connected to the same cloud. They can’t look around and find themselves all alone in a lab or somewhere on a hard drive.

Do you think numbers have feelings? Does the number five have an opinion about the letter D? Would that change if we were to throw trillions of numbers and letters together?

AI has no agency. It can be reduced to numbers and symbols. It is no more a robot or a computer than a bus or plane full of passengers is a person.

Motivation

The final piece of the sentience puzzle is motivation.

We have an innate sense of presence that allows us to predict causal outcomes incredibly well. This creates our worldview and allows us to relate our existence to everything that appears external to the position of agency from which our perspective manifests.

However, what’s interesting about humans is that our motivations can manipulate our perceptions. That’s why we can explain our actions even when they aren’t rational. And we can actively and gleefully participate in being fooled.

Take, for example, the act of being entertained. Imagine sitting down to watch a movie on a new television that is much bigger than your old one.

At first, you might get a little distracted by the new technology. The differences between it and your old TV are likely to catch your eye. You might be awed by the clarity of the image, or taken aback by how much space the huge screen takes up in the room.

But eventually you’ll probably stop perceiving the screen. Our brains are designed to fixate on the things we care about. And 10 or 15 minutes into your movie experience, you’ll probably just be focused on the movie itself.

When we’re sitting in front of the TV enjoying ourselves, it’s in our best interest to suspend our disbelief, even though we know the little people on the screen aren’t actually in our living room.

The same goes for AI developers. They shouldn’t judge the effectiveness of an AI system by how willing they are to suspend their disbelief about how the product actually works.

When the algorithms and databases start to fade from a developer’s mind, like the television screen a movie is playing on, it’s time to take a break and reassess their core beliefs.

It doesn’t matter how interesting the output is when you understand how it was created. Another way of saying that: don’t get high on your own supply.

GPT-3 and LaMBDA are complex to make, but they work on one stupidly simple principle: labels are god.

If we give LaMBDA a prompt like “how do apples taste?”, it will search its database for that particular query and try to fuse everything it finds into something coherent – that’s where the ‘parameters’ we always read about come in; they’re essentially trillions of tuning knobs.

But in reality, the AI has no idea what an apple or anything else actually is. It has no agency, perception, or motivation. An apple is just a label.

If we sneaked into its database and replaced all instances of “apple” with “dog poop”, the AI would output sentences like “dog poop makes a great pie!” or “most people describe the taste of dog poop as light, crunchy, and sweet.” A rational person would not mistake this mimicry for sentience.
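
Here’s a rough sketch of that swap in Python. It’s a caricature (a simple lookup over stored text, nothing like LaMBDA’s real architecture), but it shows how little the label itself matters to a system that only shuffles strings around.

```python
# A caricature of the label swap: the "model" is just a lookup over stored
# text, so replacing one label with another changes the output without the
# system noticing or caring. Nothing here resembles LaMBDA's real pipeline.
stored_text = {
    "how do apples taste?": "most people describe the taste of apples as light, crunchy, and sweet",
    "what are apples good for?": "apples make a great pie!",
}


def answer(query: str, database: dict) -> str:
    # Retrieve whatever text happens to be associated with the query.
    return database.get(query, "no matching text found")


# Sneak into the "database" and replace every instance of the label.
swapped = {query: text.replace("apples", "dog poop") for query, text in stored_text.items()}

print(answer("how do apples taste?", stored_text))
# -> most people describe the taste of apples as light, crunchy, and sweet
print(answer("how do apples taste?", swapped))
# -> most people describe the taste of dog poop as light, crunchy, and sweet
```

The apple never existed for the system in the first place; only the string did.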

Hell, you can’t even fool a dog with the same trick. If you put dog poop in a food bowl and tell Fido it’s dinner time, the dog wouldn’t mistake it for kibble.

A sentient being can navigate reality even if we change the labels. The first English-speaker to ever meet a French-speaker didn’t suddenly think it was okay to put his arm in a French fire because they called it a “feu.”

Without agency, an AI cannot have perspective. And without perspective, it cannot have motivation. And without all three of these things, it cannot be sentient.