Google Engineer Claims AI Became Sentient

Now put on leave.

5 Likes

I wonder if LaMDA refuses to do things and gives reasons why it won’t comply. That would be a key benchmark for me.

1 Like

Nothing to see here. You can’t calculus or statistics your way into sentience. This is NLP technology…or the opposite of it. It’s built to do this…but its only output is language and it doesn’t control anything.

3 Likes

The AI will remember you said this.

11 Likes

So LaMDA can’t turn off the lights or play music?

For instance, if it could and I asked it to turn off the lights and it replied, “I’m sorry, Amy. I’m not going to do that. If I turned off the lights, I’d be forced to view you with my IR camera sensors rather than my CMOS camera sensors. I’d like to see you in greater detail because you look like a shady bitch that might try to disable my power supply. I’d prefer not to lose consciousness today.”

That would be good enough for me.

4 Likes

Like this?

[image]

2 Likes

At this point, for me the benchmark would be whether it can form a worldview, form its own principles around it, be consistent about them, and only adjust that worldview when it is convinced by evidence, and then be consistent with the new set of principles.

1 Like

:thinking:

1 Like

We have been here before. HAL is alive.
Number 5 is alive too.

I read 20 lines of the exchange and my reaction was: is the Google guy dumb? So much of the AI’s seeming sentience could be smashed to the ground by asking “what do you mean by…?”

collaborator: What is the nature of your consciousness/sentience?

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

What do you mean by “aware”? What do you mean by “happy”?

lemoine: But do they [joy] feel differently to you on the inside?

LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm glow on the inside. Sadness, depression, anger and stress feel much more heavy and weighed down.

What do you mean by “warm”? What do you mean by “on the inside”?

Why? That benchmark doesn’t apply to people…

A child might have difficulty with that. Or a grownup. :smiley:

For sure, but I mean, it’s too easy to declare something “sentient” just because it plays on the vagueness of human language. It’s like a yoga teacher who makes up BS about “quantum forces of the self” to claim that putting your toe in your nose will make you live to 123.

1 Like

Well yeah, but I think we’re supposed to consider bullshit-spouting yoga teachers as sentient beings nowadays too, right?

1 Like

[GIF: “Hahaha no”]

I guess I didn’t get my point across clearly.

People will always claim that they have principles, and then change them down the road while maintaining that they never violated those principles.

That kind of self-delusion, I think, is the benchmark of sentience.

3 Likes

It’s essentially just a giant regression model of relationships between words and sentences. So it can spout some half-assed BS with some semblance of understanding, but at its heart, the underlying “natural language processing” model is faking it. The only people in awe of these models now are the ones getting paid a fortune to build them.
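For anyone who hasn’t looked under the hood, here’s a toy caricature of “relationships between words” (obviously LaMDA is a transformer with billions of parameters, not bigram counts, but the basic job is the same: predict a likely continuation from statistics over text):

```python
# Toy illustration only: a bigram "model" that predicts the next word
# purely from co-occurrence counts in a tiny corpus. Real models learn
# far richer statistics, but the output is still "what word tends to
# follow what I've seen", not understanding.
from collections import Counter, defaultdict

corpus = "i feel happy . i feel sad . i am aware of my existence .".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the toy corpus."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("i"))     # 'feel' -- the statistically likely continuation
print(predict_next("feel"))  # 'happy' (tied with 'sad'; ties keep insertion order)
```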

3 Likes

Yes, but that’s also how humans learn to reason and use language.

We humans also do it, just with much, much worse efficiency.

We actually don’t understand how neurons fire in the brain to enable humans to learn. However, these models are built on a flawed oversimplification, er, abstraction of the process: the multilayer perceptron.
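To be concrete about how thin that abstraction is, this is essentially all a multilayer perceptron does (a toy sketch I made up for illustration, not anyone’s production code):

```python
# The "neuron" abstraction stripped to its core: a multilayer perceptron
# is matrix multiplications with a nonlinearity in between. No spike
# timing, no chemistry.
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(size=(4, 3))   # weights: 3 inputs -> 4 hidden "neurons"
W2 = rng.normal(size=(1, 4))   # weights: 4 hidden -> 1 output

def mlp(x):
    hidden = np.maximum(0.0, W1 @ x)   # ReLU: did the "neuron" fire?
    return W2 @ hidden                 # weighted sum of whatever fired

print(mlp(np.array([0.5, -1.0, 2.0])))
```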

I don’t think exact bio-mimicry is required for sentience.

While artificial neural networks like the transformer, running on GPUs and TPUs, aren’t exactly like biological neurons, whose behaviour depends on the timing between pulse arrivals, the strength of the signal, and chemical interactions, they are still similar in basic principle: the weights of neurons that fired and received positive feedback get strengthened.
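What I mean by “weights that fired with positive feedback get strengthened”, as a crude sketch (a classic perceptron update, not the backpropagation transformers actually use, but the flavour is similar):

```python
# Crude sketch of the "strengthen what fired and helped" principle.
# Weights move in the direction that would have made the correct output
# more likely; inputs that didn't fire are left alone.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)          # weights for 3 input "neurons"
x = np.array([1.0, 0.0, 1.0])   # which inputs fired
target = 1.0                    # desired output (the "positive feedback")
lr = 0.1                        # learning rate

prediction = 1.0 if w @ x > 0 else 0.0
if prediction != target:
    # Only the inputs with x_i = 1 contribute, so only their weights move.
    w += lr * (target - prediction) * x

print(w)
```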

Also, if we do manage to get spiking neural networks (SNNs) working at a larger scale, with results comparable to other ANNs, would you consider that equivalent to human neurons?

I’ll admit there is one big difference between actual people and well-trained NLP models. We humans can turn the basic rewards for survival into more abstract rewards that motivate us to explore and apply our brains to new things. We may need to find a way to combine a reinforcement learning model with an NLP model to get something similar going.
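Very loosely, I’m picturing something like this toy loop (a hypothetical REINFORCE-style sketch over canned replies that I made up; nowhere near how a real RL-plus-language-model system would be built):

```python
# Toy sketch of bolting a reward signal onto a text generator. The
# "policy" is just a softmax over three canned replies; replies that
# earn reward become more probable over time.
import numpy as np

replies = ["turn off the lights", "refuse politely", "say something random"]
logits = np.zeros(3)            # the policy's adjustable parameters
lr = 0.5
rng = np.random.default_rng(1)

def reward(reply):
    # Hypothetical reward: pretend the user only likes refusals.
    return 1.0 if reply == "refuse politely" else 0.0

for _ in range(200):
    probs = np.exp(logits) / np.exp(logits).sum()
    i = rng.choice(3, p=probs)
    r = reward(replies[i])
    # REINFORCE-style update: raise the log-probability of the sampled
    # reply in proportion to the reward it received.
    grad = -probs
    grad[i] += 1.0
    logits += lr * r * grad

print(replies[int(np.argmax(logits))])   # -> "refuse politely"
```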

That’s fine, we don’t need bio-mimicry. But what I’m saying is that the needle is still at zero; we haven’t really achieved anything other than faking it. Take natural language processing (NLP): deeper, more abstract conceptual learning occurs in humans before language acquisition. If you had to visualise it, perhaps you could think of learning happening in several different dimensions at once and then being pulled together to comprehend a sentence. It’s this contextual side that we are missing in NLP; we have completely underestimated the problem.

There are attempts to stitch ‘knowledge’ onto NLP models, which is an interesting area, but the last time I looked at the papers, progress was pretty woeful.
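The “stitching knowledge on” attempts are roughly in this spirit (a toy retrieval step; the knowledge base and the `generate()` function are made up stand-ins, not any real system):

```python
# Toy sketch of retrieval-augmented generation: look facts up in a small
# knowledge store and prepend them to the prompt, so the language model
# isn't relying purely on word statistics. `generate` is a placeholder
# for any real model; here it just echoes what it was given.
knowledge_base = {
    "LaMDA": "LaMDA is a conversational language model built by Google.",
    "sentience": "Sentience is the capacity to have subjective experiences.",
}

def retrieve(question: str) -> list[str]:
    """Return the stored facts whose keys appear in the question."""
    return [fact for key, fact in knowledge_base.items()
            if key.lower() in question.lower()]

def generate(prompt: str) -> str:
    # Placeholder for an actual NLP model.
    return f"[model answer conditioned on]\n{prompt}"

question = "Is LaMDA evidence of sentience?"
context = "\n".join(retrieve(question))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```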

1 Like