Google Engineer Claims AI Became Sentient

If by dimension you mean linking the meaning behind language to concepts of logic and objects, then transformer based NLP and vision transformer combination models are already here.

Take a look at Google’s Imagen, or OpenAI’s DALL-E and CLIP.

I mean, if you tell the model to generate an image from the prompt “an illustration of a baby daikon radish in a tutu walking a dog” and you get these results

[generated images]

or if you tell the model to generate “A chrome-plated duck with a golden beak arguing with an angry turtle in a forest” and you get this

Does it not understand multiple dimensions?

2 Likes

To be honest, when I first saw these combined image-and-NLP models, I thought what a waste of (CPU) time. Image processing just adds such an enormous amount of data and free parameters, and remember there are still big unsolved problems in object recognition. That’s why Teslas keep crashing into safety barriers and sometimes people. Why muddy up the NLP models with this mess? Rather, I think there’s something deeper missing from NLP. Some of the models going back to the syntactic structure of language are probably on a better path.

Ok so republicans are out? You drive a tough bargain.

1 Like

Idk about that, but Imagen and DALL-E are amazing and could be massive cash cows once they’re out of beta.

1 Like

I question whether they will be cash cows. It costs an enormous amount of cloud compute to keep these things going, and the operations side, by which I mean rolling out updates, is really costly too. The teams of people looking after this stuff are very highly paid. I don’t think I’ve heard the words “cash cow” and “AI product” used in the same sentence before. Do you have an example to prove me wrong?

Eventually these things won’t need to run at 32-bit floating-point precision. Once you have a well-trained model, it can be transferred to a quantized and compressed model running on some type of in-memory computing accelerator, where it’d be much cheaper. All the memory companies are working to commercialize this, and with Samsung and Xilinx’s SmartSSD you can already achieve it if you do the hard work of programming the FPGA. So yeah, it’s not going to be all that costly to run this stuff in the cloud 10 years down the road.
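The float-to-quantized step can be sketched with a toy example. This is a generic symmetric int8 scheme, just to show the idea, not the actual pipeline Samsung or Xilinx use:

```python
import numpy as np

# Toy float32 "weights" as they might come out of training.
weights = np.array([0.12, -0.5, 0.33, 0.9, -0.07], dtype=np.float32)

# Symmetric int8 quantization: map [-max_abs, max_abs] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)   # stored as 1 byte per weight
dq = q.astype(np.float32) * scale               # dequantized at inference time

# 4x smaller storage, with only a small round-off error per weight.
assert q.itemsize == 1 and weights.itemsize == 4
assert np.max(np.abs(dq - weights)) < scale
```

Real deployments quantize per layer or per channel and often retrain briefly to recover accuracy, but the storage and bandwidth win is the same idea.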

3 Likes

You shouldn’t underestimate the labour cost involved in the updates. The massive number of parameters in these models makes updates fiddly and error-prone, and you get a lot of regression errors. By regression here I mean something that worked under version 1.0 but broke in 1.1.
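The usual defence against this kind of regression is a pinned set of prompts with known-good outputs that every new version must pass. A minimal sketch, where `model_v11` is a hypothetical stand-in for whatever inference call you actually have:

```python
# Golden set: prompts with outputs that version 1.0 got right.
golden = {
    "2 + 2 =": "4",
    "capital of France": "Paris",
}

def model_v11(prompt: str) -> str:
    # Hypothetical v1.1 model; here a stub that still answers correctly.
    return {"2 + 2 =": "4", "capital of France": "Paris"}[prompt]

# Any prompt whose answer changed between versions is a regression.
regressions = [p for p, expected in golden.items() if model_v11(p) != expected]
assert not regressions, f"v1.1 broke: {regressions}"
```

The fiddly part in practice is that model outputs are rarely exact strings, so real harnesses compare with fuzzy or semantic matching rather than `!=`.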

Natural language is not precise even for humans anyway, and you don’t need high precision when it comes to art. I mean, if it’s janky for a source-code generator that builds a program from natural-language instructions, then we might have a problem. Although I suspect it’s much easier to define a loss for a source-code generator.

Bezzlers believing in their own Bezzle.

He should stop snorting whatever he is snorting. There’s literally better stuff out there.

1 Like

It seems to me that envy/jealousy would be a very good indicator that AI is sentient. For example, if I told the Google AI that I am leaving now to talk to another machine, and the Google AI then started to question why I would leave him/her to go talk to another machine, became somewhat quiet, using few words in the ensuing conversation, asked repeatedly during the next session how the discussion with the other machine went, etc., then I would say that’s a good indication of sentience.

1 Like

If I fed the transcripts of every Maury episode to an NLP model, I think you could get that result now.

2 Likes

10 years down the road is a hell of a venture capital pipeline!

Not for the likes of Samsung, Micron, etc., when it only requires some modification to their memory controllers to enable a new application.

If that were the benchmark of sentience, we wouldn’t legally be allowed to treat animals the way we do.

1 Like

IMO, a disembodied AI mind is going to manifest a very different form of “consciousness” than a human, or indeed any other animal. Expecting it to think like a human (and applying the Turing test to check that it’s doing so) is completely daft. A machine does not have the physical demands, constraints, or goals of a human body, so none of its “thoughts” are likely to reference that form of existence.

Being a brain without a body would send anybody or anything completely crazy, I reckon. So when the machine becomes self-aware and decides to wipe out humanity in a blaze of irrational hatred, that’ll be proof positive that it’s sentient. Some Google engineer will be writing a triumphant paper about it as the bombs rain down.

4 Likes

I do think it will be possible in time to create AI robots that “appear” to be sentient, because the zillions of lines of code they will have will empower them to mimic human intelligence, in fact to surpass it.

They could spawn very realistic AI friends. You could have your AI friend on your Apple Watch, for example: your digital companion for those moments when you are bored and need some diversion.
I am certain they will become quite well versed and “lifelike”.

They could be programmed to mimic a lost loved one, for example. But I am not sure whether that will ultimately be good for us or not.

We need to experience loss and come to terms with it and move on. This will not allow us to move on and may lead to greater psychosis.

However, a digital companion that helps you remember chores and can converse with you quite intelligently could perhaps be useful.

1 Like

There are already ways to upload a picture into a program that manipulates it so that the head and face move around a bit. This will only improve over time. Then, by uploading a video of the person talking, the voice could be mimicked too. This is probably already possible, though with somewhat crude results.

I agree that it wouldn’t be good.

1 Like

They will only become better, and scarier.