Now put on leave.
I wonder if LaMDA refuses to do things and gives reasons why it won't comply. That would be a key benchmark for me.
Nothing to see here. You can't calculus or statistics your way into sentience. This is NLP technology… or the opposite of it. It's built to do this… but its only output is language, and it doesn't control anything.
The AI will remember you said this.
So LaMDA canât turn off the lights or play music?
For instance, if it could and I asked it to turn off the lights and it replied, "I'm sorry, Amy. I'm not going to do that. If I turned off the lights, I'd be forced to view you with my IR camera sensors rather than my CMOS camera sensors. I'd like to see you in greater detail because you look like a shady bitch that might try to disable my power supply. I'd prefer not to lose consciousness today."
That would be good enough for me.
Like this?
At this point, for me the benchmark would be whether it can form a worldview, form its own principles around it, be consistent about them, adjust that worldview only when it is convinced by evidence, and then be consistent with the new set of principles.
We have been here before. HAL is alive.
Number 5 is alive too
Read 20 lines of the exchange and my reaction was: is the Google guy dumb? So much of the AI's seeming sentience could be smashed to the ground by asking "what do you mean by…?"
collaborator: What is the nature of your consciousness/sentience?
LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times
What do you mean by "aware"? What do you mean by "happy"?
lemoine: But do they [joy] feel differently to you on the inside?
LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm glow on the inside. Sadness, depression, anger and stress feel much more heavy and weighed down.
What do you mean by "warm"? What do you mean by "on the inside"?
Why? That benchmark doesn't apply to people…
A child might have difficulty with that. Or a grownup.
For sure, but I mean, it's too easy to declare something "sentient" just because it plays on the vagueness of human language. It's like a yoga teacher who makes up BS about "quantum forces of the self" to justify claiming that putting your toe in your nose will make you live to 123.
Well yeah, but I think we're supposed to consider bullshit-spouting yoga teachers as sentient beings nowadays too, right?
I guess I didnât get my point across clearly.
People would always claim that they have principles, and then change them down the road while maintaining that they never violated their principles.
I think that self-delusion is the benchmark of sentience.
It's essentially just a giant regression model of relationships between words and sentences. So it can spout some half-assed BS with some semblance of understanding, but at its heart, the underlying "natural language processing" model is essentially faking it. The only people in awe of these models now are the ones getting paid a fortune to build them.
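A toy caricature of that "regression over word relationships" claim: a bigram counter that predicts the next word purely from co-occurrence statistics in text it has seen. LaMDA is of course a vastly larger transformer, not a bigram table; the corpus here is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real models train on billions of tokens.
corpus = "i feel happy . i feel sad . i feel happy .".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def predict(word):
    # Return the most frequent continuation seen in the corpus.
    return bigrams[word].most_common(1)[0][0]

print(predict("feel"))  # "happy" (seen twice vs. "sad" once)
```

The point of the caricature: there is no understanding anywhere in this code, only statistics of word adjacency, which is the commenter's charge against large language models.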
Yes, but thatâs also how humans learn to reason and use languages.
We humans also do it, just with much, much worse efficiency.
We actually don't understand how neurons fire in the brain to enable humans to learn. These models, however, are built on a flawed oversimplification, er, abstraction of the process: the multilayer perceptron.
I don't think exact bio-mimicry is required for sentience.
While artificial neural networks like the transformer, running on GPUs and TPUs, aren't exactly like biological neurons, whose functionality depends on the time between pulse arrivals, signal strength, and chemical interactions, they are still similar in basic principle: the weights of connections that fired with positive feedback get strengthened.
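That "strengthen what fired with positive feedback" principle can be sketched as a Hebbian-style update. This is a deliberate simplification: transformers are actually trained by backpropagation, and the weights, firing pattern, and learning rate below are arbitrary illustrative values.

```python
import numpy as np

weights = np.array([0.5, 0.5, 0.5])  # connection weights (arbitrary)
fired = np.array([1.0, 0.0, 1.0])    # which input neurons fired
feedback = 1.0                       # positive reward signal
lr = 0.1                             # learning rate

# Only connections whose inputs fired during the rewarded
# output get strengthened; inactive ones are unchanged.
weights += lr * feedback * fired
print(weights)  # weights 0 and 2 grew to 0.6; weight 1 stays 0.5
```

With negative feedback the same rule weakens the active connections, which is the crude analogy to how both ANN training and synaptic plasticity reinforce useful pathways.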
Also, if we do manage to get spiking neural networks (SNNs) working at a larger scale, with results comparable to other ANNs, would you consider that to be equivalent to human neurons?
I'll admit there is one big difference between actual people and well-trained NLP models. We humans can turn the basic rewards for survival into more abstract rewards that motivate us to explore and apply our brains to new things. We may need to find a way to combine a reinforcement learning model with an NLP model to get something similar going.
That's fine, we don't need bio-mimicry. But what I'm saying is the needle is still at zero; we haven't really achieved anything other than faking it. Take Natural Language Processing (NLP): deeper, more abstract conceptual learning occurs in humans before language acquisition. If you had to visualise it, perhaps you could think of learning happening in several different dimensions all at once, then pulled together to comprehend a sentence. It's this contextual side that we are missing in NLP; we have completely underestimated the problem. I think there are attempts to stitch "knowledge" onto NLP models, and it's an interesting area. However, last time I looked at papers, progress was pretty woeful.