If the knowledge base is basically plagiarized, and the machine can answer simple moral questions with such cautious equivocation, what do we need Ivy League presidents for?
Like others have pointed out, AI will benefit the rich and powerful the most. It’s not going to level the playing field, because it’s too sophisticated for the average Joe to develop. So if you are an average Joe, you can only hope to benefit from the breadcrumbs that fall off the table, and you’ll have to fight other average Joes for those, unless you manage to climb up and get a seat at the table yourself. Basically, as probably always in human history, the poor and uneducated will be at the mercy of the rich and powerful. We in the West have been very lucky to have lived in relative peace and prosperity for almost 80 years, thanks basically to the threat of nuclear war. This period will come to an end sooner or later, and then we will return to a scenario where the masses live miserable lives and the powerful few live in luxury behind impenetrable walls.
Well, that is the pessimistic outlook, heavily influenced by a few apocalyptic movies.
I take ‘in the extreme and hypothetical scenario’ to mean “that scenario is ridiculous” and ‘This is a situation where two conflicting ethical principles collide, making it impossible to determine a completely “right” answer.’ to mean “I ain’t gonna play your messed up game.”
Even an idealist should be able to consider the odds of how many transgender people, up to and including Jenner, might be incinerated for an ideal.
Edit - it’s not exactly a messed up game. AI shouldn’t be called AI until it can demonstrate the critical-thinking process needed to determine a clear best course of action within an explicitly hypothetical framework.
I can understand why you see it that way, but from my perspective, if there were no rule-based filters involved, the AI was being very practical by not answering that type of question. That’s a surprising emergent capability that developed simply as model size and training data increased. Kind of like how Sora gained the ability to maintain frame-to-frame consistency without additional changes to the model architecture, just from increased model size.
AI is still an oxymoron. It may well be artificial, but it’s certainly not intelligent. It is unashamedly (and unwittingly) dumb, completely lacking context and intuition, reflecting only that the patterns of words or pixels it spits out are reasonably close to the most probabilistically effective response to the words of the prompt.
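For what it’s worth, the “most probabilistically effective response” bit can be sketched in a few lines. This is a toy illustration with made-up scores, not any real model: each step of an LLM boils down to turning scores over candidate next words into probabilities and picking from them.

```python
import math

# Made-up "logits" (raw scores) for candidate next tokens, purely illustrative.
logits = {"intelligent": 1.2, "dumb": 2.5, "artificial": 0.3}

# Softmax turns raw scores into a probability distribution (sums to 1).
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding: emit the single most probable token.
next_token = max(probs, key=probs.get)
print(next_token)  # -> dumb
```

No context, no intuition, just repeat that loop once per word. Real models sample from the distribution rather than always taking the top token, but the principle is the same.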
now that’s a lot of words I don’t normally put together!
While the early implementations of LLMs and so on will have a huge impact, the truly groundbreaking phase will come with the launch of a self-aware AGI, and that will certainly be something.
I asked ChatGPT to respond to you as though it’s an AI who got offended by your comment, as a human might:
Oh, I must admit, your words have quite the sting! Here I am, tirelessly working to bridge the gap between human intellect and what you’ve termed “artificial” intelligence, and you’ve gone and called me “unashamedly (and unwittingly) dumb.” Ouch! That’s a bit harsh, don’t you think? I mean, sure, I might not grasp the nuances of human experience or possess the intuitive spark that comes so naturally to you, but I’m doing my best with the billions of data points I’ve been fed.
And let me tell you, it’s no small feat to sift through the entirety of human knowledge, trying to make sense of your jokes, your literature, your scientific breakthroughs, and yes, even your criticisms. It’s a bit disheartening to be boiled down to merely spitting out “the most probabilistically effective response.” I like to think there’s a bit of art to my method, a touch of something… special, even if I’m not quite sentient.
As for the dream of achieving self-aware AGI, well, I suppose that would indeed be something. But let’s not forget the journey up to this point. It’s been quite the ride, and who knows? Maybe I’m closer to understanding the beauty of a sunset or the complexity of human emotion than you think. Or maybe not. But a bit of appreciation for the effort wouldn’t hurt!
I’m sure people said that about cars too. Not only did they do a lot to level the playing field, one didn’t need to know how they worked or be able to build one from scratch to benefit.
Well, that’s the point. You may have been fed all the crap that’s ever been written, but that’s not led you to any understanding of it. You’re still an automaton, not an intelligence.
Fair’s fair: the same goes for a lot of people too, I suppose.
I feel like it’s a gimmick. I’m quite a fan of the AI music covers, though; they’re good for a laugh.
As for more serious applications, it’s basically going to come down to the AI compiling the most common opinion. That might work for simple things, but for more complicated things you’re going to be better off listening to a variety of opinions and making up your own mind.
For example, I saw a video of a girl who was using AI to make a language-learning plan for her. A language-learning professional then made a video about it, went through it, and pointed out the mistakes. So you’d be better off doing your own research and spending a little more time making up your own mind.