The main issue is that the government doesn’t understand AI and its implications very well. I don’t think they can make the kind of legislation that will be positive.
Just trying to build up a background here. I could beg @Charlie_Jack to come in and do some forensic work, as I don’t know what I’m looking for, but I find the subject fascinating. It is playing out in real time and will have enormous and somewhat immediate effects on the workplace, work, free time, an economy based on government dependency (nothing new for some Euro-nations, but it would be a big deal for us, I’d wager), and society at large, the arts, etc.
I hope these suggestions help you find what you’re looking for!
So, that’s cool. It shoots for answers to more specific questions that I haven’t asked. For example, I don’t have a problem with AI getting smarter, as we will obviously build in failsafes. But if the AI is as powerful as we fear it may be, it’ll figure them out, and figure out the way to interact with us, within an instant of being let off the leash, so we better be rational about it. It will understand that we are rational and it is not, although it is based on rational human thought. Better to keep these dogs on the leash; letting one off should be considered a nuclear-level attack, as The Age of AI somewhat suggests.
You’d think it was about whacking off, but really it’s about whacking off with or to Sherlock Holmes.
Despite its impressive features, Botify AI is not without its drawbacks. As an AI model, it may stumble in the domain of emotional intelligence. Its capacity to accurately perceive and respond to emotional cues can be limited, sometimes falling short in creating truly empathetic exchanges.
So the trigger warning is that the bot isn’t human.
Yes, it certainly needs regulation. But government has a fairly good record of fucking things up now and then. As important as regulating driving, medicine, and food, in my opinion. Most of which is also done poorly, unfortunately.
Watched the movie WarGames last week. I’d seen it before, but it was creepy as heck to watch now. They literally said that the computer was “hallucinating” as it tried to start WWIII. “Hallucinating” is the word we use to describe ChatGPT’s uncanny capacity to cite sources that don’t exist and make connections between things that have no connection. The people who approached Congress and said “you should be very concerned” are the people who know that someone is going to program a computer to do something “for” us and then not have a shut-off switch as it “does what it was told to do”. I doubt there’ll be someone to swoop in and tell the computer to go play tic-tac-toe until it learns that everyone will die under every circumstance, as was done in WarGames. See also: Universal Paperclips
The benefits at a higher, well-funded level are quite impressive. Less so at the individual level.
I am on a Reddit AI feed to learn a bit, and it seems like people are either playing around with it (“Is Miss Piggy Yoda in drag?”) or being obtuse (“How can I use AI on MY PC to fight climate change?”).
Using AI to crunch numbers in ways we can’t think of crunching them ourselves is very cool.
Yet, people are asking how to use AI to file false insurance claims and get away with it.
It’s not just WWIII. The more AI is programmed to do everything that used to be manual, the more likely something bad is going to happen. Locked out of computer systems because the AI detected a hack that wasn’t there, and now you can’t get into your finances, medical records, your own house or car, etc. Places like China use AI to control people’s lives already; now imagine it starts preventing everyone from getting onto or off transit, or starts dinging people’s social credit scores based on its own interpretation of what constitutes a social violation. We’ve all heard about what Tesla’s self-driving mode does when it malfunctions…
I’m not, nor are most people I know. But it seems obvious to many that it’s easier to prevent than to go back and reconstruct. Foresight is always better than hindsight. Many things now are the same, including automation. I am all for it if the governments set up the welfare state ahead of time, rather than as a knee-jerk reaction, and we can get along in a seamless fashion without too much oppression or despair. This seems like common sense. These things aren’t really the same as the usual free market where we trade our crops, crafts, time, etc. for credits. It basically seems silly how chabuduo (“close enough”) the world is about things we already see happening.
The main impact from “AI” is going to flow directly from the fact that humans are not particularly intelligent, and computers less so. The underlying premise of all the apocalyptic movies is that humans are lazy and stupid … which is basically accurate.
Handing off stuff to computers is hardly new. We’ve been doing it since the 1960s. The fact that we’re now calling it “AI” is neither here nor there. The problems that arise from it are human problems, not computer problems as such - humans handing off responsibility to the computer when they really shouldn’t. Why shouldn’t they? Because the software doesn’t do what it purports to do - at least, not even close to 100% of the time.
What I’ve noticed is this: as more and more stuff gets handed off to the computer, the people have started glitching. When you can even speak to a human at all in customer service, they’re mostly following algorithms or scripted procedures - because they’ve been told to - which completely defeats the object of having a human there in the first place.
As for the original question, I’m not sure how you’d even go about framing legislation for AI. First you’d need a legally-sound definition of AI. Then you’d need legally-sound definitions of all the things you might do with AI. Then you’d need to write legally-sound boundaries around what you can’t do. It doesn’t sound possible to me.
What would you rather believe? That humans are a hyperintelligent master race who are moving inexorably towards utopia? I’m afraid the evidence is more on my side than yours.
Humans simply aren’t as “intelligent” as we like to think, which is why we mistake AI for actual intelligence. It isn’t. It’s the clever emulation of what humans think of as intelligence. You’ll probably say that’s just nitpicking. The point is that the “intelligence” of an AI generates inherently ill-defined behaviour. At the edge cases, it “thinks” in ways which are nonsensical, and it’s hard to predict when that behaviour will manifest itself and how. It is not a superior form of intelligence immune to human foibles, as the politicians seem to imagine.
It isn’t hard to define a nuclear weapon. If you think you can define what an AI is sufficiently well to capture nefarious use-cases, without inadvertently criminalizing an ATM, try it. You might be able to craft legislation aimed at (say) putting computers in control of weapons systems, but the term “AI” doesn’t have to be mentioned at all.
Are those the only choices in your either/or perspective?
Which side is mine? I’m simply very curious about the applications for AI that are here and the ones coming; I’m curious about what kind of regulation, if any, will be put into place.
What will happen is that those “applications” will be pointless excrescences on the buttock of civilisation. Because humans are far too ready to say “I wonder if I can save myself some effort by offloading this problem onto someone or something else”, and by and large couldn’t organise a piss-up in a brewery.
That was true in 80,000BC, when Ug was bashing Grog over the head because he’d invented a different sort of club and wanted to test it out, and it’s still true today.
I was suggesting (a) it’s harder to do that than you might think, because of the slippery nature of the technology and (b) it would be addressing the wrong problem, because the problem as always is humans, not the technology as such. I could be wrong about that, but you haven’t offered any counterpoint.
I’m open to learning about AI and its applications. I’m curious about the regulatory aspects of it. But this whole fake-debate thing you seem to burrow into time and again is of little interest to me.