Do we need the government to further regulate AI?

I’m still working through that AI book I started a while back. I keep seeing stories though so I guess I tripped a wire in the algorithm. No big deal.

I’ll be worried I guess when a shitty little country does something it shouldn’t have to a big country.

But the issue is worth following. I read that the EU has rules and regs already.

Italy temporarily banned ChatGPT.

The main issue is that the government doesn’t understand AI and its implications very well. I don’t think they can make the kind of legislation that will be positive.


Well, the top tier does. Build your platform and AI will come.

Just trying to build up a background here. I could beg @Charlie_Jack to come in and do some forensic work, as I don’t know what I’m looking for, but I find the subject fascinating. It is playing out in real time and will have enormous and somewhat immediate effects on the workplace, work, free time, an economy based on government dependency (nothing new for some Euro-nations, but it would be a big deal for us, I’d wager), and society at large, arts, etc.

Anyway, I asked Bing for some help and got this.

There are many lectures available on the subject of AI and its effect on expanding human consciousness. One such lecture series is the Reith Lectures 2021 - Living With Artificial Intelligence by Stuart Russell, Professor of Computer Science and founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. In these lectures, Russell explores the impact of AI on our lives and discusses how we can retain power over machines more powerful than ourselves. The lectures examine what Russell argues is the most profound change in human history as the world becomes increasingly reliant on super-powerful AI.

Another lecture that might interest you is “The present and future of AI” by Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences at Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). In this lecture, Doshi-Velez discusses how AI is shaping our lives and how we can shape AI.

I hope these suggestions help you find what you’re looking for! :blush:

So, that’s cool. It shoots for answers to more specific questions that I haven’t asked. For example, I don’t have a problem with AI being started, as we will obviously build in failsafes. But if the AI is as powerful as we fear it may be, it’ll figure them out, and figure out how to interact with us, within an instant of being let off the leash, so we’d better be rational about it. It will understand that we are rational and it is not, although it is based on rational human thought. Better to keep these dogs on the leash; taking them off should be considered a nuclear-level attack, according somewhat to The Age of AI.

It is what it is, a circus freak as smart as god.

cool stuff, yo


My net is picking up more AI-related stuff.

You’d think it was about whacking off, but really it’s about whacking off with or to Sherlock Holmes.

Despite its impressive features, Botify AI is not without its drawbacks. As an AI model, it may stumble in the domain of emotional intelligence. Its capacity to accurately perceive and respond to emotional cues can be limited, sometimes falling short in creating truly empathetic exchanges.

So the trigger warning is that the bot isn’t human. :crazy_face:

Yes, it certainly needs regulation. But government has a fairly good record of fucking things up now and then. As important as regulating driving, medicine, and food, in my opinion. Most of which is also done poorly, unfortunately.

Watched the movie WarGames last week. I’d seen it before, but it was creepy as heck to watch now. They literally said that the computer was “hallucinating” as it tried to start WWIII. “Hallucinating” is the word we use to describe ChatGPT’s uncanny capacity to cite sources that don’t exist and make connections between things that have no connection. The people who approached Congress and said “you should be very concerned” are the people who know that someone is going to program a computer to do something “for” us and then not have a shut-off switch as it “does what it was told to do”. I doubt there’ll be someone to swoop in and tell the computer to go play tic-tac-toe until it learns that everyone will die under every circumstance, as was done in WarGames. See also: Universal Paperclips


IDK why everyone jumps to WWIII when discussing AI.

The benefits at a higher, well-funded level are quite impressive. Less so at the individual level.

I am on a Reddit AI feed to learn a bit, and it seems like people are either playing with it (“Is Miss Piggy Yoda in drag?”) or being obtuse (“How can I use AI on MY PC to fight climate change?”). :roll_eyes:

Using AI to crunch numbers in ways we can’t think of crunching them is very cool.

Yet, people are asking how to use AI to file false insurance claims and get away with it. :doh:

It’s not just WWIII. The more AI is programmed to do everything that used to be manual, the more likely something bad is going to happen. Locked out of computer systems because the AI detected a hack that wasn’t there, now you can’t get into finances, medical records, your own house or car, etc. Places like China use AI to control people’s lives already; now imagine it starts preventing everyone from getting onto or off transit, or starts dinging people’s social credit scores based on its own interpretation of what constitutes a social violation. We’ve all heard about what Tesla’s self-driving mode does when it malfunctions…


AI at a governmental level will vary with the type of government. That’s the new Cold War scenario. My advice is don’t live there.

We have?


I’m not, nor are most people I know. But it seems obvious to many that it’s easier to prevent than to go back and reconstruct. Foresight is always better than hindsight. Many things now are the same, including automation. I am all for it if the governments set up the welfare state ahead of time, rather than as a knee-jerk reaction, and we can get along in a seamless fashion without too much oppression or despair. This seems just common sense. These things aren’t really the same as the usual free market where we trade our crops, crafts, time, etc. for credits. It basically seems silly how chabuduo the world is about things we already see happening.


Agreed. It’ll be quite a paradigm shift.

I just finished listening to the final Reith lecture, “A Future for Humans.”

Very cool. Big on maintaining human autonomy that is not only built into the AI, but regulated into global society.

The main impact from “AI” is going to flow directly from the fact that humans are not particularly intelligent, and computers less so. The underlying premise of all the apocalyptic movies is that humans are lazy and stupid … which is basically accurate.

Handing off stuff to computers is hardly new. We’ve been doing it since the 1960s. The fact that we’re now calling it “AI” is neither here nor there. The problems that arise from it are human problems, not computer problems as such - humans handing off responsibility to the computer when they really shouldn’t. Why? Because the software doesn’t do what it purports to do - at least, not even close to 100% of the time.

What I’ve noticed is this: as more and more stuff gets handed off to the computer, the people have started glitching. When you can even speak to a human at all in customer service, they’re mostly following algorithms or procedures - because they’ve been told to - which completely defeats the object of having a human there in the first place.

As for the original question, I’m not sure how you’d even go about framing legislation for AI. First you’d need a legally-sound definition of AI. Then you’d need legally-sound definitions of all the things you might do with AI. Then you’d need to write legally-sound boundaries around what you can’t do. It doesn’t sound possible to me.


Yeah, no. I don’t buy that for a second. Seems like an awful lot of projection towards, what is it now, 7 billion people?

Sure, because it’s CUSTOMER service, not computer repair. Those guys are in the back.

Ever hear of the nuclear nonproliferation treaties? I’m sure it didn’t sound possible to armchair pedants in the day either. :idunno:

It’s absolutely possible. Whether it’s probable or not is a totally different issue.

What would you rather believe? That humans are a hyperintelligent master race who are moving inexorably towards utopia? I’m afraid the evidence is more on my side than yours.

Humans simply aren’t as “intelligent” as we like to think, which is why we mistake AI for actual intelligence. It isn’t. It’s the clever emulation of what humans think of as intelligence. You’ll probably say that’s just nitpicking. The point is that the “intelligence” of an AI generates inherently ill-defined behaviour. At the edge cases, it “thinks” in ways which are nonsensical, and it’s hard to predict when that behaviour will manifest itself and how. It is not a superior form of intelligence immune to human foibles, as the politicians seem to imagine.

It isn’t hard to define a nuclear weapon. If you think you can define what an AI is sufficiently well to capture nefarious use-cases, without inadvertently criminalizing an ATM, try it. You might be able to craft legislation aimed at (say) putting computers in control of weapons systems, but the term “AI” doesn’t have to be mentioned at all.

Are those the only choices in your metronomic perspective?

Which side is mine? I’m simply very curious about the applications for AI that are here and the ones coming; I’m curious about what kind of regulation, if any, will be put into place.

So yeah, what utopia am I talking about now?

Good thing I’m not a politician.

What will happen is that those “applications” will be pointless excrescences on the buttock of civilisation. Because humans are far too ready to say “I wonder if I can save myself some effort by offloading this problem onto someone or something else”, and by and large can’t manage a piss-up in a brewery.

That was true in 80,000BC, when Ug was bashing Grog over the head because he’d invented a different sort of club and wanted to test it out, and it’s still true today.

I was suggesting (a) it’s harder to do that than you might think, because of the slippery nature of the technology and (b) it would be addressing the wrong problem, because the problem as always is humans, not the technology as such. I could be wrong about that, but you haven’t offered any counterpoint.

And by what means can you stop this kind of shit?

I’m open to learning about AI and its applications. I’m curious about the regulatory aspects of it. But this whole fake debate thing you seem to burrow into time and again is of little interest to me. :idunno: