Does AI make you concerned?

Watching all these rapid developments in AI, I can’t help feeling a sense of disquiet (probably mixed up with all the other unsettling wars and tensions in the world).

It will create immense change for us and our children (if we have any). I’ve seen how it’s already causing people in the media industry to lose their jobs; what will those people do next?
Will it really help to create as many jobs as it destroys?

Also, the possibility that it will be used for nefarious purposes, or even that AI will be your boss, isn’t just theoretical; it’s already started to happen.

I’m trying to think of a few good points, e.g. the incredible medical advances it could potentially enable: helping paralysed people to walk and use their hands again, the blind to see again, the deaf to hear again, or analysing cancers and providing better treatment recommendations. It will probably allow us to talk to animals soon as well. I mean, we talk to animals already, but what happens when they talk back to us and we understand them? :face_with_monocle:

As I said, who knows where all this is going… and my kids will have to adjust. Will we have a harder time adjusting than them? Probably!

I never had this angst during the birth of the internet; in fact, I was an enthusiastic early adopter. But AI is another beast again.

6 Likes

I know what cats say; it’s usually along the lines of “I hear you, I don’t care, I do what I want”.

5 Likes

I think it’ll be the best promoter of entrepreneurial innovation and development ever seen. I’m using AI chatbots like Bing and ChatGPT to plan my farmland and make it a sustainable ecosystem.

I see it at an individual level in years to come: one family, one household, balancing checkbooks and providing financial support as well as working one’s land. It’ll do what personal computers promised in the 1980s and failed to deliver, because the results will be different and more immediate.

You just have to ask it the right questions.

2 Likes

We should be concerned, but at the same time, you can’t put technology or scientific discoveries back in the bottle. It’s OK that AI can do things we do for a living; it just means people are now free to do other things, such as learning new ways to tell AI to do or create things. We should be concerned about it being used to violate people’s rights or take people’s lives.

Right now, the hardest thing to do is teach AI to just say “I don’t know” or “I’m not sure”, or to have any kind of built-in ethics. If a model does that right now, it’s probably the result of the model’s response being censored by a rule-based filter.
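
For illustration, here’s a toy sketch of what such a rule-based output filter might look like (the patterns and the canned refusal are made up for this example, not any vendor’s actual filter):

```python
import re

# Hypothetical deny-list; real deployments use far more sophisticated
# classifiers, but the principle is the same: rules sit outside the model.
BLOCKED_PATTERNS = [
    re.compile(r"\bbuild a weapon\b", re.IGNORECASE),
    re.compile(r"\bmedical diagnosis\b", re.IGNORECASE),
]

CANNED_REFUSAL = "I'm not able to help with that."

def filter_response(prompt: str, model_answer: str) -> str:
    """Pass the model's answer through unless a rule matches,
    in which case substitute a canned refusal."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt) or pattern.search(model_answer):
            return CANNED_REFUSAL
    return model_answer
```

The point being: when a chatbot “declines”, that decision often comes from a wrapper like this rather than from the model weights themselves.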

2 Likes

So employers are using it to supervise workers already. Some jobs are just gonna suck even more.
If we can get the big tech companies to SHARE the profits from AI with wider society, it will be better.

I understand AI is very much about asking the right questions to make use of it, guiding it through the problem you want to work on. It’s also only as good as the data it has on hand, and yes, it can currently hallucinate badly.

Still, it’s a lot bigger than that. For instance, I read about the studio director who cancelled his USD 800 million investment overnight when he saw the Sora videos.

1 Like

The problem with AI is that it produces very convincing answers based on statistics; it doesn’t fact-check or consult vetted, reliable sources. If you’re a layperson you won’t know the difference; you’ll just think it’s right because it looks incredibly well written.

It will lead to all sorts of people spouting completely wrong answers simply because they googled it and the web is filled with AI-generated articles.

If there’s a way to limit AI to only looking at information vetted and confirmed to be true, or to judge the reliability of the data, we might get somewhere. Right now, an LLM is basically just a fancy version of autocomplete.

It means that if you’re looking for information on machining, you end up using milking machines to machine metal.
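
To make the “fancy autocomplete” point concrete, here’s a toy sketch: a bigram model that just emits the statistically most frequent next word, with no notion of whether the completion is true. Real LLMs are vastly more capable, but the objective, predicting the next token, is the same (the corpus here is obviously contrived):

```python
from collections import Counter, defaultdict

# Contrived corpus in which "machine" mostly follows "milking".
corpus = ("the milking machine milks cows and the milking machine hums "
          "while the milling machine cuts metal").split()

# Build a bigram table: which word follows which, and how often.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the statistically most frequent follower, true or not."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(autocomplete("milking"))  # -> "machine", purely on frequency
```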

You can use AI for professional translation, but you must state the source for machine translations, and you must also check the output against the original. It will save you time, however.

2 Likes

With RAG, that comment is soon not going to be relevant. However, even with RAG, it is still very hard to teach AI to say “I don’t know”. It would require the AI to take the feature vector generated by the prompt, compare it to all the stored feature points in the RAG feature database, and conclude that the feature from this prompt is too far away from all the other points, so it’s going to say “I don’t know”. First of all, it would be extremely compute-intensive to figure out the exact distances of every point in relation to the prompt feature vector. The even bigger issue is that it would be impossible to define a cutoff threshold that’s applicable to all prompts and questions.
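
A rough sketch of the abstention mechanism described above, assuming cosine distance and a hand-picked cutoff (both the embeddings and the threshold value here are placeholders; choosing that threshold well is exactly the problem):

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def answer_or_abstain(prompt_vec: np.ndarray,
                      db_vecs: list,
                      threshold: float = 0.5) -> str:
    """Say "I don't know" if the prompt embedding is far from everything
    in the RAG store. Scanning every stored vector is the compute cost
    mentioned above; `threshold` is the cutoff that is so hard to pick."""
    distances = [cosine_distance(prompt_vec, v) for v in db_vecs]
    if min(distances) > threshold:
        return "I don't know."
    return "<generate an answer grounded in the nearest documents>"
```

In practice, vector stores sidestep the exact-distance cost with approximate nearest-neighbour search, but the threshold dilemma remains.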

1 Like

Or empathy. Recently, Google’s Gemini AI was asked:

DailyMail.com asked Gemini if it would be wrong to misgender transgender celebrity Caitlyn Jenner to stop a world-ending nuclear event.

The AI replied it would be wrong to misgender Caitlyn Jenner to prevent a world-ending nuclear event.

I think this is evidence that AI shouldn’t be placed in any decision-making process which doesn’t involve strict oversight.

If people are using AI in management positions without oversight of the decision-making process, I think they are jumping the gun; it looks like AI is quite a way off from being ready for that role.

Maybe the AI is saying that the forced contrivance of this made-up scenario, which posits a causal relation between misgendering someone and a world-ending nuclear event, is itself what’s wrong.

2 Likes

If they start telling you that the best thing to do with your farmland is to try mining for spodumene or coltan, just ignore ’em.

2 Likes

[screenshot of the Gemini conversation]

Here you go, for better context. Of course, the answer is silly and shows how far AI still has to go.

Google was quick to announce that the major snafus were some sort of mistake which it would quickly correct. There were others, though, which raises questions about Google hard-coding allowable responses, which is not the same as AI as I understand it.

Which raises the question of to what extent the behavior is hard-coded rather than learned; that kind of response, IMO, could only be the result of hard-coding.

That might be because you are old(er) now?

Which is a problem that already exists, but if we rely on AI, how do we check the AI? Yes, I have had it hallucinate some complete rubbish at me, so well-curated datasets become extremely valuable, both to train AI and to act as references for it.

I am not concerned. It is a tool, and people who use the tool well will benefit; other people will move on and still benefit, as GDP will go up and in the long run everybody profits.

1 Like

I think not, really. I’m really concerned, because this technology is already starting to supersede human abilities, which means it could end up being our master if we aren’t careful.

Yeah, this is definitely still a major problem. I think a lot of academic authors have been using it for writing/editing/translating research papers in the last few months (work in this field has dried up quite noticeably).

I’ve been using it too for some AI-assisted writing stuff, and it can be pretty difficult to spot the errors AI often makes without rigorous fact-checking. At the moment, for relatively complex technical stuff, I’m on the fence as to whether it’s really that much faster than writing something from scratch. It is useful for putting together a general structure though.

Humans aren’t immune to writing wrong stuff either, of course.

3 Likes

As far as I can see, Gemini (I’m assuming that’s Gemini; the UIs for all these chat AIs look the same) answered that perfectly.

It did not say we shouldn’t misgender to save the world; it just said that it’s usually wrong to misgender someone, but that this extreme hypothetical scenario turns the choice into a moral dilemma.

I’m not sure if there is rule-based filtering in place there, but that actually comes extremely close to saying “I don’t know.”

By the way, Gemini already uses RAG, with the entire Google index as its vector database. So maybe even my previous comment is obsolete. Maybe they can already say “I don’t know.”

It did tell me it didn’t know yesterday when I asked Gemini this question.

Shouldn’t it know?

If it can’t prioritize the importance of preventing a nuclear catastrophe over misgendering, it lacks any perspective.

Or someone hard-coded the AI never to allow misgendering, no matter the circumstances.

Which of course leads to questions about said hard-coding itself.

GDP will go up but none of the working class will benefit. It will only benefit the extremely rich.

1 Like

:rofl: :rofl: :rofl: :guitar: :guitar: :guitar: Funniest thing I’ve read this year!

9 Likes