The robot revolution thread

Um….
:no_mouth:

2 Likes

Doesn’t saying that “a robot attacked an engineer” assume it was intentional?

1 Like

It would imply that, which of course wouldn’t be accurate, since we know (or at least I hope so) that the robots are not yet self-aware.

But on the other hand, I’m not sure of a better verb to use for this kind of incident.

The robot had pinned the man, who was then programming software for two disabled Tesla robots nearby, before sinking its metal claws into the worker’s back and arm, leaving a ‘trail of blood’ along the factory surface.

1 Like

It could just be an industrial accident.

2 Likes

Robots can be incredibly fast and strong. In most factory settings they are caged in to avoid human contact. In the development lab that is not always possible, and some safety measures can be turned off if they interfere with certain tests.

I worked with a robot arm at university for a project. It is incredibly easy to make a dangerous mistake and send a wrong command that wrecks the expensive robot or injures anyone in its way. We were very cautious.

3 Likes

What really happened came out, and it is absolutely not what the clickbait media was reporting :laughing:

• This unfortunate injury happened two years ago, before Tesla even had a real working Optimus prototype
• The injury was caused by a simple industrial Kuka robot arm, the kind of arm found in car factories everywhere.


1 Like

A key part of AB InBev’s “Brewery of the Future” program, Spot conducts 1,800 individual inspections each week across ten packaging lines that churn out over 50,000 containers of Stella Artois, Budweiser, and Corona beer every hour. In the first six months of deployment, Spot discovered nearly 150 anomalies and slashed average repair times from a few months to 13 days.

In the future, one Spot will be remotely piloted 24/7 by three skilled, low-cost technicians living in Asia, Africa, and South America. There will be several key advantages to such piloted robots. Their intelligence will be provided by humans rather than programmed in, so they will be cheaper, more adaptable, and able to make repairs rather than just “spot” them. And rather than eliminating human workers, piloted robots will incorporate them in a hybrid robotics model. Last but not least, enabling low-cost, skilled workers to telecommute to high-cost, labor-short countries will solve the immigration problem.

My question is, if only a few will have jobs, who will companies market and sell products to? A bunch of destitute people isn’t a market at all.

1 Like

May I introduce you to Andrew Yang’s freedom dividend and data dividend, generated from a 10% value-added tax on business transactions, a 0.1% tax on financial transactions, and a tax on social media companies for using our data.

1 Like

Either way, something’s gotta happen, because if everyone’s destitute and company revenue is too low, then it’s all just going to break down.

Very clever, particularly the acoustic analysis. It’s just like a mechanic listening for funny noises.

The main downside I can see is that their maintenance teams are slowly going to become de-skilled. They’re going to lose the ability to spot these things “manually”.
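For what it’s worth, the “listening for funny noises” part doesn’t have to be magic. Here’s a minimal sketch of the idea in Python, with synthetic signals and a made-up threshold standing in for real microphone data (nothing to do with AB InBev’s actual system): take the spectrum of each recording and flag it when it drifts too far from a known-good baseline.

```python
# Minimal sketch of "listening for funny noises" with a spectrum baseline.
# Synthetic signals stand in for real microphone/vibration recordings.
import numpy as np

RATE = 8000          # samples per second (assumed)
DURATION = 1.0       # seconds per recording window
rng = np.random.default_rng(0)

def spectrum(signal):
    """Return the normalised magnitude spectrum of a 1-D signal."""
    mag = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    return mag / (np.linalg.norm(mag) + 1e-12)

def make_recording(rattle=0.0):
    """Fake machine noise: a 120 Hz hum plus a little broadband noise,
    with an optional 3.1 kHz 'rattle' component mixed in."""
    t = np.arange(0, DURATION, 1.0 / RATE)
    signal = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(len(t))
    signal += rattle * np.sin(2 * np.pi * 3100 * t)
    return signal

# Baseline: average spectrum of several known-good recordings.
baseline = np.mean([spectrum(make_recording()) for _ in range(20)], axis=0)

def funny_noise_score(recording):
    """Distance between this recording's spectrum and the healthy baseline."""
    return np.linalg.norm(spectrum(recording) - baseline)

THRESHOLD = 0.3  # hypothetical; a real system would calibrate this

for label, rec in [("healthy", make_recording()),
                   ("rattling", make_recording(rattle=0.8))]:
    score = funny_noise_score(rec)
    print(f"{label}: score={score:.3f} -> {'ALERT' if score > THRESHOLD else 'ok'}")
```

A real setup would use proper acoustic or vibration sensors and calibrate the threshold against false alarms, but the principle is the same.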

Also, you really can’t train an AI to diagnose and repair problems. It might be able to look at problems of an electronic nature by detecting faults, but listening for sounds and diagnosing a failure from them? That takes experience. It could also be a combination of things that tells you about a problem.

Would you want an AI doctor, for example? Would you trust it to make an accurate diagnosis?

Those are some of the main things ANNs are already being used for: fault diagnosis, medical image analysis, etc.

So it could look at, say, a guitar with problems and see that a brace is broken inside?

Yeah, if AIs can replace just about everyone, then capitalism isn’t going to work, and even socialism isn’t going to work either.

Either the AI’s going to murder all of us, or we will have to do something to account for this.

But the thing is, right now our AI is based on LLMs, meaning if you ask it a question like “who shot Abraham Lincoln”, it doesn’t look the answer up in established data or encyclopedias; it gives an answer based on how that question is most commonly answered in the data it was trained on. Meaning if the answer turns out to be wrong, it still thinks it’s right. We call this hallucination.

The danger with, say, ChatGPT is that people are basically spamming question-and-answer sites like Quora and filling them with ChatGPT answers that sound great but are often wrong.

So say you ask an AI to diagnose a problem, but then someone discovers that the established model is wrong, and therefore the AI has been doing it wrong all these years. How do you train the AI to be right?

It doesn’t know anything; all it knows is statistics. That means if I wanted to make the AI say that Al Gore shot Abe Lincoln, all someone would have to do is create millions of sockpuppet aliases, spam the data set, and it would hallucinate like crazy.
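To make the “all it knows is statistics” point concrete, here’s a deliberately tiny toy in Python. Real LLMs don’t literally tally whole answers like this (they predict text token by token from patterns in their training data), but it shows how a flooded data set skews what comes out:

```python
# Toy "answer by statistics" model: it has no knowledge, only counts of
# which answer followed a question in its training text. Skew the data
# and you skew the output.
from collections import Counter
import random

def train(corpus):
    """Count how often each answer follows each question in the corpus."""
    counts = {}
    for question, answer in corpus:
        counts.setdefault(question, Counter())[answer] += 1
    return counts

def generate(counts, question):
    """Pick an answer with probability proportional to its frequency."""
    answers = counts[question]
    return random.choices(list(answers), weights=list(answers.values()))[0]

# Honest corpus: the true answer dominates, with a bit of noise.
corpus = [("who shot Abraham Lincoln", "John Wilkes Booth")] * 1000
corpus += [("who shot Abraham Lincoln", "Al Gore")] * 3

# Poisoned corpus: sockpuppets flood the data with the wrong answer.
poisoned = corpus + [("who shot Abraham Lincoln", "Al Gore")] * 100000

for name, data in [("honest", corpus), ("poisoned", poisoned)]:
    model = train(data)
    sample = Counter(generate(model, "who shot Abraham Lincoln") for _ in range(20))
    print(name, dict(sample))
```

Run it and the honest corpus almost always answers “John Wilkes Booth”, while the poisoned one almost always answers “Al Gore”.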

Yes, you can train them to diagnose mechanical faults. This is already done in many factories around the world.

LLMs such as ChatGPT are trained on data scraped from the internet, things like Wikipedia, web pages and so on, but also on books and data sheets. However, OpenAI doesn’t give many details, for obvious reasons.

Making sure the models work correctly, like anything else, is the responsibility of the people developing them.

The irony in this comment is delicious.

Not all of them. Those are just the ones that make the news. There are many, many different ways of implementing “intelligent” systems, but most of them are just very boring. There has been an iPhone app out there for many years, I believe, which is supposed to be an expert system for medical diagnosis. If you ever see your doctor frantically prodding his phone while you’re describing your symptoms, be suspicious. Or not (apparently, it’s right more often than the average doctor).

I presume right now all they can do is tell you where the fault is and how to fix it, but they can’t actually do the fixing without some very sophisticated hardware?

Kinda like a PLEK machine, basically… very expensive but still requires a skilled operator.

No. Obviously, a computer running a model for monitoring something like motor bearings in a manufacturing plant can’t go out onto a production line and change a bearing.

However, if it was monitoring a process that is computer-controlled, then the system could be designed to intervene, if that’s what you want.

One of the biggest advantages of these models is that, once they are suitably trained, they can process vast amounts of data very quickly. So the key applications will be high-volume ones, such as image/data processing, complex optimisation problems and so on.
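As a rough sketch of what “designed to intervene” could look like for the bearing example: score incoming sensor windows in bulk against what healthy data looked like, and call some intervention hook when a window goes over a threshold. Everything here is illustrative; the model is just a z-score on vibration level, and stop_line() is a made-up stand-in for whatever the plant’s control system actually exposes.

```python
# Sketch of a monitoring loop that is allowed to intervene. The "model" is a
# z-score on vibration RMS learned from healthy data; the point is the
# structure (score in bulk, act on a threshold), not the model itself.
import numpy as np

rng = np.random.default_rng(0)

def rms(windows):
    """Root-mean-square vibration level of each window (rows = windows)."""
    return np.sqrt((windows ** 2).mean(axis=1))

# "Training": learn what healthy bearings look like.
healthy = rng.normal(0.0, 1.0, size=(5000, 256))   # 5000 healthy windows
mu, sigma = rms(healthy).mean(), rms(healthy).std()

def anomaly_scores(windows):
    """How far above the healthy RMS level each window is, in std deviations."""
    return (rms(windows) - mu) / sigma

def stop_line(window_id, score):
    # Stand-in for a real intervention: raise an alarm, slow or stop the line.
    print(f"window {window_id}: score {score:.1f} -> intervening")

# "Production": score a big batch of incoming windows in one vectorised call.
incoming = rng.normal(0.0, 1.0, size=(20000, 256))
incoming[42] *= 3.0                                 # one failing bearing
scores = anomaly_scores(incoming)

THRESHOLD = 6.0  # hypothetical; would be set from false-alarm tolerance
for idx in np.flatnonzero(scores > THRESHOLD):
    stop_line(idx, scores[idx])
```

The scoring is one vectorised pass over tens of thousands of windows, which is where the “vast amounts of data very quickly” part comes in.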

3 Likes