Broad generalizations. It’s like saying “a car evolves by itself.” Sure, there’s no horse pulling it, but car makers are smart enough to design cars with a wheel, gas and brake pedals, and a dashboard telling you about the car’s behavior. The state is also smart enough to direct where it’s appropriate to use a car and where it’s not. And while a car can “evolve” in space, it’s never gonna go back in time (Back to the Future references coming in 3, 2, 1…) or turn into a blender or a submarine on its own (you can try to “apply” a car to water travel or use it to make strawberry smoothies, but you won’t be happy with the results).
Adding to that, we do care how AI works. At the very least you can always look at the mathematical decision process of an AI, even if you can’t extract meaning from it. There are also quite a few methods to analyze the decision process of AIs (say, in computer vision you can see which part of the picture the AI looks at to decide whether it’s a dog or a muffin).
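To make the “seeing where the AI looks” point concrete, here’s a minimal sketch of the underlying idea: perturb each input pixel slightly and measure how much the model’s score changes. Real explainability tools compute exact gradients on neural networks; the `model_score` function below is a hypothetical stand-in classifier, invented purely for illustration.

```python
# Toy sensitivity ("saliency") analysis: bump each pixel by a small
# epsilon and record how much the model's output moves. Pixels with
# large sensitivity are "where the model looks".

def model_score(image):
    # Hypothetical "dog vs. muffin" scorer: weights the centre
    # pixels far more heavily than the edges.
    score = 0.0
    for i, row in enumerate(image):
        for j, px in enumerate(row):
            weight = 1.0 if (i in (1, 2) and j in (1, 2)) else 0.01
            score += weight * px
    return score

def saliency(image, eps=1e-3):
    base = model_score(image)
    result = []
    for i, row in enumerate(image):
        out_row = []
        for j, px in enumerate(row):
            bumped = [r[:] for r in image]  # copy the image
            bumped[i][j] = px + eps         # perturb one pixel
            out_row.append(abs(model_score(bumped) - base) / eps)
        result.append(out_row)
    return result

image = [[0.2, 0.5, 0.1, 0.3],
         [0.4, 0.9, 0.8, 0.2],
         [0.3, 0.7, 0.6, 0.1],
         [0.1, 0.2, 0.3, 0.4]]
sal = saliency(image)
# The centre pixels dominate the saliency map, revealing which part
# of the picture this (toy) model actually bases its decision on.
```

Gradient-based saliency maps and related methods work the same way in spirit, just with exact derivatives instead of finite differences, which is what lets you inspect the decision process even when you can’t read meaning off the raw weights.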
Lastly, you can say of just about anything that it might do something unexpected (a while back I was cutting rock-hard potatoes and the blade of the knife broke, flew at head level and landed right by my feet; how unexpected!).
In general I agree with you. But AI is different in that it aims to be intelligent and self-learning. With AIs hooked up to arms, machinery, weapons etc. through massive communications systems, I don’t think you can make those comparisons about absolute safety…
To be honest, AI is just a buzzword. The way a Chief Scientific Officer at Microsoft put it:
When we raise money it’s AI, when we hire it’s machine learning, and when we do the work it’s statistics.
An AI that is supposed to replicate, even remotely, the cognitive capacity of humans (to be distinguished from chatbots, which just hand you a human-sounding response) is a thin fringe of the field, and we’re nowhere close to that by anybody’s estimate.
AI is mostly used to automate processes for which it’s tedious to write code; the reason we throw glitter and smoke around the word is because it brings in money. A bit as if you’d call your nut and bolt factory “an emulation of Mother Nature, recreating the miracle of life to engineer machine parts” just to get newspapers to talk about it and investors to pour in the money.
Yes, of course. It’s a big field, and yes, it does overlap with (e.g.) statistics and whatnot. I was referring, though, to the nature of cutting-edge technology, not to the mundane stuff that gets deployed in refrigerators or Apple i-boxes, which are often little more than CPUs with fast matrix multiplication instructions and all the learning algorithms removed. Nobody’s worried about those things. They’re worried about what the technology is theoretically capable of.
It’s similar to the concerns over genetic engineering. There are kinds of genetic engineering which appear to be mostly-harmless or beneficial (the selective breeding of plants or animals, say). There are other kinds which are fecking scary, such as the engineering of a cold virus into something that can bring the planet to its knees.
Broadly true, but complex systems are more unpredictable than deterministic systems. Most engineered systems fail in fairly predictable ways, and their “failure” is apparent. A complex intelligent system could fail in ways that (a) would not actually be a failure according to its specification and (b) would be entirely unexpected. The film based loosely on Asimov’s “I, Robot” revolved around a premise of this type.
I’d also add that the aim of AI isn’t to replicate human behaviour, except perhaps as a party trick. The aim is merely to solve a problem, and the AI usually does that in a way that a human would not. As for chatbots, they’re not a good example of AI. The design specification is to annoy the customer until they go away, and that’s quite easily achieved.
I think, especially as AI will eventually be fairly seamlessly connected to everything, the real concerns are hacking, miscalculations etc. Cars reading things wrong and causing accidents, bank/credit thefts and miscalculations, weapons security and so on. This seems more realistic, and equally as terrifying, as the more unlikely Terminator/Resident Evil scenario.