HOW do I stop the AI conspiracy nuts?

Dear people of Earth,

If you do not have a degree in computer science... do not talk about AI. If you have never written an artificially intelligent piece of software... do not ponder the threat. You are NOT qualified.

Is AI a threat? Well, sure. There is probably a definition of "threat" within which AI falls.

Is a Skynet-like AI an impending threat? Absolutely not.

What people don't understand is what artificial intelligence even is. The simplest definition is a non-human agent which can output information at a level consistent with at least basic human intelligence.

To put that in perspective, a hard-coded decision tree with a sufficient number of branches on a topic meets the criteria for an artificial intelligence. HARD CODED. Incapable of learning. Incapable of subjugating the human race.
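To make that concrete, here's a toy sketch in Python of the kind of hard-coded "AI" I mean. The triage rules are ones I made up purely for illustration:

```
# A hard-coded "AI": a fixed decision tree that triages symptoms.
# Every rule here was written by a human and can never change.
def triage(fever, cough, rash):
    if fever:
        if cough:
            return "possible flu - see a doctor"
        if rash:
            return "possible infection - see a doctor"
        return "monitor your temperature"
    if cough:
        return "possible cold - rest and fluids"
    return "no obvious issue"

# On its narrow topic it answers like a basically intelligent agent,
# but it is incapable of learning a single new rule on its own.
print(triage(fever=True, cough=True, rash=False))
```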

Is that the limit of modern artificial intelligence? No.

But it captures the birth of AI. Early artificial intelligences were collections of decision-tree and heuristic-based algorithms. They had no capacity to learn. And yet they could diagnose our symptoms as accurately as a doctor and beat a pro at chess. Arguably, within these spheres, they weren't simply AT human intelligence. They surpassed us.

Typical learning algorithms are both similar to and different from these past AIs. Learning algorithms do change over time. But it isn't learning in the same sense as human learning. The scope of the data and the way it is handled at a rudimentary level is still hard coded. It follows rules. Rules which it cannot change. A neural network built to control traffic lights couldn't just "learn" how to drive your car. And an AI complex enough to drive a car can't simply learn how to control traffic lights.

They're built to process input in a particular format and output it in a particular format. And, usually, the "brains" of these AIs are hard coded while in use.
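As a toy illustration (numpy, with made-up shapes and weights), here's what being locked to a format looks like:

```
import numpy as np

# Toy "trained" traffic-light controller: it expects exactly 4 inputs
# (say, queue lengths at an intersection) and emits exactly 2 scores.
# The weights are frozen; nothing here learns while in use.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # fixed after training

def traffic_net(sensors):
    return sensors @ W  # pure multiply-and-sum, no learning

print(traffic_net(np.array([3.0, 1.0, 0.0, 2.0])))  # works: 4 sensors
# traffic_net(np.ones(512))  # a camera frame from a car: ValueError.
# It can't "decide" to drive a car; it can't even accept the input.
```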

To go from a traditional neural net to a Skynet-like intelligence we need three things: it needs to be a self-guided learning algorithm, it needs to update its own data model, and it needs to be able to rewrite itself.

Today, when we build a neural net we do what we call "guided learning". We tell the machine how to identify a good result from a bad one. We vet the data we feed in, in the hopes that the model doesn't develop any undesirable biases. Once the data model has been trained, we dump it into a production environment.
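A bare-bones sketch of that workflow, with invented data, looks something like this:

```
import numpy as np

# Guided learning in miniature: humans define the success criterion
# (squared error), humans curate the data, and the result is frozen.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))        # vetted training data (invented)
y = X @ np.array([2.0, -1.0, 0.5])   # the human-chosen "right answers"

w = np.zeros(3)
for _ in range(500):                 # training: the ONLY time w changes
    grad = 2 * X.T @ (X @ w - y) / len(X)  # "good vs bad" defined by us
    w -= 0.1 * grad

# Deployment: w is now a constant. Unless a human reruns this loop,
# the model never changes, no matter what it sees in production.
print(w)
```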

Without human intervention, the code never changes. Without human intervention, the data model doesn't usually change. Without human intervention, the training criteria never change.

For each of these constraints you'd need to remove to get an artificially sentient intelligence of any value, you add AT LEAST an order of magnitude of complexity, and that over again in terms of processing power.

To get to the nightmare-style AI you need three things you don't have today: sentience, flexibility, and the ability to sustain itself.

By sentience I mean a simulated sense of self-awareness. The ability to guide your own learning in a reasonably efficient manner. We can build fully self-guided learning algorithms today. They just tend to be painfully bad. It takes billions of learning iterations to master BASIC tasks. And that is assuming they don't get stuck in a logic rut they can't learn their way out of. I've never seen or heard of anything approaching this level of sophistication.
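Here's a toy self-guided learner (the reward landscape and every number are invented for illustration) showing the rut problem:

```
import random

# A self-guided learner: greedy hill climbing on a reward landscape
# that has a mediocre local peak and a much better one further away.
def reward(x):
    if x < 5:
        return -(x - 2) ** 2 + 4     # local peak at x=2, reward 4
    return -(x - 8) ** 2 + 100       # real peak at x=8, reward 100

x = 0.0
for _ in range(1_000_000):
    candidate = x + random.uniform(-0.5, 0.5)
    if reward(candidate) > reward(x):  # only ever accept improvements
        x = candidate

# A million iterations later it sits at x ~= 2, and it will sit there
# forever: every small step toward the real peak looks worse first.
print(x)
```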

And you might be tempted to say, "just because I haven't heard of it doesn't mean it doesn't exist". True. But! Here is an example of why I don't think it exists: today we have many VERY brilliant teams of people (DARPA, MIT, etc.) working on robots of all sorts. These are fairly complex machines, yet they attempt just a VERY small subset of the things a human can do or knows how to do. Many of these leverage machine learning, and the learning is guided by us. We still can't build a robot capable of reliably traversing complex terrain. That is, we can't build a robot that walks as well as a human. WALK. Just WALK! And we KNOW what success looks like. We know how to tell a robot when it succeeds and fails. It has a limited number of sensors and actuators and states for them to be in. These are devices PURPOSE BUILT for walking. A self-guided learner MIGHT eventually arrive at a better answer. But, I think, with modern computers we'd experience the heat death of the universe first.
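Some back-of-the-envelope arithmetic on why I say that. Every number below is invented, and deliberately generous:

```
# How big is the search space for even a crude walking policy?
# All of these numbers are made up for illustration.
actuators = 12        # joints on a simple legged robot
positions = 10        # discretized positions per joint
steps = 20            # control decisions in one short gait cycle

policies = (positions ** actuators) ** steps  # 10**240 candidates

evals_per_second = 10 ** 9                    # a very fast simulator
seconds_per_year = 3.15e7
years = policies / evals_per_second / seconds_per_year
print(f"{years:.1e} years to try them all")   # roughly 3e223 years
```

Smarter-than-blind search helps enormously, but that smartness is exactly the guidance we have to hand-build today.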

Now imagine that robot needed to not only walk, but manipulate the environment. Safely move debris. Find shelter from damaging conditions. Repair itself. Etc... The learning required gets even more complicated. And we're still talking about a robot stuck with its factory programming and curated data.

Next up is training data. You'd think this would be the easiest point to solve. But it is probably the hardest. In fact, humans need to learn how to acquire data. Let's say I want to learn Japanese. There is a LOT of information out there. Most of the information I can acquire has NOTHING to do with Japanese. A lot of it cannot be deciphered immediately. Some of it won't progress my learning. Some will teach me bad habits. And that is just talking about data in the form we usually think of it: textbooks, web sites, and videos. My senses provide data as well. If I'm learning something new, there may not be resources, or I may not know the best place to find them. This is an insanely complicated task. Of all the recorded data in human history, an immeasurably small piece is good enough to be helpful at each stage of learning.

Without humans programming it, no one is telling the machine which sensors to use, how to organize the data, or how to parse it. No one is telling it what sequence to consume it in, or even how to consume it. No one can tell it how to test that it has learned.

Data gathering is probably the single hardest thing for the machines to learn.
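A hypothetical sketch of why: to filter for relevant data, you already need the knowledge you're trying to acquire. The corpus and keyword list below are made up:

```
# A learner trying to pick Japanese-learning material out of a pile.
# The corpus and the keyword list are both invented for illustration.
corpus = [
    "Intro to hiragana: the basic Japanese syllabary",
    "Celebrity gossip roundup for the week",
    "Advanced keigo: honorific Japanese for business",
    "JavaScript tutorial (no Japanese required)",
]

keywords = {"japanese", "hiragana", "kanji"}  # chosen by... whom?

relevant = [d for d in corpus if any(k in d.lower() for k in keywords)]
print(relevant)
# The JavaScript tutorial sneaks in just for mentioning "Japanese",
# and nothing here can tell a beginner text from an advanced one, or
# a good resource from one that teaches bad habits. Picking good
# keywords already requires knowing what Japanese is.
```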

Lastly, there is self-reprogramming. A human being can learn to do brand-new functions. We do things today that no human was able to do 10 years ago. 100 years ago. And so on. A Skynet not only has to know how to teach itself and how to acquire data. It also has to be able to rewrite, potentially, everything about itself.

Compile the code wrong and your robot no longer boots up.

Did I say gathering data was the hardest? No, wait. It isn't. A system which acquires its own data, trains itself, and rewrites itself... must rewrite itself PERFECTLY. Every. Single. Time. If it commits a compile error into "production", it destroys itself. If it commits a run-time error, ditto. Every class of code defect, from major goof-up to rare edge case, presents an opportunity for the robot to simply end up destroying itself. Defects will propagate over time. Every line of code must be not only syntactically correct, but also logically sound.
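The best case I can imagine, sketched here with Python's `ast` module, still isn't good enough:

```
import ast

# The best a self-rewriting system can easily do: validate the new
# version of its own source before committing it.
def safe_to_commit(new_source):
    try:
        ast.parse(new_source)   # catches SYNTAX errors only
        return True
    except SyntaxError:
        return False

good = "def act(x):\n    return x + 1\n"
bad = "def act(x)\n    return x + 1\n"    # missing colon

print(safe_to_commit(good))  # True
print(safe_to_commit(bad))   # False - caught before it bricks the robot
# But a rewrite that parses fine can still divide by zero at run time,
# or compute x - 1 where it meant x + 1. Syntax checks buy you nothing
# on logical soundness.
```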

AI is a threat. It can take your job. An algorithm may develop a bias which costs you a job or a mortgage application. A mistake may make your self-driving car kill you or someone else by accident. There are threats, to be sure. But imminent threats of the Skynet sort? I don't buy it.
