Hawking afraid of AI?
I generally hold Hawking in high regard. But unless he is aware of something the rest of us aren't, he should stick to physics, because on this topic it sounds like he has no idea what he is talking about.
Artificial Intelligence (AI) isn't all it's cracked up to be. The average person really has no concept of what AI is or what it is capable of. Artificial intelligence is EVERYWHERE. In fact, my first few weeks of AI programming in university left me feeling underwhelmed: I'd been seeing it daily without even realizing it.
On the most basic level, AI can be something as mundane as a decision tree. A robot or computer following a decision tree can display behavior people would believe to be intelligent, and that is as low as the bar goes for getting away with calling something AI. Websites that help diagnose illness this way have been around for decades.
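To make that concrete, here is a minimal sketch of such a hard-coded tree. The symptoms and "diagnoses" are invented for illustration; real diagnostic sites use far larger trees built by domain experts.

```python
# A hard-coded diagnostic decision tree, boiled down to a few branches.
# Symptoms and "diagnoses" are invented for illustration only.

def diagnose(fever, cough, rash):
    """Walk a fixed tree of yes/no questions down to a leaf answer."""
    if fever:
        if cough:
            return "possible flu"
        if rash:
            return "possible measles"
        return "possible infection"
    if cough:
        return "possible common cold"
    return "no obvious match"

print(diagnose(fever=True, cough=True, rash=False))   # -> possible flu
```

Nothing in there learns anything. It just answers questions the way its programmer decided it would, yet to a user it can look like it is "thinking".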
The problem with this, and with most AI, is that it is just that: it isn't ACTUALLY intelligent. It is artificial, or perhaps a better term is simulated, intelligence. Many AIs are hard-coded. They have literally zero chance of becoming more intelligent.
Next we have machine learning. This gets MUCH closer to actual intelligence, but only within very specific confines. The AI has a set of parameters it is able to adjust and a set of rules for determining how successful a given set of parameters is. However, this type of system also has hard-coded boundaries, and these learners are again completely incapable of developing independently. In the realm of what most people think of as AI, this is the segment that generally causes fears of robot overlords: you see a robot "learn" to walk, and all of a sudden it looks much more human, and much more intelligent, than it actually is.
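Here is a bare-bones sketch of what "adjusting parameters against a success rule" amounts to. The target values and step size are invented for illustration; this is a toy hill-climber, not any particular production algorithm.

```python
import random

# A toy "learner" in the sense above: the only thing it can change are
# the numbers in `params`, and "success" is whatever the hard-coded
# score() function says it is. TARGET is invented for illustration.

TARGET = [3.0, -1.5, 0.5]

def score(params):
    """Rule for judging a parameter set: negative squared error."""
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

params = [0.0, 0.0, 0.0]                     # all it may ever adjust
for _ in range(10000):
    candidate = [p + random.gauss(0, 0.1) for p in params]
    if score(candidate) > score(params):     # keep only improvements
        params = candidate

print(params)   # drifts toward TARGET -- and can never do anything else
```

The learner "improves", but only along the three dials it was handed. It cannot invent a fourth dial, question its scoring rule, or do anything outside the loop its programmer wrote.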
And this field is FAR from perfected. The issue is selecting the correct parameters, the correct algorithm (or combination of algorithms) for modifying those parameters, and the correct rules for determining viability. Any one of these on its own can be incredibly difficult even for mundane tasks. Nailing all of them, for something complex enough that we could mistake it for a true intelligence? That alone is probably more than the 100 years away that Hawking speaks of. And the result would still be confined within the boundaries of what it was programmed to do.
The batshit-crazy side of AI is where things get scary, but the problem is orders of magnitude harder than typical machine learning. That side is code which can write net-new code: not tweak parameters, but actually produce, on its own, a new "generation" of code which is different from itself. This could yield a computer capable of erasing its boundaries, of explicitly learning to do more things than it was programmed to do. It could change how it evaluates its outcomes and even change its parameters entirely.
There are just two problems with that. First, as stated, it is insanely difficult to write a useful system that is smart enough to A) not destroy itself and B) learn in such a fashion as to acquire novel capabilities. The second problem is that this is no longer actually artificial intelligence. At that point, it is just intelligence.
This is actually the bleeding edge of two branches of AI research: genetic algorithms and genetic programming, wrapped around a machine learning system which knows how to write a similar machine learning program.
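A toy version of that outer genetic-programming loop might look like the following. The target function, operator set, and population sizes are all invented for illustration; real research systems evolve far richer programs than arithmetic over x.

```python
import random

# A toy genetic-programming loop: every "program" is an expression tree
# over x, and each new generation is produced by mutating existing trees.
# Target function, operators, and sizes are invented for illustration.

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=3):
    """Build a random expression tree of at most `depth` levels."""
    if depth <= 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.uniform(-2.0, 2.0)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Run a program: a tree is 'x', a constant, or (op, left, right)."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree, depth=3):
    """Produce a new 'generation': a copy with some subtrees rewritten."""
    if random.random() < 0.2:
        return random_tree(depth)
    if isinstance(tree, tuple):
        op, left, right = tree
        return (op, mutate(left, depth - 1), mutate(right, depth - 1))
    return tree

def error(tree):
    """Rule for judging a program: squared error against x*x + 1."""
    xs = [i / 10.0 for i in range(-20, 21)]
    return sum((evaluate(tree, x) - (x * x + 1)) ** 2 for x in xs)

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=error)
    survivors = population[:50]                      # keep the fittest
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]   # breed new "code"

population.sort(key=error)
print(population[0], error(population[0]))
```

Note that even this toy only shuffles arithmetic. It can never evolve anything outside the tiny expression language it was handed, which is exactly the boundary problem described above.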
The problem isn't the theory. At a really high level, we know EXACTLY what such an intelligence looks like, and we have for quite some time. The problem isn't hardware either: sure, we are probably behind there too, but there is a good chance that won't be the bottleneck long before we hit the 100-year mark. The problem is that there are only two ways to get there: a basic system of such a design that, over several trillion iterations, becomes better; or one complex enough at the start, with enough pre-programmed smarts, that it can begin adapting to new problems immediately.
That first scenario could take as long as, or longer than, the evolution of man. The second requires a starting system that is simply too large and complex to build.