What we tend to get wrong about AI taking over the world
It occurred to me tonight that we make a common mistake when we fear the AI apocalypse: we imagine learning computers being as fast, as accurate, and as well suited to their environments as whatever conventional software is top of the line.
But that isn't actually all that likely. Conventional programs and AI solve radically different problems, and the reasons why are key to my argument. Let's say I write software to calculate the change to give when a person pays a bill of a known amount. It is a quick calculation. It is easy. And it is SUPER rigid. It is fast and accurate, and it is hard coded. It contains no actual intelligence of its own, and no ability to learn.
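To make that concrete, here's a minimal sketch in Python. The denominations are my assumption (US-style bills and coins, counted in cents); everything else is exactly the kind of fixed, unintelligent logic I mean.

```python
# Hard-coded change calculator: fast, exact, and utterly rigid.
# Denominations are assumed US-style, expressed in cents.
def make_change(amount_due: int, amount_paid: int) -> dict:
    denominations = [2000, 1000, 500, 100, 25, 10, 5, 1]
    remaining = amount_paid - amount_due
    if remaining < 0:
        raise ValueError("Payment is less than the amount due.")
    change = {}
    for denom in denominations:
        count, remaining = divmod(remaining, denom)
        if count:
            change[denom] = count
    return change

print(make_change(1347, 2000))  # {500: 1, 100: 1, 25: 2, 1: 3}
```

Every rule is fixed at the moment the code is written. Hand it a new currency and it is simply wrong until a human edits it.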
We make a LOT of robots and software like this. And the reason is simple. We like getting THE right answer as fast as possible.
Why, then, would we ever do anything differently? Because some problems are so complex that even when we know how to compute the answer exactly, it would take a computer far too long to do so, or because what we're trying to solve requires adaptability. Think of the travelling salesman problem, or building a chatbot that can talk to humans.
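The travelling salesman problem shows the cost of insisting on the exact answer. The coordinates below are made up; the factorial blow-up is the real point:

```python
# Brute-force TSP: try every ordering of the cities and keep the shortest.
# Exact, but the number of orderings grows factorially with city count.
from itertools import permutations
from math import dist, factorial

cities = [(0, 0), (1, 5), (4, 2), (6, 6), (3, 1)]  # invented coordinates

def route_length(order):
    return sum(dist(order[i], order[i + 1]) for i in range(len(order) - 1))

best = min(permutations(cities), key=route_length)
print(f"Shortest route: {route_length(best):.2f}")
print(f"Orderings for 5 cities: {factorial(5)}")
print(f"Orderings for 20 cities: {factorial(20):,}")  # ~2.4 quintillion
```

Five cities means 120 routes to check. Twenty cities means about 2.4 quintillion. Exact methods stop being an option very quickly.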
A ton of fields are dedicated to work like this. But they all share a few problems. The approaches are generally still costly, and they don't guarantee the right answer. In the case of AI, like neural nets, everything depends on what the training data is. A really well written neural net can still have 0% accuracy.
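Here's a toy demonstration of that last claim, using a one-layer logistic model as a stand-in for a neural net (the dataset and its labels are invented). The code is identical in both runs; only the training data differs:

```python
# Same model, two training sets: one labeled correctly, one with every
# label flipped. The flipped version learns the exact opposite of reality.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(float)  # the real pattern
y_flipped = 1.0 - y_true                        # every label is wrong

def train(X, y, steps=2000, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)        # gradient descent step
    return w

for name, labels in [("clean", y_true), ("flipped", y_flipped)]:
    w = train(X, labels)
    accuracy = ((X @ w > 0).astype(float) == y_true).mean()
    print(f"{name} labels -> accuracy against reality: {accuracy:.0%}")
```

The flipped run trains beautifully by its own lights; its loss drops the whole time. It still scores roughly 0% against the truth. Garbage in, garbage out.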
All of this brings me to my final point: evolution. Have you ever wondered why animals have different life spans and different reproductive cycles? The answer is simple: there is no single ideal value for those things to tend toward. But what should jump out is that for very many species, this cycle is actually quite long. Surely evolving faster is better. Right?
Nope. Some threats are recurring. Take seasons, for instance. Imagine an organism with a short life span and a rapid reproduction rate, one whose species could fully re-adapt within, say, a day. The species starts out in summer, is almost totally killed off by the cold in winter, re-adapts, is almost killed off by the summer, and is then finally wiped out the following winter.
The rapid rate of adaptation is actually what led to the extinction of this theoretical species. Notice, too, that it survived one winter but died the next. That's because evolution doesn't say, "Hey, wait! We did this to survive last time, let's do it again." It just makes small random changes. Even a species able to evolve rapidly has no guarantee that a good solution can or will be found. Because random.
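We can run the thought experiment as a toy simulation. Every number here is invented and hand-tuned purely to mirror the story; the dynamic, not the values, is the point:

```python
# A fast-adapting species facing alternating ~6-month seasons. Its trait is
# always tuned to the season that just ended, so each flip is a massacre.
def simulate(population=1_000_000.0, days=730):
    trait = 1.0  # 1.0 = summer-adapted, 0.0 = winter-adapted
    for day in range(days):
        ideal = 1.0 if (day // 182) % 2 == 0 else 0.0  # summer first
        mismatch = abs(trait - ideal)
        # Well-adapted individuals breed slowly; maladapted ones die fast.
        population *= 1.0 + 0.01 * (1.0 - mismatch) - 0.95 * mismatch
        if population < 1.0:
            return f"extinct on day {day}"
        trait += 0.2 * (ideal - trait)  # rapid re-adaptation, no memory
    return f"{population:,.0f} alive after two years"

print(simulate())
```

With these made-up numbers the lineage squeaks through its first winter, is nearly wiped out by its second summer, and dies when the second winter hits, the same arc as the story above. (I've simplified adaptation to a deterministic pull toward the current season; real evolution's random search would only be worse.) Speed of adaptation didn't save it, because nothing was remembered between crises.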
Now, you might point out that computers CAN remember. So here's another quick question about evolution: why do you think most animals, humans included, have limited memories? Forgetting is also an important skill imparted to us by evolution. Information becomes irrelevant. If we never forgot, it would become increasingly difficult to make decisions; we'd have too much past experience to consider, and much of that information would be outdated.
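Software has its own version of this. One common engineering analogue of forgetting (my example, not anything a real system is obliged to do) is a sliding window that only keeps recent experience:

```python
# Bounded memory: a deque with maxlen silently discards the oldest entry
# whenever a new one arrives, so decisions only weigh recent data.
from collections import deque

memory = deque(maxlen=5)          # remember at most 5 observations
for observation in range(1, 11):  # observe 1 through 10
    memory.append(observation)

print(list(memory))               # [6, 7, 8, 9, 10] -- 1 through 5 forgotten
print(sum(memory) / len(memory))  # decide using only what's recent: 8.0
```

The forgetting isn't a flaw there; it's the whole feature. Old observations stop paying rent, so they get evicted.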
Also, when attempting something genuinely new, it is impossible to know what a good outcome looks like. Even humans will count a minor victory as a success when struggling with something new. For an AI that has never seen or done the new thing it is trying to master, a lack of training data would likely lead it to stop learning at the earliest sign of success, or to fail to recognize success at all and train toward failure while thinking it is working.
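"Stopping at the earliest sign of success" has a classic shape in code: a hill climber that quits the moment it can't improve. The fitness landscape below is invented, with a mediocre peak sitting right next to a much better one:

```python
# Naive hill climbing: move to a better neighbor until none exists, then
# declare victory. On this (hypothetical) landscape that means settling
# for the small hill at x=2 and never finding the big one at x=8.
def score(x: int) -> int:
    hills = {0: 0, 1: 2, 2: 3, 3: 2, 4: 1, 5: 2, 6: 4, 7: 7, 8: 9, 9: 6}
    return hills.get(x, 0)

x = 0
while True:
    best_neighbor = max([x - 1, x + 1], key=score)
    if score(best_neighbor) <= score(x):  # "good enough" -- stop learning
        break
    x = best_neighbor

print(f"Settled at x={x} with score {score(x)}; the global best is 9 at x=8")
```

Without outside knowledge of what "good" even means, the climber has no way to know its first success was a mediocre one.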
Combine all of this and we hit a few big problems for "SkyNet".
If AIs evolve too fast, they can discover too late that they discarded needed skills. If they hold onto all past data, they'll eventually slow their own computation down until they crash. And a system like SkyNet would be constantly learning and creating new things; more likely than not, those creations would fail amusingly.
When neural nets start forgetting information and throttling the rate at which they adapt, I'll start showing some more concern. But for the moment, deployed neural nets tend not to be actively learning or adapting at all. It turns out all of this is much more complex than simply training a hard-coded model.