Stephen Hawking says artificial intelligence could end humanity. “Computers will overtake humans with AI at some point within the next 100 years,” he warns.
First of all, I think we should be referring to it as “artificial consciousness,” because that’s the real issue: not whether machines are intelligent, but whether they have self-awareness and consciousness.
Far be it from me to disagree with someone as smart as Hawking, but rather than ending humanity, AI just might be the only thing that can save us.
Think about it. Have we really done that great a job on our own? Warfare seems to be the natural state of humanity, and most of our technological advances have come from war and destruction. We are pitifully unable to manage our resources or rise above greed and tribalism. The notion of human equality, while an ideal we say we want, actually scares the bejeebers out of us, and even the most advanced among us work feverishly to maintain inequality and imbalance.
On the other hand, Hawking might be right — if we consider that AI will be a product of ourselves. And the fault, dear Brutus, is not in our stars, but in… Well, you know the rest. If artificial consciousness comes into existence and its morality is a product of our own, can we expect it to treat us better than we’ve treated each other? If AI looks to humanity as an example of how to treat those lesser than itself, then our goose just might be cooked.
That’s why I’m practicing saying, “I, for one, would like to welcome our new Skynet overlords.”
(Cross-posted from Archervox)