AI Weekly: We should remember Stephen Hawking’s nuanced opinion of AI

I usually cringe whenever Stephen Hawking’s name comes up in a conversation about AI. If the world were divided into critics and believers, Hawking would certainly fall into the AI critic column, but slotting him there ignores a great deal of nuance in his position on artificial intelligence. With his death earlier this week, I fear people will only remember which side he was on and miss his thoughtful perspectives on what specific dangers could lie ahead.

To be clear, Hawking was no great fan of general artificial intelligence. He repeatedly said that a superintelligent AI could spell the end of humanity. His argument was fairly straightforward: A superintelligence would be able to pursue its goals incredibly competently, and if those goals weren’t aligned with humanity’s, we’d get run over.

“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants,” he wrote in a 2015 question-and-answer session on Reddit. “Let’s not place humanity in the position of those ants.”

In his public remarks, Hawking also warned about AI being used as a tool of oppression — empowering the few against the many and deepening already existing inequality. And yes, he did warn about AI-based weapons systems.

His arguments didn’t seem to stem from a belief in malicious AI systems, but rather in radically indifferent ones that wouldn’t wield their power beneficially.

But at the same time, he was optimistic that AI could be the best thing ever to happen to humanity, provided it was built to benefit us. His advocacy came from a belief that it was possible to develop a set of best practices that would lead to the creation of beneficial AI.

There’s one other important component to Hawking’s AI criticism: a healthy skepticism toward those who predict the arrival of superintelligence within a particular time frame.

“There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime,” he wrote.

When superintelligent AI does arrive, here’s hoping we actually remember what he had to say.
