----
Will Artificial Intelligence Be Humanity's Worst Mistake?
// Mysterious Universe
Unlike certain other celebrity elder statesmen of science, Stephen Hawking isn't prone to saying unbelievably weird stuff just to troll us, so when he wrote last week that developing artificial intelligence would be "the biggest event in human history" but "might also be the last, unless we learn how to avoid the risks," the international press listened.
Stuart Armstrong (of Oxford's Future of Humanity Institute) gives a brief overview of existential risk in general, and of the existential risk posed by AI in particular, in a short video embedded in the original post.
In the short term, the biggest danger posed by AI is autonomous drones, and the best time to prevent their development is to ban their use before it becomes commonplace (a ban has already been proposed in Canada, and the issue seems an obvious topic for a future U.N. covenant). Most of the longer-range fears about AI may sound far-fetched now, which makes this an ideal time to have these conversations, before powerful military and financial interests find themselves in the unenviable and dangerous position of planning to deploy technology that the general public has never discussed. The last time that happened with a new technology posing an existential risk, the result was an international nuclear arms race.
You can find out more about the risks of AI from Oxford's Future of Humanity Institute and Cambridge's Centre for the Study of Existential Risk (CSER).
----