“Success in creating effective A.I.,” said the late Stephen Hawking, “could be the biggest event in the history of our civilization. Or the worst. We just don’t know.” Elon Musk called A.I. “a fundamental risk to the existence of civilization.” Are we creating the instruments of our own destruction or exciting tools for our future survival? Once we teach a machine to learn on its own—as the programmers behind AlphaGo have done, with wondrous results—where do we draw moral and computational lines? Leading specialists in A.I., neuroscience, and philosophy will tackle the very questions that may define the future of humanity.
The Big Ideas Series is supported in part by the John Templeton Foundation.