The men were too absorbed in their work to notice my arrival at first. Three walls of the conference room held whiteboards densely filled with algebra and scribbled diagrams. One man jumped up to sketch another graph, and three colleagues crowded around to examine it more closely. Their urgency surprised me, though it probably shouldn’t have. These academics were debating what they believe could be one of the greatest threats to mankind: could superintelligent computers wipe us all out?
I was visiting the Future of Humanity Institute, a research department at Oxford University founded in 2005 to study the “big-picture questions” of human life. One of its main areas of research is existential risk. The physicists, philosophers, biologists, economists, computer scientists and mathematicians of the institute are students of the apocalypse.
Predictions of the end of history are as old as history itself, but the 21st century poses new threats. The development of nuclear weapons marked the first time that we had the technology to end all human life. Since then, advances in synthetic biology and nanotechnology have increased the potential for human beings to do catastrophic harm by accident or through deliberate, criminal intent.