Cambridge University vs. Skynet
Could Artificial Intelligence (AI) end up dooming the world? That seems like a far-fetched possibility, considering how some of the most visible examples of AI—including Apple’s Siri and various automated online assistants—can barely conjure up movie times at your local theater, or the right customer-service phone number, without a hilarious flub or two. But as those systems grow more sophisticated, and the data they draw on accumulates to gargantuan proportions, a group of researchers at Cambridge believes AI could one day threaten the existence of the human race—and they’re launching the Centre for the Study of Existential Risk (CSER) to study the issue.

“Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole,” read a note posted earlier this year on the center’s homepage. “Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change.” (Hat tip to the U.K.’s Daily Mail Online for drawing attention to the center’s launch.)

All those issues, the note continued, “require a great deal more scientific investigation than they presently receive.” A “multidisciplinary” research center could steer Cambridge’s not-inconsiderable intellectual might toward that investigation.

“It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price, one of the center’s co-founders, told the Associated Press.
Should that Terminator-style future come to pass, he added, human beings would no longer be “the smartest things around.” But that doesn’t mean an artificial intelligence, suddenly gifted with sentience, will take one look at reality TV and decide the human race needs to be annihilated posthaste for the good of the universe; rather, sentient machines’ interests simply won’t mirror those of their creators.

“It tends to be regarded as a flakey concern,” Price told the AP, “but given that we don't know how serious the risks are, that we don't know the time scale, dismissing the concerns is dangerous. What we're trying to do is to push it forward in the respectable scientific community.”

Computing platforms have become far more powerful in recent years, with no end in sight. Earlier in November, Hans Meuer and his team of researchers released the year’s second semi-annual TOP500 list of the world’s most powerful general-purpose supercomputers in common use for high-end applications. The list features 23 systems with petaflop-class performance, just four and a half years after the debut of Roadrunner, the IBM-built Department of Energy machine that first broke the petaflop barrier.

That’s not to say the push for ever more intelligent computers, each capable of processing the biggest of Big Data, is going to lead to Skynet. But at least some scientists are studying that distant possibility.

Image: jimmi/Shutterstock.com
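For a sense of the scale those TOP500 numbers imply, here is a quick back-of-the-envelope sketch. The one-calculation-per-second human rate is an illustrative assumption, not a figure from the article; the petaflop definition (10^15 floating-point operations per second) is standard.

```python
# A petaflop-class machine performs on the order of 1e15
# floating-point operations per second.
PETAFLOP = 10**15  # operations per second

# Illustrative assumption: a person working one calculation per second.
human_rate = 1  # operations per second

# Time a single human would need to match ONE second of
# petaflop-class computation:
seconds_needed = PETAFLOP / human_rate
years_needed = seconds_needed / (60 * 60 * 24 * 365)

print(f"{years_needed / 1e6:.1f} million years")  # roughly 31.7 million years
```

In other words, one second of work for a petaflop machine would take a lone human tens of millions of years—some indication of why researchers take the trajectory of computing power seriously, even if sentience remains speculative.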