Machines are getting more intelligent every year, and researchers believe they could reach human-level intelligence in the coming decades. Once they reach that point, they could begin improving themselves and creating other, even more powerful AIs, known as superintelligences, according to Oxford philosopher Nick Bostrom and others in the field.
In 2014, Musk, who has his own $1 billion AI research company, warned that AI has the potential to be “more dangerous than nukes,” while Hawking said in December 2014 that AI could end humanity. But there are two sides to the coin: AI could also help cure cancer and slow global warming.
The 23 principles designed to ensure that AI remains a force for good — known as the Asilomar AI Principles because they were developed at the Asilomar conference venue in California — are broken down into three categories:
- Research issues
- Ethics and values
- Longer-term issues
The principles, which refer to AI-powered autonomous weapons and self-replicating AIs, were created by the Future of Life Institute.
The non-profit Institute — founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, and DeepMind research scientist Viktoriya Krakovna — is working to ensure that tomorrow’s most powerful technologies are beneficial for humanity. Hawking and Musk sit on its board of advisors.
“Artificial intelligence has already provided beneficial tools that are used every day by people around the world,” the Institute wrote on its website. “Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.”
The principles were developed off the back of the Beneficial AI conference held earlier this month, which was attended by some of the most high-profile figures in the AI community, including DeepMind CEO Demis Hassabis and Facebook AI guru Yann LeCun.
At the conference, Musk sat on a panel alongside Hassabis, Bostrom, Tallinn, and other AI leaders. Each of them was asked in turn what they thought about superintelligence — defined by Bostrom in an academic paper as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”
When the panel was asked if superintelligence is possible, everyone said yes, except Musk, who appeared to be joking when he said no.
When asked whether superintelligence will actually happen, seven of the panel said yes, while Bostrom said “probably” and Musk again joked “no.”
Interestingly, when the panel was asked whether it wanted superintelligence to happen, the response was more mixed: four panelists said “it’s complicated,” and Musk said it “depends on which kind.”