The Anglo-Saxon approach to superintelligence will not be enough for the world.

After watching some of the videos from the Asilomar 2017 conference hosted by the Future of Life Institute in January, a new concern arose in my mind: are our efforts enough for the challenge of developing an Artificial Intelligence (AI) entity smarter than us humans?

This piece was inspired by the panel Superintelligence: Science or Fiction?, which gathered some of the most recognized global experts, entrepreneurs, and researchers in Artificial Intelligence (AI). Thanks to donations from Elon Musk and other concerned entrepreneurs, the past decade has seen the birth of new institutions dedicated to analyzing the ethical, social, and human implications of developing a superintelligent system. Besides the Future of Life Institute, which focuses on safeguarding life and developing optimistic visions of the future, there are the Machine Intelligence Research Institute (MIRI), which works to ensure that the creation of smarter-than-human intelligence has a positive impact; the Leverhulme Centre for the Future of Intelligence, which addresses the challenges and opportunities of the future development of artificial intelligence; the Centre for Effective Altruism, which focuses on creating a global community of people who have made helping others a core part of their lives; the Centre for the Study of Existential Risk, which studies risks of human extinction that may emerge from technological advances; and the Future of Humanity Institute and the Strategic Artificial Intelligence Research Centre, among others.

These institutions are leading the research on how we as humanity should adapt to superintelligence (meaning an AI that becomes more intelligent than humans). The conference was great because it gives you insight into what is going on and what the leading scientists are trying to achieve. However, as you may see from the panel, it is an Anglo-Saxon effort. All of the people participating in these institutions are located in Silicon Valley, Boston, Oxford, and Cambridge. The rest of the world is absent. You can argue that the powerful tech companies à la Alphabet, Amazon, and Facebook are located in the US, or that the leading UK universities have the best people in the world, but there is still a big issue: what happens to all the companies, researchers, and experiments in AI being developed in other parts of the world?

Consider Japan, where Honda is setting up its new Research & Development X centre to work on automated cars, or the astounding growth of Chinese efforts to position the country as a global leader in AI. In the past five years, the biggest tech companies in China (Baidu, Didi, Tencent), which have as much money as their American counterparts, have been investing heavily in AI. The same goes for the rise of researchers from China, who in February 2017 represented almost half of the participants at the AAAI 2017 conference of the Association for the Advancement of Artificial Intelligence, held in San Francisco. The surprise is even bigger because the Chinese AI newcomers now come not only from China's elite universities but also from rural universities all around the country.

What about the research in North Korea, whose hackers have several times been able to destabilize the South Korean economy through advanced cyber attacks? Or in Iran, which was able to reverse-engineer the Stuxnet virus and use it against its creators? What about countries with large numbers of talented hackers and scientists, like Russia and Ukraine?

The advantage of knowledge is that it can be created anywhere in the world. The era when knowledge was kept tightly secret is over; that level of secrecy still applies only to nuclear energy. Artificial Intelligence can be learned by anyone, and if local governments start making strategic investments, they can find great talent that will push AI programs into places we have not seen before.

One of the interesting thoughts on the panel, expressed by Raymond Kurzweil, was that even if technologists could create a balanced AI system, political and social conditions will always determine what happens to that system. And here lies a possible path toward a common solution for preparing for a future in which a superintelligence will be born: we need to engage in an international collaboration effort beyond the Anglo-Saxon discussions and start preparing our future politicians for the implications of these kinds of systems for our lives. If we don't, it will not matter that even Stephen Hawking tells us to be cautious about the possibility of a Terminator.

Watch this video portraying what a superintelligence could become and what our decision-making process could be.

Tell me your thoughts on how we can create a global alliance that gathers more people, especially people who don't come from a tech background but understand the necessity of preparing for the future.
