In a world increasingly reliant on artificial intelligence (AI), Jaan Tallinn, a founding engineer of Skype and a co-founder of the Future of Life Institute, has issued a stark warning about the potential future of AI warfare. Tallinn’s concerns center on ‘slaughterbots’, militarized killer drones powered by AI, and the terrifying possibility of an AI arms race. The Estonian computer programmer’s chilling prediction, “We might just be creating a world where it’s no longer safe to be outside because you might be chased down by swarms of slaughterbots,” reflects his apprehension about the unchecked development and deployment of advanced AI technologies.
Tallinn’s fears are not unfounded, given the increasing militarization of AI and the difficulty of attributing attacks in an AI warfare scenario. He argues that integrating AI into military applications makes it extremely difficult for humanity to control the trajectory of AI, pushing us into a literal arms race in which strategic advantage supersedes a thoughtful approach to the technology. This concern is shared by many in the AI field: hundreds of signatories, including Elon Musk and Apple co-founder Steve Wozniak, backed an open letter earlier this year calling for a six-month pause on advanced AI development.
Skype Co-founder Warns of Potential AI Arms Race
Jaan Tallinn, a founding engineer of Skype and a co-founder of the Future of Life Institute, has expressed concerns about the dangers of unregulated advances in artificial intelligence (AI) technology. Tallinn warns that the world could become unsafe with the emergence of AI-powered “slaughterbots.”
The Risk of Unchecked AI Progress
Tallinn, who is also a co-founder of the Cambridge Centre for the Study of Existential Risk, has been actively involved in the study and mitigation of existential risks arising from the development of advanced AI technologies. His concerns about “slaughterbots” stem from a 2017 short film of the same name, released by the Future of Life Institute. The film depicts a dystopian near future in which swarms of militarized, AI-powered killer drones are used to carry out untraceable killings.
AI in the Military: A Risky Affair
Tallinn is particularly worried about the implications of military usage of AI. He believes that incorporating AI into warfare could make it difficult for humanity to control the trajectory of AI development. “Putting AI in the military makes it very hard for humanity to control AI’s trajectory, because at this point you are in a literal arms race,” Tallinn stated in a video interview with Al Jazeera.
In addition, Tallinn points out that AI warfare could make attacks difficult to attribute, and fears that the natural evolution for fully automated warfare is “swarms of miniaturized drones that anyone with money can produce and release without attribution.”
Rising Concerns Over AI Development
The Future of Life Institute, which aligns with Tallinn’s concerns, was established in 2014 and soon drew the attention of notable figures like Elon Musk, who donated $10 million to the institute in 2015. The issue has taken on new urgency recently, however, with the release of AI models like ChatGPT, fears about AI displacing people’s jobs, and concerns about potential misuse of the technology by the public.
High-profile individuals, including Elon Musk, Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, researchers at Alphabet’s AI lab DeepMind, and prominent AI professors, signed an open letter issued by the Future of Life Institute earlier this year. The letter called for a six-month pause on advanced AI development, stating that “advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
Tallinn’s warnings highlight the critical need for careful regulation and oversight in the development and deployment of advanced AI technologies. As the AI field continues to evolve at a rapid pace, it is crucial for policymakers, researchers, and stakeholders to take these concerns seriously and work together to ensure the safe and ethical use of AI.