The Campaign to Stop Killer Robots is a coalition of non-governmental organizations that seeks to pre-emptively ban lethal autonomous weapons. [2] [3]
First launched in April 2013, the Campaign to Stop Killer Robots has urged governments and the United Nations to issue policy outlawing the development of lethal autonomous weapons systems, also known as LAWS. [4] Several countries, including Israel,[citation needed] Russia, [5] South Korea,[citation needed] the United States, [6] and the United Kingdom, [7] oppose the call for a pre-emptive ban, believing that existing international humanitarian law provides sufficient regulation for this area.
In December 2018, a global Ipsos poll quantified growing public opposition to fully autonomous weapons: 61% of adults surveyed across 26 countries opposed the use of lethal autonomous weapons systems. Two-thirds of those opposed thought these weapons would "cross a moral line because machines should not be allowed to kill," and more than half said the weapons would be "unaccountable." [8] A similar survey, conducted across 23 countries in January 2017, found that 56% of respondents opposed the use of these weapons. [9]
In November 2018, the United Nations Secretary-General António Guterres called for a ban on killer robots, stating, "For me there is a message that is very clear – machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant, and should be banned by international law." [10]
In July 2018, over 200 technology companies and 3,000 individuals signed a public pledge to "not participate nor support the development, manufacture, trade, or use of lethal autonomous weapons." [11] In July 2015, over 1,000 experts in artificial intelligence signed on to a letter warning of the threat of an arms race in military artificial intelligence and calling for a ban on autonomous weapons. The letter was presented in Buenos Aires at the 24th International Joint Conference on Artificial Intelligence (IJCAI-15) and was co-signed by Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn and Google DeepMind co-founder Demis Hassabis, among others. [12] [13]
In June 2018, Kate Conger, then a journalist for Gizmodo and now with The New York Times, revealed Google's involvement in Project Maven, a US Department of Defense-funded program that sought to autonomously process video footage shot by surveillance drones. [14] Several Google employees resigned over the project, and 4,000 others sent a letter to Sundar Pichai, the company's chief executive, protesting Google's involvement and demanding that Google not "build warfare technology." [15] Facing internal pressure and public scrutiny, Google released a set of Ethical Principles for AI, which included a pledge not to develop artificial intelligence for use in weapons, and promised not to renew the Maven contract when it expired in 2019. [16]
The campaign won the Ypres Peace Prize in 2020 [17] [18] and was nominated for the 2021 Nobel Peace Prize by Norwegian MP Audun Lysbakken. [19] [20]
Stop Killer Robots is due to release a documentary, Immoral Code, [21] in May 2022 on the subject of automation and killer robots. The film is due to premiere at the Prince Charles Cinema in London's Leicester Square and examines whether there are situations in which it is morally and socially acceptable to take a life and, importantly, whether a computer could know the difference.
The full membership list of the Campaign to Stop Killer Robots is available on their website. [22]
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
An autonomous robot is a robot that acts without direct human control. Historic examples include space probes; modern examples include robot vacuum cleaners and self-driving cars.
Robotic control is the system that governs the movement of robots, involving both the mechanical aspects and the programmable systems that make it possible to control them. Robots can be controlled by various means, including manual, wireless, semi-autonomous, and fully autonomous control.
Military robots are autonomous robots or remote-controlled mobile robots designed for military applications, from transport to search and rescue and attack.
Geoffrey Everest Hinton is a British-Canadian computer scientist and cognitive psychologist, most noted for his work on artificial neural networks. From 2013 to 2023, he divided his time between Google and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the risks of artificial intelligence (AI) technology. In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto.
Stuart Jonathan Russell is a British computer scientist known for his contributions to artificial intelligence (AI). He is a professor of computer science at the University of California, Berkeley, where he holds the Smith-Zadeh Chair in Engineering, and was an adjunct professor of neurological surgery at the University of California, San Francisco from 2008 to 2011. He founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley. With Peter Norvig, Russell co-authored the authoritative textbook of the field, Artificial Intelligence: A Modern Approach, used in more than 1,500 universities in 135 countries.
An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Stories of AI takeovers remain popular throughout science fiction, but recent advancements have made the threat more real. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic, and how robots should be designed such that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation.
Toby Walsh is Chief Scientist at UNSW.ai, the AI Institute of UNSW Sydney. He is a Laureate fellow, and professor of artificial intelligence in the UNSW School of Computer Science and Engineering at the University of New South Wales and Data61. He has served as Scientific Director of NICTA, Australia's centre of excellence for ICT research. He is noted for his work in artificial intelligence, especially in the areas of social choice, constraint programming and propositional satisfiability. He has served on the Executive Council of the Association for the Advancement of Artificial Intelligence.
Lethal autonomous weapons (LAWs) are a type of autonomous military system that can independently search for and engage targets based on programmed constraints and descriptions. LAWs are also known as lethal autonomous weapon systems (LAWS), autonomous weapon systems (AWS), robotic weapons or killer robots. LAWs may operate in the air, on land, on water, underwater, or in space. As of 2018, the autonomy of such systems was restricted in the sense that a human gave the final command to attack, though there are exceptions with certain "defensive" systems.
The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.
In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something that is unsafe or uncontrollable. The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", laid out detailed research priorities in an accompanying twelve-page document.
Existential risk from artificial general intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
The International Committee for Robot Arms Control (ICRAC) is a "not-for-profit association committed to the peaceful use of robotics in the service of humanity and the regulation of robot weapons." It is concerned about the dangers that autonomous military robots, or lethal autonomous weapons, pose to peace and international security and to civilians in war.
Slaughterbots is a 2017 arms-control advocacy video presenting a dramatized near-future scenario where swarms of inexpensive microdrones use artificial intelligence and facial recognition software to assassinate political opponents based on preprogrammed criteria. It was released by the Future of Life Institute and Stuart Russell, a professor of computer science at the University of California, Berkeley. On YouTube, the video quickly went viral, garnering over two million views, and was screened at the United Nations Convention on Certain Conventional Weapons meeting in Geneva the same month.
A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between global superpowers for better military AI, driven by increasing geopolitical and military tensions.
Do You Trust This Computer? is a 2018 American documentary film directed by Chris Paine that outlines the benefits and especially the dangers of artificial intelligence. It features interviews with a range of prominent individuals relevant to AI, such as Ray Kurzweil, Elon Musk, Jerry Kaplan, Michal Kosinski, D. Scott Phoenix, Hiroshi Ishiguro, and Jonathan Nolan. Paine previously directed Who Killed the Electric Car? (2006) and its follow-up, Revenge of the Electric Car (2011).
Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules, and public-sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary both to encourage AI and to manage its associated risks, but it remains challenging. Regulation of blockchain algorithms is another emerging topic, often discussed alongside the regulation of AI algorithms. Many countries have enacted regulations on high-frequency trading, an area that technological progress is shifting into the realm of AI algorithms.
Ajung Moon is a Korean-Canadian experimental roboticist specializing in ethics and responsible design of interactive robots and autonomous intelligent systems. She is an assistant professor of electrical and computer engineering at McGill University and the Director of the McGill Responsible Autonomy & Intelligent System Ethics (RAISE) lab. Her research interests lie in human-robot interaction, AI ethics, and robot ethics.