
When we think of killer robots, it’s easy to imagine scenes straight out of science fiction movies. However, these weapons, formally known as lethal autonomous weapon systems (LAWS), already exist and are being used in armed conflicts. Their development and proliferation pose serious ethical, legal, and human rights challenges, demanding an urgent response from states and the international community.
What are Killer Robots?
Killer robots are weapon systems that can select and attack targets without human intervention. Unlike traditional drones, which require a human operator’s command to open fire, these systems use artificial intelligence to make autonomous decisions about the use of force.
Some of the best-known examples include Turkey’s Kargu-2, reportedly used in Libya to attack retreating soldiers, and the Bayraktar TB2 drones deployed in Ukraine. While there is debate over how much human control these systems retain, technology is rapidly advancing toward greater autonomy.
How do they work, and how are they different from other robots?
What sets killer robots apart from other military technologies is their ability to operate autonomously, achieved through:
- Artificial Intelligence and Machine Learning – These systems are trained with machine learning algorithms to recognize targets, analyze their environment, and make decisions without direct human oversight.
- Advanced Sensors and Facial Recognition – Some killer robots can identify individuals or groups based on pre-defined characteristics, which could enable extrajudicial executions and war crimes and introduces severe risks of bias.
- Self-Navigation Capabilities – These systems operate in complex scenarios without constant communication with a human operator, making them difficult to control once deployed.

Image: Stop Killer Robots
Unlike other automated systems used in civilian or military contexts—such as rescue robots or surveillance drones—killer robots are designed specifically to eliminate human targets without human judgment or control.
Why are they a threat?
Through the Stop Killer Robots campaign, human rights and technology experts have been warning about the dangers these weapons pose to global stability and human security:
- They facilitate mass violence – Because no human operators are needed, large-scale attacks can be carried out without significant military forces, lowering the threshold for armed conflict.
- They can be used for repression and genocide – With facial recognition technology, they could be programmed to eliminate entire populations based on racial, religious, or political profiles.
- They eliminate accountability – If a machine decides to kill, who is responsible? This challenges fundamental principles of international humanitarian law.
- They are prone to errors and malfunctions – Programming mistakes or misinterpretation of data could lead to unjustified attacks on civilians.
What should states do?
Given these risks, organizations like the United Nations (UN) and the Stop Killer Robots campaign—of which TEDIC is a member—are calling for a binding treaty to ban or strictly regulate autonomous weapons. Some countries have expressed support for a ban, but major military powers continue to resist. From Paraguay, we have secured commitments from Congresswoman Johanna Ortega and Congressman Raúl Benítez to support this campaign against autonomous weapons.
As members of the Stop Killer Robots campaign, we urge governments to:
- Promote an international treaty to ban or regulate the proliferation of these weapons.
- Enact national laws restricting the development, production, and export of LAWS.
- Ensure meaningful human control over decisions involving the use of force in military systems.
- Participate actively in international forums, supporting initiatives like the UN Convention on Certain Conventional Weapons (CCW).
In line with these recommendations, in December 2024 the United Nations General Assembly (UNGA), which comprises 193 member states, adopted Resolution A/RES/79/62 on lethal autonomous weapon systems. A total of 166 countries, including Paraguay, voted in favor, while 3 voted against and 15 abstained. The resolution establishes a new UN-led forum to debate the serious challenges posed by autonomous weapon systems and to discuss the actions needed.
Why does this matter to Paraguay?
Many believe these issues are distant, relevant only to major global powers. However, Paraguay has been actively engaged in advocacy on this issue since 2020. At TEDIC, alongside other civil society organizations, we have promoted statements urging the Paraguayan government to take a clear stance on regulating these weapons.
Technological advancements must not be disconnected from ethical and human rights principles. Killer robots represent a real threat to global security and human dignity. Regulating these technologies is an urgent challenge, and Paraguay—along with other countries in the region—must join this effort.
Banning and regulating these weapons is a crucial step toward preventing the dehumanization of warfare and violence. At TEDIC, we will continue to promote debate and action to halt the advancement of these weapons before it is too late.