
“Killer Robots are the Future of the War.” What will be Their Implications on Countries’ Nuclear Doctrines? How are They Subjugating Human Rights? Critically Evaluate.

"Killer Robots are the Future of the War." What will be Their Implications on Countries' Nuclear Doctrines? How are They Subjugating Human Rights? Critically Evaluate. by Laiba Shahbaz

Killer Robots are the Future of the War | Daily Writeup | Opinions

The following article, “Killer Robots are the Future of the War.” What will be Their Implications on Countries’ Nuclear Doctrines? How are They Subjugating Human Rights? Critically Evaluate., is written by Laiba Shahbaz, a student of Sir Syed Kazim Ali. The article follows the pattern Sir Kazim teaches his students, which has produced the highest marks in compulsory subjects for years. Sir Kazim has uploaded his students’ solved past-paper questions so that thousands of other aspirants can understand how to crack a topic or question, how to write relevantly, what coherence is, and how to include and connect ideas, opinions, and suggestions to score the maximum.


Outline

1-Introduction

2-What are killer robots or robotic weapons?

  • Understanding the term
    • Case in Point: According to the Air University of the United States, “A lethal autonomous weapons system serves as a platform for weapon delivery; such systems independently analyse their surrounding environment and decide whether to attack, with or without human guidance and supervision.”
  • Types/categories
    • Human-in-the-Loop Weapons
    • Human-on-the-Loop Weapons
    • Human-out-of-the-Loop Weapons

3-What is the role of killer robots in future wars?

  • Lethal Autonomous Weapons (LAWs) or killer robots would industrialise and revolutionise warfare.
    • Case in Point: A single program can command hundreds of interlinked weapons.
  • Killer robots can detect and destroy incoming threats more efficiently and precisely than humans can.
    • Case in Point: Israel’s Iron Dome is an automated defence system that detects and destroys incoming rocket and drone attacks automatically.
  • Killer robots would eliminate human involvement from the loop.
    • Case in Point: Previously, a state needed an efficient army of hundreds of thousands of soldiers to wage and win a war. To take the next step, soldiers wait for orders from a higher authority, which analyses the situation; soldiers also require training, camps, food, and pay. Killer robots require none of these: a single programmer could run several weapons at once.
  • Robots are machines, so they increase the chance of technical failure or target misidentification.
    • Case in Point: A Tesla Model S in autopilot mode failed to recognise a tractor-trailer on a Florida highway and crashed into it, killing the car’s owner.
  • Unlike nuclear weapons, these robots would not require sophisticated technology, enormous maintenance, or hard-to-obtain raw materials.
    • Case in Point: Robots are already applied in civilian and commercial projects, healthcare, manufacturing, agriculture, logistics, and transport.
  • A race to build sophisticated and automatic weapons has started among states worldwide.
    • Case in Point: According to Al Jazeera, “Many states, such as the United States, China, the United Kingdom, Iran, India, Russia, Israel, Turkey, and South Korea, are investing heavily in the development of such weapons. The United States alone reserved a budget of US$18 billion for autonomous weapons from 2016 to 2020.”

4-What will be the killer robots’ implications on countries’ nuclear doctrines?

  • The use of autonomous weapons could increase the chance of error, such as false alarms of a nuclear attack.
    • Case in Point: In 1983, Stanislav Petrov, a lieutenant colonel in the Soviet Air Defence Forces, saw a computer give the highest-confidence signal of a US nuclear attack; distrusting the new detection system, he judged it a technical mistake and took no action. He was right; the computer was wrong.
  • Autonomous weapons like killer robots introduce new errors and new opportunities for terrorist actors to manipulate them.
    • Case in Point: A small increase or decrease in the pixels of an image would be enough to convince an artificial intelligence program that a stealth bomber is a dog.
  • States could make catastrophic decisions in the presence of modern and precise weapons.
    • Case in Point: Advances in sensor technology could allow retaliatory missiles or submarines to be located and destroyed.
  • Artificial intelligence is all about data, which can be biased and incomplete.
    • Case in Point: Relying on machine inputs, such as satellite imagery, could produce variations in data during foggy or rainy weather.
  • The use of autonomous technology could break the notions of seventy-year-old nuclear strategies.
    • Case in Point: According to the RAND Corporation, a US think tank, artificial intelligence could potentially cause nuclear war by 2040.
  • Autonomy in nuclear weapons is not an imaginary concept; it is already here.
    • Case in Point: Russia recently tested an autonomous underwater nuclear weapon, Poseidon, a second-strike weapon designed to retaliate against an opponent’s nuclear strike.

5-How are they subjugating human rights?

  • Being lethal weapons, they could facilitate mass murder, assassinations, and the killing of innocents.
    • Case in Point: In 2015, Elon Musk, Steve Wozniak, and Stephen Hawking, along with thousands of artificial intelligence researchers, signed a letter warning that autonomous weapons would become the Kalashnikovs of the future, ideal for assassinations, ethnic cleansing, destabilising nations, and similar tasks.
  • Fully autonomous weapons, being machines, lack the human qualities needed to follow the rules of international humanitarian law.
    • Case in Point: Only soldiers can differentiate between the intentions of a frightened civilian and a threatening enemy, while robots cannot.
  • There would always be a chance of misidentification of the required target.
    • Case in Point: It would be difficult for a robot to distinguish between a hostile soldier and a twelve-year-old girl.
  • The absence of human involvement in the decision-making process also undermines other non-legal civilian protections.
    • Case in Point: According to a Human Rights Watch report, “Emotionless robots would behave like dictators who crack down on their people and would not be restrained by any emotions.”
  • Autonomous robots would make finding someone accountable for atrocities difficult.
    • Case in Point: According to a Human Rights Watch report, military commanders deploy killer robots and programmers design them, but the atrocities are committed by the robots. In such a situation, who would be accountable: the commander, the programmer, or the robot?
  • The use of killer robots could increase the risk of disproportionate attacks.
    • Case in Point: Protocols define a disproportionate attack as one in which the harm to civilian lives and the destruction of civilian property exceed the anticipated military advantage.

6-Conclusion


Answer to the question

Robots, which have become an indispensable tool of today’s world, now take on a problematic role in the military as “killer robots.” This shift from helpful companions to weapons raises significant concerns about their use in warfare. Many countries are updating their tools of war, and fully autonomous weapons or robots are a big part of this effort; notable manifestations include America’s Phalanx Close-In Weapon System, Israel’s Iron Dome, Britain’s Taranis warplane, and Russia’s Poseidon undersea nuclear vehicle. Killer robots would industrialise and revolutionise future warfare. They would eliminate human involvement and mark their targets independently and precisely, yet such weapons always carry the potential for error and misunderstanding. Killer robots would destabilise the notions underpinning nuclear weapons and could initiate a nuclear war; they also fundamentally undermine human rights. This article examines what killer robots are, their role in future warfare, their implications for nuclear doctrines, and their impact on human rights.

To understand the term ‘killer robots’: they are machines that operate when programs are installed and run within them. Unfortunately, humans are capable of weaponising almost anything; in chemistry and biology, for example, they have weaponised certain chemicals, biological pathogens, and viruses. Similarly, the militaries of powerful states are weaponising artificial intelligence and the machines that carry it. A ‘killer robot’ is also called a fully autonomous weapon or a Lethal Autonomous Weapon (LAW). According to the Air University of the United States, “A lethal autonomous weapons system serves as a platform for weapons delivery. Such systems independently analyse their surroundings and decide whether to attack, with or without human guidance and supervision.” Likewise, according to the Campaign to Stop Killer Robots, “Killer robots fall under the category of fully autonomous weapons. Such weapons operate without human involvement and decide their target, location and effects.” Thus, killer robots are warfare’s future, and the world is getting closer to acquiring this technology.

Adding more to it, a Human Rights Watch report describes three categories of robots by their degree of human interaction, which further illustrate killer robots. First, ‘human-in-the-loop weapons’ are robots utterly dependent on humans for force delivery and target selection. Second, ‘human-on-the-loop weapons’ can select targets and fire, but under human supervision. Third, ‘human-out-of-the-loop weapons’ are robots capable of selecting targets and delivering force without any human involvement or interaction. Killer robots, or LAWs, fall under the category of human-out-of-the-loop weapons; the sketch below makes this taxonomy concrete.
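
As an illustration only, the taxonomy can be expressed as a short Python sketch; the class name, labels, and the intervention check are hypothetical and describe no real weapon-control interface.

from enum import Enum

class HumanControl(Enum):
    # Human Rights Watch's three categories of human-robot interaction
    IN_THE_LOOP = "human selects targets and delivers force"
    ON_THE_LOOP = "system selects targets and fires under human supervision"
    OUT_OF_THE_LOOP = "system selects targets and fires without human involvement"

def human_can_intervene(level: HumanControl) -> bool:
    # Only the first two categories leave room for a human decision or veto;
    # killer robots (LAWs) fall in the OUT_OF_THE_LOOP category.
    return level in (HumanControl.IN_THE_LOOP, HumanControl.ON_THE_LOOP)

print(human_can_intervene(HumanControl.OUT_OF_THE_LOOP))  # False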

Moreover, the world has yet to build a fully autonomous robot, but precursors of this technology already exist. Militaries dream of acquiring such weapons because they need minimal manpower, save soldiers’ lives, and reduce response time with precision and efficiency. After gunpowder and nuclear weapons, fully autonomous weapons have been called the third revolution in warfare. Undoubtedly, fully autonomous weapons will be part of future wars, with both their benefits and their implications.

Before analysing the implications of killer robots on a state’s nuclear doctrine, it is necessary to understand their role in future war as a whole. First, such weapons would industrialise and revolutionise wars. Today, a soldier gives the command to initiate an engagement; once killer robots enter wars, wars would be fought automatically, and a single program could command hundreds of interlinked weapons. Also, such weapons work with extreme precision and efficiency compared to humans. For example, Israel’s Iron Dome is an automated system that detects and destroys incoming rocket and drone attacks automatically. Such weapons are already in use and are modernising wars, though still under human supervision. So, killer robots would transform the entire approach to warfare.

Second, killer robots would eliminate human involvement from the loop. To understand this, consider that a state traditionally needs an efficient army of hundreds of thousands of soldiers to wage war, kill the opponent, and win it. To take the next step, soldiers wait for orders from higher authorities, who analyse the situation and issue commands. Soldiers also require training, camps, food, and salaries. Killer robots need none of these: just one programmer could run several weapons at once. As a result, as robots advance, human interaction with them is disappearing.

Third, being machines, LAWs would always carry the chance of technical failure or target misidentification. Today, humans remain in the loop with robots, yet several examples in the civilian domain show robots producing unintended results or misunderstanding commands. For instance, a Tesla Model S in autopilot mode failed to recognise a tractor-trailer on a Florida highway and crashed into it, killing the car’s owner. Moreover, according to the leaked Drone Papers, nearly nine out of ten people killed in drone strikes were not the intended targets. Such small-scale incidents, repeated at a larger scale on the battlefield, could have devastating consequences.

Fourth, nuclear weapons require sophisticated technology, enormous maintenance, and hard-to-obtain raw materials, and the world fears their consequences, as seen in Hiroshima and Nagasaki and in the proliferation of such weapons. Conversely, robots are already in use, hand in hand with people: they are applied in civilian and commercial projects, healthcare, manufacturing, agriculture, logistics, and transport. Technology is becoming more advanced and automated, and such weapons need only the proper application of artificial intelligence and efficient programmers. Any state or non-state actor can acquire this technology easily. Thus, in the case of killer robots, this easy-to-obtain technology could reach a terrorist organisation and spread more devastation and killing than a nuclear blast.

Lastly, a race to build sophisticated and automatic weapons has started among states worldwide. According to Al Jazeera, many states, such as the United States, China, the United Kingdom, Iran, India, Russia, Israel, Turkey, and South Korea, are investing heavily in the development of such weapons; the United States alone reserved a budget of US$18 billion for autonomous weapons from 2016 to 2020. Thus, such trends reflect the proliferation of lethal autonomous weapons.

Having understood the role of killer robots in future warfare, consider how their use could severely affect a state’s nuclear doctrine. Computers were initially designed for military use; today, they handle more military capabilities than ever, including killer robots, and they are in charge of nuclear arsenals, too. Using autonomous weapons could increase the chance of error or misinterpretation, such as false alarms of a nuclear attack. This is not just a prediction; it has happened before. In 1983, Stanislav Petrov, a lieutenant colonel in the Soviet Air Defence Forces, saw a computer give the highest-confidence signal of a nuclear attack by the US. Still, distrusting the new detection system, he was sure it was a technical mistake and took no action. He was right; the computer was wrong. Had Petrov’s task been assigned to a machine, the result could have been the opposite: nuclear war. Similarly, today, with nine nuclear-weapon states routinely conducting missile tests, false alarms can be observed far more often than real ones. Thus, in the sensitive domain of nuclear weapons, the use of killer robots would be nothing but a colossal blunder; a toy sketch of this difference follows.
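
To make the contrast concrete, here is a purely illustrative Python sketch; the confidence threshold, function names, and logic are invented for illustration and describe no real early-warning system.

def machine_decision(alert_confidence: float) -> str:
    # A fully automated system acts whenever detector confidence crosses a
    # fixed threshold, so a high-confidence false alarm triggers retaliation.
    return "retaliate" if alert_confidence > 0.9 else "stand down"

def human_decision(alert_confidence: float, corroborated: bool) -> str:
    # Petrov's reasoning: even a maximum-confidence alert is treated as a
    # probable malfunction unless independent evidence corroborates it.
    return "retaliate" if alert_confidence > 0.9 and corroborated else "stand down"

# The 1983 incident: a highest-confidence alert with no corroborating evidence.
print(machine_decision(0.99))       # retaliate  (automated catastrophe)
print(human_decision(0.99, False))  # stand down (Petrov's call)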

Second, autonomous weapons like killer robots introduce new errors and new opportunities for hostile actors to manipulate them, since artificial intelligence can easily be fooled or bypassed. For example, a small increase or decrease in the pixels of an image can be enough to convince an artificial intelligence program that a stealth bomber is a dog, as the sketch below shows. This demonstrates how easily data can be manipulated to fabricate certainty that a nuclear strike is imminent, and states or terrorist groups could misuse this technology to attain their objectives. Also, states could make catastrophic decisions in the presence of modern and precise weapons; for example, advances in sensor technology could allow retaliatory missiles or submarines to be located and destroyed. Thus, killer robots could spread manipulation and misinterpretation of information about nuclear weapons.
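
The pixel-level manipulation described above is the standard adversarial-example technique known as the Fast Gradient Sign Method (Goodfellow et al., 2015). Below is a minimal PyTorch sketch of it; the model, image, and label are placeholders the reader must supply, and the epsilon value is illustrative.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    # Return a copy of `image` perturbed so the classifier is more likely
    # to mislabel it, while the per-pixel change stays visually negligible.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel a small step in the direction that increases the
    # loss; epsilon bounds the size of the change.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()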

Third, artificial intelligence is all about data, which can be biased and incomplete. In the case of nuclear weapons, states never disclose information about their warheads, so a government always holds ambiguous and incomplete information about its adversary’s capabilities. Consequently, putting artificial intelligence in charge of nuclear weapons and expecting it to give precise results from imprecise details would lead to massive havoc. Moreover, relying on machine inputs, such as satellite imagery, could produce variations in data during foggy or rainy weather. Thus, reliance on autonomous systems for nuclear weapons may not be a rational decision.

Fourth, and most important, is the threat to seventy-year-old nuclear strategies. Autonomous technology could break the notions of mutually assured destruction and credible minimum deterrence, ideologies established between nuclear states that have protected the world from nuclear war. Unfortunately, according to the RAND Corporation, a US think tank, artificial intelligence could potentially cause nuclear war by 2040; it warned that LAWs could harm geopolitical stability and strip atomic weapons of their status as a taboo and a deterrent. Autonomous weapons could also increase the risk of a pre-emptive strike. Thus, killer robots would automate the whole system associated with nuclear weapons and destabilise the perceptions and ethics attached to it.

Finally, autonomy in nuclear weapons is not a fantasy; it is already here. The ongoing race between the world’s major powers has largely incorporated the latest technologies to modernise their weapons. For example, Russia recently tested an autonomous undersea nuclear weapon called Poseidon, a second-strike weapon designed to retaliate against an opponent’s nuclear strike. Still, how this weapon operates, and with what consequences, remains to be seen.

Moving forward, the use of killer robots is deeply intertwined with debates about human rights and ethics. Internationally, various states, non-state actors, and multinational corporations have been discussing how to curb the killer role of artificial intelligence. Killer robots, being lethal weapons, would further ease mass murders, assassinations, and the killing of innocents. In 2015, Elon Musk, Steve Wozniak, and Stephen Hawking, along with thousands of artificial intelligence researchers, signed a letter warning that autonomous weapons would become the Kalashnikovs of the future, ideal for assassinations, ethnic cleansing, destabilising nations, and similar tasks. Thus, killer robots are a nightmare for civilians’ security and protection.

Moreover, a fully autonomous weapon, being a machine, lacks the human qualities needed to follow the rules of international humanitarian law. For example, only soldiers can differentiate between the intentions of a frightened civilian and a threatening enemy; a robot cannot. There is also always the chance of misidentifying the required target: it would be difficult for a robot to distinguish between a hostile soldier and a twelve-year-old girl. As a result, killer robots would be unable to meet the standards of human rights.

Also, the absence of human involvement in the decision-making process undermines other non-legal civilian protections. According to a Human Rights Watch report, emotionless robots would behave like dictators who crack down on their people and are not restrained by any emotions. Autonomous robots would select and attack their targets themselves; in such situations, it would be difficult to hold anyone accountable for atrocities, which removes another layer of civilian protection. As the report asks: military commanders deploy killer robots, and programmers design them, but the atrocities are committed by robots, so who would be accountable, the commander, the programmer, or the robot? Problems like these undermine international humanitarian law.

Finally, using killer robots in warfare could increase the risk of disproportionate attacks. Protocols define a disproportionate attack as one in which the harm to civilian lives and the destruction of civilian property exceed the anticipated military advantage, and killer robots would heighten the risk of such attacks.

In conclusion, fully autonomous weapons pose significant risks because of their potential to cause greater harm to civilians during conflicts; they might violate international humanitarian law and hinder accountability for casualties. Even though these weapons are not yet a reality, technology is rapidly advancing towards their development. States and scientists must act immediately by reviewing and supervising autonomous robot technology. To avoid the difficulty of changing direction later, states should forbid the development of weapons with full autonomy over lethal force. To ensure compliance with the norms of international humanitarian law, anyone developing such weapons should carry out legal assessments as early as possible, and any weapon failing to meet legal requirements should be decommissioned immediately to prevent further investment in potentially harmful technology.
