Remote Controlled Warfare
Can Modern War Be Just?
BA Thesis, 2012, UC San Diego, Political Science (International Relations)
Author: John Hanacek
Supervising Professor: David R. Mares, UC San Diego
Executive Summary:
In its current usage as a targeted killing platform, the Remotely Piloted Aircraft (RPA) or ‘drone’ represents a fundamental challenge to the idea of proportionality as discussed in just war theory. In combining surveillance and strike into a singular package devoid of risk to its operators, RPA have become a destabilizing force to the ideas of justice in war–jus in bello–and justice of war–jus ad bellum–due to the shift in risk calculation that their unique capabilities allow. The concept of proportionality is understood as managing the use of force such that it does not become a greater evil than the one it is fighting. In removing any threat to operators’ lives, the drone critically unbalances the notion of proportionality and risks making the choice to deploy lethal force seductively easy.
Rapid deployment of RPA capability has left a wake of unaddressed technical and strategic problems. The very notion of RPA accuracy, and the consequent ability of these systems to be discriminate within the confines of proportionality, is oversimplified and misunderstood. The ability of drones to provide surveillance for their own strikes has created a fallacious assessment of their true accuracy. While RPA systems may be technically accurate, overall accuracy relies on a myriad of factors to ensure usage on the “correct” target. Intelligence failures, targeting failures, and even technical issues can greatly reduce the accuracy of the weapon in practical terms.
Current targeted killing programs–most notably those of the United States–are creating strains on international law. In violating other nations’ sovereign airspace and killing its own citizens without trial via RPA, the U.S. is setting precedents that could threaten the very balance of the international system. The U.S. currently has a monopoly on the usage of RPA in combat, but like the computers that power them and the networks that connect them, drones are getting more capable and more affordable at a staggering rate. In fielding RPA without any discussion of limits, there is a real risk of atrocious behavior by states and non-states alike, and no codified way to assess such actions.
Looking toward the future, the international community would be prudent to begin discussions on acceptable limits for RPA usage and legal frameworks for managing violations of just war traditions committed via drone. Policing such limits would prove difficult, but absent any framework, the risk is that drones will be used to violate international norms without recourse for those affected, aside from direct retaliation. Determining a “middle ground” approach to the usage of RPA–one that squares states’ desires with ethical requirements–is essential to the continued stability of the international system.
Can Modern Drone War Be Just?
Technology and warfare are inextricably linked. Each shapes the other, and throughout history there have been watershed events that changed the way war is fought and perceived. One of the most profound was the advent and subsequent use of nuclear weapons, which fundamentally changed the power politics of nations. In the post-nuclear world, large-scale state-versus-state warfare carries incredibly costly risks due to the immense destructive power of nuclear weapons. Fortunately, as the Cold War recedes further and further into collective history, the prospect of a nuclear exchange between nations has been reduced dramatically.
Yet, as the 21st century gets firmly under way, we are presented with a new watershed moment, one that is equally important but much less obvious. The Remotely Piloted Aircraft, or “drone” as it is often called, is changing the nature of warfare forever. What began as a purely surveillance platform has been transformed into a front-line weapon of increasing capability. Like the nuclear weapon before it, the drone will require a fundamental reexamination of our understanding and implementation of just war traditions. However, unlike the nuclear weapon, whose destructive capabilities are complete, assured, and understood, the drone is a much more subtle entity. At the state level, nuclear weapons have been relegated to the role of a macro-level power and policy tool, unusable in practice but wielded in theory. Thus the development of increasingly powerful non-nuclear weapons has become a primary concern for states.
Seeking advantage over one’s opponent has led to the creation of weapon systems that place increasing distance between the user and the target. From the spear to the cruise missile, humans have sought to inflict increasing damage with less risk. The latest advance, the Remotely Piloted Aircraft, seems to offer the ultimate strategic advantage; it makes combat clean, precise, and nearly cost-free for those who engage in it. Building on past advances, the drone has emerged as more than just the sum of its capabilities, and is unique in its threat to the just war traditions that modern international order is based upon. The weaponized drone has dealt a major blow to proportionality in war and risks fundamentally unbalancing the principles of just war. In addition, the seductive nature of the drone’s technically precise munitions systems has caused policy-makers to miscalculate its real-world precision. In reality, the drone is no more precise than any other air-based strike platform. Continued unchecked usage of drones for targeted killing risks destabilizing the entire international order as more state and non-state actors acquire and field the technology without limits.
This analysis will be broken into three major components: the specific ethical challenges of drone use, the specific technical and strategic concerns in fielding the technology, and the implications of such usage for foreign policy. It is worth noting that many terms exist to describe these remotely piloted systems: Unmanned Aerial Vehicle (UAV), Remotely Piloted Vehicle/Aircraft (RPV/RPA), and “drone” are often used interchangeably. However, for the purposes of clarity and accuracy, the terms Remotely Piloted Aircraft (RPA) and drone will be used to describe these systems in this discussion. The U.S. Air Force has recently begun using the term RPA, since there is always a pilot and support crew in charge of each aircraft. This distinction is more than simple semantics; referring to the system as “unmanned” places emphasis on the machine, whereas “remotely piloted” shifts authority and responsibility back to the human operators, a welcome adjustment. For now the United States has a monopoly on the usage of RPA, but this advantage will not last. As other nations, groups, and even individuals begin fielding their own RPA, they may follow the dangerous precedent being set by the U.S. While the Obama administration has continually asserted its respect for the just war tradition in its drone operations, it has failed to adequately defend its actions and appears to fundamentally misunderstand the true impact of the RPA on just war traditions. As diverse actors increasingly field these systems across the globe, the international community will have to find a way to square the unique abilities of RPA with the ethical traditions that underpin modern warfare. Warfare has always been defined by relationships between humans and our technology; however, unlike other tools before it, the RPA represents a unique challenge to our conceptions of justice in war and even the justifications for initiating war in the first place.
Ethical Challenges of RPA Use
In the beginning of the fifth century of the Christian era, Augustine of Hippo set down in writing certain ideas about the limits of violence in warfare. (Johnson, 1) At the time, he had to reconcile Christian teachings against the use of violence with the need to defend the Roman Empire from the invading Vandals. (Johnson, 1) He arrived at a solution: “a justification of war under certain prescribed circumstances, yet with genuine limits on the harm that could be done even in a justified war.” (Johnson, 1) This idea is generally considered the foundation of just war tradition in Western thought. Today, the idea of justice of war and justice in war–jus ad bellum and jus in bello–underpins our entire conception of warfare.
The idea of ethical warfare seems contradictory at first. Many have attempted to reconcile the apparent incompatibility of “morality” and “reality” as it pertains to war. However, such approaches often fall short of offering anything other than a further divide between the moral “ought” and the real “is.” Instead, the search for an ethical approach should begin by finding the “middle ground.” Terry Nardin outlines this idea, and although his approach pertains more to international politics and jus ad bellum, the concepts he presents are equally relevant to RPA in war. The term “middle-ground ethics” originates from the English School of International Relations, and seeks “a balance between moral limits on state conduct and the real interests of states.” (Nardin) This is an extremely broad concept with the potential to reach undesirable outcomes of its own, and Nardin himself expresses the danger that such a cost/benefit analysis between morality and interests could lead to a very realist conclusion. However, when considering the place of RPA in the context of ethical behavior, we are in relatively uncharted waters, and it will pay to remain open to all interpretations. Nardin believes we can find some solace in law: namely, that law can establish “the ends for which coercion is justifiably used.” Developing a set of legal agreements for the usage of RPA is desirable, but would be arduous and would likely require an extraordinary policing effort to enforce, if enforcement is even possible. For this immediate discussion, the goal is to determine where current usage of RPA fits into these conceptions of just war, in order that a dialogue might begin on how to integrate this new technology within existing notions.
There are interesting parallels to be drawn from old Cold War anxieties, which can assist in our analysis of the ethical usage of RPA. In his book Can Modern War Be Just?, James Turner Johnson discusses this very issue, but with the nuclear weapon as his focus. For Johnson, “we must first insist that the means of war allow distinguishing between [noncombatants] and combatants.” (Johnson, 115) The modern counter-terrorist campaigns have seen this issue shoved violently into the forefront of concern. Limiting the use of force becomes almost impossible if one is unable to discern combatant from noncombatant. “The moral requirement to protect noncombatants implies the development of weapons usable in ways that satisfy legitimate military functions without corollary damage to the lives, livelihoods, and property of noncombatants.” (Johnson, 28) The RPA looks as if it addresses this: a platform that can remain on station, gather evidence, and strike when the target is confirmed. Yet there are issues with this idea when put into practice. As Johnson points out, even a precision weapon such as a rifle can be used indiscriminately–firing into a crowd, for example–and the RPA is no exception. While its systems may be technically accurate, they rely on a myriad of factors to ensure usage on the “correct” target. Intelligence failures, targeting failures, and even technical issues can see the accuracy of the weapon greatly reduced in practical terms. Even if we accept that an RPA is capable of delivering a precise strike, which it often is, the overuse of such precision becomes a salient ethical issue, as it leads to the establishment of dangerous precedents and the loss of real lives.
To illuminate what ‘precedent’ means in this context, let us consider how two hypothetical countries–Country A and Country B–interact with each other in response to a terrorist attack when they have RPA at their disposal. This scenario is meant to underscore the issues of using RPA legally and ethically, and to exemplify the precedent that has already been set in RPA usage.
A bomb has recently detonated inside Country A. Its target was a government building, but many civilian contractors were inside. The terrorist organization believed to be responsible for the attack–referred to here as X–claims credit for the operation. Country A’s intelligence confirms this, and plans are drawn up for tracking and neutralizing this terrorist cell. In the subsequent days, Country A’s intelligence officials discover that a ranking X leader may be hiding in a rural area of Country B. From satellite imagery gathered at this general location, along with pieced-together call and email logs, officials have found what is believed to be the hiding place of their primary terrorist suspect. Aerial reconnaissance provided by RPA has captured numerous individuals entering and exiting the small building, which is nestled in a relatively remote rural neighborhood. They seem to match the descriptions of the target and his associates, and their behavior patterns are consistent with this, but RPA missions have delivered nothing completely conclusive. There is no reliable intelligence for how many individuals are in the potential blast radius, or what their relationship–if any–is to the terrorist suspect. Solid details are difficult to come by, as Country A does not have any assets on the ground in Country B to confirm. As preparations for a strike continue and officials determine the potential collateral damage, Country A’s intelligence service receives new intel indicating the terrorist suspect will be moving to a new undisclosed location. Over the next few hours, it becomes clear that this window for a targeted strike is perhaps the only one Country A will have. After much deliberation at the highest levels of government, Country A green-lights a strike via an RPA that is on station gathering imagery, armed with two air-to-ground missiles–each one capable of completely destroying the small compound. The RPA crew receives authorization to fire on the building, as they know from recent aerial recon that the suspect is inside. The crew commands the RPA to release a missile, then watches as the building disappears in the white infrared splash of its telltale explosion pattern. It is a direct hit. One of the leading X terrorist operatives is taken out, along with four unidentified Country B citizens. The relationship of these individuals to the terrorist cell is never confirmed, but it is assumed they were assisting the target willingly. The strike is considered a success by Country A. Country A and Country B are not at war, and Country B was not informed before this strike happened. Country B officials only learn of the attack on their soil after it has occurred, and not from Country A, but from their own ground sources.
In this case, Country A felt justified in penetrating Country B’s airspace–and violating their sovereignty–because the threat that X posed to Country A was perceived as real. Civilian casualties were not desired but were considered an acceptable risk, especially considering the timetable. Every effort was taken to ensure accurate targeting. This case is hypothetical, but the precedent is real, and it does not take a background in ethics to see that such a policy is fraught with problems. “The concept of discrimination remains in the first instance a moral term defining the choice made in a particular instance to use a given weapon or not, or to use it in one way as opposed to another.” (Johnson, 115) In this hypothetical case, the weapon did what it was designed to do–destroy its target–but the choice to use it was problematic, as the evidence simply was not clear and collateral damage resulted. RPA create a unique situation where surveillance and attack roles are combined, and can be used simultaneously, increasing the risk of indiscriminate usage. Unlike weapons such as the cruise missile or even the traditional piloted fighter/bomber aircraft, RPA are capable of surveying the scene and acting instantly. The long flight time means an RPA can be on station waiting to make a kill, as opposed to briefly entering the airspace to deliver a strike based on third-party intelligence. This creates a system that can be used to provide its own evidence and act on it. When used in a targeted killing role, it becomes judge, jury, and executioner in one package. All the while, this system’s operators face no risk to their physical welfare. This adds up to a fundamental shift in risk-assessment calculations and can result in targeted strikes being entertained as options in instances where they would not have been before.
Additionally, the previous example seems to imply that when an RPA enters airspace, it brings a status of war with it. Establishing combat zones is especially problematic given the nature of terrorism. (Brunstetter and Braun, 345) The use of lethal force is traditionally accepted in a war zone and traditionally forbidden in a zone of peace. Yet Country A and B were not officially at war. The airspace that Country A’s drone entered was not hostile in any way; even if it had been, the lack of risk to the actual RPA crew brings into question the very nature of “hostile.” So in this case, it would seem that the mere presence of a terror suspect created a situation where the use of lethal force was justified. Such a blurring of the lines with regard to combat zones presents huge ethical challenges. Without clear demarcation, the risk is that increasing amounts of territory will be considered fair game for strikes in what fast becomes a war in practice but not in definition, potentially devoid of the legal norms that help constrain defined war and protect those caught in it.
Indeed, one of the key principles of the just war tradition is to “manage the use of force so that it does not come to represent itself a greater evil than the evil it is used to correct.” (Johnson, 32) This is defined as the concept of proportionality and is especially salient in modern asymmetrical conflicts. In general, “the concept of proportionality permits military personnel to kill innocent civilians – provided that the intended targets of the operation are enemy forces, not civilians.” (Cohen, 3) However, a system that enables a quick kill at a moment’s notice anywhere in the world, at any time, and with minimal risk places a burden on this sentiment. The RPA creates a situation where a single aircraft can monitor targets for hours or days at a time, waiting for the moment to strike, with no physical risk to the aircraft’s operators. This inherent lack of risk creates a problem in the cost/benefit analysis of a lethal operation. With traditional airstrikes, there is at least a possibility that a pilot could be shot down or otherwise lost. The cruise missile solved this problem partially, but still requires pre-launch intelligence and a fair degree of certainty that the target is valid. With both of these systems, mission commanders need to be sure of target validity and potential dangers before sending in the strike. RPA do not require such certainty; in fact, they are often what provides it in the first place. “The risk becomes that military leaders will bypass nonlethal alternatives, such as apprehending alleged terrorists and continued surveillance, and move straight to extrajudicial killing as the standard way of dealing with the perceived threat of terrorism.” (Brunstetter and Braun, 346) When the cost of such extrajudicial killing is decreased, the likelihood of its use increases. When the platform engaged in surveillance can also engage in combat, the barriers to doing so are broken down; such drastic minimizing of the steps between surveillance and strike offers up seductive advantages. To put it bluntly, the current implementation of RPA makes killing the easiest option, which is entirely counter-productive to every notion of just war tradition.
This is not the first time that aerial tactics have been embroiled in controversy with regard to proportionality and just war. “During the 1990s the just war tradition was mired in debate over the use of aerial campaigns to stop the ethnic cleansing in the Balkans. One of the focal points of these debates was the degree of risk that allied pilots needed to accept to avoid civilian casualties.” (Brunstetter and Braun, 350) The principle of discrimination demands that “every effort” be made to reduce the risk of civilian casualties. Yet protecting civilian lives meant risking allied pilots’ lives; sorties had to be flown at lower altitudes to ensure accuracy, exposing pilots to Surface-to-Air Missile and other anti-aircraft fire. With the RPA, such concerns can be avoided. The remotely piloted aircraft can fly as low as it needs to successfully verify its target with no increase in risk to pilots, only to the airframe itself. Yet, as we shall see, using an airborne platform to perform target verification is never ideal. Despite the immense technological promise of RPA, it is still a case of “garbage in, garbage out”; that is to say, bad intelligence and lack of situational awareness will still translate into inaccurate targeting and collateral damage.
Technical and Strategic Challenges for RPA Usage
RPA are not just a philosophical challenge to just war ideas; they present concrete and immediate challenges to militaries around the world as they are increasingly incorporated into fighting units without proper preparation. It is becoming painfully obvious that the fast pace of technology is not being matched with sufficient speed on the human side. Michael Donley, former U.S. Air Force Secretary, conceded that the amount of surveillance video captured by U.S. drones is “unsustainable,” and that it will be “years” before the Air Force has the means to sift through all the information. (Wired) Such footage cannot simply be deleted, as it is often needed for future missions. Moreover, if any amount of oversight were to be added to the targeted killing process, such video would be invaluable. The Defense Advanced Research Projects Agency (DARPA), the U.S. military’s futuristic research division, is actively trying to solve this problem by getting computers involved in the form of “algorithmic sifters.” (Wired) Such sifters would enable computers to process and serve up more relevant data to human analysts, hopefully reducing the workload and leading to more successful intelligence gathering by weighting the data according to its usefulness. Footage required for immediate use would be pushed to humans as it comes in, while less relevant footage could be backlogged and catalogued for later use. It sounds excellent in theory, but let us draw back for a moment. Outsourcing data analysis to computers will undoubtedly raise a plethora of legal and ethical headaches: Who is to blame if the computer misidentifies something? How can we even be sure that the algorithmic analysis is actually finding all the relevant footage? Such automation may be convenient, but how many shortcuts should be afforded when the footage is used to make life-and-death decisions?
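To make the triage concept concrete, the sketch below shows one minimal way such a sifter could be structured, written here in Python. It is purely illustrative: the tags, weights, and threshold are invented assumptions rather than details of DARPA’s actual system, and a real sifter would score raw video with computer vision rather than hand-assigned labels.

# A purely hypothetical sketch of an "algorithmic sifter": score incoming
# footage clips for relevance, push high-priority clips to human analysts
# immediately, and backlog the rest for later cataloguing. The tags,
# weights, and threshold are invented for illustration; not DARPA's design.
import heapq

# Invented relevance weights for analyst-defined tags (assumption).
WEIGHTS = {"active_mission": 1.0, "movement_detected": 0.6, "archival": 0.1}

def relevance_score(tags):
    """Toy scoring rule: sum the weights of whatever tags a clip carries."""
    return sum(WEIGHTS.get(tag, 0.0) for tag in tags)

def triage(clips, threshold=0.5):
    """Split clips into an immediate analyst queue and a backlog.

    heapq keeps a min-heap, so scores are negated to pop highest first.
    """
    immediate, backlog = [], []
    for clip_id, tags in clips:
        score = relevance_score(tags)
        entry = (-score, clip_id)
        heapq.heappush(immediate if score >= threshold else backlog, entry)
    return immediate, backlog

if __name__ == "__main__":
    feed = [
        ("clip-001", ["active_mission", "movement_detected"]),
        ("clip-002", ["archival"]),
        ("clip-003", ["movement_detected"]),
    ]
    immediate, backlog = triage(feed)
    while immediate:  # highest-relevance footage reaches analysts first
        neg_score, clip_id = heapq.heappop(immediate)
        print(f"push to analyst: {clip_id} (score={-neg_score:.1f})")
    print(f"backlogged for later cataloguing: {len(backlog)} clip(s)")

Even in this toy form, the ethical questions above are visible: everything depends on who chooses the weights and the threshold, and any clip scored just below the cutoff may never reach a human analyst.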
The potential destabilizing effects are not limited to those on the lethal side of the weapon. For all the physical protection these new systems offer their pilots, their psychological effects are raising concern. Recent findings have shown that RPA crews experience abnormally high rates of Post Traumatic Stress Disorder (PTSD). In a 2011 Air Force survey, “46% of active-duty drone pilots reported high levels of stress, and 29% reported emotional exhaustion or burnout.” (LA Times) This seems counter-intuitive; after all, the RPA is supposed to insulate its operators from harm. The problem arises from the unique and starkly modern experience of remotely piloted combat. The three-person crews usually work 12-hour shifts constantly scanning the territory for threats, only to drive home at the end of their shift to their families. “Only rarely do drone crews fire on the enemy. The rest of the time, they sit and watch. For hours on end. Day after day.” (LA Times) When there is a target to destroy, the experience becomes immensely surreal; the target is laser-designated, the missile is released, and around six seconds later the target is enveloped in a soundless white flash. The crew must then continue scanning the aftermath to complete their after-action report while the once bright white heat signatures of human bodies fade to match the ground they lie on. Crews often develop a strong emotional connection to the ground troops they are providing overwatch for; they communicate via text messages and radio, in effect telecommuting to the war zone. “Crews sometimes see ground troops take casualties or come under attack. They zoom in on enemy dead to confirm casualties.” (LA Times) Physically, operators may be thousands of miles from combat, but psychologically they are right in the action, and that disconnect creates a feeling of helplessness. The relatively high rates of PTSD in drone operators have been a driving factor in the broader reconsideration of what PTSD actually is, and what really causes it. PTSD was once considered a manifestation of mortal terror, yet now the understanding is shifting and PTSD is being recognized as “moral injury.” More than violence, PTSD arises from a person’s feelings about what they have done to others and what they failed to do for them. “The mechanisms of death may change—as intimate as a bayonet or as removed as a Hellfire—but the bloody facts, and their weight on the human conscience, remain the same.” (GQ) The physical separation from death that drones enable has not reduced the moral weight inherent in killing.
Even this early in the history of RPA deployment, it has become apparent that the human element is in many ways fundamentally incompatible with the machine. Many of the duties of a pilot and sensor operator are really just image processing: scanning a foreign environment looking for threats and targets, tasks perfectly suited to algorithmic automation, yet tiring and stressful for human minds. Advances in computer vision are continuing apace, and as PTSD in drone crews continues to be better understood, the path forward becomes chillingly obvious: add more abstractions to shield the human conscience, or remove it entirely. In addition to algorithmic image processing, researchers have proposed a voice-command interface for interacting with the system, anthropomorphizing the drone and allowing its human overseers to shake off some of the blame. (GQ) A normal human, soldier or civilian, can only be asked to deal out so much death before a tipping point is reached; machines have no such point. To continue using RPA in their current manner, the only logical option becomes adding more automation, more abstraction.
In addition, while many assume that RPA operators are bathed in constant, real-time information and state-of-the-art communications, the reality is much different. The April 6, 2011, fratricide case that claimed the lives of a Marine staff sergeant and a Navy hospitalman in the Sangin River Valley serves as a brutal example of the simple infrastructure problems that plague U.S. drone operations. It is a concrete example of precision gone wrong, and of the need for solid intelligence before engaging in a strike. What follows is a condensed version of a Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) Journal piece describing the incident.
On the morning of April 6, 2011, ground troops were on a routine patrol in the Sangin River Valley of Afghanistan, an area known for IED and small-arms attacks. Part of the team stayed with their vehicles while another part struck out on foot. The dismounted squad leader ordered part of his force to sweep the road and the western shoulder, while another element was deployed farther west to provide overwatch for the vehicles. Almost immediately, the overwatch team encountered enemy forces and pursued; they ended up around 200-250 meters west of the road. The mission had called for securing the route out to only 100 meters, but the overwatch element had strayed well beyond that. At this point, the ground commander had no idea where exactly they were; he believed he had only one dismounted force adjacent to the route when he actually had two. He had to rely on “communicated position reports” to know where his other elements were.
Meanwhile, at the Predator ground station at March Air Reserve Base, California, the Predator crew and offsite video analysts were trying to wrap their heads around the situation. The analysts at the Air Force’s Distributed Ground System (DGS) fusion site in Terre Haute, Indiana could only communicate with the Predator crew via instant messages; they could not hear or speak to them. At 08:40 local time, the Predator crew spotted three individuals, and shortly after saw muzzle flash emanating from the group. The direction of the muzzle fire had to be ascertained: if it was east, it was likely insurgents firing on the U.S. forces; if it was west, it was likely a friendly element firing on the enemy. Ultimately, the crew decided that the fire was aimed at friendly positions; a few minutes earlier ground forces had reported contact to the Predator crew, so it seemed logical. The Predator’s ARC-120 radio received transmissions from the ground and fed them into the drone’s satellite link to the crew in California. At 08:41, the joint terminal attack controller (JTAC) on the ground sent the “9-line” message containing coordinates for a strike, pending the ground commander’s approval. Meanwhile, analysts at DGS Indiana (DGS-IN) were also watching the feed. At 08:40, they sent an instant message to the Predator crew: “3 friendlies in FOV,” then 40 seconds later, “pers are shooting W,” only to then send, “disregard,” “not friendlies,” “unable to discern who pers are.”
The rationale for this change was not communicated; the only communication method was instant message. DGS-IN kept up the chat and at 08:45 voiced its concern about the fire direction in a private “whisper” chat that was only seen by the Predator mission intelligence officer and his team. The Predator pilot–who was in charge of releasing the missile–never heard about this. The intelligence coordinators scurried to assess the conflicting information. They wanted to use the station’s FalconView mapping software, but the pilot needed it. Instead the crew attempted to verify the direction of the muzzle fire by using Google Earth, but were unable to get the desired view before the missile was launched. As the Hellfire missile was in the air en route to target, the ground team realized they had made a targeting mistake. Moments after impact, the Predator pilot saw reports of friendly casualties in the online chat.
This event is a tragic example of the problems inherent in coordinating a truly global war fighting effort. In this situation, the lack of communication between each element involved–analysts at DGS-IN, the ground elements, and the Predator crew itself–resulted in friendly casualties. The investigation into this incident has prompted U.S. forces to seriously consider installing a two-way voice capability at all DGS locations. However, U.S. officials are cautious about allowing too many elements to voice their opinion. As Air Force Major General Robert Otto states, “while we always want somebody to have the ability to speak up when they fear a rule-of-engagement violation, or wait a minute, there’s women and children, you have to balance fighting a war by committee with a ground-force commander who is presumed to have situational awareness and has the authority to say, we need to strike this target.” (C4ISR Journal) The main message from the investigators and Major General Otto is that once the ground commander loses track of his troops, there is really no way to prevent such situations from happening.
The Sangin Valley case shows that, without multiple perspectives, air-based strikes have a very real chance of hitting the wrong target, even if they hit it with perfect precision. Due to the very narrow field of view of the Predator RPA camera, the crew relies on mapping software to put its captured video in context; the video itself reveals very little. Despite these well-known technical limitations, many suspects on the Obama administration’s “kill list” are vetted primarily from RPA camera footage. A limited, top-down perspective simply does not give assets the situational awareness they need to make properly informed decisions. Interestingly, the Obama administration appears to understand these limitations better than its broad behavior suggests. Particularly telling is that when the suspect really mattered, Obama chose not an aerial strike, but a more classic and time-tested method of attack: the U.S. Navy SEALs.
When the Obama administration first learned that Osama bin Laden might be hiding in a compound in Abbottabad, Pakistan, early bombing options were laid out on the table. Ultimately, Obama chose to launch a raid because it fit the mission best; he needed bin Laden’s body as evidence of his death. (Time) Even if the compound had been bombed, allied personnel would have had to gather evidence from the compound post-strike to prove it was truly bin Laden they had killed. In addition, the Obama administration was concerned about collateral damage from an airborne strike. “Abbottabad is a heavily populated city, with nearly 1 million residents. Moreover, numerous civilian residences and the Pakistani military academy were near bin Laden's [compound].” (Foreign Policy) However, weighing even more heavily was the fact that U.S. officials were simply not sure if bin Laden was really in the compound; reports suggested that there was a 20-40 percent chance he wasn’t in the building. (Foreign Policy) This decision, and the considerations made before engaging in the mission, point to an interesting conclusion: the U.S. is willing to use RPA-based surveillance and missile strikes on most targets, unless the target is very important. The success of this operation has, in a way, undermined the promise of RPA. It would seem the old adage of “if you want something done right, do it yourself” still very much applies and takes precedence in the modern remote-controlled war on terror.
Continuing this trend, recent U.S. counter-terror efforts have manifested in Special Forces raids conducted in quick succession against al Qaeda operatives in Somalia and Libya. While the operation in Somalia was reportedly ended by a heavy firefight that saw U.S. Navy SEALs retreat empty-handed, the joint CIA, FBI, and U.S. Special Operations forces mission in Libya was successful and resulted in the capture of an al Qaeda leader who goes by the alias Abu Anas al Liby. (LA Times) It would seem the Obama administration is at least experimenting with a new counter-terrorism strategy. The administration has not formally shifted importance away from CIA drone strikes, but there has been recent talk of limiting the number of strikes in Pakistan and Yemen, with a renewed emphasis placed on capturing suspects rather than killing them. (LA Times) Of course, drone strikes are still being carried out. As in the Osama bin Laden raid, the Obama administration understands there are times when a drone strike is not the proper option. Yet constant deployment of special operations forces is not sustainable and carries a high risk of blowback, both domestic and foreign. It remains to be seen what balance, if any, will be struck between capturing and killing suspects. However, for the moment any such non-lethal operations exist only as fractional support to the overall counter-terrorist strategy. While collateral damage from drone operations is increasingly well documented, there remains no better or cheaper way to enact military counter-terror ambitions. Ultimately the drone program works, according to those who have the power to stop it, so it will continue.
More broadly, the reliance on drone strikes by a liberal democracy such as the United States raises an interesting question: what does it say about a country’s ethics that it places such a high value on killing without risk? In a perverse way, the drone can be seen as the ultimate weapon of democracy. It allows political leaders to pursue whatever military strikes they deem necessary, at no cost to their domestic populace; with no threat to domestic lives, the potential for public outcry is greatly reduced. This allows leaders to expand operations without fear of losing military personnel. Let it be clear, though: RPA themselves are simply tools; it is their usage that defines their ethical status.
Implications for Foreign Policy
The stability of the post-WWII international system hinges largely on the conceptions of just war and ethical practices that have been discussed. As RPA pose a threat to the broader concept of just war, they also pose real challenges to international relations. The earlier hypothetical example of Country A and Country B is not some mythical construct; it is based on the actual behavior of RPA-wielding countries, notably the United States. Over the past decade, RPA have become something of a wonder-weapon for Western political leaders. As an anonymous security expert told the BBC, "When a drone goes missing, no mothers weep." What’s more, the remote-operated war on terror–at face value–seems to be working. On June 5, 2012, the Obama administration announced the successful dispatch of senior al-Qaeda leader Abu Yahya al-Libi via a drone-launched missile strike. Yet this accomplishment came with an immediate price: 15 additional people were killed in the strike. It would be a gross over-simplification to call these deaths “collateral damage,” yet that is what they are considered by U.S. officials. For domestic U.S. opinion, drones have been a boon to the war on terror, allowing the quick dispatch of suspects without risking American lives. Yet these remotely piloted machines often result in the deaths of more individuals than intended. In combating global terrorism, U.S. political leaders are so far choosing to minimize the domestic cost while exacerbating the cost for those in the unofficial battle space.
Beyond casualties, U.S. RPA-based targeted killing programs are directly damaging international law as well. “Prior to 9/11, the U.S. government was opposed to targeted killings because they were seen as violations of international law, but this policy has since been modified to permit certain forms of extrajudicial killings.” (Brunstetter and Braun, 343) Under the Obama administration, the United States obviously views the drone program as worth pursuing, and indeed expanding. Foreign Policy compiled a list of all active U.S. RPA bases throughout the world. The sheer scope of the program is staggering. Even more troubling is the number of bases that are under CIA control or have CIA-operated RPA programs. Unlike the U.S. military, the CIA is much more secretive about its drone operations, making solid information about the number of sorties flown, and the resulting casualties, difficult to come by. The following paragraphs detail the locations and primary missions of the U.S. airbases that house RPA, sourced from Foreign Policy’s collection of information and satellite images of said bases.
In Europe, the Incirlik airbase in Turkey serves as a base for four Predator drones. This base is primarily concerned with policing PKK (Kurdistan Workers Party) militants but also has ties to the mission in Iraq.

In South Asia, the Jalalabad Airfield in Afghanistan is used as a launch point for the U.S. Air Force’s and the CIA’s Predator and Reaper drones. It is reported that the CIA conducts most of its Predator sorties into the Afghanistan-Pakistan border region from this airfield. The Jalalabad Airfield has seen enhanced use since the Pakistani government evicted U.S. drones and personnel from the Shamsi Airbase. Also in Afghanistan is the Khost Airfield, which is under the operational control of the CIA. It was the site of a suicide bombing that killed seven American citizens on December 30, 2009; the CIA responded with 11 drone strikes that killed nearly 100 suspected militants. Additionally in Afghanistan, the Kandahar Airfield serves as a major base for surveillance operations, and for strike operations in Afghanistan and Pakistan if the need arises. It is under the control of the U.S. military and is home to the RQ-170 Sentinel surveillance RPA in addition to other drones. The Shindand Airfield in Afghanistan is also home to the RQ-170, and launched the now infamous RQ-170 that went down in Iran. Before this incident, the CIA had reportedly flown RQ-170 surveillance missions over Iran for three years; the flights continue to this day.

Moving out of Afghanistan, the Al-Udeid Airbase in Qatar serves as a major transshipment site for U.S. assets headed to Afghanistan, and also hosts the Combined Air and Space Operations Center, the command and control center for U.S. air operations throughout the Middle East. Lawyers are stationed at Al-Udeid 24 hours a day with the job of approving drone strikes carried out by the U.S. military; their specific process is not public knowledge. In the Philippines, the Zamboanga Airbase serves as a base for operations against the group Abu Sayyaf, which has ties to al Qaeda; the Philippine government allows the U.S. to fly surveillance missions against said group. In the United Arab Emirates, the Al-Dhafra Airbase serves as a major hub for manned and unmanned intelligence, surveillance, and reconnaissance (ISR). The base hosts over 1,300 U.S. Air Force personnel as well as RQ-4 Global Hawk drones, U-2 manned spy planes and, more recently, F-22 manned fighter aircraft. The RQ-4s "fly daily [signals intelligence] and imagery collection missions along Iran's borders with Iraq and Afghanistan and along Iran's Persian Gulf coastline." In Yemen, the Al-Anad Airbase serves as a launching point for joint operations between U.S. and Yemeni forces targeting al-Qaeda in the Arabian Peninsula (AQAP). The U.S. drones provide surveillance information to Yemeni forces that carry out assaults and air strikes. The U.S. also launches airstrikes of its own, having conducted 21 airstrikes in the first five months of 2012, double the number carried out during the entire previous year.

Africa also hosts U.S. forces, at the Arba Minch airfield in Ethiopia. The U.S. was temporarily expelled from the base after a manned AC-130 gunship launched from the base engaged Islamic militants in Somalia. In October 2011, the U.S. military was allowed to use the base once more, and now flies Reaper drones from the airstrip. Also in Africa, Camp Lemonier in Djibouti was used to stage the November 3, 2002, Predator strike that killed the al Qaeda operative al-Harithi and six others in Yemen, the first targeted killing outside of a battlefield.
The CIA continues to fly drones from this base over Somalia. In the small island nation of the Seychelles, the U.S. military operates a small fleet of MQ-9 Reaper drones. Although the drones are officially used to track pirates in the Indian Ocean, WikiLeaks cables revealed that the base has also been used to carry out missions against al Qaeda affiliates in Somalia. Most recently, reports have emerged detailing a secret air base in the Saudi Arabian desert. This base serves as a portal into Yemen and has the dubious honor of being the facility where the successful Predator strike on al Qaeda leader Anwar al-Awlaki originated. (Wired)
By now it should be clear just how far the U.S. drone program extends. RPA operations span 13 major bases in almost as many countries. Operations range from targeting suspected militants in active combat zones such as Iraq and Afghanistan, to more secretive excursions into Somalia and Yemen, along with the infamous campaign in Pakistan. It is worth noting, though, that not all RPA are employed in offensive roles; the RQ-170 and RQ-4 aircraft are primarily designed for intelligence, surveillance, and reconnaissance (ISR) roles. According to security analyst P.W. Singer of the Brookings Institution, the U.S. military has over 7,000 RPA deployed, and that number is growing. (Scientific American) Of some consolation is evidence that the U.S. takes the ethics of its program seriously; stationing lawyers at the Al-Udeid Airbase in Qatar allows day-to-day U.S. drone operations throughout the entire Middle East to be scrutinized. Yet as The Atlantic observes, the perceived validity of drone attacks often dodges meaningful moral analysis because of the nature of the mission. Many policy-makers and even some journalists believe the question America faces with regard to drones is simply ‘are we justified in killing terrorists?’ The answer to that question is almost always a resounding ‘yes, of course, they’re terrorists!’ With this logic, targeted killing with drones is fully justified, and the actual question of the legitimacy of drone strikes is obfuscated by counter-terrorist motives.
The major justification for the U.S. program, so far, has been worryingly black-and-white. First, al Qaeda is at war with the U.S., which makes any participant in the organization an enemy combatant. Second, anyone directly involved in terrorist plots against Americans poses an “imminent danger” to U.S. security. (LA Times) The waters get even murkier when an American citizen is the source of the threat. Many in the intelligence community do not believe U.S.-born terror suspects should get special quarter; they equate it to an American joining the German army in WWII. (LA Times) Yet this rationale seems poised to slide down a slippery slope toward injustice; after all, every American citizen is entitled to due process, and denying that right is a serious violation of the citizen’s constitutional protections. This is not simply a thought exercise; the U.S. has actually killed a citizen with a drone strike. In September 2011, Anwar al-Awlaki, the infamous al Qaeda leader considered second only to bin Laden, was killed in a Predator missile strike. Al-Awlaki was born in New Mexico and was a U.S. citizen. (Wired) Constitutional rights were not designed to have exceptions, yet it seems one was made for al-Awlaki. The Predator aircraft involved in this mission was flown from the secret Saudi Arabian air base; he was killed in Yemen. Whatever his crimes, the fact remains that he was extrajudicially executed without due process. When the target is labeled a terrorist, rights no longer apply; the logic is tragic in its simplicity.
To offer some comfort, the White House has assured the public that when a potential target is a U.S. citizen, President Obama himself makes the final call. (LA Times) This places a dangerous amount of power in the hands of the executive branch. Even if one is willing to accept that President Obama is following ethical guidelines, such power, once manifested in a branch of government, is not relinquished easily. It is uncertain how long the ethical standards–whatever they might actually be–will hold up once the specific individual who oversees and orchestrates the program is out of office. To be fair, there is congressional oversight, but it comes after the fact, “and is divided between Congress’ intelligence committees, which review CIA operations, and its armed forces committees, which review military operations.” (LA Times) The major problem with the entire program is that its rules are being created on the fly with minimal public scrutiny. (LA Times) The Obama administration has repeatedly claimed it is within the law to kill suspected terrorists–even U.S. citizens–but has never proffered a concrete legal defense. The administration is either unwilling to divulge legal specifics, or there simply are none to divulge.
The Obama administration has taken some steps to demystify the process by which potential suspects are vetted, but the specifics remain vague. John Brennan, Obama’s former top counter-terrorism advisor and now CIA Director, has attempted to placate concerns. In an April 30, 2012, speech, he stated that an individual must be deemed by U.S. intelligence as actively involved in a plot to attack American forces or facilities before that individual can be considered for a strike. He further added that “extraordinary care and thoughtfulness” is put into each case before a strike is approved. He was not any more specific. (LA Times) This reassuring but ultimately hollow rhetoric, coupled with the incredible lack of information available about the program, underscores a major problem with the Obama administration’s targeted killing strategy. As retired Air Force General Michael V. Hayden states, “this program rests on the personal legitimacy of the President, and that’s dangerous.” (LA Times) The targeted killing program is trumpeted as “ethical” because the individual person currently in the office of President can be claimed as “ethical.” Yet such a claim proves nothing and raises troubling questions about the future of the program’s ethics. Even if we can be reasonably sure one particular President is acting in an ethical way, the office of the President is designed to see many individuals move through it. It would be unrealistic to assume this program can remain free of ethical breaches without formal codified oversight.
Of all the locations where U.S. drones are flown, Pakistan in particular has become a volatile battleground, both literally and metaphorically. As operations wind down in Afghanistan, a great deal of the action in the war on al-Qaeda has shifted over the border to Pakistan. This has created a delicate situation for U.S. foreign policy in the region. Pakistan is a sovereign nation that is not at war with the U.S., yet almost daily drone sorties are straining the Pakistani government’s relationship with the U.S. and with its own people. A 2009 Brookings Institution report estimated that U.S. drone strikes killed an average of 10 civilians for every militant. (Scientific American) Amnesty International has released a comprehensive report detailing a few of the most jarring cases, including a drone strike that killed a 68-year-old woman in October 2012, an event the U.S. has not formally acknowledged. While the report is not intended to cover all U.S. drone activity, it does contain some sobering statistics. “According to NGO and Pakistan government sources the USA has launched some 330 to 374 drone strikes in Pakistan between 2004 and September 2013.” (Amnesty International) These sources place the civilian cost of these strikes in the range of 400 to 900 killed and at least 600 seriously injured, and these statistics only cover Pakistan. While the U.S. claims the vast majority of individuals killed in drone strikes are members of armed groups such as the Taliban and al-Qaeda, it continues to withhold detailed information about each strike. Even if such information were to be released, it is difficult to imagine how it could sufficiently justify such high casualty rates for what is supposed to be a discriminate and targeted program. Pakistan has repeatedly called for the cessation–or at least the reduction–of U.S. drone attacks on its soil. On June 5, 2012, Pakistan lodged a formal protest over the strikes. This protest came after three strikes in three days, one of which killed at least 16 people. (The Nation) In addition to formal complaints, “Drone attacks have fuelled anti-American feeling in Pakistan, whose relations with the U.S. has unraveled after the U.S. mission that killed Osama Bin Laden in 2011.” (BBC News) As long as drone strikes kill militants and are accepted in the U.S., the U.S. stance is unlikely to change, regardless of the documented civilian toll in the strike region.
This comes as the United States’ operations in North Africa are escalating in intensity. Along with bases in Yemen and Ethiopia, Niger has also recently approved U.S. drones to be stationed in its territory, with the goal of forming a stronger security relationship with Washington to combat Islamist movements in the region. (Reuters) Far from reducing the overall number of threats, the U.S. targeted killing program has simply caused the epicenter of potential terror cells to move to new ground, and now the program has to follow. At this point, there is little hope that the targeted killing program can ever end.
And why should RPA strikes stop? Despite collateral damage, the program has eliminated high-profile targets and weakened al Qaeda’s overall operating potential. Moreover, the program is not attracting scrutiny from the domestic populace; surveys show that a majority of American citizens–62 percent–approve of the campaigns. (The Globe and Mail) Again, it is not difficult to see why the drone-based campaign is popular with the Obama administration. Unlike the controversial Balkan intervention, there is no risk of putting pilots in danger. Logically, it follows that with no risk to pilots, every effort should be made to ensure minimal casualties. “Without a pilot fearing for her life, drones should be able to take more extreme measures to avoid civilian casualties.” (Brunstetter and Braun, 350) Yet the reality of RPA usage does little to support this idea. Any possibility of reducing collateral damage through enhanced discrimination is broken down by basic probability: with the ability to fly missions at less risk, the choice has been to fly ever more missions, and even if each individual strike harms fewer civilians, multiplying the number of strikes can still raise the total toll–halving the expected casualties per strike while quadrupling the number of strikes doubles the expected harm. The RPA itself is a precision tool capable of some degree of discrimination due to its potential for low-altitude flight, long flight duration, and all-important risk-absorbing nature, but current policy sees it playing a brute force role; equal parts judge and assassin on a continuous mission to find more targets.
Looking to the future, the CIA’s and the Obama administration’s liberal use of lethal drone strikes will likely accelerate a global arms race and will essentially give the green light to using remotely piloted vehicles to violate other nations’ sovereignty. Of course, the problems will not remain limited to the nation-state level. Like the computers that power them and the networks that connect them, drones are getting more capable and more affordable at a staggering rate. “Singer, the Brookings analyst, estimated that at least 43 nations as well as groups such as Hezbollah have deployed or are developing drones and other robotic weapons.” (Scientific American) The skies will soon be buzzing with all manner of remotely piloted–and even autonomous–drone aircraft. Weaponize even a fraction of them and the air starts becoming very hostile.
Due to its massive defense spending and commitment to military innovation, the United States is once again on the cutting edge of this particular destructive technology. In an eerie parallel to the atomic bomb, the U.S. is the first and most visible user of modern RPA in their newfound targeted killing role. And much like nuclear weapons, the United States will not have a monopoly on this technology for long. It is no longer science fiction to imagine a world where warfare is outsourced to externally operated machines, and countries battle inside each other’s airspace through RPA proxies. Replace the previously mentioned “Country A” and “Country B” with any number of nations around the world and the potential for international chaos begins to reveal itself. Take the United States, for example: incursions into its airspace by foreign unmanned aircraft would not be tolerated, and would more than likely be met with fierce retaliation. Yet the United States has set a precedent that such aircraft can be used to kill individuals on another country’s soil without said country’s permission or knowledge. It appears at this point that the U.S. believes it operates in a vacuum and that none will ever follow in its footsteps, or else U.S. policy-makers have been won over by the seductive promise of remote-controlled warfare and are blinded to all else.
With no regard for long-term risk, the U.S. continues to place its trust in drones. This is not simply frightening from an ethical standpoint, but from a purely practical one. The RPA may be America’s wonder weapon, but it is not free from error. The technological and strategic challenges of RPA usage are beginning to pile up, and it is not feasible to believe that simply adding autonomy and complexity to drones will solve these issues. Tom Engelhardt, writing in Al Jazeera, highlights the potential dangers of outsourcing combat to machines: “We are moving towards an ever greater outsourcing of war to things that cannot protest, cannot vote with their feet (or wings), and for whom there is no ‘home front’ or even a home at all.” (Al Jazeera) It is far easier to wage a covert war with only a handful of pilots in the loop. Moreover, with no risk to pilots’ lives, domestic outrage toward overbearing military intervention struggles to find a solid audience. Additionally, future systems will gain complexity and thus require increasing amounts of computer automation to operate effectively. With increased automation come worrying questions of safety and accountability, questions that should be asked before algorithms are given oversight of decisions affecting human life. Drones are a technology, but solving their problems is far more than a simple engineering challenge.
Lessons, Solutions and the Future
As this analysis concludes, it must be pointed out that the United States is being singled out because of the breadth and aggression of its RPA strikes, not because it is in any way inherently immoral or unethical. The United States is in a unique position in the world, as its massive defense spending translates directly into incredible innovation in weapons technology. As with the nuclear weapon before it, the drone is changing the future of global conflict not only through its immediate usage, but also through its potential to rewrite the rules of war. The RPA's combination of long flight time and zero operator risk gives it the appearance of a miracle weapon. At face value, the drone seems like the perfect tool in a global war on terror: it combines the Intelligence, Surveillance and Reconnaissance (ISR) element and the strike element into one tidy package that can be deployed across the globe, yet operated safely from within domestic borders. However, as this analysis has shown, the belief in the accuracy of RPA strikes is a fallacy, and one with deadly consequences. We are learning that a war from the air–although incredibly attractive for a number of reasons, both political and tactical–simply cannot square its cost with its benefit. The principle of proportionality that underpins modern just war theory has been fundamentally challenged by this new technology. It is too easy to make the worst mistakes with the best intentions when one wages a war from only one perspective, a perspective without risk.
A great deal of time was spent highlighting the risks and failures of RPA usage, but drones are not all doom and gloom. As surveillance tools, RPA have ushered in a revolution of cheap, high-resolution aerial imagery for both governments and the private sector. For the U.S. military's part, the RPA is an incredible tool that, with some changes to protocol and advancements in IT infrastructure, will provide a powerful and accurate force multiplier. For example, a letter to the editor published in C4ISR Journal proposed that the confusion that occurred in the Sangin Valley case could have been minimized if the drone crews had gone through a more comprehensive training regime. The letter highlights the unfortunate delay between acquiring new technologies and the creation of new techniques and protocols to accommodate them. The majority of the letter is reproduced here:
“The problem here is that from the beginning of their training, the Predator/Reaper guys do not attend training with the Distributed Common Ground System processing, exploitation and dissemination (PED) troops. The guys go into the ground control station and practice their mission with the pilot taking on the role of the aircraft commander and imagery analyst when he or she is talking to the simulated (or real) joint terminal attack controller. When the crews finish their training and go to the field, they are all of a sudden told there are exploitation people also watching and sending them information. The pilots, of course, couldn’t care less what the PED people think. The pilots see it as their mission. The sensor operator is there to do what the pilot tells him to.
The only way anything can be changed is if the schoolhouses where the PED troops go to school are incorporated into the flying schools, meaning March Air Reserve Base, Calif., for MQ-1 Predator crews, Holloman Air Force Base, N.M., for Predator and MQ-9 Reaper crews, and now Hancock Field Air National Guard Base, N.Y., for Reapers.
The flight crews and PED troops should learn from the beginning the essence of working as a team. The pilots have to be taught that they are not imagery analysts and therefore should not make the call on women and children. The PED people also have to be incorporated into the kill-chain so they can call an abort on a Hellfire missile attack or laser-guided bomb run-in when women and children are present.”
The specificity of this letter is encouraging. These changes are relatively simple to implement and would more than likely reduce the possibility of a fatal miscommunication like the one that occurred in the Sangin Valley case. Time will tell whether the U.S. Air Force takes heed of such advice, but the outlook is hopeful. Over the last decade, RPA have evolved from expensive toys to veritable workhorses. Such rapid growth in the deployment of a new class of technology will understandably bring growing pains. Unfortunately, the fielding of RPA technology has come at an immediate cost in civilian life and damaged U.S. prestige. As other nations begin fielding more drones, they would be prudent to learn from the United States' mistakes. For its part, the U.S. would be judicious to start a dialogue on acceptable-use practices, lest every nation in the world follow U.S. precedent.
Interestingly, another letter, this one in Armed Forces Journal, highlights the tangible real-world problems of a world full of drones. As Major Darin L. Gaub, the letter's author, puts it, "There are severe shortcomings in almost every aspect of our approach to enemy UAVs, from education to materiel." Major Gaub calls for a myriad of changes to remedy the United States' current state of woeful unpreparedness. He concludes,
“The American use of UAVs during the last decade opened the door to a future where their unique capabilities are sought after by multiple nation-states, terrorist organizations and terroristlike groups, such as the drug gangs along America’s southern border. It’s too late to shut the proverbial barn door; the horses are already running amok. Now that unmanned technology is on the global market and proliferating rapidly, America’s armed forces need to do a better job preparing for the use of UAVs by enemies.”
As was highlighted in the "Country A" and "Country B" example, a world where multiple nations possess and deploy RPA in an aggressive manner poses a huge problem for the principles of sovereignty and justice at the international and even individual level. It is not much of a leap of the imagination to picture a world where nations execute individuals within each other's borders in a perverse secret war fought through machine proxies but costing real lives.
It seems appropriate to draw on a classic example of outsourced warfare. Popular culture is often on the cutting edge of projecting future ethical problems, and few shows were more cutting edge than Star Trek. While often marred by cheap sets and corny action, Gene Roddenberry's original Star Trek is, at its heart, a show about big ideas and future problems. Long before the modern RPA was invented, the classic Star Trek episode "A Taste of Armageddon" gave us a glimpse of a future where automated war is normalized.
In this episode, the U.S.S. Enterprise finds itself caught in the middle of a conflict that has raged for 500 years between Eminiar VII and Vendikar. Only this conflict–as Kirk and Spock discover when they are taken prisoner–is waged entirely through computer simulation. The sick twist is that while the war is simulated, the casualties are real; when a virtual strike is made by one planet against the other, the people declared "dead" by the simulation willingly walk into antimatter chambers and are vaporized. When Kirk arrives on Eminiar, he is informed that his ship was "hit" in the virtual war game and that its crew must report to disintegration chambers, or else the agreement with Vendikar will be broken. The leader of Eminiar, Anan 7, naturally wants the agreement to stand; after all, a real war would result in the destruction of both planets and all their inhabitants instead of the mere millions that the virtual war claims. After Kirk manages to take over what seems to be the entire planet with only Spock and three other crew members, he destroys the computers waging the virtual war. In a classic Kirk speech he explains his reasoning for ending the long-fought virtual war: "Death, destruction, disease, horror. That's what war is all about […] That's what makes it a thing to be avoided. You've made it neat and painless. So neat and painless, you've had no reason to stop it. And you've had it for five hundred years." In the end, it is revealed that Kirk's plan was to present the possibility of real war to both sides and hope that fear of annihilation would motivate them to agree to peace. It works–as Kirk's plans often do–and the two planets reach an agreement.
While obviously a fictional scenario, the fears it presents are beginning to materialize. As nations around the world begin to outsource the very human practice of war to machines, we risk separating ourselves from its actual realities. The potential for this separation to cloud our judgment of what constitutes justice in war–jus in bello–is very real. The unique and dramatic change in proportionality that RPA technology allows risks warping perspectives of what actions are acceptable in conflict. Further, this changing calculation of risk could lead to a broader perversion of the ideas of justice of war–jus ad bellum–as the separation enabled by the technology continually removes barriers, both practical and political, to the deployment of violent force. Inevitably, drones will be fielded in more conflicts and flown on more covert missions, and it will be imperative to stay mindful of the potential consequences. When technology affects proportionality in so massive a way, it falls to the international community to rein itself in. The RPA has earned a position in the military arsenal due to its incredible force-multiplying potential, but like any new technology it is still struggling to find its limits. On the practical operational side, increasing situational awareness for all parts of the drone operational chain is a must; the lack of solid communication between ground commanders, offsite analysts, and the actual drone pilots is troubling. Advances in software-defined radio and further expansion of the U.S. military satellite communication (SATCOM) infrastructure will allow for some of the growth in bandwidth needed to support these enhanced communication channels, but such capabilities are still years from deployment.
On the global political stage, the international community would be prudent to work together to establish guidelines for the ethical usage of drones and help the RPA find its place in the just war framework. The United States' position of leadership in the global order cannot afford to be squandered. As the key architect of technology and strategy for autonomous and remotely piloted military systems, the U.S. must take a leadership role in discussions determining acceptable-use practices and ethical guidelines. This need not come from a purely altruistic stance; after all, it is in the United States' best interest to start discussions on RPA use now, before the technology it fostered is turned against it. Working together with the international community, U.S. leaders can find an acceptable "middle ground" ethical position for RPA usage that will protect not only civilians but nations as well. Achieving such a middle ground before a major conflict emerges will be invaluable; wading into future conflicts with incredible power but no codified limits inevitably risks atrocities.
The drone represents a challenging test for humanity and our international order. It will require great strength to resist the myriad appeals of remotely controlled warfare, to ensure that our newfound abilities do not obliterate all that has been achieved. Unlike past advances in destructive technology, the drone stands unique as the harbinger of our telepresent future, a future where individual consciousness is made to flit between where it is and where we endeavor to place it. A cruise missile is fired and forgotten in flight, a nuclear weapon delivers immense destruction in the flash of an instant, but a drone is made to survey, to be ever-present. Killing from the perspective of a drone means being mentally present before, during, and after the fatal act; controlling it demands that one adapt to its abilities, become as stoic and enduring as the machine. Technology and warfare are uniquely human enterprises, locked in an eternal embrace. Under the threat of modern terror groups, government leaders have crawled into technology and demanded its protection, without bothering to ask what it might demand in return. The drone insulates soldiers from physical harm yet has done nothing to protect their psyches. Even among operators tasked with watching and killing other humans from afar, empathy and morality still find ways to shine through, despite the computerized intermediaries. Such feelings represent inefficiency in an otherwise magnificently efficient enterprise, and the hallmark of technological advance is the elimination of inefficiencies, especially human ones. The myriad cogs in the war-fighting machine are being iterated down to one unflinching entity, always ready and unable even to conceive of protesting its commands. Now that killing has been brought firmly into the digital age, it will only become more efficient. In this way the drone is demanding the ultimate price: our humanity itself. And human leaders seem happy to oblige.
Yet even as the challenges of aerial drones are still being uncovered and addressed, technological innovation marches on. Programs like WildCat and BigDog, currently underway at Boston Dynamics with funding from the U.S. Department of Defense, aim to give soldiers four-legged robotic companions on the battlefield for use as pack mules, communications hubs, and more. In addition, numerous companies have demonstrated small, treaded ground robots capable of carrying loads for soldiers, and still others capable of actually employing various munitions. While these platforms are meant to work alongside soldiers, others in development have the long-term potential to replace them. Also from Boston Dynamics, Atlas is a bipedal humanoid robot designed to carry a sophisticated array of sensors and cameras over rough terrain and even to manipulate and operate human tools. While the current implementation remains relatively crude and requires a tether to an external power source, it clearly represents the direction robotics is heading.
Imagine if the benefits of remotely piloted aircraft could be brought to bear on land-based systems. The aforementioned U.S. SEAL raid in Somalia was called off when, upon leaving their boats, the SEALs found themselves under heavy fire. (LA Times) The Obama administration stated this was to protect civilian life, and it undoubtedly was, but the risk to U.S. personnel was surely also part of that assessment. Would the assessment have been different if the SEALs could not be killed? What if their actual selves were in a trailer in Nevada; would the raid still have been called off? These questions were once the sole purview of science fiction, yet they are fast becoming tangible issues. What began in the air will continue on the ground and usher in an entirely new generation of remotely operated military systems. The coming ability to deploy ground forces with no risk to their physical well-being will cause another dramatic change in risk assessment, and will place even greater strain on the principle of proportionality. Automating these systems will only further exacerbate the ethical quandaries. The mechanisms of future conflict are on a dangerous trajectory toward a point where an individual demanding death can have it delivered instantly and accurately, no middlemen required. Nations, criminals, maniacs, and ideologues alike will be empowered by digital technology's new promise of unifying desire and action into one seamless flow. One need not worry about our machines turning on their masters just yet, but one would be wise to worry about the masters themselves when they have tools fit to wage a war and no human voices left in the loop to protest their ambitions.
However, all is far from lost. Humanity has stood at the precipice of grand change before–the advent of the machine gun, chemical weapons, and the nuclear warhead in recent history–and has emerged on the other side with new insight, maybe even wisdom. The idea of just war is a living thing; it requires our constant involvement and reassessment to ensure that its principles of the ethical use of force are upheld. Drone technology is one of humanity's most powerful enablers. It has given us incredible capabilities in surveillance and lethal action while removing barriers of cost and risk like no technology before it, and its journey has only just begun. Ultimately, the line between having a capability and using it comes down to choice, to self-control. We were able to see beyond the immediate appeal of our past destructive inventions; now we must do the same moving forward.
Looking toward the future, the international community would be prudent to begin discussions on acceptable limits for RPA usage and legal frameworks for managing violations of just war traditions committed via drone. Policing such limits would prove difficult, but absent any framework, the risk is that drones will be used to violate international norms without recourse for those affected, aside from direct retaliation. Determining a “middle ground” approach to the usage of RPA–one that squares states’ desires with ethical requirements–is essential to the continued stability of the international system.
Sources Consulted
While not every source listed was directly cited, all were instrumental in shaping this analysis. They are sorted by section, and within each section they are sorted alphabetically. There is a great deal of very well-written work describing the caveats of RPA usage, ranging from discussions in the broadest ethical terms to the most precise technical terms. The shortcomings of the drone are well documented; hopefully policy-makers around the world will listen. Thank you for reading.
Ethical Challenges of RPA Use
Brunstetter, Daniel, and Megan Braun. "The Implications of Drones on the Just War Tradition." Ethics & International Affairs 25.3 (2011): 337-58. Carnegie Council for Ethics in International Affairs. Web.
Cohen, Amichai. "Proportionality in Modern Asymmetrical Wars." Jerusalem Center for Public Affairs (2010): n. pag. Web.
Doyle, II, Thomas E. "Reviving Nuclear Ethics: A Renewed Research Agenda for the Twenty-First Century." Ethics & International Affairs 24.3 (2010): 287-308. Carnegie Council for Ethics in International Affairs. Web.
Gelb, Leslie H., and Justine A. Rosenthal. "The Rise of Ethics in Foreign Policy: Reaching a Values Consensus." Foreign Affairs 82.3 (2003): 2-7. JSTOR. Web.
Haque, Adil A. "Proportionality (in War)." The International Encyclopedia of Ethics 1 (2012): n. pag. Print.
Johnson, James Turner. Can Modern War Be Just? New Haven: Yale UP, 1984. Print.
Nardin, Terry. "Middle-Ground Ethics: Can One Be Politically Realistic Without Being a Political Realist?" Carnegie Council. N.p., n.d. Web.
Rosenthal, Joel H. "What Constitutes an Ethical Approach to International Affairs? (Lecture #1)." Carnegie Council. N.p., n.d. Web.
Rosenthal, Joel H. "What the Sages Say (Lecture #2)." Carnegie Council. N.p., n.d. Web.
Implications for Foreign Policy
Ackerman, Spencer. "Qaida's YouTube Preacher Is Killed in Yemen." Wired.com. Conde Nast Digital, 28 Sept. 2011. Web. 14 Oct. 2013.
Bacevich, Andrew J. "War on Terror -- Round 3." Los Angeles Times. N.p., 19 Feb. 2012. Web.
Bennett, Brian, and David S. Cloud. "Obama's Counter-terrorism Advisor Defends Drone Strikes." Los Angeles Times. N.p., 30 Apr. 2012. Web.
Dozier, Kimberly. "U.S. Drone Attacks on Terror Suspects Unpopular around the World: Survey." The Globe and Mail. N.p., n.d. Web.
Engelhardt, Tom. "How Drone War Became the American Way of Life." Al Jazeera. N.p., 1 Mar. 2012. Web.
Friedersdorf, Conor. "Expanding CIA Drone Strikes Will Likely Mean More Dead Innocents." The Atlantic. N.p., n.d. Web.
Haddick, Robert. "This Week at War: The General's Dystopia." Foreign Policy. N.p., 20 Apr. 2012. Web.
"Holder's Troubling Death-by-drone Rules." Los Angeles Times. N.p., 7 Mar. 2012. Web.
Horgan, John. "Drone Assassinations Hurt the U.S. More Than They Help Us | Cross-Check, Scientific American Blog Network." Scientific American Blog Network. Scientific American, 3 Oct. 2011. Web.
Mardell, Mark. "Obama's Drone Policy Dilemma." BBC News. N.p., 5 June 2012. Web.
Masters, Jonathan. "Council on Foreign Relations." Council on Foreign Relations. N.p., 30 Apr. 2012. Web.
McManus, Doyle. "McManus: Who Reviews the U.S. 'kill List'?" Los Angeles Times. N.p., 5 Feb. 2012. Web.
Serrano, Richard A., and Andrew R. Grimm. "Eric Holder: U.S. Can Target Citizens Overseas in Terror Fight." Los Angeles Times. N.p., 5 Mar. 2012. Web.
Shachtman, Noah. "Is This the Secret U.S. Drone Base in Saudi Arabia?" Wired.com. Conde Nast Digital, 05 Feb. 2013. Web. 15 Oct. 2013.
Singer, P. W. "Wired for War: The Robotics Revolution and Conflict in the 21st Century." Carnegie Council. N.p., 5 Feb. 2009. Web.
"This Week at War: The General." Foreign Policy. N.p., n.d. Web.
Thompson, Mark. "Inside the Osama Bin Laden Strike: How America Got Its Man." Time. Time Magazine, 03 May 2011. Web.
"’Will I Be Next?’ US Drone Strikes in Pakistan." Amnesty International. Amnesty International Research, 22 Oct. 2013. Web. 23 Oct. 2013.
Zenko, Micah, and Emma Welch. "Where the Drones Are." Foreign Policy. N.p., 29 May 2012. Web.
Technical and Strategic Challenges for RPA Usage
Ackerman, Spencer. "Air Force Chief: It’ll Be ‘Years’ Before We Catch Up on Drone Data." Wired.com. Conde Nast Digital, 05 Apr. 2012. Web.
Dilanian, Ken, and David S. Cloud. "Striking Without CIA Drones." Los Angeles Times. N.p., 7 Oct. 2013. Web. 7 Oct. 2013.
Laster, Jill, and Ben Iannotta. "Cover Story: Learning from Fratricide." C4ISR Journal. Defense News, 1 Mar. 2012. Web.
McNeal, Gregory S. "The Bin Laden Aftermath: Why Obama Chose SEALs, Not Drones." Foreign Policy, 5 May 2011. Web.
Power, Matthew. "Confessions of a Drone Warrior." GQ. N.p., 23 Oct. 2013. Web. 24 Oct. 2013.
Zucchino, David. "Stress of Combat Reaches Drone Crews." Los Angeles Times. N.p., 18 Mar. 2012. Web.
Lessons, Solutions and the Future
Anthony, Sebastian. "ExtremeTech." ExtremeTech. N.p., 1 Mar. 2013. Web. 1 Oct. 2013.
Dorrier, Jason. "Boston Dynamics' Atlas Robot Walks Like a Human Over Field of Rubble." Singularity Hub. N.p., 7 Oct. 2013. Web. 8 Oct. 2013.
Gaub, Darin L. "Unready to Stop UAVs." Armed Forces Journal, Dec. 2011. Web.
Gaudin, Sharon. "U.S. Army Evaluates Self-Driving, Machine Gun-Toting Robots." Computerworld. N.p., 10 Oct. 2013. Web. 10 Oct. 2013.
Hanson, Steve. "Letter to the Editor: Training Changes Can Fix Drone Mistakes." C4ISR Journal. Defense News, 30 Jan. 2012. Web.