The Pentagon Wants Human Soldiers to Trust Their Robot Comrades - IEEE Spectrum


Summary: Trusting Robot Killers - Should We Have Laws for LAWS?

The essay explores the complex relationship between human soldiers and their robotic counterparts on the battlefield. It examines the range of emotional responses soldiers have had towards remote-controlled robots, from forming warm bonds to outright hostility when the machines malfunction.

The essay discusses the Pentagon's strong push to integrate autonomous and semi-autonomous systems into military operations, with officials like former deputy defense secretary Robert O. Work advocating for "human-machine collaboration" to gain technological superiority. However, this vision faces a key obstacle - the lack of trust between humans and machines.

The article delves into the psychological and social challenges of convincing soldiers to place their faith in autonomous systems, which can quickly evaporate in high-stakes combat situations. It cites examples of aviators' distrust of unmanned aerial vehicles as indicative of this broader problem.

To overcome this trust deficit, the military has enlisted "trust engineers" - researchers and psychologists tasked with developing ways to rewire military culture and induce more instinctual trust between humans and robots. Techniques being explored include programming robots to communicate more transparently and even anthropomorphizing them to appear more relatable.

The essay concludes by posing the unsettling question of whether the same psychological techniques used to train soldiers to kill other humans could one day be applied to make them blindly trust autonomous killing machines on the battlefield.

Has there ever been a vote to allow this?

There has not been a direct vote by the people or their representatives in Congress to authorize the use of autonomous killing machines by the U.S. military. I propose there should be one, either to allow or to forbid such weapons.

The development and use of autonomous weapons systems has been a topic of ongoing debate and discussion, but there has not been a formal, binding vote to authorize their use.

Some relevant background:

  • In 2012, the U.S. Department of Defense issued Directive 3000.09, which established guidelines and principles for the development and use of autonomous and semi-autonomous weapons systems. However, this was an internal policy directive, not the result of a vote. See the Congressional Research Service report to Congress on lethal autonomous weapon systems (LAWS).
  • The United Nations has held discussions and negotiations on regulating lethal autonomous weapons systems, but no international treaty or agreement has been voted on and ratified.
  • Some members of Congress have introduced bills related to autonomous weapons, such as the Autonomous Weapons Systems Accountability Act in 2021, but these have not been passed into law.
  • H.R. 2894, the Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023, was introduced by Rep. Ted Lieu (D-Calif.) in 2023 but has not come to a vote. The bill prohibits the use of federal funds for an autonomous weapons system that is not subject to meaningful human control to launch a nuclear weapon, or to select or engage targets for the purpose of launching a nuclear weapon. With respect to an autonomous weapons system, meaningful human control means human control of (1) the selection and engagement of targets, and (2) the time, location, and manner of use.

So in summary, while there have been policy discussions and proposals around autonomous weapons, there has not been a direct, binding vote by the people or Congress to authorize their military use. The development and use of these systems have proceeded through internal Department of Defense policies and international negotiations, rather than through an explicit democratic process of authorization. I could not find any citations for a direct public or congressional vote on this issue.

Building Machines That Can Be Trusted

Instead of trying to brainwash military members into trusting the machines, maybe we should focus on building machines worthy of being trusted. There are a few potential ways the issue of building trust between human soldiers and autonomous weapons systems could be resolved:

  1. Implement robust transparency and accountability measures:
     - Require full disclosure of the algorithms, sensors, and decision-making processes used by autonomous systems.
     - Establish clear chains of responsibility and liability when autonomous systems malfunction or cause unintended harm.
     - Ensure there are meaningful human controls and the ability to override autonomous decisions (a rough sketch of such an authorization gate follows below).
  2. Prioritize safety and ethical design:
     - Design autonomous weapons with extensive safety features and ethical constraints to prevent unintended harm.
     - Involve ethicists, human rights experts, and soldiers themselves in the development process.
     - Extensively test autonomous systems in realistic scenarios before deployment.
  3. Improve transparency and communication:
     - Have autonomous systems provide clear, easy-to-understand status updates and explanations of their actions to human operators.
     - Train soldiers on the capabilities and limitations of autonomous systems so they can form accurate expectations.
     - Solicit constant feedback from soldiers on their experiences and concerns with autonomous systems.
  4. Foster a culture of partnership and collaboration:
     - Encourage joint training exercises where soldiers and autonomous systems work closely together.
     - Incentivize the development of mutually beneficial human-machine teamwork.
     - Ensure soldiers have a strong sense of ownership and control over autonomous systems.
  5. Maintain meaningful human control:
     - Establish clear policies and legal frameworks to prevent the development of fully autonomous weapons with no human oversight.
     - Retain human operators' ability to make critical targeting and engagement decisions.


The key is to address the underlying psychological, technical, and cultural barriers to trust through a multi-pronged approach. Prioritizing safety, transparency, and collaborative human-machine relationships will be essential to gaining soldiers' confidence in these emerging technologies.
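
To make points 1 and 5 concrete, here is a minimal sketch in Python of what a human-authorization gate might look like: the autonomous system can only recommend an engagement, a human operator must explicitly approve it, and every request and decision is logged for accountability. All names here (EngagementRequest, request_human_authorization, and so on) are hypothetical illustrations, not any real weapon system's interface.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"
    ABORT = "abort"


@dataclass
class EngagementRequest:
    """Hypothetical targeting recommendation produced by an autonomous system."""
    target_id: str
    confidence: float          # classifier confidence, 0.0 to 1.0
    rationale: str             # plain-language explanation for the operator
    predicted_collateral: str  # e.g. "none expected" or "civilian structures nearby"


def log_for_accountability(request: EngagementRequest, decision: Decision) -> None:
    # A real system would write to a tamper-evident audit log; printing stands in here.
    print(f"[AUDIT] target={request.target_id} confidence={request.confidence:.2f} "
          f"rationale={request.rationale!r} decision={decision.value}")


def request_human_authorization(request: EngagementRequest, operator_decision: Decision) -> bool:
    """Gate every lethal action behind an explicit, logged human decision.

    The machine may recommend, but only the operator's APPROVE allows the action;
    REJECT, ABORT, or no decision at all stands the weapon down.
    """
    log_for_accountability(request, operator_decision)
    return operator_decision is Decision.APPROVE


if __name__ == "__main__":
    req = EngagementRequest(
        target_id="track-042",
        confidence=0.71,
        rationale="Radar signature matches hostile vehicle profile",
        predicted_collateral="civilian structures nearby",
    )
    # The operator declines, so the system must stand down.
    assert request_human_authorization(req, Decision.REJECT) is False
```

The point of the sketch is structural: authorization is a separate, human-owned step that the autonomous component cannot bypass, and the audit trail gives a concrete answer to the question "who decided?"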

Why not trust machines?

Or maybe, just as we hesitate to trust Tesla's Full Self-Driving mode not to kill anybody, we should not trust armed autonomous robots:

The development of armed autonomous robots poses grave risks that should give us serious pause about trusting these systems. There are profound ethical, practical, and existential concerns that make it unwise to cede control over lethal force to machines.

Fundamentally, autonomous weapons systems remove meaningful human control and accountability from the decision to take a human life. Even with carefully programmed rules of engagement, there will always be edge cases and unanticipated scenarios where an autonomous system makes a mistake or acts in ways that violate human rights and the laws of war. And without a human operator in the loop, there is no one to be held responsible for these failures.

Additionally, the proliferation of these technologies increases the risk of unintended conflicts, accidental war, and arms races between nations and non-state actors. Autonomous weapons could be hacked, spoofed, or have their algorithms gamed in unpredictable ways. Their presence on the battlefield also lowers the barriers to entry for warfare, making conflicts more likely to start and escalate.

From a practical standpoint, autonomous robots lack the contextual awareness, common sense, and nuanced decision-making capabilities of humans. They cannot reliably distinguish between combatants and civilians, or make the kind of careful, discriminate use of force that is essential to prevent civilian casualties. Their widespread deployment could lead to indiscriminate slaughter.

Finally, there is a profound existential risk in ceding control over violence to machines. Once the Pandora's box of autonomous weapons is opened, it may be impossible to close it again. We could be unleashing a technology that eventually surpasses human control and threatens our very existence as a species.

For all these reasons, we have a profound moral and practical obligation to maintain meaningful human control over the use of lethal force. Trusting autonomous weapons systems with this power would be a grave mistake with catastrophic consequences.

There is a difference between RPVs and LAWS

There are some important distinctions we can make between remotely controlled killing machines and fully autonomous killing machines:

  • Remotely Controlled Killing Machines (e.g., FPV drones):
    - These systems are directly controlled by a human operator in real time, who makes the final decision to engage and fire on a target.
    - There is a clear chain of human accountability, as the operator is responsible for the decision to use lethal force.
    - These systems still involve meaningful human control and judgment in the targeting and firing process.
  • Autonomous Killing Machines (e.g., mobile mines):
    - These systems are capable of independently detecting, identifying, and engaging targets without direct human control or oversight.
    - They have onboard sensors, target-recognition algorithms, and automatic firing mechanisms that can select and attack targets autonomously.
    - There is a much less clear chain of accountability, as the autonomous system makes the final decision to use lethal force without human intervention.
    - This raises significant ethical concerns about the delegation of life-or-death decisions to machines.

The key distinction is the degree of human control and accountability. Remotely operated systems maintain a clear human in the loop, while autonomous systems delegate targeting and firing decisions to the machine itself.

From an ethical and legal standpoint, the loss of meaningful human control in autonomous killing machines is highly problematic. It undermines principles of human dignity, accountability, and the rules of war.

As such, there is a strong argument that we have a moral and practical obligation to maintain human control over the use of lethal force, rather than ceding it to autonomous systems. Remotely operated systems are more ethically justifiable, though concerns about their use still remain.

Ultimately, the development of autonomous weapons that can select and engage targets without human oversight raises profound moral issues that require robust public debate and binding international regulation. Maintaining meaningful human control is crucial.

The Pentagon Wants Human Soldiers to Trust Their Robot Comrades - IEEE Spectrum

spectrum.ieee.org

Roberto J. González

In the early 2000s, a U.S. Army unit deployed a robot to search caves in Afghanistan for mines and unexploded ordnance.

Editor’s note: This article is adapted from the author’s book War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future (University of California Press, published in paperback April 2024).

The blistering late-afternoon wind ripped across Camp Taji, a sprawling U.S. military base just north of Baghdad. In a desolate corner of the outpost, where the feared Iraqi Republican Guard had once manufactured mustard gas, nerve agents, and other chemical weapons, a group of American soldiers and Marines were solemnly gathered around an open grave, dripping sweat in the 114-degree heat. They were paying their final respects to Boomer, a fallen comrade who had been an indispensable part of their team for years. Just days earlier, he had been blown apart by a roadside bomb.

As a bugle mournfully sounded the last few notes of “Taps,” a soldier raised his rifle and fired a long series of volleys—a 21-gun salute. The troops, which included members of an elite army unit specializing in explosive ordnance disposal (EOD), had decorated Boomer posthumously with a Bronze Star and a Purple Heart. With the help of human operators, the diminutive remote-controlled robot had protected American military personnel from harm by finding and disarming hidden explosives.

Boomer was a Multi-function Agile Remote-Controlled robot, or MARCbot, manufactured by a Silicon Valley company called Exponent. Weighing in at just over 30 pounds, MARCbots look like a cross between a Hollywood camera dolly and an oversized Tonka truck. Despite their toylike appearance, the devices often leave a lasting impression on those who work with them. In an online discussion about EOD support robots, one soldier wrote, “Those little bastards can develop a personality, and they save so many lives.” An infantryman responded by admitting, “We liked those EOD robots. I can’t blame you for giving your guy a proper burial, he helped keep a lot of people safe and did a job that most people wouldn’t want to do.”

A Navy unit used a remote-controlled vehicle with a mounted video camera in 2009 to investigate suspicious areas in southern Afghanistan. Mass Communication Specialist 2nd Class Patrick W. Mullen III/U.S. Navy

But while some EOD teams established warm emotional bonds with their robots, others loathed the machines, especially when they malfunctioned. Take, for example, this case described by a Marine who served in Iraq:

My team once had a robot that was obnoxious. It would frequently accelerate for no reason, steer whichever way it wanted, stop, etc. This often resulted in this stupid thing driving itself into a ditch right next to a suspected IED. So of course then we had to call EOD [personnel] out and waste their time and ours all because of this stupid little robot. Every time it beached itself next to a bomb, which was at least two or three times a week, we had to do this. Then one day we saw yet another IED. We drove him straight over the pressure plate, and blew the stupid little sh*thead of a robot to pieces. All in all a good day.

Some battle-hardened warriors treat remote-controlled devices like brave, loyal, intelligent pets, while others describe them as clumsy, stubborn clods. Either way, observers have interpreted these accounts as unsettling glimpses of a future in which men and women ascribe personalities to artificially intelligent war machines.

From this perspective, what makes robot funerals unnerving is the idea of an emotional slippery slope. If soldiers are bonding with clunky pieces of remote-controlled hardware, what are the prospects of humans forming emotional attachments with machines once they’re more autonomous in nature, nuanced in behavior, and anthropoid in form? And a more troubling question arises: On the battlefield, will Homo sapiens be capable of dehumanizing members of its own species (as it has for centuries), even as it simultaneously humanizes the robots sent to kill them?

As I’ll explain, the Pentagon has a vision of a warfighting force in which humans and robots work together in tight collaborative units. But to achieve that vision, it has called in reinforcements: “trust engineers” who are diligently helping the Department of Defense (DOD) find ways of rewiring human attitudes toward machines. You could say that they want more soldiers to play “Taps” for their robot helpers and fewer to delight in blowing them up.

The Pentagon’s Push for Robotics

For the better part of a decade, several influential Pentagon officials have relentlessly promoted robotic technologies, promising a future in which “humans will form integrated teams with nearly fully autonomous unmanned systems, capable of carrying out operations in contested environments.”

Soldiers test a vertical take-off-and-landing drone at Fort Campbell, Ky., in 2020. U.S. Army

As The New York Times reported in 2016: “Almost unnoticed outside defense circles, the Pentagon has put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power.” The U.S. government is spending staggering sums to advance these technologies: For fiscal year 2019, the U.S. Congress was projected to provide the DOD with US $9.6 billion to fund uncrewed and robotic systems—significantly more than the annual budget of the entire National Science Foundation.

Arguments supporting the expansion of autonomous systems are consistent and predictable: The machines will keep our troops safe because they can perform dull, dirty, dangerous tasks; they will result in fewer civilian casualties, since robots will be able to identify enemies with greater precision than humans can; they will be cost-effective and efficient, allowing more to get done with less; and the devices will allow us to stay ahead of China, which, according to some experts, will soon surpass America’s technological capabilities.

Former U.S. deputy defense secretary Robert O. Work has argued for more automation within the military. Center for a New American Security

Among the most outspoken advocates of a roboticized military is Robert O. Work, who was nominated by President Barack Obama in 2014 to serve as deputy defense secretary. Speaking at a 2015 defense forum, Work—a barrel-chested retired Marine Corps colonel with the slight hint of a drawl—described a future in which “human-machine collaboration” would win wars using big-data analytics. He used the example of Lockheed Martin’s newest stealth fighter to illustrate his point: “The F-35 is not a fighter plane, it is a flying sensor computer that sucks in an enormous amount of data, correlates it, analyzes it, and displays it to the pilot on his helmet.”

The beginning of Work’s speech was measured and technical, but by the end it was full of swagger. To drive home his point, he described a ground combat scenario. “I’m telling you right now,” Work told the rapt audience, “10 years from now if the first person through a breach isn’t a friggin’ robot, shame on us.”

“The debate within the military is no longer about whether to build autonomous weapons but how much independence to give them,” said a 2016 New York Times article. The rhetoric surrounding robotic and autonomous weapon systems is remarkably similar to that of Silicon Valley, where charismatic CEOs, technology gurus, and sycophantic pundits have relentlessly hyped artificial intelligence.

For example, in 2016, the Defense Science Board—a group of appointed civilian scientists tasked with giving advice to the DOD on technical matters—released a report titled “Summer Study on Autonomy.” Significantly, the report wasn’t written to weigh the pros and cons of autonomous battlefield technologies; instead, the group assumed that such systems will inevitably be deployed. Among other things, the report included “focused recommendations to improve the future adoption and use of autonomous systems [and] example projects intended to demonstrate the range of benefits of autonomy for the warfighter.”

What Exactly Is a Robot Soldier?

The author’s book, War Virtually, is a critical look at how the U.S. military is weaponizing technology and data. University of California Press

Early in the 20th century, military and intelligence agencies began developing robotic systems, which were mostly devices remotely operated by human controllers. But microchips, portable computers, the Internet, smartphones, and other developments have supercharged the pace of innovation. So, too, has the ready availability of colossal amounts of data from electronic sources and sensors of all kinds. The Financial Times reports: “The advance of artificial intelligence brings with it the prospect of robot-soldiers battling alongside humans—and one day eclipsing them altogether.” These transformations aren’t inevitable, but they may become a self-fulfilling prophecy.

All of this raises the question: What exactly is a “robot-soldier”? Is it a remote-controlled, armor-clad box on wheels, entirely reliant on explicit, continuous human commands for direction? Is it a device that can be activated and left to operate semiautonomously, with a limited degree of human oversight or intervention? Is it a droid capable of selecting targets (using facial-recognition software or other forms of artificial intelligence) and initiating attacks without human involvement? There are hundreds, if not thousands, of possible technological configurations lying between remote control and full autonomy—and these differences affect ideas about who bears responsibility for a robot’s actions.

The U.S. military’s experimental and actual robotic and autonomous systems include a vast array of artifacts that rely on either remote control or artificial intelligence: aerial drones; ground vehicles of all kinds; sleek warships and submarines; automated missiles; and robots of various shapes and sizes—bipedal androids, quadrupedal gadgets that trot like dogs or mules, insectile swarming machines, and streamlined aquatic devices resembling fish, mollusks, or crustaceans, to name a few.

Members of a U.S. Air Force squadron test out an agile and rugged quadruped robot from Ghost Robotics in 2023. Airman First Class Isaiah Pedrazzini/U.S. Air Force

The transitions projected by military planners suggest that servicemen and servicewomen are in the midst of a three-phase evolutionary process, which begins with remote-controlled robots, in which humans are “in the loop,” then proceeds to semiautonomous and supervised autonomous systems, in which humans are “on the loop,” and then concludes with the adoption of fully autonomous systems, in which humans are “out of the loop.” At the moment, much of the debate in military circles has to do with the degree to which automated systems should allow—or require—human intervention.
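
As a rough illustration of that three-phase framing, the sketch below (hypothetical Python, not any fielded architecture) encodes "in the loop," "on the loop," and "out of the loop" as explicit control modes and shows how the question "may this engagement proceed?" is answered differently in each: by explicit approval, by the absence of a timely veto, or by the machine alone.

```python
from enum import Enum, auto


class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # remote control: a human approves every engagement
    HUMAN_ON_THE_LOOP = auto()      # supervised autonomy: the machine acts unless a human vetoes
    HUMAN_OUT_OF_THE_LOOP = auto()  # full autonomy: no human decision is consulted


def engagement_permitted(mode: ControlMode,
                         human_approved: bool = False,
                         human_vetoed: bool = False) -> bool:
    """Illustrative only: shows where the decision authority sits in each mode."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_approved          # default is "no" until a human says yes
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed        # default is "yes" unless a human intervenes in time
    return True                        # out of the loop: the machine decides alone


# The same situation, three different answers to "who is responsible?"
print(engagement_permitted(ControlMode.HUMAN_IN_THE_LOOP))       # False: nobody approved
print(engagement_permitted(ControlMode.HUMAN_ON_THE_LOOP))       # True: nobody vetoed
print(engagement_permitted(ControlMode.HUMAN_OUT_OF_THE_LOOP))   # True: unconditionally
```

Seen this way, the shift from "in" to "on" the loop is not a small tweak: it flips the default from "hold fire until a human approves" to "fire unless a human objects in time."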

In recent years, much of the hype has centered around that second stage: semiautonomous and supervised autonomous systems that DOD officials refer to as “human-machine teaming.” This idea suddenly appeared in Pentagon publications and official statements after the summer of 2015. The timing probably wasn’t accidental; it came at a time when global news outlets were focusing attention on a public backlash against lethal autonomous weapon systems. The Campaign to Stop Killer Robots was launched in April 2013 as a coalition of nonprofit and civil society organizations, including the International Committee for Robot Arms Control, Amnesty International, and Human Rights Watch. In July 2015, the campaign released an open letter warning of a robotic arms race and calling for a ban on the technologies. Cosigners included world-renowned physicist Stephen Hawking, Tesla founder Elon Musk, Apple cofounder Steve Wozniak, and thousands more.

In November 2015, Work gave a high-profile speech on the importance of human-machine teaming, perhaps hoping to defuse the growing criticism of “killer robots.” According to one account, Work’s vision was one in which “computers will fly the missiles, aim the lasers, jam the signals, read the sensors, and pull all the data together over a network, putting it into an intuitive interface humans can read, understand, and use to command the mission”—but humans would still be in the mix, “using the machine to make the human make better decisions.” From this point forward, the military branches accelerated their drive toward human-machine teaming.

The Doubt in the Machine

But there was a problem. Military experts loved the idea, touting it as a win-win: Paul Scharre, in his book Army of None: Autonomous Weapons and the Future of War, claimed that “we don’t need to give up the benefits of human judgment to get the advantages of automation, we can have our cake and eat it too.” However, personnel on the ground expressed—and continue to express—deep misgivings about the side effects of the Pentagon’s newest war machines.

The difficulty, it seems, is humans’ lack of trust. The engineering challenges of creating robotic weapon systems are relatively straightforward, but the social and psychological challenges of convincing humans to place their faith in the machines are bewilderingly complex. In high-stakes, high-pressure situations like military combat, human confidence in autonomous systems can quickly vanish. The Pentagon’s Defense Systems Information Analysis Center Journal noted that although the prospects for combined human-machine teams are promising, humans will need assurances:

[T]he battlefield is fluid, dynamic, and dangerous. As a result, warfighter demands become exceedingly complex, especially since the potential costs of failure are unacceptable. The prospect of lethal autonomy adds even greater complexity to the problem [in that] warfighters will have no prior experience with similar systems. Developers will be forced to build trust almost from scratch.

In a 2015 article, U.S. Navy Commander Greg Smith provided a candid assessment of aviators’ distrust in aerial drones. After describing how drones are often intentionally separated from crewed aircraft, Smith noted that operators sometimes lose communication with their drones and may inadvertently bring them perilously close to crewed airplanes, which “raises the hair on the back of an aviator’s neck.” He concluded:

[I]n 2010, one task force commander grounded his manned aircraft at a remote operating location until he was assured that the local control tower and UAV [unmanned aerial vehicle] operators located halfway around the world would improve procedural compliance. Anecdotes like these abound…. After nearly a decade of sharing the skies with UAVs, most naval aviators no longer believe that UAVs are trying to kill them, but one should not confuse this sentiment with trusting the platform, technology, or [drone] operators.

U.S. Marines [top] prepare to launch and operate an MQ-9A Reaper drone in 2021. The Reaper [bottom] is designed for both high-altitude surveillance and destroying targets. Top: Lance Cpl. Gabrielle Sanders/U.S. Marine Corps; Bottom: 1st Lt. John Coppola/U.S. Marine Corps

Yet Pentagon leaders place an almost superstitious trust in those systems, and seem firmly convinced that a lack of human confidence in autonomous systems can be overcome with engineered solutions. In a commentary, Courtney Soboleski, a data scientist employed by the military contractor Booz Allen Hamilton, makes the case for mobilizing social science as a tool for overcoming soldiers’ lack of trust in robotic systems.

The problem with adding a machine into military teaming arrangements is not doctrinal or numeric…it is psychological. It is rethinking the instinctual threshold required for trust to exist between the soldier and machine.… The real hurdle lies in surpassing the individual psychological and sociological barriers to assumption of risk presented by algorithmic warfare. To do so requires a rewiring of military culture across several mental and emotional domains.… AI [artificial intelligence] trainers should partner with traditional military subject matter experts to develop the psychological feelings of safety not inherently tangible in new technology. Through this exchange, soldiers will develop the same instinctual trust natural to the human-human war-fighting paradigm with machines.

The Military’s Trust Engineers Go to Work

Soon, the wary warfighter will likely be subjected to new forms of training that focus on building trust between robots and humans. Already, robots are being programmed to communicate in more human ways with their users for the explicit purpose of increasing trust. And projects are currently underway to help military robots report their deficiencies to humans in given situations, and to alter their functionality according to the machine’s perception of the user’s emotional state.

At the DEVCOM Army Research Laboratory, military psychologists have spent more than a decade on human experiments related to trust in machines. Among the most prolific is Jessie Chen, who joined the lab in 2003. Chen lives and breathes robotics—specifically “agent teaming” research, a field that examines how robots can be integrated into groups with humans. Her experiments test how humans’ lack of trust in robotic and autonomous systems can be overcome—or at least minimized.

For example, in one set of tests, Chen and her colleagues deployed a small ground robot called an Autonomous Squad Member that interacted and communicated with infantrymen. The researchers varied “situation-awareness-based agent transparency”—that is, the robot’s self-reported information about its plans, motivations, and predicted outcomes—and found that human trust in the robot increased when the autonomous “agent” was more transparent or honest about its intentions.
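
The article does not describe the lab's software, but the kind of self-report being varied can be pictured with a small, purely hypothetical sketch: a report structure that exposes more of the robot's state (its plan, then its reasoning, then its projected outcome and confidence) as the transparency level increases. The class and field names here are illustrative assumptions, not the Army Research Laboratory's actual interface.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TransparencyReport:
    """Hypothetical self-report from an autonomous squad member. Higher
    transparency levels reveal more: the plan, then the reasoning behind it,
    then the projected outcome and the agent's confidence in it."""
    plan: str
    reasoning: str = ""
    projection: str = ""
    confidence: Optional[float] = None  # 0.0 to 1.0

    def render(self, level: int) -> str:
        parts = [f"Plan: {self.plan}"]
        if level >= 2 and self.reasoning:
            parts.append(f"Because: {self.reasoning}")
        if level >= 3 and self.projection:
            suffix = f" (confidence {self.confidence:.0%})" if self.confidence is not None else ""
            parts.append(f"Expecting: {self.projection}{suffix}")
        return " | ".join(parts)


report = TransparencyReport(
    plan="Detour to the north wall and rescan for obstructions",
    reasoning="The assigned route is blocked by debris",
    projection="Rejoin the squad at the rally point in about 4 minutes",
    confidence=0.8,
)
print(report.render(level=1))  # the operator sees only what the robot will do
print(report.render(level=3))  # the operator also sees why, and what to expect
```

In the terms of this sketch, the finding reported above roughly corresponds to operators trusting the fuller, level-3 style of report more than the bare, level-1 style.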

The Army isn’t the only branch of the armed services researching human trust in robots. The U.S. Air Force Research Laboratory recently had an entire group dedicated to the subject: the Human Trust and Interaction Branch, part of the lab’s 711th Human Performance Wing, located at Wright-Patterson Air Force Base, in Ohio.

In 2015, the Air Force began soliciting proposals for “research on how to harness the socio-emotional elements of interpersonal team/trust dynamics and inject them into human-robot teams.” Mark Draper, a principal engineering research psychologist at the Air Force lab, is optimistic about the prospects of human-machine teaming: “As autonomy becomes more trusted, as it becomes more capable, then the Airmen can start off-loading more decision-making capability on the autonomy, and autonomy can exercise increasingly important levels of decision-making.”

Air Force researchers are attempting to dissect the determinants of human trust. In one project, they examined the relationship between a person’s personality profile (measured using the so-called Big Five personality traits: openness, conscientiousness, extraversion, agreeableness, neuroticism) and his or her tendency to trust. In another experiment, entitled “Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot,” Air Force scientists compared male and female research subjects’ levels of trust by showing them a video depicting a guard robot. The robot was armed with a Taser, interacted with people, and eventually used the Taser on one. Researchers designed the scenario to create uncertainty about whether the robot or the humans were to blame. By surveying research subjects, the scientists found that women reported higher levels of trust in “Robocop” than men.

The issue of trust in autonomous systems has even led the Air Force’s chief scientist to suggest ideas for increasing human confidence in the machines, ranging from better android manners to robots that look more like people, under the principle that

good HFE [human factors engineering] design should help support ease of interaction between humans and AS [autonomous systems]. For example, better “etiquette” often equates to better performance, causing a more seamless interaction. This occurs, for example, when an AS avoids interrupting its human teammate during a high workload situation or cues the human that it is about to interrupt—activities that, surprisingly, can improve performance independent of the actual reliability of the system. To an extent, anthropomorphism can also improve human-AS interaction, since people often trust agents endowed with more humanlike features…[but] anthropomorphism can also induce overtrust.

It’s impossible to know the degree to which the trust engineers will succeed in achieving their objectives. For decades, military trainers have trained and prepared newly enlisted men and women to kill other people. If specialists have developed simple psychological techniques to overcome the soldier’s deeply ingrained aversion to destroying human life, is it possible that someday, the warfighter might also be persuaded to unquestioningly place his or her trust in robots?

 
