I am troubled by the fact that law enforcement agencies are increasingly using robots for tasks such as neutralizing threats, conducting surveillance, and intervening in hostage situations. Maybe I've just seen RoboCop one too many times, but I am wary of machines making crucial life-or-death decisions, especially given the frequency with which genuine human officers abuse their authority. Do I have any moral obligation to obey a robot police officer?
Hollywood has not been particularly optimistic about robots in positions of authority. RoboCop is just one example from a larger sci-fi canon that has seared into our minds the tragic consequences of relinquishing critical tasks to inflexible machines – robots whose prime directives are honored with a literalism that can turn lethal, and that can blow a person apart yet be confounded by a flight of stairs. The message of these films is clear: rigid automatons are incapable of the improvised solutions and moral nuance so often required in times of crisis.
Perhaps it was this stereotype that led Boston Dynamics, some of whose robots have been put to use by police departments, to publish a video last December of its models dancing to the Contours' 1962 hit "Do You Love Me." Maybe you saw it? The robots included Atlas, an android that looks like a deconstructed Stormtrooper, and Spot, which inspired the killer dog-bots in the Black Mirror episode "Metalhead." Neither machine appears to have been designed to allay fears of a robot takeover, so what better way to endear them to audiences than to show off their agility? And what better test of that agility than a skill considered so uniquely human that we invented a dance move to mock an automaton's inability to perform it (the Robot)? Watching the machines twist, shuffle, and spin, it's hard not to see them as vibrant, embodied creatures, capable of the same flexibility and sensibility that we are.
It doesn't matter that Spot's knuckles could cut your finger, or that police robots have already been used to exert deadly force. One way to answer your question, Suspect, without appealing to moral philosophy, is in terms of pragmatic consequences. If you, like most of us, intend to stay alive and well, then yes, you absolutely should obey a police robot.
But I sense that your question is not merely practical, and I agree that it is important to consider the trade-offs involved in transferring policing tasks to machines. The Boston Dynamics video, incidentally, was released in late 2020 as a way to "celebrate the start of what we hope will be a happier year." A week later, insurrectionists stormed the Capitol, and images circulated of police officers offering little resistance to the mob – photos that were vividly juxtaposed, on social media, with the far harsher response to the Black Lives Matter protests of the previous summer.
At a time when many police departments face a crisis of authority over racial violence, the most compelling argument for robot policing is that machines have no inherent capacity for bias. To a robot, a person is a person, regardless of skin color, gender, or cause. As the White House noted in a 2016 report on algorithms and civil rights, new technologies have the potential "to help law enforcement make decisions based on factors and variables that are empirically correlated with risk, rather than on flawed human instincts and prejudices."
Of course, if current police technology is any indication, things are not so simple. Predictive policing algorithms, which are used to identify high-risk people and neighborhoods, are notoriously prone to bias, owing to what the roboticist Ayanna Howard has called the "original sin of AI." Because these systems are trained on historical data (past court cases, past arrests), they end up pointing to the same communities that were unfairly targeted in the first place, reinforcing structural racism. Automated predictions can become self-fulfilling, locking certain neighborhoods into a pattern of overpolicing. (Officers who arrive at a location flagged as ripe for crime are primed to find one.) In other words, these tools do not so much neutralize prejudices as formalize them, converting existing social inequalities into systems that perpetuate them unconsciously and mechanically. As the digital ethics professor Kevin Macnish notes, the values of algorithm makers "are frozen in the code, effectively institutionalizing those values."