If there’s one thing the US military is right about, it’s lethality. Yet even once the US military has you in its sights, it may not know who you really are – these are the so-called “signature strikes” – even as this wrathful finger of God is called down.
As Kate Crawford, director of Microsoft Research and co-founder of NYU’s AI Now Institute, explains in this fascinating excerpt from her new book, Atlas of AI, the military-industrial complex is alive and well and is now leveraging metadata surveillance scores developed by IBM to decide which house, vehicle, or person the next drone should hit. And if you think the same insidious technology isn’t already at work in the domestic economy, I have a credit score to sell you.
Excerpted from Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford, published by Yale University Press. Copyright © 2021 by the President and Fellows of Yale University. Used with permission. All rights reserved.
The idea of the signature underlies the military logic of targeting. Toward the end of President George W. Bush’s second term, the CIA argued that it should be able to launch drone strikes based solely on an individual’s observed “pattern of behavior” or “signature.”
Whereas a “personality strike” involves targeting a specific individual, a “signature strike” is when a person is killed due to their metadata signature; in other words, their identity is not known, but the data suggests that they might be a terrorist.
As the Snowden documents showed, during the Obama years the National Security Agency’s global metadata surveillance program would geolocate a suspect’s SIM card or handset, and then the U.S. military would conduct drone strikes to kill the individual in possession of the device.
“We kill people based on metadata,” said General Michael Hayden, former director of the NSA and the CIA. The NSA’s Geo Cell division was reported to use more colorful language: “We track ’em, you whack ’em.”
Signature strikes can seem precise and authoritative, implying a true mark of someone’s identity. But in 2014, the legal organization Reprieve published a report showing that drone strikes attempting to kill 41 individuals resulted in the deaths of an estimated 1,147 people. “Drone strikes have been sold to the American public on the claim that they are ‘precise.’ But they are only as precise as the intelligence that feeds them,” said Jennifer Gibson, who led the report.
But the form of the signature strike is not about precision: it is about correlation. Once a pattern is found in the data and reaches a certain threshold, the suspicion becomes sufficient grounds to act even in the absence of definitive proof. This mode of adjudication by pattern recognition is found in many domains, and it most often takes the form of a score.
Consider an example from the Syrian refugee crisis of 2015. Millions of people were fleeing widespread civil war and hostile occupation in the hope of finding asylum in Europe. Refugees were risking their lives on overcrowded rafts and boats. On September 2, a three-year-old boy named Alan Kurdi drowned in the Mediterranean Sea, along with his five-year-old brother, when their boat capsized. A photograph of his body washed up on a beach in Turkey made international headlines as a potent symbol of the scale of the humanitarian crisis: an image depicting a global horror. But some saw it as depicting a growing threat. It was around this time that IBM was approached about a new project. Could the company use its machine learning platform to detect a data signature of refugees that might be linked to jihadism? In short, could IBM automatically distinguish a terrorist from a refugee?
Andrew Borene, a strategic initiatives executive at IBM, described the rationale for the program to the military publication Defense One:
“Our worldwide team, some of the folks in Europe, were getting feedback that there were concerns that within these asylum-seeking populations that had been starved and dehydrated, there were fighting-age males coming off of boats that looked awfully healthy. Was that a cause for concern regarding Daesh and, if so, could this type of solution be helpful?”
From the safe distance of their corporate offices, IBM’s data scientists saw the problem as one best addressed by data extraction and social media analysis. Setting aside the many variables at play in the conditions of makeshift refugee camps, and the many assumptions baked into classifying terrorist behavior, IBM built an experimental “terrorist credit score” to weed out ISIS fighters from refugees. Analysts gathered a miscellany of unstructured data, from Twitter to the official list of those who had drowned along with the many capsized boats off the coasts of Greece and Turkey. They also assembled a data set modeled on the kinds of metadata available to border guards. From these disparate measures, they developed a hypothetical threat score: not an absolute indicator of guilt or innocence, they stressed, but a deeper “insight” into the individual, including past addresses, workplaces, and social connections. Meanwhile, Syrian refugees had no knowledge that their personal data was being harvested to test a system that might single them out as potential terrorists.
This is just one of many cases where new technical systems of state control use the bodies of refugees as test cases. These military and policing logics are now suffused with a form of financialization: socially constructed models of creditworthiness have entered many AI systems, influencing everything from the ability to get a loan to permission to cross borders. Hundreds of such platforms are now in use around the world, from China to Venezuela to the United States, rewarding predetermined forms of social behavior and penalizing those who do not conform.
This new regime of moralized social classification, in the words of sociologists Marion Fourcade and Kieran Healy, benefits the already advantaged of the traditional economy while further disadvantaging less privileged populations. Credit scoring, in the broadest sense, has become a place where the military and commercial signatures combine.