New lawsuit shows facial recognition is officially a civil rights issue


Williams’ wrongful arrest, first reported by The New York Times in August 2020, was based on a bad match from the Detroit Police Department’s facial recognition system. Two more instances of false arrest have since been made public. Both victims are also Black men, and both have filed lawsuits.

Now Williams is following their path and going further – not just suing the department over his wrongful arrest, but trying to get the technology banned.

On Tuesday, the ACLU and the University of Michigan Civil Rights Litigation Initiative filed a complaint on behalf of Williams, alleging that the arrest violated his Fourth Amendment rights and was in violation of Michigan’s civil rights law.

The lawsuit calls for compensation, greater transparency over the use of facial recognition, and an end to the Detroit Police Department’s use of facial recognition technology, whether direct or indirect.

What the lawsuit says

The documents filed on Tuesday lay out the case. In March 2019, the DPD ran a grainy photo of a Black man in a red cap, taken from Shinola’s surveillance video, through its facial recognition system, which is made by a company called DataWorks Plus. The system returned a match with an old driver’s license photo of Williams. Investigators then included Williams’ license photo in a photo lineup, and a Shinola security guard identified Williams as the thief. Officers obtained a warrant, which requires sign-off from several department leaders, and Williams was arrested.

The complaint argues that Williams’ wrongful arrest was a direct result of the facial recognition system and that “this wrongful arrest and jail case illustrates the serious harm caused by the misuse of, and reliance on, facial recognition technology.”

The complaint contains four counts, three of which focus on the lack of probable cause for the arrest, while one focuses on racial disparities in the impact of facial recognition. “By employing technology which has been empirically proven to misidentify Black people at rates much higher than other groups of people,” it asserts, “the DPD has denied Mr. Williams the full and equal enjoyment of the services, privileges and advantages of the Detroit Police Department because of his race or color.”

The difficulties facial recognition technology has in identifying people with darker skin are well documented. Following the murder of George Floyd in Minneapolis in 2020, some cities and states announced bans and moratoriums on police use of facial recognition. But many others, including Detroit, have continued to use it despite growing concerns.

“Relying on inferior images”

When MIT Technology Review spoke last year with Phil Mayor, Williams’ ACLU attorney, he pointed out that issues of racism within US law enforcement make the use of facial recognition even more worrying.

“This is not a one-bad-actor situation,” said Mayor. “This is a situation where we have a criminal justice system that is extremely quick to bring charges and extremely slow to protect the rights of people, especially when it comes to people of color.”

Eric Williams, a senior attorney with the Economic Equity Practice in Detroit, says cameras have many technological limitations, including that they are hard-coded with color ranges for recognizing skin tone and often simply cannot process darker skin.


“I think every Black person in the country has had the experience of being in a photo and the photo comes out either lighter or much darker,” says Williams, who is a member of the Michigan ACLU’s lawyers committee but is not working on the Robert Williams case. “Lighting is one of the main factors in the quality of an image. So the fact that law enforcement relies, to some extent … on really poor images is problematic.”

There have been other cases challenging biased algorithms and artificial intelligence technologies on the basis of race. Facebook, for example, underwent a massive civil rights audit after its targeted-advertising algorithms were found to serve ads on the basis of race, gender, and religion. YouTube has been sued in a class action lawsuit brought by Black creators who alleged that its AI systems profiled users and censored or discriminated against their content on the basis of race. YouTube has also been sued by LGBTQ+ creators who said its content moderation systems flagged the words “gay” and “lesbian.”

Some experts say it was only a matter of time before the use of biased technology by a major institution like the police faced legal challenges.


