Facebook ad algorithms continue to illegally discriminate against women

The study provides the latest evidence that Facebook has not addressed its ad discrimination problems since ProPublica first highlighted them in October 2016. At the time, ProPublica revealed that the platform allowed job and housing advertisers to exclude certain audiences characterized by traits such as gender and race. These groups enjoy special protection under US law, making the practice illegal. It took two and a half years and several legal skirmishes for Facebook to finally remove the feature.

But a few months later, the US Department of Housing and Urban Development (HUD) filed a new lawsuit, alleging that Facebook's ad-serving algorithms were still excluding audiences from real estate listings even when the advertiser had specified no exclusions. A team of independent researchers including Korolova, led by Muhammad Ali and Piotr Sapieżyński of Northeastern University, corroborated these allegations a week later. They found, for example, that homes for sale were shown more often to white users, while homes for rent were shown more often to minority users.

Korolova wanted to revisit the issue with her latest audit because the burden of proof for discrimination in employment is higher than for discrimination in housing. While any bias in ad delivery based on protected characteristics is illegal in the case of housing, US labor law deems such bias justifiable if it stems from legitimate qualification differences. The new methodology controls for this factor.

"The design of the experiment is very clean," says Sapieżyński, who was not involved in the latest study. While some might argue that car and jewelry salespeople do indeed have different qualifications, he says, the differences between delivering pizza and delivering groceries are negligible. "These gender differences cannot be explained by gender differences in qualifications or a lack of qualifications," he adds. "Facebook can no longer say [this is] defensible by law."

The publication of this audit comes amid heightened scrutiny of Facebook's AI bias work. In March, MIT Technology Review published the results of a nine-month investigation into the company's Responsible AI team, which found that the team, first formed in 2018, had neglected to work on issues such as algorithmic amplification of misinformation and polarization because of its narrow focus on AI bias. The company published a blog post shortly thereafter, emphasizing the importance of that work and saying in particular that Facebook seeks to "better understand potential errors that may affect our ads system, as part of our ongoing and broader work to study algorithmic fairness in ads."

"We have taken meaningful steps to address issues of discrimination in ads and have teams working on ads fairness today," said Facebook spokesperson Joe Osborn in a statement. "Our system takes into account many signals to try and serve people ads they will be most interested in, but we understand the concerns raised in the report … We continue to work closely with the civil rights community, regulators, and academics on these important matters."

Despite these claims, however, Korolova says she found no noticeable change between the 2019 audit and this one in the way Facebook's ad-serving algorithms work. "From that perspective, it's actually really disappointing, because we brought this to their attention two years ago," she says. She has also offered to work with Facebook to resolve these issues, she says. "We haven't heard back. At least to me, they haven't reached out."

In previous interviews, the company said it was unable to discuss the details of how it was working to mitigate algorithmic discrimination in its ad service because of ongoing litigation. The ads team said its progress has been limited by technical challenges.

Sapieżyński, who has now conducted three audits of the platform, says this has nothing to do with the issue. "Facebook still has yet to acknowledge that there is a problem," he says. While the team works out the technical kinks, he adds, there's also a simple interim solution: it could turn off algorithmic ad targeting specifically for housing, employment, and lending ads without affecting the rest of its service. It's really just an issue of political will, he says.

Christo Wilson, another researcher at Northeastern who studies algorithmic bias but was not involved in Korolova's or Sapieżyński's research, agrees: "How many times do researchers and journalists need to find these problems before we just accept that the entire ad-targeting system is bankrupt?"
