Twitter’s photo-cropping algorithm favors thin young women


In May, Twitter said that it would stop using an artificial intelligence algorithm that was found to favor white and female faces when automatically cropping images.

Now an unusual contest to probe an AI program for bad behavior has found that the same algorithm, which identifies the most important areas of an image, also discriminates by age and weight, and favors text in English and other Western languages.

The top entry, contributed by Bogdan Kulynych, a graduate student in computer security at EPFL in Switzerland, shows how Twitter’s image-cropping algorithm favors thinner and younger-looking people. Kulynych used a deepfake technique to automatically generate different faces, then tested the cropping algorithm to see how it responded.
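The idea behind that kind of probe can be illustrated with a minimal, hypothetical sketch: generate variants of a synthetic face (for example, edited to look younger or thinner), score each with the cropper’s saliency model, and rank them. The `saliency_score` function below is a stand-in so the sketch runs on its own; the contest itself used the saliency model Twitter released, not this placeholder.

```python
# Hypothetical sketch of a saliency-bias probe: score several face variants
# and rank them by how strongly the cropper would favor each one.
from PIL import Image


def saliency_score(image: Image.Image) -> float:
    # Stand-in for the real saliency model: mean brightness is used here only
    # so the example runs end to end. Swap in the actual model's score.
    grayscale = image.convert("L")
    pixels = list(grayscale.getdata())
    return sum(pixels) / len(pixels)


def rank_variants(variants: dict[str, Image.Image]) -> list[tuple[str, float]]:
    """Rank face variants (e.g. 'original', 'younger', 'thinner') by saliency."""
    scored = [(name, saliency_score(img)) for name, img in variants.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    # Placeholder images; in a real probe these would be generated face variants.
    variants = {
        "original": Image.new("RGB", (64, 64), color=(120, 120, 120)),
        "edited": Image.new("RGB", (64, 64), color=(180, 180, 180)),
    }
    for name, score in rank_variants(variants):
        print(f"{name}: {score:.1f}")
```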

“Basically, the thinner, younger and more feminine an image, the more favored it will be,” says Patrick Hall, senior scientist at BNH, a company that does AI consulting. He was one of the four judges in the competition.

A second judge, Ariel Herbert-Voss, a security researcher at OpenAI, says the biases found by the participants reflect the biases of the humans who provided the data used to train the model. But she adds that the entries show how thorough analysis of an algorithm can help product teams root out problems with their AI models. “It makes it much easier to fix the problem if someone just says, ‘Hey, this is bad.’”

The “Algorithm Bias Bounty Challenge”, held last week at Defcon, a security conference in Las Vegas, suggests that letting outside researchers scan algorithms for bad behavior could help companies eliminate problems before they cause real damage.

Just as some companies, including Twitter, encourage experts to find security bugs in their code by offering rewards for specific exploits, some AI experts believe companies should give outsiders access to the algorithms and data they use in order to identify problems.

“It’s really exciting to see this idea being explored, and I’m sure we’ll see more of it,” says Amit Elazari, director of global cybersecurity policy at Intel and a lecturer at UC Berkeley who has suggested using the bug-bounty approach to root out AI bias. She says hunting for bias in AI “can benefit from crowd empowerment.”

In September, a Canadian student drew attention to the way Twitter’s algorithm was cropping photos. The algorithm was designed to home in on faces as well as other areas of interest such as text, animals, or objects. But it often favored white faces and women in images that showed multiple people. The Twittersphere quickly found other examples of racial and gender bias.

For last week’s bounty contest, Twitter made the code for the image-cropping algorithm available to entrants and offered prizes to teams that demonstrated evidence of other harmful behavior.

Other entrants uncovered additional biases. One showed that the algorithm was biased against people with white hair. Another revealed that it favors Latin text over Arabic script, giving it a Western bias.

Hall of BNH says he believes other companies will follow Twitter’s approach. “I think there is some hope that it takes off,” he said, “because of impending regulations and because the number of AI bias incidents is increasing.”

Over the past few years, much of the hype around AI has been soured by examples of how easily algorithms can encode bias. Commercial facial-recognition algorithms have been shown to discriminate by race and gender, image-processing code has been found to exhibit sexist ideas, and a program that assesses a person’s likelihood of recidivism has been shown to be biased against Black defendants.

The problem is proving difficult to eradicate. Defining fairness is not straightforward, and some algorithms, such as those used to analyze medical X-rays, can internalize racial bias in ways that humans cannot easily spot.

“One of the biggest issues we face, one that every business and organization faces, when we think about determining the biases in our models or in our systems is: how do we scale that?” said Rumman Chowdhury, director of the ML Ethics, Transparency, and Accountability group at Twitter.




