AI algorithms can assess your attractiveness


A comparison of two Beyoncé Knowles photos from Lauren Rhue’s research using Face++. The algorithm predicted the left image would score 74.776% for men and 77.914% for women, while the right image scored 87.468% for men and 91.14% for women.
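To make the numbers above concrete, here is a minimal sketch of how per-gender scores like these could be pulled out of a Face++-style face-detection response. The field names (`attributes.beauty.male_score` / `female_score`) follow Face++’s published Detect API format, but the response shown is a hand-built sample, not real API output:

```python
import json

# Hand-built sample in the shape of a Face++ "Detect" response with
# the beauty attribute requested; values match the left photo above.
sample_response = json.loads("""
{
  "faces": [
    {
      "attributes": {
        "beauty": {"male_score": 74.776, "female_score": 77.914}
      }
    }
  ]
}
""")

def beauty_scores(response: dict) -> list[tuple[float, float]]:
    """Return (male_score, female_score) for each detected face."""
    return [
        (face["attributes"]["beauty"]["male_score"],
         face["attributes"]["beauty"]["female_score"])
        for face in response["faces"]
    ]

print(beauty_scores(sample_response))  # [(74.776, 77.914)]
```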

Beauty scores, she says, feed a disturbing dynamic between an already unhealthy beauty culture and the recommendation algorithms we encounter online every day. When scores are used to decide which posts to surface on social media platforms, for example, they reinforce a single definition of what counts as attractive and divert attention from anyone who does not fit the machine’s narrow ideal. “We are restricting the types of images available to everyone,” says Rhue.

It’s a vicious cycle: with more eyes on content featuring attractive people, those images drive higher engagement, so they’re shown to even more people. Ultimately, while a high beauty score isn’t a direct reason a post is shown to you, it is an indirect one.
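The feedback loop described above can be sketched as a toy simulation: posts whose scores attract slightly more clicks get ranked higher, which earns them more impressions, which earns them more clicks. This is an illustrative model of the dynamic, not any platform’s actual ranking code:

```python
def simulate(scores, rounds=50, slots=2):
    """Toy ranking loop: each round, show the `slots` posts with the
    most accumulated clicks; shown posts gain clicks in proportion
    to their score, so early advantages compound."""
    clicks = list(scores)          # seed engagement from one initial showing
    impressions = [0] * len(scores)
    for _ in range(rounds):
        ranked = sorted(range(len(scores)), key=lambda i: -clicks[i])
        for i in ranked[:slots]:   # only the top slots get surfaced
            impressions[i] += 1
            clicks[i] += scores[i]
    return impressions

# Four posts; the two highest-scored capture all of the exposure.
print(simulate([0.9, 0.8, 0.4, 0.3]))  # [50, 50, 0, 0]
```

Even with small initial score gaps, exposure concentrates entirely on the top-scored posts, which is the indirect effect the paragraph describes.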

In one study published in 2019, she looked at how two algorithms, one for beauty scores and one for age predictions, affected people’s opinions. Participants were shown pictures of people and asked to rate the beauty and age of the subjects. Some participants saw an AI-generated score before giving their answer, while others were not shown the AI score at all. She found that participants with no knowledge of the AI rating did not exhibit additional bias; however, those who knew how the AI ranked people’s attractiveness gave scores closer to the algorithm-generated result. Rhue calls this the “anchoring effect”.

“Recommendation algorithms actually change our preferences,” she says. “And the technological challenge, of course, is not to narrow them too much. When it comes to beauty, we’re seeing much more narrowing than I expected.”

“I saw no reason not to assess your flaws, because there are ways to correct them.”

Shafee Hassan, Studio Qoves

At Qoves, Hassan says he has tried to tackle the bias problem head-on. When producing a detailed facial analysis report – the kind clients pay for – his studio attempts to use data to categorize each face by ethnicity, so that clients are not simply assessed against a European ideal. “You can escape this Eurocentric bias just by becoming the best version of yourself, the best version of your ethnicity, the best version of your race,” he says.

But Rhue says she is concerned that this type of ethnic categorization is being embedded ever more deeply into our technological infrastructure. “The problem is, people are doing it no matter how we look at it, and there’s no kind of regulation or oversight,” she says. “If there is any kind of conflict, people will try to find out who falls into which category.”

