Twitter has announced an initiative to study the fairness of its algorithms. As part of the effort, which the company has dubbed the "Responsible Machine Learning Initiative," data scientists and engineers across the company will study the potential "unintentional harms" caused by its algorithms and make the results public.
"We are conducting in-depth analysis and studies to assess the existence of potential harms in the algorithms we use," the company wrote in a blog post announcing the initiative.
For starters, the company will study Twitter's image-cropping algorithm, which has been criticized for favoring people with lighter skin. Twitter will also examine its content recommendations, including an "assessment of the fairness of our recommendations on the home timeline across racial subgroups" and "an analysis of content recommendations for different political ideologies in seven countries."
It is not yet clear what impact the initiative will have. Twitter notes that in some cases it may change aspects of its platform based on its findings, while other studies may simply result in "important discussions about how we build and apply ML" [machine learning]. But the issue is a timely one for Twitter and other social media platforms. Lawmakers pressed Twitter, YouTube and Facebook for more transparency following the insurrection at the United States Capitol, and some lawmakers have proposed legislation that would force companies to evaluate their algorithms for bias.
Twitter CEO Jack Dorsey has also spoken of his desire to give users control over the algorithms they use. In its latest blog post, the company says it is in the "early stages of exploring" such an idea.