Welcome to TikTok’s never-ending cycle of censorship and mistakes


It’s not necessarily a surprise that these videos are making the news. People make these videos because they work. Getting views has long been one of the most effective ways to get a big platform to fix something. TikTok, Twitter, and Facebook have made it easier for users to report abuse and rule violations by other users. But when these companies themselves seem to be breaking their own policies, people often find that the best way forward is simply to post about it on the platform itself, in the hope of going viral and attracting the attention that leads to some sort of resolution. Tyler’s two videos about the Marketplace bios, for example, each have over a million views.

“Their content is being flagged because it’s a member of a marginalized group talking about their experiences with racism. Hate speech and talking about hate speech can sound a lot alike to an algorithm.”

Casey Fiesler, University of Colorado, Boulder

“I’m probably tagged in something about once a week,” says Casey Fiesler, an assistant professor at the University of Colorado, Boulder, who studies technology ethics and online communities. She’s active on TikTok, with over 50,000 followers, and while not everything she sees seems like a legitimate concern, she says the regular parade of issues with the app is real. TikTok has had several such errors over the past few months, all of which have disproportionately affected marginalized groups on the platform.

MIT Technology Review asked TikTok about each of these recent examples, and the answers are similar: upon investigation, TikTok finds that the issue was the result of an error, points out that the blocked content in question does not violate its policies, and points to the support the company offers such groups.

The question is whether this cycle of technical or policy error, viral response, and apology can be changed.

Solve problems before they arise

“There are two kinds of harm from this presumably algorithmic content moderation that people are observing,” says Fiesler. “One is false negatives. People are asking, ‘Why is there so much hate speech on this platform, and why is it not being deleted?’”

The other is false positives. “Their content is being flagged because it’s a member of a marginalized group talking about their experiences with racism,” she says. “Hate speech and talking about hate speech can sound a lot alike to an algorithm.”

Both of these categories, she notes, harm the same people: those who are disproportionately abused end up being censored by an algorithm for talking about it.
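To illustrate Fiesler’s point, here is a minimal, hypothetical sketch of context-blind keyword filtering. It is not TikTok’s actual system, whose internals are not public; the placeholder terms and function names are invented for the example. The point is that a filter that only matches banned terms removes both an abusive post and a post describing that abuse.

```python
# Toy illustration only: a naive keyword filter, not TikTok's real moderation system.
# It shows why hate speech and talking about hate speech can look identical
# to a context-blind algorithm.

BLOCKED_TERMS = {"slur_a", "slur_b"}  # hypothetical placeholder terms


def naive_flag(text: str) -> bool:
    """Flag any post containing a blocked term, with no regard for context."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return bool(words & BLOCKED_TERMS)


abusive_post = "you are a slur_a"                                # hate speech
counter_speech = "someone called me a slur_a today and it hurt"  # describing hate speech

print(naive_flag(abusive_post))    # True  -> correctly removed
print(naive_flag(counter_speech))  # True  -> false positive: the victim's post is removed too
```

In this toy setup, both posts trigger the same rule, which is the false-positive pattern Fiesler describes: the people being targeted get censored for talking about it.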

TikTok’s mysterious recommendation algorithms are part of its success, but their unclear and constantly evolving limits are already having a chilling effect on some users. Fiesler notes that many TikTok creators self-censor words on the platform in order to avoid triggering moderation. And while she isn’t exactly sure how much this tactic accomplishes, Fiesler has also started doing it herself, just in case. Account bans, algorithmic mysteries, and bizarre moderation decisions are all part of the conversation on the app.


