The world needs deepfake experts to stem this chaos


Recently, Myanmar’s military coup government added serious corruption allegations to an existing set of spurious cases against Burmese leader Aung San Suu Kyi. The new accusations build on statements by a prominent detained politician that were first released in a March video that many in Myanmar suspected of being a deepfake.

In the video, the voice and face of the political prisoner appear distorted and unnatural as he claims in detail that he provided gold and silver to Aung San Suu Kyi. Social media users and journalists in Myanmar immediately questioned whether the statement was real. The incident illustrates a problem that will only get worse: as deepfakes improve, people become more willing to dismiss real footage as a deepfake. What tools and skills will be available to investigate both kinds of claims, and who will use them?

In the video, Phyo Min Thein, the former chief minister of Myanmar’s largest city, Yangon, sits in a bare room, apparently reading a statement. His speech sounds strange and unlike his normal voice, his face barely moves, and in the shoddy version that first circulated, his lips seem out of sync with his words. Seemingly everyone wanted to believe it was a fake. Screenshots from an online deepfake detector spread quickly, showing a red box around the politician’s face and a claim, with over 90 percent confidence, that the confession was a deepfake. But Burmese journalists lacked the forensic expertise to judge for themselves. Past state and present military actions have reinforced the grounds for suspicion: government spokespeople have shared staged footage targeting the Rohingya ethnic group, while the coup’s organizers have denied that social media evidence of their killings could be real.

But was the prisoner’s “confession” really a deepfake? Along with deepfake researcher Henry Ajder, I consulted deepfake creators and media forensics specialists. Some noted that the video was of poor enough quality that the mouth glitches people saw were as likely to be compression artifacts as evidence of a deepfake. Detection algorithms are also unreliable on low-quality compressed video. His unnatural-sounding voice could be the result of reading a script under extreme duress. If it is a fake, it is a very good one, because his throat and chest move in sync with his words at key moments. The researchers and creators were generally skeptical that the video was a deepfake, but not certain. At this point it seems more likely to be something human rights activists like me are all too familiar with: a coerced or forced confession on camera. In any case, the substance of the allegations should not be trusted given the circumstances of the military coup, absent a legitimate legal process.

Why does this matter? Whether the video is a coerced confession or a deepfake, the result is most likely the same: words digitally or physically forced out of a prisoner’s mouth by a coup government. And while the use of deepfakes to create nonconsensual sexual images currently far outstrips political instances, deepfake and synthetic media technology is rapidly improving, proliferating, and commercializing, expanding the potential for harmful uses. The Myanmar case demonstrates the growing gap between the ability to make deepfakes, the opportunities to claim that a real video is a deepfake, and our ability to challenge either.

It also illustrates the challenges of getting the public to trust free online detectors without understanding the strengths and limitations of detection or how to second-guess a misleading result. Deepfake detection is still an emerging technology, and a detection tool that works on one creation approach often fails on another. We should also be wary of counter-forensics, where someone deliberately takes steps to confuse a detection approach. And it is not always possible to know which detection tools to trust.
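One way to see why a single detector readout deserves skepticism is to test how much the score moves under compression alone. Below is a minimal sketch in Python, assuming ffmpeg is installed; the score function is a hypothetical stand-in for whatever detection model is in use, since the article names no specific tool. It re-encodes the same clip at increasingly lossy settings and reports the score each time; if the numbers swing widely across re-encodes, a one-off “over 90 percent fake” verdict on a single compressed copy of a video tells you very little.

    # Sketch: probe how sensitive a deepfake detector's score is to compression.
    # Requires ffmpeg on PATH; "score" is a hypothetical callable mapping a
    # video file path to a fake-probability in [0, 1].
    import subprocess
    from typing import Callable

    def recompress(src: str, dst: str, crf: int) -> None:
        # Re-encode with x264 at the given constant rate factor
        # (a higher CRF means heavier, lossier compression).
        subprocess.run(
            ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-crf", str(crf), dst],
            check=True,
            capture_output=True,
        )

    def compression_sensitivity(src: str, score: Callable[[str], float]) -> None:
        # Score the original clip, then progressively degraded re-encodes of it.
        print(f"original: {score(src):.2f}")
        for crf in (28, 35, 42):  # mild, heavy, and severe compression
            dst = f"{src.rsplit('.', 1)[0]}_crf{crf}.mp4"
            recompress(src, dst, crf)
            print(f"crf={crf}: {score(dst):.2f}")

    # Hypothetical usage: compression_sensitivity("clip.mp4", my_detector_score)

A detector whose verdict flips between “real” and “fake” as the same footage degrades is telling you about the compression, not the content, which is exactly the trap the Myanmar screenshots fell into.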

How can we prevent conflicts and crises around the world from being blindsided by deepfakes and supposed deepfakes?

We should not turn ordinary people into deepfake spotters, squinting at pixels to discern truth from falsehood. Most people will do better relying on simpler media literacy approaches, such as the SIFT method, which focus on checking other sources and finding the original context of videos. In fact, encouraging people to play amateur forensic analyst can send them down the rabbit hole of conspiratorial distrust in images.




