Deepfakes are now doing business presentations

New workplace technologies often start their lives as both status symbols and productivity aids. The first car phones and PowerPoint presentations closed deals while also signaling the importance of their users.

Some partners at EY, the accounting giant formerly known as Ernst & Young, are now testing a new workplace gadget for the age of artificial intelligence. They spice up client presentations and routine emails with synthetic talking-head video clips featuring virtual doubles of themselves made with AI software – an enterprise version of the technology commonly known as deepfakes.

The company’s exploration of the technology, provided by UK startup Synthesia, comes as the pandemic has undermined more traditional ways of cementing business relationships. Golf outings and long lunches are difficult or impossible; Zoom calls and PDFs are too routine.

EY partners have used their doubles in emails and to enliven presentations. One partner who does not speak Japanese used the translation feature built into Synthesia’s technology to show a client in Japan an AI avatar of themselves speaking the language, with a seemingly positive effect.

Video: Synthesia, a London-based startup, has developed tools that make it easy to create synthetic videos of real people. (Courtesy of Synthesia.)

“We use it to differentiate and strengthen who the person is,” says Jared Reeder, who works at EY on a team that provides creative and technical support to partners. In recent months, he has specialized in making AI doubles of his colleagues. “Instead of sending an email that says ‘Hey, we’re still on for Friday,’ you can see me and hear my voice,” he says.

The clips are openly presented as synthetic, not passed off as real videos intended to deceive viewers. Reeder says they have proven an effective way to liven up otherwise routine interactions with clients. “It’s like bringing a puppy in front of the camera,” he says. “They warm up to it.”

New business tools require new jargon: EY calls its virtual doubles ARIs, for artificial reality identity, rather than deepfakes. Whatever you call them, they are the latest example of the commercialization of AI-generated imagery and audio, a technique that became widely known to the public in 2017 when synthetic pornographic clips of Hollywood actors began circulating online. Deepfakes have since become more convincing, more commercial, and easier to make.

The technology has found uses in animating archival photos, generating models to show off new clothes, and in conventional Hollywood productions. Lucasfilm recently hired a prominent member of the thriving online community of amateur deepfakers, who had garnered millions of views for clips in which he reworked faces in Star Wars footage. Nvidia, whose graphics chips power many AI projects, revealed last week that part of a recent speech by CEO Jensen Huang had been faked with the help of machine learning.

Synthesia, which powers EY’s ARIs, has developed a suite of tools for creating synthetic videos. Its clients include the advertising company WPP, which has used the technology to deliver internal corporate messages in different languages without the need for multiple video shoots. EY has helped some consulting clients produce synthetic clips for internal announcements.
