Deepfake: Democracy for CGI

On October 11th, Gemini Man bombed at the box office. 

With Will Smith playing a 51-year-old assassin tasked with killing Junior, his own 23-year-old clone, Gemini Man is packed with action and combat scenes. It really makes you wonder how one man can so convincingly portray two different people. Strategic camera angles, perhaps? Post-production editing? Does Will Smith have a secret twin?

Well, the secret’s out. Junior isn’t played by a secret twin… nor is he played by Smith under layers of prosthetics and heavy makeup. In fact, he isn’t played by anyone at all. He is one hundred percent CGI, from his photorealistic skin pores to the digital tears spilling from his eyes.

It’s a tough pill to swallow for acclaimed director Ang Lee and A-list lead Will Smith. The movie’s global opening weekend earnings totalled about $60 million, which doesn’t look good against its $140 million budget. Seems like its groundbreaking high frame rate technology, which cost the filmmakers tens of millions of dollars, didn’t exactly inspire the public to flock to the theatres.

New Zealand visual effects company Weta Digital spent hundreds of hours modelling and animating Junior. But the public may be justified in their relative indifference to these innovations. After all, stuff like this has been done before, for a minute fraction of the cost. Just look at this parody remake of The Matrix, uploaded by a single Deepfake creator who goes by “Sham00K”. Or at Zao, the viral Chinese app that exploded last month when it launched on the App Store.

Zao was controversial for many reasons. Previously, Deepfake had only really been used on celebrities and public figures, who have a wealth of image and video data available online, and that data easily lends itself to manipulation. But with Zao, just one selfie let you insert your own face into hundreds of music video and movie clips. With the tap of a button, you traded places with Leonardo DiCaprio, staring out at sea on the Titanic. This kindled conversation about what will happen when Deepfake creators move on from generating celebrity pornography (which accounts for over 96% of Deepfake content today). Will they become interested in manipulating democratic elections, bullying people, or slandering corporations?
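
Part of what makes those questions so pressing is how little machinery is actually involved. The trick popularised by the original open-source face-swap projects is an autoencoder with one shared encoder and a separate decoder per identity; Zao itself almost certainly uses something slicker to get away with a single selfie, but the rough idea is the same. Here is a minimal, purely illustrative PyTorch sketch of that shared-encoder idea — the layer sizes, the 64x64 resolution, and the training loop are my own assumptions, not the internals of Zao or any particular tool:

```python
# Sketch of the shared-encoder / two-decoder idea behind the original
# open-source face-swap tools. One encoder learns a generic "face" code from
# aligned face crops of both people; each person gets their own decoder.
# The swap is simply decoding person A's code with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # 32x32 -> 16x16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.1), # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512),  # shared "face" code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # back to 64x64 RGB
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=5e-5,
)

def train_step(faces_a, faces_b):
    """Each decoder learns to reconstruct its own person's (N, 3, 64, 64) face crops."""
    loss = nn.functional.l1_loss(decoder_a(encoder(faces_a)), faces_a) \
         + nn.functional.l1_loss(decoder_b(encoder(faces_b)), faces_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def swap_a_to_b(faces_a):
    """The actual 'deepfake': encode A's face, decode it as B."""
    with torch.no_grad():
        return decoder_b(encoder(faces_a))
```

The quality of the result depends heavily on how many face crops you can feed into training, which is exactly why celebrities, with thousands of photos and interview frames floating around online, were the first targets.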

The amount of available Deepfake content has doubled in the last nine months, making questions about the future of this technology more important than ever. And the price of that realism is dropping too… exponentially. In as little as ten years, Weta Digital’s fancy high frame rate CGI will be an expected service in the graphics industry. 

In the same way blogging changed the traditional publishing industry, Deepfake almost seems like it’s making CGI accessible to the average Joe. Anyone with a smartphone can use Zao and have a laugh. But where Deepfake differs from blogging is its near-total lack of monetisation opportunities. It’s hard to imagine the largely underground scene fitting into any business model, or attracting corporate sponsorship. Of course, the free, open-source nature of the software is also a major roadblock to commercialisation. This means there is little incentive to develop and improve the technology (unless perhaps you are a fan of Scarlett Johansson). Weta Digital was certainly paid handsomely for its work on Gemini Man, but is Hollywood the only place for Deepfake? Can CGI be democratised, if you will, for the use of everyday people?

This is where things can get interesting. Companies like Facebook and Google have been trying to improve Deepfake detection for a while now, primarily with the aim of deterring the proliferation of “fake news” on their sites. But they can take this further and help steer Deepfake in the right direction. Perhaps one day we will see upgrades to Snapchat Bitmoji or Apple Memoji, or even visual avatars for currently voice-only assistants like Alexa and Siri. Few know what other applications this software could have in our world, but the future certainly looks bright. So long as we use it for good. 
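
As for what that detection effort looks like under the hood: a common baseline in the research community is to crop faces out of video frames, fine-tune an ordinary image classifier to label each crop real or fake, and then average the per-frame scores over the video. The sketch below is a generic illustration of that baseline only; the backbone, input size, and threshold are my own assumptions and nothing here reflects any particular company’s system.

```python
# Generic sketch of a frame-level deepfake detector: fine-tune an off-the-shelf
# image classifier on face crops labelled real/fake, then average per-frame
# scores over a video. Model choice, input size, and threshold are assumptions.
import torch
import torch.nn as nn
from torchvision import models

detector = models.resnet18(weights="IMAGENET1K_V1")  # ImageNet-pretrained backbone
detector.fc = nn.Linear(detector.fc.in_features, 1)  # single "fake" logit
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_step(face_crops, labels):
    """face_crops: (N, 3, 224, 224) tensor; labels: (N,) with 1.0 = fake, 0.0 = real."""
    logits = detector(face_crops).squeeze(1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def flag_video(frame_crops, threshold=0.5):
    """Average per-frame fake probabilities and flag the video if they exceed the threshold."""
    detector.eval()
    with torch.no_grad():
        probs = torch.sigmoid(detector(frame_crops).squeeze(1))
    return probs.mean().item() > threshold
```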

So, dear Deepfake creators, please don’t be evil.
