Deepfakes Are Here Now

FAKE NEWS, as those eerily accurate videos of a lip-synced Barack Obama demonstrated last year, is soon going to get a hell of a lot worse. As a newly revealed video-manipulation system shows, super-realistic fake videos are improving faster than some of us thought possible.

We’re already getting a taste of the jaw-dropping technologies being deployed by agencies and corporations with the funding to develop them.

One of these systems, dubbed Deep Video Portraits, shows the dramatic extent to which deepfake videos are improving. The manipulated Obama video from last year, developed at the University of Washington, was impressive, but it only involved facial expressions, and it was fairly obviously an imitation. Still, the exercise served as an important proof of concept, showcasing the scary potential of deepfakes: highly realistic, computer-generated fake videos. That future, as the new Deep Video Portraits technology shows, is getting here pretty damned fast.

The new system was developed by Michael Zollhöfer, a visiting assistant professor at Stanford University, and his colleagues at the Technical University of Munich, the University of Bath, Technicolor, and other institutions. Zollhöfer’s approach uses input video to create photorealistic re-animations of portrait videos. These input videos are recorded by a source actor, and the motion data extracted from them is used to manipulate the portrait video of a target actor. So, for example, anyone could serve as the source actor and have their facial expressions transferred to video of, say, Barack Obama or Vladimir Putin.

But it’s more than just facial expressions. The new technique allows for an array of movements, including full 3D head position, head rotation, eye gaze, and eye blinking. The system uses AI in the form of generative neural networks to do the trick: the motion tracked in the source video is used to calculate, or predict, photorealistic frames of the given target actor. Impressively, the animators don’t have to alter the graphics for existing body hair, the target actor’s body, or the background.
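The core idea described above can be sketched in a few lines of code. This is a hypothetical, heavily simplified stand-in, not the researchers’ actual system: per-frame motion parameters (head rotation, expression, gaze) are tracked from the source actor, then handed to a renderer that would, in the real pipeline, be a generative neural network producing photorealistic frames of the target. All names here are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the tracked per-frame parameters the
# article describes: head rotation, facial expression, and eye gaze.
@dataclass
class FrameParams:
    head_rotation: tuple   # e.g. (yaw, pitch, roll)
    expression: tuple      # low-dimensional expression coefficients
    gaze: tuple            # eye-gaze direction

def transfer_motion(source_frames, target_identity):
    """Copy the source actor's motion parameters onto the target.

    In the real system, a generative neural network renders each
    parameter set into a photorealistic frame of the target actor;
    here a placeholder string stands in for that rendering step.
    """
    rendered = []
    for p in source_frames:
        # The target keeps their own identity (face, skin, hair,
        # background); only the motion parameters come from the source.
        rendered.append(
            f"{target_identity}: rot={p.head_rotation} "
            f"expr={p.expression} gaze={p.gaze}"
        )
    return rendered

source = [
    FrameParams((0, 0, 0), (0.1, 0.0), (0.0, 0.0)),
    FrameParams((5, -2, 0), (0.4, 0.2), (0.1, -0.1)),
]
frames = transfer_motion(source, "target-actor")
print(len(frames))  # one synthesized frame per source frame
```

The point of the sketch is the separation of concerns the article notes: identity stays with the target, while motion comes entirely from the source.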

Secondary algorithms are used to correct glitches and other artifacts, giving the videos a slick, super-realistic look. They’re not perfect, but holy crap they’re impressive. The paper describing the technology, in addition to being accepted for presentation at SIGGRAPH 2018, was published in the peer-reviewed science journal ACM Transactions on Graphics.

Deep Video Portraits presents a highly efficient way to do computer animation and to acquire photorealistic movements from pre-existing acting performances. The system could be used in audio dubbing, for example, when creating versions of films in other languages: if a film is shot in English, the tech could alter the actors’ lip movements to match dubbed French or Spanish audio.

Unfortunately, this system will likely be abused—a problem not lost on the researchers.

“For example, the combination of photo-real synthesis of facial imagery with a voice impersonator or a voice synthesis system, would enable the generation of made-up video content that could potentially be used to defame people or to spread so-called ‘fake-news’,” writes Zollhöfer at his Stanford blog. “Currently, the modified videos still exhibit many artifacts, which makes most forgeries easy to spot. It is hard to predict at what point in time such ‘fake’ videos will be indistinguishable from real content for our human eyes.”

Sadly, deepfake tech is already being used in pornography, with early efforts to reduce or eliminate these invasive videos proving to be largely futile. But for the burgeoning world of fake news, there are some potential solutions, like watermarking algorithms. In the future, AI could be used to detect fakes, sniffing for patterns that are invisible to the human eye. Ultimately, however, it’ll be up to us to discern fact from fiction.
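The watermarking idea mentioned above can be illustrated with a toy fragile watermark: hide a known bit pattern in the least significant bits of pixel values, so that later edits to the frame disturb the pattern and verification fails. This is only a sketch of the principle under simple assumptions; real forensic watermarks are far more robust, and the scheme here is not from the article or any particular product.

```python
# Toy fragile watermark: a known signature hidden in pixel low bits.
WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrary signature bits

def embed(pixels):
    """Return a copy of pixels with the watermark in the low bits."""
    out = list(pixels)
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & ~1) | bit  # overwrite least significant bit
    return out

def verify(pixels):
    """True if the leading pixels still carry the watermark."""
    return all((pixels[i] & 1) == bit for i, bit in enumerate(WATERMARK))

frame = [200, 13, 77, 54, 91, 180, 33, 6, 120, 64]  # fake pixel values
marked = embed(frame)
print(verify(marked))    # True: watermark intact

tampered = list(marked)
tampered[1] = 255        # a manipulated pixel, as a forgery would cause
print(verify(tampered))  # False: the edit broke the watermark
```

A detector relying on such marks flags content whose signature is missing or damaged, which is one way provenance checks could complement the AI-based detection the article anticipates.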

“In my personal opinion, most important is that the general public has to be aware of the capabilities of modern technology for video generation and editing,” writes Zollhöfer. “This will enable them to think more critically about the video content they consume every day, especially if there is no proof of origin.”

* * *

Source: Gizmodo


1 Comment

  1. JimB
    October 8, 2018 at 12:39 am

    This reminds me of the tech in the scene from the old ’80s movie The Running Man, where Schwarzenegger’s character (Ben Richards) was “recorded” supposedly disobeying orders, as a helicopter policeman, not to fire upon innocent civilians during a food riot, thus becoming the mass murderer of “the Bakersfield massacre.” In fact, he was the one who refused to carry out the crime and tried to stop his fellow officer from doing the deed. The System used a similar type of tech to superimpose his likeness onto the actual mass-murdering cop.

    Yes, indeed, deepfakes. Scary stuff. With this type of tech, anyone could be “proven” to be a “terrorist”.
