Have you ever wanted to be somebody else? Now, thanks to a new video manipulation technique, you may soon be able to (in videos, at least). The new facial tracking software is called Face2Face, and it allows target videos to be altered ‘with any commodity webcam’. It was developed at Stanford University, building on the work of fellow academics at the University of Erlangen-Nuremberg in Germany and the Max Planck Institute for Informatics.
The innovative facial reenactment tech is due to be exhibited at the IEEE Conference on Computer Vision and Pattern Recognition in June. For now, however, a demonstration video has been released on the Internet, and it is causing quite a lot of excitement.
The technology has been (fittingly) named Face2Face because it allows one face to control another. Amazingly, because the technology relies solely on RGB data (for both the source and target video), the technique can be used to modify YouTube videos in real time. The results are truly outstanding, and certainly make you wonder if we are quickly approaching a time when we won’t be able to believe our own eyes.
Despite the more sinister applications that come to mind (which we will discuss later), the ability to precisely control another person’s face is also genuinely amusing. When one considers how popular smartphone apps like Face Swap have become, the commercial applications of Face2Face are obvious. After all, it will be incredibly fun to control other people’s faces, whether they are celebrities or friends.
So how does Face2Face work?
A short abstract explaining how the researchers managed to commandeer people’s faces appears on the Stanford University website (alongside a video demonstration). In it, Matthias Nießner explains how his team built on earlier third-party research to achieve these latest results,
‘Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling.’
Although this sounds complicated, the results were achieved by carefully examining both the target video and the input face simultaneously. By doing so, the researchers were able to create a third output video in which the target face performs the input facial movements. The remarkable result is that anybody can control Obama’s, Putin’s, or anyone else’s face in YouTube videos,
‘At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target.’
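To get an intuition for what ‘deformation transfer’ means here, consider a heavily simplified sketch. The real system fits a dense parametric 3D face model to every frame; the toy version below (all names and numbers are illustrative, not from the paper) just measures how each vertex of a source face moved away from its neutral pose and applies the same displacement to a different target face:

```python
import numpy as np

# Toy stand-ins for fitted 3D face meshes (the real system uses dense
# parametric models recovered from monocular video, not random points).
rng = np.random.default_rng(0)
n_vertices = 5

source_neutral = rng.random((n_vertices, 3))       # source actor, resting face
source_expression = source_neutral + 0.05 * rng.random((n_vertices, 3))  # source, e.g. smiling
target_neutral = rng.random((n_vertices, 3))       # target actor, resting face

# Simplified deformation transfer: compute how each source vertex moved
# relative to its neutral pose, then apply that displacement to the target.
displacement = source_expression - source_neutral
target_expression = target_neutral + displacement

# target_expression now mimics the source's expression on the target's face.
```

The published method transfers per-region deformations within a shared face-model space rather than raw vertex displacements, but the core idea is the same: expression change is measured on the source and replayed on the target.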
Incredibly, Face2Face even manages to alter the inside of the mouth in target videos in a realistic way, allowing the target to be animated into open-mouthed expressions even when the mouth is tightly closed in the original.
‘In order to evaluate our approach, we perform a cross-validation based on optical flow. To this end, we retrieve mouth interiors from the first half of the video; the second half is used for evaluation queries. As we can see our re-rendering error is very low,’ explains the team in the video.
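A ‘re-rendering error’ of the kind the team describes boils down to comparing a synthesized frame against the real one, pixel by pixel. As a rough illustration (a hypothetical metric, not the paper’s exact optical-flow-based evaluation), here is a mean per-pixel photometric error between two RGB frames:

```python
import numpy as np

def rerendering_error(rendered, ground_truth):
    """Mean absolute per-pixel difference between two RGB frames (0-255)."""
    return np.mean(np.abs(rendered.astype(float) - ground_truth.astype(float)))

# Stand-in 4x4 video frame and a re-rendered copy with one slightly-off pixel.
ground_truth = np.full((4, 4, 3), 120, dtype=np.uint8)
rendered = ground_truth.copy()
rendered[0, 0] = [130, 120, 120]

print(rerendering_error(rendered, ground_truth))  # low value = close reenactment
```

A low score means the synthesized mouth region is nearly indistinguishable from real footage, which is exactly the claim the researchers make in the video.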
In fact, the level of control is truly mesmerizing, and although it clearly has entertainment purposes, it is also easy to spot a much more nefarious use for the technology. The technique could, for example, be used to tamper with footage that is later relied upon in court: video of a suspect could be altered to make them appear to say something that they had not. This means that the technology (combined with some form of vocal processing) could easily be used to frame somebody for a crime that they had not committed.
This is not the only foreseeable problem either. The choice of faces that are manipulated in the demo video (those of politicians) also brings corrupt uses to mind. Imagine how easy it might be, for example, to make one of the current presidential candidates say something embarrassing or incriminating. That footage could easily receive a lot of press coverage and ruin the candidate’s chances of being elected.
Even if later proven to be a hoax, altered footage could be terribly damaging to a person’s reputation in the meantime, especially considering the viral nature of social media these days.
In fact, you would not need to be a celebrity or political leader to be harmed by the defamatory ways in which Face2Face manipulation could be applied. Being made to look like you spoke badly about a colleague or boss, for example, could lead to unemployment. Lies about infidelity could lead to divorce.
The nefarious uses are almost endless, and although this technology is still in its early stages (and the artifacts present in the footage are probably easy to spot), there may come a day when video manipulation of this kind is a very serious security risk.