FaceDirector software allows manipulation of actor performances in post-production

Disney Research Zürich has released a white paper on the newly developed algorithms and methods its system uses to redirect scenes after they have been shot. Aptly named FaceDirector, the system allows a director to seamlessly adjust an actor's facial expressions and delivery by combining multiple production takes.

In television and film production, one of the key elements of successful storytelling is the performance of the actors. A key hurdle for a director to overcome is pushing actors to convey believable emotions with appropriate facial expressions, pacing and timing. FaceDirector could change how advertisements, long-format programming and films are created.

“Scenes are often shot and re-shot over and over as multiple takes until the director is satisfied, often requiring considerable amounts of time and cost. For example, the opening scene of the movie “The Social Network” required 99 takes, “Gone Girl” required an average of 50 takes per scene and one scene in “The Shining” required 127 takes.”

What makes FaceDirector so groundbreaking is that the system uses both visual and auditory markers to synchronize takes. The system tracks and accounts for differences in head pose, emotion, expression intensity, as well as pitch accentuation and even the wording of the speech. More importantly, the system works with normal 2D video acquired by standard cameras used in today's productions. There is no need for additional hardware, special cameras or 3D face reconstruction.
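To give a flavor of what synchronizing two takes involves, the core alignment problem can be sketched with dynamic time warping over a per-frame audio feature (such as loudness), which finds the best frame-to-frame correspondence between a fast delivery and a slow one. This is a minimal illustration of the general technique, not Disney's actual FaceDirector algorithm; the function name and the toy feature sequences are assumptions for the example.

```python
import numpy as np

def dtw_align(a, b):
    """Align two 1-D feature sequences (e.g., per-frame audio energy of
    two takes) with dynamic time warping. Returns the warping path as a
    list of (frame_in_a, frame_in_b) index pairs."""
    n, m = len(a), len(b)
    # Accumulated-cost matrix; border is infinite except the origin.
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of take A
                                 cost[i, j - 1],      # skip a frame of take B
                                 cost[i - 1, j - 1])  # match frames
    # Backtrack from the end to recover the frame correspondence.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Take B is a slower delivery of the same line as take A.
take_a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
take_b = np.array([0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 0.0])
path = dtw_align(take_a, take_b)
```

Once such a correspondence exists, each frame of one take has a matching frame in the other, which is what makes blending performances possible; the real system, per the paper, combines audio cues like these with facial-tracking cues.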

FaceDirector seems to be the answer every director has been looking for. A director can now exert control over an actor’s performance after the shoot with just a few takes.

This means great things for an agency’s clients as well.

  • More creative control for both the director and client.
  • Ability to correct mistakes and errors in actors’ performances without having to re-shoot the scene, reducing costs.
  • Ability to shorten an actor’s performance for more concise delivery of lines.

There are, of course, existing techniques we can deploy to manipulate an actor’s timing, facial expression and performance. The most traditional way has always been to cut around the bad parts and join multiple takes of the actor’s performance throughout the scene, but this always requires cutting away to different framing or to b-roll.

At INN we utilize Adobe Creative Cloud. Since the release of Creative Cloud 2015, Adobe has implemented tools that make our manipulations of an actor’s performance more seamless. Morph Cut is a powerful transition within Adobe Premiere Pro that utilizes the facial tracking developed for Adobe After Effects; it analyzes the footage at the cut point, tracks facial cues and morphs between takes, creating one seamless shot. Within Adobe After Effects, the new Face Tracker allows you to obtain a truly profound amount of data about the geometry of a person’s face. While this doesn’t directly help modify or re-time an actor’s performance, it does allow us to adjust an actor’s facial expressions to some degree.

Although there is currently no commercial implementation of FaceDirector and it remains lab research, it is a sign of innovation to come in the video world.