‘This is a game changer’: Runway releases new AI facial expression motion capture feature Act-One


AI video has come incredibly far in the years since the first models debuted in late 2022, increasing in realism, resolution, fidelity, prompt adherence (how well they match the text prompt or description of the video that the user typed) and sheer number of available models.

But one area that remains a limitation for many AI video creators — myself included — is depicting realistic facial expressions in AI-generated characters. Most appear quite limited and are difficult to control.

But no longer: today, Runway, the New York City-headquartered AI startup backed by Google and others, announced a new feature, “Act-One,” that allows users to record video of themselves or actors with any video camera — even the one on a smartphone — and then transfer the subject’s facial expressions onto an AI-generated character with uncanny accuracy.

The free-to-use tool is rolling out “gradually” to users starting today, according to Runway’s blog post on the feature.

While anyone with a Runway account can access it, the feature is limited to those who have enough credits to generate new videos on the company’s Gen-3 Alpha video generation model, introduced earlier this year. Gen-3 Alpha supports text-to-video, image-to-video, and video-to-video AI creation pipelines: the user can type in a scene description, upload an image or a video, or combine these inputs, and Gen-3 Alpha will use what it’s given to guide its generation of a new scene.

Despite its limited availability at the time of this posting, the burgeoning scene of AI video creators online is already applauding the new feature.

As Allen T. remarked on his X account: “This is a game changer!”

It also comes on the heels of Runway’s move into Hollywood film production last month, when it announced it had inked a deal with Lionsgate, the studio behind the John Wick and Hunger Games movie franchises, to create a custom AI video generation model based on the studio’s catalog of more than 20,000 titles.

Simplifying a traditionally complex and equipment-heavy creative process

Traditionally, facial animation has required extensive and often cumbersome processes, including motion-capture equipment, manual face rigging, and multiple passes of reference footage.

Anyone interested in filmmaking has likely glimpsed the intricacy and difficulty of this process on set or in behind-the-scenes footage of effects-heavy, motion-capture-driven films such as The Lord of the Rings series, Avatar, or Rise of the Planet of the Apes, in which actors are covered in ping-pong-ball markers, their faces dotted with tracking points and framed by head-mounted camera rigs.

Accurately modeling intricate facial expressions is what led David Fincher and his production team on The Curious Case of Benjamin Button to develop entirely new 3D modeling processes, work that ultimately won them an Academy Award, as VentureBeat previously reported.

Yet in the last few years, new software and AI-based startups such as Move have sought to reduce the equipment necessary for accurate motion capture — though that company in particular has concentrated primarily on broader, full-body movements, whereas Runway’s Act-One focuses on modeling facial expressions.

With Act-One, Runway aims to make this complex process far more accessible. The new tool allows creators to animate characters in a variety of styles and designs, without the need for motion-capture gear or character rigging.

Instead, users can rely on a simple driving video to transpose performances—including eye-lines, micro-expressions, and nuanced pacing—onto a generated character, or even multiple characters in different styles.

As Runway wrote on its X account: “Act-One is able to translate the performance from a single input video across countless different character designs and in many different styles.”

The feature is focused “mostly” on the face “for now,” according to Cristóbal Valenzuela, co-founder and CEO of Runway, who responded to VentureBeat’s questions via direct message on X.

Runway’s approach offers significant advantages for animators, game developers, and filmmakers alike. The model accurately captures the depth of an actor’s performance while remaining versatile across different character designs and proportions. This opens up exciting possibilities for creating unique characters that express genuine emotion and personality.

Cinematic realism across camera angles

One of Act-One’s key strengths lies in its ability to deliver cinematic-quality, realistic outputs from various camera angles and focal lengths.

This flexibility enhances creators’ ability to tell emotionally resonant stories through character performances that were previously hard to achieve without expensive equipment and multi-step workflows.

The tool faithfully captures the emotional depth and performance style of an actor, even in complex scenes.

This shift allows creators to bring their characters to life in new ways, unlocking the potential for richer storytelling across both live-action and animated formats.

While Runway already supported video-to-video AI conversion, as mentioned above — letting users upload footage of themselves and have Gen-3 Alpha or earlier Runway models such as Gen-2 “reskin” them with AI effects — the new Act-One feature is optimized specifically for facial mapping and effects.

As Valenzuela told VentureBeat via DM on X: “The consistency and performance is unmatched with Act-One.”

Enabling more expansive video storytelling

A single actor, using only a consumer-grade camera, can now perform multiple characters, with the model generating distinct outputs for each.

This capability is poised to transform narrative content creation, particularly in indie film production and digital media, where high-end production resources are often limited.

In a public post on X, Valenzuela noted a shift in how the industry approaches generative models. “We are now beyond the threshold of asking ourselves if generative models can generate consistent videos. A good model is now the new baseline. The difference lies in what you do with the model—how you think about its applications and use cases, and what you ultimately build,” Valenzuela wrote.

Safety and protection for public figure impersonations

As with all of Runway’s releases, Act-One comes equipped with a comprehensive suite of safety measures.

These include safeguards to detect and block attempts to generate content featuring public figures without authorization, as well as technical tools to verify voice usage rights.

Continuous monitoring also ensures that the platform is used responsibly, preventing potential misuse of the tool.

Runway’s commitment to ethical development aligns with its broader mission to expand creative possibilities while maintaining a strong focus on safety and content moderation.

Looking ahead

As Act-One gradually rolls out, Runway is eager to see how artists, filmmakers, and other creators will harness this new tool to bring their ideas to life.

With Act-One, complex animation techniques are now within reach for a broader audience of creators, enabling more people to explore new forms of storytelling and artistic expression.

By reducing the technical barriers traditionally associated with character animation, the company hopes to inspire new levels of creativity across the digital media landscape.

It also helps Runway stand out and differentiate its AI video creation platform from a growing field of competitors, including Luma AI from the U.S. and Hailuo and Kling from China, as well as open-source rivals such as Genmo’s Mochi 1, which also just debuted today.


