6.5 – Full Character Control Lesson

Take your AI-generated movie scenes to the next level by mastering full control of your characters’ facial expressions with Runway’s “Act One” tool. This lesson shows how to move beyond simple lip-syncing, giving your characters subtle (or bold) new performances. Follow the paired tutorial video for a step-by-step walkthrough.

What you'll learn

  • Use Runway’s Act One tool to animate facial expressions on your characters

  • Upload and work with driving videos for custom expression mapping

  • Troubleshoot common issues, like face selection in multi-character scenes

  • Pair advanced face animation with AI-generated dialogue and voice changing

  • Integrate edited character shots back into your main video

  • Enhance scene realism and personality beyond basic lip syncing

Lesson Overview

Bringing believable emotion and nuance to AI-generated characters is a recurring challenge for creators. While previous lessons covered techniques for controlling voices and lip-syncing, subtle expressions—like smirks, eye rolls, or comedic delivery—remain tricky. This lesson introduces Runway’s Act One, a tool that transfers your real facial expressions onto AI characters through a driving video. Rather than relying on random outputs or basic mouth movement, Act One lets you act out moments yourself (for example, rolling your eyes or reacting sarcastically), then maps those expressions onto any character in your scene.

We’ll also see how this can be paired with advanced voice changing, making it possible to coordinate both voice and expression for maximum impact. If you’re building dialogue-heavy or emotionally complex scenes, these skills are especially valuable. Runway’s tool works best for creative projects, skits, and even professional demos where facial nuance matters. The bridge between your own performance and digital characters is now wide open, giving you a direct way to infuse personality and intent into every frame.

Who This Is For

Whether you’re looking to animate subtle emotions or orchestrate wild facial reactions, this lesson is designed for:

  • Filmmakers working with AI-generated content
  • Creators who want more nuanced character performances
  • Content producers using AI for explainer videos or skits
  • Solo storytellers eager to put their performance into their digital work
  • Educators or workshop leaders needing fully expressive AI avatars

Where This Fits in a Workflow

After developing your characters’ voices and achieving basic lip-sync, this lesson helps you add another essential layer: custom facial animation through expression mapping. You’ll typically use this technique once your base scenes are generated and you want to refine character delivery for important lines or standout actions. For example, a reaction shot requiring a believable eyeroll or a sarcastic grin now becomes possible without elaborate prompting or repeated attempts. This makes your editing workflow more flexible and gives you directorial choice over which character gets an expression—even if multiple faces share the screen. It’s especially handy for dialogue sequences or one-off comedic moments that demand a personal touch.

Technical & Workflow Benefits

Before tools like Runway’s Act One, creators were limited to whatever facial expressions happened to be generated, or forced to rely on simplistic lip-sync animations. These methods lacked subtlety and made it difficult to inject specific emotion or timing. By uploading driving videos, you sidestep those limits, letting you perform and capture even challenging microexpressions with ease. In multi-character scenes, Act One lets you assign expressions precisely (and with a simple workaround, even fix incorrect character mapping).

For instance, pairing a performance with a voice changed via ElevenLabs offers full control: the delivery, timing, and facial cues can all match perfectly. This not only saves time during production but also lifts the quality and personality of your finished video—making it far more compelling for viewers.

Practice Exercise

Try applying what you’ve just learned to a scene of your own:

  1. Record a short driving video of yourself performing a specific reaction—maybe a skeptical eyebrow raise or broad smile.
  2. Use Runway’s Act One to apply your performance to a character in one of your existing AI-generated video scenes. If the wrong face is selected, use basic video editing to cut and re-layer the shot as shown in the video.

Reflect: Does your character’s expression match your intended emotion more closely than earlier, prompt-based methods? Compare this approach with simple lip-sync results from previous lessons.

Course Context Recap

This lesson builds on your skills in voiceover, voice changing, and lip-sync control by adding full expression mapping for truly customized performances. In earlier sections, you established convincing dialogue delivery; now, you can match those lines with authentic facial animation. You’ll soon move on to combining these techniques for fully realized, expressive scenes. Continue to the next part of the course for more advanced workflows and techniques that bring your AI movies to life. Explore the rest of the Creating Movies With AI Complete Course to round out your character animation skills.