Luma AI unveils Ray3 Modify for controllable AI video
Luma AI has launched Ray3 Modify, an artificial intelligence video model built for hybrid workflows around human performances, now available within its Dream Machine platform.
The company said the model lets production teams keep an actor's timing, motion and emotional delivery while they rework scenes, locations and visual style using generative AI.
Luma AI positions Ray3 Modify as a response to early AI video systems that often struggled to preserve live-action performances once scenes were transformed. The new model generates content in direct response to filmed footage and other physical inputs.
Ray3 Modify sits inside Dream Machine, Luma AI's online environment for AI video tools that target film, advertising and post-production work. The launch extends the company's push into professional workflows where predictability and continuity are critical.
Hybrid control
Luma AI said Ray3 Modify treats the human performer, camera operator or other real-world input as the primary source of direction for the AI. The system conditions generation on this input footage. It then uses AI to extend, reinterpret or transform the shot while keeping the underlying performance intact.
The model tracks motion, timing, framing and emotional intent from the original recording. It can then apply those elements to alternative environments, cinematography and styling. The approach aims to reduce guesswork in AI-led production and give directors more direct control over scene evolution.
In traditional AI video generation, outputs can drift from the source prompt and introduce unwanted changes to performance or continuity. Luma AI said Ray3 Modify is designed for workflows where the original shot acts as an anchor, and AI functions as a visual effects and scene modification layer.
"Generative video models are incredibly expressive but also hard to control. Today, we are excited to introduce Ray3 Modify that blends the real-world with the expressivity of AI while giving full control to creatives. This means creative teams can capture performances with a camera and then immediately modify it to be in any location imaginable, change costumes, or even go back and reshoot the scene with AI, without recreating the physical shoot," said Amit Jain, CEO and Co-founder, Luma AI.
Keyframe-based editing
Ray3 Modify introduces Start and End Frame control to video-to-video workflows. Creative teams can set these keyframes to steer transitions and character behaviour between them. This gives editors a method to manage spatial continuity during longer camera moves, reveals and complex blocking.
The keyframe system separates creative decisions about performance from decisions about visual treatment. Directors can first secure the take they want from actors, then use the Start and End Frame controls to define how AI should evolve that performance across a shot.
This method aligns with existing post-production habits that rely on precise in-and-out points and editorial markers. It also draws AI video closer to the grammar of conventional film editing.
Character continuity
The model's Character Reference feature lets users apply a custom character identity to an actor's recorded performance. Luma AI said this approach is important for productions that rely on actor-led work with AI-derived visuals.
Character Reference can lock likeness, costume and identity continuity across an entire shot. The feature separates the performer's motion and delivery from the on-screen appearance of the character. It allows teams to build consistent fictional or branded characters around real performances.
This approach is likely to interest advertising, where brand characters must remain visually stable across campaigns, and visual effects-heavy film production, where actors often perform on bare stages.
Performance preservation
Luma AI said Ray3 Modify places performance preservation at the centre of the workflow. The model maintains an actor's original motion, timing, eye line and emotional delivery as the base of the scene.
On top of this base, users can change environments, visual style, cinematography and other attributes. That includes scene relighting, location swaps and costume changes. The company said these changes occur with scene-aware fidelity, so that edits do not break continuity or identity.
The approach seeks to address one of the main concerns around AI video: the risk that visual transformations weaken the integrity of the original performance. By tying AI generation tightly to recorded footage, Ray3 Modify aims to keep actors' work at the core of the final shot.
Enhanced modify pipeline
Ray3 Modify also introduces an enhanced Modify Video pipeline with a new model architecture. Luma AI described it as a higher-signal design that improves adherence to physical motion, shot composition and performance characteristics.
The updated pipeline is intended to keep scenes stable even as they undergo extensive visual changes. The model tracks movement and layout from the source footage so that edits respect the geometry and blocking of the original take.
This stability brings AI video closer to established production practices. It allows editors and visual effects teams to use AI as another layer in the post-production stack rather than as an unpredictable generator of full scenes.
According to Luma AI, the combination of Keyframe Control, Character Reference, performance preservation and the new Modify Video pipeline forms a toolset for hybrid-AI production. The company said creative authority begins with the performer or camera, while AI acts as an extension of that direction across film, advertising and post-production work.