Netflix Open-Sources VOID: The AI That Erases Video Objects and Rewrites the Physics They Leave Behind
Netflix and INSAIT, the AI institute at Sofia University, have released VOID (Video Object and Interaction Deletion), an open-source AI framework under Apache 2.0 that removes objects from video while automatically reconstructing the physical interactions those objects caused: shadows, collisions, movement.

D.O.T.S AI Newsroom
AI News Desk
Netflix has open-sourced a video AI system that does something no mainstream editing tool has done before: it does not just erase objects from footage; it rewrites the physical reality those objects were part of. The framework, called VOID (Video Object and Interaction Deletion), was developed jointly by Netflix researchers and INSAIT, the AI institute at Sofia University, and released on April 4, 2026 under the Apache 2.0 license.
The Problem With Existing Object Removal
Conventional video inpainting, the technical term for filling in the pixels where an object has been removed, treats the problem as a visual gap: remove the object, then fill the hole with plausible-looking content. The physics of the original scene do not change. If the removed object was casting shadows, those shadows remain; if it was colliding with other objects, those objects still move as though something pushed them. The scene becomes visually inconsistent.
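To make the limitation concrete, here is the conventional approach in miniature, using OpenCV's classical cv2.inpaint routine (an illustration of the standard technique, not what VOID uses): only the pixels inside the supplied mask are ever rewritten, so any trace the object left outside that mask survives.

```python
import cv2
import numpy as np

def naive_remove(frames: list[np.ndarray], masks: list[np.ndarray]) -> list[np.ndarray]:
    """Classical per-frame inpainting: fill each masked region with
    plausible nearby texture. Pixels outside the mask, including the
    object's shadow or anything it knocked over, are never touched."""
    out = []
    for frame, mask in zip(frames, masks):
        # mask: uint8 array, 255 where the object was, 0 elsewhere
        out.append(cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA))
    return out
```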
For professional post-production — removing boom microphones, stunt rigging, crew members who wandered into frame — this creates costly manual correction work. For more complex scenes with significant physical interactions, it has historically been essentially intractable without frame-by-frame rotoscoping.
How VOID Works
VOID uses a multi-model pipeline. Scene analysis is powered by Google's Gemini 3 Pro, which identifies the areas of the scene affected by the object being removed. Meta's SAM2 (Segment Anything Model 2) handles object segmentation. The actual video generation, reconstructing the scene without the object, is performed by Zhipu AI's CogVideoX video diffusion model, fine-tuned on synthetic training data from Google's Kubric and Adobe's HUMOTO to learn interaction physics. An optional secondary pass uses optical flow to correct shape distortions that can emerge from the initial generation.
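Structurally, the pipeline chains four stages. The sketch below is a hypothetical orchestration written for illustration only; the function names and signatures are invented stand-ins for the Gemini, SAM2, diffusion, and optical-flow components described above, not VOID's actual API.

```python
import numpy as np

# Illustrative stand-ins for the pipeline stages; none of these
# names or signatures come from the VOID codebase.

def analyze_scene(frames, target):
    """Stage 1 (Gemini 3 Pro in VOID): identify which regions of the
    scene the target object physically affects, such as the shadows
    it casts and the objects it touches or pushes."""
    return {"affected_regions": [], "interactions": []}

def segment_object(frames, target):
    """Stage 2 (SAM2 in VOID): produce a per-frame binary mask
    locating the object itself."""
    return [np.zeros(f.shape[:2], dtype=np.uint8) for f in frames]

def regenerate(frames, masks, analysis):
    """Stage 3 (fine-tuned video diffusion in VOID): resynthesize the
    scene with the object and its physical side effects removed."""
    return frames  # placeholder; a real model returns new frames

def flow_correct(frames):
    """Stage 4 (optional in VOID): use optical flow between generated
    frames to smooth shape distortions from the diffusion pass."""
    return frames

def remove_object(frames, target: str):
    analysis = analyze_scene(frames, target)     # what did it affect?
    masks = segment_object(frames, target)       # where is it, frame by frame?
    clean = regenerate(frames, masks, analysis)  # rewrite the scene
    return flow_correct(clean)                   # stabilize geometry over time
```

The key design difference from the naive approach is visible in stage 3's inputs: the generator receives not just the masks but the interaction analysis, so it can rewrite regions the object merely influenced rather than only the pixels it occupied.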
The result is a system that understands what the removed object was doing — how it affected lighting, what it was touching, how it influenced the movement of other elements — and generates a replacement scene in which those physical effects are absent or appropriately reconstructed.
The Open-Source Strategy
Netflix's decision to release VOID under Apache 2.0 — a permissive license enabling commercial use — is strategically notable. The company is not building a competing post-production software business. By open-sourcing the framework, Netflix accelerates development of tools that reduce its own production costs, builds goodwill in the research community, and potentially establishes the technology as a standard that benefits the broader industry.
The full codebase is on GitHub, the research paper is on arXiv, and a demonstration version runs on Hugging Face. All components are publicly accessible.
Practical Implications
The most immediate applications are in professional post-production: removing equipment, crew, and incidental objects from footage without the manual labor currently required. For a streaming platform that produces hundreds of hours of original content annually, even modest reductions in post-production time and cost compound significantly.
The broader implication is that physics-aware video generation — the ability to manipulate not just pixels but the simulated physical world they represent — is now accessible to any developer with a GitHub account. The tools for video manipulation are moving toward a capability level that will require updated thinking about provenance, authenticity, and trust in video content.