Runway released Gen-4 on April 8, and the response from working filmmakers has been unlike the reaction to any previous AI video release. Previous versions were demos. This one is showing up in actual production pipelines.
The core improvement is consistency. Gen-4 holds a character, a lighting setup, and a camera angle across shots in a way Gen-3 and its competitors could not. If you generated a woman walking through a warehouse in a teal jacket with Gen-3, the jacket would drift to green by the second shot, the face would subtly change, and the warehouse layout would reset with each clip. Gen-4 fixes that. You can now anchor a character reference and a scene reference and keep them stable across a two-minute sequence. For anyone who has tried to actually tell a story with generative video, that single change is the difference between a gimmick and a tool.
Runway did a private beta with roughly forty commercial directors and production companies from late February through the April release. The names on the list matter. Stink Studios, Park Pictures, Imaginary Forces, and a handful of smaller VFX houses ran it through real client work. Two of the commercials that aired during the Final Four weekend were built substantially in Gen-4. The teams involved are not marketing the AI element, which is a tell about where the industry is emotionally. The tool is being used but not celebrated.
Price is a factor. Gen-4 costs about eight times what Gen-3 cost per second of generated footage. For a small creator that makes it prohibitive. For a production company building a thirty-second spot, the math still works because the alternative was a location shoot, a crew, and two days of edit time. One creative director told The Hollywood Reporter that Gen-4 dropped an effects budget from roughly ninety thousand dollars to under twelve thousand. That is the real story of the release.
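As a rough illustration of that budget math, here is a sketch. The only figures taken from the reporting are the "about eight times" price multiple and the roughly ninety-thousand-dollar traditional effects budget; the per-second rate and the number of takes below are hypothetical placeholders, not Runway's actual pricing.

```python
# Hypothetical illustration of the cost comparison described above.
# GEN3_RATE is a made-up per-second price; only the 8x multiple and
# the ~$90,000 traditional budget come from the piece.

GEN3_RATE = 0.50           # hypothetical dollars per generated second
GEN4_RATE = GEN3_RATE * 8  # "about eight times" Gen-3's per-second cost


def generation_cost(rate: float, seconds: int, takes: int) -> float:
    """Cost of generating `seconds` of footage across `takes` attempts."""
    return rate * seconds * takes


# A thirty-second spot, assuming (hypothetically) 20 takes per usable cut.
spot = generation_cost(GEN4_RATE, seconds=30, takes=20)

# Even at 8x the per-second price, the generated spot stays far below
# a traditional effects budget of the size cited in the piece.
traditional_budget = 90_000
print(f"Gen-4 spot: ${spot:,.0f} vs traditional: ${traditional_budget:,}")
```

The point of the sketch is that the 8x multiple compounds against a per-second base small enough that even generous retake counts leave the total an order of magnitude below a shoot-and-crew budget, which is why the price jump deters individual creators but not production companies.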
The legal picture remains messy. Runway is named in three active lawsuits from artists and rightsholders over training data. The company has not disclosed its full dataset and has stated only that it uses licensed content, publicly available content, and content it generated itself. The DGA, WGA, and SAG-AFTRA have each issued statements reminding members that any work using generative video on a union production requires specific disclosures and in some cases human performer consent under the 2024 contract language. Runway provides a dashboard for studios to track provenance, but the industry is still figuring out whether that is enough.
The performance of Gen-4 also raised a quieter question about where the smaller players go from here. Pika, Luma, and a handful of open source projects were making steady progress through 2025. Gen-4 put a gap between Runway and the rest of the field that those competitors are going to have to close fast. OpenAI's Sora has been quiet since the initial rollout, and the expected April update did not include a major quality jump. Google's Veo 2 sits in a separate lane because access is limited and enterprise focused.
What is worth watching in the next three months is not the model itself. It is how directors, editors, and union crews negotiate the workflow. The first test comes in early summer when at least one major streamer is expected to release a short form project built substantially with Gen-4. Names have not been confirmed, but three sources inside two production companies told Variety the pipeline is active.
For creators without a studio budget, Gen-4 is mostly a preview of what will be cheap in eighteen months. The quality improvements that hit the pro tier first tend to trickle down fast. The Gen-3 pricing that seemed expensive twelve months ago is now considered accessible. The same curve is likely to apply here. If you are a filmmaker planning projects for late 2026 and 2027, Gen-4 is the first version of the technology that should change how you scope a shoot.