| | |
|---|---|
| Company type | Private |
| Industry | Artificial intelligence, machine learning, software development |
| Founded | 2018 |
| Headquarters | Manhattan, New York City, U.S. |
| Area served | Worldwide |
| Key people | Cristóbal Valenzuela, Alejandro Matamala, Anastasis Germanidis (co-founders) |
| Products | Gen-1, Gen-2, Gen-3 Alpha, Gen-4, Act-One |
| Number of employees | 86 |
| Website | runwayml.com |
Runway AI, Inc. (also known as Runway and RunwayML) is an American company headquartered in New York City that specializes in generative artificial intelligence research and technologies. [1] The company develops commercial text-to-video and video generative AI models, along with products for generating videos, images, and other multimedia content. [2] [3]
Runway's tools and AI models have been utilized in films such as Everything Everywhere All at Once, [4] and in editing television shows like The Late Show with Stephen Colbert. [5]
The company was founded in 2018 by Cristóbal Valenzuela [6] and Alejandro Matamala, both from Chile, and Anastasis Germanidis, from Greece, after they met at the New York University Tisch School of the Arts ITP program. [7] Runway focuses on generative AI for video, media, and art, developing proprietary foundational model technology for professionals in filmmaking, post-production, advertising, editing, and visual effects. Additionally, Runway offers an iOS app aimed at consumers. [8]
The company raised US$2 million in 2018 to build a platform to deploy machine learning models at scale inside multimedia applications.[ citation needed ]
In December 2020, Runway raised US$8.5 million [9] in a Series A funding round.
In December 2021, the company raised US$35 million in a Series B funding round.[ citation needed ]
On December 21, 2022, Runway raised US$50 million [10] in a Series C round. This was followed by a US$141 million Series C extension round in June 2023, at a $1.5 billion valuation, [11] [12] with investment from Google, Nvidia, and Salesforce [13] to build foundational multimodal AI models for content generation to be used in films and video production. [14] [15]
On April 3, 2025, Runway raised US$308 million in a funding round led by General Atlantic, valuing the company at over US$3 billion. [16] [17]
In August 2022, the company co-released Stable Diffusion, an open-source deep learning text-to-image model and an improved version of its latent diffusion model, based on the original paper High-Resolution Image Synthesis with Latent Diffusion Models. The model was developed together with the CompVis Group at Ludwig Maximilian University of Munich, with a compute donation from Stability AI. [18] [19] [20] [21]
In February 2023, Runway released Gen-1 and Gen-2, commercially and publicly available foundational video-to-video and text-to-video generation models. [1] [2] [3]
Gen-1 is a video-to-video generative AI system that synthesizes new videos by applying the composition and style of an image or text prompt to the structure of a source video. The model, released in February 2023, was trained and developed by Runway based on the original paper Structure and Content-Guided Video Synthesis with Diffusion Models from Runway Research. [1] [2] [3]
Gen-2 is a multimodal AI system that can generate novel videos from text, images, or video clips. The model is a continuation of Gen-1 and adds a modality for generating video conditioned on text. Gen-2 is one of the first commercially available text-to-video models. [22] [23] [24] [25]
Gen-3 Alpha is a model trained by Runway on new infrastructure built for large-scale multimodal training, improving fidelity, consistency, and motion over Gen-2, and a step towards building General World Models. [2] Training data for Gen-3 was sourced from thousands of YouTube videos and potentially pirated films. A former Runway employee alleged to 404 Media that there was a company-wide effort to compile videos into spreadsheets, which were then downloaded using youtube-dl through proxy servers to avoid being blocked by YouTube. In tests, 404 Media discovered that the names of YouTubers would generate videos in their respective styles. [26]
In March 2025, Runway released Gen-4, a video-generating AI model that the company described as its most advanced to date. According to the company, the model can generate consistent characters, objects, and environments across scenes, using reference images and text prompts. [27]
Gen-4 Turbo, released in April 2025, [28] is a faster, more cost-effective version of Gen-4. The Turbo model uses fewer credits per second of video.
Act-One, released in October 2024, enables users to upload a driving video and transform that performance into realistic or animated characters. [29] With Act-One, creators can animate characters in various styles without motion-capture equipment or character rigging, while preserving important elements of the original performance, including eye-lines, micro-expressions, and nuanced pacing, in the generated characters. [30] [31] Act-Two, an expanded version, allows users to animate characters using driving performance videos, providing control over gestures and body movement when using character images and automatically adding environmental motion. [32]
In 2025, Runway launched Game Worlds, a tool allowing users to play or create text-based adventures accompanied by pictures. [33]
In August 2025, Runway partnered with IMAX to screen AI Film Festival winners in 10 major cities across the United States. IMAX Chief Content Officer Jonathan Fischer said of the partnership, “The IMAX Experience has typically been reserved for the world’s most accomplished and visionary filmmakers. We’re excited to open our aperture and use our platform to experiment with a new kind of creator, as storytelling and technology converge in an entirely new way.” [34]
In June 2025, AMC Networks became the first cable company to formally partner with Runway for AI-powered content creation. [35] [36] The partnership focuses on using Runway's technology to generate marketing images and help pre-visualize shows before production begins. [37] According to AMC Networks executive Stephanie Mitchko, the partnership aims to "enhance both how we market and how we create" while providing creative partners with tools to "fully realize the stories they want to tell." [35]
In September 2024, Runway announced a partnership with Lionsgate Entertainment to create a custom AI video generation model trained on the studio's proprietary catalog of over 20,000 film and television titles. [38] [39] The custom model is designed exclusively for Lionsgate's use and cannot be accessed by other Runway users. [40]
The partnership allows Lionsgate filmmakers, directors, and creative talent to use AI-generated cinematic video for pre-production and post-production processes. [41]
Runway has partnered with the Tribeca Film Festival since 2024 to showcase AI-generated films and explore the integration of artificial intelligence in filmmaking. [42] The collaboration includes a programming partnership called "Human Powered," which features short films and music videos created using AI tools, followed by Q&A sessions with filmmakers. [43]
The Tribeca Film Festival is also a recurring partner at Runway’s AI Film Festival. The partnership has grown significantly, with Runway's AI Film Festival receiving around 3,000 submissions in 2024, a ten-fold increase from the previous year. [44] Tribeca CEO Jane Rosenthal has emphasized the importance of working directly with AI companies to foster dialogue about the technology's role in filmmaking rather than avoiding or fighting it. [42]
Runway is incorporated into design and filmmaking curricula at major universities, such as the NYU Tisch School of the Arts. [45]
Runway has hosted an annual AI Film Festival (AIFF) since 2023 to showcase films created using artificial intelligence tools. The festival has experienced dramatic growth, expanding from 300 submissions and screenings in small New York City theaters in its inaugural year to over 6,000 submissions by 2025. [46] The 2025 festival culminated in a sold-out screening at Lincoln Center’s Alice Tully Hall, drawing filmmakers and audiences from around the world. [47]
The festival serves as a platform for emerging filmmakers to explore the creative possibilities of AI-assisted storytelling and is considered by many to have become a significant event in the intersection of technology and cinema. [48]
Runway organizes Gen:48, a short film competition that gives participants 48 hours and AI generation credits to create short films between one and four minutes in length. The competition format is designed to encourage experimentation with AI tools under time constraints, fostering innovation in AI-assisted filmmaking. [49] Participants receive Runway credits and temporary unlimited access to Runway's tools during the 48-hour period, and winners receive prizes including cash, Runway credits, and the opportunity to screen their films at the AI Film Festival. [49]
In June 2023, Runway was selected as one of the 100 Most Influential Companies in the world by Time magazine. [50]