Company type | Private
---|---
Industry | Artificial intelligence, machine learning, software development
Founded | 2018
Headquarters | Manhattan, New York City, U.S.
Area served | Worldwide
Products | Gen-1, Gen-2, Gen-3 Alpha, Frames, Gen-4, Aleph
Number of employees | 86
Website | runwayml.com
Runway AI, Inc. (also known as Runway and RunwayML) is an American company headquartered in New York City that specializes in generative artificial intelligence research and technologies. [1] The company is primarily focused on creating products and models for generating videos, images, and various multimedia content. It is most notable for developing the commercial text-to-video and video generative AI models Gen-1, Gen-2, [2] [3] Gen-3 Alpha, [1] Gen-4, [4] Act-One and Act-Two, [5] Aleph, and Game Worlds. [6]
Runway's tools and AI models have been utilized in films such as Everything Everywhere All at Once, [7] in music videos for artists including A$AP Rocky, [8] Kanye West, [9] Brockhampton, and The Dandy Warhols, [10] and in editing television shows like The Late Show [11] and Top Gear. [12]
The company was founded in 2018 by Chilean co-founders Cristóbal Valenzuela [13] and Alejandro Matamala and Greek co-founder Anastasis Germanidis, after they met at the Interactive Telecommunications Program (ITP) at New York University's Tisch School of the Arts. [14] The company raised US$2 million in 2018 to build a platform to deploy machine learning models at scale inside multimedia applications.
In December 2020, Runway raised US$8.5 million [15] in a Series A funding round.
In December 2021, the company raised US$35 million in a Series B funding round. [16]
In August 2022, the company co-released Stable Diffusion, an improved version of its latent diffusion model, together with the CompVis Group at Ludwig Maximilian University of Munich, supported by a compute donation from Stability AI. [17] [18]
On December 21, 2022, Runway raised US$50 million [19] in a Series C round. This was followed by a US$141 million Series C extension round in June 2023 at a $1.5 billion valuation, [20] [21] with backing from Google, Nvidia, and Salesforce, [22] to build foundational multimodal AI models for content generation to be used in films and video production. [23] [24]
In February 2023, Runway released Gen-1 and Gen-2, the first commercially and publicly available foundational video-to-video and text-to-video generation models, [1] [2] [3] accessible via a simple web interface.
In June 2023, Runway was selected as one of the 100 Most Influential Companies in the world by Time magazine. [25]
On 3 April 2025, Runway raised $308 million in a funding round led by General Atlantic, valuing it at over $3 billion. [26] [27]
Runway focuses on generative AI for video, media, and art. The company develops proprietary foundational model technology for professionals in filmmaking, post-production, advertising, editing, and visual effects. Additionally, Runway offers an iOS app aimed at consumers. [28]
The Runway product is accessible via a web platform and through an API as a managed service.
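As a rough illustration, a managed video-generation API of this kind is typically called by posting a prompt over HTTP and polling for the finished result. The Python sketch below shows that general shape only; the endpoint URL, model identifier, and request fields are illustrative assumptions, not Runway's documented API.

```python
import requests

# Hypothetical endpoint and fields, shown only to illustrate the general shape
# of a managed text-to-video API call; this is not Runway's documented API.
API_URL = "https://api.example.com/v1/text_to_video"
API_KEY = "YOUR_API_KEY"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gen-3-alpha",                      # hypothetical model identifier
        "prompt": "A drone shot of a coastline at sunset",
        "duration_seconds": 5,                       # hypothetical parameter
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())  # such APIs usually return a job ID to poll for the video
```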
Stable Diffusion is an open-source deep-learning text-to-image model released in 2022, based on the paper High-Resolution Image Synthesis with Latent Diffusion Models published by Runway and the CompVis Group at Ludwig Maximilian University of Munich. [29] [30] [18] Stable Diffusion is mostly used to create images conditioned on text descriptions.
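For context, images are commonly generated from the released checkpoint with the open-source diffusers library. The sketch below assumes the publicly distributed runwayml/stable-diffusion-v1-5 weights on the Hugging Face Hub and a CUDA-capable GPU.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the publicly released Stable Diffusion v1.5 weights (assumed to be
# available on the Hugging Face Hub under the runwayml namespace).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a CUDA-capable GPU

# Generate an image conditioned on a text description.
image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```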
Gen-1 is a video-to-video generative AI system that synthesizes new videos by applying the composition and style of an image or text prompt to the structure of a source video. The model was released in February 2023 and was trained and developed by Runway based on the paper Structure and Content-Guided Video Synthesis with Diffusion Models from Runway Research. [31] Gen-1 is an example of generative artificial intelligence for video creation.
Gen-2 is a multimodal AI system that can generate novel videos from text, images, or video clips. The model is a continuation of Gen-1 and adds a modality for generating video conditioned on text. Gen-2 is one of the first commercially available text-to-video models. [32] [33] [34] [35]
Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training. It is a major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models. [2]
Training data for Gen-3 was sourced from thousands of YouTube videos and potentially pirated films. A former Runway employee alleged to 404 Media that a company-wide effort compiled lists of videos into spreadsheets, which were then downloaded using youtube-dl through proxy servers to avoid being blocked by YouTube. In tests, 404 Media found that the names of YouTubers would generate videos in their respective styles. [36]
In March 2025, Runway released Gen-4, a video-generating AI model that the company described as its most advanced to date. According to the company, the model can generate consistent characters, objects, and environments across scenes, using reference images and text prompts. [37]
Unlike earlier models that treated each frame as a separate creative task with only loose connections between them, Gen-4 allows users to generate consistent characters across lighting conditions using a reference image of those characters. [38] The model introduced several key features designed to address longstanding challenges in AI video generation, particularly around visual consistency and narrative continuity that had previously made AI-generated content appear disjointed. [39]
Gen-4 Turbo, released in April 2025, [40] is a faster, more cost-effective version of Gen-4. The Turbo model uses fewer credits per second of video.
With Gen-4 References, users can upload reference images as a baseline for characters, objects, sets, or environments across different scenes. [41] The system can extract a character from one image and place them in different scenes, transform character elements or environments, blend visual styles between images, or combine elements from multiple sources. [42]
In July 2025, Runway released Aleph, which adds the ability to perform edits on input videos including adding, removing, and transforming objects, generating any angle of a scene, and modifying style and lighting. [43] With the launch of Aleph, the Gen-4 model can now support various editing tasks including object manipulation, scene transformation, camera angle generation, and style transfer. [44]
Act-One, released in October 2024, enables users to upload a driving video and transform that performance into realistic or animated characters. [45] With Act-One, creators can animate characters in various styles without motion-capture equipment or character rigging, while preserving important elements of the original performance, including eye-lines, micro-expressions, and nuanced pacing, in the generated characters. [46] [47] Act-Two, an expanded version, allows users to animate characters using driving performance videos, providing control over gestures and body movement when using character images and automatically adding environmental motion. [48]
In 2025, Runway launched Game Worlds, described as “an early look at the next frontier of non-linear narrative experiences.” [49] The tool allows users to play or create text-based adventures accompanied by pictures. [50] Runway positions Game Worlds as "a first step towards the next era of gaming" and states it "represents the next frontier" for interactive entertainment and education. [51]
In August 2025, Runway partnered with IMAX to screen AI Film Festival winners in 10 major cities across the United States. IMAX Chief Content Officer, Jonathan Fischer, said of the partnership: “The IMAX Experience has typically been reserved for the world’s most accomplished and visionary filmmakers. We’re excited to open our aperture and use our platform to experiment with a new kind of creator, as storytelling and technology converge in an entirely new way.” [52]
In June 2025, AMC Networks became the first cable company to formally partner with Runway for AI-powered content creation. [53] [54] The partnership focuses on using Runway's technology to generate marketing images and help pre-visualize shows before production begins. [55] AMC Networks plans to use the AI tools to create promotional materials without requiring traditional photoshoots and to accelerate pre-visualization during development. [56] According to AMC Networks executive Stephanie Mitchko, the partnership aims to "enhance both how we market and how we create" while providing creative partners with tools to "fully realize the stories they want to tell." [57]
In September 2024, Runway announced a partnership with Lionsgate Entertainment to create a custom AI video generation model trained on the studio's proprietary catalog of over 20,000 film and television titles. [58] [59] The custom model is designed exclusively for Lionsgate's use and cannot be accessed by other Runway users. [60]
The partnership, described as the first of its kind between an AI company and a major Hollywood studio, allows Lionsgate filmmakers, directors, and creative talent to use AI-generated cinematic video for pre-production and post-production processes. [61] [62]
Runway has partnered with the Tribeca Film Festival since 2024 to showcase AI-generated films and explore the integration of artificial intelligence in filmmaking. [63] [64] The collaboration includes a programming partnership called "Human Powered," which features short films and music videos created using AI tools, followed by Q&A sessions with filmmakers. [65]
The Tribeca Film Festival is also a recurring partner at Runway’s AI Film Festival. The partnership has grown significantly, with Runway's AI Film Festival receiving around 3,000 submissions in 2024, a ten-fold increase from the previous year. [66] Tribeca CEO Jane Rosenthal has emphasized the importance of working directly with AI companies to foster dialogue about the technology's role in filmmaking rather than avoiding or fighting it. [67]
Runway is incorporated into design and filmmaking curricula at major universities including NYU Tisch School of the Arts, [68] UCLA, [69] RISD, [70] and Harvard, [71] among others. The company offers discounted resources and support to educators through its "Runway for Educators" program, working with institutions to ensure classes and students have access to AI creative tools. [72]
Runway has hosted an annual AI Film Festival (AIFF) since 2023 to showcase films created using artificial intelligence tools. [73] The festival has experienced dramatic growth, expanding from 300 submissions and screenings in small New York City theaters in its inaugural year to over 6,000 submissions by 2025. [74] The 2025 festival culminated in a sold-out screening at Lincoln Center’s Alice Tully Hall, drawing filmmakers and audiences from around the world. [75]
The festival serves as a platform for emerging filmmakers to explore the creative possibilities of AI-assisted storytelling and is considered by many to have become a significant event in the intersection of technology and cinema. [76]
Runway organizes the Gen:48 short film competition, a rapid-fire filmmaking challenge that gives participants 48 hours and free AI generation credits to create short films between one and four minutes in length. [77] The competition format is designed to encourage experimentation with AI tools under time constraints, fostering innovation in AI-assisted filmmaking. [78] The first edition launched in October 2023, followed by a second edition in February 2024, with subsequent competitions continuing through 2025. [79] The August 2025 "Aleph Edition" marked the fifth iteration of the competition. [80] Participants receive Runway credits and temporary unlimited access to Runway's tools during the 48-hour period. [81] The competition has become a notable fixture in the AI filmmaking community, with winners receiving cash prizes, Runway credits, and the opportunity to screen their films at the AI Film Festival. [82]