Company type | Private
---|---
Industry | Artificial intelligence, machine learning, software development
Founded | 2018
Headquarters | Manhattan, New York City, U.S.
Area served | Worldwide
Key people |
Products | Gen-1, Gen-2, Gen-3 Alpha, Frames, Gen-4, Aleph
Number of employees | 86
Website | runwayml.com
Runway AI, Inc. (also known as Runway and RunwayML) is an American company headquartered in New York City that specializes in generative artificial intelligence research and technologies. [1] The company is primarily focused on creating products and models for generating videos, images, and various multimedia content. It is most notable for developing the commercial text-to-video and video generative AI models Gen-1, Gen-2, [2] [3] Gen-3 Alpha, [1] Gen-4, [4] Act-One and Act-Two, [5] Aleph, and Game Worlds. [6]
Runway's tools and AI models have been utilized in films such as Everything Everywhere All at Once, [7] in music videos for artists including A$AP Rocky, [8] Kanye West, [9] Brockhampton, and The Dandy Warhols, [10] and in editing television shows like The Late Show [11] and Top Gear. [12]
The company was founded in 2018 by Cristóbal Valenzuela [13] and Alejandro Matamala, both from Chile, and Anastasis Germanidis, from Greece, after the three met at the Interactive Telecommunications Program (ITP) of New York University's Tisch School of the Arts. [14] The company raised US$2 million in 2018 to build a platform for deploying machine learning models at scale inside multimedia applications.
In December 2020, Runway raised US$8.5 million [15] in a Series A funding round.
In December 2021, the company raised US$35 million in a Series B funding round. [16]
In August 2022, the company co-released Stable Diffusion, an improved version of its latent diffusion model, together with the CompVis Group at Ludwig Maximilian University of Munich, supported by a compute donation from Stability AI. [17] [18]
On December 21, 2022, Runway raised US$50 million [19] in a Series C round. This was followed by a US$141 million Series C extension round in June 2023 at a US$1.5 billion valuation, [20] [21] with participation from Google, Nvidia, and Salesforce, [22] to build foundational multimodal AI models for content generation to be used in films and video production. [23] [24]
In February 2023, Runway released Gen-1 and Gen-2, the first commercially and publicly available foundational video-to-video and text-to-video generation models, [1] [2] [3] accessible via a simple web interface.
In June 2023, Runway was selected as one of the 100 Most Influential Companies in the world by Time magazine. [25]
On April 3, 2025, Runway raised US$308 million in a funding round led by General Atlantic, valuing it at over US$3 billion. [26] [27]
Runway is focused on generative AI for video, media, and art. The company develops proprietary foundational model technology for professionals in filmmaking, post-production, advertising, editing, and visual effects. Additionally, Runway offers an iOS app aimed at consumers. [28]
The Runway product is accessible via a web platform and through an API as a managed service.
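As a rough illustration of what programmatic access to a managed generative-video service looks like, the sketch below submits a generation request over HTTP from Python. The endpoint URL, header, and JSON fields are hypothetical placeholders introduced for illustration only, not Runway's documented API, which should be checked against the official reference.

```python
# Hypothetical sketch of calling a hosted video-generation API over HTTP.
# The URL, request fields, and response shape below are illustrative
# placeholders, not Runway's documented API surface.
import os
import requests

API_KEY = os.environ["RUNWAY_API_KEY"]  # assumed to be set by the caller

response = requests.post(
    "https://api.example-video-host.com/v1/generations",  # placeholder URL
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gen-4",  # illustrative model name
        "prompt": "a slow dolly shot through a foggy pine forest",
        "duration_seconds": 5,
    },
    timeout=60,
)
response.raise_for_status()
task = response.json()
print(task)  # typically an id to poll until the rendered video is ready
```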
Stable Diffusion is an open-source deep learning text-to-image model released in 2022, based on the paper High-Resolution Image Synthesis with Latent Diffusion Models published by Runway and the CompVis Group at Ludwig Maximilian University of Munich. [29] [30] [18] Stable Diffusion is mostly used to generate images conditioned on text descriptions.
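As an illustration of typical usage, the following minimal sketch generates an image from a text prompt with the Hugging Face diffusers library, assuming the runwayml/stable-diffusion-v1-5 checkpoint remains available to download; the prompt, precision, and output filename are arbitrary choices.

```python
# Minimal text-to-image sketch with Stable Diffusion v1.5 via diffusers.
# Assumes `pip install diffusers transformers torch` and a CUDA GPU;
# continued availability of the checkpoint on the Hub is an assumption.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")  # arbitrary output filename
```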
Gen-1 is a video-to-video generative AI system that synthesizes new videos by applying the composition and style of an image or text prompt to the structure of a source video. The model was released in February 2023. Gen-1 was trained and developed by Runway based on the paper Structure and Content-Guided Video Synthesis with Diffusion Models from Runway Research. [31] Gen-1 is an example of generative artificial intelligence for video creation.
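The cited paper describes conditioning a latent video diffusion model on two signals: per-frame depth estimates carry the structure of the source video, while a CLIP-style image embedding carries the desired content. The PyTorch sketch below only illustrates that conditioning data flow; the three modules are toy stand-ins introduced here for illustration, not Runway's actual networks or training setup.

```python
# Toy sketch of structure/content conditioning in the spirit of
# "Structure and Content-Guided Video Synthesis with Diffusion Models".
# All modules are stand-ins for illustration, not Runway's models.
import torch
import torch.nn as nn

class DepthEstimator(nn.Module):
    """Placeholder for a monocular depth model (structure signal)."""
    def forward(self, frames):                    # frames: (T, 3, H, W)
        return frames.mean(dim=1, keepdim=True)   # fake "depth": (T, 1, H, W)

class ContentEncoder(nn.Module):
    """Placeholder for a CLIP-style image encoder (content signal)."""
    def __init__(self, dim=64):
        super().__init__()
        self.proj = nn.Linear(3, dim)
    def forward(self, image):                     # image: (3, H, W)
        return self.proj(image.mean(dim=(1, 2)))  # global embedding: (dim,)

class Denoiser(nn.Module):
    """Placeholder per-frame denoiser: noisy latents are concatenated with
    depth channels (structure) and modulated by the content embedding."""
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Conv2d(4 + 1, 4, kernel_size=3, padding=1)
        self.to_scale = nn.Linear(dim, 4)
    def forward(self, noisy_latents, depth, content):
        x = torch.cat([noisy_latents, depth], dim=1)     # inject structure
        scale = self.to_scale(content).view(1, 4, 1, 1)  # inject content
        return self.conv(x) * (1 + scale)                # predicted noise

# Conditioning data flow for a single denoising step:
frames = torch.rand(8, 3, 32, 32)   # source video supplies the structure
style = torch.rand(3, 32, 32)       # image (or text) prompt supplies the content
depth = DepthEstimator()(frames)
content = ContentEncoder()(style)
noisy = torch.rand(8, 4, 32, 32)    # video latents at some diffusion timestep
eps = Denoiser()(noisy, depth, content)  # shape (8, 4, 32, 32)
```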
Gen-2 is a multimodal AI system that can generate novel videos from text, images, or video clips. The model is a continuation of Gen-1 and adds the ability to generate video conditioned on text. Gen-2 is one of the first commercially available text-to-video models. [32] [33] [34] [35]
Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training. It is a major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models. [2]
Training data for Gen-3 was sourced from thousands of YouTube videos and potentially pirated films. A former Runway employee alleged to 404 Media that there was a company-wide effort to compile videos into spreadsheets, which were then downloaded using youtube-dl through proxy servers to avoid being blocked by YouTube. In tests, 404 Media found that the names of YouTubers would generate videos in their respective styles. [36]
In March 2025, Runway released Gen-4, a video-generating AI model that the company described as its most advanced to date. According to the company, the model can generate consistent characters, objects, and environments across scenes, using reference images and text prompts. [37]
Unlike earlier models that treated each frame as a separate creative task with only loose connections between them, Gen-4 allows users to generate consistent characters across lighting conditions using a reference image of those characters. [38] The model introduced several key features designed to address longstanding challenges in AI video generation, particularly around visual consistency and narrative continuity that had previously made AI-generated content appear disjointed. [39]
Gen-4 Turbo, released in April 2025, [40] is a faster, more cost-effective version of Gen-4. The Turbo model uses fewer credits per second of video.
With Gen-4 References, users can upload reference images as a baseline for characters, objects, sets, or environments across different scenes. [41] The system can extract a character from one image and place them in different scenes, transform character elements or environments, blend visual styles between images, or combine elements from multiple sources. [42]
In July 2025, Runway released Aleph, which adds the ability to perform edits on input videos including adding, removing, and transforming objects, generating any angle of a scene, and modifying style and lighting. [43] With the launch of Aleph, the Gen-4 model can now support various editing tasks including object manipulation, scene transformation, camera angle generation, and style transfer. [44]
Act-One, released in October 2024, enables users to upload a driving video and transform that performance into realistic or animated characters. [45] With Act-One, creators can animate characters in various styles without motion-capture equipment or character rigging, while preserving important elements of the original performance, including eye-lines, micro-expressions, and nuanced pacing, in the generated characters. [46] [47] Act-Two, an expanded version, allows users to animate characters using driving performance videos, providing control over gestures and body movement when using character images, and automatically adding environmental motion. [48]
In 2025, Runway launched Game Worlds, described as “an early look at the next frontier of non-linear narrative experiences.” [49] The tool allows users to play or create text-based adventures accompanied by pictures. [50] Runway positions Game Worlds as "a first step towards the next era of gaming" and states it "represents the next frontier" for interactive entertainment and education. [51]
Runway hosts an annual AI Film Festival [52] in Los Angeles and New York City. [53] [54]