| Company type | Private |
|---|---|
| Industry | Artificial intelligence, machine learning, software development |
| Founded | 2018 |
| Headquarters | New York City |
| Area served | Worldwide |
| Key people | |
| Products | Gen-1, Gen-2, Gen-3 Alpha |
| Number of employees | 86 |
| Website | runwayml |
Runway AI, Inc. (also known as Runway and RunwayML) is an American company headquartered in New York City that specializes in generative artificial intelligence research and technologies. [1] The company is primarily focused on creating products and models for generating videos, images, and various multimedia content. It is most notable for developing the commercial text-to-video and video generative AI models Gen-1, Gen-2 [2] [3] and Gen-3 Alpha. [1]
Runway's tools and AI models have been utilized in films such as Everything Everywhere All At Once, [4] in music videos for artists including A$AP Rocky, [5] Kanye West, [6] Brockhampton, and The Dandy Warhols, [7] and in editing television shows like The Late Show [8] and Top Gear. [9]
The company was founded in 2018 by Chileans Cristóbal Valenzuela [10] and Alejandro Matamala and Greek Anastasis Germanidis, who met at the Interactive Telecommunications Program (ITP) at New York University's Tisch School of the Arts. [11] The company raised US$2 million in 2018 to build a platform for deploying machine learning models at scale inside multimedia applications.
In December 2020, Runway raised US$8.5 million [12] in a Series A funding round.
In December 2021, the company raised US$35 million in a Series B funding round. [13]
In August 2022, the company co-released Stable Diffusion, an improved version of its latent diffusion model, together with the CompVis Group at Ludwig Maximilian University of Munich and with a compute donation from Stability AI. [14] [15]
On December 21, 2022, Runway raised US$50 million [16] in a Series C round, followed in June 2023 by a US$141 million Series C extension at a US$1.5 billion valuation [17] [18] with participation from Google, Nvidia, and Salesforce, [19] to build foundational multimodal AI models for content generation in film and video production. [20] [21]
In February 2023, Runway released Gen-1 and Gen-2, the first commercial and publicly available foundational video-to-video and text-to-video generation models, [22] [23] [24] accessible via a simple web interface.
In June 2023 Runway was selected as one of the 100 Most Influential Companies in the world by Time magazine. [25]
Runway is focused on generative AI for video, media, and art. The company develops proprietary foundation-model technology for professionals in filmmaking, post-production, advertising, editing, and visual effects. Additionally, Runway offers an iOS app aimed at consumers. [26]
The Runway product is accessible via a web platform and through an API as a managed service.
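As a purely illustrative sketch of what calling such a managed service can look like (the endpoint URL, JSON field names, and the `RUNWAY_API_KEY` variable below are hypothetical, not Runway's documented API), a text-to-video request might be assembled like this:

```python
import json
import os

# Purely illustrative: this is a placeholder endpoint, not Runway's real API.
API_URL = "https://api.example.com/v1/text_to_video"

def build_request(prompt: str, duration_s: int = 4) -> dict:
    """Assemble (but do not send) headers and a JSON body for the call.

    The field names and bearer-token auth scheme are assumptions chosen
    to mirror common managed-service conventions.
    """
    body = {"prompt": prompt, "duration": duration_s}
    headers = {
        "Authorization": f"Bearer {os.environ.get('RUNWAY_API_KEY', '<key>')}",
        "Content-Type": "application/json",
    }
    return {"url": API_URL, "headers": headers, "data": json.dumps(body)}

req = build_request("a drone shot over a coastline at dusk")
```

In practice the returned dictionary would be handed to an HTTP client; it is kept unsent here so the sketch stays self-contained.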
Stable Diffusion is an open-source deep-learning text-to-image model released in 2022, based on the paper High-Resolution Image Synthesis with Latent Diffusion Models published by Runway and the CompVis Group at Ludwig Maximilian University of Munich. [27] [28] [29] Stable Diffusion is mostly used to create images conditioned on text descriptions.
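Latent diffusion models such as Stable Diffusion commonly condition generation on text via classifier-free guidance. The minimal NumPy sketch below shows only that guidance arithmetic; the arrays are stand-ins for a real model's noise predictions over image latents.

```python
import numpy as np

def guided_noise(eps_uncond: np.ndarray, eps_cond: np.ndarray, scale: float) -> np.ndarray:
    """Blend unconditional and text-conditioned noise predictions.

    scale > 1 amplifies the influence of the text prompt;
    scale = 1 recovers the plain conditional prediction.
    """
    return eps_uncond + scale * (eps_cond - eps_uncond)

rng = np.random.default_rng(0)
eps_u = rng.normal(size=(4, 4))  # stand-in: noise estimate without the text embedding
eps_c = rng.normal(size=(4, 4))  # stand-in: noise estimate with the text embedding
guided = guided_noise(eps_u, eps_c, scale=7.5)  # 7.5 is a commonly used guidance scale
```

At each denoising step the guided estimate, not the raw conditional one, is used to update the latent, which is what steers the final image toward the prompt.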
Gen-1 is a video-to-video generative AI system that synthesizes new videos by applying the composition and style of an image or text prompt to the structure of a source video. The model was released in February 2023. Gen-1 was trained and developed by Runway based on the paper Structure and Content-Guided Video Synthesis with Diffusion Models from Runway Research. [30] Gen-1 is an example of generative artificial intelligence applied to video creation.
Gen-2 is a multimodal AI system that can generate novel videos from text, images, or video clips. The model is a continuation of Gen-1 and adds a modality for generating video conditioned on text. Gen-2 is one of the first commercially available text-to-video models. [31] [32] [33] [34]
Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training. It is a major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models. [2]
Training data for Gen-3 was sourced from thousands of YouTube videos and potentially pirated films. A former Runway employee alleged to 404 Media that there was a company-wide effort to compile videos into spreadsheets, which were then downloaded using youtube-dl through proxy servers to avoid being blocked by YouTube. In tests, 404 Media found that the names of YouTubers would generate videos in their respective styles. [35]
Runway hosts an annual AI Film Festival [36] in Los Angeles and New York City. [37] [38]