Runway is an artificial intelligence company specializing in generative AI tools for creative professionals, with a primary focus on AI-powered video generation. Founded in 2018 by Cristobal Valenzuela, Alejandro Matamala, and Anastasis Germanidis, the company is headquartered in New York City. Runway gained early prominence through its co-authorship of the foundational latent diffusion paper that underpinned Stable Diffusion, and has since become one of the most recognized names in AI video synthesis. As of February 2026, Runway is valued at $5.3 billion and has raised over $860 million in total funding.
Runway was co-founded by three graduate students who met at New York University's Tisch School of the Arts, specifically the Interactive Telecommunications Program (ITP). Cristobal Valenzuela, a Chilean designer and technologist, serves as CEO. Alejandro Matamala, also Chilean, holds the role of Chief Innovation Officer and leads Runway Labs, the company's generative AI incubator. Anastasis Germanidis, who is Greek, serves as Chief Technology Officer.
The trio shared a vision of making AI tools accessible to artists, filmmakers, and designers. Their backgrounds in design, art, and technology informed the development of a platform that would eventually put sophisticated machine learning capabilities into the hands of everyday creators, rather than limiting them to researchers and engineers.
Runway initially launched as a browser-based creative suite offering a collection of AI-powered tools for image editing, video manipulation, and content generation. The early product included features like background removal, object detection, and style transfer, marketed primarily toward creative professionals looking to integrate AI into their workflows without needing to write code.
One of Runway's most significant contributions to the broader AI ecosystem was its involvement in the development of the latent diffusion model (LDM) architecture. In late 2021, researchers from the CompVis (Computer Vision & Learning) group at LMU Munich and Runway co-authored the paper "High-Resolution Image Synthesis with Latent Diffusion Models." The paper was authored by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser (of Runway), and Björn Ommer. It was presented at CVPR 2022 and introduced a method for running diffusion models in a compressed latent space rather than directly in pixel space, dramatically reducing computational costs while maintaining output quality.
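To get a rough sense of the savings, consider the tensor sizes involved. The 8× spatial compression and four latent channels below match the publicly released Stable Diffusion configuration rather than figures from the paper itself, so treat the exact numbers as illustrative:

```python
# Back-of-the-envelope comparison of per-step tensor sizes for pixel-space
# vs. latent-space diffusion. The 8x downsampling and 4 latent channels
# match the public Stable Diffusion setup; the exact factors are illustrative.

pixel_elements = 512 * 512 * 3    # RGB image the user ultimately sees
latent_elements = 64 * 64 * 4     # compressed latent the diffusion model denoises

reduction = pixel_elements / latent_elements
print(f"pixel space:  {pixel_elements:,} elements per denoising step")
print(f"latent space: {latent_elements:,} elements per denoising step")
print(f"~{reduction:.0f}x fewer elements per step")   # ~48x
```

Because every denoising step touches every element, shrinking the working tensor by that factor is where most of the compute savings come from.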
This research became the foundation for Stable Diffusion, the open-source text-to-image model released in August 2022 by Stability AI in collaboration with researchers from LMU Munich and Runway. Development of Stable Diffusion was led by Patrick Esser of Runway and Robin Rombach of CompVis. Four of the five original paper authors (Rombach, Blattmann, Esser, and Lorenz) later joined Stability AI to work on subsequent versions of the model.
The release of Stable Diffusion was a watershed moment for generative AI, making high-quality image generation freely available and sparking an explosion of creative applications. Runway's role in this research cemented its reputation as a serious player in the AI research community, not just a product company.
Runway has raised approximately $860 million across seven funding rounds since its founding. The company has grown from an early-stage startup to a $5.3 billion valuation in just over seven years.
| Round | Date | Amount | Valuation | Lead Investor(s) |
|---|---|---|---|---|
| Seed | 2018 | $1.4M | N/A | Human Ventures |
| Series A | 2020 | $7.5M | N/A | Initialized Capital |
| Series B | 2022 | $36M | ~$500M (pre-money) | Coatue Management |
| Series C | December 2022 | $50M | N/A | Various |
| Series C Extension | June 2023 | $141M | $1.5B (post-money) | Google, Nvidia, Salesforce |
| Series D | April 2025 | $308M | $3B | General Atlantic, Fidelity, Baillie Gifford, Nvidia, SoftBank |
| Series E | February 2026 | $315M | $5.3B | General Atlantic, Nvidia, Fidelity, Adobe Ventures, AMD Ventures |
The June 2023 Series C extension was a pivotal moment, bringing Runway's valuation to $1.5 billion and establishing it as a unicorn. That round attracted strategic investors including Google, Nvidia, and Salesforce, signaling confidence from major technology companies in Runway's approach to generative video.
The April 2025 Series D round doubled the valuation to $3 billion, led by General Atlantic with participation from Fidelity, Baillie Gifford, Nvidia, and SoftBank. The company stated that the capital would go toward AI research, hiring, and expanding its Studios film and animation production division.
By February 2026, Runway closed a $315 million Series E round that nearly doubled its valuation again to $5.3 billion. This round was led by General Atlantic with participation from Nvidia, Fidelity Management & Research, AllianceBernstein, Adobe Ventures, Mirae Asset, Emphatic Capital, Felicis, Premji Invest, and AMD Ventures. The funds are earmarked for pre-training next-generation world models and bringing them to new products and industries.
Runway's core product line consists of its generative video models, branded under the "Gen" series. Each generation has represented a meaningful step forward in video quality, coherence, and controllability.
| Model | Release Date | Key Capabilities | Max Duration |
|---|---|---|---|
| Gen-1 | February 2023 | Video-to-video; applies style/composition from an image or text prompt to a source video | Variable (based on source) |
| Gen-2 | June 2023 | Multimodal video generation from text, images, or video clips; first text-to-video offering | ~4 seconds |
| Gen-3 Alpha | June 2024 | Major leap in fidelity, consistency, and motion; text, image, or video inputs; advanced 3D dynamics understanding | 10 seconds |
| Gen-3 Alpha Turbo | August 2024 | 7x faster than Gen-3 Alpha at half the cost; image input required | 10 seconds |
| Gen-4 | March 2025 | Consistent characters, objects, and environments across scenes using reference images and text prompts | 10 seconds |
| Gen-4 Turbo | April 2025 | Generates 10-second clips in roughly 30 seconds; approximately 5x faster than standard Gen-4 | 10 seconds |
| Gen-4.5 | November 2025 | Unprecedented physical accuracy; native audio; long-form multi-shot generation; advanced physics understanding | Up to 1 minute |
Gen-1 was Runway's first dedicated video generation model, released in February 2023. Rather than creating video from scratch, it functioned as a video-to-video system, taking existing footage and transforming it according to style references or text descriptions. In demo reels, Runway showed how Gen-1 could turn clips of people on a street into claymation puppets, or books stacked on a table into a cityscape at night. Beyond direct transformations of video clips, Gen-1 supported a mode called Storyboard, which turned mockups into animated renders. Runway claimed that Gen-1's outputs were preferred over existing methods like Stable Diffusion 1.5 and Text2LIVE by more than 73% and 88% of evaluators, respectively [21]. This approach made the model useful for re-stylizing existing content but limited its creative flexibility compared to later models.
Gen-2 expanded the scope considerably by introducing true text-to-video generation alongside image-to-video and video-to-video modes. Released to the public on June 9, 2023, after initially being available via Discord for about two months, Gen-2 allowed users to type a text description and receive a generated video clip. While the outputs were short (typically around four seconds) and had visible artifacts, Gen-2 demonstrated that consumer-accessible AI video generation was viable [22].
Gen-3 Alpha, released in June 2024, represented what many observers called a generational leap. Built from scratch on new infrastructure purpose-built for large-scale multimodal training, it delivered substantially better fidelity, temporal consistency, and motion quality compared to Gen-2. The model could produce 10-second clips and showed an advanced understanding of three-dimensional dynamics, making generated scenes feel more grounded in physical reality [11].
Gen-3 Alpha Turbo, launched on August 15, 2024, was a faster variant that could generate videos seven times faster than the standard Gen-3 Alpha model at half the cost. The tradeoff was that image input became required rather than optional, and there were some quality concessions compared to the standard model [23].
Gen-4 arrived in March 2025 and focused on consistency and controllability. Its standout feature was the ability to maintain consistent characters, objects, and environments across multiple scenes using reference images combined with text prompts. This addressed one of the biggest pain points in AI filmmaking: the difficulty of keeping a character looking the same from shot to shot [6].
The Gen-4 References system allows users to upload one or more images that serve as visual anchors for generation, letting the model preserve characters, objects, and environments across scenes while text prompts vary the setting, pose, and lighting [16].
For example, a "cybernetic biker" character can appear in various scenarios (seated on a motorcycle, at a beachside bar, in a neon-lit nightclub, working in a garage) while the model preserves the character's design, details, and proportions across each scene. This consistency is essential for professional filmmaking workflows where characters must be recognizable across cuts [16].
Gen-4 Turbo, released a month later in April 2025, could produce 10-second clips in approximately 30 seconds, making it about five times faster than the standard model [10].
Gen-4.5, initially released in November 2025 and subsequently updated in December 2025, topped the Artificial Analysis Text-to-Video leaderboard with an Elo score of 1,247 points, surpassing models from Google DeepMind and OpenAI. The model demonstrated unprecedented physical accuracy, with objects moving with realistic weight and momentum, liquids behaving naturally, and human motion appearing convincingly lifelike. Camera movements and cause-and-effect relationships were also handled with a new level of sophistication [1][2].
The December 2025 update to Gen-4.5 added native audio generation and native audio editing capabilities, allowing users to generate videos with synchronized dialogue, background sounds, and sound effects. The update also introduced long-form, multi-shot generation, enabling the creation of videos up to one minute in length with character consistency maintained across scenes and camera angles [24]. This update brought Runway closer to feature parity with competitors like Kling, which had also launched native audio capabilities around the same time.
Announced on October 22, 2024, Act-One is Runway's character performance transfer tool. It allows users to drive AI-generated character animations using nothing more than a single video recording and a character reference image [25].
Unlike traditional animation pipelines that require complex rigging, motion capture suits, and specialized equipment, Act-One needs just two inputs: a video of a performance and an image of the character to animate. A user can record themselves (or any actor) using any video camera, including a smartphone camera, and Act-One transfers the subject's facial expressions, head movements, and gestures onto the AI-generated character with high fidelity [25][26].
| Feature | Description |
|---|---|
| Input | A single video of a performance + a character reference image |
| Output | Animated video of the character mimicking the performance |
| Supported expressions | Facial expressions, head movements, lip sync, eye movements, gestures |
| Hardware requirements | Any video camera (including smartphone) |
| Model compatibility | Initially Gen-3 Alpha; later Gen-3 Alpha Turbo |
| Availability | Included with standard generation credits; no separate fee |
Act-One was initially available on Gen-3 Alpha and was later extended to Gen-3 Alpha Turbo for faster, more affordable generations. The tool has practical applications for independent filmmakers, animators, and content creators who lack access to expensive motion capture studios. It effectively democratizes character animation by removing the need for specialized hardware and technical expertise [26].
Released in November 2024, Frames is Runway's dedicated image generation model, designed with a focus on stylistic control and visual fidelity [27].
Frames was built to work in tandem with Runway's video generation models. By generating a high-quality still image in a consistent style, users can then feed that image into Gen-3 Alpha, Gen-4, or Gen-4.5 to create video with matching visual characteristics. This pipeline enables a workflow where the "look" of a project is established through Frames and then carried forward into video, maintaining stylistic consistency across both stills and motion [27][28].
Key features of Frames include fine-grained stylistic control, high visual fidelity, and the ability to establish a consistent visual language that can be carried into downstream video generation [27].
Frames was initially available to Unlimited and Enterprise subscribers and later rolled out more broadly.
On December 11, 2025, Runway released GWM-1, its first family of General World Models. Unlike traditional video generation models that produce pre-rendered clips, GWM-1 is an autoregressive model built on top of Gen-4.5 that generates frame by frame, runs in real time, and can be controlled interactively with actions such as camera pose changes, robot commands, and audio input [29].
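Runway has not published a client interface for GWM-1, but the action-conditioned, frame-by-frame loop described above can be sketched as follows; the `WorldModel` and `Action` classes here are invented purely for illustration:

```python
# Hypothetical sketch of an autoregressive world-model loop: one frame is
# emitted per step, steered by per-step actions (camera pose, robot commands,
# audio). This API is invented for illustration; it is not Runway's.
from dataclasses import dataclass

@dataclass
class Action:
    camera_delta: tuple  # e.g. (pan, tilt, dolly); robot or audio inputs
                         # would be additional fields in other variants

class WorldModel:
    def __init__(self, scene_image: str):
        self.context = [scene_image]  # generation starts from a static scene

    def step(self, action: Action) -> str:
        # A real model would run one autoregressive decode conditioned on the
        # accumulated context plus the incoming action; we just record it.
        frame = f"frame_{len(self.context)}(cam={action.camera_delta})"
        self.context.append(frame)
        return frame

model = WorldModel("beach_scene.png")
for _ in range(24):  # one second of interaction at GWM Worlds' 24 fps target
    frame = model.step(Action(camera_delta=(0.0, 0.0, 0.1)))  # dolly forward
    # An interactive client would display each frame as it arrives, rather
    # than waiting for a batch render to complete.
```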
GWM-1 is available in three specialized variants:
| Variant | Purpose | Key Capabilities |
|---|---|---|
| GWM Worlds | Real-time environment simulation | Generates immersive, infinite, explorable spaces with geometry, lighting, and physics; runs at 24 fps and 720p resolution; maintains spatial consistency across long movement sequences |
| GWM Avatars | Audio-driven interactive characters | Simulates natural human motion and expression for photorealistic or stylized characters; renders facial expressions, eye movements, lip-syncing, and gestures; supports extended conversations without quality degradation |
| GWM Robotics | Robotic manipulation simulation | Predicts video rollouts conditioned on robot actions; supports counterfactual generation for exploring alternative trajectories; multi-view video generation for training data |
GWM Worlds takes a static scene and generates an immersive, explorable space that the user can navigate in real time. As the user moves through the environment, the model generates consistent geometry, lighting, and physics on the fly. This has potential applications in gaming, education, training AI agents, and virtual reality experiences [29].
GWM Avatars produces audio-driven, interactive video of characters that can carry on natural conversations. The model renders realistic facial expressions, eye movements, lip-syncing, and gestures during both speaking and listening phases. It runs for extended conversations without quality degradation, making it suitable for real-time tutoring, customer support, training simulations, and interactive entertainment [29].
GWM Robotics serves as a learned simulator for robot training. It can generate synthetic training data conditioned on robot actions and supports counterfactual generation, allowing researchers to explore alternative robot trajectories and outcomes without physical testing. The company has announced plans to make GWM Robotics available through an SDK and is in active conversations with robotics firms and enterprises [29].
The GWM-1 family represents Runway's strategic shift toward what it calls "simulating reality" rather than simply generating video clips. The models aim to unify multiple domains and action spaces under a single base world model architecture.
Runway provides a developer API that enables programmatic access to its generative models, allowing developers and businesses to integrate video and image generation capabilities directly into their own applications, products, platforms, and websites [17].
The Runway API is designed around asynchronous tasks. Developers start a generation (such as image-to-video), then poll the task endpoint until it completes, and finally download the outputs. This architecture accommodates the computationally intensive nature of video generation, where rendering a single clip can take anywhere from a few seconds (Gen-4 Turbo) to several minutes [17].
| Endpoint | Function | Available Models |
|---|---|---|
| Text-to-video | Generate video from text prompts | Gen-4, Gen-4 Turbo, Gen-4.5 |
| Image-to-video | Animate a still image into video | Gen-4, Gen-4 Turbo, Gen-4.5 |
| Gen-4 Image (with References) | Generate images using reference images and text | Gen-4 |
| Video-to-video | Transform existing video with new styles or modifications | Gen-3 Alpha, Gen-4 |
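A minimal sketch of that start-then-poll pattern in Python is shown below. The base URL, endpoint paths, headers, and payload fields are assumptions made for illustration, not the documented API contract; consult Runway's developer reference for the real one:

```python
# Illustrative-only sketch of the asynchronous task pattern described above.
# Endpoint paths, headers, and fields are assumptions, not Runway's contract.
import time
import requests

API_BASE = "https://api.example-runway.dev/v1"   # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# 1. Start an image-to-video generation task.
resp = requests.post(
    f"{API_BASE}/image_to_video",
    headers=HEADERS,
    json={
        "model": "gen4_turbo",
        "prompt_image": "https://example.com/reference.png",
        "prompt_text": "slow dolly toward the subject",
        "duration": 10,
    },
)
resp.raise_for_status()
task_id = resp.json()["id"]

# 2. Poll the task endpoint until the render finishes.
while True:
    task = requests.get(f"{API_BASE}/tasks/{task_id}", headers=HEADERS).json()
    if task["status"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)  # renders take seconds (Gen-4 Turbo) to minutes

# 3. Download the output on success.
if task["status"] == "SUCCEEDED":
    video_url = task["output"][0]
    with open("clip.mp4", "wb") as f:
        f.write(requests.get(video_url).content)
```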
API credits can be purchased for $0.01 per credit in the developer portal. Generation costs vary by model: Gen-4.5 costs 25 credits per second of output, Gen-4 standard costs approximately 10 credits per second, and Gen-4 Turbo costs 5 credits per second. Enterprise customers can negotiate custom API pricing with dedicated GPU allocations and priority processing [17][18].
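Those per-second rates make cost estimation a one-line multiplication; a quick sanity check against the figures above:

```python
# Cost per clip = seconds x credits-per-second x $0.01 per credit, using the
# per-model rates quoted above (the Gen-4 standard rate is approximate).
CREDITS_PER_SECOND = {"gen4.5": 25, "gen4": 10, "gen4_turbo": 5}
USD_PER_CREDIT = 0.01

def clip_cost_usd(model: str, seconds: float) -> float:
    return seconds * CREDITS_PER_SECOND[model] * USD_PER_CREDIT

print(clip_cost_usd("gen4.5", 10))      # 2.5 -> a 10 s Gen-4.5 clip costs $2.50
print(clip_cost_usd("gen4_turbo", 10))  # 0.5 -> the same clip on Gen-4 Turbo, $0.50
```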
In October 2025, Runway released Workflows, a node-based system that allows users to chain multiple AI models and create custom multi-stage generative pipelines tailored to specific production needs [19].
Workflows use a visual editor where users connect nodes, each of which performs a specific function or runs a generative model. Nodes take input from other nodes and process it to create output, enabling complex multi-step creative processes:
| Node Type | Function |
|---|---|
| Generation nodes | Run Gen-4, Gen-4.5, or image generation models |
| Processing nodes | Apply transformations, resizing, color adjustments |
| Input nodes | Accept text prompts, images, or video as starting material |
| Output nodes | Export final results in specified formats |
| Logic nodes | Control flow, branching, and conditional generation |
Workflows are particularly valuable for creating repeatable creative pipelines. A studio might build a workflow that takes a character reference image and a scene description, generates a video clip, applies color grading, and exports the result in a specific format, all as a single automated process [19].
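Workflows are assembled in the visual editor rather than in code, but the studio example above can be pictured as a simple node graph. The sketch below uses an invented `Node` class purely to illustrate how stages chain together; it is not a Runway Workflows API:

```python
# Illustrative-only model of the studio workflow described above; the Node
# class is invented to picture the graph, not a Runway Workflows API.
class Node:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def run(self, *inputs):
        print(f"running node: {self.name}")
        return self.fn(*inputs)

# One node per stage of the pipeline.
generate = Node("gen4.5 video",  lambda ref, scene: f"clip({ref}, {scene})")
grade    = Node("color grade",   lambda clip: f"graded({clip})")
export   = Node("export ProRes", lambda clip: f"prores({clip})")

# Reference image + scene description -> generated, graded, exported clip.
result = export.run(grade.run(generate.run("character_ref.png",
                                           "neon-lit nightclub at night")))
print(result)
```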
Beyond video generation, Runway offers a broader suite of AI-powered creative tools accessible through its web platform, including background removal and green-screen rotoscoping, style transfer, object detection, and image editing utilities.
The platform is browser-based, meaning users can access its full capabilities without installing desktop software. This design choice aligns with Runway's mission of making AI tools accessible to as wide an audience as possible.
Runway's tools were used by the visual effects team on the Oscar-winning film Everything Everywhere All at Once (2022), directed by Daniel Kwan and Daniel Scheinert (known collectively as the Daniels). The film, produced by A24, won seven Academy Awards at the 95th ceremony in March 2023, including Best Picture, making it the first science-fiction film to win that award [30].
VFX artist Evan Halleck used Runway's green screen background removal tool while working on the film's "rock universe" sequence. The scene required rotoscoping shots filmed on a green screen to create mattes that would allow the rocks to be composited with other imagery. Halleck noted that Runway's tool was "cutting things out better than my human eye was, and it gave me a clean matte that I could use for other things." For a small VFX team working under tight deadlines, the AI-assisted workflow turned what would have been days of manual rotoscoping into a matter of minutes [30][31].
The film's association with Runway became a prominent talking point in the industry, demonstrating that AI tools could contribute meaningfully to award-winning creative work without replacing human artists.
Editors on The Late Show with Stephen Colbert have adopted Runway's tools for rapid shot editing, using the platform's AI capabilities to speed up the turnaround time required for a nightly show's production schedule.
Runway has invested heavily in positioning itself within the entertainment industry. The company operates Runway Studios, a film and animation production arm that works with professional filmmakers to create content using Runway's tools.
The company also hosts the annual AI Film Festival (AIFF), which has become the most prominent showcase for AI-generated cinema. The inaugural festival in 2023 attracted roughly 300 submissions. By 2025, that number had grown to over 6,000 submissions, and the event was held at Lincoln Center's Alice Tully Hall in New York, a venue associated with the prestigious New York Film Festival. The festival has received partnership support from the Tribeca Film Festival and the Geneva International Film Festival [12].
In July 2025, IMAX partnered with Runway to screen the ten finalist films from the 2025 AI Film Festival in IMAX theaters. The screenings ran from August 17 to August 20 at ten locations across the United States, including Los Angeles, New York, San Francisco, Chicago, Seattle, Dallas, Boston, Atlanta, Denver, and Washington, D.C. A jury including filmmakers Gaspar Noé and Harmony Korine and producer Jane Rosenthal selected the finalists [32][33].
In January 2026, Runway announced that its AI Festival would expand beyond film to include categories such as advertising, gaming, design, and fashion, reflecting the broadening applications of generative video. The festival, renamed AIF (AI Festival), continues to be held at Alice Tully Hall in Lincoln Center [13].
In December 2025, Adobe and Runway announced a multi-year strategic partnership that brings together Runway's generative video models with Adobe's industry-leading creative tools. The partnership represents a significant distribution channel for Runway's technology [20].
Key elements of the partnership include making Runway's generative video models available within Adobe's Firefly ecosystem and Creative Cloud applications, alongside Adobe Ventures' investment in Runway's Series E round [20].
The partnership positions Runway's technology within the workflows of millions of Adobe Creative Cloud subscribers, significantly expanding its potential reach beyond Runway's own platform. Adobe is designated as Runway's preferred API creativity partner [20].
On January 5, 2026, Runway announced a partnership with NVIDIA centered on the upcoming NVIDIA Vera Rubin platform. Runway's Gen-4.5 model was ported from NVIDIA Hopper to the Vera Rubin NVL72 architecture within a single day, demonstrating how readily Runway's models move across GPU generations [34].
At NVIDIA GTC 2026 (held March 16-19 in San Jose), Runway announced a research preview of a new real-time video generation model developed in collaboration with NVIDIA. The model runs on Vera Rubin hardware and achieves instant HD video generation with a time-to-first-frame under 100 milliseconds. This represents a significant step toward interactive, real-time AI video generation rather than the batch-processing approach used by current models [34][35].
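Real-time playback imposes a strict per-frame budget, which puts the sub-100 ms time-to-first-frame in context; a quick calculation using the 24 fps rate cited earlier for GWM Worlds:

```python
# Per-frame latency budget for real-time generation at 24 fps, the rate
# cited earlier in this article for GWM Worlds.
fps = 24
frame_budget_ms = 1000 / fps
print(f"{frame_budget_ms:.1f} ms per frame")  # ~41.7 ms to sustain real time
# A time-to-first-frame under 100 ms means playback can begin within two to
# three frame intervals instead of waiting on a full batch render.
```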
Runway CEO Cristobal Valenzuela stated: "These are long-context, physics-aware workloads, and that's exactly where NVIDIA Rubin platform shines. Together, we're accelerating a new class of world models that can power explorable worlds, interactive avatars, and robotics training" [34].
The Rubin platform delivers 50 petaflops of inference compute per GPU and is designed to accelerate real-time, long-form, high-fidelity video generation. Runway's GWM-1 world models are among the primary applications the platform is designed to support.
In December 2025, Runway signed a long-term agreement with CoreWeave, the AI cloud infrastructure provider, to power its next-generation video and world models. Under the agreement, Runway will utilize CoreWeave's NVIDIA GB300 NVL72 systems for large-scale training and inference. The partnership also includes access to W&B Models for observability and W&B Inference, powered by the CoreWeave AI Cloud Platform, as well as CoreWeave AI Object Storage for managing training datasets across geographies without egress charges [36].
Runway's partnership with IMAX, established in July 2025, brought AI-generated short films to IMAX's large-format auditoriums for the first time. The commercial screenings of AI Film Festival finalist works across ten U.S. cities marked a milestone for AI-generated content reaching mainstream theatrical distribution [32].
Runway operates on a credit-based subscription model. As of early 2026, the pricing structure is as follows:
| Plan | Monthly Price | Credits per Month | Notes |
|---|---|---|---|
| Basic | Free | 125 (one-time) | Access to basic tools; enough for ~25 seconds of Gen-4 Turbo video |
| Standard | $12/month (billed annually) | 625 | Gen-4.5 access; Gen-4.5 costs 25 credits/second, Gen-4 Turbo costs 5 credits/second |
| Pro | $28/month (billed annually) | 2,250 | Higher resolution, longer clips, custom voices |
| Unlimited | $76/month (billed annually) | 2,250 | Unlimited Gen-4 Turbo generations in Explore Mode; Gen-4.5 usage billed separately |
| Enterprise | Custom | Custom | Custom deployment, dedicated API access, white-glove onboarding, 24/7 SLA support |
Runway's Enterprise tier offers capabilities designed for large-scale production environments [18]:
| Feature | Description |
|---|---|
| Dedicated GPU allocation | Reduced render times through reserved compute resources |
| White-labeling | Tools branded for internal workflows |
| REST API access | Dedicated endpoints, webhooks, and SFTP integration |
| Pipeline integration | Direct connection to editing suites, CDN deployments, and data lakes |
| Custom onboarding | Initial workshops, ongoing training sessions, and dedicated account manager |
| Priority support | 24/7 support channels with guaranteed SLA response times |
| Custom model training | Train models on proprietary datasets for brand-specific outputs |
The AI video generation market has become intensely competitive since 2024, with several major technology companies and startups vying for dominance.
| Competitor | Developer | Notable Strengths |
|---|---|---|
| Sora / Sora 2 | OpenAI | High visual quality; synchronized dialogue and sound effect generation |
| Veo 3 / Veo 3.1 | Google DeepMind | Strong overall preference in benchmarks; native audio generation |
| Pika | Pika Labs | User-friendly interface; quick iteration on features; Pika 2.2 with scene-level editing |
| Kling | Kuaishou | Aggressive pricing; videos up to 2 minutes; Kling 2.5 Turbo with 60% faster speeds |
| Luma Dream Machine | Luma AI | Accessible free tier; strong community adoption |
| Video-01 | MiniMax | Strong in Chinese market; text-to-video with audio |
The competitive dynamics shifted meaningfully in 2025. OpenAI's Sora 2 introduced synchronized audio generation alongside video, and Google's Veo 3 and Veo 3.1 performed strongly in independent benchmarks. Kuaishou's Kling, developed in China, applied pricing pressure across the market; average cost per minute of AI video dropped approximately 65% between 2024 and 2025. MiniMax emerged as another competitor from China with its Video-01 model, adding to the price pressure from the Asian market.
Runway has responded by focusing on quality leadership (Gen-4.5's top ranking on the Artificial Analysis leaderboard) and by carving out a distinct niche among professional creatives and filmmakers, rather than competing solely on price. The company's emphasis on consistency across scenes, reference-based character control, and filmmaker-oriented features like its Studios division, Film Festival, Adobe partnership, and IMAX screenings distinguishes it from competitors that target more casual users.
| Factor | Runway | OpenAI Sora | Google Veo | Kling |
|---|---|---|---|---|
| Primary audience | Professional creatives, filmmakers | General consumers, creators | Enterprise, developers | Cost-conscious creators |
| Consistency controls | Gen-4 References (character, scene, style) | Limited | Moderate | Limited |
| API availability | Yes (REST API) | Yes | Yes (Vertex AI) | Yes |
| Enterprise features | Dedicated GPUs, white-labeling, SLA | Enterprise tier | Google Cloud integration | Basic enterprise |
| Pricing model | Credit-based subscription | Subscription | API usage-based | Credit-based, aggressive pricing |
| Adobe integration | Yes (Firefly partnership) | No | No | No |
| Max clip length | Up to 1 minute (Gen-4.5) | Up to 20 seconds | Up to 8 seconds | Up to 2 minutes |
| Native audio | Yes (Gen-4.5 update) | Yes (Sora 2) | Yes (Veo 3) | Yes (Kling 2.0+) |
| World models | GWM-1 (Worlds, Avatars, Robotics) | No | No | No |
| Performance transfer | Act-One | No | No | No |
Runway hit approximately $300 million in annualized recurring revenue by late 2025, serving over 300,000 customers. The company's revenue growth has been driven by a combination of individual creative professionals, small studios, and larger enterprise clients in the entertainment and advertising industries.
| Metric | Value | Date |
|---|---|---|
| Annualized recurring revenue | ~$300M | Late 2025 |
| Total customers | 300,000+ | Late 2025 |
| Total funding raised | $860M+ | February 2026 |
| Valuation | $5.3B | February 2026 |
| AI Film Festival submissions | 6,000+ | 2025 |
The broader market for AI video companies saw $3.08 billion in global funding in 2025, up 94.6% from $1.58 billion in 2024, reflecting the rapid growth of the sector in which Runway operates [14].
As of early 2026, Runway is focused on what it calls "world models," AI systems that can simulate realistic environments and physics rather than simply generating video frames. The release of GWM-1 in December 2025, with its three variants (Worlds, Avatars, and Robotics), represents the first concrete step in this direction.
The February 2026 Series E funding round explicitly cited pre-training next-generation world models as a primary use of capital. The company is expanding its team across research, engineering, and go-to-market functions.
At NVIDIA GTC 2026, Runway demonstrated a research preview of real-time video generation running on NVIDIA Vera Rubin hardware, with time-to-first-frame under 100 milliseconds. This points toward a future where AI video generation is interactive rather than batch-processed, opening new possibilities for gaming, live production, and immersive experiences [35].
Runway is increasingly seeing adoption beyond its traditional media and advertising customer base, with growing interest from the gaming and robotics industries. The CoreWeave compute partnership, signed in December 2025, provides the infrastructure backbone for scaling these new workloads [36].
Runway continues to operate its web-based platform, its Studios production division, its annual AI Festival (now expanded to categories beyond film), and its developer API. The Adobe partnership provides a significant new distribution channel, and the NVIDIA collaboration positions the company at the forefront of next-generation GPU-accelerated AI inference. With $860 million in total funding and a $5.3 billion valuation, the company sits among the most well-capitalized startups in the generative AI space.