Runway Gen-3 Alpha is a generative AI video model developed by Runway, a New York based AI research and product company. It was unveiled on June 17, 2024, as the first member of a new series of models trained on freshly built infrastructure for large scale multimodal training, and rolled out to paying subscribers in July 2024. Gen-3 Alpha generated 720p clips of five or ten seconds from text prompts, still images or short video references, and was widely seen as a major step up from the company's earlier Gen-2 system in fidelity, motion realism and prompt adherence. Over the following weeks Runway added an image to video mode, released a faster and cheaper sibling model called Gen-3 Alpha Turbo in mid August 2024, and shipped an Extend feature for stitching together longer sequences.
Gen-3 Alpha arrived during a stretch of intense competition in text to video generation. OpenAI's Sora had been previewed in February 2024 but was still gated to a tiny group of artists and red teamers, while Chinese systems such as Kling and consumer focused tools such as Luma's Dream Machine and Pika were starting to ship to the public. Runway's pitch with Gen-3 Alpha was that filmmakers and serious creative users could actually access a state of the art video model on a credit based subscription, and the model quickly became a fixture in commercial work, music videos, viral short films and stage visuals. The launch was also followed by a 404 Media report on a leaked internal training data spreadsheet that allegedly listed thousands of YouTube channels and films, which sparked a long running discussion about training data sources for video models.
| Field | Value |
|---|---|
| Developer | Runway ML (Runway AI, Inc.) |
| Type | Text to video, image to video diffusion model |
| Announced | June 17, 2024 |
| Public release | July 2024 (paid subscribers) |
| Architecture | Joint text and video diffusion transformer |
| Parameters | Not disclosed |
| Training compute | Runway research compute cluster (size not disclosed) |
| Output resolution | 720p |
| Base clip length | 5 seconds and 10 seconds |
| Extended clip length | Up to roughly 16 to 18 seconds via Extend |
| Variants | Gen-3 Alpha, Gen-3 Alpha Turbo |
| Successor | Runway Gen-4 (March 2025) |
| License | Proprietary, web platform and API |
Runway was founded in 2018 by Cristobal Valenzuela, Alejandro Matamala and Anastasis Germanidis, three graduates of New York University's Tisch School of the Arts who met at the Interactive Telecommunications Program. The company started as a browser based suite of small machine learning tools for creators and slowly built a reputation for shipping research grade models in a usable interface. Runway was also a co-author, with researchers at LMU Munich, of the 2022 paper on latent diffusion models that became the basis for Stable Diffusion.
The company's first generation of branded video models, named the Gen series, traced an arc from controlled video editing to open ended generation:

- Gen-1 (February 2023) was a video to video model that restyled existing footage with a text or image prompt while preserving the source clip's structure.
- Gen-2 (announced March 2023, publicly released June 2023) added text to video and image to video generation, making Runway one of the first companies with a publicly usable text to video product.
In parallel, the AI video field was moving fast. OpenAI had teased Sora in February 2024 with sample clips that were noticeably longer and more cinematic than anything publicly available, and that put pressure on every other lab to demonstrate their next model. Runway's response was Gen-3 Alpha, announced about four months after the Sora teaser and shipped to paying users while Sora itself was still locked behind a closed program.
Gen-3 Alpha marked the first time a high quality general purpose video model was actually available to anyone with a credit card, on a normal web app, with reasonable rendering times. Sora was widely covered but not usable. Pika, Luma Dream Machine and Kling all offered free or cheap access but with weaker motion and lower fidelity at launch. For roughly the second half of 2024, Gen-3 Alpha sat in an unusual position as both a frontier research demo and a commercial product, and a lot of the public's first impression of what serious AI video looked like came from Runway clips circulating on social media.
| Date | Event |
|---|---|
| June 17, 2024 | Runway announces Gen-3 Alpha with a research blog post and a reel of sample clips. Initial access via a waitlist and Runway's creative partners program. |
| Late June 2024 | First public sample reels released; coverage in The Verge, TechCrunch, Wired, AI Insider and others. |
| July 1, 2024 | Gen-3 Alpha rolls out to paying subscribers on the Runway web app for text to video at 5 and 10 second lengths. |
| July 22, 2024 | Image to video mode added to Gen-3 Alpha, allowing users to animate a still image with optional text prompt. |
| July 24, 2024 (approx.) | 404 Media publishes its report on a leaked training data spreadsheet alleged to be from Runway's Jupiter project. |
| August 15, 2024 | Runway releases Gen-3 Alpha Turbo, advertised as roughly seven times faster and half the credit cost of Gen-3 Alpha. Image to video required. |
| Late August 2024 | Extend feature rolls out, allowing users to add additional 5 or 10 second segments to an existing clip, in some workflows producing usable sequences of around 16 to 18 seconds. |
| September 2024 | Lip Sync feature arrives for Gen-3 Alpha clips, re-rendering mouths to match uploaded audio. |
| October 22, 2024 | Act-One performance transfer feature launches on Gen-3 Alpha, then extends to Turbo. |
| November 2024 | Frames image model announced; Gen-3 Alpha remains the primary video backbone. |
| March 31, 2025 | Runway announces Gen-4, the successor to the Gen-3 family, with consistent character and scene controls. |
| April 2025 | Gen-4 Turbo released, ending Gen-3 Alpha's role as the company's flagship model. Gen-3 Alpha and Turbo remain available on the platform. |
Runway has been deliberately sparing with details about Gen-3 Alpha's internals, consistent with its practice across the Gen series. The publicly stated facts come from the company's research blog post and from interviews with Cristobal Valenzuela and Anastasis Germanidis around the launch.
Runway pitched Gen-3 Alpha as the first model on its "new infrastructure built for large scale multimodal training," framing it as the start of a series of "general world models." That framing later became central to the company's roadmap, but at launch the practical difference was simply that Gen-3 Alpha was a much more coherent video model than Gen-2 in basically every dimension that creators cared about.
Gen-3 Alpha generated 720p video at variable frame rates, with 24 frames per second as the most common output for clips that mimicked traditional cinema. The headline improvements over Gen-2 were in three areas:

- Fidelity: sharper detail and more stable textures, faces and lighting from frame to frame.
- Motion realism: people, vehicles and fluids moved with plausible physics rather than the smearing and morphing that characterized Gen-2 output.
- Prompt adherence: the model followed compositional, stylistic and camera direction in prompts far more reliably.
The model still struggled with hands and very fast multi object interaction, and produced occasional hallucinations such as duplicated limbs or unstable text on signs, but the failure rate per generation dropped substantially compared with Gen-2.
Gen-3 Alpha shipped with an interface called Director Mode that exposed explicit camera controls in addition to the text prompt. The user could select a camera move (zoom in, zoom out, pan left, pan right, tilt up, tilt down, dolly forward, dolly backward, orbit clockwise, orbit counterclockwise), set the intensity of that move on a slider and combine multiple moves in a single shot.
This turned Gen-3 Alpha into a usable tool for previs and storyboarding, since users could iterate on a shot composition without rewriting the prompt every time.
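The UI maps cleanly onto structured data. As a purely illustrative sketch (the type and field names below are hypothetical, not Runway's actual API), a Director Mode shot with stacked camera moves might be represented like this:

```python
# Hypothetical representation of a Director Mode shot; type and field
# names are illustrative and do not correspond to Runway's actual API.
from dataclasses import dataclass, field

@dataclass
class CameraMove:
    kind: str          # "zoom_in", "pan_left", "dolly_forward", "orbit_clockwise", ...
    intensity: float   # 0.0 (subtle) to 1.0 (aggressive), mirroring the UI slider

@dataclass
class Shot:
    prompt: str
    duration_s: int = 10                                # base clips were 5 or 10 seconds
    moves: list[CameraMove] = field(default_factory=list)

shot = Shot(
    prompt="a lighthouse on a sea cliff at dusk, waves breaking below",
    moves=[
        CameraMove("dolly_forward", 0.4),  # slow push in
        CameraMove("tilt_up", 0.2),        # gentle upward drift, combined in one shot
    ],
)
```

Because the moves and their intensities live outside the prompt text, a user could hold the prompt fixed and sweep only the camera parameters between generations.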
A few weeks after the initial release, Runway added an image to video mode in which a user supplied a still image as the first frame, optionally with a text prompt that described how the scene should evolve. This made Gen-3 Alpha much more controllable in production workflows. Users typically generated a key frame in a separate image model such as Midjourney or Stable Diffusion, then handed that frame to Gen-3 Alpha to animate.
In the Turbo variant, image to video became the primary supported mode.
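Runway later exposed image to video programmatically through its developer API, with Turbo as the supported model. The sketch below follows the general shape of Runway's published Python SDK, but parameter names and values here are from memory and should be checked against the current API documentation:

```python
# Image to video with Gen-3 Alpha Turbo via Runway's developer API.
# Follows the general shape of the published Python SDK; verify exact
# parameter names and values against the current documentation.
import time
from runwayml import RunwayML  # pip install runwayml

client = RunwayML()  # reads the RUNWAYML_API_SECRET environment variable

task = client.image_to_video.create(
    model="gen3a_turbo",                              # Turbo requires a first frame
    prompt_image="https://example.com/keyframe.png",  # e.g. a Midjourney or SD still
    prompt_text="slow dolly forward as fog rolls over the street",
    duration=10,                                      # base lengths were 5 or 10 seconds
)

# Generation is asynchronous: poll the task until it resolves.
while (task := client.tasks.retrieve(task.id)).status not in ("SUCCEEDED", "FAILED"):
    time.sleep(5)
print(task.output)  # on success, URL(s) for the rendered clip
```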
Gen-3 Alpha's base output length was 5 or 10 seconds. To produce longer sequences, users could apply the Extend feature, which conditioned a new generation on the last frame of the previous clip and continued the action. In practical workflows, two extensions on a 10 second base clip produced sequences in the 16 to 18 second range, although quality and continuity tended to drift in later extensions.
For longer sequences, most filmmakers cut between many short Gen-3 Alpha clips rather than relying on a single extended generation.
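The Extend mechanic can also be approximated by hand: grab the final frame of a clip and feed it back as the first frame of the next generation. A minimal sketch, using ffmpeg to extract the last frame and a hypothetical generate_clip() wrapper around an image to video call like the one sketched above:

```python
# Approximating Extend: condition each new segment on the final frame of
# the previous one. generate_clip() is a hypothetical stand-in, not a
# real Runway function.
import subprocess

def generate_clip(prompt_image: str, prompt_text: str) -> str:
    """Hypothetical wrapper around an image to video request; returns a file path."""
    raise NotImplementedError("wire this to your image to video call")

def last_frame(video_path: str, frame_path: str) -> str:
    # Seek ~0.1 s before the end of the file and dump a single frame.
    subprocess.run(
        ["ffmpeg", "-y", "-sseof", "-0.1", "-i", video_path,
         "-frames:v", "1", frame_path],
        check=True,
    )
    return frame_path

def extend(base_clip: str, prompt: str, extensions: int = 2) -> list[str]:
    clips = [base_clip]
    for i in range(extensions):
        seed = last_frame(clips[-1], f"seed_{i}.png")
        clips.append(generate_clip(prompt_image=seed, prompt_text=prompt))
    return clips  # concatenate in an editor or with ffmpeg's concat demuxer
```

Each hop conditions only on a single frame, which is why continuity drifts over successive extensions, as noted above.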
Users could apply broad style cues through prompt language (for example, anime, claymation, 1970s film stock, watercolor) and through reference images. Gen-3 Alpha preserved style across a clip more reliably than Gen-2, but did not yet support the consistent character transfer across cuts that Gen-4 later introduced via References.
In September 2024, Runway added a Lip Sync feature that took an existing Gen-3 Alpha clip of a character and re-rendered the mouth area to match an uploaded audio track or text to speech output. This was not real time and did not produce native synchronized audio inside the model itself; the audio still had to be generated or recorded separately and then aligned. Native audio generation came to Runway only with later models in the Gen-4 family.
In October 2024, Runway released Act-One on top of Gen-3 Alpha. Act-One let users record a short video of a real performer, then drive an AI generated character with the same facial expressions, head movement and gesture. It was widely seen as the most credible character animation feature in any consumer AI video tool at that point. Act-One later extended to Gen-3 Alpha Turbo for cheaper iteration.
The Gen-3 Alpha family ultimately consisted of two production models that shared the same backbone with different inference cost and quality tradeoffs.
| Variant | Released | Key traits | Typical use |
|---|---|---|---|
| Gen-3 Alpha | June and July 2024 | Highest quality output in the family; slowest and most expensive per second; supports text to video, image to video and video to video | Hero shots, music videos, key sequences in commercials, festival films |
| Gen-3 Alpha Turbo | August 15, 2024 | Roughly 7x faster at about half the credit cost per second; image to video required; small loss of fidelity in fine motion compared to Alpha | Iteration, previs, social posts, large batch generation, Act-One performance transfer |
Both variants remain available on the Runway platform after the launch of Gen-4, although as of 2026 most active users have moved to Gen-4 or Gen-4.5 for new work.
Runway has used a credit based subscription model since well before Gen-3 Alpha. Credits cap how much video a user can generate per month at each tier, with extra credits available as add-on top ups. The pricing below reflects the structure that was in place when Gen-3 Alpha was the company's flagship model in 2024.
| Plan | Monthly price (annual billing) | Monthly credits | Notes |
|---|---|---|---|
| Basic | Free | 125 (one time) | Limited generations; watermarked exports for some content |
| Standard | $12 | 625 | Unlocked exports; access to Gen-3 Alpha and Turbo |
| Pro | $28 | 2,250 | Higher resolution exports, longer clip features, custom voices |
| Unlimited | $76 | Unlimited generations in Explore Mode plus 2,250 fast credits | Designed for heavy users; render queue prioritization |
| Enterprise | Custom | Custom | Dedicated GPUs, white labeling, REST API SLA, training on internal datasets |
| Mode | Credit cost |
|---|---|
| Gen-3 Alpha (text to video, image to video) | 10 credits per second |
| Gen-3 Alpha Turbo (image to video) | 5 credits per second |
| Extend (Gen-3 Alpha) | 10 credits per additional second |
| Extend (Gen-3 Alpha Turbo) | 5 credits per additional second |
A Standard plan with 625 credits therefore worked out to roughly 62 seconds of Gen-3 Alpha video or about 125 seconds of Gen-3 Alpha Turbo video per month before having to buy top ups. Top up credit packs were available at $10 for 1,000 credits, with additional volume discounts on the Pro and Unlimited tiers. Some sources have reported the Standard tier rate as 5 credits per second of Gen-3 Alpha output, which corresponds to the Turbo rate; this difference reflects changes to credit pricing that Runway made over the life of the model and the introduction of cheaper tiers as Gen-4 became the flagship.
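The arithmetic behind those figures is simple enough to check directly:

```python
# Credit math for the 2024 Standard plan (625 credits per month).
CREDITS = 625
ALPHA_PER_SEC = 10   # Gen-3 Alpha: 10 credits per second
TURBO_PER_SEC = 5    # Gen-3 Alpha Turbo: 5 credits per second

print(CREDITS / ALPHA_PER_SEC)   # 62.5  -> roughly 62 seconds of Alpha video
print(CREDITS / TURBO_PER_SEC)   # 125.0 -> 125 seconds of Turbo video

# A single 10 second Alpha clip costs 100 credits, so a Standard plan
# covers about six such clips per month before $10-per-1,000-credit top ups.
```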
As of 2026, Gen-3 Alpha and Turbo remain billed at lower credit costs than Gen-4 and Gen-4.5, since Runway has progressively repriced older models downward as new flagships have replaced them.
Reception of Gen-3 Alpha was unusually positive for a Runway release, even in tech press that had been skeptical of earlier Gen models.
The community on Twitter and on Runway's own Discord pushed thousands of clips through the model in the first weeks, with a lot of attention going to filmmakers like Paul Trillo and Nicolas Neubert, who had already been doing notable work with Gen-2 and Pika and now had a substantially better tool. Clips set in physically grounded contexts, particularly cars, weather, food and architecture, looked unusually clean for AI video. Clips with crowds, fast multi object interaction and complex hands were where Gen-3 Alpha's seams still showed.
A fair amount of coverage also dwelled on what Gen-3 Alpha could not yet do. There was no native audio. Clip lengths were short enough that real narrative storytelling required heavy editing across many shots. Character consistency across cuts was not yet a built in feature, which became Runway's main pitch a year later with Gen-4 and the References system.
Gen-3 Alpha became the default tool for a wave of AI assisted film and music video work in late 2024 and 2025, spanning commercials, music videos, festival short films and stage visuals.
In July 2024, the technology publication 404 Media published a report based on what it described as a leaked Google Sheet from Runway's internal "Jupiter" project. The spreadsheet, dated to early 2024, was reported to contain thousands of rows listing YouTube channels, films, TV shows and individual videos that had allegedly been used as references or training data for Gen-3 Alpha or its predecessors.
Key claims in the 404 Media report:

- The spreadsheet allegedly catalogued thousands of YouTube channels, individual videos, films and TV shows, including animation studios and VFX focused channels.
- The listed content had allegedly been used as reference or training data for Gen-3 Alpha and earlier Runway models.
- Scraping some of the listed YouTube content would have violated YouTube's terms of service.
Runway did not formally publish a line by line response to the report. Cristobal Valenzuela addressed the story in passing in interviews and on social media, framing the company's data practices in terms of fair use, in-house pipelines, partnered datasets and standard scraping of publicly available content. Runway did not confirm or deny that the spreadsheet was a real internal document. The company also declined to release a model card itemizing training sources.
The report fed into a wider 2024 and 2025 conversation about training data for video models. Other companies in the space, including OpenAI for Sora and several open source video efforts, faced similar questions about whether YouTube and movie content had been used. The legal landscape around scraping for AI training was unsettled at the time of publication and remains a moving target as of 2026.
| Aspect | Reported claim | Runway's public position |
|---|---|---|
| Source of leak | Internal Jupiter spreadsheet allegedly maintained by Runway researchers | Did not confirm or deny authenticity |
| Type of content listed | YouTube channels, films, TV shows, animation studios, VFX channels | No itemized response |
| Use of content | Reference and training data for Gen-3 Alpha and earlier models | General appeal to fair use, in-house pipelines and partnered datasets |
| YouTube terms of service issue | Some scraping would have violated YouTube's terms | Did not address directly |
| Model card disclosure | Demanded by some critics and journalists | Not provided |
| Lawsuits | None publicly filed against Runway specifically over the spreadsheet as of 2026 | No public comment on potential litigation |
The controversy did not slow Gen-3 Alpha's commercial uptake meaningfully, but it has remained a frequently cited reference point in academic and policy discussions about training data transparency in generative AI.
Gen-3 Alpha sat at the center of Runway's roadmap for roughly nine months before being eclipsed by newer models.
Gen-3 Alpha and Gen-3 Alpha Turbo remain available on Runway's platform for users who want the older model's specific look, although new feature work has been focused on Gen-4.5, GWM-1 and the company's developer API.
The table below compares Gen-3 Alpha at its peak (mid 2024 to early 2025) with the major contemporary text to video systems. It is intentionally focused on the period when Gen-3 Alpha was a current flagship; later versions of these competitors (Sora 2, Veo 3, Kling 2.0 and so on) are listed where they are clearly relevant to the comparison.
| Model | Developer | First public release | Public availability when Gen-3 Alpha launched | Max clip length | Native audio | Notable strength | Notable weakness |
|---|---|---|---|---|---|---|---|
| Gen-3 Alpha | Runway | June 2024 (announce), July 2024 (paid users) | Available on the Runway web app | ~10s base, ~16 to 18s with Extend | No (separate Lip Sync feature later) | Realistic motion, accessible to anyone with a credit card | Hands, fast multi object interaction, no native audio |
| Sora (original) | OpenAI | Announced February 2024, public December 2024 (Sora Turbo) | Closed alpha, no consumer access | 60s in demos, capped lower in Sora Turbo | No (in original release) | Long high quality demo clips | Closed access during 2024 |
| Sora 2 | OpenAI | September 2025 | Public via ChatGPT and the Sora app | Up to ~20s typical | Yes | Synchronized audio, character consistency | Stricter content policy |
| Veo | Google DeepMind | May 2024 (Veo 1 demo) | Limited Vertex AI preview | Demos at 60s, product capped lower | No (Veo 1) | Long shots in demos | Limited public access in 2024 |
| Veo 2 | Google DeepMind | December 2024 | Limited Vertex AI access | ~8s typical | No | High fidelity, strong physics | Restricted access |
| Veo 3 | Google DeepMind | May 2025 | Public via Gemini, Vertex AI | ~8s typical | Yes | Native audio, top tier benchmarks | Shorter clip length than Runway |
| Kling (1.0 and 1.5) | Kuaishou | June 2024 (1.0), September 2024 (1.5) | Public for users with a Chinese phone number, then global | Up to 2 minutes (Kling 1.0) | No (initial), later yes | Long clips, realistic motion | Earlier versions geo gated |
| Kling 2.0 | Kuaishou | April 2025 | Public globally | Up to 2 minutes | Yes | Long clips with audio, aggressive pricing | Strict content policy in China |
| Luma Dream Machine | Luma AI | June 2024 | Public free tier | ~5s base, extendable | No (initial), later yes | Free access, fast turnaround | Lower fidelity at launch |
| Pika (1.0 and 1.5) | Pika Labs | Late 2023 (1.0) | Public free tier | ~3 to 10s | No | Friendly UI, fast iteration | Fidelity below Runway and Sora |
| Pika 2.0 | Pika Labs | December 2024 | Public free and paid tiers | ~10s | Yes | Scene Ingredients feature for character control | Quality below Gen-3 Alpha at launch |
| Hailuo (Video-01) | MiniMax | September 2024 | Public free and paid | ~6s | No (initial) | Strong realism in human motion | Free tier rate limited |
| HunyuanVideo | Tencent | December 2024 | Open weights | ~5s | No | Open weights, runs on consumer GPUs at small sizes | Lower fidelity than Gen-3 Alpha |
| Wan2.1 (Wanx) | Alibaba | February 2025 | Open weights | ~5s | No | Open weights, strong Chinese language prompts | Quality variable |
The practical effect of this landscape was that for roughly the second half of 2024 Gen-3 Alpha was the most plausible answer to the question "how do I actually get state of the art AI video right now," and that role only began to slip in late 2024 as Sora Turbo, Veo 2 and Kling 1.5 reached more users.
Gen-3 Alpha is a 2024 era video model, and like every other model from that period it had clear ceilings: no native audio, short base clips, no built in character consistency across cuts, and the familiar failure modes around hands and dense multi object scenes.