Sora 2 is a text-to-video and audio generation model developed by OpenAI, released on September 30, 2025. It succeeded the original Sora research preview from February 2024 and introduced synchronized audio generation, improved physical simulation, and a consumer-facing social app for iOS. At launch, access was limited to invited users in the United States and Canada. A developer API became available at OpenAI's DevDay on October 6, 2025, and the model was integrated into Microsoft 365 Copilot in November 2025. A major content licensing partnership with The Walt Disney Company was announced in December 2025. OpenAI shut down the Sora web and iOS app on April 26, 2026, citing high compute costs and a strategic pivot toward enterprise products; the API is scheduled for discontinuation on September 24, 2026.
Sora 2 is best understood as both a model and a moment. The model was a substantial upgrade over the 2024 research preview, with native audio, better physics, and a consent-based likeness feature called Cameos. The moment was the first time a major lab pushed a text-to-video tool out as a mainstream consumer app with a TikTok-style feed, and it produced almost everything that move was likely to produce: viral generations, watermark removal tools, lawsuits in spirit if not always in fact, and a fast and bumpy path from launch to shutdown in about seven months.
| Sora 2 | |
|---|---|
| Developer | OpenAI |
| Type | Text-to-video and audio generation model |
| Architecture | Diffusion transformer (DiT) |
| Predecessor | Sora (2024) |
| Variants | sora-2, sora-2-pro |
| Maximum resolution | 1792x1024 (Pro variant) |
| Maximum duration | ~25 seconds (Pro tier) |
| Native audio | Yes (dialogue, sound effects, ambient, music) |
| Announcement | September 30, 2025 |
| App launch | September 30, 2025 (iOS, US and Canada) |
| API launch | October 6, 2025 (DevDay) |
| App shutdown | April 26, 2026 |
| API shutdown | September 24, 2026 (scheduled) |
| Provenance marking | Visible watermark, C2PA Content Credentials |
| Pricing (API) | $0.10 to $0.50 per second |
The original Sora model was announced by OpenAI in February 2024 as a research preview, without public access. It demonstrated the ability to generate video clips of up to a minute from text prompts, but it produced no audio, exhibited frequent temporal artifacts such as objects appearing or disappearing between frames, and its physics simulation was inconsistent. Videos flickered, shadows did not track correctly, and fast-moving objects would warp or duplicate in unrealistic ways. The model attracted considerable attention as a research demonstration but was not released as a product.
Sora 1 reached general availability through ChatGPT Plus and Pro on December 9, 2024, during OpenAI's "12 Days of OpenAI" event, but it was not a runaway success. The interface was tied to the chat product, the price was high, and rival systems caught up quickly. Over the following 18 months, models from Google, ByteDance, Kuaishou, and Runway accumulated production users, and criticism mounted that OpenAI's video capabilities lagged behind its text and image tools. Veo 3 in particular, with native audio generation, made Sora 1 look frozen in place by mid-2025.
Sora 2 was developed to address these gaps and to move from research preview to a consumer-ready product with a dedicated social platform. OpenAI also wanted a video product that lived outside ChatGPT, where pricing pressure on text generation could not constrain it, and where a social feed could create engagement loops on its own.
OpenAI announced Sora 2 on September 30, 2025, in a blog post titled "Sora 2 is here" and a system card published the same day. The announcement was paired with the launch of a standalone iOS application named simply Sora, available at sora.com on the web and as an app in the US Apple App Store. Access at launch was invite-only and limited to the United States and Canada. The first wave of invitations went to existing ChatGPT users, with the company gradually expanding the pool over the following weeks.
OpenAI executives framed the release as an attempt to build a generative video product that anyone could use, not a researcher tool wrapped in a chat interface. Sam Altman described the social feed and the Cameos system in a livestream as the moment video synthesis became a consumer product rather than a tech demo. Internally, the company hoped to convert the launch buzz into a durable footing in the social media stack, where short-form video is the dominant format.
The model was simultaneously made available to Pro subscribers inside ChatGPT, and the API followed a week later at OpenAI's developer event in San Francisco.
Sora 2 uses a diffusion model combined with a transformer architecture, commonly called a diffusion transformer (DiT). Like the original Sora, it represents video as sequences of spatiotemporal patch tokens: four-dimensional blocks spanning spatial height and width, temporal position, and channel information. This lets the model learn both what objects look like within a frame and how they move between frames.
The principal architectural advance in Sora 2 is a multimodal training pipeline that jointly processes video and audio data, rather than treating audio as a separate post-processing step. This allows synchronized generation where character lip movements align with speech, ambient sounds respond to on-screen events, and music tempo matches scene pacing. The model applies 3D rotary positional embeddings (3D RoPE) to encode spatial and temporal positions simultaneously, which reduces the frame-independence problem that caused flickering in the earlier model.
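A minimal sketch of the general 3D RoPE idea follows; the channel grouping, group sizes, and base frequency are illustrative assumptions rather than published details of Sora 2's implementation:

```python
import numpy as np

def rope_1d(x, pos, base=10000.0):
    """Rotate feature pairs of x (..., d) by angles proportional to pos (...,)."""
    pos = np.asarray(pos)
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)   # one frequency per feature pair
    angles = pos[..., None] * freqs             # (..., half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def rope_3d(x, t, h, w):
    """Encode temporal and spatial positions simultaneously by splitting the
    channels into three groups and rotating each group by one coordinate.

    x: (num_patches, d) patch features, with d assumed divisible by 6
    t, h, w: (num_patches,) integer patch coordinates along each axis
    """
    g = x.shape[-1] // 3
    return np.concatenate([
        rope_1d(x[:, :g], t),        # temporal axis
        rope_1d(x[:, g:2 * g], h),   # vertical axis
        rope_1d(x[:, 2 * g:], w),    # horizontal axis
    ], axis=-1)
```

Because each patch's rotation depends only on its own coordinates, attention scores between rotated queries and keys depend on relative offsets along each axis, which is the property that discourages the frame-independence behavior described above.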
OpenAI has not published parameter counts or training compute figures for Sora 2. The system card cites "a substantial increase in training compute" over Sora 1 and notes that the model's audio backbone shares structural elements with the company's other multimodal systems, including GPT-4o. Beyond that, the architecture remains largely unspecified in public materials.
OpenAI describes Sora 2 as a world simulator rather than a pure video synthesis tool. The model was trained on data labeled with physical annotations, covering concepts like gravity, momentum, buoyancy, material deformation, and collision dynamics. Demonstrations at launch showed basketballs rebounding off backboards with realistic arc and spin, gymnasts maintaining momentum through aerial sequences, and liquids behaving with correct surface tension and splashing behavior.
Temporal consistency improved substantially over Sora 1. OpenAI reported roughly a 90% reduction in temporal artifacts such as flickering shadows and camera jitter. Objects maintain consistent size and color across shots, and lighting tracks globally across a scene rather than shifting between frames. Character consistency across multiple shots improved as well, though it can still drift in longer generations.
The physics improvements were uneven across domains. Crowd scenes, sports clips, and short character actions improved more than fluids, smoke, or fabric. The model still struggled with phenomena that depend on accurate volumetric simulation, such as steam rising from a cup or the specific way a parachute fills with air. OpenAI's own demo reel emphasized cases where the model performed well; informal tests circulated on social media filled in the cases where it did not.
Sora 2 generates audio within the same pipeline as video, producing dialogue synchronized with lip movements, context-aware ambient sounds, realistic sound effects tied to on-screen actions, and background music matched to scene tone. The volume and spatial positioning of sounds adjust based on object distance from the camera. Prompts can specify audio intent directly, for example asking for a wind-swept outdoor ambience or characters speaking specific lines, and the model attempts to render these specifications together with the visual output.
This was a significant departure from Sora 1 and from several contemporary competitors, which added audio in a separate post-processing pass or offered no audio at all. Independent assessments found Sora 2's audio coherent but noted that phoneme-level lip-sync accuracy was less precise than later models such as ByteDance's Seedance 2.0. The dialogue voices Sora 2 generates lean toward a generic "American narrator" register by default, and getting consistent character voices across multiple clips required careful prompt engineering or use of the Cameos system.
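As an illustration of audio-aware prompting, a request might fold visual and audio direction into a single description; the prompt below is hypothetical, not drawn from OpenAI's documentation:

```
A rain-soaked night market, handheld camera. Thunder rolls in the distance,
rain patters on canvas awnings, and a vendor calls out "last batch, half
price!" as a tram bell rings off-screen left.
```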
Alongside the model release, OpenAI launched the Sora iOS app on September 30, 2025, initially invite-only and restricted to the United States and Canada. An Android version followed approximately two months later. The app was designed as a social video platform with a scrollable feed of user-generated Sora clips, remixing tools, and creator controls.
The app reached 56,000 downloads on its first day, outpacing the day-one performance of Claude (21,000) and Microsoft Copilot (7,000) but falling below ChatGPT's 2023 iOS debut (81,000). By October 3, 2025, it had climbed to the number one position on the US App Store, surpassing Google Gemini and OpenAI's own ChatGPT app. Total installs in the first 48 hours reached approximately 164,000, and the app accumulated over one million iOS downloads within five days.
The app included a discovery feed similar to TikTok or Instagram Reels where users could scroll through publicly shared generations, remix any video with a new prompt, add themselves to a scene via the Cameos feature, and use basic editing tools for trimming, combining clips, and adding text overlays.
Cameos is a consent-based digital likeness feature that allows a user to insert their own appearance and voice into generated videos. To create a Cameo, a user records a short video-and-audio clip in the app. The recording includes a liveness check requiring head movement, blinking, and speaking a number sequence aloud, designed to prevent automated or pre-recorded inputs from being used fraudulently. OpenAI uses the recording to build a representation of the user's appearance, facial dynamics, and voice characteristics.
Once a Cameo is created, the owner controls its availability through four settings: Only me, People I approve, Mutuals, and Everyone. Users receive a notification whenever someone uses their Cameo and can revoke access or delete any video containing their likeness, including drafts, at any time. OpenAI does not permit Cameo creation using another person's likeness without their explicit consent. All Cameo-generated videos are watermarked and carry C2PA content credentials identifying them as AI-generated.
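The availability settings amount to a simple access check. The sketch below is a hypothetical reconstruction of the logic described above, not OpenAI's implementation:

```python
from enum import Enum

class CameoVisibility(Enum):
    ONLY_ME = "only_me"
    APPROVED = "people_i_approve"
    MUTUALS = "mutuals"
    EVERYONE = "everyone"

def may_use_cameo(visibility, owner_id, requester_id, approved_ids, mutual_ids):
    """Return True if the requester may insert the owner's Cameo into a video.

    approved_ids and mutual_ids are hypothetical stand-ins for the app's
    approval list and mutual-follow graph.
    """
    if requester_id == owner_id:
        return True  # owners can always use their own likeness
    if visibility is CameoVisibility.ONLY_ME:
        return False
    if visibility is CameoVisibility.APPROVED:
        return requester_id in approved_ids
    if visibility is CameoVisibility.MUTUALS:
        return requester_id in mutual_ids
    return True  # EVERYONE

# Regardless of this check's outcome, owners are notified of each use and
# can revoke access or delete any video containing their likeness.
```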
The feature was praised for building consent mechanisms directly into the workflow rather than relying on post-hoc content moderation. Critics noted that sophisticated users could still attempt to create convincing likenesses of people who had not consented, particularly of celebrities, using clever prompting. Within days of launch, social media filled with fan-made clips of Mark Zuckerberg in absurd scenarios; OpenAI did not initially block these because Zuckerberg's face was being inferred from textual prompts and existing footage rather than a Cameo recording, and the company's content policy had not anticipated quite that mode of misuse.
In October 2025, OpenAI tightened the surrounding policy. The system began rejecting prompts that named real, living people unless they had a Cameo on file, and it added warnings for prompts that closely matched a public figure's appearance even without a name. The Cameo feature itself was unchanged, but the policies around it grew more cautious.
The feed in the Sora app drew explicit comparisons to TikTok. It used an algorithm that mixed clips from accounts a user followed with recommended generations selected based on engagement signals. Remixing was a first-class feature: any public clip could serve as the starting point for a new generation by editing the prompt, swapping characters via Cameos, or extending the clip with additional shots.
OpenAI integrated lightweight social features such as likes, comments, follows, and direct messages, though without the depth of an established social platform. The feed surfaced trends, including hashtag-driven challenges and recurring meme formats; "AI Steve Jobs makes pasta," "impossible parkour through Times Square," and "cat job interview" were among the more viral templates in October 2025.
Sora 2 was released in two API variants with different quality and cost trade-offs.
| Variant | Max resolution | Typical duration | Notes |
|---|---|---|---|
| sora-2 | 1280x720 (landscape) or 720x1280 (portrait) | 4, 8, or 12 seconds | Optimized for speed; suited to social media and rapid iteration |
| sora-2-pro | 1792x1024 (landscape) or 1024x1792 (portrait) | 4, 8, or 12 seconds | Higher visual fidelity; suited to marketing and professional production |
Through the consumer app, subscription tiers unlocked different generation limits. Free invited users could generate clips up to 10 seconds at 720p. ChatGPT Plus subscribers ($20 per month) received priority queuing and higher monthly limits. ChatGPT Pro subscribers ($200 per month) unlocked the experimental higher-quality model with clips of up to 20 to 25 seconds at 1080p.
The model received minor incremental updates throughout late 2025 and early 2026 rather than a major version bump. Notable updates included:
| Date | Change |
|---|---|
| October 6, 2025 | API launched at DevDay 2025 |
| October 2025 | Tightened guardrails around real-person likenesses (post-Cranston controversy) |
| November 2025 | Android app released; Microsoft 365 Copilot integration |
| November 2025 | Storyboard tool added to app |
| December 2025 | Disney character licensing rolled out |
| January 2026 | Free tier removed in app, restricting generation to Plus and Pro subscribers |
| February 2026 | Resolution and duration limits raised for Pro subscribers |
| March 2026 | Shutdown announced |
| April 2026 | App and web closure |
OpenAI made Sora 2 available to developers via the API at DevDay 2025, held in San Francisco on October 6, 2025. The API offered programmatic access to both model variants, with support for text-to-video generation, reference image uploads, and audio generation.
| Access method | Availability | Notes |
|---|---|---|
| Web and iOS app (sora.com) | September 30, 2025 | Invite-only; US and Canada at launch |
| Android app | November 2025 | Rolling rollout |
| Developer API | October 6, 2025 (DevDay) | Preview; broader access expanded through Q4 2025 |
| ChatGPT Plus | October 2025 | Bundled in Plus subscription |
| ChatGPT Pro | September 30, 2025 | Pro variant access; full quality and quotas |
| Microsoft 365 Copilot | November 2025 | Microsoft Ignite 2025; Frontier program initially |
| Azure OpenAI | Late 2025 | Asynchronous job submission; enterprise only |
| Geographic expansion | Late 2025 to early 2026 | Mexico, UK, parts of Latin America added; EU not formally supported |
| API (scheduled discontinuation) | September 24, 2026 | OpenAI announced wind-down |
The API authenticated through standard OpenAI API keys. The asynchronous generation pipeline required developers to submit a job and poll for completion, with typical generation times of one to two minutes per clip. Rate limits scaled with billing tier: Tier 2 customers received roughly 5 requests per minute, while Tier 4 and above scaled to 50, or to 200 with a dedicated support agreement. The API supported a /v1/videos endpoint and, later, a /v1/videos/edits endpoint for editing existing clips and extending their duration.
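In practice, an integration submitted a job and polled until it finished. The sketch below targets the /v1/videos endpoint named above, but the request and response field names (model, prompt, status, video_url) are illustrative guesses rather than confirmed schema:

```python
import time
import requests

API_BASE = "https://api.openai.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # standard OpenAI API key

def generate_clip(prompt, model="sora-2", seconds=8):
    """Submit an asynchronous video job and poll until it completes.

    Field names ("status", "video_url", etc.) are assumptions for
    illustration; the public schema was never fully documented.
    """
    job = requests.post(
        f"{API_BASE}/videos",
        headers=HEADERS,
        json={"model": model, "prompt": prompt, "seconds": seconds},
    ).json()

    while job.get("status") not in ("completed", "failed"):
        time.sleep(10)  # typical generations took one to two minutes
        job = requests.get(f"{API_BASE}/videos/{job['id']}", headers=HEADERS).json()

    if job["status"] == "failed":
        raise RuntimeError(job.get("error"))
    return job["video_url"]
```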
In November 2025, at Microsoft Ignite, Microsoft announced Sora 2 access inside Microsoft 365 Copilot. Enterprise users in Microsoft's Frontier preview program could generate short video clips directly from the Copilot prompt bar, using Sora 2 as the underlying model. The integration handled prompt construction, watermark policy, and storage in a user's OneDrive. Microsoft initially gated the feature behind enterprise admin opt-in to avoid surprising customers with new content policies.
The relationship reflected the broader Microsoft and OpenAI partnership, which by late 2025 had Microsoft offering most major OpenAI models through Azure under separate commercial terms. Sora 2 was the first OpenAI video model included in this arrangement.
API pricing followed a per-second-of-video model. Consumer app access was bundled into existing ChatGPT subscription tiers.
| Tier | Cost | Resolution | Duration |
|---|---|---|---|
| ChatGPT Free (invited) | Included | 720p | Up to 10 sec |
| ChatGPT Plus | $20/month | 720p | Up to 10 sec, priority queue |
| ChatGPT Pro | $200/month | Up to 1080p | Up to 20 to 25 sec |
| API: sora-2 | $0.10/sec | 720p | Per generated second |
| API: sora-2-pro (720p) | $0.30/sec | 720p | Per generated second |
| API: sora-2-pro (1024p) | $0.50/sec | 1024p | Per generated second |
For context, a 10-second standard clip via the API cost $1.00, while a 10-second Pro clip at the higher resolution cost $5.00. The pricing was high relative to text generation but consistent with the broader AI video market in 2025 and 2026, where rivals such as Veo and Kling charged comparable amounts per clip.
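The per-second model made cost estimation trivial. A small helper, with rates hard-coded from the table above, reproduces the arithmetic:

```python
# API price per generated second, taken from the pricing table above
RATES = {
    "sora-2": 0.10,
    "sora-2-pro-720p": 0.30,
    "sora-2-pro-1024p": 0.50,
}

def clip_cost(variant: str, seconds: int) -> float:
    """Return the API cost in US dollars for one generated clip."""
    return RATES[variant] * seconds

assert clip_cost("sora-2", 10) == 1.00            # the standard example above
assert clip_cost("sora-2-pro-1024p", 10) == 5.00  # the Pro example above
```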
Free-tier access was wound down on January 10, 2026, restricting Sora 2 generations to Plus and Pro subscribers. OpenAI cited compute costs as the rationale; internal estimates leaked to TechCrunch suggested that the average free user was generating enough video to cost the company several dollars a month against zero revenue.
On December 11, 2025, OpenAI and The Walt Disney Company announced a licensing agreement that allowed Sora 2 to generate user-prompted videos featuring characters from Disney, Marvel, Pixar, and Star Wars properties. More than 200 characters were included in the initial license, among them Mickey Mouse, Simba, Baymax, and characters from Frozen, Moana, Toy Story, and Encanto.
As part of the deal, Disney made a $1 billion equity investment in OpenAI and received warrants to purchase additional equity. The partnership included one year of exclusivity; after that, Disney could extend or allow comparable licensing to other AI video platforms. The agreement explicitly excluded talent likenesses and voices. A selection of fan-generated Sora shorts featuring Disney characters was planned for distribution on Disney+.
The deal was the first of its kind between a major Hollywood studio and a frontier AI lab. It also provided Disney with a hedge: by partnering rather than litigating, the company gained both a financial stake in OpenAI and a measure of control over how its IP appeared on the platform. The arrangement was widely interpreted as a template for future studio-and-lab deals, though no comparable agreement materialized before Sora 2's shutdown.
The deal was abruptly terminated when OpenAI announced Sora's shutdown in March 2026. Disney reportedly learned of the discontinuation less than an hour before the public announcement.
By early 2026, several competing models offered comparable or superior performance on specific benchmarks.
| Model | Developer | Max resolution | Native audio | Open source | Relative strengths |
|---|---|---|---|---|---|
| Sora 2 | OpenAI | 1080p | Yes | No | Physics simulation, cinematic camera work |
| Sora 2 Pro | OpenAI | 1080p | Yes | No | Higher visual fidelity variant of Sora 2 |
| Veo 3 / 3.1 | Google DeepMind | 1080p | Yes | No | Scene consistency across long sequences, prompt fidelity |
| Kling 3.0 | Kuaishou | 4K native | Yes | No | Native 4K output, multi-shot storyboards, cost efficiency |
| Seedance 2.0 | ByteDance | 1080p | Yes (unified) | No | Phoneme-level lip-sync, multi-language dialogue |
| Runway Gen-4.5 | Runway | 1080p | No | No | Creative control tools, director mode, plugin ecosystem |
| Hailuo 02 | MiniMax | 1080p | Yes (limited) | No | Strong global benchmarks, cost competitive |
| Pika 2.1 | Pika Labs | 1080p | Yes (basic) | No | Fast generation, creative effects, social orientation |
| Wan 2.x | Alibaba | 1080p | No | Yes | Self-hostable, no per-clip cost at scale |
Independent comparisons published in early 2026 generally ranked Sora 2 as the leader in physics simulation and cinematic camera movement, while Veo 3.1 was preferred for long-form scene consistency, Kling 3.0 for cost-effective high volume work, and Seedance 2.0 for accurate multilingual lip-sync. Runway Gen-4.5 ranked highest for creative professional workflows. Multiple leaderboards placed Sora 2 fourth or fifth overall against these competitors by the time OpenAI announced the shutdown.
The broader competitive picture is worth pausing on. In February 2024, Sora was a year ahead of every competitor and looked like a sustainable lead. By September 2025, Sora 2's launch landed in a market where Veo 3 had already shown native audio, Kling was already shipping 4K, and Runway had already built up a paying customer base in advertising and film. Sora 2 was a strong product, but it was no longer the obvious frontier. That fact loomed over the project's economics for the rest of its short life.
All videos generated through Sora 2 include two forms of provenance marking:

- A visible, animated watermark overlaid on the video frame
- C2PA Content Credentials metadata embedded in the file, identifying the content as AI-generated
OpenAI also experimented with an invisible signal closer in spirit to Google's SynthID, embedded in pixel-level patterns. The company described this as a research feature rather than a primary defense. Detection from invisible signals alone was not offered as a public service.
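Verifying the C2PA side required no Sora-specific tooling. A sketch using the open-source c2patool CLI from the Content Authenticity Initiative, assuming it is installed and that a bare invocation prints a JSON manifest report (both assumptions here):

```python
import json
import subprocess

def read_content_credentials(path: str) -> dict:
    """Inspect a media file's C2PA manifest with the c2patool CLI.

    Assumes c2patool is on PATH and that invoking it against a file prints
    a JSON manifest report to stdout; adjust for your installed version.
    """
    result = subprocess.run(
        ["c2patool", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

# Example with a hypothetical file name:
# manifest = read_content_credentials("sora_clip.mp4")
```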
OpenAI blocked prompts requesting realistic depictions of named real people without their consent, generation of copyrighted fictional characters without a licensing agreement, and sexual content involving anyone. The system card published at launch described a classifier that flagged prompts for human review when they matched patterns associated with nonconsensual intimate imagery, impersonation, or election-related misinformation.
Despite these measures, watermark removal tools appeared online within one week of launch. A NewsGuard analysis conducted in early October 2025 found that when prompted to generate videos advancing specific false narratives, Sora 2 did so in 16 out of 20 tested cases (80%). Three Sora-generated videos went viral in October 2025 as purported documentary footage of police actions at political protests.
In October 2025, actor Bryan Cranston raised a public alarm after unauthorized AI-generated clips using his likeness and voice appeared on the platform. SAG-AFTRA, talent agencies UTA and CAA, and the Association of Talent Agents joined Cranston in calling on OpenAI to strengthen protections. OpenAI subsequently tightened its guardrails for voice and likeness generation and publicly endorsed the NO FAKES Act, a proposed piece of US federal legislation that would establish a national standard requiring explicit consent before an individual's voice or likeness can be used to create a digital replica.
The platform also attracted criticism for enabling copyright-infringing content. OpenAI's initial approach used an opt-out model for copyrighted IP, allowing generations by default unless a rights holder objected. Following pressure from the Motion Picture Association and other rights holders, the company shifted to an opt-in system requiring explicit authorization. The Disney partnership announced in December 2025 was, in one reading, the first concrete fruit of that shift; in another, the company's recognition that opt-out was not going to hold.
The visible watermark itself drew criticism on multiple fronts. At launch it was prominent and animated, occupying a corner of the video and moving slightly to defeat single-frame removal techniques. Pro subscribers could request unwatermarked downloads under specific conditions, mostly for verified business use. By early 2026, after watermark removal tools and reuploads stripped the badge from a large share of viral clips anyway, OpenAI quietly reduced the watermark's prominence and added the C2PA signal as the primary provenance defense. The visible mark became smaller and slightly transparent. Critics viewed this as a step backward in provenance hygiene; OpenAI framed it as a reaction to creator complaints about visual interference with otherwise high-quality clips.
OpenAI has not published a complete account of the data used to train Sora 2. The system card refers to a mixture of publicly available data, licensed content from partners, and human feedback data, in line with the disclosure pattern used for Sora 1, DALL-E 3, and other multimodal models. The company acknowledged that some training video came from public web sources but declined to specify which ones.
In June 2025, several months before Sora 2's launch, CNBC reported that OpenAI had trained earlier versions of its video systems on YouTube videos without explicit permission from Google or video creators. The reporting cited internal sources at Google and was followed by public comments from YouTube CEO Neal Mohan reiterating that scraping YouTube content for AI training violated the platform's terms of service. OpenAI did not explicitly confirm or deny the reporting; Sam Altman, asked about it on a podcast, said the company tried to respect creator preferences but did not commit to a specific policy.
Sora 2 launched into this context. Studios and rights holders watched closely for signs that the new model was generating their characters in ways that suggested it had been trained on protected material. Within days, examples surfaced. The model could generate clips with Steamboat Willie, the early Mickey Mouse, on demand. (The Steamboat Willie film entered the US public domain in 2024, but the broader Disney character set had not.) The model could also generate plausible imitations of Pokemon, Star Wars assets, and animation styles closely associated with Studio Ghibli.
OpenAI's initial response was the opt-out policy described above. Following pushback from the Motion Picture Association and a coalition of Japanese entertainment companies including Studio Ghibli, Bandai Namco, and Square Enix, the company shifted to opt-in. The Japanese Content Overseas Distribution Association argued that opt-out improperly reversed the burden of consent and urged OpenAI to suspend use of Japanese works until a legal framework existed.
No training-data lawsuit specific to Sora 2 reached a judgment before the model's shutdown. Several existing suits against OpenAI, including the New York Times case and various author class actions, made arguments that would in principle apply to video data, but the Sora 2 product life was too short for a dedicated case to mature. The Disney partnership, in this view, was a way to avoid litigation by restructuring it as commerce.
Sora 2's launch drew broad attention as the first mass-market consumer AI video platform from a major lab. The app's TikTok-style discovery feed was described by some commentators as "the GPT-3.5 moment for video," marking the transition from research demonstration to everyday creative tool. Coverage in The Verge, Wired, The New York Times, and Bloomberg generally treated the launch as a meaningful milestone, while raising concerns about deepfakes, copyright, and the long-term economic impact on creative industries.
The platform also attracted a wave of criticism within weeks of launch. The term "SlopTok" circulated on social media to describe the flood of low-effort AI-generated clips filling social feeds, characterized by technically competent but creatively empty content. Former TikTok Trust and Safety manager Daisy Soderberg-Rivkin described the effect as deepfakes gaining "a publicist and a distribution deal," noting that the social packaging normalized synthetic video in a way that earlier research releases had not.
The app was used to generate content featuring deceased public figures including Martin Luther King Jr. and Michael Jackson in absurd scenarios, copyrighted characters without licensing (including Star Wars and Pokemon imagery), and politically charged synthetic footage. OpenAI restricted several of these categories after backlash from estates and advocacy groups, but users continued finding workarounds. The King family publicly pushed OpenAI to block clips involving their relative, and OpenAI complied; complaints from the families of George Carlin and Robin Williams followed a similar pattern.
Hollywood unions expressed concern about the impact on the entertainment industry's labor market. SAG-AFTRA warned that the combination of realistic video synthesis with the Cameos likeness system could accelerate the replacement of background performers and voice actors. Major talent agencies took a defensive posture: Creative Artists Agency and United Talent Agency formally opted their clients out of Sora 2. UTA called the app "exploitation, not innovation"; CAA said it "exposes our clients and their intellectual property to significant risk."
On the other side, a number of filmmakers and advertising professionals praised Sora 2 for its physics accuracy and cinematic quality, using it for storyboarding, concept visualization, and production prototyping. Several agencies pointed to the model as a serious shortcut for early-stage creative work, even if final delivery used traditional production. Tyler Perry's earlier announcement that he would pause an $800 million Atlanta studio expansion in light of AI video tools was frequently revived in coverage of Sora 2, even though Perry's original concern referred to Sora 1.
There was also the bias question. Critics noted that in default prompts asking for, say, "a CEO" or "a doctor," Sora 2 disproportionately produced white male characters; prompts for "a criminal" or "a janitor" skewed in predictable, troubling ways. OpenAI added prompt-side rebalancing to mitigate the most blatant cases, similar to the techniques used for DALL-E 3, but the underlying skew in the training data remained visible to anyone who looked.
Documented uses of Sora 2 across its active period included:

- Storyboarding, concept visualization, and production prototyping by filmmakers and advertising agencies
- Short-form social content, remixes, and meme formats circulated through the Sora feed
- Consent-based likeness videos created with the Cameos feature
- Licensed fan shorts featuring Disney characters following the December 2025 agreement
At launch and throughout its active period, Sora 2 had documented limitations:

- Fluids, smoke, fabric, and other volumetrically complex phenomena remained less reliable than rigid-body motion
- Character identity and voice consistency could drift across longer or multi-clip generations
- Phoneme-level lip-sync accuracy trailed competitors such as Seedance 2.0
- Maximum clip length was capped at roughly 25 seconds even on the Pro tier
- Default outputs reflected demographic skews in the training data
On March 25, 2026, OpenAI announced it would discontinue the Sora app and API in two phases. The web and iOS app closed on April 26, 2026. The API is scheduled for discontinuation on September 24, 2026.
OpenAI cited a decision to redirect computing resources toward enterprise and productivity products, including coding tools and a "super app" integrating ChatGPT with other services. TechCrunch reported in March 2026 that the app's user base had declined from approximately one million to under 500,000 active users, and that Sora was consuming roughly $1 million per day in compute costs while contributing modest revenue relative to that expenditure.
OpenAI stated that Sora's underlying research into world modeling would continue internally, and that learnings from the model would inform future products. Users were notified by email and given until the respective shutdown dates to download their generated content. All user data was scheduled for permanent deletion after the API shutdown date.
The Disney partnership, announced just months before the shutdown, was terminated along with the product. Disney reported learning of the decision less than an hour before OpenAI's public announcement.
Several commentators pointed out the contradiction between OpenAI's public framing of generative video as a strategic priority and the speed with which the company exited the consumer video market. The Information reported that OpenAI's senior leadership considered Sora 2 economically unsustainable as a stand-alone consumer product as early as February 2026, when revenue from the Plus and Pro tiers was not covering compute costs and growth metrics in the app had stalled. Plans to fold video generation back into ChatGPT, similar to how DALL-E 3 was eventually integrated into the chat product, were reported around the same time.
As of May 2026, OpenAI has not announced a successor product called Sora 3. The Sora app and consumer service have been shut down, and the API is scheduled for discontinuation in September 2026. OpenAI's public communications at and around the shutdown said that internal research into video and world simulation would continue, and that learnings from Sora 2 would inform future work, but they did not name a successor model or commit to a release timeline.
Most coverage at the time of the shutdown framed the move as a retreat from consumer video rather than an architectural reset. Whether OpenAI ships another video model under a Sora-branded name, folds the technology into a future ChatGPT or GPT-5-based multimodal product, or leaves the consumer video field to competitors such as Veo and Kling has not been publicly resolved as of this writing.