Luma Dream Machine
Luma Dream Machine is a generative video and image platform developed by Luma AI, a startup headquartered in Palo Alto, California. The product was first released to the public on June 12, 2024 as a text-to-video tool that produced five-second clips from written prompts. Over the following 18 months it became an umbrella brand for a series of underlying video models in the Ray family: Ray (the model that powered the original Dream Machine launch), Ray2 in January 2025, Ray3 in September 2025, and Ray3 Modify in December 2025. As of mid-2026 Dream Machine is available through a web app at dream-machine.lumalabs.ai, an iOS application, a developer API, and partner platforms including Adobe Firefly and Amazon Bedrock.
Dream Machine sits inside a crowded field. Its closest peers are OpenAI's Sora 2, Google DeepMind's Veo 3, Runway Gen-4, Kuaishou's Kling, MiniMax's Hailuo AI, and Pika. Luma has positioned the Ray series toward professional film and advertising workflows, particularly with the Ray3 release that added native 16-bit HDR and an internal reasoning step before generation.
| Luma Dream Machine | |
|---|---|
| Developer | Luma AI (Luma Labs, Inc.) |
| Type | Text-to-video, image-to-video, video-to-video |
| Underlying models | Ray, Ray2, Ray3, Ray3 Modify, Ray3.14 |
| First release | June 12, 2024 |
| Latest model | Ray3.14 (January 26, 2026) |
| Maximum resolution | 4K (via upscale), native 1080p |
| Maximum native HDR | 16-bit ACES2065-1 EXR (Ray3) |
| Maximum clip duration | 10 seconds native, up to 30 seconds extended |
| Platforms | Web, iOS, API, Adobe Firefly, Amazon Bedrock |
| Founders (Luma AI) | Amit Jain, Alex Yu, Alberto Taiuti |
| Headquarters | Palo Alto, California |
Luma AI (registered as Luma Labs, Inc.) was founded in September 2021 by Amit Jain, Alex Yu, and Alberto Taiuti. Jain previously led 3D computer vision work at Apple. Yu was a UC Berkeley researcher associated with Plenoxels and early Neural Radiance Field (NeRF) research. The original mission was to make 3D capture and reconstruction accessible to anyone with a smartphone, and the company's first products were iPhone applications that turned everyday camera footage into photogrammetric 3D models. By mid-2023 the company reported more than five million 3D captures through these tools.
Luma raised a $4.3 million seed round in October 2021, a $20 million Series A in March 2023 led by Amplify Partners with participation from Nvidia's NVentures arm and General Catalyst, and a $43 million Series B in January 2024 with participation from Andreessen Horowitz, Matrix Partners, Nvidia, and South Park Commons. The pivot toward generative content began in late 2023 with Genie, a 3D asset generator, and continued in June 2024 with the launch of Dream Machine.
On November 19, 2025, Luma announced a $900 million Series C led by HUMAIN, a unit of Saudi Arabia's Public Investment Fund, with participation from AMD Ventures, Andreessen Horowitz, Amplify Partners, Matrix Partners, and Amazon. The financing valued the company at roughly $4 billion and was paired with an announcement that Luma and HUMAIN would jointly build a two-gigawatt AI training cluster in Saudi Arabia called Project Halo. Luma framed the partnership as infrastructure for training large-scale world models, a research direction that goes beyond text or video and into general physical simulation. Dream Machine is the main commercial expression of that research.
Dream Machine is the consumer brand. Underneath it sits a series of base video models, each released with new capabilities, higher fidelity, or new editing affordances. The table below summarizes the principal releases.
| Model | Release date | Headline features |
|---|---|---|
| Ray (Dream Machine v1) | June 12, 2024 | First public Luma video model; 5-second 1360x752 clips from text prompts or still images |
| Ray2 | January 15, 2025 | Trained with roughly 10x the compute of the original Ray; 5- and 9-second clips extendable to 30 seconds; up to 1080p with 4K upscale; better physics and motion |
| Ray3 | September 18, 2025 | First reasoning video model with internal text and visual tokens; first to generate native 10-, 12-, and 16-bit HDR in ACES2065-1 EXR; cinematic 4K via Hi-Fi mastering; Draft Mode |
| Ray3 Modify | December 18, 2025 | Video-to-video editing with start and end frame control; Character Reference for performance-preserving identity swap |
| Ray3.14 | January 26, 2026 | Native 1080p; roughly 3x cheaper and 4x faster than the prior Ray3 release |
The first Dream Machine launched on June 12, 2024 alongside a wave of consumer-facing AI video products. At launch the model produced clips that were 5 seconds long, with a resolution of 1360x752 pixels. Users signed in with a Google account at lumalabs.ai/dream-machine and submitted either a text prompt or a still image as the seed. The free tier allowed 30 total videos with a daily cap of 10, and the paid Standard, Pro, and Premier plans raised these caps to 120, 400, and 2,000 videos a month respectively. (Those plan names have since been replaced by a credit-based system; see the pricing section below.)
Launch reviews noted that Ray captured motion more convincingly than several open-source baselines, and Tom's Guide called it "the AI video creator we've always wanted" in an early hands-on. A wave of viral fan animations followed within days, including Doge, Success Kid, the Picard facepalm, and the Vermeer painting "Girl with a Pearl Earring," all of which circulated on X. The crypto community used the tool to animate still photos of figures including Tron founder Justin Sun.
Critics raised two main concerns. First, Luma did not disclose its training data, which made it difficult to evaluate commercial usability. Second, generations could closely mimic named studio styles such as Pixar's Monsters, Inc. aesthetic, which raised questions about whether copyrighted footage had been used in training. Luma did not respond publicly to these specific concerns at the time of the v1 launch.
The original product retroactively became known as the Ray release once Luma started numbering its base video models. Image-to-video, video extension, and a public feed of community generations were added in the months that followed.
Luma introduced Ray2 on January 15, 2025, making it available to paid Dream Machine subscribers on launch day. Luma described Ray2 as built on a new multimodal architecture and trained with approximately 10 times the compute used for the original Ray model. It produced clips of five or nine seconds, extendable up to 30 seconds, with resolution options at 540p, 720p, and 1080p plus optional 4K upscaling.
The headline improvements over the original Ray were physics and motion. Luma cofounder and chief executive Amit Jain described Ray2 on X as offering "fast, natural coherent motion and physics," and the model handled interactions between people, animals, and objects with more consistency than its predecessor. At launch Ray2 supported text-to-video only; image-to-video, video-to-video, and editing capabilities were added over the following weeks.
On January 23, 2025, Amazon Web Services made Ray2 generally available in Amazon Bedrock in the US West (Oregon) region, with the model ID luma.ray-v2:0. AWS described itself as the first cloud provider to offer fully managed Luma models. The Bedrock variant supported five- and nine-second clips at 540p and 720p, 16:9 aspect ratio, and 24 frames per second through asynchronous job submission. In April 2025 Adobe announced that Ray2 would be integrated into Adobe Firefly and Firefly Boards.
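The asynchronous submission works the same way as Bedrock's other video models: submit a job, then poll the invocation ARN until the output lands in S3. A minimal boto3 sketch follows; `start_async_invoke` and `get_async_invoke` are standard Bedrock Runtime calls, but the exact `modelInput` field names for Ray2 are assumptions here and should be checked against the current AWS documentation.

```python
import time
import boto3

# Ray2 launched in us-west-2 (Oregon); bucket name is a placeholder.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

# The modelInput field names below follow the Bedrock Luma Ray2 schema
# as we understand it; treat them as assumptions to verify.
job = client.start_async_invoke(
    modelId="luma.ray-v2:0",
    modelInput={
        "prompt": "A red fox running through fresh snow, golden hour",
        "aspect_ratio": "16:9",   # Bedrock variant is 16:9 at 24 fps
        "duration": "5s",         # five- or nine-second clips
        "resolution": "720p",     # 540p or 720p on Bedrock
    },
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://YOUR_BUCKET/ray2-output/"}
    },
)

# Asynchronous job submission: poll the invocation until it finishes.
arn = job["invocationArn"]
while True:
    status = client.get_async_invoke(invocationArn=arn)["status"]
    if status in ("Completed", "Failed"):
        break
    time.sleep(10)
print(status)  # on success, the MP4 is written to the S3 prefix
```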
Ray3 was announced on September 18, 2025. Luma described it as the first video model built to think like a creative partner, with two technical claims that made it stand out from competing products. First, Ray3 produces both text tokens and visual tokens during generation, which lets it plan and self-evaluate scenes before rendering. Luma calls this its reasoning system. Second, Ray3 generates video in true 10-, 12-, and 16-bit High Dynamic Range in the ACES2065-1 EXR format, which is the color and dynamic-range pipeline used by professional film production. No competing model at the time produced native HDR at those bit depths.
The model supports a Draft Mode that lets users iterate on rough generations up to 20 times faster than a full-quality pass, then promote the chosen draft to a Hi-Fi mastering pass that outputs at 4K with the full HDR pipeline. The Hi-Fi pass is meant for final delivery to film, advertising, and broadcast workflows.
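In practice, the EXR claim means a Hi-Fi export arrives as linear, scene-referred float frames rather than display-referred 8-bit video, so it can be graded like camera originals. A quick inspection of one frame using the standard OpenEXR Python bindings might look like the sketch below (the filename is a placeholder):

```python
import OpenEXR

# Placeholder path for one frame of a Ray3 Hi-Fi EXR sequence.
exr = OpenEXR.InputFile("ray3_hifi_frame_0001.exr")
header = exr.header()

# ACES2065-1 EXRs carry linear scene-referred RGB; each channel reports
# its pixel type (HALF is the 16-bit float common in ACES containers).
for name, channel in header["channels"].items():
    print(name, channel.type)

# The data window gives the stored resolution of the frame.
dw = header["dataWindow"]
print("resolution:", dw.max.x - dw.min.x + 1, "x", dw.max.y - dw.min.y + 1)
```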
Adobe was the first launch partner outside Dream Machine. Ray3 became available in Adobe Firefly and Firefly Boards on the same day as the model announcement, and Adobe offered unlimited Ray3 generations for the first 14 days on paid Firefly or Creative Cloud Pro plans. Other launch partners included the Japanese advertising agency Dentsu Digital, HUMAIN Create, Monks (S4), Galeria, and Strawberry Frog.
In October 2025, Luma published an internal evaluation report titled "Ray3 Evaluation Report: State-of-the-Art Performance for Pro Video Generation." The report compared Ray3 against Veo 3, Runway Gen-4, Midjourney Video, and Moonvalley Marey across categories including physics and motion, instruction-following, motion artifacts, aesthetic quality, dynamic range, and temporal consistency. Ray3 reached state-of-the-art performance alongside Veo 3 on most measured axes and led the field on aesthetic quality, motion artifacts, dynamic range, and temporal consistency. The evaluation did not include Sora or Kling.
Ray3 Modify launched on December 18, 2025 inside Dream Machine. Where the earlier Ray releases focused on generating video from text or images, Ray3 Modify is a video-to-video editing model designed for hybrid workflows that combine live-action footage with AI modification. It adds three main capabilities to Dream Machine.
The first is an updated Modify Video function. The newer version preserves physical logic, narrative coherence, and original performance details more faithfully than the previous video-to-video pipeline. Wardrobe, lighting, environment, virtual product placement, and other layered changes are designed to appear as if they had been captured naturally in camera.
The second is keyframe control through start and end frames. Ray3 Modify is the first Luma model to bring start- and end-frame conditioning into the video-to-video workflow, which allows directors to guide transitions, character behavior, and spatial continuity across longer camera moves. Users supply both endpoints and the model generates the intermediate footage.
The third is Character Reference, a tool that locks a specific character's likeness, costume, and identity across a modified clip. The actor's original motion, timing, eye line, and emotional delivery remain intact while the visible character is replaced with a new design supplied as a reference image. Luma pitched the feature at hybrid live-action and AI production teams in film, advertising, and post-production.
Amit Jain framed the release as a control problem rather than a fidelity problem in press materials: "Generative video models are incredibly expressive but also hard to control. Today, we are excited to introduce Ray3 Modify that blends the real-world with the expressivity of AI while giving full control to creatives." TechCrunch's coverage emphasized the start-and-end-frame mechanic, which was previously more associated with editing tools such as Runway's Keyframes than with end-to-end generation models.
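As a rough illustration of what that mechanic looks like at the request level, Luma's generations API expresses frame conditioning as a `keyframes` object with `frame0` and `frame1` entries. The sketch below reuses that shape; the `ray-3` model slug and whether Ray3 Modify accepts exactly these fields for video-to-video are assumptions, not confirmed API details.

```python
# Schematic request body for start/end-frame conditioning, following the
# frame0/frame1 keyframe shape of Luma's generations API. The "ray-3"
# slug and Modify-specific fields are assumptions for illustration only.
payload = {
    "model": "ray-3",
    "prompt": "the corridor shifts from dusk to deep night as the camera pushes in",
    "keyframes": {
        "frame0": {"type": "image", "url": "https://example.com/first_frame.jpg"},
        "frame1": {"type": "image", "url": "https://example.com/last_frame.jpg"},
    },
}
```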
On January 26, 2026 Luma released Ray3.14, an interim update that increased native resolution to 1080p and reduced inference cost. Luma described Ray3.14 as roughly three times cheaper and four times faster than the launch version of Ray3, while preserving the model's reasoning and HDR pipeline. This made Ray3 economically viable for higher-volume work and brought API per-second pricing closer to that of Ray2.
The Dream Machine product surface exposes a consistent set of generation modes across the Ray series, with each new model adding capabilities rather than replacing them. The current set is summarized below.
| Capability | Description | First Ray model to support it |
|---|---|---|
| Text-to-video | Generate a clip from a written prompt | Ray (June 2024) |
| Image-to-video | Animate a still image | Ray (added shortly after launch) |
| Video extension | Add additional seconds to an existing clip | Ray (post-launch) |
| End frames | Condition generation on a target ending frame | Ray2 |
| Loop | Create a clip that returns to its starting frame | Ray2 |
| Camera motion concepts | Specify named camera moves (dolly, crane, push-in) | Ray2 (March 31, 2025) |
| Camera angle concepts | Specify named camera angles | Ray2 (April 18, 2025) |
| Upscale | Promote a generated clip to 4K | Ray2 |
| HDR (10-, 12-, 16-bit) | Produce native HDR output in ACES2065-1 EXR | Ray3 |
| Reasoning over prompt | Internal text and visual planning before rendering | Ray3 |
| Draft Mode | Cheap, fast iterations promoted to Hi-Fi | Ray3 |
| Modify Video | Transform existing footage while preserving motion | Ray3 Modify |
| Start and end frame video-to-video | Guide a modified clip between two reference frames | Ray3 Modify |
| Character Reference | Lock a character's design across a modified clip | Ray3 Modify |
The maximum native clip length on Ray3 is 10 seconds, extendable to longer durations through the extension feature. Ray2 clips can be extended to 30 seconds. Both models support 16:9, 9:16, and 1:1 aspect ratios. The Hi-Fi mastering pass on Ray3 outputs at 4K with HDR for delivery to film and broadcast pipelines.
Dream Machine also includes Luma's Photon image generation model for still images. Photon shares the credit system used by the video models, with images generated in batches of four at a typical cost of 16 credits per batch.
Luma offers a developer API at docs.lumalabs.ai with SDKs in Python and JavaScript. The API uses asynchronous generation: a request returns an ID that the client polls until the generation completes, then retrieves the resulting media. The same model lineup available in Dream Machine is exposed through the API, including Ray2, Ray3, Ray3 Modify, and Photon.
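The flow below is a minimal Python sketch of that asynchronous pattern, assuming the REST base path, state names, and response fields from Luma's public docs (a `/dream-machine/v1/generations` endpoint, terminal states `completed` and `failed`, and an `assets.video` URL); treat these as assumptions to verify at docs.lumalabs.ai rather than a definitive client.

```python
import os
import time

import requests

# Base URL and field names follow Luma's public API docs as we understand
# them; verify against docs.lumalabs.ai before relying on this sketch.
API = "https://api.lumalabs.ai/dream-machine/v1"
HEADERS = {
    "Authorization": f"Bearer {os.environ['LUMAAI_API_KEY']}",
    "Content-Type": "application/json",
}

# 1. Submit a generation. The response carries a job id, not the video.
resp = requests.post(
    f"{API}/generations",
    headers=HEADERS,
    json={"model": "ray-2", "prompt": "aerial shot of a lighthouse at dawn"},
)
resp.raise_for_status()
generation_id = resp.json()["id"]

# 2. Poll the id until the job reaches a terminal state.
while True:
    gen = requests.get(f"{API}/generations/{generation_id}", headers=HEADERS).json()
    if gen["state"] in ("completed", "failed"):
        break
    time.sleep(5)

# 3. Retrieve the finished media on success.
if gen["state"] == "completed":
    video = requests.get(gen["assets"]["video"])
    with open("clip.mp4", "wb") as f:
        f.write(video.content)
```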
Dream Machine itself uses a credit-based pricing system that replaced the 2024 Standard, Pro, and Premier video-count plans. The current web plans are listed below; iOS plans cost slightly more because of Apple's in-app purchase fees.
| Plan (web) | Monthly cost | Annual cost (20% off) | Monthly credits | Output | Commercial use |
|---|---|---|---|---|---|
| Free | $0 | $0 | Limited | 720p draft, watermarked | No |
| Lite | $9.99 | $7.99/month | 3,200 | Up to 4K with upscale, watermarked | No |
| Plus | $29.99 | $23.99/month | 10,000 | Up to 4K with upscale and HDR | Yes |
| Unlimited | $94.99 | $75.99/month | 10,000 fast plus unlimited relaxed | Up to 4K with upscale and HDR | Yes |
| Enterprise | Custom | Custom | 20,000 | Up to 4K with upscale and HDR | Yes |
A 10-second generation on Ray2 or Ray3 costs on the order of 800 credits, with the exact figure depending on resolution, model, and any added features such as HDR or upscaling. The credit system replaced the old fixed-count plans because Luma's product line grew to include several models and several output sizes, each with different compute requirements.
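Read against the plan table above, the ~800-credit figure implies monthly budgets of roughly a dozen full-length clips on the mid-tier plan. A back-of-the-envelope sketch (the per-clip cost is the approximate figure quoted here, not a published constant):

```python
# Rough clip-capacity estimate per plan. 800 credits per 10-second clip
# is the approximate figure cited above; actual costs vary by model,
# resolution, and options such as HDR or upscaling.
CREDITS_PER_CLIP = 800

plans = {
    "Lite": 3_200,
    "Plus": 10_000,
    "Unlimited (fast pool)": 10_000,  # plus unlimited relaxed generations
    "Enterprise": 20_000,
}

for name, credits in plans.items():
    print(f"{name}: ~{credits // CREDITS_PER_CLIP} ten-second clips per month")
# Lite: ~4, Plus: ~12, Unlimited fast pool: ~12, Enterprise: ~25
```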
The API uses a separate billing pool. As of early 2026, Luma's published per-second video rates for Ray3 came down sharply with the Ray3.14 update, which Luma described as approximately three times cheaper than the launch version of Ray3. Luma directs developers to lumalabs.ai/dream-machine/api/pricing for the current rate card. Through Amazon Bedrock, Ray2 is billed under Bedrock's standard asynchronous video pricing and invoiced through AWS rather than Luma directly.
The table below summarizes how Dream Machine's most recent Ray3 release compares to other leading video generation models as of early 2026. Native HDR and the reasoning step are the two features that most clearly distinguish Ray3 from its peers; the closest peer on overall benchmark performance is Veo 3.
| Model | Developer | Max native resolution | Native HDR | Native audio | Max clip duration | Notable differentiators |
|---|---|---|---|---|---|---|
| Luma Ray3 | Luma AI | 4K via Hi-Fi mastering, 1080p native (Ray3.14) | 10-, 12-, 16-bit ACES2065-1 EXR | No | 10 seconds (extendable) | Reasoning over prompts; Draft Mode; HDR output; Modify Video with start and end frames |
| Sora 2 | OpenAI | 1080p (Pro variant 1792x1024) | No | Yes (dialogue, sound effects, ambient, music) | ~25 seconds (Pro) | Native audio; physics simulation; Cameos likeness consent feature; consumer iOS app (shut down April 2026) |
| Veo 3 | Google DeepMind | 1080p | No (Veo 3.1 limited HDR) | Yes | ~8 seconds | Long-form scene consistency, deep Google Cloud integration |
| Runway Gen-4 | Runway | 1080p | No | No | ~10 seconds | Director Mode, keyframes, established creative-professional toolchain |
| Kling 3.0 | Kuaishou | 4K native | No | Yes | ~10 seconds (extendable) | Native 4K output, multi-shot storyboards, cost efficient |
| Hailuo AI 02 | MiniMax | 1080p | No | Yes (limited) | ~10 seconds | Strong global benchmarks, cost competitive |
| Pika 2.x | Pika Labs | 1080p | No | Yes (basic) | ~10 seconds | Creative effects, social-friendly UI |
Luma's October 2025 evaluation report covered Ray3, Veo 3, Runway Gen-4, Midjourney Video, and Moonvalley Marey, but not Sora 2 or Kling 3.0. The report concluded that Ray3 matched Veo 3 on physics and motion and on instruction-following, while leading the field on aesthetic quality, motion artifacts, dynamic range, and temporal consistency. The image-to-video subtask was close to parity with Veo 3 and ahead of the other models tested. Independent benchmarks have generally placed Ray3 in the top tier on cinematic quality, alongside Veo 3 and Sora 2, with the HDR pipeline as a distinguishing technical claim that no other major model has matched at the same bit depth.
Dream Machine's June 2024 launch was met with enthusiasm and a quick wave of viral content. Animated versions of legacy internet memes circulated on X within days, and a Pixar-style ancient Egypt animation by director Ellenor Argyropoulos drew several million views. VentureBeat described the launch as evidence that Luma had jumped from a niche 3D capture tool into the front rank of generative AI startups.
By early 2026, Luma reported more than 25 million registered Dream Machine users since launch, with later coverage citing figures of around 30 million. Estimates of Luma's share of the AI video market in late 2025 ranged from 15 to 20 percent, placing it between Pika and Runway in most rankings. CineD covered Ray3's ACES2065-1 EXR support as the first time a generative video model had produced data suitable for direct ingestion into a film color pipeline rather than requiring a conversion pass.
The Adobe Firefly integration brought Dream Machine to a broader pool of Creative Cloud subscribers. The Hollywood Reporter described the Adobe deal as the first major partnership through which a generative video model was distributed inside an established creative software suite, and PetaPixel framed it as a turning point for how working creatives would access AI video.
Notable production credits announced alongside Ray3 included Dentsu Digital, which planned to use the model for Japanese advertising production, and additional agency partners HUMAIN Create, Monks (S4), Galeria, and Strawberry Frog. The Ray3 Modify launch in December 2025 emphasized hybrid live-action and AI workflows; brand teams described scenarios in which a single actor performance could be re-skinned for different territories or product variants without reshooting.
Criticism has been consistent across the Ray series and has tended to focus on training data transparency. Wikipedia's entry on Dream Machine notes that Luma has not disclosed which video corpora it trained on, and reviewers have remarked on the model's apparent comfort generating clips that closely resemble named studio styles. The training-data question applies to almost every generative video model, but Luma's relative silence on it has been mentioned in coverage by VentureBeat and by independent reviewers at CineD and Tom's Guide. The Ray3 release shifted some attention away from this concern because of its production-oriented framing, but the underlying disclosure gap remains.
A second strand of reception has focused on the public feed in Dream Machine. As with other consumer AI video products such as the now-shuttered Sora app, some critics argue that the feed encourages a low-effort "slop" aesthetic, while supporters describe it as a useful prompt-discovery surface for new users.