Adobe Firefly is a family of creative generative AI models developed by Adobe and integrated across its product ecosystem. Designed specifically for creative professionals, Firefly powers features in Adobe Photoshop, Adobe Illustrator, Adobe Premiere Pro, Adobe Express, and a standalone web and mobile application. Unlike many competing image generation tools, Firefly is trained exclusively on licensed Adobe Stock content, openly licensed works, and public domain material, which allows Adobe to market its outputs as "commercially safe" and offer IP indemnification to qualifying enterprise customers.
Since its public beta launch in March 2023, Firefly has generated over 24 billion assets as of mid-2025, making it one of the most widely adopted generative AI tools in the creative industry. It captured approximately 29% market share among AI design tools by the end of 2024, ahead of Midjourney (19%), Canva AI (16%), and DALL-E (14%).
Adobe first previewed Firefly technology at its MAX conference in October 2022. On March 21, 2023, Adobe officially unveiled Firefly as a public beta, describing it as a family of creative generative AI models that could be integrated across its product suite. The initial release focused on two capabilities: text-to-image generation and text effects (decorative lettering generated from prompts).
The beta attracted enormous interest. Within three months of launch, users had generated over 1 billion images. By September 2023, that figure had crossed 2 billion. Adobe credited the rapid adoption to Firefly's tight integration with existing Creative Cloud workflows and its focus on commercial safety.
In May 2023, Adobe introduced Generative Fill in the Photoshop beta app, bringing Firefly-powered AI directly into Photoshop for the first time. The feature lets users add, extend, or remove image content using simple text prompts; Adobe described it as the world's first AI co-pilot for creative and design workflows. In July 2023, Generative Expand followed, letting users extend images beyond their original borders with AI-generated backgrounds.
On June 22, 2023, Firefly for Enterprise became available, giving businesses access to Firefly through Adobe Experience Cloud.
On September 13, 2023, after a six-month beta period, Adobe made Firefly commercially available. Firefly-powered features were integrated into Creative Cloud, Adobe Express, and Adobe Experience Cloud and cleared for commercial use. At this milestone, Adobe also announced the first round of bonus payments to Adobe Stock contributors whose content had been used to train Firefly models.
Alongside the commercial release, Adobe introduced a generative credits system to manage usage across its platform. Creative Cloud subscribers received a monthly allotment of generative credits, with different features consuming different amounts.
At Adobe MAX on October 10, 2023, Adobe released the Firefly Image 2 Model. This second-generation model brought significant improvements in image quality, photorealistic detail (including skin pores and foliage), and creative control. Key new features included Generative Match, which applies the style of a reference image to new generations, Photo Settings for camera-style adjustments, and new sharing and collaboration options.
By the end of 2023, Firefly had generated over 3 billion assets.
Adobe introduced the Firefly Image 3 Foundation Model on April 23, 2024. This model delivered major advances in photorealistic quality, styling capabilities, and generation speed. Notable features included Structure Reference, Style Reference, auto-stylization, and generation up to 4x faster than the previous model.
Also in early 2024, Adobe announced Firefly Services, offering brands more than 20 generative and creative APIs for integrating Firefly capabilities into their own products and workflows.
By April 2024, cumulative Firefly generations exceeded 7 billion.
At Adobe MAX in October 2024, Adobe introduced over 100 new Creative Cloud features powered by Firefly. The headline announcement was the Firefly Video Model, released in limited public beta. This was the first publicly available video generation model designed to be commercially safe. Its capabilities included text-to-video and image-to-video generation, camera angle controls, start- and end-frame specification, and generation of atmospheric effects.
Generative Extend, powered by the Firefly Video Model, was introduced in Premiere Pro beta, enabling video editors to extend clips to cover gaps, smooth transitions, or hold shots longer.
By September 2024, total Firefly generations reached 12 billion. By October, that number rose to 13 billion, and by November, 16 billion.
On April 24, 2025, Adobe released Firefly Image Model 4 alongside Image Model 4 Ultra and a redesigned Firefly web application. Image Model 4 delivered improvements in quality, speed, and control over structure, style, camera angles, and zoom, with image generation at resolutions up to 2K. Image Model 4 Ultra was a more capable variant designed for complex scenes with intricate details and small structures.
The Firefly Video Model also became generally available in April 2025, with support for 4K and vertical video in Premiere Pro through Generative Extend.
Additional AI features in Premiere Pro at this time included Media Intelligence (for finding relevant clips from large footage libraries) and Caption Translation (localizing captions in 27 languages).
On June 17, 2025, Adobe launched the Firefly mobile app for iOS and Android. The app brought the full range of Firefly capabilities to mobile devices, including text-to-image generation, Style Reference, Structure Reference, Generative Fill, Generative Expand, Generative Remove, image-to-video conversion, sound effect generation, and text effect creation. The mobile app syncs seamlessly with Creative Cloud desktop applications.
Firefly Boards launched globally on September 24, 2025, as a collaborative, AI-powered canvas for creative ideation. Boards allow teams to generate, edit, and arrange images, text, shapes, and video in a shared workspace with real-time collaboration. Features include Presets for one-click style generation, Describe Image (which converts any visual into a reusable prompt), and Generative Text Edit for updating text directly within visuals. Boards integrates models from multiple partners, including Black Forest Labs, Google, Luma AI, Moonvalley, Pika, and Runway.
At Adobe MAX on October 28, 2025, Adobe introduced Firefly Image Model 5 and several new tools. Image Model 5 generates photorealistic images at native 4MP resolution without upscaling, excels at capturing lighting and texture, creates lifelike portraits with anatomical accuracy, and supports prompt-based editing in everyday language.
New audio and video tools announced at MAX 2025 included Generate Soundtrack, which creates fully licensed audio tracks from text descriptions, and Generate Speech, which produces voiceovers from text prompts.
Adobe also expanded its partner model ecosystem to include ElevenLabs, Google, Luma AI, OpenAI, Runway, and Topaz Labs. New Firefly Custom Models were announced, enabling creators to train personalized models on their own visual style.
Additional features included Rotate Object (converting 2D images into poseable 3D representations), PDF exporting from Boards, and bulk image downloading.
In December 2025, Adobe added Black Forest Labs' FLUX.2 model to the Firefly platform, making it available in text-to-image, Prompt to Edit, Firefly Boards, and Photoshop's Generative Fill. Video upscaling capabilities were added through partner model Topaz Astra. Adobe also launched the full beta of the Firefly video editor with enhanced capabilities for precise edits and camera motion control.
As a promotional offer, Firefly Pro, Firefly Premium, and higher-tier credit plan subscribers received unlimited image and video generations through January 15, 2026.
On March 19, 2026, Adobe launched Firefly Custom Models in public beta. This feature lets creators train a custom model on 10 to 30 of their own images, producing a personalized model that preserves details like stroke weight, color palettes, lighting, and character features. Custom models are optimized for character, illustration, and photographic styles.
The Firefly model library expanded to over 30 third-party models, with new additions including Google's Nano Banana 2, Veo 3.1, Runway's Gen-4.5, and Kling's 2.5 Turbo. Firefly Image Model 5 became generally available across the platform.
Adobe also introduced Quick Cut, a video feature that organizes raw footage into a structured first cut in minutes. A private beta for Project Moonlight launched, providing a conversational AI assistant interface across Adobe products including Photoshop, Express, and Acrobat.
Adobe has released multiple generations of Firefly models across the image, video, audio, and vector/design domains.

Image models:

| Model | Release Date | Key Features | Max Resolution |
|---|---|---|---|
| Firefly Image Model 1 | March 2023 (beta), September 2023 (GA) | Text-to-image, text effects; trained on Adobe Stock, public domain, and openly licensed content | Standard |
| Firefly Image Model 2 | October 2023 | Generative Match, Photo Settings, improved photorealistic detail, sharing and collaboration features | Standard |
| Firefly Image Model 3 | April 2024 | Structure Reference, Style Reference, auto-stylization, 4x faster generation | Standard |
| Firefly Image Model 4 | April 2025 | Improved quality and control, camera angle/zoom controls | Up to 2K |
| Firefly Image Model 4 Ultra | April 2025 | Complex scene rendering, intricate detail handling | Up to 2K |
| Firefly Image Model 5 | October 2025 (beta), March 2026 (GA) | Native 4MP resolution, photorealistic portraits, prompt-based editing in natural language | 4MP native |

Video models:

| Model | Release Date | Key Features | Max Resolution |
|---|---|---|---|
| Firefly Video Model | October 2024 (limited beta), April 2025 (GA) | Text-to-video, image-to-video, camera angle controls, start/end frame specification, atmospheric effects | Up to 1080p (4K with Generative Extend) |

Audio tools:

| Model | Release Date | Key Features |
|---|---|---|
| Generate Soundtrack | October 2025 | Creates fully licensed audio tracks from text descriptions |
| Generate Speech | October 2025 | Produces voiceovers from text prompts |
| Generate Sound Effects | 2025 | Creates sound effects from text descriptions |

Vector and design models:

| Model | Key Features |
|---|---|
| Firefly Vector Model | Text-to-vector generation, editable SVG output, Generative Shape Fill |
| Firefly Design Model | Template generation, layout creation from text prompts |
One of Adobe Firefly's most distinctive characteristics is its approach to training data. Unlike competitors such as Stable Diffusion and Midjourney, which have faced legal challenges over the use of copyrighted images in their training datasets, Firefly is trained exclusively on licensed Adobe Stock content, openly licensed works, and public domain material.
This training approach means Firefly avoids using copyrighted material without authorization, which Adobe describes as making its outputs "commercially safe." For enterprise customers on qualifying plans, Adobe offers contractual IP indemnification, meaning Adobe will cover legal costs if any IP claims arise from content generated by Firefly, provided the customer used the product within its terms and conditions.
Adobe developed a compensation program for Adobe Stock contributors whose content was used to train Firefly models. The first Firefly Contributor bonus payment was issued on September 13, 2023, alongside the commercial launch. The bonus is paid annually and is weighted toward the number of licenses issued for an image during the training period. The third annual bonus was distributed on September 17, 2025. Eligible contributors include those with photos, vectors, illustrations, videos, and generative AI content considered for training. Bonus amounts vary by contributor and are paid at Adobe's discretion.
Firefly is deeply integrated across Adobe's Creative Cloud suite, powering AI features in multiple flagship applications.

Adobe Photoshop:

| Feature | Description |
|---|---|
| Generative Fill | Adds, replaces, or removes objects in images using text prompts. Works non-destructively on a separate generative layer. |
| Generative Expand | Extends images beyond their original borders with AI-generated backgrounds and content. |
| Generative Remove | Removes unwanted objects from images, automatically filling in the background. |
| Selection Enhancement | Generative Fill is integrated into every selection tool, enabling AI-assisted editing from any selection. |
Generative Fill was introduced in the Photoshop beta in May 2023 and became commercially available in September 2023. It remains one of the most popular Firefly-powered features, letting users describe additions or changes in natural language and see them applied to their canvas.

Adobe Illustrator:

| Feature | Description |
|---|---|
| Text to Vector Graphic | Generates detailed, editable vector images (scenes, subjects, icons) from text prompts. Outputs downloadable SVG files. |
| Generative Shape Fill | Fills vector shapes with detail and color based on text descriptions, matching the user's existing style. |
| Generative Expand | Extends vector artwork beyond its original bounds with AI-generated content. |
| Generative Recolor | Recolors vector artwork using text prompts, enabling rapid exploration of color variations. |

Adobe Premiere Pro:

| Feature | Description |
|---|---|
| Generative Extend | Extends video and audio clips to cover gaps, smooth transitions, or hold shots longer. Supports 4K and vertical video. |
| Media Intelligence | AI-powered search that finds relevant clips from terabytes of footage in seconds. |
| Caption Translation | Instantly localizes captions in 27 languages. |
Generative Extend in Premiere Pro launched in beta in October 2024 and became generally available in April 2025. Editors use it to extend B-roll footage to match narration timing, stretch establishing shots, extend ambient audio for seamless transitions, and adjust timing without sacrificing composition.

Adobe Express:

| Feature | Description |
|---|---|
| Text to Image | Generates images from text prompts directly within Express projects. |
| Text Effects | Creates decorative, stylized text from prompts. |
| Remove Background | One-click background removal for isolating subjects. |
| Generative Fill | Adds or modifies content within Express designs. |
Adobe Express with Firefly integration became commercially available in 2023, bringing generative AI capabilities to a broader audience of content creators, marketers, and social media professionals.
Firefly Creative Production, available through Adobe Firefly Services, offers bulk workflow automation. Preset workflows for actions like Remove Background, Color Grade, and Crop Image became available to customers on paid plans with premium generative features starting October 28, 2025.
The Firefly web application (firefly.adobe.com) serves as the primary standalone interface for accessing Firefly's capabilities outside of Creative Cloud desktop apps. The web app was redesigned in April 2025 alongside the Image Model 4 release, offering a streamlined interface for text-to-image generation, image editing, text effects, and vector generation.
The Firefly mobile app, launched in June 2025 for iOS and Android, brings the full creative toolset to mobile devices. Users can generate images, edit with Generative Fill and Generative Expand, create videos from generated images, add sound effects to video timelines, and generate text effects and vector artwork. The mobile and web apps sync with Creative Cloud for seamless project continuity.
Adobe is a founding member of the Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA), both of which aim to promote transparency and accountability in digital content creation.
Every asset generated through Adobe Firefly automatically receives Content Credentials. These credentials function like a "nutrition label" for digital content, providing tamper-evident metadata that records how a piece of content was created and edited, including the tool used and whether generative AI was involved.
Content Credentials are built on the C2PA technical standard, which uses cryptographic methods to bind provenance information to content and verify that the information has not been altered since it was originally attached. This system helps viewers and platforms distinguish AI-generated content from human-created content, supporting trust and transparency in the creative ecosystem.
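The tamper-evidence idea can be illustrated with a much-simplified sketch: bind a hash of the content and its metadata together under a key, and refuse to verify if either changes. This is not the C2PA implementation (real Content Credentials use certificate-based signatures and a structured manifest format, and the function names here are hypothetical); it only shows why altered content or metadata fails verification.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate's key

def attach_credentials(content: bytes, metadata: dict) -> dict:
    """Bind provenance metadata to content with an HMAC over both,
    so any change to either is detectable."""
    manifest = json.dumps(metadata, sort_keys=True).encode()
    digest = hashlib.sha256(content).digest() + manifest
    tag = hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": tag}

def verify_credentials(content: bytes, credentials: dict) -> bool:
    """Recompute the binding and compare in constant time."""
    manifest = json.dumps(credentials["metadata"], sort_keys=True).encode()
    digest = hashlib.sha256(content).digest() + manifest
    expected = hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credentials["signature"])
```

Editing the image bytes or the recorded metadata after signing causes verification to fail, which is the property "tamper-evident" refers to.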
Adobe has also worked toward implementing a universal "Do Not Train" Content Credentials tag, which allows content creators to signal that their work should not be used for AI model training.
Adobe Firefly uses a generative credits system to manage usage. Credits function as tokens that reflect the processing power required for different types of generative outputs.
| Plan | Monthly Price | Premium Credits | Standard Generations | Key Features |
|---|---|---|---|---|
| Firefly Free | $0 | 25 | Limited | Basic text-to-image, text effects |
| Firefly Standard | $9.99 | 2,000 | Unlimited | Full access to standard generative features |
| Firefly Pro | $19.99 | 4,000 | Unlimited | Priority access, premium video and audio features |
| Firefly Premium | $199.99 | 50,000 | Unlimited | High-volume professional and enterprise use |
| Creative Cloud Pro | Varies | Included | Unlimited | Full Creative Cloud suite with premium Firefly features |
Paid plan subscribers receive unlimited access to standard generations. Premium credits are consumed only for premium features such as video generation, audio generation, translation, and partner model usage. Standard features (like basic text-to-image using Adobe's own models) do not consume premium credits on paid plans.
Credits renew monthly on the billing date and do not roll over. The number of credits consumed per generation depends on the model selected, the output type, and the file size. Most standard features cost 1 credit per generation.
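The accounting described above can be sketched as a small helper. The premium per-feature costs below are hypothetical placeholders; only the 1-credit cost for standard generations comes from the text, and Adobe publishes the actual rates.

```python
# Hypothetical premium costs for illustration only.
PREMIUM_COSTS = {
    "video_generation": 100,
    "audio_generation": 20,
    "partner_model": 10,
}

def credits_consumed(jobs):
    """Return (standard_generations, premium_credits) for a batch of jobs.

    On paid plans, standard features are unlimited and draw no premium
    credits; premium features deduct from the monthly allotment."""
    standard = premium = 0
    for feature in jobs:
        if feature in PREMIUM_COSTS:
            premium += PREMIUM_COSTS[feature]
        else:
            standard += 1  # standard features cost 1 credit each
    return standard, premium
```

For example, two text-to-image jobs plus one video job would register two standard generations and draw premium credits only for the video.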
As of February 12, 2025, the legacy Firefly plan and the Firefly Generative Credits add-on were discontinued for new purchases, though existing subscribers can continue to renew.
Firefly's core capability is generating images from text prompts. Users describe what they want to see, and the model produces multiple variations. Controls include aspect ratio, content type (photo vs. art), style presets, color and tone adjustments, lighting settings, and camera angle parameters. With Image Model 4 and later versions, users can generate images at up to 2K resolution, and Image Model 5 produces native 4MP output.
Beyond generation, Firefly provides AI-powered editing capabilities. Generative Fill allows users to select areas of an image and describe additions or replacements. Generative Expand extends images beyond their borders. Generative Remove erases unwanted elements. These features work non-destructively, preserving the original image.
The text effects feature generates decorative lettering based on text prompts. Users type a word or phrase and describe a style (for example, "made of flowers" or "carved in ice"), and Firefly renders stylized text that can be used in designs, social media posts, and marketing materials.
Firefly generates detailed, editable vector graphics from text prompts. Outputs are available in SVG format and can be directly opened or pasted into Adobe Illustrator for further editing. This feature supports the creation of subjects, scenes, icons, and patterns.
The Firefly Video Model supports text-to-video and image-to-video generation at resolutions up to 1080p. Users can specify camera angles, define start and end frames, control motion, and generate atmospheric elements. Videos can also be generated with transparent backgrounds for compositing.
Introduced at MAX 2025, Firefly's audio capabilities include Generate Soundtrack (creating licensed music from text descriptions), Generate Speech (producing voiceovers), and Generate Sound Effects (creating ambient or specific sound effects). These can be added directly to video timelines.
Firefly Boards is an AI-first collaborative canvas for ideation and moodboarding. Teams can generate, edit, and arrange multimedia content in real time, with access to Adobe's own models and over 30 third-party partner models. Features include Presets, Describe Image, Generative Text Edit, Rotate Object, and PDF export.
Launched in public beta in March 2026, Firefly Custom Models let creators train personalized AI models on 10 to 30 of their own images. The resulting models preserve visual consistency in details like stroke weight, color palettes, lighting, and character features. Custom models are currently optimized for character, illustration, and photographic styles.
Scene to Image (beta) generates images based on 3D scene compositions. Users can arrange 3D elements and generate photorealistic or stylized 2D images from those compositions, bridging the gap between 3D prototyping and final 2D output.
Adobe has expanded Firefly into a multi-model platform. As of March 2026, the Firefly ecosystem includes models from more than ten external providers:
| Partner | Model(s) | Type |
|---|---|---|
| Black Forest Labs | FLUX.2 | Image generation |
| Google | Nano Banana 2, Veo 3.1 | Image and video generation |
| OpenAI | Image models | Image generation |
| Runway | Gen-4.5 | Video generation |
| Luma AI | Video models | Video generation |
| Pika | Video models | Video generation |
| Moonvalley | Marey | Video generation |
| Kling | 2.5 Turbo | Video generation |
| ElevenLabs | Audio models | Audio generation |
| Topaz Labs | Astra | Video upscaling |
| Ideogram | Image models | Image generation |
Partner model outputs consume premium credits and are subject to their respective terms of use, which may differ from Adobe's own models regarding commercial safety and IP indemnification.
Firefly's growth since launch has been rapid.
| Date | Cumulative Generations | Notable Milestone |
|---|---|---|
| June 2023 | 1 billion | Three months after public beta launch |
| September 2023 | 2 billion | Commercial release |
| December 2023 | ~3 billion | End of 2023 |
| April 2024 | 7 billion | Image Model 3 launch |
| September 2024 | 12 billion | Pre-MAX 2024 |
| October 2024 | 13 billion | MAX 2024 |
| November 2024 | 16 billion | Post-Video Model launch |
| April 2025 | 22 billion | Image Model 4 launch |
| May 2025 | 24 billion | Generation rate of ~1.5 billion/month |
By December 2025, monthly active users for Adobe's freemium AI offerings (including Adobe Express and the Firefly web interface) surpassed 70 million, a 35% increase year-over-year. Approximately 75% of Fortune 500 companies were using Adobe Firefly as of 2025. Enterprise revenue accounted for 61% of Firefly's total revenue, with estimated direct Firefly revenue reaching $400 million between 2024 and 2025.
Consumption of generative credits tripled in the final quarter of 2025, indicating that AI-powered creative tools had shifted from experimental novelty to a core component of professional creative workflows.
Adobe Firefly competes with several major AI image and video generation platforms.
| Platform | Developer | Key Strength | Training Data | Pricing | Commercial Use |
|---|---|---|---|---|---|
| Adobe Firefly | Adobe | Creative Cloud integration, commercial safety, IP indemnification | Licensed Adobe Stock, public domain, openly licensed | Free tier + $9.99-$199.99/mo | Commercially safe by design |
| Midjourney | Midjourney, Inc. | Artistic quality, aesthetic impact, community-driven | Web-scraped data (various sources) | $10-$120/mo (no free tier) | Allowed on paid plans |
| DALL-E | OpenAI | Prompt accuracy, ChatGPT integration, ease of use | Web-scraped and licensed data | Included with ChatGPT Plus ($20/mo) | Allowed with usage rights |
| Stable Diffusion | Stability AI | Open-source, local deployment, full customization | LAION and other web-scraped datasets | Free (open-source) or via hosted services | Varies by license and model |
| Flux | Black Forest Labs | Photorealistic quality, open weights | Various sources | Open-source with commercial options | Varies by license |
Firefly differentiates itself primarily through its commercial safety guarantees, its deep integration with the Adobe Creative Cloud ecosystem used by millions of professionals worldwide, and its IP indemnification program for enterprise customers. Competitors like Midjourney lead in artistic quality and community engagement, while Stable Diffusion offers unmatched flexibility through its open-source architecture. DALL-E benefits from integration with ChatGPT and OpenAI's conversational interface.
Adobe Firefly is built on top of Adobe Sensei, the company's longstanding AI and machine learning platform. Firefly's image generation models use diffusion model architectures, which generate images by iteratively denoising random noise guided by text prompts. The models were trained using Adobe's licensed dataset, and the training pipeline includes safeguards to prevent the reproduction of copyrighted material.
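The iterative-denoising idea can be shown with a toy reverse-diffusion loop. This is a schematic of diffusion sampling in general, not Adobe's implementation: the "model" below is an oracle that predicts the exact residual noise, whereas a real model predicts it from learned weights conditioned on the text prompt.

```python
import random

def toy_denoise(target, steps=50, seed=0):
    """Toy reverse-diffusion loop: start from pure noise, then repeatedly
    subtract the noise a 'model' predicts. The oracle here predicts the
    exact residual (x - target), so the loop converges to the target."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in target]  # x_T: pure Gaussian noise
    for t in range(steps, 0, -1):
        predicted_noise = [xi - ti for xi, ti in zip(x, target)]
        step = 1.0 / t  # remove 1/t of the predicted noise this step
        x = [xi - step * ni for xi, ni in zip(x, predicted_noise)]
    return x  # at t=1 the full residual is removed, recovering the target
```

The interesting part is the loop shape: each step removes a fraction of the predicted noise, so the sample drifts from randomness toward a coherent output over many small updates.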
Firefly's API layer, known as Firefly Services, exposes over 20 generative and creative APIs that allow third-party developers and enterprise customers to integrate Firefly capabilities into their own applications and workflows.
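Calling such an API from an enterprise integration typically means assembling an authenticated HTTPS request. The endpoint URL, header names, and body fields below are assumptions for illustration only; Adobe's Firefly Services documentation defines the actual contract.

```python
import json

# Hypothetical endpoint and field names for a Firefly Services
# text-to-image call; check Adobe's current API reference for
# the real paths, headers, and schema.
FIREFLY_ENDPOINT = "https://firefly-api.adobe.io/v3/images/generate"  # assumed

def build_generate_request(prompt, access_token, api_key,
                           width=2048, height=2048):
    """Assemble the URL, headers, and JSON body for a text-to-image job."""
    headers = {
        "Authorization": f"Bearer {access_token}",  # OAuth server-to-server token
        "x-api-key": api_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "prompt": prompt,
        "size": {"width": width, "height": height},
    })
    return FIREFLY_ENDPOINT, headers, body
```

The returned triple can be handed to any HTTP client (for example `urllib.request`); nothing here performs network I/O.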
Adobe positions Firefly as a responsible approach to generative AI in the creative industry.
Through the Content Authenticity Initiative (CAI) and the C2PA standard, Adobe ensures that every Firefly-generated asset carries embedded Content Credentials. This supports transparency about how content was created and helps combat deepfakes and misinformation.
Adobe is working toward a universal "Do Not Train" Content Credentials tag, enabling creators to explicitly mark their content as off-limits for AI training. This gives content creators more control over how their work is used in the development of AI models.
Adobe's annual Firefly Contributor bonus program compensates Adobe Stock contributors whose work was used to train Firefly models. While the exact bonus amounts vary and are paid at Adobe's discretion, the program represents one of the few structured compensation mechanisms in the AI training data space.
Firefly features have age restrictions. During the initial beta period, Firefly was not available to users under 18.
Adobe Research continues to explore new directions for generative AI in creative workflows.
One research direction investigates how simple marks, such as lines drawn with a stylus, can be combined with AI to produce desired results with minimal brushstrokes, simplifying the image editing process.
Customizable diffusion research focuses on allowing creators to select the specific images that inform the generative AI, providing more creative control over individual outputs and streamlining the application of creative choices across multiple works.
Generative image compositing simplifies the process of combining elements from multiple photos while maintaining a natural appearance. This AI-assisted approach reduces the time spent on manual adjustments for color, shading, perspective, and shadows.
Announced in March 2026, Project Moonlight is an agentic AI assistant with a conversational interface. Instead of using a simple prompt box, users describe what they want to accomplish in a turn-by-turn chat. Moonlight is designed to work across Adobe's product suite, including Photoshop, Express, and Acrobat, and entered private beta alongside the announcement.