AI in art refers to the use of artificial intelligence systems to create, assist in creating, or influence visual art. The field has a longer history than many realize, stretching back to Harold Cohen's AARON program in the 1970s, but it exploded into mainstream awareness in 2022 with the release of powerful text-to-image models like DALL-E 2, Midjourney, and Stable Diffusion. AI art has since become one of the most contested topics in both the art world and the technology industry, raising questions about authorship, copyright, labor, and the nature of creativity itself.
AARON is one of the earliest and most significant AI art systems. Developed by British-born artist Harold Cohen beginning in the early 1970s at the University of California, San Diego, AARON was a procedural, rule-based program that could generate original drawings and, later, paintings. Unlike modern AI art tools, AARON did not learn from existing images or use statistical pattern matching. Instead, it operated through bodies of knowledge encoded as if-then rules and instructions for completing drawing tasks, essentially mimicking human decision-making processes [1].
Cohen continued developing AARON until his death in 2016, progressively expanding its capabilities from simple line drawings to full-color paintings of human figures and garden scenes. AARON's works were exhibited at major institutions including the Tate Gallery in London and the San Francisco Museum of Modern Art. In 2024, the Whitney Museum of American Art hosted a retrospective exhibition, "Harold Cohen: AARON," which placed AARON's work in the context of the current AI art debate [2].
In 2015, Google engineer Alexander Mordvintsev created DeepDream, a program that uses a convolutional neural network (originally trained for image classification) to find and enhance patterns in images. The process, a form of algorithmic pareidolia, produces deliberately over-processed images with a psychedelic, dreamlike appearance filled with swirling shapes, eyes, and animal-like forms. DeepDream generated widespread public interest and became the first widely shared example of neural network-generated visual art [3].
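The core mechanism can be sketched in miniature: run gradient ascent on the input itself, so that whatever patterns a network layer weakly responds to get amplified. The sketch below is a deliberate toy, a 1D signal and a single hand-picked kernel standing in for a trained CNN layer.

```python
# Toy DeepDream: gradient ascent on the input signal to amplify
# whatever patterns a fixed "layer" responds to. A real implementation
# backpropagates through a trained CNN; the kernel here is a
# hand-picked stand-in for a learned feature detector.

KERNEL = [-1.0, 2.0, -1.0]  # responds to local peaks (a crude "feature")

def activations(signal, kernel):
    """Valid 1D cross-correlation of signal with kernel."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

def activation_energy(signal, kernel):
    """The quantity gradient ascent maximizes: 0.5 * sum(a_i ** 2)."""
    return 0.5 * sum(a * a for a in activations(signal, kernel))

def dream_step(signal, kernel, lr=0.01):
    """One ascent step: signal += lr * dE/dsignal, where
    dE/dsignal[n] = sum_i a[i] * kernel[n - i] (a transposed convolution)."""
    acts = activations(signal, kernel)
    grad = [0.0] * len(signal)
    for i, a in enumerate(acts):
        for j, kj in enumerate(kernel):
            grad[i + j] += a * kj
    return [s + lr * g for s, g in zip(signal, grad)]

# Start from a faint pattern and "dream" it into prominence.
signal = [0.0, 0.1, 0.0, -0.1, 0.0, 0.1, 0.0, -0.1]
before = activation_energy(signal, KERNEL)
for _ in range(50):
    signal = dream_step(signal, KERNEL)
after = activation_energy(signal, KERNEL)
# The filter's response grows: faint structure becomes exaggerated,
# which is the source of DeepDream's hallucinatory look.
```

The psychedelic quality of real DeepDream images comes from applying this same amplification through deep layers that respond to eyes, fur, and animal shapes rather than a simple peak detector.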
Also in 2015, researchers at the University of Tübingen published a paper demonstrating neural style transfer, a technique that uses neural networks to apply the visual style of one image (such as a Van Gogh painting) to the content of another (such as a photograph). Apps like Prisma (2016) brought this capability to millions of smartphone users, though the results were limited to applying existing artistic styles rather than generating novel compositions [4].
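The style representation at the heart of this technique is the Gram matrix: it records which features co-occur across an image while discarding where they occur, and the style loss compares the Gram matrices of the style image and the generated image. A minimal pure-Python sketch, using tiny hand-made "feature maps" as stand-ins for real CNN activations:

```python
# Neural style transfer measures style as feature correlations:
# G[c1][c2] = sum over spatial positions of feature c1 times feature c2.
# The feature maps below are tiny stand-ins for CNN layer activations.

def gram_matrix(features):
    """features: list of C channels, each a flat list of N positions."""
    C = len(features)
    return [[sum(fa * fb for fa, fb in zip(features[c1], features[c2]))
             for c2 in range(C)]
            for c1 in range(C)]

def style_loss(features_style, features_generated):
    """Normalized sum of squared differences between the two Gram matrices."""
    gs = gram_matrix(features_style)
    gg = gram_matrix(features_generated)
    C, N = len(features_style), len(features_style[0])
    scale = 1.0 / (4.0 * C * C * N * N)
    return scale * sum((gs[i][j] - gg[i][j]) ** 2
                       for i in range(C) for j in range(C))

# Two channels, four spatial positions each.
style_feats = [[1.0, 0.0, 1.0, 0.0],
               [0.0, 1.0, 0.0, 1.0]]

# Same feature correlations, different spatial arrangement: the Gram
# matrices match, so the style loss is zero. Style ignores layout,
# which is exactly why a photo can keep its content while taking on
# a painting's texture.
shifted = [[0.0, 1.0, 0.0, 1.0],
           [1.0, 0.0, 1.0, 0.0]]
assert style_loss(style_feats, shifted) == 0.0
```

In the full algorithm this style loss (summed over several layers) is combined with a content loss on deeper-layer activations, and the generated image is optimized by gradient descent against both.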
Generative adversarial networks (GANs), introduced by Ian Goodfellow in 2014, became a major tool for AI art in the late 2010s. A GAN consists of two neural networks: a generator that creates images and a discriminator that evaluates whether images are real or generated. Through iterative training, the generator learns to produce increasingly convincing images.
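The adversarial dynamic can be illustrated with a deliberately tiny sketch: a one-parameter "generator" that emits numbers near a value theta, and a logistic "discriminator," standing in for the deep networks a real GAN would use.

```python
# Toy GAN: the generator "generates" numbers near its single parameter
# theta; the discriminator is a logistic classifier D(x) = sigmoid(w*x + c).
# Real GANs use deep networks over images, but the alternating
# adversarial updates below have the same structure.
import math
import random

random.seed(0)
sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))

REAL_MEAN = 4.0          # the "data distribution": numbers near 4
theta = 0.0              # generator parameter, starts far from the data
w, c = 0.0, 0.0          # discriminator parameters
lr_d, lr_g = 0.05, 0.05

for step in range(2000):
    real = REAL_MEAN + random.gauss(0, 0.1)
    fake = theta + random.gauss(0, 0.1)

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. learn to score real samples high and fakes low.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr_d * ((1 - d_real) * real - d_fake * fake)
    c += lr_d * ((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake) (the non-saturating loss),
    # nudging theta in whichever direction makes the discriminator
    # more likely to say "real".
    d_fake = sigmoid(w * fake + c)
    theta += lr_g * (1 - d_fake) * w

# theta drifts from 0.0 toward the real data around 4.0: the generator
# has learned to produce samples the discriminator cannot distinguish.
```

The same back-and-forth, scaled up to convolutional networks and millions of images, is what let late-2010s GANs produce convincing faces, landscapes, and abstract compositions.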
Artists and researchers used GANs to create portraits, landscapes, abstract compositions, and other works. The most famous GAN-generated artwork, "Portrait of Edmond de Belamy," was created by the Paris-based collective Obvious using a GAN and sold at Christie's auction house in 2018 (see below).
The current era of AI art began in 2022 with the release of several powerful text-to-image models in rapid succession, among them DALL-E 2, Midjourney, and Stable Diffusion.
AI image generation tools have found applications across multiple areas of visual production.
Game studios, film production companies, and design agencies use AI tools to rapidly generate concept art, mood boards, and visual references during early creative phases. A concept artist might use Midjourney or Stable Diffusion to explore dozens of visual directions in hours rather than days, then select and refine promising directions by hand.
AI image generation has been adopted in graphic design for creating social media content, advertising visuals, website imagery, and product mockups. Adobe integrated AI generation (through its Firefly model) directly into Photoshop and Illustrator, positioning it as a tool that works within existing professional workflows.
Some artists have embraced AI as a creative medium in its own right, using models as collaborators or raw material generators. Others use AI as one tool among many, combining generated elements with traditional techniques, photography, or other digital processes.
The table below compares the major AI image generation platforms available in 2025-2026.
| Tool | Developer | Key strengths | Model type | Access model |
|---|---|---|---|---|
| Midjourney v7 | Midjourney Inc. | Artistic quality, aesthetic consistency, dramatic compositions | Proprietary | Subscription (Discord and web) |
| DALL-E 3 | OpenAI | Prompt adherence, text rendering, spatial accuracy | Proprietary diffusion | API and ChatGPT integration |
| Stable Diffusion 3.5 | Stability AI | Open-source, customizable, local execution, massive community | Open-source diffusion model | Free download, self-hosted |
| Adobe Firefly 3 | Adobe | Commercial safety (trained on licensed data), Creative Cloud integration | Proprietary | Adobe Creative Cloud subscription |
| Flux Pro | Black Forest Labs | Photorealism, accurate lighting and textures | Open-weight | API and self-hosted |
| Google Imagen 3 | Google DeepMind | Photorealism, coherent composition | Proprietary | Google Cloud, Gemini integration |
| Ideogram 2.0 | Ideogram | Text rendering in images, graphic design applications | Proprietary | Web-based subscription |
AI-generated music has become one of the most rapidly evolving areas of creative AI, with significant implications for the music industry.
Suno and Udio are the two leading AI music generation platforms. Suno, which raised $250 million in a Series C funding round in November 2025 (valuing the company at $2.45 billion), generates approximately 7 million songs daily, producing an entire Spotify catalog's worth of music every two weeks [20].
In 2024, the three major record labels (Sony Music, Universal Music Group, and Warner Music Group) filed $500 million lawsuits against both Suno and Udio for copyright infringement, alleging the platforms were trained on copyrighted music without authorization. However, the disputes ultimately led to licensing agreements rather than prolonged litigation.
These settlements mark a significant shift in the AI music landscape, establishing a model where AI music platforms operate under license from rights holders rather than in legal opposition to them [20].
Generative AI music tools have become more normalized in professional songwriting and production in 2025. Suno launched "Suno Studio," described as a "generative audio workstation," which was tested with professional musicians at Suno-led songwriter camps. While AI-generated music remains controversial among musicians, the tools are increasingly used for demos, background music, and creative exploration [20].
Other notable AI music platforms include AIVA (used for film and game scoring), Amper Music, and Beatoven.ai.
AI tools are reshaping film and video production across pre-production, production, and post-production stages.
| Tool / Platform | Developer | Primary use | Key capability |
|---|---|---|---|
| Veo 3.1 | Google DeepMind | Text-to-video generation | Creates detailed, realistic visuals in 8-second shots at up to 1080p resolution |
| Flow | Google | AI filmmaking platform | Physics simulation, cinematic output quality, Gemini model integration |
| LTX Studio | Lightricks | AI-powered filmmaking | End-to-end AI film production from script to rendered video |
| Flawless | Flawless AI | Performance editing | AI-powered lip sync, language dubbing, and performance adjustment |
| Runway Gen-3 Alpha | Runway | Video generation | Text-to-video and image-to-video generation for filmmakers |
Producers who built deliberate AI frameworks in 2024-2025 report running 25-35% leaner pre-production cycles. Studios using AI-driven script breakdown tools, where scene complexity, location requirements, and talent scheduling feed directly into budget models, are compressing pre-production from 16-20 weeks to 8-11 weeks on mid-budget productions [21].
The use of AI in film has generated significant labor concerns, paralleling the gaming industry's debates. The 2023 SAG-AFTRA and WGA strikes both included AI protections as central negotiating issues, and the resulting contracts established consent and compensation requirements for AI use of performers' likenesses and writers' creative work.
AI is being applied across the fashion industry in design, trend forecasting, manufacturing, and retail:
| Fashion AI application | Description | Examples |
|---|---|---|
| Design generation | AI creates original garment designs, pattern variations, and colorway options from text prompts or mood boards | Stitch Fix uses ML for personalized styling; CALA integrates AI design tools |
| Trend forecasting | AI analyzes social media, runway shows, and consumer behavior to predict fashion trends | Heuritech analyzes social media images to forecast trends; Google Trends data integrated into fashion planning |
| Virtual try-on | AI and AR technology allows customers to visualize garments on themselves without physical fitting | Zara's virtual try-on; Amazon's AI-powered size recommendation |
| Supply chain optimization | AI optimizes fabric sourcing, production scheduling, and inventory management | Inditex (Zara parent) uses AI for demand forecasting and inventory allocation |
| Sustainability | AI helps reduce waste through better demand prediction and optimized material usage | AI-driven made-to-order models reduce overproduction |
Architectural design has adopted AI tools for generative design, structural optimization, and sustainability analysis.
Firms like Zaha Hadid Architects and Foster + Partners have integrated AI tools into their design processes, using generative AI for early-stage conceptual exploration and ML models for performance optimization.
In October 2018, "Portrait of Edmond de Belamy," a portrait generated by a GAN and created by the French collective Obvious, became the first AI-generated artwork sold at a major auction house. Christie's sold the work for $432,500, vastly exceeding its initial estimate of $7,000 to $10,000. The sale drew widespread attention to AI art and prompted debate about whether an AI-generated work could be considered genuine art. The portrait, depicting a blurry male figure, was signed with a portion of the GAN's loss function rather than a human name [6].
Turkish-American artist Refik Anadol has become one of the most commercially successful and institutionally recognized AI artists. His work uses machine learning algorithms to process large datasets (such as satellite images, museum collections, or environmental sensor data) into immersive, large-scale data sculptures and visualizations. His piece "Machine Hallucinations, ISS Dreams, A" (2021) sold for $277,200 at Christie's 2025 "Augmented Intelligence" auction, exceeding its $150,000-$200,000 estimate. Anadol's works have been exhibited at the Museum of Modern Art (MoMA) in New York and Sotheby's [7].
Refik Anadol Studio launched DATALAND, an immersive AI art and NFT museum at The Grand LA in downtown Los Angeles, establishing one of the first permanent exhibition spaces dedicated to AI and data-driven art. Anadol's market trajectory saw average price year-over-year growth of +837% between 2024 and 2025, though his market is no longer defined by the speculative frenzy of the 2021 NFT boom but by a structured, value-driven trajectory anchored in institutional recognition [22].
American musician and artist Holly Herndon and her partner Mat Dryhurst have explored the intersection of AI, art, music, and data ethics. Their projects propose new approaches to collaborative creation between humans and AI. In 2024-2025, their installation "The Call" at the Serpentine Galleries in London invited the public to participate in a musical AI training process. Their 2025 exhibition "Starmirror" at KW Institute for Contemporary Art in Berlin transformed the gallery into a training ground where choirs and visitors contributed voices in call-and-response sessions to create a public choral dataset for training a Berlin AI choir. They also created Public Diffusion, a foundation image model trained entirely on public domain data [8].
In February-March 2025, Christie's hosted its first auction dedicated exclusively to AI-generated art at Rockefeller Center in New York. The sale featured over 20 lots by AI art pioneers including Refik Anadol, Alexander Reben, Harold Cohen, and Claire Silver. The auction totaled $728,784, significantly exceeding its $600,000 projection. The event also drew protests, with over 6,500 people signing a petition demanding its cancellation [9].
AI art has gained increasing institutional recognition through major museum and gallery exhibitions:
| Exhibition | Venue | Date | Significance |
|---|---|---|---|
| Harold Cohen: AARON | Whitney Museum of American Art | 2024 | Retrospective placing AARON in context of current AI art debate |
| xhairymutantx (Holly Herndon / Mat Dryhurst) | Whitney Museum of American Art | 2024-2025 | Explored AI, music, and data ethics |
| Augmented Intelligence auction | Christie's, Rockefeller Center | Feb-Mar 2025 | First major auction house sale dedicated to AI art; totaled $728,784 |
| Intelligence Reimagined | ICME 2025 (IEEE) | 2025 | Nearly 200 artists submitted ~500 works; jury selected ~100 across painting, installations, film, music, literature |
| ArtMeta Digital Art Mile | Basel | 2025 | Boutique fair with curated AI art exhibitions and educational programming |
| DATALAND | The Grand LA, Los Angeles | 2025-present | Refik Anadol's permanent immersive AI art and NFT museum |
The relationship between AI art and NFTs (non-fungible tokens) has evolved significantly since the NFT boom of 2021.
During the 2021-2022 NFT bubble, AI-generated art was frequently minted and sold as NFTs on platforms like OpenSea and SuperRare. Some AI artists, including Refik Anadol, achieved significant sales in the NFT market. However, the broader NFT market experienced a dramatic downturn in 2022-2023, with trading volumes declining by over 90% from their peak.
As of 2025-2026, the NFT market has stabilized, with a shift toward quality digital craftsmanship over speculative trading. AI art NFTs now represent a niche but persistent segment of the digital art market, with established artists like Anadol maintaining strong valuations while the speculative excesses of the bubble era have largely subsided [22].
In August 2022, Jason Allen submitted a work titled "Théâtre D'opéra Spatial" to the Colorado State Fair's fine arts competition in the digital arts / digitally-manipulated photography category. The work, created using Midjourney, won first place and a $300 prize. When Allen revealed online that he had used an AI tool to create the image, the win generated intense backlash on social media, with thousands of comments calling it "unfair" to human artists [10].
Allen stated that his process took more than 80 hours and involved creating over 900 iterations through adjustments to his text prompt, followed by cleanup work in Photoshop (including adding a head to a figure that Midjourney had rendered headless). Despite this effort, the U.S. Copyright Office rejected Allen's application to register a copyright on the work, finding that his sole contribution was inputting a text prompt and that the Midjourney image itself was not copyrightable [11].
The Colorado State Fair began requiring participants to disclose the use of AI starting in 2023.
The release of Stable Diffusion in August 2022 and the rapid growth of AI image generation triggered a grassroots protest movement among visual artists. On December 5, 2022, Bulgarian illustrator Alexander Nanitchkov posted the first "No To AI-Generated Images" graphic, which spread rapidly with the hashtag #NoToAIArt across creative platforms. On ArtStation, users replaced their portfolio pieces with Nanitchkov's red "NO" graphic, turning the platform's homepage into a wall of protest imagery [12].
DeviantArt sparked additional outrage when it launched DreamUp, an AI image generator trained on the artwork of its own users without their explicit consent. Following backlash, DeviantArt reversed course and opted all user artwork out of AI training by default.
The artist community remains deeply divided over AI art, with reactions ranging from enthusiastic adoption of AI as a new creative medium to outright rejection of it as a threat to artists' livelihoods and the integrity of their work.
On January 13, 2023, artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class-action copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, alleging that these companies' AI models were trained on billions of copyrighted images scraped from the internet without consent or compensation. By August 2024, a U.S. district court ruled that the artists could proceed with their claims that the image generation systems infringed upon their copyright protections [13].
Similar lawsuits have been filed by Getty Images against Stability AI, and by other groups of artists and photographers. These cases remain in various stages of litigation as of early 2026.
In response to AI scraping, researchers at the University of Chicago developed two protective tools: Glaze, which applies subtle perturbations to images to prevent AI models from mimicking an artist's style [14], and Nightshade, which "poisons" training data so that models trained on protected images learn corrupted associations [15].
However, researchers from the University of Texas at San Antonio, University of Cambridge, and Technical University of Darmstadt developed LightShed in 2025, a system capable of detecting Nightshade-protected images with 99.98% accuracy and bypassing the protections. Glaze 2.1, released in 2025, included updates designed to resist these newer attacks [16].
The legal question of whether AI-generated works can be copyrighted has been addressed most directly in Thaler v. Perlmutter. Stephen Thaler applied to register a copyright on an image titled "A Recent Entrance to Paradise," created entirely by his AI system called the Creativity Machine, listing the AI as the author. The U.S. Copyright Office denied the registration.
In August 2023, U.S. District Judge Beryl Howell upheld the Copyright Office's decision, ruling that copyright requires human authorship. On March 18, 2025, a three-judge panel of the U.S. Court of Appeals for the D.C. Circuit affirmed the lower court's ruling. On March 2, 2026, the U.S. Supreme Court declined to hear the case (denying certiorari), effectively settling the question for now: purely AI-generated works without human authorship cannot receive copyright protection in the United States [17][18].
The U.S. Copyright Office issued guidance in January 2025 clarifying its position on AI-generated works. The Office concluded that text prompts alone are insufficient to establish copyrightable authorship, finding that prompts function as instructions that convey unprotectable ideas rather than creative expression. The Office drew a distinction between AI used as a tool assisting human creativity (where copyright may be available for the human-authored elements) and AI acting as a substitute for human creativity (where copyright is not available) [19].
The question of where exactly the line falls between AI-as-tool and AI-as-author remains unsettled. A pending case, Allen v. Perlmutter, challenges the Copyright Office's refusal to register a work generated through more than 600 iterative prompts, which may produce further guidance on this boundary.
As of early 2026, AI art generation tools have matured considerably. Midjourney v7 continues to lead in artistic quality, while DALL-E 3 offers the best prompt adherence and text rendering. Stable Diffusion remains the most flexible option through its open-source ecosystem. Newer entrants like Flux Pro (from Black Forest Labs, founded by former Stability AI researchers) have pushed photorealism forward, and Adobe Firefly has established itself as the commercially safest option by training exclusively on licensed and public domain content.
The art market is cautiously engaging with AI art. Christie's 2025 "Augmented Intelligence" auction, while financially successful at $728,784, drew significant protest. Institutional exhibitions at museums like the Whitney, MoMA, and Serpentine have given AI art critical attention, though the relationship between the traditional art world and AI remains fraught.
In music, the Suno and Udio licensing agreements with major labels in 2025 established a new framework for AI music generation operating under license rather than in legal opposition to rights holders. Suno's $2.45 billion valuation and 7 million daily song generations underscore the scale of AI music, even as concerns about creative displacement persist.
In film, AI tools are compressing pre-production timelines by 25-35% and enabling new visual effects workflows, though labor concerns and SAG-AFTRA/WGA contract protections continue to shape adoption.
Legally, the Thaler v. Perlmutter ruling (with the Supreme Court declining review in March 2026) has established that purely AI-generated works cannot be copyrighted in the U.S. The class-action lawsuits against Stability AI, Midjourney, and others continue to work through the courts and may ultimately determine whether training AI models on copyrighted images constitutes fair use.
The artist community remains deeply divided. Some have embraced AI as a creative tool, while others view it as an existential threat to visual art as a profession. The development of protective tools like Glaze and Nightshade, and the ongoing countermeasures against them, reflect an ongoing arms race between artists seeking to protect their work and AI developers seeking more training data.