Viggle AI is an artificial intelligence-powered character animation and video generation platform developed by WarpEngine Canada Inc. The platform enables users to animate static images into dynamic videos using text prompts and reference videos. Founded in 2022 by Hang Chu and headquartered in Toronto, Canada, Viggle is built on JST-1, a proprietary video-3D foundation model that incorporates physics-based understanding to produce realistic character movements. The platform launched publicly in March 2024 and quickly went viral, attracting over 4 million users within its first five months and building the second-largest AI-focused community on Discord behind Midjourney. In August 2024, Viggle raised $19 million in Series A funding led by Andreessen Horowitz (a16z).
Viggle is available as a web application, a Discord bot, and as native mobile apps for iOS and Android. The platform is widely used for creating viral memes, character dance videos, and animated content for social media platforms such as TikTok and YouTube.
Viggle AI was founded in 2022 by Hang Chu under the legal entity WarpEngine Canada Inc. Before starting Viggle, Chu had accumulated extensive research experience in computer vision, 3D modeling, and generative AI. He completed his undergraduate studies at Shanghai Jiao Tong University (2009-2013), earned a master's degree in Electrical and Computer Engineering at Cornell University (2013-2015), and completed a PhD in Computer Science at the University of Toronto (2016-2020), where he was advised by Raquel Urtasun and Sanja Fidler [1][2].
During his academic career, Chu conducted research at several major technology companies, including Google (2017), NVIDIA (2018-2019), Facebook (2019), and Autodesk (2020-2022), where he was a principal research scientist in the company's AI Lab. His published research spans topics such as 3D geometry reconstruction, text-to-shape generation, dance generation, and neural avatar systems [1][2].
Chu and his team began developing the JST-1 foundation model during 2022 and 2023, focusing on building a video generation system that could genuinely understand 3D structure and physical properties rather than relying on pixel-level generation alone [3].
On December 12, 2023, the Viggle development team opened access to the platform via a Discord bot, allowing users to interact with the AI through slash commands in Discord channels. This approach followed the model established by Midjourney, which had demonstrated that Discord could serve as an effective distribution platform for generative AI products [4][5].
Viggle launched its official website on January 16, 2024, and the platform became publicly available in March 2024. Within one month, Viggle had attracted over one million users [6][7].
The platform's viral breakthrough came in April 2024 when a user posted a video created with Viggle's /Mix command that replaced rapper Lil Yachty with Joaquin Phoenix's Joker character in footage from the 2021 Summer Smash Festival. The video received over 1,000 reposts and 3,900 likes on X (formerly Twitter) within nine days, spawning an entire meme format in which users replaced Lil Yachty with various characters and public figures [8][9].
By May 2024, Viggle's Discord server had grown to several million members, making it the second-largest AI community on Discord after Midjourney. TechCrunch profiled Viggle in an article about Discord's role as a foundation for the generative AI boom, noting the platform's rapid growth despite having only 15 employees at the time [5].
On August 26, 2024, Viggle announced that it had raised $19 million USD (approximately $26 million CAD) in Series A funding led by Andreessen Horowitz (a16z). Two Small Fish Ventures (TSFV) also participated in the round. The funding was structured as an all-equity, all-primary round, meaning the capital went directly to the company rather than to secondary share sales. Allen Lau, the former co-founder and CEO of Wattpad and an operating partner at Two Small Fish Ventures, joined Viggle as an advisor [3][7][10].
Justine Moore, a partner at a16z who led the investment, stated that "Viggle is driving a major shift in how creators approach character and scene consistency." Allen Lau added that "Viggle represents the future of content and will upend the entertainment industry" [7][10].
At the time of the funding announcement, Viggle reported over 4 million users and more than 4.3 million Discord community members. The company planned to use the funding to expand its team, develop stronger models, add new capabilities for objects and character-object interaction, and improve video quality [3][10].
In September 2024, Viggle released native mobile applications for both iOS and Android, available on the Apple App Store and Google Play Store. The mobile apps featured the signature "upload a picture and dance" function, which allowed users to take a photo and select a dance motion template to animate the image. The iOS app quickly achieved a 4.9-star rating on the App Store [6][11].
Viggle 2.0 launched on April 1, 2024, introducing JST-1, which the company billed as the first video-3D foundation model available to the public. The update brought higher video resolution, improved facial expression portrayal, reduced artifacts during rapid motion, and faster fine-tuning capabilities [12].
Throughout 2025, Viggle rolled out several new features including multi-character interaction, advanced lip-syncing through the Mic feature, the Live mode for real-time character transformation, and an expanded Move mode with a library of over 4,000 motion templates [6][13].
Viggle's core technology is JST-1, which the company describes as "the first video-3D foundation model with actual physics understanding." Unlike most existing AI video generators, which are primarily pixel-based and do not understand the underlying structure of objects in a scene, JST-1 is designed to model 3D geometry and physical properties [3][12].
CEO Hang Chu has explained the technical distinction in the following terms: "We are essentially building a new type of graphics engine, but purely with neural networks. The model itself is quite different from existing video generators, which are mainly pixel based, and don't really understand structure and properties of physics. Our model is designed to have such understanding, and that's why it's been significantly better in terms of controllability and efficiency of generation" [3].
This physics-based approach enables Viggle to maintain character consistency across different poses and motions, produce realistic body movements that obey gravitational and kinematic constraints, and allow fine-grained control over character animation. The model accepts both static images and text prompts as input and generates animated video output [7][12].
Viggle has stated that JST-1 is trained on public data sources, including YouTube videos. This admission generated controversy, as using YouTube content for machine learning training without explicit consent from content creators may violate YouTube's terms of service. The company has not disclosed the full scope of its training dataset [14][15].
The platform employs a skeletal animation approach in which a virtual skeleton is created as a framework for each character. This skeleton serves as the basis for applying motion data, allowing the system to transfer movements from reference videos to arbitrary character images while maintaining anatomically plausible poses [16].
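The sketch below illustrates the skeleton-driven retargeting idea in general terms: joint angles taken from a reference performance drive a character skeleton whose bone lengths stay fixed, so poses remain anatomically plausible. It is a generic forward-kinematics example in Python, not Viggle's implementation, which has not been published.

```python
import math

# Toy 2D "skeleton" as a chain of two bones (e.g. upper arm and forearm).
# Bone lengths come from the character; per-frame joint angles come from
# the reference motion. All values are illustrative.
character_bone_lengths = [0.45, 0.40]          # metres
reference_angles_per_frame = [                 # absolute angles in radians
    [0.00, 0.10],
    [0.20, 0.35],
    [0.40, 0.60],
]

def pose_chain(root, bone_lengths, angles):
    """Forward kinematics: place each joint by walking down the bone chain."""
    joints = [root]
    x, y = root
    for length, angle in zip(bone_lengths, angles):
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        joints.append((x, y))
    return joints

# Retarget: drive the character's bones with the reference motion's angles.
for frame, angles in enumerate(reference_angles_per_frame):
    joints = pose_chain((0.0, 1.4), character_bone_lengths, angles)
    print(f"frame {frame}:", [(round(x, 2), round(y, 2)) for x, y in joints])
```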
Viggle offers several distinct modes for video generation and character animation, accessible through both the web application and mobile apps.
The Mix feature allows users to replace a character in an existing video with a different character from an uploaded image. Users upload a reference video containing a person in motion along with a static image of the character they want to substitute in. The AI analyzes the motion in the reference video and applies it to the uploaded character image, generating a new video in which the replacement character performs the same movements. This was the feature that powered Viggle's initial viral success through the Lil Yachty/Joker meme format [6][8].
Move mode enables users to animate a static character image using motion templates from Viggle's built-in library. Unlike Mix, which requires a reference video, Move provides a catalog of over 4,000 pre-set motion templates covering dances, walks, gestures, and other movements. Users select a template and the platform animates their uploaded character accordingly, preserving the original background. The Move feature can handle up to three characters simultaneously [6][13].
The Animate mode converts text prompts into animated sequences. Users describe the desired movement or action in natural language, and the AI generates a video of a character performing that action. This mode was one of the earliest features available through the Discord bot and has been refined over subsequent updates [6][16].
Stylize mode accepts a character photo and a text description of a visual style as inputs. The platform generates a video that blends the character with the described stylistic elements. For example, a user might upload a portrait and specify "watercolor painting style with flowing brushstrokes," and the AI would render the character in that aesthetic while adding motion [17].
Ideate mode is a fully text-driven feature that does not require any image or video input. Users describe both a character and an action in text form, and the AI generates a complete video based solely on these text prompts. This mode is designed for rapid concept exploration when users do not have specific reference images available [17].
The Mic feature, launched in late 2024, combines voice-driven animation with motion control. Users can make their character speak, sing, or rap with accurate lip synchronization. The feature goes beyond basic lip sync by integrating voice input with full-body motion, allowing characters to talk and move simultaneously. The initial implementation focused on a "Rap" command that became popular for creating meme videos of characters performing rap songs [13][18].
Viggle Live is a real-time character transformation feature that uses a webcam to capture the user's movements and facial expressions, then maps them onto an uploaded character image in real time. This enables use cases such as live streaming as an animated character, role-playing, and interactive content creation. Users can prepare multiple character images and swap between them during a stream, switching from one persona to another at the click of a button [19].
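Conceptually, a live mode of this kind amounts to a capture-estimate-render loop. The sketch below shows that loop in outline, with placeholder functions standing in for webcam capture, pose and expression estimation, and character rendering; none of the names correspond to a published Viggle interface.

```python
import time

characters = ["persona_a.png", "persona_b.png"]   # hypothetical pre-loaded personas
active = 0                                        # index of the persona currently shown

def capture_frame():
    """Stand-in for reading one frame from the webcam."""
    return object()

def estimate_pose_and_expression(frame):
    """Stand-in for body-pose and facial-expression estimation."""
    return {"pose": [], "expression": []}

def render_character(character, driving_signal):
    """Stand-in for re-rendering the character under the captured motion."""
    return f"{character} driven by live motion"

def publish_to_stream(rendered_frame):
    print(rendered_frame)

for _ in range(3):                                # a few iterations instead of an endless loop
    frame = capture_frame()
    signal = estimate_pose_and_expression(frame)
    publish_to_stream(render_character(characters[active], signal))
    time.sleep(1 / 30)                            # target roughly 30 frames per second
```

Switching personas mid-stream corresponds to changing the `active` index, which is what the "click of a button" character swap amounts to in this simplified view.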
The Multi feature allows users to generate videos featuring multiple characters interacting in the same scene, including synchronized dance sequences and conversations. This capability was introduced as part of the 2025 feature updates [6].
| Feature | Input Required | Output | Primary Use Case |
|---|---|---|---|
| Mix | Reference video + character image | Video with character swapped into the reference motion | Memes, character replacement in existing videos |
| Move | Character image + motion template selection | Animated character video with preserved background | Dance videos, motion animation from templates |
| Animate | Text prompt | Character animation video | Text-to-animation, concept visualization |
| Stylize | Character image + style text | Stylized animated video | Artistic video creation |
| Ideate | Text prompt (character + action) | Generated video from text only | Rapid concept exploration without reference images |
| Mic | Character image + voice/text input | Lip-synced character video | Talking/singing/rapping character videos |
| Live | Webcam feed + character image | Real-time character transformation | Live streaming, role-playing |
| Multi | Multiple character images + motion | Multi-character interaction video | Group dance scenes, character interactions |
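The table above also implies a simple input contract per mode. The following sketch encodes that contract as a validation check; the mode names mirror the table, but the structure is an illustrative assumption rather than a published Viggle API.

```python
# Required inputs per mode, taken from the table above.
REQUIRED_INPUTS = {
    "Mix":     {"reference_video", "character_image"},
    "Move":    {"character_image", "motion_template"},
    "Animate": {"text_prompt"},
    "Stylize": {"character_image", "style_text"},
    "Ideate":  {"text_prompt"},
    "Mic":     {"character_image", "voice_or_text"},
    "Live":    {"webcam_feed", "character_image"},
    "Multi":   {"character_images", "motion_template"},
}

def validate(mode: str, provided: set) -> None:
    """Raise if a request is missing any input the chosen mode requires."""
    missing = REQUIRED_INPUTS[mode] - provided
    if missing:
        raise ValueError(f"{mode} request is missing: {sorted(missing)}")

validate("Mix", {"reference_video", "character_image"})    # satisfies the contract
try:
    validate("Move", {"character_image"})                  # no motion template selected
except ValueError as err:
    print(err)                                             # Move request is missing: ['motion_template']
```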
Viggle operates on a credit-based system with a free tier and three paid subscription plans. During the initial open beta period in early 2024, the platform was entirely free to use [20].
| Plan | Monthly Price (USD) | Credits per Month | Daily Relaxed-Mode Videos | Watermark | Storage |
|---|---|---|---|---|---|
| Free | $0 | None (daily quota) | 5 | Yes | 15 days |
| Pro | $4.99 | 80 | ~10 | No | Long-term |
| Live | $9.99 | ~200 | ~25 | No | Long-term |
| Max | $31.99 | 800 | ~80 | No | Long-term |
Different commands consume varying amounts of credits. The Mix command, for example, uses fewer credits than Animate. Monthly subscription credits refresh each billing cycle and do not roll over, while separately purchased top-up credits can be carried over to future months. Paid plans also include watermark removal, 1080p HD exports, priority processing, and access to advanced motion templates [20].
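These rules can be illustrated with simple bookkeeping: subscription credits reset each billing cycle without rolling over, while purchased top-up credits persist. In the sketch below, the per-command costs are hypothetical (the only documented relationship is that Mix costs fewer credits than Animate), and the Pro allowance of 80 credits comes from the pricing table above.

```python
# Hypothetical per-command costs; only the Mix < Animate relationship is documented.
COMMAND_COST = {"Mix": 1, "Animate": 3}

subscription_credits = 80     # Pro plan monthly allowance (from the pricing table)
topup_credits = 50            # separately purchased credits, which carry over

def spend(command: str) -> None:
    """Deduct a command's cost, drawing on subscription credits before top-ups."""
    global subscription_credits, topup_credits
    cost = COMMAND_COST[command]
    from_subscription = min(cost, subscription_credits)
    subscription_credits -= from_subscription
    topup_credits -= cost - from_subscription

for _ in range(30):
    spend("Animate")          # 30 generations at 3 credits each = 90 credits

print(subscription_credits, topup_credits)   # 0 40: subscription exhausted, 40 top-ups remain

# At the next billing cycle, subscription credits refresh and any unused
# subscription credits are lost; top-up credits carry over unchanged.
subscription_credits = 80
```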
Viggle's Discord server grew to become the second-largest AI-focused community on the platform, behind only Midjourney. By August 2024, the server had over 4.3 million members. The community's growth was led by Nan Ha, Viggle's head of product growth, who helped expand the Discord from 500 members to over 4 million [3][5][21].
Discord's developer team worked directly with Viggle to support the startup through its rapid growth phase, providing guidance on scaling the bot infrastructure to handle millions of simultaneous users [5].
Viggle videos became a fixture on TikTok, Instagram Reels, and X (formerly Twitter) throughout 2024 and 2025. The platform's most recognizable format involves taking a photo of a person, celebrity, or fictional character and animating them performing a dance or walk. Trends such as the "Lil Yachty Walks Out on Stage" meme format and various "dance if you can join" challenges drove significant awareness, particularly among Gen Z creators [6][8][9].
The launch of iOS and Android apps in September 2024, with their streamlined "upload a picture and dance" workflow, further cemented Viggle's position as a viral content creation tool. Major social media creators began using Viggle as a standard tool for producing trend-based content [6][11].
Viggle launched a Creator Program to support active community members. The program offers participants a free Pro subscription, 1,000 additional credits (equivalent to approximately 250 minutes of video), and early access to new features before they become available to the general public [7][10].
| Metric | Value | Date |
|---|---|---|
| Users after first month | 1,000,000+ | April 2024 |
| Discord community members | 4,300,000+ | August 2024 |
| Total users/community members | 40,000,000+ | 2025 |
| iOS App Store rating | 4.9 stars | 2024 |
| App availability | iOS, Android, Web, Discord | September 2024 onward |
| Detail | Information |
|---|---|
| Legal name | WarpEngine Canada Inc. |
| Headquarters | Toronto, Canada |
| Founded | 2022 |
| CEO and Co-founder | Hang Chu |
| Employees | Approximately 36-73 (estimates vary by source and date) |
| Total funding | $19 million USD ($26 million CAD) |
| Lead investor | Andreessen Horowitz (a16z) |
| Other investor | Two Small Fish Ventures |
| Advisor | Allen Lau (former Wattpad CEO) |
Viggle operates in the broader AI video generation market but occupies a distinct niche focused on controllable character animation rather than general-purpose video synthesis. CEO Hang Chu has stated that Viggle's "controllable video generation is unique currently," particularly in post-editing and character manipulation phases rather than initial ideation [3].
| Platform | Developer | Primary Focus | Key Differentiator |
|---|---|---|---|
| Viggle AI | WarpEngine Canada Inc. | Character animation and meme creation | Physics-based 3D understanding (JST-1), character consistency |
| Runway | Runway AI | Professional video editing and generation | High visual quality, professional workflows |
| Pika | Pika Labs | Creative video effects and generation | Pikaffects, Scene Ingredients, fast generation |
| Sora | OpenAI | Text-to-video generation | Physics accuracy, long-form video, audio sync |
| Kling | Kuaishou | High-fidelity video generation | Cinema-grade quality, precise Motion Brush |
| Veo | Google DeepMind | Text-to-video generation | Up to 4K output, character consistency |
Viggle's primary competitive strengths are its character animation speed, ease of use for non-technical users, viral social media integration, and the consistency of character identity across different motion sequences. Its limitations relative to competitors include lower overall video fidelity for cinematic use cases and fewer options for scene-level generation without a character focus [22].
Viggle's use of publicly available YouTube videos as training data for the JST-1 model has drawn criticism. The company acknowledged using YouTube content for training purposes, which raised questions about whether the practice complies with YouTube's terms of service and whether content creators had given informed consent. The AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) database documented this as a notable incident [14][15].
As with other AI video generation tools, Viggle's ability to place any person's likeness into any video scenario raises concerns about deepfake creation. The platform can be used to create realistic but fabricated videos of real people, potentially facilitating misinformation, non-consensual content, or identity-based harassment [23].
In February 2026, the Global Network on Extremism and Technology (GNET) published a report titled "An 'Ode to Violence': Extremist Exploitation of Viggle AI," documenting how extremist groups across multiple ideological ecosystems had used the platform to create propaganda videos glorifying violence [24].
Viggle's privacy policy states that uploaded images and assets may be used to train the company's AI models. Reports have indicated that user-uploaded data is retained indefinitely, even after account deletion, and that facial likenesses become part of the platform's training dataset permanently. The company also shares certain data with advertising and analytics partners. Privacy advocates have recommended that users concerned about data retention avoid uploading personal photographs to the platform [23][25].
Viggle has implemented community guidelines that prohibit not-safe-for-work (NSFW) content, restrict the use of political figures in certain contexts, and provide copyright takedown processes. The platform uses a combination of automated filtering and human moderation to enforce these policies, though critics have argued that enforcement is inconsistent given the volume of content generated [3][25].
Viggle serves a range of user segments spanning casual consumers and professional creators.
Social media content creation: The platform's primary use case is generating short-form viral content for TikTok, Instagram Reels, YouTube Shorts, and X. Creators use Viggle to produce dance videos, character animation clips, and memes featuring fictional characters or public figures [6][8].
Animation pre-production: Professional animators and VFX artists use Viggle to quickly prototype character animations, test movement sequences, and visualize scenes before committing to full production. The platform reduces the time required for pre-production ideation from days to minutes [7][10].
Game design: Game developers use Viggle to generate placeholder character animations and test gameplay concepts during early development phases [7].
Live streaming: With the Viggle Live feature, streamers can broadcast as animated characters in real time, enabling virtual influencer content and character-based entertainment [19].
Marketing and advertising: Brands have experimented with Viggle for creating character-driven promotional content for social media campaigns [6].
Despite its strengths in character animation, Viggle has several known limitations as of early 2026.
Video quality for photorealistic and cinematic content remains below that of competitors such as Runway and Sora. The platform sometimes produces visual artifacts during rapid motion sequences, particularly around hands and facial details. Output resolution and duration are limited compared to higher-end AI video generators. The platform is primarily optimized for single-character animation, and multi-character interactions, while supported, can produce inconsistent results. Processing times average approximately five minutes per generation, which can be affected by server load during peak usage periods [6][16][23].