Poe (short for Platform for Open Exploration) is an AI chatbot aggregator platform developed and operated by Quora. Poe provides access to multiple large language models (LLMs) and generative AI models from different providers through a single unified interface. Rather than competing directly as a model developer, Poe positions itself as a hub where users can interact with models from OpenAI, Anthropic, Google, Meta, and other AI companies without needing separate accounts or subscriptions for each.
Poe was first announced in December 2022 and launched publicly in February 2023. The platform is available on the web at poe.com, as well as through native apps for iOS, Android, macOS, and Windows. As of 2026, Poe hosts over one million custom bots and provides access to hundreds of official AI models spanning text generation, image generation, video generation, and audio synthesis.
The platform is led by Adam D'Angelo, co-founder and CEO of Quora, who has described Poe as "the web browser of AI," offering a one-stop shop for users to explore an increasingly diverse set of AI services.
Quora, the question-and-answer website co-founded by Adam D'Angelo and Charlie Cheever in 2009, began developing Poe in late 2022. D'Angelo, who previously served as Facebook's first Chief Technology Officer before leaving to start Quora, saw an opportunity to build a platform that would aggregate AI models rather than develop proprietary ones.
Poe was initially released on iOS on December 21, 2022, as an invite-only text messaging app. After a period of limited testing, Quora opened Poe to the general public on February 4, 2023. A desktop web version followed in March 2023, expanding the platform's reach beyond mobile users.
In January 2024, Quora raised $75 million from Andreessen Horowitz (a16z), the company's first outside funding in nearly seven years. The round valued Quora at approximately $500 million, a notable decrease from its $1.8 billion valuation during its previous funding round in 2017. Adam D'Angelo attributed the lower valuation to broader market conditions, including rising interest rates and a higher cost of capital in the venture capital market. The majority of the new funding was earmarked for expanding Poe and paying creators through the platform's monetization program.
By September 2024, Poe had reached 31.5 million monthly visitors and ranked approximately #1,529 among websites globally.
Key milestones in the platform's development:

| Date | Event |
|---|---|
| December 21, 2022 | Poe launches on iOS as invite-only beta |
| February 4, 2023 | Public launch on iOS |
| March 2023 | Desktop web version released |
| October 2023 | Creator monetization program introduced |
| January 2024 | Quora raises $75 million from Andreessen Horowitz |
| April 2024 | Price-per-message monetization model added |
| September 2024 | Platform reaches 31.5 million monthly visitors |
| February 2025 | Poe Apps (Canvas Apps) launched for custom app creation |
| March 2025 | $4.99/month entry-level subscription tier introduced |
| July 2025 | Poe API v2 released with OpenAI-compatible interface |
| November 2025 | Group chat feature launched, supporting up to 200 users |
Poe's core value proposition is giving users access to AI models from multiple providers through one interface. Users can switch between different models within the same session, start separate conversations with different bots, or compare responses side by side. The platform eliminates the need for separate subscriptions to services like ChatGPT, Claude, or Gemini.
As of early 2026, Poe provides access to models across several modalities:
| Category | Providers and Models |
|---|---|
| Text Generation | OpenAI (GPT-4o, GPT-4.1, GPT-5), Anthropic (Claude Sonnet, Claude Opus), Google (Gemini Pro, Gemini Flash), Meta (Llama 4), DeepSeek (R1, V3), Mistral AI |
| Image Generation | DALL-E 3, FLUX Pro and Dev, Stable Diffusion 3, Ideogram, Playground |
| Video Generation | Runway, Veo 2, Kling, Hailuo, Pika, Hunyuan |
| Audio Generation | Various text-to-speech and music generation models |
| Reasoning Models | OpenAI o-series (o1, o3), DeepSeek R1 |
Poe allows users to invite multiple bots into the same conversation thread and view their replies in parallel. This feature is useful for comparing how different models respond to the same prompt, enabling side-by-side evaluation of quality, tone, and accuracy. Users can submit a single prompt and immediately see responses from several models, streamlining research, brainstorming, and decision-making workflows.
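The fan-out pattern behind this comparison feature can be sketched in a few lines. The `query_model` stub below is purely illustrative and stands in for a real model call; it is not Poe's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def query_model(model: str, prompt: str) -> str:
    # Illustrative stub standing in for a real API call; a production
    # version would route the prompt to the named provider.
    return f"[{model}] response to: {prompt}"

def compare(models: list[str], prompt: str) -> dict[str, str]:
    # Send one prompt to every model in parallel and collect the
    # replies keyed by model name, mirroring a multi-bot chat.
    with ThreadPoolExecutor() as pool:
        futures = {m: pool.submit(query_model, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}

replies = compare(["GPT-4o", "Claude-Sonnet", "Gemini-Pro"], "Summarize RAG.")
for model, reply in replies.items():
    print(model, "->", reply)
```

Running the same prompt through all models concurrently rather than sequentially is what makes side-by-side comparison feel instantaneous.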
In November 2025, Poe introduced a group chat feature that allows up to 200 human users to collaborate within a single conversation thread alongside more than 200 AI models. These group chats support text, image, video, and audio generation models, making them suitable for team-based creative projects and collaborative workflows. The member who triggers an AI model response uses their own compute points, while human-to-human messages do not consume points.
Poe supports a knowledge base feature that provides lightweight retrieval augmented generation (RAG). Users can upload documents and files that the AI models can reference when generating responses. This allows bots to ground their answers in specific uploaded data rather than relying solely on their training data.
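The retrieval step of such a feature can be illustrated with a toy keyword matcher. This is a simplified stand-in for how RAG grounding works in general, not Poe's actual retrieval implementation.

```python
def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    # Score each uploaded chunk by how many query words it contains,
    # then keep the top-k matches (a toy stand-in for real retrieval).
    words = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: -len(words & set(c.lower().split())))
    return scored[:k]

def build_prompt(chunks: list[str], question: str) -> str:
    # Ground the model's answer by prepending the retrieved text
    # to the user's question.
    context = "\n".join(retrieve(chunks, question))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

docs = ["Poe launched in February 2023.", "Compute points meter model usage."]
print(build_prompt(docs, "When did Poe launch?"))
```

A real system would use embedding-based similarity rather than keyword overlap, but the shape is the same: retrieve relevant chunks, then inject them into the prompt.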
Several bots on Poe include real-time web search capabilities, allowing them to access current information beyond their training data cutoff. This feature is built into specific model configurations and can be incorporated into custom bots as well.
All conversations and chat histories are synced across devices automatically. Users can start a conversation on a mobile device and continue it on desktop, or vice versa, without losing context.
One of Poe's distinguishing features is its bot creation platform, which allows both non-technical users and developers to build custom bots. As of 2026, over one million custom bots have been created on the platform.
Prompt bots are the simplest type of bot on Poe. They do not require any coding and are configured entirely through the Poe web interface using natural language instructions. A prompt bot is built on top of an existing model (such as GPT-4, Claude, or Llama) and uses a system prompt to customize the model's behavior for a specific use case. For example, a creator might build a prompt bot that instructs Claude to act as a recipe advisor, a language tutor, or a coding assistant.
Prompt bots are well suited for straightforward customizations where the creator wants to modify a model's personality, restrict its topic area, or provide it with specific instructions. Their main limitation is that they lack dynamic functionality and cannot interact with external systems.
Server bots are more advanced creations that run custom code on an external server. They implement the Poe Protocol, which defines how the bot communicates with the Poe platform. Server bots can call other bots on Poe for free, access external APIs, perform computations, query databases, and execute arbitrary logic in response to user messages.
Poe provides a Python SDK and quickstart templates to help developers build server bots. The server bot architecture allows for complex workflows, such as chaining multiple model calls, routing messages to different models based on content, or integrating with third-party services.
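At the wire level, a server bot replies to Poe with a stream of server-sent events. The sketch below formats a reply in that style; the event names and payload shapes are assumptions based on the public protocol documentation and should be checked against the current specification before use.

```python
import json

def sse_event(name: str, payload: dict) -> str:
    # One server-sent event: an "event:" line, a "data:" line carrying
    # a JSON payload, and a blank line terminating the event.
    return f"event: {name}\ndata: {json.dumps(payload)}\n\n"

def bot_response(text_parts: list[str]) -> str:
    # Stream the reply as incremental "text" events followed by a
    # terminating "done" event (simplified Poe-Protocol-style framing).
    events = [sse_event("text", {"text": part}) for part in text_parts]
    events.append(sse_event("done", {}))
    return "".join(events)

print(bot_response(["Hello, ", "world!"]))
```

In practice the official Python SDK handles this framing, so bot authors implement a response handler rather than emitting raw events.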
In February 2025, Poe introduced Canvas Apps (also called Poe Apps), which allow users to build interactive web applications with custom visual interfaces directly on the platform. Unlike traditional chatbots that operate in a text-based conversation format, Canvas Apps provide graphical user interfaces for games, tools, calculators, visualizations, and other interactive experiences.
Canvas Apps can be created without coding using the App Creator bot, which generates functional applications from natural language descriptions. These apps can interact with any of Poe's AI models and support features like file downloads, external links, and image uploads. The App Creator uses Claude 3.7 Sonnet with extended thinking capabilities to produce higher-quality applications.
Creators can share their Canvas Apps with other users on the platform, and the apps display monthly user counts for analytics purposes.
Poe uses a credit system called "compute points" to manage model usage across its platform. Different models consume points at different rates based on the underlying model's cost and complexity. More powerful models use more points per message, and models generally become cheaper as newer versions are released.
As of March 2025, Poe offers the following subscription plans:
| Plan | Price | Points Allocation | Key Features |
|---|---|---|---|
| Free | $0 | 3,000 points per day | Access to basic models (GPT-3.5, Claude Instant), limited daily usage |
| Basic | $4.99/month | Higher daily allocation | Access to more models, increased daily limits |
| Standard | $19.99/month (or $199.99/year) | 1,000,000 points per month | Full access to premium models (GPT-4, Claude Opus), higher daily caps |
| Premium | $249.99/month | 12,500,000 points per month | Highest limits for expensive models like o1-pro, GPT-4.5, Veo 2 |
The introduction of the $4.99/month tier in March 2025 was a direct response to two trends in the AI market: standard models becoming cheaper while the most advanced models grew more expensive. Previously, the least expensive paid plan was $19.99 per month.
The number of points consumed per message varies significantly across models. As a general guide (costs fluctuate as models are updated):
| Model Tier | Examples | Approximate Points Per Message |
|---|---|---|
| Budget | GPT-5-nano, Gemini 2.0 Flash, GPT-4o-mini | 6 to 11 points |
| Mid-range | Claude Haiku 4.5, Gemini 1.5 Pro, GPT-4o | 60 to 230 points |
| Premium | Claude Sonnet 4.5, GPT-5.2 Pro, Gemini 3 Pro | 300 to 800 points |
| Ultra | Claude Opus 4.5 | 4,000 to 5,000+ points |
Free users receive 3,000 points daily, enough for many budget-tier model interactions but quickly exhausted when using premium models. Unused points do not roll over between periods. Subscribers can also purchase additional points starting at $30 per 1 million points.
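Using illustrative per-message costs in line with the tiers above, a free user's daily budget works out roughly as follows (actual costs vary by model and change over time):

```python
DAILY_FREE_POINTS = 3_000

# Illustrative per-message costs consistent with the tiers above;
# real point costs fluctuate as models are updated.
COST_PER_MESSAGE = {
    "budget": 10,
    "mid_range": 150,
    "premium": 500,
    "ultra": 4_500,
}

def messages_per_day(tier: str, daily_points: int = DAILY_FREE_POINTS) -> int:
    # How many messages of a given tier fit into the daily allowance.
    return daily_points // COST_PER_MESSAGE[tier]

for tier in COST_PER_MESSAGE:
    print(f"{tier}: ~{messages_per_day(tier)} messages/day")
```

At these assumed rates, a free user gets hundreds of budget-tier messages per day but only a handful of premium ones, and a single ultra-tier message exceeds the daily allowance entirely.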
Poe launched its creator monetization program in October 2023, making it one of the first AI platforms to directly compensate bot creators. The program is designed to incentivize the development of high-quality bots on the platform.
Creators can earn revenue through two mechanisms:
Subscription Revenue Sharing: Creators earn up to $20 for each new user who subscribes to Poe as a result of interacting with their bots. This model rewards creators whose bots attract new paying users to the platform.
Price Per Message: Introduced in April 2024, this model allows creators to set a specific price for each message sent to their bots. Creators generate revenue every time a user sends a message, providing a more direct and ongoing revenue stream. Creators can configure pricing either through the Poe web interface or programmatically from within their bot's code.
Poe provides creators with an analytics dashboard that displays earnings across paywalls, subscriptions, and messages, with insights updated daily. The creator platform at creator.poe.com offers documentation, templates, and tools for building and managing bots.
As of early 2025, the monetization program is available to U.S.-based bot creators, with plans for global expansion.
Poe offers an API that provides developers with programmatic access to all models and bots on the platform.
Released in July 2025, the Poe API v2 provides an OpenAI-compatible chat completion interface, meaning developers already familiar with the OpenAI API format can use Poe's API with minimal changes to their code.
Subscribers can use their existing subscription points with the API at no extra cost. API keys can be generated at poe.com/api_key, and full documentation is available at creator.poe.com/docs/api.
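A request to an OpenAI-compatible endpoint has a well-known shape, sketched below. The endpoint URL and model name here are assumptions drawn from that convention, not confirmed values; verify both against the documentation at creator.poe.com/docs/api.

```python
import json

# Assumed endpoint following the OpenAI-compatible convention the
# API v2 uses; check creator.poe.com/docs/api for the actual URL.
POE_API_URL = "https://api.poe.com/v1/chat/completions"

def build_chat_request(api_key: str, model: str, prompt: str) -> tuple[dict, bytes]:
    # Assemble headers and a JSON body in the OpenAI chat-completion
    # format; actually sending it (e.g. with urllib) is left to the caller.
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return headers, body

headers, body = build_chat_request("YOUR_KEY", "Claude-Sonnet-4.5", "Hello")
print(headers["Authorization"], json.loads(body)["model"])
```

Because the payload matches the OpenAI format, existing OpenAI client libraries can typically be pointed at the Poe endpoint by changing only the base URL and API key.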
The Poe Protocol is a separate specification that defines how server bots communicate with the Poe platform. It allows developers to deploy bots on their own infrastructure that integrate with Poe's ecosystem. Bots built using the Poe Protocol can call other bots on the platform for free, which allows developers to build complex multi-model workflows without incurring additional API costs.
Poe publishes periodic reports on AI model usage trends observed across its platform. Because Poe aggregates usage data across many different model providers, these reports offer a unique cross-provider perspective on the AI market.
According to Poe's early 2025 ecosystem report, OpenAI and Anthropic collectively commanded approximately 85% of text model message share on the platform. Claude 3.5 Sonnet, launched in June 2024, achieved near-parity with OpenAI's offerings. DeepSeek R1 and V3 surged from 0% usage in December 2024 to a 7% peak share, described by Poe as "a significantly higher level than any previous open-source model family." Google's Gemini saw declining usage after October 2024. The data also showed that newer flagship models quickly cannibalize older versions from the same provider.
The FLUX family of models dominated image generation with close to 40% of messages, while Google's Imagen 3 captured nearly 30% share since its late 2024 launch. DALL-E 3 and Stable Diffusion versions experienced a roughly 80% usage decline as users migrated to newer alternatives. The number of available image generation models on Poe expanded from 3 to approximately 25.
Runway maintained a 30 to 50% market share in video generation despite offering a single model. Google's Veo 2 rapidly captured nearly 40% of total video generation messages within a few weeks of launch. Chinese models, including Kling, Hailuo, Hunyuan, and Wan, collectively accounted for approximately 15% of share.
Based on traffic data from September 2024 (when Poe recorded 31.5 million monthly visitors):
| Region | Share of Traffic |
|---|---|
| Hong Kong | 22.05% |
| United States | 13.2% |
| China | 10.78% |
| Vietnam | 7.07% |
| Saudi Arabia | 2.83% |
| Other | 44.07% |
The high proportion of traffic from Hong Kong and China reflects Poe's popularity in regions where direct access to services like ChatGPT may be restricted or unavailable.
The platform's age demographics break down as follows:

| Age Group | Share |
|---|---|
| 25 to 34 | 31.99% |
| 18 to 24 | 23.19% |
| 35 to 44 | 20.03% |
| 45 to 54 | 13.06% |
| 55 to 64 | 7.22% |
| 65+ | 4.51% |
The platform's user base skews male (58.41%) and younger, with over 55% of users between the ages of 18 and 34.
Traffic sources break down as follows:

| Source | Share |
|---|---|
| Direct traffic | 73.19% |
| Organic search | 24.14% |
| Referrals | 1.48% |
| Social media | 1.15% |
The dominance of direct traffic suggests strong brand recognition among users who navigate directly to poe.com rather than finding it through search engines.
Poe is available across all major platforms:
| Platform | Details |
|---|---|
| Web | poe.com, accessible from any modern browser |
| iOS | Available on the App Store since December 2022 |
| Android | Available on Google Play |
| macOS | Native desktop application |
| Windows | Native desktop application |
Conversations sync across all platforms automatically, allowing users to switch between devices without interruption.
Adam D'Angelo's involvement with both Poe and OpenAI has been a subject of public discussion. D'Angelo joined the OpenAI board of directors in 2018. In November 2023, he was one of four board members who voted to remove Sam Altman from his role as CEO of OpenAI. The board stated that Altman had not been "consistently candid in his communications with the board."
The decision drew attention to a potential conflict of interest, given that Poe competes with ChatGPT and OpenAI's custom GPTs marketplace. When Altman was reinstated as CEO after a brief period of turmoil, the OpenAI board was overhauled, but D'Angelo remained as one of the surviving board members.
D'Angelo has stated that he does not view OpenAI as a direct competitor to Poe, framing the relationship as one where Poe is a distribution platform for OpenAI's models rather than a rival. In a May 2024 interview with TechCrunch, D'Angelo discussed this distinction, noting that Poe's role is to aggregate and distribute AI models rather than to build its own.
Poe operates in a competitive market with several different types of rivals:
The most prominent competitors are the first-party chat interfaces offered by the model providers themselves. ChatGPT from OpenAI, Claude from Anthropic, and Gemini from Google each provide direct access to their respective models. These services often receive new features and model updates before they become available on third-party platforms like Poe.
Other platforms have adopted similar multi-model aggregation approaches. OpenRouter provides API access to models from multiple providers with a unified billing system. TypingMind offers a client interface for accessing various AI models. Platforms like Perplexity AI combine multiple models with web search capabilities.
For developers, services like Fireworks AI, Together AI, and Hugging Face Inference API provide programmatic access to open-source and commercial models. These platforms compete with Poe's API offering, though they tend to focus more on developers than general consumers.
Poe differentiates itself through its combination of consumer accessibility, bot creation tools, creator monetization, and multi-provider model access. While individual model providers may offer deeper integration with their specific models, Poe provides breadth of choice and the ability to compare models within a single platform.
Despite its broad model access, Poe has notable limitations; as a third-party platform, it typically receives new features and model updates after the providers' own first-party interfaces.
Poe operates as an intermediary layer between users and AI model providers. When a user sends a message to a bot on Poe, the platform routes the request to the appropriate model provider's API, processes the response, and delivers it back to the user. For server bots, Poe forwards user messages to the creator's server via webhooks, and the server responds using the Poe Protocol.
The platform uses a point-based billing system that abstracts away the varying costs of different model providers. Poe negotiates API access and pricing with each provider, then translates those costs into a unified points system for users. This allows users to switch between models from different providers without managing separate billing relationships.
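The routing-plus-metering layer this paragraph describes can be sketched as a table lookup. The provider names and point costs below are illustrative assumptions, not Poe's actual routing table or prices.

```python
# Illustrative routing table: bot name -> (provider, points per message).
ROUTES = {
    "GPT-4o": ("openai", 230),
    "Claude-Sonnet": ("anthropic", 300),
    "Gemini-Flash": ("google", 10),
}

def route_message(bot: str, balance: int) -> tuple[str, int]:
    # Look up the provider for a bot, deduct its point cost from the
    # user's balance, and return (provider, remaining balance).
    provider, cost = ROUTES[bot]
    if cost > balance:
        raise ValueError("insufficient compute points")
    return provider, balance - cost

provider, remaining = route_message("Gemini-Flash", 3_000)
print(provider, remaining)  # google 2990
```

The key design point is that the user sees a single point balance while the platform handles per-provider billing behind the lookup.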