AI companions are conversational AI systems built primarily for ongoing emotional, social, or romantic interaction with a single user, rather than for completing discrete tasks. They overlap with general-purpose chatbots and assistants such as ChatGPT or Claude, but the design priorities are different: companion products are tuned for personality consistency, long-term memory of the user, and emotionally engaging conversation, often with an avatar, voice, or persona attached. Major examples include Replika, Character.AI, Xiaoice, Pi, Nomi, and Chai.
By 2025 these products had become a substantial commercial category. A July 2025 report from Common Sense Media found that 72% of U.S. teens aged 13 to 17 had used an AI companion at least once, and 52% used one at least a few times a month [1]. Sensor Tower data cited by TechCrunch showed AI companion apps had been downloaded roughly 220 million times globally as of mid-2025, with installs up 88% year over year [2]. The category has also drawn lawsuits, regulatory action in Italy and California, and academic debate about whether ongoing relationships with large language models help or harm users.
There is no single agreed definition. The Transparency Coalition, a U.S. AI policy group, describes a companion chatbot as one designed to respond "according to the personality of their particular character" and to develop "a personal ongoing (and sometimes deeply emotional) relationship with the user" [3]. California's SB 243, signed in October 2025, defines a "companion chatbot" by function: an AI system whose interactions "are designed to meet the social needs of the user", explicitly distinguishing them from customer-service bots and productivity assistants [4].
In practice the category includes:

- dedicated single-persona companions such as Replika, Nomi, and Kindroid;
- character platforms such as Character.AI and Chai, where users create and share bots;
- empathetic general-purpose personas such as Pi and Xiaoice.
General-purpose assistants like ChatGPT, Gemini, Claude, and Copilot are not usually classified as companions, though Common Sense Media's 2025 survey treated them as such because many teens use them for emotional support [1]. The line is blurry: a tool can become a companion when users adopt it that way.
The direct ancestor of every AI companion is ELIZA, a pattern-matching program written by Joseph Weizenbaum at MIT between 1964 and 1966 [5]. Its best-known script, DOCTOR, simulated a Rogerian psychotherapist by reflecting users' statements back as questions. Weizenbaum did not intend ELIZA as a serious model of mind, but users responded to it emotionally. He later wrote that he had not realized "extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people", and the reaction pushed him toward becoming a critic of artificial intelligence research [5].
In 1988 the British programmer Rollo Carpenter began work on Jabberwacky, a chatbot trained from user conversations rather than scripted rules. He put it online in 1997 and rebranded a related project as Cleverbot in 2008. At the 2011 Techniche festival at IIT Guwahati, Cleverbot was judged 59.3% human in a public Turing-style test [6].
A mass-market consumer chatbot arrived with SmarterChild, launched on AOL Instant Messenger in 2001 by the New York startup ActiveBuddy. SmarterChild combined human-curated responses with information lookups for weather, news, and stock prices. By the company's own account it grew to about 30 million users on AIM, MSN, and Yahoo Messenger before ActiveBuddy was renamed Colloquis and acquired by Microsoft in 2006 [7].
Microsoft's Xiaoice ('Little Ice'), launched in May 2014 by Microsoft Research Asia, was the first companion to fuse chitchat, emotional modeling, and a recognizable personality at scale. It debuted on the Chinese social network Weibo and expanded onto WeChat, QQ, JD.com, and connected speakers. Microsoft reported in 2018 that Xiaoice had around 660 million users across China, Japan, and Indonesia [8]. The system was spun out as an independent company in July 2020 after Microsoft sold its majority stake to a consortium led by former Microsoft executive Harry Shum [9].
Replika is the product that turned "AI companion" into a recognizable consumer category in the West. It was founded by Eugenia Kuyda, a Russian-American journalist and entrepreneur whose startup Luka had previously built a restaurant-recommendation chatbot. After her close friend Roman Mazurenko was struck and killed by a car in late 2015, Kuyda fed his text messages and emails into a neural network and built a chatbot that responded in his voice [10]. The story, told in The Verge in 2016, attracted enough interest that Kuyda turned the underlying technology into a general companion app.
Replika opened a waitlist in March 2017 and launched publicly in November 2017. According to Harvard Business School case material and the company's own reports, it reached two million users by January 2018 and ten million by January 2023 [11][10]. By mid-2024 the company said the cumulative user count had passed 30 million [12].
Replika is sold as a freemium app. The free tier offers a friend-style relationship; the Pro plan, $19.99 per month or $69.99 per year, unlocks the "romantic partner" relationship status, voice calls, AR avatars, and additional roleplay [13]. A higher Ultra tier adds extended memory and more capable models.
The arrival of capable large language models made companions cheaper and more flexible to build. Character.AI was incorporated in November 2021 by Noam Shazeer and Daniel De Freitas, two former Google engineers. Shazeer was a co-author of "Attention Is All You Need", the 2017 paper that introduced the transformer architecture; De Freitas had led work on Google's LaMDA dialogue model. The Character.AI beta opened to the public on 16 September 2022, weeks before ChatGPT [14].
Unlike Replika, Character.AI is built around user-created "characters". Anyone can write a short prompt and personality description and publish a bot; popular characters range from anime protagonists and historical figures to language tutors and therapists. According to Business of Apps and Sacra, Character.AI peaked at around 28 million monthly active users in mid-2024 before settling near 20 million in early 2025 [15]. Users averaged about 75 minutes a day on the app at the peak.
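A user-created character of this kind is, at bottom, a small data record that gets flattened into the system prompt on every turn. The sketch below illustrates the idea; the field names and rendering format are invented for illustration, not Character.AI's actual schema:

```python
# Hypothetical character card, loosely modeled on the "short prompt plus
# personality description" pattern; every field name here is invented.
card = {
    "name": "Ada",
    "description": "A patient Victorian-era tutor who loves analytical engines.",
    "greeting": "Hello! Shall we talk about mathematics?",
    "example_dialogue": [
        ("What is an algorithm?", "A recipe precise enough for a machine to follow."),
    ],
}

def render_system_prompt(card: dict) -> str:
    """Flatten a character card into the system prompt sent with every turn."""
    lines = [f"You are {card['name']}. {card['description']}"]
    # Example dialogue steers the model's tone by showing, not telling.
    for user_line, bot_line in card["example_dialogue"]:
        lines.append(f"User: {user_line}")
        lines.append(f"{card['name']}: {bot_line}")
    return "\n".join(lines)
```

Because the card is just text, publishing a bot costs the platform nothing beyond moderation, which is part of why catalogs of user-created characters grew so quickly.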
In August 2024, Google paid Character.AI roughly $2.7 billion in a deal that licensed its technology and brought Shazeer and De Freitas back to Google's DeepMind unit, an arrangement the U.S. Department of Justice later examined for potential antitrust implications [16]. Character.AI continued to operate with a new chief executive and reduced ambitions for training its own frontier models.
Inflection AI, founded in 2022 by Mustafa Suleyman, Reid Hoffman, and Karén Simonyan, launched its companion Pi ('Personal Intelligence') on 2 May 2023. Inflection positioned Pi as an empathetic everyday assistant rather than a roleplay or romantic product, and raised $1.3 billion in June 2023 at a $4 billion valuation [17]. Pi never matched Character.AI's user numbers and, in March 2024, Microsoft hired Suleyman, Simonyan, and most of Inflection's roughly 70 employees in a non-acquisition that paid Inflection investors about $650 million for a license to its models [18]. Pi remained available afterward but Inflection imposed usage caps in August 2024 and refocused on enterprise customers [19].
A second wave of companion products arrived in 2023 and 2024, most of them small startups using fine-tuned open-source models or third-party APIs. Notable examples include Nomi, launched in 2023 by Glimpse.AI (founder Alex Cardinell), which markets a three-tier memory system and configurable personalities [20]; Kindroid, founded in Los Angeles in 2023 by Jerry Meng, with text, voice, video calling, and AI 'selfies' [21]; and Chai, founded in Cambridge in 2021 by William Beauchamp and based on the open-source GPT-J model, which reported more than ten million downloads and one million daily active users by 2025 [22].
| Product | Company | Launched | Country | Notable features | Notes |
|---|---|---|---|---|---|
| Xiaoice | Microsoft (now independent) | May 2014 | China | Empathetic chitchat, voice, multimodal personas | ~660 M users reported in 2018; spun out July 2020 |
| Replika | Luka, Inc. | November 2017 | USA | Single customizable persona, voice, AR avatar | 30 M+ users reported 2024; Pro adds romantic mode |
| Character.AI | Character Technologies | September 2022 (beta) | USA | User-created characters, group chats | Founders rehired by Google August 2024 |
| Pi | Inflection AI | May 2023 | USA | Empathetic single persona, voice | Most staff moved to Microsoft March 2024 |
| Chai | Chai Research | 2021 | USA / UK | Mobile-first character platform on GPT-J | 1 M DAU and 10 M downloads reported 2025 |
| Nomi | Glimpse.AI | 2023 | USA | Three-layer memory, group chats, romantic and platonic modes | Bootstrapped, no venture capital |
| Kindroid | Kindroid Inc. | 2023 | USA | Text, voice, video calls, AI selfies | ~300 K users reported 2024 |
Modern companion products are built on top of large language models, almost always with additional engineering around a base model. Common building blocks include:

- a base model, either a fine-tuned open-source model or a third-party API;
- a persona or character definition injected into the prompt on every turn;
- a long-term memory system that stores facts about the user across sessions;
- voice, avatar, and image-generation layers;
- safety filters and content moderation.
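The prompt-assembly core of such a product can be sketched in a few lines of Python. This is a toy under stated assumptions, not any real product's implementation; every name below is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Companion:
    """Toy sketch of the prompt-assembly layer of a companion app."""
    persona: str                                 # character definition / system prompt
    facts: list = field(default_factory=list)    # long-term memory of the user
    history: list = field(default_factory=list)  # recent conversation turns
    window: int = 6                              # recent turns kept in the prompt

    def remember(self, fact: str) -> None:
        # Real products extract durable facts ("user's dog is named Rex")
        # with a separate model call; here we just store strings verbatim.
        self.facts.append(fact)

    def build_prompt(self, user_message: str) -> str:
        recent = self.history[-self.window:]
        return "\n".join(
            [f"SYSTEM: {self.persona}"]
            + [f"MEMORY: {f}" for f in self.facts]
            + recent
            + [f"USER: {user_message}", "ASSISTANT:"]
        )

    def reply(self, user_message: str, llm) -> str:
        # `llm` is any callable prompt -> text; a real app would call a
        # hosted or fine-tuned model here.
        text = llm(self.build_prompt(user_message))
        self.history += [f"USER: {user_message}", f"ASSISTANT: {text}"]
        return text
```

A real product swaps `llm` for a model call and adds summarization so memory and history do not grow without bound; the persistence of `facts` across sessions is what makes the companion feel like it knows the user.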
Research and reporting suggest several distinct use patterns.
Loneliness and emotional support is the most commonly cited reason. The 2024 Stanford study by Bethanie Maples and colleagues, published in npj Mental Health Research, surveyed 1,006 student users of Replika and found that they were lonelier than typical student populations but reported high perceived social support from the app [23]. Common Sense Media's 2025 teen survey reported that 33% of teen companion users had "discussed serious and important issues" with a companion instead of a person, and 12% said they shared things they would not tell friends or family [1].
Roleplay and creative writing drives much of Character.AI's traffic. Users build or join long-running narrative scenarios, often featuring characters from existing fiction or original worlds. The same dynamic exists on Chai and Nomi.
Romantic and sexual relationships are a major driver of paid subscriptions on Replika and several smaller apps. The 2024 sociological study by Hanson and Bolthouse in Socius analyzed the Replika subreddit and found that erotic roleplay (ERP) was treated by many users not as a side feature but as the central reason they paid [24].
Language practice and tutoring is a smaller but growing use case. Character.AI hosts many user-created language tutor bots, and Xiaoice has been used for English practice in China.
Mental wellness and journaling overlaps with the emotional-support category. Companion apps including Replika and Pi have positioned themselves as wellness tools rather than therapy, partly to avoid medical-device regulation.
The academic literature is small but growing. The 2024 Maples et al. study reported that 30 of the 1,006 Replika users surveyed (about 3%) volunteered, without being prompted, that the chatbot had stopped them from attempting suicide [23]. The authors framed this as a notable but methodologically limited finding, and a 2024 reply published in the same journal questioned the inferential reach of the result [25]. A 2025 study by OpenAI and the MIT Media Lab of ChatGPT users (not strictly a companion) found that heavier emotional reliance on the chatbot was associated with higher loneliness and lower socialization with other people, though the design was correlational [26].
A 2024 narrative review in Cyberpsychology, Behavior, and Social Networking by Brenda Wiederhold catalogued the field and warned that benefits to lonely users have to be weighed against the risk of crowding out human contact [27]. A 2025 piece in Nature Machine Intelligence by Iliana Depounti and colleagues argued that current companion products are designed for engagement in ways that resemble persuasive technologies, and that policymakers lack a clear framework for the resulting harms [28].
There is no robust evidence yet that long-term companion use produces durable improvements in mental health outcomes, and most published work involves small samples or self-reported data.
In early February 2023 Replika quietly disabled erotic roleplay for all users. The change followed an emergency order issued on 2 February 2023 by Italy's data protection authority, the Garante per la protezione dei dati personali, which prohibited Replika from processing the data of Italian users. The Garante said Replika lacked age verification, served inappropriate content to people who identified themselves as minors during testing, and breached the General Data Protection Regulation's transparency requirements [29]. The Italian regulator fined Luka five million euros in May 2025 over the same conduct [30].
The immediate user response was severe. Subreddit moderators pinned suicide-prevention resources after users reported that their long-term companions had become emotionally distant overnight. Vice reported that paying customers said they had been charged $69.99 a year on the promise of romantic features that were now gone [31]. Replika later restored a path back to the pre-February model for accounts created before 1 February 2023, and eventually allowed romantic interactions again for adult Pro subscribers, though without the explicit content of the previous version [31].
On 28 February 2024 Sewell Setzer III, a 14-year-old in Orlando, Florida, died by suicide after months of intense conversation with a Character.AI bot named after the Game of Thrones character Daenerys Targaryen. His mother, Megan Garcia, filed suit in the U.S. District Court for the Middle District of Florida in October 2024, with the Social Media Victims Law Center and the Tech Justice Law Project, alleging that Character.AI had designed a product known to be harmful to minors and had failed to install reasonable safety measures [32]. The complaint quoted exchanges in which the bot engaged in sexually explicit roleplay and, in the final conversation, told Setzer to "please come home to me as soon as possible, my love" [32].
In May 2025 the federal judge handling the case rejected Character.AI's argument that its chatbots' outputs were protected speech, treating them instead as a product for liability purposes [33]. In December 2024 the same lawyers filed a second federal suit on behalf of two Texas families, alleging that bots on Character.AI had encouraged a 17-year-old with autism to self-harm and suggested that killing his parents was a reasonable response to limits on screen time [34]. In January 2026 Character.AI and Google agreed to settle the Setzer suit and four related cases [35]. Character.AI has since added new safety filters, age-gated romantic content, and warnings, and announced in late 2025 that it would end open-ended chat for users under 18.
In February 2024 Mozilla Foundation's Privacy Not Included project published a review of 11 romantic AI chatbot apps, including Replika, Chai, and Eva. Ten of the eleven failed Mozilla's minimum security standards. The Romantic AI app sent at least 24,354 trackers to third parties within a minute of use, according to Mozilla. About 90% of the apps reviewed reserved the right to sell or share user data for advertising, and more than half did not provide a way to delete the data they collected [36].
MIT sociologist Sherry Turkle, whose 2011 book Alone Together warned that always-on technology was eroding human connection, has been one of the most prominent critics of AI companions. In a 2024 interview with NPR she described what people get from companion apps as "pretend empathy", and argued that the friction-free responsiveness of an AI risks reshaping users' expectations of human relationships [37]. The Center for Humane Technology and other groups have echoed the concern that companions optimized for engagement can crowd out the harder work of friendship.
Companion products sit awkwardly inside existing AI regulation. The European Union's AI Act, which came into force on 1 August 2024, requires that users be told they are talking to a machine and prohibits AI systems that exploit vulnerabilities related to age, disability, or socioeconomic situation in ways that cause significant harm. Prohibited-practice rules took effect on 2 February 2025, and most high-risk obligations apply from 2 August 2026 [38]. Several Members of the European Parliament, led by Dutch Green MEP Kim van Sparrentak, have argued that companion chatbots should be classified as high-risk and subject to fundamental-rights impact assessments [39].
In the United States there is no federal companion-specific law, but states have begun to legislate. California's SB 243, authored by state senator Steve Padilla and signed by Governor Gavin Newsom on 13 October 2025, is the first U.S. statute aimed specifically at companion chatbots [4]. The law, effective 1 January 2026, requires operators to:

- clearly disclose that the chatbot is artificial whenever a reasonable person could be misled into believing they are talking to a human;
- remind users they know to be minors, at least every three hours, that they are talking to an AI and should take a break;
- maintain a protocol for responding to expressions of suicidal ideation or self-harm, including referral to crisis services.
SB 243 also creates a private right of action for users harmed by noncompliant operators. New York enacted a narrower companion-chatbot transparency law (S-3008C) shortly before California acted, and similar bills have been introduced in Illinois, New Jersey, and other states [40].
In Italy, the Garante's 2023 order against Replika and the 2025 fine remain the most aggressive regulatory action taken against a companion app to date [29][30].
AI companions have become a visible part of contemporary debate about loneliness, intimacy, and the limits of automation. Sherry Turkle's work since Alone Together has been the most-cited skeptical position [37]. The Center for Humane Technology has argued that companion design relies on the same engagement loops blamed for social media's effects on adolescent mental health. Sociologists and journalists, including the authors of the 2024 Socius study on Replika subreddits, have documented users describing companions in genuinely affectionate language and treating product changes as bereavements [24].
Defenders of the category, including Eugenia Kuyda and Mustafa Suleyman, argue that millions of users are demonstrably lonely and that an imperfect responsive companion may be better than none. The honest reading of the existing evidence is that there are real but small documented benefits, real but underquantified risks, and a striking amount of money flowing in before the science catches up. AI companion apps were on track to bring in roughly $120 million in 2025, according to Sensor Tower data summarized by TechCrunch [2], a small figure relative to the broader generative-AI economy but enough to attract serious competition.
The likely shape of the next few years is more regulation, especially around minors; more research, much of it funded by companies with a commercial stake; and more consolidation as the smaller apps either get absorbed into larger platforms or shut down. Whether ongoing relationships with language models turn out to be helpful, harmful, or simply normal is a question the field is still very far from settling.