Samsung AI refers to the broad portfolio of artificial intelligence research, products, and services developed by Samsung Electronics, one of the world's largest technology companies. Headquartered in Suwon, South Korea, Samsung has invested heavily in AI across its semiconductor, consumer electronics, and software divisions. The company's AI efforts span dedicated research centers on multiple continents, proprietary large language models (the Samsung Gauss family), on-device AI features branded as Galaxy AI, custom neural processing units (NPUs) embedded in Exynos chipsets, and strategic partnerships with Google and other technology firms. Since the public launch of Galaxy AI in January 2024 with the Galaxy S24 series, Samsung has positioned AI as the central differentiator for its mobile, wearable, and home appliance product lines.
Samsung Electronics began exploring AI and machine learning applications in the early 2010s, primarily through its Samsung Advanced Institute of Technology (SAIT), which was founded in 1987 as the company's long-range research arm. As deep learning gained momentum across the technology industry, Samsung formalized its AI research by establishing the Artificial Intelligence Center Seoul (AIC-Seoul) in November 2017. This center consolidated various AI-related research teams within Samsung Electronics' corporate R&D division and became the flagship coordinating body for all of Samsung's AI research worldwide.
The creation of AIC-Seoul marked Samsung's shift from treating AI as a peripheral research interest to placing it at the core of its product strategy. Over the following years, Samsung expanded its global AI footprint and hired prominent researchers from leading universities and technology companies. In 2018, the company unveiled its "AI for All" philosophy, which it later articulated in detail at CES 2024, describing a future where AI operates seamlessly and non-intrusively across connected devices.
Samsung operates a network of Global AI Centers that conduct fundamental and applied research in areas such as computer vision, natural language processing, speech recognition, and on-device intelligence. These centers are strategically located near major academic and talent hubs around the world.
| AI Center | Location | Established | Key Research Areas |
|---|---|---|---|
| AIC-Seoul | Seoul, South Korea | November 2017 | Flagship center; coordinates global AI research; on-device AI, language models |
| AIC-Mountain View | Mountain View, California, USA | January 2018 | Computer vision, multimodal AI, mobile intelligence |
| AIC-Cambridge | Cambridge, United Kingdom | May 2018 | Natural language understanding, conversational AI |
| AIC-Toronto | Toronto, Canada | May 2018 | Machine learning theory, generative models |
| AIC-Moscow | Moscow, Russia | May 2018 | Computer vision, 3D scene understanding |
| AIC-Montreal | Montreal, Canada | 2018 | Deep learning, reinforcement learning |
| AIC-New York | New York City, USA | 2018 | AI research, robotics |
| AIC-Warsaw | Warsaw, Poland | 2019 | AI for connected devices, edge computing |
AIC-Seoul serves as the hub that coordinates research across all other centers. The Toronto and Montreal centers benefit from proximity to leading AI academic institutions, including the University of Toronto and Mila (the Quebec AI Institute). The Cambridge center draws on the deep academic tradition at the University of Cambridge, while the Mountain View lab sits in the heart of Silicon Valley's AI ecosystem.
In addition to these AI Centers under Samsung Research, the Samsung Advanced Institute of Technology (SAIT) maintains its own global labs in San Jose, Pasadena, Boston, Montreal, Yokohama, Beijing, Xi'an, Bengaluru, Moscow, Kyiv, Warsaw, and London. SAIT focuses on longer-range research, including AI for semiconductor design and next-generation computing architectures.
Since 2017, Samsung has hosted the annual Samsung AI Forum (SAIF), a conference that brings together internationally recognized scholars, industry experts, and Samsung researchers to discuss advances in AI and computer engineering. The forum typically features keynote addresses, paper presentations, poster sessions, and the Samsung AI Researcher of the Year awards. Gauss, Samsung's first generative AI model family, was unveiled at the Samsung AI Forum 2023, and Gauss 2 was announced at the Samsung Developer Conference (SDC24) in 2024. The forum is organized by SAIT and Samsung Research and has grown into a significant event within the Asian AI research community.
Samsung Gauss is a family of generative AI models developed by Samsung Research, first revealed at the Samsung AI Forum 2023 on November 8, 2023. The model family is named after Carl Friedrich Gauss, the German mathematician whose normal distribution theory Samsung describes as "the backbone of machine learning and AI." Samsung Gauss was designed with a strong emphasis on on-device processing, meaning the models can run locally on smartphones and other devices without sending data to external cloud servers.
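For context, the normal (Gaussian) distribution that the model family's name alludes to has, in one dimension, the probability density

```latex
% Density of the normal distribution N(\mu, \sigma^2)
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)
```

where \mu is the mean and \sigma^2 the variance; its ubiquity in statistics and machine learning is the basis for Samsung's "backbone" characterization.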
Gauss Language is a large language model that functions as a conversational assistant capable of answering questions, composing text, summarizing documents, and translating between languages. At launch, it supported Korean, English, French, Spanish, Chinese, and Japanese. Samsung initially deployed Gauss Language internally to assist employees with tasks such as drafting emails and summarizing meeting notes before extending its capabilities to consumer products.
Gauss Code is a code generation and assistance model that powers Samsung's internal coding assistant called code.i. This tool aims to help software developers write, review, and debug code more efficiently. By the time Samsung announced Gauss 2 at SDC24 in late 2024, approximately 60% of Samsung's software developers were using code.i in their daily workflows.
Gauss Image is an image generation and editing model that can create and modify visual content based on text prompts or other input. It supports tasks such as generating new images from descriptions, editing existing photos, and applying creative transformations.
Samsung unveiled Gauss 2 at the Samsung Developer Conference 2024 (SDC24) in November 2024 as a significant upgrade over the original Gauss models. Gauss 2 is a multimodal model that integrates language, code, and image capabilities into a single architecture. Depending on the variant, it supports up to 14 languages and is 1.5 to 3 times faster than its predecessor.
Gauss 2 is available in three model variants designed for different deployment scenarios:
| Variant | Description | Target Use Case |
|---|---|---|
| Compact | Small model optimized for limited computing environments | On-device processing on phones and wearables |
| Balanced | Mid-size model balancing stability and efficiency | Consistent performance across diverse tasks |
| Supreme | Large model with a new architecture reducing computational requirements | High-performance tasks requiring top accuracy |
Gauss 2 powers many of the Galaxy AI features found in One UI 7 and later software releases. Its multimodal design allows Samsung to offer integrated language, vision, and code capabilities from a single model backbone, improving both efficiency and user experience.
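The three-variant structure lends itself to simple capability-based model selection. The following is an illustrative sketch only: the `Device` fields and selection thresholds are hypothetical, not part of any Samsung SDK; only the variant names come from the table above.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """Hypothetical device descriptor; fields are illustrative, not a Samsung API."""
    ram_gb: int
    has_npu: bool
    cloud_allowed: bool

def pick_gauss2_variant(device: Device) -> str:
    """Choose a Gauss 2 variant following the Compact/Balanced/Supreme split.
    Thresholds are invented for illustration."""
    if device.cloud_allowed and device.ram_gb >= 16:
        return "Supreme"   # large model for high-performance tasks
    if device.ram_gb >= 8:
        return "Balanced"  # mid-size model for consistent performance
    return "Compact"       # small model for limited computing environments

# A phone with an NPU but limited RAM gets the on-device Compact model.
print(pick_gauss2_variant(Device(ram_gb=6, has_npu=True, cloud_allowed=False)))  # Compact
```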
Galaxy AI is the consumer-facing brand for Samsung's suite of AI-powered features on Galaxy smartphones, tablets, wearables, and other devices. It was officially launched on January 17, 2024, alongside the Galaxy S24 series at the Galaxy Unpacked event. Galaxy AI integrates Samsung's own models (including Gauss) with external technologies, most notably Google Gemini, to deliver a range of context-sensitive functions.
Galaxy AI debuted with the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. Through One UI 6.1 software updates released starting in March 2024, Samsung extended Galaxy AI features to older flagship devices, including the Galaxy S23 series, Galaxy Z Fold5, Galaxy Z Flip5, Galaxy Tab S9 series, and eventually the Galaxy S22 series and Z Fold4/Flip4. By the end of 2024, Samsung reported that Galaxy AI had reached over 200 million devices. The company set a target of 400 million devices by the end of 2025.
At launch, Galaxy AI supported 13 languages: simplified Chinese, English, French, German, Hindi, Italian, Japanese, Korean, Polish, Portuguese, Spanish, Thai, and Vietnamese. Additional languages, including Arabic, Indonesian, and Russian, were added through subsequent updates.
The following table summarizes the major Galaxy AI features available as of 2025:
| Feature | Category | Description | Processing |
|---|---|---|---|
| Live Translate | Communication | Real-time two-way voice and text translation during phone calls, enabling conversations between speakers of different languages | On-device |
| Circle to Search | Search | Gesture-based search allowing users to circle, highlight, or tap on-screen content to perform a web search without switching apps; developed in partnership with Google | Cloud (Google) |
| Chat Assist | Communication | Provides real-time writing suggestions, grammar corrections, and tone adjustments within messaging apps | On-device and cloud |
| Note Assist | Productivity | Automatically formats, summarizes, and translates notes within Samsung Notes | Cloud |
| Transcript Assist | Productivity | Transcribes voice recordings with multi-speaker recognition, generates summaries, and supports translation into multiple languages | Cloud |
| Interpreter | Communication | Split-screen real-time translation for face-to-face conversations, works without internet connectivity | On-device |
| Generative Edit | Photo Editing | AI-powered photo editing that can move, resize, or remove objects; fills missing backgrounds with contextually generated content; powered by Google Imagen 2 | Cloud |
| Sketch to Image | Creative | Converts hand-drawn sketches into realistic images using generative AI | Cloud |
| Instant Slow-Mo | Video | Generates additional frames using AI to create smooth slow-motion video from standard-speed footage | On-device |
| Audio Eraser | Photo/Video Editing | Isolates and removes specific audio sources (background noise, voices) from video recordings | On-device |
| Browsing Assist | Productivity | Summarizes web pages and articles within Samsung Internet browser | Cloud |
| Edit Suggestion | Photo Editing | AI analyzes photos and suggests specific enhancements or adjustments | On-device |
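Instant Slow-Mo, listed above, relies on AI frame interpolation. The simplest baseline for inserting frames is linear blending between adjacent frames, sketched below with NumPy; production systems such as Samsung's use learned, motion-compensated interpolation, not this naive average.

```python
import numpy as np

def interpolate_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Naive midpoint frame: average adjacent frames pixel-wise.
    Real slow-motion models predict motion instead of blending."""
    return ((frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2).astype(np.uint8)

def double_frame_rate(frames: list) -> list:
    """Insert one interpolated frame between each consecutive pair,
    halving the apparent playback speed at constant frame rate."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(interpolate_midframe(a, b))
    out.append(frames[-1])
    return out

# Two 2x2 RGB frames (black, then gray) become three frames.
clip = [np.full((2, 2, 3), v, dtype=np.uint8) for v in (0, 100)]
slow = double_frame_rate(clip)
print(len(slow))  # 3
```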
Circle to Search deserves special mention as one of Galaxy AI's most recognizable features. Developed in collaboration with Google, it allows users to select any on-screen content through intuitive gestures (circling, highlighting, scribbling, or tapping) to trigger a contextual search. The feature can recognize products, landmarks, text, plants, animals, and other visual elements. It launched exclusively on the Galaxy S24 series and Google Pixel 8 Pro before expanding to other devices. Circle to Search processes queries through Google's cloud infrastructure and delivers results from Google Search.
Live Translate is a standout communication feature that provides bidirectional real-time translation during phone calls. The feature runs entirely on-device, meaning it works without an internet connection and user conversation data never leaves the phone. It supports a growing list of languages and is integrated directly into Samsung's default Phone app. During a call, both the user's speech and the other party's speech are translated and displayed as text on screen, with translated audio played to the other party.
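Conceptually, this call-translation path chains on-device speech recognition, machine translation, and speech synthesis. The toy sketch below uses stubbed components to show the data flow only; all function names and the lookup-table "model" are hypothetical, as Samsung has not published this interface.

```python
def recognize_speech(audio: bytes) -> str:
    """Stub ASR: pretend the audio decodes to a fixed Korean greeting."""
    return "annyeonghaseyo"

# Stub translation table standing in for the on-device MT model.
TRANSLATIONS = {"annyeonghaseyo": "hello"}

def translate(text: str, target_lang: str = "en") -> str:
    return TRANSLATIONS.get(text, text)

def synthesize(text: str) -> bytes:
    """Stub TTS: encode the translated text as bytes in place of audio."""
    return text.encode()

def live_translate(audio: bytes) -> tuple:
    """On-device pipeline: ASR -> MT -> TTS. The caption is shown on screen;
    the synthesized audio is played to the other party. No data leaves the device."""
    source_text = recognize_speech(audio)
    caption = translate(source_text)
    return caption, synthesize(caption)

caption, out_audio = live_translate(b"...")
print(caption)  # hello
```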
Samsung takes a hybrid approach to AI processing, combining on-device computation with cloud-based services depending on the complexity and privacy requirements of each task.
Features that handle sensitive personal data, such as Live Translate, Interpreter, and Audio Eraser, run entirely on the device. This approach ensures that private conversations and personal media never leave the phone. On-device processing is powered by the NPU in the device's chipset (either Samsung's Exynos or Qualcomm's Snapdragon) and by on-device AI models like Gauss and Gemini Nano.
Samsung's Personal Data Engine manages on-device AI data and is protected by Knox Vault, Samsung's dedicated security hardware. This ensures that AI-processed personal information is encrypted and isolated from other system components.
More computationally intensive features, such as Generative Edit and Note Assist's summarization capabilities, use cloud AI through partnerships with Google Cloud (Gemini Pro, Imagen 2) and Samsung's own cloud infrastructure. Samsung states that cloud-processed data is deleted shortly after the request completes and is not used for AI model training or advertising.
Samsung provides an Advanced Intelligence settings panel that gives users explicit control over how their data is processed. A single toggle allows users to disable all online (cloud) AI processing, restricting Galaxy AI to on-device features only. This privacy-first design philosophy positions Samsung as a middle ground between fully cloud-dependent AI services and Apple's emphasis on on-device processing.
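The on-device/cloud split described above can be modeled as a simple routing rule: privacy-sensitive features always stay local, and the user toggle vetoes cloud processing entirely. This is a sketch under those assumptions; the feature names come from this article, but the routing logic itself is illustrative, not Samsung's implementation.

```python
# Features this article describes as always running on-device.
ON_DEVICE_ONLY = {"Live Translate", "Interpreter", "Audio Eraser"}

def route(feature: str, allow_cloud: bool) -> str:
    """Decide where a Galaxy AI request runs.

    allow_cloud mirrors the Advanced Intelligence toggle: when it is off,
    cloud-dependent features become unavailable rather than running remotely.
    """
    if feature in ON_DEVICE_ONLY:
        return "on-device"
    return "cloud" if allow_cloud else "unavailable"

print(route("Live Translate", allow_cloud=False))   # on-device
print(route("Generative Edit", allow_cloud=True))   # cloud
print(route("Generative Edit", allow_cloud=False))  # unavailable
```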
Samsung's AI strategy is closely intertwined with its long-standing partnership with Google, which deepened significantly with the Galaxy S24 launch. In January 2024, Samsung and Google Cloud announced a multi-year partnership to bring Google's generative AI technologies to Samsung smartphones. Samsung became the first Google Cloud partner to deploy Gemini Pro and Imagen 2 on Vertex AI, delivering them via the cloud to its smartphones.
Key elements of the partnership include cloud access to Gemini Pro and Imagen 2 through Vertex AI, the on-device Gemini Nano model, and the co-developed Circle to Search feature, which debuted on the Galaxy S24 series.
Samsung's Exynos line of mobile processors includes dedicated NPUs that are essential for running on-device AI workloads. Samsung has shipped NPU-equipped Exynos chips since 2019, and these NPUs power Galaxy AI features on Samsung phones.
The Exynos 2400, introduced alongside the Galaxy S24 series in early 2024 (used in select regional variants), features a significantly upgraded NPU with a 17K MAC (Multiply-Accumulate) architecture consisting of two graphics NPU cores and two system NPU cores. It delivers 42 TOPS (Tera Operations Per Second) of AI performance, representing a roughly 14.7x improvement over the Exynos 2200. Samsung redesigned the NPU architecture specifically to accelerate non-linear operations used in Transformer-based models, achieving three times higher performance on the MobileBERT benchmark.
The Exynos 2400 supports on-device text-to-image generation, real-time language translation, generative content fill, object recognition, and camera-related AI processing.
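The quoted 42 TOPS figure is consistent with the standard back-of-envelope formula for peak NPU throughput: MAC units × 2 operations per MAC (a multiply plus an add) × clock frequency. The ~1.2 GHz clock below is an assumption chosen to match the published number, not a Samsung specification.

```python
def npu_tops(mac_units: int, clock_hz: float) -> float:
    """Peak throughput in TOPS: each MAC unit performs 2 ops per cycle."""
    return mac_units * 2 * clock_hz / 1e12

# 17K MAC = 17 * 1024 units; ~1.2 GHz is an assumed clock, not a published spec.
print(round(npu_tops(17 * 1024, 1.2e9), 1))  # 41.8, close to the quoted 42 TOPS
```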
Announced in December 2025, the Exynos 2600 is built on Samsung Foundry's 2nm GAA (Gate-All-Around) process, making it the first mobile processor manufactured at the 2nm node. Its NPU features a 32K MAC architecture that delivers a 113% improvement in generative AI performance compared to its predecessor. The chip is 25 to 30% more power-efficient overall than the 3nm Exynos 2500.
The Exynos 2600 powers the Galaxy S26 series and provides the computational backbone for Samsung's latest on-device AI features, including agentic AI capabilities and enhanced Bixby processing.
In markets where Samsung uses Qualcomm Snapdragon processors instead of Exynos, the NPU capabilities come from Qualcomm's Hexagon processor. The Snapdragon 8 Gen 3 for Galaxy, used in the Galaxy S24 Ultra and other S24 variants, delivers a 42% increase in NPU performance over the previous generation. It supports multi-modal generative AI models, including LLMs, language-vision models, and Transformer-based automatic speech recognition. The Snapdragon 8 Elite, used in the Galaxy S25 series, further extended these AI capabilities.
Samsung Research is the advanced R&D hub of Samsung Electronics, responsible for developing the core technologies that power Samsung's products and services. It oversees the Global AI Centers and conducts research in AI, next-generation communications (6G), security, and other areas. Samsung Research works closely with SAIT but focuses more on near-term product applications rather than long-range fundamental research.
Samsung Research's AI division focuses on key areas such as computer vision, natural language processing, speech recognition, and on-device intelligence, alongside development of the Gauss model family.
Bixby is Samsung's proprietary virtual assistant, first announced on March 20, 2017, alongside the Galaxy S8 and S8+. Initially positioned as a replacement for Samsung's earlier S Voice assistant, Bixby launched in Korean on May 1, 2017, with English support following in July 2017.
Bixby evolved through several major versions: Bixby 2.0 was announced in October 2017 during the Samsung Developer Conference, extending the assistant across connected products including TVs and refrigerators. Bixby 3.0 arrived with One UI 3 in early 2021.
With the Galaxy S25 launch in January 2025, Google Gemini replaced Bixby as the default voice assistant on Samsung devices. However, Samsung continued to develop Bixby with new AI capabilities. The Galaxy S26 series (February 2026) features a substantially upgraded Bixby that functions as a conversational device agent, capable of understanding natural language instructions contextually. For example, saying "My eyes hurt after looking at the screen" prompts Bixby to open brightness settings automatically, without the user needing to use exact commands.
Bixby on the Galaxy S26 also integrates Perplexity AI for real-time web search, with results appearing directly within the Bixby conversation rather than redirecting users to a browser.
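The brightness example above is essentially intent classification mapped to a device action. In the toy sketch below, keyword matching stands in for the actual on-device language model; the intent table and function names are hypothetical.

```python
# Hypothetical intent table: trigger keywords -> settings action.
INTENTS = {
    ("eyes", "hurt"): "open_brightness_settings",
    ("battery", "low"): "open_power_saving",
}

def route_utterance(utterance: str) -> str:
    """Map a natural-language complaint to a device action via keyword overlap.
    A real agent would use an on-device LLM, not keyword matching."""
    words = set(utterance.lower().replace(".", "").split())
    for keywords, action in INTENTS.items():
        if all(k in words for k in keywords):
            return action
    return "no_action"

print(route_utterance("My eyes hurt after looking at the screen"))
# open_brightness_settings
```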
The Galaxy S26 series, unveiled at Galaxy Unpacked in February 2026, represents Samsung's push into agentic AI. Unlike conventional reactive AI that responds to explicit commands, agentic AI can autonomously make decisions and take multi-step actions on behalf of the user with minimal supervision.
Key developments with the Galaxy S26 include the substantially upgraded conversational Bixby, Perplexity AI integration for in-assistant web search, the expanded NPU of the Exynos 2600, and the agentic AI focus of One UI 8.5.
Samsung described the Galaxy S26 launch as "the beginning of truly agentic AI" in its official communications.
One UI 6.1 was the first major software update to bring Galaxy AI features to older Samsung devices beyond the S24 series. It extended support to the Galaxy S23 series, Galaxy Z Fold5/Flip5, and Galaxy Tab S9 series.
One UI 7, released starting April 7, 2025, introduced additional Galaxy AI features, many of them powered by the multimodal Gauss 2 model.
One UI 8.5 launched with the Galaxy S26 series and focuses on agentic AI, enhanced Bixby capabilities, and Perplexity AI integration.
When Galaxy AI launched in January 2024, Samsung stated that all AI features would be provided free of charge on supported devices until the end of 2025. This created some uncertainty about whether features would become paid after that date.
In early 2026, Samsung clarified its pricing policy: basic Galaxy AI features developed by Samsung (including Live Translate, Audio Eraser, Note Assist, Writing Assist, Generative Wallpapers, and Object Eraser) will remain free permanently on supported devices. However, Samsung reserved the right to introduce paid tiers for future premium or enhanced AI services. Third-party AI integrations, such as Google Gemini-powered features, may be subject to separate pricing decisions by their respective providers.
Samsung's Galaxy AI competes primarily with Apple Intelligence and Google's AI features on Pixel devices.
| Aspect | Samsung Galaxy AI | Apple Intelligence | Google Pixel AI |
|---|---|---|---|
| Launch | January 2024 (Galaxy S24) | September 2024 (iOS 18.1) | Late 2023 (Pixel 8 series) |
| AI Model Strategy | Hybrid: Samsung Gauss + Google Gemini | Apple foundation models (on-device) + OpenAI ChatGPT integration | Google Gemini (all tiers) |
| On-Device LLM | Gemini Nano + Samsung Gauss | Apple on-device models | Gemini Nano |
| Translation | Live Translate (real-time call translation) | Translation in select apps | Real-time translation in calls |
| Image Editing | Generative Edit, Sketch to Image, Object Eraser | Clean Up, Image Playground | Magic Eraser, Magic Editor, Pixel Studio |
| Voice Assistant | Bixby + Gemini | Siri (with Apple Intelligence) | Google Assistant + Gemini |
| Privacy Approach | Hybrid on-device/cloud with user toggle | On-device first; Private Cloud Compute for complex tasks | Primarily cloud-based; Gemini Nano for select on-device tasks |
| Device Reach | 400M+ devices targeted by end of 2025 | iPhone 15 Pro and later, select iPads and Macs | Pixel 8 series and later |
| NPU/Chip | Exynos NPU + Qualcomm Hexagon | Apple Neural Engine | Google Tensor TPU |
Samsung differentiates itself through broader device compatibility (covering phones, tablets, watches, earbuds, TVs, and home appliances), the real-time call translation capability of Live Translate, and the dual-assistant approach of offering both Bixby and Gemini. Apple emphasizes privacy through its on-device processing model and Private Cloud Compute infrastructure. Google benefits from deep integration with its own Gemini models across the entire software stack.
As of early 2026, Galaxy AI features are supported on the following device categories (availability of specific features varies by model and software version):