Google LLC is an American multinational technology company and the principal operating subsidiary of Alphabet Inc. Founded in September 1998 by Larry Page and Sergey Brin while they were PhD students at Stanford University, Google began as a web search engine and grew into one of the largest companies in the world by revenue, market capitalisation, and influence over consumer and enterprise software. The company is headquartered at the Googleplex in Mountain View, California, and led since 2015 by chief executive officer Sundar Pichai, who also became CEO of Alphabet in 2019. As of April 2026 Alphabet's market capitalisation sat near $4.1 trillion, making it the second most valuable public company in the world after Microsoft.[1][2]
Google's core businesses include Google Search, the YouTube video platform, the Android mobile operating system, the Chrome web browser, the Google Cloud division, the Google Workspace productivity suite, and the Pixel hardware line. Across these products the company is in the middle of a company-wide pivot to artificial intelligence, built around the Gemini family of multimodal foundation models developed by Google DeepMind. Gemini now powers AI Overviews in Search, the Gemini app consumer assistant, the Gemini Enterprise Agent Platform on Google Cloud, on-device features on Pixel and other Android phones, and writing, summarisation, and analysis features inside Gmail, Docs, Sheets, Slides, and Meet. In its Q4 2025 results Alphabet reported full-year 2025 revenue of $402.8 billion (up roughly 15% year over year), surpassing $400 billion for the first time, with net income of $34.5 billion in the fourth quarter alone.[3][4]
Google is generally considered one of the three companies setting the pace of frontier large language model research alongside OpenAI and Anthropic. It also runs one of the few credible alternatives to NVIDIA-based AI training infrastructure through its in-house Tensor Processing Unit program, now in its seventh generation (Ironwood, also referred to as TPU v7 / TPU7x), and announced in October 2025 a multi-gigawatt deal worth tens of billions of dollars to supply Anthropic with up to one million TPUs.[5][6]
| Field | Detail |
|---|---|
| Type | Subsidiary of Alphabet Inc. (NASDAQ: GOOG, GOOGL) |
| Founded | September 4, 1998 (Menlo Park, California) |
| Founders | Larry Page, Sergey Brin |
| Headquarters | Googleplex, Mountain View, California |
| Key people | Sundar Pichai (CEO, Alphabet and Google), Ruth Porat (President and Chief Investment Officer), Demis Hassabis (CEO, Google DeepMind), Jeff Dean (Chief Scientist, Google) |
| Employees | ~190,800 (full-time, end of 2025) |
| 2025 revenue | $402.8 billion (Alphabet consolidated) |
| 2025 net income | $123.6 billion (Alphabet) |
| 2025 R&D spend | ~$60 billion |
| Market cap (Apr 2026) | ~$4.1 trillion |
| Capex 2026 (guided) | $175-185 billion |
| Search market share | ~90% global, ~95% mobile |
| Flagship AI model | Gemini 3 Pro (Nov 2025), Gemini 3.1 Pro (Q1 2026) |
| Custom AI silicon | TPU v7 "Ironwood" (general availability Q1 2026) |
Google traces its origins to a research project at Stanford University in 1995, when Larry Page, a graduate student in computer science, met Sergey Brin, a fellow PhD candidate from Russia by way of Maryland. Page initially proposed studying the link structure of the World Wide Web for his dissertation, an idea that grew into a system the two called BackRub because it analysed the backlinks pointing into a web page rather than the words on the page itself. The pair built the BackRub crawler on Stanford's network throughout 1996, and by 1997 the project had been renamed Google, a play on the mathematical term googol (10 to the power 100), chosen to suggest the search engine's ambition to organise an enormous amount of information.[7]
The central technical contribution was the PageRank algorithm, which estimated a page's importance by treating links as votes weighted by the importance of the linking pages. Compared with keyword frequency methods used by AltaVista, Lycos, and Excite at the time, PageRank tended to surface pages people actually wanted, particularly for navigational and informational queries. Page and Brin tried to license the idea to existing search engines and were turned down. In August 1998 Andy Bechtolsheim, a co-founder of Sun Microsystems, wrote them a $100,000 check made out to "Google Inc." before the company legally existed, which forced the pair to take a leave of absence from Stanford and incorporate the company. Google Inc. was officially founded on September 4, 1998, with its first office in the Menlo Park garage of Susan Wojcicki, who would later run YouTube.[7][8]
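The links-as-weighted-votes idea can be sketched in a few lines of Python. This is an illustrative toy on a hypothetical four-page web, not the production system, which ran the computation as power iteration over billions of pages with sparse-matrix machinery; the damping factor of 0.85 matches the value used in the original PageRank paper.

```python
# Toy PageRank: a page's score is the probability a "random surfer"
# lands on it, following links with probability d and jumping to a
# uniformly random page otherwise. Hypothetical four-page web.

def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}
        for p, outgoing in links.items():
            if outgoing:
                share = d * rank[p] / len(outgoing)
                for q in outgoing:
                    new[q] += share
            else:  # dangling page: redistribute its weight evenly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(web)
# "C" collects the most inbound link weight, so it ranks highest
assert max(ranks, key=ranks.get) == "C"
```

Note that page D, which nothing links to, ends up with only the baseline teleportation score: link count into a page, weighted by the rank of the linkers, is everything.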
The company moved to Palo Alto in 1999, raised a $25 million Series A from Sequoia Capital and Kleiner Perkins, and shipped a famously spartan home page that contained almost nothing except a search box. By the end of 2000 Google was answering more than 100 million queries per day and had begun selling text ads tied to keywords through a self-serve product first called Premium Sponsorships and later relaunched in October 2000 as AdWords. AdWords used a pay-per-click auction model that made advertising affordable for small businesses, and it became the foundation of Google's revenue for the next two decades.[7]
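The auction mechanics AdWords eventually settled on (a generalized second-price design, in which each winner pays just enough to beat the bidder below) can be sketched as follows. This is a stylized illustration: real AdWords also weights each bid by an ad-quality score, and the advertiser names and one-cent increment here are hypothetical.

```python
# Stylized generalized second-price (GSP) auction: advertisers submit
# max cost-per-click bids; slots go to the highest bidders, and each
# winner pays one cent above the next-highest bid (quality scores,
# reserve prices, and other real-world details omitted).

def rank_and_price(bids):
    """bids: dict advertiser -> max CPC bid.
    Returns (advertiser, price_paid) tuples in slot order."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i, (name, bid) in enumerate(ranked):
        next_bid = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((name, round(next_bid + 0.01, 2)))
    return results

slots = rank_and_price({"alice": 2.50, "bob": 1.75, "carol": 0.90})
assert slots[0] == ("alice", 1.76)  # alice wins the top slot, pays just above bob
```

The key property, and the reason the model suited small advertisers, is that your price is set by the competition below you rather than by your own maximum bid.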
In 2001 Page and Brin recruited Eric Schmidt, formerly CEO of Novell, as Google's chief executive. Schmidt, Page, and Brin operated as a self-described triumvirate, with Schmidt providing the operational discipline that the company's investors had asked for. Under that arrangement Google rolled out a remarkable run of new products: Google Image Search (2001), Google News (2002), AdSense (2003), Gmail (2004), Google Maps (2005), Google Earth (2005), and Google Calendar (2006). Each of those products is still operating today.
Google filed to go public in April 2004 and listed on the NASDAQ in August 2004 at a split-adjusted price of about $85 per share, raising $1.67 billion in what was then one of the most closely watched technology IPOs of the decade. Page and Brin used the prospectus to publish an unusual founders' letter that warned shareholders Google would not optimise for quarterly earnings, would issue a dual-class share structure to keep voting control with the founders, and would take risky long-term bets. Both promises were kept. Within five years Google had moved into web browsers (Chrome, launched 2008), mobile operating systems (Android, acquired 2005, first phone shipped 2008), online video (YouTube, acquired October 2006 for $1.65 billion in stock), and digital advertising serving (DoubleClick, acquired April 2007 for $3.1 billion). The YouTube and DoubleClick deals in particular reshaped the global advertising market and put Google at the centre of the entire web economy.[9][10]
In April 2011 Larry Page replaced Schmidt as CEO and pushed the company through a long internal reorganisation oriented around what he called "more wood behind fewer arrows." Google killed dozens of side projects (Google Reader, Google Wave, Google Health) and concentrated on a smaller number of large platform bets. The most consequential of those bets was Android, which Google had bought from Andy Rubin's startup in 2005 for around $50 million. By 2013 Android was running on more than three quarters of all new smartphones shipped worldwide, mostly on devices made by Samsung, LG, HTC, and a growing number of Chinese OEMs. Android gave Google a permanent default position on the world's mobile devices for Search, Maps, YouTube, Gmail, and the Play Store, and it remains the single most important strategic asset the company controls outside Search itself.[10]
In parallel, Google made two acquisitions that defined the next decade. In January 2014 it bought Nest Labs, the smart-thermostat maker founded by ex-Apple iPod engineer Tony Fadell, for $3.2 billion in cash. Later that same month it acquired the London AI research lab DeepMind, founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman in 2010, for a reported $400 to $500 million. The DeepMind purchase was driven personally by Page after a dinner with Hassabis, and Google outbid Facebook to win it. As part of the deal Google agreed to set up an AI ethics board and to keep DeepMind operating semi-independently from London under its existing research culture. The decision turned out to be one of the most important acquisitions in Google's history.[11]
On August 10, 2015 Page surprised investors by announcing that Google would be reorganised under a new parent holding company called Alphabet Inc. The narrower Google business (Search, Ads, YouTube, Android, Cloud, hardware) would become an Alphabet subsidiary led by Sundar Pichai, who had until then run Android and Chrome. The riskier long-horizon projects (Waymo, Verily, Calico, Wing, X, GV, CapitalG, Google Fiber) were spun out as separate Alphabet "Other Bets" so they could be reported and managed independently of the search advertising business that funded them. Page became CEO of Alphabet, Brin became its president, and Schmidt remained executive chairman. The structure was modelled in part on Berkshire Hathaway's federation of operating companies and was meant to give the moonshot units more autonomy while making the core Google P&L more legible.[12]
Pichai, born in Madurai, India in 1972, had joined Google in 2004 to manage the Google Toolbar and went on to lead Chrome and ChromeOS. He held a bachelor's in metallurgical engineering from IIT Kharagpur, a master's in materials science from Stanford, and an MBA from Wharton, and had a brief stint at McKinsey before joining Google. By 2014 he was overseeing all of Google's product lines. His promotion to Google CEO in 2015 was widely seen as a long-planned succession. In December 2019 Page and Brin stepped back further from day-to-day management and Pichai also took over as CEO of Alphabet, consolidating the two top jobs. Page and Brin remain controlling shareholders through their Class B super-voting stock and continue to influence strategy informally, in particular on AI.[13]
The years between 2015 and 2022 were defined by the maturation of Google's ad business, the rapid scaling of Google Cloud under former Oracle and VMware executive Thomas Kurian, the rise of YouTube to a roughly $30 billion-a-year revenue line, and a long string of antitrust investigations in the United States and Europe. The European Commission fined Google a total of more than 8 billion euros across cases involving Android, Google Shopping, and AdSense between 2017 and 2019. The US Department of Justice filed a federal antitrust lawsuit in October 2020 alleging that Google had unlawfully maintained its monopoly in general search.[14]
The public release of ChatGPT by OpenAI on November 30, 2022 set off what was widely reported inside Google as a "code red." Senior leadership reassigned engineering teams to AI work, and co-founders Page and Brin returned to the office for the first time in years to personally review the company's response. Pichai later told The New York Times that the formal phrase "code red" was an exaggeration, but the disruption was real. For the first time in two decades Google's franchise in question answering was being seriously contested.[15]
Google's first answer was Bard, a conversational chatbot first announced on February 6, 2023 and powered initially by a lightweight version of LaMDA. Bard's public demo on February 8, 2023 included a factual error about the James Webb Space Telescope and Alphabet's stock fell roughly 7% the next day, wiping out about $100 billion of market value in a single session. The episode hardened a perception that Google had fallen behind OpenAI in shipping consumer AI even though it had pioneered most of the underlying research, including the 2017 Transformer paper "Attention Is All You Need" by Vaswani et al., which is the architectural foundation of every modern LLM.[16]
In April 2023 Pichai announced that Google Brain, the deep learning research group founded by Jeff Dean, Greg Corrado, and Andrew Ng inside Google in 2011, would be merged with DeepMind under Hassabis to form a single research division called Google DeepMind. Jeff Dean took the role of chief scientist for both Google Research and Google DeepMind. The merger consolidated almost all of Alphabet's foundation-model work under Hassabis and ended the long-running internal duplication between the two labs, which had each been training their own large models in parallel.[17]
The first Gemini models were unveiled on December 6, 2023 in three sizes (Ultra, Pro, and Nano), positioned as natively multimodal from the ground up rather than text models with vision bolted on. Bard was rebranded as Gemini in February 2024 and the standalone Gemini app launched on Android and iOS shortly afterwards. Gemini 1.5 (February 2024) introduced a one-million-token context window. Gemini 2.0 (December 2024) shipped agentic features including a research-agent mode and integration with Workspace data. Gemini 2.5 Pro (March 2025) topped the LMArena leaderboard for over six months and gave Google its first sustained period of clear leadership in a public LLM benchmark since OpenAI launched GPT-4 in March 2023.[18]
On November 18, 2025 Google released Gemini 3, led by Gemini 3 Pro. Gemini 3 Pro accepts text, images, audio, video, and PDFs in a context window of up to 1,048,576 input tokens, with up to 65,536 output tokens, and shipped simultaneously across the Gemini app, AI Studio, Vertex AI, and the Search AI Mode. At launch Gemini 3 Pro scored 81% on MMMU-Pro, 87.6% on Video-MMMU, and 72.1% on the SimpleQA Verified factuality benchmark. Google also previewed Deep Think, a higher-compute reasoning mode for Pro and Ultra subscribers. Gemini 3.1 Pro followed in Q1 2026 and as of April 2026 leads the GPQA Diamond reasoning benchmark at 94.3% and is widely considered the most cost-effective frontier model at roughly $2 per million input tokens.[19][20]
At Google Cloud Next in April 2026 Pichai disclosed that Google Cloud was running at a $70 billion annualised revenue rate, growing 48% year over year, with a $240 billion contracted backlog that had doubled in twelve months. He also reported that nearly 75% of all new code written inside Google was AI-generated and reviewed by an engineer, up from 50% the previous autumn, and that the consumer Gemini app had crossed 750 million monthly users. Google announced the rebranding of Vertex AI into the Gemini Enterprise Agent Platform, the launch of TPU 8i (an inference-optimised configuration that wires 1,152 TPUs into a single pod), and partnerships with Apple to power features in the next major release of Apple Intelligence and with NASA to support flight readiness analysis for Artemis II.[21][22]
Google is the largest operating subsidiary of Alphabet Inc., the Delaware-incorporated holding company created in the 2015 reorganisation. Alphabet reports its financials in three segments: Google Services, Google Cloud, and Other Bets. Google Services is the consumer-facing and advertising-funded part of the business; Google Cloud is the enterprise infrastructure, software, and AI platform business; and Other Bets is the collection of long-horizon subsidiaries.
| Segment | Constituent businesses | 2025 revenue | Year-over-year |
|---|---|---|---|
| Google Services | Google Search and other, YouTube ads, Google Network, subscriptions, platforms, devices | ~$340B | +14% |
| Google Cloud | Vertex AI / Gemini Enterprise Agent Platform, BigQuery, GKE, GCP infrastructure, Workspace | ~$58B | +35% (Q4 +48%) |
| Other Bets | Waymo, Verily, Wing, Calico, GV, CapitalG, Isomorphic Labs (since 2023) | ~$1.6B | +24% |
The Other Bets segment exists to insulate Alphabet's core advertising profits from the cost of long-cycle moonshots. The biggest Other Bet is Waymo, the autonomous-vehicle subsidiary spun out of Google's self-driving car project in 2016 under former Hyundai executive John Krafcik. Waymo raised a $16 billion round in February 2026 at a $126 billion valuation and increased weekly paid robotaxi rides from about 175,000 at the start of 2025 to more than 450,000 by year end, with launches underway in Miami, Dallas, Houston, San Antonio, Orlando, Tokyo, and London.[23]
Other Bets also includes Verily, a health technology subsidiary that grew out of Google X's Life Sciences team and now focuses on clinical research and chronic disease management; Wing, a delivery drone operator with active commercial routes in Australia, the United States, and Ireland; Calico, an anti-aging biotech company founded in 2013 with Genentech veteran Art Levinson; the venture funds GV (formerly Google Ventures) and CapitalG; and Isomorphic Labs, a drug discovery spinout founded in 2021 by Demis Hassabis using AlphaFold-class models for structure-based drug design. The X Development "moonshot factory" continues to operate as an idea incubator inside Alphabet but has shifted in 2025 toward spinning projects out as independent companies funded by an external venture vehicle, the Series X Capital fund.[24]
Long before the term "AI" became a marketing category Google was one of the heaviest users of large-scale machine learning on the planet. Statistical models powered ranking in Google Search, click-through prediction in AdWords, spam filtering in Gmail, and translation between language pairs in Google Translate, which launched in 2006 using statistical machine translation trained on parliamentary corpora. Google's research division also published influential systems work that made later deep learning possible, including the MapReduce paper (2004) by Jeff Dean and Sanjay Ghemawat, the Bigtable paper (2006), the GFS paper (2003), and the open-source release of the Borg-derived Kubernetes container orchestration system in 2014.[25]
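The MapReduce idea from the 2004 Dean and Ghemawat paper, reduced to its essence, is that a user supplies a map function emitting key/value pairs and a reduce function folding all values for a key, while the framework handles distribution and shuffling. A single-process sketch of the canonical word-count example (the real system ran the phases across thousands of machines):

```python
from collections import defaultdict

# Single-process sketch of the MapReduce programming model using the
# canonical word-count example. map_fn and reduce_fn are the only
# user-supplied pieces; mapreduce() plays the role of the framework.

def map_fn(line):
    for word in line.split():
        yield word, 1

def reduce_fn(word, counts):
    return word, sum(counts)

def mapreduce(lines):
    shuffled = defaultdict(list)
    for line in lines:                       # map phase
        for key, value in map_fn(line):
            shuffled[key].append(value)      # shuffle: group by key
    return dict(reduce_fn(k, v) for k, v in shuffled.items())  # reduce phase

counts = mapreduce(["the quick fox", "the lazy dog"])
assert counts["the"] == 2
```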
In 2011, Jeff Dean, Google researcher Greg Corrado, and Stanford professor Andrew Ng founded an internal deep learning team that became known as Google Brain. Much of the early work happened in employees' so-called "20 percent time." The team built DistBelief, the precursor of TensorFlow, on top of Google's data center fabric. In June 2012 the New York Times reported that a 16,000-CPU cluster running a Brain-trained neural network had learned to recognise cats in YouTube thumbnails without supervision, an early demonstration of unsupervised representation learning at scale. In March 2013 Google hired Geoffrey Hinton and acquired his University of Toronto startup DNNResearch, bringing Hinton, Ilya Sutskever, and Alex Krizhevsky into Google's orbit; the trio's 2012 AlexNet paper had set off the modern computer-vision boom.[26]
The Google Brain era produced a string of papers that defined the field: word2vec (Mikolov et al., 2013), Sequence to Sequence Learning (Sutskever et al., 2014), the attention mechanism for sequence-to-sequence models (Bahdanau, Cho, and Bengio, 2014, from Yoshua Bengio's Montreal lab), and most consequentially the Transformer architecture in "Attention Is All You Need" by Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, and Polosukhin in June 2017. The Transformer paper has been cited more than 173,000 times and is the architectural foundation of every modern large language model, including ChatGPT, Claude, and Gemini itself.[27]
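The core operation of the 2017 Transformer paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A toy NumPy sketch with random matrices shows the primitive; real models add multi-head projections, masking, and positional information on top of it.

```python
import numpy as np

# Scaled dot-product attention: each query position produces a
# weighted mixture of the value vectors, with weights given by a
# softmax over query-key similarity.

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # mix the values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(Q, K, V)
assert out.shape == (4, 8)  # one output vector per query position
```

Because the softmax weights on each row are non-negative and sum to one, every output vector is a convex combination of the value rows, which is what makes the operation a differentiable form of soft lookup.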
In parallel, the London lab DeepMind, acquired in January 2014, pursued an explicitly research-led path toward artificial general intelligence. DeepMind's results in reinforcement learning and games became some of the most recognisable AI achievements of the decade.
| Year | System | Achievement |
|---|---|---|
| 2013 | DQN | First deep reinforcement learning agent to play Atari 2600 from raw pixels |
| 2016 | AlphaGo | Beat 18-time world Go champion Lee Sedol 4-1 in Seoul |
| 2017 | AlphaGo Zero | Mastered Go from self-play with no human data |
| 2017 | AlphaZero | Mastered chess, shogi, and Go from a single algorithm |
| 2019 | AlphaStar | Reached Grandmaster level in StarCraft II |
| 2020 | AlphaFold 2 | Solved protein structure prediction at CASP14 |
| 2022 | AlphaCode | Ranked within the top 54% of participants in Codeforces competitive programming |
| 2024 | AlphaProof, AlphaGeometry 2 | Silver-medal performance at International Mathematical Olympiad |
| 2024 | Nobel Prize in Chemistry | Awarded to Demis Hassabis and John Jumper for AlphaFold |
The Nobel Prize awarded to Hassabis and his colleague John Jumper in October 2024 for AlphaFold 2 was the first Nobel ever given for work done principally inside an industrial AI lab. AlphaFold has since released structure predictions for more than 200 million proteins, effectively the entire known proteome, through the AlphaFold Protein Structure Database run with EMBL-EBI.[28]
The April 2023 merger of Google Brain and DeepMind into Google DeepMind ended a long-running ambiguity about who inside Alphabet was responsible for foundation models. Hassabis became CEO of the merged unit and reports directly to Pichai. According to Hassabis, the two of them now speak "pretty much every day about strategic things and where should the technology go," and roadmaps are adjusted on a daily basis. Jeff Dean was named chief scientist for both Google Research and Google DeepMind. The merged unit is responsible for the entire Gemini family, the open-weight Gemma series, the Veo video models, the Imagen image models, the Lyria music models, and the Genie world model line.[29]
Google's AI portfolio in 2026 covers consumer applications, enterprise platforms, developer tooling, and on-device models. Almost all of these products share a common Gemini model substrate, with smaller distilled variants for cost and latency.
| Product | Type | Underlying model | Notes |
|---|---|---|---|
| Gemini app | Consumer assistant (web, Android, iOS) | Gemini 3 Pro and Gemini 3 Flash | 750M+ MAU as of Apr 2026 |
| AI Overviews / AI Mode | Search summaries and dialogue | Custom Gemini variant | Available in 200+ countries, 40+ languages |
| Vertex AI / Gemini Enterprise Agent Platform | Cloud ML and agent platform | All Gemini sizes plus third-party models | Includes Agent Builder, Model Garden, BigQuery integration |
| Google AI Studio | Developer prototyping IDE | Gemini, Imagen, Veo | Free tier with usage caps |
| NotebookLM | Source-grounded research assistant | Gemini 2.5 / 3 | Audio Overviews podcast generation |
| Gemini Code Assist | IDE coding assistant | Gemini 3 Pro | VS Code, JetBrains, GitHub integration |
| Imagen 3 / 4 | Image generation and editing | Imagen series | Bundled in Gemini app and Workspace |
| Veo 3 / 3.1 | Video generation with native audio | Veo series | Powers Vids, YouTube Shorts via Veo 3 Fast |
| Lyria 3 | Music generation | Lyria series | Available to Google AI Pro / Ultra |
| Gemini Nano | On-device LLM | Gemini Nano 4 | Ships in Pixel, Samsung Galaxy S25, Xiaomi 15 via AICore |
| Gemma 3 | Open-weight model family | Gemma series | Released for research and commercial use |
| Project Astra | Real-time multimodal agent | Gemini Live foundation | Voice and camera assistant |
The most consequential deployment of Gemini is inside Google Search. What started in May 2023 as the Search Generative Experience (SGE) became a generally available product called AI Overviews at Google I/O in May 2024 and was rebranded again at I/O 2025 with the addition of a full conversational AI Mode that lets users keep refining a query in dialogue. AI Overviews is now available in more than 200 countries and 40 languages. By February 2026 AI Overviews appeared on roughly 48% of all tracked queries, up from about 31% a year earlier. Education queries triggered an AI Overview 83% of the time and B2B technology queries 82% of the time. The trade-off is contested: a Pew Research Center study tracking 68,000 queries found that users clicked on results 8% of the time when an AI summary was present, compared with 15% without, a 46.7% relative reduction. Google has argued in response that AI Overviews drive more diverse traffic and more high-intent clicks, although publishers have generally disputed that framing.[30][31]
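The relative-reduction figure quoted from the Pew study follows directly from the two click-through rates:

```python
# Click-through rates reported by the Pew Research Center study cited
# above: searches with an AI summary present vs. absent.
ctr_with_overview = 0.08
ctr_without_overview = 0.15

relative_reduction = (ctr_without_overview - ctr_with_overview) / ctr_without_overview
assert round(relative_reduction * 100, 1) == 46.7  # % fewer clicks when a summary appears
```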
The consumer Gemini app, which evolved out of Bard, sits at the centre of Google's challenge to ChatGPT. As of April 2026 it had crossed 750 million monthly users, and the same Gemini models are wired into every Workspace app: drafting and summarising in Gmail; outlining, rewriting, and proofreading in Docs; building tables and writing formulas in Sheets; generating slides and images in Slides; taking notes and producing meeting summaries in Meet; and creating short videos in Vids using Veo 3.1. In January 2025 Google folded Gemini into every Business and Enterprise Workspace plan rather than charging a separate per-user AI add-on, raising the base price by roughly $2 per user but bundling AI as standard. Workspace had more than three billion users at the time of the change.[32]
Google Cloud's machine learning platform Vertex AI was rebranded in 2026 as the Gemini Enterprise Agent Platform. The platform offers a Model Garden with first-party Gemini, Imagen, Veo, Lyria, and Gemma models alongside open-weight third-party models from Meta, Mistral, and others, and (uniquely among hyperscalers) Anthropic's Claude family through a partnership announced in 2023 and expanded in 2025. It also provides Agent Builder for orchestrating multi-step agents, Vertex AI Search for retrieval-augmented generation over enterprise data, and tight coupling with BigQuery for SQL-driven ML. Gemini Enterprise paid monthly active users grew 40% quarter over quarter in Q1 2026.[33]
NotebookLM, originally an experiment from Google Labs called Project Tailwind in mid-2023, ships as a research assistant grounded in user-provided sources (PDFs, slides, YouTube videos, web pages, audio recordings). Its most-talked-about feature is Audio Overviews, which generates a roughly 10-minute conversational podcast between two synthetic hosts based on uploaded sources. Veo 3, released in May 2025, generates photorealistic short-form video from text prompts with native audio; a faster Veo 3 Fast variant powers AI generation in YouTube Shorts. Imagen 3 and Imagen 4 handle image generation and editing across Workspace and the Gemini app, and Lyria 3 generates licence-cleared music for Google Vids and YouTube creators.[34]
The Tensor Processing Unit is Google's family of in-house ML accelerators. The first generation TPU, designed under Norm Jouppi and disclosed at Google I/O 2016, was an inference-only chip used inside Google data centers from 2015 onward to serve workloads like Search ranking and Translate. Subsequent generations added BF16 training support and HBM (TPU v2, 2017), liquid cooling and larger pods (TPU v3, 2018), 3D-torus scaling to thousands of chips per pod (TPU v4, 2021), and large-scale frontier training (TPU v5p, 2023). The seventh generation, codenamed Ironwood and sold as TPU v7 / TPU7x, became generally available in early 2026 and is the first TPU explicitly described by Google as inference-first.
| TPU generation | Codename | Year | Peak BF16 / FP8 | Pod scale |
|---|---|---|---|---|
| TPU v1 | n/a | 2015 | 92 TOPS (INT8) | n/a (single chip) |
| TPU v2 | n/a | 2017 | 45 TFLOPS BF16 | 256 chips |
| TPU v3 | n/a | 2018 | 123 TFLOPS BF16 | 1,024 chips |
| TPU v4 | n/a | 2021 | 275 TFLOPS BF16 | 4,096 chips (3D torus) |
| TPU v5e / v5p | Viperfish | 2023 | 459 TFLOPS BF16 (v5p) | 8,960 chips (v5p) |
| TPU v6e | Trillium | 2024 | ~918 TFLOPS BF16 | 256+ chips per pod |
| TPU v7 | Ironwood | 2025-26 | ~2,300 TFLOPS BF16 (est.); 42.5 FP8 EFLOPS per superpod | 9,216 chips per superpod |
Ironwood ships with 192 GB of HBM per chip (six times Trillium) and 7.37 TB/s of HBM bandwidth per chip. A single Ironwood superpod aggregates 9,216 chips connected over Google's optical interconnect, delivering roughly 42.5 FP8 exaflops, which Google has positioned as the largest single training and inference fabric on the market. The TPU 8i variant unveiled at Cloud Next 2026 is an inference-tuned configuration with 1,152 TPUs per pod and three times more on-chip SRAM, optimised for serving long-context, agentic Gemini workloads at low latency.[35][36]
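A back-of-envelope check shows the quoted pod and per-chip figures are mutually consistent; note the per-chip FP8 rate is derived here from the pod numbers, not an official Google figure.

```python
# Consistency check on the Ironwood superpod figures quoted above.
chips_per_superpod = 9_216
superpod_fp8_flops = 42.5e18      # ~42.5 FP8 exaflops per superpod
hbm_per_chip_gb = 192             # HBM capacity per chip

# Implied per-chip FP8 throughput (derived, not an official number)
per_chip_fp8_tflops = superpod_fp8_flops / chips_per_superpod / 1e12
assert 4_500 < per_chip_fp8_tflops < 4_700   # ~4,600 TFLOPS FP8 per chip

# Aggregate HBM across a superpod, in decimal petabytes
pod_hbm_pb = chips_per_superpod * hbm_per_chip_gb / 1e6
assert round(pod_hbm_pb, 2) == 1.77          # ~1.77 PB of HBM per superpod
```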
In October 2025 Anthropic signed a deal with Google Cloud worth tens of billions of dollars to deploy more than one million Ironwood TPUs starting in 2026, adding more than a gigawatt of compute capacity to Anthropic's training stack alongside its existing AWS Trainium and NVIDIA GPU usage. In April 2026 Google announced an additional investment of up to $40 billion into Anthropic, of which $10 billion was committed immediately at a $350 billion Anthropic valuation, with another $30 billion contingent on performance milestones and a fresh five-gigawatt allocation of Google Cloud capacity over five years. Anthropic remains a customer rather than a subsidiary, and Google's stake is non-controlling.[5][37]
Google's Pixel line of smartphones, launched in October 2016, doubles as the company's reference platform for on-device AI. The Pixel 10, announced August 20, 2025, was the first phone to ship with the Tensor G5, a 3 nm system-on-chip designed by Google in partnership with TSMC (replacing the Samsung-fabricated G1 through G4). The G5 includes a TPU block roughly 60% more capable than its predecessor, runs an effective four-billion-parameter Gemini Nano 4 model entirely on-device, and powers features such as Magic Cue (proactive contextual suggestions), Voice Translate (real-time call translation), Camera Coach (composition guidance), and Gemini Live's visual mode. The Gemini Nano model used by Pixel and other Android devices is also exposed to third-party Android developers through the AICore service and ML Kit GenAI APIs, which give apps a managed runtime for summarisation, proofreading, rewriting, and image description without sending user data to the cloud.[38][39]
Google is one of the most acquisitive technology companies in history, with several hundred completed deals since 2001. The following are the acquisitions and investments most relevant to its current AI and platform strategy.
| Year | Target | Approximate price | Strategic role today |
|---|---|---|---|
| 2003 | Applied Semantics (AdSense) | $102M | Foundation of contextual ad targeting |
| 2005 | Android Inc. | ~$50M | Android operating system |
| 2005 | Where 2 Technologies (Maps) | undisclosed | Google Maps |
| 2005 | Keyhole | ~$35M | Google Earth |
| 2006 | YouTube | $1.65B | YouTube, ~$60B+ ads/subs revenue line |
| 2007 | DoubleClick | $3.1B | Display advertising stack |
| 2010 | AdMob | $750M | Mobile advertising |
| 2011 | Motorola Mobility | $12.5B | Sold to Lenovo in 2014 (patents retained) |
| 2013 | Waze | $1.3B | Crowd-sourced routing inside Maps |
| 2014 | Nest Labs | $3.2B | Folded into Google hardware |
| 2014 | DeepMind | ~$500M | Now Google DeepMind, runs Gemini |
| 2019 | Looker | $2.6B | BI inside Google Cloud |
| 2020 | Fitbit | $2.1B | Pixel Watch and health data |
| 2022 | Mandiant | $5.4B | Google Cloud security |
| 2023-26 | Anthropic (minority investment) | up to ~$43B in cash and compute | Strategic AI partner and large TPU customer |
| 2024 | Character.AI (talent and licensing) | reported $2.7B | Re-hired Noam Shazeer to lead Gemini work |
The Character.AI deal in August 2024 was structured the same way Microsoft's deal with Mustafa Suleyman's Inflection AI had been earlier that year: a non-equity transaction in which Google paid for a non-exclusive licence to Character.AI's technology and hired Noam Shazeer (one of the original Transformer co-authors) and most of the technical team back into Google DeepMind, where Shazeer became a co-lead of the Gemini effort. The structure avoided triggering merger review.
| Role | Person | Since |
|---|---|---|
| CEO, Alphabet and Google | Sundar Pichai | 2015 (Google), 2019 (Alphabet) |
| President and Chief Investment Officer, Alphabet | Ruth Porat | 2015 (CFO), 2023 (President) |
| CFO, Alphabet | Anat Ashkenazi | 2024 |
| CEO, Google DeepMind | Demis Hassabis | 2023 |
| Chief Scientist, Google and Google DeepMind | Jeff Dean | 2018 (SVP Research and AI), 2023 (Chief Scientist) |
| CEO, Google Cloud | Thomas Kurian | 2018 |
| CEO, YouTube | Neal Mohan | 2023 |
| CEO, Waymo | Tekedra Mawakana | 2024 (sole CEO; co-CEO from 2021) |
| Co-founders | Larry Page, Sergey Brin | 1998-present (controlling shareholders) |
Brin returned to a more active engineering role on Gemini in 2023, splitting his time between research review and infrastructure decisions. Although he holds no formal title, he is regularly seen at the Googleplex and in DeepMind's Mountain View office.
The modern frontier model competition in 2026 has narrowed to a small number of players that can credibly train and serve a foundation model at the GPT-4-and-above level. Stanford HAI's 2026 AI Index identifies five contenders entering 2026 (Google, OpenAI, Anthropic, Meta, xAI). By the end of Q1 2026 Google, OpenAI, and Anthropic were generally seen as separating from the field on absolute capability, with Meta delaying its flagship model in March 2026 after internal evaluations failed to keep up and xAI losing two co-founders in February.[40]
| Company | Frontier model (Apr 2026) | Compute supplier | Distribution channel |
|---|---|---|---|
| Google | Gemini 3.1 Pro, Gemini 3 Deep Think | Google TPUs (Ironwood / TPU v7) | Search AI Overviews, Gemini app (750M MAU), Workspace, Vertex AI / Gemini Enterprise |
| OpenAI | GPT-5.4 family | Microsoft Azure, Oracle, Google Cloud (multi-cloud) | ChatGPT, API, Microsoft 365 Copilot |
| Anthropic | Claude Sonnet 4.6, Claude Opus 4.6 | AWS Trainium, Google TPUs, NVIDIA | Claude.ai, API, AWS Bedrock, Vertex AI |
| Meta | Llama 4 series (delayed) | Custom plus NVIDIA H200/B200 | Meta apps, open weights |
| xAI | Grok 4 series | xAI Colossus (NVIDIA) | X (Twitter), grok.com, API |
Google's structural advantages in this competition include the largest captive distribution surface in the consumer internet (Search, YouTube, Android, Chrome, Workspace, Maps, Pixel), a vertically integrated silicon stack from the TPU upward, a pre-existing global data center footprint, and a $402 billion annual revenue base, anchored by advertising, that funds capex without external financing. Its structural disadvantages include the antitrust overhang from the United States v. Google search case and the parallel ad-tech case, the cannibalisation risk that AI Overviews pose to its own search advertising business, and the political pressure that comes with being the world's default information utility. Pichai has framed Google's strategy as making Gemini the default AI substrate across Google's existing surfaces rather than asking users to come to a new product.[41]
In August 2024 federal judge Amit P. Mehta in the District of Columbia ruled that Google had violated Section 2 of the Sherman Act by maintaining its monopoly in general search through exclusive default-search-engine contracts with Apple, Mozilla, Samsung, and US carriers. The remedies hearing took place over fifteen days in May 2025, and in September 2025 Mehta declined to order divestiture of either Chrome or Android but imposed a set of behavioural remedies, including a prohibition on exclusive distribution contracts for Search, Chrome, Google Assistant, and Gemini, and a requirement that Google share certain search data with qualified competitors. Both Google and the Department of Justice appealed in early 2026, with the DOJ continuing to seek a Chrome divestiture. A separate jury trial in Virginia in 2024 found Google liable for monopolisation in parts of its ad-tech stack, and remedies in that case remain pending.[42]
Alphabet's 2025 financial year crossed several thresholds. Full-year revenue exceeded $400 billion for the first time at $402.8 billion, up roughly 15% year over year. Q4 2025 revenue alone was $113.8 billion, growing 18% (17% in constant currency), with Q4 net income of $34.5 billion (up 30%). Google Services Q4 revenue was $95.9 billion (+14%), Google Search and other was up 17%, YouTube ads were up 9%, and Google subscriptions, platforms, and devices were up 17%. Google Cloud Q4 revenue was $17.7 billion, up 48% year over year. YouTube ads plus subscriptions exceeded $60 billion for full-year 2025.[3][4]
| Line item | FY2025 | FY2024 | Change |
|---|---|---|---|
| Total revenue | $402.8B | $350.0B | +15% |
| Operating income | ~$135B | ~$112B | +21% |
| Net income | ~$124B | ~$100B | +24% |
| Capex | ~$78B | ~$53B | +47% |
| Capex guide for 2026 | $175-185B | n/a | ~2.2-2.4x vs 2025 |
| R&D | ~$60B | ~$50B | +20% |
| Headcount | ~190,800 | ~183,300 | +4% |
The most striking line in the 2025 results is the 2026 capital expenditure guide of $175 to $185 billion, even the low end of which would be more than double 2025's spend of roughly $78 billion. The vast majority of that capex is being directed to AI data centers, custom silicon procurement (TPUs and select NVIDIA GPUs), networking, and the underlying real estate and power deals that AI training and serving now require. Pichai has publicly defended the spending against investor concerns about returns by pointing to the $240 billion contracted Cloud backlog and the rate at which Gemini products are scaling inside both Workspace and Search.[21]
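As a quick sanity check, the year-over-year percentages in the table above can be reproduced from the rounded dollar figures. The sketch below uses the reported values from this article (all in billions); it is purely illustrative arithmetic, not Alphabet's own reporting methodology:

```python
def yoy_growth(current: float, prior: float) -> float:
    """Year-over-year growth as a percentage."""
    return (current - prior) / prior * 100

# Figures (in $B) as reported in the table above
revenue_growth = yoy_growth(402.8, 350.0)  # ~15%, matching the reported +15%
capex_growth = yoy_growth(78.0, 53.0)      # ~47%, matching the reported +47%

# The 2026 capex guide of $175-185B relative to 2025's ~$78B spend:
capex_multiple_low = 175 / 78
capex_multiple_high = 185 / 78
```

Both ends of the guided range come out above 2.2 times 2025's actual spend, which is why the guide, rather than any single revenue line, dominated analyst commentary on the quarter.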
Google has been carbon neutral in operations since 2007 and has matched 100% of its annual electricity consumption with renewable energy purchases since 2017, but the AI buildout has put pressure on those commitments. Alphabet's 2024 Environmental Report disclosed that Scope 2 emissions had risen substantially because of data center growth, and the company has since signed long-term supply deals for next-generation nuclear (small modular reactors via Kairos Power, October 2024) and large solar and geothermal projects. Internally Google reports that on-device Gemini Nano in Pixel reduces inference electricity use per query by orders of magnitude compared with cloud inference, although the absolute number of inferences served has grown faster than per-query efficiency.
On safety and policy, Google DeepMind operates a Responsibility and Safety Council chaired by Hassabis and publishes Responsible AI Reports alongside major model releases. Google was a founding member of the Frontier Model Forum (2023) with OpenAI, Anthropic, and Microsoft, and has signed onto the White House voluntary AI commitments and the EU AI Act compliance framework. The company has also been a frequent target of regulatory action, with antitrust enforcement in the United States and Europe, content moderation rules under the EU Digital Services Act, AI watermarking and disclosure requirements in several jurisdictions, and ongoing questions about copyright in the training data used for Gemini and Imagen.
Google's cultural reach is hard to overstate. The verb "to google" was added to the Oxford English Dictionary in 2006 and the Merriam-Webster Dictionary the same year. The company's twenty-percent time policy, its informal motto "Don't be evil" (later softened to "Do the right thing" in Alphabet's code of conduct), its colourful campus, the annual April Fool's pranks, and its Doodles (which have included interactive games, mini synthesisers, and educational tributes) became templates for how a 21st-century technology company presents itself. Google Search itself has been described by writers like Ezra Klein and Ben Thompson as the closest thing the modern internet has to a public utility, which is part of what makes the AI Overviews shift so consequential: changes to Google Search ripple through every publisher, retailer, and small business that depends on organic traffic.