Stanford University, formally Leland Stanford Junior University, is a private research university in Stanford, California, on the San Francisco Peninsula adjacent to Palo Alto. Founded in 1885 by railroad magnate and former California governor Leland Stanford and his wife Jane Lathrop Stanford as a memorial to their only son, the university opened to its first students on October 1, 1891. Stanford's 8,180-acre campus, often called "the Farm" because it was carved out of the Stanfords' Palo Alto Stock Farm, sits at the center of Silicon Valley and has been one of the most consequential institutions in the modern history of computing and artificial intelligence.
Stanford's contributions to AI span more than six decades. The Stanford Artificial Intelligence Laboratory (SAIL), founded in 1963 by John McCarthy, the computer scientist who coined the term "artificial intelligence," was the seedbed for early breakthroughs in symbolic reasoning, robotics, computer vision, expert systems, and natural language processing. In the deep learning era, Stanford faculty and alumni built ImageNet, founded Coursera and Udacity, ran Google Brain and Baidu's AI division, and went on to lead Google, NVIDIA, Anthropic, and Sakana AI. In 2019 the university launched the Institute for Human-Centered Artificial Intelligence (HAI), and in 2021 the Center for Research on Foundation Models (CRFM) coined the term "foundation models" that now anchors the modern vocabulary of AI.
Stanford was endowed by Leland Stanford and Jane Lathrop Stanford after the death of their fifteen-year-old son, Leland Stanford Jr., from typhoid fever in Florence in 1884. Within months of his death, the Stanfords resolved that "the children of California shall be our children" and committed the bulk of their fortune, including the 8,180-acre Palo Alto Stock Farm in northern Santa Clara County, to a memorial university. The grant of endowment was signed in November 1885, and the university opened on October 1, 1891 with 555 registered students. From the outset Stanford was unusual among elite American universities. It was coeducational at a time when most private universities admitted only men, nondenominational when most were tied to a religious body, and explicitly practical, designed to produce useful citizens for the new industrial West.
The master plan for the campus was developed by landscape architect Frederick Law Olmsted, working with Leland Stanford and the architect Charles Allerton Coolidge. Olmsted argued for a flatlands quadrangle of low sandstone buildings linked by long arcades and red-tiled roofs, an aesthetic borrowed from California's Spanish missions. The Main Quad and Memorial Church remain the symbolic heart of the campus today.
Stanford grew slowly through its first half-century, surviving the death of Leland Stanford in 1893, a near-bankruptcy that forced Jane Stanford to pay faculty salaries from her own income, and the catastrophic 1906 San Francisco earthquake which damaged the original arches and destroyed the first version of Memorial Church. The transformative figure of the postwar period was Frederick Terman, dean of engineering and later provost, who actively encouraged faculty and graduates to start companies in the surrounding orchards. Hewlett-Packard, which Bill Hewlett and Dave Packard founded in a Palo Alto garage in 1939 with Terman's encouragement, is usually marked as the first Silicon Valley company. By the 1960s the university stood at the geographic and intellectual center of an emerging computing industry, and that proximity would shape every subsequent chapter of its work in AI.
John McCarthy joined the Stanford faculty in 1962 after several years at MIT, where he had invented the LISP programming language; earlier, while on the faculty of Dartmouth College, he had organized the 1956 Dartmouth Summer Research Project on Artificial Intelligence. In 1963 he established the Stanford University Artificial Intelligence Project, which was renamed the Stanford Artificial Intelligence Laboratory (SAIL) in 1971. McCarthy directed the lab from 1965 until 1980. SAIL was housed in the D.C. Power Building in the Stanford foothills, a remote facility that gave the lab a distinctive, almost monastic culture. It was one of the original ARPANET nodes and one of the first communities anywhere to live and work inside an interactive, time-shared computing environment.
SAIL's research agenda in its first two decades was extraordinarily broad. The lab's faculty and students made foundational contributions to expert systems, knowledge representation, robotics, computer vision, computer music, computer-generated typesetting, speech recognition, and computer chess. SAIL produced its own dialect of LISP, known as Stanford LISP 1.6, which ran on the lab's PDP-10 computers and was widely distributed across the early ARPANET. SAIL also produced a programming language of the same name, the Stanford Artificial Intelligence Language, designed by Dan Swinehart and Bob Sproull in 1970. In the late 1970s, a Stanford team led by Richard P. Gabriel collaborated with Lawrence Livermore National Laboratory on the S-1 Lisp project, and that work fed directly into the standardization of Common Lisp in the 1980s, alongside parallel efforts at MIT, Carnegie Mellon, and the New Implementation of Lisp (NIL) project. Berkeley contributed Franz Lisp during the same period, and the friendly Stanford-Berkeley LISP rivalry was a defining feature of West Coast AI for two decades.
Several specific projects from SAIL's early years became landmarks in the field. Cordell Green's PhD thesis, supervised by McCarthy, demonstrated how to build a question-answering system on top of a resolution theorem prover, an early demonstration of the logic-programming idea later realized in Prolog. Edward Feigenbaum and his collaborators built DENDRAL, the first widely cited expert system, which inferred molecular structures from mass spectrometry data and led directly to MYCIN and the broader expert-systems boom of the 1980s. Terry Winograd's SHRDLU, a natural language understanding program that manipulated a virtual world of colored blocks, was completed at MIT in 1970; Winograd later joined the Stanford faculty and became a key figure in the lab. Pat Hayes, who was a visiting scholar at the Stanford Center for the Study of Language and Information (CSLI) in the 1980s and a consulting professor in computer science from 1985 to 1994, worked with McCarthy on the philosophical foundations of representing time, change, and the famous frame problem.
The Stanford Cart, a battery-powered four-wheeled platform with an onboard television camera, became one of the iconic robots of early AI. Originally built in the early 1960s to study remote control of a hypothetical lunar rover from Earth, the Cart was repurposed under Hans Moravec, who joined Stanford as a graduate student in 1971. From 1973 to 1980 Moravec rebuilt the Cart as an autonomous mobile robot driven by stereo vision. In 1979, after years of incremental progress, his Cart successfully crossed a chair-cluttered room without human intervention, taking five hours to traverse roughly 20 meters by stopping every meter to take a fresh stereo photograph and recompute its world model. By the standards of any later autonomous vehicle the performance was glacial, but the Cart established the basic recipe of perception, mapping, planning, and execution that still organizes modern mobile robotics. Moravec went on to Carnegie Mellon, where he continued the work with the CMU Rover and articulated what is now known as Moravec's paradox.
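The Cart's stop-sense-replan cycle is the ancestor of the sense-map-plan-act loop that still organizes mobile robotics. A toy grid-world sketch of that loop is below; the data structures and the breadth-first planner are purely illustrative, not the Cart's actual software, which used stereo vision rather than a discrete grid:

```python
from collections import deque

def sense(obstacles, pos):
    """Toy 'stereo camera': report obstacle cells adjacent to pos."""
    x, y = pos
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (x + dx, y + dy) in obstacles}

def plan(pos, goal, known, size):
    """Breadth-first path from pos to goal avoiding known obstacles."""
    frontier, seen = deque([(pos, [])]), {pos}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in known and nxt not in seen):
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # no route given current knowledge

def run_cart(obstacles, start, goal, size=5):
    """Stop, sense, update the world model, replan, move one step; repeat."""
    pos, world_model = start, set()
    for _ in range(4 * size * size):           # generous step budget
        if pos == goal:
            return True
        world_model |= sense(obstacles, pos)   # integrate new observations
        path = plan(pos, goal, world_model, size)
        if path is None:
            return False                       # unreachable as far as we know
        pos = path[0]                          # execute only the first step
    return False

# The cart discovers the obstacles as it goes and still reaches the corner.
reached = run_cart({(1, 1), (2, 2)}, (0, 0), (4, 4))
```

Because sensing covers every adjacent cell before each move, the sketch never steps onto an undiscovered obstacle, mirroring the Cart's conservative one-meter-at-a-time progress.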
A quarter century after the Cart, Stanford's robotics group made history again with Stanley, the autonomous Volkswagen Touareg that won the 2005 DARPA Grand Challenge. The Grand Challenge offered a two-million-dollar prize to any team that could build a fully autonomous vehicle capable of completing a 132-mile course through the Mojave Desert. The 2004 challenge ended with no entrant getting more than a few miles down the course. In 2005 Stanley, built by a team led by Sebastian Thrun, then director of SAIL, finished first in just under seven hours. Stanley combined laser range finders, a video camera, GPS, and inertial sensors with machine learning techniques that let the system learn the appearance of safe versus unsafe terrain on the fly. Stanley is now exhibited at the Smithsonian's National Museum of American History, and its software directly seeded the team that became Google's self-driving car program, eventually spun out as Waymo in 2016. Stanford's follow-up entry, Junior, placed second in the 2007 DARPA Urban Challenge.
The Stanford Natural Language Processing Group, founded by Christopher Manning, became one of the most influential NLP groups in the world. Manning is the inaugural Thomas M. Siebel Professor in Machine Learning, holds joint appointments in linguistics and computer science, and served as director of SAIL from 2018 to 2024. The group's research underpins much of modern statistical and neural NLP. The Stanford Parser, the Stanford CoreNLP toolkit, the Stanford Sentiment Treebank, the SQuAD reading comprehension dataset, and the GloVe word embeddings developed with Jeffrey Pennington and Richard Socher are all standard tools in the field. Manning's CS224N course on natural language processing with deep learning, taught annually at Stanford and posted publicly, has educated tens of thousands of practitioners worldwide.
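Word embeddings such as GloVe represent each word as a dense vector and measure relatedness by cosine similarity. A minimal sketch with made-up three-dimensional vectors (real GloVe embeddings are learned from co-occurrence statistics and typically have 50 to 300 dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity: dot product divided by the vectors' lengths."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up toy vectors for illustration only.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.2, 0.9],
}
sim_royal = cosine(vectors["king"], vectors["queen"])  # near 1: similar
sim_cross = cosine(vectors["king"], vectors["apple"])  # lower: dissimilar
```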
The Stanford Vision Lab, founded by Fei-Fei Li when she joined Stanford in 2009, became the home of ImageNet and the annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The 2012 ILSVRC was won by AlexNet from the University of Toronto, an event that marked the practical beginning of the deep learning era and triggered the GPU-driven AI boom that continues today. ImageNet itself was begun by Li in 2007 while she was at Princeton and grew under her leadership at Stanford to more than 14 million labeled images organized along the WordNet hierarchy. The Vision Lab also produced influential work on visual question answering, dense captioning, and the Visual Genome dataset.
A later institutional addition was the Stanford DAWN project, a five-year industrial affiliates program founded in 2017 by Christopher Re and Matei Zaharia, then at Stanford, with collaborators including Kunle Olukotun and Peter Bailis. DAWN focused on tools and infrastructure that would let domain experts build production-grade machine learning systems without an army of specialists. The DAWNBench benchmark suite, released in 2017, was the first widely cited benchmark for end-to-end deep learning training time and cost across hardware, frameworks, and clouds. DAWN researchers went on to found or lead Snorkel AI, SambaNova Systems, and other companies that anchor the modern MLOps and ML systems landscape; Zaharia had co-founded Databricks before the project began.
In October 2018 Stanford announced the creation of an institute that would put humans at the center of artificial intelligence research, education, and policy. The Stanford Institute for Human-Centered Artificial Intelligence (HAI) launched in March 2019 under co-directors Fei-Fei Li, professor of computer science and former director of SAIL, and John Etchemendy, professor of philosophy and former Stanford provost. The institute was conceived as a deliberate counterweight to a field that had become dominated by industry labs at Google, Facebook, and Microsoft and that had drawn little input from the humanities or social sciences. The launch event drew Bill Gates, then-California Governor Gavin Newsom, and Reid Hoffman, and the institute set an explicit fundraising target of more than one billion dollars, with the funds earmarked for research grants, faculty hiring, computing resources, and policy work.
HAI today coordinates an interdisciplinary faculty of around 80 affiliated professors drawn from the schools of engineering, humanities and sciences, medicine, law, business, and education. Its activities span seed grants for cross-disciplinary research, the Stanford Digital Economy Lab, the RegLab for AI-assisted government, an AI policy presence in Washington and Sacramento, and a steady stream of policy briefs that have become a reference for legislators. In 2025, HAI absorbed the operations of SAIL, with Carlos Guestrin appointed director of SAIL and tasked with integrating the lab's research community into HAI's broader interdisciplinary mission. The combined entity is now one of the largest concentrations of academic AI talent in the world.
One of HAI's flagship products is the AI Index, an annual report initiated in 2017 by Yoav Shoham, Erik Brynjolfsson, Jack Clark, and others, and published since 2019 under HAI auspices. The Index assembles vetted data on AI progress, investment, hiring, education, public opinion, and policy, and has become a reference document cited by governments, journalists, and corporate strategists. The 2025 edition, the eighth in the series, ran to several hundred pages and tracked benchmark performance across reasoning, multimodal understanding, code generation, and scientific discovery, alongside chapters on responsible AI, the economy, science, education, and policy. Erik Brynjolfsson, the Jerry Yang and Akiko Yamazaki Professor at HAI and at the Stanford Graduate School of Business, leads much of the economics and adoption analysis.
In August 2021 a group of more than 100 Stanford researchers led by Percy Liang, Rishi Bommasani, and Christopher Manning published a 200-page interdisciplinary report titled "On the Opportunities and Risks of Foundation Models." The report introduced the term "foundation models" to describe large neural networks like BERT, GPT-3, CLIP, and Codex that are trained on broad data at scale and can be adapted to a wide range of downstream tasks. The terminology was controversial at first but quickly became standard in academic, regulatory, and industry usage, including in the European Union AI Act and the United States Executive Order on AI.
The report was published by the Center for Research on Foundation Models (CRFM), launched the same month as an interdisciplinary initiative within HAI. CRFM is directed by Percy Liang and brings together researchers from across more than ten Stanford departments to study the model architectures, training procedures, evaluation, and societal implications of foundation models. CRFM has shipped a steady stream of widely used artifacts, including the open-source HELM (Holistic Evaluation of Language Models) benchmark, which evaluates language models across accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency under standardized conditions. Sub-projects include MedHELM for medical reasoning, VHELM for vision-language models, and HELM Lite for low-cost evaluation. CRFM also co-developed Alpaca, a low-cost instruction-tuned LLaMA derivative that demonstrated in 2023 how cheaply academic groups could replicate the chat behavior of much larger commercial systems, and continues to publish the influential Foundation Model Transparency Index.
Stanford's AI faculty has included many of the most cited researchers in the field. The table below highlights notable past and present faculty whose work has shaped one or more eras of AI.
| Faculty | Stanford role | Major contributions |
|---|---|---|
| John McCarthy | Professor 1962-2011, founded SAIL 1963 | Coined "artificial intelligence," invented LISP, situation calculus, time-sharing concept |
| Edward Feigenbaum | Professor 1965-2000 | Father of expert systems, DENDRAL, MYCIN, 1994 Turing Award |
| Pat Hayes | Visiting scholar CSLI, consulting professor 1985-1994 | Frame problem, qualitative physics, Common Logic, naive physics manifesto |
| Terry Winograd | Professor 1973-2018 | SHRDLU, founding member of CSLI, advised Larry Page during PhD |
| Joshua Lederberg | Professor of genetics 1958-1978 | Co-developer of DENDRAL, Nobel laureate |
| Cordell Green | PhD 1969, faculty 1968-1972 | Question-answering theorem prover, automatic programming |
| Hans Moravec | PhD student 1971-1980 | Stanford Cart, mobile robotics, Moravec's paradox |
| Sebastian Thrun | Professor 2003-2011, SAIL director 2004-2011 | Stanley DARPA Grand Challenge winner, Google self-driving program, founded Udacity |
| Andrew Ng | Faculty since 2002 | CS229 ML course, founded Google Brain, co-founded Coursera, deep learning education at scale |
| Daphne Koller | Professor 1995-2018 | Probabilistic graphical models textbook, co-founded Coursera, founded Insitro |
| Fei-Fei Li | Faculty since 2009, SAIL director 2013-2018, HAI co-director since 2019 | ImageNet, Stanford Vision Lab, AI4ALL, co-founded HAI, founded World Labs in 2024 |
| Christopher Manning | Faculty since 1999, SAIL director 2018-2024 | Stanford NLP Group, GloVe, CoreNLP, SQuAD, CS224N, HAI associate director |
| Percy Liang | Faculty since 2012 | CRFM founding director, HELM benchmark, coined "foundation models," Codex evaluation |
| Carlos Guestrin | Faculty since 2024, SAIL director since 2025 | XGBoost, LIME, founded Turi (acquired by Apple), led Apple ML, Fortinet Founders Professor |
| Stefano Ermon | Faculty since 2014 | Generative models, diffusion models, sustainability and AI, MacArthur Fellow 2023 |
| Chelsea Finn | Faculty since 2019 | Meta-learning, MAML algorithm, robot learning, IRIS Lab, co-founder of Physical Intelligence |
| Dorsa Sadigh | Faculty since 2018 | Human-robot interaction, value alignment, ILIAD lab |
| James Zou | Faculty since 2017 | Biomedical machine learning, fairness, HAI faculty, Sloan Fellow |
| Erik Brynjolfsson | Faculty since 2020 | Digital economy, AI Index, Stanford Digital Economy Lab director |
| Yoav Shoham | Professor since 1987, now emeritus | Multi-agent systems, game theory and AI, co-founded AI21 Labs, AI Index co-founder |
| Russ Altman | Faculty since 1990 | Biomedical AI, BioMedNet, HAI faculty, AAAS Fellow |
| Emma Brunskill | Faculty since 2017 | Reinforcement learning, AI for education |
| Tatsunori Hashimoto | Faculty since 2021 | Robust ML, foundation model alignment, Alpaca |
| Jure Leskovec | Faculty since 2009 | Graph neural networks, Pinterest, founded Kumo.ai |
| Noah Goodman | Faculty since 2010 | Computational cognitive science, probabilistic programming, Pyro |
Stanford's AI work is distributed across many overlapping research groups and centers. The table below lists the most prominent.
| Lab or center | Founded | Focus |
|---|---|---|
| Stanford Artificial Intelligence Laboratory (SAIL) | 1963 (as project), 1971 (renamed) | Umbrella lab for AI research; merged with HAI in 2025 |
| Stanford Vision and Learning Lab (SVL) | 2009 | Computer vision, scene understanding, Visual Genome, ImageNet |
| Stanford NLP Group | 1999 | Statistical and neural NLP, GloVe, CoreNLP, SQuAD |
| Center for the Study of Language and Information (CSLI) | 1983 | Logic, language, philosophy of mind |
| Institute for Human-Centered AI (HAI) | 2019 | Cross-school AI policy, ethics, applications |
| Center for Research on Foundation Models (CRFM) | 2021 | Foundation model research, HELM, Alpaca, transparency index |
| Stanford DAWN | 2017 | ML systems, MLOps, DAWNBench |
| RegLab | 2019 | AI for legal and regulatory work |
| Stanford Digital Economy Lab | 2020 | AI and labor markets, productivity research |
| AI Lab Robotics Group | 1972 | Manipulation, mobile robots, surgical robotics |
| Stanford Intelligent Systems Lab (SISL) | 2009 | Decision-making under uncertainty, autonomous flight |
| Stanford AI for Health and Medicine Lab | 2017 | Medical imaging, clinical NLP |
| Stanford Existential Risks Initiative (SERI) | 2020 | Long-term safety, AI risk |
Stanford's proximity to Silicon Valley and a long tradition of faculty leave-of-absence have made the university one of the single most important sources of AI startups in the world. The table below sketches some of the most consequential companies founded or led by Stanford-affiliated AI researchers.
| Company | Founded | Stanford connection |
|---|---|---|
| Hewlett-Packard | 1939 | Bill Hewlett and Dave Packard, EE alumni; encouraged by Frederick Terman |
| Sun Microsystems | 1982 | Co-founded by Stanford MBA graduates Vinod Khosla and Scott McNealy; Andy Bechtolsheim designed the SUN-1 workstation as a Stanford graduate student |
| NVIDIA | 1993 | Co-founder and CEO Jensen Huang earned Stanford EE master's in 1992 |
| Yahoo! | 1994 | Founded by Stanford EE PhD students Jerry Yang and David Filo |
| Google | 1998 | Founded by Stanford CS PhD students Larry Page and Sergey Brin, whose BackRub research project began at Stanford in 1996 |
| Cuil | 2008 | Founded by ex-Google search team including Stanford alumna Anna Patterson |
| Coursera | 2012 | Co-founded by Stanford CS professors Andrew Ng and Daphne Koller |
| Udacity | 2012 | Co-founded by SAIL director Sebastian Thrun after he taught online AI to 160,000 students |
| Insitro | 2018 | Founded by former Stanford CS professor Daphne Koller; AI-driven drug discovery |
| Landing AI | 2017 | Founded by Andrew Ng; industrial computer vision |
| DeepLearning.AI | 2017 | Founded by Andrew Ng; AI education company behind Coursera ML specializations |
| Anthropic | 2021 | CEO Dario Amodei held a postdoctoral fellowship in computational neuroscience at Stanford under Surya Ganguli before OpenAI |
| AI21 Labs | 2017 | Co-founded by emeritus Stanford CS professor Yoav Shoham |
| Sakana AI | 2023 | Co-founder David Ha was a Stanford GSB MBA graduate before joining Google Brain |
| Snorkel AI | 2019 | Spun out of Stanford DAWN by Christopher Re and Alex Ratner |
| World Labs | 2024 | Founded by Fei-Fei Li on leave from Stanford to build spatial intelligence models |
| Physical Intelligence | 2024 | Co-founded by Stanford professor Chelsea Finn for robot foundation models |
| Together AI | 2022 | Co-founded by Christopher Re of Stanford for open-source AI compute |
| Kumo.ai | 2021 | Founded by Stanford CS professor Jure Leskovec for graph neural networks |
| Covariant | 2017 | Co-founded by Pieter Abbeel, who completed his Stanford PhD under Andrew Ng before Berkeley |
| DoorDash, Robinhood | 2010s | Co-founded by Stanford alumni; not pure AI companies but heavy ML users |
Many of these founders moved between academia and industry several times. Andrew Ng, for example, completed his PhD at Berkeley, joined Stanford in 2002, founded the Google Brain deep learning project with Jeff Dean and Greg Corrado from 2011 to 2012, then served as chief scientist at Baidu from 2014 to 2017, then returned full circle to Stanford as adjunct professor while running Coursera, the AI Fund, Landing AI, and DeepLearning.AI. Daphne Koller similarly moved from a Stanford professorship to Coursera, then to Calico, then to Insitro. The pattern of senior faculty rotating into industry leadership and back has reinforced Stanford's outsized influence on the structure of the AI industry.
No company illustrates Stanford's relationship with AI more clearly than Google. Larry Page arrived at Stanford as a computer science PhD student in 1995 and was given a campus tour by a second-year PhD student named Sergey Brin. The two began collaborating in 1996 on a research project that Page initially called BackRub, which crawled the early web to count backlinks as a proxy for site importance. Page and Brin developed the PageRank algorithm under the supervision of Terry Winograd, Rajeev Motwani, and Hector Garcia-Molina, all of them senior Stanford faculty. By 1997 the renamed search engine google.stanford.edu had become so popular that it was straining the campus network. Page and Brin took a leave of absence in 1998, incorporated Google in a Menlo Park garage, and never returned to finish their dissertations.
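The idea behind PageRank can be sketched in a few lines: a page's score is the stationary probability of a random surfer who mostly follows links and occasionally jumps to a random page. A simplified power-iteration version follows; it is illustrative only, not Google's implementation, and the toy graph is invented:

```python
def pagerank(links, damping=0.85, tol=1e-9):
    """Power-iteration PageRank over a graph {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    while True:
        # Rank mass from dangling pages (no outlinks) is spread uniformly.
        dangling = sum(rank[p] for p in pages if not links[p])
        new = {p: (1 - damping) / n + damping * dangling / n for p in pages}
        for p in pages:
            if links[p]:
                # Each page shares its rank equally among its outlinks.
                share = damping * rank[p] / len(links[p])
                for q in links[p]:
                    new[q] += share
        if sum(abs(new[p] - rank[p]) for p in pages) < tol:
            return new
        rank = new

# Toy web: A links to B, B links to C, C links to both A and B.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["A", "B"]})
```

Here B outranks A because B receives all of A's rank plus half of C's, while A receives only half of C's, capturing the backlinks-as-importance intuition of the original BackRub project.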
The deeper connection lasted long after the founding. Among Google's earliest employees was Marissa Mayer, a Stanford symbolic systems graduate who went on to lead the company's search products. Andrew Ng founded the Google Brain deep learning project in 2011 while still on the Stanford faculty. Sebastian Thrun launched Google's self-driving car program (later Waymo) in 2009 and went on to lead Google X (later simply X), all while still affiliated with the university. Jeff Dean, although a University of Washington PhD, has been a frequent Stanford collaborator. Even after the Alphabet restructuring, Stanford alumni and faculty continue to populate the senior ranks of Google DeepMind, Google Research, and the various Alphabet bets.
Jensen Huang completed a Stanford master's degree in electrical engineering in 1992 while working as a microchip designer at LSI Logic. The following year he co-founded NVIDIA with Chris Malachowsky and Curtis Priem at a Denny's restaurant in San Jose. NVIDIA's GPUs became the dominant compute substrate of the deep learning era after Alex Krizhevsky used two GeForce GTX 580 cards to train AlexNet on Stanford's ImageNet in 2012. Huang has remained closely tied to Stanford, donating the Jen-Hsun Huang Engineering Center in 2010 and continuing to mentor early-career engineers there.
Stanford's AI curriculum is one of the largest and most copied in higher education. The CS229 graduate-level machine learning course, originally created by Andrew Ng, routinely enrolls more than 1,000 students per offering on campus and has been viewed millions of times online. The 2011 video edition of CS229 was the seed for Coursera's first course catalog. CS231N (convolutional networks for visual recognition), originally taught by Fei-Fei Li with Andrej Karpathy and Justin Johnson, became the de facto introduction to modern deep learning for computer vision and is required viewing in countless industry training programs. CS224N (natural language processing with deep learning) by Christopher Manning has played the same role for NLP, and CS25 (Transformers United), CS236 (deep generative models), CS336 (large language models from scratch), and CS329X (machine learning systems) cover the latest generations of foundation models. Stanford's symbolic systems undergraduate program, founded in the 1980s, blends computer science, linguistics, philosophy, and psychology and has produced an outsized number of leaders at OpenAI, Anthropic, and Google.
Stanford has become an institutional voice in the global conversation about AI policy. HAI hosts roundtables for members of the United States Congress, briefs European regulators on the AI Act, and convenes the annual State of AI in California summit. CRFM published the influential Foundation Model Transparency Index, which scores major model providers on disclosure across upstream data, model design, and downstream use. The annual AI Index Report from HAI is treated as a near-canonical statistical reference for the field. Stanford faculty including Fei-Fei Li, Erik Brynjolfsson, James Zou, and Daniel Ho have testified before the United States Senate and provided expert advice to the European Commission, the United Kingdom AI Safety Institute, and several United Nations bodies. The Hoover Institution at Stanford runs a parallel program on AI and national security, and the Stanford Cyber Policy Center contributes additional analyses on platform governance and AI safety.
Beyond founders and faculty, Stanford has educated an enormous number of practicing AI researchers and engineers, many of whom have gone on to leadership roles at the major AI labs and across the wider industry.
Stanford's main campus covers more than 8,000 acres and includes the original Main Quad, the Hoover Tower, Memorial Church, the Cantor Arts Center, and dozens of academic buildings. AI research is concentrated in the Gates Computer Science Building, the Packard Electrical Engineering Building, the Huang Engineering Center, the Bill Lane Center, and the Sapp Center for Science Teaching and Learning. HAI is housed in the new HAI Annex, completed in 2023. Stanford's research compute is provided through a combination of in-house clusters at the Stanford Research Computing Center, partnerships with cloud providers, and special grants of dedicated GPU time from NVIDIA, Google, and other industry partners. The Stanford library system holds the original SAIL technical reports and the McCarthy papers in its Special Collections.
Few universities can claim to have shaped a single technology more than Stanford has shaped artificial intelligence. McCarthy founded the field's vocabulary at Dartmouth in 1956 and brought it west when he moved to Stanford in 1962. Six decades of SAIL research seeded everything from expert systems and Lisp machines to mobile robotics and self-driving cars. Stanford-trained engineers founded Google, NVIDIA, Yahoo!, Coursera, Udacity, Insitro, Anthropic's leadership pipeline, Sakana, World Labs, and Physical Intelligence. The university coined the term "foundation models," runs the most cited annual statistical report on AI, and continues to produce both the open-source benchmarks (HELM, DAWNBench) and the policy frameworks that the rest of the field uses to argue with itself. As of 2026 the institution remains, by almost any measure, the single most influential academic node in the modern AI ecosystem.