The 2024 Nobel Prizes marked a historic turning point for the field of artificial intelligence. For the first time in the history of the Nobel Prizes, both the Physics and Chemistry awards recognized work rooted in AI and machine learning. On October 8, 2024, the Royal Swedish Academy of Sciences awarded the Nobel Prize in Physics to John J. Hopfield and Geoffrey Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks." The following day, October 9, the Nobel Prize in Chemistry was awarded to David Baker, Demis Hassabis, and John Jumper for breakthroughs in computational protein design and protein structure prediction. Together, these prizes acknowledged that AI had moved from a narrow subfield of computer science into a transformative force reshaping fundamental scientific research.
The two prizes recognized different but complementary threads in the history of AI. The Physics prize honored the theoretical and mathematical foundations laid in the 1980s that made modern deep learning possible, while the Chemistry prize honored the practical application of AI to solve one of biology's longest-standing challenges: predicting the three-dimensional structure of proteins from their amino acid sequences.
Both prizes carried a monetary award of 11 million Swedish kronor (approximately 1.1 million USD). The Physics prize was split equally between Hopfield and Hinton. The Chemistry prize was divided into two halves, with one half going to David Baker for computational protein design, and the other half shared jointly by Demis Hassabis and John Jumper for protein structure prediction using AlphaFold.
The award ceremony took place on December 10, 2024, at Konserthuset Stockholm (Stockholm Concert Hall) in Sweden, following the long-standing tradition of presenting the Nobel Prizes on the anniversary of Alfred Nobel's death.
| Prize | Laureate | Born | Affiliation | Contribution | Citation |
|---|---|---|---|---|---|
| Physics | John J. Hopfield | July 15, 1933, Chicago, Illinois, USA | Princeton University (emeritus) | Hopfield networks and associative memory | "For foundational discoveries and inventions that enable machine learning with artificial neural networks" |
| Physics | Geoffrey Hinton | December 6, 1947, London, UK | University of Toronto | Boltzmann machines, backpropagation, deep learning | "For foundational discoveries and inventions that enable machine learning with artificial neural networks" |
| Chemistry | David Baker | October 6, 1962, Seattle, Washington, USA | University of Washington | Computational protein design (Rosetta, Top7) | "For computational protein design" |
| Chemistry | Demis Hassabis | July 27, 1976, London, UK | Google DeepMind | AlphaFold for protein structure prediction | "For protein structure prediction" |
| Chemistry | John Jumper | 1985, Little Rock, Arkansas, USA | Google DeepMind | AlphaFold2 development and architecture | "For protein structure prediction" |
On October 8, 2024, the Royal Swedish Academy of Sciences announced that the Nobel Prize in Physics would go jointly to John J. Hopfield and Geoffrey E. Hinton. The official citation read: "for foundational discoveries and inventions that enable machine learning with artificial neural networks."
Ellen Moons, chair of the Nobel Committee for Physics, explained the rationale by stating that the laureates "used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories and find patterns in large data sets." The committee emphasized that the work was grounded in physics, specifically in the mathematics of spin glasses, energy functions, and statistical mechanics.
John Joseph Hopfield, born on July 15, 1933, in Chicago, Illinois, is an American physicist who spent his career bridging physics and neuroscience. He earned his bachelor's degree from Swarthmore College in 1954 and his PhD from Cornell University in 1958. Over the decades he held faculty positions at the University of California, Berkeley, Princeton University, and the California Institute of Technology before returning to Princeton, where he became the Howard A. Prior Professor of Molecular Biology and helped establish the Princeton Neuroscience Institute.
In 1982, Hopfield published a landmark paper in the Proceedings of the National Academy of Sciences titled "Neural networks and physical systems with emergent collective computational abilities." This paper introduced what is now known as the Hopfield network, a type of recurrent neural network that functions as a content-addressable (associative) memory.
The key insight behind the Hopfield network was its connection to physics. Hopfield drew on the mathematics of the Ising model, a well-studied system in statistical mechanics that describes how magnetic spins interact on a lattice. In the Ising model, each spin can be in one of two states (up or down), and the system tends to settle into configurations that minimize its total energy. Hopfield recognized that a network of interconnected binary neurons could be described using an analogous energy function. Patterns stored in the network correspond to local energy minima, and when the network receives a partial or noisy version of a stored pattern, it evolves dynamically toward the nearest energy minimum, effectively retrieving the complete stored pattern.
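The dynamics described above can be sketched in a few lines of NumPy. This is an illustrative toy, not Hopfield's original formulation in code: two patterns are stored with a Hebbian outer-product rule, and a corrupted probe is updated neuron by neuron until it settles into the nearest energy minimum.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: W[i,j] accumulates x_i * x_j over stored patterns; zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, x):
    """Hopfield energy E = -1/2 x^T W x; the update rule never increases it."""
    return -0.5 * x @ W @ x

def recall(W, x, steps=10):
    """Asynchronous updates: set each neuron to the sign of its input field."""
    x = x.copy()
    for _ in range(steps):
        for i in range(len(x)):
            x[i] = 1.0 if W[i] @ x >= 0 else -1.0
    return x

# Store two 8-neuron patterns of +/-1 and recover one from a corrupted probe.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]], dtype=float)
W = train_hopfield(patterns)
probe = patterns[0].copy()
probe[0] = -1.0  # flip one bit ("noisy" input)
restored = recall(W, probe)
```

Running the dynamics moves the probe downhill in energy until it coincides with the stored pattern, which is exactly the associative-memory behavior the 1982 paper described.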
This approach drew directly from the physics of spin glasses, disordered magnetic systems in which many competing interactions create a complex energy landscape with many local minima. The Sherrington-Kirkpatrick model of a spin glass, published in 1975, shares a structural similarity with the Hopfield network: both involve fully connected systems where each unit interacts with every other unit. Hopfield's contribution was to show that this physical framework could serve as a model for memory and computation.
The Hopfield network had a profound effect on the trajectory of AI research. Before its publication, the field of artificial intelligence was in what historians call an "AI winter," a period of reduced funding and diminished interest. Hopfield's work revitalized research into neural networks by demonstrating that ideas from physics could yield practical computational systems. The network's ability to recover complete patterns from partial inputs made it a compelling model for associative memory, and it inspired a wave of new research connecting physics, neuroscience, and computer science.
Geoffrey Everest Hinton, born on December 6, 1947, in London, England, is a British-Canadian computer scientist and cognitive psychologist widely known as the "Godfather of AI." He received his BA in Experimental Psychology from the University of Cambridge in 1970 and his PhD in Artificial Intelligence from the University of Edinburgh in 1978. In 1987, Hinton moved from the United States to Canada, partly motivated by his opposition to military funding of AI research during the Reagan administration. He joined the University of Toronto, where he would spend the bulk of his career.
Hinton's Nobel Prize-winning contribution centers on the Boltzmann machine, a type of stochastic neural network he developed between 1983 and 1985 with David Ackley and Terry Sejnowski. The paper "A Learning Algorithm for Boltzmann Machines" was published in the journal Cognitive Science in 1985. The Boltzmann machine extended Hopfield's ideas by adding hidden units (neurons not directly connected to inputs or outputs) and a learning algorithm inspired by statistical mechanics, specifically the Boltzmann distribution. This allowed the network to learn internal representations of data, discovering patterns and features without being explicitly told what to look for.
While the original Boltzmann machine was slow to train in large networks, Hinton later developed the restricted Boltzmann machine (RBM), which simplified the architecture by removing connections between units in the same layer. This made training practical and scalable. Hinton then showed that multiple restricted Boltzmann machines could be stacked to form deep belief networks, where the output of one RBM served as the training data for the next. This method of layer-by-layer pretraining was one of the key techniques that made deep learning feasible, long before the large-scale breakthroughs of the 2010s.
Beyond Boltzmann machines, Hinton made another foundational contribution to neural networks. In 1986, together with David Rumelhart and Ronald J. Williams, he co-authored the highly influential paper "Learning representations by back-propagating errors," published in Nature. While backpropagation as a mathematical concept had been proposed earlier by other researchers, the Rumelhart, Hinton, and Williams paper popularized it as a practical training method for multi-layer neural networks, demonstrating that hidden units could learn to represent meaningful features of the data. This paper became one of the most cited works in all of science and remains a cornerstone of modern neural network training.
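The core idea of the 1986 paper — propagate the output error backward through the chain rule to obtain a gradient for every weight — can be illustrated with a tiny two-layer network trained on XOR, a problem a single-layer network cannot solve. This is a minimal sketch, not the paper's code; the layer sizes and learning rate are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# XOR is not linearly separable, so the hidden units must learn useful features.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)  # hidden layer
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)  # output layer

losses = []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))
    # Backward pass: apply the chain rule layer by layer.
    d_out = (out - y) * out * (1 - out)   # error at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated back to the hidden layer
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)
```

The `d_h` line is the "back-propagating errors" step of the title: the output error, weighted by the connections, becomes the training signal for the hidden units.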
Hinton's work continued to bear fruit in the decades that followed. In 2012, his student Alex Krizhevsky, together with Hinton and Ilya Sutskever, developed AlexNet, a deep convolutional neural network that won the ImageNet Large Scale Visual Recognition Challenge by a wide margin. This victory is widely regarded as the event that launched the modern deep learning revolution, demonstrating that deep neural networks trained on GPUs could dramatically outperform traditional computer vision methods.
In 2018, Hinton shared the ACM A.M. Turing Award with Yoshua Bengio and Yann LeCun "for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing." The three are often referred to as the "Godfathers of Deep Learning."
The Nobel Committee justified awarding the Physics prize for what many consider computer science work by emphasizing the deep roots of both Hopfield's and Hinton's contributions in statistical physics. Hopfield networks are mathematically equivalent to spin glass systems, and the energy function that governs their dynamics is borrowed directly from the Ising model. Boltzmann machines are named after Ludwig Boltzmann, the 19th-century physicist whose statistical mechanics provides the theoretical framework for the learning algorithm. The committee argued that both laureates used "tools from physics" to build the theoretical foundations that made machine learning possible.
Ellen Moons noted in the announcement that "the laureates' work has already been of the greatest benefit" and that "in physics we use artificial neural networks across a wide range of areas, including developing new materials with specific properties."
On October 9, 2024, one day after the Physics announcement, the Royal Swedish Academy of Sciences awarded the Nobel Prize in Chemistry to three scientists. One half of the prize went to David Baker "for computational protein design." The other half was awarded jointly to Demis Hassabis and John Jumper "for protein structure prediction."
The Chemistry Nobel recognized two related but distinct achievements. Baker's work demonstrated that entirely new proteins, ones never seen in nature, could be designed computationally from scratch. Hassabis and Jumper's work, through the AI system AlphaFold, solved the long-standing challenge of predicting the three-dimensional structure of a protein from its amino acid sequence alone.
Proteins are the molecular machines of life. They are built from chains of amino acids that fold into specific three-dimensional shapes, and these shapes determine how proteins function. In 1961, Christian Anfinsen demonstrated that a protein's amino acid sequence alone contains all the information needed to determine its folded structure. For this discovery, Anfinsen received the 1972 Nobel Prize in Chemistry. His work established what became known as the "thermodynamic hypothesis" and implied that it should be possible, in principle, to predict a protein's structure from its sequence.
However, predicting how a chain of amino acids would fold turned out to be extraordinarily difficult. The number of possible configurations for even a small protein is astronomically large, a problem sometimes called Levinthal's paradox. For decades, determining protein structures required laborious experimental techniques such as X-ray crystallography, nuclear magnetic resonance (NMR) spectroscopy, or cryo-electron microscopy, each of which could take months or years for a single protein.
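The scale of the search problem is easy to illustrate with Levinthal's own style of back-of-the-envelope arithmetic (the numbers below are the conventional illustrative choices, not measurements): with roughly three accessible backbone conformations per residue, a 100-residue chain has about 3^100 ≈ 5 × 10^47 configurations.

```python
# Levinthal's back-of-envelope estimate (illustrative numbers, not exact physics).
conformations_per_residue = 3
residues = 100
total = conformations_per_residue ** residues  # ~5.15e47 configurations

sample_time_s = 1e-15                 # suppose one femtosecond per configuration
seconds_to_enumerate = total * sample_time_s
age_of_universe_s = 4.35e17           # ~13.8 billion years

ratio = seconds_to_enumerate / age_of_universe_s
print(f"{total:.2e} configurations; exhaustive search takes ~{ratio:.1e}x the universe's age")
```

Since real proteins fold in milliseconds to seconds, they clearly do not search this space exhaustively — which is the paradox, and why prediction algorithms cannot rely on brute force either.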
In 1994, John Moult and colleagues established the Critical Assessment of Structure Prediction (CASP) competition, a biennial blind test in which research groups attempt to predict protein structures from amino acid sequences before the experimental structures are publicly released. CASP became the gold standard for measuring progress in the field and provided a rigorous, objective way to compare different computational approaches.
David Baker, born on October 6, 1962, in Seattle, Washington, is an American biochemist at the University of Washington. He received his BA in biology from Harvard University in 1984 and his PhD in biochemistry from the University of California, Berkeley in 1989. After postdoctoral work in biophysics at the University of California, San Francisco, he joined the faculty at the University of Washington in 1993 and became a Howard Hughes Medical Institute investigator in 2000.
Baker's group developed the Rosetta software suite, originally designed for predicting protein structures from amino acid sequences (ab initio protein structure prediction). The critical insight came when Baker realized that Rosetta could be run in reverse: instead of predicting what structure a given sequence would fold into, it could be used to design a sequence that would fold into a desired structure.
In 2003, Baker and his team achieved a landmark result. They used Rosetta to computationally design Top7, a 93-amino-acid protein with a three-dimensional fold that had never been observed in nature. When synthesized in the laboratory, Top7 proved to be folded and remarkably stable, and its X-ray crystal structure matched the computational design model with a root mean square deviation of just 1.2 angstroms. This was the first successful de novo design of a protein with a completely novel fold, demonstrating that researchers could go beyond what evolution had produced and create entirely new molecular structures.
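Root mean square deviation (RMSD), the agreement measure cited for Top7, is the square root of the mean squared distance between corresponding atoms after the two structures have been optimally superimposed. A minimal sketch of the formula with toy coordinates (the superposition/alignment step is omitted here for brevity):

```python
import numpy as np

def rmsd(a, b):
    """RMSD between two pre-aligned N x 3 coordinate arrays, in the same units."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

# Toy example: three atoms, each displaced by the same small offset.
model = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
experiment = model + np.array([0.1, -0.1, 0.05])
```

With these toy values each atom is displaced by the same vector of length 0.15, so the RMSD is 0.15; a 1.2-angstrom RMSD over a 93-residue protein, as for Top7, indicates near-atomic agreement between design and experiment.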
Since Top7, Baker's research group has designed a wide array of novel proteins with practical applications, including proteins that can function as pharmaceuticals, vaccines, nanomaterials, and molecular sensors. In 2021, Baker's team also reported the development of RoseTTAFold, a deep learning tool for protein structure prediction that could compute a protein structure in as little as 10 minutes.
Baker also created Rosetta@home, a distributed computing project that enlisted volunteers' home computers to help with protein design calculations, and contributed to the development of Foldit, a citizen science computer game in which players compete to fold proteins.
Demis Hassabis, born on July 27, 1976, in London, England, is a British AI researcher, neuroscientist, and entrepreneur. A chess prodigy who reached master-level play at age 13, Hassabis studied computer science at the University of Cambridge, graduating in 1997. He worked in the video game industry, first as the lead AI programmer at Lionhead Studios and then as the founder of Elixir Studios, before returning to academia. He earned his PhD in cognitive neuroscience from University College London in 2009 and completed postdoctoral research at Harvard University and MIT.
In 2010, Hassabis co-founded DeepMind with Shane Legg and Mustafa Suleyman. Google acquired DeepMind in 2014 for a reported 500 million USD, with Hassabis remaining as CEO. DeepMind gained worldwide attention in 2016 when its AlphaGo program defeated world champion Go player Lee Sedol four games to one.
John M. Jumper, born in 1985 in Little Rock, Arkansas, is an American computational biologist. He earned his bachelor's degree in mathematics and physics from Vanderbilt University in 2007, a Master of Philosophy in theoretical condensed matter physics from the University of Cambridge in 2010 (as a Marshall Scholar), and a PhD in theoretical chemistry from the University of Chicago in 2017. Before joining DeepMind in 2017, Jumper spent three years at D.E. Shaw Research, a computational laboratory in New York City, developing molecular dynamics simulations of protein behavior.
Together, Hassabis and Jumper led the development of AlphaFold, an AI system for predicting protein structures. The first version, AlphaFold 1, was entered into CASP13 in December 2018 and placed first in the overall rankings, demonstrating that deep learning could make significant advances in protein structure prediction. It was particularly successful at predicting structures for the most difficult targets, where no existing template structures were available.
The true breakthrough came with AlphaFold2, which was entered into CASP14 in 2020. AlphaFold2 achieved a median Global Distance Test (GDT) score of 92.4, a level of accuracy comparable to experimental techniques like X-ray crystallography. The protein structure prediction community widely described the results as "astounding" and "transformational," and a consensus emerged that the protein structure prediction problem for single protein chains had been effectively solved.
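The GDT_TS score used at CASP is, roughly, the average over four distance cutoffs (1, 2, 4, and 8 angstroms) of the percentage of residues whose predicted position falls within that cutoff of the experimental position after superposition. A simplified sketch with made-up per-residue distances, assuming superposition has already been done:

```python
import numpy as np

def gdt_ts(distances):
    """Simplified GDT_TS: mean over cutoffs of the fraction of residues within cutoff."""
    cutoffs = [1.0, 2.0, 4.0, 8.0]  # angstroms
    return 100.0 * np.mean([(distances <= c).mean() for c in cutoffs])

# Toy per-residue model-vs-experiment distances (angstroms) after superposition.
d = np.array([0.5, 0.9, 1.5, 3.0, 7.0])
```

These toy distances give a GDT_TS of 70; a score of 100 means every residue is within 1 angstrom of the experimental structure, and AlphaFold2's median of 92.4 at CASP14 put most residues within experimental error.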
AlphaFold2's architecture represented a major technical innovation. It employed a novel neural network module called the Evoformer, which used attention mechanisms derived from transformer architectures to jointly process evolutionary relationships (from multiple sequence alignments) and spatial relationships between amino acid pairs. The Evoformer consisted of 48 blocks of attention-based layers that iteratively refined both sequence and structural representations. This approach allowed AlphaFold2 to capture long-range dependencies between amino acid residues that earlier convolutional approaches had struggled with.
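The attention mechanism at the heart of the Evoformer can be illustrated generically. The sketch below is plain scaled dot-product self-attention in NumPy, not AlphaFold2's actual implementation (which adds pair-bias terms, triangular updates, and gating on top of this basic operation); it shows how every residue representation is updated from a weighted combination of all others, which is what makes long-range dependencies tractable.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                         # pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ V, weights

rng = np.random.default_rng(42)
n_residues, d_model = 5, 8
x = rng.normal(size=(n_residues, d_model))                # toy residue embeddings
out, w = scaled_dot_product_attention(x, x, x)            # self-attention over residues
```

Each row of `w` is a probability distribution over all residues, so even residues far apart in the sequence can directly exchange information — the property convolutional predecessors lacked.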
Following CASP14, Hassabis, Jumper, and their team at DeepMind partnered with the European Molecular Biology Laboratory's European Bioinformatics Institute (EMBL-EBI) to create the AlphaFold Protein Structure Database. In 2021, they used AlphaFold2 to calculate the structures of nearly all of the approximately 20,000 human proteins. They then expanded the database to cover virtually all 200 million known proteins from about one million species. The database was made freely available to the scientific community and, by the time of the Nobel announcement, had been used by more than two million researchers from 190 countries.
In May 2024, DeepMind released AlphaFold3, an updated version with a substantially revised diffusion-based architecture capable of predicting the structures of complexes that include proteins, nucleic acids (DNA and RNA), small molecules, ions, and modified residues. AlphaFold3 demonstrated significantly improved accuracy for protein-ligand interactions compared to state-of-the-art docking tools, showing 50% greater accuracy than the best traditional methods on the PoseBusters benchmark.
The 2024 Nobel Prizes represent the first time that work primarily rooted in artificial intelligence was recognized by the Nobel committees. While previous Nobel Prizes had been awarded for computational methods in chemistry (such as the 2013 Chemistry prize for multi-scale modeling), the 2024 prizes were the first to explicitly honor AI systems and the foundational machine learning research behind them.
The back-to-back announcements on October 8 and 9 sent a strong signal about the growing importance of AI across scientific disciplines. The Physics prize validated decades of foundational research that had often been dismissed by the mainstream physics community as outside the scope of the discipline. The Chemistry prize demonstrated that AI could not merely assist scientists but could solve problems that had resisted traditional approaches for more than 50 years.
| Year | Prize | Laureates | Contribution | Connection to AI |
|---|---|---|---|---|
| 2024 | Physics | John J. Hopfield, Geoffrey Hinton | Foundational work enabling machine learning with artificial neural networks | Hopfield networks and Boltzmann machines, both derived from physics, are direct predecessors to modern deep learning |
| 2024 | Chemistry | David Baker, Demis Hassabis, John Jumper | Computational protein design and protein structure prediction | AlphaFold uses deep learning (transformers and attention mechanisms) to predict protein structures; Baker's recent work also incorporates AI |
| 2013 | Chemistry | Martin Karplus, Michael Levitt, Arieh Warshel | Multi-scale models for complex chemical systems | Computational chemistry methods, precursors to AI-driven approaches |
The 2024 Physics prize generated significant debate within the scientific community. The central question was whether work on artificial neural networks truly constituted physics.
Several prominent physicists expressed skepticism. Some argued that while Hopfield's and Hinton's work drew on physics concepts, the primary impact of their contributions was in computer science and engineering, not in advancing the understanding of physical phenomena. An astrophysicist at Imperial College London commented that it was "hard to see that this is a physics discovery" and suggested the Nobel Committee had been "hit by AI hype."
Computer scientists, meanwhile, offered a different critique. Some felt that the Physics prize represented an attempt by the physics community to claim AI as its own discipline. Others pointed out that the absence of a Nobel Prize in mathematics or computer science had distorted the outcome, forcing the committee to shoehorn important computational work into the physics category. A computer scientist and United Nations AI adviser argued that the lack of a Nobel Prize for computer science had led to the awkward situation of awarding a physics prize for what was fundamentally computer science research.
Some researchers raised concerns about historical attribution, noting that other pioneers in neural network research, including Alexey Ivakhnenko, Valentin Lapa, and Shun'ichi Amari, had developed related techniques before or alongside the laureates. Jürgen Schmidhuber, a prominent AI researcher, also pointed to earlier work that he felt was insufficiently credited.
Supporters of the prize emphasized the genuine physics content in the laureates' work. Danica Kragic Jensfelt, a computer scientist and member of the Royal Swedish Academy of Sciences, noted that the 2018 Turing Award had already recognized Hinton's contributions to computer science, while the Nobel Prize specifically honored "the physics part" of the work. The mathematical framework of Hopfield networks is rooted in the Ising model and spin glass theory, and Boltzmann machines take their name and their learning algorithm from statistical mechanics.
The Nobel Committee itself argued that the laureates had "used tools from physics" to lay the groundwork for the machine learning revolution that began around 2010, and that the resulting technology was already being used across physics research, from particle physics to materials science.
Perhaps the most notable reaction came from Hinton himself. When reached by the Nobel Committee on the morning of the announcement, Hinton expressed surprise, saying he had no idea the prize was coming. In interviews following the announcement, he reiterated warnings he had been making since leaving Google in May 2023 about the potential dangers of advanced AI systems.
At the Nobel Prize banquet in Stockholm on December 10, 2024, Hinton delivered a speech that went beyond the customary expressions of gratitude. He warned: "There is also a longer term existential threat that will arise when we create digital beings that are more intelligent than ourselves. We have no idea whether we can stay in control." He added: "If they are created by companies motivated by short-term profits, our safety will not be the top priority. We urgently need research on how to prevent these new beings from wanting to take control."
Hinton has estimated a 50% probability that AI systems will surpass human intelligence within 5 to 20 years. Since his departure from Google, he has become one of the most prominent voices calling for regulation and safety research in AI development.
Hinton's fellow Turing Award recipients responded differently. Yann LeCun, chief AI scientist at Meta, took a more optimistic stance, arguing that AI "could actually save humanity from extinction" rather than threaten it. Yoshua Bengio, however, aligned more closely with Hinton's concerns, stating that what alarmed him and Hinton was the possibility of "loss of human control" and whether AI systems would "act morally when they're smarter than humans."
The AI research community largely celebrated the recognition while acknowledging the complexity of the disciplinary questions. Many saw the prizes as a validation of decades of work that had often been marginalized within traditional academic departments. Others expressed hope that the Nobel recognition would encourage greater investment in AI safety research, given Hinton's high-profile warnings.
The protein science community was particularly enthusiastic about the Chemistry prize. The AlphaFold Protein Structure Database had already transformed the daily practice of structural biology, and researchers noted that what once required years of experimental work could now be accomplished in minutes. The recognition of David Baker's protein design work alongside the AlphaFold team highlighted the complementary nature of predicting natural protein structures and designing entirely new ones.
The 2024 Nobel Prizes recognizing AI are likely to be remembered as a watershed moment in the history of both artificial intelligence and the Nobel Prizes themselves. They acknowledged that AI had matured from a speculative research program into a tool capable of making fundamental contributions to science.
The work honored by these prizes has had far-reaching consequences:
Hopfield networks introduced the concept of energy-based models and content-addressable memory, ideas that continue to influence modern AI architectures. Modern Hopfield networks, developed by researchers at Johannes Kepler University Linz, have been integrated into contemporary transformer models.
Boltzmann machines and backpropagation provided the theoretical and practical foundations for training deep neural networks, which now underpin technologies ranging from natural language processing and computer vision to speech recognition and autonomous driving.
Rosetta and computational protein design have enabled the creation of novel proteins for use in medicine, industry, and biotechnology, opening a new era in which researchers can engineer molecular tools that nature never produced.
AlphaFold has been described as one of the most significant scientific breakthroughs of the 21st century. By making accurate protein structure predictions freely available for virtually all known proteins, it has accelerated research in drug discovery, enzyme engineering, evolutionary biology, and countless other fields.
The prizes also raised important questions about the boundaries of scientific disciplines, the proper attribution of credit in collaborative and incremental fields, and the responsibilities that come with developing powerful new technologies. Hinton's use of his Nobel platform to warn about AI existential risk ensured that these awards would be remembered not only for what they celebrated but also for the urgent questions they brought to public attention.