Andrew Yan-Tak Ng (born 1976) is a British-American computer scientist, entrepreneur, and one of the most influential figures in modern artificial intelligence and machine learning education. He co-founded Google Brain, co-founded Coursera, served as Chief Scientist at Baidu, and has built a constellation of AI-focused organizations including Landing AI, DeepLearning.AI, and AI Fund. His Stanford course CS229 became one of the most popular classes on campus and laid the groundwork for online AI education that has reached millions of learners worldwide. Ng famously declared that "AI is the new electricity," a phrase that captured his belief in AI's potential to transform every industry.
Andrew Ng was born in London, England, in 1976. His father, Ronald Paul Ng, was a hematologist and lecturer at University College London (UCL) Medical School. His mother, Tisa Ho, was an arts administrator who worked at the London Film Festival. Both parents were immigrants from Hong Kong [1].
In 1984, when Ng was eight years old, the family moved to Singapore. He attended Raffles Institution, one of Singapore's most prestigious secondary schools, where he demonstrated exceptional mathematical ability. During his high school years, Ng won a Silver Medal at the International Mathematical Olympiad, an early indicator of the analytical talent that would define his career [2].
Ng's academic trajectory took him through three of the most respected institutions in computer science and engineering.
| Degree | Institution | Year | Field |
|---|---|---|---|
| Bachelor's (triple major) | Carnegie Mellon University | 1997 | Computer Science, Statistics, Economics |
| Master's | Massachusetts Institute of Technology | 1998 | Electrical Engineering and Computer Science |
| Ph.D. | University of California, Berkeley | 2002 | Computer Science |
At Carnegie Mellon University, Ng earned a bachelor's degree with a triple major in computer science, statistics, and economics in 1997 [1]. Between 1996 and 1998, he also conducted research on reinforcement learning, model selection, and feature selection at AT&T Bell Labs.
He moved quickly to the Massachusetts Institute of Technology, where he completed a master's degree in electrical engineering and computer science in 1998 [1].
Ng then pursued his doctorate at the University of California, Berkeley, under the supervision of Michael I. Jordan, one of the leading figures in statistical machine learning. His 2002 dissertation, "Shaping and Policy Search in Reinforcement Learning," explored methods for enabling autonomous agents to learn optimal behaviors. It remains one of the most cited theses in the field [3].
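The shaping idea from that line of work can be illustrated in a few lines of numpy. Potential-based shaping adds a term gamma·phi(s') − phi(s) to every reward, which speeds learning without changing which policy is optimal. The tiny chain MDP and the potential function below are invented purely for illustration; this is a sketch of the concept, not code from the dissertation.

```python
import numpy as np

# Toy 5-state chain MDP: move left/right; reaching state 4 yields reward 1.
n_states, n_actions, gamma = 5, 2, 0.9

def step(s, a):
    """Deterministic transition: a=0 moves left, a=1 moves right."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

def greedy_policy(shaping=None):
    """Value iteration; optionally add potential-based shaping
    F(s, s') = gamma * phi(s') - phi(s) to every reward."""
    V = np.zeros(n_states)
    for _ in range(200):
        Q = np.zeros((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                s2, r = step(s, a)
                if shaping is not None:
                    r += gamma * shaping[s2] - shaping[s]
                Q[s, a] = r + gamma * V[s2]
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

phi = np.arange(n_states, dtype=float)  # hypothetical potential: "progress" hint
print(greedy_policy())             # unshaped optimal policy
print(greedy_policy(shaping=phi))  # shaped rewards leave the optimal policy unchanged
```

Both runs recover the same greedy policy (always move right), demonstrating the key property of potential-based shaping: the extra reward signal guides exploration but cannot change the optimum.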
In 2002, Ng joined the Stanford University faculty as an assistant professor in the Department of Computer Science. He would remain at Stanford for nearly two decades, rising to become one of the most recognized professors in the university's history.
Ng's signature course at Stanford was CS229: Machine Learning, which became the single most popular course in Stanford's Computer Science Department, regularly attracting over 1,000 students in a single offering [4]. The course provided a broad introduction to machine learning, covering supervised learning, unsupervised learning, learning theory, and reinforcement learning.
CS229 was remarkable not just for its enrollment numbers but for the clarity of Ng's teaching. His ability to explain complex mathematical concepts in intuitive terms drew students from across the university, not only from computer science but from biology, economics, physics, and other departments. The course's popularity on campus became a catalyst for Ng's later efforts to bring AI education online [4].
At Stanford, Ng also directed the Stanford AI Lab (SAIL) and conducted research across a range of topics, including deep learning, robotics, and natural language processing. His early work at Stanford on using graphics processing units (GPUs) for training neural networks helped lay the groundwork for the computational approaches that would later power the deep learning revolution [1].
Ng's group developed the Stanford Autonomous Helicopter, which used reinforcement learning to perform aerobatic maneuvers, and contributed to early research on sparse coding and unsupervised feature learning, both of which influenced the development of deep learning architectures [5].
Ng has authored or co-authored over 200 research papers, accumulating more than 309,000 citations on Google Scholar. His most influential publications span machine learning theory, deep learning, and applied AI:
| Publication | Year | Key Contribution | Citations |
|---|---|---|---|
| "Latent Dirichlet Allocation" (with D. Blei, M. Jordan) | 2003 | Introduced LDA, a foundational topic modeling algorithm | 50,000+ |
| "On Discriminative vs. Generative Classifiers" (with M. Jordan) | 2001 | Compared naive Bayes and logistic regression | 6,000+ |
| "On Spectral Clustering: Analysis and an Algorithm" (with M. Jordan, Y. Weiss) | 2001 | Spectral clustering framework | 10,000+ |
| "Sparse Autoencoder" (lecture notes) | 2011 | Influential tutorial on autoencoders and sparse representations | Widely used in education |
| "Building High-level Features Using Large Scale Unsupervised Learning" (with Q. Le et al.) | 2012 | The "cat neuron" paper from Google Brain | 5,000+ |
| "An Inverted Classroom Approach to Educating with Online Resources" | 2014 | Framework for large-scale online education | 500+ |
| "Machine Learning Yearning" (book) | 2018 | Practical guide to structuring ML projects | Widely distributed (free) |
The Latent Dirichlet Allocation (LDA) paper, co-authored with David Blei and Michael Jordan during Ng's doctoral studies, became one of the most influential papers in machine learning. LDA introduced a generative probabilistic model for collections of text that discovers latent "topics" within documents. The technique found applications far beyond text, including in genomics, image analysis, and social network analysis [1].
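The generative picture behind LDA can be made concrete with a tiny collapsed Gibbs sampler. (Gibbs sampling is a standard inference method for LDA, though the original paper used variational inference; the corpus and hyperparameters below are invented for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny corpus with two obvious themes (pets vs. finance), as word-id lists.
vocab = ["cat", "dog", "pet", "bank", "loan", "money"]
docs = [[0, 1, 2, 0], [1, 2, 0, 1], [3, 4, 5, 3], [4, 5, 3, 4]]

K, V = 2, len(vocab)      # number of topics, vocabulary size
alpha, beta = 0.1, 0.01   # symmetric Dirichlet priors

# z[d][i]: topic assignment of word i in document d (random init)
z = [[int(rng.integers(K)) for _ in doc] for doc in docs]
ndk = np.zeros((len(docs), K))  # doc-topic counts
nkw = np.zeros((K, V))          # topic-word counts
nk = np.zeros(K)                # words per topic
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

# Collapsed Gibbs sampling: resample each word's topic from its
# conditional distribution given all other assignments.
for _ in range(200):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = int(rng.choice(K, p=p / p.sum()))
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
print(np.round(theta, 2))  # per-document topic mixtures
```

On this toy corpus the sampler cleanly separates the two documents about pets from the two about finance, recovering the latent "topics" without any labels.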
Ng currently holds the title of Adjunct Professor at Stanford, maintaining a connection to the university while focusing primarily on his entrepreneurial activities [6].
In 2011, Ng founded the Google Brain project at Google alongside Jeff Dean, Greg Corrado, and Rajat Monga [1]. The initiative sought to build large-scale neural networks using Google's computational infrastructure. Google Brain became one of the most consequential AI research efforts in industry history.
The team's most famous early result came from a landmark experiment in unsupervised learning. In 2012, the team trained a massive neural network on 10 million unlabeled thumbnails extracted from YouTube videos. The network consisted of nine layers with roughly one billion connections, making it one of the largest neural networks trained up to that time, and training was distributed across 16,000 CPU cores.
No one told the network what to look for. It was free to discover whatever patterns appeared most frequently in the video data. When researchers examined the trained network, they found that individual neurons had spontaneously learned to respond to specific high-level concepts. One neuron responded strongly to images of cats. Another responded to human faces. A third responded to human bodies [7].
The finding made headlines around the world, with news outlets reporting that "Google built a neural network that learned to recognize cats." While the popular coverage focused on the whimsical aspect of the result, the deeper significance was profound: it demonstrated that deep neural networks could discover high-level concepts without human supervision when given sufficient data and compute. This validated the hypothesis that scaling up neural networks and data could lead to emergent feature learning, an insight that would shape the direction of AI research for the next decade [7].
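The core mechanism — an autoencoder trained only to reconstruct its input, whose hidden units end up encoding structure in the data — can be sketched at toy scale. The sketch below is vastly smaller than the nine-layer, billion-connection network and uses invented data; it shows the training principle, not the Google Brain system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data with one dominant latent direction, which the hidden
# layer can discover purely by learning to reconstruct the input.
direction = rng.normal(size=16)
X = rng.normal(size=(500, 1)) @ direction[None, :] + 0.1 * rng.normal(size=(500, 16))

H = 4  # hidden units
W1 = 0.1 * rng.normal(size=(16, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.normal(size=(H, 16)); b2 = np.zeros(16)

def forward(X):
    h = np.tanh(X @ W1 + b1)   # encoder
    return h, h @ W2 + b2      # decoder (linear reconstruction)

losses, lr = [], 0.01
for _ in range(300):
    h, Xhat = forward(X)
    err = Xhat - X
    losses.append(float((err ** 2).mean()))
    # Backpropagate the reconstruction error through decoder and encoder.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

No labels are ever supplied; the hidden layer is shaped only by the pressure to reconstruct the data, which is the same pressure that produced the "cat neuron" at far larger scale.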
| Google Brain Experiment Detail | Specification |
|---|---|
| Training data | 10 million unlabeled YouTube video thumbnails |
| Network architecture | 9-layer autoencoder |
| Computational resources | 16,000 CPU cores (1,000 machines) |
| Model size | ~1 billion connections |
| Key finding | Neurons spontaneously learned to detect cats, faces, and bodies |
| Publication | "Building High-level Features Using Large Scale Unsupervised Learning" (ICML 2012) |
Google Brain helped transform Google from a company that used machine learning in limited, specialized ways into one that placed AI at the core of its product strategy. The project's success influenced the development of TensorFlow, Google's open-source machine learning framework, and contributed to advances in Google Translate, Google Photos, and speech recognition [1].
Ng transitioned out of the Google Brain project in 2012 to co-found Coursera, but the team he helped assemble continued to grow. Under Jeff Dean's leadership, Google Brain became one of Google's most important research divisions before being merged with DeepMind in 2023 to form Google DeepMind.
| Year | Google Brain Milestone |
|---|---|
| 2011 | Project founded by Ng, Jeff Dean, Greg Corrado, Rajat Monga |
| 2012 | "Cat neuron" experiment demonstrates unsupervised feature learning at scale |
| 2012 | Ng transitions out; Google Brain continues as core Google AI team |
| 2015 | TensorFlow open-sourced, built on Google Brain research |
| 2017 | "Attention Is All You Need" paper introduces the Transformer architecture |
| 2023 | Google Brain merged with DeepMind to form Google DeepMind |
The enormous demand for CS229 at Stanford convinced Ng that high-quality AI education should not be confined to a single campus. In 2012, Ng and Stanford colleague Daphne Koller co-founded Coursera, an online learning platform that offered free courses from top universities to anyone with an internet connection [8].
The path to Coursera began in the fall of 2011, when Ng offered his Machine Learning class online as a free Stanford course. The response was staggering: over 100,000 people enrolled from around the world. The experience convinced Ng that there was massive unmet demand for high-quality education, particularly in technical fields. Around the same time, fellow Stanford professors Sebastian Thrun and Peter Norvig had a similar experience with their AI course, which attracted 160,000 enrollees. Thrun went on to found Udacity, while Ng and Koller created Coursera [8].
Ng's "Machine Learning" course on Coursera became one of the most enrolled online courses in history. As of 2025, over 4.8 million learners have taken the course, with a rating of 4.9 out of 5 stars. Students from nearly every country in the world completed the course, many of whom had no access to comparable instruction at their local institutions. The course democratized AI education in a way that had no precedent [8].
In 2022, Ng updated the original Machine Learning course as a three-course Machine Learning Specialization, co-developed with DeepLearning.AI, incorporating modern tools like TensorFlow and Python (the original course had used Octave/MATLAB).
In 2017, through DeepLearning.AI, Ng launched the Deep Learning Specialization on Coursera, a five-course sequence covering convolutional neural networks, sequence models, structuring machine learning projects, and more. The specialization has been taken by millions of learners and is widely regarded as one of the best entry points into deep learning education [9].
| Course | Topics Covered |
|---|---|
| Course 1: Neural Networks and Deep Learning | Forward/backpropagation, activation functions, deep networks |
| Course 2: Improving Deep Neural Networks | Regularization, optimization, hyperparameter tuning, batch normalization |
| Course 3: Structuring Machine Learning Projects | ML strategy, error analysis, transfer learning, end-to-end learning |
| Course 4: Convolutional Neural Networks | CNNs, object detection, face recognition, neural style transfer |
| Course 5: Sequence Models | RNNs, LSTMs, attention mechanisms, transformers |
Ng stepped down as Coursera's co-CEO in 2014 but remains Chairman of the Board. Coursera has grown into a publicly traded company (NYSE: COUR) with over 150 million registered learners and partnerships with hundreds of universities and companies worldwide [8].
| Detail | Coursera |
|---|---|
| Founded | 2012 |
| Co-founders | Andrew Ng, Daphne Koller |
| Ng's Role | Chairman and Co-Founder |
| Registered Learners | 150+ million |
| Ng's ML Course Enrollees | 4.8+ million (rated 4.9/5) |
| Deep Learning Specialization | Launched 2017, taken by millions |
| IPO | March 2021 (NYSE: COUR) |
| Total Ng learners (all courses) | 8+ million |
In May 2014, Ng joined Baidu, China's largest search engine company, as Vice President and Chief Scientist. He relocated to Baidu's Sunnyvale, California, office and led a 1,300-person AI team responsible for the company's artificial intelligence technology and strategy [10].
At Baidu, Ng oversaw significant advances in speech recognition, including the development of the Deep Speech system, which used end-to-end deep learning to achieve state-of-the-art results in Mandarin and English speech recognition. Deep Speech was notable for its simplicity: rather than the complex pipeline of traditional speech recognition systems (with separate acoustic model, pronunciation model, and language model), it used a single deep recurrent neural network trained end-to-end. A second version, Deep Speech 2, further improved accuracy and was deployed in Baidu's products [10].
He also led Baidu's self-driving car initiative and expanded the company's AI research capabilities across offices in Beijing, Shenzhen, and Sunnyvale.
Ng departed Baidu in March 2017, announcing his intention to focus on new ventures. His time at Baidu gave him firsthand experience with AI deployment at massive scale in a market very different from Silicon Valley, perspectives that would shape his subsequent entrepreneurial work [1].
Shortly after leaving Baidu, Ng founded Landing AI in December 2017. The company's mission is to help manufacturing companies adopt AI, particularly for visual inspection and quality control applications [11].
Landing AI's core product focuses on computer vision for manufacturing inspection, built around what Ng calls "data-centric AI" principles. Rather than focusing solely on improving algorithms, data-centric AI emphasizes the importance of high-quality, well-labeled data as the primary driver of AI system performance. This philosophy reflects a broader shift in Ng's thinking about how AI should be deployed in real-world settings [11].
Landing AI developed LandingLens, a visual AI platform that allows manufacturing engineers, rather than machine learning specialists, to build and maintain AI-powered inspection systems. The platform provides tools for:
| Feature | Function |
|---|---|
| Data labeling | Intuitive interface for labeling defect images |
| Model training | Automated training of computer vision models |
| Deployment | Edge deployment for real-time inspection |
| Monitoring | Ongoing performance tracking and model drift detection |
| Collaboration | Tools for domain experts and ML engineers to work together |
In November 2021, Landing AI secured $57 million in Series A funding led by McRock Capital to expand its platform and help more enterprises adopt AI [11].
Ng founded DeepLearning.AI as an education technology company focused on providing world-class AI training and courses. The organization develops and delivers the Deep Learning Specialization and numerous other courses on Coursera, covering topics from natural language processing to generative AI to MLOps [9].
DeepLearning.AI has expanded its course catalog significantly since its founding, offering specializations and short courses on topics including:
| Course / Specialization | Topics |
|---|---|
| Deep Learning Specialization | Neural networks, CNNs, sequence models |
| Natural Language Processing Specialization | Sentiment analysis, machine translation, attention |
| TensorFlow Developer Professional Certificate | TF fundamentals, CNNs, NLP, time series |
| Machine Learning Engineering for Production (MLOps) | ML pipelines, deployment, monitoring |
| Generative AI courses (2023-2024) | Prompt engineering, LLM applications, RAG |
| AI Agents courses (2025) | Agentic workflows, tool use, multi-agent systems |
DeepLearning.AI also produces "The Batch," a weekly newsletter covering AI news and developments, which has become one of the most widely read AI newsletters in the industry. The organization's courses have collectively been taken by over 8 million learners, making Ng one of the most prolific AI educators in history [6].
In January 2018, Ng launched AI Fund, a venture studio that builds AI-focused startups from the ground up. Unlike a traditional venture capital fund, AI Fund operates as a studio, creating companies internally rather than simply investing in external entrepreneurs. The fund identifies problems that AI can solve, assembles teams, and provides operational support [12].
| Fund | Amount | Year |
|---|---|---|
| AI Fund I | $175 million | 2018 |
| AI Fund II | ~$190 million | 2025 |
| Total portfolio companies | ~35 | as of 2025 |
AI Fund has co-founded approximately 35 portfolio companies across a range of sectors. The companies span healthcare, education, logistics, financial automation, renewable energy optimization, developer tools, and other domains. The studio model means that AI Fund is deeply involved in each company's formation, often providing the initial team, technology strategy, and operational infrastructure [12].
Notable portfolio investments and companies include:
| Company | Focus Area | Stage |
|---|---|---|
| Jivi (India, 2024) | AI healthcare: diagnoses, treatment recommendations, admin | Early stage |
| Abridge | AI-powered medical documentation | Growth stage |
| Woebot Health | AI mental health support | Growth stage |
| Various stealth companies | Financial automation, renewable energy, logistics | Various |
Ng also established AI Aspire, a related initiative focused on AI-driven social impact, addressing challenges in areas such as education access, healthcare equity, and environmental sustainability.
In recent years, Ng has become the most prominent advocate for what he calls "data-centric AI," a paradigm shift in how machine learning systems are developed. Traditional AI development focuses on improving models and algorithms while keeping data relatively fixed. Data-centric AI inverts this: it holds the model relatively fixed and focuses on systematically improving the quality, labeling, and curation of training data [11].
Ng argues that for most real-world AI applications, especially in manufacturing and healthcare, the data is the bottleneck, not the model. He illustrates this with a simple framework:
| Approach | Focus | When It Works Best |
|---|---|---|
| Model-centric AI | Improve the algorithm; keep data fixed | Large datasets with consistent quality (e.g., internet-scale) |
| Data-centric AI | Improve the data; keep model relatively fixed | Smaller datasets, specialized domains, high-stakes applications |
The data-centric approach has gained significant traction in the AI community, particularly among practitioners working on applied AI problems where data quality is often the deciding factor in system performance. Ng has organized workshops (including the NeurIPS Data-Centric AI Workshop), published research, and built Landing AI's products around this philosophy.
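The data-centric point can be made concrete with a minimal numpy sketch: hold a deliberately simple model fixed and improve only the labels. The data, the labeling-noise process, and the nearest-centroid "model" below are all invented for illustration and have nothing to do with Landing AI's actual product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes in 2-D.
def make_data(n):
    y = rng.integers(2, size=n)
    X = rng.normal(size=(n, 2)) + 3.0 * y[:, None]
    return X, y

Xtr, ytr = make_data(400)
Xte, yte = make_data(2000)

def fit_predict(X, y, Xq):
    """The 'model' stays fixed: classify by nearest class centroid."""
    c0 = X[y == 0].mean(axis=0)
    c1 = X[y == 1].mean(axis=0)
    d0 = ((Xq - c0) ** 2).sum(axis=1)
    d1 = ((Xq - c1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

# Simulate a sloppy labeling process: half of the class-1 training
# examples are mislabeled as class 0.
noisy = ytr.copy()
noisy[(ytr == 1) & (rng.random(len(ytr)) < 0.5)] = 0

acc_noisy = (fit_predict(Xtr, noisy, Xte) == yte).mean()
acc_clean = (fit_predict(Xtr, ytr, Xte) == yte).mean()
print(f"noisy labels: {acc_noisy:.3f}  cleaned labels: {acc_clean:.3f}")
```

With the model untouched, cleaning the labels alone recovers the lost accuracy — the kind of lever data-centric AI emphasizes in small-data, specialized domains.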
Ng's most famous public statement is his comparison of AI to electricity. First articulated at the AI Frontiers conference and subsequently repeated in keynotes, interviews, and writings, the analogy captures his core belief about AI's transformative potential [13].
"Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years," Ng has said [13]. The phrase resonated widely because it framed AI not as a narrow technical specialty but as a general-purpose technology with the potential to reshape every sector of the economy.
The analogy has been both praised for its clarity and criticized for potentially overstating the ease of AI adoption. Ng himself has acknowledged that, like electrification, the AI transformation will be uneven and will require significant infrastructure, education, and institutional change.
Ng serves on the Board of Directors of Amazon, one of the world's largest technology companies. This appointment reflects his standing in the broader technology and business community and gives him influence over AI strategy at one of the most important companies deploying AI at scale [6].
As of early 2026, Ng continues to lead his constellation of AI organizations. He serves as Managing General Partner of AI Fund, Executive Chairman of Landing AI, founder of DeepLearning.AI, Chairman of Coursera, and Adjunct Professor at Stanford [6].
| Organization | Ng's Role | Focus |
|---|---|---|
| Stanford University | Adjunct Professor | AI research and teaching |
| Coursera | Chairman, Co-Founder | Online education |
| DeepLearning.AI | Founder | AI education and courses |
| Landing AI | Executive Chairman | Manufacturing AI, data-centric AI |
| AI Fund | Managing General Partner | AI startup studio |
| AI Aspire | Managing Partner | AI for social impact |
| Amazon | Board Director | Technology governance |
Ng remains one of the most active voices in AI public discourse. He continues to publish "The Batch" newsletter, give keynote addresses at major conferences (including a scheduled appearance at HumanX in San Francisco in April 2026), and advocate for responsible, accessible AI education [14].
In his 2025 year-end summary, Ng described the current period as the "dawn of the AI industrial era," arguing that the field is transitioning from a research-driven phase to one focused on deploying AI across industries at scale. He emphasized the growing importance of agentic AI systems, where AI agents autonomously plan, execute, and iterate on complex tasks, and predicted that 2026 would see significant adoption of AI agents in enterprise settings [14].
Through AI Fund and its portfolio companies, Ng is involved in projects spanning robotics, healthcare, and other sectors. His ongoing focus is on bridging the gap between cutting-edge AI research and practical deployment, particularly in industries that have been slower to adopt AI technologies.
Ng is married to Carol Reiley, a roboticist and entrepreneur. The couple met while both were involved in the robotics community. Reiley is co-founder of Drive.ai, a self-driving car startup that was acquired by Apple in 2019. They have two children [1].
Andrew Ng's influence on the AI field operates across multiple dimensions. As a researcher, he helped pioneer the use of GPUs for deep learning, co-authored one of the most cited papers in machine learning (LDA), and co-founded one of the most important industry AI labs. As an educator, he has taught more people about machine learning than perhaps any other individual in history, with over 8 million learners across his various courses. As an entrepreneur, he has built organizations that bridge research and commercial application. His advocacy for data-centric AI has shifted how practitioners think about building real-world AI systems.
His career illustrates a model that is rare in technology: someone who has made significant contributions in academia, industry research, online education, and entrepreneurship, moving fluidly between these worlds while maintaining a consistent message about AI's potential to benefit society broadly.