AI in healthcare refers to the application of artificial intelligence technologies, including machine learning, natural language processing, computer vision, and deep learning, to medical and clinical tasks. These applications span diagnostics, treatment planning, drug development, administrative workflows, and patient engagement. The integration of AI into healthcare has accelerated rapidly since the mid-2010s, driven by advances in computational power, the availability of large medical datasets, and breakthroughs in neural network architectures. As of 2026, AI-powered tools are embedded across the healthcare ecosystem, from radiology reading rooms and pathology labs to operating theaters and mental health platforms.
The roots of AI in healthcare trace back to the 1970s, when researchers at Stanford University began developing rule-based expert systems for medical diagnosis. The most notable of these was MYCIN, developed between 1972 and 1978 by Edward Shortliffe under the direction of Bruce Buchanan and Stanley Cohen. MYCIN used approximately 600 production rules and a backward-chaining inference engine to identify bacteria causing severe infections such as bacteremia and meningitis, and to recommend appropriate antibiotics with dosages adjusted for patient body weight. In a landmark evaluation published in the Journal of the American Medical Association, MYCIN's therapy recommendations were deemed acceptable by expert reviewers in 65% of cases, with 90.9% accuracy in prescribing appropriate antimicrobial therapy. This performance matched or exceeded that of infectious disease specialists at the time.
Despite its strong performance, MYCIN was never deployed in clinical practice. Concerns about legal liability, ethical implications, the difficulty of integrating such a system into hospital workflows, and the lack of computing infrastructure at the bedside proved insurmountable in that era. Nevertheless, MYCIN's influence was profound: it demonstrated that knowledge could be encoded as rules and applied systematically, inspiring a generation of expert systems in medicine and other domains.
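The core mechanics MYCIN demonstrated, backward chaining over production rules with certainty factors, can be sketched in a few lines. The rules, findings, and certainty-factor arithmetic below are illustrative stand-ins, not MYCIN's actual knowledge base or exact combination functions.

```python
RULES = [
    # (conclusion, premises, certainty factor of the rule) -- invented examples
    ("organism_is_bacteroides", ["gram_negative", "anaerobic", "rod_shaped"], 0.6),
    ("recommend_clindamycin", ["organism_is_bacteroides"], 0.8),
]

def prove(goal, facts, rules=RULES):
    """Backward-chain: return the certainty (0..1) with which `goal` holds."""
    if goal in facts:                     # observed finding with its certainty
        return facts[goal]
    best = 0.0
    for conclusion, premises, cf_rule in rules:
        if conclusion != goal:
            continue
        # A rule fires with the certainty of its weakest premise (min),
        # attenuated by the rule's own certainty factor.
        cf_premises = min(prove(p, facts, rules) for p in premises)
        best = max(best, cf_premises * cf_rule)
    return best

patient = {"gram_negative": 1.0, "anaerobic": 0.8, "rod_shaped": 1.0}
print(round(prove("recommend_clindamycin", patient), 3))  # 0.384
```

Starting from the therapy goal and reasoning backward to the observed findings is what made MYCIN's roughly 600 rules tractable to evaluate interactively.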
Other early medical AI systems included INTERNIST-1 (later renamed QMR), developed at the University of Pittsburgh in the 1970s for general internal medicine diagnosis, and DXplain, a clinical decision support system developed at Massachusetts General Hospital in 1986 that continues to be used as a teaching tool.
The next significant phase in medical AI came with the development of computer-aided detection (CAD) systems for radiology. R2 Technology Inc. pioneered the first commercial CAD venture in mammography in 1993, and its ImageChecker system received FDA approval in June 1998, making it the first FDA-cleared CAD system for mammography.
Adoption was initially slow; by 2001, fewer than 5% of screening mammograms in the United States were interpreted with CAD assistance. However, when the Centers for Medicare and Medicaid Services (CMS) increased reimbursement for CAD-assisted mammography in 2002, uptake accelerated dramatically. By 2008, 74% of all screening mammograms in the Medicare population were interpreted with CAD. Subsequent CAD systems included the iCAD Second Look system (FDA-cleared in 2002) and Eastman Kodak's mammography CAD engine (FDA-cleared in 2004).
While CAD systems represented an important step in applying computational tools to medical imaging, their clinical effectiveness remained a subject of debate, with several studies producing mixed results regarding their impact on cancer detection rates.
The modern era of AI in healthcare began with the deep learning revolution of the early 2010s. The success of convolutional neural networks (CNNs) in image recognition tasks, exemplified by AlexNet's victory in the 2012 ImageNet competition, soon translated to medical imaging. Researchers demonstrated that deep learning models could match or exceed the performance of trained specialists in interpreting medical images across multiple specialties. This sparked an explosion of research, investment, and product development that continues to expand as of 2026.
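The building block behind these CNN results is the convolution: sliding a small learned filter across an image to produce a feature map. The sketch below uses a hand-picked edge-detection kernel on a toy "scan" purely to illustrate the operation; in a trained network the kernel weights are learned from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a single-channel image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 "scan" with a bright region on the right half.
scan = np.zeros((5, 5))
scan[:, 3:] = 1.0

# A Sobel-style vertical-edge detector responds at the intensity boundary.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
response = conv2d(scan, sobel_x)
print(response.shape)  # (3, 3)
```

A CNN stacks many such filters with nonlinearities and pooling, so that deeper layers respond to increasingly abstract structures such as nodules or hemorrhages rather than simple edges.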
AI is now applied across virtually every area of healthcare. The following table summarizes the major application domains:
| Application area | Description | Key technologies | Notable examples |
|---|---|---|---|
| Medical imaging and radiology | Automated detection and classification of abnormalities in X-rays, CT scans, MRIs, and ultrasound images | CNNs, vision transformers, U-Net architectures | Viz.ai (stroke detection), Aidoc (triage), Lunit INSIGHT (chest X-ray) |
| Digital pathology | AI-assisted analysis of tissue samples and histopathology slides for cancer grading, biomarker detection | Whole-slide image analysis, attention-based models | PathAI AISight Dx, Paige.AI (prostate cancer) |
| Drug discovery | Target identification, molecular generation, virtual screening, ADMET prediction | Generative models, graph neural networks, molecular diffusion models | Insilico Medicine, Recursion Pharmaceuticals, Isomorphic Labs |
| Clinical decision support | Real-time alerts, diagnostic suggestions, treatment recommendations based on patient data | Large language models, ensemble classifiers, Bayesian networks | Epic Sepsis Model, IBM Watson for Oncology |
| Electronic health record (EHR) analysis | Extraction of structured data from unstructured clinical notes, predictive analytics for patient outcomes | NLP, transformer models, temporal sequence models | Google Health EHR predictions, Amazon Comprehend Medical |
| Robotic surgery | AI-enhanced surgical systems providing greater precision, 3D visualization, and tremor filtering | Robotic control systems, computer vision, haptic feedback | Intuitive Surgical da Vinci, Medtronic Hugo RAS |
| Mental health and therapy | AI chatbots providing cognitive behavioral therapy (CBT), mood tracking, and emotional support | Conversational AI, sentiment analysis, NLP | Wysa, Woebot (pivoted to enterprise in 2025) |
| Genomics and precision medicine | Variant calling, gene expression analysis, pharmacogenomic predictions | Sequence models, recurrent neural networks, transformers | DeepVariant (Google), DRAGEN (Illumina) |
| Clinical trial optimization | Patient recruitment, site selection, protocol design, endpoint prediction | Predictive analytics, NLP for literature mining | Unlearn.AI, Medidata AI |
| Administrative and operational | Scheduling, billing, prior authorization, clinical documentation | NLP, robotic process automation | Nuance DAX Copilot (ambient clinical documentation) |
The U.S. Food and Drug Administration (FDA) has been the primary regulatory body overseeing AI-based medical devices. The number of authorized AI and machine learning-enabled medical devices has grown rapidly:
| Year | Cumulative AI/ML devices authorized | Notable milestone |
|---|---|---|
| 2018 | ~100 | FDA De Novo clearance for Viz.ai Contact (stroke), IDx-DR (diabetic retinopathy) |
| 2020 | ~300 | Radiology continues to dominate approvals |
| 2022 | ~520 | Growth accelerates across specialties |
| 2024 | ~950 | FDA updates its public database; cardiology and neurology expand |
| Mid-2025 | ~1,250 | 258 devices authorized in 2025 alone, the most in FDA history |
| Late 2025 | 1,300+ | Radiology accounts for roughly 75-80% of all listings |
The overwhelming majority of these devices fall under radiology (approximately 75-80%), with cardiology accounting for about 10% and neurology, hematology, and other specialties making up the remainder. Most devices have been cleared through the 510(k) pathway, though several pioneering devices used the De Novo premarket review pathway.
The following table highlights specific FDA-cleared AI medical devices across clinical specialties as of early 2026:
| Device name | Company | Specialty | Function | Clearance pathway |
|---|---|---|---|---|
| IDx-DR (LumineticsCore) | Digital Diagnostics | Ophthalmology | Autonomous screening for diabetic retinopathy from retinal images | De Novo (2018) |
| Viz.ai Contact | Viz.ai | Neurology | CT angiography analysis for large vessel occlusion stroke detection and care coordination | De Novo (2018) |
| Caption AI | Caption Health (acquired by GE Healthcare) | Cardiology | AI-guided cardiac ultrasound acquisition for novice users | De Novo (2020) |
| Aidoc BriefCase | Aidoc | Radiology (triage) | Automated triage flagging of CT scans for pulmonary embolism, intracranial hemorrhage, cervical spine fractures | 510(k) (multiple) |
| Lunit INSIGHT CXR | Lunit | Radiology | Detection of 10+ chest X-ray abnormalities including nodules, consolidation, pneumothorax | 510(k) (2022) |
| Paige Prostate | Paige.AI | Pathology | Detection of cancer in prostate biopsies from whole-slide images | De Novo (2021) |
| AISight Dx | PathAI | Pathology | Digital pathology image viewing and management for primary diagnosis with PCCP | 510(k) (2022, updated 2025) |
| GE Healthcare CareIntelli | GE Healthcare | Radiology | AI-powered X-ray analysis and workflow prioritization | 510(k) (2024) |
| Hyperfine Swoop | Hyperfine | Radiology | Portable, AI-enabled MRI system for point-of-care brain imaging | 510(k) (2020) |
| Pearl Second Opinion | Pearl Inc. | Dentistry | AI analysis of dental radiographs for cavity detection and pathology identification | 510(k) (2023) |
On January 7, 2025, the FDA issued draft guidance for AI-enabled device software functions that recommends including model description, data lineage, performance tied to claims, bias analysis, human-AI workflow documentation, monitoring protocols, and a Predetermined Change Control Plan in submissions. This guidance reflects the FDA's evolving approach to regulating software that continuously learns and updates [6][7].
One of the most widely cited examples of AI in healthcare is Google's system for detecting diabetic retinopathy from retinal fundus photographs. Developed by Google's health research teams and later integrated into Google DeepMind's work, the system uses deep learning to analyze retinal images and identify signs of diabetic retinopathy that, if left untreated, can lead to blindness.
In clinical evaluations, AI-assisted screening for diabetic retinopathy demonstrated sensitivities up to 96% and specificities up to 98% for detecting referable disease on fundus photographs. These results suggest that AI can serve as a reliable alternative to manual specialist grading in screening programs, increasing early detection rates and reducing the burden on ophthalmologists, particularly in regions with limited access to specialist care. The FDA cleared IDx-DR (now known as LumineticsCore) in April 2018 as the first autonomous AI diagnostic system authorized for marketing, capable of making screening decisions without a clinician needing to interpret the results.
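Sensitivity and specificity, the two metrics quoted above, come directly from a screening confusion matrix. The counts in this example are invented to reproduce the cited 96%/98% figures, not taken from any actual trial.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # true positive rate: diseased eyes flagged
    specificity = tn / (tn + fp)  # true negative rate: healthy eyes cleared
    return sensitivity, specificity

# Hypothetical screening run: 1,000 images, 100 with referable disease.
sens, spec = sensitivity_specificity(tp=96, fn=4, tn=882, fp=18)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.96, 0.98
```

High sensitivity matters most in screening (missed disease goes untreated), while high specificity limits unnecessary referrals to already-scarce ophthalmologists.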
PathAI is a Boston-based company focused on AI-powered pathology. In June 2025, PathAI received FDA 510(k) clearance for its AISight Dx platform for primary diagnosis in clinical settings. This clearance built on an initial 510(k) clearance for AISight Dx in 2022 and included a Predetermined Change Control Plan (PCCP), enabling PathAI to validate and implement specified major changes (such as additional displays, scanners, file formats, and browsers) without requiring additional 510(k) submissions. AISight Dx was the first digital pathology image viewing and management software to receive FDA clearance with a PCCP.
PathAI has also received FDA Breakthrough Device Designation for PathAssist Derm, an AI-powered dermatopathology workflow solution, and its AIM-MASH AI Assist became the first AI-powered pathology tool to receive FDA qualification for use in metabolic dysfunction-associated steatohepatitis (MASH) clinical trials. Labcorp has expanded its partnership with PathAI to roll out the digital pathology platform across the United States.
Viz.ai developed the Viz.ai Contact application, a clinical decision support software designed to analyze CT scan results and notify providers of a potential large vessel occlusion (LVO) stroke. The system was cleared by the FDA in February 2018 through the De Novo premarket review pathway, making it one of the first AI-powered stroke detection tools authorized for clinical use.
The software analyzes CT angiography images and, upon detecting a suspected LVO, sends a text message notification directly to a neurovascular specialist, who can view the results on a mobile device and decide whether to initiate emergency treatment. In a study involving 300 CT scans, the software detected suspected strokes faster than neuroimaging specialists in more than 95% of cases and saved an average of 52 minutes in the notification workflow. Given that every minute of delay in stroke treatment can result in the loss of approximately 1.9 million neurons, this time savings has meaningful clinical implications. Viz.ai has since expanded its platform to cover pulmonary embolism, aortic disease, and other time-sensitive conditions.
AI has shown particular promise in cancer detection across multiple modalities. In lung cancer screening, studies have demonstrated that AI-driven prioritization of low-dose CT (LDCT) scans, which flags high-risk nodules for earlier radiologist review, reduced diagnostic intervals by 15% and enabled earlier management of stage I cancers. Lunit INSIGHT CXR, cleared by the FDA for chest X-ray analysis, can detect more than ten abnormalities including lung nodules, consolidation, and pneumothorax, serving as a second reader for radiologists.
In breast cancer screening, AI tools have been shown to reduce radiologist workload while maintaining or improving sensitivity. A 2025 study from Sweden found that AI-supported mammography screening could safely reduce the number of mammograms needing radiologist review by approximately 50%, without missing cancers, potentially addressing the growing shortage of breast imaging specialists [25].
In pathology, Paige.AI's Paige Prostate became the first AI system to receive De Novo authorization for cancer detection in pathology in 2021, and subsequent products have expanded to cover gastric and other cancer types.
The AI market in medical imaging is expanding rapidly. Industry projections estimate growth from $7.52 billion in 2025 to approximately $26.16 billion by 2030. Radiology remains the dominant application area, accounting for 75-80% of all FDA-cleared AI medical devices.
Key trends in medical imaging AI as of 2026 include:
| Trend | Description |
|---|---|
| Multimodal integration | AI systems combining radiology, pathology, genomics, and clinical data for comprehensive diagnostic assessments |
| Foundation models | Large pre-trained models (such as Google's Med-Gemini) adapted for specific imaging tasks, reducing the data requirements for specialty applications |
| Point-of-care AI | Portable, AI-enabled imaging devices like Hyperfine's Swoop MRI bringing AI-assisted imaging to the bedside and resource-limited settings |
| Quantitative imaging biomarkers | AI extraction of quantitative measurements from imaging studies to track disease progression and treatment response |
| AI-assisted reporting | Automated generation of structured radiology reports from imaging findings, reducing reporting time |
Google introduced Med-Gemini, a specialized version of Gemini designed to process complex multimodal medical data including X-rays and pathology slides, achieving a 91.1% score on standardized medical exam questions. This represents the broader trend toward foundation models being adapted for medical imaging applications [26].
The emergence of large language models (LLMs) has opened new frontiers in healthcare AI, particularly in clinical documentation, medical question answering, and decision support.
Google's Med-PaLM, introduced in 2022, was the first LLM to exceed a passing score on United States Medical Licensing Examination (USMLE)-style questions, achieving 67.2% accuracy on the MedQA benchmark. Its successor, Med-PaLM 2 (2023), achieved 86.5% accuracy on the same benchmark, approaching expert-level performance. A study published in Nature reported that physicians preferred Med-PaLM 2's long-form answers over physician-generated answers on multiple evaluation axes, including factuality and lower likelihood of harm.
OpenAI's GPT-4, released in March 2023, demonstrated strong medical reasoning capabilities without any specialized fine-tuning. On the MedQA dataset, GPT-4 achieved 86.1% accuracy, exceeding the passing threshold by over 20 points and outperforming both earlier general-purpose models like GPT-3.5 and models specifically fine-tuned on medical knowledge. Subsequent iterations further improved performance: GPT-4o achieved 93% accuracy on the autumn 2022 USMLE exam and 100% on the spring 2021 exam. When tested on 30 unique questions not available online, GPT-4o maintained a 96% accuracy rate and consistently outperformed medical students, with a mean score of 95.54% compared to students' 72.15%.
A growing body of research has evaluated LLMs as diagnostic tools beyond standardized exams. A systematic review and meta-analysis published in the Journal of Medical Internet Research in 2025 analyzed 4,762 cases across 19 LLMs and found that LLMs have demonstrated considerable diagnostic capabilities across various clinical cases. For unpublished challenging cases in internal medicine, GPT-4 achieved 61.1% correct diagnoses in its top six suggestions, compared to 49.1% for physicians. For common clinical scenarios, GPT-4 included the correct diagnosis in its top three suggestions 100% of the time, compared to 84.3% for physicians [27].
However, the same research noted that overall LLM accuracy still falls short of clinical professionals in many real-world scenarios, and proper clinical trials are needed to ensure safety and effectiveness. Most studies (65%) evaluated LLMs as support tools in the physician's diagnostic process rather than as standalone diagnosticians [27].
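The "correct diagnosis in the top k suggestions" metric used in these studies is straightforward to compute. The cases and ranked model outputs below are fabricated placeholders to show the calculation, not data from the cited review.

```python
def top_k_accuracy(cases, k):
    """cases: list of (true_diagnosis, ranked_model_suggestions) pairs."""
    hits = sum(1 for truth, ranked in cases if truth in ranked[:k])
    return hits / len(cases)

cases = [
    ("pulmonary embolism", ["pneumonia", "pulmonary embolism", "pleurisy"]),
    ("giant cell arteritis", ["migraine", "tension headache", "sinusitis"]),
    ("appendicitis", ["appendicitis", "gastroenteritis", "colitis"]),
    ("lyme disease", ["viral arthritis", "gout", "lyme disease"]),
]
print(top_k_accuracy(cases, k=1))  # 0.25
print(top_k_accuracy(cases, k=3))  # 0.75
```

Note how the choice of k changes the headline number substantially, which is one reason top-1, top-3, and top-6 figures from different studies are not directly comparable.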
A randomized clinical trial published in JAMA Network Open in 2024 examined how access to an LLM influenced physician diagnostic reasoning. The study found that while LLM access improved diagnostic accuracy for some cases, the effect was inconsistent and depended on case complexity. Physicians sometimes over-relied on LLM suggestions even when the model was incorrect, highlighting the need for careful integration protocols [28].
Beyond exam performance, LLMs are increasingly used for clinical documentation. Microsoft's Nuance DAX Copilot, integrated into Epic and other EHR systems, uses ambient AI to listen to patient-clinician conversations and automatically generate clinical notes. This technology addresses one of the most significant sources of physician burnout: the documentation burden. Studies have shown that physicians spend nearly two hours on EHR tasks for every hour of direct patient care, and ambient AI documentation tools aim to reduce this substantially.
In June 2025, Microsoft unveiled the MAI Diagnostic Orchestrator (MAI-DxO), an AI diagnostic system that represents a new approach to medical reasoning. Rather than relying on a single model, MAI-DxO uses a chain-of-debate orchestration mechanism that queries multiple leading AI models, including OpenAI's o3, Google's Gemini, Anthropic's Claude, Meta's Llama, and xAI's Grok, mimicking a virtual panel of physicians engaged in diagnostic reasoning.
In evaluations using complex diagnostic scenarios drawn from the New England Journal of Medicine (NEJM) Case Records (clinicopathological conference cases), the best-performing configuration (MAI-DxO paired with OpenAI's o3) correctly diagnosed 85.5% of cases. For comparison, 21 practicing physicians from the United States and United Kingdom achieved a mean accuracy of 20% on the same cases. MAI-DxO also delivered lower overall testing costs than physicians or any individual foundation model tested alone.
Microsoft emphasized that MAI-DxO is not intended to replace physicians but to augment their capabilities. Next steps include testing in real-world clinical settings and establishing regulatory pathways for deployment. The system has sparked significant discussion in the medical community about the role of AI in complex diagnostic reasoning and Microsoft's broader push toward what it has termed "medical superintelligence."
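The simplest form of pooling several models' opinions is a majority vote, sketched below. MAI-DxO's actual chain-of-debate orchestration is considerably more sophisticated (iterative questioning, test ordering, cost tracking), and `query_model` here is a hypothetical stub returning canned answers rather than a real API call.

```python
from collections import Counter

def query_model(model_name, case_summary):
    # Stand-in for a real call to each foundation model's API.
    canned = {
        "o3": "aortic dissection",
        "gemini": "aortic dissection",
        "claude": "myocardial infarction",
        "llama": "aortic dissection",
        "grok": "pulmonary embolism",
    }
    return canned[model_name]

def panel_diagnosis(case_summary, panel):
    """Poll each model on the panel and return the plurality diagnosis."""
    votes = Counter(query_model(m, case_summary) for m in panel)
    diagnosis, count = votes.most_common(1)[0]
    return diagnosis, count / len(panel)

dx, agreement = panel_diagnosis(
    "tearing chest pain radiating to the back",
    ["o3", "gemini", "claude", "llama", "grok"],
)
print(dx, agreement)  # aortic dissection 0.6
```

A debate-style orchestrator goes further than voting by feeding each model's reasoning back to the others, so disagreements are argued out rather than simply tallied.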
AI-driven drug discovery has progressed from experimental curiosity to clinical utility, with AI-designed therapeutics now in human trials across diverse therapeutic areas. However, as of early 2026, no AI-discovered drug has achieved full FDA approval.
The most advanced example of an AI-discovered and AI-designed drug is Rentosertib (formerly ISM001-055), developed by Insilico Medicine. Rentosertib is a first-in-class inhibitor of TRAF2 and NCK-interacting kinase (TNIK), a target identified using Insilico's AI platform. Both the target and the molecule were discovered using generative AI, making it the first drug where AI was used end-to-end from target discovery through molecular design.
In a Phase IIa randomized, double-blind, placebo-controlled trial enrolling 71 patients with idiopathic pulmonary fibrosis (IPF) across 21 sites in China, patients receiving 60 mg once-daily Rentosertib experienced a mean improvement in forced vital capacity (FVC) of +98.4 mL, compared to a mean decline of -20.3 mL in the placebo group. The drug exhibited a manageable safety and tolerability profile, with adverse events generally mild to moderate. The results were published in Nature Medicine in 2025 and presented at the American Thoracic Society (ATS) 2025 conference [29].
| Company | Drug candidate | Indication | Stage (early 2026) | AI role |
|---|---|---|---|---|
| Insilico Medicine | Rentosertib (ISM001-055) | Idiopathic pulmonary fibrosis | Phase II (positive Phase IIa results) | End-to-end: AI-discovered target and AI-designed molecule |
| Schrödinger / Nimbus | Zasocitinib (TAK-279) | Autoimmune diseases | Phase III | Physics-based computational design of TYK2 inhibitor |
| Recursion Pharmaceuticals | Multiple candidates | Oncology, rare diseases | Phase I/II | AI-driven target identification and phenotypic screening |
| Isomorphic Labs (DeepMind) | Undisclosed | Multiple | Preclinical/early clinical | AlphaFold-based protein structure prediction for drug design |
| Iambic Therapeutics | Multiple candidates | Oncology | Phase I | AI-accelerated lead optimization |
| Generate Biomedicines | Multiple candidates | Immunology, oncology | Phase I | Generative AI for protein therapeutic design |
AI can compress early discovery timelines by 30-40%, but clinical trial duration, regulatory review timelines, and manufacturing scale-up remain constrained by biology and regulatory requirements. Leading biotechs like Iambic and Generate are expected to have three or more AI-designed drugs each in clinical trials by 2026 [30].
Beyond drug design, AI is transforming clinical trial operations. AI tools help optimize patient recruitment by identifying eligible candidates from electronic health records, select appropriate trial sites based on historical enrollment data, design adaptive trial protocols, and predict clinical endpoints. Companies like Unlearn.AI use digital twins (AI-generated virtual patient models) to create synthetic control arms, potentially reducing the number of patients needed in placebo groups while maintaining statistical rigor.
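The digital-twin idea can be illustrated with a minimal simulation: a model trained on historical control patients predicts each new patient's expected untreated outcome, and comparing treated outcomes against those predictions sharpens the treatment-effect estimate. All data below is simulated, and this linear sketch greatly simplifies production systems such as Unlearn.AI's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Historical controls: baseline severity -> outcome (linear trend + noise).
baseline_hist = rng.normal(50, 10, 500)
outcome_hist = 0.8 * baseline_hist + rng.normal(0, 5, 500)

# "Digital twin" model: least-squares fit on the historical control data.
A = np.vstack([baseline_hist, np.ones_like(baseline_hist)]).T
coef, intercept = np.linalg.lstsq(A, outcome_hist, rcond=None)[0]

def predict_twin(baseline):
    """Predicted outcome had this patient received no treatment."""
    return coef * baseline + intercept

# New trial: treated patients, with a true treatment effect of -5.
baseline_trial = rng.normal(50, 10, 100)
outcome_treated = 0.8 * baseline_trial - 5 + rng.normal(0, 5, 100)

# Each patient's twin prediction serves as their individualized "control".
effect = np.mean(outcome_treated - predict_twin(baseline_trial))
print(round(effect, 1))  # roughly -5
```

Because each patient is compared against their own predicted counterfactual rather than a separate placebo group, fewer concurrently enrolled controls are needed for the same statistical power.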
The American Hospital Association reported in 2025 that AI is being used across the clinical trial lifecycle, from protocol design through post-market surveillance, with particular impact on reducing patient recruitment timelines, which historically account for up to 30% of total trial duration [31].
The convergence of telemedicine and AI has accelerated since the COVID-19 pandemic, which drove massive adoption of remote healthcare delivery. AI enhances telemedicine in several ways:
| Application | Description | Examples |
|---|---|---|
| Triage and symptom assessment | AI chatbots that assess patient symptoms before or instead of a live consultation, routing patients to appropriate care | Babylon Health (now dissolved), Ada Health, Buoy Health |
| Remote patient monitoring | AI analysis of data from wearable devices and home monitoring equipment to detect deterioration | Current Health (acquired by Best Buy Health), Biofourmis |
| Virtual diagnostic support | AI tools that assist clinicians during telemedicine visits by analyzing images, lab results, or patient data in real time | Derm.ai for teledermatology, TytoCare AI for remote physical exams |
| Chronic disease management | AI-powered platforms for ongoing management of diabetes, hypertension, heart failure, and other chronic conditions | Livongo (now part of Teladoc), Omada Health |
AI-powered triage tools have become particularly important in telemedicine, helping direct patients to the right level of care. These systems analyze symptoms, medical history, and risk factors to determine whether a patient needs emergency care, an urgent visit, or can safely wait for a scheduled appointment.
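A toy rule-based version of this routing logic is shown below. Real triage systems use probabilistic models trained on clinical data; every symptom list, threshold, and rule here is invented purely to illustrate the input-to-disposition mapping.

```python
RED_FLAGS = {"chest pain", "difficulty breathing", "stroke symptoms",
             "severe bleeding"}
URGENT = {"high fever", "persistent vomiting", "moderate pain"}

def triage(symptoms, age, chronic_conditions=0):
    """Map reported symptoms and risk factors to a care disposition."""
    symptoms = {s.lower() for s in symptoms}
    if symptoms & RED_FLAGS:
        return "emergency"
    # Age and comorbidity burden escalate otherwise routine presentations.
    if symptoms & URGENT or (age >= 75 and chronic_conditions >= 2):
        return "urgent visit"
    return "scheduled appointment"

print(triage({"chest pain"}, age=58))                        # emergency
print(triage({"high fever"}, age=30))                        # urgent visit
print(triage({"mild cough"}, age=80, chronic_conditions=3))  # urgent visit
print(triage({"mild cough"}, age=25))                        # scheduled appointment
```

The hard clinical problem is not the routing itself but calibrating it: an over-sensitive system floods emergency departments, while an under-sensitive one misses deteriorating patients.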
AI applications in mental health have grown significantly, with the AI mental health market estimated at $1.8 billion in 2025 and projected to reach $11.8 billion by 2034, representing a 24% CAGR [32].
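Compound annual growth rate (CAGR) figures like the one above can be recomputed from the endpoints. Treating 2025 to 2034 as nine compounding years gives a value close to the quoted ~24%; the small difference reflects rounding and the vendor's exact compounding window.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1 / years) - 1

rate = cagr(1.8, 11.8, 9)  # $1.8B (2025) -> $11.8B (2034)
print(f"{rate:.1%}")  # 23.2%
```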
Wysa is an AI-powered mental health chatbot that has helped over 5 million users in more than 90 countries. The platform combines a free chatbot for journaling and mindfulness exercises with paid live coaching sessions. Wysa's chatbot uses cognitive behavioral therapy (CBT), dialectical behavior therapy (DBT), meditation, and breathing techniques, drawing from a library of over 150 therapeutic exercises. In 2025, Wysa received FDA Breakthrough Device Designation, recognizing its potential as a clinical-grade mental health tool [32].
A clinical study evaluating Wysa's impact on adults with chronic conditions (arthritis or diabetes) found that participants using the chatbot showed reductions in depression and anxiety compared to a control group over four weeks, though no significant changes were observed in stress levels.
Woebot Health, one of the early pioneers of AI mental health tools, shut down its direct-to-consumer CBT chatbot on June 30, 2025, pivoting entirely to an enterprise model focused on partnerships with payers, providers, and businesses. This shift reflects broader challenges in commercializing consumer-facing mental health AI, including regulatory constraints and reimbursement hurdles.
A 2025 RAND study published in JAMA Network Open reported that 92.7% of young users found AI mental health advice helpful. However, experts emphasize that AI chatbots are supplements to, not replacements for, licensed therapy. Concerns persist about the ability of AI tools to identify and appropriately escalate crisis situations, the quality of therapeutic relationships formed with AI agents, and the lack of long-term efficacy data for AI-delivered mental health interventions [33].
AI systems in healthcare require access to large volumes of patient data for training and operation, raising significant privacy concerns. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) governs the privacy and security of patient health information. In January 2025, the HHS Office for Civil Rights proposed the first major update to the HIPAA Security Rule in 20 years, removing the distinction between "required" and "addressable" safeguards and introducing stricter expectations for risk management, encryption, and resilience. These changes are particularly significant for organizations deploying AI, as AI models may inadvertently memorize or expose protected health information during training or inference.
The challenge is compounded by the fact that many AI models, especially deep learning systems, function as "black boxes," making it difficult to audit how patient information is used and shared, or to verify compliance with privacy standards.
Biases embedded in training datasets can amplify systemic inequalities, disproportionately affecting marginalized populations. Medical AI systems trained primarily on data from specific demographics may perform poorly on underrepresented groups. For example, dermatology AI trained predominantly on images of lighter skin tones has shown reduced accuracy when applied to patients with darker skin. There is also what researchers call the "streetlight effect" or "observational bias," a tendency to use the most readily available high-volume data rather than data that would better answer clinical questions. Addressing these biases requires costly data collection, careful curation, and ongoing monitoring of model performance across diverse populations.
As AI becomes more autonomous in healthcare decision-making, assigning responsibility for errors or adverse outcomes becomes increasingly complex. When an AI algorithm makes an incorrect diagnosis or suggests a harmful treatment, determining who is liable (the healthcare provider, the AI developer, or the hospital) remains legally unclear. Clear frameworks must be established to define liability, and until they are, this uncertainty may slow clinical adoption.
Many clinicians remain skeptical of AI tools, particularly when the reasoning behind recommendations is opaque. The "black box" problem is especially acute in healthcare, where clinical decisions can have life-or-death consequences. Building trust requires not only demonstrating strong performance in clinical trials but also providing interpretable outputs, integrating AI seamlessly into existing workflows, and giving clinicians the ability to override AI recommendations when their clinical judgment suggests a different course of action.
The pace of AI development often outstrips the ability of regulatory bodies to evaluate and approve new tools. The FDA's traditional device approval pathways were not designed for software that continuously learns and updates. The agency has been working to develop new frameworks, including the Predetermined Change Control Plan (PCCP) approach piloted with devices like PathAI's AISight Dx, which allows pre-specified modifications without requiring new regulatory submissions. However, a comprehensive regulatory framework for adaptive AI in healthcare remains a work in progress.
The global AI in healthcare market has experienced rapid growth and is projected to continue expanding:
| Metric | Value | Source |
|---|---|---|
| Global market size (2025) | $36.7 billion - $38.0 billion | Grand View Research, Precedence Research |
| Projected market size (2030) | $148.4 billion+ | MarketsandMarkets |
| Projected market size (2034) | $613.8 billion | Precedence Research |
| CAGR (2025-2030) | 38.6% | MarketsandMarkets |
| AI-enabled medical devices market (2024) | $13.67 billion | Grand View Research |
| AI-enabled medical devices market (2033, projected) | $255.76 billion | Grand View Research |
| AI in medical imaging market (2025) | $7.52 billion | Industry estimates |
| AI in medical imaging market (2030, projected) | $26.16 billion | Industry estimates |
| AI mental health market (2025) | $1.8 billion | Industry estimates |
| AI mental health market (2034, projected) | $11.8 billion | Industry estimates |
| FDA-authorized AI/ML devices (end of 2025) | 1,300+ | FDA |
The software segment dominated the AI-enabled medical devices market with a revenue share of 51.15% in 2024, reflecting the importance of algorithms and analytics platforms over hardware in driving market growth.
As of early 2026, AI in healthcare is transitioning from a period of experimentation and proof-of-concept studies into broader clinical deployment. Several trends define this moment:
Regulatory maturation. The FDA has authorized over 1,300 AI/ML-enabled medical devices, and the pace of approvals continues to accelerate. New regulatory approaches, including PCCPs and the FDA's proposed framework for adaptive AI, aim to balance innovation with patient safety. The FDA's January 2025 draft guidance on AI-enabled device software functions establishes more detailed expectations for submissions.
LLM integration. Large language models are being integrated into electronic health record systems for ambient clinical documentation, clinical decision support, and patient communication. Microsoft's Nuance DAX Copilot and similar tools are seeing growing adoption across health systems.
Diagnostic AI advances. Systems like Microsoft's MAI-DxO have demonstrated performance that exceeds that of practicing physicians on complex diagnostic benchmarks, though real-world clinical validation and regulatory approval remain ongoing. LLMs are increasingly being studied as diagnostic support tools, with systematic reviews showing strong performance on standardized cases but variable results in real-world complexity.
AI drug discovery reaches clinical milestones. Insilico Medicine's Rentosertib published positive Phase IIa results in Nature Medicine, representing the most advanced clinical evidence for an end-to-end AI-discovered and AI-designed drug. Multiple other AI-designed drugs are entering Phase I and Phase II trials across oncology, immunology, and rare diseases.
Mental health AI evolution. The mental health chatbot landscape has shifted significantly. Woebot Health shut down its direct-to-consumer CBT chatbot, pivoting to enterprise partnerships. Meanwhile, Wysa received FDA Breakthrough Device Designation and continues to expand, serving over 5 million users globally.
Multimodal AI. Increasingly, AI systems are combining multiple data modalities (imaging, genomics, clinical notes, lab results) to provide more comprehensive diagnostic and prognostic insights. Google's Med-Gemini and similar foundation models are being adapted for medical applications, and this multimodal approach is expected to be a major driver of clinical utility in the coming years.
Global expansion. While the United States leads in FDA approvals and market size, AI healthcare adoption is expanding globally, with the European Union's AI Act establishing new regulatory requirements and countries across Asia investing heavily in healthcare AI infrastructure.
The field continues to grapple with challenges around bias, privacy, liability, and clinician trust, but the trajectory is clear: AI is becoming an integral part of how healthcare is delivered, and its role will only deepen in the years ahead.