AI has captured our collective imagination, promising to revolutionize scientific research, healthcare and medicine. The headlines are compelling: AI designs drugs in months instead of years; algorithms decode neural signals; machine learning speeds the path from laboratory to patient. But enthusiasm often outpaces scientific reality. Bold predictions about AI supplanting human intelligence reflect Silicon Valley optimism more than consensus about what the technology can reliably achieve in biomedicine.
Where AI is actually delivering
There are clear, concrete successes. During the COVID-19 pandemic, AI helped prioritize drug candidates and accelerate aspects of vaccine research. Large language models now scan millions of papers to highlight connections that would take humans years to find. In neuroscience, AI-enabled brain–computer interfaces (BCIs) have decoded signals from paralyzed patients to control cursors and robotic limbs and, in some cases, translate neural activity into rudimentary text. Machine learning is also aiding high-resolution mapping of neural circuits.
Structural biology saw a breakthrough with DeepMind’s AlphaFold, which predicts protein structures with unprecedented accuracy. Protein-structure prediction matters because it underpins rational drug design and understanding of disease mechanisms. In drug discovery, a handful of high-profile examples show promise: Exscientia’s DSP-1181 entered human trials as an AI-designed molecule; Insilico Medicine and others have advanced candidates where AI played major roles in target selection or molecule design. Companies such as Recursion have used AI to compress timelines for target identification and preclinical testing. These are meaningful, incremental advances on defined problems.
Hype and its consequences
Yet these wins are specific, not a wholesale transformation of medicine. The AI drug-discovery sector has attracted billions of venture dollars, with startups promising to make drug development predictable and fast. That hype creates a fear of missing out among pharmaceutical decision makers and fuels exaggerated claims. The result: inflated expectations that, when unmet, breed disappointment and jeopardize longer-term support.
History warns that hype can lead to “AI winters” — funding pullbacks and slowed progress when promises fail to materialize. Overpromising also risks diverting resources from more modest but reliable scientific gains and can erode public and investor trust in genuine AI-enabled advances.
The scientific reality check
AI performs best with large, high-quality datasets and clear, generalizable patterns. Protein folding fits that description: proteins are composed of 20 amino acids and follow biochemical rules that make prediction feasible. Small-molecule drug discovery is different. Chemical space is vast; published synthetic chemistry data are messy, biased and often inconsistent. Models trained on such literature learn those biases.
The central failures that sink most drug programs — choosing the wrong biological target and unexpected human toxicity — remain difficult for AI. Predicting human pharmacology, off-target effects, and clinical efficacy requires deep biological insight and causal reasoning that current AI systems do not possess. In neuroscience, the challenges intensify: neural circuits vary between individuals and change over time. Combining connectivity maps from one specimen with activity recordings from another is usually invalid, limiting the datasets on which AI can be trained.
Looking ahead
The next five years will likely separate winners from losers. Some AI drug-discovery firms will produce genuine, narrowly scoped successes; others will pivot or fail. With around twenty AI-discovered candidates already in clinical trials, it is reasonable to expect the first AI-designed drugs to win regulatory approval in the coming years. Still, failure rates will remain high; most algorithms will perform unevenly across diseases, forcing companies to narrow their focus to problems where AI offers clear advantages.
In brain research, steady, domain-specific progress is likely. Expect improved BCIs for people with paralysis, better computational models of simple circuits, and AI tools for conditions where behavioral or sensor data are plentiful and structured. For instance, AI-driven gait analysis could optimize therapy for cerebral palsy, and video-analysis tools may aid earlier autism detection. But revolutionary treatments for Alzheimer’s, schizophrenia or other complex brain disorders will take much longer; the brain is not a deterministic machine and treating it as such has limits.
The path forward
AI is not a silver bullet. The most productive approach is strategic: apply AI where it has clear strengths — pattern recognition in large datasets, optimization of well-defined tasks, and augmentation of human decision-making — and combine it with traditional experimental biology and clinical insight. Hybrid models that pair machine-generated hypotheses with human creativity, domain expertise and rigorous experimentation will likely yield the best outcomes.
Researchers and companies should avoid grandiose claims and focus public and investor expectations on achievable milestones. Funders should support robust validation, transparent reporting of failures as well as successes, and long-term work that integrates AI into reproducible scientific pipelines.
Ultimately, the real promise of AI in medicine lies in augmenting, not replacing, human intelligence — helping scientists see patterns they might miss, prioritize experiments, and explore possibilities more efficiently. That is a more modest vision than Silicon Valley hyperbole, but it is also more achievable and ultimately more valuable for patients awaiting new treatments.
Dr. Mohammad Farhan is an Associate Professor at the College of Health and Life Sciences at Hamad Bin Khalifa University.
Hamad Bin Khalifa University’s Communications Directorate has submitted this piece on behalf of its author. The thoughts and views expressed are the author’s own and do not necessarily reflect an official University stance.
Kaitlyn Diana edited this piece.
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.