Todd Hollon

Assistant Professor
University of Michigan
tocho (at) umich.edu


Intelligent Neuroimaging

A focus of the MLiNS lab is to develop clinically grounded artificial intelligence for neuroimaging by learning directly from routine health system data at scale. Our work has advanced from HLIP, which introduced hierarchical language-image pretraining for uncurated 3D MRI and CT studies, to ItemizedCLIP, which learns more complete and explainable visual representations from structured radiology supervision. Building on these foundations, we developed NeuroVFM, a generalist neuroimaging foundation model trained on millions of clinical MRI and CT volumes, and Prima, a health system-scale vision-language model for brain MRI designed for real-world diagnosis, triage, and clinical decision support. Together, this research program aims to create neuroimaging models that are accurate, interpretable, fair, and deployable across the full spectrum of neurologic disease, while establishing the health system itself as a powerful engine for medical AI discovery.

Figure: Overview of health system-scale neuroimaging data, NeuroVFM self-supervised pretraining, and clinical evaluation with grounded findings and triage.

  1. Chenhui Zhao, Yiwei Lyu, Asadur Zaman Chowdury, Edward S. Harake, Akhil Kondepudi, Akshay T. Rao, Xinhai Hou, Honglak Lee, and Todd C. Hollon
    TRANSACTIONS ON MACHINE LEARNING RESEARCH · 2026

Vision-language pre-training for volumetric MRI and CT is usually limited by radiologist-curated datasets. HLIP instead uses hierarchical attention across the slice, scan, and study levels to pre-train on uncurated clinical data at scale, boosting performance on public brain MRI and head CT benchmarks.


  2. Yiwei Lyu, Chenhui Zhao, Soumyanil Banerjee, Shixuan Liu, Akshay T. Rao, Akhil Kondepudi, Honglak Lee, and Todd C. Hollon
    COMPUTER VISION AND PATTERN RECOGNITION · 2026

Standard contrastive language-image pre-training can neglect individual objects in visual scenes. ItemizedCLIP uses structured, itemized supervision to force models to learn and attend to every described item, yielding more complete and explainable visual representations.


  3. Akhil Kondepudi, Akshay Rao, Chenhui Zhao, Yiwei Lyu, Samir Harake, Soumyanil Banerjee, Rushikesh Joshi, Anna-Katharina Meissner, Renly Hou, Cheng Jiang, Asadur Chowdury, Ashok Srinivasan, Brian Athey, Vikas Gulani, Aditya Pandey, Honglak Lee, and Todd Hollon
    arXiv · 2026

    NeuroVFM is a visual foundation model trained on 5.24M clinical MRI and CT volumes via health system learning, a paradigm that leverages uncurated data from routine care. Using a scalable volumetric joint-embedding predictive architecture, it delivers state-of-the-art radiologic diagnosis and report generation with interpretable visual grounding, surpassing frontier models in accuracy, triage, and expert preference while reducing hallucinations.


  4. Yiwei Lyu, Samir Harake, Asadur Chowdury, Soumyanil Banerjee, Rachel Gologorsky, Shixuan Liu, Anna-Katharina Meissner, Akshay Rao, Chenhui Zhao, Akhil Kondepudi, Cheng Jiang, Xinhai Hou, Rushikesh S. Joshi, Volker Neuschmelting, Ashok Srinivasan, Dawn Kleindorfer, Brian Athey, Vikas Gulani, Aditya Pandey, Honglak Lee, and Todd Hollon
    NATURE BIOMEDICAL ENGINEERING · 2026

    Introduces Prima, a visual foundation model for brain MRI trained at health-system scale for diagnosis, triage, and clinically grounded decision support.


  5. Edward S. Harake, Joseph R. Linzey, Cheng Jiang, Rushikesh S. Joshi, Mark M. Zaki, Jaes C. Jones, Siri S. Khalsa, John H. Lee, Zachary Wilseck, Jacob R. Joseph, Todd C. Hollon, and Paul Park
    JOURNAL OF NEUROSURGERY SPINE · 2024

  6. Rachel Gologorsky, Edward Harake, Grace von Oiste, Mustafa Nasir-Moin, William Couldwell, Eric Oermann, and Todd Hollon
    PITUITARY · 2022
