2025 Workshop at BIRS: Day 3 Recordings

Events
Videos
Wednesday, August 20 · Day 3 of the 2025 BIRS Workshop “Foundation Models and Their Biomedical Applications: Bridging the Gap”
Published

August 20, 2025

Stats Up AI YouTube

Visit the Stats Up AI Channel for More

🏁 2025 Workshop at BIRS: Overview

Foundation Models and Their Biomedical Applications: Bridging the Gap

📍 Banff International Research Station (BIRS), Banff, Alberta, Canada
Event Website: 2025 Workshop Homepage · Dates: Aug 17–22, 2025

🎬 Talks — Quick Looks, Full Notes & Recordings

⮕ Full program: 2025 Workshop Schedule

↩︎ Read more on Stats Up AI 📰 Community News

▶️ Day 3 Recordings


🎤 Sheng Yu: Taming EHRs for Statistical Readiness through Large Language Models and Knowledge Graphs

📅 Wednesday, August 20, 2025 • 🕘 08:49 - 09:15
🏛️ Tsinghua University

Keywords: clinical text processing, ontology alignment, data standardization
Summary: LLMs and knowledge graphs can transform unstructured EHR narratives into standardized, statistically ready data, overcoming the variability of medical terminology that limits biomedical research.
📖 Read more

Introduction: Biomedicine has long been one of the most important application areas of statistics. With the widespread adoption of electronic health records (EHRs) over the past decade, these records should, in theory, provide a vast amount of data for analysis. However, in practice, they remain underutilized, as effectively extracting information from EHRs is still a challenging and specialized natural language processing task, due to the substantial medical knowledge required and the variability of medical terminology. In this talk, we will briefly review fundamental concepts for analyzing EHRs, explain the challenges that make EHR analysis difficult, and introduce how we developed large language models and knowledge graphs to convert EHR narratives into structured and standardized data ready for analysis—opening up new frontiers for statistical research and accelerating progress in biomedicine.
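To make the standardization idea concrete, here is a toy sketch of mapping free-text clinical mentions to standard codes via a synonym table. The real systems discussed in the talk use LLMs and knowledge graphs at far greater scale; the vocabulary and codes below are purely illustrative.

```python
# Toy illustration of terminology standardization: map free-text clinical
# mentions to a standard code via a synonym table. The codes below are
# hypothetical ICD-10-style labels, not a real vocabulary.
SYNONYMS = {
    "myocardial infarction": "I21",
    "heart attack": "I21",
    "mi": "I21",
    "type 2 diabetes": "E11",
    "t2dm": "E11",
    "diabetes mellitus type ii": "E11",
}

def standardize(mention: str):
    """Normalize a raw mention (case, whitespace) and look it up."""
    key = " ".join(mention.lower().split())
    return SYNONYMS.get(key)  # None signals an out-of-vocabulary term

notes = ["Heart attack", "T2DM", "fractured wrist"]
codes = [standardize(n) for n in notes]
```

The hard part, and the subject of the talk, is exactly what this toy hides: real clinical text varies far beyond what any fixed synonym table can cover, which is where LLMs and knowledge graphs come in.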

🎬 Open the video directly

🎤 Jian Kang: Scalable Bayesian inference for heat kernel Gaussian processes on manifolds

📅 Wednesday, August 20, 2025 • 🕘 09:45 - 10:15
🏛️ University of Michigan

Keywords: heat kernel methods, manifold regression, neuroimaging analysis
Summary: A scalable and general method is proposed for nonparametric regression on manifolds, making it feasible to analyze large neuroimaging datasets and broadly applicable to manifold learning problems.
📖 Read more

Introduction: We introduce a scalable and general method for nonparametric regression on manifolds, motivated by the challenge of modeling complex brain activation patterns in large neuroimaging studies such as the Human Connectome Project (HCP). Our approach leverages heat kernel techniques to capture the intrinsic geometric structure of the data and incorporates a novel approximation strategy that dramatically reduces computational cost, making it feasible to analyze datasets with thousands of subjects. Although inspired by neuroimaging applications, the method is broadly applicable to manifold learning problems across scientific domains. Numerical experiments demonstrate both its efficiency and accuracy in uncovering meaningful patterns in high-dimensional, structured data. This is joint work with Junhui He, Guoxuan Ma, and Ying Yang.
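The core idea can be sketched in a few lines: approximate the heat kernel on a manifold, sampled as a point cloud, from the eigenpairs of a graph Laplacian, and use it as a GP covariance for regression. This is a minimal illustration of the heat-kernel GP construction on a toy circle, not the speakers' approximation strategy (the talk's contribution is precisely how to scale this up).

```python
import numpy as np

# Illustrative sketch: heat-kernel GP regression on a point cloud.
rng = np.random.default_rng(0)
n = 200
theta = rng.uniform(0, 2 * np.pi, n)            # points on a circle (a simple manifold)
X = np.column_stack([np.cos(theta), np.sin(theta)])

# Gaussian-weighted adjacency and (unnormalized) graph Laplacian
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.1)
L = np.diag(W.sum(1)) - W

# Heat kernel K_t = sum_i exp(-lambda_i * t) * phi_i phi_i^T
lam, phi = np.linalg.eigh(L)
t = 0.05
K = (phi * np.exp(-lam * t)) @ phi.T            # scale each eigenvector column

# GP regression: posterior mean at the observed points
y = np.sin(2 * theta) + 0.1 * rng.standard_normal(n)
sigma2 = 0.01                                   # noise variance
mean = K @ np.linalg.solve(K + sigma2 * np.eye(n), y)
```

The eigendecomposition here costs O(n³), which is exactly what breaks down at the scale of studies like the HCP; the talk's approximation strategy targets that bottleneck.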

🎬 Open the video directly

🎤 Huaxiu Yao: From Evaluation to Actionability: Closing Factuality Gaps in Medical Large Vision–Language Models

📅 Wednesday, August 20, 2025 • 🕘 10:33 - 11:05
🏛️ University of North Carolina at Chapel Hill

Keywords: factuality evaluation, multimodal preference optimization, retrieval-augmented generation, agent-based reasoning
Summary: A layered framework is introduced to improve factual reliability in medical vision–language models, progressing from evaluation benchmarks to clinically grounded alignment, domain-aware retrieval, and agentic reinforcement learning for more trustworthy and actionable clinical decision support.
📖 Read more

Introduction: Medical large vision–language models (Med-LVLMs) have advanced rapidly, yet their clinical deployment remains constrained by persistent factuality gaps, especially in high-stakes settings where even rare hallucinations are unacceptable. This talk outlines a layered pathway from evaluation to actionability. We begin with CARES, a benchmark that systematically identifies failure modes in Med-LVLMs by evaluating deficiencies in factuality, robustness, and uncertainty estimation. Building on these insights, MMedPO introduces clinically grounded multimodal preference optimization, enhancing factual alignment through counterfactual supervision and lesion-aware training to reduce clinically harmful errors. However, alignment alone cannot keep pace with evolving medical knowledge. To address this, MMed-RAG incorporates domain-aware retrieval across modalities and specialties, employing adaptive context selection and retrieval-guided pre-tuning to dynamically balance reliance on internal model knowledge versus external information. For complex clinical scenarios requiring multi-step reasoning and interdisciplinary collaboration, we present MMedAgent-RL, which implements a generalist-to-specialist agentic workflow and uses reinforcement learning to optimize collaborative decision-making, enabling more faithful and interpretable end-to-end reasoning. Together, these components form a practical and extensible framework that improves factual reliability and clinical decision quality while maintaining transparency. The talk concludes with strategies for integration and a discussion of open challenges in scaling this framework to real-world clinical applications.
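The balance between internal model knowledge and external retrieval can be illustrated with a generic sketch (this is not MMed-RAG itself): score candidate documents against a query by cosine similarity, and fall back to the model's internal knowledge when nothing retrieved is similar enough. The embeddings and threshold below are made up for illustration.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_context(query_vec, doc_vecs, threshold=0.6, k=2):
    """Return indices of up to k documents above the similarity threshold;
    an empty list signals 'rely on internal model knowledge'."""
    scores = [cosine(query_vec, d) for d in doc_vecs]
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    return [i for i in ranked[:k] if scores[i] >= threshold]

q = np.array([1.0, 0.0, 0.0])
docs = [np.array([0.9, 0.1, 0.0]),   # relevant
        np.array([0.0, 1.0, 0.0]),   # irrelevant
        np.array([0.7, 0.7, 0.0])]   # borderline
selected = select_context(q, docs)
```

In the actual systems, the adaptive part is learning when and how much to retrieve per modality and specialty, rather than a single fixed threshold as in this toy.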

🎬 Open the video directly

🎤 Bang Liu: From Recall to Reason: Unlocking the Cognitive Core of Foundation Agents

📅 Wednesday, August 20, 2025 • 🕘 11:05 - 11:33
🏛️ University of Montreal

Keywords: long-term memory compression, hybrid reasoning, cognitive agent design
Summary: This talk presents a cognitive-inspired framework for foundation agents that advances long-term memory and adaptive reasoning, introducing reversible memory compression and hybrid fast–slow reasoning to build scalable, human-aligned agents.
📖 Read more

Introduction: Foundation agents, built on the backbone of large language models, are evolving from passive responders to active thinkers—autonomously remembering, reasoning, and improving across tasks and domains. Yet two cognitive capabilities remain crucial bottlenecks: how they remember and how they think. In this talk, I present a cognitive-inspired framework for understanding and architecting foundation agents, with an emphasis on two core pillars—memory and reasoning—through the lens of our recent advances. 1) R3Mem introduces reversible memory compression to balance long-term retention with precise retrieval, enabling LLM agents to recall extended histories and interact coherently across long horizons. 2) System-1.5 Reasoning breaks the dichotomy between fast heuristics and slow deliberation by creating dynamic shortcuts in latent space. It achieves CoT-level reasoning with up to 20× faster inference, bridging System-1 speed and System-2 depth. These systems pave the way for scalable, human-aligned foundation agents with enduring memory and adaptive reasoning. Lastly, I will briefly discuss the scientific significance of this work and its connections to AI agents and statistics.

🎬 Open the video directly

📌 Watch All Recordings

Stats Up AI YouTube

Visit the Stats Up AI Channel for More

AI is rapidly reshaping biomedical research by integrating diverse data, accelerating discovery, and supporting decision-making under uncertainty. With statisticians at the forefront, these applications gain the depth, rigor, and reliability needed to truly transform science and medicine.