A generative-AI-produced podcast, thanks to NotebookLM and Seattle Children’s Hospital’s Celiac Handbook (April 2025).


References and Resources for Digital Health Keynote: AI to Advance Population and Planetary Health

In this keynote, I highlight a few of the vast opportunities for generative AI to transform health and healthcare. For those hoping to learn more, including diving a little more deeply into just how generative AI (think ChatGPT, Gemini, DeepSeek, Llama, Claude, and more) works, here are a few of my favorite resources.

A few applications mentioned in the presentation

NotebookLM: Google’s research toolkit that uses generative AI to synthesize content from a range of sources shared by the user and makes it accessible through chat, concept mapping (mind maps), and a remarkably realistic AI-synthesized podcast.

ReXplain: a prototype tool by Luyang Luo and Pranav Rajpurkar that helps explain medical imaging findings to patients.

The ERAS Society offers resources for Enhanced Recovery After Surgery (ERAS); the potential for technology to support translating ERAS into practice is considered in this paper, “Elective Surgery and the Untapped Potential for Technology-Enabled Coproduction,” NEJM Catalyst, 2023.

Multimodal AI

A review of multimodal AI for medical imaging by Rao et al., “Multimodal generative AI for medical image interpretation,” March 2025.

CLIP, the original work linking two modalities (images and text) in this paper, helpfully explained in this video by Mike Pound and Computerphile.
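
To make the idea concrete, here is a minimal sketch of the contrastive objective at the heart of CLIP: image and text embeddings are projected into one shared space and trained so that matching image-caption pairs score highest. The embeddings below are random stand-ins rather than outputs of real encoders.

```python
# Minimal sketch of CLIP's core idea: paired image and text embeddings share
# one space and are trained so true pairs have the highest similarity.
# The embeddings here are random stand-ins, not outputs of real encoders.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, dim = 4, 8

image_emb = rng.normal(size=(n_pairs, dim))   # stand-in for an image encoder's output
text_emb = rng.normal(size=(n_pairs, dim))    # stand-in for a text encoder's output

# L2-normalize so dot products become cosine similarities.
image_emb /= np.linalg.norm(image_emb, axis=1, keepdims=True)
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)

# Similarity matrix: entry [i, j] scores image i against caption j.
sim = image_emb @ text_emb.T

def cross_entropy_on_diagonal(logits):
    # Training pushes the diagonal (true pairs) to dominate each row.
    logits = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx = np.arange(len(probs))
    return -np.log(probs[idx, idx]).mean()

# CLIP's loss is symmetric: images-to-captions and captions-to-images.
loss = (cross_entropy_on_diagonal(sim) + cross_entropy_on_diagonal(sim.T)) / 2
print(f"contrastive loss on random embeddings: {loss:.3f}")
```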

Generative AI / Large Language Models (LLMs) / A Fundamental Understanding

The original “Attention Is All You Need” paper by Ashish Vaswani et al., presented at the NIPS conference in 2017, built on a rich history of natural language processing research and launched the LLM revolution; ChatGPT 3.5 was fundamentally built on this work. The paper has been dissected and explained by many others, some of which are listed below.

3Blue1Brown video series (YouTube): a multi-chapter series on deep learning, starting here with Chapter 1, and with an extraordinarily animated explanation of transformers and the attention mechanism at the core of LLMs in Chapter 5.
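
For readers who prefer code to animation, here is a minimal sketch of the scaled dot-product attention those resources explain, using random toy matrices in place of learned queries, keys, and values.

```python
# Scaled dot-product attention, the mechanism at the core of transformers:
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
# The matrices below are random toy stand-ins, not learned projections.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_k = 5, 16

Q = rng.normal(size=(seq_len, d_k))  # queries: what each token is looking for
K = rng.normal(size=(seq_len, d_k))  # keys: what each token offers
V = rng.normal(size=(seq_len, d_k))  # values: the content that gets mixed

scores = Q @ K.T / np.sqrt(d_k)                       # similarity of every query to every key
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)         # softmax: each row sums to 1

output = weights @ V                                  # each token becomes a weighted mix of values

print(weights.round(2))   # how much each token attends to every other token
print(output.shape)       # (5, 16): one contextualized vector per token
```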

For those wanting to dive more deeply into the computer science of generative AI, see Andrej Karpathy’s YouTube series, including how to build GPT from scratch and, more recently, how to build and apply LLMs.

For prompt engineering, see Ethan Mollick’s work, starting with this excellent paper by Ethan and Lilach Mollick sharing a set of prompts that make ChatGPT (and other models) act as a coach, tutor, mentor, and more.
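
As a rough illustration of what such prompts look like (the wording below is my own, not the Mollicks’ published prompt), here is the basic structure: give the model a role, a task, and constraints on how it interacts.

```python
# A hedged sketch of the prompt structure (role + task + interaction rules).
# The wording is illustrative, not the Mollicks' published prompt text.
tutor_prompt = """\
You are a patient, encouraging tutor helping a learner master one topic.
First ask what the learner already knows about the topic.
Then explain one concept at a time, checking understanding with a short
question before moving on. Do not give final answers outright; guide the
learner toward them step by step."""

# In a chat-style model (ChatGPT, Gemini, Claude, ...) this text would
# typically be sent as the system / first message, followed by the learner's
# own messages.
messages = [
    {"role": "system", "content": tutor_prompt},
    {"role": "user", "content": "Help me understand how insulin works."},
]

for m in messages:
    print(f"[{m['role']}] {m['content'][:60]}...")
```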


AAAI 2025 Reflections

The 39th Annual Conference of the Association for the Advancement of Artificial Intelligence (AAAI) in Philadelphia highlighted how much AI is more than ChatGPT. Nevertheless, the accessibility and obvious impact of the large language models that underpin this technology are attracting scientists and engineers to AI in swarms, which promises phenomenal innovation to come.

Here are a few of the very many HIGHLIGHTS (too many to count): 

Agentic AI: moving from single-step to iterative problem solving by LLMs, which enables them to break large tasks into smaller ones, each solved by more specialized (smaller) models. This may also help improve the explainability of models.
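
Here is a minimal sketch of that agentic pattern, with hypothetical planner and solver functions standing in for real model calls.

```python
# A minimal sketch of the agentic pattern described above: a planner breaks a
# large task into subtasks, and each subtask is routed to a smaller, more
# specialized model. plan_subtasks() and specialized_solver() are hypothetical
# stand-ins for real LLM or tool calls.
def plan_subtasks(task: str) -> list[str]:
    # In a real system this would be an LLM call that returns a step list.
    return [f"{task}: step {i}" for i in range(1, 4)]

def specialized_solver(subtask: str) -> str:
    # A smaller model (or tool) handles each narrow subtask.
    return f"result for '{subtask}'"

def run_agent(task: str) -> list[str]:
    results = []
    for subtask in plan_subtasks(task):        # iterative, multi-step loop
        results.append(specialized_solver(subtask))
    return results                             # the visible intermediate steps also aid explainability

print(run_agent("summarize a patient's discharge instructions"))
```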

“One new start-up a month”

— Andrew Ng

Accelerating Innovation: The pace of prototyping has never been faster. LLMs accelerate the move from “hmm... I have an idea” to “I have an app/working piece of code to try,” thanks to accessible generative AI code and libraries. It will mean a lot more junk to sift through, but also way more pearls. Andrew Ng’s comments on fast prototyping are worth noting: “one new start-up a month.”

Powering new scientific research and discovery: AI can help at every step along the way, from scouring the literature to applying LLMs to that literature to generate new hypotheses to test. It’s time to think about embedding space (the way computers and LLMs represent words and knowledge), rather than real-world space, as the realm of exploration and understanding.
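
To illustrate what “embedding space” means in practice, here is a minimal sketch in which a few toy vectors (not real model embeddings) stand in for concepts, and cosine similarity stands in for relatedness.

```python
# A minimal sketch of "embedding space": words (or papers, or hypotheses) are
# represented as vectors, and nearness in that space stands in for relatedness.
# The three vectors below are toy stand-ins, not real model embeddings.
import numpy as np

embeddings = {
    "aspirin":   np.array([0.9, 0.1, 0.3]),
    "ibuprofen": np.array([0.8, 0.2, 0.4]),
    "volcano":   np.array([0.1, 0.9, 0.7]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Exploring "nearby" points in embedding space is how literature-mining tools
# surface related concepts and candidate hypotheses.
print(cosine(embeddings["aspirin"], embeddings["ibuprofen"]))  # high: related drugs
print(cosine(embeddings["aspirin"], embeddings["volcano"]))    # low: unrelated concepts
```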

Teaching AI to everyone: EAAI is a dedicated group of educators who share their lesson plans, games, and workbooks for teaching about AI at every level: from kids learning about embeddings and gender bias (there’s that embedding idea again), to a six-week program for people on parental leave by Sedef Akinli Kocak, PhD, MBA, to a class for all UT Austin faculty and staff by Peter Stone.

Of the many CHALLENGES, at least three were repeatedly made obvious at the meeting:

Questionable benchmarking: Entities that train models on the benchmarks used to evaluate them are distorting the playing field. It’s like studying from the answer key; the results don’t reflect the actual quality of the models. Subbarao Kambhampati delivered a fantastic keynote bemoaning the state of affairs and challenging the field to do better or risk losing trust.

Symbolic AI + transformer-based LLMs: Not every problem needs a trillion-parameter LLM to solve it. Often more traditional (and less resource-intensive) AI tools are just fine, or even better. This will gradually sort itself out, provided students of AI still learn things other than LLMs; too often students are being taught mostly about LLMs, and that needs to be addressed.

Domain experts needed: Applications need to keep up with the developers. AAAI featured many examples of LLM applications: improving credibility in elections (Biplav Srivastava), explaining radiology findings to patients (Luyang Luo), and crowdsourcing sustainability efforts (Elena Sierra), to name just a few. For these to continue, the AI community and domain experts need to work together. That means more experts in policy, health, finance, education, and beyond need to lean in and partner with their AI colleagues.

Here’s to translating these remarkable advances into a better society and a better world for all.