AI TOOLS / TECH MINDSET


AAAI 2025 Reflections

The 39th Annual Conference of the Association for the Advancement of Artificial Intelligence in Philadelphia highlighted how much more there is to AI than ChatGPT. Nevertheless, the accessibility and obvious impact of the Large Language Models that underpin this technology are drawing scientists and engineers to AI in swarms. This promises phenomenal innovation to come.

Here are a few of the many HIGHLIGHTS (too many to count):

Agentic AI: Moving from single-step to iterative problem solving with LLMs, which lets them break large tasks into smaller ones, each solved by a more specialized (smaller) model. This may also help improve the explainability of models.
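As a rough sketch of that decomposition pattern (the planner and the specialized handlers below are hypothetical stand-ins for real LLM calls, not any actual API):

```python
# Agentic decomposition, toy version: a "planner" breaks a compound task
# into steps, and each step is routed to a smaller, specialized solver.

def plan(task: str) -> list[str]:
    # Stand-in for an LLM planner: split a compound request into steps.
    return [step.strip() for step in task.split(" then ")]

def solve_math(step: str) -> str:
    # Stand-in for a small specialized model that handles arithmetic.
    expr = step.removeprefix("compute ")
    return str(eval(expr, {"__builtins__": {}}))  # toy only: trusted input

def solve_text(step: str) -> str:
    # Stand-in for a small specialized model that handles text transforms.
    return step.removeprefix("uppercase ").upper()

ROUTES = {"compute": solve_math, "uppercase": solve_text}

def run_agent(task: str) -> list[str]:
    results = []
    for step in plan(task):
        handler = ROUTES[step.split()[0]]  # route each step by its verb
        results.append(handler(step))
    return results

print(run_agent("compute 2 + 3 then uppercase hello"))  # ['5', 'HELLO']
```

The explainability point falls out of the structure: each subtask, its route, and its result can be logged and inspected, unlike one opaque end-to-end generation.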

“One new start-up a month”

— Andrew Ng

Accelerating Innovation: The pace of prototyping has never been faster. LLMs accelerate the move from “hmm... I have an idea” to “I have an app or a working piece of code to try,” thanks to accessible generative AI code and libraries. It will mean a lot more junk to sift through, but also far more pearls. Andrew Ng’s comments on fast prototyping are worth noting: “one new start-up a month.”

Powering new scientific research and discovery: LLMs are accelerating every step along the way, from scouring the literature to mining it for new hypotheses to test. It’s time to think of embedding space (the way computers and LLMs represent words and knowledge), rather than real-world space, as the realm of exploration and understanding.
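To make the embedding-space idea concrete, here is a toy illustration with made-up 3-dimensional vectors (real models use hundreds or thousands of dimensions): concepts become points, and nearness means relatedness.

```python
# Toy "embedding space": hand-made vectors, not output of any real model.
import math

embeddings = {
    "protein": [0.9, 0.1, 0.2],
    "enzyme":  [0.8, 0.2, 0.3],
    "galaxy":  [0.1, 0.9, 0.7],
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Exploring in embedding space = asking which concepts sit close together.
print(cosine(embeddings["protein"], embeddings["enzyme"]) >
      cosine(embeddings["protein"], embeddings["galaxy"]))  # True
```

Hypothesis generation in this framing becomes a geometric question: which regions of the space are densely connected in the literature, and which plausible neighbors have never been linked.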

Teaching AI to everyone: EAAI (the Symposium on Educational Advances in Artificial Intelligence) is a dedicated group of educators who share their lesson plans, games, and workbooks for teaching AI at every level: from kids learning about embeddings and gender bias (there’s that embedding idea again), to a six-week program for people on parental leave by Sedef Akinli Kocak, PhD, MBA, to a class for the whole of the UT Austin faculty and staff by Peter Stone.

Of the many CHALLENGES, at least three came up repeatedly at the meeting:

Questionable benchmarking: Entities that use benchmarks to train their models are distorting the playing field. It’s like studying from the answer key: the results don’t reflect the actual quality of the models. Subbarao Kambhampati delivered a fantastic keynote bemoaning the state of affairs and challenging the field to do better or risk losing trust.
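One simple form of the problem can even be checked mechanically. This is a minimal sketch (toy data, normalized exact-match only; real contamination audits also look for paraphrases) of measuring how much of a benchmark already appears in a training corpus:

```python
# Toy contamination check: if benchmark items appear verbatim in the
# training corpus, the benchmark score is "studying from the answer key."

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial edits don't hide a match.
    return " ".join(text.lower().split())

def contamination_rate(train_docs, benchmark_items) -> float:
    train_set = {normalize(d) for d in train_docs}
    hits = sum(normalize(q) in train_set for q in benchmark_items)
    return hits / len(benchmark_items)

train = ["The capital of France is Paris.", "Water boils at 100 C."]
bench = ["the capital of  France is Paris.", "What is 2 + 2?"]
print(contamination_rate(train, bench))  # 0.5
```

Half the toy benchmark leaked into training here; a model’s score on those items says nothing about its quality.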

Symbolic AI + Transformer-based LLMs: Not every problem needs a trillion-parameter LLM. More traditional (and less resource-intensive) AI tools are often just fine, and sometimes better. This will gradually sort itself out, provided students of AI still learn more than LLMs; too often they are taught mostly about LLMs, and that needs to be addressed.
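As a small classical-AI counterpoint (a toy forward-chaining rule engine with invented facts and rules, sketched here for illustration): for crisp logical problems like this, a textbook algorithm is exact, auditable, and runs in microseconds; no giant model required.

```python
# Forward chaining: repeatedly fire rules whose premises are all known,
# adding conclusions as new facts, until nothing new can be derived.

rules = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground", "freezing"}, "icy_road"),
]

def forward_chain(facts: set[str]) -> set[str]:
    facts = set(facts)  # copy so the caller's set is untouched
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"rain", "freezing"})
print(sorted(derived))  # ['freezing', 'icy_road', 'rain', 'wet_ground']
```

Every derived fact traces back to explicit rules, which is exactly the kind of guarantee an LLM cannot give.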

Domain experts needed: Applications need to keep pace with the developers. AAAI offered many examples of LLM applications: improving credibility in elections (Biplav Srivastava), explaining radiology findings to patients (Luyang Luo), and crowdsourcing sustainability efforts (Elena Sierra), to name just a few. For these to continue, the AI community and domain experts need to work together. That means more experts in policy, health, finance, education, and other fields need to lean in and partner with their AI colleagues.

Here’s to translating these remarkable advances into a better society and a better world for all.