Beyond Scaling: Frontiers of Retrieval-Augmented Language Models
Presented by Guest Speaker
Her research focuses on overcoming the limitations of large language models (LLMs) by developing augmented language model systems. Her work has received top paper awards at major NLP/ML conferences, the IBM Global Fellowship, and recognition from Forbes and MIT Technology Review.
Abstract
Despite their success, Large Language Models (LLMs) remain limited by issues such as hallucination and outdated knowledge. In this talk, Akari introduces Augmented LMs, a new paradigm that enhances LLMs with external modules for greater reliability. Focusing on Retrieval-Augmented LMs, she presents her research on scalable training and retrieval methods, and highlights OpenScholar, a system now used by over 30,000 researchers. She concludes with a vision for future advances in modular, multimodal AI.