Greg Durrett (UT Austin) - Specializing LLMs for Reliability
Abstract: Large language models (LLMs) have advanced the frontiers of AI reasoning: they can synthesize information from multiple sources, derive new conclusions, and explain those conclusions to their users. However, LLMs do not do this reliably. They hallucinate facts, convincingly state incorrect deductions, and exhibit logical fallacies like confirmation bias. In this talk, I will describe my lab’s work on making LLM systems reliable by introspecting their behavior. First, I will demonstrate that a better understanding of LLMs helps us train them to be more reliable reasoners. Our work shows that model interpretation techniques can advance training methodology and dataset curation for reasoning models. Second, I will argue that automating fine-grained evaluation of LLM output provides a level of understanding necessary for further progress. I will describe the ingredients of effective automated evaluators and a state-of-the-art factuality evaluation system, MiniCheck, showing that analyzing the nature of hallucinations can help reduce them. Finally, I will describe how a deeper understanding of LLMs will let us tackle their most fundamental limitations, such as their inconsistency when given different inputs. I will propose how these pieces might soon be combined to form reliable AI systems.
Speaker
Greg Durrett
Greg Durrett is an associate professor of Computer Science at UT Austin. His research spans natural language processing and machine learning. His group develops techniques for reasoning about knowledge in text, verifying the factuality of LLM generations, and specializing LLMs to make them more reliable. He is a 2023 Sloan Research Fellow and a recipient of a 2022 NSF CAREER Award. His work has been recognized with paper awards at EMNLP 2024 and EMNLP 2013. He was a founding organizer of the Workshop on Natural Language Reasoning and Structured Explanations at ACL 2023 and ACL 2024 and is a current member of the NAACL board. He received his BS in Computer Science and Mathematics from MIT and his PhD in Computer Science from UC Berkeley, where he was advised by Dan Klein.