Date & Time:
February 20, 2025 2:00 pm – 3:00 pm
Location:
Crerar 390, 5730 S. Ellis Ave., Chicago, IL

Abstract: The AI research community has become increasingly concerned about risks arising from capable AI systems, ranging from misuse of generative models to misalignment of agents. My research aims to address problems in AI safety by tackling key issues with the interpretability and controllability of large language models (LLMs). In this talk, I present research showing that we are well beyond the point of thinking of AI systems as “black boxes.” AI models, and LLMs especially, are more interpretable than ever. Advances in interpretability have enabled us to control model reasoning and update knowledge in LLMs, among other promising applications. My work has also highlighted challenges that must be solved for interpretability to continue progressing. Building from this point, I argue that we can explain LLM behavior in terms of “beliefs”, meaning that core knowledge about the world determines downstream behavior of models. Furthermore, model editing techniques provide a toolkit for intervening on beliefs in LLMs in order to test theories about their behavior. By better understanding beliefs in LLMs and developing robust methods for controlling their behavior, we will create a scientific foundation for building powerful and safe AI systems.

Speakers

Peter Hase

Resident AI Researcher, Anthropic

Peter Hase is an AI Resident at Anthropic. He recently completed his PhD at the University of North Carolina at Chapel Hill, advised by Mohit Bansal. His research focuses on NLP and AI Safety, with the goal of explaining and controlling the behavior of machine learning models. He is a recipient of a Google PhD Fellowship and before that a Royster PhD Fellowship. While at UNC, he also worked at Meta, Google, and the Allen Institute for AI.

Related News & Events

UChicago CS News: Five Paths to Lasting Influence: Celebrating Five UChicago CS Test of Time Award Recipients (Dec 02, 2025)
UChicago CS News: Researchers Built Their Own ISP to Fix the Internet. A Decade Later, It’s Still Running (Nov 20, 2025)
UChicago CS News: Hard to Discover, Harder to Use: The Widespread Failure of Ad Transparency Settings (Nov 18, 2025)
UChicago CS News: Constraints on Quantum-Advantage Experiments Due to Noise (Nov 13, 2025)
UChicago CS News: Data Movement Without Borders: Ian Foster and the Globus Team Honored with SC25’s Test of Time Award (Nov 13, 2025)
Video: How artists can protect their work from AI | Dr. Heather Zheng | TEDxChicago (Nov 05, 2025)
UChicago CS News: AI-Powered Network Management: GATEAU Project Advances Synthetic Traffic Generation (Oct 29, 2025)
UChicago CS News: Sebo Lab: Programming robots to better interact with humans (Oct 28, 2025)
Video: Inside The Lab: How Can Robots Improve Our Lives? (Oct 27, 2025)
UChicago CS News: UChicago CS Student Awarded NSF Graduate Research Fellowship (Oct 27, 2025)
UChicago CS News: Why Can’t Powerful LLMs Learn Multiplication? (Oct 27, 2025)
UChicago CS News: Celebrating Excellence in Human-Computer Interaction: Yudai Tanaka Named 2025 Google North America PhD Fellow (Oct 23, 2025)