Events
Past Event
WED@NICO SEMINAR: Asma Ghandeharioun, Google DeepMind "Model Interpretability: from Illusions to Opportunities"
Northwestern Institute on Complex Systems (NICO)
12:00 PM
Lower Level, Chambers Hall
Details
Speaker:
Asma Ghandeharioun, Senior Research Scientist, People + AI Research Team, Google DeepMind
Title:
Model Interpretability: from Illusions to Opportunities
Abstract:
While the capabilities of today’s large language models (LLMs) are reaching—and even surpassing—what was once thought impossible, concerns remain about misaligned behavior, such as generating misinformation or harmful text, and mitigating it continues to be an open area of research. Understanding LLMs’ internal representations can help explain their behavior, verify their alignment with human values, and mitigate instances where they produce errors. In this talk, I begin by challenging common misconceptions about the connections between LLMs' hidden representations and their downstream behavior, highlighting several “interpretability illusions.” For example, I demonstrate that, counterintuitively, localizing and editing facts within an LLM’s hidden representations can be disconnected; model failure and success in the wild cannot necessarily be predicted from a relatively faithful proxy at training time; and even within the same architecture, representation similarity is not always indicative of prediction similarity.
Next, I introduce Patchscopes, a new framework that leverages the model itself to explain its internal representations in natural language. I’ll show how it can be used to answer a wide range of questions about an LLM's computation. I also demonstrate that many prior interpretability methods—based on projecting representations into the vocabulary space and intervening in LLM computation—can be viewed as instances of this framework. Furthermore, several of their shortcomings, such as difficulty inspecting early layers or lack of expressivity, can be mitigated by Patchscopes. Beyond unifying prior inspection techniques, Patchscopes opens up new possibilities, such as using a more capable model to explain the representations of a smaller model and multihop reasoning error correction.
Finally, I discuss a few failure cases in today’s most capable LLMs and show how Patchscopes can shed light on their mechanics and suggest mitigation strategies. For example, we observe that safety-tuned models may still divulge harmful information, and whether they do so often depends significantly on who they are talking to—what we refer to as the user persona. Using Patchscopes, we show that harmful content can persist in hidden representations and can be easily extracted. Additionally, we demonstrate that certain user personas can induce the model to form more charitable interpretations of otherwise dangerous queries.
Speaker Bio:
Asma Ghandeharioun, Ph.D., is a senior research scientist with the People + AI Research team at Google DeepMind. She works on aligning AI with human values through better understanding and controlling (language) models, uniquely by demystifying their inner workings and correcting collective misconceptions along the way. While her current research is mostly focused on machine learning interpretability, her previous work spans conversational AI, affective computing, and, more broadly, human-centered AI. She holds a doctorate and master’s degree from MIT and a bachelor’s degree from the Sharif University of Technology. She was trained as a computer scientist and engineer and has research experience at MIT, Google Research, Microsoft Research, and Ecole Polytechnique Fédérale de Lausanne (EPFL), among others.
Her work has been published in premier peer-reviewed machine learning venues such as ICLR, NeurIPS, ICML, EMNLP, AAAI, ACII, and AISTATS. She has received awards at NeurIPS, and her work has been featured in Wired, The Wall Street Journal, and New Scientist.
Location:
In person: Chambers Hall, 600 Foster Street, Lower Level
Remote option: https://northwestern.zoom.us/j/91475935376
Passcode: NICO24
About the Speaker Series:
Wednesdays@NICO is a vibrant weekly seminar series focusing broadly on complex systems, data science, and network science. It brings together attendees ranging from graduate students to senior faculty who span all of the schools across Northwestern, from applied math to sociology to biology and every discipline in between. Please visit https://bit.ly/WedatNICO for information on future speakers.
Time
Wednesday, October 9, 2024, 12:00 PM - 1:00 PM
Location
Lower Level, Chambers Hall
Calendar
Northwestern Institute on Complex Systems (NICO)
Data Science Nights - December 2025 - Speaker: Yash Chainani, Chemical Engineering
Northwestern Institute on Complex Systems (NICO)
5:30 PM
Room 2410, Kellogg Global Hub
Details
DECEMBER MEETING: Thursday, December 18, 2025 at 5:30pm (US Central)
LOCATION CHANGE THIS MONTH:
In person: Kellogg Global Hub, Room 2410
2211 N Campus Drive, Evanston
AGENDA:
5:30pm - Meet and greet with refreshments
6:00pm - Talk with Yash Chainani, Broadbelt & Tyo Labs, Chemical Engineering
Talk title and abstract TBA.
DATA SCIENCE NIGHTS are monthly meetings featuring presentations and discussions about data-driven science and complex systems, organized by Northwestern University graduate students and scholars. Students and researchers of all levels are welcome! For more information: http://bit.ly/nico-dsn
Time
Thursday, December 18, 2025, 5:30 PM - 7:30 PM
Location
Room 2410, Kellogg Global Hub
Calendar
Northwestern Institute on Complex Systems (NICO)
Winter Recess Starts - University Closed Through January 1st, 2026
University Academic Calendar
All Day
Details
Winter Recess Starts - University Closed Through January 1st, 2026
Time
Wednesday, December 24, 2025
Calendar
University Academic Calendar
Winter classes begin
University Academic Calendar
All Day
Details
Winter classes begin
Time
Monday, January 5, 2026
Calendar
University Academic Calendar
WED@NICO Winter Seminar Series returns on January 28th!
Northwestern Institute on Complex Systems (NICO)
12:00 PM
Lower Level, Chambers Hall
Details
The Wednesdays@NICO speaker series will return for the winter quarter on January 28th, 2026, running through March 4th. Speakers will be announced in January!
Location:
In person: Chambers Hall, 600 Foster Street, Lower Level
Remote option: Zoom links will be provided
About the Speaker Series:
Wednesdays@NICO is a vibrant weekly seminar series focusing broadly on complex systems, data science, and network science. It brings together attendees ranging from graduate students to senior faculty who span all of the schools across Northwestern, from applied math to sociology to biology and every discipline in between. Please visit https://bit.ly/WedatNICO for information on future speakers.
Time
Wednesday, January 28, 2026, 12:00 PM - 1:00 PM
Location
Lower Level, Chambers Hall
Calendar
Northwestern Institute on Complex Systems (NICO)