Dan Ley

Publications
On Minimizing the Impact of Dataset Shifts on Actionable Explanations
The Right to Explanation is an important regulatory principle that allows individuals to request actionable explanations for algorithmic decisions. However, several technical challenges arise when providing such actionable explanations in practice.
Anna Meyer, Dan Ley, Suraj Srinivas, Himabindu Lakkaraju
PDF
Degraded Polygons Raise Fundamental Questions of Neural Network Perception
It is well-known that modern computer vision systems often exhibit behaviors misaligned with those of humans: from adversarial attacks to image corruptions, deep learning vision models suffer in a variety of settings that humans capably handle.
Leonard Tang, Dan Ley
PDF
Consistent Explanations in the Face of Model Indeterminacy via Ensembling
This work addresses the challenge of providing consistent explanations for predictive models in the presence of model indeterminacy, which arises due to the existence of multiple (nearly) equally well-performing models for a given dataset and task.
Dan Ley, Leonard Tang, Matthew Nazari, Hongjin Lin, Suraj Srinivas, Himabindu Lakkaraju
PDF
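As an illustrative sketch of the ensembling idea (not the paper's exact estimator): retraining on resampled data yields near-equally accurate models whose explanations disagree, and averaging attributions across that ensemble gives a more consistent explanation. The bootstrap setup and the use of linear-model coefficients as a gradient attribution below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Model indeterminacy: bootstrap retraining produces near-equivalent
# models whose attributions disagree; averaging stabilises them. For a
# linear model, the coefficient vector is a simple gradient attribution.

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=500) > 0).astype(int)

attributions = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))     # bootstrap resample
    m = LogisticRegression().fit(X[idx], y[idx])   # near-equivalent model
    attributions.append(m.coef_[0])

attributions = np.array(attributions)
print("per-model std:", attributions.std(axis=0))  # disagreement across models
print("ensembled    :", attributions.mean(axis=0)) # averaged, more stable
```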
GLOBE-CE: A Translation-Based Approach for Global Counterfactual Explanations
The major shortcoming of counterfactual methods is their inability to provide explanations beyond the local, instance level. To address this, we propose Global & Efficient Counterfactual Explanations (GLOBE-CE), a flexible framework that tackles the reliability and scalability issues of the current state of the art, particularly on higher-dimensional datasets and in the presence of continuous features. Furthermore, we provide a unique mathematical analysis of categorical feature translations, utilising it in our method.
Dan Ley, Saumitra Mishra, Daniele Magazzeni
PDF
Poster
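A minimal sketch of the translation idea, assuming a binary classifier and a single fixed global direction delta scaled per instance; the grid line search and the choice of delta below are illustrative assumptions, not the GLOBE-CE algorithm itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One global translation `delta`, scaled per instance, serves as a
# counterfactual for every negatively classified input.

def min_flip_scales(model, X_neg, delta, scales=np.linspace(0.0, 5.0, 101)):
    """Smallest scaling of `delta` that flips each prediction to class 1
    (None if no scaling in the grid succeeds)."""
    out = []
    for x in X_neg:
        ks = [k for k in scales
              if model.predict((x + k * delta).reshape(1, -1))[0] == 1]
        out.append(ks[0] if ks else None)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

X_neg = X[model.predict(X) == 0]
delta = model.coef_[0] / np.linalg.norm(model.coef_[0])  # plausible global direction
print(min_flip_scales(model, X_neg[:5], delta))
```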
OpenXAI: Towards a Transparent Evaluation of Model Explanations
While several types of post hoc explanation methods have been proposed in recent literature, there is very little work on systematically benchmarking these methods. Here, we introduce OpenXAI, a comprehensive and extensible open-source framework for evaluating and benchmarking post hoc explanation methods.
Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju
PDF
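A toy version of the kind of ground-truth-based evaluation such a benchmark systematises; this is not OpenXAI's actual API, and rank_agreement and occlusion_attr below are hypothetical illustrations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# For a linear model the true feature importances are its coefficients,
# so a post hoc attribution can be scored by top-k agreement with them.
# This mirrors ground-truth-style metrics in spirit only.

def rank_agreement(attr, truth, k=3):
    """Overlap fraction between the top-k features of two attributions."""
    top_a = set(np.argsort(-np.abs(attr))[:k])
    top_t = set(np.argsort(-np.abs(truth))[:k])
    return len(top_a & top_t) / k

def occlusion_attr(model, x):
    """Drop in P(class 1) when each feature is zeroed out."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for j in range(len(x)):
        xj = x.copy()
        xj[j] = 0.0
        scores.append(base - model.predict_proba(xj.reshape(1, -1))[0, 1])
    return np.array(scores)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
w = np.array([2.0, -1.5, 1.0, 0.0, 0.0, 0.0])
y = (X @ w > 0).astype(int)
model = LogisticRegression().fit(X, y)

truth = model.coef_[0]
print("occlusion:", rank_agreement(occlusion_attr(model, X[0]), truth))
print("random   :", rank_agreement(rng.normal(size=6), truth))
```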
Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates
To interpret uncertainty estimates, we extend recent work that generates multiple Counterfactual Latent Uncertainty Explanations (𝛿-CLUEs) by applying additional constraints for diversity in the optimisation objective (∇-CLUE). We then propose a distinct method for discovering GLobal AMortised CLUEs (GLAM-CLUE), which learns mappings of arbitrary complexity between groups of uncertain and certain inputs in a computationally efficient manner.
Dan Ley, Umang Bhatt, Adrian Weller
PDF
Poster
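A toy sketch of the diversity constraint: several counterfactuals optimised jointly, with a pairwise-distance term subtracted from the loss so the solutions spread apart. The stand-in linear uncertainty model and all weights below are illustrative assumptions, not the paper's objective.

```python
import torch

# m counterfactuals are optimised together to reduce predictive entropy
# near an uncertain input, while a pairwise-distance term (entering the
# loss negatively) pushes them apart, in the spirit of ∇-CLUE.

torch.manual_seed(0)

def entropy(logits):
    p = torch.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-8)).sum(-1)

W = torch.randn(2, 4)                       # stand-in uncertainty model
x0 = torch.randn(4)                         # uncertain input to explain
m = 3                                       # number of diverse CLUEs
Z = (x0 + 0.01 * torch.randn(m, 4)).requires_grad_()

opt = torch.optim.Adam([Z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    unc = entropy(Z @ W.T).sum()            # uncertainty at each CLUE
    prox = ((Z - x0) ** 2).sum()            # stay close to the input
    div = torch.pdist(Z).sum()              # pairwise diversity (i < j)
    loss = unc + 0.1 * prox - 0.05 * div
    loss.backward()
    opt.step()

print(Z.detach())                           # m distinct low-uncertainty CLUEs
```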