Dan Ley
Publications
Generalized Group Data Attribution
Data Attribution (DA) methods quantify the influence of individual training data points on model outputs and have broad applications such as explainability, data selection, and noisy label identification. However, existing DA methods are often computationally intensive, limiting their applicability to large-scale machine learning models.
Dan Ley, Shichang Zhang, Suraj Srinivas, Gili Rusak, Himabindu Lakkaraju
Last updated on Oct 16, 2024
PDF
In-Context Explainers: Harnessing LLMs for Explaining Black Box Models
Recent advancements in Large Language Models (LLMs) have demonstrated exceptional capabilities in complex tasks like machine translation, commonsense reasoning, and language understanding. One of the primary reasons for the adaptability of LLMs in such diverse tasks is their in-context learning (ICL) capability, which allows them to perform well on new tasks by simply using a few task samples in the prompt.
Nicholas Kroeger, Dan Ley, Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju
Last updated on Oct 16, 2024
PDF
On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models
As Large Language Models (LLMs) are increasingly being employed in real-world applications in critical domains such as healthcare, it is important to ensure that the Chain-of-Thought (CoT) reasoning generated by these models faithfully captures their underlying behavior.
Sree Harsha Tanneru, Dan Ley, Chirag Agarwal, Himabindu Lakkaraju
Last updated on Oct 16, 2024
PDF
Global Counterfactual Explanations: Investigations, Implementations and Improvements
The major shortcoming of counterfactual methods is their inability to provide explanations beyond the local, instance level. While some works touch upon the notion of a global explanation, few provide frameworks that are either reliable or computationally tractable. Meanwhile, practitioners are requesting more efficient and interactive explainability tools. We take this opportunity to investigate existing global methods, with a focus on implementing and improving Actionable Recourse Summaries (AReS), the only known global counterfactual explanation framework for recourse.
Dan Ley, Saumitra Mishra, Daniele Magazzeni
Last updated on Oct 16, 2024
PDF
Poster
Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates
To interpret uncertainty estimates, we extend recent work that generates multiple Counterfactual Latent Uncertainty Explanations (δ-CLUEs) by applying additional constraints for diversity in the optimisation objective (∇-CLUE). We then propose a distinct method for discovering GLobal AMortised CLUEs (GLAM-CLUE), which learns mappings of arbitrary complexity between groups of uncertain and certain inputs in a computationally efficient manner.
Dan Ley, Umang Bhatt, Adrian Weller
Last updated on Oct 16, 2024
PDF
Poster
δ-CLUE: Diverse Sets of Explanations for Uncertainty Estimates
To interpret uncertainty estimates, we extend recent work that generates Counterfactual Latent Uncertainty Explanations (CLUEs) to produce a set of plausible CLUEs: multiple, diverse inputs that are within a δ ball of the original input in latent space, all yielding confident predictions.
Dan Ley, Umang Bhatt, Adrian Weller
Apr 13, 2021
PDF
Poster