I’m a PhD student at Harvard, advised by Hima Lakkaraju, researching trustworthy ML. My current interests include attributing value to data and faithful reasoning in LLMs. I previously worked at JPMorgan AI Research under the direction of Daniele Magazzeni and Saumitra Mishra, where I researched new global counterfactual explanation methods and their role in assessing model fairness.
I’m a graduate of the Cambridge MEng, where I was supervised by Adrian Weller and Umang Bhatt. My research centered on providing meaningful explanations for uncertainty estimates in deep learning, as part of the broader field of explainable AI.
Outside of work, I played regularly for the University of Cambridge football team and coached the Corpus Christi College side. I currently play for MIT FC in the BSSL.
Download my CV for further details.
MEng in Explainable AI, 2021
University of Cambridge
BA in Computer Engineering, 2020
University of Cambridge
1st Year: Class I (87%, 12th of 324)
2nd Year: Class I (83%, 12th of 310)
3rd Year: Pass (no classing due to COVID-19)
4th Year: Honours Pass with Distinction
Master’s Project in Explainable AI
Outstanding Project Award for top 5% of students in Information Engineering
1st paper accepted to an ICLR workshop (first author, travel award)
2nd paper accepted to an ICML workshop (first author)
Combined paper submitted to the NeurIPS conference (under review)
Specialisation in Computer and Information Engineering
Dewhurst Scholarship for First and Second Year Results
A major shortcoming of counterfactual methods is their inability to provide explanations beyond the local, instance level. We take this opportunity to propose Global & Efficient Counterfactual Explanations (GLOBE-CE), a flexible framework that tackles the reliability and scalability issues of the current state of the art, particularly on higher-dimensional datasets and in the presence of continuous features. Furthermore, we provide a unique mathematical analysis of categorical feature translations, which we utilise in our method.
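For intuition, here is a minimal sketch of the underlying idea of a global counterfactual translation: a single direction, scaled per instance, applied to every input receiving the unfavourable prediction. The classifier, data, and choice of direction below are toy assumptions for illustration, not the GLOBE-CE implementation itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy setup: a linear classifier over 5 continuous features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Inputs currently receiving the unfavourable prediction (class 0).
X_neg = X[model.predict(X) == 0]

# One global translation direction; here we simply take the normalised
# coefficient vector, whereas GLOBE-CE learns such directions rather
# than reading them off the model.
delta = model.coef_[0] / np.linalg.norm(model.coef_[0])

# Scaling the same direction per instance trades off cost vs coverage.
for k in [0.5, 1.0, 2.0]:
    flipped = model.predict(X_neg + k * delta)
    print(f"scale {k}: {np.mean(flipped == 1):.0%} of inputs gain recourse")
```

Reporting one direction at several scales is what keeps the explanation global yet legible: a stakeholder reads a single rule, not hundreds of per-instance counterfactuals.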
A major shortcoming of counterfactual methods is their inability to provide explanations beyond the local, instance level. While some works touch upon the notion of a global explanation, few provide frameworks that are either reliable or computationally tractable. Meanwhile, practitioners are requesting more efficient and interactive explainability tools. We take this opportunity to investigate existing global methods, with a focus on implementing and improving Actionable Recourse Summaries (AReS), the only known global counterfactual explanation framework for recourse.
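The sketch below shows the shape of an AReS-style two-level recourse rule on toy loan data. The dataset, features, and rule are illustrative assumptions; the actual framework mines many candidate rules from frequent itemsets and optimises over them.

```python
import pandas as pd

# Toy loan data with a model's denial decisions attached.
df = pd.DataFrame({
    "income":   [20, 35, 50, 28, 60, 22],
    "employed": [ 0,  1,  0,  0,  1,  0],
    "denied":   [ 1,  0,  1,  1,  0,  1],
})

# An AReS-style two-level rule: for the subgroup "employed == 0",
# the inner rule reads "if income < 40, then increase income to 40".
subgroup = df["employed"] == 0
inner = df["income"] < 40
affected = df["denied"].astype(bool) & subgroup & inner

# Candidate rules are typically scored on coverage (how many denied
# individuals they apply to) alongside the cost of the suggested change.
print(f"rule applies to {affected.sum()} of {df['denied'].sum()} denied applicants")
```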
To interpret uncertainty estimates, we extend recent work that generates multiple Counterfactual Latent Uncertainty Explanations (δ-CLUEs) by applying additional diversity constraints to the optimisation objective (∇-CLUE). We then propose a distinct method for discovering GLobal AMortised CLUEs (GLAM-CLUE), which learns mappings of arbitrary complexity between groups of uncertain and certain inputs in a computationally efficient manner.
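As a rough illustration of the ∇-CLUE idea, the sketch below runs a gradient-based search for several counterfactuals that jointly minimise a toy uncertainty surrogate, a proximity term, and a pairwise-diversity penalty. The uncertainty function, loss weights, and input-space search are simplifying assumptions; the actual method optimises the uncertainty of a deep model in the latent space of a generative model.

```python
import torch

# Toy stand-in for predictive uncertainty: a bump centred on the
# uncertain input x0 at the origin, decaying away from it.
def uncertainty(x):
    return torch.exp(-x.pow(2).sum(dim=-1))

x0 = torch.zeros(1, 2)                    # an uncertain input
clues = x0 + 0.1 * torch.randn(4, 2)      # four candidate CLUEs
clues.requires_grad_(True)
opt = torch.optim.Adam([clues], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    unc = uncertainty(clues).sum()            # seek low uncertainty
    prox = (clues - x0).norm(dim=-1).sum()    # stay close to x0
    div = -torch.cdist(clues, clues).mean()   # penalise similar CLUEs
    loss = unc + 0.5 * prox + 0.2 * div
    loss.backward()
    opt.step()
```

The diversity term pushes the candidates toward distinct low-uncertainty regions, so the user sees several qualitatively different ways the input could be changed to make the model confident.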
Passive Fluency
Full Fluency
Level B2