Awa Dieng

Hi! I am a research associate on the Google Brain team in Montreal. I am also a graduate student in Computer Science at Mila and Université de Montréal, co-advised by Laurent Charlin and Hugo Larochelle.

My research interests lie at the intersection of machine learning and causality, with a focus on improving the fairness, interpretability, and reliability of ML models and treatment effect estimation methods.

contact: awaydieng {at} gmail {dot} com | google scholar | twitter

News

  • Excited to organize the third edition of our workshop on Algorithmic Fairness at NeurIPS 2022 (details at Algorithmic Fairness through the lens of Causality and Privacy)!

  • I am Program Chair for the Montreal AI Symposium 2022. Looking forward to helping organize this great conference!

  • Honoured to be invited to participate in the Interpretable Machine Learning program at the Simons Institute for the Theory of Computing, UC Berkeley this summer.

  • The PMLR proceedings for the Algorithmic Fairness through the lens of Causality and Robustness workshop are out!

  • New review paper on the transportability of causal effects using RCTs and observational data!

  • Excited to co-organize a second edition of the AFCI workshop at NeurIPS 2021. This year's edition will highlight work at the intersection of fairness with causality and robustness. More information at afciworkshop.org/afcr2021

  • Joined the SAMSI (Statistical and Applied Mathematical Sciences Institute) working group on Causal Inference and Missing Data!

  • I am co-organizing a workshop on "Algorithmic Fairness through the lens of Causality and Interpretability" at NeurIPS 2020. Check afciworkshop.org for more details.

  • I am happy to share my "Path to Google" interview with the Google blog.

  • Joined the first cohort of AI residents in the Google Brain team in Accra, Ghana!

Publications

Causal inference methods for combining randomized trials and observational studies: a review

Bénédicte Colnet, Imke Mayer, Guanhua Chen, Awa Dieng, Ruohong Li, Gaël Varoquaux, Jean-Philippe Vert, Julie Josse, Shu Yang

preprint, arXiv 2020

Abstract: With increasing data availability, treatment causal effects can be evaluated across different datasets, both randomized trials and observational studies. Randomized trials isolate the effect of the treatment from that of unwanted (confounding) co-occurring effects. But they may be applied to limited populations, and thus lack external validity. In contrast, large observational samples are often more representative of the target population but can conflate confounding effects with the treatment of interest. In this paper, we review the growing literature on methods for causal inference on combined randomized trials and observational studies, striving for the best of both worlds. We first discuss identification and estimation methods that improve the generalizability of randomized controlled trials (RCTs) using the representativeness of observational data. Classical estimators include weighting, the difference between conditional outcome models, and doubly robust estimators. We then discuss methods that combine RCTs and observational data to improve (conditional) average treatment effect estimation, handling possible unmeasured confounding in the observational data. We also connect and contrast works developed in both the potential outcomes framework and the structural causal models framework. Finally, we compare the main methods using a simulation study and real-world data to analyse the effect of tranexamic acid on the mortality rate in major trauma patients. Code to implement many of the methods is provided.
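
A minimal sketch of the weighting idea mentioned in the abstract, not the paper's companion code: it estimates the target-population average treatment effect by reweighting RCT units with inverse probabilities of trial participation (IPSW). It assumes a binary treatment, fully observed covariates, and an observational sample that is representative of the target population; all names (ipsw_ate, X_rct, X_obs, ...) are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

def ipsw_ate(X_rct, t_rct, y_rct, X_obs):
    """Inverse-probability-of-sampling-weighted (IPSW) ATE sketch.

    X_rct, t_rct, y_rct: covariates, binary treatment, and outcomes from the RCT.
    X_obs: covariates from the observational sample, assumed representative
    of the target population.
    """
    # Fit a model for P(S = 1 | X): the probability that a unit with
    # covariates X belongs to the RCT rather than the observational sample.
    X = np.vstack([X_rct, X_obs])
    s = np.concatenate([np.ones(len(X_rct)), np.zeros(len(X_obs))])
    selection = LogisticRegression(max_iter=1000).fit(X, s)
    p_rct = selection.predict_proba(X_rct)[:, 1]

    # Odds weights (1 - p) / p transport the RCT units toward the
    # covariate distribution of the observational (target) population.
    w = (1 - p_rct) / p_rct

    treated = t_rct == 1
    control = ~treated
    mu1 = np.sum(w[treated] * y_rct[treated]) / np.sum(w[treated])
    mu0 = np.sum(w[control] * y_rct[control]) / np.sum(w[control])
    return mu1 - mu0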


Interpretable Almost-Exact Matching for Causal Inference

Awa Dieng, Yameng Liu, Sudeepa Roy, Cynthia Rudin, Alex Volfovsky

AISTATS 2019

Abstract: Matching methods are heavily used in the social and health sciences due to their interpretability. We aim to create the highest possible quality of treatment-control matches for categorical data in the potential outcomes framework. The method proposed in this work aims to match units on a weighted Hamming distance, taking into account the relative importance of the covariates; the algorithm aims to match units on as many relevant variables as possible. To do this, the algorithm creates a hierarchy of covariate combinations on which to match (similar to downward closure), in the process solving an optimization problem for each unit in order to construct the optimal matches. The algorithm uses a single dynamic program to solve all of the units’ optimization problems simultaneously. Notable advantages of our method over existing matching procedures are its high-quality interpretable matches, versatility in handling different data distributions that may have irrelevant variables, and ability to handle missing data by matching on as many available covariates as possible.
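
As a rough illustration of the weighted Hamming distance underlying this approach (not the dynamic-programming algorithm from the paper), the sketch below matches each treated unit to its closest control unit on categorical covariates; the function name, covariate weights, and toy data are hypothetical.

import numpy as np

def weighted_hamming_match(treated_X, control_X, weights):
    """For each treated unit, return the index of the control unit that
    disagrees with it on the smallest weighted set of covariates."""
    matches = []
    for x in treated_X:
        # Weighted Hamming distance: sum of the weights of disagreeing covariates.
        dists = (control_X != x).astype(float) @ weights
        matches.append(int(np.argmin(dists)))
    return matches

# Toy example with three binary covariates, the first being most important.
treated_X = np.array([[1, 0, 1],
                      [0, 1, 1]])
control_X = np.array([[1, 0, 0],
                      [0, 0, 1],
                      [1, 1, 1]])
weights = np.array([3.0, 1.0, 1.0])
print(weighted_hamming_match(treated_X, control_X, weights))  # [0, 1]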