Aaron Chan

Research Scientist at Meta AI   •   aarzchan :slightly_smiling_face: gmail :upside_down_face: com

I am a research scientist on the AI for Modern Recommendation Systems (MRS) Team at Meta AI.

My research is at the intersection of natural language processing and machine learning. In particular, I am excited about:

  • model explainability: explaining language model behavior more faithfully, plausibly, and efficiently.
  • explanation-based learning: operationalizing explanations to improve language model generalization and decision-making.

Previously, I was a research intern at Meta AI and an engineering intern at Google.

I earned my PhD in computer science from the University of Southern California, where I was advised by Prof. Xiang Ren in the INK Lab. Before that, I received my MSE in robotics from the University of Pennsylvania and my BS in electrical engineering from the University of Maryland.

Currently, I work remotely from the Washington, DC metro area, where I live with my wife. Outside of work, I enjoy basketball, skiing, hiking, reading, and board games.


News

May 08, 2023 XMD was accepted to the demo track at ACL 2023! :confetti_ball:
May 02, 2023 HUFTR was accepted to ACL 2023 as an oral presentation! :confetti_ball:
Mar 10, 2023 HUFTR was accepted to the TRAIT Workshop at CHI 2023! :tada:
Mar 04, 2023 KNIFE was accepted to the TrustML-(un)Limited Workshop at ICLR 2023! :tada:
Feb 28, 2023 Check out our latest version of ER-Test on arXiv, updated with additional experiments and improved presentation! :page_facing_up:

Selected Publications

  1. Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales
     B. Joshi*, Z. Liu*, S. Ramnath, A. Chan, Z. Tong, Q. Wang, Y. Choi, and X. Ren
     ACL, 2023
  2. PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales
     P. Wang, A. Chan, F. Ilievski, M. Chen, and X. Ren
     ICLR, 2023
  3. ER-Test: Evaluating Explanation Regularization Methods for Language Models
     B. Joshi*, A. Chan*, Z. Liu*, S. Nie, M. Sanjabi, H. Firooz, and X. Ren
     Findings of EMNLP, 2022
  4. UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
     A. Chan, M. Sanjabi, L. Mathias, L. Tan, S. Nie, X. Peng, X. Ren, and H. Firooz
     ICML, 2022
  5. SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning
     A. Chan, J. Xu, B. Long, S. Sanyal, T. Gupta, and X. Ren
     NeurIPS, 2021