
Hi, I am a research scientist at Google AI Language. I am interested in controllable and interpretable deep neural networks. My goal is to develop collaborative models that work hand-in-hand with humans to generate text. Before joining Google, I completed my Ph.D. in Computer Science at Harvard, supervised by Barbara Grosz and Sasha Rush. My dissertation was titled “Human-AI Collaboration for Natural Language Generation with Interpretable Neural Networks”.
The slides for my ACL 2020 tutorial on interpretability and analysis of neural models in NLP with Yonatan Belinkov and Ellie Pavlick can be found here.
Publications
2020
Accelerating Antimicrobial Discovery with Controllable Deep Generative Models and Molecular Dynamics
[Preprint]
Parent perspectives in shared decision-making for children with medical complexity
Academic Pediatrics
[Paper (Paywall)]
A Corpus for Detecting High-Context Clinical Indications in Intensive Care Patient Notes Focusing on Frequently Readmitted Patients
Proceedings of the 12th Language Resources and Evaluation Conference
[Paper]
exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models
ACL 2020 System Demonstrations
[Paper]
2019
Visual Interaction with Deep Learning Models through Collaborative Semantic Inference
IEEE Transactions on Visualization and Computer Graphics (VAST 2019)
[Paper]
GLTR: Statistical Detection and Visualization of Generated Text
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations
Nominated for Best Demo
[Paper]
[Demo]
[Code]
LSTM Networks Can Perform Dynamic Counting
Deep Learning and Formal Languages Workshop at ACL 2019
[Paper]
Evaluating an Automated Mediator for Joint Narratives in a Conflict Situation
Behaviour & Information Technology
[Paper]
Improving Human Text Comprehension through Semi-Markov CRF-based Neural Section Title Generation
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics
[Paper]
Interactive Visual Exploration of Latent Space (IVELS) for Peptide Auto-Encoder Model Selection
ICLR 2019 Workshop on Deep Generative Models for Highly Structured Data
[Paper]
Generating Abstractive Summaries with Finetuned Language Models
Proceedings of the 12th International Conference on Natural Language Generation (Challenge Track)
[Paper]
Margin Call: An Accessible Web-based Text Viewer with Generated Paragraph Summaries in the Margin
Proceedings of the 12th International Conference on Natural Language Generation (Demo Track)
[Paper]
2018
Debugging Sequence-to-Sequence Models with Seq2Seq-Vis
BlackboxNLP workshop at EMNLP 2018
[Paper]
[Demo]
End-to-End Content and Plan Selection for Natural Language Generation
Proceedings of the 11th International Conference on Natural Language Generation
Best ROUGE/METEOR/CIDEr
[Paper]
[Code]
[Challenge Website]
Towards Controllable Generation of Diverse Natural Language
Proceedings of the 11th International Conference on Natural Language Generation (E2E NLG challenge track)
[Paper]
[Challenge Website]
Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models
IEEE Transactions on Visualization and Computer Graphics (VAST 2018)
Honorable Mention
[Paper]
[Demo]
[Code]
Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives
PLoS One
[Paper]
[Code]
[Data]
2017
LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks
IEEE Transactions on Visualization and Computer Graphics (InfoVis 2017)
[Paper]
[Demo]
[Code]
Behind the Scenes: A Medical Natural Language Processing Project
International Journal of Medical Informatics
[Paper]
2015
Deploying AI Methods to Support Collaborative Writing: a Preliminary Investigation
Proceedings of the 33rd Annual ACM Conference, Extended Abstracts on Human Factors in Computing Systems
[Paper]