Publications &c.

Authors listed in {braces} contributed equally.

Alex Warstadt and Samuel R. Bowman. Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments. Unpublished manuscript. 2019.

Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, and Berlin Chen. Looking for ELMo's Friends: Sentence-Level Pretraining Beyond Language Modeling (code). Unpublished manuscript. 2019.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (project site). Proceedings of ICLR. 2019.

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. What do you learn from context? Probing for sentence structure in contextualized word representations. Proceedings of ICLR. 2019.

Kelly W. Zhang and Samuel R. Bowman. Language Modeling Teaches You More Syntax than Translation Does: Lessons Learned Through Auxiliary Task Analysis. Unpublished manuscript. 2018.

Katharina Kann, Alex Warstadt, Adina Williams, and Samuel R. Bowman. Verb Argument Structure Alternations in Word and Sentence Embeddings. Proceedings of SCiL. 2018.

Yun Chen, Victor O.K. Li, Kyunghyun Cho and Samuel R. Bowman. A Stable and Effective Learning Strategy for Trainable Greedy Decoding. Proceedings of EMNLP. 2018.

Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk and Veselin Stoyanov. XNLI: Cross-lingual Sentence Understanding through Inference (corpus page). Proceedings of EMNLP. 2018.

Phu Mon Htut, Kyunghyun Cho, and Samuel R. Bowman. Grammar Induction with Neural Language Models: An Unusual Replication (code). Proceedings of EMNLP (short paper). 2018.

WooJin Chung, Sheng-Fu Wang, and Samuel R. Bowman. The Lifted Matrix-Space Model for Semantic Composition. Proceedings of CoNLL. 2018.

Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. Neural Network Acceptability Judgments (corpus page). Unpublished manuscript. 2018.

Nikita Nangia and Samuel R. Bowman. ListOps: A Diagnostic Dataset for Latent Tree Learning (code and data). Proceedings of the NAACL Student Research Workshop. 2018.

Phu Mon Htut, Samuel R. Bowman, and Kyunghyun Cho. Training a Ranking Function for Open-Domain Question Answering. Proceedings of the NAACL Student Research Workshop. 2018.

Adina Williams, Nikita Nangia, and Samuel R. Bowman. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference (corpus page). Proceedings of NAACL. 2018.

Adina Williams, Andrew Drozdov, and Samuel R. Bowman. Do latent tree learning models identify meaningful structure in sentences? (code). Transactions of the ACL (TACL). 2018.

Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. Annotation Artifacts in Natural Language Inference Data (data on the MultiNLI corpus page). Proceedings of NAACL (short paper). 2018.

Yichen Gong and Samuel R. Bowman. Ruminating Reader: Reasoning with Gated Multi-Hop Attention. Proceedings of the Workshop on Machine Reading for Question Answering. 2018.

Nikita Nangia, Adina Williams, Angeliki Lazaridou, and Samuel R. Bowman. The RepEval 2017 Shared Task: Multi-Genre Natural Language Inference with Sentence Representations. Proceedings of RepEval 2017: The Second Workshop on Evaluating Vector Space Representations for NLP. 2017.

Rohan Kshirsagar, Robert Morris, and Samuel R. Bowman. Detecting and Explaining Crisis. Proceedings of The 2017 Computational Linguistics and Clinical Psychology Workshop. 2017.

Sebastian Brarda, Philip Yeres, and Samuel R. Bowman. Sequential Attention. Proceedings of the 2nd Workshop on Representation Learning for NLP. 2017.

Yacine Jernite, Samuel R. Bowman, and David Sontag. Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning. Unpublished manuscript. 2017.

Samuel R. Bowman. Modeling natural language semantics in learned representations. Stanford University Dissertation. 2016.

{Samuel R. Bowman, Luke Vilnis}, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating Sentences from a Continuous Space. Proceedings of CoNLL. 2016. (This work was conducted at Google; the code for the paper was not released.)

{Samuel R. Bowman, Jon Gauthier}, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. A Fast Unified Model for Parsing and Sentence Understanding (code). Proceedings of ACL. 2016.

Samuel R. Bowman, Christopher D. Manning, and Christopher Potts. Tree-structured composition in neural networks without tree-structured architectures. Proceedings of the NIPS Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches. 2015.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference (corpus page). Proceedings of EMNLP. 2015. Best New Data Set or Resource Award.

Samuel R. Bowman, Christopher Potts, and Christopher D. Manning. Recursive Neural Networks Can Learn Logical Semantics (MATLAB source code and data, poster). Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality. 2015.

Samuel R. Bowman, Christopher Potts, and Christopher D. Manning. Learning Distributed Word Representations for Natural Logic Reasoning. Proceedings of the AAAI Spring Symposium on Knowledge Representation and Reasoning. 2015.

Samuel R. Bowman. Transparent vowels in ABC: open issues. UC Berkeley Phonology Lab Annual Report: Conference on Agreement by Correspondence (ABC↔Conference; handout). Invited talk. 2014.

Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel R. Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. A Gold Standard Dependency Corpus for English. Proceedings of LREC 9. 2014.

Samuel R. Bowman. Can recursive neural tensor networks learn logical reasoning? (MATLAB source code and data). Unpublished manuscript. Presented at ICLR '14 and as an invited talk at the 3rd CSLI Workshop on Logic, Rationality and Intelligent Interaction. 2014.

Samuel R. Bowman and Benjamin Lokshin. Idiosyncratic transparent vowels in Kazakh. Proceedings of the 2013 Meeting on Phonology, presentations at Berkeley Phorum and WAFL 9. 2013. A typo in item (39) in the published version is corrected here.

Marie-Catherine de Marneffe, Miriam Connor, Natalia Silveira, Samuel R. Bowman, Timothy Dozat and Christopher D. Manning. More constructions, more genres: Extending Stanford Dependencies. Proceedings of DepLing. 2013.

Samuel R. Bowman. Two arguments for vowel harmony by trigger competition. Proceedings of CLS 49, presentations at Edinburgh P-Workshop and 21mfm. 2013.

Samuel R. Bowman and Harshit Chopra. Automatic animacy classification (poster). Proceedings of The NAACL-HLT Student Research Workshop. 2012.

Samuel R. Bowman. Vowel harmony, opacity, and finite-state OT. Technical report TR-2011-03, Department of Computer Science, The University of Chicago. 2011.

Geoffrey Zweig, Les Atlas, Kris Demuynck, Fei Sha, Patrick Nguyen, Dirk van Compernolle, Damianos Karakos, Pascal Clark, Meihong Wang, Gregory Sell, Samuel Thomas, Samuel R. Bowman and Justine Kao. Speech recognition with segmental conditional random fields: A summary of the JHU CLSP 2010 Summer Workshop. Proceedings of ICASSP 36. 2011.

Samuel R. Bowman and Karen Livescu. Modeling pronunciation variation with context-dependent articulatory feature decision trees. Proceedings of Interspeech. 2010.

An aside: My Erdős number is 4, by two paths: via Karen Livescu, Kamalika Chaudhuri, and Fan Chung, and via Chris Manning, Val Spitkovsky, and Daniel Kleitman.

Talks &c.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus of entailments and contradictions (slides). Talk at California Universities Semantics and Pragmatics. 2015.

Samuel R. Bowman. Computational linguistics (slides). Guest lecture for an introductory linguistics class with Asya Pereltsvaig. 2015.

Samuel R. Bowman. Neural networks for natural language understanding (slides). Guest lecture for Chris Potts and Bill MacCartney's computational natural language understanding class. 2015.

Samuel R. Bowman. vector-entailment: A MATLAB toolkit for tree-structured recursive neural networks. 2015.

Samuel R. Bowman. Seto vowel harmony and neutral vowels (poster). Presentation at LSA 86. 2013.

Samuel R. Bowman. Computational Linguistics, Corpora, and NLP (handout). Guest lecture for an introductory linguistics class with Asya Pereltsvaig; assumes slight prior knowledge of NLP. 2013.

Richard Futrell and Samuel R. Bowman. Measuring amok. Course paper for Natural Language Understanding. 2012.

Samuel R. Bowman. Tutorial: Building OT Grammars in PyPhon (slides). Presentation at the Stanford P-Interest Workshop. 2012.

Jason Riggle, Max Bane, and Samuel R. Bowman. PyPhon. (A software package for finite-state Optimality Theory.)