Publications

Working Papers
Ying, Luwei, Jacob Montgomery, and Brandon M. Stewart. Working Papers. “Tools for Topic Model Validation: Towards Procedures for Validating Topics as Measures”. yingmontgomerystewart_-_main.pdf
Lundberg, Ian, Rebecca Johnson, and Brandon M. Stewart. Working Papers. “Setting the Target: Precise Estimands and the Gap Between Theory and Empirics”. setting_the_target.pdf
Egami, Naoki, Christian J. Fong, Justin Grimmer, Margaret E. Roberts, and Brandon M. Stewart. Working Papers. “How to Make Causal Inferences Using Texts”. ais.pdf
Forthcoming
Lundberg, Ian, and Brandon M. Stewart. Forthcoming. “Comment: Summarizing income mobility with multiple smooth quantiles instead of parameterized means”. Sociological Methodology. lundbergstewart_commenton_mitnikgrusky.pdf
de Marchi, Scott, and Brandon M. Stewart. Forthcoming. “Computational and Machine Learning Models: The Necessity of Connecting Theory and Empirics”. in SAGE Handbook of Research Methods in Political Science and International Relations.
Roberts, Margaret E., Brandon M. Stewart, and Richard Nielsen. Forthcoming. “Adjusting for Confounding with Text Matching”. American Journal of Political Science. textmatching_preprint.pdf

NB: This paper is a revised version of the manuscript formerly titled "Matching Methods for High-Dimensional Data with Applications to Text"

2019
What Makes Foreign Policy Teams Tick: Explaining Variation in Group Performance at Geopolitical Forecasting
Horowitz, Michael, et al. 2019. “What Makes Foreign Policy Teams Tick: Explaining Variation in Group Performance at Geopolitical Forecasting”. The Journal of Politics 81 (4):1388-1404. Publisher's Version
When do groups—be they countries, administrations, or other organizations—more or less accurately understand the world around them and assess political choices? Some argue that group decision-making processes often fail due to biases induced by groupthink. Others argue that groups, by aggregating knowledge, are better at analyzing the foreign policy world. To advance knowledge about the intersection of politics and group decision making, this paper draws on evidence from a multiyear geopolitical forecasting tournament with thousands of participants sponsored by the US government. We find that teams outperformed individuals in making accurate geopolitical predictions, with regression discontinuity analysis demonstrating specific teamwork effects. Moreover, structural topic models show that more cooperative teams outperformed less cooperative teams. These results demonstrate that information sharing through groups, cultivating reasoning to hedge against cognitive biases, and ensuring all perspectives are heard can lead to greater success for groups at forecasting and understanding politics.
stm: An R Package for Structural Topic Models
Roberts, Margaret, Brandon Stewart, and Dustin Tingley. 2019. “stm: An R Package for Structural Topic Models”. Journal of Statistical Software 91 (2):1–40. Publisher's Version
This paper demonstrates how to use the R package stm for structural topic modeling. The structural topic model allows researchers to flexibly estimate a topic model that includes document-level metadata. Estimation is accomplished through a fast variational approximation. The stm package provides many useful features, including rich ways to explore topics, estimate uncertainty, and visualize quantities of interest.
2018
A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors
Khodak, Mikhail, et al. 2018. “A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors”. Proceedings of the Association for Computational Linguistics.
Selected for Oral Presentation.
The Civic Mission of MOOCs: Computational Measures of Engagement Across Differences in Online Courses
Yeomans, Michael, et al. 2018. “The Civic Mission of MOOCs: Computational Measures of Engagement Across Differences in Online Courses”. International Journal of Artificial Intelligence in Education 28 (4):553-589. Publisher's Version
How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility
Chaney, Allison J.B., Brandon M. Stewart, and Barbara E. Engelhardt. 2018. “How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility”. Twelfth ACM Conference on Recommender Systems (RecSys ’18). arXiv
The Global Diffusion of Law: Transnational Crime and the Case of Human Trafficking
Simmons, Beth A., Paulette Lloyd, and Brandon M. Stewart. 2018. “The Global Diffusion of Law: Transnational Crime and the Case of Human Trafficking”. International Organization 72 (2):249-281. Publisher's Version
2017
Discourse: MOOC Discussion Forum Analysis at Scale
Kindel, Alexander, Michael Yeomans, Justin Reich, Brandon Stewart, and Dustin Tingley. 2017. “Discourse: MOOC Discussion Forum Analysis at Scale”. Pp. 141–142 in Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale. New York, NY, USA: ACM. Publisher's Version p141-kindel.pdf
2016
The Civic Mission of MOOCs: Measuring Engagement across Political Differences in Forums
Reich, Justin, Brandon Stewart, Kimia Mavon, and Dustin Tingley. 2016. “The Civic Mission of MOOCs: Measuring Engagement across Political Differences in Forums”. Proceedings of the Third (2016) ACM Conference on Learning @ Scale 1-10. Publisher's Version

In this study, we develop methods for computationally measuring the degree to which students engage in MOOC forums with other students holding different political beliefs. We examine a case study of a single MOOC about education policy, Saving Schools, where we obtain measures of student education policy preferences that correlate with political ideology. Contrary to assertions that online spaces often become echo chambers or ideological silos, we find that students in this case hold diverse political beliefs, participate equitably in forum discussions, directly engage (through replies and upvotes) with students holding opposing beliefs, and converge on a shared language rather than talking past one another. Research that focuses on the civic mission of MOOCs helps ensure that open online learning engages the same breadth of purposes that higher education aspires to serve.

civicmooc.pdf
A model of text for experimentation in the social sciences
Roberts, Margaret E., Brandon M. Stewart, and Edoardo M. Airoldi. 2016. “A model of text for experimentation in the social sciences”. Journal of the American Statistical Association 111 (515):988-1003. Publisher's Version

Statistical models of text have become increasingly popular in statistics and computer science as a method of exploring large document collections. Social scientists often want to move beyond exploration, to measurement and experimentation, and make inference about social and political processes that drive discourse and content. In this paper, we develop a model of text data that supports this type of substantive research.
Our approach is to posit a hierarchical mixed membership model for analyzing topical content of documents, in which mixing weights are parameterized by observed covariates. In this model, topical prevalence and topical content are specified as a simple generalized linear model on an arbitrary number of document-level covariates, such as news source and time of release, enabling researchers to introduce elements of the experimental design that informed document collection into the model, within a generally applicable framework. We demonstrate the proposed methodology by analyzing a collection of news reports about China, where we allow the prevalence of topics to evolve over time and vary across newswire services. Our methods quantify the effect of news wire source on both the frequency and nature of topic coverage.

a_model_of_text_for_experimentation_in_the_social_sciences.pdf

NB: This is a revised version of the working paper previously titled "Structural Topic Models." Supplement, Replication Package, Software

Navigating the Local Modes of Big Data: The Case of Topic Models
Roberts, Margaret E., Brandon M. Stewart, and Dustin Tingley. 2016. “Navigating the Local Modes of Big Data: The Case of Topic Models”. in Computational Social Science: Discovery and Prediction. New York: Cambridge University Press. Publisher's Version


2015
Chuang, Jason, et al. 2015. “TopicCheck: Interactive Alignment for Assessing Topic Model Stability”. North American Chapter of the Association for Computational Linguistics Human Language Technologies (NAACL HLT).

Content analysis, a widely-applied social science research method, is increasingly being supplemented by topic modeling. However, while the discourse on content analysis centers heavily on reproducibility, computer scientists often focus more on scalability and less on coding reliability, leading to growing skepticism on the usefulness of topic models for automated content analysis. In response, we introduce TopicCheck, an interactive tool for assessing topic model stability. Our contributions are threefold. First, from established guidelines on reproducible content analysis, we distill a set of design requirements on how to computationally assess the stability of an automated coding process. Second, we devise an interactive alignment algorithm for matching latent topics from multiple models, and enable sensitivity evaluation across a large number of models. Finally, we demonstrate that our tool enables social scientists to gain novel insights into three active research questions.

topiccheck.pdf
Computer Assisted Reading and Discovery for Student Generated Text in Massive Open Online Courses
Reich, Justin, Dustin Tingley, Jetson Leder-Luis, Margaret E. Roberts, and Brandon M. Stewart. 2015. “Computer Assisted Reading and Discovery for Student Generated Text in Massive Open Online Courses”. Journal of Learning Analytics 2 (1):156-184.

Dealing with the vast quantities of text that students generate in a Massive Open Online Course (MOOC) is a daunting challenge. Computational tools are needed to help instructional teams uncover themes and patterns as MOOC students write in forums, assignments, and surveys. This paper introduces to the learning analytics community the Structural Topic Model, an approach to language processing that can (1) find syntactic patterns with semantic meaning in unstructured text, (2) identify variation in those patterns across covariates, and (3) uncover archetypal texts that exemplify the documents within a topical pattern. We show examples of computationally-aided discovery and reading in three MOOC settings: mapping students’ self-reported motivations, identifying themes in discussion forums, and uncovering patterns of feedback in course evaluations.

4138-19512-1-pb.pdf
Computer assisted text analysis for comparative politics.
Lucas, Christopher, et al. 2015. “Computer assisted text analysis for comparative politics”. Political Analysis 23 (2):254-277.

Recent advances in research tools for the systematic analysis of textual data are enabling exciting new research throughout the social sciences. For comparative politics scholars, who are often interested in non-English and possibly multilingual textual datasets, these advances may be difficult to access. This paper discusses practical issues that arise in the processing, management, translation, and analysis of textual data, with a particular focus on how procedures differ across languages. These procedures are combined in two applied examples of automated text analysis using the recently introduced Structural Topic Model. We also show how the model can be used to analyze data that has been translated into a single language via machine translation tools. All the methods we describe here are implemented in open-source software packages available from the authors.

pa2015_corrected.pdf compoltextappendix.pdf

Included in Political Analysis virtual issue on Online Research Methods. Software: stm, txtorg, translateR. Replication Package

Romney, David, Brandon M. Stewart, and Dustin Tingley. 2015. “Plain Text: Transparency in the Acquisition, Analysis, and Access Stages of the Computer-assisted Analysis of Texts”. Qualitative and Multi-Method Research 13 (1):32-37. qmmr2015-1.pdf
