Publications

Working Papers
Mathur, Arunesh, et al. Working paper. “Manipulative tactics are the norm in political emails: Evidence from 100K emails from the 2020 U.S. election cycle”.
Grimmer, Justin, Dean Knox, and Brandon M. Stewart. Working paper. “Naïve regression requires weaker assumptions than factor models to adjust for multiple cause confounding”.
arXiv, Code and Data on Code Ocean

Ying, Luwei, Jacob Montgomery, and Brandon M. Stewart. Working paper. “Tools for Topic Model Validation: Towards Procedures for Validating Topics as Measures”. yingmontgomerystewart_-_main.pdf
Lundberg, Ian, Rebecca Johnson, and Brandon M. Stewart. Working paper. “Setting the Target: Precise Estimands and the Gap Between Theory and Empirics”.
Egami, Naoki, Christian J. Fong, Justin Grimmer, Margaret E. Roberts, and Brandon M. Stewart. Working paper. “How to Make Causal Inferences Using Texts”. ais.pdf
2020
Salganik, Matthew J., and many others. 2020. “Measuring the predictability of life outcomes with a scientific mass collaboration”. Proceedings of the National Academy of Sciences 117 (15). Publisher's Version
Lundberg, Ian, and Brandon M. Stewart. 2020. “Comment: Summarizing income mobility with multiple smooth quantiles instead of parameterized means”. Sociological Methodology 50:96-111. Publisher's Version lundbergstewart_commenton_mitnikgrusky.pdf
Code on Dataverse
de Marchi, Scott, and Brandon M. Stewart. 2020. “Computational and Machine Learning Models: The Necessity of Connecting Theory and Empirics”. In SAGE Handbook of Research Methods in Political Science and International Relations.
Roberts, Margaret E., Brandon M. Stewart, and Richard Nielsen. 2020. “Adjusting for Confounding with Text Matching”. American Journal of Political Science 64 (4):887-903. Publisher's Version textmatching.pdf textmatchingonlinesupplementalinformation.pdf

NB: This paper is a revised version of the manuscript formerly titled "Matching Methods for High-Dimensional Data with Applications to Text"
Blog Post, Dataverse, Software

2019
What Makes Foreign Policy Teams Tick: Explaining Variation in Group Performance at Geopolitical Forecasting
Horowitz, Michael, et al. 2019. “What Makes Foreign Policy Teams Tick: Explaining Variation in Group Performance at Geopolitical Forecasting”. The Journal of Politics 81 (4):1388-1404. Publisher's Version. Abstract:
When do groups—be they countries, administrations, or other organizations—more or less accurately understand the world around them and assess political choices? Some argue that group decision-making processes often fail due to biases induced by groupthink. Others argue that groups, by aggregating knowledge, are better at analyzing the foreign policy world. To advance knowledge about the intersection of politics and group decision making, this paper draws on evidence from a multiyear geopolitical forecasting tournament with thousands of participants sponsored by the US government. We find that teams outperformed individuals in making accurate geopolitical predictions, with regression discontinuity analysis demonstrating specific teamwork effects. Moreover, structural topic models show that more cooperative teams outperformed less cooperative teams. These results demonstrate that information sharing through groups, cultivating reasoning to hedge against cognitive biases, and ensuring all perspectives are heard can lead to greater success for groups at forecasting and understanding politics.
Data and Code on Dataverse 
stm: An R Package for Structural Topic Models
Roberts, Margaret, Brandon Stewart, and Dustin Tingley. 2019. “stm: An R Package for Structural Topic Models”. Journal of Statistical Software 91 (2):1–40. Publisher's Version. Abstract:
This paper demonstrates how to use the R package stm for structural topic modeling. The structural topic model allows researchers to flexibly estimate a topic model that includes document-level metadata. Estimation is accomplished through a fast variational approximation. The stm package provides many useful features, including rich ways to explore topics, estimate uncertainty, and visualize quantities of interest.
2018
A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors
Khodak, Mikhail, et al. 2018. “A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors”. Proceedings of the Association for Computational Linguistics.
Selected for Oral Presentation, Paper, Code, Blog Post
The Civic Mission of MOOCs: Computational Measures of Engagement Across Differences in Online Courses
Yeomans, Michael, et al. 2018. “The Civic Mission of MOOCs: Computational Measures of Engagement Across Differences in Online Courses”. International Journal of Artificial Intelligence in Education 28 (4):553-589. Publisher's Version
Preprint here
How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility
Chaney, Allison J.B., Brandon M. Stewart, and Barbara E. Engelhardt. 2018. “How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility”. Twelfth ACM Conference on Recommender Systems (RecSys ’18). arXiv
The Global Diffusion of Law: Transnational Crime and the Case of Human Trafficking
Simmons, Beth A., Paulette Lloyd, and Brandon M. Stewart. 2018. “The Global Diffusion of Law: Transnational Crime and the Case of Human Trafficking”. International Organization 72 (2):249-281. Publisher's Version
Data and Code: here
2017
Discourse: MOOC Discussion Forum Analysis at Scale
Kindel, Alexander, Michael Yeomans, Justin Reich, Brandon Stewart, and Dustin Tingley. 2017. “Discourse: MOOC Discussion Forum Analysis at Scale”. Pp. 141–142 in Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale. New York, NY, USA: ACM. Publisher's Version p141-kindel.pdf
2016
The Civic Mission of MOOCs: Measuring Engagement across Political Differences in Forums
Reich, Justin, Brandon Stewart, Kimia Mavon, and Dustin Tingley. 2016. “The Civic Mission of MOOCs: Measuring Engagement across Political Differences in Forums”. Proceedings of the Third (2016) ACM Conference on Learning @ Scale, 1-10. Publisher's Version. Abstract:

In this study, we develop methods for computationally measuring the degree to which students engage in MOOC forums with other students holding different political beliefs. We examine a case study of a single MOOC about education policy, Saving Schools, where we obtain measures of student education policy preferences that correlate with political ideology. Contrary to assertions that online spaces often become echo chambers or ideological silos, we find that students in this case hold diverse political beliefs, participate equitably in forum discussions, directly engage (through replies and upvotes) with students holding opposing beliefs, and converge on a shared language rather than talking past one another. Research that focuses on the civic mission of MOOCs helps ensure that open online learning engages the same breadth of purposes that higher education aspires to serve.

civicmooc.pdf
A model of text for experimentation in the social sciences
Roberts, Margaret E., Brandon M. Stewart, and Edoardo M Airoldi. 2016. “A model of text for experimentation in the social sciences”. Journal of the American Statistical Association 111 (515):988-1003. Publisher's Version. Abstract:

Statistical models of text have become increasingly popular in statistics and computer science as a method of exploring large document collections. Social scientists often want to move beyond exploration, to measurement and experimentation, and make inference about social and political processes that drive discourse and content. In this paper, we develop a model of text data that supports this type of substantive research.
Our approach is to posit a hierarchical mixed membership model for analyzing topical content of documents, in which mixing weights are parameterized by observed covariates. In this model, topical prevalence and topical content are specified as a simple generalized linear model on an arbitrary number of document-level covariates, such as news source and time of release, enabling researchers to introduce elements of the experimental design that informed document collection into the model, within a generally applicable framework. We demonstrate the proposed methodology by analyzing a collection of news reports about China, where we allow the prevalence of topics to evolve over time and vary across newswire services. Our methods quantify the effect of news wire source on both the frequency and nature of topic coverage.

a_model_of_text_for_experimentation_in_the_social_sciences.pdf

NB: This is a revised version of the working paper previously titled "Structural Topic Models." Supplement, Replication Package, Software
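The core idea described in the abstract, topic proportions whose mixing weights are parameterized by document-level covariates, can be sketched generatively. The following is a minimal toy simulation in Python/NumPy, not the authors' code; all dimensions, names, and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: D documents, K topics, P covariates, V vocabulary terms
D, K, P, V = 4, 3, 2, 10

X = rng.normal(size=(D, P))            # document-level covariates (e.g. source, time)
Gamma = rng.normal(size=(P, K - 1))    # prevalence coefficients (K-1 free dimensions)
Sigma = np.eye(K - 1) * 0.5            # covariance of the logistic-normal prior

def softmax_with_zero(eta):
    # Map K-1 logits to a point on the K-simplex (last topic as baseline)
    e = np.exp(np.append(eta, 0.0))
    return e / e.sum()

beta = rng.dirichlet(np.ones(V), size=K)   # topic-word distributions

for d in range(D):
    # Topical prevalence depends on covariates through a linear model
    eta = rng.multivariate_normal(X[d] @ Gamma, Sigma)
    theta = softmax_with_zero(eta)         # covariate-driven topic proportions
    z = rng.choice(K, size=20, p=theta)    # topic assignment per token
    words = [rng.choice(V, p=beta[k]) for k in z]
```

Estimation in the paper proceeds by variational approximation rather than by sampling from this generative story; the sketch only shows how covariates enter the prevalence step.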

Navigating the Local Modes of Big Data: The Case of Topic Models
Roberts, Margaret E, Brandon M Stewart, and Dustin Tingley. 2016. “Navigating the Local Modes of Big Data: The Case of Topic Models”. In Computational Social Science: Discovery and Prediction. New York: Cambridge University Press. Publisher's Version

Copy available here

2015
Chuang, Jason, et al. 2015. “TopicCheck: Interactive Alignment for Assessing Topic Model Stability”. North American Chapter of the Association for Computational Linguistics Human Language Technologies (NAACL HLT). Abstract:

Content analysis, a widely-applied social science research method, is increasingly being supplemented by topic modeling. However, while the discourse on content analysis centers heavily on reproducibility, computer scientists often focus more on scalability and less on coding reliability, leading to growing skepticism on the usefulness of topic models for automated content analysis. In response, we introduce TopicCheck, an interactive tool for assessing topic model stability. Our contributions are threefold. First, from established guidelines on reproducible content analysis, we distill a set of design requirements on how to computationally assess the stability of an automated coding process. Second, we devise an interactive alignment algorithm for matching latent topics from multiple models, and enable sensitivity evaluation across a large number of models. Finally, we demonstrate that our tool enables social scientists to gain novel insights into three active research questions.

topiccheck.pdf
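TopicCheck's interactive alignment procedure is its own; as a rough illustration of the underlying idea, matching latent topics across two model runs can be cast as a one-to-one assignment problem. The batch sketch below uses SciPy's Hungarian solver; the toy models and permutation are invented for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)

# Two topic models over the same 12-word vocabulary: rows are topics,
# columns are word probabilities. Model B is a shuffled, slightly
# perturbed copy of model A, standing in for a second estimation run.
model_a = rng.dirichlet(np.ones(12), size=5)
model_b = np.abs(model_a[[2, 0, 4, 1, 3]] + rng.normal(0, 1e-3, size=(5, 12)))
model_b /= model_b.sum(axis=1, keepdims=True)

# Hungarian assignment on negative similarity pairs each topic in A with
# its best one-to-one counterpart in B.
cost = -model_a @ model_b.T
row_ind, col_ind = linear_sum_assignment(cost)
matched_similarity = -cost[row_ind, col_ind]
```

Low matched similarity for some topics across many runs is the kind of instability signal the tool is designed to surface.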
Computer Assisted Reading and Discovery for Student Generated Text in Massive Open Online Courses
Reich, Justin, Dustin Tingley, Jetson Leder-Luis, Margaret E Roberts, and Brandon M Stewart. 2015. “Computer Assisted Reading and Discovery for Student Generated Text in Massive Open Online Courses”. Journal of Learning Analytics 2 (1):156-184. Abstract:

Dealing with the vast quantities of text that students generate in a Massive Open Online Course (MOOC) is a daunting challenge. Computational tools are needed to help instructional teams uncover themes and patterns as MOOC students write in forums, assignments, and surveys. This paper introduces to the learning analytics community the Structural Topic Model, an approach to language processing that can (1) find syntactic patterns with semantic meaning in unstructured text, (2) identify variation in those patterns across covariates, and (3) uncover archetypal texts that exemplify the documents within a topical pattern. We show examples of computationally-aided discovery and reading in three MOOC settings: mapping students’ self-reported motivations, identifying themes in discussion forums, and uncovering patterns of feedback in course evaluations.

4138-19512-1-pb.pdf
Computer assisted text analysis for comparative politics.
Lucas, Christopher, et al. 2015. “Computer assisted text analysis for comparative politics”. Political Analysis 23 (2):254-277. Abstract:

Recent advances in research tools for the systematic analysis of textual data are enabling exciting new research throughout the social sciences. For comparative politics scholars who are often interested in non-English and possibly multilingual textual datasets, these advances may be difficult to access. This paper discusses practical issues that arise in the processing, management, translation and analysis of textual data with a particular focus on how procedures differ across languages. These procedures are combined in two applied examples of automated text analysis using the recently introduced Structural Topic Model. We also show how the model can be used to analyze data that has been translated into a single language via machine translation tools. All the methods we describe here are implemented in open-source software packages available from the authors.

pa2015_corrected.pdf compoltextappendix.pdf

Included in Political Analysis virtual issue on Online Research Methods. Software: stm, txtorg, translateR. Replication Package

Romney, David, Brandon M Stewart, and Dustin Tingley. 2015. “Plain Text: Transparency in the Acquisition, Analysis, and Access Stages of the Computer-assisted Analysis of Texts”. Qualitative and Multi-Method Research 13 (1):32-37. qmmr2015-1.pdf
2014
Chuang, Jason, et al. 2014. “Computer-Assisted Content Analysis: Topic Models for Exploring Multiple Subjective Interpretations”. Advances in Neural Information Processing Systems Workshop on Human-Propelled Machine Learning. Abstract:

Content analysis, a labor-intensive but widely-applied research method, is increasingly being supplemented by computational techniques such as statistical topic modeling. However, while the discourse on content analysis centers heavily on reproducibility, computer scientists often focus more on increasing the scale of analysis and less on establishing the reliability of analysis results. The gap between user needs and available tools leads to justified skepticism, and limits the adoption and effective use of computational approaches. We argue that enabling human-in-the-loop machine learning requires establishing users’ trust in computer-assisted analysis. To this aim, we introduce our ongoing work on analysis tools for interactively exploring the space of available topic models. To aid tool development, we propose two studies to examine how a computer-aided workflow affects the uncovered codes, and how machine-generated codes impact analysis outcome. We present our prototypes and findings currently under submission.

nipshpml2014.pdf
Coppola, Antonio, and Brandon M Stewart. 2014. “lbfgs: Efficient L-BFGS and OWL-QN Optimization in R”. Abstract:

This vignette introduces the lbfgs package for R, which consists of a wrapper built around the libLBFGS optimization library written by Naoaki Okazaki. The lbfgs package implements both the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) and the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) optimization algorithms. The L-BFGS algorithm solves the problem of minimizing an objective, given its gradient, by iteratively computing approximations of the inverse Hessian matrix. The OWL-QN algorithm finds the optimum of an objective plus the L1 norm of the problem’s parameters. The package offers a fast and memory-efficient implementation of these optimization routines, which is particularly suited for high-dimensional problems. The lbfgs package compares favorably with other optimization packages for R in microbenchmark tests.

lbfgsvignette.pdf
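The workflow the vignette describes, supplying an objective and its gradient to a limited-memory quasi-Newton routine, can be illustrated outside R. Below is a comparable call using SciPy's L-BFGS-B implementation, not the lbfgs package itself; the Rosenbrock test function is a standard example, not one from the vignette:

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    # Classic banana-shaped test objective with minimum at (1, 1)
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    # Analytic gradient, as L-BFGS expects alongside the objective
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

res = minimize(rosenbrock, x0=[-1.2, 1.0], jac=rosenbrock_grad,
               method="L-BFGS-B")
```

Because only a handful of gradient pairs are stored to approximate the inverse Hessian, the memory cost stays linear in the number of parameters, which is what makes the approach attractive for high-dimensional problems.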
Stewart, Brandon M. 2014. Latent Factor Regressions for the Social Sciences. Abstract:

In this paper I present a general framework for regression in the presence of complex dependence structures between units such as in time-series cross-sectional data, relational/network data, and spatial data. These types of data are challenging for standard multilevel models because they involve multiple types of structure (e.g. temporal effects and cross-sectional effects) which are interactive. I show that interactive latent factor models provide a powerful modeling alternative that can address a wide range of data types. Although related models have previously been proposed in several different fields, inference is typically cumbersome and slow. I introduce a class of fast variational inference algorithms that allow for models to be fit quickly and accurately.

tensorreg.pdf tensorregappendix.pdf
Structural topic models for open-ended survey responses
Roberts, Margaret E, et al. 2014. “Structural topic models for open-ended survey responses”. American Journal of Political Science 58:1064-1082. Abstract:

Collection and especially analysis of open-ended survey responses are relatively rare in the discipline and when conducted are almost exclusively done through human coding. We present an alternative, semi-automated approach, the structural topic model (STM) (Roberts, Stewart, and Airoldi 2013; Roberts et al. 2013), that draws on recent developments in machine learning based analysis of textual data. A crucial contribution of the method is that it incorporates information about the document, such as the author’s gender, political affiliation, and treatment assignment (if an experimental study). This article focuses on how the STM is helpful for survey researchers and experimentalists. The STM makes analyzing open-ended responses easier, more revealing, and capable of being used to estimate treatment effects. We illustrate these innovations with analysis of text from surveys and experiments.

topicmodelsopenendedexperiments_0.pdf ajpsappendix.pdf

Awarded the Gosnell Prize for Excellence in Political Methodology for the best work in political methodology presented at any political science conference during the preceding year.  Data at: http://dx.doi.org/10.7910/DVN/29405

2013
Choosing Your Neighbors: Networks of Diffusion in International Relations
Zhukov, Yuri M., and Brandon M. Stewart. 2013. “Choosing Your Neighbors: Networks of Diffusion in International Relations”. International Studies Quarterly 57:271-287. Abstract:

In examining the diffusion of social and political phenomena like regime transition, conflict, and policy change, scholars routinely make choices about how proximity is defined and which neighbors should be considered more important than others. Since each specification offers an alternative view of the networks through which diffusion can take place, one’s decision can exert a significant influence on the magnitude and scope of estimated diffusion effects. This problem is widely recognized, but is rarely the subject of direct analysis. In international relations research, connectivity choices are usually ad hoc, driven more by data availability than by theoretically informed decision criteria. We take a closer look at the assumptions behind these choices, and propose a more systematic method to assess the structural similarity of two or more alternative networks, and select one that most plausibly relates theory to empirics. We apply this method to the spread of democratic regime change, and offer an illustrative example of how neighbor choices might impact predictions and inferences in the case of the 2011 Arab Spring.

zhukovstewart_isq.pdf

Replication Data: here. Spatial Weight Data: here.
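The "connectivity choice" the paper scrutinizes amounts to picking a spatial weight matrix. A minimal sketch of two alternative specifications for the same units follows; the coordinates and distance cutoff are invented, and the paper's own similarity-assessment method is not reproduced here:

```python
import numpy as np

# Four units with hypothetical coordinates; unit 3 is an isolate.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [3.0, 3.0]])
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# Alternative 1: binary contiguity within a distance cutoff, row-standardized.
W_contig = ((dist > 0) & (dist <= 1.5)).astype(float)
W_contig /= np.maximum(W_contig.sum(axis=1, keepdims=True), 1.0)

# Alternative 2: inverse-distance weights, row-standardized.
with np.errstate(divide="ignore"):
    W_inv = np.where(dist > 0, 1.0 / dist, 0.0)
W_inv /= W_inv.sum(axis=1, keepdims=True)
```

The two matrices imply different diffusion networks for the same data, which is exactly why the specification decision deserves direct analysis rather than an ad hoc choice.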

Learning to Extract International Relations from Political Context
O’Connor, Brendan, Brandon M Stewart, and Noah A Smith. 2013. “Learning to Extract International Relations from Political Context”. Association for Computational Linguistics. oconnorstewartsmith.irevents.acl2013.pdf supp.pdf
Psychological and Physiological Responses following Repeated Peer Death
Andersen, Judith Pizarro, Roxane Cohen Silver, Brandon M Stewart, Billie Koperwas, and Clemens Kirschbaum. 2013. “Psychological and Physiological Responses following Repeated Peer Death”. PLOS One 8:1-9. andersenetal2013.pdf onlineappendix.pdf
The Structural Topic Model and Applied Social Science
Roberts, Margaret E, Brandon M Stewart, Dustin Tingley, and Edoardo M Airoldi. 2013. “The Structural Topic Model and Applied Social Science”. Advances in Neural Information Processing Systems Workshop on Topic Models: Computation, Application, and Evaluation. stmnips2013.pdf

Peer-Reviewed Conference Workshop. Selected for Oral Presentation.

Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts
Grimmer, Justin, and Brandon M Stewart. 2013. “Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts”. Political Analysis 21:267-297. Abstract:

Politics and political conflict often occur in the written and spoken word. Scholars have long recognized this, but the massive costs of analyzing even moderately sized collections of texts have prevented political scientists from using texts in their research. Here lies the promise of automated text analysis: it substantially reduces the costs of analyzing large collections of text. We provide a guide to this exciting new area of research and show how, in many instances, the methods have already obtained part of their promise. But there are pitfalls to using automated methods. Automated text methods are useful, but incorrect, models of language: they are no substitute for careful thought and close reading. Rather, automated text methods augment and amplify human reading abilities. Using the methods requires extensive validation in any one application. With these guiding principles to using automated methods, we clarify misconceptions and errors in the literature and identify open questions in the application of automated text analysis in political science. For scholars to avoid the pitfalls of automated methods, methodologists need to develop new methods specifically for how social scientists use quantitative text methods.

tad2.pdf

Awarded Political Analysis Editor’s Choice Award for an article providing an especially significant contribution to political methodology. Replication Data: here.

2012
Combating Transnational Crime: The Role of Learning and Norm Diffusion in the Current Rule of Law Wave
Lloyd, Paulette, et al. 2012. “Combating Transnational Crime: The Role of Learning and Norm Diffusion in the Current Rule of Law Wave”. In Rule of Law Dynamics: In an Era of International and Transnational Governance.
2009
Use of force and civil–military relations in Russia: an automated content analysis
Stewart, Brandon M., and Yuri M. Zhukov. 2009. “Use of force and civil–military relations in Russia: an automated content analysis”. Small Wars & Insurgencies 20:319-343. Abstract:

Russia’s intervention in the Georgian–South Ossetian conflict has highlighted the need to rigorously examine trends in the public debate over the use of force in Russia. Approaching this debate through the prism of civil–military relations, we take advantage of recent methodological advances in automated content analysis and generate a new dataset of 8000 public statements made by Russia’s political and military leaders during the Putin period. The data show little evidence that military elites exert a restraining influence on Russian foreign and defence policy. Although more hesitant than their political counterparts to embrace an interventionist foreign policy agenda, Russian military elites are considerably more activist in considering the use of force as an instrument of foreign policy.

2009_stewartzhukov_swi.pdf appendix.pdf
2007
Political Persecution or Economic Deprivation? A Time-Series Analysis of Haitian Exodus, 1990-2004
Shellman, Stephen M, and Brandon M Stewart. 2007. “Political Persecution or Economic Deprivation? A Time-Series Analysis of Haitian Exodus, 1990-2004”. Conflict Management and Peace Science 24:121-137. Abstract:

This study addresses the factors that lead individuals to flee their homes in search of refuge. Many argue that individuals abandon their homes in favor of an uncertain life elsewhere because of economic hardship, while others argue that threats to their lives, physical person, and liberty cause them to flee. This study engages the debate by analyzing flight patterns over time from Haiti to the United States as a function of economic and security factors. Which factors have the largest influence on Haitian-U.S. migratory patterns? Our results show that both economics and security play a role. However, our analyses are able to distinguish between the effects of different individual economic and security indicators on Haitian-U.S. migration.

shellman.stewart.2007.pdf
Predicting Risk Factors Associated with Forced Migration: An Early Warning Model of Haitian Flight
Shellman, Stephen M, and Brandon M Stewart. 2007. “Predicting Risk Factors Associated with Forced Migration: An Early Warning Model of Haitian Flight”. Civil Wars 9:174-199. Abstract:

This study predicts forced migration events by predicting the civil violence, poor economic conditions, and foreign interventions known to cause individuals to flee their homes in search of refuge. If we can predict forced migration, policy-makers can better plan for humanitarian crises. While the study is limited to predicting Haitian flight to the United States, its strength is its ability to predict weekly flows as opposed to annual flows, providing a greater level of predictive detail than its ‘country-year’ counterparts. We focus on Haiti given that it exhibits most, if not all, of the independent variables included in theories and models of forced migration. Within our temporal domain (1994–2004), Haiti experienced economic instability, low intensity civil conflict, state repression, rebel dissent, and foreign intervention and influence. Given the model’s performance, the study calls for the collection of disaggregated data in additional countries to provide more precise and useful early-warning models of forced migrant events.

shellman.stewart.2007b.pdf
2006
Reeves, Andrew M, Stephen M Shellman, and Brandon M Stewart. 2006. “Fair & Balanced or Fit to Print? The Effects of Media Sources on Statistical Inferences”. Abstract:

This paper examines the effects of source bias on statistical inferences drawn from event data analyses. Most event data projects use a single source to code events. For example most of the early Kansas Event Data System (KEDS) datasets code only Reuters and Agence France Presse (AFP) reports. One of the goals of Project Civil Strife (PCS), a new internal conflict-cooperation event data project, is to code event data from several news sources to garner the most extensive coverage of events and control for bias often found in a single source. Herein, we examine the effects that source bias has on the inferences we draw from statistical time-series models. In this study, we examine domestic political conflict in Indonesia and Cambodia from 1980-2004 using automated content-analyzed datasets collected from multiple sources (i.e., Associated Press, British Broadcasting Corporation, Japan Economic Newswire, United Press International, and Xinhua). The analyses show that we draw different inferences across sources, especially when we disaggregate domestic political groups. We then combine our sources together and eliminate duplicate events to create a multi-source dataset and compare the results to the single-source models. We conclude that there are important differences in the inferences drawn dependent upon source use. Therefore, researchers should (1) check their results across multiple sources and/or (2) analyze multi-source data to test hypotheses when possible.

occassional0.pdf