The eruption of civil wars in Muslim-majority countries and a spate of acts of terrorism by Muslims in Western cities have brought renewed urgency to an age-old question: is Islam more prone to violence than other religions? Specifically, does the Quran, which Muslims believe to be the actual word of God, sanction and encourage bloodshed, and does it do so more than other holy texts? We answer this question using a supervised machine learning algorithm that allows us to score the violence propensity of each verse of the Quran, the Old Testament, and the New Testament and to classify each verse into one of three categories: collective, interpersonal, and self-directed violence. We find that the Quran and the Holy Bible, taken as a whole, contain roughly equal proportions of verses that reference interpersonal and collective violence. When we examine the promotion of each type of violence, we find that the Bible has a significantly higher proportion of verses that promote interpersonal violence, while the Quran has a significantly higher proportion of verses that promote collective violence. We stress that these findings do not necessarily imply that the language of the Quran is a sufficient or even necessary condition for the greater current collective Muslim violence. First, the Christian world has arguably seen much more violence than the Muslim world. Second, alternative or complementary factors, such as an authoritarian regime or a weak state, may be major promoters of violence. Third, even if holy texts do help to enable violence, just a few passages might be sufficient. Such complementary and alternative explanations, not explored here, should be the subject of future work.
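The classification step can be illustrated with a minimal multinomial Naive Bayes text classifier in pure Python. This is a sketch only: the training phrases and labels below are invented stand-ins, and the abstract does not specify which supervised algorithm or training data the paper actually uses.

```python
from collections import Counter, defaultdict
import math

def train_nb(docs):
    """Count class frequencies and per-class word frequencies.

    docs: list of (text, label) pairs.
    """
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        class_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def classify(text, class_counts, word_counts, vocab):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""
    total_docs = sum(class_counts.values())
    best_label, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Hypothetical labeled "verses", one pair per violence category.
train = [
    ("the army destroyed the city", "collective"),
    ("nations wage war against nations", "collective"),
    ("he struck his brother in anger", "interpersonal"),
    ("a man slew his neighbor", "interpersonal"),
    ("he despaired and took his own life", "self-directed"),
]
model = train_nb(train)
```

A new verse is scored against each category's word distribution and assigned the highest-probability label; the real pipeline would add proper tokenization, held-out validation, and far more training data.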
Frequentist approaches to treatment effect estimation in experimental settings have been the dominant paradigm in the social sciences for over a century, yet there are many circumstances in which Bayesian methods are preferable. For example, in the context of experimental replications where prior treatment effect estimates are known and available, a Bayesian approach incorporating informative priors provides an entropy-minimizing and epistemologically superior means of estimating treatment effects. In this paper, we provide a conceptual justification for using Bayesian inference with informative priors in the context of sequential experiments and discuss which types of priors should be used in each circumstance.
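As a minimal sketch of the kind of informative-prior updating the paper advocates, the conjugate normal-normal model combines a prior treatment effect estimate with a new experimental estimate by precision weighting. The numbers below are invented for illustration, not estimates from any actual replication.

```python
def normal_update(prior_mean, prior_var, est_mean, est_var):
    """Posterior for a normal mean with known variances: precisions
    (inverse variances) add, and the posterior mean is the
    precision-weighted average of the prior and the new estimate."""
    post_prec = 1.0 / prior_var + 1.0 / est_var
    post_mean = (prior_mean / prior_var + est_mean / est_var) / post_prec
    return post_mean, 1.0 / post_prec

# Prior from an earlier experiment: effect 0.5 (SE 0.2, so variance 0.04);
# the replication estimates an effect of 0.3 with the same variance.
mean, var = normal_update(0.5, 0.04, 0.3, 0.04)
```

With equally precise prior and replication, the posterior mean is their midpoint (0.4) and the posterior variance is halved (0.02); a more precise prior would pull the posterior further toward the earlier estimate.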
Measuring legislative accomplishment and the productivity of political institutions is fundamental to understanding the political economies and trajectories of democratic nations. Recent research measuring legislative accomplishment has enabled scholars to assess the importance of legislation across a wide span of time but entails cumbersome and expensive methods that do not allow for continuous updating with legislation enacted after 1994. In this paper, we develop an algorithm that allows us to continuously update and track the relative importance of legislation over time as newly enacted legislation becomes available. This is accomplished by first modeling enacted legislation across time as a directed network of citations, using bill text and changes to the United States Code (amendments, repeals, and additions). The importance of each piece of legislation in this network is then measured as a function of its PageRank centrality. Using this new measure, we reassess the importance of several pieces of key legislation and the productivity of Congress from 1926, the year that the first edition of the United States Code was published, to 2017.
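The centrality computation can be sketched as power-iteration PageRank over a toy citation network. The statute names below are placeholders, not the paper's data, and the damping factor 0.85 is the conventional default rather than a value taken from the paper.

```python
def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank for a directed citation graph.

    links: dict mapping each node to the list of nodes it cites.
    Dangling nodes (no outgoing citations) spread their mass uniformly.
    """
    nodes = set(links) | {v for outs in links.values() for v in outs}
    n = len(nodes)
    pr = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        nxt = {u: (1.0 - d) / n for u in nodes}
        dangling = sum(pr[u] for u in nodes if not links.get(u))
        for u, outs in links.items():
            for v in outs:
                nxt[v] += d * pr[u] / len(outs)
        for u in nodes:
            nxt[u] += d * dangling / n
        pr = nxt
    return pr

# Toy network: two later acts both amend (and therefore cite) "Act A".
scores = pagerank({"Act B": ["Act A"], "Act C": ["Act A"], "Act A": []})
```

Heavily cited legislation ("Act A") receives the highest score, which is the intuition behind using PageRank centrality as an importance measure; because the iteration only touches nodes and edges, new legislation can be appended to the network and scores recomputed at any time.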
Racially segregated cities tend to be politically polarized cities, leading to inequalities in public goods provision, political and social isolation, concentrated poverty, and the perpetuation of a sense of hopelessness among many living in America's urban centers. While the links between racial segregation and political polarization are well established, it is less clear why, or through what mechanism, both can arise simultaneously. In this article, we derive a formal model which we demonstrate can partially account for this puzzle. This model allows us to derive "ideological tipping points": changes in neighborhood demographics at which members of one or more groups along the ideological spectrum (liberal, conservative, moderate) relocate. We then validate the model and demonstrate that racial segregation and political polarization consistently emerge in equilibrium under a wide variety of conditions by simulating the movement of individuals between Census tracts in the 10 largest cities in the United States.
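A stylized sketch of the relocation rule behind such tipping points, on a one-dimensional ring of agents rather than real Census tracts. The neighborhood size, swap dynamics, and threshold are illustrative assumptions, not the paper's calibrated model.

```python
import random

def wants_to_move(grid, i, threshold, k=2):
    """An agent relocates when the share of its 2k nearest neighbors
    holding a different ideology exceeds its tipping threshold."""
    n = len(grid)
    nbrs = [grid[(i + d) % n] for d in range(-k, k + 1) if d != 0]
    dissimilar = sum(1 for x in nbrs if x != grid[i]) / len(nbrs)
    return dissimilar > threshold

def simulate(grid, threshold, rounds=50, seed=0):
    """Each round, unhappy agents pair up at random and swap locations."""
    rng = random.Random(seed)
    grid = list(grid)
    for _ in range(rounds):
        movers = [i for i in range(len(grid)) if wants_to_move(grid, i, threshold)]
        rng.shuffle(movers)
        for a, b in zip(movers[::2], movers[1::2]):
            grid[a], grid[b] = grid[b], grid[a]
    return grid
```

On a fully integrated (alternating) ring, half of every agent's neighbors differ, so any tipping threshold below 0.5 triggers movement; inside a homogeneous bloc, agents stay put. This is the Schelling-style logic by which modest individual thresholds can generate aggregate segregation.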
We create a computational framework for understanding social action and demonstrate how this framework can be used to build an open-source event detection tool with scalable statistical machine learning algorithms and a subsampled database of over 600 million geo-tagged Tweets from around the world. These Tweets were collected between April 1, 2014 and April 30, 2015, a period that notably includes the emergence of the Black Lives Matter movement. We demonstrate how these methods can be used diagnostically, by researchers, government officials, and the public, to understand peaceful and violent collective action at very fine-grained levels of time and geography.
Politicians and political organizations routinely interact with voters and the public at large using images, yet until recently, computational limitations have precluded efforts to gain systematic knowledge about how images function as a medium of political communication. New developments in machine learning, however, are bringing the systematic study of images within reach. In this paper, we provide a framework for political image analysis with deep neural networks, introduce neural networks and deep learning methods and discuss the promise and pitfalls of these techniques for political image analysis. Using a database of 296,460 photos from the Facebook pages of members of the U.S. House and Senate, we provide two illustrative examples of how these techniques can be used to study home style in the digital age.
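To make the building blocks concrete, here is a single convolution-plus-ReLU layer, the core operation that the deep networks discussed in the paper stack many times over. The 3x3 "image" and the 2x2 kernel are arbitrary toy values; a real network learns kernel weights from data.

```python
def conv2d_relu(img, kernel):
    """Valid (no-padding) 2D convolution followed by a ReLU nonlinearity.

    img and kernel are lists of lists (rows of numbers)."""
    kh, kw = len(kernel), len(kernel[0])
    H, W = len(img), len(img[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            s = sum(img[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(max(s, 0))  # ReLU: negative responses are zeroed
        out.append(row)
    return out

# A 2x2 kernel that responds to diagonal intensity increases.
feature_map = conv2d_relu([[1, 2, 3],
                           [4, 5, 6],
                           [7, 8, 9]],
                          [[-1, 0],
                           [0, 1]])
```

Deep networks for image analysis interleave many such learned filters with pooling and fully connected layers, which is what allows them to recognize the visual content of photos like those in the congressional Facebook database.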
Organizations produce copious volumes of written documents, including position papers, meeting summaries, presentations, and budget justifications. These documents present a wealth of untapped information, which can shed light on a variety of organizational factors: individual and group behaviors, managerial and policy choices, and other key dynamics both within and between organizations. Computational text analysis methods offer a highly generalizable means of tapping into these documents in order to generate objective organizational data. We propose a general method for measuring the budget orientations in institutional budget documents using Latent Dirichlet Allocation (LDA). LDA is a nonparametric Bayesian method which is used to extract topical content from collections of documents. We demonstrate how this method can be used to measure the functions of budget narratives in the state of California, highlighting both within- and between-county variations along Schick's (1966) spectrum of budget narrative purposes. This annotated computational analysis of documents is an example of how machine-learning techniques can greatly enhance longitudinal, comparative research in public management and governance research.
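At the heart of one standard LDA inference scheme, collapsed Gibbs sampling, is the conditional probability of assigning a word token to each topic: proportional to how much the document already uses the topic times how much the topic already uses the word. A minimal sketch with made-up counts follows; alpha and beta are the usual Dirichlet smoothing hyperparameters, and the specific numbers are illustrative only.

```python
def topic_probs(n_doc_topic, n_topic_word, n_topic, alpha, beta, vocab_size):
    """Collapsed-Gibbs conditional p(z = k | everything else) for one token.

    n_doc_topic[k]: tokens in this document already assigned to topic k
    n_topic_word[k]: times this word type is assigned to topic k (corpus-wide)
    n_topic[k]: total tokens assigned to topic k (corpus-wide)
    """
    weights = [
        (n_doc_topic[k] + alpha)
        * (n_topic_word[k] + beta) / (n_topic[k] + vocab_size * beta)
        for k in range(len(n_doc_topic))
    ]
    total = sum(weights)
    return [w / total for w in weights]

# A document that leans toward topic 0, for a word topic 0 already uses often.
probs = topic_probs([5, 1], [3, 0], [10, 10], alpha=0.1, beta=0.01, vocab_size=100)
```

Resampling every token's topic from these conditionals until the chain mixes yields the per-document topic proportions that can then be mapped onto a substantive scheme such as Schick's budget narrative purposes.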
Machine learning (ML) methods have gained a great deal of popularity in recent years among public administration scholars and practitioners. These techniques open the door to the analysis of text, image and other types of data that allow us to test foundational theories of public administration and to develop new theories. Despite the excitement surrounding ML methods, clarity regarding their proper use and potential pitfalls is lacking. This article attempts to fill this gap in the literature by providing an ML “guide to practice” for public administration scholars and practitioners. Here, we take a foundational view of ML and describe how these methods can enrich public administration research and practice through their ability to develop new measures, tap into new sources of data, and conduct statistical inference and causal inference in a principled manner. We then turn our attention to the pitfalls of using these methods, such as unvalidated measures and lack of interpretability. Finally, we demonstrate how ML techniques can help us learn about organizational reputation in federal agencies through an illustrated example using tweets from 13 executive federal agencies. All R code, analyses, and data described in this article can be found in the Supplementary Appendix.
How do migration and immigration shape the political geography of American cities? In this article, we propose a mechanism of partisan sorting and demographic change which is tested using the mass migration of African-Americans from New Orleans to Houston, Texas, in the aftermath of Hurricane Katrina. We argue that differences in residential choice preferences among partisans, combined with demographic changes which increase diversity, can induce sorting by triggering flight (migration) among ideological conservatives. Using Hurricane Katrina evacuee data from schools in Harris County along with a variety of empirical tools, we find evidence suggesting that African-American Hurricane Katrina migration led to Republican flight.