UC Berkeley Electronic Theses and Dissertations

Title: Identity/Politics: Intersection and Distinction in Digital Publics
Permalink: https://escholarship.org/uc/item/2bh7c62m
Author: Seals, Dmitri
Publication Date: 2020
Peer reviewed | Thesis/dissertation

eScholarship.org | Powered by the California Digital Library, University of California

Identity/Politics: Intersection and Distinction in Digital Publics

by

Dmitri Seals

A dissertation submitted in partial satisfaction of the

requirements for the degree of

Doctor of Philosophy

in

Sociology

in the

Graduate Division

of the

University of California, Berkeley

Committee in charge:

Professor Neil Fligstein, Chair

Professor Kim Voss

Professor David Bamman

Spring 2020

Abstract

Identity/Politics: Intersection and Distinction in Digital Publics

by

Dmitri Seals

Doctor of Philosophy in Sociology

University of California, Berkeley

Professor Neil Fligstein, Chair

This study tests and sharpens the widely shared consensus that something meaningful changed during the 2016 election cycle in the United States. Both online and in person, highly partisan actors ventured out of comfortable ideological homes to mainstream forums, in many cases acting without support from formal organizations. The apparently new character of their mobilization gave rise to narratives of social and political realignment and suggested the possibility of radical change to the structure of affiliations and identifications in the American public.

What realigned in this tumultuous time and what stayed the same? Why do any shifts that occurred matter to the future of advocacy and democracy in the United States? To address these questions, the study brings together tools of concept and method from a wide range of influences including cultural sociology, intersectionality studies, social psychology, philosophy, and political science. The study produces two toolkits to support further work on boundary realignment and political inequality: intersectional boundary analysis and a new typology of participatory publics based on their level of diversity and emotional intensity.

The empirical work of this study is a mixed-methods analysis drawing on over 1.7 billion comments posted to the social media platform Reddit from 2015 to 2017 to reveal connections between discourse, culture, and inequality. The first chapter lays out the conceptual framework of intersectional boundary analysis, distinguishing forms of boundary work and developing tools for measuring boundary work and emotional language in text. It then measures the relative prevalence of different forms of boundary work to reveal shifts in the salience of symbolic boundaries over the 2016 election cycle and uses network analysis to show shifts in how these boundaries align.

The second chapter develops intersectional boundary analysis by zooming in on the intersectional complexity of discourse on class inequality in online advocacy forums, using qualitative coding and close readings to critique and move past simplistic narratives like the rise of populism and the white working class. The final chapter explores inequalities of political power by tracking rates of attrition and advocacy activity among the wave of new authors who joined these advocacy forums after the elections. Results connect debates on the role of diversity and polarization, rationality and emotion in participatory publics online to inequalities of political engagement.


Table of Contents

Introduction ......................................................................................................................................................................... 1

Chapter 1: Intersectional Boundary Analysis ............................................................................................................... 19

Chapter 2: The Intersectional Complexity of Class Distinction ............................................................................... 46

Chapter 3: Emotion, Diversity, and Inclusion in Online Publics ............................................................................. 70

Conclusion ......................................................................................................................................................................... 96

References ........................................................................................................................................................................ 100

Appendix A: Glossary .................................................................................................................................................... 127

Appendix B: Methodology ........................................................................................................................................... 131


Acknowledgements

True thanks to the committee of scholars - Neil Fligstein, Kim Voss, and David Bamman - whose eternal patience, insightful comments, and kind humanity made this dissertation possible. Six years working to found and develop movements and nonprofits in the Bay Area enriched this work but interrupted my progress through graduate school, and I do not take for granted that this committee welcomed me back with honest guidance and open arms.

My deepest and most eternal gratitude to my partner Alisa Catalina Sanchez, whose brilliance, generous heart, and writing expertise enliven every page of this document. I am also indebted to the wise voices from one generation up and one generation down: I am inspired every day by my parents, who faced down much greater obstacles than I did to get their educations; I am also lifted up by the lively spirit of Orozco, now little but already hinting at the wonderful adult they will become.

Thanks also to the many friends and collaborators at UC Berkeley, Cal State LA, and elsewhere who have read drafts and offered thoughtful comments along the way. My heart goes out especially to those who read the early drafts I produced as I was just getting back in the academic groove. I want to give special appreciation to every author whose comments enter into the analysis of this study and everyone out there working to make their voice heard and claim a seat at the slanted table of democracy in this country.



Introduction

Identity/Politics: Intersection and Distinction in Digital Publics

A wave of mass mobilizations during the 2016 US presidential campaign confounded political pundits, rattled established organizations, and raised a challenge to traditional understandings of political action. Both online and in person, highly partisan actors ventured out of comfortable ideological homes to contest for control in mainstream forums, in many cases acting without support from formal organizations. The apparently new character of their mobilization suggested the possibility that 2016 saw a critical election causing radical change to the structure of affiliations and identifications in the American public sphere.

What realigned, what stayed the same - and most importantly, why does it matter to the future of advocacy and democracy in the United States? Scholars and pundits focused attention on a wide range of narratives - populism, the white working class, the rise of the alt-right - and so often they contradicted each other, highlighting the importance of one social category and ignoring or downplaying others. This study argues that the best chance of assembling these contradictory narratives into an understanding of how realignment works and why it matters is to combine perspectives and tools of intersectionality studies and cultural sociology, and to pair close readings with big data analysis.

Crucially, the analysis directs attention away from the discourse of elites, avoiding the traditional study of political speeches and mass media and instead focusing on informal discourse - the everyday discussions of everyday people. Informal discourse is a crucial site for the study of large-scale shifts in identity and group membership. Elites do not dictate political and social directions so much as tap into forces already present in the broader public (Bourdieu 1991; Norton 2017). Developing methods to measure patterns in informal discourse directly and at scale can reveal patterns that studies of elite discourse would miss - and reveal trends and complexities that elites themselves tend to miss.

As boundaries and publics realign and evolve under the influence of charged political events, they contribute to shifts in how people view themselves - or don't - as active and engaged advocates and participants in democracy. The subjective experience of participating in advocacy and political discussion can have long-term consequences for political and power inequalities: many who find their participation ignored or dissonant withdraw in a form of self-censorship (Schwalbe et al. 2000), shying away from politics even in their personal lives (Bennett et al. 2013; Eliasoph 1996, 1998). Still, the great body of work mapping discursive opportunity structures has left relatively untouched a vital question that is essentially quantitative: which styles and structures of communication cause people to remain or drop out more than others?

These sociological questions are grounded by my own experiences engaging and observing advocacy in online spaces but also person to person. Before the 2016 election, advocacy on national policy in my city of Los Angeles sat at a low simmer. After the election, I saw neighbors with a wide range of backgrounds and perspectives take up political engagement with a new seriousness. Within a month of the election, at least eight new grassroots advocacy groups were founded within a 2-mile radius of my home. By the end of 2017, most of these groups were gone, and the remaining members seemed more privileged than the group that first mobilized a year before.


These chapters push toward answers to the broad question of why and how polities change in unsettled times, but always with an eye to the future of mobilization and political inequality. The answer to these questions has become deeply important as concerns over increasing challenges to democratic institutions in both theory and practice (Brown 2015; Foa and Mounk 2016, 2017) clash against assertions that participatory democracy is on the rise (Nabatchi and Leighninger 2015; Polletta 2013, 2014).

At stake in debates over what changed in informal discourse during unsettled times like those of the 2016 election cycle are much more than questions of theory. Understanding realignments in discourse and making them rapidly measurable can be a crucial part of diagnosing inequality and intervening for meaningful inclusion. Shifts in how people negotiate social categories on Reddit are certainly important for the people who directly participate in the platform - and because these shifts can be rapidly measured at scale, they can increase our capacity to diagnose broader shifts that implicate millions more people.

In these times, vanishingly few publics operate without becoming entangled with a global network of publics; patterns that pertain in a massively networked public like Reddit often signal or directly influence patterns emerging in other systems. Addressing questions of identity, inclusion, and inequality on social media is in no way sufficient on its own. As part of a broader network of research and intervention, this work can enrich conversations on how to create spaces of discourse that meaningfully bridge symbolic boundaries and work against inequalities of power and mobilization.

INTERSECTIONAL BOUNDARY ANALYSIS

Symbolic and formal boundaries

The boundaries people construct in everyday informal discourse are crucial to collective action and social change. When people make decisions about how to communicate and how to act, whether consciously or not, they refer to a collective understanding of ingroups and outgroups, allies and enemies, drawn from their experience and their understanding of the social context in which they operate. Every arena of social action - in the terms of Pierre Bourdieu, every field - has a specific configuration of ingroups and outgroups that exists as a pattern in the memory traces of the people who participate in it, which are the aftermath and product of prior communicative action in that context (Latour 2005).

The specific configuration of ingroups and outgroups matters a great deal to mobilization, whether in politics, social movements, or other arenas. Though history is often written as if heroic leaders spark movements through raw charisma or talent, whether or not any person rises into leadership depends on their capacity to resonate with the beliefs, values, and dispositions of the people they seek to lead; indeed, that capacity may be one of the strongest elements of what we perceive as charisma (Bourdieu 1985; Bourdieu and Wacquant 2013).

One way to capture this configuration is to study symbolic boundaries: collectively imagined cultural lines by which social actors divide people and other entities into groups. Specific symbolic boundaries like man|woman and black|white are often part of boundary systems like race or gender. Symbolic boundaries are strongly related to formal boundaries: laws or policies that treat people differently based on their membership in the social groups defined by symbolic boundaries.


Indeed, policies that formally treat groups differently often arise from boundaries that first circulated in discourse, as when segregation laws draw on racist narratives already circulating in political discussion and popular culture (Marx 1997). The racial formation perspective has generated a large toolset for the analysis of how symbolic boundaries relate to formal boundaries as well as to economic and power inequality, not taking for granted the reality of race but showing how action from the micro to macro alternatively reifies, contests, and refigures racial groups (Omi and Winant 2014). This perspective is ripe for application across social categories, and indeed scholars inside the field have already called for extending the approach across settings and scales (Saperstein, Penner, and Light 2013) and beyond race and ethnicity (Brubaker 2014).1

Though the importance of symbolic boundaries in informal discourse is not to be taken for granted - for instance, in the presence of strong formal boundaries active reconstruction of symbolic boundaries can subside (Silva 2016) - it is a crucial site of research into the connection between culture and inequality. Most promising is the articulation by Patricia Hill Collins of an intersectional approach to lines of division that reveals conventional social groups as reciprocally constructing phenomena and attends to how power relations and social inequalities flow in the "recursive relationship between social structures and cultural representations" (Collins 2015).

Boundary work

If boundary structures matter to inequality even when they occur in informal discourse, it is crucial to develop frameworks and methods that enable us to understand, measure, and ultimately intervene to change them. Key to this process is the study of boundary work: everyday acts by which people activate, define, or frame symbolic boundaries. Boundary work does not need to be discursive or consciously intended. This study differentiates between three stages of boundary work and, at each stage, between discourse with positive, neutral, and negative charge, as in the table below:

• boundary activation makes symbolic boundaries salient by directly referencing boundary systems (race, gender) or the groups that constitute them (black people, women).

• group definition assigns people to one side or the other of the relevant symbolic boundary.

• group framing associates groups with symbols and emotional charges (either negative or positive) to intervene in the collective understanding of what groups mean. The positively charged (upframing) and negatively charged (downframing) instantiations of group framing play a particularly large role in the formation of symbolic boundaries.

1 Brubaker references a particularly compelling passage from Weber, who asserted that a precise and differentiated analysis would ‘surely throw out the umbrella term “ethnic” altogether’, for it is ‘entirely unusable’ for any ‘truly rigorous investigation’ (Weber 1922:394–95).

Table. Forms of boundary work by step and emotional charge.

                        | Emotional charge
Stage                   | Positive                      | Neutral                              | Negative
Activating boundaries   |                               | Activation ("group Y"; "system XYZ") |
Defining groups         | Inclusion ("x is one of us")  | Categorization ("y is in group Y")   | Exclusion ("z is one of them")
Framing groups          | Upframing ("group X is good") | Interpretation ("group Y is…")       | Downframing ("group Z is bad")


This systematic understanding of boundary work by its function and emotional charge builds on a productive synthesis emerging in the study of symbolic boundaries that situates boundary work with negative, neutral, and even positive emotional charges in broader processes of collective valuation (Lamont 2012). This work aligns cultural sociology with recent syntheses in social psychology and neuroscience that show how processes of inclusion and exclusion work together to drive the formation of stereotypes (Amodio 2014), political ideologies (Jost et al. 2014; Jost, Federico, and Napier 2009), and broader identities (Hogg 2016).

These steps often happen simultaneously, but their combinations are worth analyzing independently. The qualitative analysis of Chapter 2 captures all forms in the activating boundaries and framing groups steps - activation, upframing, interpretation, and downframing. The quantitative analysis of Chapter 1 targets only two forms that lend themselves to accurate computer-assisted analysis at large scale:

1. Boundary activation with neutral emotional charge: refers to categories that divide people into groups. Examples: “Black people talk differently than white people”; “Women sometimes wear dresses.”

2. Downframing with explicit boundary activation: attaches negative value (emotional charge) by directly naming a category. Examples: “Southern people have funny accents”; the N word; “Men have no fashion sense.”

The examples above are chosen to be minimally offensive, but clearly many instances of downframing in real discourse are extremely offensive and by design: strong emotional charge produces strong symbolic boundaries. Conceptually distinguishing forms of boundary work only becomes useful when matched to methodological work to distinguish them in the practice of communication. Developing and validating a series of codes and lexicons that could reliably distinguish between boundary activation, upframing, and downframing in informal discourse is a crucial contribution of this study.
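The distinction between neutral boundary activation and downframing with explicit activation can be sketched as a lexicon-matching rule. The term lists below are tiny hypothetical stand-ins for the study's validated lexicons, and the matching logic is a minimal illustration, not the dissertation's actual coding pipeline:

```python
import re

# Hypothetical mini-lexicons; the validated lexicons used in the study are far larger.
BOUNDARY_TERMS = {"women", "men", "black", "white", "southern"}  # boundary activation
NEGATIVE_TERMS = {"bad", "funny", "lazy", "no fashion sense"}    # negative emotional charge

def classify_comment(text):
    """Label a comment as downframing, activation, or neither.

    A comment that names a social category counts as boundary activation;
    if it also carries negative emotional language, it counts as
    downframing with explicit boundary activation.
    """
    lowered = text.lower()
    tokens = set(re.findall(r"[a-z']+", lowered))
    activates = bool(tokens & BOUNDARY_TERMS)
    # Substring check so multi-word negative phrases also match.
    negative = any(term in lowered for term in NEGATIVE_TERMS)
    if activates and negative:
        return "downframing"
    if activates:
        return "activation"
    return "none"

print(classify_comment("Women sometimes wear dresses."))       # activation
print(classify_comment("Southern people have funny accents"))  # downframing
```

In practice the hard methodological work lies in building and validating the lexicons themselves, since the same rule applied with noisy term lists would misclassify at scale.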

Intersectionality studies

The core insight of intersectionality studies has been that all systems of symbolic boundaries - whether it is race, class, gender, or the many other boundaries explored here - are deeply connected. When intersectionality rose to the public consciousness in the latter half of the 20th century, it was to challenge legal systems that could not handle more than one form of discrimination at once (Crenshaw 1991).

In one of the cases that spurred this intellectual movement, DeGraffenreid v. General Motors of 1976, a group of black women who brought lawsuits on the basis of simultaneous racial and sexual discrimination had those lawsuits thrown out on the basis that combining the two would open a "Pandora's Box" of complaints on behalf of groups that experienced multiple forms of oppression, "governed only by the mathematical principles of permutation and combination" (Crenshaw 1989).

Since then, intersectionality studies has opened that box with great results, emerging as a diverse field in which a multitude of scholars study a great many forms of social difference in addition to race and gender (Cho, Crenshaw, and McCall 2013; Hancock 2016). Intersectionality studies is a vital intellectual movement shedding light on how boundaries combine in arenas from the informal discourse explored here to formal discrimination and violence; it has also been a crucial tool in building coalitions that bring movements together across differences (Hancock 2013b; Milkman 2017; Terriquez, Brenes, and Lopez 2018).

This study combines insights and methodological tools from intersectionality studies and the study of symbolic boundaries to target intersectional boundary structure, particularly the relative prevalence, clustering, and function of symbolic boundaries in discourse. This follows calls in intersectionality studies to push beyond showing that categories intersect in discourse but to investigate how and why intersections matter, along with how they change over time (Choo and Ferree 2010; Hancock 2013a).

Measuring the intersection of symbolic boundaries quantitatively has been rare both in intersectionality studies and cultural sociology. Indeed, quantitative work has often masked and misrepresented the dynamics of inequality (Hancock 2007; Zuberi and Bonilla-Silva 2008). Most studies of intersectional difference in online spaces have been qualitative so far (e.g., Gray, Buyukozturk, and Hill 2017; Gray 2012). Similarly, students of symbolic boundaries have most often employed qualitative tools to reveal the language with which people communicate their identity by distinguishing themselves from stigmatized others (Beljean, Chong, and Lamont 2016).

Still, the past two decades have seen increasing calls for mixed-methods work in intersectional research (Bowleg 2008; Griffin and Museus 2011; McCall 2005) that specifies how boundaries intersect (Warner and Shields 2013).

Intersectional boundary analysis

This study attempts to develop a toolkit for intersectional boundary analysis, a form of study that measures the relative prevalence, clustering, and function of symbolic boundaries in discourse - the intersectional boundary structure - in order to understand how intersectionality differs across contexts and changes over time.

Developing the tools of intersectional boundary analysis is an intervention aimed at pundits who elevate their favorite forms of inequality and ignore the others. When it is possible to uncover how boundary intersections have changed, it is no longer enough to assert that they have changed. Social scientists aware of intersectionality and of the painful conflicts that come from ignoring it must resist simple narratives that assert the primacy of race, class, gender, or any form of inequality without careful comparative work (Hancock 2013b).

Understanding the network structure of boundary language has revealed how boundary work can not only distinguish but also bridge social groups (Pachucki and Breiger 2010; Smith 2007). Research on symbolic boundaries is particularly well suited for analysis at the intersection of conventional social categories, for instance investigating how people simultaneously navigate racial categories in combination with class (Lamont 2009) or gender and sexuality (Steinbugler 2012).

At the same time, the strong tradition of quantitative analysis in cultural sociology has rarely touched questions of intersectional difference. Careful coding of text has revealed quantitative differences in the use of language across cultural and national groups and shown how these differences shape policy outcomes (Ferree 2003; Mueller, Restifo, and Restifo 2012). Related studies have revealed cultural changes over time by applying automated content analysis to both formal and informal discourse (Bail 2012; Fiss and Hirsch 2005; Oleinik 2015). The symbolic boundaries approach has also proven particularly amenable to the quantitative analysis of large datasets, for instance showing how symbolic boundaries shift through displays of negative emotion (Bail 2008, 2014a).

Pairing quantitative and qualitative analysis is particularly important in the study of intersecting boundaries given how deeply enmeshed boundaries are in everyday communication (Chun 2019). Integrated approaches reporting qualitative and quantitative measures on the same dataset have increasingly been used in sociology (Small 2011), as in other fields, to "triangulate" complex social reality, using quantitative and qualitative analyses to question and sharpen each other (Covarrubias et al. 2018; Onwuegbuzie, Johnson, and Collins 2009; Spillman 2014).

The goal of this work is not to replace the study of conventional lines of social division in offline interaction. The lines of human division that circulate most widely - race, gender, class, even political identity (Iyengar and Westwood 2015) - show what Bourdieu called an "objectivity of the first order"; they are embedded in material relations in ways that digital group identities generally are not (Bourdieu and Wacquant 2013:296). Bringing digital spaces into the broader conversation on intersectional identities and inequalities will deepen our understanding of both established and emergent forms of affiliation and distinction.

Digital communication cuts across geographies and enables more aggressive segregation along lines of value and belief than ever before (Papacharissi 2010), challenging traditional groups even as their internal diversity tends to increase (e.g., Waters, Kasinitz, and Asad 2014). Traditional social groupings are bound to change under the influence of this dynamic, and it is time that we develop tools of theory and method capable of understanding new social groups in the context of traditional ones.

TARGETS OF ANALYSIS

The structure of publics and inequalities of political power

Equal participation in advocacy cuts to the heart of democracy in theory and practice. When in practice political power or participation is deeply unequal, the legitimacy of democratic regimes starts to erode (Dryzek 2001; Gutmann and Ben‐Porath 2014; Habermas 1975; Lipset 1959). Unfortunately, despite frequently reporting on differences in participation, scholars have done little to unearth the roots of these trends. Too often, we have taken political attitudes and practices as received wisdom, static variables to describe or interpret rather than the outcomes of a chain of experiences.

What happens when new voices appear in forums whose style and structure evolved around very different populations? Which kinds of public discourse welcome new voices and which push them to the side? Despite broad agreement that participation is a core characteristic of democracy, there has been broad conflict over the question of how democratic participation should occur. The core disagreement has been between advocates of deliberative and agonistic democracy (Benhabib 1994).


The deliberative tradition following on the work of Jürgen Habermas tends to value rationality and diversity of perspective and background in political discussion (Bächtiger and Parkinson 2019; Christensen, Himmelroos, and Grönlund 2017; Dryzek 2001; Fishkin 2011; Habermas 1991). Critics of Habermas in the mold of Nancy Fraser and Michael C. Dawson show how the enforcement of rationality in diverse public spheres can marginalize subaltern groups; they assert the importance of homogeneous counterpublics as alternative forums where these groups can build power (Daum 2017; Dawson 1994; Fraser 1990; Hill 2018; Toepfl and Piwoni 2018).

While the theoretical debate on the role of counterpublics or public spheres in diverse democracies has burned bright, it has left relatively untouched the empirical question of how these spheres - along with the homogeneity and emotional intensity that set counterpublics apart - influence who participates. Crucially, understanding participatory publics by their degree of emotional intensity and ideological diversity opens two other categories for exploration beyond counterpublics (emotionally intense and homogeneous) and deliberative spheres, which have sometimes been explored empirically but rarely enter into the debate about norms of democratic participation. Publics that feature rational discussion among homogeneous peers are here called Incubators; those that are both ideologically diverse and emotionally intense are called Battlefields.
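The four-fold typology of publics amounts to a simple decision rule over two dimensions. The scores and threshold below are hypothetical placeholders for however a given study operationalizes ideological diversity and emotional intensity:

```python
def classify_public(diversity, intensity, threshold=0.5):
    """Place a forum in the four-fold typology.

    `diversity` and `intensity` are assumed to be scores normalized to 0-1;
    the 0.5 cut point is an illustrative choice, not the study's.
    """
    diverse = diversity >= threshold
    intense = intensity >= threshold
    if diverse and intense:
        return "Battlefield"          # ideologically diverse, emotionally intense
    if diverse:
        return "Deliberative sphere"  # diverse, rational
    if intense:
        return "Counterpublic"        # homogeneous, emotionally intense
    return "Incubator"                # homogeneous, rational

print(classify_public(0.9, 0.2))  # Deliberative sphere
print(classify_public(0.1, 0.9))  # Counterpublic
```

The point of the rule is only that the two dimensions are independent, so counterpublics and deliberative spheres do not exhaust the space of participatory publics.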

When people enter into each type of public, they engage with styles and norms of conversation that are likely to influence who participates over time. Scholars following on the work of Nina Eliasoph show how the expectation of disagreement and emotional conflict in political discussion can lead people to avoid political engagement altogether (Bennett et al. 2013; Bode, Vraga, and Troller-Renfree 2017; Eliasoph 1996, 1998; Peacock 2019). Many of the people who enter into advocacy in the wake of intense historical events like the 2016 elections drop out; the question of who remains is vital to social theory and to the practice of inclusion and equity in democracy.

Social media and collective action

The capacity of groups that formed online to mobilize quickly and act collectively in recent years - reaching millions with their messages, shifting the tactics of national campaigns, and arguably altering the course of a presidential election - took many by surprise. The increasing importance of online platforms as a sphere of informal discourse raises a host of opportunities and challenges.

Bringing social media into the study of online social action dramatically expands our capacity to analyze informal discourse - think how unprecedented it would have been before the age of the internet to be able to capture even a fraction of the informal discourse of millions of people at once! - but it also creates methodological challenges in dealing responsibly with this new data. Discourse on social media offers a fascinating window into how people perform identities and draw boundaries between social groups, but it also raises the question of how this boundary work changes in online spaces.

There are good reasons to question the appropriateness of social media spaces as a place to study boundary work. Social media platforms draw a sample of the population that skews by race, gender, and education level - though the extent and direction of this skew varies widely by platform (Perrin and Anderson 2019). Further, while online communication can consume some people's lives, for most people it can only capture a relatively small amount of any individual's communicative action (Hampton 2017; Lazer and Radford 2017).


The flaws that make social media a complicated site for sociological research mirror the challenge of measuring any kind of informal discourse at large scales. The more authentic the setting, the less control one can exercise over exactly who is included and how much they're willing to express. The bottom line is that the study of symbolic boundaries online can only be useful to the extent that it captures "substantial moves in the game" (Evans and Aceves 2016:24) with real consequences for the collective understanding of the structure of social space. And there is reason to believe that substantial moves with serious implications indeed happen in informal discourse online.

Research on the real-world efficacy of collective action on social media platforms gained momentum in the aftermath of the political and cultural upheavals known as the Arab Spring, in which movements used a variety of social media platforms to share information, coordinate actions, and outmaneuver regimes rooted in slower and less digital forms of communication (Jamshidi 2013; Lotan et al. 2011). In 2011, the Occupy movement exploded across the United States, driving in-person activism with messages and tactics coordinated online (Gitlin 2013; Kreiss and Tufekci 2013).

Research since then has situated social media use in a broader range of factors powering movements like those of the Arab Spring. A study of protest networks in the aftermath of the Occupy movement found efforts to coordinate actions and share information sometimes effective but often disorganized, disconnected from specific goals and outcomes, and subject to communication breakdown (González-Bailón and Wang 2016). Linear models suggest that online political activism was indeed an important influence on individual decisions to participate in protest, but one that was heavily moderated by traditional mobilization factors like social networks, personal grievance, and resources for mobilization (Brym et al. 2014; Kim and Lim 2019).

Further studies specified that social media mobilization may have deeper impacts on collective behavior than individual decisions. For instance, a study of social media, mass media, and in-person action during the Arab Spring found that social media shaped political debates, preceded and influenced major events on the ground, and spread democratic ideas across borders that were harder to cross with in-person action (Howard et al. 2011). The most durable impact of the Occupy movement, which achieved few direct policy victories, may have been its lingering effect on the discourse and salience of class inequality, partly through an international effort to frame specific enemies - e.g., the financial sector and the "1%" (Calhoun 2013; Gaby and Caren 2016).

Social and political struggle on social media has only intensified in the past decade as movements, state actors, and individuals rapidly learn from each other's maneuvers globally (Tufekci 2018). Scholarly work on social media since this time has run a range from wild-eyed optimism in the revolutionary power of technology to cynicism at the way online movements have often been co-opted or seemed to trade off with in-person mobilization (Comunello and Anzera 2012; Kidd and McIntosh 2016).

Social media platforms including Reddit and Twitter were a pivotal arena of mobilization and struggle in the 2016 elections (Roozenbeek and Palau 2017), particularly in campaigns that posed themselves against the establishment and embraced a more bottom-up approach to social media outreach (Enli 2017). While the Clinton campaign engaged mostly on Facebook and Twitter, the Trump and Sanders campaigns engaged particularly deeply with Reddit as a tool for mobilization (Mills 2017).


Both Sanders and Trump performed "Ask Me Anything" (AMA) sessions on Reddit, embracing one of the platform's traditions in which a person promises to answer as many questions as humanly possible over a set span of time (Hara, Abbazio, and Perkins 2019). Each campaign had workers dedicated not only to pushing out social media messages, but also to moderating, commenting, and harvesting messages and ideas from Reddit along with other platforms (Lagorio-Chafkin 2018).

As an arena of political struggle in the 2016 elections, social media platforms were the site not only of rational deliberation but also of demagoguery and explicit manipulation. Some manipulation came from outside sources, as in the widely publicized campaigns sponsored by Russia and other groups that spread disinformation and stoked tensions on Reddit, Twitter, Facebook, and other platforms (Aral and Eckles 2019; Badawy, Ferrara, and Lerman 2018). Much of the manipulation, misinformation, and outright hatred came from inside the country.

How social media matters to social struggles

Beyond political campaigns, social media platforms like Reddit are spheres of discourse with real stakes, where minds change not just through argumentation (Musi 2018) but also and especially through emotionally-charged interactions (DiMaggio et al. 2018).

Movements - both formal and informal - use online platforms to directly attack each other and struggle for the hearts and minds of people who are not yet ideologically committed (Nagle 2017). Some users rise to the status of influencers, exercising symbolic power as others seek to emulate them or contest their power (Weeks, Ardèvol-Abreu, and Gil de Zúñiga 2017). Negative mobilizations have long-term effects including destroying forums (Kumar et al. 2018) and pushing users off the platform (Sobieraj 2018).

The particular affordances of social media platforms fundamentally shape the mobilization, conflict, and collaboration that can occur on each platform (Poell and van Dijck 2017). Online political forums are natural laboratories with often strikingly different cultures of deliberation; their divergent patterns of discourse can shed light on how people do work of social exclusion and inclusion along lines including race, class, gender, and political ideology (Ince, Rojas, and Davis 2017; Jost et al. 2018; Kruse, Norris, and Flinchum 2018).

Without access to conventional tools of identification and discrimination, users activate social categories in text, bringing to light dynamics of inclusion and exclusion that are notoriously hard to measure in traditional contexts of social action (Gray et al. 2017; Papacharissi 2010, 2015). As in other social spheres, formal blindness can result in the aggressive re-assertion of group identities (Bonilla-Silva 2014; Gray and Huang 2015).

Social media communication both provokes and channels emotions, as shown by case studies of international relations (Duncombe 2019) and online communities that organize around shared emotion (Manikonda et al. 2018; O’Neill 2018). Like emotions themselves, highly charged performances of boundary work online are contagious (Ferrara 2017) and often crucial in moving fringe cultural forms to the mainstream (Bail 2012). In many discursive communities online, boundary work can emerge as a form of cultural capital, with users earning status and recognition for cleverly insulting others (Nissenbaum and Shifman 2017).

The strength of social media in connecting geographically scattered individuals through shared boundary work has led online platforms to be tools of choice both for hate-based movements in which relatively privileged groups rail and plot against stigmatized others (Brown 2018) and for protective counterpublics where those same stigmatized others gather for support and community (Graham and Smith 2016; Jenzen 2017). Attacking marginal social groups can boost the popularity of comments and lead to their propagation over time - even against the efforts of moderators and the platform itself to contain their spread (Muddiman and Stroud 2017).

Boundary work online can also affect beliefs, attitudes, and health. Downframing online - including hate speech - is often more virulent for the anonymity of its speakers, the incredible speed with which comments can pile on, and the capacity for online speech to last for years (Brown 2018). Qualitative work suggests not only that racism is often "unmasked" in online spaces but that encountering racial antagonism online changes attitudes, challenging worldviews, relationships, and ways of thinking about race (Eschmann 2019).

Several studies show that encountering online boundary work can affect mental health, with particularly strong effects on youth (Stewart, Schuschke, and Tynes 2019; Tynes, Toro, and Lozada 2015) made even stronger when forms of boundary work intersect across multiple social categories like race, gender, and sexuality (Tynes and Mitchell 2014). The emerging consensus that online boundary work matters to life offline still must be matched by more detailed work on its precise effects (Bliuc et al. 2018).

The importance of social media platforms like Reddit to social life has sometimes been overstated. Yet in countries like the United States, where an overwhelming majority of people use the internet and social media on a daily basis, the boundaries between social action online and offline have increasingly blurred, to the extent that some scholars see research focused only on offline experience as fundamentally incomplete (Chayko 2019).

This study attempts to walk a middle path between inflating the importance of social media platforms and ignoring the importance of these social arenas, honoring both the intensity of some online experiences and the low-key casualness of others. The aim here is to reclaim digital forums as arenas of human action in the broader context of communicative action - and to understand how communicative action will evolve under the increasing influence of digital forums.

In other words, we know enough about how social media discourse matters for social identification, group formation, and political mobilization to take these patterns seriously. Even where it might be impossible to prove direct causal links between patterns of discourse on Reddit and patterns in life offline for all users, social media is a sphere of human communication like any other in this specific sense: generalization to other spheres is possible only when the specific context of communication is taken carefully into account (Davis and Love 2019).

Online discourse is full of slang, insults, sarcasm, and poor argumentation - but this should not convince us to abandon social media as a site of social research. Many of the flaws of discourse on social media appear in every form of informal discourse from phone calls to letters to everyday chats, not just in recent years but throughout history. If we want to understand social trends directly and not through the filter of elite discourse, we will have to take seriously the challenge of understanding sites of informal discourse like social media.

The site: Reddit and online informal discourse

The primary site of this study's research is the social media platform Reddit, the subject of a field of social research that has grown in rough proportion to its user base (Medvedev, Lambiotte, and Delvenne 2019) - which is to say quickly. With over 330 million active authors posting more than 900 million comments each year in over 80,000 forums, Reddit was for most of the period studied the 6th most active website in the United States. The first credible external estimate of Reddit's total user base put the share of Americans using the platform at 11%. Users skew heavily by age: 22% of Americans aged 18-29 use the platform, which also skews educated and male, with a more complicated racial skew toward white and Hispanic respondents (Perrin and Anderson 2019).

The experience of a casual user of the Reddit platform begins with a splash page called Popular (formerly known as r/all) that collects a list of posts selected by algorithm based on how users have interacted with them through clicks, comments, and upvotes (or downvotes). Each post presents an image, a link, or a block of text and is intended to start a conversation, or thread. In public threads, which are the majority, a user can click through to see the discussion even without an account, but adding your own post requires logging in.

Accounts are free and pseudonymous, requiring no background checks or other confirmation of identity. Upon creating one, a user is immediately prompted to subscribe to a number of forums - called subreddits - that might interest them. Some users continue browsing the Popular page and never choose forums, but most end up within a few months focusing their reading and most of all their comment activity on a relatively small number of forums that take on characteristics of communities with distinctive group identities (Lagorio-Chafkin 2018).

These forums function as public groups with formal boundaries - i.e., each subreddit has its own name, location, and set of norms enforced by its own team of moderators. Some of the forums on Reddit are loose agglomerations of users posting articles and other web links; many develop a base of loyal contributors who spend hours a day reading, posting, and replying. In these forums, signals of group formation multiply: new phrases and memes appear and establish resonance by generating upvotes, reposts, and replies; users recognize and reference each other, orienting their action in webs of mutual influence; demonyms begin to circulate inside and outside forums as a badge of collective identity (Mills 2017).

Figure. First engagement with the Reddit platform rapidly directs users to engage in comment threads, create an account, and ultimately select subreddits that match their interests. This is an example of the first three screens a user might see in the mobile version of the platform, which brings users quickly to comment threads.

In its early years, Reddit catered to a crowd of what one chronicler called "horny programmers", with its first two widely used subreddits, NSFW and programming, focusing on porn and tech, respectively (Olson 2013). The politics forum founded in 2007 was only the fourth major subreddit but soon played a large role in pulling new members to the site; for instance, in 2008, discussions on politics took up nearly half of the posts to subreddits, with a large spike in October and November as the campaign of Barack Obama came to a climax.

A fundamental shift in the character of discourse on the Reddit platform came in January 2008, when Reddit made it possible for users to create their own subreddits. When the feature was announced, there were 53 forums on the platform; since then, forums have proliferated and diversified, with the current number pushing 2 million active subreddits. Most are small, catering to specific tastes or groups of friends. Some are huge, like the advice and humor forum AskReddit, which during the time of this study featured over a million active authors and over 10 million active posts.

The affordances of the Reddit platform make it particularly interesting as a site of study for the circulation of discourse on social boundaries. Unlike Twitter, there are no limits on the character length of comments, allowing users to develop arguments and share evidence (Ovadia 2015). Reddit's algorithms select and elevate resonant comments - the ones that quickly gather replies and votes - bringing them to the top of feeds and creating cascades of attention and reaction. This intensifies natural processes of discursive evolution and makes subtle differences in the resonance of messages more visible and measurable.
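The vote-driven elevation of resonant comments can be made concrete. The sketch below reconstructs the lower bound of the Wilson score interval, the statistic widely reported to underlie Reddit's default "best" comment sort; this is an illustration based on public descriptions of the ranking, not Reddit's actual code.

```python
import math

def wilson_lower_bound(ups, total, z=1.96):
    """Lower bound of the Wilson score interval for an upvote ratio.
    A reconstruction of the logic behind Reddit's 'best' comment sort:
    comments are ranked by a pessimistic estimate of their true approval."""
    if total == 0:
        return 0.0
    phat = ups / total  # observed approval rate
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / (1 + z * z / total)
```

The lower bound penalizes small samples, so a comment with 90 of 100 upvotes outranks one with 9 of 10, even though their raw approval ratios are identical.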

Unlike Facebook, Instagram, and Twitter, Reddit has no limit to the depth of replies on comment threads, making it possible for users to reference each other's words in long chains of conversation, exercising cognitive authority in argument (White 2019), learning from each other (Del Valle et al. 2020), solving problems together (Buozis 2019), and changing each other's views (Musi 2018). Of course, the same affordances make it possible for conspiracies and misinformation not only to be challenged and debated but also to be developed and spread (Kou et al. 2017; Samory and Mitra 2018).

Perhaps most importantly for social research, Reddit emphasizes collective processes over individual accounts: unlike other platforms where individual pages are at the core of the user experience, user profiles are little more than a catalog of a user's posts. The real work of customization and identity construction on Reddit is collective, with forum moderators and dedicated users spending long hours discussing and tweaking the norms, rules, and appearance of their subreddits (Lagorio-Chafkin 2018; Manikonda et al. 2018).

Because forums rather than individual pages are at the center of the Reddit experience, users often express social identities by showing their mastery of the norms of their chosen subreddits (Mills 2017). This makes Reddit a strong site of community building both for those sheltering from boundary work and more explicit forms of violence (Dosono and Semaan 2019; O’Neill 2018) and for those who seek to legitimize, construct, and weaponize boundary work (Prakasam and Huxtable-Thomas 2020).


My personal journey as a Reddit user

I was not an experienced Reddit user before launching this study. To understand how the forums on Reddit related to each other in practice, I first joined a wide variety of subreddits and started to read and participate on the platform with the goal of understanding the perspective of as wide an array of users as possible. I began to frequent Redditsearch.io, a site that allows for rapidly pulling up all Reddit comments that match a particular term. Reading selections of comments from different time periods was a crucial tool in starting to think through how discourse might have shifted over the course of the 2016 election cycle.

I then started to study the platform quantitatively. One of the first steps I took was to produce a rough coding of categories and political leans to distinguish between groups of forums, and I tracked the number of authors, comments, and subreddits in each group. Even as the number of forums I visited and categorized grew, it became clear that the number of forums would far outpace any attempt to categorize them qualitatively.

The charts below, produced with the final categorization system used in this study, dramatize this aspect of the platform. The great majority of subreddits are extremely small, but the majority of conversation happens in relatively large forums. The 4,324 forums that featured 6K+ comments, 20+ authors, and 10+ average wordcount per comment - the large discursive communities that were the focus of this study - accounted for 90% of the posts on the platform but only 2% of the subreddits.
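The activity thresholds above can be expressed as a simple filter. The sketch below applies them to a table of per-forum summaries; the field names are illustrative stand-ins, not the study's actual variable names.

```python
def select_active_forums(forums):
    """Keep only the large discursive communities that were the focus
    of the study: 6,000+ comments, 20+ authors, and a mean comment
    length of 10+ words. Each forum is a dict of summary statistics
    (field names here are hypothetical)."""
    return [f for f in forums
            if f["comments"] >= 6000
            and f["authors"] >= 20
            and f["mean_words"] >= 10]
```

Applied to the full platform, a filter like this retains roughly 2% of subreddits while covering about 90% of all posts.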

To zero in on the forums that most took on the characteristics of discursive communities, this study focuses on active forums - but though they are all above a size threshold, these communities were still extremely diverse. Some feel like small towns, like PlaceNostalgia, where during the time of the study most of the roughly 100 authors seemed to know each other by username if not in their lives off of Reddit. The largest ones were more diffuse, but repeat users still found each other and expressed everything from joy to fury at reading each other's comments again.

Figure. Scores for core variables across qualitatively coded forum groups by quarter, Winter 2015-16 to Fall 2017. Two panels chart the % of posts by category and the % of subreddits by category for the forum groups advocacy, community, entertainment, and other.


The Dataset: The pushshift.io Reddit corpus

The primary dataset for the analysis in the chapters below is a two-year sample of over 1 billion Reddit comments from a year before to a year after the 2016 presidential election, made accessible on the Google BigQuery platform thanks to the work of Jason Baumgartner and pushshift.io (Baumgartner et al. 2020). Like most large datasets, this one is subject to a testable rate of errors: an analysis by Gaffney and Matias estimated that roughly 0.043% of comments and 0.65% of submissions were missing, and pushshift.io was taking steps to reduce this rate over time (Gaffney and Matias 2018). The dataset includes both primary posts - the ones that start discussion threads - and the threaded replies that follow them.

Three characteristics of this particular dataset make it particularly amenable to the analysis of cultural struggles in political discourse. First, the data records the threaded structure of each conversation, meaning that each post and each comment can be quickly analyzed in the context of a nested collection of replies. Second, every thread and every comment is attached to a number of "upvotes", essentially measuring the number of users who endorsed them. Finally, because it includes effectively the entire corpus of text sorted by forum, author, and moderator status, the Reddit dataset allows for longitudinal analysis of changes to communicative behavior at the author level and the group level.
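The threaded structure mentioned first can be recovered from flat comment records via their parent links. The sketch below is a minimal illustration; the field names ('id', 'parent_id') stand in for the corpus's actual link fields.

```python
from collections import defaultdict

def build_threads(comments):
    """Index a flat list of comment records into reply trees.
    Each comment is a dict with 'id' and 'parent_id' fields
    (illustrative names). Comments whose parent is not another
    comment are treated as top-level replies to a submission."""
    by_id = {c["id"]: c for c in comments}
    children = defaultdict(list)
    roots = []
    for c in comments:
        if c["parent_id"] in by_id:
            children[c["parent_id"]].append(c)
        else:
            roots.append(c)
    return roots, children

def thread_depth(comment, children):
    """Length of the longest reply chain starting at this comment."""
    kids = children.get(comment["id"], [])
    return 1 + max((thread_depth(k, children) for k in kids), default=0)
```

With the tree in hand, each comment can be analyzed in the context of its ancestors and descendants rather than in isolation.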

The common limitations of online social research still apply, but this study is designed to make the best of them, particularly the limited demographic base of users and the narrow range of communication captured in online discourse. The platform features even more discourse of identity, empathy, and strong group interactions than a platform like Twitter (Manikonda et al. 2018). With a more deliberative culture than many platforms and no character restrictions, comments are sometimes short retorts or echoes of other comments but commonly feature long, detailed responses typical of deliberation (Massanari and Chess 2018).

Indeed, the affordances of the Reddit platform force users to find ways of expressing identity in text, revealing dynamics that have proven extremely hard to trace for in-person action. As with most samples of informal discourse, authors on Reddit are not representative of the full population (Hampton 2017), but their efforts to build community and negotiate identity are both internally meaningful to the users themselves and increasingly influential to broader spheres of discourse (Lagorio-Chafkin 2018; Mills 2017; Roozenbeek and Palau 2017).

In each of the chapters that follow, my aim is to center the subjective experience of Reddit users engaging with other users on the platform. For instance, I make no effort to filter out fake accounts or manipulative messages as long as they appear on face to come from genuine users. This opens the opportunity to consider a range of different experiences and possible effects on users depending on the depth of their exposure and the resonance of the messages they engage on the platform.

Though the Reddit data fails to capture the full range of any person's communication (Evans and Aceves 2016), even this challenge of all online platforms could be seen as a strength in testing and developing theory. Online communication is unprecedented in the scale and detail of analysis it affords. Because it makes possible rapid analysis of indicators across millions of users at once, online data is perfectly suited to creating early warning systems and to testing theories that are difficult to assess offline, as a motivation for more detailed testing in other arenas. The results of cultural struggle and realignment found here are not intended to stand alone but to lay the foundation for testing similar patterns where communication is even more immersive and dense in a mixed-methods community of research and practice.

OVERVIEW OF CHAPTERS

Chapter 1: Intersectional Boundary Analysis

The first chapter establishes a baseline understanding of how boundaries intersected, grew, and realigned in the 2016 election cycle. It measures two kinds of realignment in the circulation of symbolic boundary work online: changes in the relative intensity and salience of boundary language and shifts in which boundaries align or cluster together in which discursive spaces. To measure alignment directly, I used a hierarchical clustering technique called clustering around latent variables, which is particularly well suited to revealing cultural structures where categories intersect and overlap (Giannella and Fischer 2016; Vaisey 2007).
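Clustering around latent variables is a specialized procedure; as a rough illustration of the underlying logic - grouping boundary types whose salience moves together over time - here is a generic average-linkage agglomerative clustering over correlation similarity, in plain Python with hypothetical data. It is a sketch of the family of methods, not the chapter's actual implementation.

```python
def correlation(a, b):
    """Pearson correlation of two equal-length salience series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def cluster(series, k):
    """Agglomerative clustering of boundary-salience series: repeatedly
    merge the two clusters whose member series are most correlated on
    average, until k clusters remain."""
    clusters = [[name] for name in series]

    def link(c1, c2):
        sims = [correlation(series[a], series[b]) for a in c1 for b in c2]
        return sum(sims) / len(sims)

    while len(clusters) > k:
        i, j = max(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: link(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters
```

Run on quarterly salience series, a procedure like this groups boundary systems that rise and fall together, which is the intuition behind the four-cluster structure reported below.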

This work allows an empirical test of what happened to symbolic boundaries, putting to the test the claims of pundits who elevated only race, gender, or another single boundary system in their stories of this tumultuous time. Lexicons representing two forms of boundary work measured the relative prevalence of language tied to race, class, gender, sexuality, immigrant status, religion, and political identity in every active forum on Reddit across the period starting a year before the 2016 election and ending a year after. The prevalence studies addressed the question of which forms of boundary work became more and less salient, and many forms of boundary work showed dramatic change over the two years studied.
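The lexicon-based prevalence measure can be sketched in a few lines. The toy lexicons below are invented for illustration; the study's lexicons were far larger and more carefully validated.

```python
import re
from collections import Counter

# Toy boundary lexicons (hypothetical; stand-ins for the study's own)
LEXICONS = {
    "class": {"worker", "elite", "rich", "poor"},
    "immigration": {"immigrant", "border", "citizen"},
}

def boundary_prevalence(comments):
    """Share of a forum's tokens that match each boundary lexicon."""
    tokens = [t for c in comments for t in re.findall(r"[a-z']+", c.lower())]
    total = len(tokens) or 1
    counts = Counter()
    for t in tokens:
        for name, lexicon in LEXICONS.items():
            if t in lexicon:
                counts[name] += 1
    return {name: counts[name] / total for name in LEXICONS}
```

Computed per forum per quarter, scores like these yield the salience series whose rises, falls, and co-movements the chapter analyzes.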

The question of realignment is more subtle and harder to measure. Boundaries may rise and fall in salience while their alignment remains stable. For instance, the way that race and immigrant status are linked to each other in the United States - the way their meanings depend on activating each other - may remain unchanged even as the salience of immigrant status explodes. Measuring the clustering of boundaries revealed a persistent structure of four clusters that remained remarkably consistent across the full two years, but showed two significant and durable realignments, one of which is explored in the next chapter.

Chapter 2: The Intersectional Complexity of Class Distinction

The second chapter digs deeper into class discourse, one of the two forms of boundary work found to have realigned in Chapter 1. In the process, it challenges two narratives centered on class and the 2016 election: the rise of populism and the white working class. Both these narratives, which circulated widely among scholars and journalists, relied heavily on understanding the motives and mobilizations of common people but rested on analysis of elite speech.

The analysis begins by extending the results of Chapter 1, digging deeper into where exactly class discourse circulated on the Reddit platform during the period of study. It then applies mixed-methods analysis to a stratified random sample of comments in two time periods to reveal the intersectional structure of boundary work in the class discourse of forums on the left, right, and middle.

Just as the study of elite discourse means little without careful attention to how and why that discourse influences a broader range of institutions and fields, analysis of quantitative metrics means little without qualitative study of how metrics come alive in real-world interaction. Counting the prevalence of distinction and affiliation in class discourse - toward elites and common people, but also across economic status, political identity, race, gender, and other forms of social difference - sheds light on broad shifts in the dynamics of distinction. Truly understanding these shifts requires close readings of how these forms of distinction operate in context, particularly when valued symbols of class identity like "worker" so often implicitly entangle with whiteness, maleness, and other forms of privilege.

The results show the importance of consulting the informal discourse of non-elites directly rather than assuming that narratives automatically translate into everyday conversation when they dominate the media or the discussions of politicians. They also complicate simplistic narratives like the rise of populism and the white working class, revealing class as a concept whose intersectional entanglements with other forms of social difference are both fundamental to its nature and hotly contested across a range of political ideologies and discursive fields.

Chapter 3: Emotion, Diversity, and Inclusion in Online Publics

The final chapter takes the study of inequality to the question of participation in advocacy - who persists and who drops out - expanding the debate on deliberative democracy and counterpublics by grounding it in patterns of attrition and engagement in public discourse. Its core aim is to shed light on how the style and structure of conversations shifted and how these shifts influenced who dropped out of advocacy while others engaged for the long term.

What happened to the wave of people who were inspired to advocate in the tumultuous times of 2016? Did they gather together in homogeneous echo chambers, battle each other with high emotion, or engage in rational deliberation? Who deepened their engagement with advocacy online and who was pushed to the side? To address these questions, the chapter uses summary analyses along with a series of logistic and multiple linear regression models to explore the relationship between modes of communication and political engagement.

The chapter traces the communications of the 22,211 users who newly entered advocacy forums on Reddit in the year following November 8, 2016 to show when users chose to engage with cross-cutting discourse, flock together in homogeneous forums, or opt out of advocacy. In the process, it reveals four forms of public distinguished by their levels of emotional intensity and ideological diversity - not just deliberative spheres and counterpublics but also Battlefields and Incubators - each with unique dynamics and each likely to play different roles in the future of mobilization and political inequality.
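As an illustration of the modeling approach (not the chapter's actual specification or data), here is a minimal logistic regression fit by gradient descent, predicting a binary outcome like persistence in advocacy from a single hypothetical predictor such as exposure to cross-cutting discourse.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic regression by batch gradient descent on log-loss.
    X is a list of feature vectors, y a list of 0/1 outcomes (e.g.
    dropped out vs. persisted). Returns weights; the final entry is
    the intercept."""
    n, d = len(X), len(X[0])
    w = [0.0] * (d + 1)
    for _ in range(epochs):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w[:-1], xi)) + w[-1])
            err = p - yi
            for j in range(d):
                grad[j] += err * xi[j]
            grad[-1] += err
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    """Predicted probability of the positive outcome (e.g. persisting)."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w[:-1], xi)) + w[-1])
```

In practice the chapter's models include many predictors and controls; the point of the sketch is only the form of the estimator linking modes of communication to engagement outcomes.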

PURPOSE

What changed and why it matters

The results and aims of this study travel far beyond the tumultuous elections of 2016. Because social media platforms offer rapid access to conversations at large scales, I argue that they should serve as canaries in the coal mine of democratic systems. It took years of work on the front end to develop the tools of theory and method that made it possible to measure shifts in intersectional boundary structure and the structure of publics online. Once these tools are developed, it is relatively easy to expand them to new contexts, for instance to track which boundaries are expanding or realigning and which people are dropping out of which publics.

Still, even if we can rapidly measure shifts in boundary work and engagement in informal discourse online, why should we care? Discourse and culture influence inequalities of power and mobilization both on the supply side through the set of affiliations and dispositions actualized and latent in the population and on the demand side through the structure of opportunities available for people to engage in advocacy. For instance, politicians have an entrepreneurial relationship to symbolic boundaries as to culture in general. They read cultural trends, but do not often simply mirror those trends: instead, they tend to exercise symbolic power by deploying resonant symbols that activate boundaries and emotions already present in the minds of their audience (Bourdieu 1991; Norton 2017).

When patterns of boundary work in everyday conversation shift, collective understandings of boundaries can shift with them, with serious consequences for material inequality and political power. Symbolic boundaries influence the construction of formal boundaries, creating unequal distribution of resources and social opportunities as people refer to internalized understandings of status and boundary as they decide to advocate or legislate, hire or fire, welcome or shun (Pachucki, Pendergrass, and Lamont 2007; Ridgeway 2014).

Patterns in informal discourse matter not only because conversations with peers are a powerful factor shaping opinions (Keating, Van Boven, and Judd 2016; Kertzer and Zeitzoff 2017) but also because they create opportunities for mobilization. Though the Trump campaign in particular deployed words and symbols that echoed the informal boundary work of ordinary Americans to activate strong emotions of resentment (Berezin 2017; Lamont, Park, and Ayala‐Hurtado 2017), it was not nearly the only one. Movements seeking social justice along lines of race (Carney 2016; Ince et al. 2017), gender, class, and others have exploded online, changing discourse and impacting policy.

On the other hand, the future of intersectionality as a philosophy of coalition and mobilization is somewhat in question. Movements centered on resonant symbols like Black Lives Matter, #MeToo, Occupy Wall Street, and the Dreamers are each deeply intersectional in their own way (Milkman 2017; Terriquez et al. 2018), but each focuses first and foremost on one line of social difference - #SayHerName is one of many notable exceptions that complicate but also validate this trend (Brown et al. 2017).

In practice, movements rarely use intersectionality in promoting their work (Elliott, Earl, and Maher 2017) and national campaigns struggle to build momentum around intersectional platforms (McCall and Orloff 2017). If intersectionality can ultimately serve as a framework for mobilizing broad coalitions among movements that understand their individual grievances in the context of broader patterns (Hancock 2013b), unlocking its potential will take hard work on both conceptual and practical fronts (Collins 2015).

Why read this work?

Sometimes the intensity of this moment in history can make it seem like direct action right now is the only work that matters. Immediate action is vitally important; in my humble opinion, it is everyone's duty - and especially the duty of scholars - to work alongside others in practice, engaging the question of what should be done by actually doing things. But the work we do to interpret and conceptualize this moment is also crucial. Like many empirical studies, this dissertation looks closely at a specific example in order to develop a broader view.

Years from now, it will be clearer whether the election of 2016 signaled a fundamental realignment of intersectional boundary structures or a temporary shift destined to fade away. Publics and counterpublics will come and go online and in person as new people mobilize and engage and others drop out of the conversation. When people say that politics is downstream from culture, what they really mean is that our struggles to interpret and express meaning in our time do indeed matter well down the road.

In tracing the cultural shifts under the surface in the 2016 election cycle, this dissertation develops tools of theory and method that bridge between scholarly communities that rarely talk but would be stronger if they did. The bridges are there because they are needed: complex situations demand complex perspectives.

For instance, the toolkit for intersectional boundary analysis developed in chapters 1 and 2 affords a way for cultural sociologists working on symbolic boundaries to bring the insights of intersectionality studies to the center of their work; it also extends the study of intersectionality with new tools of mixed-methods research. In the process, it cuts across the methodological divide, marrying big-data quantitative analysis to close readings that privilege context and honor subtle shades of meaning.

Similarly, analyzing dropout patterns in Reddit's advocacy forums in the final chapter grounds the often-abstract literature on public spheres in empirical research on what it feels like to advocate in diverse and emotional publics. Because the experience of advocacy in these forums sits between social movement mobilization, political participation, and subjective feelings, the practical setting demands testing the political philosophy of public spheres in the light of insights from political scientists, psychologists, and sociologists in combination.

The goal of this bridging work is to develop tools of theory and method that are genuinely useful to the communities that currently work on each side of the bridge. In the process I aim to support a community of research and practice where bridges are the norm, where learning from each other is a key tactic toward the ultimate goals: improving our collective capacity to interpret the world and meaningfully change it for the better. I hope you find tools that will serve you well among the concepts and methods developed in the chapters that follow.


Chapter 1

Intersectional Boundary Analysis: Cultural Realignments in the 2016 Election Cycle

ABSTRACT

Stories of political realignment circulated widely in the aftermath of the 2016 US presidential election cycle, building on a sense that something fundamental might have shifted in the way Americans think and talk. A wave of studies since then has advanced competing stories about whether and how social categories including race, class, gender, sexuality, religion, nationality, and political identity may have shifted or realigned in this time. This study draws tools from intersectionality studies and cultural sociology to trace cultural realignment in everyday conversation on the social media platform Reddit, using an analysis of over 1.7 billion comments to reveal how boundaries intersect and cluster differently across context and time. Measuring the presence, centrality, and clustering of lexicons tied to two forms of boundary work - boundary activation and downframing - reveals shifts in the relative salience of multiple forms of boundary work that persist through the end of 2017. Clustering around latent variables reveals four durable clusters among forms of boundary work that experienced small but meaningful shifts in the 2016 election cycle. This primarily quantitative analysis lays a foundation for further mixed-methods work that makes visible how symbolic boundaries realign at scale and how cultural realignments shape the landscape of possible action in formal politics and everyday life.

INTRODUCTION

Many narratives that circulated in the aftermath of the 2016 election cycle, and particularly the election of Donald Trump to the presidency of the United States, pointed in the direction of a critical election in which something important changed in the way that Americans talk and think about social difference. Scholars and journalists focused attention on a wide range of social categories - race, class, and gender are just the beginning of a long list - to weave narratives on how these categories might have shifted or realigned.

This study combines perspectives and tools of intersectionality studies and cultural sociology to better understand how realignment works by considering relationships among seven forms of social difference in the informal discourse of the social media platform Reddit: race, class, gender, sexuality, religion, nationality, and political identity. Though intersectionality and culture are both associated with rich traditions of qualitative work, this study links them to a large quantitative dataset, venturing into the underexplored terrain of quantitative cultural analysis.

Measuring shifts in how social categories cluster, align, and cut against each other in everyday conversation - here called intersectional boundary analysis - is both substantively important and increasingly feasible. Intersectional boundary analysis takes as its central target of analysis a set of symbolic boundaries, lines by which humans divide between social groups that are collectively constructed through individual communicative acts known as boundary work (Lamont 2012; Pachucki et al. 2007).


Advances in computational content analysis make it possible to measure cultural shifts with ever greater precision while still inviting and attending to the richness of qualitative semantic study (Bail 2014b; Lazer and Radford 2017). These shifts shape the landscape of possible mobilizations: past qualitative work shows politicians as cultural entrepreneurs, leveraging and shaping currents of belief and value that pre-exist and outlast their campaigns (Berezin 2017; Lamont et al. 2017; Norton 2017).

This study develops a toolkit for intersectional boundary analysis that leverages computational cultural sociology to measure shifts in how symbolic boundaries circulate and intersect in speech. The analysis focuses on a large dataset of comments from the social media platform Reddit, a sphere of informal online conversation with a tradition of discursive struggle and strong ties to electoral politics. The goal is to measure two kinds of realignment in the circulation of symbolic boundary work online: changes in the relative intensity and salience of boundary language and shifts in which boundaries cluster together in which discursive spaces.
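Though the chapter's full measurement pipeline is far more elaborate, the two quantities named above - the salience of boundary language and the co-occurrence structure underlying clustering - can be sketched in a few lines. The lexicons and comments below are purely illustrative placeholders, not the dissertation's actual validated lexicons:

```python
from collections import Counter
from itertools import combinations

# Hypothetical, illustrative lexicons -- the study's actual lexicons are
# far larger and grounded in qualitative reading.
LEXICONS = {
    "gender": {"women", "men", "sexist"},
    "class": {"working", "elites", "taxpayers"},
}

def boundary_hits(comment):
    """Return the set of boundary categories whose lexicon terms appear."""
    tokens = set(comment.lower().split())
    return {cat for cat, terms in LEXICONS.items() if tokens & terms}

def salience_and_cooccurrence(comments_by_month):
    """Per-month salience (share of comments with at least one hit per
    category) and global co-occurrence counts between category pairs."""
    salience = {}
    cooccur = Counter()
    for month, comments in comments_by_month.items():
        counts = Counter()
        for c in comments:
            hits = boundary_hits(c)
            counts.update(hits)
            cooccur.update(combinations(sorted(hits), 2))
        n = len(comments)
        salience[month] = {cat: counts[cat] / n for cat in LEXICONS}
    return salience, cooccur

# Toy usage with two invented comments
data = {"2016-10": ["the working women organized",
                    "elites ignore taxpayers"]}
sal, co = salience_and_cooccurrence(data)
```

The co-occurrence counts could then feed a standard clustering or dimensionality-reduction step to detect shifts in which boundaries travel together across forums and over time.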

Democratic systems depend on the interaction between professional political field and the beliefs, values, and behaviors of ordinary people. By connecting the analysis of political realignment to the study of intersecting symbolic boundaries in everyday speech, the study helps to clarify which of the many divergent predictions of change after 2016 are anchored in discursive shifts and thus most likely to hold true. By locating research in an online space, it supports a program of collective research on new political and social possibilities emerging from new forms of communicative action, exploring how persistent social divisions like race, class, and gender will evolve and reassert themselves as digitally mediated communication continues to expand.

UNSETTLED TIMES AND LASTING CHANGE

In political science, the question of realignment has often been framed as an investigation of shifts in the ideology of political parties. Political scientists often label elections including those of 1860, 1896, 1932, and 1968 as critical elections in which the dominant structure, ideology, and coalition of major political parties changed in a fundamental way (Schofield, Miller, and Martin 2003). For instance, the 1968 election of Richard Nixon marked a realignment as conflicts over social policy, race, and foreign wars produced a durable change in alliances and voter affiliations, most notably in the suburbs and the American south but across the country as well (Gould 2010).

Realignments stand out from the normal practice of politics in election years both in the magnitude and the durability of the changes they cause in a political system. Normal elections produce changes and spikes of interest that can seem crucial in the moment but do little to alter paradigms or the structure of the political field; differentiating between normal and critical elections has been a central concern in political science (Brunell, Grofman, and Merrill 2012). Following on this work, much of the literature on what happened in the 2016 election has focused on whether it produced internal realignments in the political field, particularly within the Republican Party (Galderisi et al. 2019; Johnston et al. 2017).

This study draws tools from intersectionality studies and cultural sociology to expand the dimensions of realignment beyond the political field to a broader frame of social categories. The questions that animate this research are the questions of what I will here call intersectional boundary structure: whether and how the salience, clustering, or function of social categories changed in everyday discourse. This includes traditional categories of race, class, and gender but expands to a broader range based on the categories salient to this particular field of discourse.

Crucially, the durability of change in a critical election depends directly on the extent to which it creates changes in the way that people talk, think, and act - essentially cultural changes to the norms by which people navigate social life. In the terms of Ann Swidler, a critical election is an "unsettled time" in which ideologies "establish new styles or strategies of action. When people are learning new ways of organizing individual and collective action, practicing unfamiliar habits until they become familiar, then doctrine, symbol, and ritual directly shape action" (Swidler 1986:278). Changes in discourse matter even more in critical elections because of their capacity to establish new norms that persist for years.

This study traces informal discourse on the social media platform Reddit from a year before to a year after election day in 2016 to attempt to adjudicate between competing hypotheses about how the election impacted the way we talk and think about social categories including gender, race, class, and more. In these social categories, some see a realignment and others see temporary change or the continuation of trends decades in the making. Which is right?

A common result in studies that view the 2016 election as critical is a story of emboldening across the board - that antagonism increased across a wide range of social categories, particularly toward groups targeted by the Trump campaign (Crandall, Miller, and White 2018). Many highlight the emergence of extremists, whose mobilization often started online and spread to other arenas (Heikkilä 2017; Nagle 2017; Nithyanand, Schaffner, and Gill 2017), as antagonism became less subtle and more direct (Iyengar et al. 2019; Valentino, Neuner, and Vandenbroek 2018).

Others see this election cycle as a continuation of long-term trends, describing not so much a fundamental change in discourse as a set of temporary fluctuations that leave the overall structure intact. Noting the persistence of racism and white supremacy in the American public, many have cast doubt on whether the election changed anything fundamental about race (e.g., Bobo 2017), and scholars of gender point out that the first female candidate at the top of a major party's presidential ticket produced little change in gender attitudes and voting behavior, particularly among white women (Campi and Junn 2019; Cassese and Barnes 2019).

Ultimately the picture of social difference in discourse during the 2016 election cycle is likely too complicated to be fully explained by stories of either emboldening or continuity. Studies of extremism have found that despite the new attention gained on social media by groups like the upstart coalition of right-wing movements that came to be known as the alt-right (Daniels 2018), the overall demand for extremist politics in the American public changed little over this time (Bonikowski 2017).

Further, some voting studies have found only small differences in attitudes and behavior during or after the election, suggesting that the election might have been less a turning point than an extension of long-term trends (Johnston et al. 2017), even among groups like the white working class that were at the center of realignment narratives in popular media (Morgan and Lee 2017b). One study even found that racial prejudice declined among whites after the election, raising questions that cut to the core of the emboldening thesis (Hopkins and Washington 2019).


Informal discourse and cultural change

To understand whether and how discourse on social categories changed over the 2016 election cycle, this study eschews common data sources focused on elites, like political speeches or media narratives. Instead, it directly targets informal discourse to understand how macro events translate into temporary or durable changes in everyday speech. This raises crucial questions about how informal discourse like that in the forums of Reddit works - and why it matters both to formal politics and everyday life.

Despite the widely held belief that the discourse and opinions of ordinary people should matter in democracy, exactly how they matter both to the political field and to the everyday experience of inequality has been a subject of intense debate. People activate categories of social division not only in dramatic conflict but in everyday speech and gesture with substantial impact on individual lives (Jean, Feagin, and Feagin 2015; Sue 2010). Still, at the national scale, the formal political power of ordinary people is extremely limited, and the opinions of everyday citizens are often disconnected from those of the elites that supposedly represent them (Fiorina and Abrams 2008; Gilens and Page 2014).

Further, most measures of how the public thinks and feels rely on surveys with problematic assumptions about sampling and reliability that introduce significant distortions, for instance undervaluing the voices of those who don't feel entitled to have an opinion at all (Laurison 2015). Rather than functioning as impartial indicators of attitudes and values, surveys often deeply shape the opinions they purport to measure (Perrin and McFarland 2011). Important critiques like these flourished in the early years of the 21st century and contributed to a decline in the use and study of public opinion in sociology, particularly at large scales (Manza and Brooks 2012).

The 2016 election cycle brought the culture and beliefs of ordinary people back into the spotlight, revealing a gap in the sociological literature and creating the opportunity to advance understanding through cultural study. Seeking to understand the basis of change, many scholars have turned to concepts like status threat (Craig and Richeson 2014) and "deep stories" (Hochschild 2018) that draw on intersectional understandings of inequality and place the factors driving political realignment well within the realm of the cultural (Major, Blodorn, and Blascovich 2018; Mutz 2018b).

Understanding the material and political consequences of informal discourse as the product of recursive relationships can help to resolve the debate over sensemaking, reclaiming the relationship between news events and cultural trends as a subject of empirical study (Fiss and Hirsch 2005; Maitlis and Christianson 2014). Patterns in informal discourse matter not only because conversations with peers are a powerful factor shaping opinions (Keating et al. 2016; Kertzer and Zeitzoff 2017) but also because they create opportunities for mobilization.

For instance, politicians have an entrepreneurial relationship to patterns of culture in the electorate, leveraging and shaping currents of belief and value that pre-exist and outlast their campaigns. They read cultural trends, but do not often simply mirror those trends: instead, they tend to exercise symbolic power by deploying resonant symbols that activate attitudes and emotions already present in the minds of their audience (Bourdieu 1991; Norton 2017). Trump in particular deployed words and symbols that echoed and repackaged the charged language that ordinary Americans used in informal conversation to activate strong emotions of resentment (Berezin 2017; Bonikowski 2017; Lamont et al. 2017).


Online forums as a site of informal discourse

The emergence of computer-assisted content analysis has opened new opportunities to measure changes in discourse among millions of people at once (Bail 2014b). The connection between online platforms like Reddit and the political sphere increased dramatically in the years before the 2016 election (Kruse et al. 2018; Manikonda et al. 2018), making the study of discourse on social media not only scalable but also substantively important. The emergence of online communication as a political force and a target of research raises important questions of how discourse in online spaces is different from - and connected to - other forms of communicative action.
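At Reddit scale, such measurement typically means streaming a newline-delimited JSON dump of comments rather than loading it into memory. The sketch below assumes records shaped like the publicly archived Pushshift comment dumps (fields `body`, `subreddit`, `created_utc`); the field names and the sample record are assumptions for illustration, not a description of the dissertation's actual pipeline:

```python
import json
from collections import Counter
from datetime import datetime, timezone

def count_terms_by_month(lines, terms):
    """Count comments containing any boundary term, per (month, subreddit).

    `lines` is an iterable of newline-delimited JSON records in a
    Pushshift-style comment format (body, subreddit, created_utc).
    """
    counts = Counter()
    terms = {t.lower() for t in terms}
    for line in lines:
        rec = json.loads(line)
        month = datetime.fromtimestamp(
            int(rec["created_utc"]), tz=timezone.utc).strftime("%Y-%m")
        tokens = set(rec["body"].lower().split())
        if tokens & terms:
            counts[(month, rec["subreddit"])] += 1
    return counts

# A single toy record standing in for a multi-terabyte archive
sample = [json.dumps({"body": "Ordinary taxpayers deserve better",
                      "subreddit": "politics",
                      "created_utc": 1478563200})]
hits = count_terms_by_month(sample, {"taxpayers"})
```

Because each record is processed independently, the same loop parallelizes naturally across monthly dump files, which is what makes billion-comment corpora tractable.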

The barriers to effective social research in spaces like Reddit are well documented. First, like most sites of informal discourse outside of experimental settings, they represent a skewed cross-section of the broader public: the first credible external estimate of Reddit's user base put the share of Americans using the platform at 11%. Users skew heavily by age - 22% of Americans aged 18-29 use the platform - and the user base also skews educated and male, with a more complicated racial skew toward white and Hispanic users (Perrin and Anderson 2019).

Further - and also like most arenas of informal discourse - online platforms capture a relatively small amount of any individual's communicative action for all but the most dedicated users (Hampton 2017; Lazer and Radford 2017). Indeed, users often speak differently online than they do in real life, even on Facebook and Twitter and especially in spaces like Reddit where users often operate under pseudonyms or use multiple accounts (Shelton, Lo, and Nardi 2015). The study of cultural change online can only be useful to the extent that it captures "substantial moves in the game" with real consequences for the collective understanding of the structure of social space (Evans and Aceves 2016:24).

Still, there is reason to believe that substantial moves with serious implications indeed happen in informal discourse online. Social media platforms including Reddit and Twitter were a flashpoint of the 2016 elections (Lagorio-Chafkin 2018; Roozenbeek and Palau 2017), particularly in the Trump and Sanders campaigns which embraced a more bottom-up approach to social media outreach (Enli 2017). Movements - both formal and informal - use online platforms to directly attack each other and struggle for the hearts and minds of people who are not yet ideologically committed (Nagle 2017).

Social media platforms like Reddit are spheres of discourse with real stakes, where minds change not just through argumentation (Musi 2018) but also and especially through emotionally-charged interactions (DiMaggio et al. 2018). Some users rise to become high-status influencers, exercising symbolic power as others seek to emulate them or contest their power (Weeks et al. 2017). Negative mobilizations have long-term effects, including destroying forums (Kumar et al. 2018) and pushing users off the platform (Sobieraj 2018). Like emotions themselves, highly charged communication online is contagious (Ferrara 2017) and often crucial in moving fringe cultural forms to the mainstream (Bail 2012). In many discursive communities online, antagonism along lines of race, gender, and other social categories can emerge as a form of cultural capital, with users earning status and recognition for cleverly insulting others (Nissenbaum and Shifman 2017). Attacking marginal social groups can boost the popularity of comments and lead to their propagation over time - even against the efforts of moderators and the platform itself to contain that spread (Muddiman and Stroud 2017).


Many social media spaces are anonymous or pseudonymous, creating opportunities for users to create new identities unmoored from their physical phenotype (Humphrey 2017; Shelton et al. 2015). Though some might expect that social differences would fade away without access to visible markers of difference, they tend to loom large on social media platforms as users seek to distinguish themselves and signal an ever-wider range of identities (Gray et al. 2017; Papacharissi 2015). As in other social spheres, formal blindness can result in the aggressive re-assertion of group identities (Bonilla-Silva 2014; Gray and Huang 2015).

Group affiliations, distinctions, and open conflicts formed online have repeatedly shown a capacity to influence cultural and political struggles at large scale, blurring the lines between informal discourse online and offline across nations and contexts (e.g., Hatakka 2016; Fuchs 2016). Antagonism online - including hate speech - is often more virulent for the anonymity of its speakers, the incredible speed with which comments can pile on, and the capacity for online speech to last for years (Brown 2018). Qualitative work suggests not only that racism is often "unmasked" in online spaces but that encountering racial antagonism online changes attitudes, challenging worldviews, relationships, and ways of thinking about race (Eschmann 2019).

Social media strongly predicts public opinion and is ever more tightly connected to contests over formal political power (Farrell 2012; Oliveira, Bermejo, and Santos 2017). Several studies show that encountering online boundary work can affect mental health, with particularly strong effects on youth (Stewart et al. 2019; Tynes et al. 2015) made even stronger when forms of boundary work intersect across multiple social categories like race, gender, and sexuality (Tynes and Mitchell 2014).

Like any form of informal discourse, conversations on social media platforms like Reddit are full of slang, insults, and slippages, but there is reason to believe that they represent a meaningful arena of social identification, group formation, and political mobilization (Chayko 2019; Davis and Love 2019). Still, if the realignment of social categories online matters to the broader question of intersectional inequality, the question remains of how to measure and interpret realignment in the context of broader struggles over how these categories are understood and mobilized.

INTERSECTIONAL BOUNDARY ANALYSIS

Differences of race, class, gender, religion, sexuality, political identity, and nationality among many others were all powerfully salient in the tumultuous times of the 2016 election cycle (Mast and Alexander 2018; Silva 2019). But statements like this leave important questions unanswered: Which changes were most intense and enduring during the 2016 election cycle? How do changes in different social categories relate to each other? Which changes will actually matter, not only to the fates of political parties but also to the landscape of political possibility and to the lives of ordinary people?

Studies that address these questions fall into three broad groups. Some research describes a pattern of emboldening across the board, with conflict and salience rising in nearly all categories in the 2016 election cycle. Other analysis zooms in on specific groups or social categories, using issues of immigration or religion or gender as a lens on broader social change. A third group of studies predicts actual realignments of social categories, new clusters or shifts in the ways that boundary systems like race, gender, and religion connect and cohere in discourse.


Much of this work relies on surveys and voting behavior as a measure of attitudes, and those that focus on discourse generally cover the conversations of only a narrow range of people. Because of the newness of the topic matter and the difficulty of longitudinal research, many of these studies focus on the election and its immediate aftermath, making it all the more important to extend the analysis to determine not only how ways of talking and thinking about social difference might have changed but also to distinguish between temporary fluctuation and the emergence of a new normal.

How social categories intersected

Most serious stories about what happened in the 2016 election cycle and its aftermath are in some way intersectional: few studies consider any form of social difference in isolation. Still, each narrative is intersectional in a different way, considering its own group of relevant boundaries and making its own claims about how they clustered. This study zooms out to view a larger set of boundaries in order to provoke conversations about how narratives relate to each other, bridging across sets of studies rarely considered together.

One key question is the centrality of each form of boundary work to the overall discussion of social categories on the platform - for instance, how prevalent political identities were in discussions of race, sexuality, and other forms of difference. Symbolic boundaries gain their intensity through links to identities long assumed to be more powerful than any political affiliation. Which party or political ideology a person prefers has always been important for some, and recent research has placed political identity as a signal of group identity comparable in power to race, class, and gender (Iyengar et al. 2019; Iyengar and Westwood 2015).
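Centrality in this sense can be approximated as weighted degree on a co-occurrence graph of boundary categories: a category is central to the overall discussion when it frequently appears alongside others. The sketch below uses hypothetical co-occurrence counts purely for illustration:

```python
from collections import Counter

def cooccurrence_centrality(cooccur):
    """Normalized weighted-degree centrality for each boundary category,
    given co-occurrence counts between category pairs."""
    degree = Counter()
    for (a, b), weight in cooccur.items():
        degree[a] += weight
        degree[b] += weight
    total = sum(degree.values())
    return {cat: w / total for cat, w in degree.items()}

# Hypothetical counts of how often category pairs appear together
toy = {("politics", "race"): 30,
       ("politics", "gender"): 20,
       ("race", "gender"): 10}
central = cooccurrence_centrality(toy)
```

More sophisticated measures (eigenvector centrality, for example) weight a category's neighbors by their own centrality, but weighted degree already captures the intuition of one category anchoring discussions of many others.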

Deeper analysis raises more nuanced questions about how boundaries rose, fell, and clustered. Below I consider three collections of studies about how social categories connected and evolved: a Gender-centered Coalition cluster focused on how gender connected to political identity and race; a cluster of Status Threat studies focused on the nexus of race, class, and immigration; and a cluster about a New Nationalism centered on the intersection of religion and sexuality with borders, immigration, and American identity. In each broad narrative I collect diverse studies and attempt to show how they weave together into a coherent set of predictions.

Gender-centered Coalition. Studies focused on gender and the 2016 elections are inherently intersectional in that they consider political identity and gender together. The first major female nominee for the presidency provoked not only inspirational discourse on gender but also counternarratives and backlash.

Sexism powerfully predicted vote choice even after controlling for partisan predispositions - and emotional valence mattered, with the strongest effects among those who reported anger (Valentino, Wayne, and Oceno 2018) and hostility (Cassese and Holman 2019). One study found that sexism increased after the election, but only among Trump supporters (Georgeac, Rattan, and Effron 2019).

After the election, several studies brought other forms of difference into the conversation, for instance drawing on the history of racial polarization and party affiliation to explain why many white women didn't vote for Clinton (Campi and Junn 2019; Cassese and Barnes 2019). But the studies that give the clearest predictions on how conversations about gender might have realigned after the election often focus on the Women's Marches that drew millions into the streets on January 21, 2017 along with the intersectional coalitions that formed around these movements.


Some stories place the Women's Marches at the center of an intersectional coalition with connections to a broad range of symbolic boundaries, capturing national attention and building it through and past the 2018 elections (Fisher 2019). Indeed, these connections were key to the initial formation of the marches (Fisher, Dow, and Ray 2017) as well as their longer-term impacts (Gantt-Shafer, Wallis, and Miles 2019). On the other hand, the marches faced problems of inclusion and splintering from the start (Falola and Ohueri 2017), with particular conflicts pitting gender policy priorities against priorities including race, class, and immigrant status (Brewer and Dundes 2018; McCall and Orloff 2017).

This cluster of studies poses a set of vital questions for the current study, including whether gender rose in prevalence over the course of the election - and whether it became more or less intertwined with other social categories, both on the right and on the left. Intersectional boundary analysis can help to measure how rhetorical trends like the conflict among intersectional coalition groups on the left or the emergence of emotionally intense intersectional narratives on the right (McCall and Orloff 2017; Tuğal 2017) circulated in everyday informal discourse.

Status Threat. Many of the studies following on the 2016 election focused on the role of class versus race, immigration, and other boundary work in motivating Trump voters. A study of panel survey data by Diana Mutz drew headlines, putting forward the claim that status threat - particularly some white people's concerns over race and immigration - was more important than economic concerns in deciding votes (Mutz 2018b). This study was backed up by other survey data (Hooghe and Dassonneville 2018; Reny, Collingwood, and Valenzuela 2019) and experiments that focused on white people's fear of being displaced (Major et al. 2018).

Though much of the early literature on the role of status threat in the election cycle focused on race and immigration, the role of class became a center of debate. Re-analysis of the same panel dataset used in the Mutz study raised a useful debate over whether anxieties about immigration and globalization count as "economic interests" (Morgan 2018; Mutz 2018a). Some found class-based antagonism rooted in strong feelings against cultural elites (Cramer 2016; Gidron and Hall 2017; Hochschild 2018) or economic concerns driving feelings about immigration and globalization (Swedberg 2018). Others viewed class as subordinate to race, for instance arguing that "economic insecurity was connected to partisan choices when it was refracted through racial grievances" (Sides, Tesler, and Vavreck 2019:156).

More complex pictures emerge from further studies. Studies of rust-belt states show that both black and white working-class voters revolted against the policy consensus they saw as contributing to the economic decline of their region (Green and McElwee 2019; McQuarrie 2017). Some pointed out that status threat is deeply entangled with gender and sexuality (Schaffner, Macwilliams, and Nteta 2018; Strolovitch, Wong, and Proctor 2017), while others argued that to the extent class mattered, it was itself a nexus of concerns across a cluster of symbolic boundaries in which phrases like "hard working taxpayers" or "ordinary folks" engage economic insecurity, racism, sexism, and Islamophobia to gain meaning and resonance in everyday conversation (Pied 2019).

At issue in these studies is a question about the clustering of symbolic boundaries in informal discourse: if the feeling of status threat among relatively white and privileged voters was triggered by a nexus of boundary work anchored by rhetoric of race, class, and borders, which of these categories were most central and how did they relate to others?


New Nationalism. A final cluster of studies targeted conversations on national identity related to the question of what it means to be an American. Studies in this vein often directly target discourse, for instance counterposing the "Stronger Together" narrative of American diversity advanced by the Clinton campaign against the web of discourse surrounding phrases like "Make America Great Again" put forward by Trump (Wingard 2017).

This cluster is deeply rooted in the boundary system of borders: both immigration status and nationality burn bright in these conversations, anchored by an overall rise in anti-immigrant boundary work (Demata 2017) and by events spurred by the Trump campaign and administration, including proposed travel bans and calls to "build the wall" (Alamillo, Haynes, and Madrid 2019). Specific studies link the emergence of this discourse to evolution of classic narratives of American exceptionalism and national greatness counterposed to globalization and "open borders" (Edwards 2018; Federico and de Zavala 2018).

How this conversation connects to other boundary systems is a matter of ongoing debate. Some point out the racialization of anti-immigrant language (Heidt 2018; Heuman and González 2018) while others emphasize its connection to narratives of job loss from immigration and international competition (Finley and Esposito 2020). Religious differences circulated in the aftermath of Trump's call for a "total and complete shutdown of Muslims entering the United States" and in a renewed discussion of whether America is and should be a Christian country (Delehanty, Edgell, and Stewart 2019; Whitehead, Perry, and Baker 2018).

Scholars of rhetoric have linked this wave of new conversations to a shift in the relationship between national identity and sexuality. Trump's tweeted promise to LGBTQ communities that he would "fight for you while Hillary brings in more people that will threaten your freedoms and beliefs" echoes homonationalism, described by Jasbir K. Puar as a "discursive tactic that disaggregates US national gays and queers from racial and sexual others, foregrounding a collusion between homosexuality and American nationalism" (Puar 2018:38).

This tactic seemed particularly prevalent in the rhetoric of the alt-right, for instance in the appropriation of the TwinksforTrump hashtag on social media to open a lane of support for Trump among LGBTQ+ communities (Hatfield 2018). Dignam and Rohlinger (2019) show how alt-right Reddit communities centered in a forum called theredpill mobilized sexism in the months before the election to drive voters toward the Trump campaign. The alt-right seemed to emerge in this time as a center of new combinations of rhetoric on race, sexuality, gender, class, and pop culture (Hagen 2017; Mirrlees 2018).

On the other hand, homonationalist rhetoric may mask a resurgence of traditional antagonism against LGBTQ+ identities. Studies describe a wave of threats and attacks (Drabble et al. 2019) coupled with a rollback of many programs and legal protections gained in the Obama era (Moreau 2018). These setbacks are balanced against a resurgence of inclusive rhetoric and even political candidates (Haider-Markel et al. 2019), including newly popularized identities such as pansexual, non-binary, and queer (Worthen 2019).

If shifts did occur in how Americans discussed and defined their national identity, these studies challenge us to explore the specific configuration of symbolic boundaries involved. How prevalent were sexuality, religion, and other forms of difference in discussions of national identity on Reddit, and how did their prevalence and clustering shift in the aftermath of the election? To gather tools to explore these questions, I now turn to the linked traditions of intersectionality studies and cultural sociology.

How symbolic boundaries shift and cluster: Intersectionality with cultural sociology

The literature emerging to explain the cultural and political shifts surrounding the 2016 election draws from deep traditions of research on the stakes of popular culture and informal discourse. As conservatives in the mold of Andrew Breitbart propagated the idea that "politics is downstream from culture", they unwittingly echoed cultural scholars like Stuart Hall who showed how the popular - the everyday practices of ordinary people - functions as an arena of consent and resistance "where hegemony arises, and where it is secured" (Hall 1981).

For Pierre Bourdieu, each communicative act is an intervention in the specific fields of struggle relevant to its moment (Bourdieu 1991); the outcomes of struggles small and large across multiple fields and social situations aggregate to shape broader struggles in the macro-level field of power (Bourdieu 1998). Bourdieu centers the stakes of discourse on the struggle over which categories will be used to describe, interpret, and ultimately act on the world. This struggle applies to conversations small and large, with both formal and informal communication part of a "theoretical and practical struggle for the power to conserve or transform the social world by conserving or transforming the categories through which it is perceived" (Bourdieu 1985:729).

For the purposes of this study, each social category represents a system of symbolic boundaries: lines in social space that emerge from people's collective attempts to define and interpret relationships between social groups (Lamont and Molnár 2002; Riesch 2010). Like other cultural material, symbolic boundaries often exist in a latent state as mental schemas in the minds of individuals but become directly measurable when those individuals communicate and act (Sewell 2005). Symbolic boundaries are the product of top-down and bottom-up processes, influenced on one hand by large institutions and events but emerging on the other hand from everyday interaction.

The concept of symbolic boundaries anchors studies in the spirit of Hall, Bourdieu, and others that elucidate how social categories are activated, reified, and contested in communication - and how these categories matter to broader struggles over material inequality and political power. Boundary systems including race, gender, sexuality, class, and others are sites of independent struggle. They are also deeply interwoven - and the theory and study of how boundary systems interweave is crucial to understanding any realignments that occur during unsettled times like the election cycle of 2016.

Tracing realignment in symbolic boundary work forces the consideration of how boundaries intersect to shape landscapes of discourse, best understood as part of the field of intersectionality studies. The field of intersectionality studies is internally diverse, building from a simple shared insight - that social categories are better understood together than alone - to a broad range of meanings and methods driving intersectional research and action (Collins 2015). It is both theoretically rich and practically engaged, having emerged as a framework of legal action enabling oppressed groups (Crenshaw 1991) and evolving to facilitate coalitions that bring together struggles across differences (Milkman 2017; Terriquez et al. 2018).

The internal diversity of intersectionality studies creates an opportunity for multiple methodological and theoretical approaches to flourish as part of a field that "emphasizes collaboration and literacy rather than unity" (Cho et al. 2013:785). This same internal diversity has made it a challenge to pin down specific dynamics of intersections (Hancock 2007). Combining intersectional analysis with cultural sociology is one way to address this problem, pushing past the simple insight that categories intersect to investigate how and why intersections matter, along with how they change over time (Choo and Ferree 2010).

Bridging cultural sociology and intersectional studies. Past theoretical and practical work in cultural sociology can help specify what it looks like for symbolic boundaries to change individually or for their alignment to evolve over time. When patterns of everyday conversation mirror the structure of existing symbolic boundaries - when the prevalence, clustering, and specific articulation of boundaries repeat those of the past - informal discourse performs "boundary work" that reproduces the status quo (Lamont 2012).

Intersectional analysis focuses attention on what happens when these boundaries align. When boundary work in multiple systems frequently and intensely co-occurs, it intensifies the marginalization of people who identify with multiple stigmatized groups, as it does for black women experiencing stigma and discrimination of race and gender along with other boundary systems (Wilkins 2012). Co-occurrence among shared symbols in boundary alignment can also be a crucial tool for social groups to bridge cultural holes, enabling movements on the left and right to forge new alliances by creating resonance with groups who otherwise would never align (Pachucki and Breiger 2010; Smith 2007).

What then happens when alignments shift in a system of symbolic boundaries? When patterns of boundary work in everyday conversation change, collective understandings of boundaries can shift with them, with serious consequences for material inequality and political power. Symbolic boundaries influence the construction of formal boundaries, creating unequal distribution of resources and social opportunities as people refer to internalized understandings of status and boundary as they decide to advocate or legislate, hire or fire, welcome or shun (Pachucki et al. 2007; Ridgeway 2014).

The consequences of alignment and realignment among symbolic boundaries pertain across multiple forms of inequality. For instance, racial formations that begin in informal discourse can lay the foundation for formal discrimination in law, workplace inequality, and violent repression (Omi and Winant 2014). Similar processes pertain across forms of social division in what Patricia Hill Collins calls the "recursive relationship between social structures and cultural representations" (Collins 2015). By focusing on the cultural side of this recursive relationship, this study raises questions about boundary work in discourse that connect deeply to struggles over material and power inequality.

Method and measurement

Tracing the prevalence and clustering of boundary language in online speech presents an opportunity to understand realignment while honoring the complexity of communicative action, particularly in the context of a community of scholars performing mixed-methods research. Measuring where boundaries appear together and apart can answer calls to trace multiple forms of intersectional difference in the same situations and among the same people (Else-Quest and Hyde 2016; Hancock 2013a). Further, aggregating patterns at large scales from interaction helps fill the need for bridges between micro, meso, and macro analyses, putting inequality research on firmer epistemological footing by aggregating large patterns from individual interactions (Latour 2005; Page 2015; Saperstein et al. 2013).


Measuring the intersection of symbolic boundaries quantitatively has been rare both in intersectionality studies and cultural sociology, for the understandable reason that quantitative work has often masked and misrepresented the dynamics of inequality (Hancock 2007; Zuberi and Bonilla-Silva 2008). Most studies of intersectional difference in online spaces have been qualitative so far (e.g., Gray, Buyukozturk, and Hill 2017; Gray 2012). Still, the past two decades have seen increasing calls for mixed-methods research in intersectional research (Bowleg 2008; Griffin and Museus 2011; McCall 2005) that specifies how boundaries intersect (Warner and Shields 2013).

Similarly, students of symbolic boundaries have most often employed qualitative tools to reveal the language with which people communicate their identity by distinguishing themselves from stigmatized others (Beljean et al. 2016). Understanding the network structure of boundary language has revealed how boundary work can not only distinguish but also bridge social groups (Pachucki and Breiger 2010; Smith 2007). Research on symbolic boundaries is particularly well suited for analysis at the intersection of conventional social categories, for instance investigating how people simultaneously navigate racial categories in combination with class (Lamont 2009) or gender and sexuality (Steinbugler 2012).

At the same time, the strong tradition of quantitative analysis in cultural sociology has rarely touched questions of intersectional difference. Careful coding of text has revealed quantitative differences in the use of language across cultural and national groups and shown how these differences shape policy outcomes (Ferree 2003; Mueller et al. 2012). Related studies have revealed cultural changes over time by applying automated content analysis to both formal and informal discourse (Bail 2012; Fiss and Hirsch 2005; Oleinik 2015). The symbolic boundaries approach has also proven particularly amenable to the quantitative analysis of large datasets, for instance showing how symbolic boundaries shift through displays of negative emotion (Bail 2008, 2014a).

What to measure: Signals and forms of boundary work. Understanding how boundaries aligned, polarized, or separated requires not only distinguishing among forms of boundary work like boundary activation and downframing, but also measuring where boundary work co-occurs with emotional expression. Measuring emotion is crucial because exposure to emotions cues the activation of different frames, as when exposure to anger increased preference for anti-immigrant policies among people who identify as white (Banks 2016).

Emotion is both contagious and key to the contagion of frames (Bail 2016; Coviello et al. 2014; Kramer, Guillory, and Hancock 2014), and some scholars have raised the possibility that the heightened intensity of emotional narratives on the political right - in contrast to the relatively cold emotional palette of the Democrats - played a central role in the outcome of the 2016 election (Alashri et al. 2016; Tuğal 2017).

Key to identifying the specific intersectional configuration of allies, enemies, and social categories in discourse is to distinguish between types of boundary work by their emotional charge and discursive function. Introducing this distinction taps into a productive synthesis that situates symbolic boundaries in a broader context of collective processes of valuation that come in multiple forms (Beljean et al. 2016; Lamont 2012). This work aligns with recent syntheses in social psychology and neuroscience that trace how processes of inclusion and exclusion work together to drive the formation of stereotypes (Amodio 2014), political ideologies (Jost et al. 2014, 2009), and broader identities (Hogg 2016).


This study distinguishes between three steps of boundary work - activating boundaries, defining groups, and framing groups - and differentiates at each step between discourse with positive, neutral, and negative charge, as in Table 1. These steps often happen simultaneously, but their combinations are worth analyzing independently. The quantitative analysis of this study targets only two forms that lend themselves to accurate large-scale quantitative analysis:

1. Boundary activation with neutral emotional charge: refers to categories that divide people into groups. Examples: “Black people talk differently than white people.”; “Women wear dresses.”

2. Downframing with explicit boundary activation: attaches negative value (emotional charge) by directly naming a category. Examples: “Black people are lazy”; the N word; “Women just can’t do math”

Unlike upframing, which is often signaled by tone and adjective modifiers rather than specific words, both forms of boundary work share the virtue of being rapidly identifiable in large corpora of text with the help of lexicons in computer-assisted content analysis. As shown in the examples above, each boundary system has a set of distinctive words and phrases that people use to signal the target and intent of their work to form and negotiate boundaries between groups of people.

For instance, the phrase "anchor baby" in the boundary system here called borders reifies and prioritizes the social category of immigrant status, and at the same time connects the category of immigrant to a negatively charged network of beliefs and values. Though it may be used ironically or as part of a joke, the phrase itself is inextricable from the downframing it has performed in countless past discursive acts (Hodson, Rush, and Macinnis 2010). On the other hand, mentioning a category label like "white people" reifies and focuses attention on symbolic boundaries of race while leaving open the question of positive or negative charge; it takes an additional discursive act to attach emotion to the boundary.

Negatively charged discourse is an important factor in the construction of symbolic boundaries but not nearly the only factor; its dominance in academic work has inspired multiple attempts to reclaim boundary work as part of broader processes by which groups and identities form (Beljean et al. 2016; Riesch 2010). To connect negative boundary work to the study of frames in cultural change (Snow et al. 2014; Vliegenthart and van Zoonen 2011), and to distinguish it from exclusion - which cannot be performed by someone within the target category - I call it downframing.

In boundary activation, a form of social division appears in speech without a clear positive or negative charge. Whether they are intended to reinscribe symbolic boundaries or not, words used to label or describe any category gain their meaning through distinction from other categories (Lakoff 1987). When category labels like "men" or "black people" appear in discourse they often still impact the perceived importance and structure of boundaries collectively imagined in social space (Bourdieu 1985). Indeed, categorization without emotional charge often reinforces the power relations of the status quo, for instance functioning as part of racial grammars that highlight or even assume the dominance of groups historically at the top of social hierarchy (Bonilla-Silva 2012).

Table 1. Forms of boundary work by step and emotional charge.

Activating boundaries: (neutral) Activation: "group Y"; "system XYZ"
Defining groups: (positive) Inclusion: "x is one of us"; (neutral) Categorization: "y is in group Y"; (negative) Exclusion: "z is one of them"
Framing groups: (positive) Upframing: "group X is good"; (neutral) Interpretation: "group Y is…"; (negative) Downframing: "group Z is bad"

RESEARCH DESIGN

Site Selection: Extracting Boundary Work from the Reddit Platform

This study uses as its primary dataset a collection of comments posted to the online platform Reddit, made accessible in monthly tables by pushshift.io and the Google BigQuery project, selecting roughly 1.7 billion comments posted between November 8, 2015 and November 8, 2017, the year before and after the election (Baumgartner et al. 2020). With over 330 million active authors posting more than 900 million comments each year in over 80,000 forums, Reddit at the time of this study was the 6th most active website in the United States, with the expressed purpose to "ultimately be a universal platform for human discourse" (Lagorio-Chafkin 2018).

The forums on this platform, known as "subreddits", function as natural laboratories with often strikingly different cultures of deliberation; their divergent patterns of discourse can shed light on how people do work of social exclusion and inclusion. Most of the forums on Reddit are loose agglomerations of users posting articles and other web links; a subset of forums develop a base of loyal contributors who spend hours a day reading, posting, and replying.

In these forums, signals of group formation multiply: new phrases and memes appear and establish resonance by generating upvotes, reposts, and replies; users recognize and reference each other; demonyms begin to circulate inside and outside forums as a badge of collective identity (Mills 2017). Reddit is a strong site of community building both for those sheltering from boundary work and more explicit forms of violence (Dosono and Semaan 2019; O’Neill 2018) and for those who seek to legitimize, construct, and weaponize boundary work (Prakasam and Huxtable-Thomas 2020).

Discourse on Reddit lends itself to semantic analysis because conversations are often wider and deeper threads than other discussion forms, with more participants and longer chains. Reddit's algorithms select and elevate resonant comments - the ones that quickly gather replies and votes - bringing them to the top of feeds and creating cascades of attention and reaction. The platform features even more discourse of identity, empathy, and strong group interactions than a platform like Twitter (Manikonda et al. 2018). With a more deliberative culture than many platforms and no character restrictions, comments are sometimes short retorts or echoes of other comments but commonly feature the elaboration of one focused frame (Massanari and Chess 2018).

Like most large datasets, this one is subject to errors and omissions that occur at measurable rates: Gaffney and Matias estimated that roughly 0.043% of comments and 0.65% of submissions were missing, and pushshift.io was taking steps to reduce these rates over time (Gaffney and Matias 2018). The dataset includes both primary comments - the ones that start discussion threads - and the reply threads that follow them. The filtering process removed only posts that clearly identified themselves as bots or moderators, along with comments with misleading content like copypasta, eliminating 12.4% of comments on average across the quarters studied.


The common limitations of online social research apply to the Reddit platform, but this study is designed to turn many of these limitations into strengths by using the idiosyncrasies of the platform as a lens on the specific population and kind of discourse it hosts. For instance, the widely reported skew in the population of social media platforms makes Reddit a site that lends strength to specific analysis revealing trends among mostly privileged groups. A recent study by Pew Research Center estimates that 11% of the adult US population uses Reddit but 22% of the population aged 18-29 uses the platform, which also skews educated and male, with a more complicated racial skew toward white and Hispanic respondents (Perrin and Anderson 2019). The intersectional privilege of this group makes them in some ways the ideal focus of a study of the realignment of symbolic boundaries, as they wield considerable symbolic power (Bourdieu 1991).

Discourse in text-based forums like those of the Reddit platform raises challenges but also opportunities: the visual symbols by which people identify their own race and gender tend to be unavailable, but users tend to activate social categories in text, making visible dynamics of inclusion and exclusion that are notoriously hard to measure through self-reporting (Gray and Huang 2015; Krumpal 2013). Even the pseudonymous nature of the platform emerges as a strength in studying boundary realignment.

Where many studies labor to map relationships between online communities and "real-world" communities (Hampton 2017), this study takes the opposite tack, embracing the pseudonymity of Reddit because it makes users even more likely to express and engage boundaries in discourse. Conditions of uncertainty can heighten mutual focus, intensifying the emotional impact of interactions (Collins 2014a). When users are unsure of the identity of their interlocutors, whether they "really mean" what they are saying, whether they are even talking with a human being, the affective impact of communication can sometimes be greater (Shelton et al. 2015).

Finally, though the Reddit data fails to capture the full range of any person's communication, this challenge common to all online platforms could be seen as a strength: if signals of symbolic boundary realignment can be found even in data from this platform, these signals will lay the foundation for a strong argument for the existence and relevance of similar patterns where communication is harder to measure at large scale but even more interactive and immersive.

Developing Lexicons: Downframing, Activation, Emotion

To measure how boundaries changed on Reddit from 2015 to 2017, this study takes as its central measure the relative prevalence of a set of original lexicons matched to boundary systems of race, gender, sexuality, borders & immigration, class, religion, and political identity. Lexicons are groups of words and phrases selected to signal the presence of a particular boundary or discursive phenomenon in discourse. The count of a lexicon in a comment is the sum of the words and phrases from the lexicon that appear in the comment.
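This counting rule can be sketched in a few lines; the function name, matching choices, and the mini-lexicon below are illustrative assumptions, not the study's actual code or lists:

```python
import re

def lexicon_count(comment, lexicon):
    """Count occurrences of lexicon words/phrases in a comment.

    Matching here is case-insensitive and anchored at word boundaries,
    so that e.g. "class" does not match inside "classical".
    """
    text = comment.lower()
    total = 0
    for term in lexicon:
        pattern = r"\b" + re.escape(term.lower()) + r"\b"
        total += len(re.findall(pattern, text))
    return total

# Hypothetical mini-lexicon for the gender boundary system
gender_activation = ["woman", "women", "man", "men", "gender"]
print(lexicon_count("Women and men talk about gender.", gender_activation))  # 3
```

Word-boundary matching is one plausible choice; the study's lexicons also include multi-word phrases ("black people"), which `re.escape` handles as literal strings.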

Using lexicons defined by researchers in advance, as opposed to inductive tools like topic modeling or machine learning, makes it possible to study intersections among a large number of specific symbolic boundaries. Inductive methods can identify primary distinctions in highly polarized data with relatively high accuracy, but are not designed to identify multiple specific categories simultaneously (Bail 2014b; DiMaggio, Nag, and Blei 2013; Grimmer and Stewart 2013). Lexicon-based strategies often produce similar accuracy to inductive models and enable researchers to focus in on specific concepts and constructs (Nobata et al. 2016).


Though it is impossible to distinguish forms of boundary work in text with perfect accuracy, lexicons have proven capable of tracing changes in boundary work online (Taboada et al. 2011; Wiegand et al. 2018), particularly where lexicons and results are validated by qualitative analysis (Grimmer and Stewart 2013). Counting words in social media posts has been used to measure affiliation, emotion, and boundary work across social categories (Park et al. 2016; Yaden et al. 2017).

Working with a research team, I first drafted a set of lexicons to measure boundary activation in each of the categories featured above. To begin each of these I worked with the research team to list the category terms (e.g., race, gender) and subcategory terms (e.g., Asian American, women) likely to be prevalent in discussions of each category on the Reddit platform. We did not seek to be exhaustive in this first instance but rather to identify terms close to the core of discussions on the issue at a relatively high level of generality (e.g., yes to "christian" and "evangelical" but not "Seventh-Day Adventist").

The language of downframing and online hate shifts quickly, so to build a dynamic lexicon of downframing that included even terms that might not be known by the research team, I started from the Hatebase, a crowdsourced collection of offensive words and phrases used online (Hine et al. 2017). The Hatebase offers a strong launching point for the analysis of boundary work online because its users not only collect words relevant to downframing, but also tag which symbolic boundary systems (gender, race, etc) relate to which words.

The final new lexicon sought to assess the prevalence and spread of alt-right language on the platform. I used a lexicon that combined a self-published glossary of an alt-right subreddit along with a list of words compiled by the Alt-Right Open Intelligence Initiative (Hagen 2017).

Each of these 15 original lexicons was refined and validated in four major ways: 1) ranking ngrams from the full Reddit corpus by polarization and emotional valence; 2) sending the draft lists to three other scholars with experience in social media and content analysis; 3) analyzing 20 pseudorandom comments that matched each word or phrase during the study period and flagging for deletion any that produced two or more false positives; and finally 4) testing each lexicon for false positive rates against a sample of 1,580 hand-coded comments.

False positive rates ranged from 0% for h_religion up to 6.1% for w_class. Rates were generally higher for the boundary activation lexicons. This makes sense given the sharp specificity of downframing: because many of the words used in downframing ("cuck", "patriarchy", etc) are designed to hurt, they tend to have a lower level of polysemy. The highest rates appeared in boundary activation categories for class and for borders & nationality, which included several terms ("workers", "unions", "citizens", "american") with very broad use. To see the rates of false positives and further details on this process, please see the Methodological Appendix.
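The arithmetic of the fourth validation step can be sketched as follows; the data structure and function name are illustrative, not drawn from the study's code:

```python
def false_positive_rate(matches):
    """Share of lexicon matches that hand-coders rejected.

    Each pair is (lexicon_matched, coder_confirmed); the rate is
    computed only over the comments the lexicon flagged.
    """
    flagged = [confirmed for matched, confirmed in matches if matched]
    if not flagged:
        return 0.0
    return 1 - sum(flagged) / len(flagged)

# 4 flagged comments, 3 confirmed by hand-coders -> 25% false positives
sample = [(True, True), (True, True), (True, False), (True, True), (False, False)]
print(false_positive_rate(sample))  # 0.25
```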

Some of the final lexicons have a higher rate of false negatives - ranging from 2.4% in boundary activation of gender to 11.6% in boundary activation of political identity - but the low rate of false positives makes them well-suited to the task of tracking changes in prevalence within each category of boundary work. For instance, this example from The_Donald shows terms from downframing lexicons in borders and nationality in bold: More culturally rich than a town full of people of different shades of brown, which has no distinct culture or specific heritage or ancestry (which is what libtards and globalists want)

Table 2. Samples from boundary activation lexicons.

Race & Ethnicity: african-american(s); asian-american(s)…; black people; black race; blacks; ethnic(ity)(ities)
Class: CEO(s); classi(sm)(st); elite(s); the-establishment; laborer(s); poor-people; rich-people
Gender: boy; gender(s); gendered; girl; man/men; sexes; sexi(sm)(st); woman/women

On the other hand, the lexicons identify boundary activation of political identity in the following comment from SandersForPresident but undercount the boundary work inherent in the word "corruption", which generated too many false positives to include: Being part of the establishment isnt bad. Bernie has spent the past 30 years in politics for example. But we grew up in the internet age. We wont put up will bullshit and corruption.

To measure the presence of emotion in comments, I used the established lexicon AFINN (Nielsen 2011), designed to track the valence and intensity of emotional expression online. This lexicon has proven a strong performer in meta-analyses of sentiment tools, particularly in the context of social media (Ribeiro et al. 2016).
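AFINN assigns each scored word an integer valence and sums them per text. The sketch below illustrates that scoring logic with a tiny hypothetical subset of the lexicon; the words and valences shown are for illustration only, not the published AFINN list:

```python
# Tiny illustrative subset; the real AFINN lexicon scores thousands of terms.
AFINN_SAMPLE = {"love": 3, "hate": -3, "good": 3, "bad": -3, "great": 3}

def afinn_score(comment):
    """Sum the valence of every scored word in the comment."""
    tokens = comment.lower().split()
    # strip simple punctuation before dictionary lookup
    tokens = [t.strip(".,!?\"'") for t in tokens]
    return sum(AFINN_SAMPLE.get(t, 0) for t in tokens)

print(afinn_score("Great rally!"))          # 3
print(afinn_score("I hate this bad take"))  # -6
```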

Measures of Prevalence, Centrality, and Clustering

Prevalence. To analyze changes in the relative prevalence of boundary work - for instance, when downframing of gender became more common or where activation of racial categories increased - I used the relative rate of occurrence as the primary metric for each lexicon. Because lexicons have unequal rates for reasons unrelated to their prevalence in discourse - that "women" occurs more than "immigrants" tells us nothing about the relative importance of gender and immigrant status - I report below only on correlations of rates across forums and changes in rates across time. Building on the validation analysis of Schwartz and Ungar (2015), I eliminated from the analysis any counts or rates that referred to groups of fewer than a thousand words.
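One plausible reading of this rate metric, scaling lexicon hits per thousand words and dropping groups under the word-count floor, can be sketched as follows (the function name and per-1,000-words scaling are assumptions for illustration):

```python
def relative_rate(hit_count, word_count, floor=1000):
    """Lexicon hits per 1,000 words; None when the text base is too small.

    Groups under the word-count floor are dropped from the analysis,
    following the validation strategy adapted from Schwartz and Ungar (2015).
    """
    if word_count < floor:
        return None
    return 1000 * hit_count / word_count

print(relative_rate(42, 120_000))  # 0.35
print(relative_rate(3, 400))       # None
```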

Creating a clear picture of prevalence and alignment over time required dividing comments into units of time long enough to smooth out fluctuations driven by the 24-hour news cycle and quirks of the Reddit platform. After preliminary testing revealed the quarter (3 months) as the best balance, I divided the data into eight quarters, four before and four after election day. I tracked prevalence for each variable across the 4,324 forums that hosted 6K+ comments (roughly 100 comments per day), 20+ authors, and 10+ average wordcount per comment in at least one of the quarters. Most forums (2,315) were active in all quarters; including even forums that were not active in every quarter allowed for a more complete picture of the full discursive field in each quarter.
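The per-quarter inclusion thresholds above reduce to a simple predicate; the function name and argument structure are illustrative:

```python
def forum_qualifies(comments, authors, avg_words):
    """Apply the per-quarter inclusion thresholds described above:
    6,000+ comments, 20+ distinct authors, 10+ average words per comment."""
    return comments >= 6000 and authors >= 20 and avg_words >= 10

print(forum_qualifies(182_000, 950, 31.2))  # True
print(forum_qualifies(5_000, 950, 31.2))    # False
```

A forum enters the analysis if it qualifies in at least one of the eight quarters.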

Centrality. To measure the strengths of relationships between lexicons I computed the pairwise correlation matrix. To observe which forms of boundary work were most strongly connected to others - which were most intensely intersectional - I computed the eigenvector centrality of each lexicon.

Table 3. Dates and number of comments scored by quarter

Quarter   Dates                    Comments
-4Q       11.9.2015 - 2.8.2016     182,505,559
-3Q       2.9.2016 - 5.8.2016      192,016,614
-2Q       5.9.2016 - 8.8.2016      202,254,866
-1Q       8.9.2016 - 11.8.2016     214,375,616
+1Q       11.9.2016 - 2.8.2017     229,301,866
+2Q       2.9.2017 - 5.8.2017      232,753,955
+3Q       5.9.2017 - 8.8.2017      247,352,753
+4Q       8.9.2017 - 11.8.2017     253,542,016


Unlike most other measures of network centrality, this measure has been shown to apply across a wide range of network structures (Bonacich 2007), to scale well, and to track measures of causal influence better than many commonly used alternatives like betweenness and closeness centrality (Dablander and Hinne 2019). The measure performs particularly well when the eigenvector is normalized for comparison across networks (Ruhnau 2000).
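A pure-Python sketch of normalized eigenvector centrality via power iteration on a symmetric, nonnegative matrix of association strengths; the identity shift and iteration count are implementation choices of this sketch, and the final rescaling sets the most central node to exactly 1:

```python
def eigenvector_centrality(adj, iters=200):
    """Eigenvector centrality of a symmetric nonnegative matrix.

    adj: list of lists of edge weights (e.g. absolute correlations).
    Uses power iteration on A + I (the identity shift keeps the same
    eigenvectors but guarantees convergence even on bipartite-like
    structures); scores are rescaled so the top node gets 1.0.
    """
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        # one multiplication by (A + I), then rescale by the max entry
        x = [x[i] + sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        top = max(x)
        x = [v / top for v in x]
    return x
```

On a simple star graph the hub converges to 1.0 and each leaf to 1/sqrt(2), illustrating how a lexicon connected to many others earns a higher score than one with a single strong tie.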

Clustering. To measure changes not only in the individual boundaries but in their relationships, I used a hierarchical clustering technique called clustering around latent variables (CLV), which is particularly well suited to revealing cultural structures where categories intersect and overlap (Giannella and Fischer 2016; Vaisey 2007). Clustering among groups of words or phrases is key to the production of meaning and measuring the degree and structure of word clusters has been a popular target for computational content analysis (Leydesdorff 2011; Mohr et al. 2013).

Like many clustering algorithms, the CLV procedure requires selecting the number of clusters that best fits the data. The most parsimonious number of clusters is often the one that precedes a jump in the quantity Δ = T_K − T_{K−1} as the number of clusters decreases from K to K−1, where T is the criterion to be maximized:

T = Σ_{k=1}^{K} Σ_{j=1}^{p} δ_{kj} cov(x_j, c_k)

with x_j (j = 1,…,p) the variables to be clustered, c_k (k = 1,…,K) the latent components of the clusters (each constrained to unit variance), and δ_{kj} a binary variable indicating membership of a given variable x_j in a given cluster c_k (Vigneau, Chen, and Qannari 2015; Vigneau and Qannari 2003).
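This selection rule can be sketched as follows, assuming the criterion values T_K have already been computed by a CLV fit at each candidate K (the numbers in the usage example are hypothetical):

```python
def best_k(criterion):
    """Choose the number of clusters from CLV criterion values.

    criterion: {K: T_K}. Delta_K = T_K - T_{K-1} measures how much of the
    criterion is lost when merging from K clusters down to K - 1; the most
    parsimonious solution keeps the K that precedes the largest such jump.
    """
    deltas = {k: criterion[k] - criterion[k - 1]
              for k in criterion if k - 1 in criterion}
    return max(deltas, key=deltas.get)

# Hypothetical criterion values: the largest loss occurs moving from
# K=4 down to K=3, so the 4-cluster solution is retained.
chosen = best_k({1: 0.20, 2: 0.30, 3: 0.38, 4: 0.55, 5: 0.58})
```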

The CLV algorithm took as its input the matrix of correlations among prevalence scores across these forums, measuring the strength of association between types of boundary activation and downframing. I added measures of negativity and intensity of emotional expression along with the alt-right lexicon in order to characterize the clusters by emotion and political lean.

RESULTS

Prevalence: Temporary change or new normal?

For most of the seven forms of boundary work in this study, both downframing and boundary activation reached a local peak not in the heated months directly before the election, but in the quarter that immediately followed: by Winter 2016-17, downframing centered on borders, political identity, gender, and race appeared at significantly higher prevalence than a year earlier, as seen in Table 4.

Over the full period, only one boundary declined in activation or downframing, and it declined in both: each form of religious boundary work dropped in prevalence over the period, aside from upticks in Summer 2016 and Summer 2017 as conflicts between Christianity and Islam rose in the news, after which the decline continued.

Other than religion, nearly all categories followed the expected pattern of temporary change in unsettled times, spiking in the quarters nearest to the election and then returning to a level near their earlier norm. There were two exceptions: boundary activation of race and class. Race followed the expected pattern until the final quarter, when it spiked again. Analysis of a random sample found this to be driven mainly by topical events, particularly the Unite the Right rally in August 2017 and the wave of protests centered on the NFL in the following month. The story of class is more complicated and is the focus of the following chapter.

The picture in downframing was very different. Here only two boundary systems - gender and class - showed the temporary change pattern. Downframing of race followed a similar pattern as boundary activation, returning to levels near the original and then spiking again in Fall 2017. On the other hand, downframing related to borders, political identity, and sexuality showed signs of a sustained new normal.

Boundaries of immigrant status, which grew more salient particularly in the two quarters following the election, exploded in downframing over the period, reaching their highest levels along with most other downframing categories in Winter 2016-17 (137% above starting prevalence) but declining to levels that were still very high, 57% above their prevalence two years earlier, even as boundary activation fell back to starting levels. The rise of anti-immigrant phrases in the quarter after the election, led by the expansion of the word "illegals", was particularly dramatic and widespread even in forums less likely to host serious political discussions: their prevalence more than tripled in forums centered on entertainment and roughly doubled in all forum types.

Downframing related to sexual identity saw consistent increases over the period and seemed to reach a new plateau, ending the period at 24% higher than its starting prevalence. Political identity boundary work also appeared to reach a new plateau, peaking 73% higher than its starting levels and declining only to 60% higher, driven in part by durable rises in words like "globalist", "libtard", and "snowflakes" even as words like "deplorables" faded from the platform. Gender downframing spiked in the quarter after the election and declined, but then settled to a new normal roughly 10% above its original levels for the last three quarters.

Table 4. Changes in prevalence for Activation and Downframing lexicons along with correlation between Activation & Downframing prevalence for each lexicon over the eight quarters studied.

Boundary                 % Change, Activation           % Change, Downframing          8-quarter corr.
                         Winter 2016-17   Fall 2017     Winter 2016-17   Fall 2017     A&D lexicons
Borders & Immigration    +46%***          +14%***       +137%***         +57%***       .87
Class                    +11%             +16%***       +23%             +11%          .59
Gender                   +1%              +3%           +22%***          +10%          .14
Political Identity       +132%***         +31%***       +73%***          +60%***       .22
Race & Ethnicity         +6%              +25%***       +44%***          +34%***       .52
Religion                 -11%***          -29%***       -22%***          -34%***       .92
Sexual Identity          +30%***          +16%***       +25%             +24%          .94

Note: p-values calculated by paired samples t-test using the 1528 forums that met the criteria (6K+ comments, 20+ authors, 10+ average word count) in all 8 quarters during the study period, providing a test of how consistent the pattern is across forums.

*** P < .001.


Significance tests of prevalence across the 4,324 most active forums reveal which forms of boundary work shifted in the same direction across a wide swath of forums and which followed more divergent patterns. For instance, though across all forums downframing of class and sexuality increased (+23% and +25%), these forms of boundary work decreased in enough forums to lower the significance of the overall results.

The link between downframing and boundary activation varied widely. There was a high correlation between the prevalence of downframing and activation in borders, sexual identity, and religion as these forms of boundary work rose and fell together across the two years of the study. On the other hand, downframing and activation of gender and political identity followed markedly different patterns, showing that the link among forms of boundary work even within a form of social difference is a variable quantity rather than a constant to be taken for granted.

Centrality: Was there a new nexus?

Eigenvector centrality measures the extent to which a given form of boundary work is connected by co-occurrence with others. The measure is scaled so that the most central category in a given network earns a score of 1, with other scores expressed by their ratio to this top level of centrality. When this measure is high, it indicates that this lexicon relates to a wide range of boundary conversations - when people are talking about boundaries, they are often talking about this boundary. Though the reality is more complex than any quantitative measure can capture, a lower centrality score indicates that a particular form of boundary work is in a sense less intersectional than others.

At the start of the study in Winter 2015-16, the most intersectional form of boundary work appeared to be race & ethnicity, particularly in its antagonistic mode of downframing, closely followed by political downframing (scaled centrality x=.86 of the top level), activation on the political left (x=.83), and activation of race & ethnicity (x=.79). In this first quarter, downframing of race was closely connected to a wide range of other forms, featuring strong correlations with D_Political identity (r=0.61), A_Borders & immigration (r=0.54), and D_Gender (r=0.53), all with p<.001.

Figure 1. Frequency of lexicons as percent of Winter 2015-16 rate: Downframing and Boundary activation. [Two panels track prevalence by category (hits per word in all active forums, % growth from Winter 2015-16, ranging from -40% to +140%) across the eight quarters from Winter 2015-16 to Fall 2017 for Borders & Immigration, Class, Gender, Political identity, Race/Ethnicity, Religion, and Sexual identity: one panel for Boundary Activation, one for Downframing.]

By the quarter after the election, downframing of race had fallen to a distant third, with a scaled centrality score of 0.62. The centrality score for boundary activation of race also fell, from x=0.79 to x=0.48, indicating that conversations about race were occurring more in their own arenas or intersecting less intensely with the range of boundary conversations across forums.

This evidence suggests the possibility of a new central hub for boundary work emerging over the course of the 2016 election cycle. The most centrally intersectional form of boundary work by Winter 2016-17, persisting through Fall 2017, was downframing of political identity, where authors used emotionally charged terms to attack and critique political others. In Fall 2017, this form of boundary work connected most strongly to A_Left (r=0.83), A_Class (r=0.69), and D_Borders (r=0.68).

Though gender certainly played a strong role in any new nexus that might have emerged, gender was less central than other forms of boundary work. Indeed, gender began with a relatively high centrality score (x_D_Gender=.44, x_A_Gender=.26), but this declined by Fall 2017 (x_D_Gender=.28, x_A_Gender=.17). By the end of the period gender had joined religion and sexuality as the forms of boundary work with the lowest centrality scores.

Clustering: What changed in intersectional boundary structures?

To drill deeper into specific boundary structures and show how the clustering structure of boundary work evolved over time, I applied the clustering around latent variables (CLV) procedure to the correlation matrix of prevalence scores for the forums under study for each of the four quarters from Winter 2015-16 to Fall 2017. In each quarter, CLV analysis revealed a jump in the Delta criterion between the 4-cluster and 3-cluster solutions, suggesting that four clusters best describe the alignment among forms of boundary work and emotional expression:

• Mainstream Politics: At the core of this cluster were boundary activation lexicons tied to the political right and left (A_Left and A_Right), both strongly correlated to D_Political identity. Throughout the period, A_Left was most strongly connected to boundary work of class and borders; the right was more strongly connected to downframing of race and class than to downframing of borders.

Table 5. Changes in intersectional entanglement (eigenvector centrality) for Activation (A) and Downframing (D) lexicons in three quarters, ordered by value in Fall 2017.

Lexicon                       Winter 2015-16   Winter 2016-17   Fall 2017
D_Political identity          0.86             1.00             1.00
A_Left (Political identity)   0.83             0.99             0.84
D_Race & Ethnicity            1.00             0.68             0.62
D_Borders & Immigration       0.64             0.57             0.57
A_Class                       0.61             0.52             0.57
A_Borders & Immigration       0.71             0.64             0.55
A_Right (Political identity)  0.55             0.77             0.53
A_Race & Ethnicity            0.79             0.46             0.48
D_Class                       0.42             0.51             0.46
D_Gender                      0.44             0.40             0.28
A_Religion                    0.35             0.32             0.21
A_Gender                      0.26             0.19             0.17
D_Sexuality                   0.29             0.20             0.15
D_Religion                    0.47             0.39             0.14
A_Sexuality                   0.16             0.19             0.12


• Race & Negativity: The core of this cluster was the tightly correlated pair of activation and downframing of race. Negative emotional expression was strongly correlated with downframing across the board, but most strongly with downframing of race, closely followed by class, gender, and political identity. Over time this core gained and lost a member: the alt-right lexicon fluctuated between this cluster and the Gender cluster in the first four quarters and then remained in Race for the final year, while class downframing ultimately migrated to the Politics cluster.

• Gender & Intensity: Anchored by the strong correlations of gender boundary activation and intense emotional expression, this cluster also included boundary activation in sexuality and downframing of gender. The alt-right lexicon was part of this cluster in Winter 2015-16 and Summer 2016 but ultimately shifted to the Race cluster.

• Religion: The strongly correlated dyad of activation and downframing in religion was linked to boundary activation of race and borders but resolved to its own cluster in every quarter. This link remained strong in each quarter with no changes to cluster membership over the full 2-year period, and indeed correlations with other clusters tended to decline over the period.

The Mainstream Politics and Race & Negativity clusters were strongly linked and became more so with the spike in prevalence of race discussions in Fall 2017 - their correlation ranged from r=.39 to a height of r=.61 at the end of the period. The Religion cluster was positively correlated with the Mainstream Politics cluster at the start of the period (r=.35), but the correlation had fallen to r=.20 by Fall 2017.

The Gender cluster was negatively correlated with every other cluster in each quarter, showing that on the Reddit platform, intense discussions of gender and sexuality often occurred in isolation. Figure 3 shows these clusters in a correlation network with distance between forms of boundary work estimated by graphical lasso (Friedman, Hastie, and Tibshirani 2008), with line thickness proportionate to the absolute value of the correlation coefficient.
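The network rendering described here can be sketched as an edge-list construction; the graphical-lasso distance estimation itself comes from a fitted library model and is omitted, and the threshold parameter below is an assumption of this sketch:

```python
def correlation_edges(labels, corr, min_abs=0.2):
    """Edges for a correlation network plot.

    labels: node names; corr: symmetric correlation matrix (list of lists).
    Keeps one edge per pair with |r| >= min_abs; the returned weight is
    |r|, suitable for drawing line thickness proportional to it.
    """
    edges = []
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            r = corr[i][j]
            if abs(r) >= min_abs:
                edges.append((labels[i], labels[j], abs(r)))
    return edges
```

Because thickness encodes the absolute value of the correlation, strongly negative ties (like those isolating the Gender cluster) appear as thick as strongly positive ones, so sign must be encoded separately if it matters for the plot.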

Figure 2. Dendrogram showing hierarchical cluster structure among lexicons from Winter 2015-16 and Fall 2017.


Though in general the clusters were stable over the study period, there were two durable changes related to the alt-right and to downframing of class. While the questions posed by Gender-centered Coalition studies are addressed in the clustering and centrality analysis above, the shifts in the clustering position of D_Class and the alt-right lexicon speak respectively to questions raised by Status Threat and New Nationalism studies.

The move of D_Class from the Race cluster into the mainstream Politics cluster was driven largely by increasingly strong links to activation lexicons on both the left (from r=.38 to r=.53) and the right (from r=.28 to r=.39) as the prevalence of class downframing grew on both sides of the ideological divide. At the same time, correlations rose between class downframing, negative emotion, activation of borders, and political downframing, suggesting that antagonism against elites and common people played an even larger role in overall political boundary work.

Correlations between boundary work of race, class, borders, and political identity were strong and relatively stable across the period. Indeed, the strongest average correlations not within the same boundary system linked A_Class and D_Borders (r̄=.68), D_Political identity and D_Borders (r̄=.66), and A_Class and D_Political identity (r̄=.64). The next strongest average correlation was between A_Class and A_Left (r̄=.61), higher than the correlation between A_Class and A_Right (r̄=.34).

The change in alt-right clusters related to the changing position of alt-right language in relation to boundary categories of race, borders, political identity, and sexuality. The rise of alt-right language across the platform was rapid and persistent despite the periodic attempts of Reddit officials to ban alt-right users and forums. The prevalence of the alt-right lexicon peaked at 795% higher than its starting levels during the quarter after the election and remained at 561% above starting levels a year later.

The alt-right lexicon initially played a bridging role between downframing of race, gender, and sexuality, but it became less of a bridge over time. In particular, its correlation with downframing of race increased from the start to the end of the study (from r=.33 to r=.47) as its correlations dropped with gender (from r=.38 to r=.28) and sexuality (from r=.57 to r=.20). Over the same study period, the correlation between D_Religion and A_Borders dropped (from r=.43 to r=.19), as did the correlation between D_Religion and D_Sexuality (from r=.31 to r=.09). Language of the alt-right also moved closer to the political mainstream as correlations increased with more mainstream activation of political entities on both the left (from r=.17 to r=.40) and the right (from r=.17 to r=.35).

Figure 3. Results of clustering around latent variables (CLV) analysis.

DISCUSSION

This exploratory quantitative analysis lays the foundation for further research on how boundaries shift and realign in informal discourse at large scale. One clear result is that the different patterns followed by boundary activation and downframing reveal the usefulness of distinguishing between these forms of boundary work.

The results suggest that the election cycle of 2016 was indeed critical, producing durable changes in the prevalence of boundary work. While boundary activation mostly showed the expected pattern of temporary change, several forms of the charged language of downframing, most clearly borders & immigration and political identity, seemed to reach a new plateau. Both reached a peak in the quarter after the election and remained high despite some fluctuation, suggesting a "new normal" in which persistent changes in discourse stand in a reciprocal relationship with continuing news events.

The overall shape of patterns in prevalence is likely to matter more than the percent growth - for instance, the fact that most forms of boundary work peaked after the election does not match the predictions of most election studies, which suggest that campaigns stir up interest that subsides once the votes are tallied. This in itself may be a signal that the 2016 election was critical, or at least different from others in its effects on how people discussed social issues.

Patterns in prevalence for each form of boundary work should be considered in the context of the overall volume of conversations in each social category. For instance, the 22% rise in gender downframing should seem impressive given the huge volume of conversations about gender on the platform; much of the language in the borders lexicon, by contrast, was relatively rare in Winter 2015-16, so the much larger spikes in that lexicon are substantively important but must also be interpreted against a low baseline.

Results of clustering and centrality analysis show evidence of durable clusters of symbolic boundary work, types of conversation about social difference that appear strongly linked in informal discussions on Reddit. Discussion of mainstream political figures - where people mentioned Trump or the Democrats, Clinton or conservatives - clustered with discussion of class and borders. Discussions of race, gender & sexuality, and religion clustered more tightly together in relatively independent spheres that were nonetheless strongly connected to others.

Gender-centered Coalition. As with the results from prevalence analysis, the centrality and clustering of gender boundary work should be taken in the context of how they operate on the Reddit platform and in broader spheres of informal discourse. Gendered conversation is ubiquitous both on Reddit and in general, wrapped in personal journeys of romance and family, and to a large extent happens separately from conversations about other social categories. Still, as with prevalence again, the shape of the pattern matters most to the question of what changed in the 2016 election cycle.


Rather than intersecting more strongly with other forms of difference over this period, gender decreased in centrality and to the extent that its individual correlations with other social categories changed, they generally became weaker. This lends support to studies that question the strength of the intersectional coalitions that emerged around Women's Marches; co-appearance in informal discourse does not directly influence whether movements can successfully connect with each other, but it influences the supply side of people whose experiences and discussions prepare them to bridge between gender and other concerns.

Status threat. In Winter 2015-16, with the election just beginning to heat up, race was highest in centrality, connected to a strongly intersectional nexus of boundary work focused on race, class, political identity, and borders. The strength of the ties between the Race and Mainstream Politics clusters was strong enough in some quarters to make feasible a 3-cluster solution with only Religion and Gender as separate clusters. This gives support to the idea that race functioned as a kind of central hub for boundary work through which other forms of boundary work flowed.

The centrality of political identity increased in the quarter after the election, making downframing of political identity (D_Political identity) the most tightly connected lexicon as the centrality of A_Left and A_Right also rose. D_Political identity remained the most central through the end of the period even as A_Left and A_Right became less central. This suggests that political identity as such may have displaced race as the most important hub for boundary discourse, as people on both left and right more fully embraced partisan identities and divisions. It was joined in the Mainstream Politics clusters by downframing of class - targeting elites and common people - which may have also become more important in the nexus of concerns over race, political identity, and borders that drives feelings of status threat.

New Nationalism. Conversations about borders, citizenship, and what it means to be an American grew strongly on Reddit particularly in the quarter after the election. The most dramatic leap by far was in downframing related to borders and immigration in this quarter, driven almost entirely by boundary work from the right including "illegals", "anchor babies", and "globalists", while terms from the left like "xenophobe" declined.

What role did sexuality, religion, and other forms of difference play in these expanding discussions? Religion was the only form of boundary work that declined substantially in both prevalence and centrality over the period, despite the continuing salience of Christian identity on the political stage. The decline persisted both through political struggles over the "Muslim ban" first proposed by Trump in December 2015 and through military struggles with the religious insurgent group ISIS.

These results provide evidence that rhetoric of homonationalism did not spread widely through informal discourse despite its importance as a political tactic on the right. Though boundary work of sexuality increased in prevalence after the election and remained at elevated levels, it occurred mainly in isolation. Indeed, language of the alt-right seemed to serve as a bridge between gender, sexuality, and race on the right early in the study period, perhaps bridging two discursive spheres that would otherwise have had a weak link. By the end of the period this bridging structure was much less evident as links grew stronger between the alt-right and downframing of race as its links to gender and sexuality weakened.


CONCLUSION

This study develops intersectional boundary analysis as a toolkit with the potential to help social researchers test, critique, and specify simple assertions and narratives of political change like the ones that circulated after the 2016 elections. On the social media platform of Reddit, substantial changes occurred not just to the prevalence but also to the alignment and clustering of boundary work, suggesting that 2016 was indeed a critical election with lasting impacts on the way people think and talk about social difference.

Interpreting specific relative shifts in boundary work is crucial not only to scholarly work but to practical work as well. In combination with data on hate group activity and crime, measures that track changes in the intensity and type of boundary discourse can help allocate resources and support to the people likely to be targeted as boundaries evolve. Understanding boundary realignments is also crucial for movements and campaigns seeking to interpret how people think and feel in order to create resonant messages and powerful mobilizations.

This study has revealed several changes in the prevalence and alignment of boundary work in informal discourse online. Though there is plenty of evidence to suggest that shifts on this platform are relevant across broader social contexts, further work is needed to situate these results in broader cultural and political processes. Within the platform, qualitative analysis is needed to reveal how lexicons like the ones used in this study come alive in real discourse. Quantitative intersectional boundary analyses like this one could also be applied to news archives or political speeches to understand how informal discourse online relates to the relatively elite discourse of journalists and politicians.

One of the limitations of the cluster analysis is that focusing on a broad field - the Reddit platform as a whole - obscures the differences between speakers and subfields. As the results of the prevalence analysis show, it is very likely that the intersectional boundary structure of discourse on the left differs from that on the right, and that each set of forums and authors has its own way of navigating the intersections of race, gender, class, and other boundary systems. Part of the purpose of this study is to help develop a set of tools to characterize and rigorously measure a wide range of fields within and beyond the informal discourse of social media.

Future studies should also extend the analysis forward and backward in time. The election of 2016 appears to be critical in that some shifts seem durable for at least a year, but it is unclear whether these changes have since subsided. Though it is likely that many things about this election were special - for instance, the peak in downframing occurring in the quarter after the election rather than the quarter before - the evidence will build significantly when it is possible in 2020 to perform the same analysis on another presidential election.

Most of all, this work and its connected qualitative studies represent a provocation to incorporate quantitative analysis into a collaborative field of mixed-methods intersectional research into the relationships between forms of social difference - not just two or three boundaries at a time but in combinations adequate to the complexity of social life. This call for intersectional boundary analysis echoes and answers calls to interrogate how specific forms of inequality intersect and help bring intersectionality from theoretical construct to measurable quantity.

Further, intersectional boundary analysis opens an opportunity for cultural sociologists to deepen their engagement with multiple forms of oppression and make clearer the connections between cultural inequality and inequalities of resources and political power. The more quickly and accurately our collective analysis of symbolic boundaries can reveal new alignments and new forms of boundary work, the more effectively we can act on cultural trends, supporting forms of discourse and political action that are ever more inclusive, just, and wise.


Chapter 2

The Intersectional Complexity of Class Distinction: How Online Boundary Work Challenges Narratives of

the Rise of Populism and the White Working Class

ABSTRACT

Narratives on how class distinction matters to political action and other forms of social difference proliferated after the 2016 US presidential election. Such narratives drew strength from questions about how pivotal phenomena like the rise of populism and the white working class changed the way people think and talk about class inequality. This study focuses on informal discourse online to directly assess hypotheses about the internal structure and intersectional relationships of class discourse during the 2016 election cycle. The analysis begins by measuring shifts in the overall prevalence of class discourse in the 1.7 billion comments posted on the social media platform Reddit in the year before and the year after the 2016 election. It then applies mixed-methods analysis to a stratified random sample of comments in two time periods to reveal the intersectional structure of boundary work in the class discourse of forums on the left, right, and middle. Results complicate the story of populism and the white working class, revealing class as a concept whose intersectional entanglements with other forms of social difference are both fundamental to its nature and hotly contested across a range of political ideologies and discursive fields.

INTRODUCTION

The United States presidential election in 2016 sparked intense debates over class inequality as multiple candidates took aim at political and economic elites. National and global surges in discourse on class carried direct and immediate stakes based in deep disagreements on questions that demand answers before any practical social change can be made: Who precisely are the elites we should target? Who are the common people in whose name the movements struggle? What are the boundaries that divide them?

Evidence has built around the belief that cultural factors played a more powerful role than personal economic anxiety in determining the outcome of this election and the associated rise of movements against elites. Studies of status threat, cultural backlash, and the rise of populism (Inglehart and Norris 2016; Major et al. 2018; Mast and Alexander 2018; Mutz 2018b) have amassed impressive evidence for the importance of cultural and psychological forces in recent political mobilizations but have done less to understand how these forces relate to complex intersections of race, class, gender, and other forms of difference - and how they are rooted in the daily experience and discourse of people.

Clearly identifying boundary work in discourse - understanding who specifically is included or excluded and how - is crucial both to social groups that are targeted and to understanding the landscape of beliefs and dispositions in the public that determines what mobilizations are possible (Norton 2017). Many studies assert that one particular form of social difference matters most, centering race, or class, or gender, or immigrant status without carefully considering others that might be salient (Collins 2015). When such studies do consider multiple forms of difference together, they often

47

oversimplify complex intersections, as in narratives that imagine the white working class as a monolithic social force (Creech 2018; Walley 2017).

The following analysis paves the way for intersectional analyses of boundaries, using class discourse on the social media platform Reddit between 2015 and 2017 as a test case to support further mixed-methods work. It first uses large-scale quantitative analysis to trace patterns in the salience of boundaries in this discursive field. Then it uses mixed-methods analysis of a stratified random sample of comments to dig deeper, showing how people talk differently about class in different discursive communities and how boundary configurations shifted over time, before considering implications for class-based movements in the coming years.

The specific configuration of allies and enemies in discourse - the intersectional boundary structure of a discursive field - is crucial to the prospect of social change in class or any other category of difference. It determines the social groups most likely to suffer and benefit from the rise of a movement, shapes the possibility of coalitions, and prefigures any policy platform. This analysis reconciles the insights of intersectionality studies, Marxist class analysis, and cultural sociology to support further work in intersectional boundary analysis that reveals how the specific structure of intersecting boundaries in discourse matters to inequality and social change.

CLASS DISCOURSE IN THE 2016 ELECTION CYCLE

Many of the narratives that circulated among journalists and academics after the election depended on framing the language and action of common people to explain why they mobilized, why they voted, why they spoke as they did. In excluding everyday conversations about class inequality from the analysis, these studies play into a long-standing trend of talking about common people without engaging them directly, one that tends to misrepresent common people and limit the possibility of effective change (McCall 2014).

In testing hypotheses about the evolution of class discourse against the informal discourse of an online platform like Reddit, this study argues against narratives that attempt to explain the motives and actions of non-elites but rely on an empirical base of research centered on elite speeches and survey data. Studying informal discourse about inequality is critical because the network of symbols and feelings that circulates in this discourse is precisely the one that movements and campaigns must resonate with in order to mobilize effectively.

This study investigates a cascade of questions to shed light on how informal discourse of class on Reddit evolved after the 2016 presidential election in the United States, particularly in relation to other symbolic boundaries. The first substantive question concerns the salience of class. The Sanders campaign earned a reputation in popular media and social research as a catalyst of widespread interest in class inequality (Master 2016). When the campaign ended in July 2016, did class discourse keep its momentum or subside?

Some scholars draw a straight line between the campaign and later movements challenging class inequality, suggesting that the salience of class might have stayed stable over the study period (Gautney 2018). Others saw the Clinton campaign as an avatar of progressive neoliberalism that distracted from meaningful class politics (Brenner and Fraser 2017). In this way, even the question of class salience depends strongly on intersectional concerns, including the complications of gender and class that plagued the Sanders campaign and the trouble that Clinton faced in connecting with discourse and movements centered on class (Albrecht 2017; McCall and Orloff 2017; Wilz 2016).

Though this study considers how salient the overall issue of class was in the 2016 election cycle, its main work is to map relationships between class discourse and other forms of social difference and to track how these relationships changed. Three hypotheses help to clarify the stakes of this work for other scholarship on this critical election. The first two hypotheses explored here concern two linked factors that would predict a rise in the salience of class during the period under study: the rise of populism and the entanglement of class with race, as in narratives of the white working class.

First comes the populism hypothesis, which would center changes in class discourse in the broader rise of populism during the 2016 election cycle. If the way people talked about class took on a more populist tone, the emerging subdiscipline of populism studies (Mudde and Kaltwasser 2018) would predict an increase in the prevalence of two classic signals of populist rhetoric: criticism of elites and the celebration of common people. Multiple studies identify this combination in the language of both the Trump and Sanders campaigns (Inglehart and Norris 2016; Oliver and Rahn 2016), but few have considered how these rhetorical moves circulated in informal discourse.

Second is the race entanglement hypothesis. From this perspective, any rise in the salience of class could be explained by the entanglement of class with race, as evidenced by the rise of the white working class both as a narrative and as a social entity (Gest 2016). If this trend were central to how the discourse of class worked, quantitative analysis might reveal a strong entanglement between class and race that increased over time. A deeper qualitative look might reveal how concerns that on their face seemed to be about class inequality actually reflected racial animus (Sides et al. 2019).

In contrast to the first two, the intersectional class hypothesis suggests that class plays a more complicated role, linked in a network with other categories of social difference. This explanation would manifest differently depending on political lean. On the left, how class engages other forms of social difference is key to the question of coalition and to the health and function of the big tent of diverse interests the Clinton campaign tried to channel in 2016. On the right, scholars have tended to focus on whether and how concerns of class intersected with socially conservative resistance to changing norms of religion, gender, and even racial stratification.

Populism: Boundary work above and below

The populism hypothesis suggests the rise of populist discourse as a main influence on class discourse. Narratives of populism were dominant in elite discourses on how class inequality mattered to political struggle during and after the 2016 election: in other words, "populism is sexy" (Rooduijn 2019:362). Scholars, journalists, and politicians turned to populism to explain the rise of Bernie Sanders, Donald Trump, and a host of other figures globally who railed against elites and praised common people, cementing the rise of populism studies as a field (Anselmi 2017; Brubaker 2017; Inglehart and Norris 2016; Mudde and Kaltwasser 2018).

These two features - attacks against elites and praise of common people - form the core of an emerging consensus that defines populism not just by its nationalist tendencies but by the targets of its boundary work. This definition, often described as "ideational" and "thin", aims to be both specific enough and flexible enough to enable comparison across very different contexts, including populism that works across national borders (Mudde and Kaltwasser 2018). Usually the definition emphasizes the corruption of elites or the nobility of the people (Aslanidis 2016; Mudde 2004), often along with a reference to popular sovereignty (Bonikowski and Gidron 2016; Mudde and Kaltwasser 2013).

The core opposition between elites and common people that animates populism studies is also at the core of all class discourse. It is hard to imagine a discussion of economic and power inequality that doesn't include critique of elites and sympathetic consideration of people at the bottom of the social ladder. The similarity in frameworks raises the question: can considering a broader framework of class discourse add any value beyond the analysis of populism?

The mainstream academic definition fits well with many of the maneuvers of emerging political leaders and parties around the world, but it fails to engage important complexities of populist movements and campaigns in practice. For instance, Donald Trump's words from a speech in Monessen, PA on June 28, 2016 criticize elites and sympathize with common people, but the specific targets of inclusion and exclusion matter a great deal:

Globalization has made the financial elite, who donate to politicians, very, very wealthy. I used to be one of them. I hate to say it, but I used to be one. But it has left millions of our workers with nothing but poverty and heartache.

Trump, whose wealth, income, and influence seemed to make him a member of economic and political elite classes by any reasonable standard, worked in this moment and throughout the campaign occasionally to celebrate and occasionally to mask his elite identity (Pierson 2017). In highlighting the struggle between workers and elites - both the global financial elite and the politicians they donate to - the campaign used populist rhetoric tactically to advance a broader strategic agenda.

The emerging consensus definition of populism doesn't always fit the actions of movements broadly considered to be populist. The dominant definition misses crucial features of many real-world movements including their tendency to punch both up and down, attacking and excluding some categories of common people (Jagers and Walgrave 2007; Muller 2016). Further, mainstream research on populism often carries a hint of moral judgment that ignores or downplays situations where attacking elites and elevating common people are entirely appropriate (Berezin 2019; Stavrakakis and Jäger 2018).

Assessing the core predictions of the populism hypothesis for class discourse requires paying more careful attention to the configuration of allies and enemies in populist discourse (Jagers and Walgrave 2007), including the interpenetration of race, gender, and other forms of difference with discourse on elites and the common people (Chun 2019).

The first step is to consider a logically complete set of attacks and praise for elites and common people, considering attacks on common people and praise for elites. This move builds on critical voices in populism studies (Brubaker 2017; Taguieff 1995) to bring class discourse analysis in line with the insight of Stuart Hall, Ernesto Laclau, and Chantal Mouffe that value-laden symbols like "elites" and "the people" are floating signifiers precisely because they are the targets of struggle (Hall 1981; Laclau 2005; Laclau and Mouffe 2014).

Even internal to class, it is helpful to consider different forms of elites. Studies of resentment in rural communities predict a rise in antagonism against cultural elites defined by geography and education (Cramer 2016; Gidron and Hall 2017; Hochschild 2018) over and above the political and economic elites often centered in populism studies (Brubaker 2017). Several studies suggest growing support for not just political but also economic elites on the right and the opposite on the left (Pew Research Center 2019), linked to an increase in system justification on the right (Monteith and Hildebrand 2020).

The assumption of power presents a discursive challenge for populists (Abedi and Lundberg 2009) but one that populist parties are increasingly navigating well (Albertazzi and McDonnell 2015; Bolleyer and Bytzek 2017). When anti-elite crusaders become new political elites, as Donald Trump did when he became President of the United States, populism studies might predict steady levels of anti-elite rhetoric on the right as populist movements find other elites to target. On the other hand, a perspective centered on motivated reasoning might predict that attacks on elites would decline as people begin to identify with the systems they once opposed (Leeper and Slothuus 2014; Pennycook and Rand 2019; Tappin, van der Leer, and McKay 2017).

Race and class beyond the white working class

The simplest form of the race entanglement hypothesis appears in narratives of the "white working class", which took off in 2016 as many journalists and a few scholars posed this group as a primary driver of the victory of Donald Trump (e.g., Williams 2017; Lamont, Park, and Ayala‐Hurtado 2017). Narratives of the white working class conflate theories that might otherwise compete: the phrase "working class" aligns with the theory that people were mobilized by economic need, and the labeling of this class as "white" suggests they were moved by racial animus.

A close look at the way this phrase "white working class" circulated after the 2016 election reveals the dangers of reifying social groups without careful analysis of how boundaries intersect in discourse and lived experience. Evidence does suggest a specific jump in turnout rates in 2016 for people in working-class occupations who identified as white (Morgan and Lee 2017b). However, this trend seems to be a part of a larger turning away from establishment political parties that occurred over the Obama presidency in working-class communities of multiple racial groups (McQuarrie 2017; Morgan and Lee 2017a).

Narratives of the white working class point to a broader hypothesis that concerns of class were significantly captured by concerns of race during the 2016 election cycle. Connected to the debate over whether racial or economic concerns motivated Trump voters (Morgan 2018; Mutz 2018a), this hypothesis raises the broader question of whether "racial attitudes were the lens through which economic concerns became more politically actionable" (Sides et al. 2019:156).

The overall function of the term "white working class" was to enable a shallow analysis that ultimately limited understanding (Bhambra 2017). Framing the white working class as a primary driver of election outcomes directed attention away from the voters of color who shifted allegiances or stayed at home in the election (Bobo 2017) and ignored the racial diversity of the working class (Walley 2017). It also oversimplified the promising analysis of the role of status threat (Mutz 2018b) by collapsing a very complex set of factors contributing to status anxiety into simple racial animus (McCall and Orloff 2017).

If whiteness was not the only signifier that mattered to class discourse in the 2016 election cycle, which forms of difference mattered most? Race is central to the history and identity of the working class in the United States (McDermott 2006; Roediger 1999), but it is by no means the only category of difference to be considered in explaining mobilization in 2016 or any other election cycle. Given the historical entanglement of work with masculinity, gender cuts to the center of class identity (Acker 2006; Hall 2013; Ridgeway 2011).

Quantitative analysis can begin to get at the questions raised by the race entanglement hypothesis, but a deeper qualitative analysis is needed to assess where concerns that appear on their face to be only about race or class actually tap into other forms of difference. Racial priming research has focused on how subtle race-coded appeals are more effective at moving voters than explicit racist attacks, though this may have changed recently (Valentino, Neuner, et al. 2018).

Intersectional class on the left and right: Threats to coalition and status

The intersectional class hypothesis suggests that the class discourse of the 2016 election cycle engaged a more complicated network of social categories than either the populism or race entanglement hypothesis would suggest. Partly because of the conflicts between Clinton, Sanders, and Trump over the meaning and intersectional position of class inequality, the class discourse of the 2016 election cycle is a prime site for the study of class in interaction with a diverse set of other forms of difference, including not only race and gender but also immigrant status (Alamillo et al. 2019; Sides, Tesler, and Vavreck 2018) and religion (Delehanty et al. 2019; Whitehead et al. 2018).

Indeed, it is an open question whether class discourse on the left or the right was more deeply entangled with other forms of social difference, and one this study is positioned to resolve. The Clinton campaign posed itself as explicitly intersectional and struggled over its relationships with workers, as movements on the right celebrated workers in ways that reasserted traditional hierarchies of gender, race, and sexuality (McCall and Orloff 2017; Strolovitch et al. 2017).

The question of how class relates to other forms of difference - and particularly whether class analysis crowds out other forms of analysis and action - has been a flashpoint of debate for years. This conflict became particularly salient in disputes between supporters of Bernie Sanders and Hillary Clinton. The simple narrative of this conflict pitted Sanders supporters, who believed that economic inequality was the root of all other forms of inequality and must be addressed first, against supporters of Hillary Clinton, who insisted that race, gender, and other forms of social difference be given equal or greater attention (Albrecht 2017; McCall and Orloff 2017; Wilz 2016).

The Sanders campaign and other movements that took up the cause of economic inequality in the 2016 election cycle drew heavily on the symbols and analysis that circulated in movements connected to Occupy Wall Street in 2011, which made few direct changes to policy or economic structure (Gitlin 2013; Kreiss and Tufekci 2013) but produced measurable shifts in class discourse, partly by participating in an international effort to frame specific enemies - e.g., the financial sector and the "1%" (Calhoun 2013; Gaby and Caren 2016).

These movements also mirrored Occupy Wall Street by gaining a reputation for privileging class inequality over other forms of inequality including race, gender, and immigrant rights (Milkman 2017). Seeing this as a fundamental weakness, the Clinton campaign posed itself as explicitly intersectional, making "Standing Together" across a broad range of identities a core of its platform and positioning (Clinton and Kaine 2016), and in the process developing a reputation for ignoring the concerns of workers (Liu and Lei 2018; McCall and Orloff 2017).

On the right, scholars claim that the multiple identities associated with status threat make discourse on the right more intersectional than on the left (McCall and Orloff 2017). Studies of intersectional boundary work in populist discourse have traced the increasing salience of boundary work focused on immigrant status, religion, and nationality as a new discourse emerges against globalist elites (Lamont et al. 2017). Others predict a drop in explicit boundary work of race and gender as class becomes a more acceptable venue of attack (Amundson and Zajicek 2018).

Some point out that status threat is deeply entangled with gender and sexuality (Schaffner et al. 2018; Strolovitch et al. 2017), while others argue that to the extent that class mattered, it was itself a nexus of concerns across a cluster of symbolic boundaries in which phrases like "hard working taxpayers" or "ordinary folks" engage economic insecurity, racism, sexism, and Islamophobia to gain meaning and resonance in everyday conversation (Pied 2019). Among scholars who embrace the intersectional class hypothesis there is broad consensus that class is deeply entangled with other forms of difference, but much productive disagreement on precisely how.

CLASS AND INTERSECTIONALITY

In popular discourse on class politics and identity politics, it can sometimes be taken for granted that class analysis is the mortal enemy of other forms of analysis, but this misconception fails to engage the history of class as a concept. The concept of class was always intersectional in the sense that, from its earliest formulations, it made strong connections between material inequality and other forms of difference.

Materialist definitions that assign people to classes based exclusively on ownership and control of the means of production, which often oversimplify the relatively nuanced writings of Anthony Giddens (1973), quickly fall apart in the analysis of complex societies (Wright 2016). Karl Marx himself did not address the definition of classes authoritatively and tragically left his final manuscript on the topic incomplete (Wright 1985). His writings on class in specific historical contexts, like The Eighteenth Brumaire and The Class Struggle in France, delineate complicated class groups that go far beyond relationship to the means of production (Ollman 1968).

A close reading of Pierre Bourdieu, often posed as an advocate of culture against the supposedly materialist Marx, reveals a strong connection between material inequality and cultural struggle. Bourdieu's insight that all forms of capital - even economic capital - derive their value from social relations and symbolic power (Bourdieu 1986) itself builds on an insight from Marx, who wrote in Capital on the social and symbolic origins of the value of money and commodities (Marx 1976:182). This insight opens the door to intersectional analysis of class by insisting on the salience and social existence of classes as an empirical question linked to the practical symbolic work that people do to create groups from social space (Bourdieu 1985).

Indeed, the intersectional status of class - its entanglement with multiple forms of social difference - may be one of the things that makes it so useful as an analytic category. Its resistance to being pinned down to material inequality allows class to apply to a wide and diverse range of struggles and contexts (Skeggs 2004). Determining the specific intersectional boundary structures surrounding class - the extent to which other forms of difference interact with material inequality to determine power relations - is crucial to understanding how inequality works in context (Anthias 2013; Collins 2015).

Economic inequality was a central focus of many of the founding works of intersectionality; bringing class back in to the analysis will take careful attention to its relations to other categories (McCall 2005; Walby, Armstrong, and Strid 2012). Understanding both race and gender is key to any analysis of class inequality (Choo and Ferree 2010; Collins 1998b; Hooks 2000), but those two boundary forms are far from the only ones salient (Taylor, Hines, and Casey 2010; Warner and Shields 2013). Class analysis will be strongest when it attends to the ways that material inequality is both inherently intersectional and made more so by struggles over classification and interpretation that implicate multiple forms of difference (Bourdieu and Wacquant 2013).

Intersectional work on the history of class inequality leads the way for empirical studies that reveal how class works by investigating its relationships to other forms of difference (Beck 2013; Fraser 1995; Milkman 2017). Still, a bridge between class analysis and intersectionality studies must be constructed carefully (Cho et al. 2013), resisting hierarchical frameworks of the past that have associated class with macro analysis and other boundary forms with smaller scales (Choo and Ferree 2010) or used class as a foil to advocacy for other forms of diversity, as in discourse on affirmative action (Onwuachi-Willig and Fricke 2010; Sander 2010).

Informal boundary work in social media fields

The focus on informal discourse builds on the insight that people activate categories of social division not only in dramatic conflict but in everyday speech and gesture with substantial impact on individual lives (Jean et al. 2015; Sue 2010) and on the categories used to describe, interpret, and ultimately act on the world (Bourdieu 1991). Rather than assuming that elite language accurately depicts social reality, it is key to investigate where and how elite language circulates in informal discourse - and where it doesn't. Where elite and informal languages converge lies the potential for mobilization; where they diverge lies the potential for new social movements that better match the beliefs and values of ordinary people (Bonikowski 2017; Bonikowski and DiMaggio 2016).

There is reason to believe that substantial moves with serious implications indeed happen in informal discourse online. Social media platforms including Reddit and Twitter were a flashpoint of the 2016 elections (Lagorio-Chafkin 2018), particularly in the Trump and Sanders campaigns which embraced a more bottom-up approach to social media outreach (Enli 2017). Movements - both formal and informal - use online platforms to directly attack each other and struggle for the hearts and minds of people who are not yet ideologically committed (Nagle 2017).

Given this, it is reasonable to suggest that social media platforms like Reddit function as fields: spheres of discourse with real stakes where minds change not just through argumentation (Musi 2018) but also and especially through emotionally-charged interactions (DiMaggio et al. 2018). In the terms of field theory, the immediate efforts of users to seek resonance and recognition (Ferree 2003) directly in their fields (symbolized by upvotes, replies, and monetized forms of privilege like Reddit Gold) accrete into durable cultural capital, including words and symbols specific to groups (Nissenbaum and Shifman 2017).

Each forum on Reddit features internal governance units (Fligstein and McAdam 2012): a group of moderators with the power to ban other users, remove comments, and otherwise intervene in the discursive practice of the group (Lagorio-Chafkin 2018). Indeed, Reddit emphasizes collective processes over individual accounts: unlike other platforms where individual pages are at the core of the user experience, user profiles are little more than a catalog of the user's posts. The real work of customization and identity construction on Reddit is collective, with forum moderators and dedicated users spending long hours discussing and tweaking the norms, rules, and appearance of their subreddits (Lagorio-Chafkin 2018; Manikonda et al. 2018).

Discursive fields on social media have internal structure and relationships to other fields on the platform, on other platforms, and in life offline. The forums in this study could be described as subfields of a larger field of advocacy forums whose members look to each other as allies, competitors, or enemies (Manikonda et al. 2018). On social media platforms like Reddit, fields struggle with each other for prominence and even attack each other directly, as when one community mobilizes to colonize, dominate, and target another; these negative mobilizations have important long-term effects (Gray et al. 2017; Kumar et al. 2018).

Symbolic boundaries and group affiliations formed online have repeatedly shown a capacity to influence cultural and political struggles at large scale, blurring the lines between informal discourse online and offline across nations and contexts (e.g., Hatakka 2016; Fuchs 2016). Far from disappearing in anonymous spaces, symbolic boundaries of race, gender, class, and political identity reassert themselves on social media as authors seek to negotiate and distinguish identities and affiliations (Gray 2012; Gray and Huang 2015; Shelton et al. 2015). Social media strongly predicts public opinion and is ever more tightly connected to contests over formal political power (Farrell 2012; Oliveira et al. 2017).

Mixed methods analysis of cultural change in informal discourse

Studying communication among millions of people presents unique challenges both because of the scope of the overall field of discourse and because of the internal complexity of each communicative act. This dual challenge makes mixed-methods analysis an absolutely vital tool in a collective program of research on intersectional boundary structure in informal discourse.

Quantitative analysis is key to understanding discursive shifts at the macro level: measuring where boundaries appear together and apart can answer calls to trace multiple forms of intersectional difference in the same situations and among the same people (Else-Quest and Hyde 2016; Hancock 2013a). Further, inequality research has suffered from a lack of clear mechanisms connecting individuals, institutions, and macro patterns; anchoring the analysis of large patterns in the study of micro-level processes like communicative action puts inequality research on firmer epistemological footing (Latour 2005; Page 2015; Saperstein et al. 2013).

To understand the importance of quantitative patterns - why it would matter, for instance, that 10% of comments on class inequality also include boundary work of race - consider that most of the authors in this study posted comments about inequality on Reddit multiple times in a given month. Users of Reddit and other social media platforms generally read far more than they post, so an author that posts even a few times on class inequality is likely to have engaged dozens of other relevant comments (Lagorio-Chafkin 2018; Mills 2017). Even when the prevalence of a given form of boundary work is as low as 1 in 100, the volume of comments that each reader consumes leads to repeat exposures. The mix of boundary work encountered by each reader contributes to that reader's sense of the field (Bourdieu 1991) and thus shapes their future communicative action.
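The logic of repeat exposure can be made concrete with a back-of-the-envelope calculation (a hypothetical illustration of the reasoning above, not a measurement from this study's data). Assuming exposures are independent across comments read:

```python
# Probability that a reader encounters at least one instance of a given
# form of boundary work, assuming independence across comments read.
# p: prevalence of the boundary form; n: number of comments read.
def prob_at_least_one_exposure(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Even a rare form (1 in 100 comments) is likely to reach a reader
# who skims a few hundred comments in a month.
print(round(prob_at_least_one_exposure(0.01, 100), 3))  # 0.634
print(round(prob_at_least_one_exposure(0.01, 300), 3))  # 0.951
```

The independence assumption is of course a simplification (readers cluster in forums), but it shows why even low-prevalence boundary work can saturate a reader's sense of the field.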

Pairing quantitative and qualitative analysis is particularly important in the study of intersecting boundaries given how deeply enmeshed boundaries are in everyday communication (Chun 2019). Careful interpretive work is also important in studying emotional charge in boundary work online given the irony and sarcasm that permeates social media discourse (Bamman and Smith 2015). Moreover, speakers often bury their boundary work on controversial categories like race and gender in a complex set of codes and in-jokes to avoid critique, making it necessary to ground any quantitative work on boundaries in close readings of text and context (Amundson and Zajicek 2018).

The past two decades have seen increasing calls for mixed-methods research in intersectional studies (Bowleg 2008; Griffin and Museus 2011; McCall 2005) that specifies how boundaries intersect (Warner and Shields 2013). Integrated approaches reporting qualitative and quantitative measures on the same dataset have increasingly been used in sociology (Small 2011) as in other fields to "triangulate" complex social reality, using quantitative and qualitative analyses to question and sharpen each other (Covarrubias et al. 2018; Onwuegbuzie et al. 2009; Spillman 2014).

RESEARCH DESIGN

This study begins with a quantitative analysis of the prevalence of a lexicon of words related to class discourse over the two-year period from Winter 2015-16 to Fall 2017. I then focused on two quarter-long time periods - Spring 2016 and Summer 2017 - for deeper mixed-methods analysis. These periods fell neither in the chaotic times immediately before nor immediately after the election, affording an opportunity to observe how class discourse might have settled into new patterns after the election cycle.
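As a rough sketch of this kind of prevalence measurement, the following computes the monthly share of comments matching a lexicon. The terms shown are hypothetical stand-ins, not the study's actual class-discourse lexicon:

```python
import re
from collections import Counter

# Hypothetical mini-lexicon standing in for the study's full lexicon.
CLASS_LEXICON = ["working class", "elites", "the 1%", "billionaires"]
pattern = re.compile("|".join(re.escape(t) for t in CLASS_LEXICON),
                     re.IGNORECASE)

def monthly_prevalence(comments):
    """comments: iterable of (month, text) pairs. Returns the share of
    comments in each month containing at least one lexicon term."""
    totals, hits = Counter(), Counter()
    for month, text in comments:
        totals[month] += 1
        if pattern.search(text):
            hits[month] += 1
    return {m: hits[m] / totals[m] for m in totals}

sample = [
    ("2016-03", "The working class deserves better wages."),
    ("2016-03", "Great game last night!"),
    ("2017-07", "Billionaires keep winning."),
    ("2017-07", "No politics in this thread, please."),
]
print(monthly_prevalence(sample))  # {'2016-03': 0.5, '2017-07': 0.5}
```

At the scale of the full corpus the same counting would be pushed into the database layer rather than a Python loop, but the prevalence statistic is the same ratio.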

The mixed-methods analysis began with developing and validating two systems of codes, one focused on the salience of boundary categories and the other on comments targeting elites versus non-elites - boundary work above and below. The first system identified which intersecting boundary categories were salient in a given comment, showing where authors referenced categories of economic or political class versus race, nationality/immigrant status, gender, or religion. The second system identified the mix and emotional charge of boundary work above and below, focusing on where authors targeted elites or non-elites with positive, negative, or neutral valence.
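One way to picture the two code systems is as a small data structure attached to each comment. The category, target, and valence labels below paraphrase the description above and are illustrative, not the study's exact codebook:

```python
from dataclasses import dataclass, field

# Illustrative labels paraphrasing the two code systems described above.
BOUNDARY_CATEGORIES = {"class_economic", "class_political", "race",
                       "nationality_immigration", "gender", "religion"}
TARGETS = {"elite", "non_elite"}
VALENCES = {"positive", "negative", "neutral"}

@dataclass
class CodedComment:
    comment_id: str
    salient_categories: set = field(default_factory=set)  # code system 1
    boundary_work: list = field(default_factory=list)     # code system 2

    def add_category(self, category: str) -> None:
        if category not in BOUNDARY_CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.salient_categories.add(category)

    def add_boundary_work(self, target: str, valence: str) -> None:
        if target not in TARGETS or valence not in VALENCES:
            raise ValueError(f"unknown code: {target}/{valence}")
        self.boundary_work.append((target, valence))

# A comment attacking economic elites while invoking race might be coded:
c = CodedComment("t1_example")
c.add_category("class_economic")
c.add_category("race")
c.add_boundary_work("elite", "negative")
```

Keeping the two systems separate mirrors the design above: a comment's salient categories can be tallied independently of the direction and charge of its boundary work.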

I then used the validated code systems to classify a stratified random sample of 420 comments in the field of advocacy forums on Reddit, drawing equally from groups that leaned left, leaned right, and hosted diverse opinions in each time period. This produced quantitative estimates of the salience of various forms of boundary work in the sample. To understand these results in context, I performed separate qualitative studies of the full group of comments in left, right, and diverse forums for each time period, writing summary memos including representative quotes for each subfield.

The overall plan of analysis follows a path similar to the computational grounded theory approach developed by Nelson (2017), using quantitative analysis on a large set of text to reveal patterns that are tested and elaborated by close reading of text and context. An important difference from her approach is that the initial analysis is performed on the full Reddit corpus rather than a sample, reducing the need for a final confirmation step. This close reading is crucial given the complex and highly implicit character of speech on controversial topics of race, gender, and other forms of inequality, which tends to elude quantitative analysis (Amundson and Zajicek 2018).

Site Selection: Extracting Boundary Work from the Reddit Platform

The primary dataset used in this study is a collection of roughly 1.7 billion posts to the online platform Reddit between November 8 of 2015 and November 8 of 2017, a year before and after the election. These comments are made accessible in monthly tables by pushshift.io and the Google BigQuery project (Baumgartner et al. 2020).² With over 330 million active authors posting more than 900 million

² Like most large datasets, this one is subject to a testable rate of errors: an analysis by Gaffney and Matias estimated that roughly 0.043% of comments and 0.65% of submissions were missing, and pushshift.io was taking steps


comments each year in over 80,000 forums, Reddit during the time of this study was the 6th most active website in the United States, with the expressed purpose to "ultimately be a universal platform for human discourse" (Lagorio-Chafkin 2018).

The forums on this platform, known as "subreddits", function as natural laboratories with often strikingly different cultures of deliberation; their divergent patterns of discourse can shed light on how people do work of social exclusion and inclusion. Most of the forums on Reddit are loose agglomerations of users posting articles and other web links; a subset of forums develop a base of loyal contributors who spend hours a day reading, posting, and replying. In these forums, signals of group formation multiply: new phrases and memes appear and establish resonance by generating upvotes, reposts, and replies; users recognize and reference each other; demonyms begin to circulate inside and outside forums as a badge of collective identity (Mills 2017).

Discourse on Reddit lends itself to semantic analysis because conversations are often wider and deeper than on other discussion platforms, with more participants and longer chains. Unlike Facebook, Instagram, and Twitter, Reddit has no limit to the depth of replies on comment threads, making it possible for users to reference each other's words in long chains of conversation, exercising cognitive authority in argument (White 2019), learning from each other (Del Valle et al. 2020), solving problems together (Buozis 2019), and changing each other's views (Musi 2018). Of course, the same affordances make it possible for conspiracies and misinformation not only to be challenged and debated but also to be developed and spread (Kou et al. 2017; Samory and Mitra 2018).

Reddit's algorithms select and elevate resonant comments - the ones that quickly gather replies and votes - bringing them to the top of feeds and creating cascades of attention and reaction. The platform features even more discourse of identity, empathy, and strong group interactions than a platform like Twitter (Manikonda et al. 2018). With a more deliberative culture than many platforms and no character restrictions, comments are sometimes short retorts or echoes of other comments but commonly feature the elaboration of one focused frame (Massanari and Chess 2018).

The common limitations of online social research apply to the Reddit platform, but this study is designed to turn many of these limitations into strengths by using the idiosyncrasies of the platform as a lens on the specific population and kind of discourse which it hosts. Even the pseudonymous nature of the platform emerges as a strength in studying boundary realignment. Conditions of uncertainty can heighten mutual focus, intensifying the emotional impact of interactions (Collins 2014a). When users are unsure of the identity of their interlocutors, whether they "really mean" what they are saying, whether they are even talking with a human being, the impact can sometimes be greater (Shelton et al. 2015).

Moreover, though the Reddit data fails to capture the full range of any person's communication, this challenge of all online platforms could be seen as a strength: if signals of symbolic boundary realignment can be found even in data from this platform, these signals will lay the foundation for a

to reduce this rate over time (Gaffney and Matias 2018). The dataset includes both primary comments - the ones that start discussion threads - and the reply threads that follow them. The filtering process, which targeted posts that clearly identified themselves as bots or moderators along with comments containing misleading content like copypasta, resulted in the removal of 12.4% of comments on average across the quarters studied.


strong argument for the existence and relevance of similar patterns where communication is harder to measure at large scale but even more interactive and immersive.

Analysis: Lexicons and Coding

The first step of this study takes as its central measure the relative prevalence of lexicons designed to capture the presence of class discourse online. Lexicons are groups of words and phrases selected to signal the presence of a particular boundary or discursive phenomenon in discourse. The count of a lexicon in a comment is the sum of the words and phrases from the lexicon that appear in the comment.
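The counting rule defined above can be sketched as follows. This is a minimal illustration with a hypothetical miniature lexicon (the validated lexicons used in the study are described in the Methodological Appendix), using whole-word, case-insensitive matching as one plausible implementation choice:

```python
import re

def lexicon_hits(text, lexicon):
    """Count whole-word, case-insensitive matches of lexicon terms in a comment."""
    text = text.lower()
    return sum(len(re.findall(r"\b" + re.escape(term) + r"\b", text))
               for term in lexicon)

def prevalence_per_million(comments, lexicon):
    """Lexicon hits per million words across a group of comments."""
    hits = sum(lexicon_hits(c, lexicon) for c in comments)
    words = sum(len(c.split()) for c in comments)
    return 1_000_000 * hits / words if words else 0.0

# Hypothetical miniature class lexicon, for illustration only
CLASS_LEXICON = ["working class", "billionaire", "wages", "the rich"]

comments = [
    "The rich keep getting richer while wages stagnate.",
    "Great game last night!",
]
```

The per-million-words normalization allows comparison across forums and quarters with very different volumes of discussion.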

To develop a set of lexicons measuring boundary activation, I worked with two research assistants to gather a set of words for each category and subgroup, refining and validating the lexicons in four major ways: 1) ranking ngrams from the full Reddit corpus by polarization and emotional valence; 2) sending the draft lists to 3 other scholars with experience in social media and content analysis; 3) analyzing the first 20 results from Reddit Search for each word or phrase during the study period and flagging for deletion any that produced 2 or more false positives; and finally 4) testing each lexicon for false positive rates against a sample of 1,580 hand-coded comments. To see the rates of false positives and further details on this process, please see the Methodological Appendix.
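The fourth validation step amounts to a precision check against hand-coded labels. A minimal sketch, assuming a hypothetical schema in which hand-coded judgments are keyed by comment id:

```python
def false_positive_rate(flagged_ids, hand_labels):
    """Share of lexicon-flagged comments that hand-coders judged NOT to be
    genuine instances of the target discourse.

    flagged_ids: ids of comments the lexicon matched (hypothetical schema)
    hand_labels: dict mapping comment id -> True if hand-coded as a true hit
    """
    flagged_ids = list(flagged_ids)
    if not flagged_ids:
        return 0.0
    false_pos = sum(1 for cid in flagged_ids if not hand_labels[cid])
    return false_pos / len(flagged_ids)
```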

Though it is impossible to distinguish forms of boundary work in text with perfect accuracy, lexicons have proven capable of tracing broad changes in language of boundary work online (Nobata et al. 2016; Taboada et al. 2011; Wiegand et al. 2018), particularly where lexicons and results are validated by qualitative analysis (Grimmer and Stewart 2013). Counting words in social media posts has been used to measure affiliation, emotion, and boundary work across social categories (Park et al. 2016; Yaden et al. 2017).

To classify the type of forum - whether its conversations focused on advocacy, community, entertainment, or something else - we each independently classified the full set of the 4,324 forums that featured 6K+ comments, 20+ authors, and 10+ average wordcount in any of the eight quarters from Winter 2015-16 to Fall 2017. We then classified the 199 active advocacy forums by their political lean, distinguishing forums that hosted a diverse range of opinions from ones that leaned to the political right or left. We met together to resolve the 4% of forums that were classified differently across coders and validated our category classifications against several category schemes internal to Reddit along with the partial set of category memberships developed by Kumar et al. (2018).

To validate the coding systems used to identify intersectional boundary categories and boundary work targeting elites and common people, we gathered examples for each subcode and tested the coding system against a random sample of 400 comments between 20 and 300 words in length taken from advocacy forums in Fall 2016. We used Dedoose to code specific sections of each comment, taking care to code all of the forms of boundary work that appeared in each comment. We then met together to reconcile the codes and refine the coding system, discarding subcodes on which we couldn't come to agreement.
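Inter-coder agreement of the kind described here is often also summarized with a chance-corrected statistic such as Cohen's kappa; the chapter reports reconciliation by discussion rather than a kappa value, so the following is an illustrative sketch only:

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: share of items both coders labeled identically
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence, from each coder's marginal rates
    categories = set(labels_a) | set(labels_b)
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)
```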

I then set about creating the stratified random sample of comments related to class discourse. I first created a dataset of the comments on Reddit that included one of the words in the class lexicon. From this universe I randomly sampled 420 comments, divided evenly into six categories according


to their time period (Spring 2016, Summer 2017) and political lean (left, right, diverse). For a fuller description of the coding process, please see the Methodological Appendix.
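The stratified draw described above - 420 comments split evenly across six period-by-lean cells - can be sketched as follows, with hypothetical field names for the comment records:

```python
import random

def stratified_sample(comments, per_stratum=70, seed=0):
    """Draw an equal-size random sample from each (period, lean) stratum.

    comments: dicts with 'period' and 'lean' keys (hypothetical schema).
    """
    rng = random.Random(seed)
    strata = {}
    for c in comments:
        strata.setdefault((c["period"], c["lean"]), []).append(c)
    sample = []
    for key in sorted(strata):
        group = strata[key]
        sample.extend(rng.sample(group, min(per_stratum, len(group))))
    return sample

# Usage: 2 periods x 3 leans x 70 comments = 420
universe = [{"period": p, "lean": l, "id": i}
            for p in ("Spring 2016", "Summer 2017")
            for l in ("left", "right", "diverse")
            for i in range(500)]
sample = stratified_sample(universe)
```

Sampling equally per stratum trades population-proportional representation for equal statistical power in each cell of the comparison.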

With the sample in place, I created a coding database in Airtable that enabled high speed and accuracy both in assigning codes for quantitative analysis and in calling up comments that met a range of criteria for qualitative analysis and validation. Assuming a normal distribution in the proportion of codes among the total possible samples of the population, the margin of expected sampling error at 95% confidence for a sample of 420 is roughly 4.8%. Thus, proportions and differences under 5% are described below as "within the margin of error" and changes in prevalence under 5% are not reported.
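The roughly 4.8% figure follows from the standard normal-approximation margin of error for a sample proportion, evaluated at p = 0.5 (the most conservative case); a quick check:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Normal-approximation margin of error for a sample proportion
    at 95% confidence (z = 1.96). p = 0.5 gives the widest interval."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(420)  # roughly 0.048, i.e. about 4.8 percentage points
```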

RESULTS

Lexicon analysis: Prevalence of class discourse

The charts in Figure 2 show the prevalence of class lexicon ngrams per million words, first by forum category and then by political lean within advocacy forums. As expected, class discourse is much more common in advocacy forums than in forums focused on community or entertainment, and generally more common in left forums than right forums with the exception of Fall 2016. The prevalence of the class lexicon in advocacy forums was 4.8 times that of forums focused on entertainment and 3.7 times that of community forums on average over the study period.

Figure 2. Prevalence of class lexicons per million words, by forum category and by lean of advocacy forum

Note: Error bars based on 6.1% false positive rate for class boundary activation lexicon determined by hand-coding.

[Charts omitted from transcript. Left panel: "Class Activation by Forum Category," hits per million words (0-2,500) for advocacy, community, and entertainment forums; right panel: "Class Activation by Advocacy Forum Lean," hits per million words in advocacy forums (0-2,500) for Left, Diverse, and Right forums; both panels span the eight quarters from Winter 2015-16 to Fall 2017.]


Tracking the prevalence of class discourse by quarter shows a clear pattern: across all forum categories and within nearly every political lean, class discourse reached its peak in Spring 2016 when Bernie Sanders and Hillary Clinton were in the thick of primary competition. This peak subsided in the following quarter and reached its lowest level in every forum in Fall 2016, the quarter immediately before the election.

The one exception to this pattern is in advocacy forums on the left, where prevalence was even higher in the first quarter, when very active forums like KossacksforSanders and PoliticalRevolution dominated, than in Spring 2016. In left advocacy forums, the prevalence of class discourse in Fall 2016 was 52% of its level at the start of the period. Even community and entertainment forums had their lowest levels of class discourse in Fall 2016, though the change was much less intense.

The quarters following the election saw steady growth in discussions of class in all but diverse advocacy forums. On the left, this growth did not recover to starting levels, topping out at 74% of the prevalence at the start; forums on the right discussed class at 153% their original prevalence.

Coding analysis of boundary work above and below

The sample contained 420 posts from a diversity of left-leaning, diverse, and right-leaning forums. Close reading of posts in context revealed these forums as discursive communities with a base of shared knowledge and substantially different norms and practices both within and across their category of political lean.

Though each forum referenced external sources, particularly in the post that began each thread, most of the conversation was internally focused, referencing and replying directly to the comments of other authors. With the exception of a few forums like The_Donald that often featured a long set of first-order replies on the original post, most discussion in these forums featured nested replies in which authors engaged arguments in long threads of discussion.

Downframing against elites was by far the most prevalent form of boundary work across all forum leans, though the targets of this boundary work were very different in each discursive space. Whether authors discussed elites or common people, charged boundary work was the norm, with only 1.7% of comments referencing either group with neither positive nor negative emotional charge.

Figure 3 charts overall shifts between Spring 2016 and Summer 2017 in black and then disaggregates forum leans by color, showing how prevalence changed in each forum group for the boundary forms coded in the analysis along with the combination of boundary work featured in populism studies: attacks on elites and praise for common people.

The negatively charged focus on elites intensified slightly over the period in left (63%→73%) and diverse (61%→66%) spaces while dropping more precipitously on the right (71%→53%). Notably, this pattern held even when limiting the analysis to comments that focused on economic status, with 65% of comments in right forums targeting elites in 2016 and only 48% in 2017. Commenters talked positively of elites relatively rarely throughout the period, but rates of elite upframing rose in polarized spaces. Identity work targeting elites rose in homogeneous spaces on both the left (14%→19%) and right (23%→30%) as it dropped in diverse forums (30%→19%).


Comments generally featured only one form of boundary work, for instance downframing elites or upframing common people but not both. Combinations were rare, but they increased over the period, with 20% of comments featuring multiple forms in 2016 and 33% in 2017. The combination of boundary work linked to narratives of populism - downframing elites and upframing common people - was by far the most prevalent combination in the sample, as would be expected by the relatively high prevalence of Elite- and Common+ respectively.

This populist combination appears to increase on left (11%→19%), diverse (13%→19%), and right forums (7%→14%). Excluding comments that go Against Lean, this pattern remains the same in all forum groups except the right; the populist boundary work combination rises only within the margin of error, from 6% to 9%. This gives forums on the right the lowest density of populist comments out of all forum leans in both time periods.

The most consistent trend across all forum leans was an increase in upframing of common people. Though the intersectional group targeted in the positive comments varied widely within and across forum leans, there was a sharp rise on the right (13%→33%), a strong increase in diverse forums (31%→47%), and a smaller rise on the left (30%→37%). Common people received relatively little downframing in the sample - elites were the primary targets across all forum leans - but negative framing of subaltern groups rose in diverse spaces (11%→19%) as it stayed relatively steady on the right (7%→11%) and left (7%→3%).

However, unlike the trends above, the increase in the upframing of common people on the right changes greatly when excluding commenters who write against the political lean of the forums. Right and left forums began in Spring 2016 as relatively homogeneous spaces, with 4% and 3% of comments going against the dominant lean. By Summer 2017, right forums had substantially diversified, with 19% of comments going against the dominant lean. Only 1 of the 9 sample comments that upframed common people in 2016 was Against Lean; this ratio was 10 of 23 in Summer 2017 as left commenters sought to praise or defend subaltern groups. By comparison, 0 of the 47 comments on the left coded as Common+ came from authors going Against Lean.

Figure 3. Changes in prevalence of types of boundary work in class discourse sample, Spring 2016 to Summer 2017, by total advocacy forum sample and subfield.

Note: endpoint represents Summer 2017 value. χ² Total (3, N = 420) = 0.09, p = .007; χ² Left (3, N = 140) = 0.44, p = .068; χ² Diverse (3, N = 140) = 0.13, p = .013; χ² Right (3, N = 140) = 0.03, p = .002; α = .05.
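Shifts in code prevalence like those tested above are conventionally assessed with a Pearson chi-square statistic on a contingency table of code counts by time period. A minimal sketch of the statistic itself, using hypothetical counts rather than the study's data:

```python
def chi2_statistic(table):
    """Pearson chi-square statistic for a two-dimensional contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts of comments with/without a given code in two periods
counts = [[30, 40],   # Spring 2016
          [45, 25]]   # Summer 2017
```

In practice a p-value would be read from the chi-square distribution with the appropriate degrees of freedom (e.g. via scipy.stats).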


Coding analysis of intersectional boundary categories

Addressing the intersectionality of class discourse asks not only whether authors targeted elites or common people but also which elites and which common people. Most comments targeted workers and the rich: economic status dominated class discourse across all forum leans, followed distantly in salience by political identity. Still, each group of forums had a distinct discursive formation in which economic status converged with some boundaries and diverged from others.

Conversations in the left and diverse groups became more tightly focused on economic status in class discourse by 2017 as other boundaries declined in salience. Diverse forums were most focused on material inequality from the start and showed the largest increase, with economic status - including references to CEOs, workers, wealth and income inequality - appearing in 80% of sample comments in 2016 and 96% in 2017. Salience of economic status in the class discourse of the sample was already somewhat higher on the left (70%→83%) than the right (66%→71%) in Spring 2016, and the gap between the two widened over time (4%→12%).

On the other hand, talk of political identity - focused on the divide between politicians and the voters they represent - stayed at a steady level of salience in left (46%→50%) and diverse (37%→39%) spaces while declining on the right (51%→41%). Figure 4 shows how prevalence changed in each forum group for the seven boundary forms coded in the analysis.

Race had relatively low salience in the class discourse that circulated in the sample in Spring 2016, but the picture had changed by Summer 2017, particularly in right-leaning forums. By Summer 2017 class discourse in these forums had grown more intersectional, bringing in multiple forms of difference more strongly: the salience of race (9%→21%) and gender/sexuality (14%→21%) rose even as they stayed steady or dropped in left and diverse spaces. The salience of borders and immigration

Figure 4. Changes in prevalence of boundary categories in class discourse sample Spring 2016 to Summer 2017, by total advocacy forum sample and subfield.

Note: endpoint represents Summer 2017 value. χ² Total (5, N = 420) = 1.39, p = .074; χ² Left (5, N = 140) = 0.56, p = .010; χ² Diverse (5, N = 140) = 0.49, p = .008; χ² Right (5, N = 140) = 1.08, p = .045; α = .05.


rose, though within the margin of error (14%→19%), even as it declined in diverse forums (23%→10%). As the salience of these other forms of social difference rose, forums on the right engaged political identity less (51%→41%).

Qualitative analysis of intersectional boundary categories

Close readings situate comments in context to complicate and extend quantitative results, revealing how boundaries not only intersect in discourse but also where they reinforce each other and where they come into conflict. Intersectionality itself was at issue in much of the class discourse in the sample, particularly on the right and left. In left-leaning forums in 2016, intersectional posts often put boundaries in conflict, comparing class to other forms of oppression, as in this post from r/ainbow:

A poor child in rural Georgia who has no access to quality education and lives in poverty, and has been taught to hate gay people by the same church that is the only source of community support in his town probably has a worse life than a wealthy man in Atlanta who gets called a faggot every once in a while.

This anxiety over the intersectional position of economic status often referenced the struggle between political elites Hillary Clinton and Bernie Sanders, with Sanders perceived to value class over other forms of difference.

The frequency of these comparisons dropped between the two periods along with the overall salience of race and gender to class discourse on the left. By 2017, a new strand of comments appeared in left forums that compared class to other forms of difference not to assess whether class comes first but to grapple with how to include groups that felt left out of the Democratic coalition along lines of gender, race, or class. For instance, a comment from politics claimed that Clinton was "pandering to minorities" in 2016 and noted the struggles of "white blue collar workers" before declaring that "Dems need to inspire and open the Big Tent once again."

Comments on the right referenced race and gender more than other forums by 2017, chose different targets for downframing, and drew from a different emotional palette. Targets of downframing were often intersectional and portrayed as elite, as in the recurring phrase "white liberal" or in a story told in SargonofAkkad in 2016 about a "rich black activist" who accused "all the white students of being 'baby KKK'." Mirroring the intersectional anxiety of the left but with the assumption that class matters most, several comments in 2016 downframed high-powered women who claimed to experience oppression, like "some rich female celeb" who had to pay child support or, in another comment from MensRights, "some rich famous person" who "says she got groped."

The entanglement of elite status and gender grew on the right in 2017 with the crucial difference that men and masculinity more often appeared in powerful positions. Samples from both 2017 and 2016 contained 21 comments in which gender was salient, but the sample from 2017 contained more upframing (48%→90%) and particularly more upframing of elites (24%→57%). The concept of hypergamy, referring to the tendency of women to choose relatively wealthy sexual partners and newly popular in right forums during 2017, provides a lens into these shifts in gendered class discourse.

The few posts referencing hypergamy in 2016 focused on the perspective of lower-class males who were left behind by women who preferred wealthy men. On the other hand, a post in 2017 from


KotakuInAction explained hypergamy in glowing terms and celebrated men who "prefer young attractive women regardless of wealth or status." Similarly, this post from The_Donald celebrated the intersection of political elite power and masculinity:

I never would have believed Trump's first international trip could have been such an insane fucking success. Grown men in gripping hugs, grown women staring at Trump and not giving a fuck who sees it.

Many posts on the right in 2016 targeted social justice warriors or SJWs, a category whose defense of intersectional groups - and intersectionality as an analytical framework - were themselves threatening, partly because they elevated other forms of oppression above class. One author from KotakuInAction claimed that "class based inequality is dead last on the list of things SJWs are concerned about" and another that SJWs "absolve themselves of any abuse of power." Most of these comments positioned SJWs and liberals as ascendant elites.

By 2017, intersectional comments on the right included a new strand of critique of non-class forms of oppression from a position of power and authority, tracking with both the rise in salience of race and gender and the decrease in downframing that targeted elites. Several posts argued against the presence of white privilege and white people as in the comment that critiqued "blanket labeling the white race liable for a small minority of powerful rich people" and praised the "throngs of common white people" who helped end slavery. This post from The_Donald in 2017 was the most extreme of several that positioned workers and class as pure in opposition to the intersectional category of SJW: "8 years of social justice hires calling actual workers racist sexist bigots tends to force people out. Need to purge the sjws somehow."

Qualitative analysis of boundary work above and below

The quantitative analysis above tracked comments as candidates for populist rhetoric when they included both upframing of common people and downframing of elites and showed a lower density of these comments among right-leaning authors. Closer reading shows that most such comments contrast between common people and elites without the grand narratives or emotional intensity of elite populist discourse, as with this comment from the left-leaning forum politics in 2016:

I feel like the government has been less and less responsive over the past 20 years to the concerns of working class Americans. And it seems like theyve grown content to pay lip service to this key demographic without showing much interest in enacting beneficial change.

Indeed, most of these comments compare elites and common people without any call to mobilization or action, as in the critiques in diverse forum news that Trump's policies "are clearly benefiting elites rather than workers" or this complaint in right-leaning forum The_Donald: "the money poor people would've gotten was spent for them by Congress on back seat sensors. Yay." Though populist rhetoric as such seemed absent from the forums, a small metadiscourse on populism emerged in left-leaning and diverse forums in 2017 as the term circulated more widely in mass media. All five comments that included "populism" or "populist" in the sample occurred in 2017, and each one framed it positively against the actions of elites. An author in the diverse forum Documentaries commented that "populism is definitely good for the people, as it put their interest


first" while several posts echoed the perspective that Trump "ran as a populist" (in SandersForPresident" but governed "as any other probusiness gop asshole would do" (in politics). One post in politics praised populism before noting that it "sounds like something Obama would be against."

Elites. Though populist rhetoric as such was rare, authors spent thousands of words criticizing elites and upframing common people. The targets of elite downframing were substantially different across forum leans. Left-leaning forums targeted a relatively even mix of political and economic elites focused domestically in both time periods, and the focus on domestic political elites increased in 2017 with Trump as the primary target.

Elite targets of downframing seemed to shift more substantially in diverse and right forums, and in opposite directions. Many anti-elite comments in diverse spaces targeted both domestic and global corporate corruption, as with comments in worldnews that tied offshoring to the loss of American jobs or critiqued how tax evasion by multinational corporations impacts common people: "the average citizen receives less taxdollar programs and the rich keep the money." Global critiques like this declined along with the salience of nationality among diverse forums in 2017 as critiques of corporate and political elites focused more on domestic issues.

The trend was the opposite on the right, anchored by the circulation of the term "globalist". Where diverse forums contained the only use of the term in 2016 - a user in lostgeneration claimed that exploitation of low-wage workers "will continue until we kick the globalists out of government" - all but one of the comments including this term in 2017 were on the right. For instance, a comment in The_Donald celebrated the Trump administration's exit from the Paris Agreement as a win against globalists: "Hard to fathom how close we were to permanent ruin before Trump came along to start tearing down the globalist elite."

Another notable shift in anti-elite discourse occurred in the way left elites were framed in right-leaning forums. Where terms like "white liberal" and "SJW" dominated in 2016, mentions of these elite groups declined on the right in 2017. A new kind of anti-elite discourse appeared, focusing on an upper middle class that, according to an author in KotakuInAction, thought themselves "more 'pure' than their working class counterparts." An author in The_Donald downframed this group as an elite class that cut across party lines:

Lets be frank, our enemy is a class of people. Not "the 1%." There are plenty of great rich people out there. The class I'm talking about identifies themselves as "upper middle class." They may be "republican" or "democrat," but they all went to the same elite universities, send their kids to the same elite private schools, and play golf at the same elite country clubs.

Common people. The intersectional category of the white working class as such was conspicuously rare in the informal discourse of the sample. Workers were only explicitly described as white in 1 comment from 2016 and 4 from 2017 - and none of the authors who discussed white workers identified themselves as one. The comment from 2016 in the left-leaning hillaryclinton forum was the only one of the 22 mentions of the term "working class" to reference whiteness, criticizing another author's dismissal of Kentucky with the point that "this sort of smug elitism is why Dems have a hard time mobilizing working class whites."


In 2017, three of the four mentions of white workers followed popular journalism of the time in using them to explain the election result, as with the author in PoliticalDiscussion who wrote that "the white people that didn't vote for Hillary were blue collar workers" who "didn't feel she would listen to them (globalization etc)." The only other reference to whiteness and the working class was in the forum WhiteRights in 2017, where an author added gender into the mix, claiming that the decline in the viewership of the cable channel ESPN "may be due to them alienating their largely white, male, working class viewer base."

Rather than the white working class explicitly named as such, workers and the salience of economic status dominated the upframing of common people across all forum leans, far outpacing alternatives focused on political identity ("voters", "average citizens") or other forms of intersectional difference. Class was salient in every one of the comments that upframed common people in left and diverse spaces in 2017; it was salient in 90% or more of such comments in every space and time period except for right-leaning forums in 2017, where the share was only 83%.

Workers as such, without any explicit references to race, gender, or other boundaries, were the center of upframing in each of the three forum leans, but in substantially different contexts. Across all forum leans, there were 70 comments mentioning workers and working in 2016 and 89 in 2017, many of them focusing on everyday struggles. Authors in diverse forums contested issues like minimum wage, corporate exploitation, and unions while maintaining respect for workers. For instance, a comment from news in 2016 opposed unions on the grounds that "when these workers got these jobs there was no contract they agreed to for paying the Union dues."

However, workers were rarely valorized as the heroes of a populist narrative and far more often appeared as victims. On the left workers were primarily the victims of economic and corporate elites, while workers on the right tended to suffer from political elites. Health care was the one issue in which stories of suffering seemed to go uncontested even in diverse forums, as with the user from nottheonion who wrote that "today I understand that even daily life could put me thousands of dollars in debt if I have an accident or an illness."

Though the category of "worker" seems at first glance to exclude boundary forms other than class, some comments revealed how the term carries implicit intersectional meanings. For instance, when an author in the diverse forum news asserted that "I'm all for equality but artificially selecting less skilled workers solely because of their gender is just stupid," they counterposed workers selected by gender to an implicitly male norm. When an author in the right-leaning forum unpopularopinion contrasted "illegals" to "more expensive American workers", they drew a connection between class and immigrant status. Boundary work like this leaves an apparently single-issue symbol like "workers" with intersectional valences likely to follow it across contexts.

The one exception to this trend of economic status as the clear focus of upframing of common people appeared in right-leaning forums in 2017. In these forums, a new strain of debate emerged, involving both left-leaning and right-leaning authors, over which race and gender groups were indeed common people who deserved sympathy. Left-leaning authors who went Against Lean to comment in right-leaning forums often defended marginalized race and gender groups individually or in intersecting clusters, as this one did in ShitPoliticsSays:


I guess gays, blacks, Mexicans and poor people aren't Americans anymore. And here I thought we were a nation made of strong people of all creeds, cultures, colors, and religions. Silly me.

In contrast, right-leaning authors noted that whites and males were also victims of oppression. For instance, an author in KotakuInAction called it "strange how sweeping blanket characterizations or stereotypes of the entire white race somehow 'arent racist', yet for every other race, they're extremely racist". And an author in pussypassdenied noted that cases of domestic abuse sometimes target men, and that "anyone that tells you that its men's fault because they don't ask for help is a liar… many cops are white knights that get excited by the idea of 'bustin up a wife beater.'"

At the same time, gendered identity work declined at the margins of class discourse in left-leaning forums. In 2016, five comments explicitly included gender groups in upframing common people - declaring that they "stand in solidarity with sex workers", grappling with the intersection of poverty and queer identity, and decrying the perceived sexism of the Bernie Bros. In 2017, the only comment that upframed a gender group was this one from ChapoTrapHouse: "we shouldn't throw women and other groups under the bus to make a big tent."

DISCUSSION

These results suggest that frames popular in media or elite discourse do not always survive in informal discourse, and when they do they are often altered in meaningful ways. This highlights the need to study directly when and how frames and narratives flow between elites and common people, replacing assumptions with careful analysis. These clusters of forums on Reddit revealed themselves as subfields of a broader field of advocacy forums with both unique logics and overarching patterns. A more complex picture of realignments emerged from mixed-method analysis, with different quantitative patterns appearing across left, right, and diverse forums that sometimes converged in the light of qualitative analysis.

Results of the prevalence analysis present clear evidence for the Sanders campaign as a catalyst sparking discussions of class inequality across all forum categories and political leans, particularly during the primary contest between Clinton and Sanders in Spring 2016. The evidence also suggests the possibility that class discourse subsided across the board when Sanders exited the campaign; though it may seem that all social categories would be displaced in the heat of the final months of a presidential campaign, rises in the salience of other issues in the same period make this unlikely. The steady rise of class discourse across both right and left forums after the election remains a subject for further analysis.

For the populism hypothesis, evidence was mixed. On one hand, the combination of attacks against elites and praise for common people was the most prevalent combination in the sample, and its prevalence grew from 2016 to 2017. However, this trend appeared to be driven primarily by an increase in praise for common people across the board, focused on defending the valued category of workers against a broad range of problems and oppressors. Another clear pattern was the dominance of attacks against elites in class discourse: the great majority of comments targeted elites for attack, often in the absence of other forms of boundary work.

Though the reputation of Reddit as an intense and filter-free arena of discussion might suggest it as a hotbed of populist rhetoric, very few comments met academic definitions of populism. Even among the comments that simultaneously praised common people and bashed elites, the great majority would earn a 0 on the holistic populism grading scale popularized by Hawkins (2010): very few highlighted the nobility, valor, or purity of common people and none referenced anything resembling popular sovereignty as the antidote to corrupt elites.

Elite downframing declined on the right as it rose in left and diverse spaces, supporting predictions that the discourse of anti-elite parties changes when those parties come to power. Further support came from the rise in elite praise in the polarized discursive communities of the left and right, a result that might have been missed by traditional populism studies that leave praise of elites out of the analysis.

Considering the race entanglement hypothesis showed how the intersectionality of class discourse varied depending on the political lean of forums. All forums held economic status and political identity at the core of their class boundary work, but the extent to which authors entangled these forms with other forms of social difference - the degree of intersectionality in class discourse - differed across forums.

The forums in which the race entanglement hypothesis performed best were on the right, but here the picture was more complicated than a simple entanglement between class and race. Following the predictions of McCall and Orloff (2017), right-leaning forums proved the most intersectional in their discussions of class, which became even more deeply entangled with race, gender, and borders rhetoric between 2016 and 2017. This result points to intersectional anxieties as a continuing feature of class discourse on the right and challenges studies of status threat to consider multiple forms of social difference in the discourse and attitudes of the right.

The trend in left-leaning and diverse forums was for class discourse to focus even more tightly on economic status, becoming in a way less intersectional. The salience of race, gender and sexuality declined on the left, cutting against the intuitive idea of the left as a center of intersectional coalitions. Patterns noted in the qualitative analysis show that struggles between class politics and other forms of social contestation still burned bright on the left in Summer 2017.

Some of the characteristics that separate the informal class discourse of these forums from traditional definitions of populism are likely the same ones that make informal discourse effective in mobilizing people. Criticizing elites without upframing common people - performing exclusion without specifying the target of inclusion - may make it possible to share negative emotions among a broader coalition of readers.

Similarly, upframing groups like workers while leaving their intersectional identity implicit may have broadened the range of resonance for many messages. The most dramatic shift across all forum categories from 2016 to 2017 was the rise in upframing of common people, generally focused on workers as such. Though authors may not have consciously eliminated race, gender, and other intersectional categories from their discussions of workers, they followed common scripts that addressed economic and political status in isolation and avoided topics that may have met with objections from their audience.

The way that authors implicitly associated the near-universally valued symbol of "worker" with whiteness and other privileged categories across left, right, and diverse forums shows the complexity of tracing intersectional formations in discourse. While authors rarely used the category of working class whites explicitly, several comments counterposed workers to marginalized racial and other groups, revealing that class discourse is often more intersectional than it appears on its face. When authors regularly read dozens of posts for every comment they write, explicit intersectional framings of class - even if rare - can contribute meaningfully to collective understanding of inequality.

CONCLUSIONS

Some patterns of class discourse seem to be shared across all subfields, traversing the broader field of advocacy forums. Two in particular raise questions for further study on whether these patterns persist in different settings and time periods. The first pattern in class discourse across all forums was the dominance of elite downframing, the strong and rising prevalence of subaltern upframing, and the relatively small prevalence of other forms of boundary work. The second was the intersectional structure of class discourse, in which the dominance of economic status grew from 2016 to 2017, political status played a strong role, and race, gender, and nationality formed a lower tier.

On the other hand, patterns unique to each subfield carry lessons and questions across the political spectrum. The relatively intense intersectionality of class discourse on the right shows the degree to which conversations about workers and class inequality on the right are deeply entangled with social differences outside economic and political status. Still, the capacity of authors on the right to mobilize around workers without explicitly referencing other forms of difference points to how class discourse served to organize a broader set of grievances on the right. Movements on the right positioned themselves against a deeply intersectional status threat, and any given voter could easily place themselves in a coalition against a "them" that went vaguely defined, partly because so many enemies were targeted and then cast to the side.

The relative lack of engagement with race, gender, and other forms of difference in the class discourse of left and diverse forums points to a different way of avoiding intersectionality that also raises urgent questions. Despite a few comments communicating the urgency of embedding class inequality in an intersectional framework in order to create and sustain the winning coalition of the "big tent", the dominance of economic status and the decline of other forms of social difference suggest that the left has a long way to go in creating a framework for the discussion of economic and power inequality that honors the intersectional history of class.

Intersectional boundary structures obtain in elite discursive fields just as much as in the informal discourse studied here. Still, this study shows intersectional boundary analysis to be particularly helpful in interrogating elite-driven narratives like the rise of populism and the white working class, going beyond the assumption that such narratives matter to determine whether and how they are picked up in the discourse of non-elites. Social movements and politicians do not create movements from whole cloth; rather, they attempt to assemble an array of targets for downframing and upframing that resonate with the dissatisfactions and sympathies latent in the conversations of the people they seek to mobilize. Analyzing informal discourse allows direct understanding of the social space that conditions all possibilities for political mobilization.

When scholars, journalists, and other elites conjure categories of popular grievance like the white working class, they often refer to social categories that bear little resemblance to how people discuss or understand themselves. Treating the white working class as a reified social group misses the peculiar alchemy that gives intersectional status threat its mobilizing power. On the other hand, studying historical and discursive relationships between social labels like "white", "worker", and "class" in intersectional context is crucial to reveal how elites attempt to carve groups like the white working class out of complex social space, and how best to respond in ways that support understanding and inclusion.

Research in the field of populism studies tends to make similar oversimplifications, spending energy on measuring "how populist" movements are without enough attention to specific targets of inclusion and exclusion. When movements and politicians seek to mobilize dissatisfied publics, it should be no surprise that they attack elites and praise common people in combination. These two forms of boundary work appear together regularly because they are the most effective in connecting to the daily experience of non-elites who perceive a social distance between their own cherished families and communities ("the people") and others who exercise outsized power over their lives ("elites").

Ironically, populism in its mainstream definition is a form of communication that may always be rare in popular discourse, precisely because making a statement of the form [common] people like you are the lifeblood of this country relies on an inequality of power between the audience and the speaker, who after all is claiming the power to define the people. Opening analysis of boundary work to all its emotional valences - measuring the balance of positive, neutral, and negative framing as people draw boundaries against marginalized others and praise their leaders as emerging elites - would enable a sharper analysis of struggles over precisely who constitutes "the people" and who is excluded from that category of moral worth.

Just as the study of elite discourse means little without careful attention to how and why that discourse influences a broader range of institutions and fields, analysis of quantitative metrics means little without qualitative study of how metrics come alive in real-world interaction. Right now, the investment of time and resources in the mixed-method study of discourse among everyday people pales in comparison to the investment in quantifying elite-driven constructs like populism and the white working class. Bringing these studies into balance would reveal political mobilization as one among many efforts by elites and everyday people to carve actionable social groups out of the intersectional complexity of social space.


Chapter 3

Emotion, Diversity, and Inclusion in Online Publics: Who Engaged and Who Avoided Politics after the 2016 Election?

ABSTRACT

After the 2016 presidential elections in the United States, newly mobilized advocates across a wide range of backgrounds and partisan positions joined publics online and offline that varied widely by the degree of polarization and emotional intensity. Scholars in the tradition of Jürgen Habermas argue for rational argument and diversity of perspective in deliberative spheres; others in the tradition of Nancy Fraser and Michael C. Dawson highlight the value of homogeneous, emotionally intense counterpublics. This study takes each of these theoretical perspectives as describing alternative modes of communication and explores how these modes influence people's decisions to engage or drop out of advocacy online. It traces the communications of 22,664 users who newly entered advocacy forums on Reddit in the months following November 8, 2016 to show when users chose to engage with cross-cutting discourse, flock together in homogeneous forums, or opt out of advocacy. Results help to specify widely reported findings of the positive effect of polarization and emotional intensity in sustaining participation, highlighting the conditions that fuel burnout among new advocates and revealing four types of public with unique effects on who sustains their advocacy over the long term.

INTRODUCTION

A wave of mass mobilizations during the 2016 US presidential campaign confounded political pundits, rattled established organizations, and raised a challenge to traditional understandings of political action. Both online and in person, a wave of newly mobilized people rose up to share their opinions in mainstream forums. Many were brand new to advocacy and many avoided established organizations on the left and right, creating their own groups or diving in to discussions on their own (Kováts 2018; Meyer and Tarrow 2018; Roth 2018).

This research is grounded in my own observations living in a diverse low-income community on the west coast of the United States. In the months before the elections of November 2016, national advocacy in our city was at a low simmer. After the election it came rapidly to a boil: I saw neighbors with a wide range of backgrounds and perspectives take up political engagement with a new seriousness. Within a month of the election, at least eight new grassroots advocacy groups were founded within a 2-mile radius of my home. By the end of 2017, most of these groups were gone, and the remaining members seemed more privileged than the group that first mobilized a year before. Who left and who remained? What about the experience of participation could have led to this change?

This study addresses the same question at a larger scale by focusing on advocacy forums online. What happened to this wave of people who were inspired to advocate in the tumultuous times of 2016? Did they gather together in homogeneous echo chambers, battle each other with high emotion, or engage in rational deliberation? Who deepened their engagement with advocacy online and who was pushed to the side? The answer to these questions has become deeply important as concerns over increasing challenges to democratic institutions in both theory and practice (Brown 2015; Foa and Mounk 2016, 2017) clash against assertions that participatory democracy is on the rise (Nabatchi and Leighninger 2015; Polletta 2013, 2014).

Scholars have disagreed strongly about the value and prominence of emotion and diversity in democratic forums. The deliberative tradition of Jürgen Habermas, which values rationality and diversity of perspective in political discussion, has historically dominated the conversation about normative models of democracy. Critics of Habermas in the mold of Nancy Fraser and Michael C. Dawson show how the enforcement of rationality in diverse publics can marginalize subaltern groups; they assert the importance of homogeneous counterpublics as alternative forums where these groups can build power. Finally, scholars following the work of Nina Eliasoph show how the expectation of disagreement and emotional conflict in political discussion can lead people to avoid political engagement altogether.

This study takes each of these theoretical perspectives as describing alternative modes of communication and explores how these modes influence people's decisions to engage or drop out of advocacy online. In particular, it traces the communications of the 22,211 users who newly entered advocacy forums on Reddit in the year following November 8, 2016 to show when users chose to engage with cross-cutting discourse, flock together in homogeneous forums, or opt out of political conversation.

ADVOCACY AND PARTICIPATION IN DEMOCRACY

The intensity of social divisions and group conflict in the United States presents a cutting challenge to the aspirations of democratic theory and the American image of unity in diversity. Social action online shows many of the same dynamics of division. Narratives of echo chambers and filter bubbles dominate the discussion of polarization, and stories of doxing and harassment show the intense emotional conflict that is possible online.

Despite these easy narratives of dysfunction in new media, studies on the impact of online behavior reveal a more complicated terrain that varies widely across modes of communication. Conversations online manifest every possible combination of polarization and homogeneity, rationality and emotional intensity. Revealing the distribution of these conversations online, and the way they influenced the pathways of the newly engaged authors who joined advocacy forums after the 2016 presidential election, requires engaging with what modes of communication are possible and how they relate to the practice of participation in diverse democracies.

Publics, deliberative and agonistic

Equal participation in advocacy cuts to the heart of democracy in theory and practice. When in practice political power or participation is deeply unequal, the legitimacy of democratic regimes starts to erode (Dryzek 2001; Gutmann and Ben‐Porath 2014; Habermas 1975; Lipset 1959). Still, despite agreement that participation is a core characteristic of democracy, there has been broad conflict over the question of how democratic participation should occur. The core disagreement has been between advocates of deliberative and agonistic democracy (Benhabib 1994).

Deliberative democracy centers on the idea of deliberation as a mode of communication, generally featuring rational communication across diverse perspectives as a path to consensus or at least broader intersubjective understanding across a single public. Theorists - most famously Jürgen Habermas, but also Joshua Cohen, Amy Gutmann, and John Dryzek, among others - have advocated deliberation as a way to resolve conflicts and broaden the circle of participation and power in democratic systems.

The idea of rational deliberation across diverse perspectives as the foundation of high-functioning democracy is anchored in the concept of the public sphere. The concept became popular following Habermas' classic study of the emergence of a bourgeois public sphere in Western Europe, in which people of various backgrounds met and discussed issues of common concern as a counterbalance to royal and aristocratic power (Habermas 1991). Habermas argued that political discourse functions best when participants can bracket inequalities and deliberate rationally as equals despite any differences in power or esteem outside the deliberative sphere (Habermas 1996).

This normative model focused initially on in-person communication but extended to a range of conversations including those in online forums, which pose a challenge to the idea of a single public sphere featuring a relatively consistent set of norms for discussion (Dahlgren 2005; Papacharissi 2002). As communication exploded online in the early 2000s, Habermas noted that the internet had "reactivated the grassroots of an egalitarian public of writers and readers" but criticized the emerging set of publics as contributing to the "fragmentation of large but politically focused mass audiences into a huge number of isolated issue publics" (Habermas 2006:423).

Where advocates of deliberative democracy sometimes envision a single public sphere governed by a shared set of rational norms of discourse, advocates of what is often called agonistic democracy - including the relatively mainstream models of representative and associative democracy - have criticized both the norm of rational communication and the idea of a single public sphere (Fraser 1997; Mouffe 2000). They point out that because interests never perfectly align in heterogeneous populations, consensus among groups will likely never emerge, and perhaps never should, making conflict and even coercion necessary to resolve conflicts in the interest of justice (Mansbridge et al. 2010).

Scholars in this tradition have pointed out that bracketing inequalities in discourse and discussing "as equals" can not only mask inequalities in the symbolic power each interlocutor brings to the conversation (Fraser 1990; Young 2002, 2011) but also contribute to the aggressive re-assertion of group identities (Bonilla-Silva 2014). The extent to which Habermas' account of the rise of the public sphere omitted slavery and colonialism - and played down the way that public spheres can exclude along lines of division including race, gender, and class - raised serious questions about its appropriateness as a model for discourse in diverse democracies (Collins 1998a; Negt and Kluge 1993).

Counterpublics and beyond: Polarization and emotion in participatory publics

The idea of the counterpublic emerged to describe homogeneous, emotionally intense spaces that broke the norms of public discourse articulated by Habermas and opened spaces for people of color, women, and other marginalized groups to pursue their own communicative norms (Fraser 1990). If deliberative spheres are rational and diverse, counterpublics are emotionally intense and homogeneous. Following the articulation of the subaltern counterpublic as a space of resistance, empirical studies have traced how counterpublics help stigmatized social groups establish new discourses and mount challenges to the dominant narratives of the mainstream (Dawson 1994; Hirschkind 2001).


Both counterpublics and the deliberative spheres advocated by Habermas are ideal types of a more general social phenomenon here called publics. Making publics plural - rather than imagining one undifferentiated public sphere as a unitary balance to government or corporate spheres - opens the possibility of empirical study on how publics differ, connect, and contribute to broader struggles for power (Bourdieu 1998; Calhoun 2010). Publics that include participation as a core characteristic of membership are often called participatory publics (Langlois et al. 2009; Wampler and Avritzer 2004) to distinguish them from, for instance, mass media publics that invite members mainly to receive communication.

Participatory publics are sites of repeated interaction whose members imagine an audience for their communication, which is to say that each public corresponds to a collectively imagined community (Warner 2002). Deeply related to concepts of field and subculture, they are places where people come together to collectively construct and narrate life. Because people who imagine themselves as part of a group or field imagine each other's reactions before planning their own (Kluttz and Fligstein 2016), these publics shape the space of possible thought, expression, and action among their members.

Diversity and emotional intensity are not only at the center of the distinction between deliberative spheres and counterpublics; they are also at the center of the subjective experience of participation in advocacy. The experience of addressing a public with which you share core beliefs is fundamentally different from addressing a group that you expect will dispute your ideas. When emotions run high, they raise the stakes of conversation - which can either lead some to exit the space (Habermas 1996) or generate shared emotional experiences as the basis of long-term connection and commitment (Collins 2014b; Young 2002).

Crucially, understanding participatory publics by their degree of emotional intensity and ideological diversity opens two other categories for exploration beyond counterpublics (emotionally intense and homogeneous) and deliberative spheres (rational and diverse) - categories that have sometimes been explored empirically but rarely enter the debate about norms of democratic participation. Publics that feature rational discussion among homogeneous peers are here called Incubators; those that are both ideologically diverse and emotionally intense are called Battlefields. Understanding how each operates in the context of this study requires exploring the stakes and character of publics and advocacy online.

Defenders of emotionally intense, homogeneous counterpublics share a core assumption with advocates of rational, diverse deliberative spheres: the type of public matters to the quality of discourse and to the lives of the people who participate. This study will test the idea that public type matters beyond the independent effects of emotional intensity, diversity, and other predictors of retention and engagement by considering two hypotheses:

Hypothesis 1: Independent effects. Homogeneity and emotional intensity have separate positive impacts on retention and engagement in political discussion.

Table 1. Types of participatory public by ideological diversity and emotional intensity

                          Ideological diversity -    Ideological diversity +
Emotional intensity -     Incubators                 Deliberative spheres
Emotional intensity +     Counterpublics             Battlefields


Hypothesis 2: Combined effects. Types of public featuring a combination of high or low homogeneity and emotional intensity have independent influences on attrition.
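The 2x2 typology in Table 1 lends itself to a simple operationalization. The sketch below is a hypothetical illustration, not the study's actual measurement pipeline: it assumes each forum has already been assigned an ideological-diversity score and an emotional-intensity score, and assigns one of the four public types via a median split on each axis. The forum names, scores, and cut-point rule are all invented for the example.

```python
# Hypothetical sketch of Table 1 as a classification rule.
# Assumes pre-computed (diversity, intensity) scores per forum;
# a median split on each axis yields the four public types.
from statistics import median

def classify_publics(forums):
    """forums: dict mapping forum name -> (diversity, intensity)."""
    div_cut = median(d for d, _ in forums.values())
    int_cut = median(i for _, i in forums.values())
    labels = {
        (False, False): "Incubator",
        (True, False): "Deliberative sphere",
        (False, True): "Counterpublic",
        (True, True): "Battlefield",
    }
    return {
        name: labels[(d > div_cut, i > int_cut)]
        for name, (d, i) in forums.items()
    }

# Illustrative scores only (not drawn from the data):
example = {
    "forum_a": (0.2, 0.9),  # homogeneous, intense
    "forum_b": (0.8, 0.1),  # diverse, calm
    "forum_c": (0.1, 0.2),  # homogeneous, calm
    "forum_d": (0.9, 0.8),  # diverse, intense
}
print(classify_publics(example))
```

A median split is only one possible cut-point choice; testing Hypothesis 2 would additionally require modeling the four resulting types as predictors of retention, beyond the two continuous scores entered separately.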

Below we explore how retention and engagement in an online setting like the advocacy forums of Reddit might connect to individual lives and to political discussion in broader publics.

Online publics and the stakes of participation

When Habermas critiqued the internet as a fragmenting force in 2006, online communication was still a curiosity at the margins of mainstream publics. The proportion of United States adults using social media platforms rose from 8% at the dawn of 2006 to 72% in 2019, with a majority using at least one platform at least once per day (Perrin and Anderson 2019).

The expansion of online participatory publics has made them important sites of democratic participation in their own right, with distinct internal dynamics and the capacity to be a countervailing force against historically dominant spheres like mass media (Benkler et al. 2015). Indeed, digital communication is an emerging center of gravity in scholarship on public spheres, pushing the field not only to consider the communicative practices of online forums and social media platforms but also to reconsider in-person communication through its multiplying connections to digital life (Rauchfleisch 2017).

Though one of the defining characteristics of participatory publics online is their wild divergence in style, structure, and membership, common trends among these spheres have begun to reshape the landscape and definition of political participation. Online mobilizations that target institutions beyond the state have led some scholars to consider expanding what counts as political participation (Theocharis 2015; Tormey 2015).

Online publics are also an emerging site for the practice of what Renato Rosaldo calls cultural citizenship (Rosaldo 1994). This view of citizenship rejects the power of state institutions to determine who counts as a citizen; it also widens the field of common concerns of citizens beyond the policies of state institutions, opening community institution-building, cultural activism, and advocacy targeting corporations and a wide range of other organizations as practices of citizenship (Beaman 2016; Paz 2019).

To capture the broader range of participation afforded by this definition of cultural citizenship, I use the term advocacy here as an umbrella concept that includes both traditional political advocacy focused on policy change and social advocacy that pushes for changes among a wider range of individuals, institutions, and cultural fields (Calhoun 2010). The Reddit advocacy forums studied here are dominated by communication in which authors develop and contest ethical and moral claims on subjects of common concern.

This stands in contrast to entertainment forums, where people share stories and comments for enjoyment without a concerted ethical focus, and to community forums, in which people share personal problems and solicit advice. Modes of communication connected to entertainment and community certainly enter advocacy forums, but advocacy is the only mode of communication online that has been associated with increased political participation (Skoric et al. 2016).

Empirical work suggests that participating in the advocacy forums of this study is both a meaningful form of cultural citizenship and a gateway to formal participation in political institutions. Trends on social media strongly predict public opinion, and communication online is ever more tightly connected to contests over formal political power (Farrell 2012; Oliveira et al. 2017). Those who get involved online often also get involved offline (Bode 2017; Boulianne 2015; Oser, Hooghe, and Marien 2013), particularly when their communication is rich with conversations on current events (Holt et al. 2013; Saldaña, McGregor, and Gil de Zúñiga 2015).

Social media platforms including Reddit and Twitter were a flashpoint of the 2016 elections (Lagorio-Chafkin 2018), particularly in the Trump and Sanders campaigns which embraced a more bottom-up approach to social media outreach (Enli 2017). They have both mobilized and included new populations across lines of division including but not limited to gender, race, and class (Morris and Morris 2013; Oser et al. 2013) and exacerbated inequalities along those same lines (Gray 2012; Schradie 2015). Symbolic boundaries and group affiliations formed online have repeatedly shown a capacity to influence cultural and political struggles at large scale, blurring the lines between informal discourse online and offline across nations and contexts (e.g., Hatakka 2016; Fuchs 2016).

Social media communication both provokes and channels emotions, as shown by case studies of international relations (Duncombe 2019) and online communities that organize around shared emotion (Manikonda et al. 2018; O’Neill 2018). Like emotions themselves, highly charged performances of boundary work online are contagious (Ferrara 2017) and often crucial in moving fringe cultural forms to the mainstream (Bail 2012). In many discursive communities online, boundary work can emerge as a form of cultural capital, with users earning status and recognition for cleverly insulting others (Nissenbaum and Shifman 2017).

The strength of social media in connecting geographically scattered individuals through shared boundary work has made online platforms tools of choice both for hate-based movements in which relatively privileged groups rail and plot against stigmatized others (Brown 2018) and for protective counterpublics where those same stigmatized others gather for support and community (Graham and Smith 2016; Jenzen 2017).

Though forums resembling counterpublics have so far dominated the headlines and scholarship surrounding the 2016 election - under the label of partisan echo chambers - participatory publics show wide variance in diversity and emotional intensity. Spaces like the forum changemyview on Reddit function as near-utopian deliberative spheres in which users deeply engage the perspectives of others (Musi 2018). Even beyond these spaces, mutual influence and persuasion happen often (Weeks et al. 2017), and connecting on issues of common concern frequently leads to addressing those issues together offline (Graham, Jackson, and Wright 2016).

Participatory publics like the advocacy forums of Reddit have great potential to shed light on the debate over counterpublics, deliberative spheres, and other normative models of communication in democracy because they make it possible to trace an unprecedented share and volume of discussion. Though any particular platform online only captures a relatively small amount of any individual's communicative action (Hampton 2017; Lazer and Radford 2017), the action that is captured can be both detailed and consequential (Evans and Aceves 2016).

This potential has so far gone relatively untapped. Many of the studies reviewed above have analyzed the effects of online participation on behavior and belief, and many have zoomed in to trace the norms and effects of specific communities, particularly highly emotional spaces that could be considered counterpublics. Still, few have inhabited the scale in between, showing how publics shape discourse and participation across differences in conversational norms. The present study seeks to fill this gap by exploring how conversation style shapes whether people engage or drop out of advocacy in participatory publics online.

PREDICTING RETENTION AND ENGAGEMENT IN ADVOCACY FORUMS

The following section situates the characteristics that divide the kinds of participatory public studied here - their degrees of polarization and emotional intensity - in the context of broader literature on participation, with the goal of developing a set of variables likely to influence people's engagement in advocacy online. The expansive literature on what influences political participation is clearly relevant to the question of what drives advocacy on platforms like Reddit, but it needs specification to apply well to this context.

Sociological work on mobilization and participation to date has offered a wealth of theories to explain why people mobilize - or don't - including strain, resource mobilization, and political process theory. The first two of these are deeply related to the question of why new authors first join advocacy forums - indeed, strain in particular has been associated with the rise in mobilization among partisans of both the left (Reny, Wilcox-Archuleta, and Cruz 2018) and the right (Boutcher, Jenkins, and Dyke 2017) during the Trump presidency. Still, both strain and resource mobilization are likely to hold constant for most people in the year after the 2016 elections, making these influences relevant mostly to the initial decision to participate rather than to continued engagement.

Work on political process theory, focused on the institutional conditions that allow advocacy not only to begin but to flourish and achieve its goals (Meyer and Minkoff 2004), offers more promise because its focus extends beyond the initial decision to participate. Though political process theory refers mainly to formal state institutions (Vráblíková 2014), the related concept of discursive opportunity structure (Ferree 2003; McCammon 2013; McCammon et al. 2007) is particularly promising for this study because it focuses on how the structure and style of a particular forum influences the behavior of the people who communicate within it.

Thinking about discursive opportunity structure engages the question of diversity by tracing how people are sometimes pushed out of publics depending on how well their messages resonate with dominant issues, ideas, or frames (McCammon 2013; Snow et al. 2014). People's subjective experience of participation can have long-term consequences for political and power inequalities: many who find their participation ignored or dissonant withdraw in a form of self-censorship (Schwalbe et al. 2000), shying away from politics even in their personal lives (Bennett et al. 2013; Eliasoph 1996, 1998) while some are inspired to embrace identities as radicals and push on (Ferree 2003).

The fruitful body of work on discursive opportunity structures has rarely engaged the quantitative question of the rate of attrition: which styles and structures of communication cause people to remain or drop out more than others? Turning to the literature on polarization and emotion gives a first step toward assembling the variables most likely to influence rates of attrition across types of public.

Ideological diversity and polarization

Polarization has become an overwhelming concern in research on social media platforms in recent years, based on a set of alarming claims that online forums had become "echo chambers" or "filter bubbles" where people engaged only with like-minded others (Settle 2018; Sunstein 2001, 2018). A series of studies exploring these bubbles and chambers confirmed that thousands of homogeneous, emotionally intense counterpublic-style forums did indeed exist online (Bakshy, Messing, and Adamic 2015; Barberá 2015; Himelboim, McCreery, and Smith 2013).

Still, while claims that online discourse had separated people into echo chambers have drawn plenty of excitement, studies show a more complex reality. Historical analysis showed that while polarization had increased, it started not on social media platforms but among political elites (Fiorina and Abrams 2008). For instance, polarization in language and voting patterns rose in the 1980s and 1990s among members of Congress but books published in the same period showed a weaker pattern of polarization (Jensen et al. 2012). Only by the early 2000s did reliable signals emerge that started to reach the broader populace (Abramowitz and Saunders 2008; Jost 2006).

Further evidence complicates the claim that online communication leads to polarization. Indeed, several studies show that it can actually increase exposure to ideological diversity (Barberá et al. 2015; Flaxman, Goel, and Rao 2016). Polarization increased most during the 2016 election cycle among demographics with low internet use (Boxell, Gentzkow, and Shapiro 2017), and resistance to the perceived dominance of highly emotional, polarized discourse has sent many young people searching for the rational discussions of deliberative spheres (Peacock and Leavitt 2016).

Diversity in advocacy communication has a mixed track record in empirical work on who engages and who drops out. Both diverse and polarized forums have been shown to struggle with inclusion and retention (Jackson and Banaszczyk 2016; Richard and Gray 2018). Though one study finds that users of homogeneous forums are in general more "open and happy", it does not address what happens to users over the long term (An et al. 2019). Both diverse and highly polarized forums on Reddit have shown stronger solidarity than more individualized platforms like Twitter (Manikonda et al. 2018).

Still, though few studies have looked at rates of attrition across multiple kinds of public, localized studies suggest that polarized spaces tend to sustain engagement better than diverse spaces. Homogeneous spaces online have created welcoming spaces not only for marginal political ideologies but also for people who feel marginalized along lines of race (Graham and Smith 2016), gender (O’Neill 2018), and trans status (Jenzen 2017) among others.

Further, diversity of opinion can depress multiple forms of participation. People sometimes avoid substantive discussion in diverse spaces because of a perception that it will lead to conflict (Mascheroni and Murru 2017; Settle and Carlson 2019; Testa, Hibbing, and Ritchie 2014) and this effect is particularly strong among young people on social media platforms, many of whom view social media as a "happy place" to connect with like-minded others (Kruse et al. 2018; Thorson 2014).

People often opt out of political discussions or hide their opinions when they feel in the minority (Cowan and Baldassarri 2018) or even disconnect on social media from people with whom they disagree (Bode 2016). Key to this behavior seems to be the feeling that encountering alternatives to strongly held beliefs can introduce uncertainty and even challenge a person's self-concept (Ecker and Ang 2019).

On the other hand, some studies suggest that diversity leads to an increase in participation, and that homogeneity can suppress it. Conversations that include strong disagreements can increase the motivation for sharing political information, contributing to a sense of political efficacy (Lane et al. 2017). Further, when whole polities fragment into polarized groups, discussion can be suppressed across the board even as the most polarized groups continue to participate (Wells et al. 2017).

Emotional intensity and rationality

The debate on echo chambers closely mirrors the debate on public deliberation, with highly emotional polarized discourse posed as the enemy of the rational deliberation supposed to be at the heart of democracy. This framing of the conflict between two alternatives - emotional Counterpublics and rational Deliberative Spheres - conflates emotion and polarization, missing the chance to consider them as independent variables with divergent effects.

Traditional models of political participation have tended to favor rationality, and some evidence suggests that exposure to intense emotion can cause people to exit from advocacy. As discussed above, people avoid diversity partly because they imagine the possibility of an emotionally intense conflict. Indeed, the very fact that politics is emotional can itself drive people away from political discussion (Peacock 2019).

Following this thread of reasoning, we might imagine rational, homogeneous Incubators to have the highest rates of retention. A growing field of advocates propose a form of discourse known as enclave deliberation as a way to build power among marginalized groups without the intense emotions of traditional counterpublics (Karpowitz, Raphael, and Hammond 2009). A series of Scandinavian studies suggest that deliberative norms can alleviate the polarizing consequences of discussion in like-minded groups (Grönlund, Herne, and Setälä 2015; Strandberg, Himmelroos, and Grönlund 2019) and may even reduce inequalities of efficacy and participation (Himmelroos, Rapeli, and Grönlund 2017).

On the other hand, emotion is a crucial part of emerging forms of citizenship that rebel against the feeling of powerlessness by embracing passion. This is a core part of the popularity of homogeneous counterpublics that reject respectability politics (Hill 2018; Jackson 2016) but holds true even in ideologically diverse settings (Kligler-Vilenchik 2017; Zuckerman 2014). Young people in particular have embraced emotional advocacy as part of a new politics emphasizing self-actualization and free expression (Loader, Vromen, and Xenos 2014; Xenos et al. 2014). This might suggest that Battlefields, both diverse and emotionally intense, make political engagement more thrilling and thus tend to increase participation (Mutz 2016).

Emotion is at the center of most modern theories of mobilization, which would predict emotional intensity as a motivator of participation regardless of the diversity of a given public. Emotion is extremely contagious and a deep predictor of sharing online (Coviello et al. 2014; Kramer et al. 2014). Mobilizing emotions are key to movement success (Goodwin, Jasper, and Polletta 2009; Jasper 2017; Polletta and Jasper 2001), particularly online (Bail 2016; Jost et al. 2018).

Shared emotion also plays a crucial role in creating and activating relationships and social networks. The sociological tradition reaching back to Durkheim and beyond emphasizes the power of the shared experience of emotion in creating social connections (Barbalet 2001; Bericat 2016; Durkheim 1912). This pattern seems to hold across a wide range of feelings (Collins 2004, 2014b; Papacharissi 2015); some empirical work on social media platforms suggests that emotional arousal itself drives connection (Berger 2011; Brady et al. 2017; Hasell and Weeks 2016).

Still, not all emotions have equal effects, and the specific character of emotionally charged discourse (Himelboim et al. 2016) matters to creating and sustaining participation. Negative emotion, and anger in particular, has drawn copious attention in recent years as an ingredient of successful mobilization. Repeated expression of negative emotions was a crucial tool for advocacy organizations seeking to enter mainstream conversations on national security in the early 2000s (Bail 2012). Well before the 2016 election cycle, activating negative emotions was key to political media and widely perceived as the most reliable fuel for partisan campaigns (Huddy, Mason, and Aarøe 2015; Sobieraj and Berry 2011).

Advocacy focus, partisan position, and boundary work

Understanding the roles that public type, ideological diversity, and emotional intensity play requires placing these factors in the context of other influences on whether people engage in advocacy or drop out. Three other factors - advocacy focus, partisan position, and boundary work - loom large in literature on participation across political psychology, political science, and cultural sociology.

The construct of advocacy focus is grounded in literatures of political participation in political psychology, which has mapped a range of individual factors that lead a person to engage in advocacy. The factor most intimately tied to decisions about engagement is political efficacy, an individual's belief that their efforts to engage in political action will produce meaningful results (Moeller et al. 2014). Political efficacy is closely entangled with political knowledge (Reichert 2016) and both strongly predict political participation; many structural equation models have tried to untangle relationships among the three (Jung, Kim, and Zúñiga 2011; Kenski and Stroud 2006; Reichert 2016).

The entanglement of political efficacy and political knowledge may come from the fact that they are part of a single construct intimately linked to an identity as an advocate, anchored not only in individual characteristics but in social routines and relationships of participation (Bennett et al. 2013; Lamprianou 2013). An individual's focus on advocacy comes not just from knowledge and efficacy but from moral commitments (Snell 2010) and a range of civic and social motivations (Gil de Zúñiga, Valenzuela, and Weeks 2016).

These social components of advocacy focus help to explain the deep influence of social networks on political participation (Campbell 2013; Diani 2004). Studies of network effects on participation suggest that the more time a person spends among others focused on advocacy, the more likely they are to remain and deepen their engagement. Only rare studies suggest that intense focus on advocacy forums might lead to burnout and that moderate engagement with politics might best facilitate long-term engagement (Wojcieszak and Mutz 2009).

The final set of factors likely to influence participation comes from the tradition in cultural sociology of symbolic boundaries, collectively constructed social lines that differentiate between social groups (Beljean et al. 2016; Lamont 2012). The construction of symbolic boundaries in communicative action is known as boundary work. Because it focuses on the question of which people fall on which side of symbolic boundaries - who is included and who is excluded - boundary work directly influences who feels welcomed in an advocacy forum and who feels discouraged.

This analysis tests a distinction between two types of boundary work likely to influence advocacy dropout and engagement in different ways. The first form of boundary work studied here is downframing, in which people use insulting or offensive words (the N word, the C word, and others) that cast social groups in a negative light. The second form is boundary activation, in which a form of social division appears in speech without a clear positive or negative charge. This includes category labels like "men" or "black people" that appear innocuous but implicate power relations (Bourdieu 1985), for instance functioning as part of racial grammars that highlight or assume the dominance of privileged groups (Bonilla-Silva 2012).

Though downframing and boundary activation as such have not been explored in literature on advocacy and participation, we can expect each to have different effects. When downframing occurs in homogeneous settings it is likely to activate shared negative emotions; when it occurs in diverse settings it has been shown to push targeted groups off of social media platforms and out of political participation (Gray et al. 2017; Sobieraj 2018). On the other hand, in some groups negative feedback tends to increase engagement on social media (Cheng, Danescu-Niculescu-Mizil, and Leskovec 2014), raising a challenge to an idea central to discursive opportunity structures that only positive feedback leads people to persist and thrive in publics.

Because the study of boundary work has only rarely considered boundary activation in its own light, the effects of this form of boundary work on participation are harder to predict. It features low levels of emotional charge on highly controversial issues, so measuring its effect on participation may provide a test of how well rational deliberation functions on issues connected to deep inequalities and strong feelings, precisely the issues that were excluded from Habermas' description of the bourgeois public sphere (Fraser 1990).

The wave of new advocates who mobilized after the contentious elections of 2016 had different partisan positions and levels of advocacy focus, spurred by - but not limited to - new mobilizations on the left (Andrews, Caren, and Browne 2019; Gutierrez et al. 2019; Meyer and Tarrow 2018; Reny et al. 2018) and on the right (Kováts 2018; Major et al. 2018). Literature on different practices of deliberation among partisan authors on the right and the left predicts lower valence, less rational deliberation, and less taste for diversity on the right (Boutyline and Willer 2017; Himelboim et al. 2016; Shaw and Benkler 2012).

The newly mobilized people who are the focus of this study entered a wide range of online publics that varied in polarization, intensity, and in the degree and composition of boundary work that circulated within them. Their experiences diverged as they engaged with Deliberative Spheres or Counterpublics, Incubators or Battlefields. Their appetite for further engagement in advocacy online also diverged. This study takes a first step toward untangling how norms of discourse relate to engagement outcomes.

RESEARCH DESIGN

This study uses summary analyses along with a series of logistic and multiple linear regression models to explore the relationship between modes of communication and political engagement, focusing on what happened to new authors entering advocacy forums on Reddit after the 2016 US presidential election. I begin by tracking changes in membership, polarization, and emotional intensity among advocacy forums on Reddit from Winter 2016-17 to Fall 2017, categorizing them as Deliberative Spheres, Counterpublics, Incubators, and Battlefields according to their levels of emotional intensity and ideological diversity.

I then gather a sample of all authors present in advocacy forums in the quarter after the election and a focused subsample of authors who were brand new to advocacy forums in Winter 2016-17 - i.e., those authors who were either new to the platform or had not posted more than once on an advocacy forum in the four quarters before.

The analysis that follows uses two sets of regression models to predict the influence of mode of communication on two dependent variables: 1) multiple logistic regression predicts a binary indicator of whether new authors left the advocacy forums (i.e., were present in 1 or fewer of the following three quarters); 2) multiple linear regression predicts the change in a new author's engagement, e.g., the difference between the number of their posts in the quarter following the election vs. the average in the following three quarters.
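The logic of the first model can be illustrated with a minimal, self-contained sketch. This is not the study's actual estimation code (which presumably relied on a statistical package); the single predictor and all data points are hypothetical, standing in for something like an author's exposure to emotional intensity:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Minimal logistic regression fit by stochastic gradient descent.
    X: list of feature vectors; y: list of 0/1 labels.
    Returns weights, with the intercept stored in w[0]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict_proba(w, xi):
    """Predicted probability of retention for one author."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical data: one feature per author; label 1 = retained,
# 0 = left the advocacy forums.
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [1, 1, 1, 0, 0, 0]
w = fit_logistic(X, y)
```

The multiple linear regression for the engagement-change outcome follows the same pattern with a continuous dependent variable.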

Site Selection: Seeking Advocacy on the Reddit Platform

This study follows a group of authors who posted to the online platform Reddit between November 8, 2015 and November 8, 2017, the year before and the year after the election. The corpus of their posts and the posts of millions of others is made accessible in monthly tables by pushshift.io and the Google BigQuery project.

With over 330 million active authors posting more than 900 million comments each year in over 80,000 forums, Reddit at the time of this study was the 6th most active website in the United States, with the expressed purpose to "ultimately be a universal platform for human discourse" (Lagorio-Chafkin 2018). A recent study by Pew Research Center estimates that 11% of the adult US population uses Reddit but 22% of those aged 18-29 use the platform, which also skews educated and male, with a more complicated racial skew toward white and Hispanic respondents (Perrin and Anderson 2019).

Like most large datasets, this one is subject to a testable rate of errors: an analysis by Gaffney and Matias estimated that roughly 0.043% of comments and 0.65% of submissions were missing, and pushshift.io was taking steps to reduce this rate over time (Gaffney and Matias 2018). The dataset includes both primary comments - the ones that start discussion threads - and the reply threads that follow them. The filtering process removed posts that clearly identified themselves as coming from bots or moderators, along with comments containing misleading content like copypasta, resulting in the removal of 12.4% of comments on average across the quarters studied.
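The filtering step described above can be sketched as follows. The field names, bot markers, and example comments are illustrative assumptions, not the study's actual filter:

```python
# Markers that self-identified bot comments commonly contain
# (hypothetical list for illustration).
BOT_MARKERS = ("i am a bot", "this action was performed automatically")

def keep_comment(comment):
    """Return True if a comment survives filtering."""
    body = comment.get("body", "").lower()
    if comment.get("distinguished") == "moderator":
        return False  # drop posts made in a moderator capacity
    if any(marker in body for marker in BOT_MARKERS):
        return False  # drop self-identified bot comments
    return True

comments = [
    {"body": "I disagree with this policy.", "distinguished": None},
    {"body": "I am a bot, beep boop.", "distinguished": None},
    {"body": "Thread locked.", "distinguished": "moderator"},
]
filtered = [c for c in comments if keep_comment(c)]
```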

The forums on this platform, known as "subreddits", function as publics with formal boundaries - i.e., each subreddit has its own name, location, and set of norms enforced by its own group of moderators. These online spheres often feature strikingly different cultures of communication; their divergent patterns of discourse make them strong candidates for the study of public types. Most of the forums on Reddit are loose agglomerations of users posting articles and other web links; a subset of forums develop a base of loyal contributors who spend hours a day reading, posting, and replying. In these forums, signals of group affiliation multiply: new phrases and memes appear and establish resonance by generating upvotes, reposts, and replies; users recognize and reference each other; demonyms begin to circulate inside and outside forums as a badge of collective identity (Mills 2017).

The platform features even more discourse of identity, empathy, and strong group interactions than a platform like Twitter (Manikonda et al. 2018). With a more deliberative culture than many platforms and no character restrictions, comments are sometimes short retorts or echoes of other comments but commonly feature long, detailed responses typical of deliberation (Massanari and Chess 2018).

Most importantly for this study of community identity and cohesion, Reddit emphasizes collective processes over individual accounts: unlike other platforms where individual pages are at the core of the user experience, user profiles are little more than a catalog of a user's posts. The real work of customization and identity construction on Reddit is collective, with forum moderators and dedicated users spending long hours discussing and tweaking the norms, rules, and appearance of their subreddits (Lagorio-Chafkin 2018; Manikonda et al. 2018).

Because forums rather than individual pages are at the center of the Reddit experience, users often express social identities by showing their mastery of the norms of their chosen subreddits (Mills 2017). This makes Reddit a strong site of community building for homogeneous groups like those that cluster together in Counterpublics, including those sheltering from boundary work and more explicit forms of violence (Dosono and Semaan 2019; O’Neill 2018) and those who seek to legitimize, construct, and weaponize boundary work (Prakasam and Huxtable-Thomas 2020).

However, the affordances of the platform support deliberation in both homogeneous and diverse spaces as well as the intense, homogeneous discourse typical of counterpublics. Unlike Facebook, Instagram, and Twitter, Reddit has no limit to the depth of replies on comment threads, making it possible for users to reference each other's words in long chains of conversation, exercising cognitive authority in argument (White 2019), learning from each other (Del Valle et al. 2020), solving problems together (Buozis 2019), and changing each other's views (Musi 2018). Of course, this affordance of the platform also makes it possible to develop and spread conspiracy theories and misinformation (Kou et al. 2017; Samory and Mitra 2018).

A change to the site occurred on February 15, 2017, only a few days after the second quarter analyzed in this study. The front page, which had previously been dominated by 50 "defaults", was opened to all forums except a few sports and political forums including several aligned with 2016 presidential candidates such as The_Donald and SandersforPresident. This certainly influenced the influx of new users to advocacy forums after the first quarter since it impacted what the most casual users were likely to see first when they logged in, though regular users like the ones who are the focus of this study generally see a tailored homepage featuring their subscriptions and would not have experienced much of a change.

This study controls for changes to the platform in two ways: 1) it focuses analysis on the authors' perspective, so that their experience in a given forum matters more than the overall popularity of that forum; 2) it excludes all forums tied to particular 2016 candidates from the summary analysis of trends in forum membership.

Measures

Dependent variables: retention and engagement. The binary indicator of retention showed whether an author remained present (more than 1 post) across all advocacy forums for at least 2 of the 3 quarters following Winter 2016-17. That is, if an author posted twice in both Spring 2017 and Summer 2017, they counted as retained. To assess the reliability of the logistic regressions using this variable, I tested this indicator alongside similar measures of whether an author remained present in 1 or more quarters and all 3 quarters.

The continuous indicator of engagement, showing how much an author's activity changed over the period, was the difference between the number of comments an author posted in advocacy forums in Winter 2016-17 and the average number of comments they posted in the three quarters following. To validate results, I tested this indicator alongside similar measures based on percent change in advocacy posts and changes over 1, 2, and 3 quarters.
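The two dependent variables can be summarized in a short sketch. The quarterly post counts are hypothetical, and the sign convention for the engagement change (following-quarter average minus baseline) is an assumption based on the description above:

```python
def retained(counts_following):
    """True if an author was present (more than 1 post) in at least
    2 of the 3 quarters after Winter 2016-17."""
    return sum(1 for c in counts_following if c > 1) >= 2

def engagement_change(baseline, counts_following):
    """Average posts in the three following quarters minus the
    Winter 2016-17 baseline (sign convention is an assumption)."""
    return sum(counts_following) / len(counts_following) - baseline

# A hypothetical author: 10 posts in Winter 2016-17, then 4, 0, and 3
# posts in the three quarters that follow.
r = retained([4, 0, 3])                    # present in 2 of 3 quarters
delta = engagement_change(10, [4, 0, 3])   # (7/3) - 10, a decline
```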

Neither of these indicators is a perfect metric of participation in advocacy. Authors who left the platform may have directed their advocacy to another platform, taken their advocacy offline, or even paused their advocacy only to continue after the time window of this study. Still, because these outcomes are likely to be distributed equally across forums, it is reasonable to expect that differences in participation within these forums will be affected meaningfully by factors endogenous to the platform.

Core independent variables: polarization, emotion, and public type. Estimating the polarization of each forum, and ultimately the degree to which each author was exposed to polarized spaces, depended on reliably coding each forum for whether it focused on advocacy and whether it had a partisan lean. To classify the type of forum - whether its conversations focused on advocacy, community, entertainment, or something else - two research assistants and I each independently classified the full set of 3,263 subreddits that were active (3K+ comments) in any of the eight quarters from Winter 2015-16 to Fall 2017.

We then classified the 159 active advocacy forums by their political lean, distinguishing forums that hosted a diverse range of opinions from ones that leaned to the political right or left. Among diverse forums we distinguished a subcategory of debate forums - like CapitalismVSocialism, changemyview, and PurplePillDebate - that focused specifically on exchange between diverse perspectives. We met to resolve the 4% of forums that were classified differently across coders. We validated our category classifications against several category schemes internal to Reddit along with the partial set of category memberships developed by Kumar et al. (2018).

To obtain a polarization score for each forum, I first measured, for each of that forum's authors, the proportion of their comments posted in forums hand-coded as leaning left and the proportion posted in forums leaning right. I then computed the difference between the two and averaged it across all of the forum's comments, thus giving more weight to the forum's most active authors.
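A minimal sketch of this forum-level score, assuming a comment table with author and forum columns and a hand-coded lean mapping. Treating the left-right difference as an absolute value is my reading of "the difference between the two"; authors who posted in neither partisan direction score zero, matching the behavior described below:

```python
import pandas as pd

def forum_polarization(comments: pd.DataFrame, lean: dict) -> pd.Series:
    """comments: one row per comment with 'author' and 'forum' columns;
    lean maps forum name -> 'left' / 'right' / other. Returns one
    polarization score per forum (comment-weighted author average)."""
    df = comments.copy()
    df["lean"] = df["forum"].map(lean)
    totals = df.groupby("author").size()
    # each author's share of comments in left- vs right-leaning forums
    p_left = (df[df["lean"] == "left"].groupby("author").size() / totals).fillna(0)
    p_right = (df[df["lean"] == "right"].groupby("author").size() / totals).fillna(0)
    author_score = (p_left - p_right).abs()
    # averaging over comments (not authors) weights active authors more
    df["score"] = df["author"].map(author_score)
    return df.groupby("forum")["score"].mean()
```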

Crucially, this scale is focused on the extent to which partisan authors from one side or the other are dominating the conversation. It is not affected by the presence of authors who posted in neither left-leaning nor right-leaning forums, which means that it tends toward zero as the presence of partisan authors decreases. To measure the presence of emotion in comments, I used the lexicon AFINN (Nielsen 2011), designed to track emotional expression online and a strong performer in a meta-analysis of sentiment tools, particularly on social media (Ribeiro et al. 2016). For more information on the process used in refining and validating these measures, please see the Methodological Appendix.
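The valence/intensity distinction can be illustrated with an AFINN-style scorer. The tiny lexicon below only mimics AFINN's -5 to +5 word weights for illustration; it is not the actual AFINN-111 word list:

```python
# Illustrative mini-lexicon in the style of AFINN (NOT the real AFINN-111).
MINI_AFINN = {"love": 3, "great": 3, "hate": -3, "terrible": -3, "outrage": -3}

def score_comment(text: str):
    """Return (valence, intensity) for a comment: valence is the signed sum
    of matched word weights, intensity the sum of their magnitudes."""
    hits = [MINI_AFINN[w] for w in text.lower().split() if w in MINI_AFINN]
    valence = sum(hits)
    intensity = sum(abs(h) for h in hits)
    return valence, intensity
```

A comment can thus be highly intense but near-neutral in valence if it mixes strongly positive and strongly negative words.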

As a measure of the face validity of these measures, I computed the average polarization, valence, and intensity of forums by lean in the two years from Winter 2015-16 to Fall 2017. Partisan forums showed high polarization and intensity, with the valence of forums on the left dropping as intensity rose throughout the period. Diverse and debate forums show a similar pattern of low polarization throughout the period but diverge in valence and intensity: diverse forums showed the most consistently negative emotion, though less intensity than the partisan forums, throughout the period; debate forums showed little emotion at all.

To divide advocacy forums into types of public - Deliberative Spheres, Counterpublics, Incubators, and Battlefields - I sorted the forums into quartiles by their polarization and emotional intensity. Forums in the top two quartiles of both intensity and polarization were Counterpublics; forums with low intensity and low polarization were Deliberative Spheres. Forums with high intensity and low polarization were Battlefields and those with low intensity and high polarization were Incubators.
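Since the top two quartiles of a measure are exactly the values above its median, the typology can be sketched as a median split on both dimensions. The column names are assumptions about the forum-level data layout:

```python
import numpy as np
import pandas as pd

def classify_publics(forums: pd.DataFrame) -> pd.Series:
    """Assign each forum a public type from a median split on polarization
    and emotional intensity (top two quartiles = above the median)."""
    hi_pol = forums["polarization"] > forums["polarization"].median()
    hi_int = forums["intensity"] > forums["intensity"].median()
    types = np.select(
        [hi_pol & hi_int, hi_pol & ~hi_int, ~hi_pol & hi_int],
        ["Counterpublic", "Incubator", "Battlefield"],
        default="Deliberative Sphere",
    )
    return pd.Series(types, index=forums.index)
```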

Three sets of indicators emerged from this analysis as the core independent variables in the regression models that followed: 1) the ratio of posts that each author made in Deliberative Spheres, Counterpublics, Incubators, and Battlefields; 2) indicator variables for authors who spent more than 50% of their posts in each public type; and 3) emotional intensity and polarization averaged across the forums each author was exposed to, weighted by the author's number of posts in each forum. Including indicators tied to each public type functioned similarly to adding an interaction term between polarization and emotional intensity, but was more specific: it tested not just whether the influence of one rises or falls with the other but the specific effects of intersections of polarization and diversity, rationality and intensity.
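A sketch of how these author-level indicator sets might be derived from per-forum post counts and forum-level scores. The schema (column names, a forums table indexed by forum) is illustrative, not the study's actual data layout:

```python
import pandas as pd

def author_exposure(posts: pd.DataFrame, forums: pd.DataFrame) -> pd.DataFrame:
    """posts: rows of (author, forum, n_posts); forums: indexed by forum
    with columns public_type, polarization, intensity. Returns post-weighted
    exposure scores, per-type post shares, and >50% commitment dummies."""
    df = posts.merge(forums, left_on="forum", right_index=True)
    # post-weighted average exposure to polarization and intensity
    df["wpol"] = df["polarization"] * df["n_posts"]
    df["wint"] = df["intensity"] * df["n_posts"]
    agg = df.groupby("author")[["wpol", "wint", "n_posts"]].sum()
    exposure = pd.DataFrame({
        "polarization": agg["wpol"] / agg["n_posts"],
        "intensity": agg["wint"] / agg["n_posts"],
    })
    # share of posts in each public type, plus >50% commitment dummies
    shares = df.pivot_table(index="author", columns="public_type",
                            values="n_posts", aggfunc="sum", fill_value=0)
    shares = shares.div(shares.sum(axis=1), axis=0)
    committed = (shares > 0.5).add_prefix("committed_")
    return exposure.join(shares).join(committed)
```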

Confounding independent variables: boundary work, advocacy focus, and partisan position. To target downframing I started from the Hatebase, a crowdsourced collection of offensive words and phrases used online (Hine et al. 2017). The Hatebase offers a strong launching point for the analysis of boundary work online because its users not only collect words relevant to downframing, but also tag which symbolic boundaries relate to which words. Because the Hatebase lexicon contains multiple inaccuracies and false positives and requires significant editing to be of use (Davidson et al. 2017), I worked with two research assistants to develop and validate lexicons measuring boundary activation and downframing related to race, class, gender, sexuality, political identity, immigrant status, and religion.

Each of these original lexicons identifying forms of boundary work was refined and validated in four major ways: 1) ranking ngrams from the full Reddit corpus by polarization and emotional valence; 2) sending the draft lists to three other scholars with experience in social media and content analysis; 3) analyzing the first 20 results from Reddit Search for each word or phrase during the study period and flagging for deletion any that produced two or more false positives; and finally 4) testing each lexicon for false positive rates against a sample of 1,580 hand-coded comments. For further details on this process, please see the Methodological Appendix.
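Step 4, testing false positive rates against hand-coded comments, might look like the following simplified matcher. This is a hypothetical helper with naive substring matching, not the study's validation code:

```python
def false_positive_rate(lexicon, coded):
    """Share of lexicon 'hits' that hand-coders judged NOT to be downframing.
    coded: iterable of (comment_text, is_downframing) pairs; a hit is any
    comment containing a lexicon phrase as a substring (simplified matching)."""
    hit_labels = [label for text, label in coded
                  if any(phrase in text.lower() for phrase in lexicon)]
    if not hit_labels:
        return 0.0
    return sum(1 for label in hit_labels if not label) / len(hit_labels)
```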

Figure 1. Scores for core variables across qualitatively coded forum groups by quarter, Winter 2015-16 to Fall 2017

[Figure 1 comprises three panels - Polarization, Valence, and Intensity - each plotting quarterly scores from Winter 2015-16 to Fall 2017 for the Debate, Diverse, Left, and Right forum groups.]

To test whether the effects of other predictors changed with partisan position, I created dummy variables identifying authors who spent more than half of their advocacy comments in left and right forums respectively.

Sample & Analysis

To create a sample capturing how the activity of newly engaged people changed after the election, I first identified all authors who were both present (1+ post) and interested (25%+ of their total posts across Reddit) in advocacy forums in the quarter directly after the election (Winter 2016-17). I downloaded all comments from these authors in that quarter for analysis and collected a range of statistics on their activity from a year before the election (Winter 2015-16) to a year after (Fall 2017).

To focus on people who were newly engaged after the election, I then created a subset of the authors who were not present in advocacy forums in any of the quarters for a year prior to the election. To eliminate bots and spammers, I identified and removed users who used language associated with either (e.g., "I am a bot"), those with identifiable repeated patterns of posting, and those with 100 or more posts in a quarter. For the sample used in regressions, I further filtered out users with fewer than 10 posts to eliminate noise and focus on authors whose relatively deep engagement suggests they are likely to remain, making dropout an outcome to explain rather than expect.
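The bot and activity filters can be sketched as below. Detection of repeated posting patterns is omitted for brevity, and the column names ('text' for concatenated comments, 'posts_q' for quarterly post count) are assumptions:

```python
import re
import pandas as pd

# self-declared bots commonly announce themselves with this phrase
BOT_RE = re.compile(r"\bi am a bot\b", re.IGNORECASE)

def filter_regression_sample(authors: pd.DataFrame) -> pd.DataFrame:
    """Drop self-declared bots, heavy posters (100+ posts per quarter),
    and low-activity authors (fewer than 10 posts). Repeat-pattern
    detection is omitted; the schema is illustrative."""
    is_bot = authors["text"].str.contains(BOT_RE)
    spammy = authors["posts_q"] >= 100
    too_few = authors["posts_q"] < 10
    return authors[~is_bot & ~spammy & ~too_few]
```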

The series of summary analyses and regression models that followed explores how these authors' engagement with various types of forum shaped their participation. Crucially, this is not an attempt to fully explain their decisions to participate in online advocacy or not. These decisions were likely influenced by a range of factors not measurable in the current study, many of them rooted in experiences and relationships offline, contributing to a noisy dataset and a relatively low R-squared in each model.

The noise level and size of the dataset require a very high level of validation to interpret effect sizes for each of the predictors. Because p-values can be misleading for large datasets, I report confidence intervals for each predictor and interpret based on the smallest effect size in the range (Lin, Lucas, and Shmueli 2013). All predictor variables are transformed into z-scores - centered and scaled to mean 0 and standard deviation 1 - to facilitate comparison.
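The z-score transformation is straightforward; a minimal helper is shown below (using the population standard deviation, one of two common conventions):

```python
import pandas as pd

def to_zscores(df: pd.DataFrame, cols) -> pd.DataFrame:
    """Center and scale the given predictor columns to mean 0, SD 1 so
    that coefficients are comparable: one unit = one standard deviation."""
    out = df.copy()
    out[cols] = (df[cols] - df[cols].mean()) / df[cols].std(ddof=0)
    return out
```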

The final set of regressions reported was selected to best capture a complex field of results. I also ran many additional regressions with interaction terms, excluded variables, and additional variables to test the robustness of each reported model. Beyond the standard approach of using multiple variable groups in regression, I conducted both commonality analysis (Kraha et al. 2012; Ray-Mukherjee et al. 2014) and bootstrap analysis (Davison and Hinkley 1997) to validate the direction and relative size of each coefficient. For a fuller treatment of validation measures, please see the Methodological Appendix.
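A generic percentile-bootstrap sketch of the coefficient validation idea (not the dissertation's exact procedure); `fit` can be any estimator that returns a coefficient vector:

```python
import numpy as np

def bootstrap_coef_ci(X, y, fit, n_boot=500, seed=0):
    """Percentile-bootstrap 95% CIs for the coefficients returned by
    fit(X, y). Resamples rows with replacement n_boot times and returns
    a (2, n_coef) array of lower and upper bounds."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = np.array([fit(X[idx], y[idx])
                      for idx in (rng.integers(0, n, n) for _ in range(n_boot))])
    return np.percentile(draws, [2.5, 97.5], axis=0)
```

For example, passing an ordinary least squares fit gives bootstrap intervals for each regression coefficient, which can then be checked for consistent sign and magnitude across model variants.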

RESULTS

Polarization, attrition, and emotion by public type

The advocacy forums in the sample followed a divergent range of patterns and cultures, but most separated readily into the four public types identified above. Counterpublics like ChapoTrapHouse, a left-leaning forum founded soon after the election, featured a relatively homogeneous set of authors (polarization z-score zpol=2.17) sharing links and commenting with emotional intensity on current events (intensity zint=0.94). By comparison, an Incubator forum like Conservative was by no means diverse (zpol=0.75) but saw far less emotional intensity (zint=-0.16) in conversations that often focused on polls and political strategy. There were slightly more right-leaning than left-leaning Counterpublics (59%, 16/27), but fewer Incubators leaned right (29%, 4/14).

The space of advocacy forums was unevenly divided across the public types. The polarization and intensity of forums were correlated (r=0.40) and the greatest number of forums were Deliberative Spheres or Counterpublics. The forum worldnews was by far the largest Deliberative Sphere, with over 300 thousand authors in the quarter after the election and conversations that were both highly diverse (zpol=-0.93) and relatively low in emotional charge (zint=-0.04). Among Battlefields, the forum news was largest and typically diverse (zpol=-0.88) and intense (zint=0.27), but six left- and right-leaning forums like Trumpgret and progun also included enough authors from opposing views to qualify for the category. Only three forums on the left and two on the right were diverse and dispassionate enough to be classified as Deliberative Spheres.

Through the quarter before the election, every type of public experienced a clear and relatively steady increase in total authors, with the largest increases coming in Incubators, which averaged 21% growth per quarter. In the quarter after the election, the membership of each subfield increased dramatically, but the largest increases were in the most homogeneous forums, with 39% growth in Counterpublics and 43% growth in Incubators. The growth then stopped in each category, with little change in total authors across the board.

The change in committed authors, who spent more than 50% of their posts in the same forum, followed a different pattern. Here the largest initial leaps in the quarter after the election came in the least emotional subfields, Deliberative Spheres (38%) and Incubators (26%). Over the following quarters, Deliberative Spheres continued moderate growth (4% per quarter) while Incubators lost committed authors (-6% per quarter); the largest sustained gains were in Counterpublics (8% per quarter).

These changes focus on the subfield overall, but the individual trajectories of forums over this time were divergent. Where p-values are significant in Table 2, they show all forums in the subfield moving in a similar direction, as in the large jumps in authors and posts in Counterpublics from the quarter before to the quarter after the election. Where changes are large but not significant, the overall pattern in the subfield is driven by change in individual forums.

Table 2. Average change per quarter by forum type, Quarter 1 (Winter 2015-16) to Quarter 8 (Fall 2017).

                                               Avg % change           Avg % change           Avg % change
                     # forums  # authors       total authors          committed authors      total posts
                     Q5        Q5           Q1-4   Q4-5    Q5-8    Q1-4   Q4-5    Q5-8    Q1-4   Q4-5    Q5-8
Deliberative Sphere  39        272,998      6%*    18%**   -2%     11%    38%     4%      11%    28%     2%
Counterpublic        40        201,657      9%     39%***  1%      -1%    16%**   8%      2%     20%***  9%
Incubator            19        121,108      21%    43%*    1%      9%     26%     -6%     13%    23%     -6%
Battlefield          19        149,598      4%     23%     0%      -1%    9%      0%      0%     14%*    0%

* P < .05. ** P < .01. *** P < .001.

For instance, while Counterpublics on the right had roughly four times the committed authors of forums on the left in the quarter after the election, the 20 Counterpublics on the left grew 49% over the last three quarters of the study, led by an increase of 1,449 authors to the sarcastic, emotionally intense ChapoTrapHouse. Counterpublics on the right grew by 30%, led by unpopularopinion and the gender-focused MGTOW. The 9 Counterpublics that were both in the top quartile of polarization and the top quartile of emotional intensity grew by 2,252 committed authors (109%).

Among Deliberative Spheres the differences were even more stark: of the 39 total, the 21 that featured explicit political debate between left and right declined in committed authors by 4%, with increases in forums like AskAnAmerican and changemyview balanced by decreases in forums like AskTrumpSupporters, which aimed at helping people on the left understand why others support Trump. Deliberative Spheres that drew authors from both left and right on topics outside mainstream political debate grew more rapidly, led by the hundreds of authors who flocked to conspiracy (which features remarkably dispassionate debate given its subject matter) and Libertarian.

Of the 17 new forums founded after Winter 2015-16, 9 were Counterpublics and none were Battlefields. Of the six founded in the quarter after the election, the three largest were left-leaning counterpublics Trumpgret, Impeach_Trump, and OurPresident.

A flow analysis for authors who stayed active in advocacy forums on Reddit shows how the preferences of individual authors shifted between Winter 2016-17 and Fall 2017. Only 32% of authors who initially were committed to one type of public (i.e., who had 50% or more of their posts in that public type) shifted to another - or to no preference - during this time. Those who did shift their commitment to new public types followed the pattern below:

This analysis shows an overall flow out of Incubators and toward Deliberative Spheres and Counterpublics among authors who switched commitments between Winter 2016-17 and Fall 2017, visualized in Figure 2. Over half of authors who spent most of their posts in Incubators during the quarter after the election (54%) switched out, compared to only 22% of authors who committed to Counterpublics. The specific structure of the flows shows relationships between particular public types - for instance, very few authors who left Incubators joined Battlefields (7%), while many who left Battlefields joined Counterpublics (58%).

Table 3. Changes by forum type, Quarter 1 (Winter 2015-16) to Quarter 8 (Fall 2017).

Public type committed    % who           % of switchers who became committed to each new forum type by Fall 2017
in Winter 2016-17        switched out    Deliberative Sphere   Counterpublic   Incubator   Battlefield   Total
Deliberative Sphere      32%             -                     45%             33%         22%           100%
Counterpublic            22%             36%                   -               35%         30%           100%
Battlefield              31%             28%                   58%             13%         -             100%
Incubator                54%             45%                   48%             -           7%            100%
No initial commitment    N/A             41%                   30%             16%         12%           100%
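The flow analysis in Table 3 amounts to a row-normalized transition matrix over authors whose commitment changed. A sketch, assuming per-author start and end commitments as the inputs:

```python
import pandas as pd

def switch_flows(start: pd.Series, end: pd.Series) -> pd.DataFrame:
    """Rows: public type committed to at the start; columns: share of
    switchers landing in each type at the end. Both series are indexed
    by author; authors whose commitment did not change are excluded."""
    end = end.reindex(start.index)
    moved = start[start != end]
    return pd.crosstab(moved, end.loc[moved.index], normalize="index")
```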


Retention - Where did authors stay?

Table 4 shows patterns of attrition for the regression sample, including only authors who posted between 10 and 99 comments in Winter 2016-17. The table shows the difference in rates of author dropout depending on the experience level of the author and the public type they spent most time in; for instance, an author appearing in the Battlefield column in the newly committed section had 0 posts in any of the four quarters prior to the election and spent more than 50% of their advocacy comments in Battlefield forums.

New authors dropped out of advocacy forums at high rates after joining in Winter 2016-17. Of the 22,211 new authors, 42% did not return to advocacy forums in any of the three quarters following; by contrast, only 8% of established authors, who had posted more than once in at least 3 of the 4 quarters before the election, dropped out. Where 21% of new authors were present in these forums for the full year after the election, this number was 62% for established authors.

Measuring retention of authors committed to a given public type - who spent more than half of their comments in forums of that type - sheds light on how these types function differently. Authors who spent most of their comments in Incubators had the lowest rates of retention across the board, with 52% of established authors and 15% of new authors remaining through the study period.

Counterpublics had the highest retention rates of the public types, with 62% of established authors and 23% of new authors remaining through the period. The other category with high retention rates was the authors with no type dominant, who divided their time and comments among multiple public types.

There were major differences in nearly all indicator variables between authors who stayed present in advocacy forums for 2+ quarters and those who did not. Those who stayed had a 16% lower ratio of posts in advocacy forums, commenting in a greater diversity of forums across the platform. The forums they visited had a 16% lower average polarization score than those visited by authors who dropped out. In boundary work, those who stayed had 10% lower exposure to boundary activation and 14% lower exposure to downframing.

Figure 2. Flow analysis tracking changes in the posting behavior of committed authors between Winter 2016-17 and Fall 2017. [Panels: all authors committed to a forum type in Winter 2016-17; authors who switched among forum types (committed to one forum type at start and another at end).]


All the patterns above held true across all public types except the pattern for downframing. Among authors committed to counterpublics, exposure to downframing was slightly higher (1%) for those who stayed than for those who dropped out. The strongest downframing difference was among authors with no dominant public type, where authors who stayed were exposed to 25% less downframing. The regression models below assess the ways these indicators interact to predict an author's chances of remaining or dropping out, increasing or decreasing their advocacy engagement on the platform.

Predicting author retention for new authors

The first set of models used multiple logistic regression to predict whether new authors (N=22,211) stayed present in advocacy forums for 2+ quarters following the first quarter after the election. Because continuous variables are centered and scaled to z-scores with mean 0 and standard deviation 1, odds ratios measure the change in likelihood associated with moving up one standard deviation from the mean. Only odds ratios with 95% confidence interval different from 1 that proved robust across all models - including models with and without confounding variables and models from bootstrap regression - are reported below.
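The reporting of odds ratios with 95% confidence intervals can be illustrated with a plain Newton-Raphson logistic fit. This is a self-contained sketch under the z-scoring assumption described above, not the study's actual estimation code:

```python
import numpy as np

def logit_odds_ratios(X, y, n_iter=25):
    """Newton-Raphson logistic regression with Wald 95% CIs, reported as
    odds ratios per column of X (assumed z-scored; intercept added here),
    so each odds ratio reflects a one-standard-deviation increase."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        W = p * (1 - p)
        H = Xd.T @ (Xd * W[:, None])            # observed information
        beta = beta + np.linalg.solve(H, Xd.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))
    lo, hi = beta - 1.96 * se, beta + 1.96 * se
    # drop the intercept and exponentiate coefficients into odds ratios
    return np.exp(beta[1:]), np.exp(lo[1:]), np.exp(hi[1:])
```

An interval that excludes 1 on both sides corresponds to the "95% confidence interval different from 1" criterion used for reporting.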

Model 1A estimates the independent effects of polarization and of emotional valence and intensity; Model 1B predicts the odds of remaining engaged in advocacy forums from the ratio of comments an author contributed in each public type; and Model 1C uses indicator variables for authors who spent more than 50% of their comments in each public type. As expected, none of the models captured anywhere near the full range of factors influencing the decision to engage in advocacy on Reddit: McFadden pseudo R-squared statistics were 0.025 for 1A, 0.024 for 1B, and 0.023 for 1C.

Still, several predictors proved robust within and across the models. Figure 3 reports the results of models with core independent variables (Core), with continuous confounders (Core+), and a fully loaded version (Full) including dummy variables for authors committed to left and right forums.

Table 4. Authors with 10-100 posts in Reddit advocacy forums in Winter 2016-17; percent present in the 3 quarters after

                          Persistent committed authors           Newly committed authors
                          (present 3+ prior quarters)            (present in 0 prior quarters)
                          Delib.    Counter-  Incu-   Battle-    Delib.    Counter-  Incu-   Battle-
                          Spheres   publics   bators  fields     Spheres   publics   bators  fields
# in Winter 2016-17       2,782     2,727     1,360   762        2,153     1,610     884     513
Present 1+ quarters after 91%       92%       87%     92%        52%       62%       52%     55%
Present 2+ quarters after 78%       80%       72%     80%        30%       38%       29%     30%
Present all 3 quarters    60%       62%       55%     61%        17%       22%       15%     14%

Note: Authors listed in each public type posted 50%+ of advocacy posts in that public type during Winter 2016-17.


Core variables: polarization, emotion, and public types. Polarization and emotional intensity both had independent positive effects on the chances of remaining active in advocacy forums in model 1A. Every standard deviation increase in the average polarization of an author's forums - an increase of 18.5 - raised the author's chances of remaining by 20% (95% CI 12-28%) net of other factors. When an author's exposure to emotional intensity rose by 9%, the model predicted 11% (95% CI 6-16%) higher chances of remaining. Though valence was a significant predictor of retention in the Core model, it had no significant influence in the Full model.

Authors who committed to Incubators and Battlefields in the quarters after the election had a significantly lower chance of remaining on the platform as shown in model 1C. Authors who contributed over half of their comments to Incubators in Winter 2016-17 had chances of remaining 33% (95% CI 22-43%) lower than other authors; those who committed to Battlefields had chances 25% (95% CI 8-39%) lower.

Though committing to Deliberative Spheres was a negative predictor of retention in the Core and Core+ models, it was not significant net of all variables in the Full model. Model 1B shows a similar pattern, with the ratio of posts in Incubators and Battlefields as significant negative predictors of retention and the other public types not significant, but with weaker effects across all public types than in the model focused on indicator variables for committed authors.

Advocacy focus, boundary work, partisan position. The ratio of posts in advocacy forums was a strong negative predictor in all three models. Across the three models, every standard deviation increase in advocacy percent - an increase of 26% in the percentage of posts in advocacy forums - lowered the odds of staying active with a 95% confidence interval between 21% and 34%. The effect was weaker in models 1A and 1B, suggesting that authors' engagement with public type captures some of the explanatory power of advocacy focus.

Boundary work had a divergent effect: exposure to boundary activation, where authors discussed race, gender, class, and other forms of social difference, had a significant negative effect on authors' chances of remaining across all three models. On the other hand, exposure to the negatively charged language of downframing had a positive effect. The effect is weakest in model 1A; since

Figure 3: Retention in advocacy forums. Odds ratios with 95% confidence interval predicting probability of new authors staying present in advocacy forums 2+ quarters after Winter 2016-17. Testing three sets of variables: 1A) polarization and intensity; 1B) ratios of posts by type of forum; 1C) dummy variables for authors spending over 50% of time in that type of forum. N=22,664.


downframing and emotional intensity are correlated, some of the effect of downframing may have been captured by introducing emotional intensity into the model.

Analysis of partisan position showed generally higher odds of remaining on the platform (95% CI 21-71%) for authors who committed to left forums. Model 1C reported positive odds for authors with a majority of posts in left forums with a similar gap between left and right. Adding these dummy variables increases the explanatory power of both downframing and boundary activation across all models, suggesting that the impact of boundary work varied according to partisan position.

Predicting author retention for committed and uncommitted authors

Repeating prediction models of attrition for different types of authors shows which predictors pertain across all authors and which vary across different people and contexts. I repeated the multiple logistic regression of model 1A for authors who committed to each forum type in Winter 2016-17. Note that this analysis cannot include dummy variables for partisan authors on the left and right because authors who committed to Counterpublics and Incubators were partisan authors by definition.

Figure 4 shows the multiple logistic regression of model 1A repeated for three groups: authors who committed to Deliberative Spheres (Deliberative Authors), authors who committed to Counterpublics (Counterpublic Authors), and Uncommitted Authors. Effect sizes were very small for models focused on authors who spent most of their time in Incubators and Battlefields, suggesting that effects in the overall models are driven by patterns among uncommitted authors and those who focused on Counterpublics and Deliberative Spheres. The Uncommitted Authors model was very similar to the overall results.

On the other hand, authors who committed to Deliberative Spheres and Counterpublics showed results substantially different from the overall models. Where downframing was not a significant

Figure 4: Retention for committed and uncommitted authors. Odds ratios with 95% confidence interval predicting probability of new authors staying present in advocacy forums 2+ quarters after Winter 2016-17.


predictor of retention in model 1A for authors overall, it was significant and positive among committed Counterpublic Authors and significant and negative among Deliberative Authors. Both types of committed authors showed a stronger tolerance for boundary activation, though they shared advocacy focus as a strong negative predictor.

Predicting author engagement

To validate and extend the results exploring why authors stayed in or dropped out of advocacy forums, I also created a series of regression models predicting the change in engagement. This outcome variable measured the change in the number of posts an author contributed in Winter 2016-17 compared to the average of posts in the three following quarters. These multiple linear regression models use the same predictor variables and the same author dataset as the multiple logistic regression models.

Figure 5 reports the results of models with core independent variables, continuous confounders, and a fully loaded version including dummy variables for authors committed to left and right forums. Similar to those models, R-squared values for fully loaded models all fell between 0.02 and 0.03. Only predictors with 95% confidence interval different from 0 that proved robust across all models - including models with and without confounding variables and models from bootstrap regression - are reported below.

Many predictor effects were similar between models of retention and models of engagement. Exposure to polarized forums and emotional intensity were positive predictors of engagement in all Full models; boundary activation and advocacy focus predicted less engagement. Left authors had a higher level of engagement than right authors net of other factors. The percent of time spent in various kinds of publics had a weak but significant impact on author engagement, while commitment to public types showed stronger influence.

There were three major differences. Exposure to downframing did not have a significant effect on engagement in model 2A and acted as a weak positive predictor in models 2B and 2C, complicating the result that downframing was a positive predictor of remaining present in advocacy forums. Further, emotional valence was a positive predictor of author engagement in model 2A, suggesting that negative emotion did not predict a higher rate of activity in advocacy forums.

Figure 5: Changes in engagement. Multiple linear regression models with 95% confidence interval predicting changes in number of advocacy posts after Winter 2016-17. Testing three sets of variables: 2A) polarization and intensity; 2B) ratios of posts by type of forum; 2C) dummy variables for authors spending over 50% of time in that type of forum. N=22,664.


Finally, commitment to publics played out with subtle but interesting differences between engagement and retention. Commitment to Counterpublics again gave a higher likelihood of remaining in advocacy forums than commitment to other public types, and it also appeared as a significant positive predictor of engagement, one of the strongest in model 2C. The effects of commitment to Incubators and Battlefields were generally negative but not significant; commitment to Deliberative Spheres was a significant negative predictor, indicating that committed authors were likely to decrease their participation even as they remained active on the forums.

DISCUSSION

These results show a dramatically high rate of attrition across all forum types in this time of upheaval. Newly passionate authors may have rushed to advocacy forums after the 2016 election cycle, but they left those forums in droves in the months that followed. What influenced these authors' decisions about whether to engage in online advocacy, avoid politics, or drop off the platform? Many factors beyond the internal characteristics of their experience on Reddit mattered, to be sure. Still, this analysis shows that some aspects of their discursive environment had large and robust effects on both retention and engagement.

Results of the regression models support other studies showing that ideologically polarized and emotionally intense spaces tend to sustain participation. Valence generally had little independent effect - that is, negative emotion was no more effective than positive emotion at influencing people's decision to stay engaged in advocacy. On the other hand, downframing had a strong and robust effect on both retention and engagement across nearly every model and subsample, suggesting that some of the positive effects of negative emotion found elsewhere may actually stem from shared participation in negatively charged boundary work rather than emotional exposure as such.

The picture of emotion and polarization over the 2016 election cycle is more complex than this topline result; indeed, different public types had trajectories that diverged with real impacts for the authors who joined them. Deliberative Spheres and Counterpublics drew the greatest flow of authors who switched their allegiance among public types in the quarters after the election, but they followed contrasting trajectories. Deliberative Spheres were the only public type whose committed authors were less likely to remain when exposed to downframing, likely showing that this negatively charged boundary work ran against the expectations that drew authors to these spaces.

Counterpublics, both emotionally intense and deeply polarized, had a significantly more positive impact on both retention and engagement than the other public types. Authors who spent more than half of their comments in Counterpublics had the highest rate of retention on the platform. They were also significantly more likely to remain and to increase their engagement across all regression models. These emotionally intense publics grew in absolute terms over the year after the election as new authors replaced the ones who dropped out after the first quarter.

Battlefields and Incubators had more trouble retaining new authors: authors who committed more than half their posts to these types dropped out of advocacy forums at significantly higher rates than others. Homogeneous, dispassionate Incubators saw among the greatest gains in authors in the quarter after the election, but their rate of retention was the lowest and more than half of their committed authors switched to other types of public, particularly Deliberative Spheres. Battlefields lost authors disproportionately to relatively homogeneous Counterpublics.


Polarized forums had a positive effect on retaining new authors, but their partisan position mattered. Across all models, authors who spent more than half their time in forums on the left were significantly more likely to remain than those who committed to forums on the right. Partisans on the right were more likely to drop out of advocacy forums, but this effect faded in models of their level of activity. This may also have something to do with emotion: left forums dropped dramatically in emotional valence and gained emotional intensity throughout the two years after November 2015; after the election, forums on the right became more polarized as forums on the left became less so.

One of the strongest and most consistent signals in the retention models - and one that raises questions for the literature on political participation in unsettled times - was the negative effect of advocacy focus: new authors who focused their communication almost exclusively on advocacy forums were more likely to leave the platform and likely to engage less. Validating this result, boundary activation was a significant predictor of both dropping out of advocacy platforms and reducing activity levels across every model of the full group of new authors. This is the opposite of the effect one would expect from a naïve reading of the empirical work on political efficacy and knowledge: those who participate in advocacy are supposed to be those who persist and engage most.

Still, in the context of political upheaval, this tendency may reverse as people with relatively little experience throw themselves into political action. High exposure to political content and social difference - particularly outside of emotional, polarized environments like Counterpublics - may lead to a form of burnout that has gone substantially understudied in sociology. Preliminary qualitative analysis of the comments of authors who left suggests that many of these new authors may have felt overwhelmed or frustrated by their lack of agency. More research is needed to determine when and where newly mobilized people experience burnout and how this feeling might be addressed to keep their passion for political engagement alive.

CONCLUSION

Debates over what types of discourse best serve the people of diverse democracies have long circled around two sets of apparently contradictory arguments. On the topic of polarization, the majority of political philosophers insist on communication across diversity as a norm, while most empirical work suggests that homogeneous spaces tend to increase participation and reduce inequalities across race, class, gender, and other forms of social difference. On emotion, studies of why people avoid politics suggest that people draw back from intense emotional experiences, while literatures on mobilization and social bonding suggest that intense emotional experiences are at the heart of people's reasons for remaining engaged.

This study intervenes in both contradictions by exploring the role that different types of participatory public play within a broader field of publics. In the end, high-functioning democratic systems are unlikely to rely solely on any single type of conversation. Much more fruitful than trying to impose a single framework of discussion on all publics is to think through how to support and evolve the actually existing space of publics.

Counterpublics, both homogeneous and emotionally intense, fit the predictions of the theorists who advocate them: they grew the most, retained their authors the best, and became even more polarized. Deliberative Spheres and Incubators lost committed authors, suggesting that systems hoping to sustain dispassionate conversations in passionate times face a challenge. At least in the context of the heated and fast-changing year after the 2016 presidential elections, the emotional discussions of Counterpublics might have been just what newly mobilized people needed to sustain their participation.

Burnout, the negative effect of advocacy focus and exposure to boundary activation, is among the most robust findings of the study and demands further research. There may indeed be such a thing as too much exposure to serious social issues for people newly inspired to advocacy, particularly when that exposure comes without the support of a tight and well-networked community.

The newly mobilized authors who joined these advocacy forums left in large numbers in the year after the 2016 elections, mirroring the experience of many grassroots groups offline. Though there are plenty of reasons to leave social media platforms, including the chance to engage more in local communities, the close empirical correlation between online and offline participation suggests that those who left are likely to have disengaged more broadly.

Explaining why people depart from or engage in participatory publics is only a first step to understanding the broader field of publics. Next steps include digging deeper into how the structure and style of publics shape the possible outcomes of conversation - which mutual understandings, fruitful compromises, or collective mobilizations become possible in which settings - and how publics of various types collaborate with and struggle against each other. In a historical moment when the health of democratic institutions in the United States and around the world is in question, the mandate to understand and support meaningful communication in diverse polities has never been more urgent.


Conclusion

The election cycle of 2016 in the United States was a tumultuous time when new movements and campaigns took the stage and millions considered the fate of their country in a new light. Of course, what matters in the long run is not only the period of intense change but also the work that precedes it and follows it. Moments trigger shifts, but the work of interpreting and building on those shifts - what moments mean, how they are remembered - shapes the field of struggles for centuries to come.

This study intervenes in the meaning of this particular tumultuous time directly by debunking some of the simplistic narratives that circulated in its aftermath. Claims that the 2016 election was all about race, gender, or any form of social difference are dangerous because they distort the complex reality of struggle and make coalitions across issues harder to build. Race, gender, and class all mattered deeply to the mobilizations of this period, but none of them mattered alone.

The dangers of these simplistic narratives are far from academic: those that feature homogeneous groups of enemies or heroes like the white working class condense the complex identities of real people into caricatures - and distort the meanings of race and class by removing their histories and intersections with other boundaries. Plus, narratives like the rise of populism that rely on the discourse of elites without checking how that discourse circulates in everyday conversation can take power away from common people who should be enabled and encouraged to define themselves.

Each of these particular narratives is directly rebutted in the work of this dissertation because none captures the reality of how people discuss and perform social difference in everyday conversation. For instance, the quantitative work of Chapter 1 shows an increase in the salience of not just one but six categories of difference - class, gender/sex, immigrant status, and race, but most of all immigrant status and political identity - starting in Spring 2016 and remaining at elevated levels throughout the period.

But the point is not simply that reality is complex. Specific intersectional configurations matter. The position of alt-right boundary language as a link between two clusters of boundary work - in the cultural hole between the Gender/Sexuality/Intensity cluster and the Borders & Religion cluster - partly explains the speed and intensity of its rise. The alt right bridged the gap between two discursive communities and the bridge it constructed was an anchor of the coalition that drove Trump into power.

Indeed, even the degree of intersectionality in discourse - how often a form of difference appears alone or entangled with other forms - matters to what mobilization and coalition is possible. The relatively intense intersectionality of right discourse on class inequality - where attacks against elites were peppered with references to race, gender, and nationality - may also have been a crucial element of coalitions on the right. By contrast, class discourse on the left focused on economic status and political identity; these conversations so rarely touched on race and gender as to call into question the capacity of left movements to stitch together an effective coalition across the many issues they attempt to confront.

The study of dropout and engagement patterns online targets different narratives that circulate in academic circles as well as in popular media. The popular narrative of the social media echo chamber is an easy foil because it describes the behavior of only a relatively small fraction of users on a discussion platform like Reddit. Plus, despite the confidence with which political philosophers make normative claims about one form of discourse or another, neither rational discourse in diverse deliberative spheres nor the intense homogeneous discussions of counterpublics offered a silver bullet for sustaining the engagement of people who mobilized after the elections.

Each category of forum - each type of public in the field of advocacy forums on Reddit - lost a majority of its new members, and each engaged the members who stayed in different ways. Any intervention to sustain and support people who are newly mobilized by changing times must consider each type of public and the relationships between them.

Contributions and challenges

Beyond results specific to the 2016 election cycle, I hope to make several contributions that inform my own future work and, most of all, enrich conversations about how discourse and cultural struggle matter to material and power inequality. I developed intersectional boundary analysis to be shared as a framework for revealing boundary configurations and debunking specious claims like the ones discussed above. Similarly, dropout patterns are meaningful not just in participatory publics like the ones studied in Chapter 3 but across a wide range of institutions, including the schools, movements, and city councils I am currently studying with other researchers. The lexicons, measures, visualization tools, and concepts developed here are open source, ready to be shared, discussed, evolved, and applied in community.

The interdisciplinarity of these frameworks is in itself a contribution and a challenge. This study brings together tools of concept and method from a wide range of influences, including cultural sociology, intersectionality studies, social psychology, philosophy, and political science. Interdisciplinarity is key to properly constructing the object of social research. Just as intersectionality challenges us to consider the forms of social difference relevant to a particular situation, interdisciplinarity poses a challenge to find the conceptual frameworks and research tools best suited to generate meaningful understanding in context. And just as perspectives on race shift when considered alongside gender, our disciplines evolve through interactions with each other, and generally for the better.

For instance, cultural sociology has rarely engaged two or even three symbolic boundaries at once; this study pushes the discipline to consider the way even boundaries that seem to stand alone are embedded in references to others. It also challenges cultural sociologists to deepen their engagement with multiple forms of oppression and make clearer the connections between cultural inequality and inequalities of resources and political power. Political science and political psychology are beginning to awaken to the importance of culture and mobilization to political participation; this dissertation supports this push by clearly tracing effects of the context of discourse on sustained engagement in advocacy.

Bridges of method also support and challenge these disciplines. This dissertation traverses the borders between theory, qualitative research, and quantitative analysis. The philosophy of the public sphere, and even scholarly work on intersectionality - despite its power as a framework for legal advocacy - have often lived in the world of theory. This work, among others, creates methodological tools to help them establish a stronger footing in practical work. Cultural sociology has increasingly polarized between big data analysis and close readings; this study shows how the two can enrich each other and stands as a provocation for others to do the same.

Several conceptual maneuvers may be particularly helpful down the road. Social research and theory often suffer from incomplete typologies - for instance, negatively charged boundary work (e.g., downframing) has threatened to dominate the category of boundary work in ways that obscure the importance of affiliation (upframing) and emotionally neutral categorization. Developing a clear and complete framework that takes account of the function that specific boundary work plays - and its full continuum from negative to neutral to positive charge - was a crucial prerequisite for revealing boundary work with clarity and rigor.

Similarly, populism studies have only recently distinguished the traditional form of populism that attacks elites and elevates common people from a more exclusionary form that attacks some groups of common people along with elites. But these two are not the only combinations of downframing and upframing that target these groups. Opening the analysis of class discourse to four combinations of boundary work allows for a full and specific analysis of the vertical (elite +/-) and horizontal (common +/-) dimensions - and helps reveal the full range of struggles to define and mobilize value-laden terms like "elites" and "the people".
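The four-cell scheme can be sketched as a toy classifier. The two cells that downframe elites carry established names from the populism literature; the labels for the other two cells are illustrative placeholders of my own, not terms from this study's coding scheme:

```python
def class_discourse_cell(elite_charge: int, common_charge: int) -> str:
    """Map boundary work toward elites and common people onto the 2x2 space.

    +1 = upframing (affiliation), -1 = downframing (attack).
    Only the two downframed-elite cells have established names;
    the other two labels are placeholders for illustration.
    """
    cells = {
        (-1, +1): "traditional populism",   # attack elites, elevate the people
        (-1, -1): "exclusionary populism",  # attack elites and some common people
        (+1, +1): "inclusive affirmation",  # placeholder label
        (+1, -1): "elitist exclusion",      # placeholder label
    }
    return cells[(elite_charge, common_charge)]
```

Treating all four cells, rather than only the two populist ones, is what lets the vertical (elite) and horizontal (common) dimensions vary independently in the analysis.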

The debate between deliberative spheres and counterpublics also trades on a binary distinction that can obscure more than it clarifies. Understanding participatory publics by their degree of emotional intensity and ideological diversity opens for exploration two further categories beyond Counterpublics (emotionally intense and homogeneous) and Deliberative Spheres; these categories have sometimes been explored empirically but rarely enter into the debate about norms of democratic participation. Diversity and emotional intensity are not only at the center of the distinction between deliberative spheres and counterpublics; they are at the center of the subjective experience of participation in advocacy. Tracking how they vary in theory and in practice is key to understanding how people participate - or decide not to participate - in actually existing democracy.
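A minimal sketch of the resulting typology, assuming forum-level scores for ideological diversity and emotional intensity each scaled to [0, 1]; the function name, score names, and 0.5 cutoffs are illustrative, not the dissertation's actual measures:

```python
def classify_public(diversity: float, intensity: float,
                    d_cut: float = 0.5, i_cut: float = 0.5) -> str:
    """Place a forum in the 2x2 typology of participatory publics.

    diversity: ideological diversity of the forum's authors (0-1)
    intensity: emotional intensity of its discourse (0-1)
    The cutoffs are placeholders; in practice they would be set
    relative to the distribution across the field of forums.
    """
    if diversity >= d_cut:
        return "Battlefield" if intensity >= i_cut else "Deliberative Sphere"
    return "Counterpublic" if intensity >= i_cut else "Incubator"
```

Counterpublics (homogeneous, intense) and Incubators (homogeneous, dispassionate) match the descriptions above; Battlefields occupy the diverse, intense cell that the deliberative-sphere/counterpublic binary leaves unnamed.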

Future research

The results of this study of shifts and realignments in discourse center on one time period, and the empirical work focuses on one social media platform. Each chapter makes its work meaningful through extensive validation and comparison to other contexts and processes; even so, the work cannot stand alone. Future studies should extend the analysis forward and backward in time, revealing whether the shifts described here are permanent or fleeting and showing how they build from the struggles and mobilizations of decades past.

Though there is plenty of evidence to suggest that shifts on this platform are relevant across broader social contexts, further work is needed to situate these results in broader cultural and political processes. I have checked the trends reported in Chapter 1 against key terms in Google Trends and in GDELT, a large database of television news transcripts. But the most important future work will track how narratives and boundaries circulate differently in online and offline conversations, and how they flow between spaces of different kinds.

Extending the concepts and frameworks of this dissertation to in-person interaction is a core goal: one strand of my own future work will trace dropout patterns in live participation across a range of institutions. The next project in the pipeline develops and tests a toolkit for revealing how the everyday grind of interaction in city councils reinforces inequalities of power across race and class. I am developing the framework of coding and analysis of this study as a public toolkit, extending the concepts and tools of Chapter 3 to enable more organizations to study and understand their own patterns of dropout and exclusion.


Similarly, I have already trained several research assistants in intersectional boundary analysis and plan to make public the set of lexicons, coding frameworks, visualization tools, and other components that made possible the work of Chapters 2 and 3. Scholars too often operate in isolation; our tendency to gather in silos and cliques hurts the quality of our work as well as our relevance to the outside world. This dissertation is a milestone along a larger path, and it is the work that follows - to extend the research and apply it to new contexts, to connect it with broader conversations on what has changed and how best to struggle for justice in this changed cultural landscape - that will matter most.


REFERENCES

Abedi, Amir, and Thomas Carl Lundberg. 2009. “Doomed to Failure? UKIP and the Organisational Challenges Facing Right-Wing Populist Anti-Political Establishment Parties.” Parliamentary Affairs 62(1):72–87.

Abramowitz, Alan I., and Kyle L. Saunders. 2008. “Is Polarization a Myth?” The Journal of Politics 70(2):542–55.

Acker, Joan. 2006. Class Questions: Feminist Answers. Rowman & Littlefield.

Alamillo, Rudy, Chris Haynes, and Raul Madrid. 2019. “Framing and Immigration through the Trump Era.” Sociology Compass 13(5):e12676.

Alashri, Saud, Srinivasa Srivatsav Kandala, Vikash Bajaj, Roopek Ravi, Kendra Smith, and Kevin Desouza. 2016. “An Analysis of Sentiments on Facebook during the 2016 U.S. Presidential Election.”

Albertazzi, Daniele, and Duncan McDonnell. 2015. Populists in Power. Routledge.

Albrecht, Michael Mario. 2017. “Bernie Bros and the Gender Schism in the 2016 US Presidential Election.” Feminist Media Studies 17(3):509–13.

Amodio, David M. 2014. “The Neuroscience of Prejudice and Stereotyping.” Nature Reviews Neuroscience 15(10):670–82.

Amundson, Kalynn, and Anna Zajicek. 2018. “A Case Study of State-Level Policymakers’ Discursive Co-Constructions of Welfare Drug Testing Policy and Gender, Race, and Class.” Sociological Inquiry 88(3):383–409.

An, Jisun, Haewoon Kwak, Oliver Posegga, and Andreas Jungherr. 2019. “Political Discussions in Homogeneous and Cross-Cutting Communication Spaces.” Proceedings of the International AAAI Conference on Web and Social Media 13:68–79.

Andrews, Kenneth T., Neal Caren, and Alyssa Browne. 2019. “Protesting Trump.” Mobilization: An International Quarterly.

Anselmi, Manuel. 2017. Populism: An Introduction. Routledge.

Anthias, Floya. 2013. “Hierarchies of Social Location, Class and Intersectionality: Towards a Translocational Frame.” International Sociology 28(1):121–38.

Aral, Sinan, and Dean Eckles. 2019. “Protecting Elections from Social Media Manipulation.” Science 365(6456):858–61.

Aslanidis, Paris. 2016. “Is Populism an Ideology? A Refutation and a New Perspective.” Political Studies 64(1_suppl):88–104.

Bächtiger, André, and John Parkinson. 2019. Mapping and Measuring Deliberation: Towards a New Deliberative Quality. Oxford University Press.

Badawy, Adam, Emilio Ferrara, and Kristina Lerman. 2018. “Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign.” Pp. 258–65 in 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM).


Bail, Christopher A. 2008. “The Configuration of Symbolic Boundaries against Immigrants in Europe.” American Sociological Review 73(1):37–59.

Bail, Christopher A. 2012. “The Fringe Effect Civil Society Organizations and the Evolution of Media Discourse about Islam since the September 11th Attacks.” American Sociological Review 77(6):855–79.

Bail, Christopher A. 2014a. Terrified: How Anti-Muslim Fringe Organizations Became Mainstream. Princeton, NJ: Princeton University Press.

Bail, Christopher A. 2014b. “The Cultural Environment: Measuring Culture with Big Data.” Theory and Society 43(3–4):465–82.

Bail, Christopher A. 2016. “Emotional Feedback and the Viral Spread of Social Media Messages About Autism Spectrum Disorders.” American Journal of Public Health. doi:10.2105/AJPH.2016.303181.

Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. 2015. “Exposure to Ideologically Diverse News and Opinion on Facebook.” Science 348(6239):1130–32.

Bamman, David, and Noah A. Smith. 2015. “Contextualized Sarcasm Detection on Twitter.” in Ninth International AAAI Conference on Web and Social Media.

Banks, Antoine J. 2016. “Are Group Cues Necessary? How Anger Makes Ethnocentrism Among Whites a Stronger Predictor of Racial and Immigration Policy Opinions.” Political Behavior 38(3):635–57.

Barbalet, J. M. 2001. Emotion, Social Theory, and Social Structure: A Macrosociological Approach. Cambridge University Press.

Barberá, Pablo. 2015. “Birds of the Same Feather Tweet Together: Bayesian Ideal Point Estimation Using Twitter Data.” Political Analysis 23(1):76–91.

Barberá, Pablo, John T. Jost, Jonathan Nagler, Joshua A. Tucker, and Richard Bonneau. 2015. “Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber?” Psychological Science 26(10):1531–42.

Baumgartner, Jason, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. “The Pushshift Reddit Dataset.” arXiv:2001.08435.

Beaman, Jean. 2016. “Citizenship as Cultural: Towards a Theory of Cultural Citizenship.” Sociology Compass 10(10):849–57.

Beck, Ulrich. 2013. “Why ‘Class’ Is Too Soft a Category to Capture the Explosiveness of Social Inequality at the Beginning of the Twenty-First Century.” The British Journal of Sociology 64(1):63–74.

Beljean, Stefan, Phillipa Chong, and Michèle Lamont. 2016. “A Post-Bourdieusian Sociology of Valuation and Evaluation for the Field of Cultural Production.” Routledge International Handbook of the Sociology of Art and Culture 38–48.

Benhabib, Seyla. 1994. “Deliberative Rationality and Models of Democratic Legitimacy.” Constellations 1(1):26–52.


Benkler, Yochai, Hal Roberts, Robert Faris, Alicia Solow-Niederman, and Bruce Etling. 2015. “Social Mobilization and the Networked Public Sphere: Mapping the SOPA-PIPA Debate.” Political Communication 32(4):594–624.

Bennett, Elizabeth A., Alissa Cordner, Peter Taylor Klein, Stephanie Savell, and Gianpaolo Baiocchi. 2013. “Disavowing Politics: Civic Engagement in an Era of Political Skepticism.” American Journal of Sociology 119(2):518–48.

Berezin, Mabel. 2017. “On the Construction Sites of History: Where Did Donald Trump Come From?” American Journal of Cultural Sociology 5(3):322–37.

Berezin, Mabel. 2019. “Fascism and Populism: Are They Useful Categories for Comparative Sociological Analysis?” Annual Review of Sociology 45(1).

Berger, Jonah. 2011. “Arousal Increases Social Transmission of Information.” Psychological Science 22(7):891–93.

Bericat, Eduardo. 2016. “The Sociology of Emotions: Four Decades of Progress.” Current Sociology 64(3):491–513.

Bhambra, Gurminder K. 2017. “Brexit, Trump, and ‘Methodological Whiteness’: On the Misrecognition of Race and Class.” The British Journal of Sociology 68(S1):S214–32.

Bliuc, Ana-Maria, Nicholas Faulkner, Andrew Jakubowicz, and Craig McGarty. 2018. “Online Networks of Racial Hate: A Systematic Review of 10 Years of Research on Cyber-Racism.” Computers in Human Behavior 87:75–86.

Bobo, Lawrence D. 2017. “Racism in Trump’s America: Reflections on Culture, Sociology, and the 2016 US Presidential Election.” The British Journal of Sociology 68(S1):S85–104.

Bode, Leticia. 2016. “Pruning the News Feed: Unfriending and Unfollowing Political Content on Social Media.” Research & Politics 3(3):2053168016661873.

Bode, Leticia. 2017. “Gateway Political Behaviors: The Frequency and Consequences of Low-Cost Political Engagement on Social Media.” Social Media + Society 3(4):2056305117743349.

Bode, Leticia, Emily K. Vraga, and Sonya Troller-Renfree. 2017. “Skipping Politics: Measuring Avoidance of Political Content in Social Media.” Research & Politics 4(2):2053168017702990.

Bolleyer, Nicole, and Evelyn Bytzek. 2017. “New Party Performance after Breakthrough: Party Origin, Building and Leadership.” Party Politics 23(6):772–82.

Bonacich, Phillip. 2007. “Some Unique Properties of Eigenvector Centrality.” Social Networks 29(4):555–64.

Bonikowski, Bart. 2017. “Ethno-Nationalist Populism and the Mobilization of Collective Resentment.” The British Journal of Sociology 68:S181–213.

Bonikowski, Bart, and Paul DiMaggio. 2016. “Varieties of American Popular Nationalism.” American Sociological Review 81(5):949–80.

Bonikowski, Bart, and Noam Gidron. 2016. Multiple Traditions in Populism Research: Toward a Theoretical Synthesis. SSRN Scholarly Paper. ID 2875372. Rochester, NY: Social Science Research Network.


Bonilla-Silva, Eduardo. 2014. Racism Without Racists: Color-Blind Racism and the Persistence of Racial Inequality in America. Rowman & Littlefield Publishers, Incorporated.

Boulianne, Shelley. 2015. “Social Media Use and Participation: A Meta-Analysis of Current Research.” Information, Communication & Society 18(5):524–38.

Bourdieu, Pierre. 1985. “The Social Space and the Genesis of Groups.” Theory and Society 14(6):723–44.

Bourdieu, Pierre. 1986. Distinction: A Social Critique of the Judgement of Taste. 1 edition. London: Routledge.

Bourdieu, Pierre. 1991. Language and Symbolic Power. Harvard University Press.

Bourdieu, Pierre. 1998. The State Nobility: Elite Schools in the Field of Power. Stanford University Press.

Bourdieu, Pierre, and Loïc Wacquant. 2013. “Symbolic Capital and Social Classes.” Journal of Classical Sociology 13(2):292–302.

Boutcher, Steven A., J. Craig Jenkins, and Nella Van Dyke. 2017. “Strain, Ethnic Competition, and Power Devaluation: White Supremacist Protest in the U.S., 1948–1997.” Social Movement Studies 16(6):686–703.

Boutyline, Andrei, and Robb Willer. 2017. “The Social Structure of Political Echo Chambers: Variation in Ideological Homophily in Online Networks.” Political Psychology 38(3):551–69.

Bowleg, Lisa. 2008. “When Black + Lesbian + Woman ≠ Black Lesbian Woman: The Methodological Challenges of Qualitative and Quantitative Intersectionality Research.” Sex Roles 59(5–6):312–25.

Boxell, Levi, Matthew Gentzkow, and Jesse M. Shapiro. 2017. “Greater Internet Use Is Not Associated with Faster Growth in Political Polarization among US Demographic Groups.” Proceedings of the National Academy of Sciences 114(40):10612–17.

Brady, William J., Julian A. Wills, John T. Jost, Joshua A. Tucker, and Jay J. Van Bavel. 2017. “Emotion Shapes the Diffusion of Moralized Content in Social Networks.” Proceedings of the National Academy of Sciences 114(28):7313–18.

Brenner, Johanna, and Nancy Fraser. 2017. “What Is Progressive Neoliberalism?: A Debate.” Dissent 64(2):130–130.

Brewer, Sierra, and Lauren Dundes. 2018. “Concerned, Meet Terrified: Intersectional Feminism and the Women’s March.” Women’s Studies International Forum 69:49–55.

Brown, Alexander. 2018. “What Is so Special about Online (as Compared to Offline) Hate Speech?” Ethnicities 18(3):297–326.

Brown, Melissa, Rashawn Ray, Ed Summers, and Neil Fraistat. 2017. “#SayHerName: A Case Study of Intersectional Social Media Activism.” Ethnic and Racial Studies 40(11):1831–46.

Brown, Wendy. 2015. Undoing the Demos: Neoliberalism’s Stealth Revolution. MIT Press.

Brubaker, Rogers. 2014. “Beyond Ethnicity.” Ethnic and Racial Studies 37(5):804–8.

Brubaker, Rogers. 2017. “Between Nationalism and Civilizationism: The European Populist Moment in Comparative Perspective.” Ethnic and Racial Studies 40(8):1191–1226.


Brunell, Thomas L., Bernard Grofman, and Samuel Merrill. 2012. “Magnitude and Durability of Electoral Change: Identifying Critical Elections in the U.S. Congress 1854–2010.” Electoral Studies 31(4):816–28.

Brym, Robert, Melissa Godbout, Andreas Hoffbauer, Gabe Menard, and Tony Huiquan Zhang. 2014. “Social Media in the 2011 Egyptian Uprising.” The British Journal of Sociology 65(2):266–92.

Buozis, Michael. 2019. “Doxing or Deliberative Democracy? Evidence and Digital Affordances in the Serial SubReddit.” Convergence 25(3):357–73.

Calhoun, Craig. 2010. “The Public Sphere in the Field of Power.” Social Science History 34(3):301–35.

Calhoun, Craig. 2013. “Occupy Wall Street in Perspective.” The British Journal of Sociology 64(1):26–38.

Campbell, David E. 2013. “Social Networks and Political Participation.” Annual Review of Political Science 16(1):33–48.

Campi, Ashleigh, and Jane Junn. 2019. “Racial Linked Fate and Gender in U.S. Politics.” Politics, Groups, and Identities 7(3):654–62.

Carney, Nikita. 2016. “All Lives Matter, but so Does Race: Black Lives Matter and the Evolving Role of Social Media.” Humanity & Society 40(2):180–99.

Cassese, Erin C., and Tiffany D. Barnes. 2019. “Reconciling Sexism and Women’s Support for Republican Candidates: A Look at Gender, Class, and Whiteness in the 2012 and 2016 Presidential Races.” Political Behavior 41(3):677–700.

Cassese, Erin C., and Mirya R. Holman. 2019. “Playing the Woman Card: Ambivalent Sexism in the 2016 U.S. Presidential Race.” Political Psychology 40(1):55–74.

Chayko, Mary. 2019. “Digital Technology, Social Media, and Techno-Social Life.” Pp. 377–97 in The Wiley Blackwell Companion to Sociology. John Wiley & Sons, Ltd.

Cheng, Justin, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec. 2014. “How Community Feedback Shapes User Behavior.” in Eighth International AAAI Conference on Weblogs and Social Media.

Cho, Sumi, Kimberlé Williams Crenshaw, and Leslie McCall. 2013. “Toward a Field of Intersectionality Studies: Theory, Applications, and Praxis.” Signs: Journal of Women in Culture and Society 38(4):785–810.

Choo, Hae Yeon, and Myra Marx Ferree. 2010. “Practicing Intersectionality in Sociological Research: A Critical Analysis of Inclusions, Interactions, and Institutions in the Study of Inequalities.” Sociological Theory 28(2):129–49.

Christensen, Henrik Serup, Staffan Himmelroos, and Kimmo Grönlund. 2017. “Does Deliberation Breed an Appetite for Discursive Participation? Assessing the Impact of First-Hand Experience.” Political Studies 65(1_suppl):64–83.

Chun, Christian W. 2019. “Language, Discourse, and Class: What’s next for Sociolinguistics?” Journal of Sociolinguistics 0(0).

Clinton, Hillary Rodham, and Tim Kaine. 2016. Stronger Together. Simon and Schuster.


Collins, Patricia Hill. 1998a. Fighting Words: Black Women and the Search for Justice. U of Minnesota Press.

Collins, Patricia Hill. 1998b. “It’s All In the Family: Intersections of Gender, Race, and Nation.” Hypatia 13(3):62–82.

Collins, Patricia Hill. 2015. “Intersectionality’s Definitional Dilemmas.” Annual Review of Sociology 41(1):1–20.

Collins, Randall. 2004. Interaction Ritual Chains. Princeton, N.J: Princeton University Press.

Collins, Randall. 2014a. Interaction Ritual Chains. Princeton University Press.

Collins, Randall. 2014b. “Interaction Ritual Chains and Collective Effervescence.” Collective Emotions 299–311.

Comunello, Francesca, and Giuseppe Anzera. 2012. “Will the Revolution Be Tweeted? A Conceptual Framework for Understanding the Social Media and the Arab Spring.” Islam and Christian–Muslim Relations 23(4):453–70.

Covarrubias, Alejandro, Pedro E. Nava, Argelia Lara, Rebeca Burciaga, Verónica N. Vélez, and Daniel G. Solorzano. 2018. “Critical Race Quantitative Intersections: A Testimonio Analysis.” Race Ethnicity and Education 21(2):253–73.

Coviello, Lorenzo, Yunkyu Sohn, Adam D. I. Kramer, Cameron Marlow, Massimo Franceschetti, Nicholas A. Christakis, and James H. Fowler. 2014. “Detecting Emotional Contagion in Massive Social Networks.” PLOS ONE 9(3):e90315.

Cowan, Sarah K., and Delia Baldassarri. 2018. “‘It Could Turn Ugly’: Selective Disclosure of Attitudes in Political Discussion Networks.” Social Networks 52:1–17.

Craig, Maureen A., and Jennifer A. Richeson. 2014. “On the Precipice of a ‘Majority-Minority’ America: Perceived Status Threat From the Racial Demographic Shift Affects White Americans’ Political Ideology.” Psychological Science 25(6):1189–97.

Cramer, Katherine J. 2016. The Politics of Resentment: Rural Consciousness in Wisconsin and the Rise of Scott Walker. University of Chicago Press.

Crandall, Christian S., Jason M. Miller, and Mark H. White. 2018. “Changing Norms Following the 2016 U.S. Presidential Election: The Trump Effect on Prejudice.” Social Psychological and Personality Science 9(2):186–92.

Creech, Brian. 2018. “Finding the White Working Class in 2016: Journalistic Discourses and the Construction of a Political Identity.” European Journal of Cultural Studies 1367549418786413.

Crenshaw, Kimberle. 1989. “Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics.” University of Chicago Legal Forum 1989(1):139–67.

Crenshaw, Kimberle. 1991. “Mapping the Margins: Intersectionality, Identity Politics, and Violence against Women of Color.” Stanford Law Review 43(6):1241–99.

Dablander, Fabian, and Max Hinne. 2019. “Node Centrality Measures Are a Poor Substitute for Causal Inference.” Scientific Reports 9(1):1–13.

Dahlgren, Peter. 2005. “The Internet, Public Spheres, and Political Communication: Dispersion and Deliberation.” Political Communication 22(2):147–62.

Daniels, Jessie. 2018. “The Algorithmic Rise of the ‘Alt-Right.’” Contexts 17(1):60–65.

Daum, Courtenay W. 2017. “Counterpublics and Intersectional Radical Resistance: Agitation as Transformation of the Dominant Discourse.” New Political Science 39(4):523–37.

Davidson, Thomas, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. “Automated Hate Speech Detection and the Problem of Offensive Language.” in Eleventh International AAAI Conference on Web and Social Media.

Davis, Jenny L., and Tony P. Love. 2019. “Generalizing from Social Media Data: A Formal Theory Approach.” Information, Communication & Society 22(5):637–47.

Davison, A. C., and D. V. Hinkley. 1997. Bootstrap Methods and Their Application. Cambridge University Press.

Dawson, Michael C. 1994. “A Black Counterpublic?: Economic Earthquakes, Racial Agenda(s), and Black Politics.” Public Culture 7(1):195–223.

Del Valle, Marc Esteve, Anatoliy Gruzd, Priya Kumar, and Sarah Gilbert. 2020. “Learning in the Wild: Understanding Networked Ties in Reddit.” Pp. 51–68 in Mobility, Data and Learner Agency in Networked Learning, Research in Networked Learning, edited by N. B. Dohn, P. Jandrić, T. Ryberg, and M. de Laat. Cham: Springer International Publishing.

Delehanty, Jack, Penny Edgell, and Evan Stewart. 2019. “Christian America? Secularized Evangelical Discourse and the Boundaries of National Belonging.” Social Forces 97(3):1283–1306.

Demata, Massimiliano. 2017. “‘A Great and Beautiful Wall’: Donald Trump’s Populist Discourse on Immigration.” Journal of Language Aggression and Conflict 5(2):274–94.

Diani, Mario. 2004. “Networks and Participation.” The Blackwell Companion to Social Movements 339–359.

Dignam, Pierce Alexander, and Deana A. Rohlinger. 2019. “Misogynistic Men Online: How the Red Pill Helped Elect Trump.” Signs: Journal of Women in Culture and Society 44(3):589–612.

DiMaggio, Paul, Clark Bernier, Charles Heckscher, and David Mimno. 2018. “Interaction Ritual Threads: Does IRC Theory Apply Online?” Pp. 81–124 in Ritual, Emotion, Violence: Studies on the Micro-Sociology of Randall Collins. Taylor and Francis.

DiMaggio, Paul, Manish Nag, and David Blei. 2013. “Exploiting Affinities between Topic Modeling and the Sociological Perspective on Culture: Application to Newspaper Coverage of U.S. Government Arts Funding.” Poetics 41(6):570–606.

Dosono, Bryan, and Bryan Semaan. 2019. “Moderation Practices as Emotional Labor in Sustaining Online Communities: The Case of AAPI Identity Work on Reddit.” Pp. 1–13 in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19. Glasgow, Scotland Uk: Association for Computing Machinery.

Drabble, Laurie A., Cindy B. Veldhuis, Angie Wootton, Ellen D. B. Riggle, and Tonda L. Hughes. 2019. “Mapping the Landscape of Support and Safety Among Sexual Minority Women and Gender Non-Conforming Individuals: Perceptions After the 2016 US Presidential Election.” Sexuality Research and Social Policy 16(4):488–500.

Dryzek, John S. 2001. “Legitimacy and Economy in Deliberative Democracy.” Political Theory 29(5):651–69.

Duncombe, Constance. 2019. “The Politics of Twitter: Emotions and the Power of Social Media.” International Political Sociology 13(4):409–29.

Durkheim, Emile. 1912. The Elementary Forms of the Religious Life.

Ecker, Ullrich K. H., and Li Chang Ang. 2019. “Political Attitudes and the Processing of Misinformation Corrections.” Political Psychology 40(2):241–60.

Edwards, Jason A. 2018. “Make America Great Again: Donald Trump and Redefining the U.S. Role in the World.” Communication Quarterly 66(2):176–95.

Eliasoph, Nina. 1996. “Making a Fragile Public: A Talk-Centered Study of Citizenship and Power.” Sociological Theory 14(3):262–89.

Eliasoph, Nina. 1998. Avoiding Politics: How Americans Produce Apathy in Everyday Life. Cambridge University Press.

Elliott, Thomas, Jennifer Earl, and Thomas V. Maher. 2017. “Recruiting Inclusiveness: Intersectionality, Social Movements, and Youth Online.” Retrieved September 21, 2019 (https://www.emerald.com/insight/content/doi/10.1108/S0163-786X20170000041019/full/html).

Else-Quest, Nicole M., and Janet Shibley Hyde. 2016. “Intersectionality in Quantitative Psychological Research: II. Methods and Techniques.” Psychology of Women Quarterly 40(3):319–36.

Enli, Gunn. 2017. “Twitter as Arena for the Authentic Outsider: Exploring the Social Media Campaigns of Trump and Clinton in the 2016 US Presidential Election.” European Journal of Communication 32(1):50–61.

Eschmann, Rob. 2019. “Unmasking Racism: Students of Color and Expressions of Racism in Online Spaces.” Social Problems.

Evans, James A., and Pedro Aceves. 2016. “Machine Translation: Mining Text for Social Theory.” Annual Review of Sociology 42(1):21–50.

Falola, Bisola, and Chelsi West Ohueri. 2017. “Resist, Persist, Desist: Building Solidarity from Grandma Ella through Baby Angela to the Women’s March.” Gender, Place & Culture 24(5):722–40.

Farrell, Henry. 2012. “The Consequences of the Internet for Politics.” Annual Review of Political Science 15(1):35–52.

Federico, Christopher M., and Agnieszka Golec de Zavala. 2018. “Collective Narcissism and the 2016 US Presidential Vote.” Public Opinion Quarterly 82(1):110–21.

Ferrara, Emilio. 2017. Contagion Dynamics of Extremist Propaganda in Social Networks. SSRN Scholarly Paper. ID 2982259. Rochester, NY: Social Science Research Network.

Ferree, Myra Marx. 2003. “Resonance and Radicalism: Feminist Framing in the Abortion Debates of the United States and Germany.” American Journal of Sociology 109(2):304–44.

Finley, Laura, and Luigi Esposito. 2020. “The Immigrant as Bogeyman: Examining Donald Trump and the Right’s Anti-Immigrant, Anti-PC Rhetoric.” Humanity & Society 44(2):178–97.

Fiorina, Morris P., and Samuel J. Abrams. 2008. “Political Polarization in the American Public.” Annual Review of Political Science 11(1):563–88.

Fisher, Dana R. 2019. American Resistance: From the Women’s March to the Blue Wave. Columbia University Press.

Fisher, Dana R., Dawn M. Dow, and Rashawn Ray. 2017. “Intersectionality Takes It to the Streets: Mobilizing across Diverse Interests for the Women’s March.” Science Advances 3(9):eaao1390.

Fishkin, James S. 2011. When the People Speak: Deliberative Democracy and Public Consultation. Oxford University Press.

Fiss, Peer C., and Paul M. Hirsch. 2005. “The Discourse of Globalization: Framing and Sensemaking of an Emerging Concept.” American Sociological Review 70(1):29–52.

Flaxman, Seth, Sharad Goel, and Justin M. Rao. 2016. “Filter Bubbles, Echo Chambers, and Online News Consumption.” Public Opinion Quarterly 80(S1):298–320.

Fligstein, Neil, and Doug McAdam. 2012. A Theory of Fields. Oxford University Press.

Foa, Roberto Stefan, and Yascha Mounk. 2016. “The Democratic Disconnect.” Journal of Democracy 27(3):5–17.

Foa, Roberto Stefan, and Yascha Mounk. 2017. “The Signs of Deconsolidation.” Journal of Democracy 28(1):5–15.

Fraser, Nancy. 1990. “Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy.” Social Text (25/26):56–80.

Fraser, Nancy. 1995. “From Redistribution to Recognition? Dilemmas of Justice in a ‘Post-Socialist’ Age.” New Left Review (212):68.

Fraser, Nancy. 1997. Justice Interruptus: Critical Reflections on the “Postsocialist” Condition. Psychology Press.

Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. 2008. “Sparse Inverse Covariance Estimation with the Graphical Lasso.” Biostatistics 9(3):432–41.

Fuchs, Christian. 2016. “Racism, Nationalism and Right-Wing Extremism Online: The Austrian Presidential Election 2016 on Facebook.” Momentum Quarterly 5(3):172–96.

Gaby, Sarah, and Neal Caren. 2016. “The Rise of Inequality: How Social Movements Shape Discursive Fields.” Mobilization: An International Quarterly.

Gaffney, Devin, and J. Nathan Matias. 2018. “Caveat Emptor, Computational Social Science: Large-Scale Missing Data in a Widely-Published Reddit Corpus.” PLOS ONE 13(7):e0200162.

Galderisi, Peter F., Michael S. Lyons, Randy T. Simmons, and John G. Francis. 2019. The Politics Of Realignment: Party Change In The Mountain West. Routledge.

Gantt-Shafer, Jessica, Cara Wallis, and Caitlin Miles. 2019. “Intersectionality, (Dis)Unity, and Processes of Becoming at the 2017 Women’s March.” Women’s Studies in Communication 42(2):221–40.

Gautney, Heather. 2018. Crashing the Party: From the Bernie Sanders Campaign to a Progressive Movement. Verso Books.

Georgeac, Oriane A. M., Aneeta Rattan, and Daniel A. Effron. 2019. “An Exploratory Investigation of Americans’ Expression of Gender Bias Before and After the 2016 Presidential Election.” Social Psychological and Personality Science 10(5):632–42.

Gest, Justin. 2016. The New Minority: White Working Class Politics in an Age of Immigration and Inequality. Oxford University Press.

Giannella, Eric, and Claude S. Fischer. 2016. “An Inductive Typology of Egocentric Networks.” Social Networks 47:15–23.

Giddens, Anthony. 1973. Capitalism and Modern Social Theory: An Analysis of the Writings of Marx, Durkheim and Max Weber. Cambridge University Press.

Gidron, Noam, and Peter A. Hall. 2017. “The Politics of Social Status: Economic and Cultural Roots of the Populist Right.” The British Journal of Sociology 68(S1):S57–84.

Gil de Zúñiga, Homero, Sebastián Valenzuela, and Brian E. Weeks. 2016. “Motivations for Political Discussion: Antecedents and Consequences on Civic Engagement.” Human Communication Research 42(4):533–52.

Gilens, Martin, and Benjamin I. Page. 2014. “Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens.” Perspectives on Politics 12(3):564–81.

Gitlin, Todd. 2013. “Occupy’s Predicament: The Moment and the Prospects for the Movement.” The British Journal of Sociology 64(1):3–25.

González-Bailón, Sandra, Andreas Kaltenbrunner, and Rafael E. Banchs. 2010. “The Structure of Political Discussion Networks: A Model for the Analysis of Online Deliberation.” Journal of Information Technology 25(2):230–43.

González-Bailón, Sandra, and Ning Wang. 2016. “Networked Discontent: The Anatomy of Protest Campaigns in Social Media.” Social Networks 44:95–104.

Goodwin, Jeff, James M. Jasper, and Francesca Polletta. 2009. Passionate Politics: Emotions and Social Movements. University of Chicago Press.

Gould, Lewis L. 2010. 1968: The Election That Changed America. Government Institutes.

Graham, Roderick, and ‘Shawn Smith. 2016. “The Content of Our #Characters: Black Twitter as Counterpublic.” Sociology of Race and Ethnicity 2(4):433–49.

Graham, Todd, Daniel Jackson, and Scott Wright. 2016. “‘We Need to Get Together and Make Ourselves Heard’: Everyday Online Spaces as Incubators of Political Action.” Information, Communication & Society 19(10):1373–89.

Gray, Kishonna. 2012. “Intersecting Oppressions and Online Communities.” Information, Communication & Society 15(3):411–28.

Gray, Kishonna, Bertan Buyukozturk, and Zachary G. Hill. 2017. “Blurring the Boundaries: Using Gamergate to Examine ‘Real’ and Symbolic Violence against Women in Contemporary Gaming Culture.” Sociology Compass 11(3).

Gray, Kishonna, and Wanju Huang. 2015. “More than Addiction: Examining the Role of Anonymity, Endless Narrative, and Socialization in Prolonged Gaming and Instant Messaging Practices.” Journal of Comparative Research in Anthropology and Sociology; Bucharest 6(1):133–47.

Green, Jon, and Sean McElwee. 2019. “The Differential Effects of Economic Conditions and Racial Attitudes in the Election of Donald Trump.” Perspectives on Politics 17(2):358–79.

Griffin, Kimberly A., and Samuel D. Museus. 2011. “Application of Mixed-Methods Approaches to Higher Education and Intersectional Analyses.” New Directions for Institutional Research (151):15–26.

Grimmer, Justin, and Brandon M. Stewart. 2013. “Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts.” Political Analysis mps028.

Grönlund, Kimmo, Kaisa Herne, and Maija Setälä. 2015. “Does Enclave Deliberation Polarize Opinions?” Political Behavior 37(4):995–1020.

Gutierrez, Angela, Angela X. Ocampo, Matt A. Barreto, and Gary Segura. 2019. “Somos Más: How Racial Threat and Anger Mobilized Latino Voters in the Trump Era.” Political Research Quarterly 1065912919844327.

Gutmann, Amy, and Sigal Ben‐Porath. 2014. “Democratic Education.” Pp. 863–75 in The Encyclopedia of Political Thought. Wiley-Blackwell.

Habermas, Jürgen. 1975. Legitimation Crisis. Beacon Press.

Habermas, Jürgen. 1991. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Sixth Printing edition. Cambridge, Mass: The MIT Press.

Habermas, Jürgen. 1996. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. 1 edition. Polity.

Habermas, Jürgen. 2006. “Political Communication in Media Society: Does Democracy Still Enjoy an Epistemic Dimension? The Impact of Normative Theory on Empirical Research.” Communication Theory 16(4):411–26.

Hagen, S. 2017. “Mapping the Alt-Right: The US Alternative Right across the Atlantic.” Alt-Right Open Intelligence Initiative.

Haider-Markel, Donald, Patrick Gauding, Andrew Flores, Daniel Lewis, Patrick Miller, Jami Taylor, and Barry Tadlock. 2019. “Year of the LGBTQ Candidate? LGBTQ State Legislative Candidates in the Trump Era.”

Hall, Catherine. 2013. White, Male and Middle Class: Explorations in Feminism and History. John Wiley & Sons.

Hall, Stuart. 1981. “Notes on Deconstructing ‘The Popular.’” People’s History and Socialist Theory, London, Routledge & Kegan Paul 227–249.

Hampton, Keith N. 2017. “Studying the Digital: Directions and Challenges for Digital Methods.” Annual Review of Sociology 43(1):167–88.

Hancock, Ange-Marie. 2007. “When Multiplication Doesn’t Equal Quick Addition: Examining Intersectionality as a Research Paradigm.” Perspectives on Politics 5(1):63–79.

Hancock, Ange-Marie. 2013a. “Empirical Intersectionality: A Tale of Two Approaches Symposium Issue: Critical Race Theory and Empirical Methods.” UC Irvine Law Review 3:259–96.

Hancock, Ange-Marie. 2013b. Solidarity Politics for Millennials: A Guide to Ending the Oppression Olympics. 2011 edition. New York: Palgrave Macmillan.

Hancock, Ange-Marie. 2016. Intersectionality: An Intellectual History. Oxford University Press.

Hara, Noriko, Jessica Abbazio, and Kathryn Perkins. 2019. “An Emerging Form of Public Engagement with Science: Ask Me Anything (AMA) Sessions on Reddit r/Science.” PLOS ONE 14(5).

Hasell, A., and Brian E. Weeks. 2016. “Partisan Provocation: The Role of Partisan News Use and Emotional Responses in Political Information Sharing in Social Media.” Human Communication Research 42(4):641–61.

Hatakka, Niko. 2016. “When Logics of Party Politics and Online Activism Collide: The Populist Finns Party’s Identity under Negotiation.” New Media & Society 19(12):1461444816660728.

Hatfield, Joe Edward. 2018. “Toxic Identification: #Twinks4Trump and the Homonationalist Rearticulation of Queer Vernacular Rhetoric.” Communication, Culture and Critique 11(1):147–61.

Hawkins, Kirk A. 2010. Venezuela’s Chavismo and Populism in Comparative Perspective. Cambridge University Press.

Heidt, Stephen J. 2018. “Scapegoater-in-Chief: Racist Undertones of Donald Trump’s Rhetorical Repertoire.” The Trump Presidency, Journalism, and Democracy 206–28. Retrieved April 18, 2020 (https://www.taylorfrancis.com/).

Heikkilä, Niko. 2017. “Online Antagonism of the Alt-Right in the 2016 Election.” European Journal of American Studies 12(2).

Heuman, Amy N., and Alberto González. 2018. “Trump’s Essentialist Border Rhetoric: Racial Identities and Dangerous Liminalities.” Journal of Intercultural Communication Research 47(4):326–42.

Hill, Marc Lamont. 2018. “‘Thank You, Black Twitter’: State Violence, Digital Counterpublics, and Pedagogies of Resistance.” Urban Education 53(2):286–302.

Himelboim, Itai, Stephen McCreery, and Marc Smith. 2013. “Birds of a Feather Tweet Together: Integrating Network and Content Analyses to Examine Cross-Ideology Exposure on Twitter.” Journal of Computer-Mediated Communication 18(2):154–74.

Himelboim, Itai, Kaye D. Sweetser, Spencer F. Tinkham, Kristen Cameron, Matthew Danelo, and Kate West. 2016. “Valence-Based Homophily on Twitter: Network Analysis of Emotions and Political Talk in the 2012 Presidential Election.” New Media & Society 18(7):1382–1400.

Himmelroos, Staffan, Lauri Rapeli, and Kimmo Grönlund. 2017. “Talking with Like-Minded People—Equality and Efficacy in Enclave Deliberation.” The Social Science Journal 54(2):148–58.

Hine, G., J. Onaolapo, E. De Cristofaro, N. Kourtellis, I. Leontiadis, R. Samaras, G. Stringhini, and J. Blackburn. 2017. “Kek, Cucks, and God Emperor Trump: A Measurement Study of 4chan’s Politically Incorrect Forum and Its Effects on the Web.” International Conference on Web and Social Media (ICWSM). Retrieved July 19, 2017 (http://www.icwsm.org/2017/program/accepted-papers/).

Hirschkind, Charles. 2001. “Civic Virtue and Religious Reason: An Islamic Counterpublic.” Cultural Anthropology 16(1):3–34.

Hochschild, Arlie Russell. 2018. Strangers in Their Own Land: Anger and Mourning on the American Right. The New Press.

Hodson, G., J. Rush, and C. C. Macinnis. 2010. “A Joke Is Just a Joke (except When It Isn’t): Cavalier Humor Beliefs Facilitate the Expression of Group Dominance Motives.” Journal of Personality and Social Psychology 99(4):660–82.

Hogg, Michael A. 2016. “Social Identity Theory.” Pp. 3–17 in Understanding Peace and Conflict Through Social Identity Theory, Peace Psychology Book Series. Springer, Cham.

Holt, Kristoffer, Adam Shehata, Jesper Strömbäck, and Elisabet Ljungberg. 2013. “Age and the Effects of News Media Attention and Social Media Use on Political Interest and Participation: Do Social Media Function as Leveller?” European Journal of Communication 28(1):19–34.

Hooghe, Marc, and Ruth Dassonneville. 2018. “Explaining the Trump Vote: The Effect of Racist Resentment and Anti-Immigrant Sentiments.” PS: Political Science & Politics 51(3):528–34.

Hooks, Bell. 2000. Where We Stand: Class Matters. Psychology Press.

Hopkins, Daniel J., and Samantha Washington. 2019. The Rise of Trump, the Fall of Prejudice? Tracking White Americans’ Racial Attitudes 2008-2018 via a Panel Survey. SSRN Scholarly Paper. ID 3378076. Rochester, NY: Social Science Research Network.

Howard, Philip N., Aiden Duffy, Deen Freelon, M. M. Hussain, Will Mari, and Marwa Maziad. 2011. Opening Closed Regimes: What Was the Role of Social Media During the Arab Spring? SSRN Scholarly Paper. ID 2595096. Rochester, NY: Social Science Research Network.

Huddy, Leonie, Lilliana Mason, and Lene Aarøe. 2015. “Expressive Partisanship: Campaign Involvement, Political Emotion, and Partisan Identity.” American Political Science Review 109(1):1–17.

Humphrey, Michael. 2017. “‘I Am in No Way This’: Troll Hunters and Pragmatic Digital Self-Reference.” Persona Studies 3(2):21–34.

Ince, Jelani, Fabio Rojas, and Clayton A. Davis. 2017. “The Social Media Response to Black Lives Matter: How Twitter Users Interact with Black Lives Matter through Hashtag Use.” Ethnic and Racial Studies 40(11):1814–30.

Inglehart, Ronald F., and Pippa Norris. 2016. Trump, Brexit, and the Rise of Populism: Economic Have-Nots and Cultural Backlash. SSRN Scholarly Paper. ID 2818659. Rochester, NY: Social Science Research Network.

Iyengar, Shanto, Yphtach Lelkes, Matthew Levendusky, Neil Malhotra, and Sean J. Westwood. 2019. “The Origins and Consequences of Affective Polarization in the United States.” Annual Review of Political Science 22(1):129–46.

Iyengar, Shanto, and Sean J. Westwood. 2015. “Fear and Loathing across Party Lines: New Evidence on Group Polarization.” American Journal of Political Science 59(3):690–707.

Jackson, Sarah J. 2016. “(Re)Imagining Intersectional Democracy from Black Feminism to Hashtag Activism.” Women’s Studies in Communication 39(4):375–79.

Jackson, Sarah J., and Sonia Banaszczyk. 2016. “Digital Standpoints: Debating Gendered Violence and Racial Exclusions in the Feminist Counterpublic.” Journal of Communication Inquiry 40(4):391–407.

Jagers, Jan, and Stefaan Walgrave. 2007. “Populism as Political Communication Style: An Empirical Study of Political Parties’ Discourse in Belgium.” European Journal of Political Research 46(3):319–45.

Jamshidi, Maryam. 2013. The Future of the Arab Spring: Civic Entrepreneurship in Politics, Art, and Technology Startups. 1 edition. Oxford: Butterworth-Heinemann.

Jasper, James M. 2017. “The Doors That Culture Opened: Parallels between Social Movement Studies and Social Psychology.” Group Processes & Intergroup Relations 20(3):285–302.

Jean, Yanick St., and Joe R. Feagin. 2015. Double Burden: Black Women and Everyday Racism. Routledge.

Jensen, Jacob, Suresh Naidu, Ethan Kaplan, Laurence Wilse-Samson, David Gergen, Michael Zuckerman, and Arthur Spirling. 2012. “Political Polarization and the Dynamics of Political Language: Evidence from 130 Years of Partisan Speech [with Comments and Discussion].” Brookings Papers on Economic Activity 1–81.

Jenzen, Olu. 2017. “Trans Youth and Social Media: Moving between Counterpublics and the Wider Web.” Gender, Place & Culture 24(11):1626–41.

Johnston, Ron, Charles Pattie, Kelvyn Jones, and David Manley. 2017. “Was the 2016 United States’ Presidential Contest a Deviating Election? Continuity and Change in the Electoral Map – or ‘Plus Ça Change, Plus C’est La Même Géographie.’” Journal of Elections, Public Opinion and Parties 27(4):369–88.

Jost, John T. 2006. “The End of the End of Ideology.” American Psychologist 61(7):651–70.

Jost, John T., Pablo Barberá, Richard Bonneau, Melanie Langer, Megan Metzger, Jonathan Nagler, Joanna Sterling, and Joshua A. Tucker. 2018. “How Social Media Facilitates Political Protest: Information, Motivation, and Social Networks.” Political Psychology 39(S1):85–118.

Jost, John T., Christopher M. Federico, and Jaime L. Napier. 2009. “Political Ideology: Its Structure, Functions, and Elective Affinities.” Annual Review of Psychology 60(1):307–37.

Jost, John T., H. Hannah Nam, David M. Amodio, and Jay J. Van Bavel. 2014. “Political Neuroscience: The Beginning of a Beautiful Friendship.” Political Psychology 35:3–42.

Jung, Nakwon, Yonghwan Kim, and Homero Gil de Zúñiga. 2011. “The Mediating Role of Knowledge and Efficacy in the Effects of Communication on Political Participation.” Mass Communication and Society 14(4):407–30.

Karpowitz, Christopher F., Chad Raphael, and Allen S. Hammond. 2009. “Deliberative Democracy and Inequality: Two Cheers for Enclave Deliberation among the Disempowered.” Politics & Society 37(4):576–615.

Keating, Jessica, Leaf Van Boven, and Charles M. Judd. 2016. “Partisan Underestimation of the Polarizing Influence of Group Discussion.” Journal of Experimental Social Psychology 65:52–58.

Kenski, Kate, and Natalie Jomini Stroud. 2006. “Connections Between Internet Use and Political Efficacy, Knowledge, and Participation.” Journal of Broadcasting & Electronic Media 50(2):173–92.

Kertzer, Joshua D., and Thomas Zeitzoff. 2017. “A Bottom-Up Theory of Public Opinion about Foreign Policy.” American Journal of Political Science 61(3):543–58.

Kidd, Dustin, and Keith McIntosh. 2016. “Social Media and Social Movements.” Sociology Compass 10(9):785–94.

Kim, Harris Hyun-soo, and Chaeyoon Lim. 2019. “From Virtual Space to Public Space: The Role of Online Political Activism in Protest Participation during the Arab Spring.” International Journal of Comparative Sociology 60(6):409–34.

Kligler-Vilenchik, Neta. 2017. “Alternative Citizenship Models: Contextualizing New Media and the New ‘Good Citizen.’” New Media & Society 19(11):1887–1903.

Kluttz, Daniel N., and Neil Fligstein. 2016. “Varieties of Sociological Field Theory.” Pp. 185–204 in Handbook of Contemporary Sociological Theory, Handbooks of Sociology and Social Research, edited by S. Abrutyn. Cham: Springer International Publishing.

Kou, Yubo, Xinning Gui, Yunan Chen, and Kathleen Pine. 2017. “Conspiracy Talk on Social Media: Collective Sensemaking during a Public Health Crisis.” Proceedings of the ACM on Human-Computer Interaction 1(CSCW):61:1–61:21.

Kováts, Eszter. 2018. “Questioning Consensuses: Right-Wing Populism, Anti-Populism, and the Threat of ‘Gender Ideology.’” Sociological Research Online 23(2):528–38.

Kraha, Amanda, Heather Turner, Kim Nimon, Linda Zientek, and Robin Henson. 2012. “Tools to Support Interpreting Multiple Regression in the Face of Multicollinearity.” Frontiers in Psychology 3.

Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. 2014. “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks.” Proceedings of the National Academy of Sciences 111(24):8788–90.

Kreiss, Daniel, and Zeynep Tufekci. 2013. “Occupying the Political: Occupy Wall Street, Collective Action, and the Rediscovery of Pragmatic Politics.” Cultural Studies ↔ Critical Methodologies 13(3):163–67.

Krumpal, Ivar. 2013. “Determinants of Social Desirability Bias in Sensitive Surveys: A Literature Review.” Quality & Quantity 47(4):2025–47.

Kruse, Lisa M., Dawn R. Norris, and Jonathan R. Flinchum. 2018. “Social Media as a Public Sphere? Politics on Social Media.” The Sociological Quarterly 59(1):62–84.

Kumar, Srijan, Jure Leskovec, William L. Hamilton, and Dan Jurafsky. 2018. “Community Interaction and Conflict on the Web.” in WWW 2018: The 2018 Web Conference.

Laclau, Ernesto. 2005. On Populist Reason. Verso.

Laclau, Ernesto, and Chantal Mouffe. 2014. Hegemony and Socialist Strategy: Towards a Radical Democratic Politics. Verso Books.

Lagorio-Chafkin, Christine. 2018. We Are the Nerds: The Birth and Tumultuous Life of Reddit, the Internet’s Culture Laboratory. New York: Hachette Books.

Lakoff, George. 1987. Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. 1st edition. Chicago: University of Chicago Press.

Lamont, Michèle. 2009. The Dignity of Working Men: Morality and the Boundaries of Race, Class, and Immigration. Harvard University Press.

Lamont, Michèle. 2012. “Toward a Comparative Sociology of Valuation and Evaluation.” Annual Review of Sociology 38(1):201–21.

Lamont, Michèle, and Virág Molnár. 2002. “The Study of Boundaries in the Social Sciences.” Annual Review of Sociology 28(1):167–95.

Lamont, Michèle, Bo Yun Park, and Elena Ayala‐Hurtado. 2017. “Trump’s Electoral Speeches and His Appeal to the American White Working Class.” The British Journal of Sociology 68(S1):S153–80.

Lamprianou, Iasonas. 2013. “Contemporary Political Participation Research: A Critical Assessment.” Pp. 21–42 in Democracy in transition. Springer.

Lane, Daniel S., Dam Hee Kim, Slgi S. Lee, Brian E. Weeks, and Nojin Kwak. 2017. “From Online Disagreement to Offline Action: How Diverse Motivations for Using Social Media Can Increase Political Information Sharing and Catalyze Offline Political Participation.” Social Media + Society 3(3):2056305117716274.

Langlois, Ganaele, Greg Elmer, Fenwick McKelvey, and Zachary Devereaux. 2009. “Networked Publics: The Double Articulation of Code and Politics on Facebook.” Canadian Journal of Communication 34(3).

Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network-Theory. OUP Oxford.

Laurison, Daniel. 2015. “The Willingness to State an Opinion: Inequality, Don’t Know Responses, and Political Participation.” Sociological Forum 30(4):925–48.

Lazer, David, and Jason Radford. 2017. “Data Ex Machina: Introduction to Big Data.” Annual Review of Sociology 43(1):19–39.

Leeper, Thomas J., and Rune Slothuus. 2014. “Political Parties, Motivated Reasoning, and Public Opinion Formation.” Political Psychology 35(S1):129–56.

Leydesdorff, Loet. 2011. “‘Meaning’ as a Sociological Concept: A Review of the Modeling, Mapping and Simulation of the Communication of Knowledge and Meaning.” Social Science Information 50(3–4):391–413.

Lin, Mingfeng, Henry C. Lucas, and Galit Shmueli. 2013. “Research Commentary—Too Big to Fail: Large Samples and the p-Value Problem.” Information Systems Research 24(4):906–17.

Lipset, Seymour Martin. 1959. “Some Social Requisites of Democracy: Economic Development and Political Legitimacy.” American Political Science Review 53(1):69–105.

Liu, Dilin, and Lei Lei. 2018. “The Appeal to Political Sentiment: An Analysis of Donald Trump’s and Hillary Clinton’s Speech Themes and Discourse Strategies in the 2016 US Presidential Election.” Discourse, Context & Media 25:143–52.

Loader, Brian D., Ariadne Vromen, and Michael A. Xenos. 2014. “The Networked Young Citizen: Social Media, Political Participation and Civic Engagement.” Information, Communication & Society 17(2):143–50.

Lotan, Gilad, Erhardt Graeff, Mike Ananny, Devin Gaffney, Ian Pearce, and Danah Boyd. 2011. “The Arab Spring | The Revolutions Were Tweeted: Information Flows during the 2011 Tunisian and Egyptian Revolutions.” International Journal of Communication 5(0):31.

Maitlis, Sally, and Marlys Christianson. 2014. “Sensemaking in Organizations: Taking Stock and Moving Forward.” Academy of Management Annals 8(1):57–125.

Major, Brenda, Alison Blodorn, and Gregory Major Blascovich. 2018. “The Threat of Increasing Diversity: Why Many White Americans Support Trump in the 2016 Presidential Election.” Group Processes & Intergroup Relations 21(6):931–40.

Manikonda, Lydia, Ghazaleh Beigi, Huan Liu, and Subbarao Kambhampati. 2018. “Twitter for Sparking a Movement, Reddit for Sharing the Moment: #metoo through the Lens of Social Media.” ArXiv:1803.08022 [Cs].

Mansbridge, Jane, James Bohman, Simone Chambers, David Estlund, Andreas Føllesdal, Archon Fung, Cristina Lafont, Bernard Manin, and José Luis Martí. 2010. “The Place of Self-Interest and the Role of Power in Deliberative Democracy.” Journal of Political Philosophy 18(1):64–100.

Manza, Jeff, and Clem Brooks. 2012. “How Sociology Lost Public Opinion: A Genealogy of a Missing Concept in the Study of the Political.” Sociological Theory 30(2):89–113.

Marx, Anthony W. 1997. Making Race and Nation: A Comparison of South Africa, the United States, and Brazil. Cambridge University Press.

Marx, Karl. 1976. Capital: A Critique of Political Economy. Penguin Books Limited.

Mascheroni, Giovanna, and Maria Francesca Murru. 2017. “‘I Can Share Politics But I Don’t Discuss It’: Everyday Practices of Political Talk on Facebook.” Social Media + Society 3(4):2056305117747849.

Mast, Jason L., and Jeffrey C. Alexander. 2018. Politics of Meaning/Meaning of Politics: Cultural Sociology of the 2016 U.S. Presidential Election. Springer.

Master, Bob. 2016. “Is Class Warfare Back? The Sanders Phenomenon and Life after Neoliberal Capitalism.” New Labor Forum 25(3):32–41.

McCall, Leslie. 2005. “The Complexity of Intersectionality.” Signs: Journal of Women in Culture and Society 30(3):1771–1800.

McCall, Leslie. 2014. “The Political Meanings of Social Class Inequality.” Social Currents 1(1):25–34.

McCall, Leslie, and Ann Shola Orloff. 2017. “The Multidimensional Politics of Inequality: Taking Stock of Identity Politics in the U.S. Presidential Election of 2016.” The British Journal of Sociology 68(S1):S34–56.

McCammon, Holly. 2013. “Discursive Opportunity Structure.” in The Wiley-Blackwell Encyclopedia of Social and Political Movements. American Cancer Society.

McCammon, Holly J., Courtney Sanders Muse, Harmony D. Newman, and Teresa M. Terrell. 2007. “Movement Framing and Discursive Opportunity Structures: The Political Successes of the U.S. Women’s Jury Movements.” American Sociological Review 72(5):725–49.

McDermott, Monica. 2006. Working-Class White: The Making and Unmaking of Race Relations. University of California Press.

McQuarrie, Michael. 2017. “The Revolt of the Rust Belt: Place and Politics in the Age of Anger.” The British Journal of Sociology 68(S1):S120–52.

Medvedev, Alexey N., Renaud Lambiotte, and Jean-Charles Delvenne. 2019. “The Anatomy of Reddit: An Overview of Academic Research.” Pp. 183–204 in Dynamics On and Of Complex Networks III, Springer Proceedings in Complexity, edited by F. Ghanbarnejad, R. Saha Roy, F. Karimi, J.-C. Delvenne, and B. Mitra. Cham: Springer International Publishing.

Meyer, David S., and Debra C. Minkoff. 2004. “Conceptualizing Political Opportunity.” Social Forces 82(4):1457–92.

Meyer, David S., and Sidney Tarrow. 2018. The Resistance: The Dawn of the Anti-Trump Opposition Movement. Oxford University Press.

Milkman, Ruth. 2017. “A New Political Generation: Millennials and the Post-2008 Wave of Protest.” American Sociological Review 82(1):1–31.

Mills, Richard A. 2017. “Pop-up Political Advocacy Communities on Reddit.Com: SandersForPresident and The Donald.” AI & SOCIETY 1–16.

Mirrlees, Tanner. 2018. “The Alt-Right’s Discourse on ‘Cultural Marxism’: A Political Instrument of Intersectional Hate.” Atlantis: Critical Studies in Gender, Culture & Social Justice 39(1):49–69.

Moeller, Judith, Claes de Vreese, Frank Esser, and Ruth Kunz. 2014. “Pathway to Political Participation: The Influence of Online and Offline News Media on Internal Efficacy and Turnout of First-Time Voters.” American Behavioral Scientist 58(5):689–700.

Mohr, John W., Robin Wagner-Pacifici, Ronald L. Breiger, and Petko Bogdanov. 2013. “Graphing the Grammar of Motives in National Security Strategies: Cultural Interpretation, Automated Text Analysis and the Drama of Global Politics.” Poetics 41(6):670–700.

Monteith, Margo J., and Laura K. Hildebrand. 2020. “Sexism, Perceived Discrimination, and System Justification in the 2016 U.S. Presidential Election Context.” Group Processes & Intergroup Relations 23(2):163–78.

Moreau, Julie. 2018. “Trump in Transnational Perspective: Insights from Global LGBT Politics.” Politics & Gender 14(4):619–48.

Morgan, Stephen L. 2018. “Status Threat, Material Interests, and the 2016 Presidential Vote.” Socius 4:2378023118788217.

Morgan, Stephen L., and Jiwon Lee. 2017a. “Social Class and Party Identification During the Clinton, Bush, and Obama Presidencies.” Sociological Science 4(16):394–423.

Morgan, Stephen L., and Jiwon Lee. 2017b. “The White Working Class and Voter Turnout in U.S. Presidential Elections, 2004 to 2016.” Sociological Science 4:656–85.

Morris, David S., and Jonathan S. Morris. 2013. “Digital Inequality and Participation in the Political Process: Real or Imagined?” Social Science Computer Review 31(5):589–600.

Mouffe, Chantal. 2000. “Politics and Passions.” Ethical Perspectives 7(2):146–150.

Mudde, Cas. 2004. “The Populist Zeitgeist.” Government and Opposition 39(4):541–63.

Mudde, Cas, and Cristóbal Rovira Kaltwasser. 2013. “Exclusionary vs. Inclusionary Populism: Comparing Contemporary Europe and Latin America.” Government and Opposition 48(2):147–74.

Mudde, Cas, and Cristóbal Rovira Kaltwasser. 2018. “Studying Populism in Comparative Perspective: Reflections on the Contemporary and Future Research Agenda.” Comparative Political Studies 51(13):1667–93.

Muddiman, Ashley, and Natalie Jomini Stroud. 2017. “News Values, Cognitive Biases, and Partisan Incivility in Comment Sections.” Journal of Communication 67(4):586–609.

Mueller, Carol, Salvatore J. Restifo, and Julie Fox Restifo. 2012. “Liberal States and Print Media Coverage of Global Advocacy Events: The Case of the UN Beijing Conference for Women.” Comparative Sociology 11(1):113–39.

Müller, Jan-Werner. 2016. What Is Populism? University of Pennsylvania Press.

Musi, Elena. 2018. “How Did You Change My View? A Corpus-Based Study of Concessions’ Argumentative Role.” Discourse Studies 20(2):270–88.

Mutz, Diana. 2016. In-Your-Face Politics: The Consequences of Uncivil Media. Princeton University Press.

Mutz, Diana. 2018a. “Response to Morgan: On the Role of Status Threat and Material Interests in the 2016 Election.” Socius 4:2378023118808619.

Mutz, Diana. 2018b. “Status Threat, Not Economic Hardship, Explains the 2016 Presidential Vote.” Proceedings of the National Academy of Sciences 115(19).

Nabatchi, Tina, and Matt Leighninger. 2015. Public Participation for 21st Century Democracy. John Wiley & Sons.

Nagle, Angela. 2017. Kill All Normies: Online Culture Wars From 4Chan And Tumblr To Trump And The Alt-Right. John Hunt Publishing.

Negt, Oskar, and Alexander Kluge. 1993. Public Sphere and Experience: Toward an Analysis of the Bourgeois and Proletarian Public Sphere. University of Minnesota Press.

Nelson, Laura K. 2017. “Computational Grounded Theory: A Methodological Framework.” Sociological Methods & Research 0049124117729703.

Nielsen, Finn Årup. 2011. “A New ANEW: Evaluation of a Word List for Sentiment Analysis in Microblogs.” Proceedings of the Extended Semantic Web Conference.

Nissenbaum, Asaf, and Limor Shifman. 2017. “Internet Memes as Contested Cultural Capital: The Case of 4chan’s /b/ Board.” New Media & Society 19(4):483–501.

Nithyanand, Rishab, Brian Schaffner, and Phillipa Gill. 2017. “Online Political Discourse in the Trump Era.” ArXiv:1711.05303 [Cs].

Nobata, Chikashi, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. “Abusive Language Detection in Online User Content.” Pp. 145–153 in Proceedings of the 25th International Conference on World Wide Web, WWW ’16. Republic and Canton of Geneva, Switzerland: International World Wide Web Conferences Steering Committee.

Norton, Matthew. 2017. “When Voters Are Voting, What Are They Doing?: Symbolic Selection and the 2016 U.S. Presidential Election.” American Journal of Cultural Sociology 5(3):426–42.

Oleinik, Anton. 2015. “The Language of Power: A Content Analysis of Presidential Addresses in North America and the Former Soviet Union, 1993–2012.” International Journal of the Sociology of Language 2015(236):181–204.

Oliveira, Daniel José Silva, Paulo Henrique de Souza Bermejo, and Pâmela Aparecida dos Santos. 2017. “Can Social Media Reveal the Preferences of Voters? A Comparison between Sentiment Analysis and Traditional Opinion Polls.” Journal of Information Technology & Politics 14(1):34–45.

Oliver, J. Eric, and Wendy M. Rahn. 2016. “Rise of the Trumpenvolk Populism in the 2016 Election.” The ANNALS of the American Academy of Political and Social Science 667(1):189–206.

Ollman, Bertell. 1968. “Marx’s Use of ‘Class.’” American Journal of Sociology 73(5):573–80.

Olson, Randy. 2013. “Retracing the Evolution of Reddit through Post Data.” Mic.Com. Retrieved February 27, 2020 (https://www.mic.com/articles/78735/here-s-the-cool-graph-reddit-fans-should-show-haters-who-still-claim-it-s-not-a-legit-news-site).

Omi, Michael, and Howard Winant. 2014. Racial Formation in the United States. Routledge.

O’Neill, Tully. 2018. “‘Today I Speak’: Exploring How Victim-Survivors Use Reddit.” International Journal for Crime, Justice and Social Democracy. Retrieved September 1, 2019 (https://www.crimejusticejournal.com/article/view).

Onwuachi-Willig, Angela, and Amber Fricke. 2010. “Class, Classes, and Classic Race-Baiting: What’s in a Definition Special Issue: Class and American Legal Education.” Denver University Law Review (4):807–34.

Onwuegbuzie, Anthony J., R. Burke Johnson, and Kathleen Mt Collins. 2009. “Call for Mixed Analysis: A Philosophical Framework for Combining Qualitative and Quantitative Approaches.” International Journal of Multiple Research Approaches 3(2):114–39.

Oser, Jennifer, Marc Hooghe, and Sofie Marien. 2013. “Is Online Participation Distinct from Offline Participation? A Latent Class Analysis of Participation Types and Their Stratification.” Political Research Quarterly 66(1):91–101.

Ovadia, Steven. 2015. “More Than Just Cat Pictures: Reddit as a Curated News Source.” Behavioral & Social Sciences Librarian 34(1):37–40.

Pachucki, Mark A., and Ronald L. Breiger. 2010. “Cultural Holes: Beyond Relationality in Social Networks and Culture.” Annual Review of Sociology 36(1):205–24.

Pachucki, Mark A., Sabrina Pendergrass, and Michèle Lamont. 2007. “Boundary Processes: Recent Theoretical Developments and New Contributions.” Poetics 35(6):331–51.

Page, Scott E. 2015. “What Sociologists Should Know About Complexity.” Annual Review of Sociology 41(1):21–41.

Papacharissi, Zizi. 2002. “The Virtual Sphere: The Internet as a Public Sphere.” New Media & Society 4(1):9–27.

Papacharissi, Zizi. 2010. A Networked Self: Identity, Community, and Culture on Social Network Sites. Routledge.

Papacharissi, Zizi. 2015. Affective Publics: Sentiment, Technology, and Politics. Oxford University Press.

Park, Gregory, David Bryce Yaden, H. Andrew Schwartz, Margaret L. Kern, Johannes C. Eichstaedt, Michael Kosinski, David Stillwell, Lyle H. Ungar, and Martin E. P. Seligman. 2016. “Women Are Warmer but No Less Assertive than Men: Gender and Language on Facebook.” PLOS ONE 11(5):e0155885.

Paz, Alejandro I. 2019. “Communicating Citizenship.” Annual Review of Anthropology 48(1).

Peacock, Cynthia. 2019. “(Not) Talking Politics: Motivations and Strategies for Avoiding the Expression of Political Opinions.” Western Journal of Communication 0(0):1–19.

Peacock, Cynthia, and Peter Leavitt. 2016. “Engaging Young People: Deliberative Preferences in Discussions About News and Politics.” Social Media + Society 2(1):2056305116637096.

Pennycook, Gordon, and David G. Rand. 2019. “Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning than by Motivated Reasoning.” Cognition 188:39–50.

Perrin, Andrew, and Monica Anderson. 2019. Share of U.S. Adults Using Social Media, Including Facebook, Is Mostly Unchanged since 2018. Pew Research Center.

Perrin, Andrew J., and Katherine McFarland. 2011. “Social Theory and Public Opinion.” Annual Review of Sociology 37(1):87–107.

Pew Research Center. 2019. Growing Partisan Divide Over Fairness of the Nation’s Tax System | Pew Research Center.

Pied, Claudine M. 2019. “Ethnography and the Making of ‘The People’: Uncovering Conservative Populist Politics in the United States.” American Journal of Economics and Sociology 78(3):761–86.

Pierson, Paul. 2017. “American Hybrid: Donald Trump and the Strange Merger of Populism and Plutocracy.” The British Journal of Sociology 68(S1):S105–19.

Poell, Thomas, and José van Dijck. 2017. Social Media and New Protest Movements. SSRN Scholarly Paper. ID 3091639. Rochester, NY: Social Science Research Network.

Polletta, Francesca. 2013. “Participatory Democracy in the New Millennium.” Contemporary Sociology 42(1):40–50.

Polletta, Francesca. 2014. “Participatory Democracy’s Moment.” Journal of International Affairs; New York 68(1):79–XIV.

Polletta, Francesca, and James M. Jasper. 2001. “Collective Identity and Social Movements.” Annual Review of Sociology 27(1):283–305.

Prakasam, Naveena, and Louisa Huxtable-Thomas. 2020. “Reddit: Affordances as an Enabler for Shifting Loyalties.” Information Systems Frontiers.

Puar, Jasbir K. 2018. Terrorist Assemblages: Homonationalism in Queer Times. Duke University Press.

Rauchfleisch, Adrian. 2017. “The Public Sphere as an Essentially Contested Concept: A Co-Citation Analysis of the Last 20 Years of Public Sphere Research.” Communication and the Public 2(1):3–18.

Ray‐Mukherjee, Jayanti, Kim Nimon, Shomen Mukherjee, Douglas W. Morris, Rob Slotow, and Michelle Hamer. 2014. “Using Commonality Analysis in Multiple Regressions: A Tool to

121

Decompose Regression Effects in the Face of Multicollinearity.” Methods in Ecology and Evolution 5(4):320–28.

Reichert, Frank. 2016. “How Internal Political Efficacy Translates Political Knowledge Into Political Participation.” Europe’s Journal of Psychology 12(2):221–41.

Reny, Tyler T., Loren Collingwood, and Ali A. Valenzuela. 2019. “Vote Switching in the 2016 Election: How Racial and Immigration Attitudes, Not Economics, Explain Shifts in White Voting.” Public Opinion Quarterly 83(1):91–113.

Reny, Tyler, Bryan Wilcox-Archuleta, and Vanessa Cruz Nichols. 2018. “Threat, Mobilization, and Latino Voting in the 2018 Election.” The Forum 16(4):573–599.

Ribeiro, Filipe N., Matheus Araújo, Pollyanna Gonçalves, Marcos André Gonçalves, and Fabrício Benevenuto. 2016. “SentiBench - a Benchmark Comparison of State-of-the-Practice Sentiment Analysis Methods.” EPJ Data Science 5(1):23.

Richard, Gabriela T., and Kishonna L. Gray. 2018. “Gendered Play, Racialized Reality: Black Cyberfeminism, Inclusive Communities of Practice, and the Intersections of Learning, Socialization, and Resilience in Online Gaming.” Frontiers: A Journal of Women Studies 39(1):112–48.

Ridgeway, Cecilia L. 2011. Framed by Gender: How Gender Inequality Persists in the Modern World. Oxford University Press.

Ridgeway, Cecilia L. 2014. “Why Status Matters for Inequality.” American Sociological Review 79(1):1–16.

Riesch, Hauke. 2010. “Theorizing Boundary Work as Representation and Identity.” Journal for the Theory of Social Behaviour 40(4):452–73.

Roediger, David. 1999. The Wages of Whiteness: Race and the Making of the American Working Class. Verso.

Rooduijn, Matthijs. 2019. “State of the Field: How to Study Populism and Adjacent Topics? A Plea for Both More and Less Focus.” European Journal of Political Research 58(1):362–72.

Roozenbeek, Jon, and Adrià Salvador Palau. 2017. “I Read It on Reddit: Exploring the Role of Online Communities in the 2016 US Elections News Cycle.” Pp. 192–220 in Social Informatics, Lecture Notes in Computer Science. Springer, Cham.

Rosaldo, Renato. 1994. “Cultural Citizenship and Educational Democracy.” Cultural Anthropology 9(3):402–11.

Roth, Benita. 2018. “Learning from the Tea Party: The US Indivisible Movement as Countermovement in the Era of Trump.” Sociological Research Online 23(2):539–46.

Ruhnau, Britta. 2000. “Eigenvector-Centrality — a Node-Centrality?” Social Networks 22(4):357–65.

Saldaña, Magdalena, Shannon C. McGregor, and Homero Gil de Zúñiga. 2015. “Social Media as a Public Space for Politics: Cross-National Comparison of News Consumption and Participatory Behaviors in the United States and the United Kingdom.” International Journal of Communication 9(1):3304–3326.

Samory, Mattia, and Tanushree Mitra. 2018. “Conspiracies Online: User Discussions in a Conspiracy Community Following Dramatic Events.” in Twelfth International AAAI Conference on Web and Social Media.

Sander, Richard H. 2010. “Class in American Legal Education Special Issue: Class and American Legal Education.” Denver University Law Review (4):631–82.

Saperstein, Aliya, Andrew M. Penner, and Ryan Light. 2013. “Racial Formation in Perspective: Connecting Individuals, Institutions, and Power Relations.” Annual Review of Sociology 39(1):359–78.

Schaffner, Brian F., Matthew Macwilliams, and Tatishe Nteta. 2018. “Understanding White Polarization in the 2016 Vote for President: The Sobering Role of Racism and Sexism.” Political Science Quarterly 133(1):9–34.

Schofield, Norman, Gary Miller, and Andrew Martin. 2003. “Critical Elections and Political Realignments in the USA: 1860–2000.” Political Studies 51(2):217–40.

Schradie, Jen. 2015. “The Gendered Digital Production Gap: Inequalities of Affluence.” Pp. 185–213 in Communication and Information Technologies Annual. Vol. 9, Studies in Media and Communications. Emerald Group Publishing Limited.

Schwalbe, Michael, Daphne Holden, Douglas Schrock, Sandra Godwin, Shealy Thompson, and Michele Wolkomir. 2000. “Generic Processes in the Reproduction of Inequality: An Interactionist Analysis.” Social Forces 79(2):419–52.

Schwartz, H. Andrew, and Lyle H. Ungar. 2015. “Data-Driven Content Analysis of Social Media: A Systematic Overview of Automated Methods.” The ANNALS of the American Academy of Political and Social Science 659(1):78–94.

Settle, Jaime E. 2018. Frenemies: How Social Media Polarizes America. Cambridge University Press.

Settle, Jaime E., and Taylor N. Carlson. 2019. “Opting Out of Political Discussions.” Political Communication 36(3):476–96.

Sewell, William H. 2005. Logics of History: Social Theory and Social Transformation. University of Chicago Press.

Shaw, Aaron, and Yochai Benkler. 2012. “A Tale of Two Blogospheres: Discursive Practices on the Left and Right.” American Behavioral Scientist 56(4):459–87.

Shelton, Martin, Katherine Lo, and Bonnie Nardi. 2015. “Online Media Forums as Separate Social Lives: A Qualitative Study of Disclosure Within and Beyond Reddit.”

Sides, John, Michael Tesler, and Lynn Vavreck. 2018. “Hunting Where the Ducks Are: Activating Support for Donald Trump in the 2016 Republican Primary.” Journal of Elections, Public Opinion and Parties 28(2):135–56.

Sides, John, Michael Tesler, and Lynn Vavreck. 2019. Identity Crisis: The 2016 Presidential Campaign and the Battle for the Meaning of America. Princeton University Press.

Silva, Eric O. 2019. “Donald Trump’s Discursive Field: A Juncture of Stigma Contests over Race, Gender, Religion, and Democracy.” Sociology Compass 13(12):e12757.

Silva, Graziella Moraes. 2016. “After Racial Democracy: Contemporary Puzzles in Race Relations in Brazil, Latin America and beyond from a Boundaries Perspective.” Current Sociology 64(5):794–812.

Skeggs, Beverley. 2004. Class, Self, Culture. Psychology Press.

Skoric, Marko M., Qinfeng Zhu, Debbie Goh, and Natalie Pang. 2016. “Social Media and Citizen Engagement: A Meta-Analytic Review.” New Media & Society 18(9):1817–39.

Small, Mario Luis. 2011. “How to Conduct a Mixed Methods Study: Recent Trends in a Rapidly Growing Literature.” Annual Review of Sociology 37(1):57–86.

Smith, Tammy. 2007. “Narrative Boundaries and the Dynamics of Ethnic Conflict and Conciliation.” Poetics 35(1):22–46.

Snell, Patricia. 2010. “Emerging Adult Civic and Political Disengagement: A Longitudinal Analysis of Lack of Involvement With Politics.” Journal of Adolescent Research 25(2):258–87.

Snow, David, Robert Benford, Holly McCammon, Lyndi Hewitt, and Scott Fitzgerald. 2014. “The Emergence, Development, and Future of the Framing Perspective: 25+ Years Since ‘Frame Alignment.’” Mobilization: An International Quarterly 19(1):23–46.

Sobieraj, Sarah. 2018. “Bitch, Slut, Skank, Cunt: Patterned Resistance to Women’s Visibility in Digital Publics.” Information, Communication & Society 21(11):1700–1714.

Sobieraj, Sarah, and Jeffrey M. Berry. 2011. “From Incivility to Outrage: Political Discourse in Blogs, Talk Radio, and Cable News.” Political Communication 28(1):19–41.

Spillman, Lyn. 2014. “Mixed Methods and the Logic of Qualitative Inference.” Qualitative Sociology 37(2):189–205.

Stavrakakis, Yannis, and Anton Jäger. 2018. “Accomplishments and Limitations of the ‘New’ Mainstream in Contemporary Populism Studies.” European Journal of Social Theory 21(4):547–65.

Steinbugler, Amy C. 2012. Beyond Loving: Intimate Racework in Lesbian, Gay, and Straight Interracial Relationships. 1 edition. New York, NY: Oxford University Press.

Stewart, Ashley, Joshua Schuschke, and Brendesha Tynes. 2019. “Online Racism: Adjustment and Protective Factors Among Adolescents of Color.” Pp. 501–13 in Handbook of Children and Prejudice: Integrating Research, Practice, and Policy, edited by H. E. Fitzgerald, D. J. Johnson, D. B. Qin, F. A. Villarruel, and J. Norder. Cham: Springer International Publishing.

Strandberg, Kim, Staffan Himmelroos, and Kimmo Grönlund. 2019. “Do Discussions in Like-Minded Groups Necessarily Lead to More Extreme Opinions? Deliberative Democracy and Group Polarization.” International Political Science Review 40(1):41–57.

Strolovitch, Dara Z., Janelle S. Wong, and Andrew Proctor. 2017. “A Possessive Investment in White Heteropatriarchy? The 2016 Election and the Politics of Race, Gender, and Sexuality.” Politics, Groups, and Identities 5(2):353–63.

Sue, Derald Wing. 2010. Microaggressions in Everyday Life: Race, Gender, and Sexual Orientation. John Wiley & Sons.

Sunstein, Cass R. 2001. Republic.Com. Princeton University Press.

Sunstein, Cass R. 2018. #Republic: Divided Democracy in the Age of Social Media. Princeton University Press.

Swedberg, Richard. 2018. “Folk Economics and Its Role in Trump’s Presidential Campaign: An Exploratory Study.” Theory and Society 47(1):1–36.

Swidler, Ann. 1986. “Culture in Action: Symbols and Strategies.” American Sociological Review 51(2):273–86.

Taboada, Maite, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. “Lexicon-Based Methods for Sentiment Analysis.” Computational Linguistics 37(2):267–307.

Taguieff, Pierre-André. 1995. “Political Science Confronts Populism: From a Conceptual Mirage to a Real Problem.” Telos 1995(103):9–43.

Tappin, Ben M., Leslie van der Leer, and Ryan T. McKay. 2017. “The Heart Trumps the Head: Desirability Bias in Political Belief Revision.” Journal of Experimental Psychology. General 146(8):1143–49.

Taylor, Y., S. Hines, and M. Casey. 2010. Theorizing Intersectionality and Sexuality. Springer.

Terriquez, Veronica, Tizoc Brenes, and Abdiel Lopez. 2018. “Intersectionality as a Multipurpose Collective Action Frame: The Case of the Undocumented Youth Movement.” Ethnicities 18(2):260–76.

Testa, Paul F., Matthew V. Hibbing, and Melinda Ritchie. 2014. “Orientations toward Conflict and the Conditional Effects of Political Disagreement.” The Journal of Politics 76(3):770–85.

Theocharis, Yannis. 2015. “The Conceptualization of Digitally Networked Participation.” Social Media + Society 1(2):2056305115610140.

Thorson, Kjerstin. 2014. “Facing an Uncertain Reception: Young Citizens and Political Interaction on Facebook.” Information, Communication & Society 17(2):203–16.

Toepfl, Florian, and Eunike Piwoni. 2018. “Targeting Dominant Publics: How Counterpublic Commenters Align Their Efforts with Mainstream News.” New Media & Society 20(5):2011–27.

Tormey, Simon. 2015. The End of Representative Politics. John Wiley & Sons.

Tufekci, Zeynep. 2018. “How Social Media Took Us from Tahrir Square to Donald Trump.” MIT Technology Review 121(5).

Tuğal, Cihan. 2017. “An Unmoving Wall or a Shifting One? The American Right’s Deep Emotional Politics and Its Emaciated Counterpart.” The British Journal of Sociology 68(1):137–42.

Tynes, Brendesha M., and Kimberly J. Mitchell. 2014. “Black Youth Beyond the Digital Divide: Age and Gender Differences in Internet Use, Communication Patterns, and Victimization Experiences.” Journal of Black Psychology 40(3):291–307.

Tynes, Brendesha M., Juan Del Toro, and Fantasy T. Lozada. 2015. “An Unwelcomed Digital Visitor in the Classroom: The Longitudinal Impact of Online Racial Discrimination on Academic Motivation” edited by M. Stormont. School Psychology Review 44(4):407–24.

Vaisey, Stephen. 2007. “Structure, Culture, and Community: The Search for Belonging in 50 Urban Communes.” American Sociological Review 72(6):851–73.

Valentino, Nicholas A., Fabian G. Neuner, and L. Matthew Vandenbroek. 2018. “The Changing Norms of Racial Political Rhetoric and the End of Racial Priming.” The Journal of Politics 80(3):757–71.

Valentino, Nicholas A., Carly Wayne, and Marzia Oceno. 2018. “Mobilizing Sexism: The Interaction of Emotion and Gender Attitudes in the 2016 US Presidential Election.” Public Opinion Quarterly 82(S1):799–821.

Vigneau, Evelyne, Mingkun Chen, and El Mostafa Qannari. 2015. “ClustVarLV: An R Package for the Clustering of Variables Around Latent Variables.” R Journal 7(2).

Vigneau, Evelyne, and E. M. Qannari. 2003. “Clustering of Variables Around Latent Components.” Communications in Statistics - Simulation and Computation 32(4):1131–50.

Vliegenthart, Rens, and Liesbet van Zoonen. 2011. “Power to the Frame: Bringing Sociology Back to Frame Analysis.” European Journal of Communication 26(2):101–15.

Vráblíková, Kateřina. 2014. “How Context Matters? Mobilization, Political Opportunity Structures, and Nonelectoral Political Participation in Old and New Democracies.” Comparative Political Studies 47(2):203–29.

Walby, Sylvia, Jo Armstrong, and Sofia Strid. 2012. “Intersectionality: Multiple Inequalities in Social Theory.” Sociology 46(2):224–40.

Walley, Christine J. 2017. “Trump’s Election and the ‘White Working Class’: What We Missed.” American Ethnologist 44(2):231–36.

Wampler, Brian, and Leonardo Avritzer. 2004. “Participatory Publics: Civil Society and New Institutions in Democratic Brazil.” Comparative Politics 36(3):291–312.

Warner, Leah R., and Stephanie A. Shields. 2013. “The Intersections of Sexuality, Gender, and Race: Identity Research at the Crossroads.” Sex Roles 68(11–12):803–10.

Warner, Michael. 2002. Publics and Counterpublics. Zone Books.

Waters, Mary C., Philip Kasinitz, and Asad L. Asad. 2014. “Immigrants and African Americans.” Annual Review of Sociology 40(1):369–90.

Weber, Max. 1922. Economy and Society: An Outline of Interpretive Sociology (G. Roth & C. Wittich, Eds.). Berkeley: University of California Press.

Weeks, Brian E., Alberto Ardèvol-Abreu, and Homero Gil de Zúñiga. 2017. “Online Influence? Social Media Use, Opinion Leadership, and Political Persuasion.” International Journal of Public Opinion Research 29(2):214–39.

Wells, Chris, Katherine J. Cramer, Michael W. Wagner, German Alvarez, Lewis A. Friedland, Dhavan V. Shah, Leticia Bode, Stephanie Edgerly, Itay Gabay, and Charles Franklin. 2017. “When We Stop Talking Politics: The Maintenance and Closing of Conversation in Contentious Times.” Journal of Communication 67(1):131–57.

White, Anna. 2019. “Reddit as an Analogy for Scholarly Publishing and the Constructed, Contextual Nature of Authority.” Communications in Information Literacy 13(2).

Whitehead, Andrew L., Samuel L. Perry, and Joseph O. Baker. 2018. “Make America Christian Again: Christian Nationalism and Voting for Donald Trump in the 2016 Presidential Election.” Sociology of Religion 79(2):147–71.

Wiegand, Michael, Josef Ruppenhofer, Anna Schmidt, and Clayton Greenberg. 2018. “Inducing a Lexicon of Abusive Words – a Feature-Based Approach.” Pp. 1046–56 in.

Wilkins, Amy C. 2012. “Becoming Black Women: Intimate Stories and Intersectional Identities.” Social Psychology Quarterly 75(2):173–96.

Williams, Joan. 2017. White Working Class: Overcoming Class Cluelessness in America. Harvard Business Review Press.

Wilz, Kelly. 2016. “Bernie Bros and Woman Cards: Rhetorics of Sexism, Misogyny, and Constructed Masculinity in the 2016 Election.” Women’s Studies in Communication 39(4):357–60.

Wingard, Jennifer. 2017. “Some of the People, All of the Time: Trump’s Selective Inclusion.” Women’s Studies in Communication 40(4):330–33.

Wojcieszak, Magdalena E., and Diana C. Mutz. 2009. “Online Groups and Political Discourse: Do Online Discussion Spaces Facilitate Exposure to Political Disagreement?” Journal of Communication 59(1):40–56.

Worthen, Meredith G. F. 2019. “A Rainbow Wave? LGBTQ Liberal Political Perspectives During Trump’s Presidency: An Exploration of Sexual, Gender, and Queer Identity Gaps.” Sexuality Research and Social Policy.

Wright, Erik Olin. 1985. Classes. Verso.

Wright, Erik Olin. 2016. Class, Crisis and the State. Verso Books.

Xenos, Michael A., Ariadne Vromen, and Brian D. Loader. 2014. “The Great Equalizer? Patterns of Social Media Use and Youth Political Engagement in Three Advanced Democracies.” The Networked Young Citizen. Retrieved September 1, 2019 (https://www.taylorfrancis.com/).

Yaden, David B., Johannes C. Eichstaedt, Margaret L. Kern, Laura K. Smith, Anneke Buffone, David J. Stillwell, Michal Kosinski, Lyle H. Ungar, Martin E. P. Seligman, and H. Andrew Schwartz. 2017. “The Language of Religious Affiliation: Social, Emotional, and Cognitive Differences.” Social Psychological and Personality Science 9(4).

Young, Iris Marion. 2002. Inclusion and Democracy. Oxford University Press.

Young, Iris Marion. 2011. Justice and the Politics of Difference. Princeton University Press.

Zuberi, Tukufu, and Eduardo Bonilla-Silva. 2008. White Logic, White Methods: Racism and Methodology. Rowman & Littlefield Publishers.

Zuckerman, Ethan. 2014. “New Media, New Civics?” Policy & Internet 6(2):151–68.

Appendix A: Glossary

This appendix provides concise definitions for the most important terms appearing in the text, particularly those given novel meanings in this study.

Symbolic boundary construction & intersection

• Symbolic boundary. A collectively imagined cultural line by which social actors divide people and other entities into groups. Specific symbolic boundaries like man|woman and black|white are often part of boundary systems like race or gender. Symbolic boundaries are strongly related to formal boundaries: laws or policies that treat people differently based on their membership in the social groups defined by symbolic boundaries.

• Boundary work. Acts that activate, define, or frame symbolic boundaries. Boundary work need not be discursive or consciously intended. This study differentiates between three stages of boundary work:

o boundary activation: makes symbolic boundaries salient by directly referencing boundary systems (race, gender) or the groups that constitute them (black people, women).

o group definition: assigns people to one side or the other of the relevant symbolic boundary.

o group framing: associates groups with symbols and emotional charges (negative or positive) to intervene in the collective understanding of what groups mean. The positively charged (upframing) and negatively charged (downframing) instantiations of group framing play a particularly large role in the formation of symbolic boundaries.

• Intersectionality studies. A field of study exploring the intersectionality of social experience: the core insight that all boundary systems - whether race, class, gender, or the many other boundaries explored in this study - are deeply connected.

• Intersectional boundary analysis. Analysis that measures the relative prevalence, clustering, and function of symbolic boundaries in discourse - the intersectional boundary structure - in order to understand how intersectionality differs across contexts and changes over time.

o Salience. The relative prevalence of a boundary system in a particular field of discourse.

o Clustering. The extent to which systems are entangled with each other in a particular sphere of discourse. Generally, boundary systems exhibit strong levels of intersectionality - as Audre Lorde wrote, "there is no single-issue struggle, because we do not lead single-issue lives" (Lorde 2012) - but that level varies with context.

For example, one rough measure of the level of clustering between two boundary systems is the extent to which boundary work in those systems co-occurs in discourse.
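These two measures can be made concrete. The following is a minimal sketch, not the study's actual method: it assumes each document has already been tagged with the boundary systems whose boundary work it contains, estimates salience as the share of documents activating each system, and estimates pairwise clustering as a Jaccard co-occurrence score. The `docs` data and all function names are hypothetical.

```python
from collections import Counter
from itertools import combinations

# Hypothetical corpus: each document tagged with the boundary systems
# (race, gender, class) whose boundary work it contains.
docs = [
    {"race", "class"},
    {"gender"},
    {"race", "gender"},
    {"class"},
    {"race"},
]

def salience(docs):
    """Relative prevalence: share of documents activating each system."""
    counts = Counter(system for doc in docs for system in doc)
    n = len(docs)
    return {system: count / n for system, count in counts.items()}

def clustering(docs):
    """Rough co-occurrence (Jaccard): for each pair of systems, the share
    of documents mentioning either system that mention both."""
    systems = sorted({system for doc in docs for system in doc})
    scores = {}
    for a, b in combinations(systems, 2):
        both = sum(1 for doc in docs if a in doc and b in doc)
        either = sum(1 for doc in docs if a in doc or b in doc)
        scores[(a, b)] = both / either if either else 0.0
    return scores

print(salience(docs))    # race appears in 3 of 5 documents -> 0.6
print(clustering(docs))  # ("gender", "race"): 1 of 4 documents -> 0.25
```

A Jaccard score is only one of several plausible co-occurrence measures; pointwise mutual information or correlation across documents would serve the same illustrative purpose.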

• Boundary realignment. A substantial shift in intersectional boundary structure, in which the salience, clustering, or function of social categories changes. Linked to but distinct from political realignment, which focuses on whether elections produce changes in the structure of alliances, voter affiliations, and political parties (Brunell, Grofman, and Merrill 2012).

Social psychology

• Schema: a network of symbols with connections built through linked experience (e.g., “neurons that fire together wire together”). Each symbol gains meaning for actors via links to other known symbols (e.g., like/unlike, less/more, in/out - Lakoff 1987), gains valence via association with emotional states (Collins 2004), and does work via connection with capital (Bourdieu 1991). Core component of:

o Frames, roles, ideologies: persistent schemas, often but not always intentionally crafted, that organize perception of other schemas.

o Capital (resource): an object or practice that enables schemas to do work in a field; a medium of power. Includes symbolic, cultural, social, and material (in ascending order of persistence when transposed from field to field).

o Habitus: the system of schemas and valences imprinted by experiential coding in the body and mind of each actor.

o Repertoire: the total set of schemas available for an actor or group of actors to deploy (Swidler 1986).

• Resonance (structural homology): a mutually affirming interaction of schemas resulting in experiences of meaning and value (Ferree 2003).3

o Resonance structure (discursive opportunity structure): The aggregate habitus of the actors in a given field, which determines the reception of any action within that field.

o Resonance mobility: the relative ease with which the meaning and use of schemas change over time for a given field or actor.

Fields and publics

• Field: a group of actors, schemas, capital, and relationships that structures the space of thought, expression, and action apparent to every actor who is part of the field. Fields have practical boundaries set by actors themselves and analytic boundaries set pragmatically in research to maximize insight.4

3 The operative metaphor here is to the world of acoustics, in which objects respond differently to external vibration (sound) according to their internal structure. Similarly, each schema – and thus each field – responds differently to external stimulus based on its logical structure, amplifying some ideas and expressions and dampening others. The magnitude of dampening or amplification attached to each schema depends on the resources to which it is connected; because schemas and capital are so deeply interpenetrated, resonance structures and is structured by the distribution of capital within a field.

• Social space: An aggregate of all linked fields and social actors (Bourdieu 1985). Geographical location is only one relevant attribute in social space among other aspects of social location; other distances are analytically defined according to shared attributes and schemas. Mapping social space is the process of determining the structure of locations, distances, and connections relevant to particular questions.

• Participatory public: a field with formally open membership in which actors discuss matters of common concern in the view of others, drawing from the concept of a public sphere (Habermas 1991). Participatory publics vary along many dimensions, but this study focuses on two: emotional intensity and diversity of political lean among participants in the field. The study splits participatory publics into the four types outlined in the table below and measures two characteristics of new authors:

o Engagement: The rate of commenting in a given time period

o Attrition: Stopping participation in a given forum, public type, or platform

Glossary References

Bourdieu, Pierre. 1985. “The Social Space and the Genesis of Groups.” Theory and Society 14 (6): 723–44.

———. 1991. Language and Symbolic Power. Harvard University Press.

Brunell, Thomas L., Bernard Grofman, and Samuel Merrill. 2012. “Magnitude and Durability of Electoral Change: Identifying Critical Elections in the U.S. Congress 1854–2010.” Electoral Studies 31 (4): 816–28. https://doi.org/10.1016/j.electstud.2012.06.002.

Collins, Randall. 2004. Interaction Ritual Chains. Princeton, N.J: Princeton University Press.

Ferree, Myra Marx. 2003. “Resonance and Radicalism: Feminist Framing in the Abortion Debates of the United States and Germany.” American Journal of Sociology 109 (2): 304–44. https://doi.org/10.1086/378343.

Habermas, Jürgen. 1991. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Cambridge, Mass: The MIT Press.

4 “Actors do the sociology for the sociologists and sociologists learn from the actors what makes up their set of associations” (Latour 2005).

Table. Types of participatory public by ideological diversity and emotional intensity

 | Ideological diversity - | Ideological diversity +
Emotional intensity - | Incubators | Deliberative spheres
Emotional intensity + | Counterpublics | Battlefields


Lakoff, George. 1987. Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. 1st edition. Chicago: University of Chicago Press.

Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.

Lorde, Audre. 2012. Sister Outsider: Essays and Speeches. Potter/Ten Speed/Harmony/Rodale.

Swidler, Ann. 1986. “Culture in Action: Symbols and Strategies.” American Sociological Review 51 (2): 273–86. https://doi.org/10.2307/2095521.


Appendix B: Methodological Appendix

Contents

First steps ......................................................................................................................................................................... 132

Sourcing and preparations ................................................................................................................... 132

Cleaning the data ................................................................................................................................... 133

Characterizing the platform ................................................................................................................. 134

Focusing in and smoothing results by quarter .................................................................................. 137

Accounting for changes to the platform ........................................................................................... 138

Forum typology & political engagement ..................................................................................................................... 138

Categorizing forums ............................................................................................................................. 138

Political lean and polarization ............................................................................................................. 140

Sentiment ............................................................................................................................................... 141

Type of public ....................................................................................................................................... 142

Tools for Analysis of Boundary Work: Lexicons & Codes...................................................................................... 144

Forms of boundary work: activation v. downframing .................................................................... 144

Developing codes .................................................................................................................................. 145

Developing original lexicons ............................................................................................................... 147

Refining the original lexicons .............................................................................................................. 149

Validating the original lexicons ........................................................................................................... 150

Interpreting lexicon results - relating prevalence to current events and to actual discourse ..... 152

Analysis ............................................................................................................................................................................. 154

Basic definitions & rules ...................................................................................................................... 154

Chapter 1: Boundary lexicon prevalence ......................................................................................... 155

Chapter 1: Boundary lexicon centrality .............................................................................................. 157

Chapter 1: Boundary lexicon clustering ............................................................................................. 158

Chapter 2: Qualitative analysis of class discourse ............................................................................ 162

Chapter 3: Characterizing types of public ......................................................................................... 165

Chapter 3: Predicting author retention and engagement - sample and core variables................ 166

Chapter 3: Predicting author retention and engagement - regression models ............................. 168


FIRST STEPS

Sourcing and preparations

This study uses as its primary dataset a collection of comments to the online platform Reddit, made accessible in monthly tables by pushshift.io and the Google BigQuery project: roughly 1.7 billion comments posted between November 8, 2015 and November 8, 2017, a year before and a year after the election. With over 330 million active authors posting more than 900 million comments each year in over 80,000 forums, Reddit at the time of this study was the 6th most active website in the United States, with the expressed purpose to "ultimately be a universal platform for human discourse" (Lagorio-Chafkin 2018).

The dataset includes both primary comments - the ones that start discussion threads - and the reply threads that follow them. The filtering process, detailed below, targeted posts that clearly identified themselves as bots or moderators, along with comments with misleading content like copypasta, resulting in the removal of 12.4% of comments on average across the quarters studied.

The forums on this platform, known as "subreddits", function as publics with formal boundaries: each subreddit has its own name, location, and set of norms enforced by its own group of moderators. Since 2018, the platform has referred to these forums as "communities" that users join rather than subreddits to which they subscribe, reflecting an emphasis on social relations that appears far earlier in the platform's culture. Casual users can browse a home page that collects the most popular posts of the day from multiple subreddits, but users who stay on the platform long enough to comment tend to focus on a bounded set of communities over time.

The pushshift.io Reddit corpus is a near-complete database of two kinds of posts:

• submissions - posts to a subreddit that begin a comment thread; and

• comments - replies posted on the thread of a given submission, either directly to the submission or to another nested comment

The corpus collects posts across all subreddits on the platform in real time through the Reddit API and catalogs them in large monthly files, which I accessed through the cloud computing tool Google BigQuery. Each comment and submission record contains the following relevant variables, along with others not used in this analysis:

Variable | Type | Applies to | Notes
id | string | Both | 6-symbol identifier unique to each post
subreddit | string | Both | Forum in which the post appears
author | string | Both | Reddit username for the author of the post
author_flair_text | string | Both | Text (or none) displayed next to the author’s username in the post; varies depending on subreddit
distinguished | string | Both | “moderator” if the author is a moderator of that subreddit, NULL if not
created_utc | integer | Both | UTC code for the moment posted
score | integer | Both | Net of upvotes (+) and downvotes (-) given by other users
retrieved_on | integer | Both | UTC code for when the post was collected by pushshift
parent_id | string | Comments | Unique id (prepending “t1_” for comments and “t3_” for submissions to the original id) for the post that the comment directly replied to
link_id | string | Comments | Unique id of the submission that started the thread in which the comment appears
body | string | Comments | Text of the comment
title | string | Submissions | Headline of the submission; appears in bold at the top of the thread. For content analysis, this is merged into body
selftext | string | Submissions | Additional writing posted along with the submission title. For content analysis, this is merged into body
permalink | string | Submissions | Directory within reddit; useful in generating links to a submission thread
url | string | Submissions | Link (if any) included in the post - often links to an article or an image; usage of links increases over the study period. This is merged into body for phrase-based scoring only

For the analysis of this study, I merged the submissions and comments into a unified dataset to track discourse across both types of post. To the basic structure above, I added the following derived fields for each post:

Variable | Type | Notes
full_id | string | Unique id prepending “t1_” for comments and “t3_” for submissions to the original id for each post
parenturl | string | Link to the relevant thread on the Reddit platform
replies1 | integer | Count of first-order replies to the post*
replies12 | integer | Count of second-order replies (direct replies + replies to direct replies)
subm_score | integer | Total score of the submission that started the post’s thread
subm_coms | integer | Total comments on the post’s thread
subm_author | string | Author of the original submission for the post’s thread
subm_permalink | string | Permalink used in locating the post’s thread
cleanedbody | string | Text of the post after cleaning (see below)

* Reply counting occurred before filtering posts to mirror as closely as possible the number of replies a user would see as they clicked on the comment.
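The merge of comments and submissions into one dataset can be sketched as follows. This is a minimal illustration assuming simple dicts in place of BigQuery rows; the field names follow the tables above, and the sample post contents are invented.

```python
# Sketch of the unified post dataset: derive full_id with the t1_/t3_
# prefixes and, for submissions, fold title and selftext into body
# (as the variable tables above describe for content analysis).
def unify(post, kind):
    """Return a copy of the post with full_id and a merged body."""
    prefix = "t1_" if kind == "comment" else "t3_"
    row = dict(post)
    row["full_id"] = prefix + post["id"]
    if kind == "submission":
        # For content analysis, title and selftext are merged into body.
        row["body"] = " ".join(filter(None, [post.get("title"), post.get("selftext")]))
    return row

comment = {"id": "abc123", "body": "I agree", "parent_id": "t3_xyz789"}
submission = {"id": "xyz789", "title": "Election thread", "selftext": "Discuss below"}

unified = [unify(comment, "comment"), unify(submission, "submission")]
```

With this shape, a comment's parent_id can be matched directly against another post's full_id to reconstruct threads.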

Cleaning the data

Like most large datasets, this one is subject to errors; helpfully, the corpus has attracted enough attention for multiple scholars to analyze its error rates. The most thorough analysis so far, by Gaffney and Matias, estimated that roughly 0.043% of comments and 0.65% of submissions were missing, and pushshift.io was taking steps to reduce these rates over time even for its archival data (Gaffney and Matias 2018). In addition, my own analysis finds that roughly 0.1% of submissions had incomplete data - for instance, 4785 of 6.1 million submissions in November 2011 were missing the name of their subreddit. In most cases it is impossible to fully recover the content of missing posts, and because these error rates are small relative to other sources of error, this study makes no effort to do so.

Because my intent was to focus on the subjective experience of Reddit readers engaging with other users on the platform, several types of comment needed to be removed. The filters below, which removed "posts outside the scope of the study", targeted anything that did not on its face appear to come from another live user - including text copied and pasted from other sources, posts that obviously came from a bot, and the leftovers of removed posts. By contrast, some comments that other studies take pains to remove I left in for the sake of being true to the perspective of the user - particularly those likely to come from bots or fake accounts but that might appear to be a legitimate user at first glance. These were the three types of post targeted for removal prior to any counting:

1. Duplicates and copypasta. Many comments on the platform paste directly from others rather than offering new discourse - this study focuses on commenting behavior rather than pasting behavior. Only one copy per subreddit is kept.

2. Obvious bots. This study takes the perspective of the naïve user, so only posts that explicitly identify themselves as bots (e.g., “I am a bot”, “This bot was created by”) are removed.

3. Trivial posts. This category includes notes about post removal repeated by moderators, posts with body “[deleted]” or “[removed]”, and a few widespread meme templates that varied enough in each instance that they had to be targeted by distinctive phrases - for instance, "Trump Train" and "fuck off CTR".
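The three pre-count filters can be sketched as a single pass over post bodies. This is an illustrative sketch: the marker phrases and trivial-body strings below are the examples named above, standing in for the study's full filter lists.

```python
# Sketch of the three pre-count filters: trivial posts, obvious bots,
# and duplicates/copypasta (keeping one copy per subreddit).
BOT_MARKERS = ("i am a bot", "this bot was created by")
TRIVIAL_BODIES = {"[deleted]", "[removed]"}

def keep_post(body, seen_in_subreddit):
    """Return False for trivial posts, self-identified bots, and duplicates."""
    text = body.strip().lower()
    if text in TRIVIAL_BODIES:               # trivial posts
        return False
    if any(m in text for m in BOT_MARKERS):  # self-identified bots
        return False
    if text in seen_in_subreddit:            # duplicate/copypasta in this subreddit
        return False
    seen_in_subreddit.add(text)              # keep the first copy per subreddit
    return True

seen = set()  # one set per subreddit
posts = ["Great point!", "Great point!", "[removed]", "Beep boop, I am a bot."]
kept = [p for p in posts if keep_post(p, seen)]
```

In practice the duplicate check would be scoped per subreddit (one `seen` set each), as the first filter above specifies.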

The process of cleaning also involved editing the body of the text to facilitate automated content analysis. This editing made it possible to create lexicons that rapidly count word occurrences by using spaces to parse the text into individual words. These are the major cleaning steps performed on each post:

1. Filter out URLs, html codes, and junk text. This step removes anything the user would not interpret as the words of a human interlocutor. Though I use URLs from the original pre-edited text to track information sources, it is important to remove them here since they include character strings that might be misinterpreted as words.

2. Remove all characters other than numbers, letters, and spaces. This step removes dashes and quotation marks, turning ngrams like "alt-right" into "altright" and "didn't" into "didnt".

Note that all processes requiring the original text - including phrase scoring and link domain analysis - occurred prior to the editing steps above.
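The two editing steps can be sketched with regular expressions. The exact patterns below are illustrative assumptions (the study does not specify its regexes); they implement the stated behavior of removing URLs and html codes, then stripping everything except letters, numbers, and spaces.

```python
import re

# Step 1: strip URLs, html entities, and tags (junk text).
URL_RE = re.compile(r"https?://\S+|&\w+;|<[^>]+>")
# Step 2: keep only letters, numbers, and spaces.
NONWORD_RE = re.compile(r"[^A-Za-z0-9 ]")

def clean_body(text):
    text = URL_RE.sub(" ", text)       # remove URLs and html codes
    text = NONWORD_RE.sub("", text)    # "alt-right" -> "altright", "didn't" -> "didnt"
    return " ".join(text.split())      # normalize whitespace

cleaned = clean_body("The alt-right didn't like https://example.com &amp; said so")
```

The result, "The altright didnt like said so", parses into countable tokens on spaces alone.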

Characterizing the platform

To understand how the forums on Reddit related to each other in practice, I first joined a wide variety of subreddits and started to read and participate on the platform with the goal of understanding the perspective of as wide an array of users as possible. I began to frequent Redditsearch.io, a site that allows for rapidly pulling up all Reddit comments that match a particular term. Reading selections of comments from different time periods was a crucial tool in starting to think through how discourse might have shifted over the course of the 2016 election cycle.


Then I started to study the platform quantitatively. I learned the basics of analysis on BigQuery by tracking relationships between forums. I produced network maps based on shared authors and shared moderators. I produced a rough code of categories and political leans to distinguish between groups of forums and tracked the number of authors, comments, and subreddits in each group. I also tracked average score and number of replies to get a sense of how resonance and reactivity worked in each subreddit. Even as the number of forums I visited and categorized grew, it became clear that the platform's growth would far outpace any attempt to categorize every forum qualitatively.

The charts below, produced with the final categorization system used in this study, dramatize this aspect of the platform. The great majority of subreddits are extremely small, but the majority of conversation happens in relatively large forums. The 4,324 forums that featured 6K+ comments, 20+ authors, and 10+ average wordcount per comment accounted for 90% of the posts on the platform but only 2% of the subreddits.

I then began to focus on advocacy forums during the 2016 election cycle. Using a simple measure of distinctiveness, I pulled out the ngrams most distinctive to left and right forums.

Figure. Scores for core variables across qualitatively coded forum groups by quarter, Winter 2015-16 to Fall 2017

[Two bar-chart panels by quarter, Winter 2015-16 to Fall 2017: "% of posts by category" and "% of subreddits by category"; legend: advocacy, community, entertainment, OTHER, (blank).]


I then began to characterize forums according to their trajectories over time. Early studies show the development of measures that would become central to the more rigorous analysis that followed.


Focusing in and smoothing results by quarter

The preliminary studies above used months as their unit of longitudinal analysis, mirroring the structure of the Pushshift corpus, which divides comments and submissions to the Reddit platform by month. Creating a clear picture of prevalence and alignment over time required dividing comments into units of time long enough to smooth out fluctuations driven by the 24-hour news cycle and quirks of the Reddit platform. After preliminary testing revealed the quarter (3 months) as the best balance, I divided the data into eight quarters, four before and four after election day 2016. Below is a summary of the comments tracked in each quarter along with key events likely to influence the prevalence of the forms of boundary work tracked in this study.

Table. Quarters covered with dates, comments and key events.

Quarter | Dates | Comments | Key events
-4Q | 11.9.2015 - 2.8.2016 | 182,505,559 | November: ISIS struggles intensify; December 7: Trump’s campaign calls for a Muslim ban; February 1: First primary (Iowa)
-3Q | 2.9.2016 - 5.8.2016 | 192,016,614 | (Most primaries) February 18: Trump feuds with Pope Francis; March 3: Super Tuesday; March 11: Trump’s campaign manager accused of assault; April 3: Panama Papers story breaks; April 26: Trump blames the ‘woman card’ for Clinton's success; May 3: Cruz and Kasich suspend campaigns
-2Q | 5.9.2016 - 8.8.2016 | 202,254,866 | June 2: House speaker Paul Ryan says he will vote for Trump; June 16: Changes to r/all to cut back on vote manipulation (e.g., The_Donald); July 12: Sanders endorses Clinton; July 18-21: Republican national convention; July 25-28: Democratic national convention; August 1: Trump verbally attacks Muslim family
-1Q | 8.9.2016 - 11.8.2016 | 214,375,616 | August 31: Trump meets with Mexican president, gives anti-immigrant speech; September 9: Clinton’s ‘basket of deplorables’ comment; September 26: First presidential debate (Hempstead, New York); October 7: Access Hollywood tape; October 28: Comey letter
+1Q | 11.9.2016 - 2.8.2017 | 229,301,866 | November 30: Reddit CEO alters comments, apologizes, and announces more aggressive policy on users who harass or manipulate stats; January 20: Inauguration; January 21: Women’s March; January 27: First travel ban; January 31: Gorsuch nominated to Supreme Court
+2Q | 2.9.2017 - 5.8.2017 | 232,753,955 | February 15: Change to defaults policy brings more attention to non-default subreddits, with r/popular as new default for logged-out users; March 6: Second travel ban; April 7: Syrian missile strike
+3Q | 5.9.2017 - 8.8.2017 | 247,352,753 | May 9: Comey fired; May 17: Mueller appointed; June 27: Supreme Court allows parts of travel ban to take effect; July 28: Obamacare repeal voted down
+4Q | 8.9.2017 - 11.8.2017 | 253,542,016 | August 8: ‘Fire and fury’ for North Korea; August 11-12: Unite the Right rally; September 5: Sessions announces end of DACA; September 22: Trump attacks NFL players; September 30: Puerto Rico feud over Hurricane Maria; October 5: First allegations against Weinstein; October 15: Alyssa Milano tweets #metoo; October 25: Wave of bans and update to rules on violent content
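The quarter assignment implied by the dates above - eight 3-month quarters running from the 9th of one month through the 8th three months later, anchored on November 9, 2015 - can be sketched as follows. The handling of UTC timestamps and day boundaries is an assumption consistent with the table, not the study's exact code.

```python
from datetime import datetime, timezone

# Eight study quarters, -4Q (11.9.2015-2.8.2016) through +4Q (8.9.2017-11.8.2017).
QUARTERS = ["-4Q", "-3Q", "-2Q", "-1Q", "+1Q", "+2Q", "+3Q", "+4Q"]

def quarter_of(created_utc):
    """Map a post's UTC timestamp to its study quarter, or None if outside."""
    dt = datetime.fromtimestamp(created_utc, tz=timezone.utc)
    # Months elapsed since November 2015; posts on days 1-8 belong to the
    # previous month's quarter, so quarters break on the 9th.
    months = (dt.year - 2015) * 12 + (dt.month - 11)
    if dt.day < 9:
        months -= 1
    index = months // 3
    return QUARTERS[index] if 0 <= index < len(QUARTERS) else None

# Election day 2016 falls on the boundary: Nov 9, 2016 opens +1Q.
q = quarter_of(datetime(2016, 11, 9, tzinfo=timezone.utc).timestamp())
```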

Accounting for changes to the platform

A change to the site occurred on February 15, 2017, only a few days after the start of the second post-election quarter analyzed in this study. The front page, which had previously been dominated by 50 "defaults", was opened to all forums except a few sports and political forums, including several aligned with 2016 presidential candidates such as The_Donald and SandersforPresident. This change influenced the influx of new users to advocacy forums after the first post-election quarter, since it affected what the most casual users saw first when they logged in; regular users like the ones who are the focus of this study generally see a tailored homepage featuring their subscriptions and would not have experienced much of a change.

This study controls for changes to the platform in two ways: 1) it focuses analysis on the authors' perspective, so that their experience in a given forum matters more than the forum's overall popularity; and 2) it excludes all forums tied to particular 2016 candidates from the summary analysis of trends in forum membership.

FORUM TYPOLOGY & POLITICAL ENGAGEMENT

Categorizing forums

To classify the type of forum - whether its conversations focused on advocacy, community, entertainment, or other - we each independently classified the full set of 3,263 subreddits that were active (3K+ comments) in any of the eight quarters from Winter 2015-16 to Fall 2017. We met to resolve the 4% of forums that were classified differently across coders. Following the commitment to focus on the user's point of view, the categorization criteria centered less on the topic and more on the kind of communicative action featured in each forum:

• Entertainment forums center on fandom for entertainment products or focus on entertaining users - note that this excludes humor & parody forums that often feature puns & polysemy that eludes lexicon-based analysis.

• Community forums center on mutual support, learning, or collective problem-solving outside the realm of politics and policy.


• Advocacy forums center on the question of collective ethics - what should be done on matters of common concern in the real world.

• Other forums include forums centered on countries and regions outside the US, forums with a considerable share (over 5%) of comments in languages other than English, humor & parody forums, trivial forums (like r/counting, where users attempt to count up by one with each comment), and others that presented challenges to interpretation.

We also rough-coded subcategories to expand our knowledge of the categories and validate the forum typology. These were the five largest subcategories in each broader category:

Advocacy | Community | Entertainment | Other
politics | tech & digital | games | regions & cities outside US
anti (organized against something) | ads & brands | sports | mean & cynical
issue | money & career | movies & tv | sports outside US
gender | gender | pics & vids | politics outside US
organization | cities | music | in another language than English

We then classified the 159 active advocacy forums by their political lean, distinguishing forums that hosted a diverse range of opinions from ones that leaned to the political right or left.

We first validated our category classifications against several category schemes internal to Reddit. The rate of agreement between our advocacy label and the most comprehensive list of political subreddits at r/ListofSubreddits was 97%; the one subreddit (r/politicalhumor) that disagreed with our coding we resolved in favor of our original decision to categorize it as a humor & parody sub in the larger category of Other. We also tested our categorizations against the partial set of category memberships developed by Kumar et al. (2018); the rate of agreement was 93%.

Table. Hard news score (hits per million words) for top 10 advocacy subreddits across all quarters.

subreddit | quarter | lean | hardnews (pmw)
GaryJohnson | 45 | Other | 24682
ModelUSGov | 48 | Other | 21129
BlueMidterm2018 | 45 | Left | 20818
BlueMidterm2018 | 48 | Left | 20505
Ask_Politics | 42 | Diverse | 20497
BlueMidterm2018 | 47 | Left | 20402
BlueMidterm2018 | 46 | Left | 20339
GrassrootsSelect | 42 | Left | 20298
jillstein | 45 | Left | 20102
Ask_Politics | 43 | Diverse | 20049


We then validated the category most central to the study - the label for advocacy forums - against the lexicons of hard v. soft news developed by Eytan Bakshy and collaborators (see list in supplemental materials to Bakshy, Messing, and Adamic 2015). The average hits per million words for advocacy forums was 8751, far above the other categories, as presented in the figure below. Interestingly, left-leaning forums had the highest hard news score (10412 hits per million words), substantially higher than right-leaning forums (7231).
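A hits-per-million-words (pmw) score of this kind can be sketched as below. The tiny hard-news word list is an invented stand-in for the Bakshy et al. lexicon, included only to make the arithmetic concrete.

```python
# Sketch of a lexicon score in hits per million words (pmw).
# HARD_NEWS is an illustrative stand-in, not the actual lexicon.
HARD_NEWS = {"senate", "policy", "election"}

def pmw(tokens, lexicon):
    """Lexicon hits per million words in a token list."""
    if not tokens:
        return 0.0
    hits = sum(t in lexicon for t in tokens)
    return hits * 1_000_000 / len(tokens)

# 2 hits in 6 words -> 333,333.3 hits per million words.
tokens = "the senate passed the policy yesterday".split()
score = pmw(tokens, HARD_NEWS)
```

In the study, this quantity is computed over a forum's full cleaned corpus rather than a single comment, so scores like 8751 reflect millions of words.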

Political lean and polarization

Estimating the polarization of each forum, and ultimately the degree to which each author was exposed to polarized spaces, depended on reliable coding of whether a forum focuses on advocacy and whether it has a partisan lean. The first step was the qualitative coding described above, in which two research assistants and I hand-coded all forums by category (advocacy, entertainment, etc) and all advocacy forums by political lean.

To validate the hand coding, I calculated a polarization score for each forum: I first measured the average proportion of comments that the forum's authors posted in forums determined by hand-coding to lean left or right, weighted by number of posts. This is similar to work in political polarization on Facebook that estimates the political lean of users on the basis of accounts they follow or like (Boutyline and Willer 2017; Bond and Messing 2015); similarly to these studies, not every author in a forum agrees with its ideological lean, but early qualitative analysis validated the idea that authors who post frequently in a given subreddit tend not to stray too far from its basic ideological position - partly because those who do are sometimes banned by the moderators. I computed the difference between the average proportion of left and right posts and averaged it across all forum comments, thus giving more weight to the most active authors on the forum.
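The lean and polarization scores described above can be sketched as follows. This is a minimal illustration under stated assumptions: comments are (author, subreddit) pairs, the hand-coded lean map and all names and counts are invented, and each author's (left share minus right share) is weighted by their activity in the focal forum.

```python
from collections import Counter, defaultdict

# Illustrative hand-coded forum leans (stand-ins for the study's coding).
LEAN = {"leftsub": "Left", "rightsub": "Right"}

def forum_lean(comments, forum):
    """Comment-weighted average of each author's (left - right) posting share,
    on a -100 to 100 scale; positive values lean left."""
    by_author = defaultdict(Counter)
    for author, sub in comments:
        by_author[author][sub] += 1
    total_weight, weighted_sum = 0, 0.0
    for counts in by_author.values():
        n = sum(counts.values())
        left = sum(c for s, c in counts.items() if LEAN.get(s) == "Left") / n
        right = sum(c for s, c in counts.items() if LEAN.get(s) == "Right") / n
        weight = counts[forum]  # weight each author by activity in this forum
        weighted_sum += weight * (left - right) * 100
        total_weight += weight
    return weighted_sum / total_weight if total_weight else 0.0

comments = [
    ("a", "forum"), ("a", "leftsub"), ("a", "leftsub"),
    ("b", "forum"), ("b", "rightsub"),
]
lean = forum_lean(comments, "forum")
polarization = abs(lean)  # the polarization score is the absolute lean
```

In the study, forums whose polarization score fell in the top quartile (just over 28 on this scale) were labeled polarized.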

Table. Polarization and lean scores by quarter for the 10 most polarized quarters across all subreddits. Of the top 200 most polarized, 104 leaned left/96 leaned right.

subreddit | quarter | lean | polarization | lean score
Kossacks_for_Sanders | 42 | Left | 93.9 | 93.9
WayOfTheBern | 43 | Left | 92.5 | 92.5
Kossacks_for_Sanders | 43 | Left | 88.0 | 88.0
tucker_carlson | 48 | Right | 79.1 | -79.1
tucker_carlson | 47 | Right | 78.5 | -78.5
tucker_carlson | 46 | Right | 78.0 | -78.0
Kossacks_for_Sanders | 47 | Left | 75.8 | 75.8
Kossacks_for_Sanders | 46 | Left | 74.7 | 74.7
WayOfTheBern | 48 | Left | 73.5 | 73.5
WayOfTheBern | 47 | Left | 73.4 | 73.4

[Bar chart: average of pmw_hardnews across all quarters by forum category (advocacy, community, entertainment, other), with advocacy (8751) far above the other categories.]


I tested this measure of polarization against a popular form of word scoring developed by Burt Monroe and collaborators (Monroe, Colaresi, and Quinn 2008). This approach estimates the ideology of a corpus of text via inductively generated distinctive word vectors. The authors - and several subsequent studies - have suggested that the approach works best when the individual chunks of text are quite large. I found this to be true: applied to individual comments, the word scoring did not produce reliable estimates of political lean.

The absolute value of this measure of political lean served as a polarization score. I computed the polarization score for all subreddits by quarter, leaving out subreddits with fewer than 50 authors who posted mostly in left or right forums. I labeled a subreddit as polarized in a given quarter when it fell in the top quartile of all polarization scores, which corresponded to a polarization score just over 28. Subreddits with polarization scores above this threshold matched the hand coding with 99.6% accuracy.

Crucially, this scale is focused on the extent to which partisan authors from one side or the other are dominating the conversation. It is not affected by the presence of authors who posted in neither left-leaning nor right-leaning forums, but tends toward zero as the presence of partisan authors decreases. I validated this scale against the partisanship scores developed by Budak, Goel, and Rao for various news domains (Budak, Goel, and Rao 2016); the correlation between their measure and my own was 0.33 for advocacy forums. This may have been driven down by the fact that URLs that matched their list were exceedingly rare, making the measure used in this study potentially a more desirable one.

I tested the reliability of this framework by computing the average polarization, valence, and intensity of forums by lean over the two years from Winter 2015-16 to Fall 2017. Partisan forums showed high polarization and intensity, with the valence of forums on the left dropping as intensity rose throughout the period. Diverse and debate forums showed a similar pattern of low polarization throughout the period but diverged in valence and intensity: diverse forums showed the most consistently negative emotion throughout the period, though less intensity than the partisan forums; debate forums showed little emotion at all.

Sentiment

To measure the presence of emotion in comments, I used the AFINN lexicon (Nielsen 2011), designed to track the valence and intensity of emotional expression online. This lexicon has proven a strong performer in meta-analyses of sentiment analysis tools (Biffignandi, Bianchi, and Salvatore 2018; Ribeiro et al. 2016). The most rigorous meta-analysis, published by Ribeiro and collaborators in 2016, compared the classification accuracy of 24 popular sentiment analysis methods against a large hand-coded dataset drawn from multiple social media platforms, news sites, and movie reviews.

In this analysis, AFINN proved particularly strong in classifying social media posts and comments on news articles, both closely related to the kinds of communication found on the Reddit platform (Ribeiro et al. 2016). The results of the tests by Ribeiro and collaborators were strongest for the task most relevant to this study, the 3-class task that classified comments not only as negative or positive but also as emotionally neutral.

In the 3-class sentiment analysis task on news reviews, AFINN outperformed all but the VADER model, which combines a lexicon targeted specifically to social media along with a set of grammar-based rules for adjusting sentiment estimates based on syntax. For social media posts, AFINN came in 4th out of 24, outperformed only by LIWC15, Umigon, and VADER. Though VADER was the strongest performer in this meta-analysis, its grammatical rules made it too computationally intensive to rapidly classify millions of comments on BigQuery, making AFINN the strongest feasible choice.
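The lexicon approach itself is simple to sketch. The toy lexicon below is illustrative only - the real AFINN list assigns integer valences from -5 to +5 to a few thousand terms - and the valence/intensity split mirrors the two quantities used throughout this study:

```python
# Tiny illustrative lexicon (NOT the actual AFINN word list).
DEMO_LEXICON = {"love": 3, "great": 3, "bad": -3, "awful": -3}

def sentiment(text, lexicon):
    """Minimal AFINN-style scorer: sum the valences of matched words.

    Returns the summed valence (signed) and the intensity (count of
    emotion-word hits regardless of sign).
    """
    hits = [lexicon[t] for t in text.lower().split() if t in lexicon]
    return {"valence": sum(hits), "intensity": len(hits)}
```

For example, `sentiment("what a great thread", DEMO_LEXICON)` yields a valence of 3 and an intensity of 1. Note that this sketch omits the grammar-based adjustments that make VADER stronger but slower.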

Type of public

To divide advocacy forums into types of public - Deliberative Spheres, Counterpublics, Incubators, and Battlefields - I divided the forums into quartiles by their polarization and emotional intensity. Forums in the top two quartiles of both intensity and polarization were Counterpublics; forums with low intensity and low polarization were Deliberative Spheres. Forums with high intensity and low polarization were Battlefields and those with low intensity and high polarization were Incubators.
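The quadrant assignment can be sketched as a pair of median splits; the cut values passed in correspond to the thresholds reported in the note to the basic-statistics table (16.5 for polarization, 152K for intensity):

```python
def classify_public(polarization, intensity, pol_cut, int_cut):
    """Assign one of the four public types from the two median splits
    described above (top two quartiles = above the cut)."""
    high_pol = polarization > pol_cut
    high_int = intensity > int_cut
    if high_pol and high_int:
        return "Counterpublic"
    if high_pol:
        return "Incubator"            # polarized but low-intensity
    if high_int:
        return "Battlefield"          # diverse but high-intensity
    return "Deliberative Sphere"      # diverse and low-intensity
```

Applied to the category averages in the table below, each forum type falls in its own quadrant.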

The different numbers of subreddits in each category result from the correlation between polarization and emotional intensity (see figure at right). Though there are equal numbers of forums in each quartile along each individual dimension, crossing the quartiles of the two dimensions (polarization and emotional intensity) allows the quadrants of the 2-dimensional graph to have different densities.

Three sets of indicators emerged from this analysis as the core independent variables in the regression models that followed: 1) the ratio of posts that each author made in Deliberative Spheres, Counterpublics,

Figure. Subreddits used in the Chapter 3 analysis plotted by emotional intensity and polarization

Table. Basic statistics by forum type

Forum type             Polarization(a)  Emotional intensity(a)  # forums, Winter 2016-17  # total authors, Winter 2016-17
Deliberative Sphere    6.10             128209                  32                        504K
Counterpublic          37.05            167647                  27                        133K
Large CP: r_politics   5.26             152843                  1                         291K
Incubator              31.57            138310                  14                        76K
Battlefield            8.46             167962                  17                        547K

Note: Includes all advocacy forums except those focused on specific 2016 candidates (e.g., EnoughTrumpSpam, SandersforPresident)
a High polarization > 16.5, low polarization < 16.5; high intensity > 152K, low intensity < 152K
b Average across forums in category


Incubators, and Battlefields; 2) indicator variables for authors who spent more than 50% of their posts in each public type; and 3) emotional intensity and polarization averaged across the forums that each author was exposed to, weighted by the author's number of posts in each forum. Including indicators tied to each public type functioned similarly to adding an interaction term between polarization and emotional intensity, but was more specific: it tested not just whether the influence of one rises or falls with the other, but the effect of specific intersections of polarization and diversity, rationality and intensity.
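A sketch of how these three indicator sets might be computed for a single author follows. The data structures (`post_counts` mapping forums to the author's post counts, `forum_stats` mapping forums to a type/polarization/intensity tuple) are assumptions for illustration:

```python
def author_exposure(post_counts, forum_stats):
    """Sketch of the three author-level indicator sets: share of posts
    in each public type, a >50% majority indicator per type, and
    post-weighted average polarization and intensity.

    post_counts: {forum: n_posts for this author}
    forum_stats: {forum: (public_type, polarization, intensity)}
    Both structures are assumptions for illustration.
    """
    total = sum(post_counts.values())
    ratios, w_pol, w_int = {}, 0.0, 0.0
    for forum, n in post_counts.items():
        ptype, pol, inten = forum_stats[forum]
        ratios[ptype] = ratios.get(ptype, 0.0) + n / total
        w_pol += pol * n / total      # exposure-weighted polarization
        w_int += inten * n / total    # exposure-weighted intensity
    majority = {ptype: share > 0.5 for ptype, share in ratios.items()}
    return ratios, majority, w_pol, w_int
```

The majority flags stand in for the indicator variables of set 2; the weighted averages correspond to set 3.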


TOOLS FOR ANALYSIS OF BOUNDARY WORK: LEXICONS & CODES

The following lays out how I developed, tested, and refined lexicons to measure the circulation of boundary work and emotion across the forums of Reddit. To make sure that counting based on lexicons produced reliable results, I first created lexicons capturing words featured in the 2015 version of the LIWC (Linguistic Inquiry and Word Count) lexicons, one of the most widely used (González-Bailón and Paltoglou 2015; Pennebaker et al. 2015). I selected 10 scales relevant to the work of distinction and emotion in communication:

• certainty (LIWC 55)

• swearing (LIWC 121)

• leisure (LIWC 111)

• anxiety (LIWC 33)

• anger (LIWC 34)

• sadness (LIWC 35)

• we/they (LIWC 5&8)

• female/male pronouns (LIWC 43&44)

For each of these, I measured the ratio of hits (# of words in the scale that appeared in comments) per million words for each of the subreddits for each quarter in the sample. I then computed the correlation across quarters for this metric between Summer 2016 and Fall 2016 - for instance, comparing the prevalence of language related to sadness in each subreddit for Summer against the prevalence in Fall. The correlations ranged between 0.85 (for anxiety) and 0.99 (for female pronouns), showing that subreddits have strong differences in linguistic style that are measurable by lexicons.
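The per-million rate and the cross-quarter reliability check can be sketched as follows; the hit and word counts in the demo are made up, not figures from the study:

```python
from math import sqrt

def rate_per_million(hits, total_words):
    """Lexicon hits per million words for one subreddit-quarter."""
    return 1_000_000 * hits / total_words

def pearson(xs, ys):
    """Plain Pearson correlation for the cross-quarter reliability check."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up (hit count, word count) pairs for three subreddits in two quarters:
summer = [rate_per_million(h, w) for h, w in [(120, 2_000_000), (45, 900_000), (300, 1_500_000)]]
fall = [rate_per_million(h, w) for h, w in [(130, 2_100_000), (40, 850_000), (310, 1_400_000)]]
reliability = pearson(summer, fall)
```

A high correlation across quarters, as in the 0.85-0.99 range reported above, indicates that a lexicon is picking up a stable property of each subreddit's linguistic style rather than noise.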

Forms of boundary work: activation v. downframing

Key to identifying the specific intersectional configuration of allies, enemies, and social categories in discourse is to distinguish between types of boundary work by their emotional charge and discursive function. This study distinguishes between three steps of boundary work - activating boundaries, defining groups, and framing groups - and differentiates at each step between discourse with positive, neutral, and negative charge, as in the table at right.

These steps often happen simultaneously, but their combinations are worth analyzing independently. The qualitative analysis of Chapter 2 captures all forms in the activating boundaries and framing groups steps - activation, upframing, interpretation, and downframing. The quantitative analysis of Chapter 1 targets only 2 forms that lend themselves to accurate computer-assisted analysis at large scale:

Table. Forms of boundary work by step and emotional charge.

Step: Activating boundaries
  Neutral: Activation ("group Y", "system XYZ")

Step: Defining groups
  Positive: Inclusion ("x is one of us")
  Neutral: Categorization ("y is in group Y")
  Negative: Exclusion ("z is one of them")

Step: Framing groups
  Positive: Upframing ("group X is good")
  Neutral: Interpretation ("group Y is…")
  Negative: Downframing ("group Z is bad")


1. Boundary activation with neutral emotional charge: referring to categories that divide people into groups. Examples: "Black people talk differently than white people."; "Women wear dresses."

2. Downframing with explicit boundary activation: attaching negative value (emotional charge) by directly naming a category. Examples: "Black people are lazy"; the N word; "Women just can't do math"

A literature review revealed seven categories of boundary work likely to be most salient in the 2016 presidential election cycle: race, class, gender, sexuality, political identity, immigrant status, and religion. Examples illustrate how boundary activation and downframing manifest for these categories in the Reddit corpus. Ultimately this study uses lexicons to identify the boundary category and type of boundary work likely to appear in comments and to quickly estimate how each type changed over the study period.

Developing codes

A coding system that could reliably distinguish between boundary activation, upframing, and downframing proved a crucial tool both for the qualitative analysis of Chapter 2 and for validating the lexicons used throughout the study. Working with a research assistant, I first visited forums that featured dense boundary work, making sure to include a mix of left, right, and diverse forums. We each took notes on the forms of boundary work that appeared with the goal of developing subcodes to track specific forms of interest. This initial set of notes underrepresented boundary work related to religion and to borders & nationality, so we specifically targeted those two for a second round of analysis.

From these first notes we generated a coding system with three subdivisions targeting downframing, upframing, and boundary activation. The coding did not focus solely on political advocacy or limit itself to topics that would make front page news - for instance, oppressed workers in a video game still counted as boundary activation of class despite not referring directly to humans.

We gathered examples for each subcode and tested the coding system against a random sample of 400 comments between 20 and 300 words in length taken from advocacy forums in Fall 2016. We used Dedoose to code specific sections of each comment, taking care to code all of the forms of boundary work that appeared in each comment. We then met together to reconcile the codes and refine the coding system, discarding subcodes on which we couldn't come to agreement. We also attempted to hand-code the political lean of the author and whether they were writing in a context in which other authors mostly agreed or disagreed with them, but these codes proved unreliable across coders.


Codes and subcodes for qualitative analysis of downframing and boundary activation

Downframing codes

ID     Parent ID  Depth  Title
100    0          0      Downframing
101    100        1      d-Race/Ethnicity
1011   101        2      d-White
1012   101        2      d-Black
1013   101        2      d-Latinx/Hispanic
1014   101        2      d-Otherrace
1015   101        2      d-Racism (anti)
103    100        1      d-Borders & Nationality
104    100        1      d-Class
1044   104        2      d-Elites
1045   104        2      d-Common people
105    100        1      d-Gender
1051   105        2      d-Femininity
10514  1051       3      d-Feminism
1052   105        2      d-Masculinity
10521  1052       3      d-Overmasculinity
10522  1052       3      d-Undermasculinity
106    100        1      d-Sexuality and Orientation
1061   106        2      d-Trans
1062   106        2      d-Gay
1063   106        2      d-Lesbian
1064   106        2      d-Hetero
1065   106        2      d-Desirability
1066   106        2      d-LGBTQ+
107    100        1      d-Religion
108    100        1      d-Political ideology
1081   108        2      d-Liberal & left
1083   108        2      d-Middle etc
1085   108        2      d-Right & conservative
1086   108        2      d-2way streets
10861  1086       3      d-Snowflake
10862  1086       3      d-SJW
10863  1086       3      d-PC
109    100        1      d-People & Parties
1091   109        2      d-Republicans
10911  1091       3      d-Trump
1092   109        2      d-Democrats
10921  1092       3      d-Clinton
10922  1092       3      d-Sanders
10923  1092       3      d-Obama

Boundary activation codes

ID     Parent ID  Depth  Title
200    0          0      Boundary activation
201    200        1      a-Race/Ethnicity
2011   201        2      a-White
2012   201        2      a-Black
2013   201        2      a-Latinx/Hispanic
2016   201        2      a-Otherrace
203    200        1      a-Borders & Nationality
204    200        1      a-Class
2044   204        2      a-Elites
2045   204        2      a-Common people
205    200        1      a-Gender
2051   205        2      a-Femininity
2052   205        2      a-Masculinity
206    200        1      a-Sexuality and Orientation
2061   206        2      a-Trans
2062   206        2      a-Gay
2063   206        2      a-Lesbian
2064   206        2      a-Hetero
2065   206        2      a-Desirability
2066   206        2      a-LGBTQ+
207    200        1      a-Religion
208    200        1      a-Political ideology
2081   208        2      a-Liberal & left
2083   208        2      a-Middle etc
2085   208        2      a-Right Wing
209    200        1      a-People & Parties
2091   209        2      a-Republicans
20911  2091       3      a-Trump
2092   209        2      a-Democrats
20921  2092       3      a-Clinton
20922  2092       3      a-Sanders
20923  2092       3      a-Obama


Developing original lexicons

I developed two original sets of lexicons to measure boundary activation and downframing, each with seven specific lexicons targeted at the categories identified in the literature as most salient in the 2016 election cycle in the United States - race, class, gender, sexuality, political identity, immigrant status, and religion.

The boundary activation lexicons included subsets of ngrams to identify activation of specific groups within each boundary category - for instance, to identify where category labels referred to men rather than women. To begin each of these I worked with the research team to list the category terms (e.g., race, gender) and subcategory terms (e.g., Asian American, women) likely to be prevalent in discussions of each category on the Reddit platform. We did not seek to be exhaustive in this first instance but rather to identify terms close to the core of discussions on the issue at a relatively high level of generality (e.g., yes to "christian" and "evangelical" but not "Seventh-Day Adventist").

The downframing lexicons presented an even greater challenge than the boundary activation lexicons because the language of downframing shifts so quickly. As any visit to the Urban Dictionary - or to many of the forums of Reddit - will confirm, downframing is the subject of tremendous linguistic innovation. To build a more dynamic and updated lexicon that included even terms that might not be known by the research team, I started from the Hatebase, a crowdsourced collection of offensive words and phrases used online (Hine et al. 2017). The Hatebase offers a strong launching point for the analysis of boundary work online because its users not only collect words relevant to downframing, but also tag which symbolic boundary systems (gender, race, etc) relate to which words.

The final new lexicon sought to assess the prevalence and spread of alt-right language on the platform. I used a lexicon that combined a self-published glossary of an alt-right subreddit along with a list of words compiled by the Alt-Right Open Intelligence Initiative (Hagen 2017). The alt-right is just one among many groups that found a voice and audience on Reddit and other social media platforms, but one that is both central to the story of the 2016 election cycle and easy to track through its use of unique language (Heikkilä 2017; Nagle 2017).

Most elements in each lexicon were individual words known as unigrams. This made it possible to track the prevalence of lexicons across millions of comments with high efficiency and speed by splitting the body of each post into a count of individual words and symbols. On the other hand, some elements - like “alt right” as an identifier of political identity and “America first” in the borders boundary activation lexicon - did not hold their meaning without multiple words in sequence. I tracked these multi-word elements through a separate process matching regular expressions in the original unedited body, using them only where necessary to capture meaning (roughly 100 compared to over 5500 unigrams).
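A sketch of this two-track matching follows. The lexicon entries are illustrative stand-ins, not the study's actual lists:

```python
import re
from collections import Counter

# Illustrative lexicon entries (not the study's actual lists). Unigrams
# are counted from a simple token split; the handful of multi-word
# elements are matched as regular expressions against the unedited body.
UNIGRAMS = {"immigrant", "citizen", "illegals"}
MULTIWORD = [re.compile(r"\bamerica first\b"), re.compile(r"\balt right\b")]

def count_hits(body):
    """Total lexicon hits in one comment body."""
    lowered = body.lower()
    tokens = Counter(re.findall(r"[a-z']+", lowered))
    unigram_hits = sum(tokens[u] for u in UNIGRAMS)
    multiword_hits = sum(len(p.findall(lowered)) for p in MULTIWORD)
    return unigram_hits + multiword_hits
```

Splitting into a token count first keeps the unigram pass cheap at scale, while the regex pass is reserved for the roughly 100 multi-word elements that need sequence to hold their meaning.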

Because the purpose of the study is to empirically determine the linkage between boundary categories rather than to assume it, we sought to disentangle shared terms across boundary categories as much as possible. The alt right lexicon, for instance, is completely disentangled from the downframing lexicons despite the large amounts of downframing that generally appear in alt-right posts. However, there are some words that are deeply intersectional, for instance cuckservative, which uses a critique of sexuality to downframe conservatives perceived not to be aggressive enough. In each case, we tried to be parsimonious with double coding, only listing an ngram as relevant to multiple lexicons if we could see no other reasonable interpretation.


Below is a summary of the boundary activation and downframing lexicons used in the study:

Category: Borders (immigration, nationality)
  Downframing variable (Chap 1): h_borders - anchor baby, build that wall, illegals, America first
  Boundary activation variable (Chap 1): w_borders - citizen, immigrant

Category: Class (economic, political)
  Downframing: h_class - trailer trash, freeloader, bougie
  Boundary activation: w_class - poor, working class, elites

Category: Gender
  Downframing: h_gender - cuck, feminazi, frat boy, sexist pig
  Boundary activation: w_gender - men, women

Category: Political identity (party, ideology)
  Downframing: h_polident* and the alt-right lexicon - commie, the establishment, deplorables
  Boundary activation: w_polident - Democrat, Republican, liberal, conservative, Clinton, Sanders, Trump

Category: Race & ethnicity
  Downframing: h_raceth - wigger, wagon burner, beaner
  Boundary activation: w_raceth - whites, blacks, latinos

Category: Religion
  Downframing: h_religion - raghead, hebe, fundamentalist
  Boundary activation: w_religion - christian, muslim

Category: Sexual identity
  Downframing: h_sexu - queers, perverts, trannies
  Boundary activation: w_sexu - straight person, queer person, transgender

* Downframing lexicon developed by research team. Others originate from Hatebase with significant editing - e.g., removing words with other main uses; edits tracked in separate document.

To show how words and phrases relate to the types and categories of boundary work described above, I’ve bolded boundary activation terms from the lexicons in blue and downframing terms in red for the examples below:

Downframing in political identity:

From 2016, subreddit KotakuInAction, author RCShieldBreaker, category advocacy & political: "Loss of faith in the establishment and the tedious preaching of SJWs have both come together to birth the rise of Trump. The only other viable channel for that energy is Sanders and only then if he can show he has the stones to honor the activists of old by shutting down the crybullies of today." Full thread in context here.

Boundary activation in political identity:

From 2016, subreddit SandersForPresident, author SerHodorTheThrall, category advocacy & political: "Being part of the establishment isnt bad. Bernie has spent the past 30 years in politics for example. But we grew up in the internet age. We wont put up will bullshit and corruption." Full thread in context here.


The word “establishment” isn’t always attached to negative value - but even in the example on the right, it still activates the boundaries between the old guard and new challengers in the political system. On the other hand, SJW is used almost exclusively as an insult not only to divide between political categories but also to attach negative emotion to some groups. More examples show downframing in borders & nationality:

Downframing in borders/nationality and political identity:

From 2017, subreddit The_Donald, author awhiteguyuno, category advocacy & political: "More culturally rich than a town full of people of different shades of brown, which has no distinct culture or specific heritage or ancestry (which is what libtards and globalists want)" Full thread in context here.

Downframing in borders/nationality:

From 2017, subreddit unpopularopinion, author softhandsam, category advocacy & political: "…Logically, that is moronic. If there were no illegals to do the job, the employers would have to pay min wage. So any place illegals are, employers would rather hire them and save money…. Illegals have no positive benefit to American society" Full thread in context here.

Refining the original lexicons

The Hatebase lexicon contains multiple inaccuracies and false positives, and misses many of the downframing terms that are idiosyncratic to the world of social media; it requires significant editing to be of use to research (Davidson et al. 2017). To identify terms it might have missed, I developed two rankings for unigrams - one by emotional charge (the most negative valence scores in AFINN) and another by their polarization (the maximum of the percent of left or right authors who use the term). For each ranking, I considered the top 100 terms in each quarter of the study and pulled new members from the list into the downframing lexicons. This added 37 new terms.
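The two rankings can be sketched as follows. The per-term data structure (a tuple of AFINN valence and the percent of left and right authors using the term) is an assumption for illustration:

```python
def candidate_terms(term_stats, top_n=100):
    """Sketch of the two rankings used to surface possible downframing
    terms: most negative valence, and most one-sided partisan use.

    term_stats: {term: (afinn_valence, pct_left_users, pct_right_users)};
    the structure is an assumption for illustration.
    """
    by_valence = sorted(term_stats, key=lambda t: term_stats[t][0])[:top_n]
    by_polarization = sorted(
        term_stats, key=lambda t: -max(term_stats[t][1], term_stats[t][2])
    )[:top_n]
    # The union of both top lists becomes the pool reviewed by hand.
    return set(by_valence) | set(by_polarization)
```

In the study this review of top-100 lists per quarter yielded 37 additions to the downframing lexicons.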

Finally, we looked at the prevalence of each term in a sample of 32 million comments, 4 million from each quarter: 1 million comments from the overall platform along with 1 million each from advocacy, entertainment, and community forums. We first eliminated all items that had no hits across the full sample. I then used this ngram prevalence study to complicate and validate the results reported in Chapter 1.

To build out the boundary activation lexicons, I sent the draft lists to 3 other scholars with experience in social media and content analysis and asked for additional contributions with the following guidance:

Could you take a look at these lists and let me know any terms that might be important in identifying where the authors on the platform are talking about race v. class, gender, sexuality, political identity, immigrant status, and religion? We're looking for terms close to the core of discussions on the issue at a relatively high level of generality (e.g., yes to "christian" and "evangelical" but not "Seventh-Day Adventist"). Also let me know if you think any of the terms on the current lists would be likely to generate false positives.


This resulted in 54 new terms - and 23 ngrams flagged as possibly generating a high rate of false positives. We incorporated the new ngrams and used the second list to launch our exploration of false positive rates. We targeted 10% as a maximum false positive rate.

At this stage, there were plenty of terms like "white" that had broad use across multiple meanings and were easy to quickly eliminate. Some results were unexpected: we were surprised to find, for instance, that "trump" was well above the threshold thanks to its other meanings in common use. For each remaining member of the original lexicons, we developed a process for quick vetting: we looked for false positives - places where the ngram appeared but did not truly reference the boundary systems we were targeting - among the first 20 hits for the ngram among comments from the study period on Reddit Search, a search platform also based on the Pushshift Reddit corpus.

If 2 or more false positives appeared - or if 1 appeared and looked likely to repeat - we flagged the ngram for possible deletion. We then went back through the list and quickly discussed each candidate for deletion. This is the stage at which ngrams like "elites" and "cultist" dropped out due to their use in specific entertainment products - it turns out that video games in particular often draw character types from names used in boundary work related to religion and class. This left a total of 715 ngrams across the 15 original lexicons.

Validating the original lexicons

We validated the lexicons using three different stratified random samples to estimate the rates of false positives (where the lexicon identifies a form of boundary work that turns out not to be salient given the context) and false negatives (where the lexicon fails to capture a form of boundary work that is actually present in the sample). All three samples limited analysis to comments with between 20 and 300 words and included a link to the original post to ensure enough context for interpretation. The primary sample balanced lexicon hits across all 14 primary boundary work lexicons across the full Reddit platform: it drew 50 random comments from Fall 2016 that matched each lexicon, for a total of 700 comments.

The second sample, used in the qualitative analysis for chapter 2, selected comments that matched the boundary activation lexicon of class from Fall 2016 and Fall 2017. For each time period, the sample drew 80 random comments each from forums that leaned left, right, and middle, producing 480 comments in total. A final sample, used in a forthcoming article focused on the intersection of gender, sexuality, and political identity, selected for comments with a minimum of 1 hit in at least two of those three categories. It drew 75 comments from forums in each political lean category in Fall 2015 and Fall 2017, producing a sample of 450 total comments.

For each of the three samples, I worked with a research assistant to hand-code every comment, meeting to reconcile differences in order to produce a gold standard set of comments showing where boundary activation and downframing appeared for each category of boundary work. Out of the 1580 comments, we deemed 7 (0.4%) not codeable - usually because they included a string of words rather than a comment from an apparently real author. Each of these we replaced by another comment of the same subcategory.

We then compared the hand-coded results to the lexicon results in order to generate rates of false positives and false negatives. Because the study


focuses on relative prevalence in reporting lexicon results - showing the percent change over time within each lexicon rather than trying to estimate which boundary category is most prevalent - false negatives are still important but less worrisome than false positives.

Because the primary validation sample of 700 comments was balanced across categories - there were 50 comments each related to downframing of race, boundary activation of gender, and so on - it is the most appropriate basis for false negative analysis. Computing false negative rates from the other samples would distort results because their comments were selected for the presence of some categories and not others - for instance, false negatives for sexuality would be elevated in a sample focused on gender because of the close connection between those two categories.

On the other hand, false positive analysis is unlikely to be strongly distorted in the second and third samples since the false positives mostly come from outside the domain of boundary work (e.g., "trump" as used in "trump card" creates false positives but is unaffected by whether comments mention gender or not). To maximize the sample for analysis of false positives, I combined lexicon hits across all three samples. The results follow below:

Table. False positive and false negative rates by lexicon. Hit and false positive counts come from the main validation sample (Fall 2016) and the class sample (Spring 2016 & Summer 2017); miss and false negative counts come from the main validation sample.

Lexicon      FP rate  Main hits  Main FPs  Class hits  Class FPs  FN rate  Main misses  Main FNs

Boundary activation lexicons
w_borders    5.6%     107        5         11          1          5.6%     593          33
w_class      6.1%     89         7         476         28         4.1%     611          25
w_gender     3.3%     125        5         53          2          2.4%     575          14
w_polident   3.1%     191        6         144         3          11.6%    509          59
w_raceth     3.6%     82         3         22          1          8.4%     618          52
w_religion   1.2%     117        1         7           0          5.3%     583          31
w_sexu       4.2%     81         5         9           1          4.0%     619          25

Downframing lexicons
h_borders    1.7%     57         1         3           0          6.2%     643          40
h_class      3.2%     58         2         2           0          3.4%     642          22
h_gender     3.0%     85         2         4           1          4.4%     615          27
h_polident   1.5%     111        1         15          1          5.3%     589          31
h_raceth     3.3%     107        4         1           0          2.9%     593          17
h_religion   0.0%     51         0         0           0          9.7%     649          63
h_sexu       1.8%     55         0         1           0          7.3%     645          47

Total sample: 1580 comments (650 main-sample hits coded, 480 class-sample hits coded, 650 main-sample misses coded)
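The rate definitions behind the table can be sketched as set operations over comment ids (an assumed representation of the hand-coded and lexicon-matched comments):

```python
def error_rates(sample_ids, gold_ids, hit_ids):
    """False positive / false negative rates for one lexicon, following
    the table's definitions: the FP rate is taken over comments where
    the lexicon fired, the FN rate over comments where it did not.

    All arguments are sets of comment ids (an assumed representation).
    """
    hits = hit_ids & sample_ids
    misses = sample_ids - hit_ids
    false_pos = hits - gold_ids      # lexicon fired, coders saw nothing
    false_neg = misses & gold_ids    # coders saw it, lexicon missed it
    fp_rate = len(false_pos) / len(hits) if hits else 0.0
    fn_rate = len(false_neg) / len(misses) if misses else 0.0
    return fp_rate, fn_rate
```

For w_borders, for instance, 5 false positives over 107 hits and 33 false negatives over 593 misses both round to the 5.6% reported above.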

False positive rates ranged from 0% for h_religion up to 6.1% for w_class, establishing an estimate of the Type I error rate for each lexicon. Rates were generally higher for the boundary activation lexicons. This makes sense given the sharp specificity of downframing: because many of the words used in downframing ("cuck", "patriarchy", etc) are designed to hurt, they tend to have a lower level of polysemy. The highest rates appeared in boundary activation categories for class and for borders & nationality, which included several terms ("workers", "unions", "citizens", "american") with very broad use. Though each ngram used in the study was subjected to a rough test before inclusion in any lexicon - and rejected from the study if it had more than 1 false positive in the first 20 results on Reddit Search for the study period - this further validation shows how the error rates aggregate across ngrams within each lexicon.

Interpreting lexicon results - relating prevalence to current events and to actual discourse

To analyze changes in the relative prevalence of boundary work - for instance, when downframing of gender became more common or where activation of racial categories increased - I used the relative rate of occurrence as the primary metric for each lexicon. Because lexicons have unequal rates for reasons unrelated to their prevalence in discourse - that "women" occurs more often than "immigrants" tells us little about how gender and immigrant status are shifting in informal discourse - in what follows I report only on correlations of rates across forums and changes in rates across time.

Building on the validation analysis of Schwartz and Ungar (2015), I eliminated from the analysis any counts or rates based on groups of fewer than a thousand words. For lexicons that used weighted elements - for instance, the AFINN sentiment scale rates the charge of emotion words from -5 (very negative emotional charge) to +5 (very positive charge) - I report aggregates only where at least 20 ngrams matched, 4x the level recommended by Kiritchenko, Zhu, and Mohammad (2014).
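The two reporting gates reduce to a short predicate:

```python
MIN_WORDS = 1_000     # Schwartz and Ungar (2015) floor for rate reporting
MIN_MATCHES = 20      # 4x the Kiritchenko, Zhu, and Mohammad (2014) level

def reportable(total_words, n_matches, weighted=False):
    """True when a lexicon score for a batch of text clears the
    reporting thresholds described above; the match-count gate
    applies only to weighted lexicons such as AFINN."""
    if total_words < MIN_WORDS:
        return False
    if weighted and n_matches < MIN_MATCHES:
        return False
    return True
```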

To understand the way that prevalence on the Reddit platform related to outside events, I pulled out individual ngrams with high prevalence and interest from each lexicon and subjected them to a series of trend tests. I first used 538’s Reddit mapper to understand each term's sensitivity to current events, taking notes to aid in interpretation of each term. I also used the yearly view to get a stronger understanding of the way each term evolved in prevalence since the start of the Reddit platform.

Table. List of terms used in trend tests, % change in advocacy forums and overall, by lexicon

Activation lexicons
w_borders    +3% advocacy, +9% overall. Increased: nationalists, sanctuary cities. Decreased: immigrants, foreigners, free trade.
w_class      -4% advocacy, +12% overall. Increased: communist/socialist/capitalist, elites, millionaire. Decreased: 1%, the establishment, wealthy, rich, working poor, poverty, unions.
w_gender     -22% advocacy, -1% overall. Increased: nonbinary, transgender. Decreased: men, women.
w_polident   +4% advocacy, +25% overall. Increased: alt right, nationalists, america first. Decreased: bernie, clinton, politician.
w_raceth     +8% advocacy, +17% overall. Increased: white people, latinx. Decreased: caucasian, hispanic, latino.
w_religion   -48% advocacy, -30% overall. Increased: spiritual, evangelicals. Decreased: muslim, christian.
w_sexu       -2% advocacy, +14% overall. Increased: lgbtq, straight women, pansexual. Decreased: straight men, sexuality.

Downframing lexicons
h_borders    +45% advocacy, +58% overall. Increased: globalists, build the wall, anchor babies, illegals. Decreased: offshoring, wetback, nativist, colonialist.
h_class      -8% advocacy, -2% overall. Increased: white trash, redneck, bougie, corporate shill. Decreased: plutocrat, privileged.
h_gender     +12% advocacy, +9% overall. Increased: tranny, cuck. Decreased: frat boys, misogynists, feminazi.
h_polident   +39% advocacy, +42% overall. Increased: bernie bro, the swamp. Decreased: sjw, the establishment.
h_raceth     +37% advocacy, +35% overall. Increased: race traitor, white trash, subhuman, uncle tom, nigger, darkies. Decreased: dindu, ghetto, welfare queens, wetback, uncivilized.
h_religion   -56% advocacy, -31% overall. Increased: (none). Decreased: fundamentalist, jihadist.
h_sexu       +31% advocacy, +23% overall. Increased: faggot, homophobe. Decreased: homos.

I then set about understanding Reddit in relation to the broader internet and to the mainstream news. I used Google Trends as a rough measure of general interest in each term in the United States, comparing the pattern of searches against the pattern of use on Reddit between January 2008 and July 2017 (the end of the time period available on 538's Reddit search tool). This showed quite clearly that in many cases trends in ngram prevalence occur on Reddit well before they occur on the broader internet, as in the figure below.

To enrich the class analysis of Chapter 2, I looked even more closely at the way key terms were covered in the mainstream news during two time periods, reading 10 articles - the first page of results from Google News - for each term to understand the comments on the platform in the context of the current events of the day. See the table at right for a list of those terms.

I looked even more closely at the usage of the term "white working class" in the news. Since the term was so pivotal to the study, I wanted to pinpoint where it first emerged and how it was used when it did emerge. On Google News as of March 2, 2019, the phrase "white working class" appeared in 72 articles before 2010, and only 33 in 2010. Its slow upward climb continued through 2013, when it rose more rapidly, but its growth in 2014-15 was overshadowed by the massive growth in 2016, as tracked in the figure below.

Table. Example of focus terms with links to articles.

Spring 2016 (2.9.2016-5.8.2016)

Topical & new:

• establishment (big spike in this period)

• globalist

• panama papers

Persistent:

• elites (small spike)

• working class (dominated by WWC)

• Obamacare

Summer 2017 (5.9.2017-8.8.2017)

Topical & new:

• tax cut (KS v. US)

• Obamacare

Persistent:

• establishment (in decline - see "deep state")

• globalist (spike)

• elites (small spike)

• working class (still litigating WWC)

Figure. Prevalence of the term "nationalist" on Reddit and in Google searches, January 2008 to July 2017


ANALYSIS

Basic definitions & rules

The following categories identify levels of activity for forums and authors used across various studies:

1. Authors:
   a. present - 2+ posts per quarter
   b. active - 30+ posts per month
   c. interested - active with 25%+ posts in forum or category per month
   d. committed - active with 50%+ posts in forum or category per month

2. Forums:
   a. active - 3K+ posts per month, 10+ authors
   b. interpretable - 6K+ posts per month, 10+ authors, 10+ average wordcount
   (worst case: 10 authors posting once per 2 waking hours is 60 posts per day -> 2K per month, 6K per quarter)

I also set rules for determining what results would be interpretable:

1. Only report scores (sentiment, partisanship, etc.) with 20+ matching words (4x the level in Kiritchenko, Zhu, and Mohammad 2014)

2. Only report on batches of text with 1K+ words (Schwartz and Ungar 2015)
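These thresholds translate directly into code. The sketch below is my own illustration of the definitions above; the function and argument names are not from the study's actual pipeline:

```python
def author_labels(posts_per_quarter, posts_per_month, share_in_category):
    """Activity labels for an author under the thresholds above.

    share_in_category: fraction of the author's monthly posts that fall
    in the forum or category of interest.
    """
    labels = []
    if posts_per_quarter >= 2:
        labels.append("present")
    if posts_per_month >= 30:
        labels.append("active")
        # "interested" and "committed" both presuppose "active"
        if share_in_category >= 0.25:
            labels.append("interested")
        if share_in_category >= 0.50:
            labels.append("committed")
    return labels


def forum_labels(posts_per_month, n_authors, avg_wordcount):
    """Activity labels for a forum under the thresholds above."""
    labels = []
    if posts_per_month >= 3000 and n_authors >= 10:
        labels.append("active")
    if posts_per_month >= 6000 and n_authors >= 10 and avg_wordcount >= 10:
        labels.append("interpretable")
    return labels
```

For example, an author averaging 40 posts per month with 60% of them in advocacy forums earns all four labels, while a forum with 4K monthly posts of short comments is active but not interpretable.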

Figure. Prevalence of the term "white working class" in articles on Google News through 2016.
[Chart: "Articles on the White Working Class through 2016" - annual counts 72 (pre-2010), 33, 62, 108, 80, 177, 208, and 1579 (2016).]
[Chart: "Articles on the White Working Class in 2016" - monthly counts, January through December: 41, 49, 93, 65, 82, 106, 112, 131, 86, 132, 434, 248.]


Chapter 1: Boundary lexicon prevalence

The first and second chapters used measurements of the prevalence and alignment of downframing and boundary activation in specific boundary systems by which people divide themselves into groups - race, class, gender, sexuality, religion, immigrant status, and political identity. The first chapter compared prevalence across time periods and boundary systems to determine which increased, which decreased, and which clustered together over the 2016 election cycle. As noted above, the key aim of counting lexicons was to measure relative prevalence - how a specific form of boundary work changed in prevalence over time - rather than trying to compare absolute prevalence between boundary systems.

Before aggregating to lexicons as a whole, I wanted to have a firm understanding of the patterns of change in prevalence for each of the ngrams that comprised the lexicons. Across all the lexicons, there were 714 total ngrams. Using BigQuery, I computed prevalence for each ngram over all 8 quarters for the three forum categories - advocacy, entertainment, and community. I also calculated the ratio between the hits in the overall platform and the hits in advocacy forums to estimate the extent to which the ngram was in general use. At right is an example of the ngram prevalence study for the gender lexicon.
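The two quantities computed here - prevalence and the overall-to-advocacy ratio - are simple normalizations. A minimal sketch of each (the actual computation ran as BigQuery SQL, not Python):

```python
def prevalence_per_million(hits, total_words):
    """Lexicon or ngram hits normalized to hits per million words."""
    return hits / total_words * 1_000_000


def overall_to_advocacy(overall_hits, advocacy_hits):
    """Ratio of overall-platform hits to advocacy-forum hits: a rough
    gauge of how far a term circulates beyond advocacy forums."""
    return overall_hits / advocacy_hits
```

A term with 50 hits in a 2-million-word quarter has a prevalence of 25 per million; a ratio near or above 1 suggests the term is in general use rather than confined to advocacy spaces.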

I then characterized each of the lexicons according to the pattern revealed by the changes in prevalence among its ngrams. The table below shows the total hits among the million comments selected for the advocacy sample, along with the percent change from Winter 2015-16 to Fall 2017 in the sample from advocacy forums and in the sample from the overall Reddit platform. It also computes a measure of flux as the sum of the absolute value of the percent changes from each quarter to the next. In the final columns, the table captures some of the specific ngrams that increased or decreased over the study period.
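The flux measure can be sketched as follows (my own illustration of the definition above):

```python
def flux(prevalence):
    """Sum of absolute quarter-to-quarter percent changes.

    prevalence: prevalence scores ordered by quarter.
    Returns flux in percentage points (e.g., 218.0 for 218%).
    """
    return sum(
        abs(curr - prev) / prev * 100
        for prev, curr in zip(prevalence, prevalence[1:])
    )
```

A lexicon that rose from 100 to 150 and then fell to 75 has a flux of 100: a 50% rise followed by a 50% drop, even though its net change is only -25%.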

The ngram prevalence study revealed interesting complexity within each boundary system, for instance showing that downframing in political identity that targeted the left, right, and middle all increased over the study period while downframing against extremists decreased.

Table. Changes in prevalence on the overall Reddit between Winter 2015-16 to Fall 2017 for ngrams in the gender boundary activation lexicon.

ngram | % change reddit platform | overall as % of advocacy
nonbinary | 168% | 50%
trans | 44% | 43%
femininity | 30% | 44%
genders | 30% | 33%
masculinity | 25% | 26%
gendered | 25% | 39%
feminine | 20% | 69%
masculine | 16% | 47%
boys | 12% | 83%
transgender | 11% | 23%
gentlemen | 7% | 60%
sexes | 5% | 30%
gender | 3% | 37%
men | 1% | 39%
lady | 0% | 89%
manly | 0% | 92%
girls | -1% | 110%
males | -2% | 34%
females | -2% | 58%
women | -4% | 35%
male | -9% | 52%
female | -11% | 63%
chicks | -12% | 73%
mens | -12% | 39%
ladies | -14% | 88%
gentleman | -28% | 80%
cisgender | -48% | 29%


Table. Summary of prevalence study results.

Type | Advocacy hits | Advocacy %Change | Overall %Change | Flux | Increased in prevalence | Decreased in prevalence
A_Borders & Immigration | 43626 | 3% | 9% | 218% | nationalism | immigrants (non-downframe)
A_Class | 41510 | -4% | 12% | 197% | communist/socialist/capitalist, elites, millionaire/billionaire | 1%, the establishment, wealthy, rich, working poor, poverty, unions
A_Gender | 51917 | -22% | -1% | 380% | nonbinary | men, women
A_Political identity | 225961 | 4% | 25% | 258% | alt right, trump, democrat, left (a little right) | bernie, clinton
A_Race & Ethnicity | 18293 | 8% | 17% | 806% | white, latinx | the "american" suffix, caucasian, hispanic, latino
A_Religion | 31074 | -48% | -30% | 241% | spiritual, evangelicals | muslim (a lot), christian
A_Sexuality | 8193 | -2% | 14% | 261% | lgbtq, straight (people & women), pansexual | straight (men), meta terms (sexuality)
D_Borders & Immigration | 3253 | 45% | 58% | 264% | globalist/build that wall, anchor babies/illegals | offshoring, wetback, nativist/colonialist
D_Class | 3235 | -8% | -2% | 286% | anti-lowerclass white, bougie, corporate shill | anti-elite other than bougie & corporate shills
D_Gender | 12645 | 12% | 9% | 258% | antifem, tranny, bernie bros | antimasc, feminazi
D_Political identity | 33644 | 39% | 42% | 238% | globalist, anti-left/right/middle | anti-extreme, anti-political elites
D_Race & Ethnicity | 20540 | 37% | 35% | 264% | race traitor, white trash, subhuman, uncle tom, nigger, darkies | dindu, ghetto, welfare queens, wetback, uncivilized
D_Religion | 657 | -56% | -31% | 264% | (none) | anti-jew, islam, fundamentalism
D_Sexuality | 1478 | 31% | 23% | 319% | (all) | homos

With this more fine-grained insight into each member of the lexicons, I turned to measuring broader patterns. Using Google BigQuery, I first counted the prevalence (per million words) of each lexicon in each quarter across the whole Reddit platform, subtracting out the subreddits identified to be trivial or in a language other than English. I then calculated prevalence for each of the three main categories of Reddit forum - advocacy, entertainment, and community. Next, I pulled all the comments in advocacy forums and calculated the prevalence across different political leans - left, right, diverse, and other.

With this analysis as a foundation, I moved on to characterizing how each boundary system - race, class, etc - evolved over the course of the study period. I looked at the full trajectory of changes across all periods for each lexicon and developed a small set of codes to aid in categorizing their trajectories: continuous increase/decrease, spike/drop & return; spike/drop & new normal; little change. Only one boundary system did not fall into one of these categories: both downframing and activation of race & ethnicity jumped suddenly in Fall 2017 as the Unite the Right rally and NFL protests seized the attention of many in the nation.
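A rule-of-thumb version of this trajectory coding might look like the sketch below. The 20% tolerance and the focus on spike (rather than drop) variants are my own simplifications for illustration, not the study's actual decision rules:

```python
def code_trajectory(series, tol=0.20):
    """Assign a rough trajectory code to a quarterly prevalence series.

    tol: fractional change treated as meaningful (illustrative only).
    Drop variants would mirror the spike rules with min() and reversed signs.
    """
    start, end, peak = series[0], series[-1], max(series)
    end_change = (end - start) / start
    peak_change = (peak - start) / start
    rising = all(b >= a for a, b in zip(series, series[1:]))
    falling = all(b <= a for a, b in zip(series, series[1:]))
    if rising and end_change > tol:
        return "continuous increase"
    if falling and end_change < -tol:
        return "continuous decrease"
    if peak_change > tol and abs(end_change) <= tol:
        return "spike & return"
    if peak_change > tol and end_change > tol:
        return "spike & new normal"
    return "little change"
```

A series like [10, 30, 11, 10] codes as a spike & return, while [10, 30, 25, 26] settles at a new normal well above its starting level.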

After experimenting with multiple ways of presenting this data, I decided that the most parsimonious way to show the differences was a line chart tracking simple trajectories over time, paired with a table showing the change between the start of the study and the quarter after the election, along with the change from the start to the end of the study. I included a measure of the correlation


between the activation & downframing lexicons over the 8 quarters of the study period to show which pairs rose and fell together and which did not.

To test the significance of these results, I used prevalence scores for each lexicon in each of the 1528 forums that met the criteria (6K+ comments, 20+ authors, 10+ average word count) in all 8 quarters during the study period, providing a test of how consistent each pattern was across forums. Though this is not the same as a direct significance test of the changes, it did validate the great majority of the changes that looked substantial at first glance. The exceptions were the changes in downframing of class (a complicated pattern explored more fully in the second chapter) and in the sexuality downframing lexicon. As in the case of class downframing, the change in sexuality did not show as significant because of large drops in prevalence in some forums; an interesting future research project might be to repeat the analysis of class performed in Chapter 2 but target sexuality instead.

Chapter 1: Boundary lexicon centrality

Shifting rates of prevalence among forms of boundary work can show changes in discourse over time, but they say little about boundary realignment. To observe which forms of boundary work were most strongly connected to others - which were most intensely intersectional - I set about selecting a measure of intersectional centrality. I considered multiple measures that could communicate the extent to which one form of boundary work was entangled with the others - that is, how strongly it was correlated with other forms in general. Understanding the entanglement of one variable with others required seeing the structure of relationships between the lexicons as a network with each lexicon as a node.

The most intuitive measure of centrality was node strength, a simple addition of one lexicon's correlations with all others. This measure has seen much refinement in recent years, including a version that not only normalizes by network size but also considers the weight of both positive and negative correlations (Rubinov and Sporns 2011). Still, to support intersectional boundary analysis across a wide variety of contexts, I was looking for a measure of centrality that could scale well with network size, and nearly every measure of strength and centrality has been shown to be weak in this regard (Bringmann et al. 2019).

The one exception is eigenvector centrality, which rates nodes highly not only according to the strength of their connections but also according to the centrality of the nodes they connect to. This measure has been shown to apply across a wide range of network structures (Bonacich 2007), to scale well, and to track measures of causal influence better than many commonly used alternatives like betweenness and closeness centrality (Dablander and Hinne 2019). The


measure performs particularly well when the eigenvector is normalized for comparison across networks (Ruhnau 2000).

I calculated measures of eigenvector centrality using the igraph package in R. I also calculated the global clustering coefficient, which stayed relatively stable over the period, ranging from 0.78 to 0.84.
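For intuition, eigenvector centrality can be computed with a plain power iteration. The sketch below is pure Python rather than the igraph call actually used, and it scales scores so the most central node gets 1, matching the normalization in the tables in this section:

```python
def eigenvector_centrality(adj, iters=200):
    """Power-iteration eigenvector centrality for a weighted, undirected
    adjacency matrix (list of lists), scaled so the maximum score is 1."""
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        # multiply the current scores by the adjacency matrix...
        x_new = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        # ...then rescale so the largest score is 1
        top = max(abs(v) for v in x_new)
        x = [v / top for v in x_new]
    return x
```

On a toy network - a triangle of lexicons with a fourth node attached to one vertex - the shared vertex scores 1, its two triangle partners tie below it, and the pendant node scores lowest.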

I was not able to find a reliable way to test the significance of changes in eigenvector centrality that featured a consensus in the literature. To validate the result, I computed correlations between normalized node strength (also via the igraph package) and eigenvector centrality. These correlations were strong, as shown below:

Chapter 1: Boundary lexicon clustering

To measure changes not only in the individual boundaries but in their relationships, I used a hierarchical clustering technique called clustering around latent variables (CLV), which is particularly well suited to revealing cultural structures where categories intersect and overlap (Vaisey 2007; Giannella and Fischer 2016). Clustering among groups of words or phrases is key to the production of meaning and measuring the degree and structure of word clusters has been a popular target for computational content analysis (Leydesdorff 2011; Mohr et al. 2013).

The CLV algorithm took as its input the matrix of correlations among prevalence scores across the set of active forums selected (6K+ posts, 10+ authors, 10+ average word count) for each quarter. This measured the strength of association between types of boundary work - activation, downframing, and emotional expression tied to race, class, gender, and other types of symbolic boundary - based on how frequently the types appeared in the same forum.

Like many clustering algorithms, the CLV procedure requires selecting the number of clusters that best fits the data. The most parsimonious number of clusters is often the one that precedes a jump in the quantity Δ = T(K) − T(K−1) as the number of clusters decreases from K to K−1, where T is the clustering criterion to be maximized.

Table. Changes in intersectional entanglement (eigenvector centrality) for Activation (A) and Downframing (D) lexicons in three quarters, ordered by value in Fall 2017.

Lexicon | Winter 2015-16 | Winter 2016-17 | Fall 2017
D_Political identity | 0.86 | 1.00 | 1.00
A_Left (Political identity) | 0.83 | 0.99 | 0.84
D_Race & Ethnicity | 1.00 | 0.68 | 0.62
D_Borders & Immigration | 0.64 | 0.57 | 0.57
A_Class | 0.61 | 0.52 | 0.57
A_Borders & Immigration | 0.71 | 0.64 | 0.55
A_Right (Political identity) | 0.55 | 0.77 | 0.53
A_Race & Ethnicity | 0.79 | 0.46 | 0.48
D_Class | 0.42 | 0.51 | 0.46
D_Gender | 0.44 | 0.40 | 0.28
A_Religion | 0.35 | 0.32 | 0.21
A_Gender | 0.26 | 0.19 | 0.17
D_Sexuality | 0.29 | 0.20 | 0.15
D_Religion | 0.47 | 0.39 | 0.14
A_Sexuality | 0.16 | 0.19 | 0.12

Table. Correlations between eigenvector and node strength by quarter.

Winter 2015-16 | Winter 2016-17 | Fall 2017
0.910 | 0.911 | 0.923


Here xj (j = 1,…,p) are the variables to be clustered, ck (k = 1,…,K) the latent components of the clusters, and δkj a binary variable indicating membership of a given variable xj in a given cluster ck (Vigneau and Qannari 2003).

To characterize the relationships between lexicons in each quarter, I used the ClustVarLV package in R (Vigneau, Chen, and Qannari 2015) to generate hierarchical cluster dendrograms for each quarter. In the initial round of analysis, I set these dendrograms side by side for comparison as in the figure at right, trying to distinguish between the connections that persisted and changed from quarter to quarter.

As with most longitudinal hierarchical clusters with many variables, large clusters often form from tight relationships between pairs of variables, as shown in the table below:

A primary challenge in characterizing clusters in complex interconnected systems is to determine the number of clusters that best aggregates these pairwise relationships into an interpretable whole. To select the number of clusters that offered the most parsimonious description of the structure of relationships between lexicons, I computed Δ curves for each quarter and set them next to each other.

For almost every quarter, the four-cluster solution was an obvious choice; for several a three- and five-cluster solution seemed feasible as well, as discussed in the results section for Chapter 1. See the figure below for two representative graphs showing how the delta criterion changed with the number of clusters.
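The selection rule can be made concrete with a toy example. The T values below are illustrative only, not the study's actual criterion values:

```python
def delta_curve(T):
    """Δ(K) = T(K) − T(K−1): the loss in the clustering criterion
    incurred by merging from K clusters down to K−1.

    T: dict mapping number of clusters K -> criterion value.
    """
    return {k: T[k] - T[k - 1] for k in sorted(T) if k - 1 in T}


# Illustrative criterion values with a clear jump when dropping below 4 clusters
T = {2: 5.0, 3: 6.5, 4: 9.5, 5: 10.0, 6: 10.25}
print(delta_curve(T))  # {3: 1.5, 4: 3.0, 5: 0.5, 6: 0.25}
```

The large Δ at K = 4 says that merging four clusters into three destroys far more structure than merging five into four, so four clusters is the parsimonious choice.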

Table. Strongest pairwise correlations in first and last quarters.

Winter 2015-16 (x, y, r) | Fall 2017 (x, y, r)
a_left, d_polident, 0.83 | a_left, d_polident, 0.83
a_class, d_borders, 0.74 | a_class, d_borders, 0.74
a_raceth, d_raceth, 0.71 | a_raceth, d_raceth, 0.71
a_class, d_polident, 0.69 | a_left, a_right, 0.69
a_left, a_right, 0.69 | d_polident, d_borders, 0.68


Having chosen four as the number of clusters, I proceeded to characterize the clusters in each quarter, computing in-group and out-group correlations for each variable as well as the correlations between clusters as in the table at right. To name each cluster, I looked first at the variables with the highest correlations to the cluster and then created a name that seemed to address all variables at once. These clusters emerged:

• Gender & Intensity (Gender): Anchored by the strong correlations of gender boundary activation and intense emotional expression, this cluster also included boundary activation in sexuality and downframings in gender. The alt-right lexicon was part of this cluster in Winter 2015-16 and Summer 2016 but ultimately shifted to the Race cluster.

• Race & Negativity (Race): The core of this cluster was the activation and downframing of race along with negative emotional expression. Over time this core gained and lost a member: the alt-right lexicon fluctuated between this cluster and the Gender cluster in the first four quarters and then remained in Race for the final year, while class downframing ultimately migrated to the Politics cluster.

• Formal & Mainstream Political Discussion (Politics): At the core of this cluster were boundary activation lexicons tied to the political right and left, with high frequency in advocacy forums. The cluster included activation & downframing lexicons for borders & immigration along with class activation. Class downframing joined the cluster in the quarter after the election and remained for 3 of the 4 following quarters.

• Religion: The strongly correlated dyad of activation and downframing in religion was linked to boundary activations of race/ethnicity and immigration status but resolved to its own cluster in every quarter. This link remained strong in each quarter with no changes to cluster membership over the full 2-year period, and indeed correlations with other clusters tended to decline over the period.

Figure. Charts showing change in Δ criterion with decreasing number of groups for Winter 2015-16 and Fall 2017.


I then tracked changes in the correlations from quarter to quarter along with the moments in which a variable switched from one cluster to another. Finally, to visualize relationships between the variables, for each quarter I developed a network graph that made both the distances between variables (via MDS scaling) and the thickness of the lines between them (proportional to the absolute value of their correlation) as meaningful as possible. These charts were interesting, but ultimately the dendrograms provided the most information. See below for an example.

Table. CLV cluster solutions and between-cluster correlations for Winter 2015-16 and Fall 2017.

Winter 2015-16 clusters (sizes: 6, 4, 6, 2)

Group 1 (cor in group / cor next group):
d_sexu 0.85 / -0.09
d_gender 0.82 / 0.23
a_gender 0.60 / -0.07
s_intensity 0.55 / -0.32
w_altright 0.53 / -0.01
a_sexu 0.45 / -0.25

Group 2:
d_raceth 0.88 / 0.39
a_raceth 0.81 / 0.42
d_class 0.70 / 0.32
s_negemot 0.56 / 0.13

Group 3:
a_left 0.95 / 0.31
d_polident 0.92 / 0.51
d_borders 0.91 / 0.32
a_class 0.89 / 0.28
a_borders 0.85 / 0.59
a_right 0.79 / 0.36

Group 4:
d_religion 0.94 / 0.33
a_religion 0.94 / 0.32

Between-cluster correlations:
       Comp1  Comp2  Comp3  Comp4
Comp1   1.00  -0.17  -0.75  -0.48
Comp2  -0.17   1.00   0.42   0.17
Comp3  -0.75   0.42   1.00   0.35
Comp4  -0.48   0.17   0.35   1.00

Fall 2017 clusters (sizes: 5, 5, 6, 2)

Group 1 (cor in group / cor next group):
d_sexu 0.77 / -0.01
d_gender 0.75 / 0.40
a_gender 0.70 / -0.14
a_sexu 0.57 / -0.22
s_intensity 0.55 / -0.16

Group 2:
d_raceth 0.90 / 0.34
a_raceth 0.72 / 0.34
d_class 0.70 / 0.38
w_altright 0.54 / 0.25
s_negemot 0.52 / 0.06

Group 3:
a_left 0.94 / 0.28
d_borders 0.92 / 0.33
a_class 0.86 / 0.23
a_borders 0.85 / 0.58
d_polident 0.82 / 0.53
a_right 0.74 / 0.23

Group 4:
d_religion 0.95 / 0.21
a_religion 0.95 / 0.17

Between-cluster correlations:
       Comp1  Comp2  Comp3  Comp4
Comp1   1.00  -0.11  -0.71  -0.56
Comp2  -0.11   1.00   0.41  -0.13
Comp3  -0.71   0.41   1.00   0.20
Comp4  -0.56  -0.13   0.20   1.00

Figure. Results of MDS positioning algorithm based on Clustering of Latent Variables (CLV) analysis.


Chapter 2: Qualitative analysis of class discourse

The analysis of chapter 2 began with a quantitative analysis of the prevalence of the boundary activation lexicon related to class discourse over the two-year period from Winter 2015-16 to Fall 2017. I then focused on two quarter-long time periods - Spring 2016 and Summer 2017 - for deeper mixed-methods analysis. The overall plan of analysis follows a similar path to the approach to computational grounded theory developed by Nelson (2017), using quantitative analysis on a large set of text to reveal patterns that are tested and elaborated by close reading of text and context.

An important difference from Nelson's approach is that the initial analysis is performed on the full Reddit corpus rather than a sample, reducing the need for a final confirmation step. This is crucial given the complex and highly implicit character of speech on controversial topics of race, gender, and other forms of inequality, which tends to elude quantitative analysis (Amundson and Zajicek 2018).

Preliminary analysis on the qualitative sample of class-relevant comments (the class sample) showed that gender and sexuality would be too rare to be accurately counted. The two codes were initially kept separate, but then combined for the quantitative analysis of coding in Chapter 2 along with the round of validation based on the class sample. This coding system mainly followed the same structure as the boundary activation lexicons, with the small difference that it lumped gender & sexuality into a combined code and separated class into economic and political components, as tracked in the table below:

Category Chapter 1 version Chapter 2 version

Class (economic status) _class Economic class

Class (political power) _class Political class

Race and ethnicity _raceth Race

Borders (immigration, nationality) _borders Borders

Political identity (party, ideological group) _polident Political identity

Religion _religion Religion

Sexuality _sexu Gender & sexuality

Gender _gender Gender & sexuality

With a research assistant, I then set about refining a coding system for use with this project. We started from the codebook developed above to track downframing and upframing across race, class, gender, and the other forms of difference tracked in the larger study. It was relatively simple to combine gender and sexuality codes and to combine the guidance on downframing and upframing to track the salience of each boundary system, but we took even greater care with the new coding elements: two codes to distinguish between class discourse targeting economic status v. political power, and one to distinguish between comments targeting elites versus non-elites - boundary work above and below.


Interestingly, placing common people and elites on separate axes and allowing emotional charge to vary for each creates a simple two-way table that resonates strongly with recent attempts in populism studies to differentiate between different forms of populism and the movements that oppose it.

Speech centered on upframing common people and downframing elites meets the now-traditional definition of "inclusionary" populism popularized by Mudde (2004), but anti-elite speech that focuses more on attacking and excluding other groups of everyday people meets the definition of exclusionary populism later proposed by Mudde and Kaltwasser (2013). Speech that elevates elites at the expense of common people fits with elitism, one of populism's proposed opposites (Mudde and Kaltwasser 2018). The other opposite, pluralism, imagines elites and common people in collaboration; this doubly positive framing is attractive on face but can limit mobilization against elites who may indeed be corrupt (Stavrakakis and Jäger 2018).

To test and refine this modified coding system, we gathered a random sample of 200 comments from advocacy forums that included one of the words in the class lexicon, 100 from Fall 2015 and 100 from Fall 2017. Each comment was between 20 and 300 words in length to maximize interpretability, as with the larger sample ultimately coded in the analysis for Chapter 2. We used Dedoose to code specific sections of each comment, taking care to code all of the forms of downframing that appeared in each comment. We then met together to reconcile the codes and refine the coding system, discarding subcodes on which we couldn't come to agreement.

After developing and testing the codes, I set about developing a stratified random sample of comments related to class discourse. I first created a dataset of the comments on Reddit that included one of the ngrams in the class lexicon. To allow for substantive, focused coding, I limited the dataset to comments between 20 and 300 words in length, producing a total corpus of roughly 124 million comments of medium length and including at least one word from the class lexicon. From this universe I randomly sampled 480 comments, drawing six random subsamples of 80 comments each according to time period (Spring 2016, Summer 2017) and political lean (left, right, diverse).
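The stratified draw can be sketched as follows (hypothetical field names; the real sampling ran against the BigQuery dataset):

```python
import random

def stratified_sample(comments, n_per_cell=80, seed=0):
    """Draw n_per_cell comments from each (period, lean) cell.

    comments: dicts with at least 'period' and 'lean' keys.
    """
    rng = random.Random(seed)
    # group comments into strata by time period and political lean
    cells = {}
    for c in comments:
        cells.setdefault((c["period"], c["lean"]), []).append(c)
    sample = []
    for key in sorted(cells):
        sample.extend(rng.sample(cells[key], n_per_cell))
    return sample
```

With two periods crossed by three leans, this yields 6 × 80 = 480 sampled comments.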

I created a coding database in Airtable that enabled high speed and accuracy both in assigning codes for quantitative analysis and in calling up comments that met a range of criteria for qualitative analysis and validation. Assuming a normal distribution in the proportion of codes among the total possible samples of the population, the margin of expected sampling error at 95% confidence for a sample of 420 is roughly 4.8%. Thus, proportions and differences under 5% are described below as "within the margin of error" in the study's results and changes in prevalence under 5% are not reported.
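The quoted margin follows from the conservative binomial formula MOE = z · sqrt(p(1−p)/n) with p = 0.5; a quick check (assuming z = 1.96 for 95% confidence):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Conservative sampling margin of error for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(420), 3))  # 0.048, i.e., the ~4.8% quoted above
```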

This produced quantitative estimates of the salience of various forms of boundary work in the sample. To understand these results in context, I performed separate qualitative studies of the full group of comments in left, right, and diverse forums for each time period, writing summary memos including representative quotes for each subfield. To summarize and write up each pattern, I called up all relevant comments on Airtable and wrote analytic memos noting the number of comments

Table 2. Combinations: downframing v. upframing, elites v. common people.

Common - Common +

Elite - Exclusionary populism

Inclusionary (traditional) populism

Elite + Elitism Pluralism


that seemed to fit the pattern along with those that didn't. I selected only quotes that seemed representative of a pattern including 10 or more comments. Each pattern I cross-checked against the quantitative analyses of prevalence and clustering described above, noting where it agreed with the quantitative analysis, where it differed, and where it added more complexity.

To test the significance of the apparent patterns in boundary work, I computed a χ2 statistic testing the significance of changes from 2016 to 2017 for each form of analysis (boundary work above and below, class intersections) and for each category of forum (left, right, and diverse). Because religion had fewer than 5 hits in each cell of the frequency table, I excluded it from the analysis.
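A generic Pearson χ2 statistic for an r×c table of code frequencies can be computed by hand as below (my own sketch; the study may have used a standard statistics package):

```python
def chi_square(table):
    """Pearson chi-square statistic for an r×c frequency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat
```

The statistic is compared against a χ2 distribution with (r−1)(c−1) degrees of freedom; the fewer-than-5-expected-counts rule invoked above is the standard validity condition for this test.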


Chapter 3: Characterizing types of public

Chapter 3 uses summary analyses along with a series of logistic and multiple linear regression models to explore the relationship between modes of communication and political engagement, focusing on what happened to new authors entering advocacy forums on Reddit after the 2016 US presidential election. I began by tracking changes in membership, polarization, and emotional intensity among advocacy forums on Reddit from Winter 2016-17 to Fall 2017, categorizing them as Deliberative spheres, Counterpublics, Incubators, and Battlefields according to their levels of emotional intensity and ideological diversity (see above).

The sample of forums. To measure how different publics changed in membership, activity, polarization, and intensity during the time after the election, I measured these statistics for all advocacy forums that were interpretable (6K+ comments, 20+ authors, 10+ average words per comment) in Winter 2016-17. This created a cohort of 139 forums.

For each forum selected, I repeated the measurements for every quarter - even those that dropped below the interpretable threshold after Winter 2016-17. I removed three sets of forums whose changes would be trivial or artifactual:

1. Forums focused on specific campaigns or news events, like EnoughTrumpSpam, SandersforPresident, and WhereIsAssange

2. Forums that were banned or directly modified by Reddit staff - like altright or The_Donald - or closed by their moderators - like ThanksObama

3. Huge "default" forums that were often directly linked to the Reddit home page but subject to fluctuations based on how they were placed

I then divided the forums into Deliberative spheres, Counterpublics, Incubators, and Battlefields in two ways in order to be able to test the robustness of the effects that the type of public had on author retention and engagement (see Table below for a summary):

1. Comprehensive - using all forums. This separated forums into the four categories based on whether they were in the top or bottom half of scores in intensity and polarization.

2. Conservative - cut out middle. This included only forums that were in the top or bottom quartile rather than the top or bottom half. See below for a summary.

Table. Forums excluded from Chapter 3 analysis with reason

Excluded forum Reason

altright Banned by Reddit

bidenbro Tied to 2016 campaign

DNCleaks Tied to 2016 campaign

Enough_Sanders_Spam Tied to 2016 campaign

EnoughTrumpSpam Tied to 2016 campaign

GaryJohnson Tied to 2016 campaign

hillaryclinton Tied to 2016 campaign

HillaryForPrison Tied to 2016 campaign

HillaryMeltdown Tied to 2016 campaign

jillstein Tied to 2016 campaign

Kossacks_for_Sanders Tied to 2016 campaign

news Default forum

nottheonion Default forum

Political_Revolution Tied to 2016 campaign

politics Default forum

ThanksObama Closed by moderators

The_Donald Modified by Reddit

the_meltdown Tied to 2016 campaign

WayOfTheBern Tied to 2016 campaign

WhereIsAssange Used primarily for Spring17 events

WikiLeaks Used primarily for Spring17 events

worldnews Default forum


I also conducted a flow analysis for authors who stayed active in advocacy forums on Reddit to see how the preferences of individual authors shifted between Winter 2016-17 and Fall 2017. Only 32% of authors who initially were committed to one type of public (i.e., who had 50% or more of their posts in that public type) shifted to another - or to no preference - during this time. Those who did shift their commitment to new public types followed the pattern below:

Table. Destinations of authors who shifted commitment between public types (rows: type committed in Winter 2016-17; columns: type committed in Fall 2017).

 | Deliberative Sphere | Counterpublic | Incubator | Battlefield | Total
Deliberative Sphere | - | 45% | 33% | 22% | 100%
Counterpublic | 36% | - | 35% | 30% | 100%
Battlefield | 28% | 58% | 13% | - | 100%
Incubator | 45% | 48% | - | 7% | 100%

Chapter 3: Predicting author retention and engagement - sample and core variables

Sample of authors. To capture how the activity of newly engaged people changed after the election, I first identified all authors who were both present (1+ post) and interested (25% of their total posts across Reddit) in advocacy forums in the quarter directly after the election (Winter 2016-17). I downloaded all comments from these authors in that quarter for analysis and collected a range of statistics on their activity from a year before the election (Winter 2015-16) to a year after (Fall 2017).

To focus on people who were newly engaged after the election, I then created a subset of the authors who were not present in advocacy forums in any of the four quarters prior to the election. To eliminate bots and spammers, I identified and removed users who used language associated with either (e.g., "I am a bot"), those with identifiable repeat patterns of posting, and those with 100 or more posts in a quarter. For the sample used in regressions, I further filtered out users with fewer than 10 posts to eliminate noise and focus on authors whose relatively deep engagement would suggest that they are likely to remain, making dropout an outcome to explain rather than expect.

Table. Changes from Winter 2016-17 to Fall 2017 by forum type

                      Comprehensive - All Forums         Conservative - Cut Out Middle
                      Polarization  Emotional  # Forums  Polarization  Emotional  # Forums
                                    intensity                          intensity
Deliberative Sphere    Q3&Q4         Q3&Q4        39      Q4            Q4           11
Counterpublic          Q1&Q2         Q1&Q2        40      Q1            Q1            9
Incubator              Q1&Q2         Q3&Q4        19      Q1            Q4            3
Battlefield            Q3&Q4         Q1&Q2        19      Q4            Q1            6

Note: Includes all advocacy forums except those focused on specific 2016 candidates (e.g., EnoughTrumpSpam, SandersforPresident). High polarization > 16.5, low polarization < 16.5; high intensity > 152K, low intensity < 152K.
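The sampling filters can be sketched as a single predicate over a per-author record. The record fields and some thresholds below are assumptions drawn from the description above (for instance, whether the 10-post minimum counts advocacy posts only is not fully specified), and the actual analysis was conducted in R rather than Python.

```python
def keep_author(rec,
                interest_threshold=0.25,
                min_posts=10,
                bot_cap=100,
                bot_markers=("i am a bot",)):
    """Apply the sampling filters described above to one (hypothetical) author record.

    rec keys (assumed):
      adv_posts       - posts in advocacy forums, Winter 2016-17
      total_posts     - posts anywhere on Reddit, Winter 2016-17
      prior_adv_posts - advocacy posts in each of the four pre-election quarters
      texts           - sample of the author's comment texts
    """
    if rec["adv_posts"] < 1:
        return False                                   # must be present (1+ post)
    if rec["adv_posts"] / rec["total_posts"] < interest_threshold:
        return False                                   # must be interested (25%+)
    if any(rec["prior_adv_posts"]):
        return False                                   # newly engaged only
    if rec["total_posts"] >= bot_cap:
        return False                                   # 100+ posts/quarter: likely bot
    if any(m in t.lower() for t in rec["texts"] for m in bot_markers):
        return False                                   # self-identified bots
    if rec["adv_posts"] < min_posts:
        return False                                   # regression sample: 10+ posts
    return True
```

(Repeat-pattern detection is omitted here, since the text does not specify the matching rule.)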

Dependent variables: retention and engagement. The analysis uses two sets of regression models to predict the influence of mode of communication on two outcomes for authors:

1. Author attrition. Multiple logistic regression predicts a binary indicator of whether new authors left the advocacy forums (i.e., were present in 1 or fewer of the following three quarters).

2. Author engagement. Multiple linear regression predicts the change in a new author's engagement in advocacy forums, i.e., the difference between the number of their posts in the quarter following the election and the average across the three quarters after that.

The binary indicator of retention showed whether an author remained present (more than 1 post) across all advocacy forums for at least 2 of the 3 quarters following Winter 2016-17. That is, if an author posted twice in both Spring 2017 and Summer 2017, they counted as retained. To assess the reliability of the logistic regressions using this variable, I tested this indicator alongside similar measures of whether an author remained present in 1 or more quarters and all 3 quarters.

The continuous indicator of engagement, showing how much an author's activity changed over the period, was the difference between the number of comments an author posted in advocacy forums in Winter 2016-17 and the average number of comments they posted in the three quarters following. To validate results, I tested this indicator alongside similar measures based on percent change in advocacy posts and changes over 1, 2, and 3 quarters.
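Both outcome variables can be sketched directly from a per-author vector of quarterly advocacy post counts. This is an illustrative Python sketch (the study's analyses were run in R), and the sign convention for the engagement change, following-quarter average minus Winter 2016-17 posts, is an assumption.

```python
def retained(posts_by_quarter, min_quarters=2):
    """Binary retention: present (more than 1 post) in at least `min_quarters`
    of the three quarters following Winter 2016-17."""
    present = sum(1 for n in posts_by_quarter if n > 1)
    return present >= min_quarters

def engagement_change(winter_posts, posts_by_quarter):
    """Continuous engagement change: mean advocacy posts over the three
    following quarters, relative to posts in Winter 2016-17."""
    return sum(posts_by_quarter) / len(posts_by_quarter) - winter_posts
```

Setting `min_quarters` to 1 or 3 reproduces the alternative retention measures used to check reliability.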

Neither of these indicators is a perfect metric of participation in advocacy. Authors who left the platform may have directed their advocacy to another platform, taken their advocacy offline, or even paused their advocacy only to continue after the time window of this study. Still, because these alternative outcomes are likely to be distributed evenly across forums, it is reasonable to expect that differences in participation within these forums meaningfully reflect factors endogenous to the platform.

Core independent variables: polarization, emotion, and public type. Three sets of indicators emerged from this analysis as the core independent variables in the regression models that followed:

A. Polarization and emotion. Emotional intensity and polarization averaged across the forums that each author was exposed to, weighted by the author's number of posts in each forum;

B. Engagement ratios. The ratio of posts that each author made in Deliberative Spheres, Counterpublics, Incubators, and Battlefields; and

C. Commitment to publics. Binary indicator variables for authors who spent more than 50% of their posts in each public type.
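The three indicator sets can be sketched per author from their post counts by forum, given forum-level scores and a forum-to-public-type mapping. The function names and input structures below are hypothetical; the dissertation's pipeline was built in R.

```python
def exposure_weighted(author_posts, forum_scores):
    """Average a forum-level score (polarization or emotional intensity) over
    the forums an author posted in, weighted by the author's post counts."""
    total = sum(author_posts.values())
    return sum(n * forum_scores[f] for f, n in author_posts.items()) / total

def public_type_features(author_posts, forum_type, threshold=0.5):
    """Per-author share of posts in each public type, plus commitment dummies
    for authors with more than `threshold` of their posts in one type."""
    total = sum(author_posts.values())
    ratios = {}
    for forum, n in author_posts.items():
        t = forum_type[forum]
        ratios[t] = ratios.get(t, 0.0) + n / total
    dummies = {t: int(share > threshold) for t, share in ratios.items()}
    return ratios, dummies
```

For an author with three posts in a low-intensity forum and one in a high-intensity forum, the exposure-weighted score sits three-quarters of the way toward the former.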

Figure. Correlations among predictor variables for new author sample


To obtain a polarization score for each forum, I first measured the proportion of comments that each of that forum's authors posted on the left and the right. I computed the difference between the two and averaged it across all forum comments, thus giving more weight to the most active authors on the forum. Crucially, this scale is focused on the extent to which partisan authors from one side or the other are dominating the conversation. It is not affected by the presence of authors who posted in neither left-leaning nor right-leaning forums, which means that it tends toward zero as the presence of partisan authors decreases.
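A minimal sketch of this forum-level score follows, in Python rather than the R used for the study. Two details are assumptions where the text is ambiguous: each author's lean is taken as a signed share, (left − right) / total posts, and the absolute value of the comment-weighted average is reported so that domination by either side scores high while balanced or nonpartisan forums tend toward zero.

```python
def forum_polarization(comment_counts, author_profile):
    """Polarization score for one forum.

    comment_counts: author -> number of comments in this forum
    author_profile: author -> (left_posts, right_posts, total_posts) across Reddit

    An author who posts in neither left- nor right-leaning forums has lean 0,
    so the score shrinks as the presence of partisan authors decreases, and
    weighting by comments gives more weight to the forum's most active authors.
    """
    weighted = 0.0
    n_comments = sum(comment_counts.values())
    for author, n in comment_counts.items():
        left, right, total = author_profile[author]
        lean = (left - right) / total if total else 0.0
        weighted += n * lean
    return abs(weighted / n_comments)
```

A forum dominated by partisans of one side scores near 1 on this sketch; a forum split evenly between committed left and right authors scores near 0.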

To measure the presence of emotion in comments, I used the AFINN lexicon (Nielsen 2011), designed to track the valence and intensity of emotional expression online. This lexicon has proven a strong performer in a meta-analysis of sentiment methods, particularly in the context of social media (Ribeiro et al. 2016).
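Lexicon scoring of this kind can be sketched in a few lines. The toy lexicon below is hypothetical and far smaller than the real AFINN list (roughly 2,477 terms scored from -5 to +5); summing signed scores gives a valence measure, while summing absolute scores gives an intensity measure.

```python
# Illustrative toy lexicon; words and scores are stand-ins, not the real AFINN values.
TOY_LEXICON = {"love": 3, "great": 3, "disaster": -2, "outrage": -3, "scandal": -2}

def sentiment(text, lexicon=TOY_LEXICON):
    """Return (valence, intensity) for a comment: valence sums signed word
    scores, intensity sums their absolute values."""
    words = text.lower().split()
    valence = sum(lexicon.get(w, 0) for w in words)
    intensity = sum(abs(lexicon.get(w, 0)) for w in words)
    return valence, intensity
```

A comment can thus be low in valence but high in intensity when strongly positive and strongly negative words co-occur, which is why the two are tracked separately.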

Including indicators tied to each public type functioned similarly to adding an interaction term between polarization and emotional intensity, but with a specific focus on the experience of authors. Grouping the variables in this way measured not simply whether the effects of emotion and polarization rise or fall together, but rather the specific effect of exposure to each public type.

Confounding independent variables: boundary work, advocacy focus, and partisan position. To capture downframing and boundary activation, I used measures of the prevalence of the original lexicons described above. To measure the extent to which authors focused on advocacy rather than entertainment, community, or other kinds of forums on Reddit, I calculated the proportion of posts each author made in advocacy forums during each quarter. To test whether the effects of other predictors changed with partisan position, I created dummy variables identifying authors who made more than half of their advocacy posts in left and right forums respectively.

Chapter 3: Predicting author retention and engagement - regression models

Ultimately the regressions produced six charts in the final version of the report: 1A, 1B, and 1C test the effects on author retention of polarization and intensity, the proportion of posts in each type of public, and commitment to a type of public, respectively; 2A, 2B, and 2C test the effects of the same variable clusters on author engagement.

To make the results succinct and interpretable, I report on only three models in each of the six charts:

• Core: Testing the effects of the core independent variables alone

• Core+: Adding in exposure to downframing, boundary activation, and advocacy forums

• Full: Adding in dummy variables for focus on left and right forums.
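Using the variable names that appear in the regression tables, the three nested specifications can be written as column lists. This is an illustrative sketch only, not the author's actual code, and the grouping shown is the A-chart version (polarization and emotion as core variables).

```python
# Nested model specifications for the A charts (hypothetical reconstruction).
CORE = ["f_polarize", "f_valence", "f_intensity"]
CORE_PLUS = CORE + ["f_downframe", "f_catactive", "i_advfocus"]
FULL = CORE_PLUS + ["i_dummyleft", "i_dummyright"]

def model_columns(spec):
    """Return the predictor columns for a named model specification."""
    return {"Core": CORE, "Core+": CORE_PLUS, "Full": FULL}[spec]
```

The B and C charts substitute the `ratio_*` and `bin_*` variable clusters for the core block while keeping the same nesting structure.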

The final set of regressions reported were selected to best capture a more complex field of results. I ran many additional regressions with interaction terms, excluded variables, and additional variables to test the robustness of each reported model. I also reran the full set of regression analyses on subsets of the main sample, including i) authors who were committed to each type of public; ii) authors who remained or left (for the engagement analysis); iii) established authors who had been present in advocacy forums 3 or more quarters before the election; and iv) authors with other ranges of posts. Some of these subsets changed the confidence levels of various predictors, but the basic relationships between variables remained relatively constant.

The noise level and size of the dataset require a very high level of validation to interpret effect sizes for each of the predictors. Because p-values can be misleading for large datasets, I report confidence intervals for each predictor and interpret effects based on the smallest magnitude in the interval (Lin, Lucas, and Shmueli 2013). All predictor variables are transformed into z-scores - centered and scaled to have mean 0 and standard deviation 1 - to facilitate comparison.
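Both conventions can be sketched briefly. In the z-scoring below the population standard deviation is used; whether the study used the population or sample formula is not stated, so that choice is an assumption.

```python
import math

def zscore(xs):
    """Center and scale a variable to mean 0 and standard deviation 1."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / sd for x in xs]

def conservative_effect(ci_low, ci_high):
    """Interpret a coefficient by the smallest magnitude in its confidence
    interval; an interval spanning zero yields a conservative effect of 0."""
    if ci_low <= 0 <= ci_high:
        return 0.0
    return min(abs(ci_low), abs(ci_high))
```

For example, a 95% interval of [0.06, 0.15] would be read as an effect of at least 0.06 standard deviations, while [-0.07, 0.01] would be read as no reliable effect.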

Beyond the standard approach of using multiple variable groups in regression, I conducted both commonality analysis (Ray‐Mukherjee et al. 2014; Kraha et al. 2012) and bootstrap analysis (Davison and Hinkley 1997) to validate the direction and relative size of each coefficient. Commonality analysis was particularly important because the independent variables are strongly correlated (see bivariate correlations above); in this situation, commonality analysis is often used to determine where variables might be masking the effects of others (Capraro and Capraro 2001; Ray‐Mukherjee et al. 2014; Kraha et al. 2012). In two of the 42 models I created, the results showed that intensity may play a suppressing role in the analysis, particularly capturing some of the impact of valence. Still, intensity repeatedly showed larger effects than valence across each of the regression models.
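For the two-predictor case, commonality components can be computed from nested R² values. The Python sketch below (the study used R packages for the full analysis) illustrates the decomposition; a negative common component signals suppression of the kind noted for intensity.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def commonality_two(x1, x2, y):
    """Unique and common variance components for two correlated predictors.

    unique_i = R2(both) - R2(other alone); common = R2(x1 alone) - unique_1.
    The components sum exactly to the full-model R2.
    """
    r_full = r_squared(np.column_stack([x1, x2]), y)
    r1 = r_squared(x1[:, None], y)
    r2 = r_squared(x2[:, None], y)
    u1, u2 = r_full - r2, r_full - r1
    return {"unique": (u1, u2), "common": r1 - u1, "total": r_full}
```

With more predictors the decomposition requires all 2^k − 1 subset regressions, which is what the R commonality packages automate.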

Table. Results of commonality analysis for full model 1A

               Unique   Common    Total
f_polarize     0.0014   0.0011   0.0025
f_valence      0.0001   0.0065   0.0066
f_intensity    0.0014  -0.0012   0.0002
f_downframe    0.0007   0.0007   0.0014
f_catactiv     0.0035   0.0063   0.0098
i_polpercent   0.0020   0.0110   0.0130
i_dummyleft    0.0000   0.0001   0.0001
i_dummyright   0.0000   0.0001   0.0001

Table. Results of full model 1A including an interaction term for polarization and intensity

                          Estimate  Std. Error  z value  Pr(>|z|)
(Intercept)               -0.80933     0.03804  -21.278   < 2e-16 ***
f_polarize                 0.03117     0.03883    0.803     0.422
f_intensity                0.15170     0.02444    6.208  5.38e-10 ***
f_downframe                0.01079     0.03505    0.308     0.758
f_catactiv                -0.11221     0.02855   -3.930  8.49e-05 ***
i_polpercent              -0.35479     0.02187  -16.220   < 2e-16 ***
i_dummyleft                0.30610     0.05326    5.747  9.08e-09 ***
i_dummyright               0.07182     0.06231    1.153     0.249
f_polarize:f_intensity     0.13476     0.01799    7.492  6.79e-14 ***


Bootstrap analysis is particularly useful for revealing where apparent effects in a large dataset might be driven by strong patterns in subsets of the data. I compared the coefficients resulting from bootstrap analysis to the original regressions using the betahat and confidence interval functions of the car package in R (Fox et al. 2012). Generally, the estimates of bias were low, and all betahats for significant predictors in the regression models fell within 2% of the coefficients of the original regressions.
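The procedure is a case-resampling bootstrap of the regression coefficients: resample rows with replacement, refit, and summarize the replicates. The sketch below is a Python illustration with an OLS fit rather than the R car package and logistic models actually used; the function names are my own.

```python
import numpy as np

def ols_coefs(X, y):
    """OLS coefficients (intercept first) via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def bootstrap_coefs(X, y, n_boot=999, seed=1):
    """Case-resampling bootstrap reporting the original estimates, bootstrap
    bias, standard error, and 95% percentile confidence intervals."""
    rng = np.random.default_rng(seed)
    original = ols_coefs(X, y)
    n = len(y)
    reps = np.empty((n_boot, len(original)))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # sample rows with replacement
        reps[b] = ols_coefs(X[idx], y[idx])
    return {
        "original": original,
        "bias": reps.mean(axis=0) - original,
        "se": reps.std(axis=0, ddof=1),
        "ci": np.percentile(reps, [2.5, 97.5], axis=0),
    }
```

Low bias and a percentile interval excluding zero, as in the output reported below, indicate that a coefficient is not an artifact of a few influential rows.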

In the following pages are detailed results for the Core and Full models for each chart in the final version.

Figure. Example results of bootstrap validation for logistic regression model testing impacts of exposure to polarized spaces and emotion.

Number of bootstrap replications R = 999

               original     bootBias    bootSE    bootMed
(Intercept)   -0.682460  -0.00083493  0.031945  -0.683205
f_polarize     0.179388  -0.00111961  0.034031   0.177994
f_valence     -0.031518  -0.00045343  0.021187  -0.031352
f_intensity    0.104998  -0.00039602  0.022313   0.103188
f_downframe    0.141655   0.00183288  0.033782   0.142929
f_catactive   -0.270621  -0.00013573  0.035681  -0.271881
i_advfocus    -0.357304  -0.00025703  0.026781  -0.357744
i_dummyleft    0.288024   0.00138190  0.050297   0.288720
i_dummyright  -0.062950  -0.00058249  0.057695  -0.062973

Bootstrap percent confidence intervals
                    2.5 %        97.5 %
(Intercept)   -0.74657412   -0.62219838
f_polarize     0.11532642    0.24587226
f_valence     -0.07293919    0.01238756
f_intensity    0.06218713    0.14958239
f_downframe    0.08313995    0.21394068
f_catactive   -0.33781764   -0.20297243
i_advfocus    -0.41181406   -0.30527471
i_dummyleft    0.18687571    0.39128175
i_dummyright  -0.17577818    0.04983767


Table. Summary of multiple logistic regression results predicting author retention.

Dependent variable: stayed2

                    1A Core    1A Full    1B Core    1B Full    1C Core    1C Full
f_polarize         -0.095***   0.179***
                   (0.018)    (0.033)
f_valence           0.241***  -0.032
                   (0.016)    (0.022)
f_intensity         0.236***   0.105***
                   (0.019)    (0.022)
f_downframe                    0.142***              0.280***              0.277***
                              (0.033)               (0.029)               (0.029)
f_catactive                   -0.271***             -0.282***             -0.290***
                              (0.032)               (0.033)               (0.033)
i_advfocus                    -0.357***             -0.286***             -0.277***
                              (0.026)               (0.022)               (0.022)
i_dummyleft                    0.288***              0.433***              0.450***
                              (0.049)               (0.047)               (0.047)
i_dummyright                  -0.063                 0.150***              0.172***
                              (0.057)               (0.048)               (0.048)
ratio_delsphere                          -0.073***  -0.022
                                         (0.014)    (0.017)
ratio_counterpub                          0.029**   -0.013
                                         (0.014)    (0.015)
ratio_incubator                          -0.053***  -0.064***
                                         (0.014)    (0.015)
ratio_battlefield                        -0.039***  -0.048***
                                         (0.014)    (0.016)
bin_delsphere                                                  -0.274***  -0.080
                                                               (0.049)    (0.056)
bin_counterpub                                                  0.063     -0.087
                                                               (0.054)    (0.057)
bin_incubator                                                  -0.357***  -0.402***
                                                               (0.076)    (0.079)
bin_battlefield                                                -0.308***  -0.290***
                                                               (0.098)    (0.104)
Constant           -0.605***  -0.682***  -0.599***  -0.792***  -0.557***  -0.769***
                   (0.014)    (0.032)    (0.014)    (0.029)    (0.016)    (0.030)

Observations        22,664     22,664     22,664     22,664     22,664     22,664

Note: *p<0.1; **p<0.05; ***p<0.01


Table. Summary of multiple linear regression results predicting author engagement.

                    2A Core    2A Full    2B Core    2B Full    2C Core    2C Full
f_polarize         -0.058***   0.067***
                   (0.008)    (0.015)
f_valence           0.101***   0.034***
                   (0.007)    (0.010)
f_intensity         0.067***   0.052***
                   (0.009)    (0.010)
f_downframe                    0.009                 0.045***              0.045***
                              (0.015)               (0.013)               (0.013)
f_catactive                   -0.124***             -0.124***             -0.120***
                              (0.014)               (0.014)               (0.014)
i_advfocus                    -0.060***             -0.056***             -0.058***
                              (0.012)               (0.010)               (0.010)
i_dummyleft                    0.022                 0.063***              0.066***
                              (0.023)               (0.022)               (0.022)
i_dummyright                  -0.092***             -0.015                -0.009
                              (0.027)               (0.023)               (0.023)
ratio_delsphere                          -0.019***  -0.036***
                                         (0.007)    (0.008)
ratio_counterpub                          0.027***   0.015**
                                         (0.007)    (0.007)
ratio_incubator                          -0.016**   -0.015**
                                         (0.007)    (0.007)
ratio_battlefield                         0.0004    -0.017**
                                         (0.007)    (0.007)
bin_delsphere                                                  -0.055**   -0.094***
                                                               (0.023)    (0.025)
bin_counterpub                                                  0.109***   0.074***
                                                               (0.026)    (0.027)
bin_incubator                                                  -0.052     -0.043
                                                               (0.034)    (0.035)
bin_battlefield                                                 0.009     -0.064
                                                               (0.045)    (0.046)
Constant            0.000      0.024      0.000     -0.013     -0.001     -0.009
                   (0.007)    (0.015)    (0.007)    (0.013)    (0.008)    (0.014)

Observations        22,664     22,664     22,664     22,664     22,664     22,664
R2                  0.012      0.020      0.001      0.019      0.001      0.018
Adjusted R2         0.012      0.020      0.001      0.018      0.001      0.018
Residual Std. Err.  0.994      0.990      0.999      0.991      0.999      0.991
  (df)             (22,660)   (22,655)   (22,659)   (22,654)   (22,659)   (22,654)
F Statistic        91.828***  57.631***   7.996***  48.240***   6.992***  46.948***
  (df)             (3; 22660) (8; 22655) (4; 22659) (9; 22654) (4; 22659) (9; 22654)

Note: *p<0.1; **p<0.05; ***p<0.01


REFERENCES FOR METHODOLOGICAL APPENDIX

Amundson, Kalynn, and Anna Zajicek. 2018. “A Case Study of State-Level Policymakers’ Discursive

Co-Constructions of Welfare Drug Testing Policy and Gender, Race, and Class.” Sociological Inquiry 88 (3): 383–409. https://doi.org/10.1111/soin.12208.

Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. 2015. “Exposure to Ideologically Diverse News and Opinion on Facebook.” Science 348 (6239): 1130–32. https://doi.org/10.1126/science.aaa1160.

Biffignandi, Silvia, Annamaria Bianchi, and Camilla Salvatore. 2018. “Can Big Data Provide Good Quality Statistics? A Case Study on Sentiment Analysis on Twitter Data.” In International Total Survey Error Workshop (ITSEW-2018), DISM (Duke Initiative on Survey Methodology).

Bonacich, Phillip. 2007. “Some Unique Properties of Eigenvector Centrality.” Social Networks 29 (4): 555–64. https://doi.org/10.1016/j.socnet.2007.04.002.

Bond, Robert, and Solomon Messing. 2015. “Quantifying Social Media’s Political Space: Estimating Ideology from Publicly Revealed Preferences on Facebook.” American Political Science Review 109 (1): 62–78. https://doi.org/10.1017/S0003055414000525.

Boutyline, Andrei, and Robb Willer. 2017. “The Social Structure of Political Echo Chambers: Variation in Ideological Homophily in Online Networks.” Political Psychology 38 (3): 551–69. https://doi.org/10.1111/pops.12337.

Bringmann, L. F., T. Elmer, S. Epskamp, R. W. Krause, D. Schoch, M. Wichers, J. T. W. Wigman, and E. Snippe. 2019. “What Do Centrality Measures Measure in Psychological Networks?” Journal of Abnormal Psychology 128 (8): 892–903.

Budak, Ceren, Sharad Goel, and Justin M. Rao. 2016. “Fair and Balanced? Quantifying Media Bias through Crowdsourced Content Analysis.” Public Opinion Quarterly 80 (S1): 250–71. https://doi.org/10.1093/poq/nfw007.

Capraro, Robert M., and Mary Margaret Capraro. 2001. “Commonality Analysis: Understanding Variance Contributions to Overall Canonical Correlation Effects of Attitude toward Mathematics on Geometry Achievement.” Multiple Linear Regression Viewpoints.

Dablander, Fabian, and Max Hinne. 2019. “Node Centrality Measures Are a Poor Substitute for Causal Inference.” Scientific Reports 9 (1): 1–13. https://doi.org/10.1038/s41598-019-43033-9.

Davidson, Thomas, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. “Automated Hate Speech Detection and the Problem of Offensive Language.” In Eleventh International AAAI Conference on Web and Social Media. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15665.

Davison, A. C., and D. V. Hinkley. 1997. Bootstrap Methods and Their Application. Cambridge University Press.

Fox, John, Sanford Weisberg, Daniel Adler, Douglas Bates, Gabriel Baud-Bovy, Steve Ellison, David Firth, Michael Friendly, Gregor Gorjanc, and Spencer Graves. 2012. “Package ‘Car.’” Vienna: R Foundation for Statistical Computing.


Gaffney, Devin, and J. Nathan Matias. 2018. “Caveat Emptor, Computational Social Science: Large-Scale Missing Data in a Widely-Published Reddit Corpus.” PLOS ONE 13 (7): e0200162. https://doi.org/10.1371/journal.pone.0200162.

Giannella, Eric, and Claude S. Fischer. 2016. “An Inductive Typology of Egocentric Networks.” Social Networks 47 (October): 15–23. https://doi.org/10.1016/j.socnet.2016.02.003.

González-Bailón, Sandra, and Georgios Paltoglou. 2015. “Signals of Public Opinion in Online Communication: A Comparison of Methods and Data Sources.” The ANNALS of the American Academy of Political and Social Science 659 (1): 95–107. https://doi.org/10.1177/0002716215569192.

Hagen, S. 2017. “Mapping the Alt-Right: The US Alternative Right across the Atlantic.” Alt-Right Open Intelligence Initiative.

Heikkilä, Niko. 2017. “Online Antagonism of the Alt-Right in the 2016 Election.” European Journal of American Studies 12 (2). http://ejas.revues.org/12140.

Hine, G., J. Onaolapo, E. De Cristofaro, N. Kourtellis, I. Leontiadis, R. Samaras, G. Stringhini, and J. Blackburn. 2017. “Kek, Cucks, and God Emperor Trump: A Measurement Study of 4chan’s Politically Incorrect Forum and Its Effects on the Web.” Proceedings paper. International Conference on Web and Social Media (ICWSM). May 16, 2017. http://www.icwsm.org/2017/program/accepted-papers/.

Kiritchenko, Svetlana, Xiaodan Zhu, and Saif M. Mohammad. 2014. “Sentiment Analysis of Short Informal Texts.” Journal of Artificial Intelligence Research 50: 723–762.

Kraha, Amanda, Heather Turner, Kim Nimon, Linda Zientek, and Robin Henson. 2012. “Tools to Support Interpreting Multiple Regression in the Face of Multicollinearity.” Frontiers in Psychology 3. https://doi.org/10.3389/fpsyg.2012.00044.

Kumar, Srijan, Jure Leskovec, William L. Hamilton, and Dan Jurafsky. 2018. “Community Interaction and Conflict on the Web.” In WWW 2018: The 2018 Web Conference.

Lagorio-Chafkin, Christine. 2018. We Are the Nerds: The Birth and Tumultuous Life of Reddit, the Internet’s Culture Laboratory. New York: Hachette Books.

Leydesdorff, Loet. 2011. “‘Meaning’ as a Sociological Concept: A Review of the Modeling, Mapping and Simulation of the Communication of Knowledge and Meaning.” Social Science Information 50 (3–4): 391–413. https://doi.org/10.1177/0539018411411021.

Lin, Mingfeng, Henry C. Lucas, and Galit Shmueli. 2013. “Research Commentary—Too Big to Fail: Large Samples and the p-Value Problem.” Information Systems Research 24 (4): 906–17. https://doi.org/10.1287/isre.2013.0480.

Mohr, John W., Robin Wagner-Pacifici, Ronald L. Breiger, and Petko Bogdanov. 2013. “Graphing the Grammar of Motives in National Security Strategies: Cultural Interpretation, Automated Text Analysis and the Drama of Global Politics.” Poetics, Topic Models and the Cultural Sciences, 41 (6): 670–700. https://doi.org/10.1016/j.poetic.2013.08.003.

Monroe, Burt L., Michael P. Colaresi, and Kevin M. Quinn. 2008. “Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict.” Political Analysis 16 (4): 372–403. https://doi.org/10.1093/pan/mpn018.

Nagle, Angela. 2017. Kill All Normies: Online Culture Wars From 4Chan And Tumblr To Trump And The Alt-Right. John Hunt Publishing.


Nelson, Laura K. 2017. “Computational Grounded Theory: A Methodological Framework.” Sociological Methods & Research, November, 0049124117729703. https://doi.org/10.1177/0049124117729703.

Nielsen, Finn Årup. 2011. “A New ANEW: Evaluation of a Word List for Sentiment Analysis in Microblogs.” Proceedings of the Extended Semantic Web Conference, March. http://arxiv.org/abs/1103.2903.

Pennebaker, James W., Ryan L. Boyd, Kayla Jordan, and Kate Blackburn. 2015. “The Development and Psychometric Properties of LIWC2015,” September. https://repositories.lib.utexas.edu/handle/2152/31333.

Ray‐Mukherjee, Jayanti, Kim Nimon, Shomen Mukherjee, Douglas W. Morris, Rob Slotow, and Michelle Hamer. 2014. “Using Commonality Analysis in Multiple Regressions: A Tool to Decompose Regression Effects in the Face of Multicollinearity.” Methods in Ecology and Evolution 5 (4): 320–28. https://doi.org/10.1111/2041-210X.12166.

Ribeiro, Filipe N., Matheus Araújo, Pollyanna Gonçalves, Marcos André Gonçalves, and Fabrício Benevenuto. 2016. “SentiBench - a Benchmark Comparison of State-of-the-Practice Sentiment Analysis Methods.” EPJ Data Science 5 (1): 23. https://doi.org/10.1140/epjds/s13688-016-0085-1.

Rubinov, Mikail, and Olaf Sporns. 2011. “Weight-Conserving Characterization of Complex Functional Brain Networks.” NeuroImage 56 (4): 2068–79. https://doi.org/10.1016/j.neuroimage.2011.03.069.

Ruhnau, Britta. 2000. “Eigenvector-Centrality — a Node-Centrality?” Social Networks 22 (4): 357–65. https://doi.org/10.1016/S0378-8733(00)00031-9.

Schwartz, H. Andrew, and Lyle H. Ungar. 2015. “Data-Driven Content Analysis of Social Media: A Systematic Overview of Automated Methods.” The ANNALS of the American Academy of Political and Social Science 659 (1): 78–94. https://doi.org/10.1177/0002716215569197.

Vaisey, Stephen. 2007. “Structure, Culture, and Community: The Search for Belonging in 50 Urban Communes.” American Sociological Review 72 (6): 851–73. https://doi.org/10.1177/000312240707200601.

Vigneau, Evelyne, Mingkun Chen, and El Mostafa Qannari. 2015. “ClustVarLV: An R Package for the Clustering of Variables Around Latent Variables.” R Journal 7 (2).

Vigneau, Evelyne, and E. M. Qannari. 2003. “Clustering of Variables Around Latent Components.” Communications in Statistics - Simulation and Computation 32 (4): 1131–50. https://doi.org/10.1081/SAC-120023882.