Are Conservatives More Rigid Than Liberals? A Meta-Analytic Test of the Rigidity-of-the-
Right Hypothesis
Thomas H. Costello, Shauna M. Bowes
Emory University, Department of Psychology
Matt W. Baldwin
University of Florida, Department of Psychology
Scott O. Lilienfeld
Emory University, Department of Psychology
University of Melbourne, School of Psychological Sciences
Arber Tasimi
Emory University, Department of Psychology
This working manuscript is under review
11/06/21
Author Note We would like to thank Crystal Liu for her work identifying and coding studies; Omer Kirmaz for his work aggregating scores across raters; and Patricia Brennan for suggesting that we code political donations by study authors as a proxy for political ideology. Supporting materials for this manuscript, including data and analytic code, will be made openly available in a public repository upon publication of this manuscript. Correspondence should be addressed to Thomas H. Costello, 36 Eagle Row, Department of Psychology, Emory University, Atlanta, GA, 30322. E-mail: [email protected].
Abstract
The rigidity-of-the-right hypothesis (RRH), which posits that cognitive, motivational, and
ideological rigidity resonate with political conservatism, is the dominant psychological account
of political ideology. Here, we conduct an extensive review of the RRH, using multilevel meta-
analysis to examine relations between varieties of rigidity and ideology alongside a bevy of
potential moderators (s = 329, k = 708, N = 187,612). Associations between conservatism and
rigidity were enormously heterogeneous, such that broad theoretical accounts of left-right
asymmetries in rigidity have masked complex—yet conceptually fertile—patterns of relations.
Most notably, correlations between economic conservatism and rigidity constructs were almost
uniformly not significant, whereas social conservatism and rigidity were significantly positively
correlated. Further, leftists and rightists exhibited modestly asymmetrical motivations yet closely
symmetrical thinking styles and cognitive architecture. Dogmatism was a special case, with
rightists being clearly more dogmatic. Complicating this picture, moderator analyses revealed
that the RRH may not generalize to key environmental/psychometric modalities. Thus, our work
represents a crucial launch point for advancing a more accurate—but admittedly more nuanced—
model of political social cognition. We resolve that drilling into this complexity, thereby moving
away from the question of whether conservatives are essentially rigid, will amplify the explanatory
power of political psychology.
Are Conservatives More Rigid Than Liberals? A Meta-Analytic Test of the Rigidity-of-the-
Right Hypothesis
How do the minds of right-wing and left-wing individuals differ? This has been one of
the essential questions of political psychology since the field’s inception in the mid-twentieth
century (e.g., Adorno et al., 1950; Hibbing et al., 2014; Jost et al., 2003; Rokeach, 1960; Wilson,
1973). A dominant—if not the dominant—psychological account of what distinguishes leftists
from rightists is known as the rigidity-of-the-right hypothesis (henceforth, RRH; e.g., Tetlock et
al., 1984). Put plainly, the RRH suggests that individuals who think of the world as
uncontrollable and difficult to understand have a motivational need to simplify reality; thus, they
adopt political ideologies that foster a sense of order and predictability to satisfy this need.
Because conservatism offers a sense of certainty by way of its support for current social norms
and hierarchies, rightists are disproportionately likely to be cognitively, ideologically, and
motivationally rigid.
Several prior meta-analytic reviews have reported reliable relations between political
conservatism and rigidity-related variables (e.g., Jost, 2017; Jost et al., 2003; Van Hiel et al.,
2016), prompting social scientists to continue to champion and refine the RRH in recent decades.
During this time, scholars have extended and conceptually replicated the RRH by identifying
left-right asymmetries in a diverse array of rigidity-adjacent contexts (for a review, see Hibbing
et al., 2014); these include ideological asymmetries in cognitive ability (e.g., rightists have an
impaired cognitive capacity to manage complexity; Kemmelmeier, 2008), word preferences
(e.g., rightists prefer nouns over verbs and adjectives because nouns facilitate clearer and more
definite perceptions of reality; Cichocka et al., 2016), and reactions to fake news (e.g., rightists
are more likely to fall for fake news because they process new information less thoughtfully;
Pennycook & Rand, 2018), among many others.
That said, a burgeoning chorus of scholars have asserted that the relation between
conservatism and rigidity hinges crucially on a host of empirical (e.g., Ditto et al., 2019;
Federico & Malka, 2018; Feldman & Johnston, 2014; Kahan, 2016; Malka & Soto, 2015;
Zmigrod et al., 2019), methodological (e.g., Malka et al., 2017; Zmigrod, 2020), and meta-
scientific (e.g., Duarte et al., 2015; Jussim et al., 2016) factors, such that the RRH’s evidentiary
foundation may be grounded in a noisy and contradictory literature. To provide a sense of these
prior critiques, consider that many people identify as “socially liberal” and “economically
conservative” (or vice versa), suggesting that “liberalism” and “conservatism” may not be
psychologically coherent categories (Feldman, 2013; Kerr, 1952). Similarly, the umbrella
category of “rigidity” may be just as incoherent—the list of constructs that previous meta-
analyses of the RRH have used to operationalize rigidity is broad and includes dogmatism,
intolerance of ambiguity, cognitive and perceptual inflexibility, motivational needs for closure,
simple patterns of speech, and intuitive thinking styles (e.g., Houck & Conway, 2019; Jost et al.,
2003; Jost, 2017; Van Hiel et al., 2016). Overall, these criticisms point to the possibility that
research on the RRH—and within political psychology more generally—may mask a
considerable degree of heterogeneity in understanding both “the right” and “rigidity”.
Rather than simply highlight these problems, here we synthesize and meta-analytically
test them. We begin by providing a brief overview of the RRH and its evidentiary support, then
catalogue prominent challenges to the RRH and advance new ones. Synthesizing these
challenges, our meta-analysis indicates that the RRH understates the complexity of interrelations
among rigidity and political ideology. For research in this area to advance, we argue that political
psychologists may do well to move the discussion away from whether conservatives are rigid (and
otherwise psychologically distinct from liberals) to when politics and rigidity-related processes
intersect and why they do or do not.
The Rigidity-of-the-Right Hypothesis
The notion that there is a relation between rigidity and conservatism has been with us for
many decades (e.g., Adorno et al., 1950; Freud, 1921; Katz, 1960; Kaufman, 1940; McClosky,
1958). During this time, social scientists have conducted hundreds of tests of the RRH,
documenting, for example, that conservative U.S. Senators make less complex policy statements
than liberals (e.g., Tetlock, 1983; cf. Houck & Conway, 2019), that conservatives show
impoverished abstract reasoning abilities (e.g., O’Connor, 1952) and are less tolerant of
ambiguity (e.g., Block, 1951), that conservatives exhibit differences in general neurocognitive
functioning (e.g., Amodio et al., 2007; Nam et al., 2021; cf. Rollwage et al., 2018), and that
conservatives favor distinct working memory processes (i.e., inhibition) that may account for
underlying ideological asymmetries in mental flexibility (e.g., Buechner et al., 2021).
Although theoretical accounts of the RRH differ across scholars (e.g., Adorno et al.,
1950; Altemeyer, 1996; Hetherington & Weiler, 2018; Tetlock et al., 1984; Wilson, 1973), one
popular version of this hypothesis, as foreshadowed above, conceives of conservatism as
motivated social cognition (Jost et al., 2003). Indeed, in one of the most influential publications
in the history of political psychology, a meta-analysis of five decades worth of this literature
found that political conservatism arises as a consequence of basic cognitive (i.e., pertaining to
thinking, reasoning, or remembering) and motivational (i.e., the impetus that gives purpose or
direction to behavior) processes concerning certainty/rigidity and safety/threat-sensitivity1 (Jost
et al., 2003).
This version of the RRH has served as the frontline for much research within political
psychology over the last two decades, stimulating a dramatic surge in studies of the
psychological correlates (and theorized causes) of left- vs. right-wing ideology (e.g., Dean, 2006;
Hibbing et al., 2014; Inbar et al., 2009; Lakoff, 2008; Mooney, 2012; Oxley et al., 2008; Westen,
2007). This renaissance of theory and research, in turn, prompted additional meta-analyses of the
RRH, which generally continued to provide strong corroboration for the model (Houck &
Conway, 2019; Jost, 2017; Jost et al., 2003; Van Hiel et al., 2016). For instance, in a recent
review (Jost, 2017), significant meta-analytic estimates were revealed for conservatism and a
broad range of rigidity-related constructs. Critically, many of these relations seem to be robust
across Western, democratic cultural contexts (e.g., Chirumbolo et al., 2004; Kemmelmeier, 1997;
Malka et al., 2014, 2019) and measures of political ideology (e.g., Federico et al., 2012). Overall,
this veritable ocean of evidence seems to lead to one clear conclusion: that rightists are more
rigid than leftists.
But from our point of view, existing evidence is less supportive of the RRH than may
seem at first blush. As previously noted, extensive empirical, methodological, and meta-
scientific challenges to the validity and generalizability of the RRH have arisen in recent years,
raising the possibility that ideological symmetries and asymmetries exist across the political left
and right, depending on key moderators. In this paper, we synthesize, examine, and test these
wide-ranging concerns and controversies, which organize our quantitative review. Before we do,
we highlight what we consider to be among the most critical issues that permeate the literature
on the RRH.
1 Vis-à-vis safety and threat-sensitivity, which we do not focus on in the present work, it is theorized that conservatism satisfies existential needs to preserve safety and security and to reduce danger and threat (Jost, 2017).
What is “The Right”?
Politicians, political commentators, social scientists, and other politically engaged
members of the public have long conceived of political ideology as a unidimensional, left/liberal
vs. right/conservative political continuum, with the left pole reflecting preferences for change,
individualism, and egalitarianism, and the right pole reflecting preferences for stability,
authority, and hierarchy (Caprara & Vecchione, 2018; Johnston & Ollerenshaw, 2020). Political
discourse follows this left-right ideological divide in many Western nations (e.g., Benoit &
Laver, 2006; Kitschelt et al., 2010; Knight, 1999; McCarty et al., 2006), seemingly speaking to
the degree to which, as Emerson (1841) speculated over a century ago, the division between left-
and right-wing reflects an “irreconcilable antagonism [that] must have a correspondent depth of
seat in the human constitution… the appearance in trifles of the two poles of nature” (p. 293).
As parsimonious and appealing as this account may be, there are several problems with
treating political ideology in a binary or unidimensional manner. Consider, for example, the
predominant approach for measuring political ideology: single-item self-reports that ask
participants to indicate how liberal or conservative they are on a Likert-type scale. One recent
estimate suggests that 80% of studies in political psychology rely on this measure (Claessens et
al., 2020). Yet a plethora of research has found that the political spectrum can be decomposed
into two conceptually and empirically distinct subdimensions: social ideology and economic
ideology (e.g., Claessens et al., 2020; Costello & Lilienfeld, 2020; Duckitt & Sibley, 2009;
Federico & Malka, 2018; Feldman & Johnston, 2014; Johnston et al., 2017; Lameris et al., 2018;
Pan & Xu, 2018; see Johnston & Ollerenshaw, 2020, for a review). Whereas social ideology
spans attitudes concerning progressive vs. traditional values, rules, and norms, economic
ideology spans attitudes concerning redistributive vs. free market economic systems (Claessens
et al., 2020; Malka et al., 2019). Thus, the question of “how liberal vs. conservative are you?”
may mean different things to different people. That is, some people’s responses may be rooted in
their social preferences and others in their economic preferences—which, in turn, introduces a
considerable degree of ambiguity in what constitutes “the right.”
Part of what makes this a problem is that most hypothesized mechanisms underlying the
RRH draw from conceptual connections between the shared epistemic qualities of social and
economic conservatism and rigidity. Hence, rigidity should be roughly equivalently associated
with all forms of conservatism (e.g., Jost et al., 2013, p. 1). But, if “the right” is not any one
thing, the RRH may commit a great error of oversimplification. To that end, whereas some
studies suggest roughly equivalent relationships between social and economic conservatism
and rigidity-related constructs (e.g., Azevedo et al., 2019; Cornelis & Van Hiel, 2006; Everett,
2013; Sterling et al., 2016), others have indicated that (1) social conservatism is consistently
positively associated with rigidity-related variables, yet (2) economic conservatism manifests
null or even negative relations with many of these same variables (e.g., Carl, 2014; Carney et al.,
2008; Costello & Lilienfeld, 2020; Cizmar et al., 2014; Feldman, 2013; Hibbing et al., 2014; Van
Hiel et al., 2004; Yilmaz et al., 2016). What is more, social and economic conservatism tend to
be positively correlated among politically engaged, Western, Educated, Industrialized, Rich, and
Democratic participants (WEIRD; Henrich et al., 2010), yet are more often slightly negatively
correlated outside of Western democracies (Feldman & Johnston, 2014; Malka et al., 2017;
Malka et al., 2019; Marks et al., 2006). This string of findings further bolsters the notion that
social and economic ideology are not psychologically intertwined.
Altogether, the left/liberal vs. right/conservative political spectrum appears to be a
straightforward heuristic for explaining patterns of ideological clustering—especially in cultural
contexts such as the U.S. where psychological scientists have typically studied the RRH (as we
discuss later)—but it may not be a fact of human nature. To our knowledge, no prior meta-
analytic reviews of the RRH have examined the differential correlates of social versus economic
political ideology.
What is “Rigidity”?
Much like the commonplace practice of collapsing social and economic ideology into a
single category (or not measuring them separately at all), prior tests of the RRH have tended
to subsume a host of loosely interrelated variables under the broad heading of rigidity. For
example, one recent review (Cherry et al., 2021) of the cognitive rigidity literature identified 25
competing conceptualizations assessed across 23 measures. To that end, little scholarly
consensus exists concerning the precise boundaries of rigidity (Furnham & Marks, 2013;
Sternberg & Grigorenko, 1997; Zmigrod et al., 2019), such that there are few systematic
accounts of conceptual distinctions across variables typically thought to reflect rigidity, let alone
empirical evidence to guide the construction of valid and reliable rigidity dimensions. If these
constructs are only loosely coupled, which appears plausible given their definitional
heterogeneity, they are unlikely to share specific psychological mechanisms linking them to
political conservatism.
For this reason, how best to meta-analytically compare (or disaggregate) rigidity
constructs remains a matter of open debate (see, e.g., Cherry et al., 2021, for a review; Kipnis,
1997). Several taxonomies of distinctions within rigidity constructs have, however, emerged in
recent years (e.g., executive functioning, intolerance of ambiguity, inflexible thinking styles;
cognitive complexity; Newton et al., 2021; Lauriola et al., 2016; Stoycheva et al., 2020; Woznyj
et al., 2020), providing some basis for distinguishing between rigidity variables in a theoretically
informed manner2. Based on these provisional taxonomies of rigidity dimensions, we have
identified four domains of rigidity (see Figure 1) that are reasonably differentiable in their
relations with both one another and relevant external criteria: (1) rigid thinking styles, (2)
motivational rigidity, (3) cognitive inflexibility, and (4) ideological rigidity (i.e., dogmatism).
As we discuss below, these domains have little definitional overlap, are not strongly
correlated, and tend to be studied in disparate subfields. We suspect that this schema offers a
data-driven and useful means of resolving “the lumper-splitter problem” (i.e., balancing
precision and parsimony when placing individual cases into categories; Simpson, 1945) in the
absence of formal investigations of the taxonomy of rigidity. Accordingly, as with “the
right,” we anticipate inconsistencies across elements of “rigidity” in the RRH literature. In other
words, some—but not all—forms of rigidity may be related to conservatism.
Rigid Thinking Styles
Many or most theoretical accounts of human decision-making draw upon a broad
distinction between two types of cognitive processes: intuitive (i.e., rapid, unconscious, and
automatic) and reflective (i.e., slow, conscious, and deliberative; Kahneman, 2011). Many
dozens of research investigations have found that individuals vary in this cognitive reflectivity,
and that these thinking styles have strong and broad patterns of relations with myriad behaviors
and attitudes (e.g., Toplak et al., 2011; see Pennycook et al., 2015). Drawing from the RRH
literature, several authors have suggested that conservatives may be more intuitive (i.e., less
analytic) thinkers than liberals (Talhelm et al., 2015; cf. Kahan, 2012). Nevertheless, common
operationalizations of cognitive reflectivity, such as the Cognitive Reflection Test and the Need
for Cognition Scale (Cacioppo & Petty, 1982), are negligibly related to measures of other
constructs that have been used in tests of the RRH (e.g., need for closure, intolerance of
ambiguity, and dogmatism; Newton et al., 2021). Given these findings—and that cognitive
reflectivity accounts for a greater degree of unique variance in many different (irrational)
heuristics and cognitive biases than intelligence, executive functioning, and actively
open-minded thinking (e.g., Toplak et al., 2011)—we treat rigid thinking styles as a distinct
“rigidity” domain.
2 Drawing on these provisional taxonomies of rigidity dimensions in future research may be a useful factor in developing actionable and detailed mechanistic accounts of the structure and dynamics of mechanisms underlying the RRH (e.g., Rollwage et al., 2019).
Motivational Rigidity
As with rigid thinking styles and cognitive inflexibility, motivational rigidity is not
highly correlated with other rigidity domains, suggesting that it may bear unique or divergent
associations with political ideology (Lauriola et al., 2016). Many such motives are subsumed by
need for cognitive closure, a widely known construct that broadly reflects “the individual’s
desire for a firm answer to a question and an aversion toward ambiguity” (Kruglanski &
Webster, 1996, p. 264; Kruglanski et al., 2006). Specifically, need
for cognitive closure includes five conceptually distinctive, if not empirically distinct,
subdimensions (see Roets et al., 2006; cf. Neuberg et al., 1997): preference for order, preference
for predictability, discomfort with ambiguity, closed-mindedness, and decisiveness. Many tests
of the RRH have revealed a relation between need for cognitive closure (and related motivational
needs) and conservatism (see Federico & Goren, 2009). Other constructs potentially indicative of
need for certainty, such as risk aversion, also exhibit modest relations with elements of
conservatism (Kam, 2012; Kam & Simas, 2012; Kemmelmeier, 2008). To evaluate this
possibility, we collapse motivational rigidity variables, such as need for cognitive closure and
motivational elements of intolerance of ambiguity, in our primary analyses.
Cognitive Inflexibility
Cognitive inflexibility can be understood as part of a broader suite of psychological
processes involved in executive functioning (i.e., high-level cognitive control functions that are
involved in complex mental processes, such as planning, focusing attention, working memory,
and multi-tasking; Diamond, 2013; Miyake & Friedman, 2012). Specifically, cognitive
inflexibility is thought to reflect an inability to change perspectives, shift approaches efficiently,
and take advantage of unexpected opportunities (Cools & Robbins, 2004; Diamond, 2013).
Drawing from the RRH literature, neuropsychological and behavioral measures of cognitive
inflexibility have been leveraged to suggest that leftists and rightists may differ in their basic
cognitive architecture and downstream consequences thereof (e.g., Buechner et al., 2021;
Sidanius, 1978; Zmigrod, 2020). To our knowledge, though, no systematic data are publicly
available concerning the convergence between these cognitive inflexibility measures and
measures of other rigidity constructs (e.g., motivations, intuitive thinking, dogmatism).
Moreover, cognitive inflexibility, on the one hand, and other rigidity constructs, on the other,
manifest differential relations with external criteria (e.g., differences as large as r = .40 for
various personality traits; Lauriola et al., 2016; Stoycheva et al., 2020). Similarly-sized
differences in these rigidity variables’ relations with conservatism, or even far smaller
differences, would carry notable implications for the generalizability and explanatory power of
the RRH. Given these uncertainties, we distinguish cognitive inflexibility from other rigidity
domains.
Ideological Rigidity (i.e., Dogmatism)
The storied construct of dogmatism has variously been defined as generalized
authoritarianism (Rokeach, 1960) and, later, as “relatively unchangeable, unjustified certainty”
(Altemeyer, 1996, p. 201). Factor analytic investigations have indicated dogmatism is relatively
unidimensional and manifests positive correlations with theoretically relevant variables,
including belief in certain knowledge, resistance to belief change, closed-mindedness, need for
cognition, need for structure, and need to evaluate (Altemeyer, 2002; Crowson, 2009; Crowson
et al., 2008). Still, dogmatism is conceptually and empirically distinct from these and other
rigidity constructs (see Duckitt, 2009)—in the studies referenced above, no correlation between
dogmatism scores and the individual differences variables was > .31, providing strong evidence
for its empirical distinctiveness (see Ronkko & Cho, 2020). Moreover, the primary measure of
dogmatism in the literature, the DOG Scale, may be confounded with religiosity and social
conservatism (Conway et al., 2016; Duckitt, 2009; see Stanovich & Toplak, 2019)3. Given these
considerations, as well as the fact that dogmatism is quite theoretically distinct from all other
“rigidity” variables (Johnson, 2009), we treat dogmatism as a standalone rigidity domain in the
present review.
3 Stanovich and Toplak (2019) found that religious individuals respond differently than non-religious individuals to Actively Open-minded Thinking (AOT) Scale items that include the word “belief.” Individuals with strongly held religious views generally take “beliefs” to mean “religious beliefs,” whereas non-religious individuals generally take “beliefs” to mean “opinions.” After the offending items were removed, Stanovich and Toplak (2019) found that AOT-religiosity correlations were reduced from roughly r = -.60 to roughly r = -.20. Perhaps notably, several other oft-used measures in social and personality psychology frequently use the word “belief” in a similar manner, including Altemeyer’s (1996) DOG Scale, the most popular psychological measure of dogmatism, which does so in 6 of its 22 items (and is correlated with religious fundamentalism such that r = .60; Altemeyer, 1996). Duckitt (2009) has similarly suggested that the DOG Scale potentially assesses “religious dogmatism specifically and not dogmatism in other spheres of belief”.
Excluded rigidity variables. Having defined the rigidity variables we will focus on, we
now briefly discuss some constructs that are sometimes regarded as part of an extended family of
rigidity-related indicators—insofar as they are used in efforts to evaluate the RRH—that relate to
“uncertainty intolerance and threat sensitivity” (e.g., Jost et al., 2003), “needs for security and
certainty” (Malka et al., 2014), or an “open vs. closed” personality superfactor (Johnston et al.,
2017). These constructs, which are not directly rigidity-related, include fear of death,
perceptions of various threats, reversed openness to
experience (or facets thereof), conscientiousness (or facets thereof), and the conservation vs.
openness value axis (e.g., Jost et al., 2007; Johnston & Wronski, 2015, 2018; Federico &
Malka, 2018). Although these constructs may bear some theoretical and empirical relations with
our outlined rigidity domains, they are only indirectly relevant to them. To minimize the risk of
introducing construct-irrelevant variance, we therefore exclude findings involving extended-
family rigidity indicators from the present meta-analysis.
Circular Measurement: Some Measures of Conservatism Directly Measure Rigidity
So far, we have predominantly focused on conceptual and taxonomic reasons that the
RRH’s evidentiary basis may be less clear-cut than previously thought. Yet, from our point of
view, among the most critical obstacles to useful meta-analytic tests of the RRH are
methodological in nature. To elaborate, a large proportion of early studies used measures of
“conservatism” that rest on the theoretical assumption that conservatism is heavily imbued with
rigidity. These measures—which include the Fascism Scale (e.g., Adorno et al., 1950), the Right-
wing Authoritarianism Scale (e.g., Altemeyer, 1996), and the Conservatism Scale (e.g., Wilson
& Patterson, 1968)—were designed to assess rigidity and conservatism simultaneously (e.g.,
Wilson, 1973). For instance, the Conservatism Scale asks participants to indicate their support
for “general attitudes concerning uncertainty avoidance” (Jost et al., 2003, p. 340), artistic
movements that often involve ambiguity (e.g., jazz music, modernism), and specific social-
political issues that carry authoritarian or prejudicial connotations (e.g., censorship, white
superiority, church authority, women judges).
This imprecise and criterion-contaminated historical measurement practice poses an
obstacle to meta-analytic tests of the RRH because, until recently, studies relying on criterion-
contaminated measures of conservatism made up a nontrivial proportion of the RRH
literature. In the first meta-analytic test of the RRH (see Jost et al., 2003), for example, the
Wilson-Patterson Conservatism Scale was used to measure right-wing ideology in 38% of the
rigidity-related studies (with an additional 22% of studies using either the Fascism Scale or
Right-wing Authoritarianism Scale). Thus, a full 60% of studies relied on conservatism measures
imbued with rigidity. Later meta-analyses of the RRH include a similar proportion of studies
with biased measures, even explicitly operationalizing right-wing attitudes as authoritarianism,
ethnocentrism, and dogmatism (Van Hiel et al., 2010; 2016).
Just as publication bias (e.g., “file drawer” effects) and questionable research practices
have been shown to systematically distort meta-analytic findings (Thornton & Lee, 2000;
Rosenthal, 1979), the presence of rigidity-related content in political ideology measures may
similarly yield exaggerated meta-analytic results (see Malka et al., 2017, pp. 119-121)4.
4 Content overlap such as this has also taken the form of inclusion of political content in measures of rigidity-related constructs (Malka et al., 2017, pp. 121-122). For example, manipulations and measures relevant to perception of terrorism-related threats are often found to predict conservatism and are consequently taken as support for the RRH (Jost et al., 2007, Study 3; Thorisdottir & Jost, 2011, Study 2). Further, many studies rely on Rokeach’s Dogmatism (D) scale as a rigidity indicator, despite the presence of right-wing political content in this scale (see Conway et al., 2016). Similarly, the longstanding finding that political conservatism is associated with prejudice (see Hodson & Dhont, 2015, for a review) appears to dissipate when groups that are perceived as ideologically dissimilar to political liberals, such as Christian fundamentalists and wealthy individuals, are included as targets in measures of prejudice (Brandt & Crawford, 2019; Crawford, 2017).
Moreover, the inflation of rigidity-conservatism correlations may obscure many key boundary
conditions of the RRH. For example, if the global correlation between economic conservatism
and cognitive inflexibility is ρ = -.10, but meta-analytic estimates are biased by +.20 due to
criterion contamination, researchers may draw erroneous conclusions about the RRH’s
explanatory power for economic ideology. Hence, in the present review, we examine the degree
to which biased measures inflate effect sizes and estimate rigidity-conservatism relations in a
way that is not distorted by content overlap.
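The logic of this check can be sketched as a subgroup meta-analysis: pool correlations separately for studies using rigidity-contaminated versus content-clean ideology measures, then inspect the gap between the pooled estimates. The sketch below uses simple fixed-effect pooling of Fisher-z-transformed correlations; all study values are invented for demonstration and are not drawn from our database.

```python
import math

def pooled_r(rs, ns):
    """Inverse-variance pooled correlation via Fisher's z
    (fixed-effect pooling, for illustration only)."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]  # r-to-z
    ws = [n - 3 for n in ns]                              # weight = 1 / var(z)
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)                               # back-transform z to r

# Hypothetical studies: (r, n), split by whether the ideology measure
# embeds rigidity content (e.g., the Wilson-Patterson scale).
contaminated = [(0.45, 120), (0.38, 200), (0.50, 90)]
clean        = [(0.12, 150), (0.05, 300), (0.18, 110)]

r_cont  = pooled_r(*zip(*contaminated))
r_clean = pooled_r(*zip(*clean))
print(f"contaminated: {r_cont:.2f}, clean: {r_clean:.2f}")
# → contaminated: 0.43, clean: 0.09
```

A large gap between the two pooled estimates, as in this fabricated example, would indicate that criterion contamination inflates the apparent rigidity-conservatism relation.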
Meta-scientific Concerns: Heterogeneity, the “Crud Factor,” and Political Bias
It is perhaps unsurprising that previous meta-analytic reviews of the RRH have revealed
ample substantive heterogeneity in the associations between conservatism and rigidity. Point
estimates for conservatism-rigidity correlations reported in peer-reviewed articles range from r =
-.58 (Durrheim, 1998) to r = .82 (Pettigrew, 1958). This degree of heterogeneity is consistent
with the vast range of constructs, measures, and environments that scholars have used to test the
RRH. Attempting to meta-analytically estimate an “overall” effect size for the relation
between conservatism and rigidity glosses over the more difficult—and, arguably, more
interesting—question of when and why these effects (1) occur, (2) are large vs. small, and/or (3)
are positive vs. negative. In other words, previous reviews have largely neglected to empirically
parse the heterogeneity of the RRH literature, which is perhaps the chief insight provided by
parse the heterogeneity of the RRH literature, which is perhaps the chief insight provided by
meta-analysis (Higgins & Thompson, 2002).
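The heterogeneity statistics at issue (Cochran's Q and Higgins & Thompson's I²) are straightforward to compute from study-level correlations. A minimal sketch, assuming each study contributes a correlation and a sample size (the sample sizes below are hypothetical; the two extreme correlations are the point estimates cited above):

```python
import math

def fisher_z(r):
    """Fisher r-to-z transformation; var(z) is approximately 1 / (n - 3)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def heterogeneity(rs, ns):
    """Cochran's Q and the I^2 statistic (Higgins & Thompson, 2002)
    for correlations rs from studies with sample sizes ns."""
    zs = [fisher_z(r) for r in rs]
    ws = [n - 3 for n in ns]                      # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_bar) ** 2 for w, z in zip(ws, zs))
    df = len(rs) - 1
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# The two extreme estimates cited above plus a mid-range value:
q, i2 = heterogeneity([-0.58, 0.82, 0.10], [100, 100, 100])
print(f"Q = {q:.1f}, I^2 = {i2:.1f}%")            # I^2 near 99%: extreme heterogeneity
```

An I² this high signals that nearly all observed variability reflects true between-study differences rather than sampling error, which is precisely why moderator analysis, not a single pooled estimate, is the informative step.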
Further, the RRH advances forceful claims about fundamental cognitive differences
between liberals and conservatives, yet proponents of the model have generally taken left-right
differences of any magnitude as evidence in support of ideological asymmetries (Jost, 2017),
potentially rendering the RRH difficult to falsify (see also Lakens et al., 2018; Meehl, 1992).
Because any given psychological variable tends to be at least slightly correlated with any other
(Lykken, 1991; cf. Orben & Lakens, 2019), with a large enough k and adequate study quality,
appropriately powered meta-analyses will virtually always return a non-null effect, regardless of
the truth of the substantive theory in question (Meehl, 1978). Certain small effects can have
profound real-world implications (Funder & Ozer, 2019), yet one obstacle to interpreting the
RRH literature is that there is no established or consensual size of “difference” between the left
and right that scholars consider meaningful. Statistical significance notwithstanding, how much
more rigid must the right be for us to conclude that the right is, in fact, rigid compared with the
left (see Lakens et al., 2018)?
Given that the RRH implies causality, or at least that conservatism and rigidity share
antecedents, we posit that the criterion for accepting a meaningful effect should be relatively
strict: Ideological asymmetries should be reasonably robust, not wobbling near zero. If we use
Cohen’s effect size benchmarks, intended to describe the meaningfulness of a result (e.g., a small
effect size can only be detected via careful study whereas a medium effect size is visible to “the
naked eye of a careful observer”), r = .10, r = .30, and r = .50 represent small, medium, and large
effect sizes, respectively. Hence, we would begin to gain confidence in the RRH as correlations
approached a range of r = .25 to r = .35, and our confidence would increase dramatically as
effect sizes approached r = .40. By the same token, effect sizes from r = .05 to r = .15 would
support the possibility that the RRH is of dubious practical value, and we would treat effect sizes
below r = .05 as falsifying the model.
An additional source of bias may be attributable to the fact that we live in an extremely
polarized and politicized world, and psychologists, being humans, are not immune from the
biases that tend to accompany partisanship. Namely, several authors (e.g., Duarte et al., 2015;
Honeycutt & Jussim, 2020) have suggested that the RRH has benefited from the
disproportionately left-leaning political preferences of social psychologists (Haidt, 2011;
Langbert et al., 2016; von Hippel & Buss, 2017), which may have biased the literature in
undetermined ways. All meta-analyses are liable to poor statistical accuracy due to biases
introduced during the dissemination of results (e.g., publication bias) and/or those borne of
correlated error variance across multiple studies. Allegiance biases can play an analogous
role in meta-analyses of controversial topics. For instance, Gaffan and colleagues (1995)
famously found that, in a meta-analytic review of the efficacy of various psychotherapeutic
approaches, researchers’ allegiance to a given therapeutic approach accounted for up to half of
the difference between said approach and other treatments. The same may be true of political
allegiance (Duarte et al., 2015). Still, evidence for this possibility is mixed (e.g., a recent
adversarial collaboration found that political allegiance is not related to replicability; Reinero et
al., 2019) and warrants further examination.
The Present Review
Taken together, previous reviews’ failure to sufficiently account for the heterogeneity of
conservatism and rigidity, among other considerations, renders their estimated effect sizes
difficult to interpret. In the present review, we meta-analytically examine the full body of
currently available literature (including peer-reviewed journal articles, doctoral dissertations,
Master’s theses, books, and unpublished data), with the dual aims of probing the RRH’s basic
assumptions and parsing the RRH literature’s considerable heterogeneity. We leverage divergent
conceptualizations and measures of political ideology and rigidity to facilitate these tests,
allowing us to clarify the coherence and utility of approaching political ideology and rigidity as
unidimensional constructs in the context of the RRH. Further, we examine methodological and
meta-scientific obstacles to substantive tests of the RRH, such as publication and political bias,
sampling bias, and criterion contamination in ideology and rigidity measures. We also examine
the statistical impact of other potential moderating variables; these include rigidity measure type
(i.e., self-report vs. performance-based), political ideology measure type, peer-review status, type
of sample, WEIRDness, and nationality.
Relative to previous reviews, the current meta-analysis is considerably larger and broader
in the number of independent samples, effect sizes, and unique participants. What is more, our
meta-analysis is the first review of the RRH to statistically model dependencies among effect
sizes extracted from the same samples. To do so, we employ a three-level (i.e., “multi-level”)
meta-analytic approach to facilitate the inclusion of all relevant observations, including those
drawn from the same participants, without artificially inflating confidence in our estimates (Van
den Noortgate et al., 2015).
Method
Literature Search
Studies were obtained using several search strategies (updated a final time in January of
2021). First, we conducted targeted searches of online databases (i.e., ProQuest Dissertations &
Theses, PsycINFO, Google Scholar, and the Emory University Libraries search tool, discoverE,
which comprises 18 relevant databases). The search terms were developed by the first author and
were based on our review of the literature; they were entered as variations of the following
Boolean phrase: “(political AND (orientation OR ideology OR conservatism OR attitudes))
AND (cognitive reflection OR dogmatism OR need for cog* OR need for closure OR rigidity
OR flexibility OR inflexibility OR executive function* OR motiv* OR intolerance of
ambiguity)”. Searches covered English-language articles, books, Master’s theses, and
dissertations published from 1950 to 2020. Second, we drew from published and unpublished
studies included in previous meta-analyses of the RRH. Third, we employed a snowballing
procedure that entailed reviewing lists of studies that have cited widely used measures of
political ideology and rigidity. Finally, we searched publicly and privately available datasets
(e.g., YourMorals.org) to manually calculate effect sizes of interest.
Our initial search yielded 1,416 studies, and abstracts of these studies were then screened
for initial inclusion. A total of 489 studies were deemed appropriate for full-text review;
removing duplicates reduced this number to 371. The remaining full texts were read by the first
author. For a study to be included, it needed to meet all of the following criteria: (a) assessment
of one or more of the rigidity constructs of interest; (b) an assessment of political ideology (e.g.,
symbolic self-placement, support for conservative/liberal policies, party identification, self-
report questionnaire, vote choice, or some combination thereof); and (c) sufficient data provided
for calculating individual effect sizes. Effect sizes that were either observed following an
experimental manipulation or reported alongside statistically significant covariates (e.g., beta
weights from multiple regression analyses) were excluded. No studies were excluded on the
basis of participant characteristics (e.g., age, ethnicity, native language).
A total of 140 articles met inclusion criteria and were coded. Five open datasets that met
inclusion criteria were also identified and used to calculate effect sizes. A final round of
searching was conducted in January 2021, which resulted in the addition of 7 studies. Twenty-
five percent of studies were randomly selected and independently reviewed and coded by the
second author to assess reliability of study coding. Interrater reliability coefficients (i.e., κ for
categorical variables and ICC for continuous variables) are provided below. Coding
disagreements were resolved by discussion. An overview of included citations, study
characteristics, and effect sizes is provided in Supplementary Table 1.
After completing our initial literature review, we expanded our study pool to include any
effect sizes from the most comprehensive previous meta-analytic review of the RRH (i.e., Jost et
al., 2017) that are based on political ideology constructs that overtly assess prejudice,
authoritarianism, and rigidity. An additional 102 effect sizes and 6,275 participants were added.
Secondary analyses were conducted to facilitate the comparison of our results before and after
excluding these effect sizes, affording the opportunity to meta-analytically examine the
differences between proxy measures of conservatism and “purer” measures of conservatism.
Figure 2. Flowchart of the screening process. The term “record” refers to a discrete source of data (e.g., a study, which may contain many effect sizes, or a dataset from which effect sizes can be calculated).
Data Coding
Descriptive statistics for each moderator variable are presented in Tables 1 and 2.
Allegiance associations. Theoretical allegiance to the RRH was coded categorically as
one of three categories: (1) supports RRH, (2) neutral towards RRH/does not mention RRH, and
(3) challenges RRH. We modeled our coding strategy on procedures from Gaffan, Tsaousis, and
Kemp-Wheeler (1995). Accordingly, studies were considered to be non-neutral toward the RRH
if their Introduction section fulfilled any of the following criteria: (a) explicitly hypothesized that
rightists will demonstrate greater cognitive rigidity than leftists, or vice versa; (b) included a description
of the RRH (or a competing hypothesis) that was at least 10 lines longer than descriptions of
other models; (c) were authored by a creator of a RRH-aligned theory (or a competing theory);
and/or (d) mentioned only the RRH (or a competing model). If a manuscript did not fulfill any of
the aforementioned criteria, it was coded as neutral towards the RRH. Rater agreement was
substantial, κ = .69.
To further investigate the potential role of political bias in the RRH literature, we also
coded for author’s political allegiance. Given that this information is difficult to ascertain from
research articles, we employed a proxy measure of authors’ political ideology: whether and to
what extent authors had contributed to political organizations. Donation information for the first
and last authors of each study was obtained via the Center for Responsive Politics’ database of
Federal Election Commission records, which comprises receipts from political donations,
including donations to political action committees (PACs). These analyses were by necessity
limited to authors who are U.S. citizens. We attributed political contributions to an author only if
our search yielded a match for their (a) first and last name and either (b) employer (e.g., their
university) or (c) their current or past location (i.e., as determined by the location of one’s
employer and/or information on publicly available CVs). We computed one categorical variable
(donated to right, did not donate, or donated to left) and one continuous variable (frequency of
donation), with donations to the right arbitrarily coded as negative values and those to the left
as positive values.
Sample characteristics. We extracted the following sample-level characteristics:
country of origin (κ = .98), mean age (ICC = .98), and gender composition
(ICC = .98). We also coded participant composition categorically (e.g., university students,
online, community, nationally representative, government officials; κ = .76).
WEIRDness. We followed procedures described in the Many Labs 2 project (i.e., Klein
et al., 2018; see also Yilmaz & Alper, 2019) to quantify sample WEIRDness via the sample
country of origin (see https://osf.io/b7qrt/ for more detailed information).
Measure of political ideology. We coded the political ideology measure used for each
observation as a categorical moderator using both broad and narrow coding strategies. Individual
measures with k > 2 were coded as an individual category. Further, the following specific
categories were used: symbolic self-placement, support for issues/policies, vote choice, party
identification, ad-hoc measures (i.e., designed for purposes of a single study), composites (i.e., a
combination of multiple measure types), unspecified self-report (i.e., studies that noted that a
self-report measure of ideology was used but did not name it or provide items), and other
unspecified (i.e., all other cases where the authors left their measure of ideology unspecified).
Including these categories, a total of 23 categories with k > 2 were present.
Content overlap. Judgments concerning whether measures of political ideology are
marked by content overlap were initially made by the first author based on a careful reading of
each measure. We then constructed a dummy-coded moderator variable for overlap vs. no
overlap. The following measures were categorized as containing content overlap: the original C-
Scale (e.g., Kirton, 1978)5, all versions of the F-Scale (e.g., Davids, 1955; Kohn, 1974), all
versions of the RWA Scale (e.g., Crowson et al., 2006), all versions of the SDO scale (e.g.,
Leone & Chirumbolo, 2008), all versions of the System Justification Scale (e.g., Hennes et al.,
2012), the Personal Conservatism Scale (e.g., Olcaysoy & Saribay, 2014), and all ad-hoc
measures that borrowed items from the aforementioned measures.
As a means of evaluating the first author's coding decisions, we recruited a small online
community sample from Amazon’s Mechanical Turk (N = 150) to provide judgments about (a)
six of the most commonly used and/or representative non-overlap measures of political ideology
(i.e., symbolic identification, vote choice, the Social and Economic Conservatism Scale, a
modernized C-Scale, party preferences, and the Political-Economic Conservatism Scale) and (b)
a priori content overlap measures. After reading each measure, participants responded to the
following items: (1) “A person's answers on this measure will accurately reflect whether they
hold right-wing vs. left-wing political views”; (2) “This measure contains content related to
psychological rigidity (e.g., inflexibility in thinking, dislike of uncertainty, dogmatism, etc.)?”;
and (3) “This measure contains content that is not directly related to political ideology”. Five-
point Likert-type response scales were used6. Responses on the three items were averaged (with
item 1 reverse scored) to yield a score for lay appraisals of content overlap and face validity.
Using these scores, we conducted a dependent samples t-test of differences between lay
appraisals of measures categorized by the first author as containing content overlap vs. lay
appraisals of measures not containing overlap. Scores for overlap and non-overlap measures
suggested that non-overlap measures had better face validity, t = 12.96, p < .001, d = 1.19.
5With the exception of modern variants of the C-Scale, such as that used in Deppe et al. (2015). 6For items 1 and 3, response options ranged from Definitely True (1) to Definitely False (5); for item 2, response options were Not at All (1), Somewhat (2), Half and Half (3), Primarily (4), and Exclusively (5).
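To make the scoring and test concrete, the following Python sketch reproduces the logic with hypothetical ratings (the helper names and data are ours, purely for illustration; the actual analyses were presumably conducted in standard statistical software):

```python
import math

def reverse_score(x, low=1, high=5):
    # Reverse a Likert response on a low..high scale (1 -> 5, 5 -> 1)
    return high + low - x

def overlap_score(item1, item2, item3):
    # Average the three appraisal items, reverse-scoring item 1, so that
    # higher scores reflect more perceived overlap / worse face validity
    return (reverse_score(item1) + item2 + item3) / 3

def paired_t(diffs):
    # Dependent-samples t statistic and Cohen's d from paired differences
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n)), mean / sd

# Hypothetical raters, each appraising one overlap and one non-overlap measure
overlap = [overlap_score(2, 4, 4), overlap_score(3, 5, 4), overlap_score(2, 4, 5)]
non_overlap = [overlap_score(4, 2, 2), overlap_score(5, 1, 2), overlap_score(4, 2, 1)]
t_stat, cohens_d = paired_t([o - c for o, c in zip(overlap, non_overlap)])
```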
Self-report vs. performance-based measures. Effect sizes derived from self-report
rigidity measures were coded as such (i.e., self-report), whereas effect sizes derived from
behavioral and/or objectively scored measures were coded as performance-based (κ = .89).
Dimension of political ideology. Measures of political ideology were coded as one of
three categories: general conservatism, social conservatism, or economic conservatism. General
conservatism, which comprised the largest proportion of observations, included generic self-
placement items, party affiliation or membership, vote history or preference, and self-report
scales that contain both social and economic content but report only a single score (e.g., the
Political-Economic Conservatism scale). Social conservatism was measured by self-placement
items; self-report measures that rely heavily on content related to the endorsement of traditional
values, social rules, and norms; self-report measures that yield a social conservatism subscale;
and policy preferences for issues related to social conservatism (e.g., abortion rights or gay
marriage). Economic conservatism was assessed in the same manner as social conservatism, but
with measures and policies that focus on government involvement in private enterprise,
redistribution of wealth, and/or the economic choices available to citizens. Rater agreement
was substantial, κ = .90.
Rigidity. When coding each observation, we used a two-pronged approach. First, we
examined the rigidity constructs individually, coding them on the basis of study authors’
designations wherever possible. For instance, if the authors indicated that they had created a
composite self-report measure of dogmatism, we coded said measure as “dogmatism.” When this
was not possible, we relied on the fact that many of the varieties of rigidity used in the current
review are tied to “trademark” measures that are most frequently used to operationalize them.
For instance, motivations for certainty are typically assessed with the Need for Closure Scale
(Kruglanski et al., 1993) and cognitive reflection is typically assessed with the Cognitive
Reflection Test (Frederick, 2005). Hence, between the authors’ stated designations and this
heuristic, most studies in our pool could be categorized straightforwardly. Second, observations
were independently coded as reflecting the broad categories of rigid thinking styles, motivational
rigidity, dogmatic certitude, or cognitive inflexibility (i.e., using the taxonomic scheme outlined
in the Introduction) based on the first author’s judgment.
Statistical Analyses
All extracted effect sizes were transformed into Fisher’s z (Cohen, West, & Aiken, 2014)
to account for the slight negative bias in Pearson’s r (Card, 2012), and weighted according to the
inverse of their variance (i.e., sampling error), such that larger samples contributed more to the
aggregate effect size estimate than smaller ones (Lipsey & Wilson, 2001; Raudenbush & Bryk,
2002). We used the metafor package (Viechtbauer, 2010) in R (version 3.6.1; R Core Team,
2019) to conduct all analyses.
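A minimal Python sketch of the r-to-z transformation and inverse-variance weighting just described (function names are ours; the paper's analyses used metafor in R, and the simple fixed-effect aggregate below deliberately omits the multilevel structure described next):

```python
import math

def fisher_z(r):
    # Fisher's r-to-z transform (corrects the slight negative bias in r)
    return 0.5 * math.log((1 + r) / (1 - r))

def z_to_r(z):
    # Back-transform an aggregate z to the correlation metric
    return math.tanh(z)

def z_variance(n):
    # Sampling variance of Fisher's z for a correlation based on n cases
    return 1.0 / (n - 3)

def inverse_variance_mean(rs, ns):
    # Weight each z by the inverse of its sampling variance, so that
    # larger samples contribute more to the aggregate estimate
    ws = [1.0 / z_variance(n) for n in ns]
    zs = [fisher_z(r) for r in rs]
    return z_to_r(sum(w * z for w, z in zip(ws, zs)) / sum(ws))
```

For example, combining r = .00 from n = 1,003 with r = .20 from n = 53 yields an aggregate much closer to zero, because the larger study receives roughly twenty times the weight.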
The three-level model. To account for dependencies across effect sizes, and particularly
for correlated sampling errors due to multiple effect sizes drawn from the same sample, we used
a three-level meta-analytic approach with restricted maximum likelihood estimation. In contrast
to the traditional (two-level) random effects model, in which effect sizes are assumed to vary due
to sampling variance and systematic variance between studies, the three-level model also
accounts for systematic variance across outcomes from the same sample. Using this approach,
we modeled the sampling variance for each effect size (level one), variation across outcomes
within each sample (level two), and variation across each sample (level three). Although such
multilevel models are said to require that residuals at each level are independent, Van den
Noortgate and colleagues (2013) demonstrated in simulation studies that the three-level approach
successfully handles dependencies due to correlated sampling errors, resulting in accurate
standard errors and point estimates (see also Van den Noortgate et al., 2003, 2015). We chose to
use three-level meta-analysis because, unlike most other statistical techniques for handling
correlated sampling errors (e.g., multivariate meta-analysis with robust estimation), the three-
level approach does not require that correlations among reported outcomes be known.
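In conventional random-effects notation (ours, not the authors'), the three-level model for effect size i in sample j can be written as:

```latex
z_{ij} = \mu + u_j + v_{ij} + e_{ij}, \qquad
u_j \sim N\!\left(0, \sigma^2_{(3)}\right), \quad
v_{ij} \sim N\!\left(0, \sigma^2_{(2)}\right), \quad
e_{ij} \sim N\!\left(0, \nu_{ij}\right),
```

where μ is the overall mean effect, u_j captures between-sample (level-3) heterogeneity, v_ij captures within-sample (level-2) heterogeneity, and e_ij is sampling error with known variance ν_ij (for Fisher's z, 1/(n_j − 3)).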
Heterogeneity. Another advantage of the three-level model is that it characterizes the
amount of heterogeneity due to differences both within (e.g., both economic and social
conservatism being reported in a single study) and between studies. Further, if heterogeneity is
present, the global three-level model may be extended to include relevant predictors (e.g.,
construct and conservatism type) without assuming that said predictors explain all variance
among outcomes within studies.
We computed three indices of heterogeneity. First, we computed Cochran’s Q. To
account for the misleading inflation of Q that occurs with a large pool of studies, we also
calculated H2 (Higgins & Thompson, 2002), or Q/(k – 1), which indexes the magnitude of Q
relative to its expected value when heterogeneity is not present. H2 allows for the comparison of
heterogeneity across different meta-analytic models, as it does not increase with the number of
studies included in each estimate. Higgins and Thompson (2002) recommended the following
guidelines for interpreting H2 values: H2 = 1 indicates that the population of studies is
homogeneous, whereas H2 > 1.5 indicates that substantial heterogeneity is present.
Lastly, we calculated the percentage of total heterogeneity due to substantive
heterogeneity from level 2 of the three-level model (variation across outcomes within each
sample), I2(2), and level 3 of the three-level model (variation across each sample), I2(3). Much like
the traditional I2 statistic for random-effects models, introduced by Higgins and Thompson
(2002), I2(2) and I2(3) estimate heterogeneity relative to the total variance (i.e., variance in true
effects plus sampling variance) with the additional step of splitting the variance in true effects
into between-cluster (sample) heterogeneity and within-cluster heterogeneity.
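The three heterogeneity indices can be sketched in a few lines of Python (helper names are ours; the I2 split below follows the logic just described, with a "typical" sampling variance standing in for the more formal definition implemented in metafor):

```python
def cochran_q(zs, variances):
    # Weighted sum of squared deviations around the fixed-effect mean
    ws = [1.0 / v for v in variances]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return sum(w * (z - z_bar) ** 2 for w, z in zip(ws, zs))

def h_squared(q, k):
    # H^2 = Q / (k - 1): 1 = homogeneous; > 1.5 = substantial heterogeneity
    return q / (k - 1)

def i_squared_levels(sigma2_within, sigma2_between, typical_sampling_var):
    # Share of total variance due to level-2 (within-sample) and
    # level-3 (between-sample) heterogeneity; the remainder is sampling error
    total = sigma2_within + sigma2_between + typical_sampling_var
    return sigma2_within / total, sigma2_between / total
```

With variance components in the proportions .66, .25, and .09, for instance, `i_squared_levels` returns I2(2) = 66% and I2(3) = 25%, leaving 9% for sampling error.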
Meta-analytic models. Determining the extent to which scientific constructs should be
lumped together or split apart is a longstanding philosophical and practical conundrum (e.g.,
McKusick, 1969; Simpson, 1945). As such, it is unclear whether either the different types of
rigidity or the various domains of conservatism should be conceptualized as comprising two
larger constructs. As a means of engaging with this problem, we used the following nested
analytic approach.
First, following Glassian thinking (Glass, 2015), we estimated an overall model,
collapsing across rigidity constructs and types of conservatism to yield an overall meta-analytic
estimate. Second, we conducted subgroup analyses for each political ideology and rigidity
variable across all classification schemes. We then estimated meta-regression models with
categorical moderators for these classifications (e.g., social vs. general vs. economic ideology),
which we evaluated with omnibus tests of the null hypothesis that all levels of the moderator are
equal to zero simultaneously. Finally, we estimated a “full” multiple meta-regression model by
simultaneously regressing effect sizes on categorical moderators for rigidity domain and political
ideology domain, which we then extended to additional moderators of interest, such as
publication status, sample type, author allegiance, and so on. Continuous moderators were mean
centered to facilitate interpretation. This produced predicted values for each of the four rigidity
domains at the reference level of each moderator, as well as effect size estimates for each non-
reference level of each moderator (i.e., how much the predicted values for each rigidity domain
would change if the reference level for a given moderator changed). We employed the Knapp
and Hartung (2003) adjustment to standard Wald-type tests, which allows for better control of
Type I error rate (i.e., tests of sets of model coefficients were F-tests). We interpreted moderators
with significant omnibus tests based on (a) t-tests of the differences between each level of the
moderator and (b) point estimates and confidence intervals of each conservatism-construct
coefficient at a reference level of the moderator in question.
A reduction in substantive heterogeneity as the meta-analytic models increase in
complexity allows us to examine whether our boundary conditions are explaining additional
residual variance. For instance, we can separately observe the reduction in heterogeneity
attributable to distinguishing between social and economic conservatism, different rigidity
domains, and so on, thereby clarifying the latent meaning of the RRH literature and/or
the likelihood that similar mechanisms underlie most rigidity-conservatism relations. Still, these
models, which include only main effects, carry the assumption that the influence of multiple
factors is additive (i.e., that differences between levels of each moderator do not vary across
levels of the other moderator[s]). Although interactions may be present, we did not have
adequate statistical power to conduct the 3-way interaction analyses that would have been
necessary (e.g., moderator variable by conservatism domain by rigidity type).
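To make the dummy-coding scheme concrete: with one categorical moderator, no intercept, and fixed-effect weights, each regression coefficient reduces to the inverse-variance weighted mean effect within its category. A minimal Python sketch of that special case (our simplification; the authors' models add the two random-effect levels and Knapp-Hartung tests):

```python
def groupwise_estimates(effects, variances, groups):
    # No-intercept meta-regression on dummy codes for one categorical
    # moderator: each coefficient is the weighted mean of its category
    out = {}
    for g in set(groups):
        pairs = [(1.0 / v, z)
                 for z, v, gg in zip(effects, variances, groups) if gg == g]
        out[g] = sum(w * z for w, z in pairs) / sum(w for w, _ in pairs)
    return out
```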
Publication bias. The methods for assessing publication bias are limited in the multi-
level model. Hence, to assess for publication bias, we proceeded in several steps. To initially
investigate reporting and/or publication bias, we created two contour-enhanced plots visualizing
(1) the distribution of all effect sizes against their precision (1/SE), including the variance from
each level of the three-level model, with the reference line set at the estimated overall effect size,
and (2) the distribution of internally standardized residuals (i.e., observed residuals in the full
model divided by their corresponding standard errors) after accounting for rigidity construct and
conservatism type. Traditionally, funnel plots were used to visualize publication bias via
asymmetrically distributed studies (i.e., as standard errors increase, the tails of the distribution of
effect sizes should widen asymmetrically under conditions of publication bias). One limitation of
traditional funnel plots, however, is that asymmetry may be attributable to substantive
heterogeneity (i.e., differing true effects across studies), rather than publication bias (Egger et al.,
1997). Thus, we employed procedures proposed by Soveri et al. (2017) that partially resolve this
limitation by, in effect, controlling for major moderating variables that might plausibly account
for substantive heterogeneity.
Next, to further probe, and potentially correct for, the possibility of asymmetry in the
effect size distribution while maintaining the three-level model, we followed the method of
entering either the standard error or variance for each observed effect size into each model as an
additional predictor (i.e., moderator). Some authors argue that this approach can be considered
closely equivalent to the PET-PEESE method (Lehtonen et al., 2018). In the PET-PEESE
method, a precision-effect test (PET) is first conducted by regressing the effect sizes on their
standard errors in a weighted least-squares regression; a statistically significant relation between
effect sizes and standard errors points to publication bias. When the PET is significant, it is
typically followed by a precision-effect estimate with standard error (PEESE), which entails
replacing the standard error with the variance; relative to a PET, a PEESE better estimates the
model's unbiased effect size (i.e., the intercept of the weighted least-squares regression) under
certain circumstances (Stanley & Doucouliagos, 2014). As an additional and more direct
means of assessing publication bias, we examined the degree to which published vs. unpublished
studies influenced the full model via fixed-effects moderator analyses.
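The PET-PEESE logic can be sketched as an ordinary weighted least-squares fit (a two-level Python simplification with names of our choosing; in the present review, the standard error or variance was instead entered as a moderator within the three-level model):

```python
def wls_fit(x, y, weights):
    # Weighted least squares for y = b0 + b1 * x; returns (b0, b1)
    sw = sum(weights)
    xb = sum(w * xi for w, xi in zip(weights, x)) / sw
    yb = sum(w * yi for w, yi in zip(weights, y)) / sw
    b1 = (sum(w * (xi - xb) * (yi - yb) for w, xi, yi in zip(weights, x, y))
          / sum(w * (xi - xb) ** 2 for w, xi in zip(weights, x)))
    return yb - b1 * xb, b1

def pet_peese(effects, ses):
    # PET regresses effects on standard errors; PEESE swaps in variances.
    # Each intercept estimates the effect of a hypothetical zero-SE study;
    # a nonzero PET slope points to small-study (publication) bias
    ws = [1.0 / se ** 2 for se in ses]
    pet_b0, pet_slope = wls_fit(ses, effects, ws)
    peese_b0, _ = wls_fit([se ** 2 for se in ses], effects, ws)
    return pet_b0, pet_slope, peese_b0
```

In an unbiased literature, effect sizes are unrelated to their standard errors, so the PET slope is near zero and both intercepts recover the mean effect.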
Results
The final dataset comprised 708 observations, 329 samples, and 173 studies (unique N =
187,612). Figure 3 depicts the number of effect sizes for each construct, segmented by the
frequency of each political ideology type within each construct (see also the full data set
provided in online supplementary materials). Table 2 presents the number of effect sizes at each
level of each categorical moderator; Table 3 presents descriptive statistics for each continuous
moderator7. Unless stated otherwise, all results are reported with content overlap effect sizes (k
= 139) removed.
Figure 3. Number of effect sizes for each rigidity domain and political ideology dimension.
7To account for the possibility of outlying observations distorting our conclusions, we removed observations with standardized residuals that deviated from the expected asymptotic distribution. This procedure was done iteratively at both the 95% and 99% confidence levels (visualized with the white and grey areas, respectively, of Figure 7). Forty-four observations (7.2%) were removed for p < .05 and 18 observations (2.9%) were removed at p < .01. Together, the three pools of studies (i.e., raw, trimmed at p <.05, and trimmed at p < .01) allowed for sensitivity analyses, although the raw pool of studies remained our primary object of analysis.
Table 2. Number of effect sizes for categorical moderators with k > 3.

Moderator                                    k     κ
Sample                                             .76
  Students                                  321
  Online                                    143
  Community                                 110
  Nationally Representative                  56
  YourMorals.org                             17
  Mixed                                       8
  Government officials                        5
  Soldiers                                    3
Rigidity Measures                                  .89
  Self-report                               493
  Performance-based                         170
Political Measures                                 .79
  Left-right self-placement                 305
  Issues                                     48
  F-Scale                                    40
  Party Identification                       40
  RWA                                        28
  C-Scale (Modified)                         25
  Composite                                  19
  Political-Economic Conservatism Scale      17
  Economic Conservatism Scale                15
  Cultural Conservatism Scale                14
  Conservatism-Liberalism Scale              13
  Social and Economic Conservatism Scale     12
  Vote                                       11
  C-Scale (Original)                         11
  Unspecified self-report                    10
  System Justification                       10
  S4 Conservatism Scale                       7
  Social Dominance Orientation                6
  Current Political Beliefs Questionnaire     5
  Unspecified                                 5
  Original/ad-hoc                             5
  Economic System Justification               3
  Core Conservatism Scale                     3
Country                                            .98
  USA                                       417
  Flanders                                   40
  Poland                                     30
  UK                                         24
  Canada                                     22
  Turkey                                     20
  Sweden                                     17
  South Africa                               16
  Italy                                      15
  Hungary                                    13
  Germany                                     6
  Belgium                                     5
  Netherlands                                 4
  Brazil                                      3
  Asia                                        5
  Europe                                     10
  Oceania                                     3
  South/Central America                       4
Peer-review status                                 .93
  Journal                                   459
  Dissertation                              102
  Unpublished                                78
  Replication                                44
  Supplementals                              15
  Book                                        7

Note. We separate Flanders from the rest of Belgium as the papers from which these effects are drawn analyze these regions separately, given that there are substantial differences in the political culture in Flanders relative to the rest of Belgium.
Table 3. Descriptive statistics for continuous moderators.

Moderator           k     M       SD     Median  Min   Max
Allegiance          658   1.20    .69    1       0     2
Year                660   2004    16.47  2011    1955  2020
Age                 349   28.51   10.68  27.57   18    52
% Female            394   59%     19%    57%     0%    100%
WEIRD               655   .80     .13    .84     .21   .94
Donations (First)   289   .13     .39    0       -1    1
Donations (Last)    187   .34     .48    0       0     1
Frequency (First)   274   .16     .75    0       -4    5
Frequency (Last)    163   2.71    7.67   0       0     45
Sample size         706   510.52  1,263  223     12    18,817
Model 1: Global Result
Supporting the RRH in its broadest-brush predictions, our overall analysis indicated a
small statistical association between rigidity and political conservatism, r = .13, 95% CI (.11,
.15). Importantly, a considerable degree of heterogeneity was present in the model, Q(565) =
4363, p < .001; H2 = 6.71; I2(2) = 66% and I2(3) = 25%. As indicated by the I2 values, only 9% of
the total variance is attributable to sampling error, whereas 91% is attributable to differing true
effect sizes across studies, with the degree of substantive heterogeneity in level 2 (i.e., within
samples) being somewhat greater than that accounted for by variance in level 3 (i.e., across
samples). Results changed only slightly (differences in r < .01) when outliers were
removed; thus, the subsequent analyses were based on the full dataset.
Model 2: The Multidimensionality of Political Ideology
To arrive at an estimate of the main effect for each type of conservatism, we adapted the
three-level model by dropping the intercept and regressing the observed effect sizes on a set of
dummy-coded variables for economic conservatism, social conservatism, and general
conservatism, respectively. An omnibus test was statistically significant, F (3, 563) = 141.17, p <
.001. Residual heterogeneity was reduced but not eliminated (QE [563] = 3596, p < .001; H2 =
5.35; I2(2) = 55% and I2(3) = 34%), with more variance being attributable to within-sample
differences. Table 4 presents estimated effect sizes, alongside 95% confidence intervals, ks, Ns,
p-values, and heterogeneity statistics (the latter of which were derived by-necessity from
individual subgroup analyses involving only a single type of conservatism). All three types of
conservatism positively and significantly deviated from zero. Results were effectively unchanged
after removing outliers (change in r < .02 for all three outcomes). Each type of conservatism
differed significantly from the other two. More specifically, economic conservatism was less
strongly related to rigidity than general conservatism (t = 5.43, p < .001), which manifested a
small-to-moderate positive relation to rigidity, and social conservatism (t = 9.37, p < .001), which
manifested a moderate positive relation with rigidity; general conservatism was less strongly
related to rigidity than was social conservatism (t = 4.61, p < .001).
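In concrete terms, dropping the intercept and regressing effect sizes on a full set of dummy codes yields one coefficient per ideology type, each interpretable as that type's meta-analytic mean. The sketch below is a simplified fixed-effect illustration with made-up numbers (function and variable names are ours), not the three-level random-effects model actually fitted.

```python
import numpy as np

def subgroup_means_wls(effects, variances, group_dummies):
    """Inverse-variance weighted least squares with no intercept.

    Each column of group_dummies indicates one conservatism type, so
    each coefficient equals that subgroup's weighted mean effect size.
    (Fixed-effect sketch; the paper's model adds two random-effect levels.)
    """
    X = np.asarray(group_dummies, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    XtW = X.T * w  # equivalent to X.T @ diag(w)
    beta = np.linalg.solve(XtW @ X, XtW @ np.asarray(effects, dtype=float))
    return beta

# Toy example: three effect sizes from two ideology types, equal precision.
effects = [0.05, 0.05, 0.20]
variances = [0.01, 0.01, 0.01]
dummies = [[1, 0], [1, 0], [0, 1]]  # columns: type A, type B
means = subgroup_means_wls(effects, variances, dummies)
```

With equal precision, each coefficient reduces to the simple mean of its subgroup's effects, which makes the no-intercept parameterization easy to verify by hand.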
Subgroup analyses indicated that all three types of conservatism demonstrated a
considerable degree of heterogeneity. As reported in Table 4, for both social and economic
conservatism, most of the total variance was explained by differences within studies, whereas for
general conservatism, most of the total variance was explained by differences among studies,
perhaps speaking to the greater specificity of the former two conservatism-types (e.g., because
measures of general conservatism are more heterogeneous than measures of economic or social
conservatism).
Table 4. The effect of political ideology dimension for rigidity-ideology relations.

           k    n        H2    I2(2)  I2(3)  r    95% CI
Economic   112  49,885   4.26  54%    33%    .05  .02, .08
General    394  132,000  4.85  37%    50%    .13  .12, .15
Social     98   54,259   7.10  64%    27%    .20  .17, .23
Note. k = observations, n = unique participants.
Model 3: Rigidity Domains
We next sought to clarify the relations between individual rigidity domains and political
conservatism, adapting the three-level model by dropping the intercept and regressing the
observed effect sizes on a set of nominal variables for each of the four rigidity domains.
Subsequently, to parse the heterogeneity within and across these domains, we conducted a
number of standalone meta-analytic models (i.e., subgroup analyses) with effect sizes variously
grouped by rigidity domain and individual rigidity constructs. An omnibus test suggested the
presence of moderation for domain, F (3, 562) = 128.92, p < .001. Relative to the overall model,
the residual heterogeneity, most of which was attributable to within-sample variance, was
reduced somewhat but not eliminated, with more variance being attributable to within-sample
differences (QE [562] = 3256.70, p < .001; H2 = 4.74; I2(2) = 74% and I2(3) = 14%).
Consistent with the RRH, all constructs were statistically significantly related to political
conservatism. Still, estimated effect sizes were uniformly and, in most cases, considerably
smaller than previously reported estimates (e.g., the most recent prior meta-analytic estimate for
cognitive inflexibility was r = .38, while our estimate was r = .07; Jost, 2017). See Table 6 for
point estimates, 95% confidence intervals, ks, ns, p-values, and heterogeneity statistics.
Removing outliers did not influence results and the PET was not statistically significant, p =
.128. Cognitive inflexibility and rigid thinking demonstrated trivial effects (i.e., rs < .07);
motivational rigidity manifested a small-to-modest positive association with political
conservatism; and dogmatism manifested a moderately-sized positive association with
conservatism. Thus, motivational rigidity manifested a significantly larger effect than both
cognitive inflexibility (t = 3.63, p < .001) and rigid thinking (t = 5.74, p < .001), and dogmatism
manifested a larger relation than all of the other three domains (ts ranged from 4.11 to 8.47, ps <
.001). Rigid thinking and cognitive inflexibility did not significantly differ from one another (t =
0.17, p = .865).
Looking at the construct rather than domain level, an omnibus test suggested the presence
of moderation, F(6, 598) = 86.45, p < .001, QE(598) = 3342.79, p < .001, H2 = 4.53; I2(2) = 72%
and I2(3) = 15%. With the exception of dogmatism, which manifested a moderately-sized positive
association with political conservatism (in the presence of considerable substantive
heterogeneity), all meta-analytic estimates fell between r = .07 and r = .15.
Table 6. Subgroup analyses for rigidity-ideology relations.

Rigidity Domain                k    n       H2    I2(2)  I2(3)  r    95% CI
  Cognitive Inflexibility      68   7,926   2.35  38%    54%    .07  .03, .11
  Motivational Rigidity        256  50,507  4.31  80%    8%     .15  .13, .17
  Rigid Thinking Style         144  86,410  4.81  57%    22%    .07  .05, .09
  Dogmatic Certitude           98   27,666  7.40  44%    48%    .22  .19, .25

Rigidity Construct
  Closure, Order, & Structure  266  53,608  4.20  76%    8%     .15  .13, .16
  Dogmatism                    103  29,504  7.11  40%    51%    .22  .19, .25
  (Low) Cognitive Reflection   95   39,678  4.29  85%    1%     .07  .04, .10
  Cognitive Rigidity           53   6,090   1.58  7%     72%    .10  .05, .14
  (Low) Need for Cognition     51   51,250  5.03  0%     90%    .07  .03, .10
  Intolerance of Ambiguity     36   10,525  3.96  82%    9%     .11  .06, .16
Note. k = observations, n = unique participants.
The Full Model
We next regressed all non-overlap effect sizes on 12 dummy-coded moderator variables,
one for each potential combination of conservatism-type and rigidity domain (e.g., dogmatism by
economic conservatism). Residual heterogeneity was reduced further but remained present and
substantial, Q (554) = 2470, p < .001; H2 = 3.36; I2 (2) = 62% and I2 (3) = 24%, with most being
attributable to within-sample heterogeneity. The test of moderation was statistically significant,
F (12, 544) = 56.20, p < .001. Results are presented in Table 7.
All rigidity variables, except dogmatism, manifested correlations with economic
conservatism that were not significantly different from zero (rs ranged from .00 to .04). In
contrast, dogmatism demonstrated a small, statistically significant, positive correlation with
economic conservatism (r = .16). As expected, meta-analytic estimates for relations between
rigidity domains and social conservatism were considerably larger than those for economic
conservatism (rs ranged from .11 to .32). As illustrated in Figure 4, the differences between
social and economic ideology were particularly stark for all rigidity domains (differences between
rs > .10).
Table 7. Results for the full model.

                             k    s   n       r    95% CI     p
Economic
  Cognitive Inflexibility    26   8   1,402   .00  -.06, .06  .935
  Motivational Rigidity      45   28  11,933  .04  .01, .08   .018
  Dogmatism                  23   18  8,691   .16  .11, .21   < .001
  Thinking Style             23   21  32,560  .02  -.02, .07  .324
General
  Cognitive Inflexibility    30   19  6,707   .11  .06, .16   < .001
  Motivational Rigidity      170  91  36,243  .16  .14, .19   < .001
  Dogmatism                  58   48  18,811  .22  .18, .25   < .001
  Thinking Style             98   83  64,300  .06  .04, .09   < .001
Social
  Cognitive Inflexibility    12   5   1,087   .11  .02, .19   .012
  Motivational Rigidity      41   27  12,347  .23  .19, .27   < .001
  Dogmatism                  17   13  10,232  .32  .27, .38   < .001
  Thinking Style             23   22  33,386  .15  .10, .19   < .001
Note. k = observations, s = samples, n = unique participants.
Figure 4. Relations between rigidity variables and social and economic conservatism (with 95%
confidence intervals).
Moderators
We extended both the overall model (i.e., no moderators) and full model (i.e., categorical
moderators for political ideology dimension and rigidity domain) to include moderator variables
representing plausible boundary conditions of the RRH. Namely, these were political ideology
measure type, rigidity measure type, nationality, sample type, and WEIRDness.
Political Measures
As reported in Table 8 and as expected, there was significant variation in conservatism-
rigidity relations across political ideology measures in the overall model, F(32, 635) = 22.36, p <
.001, such that the effect sizes across measures (with k > 2) ranged from non-significant to
extremely large. By far the largest effect sizes were found using the F-Scale, RWA Scale, and
original Wilson-Patterson Conservatism Scale. Estimated effects for content overlap measures
were larger than the estimated overall effect excepting the System Justification Scale. Content
overlap was examined as a binary moderator variable (i.e., overlap vs. no overlap; see Figure 5),
revealing a significant moderation effect, F(2, 668) = 277.72, p < .001, such that non-overlap
political measures manifested only a small association with rigidity, r = .13, 95% CI (.11, .15),
whereas overlap measures manifested a large statistical association with rigidity, r = .39, 95% CI
(.35, .43).
Table 8. Subgroup analyses for all political ideology measures.

                                        k    n        H2     I2(2)  I2(3)  r       95% CI
Symbolic                                305  140,966  6.58   72%    17%    .13***  .11, .14
Policy Preferences                      48   12,115   3.20   49%    34%    .11***  .07, .15
Fascism Scale                           40   6,387    45.49  48%    49%    .54***  .41, .67
Party Identification                    40   20,003   4.05   17%    70%    .11***  .06, .16
Right-wing Authoritarianism             28   5,345    4.70   49%    37%    .34***  .27, .41
W-P Conservatism Scale (Modernized)     25   2,809    8.29   88%    5%     .05     -.06, .17
Composite                               19   6,308    2.53   6%     84%    .07     -.01, .14
Political-Economic Conservatism Scale   17   1,722    6.75   3%     82%    .22**   .09, .34
Economic Conservatism Scale             15   1,315    3.13   27%    60%    .11     -.02, .25
Cultural Conservatism Scale             14   1,379    10.87  79%    13%    .32***  .19, .45
Conservatism-Liberalism Scale           13   1,070    1.39   55%    8%     .12***  .06, .19
Social and Economic Conservatism Scale  12   1,793    11.39  15%    79%    .21*    .04, .39
Vote                                    11   11,130   1.94   0%     69%    .19***  .11, .26
W-P Conservatism Scale (Original)       11   1,902    14.21  95%    0%     .37***  .21, .52
System Justification                    10   2,443    7.05   74%    16%    .08     -.02, .18
S4 Conservatism Scale                   7    180      0      NA     NA     .15*    .05, .25
Social Dominance Orientation            6    2,019    3.61   41%    41%    .17*    .03, .30
Economic System Justification           3    919      1.28   36%    36%    .18     -.09, .46
Note. k = observations, n = unique participants. * p < .05; ** p < .01; *** p < .001.
Figure 5. Comparison of overlap vs. non-overlap effect sizes.
Further, after excluding content overlap measures, political ideology measure type
significantly moderated the relation between rigidity and conservatism in the full model (i.e.,
controlling for political ideology dimension and rigidity domain), F (16, 533) = 3.29, p < .001;
H2 = 2.98. Still, results suggested little heterogeneity across commonly used measures of
political ideology in contemporary political psychology. Relative to symbolic ideology, which
comprised the largest proportion of effect sizes, only 2 of the 16 measures (with k > 2) yielded
significantly different point estimates, one of which was a measure of social conservatism,
specifically, and the other of which was our catch-all category for self-report measures that were
not specified in the study text. That so few differences arose across diverse measures such as
vote choice, party preference, issue preferences, and various common self-report measures of
ideology suggests that variation across non-overlap measures of ideology is not a substantial
source of heterogeneity (results held after controlling for rigidity-type and ideology dimension).
Nationality
A significant moderation effect was present for nationality in the overall model, F (18,
523) = 19.57, p < .001. Nations/regions with k > 2 were Brazil, Canada, Flanders, Germany,
Hong Kong, Hungary, Italy, the Netherlands, Poland, South Africa, Sweden, Turkey, the United
Kingdom, and the United States. We also collapsed countries with k < 2 into European,
South/Central American, Asian, and Oceanian categories. Results are presented in Table 9 and
Supplemental Figure 1. Heterogeneity was reduced, but remained high, Q(523) = 3476, p < .001;
H2 = 5.43; I2(2) = 66% and I2(3) = 22%. Given that the majority of effect sizes were observed in
the United States, we also compared US and non-US samples, which revealed a significant
difference, F(2, 540) = 149.23, p < .001, with r(USA) = .15, 95% CI (.13, .17) and
r(Non-USA) = .09, 95% CI (.07, .12).
As visualized in Figure 6, the difference between the USA and other countries appeared to be
driven by differences across economic ideology, such that allowing the binary USA vs. non-USA
factor to interact with political ideology revealed a significant interaction effect, F(2, 536) =
5.46, p = .005. Specifically, effects were relatively consistent across US and non-US samples for
social ideology (r[USA] = .21, 95% CI [.15, .26]; r[non-USA] = .22, 95% CI [.18, .25]; F =
.070, p = .792); only modestly, albeit significantly, different for general ideology (r[USA] = .15,
95% CI [.13, .18]; r[non-USA] = .11, 95% CI [.08, .14]; F = 5.64, p = .018); and substantially
and significantly different for economic ideology (r[USA] = .10, 95% CI [.06, .13]; r[non-USA]
= -.03, 95% CI [-.07, .02]; F = 16.34, p < .001). Controlling for rigidity domain (and the
interaction between rigidity domain and US vs. non-US) did not reduce the strength or
significance of this interaction effect.
Figure 6. Social and economic ideology’s correlation with rigidity in the USA vs. other
countries.
Table 9. Meta-analytic results for nationality.

                        Economic                Social                  General
                        k    r     95% CI       k   r    95% CI         k    r     95% CI
Asia                    0    –     –            0   –    –              5    -.01  -.13, .11
Belgium                 0    –     –            1   .41  .13, .69       1    .17   -.12, .47
Brazil                  1    .11   -.14, .37    1   .19  -.12, .49      1    -.05  -.34, .25
Canada                  4    .02   -.11, .15    4   .09  -.08, .26      14   .04   -.05, .14
Europe                  1    .04   -.16, .24    1   .19  -.08, .46      8    .05   -.06, .16
Flanders                10   .04   -.04, .11    8   .21  .09, .34       21   .29   .19, .39
Germany                 0    –     –            0   –    –              6    .11   -.01, .23
Hong Kong               1    -.09  -.31, .13    1   .12  -.16, .40      1    .05   -.22, .31
Hungary                 0    –     –            0   –    –              3    -.11  -.25, .04
Italy                   2    .03   -.14, .19    1   .19  -.10, .48      9    .17   .08, .26
Netherlands             0    –     –            0   –    –              3    .14   -.01, .29
Oceania                 1    -.13  -.39, .14    1   .57  .28, .87       1    .17   -.12, .47
Poland                  4    -.19  -.31, -.08   4   .28  .13, .44       21   .18   .08, .28
South Africa            15   -.02  -.10, .06    0   –    –              1    .28   -.03, .58
South/Central America   0    –     –            0   –    –              4    .04   -.11, .18
Sweden                  5    -.01  -.12, .10    1   .13  -.14, .40      9    .03   -.09, .15
Turkey                  3    .01   -.12, .12    2   .15  -.04, .35      11   .13   .06, .21
UK                      1    .12   -.18, .42    1   .29  -.08, .66      15   .08   .01, .16
USA                     62   .09   .06, .12     58  .22  .18, .26       213  .15   .13, .17
Note. k = observations. Bolded indicates p < .01.
Sample-type
The type of sample from which each observation was collected accounted for a
significant degree of residual heterogeneity when entered into the overall model, F (7, 555) =
66.63, p < .001 (see Table 10 and Supplementary Figure 2). The smallest effect size was for
samples matched to the demographic characteristics of the national population (r = .03, 95% CI
[-.01, .07]), and the largest effect size was for government officials (r = .24, 95% CI [.09, .39]).
Heterogeneity was reduced, but remained high, after accounting for sample-type, Q(555) = 3747,
p < .001; H2 = 5.67; I2(2) = 75% and I2(3) = 14%. Controlling for ideology and rigidity (i.e., the
full model) revealed similar results, F(7, 550) = 9.93, p < .001, H2 = 2.88. Relative to nationally
representative samples (k = 45), which are considered least likely to be at risk for bias (Higgins
& Green, 2011), 7 of 8 sample-types exhibited significantly larger effects. Namely, results
relative to nationally representative samples were larger for students (r = +.06, 95% CI [+.02,
+.10], k = 266), online samples (r = +.09, 95% CI [+.05, +.13], k = 135), yourmorals.org (r = +.06,
95% CI [+.00, +.11], k = 17), community samples (r = +.14, 95% CI [+.10, +.19], k = 86), and
government officials (r = +.14, 95% CI [.01, .28], k = 5).
Table 10. Sample Type as a Moderator of the Full Model.

Predicted Effect (for General Conservatism and Cognitive Inflexibility)
Sample Type                    r         95% CI (lower)  95% CI (upper)
  Community                    .16***    .11             .21
  Government Officials         .16*      .03             .29
  Mixed                        .03       -.07            .13
  Nationally Representative    .02       -.04            .07
  Online                       .11***    .07             .15
  Students                     .08***    .04             .11
  YourMorals.org               .08*      .02             .13

Change in Estimated Effect for Each Level of the Moderators
Across Levels of Political Conservatism (Reference = General Conservatism)
  Economic                     -.08***   -.11            -.05
  Social                       +.07***   .04             .10
Across Levels of Rigidity (Reference = Cognitive Inflexibility)
  Dogmatism                    +.14***   .10             .18
  Motivation                   +.06**    .02             .10
  Thinking Style               -.01      -.05            .04
Note. * p < .05; ** p < .01; *** p < .001.
Self-report vs. Performance-based Rigidity Measures
Consistent with the findings of Van Hiel et al. (2016), performance-based outcome
measures yielded significantly smaller estimated effects than did self-report outcome measures in
the overall model, F(2, 661) = 186.65, p < .001, such that performance-based measures of
rigidity manifested a trivial statistical association with conservatism (r = .06, 95% CI [.03, .09])
but self-reports manifested a small statistical association (r = .16, 95% CI [.14, .18]). It is worth
noting, however, that relations for both types of measures were small.
WEIRDness
As shown in Table 11, nations categorized as Western and/or rich demonstrated
significantly larger conservatism-rigidity correlations than non-Western and/or non-Rich nations
in the overall model. Further, when standardized scores for Industrialization, Education, and
level of Democracy were entered as continuous moderators of the relation between conservatism
and rigidity, Education accounted for a significant degree of residual heterogeneity, whereas
Industrialization and Democracy did not. After controlling for ideology and rigidity type,
however, WEIRD estimates did not account for a significant degree of residual heterogeneity,
either individually or simultaneously (see Supplemental Table 2 for details).
Table 11. Results for WEIRDness.

Categorical Moderators
              k    n       r     95% CI     Test of Moderation
Western       514  86,607  .14*  .12, .15   F(2, 537) = 143.10, p < .001
Non-Western   25   6,324   .08*  .01, .14
Rich          492  85,643  .14*  .12, .16   F(2, 537) = 149.38, p < .001
Non-rich      47   7,228   .07*  .02, .11

Continuous Moderators
                    B       95% CI      p
Intercept           .13*    .12, .15    < .001
Education           .03*    .01, .04    .002
Industrialization   -.005   -.03, .02   .712
Democracy           -.0004  -.03, .02   .973
Note. k = observations, n = unique participants. * indicates p < .01.
Publication Bias
We first examined the distribution of study outcomes via two contour-enhanced funnel
plots, using the methods described above (see Figure 7). These analyses were conducted with content
overlap effect sizes removed from the study pool, as they may otherwise give the false
appearance of publication bias. In the first plot, many effect sizes were outside of the anticipated
range given their SEs, which is to be expected in the presence of considerable heterogeneity.
Still, there was no clear asymmetry in the distribution of these outliers. When considering the
second plot, in which a greater degree of substantive heterogeneity is accounted for, far fewer
outliers were present. The outliers that remained were relatively symmetrical. As such, neither
plot provided clear evidence of publication bias. Nevertheless, only 34.5% of studies were
sufficiently powered to detect an effect size of ρ = .10, while a true effect of ρ = .15 would be
necessary to achieve median statistical power of 66% (see Supplemental Figure 3).
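The power figures above can be approximated with the standard Fisher-z power formula for a correlation. The sketch below (function names are ours) uses only the normal approximation and is not the exact routine underlying Supplemental Figure 3.

```python
import math

def power_for_correlation(rho, n, alpha_z_crit=1.959964):
    """Two-tailed power to detect a population correlation rho at
    sample size n, via Fisher's z: z(r) ~ Normal(z(rho), 1/(n - 3))."""
    z_rho = 0.5 * math.log((1 + rho) / (1 - rho))  # Fisher transform
    se = 1.0 / math.sqrt(n - 3)
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # normal CDF
    shift = z_rho / se
    # Probability of landing beyond either critical value.
    return (1 - phi(alpha_z_crit - shift)) + phi(-alpha_z_crit - shift)

# Textbook benchmark: n of roughly 783 gives ~80% power for rho = .10.
p80 = power_for_correlation(0.10, 783)
# Power at the median sample size in our pool (n = 223).
p_median = power_for_correlation(0.10, 223)
```

Under this approximation, a study at the median sample size in our pool (n = 223) has only about a third of the conventional 80% power to detect ρ = .10, consistent with the underpowering noted above.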
Figure 7. Contour-enhanced funnel plots.
PET analysis indicated that there was no statistically significant association between
effect sizes and their standard errors for any of the four models (i.e., overall, political ideology
dimension, rigidity domain, and full model), providing little evidence of publication bias. Still, a
significant moderation effect was present for publication-type in the overall model (F[2, 563] =
161.43, p < .001), such that relative to effect sizes drawn from initial peer-reviewed journal
articles (r = .14, 95% CI [.13, .16]), all other effect sizes were smaller (r = .11, 95% CI [.09,
.14]). This result held true in the full model, such that effect sizes from non-peer-reviewed
studies were significantly smaller in magnitude than those from peer-reviewed studies, F(1, 584) =
4.66, p = .031, with a relatively small difference in effect size, r = -.03, 95% CI (-.05, -.00), suggesting the
potential presence of publication bias in favor of the RRH.
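The logic of the PET reported above can be illustrated as follows: observed effect sizes are regressed on their standard errors with inverse-variance weights, a slope reliably different from zero signals small-study effects, and the intercept estimates the effect at perfect precision. This is a minimal sketch with fabricated data (function and variable names are ours), not the analysis code used here.

```python
import numpy as np

def pet(effects, ses):
    """Precision-effect test: weighted least-squares regression of
    effect size on standard error, weighted by inverse variance.
    Returns (intercept, slope); a nonzero slope suggests small-study
    bias, and the intercept is the precision-adjusted effect."""
    y = np.asarray(effects, dtype=float)
    se = np.asarray(ses, dtype=float)
    X = np.column_stack([np.ones_like(se), se])
    w = 1.0 / se**2
    XtW = X.T * w  # equivalent to X.T @ diag(w)
    intercept, slope = np.linalg.solve(XtW @ X, XtW @ y)
    return intercept, slope

# Fabricated data in which effects rise exactly with SE (slope 0.5),
# as they would under pure small-study bias around a true effect of .10.
ses = np.array([0.02, 0.05, 0.10, 0.20])
effects = 0.10 + 0.5 * ses
b0, b1 = pet(effects, ses)
```

Because the toy data are exactly linear in SE, the regression recovers the true intercept (.10) and slope (0.5) exactly; real data would require a significance test on the slope.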
Table 12. Moderation results for publication status.

                   r      95% CI      p      Test of Moderation
Relative to Journal:                         F(5, 580) = 1.18, p = .318
  Dissertation     -.03   -.07, .01   .090
  Book             -.02   -.11, .07   .646
  Replication      -.05   -.10, .01   .085
  Supplementals    -.01   -.08, .07   .817
  Unpublished      -.02   -.05, .02   .348
All Non-Journal    -.03   -.05, -.00  .031   F(1, 584) = 4.66, p = .031
Allegiance Associations
In the overall model, first-authors’ political donations significantly moderated the relation
between conservatism and rigidity, F (3, 286) = 55.44, p < .001. This result was not consistent
with prediction, however, in that first-authors who donated to the political left (r = .10, 95% CI [-
.06, .15]) did not significantly differ from those who donated to the political right (r = .09, 95%
CI [-.03, .22]), whereas non-donators reported the largest effect sizes (r = .15, 95% CI [.13, .17]).
Authors’ theoretical allegiance was also a significant moderator, F(3, 552) = 102.62, p < .001.
Pro-RRH papers (r = .17, 95% CI [.14, .19]) reported the largest effect sizes, followed by anti-
RRH papers (r = .12, 95% CI [.10, .14]), and neutral papers (r = .10, 95% CI [.06, .14]).
Allegiance effects remained significant after controlling for heterogeneity across political
ideology and rigidity domain, such that first-authors’ political donations significantly moderated
the relation between conservatism and rigidity, F (2, 256) = 3.82, p = .023. Consistent with our
expectations, relative to first authors who had not donated to political causes, first authors who
had donated to right-wing causes reported effect sizes that were significantly smaller (r = -.16 [-
.28, -.04], p = .007). In contrast, authors who donated to left-wing causes reported effect sizes
that did not statistically differ from non-donators (r = -.02 [-.07, .03], p = .382). Last authors who
donated to left-wing causes did not report effect sizes that differed significantly from those who
had not donated (r = .02 [-.03, .08], p = .42). No last authors had donated to right-wing causes.
Neither donation amount nor donation frequency accounted for a statistically significant amount
of variance in the full model. Theoretical allegiance also significantly moderated the relation
between conservatism and rigidity in the full model, F (3, 548) = 6.16, p < .001. Pro-RRH
authors reported significantly larger effect sizes than neutral authors (r = +.05, 95% CI [.02, .08],
t = 3.34, p < .001) and anti-RRH authors reported smaller effect sizes than neutral authors (r = -.03, 95%
CI [-.07, .01], t = 1.55, p = .123).
Discussion
The rigidity-of-the-right hypothesis, which posits that politically conservative beliefs
appeal to people who are cognitively, motivationally, and ideologically rigid, has been subjected
to decades of contentious scientific debate. The present meta-analytic review, which spanned
303 independent samples, 607 effect sizes, 35 nations, and more than 180,000 unique
participants, (1) provides a precise estimate of the magnitude and direction of the relation
between political conservatism and rigidity, (2) catalogues the extent to which effect sizes of
individual studies in the literature are distributed around said estimate, and (3) elucidates
moderator variables that account for these differences across studies. We hope that our findings,
which we discuss below, herald the beginning of the end of the longstanding controversy
surrounding the RRH.
Major Findings: A Birds-eye View
Broad theoretical accounts of ideological asymmetries (or symmetries) in rigidity have
masked complex and often-divergent—yet conceptually fertile—patterns of relations in the
literature. Most notably, correlations between economic conservatism and rigidity variables were
almost uniformly not statistically different from zero, whereas social conservatism and rigidity
variables were moderately-to-strongly positively correlated. Thus, the RRH appears to apply to
social, but not economic, conservatism. Further, estimated effects were inconsistent across levels
of rigidity-related phenomena: population estimates were marginal for rigid thinking styles and
basic cognitive mechanisms and processes, larger for motivations to avoid complexity and
ambiguity in everyday life, and quite large for ideological rigidity (dogmatism). Accordingly, the
political left and right evince asymmetrical rigidity-related motivations yet closely symmetrical
cognitive architecture and thinking styles. Dogmatism appears to be a special case, with
conservatives seeming to be clearly more dogmatic than members of the political left.
Complicating this picture, methodological moderators (e.g., type of sample or nationality)
frequently accounted for additional variance, suggesting that many previous findings have been
amplified by, or are even contingent upon, systematic error variance (or, at the very least, do not
generalize to critical environmental and psychometric modalities). For instance, nationally
representative samples did not yield significant effects, and the meta-analytic estimate for economic
conservatism was negative, albeit non-significant, in non-United States samples.
Depending on one’s theoretical perspective, these findings might be construed as a
demonstration of either (a) the RRH’s limited predictive accuracy and generalizability, which are
not commensurate with the model’s strong claims, or (b) the strongest evidence to date that
certain strains of conservatism tend to be psychologically affiliated with certain strains of
rigidity, even when accounting for methodological bias. We have a somewhat different
assessment. Namely, the model, which is now over 70 years old, has outlived its heuristic value.
Political psychology has grown beyond comparisons of the “right” and “left”—indeed, that
differences exist between any two groups, let alone demographically and culturally opposed
groups, is in some critical sense trivial. If we are interested in the psychological causes and
correlates of political ideology, we will do well to articulate when politics and cognitive-
motivational processes intersect—and why they do or do not—rather than if differences exist
between the boxes of “left” and “right”. What follows are several places to begin this
conversation.
A Classic Social-Psychological Approach
For starters, we might consider returning to our social-psychological roots and recall
Lewin’s (1936) classic formula describing behavior (B) as a function (f) of the person (P) and
their environment (E), which includes both physical and social influences: B = f(P, E).
Our assessment of the RRH literature is that it has overly focused on the “P” in that equation—
assuming that rigidity is fundamentally different between persons on the left and right and
attempting to explain why that difference exists. Less work has focused on the “E” in the
equation and less still has investigated the “f” (i.e., function), namely, the ways in which people
interact with their environment.
Applying such a socioecological lens (i.e., accounting for physical, societal, and
interpersonal environments; Oishi & Graham, 2010) allows for rigidity to be conceptualized as
an emergent property—stemming from certain configurations of traits, environments, and the
ways in which they interact, rather than existing in a social or cultural vacuum. Notwithstanding
the degree to which people actively create physical and psychological environments compatible
with their dispositional drives and traits (Bouchard, 2016), this socioecological approach allows
for new predictions concerning when and why rigidity varies across the political spectrum,
including instances of rigidity-of-the-left and rigidity being equally high or low across the left
and right. In fact, these hypotheses may be the most interesting, as they are potentially
surprising. The scarcity of theorizing about such surprising associations is not due to a lack of evidence that
they exist. The data across this meta-analysis reveal a non-negligible number of negative
correlations between conservatism and rigidity in the distributions of effect sizes, especially for
non-overlap measures (Figure 5) and when looking at economic (vs. social) conservatism (Figure
4) in non-US (vs. US) samples (Figure 6). Rather than treating these effects as noise, an
explanatorily powerful psychology of political ideology would aim to explain rigidity on both
left and right and how it manifests across diverse contexts. Doing so would not undermine any
true main effect of conservatism; indeed, no cross-national meta-analytic estimate presented here
was significantly negative. But if no one is looking for nuance, nuance is unlikely to be found.
Social, But Not Economic, Conservatism
Despite the popularity of the left vs. right political spectrum among researchers, the
psychological antecedents of political ideology are better understood in the context of their
natural correspondence to differing social and economic ideologies than in terms of a global left
vs. right distinction. If mechanisms specific to social conservatism drive conservatism-rigidity
relations, then epistemic features that are common to social and economic conservatism are of
dubious explanatory value. Indeed, in light of our findings, any conceptual resonance between
the philosophical tenets of broad-based political conservatism (e.g., system justification) and
psychological characteristics (e.g., rigidity) are probably not relevant to the question of why
people adopt conservative beliefs. This conclusion runs counter to nearly every instantiation of
the RRH but is unavoidable given that both economic and social conservatism are “right-wing”
belief systems (e.g., both ostensibly promise to justify extant hierarchies and systemic
inequalities). Hence, analyzing the epistemic differences between social and economic
conservatism may be as or more informative than comparing across the political right and left.
To this end, how do social and economic conservatism epistemically diverge? Malka and
colleagues (2017) speculate that motivations for governmental protection (vs. motivations for
individual freedoms) may lead people to adopt both culturally right-wing attitudes and
economically left-wing attitudes (in the absence of countervailing environmental pressures; see
also Lefkofridi et al., 2014). Whereas free-market (i.e., right-wing) economic ideology leaves
individuals to be buffeted by the harsh winds of uncertainty, competition, and entropy, left-wing
economic ideology offers a safeguard. Similarly, whereas left-wing social ideology embraces
difference and change as a force for good—implicitly confronting adherents with how much of
the world is unfamiliar and uncertain—traditionalist, socially conservative attitudes assure
adherents that there are clear answers to life’s stickiest questions (e.g., good and bad, life after
death), and create rigid taboos that leave little room for thoughtful deliberation and personal
responsibility (Popper, 1945, p. 164). These epistemic gaps across social and economic
conservatism offer an explanatory lens with which to understand psychological differences
between social and economic conservatives.
Where do these findings leave us? Insofar as economic and social ideology are
sometimes strongly correlated (e.g., in certain educated subsets of the U.S. population; Federico
& Malka, 2021), the field should no longer presume that psychological factors drive this
coherence—not least because social and economic ideology are nearly orthogonal within many
countries around the world (Malka et al., 2019). One straightforward implication of this
conclusion is that there is considerable downside to political psychologists focusing on a
unidimensional conservative vs. liberal ideology construct (e.g., Feldman & Johnston, 2014).
Further, many canonical findings in the ideological asymmetries literature are based on global
assessments of political ideology. As one of numerous examples, studies of neurophysiological
asymmetries in political ideology have relied overwhelmingly on global measures (e.g., Nam et al.,
2018; Oxley et al., 2008; Smith et al., 2011). Another famous pillar of political psychology—the
notion that trait openness to experience is related to conservatism—is also worth reexamining.
Vitriol et al. (2019) found that Openness within the Five Factor Model is significantly negatively
related to global political conservatism in a meta-analytic summary of 10 nationally
representative datasets (N = 75,994) with an effect size of r = -.10. If, as with rigidity, these
effects are driven by social conservatism (e.g., we might presume a population effect of r = -.25),
they may be null or even positive for economic conservatism (e.g., a population effect of r =
.05).
These considerations illustrate a broader conceptual point that we will deliberately
underscore: The complex reality of belief systems outpaces present psychological knowledge.
Different components (e.g., social vs. economic) of a single belief system (e.g., conservatism)
can satisfy competing or even opposing psychological causes, leading a highly psychologically
heterogeneous group of individuals to proclaim their adherence to what is, nominally, the same
ideology. Further, in the absence of consistent directional bottom-up (i.e., psychological)
influences on entire belief systems, top-down (i.e., environmental) effects—such as the
information environment and its associated partisan pressures, group memberships, and the
like—may be especially salient. As we argue above, adopting approaches that treat the
environment as a unit of analysis with independent influence over psychological outcomes and
behaviors may help to shed light on the degree to which ideology emerges from the dynamic
interaction between people and the worlds they inhabit (see Fiedler, 2007; Fiedler & Wänke,
2009). Moreover, it is increasingly plausible that far-right, far-left, and religiously
fundamentalist ideologies are in part caused by the same or similar psychological mechanisms
(e.g., Zmigrod, 2021) while differing dramatically from one another in other ways (e.g.,
Federico, 2021).
Parsing and classifying these variegated influences on ideology is a critical task that we
believe has been obscured by a dominant focus on the left-right spectrum. As such, we suspect
that when focusing specifically on organic links between dispositional attributes and politics,
researchers should approach politics at the level of individual beliefs and policy preferences,
such as opposition to gun control or support for paid paternity leave, insofar as they are more
likely to have epistemic commonalities that are shared across members of the public and,
consequently, reasonably accessible psychological causes and correlates. At the same time,
researchers should also focus on the top-down influences that bring these individual attitudes
together into a belief system.
Symmetrical Cognitive Architecture, Asymmetrical Motivations
The predominant version of the RRH is rooted in motivated social cognition, suggesting
that people who are especially averse to uncertainty, ambiguity, and risk feel compelled to
defend the status quo and systemic cultural and economic hierarchies as just and necessary by
adopting ideologies that imply as much, such as conservatism (Jost & Hunyady, 2005; Wakslak,
Jost, Tyler, & Chen, 2007). Although we find it likely that individuals gravitate toward belief
systems out of motivation to satisfy psychological needs, it is also likely that a broader
profile of individual differences shapes one's willingness and ability to engage
with, understand, and arbitrate between ideological options. Indeed, a variety of cognitive and
meta-cognitive traits appear to foster different elements of ideological thinking, such as
extremism and propensity for violence (Zmigrod, 2020). Critically, this distinction (i.e., needs
vs. other individual difference domains) bears considerably on the psychological mechanisms
that might plausibly underlie ideology-rigidity relations. Needs connote one set of mechanisms,
thinking styles connote a largely separate set, and so on. Thus, distinguishing between these
constructs allows us to falsify (though not confirm) the various causal “stories” of the RRH,
whereas collapsing across these rigidity variables obscures more than it illuminates.
For instance, cognitive psychologists have posited that dispositional tendencies towards
intuitive thinking yield a special affinity for conservative political principles, suggesting a
principally cognitive mechanism underlying the RRH (Talhelm et al., 2015). Conservatives may
be more intuitive (less analytic) thinkers because they tend to come from cultures with tight
social bonds and many interconnected groups, such as churches and fraternities, which causes
them to think more holistically (Talhelm et al., 2015; cf. Kahan, 2012). Our data provide little
support for this perspective. Although cognitive reflectivity (analytic thinking) was modestly
negatively related to social conservatism, few other significant effects were present.
Moreover, modern neuropsychological accounts of cognitive inflexibility (Cools &
Robbins, 2004; Diamond, 2013) have been leveraged to characterize differences between the
political left and right (e.g., Zmigrod, 2020). This literature is often taken to suggest that leftists
and rightists differ in their basic cognitive architecture. For instance, one finding indicates that
conservatives and liberals favor distinct working memory processes (i.e., inhibition and
updating, respectively), which may account for underlying ideological asymmetries in mental
flexibility (Buechner et al., 2020). Yet we found only limited evidence for this possibility.
Although general and social conservatism manifested a small positive correlation with
neuropsychological measures of cognitive inflexibility, economic conservatism yielded a non-
significant and negligible point estimate. Hence, rightists and leftists do not appear to differ
greatly in their ability to change perspectives spatially and interpersonally, shift approaches
when one’s solution to a problem is not working, inhibit one’s preexisting perspectives to store
different perspectives in working memory, change priorities, and/or take advantage of
unexpected opportunities (Cools & Robbins, 2004; Diamond, 2013).
Our results for motivational rigidity provided greater support for both the RRH and the
popular conceptualization of political conservatism as motivated social cognition (Jost et al.,
2003), which focuses on motivational needs for certainty (e.g., need for cognitive closure).
Namely, our findings revealed a moderately sized positive correlation between motivational
rigidity and social conservatism. By the same token, the correlation between motivational
rigidity and economic conservatism was quite small; as such, motivational needs for certainty appear
to have little explanatory power for economic ideology. Economic conservatism may be subject
to a set of motivational dispositions that remain obscure—a possibility that is a fruitful focal
point for future work.
Having evaluated the degree to which conservatives and liberals differ in their
motivations, thinking styles, and cognitive architecture, we now turn to rigidity in one’s beliefs
(i.e., dogmatism), where there was the greatest degree of support for the RRH. Specifically,
dogmatism manifested a positive relation with both economic and social conservatism, clearly
supporting the RRH. We consider three potential interpretations for this finding. First,
conservatives may be more dogmatic than liberals. This is perhaps both the most likely
possibility and well-worn ground that many other scholars have thoughtfully trodden (e.g.,
Altemeyer, 2002; Duckitt, 2009; Crowson, 2009). Conservatives tend to closely embrace
absolutist and intuitive ideas and practices, which is quite congenial to dogmatism (Jost et al.,
2003). Second, the primary measure of dogmatism in the literature, the DOG Scale, seems to be
confounded with religiosity and social conservatism (Conway et al., 2016; Duckitt, 2009; see
Stanovich & Toplak, 2019). Thus, our findings may be attributable to measurement error. While
important to consider, this possibility may not be especially explanatorily powerful given that
only a handful of items seem to be confounded in a way that has been empirically documented
for similar measures (e.g., Stanovich & Toplak, 2019). Still, future work addressing
measurement invariance and test bias across the political left and right for dogmatism measures
would be a major step towards resolving the magnitude of any such error variance.
Third, social conservatives may be motivated to describe and understand themselves as
dogmatic. Social conservatism arguably reflects the values of what Popper described as the
closed society: “held together by semi-biological ties—kinship, living together, sharing common
efforts, common dangers, common joys, and common distress…its institutions, including its
castes, are sacrosanct—taboo" (Popper, 1945, pp. 165-166). In this environment, dogmatism offers individuals
protection from ostracism and protects their group identity from the spread of “dangerous”
ideologies and practices, which ostensibly act as harmful destabilizing forces. Mirroring this
pattern, the “binding” moral foundations favored by conservatives (i.e., “binding individuals into
roles and duties in order to constrain their imperfect natures”) align with positive views towards
dogmatism (i.e., it upholds the larger group’s core beliefs), whereas “individualizing”
foundations favored by liberals (i.e., “teaching individuals to respect the rights of other
individuals”) align with negative views towards dogmatism (i.e., it devalues the perspectives of
other people). Arbitrating between these three possibilities is an important avenue for future
research.
In sum, our findings indicate that thinking styles and cognitive inflexibility are of dubious
relevance to left-right differences in political ideology, motivational rigidity may play a modest,
or even notable role in shaping ideology, and dogmatism is a clearly delineated axis of difference
between the political left and right.
Self-report vs. Behavioral Measures of Rigidity
Consistent with Van Hiel and colleagues’ (2010, 2016) meta-analytic reviews,
performance-based measures of cognitive rigidity differed considerably from self-reports, with
self-reports yielding effects that were r = .16 larger than performance-based measures after accounting for
variance attributable to rigidity construct and ideology type. Given that self-reports constituted
approximately 70% of all effect sizes in our review, and that previous reviews presumably
contained a similar proportion of self-reports, our replication and extension of the finding to four
additional constructs suggests yet another potentially important boundary condition to the RRH.
Still, what might account for such a dramatic split? Notwithstanding the possible
influence of method artifacts such as content overlap, common method variance, semantic
overlap, and/or acquiescence response bias (e.g., Peabody, 1961; Rokeach, 1967; Rorer, 1965),
one reasonably straightforward interpretation has been advanced by Kahan (2016), who noted
that idiosyncrasies of information processing are not easily accessible to introspective
observation, such that “there is thus little reason to believe a person’s own perception of the
quality of his reasoning is a valid measure of it” (p. 5). Indeed, cognitive psychology and
neuropsychology typically rely on behavioral tasks to assess cognition (e.g., cognitive ability or
memory are rarely measured using self-reports), in part because self-assessments of cognitive
performance are frequently inaccurate (Furnham, 2001; Kruger & Dunning, 1999). A recent
meta-analytic review of the relation between self-report and neuropsychological tests of rigidity
found no significant relation, leading the authors to conclude that self-report tests are not valid
proxies for cognitive flexibility (Howlett et al., 2021). Differences between the left and right on
self-report measures of rigidity, absent equivalent differences on behavioral measures, may
corroborate the possibility that self-report measures of cognitive style differ substantially from
behavioral measures in their validity—and that the evidence for the RRH, which is largely
predicated on self-report measures, is far less compelling than it initially appears.
Another plausible explanation for the self-report vs. performance-based split is more
charitable to the RRH. Ostensibly “objective” measures of cognitive style may be unreliable or
otherwise demonstrate poor construct validity, perhaps because of their high situational
specificity and resulting poor replicability (e.g., Epstein & O’Brien, 1985). Most trait-behavior
correlations famously have a ceiling of roughly .30 or at best .40 when the measured behaviors
are not aggregated across multiple situations (Kenrick & Funder, 1988; Mischel, 1968). Studies
examining a wide-range of self-reports and performance-based measures of rigidity within a
latent variable framework, which to our knowledge are not present in the literature, are likely
necessary to resolve this question. Still, the limited pool of evidence suggests that performance-
based measures of rigidity tend to overlap in expected directions with self-report measures of
rigidity (e.g., scores on the CRT are correlated .18 to .21 with need for cognition; Burger et al.,
2020). Such results provide modest support for interrelations among these measures while
raising questions concerning the extent of their convergent validity.
Of course, perhaps the most likely possibility is that behavioral measures and self-reports
each have their own sets of psychometric strengths and weaknesses and detect related but distinct
constructs. Future research investigating the reasons for left/right differences across behavioral
and self-report measures of cognitive style may, therefore, be of considerable utility in clarifying
the psychological processes underlying political ideology.
National- and Sample-level Differences in Rigidity-Conservatism
Prominent studies have argued that an overreliance on WEIRD, politically engaged
samples in the literature has biased conclusions in favor of the RRH due to “reversal effects,”
whereby ideologically constrained environments (i.e., those with a strict, unidimensional
normative structuring of political ideology) lead highly politically engaged individuals to adopt
hierarchically structured political attitudes (e.g., Malka et al., 2017). Hence, examining the RRH
disproportionately in politically engaged populations might obscure differential relations across
economic and social conservatism. We could not test this possibility directly in the present
review, as too few studies reported participants’ levels of political engagement. Still, moderator
analyses for sample composition provided indirect support for this hypothesis (e.g., Malka et al.,
2014). In particular, the relation between conservatism and rigidity was typically non-significant
in nationally representative samples and was smaller in such samples than in all other sample
types. As such, scholars’ reliance on results drawn from highly engaged samples (i.e., student
and online samples constituted approximately 70% of effect sizes in the current review) may have
resulted in estimates favoring the RRH. Perhaps running counter to this interpretation, however,
is that sample WEIRDness was not a significant moderator in the full model. Further efforts to
conduct research outside of the Western political landscape are needed to clarify the extent to
which political engagement within and across cultures bears on the RRH.
Relatedly, critics of the RRH have noted that an ideological restriction of range in U.S.
samples may result in spurious support for the RRH in the presence of underlying curvilinear
effects (Greenberg & Jonas, 2003). Specifically, given that far-left and far-right ideological
extremism is associated with rigidity (e.g., Harris & Van Bavel, 2020), and the U.S. has many
more right-wing extremists than left-wing extremists, only the rightward half of the rigidity-
extremism curve would be clearly visible in most U.S. samples. Of course, one could view such
a lack of left-wing extremism as evidence in favor of the RRH. But it may be worth comparing
rigidity-conservatism relations across nations that are more balanced in the degree to which
norms lean towards capitalism vs. communism. Indeed, Canada, Hungary, Japan, Sweden, and
U.K. samples demonstrated significantly smaller effects than U.S. samples, whereas only Spanish
samples demonstrated larger effects than U.S. samples. As 417 of the 708 effect sizes in the present
review were drawn from U.S. samples, the present review is not immune from this issue.
Delving further into the nation-level findings, in recent years, relatively radical socialist
parties have seen growing popularity in a handful of European Union countries (e.g., the Dutch
Socialist Party; Germany’s “die Linke”). Countries such as Canada, where, in contrast, far-left
and far-right politics have never been a prominent force (Ambrose & Mudde, 2015), may also be
informative to examine. To that end, neither the Netherlands (r = .10, 95% CI [-.05, .26], k = 4)
nor Germany (r = .10, 95% CI [-.13, .24], k = 6) showed significant effects for general
conservatism. Canada similarly demonstrated a non-significant effect for general conservatism (r =
.05, 95% CI [-.01, .11], k = 14). Countries currently dominated by conservative parties, such as
Turkey (r = .13, 95% CI [.10, .16], k = 13) and Poland (r = .20, 95% CI [.15, .25], k = 21), however,
tended to show significant conservatism-rigidity effects (although, notably, results for economic
conservatism were non-significant and slightly negative in Turkey and significant and negative
with a moderately sized point estimate in Poland). Studies carried out with citizens of presently
socialist countries, such as China, Cuba, Laos, Algeria, Venezuela, or Nicaragua, are not
presently available in the literature. Still, few countries evinced negative point estimates,
suggesting that the RRH may be relatively robust in many nations.
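The r and 95% CI values reported above are conventionally obtained by pooling effect sizes on the Fisher z scale and back-transforming to the r metric. The following is a minimal sketch of that back-transformation only (the pooled value and standard error are hypothetical illustrations, not output from our models):

```python
import math

def z_to_r_ci(r_pooled, se_z):
    """Back-transform a Fisher-z point estimate and its 95% CI to the r scale."""
    z = math.atanh(r_pooled)               # Fisher r-to-z transform
    half = 1.959963984540054 * se_z        # two-tailed 95% margin on the z scale
    return tuple(round(math.tanh(v), 3) for v in (z - half, z, z + half))

# Hypothetical pooled estimate: r = .13 with SE = .015 on the z scale
lo, r, hi = z_to_r_ci(0.13, 0.015)  # -> (0.101, 0.13, 0.159)
```

Because the z scale is unbounded and approximately normal, intervals computed this way remain inside [-1, 1] after back-transformation, which is why meta-analyses of correlations typically pool on z rather than on r directly.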
How Biased is the RRH Literature?
Semantic Overlap
Our results demonstrate that rigidity-related content in popular measures of political
conservatism has inflated previous meta-analytic estimates. Jost et al. (2017) meta-analytically
estimated the overall relations between political conservatism, on the one hand, and dogmatism
and cognitive/perceptual rigidity, on the other, to be r = .51 and r = .38, respectively, and Van
Hiel et al. (2016) estimated the relation for cognitive/perceptual rigidity to be r = .24 (they did
not examine dogmatism as an outcome). In contrast, after removing criterion-contaminated
measures such as the F Scale, RWA Scale, and C Scale from our study pool (i.e., leaving only
relatively non-overlapping measures of ideology, such as policy preferences or self-identification as a
liberal vs. conservative), dogmatism and cognitive rigidity’s relations were far smaller. This
discrepancy is consistent with the large moderator effect we found for content overlap—the use
of such measures increased effect sizes with a magnitude of r = .26. Hence, previous authors’
reliance on measures imbued with rigidity content has almost certainly distorted the field’s
conclusions in favor of the RRH.
These results, and the problem of overlap more broadly, can perhaps be understood as a
function of theory-ladenness (see Brewer & Lambert, 2002). The interdependence of theory and
measurement in psychological science (i.e., one needs to have an initial theory or at least a
conceptual sketch of a construct to design a measure of it) may limit opportunities to identify
biases that are simultaneously embedded in a measure and the theory underlying said measure.
Although this paradox can be resolved by adopting multi-method approaches, theory-ladenness
has rarely been accounted for in political psychology.
Moreover, our meta-analytic estimates do not account for other potentially important
forms of semantic content overlap, such as the contaminating influence of content related to
political conservatism in measures of rigidity. The Gough-Sanford Rigidity Scale, for instance,
includes items that almost certainly reflect social conservatism, such as “I never miss going to
church.” Based on our review of the literature, the Gough-Sanford Scale is far and away the most
widely used self-report measure of cognitive rigidity. Hence, "true" effect sizes for conservatism-
rigidity relations may be even smaller than we have estimated, and are perhaps even barely
distinguishable from zero, although future work using non-contaminated measures is needed to
better characterize this association.
Publication Bias
By and large, our meta-analytic examination provided mixed evidence for publication
bias in the RRH literature. Admittedly, however, the options for evaluating publication bias via
the multi-level model are limited. Although PET-PEESE and examination of the funnel plots did
not reveal publication bias, both approaches are flawed in important ways (Gervais, 2015; Lau et
al., 2006), especially under conditions of high effect size heterogeneity (Renkewitz & Keiner,
2018). Arguably the most direct means of probing publication bias, a moderator analysis for
published vs. unpublished effect sizes, revealed significantly higher effect sizes in published than
unpublished manuscripts. Replication studies, in particular, reported the smallest overall effects.
This finding is consistent with Kvarven et al. (2020), who found that meta-analytic effect sizes
are typically three times larger than preregistered replications carried out across multiple
laboratories (and that methods of correcting meta-analyses for bias do not improve meta-analytic
results). Accordingly, given our overall meta-analytic estimate of r = .13, a multi-lab replication
of the RRH might be expected to produce a point estimate of roughly r = .04 to .05.
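The PET component of PET-PEESE, mentioned above, can be sketched as a precision-weighted meta-regression of observed effect sizes on their standard errors; the intercept estimates the effect that a hypothetical, infinitely precise study would report. The data below are toy values chosen to mimic small-study inflation, not drawn from our meta-analytic pool:

```python
def pet_intercept(effects, ses):
    """PET: regress effect sizes on their standard errors, weighting each
    study by its precision (1/SE^2); the intercept at SE = 0 serves as a
    bias-adjusted estimate of the underlying effect."""
    w = [1.0 / se ** 2 for se in ses]
    sw = sum(w)
    xbar = sum(wi * x for wi, x in zip(w, ses)) / sw
    ybar = sum(wi * y for wi, y in zip(w, effects)) / sw
    slope = (sum(wi * (x - xbar) * (y - ybar)
                 for wi, x, y in zip(w, ses, effects))
             / sum(wi * (x - xbar) ** 2 for wi, x in zip(w, ses)))
    return ybar - slope * xbar

# Toy pattern of small-study inflation: imprecise studies report larger rs
effects = [0.30, 0.22, 0.15, 0.12, 0.10]
ses = [0.20, 0.15, 0.08, 0.05, 0.03]
adjusted = pet_intercept(effects, ses)  # roughly .07, well below the raw mean
```

In the full PET-PEESE procedure, if the PET intercept is significant, the regression is rerun with squared standard errors as the predictor (PEESE); as noted above, both estimators are known to behave poorly under the high effect size heterogeneity present in this literature.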
Further, assuming a true effect of r = .10, only 34.5% of studies in our meta-analytic pool
were sufficiently powered to detect the effect reliably. This finding provides further evidence
that our overall meta-analytic estimate is likely to be inflated. Issues of low statistical power may
also explain the large degree of heterogeneity that was present after accounting for moderator
variables.
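Power calculations of this kind can be approximated with the standard Fisher-z formula for correlations. The sketch below is a textbook approximation for illustration, not necessarily the exact procedure used in our analyses:

```python
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect a correlation r in a two-tailed test,
    via the Fisher-z normal approximation:
    N = ((z_alpha + z_beta) / atanh(r))^2 + 3."""
    z = NormalDist().inv_cdf
    z_alpha, z_beta = z(1 - alpha / 2), z(power)
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)

n_needed = n_for_correlation(0.10)  # 783 participants for r = .10 at 80% power
```

By this approximation, reliably detecting r = .10 requires nearly 800 participants, which contextualizes why only about a third of studies in the pool were adequately powered.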
Political Bias
In the wake of psychology’s replication crisis, political bias has been highlighted as a
potentially important source of non-replicable research findings (e.g., Jussim et al., 2016; cf.
Reinero et al., 2019), perhaps because the ratio of liberals to conservatives within social and
personality psychology has been estimated from 8:1 to nearly 100:1 (Haidt, 2011; Inbar &
Lammers, 2012; Langbert, Quain, & Klein, 2016; von Hippel & Buss, 2017). Such a political tilt
by itself may not be worrisome if scholars can maintain a reasonably objective stance toward
politically tinged scientific claims that activate their congeniality bias, a variant of
confirmation bias in which individuals are especially likely to accept assertions that accord with
their broader worldviews (Hart et al., 2009). Still, in a survey of 506 members of the Society for
Personality and Social Psychology, Inbar and Lammers (2012) found that a substantial
proportion of left-leaning respondents were openly willing to discriminate against right-leaning
applicants in hiring, symposia invitations, journal reviews, and grant reviews. This finding is
consistent with past research suggesting that grant proposals and Institutional Review Board
submissions are sometimes rejected due to their political implications (see Ceci & Williams,
2018, for a review).
Our results provided provisional although hardly conclusive support for this possibility.
Although authors’ theoretical allegiance and political ideology were significant moderators of
the relations between conservatism and cognitive rigidity in the expected direction, results were
modest in magnitude. Still, pro-RRH authors reported larger effect sizes and anti-RRH authors
reported smaller effect sizes. Similarly, authors who donated to right-wing organizations
reported smaller effect sizes than both non-donators and authors who donated to left-wing
organizations, although non-donators and left-wing donators did not significantly differ from one
another. We can envision plausible, mutually compatible interpretations for this latter result, one
of which is consistent with the RRH and one of which is not. First, right-wing authors may be
more susceptible to ideological bias than other authors (Baron & Jost, 2019), although the
relative paucity of right-wing authors in the literature makes interpreting the size of this potential
asymmetry difficult. Second, given that social and personality psychologists overwhelmingly
hold left-wing views (Inbar & Lammers, 2012), perhaps authors who donated to left-wing causes
were not any more liberal, on aggregate, than those who had no record of political donations.
Such a severe restriction of range could account for the statistical similarity between non-
donators and left-wing donators, and therefore could raise the specter of undetected bias in the
literature, as right-wing donators reported smaller effects. Given that there were only a handful
of first authors and zero last authors who had donated to right-wing organizations, however, the
sufficiency of either account remains difficult to evaluate.
Limitations and Future Directions
Our review, although by far the most comprehensive quantitative synthesis of literature
bearing on the RRH, is not without limitations. One limitation is the possibility of correlated
error variance across multiple studies; a likely source of systematic error is the underdeveloped
validity of many or most rigidity constructs. Given that loose nomological networks can limit the
falsifiability of the auxiliary hypotheses that are often implicit in psychological research (e.g., the
assumption that one’s measurement instruments work), the possibility that we were merely
aggregating systematic error is an unavoidable limitation of the present review (Cronbach &
Meehl, 1955/1973).
A related limitation is our reliance on authors’ language when coding rigidity variables.
Although this approach had the advantage of minimizing our own biases, it is susceptible to
those of the study authors, including “jingle-jangle” fallacies (Block, 1995). Jingle fallacies
entail the erroneous assumption that two measures with the same or similar names (e.g., the
Dogmatism Scale and the DOG Scale) reflect the same construct, whereas jangle fallacies entail
the erroneous assumption that two measures with different names (e.g., the Intolerance of
Ambiguity Scale and the Need for Cognitive Closure Scale) reflect different constructs. For
example, although we opted to aggregate Rokeach’s (1960) Dogmatism Scale and Altemeyer’s
(1996) DOG Scale in our analyses, the two measures may operationalize dogmatism quite
differently8. These issues intersect with the previously described questionable construct-
validational properties of self-report measures of rigidity. Still, our use of multiple meta-analytic
models that differ in specificity may buffer against interpretative errors owing to loose
nomological networks and jingle-jangle fallacies.
Another limitation, in this case concerning our analytical approach, is that we did not test
for many 3-way statistical interactions when examining potential moderators (i.e., construct by
conservatism-type by third moderator) owing to inadequate statistical power. This approach
assumes that each moderator’s impact on the relation between a given level of political
conservatism (e.g., economic) and a given rigidity variable (e.g., dogmatism) is equivalent to that
moderator’s impact on all other levels of conservatism for all other rigidity variables. Hence,
interpretations of moderator effects in models with multiple moderators (i.e., all except for
Model 1) should be made according to the broader pattern of changes rather than individual
8Altemeyer’s DOG scale discards central elements of Rokeach’s conceptualization, deemphasizing cognitive-organizational aspects and newly emphasizing belief certainty. The DOG Scale also excludes content related to disdain for, and disgust towards, individuals who challenge one’s core beliefs, these being integral to emotional and behavioral characteristics of dogmatism, per Rokeach (1960) and some other theorists (Johnson, 2009).
effect size estimates. We discourage interpretation of specific meta-regression coefficients across
levels of a given moderator in meta-regression models with multiple moderators. For example, despite the negative
meta-regression coefficients for certain conservatism-construct pairings on performance-based
measures, we cannot necessarily conclude that economic conservatism is negatively related to
cognitive rigidity as measured by performance-based instruments. Rather, our results suggest
that, by and large, sizeable relations between conservatism and rigidity are considerably smaller
for performance-based measures of rigidity vs. self-reports.
Moreover, future research administering domain-general self-report measures, such as the
DOG Scale, in complement with measures that assess rigidity in the context of several specific
domains (e.g., politics, movies, sports) or issues may be one means of addressing problems of
measurement non-invariance across the political left and right. One crucial step in further
testing the RRH will be to evaluate the degree to which well-established, ideologically
neutral, performance-based measures from the neuropsychology literature, on the one hand, and
self-report measures of cognitive rigidity, on the other, predict phenotypically diverse criterion-
related outcomes. Future research might consider such criterion-related outcomes as: (a)
behavioral aggression towards entities that threaten one’s beliefs; (b) between-group differences
(e.g., cult members vs. lawyers); (c) stability across time and domain, assessed either by the
consistency of a single belief (e.g., Christianity) or the extremity of one’s beliefs in general, even
as they shift in content (e.g., Christian fundamentalists who become militant atheists); (e) various
cognitive biases (see Ditto et al., 2019); (f) partisan moral disengagement and partisan
schadenfreude (Kalmoe & Mason, 2018); (g) metacognitive sensitivity (Rollwage et al., 2018);
(h) the taxonicity9 of one’s political views (e.g., in referring to environmental mold taxa, Meehl
[1992] noted that “the unquestioned existence of such highly cohesive and dynamically effective
taxa as Trotskyism, Baptist Fundamentalism, or Frenzied Egalitarianism—one could exemplify
with a variety of political, economic, religious, and even esthetic types—should suffice to
persuade us that strong and important taxa in the personality domain need not originate in germs,
genes, or single dramatic environmental happenings" [p. 148]); and, finally, (i) the incremental
validity of self-report and performance-based measures of rigidity for the criteria already
mentioned over and above related, but conceptually distinct, constructs (e.g., low openness,
right- and left-wing authoritarianism, affective polarization).
Another future direction concerns symmetries and asymmetries in existential needs to
preserve safety and security and to reduce danger and threat, the other major prong of the theory
conceptualizing conservatism as arising from motivated social cognition. A meta-analysis of Jost
and colleagues (2017) attempted to shed light on this prong but fell prey to many of the same
objections that we outlined in the current research, most notably content overlap, as many of the
threat measures deal with politically relevant threats. Further calling previous findings into
question are recent large-scale failures to replicate (a) ideological asymmetries in
psychophysiological reactions to threatening stimuli (Osmundsen et al., 2020) and (b) the effects
of mortality salience posited by terror management theory, which bears on existential motives
9To briefly elaborate on point (h), Meehl’s (1992) “favorite example” (p. 334) of an environmental mold taxon was Trotskyism (although he also offered examples pertinent to far-right wing ideology): “As an undergraduate…I quickly learned that there was a pair of beliefs that, taken jointly, were pathognomic of the ‘Trotskyist syndrome.’ If a student opined that (a) the Soviet Union is a workers’ state and must be defended at all costs against anybody, including the USA and (b) Stalin is a stupid counter-revolutionary bureaucrat, one could predict—not with 90% or 95% but with 100% accuracy—that the person would hold a dozen or more other beliefs…the statistical tightness of the Trotskyist syndrome was greater than any nosologically entity in psychopathology” (p. 334). This observation seems prima facie pertinent to rigidity, especially given evidence that individuals who exhibit particular clusters of core beliefs, attitudes, and behaviors that are predictive of ideological extremism demonstrate 95-99% within-group agreement on a host of topics that are otherwise controversial among moderates and centrists (Hawkins et al., 2018).
72
(Klein et al., 2019). Whether previous results are due to publication bias and Type 1 error, or
whether the effects in question are simply more nuanced than previously thought, such
replication failures suggest that the associations between existential needs and political
conservatism should be thoroughly scrutinized. A more comprehensive and nuanced meta-
analysis of the purported link between conservatism and existential threat is necessary.
A more general limitation of the political asymmetry hypothesis is that evidence for left-right differences is correlational and therefore does not speak directly to the absolute level of rigidity in either group. Indeed, a correlation may suggest that conservatives and liberals
differ, even if both exhibit similarly high (or low) scores on a measure of some epistemic
motivation. For instance, liberals and conservatives could both exhibit low levels of cognitive
rigidity (as indicated by average scores below the midpoint of the scale) even if the correlation
between conservatism and rigidity were significant and positive. Consider that the mean and
standard deviation for dogmatism in one study were M = 2.94, SD = .86 on a 6-point scale
(Crowson, DeBacker, & Davis, 2008). Thus, most participants scored below the midpoint of the
scale (3.5), suggesting that most participants were slightly non-dogmatic, presuming, of course,
that this midpoint is psychologically interpretable as reflecting a moderate level of the construct
(see Blanton & Jaccard, 2006). Nevertheless, the correlation between dogmatism and
conservatism in this study was significant and positive (r = .37). When converted to Cohen’s d
and then used to estimate plausible mean differences across liberals and conservatives in the
sample, the mean conservative score could be expected to be around 3.28 and the mean liberal
score could be expected to be around 2.69. Note that both groups still score below the midpoint
of the scale on average; the political asymmetry found in this study does not allow one to
conclude that conservatives are, in an absolute sense, dogmatic, whereas liberals are not (see
Reyna, 2017).
Conclusion
We hope that the present review allows for more nuanced, albeit perhaps more
ambiguous, accounts of the psychological correlates of political ideology to emerge in place of
the RRH. Accordingly, our results may be cause for optimism. As William James (1896) noted,
“Science, like life, feeds on its own decay. New facts burst old rules; then newly divined
conceptions bind old and new together into a reconciling law” (p. 320). We welcome and
anticipate challenges to our conclusions, but hope, at the very least, that psychological science
will be somewhat closer to reconciling an intellectual conflict that has worn on for well over half
a century. It suffices to say, however, that we cannot be absolutely certain.
References
*References marked with an asterisk indicate studies included in the meta-analysis.
Adorno, T., Frenkel-Brenswik, E., Levinson, D. J., & Sanford, R. N. (1950). The authoritarian
personality. Verso Books.
Altemeyer, B. (1996). The authoritarian specter. Cambridge, MA: Harvard University Press.
Altemeyer, B. (2002). Dogmatic behavior among students: Testing a new measure of
dogmatism. The Journal of Social Psychology, 142, 713-721.
Ambrose, E., & Mudde, C. (2015). Canadian multiculturalism and the absence of the far right. Nationalism and Ethnic Politics, 21, 213-236.
*Arceneaux, K., & Vander Wielen, R. J. (2017). Taming intuition: How reflection minimizes partisan reasoning and promotes democratic accountability. Cambridge University Press.
Amodio, D. M., Jost, J. T., Master, S. L., & Yee, C. M. (2007). Neurocognitive correlates of
liberalism and conservatism. Nature Neuroscience, 10, 1246-1247.
Azevedo, F., Jost, J. T., Rothmund, T., & Sterling, J. (2019). Neoliberal ideology and the
justification of inequality in capitalist societies: Why social and economic dimensions of
ideology are intertwined. Journal of Social Issues, 75, 49-88.
*Bahçekapili, H. G., & Yilmaz, O. (2017). The relation between different types of religiosity and
analytic cognitive style. Personality and Individual Differences, 117, 267-272.
*Baker, F., & Schulberg, H. C. (1969). Community mental health ideology, dogmatism, and
political-economic conservatism. Community Mental Health Journal, 5, 433-436.
*Baldner, C., Pierro, A., Chernikova, M., & Kruglanski, A. W. (2018). When and why do
liberals and conservatives think alike? An investigation into need for cognitive closure,
the binding moral foundations, and political perception. Social Psychology, 49, 360–368.
*Baldner, C., & Pierro, A. (2019). Motivated prejudice: The effect of need for closure on anti-
immigrant attitudes in the United States and Italy and the mediating role of binding moral
foundations. International Journal of Intercultural Relations, 70, 53-66.
Barber, J. (2012). Attitudinal responses to mixed evidence: The role of attitude extremity and
political ideology in effecting change versus resistance (Doctoral dissertation). Available
from ProQuest Dissertations & Theses Global database.
Baron, J., & Jost, J. T. (2019). False equivalence: Are liberals and conservatives in the United
States equally biased? Perspectives on Psychological Science, 14, 292-303.
*Baxter, C. (2011). Hypocrisy in upholding the status quo: The role of the status quo in the
motivated social cognition model of political conservatism (Unpublished Master’s thesis).
University of Guelph. Ontario, Canada.
*Benjamin Jr, A. J. (2014). Chasing the elusive left-wing authoritarian: An examination of
Altemeyer’s right-wing authoritarianism and left-wing authoritarianism scales. National
Social Science Journal, 43, 7-13.
Benoit, K., & Laver, M. (2006). Party policy in modern democracies. Routledge.
Blanton, H., & Jaccard, J. (2006). Arbitrary metrics in psychology. American Psychologist, 61,
27-41.
Block, J., & Block, J. (1951). An investigation of the relationship between intolerance of
ambiguity and ethnocentrism. Journal of Personality, 19, 303–311.
Block, J. (1995). A contrarian view of the five-factor approach to personality
description. Psychological Bulletin, 117, 187-215
Bouchard Jr, T. J. (2016). Experience producing drive theory: Personality “writ
large”. Personality and Individual Differences, 90, 302-314.
*Brandt, M. J., Chambers, J. R., Crawford, J. T., Wetherell, G., & Reyna, C. (2015). Bounded
openness: The effect of openness to experience on intolerance is moderated by target
group conventionality. Journal of Personality and Social Psychology, 109, 549-568.
*Brandt, M. J., & Crawford, J. (2013). Replication-extension of 'Not for all the tea in China!' Political ideology and the avoidance of dissonance-arousing situations (Nam, H., Jost, J., & Van Bavel, J. [2013], PLoS One, 8, e59837).
*Brandt, M. J., & Crawford, J. (unpublished data).
Brandt, M. J., Sibley, C. G., & Osborne, D. (2019). What is central to political belief system
networks? Personality and Social Psychology Bulletin, 45, 1352-1364.
*Brandt, M. J., Evans, A. M., & Crawford, J. T. (2015). The unthinking or confident extremist?
Political extremists are more likely than moderates to reject experimenter-generated
anchors. Psychological Science, 26, 189-202.
*Brandt, M. J., & Reyna, C. (2010). The role of prejudice and the need for closure in religious
fundamentalism. Personality and Social Psychology Bulletin, 36, 715-725.
*Brant, W. D., Batres, A., & Hays, R. (1980). Authoritarian traits as predictors of preference for
candidates in 1980 United States presidential election. Psychological Reports, 47, 416-
418.
*Bronstein, M. V., Dovidio, J. F., & Cannon, T. D. (2017). Both bias against disconfirmatory
evidence and political orientation partially explain the relationship between dogmatism
and racial prejudice. Personality and Individual Differences, 105, 89-94.
Budner, N. Y. (1962). Intolerance of ambiguity as a personality variable. Journal of
Personality, 30, 29-50.
Buechner, B. M., Clarkson, J. J., Otto, A. S., Hirt, E. R., & Ho, M. C. (2021). Political ideology
and executive functioning: The effect of conservatism and liberalism on cognitive
flexibility and working memory performance. Social Psychological and Personality
Science, 12, 237-247.
*Burger, A. M., Pfattheicher, S., & Jauch, M. (2020). The role of motivation in the association of political ideology with cognitive performance. Cognition, 195, 104124.
*Burke, S. & LaFrance, M. (unpublished data).
*Burke, S. (unpublished data).
*Burke, S. E., Dovidio, J. F., Przedworski, J. M., Hardeman, R. R., Perry, S. P., Phelan, S. M.,
… van Ryn, M. (2015). Do contact and empathy mitigate bias against gay and lesbian
people among heterosexual first-year medical students? A report from the Medical
Student CHANGE Study. Academic Medicine, 90, 645–651.
*Burke, S., Perry, S. P., & Dovidio, J. F. (unpublished data).
Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social
Psychology, 42, 116–131.
Cacioppo, J. T., Petty, R. E., Kao, C. F., & Rodriguez, R. (1986). Central and peripheral routes to
persuasion: An individual difference perspective. Journal of Personality and Social
Psychology, 51, 1032–1043.
*Callender, K. A. O. (2015). One of us or one of them? Psychological responses to uncertainty
about the sexual identity of others (Doctoral dissertation, Yale University).
Caprara, G. V., & Vecchione, M. (2018). On the left and right ideological divide: Historical
accounts and contemporary perspectives. Political Psychology, 39, 49-83.
Card, N. A. (2012). Applied meta-analysis for social science research. New York, NY: Guilford
Press.
Carl, N. (2014). Verbal intelligence is correlated with socially and economically liberal
beliefs. Intelligence, 44, 142-148
Carney, D. R., Jost, J. T., Gosling, S. D., & Potter, J. (2008). The secret lives of liberals and
conservatives: Personality profiles, interaction styles, and the things they leave
behind. Political Psychology, 29, 807-840.
*Carraro, L., Castelli, L., & Macchiella, C. (2011). The automatic conservative: Ideology-based
attentional asymmetries in the processing of valenced information. PLoS One, 6, e26456.
Ceci, S. J., & Williams, W. M. (2018). Socio-political values infiltrate the assessment of scientific
research. In Jussim, L., Crawford, J. (Eds.), The politics of social psychology (pp. 156–
167), New York, NY: Routledge.
*Chan, E. Y. (2019). Political conservatism and anthropomorphism: An investigation. Journal of Consumer Psychology. Advance online publication.
Cherry, K. M., Vander Hoeven, E., Patterson, T. S., & Lumley, M. N. (2021). Defining and
measuring “psychological flexibility”: A narrative scoping review of diverse flexibility
and rigidity constructs and perspectives. Clinical Psychology Review. Advance online
publication.
*Chirumbolo, A. (2002). The relationship between need for cognitive closure and political orientation: The mediating role of authoritarianism. Personality and Individual Differences, 32, 603-610.
*Chirumbolo, A., & Leone, L. (2008). Individual differences in need for closure and voting behaviour. Personality and Individual Differences, 44, 1279-1288.
*Chirumbolo, A., Areni, A., & Sensales, G. (2004). Need for cognitive closure and politics:
Voting, political attitudes and attributional style. International Journal of Psychology, 39,
245-253.
*Choma, B. (2008). Why are people liberal? A motivated social cognition perspective (Doctoral
dissertation). Brock University, Ontario, Canada.
*Choma, B. L., Hafer, C. L., Dywan, J., Segalowitz, S. J., & Busseri, M. A. (2012). Political
liberalism and political conservatism: Functionally independent? Personality and
Individual Differences, 53, 431-436.
*Cichocka, A., Bilewicz, M., Jost, J. T., Marrouch, N., & Witkowska, M. (2016). On the
grammar of politics—or why conservatives prefer nouns. Political Psychology, 37, 799-
815.
Cizmar, A. M., Layman, G. C., McTague, J., Pearson-Merkowitz, S., & Spivey, M. (2014).
Authoritarianism and American political behavior from 1952 to 2008. Political Research
Quarterly, 67, 71-83.
Claessens, S., Fischer, K., Chaudhuri, A., Sibley, C. G., & Atkinson, Q. D. (2020). The dual
evolutionary foundations of political ideology. Nature Human Behaviour, 4, 336-345.
Cohen, P., West, S. G., & Aiken, L. S. (2014). Applied multiple regression/correlation analysis
for the behavioral sciences. Psychology Press.
*Collins, T. P., Crawford, J. T., & Brandt, M. J. (2017). No evidence for ideological asymmetry
in dissonance avoidance. Social Psychology, 48, 123-134.
Conover, P. J., & Feldman, S. (1981). The origins and meaning of liberal/conservative self-
identifications. American Journal of Political Science, 617-645.
Converse, P. (1964). Ideology and discontent. New York: The Free Press.
*Conway III, L. G., Gornick, L. J., Houck, S. C., Anderson, C., Stockert, J., Sessoms, D., &
McCue, K. (2016). Are conservatives really more simple‐minded than liberals? The
domain specificity of complex thinking. Political Psychology, 37, 777-798.
Cools, R., & Robbins, T. W. (2004). Chemistry of the adaptive mind. Philosophical
Transactions of the Royal Society of London. Series A: Mathematical, Physical and
Engineering Sciences, 362, 2871-2888.
*Cornelis, I., & Van Hiel, A. (2006). The impact of cognitive styles on authoritarianism based
conservatism and racism. Basic and Applied Social Psychology, 28, 37-50.
*Costello, T.H., & Lilienfeld, S.O. (unpublished data 1).
*Costello, T.H., & Lilienfeld, S.O. (unpublished data 2).
Costello, T. H., & Lilienfeld, S. O. (2020, June 10). Social and economic political ideology consistently operate as mutual suppressors: Implications for personality, social, and political psychology. https://doi.org/10.31234/osf.io/gq4js
*Costin, F. (1971). Dogmatism and conservatism: An empirical follow-up of Rokeach's
findings. Educational and Psychological Measurement, 31, 1007-1010.
*Critcher, C. R., Huber, M., Ho, A. K., & Koleva, S. P. (2009). Political orientation and
ideological inconsistencies: (Dis) comfort with value tradeoffs. Social Justice
Research, 22, 181-205.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological
Bulletin, 52, 281-302.
*Crowson, H. M. (2003). Is the Defining Issues Test a measure of moral judgment development:
A test of competing claims. Dissertation Abstracts International Section A: Humanities
and Social Sciences, 63, 3527.
*Crowson, H. M. (2004). Human rights attitudes: Dimensionality and psychological
correlates. Ethics & Behavior, 14, 235-253.
*Crowson, H. M. (2009). Does the DOG scale measure dogmatism? Another look at construct
validity. The Journal of Social Psychology, 149, 365-383.
*Crowson, H. M. (2009). Are all conservatives alike? A study of the psychological correlates of
cultural and economic conservatism. The Journal of Psychology, 143, 449-463.
Crowson, H. M., Debacker, T. K., & Thoma, S. J. (2006). The role of authoritarianism, perceived
threat, and need for closure or structure in predicting post-9/11 attitudes and beliefs. The
Journal of Social Psychology, 146, 733-750.
*Crowson, H. M., Thoma, S. J., & Hestevold, N. (2005). Is political conservatism synonymous
with authoritarianism? The Journal of Social Psychology, 145, 571-592.
*Crowson, H. M., DeBacker, T. K., & Davis, K. A. (2008). The DOG Scale: A valid measure of dogmatism? Journal of Individual Differences, 29, 17-24.
*Davids, A. (1955). Some personality and intellectual correlations of intolerance of
ambiguity. The Journal of Abnormal and Social Psychology, 51, 415-420.
Dean, J. W. (2006). Conservatives without conscience. Penguin.
*Deppe, K. D., Gonzalez, F. J., Neiman, J. L., Jacobs, C., Pahlke, J., Smith, K. B., & Hibbing, J.
R. (2015). Reflective liberals and intuitive conservatives: A look at the Cognitive
Reflection Test and ideology. Judgment & Decision Making, 10, 314-331.
Diamond, A. (2013). Executive functions. Annual Review of Psychology, 64, 135-168.
*Di Renzo, G. J. (1968). Dogmatism and presidential preferences in the 1964
elections. Psychological Reports, 22, 1197-1202.
*Diez, C. (2007). Guilty or not guilty? The impact of ideology and need for closure on conviction proneness (Doctoral dissertation). ProQuest Dissertations Publishing.
Ditto, P. H., Liu, B. S., Clark, C. J., Wojcik, S. P., Chen, E. E., Grady, R. H., ... & Zinger, J. F.
(2019). At least bias is bipartisan: A meta-analytic comparison of partisan bias in liberals
and conservatives. Perspectives on Psychological Science, 14, 273-291.
Duarte, J., Crawford, J., Stern, C., Haidt, J., Jussim, L., & Tetlock, P. (2015). Political diversity
will improve social psychological science. Behavioral and Brain Sciences, 38, E130.
Duckitt, J., Wagner, C., du Plessis, I., & Birum, I. (2002). The psychological bases of ideology
and prejudice: Testing a dual process model. Journal of Personality and Social
Psychology, 83, 75-93.
Duckitt, J., & Sibley, C. G. (2009). A dual-process motivational model of ideology, politics, and
prejudice. Psychological Inquiry, 20, 98-109.
Duckitt, J. (2009). Authoritarianism and dogmatism. In M. R. Leary & R. H. Hoyle (Eds.),
Handbook of individual differences in social behavior (pp. 298-317). New York, NY:
Guilford Press.
*Durrheim, K. (1998). The relationship between tolerance of ambiguity and attitudinal
conservatism: A multidimensional analysis. European Journal of Social Psychology, 28,
731-753.
Egger, M., Smith, G. D., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by
a simple, graphical test. BMJ, 315, 629-634.
Emerson, R. W. (1841, December 9). The conservative. Retrieved from https://archive.vcu.edu/english/engweb/transcendentalism/authors/emerson/essays/conservative.html
Epstein, S., & O'Brien, E. J. (1985). The person–situation debate in historical and current
perspective. Psychological Bulletin, 98, 513-537.
*Everett, J. A. (2013). The 12 item social and economic conservatism scale (SECS). PLoS One, 8, e82131.
Federico, C. M., & Malka, A. (2018). The contingent, contextual nature of the relationship
between needs for security and certainty and political preferences: Evidence and
implications. Political Psychology, 39, 3-48.
Federico, C., & Malka, A. (2021, August 15). Ideology: The psychological and social foundations of belief systems. https://doi.org/10.31234/osf.io/xhvyj
Federico, C. M. (2021). When do psychological differences predict political differences?:
Engagement and the psychological bases of political polarization. In The Psychology of
Political Polarization (pp. 17-37). Routledge.
*Federico, C. M., & Ekstrom, P. D. (2018). The political self: How identity aligns preferences
with epistemic needs. Psychological Science, 29, 901-913.
*Federico, C. M. & Goren, P. (2009). Motivated social cognition and ideology: Is attention to
elite discourse a prerequisite for epistemically motivated political affinities? In: Social
and psychological bases of ideology and system justification, ed. J. T. Jost, A. C. Kay &
H. Thorisdottir, pp. 267–91. Oxford University Press.
*Federico, C. M., & Schneider, M. C. (2007). Political expertise and the use of ideology:
Moderating effects of evaluative motivation. Public Opinion Quarterly, 71, 221-252.
*Federico, C. M., Deason, G., & Fisher, E. L. (2012). Ideological asymmetry in the relationship between epistemic motivation and political attitudes. Journal of Personality and Social Psychology, 103, 381–398.
*Federico, C. M., Ergun, D., & Hunt, C. (2014). Opposition to equality and support for tradition
as mediators of the relationship between epistemic motivation and system-justifying
identifications. Group Processes & Intergroup Relations, 17, 524-541.
Feldman, S. (2013). Political ideology. In L. Huddy, D. O. Sears, & J. S. Levy (Eds.), The
Oxford handbook of political psychology (pp. 591–626). Oxford University Press.
*Feldman, S., & Johnston, C. (2014). Understanding the determinants of political ideology:
Implications of structural complexity. Political Psychology, 35, 337-358.
*Feldman, O. (1996). The political personality of Japan: An inquiry into the belief systems of
Diet members. Political Psychology, 657-682.
*Fiagbenu, M. E., Proch, J., & Kessler, T. (2019). Of deadly beans and risky stocks: Political
ideology and attitude formation via exploration depend on the nature of the attitude
stimuli. British Journal of Psychology. Advance online publication.
*Fibert, Z., & Ressler, W. H. (1998). Intolerance of ambiguity and political orientation among
Israeli university students. The Journal of Social Psychology, 138, 33-40.
Fiedler, K., & Wänke, M. (2009). The cognitive-ecological approach to rationality in social
psychology. Social Cognition, 27, 699-732.
Franklin, A., Anderson, M., Brock, D., Coleman, S., Downing, J., Gruvander, A., ... & Rice, R.
(1989). Can a theory-laden observation test the theory? The British Journal for the
Philosophy of Science, 40, 229-231.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic
Perspectives, 19, 25-42.
*French, E. G. (1955). Interrelation among some measures of rigidity under stress and nonstress
conditions. The Journal of Abnormal and Social Psychology, 51, 114–118.
Frenkel-Brunswik, E. (1954). Further explorations by a contributor to the authoritarian
personality. In R. Christie & M. Jahoda (Eds.), Studies in the scope and method of the
authoritarian personality (pp. 226-275). New York: Free Press.
Freud, S. (1921). Dream psychology: Psychoanalysis for beginners. New York: The James A.
McCann Company.
*Fu, J. H., Morris, M. W., Lee, S.-l., Chao, M., Chiu, C.-y., & Hong, Y.-y. (2007). Epistemic
motives and cultural conformity: Need for closure, culture, and context as determinants
of conflict judgments. Journal of Personality and Social Psychology, 92, 191–207.
Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and
nonsense. Advances in Methods and Practices in Psychological Science, 2, 156-168.
Furnham, A. (2001). Self-estimates of intelligence: Culture and gender difference in self and
other estimates of both general (g) and multiple intelligences. Personality and Individual
Differences, 31, 1381-1405.
Furnham, A., & Marks, J. (2013). Tolerance of ambiguity: A review of the recent literature.
Psychology, 4, 717–728.
Gaffan, E. A., Tsaousis, J., & Kemp-Wheeler, S. M. (1995). Researcher allegiance and meta-
analysis: The case of cognitive therapy for depression. Journal of Consulting and
Clinical Psychology, 63, 966–980.
*Garcia, C. (2014). Scarily coming to the centre: Political centrism as an effect of mortality salience and a need for closure (Unpublished doctoral dissertation).
Glass, G. V. (2015). Meta‐analysis at middle age: a personal history. Research Synthesis
Methods, 6, 221-231.
*Golec de Zavala, A., & van Bergh, A. (2007). Need for cognitive closure and conservative
political beliefs: Differential mediation by personal worldviews. Political Psychology, 28,
587–608.
*Golec de Zavala, A., Cislak, A., & Wesolowska, E. (2010). Political conservatism, need for
cognitive closure, and intergroup hostility. Political Psychology, 31, 521–541.
Greenberg, J., & Jonas, E. (2003). Psychological Motives and Political Orientation—The Left,
the Right, and the Rigid: Comment on Jost et al. (2003). Psychological Bulletin, 129,
376-382.
Gurven, M. D. (2018). Broadening horizons: Sample diversity and socioecological theory are
essential to the future of psychological science. Proceedings of the National Academy of
Sciences, 115, 11420-11427.
Haidt, J. (2011, January). The bright future of post-partisan social psychology. In Talk given at
the Annual Meeting of the Society for Personality and Social Psychology, San Antonio,
TX.
Hamburger, T. (2015, December 13). Cruz campaign credits psychological data and analytics for its rising success. The Washington Post.
*Hannikainen, I., Cabral, G., Machery, E., & Struchiner, N. (2017). A deterministic worldview
promotes approval of state paternalism. Journal of Experimental Social Psychology, 70,
251-259.
*Hanson, D. J. (1981). Authoritarianism and candidate preference in the 1980 presidential
election. Psychological Reports, 49, 326-326.
Harris, E. A., & Van Bavel, J. J. (2021). Preregistered Replication of “Feeling superior is a
bipartisan issue: Extremity (not direction) of political views predicts perceived belief
superiority”. Psychological Science, 32, 451-458.
Hart, W., Albarracín, D., Eagly, A. H., Brechan, I., Lindberg, M. J., & Merrill, L. (2009). Feeling
validated versus being correct: a meta-analysis of selective exposure to
information. Psychological Bulletin, 135, 555-588.
*Hasta, D., & Dönmez, A. (2009). Relation between authoritarianism, level of cognitive complexity and political ideology. Turk Psikoloji Dergisi, 24, 19-33.
Hawkins, S., Yudkin, D., Juan-Torres, M., & Dixon, T. (2018). Hidden tribes: A study of America’s polarized landscape. Hidden Tribes.
*Hennes, E. P., Nam, H. H., Stern, C., & Jost, J. T. (2012). Not all ideologies are created equal:
Epistemic, existential, and relational needs predict system-justifying attitudes. Social
Cognition, 30, 669-688.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). Most people are not WEIRD. Nature, 466,
29.
*Hession, E., & McCarthy, E. (1975). Human performance in assessing subjective probability
distributions. The Irish Journal of Psychology, 3, 31-46.
Hetherington, M., & Weiler, J. (2018). Prius or Pickup? How the Answers to Four Simple
Questions Explain America's Great Divide. Houghton Mifflin.
Hibbing, J. R., Smith, K. B., & Alford, J. R. (2014). Differences in negativity bias underlie
variations in political ideology. Behavioral and Brain Sciences, 37, 297-350.
*Hicks, J. M. (1974). Conservative Voting and Personality. Social Behavior & Personality: An
International Journal, 2, 43– 49.
Hiel, A. V., & Mervielde, I. (2004). Openness to experience and boundaries in the mind:
Relationships with cultural and economic conservative beliefs. Journal of
Personality, 72, 659-686.
Higgins, J. P., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21, 1539-1558.
Ho, A. K., Sidanius, J., Kteily, N., Sheehy-Skeffington, J., Pratto, F., Henkel, K. E., Foels, R., &
Stewart, A. L. (2015). The nature of social dominance orientation: Theorizing and
measuring preferences for intergroup inequality using the new SDO₇ scale. Journal of
Personality and Social Psychology, 109, 1003–1028.
Hogg, M. A. (2007). Uncertainty–identity theory. Advances in Experimental Social Psychology, 39, 69-126.
*Holanda Coelho, G. L., Hanel, P. H., & Wolf, L. J. (2018). The very efficient assessment of need for cognition: Developing a six-item version. Assessment. Advance online publication.
Honeycutt, N., & Jussim, L. (2020). A model of political bias in social science
research. Psychological Inquiry, 31, 73-85.
Houck, S. C., & Conway III, L. G. (2019). Strategic communication and the integrative
complexity‐ideology relationship: Meta‐analytic findings reveal differences between
public politicians and private citizens in their use of simple rhetoric. Political
Psychology, 40, 1119-1141.
Howlett, C. A., Wewege, M. A., Berryman, C., Oldach, A., Jennings, E., Moore, E., ... &
Moseley, G. L. (2021). Same room-different windows? A systematic review and meta-
analysis of the relationship between self-report and neuropsychological tests of cognitive
flexibility in healthy adults. Clinical Psychology Review, 102061.
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting for error and bias
in research findings. Newbury Park, CA: Sage.
Inbar, Y., & Lammers, J. (2012). Political diversity in social and personality
psychology. Perspectives on Psychological Science, 7, 496-503.
Inbar, Y., Pizarro, D. A., & Bloom, P. (2009). Conservatives are more easily disgusted than
liberals. Cognition and Emotion, 23, 714-725.
James, W. (1896). The will to believe. An address to the Philosophical Clubs of Yale and Brown
Universities. Retrieved from http://educ.jmu.edu//~omearawm/ph101willtobelieve.html.
*Jessani, Z., & Harris, P. B. (2018). Personality, politics, and denial: Tolerance of ambiguity,
political orientation and disbelief in climate change. Personality and Individual
Differences, 131, 121-123.
*Johnson, S. D., & Tamney, J. B. (2001). Social traditionalism and economic conservatism: Two
conservative political ideologies in the United States. The Journal of Social Psychology, 141, 233-243.
Johnson, J. J. (2009). What's so wrong with being absolutely right: The dangerous nature of
dogmatic belief. Prometheus Books.
Johnston, C. D., Lavine, H. G., & Federico, C. M. (2017). Open versus closed: Personality,
identity, and the politics of redistribution. Cambridge University Press.
Johnston, C. D., & Ollerenshaw, T. (2020). How different are cultural and economic
ideology? Current Opinion in Behavioral Sciences, 34, 94-101.
Johnston, C. D., & Wronski, J. (2015). Personality dispositions and political preferences across
hard and easy issues. Political Psychology, 36, 35-53.
*Jones, J. M. (1973). Dogmatism and political preferences. Psychological Reports, 33, 640.
*Jost, J. T., Kruglanski, A. W., & Simon, L. (1999). Effects of epistemic motivation on
conservatism, intolerance and other system-justifying attitudes. In L. I. Thompson, J. M.
Levine, & D. M. Messick (Eds.), Shared cognition in organizations: The management of
knowledge (pp. 91–116). Mahwah, NJ: Erlbaum.
*Jost, J. T., Napier, J. L., Thorisdottir, H., Gosling, S. D., Palfai, T. P., & Ostafin, B. (2007). Are
needs to manage uncertainty and threat associated with political conservatism or
ideological extremity? Personality and Social Psychology Bulletin, 33, 989-1007.
Jost, J. T. (2017). Ideological asymmetries and the essence of political psychology. Political
Psychology, 38, 167-208.
Jost, J. T., Sterling, J., & Stern, C. (2017). Getting closure on conservatism, or the politics of
epistemic and existential motivation. In The motivation-cognition interface (pp. 56-87).
Routledge.
Jost, J. T., & Hunyady, O. (2005). Antecedents and consequences of system-justifying
ideologies. Current Directions in Psychological Science, 14, 260-265.
Jost, J. T., Federico, C. M., & Napier, J. L. (2013). Political ideologies and their social
psychological functions. In The Oxford handbook of political ideologies (pp. 232-250).
Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. (2003). Political conservatism as
motivated social cognition. Psychological Bulletin, 129, 339-375.
Jost, J. T., Nosek, B. A., & Gosling, S. D. (2008). Ideology: Its resurgence in social, personality,
and political psychology. Perspectives on Psychological Science, 3, 126-136.
Jost, J. T., & Kende, A. (2020). Setting the record straight: System justification and rigidity‐of‐
the‐right in contemporary Hungarian politics. International Journal of Psychology, 55,
96-115.
Jussim, L. (2012). Social perception and social reality: Why accuracy dominates bias and
self-fulfilling prophecy. Oxford University Press.
Jussim, L., Crawford, J. T., Anglin, S. M., Stevens, S. T., & Duarte, J. L. (2016). Interpretations
and methods: Towards a more effectively self-correcting social psychology. Journal of
Experimental Social Psychology, 66, 116-133.
*Kahan, D. M. (2012). Ideology, motivated reasoning, and cognitive reflection: An experimental
study. Judgment and Decision Making, 8, 407-424.
*Kahan, D. M. (2017). Unpublished online blog post:
http://www.culturalcognition.net/blog/2017/6/28/do-you-see-an-effect-here-some-data-
on-correlation-of-cognit.html.
Kahan, D. M. (2016). The politically motivated reasoning paradigm, Part 2: Unanswered
questions. In R. Scott and S. Kosslyn (Eds.), Emerging trends in the social and
behavioral sciences (pp. 1–15). New York, NY: Wiley.
Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
*Kahoe, R. D. (1974). Personality and achievement correlates of intrinsic and extrinsic religious
orientations. Journal of Personality and Social Psychology, 29, 812-818.
Katz, D. (1960). The functional approach to the study of attitudes. Public Opinion Quarterly, 24,
163-204.
Kalmoe, N. P. (2020). Uses and abuses of ideology in political psychology. Political Psychology.
Pagination forthcoming.
Kalmoe, N. P., & Mason, L. (2019) Lethal mass partisanship: Prevalence, correlates, &
electoral contingencies. Prepared for presentation at the January 2019 NCAPSA
American Politics Meeting. Retrieved from
https://www.dannyhayes.org/uploads/6/9/8/5/69858539/kalmoe___mason_ncapsa_2019_
-_lethal_partisanship_-_final_lmedit.pdf.
Kam, C. D. (2012). Risk attitudes and political participation. American Journal of Political
Science, 56, 817-836.
Kaufman, M. R. (1940). Old age and aging: The present status of scientific knowledge: Section
meeting, 1939: Old age and aging: The psychoanalytic point of view. American Journal
of Orthopsychiatry, 10, 73–84.
*Kelemen, L., Szabó, Z. P., Mészáros, N. Z., László, J., & Forgas, J. P. (2014). Social cognition
and democracy: The relationship between system justification, just world beliefs,
authoritarianism, need for closure, and need for cognition in Hungary. Journal of Social
and Political Psychology, 2, 197-219.
*Kemmelmeier, M. (1997). Need for closure and political orientation among German university
students. The Journal of Social Psychology, 137, 787-789.
*Kemmelmeier, M. (2007). Political conservatism, rigidity, and dogmatism in American foreign
policy officials: The 1966 Mennis data. The Journal of Psychology, 141, 77-90.
Kemmelmeier, M. (2008). Is there a relationship between political orientation and cognitive
ability? A test of three hypotheses in two studies. Personality and Individual
Differences, 45, 767-772.
Kenrick, D. T., & Funder, D. C. (1988). Profiting from controversy: Lessons from the person-
situation debate. American Psychologist, 43, 23-34.
Kerr, W. A. (1952). Untangling the liberalism-conservatism continuum. The Journal of Social
Psychology, 35, 111-125.
*Kidd, A. H., & Kidd, R. M. (1972). Relation of F-test scores to rigidity. Perceptual and Motor
Skills, 34, 239-243.
Kipnis, D. (1997). Ghosts, taxonomies, and social psychology. American Psychologist, 52, 205–
211.
*Kirtley, D. (1967). General authoritarianism and political ideology (Unpublished doctoral
dissertation). University of Miami, Coral Gables, Florida.
*Kirtley, D., & Harkless, R. (1969). Some personality and attitudinal correlates of
dogmatism. Psychological Reports, 24, 851-854.
*Kirton, M. J. (1978). Wilson and Patterson's conservatism scale: A shortened alternative
form. British Journal of Social and Clinical Psychology, 17, 319-323.
Kitschelt, H., Hawkins, K. A., Luna, J. P., Rosas, G., & Zechmeister, E. J. (2010). Latin
American party systems. Cambridge University Press.
*Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams Jr, R. B., Alper, S., ... &
Batra, R. (2018). Many Labs 2: Investigating variation in replicability across samples and
settings. Advances in Methods and Practices in Psychological Science, 1, 443-490.
Klein, R. A., Cook, C. L., Ebersole, C. R., Vitiello, C. A., Nosek, B. A., Chartier, C. R., …
Ratliff, K. A. (2019, December 11). Many Labs 4: Failure to Replicate Mortality Salience
Effect With and Without Original Author Involvement.
https://doi.org/10.31234/osf.io/vef2c
Knapp, G., & Hartung, J. (2003). Improved tests for a random effects meta‐regression with a
single covariate. Statistics in Medicine, 22, 2693-2710.
Knight, K. (1999). Liberalism and conservatism. In J. P. Robinson, P. R. Shaver, & L. S.
Wrightsman (Eds.), Measures of political attitudes (pp. 59–158). Academic Press.
*Kohn, P. M. (1974). Authoritarianism, rebelliousness, and their correlates among British
undergraduates. British Journal of Social and Clinical Psychology, 13, 245-255.
*Kossowska, M., & Hiel, A. V. (2003). The relationship between need for closure and
conservative beliefs in Western and Eastern Europe. Political Psychology, 24, 501-518.
*Krosch, A. R., Berntsen, L., Amodio, D. M., Jost, J. T., & Van Bavel, J. J. (2013). On the
ideology of hypodescent: Political conservatism predicts categorization of racially
ambiguous faces as Black. Journal of Experimental Social Psychology, 49, 1196-1203.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: how difficulties in recognizing
one's own incompetence lead to inflated self-assessments. Journal of Personality and
Social Psychology, 77, 1121-1134.
Kruglanski, A. W., & Webster, D. M. (1996). Motivated closing of the mind: “Seizing” and
“freezing.” Psychological Review, 103, 263-283.
Kruglanski, A. W., Pierro, A., Mannetti, L., & De Grada, E. (2006). Groups as epistemic
providers: Need for closure and the unfolding of group-centrism. Psychological Review,
113, 84–100.
Kruglanski, A. W., Webster, D. M., & Klem, A. (1993). Motivated resistance and openness to
persuasion in the presence or absence of prior information. Journal of Personality and
Social Psychology, 65, 861–876.
*Ksiazkiewicz, A., Ludeke, S., & Krueger, R. (2016). The role of cognitive style in the link
between genes and political ideology. Political Psychology, 37, 761-776.
Kvarven, A., Strømland, E., & Johannesson, M. (2020). Comparing meta-analyses and
preregistered multiple-laboratory replication projects. Nature Human Behaviour, 4, 423-
434.
Lakens, D., Adolfi, F. G., Albers, C. J., Anvari, F., Apps, M. A., Argamon, S. E., ... & Zwaan, R.
A. (2018). Justify your alpha. Nature Human Behaviour, 2, 168-171.
Lakoff, G. (2008). The political mind: Why you can’t understand 21st Century American politics
with an 18th Century brain. New York: Penguin Group.
Laméris, M. D., Jong-A-Pin, R., & Garretsen, H. (2018). On the measurement of voter
ideology. European Journal of Political Economy, 55, 417-432.
*Landy, J. F. (2016). Representations of moral violations: Category members and associated
features. Judgment & Decision Making, 11, 496-508.
Langbert, M., Quain, A. J., & Klein, D. B. (2016). Faculty voter registration in economics,
history, journalism, law, and psychology. Econ Journal Watch, 13, 422-451.
Lauriola, M., Foschi, R., Mosca, O., & Weller, J. (2016). Attitude toward ambiguity: Empirically
robust factors in self-report personality scales. Assessment, 23, 353-373.
Lefkofridi, Z., Wagner, M., & Willmann, J. E. (2014). Left-authoritarians and policy
representation in Western Europe: Electoral choice across ideological dimensions. West
European Politics, 37, 65-90.
Lehtonen, M., Soveri, A., Laine, A., Järvenpää, J., de Bruin, A., & Antfolk, J. (2018). Is
bilingualism associated with enhanced executive functioning in adults? A meta-analytic
review. Psychological Bulletin, 144, 394-425.
*Leone, L., & Chirumbolo, A. (2008). Conservatism as motivated avoidance of affect: Need for
affect scales predict conservatism measures. Journal of Research in Personality, 42, 755-
762.
Lewin, K. (1936). A dynamic theory of personality: Selected papers. The Journal of Nervous and
Mental Disease, 84, 612-613.
Lilienfeld, S. O. (2010). Can psychology become a science? Personality and Individual
Differences, 49, 281-288.
Lilienfeld, S. O. (2014). The Research Domain Criteria (RDoC): An analysis of methodological
and conceptual challenges. Behaviour Research and Therapy, 62, 129-139.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Sage Publications.
*Lönnqvist, J. E., Szabó, Z. P., & Kelemen, L. (2019). Rigidity of the far‐right? Motivated social
cognition in a nationally representative sample of Hungarians on the eve of the far‐right
breakthrough in the 2010 elections. International Journal of Psychology, 54, 292-296.
Lykken, D. T. (1991). What’s wrong with psychology anyway. Thinking clearly about
psychology, 1, 3-39.
Malka, A., & Soto, C. J. (2015). Rigidity of the economic right? Menu-independent and menu-
dependent influences of psychological dispositions on political attitudes. Current
Directions in Psychological Science, 24, 137-142.
Malka, A., Lelkes, Y., & Holzer, N. (2017). Rethinking the rigidity of the right model: Three
suboptimal methodological practices and their implications. In Politics of Social
Psychology (pp. 126-146). Psychology Press.
Malka, A., Soto, C. J., Inzlicht, M., & Lelkes, Y. (2014). Do needs for security and certainty
predict cultural and economic conservatism? A cross-national analysis. Journal of
Personality and Social Psychology, 106, 1031-1051.
Malka, A., Lelkes, Y., & Soto, C. J. (2019). Are cultural and economic conservatism positively
correlated? A large-scale cross-national test. British Journal of Political Science, 49,
1045-1069.
*Maltby, J., Day, L., & Edge, R. (1997). Social conservatism and intolerance of
ambiguity. Perceptual and Motor Skills, 85, 929-930.
Marks, G., Hooghe, L., Nelson, M., & Edwards, E. (2006). Party competition and European
integration in the East and West: Different structure, same causality. Comparative
Political Studies, 39, 155-175.
Mason, L. (2018). Uncivil agreement: How politics became our identity. University of Chicago
Press.
*Mazur, A. (2004). Believers and disbelievers in evolution. Politics and the Life Sciences, 23,
55-61.
McCarty, N., Poole, K. T., & Rosenthal, H. (2016). Polarized America: The dance of ideology
and unequal riches. MIT Press.
*McCann, S. J., & Stewin, L. L. (1986). Authoritarianism and Canadian voting preferences for
political party, prime minister, and president. Psychological Reports, 59, 1268-1270.
McClosky, H. (1958). Conservatism and personality. American Political Science Review, 52, 27-
45.
McKusick, V. A. (1969). On lumpers and splitters, or the nosology of genetic
disease. Perspectives in Biology and Medicine, 12, 298-312.
*McQuesten, M. J. (2009). Political ideology and social-cognitive motives: The need for closure
in relation to religiosity, worldview, and candidate selection. (Unpublished doctoral
dissertation).
Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow
progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834.
Meehl, P. E. (1992). Factors and taxa, traits and types, differences of degree and differences in
kind. Journal of Personality, 60, 117-174.
*Meirick, P. C., & Bessarabova, E. (2016). Epistemic factors in selective exposure and political
misperceptions on the right and left. Analyses of Social Issues and Public Policy, 16, 36-
68.
Middendorp, C. (1978). Progressiveness and conservatism: The fundamental dimensions of
ideological controversy and their relationship to social class. Den Haag/Paris/New York:
Mouton.
*Mirci, S. (1976). The relationship between political attitudes, self-concept and dogmatism in
Turkish university students. (Unpublished doctoral dissertation).
Mischel, W., Coates, B., & Raskoff, A. (1968). Effects of success and failure on self-
gratification. Journal of Personality and Social Psychology, 10, 381-390.
Miyake, A., & Friedman, N. P. (2012). The nature and organization of individual differences in
executive functions: Four general conclusions. Current Directions in Psychological
Science, 21, 8-14.
Mooney, C. (2012). The Republican brain: The science of why they deny science--and reality.
John Wiley & Sons.
*Mutzner, R. (1992). Ideology and cognition: An attributional comparison of liberal and
conservative interpretations of the downfall of communism in Eastern
Europe (unpublished doctoral dissertation). Brigham Young University.
Nam, H. H., Jost, J. T., Meager, M. R., & Van Bavel, J. J. (2021). Toward a neuropsychology of
political orientation: exploring ideology in patients with frontal and midbrain
lesions. Philosophical Transactions of the Royal Society B, 376(1822), 20200137.
Nam, H. H., Jost, J. T., Kaggen, L., Campbell-Meiklejohn, D., & Van Bavel, J. J. (2018).
Amygdala structure and the tendency to regard the social system as legitimate and
desirable. Nature Human Behavior, 2, 133-138.
Neuberg, S. L., Judice, T. N., & West, S. G. (1997). What the Need for Closure Scale measures
and what it does not: Toward differentiating among related epistemic motives. Journal of
Personality and Social Psychology, 72, 1396–1412.
*Neuringer, C. (1964). Rigid thinking in suicidal individuals. Journal of Consulting Psychology,
28, 54–58.
Newton, C., Feeney, J., & Pennycook, G. (2021, March 22). The Comprehensive Thinking
Styles Questionnaire: A novel measure of intuitive-analytic thinking styles.
https://doi.org/10.31234/osf.io/r5wez.
*Nilsson & Jost (unpublished data).
*Nilsson, A., Erlandsson, A., & Västfjäll, D. (2019). The complex relation between receptivity to
pseudo-profound bullshit and political ideology. Personality and Social Psychology
Bulletin. Pagination forthcoming.
O'Connor, P. (1952). Ethnocentrism, "intolerance of ambiguity," and abstract reasoning
ability. The Journal of Abnormal and Social Psychology, 47, 526–530.
Oishi, S., & Graham, J. (2010). Social ecology: Lost and found in psychological
science. Perspectives on Psychological Science, 5, 356-377.
*Okdie, B. M., Rempala, D. M., & Garvey, K. (2016). The first shall be first and the last shall be
last: YouTube, need for closure, and campaigning in the internet age. Personality and
Individual Differences, 89, 148-151.
*Okimoto, T. G., & Gromet, D. M. (2016). Differences in sensitivity to deviance partly explain
ideological divides in social policy support. Journal of Personality and Social
Psychology, 111, 98-117.
Olcaysoy, I., & Saribay, S. A. (2014, February). The relationship between resistance to change
and opposition to equality at political and personal levels. Poster presented at the 15th
annual meeting of the Society for Personality and Social Psychology (SPSP), Austin, TX.
*Onraet, E., Van Hiel, A., Roets, A., & Cornelis, I. (2011). The closed mind: ‘Experience’ and
‘cognition’ aspects of openness to experience and need for closure as psychological bases
for right‐wing attitudes. European Journal of Personality, 25, 184-197.
Orben, A., & Lakens, D. (2020). Crud (re) defined. Advances in Methods and Practices in
Psychological Science, 3, 238-247.
Oxley, D. R., Smith, K. B., Alford, J. R., Hibbing, M. V., Miller, J. L., Scalora, M., ... &
Hibbing, J. R. (2008). Political attitudes vary with physiological traits. Science, 321,
1667-1670.
Pan, J., & Xu, Y. (2018). China’s ideological spectrum. The Journal of Politics, 80, 254-273.
*Panno, A., Carrus, G., Brizi, A., Maricchiolo, F., Giacomantonio, M., & Mannetti, L. (2018).
Need for cognitive closure and political ideology. Social Psychology, 49, 103-112.
Paxton, R. O. (2004). The anatomy of fascism. New York: Knopf.
*Peabody, D. (1961). Attitude content and agreement set in scales of authoritarianism,
dogmatism, anti-Semitism, and economic conservatism. The Journal of Abnormal and
Social Psychology, 63, 1-11.
*Pennycook, G., & Rand, D. G. (2019). Cognitive reflection and the 2016 US Presidential
election. Personality and Social Psychology Bulletin, 45, 224-239.
*Pennycook, G., Cheyne, J. A., Seli, P., Koehler, D. J., & Fugelsang, J. A. (2012). Analytic
cognitive style predicts religious and paranormal belief. Cognition, 123, 335-346.
*Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2014). The role of
analytic thinking in moral judgements and values. Thinking & Reasoning, 20, 188-214.
Pennycook, G., & Rand, D. G. (2018). Who falls for fake news? The roles of bullshit receptivity,
overclaiming, familiarity, and analytic thinking. Retrieved from
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3023545.
*Peterson, B., Smith, J. A., Tannenbaum, D., & Shaw, M. P. (2009). On the “exporting” of
morality: Its relation to political conservatism and epistemic motivation. Social Justice
Research, 22, 206-230.
*Pettigrew, T. F. (1958). The measurement and correlates of category width as a cognitive
variable. Journal of Personality, 26, 532-544.
*Poley, W. (1974). Dimensionality in the measurement of authoritarian and political
attitudes. Canadian Journal of Behavioural Science / Revue Canadienne des Sciences du
Comportement, 6, 81–94.
Popper, K. R. (1959). The propensity interpretation of probability. The British Journal for the
Philosophy of Science, 10, 25-42.
*Plant, W. T. (1960). Rokeach's Dogmatism Scale as a measure of general
authoritarianism. Psychological Reports, 6, 164.
*Radom, A. S. (2011). Do religious conservatives and religious liberals think differently? An
exploration of differences in cognitive and personality styles (unpublished doctoral
dissertation). Boston University.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data
analysis methods (Vol. 1). Sage.
*Rautu, A. (2018). Toward a structurally-sound model of uncertainty-related personality traits
(Unpublished doctoral dissertation). University of Minnesota, USA.
*Ray, J. J. (1973). Dogmatism in relation to sub‐types of conservatism: Some Australian
data. European Journal of Social Psychology, 3, 221-232.
*Rempala, D. M., Okdie, B. M., & Garvey, K. J. (2016). Articulating ideology: How liberals and
conservatives justify political affiliations using morality-based explanations. Motivation
and Emotion, 40, 703-719.
Renkewitz, F., & Keiner, M. (2018). How to detect publication bias in psychological
Research? A Comparative Evaluation of Six Statistical Methods. Available
at: https://doi.org/10.31234/osf.io/w94ep (accessed January 18, 2020).
Reyna, C. (2017). Scale creation, use, and misuse: How politics undermines measurement.
In Politics of Social Psychology (pp. 91-108). Psychology Press.
*Reynolds, C. J., Makhanova, A., Ng, B. K., & Conway, P. (2020). Bound together for God and
country: The binding moral foundations link unreflectiveness with religiosity and
political conservatism. Personality and Individual Differences, 155, 109632.
*Rock, M.S. (2009). Where do we draw our lines? Approach/avoidance motivation, political
orientation, and cognitive rigidity (unpublished Master’s thesis). University of
Massachusetts - Amherst.
*Roets, A., & Van Hiel, A. (2006). Need for closure relations with authoritarianism,
conservative beliefs and racism: The impact of urgency and permanence tendencies.
Psychologica Belgica, 46, 235-252.
*Rokeach, M. (1956). Political and religious dogmatism: An alternative to the authoritarian
personality. Psychological Monographs: General and Applied, 70, 1.
Rokeach, M. (1960). The open and closed mind: investigations into the nature of belief systems
and personality systems. Basic Books.
Rokeach, M. (1967). Authoritarianism scales and response bias: Comment on Peabody's paper.
Psychological Bulletin, 67, 349-355.
*Rokeach, M., & Fruchter, B. (1956). A factorial study of dogmatism and related concepts. The
Journal of Abnormal and Social Psychology, 53, 356–360.
*Rollwage, M., Dolan, R. J., & Fleming, S. M. (2018). Metacognitive failure as a feature of
those holding radical beliefs. Current Biology, 28, 4014-4021.
Rönkkö, M., & Cho, E. (2020). An updated guideline for assessing discriminant
validity. Organizational Research Methods. Advance online publication.
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological
Bulletin, 86, 638–641.
Rorer, L. G. (1965). The great response-style myth. Psychological Bulletin, 63, 129-156.
*Royzman, E. B., Landy, J. F., & Goodwin, G. P. (2014). Are good reasoners more incest-
friendly? Trait cognitive reflection predicts selective moralization in a sample of
American adults. Judgment and Decision Making, 9, 176-190.
Reinero, D. A., Wills, J. A., Brady, W. J., Mende-Siedlecki, P., Crawford, J., & Van Bavel, J. J.
(2019, February 7). Is the Political Slant of Psychology Research Related to Scientific
Replicability? https://doi.org/10.31234/osf.io/6k3j5.
*Sargent, M. J. (2004). Less thought, more punishment: Need for cognition predicts support for
punitive responses to crime. Personality and Social Psychology Bulletin, 30, 1485-1493.
*Saribay, S. A., & Yilmaz, O. (2017). Analytic cognitive style and cognitive ability differentially
predict religiosity and social conservatism. Personality and Individual Differences, 114,
24-29.
*Scherer, A. M., Windschitl, P. D., & Graham, J. (2015). An ideological house of mirrors:
Political stereotypes as exaggerations of motivated social cognition differences. Social
Psychological and Personality Science, 6, 201-209.
*Schlenker, B. R., Chambers, J. R., & Le, B. M. (2012). Conservatives are happier than liberals,
but why? Political ideology, personality, and life satisfaction. Journal of Research in
Personality, 46, 127-146.
*Sensales, G., Areni, A., Boyatzi, L., Dal Secco, A., & Kruglanski, A. (2014). Perceived impact
of terrorism and the role of the media: representations by Italian citizens differing in
political orientation and need for closure. Behavioral Sciences of Terrorism and Political
Aggression, 6, 41-57.
*Serino, D. (2018). Thou shalt not kill: Because it's wrong, or because you'll feel bad?
(Unpublished doctoral dissertation). Fairleigh Dickinson University.
*Shalek, R. D. (2012). The relationship between cognitive complexity and political partisanship
(Unpublished doctoral dissertation). East Carolina University.
*Sidanius, J. (1978). Cognitive functioning and socio-politico ideology: An exploratory
study. Perceptual and Motor Skills, 46, 515-530.
*Sidanius, J. (1985). Cognitive functioning and sociopolitical ideology revisited. Political
Psychology, 6, 637-661.
Simpson, G. G. (1945). The principles of classification and a classification of mammals. Bulletin
of the American Museum of Natural History, 85, 1-350.
*Smithers, A. G., & Lobley, D. M. (1978). Dogmatism, social attitudes and personality. British
Journal of Social and Clinical Psychology, 17, 135-142.
*Soenens, B., Duriez, B., & Goossens, L. (2005). Social–psychological profiles of identity
styles: attitudinal and social-cognitive correlates in late adolescence. Journal of
Adolescence, 28, 107-125.
*Solomon, E. D. (2013). Back in the good old days: Investigating the relationship between
political ideology and time perspective (Unpublished doctoral dissertation). Saint Louis
University.
Soveri, A., Antfolk, J., Karlsson, L., Salo, B., & Laine, M. (2017). Working memory training
revisited: A multi-level meta-analysis of n-back training studies. Psychonomic Bulletin &
Review, 24, 1077-1096.
Stanley, T. D., & Doucouliagos, H. (2014). Meta‐regression approximations to reduce
publication selection bias. Research Synthesis Methods, 5, 60-78.
Stanovich, K. E., & Toplak, M. E. (2019). The need for intellectual diversity in psychological
science: Our own studies of actively open-minded thinking as a case
study. Cognition, 187, 156-166.
Stanovich, K. E., & West, R. F. (1997). Reasoning independently of prior belief and individual
differences in actively open-minded thinking. Journal of Educational Psychology, 89,
342-357.
*Steininger, M., & Lesser, H. (1974). Dogmatism, dogmatism factors, and liberalism-
conservatism. Psychological Reports, 35, 15-21.
*Sterling, J., Jost, J. T., & Pennycook, G. (2016). Are neoliberals more susceptible to
bullshit? Judgment & Decision Making, 11, 352-360.
*Stern & West (unpublished data).
Sternberg, R. J., & Grigorenko, E. L. (1997). Are cognitive styles still in style? American
Psychologist, 52, 700-712.
Stimson, J. A. (2004). Tides of consent: How public opinion shapes American politics.
Cambridge, UK: Cambridge University Press.
Stone, W. F., Lederer, G., & Christie, R. (1993). The status of authoritarianism. In Strength and
Weakness (pp. 229-245). Springer, New York, NY.
Stoycheva, K., Tair, E., & Popova, K. (2020). Rigidity and its relations to the basic dimensions
of personality. Psychological Research, 23, 127-149.
Suedfeld, P., Tetlock, P. E., & Streufert, S. (1992). Conceptual/integrative complexity. In C.
Smith (Ed.), Motivation and personality: Handbook of thematic content analysis, (pp.
401-418). Cambridge: Cambridge University Press.
*Tackett, K. (2018). The composition of worldviews: The relationships between conservatism,
religiosity, empathy, dogmatism, and psychological flexibility (Unpublished Master’s
thesis). Morehead State University.
*Talhelm, T., Haidt, J., Oishi, S., Zhang, X., Miao, F. F., & Chen, S. (2015). Liberals think more
analytically (more “WEIRD”) than conservatives. Personality and Social Psychology
Bulletin, 41, 250-267.
*Talhelm, T. (2018). Hong Kong Liberals Are WEIRD: Analytic Thought Increases Support for
Liberal Policies. Personality and Social Psychology Bulletin, 44, 717-728.
Tetlock, P. E., Hannum, K. A., & Micheletti, P. M. (1984). Stability and change in the
complexity of senatorial debate: Testing the cognitive versus rhetorical style
hypotheses. Journal of Personality and Social Psychology, 46, 979–990.
Tetlock, P. E. (1983). Cognitive style and political ideology. Journal of Personality and Social
Psychology, 45, 118-126.
Tetlock, P. E. (1986). A value pluralism model of ideological reasoning. Journal of Personality
and Social Psychology, 50, 819–827.
Tetlock, P. E. (1989). Structure and function in political belief systems. In A. R. Pratkanis, S. J.
Breckler, & A. G. Greenwald (Eds.), Attitude structure and function (pp. 129–151).
Hillsdale, NJ: Erlbaum.
Tetlock, P. E. (1993). Cognitive structural analysis of political rhetoric: Methodological and
theoretical issues. In Explorations in political psychology (pp. 380-405).
Tetlock, P. E. (2007). Psychology and politics: The challenges of integrating levels of analysis in
social science. In A. W. Kruglanski & E. T. Higgins (Eds.), Social psychology:
Handbook of basic principles (p. 888–912). The Guilford Press.
Tetlock, P. E., & Boettger, R. (1989). Cognitive and rhetorical styles of traditionalist and
reformist Soviet politicians: A content analysis study. Political Psychology, 10, 209-232.
*Thompson, R. C., & Michel, J. B. (1972). Measuring authoritarianism: A comparison of the F
and D scales. Journal of Personality, 40, 180-190.
Thornton, A., & Lee, P. (2000). Publication bias in meta-analysis: its causes and
consequences. Journal of Clinical Epidemiology, 53, 207-216.
Tomkins, S. (1963). Left and right: A basic dimension of ideology and personality. In R. W.
White (Ed.) & K. F. Bruner (Collaborator), The study of lives: Essays on personality in
honor of Henry A. Murray (p. 388–411). Atherton Press.
*Toner, K., Leary, M. R., Asher, M. W., & Jongman-Sereno, K. P. (2013). Feeling superior is a
bipartisan issue: Extremity (not direction) of political views predicts perceived belief
superiority. Psychological Science, 24, 2454-2462.
Toplak, M. E., West, R. F., & Stanovich, K. E. (2011). The Cognitive Reflection Test as a
predictor of performance on heuristics-and-biases tasks. Memory & Cognition, 39, 1275-
1289.
*Tullett, A. M., Hart, W. P., Feinberg, M., Fetterman, Z. J., & Gottlieb, S. (2016). Is ideology
the enemy of inquiry? Examining the link between political orientation and lack of
interest in novel data. Journal of Research in Personality, 63, 123-132.
*Vail III, K. E., Arndt, J., Motyl, M., & Pyszczynski, T. (2012). The aftermath of destruction:
Images of destroyed buildings increase support for war, dogmatism, and death thought
accessibility. Journal of Experimental Social Psychology, 48, 1069-1081.
Van Den Noortgate, W., & Onghena, P. (2003). Multilevel meta-analysis: A comparison with
traditional meta-analytical procedures. Educational and Psychological Measurement, 63,
765-790.
Van den Noortgate, W., López-López, J. A., Marín-Martínez, F., & Sánchez-Meca, J. (2015).
Meta-analysis of multiple outcomes: a multilevel approach. Behavior Research
Methods, 47, 1274-1294.
Van den Noortgate, W., López-López, J. A., Marín-Martínez, F., & Sánchez-Meca, J. (2013).
Three-level meta-analysis of dependent effect sizes. Behavior Research Methods, 45,
576-594.
*Van Hiel, A., & Mervielde, I. (2002). Explaining conservative beliefs and political preferences: A
comparison of social dominance orientation and authoritarianism. Journal of Applied
Social Psychology, 32, 965-976.
*Van Hiel, A., Pandelaere, M., & Duriez, B. (2004). The impact of need for closure on
conservative beliefs and racism: Differential mediation by authoritarian submission and
authoritarian dominance. Personality and Social Psychology Bulletin, 30, 824-837.
Van Hiel, A., Onraet, E., & De Pauw, S. (2010). The relationship between social‐cultural
attitudes and behavioral measures of cognitive style: A meta‐analytic integration of
studies. Journal of Personality, 78, 1765-1800.
Van Hiel, A., Onraet, E., Crowson, H. M., & Roets, A. (2016). The relationship between right‐
wing attitudes and cognitive style: A comparison of self‐report and behavioural measures
of rigidity and intolerance of ambiguity. European Journal of Personality, 30, 523-531.
*Van Prooijen, J. W., & Krouwel, A. P. (2017). Extreme political beliefs predict dogmatic
intolerance. Social Psychological and Personality Science, 8, 292-300.
*Varela, J. G., Gonzalez, E., Jr., Clark, J. W., Cramer, R. J., & Crosby, J. W. (2013).
Development and preliminary validation of the Negative Attitude Toward Immigrants
Scale. Journal of Latina/o Psychology, 1, 155–170.
Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of
Statistical Software, 36, 1-48.
Vitriol, J. A., Larsen, E. G., & Ludeke, S. G. (2019). The generalizability of personality effects
in politics. European Journal of Personality. Pagination forthcoming.
Von Hippel, W., & Buss, D. M. (2017). Do ideologically driven scientific agendas impede the
understanding and acceptance of evolutionary principles in social psychology? In Politics
of social psychology (pp. 17-35). Psychology Press.
Wakslak, C. J., Jost, J. T., Tyler, T. R., & Chen, E. S. (2007). Moral outrage mediates the
dampening effect of system justification on support for redistributive social
policies. Psychological Science, 18, 267-274.
*Webster, D. M., & Kruglanski, A. W. (1994). Individual differences in need for cognitive
closure. Journal of Personality and Social Psychology, 67, 1049-1062.
Westen, D. (2007). The political brain. New York: Public Affairs.
*Whitley, B. E., Jr., & Lee, S. E. (2000). The relationship of authoritarianism and related
constructs to attitudes toward homosexuality. Journal of Applied Social Psychology, 30,
144-170.
Wilson, G. D. (1973). The psychology of conservatism. Academic Press.
Wilson, G. D., & Patterson, J. R. (1968). A new measure of conservatism. British Journal of
Social and Clinical Psychology, 7, 264-269.
Woznyj, H. M., Banks, G. C., Dunn, A. M., Berka, G., & Woehr, D. (2020). Re-introducing
cognitive complexity: A meta-analysis and agenda for future research. Human
Performance, 33, 1-33.
Yilmaz, O., & Alper, S. (2019). The link between intuitive thinking and social conservatism is
stronger in WEIRD societies. Judgment and Decision Making. Pagination forthcoming.
*Yilmaz, O., & Saribay, S. A. (2016). An attempt to clarify the link between cognitive style and
political ideology: A non-western replication and extension. Judgment and Decision
Making, 11, 287–300.
*Yilmaz, O., & Saribay, S. A. (2017). The relationship between cognitive style and political
orientation depends on the measures used. Judgment and Decision Making, 12, 140–147.
*Yilmaz, O., & Saribay, S. A. (2018). Lower levels of resistance to change (but not opposition to
equality) is related to analytic cognitive style. Social Psychology, 49, 65–75.
Yılmaz, O., Sarıbay, S. A., Bahçekapılı, H. G., & Harma, M. (2016). Political orientations,
ideological self-categorizations, party preferences, and moral foundations of young
Turkish voters. Turkish Studies, 17, 544-566.
*Young, E. H. (2011). Why we're liberal, why we're conservative: A cognitive theory on the
origins of ideological thinking (Unpublished doctoral dissertation). State University of
New York at Stony Brook.
*Zacker, J. (1973). Authoritarian avoidance of ambiguity. Psychological Reports, 33, 901-902.
*Zippel, B., & Norman, R. D. (1966). Party switching, authoritarianism, and dogmatism in the
1964 election. Psychological Reports, 19, 667-670.
Zmigrod, L. (2020). The role of cognitive rigidity in political ideologies: Theory, evidence, and
future directions. Current Opinion in Behavioral Sciences, 34, 34-39.
Zmigrod, L. (in press). A psychology of ideology: Unpacking the psychological structure of
ideological thinking. Perspectives on Psychological Science.
Zmigrod, L., Eisenberg, I. W., Bissett, P. G., Robbins, T. W., & Poldrack, R. A. (2021). The
cognitive and perceptual correlates of ideological attitudes: A data-driven
approach. Philosophical Transactions of the Royal Society B, 376(1822), 20200424.
*Zmigrod, L., Rentfrow, P. J., & Robbins, T. W. (2018). Cognitive underpinnings of
nationalistic ideology in the context of Brexit. Proceedings of the National Academy of
Sciences, 115, E4532-E4540.
*Zmigrod, L., Rentfrow, P. J., & Robbins, T. W. (2019). The partisan mind: Is extreme political
partisanship related to cognitive inflexibility? Journal of Experimental Psychology:
General. Pagination forthcoming.