The 2018 Cooperative Congressional Election Study (CCES) included two items labeled as measures of "sexism", for which respondents received five response options from "strongly agree" to "strongly disagree". One of these sexism measures is the Glick and Fiske 1996 hostile sexism statement that "Feminists are making entirely reasonable demands of men". This item was used in the forthcoming Schaffner 2020 article in the British Journal of Political Science.

It is not clear to me what "demands" the statement refers to. Moreover, it seems plausible that Democrats conceptualize these demands differently than Republicans do, so that, in effect, many Democrats would be responding to a different item than many Republicans. Democrats might be more likely to conceptualize reasonable demands such as support for equal pay for equal work, but Republicans might be more likely to conceptualize more disputable demands such as support for taxpayer-funded late-term abortions.

---

To assess whether CCES 2018 respondents were thinking only of the reasonable demand of men's support for equal pay for equal work, let's check data from the 2016 American National Election Studies (ANES) Time Series Study, which asked post-election survey participants to respond to the item: "Do you favor, oppose, or neither favor nor oppose requiring employers to pay women and men the same amount for the same work?".

In weighted ANES 2016 data, with non-substantive responses included, 87% of participants asked that item favored requiring employers to pay women and men the same amount for the same work, with a 95% confidence interval of [86%, 89%]. However, in weighted CCES 2018 post-election data, with non-substantive responses included, only 38% of participants somewhat or strongly agreed that feminists are making entirely reasonable demands of men, with a 95% confidence interval of [37%, 39%].

So, in these weighted national samples, 87% favored requiring employers to pay women and men the same amount for the same work, but only 38% agreed that feminists are making entirely reasonable demands of men. I think that this is strong evidence that a large percentage of U.S. adults do not think of only reasonable demands when responding to the statement that "Feminists are making entirely reasonable demands of men".

---

To address the concern that the interpretation of the "demands" differs by partisanship, here are support levels by partisan identification:

Democrats

  • 92% favor requiring employers to pay women and men the same amount for the same work [2016 ANES]
  • 59% agree that feminists are making entirely reasonable demands of men [2018 CCES]
  • 33 percentage-point difference

Republicans

  • 84% favor requiring employers to pay women and men the same amount for the same work [2016 ANES]
  • 18% agree that feminists are making entirely reasonable demands of men [2018 CCES]
  • 66 percentage-point difference

So that's an 8-point Democrat/Republican gap in favoring requiring employers to pay women and men the same amount for the same work, but a 41-point Democrat/Republican gap in agreement that feminists are making entirely reasonable demands of men.

I think that this is at least suggestive evidence that a nontrivial percentage of Democrats and an even higher percentage of Republicans are not thinking of reasonable feminist demands such as support for equal pay for equal work. If, when responding to the "feminist demands" item, Democrats on average think of different demands than Republicans do, that seems like a poor research design: inferring sexism's influence on politically relevant variables from a too-vague item that different political groups interpret differently.

---

NOTES:

1. ANES 2016 citations:

The American National Election Studies (ANES). 2016. ANES 2016 Time Series Study. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor].

ANES. 2017. "User's Guide and Codebook for the ANES 2016 Time Series Study". Ann Arbor, MI, and Palo Alto, CA: The University of Michigan and Stanford University.

2. CCES 2018 citation:

Stephen Ansolabehere, Brian F. Schaffner, and Sam Luks. Cooperative Congressional Election Study, 2018: Common Content. [Computer File] Release 2: August 28, 2019. Cambridge, MA: Harvard University [producer] http://cces.gov.harvard.edu.

3. ANES 2016 Stata code:

* Full-sample distribution of the equal pay item
tab V162149

* Restrict to respondents who completed the post-election survey
tab V160502
keep if V160502==1
tab V162149

* Code "favor" as 1; "oppose", "neither", and non-substantive responses as 0
gen favorEQpay = V162149
recode favorEQpay (-9 -8 2 3=0)
tab V162149 favorEQpay, mi

* Declare the survey design: post-election weight, strata, and primary sampling units
svyset [pweight=V160102], strata(V160201) psu(V160202)
svy: prop favorEQpay

* Weighted estimates by partisan identification (1=Democrat, 2=Republican)
tab V161155
svy: prop favorEQpay if V161155==1
svy: prop favorEQpay if V161155==2

4. CCES 2018 Stata code:

* Feminist demands item, by post-election participation
tab CC18_422d tookpost, mi
tab CC18_422d tookpost, mi nol

* Restrict to respondents who took the post-election survey
keep if tookpost==2
tab CC18_422d, mi

* Code "strongly agree" and "somewhat agree" as 1; other responses, including
* non-substantive responses, as 0
gen femagree = CC18_422d
recode femagree (3/5 .=0) (1/2=1)
tab CC18_422d femagree, mi

* Declare the survey design with the post-election weight
svyset [pw=commonpostweight]
svy: prop femagree

* Weighted estimates by partisan identification (1=Democrat, 2=Republican)
tab CC18_421a
svy: prop femagree if CC18_421a==1
svy: prop femagree if CC18_421a==2

---

This post discusses a commonly used "blatant" measure of dehumanization. Let me begin by proposing two blatant measures of dehumanization:

1. Yes or No?: Do you think that members of Group X are fully human?

2. On a scale in which 0 is not at all human and 10 is fully human, where would you rate members of Group X?

I would interpret a "No" response for the first measure and a response of any number lower than 10 for the second measure as dehumanization of members of Group X. If there is no reasonable alternate interpretation of these responses, then these are face-valid, unambiguous measures of blatant dehumanization.
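For concreteness, here is a minimal Stata sketch of how responses to these two proposed measures could be coded; the variable names (fullyhuman, coded 1 for "Yes" and 2 for "No", and human0to10 for the 0-to-10 rating) are hypothetical:

* Hypothetical coding of the two proposed measures; variable names are illustrative
* Measure 1: a "No" response (coded 2) indicates dehumanization
gen dehum1 = (fullyhuman==2) if !missing(fullyhuman)
* Measure 2: any rating below 10 ("fully human") indicates dehumanization
gen dehum2 = (human0to10<10) if !missing(human0to10)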

---

But neither measure above is the commonly used social science measure of blatant dehumanization. Instead, the commonly used "measure of blatant dehumanization" (from Kteily et al. 2015), referred to as the Ascent measure, is below:

[Image of the Ascent measure]

And here is how Kteily et al. 2015 described the ends of the tool (emphasis omitted):

Responses on the continuous slider were converted to a rating from 0 (least "evolved") to 100 (most "evolved")...

Note that participants are instructed to rate how "evolved" they consider the average member of a group to be, and that these ratings are placed on a scale from "least evolved" to "most evolved", but these ratings are then interpreted as participant perceptions about the humanness of the group. This doesn't seem like a measure of blatant dehumanization if participants aren't asked to indicate their perceptions of how human the average member of a group is.

The Ascent measure is a blatant measure of dehumanization only if "human" and "evolved" are identical concepts, but these aren't identical concepts. It's possible to simultaneously believe that Bronze Age humans are fully human and that Bronze Age humans are less evolved than humans today. Moreover, I think that the fourth figure in the Ascent image is a Cro-Magnon that is classified by scientists as human, and Kteily et al. seem to agree:

...the image is used colloquially to highlight a salient distinction between early human ancestors and modern humans; that is, the full realization of cognitive ability and cultural expression

The perceived humanness of the fourth figure matters for understanding responses to the Ascent measure because much of the variation in responses occurs between the fourth figure and fifth figure (for example, see Table 1 of Kteily et al. 2015 and Note 1 below).

There is an important distinction between participants dehumanizing a group and participants rating one group lower than another group on a measure that participants interpret as indicating something other than "humanness", such as the degree of "realization of cognitive ability and cultural expression", especially because I don't think that humans need to have "the full realization of cognitive ability and cultural expression" in order to be fully human.

---

NOTES

1. The Jardina and Piston TESS study conducted in 2015 and 2016 with only non-Hispanic White participants included an Ascent measure for which 66% and 77% of unweighted responses for the respective targets of Blacks and Whites were in the 91-to-100 range.

2. I made some of the above points in 2015 in the ANES Online Commons. Lee Jussim raised issues discussed above in 2018, and I didn't find anything earlier.

3. More Twitter discussion of the Ascent measure: here with no reply, here with no reply, here with a reply, here with a reply.

---

The PS: Political Science and Politics article "Fear, Institutionalized Racism, and Empathy: The Underlying Dimensions of Whites' Racial Attitudes" by Christopher D. DeSante and Candis Watts Smith reports results for four racial attitudes items from a "FIRE" battery.

I have a paper and a blog post indicating that combinations of these items substantially associate with environmental policy preferences net of controls for demographics, partisanship, and political ideology. DeSante and Smith have a paper that reported an analysis that uses combinations of these items to predict an environmental policy preference ("Support E.P.A.", in Table 3 of the paper), but results for this outcome variable are not mentioned in the DeSante and Smith 2020 PS publication. DeSante and Smith 2020 reports results for the four FIRE racial attitudes items separately, so I will do so below for environmental policy preference outcome variables, using data from the 2016 Cooperative Congressional Election Study (CCES).

---

Square brackets contain predicted probabilities from a logistic regression—net of controls for gender, education, age, family income, partisanship, and political ideology—of selecting "oppose" regarding the policy "Strengthen enforcement of the Clean Air Act and Clean Water Act even if it costs US jobs". The sample is limited to White respondents, and the estimates are weighted. The first probability in square brackets is at the highest level of measured agreement with the indicated statement on a five-point scale, with all other model predictors at their means; the second probability is for the corresponding highest level of measured disagreement with the indicated statement. A code sketch of this type of model follows the list.

  • [38% to 56%, p<0.05] I am angry that racism exists.
  • [29% to 58%, p<0.05] White people in the U.S. have certain advantages because of the color of their skin.
  • [39% to 42%, p>0.05] I often find myself fearful of people of other races.
  • [51% to 36%, p<0.05] Racial problems in the U.S. are rare, isolated situations.
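Here is a minimal Stata sketch of the type of model behind these bracketed estimates, assuming the CCES 2016 common content weight commonweight and the CCES race coding (1 = White); the other variable names are hypothetical, and the five-point agreement item (1 = strongly agree to 5 = strongly disagree) is entered linearly for simplicity:

* Minimal sketch for one bracketed pair; most variable names are hypothetical
* Restrict to White respondents
keep if race==1
svyset [pw=commonweight]
svy: logit opposeCleanAir angryracism gender educ age faminc pid7 ideo5
* Predicted probabilities at strong agreement (1) and strong disagreement (5),
* with all other model predictors at their means
margins, at(angryracism=(1 5)) atmeans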

Results below are from a fractional logistic regression predicting an index of the four environmental policy items summed together and placed on a 0-to-1 scale; a similar code sketch follows this list:

  • [0.28 to 0.48, p<0.05] I am angry that racism exists.
  • [0.23 to 0.44, p<0.05] White people in the U.S. have certain advantages because of the color of their skin.
  • [0.28 to 0.32, p<0.05] I often find myself fearful of people of other races.
  • [0.42 to 0.26, p<0.05] Racial problems in the U.S. are rare, isolated situations.
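A corresponding minimal sketch for the fractional logit, again with a hypothetical name (enviroindex) for the 0-to-1 index:

* Minimal sketch for the fractional logit; enviroindex is hypothetical
fracreg logit enviroindex angryracism gender educ age faminc pid7 ideo5 [pw=commonweight]
margins, at(angryracism=(1 5)) atmeans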

The standard deviation for the 0-to-1 four-item environmental policy index is 0.38, so three of the four results immediately above indicate nontrivially large differences in predictions for an environmental policy preference outcome variable that has no theoretical connection to race, which I think raises legitimate questions about whether these racial attitudes items should ever be used to estimate the causal influence of racial attitudes.

---

NOTES

1. Stata code.

2. Data source: Stephen Ansolabehere and Brian F. Schaffner, Cooperative Congressional Election Study, 2016: Common Content. [Computer File] Release 2: August 4, 2017. Cambridge, MA: Harvard University [producer] http://cces.gov.harvard.edu

---

1.

The Hassell et al. 2020 Science Advances article "There is no liberal media bias in which news stories political journalists choose to cover" reports null results from two experiments on ideological bias in media coverage.

The correspondence experiment emailed journalists a message about a candidate who planned to announce a candidacy for the state legislature, asking whether the journalist would be interested in a sit-down interview with the candidate to discuss the candidacy and the candidate's vision for state government. Experimental manipulations involved the description of the candidate, such as "...is a true conservative Republican..." or "...is a true progressive Democrat...".

The conjoint experiment asked journalists to hypothetically choose between two candidacy announcements to cover, with characteristics of the candidates experimentally manipulated.

---

2.

Hassell et al. 2020 claims that (p. 1)...

Using a unique combination of a large-scale survey of political journalists, data from journalists' Twitter networks, election returns, a large-scale correspondence experiment, and a conjoint survey experiment, we show definitively that the media exhibits no bias against conservatives (or liberals for that matter) in what news that they choose to cover.

I think that a good faith claim that research "definitively" shows no media bias against conservatives or liberals in the choice of news to cover should be based on at least one test that is very likely to detect that type of bias. But I don't think that either experiment provides such a "very likely" test.

I think that a "very likely" scenario in which ideology would cause a journalist to not report a story has at least three characteristics: [1] the story unquestionably reflects poorly on the journalist's ideology or ideological group, [2] the journalist has nontrivial gatekeeping ability over the story, and [3] the journalist could not meaningfully benefit from reporting the story.

Regarding [1], it's not clear to me that any of the candidate announcement stories would unquestionably reflect poorly on any ideology or ideological group. An ideological valence is especially lacking in the correspondence experiment, given that a liberal journalist could ask softball questions to try to make a liberal candidate look good and could ask hardball questions to try to make a conservative candidate look bad.

Regarding [2], it's not clear to me that a journalist would have nontrivial gatekeeping ability over the candidate announcement story: it's not like a journalist could keep secret the candidate's candidacy.

---

3.

I think that the title of the Hassell et al. 2020 Monkey Cage post describing this research is defensible: "Journalists may be liberal, but this doesn't affect which candidates they choose to cover". But I'm not sure who thought otherwise.

Hassell et al. 2020 describe the concern about selective reporting as "... journalists may omit news stories that do not adhere to their own (most likely liberal) predispositions" (p. 1). But in what sense does a conservative Republican announcing a candidacy for office have anything to do with adhering to a liberal disposition? The concern about media bias in the selection of stories to cover, as I understand it, is largely about stories that have an obvious implication for ideologically preferred narratives. So something like "Conservative Republican accused of sexual assault", not "Conservative Republican runs for office".

The selective reporting that conservatives complain about is plausibly much more likely—and plausibly much more important—at the national level than at a lower level. For example, I don't think that ideological bias is large enough to cause a local newspaper to not report on a police shooting of an unarmed person in the newspaper's distribution area; however, I think that ideological bias is large enough to influence a national media organization's decisions about which subset of available police shootings to report on.

---

1.

The Carrington and Strother 2020 Politics, Groups, and Identities article, "Who thinks removing Confederate icons violates free speech?", "examine[s] the relationship between both 'heritage' and 'hate' and pro Confederate statue views" (p. 5).

The right panel of Carrington and Strother 2020 Figure 2 indicates how support for Confederate symbols associates with their "hate" measure. Notice how much of the "hate" association is due to those who rate Whites less warmly than they rate Blacks. Imagine a line extending horizontally from [i] the y-axis at a 50 percent predicted probability of support for Confederate symbols to [ii] the far end of the confidence interval; that 50 percent ambivalence about Confederate symbols falls on the "anti-White affect" part of the "hate" measure.

---

2.

The second author of Carrington and Strother 2020 has discussed the Wright and Esses 2017 article that claimed that "Most supporters of the flag are doing so because of their strong Southern pride and their conservative political views and do not hold negative racial attitudes toward Blacks" (p. 235). Moreover, my 2015 Monkey Cage post on support for the Confederate battle flag presented evidence that conflicted with claims that the second author of Carrington and Strother 2020 made in a prior Monkey Cage post.

The published version of Carrington and Strother 2020 did not cite Wright and Esses 2017 or my 2015 post. I don't think that Carrington and Strother 2020 had an obligation to cite either publication, but, if these publications were not cited in the initial submission, that would plausibly have produced a less rigorous peer review, if the journal's selection of peer reviewers is at least partly dependent on manuscript references. And the review process for Carrington and Strother 2020 appears to have not been especially rigorous, to the extent that this can be inferred from the published article, which reported multiple impossible p-values ("p < .000") and referred to "American's views toward Confederate statues" (p. 5, instead of "Americans' views") and to "the Cour's decision" (p. 7, instead of "the Court's decision").

The main text reports a sample of 332, but the table Ns are 233; presumably, the table results are for Whites only and the 332 is for the full set of respondents, but I don't see that mentioned in the article. The appendix indicates that the Figure 2 outcome variable had four levels and that the Figure 3 outcome variable had six levels, but the figure results are presented in terms of predicted probabilities, so I suspect that the analysis dichotomized these outcome variables for some reason; let me know if you find an indication of that in the article.

And did no one in the review process raise a concern about the Carrington and Strother 2020 suggestion below that White Southern pride requires or is nothing more than "pride in a failed rebellion whose stated purpose was the perpetuation of race-based chattel slavery" (p. 6)?

It must be noted that White Southern pride should not be assumed to be racially innocuous: it is hard to imagine a racially neutral pride in a failed rebellion whose stated purpose was the perpetuation of race-based chattel slavery.

It seems possible to be proud to be from the South but not have pride in the Confederacy, similar to the way that it is possible to be proud to be a citizen of a country and not have pride in every action of the country or even in a major action of that country.

---

3.

My peer review might have mentioned that, while Figure 2 of Carrington and Strother 2020 indicates that racial attitudes are a larger influence than Southern pride, the research design might have been biased toward this inference: Southern pride is measured with a 5-point item, racial attitudes are measured with a 201-point scale, and it is plausible that a more precise measure might produce a larger association, all else equal.

Moreover, the left panel of Carrington and Strother 2020 Figure 2 indicates that the majority supported Confederate symbols. Maybe I'm thinking about this incorrectly, but much of the association for racial attitudes is due to the "less than neutral about Whites" part of the racial attitudes scale, and there is no corresponding "less than neutral" part of the Southern pride item. Predicted probabilities for the racial attitudes panel extend much lower than neutral because of more negative attitudes about Whites relative to Blacks, but the research design doesn't provide corresponding predicted probabilities for those who have negative attitudes about Southern pride.

---

4.

I think that a core claim of Carrington and Strother 2020 is that (p. 2):

...our findings suggest that the free speech defense of Confederate icons in public spaces is, in part, motivated by racial attitudes.

The statistical evidence presented for this claim is that the racial attitudes measure associates with a measure of agreement with a free speech defense of Confederate monuments. But, as indicated in the right panel of Carrington and Strother 2020 Figure 3, the results are also consistent with the claim that racial attitudes partly motivate *not* agreeing with this free speech defense.

---

5.

The Carrington and Strother 2020 use of a White/Black feeling thermometer difference for their measure of racial attitudes permitted comparison of those who have relatively more favorable feelings about one of the racial groups to those who have relatively more favorable feelings about the other racial group.

The racial resentment measure that is sometimes used as a measure of racial attitudes would presumably have instead coded the bulk of respondents on or near the "Warmer to Black" [sic] end of the Carrington and Strother 2020 "hate" measure as merely not racially resentful, which would not have permitted readers to distinguish those who reported relatively more negative feelings about Whites from those whose reported feelings favor neither Whites nor Blacks.
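As a minimal sketch, a thermometer difference measure of this sort might be constructed as below; the thermometer variable names are hypothetical:

* Hypothetical White/Black feeling thermometer difference; names are illustrative
gen ftdiff = ft_white - ft_black
* ftdiff runs from -100 (maximally warmer toward Blacks) to 100 (maximally
* warmer toward Whites), a 201-point scale with 0 indicating equal ratings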

---

Brian Schaffner posted a paper ("How Political Scientists Should Measure Sexist Attitudes") that engaged critiques that I made in this symposium entry about the gender asymmetry in research on gender attitudes. This post provides comments on the part of the paper that engages those critiques.

---

Schaffner placed men as the subject of five hostile sexism items, used responses to these items to construct a male-oriented hostile sexism scale, placed that scale into a regression alongside a female-oriented hostile sexism scale, and discussed results, such as (p. 39):

...including this scale in the models of candidate favorability or issue attitudes does not alter the patterns of results for the hostile sexism scale. The male-oriented scale demonstrates no association with gender-related policies, with coefficients close to zero and p-values above .95 in the models asking about support for closing the gender pay gap and relaxing Title IX.

The hostile sexism items include "Most men interpret innocent remarks or acts as being sexist" and "Many men are actually seeking special favors, such as hiring policies that favor them over women, under the guise of asking for equality".

These items reflect negative stereotypes about women, and it's not clear to me that these items should be expected to perform as well in measuring "hostility towards men" (p. 39) as they perform in measuring hostility against women when women are the target of the items. In this prior post, I discussed Schaffner 2019 Figure 2, which indicated that participants at low levels of hostile sexism discriminated against men; so the Schaffner 2019 data contain participants who prefer women to men, but the male-oriented version of hostile sexism doesn't sort them sufficiently well.

If a male-oriented hostile sexism scale is to compete in a regression against a female-oriented hostile sexism scale, then interpretation of the results needs to be informed by how well each scale measures sexism against its target. I think an implication of the Schaffner 2019 results is that placing men as the target of hostile sexism items doesn't produce a good measure of sexism against men.

---

The male-oriented hostile sexism scale might be appropriate as a "differencer", in the way that stereotype scale responses about Whites can be used to better measure stereotype scale responses about Blacks. For example, for the sexism items, a sincerely responding participant who strongly agrees that people in general are too easily offended would be coded as a hostile sexist by the woman-oriented hostile sexism item but would be coded as neutral by a "differenced" hostile sexism item.
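A minimal sketch of this differencing, assuming hypothetical variables hs_women and hs_men that hold responses to the woman-targeted and man-targeted versions of an item, coded so that higher values indicate more agreement:

* Hypothetical "differenced" hostile sexism item; variable names are illustrative
* Positive values indicate more agreement with the woman-targeted version;
* equal agreement with both versions scores 0 (neutral)
gen hs_diff = hs_women - hs_men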

I don't know that this differencing should be expected to overturn inferences, but I think that it is plausible that this differencing would improve the sorting of participants by levels of sexism.

---

Schaffner 2019 Figure A.1 indicates that the marginal effect of hostile sexism is negative for favorability ratings of the female candidates Warren and Harris and positive for favorability ratings of Trump; see Table A.4 for more on this, and see Table A.5 for associations with policy preferences. However, given that low hostile sexism associates with sexism against men, I don't think that these associations in isolation are informative about whether sexism against women causes such support for political candidates or policies.

---

If I analyze the Schaffner 2019 data, here are a few things that I would like to look for:

[1] Comparison of the coefficient for the female-oriented hostile sexism scale to the coefficient for a "differenced" hostile sexism scale, predicting Trump favorability ratings (a rough sketch follows item [2] below).

[2] Assessment of whether responses to certain items predict discrimination by target sex in the conjoint experiment, such as for participants who strongly supported or strongly opposed the pay gap policy item or participants with relatively extreme ratings of Warren, Harris, and Trump (say, top 25% and bottom 25%).
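For [1], a rough sketch using the hypothetical hs_diff construction from the sketch above, plus hypothetical scale and outcome names:

* Hypothetical comparison for [1]; all variable names are illustrative
* Model with the female-oriented hostile sexism scale
reg trump_fav hs_women_scale
* Model with the "differenced" hostile sexism scale
reg trump_fav hs_diff_scale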

---

In this post, I discussed the possibility that "persons at the lower levels of hostile sexism are nontrivially persons who are sexist against men". Brian Schaffner provides more information on this possibility in the paper "How Political Scientists Should Measure Sexist Attitudes". I'll place Figure 2 from the paper below:

[Figure 2 from Schaffner's paper]

From the paper's discussion of Figure 2 (p. 14):

The plot on the right shows the modest influence of hostile sexism on moderating the gender treatment in the politician conjoint. Subjects in the bottom third of the hostile sexism distribution were about 10 points more likely to select the female profile, a difference that is statistically significant (p=.005). However, the treatment effect was small and not statistically significant among those in the middle and top terciles.

From what I can tell, this evidence suggests that the proper interpretation of the hostile sexism scale is not as a measure of sexism against women but as a measure of male/female preference, with participants who prefer men sorted to high levels of the measure and participants who prefer women sorted to low levels of the measure. If hostile sexism were a proper linear measure of sexism against women, low values of hostile sexism would predict equal treatment of men and women and higher levels would predict favoritism of men over women.
