The journal Politics, Groups, and Identities recently published Mangum and Block Jr. 2021 "Perceived racial discrimination, racial resentment, and support for affirmative action and preferential hiring and promotion: a multi-racial analysis".

---

The article notes that (p. 13):

Intriguingly, blame [of racial and ethnic minorities] tends to be positively associated with support for preferential hiring and promotion, and, in 2008, this positive relationship is statistically significant for Black and Asian respondents (Table A4; lower right graph in Figure 6). This finding is confounding...

But from what I can tell, this finding might be because the preferential hiring and promotion outcome variable was coded backwards relative to the intended coding. Table 2 of the article indicates that a higher percentage of Blacks than of Whites, Hispanics, and Asians favored preferential hiring and promotion, but Figures 1 and 2 indicate that a lower percentage of Blacks than of Whites, Hispanics, and Asians favored preferential hiring and promotion.

My analysis of data from the 2004 National Politics Study indicated that the preferential hiring and promotion results in Table 2 are correct for this survey and that blame of racial and ethnic minorities is negatively associated with favoring preferential hiring and promotion.
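As a minimal sketch of why a reverse-coded outcome would flip the apparent direction of an association (toy numbers and hypothetical variable names, not the article's data): recoding an outcome y as 1 − y flips the sign of its correlation with any predictor.

```python
# Toy sketch (not the article's data): reverse-coding an outcome y as 1 - y
# flips the sign of its correlation with a predictor x.
def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

blame = [0.0, 0.25, 0.5, 0.75, 1.0]   # hypothetical blame scale
favor = [1, 1, 1, 0, 0]               # 1 = favors preferential hiring
reversed_favor = [1 - y for y in favor]

print(corr(blame, favor))             # negative association
print(corr(blame, reversed_favor))    # same magnitude, positive sign
```

So a sign flip in the coded outcome alone could produce an unexpectedly positive coefficient.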

---

Other apparent errors in the article include:

Page 4:

Borrowing from the literature on racial resentment possessed (Feldman and Huddy 2005; Kinder and Sanders 1996; Kinder and Sears 1981)...

Figures 3, 4, 5, and 6:

...holding control variable constant

Page 15:

African Americans, Hispanics, and Asians support affirmative action more than are Whites.

Page 15:

Preferential hiring and promotion is about who deserves special treatment than affirmative action, which is based more on who needs it to overcome discrimination.

Note 2:

...we code the control variables to that they fit a 0-1 scale...

---

Moreover, the article indicates that "the Supreme Court ruled that affirmative action was constitutional in California v. Bakke in 1979", but the Bakke decision was issued in 1978. And the article seems to make inconsistent claims about affirmative action: "affirmative action and preferential hiring and promotion do not benefit Whites" (p. 15), but "White women are the largest beneficiary group (Crosby et al. 2003)" (p. 13).

---

At least some of these flaws seem understandable. But I think that the number of flaws in this article is remarkably high, especially for a peer-reviewed journal with such a large editorial group: Politics, Groups, and Identities currently lists a 13-member editorial team, a 58-member editorial board, and a 9-member international advisory board.

---

NOTES

1. The article claims that (p. 15):

Regarding all races, most of the racial resentment indicators are significant statistically and in the hypothesized direction. These findings lead to the conclusion that preferential hiring and promotion foster racial thinking more than affirmative action. That is, discussions of preferential hiring and promotion lead Americans to consider their beliefs about minorities in general and African Americans in particular more than do discussions of affirmative action.

However, I'm not sure how the claim that "preferential hiring and promotion foster racial thinking more than affirmative action" is justified by the article's results regarding racial resentment.

Maybe this refers to the slopes being steeper for the preferential hiring and promotion outcome than for the affirmative action outcome, but it would be a lot easier to eyeball slopes across figures if the y-axes were consistent across figures; instead, the y-axes run from .4 to .9 (Figure 3), .4 to 1 (Figure 4), .6 to 1 (Figure 5), and .2 to 1 (Figure 6).

Moreover, Figure 1 is a barplot with a y-axis that runs from .4 to .8, and Figure 2 is a barplot with a y-axis that runs from .5 to .9, so that neither barplot starts at zero. It might make sense for journals to have an editorial board member or other person devoted to reviewing figures, to eliminate errors and improve presentation.

For example, the article indicates that (p. 6):

Figures 1 and 2 display the distribution of responses for our re-coded versions of the dependent variables graphically, using bar graphs containing 95% confidence intervals. To interpret these graphs, readers simply check to see if the confidence intervals corresponding to any given bar overlap with those of another.

But if the intent is to use confidence interval overlap to assess whether there is sufficient evidence at p<0.05 of a difference between groups, then confidence intervals closer to 85% are more appropriate. I haven't always known this, but this does seem to be knowledge that journal editors should use to foster better figures.
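A quick sketch of where a number near 85% comes from, under the simplifying assumption of two independent estimates with equal standard errors (unequal standard errors change the appropriate level somewhat):

```python
from math import sqrt
from statistics import NormalDist

# For two independent estimates with equal standard error SE, the difference
# has standard error sqrt(2) * SE. Per-estimate intervals of half-width
# z * SE "just touch" when the gap between the estimates is 2 * z * SE, so
# the per-estimate z that matches a two-sided p = 0.05 test of the
# difference is 1.96 / sqrt(2).
z_diff = NormalDist().inv_cdf(0.975)   # ~1.96 for the difference test
z_each = z_diff / sqrt(2)              # ~1.386 per-estimate half-width
coverage = 2 * NormalDist().cdf(z_each) - 1
print(round(100 * coverage, 1))        # ~83.4
```

This is the logic behind recommendations of roughly 83% to 85% intervals for visual overlap comparisons.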

2. Data citation:

James S. Jackson, Vincent L. Hutchings, Ronald Brown, and Cara Wong. National Politics Study, 2004. ICPSR24483-v1. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2009-03-23. doi:10.3886/ICPSR24483.v1.


[UPDATE] The color scheme for the first two plots has been changed, based on a comment from John, below. Original plots had the red and blue reversed [1, 2].

---

Below are plots of 0-to-100 feeling thermometer responses from the 2020 ANES Social Media Study.

---

The first plot indicates that, compared to Blacks in the oldest age category, a higher percentage of Blacks in the youngest age category reported cold feelings (a rating under 50) toward the four included racial groups:

---

This second plot indicates that, among White respondents, the corresponding pattern by age appears only for ratings of Whites:

---

I checked data in this third plot after reading the Lee and Huang 2021 post discussing recent anti-Asian violence, which indicated that:

A recent study finds that in fact, Christian nationalism is the strongest predictor of xenophobic views of COVID-19, and the effect of Christian nationalism is greater among white respondents, compared to Black respondents.

The 2020 Social Media Study didn't appear to have good items for measuring Christian nationalism, but below I used White born-again Christian Trump voters as a reasonably related group. A relatively low percentage of this group rated Asians under 50, compared to the percentage of Black respondents who rated Asians under 50.

---

And the fourth plot is for all White respondents compared to all Black respondents:

---

NOTES

[1] Data source: American National Election Studies. 2021. ANES 2020 Social Media Study: Pre-Election Data [dataset and documentation]. March 8, 2021 version. www.electionstudies.org.

[2] Stata code for the analysis and R code for the plots. Data for plots 1, 2, 3, and 4. Stata output.


This plot reports disaggregated results from the American National Election Studies 2020 Time Series Study pre-election survey item:

On another topic: How much do you feel it is justified for people to use violence to pursue their political goals in this country?

Not shown is that 83% of White Democrats and 92% of White Republicans selected "Not at all" for this item.

Regression output controlling for party identification, gender, and race is in the Stata output file, along with uncertainty estimates for the plot percentages.

---

NOTES

1. Data source: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Pre-Election Data [dataset and documentation]. February 11, 2021 version. www.electionstudies.org.

2. Stata code for the analysis and R code for the plot. Dataset for the R plot.


The Open Science Framework has a preregistration for the Election Research Preacceptance Competition posted in March 2017 for contributors Erin Cassese and Tiffany D. Barnes, for a planned analysis of data from the 2016 American National Election Studies Time Series Study. The preregistration was titled "Unpacking White Women's Political Loyalties".

The Cassese and Barnes 2019 Political Behavior article "Reconciling Sexism and Women's Support for Republican Candidates: A Look at Gender, Class, and Whiteness in the 2012 and 2016 Presidential Races" reported results from analyses of data for the 2016 American National Election Studies Time Series Study that addressed content similar to that of the aforementioned preregistration: 2016 presidential vote choice, responses on a scale measuring sexism, a comparison of how vote choice associates with sexism among men and among women, perceived discrimination against women, and a comparison of 2016 patterns to 2012 patterns.

But, from what I can tell, the analysis in the Political Behavior article did not follow the preregistered plan, and the article did not even reference the preregistration.

---

Moreover, some of the hypotheses in the preregistration appear to differ from hypotheses in the article. For example, the preregistration did not expect vote choice to associate with sexism differently in 2012 compared to 2016, but the article did. From the preregistration (emphasis added):

H5: When comparing the effects of education and modern sexism on abstentions, candidate evaluations, and vote choice in the 2012 and 2016 ANES data, we expect comparable patterns and effect sizes to emerge. (This is a non-directional hypothesis; essentially, we expect to see no difference and conclude that these relationships are relatively stable across election years. The alternative is that the direction and significance of the estimated effects in these models varies across the two election years.)

From the article (emphasis added):

To evaluate our expectations, we compare analysis of the 2012 and 2016 ANES surveys, with the expectation that hostile sexism and perceptions of discrimination had a larger impact on voters in 2016 due to the salience of sexism in the campaign (Hypothesis 5).

I don't think that the distinction between modern sexism and hostile sexism in the above passages matters: for example, the preregistration placed the "women complain" item in a modern sexism measure, but the article placed the "women complain" item in a hostile sexism measure.

---

Another instance, from the preregistration (emphasis added):

H3: The effect of modern sexism differs for white men and women. (This is a non-directional hypothesis. For women, modern sexism is an ingroup orientation, pertaining to women’s own group or self-interest, while for men it is an outgroup orientation. For this reason, the connection between modern sexism, candidate evaluations, and vote choice may vary, but we do not have strong a priori assumptions about the direction of the difference.)

From the article (emphasis added):

Drawing on the whiteness and system justification literatures, we expect these beliefs about gender will influence vote choice in a similar fashion for both white men and women (Hypothesis 4).

---

I think that readers of the Political Behavior article should be informed of the preregistration because preregistration, as I understand it, is intended to remove flexibility in research design, and preregistration won't be effective at removing research design flexibility if researchers retain the flexibility to not inform readers of the preregistration. I can imagine a circumstance in which the analyses reported in a publication do not need to follow the associated preregistration, but I can't think of a good justification for Cassese and Barnes 2019 readers not being informed of the Cassese and Barnes 2017 preregistration.

---

NOTES

1. Cassese and Barnes 2019 indicated that (footnote omitted and emphasis added):

To evaluate our hypothesis that disadvantaged white women will be most likely to endorse hostile sexist beliefs and more reluctant to attribute gender-based inequality to discrimination, we rely on the hostile sexism scale (Glick and Fiske 1996). The ANES included two questions from this scale: (1) Do women demanding equality seek special favors? and (2) Do women complaining about discrimination cause more problems than they solve? Items were combined to form a mean-centered scale. We also rely on a single survey item asking respondents how much discrimination women face in the United States. Responses were given on a 5-point Likert scale ranging from none to a great deal. This item taps modern sexism (Cassese et al. 2015). Whereas both surveys contain other items gauging gender attitudes (e.g., the 2016 survey contains a long battery of hostile sexism items), the items we use here are the only ones found in both surveys and thus facilitate direct comparisons, with accurate significance tests, between 2012 and 2016.

However, from what I can tell, the ANES 2012 Time Series Codebook and the ANES 2016 Time Series Codebook both contain a modern sexism item about media attention (modsex_media in 2012, MODSEXM_MEDIAATT in 2016) and a gender attitudes item about bonding (women_bond in 2012 and WOMEN_WKMOTH in 2016). The media attention item is listed in the Cassese and Barnes 2017 preregistration as part of the modern sexism dependent variable / mediating variable, and the preregistration indicates that:

We have already completed the analysis of the 2012 ANES data and found support for hypotheses H1-H4a in the pre-Trump era. The analysis plan presented here is informed by that analysis.

2. Some of the content from this post is from a "Six Things Peer Reviewers Can Do To Improve Political Science" manuscript. In June 2018, I emailed the Cassese and Barnes 2019 corresponding author a draft of the manuscript, which redacted criticism of the work of other authors that I had not yet informed of my criticism of their work. For another example from the "Six Things" manuscript, click here.


The Chudy 2021 Journal of Politics article "Racial Sympathy and Its Political Consequences" concerns White racial sympathy for Blacks.

More than a decade ago, Hutchings 2009 reported evidence about White racial sympathy for Blacks. Below is a table from Hutchings 2009 indicating that, among White liberals and White conservatives, sympathy for Blacks predicted at p<0.05 support for government policies explicitly intended to benefit Blacks such as government aid to Blacks, controlling for factors such as anti-Black stereotypes:

Chudy 2021 thanked Vincent Hutchings in the acknowledgments, and Vincent Hutchings is listed as co-chair of Jennifer Chudy's "Racial Sympathy in American Politics" dissertation. But see whether you can find in the Chudy 2021 JOP article an indication that Hutchings 2009 had reported evidence that White racial sympathy for Blacks predicted support for government policies explicitly intended to benefit Blacks.

Here is a passage from Chudy 2021 referencing Hutchings 2009:

I start by examining white support for "government aid to blacks," a broad policy area that has appeared on the ANES since the 1970s. The question asks respondents to place themselves on a 7-point scale that ranges from "Blacks Should Help Themselves" to "Government Should Help Blacks." Previous research on this question has found that racial animus leads some whites to oppose government aid to African Americans (Hutchings 2009). This analysis examines whether racial sympathy leads some white Americans to offer support for this contentious policy area.

I think that the above passages can be reasonably read as suggesting an incorrect claim that the Hutchings 2009 "previous research on this question" did not examine "whether racial sympathy leads some white Americans to offer support for this contentious policy area [of government aid to African Americans]".

---

NOTES:

1. Chudy 2021 reported results from an experiment that varied the race of a target culprit and asked participants to recommend a punishment. Chudy 2021 Figure 2 plotted estimates of recommended punishments at different levels of racial sympathy.

The Chudy 2021 analysis used a linear regression, which produced an estimated difference by race on a 0-to-100 scale of -22 at the lowest level of racial sympathy and of 41 at the highest level of racial sympathy. These differences can be seen in my plot below to the left, with a racial sympathy index coded from 0 through 16.

However, a linear relationship might not be a correct presumption. The plot to the right reports estimates calculated at each level of the racial sympathy index, so that the estimate at the highest level of racial sympathy is not influenced by cases at other levels of racial sympathy.
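A toy sketch (made-up numbers, not the article's data) of how a straight-line fit can misstate the estimate at one end of a scale when the underlying relationship is nonlinear:

```python
# Toy data: the outcome is flat across most of the scale, then jumps at the
# top. A straight-line fit extrapolates a nonzero estimate at x = 0 even
# though every observed outcome there is 0.
xs = [0, 1, 2, 3, 4]
ys = [0, 0, 0, 0, 10]

# Ordinary least squares slope and intercept by hand.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print(intercept)   # the linear fit predicts -2.0 at x = 0
print(ys[0])       # but every observed outcome at x = 0 is 0
```

Estimating the outcome separately at each level of the scale avoids this extrapolation.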

2. Chudy 2021 Figure 2 plots results from Chudy 2021 Table 5, but using a reversed outcome variable for some reason.

3. Chudy 2021 used the term "predicted probability" to discuss the Figure 2 / Table 5 results, but these results are predicted levels of an outcome variable that had eight levels, from "0-10 hours" to "over 70 hours" (see the bottom of the final page in the Chudy 2021 supplemental web appendix).

4. The bias detected in this experiment across all levels of racial sympathy was 13 units on a 0-to-100 scale, disfavoring the White culprit relative to the Black culprit (p=0.01) [svy: reg commservice whiteblackculprit].

5. Code for my analyses.


The Electoral Integrity Project surveyed U.S.-based election experts two weeks after the 2016 U.S. presidential election, with reminders sent from late November to mid-December. The overall response rate was 19%.

The plot below reports expert self-reported political ideology (N=718, excluding the eight respondents who did not respond to this item).

The plot below reports two-party vote share for Hillary Clinton among the 580 experts who reported voting for Hillary Clinton or Donald Trump. Two-party vote share for Donald Trump by expert field ranged from 0% (0 of 25) for sociology/anthropology to 4.3% (3 of 70) for political theory.

---

NOTES

1. Data source: Pippa Norris, Alessandro Nai, and Max Grömping. 2017. The expert survey of Perceptions of Electoral Integrity, US 2016 subnational study, Release 1.0, (PEI_US_1.0), January 2017: www.electoralintegrityproject.com. See https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/YXUV3W.

2. R code for the political ideology plot.

3. R code for the vote choice plot.

4. Stata code:

* Tabulate respondents by state, including missing values
tab state, mi

* Self-reported left-right ideology, without and with missing values
tab leftrightscale
tab leftrightscale, mi

* Reported candidate support: overall, cross-tabulated with the citizen
* variable, and restricted to the two major-party candidates
tab supported
tab supported citizen
tab supported if supported==1 | supported==2

* Major-party candidate support, tabulated separately by expert subfield
local subfield "elections ampol statepol comparative inter polcomm theory publicadmin publicpol methods socio"
foreach i of local subfield {
    display ""
    display "---------------- subfield = `i'"
    tab supported if (supported==1 | supported==2) & `i'==1
}

The plot below is from Strickler and Lawson 2020 "Racial conservatism, self-monitoring, and perceptions of police violence":

I thought that the plot might be improved:

---

Key differences between the plots:

1. The original plot has a legend, which requires readers to match colors in a legend to colors of estimates. The revised plot labels the estimates without using a legend.

2. The original plot reports treatment effects on a relative scale. The revised plot reports estimates on an absolute scale, so that readers can directly see the mean percentages that rated the shooting justified, for each group in each condition.

3. The revised plot uses 83% confidence intervals, so that readers can use non-overlaps in the confidence intervals to get a sense of whether the p-value is p<0.05 for a given comparison.

4. The revised plot reverses the axes and stacks the plots vertically, so that, for instance, it's easier to perceive that the percentage of non-White respondents in the control condition who rated the shooting as justified is lower than the corresponding percentage of White respondents, at about p=0.05.

---

The plot below repeats the plot above (left) and adds the same plot but with x-axes for each panel (right):

---

NOTES

1. Thanks to Ryan Strickler for sending me data and code for the article.

2. Code for the paired plot. Data for the plots.

3. Prior discussion of Strickler and Lawson 2020.

4. Other plot improvement posts.
