In 2019, Michael Tesler published a Monkey Cage post subtitled "The majority of people who hold racist beliefs say they have an African American friend". Here is the post's description of these racist beliefs:

Not many whites in the survey took the overtly racist position of saying 'most blacks' lacked those positive attributes. The responses ranged from 9 percent of whites who said 'most blacks' aren't intelligent to 20 percent who said most African Americans aren't law-abiding or generous.

My analysis of the Pew Research Center data used in the Tesler 2019 post indicated that Tesler 2019 labeled as "overtly racist" the belief that most Blacks are not intelligent, even if a participant also indicated that most Whites are not intelligent.

In the Pew Research Center data (citation below), including Don't Knows and refusals, 118 of 1,447 Whites responded "No" to the question of whether most Blacks are intelligent, which is about 8 percent. However, 57 of the 118 Whites who responded "No" to the question of whether most Blacks are intelligent also responded "No" to the question of whether most Whites are intelligent. Thus, based on these intelligence items, 48 percent of the White participants whom Tesler 2019 coded as taking an "overtly racist position" against Blacks also took a (presumably) overtly racist position against Whites. It could be that about half of the Whites who are openly racist against Blacks are also openly racist against Whites, or it could be that most or all of these 57 White participants have a nonracial belief that most people are not intelligent.
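
Here is a quick check in Stata of the percentages above, using counts from the cross-tabulation code in note 5 below:

display 118/1447   // about .08: the roughly 8 percent of Whites responding "No" for whether most Blacks are intelligent
display 57/118     // about .48: the 48 percent of that group also responding "No" for whether most Whites are intelligent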

Even the classification of responses from the 56 Whites who reported "No" for whether most Blacks are intelligent and "Yes" for whether most Whites are intelligent should address the literature on the distribution of IQ test scores in the United States and the possibility that at least some of these 56 Whites used the median U.S. IQ as the threshold for being intelligent.

---

I offered Michael Tesler an opportunity to reply. His reply is below:

Scholars have long disputed what constitutes racism in survey research.  Historically, these disagreements have centered around whether racial resentment items like agreeing that “blacks could be just as well of as whites if they only tried harder” are really racism or prejudice.  Because of these debates, I have avoided calling whites who score high on the racial resentment scale racists in both my academic research and my popular writing.

Yet even scholars who are most critical of the racial resentment measure, such as Ted Carmines and Paul Sniderman, have long argued that self-reported racial stereotypes are “self-evidently valid” measures of prejudice.  So, I assumed it would be relatively uncontroversial to say that whites who took the extreme position of saying that MOST BLACKS aren’t intelligent/hardworking/honest/law-abiding hold racist beliefs.  As the piece in question noted, very few whites took such extreme positions—ranging from 9% who said most blacks aren’t intelligent to 20% who said most blacks are not law-abiding.

If anything, then, the Pew measure of stereotypes used severely underestimates the extent of white racial prejudice in the country.  Professor Zigerell suggests that differencing white from black stereotypes is a better way to measure prejudice.  But this isn’t a very discerning measure in the Pew data because the stereotypes were only asked as dichotomous yes-no questions.  It’s all the more problematic in this case since black stereotypes were asked immediately before white stereotypes in the Pew survey and white respondents may have rated their own group less positively to avoid the appearance of prejudice.

In fact, Sniderman and Carmines’s preferred measure of prejudice—the difference between 7-point anti-white stereotypes and 7-point anti-black stereotypes—reveals far more prejudice than I reported from the Pew data.  In the 2016 American National Election Study (ANES), for example, 48% of whites rated their group as more hardworking than blacks, compared to only 13% in the Pew data who said most blacks are not hardworking.  Likewise, 53% of whites in the 2016 ANES rated blacks as more violent than whites and 25% of white Americans in the pooled 2010-2018 General Social Survey rated whites as more intelligent than blacks.

Most importantly, the substantive point of the piece in question—that whites with overtly racist beliefs still overwhelmingly claim they have black friends—remains entirely intact regardless of measurement.  Even if one wanted to restrict racist beliefs to only those saying most blacks are not intelligent/law-abiding AND that most whites are intelligent/law-abiding, 80%+ of these individuals who hold racist beliefs reported having a black friend in the 2009 Pew Survey.

All told, the post in question used a very narrow measure, which found far less prejudice than other valid stereotype measures, to make the point that the vast majority of whites with overtly racist views claim to have black friends.  Defining prejudice even more narrowly leads to the exact same conclusion.

I'll add a response in the comments.

---

NOTES

1. The title of the Tesler 2019 post is "No, Mark Meadows. Having a black friend doesn't mean you're not racist".

2. Data citation: Pew Research Center for the People & the Press/Pew Social & Demographic Trends. Pew Research Center Poll: Pew Social Trends--October 2009-Racial Attitudes in America II, Oct, 2009 [dataset]. USPEW2009-10SDT, Version 2. Princeton Survey Research Associates International [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, RoperExpress [distributor], accessed Aug-14-2019.

3. "White" and "Black" in the data analysis refer to non-Hispanic Whites and non-Hispanic Blacks.

4. In the Pew data, more White participants (147) reported "No" for the question of whether most Whites are intelligent, compared to the number of White participants (118) who reported "No" for the question of whether most Blacks are intelligent.

Patterns were similar among the 812 Black participants: 145 Black participants reported "No" for the question of whether most Whites are intelligent, but only 93 Black participants reported "No" for the question of whether most Blacks are intelligent.

Moreover, 76 White participants reported "Yes" for the question of whether most Blacks are intelligent and "No" for the question of whether most Whites are intelligent.

5. Stata code:

* Racial/ethnic composition of the sample, including missing values
tab racethn, mi

* Cross-tabulation of the "most Blacks are intelligent" and "most Whites are
* intelligent" items among non-Hispanic White participants
tab q69b q70b if racethn==1, mi

* The same cross-tabulation among non-Hispanic Black participants
tab q69b q70b if racethn==2, mi

---

Racial attitudes have substantially correlated with environmental policy preferences net of partisanship and ideology, such as here, here, and here. Those results were from data collected in 2012 or later, so, to address the concern that the association reflects "spillover" of anti-Obama attitudes into non-racial policy areas, I checked whether the traditional four-item measure of racial resentment substantially correlated with environmental policy preferences net of partisanship and ideology in ANES data from 1986, which I think is the first time these items appeared together on an ANES survey.

I limited the sample to non-Hispanic Whites and controlled for participant gender, education, age, family income, partisanship, ideology, and the race of the interviewer. The outcome variable concerns federal spending on improving and protecting the environment, which I coded so that 1 was "increased" and 0 was "same" or "decreased", with Don't Knows and Not Ascertaineds coded as missing; only 4 percent of respondents indicated "decreased".

With other model variables at their means, the predicted probability of a reported preference for increased federal spending on improving and protecting the environment was 65% [54%, 76%] at the lowest level of racial resentment but fell to 39% [31%, 47%] at the highest level of racial resentment. That is a substantial 26-percentage-point drop "caused" by racial attitudes, for anyone who thinks that such a research design permits causal inference.
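
A minimal sketch of this model in Stata, using the variable names from the code in note 4 below:

* Logit of preferring increased federal environmental spending on racial resentment
* plus controls, limited to non-Hispanic Whites (variable names as in note 4 below)
logit env2 RR4 i.female i.educ age i.finc i.party i.ideo i.V860037 if NHwhite==1

* Predicted probabilities at the lowest and highest levels of racial resentment,
* with other model variables at their means
margins, atmeans at(RR4=(0 1))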

---

NOTES:

1. Kinder and Sanders 1996 used racial resentment to predict non-racial attitudes (pp. 121-124), but, based on reading that section, I don't think KS96 predicted this environmental policy preference variable.

2. Data source: Warren E. Miller and the University of Michigan. Institute for Social Research. American National Election Studies. ANES 1986 Time Series Study. Inter-university Consortium for Political and Social Research [distributor].

3. Stata code and output.

4. The post title is about 1986, but some ANES 1986 interviews were conducted in Jan/Feb 1987. The key result still holds if the sample is limited to cases with an "86" year for the "Date of Interview" variable, with respective predicted probabilities of 67% and 37% (p=0.002 for racial resentment). About four dates appear to be incorrect, such as "01-04-86", "12-23-87", and "11-18-99". Code:

logit env2 RR4 i.female i.educ age i.finc i.party i.ideo i.V860037 if NHwhite==1 & substr(V860009, 7, 8)=="86"
margins, atmeans at(RR4=(0 1))

---

The PS: Political Science and Politics article "Fear, Institutionalized Racism, and Empathy: The Underlying Dimensions of Whites' Racial Attitudes" by Christopher D. DeSante and Candis Watts Smith reports results for four racial attitudes items from a "FIRE" battery.

I have a paper and a blog post indicating that combinations of these items substantially associate with environmental policy preferences net of controls for demographics, partisanship, and political ideology. DeSante and Smith have a paper reporting an analysis that used combinations of these items to predict an environmental policy preference ("Support E.P.A.", in Table 3 of that paper), but results for this outcome variable are not mentioned in the DeSante and Smith 2020 PS publication. DeSante and Smith 2020 reports results for the four FIRE racial attitudes items separately, so I do so below for environmental policy preference outcome variables, using data from the 2016 Cooperative Congressional Election Study (CCES).

---

Square brackets contain predicted probabilities from a logistic regression—net of controls for gender, education, age, family income, partisanship, and political ideology—of selecting "oppose" regarding the policy "Strengthen enforcement of the Clean Air Act and Clean Water Act even if it costs US jobs". The sample is limited to White respondents, and the estimates are weighted. The first probability in square brackets is at the highest level of measured agreement with the indicated statement on a five-point scale, with all other model predictors at their means; the second probability is for the corresponding highest level of measured disagreement with the indicated statement.

  • [38% to 56%, p<0.05] I am angry that racism exists.
  • [29% to 58%, p<0.05] White people in the U.S. have certain advantages because of the color of their skin.
  • [39% to 42%, p>0.05] I often find myself fearful of people of other races.
  • [51% to 36%, p<0.05] Racial problems in the U.S. are rare, isolated situations.
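
A minimal Stata sketch of this type of model, not the exact code, with placeholder variable names (oppose_enforce for the dichotomous "oppose" indicator, fire_item for one of the four five-point FIRE items entered as a 1-to-5 score, and wgt for the survey weight):

* Weighted logit of opposing stronger Clean Air Act/Clean Water Act enforcement on
* one FIRE item plus controls, among White respondents (placeholder variable names)
logit oppose_enforce fire_item i.gender i.educ age i.faminc i.pid7 i.ideo5 ///
    if white==1 [pweight=wgt]

* Predicted probabilities of "oppose" at the strongest agreement (1) and strongest
* disagreement (5) responses, with other model predictors at their means
margins, atmeans at(fire_item=(1 5))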

Results below are from a fractional logistic regression predicting an index formed by summing the four environmental policy items and placing the sum on a 0-to-1 scale:

  • [0.28 to 0.48, p<0.05] I am angry that racism exists.
  • [0.23 to 0.44, p<0.05] White people in the U.S. have certain advantages because of the color of their skin.
  • [0.28 to 0.32, p<0.05] I often find myself fearful of people of other races.
  • [0.42 to 0.26, p<0.05] Racial problems in the U.S. are rare, isolated situations.

The standard deviation of the 0-to-1 four-item environmental policy index is 0.38, so three of the four results immediately above indicate nontrivially large differences in predictions for an environmental policy preference outcome variable that has no theoretical connection to race, which I think raises legitimate questions about whether these racial attitudes items should ever be used to estimate the causal influence of racial attitudes.
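
A corresponding minimal sketch of the fractional logit version, with the same placeholder covariates as the sketch above and env_index for the 0-to-1 policy index:

* Weighted fractional logit of the 0-to-1 environmental policy index on one FIRE
* item plus controls, among White respondents (placeholder variable names)
fracreg logit env_index fire_item i.gender i.educ age i.faminc i.pid7 i.ideo5 ///
    if white==1 [pweight=wgt]

* Predicted values of the index at the strongest agreement (1) and strongest
* disagreement (5) responses, with other model predictors at their means
margins, atmeans at(fire_item=(1 5))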

---

NOTES

1. Stata code.

2. Data source: Stephen Ansolabehere and Brian F. Schaffner, Cooperative Congressional Election Study, 2016: Common Content. [Computer File] Release 2: August 4, 2017. Cambridge, MA: Harvard University [producer] http://cces.gov.harvard.edu

---

1.

The Carrington and Strother 2020 "Who thinks removing Confederate icons violates free speech?" Politics, Groups, and Identities article "examine[s] the relationship between both 'heritage' and 'hate' and pro Confederate statue views" (p. 5).

The right panel of Carrington and Strother 2020 Figure 2 indicates how support for Confederate symbols associates with their "hate" measure. Notice how much of the "hate" association is due to those who rate Whites less warmly than they rate Blacks. Imagine a line extending horizontally from [i] the y-axis at a 50 percent predicted probability of support for Confederate symbols to [ii] the far end of the confidence interval; that 50 percent ambivalence about Confederate symbols falls on the "anti-White affect" part of the "hate" measure.

---

2.

The second author of Carrington and Strother 2020 has discussed the Wright and Esses 2017 article that claimed that "Most supporters of the flag are doing so because of their strong Southern pride and their conservative political views and do not hold negative racial attitudes toward Blacks" (p. 235). Moreover, my 2015 Monkey Cage post on support for the Confederate battle flag presented evidence that conflicted with claims that the second author of Carrington and Strother 2020 made in a prior Monkey Cage post.

The published version of Carrington and Strother 2020 did not cite Wright and Esses 2017 or my 2015 post. I don't think that Carrington and Strother 2020 had an obligation to cite either publication, but, if these publications were not cited in the initial submission, I think that would plausibly have produced a less rigorous peer review, assuming the journal's selection of peer reviewers is at least partly dependent on manuscript references. And the review process for Carrington and Strother 2020 appears not to have been especially rigorous, to the extent that this can be inferred from the published article, which reported multiple impossible p-values ("p < .000") and referred to "American's views toward Confederate statues" (p. 5, instead of "Americans' views") and to "the Cour's decision" (p. 7, instead of "the Court's decision").

The main text reports a sample of 332, but the table Ns are 233; presumably, the table results are for Whites only and the 332 refers to the full set of respondents, but I don't see that stated in the article. The appendix indicates that the Figure 2 outcome variable had four levels and that the Figure 3 outcome variable had six levels, but the figure results are presented as predicted probabilities, so I suspect that the analysis dichotomized these outcome variables for some reason; let me know if you find an indication of that in the article.

And did no one in the review process raise a concern about the Carrington and Strother 2020 suggestion below that White Southern pride requires or is nothing more than "pride in a failed rebellion whose stated purpose was the perpetuation of race-based chattel slavery" (p. 6)?

It must be noted that White Southern pride should not be assumed to be racially innocuous: it is hard to imagine a racially neutral pride in a failed rebellion whose stated purpose was the perpetuation of race-based chattel slavery.

It seems possible to be proud to be from the South but not have pride in the Confederacy, similar to the way that it is possible to be proud to be a citizen of a country and not have pride in every action of the country or even in a major action of that country.

---

3.

My peer review, had I been asked to provide one, might have mentioned that, while Figure 2 of Carrington and Strother 2020 indicates that racial attitudes are a larger influence than Southern pride, the research design might have been biased toward this inference: Southern pride is measured with a 5-point item, racial attitudes are measured with a 201-point scale, and it is plausible that a more precise measure might produce a larger association, all else equal.

Moreover, the left panel of Carrington and Strother 2020 Figure 2 indicates that the majority supported Confederate symbols. Maybe I'm thinking about this incorrectly, but much of the association for racial attitudes is due to the "less than neutral about Whites" part of the racial attitudes scale, while there is no corresponding "less than neutral" part of the Southern pride item. Predicted probabilities for the racial attitudes panel extend much lower than neutral because of more negative attitudes about Whites relative to Blacks, but the research design doesn't provide corresponding predicted probabilities for those who have negative attitudes about Southern pride.

---

4.

I think that a core claim of Carrington and Strother 2020 is that (p. 2):

...our findings suggest that the free speech defense of Confederate icons in public spaces is, in part, motivated by racial attitudes.

The statistical evidence presented for this claim is that the racial attitudes measure associates with a measure of agreement with a free speech defense of Confederate monuments. But, as indicated in the right panel of Carrington and Strother 2020 Figure 3, the results are also consistent with the claim that racial attitudes partly motivate *not* agreeing with this free speech defense.

---

5.

The Carrington and Strother 2020 use of a White/Black feeling thermometer difference as their measure of racial attitudes permitted comparison of those with relatively more favorable feelings about one racial group to those with relatively more favorable feelings about the other racial group.
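
For concreteness, a thermometer-difference measure of this kind can be constructed in one line of Stata; the variable names below are placeholders for the 0-to-100 feeling thermometers, not the authors' coding:

* Difference of two 0-to-100 feeling thermometers, producing a 201-point scale that
* runs from -100 (much warmer toward Blacks than Whites) to 100 (much warmer toward
* Whites than Blacks); placeholder variable names
gen therm_diff = therm_white - therm_black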

The racial resentment measure that is sometimes used as a measure of racial attitudes would presumably instead have coded the bulk of respondents on or near the end of the "Warmer to Black" [sic] part of the Carrington and Strother 2020 "hate" measure as merely not racially resentful, which would not have permitted readers to distinguish those who reported relatively more negative feelings about Whites from those whose reported feelings favored neither Whites nor Blacks.

---

The Morning et al. 2019 Du Bois Review article "Socially Desirable Reporting and the Expression of Biological Concepts of Race" reports on an experiment from the Time-sharing Experiments for the Social Sciences (TESS). Documentation at the TESS link indicates that the survey was fielded between Oct 8 and Oct 14 of 2004, and the article was published online Oct 14 of 2019, so the data were about 15 years old, but I did not see anything in the article that indicated the year of data collection.

Here is a key result, discussed on page 11 of the article:

When respondents in the comparison group were asked directly whether they agreed with the statement on genetics and race, only 13% said they did. This figure is significantly lower than the 22% we estimated previously as "truly" supporting the race statement. As a result, we conclude that the social desirability effect for this item equals 9 percentage points (22 – 13).

That 22% estimate of support is for non-Black responses that are not weighted to reflect population characteristics, but my analysis indicated that the estimate of support falls to 14% when the weight variable in the TESS dataset is applied to the non-Black responses. With these weights, the social desirability effect is thus not statistically different from zero in the data. Nonetheless, the Morning et al. 2019 abstract generalizes the results to the population of non-Black Americans:

We show that one in five non-Black Americans attribute income inequality between Black and White people to unspecified genetic differences between the two groups. We also find that this number is substantially underestimated when using a direct question.

---

I would like peer review to require [1] an indication of the year(s) of data collection and [2] a discussion of weighted results for an experiment when the data are known, or should be suspected, to include a third-party weight variable (such as data from TESS or a CCES module).
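
As a minimal illustration of point [2], applying a third-party weight in Stata is typically a one-term change; the variable names below (support, nonblack, wgt) are placeholders rather than the names in the TESS file:

* Unweighted estimate of support among non-Black respondents (placeholder names)
proportion support if nonblack==1

* The same estimate applying the dataset's third-party weight
proportion support if nonblack==1 [pweight=wgt]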

---

NOTES

1. This post is a follow-up to this tweet, which tagged two of the Morning et al. 2019 co-authors.

2. In this tweet, I expressed doubt that a peer reviewer or editor would check these data to see if inferences are robust to weighting. Morning et al. 2019 indicates that a peer reviewer suggested that a weight be applied to account for an inequality between experimental groups (p. 8):

...the baseline group has a disproportionately large middle-income share and small lower-income share relative to the test and comparison groups. As suggested by one anonymous reviewer, we reran the analyses using a weight calculated such that the income distribution in the baseline group corresponds to that found in the treatment and comparison groups.

3. I am a co-author of an article that discusses, among other things, variation in the use of weights for survey experiments in a political science literature.

"Evidence of Bias in Standard Evaluations of Teaching" (Mirya Holman, Ellen Key, and Rebecca Kreitzer, 2019) has been cited as evidence of bias in student evaluations of teaching.

I am familiar with Mitchell and Martin 2018, so let's check how that study is summarized in the list, as archived on 20 November 2019. I count three substantive errors in the summary and one spelling error, not counting the "fgender" in the header or the singular "RateMyProfessor". The substantive errors are described below:

  • The summary referred to the online courses as being from different universities, but all of the online courses in the Mitchell and Martin 2018 analysis were at the same university.
  • The summary referred to "female instructors" and "male professors", but the Mitchell and Martin 2018 analysis compared comments and evaluations for only one female instructor to comments and evaluations for only one male instructor.
  • The summary indicated that female instructors were evaluated differently in intelligence, but no Mitchell and Martin 2018 table reported a statistical significance asterisk for the Intelligence/Competency category.

---

The aforementioned errors in the summary of Mitchell and Martin 2018 can easily be fixed, but that would not address a flaw in a particular use of the list: from what I can tell, Mitchell and Martin 2018 itself has errors that undercut the inference that students use different language when evaluating female instructors than when evaluating male instructors. Listing that study and other studies based on an uncritical reading of their results shouldn't be convincing evidence of bias in student evaluations of teaching, especially if the categorization of studies does not indicate whether "bias" is operationalized as an unfair difference or as a mere difference.

I think there would be value in a version of "Evidence of Bias in Standard Evaluations of Teaching" that accurately summarizes each study that has tested for unfair bias in student evaluations of teaching using a research design with internal validity and plausibly sufficient statistical power, especially if each summary were coupled with a justification of why the study provides credible evidence about unfair bias in student evaluations of teaching. But I don't see why anyone should be convinced by "Evidence of Bias in Standard Evaluations of Teaching" in its current form.

---

The Enders 2019 Political Behavior article "A Matter of Principle? On the Relationship Between Racial Resentment and Ideology" interprets its results as "providing disconfirmatory evidence for the principled conservatism thesis" (p. 3 of the pdf). This principled conservatism thesis "asserts that adherence to conservative ideological principles causes what are interpret[ed] as more resentful responses to the individual racial resentment items, especially those that deal with subjects like hard work and struggle" (p. 5 of the pdf).

So how could we test whether adherence to conservative principles causes what are interpreted as resentful responses to racial resentment items? I think that a conservative principle informing a "strongly agree" response to the racial resentment item that "Irish, Italians, Jewish, and many other minorities overcame prejudice and worked their way up. Blacks should do the same without any special favors" might be an individualism that opposes special favors to reduce inequalities of outcome, so that, if a White participant strongly agreed that Blacks should work their way up without special favors, then—to be principled—that White participant should also strongly agree that poor Whites should work their way up without special favors.

Thus, testing the principled conservatism thesis could involve asking participants the same racial resentment items with a variation in target or with a variation to a domain in which Blacks tend to outperform Whites. If there is a concern about social desirability affecting responses when participants are asked the same item with a variation in target or domain, the items could be experimentally manipulated and responses compared at an aggregate level. This type of analysis, manipulating the target of the racial resentment items to be Blacks or another group, has recently been conducted and reported in a paper by Carney and Enos, but that paper is not cited in Enders 2019, and I would have hoped that the peer reviewers would have requested or required a discussion of information in that paper that relates to the principled nature of conservatives' responses to racial resentment items.

---

Instead of manipulating the target of racial resentment items, Enders 2019 tested the principled conservatism thesis with an analysis that assessed how responses to racial resentment items associated with attitudes about limited government and with preferences about federal spending on, among other things, public schools, child care, and the environment. From what I can tell, Enders 2019 assessed the extent to which participants are principled using a test in which the only responses counted as principled conservative responses are those in which the responses expected from a conservative to the racial resentment items match the responses expected from a conservative to items measuring preferences about federal spending, or match the responses expected from a conservative to items measuring attitudes about limited government. As I think Enders 2019 suggests, this is a consistency across domains at the level of "conservatism" and is not a consistency across targets within the domain of the racial resentment items: "If I find that principled conservatism does not account for a majority of the variance in the racial resentment scale under these conditions, then I will have reasonably robust evidence against the principled conservatism thesis" (p. 7 of the pdf).

But I don't think that the level of "conservatism" is the correct level for assessing whether perceived racially prejudiced responses to racial resentment items reflect "adherence to (conservative) ideological principles" (p. 2 of the pdf). Enders 2019 indicates that "Critics argue that racially prejudiced responses to the items that compose the racial resentment scale are observationally equivalent to the responses that conservatives would provide" (abstract). However, my criticism of the racial resentment items as producing unjustified inferences of racial bias is not limited to inferences about responses from self-identified conservatives: "This statement [about whether, if blacks would only try harder, they could be just as well off as whites] cannot be used to identify racial bias because a person who agreed with the statement might also agree that poor whites who try harder could be just as well off as middle-class whites" (p. 522 of this article). I don't perceive any reason why a person who supports increased federal spending on public schools, child care, and the environment cannot also have a principled objection to special favors to reduce inequalities of outcome.

And even if "conservatism" were the correct level of analysis, I don't think that the Enders 2019 operationalizations of principled conservatism—as a preference for limited government and as a preference for decreased federal spending—are valid because, as far as I can tell, these operationalizations of principled conservatism are identical to principled libertarianism.

---

Enders 2019 asks "Why else would attitudes about racial issues be distinct from attitudes about other policy areas, if not for the looming presence and substantive impact of racial prejudice?" (p. 21 of the pdf). I think the correct response is that the principles that inform attitudes about these other policy areas are distinct from the principles that inform attitudes about issues in the racial resentment items, to the extent that these attitudes even involve principles.

I don't think that the principle that "the less government, the better" produces conservative policy preferences about federal spending on national defense or domestic law enforcement, and I don't see a reason to assign to racial prejudice an inconsistency between support for increased federal spending in these domains and agreement that "the less government, the better". And I don't perceive a reason for racial prejudice to be assigned responsibility for a supposed inconsistency between responses to the claim that "the less government, the better" and responses to the racial resentment statements that "Generations of slavery and discrimination have created conditions that make it difficult for blacks to work their way out of the lower class" or that "...if blacks would only try harder they could be just as well off as whites", because, as far as I can tell, a preference for limited government does not compel any particular response to these racial resentment items.

---

NOTES

1. Enders 2019 noted: "More recently, DeSante (2013), utilizing an experimental research design, found that the most racially resentful whites, as opposed to less racially resentful whites, were more likely to allocate funds to offset the state budget deficit than allocated such funds to a black welfare applicant. This demonstrates a racial component of racial resentment, even accounting for principled conservatism" (p. 6). But I don't think that this demonstrates a racial component of racial resentment, because there is no indication of whether the preference for allocating funds to offset the state budget deficit instead of allocating funds to welfare recipients occurred regardless of the race of the welfare recipients. My re-analysis of data for DeSante 2013 indicated that "...when comparing conditions with two White applicants and conditions with two Black applicants, there is insufficient evidence to support the inference of a difference in the effect of racial resentment on allocations to offset the state budget deficit" (pp. 5-6).

2. I sent the above comments to Adam Enders in case he wanted to comment.

3. After I sent the above comments, I saw this Robert VerBruggen article on the racial resentment measure. I don't remember seeing that article before, but it has a lot of good points and ideas.