In "Gendered Nationalism and the 2016 US Presidential Election: How Party, Class, and Beliefs about Masculinity Shaped Voting Behavior" (Politics & Gender 2019), Melissa Deckman and Erin Cassese reported a Table 2 model that had a sample size of 750 and a predictor for college degree that had a logit coefficient of -0.57 and a standard error of 0.28, so the associated t-statistic is -0.57/28, or about -2.0, which produces a p-value of about 0.05.

The college degree coefficient fell to -0.27 when a "gendered nationalism" predictor was added to the model, and Deckman and Cassese 2019 indicated (pp. 17-18) that:

A post hoc Wald test comparing the size of the coefficients between the two models suggests that the coefficient for college was significantly reduced by the inclusion of the mediator [F(1,678) = 7.25; p < .0072]...

From what I can tell, this means that there is stronger evidence for the -0.57 coefficient differing from the -0.27 coefficient (p<0.0072) than for the -0.57 coefficient differing from zero (p≈0.05).

This type of odd result has been noticed before.

---

For more explanation, below are commands that can be pasted into Stata to produce a similar result:

clear all
set seed 123
set obs 500
gen Y = runiform(0,10)
gen X1 = 0.01*(Y + runiform(0,10)^2)    // X1 and X2 each share variance with Y
gen X2 = 0.01*(Y + 2*runiform(0,10))
reg Y X1
egen weight = fill(1 1 1 1 1)           // constant weight of 1 for every observation
svyset [pw=weight]
svy: reg Y X1
estimates store X1alone
svy: reg Y X1 X2
estimates store X1paired
suest X1alone X1paired                  // combine the stored models for cross-model tests
lincom _b[X1alone:X1] - 0               // test the X1 coefficient against zero
di _b[X1paired:X1]                      // display the X1 coefficient from the two-predictor model
lincom _b[X1alone:X1] - 0.4910762       // test against that coefficient's value as a fixed number
lincom _b[X1alone:X1] - _b[X1paired:X1] // test the two estimated coefficients against each other

The X1 coefficient is 0.8481948 in the "reg Y X1" model and 0.4910762 in the "reg Y X1 X2" model. Results for the "lincom _b[X1alone:X1] - _b[X1paired:X1]" command indicate that the p-value is 0.040 for the test that the 0.8481948 coefficient differs from the 0.4910762 coefficient. But results for the "lincom _b[X1alone:X1] - 0.4910762" command indicate that the p-value is 0.383 for the test that the 0.8481948 coefficient differs from the fixed number 0.4910762.

So, from what I can tell, there is stronger evidence that the 0.8481948 X1 coefficient differs from an imprecisely estimated coefficient whose value is 0.4910762 than that it differs from the fixed number 0.4910762.

---

As indicated in the link above, this odd result appears attributable to the variance sum law:

Variance(X-Y) = Variance(X) + Variance(Y) - 2*Covariance(X,Y)

For the test of whether the 0.8481948 X1 coefficient differs from the 0.4910762 X1 coefficient, the formula is:

Variance(X-Y) = Variance(X) + Variance(Y) - 2*Covariance(X,Y)

But for the test of whether a coefficient differs from a fixed value, such as the test of whether the -0.57 coefficient differs from zero, the formula reduces to:

Variance(X-Y) = Variance(X) + 0 - 0

For the simulated data, subtracting 2*Covariance(X,Y) reduces Variance(X-Y) more than adding the Variance(Y) increases Variance(X-Y), which explains how the p-value can be lower for comparing the two coefficients to each other than for comparing one coefficient to the value of the other coefficient.
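Plugging the estimates from e(V) (displayed in the code below) into the formula:

Variance(X-Y) = 0.16695974 + 0.14457114 - 2*0.14071065 = 0.03010958

which is much smaller than the 0.16695974 variance used when testing the coefficient against a fixed number.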

See the code below:

suest X1alone X1paired
matrix list e(V)                        // variances and covariances of the stored coefficient estimates
di (.8481948-.4910762)/sqrt(.16695974)  // test statistic against the fixed number: about 0.87
di (.8481948-.4910762)/sqrt(.16695974+.14457114-2*.14071065)  // against the estimated coefficient: about 2.06
test _b[X1alone:X1] = _b[X1paired:X1]   // Wald test that the two coefficients are equal

Stata output here.

---

The 2018 CCES (Cooperative Congressional Election Survey) included an item asking about attitudes toward the statement: "White people in the U.S. have certain advantages because of the color of their skin". Schaffner 2020 ("The Heightened Importance of Racism and Sexism in the 2018 U.S. Midterm Elections") used this item in a "denial of racism" measure, which Schaffner 2020 reduced to "racism" in the title and elsewhere. The included items permitted Schaffner 2020 to note that higher values of the "denial of racism" measure associate with voting for Republican candidates for president and the House (e.g., in Figure 2).

The 2018 CCES did not include a parallel item about whether White people in the United States have certain disadvantages, but the 2016 American National Election Studies Time Series Study has a set of items that permits comparison of denial of discrimination against certain groups. Here are results for the racial groups asked about, from the web sample with weights applied. Non-responses are included in the percentages, and error bars indicate ends of 95% confidence intervals:

Here are the above data, disaggregated by racial groups:

Here are data for Whites, with point estimates indicating responses by partisanship:

So these data indicate that a higher percentage of White Republicans than of White Democrats deny that there is discrimination against Blacks, Hispanics, and Asians. But these data also indicate that a higher percentage of White Democrats than of White Republicans deny that there is discrimination against Whites.

Let's check the ANES 2016 Time Series Study data to see how well each "denial of discrimination" measure predicts two-party vote choice in the 2016 U.S. presidential election, using the full sample (not only Whites):
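For a sense of the structure of this check, here is a minimal Stata sketch, in which all variable names are hypothetical placeholders rather than the actual ANES 2016 variable names:

* sketch only: voted_trump is a placeholder for two-party vote choice (1 = Trump, 0 = Clinton),
* deny_disc_* are placeholders for the "denial of discrimination" measures,
* and weight is a placeholder for the survey weight
svyset [pw=weight]
svy: logit voted_trump deny_disc_blacks
svy: logit voted_trump deny_disc_whites
svy: logit voted_trump deny_disc_hispanics
svy: logit voted_trump deny_disc_asians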

So these data indicate that denial of discrimination against Whites was at least as good a predictor of two-party vote choice in the 2016 U.S. presidential election as denial of discrimination against Blacks was, and was a better predictor than denial of discrimination against Hispanics and denial of discrimination against Asians.

For results below, the sample is limited to Whites:

---

I think that results are more informative when denial of discrimination is measured for more than one racial group, especially given evidence that Republicans and Democrats favor different racial groups. I think it's worth considering why the persons who decide which items to include on the CCES didn't include a parallel item about whether White people in the United States have certain disadvantages.

---

NOTES

1. The other 2018 CCES item that Schaffner 2020 used for the "denial of racism" measure is: "Racial problems in the U.S. are rare, isolated situations". DeSante and Smith 2017 referred to this item as a measure of "acknowledgment of institutional racism", but this item does not refer to institutions and uses "racial problems" instead of "racism". These seem like suboptimal choices for trying to measure "acknowledgment of institutional racism".

2. ANES 2016 citations:

The American National Election Studies (ANES). 2016. ANES 2012 Time Series Study. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2016-05-17. https://doi.org/10.3886/ICPSR35157.v1.

ANES. 2017. "User's Guide and Codebook for the ANES 2016 Time Series Study". Ann Arbor, MI, and Palo Alto, CA: The University of Michigan and Stanford University.

3. CCES 2018 citation:

Stephen Ansolabehere, Brian F. Schaffner, and Sam Luks. Cooperative Congressional Election Study, 2018: Common Content. [Computer File] Release 2: August 28, 2019. Cambridge, MA: Harvard University [producer] http://cces.gov.harvard.edu.

4. Code for the denial of discrimination analyses.

---

Let's define "isolated negative feeling" as rating one target group under 50 but rating all other included target groups at 50 or above, on 0-to-100 feeling thermometers. Target groups in the plot below were Whites, Blacks, Hispanics, and Asians. Data are from the web sample of the 2016 American National Election Studies Time Series Study, with weights applied and limited to participants who provided a numeric rating for all four target groups. Error bars indicate ends of 95% confidence intervals:

---

NOTES

1. ANES 2016 citations:

The American National Election Studies (ANES). 2016. ANES 2012 Time Series Study. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2016-05-17. https://doi.org/10.3886/ICPSR35157.v1.

ANES. 2017. "User's Guide and Codebook for the ANES 2016 Time Series Study". Ann Arbor, MI, and Palo Alto, CA: The University of Michigan and Stanford University.

2. Code.

---

PS Political Science & Politics recently published Liu et al. 2020 "The Gender Citation Gap in Undergraduate Student Research: Evidence from the Political Science Classroom". The authors use their study to discuss methods to address gender bias in citations among students:

To the extent that women, in fact, are underrepresented in undergraduate student research, the question becomes: What do we, as a discipline, do about this?...

However, Liu et al. 2020 do not establish that women authors were unfairly underrepresented in student research, because Liu et al. 2020 did not compare citation patterns to a benchmark of the percentage of women that should be cited in the absence of gender bias.

PS Political Science & Politics has an relevant article for benchmarking: Teele and Thelen 2017, in which Table 1 reports the percentage of authors who are women for research articles published from 2000 to 2015 in ten top political science journals. Based on that table, about 26.3% of authors were women.

The Liu et al. 2020 student sample had 75 male students and 65 female students, with male students citing 21.2% women authors and female students citing 33.1% women authors, so the percentage of women cited by the students overall was about 26.7% when weighted by the number of students of each gender, which is remarkably close to the 26.3% benchmark.
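For reference, that weighted average is:

(75*21.2% + 65*33.1%) / (75 + 65) ≈ 26.7%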

There might be sufficient evidence to claim that the 95% confidence interval for male students does not contain the proper benchmark, and the same might be true for female students. But the 26.3% benchmark from Teele and Thelen 2017 might not be the correct benchmark: for example, maybe students wrote more on topics for which women have published relatively more, or maybe students drew from publications from before 2000, when women were a smaller percentage of political scientists than from 2000 to 2015. The correct benchmark for inferring that women authors were unfairly underrepresented should have been addressed before PS published the final paragraph of Liu et al. 2020, with its recommendations about how to address women's under-representation in undergraduate student research.

---

Back in 2016, SocImages tweeted a link to a post entitled "Trump Supporters Substantially More Racist Than Other Republicans". The "more racist" label refers to Trump supporters being more likely than Cruz supporters and Kasich supporters to indicate on stereotype scales that Blacks "in general" are less intelligent, more lazy, more rude, more violent, and more criminal than Whites "in general". I had a brief Twitter discussion with Philip Cohen and offered to move the discussion to a blog post. I also collected some relevant data, which I report on in a new publication in Political Studies Review.

---

In 2017, Turkheimer, Harden, and Nisbett estimated in Vox that the Black/White IQ gap is closer to 10 points than to 15 points. Ten points would be a relatively large gap: about 2/3 of a standard deviation, given that IQ scores are normed to a standard deviation of 15. Suppose that a person reads this Vox article and the IQ literature and, as a result, comes to believe that IQ is a valid enough measure of intelligence for it to be likely that the Black/White IQ gap reflects a true difference in mean intelligence. This person later responds to a survey, rating Whites in general one unit higher than Blacks in general on a stereotype scale for intelligence. My question, for anyone who thinks that such stereotype scale responses can be used as a measure of anti-Black animus, is:

Why is it racist for this person to rate Whites in general one unit higher than Blacks in general on a stereotype scale for intelligence?

I am especially interested in a response that is general enough to indicate whether it would be sexist against men to rate men in general higher than women in general on a stereotype scale for criminality.

---

In 2019, Michael Tesler published a Monkey Cage post subtitled "The majority of people who hold racist beliefs say they have an African American friend". Here is a description of these racist beliefs:

Not many whites in the survey took the overtly racist position of saying 'most blacks' lacked those positive attributes. The responses ranged from 9 percent of whites who said 'most blacks' aren't intelligent to 20 percent who said most African Americans aren't law-abiding or generous.

My analysis of the Pew Research Center data used in the Tesler 2019 post indicated that Tesler 2019 labeled as "overtly racist" the belief that most Blacks are not intelligent, even if a participant also indicated that most Whites are not intelligent.

In the Pew Research Center data (citation below), including Don't Knows and refusals, 118 of 1,447 Whites responded "No" to the question of whether most Blacks are intelligent, which is about 8 percent. However, 57 of the 118 Whites who responded "No" to the question of whether most Blacks are intelligent also responded "No" to the question of whether most Whites are intelligent. Thus, based on these intelligence items, 48 percent of the White participants who Tesler 2019 coded as taking an "overtly racist position" against Blacks also took a (presumably) overtly racist position against Whites. It could be that about half of the Whites who are openly racist against Blacks are also openly racist against Whites, or it could be that most or all of these 57 White participants have a nonracial belief that most people are not intelligent.

Even the classification of the 56 Whites who reported "No" for whether most Blacks are intelligent and "Yes" for whether most Whites are intelligent should address the literature on the distribution of IQ test scores in the United States and the possibility that at least some of these 56 Whites used the median U.S. IQ as the threshold for being intelligent.

---

I offered Michael Tesler an opportunity to reply. His reply is below:

Scholars have long disputed what constitutes racism in survey research. Historically, these disagreements have centered around whether racial resentment items like agreeing that “blacks could be just as well off as whites if they only tried harder” are really racism or prejudice. Because of these debates, I have avoided calling whites who score high on the racial resentment scale racists in both my academic research and my popular writing.

Yet even scholars who are most critical of the racial resentment measure, such as Ted Carmines and Paul Sniderman, have long argued that self-reported racial stereotypes are “self-evidently valid” measures of prejudice.  So, I assumed it would be relatively uncontroversial to say that whites who took the extreme position of saying that MOST BLACKS aren’t intelligent/hardworking/honest/law-abiding hold racist beliefs.  As the piece in question noted, very few whites took such extreme positions—ranging from 9% who said most blacks aren’t intelligent to 20% who said most blacks are not law-abiding.

If anything, then, the Pew measure of stereotypes used severely underestimates the extent of white racial prejudice in the country.  Professor Zigerell suggests that differencing white from black stereotypes is a better way to measure prejudice.  But this isn’t a very discerning measure in the Pew data because the stereotypes were only asked as dichotomous yes-no questions.  It’s all the more problematic in this case since black stereotypes were asked immediately before white stereotypes in the Pew survey and white respondents may have rated their own group less positively to avoid the appearance of prejudice.

In fact, Sniderman and Carmines’s preferred measure of prejudice—the difference between 7-point anti-white stereotypes and 7-point anti-black stereotypes—reveals far more prejudice than I reported from the Pew data.  In the 2016 American National Election Study (ANES), for example, 48% of whites rated their group as more hardworking than blacks, compared to only 13% in the Pew data who said most blacks are not hardworking.  Likewise, 53% of whites in the 2016 ANES rated blacks as more violent than whites and 25% of white Americans in the pooled 2010-2018 General Social Survey rated whites as more intelligent than blacks.

Most importantly, the substantive point of the piece in question—that whites with overtly racist beliefs still overwhelmingly claim they have black friends—remains entirely intact regardless of measurement.  Even if one wanted to restrict racist beliefs to only those saying most blacks are not intelligent/law-abiding AND that most whites are intelligent/law-abiding, 80%+ of these individuals who hold racist beliefs reported having a black friend in the 2009 Pew Survey.

All told, the post in question used a very narrow measure, which found far less prejudice than other valid stereotype measures, to make the point that the vast majority of whites with overtly racist views claim to have black friends.  Defining prejudice even more narrowly leads to the exact same conclusion.

I'll add a response in the comments.

---

NOTES

1. The title of the Tesler 2019 post is "No, Mark Meadows. Having a black friend doesn't mean you're not racist".

2. Data citation: Pew Research Center for the People & the Press/Pew Social & Demographic Trends. Pew Research Center Poll: Pew Social Trends--October 2009-Racial Attitudes in America II, Oct, 2009 [dataset]. USPEW2009-10SDT, Version 2. Princeton Survey Research Associates International [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, RoperExpress [distributor], accessed Aug-14-2019.

3. "White" and "Black" in the data analysis refer to non-Hispanic Whites and non-Hispanic Blacks.

4. In the Pew data, more White participants (147) reported "No" for the question of whether most Whites are intelligent, compared to the number of White participants (118) who reported "No" for the question of whether most Blacks are intelligent.

Patterns were similar among the 812 Black participants: 145 Black participants reported "No" for the question of whether most Whites are intelligent, but only 93 Black participants reported "No" for the question of whether most Blacks are intelligent.

Moreover, 76 White participants reported "Yes" for the question of whether most Blacks are intelligent and "No" for the question of whether most Whites are intelligent.

5. Stata code:

tab racethn, mi                   // distribution of the race/ethnicity variable
tab q69b q70b if racethn==1, mi   // intelligence items cross-tabulated, presumably among non-Hispanic Whites
tab q69b q70b if racethn==2, mi   // intelligence items cross-tabulated, presumably among non-Hispanic Blacks

---

Racial attitudes have substantially correlated with environmental policy preferences net of partisanship and ideology, such as here, here, and here. These results were from data collected in 2012 or later. So, to address the concern that this association is due to "spillover" of anti-Obama attitudes into non-racial policy areas, I checked whether the traditional four-item measure of racial resentment substantially correlated with environmental policy preferences net of partisanship and ideology in ANES data from 1986, which I think is the first time these items appeared together on an ANES survey.

I limited the sample to non-Hispanic Whites and controlled for participant gender, education, age, family income, partisanship, ideology, and the race of the interviewer. The outcome variable concerns federal spending on improving and protecting the environment, which I coded so that 1 was "increased" and 0 was "same" or "decreased", with Don't Knows and Not Ascertaineds coded as missing; only 4 percent of respondents indicated "decreased".
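As a sketch of that outcome coding in Stata (envspend_raw is a hypothetical placeholder for the raw ANES 1986 spending item, assumed here to be coded 1 "increased", 2 "same", and 3 "decreased"):

* envspend_raw is a placeholder name, not the actual ANES 1986 variable
gen env2 = .
replace env2 = 1 if envspend_raw == 1            // "increased"
replace env2 = 0 if inlist(envspend_raw, 2, 3)   // "same" or "decreased"

Don't Know and Not Ascertained codes stay missing because env2 is never replaced for them.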

With other model variables at their means, the predicted probability of a reported preference for increased federal spending on improving and protecting the environment was 65% [54%, 76%] at the lowest level of racial resentment but fell to 39% [31%, 47%] at the highest level of racial resentment. That's a substantial 26-percentage-point drop "caused" by racial attitudes, for anyone who thinks that such a research design permits causal inference.

---

NOTES

1. Kinder and Sanders 1996 used racial resentment to predict non-racial attitudes (pp. 121-124), but, based on reading that section, I don't think KS96 predicted this environmental policy preference variable.

2. Data source: Warren E. Miller and the University of Michigan. Institute for Social Research. American National Election Studies. ANES 1986 Time Series Study. Inter-university Consortium for Political and Social Research [distributor].

3. Stata code and output.

4. The post title is about 1986, but some ANES 1986 interviews were conducted in January/February 1987. The key result still holds if the sample is limited to cases with an "86" year for the "Date of Interview" variable, with respective predicted probabilities of 67% and 37% (p=0.002 for racial resentment). Four or so dates appear to be incorrect, such as "01-04-86", "12-23-87", and "11-18-99". Code:

* restrict to non-Hispanic Whites interviewed in an "86" year
logit env2 RR4 i.female i.educ age i.finc i.party i.ideo i.V860037 if NHwhite==1 & substr(V860009, 7, 2)=="86"
* predicted probabilities at the lowest and highest racial resentment, other variables at their means
margins, atmeans at(RR4=(0 1))
