Continuing from this Twitter thread...

Hi Logan,

1. I do not dispute the claimed correlation between White Southerners' racial attitudes and support for the Confederate battle flag, but the Wright and Esses 2017 analysis suggests an important causal claim: that a meaningfully large percentage of White Southerners support use of the Confederate battle flag for reasons unrelated to racial animus. Is there evidence that this causal claim is incorrect?

2. I think that a "'heritage' doesn't tell us much" claim should be based on the performance of measures of pride in Southern heritage. Civil War knowledge and linked fate with Southerners are not measures of pride, so these measures cannot support a claim about the low explanatory power of pride.

3. Could you articulate what is inadequate about the Wright and Esses racial attitude measures? Given the results that Carney and Enos reported here, racial resentment does not appear to be an adequate measure of racial attitudes.

I recently blogged about the Betus, Lemieux, and Kearns Monkey Cage post (based on this Kearns et al. working paper) that claimed that "U.S. media outlets disproportionately emphasize the smaller number of terrorist attacks by Muslims".

I asked Kearns and Lemieux to share their data (I could not find an email for Betus). My request was denied until the paper was published. I tweeted a few questions to the coauthors about their data, but these tweets have not yet received a reply. Later, I realized that it would be possible to recreate or at least approximate their dataset because Kearns et al. included their outcome variable coding in the appendix of their working paper. I built a dataset based on [A] their outcome variable, [B] the Global Terrorism Database that they used, and [C] my coding of whether a given perpetrator was Muslim.

My analysis indicated that these data do not appear to support the claim of disproportionate media coverage of terror attacks by Muslims. In models with no control variables, terror attacks by Muslim perpetrators were estimated to receive 5.0 times as much media coverage as other terror attacks (p=0.008). Controlling for the number of fatalities, this effect size drops to 1.53 times as much media coverage (p=0.480), and it drops further to 1.30 times as much media coverage (p=0.622) after adding a control for attacks by unknown perpetrators, so that terror attacks by Muslim perpetrators are compared to terror attacks by known perpetrators who are not Muslim. See the Stata output below, in which "noa" is the number of articles and coefficients represent incident rate ratios:

[Stata output: kearns et al 1]

My code contains descriptions of corrections and coding decisions that I made. Data from the Global Terrorism Database are not permitted to be posted online without permission, so the code is the only information about the dataset that I am posting for now. However, the code describes how you can build your own dataset with Stata.

Below is the message that I sent to Kearns and Lemieux on March 17. Question 2 refers to the possibility that the Kearns et al. outcome variable includes news articles published before the identities of the Boston Marathon bombers were known; that lack of knowledge of who the perpetrators were makes it difficult to assign that early media coverage to the Muslim identity of the perpetrators. Question 3 refers to the fact that the estimated coefficient for the Muslim perpetrator predictor grows larger as the number of fatalities entered for that attack grows smaller; the Global Terrorism Database lists four rows of data for the Tsarnaev case, the first of which has only one fatality, so I wanted to check that there is no error about this in the Kearns et al. data.

Hi Erin,

I created a dataset from the Global Terrorism Database and the data in the appendix of your SSRN paper. I messaged the Monkey Cage about writing a response to your post, and I received the suggestion to communicate with you about the planned response post.

For now, I have three requests:

  1. Can you report the number of articles in your dataset for Bobby Joe Rogers [id 201201010020] and Ray Lazier Lengend? The appendix of your paper has perpetrator Ray Lazier Lengend associated with the id for Bobby Joe Rogers.
  2. Can you report the earliest published date and the latest published date among the 474 articles in your dataset for the Tsarnaev case?
  3. Can you report the number killed in your dataset for the Tsarnaev case?

I have attached a do file that can be used to construct my dataset and run my analyses in Stata. Let me know if you have any questions, see any errors, or have any suggestions.

Thanks,

L.J

I have not yet received a reply to this message.

I pitched a response post to the Monkey Cage regarding my analysis, but the pitch was not accepted, at least not while the Kearns et al. paper remains unpublished.

---

NOTES:

[1] Data from the Global Terrorism Database have this citation: National Consortium for the Study of Terrorism and Responses to Terrorism (START). (2016). Global Terrorism Database [Data file]. Retrieved from https://www.start.umd.edu/gtd.

[2] The method for eliminating news articles in the Kearns et al. working paper included this choice:

"We removed the following types of articles most frequently: lists of every attack of a given type, political or policy-focused articles where the attack or perpetrators were an anecdote to a larger debate, such as abortion or gun control, and discussion of vigils held in other locations."

It is worth assessing the degree to which this choice disproportionately reduces the count of articles for the Dylann Roof terror attack, which served as a background for many news articles about the display of the Confederate flag. It's not entirely clear why these types of articles should not be considered when assessing whether terror attacks by Muslims receive disproportionate media coverage.

[3] Controlling for attacks by unknown perpetrators, controlling for fatalities, and removing the Tsarnaev case drops the point estimate for the incident rate ratio to 0.89 (p=0.823).

The Monkey Cage published a post that claimed that "U.S. media outlets disproportionately emphasize the smaller number of terrorist attacks by Muslims". Such an inference depends on the control variables making all else equal, but the working paper on which the inference was based had few controls and few alternate specifications. The models controlled for fatalities, but the Global Terrorism Database used for the paper also lists the number of persons injured, and a measure of total casualties might be a better control than fatalities alone. For example, the Boston Marathon bombing is listed as having 1 fatality and 132 injured, but the models in the working paper would estimate the media coverage to be the same as if the bombing had had 1 fatality and 0 injured.

Moreover, as noted in the comments to the post, the Boston Marathon bombing is an outlier in terms of the outcome variable (20 percent of articles were devoted to that single event). But the working paper reported no model that omitted this outlier from the analysis, so it is not clear to what extent the estimates and inferences reflect a "Muslim perpetrator" effect or a "Boston Marathon bombing" effect. And, as also noted in the comments, proper controls would reflect the difference in expected media coverage for terrorist attacks in which the perpetrator was killed at the scene versus terrorist attacks in which there was a manhunt for the perpetrator.

Finally, from what I can tell based on the post and the working paper, the number of articles for the Boston Marathon bombing might include articles published before it was known or credibly suspected that the perpetrators were Muslim. If so, then the article count for the Boston Marathon bombing might be inflated because media coverage of the bombing before the religion of the perpetrators was known or credibly suspected cannot be attributed to the religion of the perpetrators.

My request for the data and code used for the post was declined, but hopefully I'll remember to check for the data and code after the working paper is published. In the meantime, I asked the authors on Twitter about inclusion of articles before the suspects were known and about results when the Boston Marathon bombing is excluded from the analysis.

This post is a response to a question tweeted here.

---

I was responding only to the idea that poor educational outcomes for the Chinese in Spain would disprove culture as an influence on educational outcomes. Before concluding anything from the Chinese-in-Spain example about the influence of culture on educational outcomes, we'd need to estimate the level of educational outcomes that would be expected of the Chinese in Spain in the absence of cultural influence and then compare that estimate to observed educational outcomes.

So what level of educational outcomes should be expected of the Chinese in Spain? The 2014 Financial Times article "China's migrants thrive in Spain's financial crisis" reported an estimate that 70 or 80 percent of the Chinese in Spain are from Qingtian, "an impoverished rural county". Nonetheless, the FT article suggests that the Chinese in Spain are doing relatively well in employment and business, citing low unemployment and overrepresentation in business startups. Maybe culture has something to do with these things, and maybe culture and success in employment and business will translate into better future educational outcomes. Or maybe culture has no effect on these things.

Nathaniel Bechhofer linked to a tweeted question from Elizabeth Plank about whether white supremacists are more likely to be men. Ideally, for measuring white supremacist beliefs, we would define "white supremacist", develop items to measure white supremacist beliefs or actions, and then conduct a new study, but, for now, let's examine some beliefs that might provide a sense of what we'd find from an ideal survey.

ANES

I was working with the ANES Time Series Cumulative Data file last night, so I'll start there, with a measure of white ethnocentrism, coded 1 for respondents who rated whites higher than blacks, Hispanics, and Asians on feeling thermometers, and 0 for respondents who provided substantive responses to the four racial group feeling thermometers and were not coded 1. Data were available for surveys in 1992, 2000, 2002, 2004, 2008, and 2012. This is not a good measure of white supremacist beliefs, either in terms of face validity or considering the fact that 27 percent of white respondents (N=2,345 of 8,586) were coded 1. Nonetheless, in weighted analyses, 27.5 percent of white men and 28.9 percent of white women were coded 1, with a p-value for the difference of p=0.198.

I then coded a new measure as 1 for respondents who rated whites above 50 and who rated blacks, Hispanics, and Asians below 50 on the four racial group feeling thermometers, and as 0 for respondents who provided substantive responses to the four racial group feeling thermometers and were not coded 1. Data were available for surveys in 1992, 2000, 2002, 2004, 2008, and 2012. Only 1.6 percent of white respondents (N=134 of 8,586) were coded 1, and, as before, weighted analyses did not detect a sex difference: 1.9 percent of white men and 1.6 percent of white women were coded 1, with a p-value for the difference of p=0.429.
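The coding rule for these thermometer measures can be sketched as follows. This uses hypothetical ratings, not ANES data, and ignores the survey weights used in the analyses above; it only illustrates the "rated whites higher than all three other groups" rule for the first measure.

```python
import numpy as np

# Hypothetical 0-100 feeling-thermometer ratings for four respondents;
# columns: whites, blacks, Hispanics, Asians. np.nan marks a
# non-substantive response, which excludes the respondent.
therms = np.array([
    [85, 60, 55, 70],      # whites rated above all three other groups -> 1
    [50, 50, 50, 50],      # no white preference -> 0
    [90, 40, 45, np.nan],  # missing rating -> excluded
    [60, 70, 65, 80],      # whites rated below another group -> 0
])

white, others = therms[:, 0], therms[:, 1:]
substantive = ~np.isnan(therms).any(axis=1)

# 1 if whites rated strictly higher than every other group,
# 0 otherwise, NaN if any rating was non-substantive.
ethnocentric = np.where(substantive,
                        (white[:, None] > others).all(axis=1),
                        np.nan)
```

The second, stricter measure (whites above 50 and each other group below 50) would replace the comparison line with threshold checks against 50.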

GSS

The General Social Survey 1972-2014 file contained an item measuring agreement that "On the average [Negroes/Blacks/African-Americans] have worse jobs, income, and housing than white people....Because most [Negroes/Blacks/African-Americans] have less in-born ability to learn". Data were available for surveys in 1977, 1985, 1986, 1988, 1989, 1990, 1991, 1993, 1994, 1996, 1998, 2000, 2002, 2004, 2006, 2008, 2010, 2012, and 2014. There was a detected sex difference in weighted analyses, with 13.5 percent of white men and 12.0 percent of white women agreeing with the statement (p=0.002, N=21,911).

The next measure was coded 1 for respondents who favored a close relative marrying a white person and opposed a close relative marrying a black person, a Hispanic American person, and an Asian American person, and coded 0 for white respondents with other responses, including non-substantive responses. Data were available for surveys in 2000, 2004, 2006, 2008, 2010, 2012, and 2014. There was a detected sex difference in weighted analyses, with 13.1 percent of white men and 10.4 percent of white women coded 1 (p=0.001, N=7,604).

The next measure was coded 1 for respondents coded 1 for the aforementioned marriage item and who selected 9 on a 1-to-9 scale for how close they felt to whites. Data were available for surveys in 2000, 2004, 2006, 2008, 2010, 2012, and 2014. There was no detected sex difference in weighted analyses, with 5.5 percent of white men and 4.7 percent of white women coded 1 (p=0.344, N=3,952).

In the 1972 GSS, nonblack respondents were asked: "Do you think Negroes should have as good a chance as white people to get any kind of job, or do you think white people should have the first chance at any kind of job?". Of 1,330 white respondents, 20 of 670 (3.0 percent of) white men and 23 of 660 (3.5 percent of) white women reported that white people should have first chance at any kind of job (p=0.607 in an unweighted analysis).
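For the unweighted 1972 comparison, a standard two-proportion z-test reproduces the reported p-value; a minimal sketch in Python, assuming the statsmodels library is available:

```python
from statsmodels.stats.proportion import proportions_ztest

# Counts from the 1972 GSS item: 20 of 670 white men and 23 of 660
# white women said white people should have first chance at any job.
z, p = proportions_ztest(count=[20, 23], nobs=[670, 660])
print(round(p, 3))  # close to the reported p=0.607
```

The weighted comparisons reported elsewhere in this post require survey-weighted estimation (e.g., Stata's svy commands), which this unweighted test does not replicate.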

The next measure was based on the item asking: "If you and your friends belonged to a social club that would not let [Negroes/Blacks] join, would you try to change the rules so that [Negroes/Blacks/African-Americans] could join?" (sic for the lack of "African-Americans" in the first set of brackets). Respondents were coded 1 for reporting that they would not try to change the rules. Data were available for surveys in 1977, 1985, 1986, 1988, 1989, 1990, 1991, 1993, and 1994. There was a detected sex difference in weighted analyses, with 45.5 percent of white men and 37.8 percent of white women coded 1 (p<0.001, N=7,924).

In the 2000 GSS, respondents were given this task:

Now I'd like you to imagine a neighborhood that had an ethnic and racial mix you personally would feel most comfortable in. Here is a blank neighborhood card, which depicts some houses that surround your own. Using the letters A for Asian, B for Black, H for Hispanic or Latin American and W for White, please put a letter in each of these houses to represent your preferred neighborhood where you would most like to live. Please be sure to fill in all of the houses.

Respondents were coded 1 if the respondent marked "white" for all the houses and coded 0 otherwise, with 0 including responses of doesn't matter, no neighbors, mixed race, or non-substantive responses. There was a nontrivial sex difference in weighted point estimates, with 16.9 percent of white men and 13.5 percent of white women coded 1, but the p-value was p=0.110 (N=1,108).

---

The 1972 GSS "white people should have the first chance at any kind of job" item seems like the best measure of white supremacist beliefs among the measures above, but agreement with that belief was low enough that there was not much power to detect a sex difference.

Based on the other data above and absent other data, it appears reasonable to expect at least a slight over-representation of men among whites with white supremacist beliefs, to the extent that white supremacist beliefs positively correlate with the patterns above. Research (1, 2) has found men to score higher than women on social dominance orientation scales, so the magnitude of expected sex differences in white supremacist beliefs among whites should depend on the degree to which white supremacist beliefs are defined to include a preference for political or social dominance.

---

NOTES:

Datasets were anes_timeseries_cdf_stata12.dta and GSS7214_R1.DTA. Code here.

This post reports on publication bias analyses for the Tara L. Mitchell et al. 2005 meta-analysis: "Racial Bias in Mock Juror Decision-Making: A Meta-Analytic Review of Defendant Treatment" [gated, ungated]. The appendices for the article contained a list of sample sizes and effect sizes, but the list did not match the reported results in at least one case. Dr. Mitchell emailed me a file of the correct data (here).

VERDICTS

Here is the funnel plot for the Mitchell et al. 2005 meta-analysis of verdicts:

[Figure: mitchell-et-al-2005-verdicts-funnel-plot]

Egger's test did not indicate at the conventional level of statistical significance the presence of funnel plot asymmetry in any of the four funnel plots, with p-values of p=0.80 (white participants, published studies), p=0.82 (white participants, all studies), p=0.10 (black participants, published studies), and p=0.63 (black participants, all studies).

Trim-and-fill with the L0 estimator imputed missing studies for all four funnel plots to the side of the funnel plot indicating same-race favoritism:

[Figure: mitchell-et-al-2005-verdicts-tf-l0]

Trim-and-fill with the R0 estimator imputed missing studies for only the funnel plots for published studies with black participants:

[Figure: mitchell-et-al-2005-verdicts-tf-r0]

---

SENTENCES

Here is the funnel plot for the Mitchell et al. 2005 meta-analysis of sentences:

[Figure: mitchell-et-al-2005-sentences-funnel-plot]

Egger's test did not indicate at the conventional level of statistical significance the presence of funnel plot asymmetry in any of the four funnel plots, with p-values of p=0.14 (white participants, published studies), p=0.41 (white participants, all studies), p=0.50 (black participants, published studies), and p=0.53 (black participants, all studies).

Trim-and-fill with the L0 estimator imputed missing studies for the funnel plots with white participants to the side of the funnel plot indicating same-race favoritism:

[Figure: mitchell-et-al-2005-sentences-tf-l0]

Trim-and-fill with the R0 estimator did not impute any missing studies:

[Figure: mitchell-et-al-2005-sentences-tf-r0]

---

I also attempted to retrieve and plot data for the Ojmarrh Mitchell 2005 meta-analysis ("A Meta-Analysis of Race and Sentencing Research: Explaining the Inconsistencies"), but the data were reportedly lost in a computer crash.

---

NOTES:

1. Data and code for the Mitchell et al. 2005 analyses are here: data file for verdicts, data file for sentences, R code for verdicts, and R code for sentences.

I happened across the Saucier et al. 2005 meta-analysis "Differences in Helping Whites and Blacks: A Meta-Analysis" (ungated), and I decided to plot the effect size against the standard error in a funnel plot to assess the possibility of publication bias. The funnel plot is below.

[Figure: Saucier et al. 2005 funnel plot]

Funnel plot asymmetry was not detected in Begg's test (p=0.486) but was detected in the higher-powered Egger's test (p=0.009).

---

NOTE:

1. Saucier et al. 2005 reported sample sizes but not effect size standard errors for each study, so I estimated the standard errors with formula 7.30 of Hunter and Schmidt (2004: 286).

2. Code here.
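Regarding note 1: the Hunter and Schmidt formula itself is not reproduced here, but a common large-sample approximation for the standard error of a standardized mean difference d with group sizes n1 and n2 is sqrt((n1 + n2)/(n1 * n2) + d^2/(2 * (n1 + n2))). A sketch of that generic approximation (not necessarily identical to formula 7.30):

```python
import math

def se_d(n1, n2, d):
    """Common large-sample approximation to the standard error of
    a standardized mean difference d for two groups of sizes n1, n2."""
    return math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))

# e.g., two groups of 50 with d = 0.4:
print(round(se_d(50, 50, 0.4), 3))
```

Approximations like this make funnel plots possible when only sample sizes and effect sizes are reported.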