The Statistics of Tea Partying and Interracial Trust, Part I

By way of Digby, I came across this poll of white attitudes towards various ethnicities (including whites), broken down by self-identified Tea Party support (note: respondents were only from four states, NV, MO, GA, and NC). One of the things that struck me while looking at the data (pdf and pdf) was the extent to which whites who identified strongly with the Tea Party didn't trust other whites.

72% of Tea Party skeptics thought other whites were trustworthy, while only 49% of Tea Partiers did (p = 0.0022). The other significant result, which is puzzling, is that those who had never heard of the Tea Party (31% of the sample--?!?) were more likely to think Latinos were trustworthy than either 'middle of the road' Tea Party-leaners or Tea Partiers (p = 0.000136 and p = 0.001, respectively). Interestingly, skeptics didn't differ significantly from middle of the road or true believer Tea Partiers (p = 0.0377* and p = 0.0683, respectively).
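If you want a sense of where p-values like these come from, a two-tailed test of two proportions is one standard way to compare percentages between groups. Here's a minimal sketch using statsmodels' proportions_ztest; since I'm not reproducing the poll's actual group sizes, the counts below are hypothetical stand-ins, not the survey data (and I'm not claiming this is the exact test behind the numbers above).

```python
# Minimal sketch: two-tailed comparison of the share of each group
# that calls whites trustworthy. Counts here are hypothetical, not the poll's.
from statsmodels.stats.proportion import proportions_ztest

count = [72, 49]    # respondents answering "trustworthy" in each group (hypothetical)
nobs = [100, 100]   # respondents asked in each group (hypothetical)

z, p = proportions_ztest(count, nobs, alternative='two-sided')
print(f"z = {z:.3f}, two-tailed p = {p:.4f}")
```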

Out of the 24 comparisons in total (attitudes towards whites, Asians, African Americans, and Latinos, each tested across the six pairwise combinations of the four groups: skeptics, middle of the road, true believers, and never heard of the Tea Party), these were the only significant differences.

Keep in mind several things:

1) This survey has a very small sample size, which makes it hard to detect significant differences.
2) I used two-tailed tests, as many of the values were close enough that testing unidirectionally was not warranted (not to mention laden with political assumptions).
3) These data are from four conservative states, and might not be applicable to other states.

Next, I'll discuss how the four different categories of white respondents differ in how they each view different ethnicities (e.g., do white Tea Party skeptics trust African-Americans less than whites?).

*One of the thorniest issues in statistics is the issue of multiple comparisons: if I make enough tests at the 0.05 level, I expect to generate false positives--'find' a difference that doesn't really exist. (This is particularly a problem with a posteriori tests, also known as post-hoc or 'a fucking fishing expedition')**. To prevent these false discoveries, we adjust the p-values. Naturally, there are several methods for doing so, some of which are ridiculously strict (the Bonferroni), while others are pretty strict (sequential Bonferroni or Dunn-Šidák), and others are less strict (the false discovery rate procedure). To decide whether to use Dunn-Šidák or the false discovery rate procedure, I typically consider how important it is to avoid false positives. Identifying candidate hypotheses for further work, less strict. Preventing the nuclear reactor from going critical, more strict.
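To show how strict these corrections actually are, here's a minimal sketch (Python, statsmodels' multipletests) that runs the five p-values quoted above through Bonferroni, Dunn-Šidák, sequential Bonferroni (Holm), and the false discovery rate (Benjamini-Hochberg) procedures. This only illustrates the mechanics; a proper correction for this analysis would feed in all 24 p-values, which I haven't reproduced here.

```python
# Minimal sketch: how corrections of different strictness adjust a set of p-values.
# Only the five p-values quoted above are used; a real correction would need all 24.
from statsmodels.stats.multitest import multipletests

pvals = [0.0022, 0.000136, 0.001, 0.0377, 0.0683]

for method in ['bonferroni', 'sidak', 'holm', 'fdr_bh']:
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method:>10}: adjusted p = {[round(float(p), 4) for p in p_adj]}, "
          f"significant at 0.05 = {int(reject.sum())}")
```

The stricter the method, the fewer of the borderline p-values survive adjustment, which is exactly the trade-off described above.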

**There is the whole separate issue of whether you've conducted a post-hoc versus an a priori test. If you set out to test five explicit hypotheses, then I'm of the school that you should just report the p-values and not correct at all (why should you be penalized for reporting five obvious comparisons together when, if you had reported them separately, you could have used a less strict criterion?). Here, I initially asked whether Tea Partiers were less trusting of other whites. Arguably, that's an a priori test (it actually occurred to me as the obvious control before I even saw the raw numbers--I'm willing to distrust everybody equally). But then I started nosing around in the 'non-white' data, which definitely qualifies as a 'fucking fishing expedition.'
