Fall 2017 Articles, Politics, Race, Social Issues

How Racial Resentment Drives Public Opinion

Racial resentment is one of many factors that can influence how people form certain opinions. In the case of our survey, we wanted to see if racial resentment coincided with opinions of the NFL protests versus the Charlottesville protests.

Measuring racial resentment in surveys can be difficult. Because of social desirability bias, a type of response bias in which respondents answer questions in a manner they believe others will view favorably, asking someone directly whether they are racist rarely elicits an honest answer. To avoid asking bluntly, we used part of a racial resentment scale that is commonly used in surveys to gauge racism. Michael Tesler and David Sears, two political scientists, created this scale based on the idea that racial resentment is a "subtle hostility towards African-Americans."

Because we couldn’t include the entire battery of questions in our short survey, we chose what we considered to be the most effective question. We asked everyone who took our survey if they agreed or disagreed with the statement: “Over the past few years, blacks have gotten less than they deserve.”

Half of our respondents received a question asking if they approved or disapproved of NFL athletes protesting by kneeling during the national anthem. About 25% of these students disapproved of this form of protest. Of the 25% who disapproved, 87% disagreed with the statement: "Over the past few years, blacks have gotten less than they deserve."

The other half of respondents were asked if they approved or disapproved of a recent protest by white nationalists in Charlottesville against the removal of a Confederate statue. About 45% of these respondents disapproved of this form of protest. Of the 45% who disapproved, 27% disagreed with the statement: "Over the past few years, blacks have gotten less than they deserve."

By comparison, among the 42% of respondents who approved of this form of protest, 28% disagreed with the statement: "Over the past few years, blacks have gotten less than they deserve."

Disagreement with the statement "over the past few years, blacks have gotten less than they deserve" was nearly identical (27% versus 28%) whether respondents disapproved or approved of the Charlottesville protest. The lack of difference between these responses suggests that racial resentment is not behind support for or opposition to the Charlottesville protest.
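The "no difference" claim above can be checked with a standard two-proportion z-test. The cell counts below are rough reconstructions from the percentages we reported (roughly 140 respondents saw the Charlottesville question, implying about 63 disapprovers and 59 approvers, with about 17 disagreeing with the statement in each group); they are illustrative assumptions, not our actual tabulations.

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """z-statistic for testing H0: the two underlying proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts reconstructed from the reported percentages:
# ~63 disapprovers (27% disagreed -> ~17) vs. ~59 approvers (28% disagreed -> ~17).
z = two_prop_ztest(17, 63, 17, 59)
print(round(z, 2))  # far inside the +/-1.96 cutoff for significance at p < .05
```

With counts anywhere near these, the z-statistic is a small fraction of the conventional 1.96 threshold, consistent with our reading that the two groups answered the resentment item essentially alike.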

On the other hand, there was a stark difference in responses to the statement "over the past few years, blacks have gotten less than they deserve" based on whether the respondent approved or disapproved of the NFL form of protest. The overwhelming majority (87%) of respondents who disapproved of the NFL form of protest disagreed with the statement. According to the racial resentment scale, this indicates an underlying racial bias that affects opinion on the NFL protest.

A toplines report for all survey questions and results is available here: Protester Poll Toplines


Almost 700 students were asked to take this survey and 279 completed it. However, we cannot report a margin of sampling error for this survey because our results come from a non-probability sample. Most of our surveys adhere to the theoretical principles of probability sampling: every NCSU student has an equal, non-zero chance of being randomly invited to take a survey (and nearly all we contact respond to it). For this survey, by contrast, only certain students were invited.

Our results come from students who previously agreed to be sent our future surveys. In short, they chose us, non-randomly, so we can't know for sure whether they "think like" most students. When respondents are not selected according to probability theory, it isn't possible to calculate traditional diagnostic statistics about a survey, such as the margin of sampling error.
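For contrast, here is what the conventional calculation would have looked like if our 279 completes had come from a simple random sample. The formula below is the standard 95% margin of sampling error for a proportion; the point is that it is only valid under random selection, which is exactly the assumption our panel does not meet.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Standard 95% margin of sampling error for a proportion.
    Valid only under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# If the 279 completes had been a simple random sample (they weren't),
# the worst-case (p = 0.5) margin would be about +/-5.9 percentage points.
print(f"{margin_of_error(279):.1%}")
```

Reporting that "+/-5.9 points" figure for this survey would lend it a precision the sampling design can't support, which is why we omit it.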

Most industry professionals today, however, agree that the margin of sampling error is overrated for evaluating the validity of polling results; if only 20% (or less!) of students respond to an invitation to take a survey, even when they were contacted at random, the subsequent sample doesn’t conform to the assumptions of probability theory. We could present advanced statistics about the likely representativeness of our sample, but the benefit of generating those stats is outweighed by their degree of complexity.

Instead, we argue that, in general, our panel of interested survey takers does a good job of mimicking a random sample of State students. Over the past two semesters, we've tested whether differences exist between results obtained from the non-probability panel and results from a truly random draw. So far, we have not observed significant differences in opinion between students contacted by the two methods.

For additional information about best practices for reporting on the precision of non-probability sampling, you can watch this “debate” and/or read this guidance for how to report on results from non-probability samples.
