The uncertain futures of the approximately 800,000 Dreamers enrolled in DACA have been brought to the forefront of politics lately, as Trump rescinded the policy with no sure reinstatement in sight. In response, multiple news sources have released polls gauging American opinion of DACA. Though the majority of these polls indicate most Americans favor DACA, accurately polling DACA opinions proves very difficult. On such a controversial and emotional issue, question wording plays a huge role in determining the results.
Gauging public opinion on polemic issues is crucial yet difficult, and certain media outlets have begun denouncing DACA polls outright. Breitbart recently accused other outlets of pushing skewed polls regarding DACA legislation. News organizations like NBC and Politico appear to use solely positive language, including emotionally charged words such as “Dreamers,” “children,” and “protection.” However, this does not mean that all polls are biased or condemnable.
In our latest poll, the Packpoll team wanted to see if giving respondents more information about DACA would affect support for the program. We employed a split-ballot method, in which half of the survey respondents received a question with no background information and the other half received a question that included a brief background on DACA. When providing information about DACA, we purposely used very plain language to avoid skewing the results.
We asked half of our respondents about their support for “the American immigration policy, DACA (Deferred Action for Childhood Arrivals).” We asked the other half the same question but included more information, explaining that DACA is “a policy that allows some immigrants who were brought to the U.S. illegally as minors to avoid deportation and obtain work permits for periods of two years at a time, renewable upon application.”
We expected that, even with straightforward wording, more information would increase support for DACA. That is exactly what the results indicate. The basic question with no added information drew 65% approval and 18% disapproval. When we included information, the disapproval rating hovered at 15%, but the approval rating jumped to 80%. While giving more information did not dramatically change the percentage who disapproved, there is a clear difference in the share of respondents who stated they had no opinion. In the group that received no information on DACA, 17% said they had no opinion; when we provided just one additional sentence of background, only 5% claimed to have no opinion.
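One quick way to see whether an approval gap of this size could plausibly be chance is a standard two-proportion z-test. The sketch below uses only the reported percentages; the per-ballot sample sizes are an assumption (our 243 completes split roughly evenly), so the result is approximate and is not a test Packpoll formally ran.

```python
import math

# Reported approval rates from the split ballot (see text above).
# Per-ballot sample sizes are an ASSUMPTION: ~243 completes, split evenly.
n_plain, p_plain = 121, 0.65   # no-information ballot
n_info,  p_info  = 122, 0.80   # background-information ballot

# Pooled two-proportion z-test
pooled = (p_plain * n_plain + p_info * n_info) / (n_plain + n_info)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_plain + 1 / n_info))
z = (p_info - p_plain) / se

# Two-sided p-value from the normal distribution
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"z = {z:.2f}, two-sided p ≈ {p_value:.3f}")  # z ≈ 2.62, p ≈ 0.009
```

Under those assumed sample sizes, a 15-point approval gap would be quite unlikely to arise by chance alone, which is consistent with our reading of the result as meaningful.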
The results indicate that when more details are added, people are more likely to choose approval over “no opinion.” Though we don’t know exactly why, there are several possible explanations. People may feel more confident expressing an opinion when information, however little, is provided. Another possibility is that these respondents genuinely knew nothing about DACA beforehand, though that seems unlikely given that all of them are NC State undergraduates and DACA has been a highly visible issue on our campus. Or maybe Breitbart is right, and adding information to such an emotional issue as DACA is basically asking respondents whether they approve of being nice to people. Regardless, a 15-point jump in approval is definitely noteworthy, especially considering the brevity and vagueness of the information included.
A toplines report for all survey questions and results is available here: DACA Poll Toplines
NOTE ON METHODOLOGY:
Almost 700 students were asked to take this survey and 243 completed it. However, we cannot report a margin of sampling error for this survey because our results come from a non-probability sample. Most of our surveys adhere to the theoretical principles of probability sampling: every NCSU student has an equal, non-zero chance of being randomly invited to take a survey (and nearly all we contact respond to it). For this survey about DACA, by contrast, only certain students were asked to participate.
Our results about DACA come from students who previously agreed to receive our future surveys. In short, they chose us, non-randomly, so we can’t know for sure whether they “think like” most students. When respondents are not selected according to probability theory, it isn’t possible to calculate traditional diagnostic statistics for a survey, such as the margin of sampling error.
Most industry professionals today, however, agree that the margin of sampling error is overrated for evaluating the validity of polling results; if only 20% (or less!) of students respond to an invitation to take a survey, even when they were contacted at random, the resulting sample doesn’t conform to the assumptions of probability theory. We could present advanced statistics about the likely representativeness of our sample, but the benefit of generating those stats is outweighed by their complexity.
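For context only (since, as explained above, the statistic does not strictly apply to a non-probability sample), here is the textbook margin of sampling error one would compute for a simple random sample of 243 respondents at 95% confidence, using the conservative p = 0.5:

```python
import math

n = 243    # completed surveys (from the note above)
p = 0.5    # conservative proportion; maximizes the margin
z = 1.96   # 95% confidence level

# Textbook margin of sampling error for a simple random sample
moe = z * math.sqrt(p * (1 - p) / n)
print(f"±{moe * 100:.1f} points")  # roughly ±6.3 points
```

In other words, even if this had been a true random sample, single-digit percentage gaps would sit near the noise floor, while the 15-point approval gap we observed would not.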
Instead, we argue that, in general, our panel of interested survey takers does a good job of mimicking a random sample of State students. Over the past two semesters, we’ve tested whether differences exist between results obtained from the non-probability panel and results from a truly random draw. So far, we have not observed significant differences in opinion between students contacted by the two methods. Past results suggest that our findings about students’ opinions on DACA are broadly representative of what most undergraduates at NCSU think.
Nevertheless, we might have overestimated State’s support for DACA. More of our respondents call themselves “Democrat” than is probably true for all undergraduates, and Democrats are more supportive of DACA. Since political partisanship is a fluid attitude and not a fixed characteristic like age, we can’t be certain about the “true” percentage of Democrats (or Republicans). Thus, without knowing more about the fixed traits of our DACA sample (we did not ask additional demographic questions), and without being certain our sample is “too Democratic,” we do not attempt to weight or adjust our data to known properties of NCSU undergraduates.
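Had we known the true partisan breakdown, the standard remedy would be post-stratification weighting: each respondent receives a weight equal to their group’s population share divided by its sample share. A minimal sketch, using entirely hypothetical shares (we did not collect these figures, which is precisely why we did not weight):

```python
# HYPOTHETICAL shares -- illustration only, not Packpoll data.
population = {"Democrat": 0.40, "Republican": 0.30, "Other": 0.30}
sample     = {"Democrat": 0.55, "Republican": 0.20, "Other": 0.25}

# Post-stratification weight = population share / sample share
weights = {group: population[group] / sample[group] for group in population}

# Over-represented Democrats get down-weighted (< 1),
# under-represented Republicans get up-weighted (> 1).
for group, w in weights.items():
    print(f"{group}: {w:.2f}")
```

Weighted approval would then be the weight-adjusted average of responses; without trustworthy population shares, though, weighting can add more error than it removes, which is why we left the data unadjusted.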
For additional information about best practices for reporting on the precision of non-probability sampling, you can watch this “debate” and/or read this guidance for how to report on results from non-probability samples.