This survey found that 76% of students are themselves affected by DACA or know someone who is, and that 83% of the students who are not affected and do not know anyone who is are still in favor of DACA. While most students favor DACA, the opinions of those with a personal connection and those without differ by only 7 percentage points.
Another question asked whether respondents approved or disapproved of a compromise that would keep DACA while also providing funds for a wall on the Mexican-American border.
“Would you approve or disapprove of a deal between President Trump and Congress passing DACA into law in exchange for funding to build more walls on the Mexican-American border?”
When this question is cross-tabulated with the question of knowing someone affected by DACA, responses again differ by only about 6 percentage points.
The effect of knowing someone affected by DACA is similar to a recent poll finding that when Americans learn that Puerto Ricans are also American citizens, they become more supportive of government aid after the crisis.
While it might seem that personally knowing someone who benefits from the DACA program would affect respondents’ views, it appears that a majority of students at North Carolina State University have already made up their minds about DACA, whether they know someone affected by the program or not.
A Toplines report for all survey questions and results is available here: DACA Poll Toplines
NOTE ON METHODOLOGY:
Almost 700 students were asked to take this survey, and 243 completed it. However, we cannot report a margin of sampling error for this survey because our results come from a non-probability sample. Most of our surveys adhere to the theoretical principles of probability sampling, such as when every NCSU student has a non-zero and equal chance of being randomly invited to take a survey (and nearly all we contact respond to it). Instead, only certain students were asked to take this survey about DACA.
Our results about DACA come from students who previously agreed to be sent our future surveys. In short, they chose us, non-randomly, so we can’t know for sure if they “think like” most students. If respondents are not selected according to probability theory, it isn’t possible to calculate traditional diagnostic statistics about a survey, such as the margin of sampling error.
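For readers unfamiliar with the statistic, the margin of sampling error is straightforward to compute when probability sampling does hold. The sketch below (in Python, purely for illustration) shows the standard 95%-confidence calculation that a non-probability sample like ours does not license:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of sampling error for a proportion p observed in a
    simple random sample of size n (valid only under probability sampling)."""
    return z * math.sqrt(p * (1 - p) / n)

# IF our 243 responses had come from a simple random sample, a 50% result
# would carry roughly a +/- 6.3 percentage-point margin of error:
print(round(100 * margin_of_error(0.5, 243), 1))  # prints 6.3
```

Because our respondents selected themselves into the panel, this formula's assumptions fail, which is why no such figure appears in the report above.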
Most industry professionals today, however, agree that the margin of sampling error is overrated for evaluating the validity of polling results; if only 20% (or fewer!) of students respond to an invitation to take a survey, even when they were contacted at random, the resulting sample doesn’t conform to the assumptions of probability theory. We could present advanced statistics about the likely representativeness of our sample, but the benefit of generating those statistics is outweighed by their complexity.
Instead, we argue that our panel of interested survey takers generally does a good job of mimicking a random sample of State students. Over the past two semesters, we’ve tested whether differences exist between results obtained from the non-probability panel and results from a truly random draw. So far, we observe no significant differences in the opinions of students contacted by the two methods. These past results suggest that our findings about students’ opinions on DACA are broadly representative of what most undergraduates at NCSU think.
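The comparison described above is, mechanically, a standard two-proportion z-test. The sketch below illustrates the calculation; the counts in it are hypothetical placeholders, not our actual panel and random-draw results:

```python
import math

def two_proportion_z(success1, n1, success2, n2):
    """z statistic for the difference between two independent sample
    proportions, using the pooled standard error."""
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts for illustration only: 150 of 200 panel respondents
# vs. 140 of 195 randomly drawn respondents agreeing with some item.
# |z| < 1.96 means the difference is not significant at the 95% level.
z = two_proportion_z(150, 200, 140, 195)
```

With these made-up counts, |z| falls below 1.96, which is the pattern we describe observing between our panel and random draws.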
Nevertheless, we might have overestimated State’s support for DACA. More of our respondents call themselves “Democrat” than is probably true of all undergraduates, and Democrats are more supportive of DACA. Since political partisanship is a fluid attitude rather than a fixed characteristic like age, we can’t be certain of the “true” percentage of Democrats (or Republicans). Thus, without knowing more about the fixed traits of our DACA sample (we did not ask additional demographic questions), and without being certain that our sample is “too Democratic,” we do not attempt to weight or adjust our data to known properties of NCSU undergraduates.
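For context, the weighting adjustment we decided against is, mechanically, simple post-stratification. The sketch below shows how it works; the party shares are made-up numbers purely for illustration, not estimates of NCSU’s actual composition:

```python
# Post-stratification: reweight respondents so the sample's party mix
# matches a known (or assumed) population mix. All shares here are
# hypothetical placeholders, not our survey's data.
sample_share = {"Democrat": 0.50, "Republican": 0.25, "Other": 0.25}
population_share = {"Democrat": 0.40, "Republican": 0.30, "Other": 0.30}

weights = {g: population_share[g] / sample_share[g] for g in sample_share}
# Each respondent's answers then count weights[party] times when
# tabulating results, so the weighted sample matches the population mix.
```

As the note explains, this adjustment is only as good as the population shares you plug in, which is precisely what we could not pin down for a fluid trait like partisanship.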
For additional information about best practices for reporting on the precision of non-probability sampling, you can watch this “debate” and/or read this guidance on how to report results from non-probability samples.