The Insite February 2017 Poll— methodology, limitations and representativeness

Polling sentiment on matters of student politics had, to Insite’s knowledge, never been attempted on campus. We decided to change that by launching the Insite February 2017 poll! What you’ll find below is a more technical discussion of the sampling method used, the survey itself, the sample size, distribution and response rate, and why we can’t technically calculate a margin of error, along with what it would look like if we did.

If you don’t enjoy these technical discussions but are still interested in the results of the poll, stay tuned: we’re aiming to publish KSU’s approval rating by Thursday, and whether political affiliation at the national level has an effect on student politics some time over the weekend.

Methodology

Many websites and newspapers are increasingly relying on votes from visitors to their website or social media accounts when polling. This is troublesome because the readership of a website or a page often generalises poorly. Take Insite’s readership, for example; it’s unlikely that everyone who attends university follows us.

So, to get our sample, we elected to use the mall intercept method, which involves walking up to students and asking them if they’d like to take our questionnaire. It is important to note that this method is not without limitations or criticism. It is what’s technically called a non-probability sample, which means that not every student has an equal probability of being chosen – some might, for instance, choose to stay at home or go to lunch off campus and never cross paths with an interviewer. Not giving everyone in your population an equal probability of being chosen is always a limitation when attempting to generalise to the whole. This is why traditional political polls still use telephones: a phone book is something very easy to randomly pluck numbers from.

The university equivalent of a probability sample would be sampling through the registrar; however, this avenue was not pursued because of survey fatigue on that medium, low response rates (usually below 15%) and what would probably be a long wait. The main criticism of mall intercepts is that they often aren’t representative of society at large. The reason for that is simple: shopping malls in the United States, where the method is mostly deployed, tend to be frequented by females of a specific age bracket. In most malls, other demographics like social class and race also come into play.

These things aren’t really much of an issue for a university – to be eligible to vote, people must be students, and students can overwhelmingly be found going to lectures. To further decrease the risk of an unrepresentative sample, students were intercepted as they were entering or leaving certain faculty buildings and across the whole campus. Care was taken to ensure that the demographic distribution of the sample by faculty and year of study was as close as possible to that of the university population at large, so that no one faculty is disproportionately over- or under-represented; a rough sketch of how such targets can be worked out follows.
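As an illustration only, here is a minimal sketch of how published enrolment shares could be turned into intercept targets per faculty. The Arts share (11%) is the figure cited later in this piece; every other number is a hypothetical placeholder, and this is not necessarily how our own targets were set.

```python
# Sketch: turn enrolment shares into intercept targets per faculty.
# All shares except Arts (11%) are hypothetical placeholders.
planned_n = 250  # roughly the number of students we planned to approach

enrolment_share = {
    "Arts": 0.11,     # from the published 2016/2017 figures
    "Science": 0.08,  # assumed
    "FEMA": 0.15,     # assumed
    # ...remaining faculties
}

# Target number of intercepts per faculty, proportional to enrolment
targets = {faculty: round(planned_n * share)
           for faculty, share in enrolment_share.items()}

print(targets)  # e.g. {'Arts': 28, 'Science': 20, 'FEMA': 38}
```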

The Survey

The main question we sought to answer in the survey was which student political organisation was in the lead. A related facet was whether that number would change once the likelihood of students actually voting was factored in. This was assessed on a 7-point Likert scale ranging from “Not very likely” to “Very likely”. Two additional closed-ended questions were intended to gauge KSU’s approval rating and whether the majority of students would like to see more than two organisations contest.
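We haven’t spelled out a weighting scheme here, but one plausible approach is to scale each respondent’s vote by their self-reported likelihood of voting. The sketch below is an assumption for illustration only: the organisation names match those discussed here, while the example responses and the 1–7 mapping are invented.

```python
# One plausible way to weight vote intention by self-reported likelihood of
# voting on a 7-point Likert scale. Example responses are invented.
responses = [
    {"choice": "Pulse", "likelihood": 7},  # 7 = "Very likely"
    {"choice": "SDM",   "likelihood": 3},
    {"choice": "Pulse", "likelihood": 5},
    {"choice": "SDM",   "likelihood": 7},
]

def turnout_weight(likelihood, points=7):
    """Scale a 1-7 Likert score to a 0-1 turnout weight."""
    return (likelihood - 1) / (points - 1)

totals = {}
for r in responses:
    totals[r["choice"]] = totals.get(r["choice"], 0.0) + turnout_weight(r["likelihood"])

total_weight = sum(totals.values())
weighted_share = {choice: t / total_weight for choice, t in totals.items()}
print(weighted_share)  # weighted vote shares, summing to 1
```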

The final two questions attempted to look into two other factors that might drive student politics: party affiliation at the national level, and where a student is from. While casual observation of how a prototypical Pulse voter differs from an SDM voter might elicit some stereotypical responses, to our knowledge no real research into the matter has ever been attempted – our aim was to change that.

The overarching concern was to extract as much relevant, analysable information as possible in the shortest amount of time.

The survey protocol and script were kept as uniform as possible. Participants were politely asked if they could spare a minute, the interviewer identified himself as a member of Insite, the topic of the survey was introduced and, if the participant had no objections, he or she was handed a survey sheet and asked to fill it in, fold it and deposit it into a bag. Participants were informed of the confidentiality of the survey and of the option to skip any question they did not feel like answering. Following this, a few pleasantries were exchanged, and the participant was reminded that he or she could follow the outcome of the survey on Insite’s website and social media accounts. Out of 251 individuals approached, 243 replied, eliciting a response rate of 96%. Just one response was invalid. The study was conducted on Monday 20th February, in a six-hour window from 9am to 3pm.

So how representative is our sample?

In the end, it’s actually pretty representative. This was assessed by comparing the distribution of university students across faculties with the distribution in our sample. Luckily for us, the numbers for the 2016/2017 academic year are available online. To put it simply: since we know that 11% of all university students attend the Faculty of Arts, if 30% of the respondents in our sample came from this faculty, we’d know it was over-represented.
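For anyone who wants to reproduce that over/under-representation check, here is a minimal sketch. Only the Arts population figure (11%) and the 30% example above come from this piece; the remaining numbers are placeholders, not our actual data.

```python
# Compare each faculty's share of the student population with its share of
# the sample. Only Arts (11% population, 30% example sample) is from the
# article; the rest are placeholders.
population_pct = {"Arts": 11.0, "Science": 8.0, "FEMA": 15.0}
sample_pct     = {"Arts": 30.0, "Science": 10.0, "FEMA": 12.0}

for faculty, pop in population_pct.items():
    diff = sample_pct[faculty] - pop
    label = "over-represented" if diff > 0 else "under-represented"
    print(f"{faculty}: {diff:+.1f} percentage points ({label})")
```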

Table showing the percentage distribution of students by faculty in the university population and those observed in our sample. The third column indicates the percentage by which each faculty is over- or under-represented in our sample.

However, no faculty was off by much, and the distribution was satisfactory across most faculties. Some, like Science and Social Wellbeing, were slightly over-represented in our sample, while others, like FEMA or Health Sciences, were under-represented.

Gender provided us with another opportunity to check: we know that 58.3% of all university students are female; in our sample, 61.9% were.

Lastly, year of study can also shed some light on this. It’s actually pretty hard to come across statistics on which year students are in, but logic would dictate that the number of students in their 1st year would be largest, followed by relatively equal 2nd and 3rd years, and a slight drop by the 4th year (since most Bachelor of Arts courses are only 3 years long).

A histogram of the distribution of our sample across year of study.

In this regard, the sample seems to skew slightly towards earlier years of study, and the 4th year and 4th year or more categories could stand to be a little higher.

Why didn’t you provide a margin of error?

The margin of error describes the error that arises because a survey is based only on a subset of the entire population of likely voters. This number can technically be calculated only if the sample is a probability sample, and, as explained above, this survey employs a non-probability sample. That being said, the demographic breakdown by gender and faculty does hint at a sample that is a decent microcosm of the voting population at large.

With this caveat, and with a grain of salt, if we were for a second to pretend that our sample actually is a probability sample, the margin of error at the 95% confidence level would be 6%. This means that 95% of the time, the estimate of this poll should be within 6% of the true result.
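For those curious, here is the back-of-envelope calculation behind that figure, using the standard formula for a proportion from a simple random sample; it rests on the same pretence that our intercept sample behaves like a probability sample.

```python
# Margin of error at 95% confidence for a proportion, assuming a simple
# random sample. n = 243 responses minus the one invalid sheet.
from math import sqrt

n = 242
p = 0.5    # most conservative assumption about the true proportion
z = 1.96   # z-score for a 95% confidence level

margin_of_error = z * sqrt(p * (1 - p) / n)
print(f"{margin_of_error:.1%}")  # ~6.3%, roughly the 6% quoted above
```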

This means that in reality, Pulse might only be 7% behind.

Why now? Elections aren’t until April!

A couple of reasons. First, now is a good time to establish a baseline of current sentiment. If, hypothetically, we were to run another poll a week before the election, that would give us two points in time and should theoretically allow us to gauge how voter intention changes over time. It could even allow us to evaluate things like the effectiveness of electoral campaigns, something both Pulse and SDM’s treasurers would probably be delighted to know.

Secondly, attendance at university has its own rhythm, peaking at the beginning of each semester before dropping off as it progresses. Intercepting people now gives us a larger and more varied sample.

Written by

Charles Mercieca
