Statistically Suspicious?
Here is the scenario …
1. The researcher had a return rate more than triple the average return rate for emailed surveys in general, despite apparently using none of the features associated with higher return rates – incentives for completion, an advance postcard or email explaining the survey and incentives, or email or phone follow-ups to non-respondents. A 2005 study of 1,500 subjects from the same population in the same state, on a similar topic, had a return rate less than one-third of what this researcher claims, even though the 2005 researchers used incentives AND advance notice AND, in four out of five sites, emailed or called non-respondents.
2. The completion rate for each item was 97-98%. From the first item asking about the topic to the 50th, there was no drop-off in responses. There appeared to be almost no attrition from the study; over 98% of respondents finished it. There was zero correlation between an item's position on the survey and how many people completed it, because virtually every one of the more than 750 respondents answered every question.
3. Responses came from three groups – Group 0, Group 1 and Group 2. Over half of the subjects – nearly 400 – came from the same location, and there were times when as many as 9 respondents started the survey at the exact same minute of the same day. In one group, surveys were ONLY started between 10 and 11 in the morning or between 3 and 7:15 pm (well, 5% did come in between 7:15 and 9). None of the 141 surveys in that group came in at any other time. (A sketch of how one might check for this kind of clustering in the raw data appears after this list.)
4. The quote from the "typical audience member" is verbatim identical in this report and in another report, from a study at a different site completed two years ago.
5. The report contains an almost complete absence of detail on the sampling method – nothing on return rate, nothing on how the online survey was distributed, nothing about missing data, incentives or follow-up – and only minimal discussion of possible bias in the sample. The return rate had to be computed from the raw data and a report on the number of students surveyed; the missing-data percentage likewise had to be computed from the raw data.
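To make points 2 and 3 concrete: these checks are simple to run on a raw export, if anyone gets access to it. Here is a minimal sketch in Python/pandas; the file name and the column names (start_time, group, q1–q50) are hypothetical stand-ins for whatever the actual data file contains.

```python
import numpy as np
import pandas as pd

# Hypothetical raw export: one row per respondent, with a start timestamp, a
# group code (0/1/2), and 50 item columns q1..q50 (NaN = unanswered).
df = pd.read_csv("survey_raw.csv", parse_dates=["start_time"])
item_cols = [f"q{i}" for i in range(1, 51)]

# (a) Item position vs. completion: real respondents tend to drop off as a
# survey goes on, so later items usually have fewer answers. If every item is
# ~98% complete, this correlation is near zero (or undefined, if the counts
# never vary at all).
completion = df[item_cols].notna().sum().to_numpy()
position = np.arange(1, len(item_cols) + 1)
print("item position vs. completion r:", np.corrcoef(position, completion)[0, 1])

# (b) Start-time clustering: how many respondents began in the exact same
# minute, and the hour-of-day distribution of starts within each group.
same_minute = df.groupby(df["start_time"].dt.floor("min")).size()
print(same_minute.sort_values(ascending=False).head(10))
print(df.groupby(["group", df["start_time"].dt.hour]).size().unstack(fill_value=0))
```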
When someone questioned this, the response was:
“The data were reviewed by a statistician who said that he saw no problems.”
Your comments and opinions are eagerly awaited.
They didn’t specify a ‘competent’ statistician.
Obviously things look suspicious, but valid outliers do exist. It’s a bad idea to throw out all outliers in all cases.
But it is perfectly reasonable to heavily scrutinize the email campaign and to expect the firm in question to be forthcoming in any such "audit."
Sounds like someone working in a computer lab, or something similar, recruiting people to take part in the survey. There must be some hidden human involvement to produce the peculiar timing of the responses. Also, a captive survey taker is not likely to quit in the middle the way someone taking it in their free time might. Strange.
It feels like a lecturer has directed his students to fill out the survey during a particular lecture (and perhaps given them time to do so). Is that possible in context?
“more than triple the average return rate for emailed surveys in general.”
Point 1 actually doesn’t seem too unusual, and I wouldn’t draw anything from it – how does it compare to the upper quartile for emailed surveys? I imagine that a well-written and easy-to-complete survey could make a difference even bigger than three-fold.
(the accidental gender stereotype was because I imagine male lecturers to be naughty in that way! sorry)
Yes, Ronald, that was my immediate thought as well. We have done “email” surveys where we had a nearly 100% return rate. We offered incentives to the students – a pizza party for the whole class – and the teacher took them en masse to the lab to complete the survey. However, this survey was supposedly just emailed out to the students.
It does sound exactly like someone taught a 10 am class and another, larger class at 7-10. If that’s the case, though, then why on earth not just say so?