If you can't do traditional or online qual prior to launching a survey, consider running the survey with some open-ended questions. Pause after getting 100 answers, then write up some closed-ended counterparts and resume fielding.
Recently, at a New England MRA chapter event, a fellow attendee told me she had enjoyed my article here, "The Qual Sandwich for Researchers on No-Carb Diets." As a recap, here's the gist of its extended metaphor: "Once upon a time, when most research projects took three to six months to complete, we would sandwich large quantitative studies between two slices of qualitative research... With today’s tight timelines and even tighter budgets, many full-service researchers have gone on the Atkins diet: no carbs, no qual. Their study is just what they consider the meat: the quantitative research... Just as the carb conscious can swap white bread for flatbreads, the budget conscious can save on qualitative research. Use online or mobile qualitative research to uncover initial insights about the market..."
While she liked the article, she confessed that too often there was no time or budget for doing any qualitative research. It was all about the survey.
Which got me thinking. Just because it was all about the survey didn't mean that we couldn't do any qualitative research. A survey itself doesn't have to be quant or qual – it is simply a tool, one that can be used for exploration or for projecting to a population. I've done surveys of as few as 10 people, for hard-to-reach executive positions, where many of the questions were simply open-ended. On the other extreme, I've done surveys of 2,000 consumers randomly selected from an e-commerce firm's comprehensive house list, in order to develop a rich, quantitative representation of their customers.
Could we use the same tool for both qualitative and quantitative work? What if we doubled down on the survey? Like the KFC Double Down, which uses two fried chicken filets instead of bread... Yes, it sounds unappetizing. Both the sandwich and the approach!
Unappetizing enough that we couldn't convince any clients to try the technique. As a result, we decided to run a research project ourselves to demonstrate how the methodology might work. This past Saturday morning, we launched a study on sports superstitions and the upcoming American football match, as my British friends say (and as my American lawyer recommends I say).
We were going to ask respondents a number of questions for which we did not know the range of possible answers:
- "Overall, how satisfied are you with the National Football League? Why?"
- "Who are you rooting for to win [the Big Game]?" Then: "Why are you rooting for the Broncos?" or "Why are you rooting for the Panthers?" or "Why aren't you rooting for either team?"
- "What’s your own sports superstition, if any, about affecting the play of your team?"
Each respondent answered three of these five open-ended questions (only one of the three "why are you rooting" variants applied to any given respondent).
But rather than treat the questionnaire as a constant throughout the fielding of the survey, we were going to iterate its design as we learned more. So we invited 100 respondents to take the survey, then paused a few hours later once we had collected enough results. We then analyzed the text responses and made some changes:
- For the question about satisfaction with the NFL, we came up with a grid question asking about satisfaction with 11 attributes of the NFL: the entertainment value of the games, the competitive parity of the teams, dominance of quarterbacking, etc. We hid the open-ended question and replaced it with this matrix.
- The answers to the question about their own sports superstitions, "if any," were disappointing. The most popular answer: "None." (My favorite: "None, except don't celebrate too early.") So, taking a page from behavioral economics, we bet that our respondents would know other people better than they know themselves. We retired that open-ended question and replaced it with another: "What’s the silliest sports superstition, if any, you've seen or heard of among your friends or family members?"
- Then we re-opened the survey.
After the next 200 or so results poured in, we paused the survey again. We now had enough responses to write closed-ended questions about why respondents were rooting for a given team (or neither), so each of those three open-ended questions was retired and replaced with a corresponding closed-ended, select-all-that-apply question. We also had enough responses to retire our open-ended question about other people's superstitions and replace it with a closed-ended question, "What do you do to help the team you are rooting for win?"
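The core of each pause is turning a pile of verbatims into a closed-ended answer list. Here's a minimal sketch of that step in Python; the theme labels are hypothetical (the coding of verbatims into themes is still an analyst's job), and the function just tallies coded responses and keeps the most common themes as options:

```python
from collections import Counter

def build_closed_options(coded_responses, top_n=8, other_label="Other (please specify)"):
    """Turn coded open-ended responses into a select-all-that-apply list.

    coded_responses: one theme label per verbatim (coding is done by an
    analyst; this function only tallies). Returns the top_n themes plus
    an "Other" escape hatch for answers the list doesn't cover.
    """
    counts = Counter(coded_responses)
    options = [theme for theme, _ in counts.most_common(top_n)]
    options.append(other_label)  # always keep an escape hatch
    return options

# Hypothetical themes coded from "Why are you rooting for the Broncos?" verbatims
coded = ["home team", "favorite player", "home team", "underdog",
         "favorite player", "home team", "family tradition"]
print(build_closed_options(coded, top_n=3))
```

Keeping an "Other (please specify)" option matters here: it is the safety valve that tells you whether the closed-ended list you derived from the first wave actually covers later respondents.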
We then re-opened the survey, now a primarily closed-ended quantitative exercise, to collect a further 700 responses. That gave us 1,000 responses to our core quantitative questions, plus the insights from the questions we had retired along the way. We published this today as a free report, "Sports Superstitions & The Big Game: How American Fans Believe They Influence Their Team's Chances."
Overall, the approach worked tremendously well. We certainly fielded better closed-ended questions than we would have if we had just cobbled together our own lists. For instance, our early-morning draft of a closed-ended question about NFL satisfaction didn't include entertainment value, which turned out to be the most important driver of satisfaction!
Our leading question about the silliest superstitions led to better feedback for constructing our closed-ended question. In fact, the average answer was 13 words long when discussing others, compared to just 9 words long when discussing themselves. It's an odd mindset shift, though: leading questions can be useful projective techniques in qualitative research, but absolutely must be avoided in quantitative research.
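That word-count comparison is simple to reproduce. A quick sketch, with made-up example verbatims standing in for the real responses:

```python
def avg_word_count(responses):
    """Average length, in words, of a batch of open-ended answers.

    Blank responses are excluded so they don't drag the average down.
    """
    counted = [len(r.split()) for r in responses if r.strip()]
    return sum(counted) / len(counted) if counted else 0.0

# Hypothetical verbatims; the second "about_self" answer is from the article
about_self = ["None", "None, except don't celebrate too early"]
about_others = ["My uncle wears the same unwashed jersey for every playoff game"]
print(avg_word_count(about_self))    # shorter answers about oneself
print(avg_word_count(about_others))  # longer answers about other people
```

Word count is a crude proxy for richness, but it's a useful first check on whether a question wording change actually produced more material to work with.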
One weakness of the study was that we didn't implement quota sampling and weighting. I would have wanted to hit quotas before each stopping point (after 100 responses and after 300 responses), but I didn't have the time, and as a result we undersampled those aged 60 and up. And the hurried schedule makes this technique a meal in the car at the drive-through window rather than a leisurely three-course meal at a four-star restaurant.
But, to take one last bite out of our extended metaphor, we will use as our inspiration the practice of mindful eating: we're going to call this mindful surveying.
And we're hungry for more.