Online surveys are replacing telephone interviews as the dominant data collection tool in opinion and market research. With the growth of Web surveys and online panels, researchers have begun to fear the existence of professional respondents. One of their main concerns is that professional respondents – those who frequently participate in online surveys – are in it for the incentives and provide data of lower quality than more altruistic respondents. This fear is based on the assumption that professional respondents are only extrinsically motivated by incentives and, in order to earn as much as possible, rush through a questionnaire with minimal cognitive effort. But is this fear justified?

In a recent article, Hauser & Schwarz1 showed that quintessential professional respondents, workers on Amazon’s Mechanical Turk, are more attentive to instructions than student subjects. Does this mean that professional respondents should actually be welcomed rather than feared?

We had the invaluable opportunity to analyze data from the Netherlands Online Panel Comparison Project (NOPVO).2 This data set contains information from 19 major Internet panels, which together capture 90 percent of the respondents in online market research in the Netherlands. Using latent class analysis, we showed that professional respondents could be discerned in all panels and that they had demographic and psychographic profiles different from those of altruistic respondents. However, professional respondents were not a threat to data quality.3
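
To give a feel for the method: latent class analysis assumes an unobserved class variable that explains the associations among a set of observed indicators and assigns each respondent a posterior probability of belonging to each class. Below is a minimal sketch of a two-class latent class model for binary behavioral indicators, fitted with the EM algorithm. The indicators, the simulated data and the two-class structure are invented for illustration; they are not the NOPVO variables or the model from our paper.

    # Minimal two-class latent class model for binary indicators, fit by EM.
    # Hypothetical indicators: multi-panel membership, high survey frequency,
    # incentive focus. Simulated data; not the NOPVO analysis itself.
    import numpy as np

    rng = np.random.default_rng(0)

    def fit_lca(X, n_classes=2, n_iter=200, tol=1e-6):
        """EM for a latent class model with independent Bernoulli indicators."""
        n, m = X.shape
        pi = np.full(n_classes, 1.0 / n_classes)          # class sizes
        theta = rng.uniform(0.25, 0.75, (n_classes, m))   # P(item = 1 | class)
        ll_old = -np.inf
        for _ in range(n_iter):
            # E-step: posterior class membership per respondent
            log_p = np.log(pi) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
            log_norm = np.logaddexp.reduce(log_p, axis=1, keepdims=True)
            resp = np.exp(log_p - log_norm)
            # M-step: update class sizes and item probabilities
            pi = resp.mean(axis=0)
            theta = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
            ll = log_norm.sum()
            if ll - ll_old < tol:              # stop once the log-likelihood settles
                break
            ll_old = ll
        return pi, theta, resp

    # Simulate a minority with a 'professional' answer pattern and fit the model.
    pro = (rng.random((200, 3)) < [0.9, 0.8, 0.85]).astype(float)
    alt = (rng.random((800, 3)) < [0.2, 0.3, 0.15]).astype(float)
    pi, theta, resp = fit_lca(np.vstack([pro, alt]))
    print('class sizes:', np.round(pi, 2))
    print('P(indicator | class):', np.round(theta, 2), sep='\n')

In a real analysis one would compare solutions with different numbers of classes on fit measures such as BIC before settling on a typology.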

NOPVO Data

Data were collected from 19 large commercial market research panels that are open to external clients. All panels offer participants a small reward on completion of the questionnaire; the most commonly used incentive is the so-called “value point” that respondents can save up and cash in later. Together, these panels represent 90 percent of all panelists in the Netherlands.

Each panel in the NOPVO study sampled 1,000 participants between 18 and 65 years old. The combined sample across all panels was checked and deduplicated so that respondents who were members of multiple panels would not be invited more than once. The same questionnaire was used for all panels. It covered a broad variety of topics and, in addition, asked about respondents’ panel membership and personal background (demographic and psychographic variables). Data collection was carried out by an independent party. To ensure that all panel members felt they were completing a questionnaire created by their own panel, the questionnaire layout was carefully adapted to resemble the usual layout of each individual panel provider.

All selected panel members received the invitation on the same day, after which the questionnaire remained available online for seven days. No quota sampling was used and no reminders were sent. The response rate was 50 percent on average but varied greatly between panels, ranging from 18 percent to 77 percent. The final sample contained 9,461 respondents, and the average completion time was 13 minutes. The identity of all panel agencies was protected and unknown both to the original NOPVO researchers and to us.
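
The deduplication step lends itself to a small illustration. The sketch below assumes each panel can supply a stable identifier such as a hashed e-mail address; the field names and the hashing choice are assumptions for illustration, not a description of NOPVO’s actual procedure.

    # Illustrative cross-panel deduplication: invite each unique person once,
    # keyed on a hash of a normalized e-mail address. Hypothetical setup.
    import hashlib

    def dedupe_invitations(samples):
        """samples: list of (panel_id, [e-mail, ...]) pairs, in panel order.
        Returns one (panel_id, e-mail) invitation per unique person."""
        seen = set()
        invitations = []
        for panel_id, emails in samples:
            for email in emails:
                key = hashlib.sha256(email.strip().lower().encode()).hexdigest()
                if key not in seen:            # skip people already invited
                    seen.add(key)
                    invitations.append((panel_id, email))
        return invitations

    samples = [('panel_a', ['jan@example.nl', 'kees@example.nl']),
               ('panel_b', ['kees@example.nl', 'anna@example.nl'])]
    print(dedupe_invitations(samples))   # kees is invited only once, via panel_a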

Professional Respondents Do Exist

Professional respondents do exist and can be clearly distinguished from altruistic respondents, who are more intrinsically motivated to complete surveys. Professional respondents belong to multiple panels, frequently participate in a large number of surveys and are focused on incentives. Surprisingly, an incentive alone does not provide enough motivation for a professional respondent. Respondents’ decisions to participate also appear to be influenced by whether the survey seems fun or not. As early as 1978, Dillman observed that a pleasant survey experience is essential when it comes to achieving a good response. There is much to gain from the development of questionnaires that are interesting and easy to complete.4

Who are these professional respondents? Compared to altruistic respondents, professional respondents are more often female, slightly less educated and less often gainfully employed. Similar results were reported by Whitsett, who summarized nine studies on the demographics of frequent respondents in online panels in the U.S.5 Professionals also check their email more often, a finding not previously reported in the research literature. A possible explanation is that professional respondents are more curious and keener on invitations for new studies and consequently check their email more often. Professional respondents also use the Internet more frequently than altruistic respondents and are, in general, more active on the Web. Their psychological profile shows that they prefer to finish a task on their own instead of working together, and they describe themselves as more interested in politics. However, there is no difference in political orientation or self-reported voting behavior. Professionals do report being somewhat less satisfied with life in general and describe their health as poorer. All of these differences are small, however.

Despite these differences in the profiles of more extrinsically motivated professional respondents versus more intrinsically motivated altruistic respondents, the two groups are largely alike. No differences at all were found in important demographics like age, nationality, household size or religion.

Are Professional Respondents a Threat to Data Quality?

We could not find much empirical support for the common belief that professional respondents pose a threat to the quality of survey data; in fact, we found evidence that suggests the opposite. Answers given by professional respondents to a series of related questions showed slightly higher scale reliability, which supports the earlier findings of Chang and Krosnick.6 Practice at completing surveys is not necessarily a bad thing, as it may increase reporting accuracy.
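
For readers unfamiliar with the term: scale reliability here refers to the internal consistency of a multi-item scale, commonly summarized by Cronbach’s alpha. The sketch below computes alpha for two simulated respondent groups; the data, group sizes and noise levels are fabricated purely to illustrate the computation, not to reproduce our results.

    # Cronbach's alpha for a k-item scale: alpha = k/(k-1) * (1 - sum of item
    # variances / variance of the total score). Fabricated, simulated data.
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, k_items) array of scores on one scale."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(1)
    trait = rng.normal(size=500)                 # latent attitude per respondent
    # Assumed noise levels: slightly less measurement noise for 'professionals'.
    pro = trait[:250, None] + rng.normal(scale=0.8, size=(250, 5))
    alt = trait[250:, None] + rng.normal(scale=1.0, size=(250, 5))
    print('alpha, professionals:', round(cronbach_alpha(pro), 2))
    print('alpha, altruistic:  ', round(cronbach_alpha(alt), 2))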

Focusing on undesirable response styles, such as straight-lining, we found only very small effects, and these also depended on the topic of the question. This result could not be explained by a practice effect or a demotivating order effect, because the grid that showed increased response-style effects for professional respondents was the first to appear in the questionnaire. The content of the questions themselves may have had an impact, and it is possible that response styles are more associated with the question topic than with respondent characteristics. This is consistent with the findings of Van Meurs, Van Ossenbruggen and Nekkers,7 who analyzed data from multiple online surveys in a major Dutch online panel and concluded that the presence of response styles depended more on the questionnaire itself than on respondent characteristics.
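
Straight-lining, giving the identical answer to every item in a grid question, is one of the simplest response styles to operationalize: a respondent is flagged when their answers across the grid show no variation. A minimal sketch with hypothetical Likert-scale data:

    # Flag straight-liners in a grid question: identical answer on every item.
    # Hypothetical data; grid sizes and scales vary by study.
    import numpy as np

    def straightlining_rate(grid):
        """grid: (n_respondents, n_items) Likert answers for one grid question.
        Returns the share of respondents answering every item identically."""
        grid = np.asarray(grid)
        flags = (grid == grid[:, [0]]).all(axis=1)   # all items equal the first
        return flags.mean()

    grid = np.array([[3, 3, 3, 3, 3],    # straight-liner
                     [1, 2, 2, 4, 3],
                     [5, 5, 5, 5, 5],    # straight-liner
                     [2, 3, 2, 4, 2]])
    print(f'straight-lining rate: {straightlining_rate(grid):.0%}')   # 50%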

Food for Thought

A frequently heard concern in online market research is that the emergence of professional respondents threatens the quality of our data. Data from 19 major Dutch commercial online panels could not confirm this picture, however. Yes, professional respondents can be identified, but they do not seriously threaten data quality in terms of lower reliability or more undesirable response styles.

Perhaps, following Van Meurs, Van Ossenbruggen, and Nekkers, we should shift our attention to question format and question presentation. To paraphrase Van Ossenbruggen: “Online research should focus more on the emergence of unprofessional questionnaires, which are fabricated by inexperienced researchers and/or programmed by unprofessional companies, and endanger the respondents’ experience and consequently contribute to a further decline in response rates and low-quality data.”

All’s Well that Ends Well . . . or Perhaps Not Quite?

Our study focused on the internal validity of the data produced by professional respondents in online panels: the data quality. We did not study the external validity, in other words, the extent to which findings can be generalized from the respondent sample to a larger population.

Our main conclusion is that professional respondents do indeed exist in large market research panels. We also conclude that there is not much empirical evidence to suggest that professional respondents pose a serious threat to data quality. However, professional respondents do display certain socio-demographic characteristics which distinguish them from other types of respondents. When a panel consists of many professional respondents, this could pose a severe threat to the external validity and the representativeness of the findings. This threat is particularly salient when the variables of main interest to the researcher or client, such as attitudes or buying behavior, are related to variables in the profile of the professional respondent.

Acknowledgements:

We thank the initiators of the NOPVO Study, Robert van Ossenbruggen, Ted Vonk, and Pieter Willems, for so generously sharing their data; Elizabeth Brüggen for kindly providing us with the questionnaire and data description; and the Netherlands Market Research Organization (MOA) for their permission to use the data.

Footnotes

1 Hauser, D., & Schwarz, N. (2015). Attentive Turkers: MTurk participants perform better on online attention checks than subject pool participants. Behavior Research Methods, published online March 2015. https://www.researchgate.net/publication/272178519_Attentive_Turkers_MTu... (retrieved December 13, 2015).

2 Vonk, T., Van Ossenbruggen, R., & Willems, P. (2006). The effects of panel recruitment and management on research results: A study across 19 panels. ESOMAR: Panel Research 2006, 79–100.

3 A detailed description of the analyses, theoretical background, and literature review can be found in Matthijsse, S.M., de Leeuw, E.D., & Hox, J.J. (2015). Internet panels, professional respondents, and data quality. Methodology, 11(3), 81–88. A pre-publication copy can be requested from the first author at e.d.deleeuw@uu.nl.

4 Dillman, D.A. (1978). Mail and Telephone Surveys. New York: Wiley. See also Puleston, J. (2012a). Gamification 101 — from theory to practice — parts I and II. Quirk’s Marketing Research Media (retrieved November 2013 at http://www.quirks.com/articles).

5 Whitsett, H.C. (2013). Understanding frequent survey responders on online panels. NERA Economic Consulting (retrieved October 2013 at http://www.nera.com/nera-files/PUB_Frequent_Survey_Responders_0313.pdf).

6 Chang, L., & Krosnick, J.A. (2009). National surveys via RDD telephone interviewing versus the Internet: Comparing sample representativeness and response quality. Public Opinion Quarterly, 73, 641–678.

7 Van Meurs, A., Van Ossenbruggen, R., & Nekkers, L. (2007). Do rotten apples spoil the barrel? Paper presented at the 2007 ESOMAR Panel Research Conference. http://www.esomar.org/web/research_papers/Web-Panel_1656_Do-rotten-apple... (retrieved October 2013).