When measuring the digital world, mobile disruption is now pervasive. In the U.S., 141 million people own smartphones, a market penetration of 59 percent (comScore, 2013), and tablet ownership has more than doubled in the past year to 34 percent, according to the 2013 Pew Internet & American Life Project report.

Smartphones and tablets continue to revolutionize how businesses, governments, and consumers think about the traditional modes of gathering and disseminating information, and no industry is immune to this transformation. From advertising to publishing to higher education, industries are adapting to mobile’s ubiquitous presence, and pressure for the market research industry to integrate mobility into its market tools is increasing.

To date, there appears to be very little understanding of mobile’s current and future potential in the market research industry. While 9 out of 10 market research decision-makers see mobile as a viable research method, it currently accounts for only 2 percent of quantitative market research, and less than one-fifth of market research companies offer mobile research capabilities today (Macer, Wilson, Meaning Ltd., 2012). The gap between what is believed to be possible with the mobile platform and how it is currently being used represents a missed opportunity for the market research industry, and may even become the industry’s Achilles’ heel as more and more “outsider” companies, like Google, enter the industry to take advantage of real-time analytics through automated or metered means.

However, before capitalizing on the potential of mobile, the MR industry as a whole needs the know-how to first design effective mobile surveys for responsive and engaging experiences that work across multiple devices, and then to understand the implications of providing mobile surveys and analyzing respondent data.

As such, Bovitz Inc. researchers and designers wanted to explore the use of mobile surveys more formally. Like much of the rest of the market research industry, we had some strong thoughts and hypotheses about the mobile experience. When we set out to confirm these, we found a whole different story.

The Approach

To explore the viability of the mobile platform, Bovitz researchers developed a sleek, responsive survey interface that delivered an equivalent experience to mobile, tablet and computer respondents; we then designed the survey itself to isolate the effects of the device being used. We utilized universally interesting and appealing survey content (think personal identity, food, travel) and structured it into three 5-minute modules, each consisting of commonly used question types: multiple-response lists, open-ends, and an attribute battery.

To qualify, participants in the study needed to indicate that they would be willing to take the survey on multiple devices, so that no group was biased toward any one device; they were then randomly assigned to continue on their computer, smartphone or tablet. We also controlled for subject matter, question experience, survey length, functionality, device preference, and demographic differences, attempting to leave only the “device effect” to account for any differences in the data.
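The qualify-then-randomize flow can be sketched in a few lines. This is a minimal illustration only; the screener field name and data structure are assumptions for the example, not the actual instrument used in the study.

```python
import random

# Devices to which qualified respondents are randomly assigned.
DEVICES = ["computer", "smartphone", "tablet"]

def assign_device(respondent, rng=random):
    """Screen a respondent, then randomize the survey device.

    Only respondents willing to take the survey on any device qualify,
    so no assignment cell is biased toward a preferred platform.
    Returns the assigned device, or None if the respondent screens out.
    """
    # Hypothetical screener flag; the real questionnaire wording differs.
    if not respondent.get("willing_on_all_devices"):
        return None
    return rng.choice(DEVICES)

device = assign_device({"willing_on_all_devices": True})
```

Randomizing only among pre-qualified respondents is what lets any remaining difference between groups be attributed to the device itself.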

Figure 1 illustrates the flow of qualification and sample design among the incoming nationally-representative U.S. sample.

Note: For simplification, we will refer to those who were assigned to take the survey on each device as computer users, smartphone users, and tablet users; however, it should be noted that all respondents own and use both computers and mobile devices daily.

Myth-Busting

Myth #1: Mobile Data is of Lower Quality

One belief about the mobile survey platform is that it produces different, and even lower-quality, data than its computer counterpart. Many defining features of the mobile device, namely its limited screen size and portability, seem to lend themselves to a more frustrated and distracted respondent, and thus poorer-quality responses. We found this was not true: there were no consistent differences in data quality between computer, smartphone and tablet users.

Users across all three devices selected the same number of answers in both long and short multiple-response questions, and they typed the same number of characters in their open-end responses. They opted out of open-end questions with the same incidence, and they passed “trap” questions at the same rate. They responded to open-end questions with the same number of ideas, and they used comparable sentence construction in their responses.

In some cases mobile users appeared to be even more engaged than computer users. For example, respondents were presented with a description of a fictitious new television show and asked to highlight all of the words or phrases in the account that they liked and disliked. On average, smartphone and tablet users highlighted 18.4 and 19.9 words, respectively, while computer users highlighted significantly fewer (14.9). One possible explanation is that making the “touch” functionality of mobile devices central to answering the question encourages participation among mobile users more than the traditional “click” does among computer users.
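Group differences like the highlight counts above are typically judged with a two-sample test on the per-respondent values. Below is a minimal sketch of Welch’s t-test (which does not assume equal group variances); the respondent-level counts are hypothetical placeholders for illustration, not the study’s raw data.

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom.

    Compare |t| against the t distribution with df degrees of freedom
    to judge whether the group means differ significantly.
    """
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    # Unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    se2_1, se2_2 = v1 / n1, v2 / n2
    t = (m1 - m2) / math.sqrt(se2_1 + se2_2)
    # Welch–Satterthwaite approximation for the degrees of freedom
    df = (se2_1 + se2_2) ** 2 / (
        se2_1 ** 2 / (n1 - 1) + se2_2 ** 2 / (n2 - 1)
    )
    return t, df

# Hypothetical per-respondent highlight counts (not the study's data)
computer = [14, 15, 13, 16, 15, 14]
smartphone = [18, 19, 17, 20, 18, 19]
t, df = welch_t(computer, smartphone)
```

In practice one would use a statistics package rather than hand-rolling the test, but the sketch shows what “significantly fewer words” means operationally.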

Myth #2: Mobile Surveys are Only Possible at Shorter Lengths

There is a pervasive belief in the market research industry that mobile surveys must be shorter than standard computer surveys. In a sample of market research firms, the mean acceptable limit for online surveys was 20 minutes vs. just 8 minutes for mobile surveys (Macer, Wilson, Meaning Ltd., 2012). Given the greater likelihood of frustration and distraction on a mobile device, we also assumed that the only possibilities for the platform lay in the shortest survey lengths. To test this, we randomly assigned respondents to a 10-, 15-, or 20-minute survey and assured them they would be compensated for whichever length they received. When asked to assess the survey’s length, the vast majority of respondents across all three devices found the time to be “just about right” (76 percent of computer users, 67 percent of smartphone users, and 77 percent of tablet users).

The survey-length assessments are even more striking given that mobile users took more time than computer users on every single question in the survey. For each module, total mobile time ran 12 to 23 percent longer than computer time, and yet two-thirds of mobile users still felt the survey length was appropriate. Mobile respondents may have been interrupted by calls, texts, or other alerts while taking the survey on the go, or they may simply have felt more comfortable multitasking while using their mobile device.

While most agreed with the survey length, 26 percent of smartphone users felt it was too long, significantly higher than the 17 percent of computer and tablet users who said the same. Most of those who felt the survey was too long were 20-minute smartphone users, indicating that the smartphone does have a slightly lower survey-length threshold than the computer or tablet. Nevertheless, the finding that 15-minute mobile surveys are a reasonable length for mobile respondents dispels a major industry misconception about what kind of surveys can be given on these devices.

Myth #3: Mobile is an Unpleasant Experience

The smaller screen and touch interface of a mobile device would seem to make for a difficult and unpleasant survey experience. Our results show that, when the survey is designed with mobile respondents in mind, the mobile platform provides a nearly comparable experience to the computer in ease of use and enjoyment.

See Figure 2 for an illustration of the survey’s appearance across devices.

Nine in ten respondents found the survey to be extremely or very easy to take on the device they were assigned. Not surprisingly, computer users indicated the highest ease of use (97 percent), but smartphone and tablet users were also very high at 90 percent and 91 percent, respectively.

Not only did mobile users find the survey experience just as easy as computer users, they also enjoyed the experience just as much. Roughly two-thirds of respondents on each device enjoyed the experience at least somewhat (64 percent, 61 percent, and 67 percent for computer, smartphone, and tablet users, respectively), a strong indicator for mobile’s potential. Computer and tablet users were significantly more likely to say they enjoyed the survey very much (26 percent and 27 percent, compared to 20 percent for smartphone users), so there is definite room for survey optimization on the smartphone.

See Figure 3 for the assessment of survey enjoyment.

Myth #4: Computer is King for Survey-Taking

Conventional wisdom holds that since computers displaced random-digit-dial telephone interviewing as the dominant survey mode, they have remained the go-to device for respondent survey activity. Our research, however, shows smartphones and tablets gaining traction and poised to overtake computers as the default survey-taking devices.

Once respondents had a taste of the mobile experience, smartphone and tablet survey-takers wanted more. At the end of our survey, we gauged future interest in survey participation on each of the three devices. The effect of respondents’ current survey experience was staggering. Of those who took our survey on their computer, 44 percent and 51 percent would be very or somewhat interested in taking future surveys on their smartphone or tablet, respectively. When we asked the same question of those who took the survey on their smartphone or tablet, interest jumped to 70 percent and 78 percent. It appears that after experiencing a mobile survey, respondents are significantly more likely to express future interest in using the platform.

Not only are mobile users interested in the platform for taking surveys, but their preference for it even rivals that for computers. When asked to indicate their share of preference for the three devices in terms of taking surveys, computer users gave smartphones only 17 percent and tablets only 16 percent of their preference, while smartphone and tablet users allocated 35 percent and 40 percent of their total preference to their respective devices. Once survey-takers get a glimpse into the mobile experience, their inclination to reach for their smartphone or tablet (instead of their computer) more than doubles.

See Figure 4 (next page) for illustrations of interest and preference across devices.

Mobile Outlook

The myth-busting findings of this research have far-reaching consequences for the market research industry. Now that we know how well the mobile platform performs, every player in the industry has a part in responding to the disruption with enthusiasm and action.

In partnership with market research firms, technology providers in the industry will need to be proactive in bringing an adaptive, seamless survey experience across devices to their work. At this time, we anticipate that the future of mobile surveys remains primarily in web-based form, as opposed to apps, as this channel is currently the best way to fully accommodate respondents who wish to take surveys on mobile devices without excluding traditional computer users.

Survey programming should consider all devices when designing for functionality and appearance; this will ensure a comparable experience across platforms, while still aiming for the optimal user experience for each device. Even beyond the basics, technology partners should focus on innovation around the platform and how to bring new and engaging survey tasks to mobile respondents. By including mobile features and experiences that users appreciate and enjoy, surveys can keep respondents surprised, engaged and committed.

For panel providers, educating respondents about the mobile experience will be key to its adoption. Although a small portion of our respondents actually initiated the survey on a mobile device, only 48 percent and 35 percent of all respondents screened were willing to take the survey on their smartphone and tablet, respectively. Those proportions already show promise for the platform, but the top barriers to taking surveys on these devices will require actual experience to overcome: 39 percent of smartphone refusals and 31 percent of tablet refusals stemmed from the assumption that the device’s functionality was not as good as the computer’s for taking surveys. Other barriers were much more of a problem for smartphones than for tablets. Many believed it would be difficult to take a survey on the device (35 percent for smartphones vs. 13 percent for tablets), and some simply didn’t want to take surveys on the device (30 percent vs. 18 percent).

Despite these misconceptions, we have seen in our research that those who take a survey on their mobile device actually find it to be just as easy and enjoyable an experience as on their computer, so encouraging survey-takers to initiate a mobile experience will likely help to break down current barriers to the platform. This encouragement, coupled with messaging about the positives of the platform, will allow panel providers to build their mobile capabilities. Panel providers that do not upgrade their capabilities risk a poor reputation for survey quality and the loss of panelists to more mobile-friendly competitors.

Perhaps most importantly, bringing the mobile platform to the forefront of market research creates opportunities for providing real-time data to businesses and other end-users of the research. The portability of mobile devices allows companies to catch consumers during authentic decision-making moments, and assess their thoughts, preferences, and behaviors as they occur in real time. In these mobile data-gathering scenarios, consumers could genuinely react to authentic products in real situations, rather than being asked days or even weeks after the activities occurred.

Businesses will always be eager to get consumer reactions to their products or services in the marketplace, and by placing a digital survey into consumers’ hands while they are standing in the aisle, sitting in the restaurant booth, waiting in the checkout line, using the product, or cruising through the mall, their voices are going to be heard louder, faster, and clearer. Being equipped to take advantage of the immediacy and intimacy of these experiences is going to be the new norm in the market research industry, and those companies that execute their mobile strategy best will become the new industry leaders.

Limitations

Our research is just the beginning of the mobile story. Because the platform is still breaking onto the scene, novelty could have had an artificial inflationary effect on mobile respondents’ enjoyment of the survey experience. Without much familiarity with taking surveys on mobile devices, respondents likely went into the experience with low expectations of the ease of functionality and were thus pleasantly surprised when their expectations were easily exceeded. Measuring interest and preference for the platform immediately after such a bias-busting experience may have led to amplification of these metrics. Longitudinal research is recommended to understand how the attractiveness of the mobile platform will develop over time and across multiple survey experiences.

Because we recognize the potential for mobile to deliver real-time data, we also know that non-panel respondents come along with the territory. Mobile research among the general non-panel population is needed to assess the viability of the platform for those who are not accustomed to participating in research. Some level of enjoyment, ease of functionality, and agreement with survey length in this research may be attributed to the panel nature of the respondent base.