While it is tempting to use advanced programming to “force” good answers and good behavior out of survey respondents, don’t do it. Let respondents make mistakes. Let them mess up, and then review who messes up. This is one of the best ways to assess data quality in a survey, and trust me, you will be surprised (maybe saddened, maybe worried) at what you find.
This advice came to mind for a survey we just fielded that required respondents to watch a four-minute video explaining a new product. The video was followed by questions about features, value, and likelihood of purchase, including a set of conjoint exercises to assess willingness to pay. The research sponsor asked that we force respondents to watch the entire video before allowing them to advance. The product is extremely complex, they argued, and the answers we get back will be suspect if people do not watch the video.
The problem, though, is that I cannot force respondents to pay attention. So why not let them give me data that tells me they did not pay attention? If they stop the video and move on, I will know. I will have data telling me how long the video played. But if I force four minutes of video play before letting the survey advance to the next screen, I will never know. After all, they can simply ignore the video while being forced to let it play.
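The idea is simple to put into practice: record how long the video actually played for each respondent, then flag anyone who fell short. Here is a minimal sketch, assuming hypothetical field names (`id`, `video_play_seconds`) for the logged paradata:

```python
# Hypothetical sketch: flag respondents whose recorded video play time
# falls short of the four-minute (240-second) video. Field names are
# invented for illustration, not from any real survey platform.

VIDEO_LENGTH_SECONDS = 240

respondents = [
    {"id": "r001", "video_play_seconds": 240},  # let the video finish
    {"id": "r002", "video_play_seconds": 37},   # stopped it early
    {"id": "r003", "video_play_seconds": 240},
]

def flag_short_watchers(records, required=VIDEO_LENGTH_SECONDS):
    """Return IDs of respondents who did not let the video play through."""
    return [r["id"] for r in records if r["video_play_seconds"] < required]

print(flag_short_watchers(respondents))  # -> ['r002']
```

Note that the flag is diagnostic, not a gate: r002 still completes the survey, and you decide later whether to weight, review, or drop those answers.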
The same goes for all sorts of survey questions. Many survey researchers impose constraints on what respondents can do, which makes for tidy, logically consistent, and easy-to-clean data. For example, you might force a respondent to give consistent answers so that “years employed” can never exceed “age.” But while you can force respondents to be logically consistent, popping up with error messages if they are not, you cannot force them to tell the truth. And if you force logical consistency, you lose evidence that they are not telling the truth.
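The same logic can be applied after the fact: accept whatever the respondent typed, then flag the impossible combinations for review instead of blocking them with an error message. A minimal sketch, using invented field names for illustration:

```python
# Hypothetical sketch: rather than popping up an error when "years
# employed" exceeds "age", record the answers as given and flag the
# respondent afterward. Field names are invented for illustration.

responses = [
    {"id": "r001", "age": 45, "years_employed": 20},
    {"id": "r002", "age": 30, "years_employed": 42},  # impossible: flag it
    {"id": "r003", "age": 62, "years_employed": 40},
]

def flag_inconsistent(records):
    """Return IDs where years employed exceeds age -- evidence the
    respondent was not answering carefully or truthfully."""
    return [r["id"] for r in records if r["years_employed"] > r["age"]]

print(flag_inconsistent(responses))  # -> ['r002']
```

A respondent who trips this flag has told you something a forced-consistency pop-up would have hidden: they were not paying attention, and their other answers deserve scrutiny too.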
It is unfortunate that some people (and most robots) race through surveys and give random answers. They want their incentive payments and that is all. But you can use their bad data to assess, diagnose, and eliminate them. So avoid getting overly focused on using programming and technology to “solve” the problem of inattentive behavior. Let them mess up, and use that data to your advantage.