Researchers often brutalize respondents, asking them to evaluate long lists of product claims, attribute features, graphics, brand names, and more. Adding to the hurt, researchers often want to capture open-end evaluations of the ideas, especially the winning ideas. MaxDiff (best-worst scaling) has risen in popularity over the last decade as a better approach than standard rating scales: it accomplishes more with smaller sample sizes while avoiding scale-use bias and increasing the sensitivity of the measurements.

But what do you do when the list is 100, 200, or 500 items long? How can we divide the effort across respondents so that no single respondent is worn out, while focusing respondents' open-end evaluations on the winning items, without knowing ahead of time which ones those are? Adaptive approaches learn from early respondents and focus later respondents' attention on the items rising to the top, so less time is wasted asking respondents to evaluate losing ideas over and over again. These approaches reduce sample size requirements by a factor of 3 to 4 compared with non-adaptive MaxDiff.
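To make the adaptive idea concrete, here is a minimal sketch of across-respondent winnowing: score items as early waves of respondents complete MaxDiff tasks, then drop the bottom performers so later waves concentrate on the survivors. All names (simulate_choice, WAVE_SIZE, KEEP_FRACTION, etc.) are hypothetical, and the simple best-minus-worst counting stands in for the hierarchical Bayes or logit utilities a production system would typically estimate.

```python
# Hypothetical sketch of adaptive MaxDiff winnowing across waves of
# respondents. Scoring is simple best-minus-worst counts; a real study
# would estimate utilities (e.g., HB) before dropping items.
import random

ITEMS = [f"item_{i}" for i in range(200)]   # the long list (e.g., 200 claims)
ITEMS_PER_SET = 5                           # items shown per MaxDiff task
SETS_PER_RESPONDENT = 10                    # tasks per respondent
WAVE_SIZE = 50                              # respondents per adaptive wave
KEEP_FRACTION = 0.5                         # share of items surviving each wave

def simulate_choice(item_set):
    """Stand-in for a real respondent: returns (best, worst) from a set."""
    ranked = sorted(item_set)               # placeholder preference order
    return ranked[0], ranked[-1]

def run_wave(active_items, scores):
    """One wave: each respondent sees sets drawn only from surviving items."""
    for _ in range(WAVE_SIZE):
        for _ in range(SETS_PER_RESPONDENT):
            shown = random.sample(active_items, ITEMS_PER_SET)
            best, worst = simulate_choice(shown)
            scores[best] += 1               # best-minus-worst counting
            scores[worst] -= 1

active = list(ITEMS)
scores = {item: 0 for item in ITEMS}
while len(active) > ITEMS_PER_SET * 2:      # stop once the list is short
    run_wave(active, scores)
    active.sort(key=lambda i: scores[i], reverse=True)
    active = active[: max(ITEMS_PER_SET * 2, int(len(active) * KEEP_FRACTION))]

print("Top items:", active[:10])
```

Because later waves draw tasks only from the surviving items, open-end follow-up questions can likewise be attached only to the items still in play, which is how respondent effort gets concentrated on the likely winners.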
What the audience will take away:
What is MaxDiff (best-worst scaling) and why is it superior to standard rating scales?
How Procter & Gamble leveraged adaptive MaxDiff to evaluate a long list of features with lower sample size requirements
How to leverage open-end content to gain qualitative insight into the winning ideas, without wasting respondent effort on open-end responses about losing ideas