Brad will be moderating a panel to discuss these issues at NEXT, May 9-10 in New York.

Embarrassing polling misses. New tech (and other) products that totally fail to gain traction with their targets. A drive toward speed and low cost from market research providers, pushed by clients who assume projectability is a given. Declining response rates for traditional market research modes. Unknown skews of all sorts in online panels. A growing focus on gaining insights from social media, treating them as findings projectable to all consumers.

How worried should we be about market research as a dependable driver of decision making?

Recent polling misses were not outliers

The major market research miss of the 2016 election has raised far too little concern. It highlights a problem that has been growing for years in all aspects of quantitative market research; the gaps between the polls and the election outcome simply illustrate it more dramatically. Yes, the national polls were not bad, but the sheer number of state poll failures, by very reputable survey firms, is virtually unprecedented.

I fear that the political polling problem of 2016 is not an outlier. Rather, it appears to be the continuation of an arc of declining projectability that has been going on for over a decade in traditional market research.

Let me explain. A decade or so ago, online panel-based market research took off. It was cheaper, faster, and easier than phone or in-person research. Often lost in the stampede to online polling, though, was what it would take to make findings from online panels projectable. It is not enough to collect survey findings. Findings need to represent the world one is trying to act in, or they can lead to very bad decisions. For example, if you think your hot new market segment is X in size, but it is really only one-quarter of X, the results will not be good. Research on social media is even harder to project to a market universe.

Weighting models attempt to fix a host of survey issues

Most sophisticated researchers attempt to adjust survey results by weighting. Basically, you take the raw results you get from whatever mode you use (online panel, phone, face to face, whatever) and adjust them to project to a market of interest (a/k/a the universe). This is a harder task for online panels. Since they are generally not constructed randomly, the standard random sample assumptions in classical statistics don’t work that well. Nonetheless, the idea is the same: take what you have, and adjust it to get to a “truth.”
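To make the mechanics concrete, here is a minimal post-stratification sketch in Python. Every number in it, and the segment breakdown itself, is invented for illustration; real weighting schemes (rim weighting, raking) balance many dimensions at once.

```python
# Minimal post-stratification sketch: give each subgroup the influence
# it would have in the target universe, not the influence it happens to
# have in the raw sample. All numbers are invented for illustration.

sample_share = {"18-34": 0.20, "35-54": 0.30, "55+": 0.50}  # who answered
target_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # the universe

# Each respondent in a subgroup gets the same weight: target / sample.
weights = {g: target_share[g] / sample_share[g] for g in sample_share}

# Observed "intends to buy" rate in each subgroup (also invented).
raw_rate = {"18-34": 0.60, "35-54": 0.40, "55+": 0.20}

unweighted = sum(sample_share[g] * raw_rate[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * raw_rate[g] for g in sample_share)

print(f"unweighted estimate: {unweighted:.2f}")  # 0.34
print(f"weighted estimate:   {weighted:.2f}")    # 0.39
```

Note that the whole exercise stands or falls on the target shares being right. For an online panel, the unknown skews live precisely in the gap between who joined the panel and the universe you claim to project to.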

Political polls have the same issue, although weighting is generally based on estimating likely voter turnout by subgroup. In practice, the polling firm with the most accurate turnout model wins the prize for most accurate forecast.
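The polling version is the same arithmetic. Again as a sketch with invented numbers: the raw sample and the likely-voter estimate can tell different stories, and the turnout assumptions are doing the work.

```python
# Likely-voter weighting sketch (invented numbers): the raw sample gives
# one answer, and weighting by assumed turnout gives another.

support_a = {"urban": 0.62, "suburban": 0.48, "rural": 0.35}
share = {"urban": 0.30, "suburban": 0.40, "rural": 0.30}    # of registered voters
turnout = {"urban": 0.50, "suburban": 0.62, "rural": 0.68}  # the firm's model

# Among all registered voters, ignoring turnout.
raw = sum(share[g] * support_a[g] for g in share)

# Among likely voters: each subgroup contributes share * turnout.
voters = {g: share[g] * turnout[g] for g in share}
weighted = sum(voters[g] * support_a[g] for g in voters) / sum(voters.values())

print(f"raw support for A:     {raw:.1%}")       # 48.3%
print(f"likely-voter estimate: {weighted:.1%}")  # 47.1%
```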

In periods of rapid change, weighting models struggle or fail

A key problem with all of this is that weighting models generally depend on people, or companies (for B2B research), behaving the same way they have in the past. The models don’t handle rapid change well. If political leanings, or product needs and desires, are shifting rapidly, it's tough for the models to catch up. This is as dangerous in politics as it is in hot new technology markets, because it leads to bad "market" decisions.
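A toy version of the failure mode, using the same invented setup as the sketch above: a turnout model fit to the last cycle, applied to an electorate that has shifted since.

```python
# Toy failure mode (invented numbers): the weighting model assumes last
# cycle's turnout, but rural turnout has surged since then.

support_a = {"urban": 0.62, "suburban": 0.48, "rural": 0.35}
share = {"urban": 0.30, "suburban": 0.40, "rural": 0.30}

def estimate(turnout):
    voters = {g: share[g] * turnout[g] for g in share}
    return sum(voters[g] * support_a[g] for g in voters) / sum(voters.values())

last_cycle = {"urban": 0.60, "suburban": 0.60, "rural": 0.55}  # model assumption
this_cycle = {"urban": 0.58, "suburban": 0.60, "rural": 0.72}  # actual behavior

print(f"forecast with stale model: {estimate(last_cycle):.1%}")  # 48.6%
print(f"actual result:             {estimate(this_cycle):.1%}")  # 47.4%
```

A 1.2-point systematic miss, which in a close state is the whole ballgame. No amount of sample size fixes this: the error sits in the model's assumptions, not in sampling noise.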

It is rare to have blatant misses, like the swing-state polls in 2016, to illustrate the problem. In a business setting, it is often unclear whether a failed new product launch is due to bad research, bad execution, or some other marketing failure. Market research is usually just one input into a complex go-to-market process. Did Clinton lose because bad polling did not alert her to take corrective action? Did the Edsel fail because the research missed a rapid shift in consumer preference away from chrome? The answers are hard to pin down.

What has become clear over the past decade, however, is that the industry's grasp of what makes findings projectable has been eroding. Between clients who do not understand the issue and market research firms fighting to deliver cheaper, faster results than their competition, there is not enough focus on paying what it would cost to fix it.

Market research failures can lead to Big Data gains, but should not

Let’s save Big Data’s potential to supplant traditional market research for another day. Even if it can deliver on its promise, which is still unclear, Big Data is far better at understanding “what” populations are doing than “why” they are doing it. Truthfully, Big Data and primary market research need each other, but they often fight for the same budget.

Participate in a panel discussion on these issues with Brad at NEXT, May 9-10 in New York.