Business managers need facts. Facts can be defined as reliable bits of information on which decisions can be based. That's not too hard to understand, is it?

Apparently it is.

Anyone who has taken a course in statistics should understand that numbers are not the same as facts. Anyone who has experienced a misadventure in Big Data analysis or online surveys should know this. Are all of the key components represented in the calculation of a number? Are the data accurate? If not, what, if anything, does a number mean?

Problems I've seen:

(1) Managers don't think through what they need to know. If I want to set a sales target for a new product, I need to know a lot more than consumer purchase interest: key benefits, pricing, emerging technology, competitor actions, sales channels, communications channels and coverage, customer support requirements, and so on.

(2) People aren't necessarily truthful. Survey respondents will lie to obtain incentive payments. Salespeople will lie to avoid embarrassment or job loss, entering incorrect information into CRMs. And if you ask people a question they can't possibly answer, you may get a fictional response simply because they're trying to be cooperative.

  • Dr. Gallup argued that paying respondents to take surveys would encourage fraud, and he was right. One client told me years ago about having to discard over 50% of the completed interviews on a web survey of office technology as fraudulent. I've seen surveys of doctors in which, after excluding the obvious cheaters, another 25% of respondents were judged doubtful.

  • Panel companies don't permit third-party validation of survey respondents, and that's a huge problem.

  • Respondent fatigue is real. At the end of a long survey, respondents may start entering noise just to get it done. In one recent study, a faculty member indicated that his four-year private college had 2 faculty members and 1 student.

These aren't reasons to avoid doing research. They are reasons to rethink how you do research.

Multi-mode and multi-method research

I've become very partial to supplementing data mining -- and perhaps replacing web surveys -- with smaller-scale studies of buyers and decision-makers. These studies employ a mix of closed- and open-ended items, with the goal of achieving a high response rate (over 50%; in a recent study, 83% of eligible respondents completed a questionnaire). The methodology allows us to

  • Qualify respondents thoroughly;

  • Gather metrics with known standard errors (see the sketch after this list);

  • Obtain verbal explanations of metrics -- understanding the "why";

  • Identify issues that aren't on the client's radar screen; and

  • If the survey is executed well, reinforce the relationship between the client and the buyer.
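To make the "known standard errors" point concrete: with a genuine probability sample, the precision of a percentage is simple arithmetic. Here's a minimal sketch in Python -- the 62-of-100 result is hypothetical, and it assumes simple random sampling:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate, standard error, and ~95% CI for a proportion
    from a simple random sample (normal approximation)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, se, (p - z * se, p + z * se)

# Hypothetical result: 62 of 100 qualified buyers express purchase intent.
p, se, (low, high) = proportion_ci(62, 100)
print(f"p = {p:.2f}, SE = {se:.3f}, 95% CI = ({low:.2f}, {high:.2f})")
# -> p = 0.62, SE = 0.049, 95% CI = (0.52, 0.72)
```

The same arithmetic is meaningless for an opt-in panel, because there is no known sampling mechanism behind the numbers.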

Arguably, a small study of 100 completed interviews can provide a more solid basis for business decisions than a web survey of 1000 interviews. And it may cost less.
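A quick simulation shows why. All the numbers below are invented for illustration; the point is that sampling error shrinks as the sample grows, but nonresponse and fraud bias do not shrink at all:

```python
import random
import statistics

random.seed(0)
TRUE_RATE = 0.40  # hypothetical true purchase-intent rate in the population

def survey(n, bias=0.0):
    """Simulate n completes; `bias` stands in for nonresponse or fraud
    that skews who ends up in the sample."""
    return statistics.mean(random.random() < TRUE_RATE + bias for _ in range(n))

small_clean = survey(100)               # 100 completes, high response rate, no bias
large_biased = survey(1000, bias=0.10)  # 1000 completes, respondents skew positive

print(f"truth: {TRUE_RATE:.2f}  n=100 clean: {small_clean:.2f}  "
      f"n=1000 biased: {large_biased:.2f}")
```

The larger sample's confidence interval is narrower, but it is tightly centered on the wrong answer; the extra 900 interviews buy precision, not accuracy.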

In building a model or testing an idea, a number means nothing unless you have a plausible backstory explaining why that number is what it is. A number is not a fact. Significance testing (which isn't even defensible in mass population surveys) assesses the mathematical plausibility that a number reflects more than chance -- but it says nothing about substantive plausibility. Put another way: if there is a statistically significant relationship between a customer having green eyes and buying your product, would you want to build a sales program targeting people with green eyes?
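The green-eyes trap is easy to reproduce with purely fictional data: screen enough irrelevant variables against purchase, and some will clear p < .05 by luck alone. A rough sketch:

```python
import math
import random

random.seed(1)
N = 1000
bought = [random.random() < 0.30 for _ in range(N)]  # fictional purchase data

def z_for_trait(trait):
    """Two-proportion z-statistic: purchase rate with vs. without a trait."""
    with_t = [b for b, t in zip(bought, trait) if t]
    without = [b for b, t in zip(bought, trait) if not t]
    p1, p2 = sum(with_t) / len(with_t), sum(without) / len(without)
    pooled = sum(bought) / N
    se = math.sqrt(pooled * (1 - pooled) * (1 / len(with_t) + 1 / len(without)))
    return (p1 - p2) / se

# Screen 20 coin-flip traits (think "green eyes") against purchase.
hits = sum(abs(z_for_trait([random.random() < 0.5 for _ in range(N)])) > 1.96
           for _ in range(20))
print(f"{hits} of 20 meaningless traits come out 'significant' at p < .05")
```

With 20 independent tests you expect about one spurious hit; mine a big enough database, and a "green eyes" finding is all but guaranteed.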