Over the course of my 16 years at MedSurvey, a medical market research company, I have found that one type of project, while fundamental to market research, has persistently remained one of the most challenging to run efficiently: message recall studies.
As with many companies developing new products, pharmaceutical companies work closely with advertising agencies to create messages they hope will resonate with their target audience—in this case, healthcare professionals.
In the medical market research world, the process of testing these messages goes something like this: A pharmaceutical company sends 100 sales reps out to deliver one of 10 different messages about a specific medication to 8 to 10 doctors per day for a week. By the end of the week, they have detailed 5,000 doctors with one of the 10 “test messages.” A data collection company like MedSurvey is then contracted by an agency to follow up with some of these 5,000 doctors to learn which of the messages they recall receiving recently (say, in the past 72 hours). If some of the 10 messages are not generating high recall, the pharmaceutical company may choose to go back to the drawing board with their ad agency.
Not surprisingly, the quality of the data we collect during these studies can have an enormous impact, sometimes with the fate of multi-million-dollar advertising campaigns hanging in the balance. Yet, despite the high stakes and the best intentions of everyone involved, these studies are often among the most challenging to successfully complete. In fact, it is my experience that when doctors are surveyed about message recall, only 30% even qualify for the study in the first place. What is going so wildly wrong?
The short answer, I’ve learned, is usually “time.” The following scenario is not uncommon: Sales Rep X visits Doctor Y on Monday, then logs the week’s activity on Friday. Our client, in turn, relays the “new” list of detailed doctors to us on the following Monday afternoon. On Tuesday, we promptly process the list and prepare to recruit doctors for the study. On Wednesday, we ask Doctor Y a screener question: “Did you speak to representatives from the following companies in the past 72 hours?” Of course, Doctor Y did speak with a sales rep from the company in question—and might even remember the test message—but is nevertheless disqualified because over a week has passed since the initial interaction. In this case, no one—not the sales rep, the doctor, the client, the pharmaceutical company, or the data collection service—is to blame. The problem is not any one part of the system, but rather the system itself. So, what is the long answer to what is going wrong with message recall studies? A lack of coordination and communication across the board.
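To make the arithmetic in that scenario concrete, here is a minimal sketch of the timeline. The specific dates are illustrative, not from any real study; the point is simply how quickly the 72-hour screener window is blown:

```python
from datetime import datetime

# Illustrative timeline from the scenario above (dates are hypothetical).
detail_visit = datetime(2024, 1, 1, 10, 0)     # Monday: Sales Rep X visits Doctor Y
survey_attempt = datetime(2024, 1, 10, 10, 0)  # following Wednesday: screener is asked

elapsed_hours = (survey_attempt - detail_visit).total_seconds() / 3600
recall_window_hours = 72  # screener requires rep contact within the past 72 hours

qualifies = elapsed_hours <= recall_window_hours
print(f"Elapsed: {elapsed_hours:.0f} h -> qualifies: {qualifies}")  # 216 h -> False
```

Nine days have passed by the time the screener is asked, so Doctor Y is disqualified three times over, even though the detail itself went exactly as planned.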
Unfortunately, there is a tendency for the various moving parts in message recall studies (e.g., sales reps or data collection services) to operate as isolated silos. But the reality is that these “silos” are actually critical parts of an interconnected system. And as with any system, when there is a breakdown in communication between any two parts, this can lead to the collapse of the system as a whole. The trick, then, is to develop processes for open communication and coordination among all players in message recall studies. I’d like to offer some thoughts about best practices, for all parties involved.
The best overarching advice I can offer to our clients is this: when designing screening criteria for a message recall study, the recall window (the time since a doctor was last detailed by a rep) must be aligned with the time it takes for the list of detailed doctors to be delivered to and processed by the data collection company. Achieving that alignment may require a good deal of cross-communication, both with pharmaceutical companies to understand the logging practices of sales reps and with data collection companies to understand their own time parameters for the project. The following are several ways that sales rep interaction time and survey initiation time can be brought into closer alignment.
First, I’ve found that it is nearly always preferable for lists of detailed doctors (or “data dumps”) to be provided to data collection companies on a daily rather than a weekly basis. When there is a weekly lag in the relay of information, it is far less likely that doctors will qualify for the study. However, it is also important to communicate with pharmaceutical companies to understand how their sales reps are logging data. If sales reps are not required to log their activity daily, even if you provide “updated” lists daily, these lists may still be “out of date.”
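The effect of logging cadence on list freshness is easy to sketch as back-of-the-envelope arithmetic. The cadence and hand-off figures below are assumptions for illustration (reps log at the end of their cadence period, relay and processing each add roughly a day), not measurements from any actual study:

```python
HOURS_PER_DAY = 24
RECALL_WINDOW = 72  # hours, per the screener

def worst_case_staleness(log_cadence_days, relay_days=1, processing_days=1):
    """Hours between the earliest detail in a batch and the first survey call,
    assuming reps log at the end of each cadence period. Illustrative only."""
    return (log_cadence_days + relay_days + processing_days) * HOURS_PER_DAY

weekly = worst_case_staleness(log_cadence_days=5)  # rep logs Friday for Monday's visit
daily = worst_case_staleness(log_cadence_days=1)

print(f"weekly logging: {weekly} h stale vs a {RECALL_WINDOW} h recall window")
print(f"daily logging:  {daily} h stale vs a {RECALL_WINDOW} h recall window")
```

Under these assumptions, weekly logging leaves the oldest details well over twice as stale as the recall window allows, while daily logging just barely keeps them inside it, which is why shaving hours elsewhere in the relay still matters.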
Second, even when lists are relayed “instantaneously” from pharmaceutical company to client to data collection company, I still like to be aware of “the email volley.” If Pharma Pam sends a fresh list to Client Carol at 9 a.m. Monday, but Carol is in a meeting until noon and emails the list to MedSurvey Mike early in the afternoon, Monday morning’s list may miss the cutoff for processing and be delayed until Tuesday morning. A couple hours here and there can quickly turn into 24 hours (and possibly hundreds of respondents) lost. One possible strategy that I would love to see implemented would be to create a process by which the list generated by the pharmaceutical company uploads automatically to a vendor site that the data collection company can access immediately, allowing data to be transferred in real time.
Third, and perhaps the most effective practice to consider implementing immediately—for both clients and data collection companies—is to share the master target list prior to daily or weekly data dumps of detailed doctors. This strategy can save a great deal of time by allowing vendors to process the targeted sample even before the targeted audiences have been detailed. Once we receive a data dump, we merely have to match the detailed doctor with the name on the target list and we are already prepared to initiate the survey. (I would add one word of caution. There are times when names on the master target list and the data dump lists do not match, indicating that a glitch has occurred somewhere. We find the best practice here is to alert our client immediately but to avoid contacting doctors who are not listed on the original master list.)
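This matching step is simple enough to sketch. The helper and names below are hypothetical (a real pipeline would key on a stable identifier such as an NPI number rather than raw name strings), but the sketch shows the split between doctors we may contact and mismatches to flag back to the client:

```python
def triage_data_dump(data_dump, master_list):
    """Split a data dump into doctors who appear on the master target list
    (safe to contact) and mismatches to report to the client immediately.
    Hypothetical helper; real matching would normalize names or use an ID."""
    master = set(master_list)
    contactable = [doc for doc in data_dump if doc in master]
    mismatches = [doc for doc in data_dump if doc not in master]
    return contactable, mismatches

# Illustrative names only.
dump = ["Dr. Lee", "Dr. Patel", "Dr. Unknown"]
master = ["Dr. Lee", "Dr. Patel", "Dr. Gomez"]
ok, flagged = triage_data_dump(dump, master)
print(ok)       # contactable doctors
print(flagged)  # alert the client; do not contact
```

The key design choice, per the caution above, is that a mismatch triggers an alert rather than a contact: doctors absent from the original master list are never surveyed.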
I’ve noticed several other places where vigilant monitoring and communication can be critical to the success of message recall studies. One of the most common issues, from the data collection side, is considerable overlap between weekly lists of doctors, since many sales reps will see the same doctors each week. It is important to keep an eye out for this and report back to clients if not enough fresh sample is being provided. Both data collection companies and our clients would also benefit from understanding how lists are being generated by sales reps and being on the lookout for possible problems with quality that might result from, for example, manual entry or inconsistent logging procedures by sales reps.
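Monitoring for overlap is another step that can be sketched briefly. The function name and the simple identifier-based comparison are my own illustrative assumptions; the idea is just to quantify how much of each new list is genuinely fresh sample:

```python
def fresh_sample_report(previous_lists, new_list):
    """Compare a new weekly list against all previously received lists and
    report how much of it is fresh sample. Hypothetical helper; doctors are
    represented here by simple identifier strings."""
    seen = set().union(*previous_lists) if previous_lists else set()
    new = set(new_list)
    fresh = new - seen
    overlap = new & seen
    return {
        "fresh": sorted(fresh),
        "overlap": sorted(overlap),
        "fresh_share": len(fresh) / len(new) if new else 0.0,
    }

# Illustrative identifiers only.
report = fresh_sample_report([["doc_a", "doc_b"], ["doc_b", "doc_c"]],
                             ["doc_b", "doc_c", "doc_d"])
print(report)
```

If the fresh share drifts too low week over week, that is the signal to go back to the client about sample supply before quotas start slipping.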
Because there are so many moving parts in the process of testing marketing messages, there is always the potential for a problem or change in one area to affect another area, without anyone being the wiser until it is too late. I will always remember a message recall study that we ran for years with no problems hitting quota, until suddenly we could no longer deliver enough completes. It wasn’t until much later that we learned the reason: the pharmaceutical company had reduced the size of their sales rep force by close to a third and had forgotten to mention it. I strongly believe that throughout market research, maintaining high levels of transparency and clear communication is critically important. In the case of message recall studies in particular, when time constraints are tight and there is so little room for error, actively maintaining open channels of communication—and streamlining processes for communication among all parts of the system—is essential for high quality results.