A group of researchers and research users took a hard look at campaign survey and opinion research at the Microsoft Innovation and Policy Center this morning. Sunshine Hillygus, professor at Duke University, Margie Omero, managing director of Purple Insights, Neil Newhouse, partner and cofounder of Public Opinion Strategies, and Amy Walter, national editor for the Cook Political Report, came together to discuss the future of polling and advancements in political campaign technology.
A lot of the discussion centered on the use and abuse of research. For low-turnout elections, said Newhouse, pollsters “have a very difficult time getting that right.” He noted that while seemingly every college and university has its own poll, “not all polls are created equal.” Hillygus lamented the proliferation of unvetted entrepreneurial pollsters being treated the same as established public opinion polling outfits. Omero highlighted the skew created by certain methods like interactive voice response (IVR) polling, asking, “how do you know you're reaching the intended respondent or a qualified voter?” She also pointed out that IVR is by nature autodialing, so it cannot be used to call cell phones, thanks to the TCPA. That said, if you are trying to reach the over-55 demographic, IVR can be quite useful, because they’re the only ones still answering landlines. Omero felt that IVR is “on its way out, at least nationally.”
Back to election prediction, Newhouse urged the audience to “stop looking at national surveys to figure out who is leading” in the GOP primary because “it is not a national election.” He warned that only 25-30 percent of the respondents in most national surveys of the GOP primary are actually GOP primary voters.
Walter: What about the rise of Donald Trump and disbelief that his current polling numbers indicate his future performance as a GOP candidate for President? Is our data suspect?
Hillygus noted that "polls serve different purposes... Horse races might not be what these polls are good for." She suspected that “we are expecting too much of polls,” particularly in low-turnout races where we “don't know who is going to turn out to vote."
Newhouse described a series of experiments his firm ran a while back on its own research. After asking respondents the normal questions, the firm drilled down on how likely the respondents felt they were to vote, their interest in the campaign, and their intensity. After the election, it checked how many of those respondents had actually voted. The data showed no correlation between reported likelihood to vote, interest, or intensity and who actually voted! “The most important variable was whether they had voted before.”
Omero echoed these concerns that polling is sometimes “an instrument that measures pounds being used to measure ounces,” and that the people relying on polling easily forget that “the margins... do matter.” She made a pitch for more qualitative research in campaigns and campaign news coverage. It is "essential to really hear voters in their own words," which you “just can’t get in polling. You must have both."
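To give Omero's point about margins some concrete footing: the sampling margin of error journalists often gloss over can be sketched with the standard formula for a proportion from a simple random sample. (This is a textbook approximation, not anything the panelists presented; the function name and poll sizes below are illustrative.)

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a proportion p estimated from a
    simple random sample of size n, at ~95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50/50 split in a typical 800-respondent poll is uncertain by
# roughly +/- 3.5 points -- so a 48-to-52 "lead" is within the noise.
print(round(margin_of_error(0.5, 800) * 100, 1))
```

Real polls rarely meet the simple-random-sample assumption (weighting and nonresponse widen the true uncertainty), which only strengthens the panel's point that small horse-race gaps are over-interpreted.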
Walter: Why don’t you just do all of your research online?
Discussing representativeness concerns with online research, Hillygus struck a familiar note, saying it is “impossible to get accurate knowledge from these online panels.” Omero noted that there are ways to make online research more accurate and representative using voter registration data, but that “you may not end up saving much money” compared to just doing a telephone survey.
However, Hillygus and Omero highlighted the enormous costs and enormous bias induced by not being able to autodial cell phones, thanks to the Telephone Consumer Protection Act (TCPA). Omero specifically cited Gallup’s recent $12 million class action TCPA lawsuit settlement as well as the new FCC regulations that made the TCPA even harder to manage.
Walter: If Democrats or Republicans are getting something wrong in their polling assumptions, what is it?
Omero smartly said that if she knew what she was getting wrong, she wouldn’t be getting it wrong. Still, some areas of concern include age questions, which “are incredibly tricky methodologically,” as well as questions of “ethnic and racial makeup.” The most difficult thing to measure, she said, is bias. If you ask respondents directly about bias, they will usually answer the way they think you want them to, not the way they actually think.
Newhouse felt that the Democrats’ biggest challenge (both in polling and in campaigning) was whether or not Hillary Clinton will get the same kind of black turnout on Election Day as Barack Obama did. On the GOP side, the biggest challenge might be enthusiasm, he said. Republicans in research Newhouse is conducting “wish that Election Day was next week” – their “enthusiasm is off the charts." When GOP respondents are being asked to name one positive thing, they answer “the election next year.” The challenge becomes what happens (and predicting what happens) when that enthusiasm advantage over Democrats fades in the last couple of months of the campaign season.
Walter: Do any of you researchers answer your own phone?
Hillygus relayed a joke that the only people left to answer telephone surveys are old people who want to chat and pollsters who feel guilty about their work.
Walter: Election polls got Kentucky and Ohio wrong last week. Why?
Hillygus responded that the most interesting point was that all the public polls “got it wrong about equally.” There is always a complicated effort to figure out who will turn out on Election Day, and thus whose opinions need to be taken most seriously. However, Hillygus suspected that “there is herding going on in some races.” Some firms won’t release outlier polls in high-profile horse races because, as Omero put it, it is “easier to all go down together than to stand out” and risk being wrong on your own.
Newhouse suggested that the GOP enthusiasm advantage he’d already highlighted was the key. “Those Republican voters couldn't wait to vote” and pollsters “may have underestimated their desire to vote.” In Ohio, the public opinion polls “started with a general election screen” instead of checking who had voted in the past. Pollsters also assumed that young people would turn out en masse to pass a ballot measure legalizing marijuana, but their surveys never read the measure’s actual wording to respondents. So while many respondents were favorable to legalizing marijuana, they reacted badly when they got to the voting booth, saw the ballot measure mention the word “monopoly,” and voted it down.
Hillygus lamented that pollsters did not run research in the last week of the campaign: they mostly got the Democratic gubernatorial candidate Conway’s final numbers correct, but not the Republican candidate Bevin’s. The final poll showed a tie, but voters broke late for Bevin on an enthusiasm surge. Newhouse agreed: “poll late for god's sake, because things change!”
How are pollsters adapting to the TCPA and the rise of call blocking technology?
Hillygus expressed exasperation that politicians can support regulations blindly, not realizing they are killing their own phone work. Walter responded that very few Members of Congress invest much in polling these days because they are in safe seats.
Politicians and consumers are both also reacting to a rise in what they presume to be push polls, she said, not understanding that most of those calls are actually just message testing research. Much of the problem comes back to journalists, who just don’t know what polling really is or how it works. Walter compared journalists to excitable teens obsessed with sex who have no idea yet what it actually is.
Are prediction markets a potential replacement for opinion polling?
This far out from Election Day, prediction markets might be “far more accurate” than opinion polls, said Hillygus. Of course, polls ask what you would do if the election were held today, while prediction markets are asking more complex questions.
Newhouse commented that prediction markets are just reflections of "conventional wisdom," dubbing them as "Morning Joe on steroids."
Omero felt that the question "goes back to what are we using polls for.” A prediction market “still doesn't tell us what people think."
Is a lack of reliability in polling just inevitable?
The panelists disagreed with the premise of the question, but did note that election polling is more complicated than it used to be. Omero pointed out that Gallup is getting out of primary horse race polling for good reasons. Campaigns are getting more sophisticated, Newhouse observed, such as the Obama turnout surge that most pollsters failed to capture. Campaigns’ improved targeting and communication with voters is “making our job harder because they are getting better” at their job.
Omero made a final plea for more sophistication from the news media. “Polling provides more precision,” but journalists should be looking to supplement their “breathless reports of poll results” with some qualitative interviews with average voters.
And yet, concluded Hillygus, polling is still one of the only ways to capture the opinions of people who don't ordinarily speak out on social media or throw money at political candidates. Its biggest value is in finding out what people really care about.