Insights Association Urges Reform of Federal Government Research Policy

Noting that the U.S. government’s current approach “to survey, opinion and market research, unchanged since 2006, wastes taxpayer resources and hurts the private sector,” the leading nonprofit trade association for the insights industry called for the urgent modernization of federal research policy.

In comments filed with the Office of Management and Budget (OMB) on May 12, 2025, the Insights Association urged “OMB to finally modernize federal statistical policy as it relates to survey, opinion and market research by”:

  • “streamlining the research approval process”;
  • “accepting modern survey methods and technology, particularly online panels”;
  • “stopping the demands for arbitrarily-high response rates that waste resources and harass research subjects”;
  • “allowing flexibility in offering and valuing participant incentives, instead of discouraging their use and arbitrarily capping their value”;
  • “increasing competition in the marketplace for the provision of research services to the federal government, which will provide more opportunities for small businesses and spark innovation”; and
  • “recognizing ISO standards as a mark of quality in research, which will help federal agencies more easily identify trustworthy research partners.”

IA warned that “red tape is choking innovation in federal research,” pointing out that the current federal approach “is onerous, archaic and antiquated. The policy has not been updated in nearly twenty years (since 2006), while the real world in which research must operate, not to mention how rigorous research is conducted in the private sector, has changed dramatically. The existing policy grew out of the 1995 Paperwork Reduction Act (PRA), and expanded not just to strangle federal research efforts by trapping them in a late-1990s mindset, but to create unforeseen harm to the private sector in research and in competition for federal contracts. Because OMB has locked down research, there’s been minimal innovation.”

Commenting that the federal government needs to “update its research guidance and approval process for modern times,” IA warned that “the collection of requirements OMB currently enforces actually makes the research less effective, both in terms of operational execution and methodological excellence.”

The Insights Association concluded that current policy “wastes immense taxpayer resources (time and money), hinders federal decision-making, burdens respondents, hurts private-sector research efforts, and severely limits innovation and competition.”

Read IA’s full comments below.

The Insights Association’s more than 9,600 members are the world’s leading producers of intelligence, analytics and insights defining the needs, attitudes and behaviors of consumers, organizations and their employees, students and citizens. With that essential understanding, leaders can make intelligent decisions and deploy strategies and tactics to build trust, inspire innovation, realize the full potential of individuals and teams, and successfully create and promote products, services and ideas.

 


RE: Request for Information (RFI): Deregulation [FR Doc. 2025-06316]

OMB has sought through this RFI “proposals to rescind or replace regulations that stifle American businesses and American ingenuity” and details on “regulations that are unnecessary, unlawful, unduly burdensome, or unsound.”

The Insights Association, the leading nonprofit association for the market research and analytics industry, files these comments in response, urging the modernization of OMB federal statistical policy: specifically, streamlining the research approval process, accepting modern survey methods and technology, stopping the demands for arbitrarily-high response rates, allowing flexibility in offering and valuing participant incentives, increasing competition in the marketplace, and recognizing ISO standards as a mark of quality in research.

The Insights Association’s more than 9,600 members are the world’s leading producers of intelligence, analytics and insights defining the needs, attitudes and behaviors of consumers, organizations and their employees, students and citizens. With that essential understanding, leaders can make intelligent decisions and deploy strategies and tactics to build trust, inspire innovation, realize the full potential of individuals and teams, and successfully create and promote products, services and ideas. We are a more than $77 billion industry[1] working with some of the biggest and brightest companies in the world.

Executive Summary

The current federal approach to survey, opinion and market research, unchanged since 2006, wastes taxpayer resources and hurts the private sector.

The Insights Association urges OMB to finally modernize federal statistical policy as it relates to survey, opinion and market research by:

  • streamlining the research approval process;
  • accepting modern survey methods and technology, particularly online panels;
  • stopping the demands for arbitrarily-high response rates that waste resources and harass research subjects;
  • allowing flexibility in offering and valuing participant incentives, instead of discouraging their use and arbitrarily capping their value;
  • increasing competition in the marketplace for the provision of research services to the federal government, which will provide more opportunities for small businesses and spark innovation; and
  • recognizing ISO standards as a mark of quality in research, which will help federal agencies more easily identify trustworthy research partners.

Federal statistical policy’s approach to survey, opinion and market research is onerous, archaic and antiquated. The policy has not been updated in nearly twenty years (since 2006), while the real world in which research must operate, not to mention how rigorous research is conducted in the private sector, has changed dramatically. The existing policy grew out of the 1995 Paperwork Reduction Act (PRA), and expanded not just to strangle federal research efforts by trapping them in a late-1990s mindset, but to create unforeseen harm to the private sector in research and in competition for federal contracts. Obsolete red tape is choking innovation in federal research.

The Insights Association has raised these concerns with the Office of the Chief Statistician in the past; its general response has been to fault federal agencies and contractors for not understanding how to conduct research or how to navigate OMB’s guidance.

Current federal statistical policy wastes immense taxpayer resources (time and money), hinders federal decision-making, burdens respondents, hurts private-sector research efforts, and severely limits innovation and competition.

OMB must update its research guidance and approval process for modern times.

Statutory backing

The Paperwork Reduction Act statute (44 U.S.C. §§ 3501-3531) references the need to “minimize the paperwork burden for individuals, small businesses, educational and nonprofit institutions, Federal contractors, State, local and tribal governments, and other persons resulting from the collection of information by or for the Federal Government.”

This vague statutory mention has been blown up into a huge regulatory burden (5 CFR § 1320) on anyone trying to conduct federal survey, opinion and marketing research. These regulations make it exceptionally challenging, and time-consuming, to get even a short one-question survey, or a study of only a handful of people, approved by OMB, which acts as the super-regulator of federal data collection.

OMB’s interpretations may make sense when it comes to necessary federal forms, but not for research.

A stultifying approval process

Getting a simple survey approved by OMB often takes a year or more. The original rationale for a given data collection is often superseded by events before a survey research project is even fielded, let alone its results analyzed and reported. We’ve heard of a general lack of service and communication with agencies and contractors in the approval process. The cost in time and resources is steep; the cost of the ignorance engendered by the inability to conduct research in furtherance of decision-making, or by significant delays, is even steeper. The hindrances and delays don’t just impact seemingly-discretionary research efforts, but even regular statutorily-required research.

All requests to run a research study require entry into the ROCIS.gov system; we are told that clearance takes at least nine months, but usually takes much longer. Each federal agency has its own process for moving forms from the agency to and through the queue. OMB evaluators in this process are often neither statisticians nor survey researchers, nor do they have experience in the subject matter of the agencies whose data collections they must approve. Competent statisticians and researchers are forced to spend far more time writing “clearances” for studies than doing any real research work, which is a huge waste of resources and drives away federal talent.

OMB examiners presumably have the best of intentions, but we are told they sometimes go outside the bounds of their intended role (such as rejecting certain questions, even when those questions are demanded by stakeholders and dictated by statute) because they lack any systemic accountability.

Any federal effort to request or collect data where the same questions are asked of 10 or more respondents generally must obtain OMB clearance. This means that relatively small studies (e.g., a survey of a dozen state program managers) must be reviewed and approved by OMB, and that very large studies and very small studies undergo the same review process.

Agencies are required to estimate the burden hours associated with a data collection effort, and those estimates focus on the aggregate burden hours for the whole effort, based on the number of respondents and the time the data collection takes each respondent. This means that a one-minute survey of 1,200 research subjects and a 60-minute survey of 20 research subjects are considered to create an equal burden (1,200 minutes, or 20 hours, in each case), but from a practical researcher standpoint, and from the perspective of each research subject, an hour-long survey is a much bigger burden than a one-minute survey.
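
To make the arithmetic concrete, here is a minimal sketch (our illustration of the aggregate calculation described above, not OMB’s actual formula or any official code):

```python
# Aggregate burden as described above: respondents x minutes per response.
# This metric treats 1,200 one-minute interviews and 20 one-hour interviews
# as identical burdens, even though they feel very different to each subject.

def aggregate_burden_hours(respondents: int, minutes_per_response: float) -> float:
    """Total burden in hours = respondents * minutes per response / 60."""
    return respondents * minutes_per_response / 60

print(aggregate_burden_hours(1_200, 1))  # 20.0 hours: 1,200 people, one minute each
print(aggregate_burden_hours(20, 60))    # 20.0 hours: 20 people, one hour each
```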

Ongoing data collection efforts need to be renewed every three years, and the steps and process are basically the same for an existing piece of research up for renewal as for a new study.

OMB should reform its approval process for research, including considering:

  1. raising the trigger threshold for OMB review from 10 people contacted, at least for research purposes, to 100 or 1,000; and
  2. waiving review for regular recurring studies with no material changes.

Online panels and other methods, modes and technologies need to be recognized

When the federal statistical policy was published in 2006, Internet coverage was limited. Today, the issue is how many households or individuals have access to high-speed broadband, but pretty much everyone in the country has some form of Internet access, either in a home or institution, or on their own devices. This policy was set in stone before the advent of the iPhone!

Now, almost everyone in the country has a smartphone and other Internet-connected devices, and many individuals even have multiple phones, but the federal government still operates under regulations and guidance that treat the Internet as something newfangled and untrustworthy.[2]

OMB regulations specify that “Unless the agency is able to demonstrate, in its submission for OMB clearance, that such characteristic of the collection of information is necessary to satisfy statutory requirements or other substantial need, OMB will not approve a collection of information ... In connection with a statistical survey, that is not designed to produce valid and reliable results that can be generalized to the universe of study.”[3] That leads OMB guidance to be biased against panels and other research methods it considers insufficiently probability-based.[4]

OMB’s Q&A goes further on online panels, with question 29 stating that “use of these panels for Federal surveys that are seeking to generalize to a target population can be problematic.” Question 43 addresses the “advantages and disadvantages of using Internet surveys,” recognizing their low cost, speed and flexibility, but then insists there is no sampling frame, that researchers do not know the identity of their research subjects, and that “Respondents may have concerns about confidentiality and, therefore, be reluctant to provide some information over the Internet.”[5] Question 72 notes that, “In their ICRs, agencies proposing to use multipurpose survey panels should provide a justification for their use, provide expected response rates in detail, and devote careful attention to potential nonresponse bias as warranted… Although these panels have been used as a convenience sample and/or for pilot studies, there is some recent research that examines the quality of estimates from these panels. OMB will continue to monitor this research area and evaluate results from agency studies on nonresponse bias.”[6]

Despite that promise to continue monitoring and evaluating, OMB’s regulations, guidance and Q&A have remained mostly untouched since 2006.

Online panels were growing fast in 2006, but not yet widely-trusted, and the telephone was still a prevalent mode for research. Now, most research in the U.S. is conducted online, because phone studies are usually cost-prohibitive. Not only are online panels common and respected across the industry, there are also probability-based online panels, a specialized kind of panel that can reach the highest levels of statistical reliability for a fraction of the cost of traditional modes.

As explained by Gallup,[7] the prevalence of "lower response rates” in research via telephone means that “researchers are spending significantly more resources to contact and interview individuals by phone. Contact rates -- the percentage of households in the sample who answer the phone -- are declining. While there are many reasons for this decline, some suggest it could reflect changes in how people are using their phones. Phones are being used more for text messaging and browsing the internet and less for answering or making a traditional phone call."

Per a 2023 Pew Research Center study of national public opinion pollsters, “Telephone polling with live interviewers dominated the industry in the early 2000s, even as pollsters scrambled to adapt to the rapid growth of cellphone-only households. Since 2012, however, its use has fallen amid declining response rates and increasing costs.”[8] The study found that "10% of the pollsters examined... used live phone as their only method of national public polling, but 32% used live phone alone or in combination with other methods. In some cases, the other methods were used alongside live phone in a single poll, and in other cases the pollster did one poll using live phone and other polls with a different method."

Even in the public polling space, online panels are taking over. That 2023 Pew study found that nearly half of national public opinion pollsters use online panels alone or in combination with other methods. Extending the population reach of research by blending modes is considered a best practice, making the research more representative of different kinds of people with different preferences for survey-taking. 

Online research panels are popular because of their speed, scalability, cost-efficiency, and self-administered format. Panelists are generally profiled in advance, allowing clients to target specific audiences and sub-groups. Research subjects can be swiftly recruited and data collected from them for reasonable prices, with minimal respondent burden. While not as ideal as truly randomized samples, online panels can still provide statistically-representative samples of a given population or audience, thanks to extensive research and verification of research subjects in the construction of the panels and in preparation for specific studies. Longitudinal studies are also particularly easy to conduct via online panels. Allowing people to take a survey pseudonymously can also reduce the response biases associated with interviewer-led methods.

Probability-based online panels

Of course, the highest standard of statistical reliability comes from probability-based[9] online panels, which are different from regular panels. Probability-based online panels are a specialized type of research panel whose research subjects are recruited using random sampling methods (e.g., address-based sampling (ABS)) that give every individual in the target population a known and non-zero chance of selection, which supports statistical inference and generalizability. Probability-based online panels are a service provided by multiple private companies and nonprofit organizations and are already used by various federal agencies. Such panels are designed to reflect the demographics and characteristics of the broader population, or of a specific target population.

Because of random sampling, these panels often yield more reliable and generalizable results than regular online opt-in panels, and are less likely to overrepresent people with strong opinions or more frequent Internet use. Knowing the probabilities involved also allows for better post-stratification and more accurate statistical weighting and adjustments. Once recruited, research subjects are invited to join long-term panels for future surveys, typically conducted online (although initial contact may be via mail, phone, or in person). Such high-level research costs more, but nowhere near as much as the traditional methods recommended by OMB’s outdated guidance.
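
As a brief illustration of why known selection probabilities matter, here is a minimal sketch (hypothetical numbers, not any panel provider’s production code): each recruit’s base weight is the inverse of their selection probability, and those weights can then be post-stratified so that weighted totals match known population benchmarks.

```python
from collections import defaultdict

# Five hypothetical panelists: (stratum, probability of selection)
panelists = [("18-34", 0.002), ("18-34", 0.002),
             ("35+", 0.004), ("35+", 0.004), ("35+", 0.004)]

# Known benchmark population counts per stratum (hypothetical)
population = {"18-34": 1_500, "35+": 1_500}

# Design weights: w_i = 1 / p_i, possible only when p_i is known and non-zero
base = [(stratum, 1 / p) for stratum, p in panelists]

stratum_totals = defaultdict(float)
for stratum, w in base:
    stratum_totals[stratum] += w

# Post-stratification: scale weights within each stratum to the benchmarks
adjusted = [(stratum, w * population[stratum] / stratum_totals[stratum])
            for stratum, w in base]
print(adjusted)  # 18-34 weights become 750.0 each; 35+ weights become 500.0 each
```

Opt-in panels, by contrast, have no known selection probabilities, which is why they must rely on modeled or quota-based adjustments instead.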

Question 34 in OMB's Q&A seems to appreciate the advantages of probability-based online panels, noting that, "Some Internet panels have been recruited from a probability-based sampling frame such as a Random Digit Dialing (RDD) sample of telephone numbers, and panel members are given Internet access as part of their participation. In this case, the Internet simply serves as the mode of data collection, not the sampling frame... The issues of coverage and quality of the frame apply to whatever frame was used (e.g., RDD), not the Internet." And yet, the same question derides them, claiming "there are also concerns about potential self-selection of respondents and low response rates in these panels" and that probability-based online panels "work well when samples of persons interested in taking part in surveys are needed, and the objective is not to generalize to a specific target population (e.g., pilot studies). Agencies planning to use a pre-existing panel or Internet-based sampling frame need to justify its appropriateness for the intended use of the data in the ICR."[10]

Other methods and technologies

Of course, online panels are just one part of an ever-growing ecosystem of insights technology and methods, including social media listening, market research online communities (MROCs), mobile apps, and behavioral tracking. Meanwhile, the rapid rise of artificial intelligence (AI) means great advances are possible in the back-end design and processing of research, not to mention possibilities for reducing the burden on research subjects by incorporating synthetic data. OMB’s Q&A on “the different modes of survey data collection,” written in 2006, is clearly insufficient for the modern research world when it simply notes that the “most commonly used data collection modes are in-person (or face-to-face), telephone, mail, and web (including e-mail).”[11]

OMB should:

  1. revise its research guidance to include online panel research as a recommendable option, with probability-based online panels in particular as an option for research requiring especially high levels of statistical validity and reliability, such as official statistics; and
  2. set a regular schedule for revisiting its guidance (perhaps every five years at minimum) to accommodate changes in technology, research and society.

Unrealistic response rate demands are hurting research subjects and private sector research

Standard 1.3 in OMB’s research guidance requires federal agencies to “design the survey to achieve the highest practical rates of response.”[12]

In practice, OMB examiners reviewing these requests reject most surveys that are not expected to achieve a 70-80 percent response rate. Response rates have been tanking for years, for both private-sector and federal surveys, so continuing this rate requirement makes no sense. A lower response rate does not necessarily mean that results are invalid or inaccurate. Moreover, significant (and potentially harassing) levels of nonresponse follow-up (NRFU) are required to even begin to approach such high response rates.

OMB, in considering “typical response rates for Federal Government statistical surveys” for question 65 in its Q&A, admitted that “The Paperwork Reduction Act does not specify a minimum response rate.” OMB found, in a review of “199 general statistical survey information collections that were approved in 1998,” that the “mean response rate was 82.2 percent (unweighted) and the median response rate was 84.7 percent” and “about two-thirds of surveys achieved response rates above 80 percent and eighty percent of surveys achieved response rates above 70 percent.” The agency then admitted that “recent, but less systematic observations suggest that response rates have been decreasing in many ongoing surveys in the past few years. Some evidence suggests these declines have occurred more rapidly for some data collection modes (such as RDD telephone surveys) and are more pronounced for non-government surveys than Federal Government surveys. Generally, these declines have occurred despite increasing efforts and resources that have been expended to maintain or bolster response rates.”[13]

Back in the real world, response rates have declined precipitously since 2006, whether federal or otherwise. For example, a 2013 study from the National Academies found that "response rates have been steadily declining for at least the past two decades. A similar decline in survey response can be observed in all wealthy countries."[14]

Response rates for telephone research (the preferred manner of research in OMB’s guidance) are particularly low. The Pew Research Center, a leading research organization, disclosed their average response rate in 1997 as 36 percent, versus 6 percent in 2018.[15] The decline in response rates is pretty much universal in random digit dialing (RDD) surveys. A 2017 American Association for Public Opinion Research (AAPOR) Task Force Report found that, "Landline rates declined from an average of 15.7 percent in 2008 to an average of 9.3 percent in 2015 (a relative decline of 41 percent), and cell phone response rates declined at the same rate, from an average of 11.7 percent to an average of 7.0 percent (a relative decline of 40 percent)."[16]

OMB guidance sets the expectation that most surveys should strive for maximum response, while most researchers would do backflips if they could achieve better than a 10-15 percent response rate for a study conducted via phone.
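
The cost implication is easy to illustrate with rough, hypothetical numbers (the 36 and 6 percent figures are Pew’s published rates cited above; the others are round numbers): the sample that must be contacted to yield a fixed number of completed interviews grows inversely with the response rate.

```python
# Contacts needed = target completes / response rate (hypothetical target)
target_completes = 1_000
for rate in (0.80, 0.36, 0.10, 0.06):
    print(f"{rate:.0%} response rate -> contact {target_completes / rate:,.0f} people")
# 80% -> 1,250; 36% (Pew, 1997) -> 2,778; 10% -> 10,000; 6% (Pew, 2018) -> 16,667
```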

Question 66 in OMB's Q&A addresses "acceptable response rates for different kinds of survey collections," noting that "ICRs for surveys with expected response rates lower than 80 percent need complete descriptions of how the expected response rate was determined, a detailed description of steps that will be taken to maximize the response rate... and a description of plans to evaluate nonresponse bias," as well as "a clear justification as to why the expected response rate is adequate based on the purpose of the study and the type of information that will be collected (whether influential or not)." Lower response rates may still be justifiable only if federal agencies "are seeking to gather information that is planned for internal use only, is exploratory, or is not intended to be generalized to a target population," such as "customer satisfaction and web site user surveys and other qualitative or anecdotal collections."[17]

Guideline 2.3.2 in OMB’s guidance says to “Encourage respondents to participate to maximize response rates and improve data quality,” including by planning “an adequate number of contact attempts” while downplaying the use of participant incentives.[18]

Demanding such a high response rate requires a huge investment of time and money, wasting taxpayer resources. Further, the many repeated re-contacts with participants to try to achieve those high response rates, sometimes multiple times a day for days on end, would never be tolerated in a private-sector research study, which must abide by professional codes and standards that prioritize participant welfare. For example, Sec. 1.1 of the Insights Association Code of Standards requires professionals to “Respect the rights and well-being of research subjects and make all reasonable efforts to ensure that research subjects are not harmed, disadvantaged, or harassed as a result of their participation in research.” Sec. 2.5 likewise requires them to “Respect the right of research subjects to refuse requests to participate in research.”[19]

Research subjects do not necessarily distinguish between research requests. The demanded NRFU, which borders on harassment, can discourage research subjects from ever participating in another research study. A research subject so burned hurts every future research study, in both the public and private sectors, because that individual may refuse to participate in anyone’s research for years, or ever.

As mixed-mode research becomes prevalent, viewing the research world through a simplistic lens of response rates may no longer make much sense anyhow.

OMB must eliminate, or dramatically lower, its minimum-acceptable response rate (regardless of whether such rates are spelled out or simply rules of thumb OMB examiners apply in practice).

Participant incentives are unnecessarily discouraged

A key solution to response rate problems, participant incentives, is effectively discouraged by OMB. OMB’s Guideline 2.3.2.5 contends that “incentives are not typically used in Federal surveys.”[20] Agencies are forced through many hoops to justify the use of incentives, let alone their form or amount, while many if not most research studies in the private sector provide some form of incentive, including the flexibility for participants to choose the form of their incentive. Whole private companies are dedicated to, and expert at, providing participant incentives. Relatively small incentives can appear to add to the cost of a study, but they save an outsized amount of resources in the process.

Question 74 in OMB's Q&A defines an incentive as "a positive motivational influence; something that induces action or motivates effort. Incentives are often used in market research, and sometimes used in survey research, to encourage participation. They may be monetary or non-monetary, such as phone cards, books, calculators, etc." However, OMB only really feels incentives are appropriate "with hard-to-find populations or respondents whose failure to participate would jeopardize the quality of the survey data (e.g., in panel surveys experiencing high attrition), or in studies that impose exceptional burden on respondents, such as those asking highly sensitive questions, or requiring medical examinations."[21]

To achieve a representative sample of participants, many surveys must provide incentives that attract, retain, and compensate individuals for their time and effort. Acknowledging the time and effort of respondents builds trust and goodwill, especially in panel-based or longitudinal research settings where repeated participation is desired. Incentives are often imperative for motivating response from population subgroups, to ensure that the target population is well represented in the survey.

Absent sufficient incentives, especially in longer, recurring, more sensitive or more demanding surveys, research subjects may lack the motivation to engage or complete questionnaires. Research has consistently shown that monetary incentives (e.g., cash, gift cards) typically lead to the highest increases in response rates, although non-monetary incentives (e.g., sweepstakes, charitable donations, small gifts) can also help boost participation. Incentives, especially when structured properly, can encourage research subjects to provide more thoughtful and complete responses, and to take the survey more seriously, reducing careless or rushed answers.

Certain populations—such as lower-income individuals, younger respondents, or harder-to-reach demographics—may be more likely to participate when incentives are offered. This helps reduce nonresponse bias, improving the overall representativeness and validity of the survey results.
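
A toy numerical sketch (all numbers hypothetical) shows the mechanism: when a subgroup that holds different views responds at a lower rate, the unweighted estimate drifts away from the true population value, which is exactly the bias that better-targeted incentives can reduce.

```python
# Two hypothetical subgroups: (population share, proportion approving)
pop = {"group_a": (0.60, 0.70), "group_b": (0.40, 0.30)}

def estimate(response_rates):
    """Unweighted survey estimate given per-group response rates."""
    responders = {g: share * response_rates[g] for g, (share, _) in pop.items()}
    total = sum(responders.values())
    return sum(responders[g] / total * approve for g, (_, approve) in pop.items())

truth = sum(share * approve for share, approve in pop.values())
print(truth)                                          # 0.54 (true value)
print(estimate({"group_a": 0.10, "group_b": 0.10}))   # 0.54: equal response, unbiased
print(estimate({"group_a": 0.12, "group_b": 0.04}))   # ~0.63: unequal response, biased
```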

According to a recent piece of research-on-research in the private sector from BHN EQ & Qualtrics, research subjects studied said that “proper compensation is the key reason they choose to get involved.” This study found that “98% of survey respondents report being compensated when they participate.” With an eye on customer/user experience studies, the study warned that, “without incentives, responses can suffer from ‘squeaky-wheel syndrome,’ meaning that unhappy customers are more likely to give feedback and therefore skew your results. It’s important to get a range of data from customers across the satisfaction spectrum.”[22]

The BHN EQ & Qualtrics study also emphasized that the type of incentive offered and method of its delivery “can also impact response rates and recruitment. When asked what kind of compensation they would prefer to receive, the most chosen option was gift cards and prepaid cards. … Since the offer of an incentive is the top reason our respondents cited for participating in research, it shouldn’t come as a surprise that the value of that incentive is one of the most important drivers of their decision. There are a number of factors to take into account when trying to determine the ideal amount, but ultimately, it may take some trial and error for each study.”[23]

Many researchers with whom we have spoken, in and out of the federal government, claim OMB sets arbitrary value caps on incentive levels, though there is disagreement on what that cap may be (usually in the $40-70 range). The private sector focuses on fair market value, and experienced companies and organizations are often in a better position to peg that value for a given study than a federal bureaucrat.

Question 75 addresses why federal agencies must bend over backwards to justify incentives. “While incentives have been used in the private sector without much controversy, most Federal Government surveys do not provide incentives to respondents… The regulations implementing the Paperwork Reduction Act (PRA) of 1980 prohibited the use of incentives for respondents to Federal surveys unless agencies could demonstrate a substantial need. The regulations implementing the 1995 reauthorization of the PRA require agencies to justify any payments to respondents. In keeping with these concerns, OMB’s guidelines on providing incentives to respondents follow a general conceptual framework that seeks to avoid the use of incentives except when the agency has clearly justified the need for the incentive and has demonstrated positive impacts on response and data quality by using an incentive."[24]

Question 76 details the ample justifications demanded from federal agencies if they wish to use incentives in a study. While stating that "OMB desk officers carefully review the justification of incentives," it emphasizes that federal agencies have to "cite the research literature and demonstrate how their study particularly merits use of an incentive by its similarity to specific studies on similar populations using similar methods that exist in the literature, or propose a field test or experiment to evaluate the effects of the incentive."[25]

OMB needs to recognize that, as cooperation and response rates have further declined in the last twenty years, the importance of incentives has commensurately increased, and federal agencies must be given much greater flexibility in providing research subjects with incentives and determining the value to be offered.

Improving federal statistical policy will increase competition

Most commercial firms rarely seek research work with the federal government because the demanded methods are antiquated, creating huge barriers to entry.

Even simple modifications to current federal statistical policy would open up more competition in the provision of services to the federal government. The Insights Association’s Annual U.S. Insights & Analytics Industry Report for 2024[26] pegged the U.S. industry at over $77 billion in 2023, with hundreds of firms potentially open to providing services to the government if the hurdles were not quite so onerous.

Reducing the mercurial complexity of OMB’s research guidance and approval would not just decrease the cost of research studies, it would lower the barriers to entry in this marketplace, particularly for small businesses.

ISO 20252 is a useful mark of research quality

One last consideration for OMB to add to its research regulations and guidance is a measure of research transparency and quality of research partners in the marketplace that can provide reassurance to federal agencies. A key such measure is ISO 20252 (Market, Opinion and Social Research).[27]

ISO 20252 addresses market research stakeholder needs and includes data practices, ethical requirements, and clauses on data management and security. The processes outlined in ISO 20252 are designed to produce transparent, consistent, well-documented, and error-free methods of conducting and managing research projects. Certification eliminates doubt about an organization’s adherence to internationally recognized standards, criteria that can be met in myriad ways, since certification is tailored to each applicant.

The market research ISO standards are not easily met, and they may not make sense for all companies and organizations, but they can help agencies make decisions when considering research providers. OMB should encourage federal agencies to recognize that companies and organizations certified to ISO 20252 may make trusted partners in research studies.

Conclusion

Statistics, and survey, opinion and market research, are active scientific fields, not a fixed set of unchanging modes and methods. There have been plenty of advancements in research since OMB’s guidance was issued in 2006, as explained in these comments. However, OMB has shown no interest in revisiting its regulations or guidance. Importantly, the collection of requirements OMB currently enforces actually makes the research less effective, both in terms of operational execution and methodological excellence.

Many stakeholders thought that the Foundations for Evidence-Based Policymaking Act of 2018[28] could help change the federal approach to research, but that has not happened yet. OMB has the opportunity to deliver on those hopes. The Insights Association stands ready to assist.

Sincerely,

Howard Fienberg
Senior VP, Advocacy
Insights Association

 

[1] Insights Association Annual U.S. Insights & Analytics Industry Report for 2024 https://www.insightsassociation.org/Resources/Reports-Library/Insights-Analytics-Market-Report

[2] The decennial census did not even adopt an online response option until 2020.

[3] 5 C.F.R. § 1320.5(d)(2)(v)

[4] Survey Design Standard 1.2: “Agencies must develop a survey design, including … selecting samples using generally accepted statistical methods (e.g., probabilistic methods that can provide estimates of sampling error). Any use of nonprobability sampling methods (e.g., cut-off or model-based samples) must be justified statistically and be able to measure estimation error.” Guideline 1.2.3: “When a nonprobabilistic sampling method is employed, include the following in the survey design documentation: a discussion of what options were considered and why the final design was selected, an estimate of the potential bias in the estimates, and the methodology to be used to measure estimation error. In addition, detail the selection process and demonstrate that units not in the sample are impartially excluded on objective grounds in the survey design documentation.” Office of Management and Budget, Standards and Guidelines for Statistical Surveys, September 2006. https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/statpolicy/standards_stat_surveys.pdf

[7] "Still Listening: The State of Telephone Surveys." by Stephanie Marken. Methodology Blog. Gallup. January 11, 2018. https://news.gallup.com/opinion/methodology/225143/listening-state-telephone-surveys.aspx

[8] "How Public Polling Has Changed in the 21st Century." By Courtney Kennedy, Dana Popky and Scott Keeter. Pew Research Center. April 19, 2023. https://www.pewresearch.org/methods/2023/04/19/how-public-polling-has-changed-in-the-21st-century/

[9] “Probabilistic methods for survey sampling are any of a variety of methods for sampling that give a known, non-zero, probability of selection to each member of the target population. The advantage of probabilistic sampling methods is that sampling error can be calculated. Such methods include: random sampling, systematic sampling, and stratified sampling. They do not include: convenience sampling, judgment sampling, quota sampling, and snowball sampling.” Office of Management and Budget, Standards and Guidelines for Statistical Surveys, September 2006. https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/statpolicy/standards_stat_surveys.pdf

[12] Office of Management and Budget, Standards and Guidelines for Statistical Surveys, September 2006. https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/statpolicy/standards_stat_surveys.pdf

[14] National Academies of Sciences, Engineering, and Medicine. 2013. Nonresponse in Social Science Surveys: A Research Agenda. Washington, DC: The National Academies Press. https://doi.org/10.17226/18293

[15]  "Response rates in telephone surveys have resumed their decline." By Courtney Kennedy and Hannah Hartig. Pew Research Center. February 27, 2019. https://www.pewresearch.org/short-reads/2019/02/27/response-rates-in-telephone-surveys-have-resumed-their-decline/

[16] “The Future Of U.S. General Population Telephone Survey Research.” AAPOR Task Force Report. April 27, 2017. https://aapor.org/wp-content/uploads/2022/11/Future-of-Telephone-Survey-Research-Report.pdf

[18] Office of Management and Budget, Standards and Guidelines for Statistical Surveys, September 2006. https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/statpolicy/standards_stat_surveys.pdf

[20] Office of Management and Budget, Standards and Guidelines for Statistical Surveys, September 2006. https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/statpolicy/standards_stat_surveys.pdf

[28] Pub. L. No. 115-435, 132 Stat. 5529

About the Author

Howard Fienberg

Based in Washington, DC, Howard is the Insights Association's lobbyist for the marketing research and data analytics industry, focusing primarily on consumer privacy and data security, the Telephone Consumer Protection Act (TCPA), tort reform, and the funding and integrity of the decennial Census and the American Community Survey (ACS). Howard has more than two decades of public policy experience. Before the Insights Association, he worked in Congress as a senior legislative staffer for then-Representatives Christopher Cox (CA-48) and Cliff Stearns (FL-06). He also served more than four years with a science policy think tank, working to improve the understanding of scientific and social research and methodology among journalists and policymakers. Howard is also co-director of The Census Project, a 900+ member coalition in support of a fair and accurate Census and ACS. He has previously served on the Board of Directors for the National Institute for Lobbying and Ethics and the Association of Government Relations Professionals. Howard has an MA in International Relations from the University of Essex in England and a BA (Honors) in Political Studies from Trent University in Canada, and has obtained the Certified Association Executive (CAE), Professional Lobbying Certificate (PLC) and Public Policy Certificate (PPC) designations. When not running advocacy for the Insights Association, Howard enjoys hockey, NFL football, sci-fi and horror movies, playing with his dog, and spending time with family and friends.
