Associations Shaping the Future of AI in Market Research - Opportunities, Ethics, and Regulation

By Melanie Courtright, CEO at the Insights Association, and Howard Fienberg, Senior VP, Advocacy at the Insights Association

(Originally shared on the MRII site on November 9, 2023.)

In recent conversations with leaders at market research agencies and corporate insights departments, the current and future use of AI in our work has been a persistent topic. Some leaders in the space fully believe that AI, and generative tools in particular, will be as important an evolution as the migration to online research. More powerful Generative AI applications can bring welcome time, cost, and quality efficiencies, but can also enable more sophisticated and harder-to-detect fraud.

As the representative of the world's leading insights and data analytics market, the Insights Association (IA) is where people turn for answers and assurance. One thing we can tell you unequivocally: helping to navigate this evolution stands as a top priority for the Insights Association and our Board of Directors. We're pleased to say that our fellow associations globally share this same sense of urgency. For the past several months, we have been working closely with them in unprecedented coordination to ensure a complete and unified approach. Following are some details.

What Associations are Doing in the AI Area: It's important to note that many of the concepts that should guide the use of AI in insights already exist in the ethics codes of the leading market research associations: Transparency, Duty of Care, Fit for Purpose, Use of Data, and Privacy are all part of our core code. So we are already bound by those concepts in the work we do, including with AI. That said, AI brings some new considerations and applications. Therefore, your global associations are working on AI-specific guidelines, and some have already been published. The Insights Association published a paper outlining the legal concerns and risks of AI, along with several recommendations for insights companies offering and/or using Generative AI. ESOMAR has a task force on AI, which it has kindly invited IA and other associations to join. The Market Research Society (MRS) in the UK and the Global Research Business Network (GRBN) are coordinating a global standard. You will see more news and developments on this front from all of us very soon.

Where Associations Stand on AI and Synthetic Sample: Associations strongly encourage evolution and innovation in our profession, as it ensures our future. We balance that enthusiasm with a reminder of the risks and considerations, and encouragement to be thoughtful about them. Specific to AI, and within that synthetic sample, IA has been discussing six Rs to understand and address when using these tools or developing new products:

  1. Reason for Use: What new outcomes are we able to generate using AI, what problems are being solved, and what problems are being created? What are the tradeoffs? 
  2. Risk: Transparency to all stakeholders, legal, privacy, IP ownership, data provenance, and regulation considerations.
  3. Respondent Care: Are participants fully aware of how their data is being used, both now and in the future, and is the experience ensured to do them no harm? 
  4. Representation: Data fit, bias, and gaps. Who does the underlying data being used in AI represent, and not represent?
  5. Recency: Data age, and fit for predictions and modeling.
  6. Repeatability: Is the output created consistent and reliable for decision-making?

Government regulation of AI in the works: IA is working with U.S. state and federal policymakers to ensure that their approach to regulating AI does not strangle our industry’s innovation in the crib. The Federal Trade Commission (FTC), the insights industry's top U.S. regulator, is certainly taking a tough stance, and Congress is toying with legislative approaches, including a recent dueling pair of Senate proposals:

  • The U.S. AI Act, which would restrict most AI uses, establish a new overarching regulator/enforcer, and punish violations with private lawsuits;
  • The AIRIA Act, which would regulate the most potentially risky uses of AI and require transparency when providing content produced by generative AI.

More on the legal concerns with Generative AI: Insights companies and organizations need to be aware of the already-existing potential legal pitfalls in using this technology, including:

  1. the output of AI tools may not be eligible for copyright/patent protection;
  2. your use of Generative AI may violate copyright and trade secrets laws, depending on how the tool was trained;
  3. you may be sharing your (and your clients' and data subjects') proprietary information with these tools without contractual protection from misuse;
  4. bias must be minimized at the input stage and in your algorithms;
  5. data subjects deserve transparency about how their information interacts with, and is handled by, the AI tool; and
  6. misrepresenting your tools, what they are, and how they operate, whether to your clients/partners or to your data subjects, risks a lawsuit.

On behalf of IA and the organizations we partner with globally to ensure the viability and advancement of the insights profession, we encourage you to stay engaged and active. Contact us directly to learn how to get involved.

This information is not intended and should not be construed as or substituted for legal advice. It is provided for informational purposes only. It is advisable to consult with private counsel on the precise scope and interpretation of any laws/regulation/legislation and their impact on your particular business.

About the Author

Howard Fienberg

Based in Washington, DC, Howard is the Insights Association's lobbyist for the marketing research and data analytics industry, focusing primarily on consumer privacy and data security, the Telephone Consumer Protection Act (TCPA), tort reform, and the funding and integrity of the decennial Census and the American Community Survey (ACS). Howard has more than two decades of public policy experience. Before the Insights Association, he worked in Congress as a senior legislative staffer for then-Representatives Christopher Cox (CA-48) and Cliff Stearns (FL-06). He also served more than four years with a science policy think tank, working to improve the understanding of scientific and social research and methodology among journalists and policymakers. Howard is also co-director of The Census Project, a 900+ member coalition in support of a fair and accurate Census and ACS. He has also served previously on the Board of Directors for the National Institute for Lobbying and Ethics and the Association of Government Relations Professionals. Howard has an MA in International Relations from the University of Essex in England and a BA (Honors) in Political Studies from Trent University in Canada, and has obtained the Certified Association Executive (CAE), Professional Lobbying Certificate (PLC), and Public Policy Certificate (PPC) designations. When not running advocacy for the Insights Association, Howard enjoys hockey, NFL football, sci-fi and horror movies, playing with his dog, and spending time with family and friends.
