
IA's Code In Action: Synthetic Participants in Market Research

Administrator | 16 Sep, 2024

ABOUT THIS SERIES:

The Insights Association Code of Standards & Ethics outlines the fundamental principles of market research practices and client and participant relations. Important stuff. But how much do you and your team know about it?

In this series of articles, penned by members of IA’s Standards Committee, we will bring the Code to life – highlighting areas where it can and should be applied in various aspects of performing market research properly and ethically.

Remember, you and your team can also learn about the IA Code via a special, free self-paced on-demand module available on our Learning Channel. This module provides an overview of all sections of the Code and includes a quiz. It’s the perfect refresher for seasoned researchers and ideal training for recent hires. Get Started Now!

Synthetic Participants in Market Research

Curiosity – and debate – continue to grow around using AI to simulate market research participants. “Synthetic Participants” and “Synthetic Sample” are frequent topics in industry publications and forums, and have been discussed widely in recent webinars and conferences.

In this method, an AI model is typically grounded in real-world data (such as interview transcripts or survey results) and prompted to draw the inferences needed to answer new questions. Without getting into the merits of synthetic participants, this article will explore how this topic intersects with the Insights Association Code of Standards & Ethics.
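To make the method concrete, here is a minimal sketch of how a synthetic-participant prompt might be assembled from real interview data before being sent to a generative model. The transcript snippets, function name, and question are all illustrative assumptions, not part of any specific vendor's approach:

```python
# Illustrative transcript excerpts standing in for real interview data.
transcripts = [
    "I switched brands because the subscription price doubled.",
    "I mostly buy whatever is on sale; loyalty doesn't factor in.",
]

def build_prompt(snippets, new_question):
    """Assemble a grounding prompt from real excerpts plus a new question.

    In practice the returned string would be sent to a generative model;
    the instruction to answer 'unknown' is one way to discourage
    hallucinated inferences the excerpts do not support.
    """
    grounding = "\n".join(f"- {s}" for s in snippets)
    return (
        "You are simulating a survey respondent. Ground every answer in "
        "the interview excerpts below, and answer 'unknown' when the "
        "excerpts do not support an inference.\n"
        f"Excerpts:\n{grounding}\n"
        f"Question: {new_question}"
    )

prompt = build_prompt(transcripts, "How important is price to you?")
```

The key design point for what follows is visible even in this sketch: the real participants' words flow directly into the model, which is exactly why the consent and privacy provisions of the Code come into play.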

Consent and Privacy

According to IA’s Code (Section 2: Primary Data Collection) research participants have a right to know, and consent to, how their information will be used:

"Obtain the research subject's consent for research participation and the collection of personal data or ensure that consent was properly obtained by the owner of the data or sample source."

And

"Obtain consent from the research subject prior to using his/her data in a manner that is materially different from what the research subject has agreed."

The Issue: Allowing a research company to create a synthetic replica of oneself is likely materially different from what any individual participant agreed to when consenting to a given study.

Recommendation: Most research participants are unlikely to know enough about synthetic participants to give truly informed consent. At a minimum, participants should be told that their data may be used in future research studies.

Additionally, IA Code Section 6: Data Protection and Privacy advises researchers to:

"Ensure that all personal data collected, received, or processed by the researcher, subcontractor or other service provider is secured and protected against loss, unauthorized access, use, modification, destruction, or disclosure by the implementation of appropriate information security measures."

The Issue: When creating synthetic participants, it’s possible that the original participant may be re-identified and their personal information disclosed.

Recommendation: Ensure that data is anonymized before providing it to an AI. The AI can’t disclose sensitive data if it doesn’t have that data to begin with.
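One way to act on this recommendation is to strip direct identifiers and redact PII embedded in free-text responses before any record reaches an AI system. The record structure, field names, and regex patterns below are illustrative assumptions, and real PII detection generally needs more robust tooling than two regexes:

```python
import re

# Hypothetical transcript records; field names are illustrative only.
records = [
    {"participant_id": "P-1042", "email": "jane@example.com",
     "response": "Call me at 555-867-5309 if you need a follow-up."},
]

# Simple patterns for emails and US-style phone numbers embedded in text.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def anonymize(record):
    """Drop direct identifiers and redact PII found in free text."""
    text = record["response"]
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    # Pass along only the fields the model actually needs.
    return {"response": text}

clean = [anonymize(r) for r in records]
```

Note the two-layer approach: identifier fields are dropped entirely, while free text is scanned for PII the participant may have volunteered mid-answer. Production pipelines would typically add named-entity detection and a human review step.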

Research Integrity and Transparency

Section 9 of the IA Code directs researchers to:

“Perform all work in accordance with generally accepted research practices and principles. When using new and emerging research practices, researchers must ensure that the underlying principles are methodologically sound."

In other words, it has to work. A danger with generative AI is that it will generate an output, even if that output has no basis in reality. Hallucinatory outputs damage the credibility of the research.

When it comes to transparency, sections 1, 8, and 10 of the IA Code apply:

"Be honest, transparent, fair, and straightforward in all interactions."

"Accurately represent their qualifications, skills, experience, and resources."

"Ensure that the findings released are an accurate portrayal of the research data..."

"Provide the basic information, including technical details, to permit independent assessment..."

Recommendation: First, fully disclose when synthetic participants are used. Be honest and up-front about the abilities and limitations of synthetic participants. If the limitations aren’t fully known, say so. Also, provide enough details about your methodology so that research consumers can form their own opinions about the validity of the approach.

At this point, synthetic participants are experimental, and should be treated as such.

Professionalism and Public Trust

Section 11 of the IA Code asserts the need for professionalism:

"Act with high standards of integrity, professionalism, and transparency in all relationships and practices."

“Comply with all applicable international, national, state, and local laws and regulations."

"Behave ethically and do nothing that might damage the reputation of research or lead to a loss of public confidence in it."

Be Aware: When deriving synthetic participants from real people, there are regulatory issues to consider. GDPR, for starters, but also the European AI Act, in addition to myriad other regulations.

Section 1 of the IA Code notes the need to instill public trust:

"Balance the interests of research subjects, research integrity, and business objectives with research subjects' privacy and welfare being paramount."

"Make all reasonable efforts to ensure that research subjects are not harmed, disadvantaged, or harassed as a result of their participation in research."

As technology advances, synthetic participants may become an important component of market research. They may also turn out to be synthetic snake oil. It’s incumbent on researchers to approach this technology with extreme diligence and professionalism.

Recommendations: If nothing else, no human research participants should be harmed by the construction and use of synthetic participants. Synthetic participants should not be used in ways that undermine the public trust in market research.

About the Author: Scott Swigart
A member of IA's Standards Committee, Scott is a seasoned insights professional with over 25 years of experience serving technology giants like Amazon, Microsoft, Google, and Salesforce. He brings deep category expertise and proficiency in leading-edge AI tools to his role of SVP, Technology Group & AI Innovation at Shapiro+Raj.




 
