
IA's Code in Action: Applying IA Research Standards to AI

Administrator | 10 Apr, 2025

ABOUT THIS SERIES:

The Insights Association Code of Standards & Ethics outlines the fundamental principles of market research practices and client and participant relations. Important stuff. But how much do you and your team know about it? In this series of articles, penned by members of IA’s Standards Committee, we bring the Code to life – highlighting areas where it can and should be applied in various aspects of performing market research properly and ethically. Remember, you and your team can also learn about the IA Code via a special, free self-paced on-demand module available on our Learning Channel. This module provides an overview of all sections of the Code and includes a quiz. It’s the perfect refresher for seasoned researchers and ideal training for recent hires. Get Started Now!

Code in Action: Applying IA Research Standards to AI
The use of artificial intelligence (AI) tools in research is expanding and already supporting exciting improvements in the speed, scale, and predictability of the insights the industry can deliver. But the impact AI can have on research raises familiar, new, and evolving considerations for practitioners. The key requirements for ensuring research integrity and quality when using AI tools are outlined in the revamped Insights Association Code of Standards & Ethics.

Heightened Risks to Participant Confidentiality: A Focus Area
The Duty of Care the industry owes to research participants is outlined in Section 1 of the IA Code: data obtained for research purposes must never reveal the identity of its subjects without consent. The use of AI tools both expands the research data lifecycle and intensifies the risk that, without proper management, the personally identifiable information (PII) of participants is reverse-engineered through AI inference. Researchers using AI tools must take active responsibility for ensuring PII is protected.
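To make that duty concrete, here is a minimal sketch of pseudonymizing direct identifiers before a respondent record is passed to any external AI tool. The field names, salt value, and record shape are hypothetical illustrations, not prescriptions from the IA Code; real deployments also need key management, removal of quasi-identifiers, and a re-identification risk review.

```python
import hashlib

def pseudonymize(record, pii_fields=("name", "email"), salt="project-salt"):
    """Replace direct identifiers with salted one-way hashes.

    A minimal sketch: truncated SHA-256 tokens stand in for PII so the
    record can be analyzed without exposing who the respondent is.
    """
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]  # opaque token, not reversible from here
    return out

# Hypothetical respondent record: research values survive, identifiers do not.
respondent = {"name": "Jane Doe", "email": "jane@example.com", "rating": 8}
print(pseudonymize(respondent))
```

The same salt yields the same token for the same respondent, so records can still be joined across waves without ever storing the raw identifier.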

Methodological Transparency: No Black Boxes
Section 2 of the IA Code highlights the importance of transparency in the research methods deployed. This is especially important when working with new and emerging technologies, such as those leveraging AI, which are inherently less familiar to both research practitioners and stakeholders. First and foremost, researchers must disclose the use of AI and its purpose, whether it operates alongside, embedded within, or independent of other methodologies. Essential subsequent disclosures include: the AI technique deployed (are you using existing data to make predictions based on past patterns or trends, or generating new ‘synthetic’ data from which learning is derived?); the model type selected (publicly available versus open-source or proprietary and closed); and the accuracy of any prediction being made (standard metrics such as normalized mean absolute error or root mean square error are the most appropriate for benchmarking and interpreting model accuracy ahead of decision-making). Transparency is vital when the research tools in question are new and complex.

Human Oversight: Cannot be Substituted
The application of AI to research renders the human researcher neither obsolete nor redundant. On the contrary, no AI system should operate without human judgment embedded in its design, implementation, validation, and application. Any AI model deployed against a research objective must be audited regularly to ensure the intended purpose is met and that output is not falling victim to bias or flaws inherent in either the source data or the design. This is particularly important when considering the role demographic or cultural variables might play: as with any research, flawed data leads to flawed insights, and these risks are only intensified by AI. When utilizing AI tools, the human researcher must maintain a skeptical view, regularly interrogating outputs to ensure integrity, utility, and the absence of bias.
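One practical form such an audit can take is comparing model error across demographic segments. A minimal sketch, with a hypothetical record format (segment label, observed value, model prediction) that is not drawn from the IA Code or any specific tool; a large gap between segments would flag the model for closer human review:

```python
from collections import defaultdict

def error_by_group(records, group_key="segment"):
    """Mean absolute prediction error per demographic segment.

    `records` is a list of dicts with hypothetical keys: the segment
    label, the observed value, and the model's prediction.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for r in records:
        sums[r[group_key]] += abs(r["observed"] - r["predicted"])
        counts[r[group_key]] += 1
    return {group: sums[group] / counts[group] for group in sums}

# Hypothetical respondent-level data for two age segments.
data = [
    {"segment": "18-34", "observed": 6.0, "predicted": 5.8},
    {"segment": "18-34", "observed": 7.0, "predicted": 7.4},
    {"segment": "55+",   "observed": 5.0, "predicted": 6.5},
    {"segment": "55+",   "observed": 4.0, "predicted": 5.5},
]
print(error_by_group(data))  # 55+ error is five times the 18-34 error
```

Here the model tracks younger respondents far better than older ones, exactly the kind of demographic skew a regular human audit exists to catch.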

An Evolving Technology: An Obligation to Stay Informed
At the time of writing, the insights industry is early in what will be a decade of AI adoption and application. As with new technologies and ideas that have come before, from digital data collection through to behavioral science, capability will evolve, and often ahead of true and full understanding. In a world of change, the research practitioner has an obligation to adopt a learning mindset and stay up to date on the latest tools, trends, and risks in applying AI to research. It is always possible to innovate in research, but equally essential to do so in keeping with the spirit and letter of the IA Code.


About the Author: Alex Hunt

Based in New York, Alex is the CEO of Behaviorally. He is a recognized pioneer in the practical application of behavioral science to shopper marketing, helping clients develop consumer experiences that drive business growth. Prior to his role at Behaviorally, Alex served as global CEO of PRS IN VIVO, spent a decade building System1 Research (formerly BrainJuicer), and served as an Executive Director of the System1 Group PLC Board.
