
The Researcher’s Role in an AI World: Making Sure the Right Voices Are Heard

By Brittne Kakulla, PhD | AARP | Member, Insights Association IDEA Council

 

AI is transforming research—but not always for the better. In this thought-provoking piece, Brittne Kakulla explores how AI-driven analysis can unintentionally silence underrepresented voices by prioritizing dominant trends. She highlights the researcher’s evolving role as both an analyst and an ethical gatekeeper, ensuring that the data truly reflects all perspectives. Featuring practical tools like Google NotebookLM and actionable prompt strategies such as the “Cognitive Verifier,” this article offers guidance for balancing speed with responsibility. As AI accelerates workflows, Kakulla reminds us: meaningful insights depend not just on what’s most common—but on what might be missing.  

 

The Researcher’s Responsibility in an AI Era

Researchers have always carried two responsibilities at once. The first is technical: find the patterns inside the data. The second is ethical: make sure those patterns actually represent the people you studied. For most of our careers, those two responsibilities required the same amount of attention. AI is changing that balance, and not always in the direction we would choose.
 

AI tools are extraordinarily good at the first job. They can scan thousands of responses, surface recurring themes, and deliver a synthesized summary faster than any human analyst. But they do not understand social context. They do not know that some voices in a dataset are quieter, not because those people have less to say, but because they were harder to reach or less represented when we collected data in the first place.


When AI summarizes a dataset, it tends to amplify what it sees most clearly. The result is that dominant voices get louder, while quieter voices can disappear entirely. This can introduce bias. As AI becomes more embedded in our research workflows, our job is to become better bias auditors. We are not just using these tools. We are responsible for what they produce.


When Important Voices Get Lost

Let me share an example from a research project I was recently working on: a large technology adoption dataset with thousands of survey responses, including open-ended comments, across multiple groups.


When I fed the data into an AI synthesis tool, the output was clean and fast. The most common digital behaviors surfaced immediately, and the summary read as though everyone in the study had roughly the same relationship with technology. If I had stopped there, the story would have been incomplete, because, as I know from experience, older adults' relationships with technology depend on many things, not just whether they own a cellphone.


The AI did not diminish those stories out of carelessness; they had simply been statistically outvoted. The tool was doing exactly what it was designed to do: surface the most frequent patterns and deprioritize the rest.

Think of it like a public meeting where fifty people speak in favor of a proposal and ten speak against it. If you just count hands, the opposition barely registers. AI tools can count hands, but our job as researchers is to understand the nuances and context behind the hands. Because a skilled researcher knows that a small, consistent signal of concern is worth examining, even if it is outnumbered.


Losing quieter voices has always been a risk in research. The difference now is the speed. Before common AI tools, a researcher manually reviewed transcripts, designed screeners, or built deliberate checkpoints into the analysis process to ask: who is not showing up here? Those slower moments created natural space for reflection.


Today, with the speed of AI, researchers can process thousands of responses and deliver a polished summary in seconds, but we must intentionally build in moments of reflection. The tools will not do it for us without our expertise.


On LinkedIn, I have been working my way through prompting patterns, viewed through a researcher’s lens. I will be bringing that work here in a quarterly series. Each column will introduce one AI tool and two prompt patterns that researchers can put into practice to harness the value AI can add to their workflow while understanding its limitations. This is the first!


Tool Callout: Google NotebookLM

What it does: Google NotebookLM works exclusively from the sources you provide. It synthesizes documents such as transcripts, reports, survey notes, and open-ended responses into organized summaries and themes, with no outside information added, which means every output stays grounded in your data. The free version is very robust and only requires a Google account to get started.


Research use case: NotebookLM excels at finding connections across large volumes of material that would take a researcher hours to work through manually. Load in interview transcripts, research memos, and open-ended responses, and it will surface recurring themes, link related ideas across documents, and help you see patterns in the aggregate that are easy to miss when reading source by source.


The benefit: Time. For researchers working under tight timelines or managing large mixed-methods datasets, this kind of rapid synthesis is genuinely useful.


The limitation: NotebookLM, like most language-based AI tools, organizes around frequency. The themes it highlights are the ones it encounters most often. That means the summary will naturally reflect the groups and behaviors that appear most in your data. If your data has coverage gaps, the summary will reflect those gaps too, often without flagging them.
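To see why frequency-first synthesis buries minority signals, consider a small illustrative sketch (the themes and counts are hypothetical, not from the author’s dataset). Keeping only the most common themes drops exactly the responses a researcher may most need to examine:

```python
from collections import Counter

# Hypothetical coded responses: each string is the theme assigned
# to one open-ended answer in an imagined technology-adoption study.
coded_responses = (
    ["uses smartphone daily"] * 50
    + ["video-calls family"] * 30
    + ["avoids online banking over fraud fears"] * 6
    + ["relies on a caregiver to manage devices"] * 4
)

counts = Counter(coded_responses)

# A frequency-first summary keeps only the most common themes...
top_themes = [theme for theme, _ in counts.most_common(2)]

# ...so the quieter, potentially critical signals fall below the cut.
dropped = [theme for theme in counts if theme not in top_themes]

print(top_themes)  # ['uses smartphone daily', 'video-calls family']
print(dropped)     # the two minority themes, absent from the summary
```

Ten out of ninety respondents voiced concerns that vanish entirely from a top-two summary, which is the "counting hands" problem in miniature.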


Prompt Pattern: Cognitive Verifier

One technique that helps address this limitation is a prompt pattern called Cognitive Verifier. The idea is simple: instead of accepting an AI’s first output as final, you ask the AI to challenge its own conclusions.
 

What it does: The Cognitive Verifier asks the AI to look at a summary it has already produced and identify what might be missing, undercounted, or overlooked.


Research use case: It prompts the model to search for perspectives that were in the data but did not make it into the initial synthesis, particularly voices from groups that may be underrepresented in the dataset.


Limitation: This technique cannot fix a sampling or design problem. If a group was not meaningfully included in your data collection, no prompt will recreate their perspective. The Cognitive Verifier is a review tool, not a repair tool.


Why this prompt pattern works: AI models detect statistical patterns in language. When you use a Cognitive Verifier prompt, you are redirecting the model. Instead of asking it to summarize what is most common, you are asking it to audit what it may have underweighted. You are effectively asking it to act less like a statistician and more like an analyst.


Practice Prompts

Here are two prompts you can adapt for your own research workflows.


Prompt 1: Evaluate the summary for missing perspectives

“Review the summary you just produced. Identify any perspectives or experiences that may have been underrepresented due to lower response frequency. Pay particular attention to groups who may have less consistent access to digital tools or who are typically harder to reach in online research panels. What voices may be present in the underlying data but not reflected in this summary?”

Prompt 2: Assess frequency bias and surface minority themes

“This summary was produced using frequency-based synthesis. Evaluate whether that approach may have introduced bias toward the most commonly expressed views. Identify two to three themes that represent minority perspectives in the data: experiences that appeared less often but may carry meaning that is disproportionate to their frequency. What would a researcher focused on equity want to investigate further?”
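For researchers who run models programmatically rather than in a chat interface, the Cognitive Verifier pattern can be wired in as a second pass. This is a minimal sketch, not the author’s implementation: `call_llm` is a hypothetical placeholder for whatever model API you use, and the template paraphrases Prompt 1 above.

```python
# A two-pass "Cognitive Verifier" workflow: synthesize first, then ask
# the model to audit its own summary for underrepresented voices.

VERIFIER_TEMPLATE = (
    "Review the summary below. Identify any perspectives or experiences "
    "that may have been underrepresented due to lower response frequency. "
    "What voices may be present in the underlying data but not reflected "
    "in this summary?\n\n--- SUMMARY ---\n{summary}"
)

def build_verifier_prompt(summary: str) -> str:
    """Wrap a first-pass summary in a Cognitive Verifier follow-up prompt."""
    return VERIFIER_TEMPLATE.format(summary=summary)

def audited_synthesis(data_prompt: str, call_llm) -> dict:
    """Pass 1: synthesize the data. Pass 2: audit the synthesis."""
    summary = call_llm(data_prompt)          # e.g. "Summarize these responses..."
    audit = call_llm(build_verifier_prompt(summary))
    return {"summary": summary, "audit": audit}
```

Because the verifier step is just a second prompt, it slots into any pipeline; the key design choice is that the audit always runs, rather than depending on the researcher remembering to ask.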


Closing Reflection

AI tools will increasingly support research workflows, and the trend will only accelerate. But the most important equity work still happens before data are collected, in sampling design, questionnaire development, and participant recruitment. No prompt pattern fixes what was left out at the design stage.

Bias detection, interpretation, and protecting representation are not tasks we can delegate to a model. They are the core of what we do, and they become more important as our tools get faster and more autonomous.

The goal is not to slow AI down. The goal is to use it well. AI should accelerate analysis. It should not erase the voices that were already hardest to hear.

_____________________________________________________
ABOUT THE AUTHOR

Brittne Kakulla, PhD, is a researcher at AARP focused on technology adoption among adults 50-plus. She serves on the Insights Association IDEA Council and writes about AI literacy for research professionals.

 

Prompt patterns come from White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D.C. (2023). A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. arXiv:2302.11382. https://arxiv.org/abs/2302.11382
