Schedule

Hoboken, NJ | April 21, 2026

Who Attends

Developed by Your Peers

This event is designed for professionals involved in producing, evaluating, or applying research and data insights. Attendees include client-side insights and analytics leaders, research and consulting agencies, sample and data collection partners, methodologists, data scientists, technologists, and those working across qualitative and quantitative research. Together, we’ll raise the bar for data quality—and for the business decisions our industry supports.

Tuesday, April 21, 2026

Start your day with a delicious breakfast while connecting with fellow attendees. Fuel up, make introductions, and get ready for an inspiring day of learning, collaboration, and innovation at Ignite: Data Quality.

Welcome and introduction to the event. Overview of the day's agenda and key goals.
- Anita Watkins, CEO, Insights Association 
- Ronda Slavin, Synchrony
- India Swearington, Bloomberg Media
- David Boyd, Kantar

As AI becomes embedded across the research lifecycle, from design and sampling to analysis and insight generation, data quality risks are no longer limited to poor methodology or execution. Increasingly, they arise from automation itself. This session examines how shadow AI, AI-assisted workflows, and emerging agent-like systems quietly undermine data quality by eroding provenance, auditability, and human accountability.

Drawing on lessons from cybersecurity and enterprise AI governance, the session reframes data quality as a systems-level credibility challenge, not simply a technical or methodological one. Participants will explore how common practices such as unapproved AI tools, automated synthesis of open-ended responses, and systems that treat data as implicit instruction introduce invisible quality failures that traditional QA processes do not detect. These failures often surface only downstream, when insights must be defended to stakeholders or acted on in high-stakes decisions.

The session focuses on practical, research-relevant governance patterns that help organizations balance speed, scale, and rigor. Rather than advocating for less automation, it offers guidance on where human judgment must remain non-delegable and how research leaders can preserve transparency and trust in AI-enabled insight pipelines.  

Learning Objectives / Actionable Insights
- Identify where traditional data quality controls fail in AI-driven research workflows, particularly in the presence of shadow AI and automated synthesis.  
- Understand when data stops acting as passive input and begins functioning as instruction, and why this matters for insight integrity.
- Apply governance practices that enable responsible automation while maintaining defensible, trustworthy research outcomes.

Presented By: Dr. Cecilia Dones, Columbia University

Be among the first to see the results of Wave 2 of the Global Data Quality (GDQ) Benchmarking Report—the industry’s only international, cross-company assessment of online research data quality. Building on the foundation established in Wave 1, Wave 2 delivers expanded data and deeper insights across eight critical benchmarks, including incidence rate, abandon rate, in-survey cleanout rate, fraud detection, and respondent verification. More than a snapshot, this wave enables organizations to track progress over time, compare performance against global and regional standards, and identify meaningful opportunities for improvement. In an environment where data integrity underpins every insight and business decision, benchmarking is essential. Join us to explore the latest findings and learn how comparative data can help strengthen research processes, enhance transparency, and drive continuous quality improvement across the industry.

Presented by: Chuck Miller, DM2 & Emily Bosworth, Material

Stretch, grab a coffee, and chat with fellow participants. Use this time to exchange ideas, meet new colleagues, and build connections that can spark future collaboration.

Modern research depends on a diverse, global sample ecosystem that enables speed, scale, and access to hard‑to‑reach audiences. At the same time, survey fraud has evolved into a more sophisticated, profit‑driven subset of that ecosystem—fueled by bots, AI‑generated responses, identity‑masking tools, device manipulation software, and organized fraud networks.  

What was once isolated bad behavior has become increasingly industrialized, scaling across some sample sources and evading traditional quality checks. This panel will examine the modern fraud landscape, including AI‑ and agentic‑driven response pollution, incentive abuse, blended sample risks, and the distinct vulnerabilities across B2C and B2B research.

Panelists will explore how tech‑enabled fraud can pass common attention checks and data‑cleaning measures—biasing results and obscuring the true cost of poor‑quality data. Beyond identifying risks, the discussion will focus on practical safeguards: developing a shared fraud taxonomy, identifying ecosystem blind spots, evaluating suppliers on cost‑per‑quality outcomes, and implementing layered, passive‑first defenses that combine behavioral and technical signals.  

Attendees will leave with actionable frameworks and watch‑outs to strengthen fraud-prevention strategies while protecting legitimate respondent experience and overall research integrity.

Panelists: Andrew Gordon, Prolific; Frank Kelly, Virtual Incentives; Rich Ratcliff, OpinionRoute; Steven Snell, RepData; Mark Menig, PureSpectrum | Moderated by Melanie Courtright, Sago

Stretch, grab a coffee, and chat with fellow participants. Use this time to exchange ideas, meet new colleagues, and build connections that can spark future collaboration.

Data quality is no longer assumed; it's challenged. Sample buyers now face scrutiny from procurement, compliance, and legal, and "trust me" has become "show me." This session breaks down what defensible sample really requires: transparent sourcing, identity and eligibility controls, behavioral quality signals, operational traceability, and audit-ready documentation. We will also show why quality is a shared governance outcome across providers, agencies, and brands. Attendees will leave with a practical checklist for defensible sample and a governance mindset that turns trust into proof.

Key Takeaways
1. What buyers should explicitly require from sample partners.
2. What “defensible sample” looks like in practice.
3. How ISO 20252 provides structure but must be operationalized.

Presented by:
- Alexandrine de Montera, CPO & ISO Quality Officer, Full Circle Research
- Bob Graff, Vice President, MarketVision Research
- Nick Fox, Quantitative UX Researcher, Meta


In a market research landscape reshaped by AI and rapid decision-making, trust in insights depends on stronger rigor and transparency. This joint session from Savanta and Ketchum explores how close supplier–client collaboration improves data quality, accountability, and confidence in an AI-disrupted world.
Key takeaways include:
- Shared Accountability Framework: A repeatable model clarifying supplier and client roles across design, execution, validation, analysis, and quality control.
- Human + AI Verification in Practice: Real-world workflows showing how AI-driven checks are combined with human review to ensure accuracy, ethics, and better open-end analysis.
- Respondent-Centered Survey Redesign: Practical strategies to improve survey experience, reduce fatigue, avoid bias, and deliver more reliable results.

Presented By:
- Patrick O’Donnell, VP, Group Manager, Ketchum Analytics
- Dr. Nick Baker, Chief Research Officer, Savanta

Get the latest on proposed and recently enacted regulations and legislation that impact data collection, management, and security. Better understand how the law can be an effective addition to your data quality toolkit.

Presented by: Howard Fienberg, SVP, Advocacy, Insights Association

Synthetic data and AI “respondent panels” are being marketed as faster, cheaper replacements for traditional research—but do they truly produce reliable insights? This session cuts through the hype to examine how synthetic panels actually work, where they fall short, and the risks of treating them as substitutes for real consumer data. Panelists will explore when synthetic approaches add value, where they can introduce bias or false confidence, and why governance, transparency, and human oversight are essential. We’ll highlight the key questions research leaders should ask before adopting these tools and discuss practical, evidence-based alternatives that build on existing data without relying on black-box claims. This educational discussion will equip attendees with a clear framework for evaluating synthetic data responsibly—prioritizing rigor, durability, and business impact over novelty and hype.

Panelists: Dyna Boen, Escalent; Scott Swigart, Shapiro + Raj | Moderated by John Bremer

As AI-driven fraud rises, this session explores a shift from post-field data cleaning to real-time quality control during interviews. In comparative studies conducted with the University of Mannheim, AI-moderated interviews (AIMIs) significantly outperformed traditional surveys—delivering richer responses, broader thematic coverage, zero gibberish, and improved participant experience. These gains stem from built-in design features such as voice-only responses, real-time detection of low-quality or inconsistent answers, and automated scoring of response depth. Together, these tools prevent fraud and enhance data integrity during collection, not after. Attendees will learn how AIMIs improve quality metrics, how fraud prevention can be embedded into methodology, and when real-time controls are most effective versus traditional approaches.

Presented By: Matteo Cera, Glaut

Recharge for the final sessions and continue to forge valuable connections.

Building on last year’s session, “Red Herrings Aren’t Cutting It: The Evolution of In-Survey Quality Testing,” this presentation takes the next step in redefining data quality—starting where it matters most: thoughtful survey design and genuinely engaged respondents.

Too often, quality efforts focus on catching bad actors while overlooking the role poor design plays in driving disengagement. This session reframes data quality as a two-way street—where sample integrity and survey experience must work together. We'll introduce a transparent data quality dashboard that surfaces key metrics at the survey level, including:
- Sample removed pre-survey (e.g., duplicates, inaccurate targeting)
- Sample removed in-survey, with clear reasons for removal
- Drop-off rates (DOR)

Going further, we connect these metrics to survey design through a novel scoring model that quantifies respondent effort. By measuring cognitive load across survey structure (independent of content), this model helps identify design weaknesses, predict respondent fatigue, and improve overall data quality.

Attendees will learn how to:
- Balance rigorous quality controls with better survey design
- Apply a more empathetic, respondent-first approach to data quality
- Build smarter quality frameworks that detect inattentive respondents, survey farms, and emerging threats like LLM-enabled bots

If you care about cleaner data, better respondent experiences, and more reliable insights, this session will give you practical tools to get there. 

Presented by Rossi Dobrikova, aytm

As fraud in market research grows more sophisticated, traditional data-quality checks are being pushed to their limits. Bots, click farms, and coordinated bad actors can now bypass downstream safeguards—compromising studies before they even begin. This session explores a modern, multi-layered approach to fraud prevention that starts at the point of entry. We’ll examine how real-person verification, impersonation detection, behavioral monitoring, and in-survey validation work together to prevent invalid participants from accessing research in the first place—rather than attempting to clean bad data after the fact. Attendees will gain a clearer understanding of how the fraud landscape is evolving, the differences between identity verification and traditional response-quality checks, and why continuous monitoring and layered defenses are essential to protecting research integrity. Designed as an educational discussion, this session will provide practical guidance for strengthening data quality frameworks and staying ahead of emerging threats.

Presented by: Sarah Zurilla, Prodege & Henry LeGard, Verisoul

This session shows that data quality issues aren’t just about fraud or sampling—they’re also driven by survey design. Drawing on a large U.S./Canada study, it demonstrates how mobile-first, conversational surveys enhanced with AI produce richer, more engaging responses (including much longer open-ended and video inputs) while maintaining consistency on quantitative measures. The session also shares real-world applications from major brands and offers practical design principles and quality guardrails for scaling AI-powered conversational research without sacrificing rigor.

Presented by Matt Kleinschmit, Founder & CEO of Reach3 Insights & Brand-Side Researcher TBA

Space is limited, so act fast before tickets sell out!

This intimate setting ensures every voice is heard and every conversation matters.