Schedule

Hoboken, NJ | April 21, 2026

Who Attends

Developed by Your Peers

This event is designed for professionals involved in producing, evaluating, or applying research and data insights. Attendees include client-side insights and analytics leaders, research and consulting agencies, sample and data collection partners, methodologists, data scientists, technologists, and those working across qualitative and quantitative research. Together, we’ll raise the bar for data quality—and for the business decisions our industry supports.

Tuesday, April 21, 2026

Start your day with a delicious breakfast while connecting with fellow attendees. Fuel up, make introductions, and get ready for an inspiring day of learning, collaboration, and innovation at Ignite: Data Quality.

Welcome and introduction to the event. Overview of the day's agenda and key goals.

As AI becomes embedded across the research lifecycle, from design and sampling to analysis and insight generation, data quality risks are no longer limited to poor methodology or execution. Increasingly, they arise from automation itself. This session examines how shadow AI, AI-assisted workflows, and emerging agent-like systems quietly undermine data quality by eroding provenance, auditability, and human accountability.

Drawing on lessons from cybersecurity and enterprise AI governance, the session reframes data quality as a systems-level credibility challenge, not simply a technical or methodological one. Participants will explore how common practices such as unapproved AI tools, automated synthesis of open-ended responses, and systems that treat data as implicit instruction introduce invisible quality failures that traditional QA processes do not detect. These failures often surface only downstream, when insights must be defended to stakeholders or acted on in high-stakes decisions.

The session focuses on practical, research-relevant governance patterns that help organizations balance speed, scale, and rigor. Rather than advocating for less automation, it offers guidance on where human judgment must remain non-delegable and how research leaders can preserve transparency and trust in AI-enabled insight pipelines.  

Learning Objectives / Actionable Insights
- Identify where traditional data quality controls fail in AI-driven research workflows, particularly in the presence of shadow AI and automated synthesis.  
- Understand when data stops acting as passive input and begins functioning as instruction, and why this matters for insight integrity.
- Apply governance practices that enable responsible automation while maintaining defensible, trustworthy research outcomes.

Presented By: Cecilia Dones, Columbia University

Be among the first to see the results of Wave 2 of the Global Data Quality (GDQ) Benchmarking Report—the industry’s only international, cross-company assessment of online research data quality. Building on the foundation established in Wave 1, Wave 2 delivers expanded data and deeper insights across eight critical benchmarks, including incidence rate, abandon rate, in-survey cleanout rate, fraud detection, and respondent verification. More than a snapshot, this wave enables organizations to track progress over time, compare performance against global and regional standards, and identify meaningful opportunities for improvement. In an environment where data integrity underpins every insight and business decision, benchmarking is essential. Join us to explore the latest findings and learn how comparative data can help strengthen research processes, enhance transparency, and drive continuous quality improvement across the industry.

Survey fraud has evolved into a sophisticated, profit-driven ecosystem fueled by bots, AI-generated responses, identity-masking tools, device manipulation software, and organized fraud communities. What was once isolated bad behavior is now industrialized—scaling across sample sources and increasingly evading traditional quality checks. This panel will examine the modern fraud landscape, including AI and agentic response pollution, incentive abuse, blended sample risks, and the distinct vulnerabilities across B2C and B2B research. Panelists will highlight how tech-enabled fraud can pass common attention and data-cleaning measures—biasing results and masking true costs. Beyond identifying the risks, the discussion will focus on practical safeguards: developing a clear fraud taxonomy, recognizing ecosystem blind spots, evaluating suppliers based on cost-per-quality outcomes, and implementing layered, passive-first defenses that combine behavioral and technical signals. Attendees will leave with actionable frameworks and watchouts to strengthen fraud prevention strategies while protecting legitimate respondent experience and research integrity.

Take a short break to stretch, grab a coffee, and chat with fellow participants. Use this time to exchange ideas, meet new colleagues, and build connections that can spark future collaboration.

Data quality is no longer assumed; it's challenged. Sample buyers now face scrutiny from procurement, compliance, and legal, and “trust me” has become “show me.” This session breaks down what defensible sample really requires: transparent sourcing, identity and eligibility controls, behavioral quality signals, operational traceability, and audit-ready documentation. We will also show why quality is a shared governance outcome across providers, agencies, and brands. Attendees will leave with a practical checklist for defensible sample and a governance mindset that turns trust into proof.

Key Takeaways
1. What buyers should explicitly require from sample partners.
2. What “defensible sample” looks like in practice.
3. How ISO 20252 provides structure but must be operationalized.

Presented By:
- Alexandrine de Montera, CPO & ISO Quality Officer, Full Circle Research
- Bob Graff, Vice President, MarketVision Research
- Nick Fox, Quantitative UX Researcher, Meta


In a market research landscape reshaped by AI and rapid decision-making, trust in insights depends on stronger rigor and transparency. This joint session from Savanta and Ketchum explores how close supplier–client collaboration improves data quality, accountability, and confidence in an AI-disrupted world.
Key takeaways include:
- Shared Accountability Framework: A repeatable model clarifying supplier and client roles across design, execution, validation, analysis, and quality control.
- Human + AI Verification in Practice: Real-world workflows showing how AI-driven checks are combined with human review to ensure accuracy, ethics, and better open-end analysis.
- Respondent-Centered Survey Redesign: Practical strategies to improve survey experience, reduce fatigue, avoid bias, and deliver more reliable results.

Presented By:
- Patrick O’Donnell, VP, Group Manager, Ketchum Analytics
- Dr. Nick Baker, Chief Research Officer, Savanta

We'll have topical conversation starters to choose from to help break the ice and facilitate connections, or simply chat about your latest favorite streaming binge.

Get the latest on proposed and recently enacted regulations and legislation that impact data collection, management, and security. Better understand how the law can be an effective addition to your data quality toolkit.

Presented By: Howard Fienberg, SVP, Advocacy, Insights Association

As fraud in market research grows more sophisticated, traditional data-quality checks are being pushed to their limits. Bots, click farms, and coordinated bad actors can now bypass downstream safeguards—compromising studies before they even begin. This session explores a modern, multi-layered approach to fraud prevention that starts at the point of entry. We’ll examine how real-person verification, impersonation detection, behavioral monitoring, and in-survey validation work together to prevent invalid participants from accessing research in the first place—rather than attempting to clean bad data after the fact. Attendees will gain a clearer understanding of how the fraud landscape is evolving, the differences between identity verification and traditional response-quality checks, and why continuous monitoring and layered defenses are essential to protecting research integrity. Designed as an educational discussion, this session will provide practical guidance for strengthening data quality frameworks and staying ahead of emerging threats.

What if your survey could think in real time? Declining engagement, rushed answers, and satisficing are persistent threats to data quality. This session introduces a new, AI-powered approach that treats every survey question as a clear information objective—and adapts dynamically based on whether that objective has truly been met. Using large language models, the system evaluates response adequacy as it happens, probes only when needed, and moves on when it has what it needs—maximizing insight while minimizing respondent burden. We’ll also explore a next-generation engagement measurement framework that goes beyond completion rates, incorporating behavioral and linguistic signals—such as response timing, editing patterns, and depth of expression—to diagnose friction, fatigue, and data quality risk. Attendees will leave with a practical roadmap for designing adaptive, objective-driven surveys, detecting engagement risk early, and rigorously testing impacts on breakoff, burden, and insight quality. If you care about smarter surveys and stronger data, this session will reshape how you think about questionnaire design.

Recharge for the final sessions and continue to forge valuable connections.

Synthetic data and AI “respondent panels” are being marketed as faster, cheaper replacements for traditional research—but do they truly produce reliable insights? This session cuts through the hype to examine how synthetic panels actually work, where they fall short, and the risks of treating them as substitutes for real consumer data. Panelists will explore when synthetic approaches add value, where they can introduce bias or false confidence, and why governance, transparency, and human oversight are essential. We’ll highlight the key questions research leaders should ask before adopting these tools and discuss practical, evidence-based alternatives that build on existing data without relying on black-box claims. This educational discussion will equip attendees with a clear framework for evaluating synthetic data responsibly—prioritizing rigor, durability, and business impact over novelty and hype.

Space is limited, so act fast before tickets sell out!

This intimate setting ensures every voice is heard and every conversation matters.