Avoiding the ‘Foolish’ Side of AI

28 March 2024

By Crispin Beale, CEO, Insight250 | Photos by Pixabay

As we advance through the Age of AI, there is widespread fear among many as to the threats AI could pose to our culture. Yet there is equally tremendous excitement about the opportunities AI could present - opinions are divided.

To continue the discussion around this intriguing topic, I reached out to a cross-section of market research and insights experts from around the world. With April Fool’s Day upon us, I decided to raise the topic with a twist that reflects the foolish holiday, asking our global leaders and innovators the following question:

With artificial intelligence all the rage, many are going full-speed ahead to adopt the technology. What do you see as the aspects or applications of AI that are overhyped and could really fool users, creating potential issues for the quality and credibility of market research?

Read on to see how they responded and use their wise counsel to avoid becoming an “April Fool”.

Sharmila Das, Chairwoman, Purple Audacity, India

“AI-generated outputs offer novelty, speed and great appearance. HOWEVER, the responsibility for giving the command often lies with either the technician or AI itself instead of an experienced researcher. Hence cross-examining the source as well as the assumptions is not always possible. Secondly, the language of the output lacks authenticity and is often very easy to identify as AI-generated. Both these aspects could cast a lot of doubt on the capabilities of our industry.”

Alex Hunt, CEO, Behaviorally, USA

“Don’t believe it is possible to overestimate the impact AI will have on any enterprise in the prediction business: from those predicting traffic patterns to enable autonomous driving through to data and insights businesses that predict human behavior or marketing effectiveness. But it is all too easy for doomsayers to overestimate the negative impact of the coming wave of change. Yes, the cost of marketing effectiveness research and predictions will decrease, yet complements to them, such as human judgment or qualitative diagnostics, will witness a surge in demand. Yes, tasks centered on repeatable research will be automated, yet new roles will be created to apply judgment to that data. The researcher’s role is going to change, much as an accountant’s role changed after the advent of Excel: from adding up numbers to using numbers!”

Dan Foreman, Latana; Zappi; Bakamo; Mediaprobe; Veylinx; Phebi; VST; Empower; Incivus; hatted, UK

“Why did the overhyped AI market researcher get in trouble? Because it thought it could predict the future as accurately as a fortune teller! But turns out, it was just throwing biased data around like confetti at a party. Remember folks, when it comes to AI, always keep a human skeptic on speed dial to separate the insights from the insecurities!”

Jean-Marc Léger, President / CEO, Leger, Canada

“Undoubtedly, AI enhances our intelligence and efficiency, but the most significant danger is that researchers think they fully grasp the complexity of AI. It will take years of trial and error to fully understand its limitations and challenges. The use of synthetic data is a prime example: the potential benefits are as great as the possible failures.”

Ande Gilmartin (née Milinyte), Associate Director, Opinion, UK

“I think AI is a really exciting development that could make research faster and more efficient. That being said, my biggest worries about using AI in research are potential privacy violations and the generation of fake data. Anecdotally, we have already seen instances of AI “creating” false data out of real datasets, and we know that some platforms like ChatGPT are openly accessible, so there is potential for things to go awry. But with appropriate regulation and quality control, I think we can harness AI to make research better.”

Madhavi Kale, Global Head, Consumer and Client Insights, Energy and Resources at Sodexo, Singapore

“AI promotes a misleading notion that research can be done quicker and cheaper and by anyone. The real risk to business is that this could lead to poor decisions.”

Mark Langsfeld, CEO, mTab, USA

“The fool’s errand with AI is the belief that it can replace the expert mind in strategy, analysis, and insight. Many are hastily adopting an “all in” approach with AI, not recognizing that AI is not always as intelligent as it is purported to be. Human expertise remains the cornerstone of successful insights.”

Vesna Hajnsek, Senior Insights Lead, Swarovski, Switzerland

“A lack of in-depth analysis that takes context into consideration. So far, models summarize what consumers say at face value and don’t analyze context or compare it with existing information. This will lead to face-value research that is insight-poor. Plus, a lack of awareness of how the models are trained and maintained and how their inherent biases are addressed. This has the potential to lead to blind spots and blind adoption of outputs that should otherwise be vetted.”

Ryan Barry, President, Zappi, USA

“The biggest miss with AI in our industry won’t be AI itself. It’ll be that we don’t use it intentionally and don’t start doing research in a way that enriches the data warehouses that brands ARE building, and as a result we are again left out of the matrix. This scares me because we all know these models NEED new information to avoid spitting out duplicates.”

Mark Ursell, CEO, QuMind, UK

“There are two main areas that present challenges for market research quality and users. The first is where AI analytical tools pull from multiple data sets to create insight. Understanding these data sets is going to be crucial so that correct conclusions are derived; the key skills needed are curiosity and the ability to make sure comparisons are credible. The second is AI-generated questionnaire design, where the user will need deep knowledge and the ability to construct a survey that will elicit the right answers; otherwise it will lead to poor data and incorrect insights.”

Urpi Torrado, CEO, Datum Internacional, Peru

“AI needs human oversight, and researchers have the ability to detect machine hallucinations and potential flaws. Researchers are curious and trained to go beyond the data, so instead of being fooled, we can take advantage of AI and boost our potential.”

Danny Russell, Owner, DRC, UK

“With nearly half of the world’s population being asked to participate in elections this year, the potential for AI-generated pictures (that apparently paint a thousand words) to fool people’s voting intentions MUST be concerning. This is exacerbated by social media pouring petrol onto the flames, and the potential for untruths to spread has never been greater.”

Isabelle Fabry, Associate, ACTFUTURE, ESOMAR Representative, France

“In the realm of artificial intelligence, the promises for market research are enticing, but filter bubbles risk narrowing perspectives by recommending content solely based on users' past preferences. These bubbles isolate users from different or contradictory viewpoints. Similarly, text generators can produce real misleading content. Caution is warranted to maintain research credibility in the face of these challenges.”

Jon Puleston, VP of Innovation Profiles Division, Kantar, UK

“The image creation capabilities of generative AI are so potentially useful but at the same time proving incredibly frustrating to work with. You might liken it to recruiting a graphic designer who came to the interview with the most amazing design portfolio; you give them the job, only to discover they can’t take a brief and always go off and do their own thing, not quite what you asked for - creating over-elaborate visuals with a signature style that rapidly becomes cloyingly annoying. Gen AI image generation right now can only produce visual platitudes: every person featured has model good looks, and so often the default solution is a white businessman in a suit.”

Arundati Dandapani, Founder, Generation1.ca, Canada

"AI is not human, so it cannot feel, think, aspire, believe, judge nor intuit. While its computing power and potential to improve work and life productivity are powerful, working effectively with AI demands careful human oversight especially around data ethics, privacy and moral intelligence. Ironically AI’s overconfident, over-positive, highly certain, and culturally vapid or insensitive responses can be deceptive if you are not checking the data sources nor facts for biases, misrepresentations, and other misleading information that could damage your research results and harm professional credibility.”

Caroline Frankum, Global Chief Executive Officer, Profiles Division, Kantar, UK

“AI is opening up new ways of working, researching, and discovering the world and, when used as a ‘purpose for good’ tool rather than a weapon, can help people feel deeply understood. For businesses, AI’s flexibility and nuance present amazing opportunities to shape more relatable, human-like representations of brands – but only if prompts for LLMs are engineered with DEI front and centre and data outputs are structured by DEI-minded humans to truly reflect the diverse world we serve. This means striking the right ‘balance’ between machine and human collaboration – i.e. having AI work ‘with’ people, not without or against them!”

Nick Baker, Chief Research Officer, Savanta, UK

“Beware the OpenAI interface and your agency’s contractual obligations not to share data: just pour that data in and you’ve most likely broken your contract, unless you’ve ‘locked it down’ and created a secure, bespoke environment in which to use AI LLMs. Don’t trip up so easily.”

Guillaume Aimetti, Co-founder / CTO, Inspirient GmbH, Germany

"In the short term, the hype surrounding AI is inevitable, but its long-term value, especially in Generative AI, often remains underestimated. However, in Market Research, the uncritical use of Large Language Models (LLMs) for statistical analysis poses a significant risk. LLMs, not inherently designed for statistical computations, can mislead users. To safeguard the credibility of market research, researchers must discern which AI solutions are suited for specific tasks, prioritizing those optimized for statistical processing."

Lucy Davison, Founder and CEO, Keen as Mustard Marketing, UK

“Our experience shows us that where AI falls down is in effective storytelling. You can, using multiple prompts, get generative AI to create a usable first draft from a set of words, but in the area of insights you really need visceral and contextualised customer stories – based on data – which means ChatGPT doesn’t cut it. Thus far the ‘closed’ systems we have used in our experiments are excellent at pattern recognition, grouping data and sorting them, but not at giving us the human lens we need to create powerful and memorable stories.”

Alexander Edwards, President, Strategic Vision, USA

“Market research is going through a revolution where people are being replaced by machine learning and AI systems. This is because AI has surpassed human intelligence in many areas. For example, in the medical field, AI-trained systems can now perform complex surgeries, completely unassisted by human hands, with almost twice the success rate. When applied to market research, AI is now forecasting the success of ad campaigns with a near 92% accuracy rate, ensuring unprecedented wealth as algorithms are applied to Wall Street. As true AI has also achieved a level of emotional consciousness, ad campaigns written, illustrated and voiced solely by AI have seen an 87% increase in emotional reaction. In summary, we may as well quit our research jobs, as there is only a 3% probability that AI had any influence on the claims in this scientific market research.”

Ben Page, Chief Executive, IPSOS, France

“The riskiest element of AI for market research, apart from general hallucinations, may be where AI starts to hallucinate with synthetic data. Automation has already led to us being deskilled as an industry. Just try asking young researchers what the margin of error is on their data, how they calculated it, and what design effect they believe is operating on their sample. Now make it even more exciting by taking this data and getting AI to use it to predict answers to a range of unknown or new phenomena. Experienced political pollsters know that it is easy to build models that work on past elections, but much harder to get them consistently right in the face of changing political and social dynamics. So the old saying, “to err is human, but to really f… things up you need a computer”, must apply.”
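
(Editor’s aside: for readers less familiar with the calculation Ben refers to, the sketch below shows the conventional margin-of-error formula for a proportion, inflated by the survey’s design effect. It is a minimal illustration in Python; the figures used are purely hypothetical and are not drawn from any study mentioned in this article.)

```python
# A minimal sketch of the margin-of-error calculation referenced above.
# All numbers here are illustrative only.
import math

def margin_of_error(p, n, deff=1.0, z=1.96):
    """Approximate 95% margin of error for a proportion p from a sample of n,
    inflated by the design effect (deff) of the sampling scheme."""
    return z * math.sqrt(deff * p * (1 - p) / n)

# Example: a 50% result from 1,000 interviews with a design effect of 1.5
print(f"{margin_of_error(0.5, 1000, deff=1.5) * 100:.1f} percentage points")  # ~3.8
```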

Victoria Usher, CEO, GingerMay, UK

“The significant VC investment in AI gives an indication of the potential of this game-changing technology. It is, however, overhyped right now, with Gartner placing AI at the ‘peak of inflated expectations’. AI and all its associated benefits will never match the power of human intelligence. While we use AI across our business, including for data analysis, we would never take its findings in isolation without adding human rigour. Users must ensure AI outputs align with common-sense factors.

So businesses should embrace AI - but with a large dollop of caution and human intervention.”

Crispin Beale:
Thank you to all those who contributed to this article.

As we, as a profession, continue to carefully embrace advancing technologies such as AI, be sure to help champion those leaders who innovate and push the boundaries. If you are or know such an individual, be sure to nominate them for the annual Insight250 Awards - you can do so now at insight250.com/nominate (nominations for this year will close at the start of April 2024). Good luck everyone.

ABOUT THE AUTHOR

Crispin Beale, Chief Executive, Insight250; Senior Strategic Advisor, mTab; Group President, Behaviorally

Crispin Beale is a marketing, data, and customer experience expert. Crispin spent over a decade on the Executive Management Board of Chime Communications as CEO of leading brands such as Opinion Leader, Brand Democracy, Facts International, and Watermelon. Before this, Crispin held senior marketing and insight roles at BT, Royal Mail Group, and Dixons. Crispin originally qualified as a chartered accountant and moved into management consultancy with Coopers & Lybrand (PwC). Crispin has been a Fellow, Board Director (and Chairman) of the MRS for nearly 20 years and UK ESOMAR Representative for over 10 years. Crispin is currently a Senior Strategic Advisor at mTab as well as Group President at Behaviorally.
