By Rob Key, CEO, Converseon
When search engine optimization (SEO) first emerged in the early 2000s, brands and agencies raced to decode the algorithms. It was the Wild West of digital marketing—meta tags, backlinks, keyword stuffing, and link farms all promised visibility and reach. For a while, these dark arts worked. But as Google’s algorithms evolved, so did its intolerance for manipulation. The result was a reckoning: black-hat techniques led to penalties, reputational damage, and the loss of consumer trust.
Today, we are witnessing a new version of that same race, this time around Generative Engine Optimization (GEO), or what some call "Answer Engine Optimization" (AEO): the emerging discipline of shaping how large language models (LLMs) like ChatGPT, Claude, and Gemini represent brands and surface their information. Just as SEO transformed how people found information, GEO is transforming how machines understand it. But with that transformation comes the same ethical crossroads we faced two decades ago. As GEO becomes all the rage, it is critically important that we give these efforts an ethical backbone built around the concepts of "trusted" and "responsible" AI, accuracy, transparency, and trust.
There are indeed reputable organizations focused on improving agentic output. In the new GEO gold rush, however, there is also a high risk that others will use more dubious tactics, potentially exacerbating bias, contributing to AI slop and misinformation, and ultimately irreparably undermining trust at a time when it is most needed.
I speak on this topic from experience. In the mid-2000s, before pivoting into NLP and predictive analytics, Converseon helped pioneer the practice of search engine reputation management, or what we called SERM at the time. The objective was to promote as much positive content as possible onto the first page of results, "pushing down" negative listings to the second or third pages where they were far less likely to be seen. At the 2006 Search Engine Strategies conference, for example, in a presentation on reputation management, we recommended techniques similar to today's GEO, including mapping the conversation and influencing search engine sources, but we also cautioned clearly to "avoid the dark arts." Times may change, but the fundamentals remain largely constant.
The Temptation of Manipulation
We recognized then that short-term visibility hacks were a dangerous illusion. Techniques like cloaking, link exchanges, and keyword spamming temporarily inflated rankings but ultimately undermined long-term credibility. The parallels with today’s “generative optimization” tactics are unmistakable.
Already, some agencies are experimenting with ways to “seed” or “influence” model outputs by spamming public data sources—posting repetitive content across Reddit threads, manipulating Wikipedia edits, or flooding Quora with brand-friendly narratives. These tactics may provide a temporary bump in visibility within LLM-generated responses, but they are fundamentally fragile. As model providers improve their alignment systems, these manipulative signals will be filtered out or—worse—penalized.
In 2006, BMW's German website (bmw.de) was penalized and temporarily removed from Google's search index after Google discovered the company had used "doorway pages": deceptive pages designed to rank highly for certain keywords (like "used car" or "neuwagen") that immediately redirected visitors to other, more visually rich pages that weren't optimized for search. The GEO world has not yet had a similar incident, but it is not difficult to imagine an analogous future scenario casting a dark shadow on the industry. And this is where the danger lies.
From Optimization to Context Engineering
The brands that will win in this next era won’t be those that master the newest hacks or discover hidden prompt exploits. They will be those that build a deep reservoir of trust and authority—digitally, contextually, and narratively.
Generative models are trained to approximate the consensus of their sources; their outputs are predictions drawn from patterns across millions of documents. Attempts to flood that signal with bias, or through one particular source, do not shift the center; they distort it. Over time, the system learns to detect and ignore such noise. The result: wasted resources, lost credibility, and a risk to brand trust that far outweighs the ephemeral gains.
For example, even as many GEO vendors were publicly advocating Reddit engagement, several third-party trackers (such as Similarweb) reported that Reddit's visible share of citations in ChatGPT dropped from 14% to 2% in a single month.
The future of brand visibility in the age of generative AI should not be about gaming models. It should be about providing high-quality, context-rich, accurate information. In other words, it is about context engineering: the discipline of designing, curating, and structuring data and metadata so that generative and analytical AI systems can interpret and respond with accuracy, relevance, and trustworthiness within a specific domain or enterprise environment.
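To make that definition concrete, here is a minimal sketch of the idea in Python. The fact store and helper names (BrandFact, build_grounded_context) are hypothetical illustrations, not a real product API; the point is simply that a well-engineered context supplies a model with curated, source-attributed, verified facts rather than promotional volume.

```python
from dataclasses import dataclass

@dataclass
class BrandFact:
    claim: str      # a single, verifiable statement about the brand
    source: str     # authoritative citation for the claim
    verified: bool  # has the claim passed fact-checking/governance review?

def build_grounded_context(facts: list[BrandFact], question: str) -> str:
    """Assemble a prompt that restricts a model to verified facts only."""
    vetted = [f for f in facts if f.verified]
    fact_lines = "\n".join(f"- {f.claim} (source: {f.source})" for f in vetted)
    return (
        "Answer using only the verified facts below. If they do not cover "
        "the question, say so rather than speculating.\n\n"
        f"Verified facts:\n{fact_lines}\n\nQuestion: {question}"
    )

# Example usage with illustrative placeholder facts: the unverified
# promotional claim is filtered out before it ever reaches the model.
facts = [
    BrandFact("Acme Corp was founded in 1998.", "acme.com/about", True),
    BrandFact("Acme is the industry's #1 choice.", "press release", False),
]
print(build_grounded_context(facts, "When was Acme Corp founded?"))
```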
While many GEO efforts today focus on promotion and visibility, the reality is that intentionally feeding content to models in order to influence their output is arguably an indirect form of AI model training. Each piece of content, whether owned or earned, is essentially training data for these models. And assisting in any form of AI training carries elevated responsibilities grounded in ethical and responsible approaches that are not always congruent with promotional ones. The EU AI Act, for example, requires additional responsibility from those training AI models, including the "use of data governance measures: ensure data used for training is of suitable quality, address bias."
This is because generative engines don’t just scrape—they synthesize. Their understanding of a brand depends on the breadth, depth, and consistency of the digital signals they ingest. That means the best form of GEO isn’t “optimization” at all—it’s strategic content alignment. Brands that maintain a coherent, factual, and values-driven presence across earned, owned, and shared media create the ecosystem these models learn from. Instead of chasing algorithms, forward-thinking brands are focusing on:
• Deep listening and measurement of social and media discussions, to clearly understand stakeholder needs and information requirements as well as how the brand is positioned across these massive unstructured datasets. This is the "canary in the coal mine," since this is the data that will eventually be synthesized within the AI models. Measurement should go beyond sentiment to include trust and intent.
• Cross-channel brand measurement: ensuring the same factual and emotional narrative is reinforced across PR, social, and owned assets.
• Robust data governance, with an emphasis on fact-checking, clarification, and providing clear, useful information, and on eliminating, not reinforcing, bias.
• Structured, high-quality metadata that helps models interpret and categorize information correctly (a minimal example follows this list).
• A focus on precision, not promotion.
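As an illustration of the structured-metadata point above, the sketch below emits a schema.org Organization record as JSON-LD, a widely used open standard for machine-readable brand facts. Every name and URL is a placeholder, and whether a given LLM pipeline ingests such markup will vary by provider; this shows the general shape, not a guaranteed mechanism.

```python
import json

# Illustrative schema.org "Organization" record; every value is a placeholder.
brand_metadata = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand, Inc.",       # canonical brand name
    "url": "https://www.example.com",    # authoritative owned domain
    "description": (
        "A plain, factual description of what the organization does, "
        "free of promotional superlatives."
    ),
    "sameAs": [  # consistent identity signals across owned/shared channels
        "https://en.wikipedia.org/wiki/Example_Brand",
        "https://www.linkedin.com/company/example-brand",
    ],
}

# Emit the JSON-LD <script> tag a site would embed in its pages.
print('<script type="application/ld+json">')
print(json.dumps(brand_metadata, indent=2))
print("</script>")
```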
Perhaps the most profound shift GEO demands is in the role of public relations. Historically, PR has balanced storytelling and advocacy. But in the generative era, the goal should be to clarify the record, not cloud it. Every piece of misinformation left unchallenged, every biased press release masquerading as fact, becomes a data point in the model's understanding of your brand. The more authentic and accurate your public footprint, the more faithfully generative engines will represent you.
This means CMOs and chief communications officers should view GEO not as a marketing function but as a data governance one, ensuring that brand information, values, and messaging are represented ethically, consistently, and accurately across every medium.
An Industry Call to Action
For all of these reasons, it is incumbent on the industry to rapidly establish a comprehensive code of ethics for GEO built around responsible, ethical, and trusted AI standards, and on brands considering GEO services to demand one.
These standards should reflect the core principles of responsible and trusted AI: designing, developing, deploying, and contributing to AI models on the basis of accuracy, bias mitigation, transparency, fairness, non-discrimination, and data governance. As UNESCO states, AI models should "enhance human agency and social good, not replace or undermine it."
This means helping to correct erroneous information and contributing clear, useful content grounded in an understanding of users' information needs, while refraining from injecting biased or misleading promotional data into the models. That requires investment not in manipulation but in infrastructure.
There are some good examples available. Within the PR industry, earlier this year AMEC (the International Association for the Measurement and Evaluation of Communication) updated its Barcelona Principles 4.0 to clearly state that "measurement and evaluation must be grounded in ethics, governance, transparency and accountability," including data and AI usage. This is a strong foundation on which to extend similar standards to GEO. (Author's note: I was fortunate to contribute to this update as a Director and member of the AMEC subcommittee.) Another excellent example is the Global Data Quality Initiative, a cross-association effort designed to address ongoing and emerging risks to data quality across the market research, social research, consumer insights, and analytics industry.
Of course, principles without implementation often fall flat. Many new GEO solutions are emerging from marketing and public relations functions grounded in promotion and positioning, and may not have deep experience in AI training, bias mitigation, or ethical context engineering. These represent quite different mindsets and approaches.
A strong set of GEO ethical principles should therefore include robust training in comprehensive ethical and trusted AI and data governance standards.
Clarity and communication from the agentic platforms themselves regarding GEO would also help. In 2008, for example, Google formed a search quality team under Matt Cutts to act as a liaison between the search engine and optimizers and to help elevate ethical standards. Indeed, during that earlier search engine reputation management era, we held transparent and open conversations with Matt and his team to help ensure ethical approaches.
Combining the industry and LLM efforts can create a powerful ethical AI partnership for GEO. And none too soon.
Generative AI is redefining how information is found, trusted, and acted upon. Those who learn from the SEO wars of the past—and resist the siren song of shortcuts—will lead the way in this new era of digital brand management.
Just as early SEO pioneers learned that authentic authority beats artificial tricks, today's GEO leaders will learn that transparency, integrity, and consistent brand management are the most powerful optimization strategies.
The lesson is simple: you cannot game trust. In the age of generative engines, trust is the ultimate ranking factor, and one we must place at the center of any effort to influence LLMs. Once lost, trust generally cannot be regained. And that is something none of us can afford to let happen.
About the Author
Rob Key is Founder & CEO of Converseon, a leading "trusted AI" technology (with a focus on NLP) and consumer intelligence firm. He serves as a Director of AMEC, the International Association for the Measurement and Evaluation of Communication, which is an active member of the Global Data Quality Initiative. He can be reached at rkey@converseon.com.