
Reflections On 2025: A Conversation With Doug Guion

Written by Yabble | December 18, 2025

The past 12 months have marked an inflection point for the insights industry. At the start of 2025, AI featured heavily in conversations but wasn't being used widely in practice. Experimentation was fragmented, confidence was uneven, and many leaders were still unsure what role AI could take in research and decision-making. 

By the end of the year, the conversation had moved on. 

Rather than debating whether AI belonged in market research, organizations were focused on how to apply it with discipline, accuracy, and intent. This shift was not driven by a sudden leap in model capability. According to Doug Guion, Product Growth Lead at Yabble, it came from something more pragmatic: a clearer understanding of the different types of AI that exist and the tasks each is best suited to. 

Doug has spent the year working closely with global brands navigating AI adoption. His view is that 2025 reshaped expectations, not because AI suddenly became magical, but because teams became more deliberate about how they used it.

 

The limits of generic LLMs for insights work

At the start of 2025, large language models were often conflated with AI itself. Many teams assumed these general-purpose models could be applied directly to research tasks, expecting fast, reliable insights from a single prompt. Early experiments quickly highlighted the limitations. LLMs are optimized for coherent, plausible language, not for analytical accuracy, transparent sourcing, or structured reasoning. 

Doug notes that the initial disappointment led some organizations to question AI as a whole. “People blamed AI when the output did not meet research standards,” he recalls. “The problem was not the technology. It was applying the wrong tool to the job.” 

As the year progressed, teams became more selective. They began distinguishing between tasks suited to creative language generation and those requiring rigorous evidence and methodological care. The conversation shifted from whether LLMs could do everything to which type of AI is appropriate for market research. 

Doug observes that by the end of 2025, senior teams were starting to use LLMs strategically: for summarization, ideation, and exploratory drafts, but not as a replacement for research-grade interpretation. He emphasizes that confidence in AI depends on matching the right tool to the right objective. Leaders who understand this distinction can leverage AI to accelerate work without sacrificing accuracy or trust.

 

Data quality moved to the center of the conversation

In January 2025, many organizations were still focusing their AI discussions on capability. The prevailing question was whether AI tools were powerful enough to generate insights. Far less attention was paid to what those tools were drawing from, or how confidently their outputs could be trusted once deployed in real decision-making contexts. 

That changed quickly. 

As AI usage expanded, scrutiny of data inputs intensified. Synthetic content became more visible across the public internet, while traditional panels began facing well-documented challenges with AI respondents and contamination. According to Doug, this exposed a structural weakness in how some teams were thinking about AI-driven insights. 

“If an AI tool is cherry-picking answers from any one particular source or set of sources,” Doug explains, “there’s a strong probability that it could be wrong, just in general terms.” 

Over the course of the year, brand-side expectations hardened in two clear ways. 

First, there was a growing demand for diversified sourcing. Teams became more skeptical of systems that relied on narrow inputs or prioritized convenience over coverage. Insights increasingly needed to reflect the relative weight of perspectives across a broad dataset, rather than amplifying fringe or highly visible viewpoints.

Doug likens this to evaluating claims in context. “If you’re looking at the shape of the Earth and there are a couple of sources saying it’s flat, the AI doesn’t have to assume that’s true. It needs to understand how small that view is relative to everything else.” 
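
To make that weighting idea concrete, here is a deliberately minimal Python sketch. It is our illustration rather than a description of Yabble’s pipeline or any specific tool, and the function name and toy corpus are invented for the example. The point is simply that a claim’s weight is its share of all the sources that address the question, so a fringe view registers as marginal instead of being amplified.

from collections import Counter

def claim_weights(source_claims: list[str]) -> dict[str, float]:
    # Each claim's share of all the sources that address the question.
    counts = Counter(source_claims)
    total = sum(counts.values())
    return {claim: n / total for claim, n in counts.items()}

# Toy corpus: what 1,000 sources say about the shape of the Earth.
sources = ["spherical"] * 997 + ["flat"] * 3
for claim, weight in claim_weights(sources).items():
    print(f"{claim}: {weight:.1%} of sources")
# Prints "spherical: 99.7% of sources" then "flat: 0.3% of sources":
# the fringe view is present in the data, but visibly marginal.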

Second, transparency became a baseline requirement. Leaders wanted to see where information came from and how it was evaluated so they could apply their own judgment. Outputs that could not be interrogated were treated with caution, particularly when the stakes of decisions were high. 

Doug sees this shift as a sign that AI is being taken more seriously, not less. “Companies are starting to understand that it isn’t magic,” he says. “There’s criteria you can apply. You have to ask what the task is, what the terrain looks like, and what evaluative standards you need to use to decide whether the AI solution is actually fit for purpose.” 

By the end of 2025, data strategy had become inseparable from AI strategy. Teams were no longer asking whether AI could generate answers quickly. They were asking whether those answers were grounded in diverse, current, and inspectable sources.

The implication for 2026 is clear: organizations that invest in sourcing discipline, contextual synthesis, and transparency will be better positioned to use AI with confidence. Those that do not risk producing outputs that look convincing but fail under scrutiny, often too late to correct course. 

For insights leaders, the priority moving forward is not simply adopting AI tools, but establishing clear standards for what qualifies as usable data and defensible evidence in an AI-enhanced workflow.

 

AI personas went from disbelief to experimentation

At the beginning of 2025, the idea of AI personas was met with near-total skepticism across the insights industry.

Doug recalls that the concept was often treated as an “impossible object,” something that sounded interesting in theory but could never function in practice. Most conversations focused exclusively on why AI personas would not work, why they could not work, and why they should not be trusted. 

What changed over the course of the year was not universal acceptance, but the quality of the debate. “The disbelief didn’t disappear,” Doug explains. “It became more nuanced.” Teams began to recognize that not all AI personas are the same, and that different approaches carry very different implications. 

 

As the year progressed, two broad classes of personas started to emerge.

The first centered on recreating the respondent. These systems use AI to generate synthetic respondents, digital twins, or entire synthetic panels designed to complete surveys in place of real people. While some providers have made technical progress, Doug notes that these approaches continue to generate significant skepticism among global brands.

Quantitative research is designed to extract information from humans so it can be projected onto a population. Having AI answer surveys, in his view, amounts to building an imputation machine rather than learning something genuinely new.

“If you already know what’s missing in the dataset, you don’t need fake people to give you the answer.” 

 

The more durable progress came from a different direction. Rather than attempting to simulate individuals, some teams began applying AI to persona development in a way that mirrors traditional segmentation. These AI-driven personas function as intelligent audience frameworks built from known data.

Like segmentations, they cluster attributes to represent groups a brand wants to understand, but they differ in two important ways. They can draw from vastly more diverse sources, and they are not fixed at the moment of creation. 

Doug sees this as the real shift in thinking. Traditional personas freeze understanding in time. If something changes after they are built, they have no way to incorporate it. AI-driven frameworks, by contrast, can retrieve new information, enrich themselves, and respond to novel ideas as the world evolves. This allows teams to explore reactions to concepts that do not yet exist in the market, much like a real person would.

“You don’t need prior data to have an opinion. You react based on what you care about and what matters to you.” 
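
For readers who think in code, the distinction can be sketched in a few lines. The sketch below is an assumption-heavy illustration, not Yabble’s implementation: it treats personas as cluster centroids over respondent attributes, exactly as a traditional segmentation would, and uses scikit-learn’s MiniBatchKMeans only because its partial_fit method makes the update step visible. The attribute names and data are invented.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Toy attribute matrix: rows are respondents, columns are normalized
# attributes such as price sensitivity or sustainability interest.
rng = np.random.default_rng(0)
attributes = ["price_sensitivity", "sustainability", "brand_loyalty"]
initial_data = rng.random((500, len(attributes)))

# Fit an initial segmentation, much like a traditional cluster analysis.
model = MiniBatchKMeans(n_clusters=4, n_init=3, random_state=0)
model.fit(initial_data)

def describe_personas(model, names):
    # Summarize each cluster centroid as a persona profile.
    for i, center in enumerate(model.cluster_centers_):
        profile = ", ".join(f"{n}={v:.2f}" for n, v in zip(names, center))
        print(f"Persona {i}: {profile}")

describe_personas(model, attributes)

# Unlike a static segmentation, the model can be enriched as new
# observations arrive, letting the personas shift with the real world.
new_data = rng.random((50, len(attributes)))
model.partial_fit(new_data)
describe_personas(model, attributes)

In a real system the attributes, the number of segments, the data sources, and the cadence of enrichment would all be research decisions rather than defaults. The only point the sketch makes is that an update step exists at all, which is precisely what static personas lack.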

 

By the end of 2025, AI personas are no longer treated as fantasy, but they are also not being embraced indiscriminately. Brands are actively exploring them, with board-level mandates and real budgets, while becoming far more discerning about which approaches create value.

Doug believes this distinction will define how personas are used in 2026. Attempts to manufacture synthetic people will continue to face resistance. Intelligent, data-driven frameworks that evolve with the real world are far more likely to earn trust and become embedded in how teams think, test, and decide.

 

What Doug expects to define 2026

Looking ahead, Doug sees three shifts accelerating as AI moves from experimentation into day-to-day operations across insights, marketing, and innovation teams.

 

Brands will begin mining the data they already have

Doug believes many global organizations are only just realizing the scale of the data they already possess. Years of research, customer feedback, operational data, and internal studies have accumulated across regions and business units, largely unused because unifying it was expensive, slow, and prone to error. 

AI has changed that equation.

What was once technically difficult is now far more practical, making it possible for teams to interrogate their own data as a single, connected asset. Doug expects this to materially reduce the amount of foundational research brands need to commission externally. 

This does not mean research agencies lose relevance, but their role will change. Brands will call on them less frequently for baseline understanding and more often for high-stakes, strategic questions. As Doug puts it, agencies will no longer be “the sole arbiter” of consumer understanding. That responsibility will increasingly sit inside organizations themselves, supported by AI.

 

AI personas will become a normal part of early-stage decision-making

Doug expects AI personas to become a standard tool on the ideation end of the spectrum. When teams are faced with many possible directions and limited time, personas provide a way to narrow options, stress-test ideas, and explore positioning before committing to real-world testing. 

Crucially, Doug does not see this as a replacement for human research. “It won’t be either-or,” he says. “It will be a matter of sequencing.” Teams will use AI personas to explore and refine ideas quickly, then validate selectively with real consumers where confidence or precision is required. 

This shift allows decisions to be made earlier, more confidently, and in real time. Instead of waiting for discrete research cycles, teams can iterate continuously while maintaining rigor where it matters most.

 

Innovation cycles will compress across the organization

The third change Doug expects is a significant shortening of the distance between R&D, product development, and market activation. AI has already accelerated prototyping, coding, and creative development. Doug believes the same acceleration will extend across insight generation, testing, and iteration. 

By combining internal data, historical research, and AI personas grounded in both public and proprietary sources, teams will be able to work continuously across markets and time zones. Iteration will happen in real time, in any language, without waiting for sequential handoffs between functions. 

This does not eliminate the need for research, but it reshapes it. Research will remain critical, but it will no longer act as a gate that everything must pass through. Instead, it becomes one part of a faster, more integrated decision system, with AI absorbing work that once slowed teams down. 

Doug’s expectation for 2026 is not that research disappears, but that it evolves. The organizations that adapt to this shift will move faster, test more ideas, and reach the market with greater confidence than those that continue to operate in discrete, linear cycles.

 

Closing thoughts

By the end of 2025, the insights industry is no longer preoccupied with whether AI has a place in research. That question has largely been settled. What remains unresolved, and increasingly important, is how AI is integrated into real decision systems without weakening standards, trust, or accountability. 

Doug’s reflections make it clear that this year was less about technological breakthroughs and more about clarification. Teams learned that not all AI is interchangeable. Generic language models have value, but limits. Data quality, sourcing, and transparency determine whether outputs can be used with confidence. AI personas can accelerate thinking, but only when they are grounded in robust data frameworks rather than attempts to simulate individuals. Across each theme, progress came from sharper judgment rather than broader adoption. 

2025 was the year expectations solidified. Budgets were allocated, mandates were set, and internal conversations shifted from experimentation to execution. In 2026, the differentiator will not be access to AI tools, but the discipline with which they are deployed. Organizations that treat AI as an operational capability, with clear standards around evidence, sequencing, and decision ownership, will extract sustained value. Those that treat it as a shortcut risk undermining the very confidence they are trying to build. 

The work ahead is practical rather than conceptual. It involves rethinking data strategy, redefining the role of research, and embedding AI into workflows in ways that support faster, better decisions. That is the challenge Doug sees shaping the year to come, and it is where the next phase of competitive advantage will be built. 

 

If you want to explore how to apply AI to your market research thoughtfully and at scale, get in touch with the team at Yabble for a chat about your 2026 insights plans. Book a meeting here.