B2B software buyers: the canary in the coal mine for AI Search
By Arjan ter Huurne, March 19, 2026
For months, the debate around AI Search has been stuck in a false binary.
One camp says: “AI is taking over search.” The other replies: “Calm down. Google is still vastly bigger.”
Strictly speaking, the second camp is right. Google remains enormous. Google itself says it now handles more than 5 trillion searches annually. Meanwhile, Graphite’s recent analysis argues that AI usage is much larger than most marketers assume, estimating that AI sessions now account for 21% of total search worldwide when web and app usage are combined, while also making the important point that search has not fallen away: the overall discovery pie has grown.
See the full Graphite.io research for further analyses.
But commercially, that is not the most important question.
The more important question is this:
Which audiences are changing behaviour first, and what does that signal for everyone else?
That is where B2B software buyers matter so much.
G2’s research has been pointing in one clear direction. In its 2025 buyer behaviour reporting, G2 said that roughly three in ten software buyers were already starting research with AI Search more often than with Google. By early 2026, G2 was saying that half of B2B software buyers now start their search with an AI chatbot instead of traditional Google search. (research.g2.com)
That is the signal.
B2B software buyers are the canary in the coal mine.
They are not representative of every category. They are more digitally fluent, more research-heavy, and more comfortable using ChatGPT, Gemini, Claude or Perplexity to structure a buying decision. But that is precisely why they matter. They are often the first visible cohort to show what broader consumer behaviour will look like later.
In other words: they are early, not irrelevant.
Why this matters beyond B2B software
This shift should not be dismissed as a niche quirk of tech buyers.
B2B software is a category where the buying journey is already information-dense, comparison-led and trust-sensitive. Those are exactly the conditions under which AI Search becomes useful. A buyer can ask for a shortlist, compare trade-offs, explore integration concerns, test assumptions, and keep refining the query conversationally. That is simply a better research interface for many decision journeys.
And those same mechanics are now spreading far beyond software.
Healthcare. Travel. Finance. Education. Retail. Professional services. High-consideration consumer purchases. In all of these categories, more journeys now involve an AI layer before, during, or after traditional search.
So yes, Google is still bigger.
But that is not the point.
The point is that buyer behaviour is fragmenting across search engines, chat interfaces, assistants, answer engines and AI-native recommendation layers. If you are waiting until AI referral traffic in your analytics dashboard looks “big enough”, you are likely waiting too long.
The measurement problem: the black box between influence and attribution
This is where many leadership teams get stuck.
They hear the market moving. They see anecdotal evidence. They notice prospects referencing ChatGPT or Gemini in calls. They test prompts themselves and realise that brands are being compared, interpreted and recommended inside these systems.
Then they open GA4.
And the numbers often look underwhelming.
For many businesses, traffic attributed directly from tools like ChatGPT is still a very small percentage of sessions. In many cases it is below 1%. In some stronger early-adopter cases it may be a few percentage points. But it often does not look remotely as large as the behavioural shift happening upstream in research and evaluation.
That disconnect is creating a dangerous illusion.
Because what GA4 captures well is click-through traffic. What it captures far less effectively is AI influence on consideration, perception, shortlist formation and brand framing before the click ever happens.
That is the black box.
A prospect may:
- discover your category in Google,
- shortlist vendors in ChatGPT,
- pressure-test options in Claude,
- compare implementation risk in Gemini,
- visit your site directly later,
- convert via branded search, direct, or CRM nurture.
In reporting, that final conversion may appear as direct, branded organic, paid brand, or even email-assisted. The generative layer that shaped the decision can disappear almost entirely from view.
That does not mean the generative layer was unimportant. It means your attribution model is incomplete.
Why post-conversion surveys are making a comeback
This is why a very old measurement discipline is suddenly becoming strategically relevant again: asking customers how they actually found and evaluated you.
Post-conversion surveys used to be treated by some teams as soft, messy, secondary data. In the age of AI Search, they are becoming essential again.
Because the question is no longer just:
“Which channel delivered the last click?”
It is also:
“Which interfaces shaped the buyer’s understanding, shortlist and preference before conversion?”
The brands that will understand AI Search earliest are not the ones staring hardest at referral reports. They are the ones combining:
- quantitative analytics,
- CRM and pipeline data,
- qualitative buyer research,
- post-conversion self-reported attribution,
- prompt visibility measurement, and
- competitive benchmarking inside AI systems.
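One of those inputs, post-conversion self-reported attribution, can be analysed with very little tooling. Below is a minimal sketch that buckets free-text “How did you hear about us?” answers into discovery channels so AI-led journeys stop disappearing into “direct”. The channel names and keyword lists are illustrative assumptions, not a validated taxonomy; real survey coding would need review of the unclassified bucket.

```python
# Minimal sketch: bucket free-text "How did you hear about us?" survey
# answers into discovery channels. Keyword lists are illustrative
# assumptions, not a validated taxonomy.

CHANNEL_KEYWORDS = {
    # Order matters: the first channel whose keyword matches wins.
    "ai_search": ["chatgpt", "gemini", "claude", "perplexity", "copilot"],
    "search": ["google", "bing", "searched", "search engine"],
    "social": ["linkedin", "twitter", "youtube"],
    "referral": ["colleague", "friend", "recommended", "referral"],
}

def classify_response(text: str) -> str:
    """Return the first channel whose keywords appear in the answer."""
    lowered = text.lower()
    for channel, keywords in CHANNEL_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return channel
    return "unclassified"

def channel_counts(responses: list[str]) -> dict[str, int]:
    """Tally how many survey answers fall into each channel bucket."""
    counts: dict[str, int] = {}
    for response in responses:
        channel = classify_response(response)
        counts[channel] = counts.get(channel, 0) + 1
    return counts

if __name__ == "__main__":
    sample = [
        "Asked ChatGPT for a shortlist of vendors",
        "Googled alternatives to our current tool",
        "A colleague recommended you",
    ]
    print(channel_counts(sample))
```

Even a crude classifier like this, run monthly over new customer surveys, makes the gap between self-reported AI discovery and GA4-attributed AI traffic visible as a number rather than an anecdote.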
That is a much more serious way to approach the market.
From vanity visibility to business measurement
At PromptMarketing, this is exactly where we think the discipline needs to mature.
It is easy to obsess over whether your brand appears in ChatGPT once or twice for a handful of prompts. That is not enough.
AI Search measurement has to move beyond novelty screenshots and into something more rigorous:
1. Map the prompt universe
Identify the real prompts that matter across the buyer journey:
- category discovery,
- comparison,
- use case evaluation,
- objections,
- implementation concerns,
- pricing/value framing,
- alternatives and substitutes.
2. Measure visibility by buying stage
Not every prompt has equal commercial value. Being visible on an early educational query is different from being recommended on a high-intent comparative query.
3. Measure representation, not just mention rate
A mention can be good, neutral, misleading, or actively damaging. “Visibility” without context is a poor KPI.
4. Connect prompt performance to commercial outcomes
Tie findings back to pipeline quality, lead source narratives, win/loss data, sales-call language, branded search lift and conversion performance over time.
5. Track competitors in the same generative environments
Your brand is not being evaluated in isolation. AI systems are constantly framing relative strengths, weaknesses, use cases and trade-offs between you and your competitors.
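Steps 2, 3 and 5 above reduce to a measurable core: for each buying stage, how often is each brand (yours and competitors’) mentioned in the answers a model returns? The sketch below tallies mention rates per stage from already-collected answers. The brand names and answer texts are stubbed placeholders; in practice the answers would come from querying each AI system across the mapped prompt universe, and mention detection would need to be more robust than a substring check.

```python
# Minimal sketch of stage-aware visibility benchmarking: given model
# answers already collected for a prompt set, tally how often each brand
# is mentioned per buying stage. Brand names and answers below are
# stubbed placeholders, not real data.

from collections import defaultdict

# (buying_stage, model_answer) pairs — stubbed example data
ANSWERS = [
    ("discovery", "Popular options include AcmeCRM and FlowDesk."),
    ("comparison", "For mid-market teams, FlowDesk is often preferred over AcmeCRM."),
    ("comparison", "AcmeCRM integrates well with most stacks."),
]

BRANDS = ["AcmeCRM", "FlowDesk"]

def mention_rates(answers, brands):
    """Share of answers per stage that mention each brand."""
    totals = defaultdict(int)          # answers seen per stage
    hits = defaultdict(int)            # mentions per (stage, brand)
    for stage, text in answers:
        totals[stage] += 1
        for brand in brands:
            if brand.lower() in text.lower():
                hits[(stage, brand)] += 1
    return {
        (stage, brand): hits[(stage, brand)] / totals[stage]
        for stage in totals
        for brand in brands
    }

if __name__ == "__main__":
    for (stage, brand), rate in sorted(mention_rates(ANSWERS, BRANDS).items()):
        print(f"{stage:<12} {brand:<10} {rate:.0%}")
```

Splitting the tally by stage is the point: a competitor winning 80% of high-intent comparison prompts matters far more than winning the same share of early educational ones.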
That is where AEO and GEO become commercially meaningful.
AEO and GEO matter, but they are not the whole story
The industry is converging around terms like AEO (Answer Engine Optimisation) and GEO (Generative Engine Optimisation). Those labels are useful. They signal that visibility is shifting from classic ten-blue-links SEO toward answer surfaces, citations, entity understanding and model-mediated recommendation.
And yes, brands need to take both seriously.
But there is a risk in treating AEO or GEO as merely the next distribution tactic.
Because the real issue is not only whether AI systems can find your brand. It is whether they understand your brand correctly.
That is a much higher bar.
The deeper problem: AI representation
This week, I spoke with a CMO who described genuine anxiety about how AI chatbots were portraying her brand.
Not because the brand was invisible.
Because it was being placed in the wrong corner of the market.
Wrong competitors. Wrong positioning. Wrong strengths. Wrong frame.
That is a strategic problem, not just a discoverability problem.
A brand can have decent visibility in AI Search and still be harmed if the models consistently:
- oversimplify its proposition,
- associate it with the wrong category,
- benchmark it against the wrong peers,
- miss its premium positioning,
- flatten meaningful differentiation,
- or repeat outdated market perceptions.
That is why representation matters as much as reach.
Marketing leaders have spent decades shaping how brands are positioned across websites, PR, analyst relations, reviews, case studies, campaigns, and category narratives. Now a new interpretive layer sits between that work and the audience.
AI systems are not just retrieving information. They are compressing, synthesising and narrating what your brand is.
If that narrative is off, performance suffers upstream and downstream.
How do you change course?
You do not fix poor representation by trying to “game the chatbot”.
You change it by improving the quality, consistency and machine-legibility of the evidence layer around your brand.
That means working across:
- site architecture and structured data,
- entity clarity and category definitions,
- positioning language consistency,
- third-party validation and reviews,
- comparative content and proof points,
- digital PR and authority signals,
- knowledge graph reinforcement,
- market-facing documentation, and
- prompt-level testing of how the brand is being interpreted.
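The first two items, structured data and entity clarity, have a concrete form: publishing schema.org markup so crawlers and AI systems get an unambiguous, machine-readable statement of what the brand is and which entities it corresponds to. A minimal sketch, with every field value a placeholder rather than a real entity:

```python
# Minimal sketch of the structured-data item above: emitting schema.org
# Organization JSON-LD for entity clarity. All field values here are
# placeholders, not real entities.

import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",                       # canonical brand name
    "url": "https://www.example.com",
    "description": "One-sentence positioning statement for the brand.",
    "sameAs": [                                    # entity disambiguation links
        "https://www.linkedin.com/company/example",
        "https://en.wikipedia.org/wiki/Example",
    ],
}

# Embed the output inside <script type="application/ld+json"> on the site.
print(json.dumps(organization, indent=2))
```

The `sameAs` links are doing the entity-clarity work: they tie the brand name on your site to the same entity elsewhere on the web, which reduces the odds of models conflating you with a similarly named company or placing you in the wrong category.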
This is where PromptMarketing’s Prompt Intelligence pillar becomes critical.
For us, that includes work such as:
- Sentiment Analysis: how positively, accurately and confidently is the brand described across relevant prompts?
- AI Competitive Benchmarking: which competitors are recommended more often, more favourably, and in which buyer contexts?
- Brand Proposition Matching: how closely does AI’s description of your brand match your intended market position and real-world proposition?
These are not academic exercises. They are practical diagnostics for fixing misalignment between the brand you believe you are and the brand AI systems currently present.
The real strategic shift
The most important shift for marketers is this:
We are moving from ranking management to interpretation management.
In classic SEO, the central question was often: “How do I rank?”
In AI Search, the more consequential questions are:
- Am I surfaced?
- Am I cited?
- Am I compared against the right alternatives?
- Am I represented accurately?
- Am I recommended in moments that matter?
- Can I connect that influence back to growth?
That is a broader, more strategic discipline.
And it is one reason I believe “Prompt Marketing” is a useful frame for what comes next. Because the market no longer revolves only around search queries and click-throughs. It increasingly revolves around prompts, model interpretation, answer inclusion, recommendation logic and machine-mediated brand understanding.
Final thought
B2B software buyers are not the whole market.
But they are showing us where the market is going.
When half of a high-value buying audience starts research in AI Search, the right response is not to say, “Yes, but Google is still bigger.”
The right response is to ask:
What does this tell us about the future of discovery, evaluation and brand choice?
The answer is that AI Search is already part of the customer journey, even when our dashboards understate it.
And the brands that win will not be the ones that merely appear.
They will be the ones that are visible, measurable, and represented correctly.