FiredUp! - The Startup Marketing Podcast
FiredUp! is the show for marketers working in early and late-stage startups. Each week, we walk through fresh strategies and tactics to build brand and drive demand for your startup. Featuring interviews with marketing leaders, our take on the latest trends, and practical tips about PR, content marketing and growth marketing, we promise plenty of signal with some noisy fun along the way.
FiredUp! is hosted by the team at startup marketing agency Firebrand. Learn more at firebrand.marketing today.
AI Search Monitoring Using Scrunch with Kevin White
Traditional SEO rankings are no longer enough to guarantee visibility as B2B buyers shift their research to AI engines like ChatGPT, Perplexity, and Claude. For many startup marketers, these platforms represent a blank space where traditional tracking tools fail to provide clear insights. In this episode, we dive into the fundamental shift from deterministic search to probabilistic AI platforms with Kevin White, head of marketing at Scrunch. This week, episode 135 of the FiredUp! podcast is about AI search monitoring using Scrunch!
Download the Multiplier Marketing Megapack today. Exclusive offer for all our listeners — get all our Startup Guides in one go! Over 50 pages of advanced tips and advice that dive deep into content marketing, search advertising, and marketing attribution.
In this episode of the FiredUp! podcast, Kevin White shares why it's important to measure and report brand performance over time despite the probabilistic nature of AI search, along with actionable steps you can take right now to adapt as more information on AI search becomes available.
Kevin White is the head of marketing at Scrunch AI, where he’s building visibility infrastructure for the post-LLM web — analytics and optimization that show brands how they’re being represented inside ChatGPT, Perplexity, Claude, and Google AI Overviews. Before Scrunch, Kevin spent a decade marketing for the companies that defined the modern operator stack — Common Room, Retool, and Segment — and has advised teams at Ashby and Deepnote.
Kevin, Morgan, and Alastair discuss:
- SEO vs. AI Search Monitoring: Understand the shift from deterministic rankings (static lists of links) to probabilistic platforms (AI answers that change based on the prompt). To win in 2026, you must monitor citation performance across thousands of synthetic prompts to see if your brand is consistently recommended.
- The Power of Fan-Out Queries: Learn how to move beyond simple keyword searches. Successful AI search optimization involves testing fan-out queries—the natural, conversational questions your buyers are actually asking—to understand how AI engines perceive your brand's authority and relevance.
- PR and Content Synergies: AI search visibility isn't just a technical task; it's a content and PR challenge. LLMs rely heavily on high-authority sources and third-party mentions. Strengthening your earned media and byline programs is now a direct lever for improving your rankings in ChatGPT and Perplexity.
- Structured Data Still Matters: While AI is smart, it still craves organization. Implementing robust structured data and clear site architectures helps AI agents crawl and "digest" your information more effectively, increasing the likelihood of being cited as a primary source.
Thank you for listening! Tune in to all the episodes for practical tips on crushing your startup marketing goals. Don’t forget to follow, rate, and review the podcast, and tell us your key takeaways!
CONNECT WITH KEVIN WHITE:
CONNECT WITH FIREBRAND:
Firebrand is a startup marketing agency. We help tech startups secure outsized marketing outcomes on their path to growth.
X (Formerly Twitter)
5.5.2026
SEO has clear rankings, but AI search works differently. So today we are talking to Scrunch about AI search monitoring and how brands can improve their visibility in ChatGPT, Perplexity, and other AI engines. Hello, everyone. Welcome to FiredUp!, the podcast for marketers working in early and late-stage startups. My name is Morgan McLintic, and today I'm joined by my co-host, Alastair Nee, who's our head of digital. Alastair, how are you?
Alastair Nee:I'm doing well. Hi everyone.
Morgan McLintic:And we are joined by Kevin White, who is Head of Marketing at AI visibility monitoring company Scrunch. Kevin, welcome to FiredUp! How are you doing?
Kevin White:Hello, I'm fired up. I'm ready to get going. Excited to get into things.
Morgan McLintic:Great stuff. We're happy to have you here. So first up: AI search works differently from SEO. Can you tell us a little bit about how Scrunch thinks about visibility monitoring?
Kevin White:This question comes up a lot: what's the difference between AI search and SEO? There are lots of different camps, where people say it's really just SEO repackaged, or a new way of doing SEO, versus, hey, it's a completely different thing. I think the truth lies somewhere in the middle. There are two things that help people get it: one is understanding how it works, and the other is how that ties into metrics. The biggest difference I see between AI search and traditional search is that AI platforms, LLMs, are probabilistic, meaning you can ask the same question at the same time, even as the same user within the same session, and get a different answer. That's actually a feature of the platforms, not a bug, and it trips up a lot of people at first, because when you ask the same thing in Google, it shows your site in the same spot over and over again within those rankings of the top 10 blue links. With AI search, that's definitely not the case. And because of that, it changes the metrics you look at and the way you go about tracking AI search, because you're never going to show up 100% of the time. So essentially, the way to track and measure AI search, and whether your brand or your products are performing well or not, is to look at it not as a point in time but holistically over a period: for the same prompt sets, the same clusters of prompts where you'd want to show up, are you growing over time? Are your competitors growing over time? How are you comparing? You look at daily snapshots to get a more holistic view of how things are trending. And to put things in perspective, a really great brand will show up probably 60 to 70% of the time.
So that is actually something people aren't used to. Everyone wants to show up 100% of the time, and it's never the case, so that's a hurdle people need to get over. It also helps when your CEO asks, hey, why aren't we showing up for this search I did? You can explain that it's probabilistic. Getting over that hurdle and the conceptual side of things really helps make it click.
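The over-time tracking Kevin describes can be sketched with a toy aggregation. The data and field names here are hypothetical, not Scrunch's actual schema; the point is that you average repeated runs per day rather than judging any single response:

```python
from collections import defaultdict

def citation_rate(runs):
    """Fraction of prompt runs in which the brand was cited.

    `runs` is a list of booleans, one entry per prompt execution:
    True if the brand appeared in that response.
    """
    return sum(runs) / len(runs) if runs else 0.0

def daily_rates(observations):
    """Aggregate (day, cited) observations into a day -> rate series."""
    by_day = defaultdict(list)
    for day, cited in observations:
        by_day[day].append(cited)
    return {day: citation_rate(runs) for day, runs in sorted(by_day.items())}

# The same prompt re-run several times a day: presence fluctuates run
# to run, but the trend across days is what matters.
obs = [
    ("2026-05-01", True), ("2026-05-01", False), ("2026-05-01", True),
    ("2026-05-02", True), ("2026-05-02", True), ("2026-05-02", True),
]
rates = daily_rates(obs)  # day 1: 2/3, day 2: 3/3
```

Comparing these daily rates for your brand and your competitors over weeks, not single snapshots, is the "holistic over a period" view Kevin is describing.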
Alastair Nee:So Kevin, given these AI search platforms don't actually share any usage data, how exactly is Scrunch able to measure a brand's visibility and citation performance over time? I believe it's based on a set of synthetic prompts. And how do brands figure out which prompts actually matter and which ones feed into the tool?
Kevin White:Yeah, that's a good question, because with personalization, and again the probabilistic nature of things, the results look different over time. So we've put a lot of thought into how we capture and track this data, even to the extent of simulating personas, a marketing persona versus, say, a finance-buyer type of persona, and regional IP address mapping and things like that. It's never going to be perfect. It's never going to be the exact thing someone would see, because we don't have data on that. We can't see the actual individual user and what they're searching, and the AI platforms definitely don't show any of that, at least not yet; maybe that will come with advertising. So we put a lot of thought into how we represent a search you would typically see, in a way that represents your brand in search the right way. We have multiple ways of doing that. One is through various IP mappings. Another is through panel data, which comes from people who have essentially opted in and are showing you their search results across Google, ChatGPT, and other platforms. That's a way you actually see some user data, and then you have to approximate that globally to understand the prompts people are asking and how your brand is showing up. So there are some ways to get around it, but it's never the whole spectrum of things, and it's not a perfect footprint of everything. The alternative, though, is to bury your head in the sand. And this is the point I like to make: just because you can't measure things perfectly doesn't mean you shouldn't be measuring them or trying to look at them at all. That is not an acceptable answer when your CEO or your board is asking how you're showing up on these platforms.
So this is the best thing we've got, and as the platforms evolve and share more of this data over time, we'll make it more transparent, reportable, extensible, all that kind of stuff within Scrunch.
Morgan McLintic:So I have my different personas. I choose, okay, for this persona, here are the, I don't know, 50 prompts we think they're going to ask, and maybe I can sense-check that against my sales call data, or the panels you mentioned, because I want to dial those in to be top, middle, and bottom-of-funnel prompts that each persona is going to ask. And then you guys are pinging the different LLMs regularly, whatever the cycle or cadence is, monitoring the response that comes back, and saying, okay, when I asked this yesterday you didn't show up, but you're showing up today. That's basically how you're reporting over time, so that makes good sense to me. But how do I know that when I ask this question... because people are going to word the prompts differently. How exact-match are the LLMs? Or are they clustering them and saying, okay, everything that's a bit like this gets this kind of response? How fuzzy are those?
Kevin White:Yeah, if I think back on the prompts I use myself: I use a voice product called Whisper, where I'm just holding down a button and talking to an LLM, so my prompts are like 50 words. You're never, ever going to match that perfectly, so that's a lost cause. The way the LLMs actually take that intent and break it down is through something called fan-out queries, or what Microsoft Bing Webmaster Tools calls grounding queries, and they actually expose some of this data within Bing Webmaster Tools, which is pretty cool to see. This gives you a sense of, okay, when something is asked like this, the LLM will chunk it into, call it, five to a dozen different fan-out queries. Essentially, it'll do multiple searches on those fan-out queries, breaking the prompt down into the different components of the actual intent behind it, then gather information in real time across those search findings or its training data, and pull it back to the user. That's a bit of a technical thing, and you don't need to get super hung up on it; we'll help you expose some of the common fan-out queries. It's actually helpful to hook up Bing Webmaster Tools and look at the fan-out queries you're showing up for; that's a really good source of information.
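Conceptually, once a long conversational prompt is decomposed into fan-out queries, tracking reduces to checking brand coverage per sub-query. A rough sketch; the query strings and domains are invented, and no real LLM is being called here:

```python
def fanout_coverage(fanout_results, brand):
    """Report where a brand shows up across a prompt's fan-out queries.

    `fanout_results` maps each fan-out query to the list of domains the
    engine cited for it, mimicking the grounding-query data that Bing
    Webmaster Tools exposes.
    """
    return {query: brand in domains for query, domains in fanout_results.items()}

# A long spoken prompt, hypothetically decomposed by the engine into
# its component intents:
results = {
    "best enterprise CRM software": ["salesforce.com", "g2.com"],
    "CRM SOC 2 compliance": ["salesforce.com", "vendor-docs.example"],
    "CRM scaling to millions of users": ["reddit.com", "hubspot.com"],
}
coverage = fanout_coverage(results, "salesforce.com")
```

The gaps in `coverage` show which components of the buyer's intent your brand is missing, even though no one ever types those exact sub-queries.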
But essentially, the strategy and practice behind it is to cluster things by topic. Salesforce is the classic example: for a topic like enterprise CRM software, what are the five things people are asking within it? Which platforms have SOC 2 compliance? Which platforms scale to multiple millions of users? Chunk out those different questions and you have a topic cluster, and then you get the footprint of things someone would ask, even though they're not going to ask it that exact way. That's enough of a proxy, or weighted measure, to see whether you're actually showing up in those results or not. So that's how we suggest people track prompts: you're never going to get the perfect prompt someone is typing in. Maybe occasionally you'll hit it, but the volume there is going to be very ambiguous. It's best to think about what people are asking, build topic clusters around those themes, and start tracking that. Now, we also have another product our data science team has built that will tell you, okay, you're tracking all these prompts, and when we analyze them, these ones are redundant and you don't need to track them anymore, because you're showing up with the same citations over and over again. It's actually a counterintuitive product, because we charge based on consumption, which is prompt tracking, but we're essentially...
Morgan McLintic:...saying, cut those out. But you get more value out of knowing, okay, these two overlap, and you're basically asking the same question in two different ways.
Kevin White:Yeah, exactly. So we have some confidence interval there where we can say, cut these prompts. That's helpful and reassuring, and it also helps with the efficiency side of things, so you're not trying to track...
Morgan McLintic:...all this stuff.
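The redundancy pruning Kevin mentions, collapsing prompts that keep surfacing the same citations, can be approximated with a citation-set overlap measure. This is only an illustrative sketch with an invented threshold and data, not Scrunch's actual method:

```python
def jaccard(a, b):
    """Overlap of two citation sets (1.0 means identical sets)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def redundant_prompts(prompt_citations, threshold=0.8):
    """Flag prompt pairs whose citation sets overlap above `threshold`.

    `prompt_citations` maps each tracked prompt to the set of sources
    cited in its responses; highly overlapping pairs are candidates
    for pruning from the tracking budget.
    """
    prompts = list(prompt_citations)
    flagged = []
    for i, p in enumerate(prompts):
        for q in prompts[i + 1:]:
            if jaccard(prompt_citations[p], prompt_citations[q]) >= threshold:
                flagged.append((p, q))
    return flagged

citations = {
    "best enterprise CRM": {"salesforce.com", "g2.com", "gartner.com"},
    "top CRM platforms": {"salesforce.com", "g2.com", "gartner.com"},
    "CRM SOC 2 compliance": {"vanta.example", "salesforce.com"},
}
pairs = redundant_prompts(citations)
```

Here the first two prompts cite identical sources, so one of them can be cut; the compliance prompt surfaces different citations and stays in the set.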
Alastair Nee:That makes a lot of sense. So mentions are really good, but citations are typically seen as even more valuable for brands, right? One thing we at Firebrand have been doing pretty heavily is drilling into the citations reporting within Scrunch, and we find it really useful for understanding things like where our competitors are winning more citations than us, which third-party sources tend to be cited for key topics we might also want to be mentioned and cited alongside, uncovering exact URLs, that sort of thing. What else are you seeing brands do with that citation data that's been surprising and interesting?
Kevin White:Yeah, a couple of things there. One, with the citation data you can expose, via our API especially, is to ask which citations are declining, disappearing, or coming online. Instead of just looking at the full list of citations, you can see, oh, this competitor dropped off that citation; that might be a source for us to overtake or tap into. Or we dropped off: what are we doing wrong, what do we need to fix? So looking at citation trends, what's dropping off and what new citations are coming online, over a time window rather than at a point in time, is pretty helpful for identifying new sources the AI is referencing, and sources that are declining. It's a tough game out there, because you have to be really adaptable and things move really fast in the results, so having that visibility into what's improving and what's declining over time is super helpful. The other thing with citations is a misconception, or a myth: that Reddit is the end-all be-all, or that LinkedIn or Wikipedia is the end-all be-all. If you actually look at what's being cited for the prompts you care about, usually at the max Reddit is about 5% of the mix, and then there's this longer tail of citations that really matter. And if you add filters for intent and persona and other things, you'll often find that Reddit dominates on top-of-funnel types of prompts, and even by dominating, I mean around 5% of citations; it's not a lot. Then if you drill down into the higher-intent types of prompts and what's being cited there, it's sometimes something niche that you aren't aware of.
So just exposing those sources of truth the AI platforms are using, and the volume at which they're citing those sources, is really telling, and oftentimes a quick win for a lot of our customers.
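The window-over-window comparison Kevin describes reduces to a set difference between citation snapshots. A minimal sketch with invented domains:

```python
def citation_deltas(earlier, later):
    """Compare cited sources across two time windows.

    Returns (new, dropped): sources that appear only in the later
    window, and sources that disappeared from it.
    """
    a, b = set(earlier), set(later)
    return b - a, a - b

# Citation snapshots for the same prompt set, a month apart.
last_month = {"forbes.com", "g2.com", "old-roundup.example"}
this_month = {"forbes.com", "g2.com", "fastcompany.com"}
new, dropped = citation_deltas(last_month, this_month)
```

`new` is the list of fresh sources worth pursuing, and `dropped` flags stale citations where you (or a competitor) fell out of the mix.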
Alastair Nee:Right. I think, surprisingly, content strategy is being shaped in part by what we're learning from the citation sources: looking at specific URLs that get cited a lot, which key topics those map to, and which persona. Working alongside content teams, SEO people, and GEO people, we've been finding some really good synergy there, and probably even as much or more so with PR, for finding industry-specific publications that are being cited where you could perhaps have a paid element to also get mentioned and cited. So it's been really instrumental for that.
Kevin White:Yeah, exactly. You can identify, on both volume and authority, which citations are the right citations, and that gives you a hit list by either domain or the specific URL itself. Oh, we should be partnering with Forbes and Fast Company or whatever, because those are the most cited sources within the prompts we care about. You can stack-rank the publications you want to be partnering with, which is really helpful, because that's what the AI sees as the authoritative source.
Alastair Nee:Yeah, I'm sure the PR people love having a little more of something to actually report on, which is notoriously difficult.
Morgan McLintic:That's been... yeah, brilliant, speaking as the PR guy. PR has died several times during my career, and we've come back to life again, so I'm happy about that. So the way Scrunch works, it's polling these prompts against all the different models, and there are lots of different models: ChatGPT and the models within it, Perplexity, Gemini, et cetera. That's interesting data you can get back, and the models do vary. But I wonder, in a practical sense, should brands be thinking about optimizing for a particular model? Is the state of play of, I'm going to call it GEO, there yet, such that I can take these actions that DeepSeek really likes, or that Claude loves compared to the others?
Kevin White:We have a product called Agent Traffic, and essentially what it does is identify the bots crawling your websites and tell you at what frequency, what pages they're consuming, all this kind of stuff. What's actually happening there, when you think about it on the user side, is a user is prompting whatever LLM model they're using, and then that agent goes and fetches data from your sites. So if you can identify the top, say, three models crawling your site, it's probably a pretty good proxy for the models you should be focusing on. ChatGPT is definitely dominating consumer, but Claude's coming up fast, and people use Perplexity a lot. I think what matters is getting visibility into which models matter. If you can identify those top three models by looking at the traffic coming to your site, then you can look at all the different cuts and filter by those models to inform your citation acquisition strategy or your content strategy. So that's the way I would say it: go in, inspect, and see what people are actually using. You can look at referral traffic too, but some of the models don't expose that, so the best proxy we've found is looking at the actual agents that are visiting your site.
Alastair Nee:Yeah, I think that's good advice, because depending on your product or service, your best users may be primarily using ChatGPT for their e-commerce research, whereas a B2B SaaS buyer might typically be using Perplexity. So it's a classic case of understanding who your audience is and what they're using, and then trying to optimize for that specific model, right?
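Outside a tool like Agent Traffic, a first pass at spotting AI agents is a user-agent tally over your server access logs. The crawler tokens below (GPTBot, OAI-SearchBot, ClaudeBot, PerplexityBot, Google-Extended) are published by the respective vendors, though the list is partial and the log format here is simplified:

```python
from collections import Counter

# Known AI crawler user-agent substrings (a partial, illustrative list).
AI_BOTS = ["GPTBot", "OAI-SearchBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def count_ai_agents(log_lines):
    """Tally visits per AI crawler from raw access-log lines."""
    hits = Counter()
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1
    return hits

# Simplified access-log lines with the user agent at the end.
logs = [
    '1.2.3.4 - - "GET /pricing HTTP/1.1" 200 "Mozilla/5.0 ... GPTBot/1.0"',
    '5.6.7.8 - - "GET /docs HTTP/1.1" 200 "Mozilla/5.0 ... ClaudeBot/1.0"',
    '9.9.9.9 - - "GET /blog HTTP/1.1" 200 "Mozilla/5.0 ... GPTBot/1.0"',
]
top = count_ai_agents(logs)
```

The two or three most frequent agents in `top` are a reasonable proxy, per Kevin's advice, for which models deserve your optimization focus.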
Kevin White:And it changes frequently, too. We're in B2B SaaS, and now everyone is using Claude and Claude Code, so we're seeing a lot more Claude visits because of the space we're in. But I'm sure the consumer side of things is still pretty heavy on ChatGPT. So again, go in and inspect and see what it is, and also make sure to monitor over time, because it does change pretty frequently right now, until the market kind of hardens, which is probably a couple of years out.
Alastair Nee:So I have a question. In SEO, Google and Bing, to some degree, tell us when their algorithm has had a major update, or even minor updates, and they provide at least some information on what has changed and what that might mean for ranking equity for domains. With LLMs, that's not really the case. They're constantly evolving and rolling out new versions, but they don't really tell us when they make those changes. So how do we interpret the data when the models change underneath us? What does that typically look like in Scrunch? Are there peaks and valleys we can attribute, or again, is it just monitoring over time?
Kevin White:Yeah, I always like to say there's not a MozCast for AI yet, which is unfortunate; it would be nice to get some pulling back of the curtain and know how the algorithms work. I mean, they do share regular updates: we launched this new model, we have this new product feature, all that kind of stuff. So we get some visibility, but not much direction on how the algorithm works. I'm a big proponent of, one, monitoring that stuff over time and seeing how things change; looking at trends is definitely part of the right kind of AI search strategy. But also, going out and trying things for yourself is, I've found, the best way to learn what works: looking at the data, then creating content, writing FAQs, trying citation acquisition, going in and carving off a corner of what you can improve, and then looking at how what you've changed compares to the stuff you haven't changed on your site, or the citation acquisition you haven't done. That is oftentimes the best proxy for what works and what doesn't, and you tend to learn what the models prefer over time if you're being hands-on and doing the actual work. I also like to say: don't listen to me, don't listen to experts. Go out there and do the work yourself, because that's what's going to get you to learn this stuff the fastest.
Alastair Nee:Yeah, I like that. And along the lines of at least understanding, hey, there's a new version of this model rolled out, and we know a little bit about how it functions with query fan-out. For example, we know these models really love structured data that's easier for machines to read, as opposed to straight HTML copy that humans read. So given that, we would do exactly that: roll out, let's say, some more schema markup. We know these models are ingesting information and deciding what to mention and cite because they like structure, so let's make sure the structure on our domain, our website, is really good and easy for them to read. And then, as you said, let's see if that has caused any movement, constantly rolling out tactics and seeing what really works. Because it's a classic case of, like you said at the beginning, two schools of thought, where a lot of people say, hey, AI optimization, GEO, is just SEO, and others say it's completely different. Really, you've got to find out which tactics that are new and different actually do move the needle in this new space.
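As one concrete form of the structured data Alastair mentions, FAQ content can be annotated with schema.org FAQPage JSON-LD. A small generator sketch; the Q&A text is invented, the output would go in a `<script type="application/ld+json">` tag, and whether any given engine rewards it is exactly the kind of thing Kevin suggests testing for yourself:

```python
import json

def faq_jsonld(pairs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)

markup = faq_jsonld([
    ("Does the platform have SOC 2 compliance?",
     "Yes, SOC 2 Type II, audited annually."),
])
```

Each question/answer pair becomes a self-contained, machine-readable chunk, which lines up with the "make your content chunkable" advice that follows.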
Kevin White:Yeah, yeah, exactly. There are lots of go-to tactics people quote that are not substantiated, but if you go out and learn it yourself, you typically find out quickly. Refresh your content at a regular rate, have authorship, make your content chunkable, add schema markup, add FAQs: these are all things people say work, but you have to go out and test them yourself. And on the SEO side of things, if you're doing SEO, you're probably showing up pretty well in AI search, but you definitely want to go verify that. There are differences, and you'll find those differences by going out there and doing the work.
Morgan McLintic:You had said that everyone was very excited about Reddit as a massive source for AI search, but actually maybe it's not being used that much. What are you seeing in terms of general buckets of sources, and how that's changing over time? Are there any trends there?
Kevin White:Yeah, we've seen a lot, and we have a study coming out; maybe by the time this is published it will be out. It's on LinkedIn and how LinkedIn is becoming more of a source for citations, and also what works in terms of getting citations, but also in terms of getting more reactions and distribution, which is a fascinating topic for me as someone who's on LinkedIn a lot. So that's another channel we see come up often. And one way to think about it, rather than looking at specific domains themselves, is grouping them into categories, which is something you can do with a product like Scrunch: these are the social publishers, these are the news publishers, these are the niche publishers, these are review sites, these are competitors. Grouping things into those kinds of clusters is helpful for seeing which cohorts are getting the most traction, the most citations, the most authority, and that informs your strategy, versus looking at any one individual source. Sometimes there is one individual source that's just really great, and that also tells you, this is a good partnership for us. But oftentimes it's a mix of things, so that kind of abstraction of grouping things together is a helpful way to look at things.
Alastair Nee:Absolutely, I think that's such a useful way of using citations, grouping them by type, because we know, I think pretty safely, that AI models are going to look for patterns across the web. They're not just going to pull content from any one domain, be it your brand's domain or otherwise, when deciding your authority and whether you're worthy of being mentioned or cited. They're going to look for a brand's presence across multiple places on the web: social media, as you mentioned, LinkedIn, Reddit, and then your traditional earned media. So really, what we've seen, and what we're touting now for our clients, is that good GEO certainly starts with good SEO, but you're also going to need a sort of multi-channel marketing strategy, including content syndication across owned, earned, and paid sources. If you have a broad strategy there, you're consistent with your brand voice, you're targeting the right topics, and you become a trusted voice, that seems to be the thing that really helps with increased visibility.
Kevin White:Yeah, yeah. That's another good point too: if you become the cited source, you can control your own destiny, in a way, which is why you might weight citation strategy, especially for your own domain, higher than just showing up in prompts or not. It depends, though. Some CPG brands, for example, are never going to have a ton of authority on the domain side of things, so they need to partner with lots of different publishers and invest in brand awareness and things like that. So it really depends on your unique part of the market, I guess.
Alastair Nee:Yeah, I love the idea of weighting the citation sources. That's a really useful way to leverage the data coming through the citations tool in Scrunch: if we notice certain domains having a high citation consistency and influence score, then we're obviously going to put more time, effort, and perhaps marketing budget behind getting a bigger presence on those domains and URLs.
Morgan McLintic:So you said that for a lot of brands, a 60% citation rate is really good. And I've found that when you monitor SEO stats, you take actions, maybe publish more content, and you can see it consistently go up in a predictable way. Of course it's competitive. With AI search, you can take these actions as well, but it tends to be more level; it's not, okay, I did this and then it just keeps stacking up. And I wonder whether that's because it's a competitive market and everybody else is doing the same things, or because the models are changing, or just because these things are slightly less responsive: you have to do the right things, but it's not going to go up like a hockey stick. Just give me a bit of insight into why that's happening.
Kevin White:Yeah, you do see that. On the longer time window, you want to see things trending up, and it definitely ebbs and flows a lot. What might be happening, and what we've seen happen, is that you get a strong brand presence quickly, or you see things move, and then it drops the next day, and it ebbs and flows quite a lot. What I think is happening there is that for the citation sources and the models, the time window, the half-life of what they're referencing, is pretty short, so you need to always be looking for the next new thing to partner with. It's a pretty exhausting strategy, really: I need to show up in these new citations that are appearing, because the old ones that are stale aren't doing it for me anymore. So where you might see this leveling off is just this constant hamster wheel of trying to keep up, which is unfortunate, because no one wants to work like that, but it is how the models work. So you do need to be in that really adaptive, quick-moving kind of motion, from what we've seen. I think over time it's going to get more Google-like: they're going to identify the more credible sources, and it's going to be more durable. But right now, and we have a study on this too, the half-life of a citation, meaning how long an LLM will cite it, is like 30 to 40 days, and for bigger, notable publishers maybe double that, but that's still a pretty short time window compared to SEO, you know?
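Kevin's 30-to-40-day figure suggests roughly exponential decay in a citation's influence. Under that assumption (the decay model itself is our simplification, not something the study states), the weight a source retains after a given number of days works out as:

```python
def retained_weight(age_days, half_life_days=35.0):
    """Fraction of a citation's initial influence remaining after
    `age_days`, assuming simple exponential decay. The 35-day default
    is the midpoint of the 30-40 day half-life range Kevin cites."""
    return 0.5 ** (age_days / half_life_days)

# One half-life leaves 50% of the influence; two half-lives leave 25%,
# so a citation from ten weeks ago carries roughly a quarter of the
# weight of a fresh one.
w_fresh = retained_weight(0)   # 1.0
w_one = retained_weight(35)    # 0.5
w_two = retained_weight(70)    # 0.25
```

This is why visibility plateaus without constant refreshes: gains from any single citation largely evaporate within a couple of months.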
Morgan McLintic:it's
Alastair Nee:almost like the way these models decide who and when a source is worthy of the mention or citation. In a way, it's almost more democratized than traditional crawlers, I think, which is great for the smaller brands, who can actually get their content more visible in AI search much faster after publishing than they would in Google or Bing. We've actually seen that in practice, like you roll something out for a client, a new piece of content that's well optimized, it actually can be mentioned inside it quite quickly. So I guess fundamentally, that's nice, but it can make the reporting aspect at the sort of top level in scrunch a little hard to decipher at times. However, once you dive into the different filtering capabilities, which we really. Love, like the key topic areas, or by persona or look really drill into each prompt, the picture is much more clear and then, therefore more actionable when you see Oh, around this particular key topic called security vulnerabilities, right? Let's take that as an example. We've seen that our citations inventions were really strong one month and the next couple months have dropped. So now I've been knowing you to take action on the specific prompts therein and also expand upon them. So I think it can be quite an actionable and useful way to look at it when you use the filtering,
Kevin White: Yeah, it is very telling if you go to that level of detail: what is the prompt response, what are the citations, at that very granular level. And I'll also confirm the recency point: the models do tend to cite content very fast. You publish something and two days later it's showing up in your citations, and you're like, wow, this is great. But at the same time, it will also filter that out fast. So you just always need to be up to date on all this stuff and have the most recent type of content, which is again a challenge, but it's also a competitive advantage if you can do it the right way.
Morgan McLintic: Just sort of related then, on content publishing frequency: one of the products that you have is the agent experience platform, which is like a sort of cache of the website for LLMs. So tell us a little bit about that, for those that are unfamiliar. And I guess a related question: could I just publish AI-generated content there at high frequency? Have you experimented with that? Is that a good strategy?
Kevin White: Let me start with the premise of the product. In a zero-click world, people are chatting with AI to get their answers, which means they're visiting your website a lot less. The exchange there is that the agents from these models are visiting your site on behalf of the user. So where you might once have dismissed bot traffic, bot traffic is actually something you definitely need to pay attention to nowadays, because oftentimes there's a human with intent behind it. The premise of our agent experience platform is to, one, identify that bot traffic at the edge layer with your CDN, like Akamai or Cloudflare or Vercel. And because the agents visiting your site don't consume content the way a human would (they don't handle JavaScript or media-heavy content, or a lot of the superfluous code that's typically on a website optimized for the human experience), we then serve an optimized version of that page. We're not generating new pages, per se. We're taking the pages already created on the human side and, at a very fundamental level, serving a mirrored version of the HTML within that page, but in a way the AI can consume without all that superfluous code. It typically takes the token count of consumption down by about two orders of magnitude, so from hundreds of thousands of tokens down to thousands. That helps with AI crawlability, with consumption, all of that. The other thing we can do with this product is take the intent of the page and optimize the content there, with FAQs, with internal knowledge, with other stuff, without degrading the intent of the page. That is something you would definitely not want to do.
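The edge pattern Kevin describes could be sketched roughly as follows. This is a hypothetical illustration, not Scrunch's actual implementation: the agent signature list uses real, publicly documented AI crawler user-agent tokens, but the stripping rules are simplified stand-ins for what a production system would do.

```python
import re

# Hypothetical sketch: detect AI agents by User-Agent at the edge and serve
# a slimmed-down mirror of the page (scripts and styles removed), so the
# agent spends far fewer tokens consuming it.

AI_AGENT_SIGNATURES = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended")

def is_ai_agent(user_agent: str) -> bool:
    """True if the User-Agent string matches a known AI crawler signature."""
    ua = user_agent.lower()
    return any(sig.lower() in ua for sig in AI_AGENT_SIGNATURES)

def slim_html(html: str) -> str:
    """Strip script/style blocks and collapse whitespace (toy example only;
    real pipelines would use a proper HTML parser, not a regex)."""
    html = re.sub(r"<(script|style)\b[^>]*>.*?</\1>", "", html, flags=re.S | re.I)
    return re.sub(r"\s+", " ", html).strip()

def serve(user_agent: str, full_page: str) -> str:
    """Serve the mirrored slim page to agents, the full page to humans."""
    return slim_html(full_page) if is_ai_agent(user_agent) else full_page
```

Note the slim page is derived from the human page, mirroring Kevin's point that the product does not generate new content, only a leaner rendering of what already exists.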
If you completely change the intent of the page, you'll definitely get penalized. And with great power comes great responsibility, so we've put safety guards in our product so that's not something our customers can do. It's a way to give the AI more context on the page and its intent, while preserving the human experience when a human does visit the page.
Alastair Nee: So Kevin, what are some of the bigger trends in AI search monitoring right now? What's coming around the corner for the market as a whole and for Scrunch specifically?
Kevin White: Yeah, one thing I'm really excited about, as a growth marketer with a paid advertising background, is that, at least with Google and with OpenAI, there do seem to be paid solutions or pay-for-performance products coming out. It's already rolling out on ChatGPT, and right now it's very early stages, but in the future I don't think advertisers will put up with not having transparency around search volume, click rates, all that kind of stuff. So we are going to get more exposure to what volumes look like and how the models work, more exposure to the algorithm. I'm very much looking forward to that, and it's something we will support within Scrunch. The other thing I'm seeing a lot of, and everyone's talking about, is Claude Code agents, OpenClaw, things working on your behalf. I feel like this is actually something to pay attention to, especially because those agents can go crawl your site, operationalize things on your website, fill out forms, check out, do shopping-cart type things, get information. Those agents may look different in the future, or there might be a sharding where it's not just OpenAI, it's the OpenAI agent or the Claude agent. So having visibility and tracking that stuff is going to be super important. And on top of that, it's making sure you're adhering to the standards and protocols so the agents can go do agentic things on your site: instead of just crawling the content, they can actually take action. That's a trend I'm looking forward to and that we're looking to enable at Scrunch, starting with our AXP product.
Morgan McLintic: Wonderful. Yeah, we're definitely keen on the extra visibility you'll get from paid, because obviously people are going to want to see the competitiveness there and the volume they're getting. So my final question, then: for a brand that's getting serious about AI search, what's some advice they maybe haven't heard already?
Kevin White: Yeah, maybe repeating myself here, but I think the best advice is honestly just to go out there and experiment yourself. Don't just listen to a bunch of experts. I mean, listen to experts, but take that information and go experiment with the insights they're providing. That's always my piece of advice for how to get started: go in there, get hands-on, start learning these products and platforms. You don't have to use a product like Scrunch. You can use your own spreadsheet; you can probably vibe-code some of it. There are certainly a lot of products like Scrunch out there, and it certainly helps if you have something like ours that will do all the heavy lifting and manual work for you. But yeah, my advice is to go out there, run some experiments, and learn by doing. That's the best thing you can do.
Morgan McLintic: Great advice. So Kevin White is the Head of Marketing at Scrunch. Kevin, thanks so much for joining us and sharing some advice here today. If people want to reach out to you, where should they go?
Kevin White: The best place is LinkedIn. We'll probably drop the link in the show notes. You could also just Google Kevin White, sorry, Kevin White Scrunch, and I should show up there. LinkedIn is where I do most of my pontificating and posting and all that kind of stuff. So please follow me there.
Morgan McLintic: Wonderful, Kevin. Thanks so much for your time today. Great conversation.
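For listeners who want to act on Kevin's "just go experiment" advice, a minimal self-scored monitor might look like this. The brand name, sample responses, and scoring rule are made up for illustration; in practice you would feed in real responses collected from whatever AI platforms your buyers use.

```python
# A minimal "vibe-coded" monitor in the spirit of Kevin's closing advice:
# run your prompts through an AI platform, save the response text, and
# score brand presence yourself in a spreadsheet or script.

def citation_rate(responses: list[str], brand: str) -> float:
    """Share of responses that mention the brand at all (case-insensitive)."""
    if not responses:
        return 0.0
    hits = sum(brand.lower() in r.lower() for r in responses)
    return hits / len(responses)

# Hypothetical responses for a made-up brand, "Acme".
sample = [
    "Top options include Acme and two open-source tools.",
    "Many teams pick an incumbent vendor for this.",
    "Acme is often cited for security monitoring.",
]
print(f"Acme citation rate: {citation_rate(sample, 'Acme'):.0%}")  # prints 67%
```

Tracked weekly per topic or persona, even a crude rate like this surfaces the ebb-and-flow pattern discussed earlier in the episode.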