In this webinar, I sat down with Wil Reynolds (founder of Seer) for a candid conversation about what he is seeing across hundreds of clients, how he is using AI day-to-day, and what it takes to keep a 200+ person agency moving while the ground keeps shifting.
Wil and I have known each other a long time. We used to compete head-to-head in agency pitches back in the Distilled days. What made this conversation different is that we have built a real friendship over the years, including swapping voice notes and talking through the messy, human side of leadership. That closeness made the interview feel less like a polished Q&A and more like two founders thinking out loud in public, pulling on threads and following them where they went.
This was not a tactical walkthrough or a neat framework session. It was a raw discussion about incomplete data, changing user behavior, internal experimentation, and the human cost of keeping up when the pace of change feels both exhilarating and exhausting.
What follows is a recap of the main themes from that conversation.
Key takeaways
- Be careful with sweeping AI traffic claims. What looks like a universal trend is often an industry mix issue, so segment your data before you panic or pivot.
- People are putting brands into prompts. Some users ask LLMs for answers through a brand lens (e.g. 'What have McKinsey and Deloitte said?'), which shifts the game toward brand demand, not only rankings.
- AI can turn non-technical marketers into builders. Tools like Claude Code shrink the gap between 'I wish we had a script' and 'here is a working prototype', speeding up internal and client-facing experimentation.
- Real adoption is happening inside teams, even if leaders cannot see it day-to-day. Systems like Seer's 'grab-and-go' board help surface, approve, and ship many small AI solutions across the org.
- The winning edge is mindset plus habits, not tools alone. Staying employable means treating AI learning like a second job, being honest about uncertainty, and building a culture of momentum and accountability.
Why 'AI traffic is down' can be true and false
Wil opened with a warning I want more people to internalize: broad claims about AI traffic often say more about the dataset than the world.
He has cross-client data broken down by industry, and his view did not match the recent 'ChatGPT traffic is down' narrative. In his numbers, overall volume was strong, while some verticals (like SaaS) were down. So two people can look at real data and walk away with opposite conclusions, because their sample is skewed.
The point was not 'trust my numbers'. The point was: default to nuance. Segment before you generalize, and treat industry-level differences as the starting point, not an edge case.
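The mix effect Wil describes is easy to demonstrate with toy numbers (the figures below are invented for illustration, not from Seer's data): the aggregate can be up while a single vertical is sharply down, so the same dataset supports opposite headlines depending on how you slice it.

```python
# Toy illustration: an aggregate trend can mask opposite trends in
# individual verticals, which is why you segment before you generalize.
traffic = {
    # vertical: (last_month_visits, this_month_visits) — invented numbers
    "ecommerce": (120_000, 150_000),
    "media":     (80_000, 95_000),
    "saas":      (60_000, 48_000),   # down 20% on its own
}

def pct_change(before, after):
    """Month-over-month change as a percentage."""
    return (after - before) / before * 100

overall_before = sum(before for before, _ in traffic.values())
overall_after = sum(after for _, after in traffic.values())

print(f"Overall: {pct_change(overall_before, overall_after):+.1f}%")
for vertical, (before, after) in traffic.items():
    print(f"  {vertical}: {pct_change(before, after):+.1f}%")
```

With these numbers, overall traffic is up roughly 13% even though SaaS is down 20%: a SaaS-heavy sample and a balanced sample would tell two different stories about "AI traffic".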
From performance certainty to incomplete measurement
Our industry grew up on measurement. We sold search and digital as the antidote to billboards because we could track outcomes.
Now we are stepping into a moment where some of the most important behavior is happening in places we cannot track cleanly: LLM conversations, prompt phrasing, brand mentions inside the question itself. Wil described the discomfort of watching an industry built on proof face a world where the data is incomplete.
That forces a change in posture. You still measure what you can, but you stop pretending your dashboards capture all of reality.
Watching humans use LLMs changes the game
One of the most useful parts of the conversation was Wil describing UX researchers observing real people using LLMs.
In a small panel of professionals, he saw a striking behavior: many people put brands directly into their prompts. Not 'return to office best practices', but 'what have McKinsey and Deloitte said about return to office'.
If people are using your brand name as a filter, then brand salience becomes part of 'search' again. It is also the kind of thing most teams are not tracking, which is exactly why it matters.
Adoption charts are boring, edge behaviors are gold
We talked about adoption curves and the usual headline that ChatGPT is the fastest-adopted technology by several metrics.
The more interesting thread was the group of people who use an LLM once a month, or a few times a month, but are not daily users. Wil instantly went into researcher mode: who are they, what do they use it for, and what problem is valuable enough to return for, but not valuable enough to become a habit?
That question is a good pattern for avoiding hype. Stop repeating the graph everyone has seen. Start hunting for the weird segments that reveal where real change is happening.
'No user manual' and why founders take to this stuff
LLMs are a weird product category because they feel like software with no onboarding. You get a box, it looks like search, and you have to invent the workflow.
That may be one reason founders and senior operators adopt AI quickly. Running a company is often turning intent into action through words. You describe what you want, people go do it, you iterate. LLMs fit that same loop, except the bottlenecks change.
Sometimes that means founders move faster. Sometimes it means the gap between the people experimenting and the people waiting gets wider.
Claude Code: 'I do not wait anymore'
Wil was blunt about how his day-to-day changed: Claude Code made him more impatient.
He keeps it open constantly and uses it to build scripts, prototypes, and small systems that he would not have built before. He is not calling himself a classic technical SEO. He is describing the reality that the barrier between 'idea' and 'working version' has dropped.
His favorite move is taking the transcript of someone explaining why something is hard, pasting it into Claude Code, and getting something usable back. That changes behavior. Instead of waiting for bandwidth, you ship a rough version and test it.
Client feedback first, internal refinement second
One subtle shift Wil described is where he shows early work.
He used to build things and show them internally, then eventually show clients. Now he often shows the rough version to clients first, because they are the ones who pay and their feedback tells you what matters. The team can refine and standardize later.
That approach will not fit every culture. Still, in a period where the surface area is changing weekly, the ability to get a fast 'does this matter?' answer is a competitive advantage.
The 'grab-and-go' board and the illusion of slow adoption
I asked about scaling adoption inside an agency. Is it chaos? Is there standardization? Is it uneven?
Wil shared a story about Seer's 'grab-and-go' board, where people post AI or automation ideas, division leaders approve them, and anyone can try building them. He also admitted he sometimes felt progress was slow based on what he saw in Slack and in his immediate orbit.
Then an internal email showed something like 35 people had built and executed AI solutions in a single month. His perception was wrong because he was not looking at the system that captured all the activity. That is a leadership lesson: experimentation often happens quietly, so build mechanisms that make it visible without making it performative.
The AI interviewer: better content, faster approvals
Wil's most concrete example was an internal tool that interviews subject matter experts (SMEs). It calls the SME, asks strong questions, records the conversation, then outputs usable drafts and rewrites in Markdown.
The big payoff is not only speed. It is fidelity. The content reflects what the SME actually said, so approvals move faster and accuracy improves. Wil shared that a long-tenured SME said the questions were so good he wanted them in advance because he was getting stumped.
This is the kind of AI use case I expect to stick: it reduces organizational drag, pulls expertise out of the heads of senior people, and turns that into assets teams can publish with confidence.
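The shape of that tool is simple enough to sketch. The code below is a hypothetical skeleton of the interview-to-draft step, not Seer's implementation: `ask_llm` is a stand-in for whatever model call you would wire up, and the function and field names are assumptions.

```python
# Hypothetical sketch of an SME-interview pipeline: take the recorded
# (question, answer) pairs from the call and emit a Markdown draft that
# preserves what the SME actually said. Names here are illustrative.

def ask_llm(prompt: str) -> str:
    # Placeholder for the model call that generates interview questions
    # or polishes answers; wire this to your LLM provider of choice.
    raise NotImplementedError

def draft_from_interview(topic: str, qa_pairs: list[tuple[str, str]]) -> str:
    """Turn (question, answer) pairs into a Markdown draft skeleton."""
    lines = [f"# Draft: {topic}", ""]
    for question, answer in qa_pairs:
        lines.append(f"## {question}")
        lines.append(answer.strip())
        lines.append("")
    return "\n".join(lines)

# Example with canned answers (no live model call) to show the output shape:
qa = [
    ("Why does this matter now?", "Because the channel mix is shifting."),
    ("What do teams get wrong?", "They optimize rankings, not demand."),
]
print(draft_from_interview("Return-to-office search behavior", qa))
```

The key design choice, in line with Wil's point about fidelity, is that the draft is anchored to the SME's own answers rather than generated from scratch, which is what makes approvals faster.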
Paid media: 'the fastest way out is to pay'
We shifted into paid and what happens over the next couple of years.
Wil's view was pragmatic: many teams still have KPIs built for the old world. When organic is volatile and leadership still wants targets hit, paid becomes the fastest route to stability.
He also pointed out a worrying signal: many brands are paying more to convert on their own branded terms, which should be the easiest traffic. If brand CPCs rise across many accounts, that is a tax on demand.
Ads in LLMs: maybe inevitable, maybe a mistake
We talked about the obvious incentive: there is massive money in figuring out ads inside LLM experiences.
Wil's view was that if he were running an LLM company, he would avoid ads as long as possible. Not because he is anti-ad by principle, but because clean UX creates contrast with Google as Google squeezes more revenue out of the SERP and makes the experience worse.
Once LLMs show the same kind of ads users hate, the differentiation collapses. Then it is all the same game again.
Competition is back, and it feels like both adrenaline and fatigue
A theme that kept returning was the emotional texture of this moment.
Wil described it as being exhilarated and exhausted at the same time. I relate to that. The tools are exciting, the pace is intense, and the fear of falling behind is real.
He also said something I think is a useful leadership posture: he prefers to feel behind, because complacency is how you get surprised.
'You have a second job now' and the cost of staying employable
Wil was also unusually direct about learning.
He tells his team they have a 'second job' right now: learning AI. He framed it as honesty, not punishment. When the pace of change accelerates, the old allocation of learning time is often not enough.
He even says versions of this to candidates: if you come from a company that blocked tools, you might be behind, and catching up may take intense effort early on. It is not pretty, but it is transparent.
Mindset: measuring giving, staying grounded, avoiding self-myths
We ended with mindset and leadership, which is where Wil and I often end up in private conversations too.
He talked about tracking thank you notes, tracking time spent helping team members, and comparing those numbers to how he feels month by month. It is a reminder that culture is not only values on a slide, it is behavior you can notice and measure in yourself.
He also talked about perspective: living in a city, seeing a community fridge in use, and letting that reality reframe what counts as a problem. Then there was the harder part: regularly revisiting moments where he did not stand up for someone, or took the easy path, as a way to avoid believing his own story about being better than he is. That level of self-scrutiny is intense, yet it matches the theme of the whole session: do not lie to yourself, and do not build your team on stories you cannot sustain.
A practical next step: Put Search in Control Mode with SearchPilot
One thread running through everything Wil said is that we are moving into a messier era: less clean attribution, more incomplete signals, and more pressure to make calls anyway.
If you want one place to bring some certainty back, search is a good candidate. SearchPilot makes SEO (and GEO) testable so leaders can move from guessing to knowing.
We run controlled experiments across category pages, product detail pages, navigation, and content, then report measured uplift with clear timelines and confidence levels. Teams progress from quick validation to a steady test cadence to full control, turning search into a performance channel you can plan and fund.
For ecommerce teams focused on product grids, Merchant Center feeds, and variant handling, the first step is a focused test plan. Measurement tracks impressions, clicks, and revenue so leaders can see the real impact.
Stop trying to predict the future. Experiment to discover it. If you want tailored test ideas for your top PLPs and PDPs, schedule a demo and we’ll share a starter list and a clear path from validation to velocity to control.