Search is getting tougher. Clicks can decline even when rankings look stable. And 'Google did it' is not a plan leadership will accept.
That was the setup for this SearchPilot webinar, where I was joined by Abby Gleason, Senior Product Manager for SEO at Upwork. Abby has a rare blend of experience: she came up through agency SEO, moved into product at Scribd, and now runs SEO in a true product operating model at Upwork, with an engineering team, a roadmap, and a steady drumbeat of experiments.
This session was about the system. How Abby picks what matters, writes it up so teams can ship, runs tests that stand up to scrutiny, and keeps growth moving even as the SERPs shift underneath us.
Below is a long-form recap of the main themes from that conversation, plus practical templates and examples you can borrow.
Key takeaways
- Agency experience can accelerate SEO skill growth, but in-house success depends on maintaining learning through communities and peer groups.
- PRDs are a practical bridge from SEO ideas to shipped changes, with a clear structure: problem, hypothesis, requirements, examples, timeline.
- Strong test ideas come from repeatable inputs: audits with checklists, friction mapping on key pages, SERP watching, competitor patterns, user research, and past wins.
- Run both SEO split tests and UX A/B tests so you can protect performance through algorithm noise and improve conversion when clicks decline.
- Communicate SEO in outcome language: stakeholders care about qualified clicks, signups, and revenue, not crawl mechanics.
Abby's path into SEO (and why it matters)
I asked Abby how she got here, because how someone learns SEO often shapes how they practice it.
Her story starts in generalist marketing. She was doing everything: social posts, blogging, traffic reporting, even packaging swag. One week she noticed something odd: traffic was rising even though she had not promoted new content via email or social. Digging into Google Analytics, she saw Google was driving the visits.
So she did what most of us did at some point. She Googled: 'How do you get traffic from Google?'
That led her into the classic rabbit hole: learning the basics, signing up for tools like Ahrefs, and getting hooked on the loop of making a change and seeing measurable impact. She described it as addictive, and I can relate. SEO is competitive, and there is a real satisfaction in moving a line on a chart.
From there, she spent about four and a half years at an agency, mostly on SaaS and ecommerce. Abby made a point I agree with: agency work can compress learning. You see multiple sites, multiple industries, and multiple weird problems, and you learn alongside other SEOs who are solving them too.
Then she moved in-house, first to Scribd as an SEO Product Manager, and later to Upwork. In product, the key difference for her was having a direct relationship with an engineering team and executing through sprints.
Her framing was simple: product is a framework. If you are a strong SEO strategist and you can work well with engineers, you can operate this way.
Why 'product SEO' is becoming the default
A theme that kept coming up was this: modern SEO is hard to scale without product thinking.
Large websites require more than keyword research and content briefs. You need a repeatable way to:
- choose what to build,
- explain why it matters,
- ship changes with engineering,
- measure impact through noisy conditions,
- and report results in ways the business understands.
Abby emphasized that being technical is not only about deep crawling details. It is also about being able to translate technical concepts into language that makes sense to non-SEOs. Agency work helped her build that translation muscle because client work forces clarity. You cannot hide behind jargon when someone is paying for outcomes.
At Upwork, she sits closer to the logged-out experience, and she cares about content quality and intent matching, but also user engagement signals: how people interact, click, and move through CTAs and FAQs. That blend of SEO and UX is not optional anymore, especially as top-of-funnel clicks get squeezed.
The PRD as the bridge between SEO ideas and shipped work
We spent a good chunk of the webinar on the practical mechanics of working with engineering. Abby has talked publicly about PRDs (product requirement documents), and I asked her how she applies that in an SEO context.
Her answer was refreshingly unglamorous: the PRD is a written explanation of the problem, the expected impact, and the requirements for the solution. It is how you make a change legible to engineers and stakeholders.
She described her planning process like this:
- Start with an annual plan, accept it will get disrupted quickly.
- Move to quarterly or monthly planning.
- Prioritize by effort and impact (a quick scoring sketch follows this list).
- For small tasks, write a Jira ticket (title tag tweaks, quick fixes).
- For larger projects (template changes, technical shifts), write a PRD.
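To make the effort-and-impact step concrete, here is a minimal scoring sketch. The task names, 1-5 scores, and the simple impact-divided-by-effort ratio are illustrative assumptions, not Abby's actual model:

```python
# Minimal prioritization sketch: rank a backlog by impact relative to
# effort. Scores are 1-5 gut estimates; the tasks and the simple
# impact/effort ratio are illustrative, not Abby's actual model.
backlog = [
    {"task": "Rewrite category page title tags", "impact": 4, "effort": 1},
    {"task": "Rebuild pagination on listing pages", "impact": 3, "effort": 4},
    {"task": "Add FAQ section to top landing page", "impact": 4, "effort": 2},
]

for item in backlog:
    item["score"] = item["impact"] / item["effort"]

# Highest score first: cheap, high-impact work rises to the top.
for item in sorted(backlog, key=lambda i: i["score"], reverse=True):
    print(f"{item['score']:.2f}  {item['task']}")
```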
Her PRD outline was straightforward (a copy-ready skeleton follows the list):
- Problem overview: what are we trying to solve?
- Hypothesis and expected impact: if we do X, then we expect Y, because Z.
- Why now: why this matters and why it should be prioritized.
- Requirements: what should change, what it should look like, what must not break.
- Examples or rough mockups: she mentioned using tools like v0.dev to create quick mockups when you need to show intent without being a designer.
- Timeline and steps: especially helpful for larger projects, so expectations are explicit.
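If you want to borrow the structure, here is a skeleton based on that outline. The section prompts are my paraphrase of Abby's description, not her actual document:

```
# PRD: [project name]

## Problem overview
What are we trying to solve? Include current performance data.

## Hypothesis and expected impact
If we do X, then we expect Y, because Z.

## Why now
Why this matters and why it should be prioritized.

## Requirements
What should change, what it should look like, what must not break.

## Examples or rough mockups
Screenshots, links, or quick mockups (e.g. from v0.dev).

## Timeline and steps
Milestones and owners, especially for larger projects.
```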
Then she shares the PRD asynchronously with the engineering lead, and they review it in a weekly call. Questions surface early, and if it is approved, it gets added to the roadmap and into a sprint.
The part I liked most was how she described product structure as a way to make work move smoothly. People know what comes next. They know what a good doc looks like. That reduces friction and speeds up execution.
Before the test: discovery is a loop, not a single brainstorm
Even before you get to SEO testing, there is a discovery phase. I pushed Abby on how she collaborates during that stage, because many SEO teams struggle with the messy middle between 'idea' and 'build'.
Abby said she tries to start from data, not vibes.
Her typical discovery flow:
- Start with impact concentration: identify top converting pages or subfolders. A small uplift on a high-impact page beats a big uplift on something low volume.
- Look outward: review competitor pages, patterns, and positioning.
- Use tools for gaps: she uses Ahrefs to spot topic gaps and see where competitors win.
- Look at the SERPs manually: Abby is a strong believer in looking at the results yourself, day in and day out. Tools help, but they do not replace pattern recognition. You also need to notice things like AI Overviews and the mix of SERP features.
- Pull in other signals: ask the team for reactions, scan what peers share, and sometimes use AI tools for research and synthesis.
- Reuse evidence: if she has old test results that resemble the current idea, she pulls them into the PRD as supporting evidence. This helps with prioritization and buy-in.
That last point is important. Most teams treat test results as a one-off report. Abby treats them as reusable ammunition for future prioritization decisions.
Where test ideas come from (and why 'best practices' is a trap)
I asked Abby where strong test ideas come from. She gave a list of inputs that is more systematic than what most teams use.
1) Self-audits with checklists
She takes a few hours, audits a section of the site, and writes down everything that looks improvable from an SEO or UX standpoint. She uses a checklist because there are too many factors to keep in your head. A checklist also keeps audits consistent over time.
This is where she often catches basics: pagination quirks, missing schema, weak internal links, template inconsistencies.
2) Friction mapping on high-impact pages
On key pages that drive a large share of SEO traffic and conversions, she writes down every point of friction. Not in theory. As the page exists today.
She shared a concrete example from a previous role: a CTA focused on 'Sign up for our trial'. Her instinct was that users do not want another subscription. What they wanted was the outcome: read a book free for 30 days.
So she changed the button copy to reflect the value prop. That single low-effort change lifted conversions by 13%, translating into mid six figures in revenue.
That example also matters for SEO teams because it shows the overlap: better engagement can support organic performance, not only conversion rate.
3) User research
Abby has worked with user research teams, and this has shaped her roadmap. She mentioned tools and approaches that gather real-time feedback, plus direct user interviews where participants try to complete tasks and highlight confusion.
One point I found interesting: in this round of research, they started users on the website (not at Google), but they positioned users as new visitors and asked what information they needed to build confidence before signing up. Abby created a PRD to define what they wanted to learn, and the research team designed the questionnaire and flow.
User research is not only a UX activity. It feeds SEO prioritization because it reveals what information users look for, what blocks trust, and where pages fail to answer the real question.
Two kinds of tests: SEO experiments and user-facing A/B tests
Abby runs two types of tests, and the distinction matters.
SEO tests (search-facing)
These test changes to pages as search engines see them. Abby described two approaches:
- Pre/post tests: make a change and observe performance changes over time.
- Split tests across URLs: change one group of pages, hold another group back, compare results.
Abby highlighted why split tests are stronger: they help guard against algorithm updates, seasonality, and other external noise.
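As a rough illustration of the split-test mechanics, here is a minimal bucketing sketch. The URLs are hypothetical, and real tooling also stratifies buckets by historical traffic so control and variant groups are statistically comparable:

```python
import hashlib

# Minimal sketch of assigning template URLs to control/variant buckets
# for an SEO split test. Deterministic hashing keeps assignment stable
# across runs. URLs are hypothetical.
def bucket(url: str) -> str:
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return "variant" if int(digest, 16) % 2 == 0 else "control"

urls = [
    "/hire/python-developers/",
    "/hire/react-developers/",
    "/hire/seo-experts/",
]
for url in urls:
    print(url, "->", bucket(url))
```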
UX A/B tests (visitor-facing)
These serve different page versions to different users. Abby runs controlled A/B tests where half of users see the control and half see the variant, then she tracks impact on metrics in real time.
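For the visitor-facing side, a standard way to read out such a test is a two-proportion z-test on conversion rates. This sketch uses made-up visitor and conversion counts and only Python's standard library:

```python
from statistics import NormalDist

# Minimal sketch of evaluating a visitor-facing A/B test with a
# two-proportion z-test. Counts below are made up for illustration.
def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

lift, p = two_proportion_z(conv_a=480, n_a=12000, conv_b=552, n_b=12000)
print(f"absolute lift: {lift:.4f}, p-value: {p:.4f}")
```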
Why does she do both? Because it is harder now to win traffic than it was five years ago. She called out what many teams see: click-through rates dropping even when rankings stay stable. In those conditions, your growth plan cannot rely only on 'get more clicks'.
So she is doubling down on the other lever: converting the traffic you still earn.
This also connects to a pattern we see across teams. UX and conversion teams often worry that SEO changes will harm conversion. Far fewer teams worry that UX changes will harm SEO, even when they remove content or simplify templates in ways that can reduce discoverability. The best teams test both angles and treat SEO and conversion as one system.
Influence matters: relationships across teams protect the logged-out experience
A big challenge of being an SEO PM in-house is that the site is the product. Many teams ship changes to logged-out surfaces: home, search and recommendations, registration flows, content templates.
Abby said one of her first priorities at Upwork was identifying the PMs who touch those surfaces and building relationships. Her goal was to get involved early, not to become a late-stage blocker. She wants teams to loop her in so projects become stronger, not slower.
She made the point plainly: nothing wrecks your week faster than a botched rollout that damages organic traffic. No one wants to be the team responsible for that, so early collaboration helps everyone.
She also mentioned the double-edged sword of working in companies where SEO is visible up to the C-suite. Leadership attention is good, but it means you cannot shrug off declines by blaming the algorithm. You need a plan for growth even when the SERPs are moving against you.
Communicating SEO like a product leader: 'no one cares about crawling'
One of the most memorable moments of the session was Abby's mantra: 'no one cares about crawling'.
She does not mean crawling is not important. She means stakeholders do not care about the mechanism. They care about outcomes.
Her approach:
- Read every doc and ask: am I writing this for another SEO, or for my boss's boss?
- Frame everything in business impact terms: qualified clicks, signups, revenue.
- Translate technical work into user and business value.
Instead of: 'This will improve crawling of X subfolder.'
Say: 'This helps search engines find these pages, which should drive more qualified visits and revenue.'
She also shared a practical tactic: train ChatGPT to review documents through the lens of leadership communication. She uploaded an example doc that performed well with senior leadership, then asked ChatGPT to critique her future docs against that standard, including calling out jargon and over-explaining.
I added one warning: if you use AI for feedback, ask it to be harsh. Otherwise it will be too polite to be useful.
Real tests you can borrow
Toward the end, I asked Abby to highlight a few experiments that surprised her or delivered outsized results. She shared three that stood out.
1) The publish date test: CTR up 20%
Upwork had both 'date published' and 'date modified'. Even though the team updated posts frequently, Google was showing the published date in the snippet, making content look old.
They removed the published date and kept only the modified date, so freshness was clear. Within a few days, Google reflected the change and CTR increased by 20%.
The lesson: small snippet-level cues can have a huge impact on clicks, and it is worth validating how Google is interpreting your signals.
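If you want to replicate the check, look at where your dates live: visible page text and structured data can both feed snippet dates. The webinar did not specify which they changed, but here is a minimal sketch of Article markup that exposes only the modification date (values are placeholders, not Upwork's actual markup):

```python
import json

# Minimal sketch: Article structured data carrying only dateModified,
# so the freshness signal is unambiguous. Values are placeholders,
# not Upwork's actual markup.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example blog post",
    "dateModified": "2024-05-01",
    # "datePublished" deliberately omitted, mirroring the test
}

print(json.dumps(article, indent=2))
```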
2) Title tag split test: traffic up ~10%, conversion rate up 18%
Abby ran a title change as a split test across URLs. She used Search Console to find top queries and noticed two high-value terms were missing from titles. She rewrote titles to incorporate those terms and make them more value-prop focused.
Results: traffic increased close to 10%, and conversion rate increased 18%. The traffic was not only larger, it was higher intent.
The lesson: better targeting can improve both acquisition and efficiency. Do not chase traffic for its own sake.
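A naive version of that missing-term check is easy to script, assuming you have exported top queries per page from Search Console. Paths, titles, and queries below are hypothetical, and exact word matching ignores plurals and stemming:

```python
# Minimal sketch of a "missing query terms in titles" check, assuming
# top queries per page have been exported from Search Console.
# Paths, titles, and queries are hypothetical.
pages = {
    "/hire/python-developers/": {
        "title": "Hire Python Developers | Example Marketplace",
        "top_queries": ["hire python developer", "freelance python programmer"],
    },
}

for path, data in pages.items():
    title_words = set(data["title"].lower().split())
    for query in data["top_queries"]:
        # Naive exact-word match; a real check would stem and normalize.
        missing = [w for w in query.lower().split() if w not in title_words]
        if missing:
            print(f"{path}: query '{query}' missing terms in title: {missing}")
```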
3) Brand query FAQs and AI visibility
Abby described a test in progress aimed at brand queries and AI surfaces. Other sites were outranking them on queries about their brand. Her approach: look at the questions people ask about your brand, check whether you rank for them, and decide whether you like the answers on the page.
They added an FAQ section to a key page and are testing whether clearer answers improve visibility, including in LLM-driven experiences.
This is early, but it points to a shift many teams are making: treating brand SERPs, FAQs, and structured answers as part of your defensibility plan.
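The session did not say whether the FAQ test included structured data, but if you mark up such a section, FAQPage schema is the standard vocabulary. A minimal sketch with placeholder questions, not Upwork's actual content:

```python
import json

# Minimal sketch of FAQPage structured data for a brand-query FAQ
# section. Questions and answers are placeholders.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Is Example Brand legit?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. Example Brand has served customers since 2015.",
            },
        },
    ],
}

print(json.dumps(faq, indent=2))
```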
What I want you to take away
If you only remember one idea from this session, make it this: SEO teams that win next will look a lot like product teams. Not because they adopt a new title, but because they build a repeatable system. Pick the highest payoff page types. Turn ideas into testable hypotheses. Write changes so engineering can ship. Run controlled experiments. Tell the story in revenue language.
That is what Abby is doing at Upwork, and it is why her work travels inside the company. If you are trying to build a more reliable SEO program, start small: take one high-impact page type, write a tight PRD, run one split test, and report it like a product experiment. Do that a few times and the rest gets easier, because you will have proof, patterns, and momentum.
If that sounds like the operating model you want, the next question is simple: how do you run those controlled experiments without heavy engineering lift, noisy results, or endless debate about what 'worked'? That is where dedicated SEO experimentation comes in.
A practical next step: Put Search in Control Mode with SearchPilot
Search is often the biggest channel and the least understood, because too many teams are still forced to make big calls based on best guesses. SearchPilot makes SEO (and now GEO) testable, so leaders can move from guessing to knowing. We run controlled experiments across category pages, product detail pages, navigation, and content, then report clear uplift with timelines and confidence.
Teams typically move through three phases: quick validation, a steady test cadence, and full control, where search becomes a performance channel you can plan and fund.
For ecommerce teams focused on product grids, Merchant Center feeds, and variant handling, the first step is a focused test plan. Measurement tracks impressions, clicks, and revenue so you can see the real impact, not only rankings.
Stop trying to predict the future. Experiment to discover it. If you want tailored test ideas for your top PLPs and PDPs, schedule a demo and we’ll share a starter list and a clear path from validation to velocity to control.