
How does SEO testing sit alongside product testing?

Posted October 5, 2022 by Will Critchlow
Last updated March 6, 2024

There’s a common question we hear regularly when talking to companies that are considering implementing an SEO testing program for the first time:

Can we run SEO testing alongside our existing product / CRO testing?

It comes up all the time because we are typically talking to companies that are already quite advanced in their approach to testing as an organisation - teams who have a test-and-learn mentality, are already testing the easiest channels and activities (e.g. paid search, conversion rates), and are generally quite far along the testing maturity curve. SEO testing is newer, and most people we speak to want to add it to their existing testing mix.


The good news is that it’s possible - and indeed, desirable - to run product testing alongside SEO testing. That’s the punchline. Let’s get into the details.

The first thing to realise as you plan to add SEO testing into the mix is that if you are already doing any on-site SEO, your SEO and product development programs already need to work nicely together. In my experience, that existing relationship is the best place to start when thinking about how to bring in SEO testing.

How does SEO currently work with product?

The key to a successful rollout of SEO testing is to build upon the existing processes and priorities that your org has built to collaborate on SEO and product changes. In my experience, this starts by asking “What happens right now when the SEO team believes that a user-facing change will benefit organic visibility?”

In the highest-performing teams I’ve come across, the answer is based on tight collaboration founded on mutual respect. SEO needs to believe in the value and effectiveness of the work the product team does, and the product org needs to internalise the truth that without organic search traffic, most web-centric businesses would be in dire trouble, if not failing entirely. From this position of mutual trust, teams can build processes that might include:

  • Visibility into each other’s backlogs and idea queues
  • Verification of individual tickets early in the process to flag potential conflicts between teams
  • Collaboration on resolving potential conflicts productively

In practice, this might include conversation starters like:

  • “We noticed that the product team has an idea for a new tool that meets an interesting user need - we know there is search volume for queries in that area, so can we collaborate on the information architecture of the new section of the site to serve that demand as effectively as possible?”
  • “The product team is planning on consolidating page type A with page type B to serve a particular user story more effectively. Would that have a significant impact on organic visibility? What do we need to be sure to do from an SEO perspective?”
  • “SEO recently rolled out a content block on category pages, and we are concerned that it may be reducing clickthrough rate from those pages to product pages and hence reducing conversion rate. How can we continue to capture the SEO benefit of that content without damaging conversion rate?”

I’m very aware that not all organisations have relationships that are this functional, so if yours doesn’t sound like this, then planning to roll out SEO testing is a great opportunity to reset these relationships and make things more productive.

Most organisations that we work with came to us running product / UX / CRO testing but not yet running a sophisticated SEO testing program. We typically advise them that the first stage of building SEO testing into the process is to substitute “test the SEO impact of change X” wherever the current process reads “make change X requested by the SEO team”. That might mean different things for different kinds of hypothesis (there’s a rough code sketch of this triage after the list below):

  • If you’re talking about a change to non-user-visible elements (e.g. meta description, structured data), you may have a process where the product team is informed about changes but doesn’t need to be consulted about whether they are a good idea or not. These ideas can be run as an SEO test after informing product in the same way.

  • If you are talking about a user-visible change, then the product team may be consulted before the change is made:

    • Green: If the product team gives the go-ahead (“this change doesn’t conflict with running product tests” for example) then the idea can be run as an SEO test safely.
    • Amber: If the product team flags that there are active product tests running on other parts of the same pages that could conflict, then a more detailed analysis is needed (see below).
    • Red: If the product team has an active test running on the same element on the same pages, then the SEO change would generally have to wait for the product test to finish and then be discussed in that light. In general, the same is true for SEO tests.
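To make that decision flow concrete, here is a minimal sketch of the triage logic in Python. The categories follow the green / amber / red split above, but the class names, fields, and example values are illustrative assumptions rather than anything prescribed by a particular testing platform.

```python
from dataclasses import dataclass
from enum import Enum


class TriageResult(Enum):
    GREEN = "run the SEO test now"
    AMBER = "needs a more detailed conflict analysis"
    RED = "wait for the conflicting product test to finish"


@dataclass
class ProposedSeoTest:
    pages: set[str]      # page types or URL groups the test touches
    elements: set[str]   # on-page elements the test modifies
    user_visible: bool   # e.g. meta descriptions are not user-visible


@dataclass
class ActiveProductTest:
    pages: set[str]
    elements: set[str]


def triage(seo_test: ProposedSeoTest,
           product_tests: list[ActiveProductTest]) -> TriageResult:
    """Classify a proposed SEO test against the product tests currently running."""
    # Non-user-visible changes: inform the product team, then run.
    if not seo_test.user_visible:
        return TriageResult.GREEN

    overlapping = [p for p in product_tests if p.pages & seo_test.pages]
    if not overlapping:
        return TriageResult.GREEN

    # Same element on the same pages: a direct conflict.
    if any(p.elements & seo_test.elements for p in overlapping):
        return TriageResult.RED

    # Same pages, different elements: potentially compatible,
    # but worth a closer look before running both in parallel.
    return TriageResult.AMBER


# Hypothetical example: an SEO test on category page intro copy while a
# product test is running on the add-to-cart button of the same pages.
proposal = ProposedSeoTest(pages={"category"}, elements={"intro_copy"}, user_visible=True)
active = [ActiveProductTest(pages={"category"}, elements={"add_to_cart_button"})]
print(triage(proposal, active))  # TriageResult.AMBER
```

In practice the amber branch is where most of the judgement sits, which is what the rest of this post digs into.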

(Note: we tend to get the question that triggered this post this way around - about whether introducing SEO testing will impact product tests - but it’s worth noting that product changes and product tests can interfere with SEO efforts as well! Most of my recommendations here can be read the other way around regarding the care that needs to be taken with rolling out product changes and tests to avoid messing with SEO performance or tests, but for some reason that seems to be a much smaller concern for most of the people we speak to. Perhaps this is just due to the relative maturity of the testing disciplines.)

I am a fan of using explicit RACI charts to keep track of who should be responsible, accountable, consulted, and informed about what changes and which tests:

RACI chart
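As a purely illustrative example (the change types and role assignments here are assumptions, not a recommendation), the relevant slice of such a chart can be captured as simply as a small data structure:

```python
# Illustrative RACI assignments for the kinds of change discussed above.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
raci = {
    "non-user-visible SEO test (e.g. meta descriptions)": {
        "SEO team": "R/A",
        "Product team": "I",
        "Engineering": "C",
    },
    "user-visible SEO test (e.g. category page content block)": {
        "SEO team": "R/A",
        "Product team": "C",
        "Engineering": "C",
    },
    "product / CRO test on shared page templates": {
        "Product team": "R/A",
        "SEO team": "C",
        "Engineering": "I",
    },
}
```

The exact assignments matter less than having them written down somewhere both teams can see.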

What should you do when SEO tests and product tests might conflict?

Some tests (the ones denoted “red” above) are clearly not possible simultaneously. If the SEO team wants to move an element up the page while the product team wants to remove that element from the page entirely, then no flow chart or process diagram is going to resolve the conflict. In these situations you will have to dig into the underlying thinking and reasoning to understand whether one or the other priority should win out for the business, or whether you need to run a more sophisticated set of tests to understand the relative value of the different moves.

With amber tests, however, it’s a different story. These are the ones I denoted as “could conflict” above. Typically this happens when both teams are considering tests that modify the user experience on the same page, but, unlike with red tests, they are potentially compatible. The easiest version to understand is something like:

  1. The product team is testing new confidence messaging by adding guarantee information near the “add to cart” button
  2. The SEO team is testing moving product description content out of tabs on the mobile layout

Any combination of results is possible: the product test can win or lose independently of whether the SEO test wins or loses.

SEO tests divide up pages, but give the same user experience to every user (including Googlebot) on any given page. Product tests divide up users on each page, but for any one user make the same change to all pages in the test. This means that as long as they don’t interfere with each other’s changes directly - as in the example outlined above - the tests are what is called orthogonal. Orthogonal tests can be analysed independently even if they run over the same set of users and pages.

In a situation like this, we have all four areas of the quadrant in play:

The SEO test compares the performance of column A vs column B:

SEO test compares A vs B

While the user test compares the performance of row X vs row Y:

User test compares X vs Y

For these amber tests, it is a judgement call whether there is too much conflict (e.g. the SEO test wants to move the element that product is modifying right down to the bottom of a very long page), such that one test’s signal may be drowned out by the other’s noise, or whether the changes are essentially independent. If you can imagine any of the four sections of the quadrant being the right answer, then in general you can carry on and run the two tests in parallel without them clashing or affecting each other’s methodology.
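To show why orthogonal tests can be analysed independently, here is a minimal sketch, assuming session-level data where each visit carries a page bucket (SEO control A vs variant B) and a user bucket (product control X vs variant Y). The column names, metrics, and values are made up for illustration, and real SEO tests are normally analysed as a time series of organic traffic per page group rather than a simple sum - the point is only that each test reads its own margin of the quadrant.

```python
import pandas as pd

# Hypothetical session-level data: one row per visit.
# page_bucket: which side of the SEO test the landing page sits on (A = control, B = variant)
# user_bucket: which side of the product test this user was assigned to (X = control, Y = variant)
sessions = pd.DataFrame({
    "page_bucket":   ["A", "A", "B", "B", "A", "B", "A", "B"],
    "user_bucket":   ["X", "Y", "X", "Y", "X", "Y", "Y", "X"],
    "organic_entry": [1,   1,   1,   1,   0,   1,   1,   0],
    "converted":     [0,   1,   0,   1,   1,   1,   0,   0],
})

# SEO read: compare the page-level split (column A vs column B),
# aggregating over both user buckets.
seo_view = sessions.groupby("page_bucket")["organic_entry"].sum()

# Product read: compare the user-level split (row X vs row Y),
# aggregating over both page buckets.
product_view = sessions.groupby("user_bucket")["converted"].mean()

print(seo_view)      # organic entries to A-pages vs B-pages
print(product_view)  # conversion rate for users in X vs Y
```

Because each test reads only its own margin, the other test’s split averages out across both of its groups, which is what makes analysing them in parallel safe.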

Edge cases: we’re doing business not science

There will always be subtle edge cases.

A classic question comes from the possibility that the SEO test brings a different blend of traffic that skews the conversion rate, meaning the product test wins on SEO variant pages but loses on SEO control pages, while the SEO test itself fails and SEO wants to roll out the control. This is a very difficult situation to identify or analyse - but it’s also likely to be rare, and we fall back on our common mantra that we are doing business, not science, and make pragmatic decisions. If we wanted to be extremely confident in every single test result, we might need to go slower and run more tests of the various combinations, but speed has a benefit all of its own, and in most situations our experience has been that biasing for speed and cadence wins out over running the full portfolio of possible tests.

From a theoretical SEO perspective, you could also object that user signals could be a ranking factor, and that by showing a different user experience to some percentage of page visitors, you may muddy the waters of the pure SEO test. Given the amount of personalisation present on many websites, the difference between logged-in and logged-out experiences, and the prevalence of UX and CRO testing across the whole web, my view is that this is very unlikely to impact the outcome of an SEO test. As a result, we should take the pragmatic view that we are looking for SEO impacts that are large enough to be robust to this kind of confounding factor. In particular, we want to roll out SEO winners that are robust enough to survive future product test iterations like the UX test in question.

Advanced level: combining SEO and product tests

I’ve focused most of this post on the question of how existing product and SEO processes can fit together as you add SEO testing into the mix. There are, however, obvious logical extensions of the underlying questions. As you roll out SEO testing, you are likely to go quickly from:

Can we run SEO testing alongside our existing product / CRO testing?

To questions like:

SEO and product changes can both impact conversion rates and organic visibility. How do we measure the combined impact?


The answer to this question is full funnel testing, which my colleague Craig wrote more about here. This is a methodology we have developed that enables us to run a single test to measure both the conversion rate and the visibility impact of a change in one go.

It is significantly more complicated, and requires true integration of product and SEO thinking, so it’s out of scope for this first question about how to move from SEO recommendations to SEO testing without disrupting the product team’s workflow, but I thought it worth mentioning as it is the next stage up the maturity curve of SEO testing. It lets you connect SEO initiatives not only to visibility and traffic, but all the way through to conversions and revenue:

SEO maturity

Overall summary of how to bring SEO testing into the mix alongside product testing

Product testing’s relationship to SEO testing should be essentially indistinguishable from the current relationship between product testing and (untested) SEO.

Instead of deploying SEO changes untested after whatever conversation would normally happen between the product team and the SEO team, those changes would move to SEO testing rather than straight to deployment.

Everything else I’ve written in this post is about the subtleties of those conversations between the SEO and product teams! The fundamental point is that if you are currently running product / UX / CRO tests and running any on-site SEO initiatives, then it will be possible to integrate SEO testing into the workflow without changing anything on the product / UX / CRO testing side.

If you have any questions, drop me a line on Twitter to discuss: @willcritchlow.