As many of you know, we believe it's increasingly important to test your hypotheses about what will affect your search performance. As digital channels mature, and as Google builds more and more machine learning into its algorithm, it's increasingly hard to rely on best practices alone. To make this easier, we have been rolling out our SEO split testing platform to more and more of our clients and customers.
As we get our platform deployed on a wider range of sites with different architectures and technologies, we're able to start testing more of the assumptions and best practices held across the industry.
You can check out a bunch of case studies that we have already published on our site (and sign up for our case study email list here) - and you can find more detail in this presentation (particularly slide 73 onwards) that my colleague Dom gave at a recent conference. We also included some teaser information in this post about a big win that added £100k / month in revenue for one customer, even though it was only deployed on the variant pages (half of all pages).
The hypothesis: there is a downside to relying on JS indexation
Before our change, the pages looked like this with JS disabled:
After the change, they looked more like this (which is exactly how they used to look with JS enabled):
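As a quick illustration of the underlying issue (this snippet is not part of the test itself, and the URL and phrase below are placeholders): the simplest way to spot content that only exists after client-side rendering is to check whether it appears in the raw HTML response, before any JavaScript has run.

```python
# Hypothetical check: is a key phrase present in the initial HTML that a
# crawler receives before any JavaScript executes? URL and phrase are placeholders.
import requests

url = "https://www.example.com/category/widgets"
key_phrase = "Free next-day delivery"  # content we expect search engines to see

raw_html = requests.get(url, timeout=10).text
if key_phrase in raw_html:
    print("Content is present in the initial HTML (no JS required).")
else:
    print("Content only appears after client-side rendering - "
          "indexing would rely on Google's rendering of the page.")
```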
The platform's analysis showed an uplift of more than 6% in organic search performance for this set of pages, which amounted to over 3,000 additional sessions per month. This was an impressive win for such a small change (the chart above comes from the dashboard built into our SearchPilot platform).
As an aside, the mathematicians on our team are constantly working on refinements to the way we detect uplifts with statistical confidence (see Google's paper "Inferring causal impact using Bayesian structural time-series models" for more background). We use a variety of synthetic tests, null tests and cross-checked data sources to improve the accuracy and sensitivity of the automated analysis. We also apply a variety of treatments to the analytics data to account for various behaviours (dominant pages, sparse traffic distributions, seasonal products, etc.), as well as some modifications to how Google's Causal Impact methodology is employed.
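To give a flavour of the general approach (this is a minimal sketch, not our production analysis): the Causal Impact method builds a counterfactual forecast of the variant pages' traffic from one or more control series, then compares it with what actually happened after the change went live. The sketch below uses the open-source Python port of Google's R package (pip install pycausalimpact) with synthetic data; the column names, date windows and 6% effect size are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from causalimpact import CausalImpact

# Synthetic daily organic sessions: a control series plus a variant series
# that receives a ~6% uplift from day 90 onwards (illustrative only).
np.random.seed(1)
control = 1000 + np.random.normal(0, 5, 140).cumsum() + np.random.normal(0, 20, 140)
variant = control * 1.02 + np.random.normal(0, 20, 140)
variant[90:] *= 1.06

# Response variable (variant pages) goes in the first column, controls after it.
df = pd.DataFrame({"variant_sessions": variant, "control_sessions": control})

pre_period = [0, 89]     # days before the test went live
post_period = [90, 139]  # days while the test was running

ci = CausalImpact(df, pre_period, post_period)
print(ci.summary())                 # estimated absolute and relative uplift with credible intervals
print(ci.summary(output="report"))  # plain-English narrative of the result
```

In practice the response and control series would come from your analytics data for the variant and control page groups, and the pre/post windows from the test's launch date.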
For the test above, these refinements have since improved the accuracy of the analysis (the change performed even better than the initial analysis suggested!), which is exciting. It also means we can now detect tests that result in smaller uplifts than was previously possible, leading to improved performance and improved attribution.