
How to tell good SEO advice from bad

Posted November 12, 2021 by Will Critchlow
Last updated August 7, 2023

I think I’ve probably put more thought into this than most, as it has always mattered a lot to me, when programming SearchLove, that we amplified great advice.

Aside from just putting on a good conference, I believe this really matters in the real world: I think setbacks in performance are at the root of a lot of underperforming SEO initiatives, and as SEO gets harder it also gets more confusing, making it harder to tell the good from the bad.

This post started as a tweet thread. You can see the original, along with the discussion and replies, here:

We have to accept that we can never assess correctness perfectly, and so we need to come at it probabilistically. For me, that means for any given piece of advice thinking about a) how surprising is it? and b) how risky is it?

For both a) and b), my starting point is first principles:

  • Does it seem like something that could be true?
  • Does it seem like something that should be true?
  • Does it seem like something Google would want to be true?

That last one is interesting - it’s not always obvious whether Google wanting something to be true would increase my assessment of whether it’s likely to be true.

If it’s in an area where they have full information (e.g. “contents of title tag”) and clear algorithmic understanding (e.g. keyword matching) then broadly only things Google wants to be true will be true.

If it’s in an area where they have less information (e.g. “intention of someone creating this link”) or less algorithmic understanding (e.g. “actual expertise of this author”) then it’s quite likely that the louder Google is saying it, the more sceptical you should be.

Surprisingness


This one is relatively self-explanatory, but obviously becomes more powerful the more experience you have. It’s essentially an evaluation of how likely you think the general principle is to be true, given everything you know. Or, equivalently, how likely you are to tweet it with this emoji: 😮


Riskiness

Riskiness is in the eye of the beholder (though generally anything @ohgm suggests is worth bucketing in this group - see for example laundering irrelevance).

When I talk about “riskiness” I mean it mainly in the sense of “chance of an outcome I don’t want” rather than in the possibly-more-common-in-SEO sense of “outside Google’s guidelines”. Forcing Google to rewrite your metadata is risky, but not at all against Google’s guidelines.

I want to think about this in the context not only of my own risk, but the risk of those who follow / trust me. I want to consider:

  • Scale - is this going to have a sitewide effect or just affect a single page?
  • Reversibility - how likely is it that if this goes wrong, we can undo the change and go back to how things were?
  • Nature of the downside - a potential drop in clickthrough rate is less severe than a potential penalty, for example
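As a rough illustration, the three dimensions above could be captured in a small record. The field names and the high-risk rule here are my own illustrative framing, not part of the original advice:

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    """Illustrative record of the three risk dimensions (my own framing)."""
    sitewide: bool    # Scale: does this affect the whole site or a single page?
    reversible: bool  # Reversibility: if it goes wrong, can we undo it?
    downside: str     # Nature of the downside, e.g. "CTR drop" or "penalty"

    def is_high_risk(self) -> bool:
        # A crude illustrative rule: a sitewide, hard-to-reverse change is
        # high risk, and a potential penalty is always high risk.
        return (self.sitewide and not self.reversible) or self.downside == "penalty"
```

For example, a reversible change to a single title tag with a possible CTR drop would score as low risk, while anything carrying a potential penalty would not.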

What to do with your evaluations

Then you need to think about what you want to do with this information about surprisingness and riskiness. Here’s what I do:

What to do with your evaluation of surprisingness

The more surprising something is, the more I seek to validate the process by which someone reached the conclusion. I want to see more data, interrogate the logic of the conclusion, think about what could have confounded it, etc.

It’s also true that the scale of surprisingness matters. I want to differentiate between surprising-but-maybe-true and so-surprising-I-suspect-this-is-false. For the latter, most of the time I ignore it. Occasionally, I get sucked down the rabbit hole and attempt to debunk it.

What to do with your evaluation of riskiness

The more risky something sounds, the more I seek to hear a breadth of experiences. Does this always hold? Has anyone experienced the downside I am scared of? Can I socialise the result a bit?

Putting it all together

Those who know me won’t be surprised to hear that I often think in a 2x2:

  • Surprising, but not particularly risky: try to understand why it’s surprising, file it under “things to try”. Feel comfortable sharing while saying “I’m not sure about this / it seems surprising”
  • Risky, but not surprising: if trying for myself, test in the safest way possible (consider scale, duration, reversibility). Consider sharing with health warning
  • Risky and surprising: express scepticism, avoid testing until I can get my head around it

(Of course, most not-surprising / not-risky results are already widely known and are uninteresting for the purposes of this discussion.)
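The 2x2 above can be sketched as a tiny decision function. The enum names and descriptions are my own shorthand for the four quadrants, purely illustrative:

```python
from enum import Enum

class Action(Enum):
    """One response per quadrant of the surprisingness x riskiness 2x2."""
    FILE_UNDER_THINGS_TO_TRY = "understand why it's surprising; share with caveats"
    TEST_SAFELY = "test in the safest way possible; share with a health warning"
    BE_SCEPTICAL = "express scepticism; avoid testing until understood"
    ALREADY_KNOWN = "likely already widely known; uninteresting here"

def evaluate_advice(surprising: bool, risky: bool) -> Action:
    """Map a piece of advice onto the 2x2 of surprisingness and riskiness."""
    if surprising and risky:
        return Action.BE_SCEPTICAL
    if surprising:
        return Action.FILE_UNDER_THINGS_TO_TRY
    if risky:
        return Action.TEST_SAFELY
    return Action.ALREADY_KNOWN
```

So a surprising-and-risky claim maps to scepticism, while a risky-but-unsurprising one maps to careful, small-scale testing.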

What about reputation?

Evaluating advice is a repeated game, so you can consider who’s sharing it - but you shouldn’t rely on reputation so much as your assessment, over time, of the quality of their information.

One key caveat on this in a social media age is that you should pay close attention to whether someone is truly vouching for a piece of information or just RT’ing. In rough order of credibility:

  1. Their own personal work
  2. Done by their team / with their oversight
  3. Shared explicitly with strong endorsement that suggests it matches their experience or they’ve dug into it
  4. They are “just” sharing it

Even for someone I trust greatly, if they are “just” sharing someone else’s information, I treat that as a very weak signal; it doesn’t add a lot of trust above my default internal snarky scepticism. Many people who do exceptional work share things for many reasons: they haven’t read it but were intrigued by the headline, it vaguely sounded like it fit their priors, they like the person who wrote it, they owed them a favour, or they even wanted others to debunk it for them!

This whole set of advice is probably more useful to folks with at least some experience. If you’re earlier in your career, focus on listening to a few folks you really trust, read widely, and do your own research.

If you’re trying to build credibility, talk about your methodology (we do that here) and contextualise your results as much as you can.

Image credit: Tasos Mansour.