
SEO Is a System, Not a Channel: Lessons from Zalando, Omio, and idealo

Posted April 2, 2026 by Will Critchlow

When people talk about what's changing in search right now, AI is usually the first word out of their mouths. Fair enough. Still, that answer skips the harder question: how you keep the crucial work understandable, measurable, and worth funding in the face of uncertainty.

In this session, I sat down with Norman Nielsen. We first met when he was VP of Growth at Omio (a longtime SearchPilot customer), and we have spent plenty of time in the weeds together: debating the best way of evaluating a testing program, and trying to tell the story of search performance in a way the wider business can act on.


Since then, Norman has moved into a new role as VP of Organic Growth at idealo, where he has just brought SearchPilot in too. That gave the conversation a useful angle: what changes when you are responsible for organic growth at marketplace scale, across markets. We also discussed team-building, leadership scrutiny, and what carries over from the last era of search into the next.

Norman has lived that journey across idealo, Omio, and Zalando. He kept coming back to one idea: if you want organic growth to hold up under pressure, you need a system, not a pile of tactics.

Here is what I took away.


Key takeaways

  • The fundamentals of being found still come down to two things: what you say about yourself on your own site, and what the rest of the web says about you.
  • 'GEO' will attract hacks early, but loopholes will get patched quickly, so build for durability, not quick wins.
  • A good testing program is a system, not a list of one-off experiments, and it needs a roadmap tied to strategy.
  • AI is changing how fast teams can ship, but it raises the bar on quality control, measurement, and internal trust.
  • The hard part is not running a test. It is communicating value to leadership without SEO jargon, using a story they can fund.

Norman's path: SEO skills that scale into leadership

Norman started in search, then grew into broader marketing leadership. One thing that came through is how transferable SEO skills become once you stop treating them as 'SEO skills' and start treating them as decision-making tools.

Pattern recognition, competitive awareness, and a habit of asking 'what would we have to believe for this to be true?' are useful far beyond title tags. Norman framed a lot of his growth work as applied curiosity: observe the market, spot the constraint, propose a bet, then measure it with enough discipline that you learn something either way.

AI feels new, but the cycle feels familiar

We talked about how the current AI wave resembles earlier platform moments, like the web in the 90s: excitement, skepticism, a gold rush of tools, and a scramble to figure out what 'good' looks like.

Norman shared that he was skeptical early, in the same way many smart operators were skeptical about past hype cycles. What changed for him was not a single headline or a demo. It was the practical reality that the tools started saving time in ways that were hard to ignore, especially when combined with strong human judgment.

The useful point here is not 'AI is magic.' It is that the speed of iteration is increasing. If your organization cannot learn quickly, it will struggle to compete with teams that can.

Why AI makes testing more important

One theme kept resurfacing: faster shipping increases the need for measurement.

If AI lets you draft content, build small tools, or spin up prototypes without waiting on scarce engineering time, that is great. It also means it becomes easier to flood your site with changes you cannot properly evaluate.

Norman is blunt about this. You still need a testing roadmap. You still need to decide what matters, what risks you will accept, and what metrics tell you if you are winning. Otherwise you are swapping one form of guesswork for another, only at higher volume.
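To make "what metrics tell you if you are winning" concrete, here is a minimal, hypothetical sketch of evaluating one SEO split test by comparing daily organic sessions for a test page group against a control group. This is not SearchPilot's methodology (real SEO testing typically uses time-series counterfactual forecasting rather than a simple difference of means); all numbers and names here are invented for illustration.

```python
# Hypothetical sketch: is the test page group outperforming control?
# Uses a simple Welch's t statistic on daily organic sessions.
# Real programs account for seasonality and pre-test trends; this does not.
from statistics import mean, stdev
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples of daily sessions."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean(a) - mean(b)) / se

# Invented daily organic sessions, one week per page group.
test_group = [120, 135, 128, 142, 150, 138, 145]
control_group = [118, 122, 119, 125, 121, 124, 120]

t = welch_t(test_group, control_group)
lift = mean(test_group) / mean(control_group) - 1
print(f"t = {t:.2f}, observed lift = {lift:.1%}")
```

The point of even a toy calculation like this is the discipline it forces: you have to name the metric, the comparison group, and the threshold for "winning" before you ship, which is exactly what a testing roadmap makes routine.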

That is also where SEO teams have an advantage. If you have already lived through algorithm volatility, you are less tempted to confuse activity with impact. You have seen what happens when people ship 'best practices' without proof.

A testing program needs a roadmap, not a queue

We spent time on something that experienced teams know, but newer teams learn the hard way: one test does not equal a testing program.

A program has a structure. It ties back to strategy. It has themes, not random ideas. It also treats results as inputs, not trophies. A win is a starting point for a tighter follow-up question, and a loss is a clue about what you misunderstood.

We also talked about the half-life of ideas. If a test is only valuable for a brief moment, it is not a strong foundation. The goal is to generate insights, then keep re-validating them as platforms change.

'GEO' and the temptation of shortcuts

We also talked about early-stage 'GEO' tactics. Norman's view is simple: yes, some of this is easy to game right now, and yes, people are selling the gaming as a service.

That does not mean it will last.

The parallel to early SEO is obvious. Loopholes appear, marketers exploit them, platforms patch them, and the cycle repeats. The difference now is speed. Norman expects patches to land faster than they did in the old Google cat-and-mouse era, which makes fragile tactics even less attractive.

So what should teams do instead? Double down on the inputs that survive patches: clear, exhaustive information on your own site, and credible third-party validation across the web.

The internal job: making the work legible to leadership

A big chunk of the conversation was about communication, because this is where many SEO and AI initiatives fail.

Norman has seen that if you explain the work in channel jargon, it dies in the room. If you explain it in outcomes, it travels. Testing helps because it turns fuzzy claims into something closer to a business story: 'we ran a controlled change, we measured impact, here is what moved, and here is what we will do next.'

That is also how you protect your team. When platforms shift and performance wobbles, leaders want a plan. A testing program becomes part of that plan because it creates a repeatable way to make decisions under uncertainty.

Closing: the 'second job' problem, reframed

If there is a quiet tension under all of this, it is the workload. Teams are being asked to keep shipping while also learning a new set of tools and a new set of platform rules.

Norman's take is practical: treat AI capability like a real investment. Create frameworks. Make space for experimentation. Push beyond 'look, I made a custom GPT' and into 'here is how we connect tools, workflows, and measurement so the company gets leverage from the learning.'

If you do that, you build an organization that can adapt, and that is the only real edge when search keeps changing shape.

Put Search in Control Mode with SearchPilot

A lot of what Norman and I discussed comes back to the same principle: shipping faster only helps if you can tell what worked.

That is where SearchPilot fits. We help teams run controlled experiments on real sites, across templates, content, navigation, and key commercial pages, so you can move from opinions to evidence. That matters in classic SEO, and it matters even more as teams start measuring new surfaces like AI-driven discovery alongside the blue links.

If you want to turn search into a channel you can plan and fund, schedule a demo and start with a focused test plan and a cadence you can sustain. When you can prove impact, the conversation changes.
