How to Build CoE with the Right Tech, Process, and Contract
Stewart Ehoff is the head of experimentation at RS Group, an FTSE 100 distributor and service and solutions provider of industrial and electrical components.
He transformed a centralized experimentation team into a Center of Excellence that helped 12+ teams adopt experimentation, all while increasing velocity and connecting tests to broader business strategy. The centralized team used to cover only 5% of what the business shipped. Now, they’re well on their way to making experimentation the modus operandi for the whole company.
This is a story about their journey.
Our CoE journey started three years ago. Back then, we had a centralized experimentation team running tests end to end. The whole team included developers, designers, and analysts. They were doing everything related to testing.
Here’s the kicker: We quickly realized that even though we had high test velocity, our team was covering only 5% of all the stuff the business was shipping.
The tests weren’t always connected to the broader product strategy, product goals, and roadmaps. We tested 5% of stuff, while 95% slipped through our fingers.
Helpful blueprint: Org Charts — understand 3 main ways to structure your experimentation teams so you can increase maturity and distribute capabilities.
We wanted to make testing the default way our business ships work, not just for 5-10% of it. You can only do this if:
- You have a process that lets teams leverage experimentation easily as part of their day-to-day work.
- You have the underlying technology to embed experimentation within product and engineering teams.
So, our journey toward CoE is a story of those two aspects — process and technology. For instance, moving from an internally managed client-side tool to a server-side tool was a big step toward maturity.
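The server-side approach mentioned above typically works by assigning users to variants deterministically on the backend. Here's a minimal sketch of hash-based bucketing — an illustration of the general technique, not RS Group's actual implementation; the function and variant names are made up:

```python
import hashlib

def assign_variant(user_id: str, experiment_key: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant server-side.

    Hashing user_id together with the experiment key gives a stable
    assignment with no client-side flicker and no extra round trips.
    """
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same variant for a given experiment:
assert assign_variant("user-42", "new-checkout") == assign_variant("user-42", "new-checkout")
```

Because assignment is a pure function of the inputs, any backend service can compute it independently, which is part of what makes server-side tooling easier to embed in engineering teams than a client-side script.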
We also created all the governance processes: how product teams can use experimentation to reach their goals faster and with more evidence, kill what doesn’t work, and more.
We even created a step-by-step, paint-by-numbers process for taking teams from zero to one with experimentation. It’s basically an agreement between us and product teams, covering:
- Who’s responsible, accountable, and consulted for what?
- What’s experimentation and what value does it bring?
- Product teams' KPIs, priorities, and strategies to reach them.
Helpful Blueprint: XOS Tool Stack — glance at all the specific tools you need to start and run effective experimentation programs.
It's very difficult to go from a fully centralized team that does everything to handing the keys to the city to everyone else and saying: now everything is up to you. That's a harsh transition. You can't cut through it in a week.
So we started slow, with one squad as a pilot, while the central team continued working as before. This allowed us to learn and keep testing while we scaled experimentation across the company.
Now we work with 12 different squads across the company and there’s no centralized model anymore. The Center of Excellence is a globally recognized business function across the organization. We work within the EMEA, APAC, and Americas regions spanning 31 countries globally.
My team doesn't come up with test ideas, build tests, or ship anything. All of the experiment velocity within our business is driven by the product teams who own profit and loss, metrics, and goals.
What we do own is the governance itself. Everyone must use the same tools, technologies, and processes as us. All experiments are recorded in a central operating system and database where you can see everything.
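A central experiment registry like the one described can be as simple as a shared schema every squad writes to. Below is a hedged sketch of what one record might look like — the field names and the example values are illustrative assumptions, not RS Group's actual data model:

```python
from dataclasses import dataclass, asdict
from datetime import date
from typing import Optional

@dataclass
class ExperimentRecord:
    """One row in a hypothetical central experiment registry."""
    experiment_id: str
    squad: str                      # owning product team
    hypothesis: str
    primary_metric: str
    status: str = "draft"           # e.g. draft / running / concluded
    started: Optional[date] = None
    concluded: Optional[date] = None
    decision: Optional[str] = None  # e.g. ship / kill / iterate

registry: list[ExperimentRecord] = []

# Example (made-up) record a squad might register before launch:
record = ExperimentRecord(
    experiment_id="EXP-2024-017",
    squad="checkout",
    hypothesis="Showing delivery dates on the basket page lifts conversion",
    primary_metric="basket_to_order_rate",
)
registry.append(record)
print(asdict(record)["status"])  # draft
```

The point of a shared schema is less the storage and more the governance: every squad records the same fields, so the CoE can see hypotheses, metrics, and ship/kill decisions across the whole program.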
Helpful Blueprint: Program Metrics — monitor the success of your experimentation program, and spot bottlenecks and issues in the fastest way possible.
Besides governing, we also provide soft support to product teams that experiment. Some squads get more hands-on support, depending on the situation at hand. Here’s the deal: it’s not our goal to be first-line support for these squads.
My team takes squads from zero to one experiment. That’s the goal. Only then do we help those teams scale, loosening the reins as they get better at experimenting.
The idea is that over time, we build full-on experimentation squads who just touch base with us around particular challenges or specific experiments.
But we’re not over our biggest challenge yet—most of the test analysis is done by my team. Currently, we don't have the structure or resources for dedicated analysts in product squads.
We still can’t delegate the full experimentation process to a squad because we still depend on CoE for test analysis. But we are where we are.
Currently, we’re looking to scale past this. How do we manage the near-term capacity of a business on a 4x velocity curve?
We’re also thinking about the future model of our CoE and broader organization. How to construct our squads and align them around metrics, customer problems, and transient horizontal journeys?
I like to split the experimentation specialism role into two distinct roles:
- Experimentation specialists with good communication and stakeholder management skills, some product experience, and lots of experimentation experience.
- Data specialists who look after data quality, technology, test validation, test analysis, and guardrail metrics. They meta-analyze the whole program.
This is a hot, unconventional take. But while scaling I found out that finding people who can do both in a single role is hard. Very hard. I know they exist because I know many of them. But they're unicorns.
This is something we do differently at RS Group. There's no one-size-fits-all solution for how you build your team and Center of Excellence. The conditions of the market and hiring often determine how you develop.
I'm a big believer in trying stuff out and breaking things, as fast as possible. Then you catch up and learn. We honed this and did it 12+ times by now, so we’ve built something akin to a blueprint/framework.
It's a step-by-step, paint-by-numbers onboarding process. It may not work for other businesses, but for our company, it’s a great education program. On a high level, it answers questions like:
- What’s experimentation?
- Why is it important for our company?
- What value does it bring?
- How do we operate as a CoE?
- What are our priorities and KPIs?
- How to reach those priorities and KPIs?
Sometimes when you engage with other teams (especially at the beginning), people get a bit defensive. If you don’t approach them in the right way, it can seem like you’re trying to take their product away. So we show them that we’re here to help them achieve their goals faster through experimenting, and the defensiveness quickly goes away. They don’t feel like we’re here to kill their darlings.
If we were to engage our squads without this, everything would fall apart because there’s no agreement. We wouldn’t be clear on the goals and KPIs. We wouldn’t be clear on responsibilities and accountabilities:
- Who does what in the ecosystem?
- Who’s responsible for specific parts of the experimentation process?
- Who should be consulted when things go wrong?
With that agreement in place, when you pull all these different people together across a squad, everyone knows where they stand at each part of the process: where they add value and where they can (perhaps) step away.
Helpful Blueprint: What’s in a Strategic Testing Roadmap? — a great OKR-style action plan for a testing program and a punch list of insights that help you move the needle on goal-associated KPIs.
Did you know we organize workshops for increasing the maturity, speed, and impact of your testing/experimentation programs? Feel free to take a look.