Three Key Steps to Launching a CoE

Melanie Kyrklund has 14 years of experimentation experience under her belt. Currently, she is head of experimentation at Specsavers, a 40-year-old optics business operating in 11 countries.

She has worked in global, complex organizations before (Booking.com, Staples), but this is the first time in her career she has had the opportunity to establish an experimentation Centre of Excellence. Six months into launching the Centre of Excellence at Specsavers, she has identified three key process steps that she hopes will be useful to other practitioners looking to embark on this journey.

Struggling to scale

I have a small team of three. We were running as many experiments as possible within our resources, working across a range of digital products and markets with the support of an agency. When you’re spread so thin, ideation rapidly becomes a bottleneck, not just development.

The other side of the coin was the commercial perspective. How do you handle the increased demand for experimentation coming from different teams? This is when I realised that it would be more cost-effective if I started distributing the experimentation process instead of trying to add more experimenters to my team.

Helpful blueprint: Experimentation organization chart—3 most common ways to structure your experimentation teams.

Step 1: Define & standardise how experimentation is done

My first step toward solving this was to define the whole experimentation process. A standard framework needed to be in place that could create consistency and clarity around how experimentation should be done at Specsavers.

You need to be able to say: “Ok, these are exactly the phases you go through when experimenting. These are the constituent parts of each process step.” What I needed here was a framework for experimentation. We ended up adopting Clickvalue’s framework, which breaks the process into four high-level areas:

  1. Problem discovery
  2. Problem validation
  3. Solution discovery
  4. Solution validation

This framework became a cornerstone of how we talk about and train people on experimentation, which leads me to the next part of this story: planning the people.

Helpful blueprint: A/B testing QA process—a step-by-step guide for QA in A/B testing.
Helpful blueprint: The CRO process—all the steps, roles, and activities you need for a successful CRO process, from start to finish.

Step 2: Plan the People

To scale to a CoE, you need to build competence within the teams. All of this starts with a maturity model and gap analysis. You bucket teams across the organization based on common attributes and their capacity to absorb the experimentation process across different disciplines. Then you know which teams need more resources from the CoE and which are more autonomous.

You adjust your training approach and resource plan to each maturity level accordingly. The gap analysis also acts as a checkpoint for stakeholder feedback: what's important to them? It's far better than creating a program without knowing its gaps.
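To make the bucketing concrete, here is a minimal sketch of how a gap-analysis score could map teams to maturity levels and support plans. The level names, thresholds, and team names are hypothetical illustrations, not Specsavers' actual model:

```python
# Hypothetical maturity buckets; the thresholds and labels are illustrative.
SUPPORT_PLAN = {
    "dependent": "CoE runs experiments end to end",
    "assisted": "CoE supports research and QA; the team runs builds",
    "autonomous": "the team runs experiments; CoE governs and reports",
}

def maturity_level(score: int) -> str:
    """Map a gap-analysis score (0-10) to a maturity bucket."""
    if score >= 8:
        return "autonomous"
    if score >= 4:
        return "assisted"
    return "dependent"

# Lower-maturity teams draw more resources from the CoE.
team_scores = {"regional marketing": 3, "checkout product team": 8}
for team, score in team_scores.items():
    print(f"{team}: {SUPPORT_PLAN[maturity_level(score)]}")
```

The point of the mapping is simply that the resource plan falls out of the assessment, rather than being negotiated team by team.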

Alongside training programs, we needed to build out documentation, and not just for technical material such as the QA process. The part I struggled with initially was how to train and document the idea evaluation process, as increasing the number of hypotheses coming from teams was a key objective of ours. Creating that workflow and articulating it well seemed hard:

  • How do you ensure that you have well-defined ideas so you don’t end up in the bottleneck in the first place?
  • How do you make sure you’re effective and consistent with your ideas, while you align them with the company goals?
  • How do you improve and refine ideas before you execute them?

Alongside providing training in research and problem discovery, the solution turned out to be rather simple - a briefing template. When someone submits an experiment idea, they have to fill in information relating to:

  • How they arrived at the problem
  • What the problem is
  • Whether they have quantified the problem
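As a sketch, the briefing template can be thought of as a structured record with a simple readiness check. The field names below are hypothetical, not Specsavers' actual template:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExperimentBrief:
    """Minimal experiment-idea brief (illustrative fields only)."""
    submitted_by: str
    problem_statement: str               # What is the problem?
    evidence_source: str                 # How did they arrive at it?
    quantified_impact: Optional[str] = None  # e.g. "affects 12% of checkout sessions"

    def is_ready_for_review(self) -> bool:
        # An idea only enters the backlog once the problem is
        # described, evidenced, and quantified.
        return bool(self.problem_statement
                    and self.evidence_source
                    and self.quantified_impact)

brief = ExperimentBrief(
    submitted_by="regional marketing",
    problem_statement="High drop-off on the store-finder page",
    evidence_source="funnel analysis in web analytics",
)
print(brief.is_ready_for_review())  # False until the problem is quantified
```

A template like this turns idea evaluation from a debate into a checklist: an under-specified idea is sent back for more evidence instead of clogging the backlog.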

Helpful blueprint: Experimentation Audit - find all gaps in your program, whether they come from skills, tools, data, process, or strategy.

Step 3: Govern the Program

Looking back, I realize how critical this pillar of the CoE process was, as it creates accountability and fosters the right way of working. At the highest level, we drafted OKRs for the CoE and for the individual teams we were working with, to mutually agree which initiatives we would work on in the upcoming quarter. OKRs fell into two buckets:

  1. Capability building. Here we look at training objectives and competence within teams, and which enablers to focus on at CoE level.
  2. Experiment velocity and quality KPIs. This is your key to actual experiment delivery, and also includes commercial KPIs where relevant.

The following frameworks were also set up:

  1. Governance model
  2. RASCI model

The governance model was born out of the gap analysis. Marketing stakeholders were unclear on what autonomous testing meant, and in which areas they could have autonomy. We had never been explicit about separating what fell under product optimisation (owned by the technology team) from commercial optimisation (owned by regional marketing). The CoE set up a governance model to determine who tests where.

We analyzed our website and found all the ‘commercial real estate’: marketing- and merchandising-driven areas and features that could be configured locally by regional teams. The next step was to create testing templates for these areas that marketing teams can use to quickly configure and launch experiments.

A RASCI model helped us determine who’s responsible for each part of the experimentation process across teams. It provides clarity on how we work and communicate with each other in a complex global business.
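For illustration, a RASCI matrix (Responsible, Accountable, Supporting, Consulted, Informed) can be represented as a simple mapping from process step to roles. The steps echo the four-phase framework above, but the role assignments here are entirely hypothetical, not Specsavers' actual model:

```python
# Hypothetical RASCI matrix: one dict per process step, keyed by RASCI letter.
rasci = {
    "problem discovery":   {"R": "product team",  "A": "CoE lead",     "S": "CoE analysts", "C": "regional marketing", "I": "leadership"},
    "problem validation":  {"R": "CoE analysts",  "A": "CoE lead",     "S": "product team", "C": "data team",          "I": "leadership"},
    "solution discovery":  {"R": "product team",  "A": "product lead", "S": "CoE",          "C": "design",             "I": "CoE lead"},
    "solution validation": {"R": "developers",    "A": "product lead", "S": "CoE",          "C": "QA",                 "I": "leadership"},
}

def who(step: str, letter: str) -> str:
    """Look up which role holds a given RASCI letter for a process step."""
    return rasci[step][letter]

print(who("problem validation", "A"))  # CoE lead
```

Encoding the matrix somewhere queryable (a spreadsheet, Airtable, or code) makes "who do I ask?" a lookup rather than a negotiation.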

Finally, we had to think about the workflows that foster the correct way of working: we are building out our capabilities in Airtable, providing frameworks and templates, and optimizing our program reporting, which can feed back directly into the OKRs. Governance wasn’t my comfort zone, coming from a background in experimentation strategy. But in retrospect, nailing the governance piece was key, as it provides clarity for everyone.

Helpful Blueprint: RASCI Matrix for Experimentation - decide easily who's accountable and responsible, who supports or consults, and who needs to be informed about your testing activities.

Closing thoughts

Last year, I told leadership I wanted to double the number of experiments and establish a CoE model. While there was complete alignment, I was challenged on how my teams would provide value during the transition. This created a unique problem for me: to develop the CoE, I needed to focus on and deliver operational maturity while also demonstrating value.

This forced me to think about the tension of optimal operational maturity vs value:

  • How can I make the two coexist? 
  • How can I align the company goals with a Centre of Excellence workflow that still delivers value?

As an experimenter, you simply derive satisfaction from the process. I know how much experimentation has provided me with a pragmatic way to navigate difficult business situations - and would like more teams to benefit from it.

However, leadership isn’t always sold on the value of the process itself; they’re interested in the commercial value experimentation can deliver. This is (somewhat) at odds with the CoE, because the CoE is about building enablers first to generate value later. We ensure we follow organizational goals through the OKRs we have in place. When setting OKRs with individual teams, we include key results focused on the business outcomes they are looking to achieve, alongside capability-based ones.

Currently, the CoE funds experimentation resources in some teams to generate value, from developers working server-side on feature experimentation and roll-outs, to research and client-side builds for less mature teams. Over time, we would like more teams to be 100% resourced to run experimentation without our help.

The CoE will continue to evolve and grow over the coming years as more teams and departments become prepared for experimentation and our internal structures and business models evolve. Reflecting on our journey to date however, these three areas have been key to initiating our shift from a centralised team to a CoE model focused on enablement:

  1. Defining and standardising the experimentation model
  2. Planning for people
  3. Establishing governance

Did you know Speero organises workshops for increasing the maturity, speed, and impact of your testing/experimentation programs? Take a look.
