CRO vs. CXO vs. Experimentation

What do you do? Is it CRO, CXO, or Experimentation? 

How do you measure what you do? Conversion rates, learn rates, or win rates?

What’s the point of what you do? Is it to drive revenue, customer understanding, or decisions?

Do marketers even care about this nomenclature? Are product owners busy experimenting, unaware of the differences? Meanwhile, are UX folks being shoehorned into the “experience”-related acronym offered up by “CXO”?

It’s a fun (ongoing) debate in our field, especially the “is CRO dead?” flavor of it.

Will it go on forever? As Jorden rightly points out, we need some next steps, and I’m pretty sure I have a way to resolve this debate once and for all. I think all three terms describe slightly different approaches and attitudes to the same discipline.

To convince you of the above, I share:

  1. My framework for defining and differentiating between CRO, CXO, and experimentation.
  2. The metric strategy which relates to each of the three schools of thought. 

I’m thinking this can be our industry’s new shared language. Whether you’re in marketing, product, sales, or ‘growth’, or you’re the CEO or COO, you’ll have a common understanding and set of expectations relevant to the flavor of the discipline you implement.

First, let's get square with the definitions.

Terminology is dependent on context

What the three terms mean depends on the ‘persona’ of the company setting up the testing, which in turn comes down to the goal of the testing program.

There are three goal areas where a testing program can start:

1. Testing goal = a process focus. This is “Experimentation”.

Characterized by thinking about the team, guardrails, and systems to lower the cost of testing while increasing output. The risk with this approach is missing the forest for the trees. You can watch a good example of the “experimentation” approach from MongoDB.

2. Learning goal = the customer-centric approach. This is “CXO”.

Characterized as a more mature testing approach focused on customer problems, and therefore a rare place for businesses to start testing. Lots of lip service gets paid here. The risk with this approach is not focusing enough on the system, or not racking up enough wins for leadership buy-in. You can watch a good example of the CXO approach from The Bouqs.

3. Win goal = a numbers game. This is “CRO”.

Characterized by a focus on conversion rates. The risk with this approach is that work gets derailed when the program hurts LTV by driving short-sighted wins.


In an ideal world, the three terms would actually be phases of testing maturity that businesses move through, from left to right. But they're not. Unfortunately. 

Most companies that try to do testing start on the right. And there’s certainly support, and an argument, for doing so:

  • Testing software has made testing more accessible and lowered its costs
  • There’s money on the table, right?
  • And if you believe the case studies out there, everyone’s insanely ROI positive.

We can’t talk; we have some of the same.


So it’s simple to get these returns, right? Wrong. We’ve previously debated the feasibility of attributing accurate ROI figures to experimentation.

There’s a massive disconnect between what’s expected and what’s achievable and attributable to A/B testing.
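
To make that disconnect concrete, here’s a minimal simulation with made-up numbers (200 tests, small true effects, noisy measurement; all assumptions for illustration, not client data). If a program ships every test that “won” and reports the measured lift, the reported total systematically exceeds the true gain:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions: 200 tests a year, most with a true effect
# near zero, each measured with sampling noise.
n_tests = 200
true_lift = rng.normal(0.0, 0.01, n_tests)             # true relative lifts
measured = true_lift + rng.normal(0.0, 0.02, n_tests)  # noisy estimates

# Naive ROI accounting: ship every "winner" and report its measured lift.
winners = measured > 0.02
reported = measured[winners].sum()
actual = true_lift[winners].sum()

print(f"tests shipped:       {winners.sum()}")
print(f"reported total lift: {reported:+.1%}")
print(f"actual total lift:   {actual:+.1%}")
# Reported > actual: conditioning on a noisy "win" inflates the estimate
# (the winner's curse), which is one reason testing ROI gets overstated.
```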

New practitioners and metric owners see testing as a solution, a revenue ‘channel’, rather than what it is: an operating system and a process.

This speaks to the root cause of our love-hate relationship with the “CRO” term. The way to improve testing isn’t to focus on conversion rates; it’s to improve your team, culture, and processes.

This is why programs centered around ‘CRO’ can get off track. With a CRO focus, testing can become divorced from a company’s senior leadership and its product, marketing, and brand strategy. It often incentivizes the wrong activities. It’s a tail-wagging-the-dog situation.

It’s a situation where:

  • A website can start to look like an arcade game with every “growth hacking” technique thrown at it. 
  • Customers are trained to look for promotions before buying.
  • Teams buy cheap conversions, juicing revenue at the expense of long-term growth.

The incentives can get perverse easily. 

I LOVE how Chetan Sharma describes this situation with an example from Airbnb, where the metric strategy greatly influenced how the experimentation and product teams were incentivized.

So this leads us into part two of this post: metrics. A successful testing program aligns its metric strategy with the product and brand strategy.

Metric Taxonomy

Metrics are the cornerstone of measurement, and differences in measurement are how I’m ultimately defining CRO vs. CXO vs. Experimentation. Here’s a metric taxonomy I find helpful. 

1. Goal metrics

These are lag metrics. Classically they center around revenue, the customer, or processes. 

Side note: 4 Disciplines of Execution and Playing to Win both stress that customer goal metrics, rather than revenue metrics, are where to focus.

Goal metrics are what to set OKRs and WIGs around. They are the ‘GPS’ of your program or company. They are the ‘North Star’ metrics we hear a lot about. I like what Lenny Rachitsky writes on the topic of North Star metrics.

2. Driver metrics

Driver metrics are short-term, time-sensitive metrics. They are the “what to measure to see effects” metrics. Driver metrics are surrogates, indirect metrics, and leading indicators. Classic leading indicators include conversion rate, bounce rate, and form completions.   

Driver metrics are the speedometer of the business. They are helpful, but without the context of the GPS, useless.

Here are some example driver metrics.
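
As a minimal sketch (all names and numbers below are hypothetical, not a real analytics schema), the classic driver metrics are simple ratios over period totals:

```python
# Hypothetical period totals; in practice these come from your analytics tool.
sessions = 48_210             # total sessions
bounces = 21_695              # single-page sessions
orders = 1_253                # completed purchases
form_starts = 3_410
form_completions = 1_877

conversion_rate = orders / sessions             # leading indicator of revenue
bounce_rate = bounces / sessions                # leading indicator of engagement
form_completion_rate = form_completions / form_starts

print(f"conversion rate:      {conversion_rate:.2%}")       # 2.60%
print(f"bounce rate:          {bounce_rate:.2%}")           # 45.00%
print(f"form completion rate: {form_completion_rate:.2%}")  # 55.04%
```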


3. Guardrail metrics

This is a category similar to driver metrics but focused on the performance of the program. These metrics protect the downside, assure trustworthiness, and are meant to help ‘grease the wheels’ of the machine. They are driver metrics of the program itself.

Here is a working set of guardrail metrics for a testing program.
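
As a hedged sketch (the counts are made up, and these particular metrics are common choices rather than a definitive set), program-level guardrails are ratios over the program’s own activity:

```python
# Hypothetical program counts for one year of testing.
tests_launched = 52       # tests started
tests_concluded = 47      # tests that reached a trustworthy decision
wins = 12                 # concluded tests with a positive, significant result

win_rate = wins / tests_concluded                    # hypothesis quality
velocity = tests_launched / 52                       # tests launched per week
conclusive_rate = tests_concluded / tests_launched   # execution trustworthiness

print(f"win rate:        {win_rate:.0%}")        # 26%
print(f"velocity:        {velocity:.1f}/week")   # 1.0/week
print(f"conclusive rate: {conclusive_rate:.0%}") # 90%
```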


Now we have a framework for metrics. So how do we use it to differentiate CRO vs. CXO vs. experimentation?


For me, experimentation is the process, and CRO/CXO are the applications or mindsets applied.

But that said, as I discussed above, experimentation could also be considered a mindset/persona. There are many (mostly enterprise) programs jumpstarting experimentation with a core focus on process first. So I think it deserves a place in the same conversation as CRO/CXO.

If we connect our test program personas to the associated goals and metrics, here’s what we have:

Persona            Goal                   Primary metrics
Experimentation    Testing (process)      Guardrail metrics
CXO                Learning (customer)    Goal metrics
CRO                Wins (revenue)         Driver metrics
I know there are issues with this simplification. It’s a model, and all models are wrong. But it helps me distinguish between objectives and align conversations with clients, colleagues, and prospects on where and what to focus.

I hope it helps you too.
