Experimentation / CRO

Experimentation and Testing Programs acknowledge that the future is uncertain. These programs focus on getting better data to product and marketing teams to make better decisions.

Research & Strategy

We believe that research is an integral part of experimentation. Our research projects aim to identify optimization opportunities by uncovering what really matters to your website users and customers.

Data and Analytics

90% of the analytics setups we’ve seen are critically flawed. Our data analytics audit services give you the confidence to make better decisions with data you can trust.

Speero Experimentation Blueprints

Experimentation Operating System (XOS) Blueprints help visualize organizational processes in order to optimize how a business delivers an experimentation program.

They have two parts: 1. they are decision-support tools that you customize and build on top of, and 2. they connect to a program or business metric such as research velocity, decision quality, or speed.

We present them as downloadable 'tools' (Figma, Miro, Decks, Docs, Sheets) for you to take, customize, and optimize your program with.
Filter blueprints by pillars of the XOS:

What are you testing for?

When running experiments systematically, it is easy to start testing for the sake of testing and lose sight of the actual actions and changes we want to drive through experimentation. That's why the Experimentation Decision Matrix Blueprint provides you with an action plan before you start any testing.

Based on your primary success metric (and supported by the secondary ones), what's going to be your action after concluding the test? This way, you can remind not only yourself and your team, but your stakeholders as well, of the actual actions you plan to take after you conclude each of your experiments. The suggestion here is to implement a set of action tags in your knowledge database so that you can classify and act on these results.

This framework gives you a reference for how to tag and classify your results depending on the type of hypothesis (superiority vs. non-inferiority) and the action to take afterward. It will also help you build a narrative in which "winning" and "losing" are not necessarily what's impactful for the business.

Use Cases:

- Decide whether to implement, iterate or abandon a hypothesis.
- Update your experimentation roadmap based on prior results.
- Classify the experiments in your Knowledge Database.
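The matrix described above can be sketched as a small lookup from hypothesis type and test outcome to an action tag. This is an illustrative sketch only; the tag names and the mapping are assumptions based on the description, not the blueprint itself.

```python
# Illustrative sketch of an experiment decision matrix: map a hypothesis
# type and test outcome to an action tag for a knowledge database.
# The tag names and the matrix entries are assumptions, not the blueprint's.

DECISION_MATRIX = {
    # (hypothesis_type, outcome) -> action tag
    ("superiority", "win"): "implement",
    ("superiority", "flat"): "iterate",
    ("superiority", "loss"): "abandon",
    ("non-inferiority", "win"): "implement",
    ("non-inferiority", "flat"): "implement",  # no harm detected; ship the simpler variant
    ("non-inferiority", "loss"): "abandon",
}

def action_for(hypothesis_type: str, outcome: str) -> str:
    """Return the action tag to store alongside the experiment result."""
    return DECISION_MATRIX[(hypothesis_type, outcome)]
```

Note how a flat result means different things for the two hypothesis types: under a non-inferiority hypothesis, "no difference" can itself be the desired outcome.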

Experimentation Decision Matrix Blueprint

The result of a test can have little to do with what comes next. So it's important to separate the concept of a 'result' from the 'action' that follows.
View blueprint »
Artifact
Decision & Execution

Validation Methods

The Validation Methods Blueprint helps you decide which validation method to use when you're making website changes, or when you need different validation tools for copy, functionality, and design. Shipping big changes to the site without prior validation can be very costly and even create more problems than it solves. Disruptive ideas should be tested or validated with further research before releasing them to the public.

While A/B testing gives you quantitative data on how your changes perform on the site, sometimes you'll want to validate ideas before spending resources on building a new test, especially if you're not confident in the current solution. In some situations, you might not even have the bandwidth or resources to validate the idea with A/B testing alone. Luckily, there are numerous options for gathering additional qualitative or quantitative data to validate your ideas.

Use Cases:

- Decide which validation method to use when doing site changes
- Get visibility to different validation tools for copy, functionality, design, etc.

Validation Methods Blueprint

Shipping out big site changes without prior validation can be costly for a business and create more problems than it solves.
View blueprint »
Artifact
Planning & Process

Why Experiment?

Randomized controlled trials are best known from medical research, but since around 2000 this scientific approach has also been used to improve and learn from web pages. The research is double-blind (neither the researcher nor the visitor knows who is in which variant), and there is a hypothesis that stands until it is falsified.

Use Cases

- Prove that A/B testing is a scientific research method.
- Show the history behind experimentation.

Why Experiment?

Randomized controlled trials are best known from medical research, but since around 2000 this scientific approach has also been used to improve and learn from web pages.
View blueprint »
Ritual
Planning & Process

Program Maintenance Metrics

The Program Metrics Blueprint lets you monitor the success of your experimentation program. Why is this important? Besides tracking outcome metrics like the number of wins or losses from your experiments, you should also track program metrics and report on them monthly or quarterly.

If you're in a situation where you are in charge of running experiments on the site but are facing some slowdowns or issues in efficiency, you need to think about the relevant metrics that will help you identify the bottlenecks in your program.

For example, if the test velocity has been going down, you can identify where the problems lie by reviewing how many ideas are being submitted every month, or maybe even how many ideas are in the backlog. If idea generation is not the problem here, you should be tracking how long test creation is taking for the team by looking at time and sub-tasks.
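The bottleneck diagnosis above can be sketched in a few lines: count launches per month and compare them to idea submissions. The data and field names here are hypothetical, purely to illustrate the comparison.

```python
# Illustrative sketch: diagnose where test velocity is slipping by
# comparing monthly idea submissions to monthly test launches.
# All dates and counts below are hypothetical example data.
from collections import Counter
from datetime import date

launches = [date(2024, 1, 5), date(2024, 1, 20), date(2024, 2, 3), date(2024, 4, 11)]
ideas_submitted = {"2024-01": 12, "2024-02": 9, "2024-03": 4, "2024-04": 10}

def monthly_velocity(launch_dates):
    """Tests launched per month, keyed as YYYY-MM."""
    return Counter(d.strftime("%Y-%m") for d in launch_dates)

velocity = monthly_velocity(launches)

# Months where ideas kept arriving but nothing launched point to a
# build/QA bottleneck rather than an idea-generation problem.
bottleneck_months = [
    m for m, n in ideas_submitted.items() if n > 0 and velocity.get(m, 0) == 0
]
```

In this toy data, March has four submitted ideas and zero launches, so the slowdown is downstream of idea generation.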

Use Cases:

- Increase your testing velocity.
- Monitor the success of your experimentation program.
- Improve the efficiency and effectiveness of your program.

Program Metrics

Alongside individual experiment goals and metrics, you should be measuring the overall success of your experimentation program.
View blueprint »
Artifact
Decision & Execution

Goal Tree Mapping

The Goal Tree Mapping blueprint is a logical and hierarchical visual representation of goals, critical success factors, and necessary conditions. Using a goal tree map, you can easily visualize the metrics that matter to your business and identify sub-goals that must be achieved first.

Do you see constant changes in your company's strategy and find it hard for teams to stick to and focus on projects? Can’t find the common goal? Or maybe your stakeholders only care about top-level business goals and disregard any improvements in smaller metrics?

The Goal Tree Map Blueprint lets you list all the different metrics and goals you should keep in mind when designing your program. You can also use it to break down KPIs into smaller metrics where you can see improvements faster and communicate better. The main benefit of the Goal Tree Map is that it aligns your CRO team around the same goal.
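The hierarchy a goal tree describes maps naturally onto a small tree data structure. The goal names and metrics below are illustrative assumptions, not content from the blueprint.

```python
# Minimal sketch of a goal tree: each node is a goal with child sub-goals
# that must be achieved first. Goal names and metrics are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    metric: str
    children: list["Goal"] = field(default_factory=list)

tree = Goal("Grow revenue", "monthly revenue", [
    Goal("Increase conversion rate", "order conversion %", [
        Goal("Reduce checkout drop-off", "checkout completion %"),
        Goal("Improve PDP clarity", "add-to-cart %"),
    ]),
    Goal("Increase average order value", "AOV"),
])

def leaf_metrics(goal: Goal) -> list[str]:
    """The small, fast-moving metrics at the bottom of the tree."""
    if not goal.children:
        return [goal.metric]
    return [m for child in goal.children for m in leaf_metrics(child)]
```

Walking the leaves gives you exactly the "smaller metrics where you can see improvements faster" that the blueprint asks you to surface for stakeholders.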

Use Cases:

- Get buy-in for experimentation.
- Define the strategic KPIs that fuel optimization efforts.
- Identify metrics that are crucial to the company's success.
- Break down KPIs into smaller metrics where we can see improvements faster.

Goal Tree Mapping Blueprint

A goal tree is a logical and hierarchical visual representation of goals, critical success factors, and necessary conditions.
View blueprint »
Artifact
Assessment & Integration

Do you find yourself in a rut? Iterating yourself into a corner? Are you at the local maximum? Have you squeezed all the juice that's gonna come out of your lemon?

Speero’s Solution Spectrum Blueprint™ helps you manage one or more in-house experimentation programs effectively and overcome these hurdles. Solution Spectrum Blueprint lets you think about ways to be more disruptive in your tests. It lets you tag your tests as:

— Iterative

— Substantial

— Disruptive

Ultimately, if you tag your test or research solutions with these categories, you can start to measure a 'test portfolio balance' metric, which lets you manage how risky or iterative the program is over time. Mature programs should have a balance of big bets focused on learning and small tweaks focused on optimization and margin gains.
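Once tests carry these tags, the portfolio balance metric is just a share-per-category computation. A minimal sketch, assuming a flat list of tags per test:

```python
# Illustrative sketch of the 'test portfolio balance' metric: the share of
# tests in each solution-spectrum category. The test log is hypothetical.
from collections import Counter

tests = ["iterative", "iterative", "substantial", "disruptive", "iterative"]

def portfolio_balance(tags):
    """Share of tests per category across the whole program."""
    counts = Counter(tags)
    total = sum(counts.values())
    return {tag: counts[tag] / total for tag in ("iterative", "substantial", "disruptive")}

balance = portfolio_balance(tests)
```

Tracked quarter over quarter, a balance like 60/20/20 versus 95/5/0 makes the "are we only iterating?" conversation with leadership concrete.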

During good times, it’s nice to just iterate. But when you’re in a rough patch… You have to shake up the globe.

Use Cases:

— Tag tests then measure how many small, medium, and large bets your team is making.

— Based on these measurements, see whether you're overly focused on quantity or quality.

— Communicate to leadership and other teams what kinds of solutions are being discussed, and what categories of outcomes are intended.

Solution Spectrum Blueprint

Solutions aren't created equal. Sometimes they are small tweaks; sometimes they are huge bets. The solution spectrum provides language to help categorize and communicate what types of solutions the team is working on.
View blueprint »
Artifact
Planning & Process

One of the most asked questions ever: should I run experiments simultaneously?

Generally, the answer is yes, you should; if you don't, you face multiple worse 'evils'. For everything else, there is this decision tree.
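The blueprint's decision tree itself isn't reproduced here, but one common technique for running experiments simultaneously without interaction is deterministic visitor bucketing, so two tests that would interfere can be made mutually exclusive. A generic sketch of that idea, not the blueprint's content:

```python
# Generic sketch of mutually exclusive assignment for simultaneous tests:
# hash each visitor into a stable bucket, then split buckets between two
# tests that would otherwise interact. Layer and test names are made up.
import hashlib

def bucket(visitor_id: str, layer: str, n_buckets: int = 100) -> int:
    """Stable bucket in [0, n_buckets) for a visitor within a layer."""
    digest = hashlib.sha256(f"{layer}:{visitor_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

def assign(visitor_id: str) -> str:
    """Split one layer 50/50 between two potentially interacting tests."""
    return "test_A" if bucket(visitor_id, layer="checkout") < 50 else "test_B"
```

Because the hash is deterministic, a visitor always lands in the same test, and no visitor ever sees both.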

Use Cases:

- Know when to run simultaneous experiments.
- Understand how to handle running simultaneous experiments.

Should I Run Experiments Simultaneously? Blueprint

One of the most asked questions ever: should I run experiments simultaneously? Generally, the answer is yes, you should; if you don't, you face multiple worse 'evils'. For everything else, there is this decision tree.
View blueprint »
Artifact
Test & Learn

Why PXL?

Most experimentation programs start by prioritizing ideas with a model like ICE or PIE. If you want to improve your prioritization, move on to the PXL model. It is much more detailed and tailored to A/B test ideas and hypotheses.

ICE and PIE models are subjective, depending on who's calling the shots about the changes. The PXL model is more nuanced, with added questions like: Is the change above the fold? Changes above the fold are noticed by more people, increasing the likelihood of the test having an impact. Is the change noticeable within 5 seconds? If not, it's likely to have less impact.
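PXL-style prioritization boils down to answering mostly binary questions per idea and summing weighted scores. The criteria names and weights below are a simplified illustration; the blueprint defines the full set.

```python
# Simplified sketch of PXL-style scoring: binary criteria with weights,
# summed into a priority score per idea. Criteria and weights here are
# illustrative assumptions, not the blueprint's actual list.

CRITERIA = {
    "above_the_fold": 1,
    "noticeable_in_5s": 2,        # weighted higher: visibility drives impact
    "backed_by_user_research": 1,
    "runs_on_high_traffic_page": 1,
    "easy_to_implement": 1,
}

def pxl_score(answers: dict) -> int:
    """Sum the weights of every criterion the idea satisfies."""
    return sum(w for name, w in CRITERIA.items() if answers.get(name, False))

ideas = {
    "sticky add-to-cart": {"above_the_fold": True, "noticeable_in_5s": True,
                           "easy_to_implement": True},
    "footer copy tweak": {"easy_to_implement": True},
}
ranked = sorted(ideas, key=lambda i: pxl_score(ideas[i]), reverse=True)
```

Because every question is answered yes/no against evidence, two people scoring the same backlog land on similar rankings, which is exactly the subjectivity problem with ICE and PIE.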

How you prioritize your experiments matters. A lot. Prioritization models help you base your experiments and changes on valid foundations like user research, other data, the change's impact, and more. They are also great when you have a big backlog of ideas to sort and prioritize. Prioritization models help you start things right and dig gold from the dirt.

Use Cases:

- Prioritize your experiments based on objective standards.

- Help someone who’s only starting experimentation.

PXL Blueprint

Most experimentation programs start by prioritizing ideas with a model like ICE or PIE. If you want to improve your prioritization, move on to the PXL model.
View blueprint »
Artifact
Planning & Process