Experimentation / CRO

Experimentation and Testing Programs acknowledge that the future is uncertain. These programs focus on getting better data to product and marketing teams so they can make better decisions.

Research & Strategy

We believe that research is an integral part of experimentation. Our research projects aim to identify optimization opportunities by uncovering what really matters to your website users and customers.

Data and Analytics

90% of the analytics setups we’ve seen are critically flawed. Our data analytics audit services give you the confidence to make better decisions with data you can trust.

Speero Experimentation Blueprints

Experimentation Operating System (XOS) Blueprints help visualize organizational processes in order to optimize how a business delivers an experimentation program.

They have two parts: (1) they are decision-support tools, designed to be built on and customized, and (2) they connect to a program or business metric such as research velocity, decision quality, or speed.

We present them as downloadable 'tools' (Figma, Miro, Decks, Docs, Sheets) for you to take, customize, and optimize your program with.

What are the main Quality Assurance steps in A/B testing?

A step-by-step guide to the Quality Assurance process in A/B testing.

How can you differentiate each part of QA?

What are the main areas and possible issues you should keep an eye on?

Use Cases:

- Quality Assurance is a must when you are A/B testing.
- While tools and use cases differ, the core process should be the same for every experiment.
- Review and verify the setup, then test in different browsers and devices (see the sketch below).
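
Because the process should be identical for every experiment, it helps to encode the checklist. Here is a minimal sketch, assuming illustrative step names and a hypothetical experiment; it is not the blueprint's exact step list.

```python
# A minimal QA checklist as data, so every experiment goes through the same
# steps. The step names are illustrative, not the blueprint's exact wording.

QA_CHECKLIST = [
    "Review hypothesis, targeting, and traffic allocation setup",
    "Verify variant code renders correctly on the target pages",
    "Confirm goals and metrics fire in the analytics and testing tools",
    "Test in major browsers (Chrome, Firefox, Safari, Edge)",
    "Test on major device types (desktop, tablet, mobile)",
    "Check for flicker, layout shifts, and console errors",
]

def run_qa(experiment: str, results: dict) -> bool:
    """Print a pass/fail report; return True only if every step passed."""
    all_passed = True
    print(f"QA report: {experiment}")
    for step in QA_CHECKLIST:
        passed = results.get(step, False)
        print(f"  [{'x' if passed else ' '}] {step}")
        all_passed = all_passed and passed
    return all_passed

# Hypothetical example: the first four steps are done, devices are pending.
done = {step: True for step in QA_CHECKLIST[:4]}
print("Launch approved" if run_qa("PDP price anchoring test", done)
      else "Hold launch: QA incomplete")
```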

A/B Testing QA Process Blueprint

A step-by-step guide to the Quality Assurance process in A/B testing.
View blueprint »
Ritual
Planning & Process

How do I present ROI and customer learnings across tests for a program?

The structure of the database of test learnings is important for communicating to stakeholders and ensuring that decisions and actions are documented correctly. Above all, it can be a cultural tool: a recipe for changing culture from a data foundation, not just hand-waving about the theoretical benefits of democratized decision making.

Use Cases:

There is a LOT to take away from the structure below. Most of the takeaways should affect how you communicate to stakeholders.

For example:

1. A loss can equal a save.

2. A flat test can still be 'implemented' (a win, if it confirmed something).

3. It emphasizes RELATIVE effects across tests, which isn't discussed enough in our industry. In our view, accuracy is overrated; precision wins.

4. A financial model is critical. Create a goal tree map, then build a model to translate it into relative potential revenue. Use it BOTH for prioritization and for reporting like this. It changes the game.

5. But don't present only $ numbers; pair them EVERY TIME with customer-learning sentences that summarize what the result meant for your customers' behavior and/or perceptions.
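
As a concrete illustration, here is a sketch of what one record in such a learnings database might hold, pairing the financial outcome with a customer-learning sentence as point 5 recommends. Field names and example values are our assumptions, not a prescribed schema.

```python
# One record in a cross-test learnings database. Field names are illustrative;
# the point is pairing the financial outcome with a plain-language customer
# learning, per points 4 and 5 above.
from dataclasses import dataclass

@dataclass
class TestLearning:
    test_name: str
    outcome: str               # "win" | "loss" | "flat"
    decision: str              # "implemented" | "discarded" | "iterate"
    relative_effect: float     # relative effect vs. control; -0.03 = -3%
    est_annual_revenue: float  # from the financial model; a loss = a save
    customer_learning: str     # one-sentence TL;DR of behavior/perception

learnings = [
    TestLearning("Checkout trust badges", "loss", "discarded", -0.03, 180_000,
                 "Extra badges raised doubt rather than trust at payment."),
    TestLearning("Simplified nav labels", "flat", "implemented", 0.0, 0.0,
                 "Users understood the shorter labels equally well, so the "
                 "cleaner design ships at no conversion cost."),
]
for item in learnings:
    print(f"{item.test_name}: {item.decision} ({item.outcome}) "
          f"- {item.customer_learning}")
```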

How to structure reporting across tests? Blueprint

The structure of the database of test learnings is important for communicating to stakeholders and ensuring that decisions and actions are documented correctly.
View blueprint »
Artifact
Decision & Execution

How should I structure my experimentation program test reporting process?

Test reporting is critical to decision making and to overall program velocity: the faster you can report out, the faster the decision can come. This is Agility as a metric. We think there are two sides to test reporting: the automated, and the bespoke or manual. (1) BI tools like Looker Studio, and even the testing tools themselves, provide the automated side; (2) the storytelling, where different metrics are highlighted and the implications and insights are presented, is the custom side. This blueprint acknowledges the need for, and the balance between, these two parts of experimentation test reporting (see the sketch after the use cases below).

Use Cases:

- Increase awareness of what goes into the test reporting phase of the experimentation process

- Align the team on who does which part, and which parts need more work

- Define your own program's component parts for this process
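
The split might look like the following sketch: a templated, automated readout plus a bespoke narrative layer. The metrics dict and the insight text are illustrative assumptions.

```python
# The two sides of test reporting described above: an automated metric
# readout (normally produced by a BI tool such as Looker Studio or the
# testing tool itself) plus a bespoke storytelling layer from an analyst.

def automated_summary(metrics: dict) -> str:
    """The automated side: a templated readout of the headline numbers."""
    return (f"Variant lift: {metrics['lift']:+.1%} "
            f"(p = {metrics['p_value']:.3f}, n = {metrics['n']:,})")

def full_report(metrics: dict, narrative: str) -> str:
    """Combine the automated readout with the custom insight."""
    return automated_summary(metrics) + "\nAnalyst insight: " + narrative

print(full_report(
    {"lift": 0.042, "p_value": 0.031, "n": 58_400},
    "Shorter forms lifted mobile signups most; desktop was flat, suggesting "
    "the friction was typing effort rather than field count.",
))
```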

How should I structure my (automatic) test reporting? Blueprint

Test reporting is critical to decision making, and also overall program velocity. The faster you can report out, the faster the decision can come. This is Agility as a metric.
View blueprint »
Ritual
Decision & Execution

What are the differences between synchronous and asynchronous testing tool snippets?

What are the differences between synchronous and asynchronous testing tool snippets? Which one suits better for your website and testing program?

Use Cases:

While the difference between synchronous and asynchronous testing tool snippets may seem small, the actual impact this has on your website loading speeds, test flickering and overall user experience can be quite significant.

Synchronous vs Asynchronous Testing Tool Snippets - Pros and Cons Blueprint

What are the differences between synchronous and asynchronous testing tool snippets? Which one suits better for your website and testing program?
View blueprint »
Artifact
Planning & Process

Where are the key areas of opportunity on our website? How well does our website meet key UX principles?

The definition of a “heuristic” is “a mental shortcut that allows people to solve problems and make judgments quickly and efficiently”. As such, our UX heuristic framework is made up of a set of guidelines which allow our team to assess and analyse any digital user experience and identify areas of opportunity for optimization.

Speero's UX heuristic framework was developed by combining and consolidating the frameworks used by the industry's leading UX and CRO agencies. The result is a set of 60 guidelines across five heuristic themes:

Value: does the content communicate the value to the user?
Relevance: does the page meet user expectations in terms of content and design?
Clarity: is the content/offer on this page as clear as possible?
Friction: what is causing doubts, hesitations, uncertainties, and difficulties?
Motivation: does the content encourage and motivate users to take action towards the goal?
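
Encoded as data, the themes can also serve the tagging use case below. A small sketch with an invented example finding; only the theme names and their questions come from the framework itself.

```python
# The five heuristic themes as a lookup, so review findings (and later JDIs
# or experiments) can be tagged and tracked against them.

HEURISTIC_THEMES = {
    "Value": "Does the content communicate the value to the user?",
    "Relevance": "Does the page meet user expectations in content and design?",
    "Clarity": "Is the content/offer on this page as clear as possible?",
    "Friction": "What causes doubts, hesitations, and difficulties?",
    "Motivation": "Does the content encourage action towards the goal?",
}

def tag_finding(page: str, theme: str, note: str) -> dict:
    if theme not in HEURISTIC_THEMES:
        raise ValueError(f"Unknown theme: {theme}")
    return {"page": page, "theme": theme, "note": note}

# Hypothetical finding from a review of a pricing page:
print(tag_finding("/pricing", "Clarity",
                  "Plan differences are buried below the fold."))
```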

Frameworks similar to Speero's Heuristics Blueprint include:

MECLABS Conversion Index

Conversion's The Lever Framework

Conversion's The Lift Model

Use Cases:

- Assess any digital experience to understand and identify areas of opportunity for optimisation
- Use as a framework to tag and track action (JDIs, experiments, etc.)

Heuristics Blueprint

The definition of a “heuristic” is “a mental shortcut that allows people to solve problems and make judgments quickly and efficiently”.
View blueprint »
Ritual
Assessment & Integration

How do I turn research insight into action?

Use this decision tree to help you to effectively categorize the insights generated via research. Effective categorization is where we turn insight into action and is the first step in developing an experimentation roadmap from research. Once you've categorized your insights, each list of insights can be dealt with accordingly, e.g. JDIs can be added to the development backlog or the next sprint, Instrument items can be handled by your analytics or development team, etc.
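
As a sketch, the decision tree might look like the function below. "JDI" and "Instrument" come from the description above; the "Test" and "Investigate" branches, and the yes/no questions, are assumed stand-ins for the blueprint's actual logic.

```python
# A sketch of the insight categorization decision tree as code.

def categorize_insight(is_obvious_fix: bool,
                       needs_tracking_first: bool,
                       is_testable_hypothesis: bool) -> str:
    if is_obvious_fix:
        return "JDI"         # Just Do It: add to the dev backlog or sprint
    if needs_tracking_first:
        return "Instrument"  # hand to the analytics or development team
    if is_testable_hypothesis:
        return "Test"        # add to the experimentation roadmap
    return "Investigate"     # needs more research before it is actionable

print(categorize_insight(False, False, True))  # -> "Test"
```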

Use Cases:

- Turn research insight into an experimentation roadmap
- Create actionable workstreams for different teams
- Avoid the implementation crisis

Insight Categorisation Blueprint

Use this decision tree to help you to effectively categorize the insights generated via research.
View blueprint »
Ritual
Assessment & Integration

How do I align my program with business and customer goals?

Use Strategy Maps to orient your experimentation program. The focus will depend on the strategy and positioning of the brand.

Use Cases:

Understand whether your experimentation is focused on Brand/Performance/Product marketing and Acquisition/Monetization/Retention, and whether your positioning is Sales/Marketing/Product led.

Strategy Maps Blueprint

Use Strategy Maps to orient your experimentation program. The focus will depend on the strategy and positioning of the brand.
View blueprint »
Artifact
Assessment & Integration

How to define different research objectives?

It can be challenging to effectively incorporate user research into experimentation programs on an ongoing basis. However, categorising all research initiatives into one of these three categories can help you to plan research more effectively. Use this framework to communicate these three core research objectives.

Use Cases:

- To plan how to incorporate user research into your experimentation program on an ongoing basis
- To plan specific research initiatives, and keep them focused on a primary objective
- To communicate the different types of research being conducted to the wider business

Research Objectives Blueprint

It can be challenging to effectively incorporate user research into experimentation programs on an ongoing basis.
View blueprint »
Artifact
Assessment & Integration

What is the goal of the testing program?

There are different reasons to test and experiment, ranging from revenue to customer to process goals. CRO programs care about wins; they push for more money. CXO programs care about customers and so focus on metrics around customer satisfaction and retention (as surrogates for money). XOS or 'experimentation' programs believe that a culture of using data, and ultimately science, is a better way to run an innovative company.

Use Cases:

- to help communicate with stakeholders where the focus should be

What is the goal of the testing program?

There are different reasons to test and experiment, and they range from revenue to customer to process goals.
View blueprint »
Artifact
Assessing and Scaling the Flywheel

Is my experimentation program profitable?

Assessing the revenue impact of your experimentation program is essential for informed decision making in business. Using a testing revenue model to measure ROI impact will help you understand the effectiveness of your strategies. Evaluating the direct revenue contribution of individual experiments allows you to determine where to allocate resources based on their effectiveness, ultimately enabling data-driven decision-making.
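
A minimal sketch of such a model is below, assuming an illustrative baseline, lift, and decay rate; a real model would be built from your own goal tree and financials.

```python
# Estimate the first-year revenue impact of one implemented test: apply the
# measured relative lift to baseline revenue and decay it month over month,
# to account for test performance declining over time. All inputs here are
# illustrative assumptions.

def annualized_impact(baseline_monthly_revenue: float,
                      relative_lift: float,
                      monthly_decay: float = 0.05) -> float:
    """Sum the lift's monthly contribution with geometric decay."""
    total, effect = 0.0, relative_lift
    for _ in range(12):
        total += baseline_monthly_revenue * effect
        effect *= 1 - monthly_decay  # the effect erodes each month
    return total

# A +3% lift on a $500k/month baseline, decaying 5% per month:
print(f"${annualized_impact(500_000, 0.03):,.0f} estimated first-year impact")
```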

Use Cases:

- Measure the potential experimentation impact over time
- Monitor the success of the experimentation program
- Account for test performance decline over time

Testing revenue model

Assessing the revenue impact of your experimentation program is essential for informed decision-making in business.
View blueprint »
Ritual
Decision & Execution

Should I implement a data warehouse in my business or not?

The Using Data Warehouse vs Not Using Data Warehouse Blueprint helps with just that. It shows how a data warehouse can help you bring all your data together to make better decisions and improve your business. Without one, these tasks can be more challenging and less efficient.

Use Cases:

- The blueprint helps you see why a data warehouse is good for your business.
- By looking at the table, you can decide whether you need a data warehouse.
- The table guides you on which features to focus on if you're considering a data warehouse.

Using Data Warehouse vs Not Using Data Warehouse Blueprint

The "Using Data Warehouse vs Not Using Data Warehouse" blueprint compares using a data warehouse to not using one for your business.
View blueprint »
Artifact
Planning & Process

"Hypothesis Testing" and "Do No Harm" treatments represent different experimental goals, hence it is important to differentiate them to ensure correct statistical analysis and interpretation of results, as well as increase your testing velocity. Hypothesis testing is used to test whether one variation is significantly better than another, while a "Do No Harm" treatment is used to test whether one variant is not significantly worse than another by a predetermined margin.

Use Cases:

- Estimating the duration of an experiment
- Balancing a portfolio of experiments in your program
- Increasing your testing velocity

Hypothesis Testing vs “Do No Harm” Treatments in A/B Testing

"Hypothesis Testing" and "Do No Harm" treatments represent different experimental goals, hence it is important to differentiate them to ensure correct statistical analysis and interpretation of results.
View blueprint »
Artifact
Test & Learn

Based on my website's traffic and conversion numbers, how many experiments can I run at each step of the user journey?

A/B testing is grounded in statistics, and statistics always requires a certain sample size to draw meaningful conclusions. Using a test bandwidth calculator and the "Where and How Should I Test to Make the Most Money?" blueprint, you can determine whether your website's traffic is fit for A/B testing and whether you can go deeper into segments and down-funnel metrics.
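
The underlying math can be sketched as follows, with assumed traffic, conversion, and effect-size numbers; plug in your own figures for a real answer.

```python
# Bandwidth math: how many visitors does one test need, and how many tests
# fit into monthly traffic at a given funnel step? Inputs are illustrative.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_cr = 0.03          # conversion rate at this funnel step
target_lift = 0.10          # minimum relative effect worth detecting (+10%)
monthly_visitors = 200_000  # traffic reaching this step each month

effect = proportion_effectsize(baseline_cr * (1 + target_lift), baseline_cr)
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                         power=0.8, alternative="larger")

visitors_per_test = 2 * n_per_arm  # control + one variant
print(f"~{n_per_arm:,.0f} visitors per arm; roughly "
      f"{monthly_visitors / visitors_per_test:.1f} tests/month at this step")
```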

Use Cases:

- Is your website eligible for A/B testing based on its current traffic and conversion volumes?
- Is it feasible for you to run experiments with smaller impact or should you focus on high-impact experiments higher in the funnel?

Where and How Should I Test to Make the Most Money? Blueprint

Based on my website's traffic and conversion numbers, how many experiments can I run at each step of the user journey?
View blueprint »
Artifact
Assessment & Integration

Experimentation is rife with tools. Which to use?

And how do you keep the budget lean? The XOS Tool Stack Blueprint provides an overview of the main software categories you should be familiar with, as well as examples of specific tools you can use in each category.

Use Cases:

- Assess the required software tools for an experimentation program
- Budget and estimate costs for the experimentation practice

XOS Tool Stack Blueprint

Get to know the most common software categories and tools that professional experimentation teams use to perform their job.
View blueprint »
Artifact
Assessment & Integration

User research methods framework; which method is best suited to your research objective?

The User Research Methods Framework blueprint helps you plan which research or validation method best suits your current business needs. Use it to support user research strategy and planning. You can also use it to identify which research activity best suits your research goal or business case based on:

  1. Data type
  2. Effort (time and money)
  3. Value (signal strength or confidence level)

For instance, Heuristic Review gets quite a low value score. Why? Because it's as close to having an opinion as we're ever going to get. Even though it's a vital part of the process, a UX review doesn't have the same signal strength as Customer Interviews or Usability Studies. But it requires far less effort (time and money) than, for example, Diary Studies.
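
The trade-off can be made explicit as data. The 1-to-5 scores below are our illustrative assumptions, not the blueprint's actual ratings.

```python
# Effort vs. value per research method, ranked by value gained per unit of
# effort. Scores are illustrative placeholders.

methods = {
    # method: (effort 1-5, value/signal strength 1-5)
    "Heuristic review":    (1, 2),
    "Customer interviews": (3, 4),
    "Usability study":     (3, 4),
    "Diary study":         (5, 4),
}

for name, (effort, value) in sorted(methods.items(),
                                    key=lambda kv: kv[1][1] / kv[1][0],
                                    reverse=True):
    print(f"{name:20s} effort={effort} value={value} "
          f"value-per-effort={value / effort:.1f}")
```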

Use Cases:

- Assess the suitability of different user research methods depending on your research objectives or business questions.
- Understand the effort vs. value involved in different user research methods to help plan the most appropriate methodology.
- Identify opportunities to combine research methods to cover a range of research "types" and increase the strength of a signal.

User Research Methods Framework; Which Method is Best Suited to Your Research Objective? Blueprint

This is a framework to support you in planning which research/validation method is best suited to your business needs.
View blueprint »
Artifact
Assessment & Integration

Should I use user- or session-based test analysis?

With the User- or Session-based Test Analysis? blueprint, you can formalize the mental model for using session or user counts when you're designing your test, deciding on the metrics, or performing the post-test analysis.
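
The difference is easy to see on a toy event log: the same data yields different conversion rates depending on the unit of analysis. The events below are invented for illustration.

```python
# The same raw events aggregated per session vs. per user give different
# conversion rates, which is why the unit of analysis must be fixed up front.

# (user_id, session_id, converted_in_session)
events = [
    ("u1", "s1", False), ("u1", "s2", True),   # u1 converts on a 2nd session
    ("u2", "s3", False), ("u2", "s4", False),
    ("u3", "s5", True),
]

sessions = len(events)
converted_sessions = sum(conv for _, _, conv in events)

users = {u for u, _, _ in events}
converted_users = {u for u, _, conv in events if conv}

print(f"Session-based CR: {converted_sessions / sessions:.0%}")     # 40%
print(f"User-based CR:    {len(converted_users) / len(users):.0%}")  # 67%
```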

Use Cases:

- When setting up a study design and selecting metrics.
- When setting up metrics in behavioral tools.

User- or Session-based Test Analysis? Blueprint

The metric you choose for post-test analysis matters a lot. While most experiments use session counts as the default metric, this blueprint helps you clearly understand which scenarios call for each unit of analysis.
View blueprint »
Artifact
Planning & Process