Use Strategy Maps to orient your experimentation program. The focus will depend on the brand's strategy and positioning.
Understand whether your experimentation is focused on Brand/Performance/Product marketing and Acquisition/Monetization/Retention, and whether your positioning is Sales/Marketing/Product led.
It can be challenging to incorporate user research into an experimentation program on an ongoing basis. However, assigning every research initiative to one of three core research objectives helps you plan research more effectively. Use this framework:
- To plan how to incorporate user research into your experimentation program on an ongoing basis
- To plan specific research initiatives, and keep them focused on a primary objective
- To communicate the different types of research being conducted to the wider business
There are different reasons to test and experiment, ranging from revenue to customer to process goals. CRO programs care about wins; they want to push for more money. CXO programs care about customers, and so focus more on metrics around customer satisfaction and retention (as surrogates for money). XOS or 'experimentation' programs believe that a culture of using data, and ultimately science, is a better way to run an innovative company.
- to help communicate with stakeholders where the focus should be
Assessing the revenue impact of your experimentation program is essential for informed decision-making. A testing revenue model helps you measure ROI and understand the effectiveness of your strategies. Evaluating the direct revenue contribution of individual experiments tells you where to allocate resources, and ultimately enables data-driven decisions (a minimal sketch of such a model follows the list below).
- To measure the potential experimentation impact over time
- To monitor the success of the experimentation program
- To account for test performance decline over time
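Here is a minimal sketch of what such a testing revenue model could look like; the function name, the 50% haircut on measured lift, and the linear decay to half the effect after a year are illustrative assumptions, not figures prescribed by the blueprint.

```python
# Minimal sketch of a testing revenue model (illustrative assumptions only):
# annualize each winning test's measured lift, apply a haircut for the
# winner's curse, and let the effect decay to account for performance
# decline over time.

def projected_annual_impact(monthly_revenue, observed_lift,
                            confidence_discount=0.5, annual_retention=0.5):
    """Rough projection of one winning test's first-year revenue impact.

    monthly_revenue     -- baseline monthly revenue of the affected flow
    observed_lift       -- measured relative lift, e.g. 0.04 for +4%
    confidence_discount -- haircut for estimation noise / winner's curse
    annual_retention    -- fraction of the effect assumed to remain after a year
    """
    first_month = monthly_revenue * observed_lift * confidence_discount
    # Linear decay from the full effect to `annual_retention` of it over 12 months.
    months = [first_month * (1 - (1 - annual_retention) * m / 11) for m in range(12)]
    return sum(months)


if __name__ == "__main__":
    # Example: a flow doing $200k/month with a +4% measured lift.
    print(f"${projected_annual_impact(200_000, 0.04):,.0f} projected first-year impact")
```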
The Using a Data Warehouse vs. Not Using a Data Warehouse blueprint compares the two approaches side by side. It shows how a data warehouse brings all your data together so you can make better decisions and improve your business; without one, the same tasks tend to be more challenging and less efficient.
- The blueprint helps you see why a data warehouse is good for your business.
- By looking at the table, you can decide whether you need a data warehouse.
- The table guides you on which features to focus on if you're considering a data warehouse.
"Hypothesis Testing" and "Do No Harm" treatments represent different experimental goals, hence it is important to differentiate them to ensure correct statistical analysis and interpretation of results, as well as increase your testing velocity. Hypothesis testing is used to test whether one variation is significantly better than another, while a "Do No Harm" treatment is used to test whether one variant is not significantly worse than another by a predetermined margin.
- Estimating the duration of an experiment
- Balancing a portfolio of experiments in your program
- Increasing your testing velocity
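To make the distinction concrete, here is a rough sketch of both analyses on conversion-rate data using a plain normal approximation; the function names, the 0.5-percentage-point margin, and the example counts are assumptions made up for illustration.

```python
# Sketch contrasting a superiority test ("is B better than A?") with a
# non-inferiority / "do no harm" test ("is B not worse than A by more than
# a margin?"). Uses a normal approximation for the difference in conversion
# rates; names and numbers are illustrative.

from statistics import NormalDist

def diff_in_proportions(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = (p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b) ** 0.5
    return p_b - p_a, se

def superiority_p_value(conv_a, n_a, conv_b, n_b):
    # H0: B is no better than A; reject only if B is significantly better.
    diff, se = diff_in_proportions(conv_a, n_a, conv_b, n_b)
    return 1 - NormalDist().cdf(diff / se)

def non_inferiority_p_value(conv_a, n_a, conv_b, n_b, margin=0.005):
    # H0: B is worse than A by at least `margin`; reject to conclude "no harm".
    diff, se = diff_in_proportions(conv_a, n_a, conv_b, n_b)
    return 1 - NormalDist().cdf((diff + margin) / se)

if __name__ == "__main__":
    # 5.0% vs 5.1% conversion on 20k users per arm: nowhere near a significant
    # win, but it comfortably clears a 0.5-point non-inferiority margin.
    print(f"superiority p     = {superiority_p_value(1000, 20000, 1020, 20000):.3f}")
    print(f"non-inferiority p = {non_inferiority_p_value(1000, 20000, 1020, 20000):.3f}")
```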
A/B testing rests on statistics, and statistics always requires a certain sample size before you can draw meaningful conclusions. Using a test bandwidth calculator and the "Where and how should I test to make the most money?" blueprint, you can determine whether your website's traffic is fit for A/B testing and whether you can go deeper into segments and down-the-funnel metrics (see the sample-size sketch after the list below).
- Is your website eligible for A/B testing based on its current traffic and conversion volumes?
- Is it feasible for you to run experiments with smaller impact or should you focus on high-impact experiments higher in the funnel?
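For a rough sense of what a test bandwidth calculation involves, the sketch below uses the standard two-proportion sample size formula at 95% confidence and 80% power; the traffic, baseline conversion, and MDE figures are invented for the example.

```python
# Back-of-the-envelope "test bandwidth" sketch: given weekly traffic and a
# baseline conversion rate, estimate how many weeks a test needs for a given
# minimum detectable effect (MDE). Figures below are illustrative.

from statistics import NormalDist

def required_sample_per_arm(baseline_cr, relative_mde, alpha=0.05, power=0.80):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_mde)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

def weeks_to_run(weekly_visitors, baseline_cr, relative_mde, arms=2):
    per_arm = required_sample_per_arm(baseline_cr, relative_mde)
    return per_arm * arms / weekly_visitors

if __name__ == "__main__":
    # Example: 30k visitors/week entering the test, 2% baseline conversion.
    for mde in (0.05, 0.10, 0.20):
        print(f"MDE {mde:.0%}: ~{weeks_to_run(30_000, 0.02, mde):.1f} weeks")
```

If detecting a 5% lift would take five months, you probably can't afford to test small changes deep in the funnel and should focus on bigger swings higher up.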
How do you cover the essentials while keeping the budget lean? The XOS Tool Stack blueprint provides an overview of the main software categories you should be familiar with, as well as examples of specific tools you can use in each category.
- Assess the required software tools for an experimentation program
- Budget and estimate costs for the experimentation practice
User research methods framework: which method is best suited to your research objective? This blueprint helps you plan which research or validation method best suits your current business needs. Use it to support user research strategy and planning, and to identify which research activity best fits your research goal or business case.
For instance, a Heuristic Review gets quite a low value score. Why? Because it's about as close to having an opinion as we're ever going to get. Even though it's a vital part of the process, a heuristic (UX) review doesn't have the same signal strength as Customer Interviews or Usability Studies. But it requires a lot less effort (time and money) than Diary Studies, for example.
- Assess the suitability of different user research methods depending on your research objectives or business questions.
- Understand the effort vs. value involved in different user research methods to help plan the most appropriate methodology.
- Identify opportunities to combine research methods to cover a range of research "types" and increase the strength of a signal.
With the User- or Session-based Test Analysis? blueprint, you can formalize the mental model for using session or user counts when you're designing your test, deciding on the metrics, or performing the post-test analysis (a toy example follows the list below).
- When setting up a study design and selecting metrics.
- When setting up metrics in behavioral tools.
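As a toy illustration of why the unit matters, the snippet below computes a conversion rate from the same hypothetical events twice, once per session and once per user; the data and names are invented for the example.

```python
# The same events give a different conversion rate and a different sample size
# depending on whether you count sessions or users. The unit you randomize on
# should match the unit you analyze, or the variance will be misestimated.

# (user_id, session_id, converted) -- hypothetical events
events = [
    ("u1", "s1", False), ("u1", "s2", True),
    ("u2", "s3", False), ("u2", "s4", False), ("u2", "s5", False),
    ("u3", "s6", True),
]

sessions = {session: converted for _, session, converted in events}
users = {}
for user, _, converted in events:
    users[user] = users.get(user, False) or converted

session_cr = sum(sessions.values()) / len(sessions)
user_cr = sum(users.values()) / len(users)

print(f"session-based: {session_cr:.0%} conversion on n={len(sessions)}")  # 33% on n=6
print(f"user-based:    {user_cr:.0%} conversion on n={len(users)}")        # 67% on n=3
```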
Server-side testing is often seen as the holy grail of experimentation, something that will solve all our problems with data and cookies. That may be the case, but it also has downsides. The Client vs. Server-Side Testing Tools: Pros and Cons blueprint shows you the pros and cons of client- and server-side testing, so you can decide which is better for you in your current position.
- Decide when to use client-side and when to use server-side testing.
- Understand the pros and cons of both and how they fit in your business.
What works to change people and ways of working differs from organization to organization. You need to experiment with ways to engage and train your teams.
By creating a culture around experimentation, you can make it more accessible and enjoyable for people of all backgrounds to get involved in the process of discovery and exploration. This blueprint helps with just that, and it also gives you ways to get buy-in.
- Increase engagement and inputs into a testing program.
- Get feedback for testing efforts.
- Train and educate on testing principles.
- Get buy-in for your experimentation program.
A/B testing is expensive. The Should I Run an A/B Test? blueprint, a checklist from Tal Raviv, provides a set of questions to help you be objective about whether you should run the test or spend your calories elsewhere.
- Decide if something is worth testing.
- Align and train your team to know when something is worth testing.
The Strategic Testing Roadmap blueprint is the culmination of research and the basis of a great OKR-style action plan for a testing program. The boxes are insights that come from triangulated research data (quantitative and qualitative). Some boxes are strategic and some are tactical, but overall it's a punch list: the 'Key Results' part of the OKR.
The objective part is framed as a powerful 'how might we...' question. This question isn't determined ahead of research; it comes from the research itself, after coding the insights and finding patterns of problems or opportunities. The specific KPIs aren't important, but now you can make the goal SMART. You work through the punch list of insights and watch the needle move on those goal-associated KPIs.
- Create a research-based strategic roadmap for a testing program.
- Communicate with your team on objectives and key results for a test program.
- Organize tactics and strategies against research and metrics.
How often should experimentation meetings happen, and what should they cover? The Cadence for Experimentation Meetings? blueprint helps you answer these questions. Experimentation is a process, so you need meetings to align, coordinate, and communicate. This clever little blueprint ties all of this together by providing the questions you should ask during quarterly, monthly, and weekly meetings.
- Create systems of feedback with your team
- Effect the culture of your program or org
- Train and educate your team
- Have accountability systems
The result of a test can have little to do with what comes next. So it's important to separate the concept of 'result' from 'action', so you can independently track how your program is doing on 'win rate' vs. 'action rate'. The goal is not to 'win' with tests; it's to make good decisions that effect change. The Results vs. Actions blueprint lets you report on the tests you ran and their impact.
It lets you tag each test with its result and the action taken as a consequence. This way, you'll be able to calculate your win rate and your testing agility (a minimal sketch of the tagging idea follows the list below).
The true power of this blueprint becomes apparent down the line. Once you implement it, you get a portfolio view of your testing program. Over time, you'll see the difference between substantial and disruptive tests, their results, and the actions taken from those results, all of which becomes powerful from a program perspective once you start to scale the number of tests.
- Calculate 'win rates' vs 'action rates'.
- Look at the percentages of iterations vs what was shipped.
- Show how test results aren't necessarily connected to outcomes, such as changes shipped to a website.
- Measure the agility of your experimentation.
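Here is a minimal sketch of the underlying bookkeeping; the tag values and example tests are made up for illustration and not prescribed by the blueprint.

```python
# Tag each completed test with a 'result' and an 'action', then report win
# rate and action rate separately. Tag names and data are illustrative.

tests = [
    {"name": "PDP gallery",    "result": "win",          "action": "shipped"},
    {"name": "Checkout copy",  "result": "flat",         "action": "shipped"},
    {"name": "Nav redesign",   "result": "loss",         "action": "iterated"},
    {"name": "Pricing layout", "result": "win",          "action": "discarded"},
    {"name": "Signup form",    "result": "inconclusive", "action": "iterated"},
]

win_rate = sum(t["result"] == "win" for t in tests) / len(tests)
action_rate = sum(t["action"] in ("shipped", "iterated") for t in tests) / len(tests)

print(f"win rate:    {win_rate:.0%}")     # 40% of tests 'won'
print(f"action rate: {action_rate:.0%}")  # 80% led to a shipped change or an iteration
```

Note how, in this made-up portfolio, a flat result still produced an action and a winning result didn't; that gap is exactly what separating 'result' from 'action' makes visible.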