When running experiments systematically, it’s easy to start testing for the sake of testing and lose sight of the actual actions and changes we want to drive through experimentation. That’s why the Experimentation Decision Matrix Blueprint gives you an action plan before you start any testing.
Based on your primary success metric, and supported by the secondary ones, what action will you take after concluding the test? Answering this up front reminds not only you and your team, but your stakeholders as well, of the actions you plan to take once each experiment concludes. The suggestion here is to implement a set of action tags in your knowledge database so that you can classify these results and act on them.
This framework gives you a reference for tagging and classifying your results depending on the type of hypothesis (superiority vs. non-inferiority) and the action to take afterward. It also helps you build a narrative where “winning” and “losing” are not necessarily what’s impactful for the business.
- Decide whether to implement, iterate on, or abandon a hypothesis.
- Update your experimentation roadmap based on prior results.
- Classify the experiments in your Knowledge Database.
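As a rough illustration, the tagging logic could be sketched in code. Everything below, including the tag names and the decision rules, is a hypothetical example of one way to classify results, not Speero’s exact schema:

```python
def action_tag(hypothesis_type: str, primary_metric_lifted: bool,
               secondary_metrics_ok: bool) -> str:
    """Suggest an action tag for the knowledge database.

    hypothesis_type: 'superiority' (the variant must beat control) or
                     'non-inferiority' (the variant must not be worse).
    """
    if hypothesis_type == "superiority":
        if primary_metric_lifted and secondary_metrics_ok:
            return "implement"
        if primary_metric_lifted:
            # Primary metric moved, but secondary metrics raise questions.
            return "iterate"
        return "abandon"
    if hypothesis_type == "non-inferiority":
        # For non-inferiority, "no meaningful drop" already counts as a pass.
        return "implement" if secondary_metrics_ok else "iterate"
    raise ValueError(f"unknown hypothesis type: {hypothesis_type}")
```

With tags like these attached to every concluded test, filtering the knowledge database by intended action becomes a one-line query.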
The Validation Methods Blueprint helps you decide which validation method to use when making website changes, and which validation tools suit copy, functionality, and other kinds of changes. Shipping big changes to the site without prior validation can be very costly and may create more problems than it solves. Disruptive ideas should be tested or validated with further research before releasing them to the public.
While A/B testing gives you quantitative data on how your changes perform on the site, you might want to validate ideas in other ways before spending resources on building a new test, especially if you’re not confident in the current solution. In some situations, you might not even have the bandwidth or resources to validate the idea with A/B testing alone. Luckily, there are numerous options for gathering additional qualitative or quantitative data to validate your ideas.
- Decide which validation method to use when making site changes.
- Get visibility into different validation tools for copy, functionality, design, etc.
Randomized controlled trials are best known from medical research, but since 2000 this scientific approach has also been used to improve and learn from web pages. The research is double-blind (neither the researcher nor the visitor knows who sees which variant), and there is a hypothesis that stands as ‘proven’ until it’s falsified.
- Prove that A/B testing is a scientific research method.
- Show the history behind experimentation.
The Program Metrics Blueprint lets you monitor the success of your experimentation program. Why is this important? Besides tracking outcome metrics like revenue or the number of wins and losses from your experiments, you should also look at program metrics to report on a monthly or quarterly basis.
If you're in a situation where you are in charge of running experiments on the site but are facing some slowdowns or issues in efficiency, you need to think about the relevant metrics that will help you identify the bottlenecks in your program.
For example, if test velocity has been going down, you can identify where the problems lie by reviewing how many ideas are submitted every month, or even how many ideas sit in the backlog. If idea generation is not the problem, track how long test creation takes for the team by looking at time spent and sub-tasks.
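A minimal sketch of how such metrics could be computed, assuming a simple program log where each test records when the idea was submitted and when the test launched (the field names and dates are illustrative, not a prescribed schema):

```python
from datetime import date

# Hypothetical program log for one quarter.
tests = [
    {"submitted": date(2024, 1, 3),  "launched": date(2024, 1, 20)},
    {"submitted": date(2024, 1, 10), "launched": date(2024, 2, 14)},
    {"submitted": date(2024, 2, 1),  "launched": None},  # still in backlog
]

launched = [t for t in tests if t["launched"]]
velocity = len(launched)                 # tests launched in the period
backlog = len(tests) - velocity          # ideas submitted but not yet live
avg_build_days = sum(
    (t["launched"] - t["submitted"]).days for t in launched
) / len(launched)                        # average time from idea to launch

print(velocity, backlog, avg_build_days)
```

Comparing `avg_build_days` month over month is one way to spot whether the bottleneck is idea generation or test creation.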
- Increase your testing velocity.
- Monitor the success of your experimentation program.
- Improve the efficiency and effectiveness of your program.
The Goal Tree Mapping Blueprint provides a logical and hierarchical visual representation of goals, critical success factors, and necessary conditions. Using a goal tree map, you can easily visualize the metrics that matter to your business and identify the sub-goals that must be achieved first.
Do you see constant changes in your company's strategy and find it hard for teams to stick to and focus on projects? Can’t find the common goal? Or maybe your stakeholders only care about top-level business goals and disregard any improvements in smaller metrics?
The Goal Tree Map Blueprint lets you list all the different metrics and goals to keep in mind when designing your program. You can also use it to break down KPIs into smaller metrics where you can see improvements faster and communicate better. The main benefit of the Goal Tree Map is that it aligns your CRO team around the same goal.
- Get buy-in for experimentation.
- Define the strategic KPIs that fuel optimization efforts.
- Identify metrics that are crucial to the company's success.
- Break down KPIs into smaller metrics where you can see improvements faster.
Speero’s Solution Spectrum Blueprint™ helps you manage one or more in-house experimentation programs effectively. It lets you think about ways to be more disruptive in your tests by tagging them as:
- Iterative
- Substantial
- Disruptive
Ultimately, tagging your test or research solutions with these categories lets you measure a ‘test portfolio balance’ metric, which shows how risky or iterative the program is over time. Mature programs should balance big bets focused on learning with small tweaks focused on optimization and margin gains.
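Once every test carries one of the three tags, the portfolio balance is just the share of each category. A minimal sketch (the sample tags below are made-up data for illustration):

```python
from collections import Counter

# Hypothetical tags for one quarter's tests.
tags = ["iterative", "iterative", "substantial", "disruptive",
        "iterative", "substantial", "iterative", "disruptive"]

counts = Counter(tags)
balance = {
    tag: round(counts[tag] / len(tags), 2)
    for tag in ("iterative", "substantial", "disruptive")
}
print(balance)  # share of each bet size in the portfolio
```

Tracking this dictionary over time shows whether the program is drifting toward safe tweaks or risky bets.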
During good times, it’s nice to just iterate. But when you’re in a rough patch… you have to shake things up.
- Tag tests, then measure how many small, medium, and large bets your team is making.
- Based on these measurements, see whether you’re overly focused on quality or on quantity.
- Communicate to leadership and other teams what kinds of solutions are being discussed and what categories of outcomes are intended.
Generally, the answer is yes, you should. If you don’t, you face multiple worse ‘evils’. And if in doubt, there is always this decision tree.
- Know when to run simultaneous experiments.
- Understand how to handle running simultaneous experiments.
Most experimentation programs start by prioritizing ideas based on a model like ICE or PIE. If you want to improve your prioritization, move on to the PXL model. It is much more detailed and tailored to A/B test ideas and hypotheses.
ICE and PIE models are subjective; the scores depend on who’s calling the shots about the changes. The PXL model is more nuanced, with added questions like: Is the change above the fold? Changes above the fold are noticed by more people, increasing the likelihood that the test has an impact. Is the change noticeable within 5 seconds? If not, it’s likely to have less impact.
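To show the mechanics, here is a simplified PXL-style scorer. The questions and the one-point-per-yes weighting below are an illustration, not the full PXL model, which has more criteria and its own scoring guide:

```python
# Simplified, hypothetical subset of PXL-style questions.
PXL_QUESTIONS = [
    "above_the_fold",          # change visible without scrolling
    "noticeable_in_5s",        # change noticeable within 5 seconds
    "adds_or_removes",         # adds/removes an element (vs. cosmetic tweak)
    "backed_by_user_research",
    "easy_to_implement",
]

def pxl_score(answers: dict) -> int:
    """Each 'yes' answer contributes one point; higher means higher priority."""
    return sum(1 for q in PXL_QUESTIONS if answers.get(q))

# Two made-up ideas from a backlog.
ideas = {
    "new CTA copy": {"above_the_fold": True, "noticeable_in_5s": True},
    "redesigned checkout": {"above_the_fold": True, "noticeable_in_5s": True,
                            "adds_or_removes": True,
                            "backed_by_user_research": True},
}
ranked = sorted(ideas, key=lambda name: pxl_score(ideas[name]), reverse=True)
print(ranked)
```

Because every question is answered yes/no against evidence, two people scoring the same idea should land on the same number, which is exactly what ICE and PIE struggle with.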
How you prioritize your experiments matters. A lot. Prioritization models help you base your experiments and changes on solid foundations: user research, other data, the change’s expected impact, and more. They are also great when you have a big backlog of ideas to sort and prioritize. Prioritization models help you start things right and dig gold from the dirt.
- Prioritize your experiments based on objective standards.
- Help someone who’s only starting experimentation.