A step-by-step guide to the Quality Assurance process in A/B testing.
How can you differentiate each part of QA?
What are the main areas and possible issues you should keep an eye on?
- Quality Assurance is a must when you are A/B testing.
- While there are different tools and use cases, the core process should be the same for every experiment.
- Review and verify the setup, then test in different browsers and devices.
The structure of your database of test learnings is important for communicating with stakeholders and ensuring that decisions and actions are documented correctly. Above all, it can be a cultural tool: a recipe for changing culture from a data foundation, rather than arm-waving about the theoretical benefits of democratized decision-making.
There is LOTS to take away from the structure below. Most of the takeaways should affect how you communicate with stakeholders.
For example:
1. A loss can equal a save
2. A flat test can still be 'implemented' (a win: it confirmed something)
3. It shows the emphasis on RELATIVE effects across tests, which isn't talked about enough in our industry dialog. Accuracy is overrated IMO; precision FTW.
4. A financial model is critical. Create a goal tree map, then build a model to translate it into relative potential revenue. Use it BOTH for prioritization and for reporting like this. It changes the game.
5. But don't present only $ numbers; pair them EVERY TIME with customer-learning sentences that TL;DR what the result meant for your customers' behavior and/or perceptions.
Test reporting is critical to decision making and to overall program velocity. The faster you can report out, the faster the decision can come: this is agility as a metric. We think there are two sides to test reporting, the automated and the bespoke (manual) side: 1. BI tools like Looker Studio, and even the test tools themselves, provide the automated side, and 2. the 'storytelling', where key metrics are highlighted and the implications and insights are presented, is the custom side. This blueprint acknowledges the need for, and the balance between, these two parts of experimentation test reporting.
- increase awareness of what goes into the test reporting phase of the experimentation process
- align the team on who does which part, and which parts need more work
- define your own program's component parts for this process
What are the differences between synchronous and asynchronous testing tool snippets? Which one suits better for your website and testing program?
While the difference between synchronous and asynchronous testing tool snippets may seem small, the actual impact on your website's loading speed, test flickering, and overall user experience can be quite significant.
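To make the difference concrete, here is a minimal TypeScript sketch of asynchronous snippet loading paired with an anti-flicker guard. The vendor URL, function names, and the one-second timeout are illustrative assumptions, not any specific tool's documented API. A synchronous snippet, by contrast, is simply a blocking script tag in the page head that halts HTML parsing until it downloads and executes.

```ts
// Optional anti-flicker guard: hide the page until the variation is applied
// (or until a timeout, so a slow snippet can't blank the page indefinitely).
function antiFlicker(timeoutMs: number): () => void {
  document.documentElement.style.opacity = "0";
  const reveal = () => { document.documentElement.style.opacity = ""; };
  window.setTimeout(reveal, timeoutMs); // fail open if the snippet is slow
  return reveal;
}

// Sketch: loading a (hypothetical) testing-tool snippet asynchronously.
function loadTestingSnippetAsync(src: string, onReady: () => void): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true;     // the browser keeps parsing HTML while this downloads
  script.onload = onReady; // reveal the page as soon as the tool has run
  document.head.appendChild(script);
}

const reveal = antiFlicker(1000); // illustrative timeout; tune to your speed budget
loadTestingSnippetAsync("https://cdn.example-testing-tool.com/snippet.js", reveal);
```

The trade-off in miniature: asynchronous loading protects page speed but risks flicker (hence the guard), while synchronous loading prevents flicker at the cost of blocking the initial render.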
The definition of a “heuristic” is “a mental shortcut that allows people to solve problems and make judgments quickly and efficiently”. As such, our UX heuristic framework is made up of a set of guidelines that allows our team to assess and analyze any digital user experience and identify areas of opportunity for optimization.
Speero's UX heuristic framework was developed by combining and consolidating the frameworks used by the industry’s leading UX and CRO agencies. This resulted in a set of 60 guidelines across five heuristic themes:
Value: does the content communicate the value to the user?
Relevance: does the page meet user expectations in terms of content and design?
Clarity: is the content/offer on this page as clear as possible?
Friction: what is causing doubts, hesitations, uncertainties, and difficulties?
Motivation: does the content encourage and motivate users to take action towards the goal?
Frameworks similar to Speero's Heuristics Blueprint include:
MECLABS Conversion Index
Conversion's The Lever Framework
- Assess any digital experience to understand and identify areas of opportunity for optimization
- Use as a framework to tag and track actions (JDIs, experiments, etc.)
Use this decision tree to effectively categorize the insights generated via research. Effective categorization is where we turn insight into action, and it is the first step in developing an experimentation roadmap from research. Once you've categorized your insights, each list of insights can be dealt with accordingly, e.g. JDIs can be added to the development backlog or the next sprint, Instrument items can be handled by your analytics or development team, etc. A minimal sketch of this categorization follows the list below.
- Turn research insight into an experimentation roadmap
- Create actionable workstreams for different teams
- Avoid the implementation crisis
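For illustration only, here is one way the categorization could be expressed in TypeScript. The category names (JDI, Instrument, Experiment, Investigate) and the branching questions are assumptions extrapolated from the examples mentioned above; your blueprint's actual branches may differ.

```ts
// Hypothetical insight categories based on the examples in the text above.
type Category = "JDI" | "Instrument" | "Experiment" | "Investigate";

interface Insight {
  description: string;
  isObviousFix: boolean;          // low risk, clear solution
  missingTracking: boolean;       // can't be measured today
  hasTestableHypothesis: boolean; // competing solutions worth an A/B test
}

// A simplified decision tree: each branch routes the insight to the team
// or backlog that can act on it.
function categorize(insight: Insight): Category {
  if (insight.isObviousFix) return "JDI";            // dev backlog / next sprint
  if (insight.missingTracking) return "Instrument";  // analytics or dev team
  if (insight.hasTestableHypothesis) return "Experiment"; // test roadmap
  return "Investigate"; // needs more research before it can be actioned
}

// Example: a survey insight about unclear shipping costs becomes a test idea.
console.log(categorize({
  description: "Users report uncertainty about shipping costs at checkout",
  isObviousFix: false,
  missingTracking: false,
  hasTestableHypothesis: true,
})); // -> "Experiment"
```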
Use Strategy Maps to orient your experimentation program. The focus will depend on the strategy and positioning of the brand.
Understand whether your experimentation is focused on Brand/Performance/Product marketing and Acquisition/Monetization/Retention, and whether your positioning is Sales/Marketing/Product led.
It can be challenging to incorporate user research into experimentation programs on an ongoing basis. However, categorizing all research initiatives into one of these three categories can help you plan research more effectively. Use this framework to communicate the three core research objectives.
- To plan how to incorporate user research into your experimentation program on an ongoing basis
- To plan specific research initiatives, and keep them focused on a primary objective
- To communicate the different types of research being conducted to the wider business
There are different reasons to test and experiment, and they range from revenue to customer to process goals. CRO programs care about wins; they want to push for more money. CXO programs care about customers and are thus more focused on metrics around customer satisfaction and retention (as surrogates for money). XOS or 'experimentation' programs believe that a culture of using data, and ultimately science, is a better way to run an innovative company.
- to help communicate with stakeholders where the focus should be
Assessing the revenue impact of your experimentation program is essential for informed business decisions. Using a testing revenue model to measure ROI helps you understand the effectiveness of your strategies, and evaluating the direct revenue contribution of individual experiments lets you allocate resources where they are most effective (a rough worked sketch follows the list below).
- to measure the potential experimentation impact over time
- to monitor the success of the experimentation program
- to acknowledge test performance decline over time
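As a rough, hedged illustration of how a testing revenue model can work, the TypeScript sketch below projects the revenue impact of a single winning test, with a linear decay applied to honor the performance-decline point above. All inputs and the decay formula are illustrative simplifications, not Speero's exact model.

```ts
// Sketch: projecting the revenue impact of a single winning test.
// All inputs and the linear decay are illustrative assumptions.
interface TestResult {
  baselineMonthlyRevenue: number; // revenue flowing through the tested funnel
  observedLift: number;           // e.g. 0.05 for a +5% relative lift
  monthsProjected: number;        // how far out to project
  annualDecay: number;            // e.g. 0.5 if effects erode ~50% per year
}

function projectedRevenueImpact(t: TestResult): number {
  let total = 0;
  for (let m = 0; m < t.monthsProjected; m++) {
    // Decay factor shrinks each month to model the effect fading over time.
    const decayFactor = Math.max(0, 1 - (t.annualDecay * m) / 12);
    total += t.baselineMonthlyRevenue * t.observedLift * decayFactor;
  }
  return total;
}

// Example: a $200k/month funnel, +5% lift, 12-month projection, 50%/year decay
// yields roughly $92.5k, noticeably less than the naive 12 * $10k = $120k.
console.log(projectedRevenueImpact({
  baselineMonthlyRevenue: 200_000,
  observedLift: 0.05,
  monthsProjected: 12,
  annualDecay: 0.5,
}));
```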
The Using a Data Warehouse vs. Not Using a Data Warehouse blueprint helps you weigh exactly this decision. It shows how a data warehouse can bring all your data together so you can make better decisions and improve your business. Without a data warehouse, these tasks can be more challenging and less efficient.
- The blueprint helps you see why a data warehouse is good for your business.
- By looking at the table, you can decide if you need a data warehouse.
- The table guides you on what features to focus on if you're thinking about getting a data warehouse.
"Hypothesis Testing" and "Do No Harm" treatments represent different experimental goals, hence it is important to differentiate them to ensure correct statistical analysis and interpretation of results, as well as increase your testing velocity. Hypothesis testing is used to test whether one variation is significantly better than another, while a "Do No Harm" treatment is used to test whether one variant is not significantly worse than another by a predetermined margin.
- Estimating the duration of an experiment
- Balancing a portfolio of experiments in your program
- Increasing your testing velocity
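To make the statistical distinction concrete, here is a minimal TypeScript sketch using one-sided z-tests under a normal approximation. The margin and the example numbers are illustrative assumptions, not prescribed values.

```ts
// Sketch: one-sided z-tests for the two experimental goals.
interface Arm { conversions: number; visitors: number; }

function zStatistic(control: Arm, variant: Arm, margin = 0): number {
  const p1 = control.conversions / control.visitors;
  const p2 = variant.conversions / variant.visitors;
  const se = Math.sqrt(
    (p1 * (1 - p1)) / control.visitors + (p2 * (1 - p2)) / variant.visitors
  );
  // Superiority (margin = 0):     H0: p2 - p1 <= 0       vs H1: p2 - p1 > 0
  // Non-inferiority (margin > 0): H0: p2 - p1 <= -margin vs H1: p2 - p1 > -margin
  return (p2 - p1 + margin) / se;
}

const control = { conversions: 500, visitors: 10_000 }; // 5.00% conversion
const variant = { conversions: 495, visitors: 10_000 }; // 4.95% conversion

// Superiority: z is about -0.16, so no evidence the variant is better.
console.log(zStatistic(control, variant));
// "Do No Harm" with a 1-percentage-point margin: z is about 3.1, so we can
// conclude the variant is not worse than control by more than 1pp.
console.log(zStatistic(control, variant, 0.01));
```

The same data can fail a superiority test yet pass a non-inferiority test, which is why conflating the two goals leads to wrong conclusions and wasted runtime.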
A/B testing is built on statistics, and statistics always requires a certain sample size to draw meaningful results. Using a test bandwidth calculator and the "Where and how should I test to make the most money?" blueprint, you can determine whether your website's traffic is fit for A/B testing and whether you can go deeper into segments and down-the-funnel metrics (a worked example follows the list below).
- Is your website eligible for A/B testing based on its current traffic and conversion volumes?
- Is it feasible for you to run experiments with smaller impact or should you focus on high-impact experiments higher in the funnel?
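For intuition about why traffic caps your testing bandwidth, here is the common normal-approximation sample-size formula in TypeScript. The baseline rate, MDE, alpha, and power below are illustrative assumptions; your calculator's exact formula may differ.

```ts
// Sketch: sample size per variant for a two-proportion test at
// alpha = 0.05 (two-sided) and power = 0.80, using the standard formula:
//   n = (z(1-alpha/2) + z(1-beta))^2 * (p1(1-p1) + p2(1-p2)) / (p2 - p1)^2
function sampleSizePerVariant(baseline: number, relativeMde: number): number {
  const zAlpha = 1.96; // z for two-sided alpha = 0.05
  const zBeta = 0.84;  // z for power = 0.80
  const p1 = baseline;
  const p2 = baseline * (1 + relativeMde);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

// A 2% baseline conversion rate with a 10% relative MDE needs roughly
// 81k visitors per variant, so deep segments or down-funnel metrics
// (with far less traffic per slice) quickly become infeasible.
console.log(sampleSizePerVariant(0.02, 0.10));
```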
Which software does an experimentation program actually need while you keep the budget lean? The XOS Tool Stack blueprint provides an overview of the main software categories you should be familiar with, as well as examples of specific tools you can use in each category.
- Assess the required software tools for an experimentation program
- Budget and estimate costs for the experimentation practice
The "User research methods framework: which method is best suited to your research objective?" blueprint helps you plan which research or validation method best suits your current business needs. Use this blueprint to support user research strategy and planning. You can also use it to identify which research activity best suits your research goal or business case, based on the effort involved and the value (signal strength) it provides.
For instance, a Heuristic Review gets quite a low value score. Why? Because it's as close to having an opinion as we’re ever going to get. Even though it's a vital part of the process, a heuristic (UX) review doesn’t have the same signal strength as Customer Interviews or Usability Studies. But it requires a lot less effort (time and money) than Diary Studies, for example.
- Assess the suitability of different user research methods depending on your research objectives or business questions.
- Understand the effort vs. value involved in different user research methods to help plan the most appropriate methodology.
- Identify opportunities to combine research methods to cover a range of research "types" and increase the strength of a signal.
With the "User- or Session-based Test Analysis?" blueprint, you can formalize the mental model for using session or user counts when you're designing your test, deciding on the metrics, or performing the post-test analysis (a minimal sketch follows the list below).
- When setting up a study design and selecting metrics.
- When setting up metrics in behavioral tools.
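A minimal TypeScript sketch of the distinction, using hypothetical data:

```ts
// Sketch: the same raw data yields different conversion rates depending on
// whether the unit of analysis is the user or the session.
interface User { sessions: number; converted: boolean; }

// Hypothetical data: one heavy user with 5 sessions converted once.
const users: User[] = [
  { sessions: 5, converted: true },
  { sessions: 1, converted: false },
  { sessions: 1, converted: false },
  { sessions: 1, converted: true },
];

const conversions = users.filter(u => u.converted).length;
const userRate = conversions / users.length; // 2 / 4 = 50%

const totalSessions = users.reduce((sum, u) => sum + u.sessions, 0);
const sessionRate = conversions / totalSessions; // 2 / 8 = 25%

// The unit you randomize on should match the unit you analyze: randomizing
// by user but computing session-based rates (or vice versa) biases variance
// estimates, because sessions from the same user are correlated.
console.log({ userRate, sessionRate });
```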