Too often, I see optimization and experimentation efforts assigned to just one or two teams in an organization, rather than being distributed throughout the whole company.
There’s a common perception that optimization (which should combine customer research, data analysis, and experimentation, usually via A/B testing) can only help with improving conversions. The term CRO has a lot to answer for here.
However, the practice of experimentation can go way beyond optimizing websites. More advanced businesses are creating centralized experimentation functions that help teams across the business make decisions, optimize, and deepen their understanding of customers.
If your team is one of the many who haven’t yet adopted an experimentation strategy, rest assured, you’re not alone. Here’s a list of use cases for how experimentation can help teams across your business.
Not only does having the top of the business lead by example help the rest of the company understand the importance of data-driven decisions, it also arms your CEO with validation for big or risky decisions when presenting strategy to the board or shareholders.
While the C-suite might not be running experiments themselves, they can work with teams who can implement their ideas and report back on the findings.
CEO use cases for experimentation
- Test market expansion plans before going ahead.
- Innovate through experimentation (e.g., test new business models or category creation with real customers).
- Test new partnerships or suppliers before committing (e.g., what’s the value of using a third party for deliveries vs. our own fleet?).
- Validate investments or financial strategies (e.g., should we sell our product for a one-off fee or license it monthly?).
Historically, marketing teams have tended to own optimization, personalization, and A/B testing, as they are responsible for digital elements of the user journey, as well as improving conversions.
However, marketing teams often focus on just one part of the customer journey—the initial purchase experience—at the expense of optimizing the whole customer journey for retention and lifetime value, which can make a huge difference to an organization’s long-term profit.
But in order to optimize an omnichannel experience, marketing teams need to improve their segmentation and tracking for each user, across devices and channels throughout their journey.
Getting the data you need to do this is hard, and there are a few challenges to be aware of. Many browsers have introduced cookie policies that block client-side and third-party cookies, and there are region-specific privacy laws like GDPR and CCPA to abide by. These restrictions mean that if your testing tool depends on client-side cookies to recognize users, track data, and deliver experiences, you will find major inaccuracies and holes in your data when looking across browsers, devices, and regions. If you want to dive deeper into this topic, read our post on ITP/ETP browser privacy changes and Google’s recent announcement to find out how you can solve these issues.
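One common remedy for the client-side cookie limitations above is to assign visitor IDs in a server-set, first-party cookie instead of via JavaScript. As a minimal sketch (the function name and cookie name are illustrative, not any particular tool’s API), using only Python’s standard library:

```python
from http.cookies import SimpleCookie
import secrets

def assignment_cookie(existing=None):
    """Return (visitor_id, Set-Cookie header) for a server-set first-party cookie.

    Because the cookie is set in the HTTP response from your own domain
    rather than written by client-side JavaScript, it is not subject to
    the short lifetimes some browsers impose on script-written cookies.
    """
    cookie = SimpleCookie()
    if existing:
        cookie.load(existing)  # parse the incoming Cookie request header
    if "visitor_id" in cookie:
        visitor_id = cookie["visitor_id"].value  # returning visitor
    else:
        visitor_id = secrets.token_hex(8)        # new visitor
    out = SimpleCookie()
    out["visitor_id"] = visitor_id
    out["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year
    out["visitor_id"]["httponly"] = True
    out["visitor_id"]["samesite"] = "Lax"
    return visitor_id, out["visitor_id"].OutputString()
```

On each request you would read the incoming `Cookie` header, call this helper, and emit the returned `Set-Cookie` header, giving you a stable visitor identity across sessions for bucketing and tracking.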
Marketing use cases for experimentation
- Use testing to find out which products, copy, creative, promotions, etc. work best for different user segments. Use the findings to improve your targeting of high-value customer groups in other marketing channels.
- Test creative or messaging before putting ad spend behind it.
- Experiment with loyalty programs and retention initiatives to improve marketing metrics.
Like marketers, merchants have also been known to own optimization within organizations.
Product and category pages are obvious candidates for experimentation, as improvements here boost revenue per user, add-to-cart, items-per-cart, and overall revenue.
When it comes to product recommendations, tools can help. But I often see a missed opportunity to A/B test or personalize these recommendations, including testing different algorithms, placements, and layouts.
A common request we get from merchants is the ability to get insights into the “black box” of product recommendation algorithms, to give them the power to modify, filter, and amend the recommendations based on their own rules. For example, UK-based gift and greeting card retailer, Moonpig found that their product recommendations weren’t ordered or categorized in a way that made products easy to browse.
To improve the experience, Moonpig designed a new carousel format with items grouped by category and displayed in horizontal groups. To do this they added new metadata so products could more effectively be grouped, and items/categories that were closer matches for the customer were prioritized in the display. The updated algorithm and a new carousel display format (pictured below) allowed for a better user experience and led to increased revenue per user.
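To make the idea of testing recommendation algorithms concrete, here is a minimal sketch (not Moonpig’s actual implementation; the two strategies and field names are hypothetical) of splitting users between two algorithms and tagging the response so revenue can be attributed per variant:

```python
import hashlib

# Two hypothetical recommendation strategies under test.
def recs_by_popularity(products):
    return sorted(products, key=lambda p: p["views"], reverse=True)

def recs_by_category_match(products, user_category):
    matches = [p for p in products if p["category"] == user_category]
    rest = [p for p in products if p["category"] != user_category]
    return matches + rest  # closer matches shown first

def assign_variant(user_id, experiment="rec-algo-v1"):
    """Stable 50/50 split: a user sees the same algorithm on every visit."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "category_match" if int(digest, 16) % 2 else "popularity"

def recommendations(user_id, user_category, products):
    variant = assign_variant(user_id)
    if variant == "category_match":
        ranked = recs_by_category_match(products, user_category)
    else:
        ranked = recs_by_popularity(products)
    # Tag the variant so revenue-per-user can be compared per algorithm.
    return {"variant": variant, "items": ranked[:4]}
```

The same pattern extends to testing placements and layouts: the variant label travels with the analytics event, so each algorithm’s revenue per user can be compared directly.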
Merchant use cases for experimentation
- Use smoke screen tests to gauge real buying intent before sourcing new products.
- Test and personalize all aspects of your product recommendations, cross-sells, and up-sells including algorithm, display, placement, and filtering rules.
- Run experiments to understand how pricing, promotion, and discounts affect sales of specific products or categories.
- Test new pricing strategies.
Product Managers & UX teams
As a VP of Product, this area hits close to home for me. I am constantly relying on data and feedback to inform which problems I need to solve and whether or not my solutions to those problems are any good. My backlog is always longer than I’d like it to be, so validating my ideas before I spend time and money building something is critical.
Not enough product managers validate their ideas by trying something quickly and cheaply via a test or experiment. They tend to point to a few conversations they have had, make a bet, and start building. That can get pretty expensive.
If you validate ideas upfront, you can weed out ideas that look great on paper but not in practice. Implementing feature flags is a great way to test new features–by turning a feature ‘on’ for smaller, targeted groups of users, you can learn, iterate, and mitigate the risk of creating new things that don’t resonate with your customers.
I recommend checking in with your DevOps teams first to learn more about a true release test as opposed to a feature flag. A feature flag allows you to turn a feature on and off for a targeted audience, whereas a true release test means rolling out and A/B testing a full release, infrastructure component, or app framework.
By adopting feature flags into your product planning strategy, you not only get those critical success metrics upfront—before you invest all of your resources into a feature that may or may not improve the product experience—you can also track metrics that matter to other teams.
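The targeted rollout described above can be sketched in a few lines. This is a generic illustration, not any particular vendor’s SDK; the flag name and user ID are hypothetical:

```python
import hashlib

def flag_enabled(flag_name, user_id, rollout_percent):
    """Deterministically bucket a user into a flag's rollout.

    Hashing flag name + user id gives each user a stable bucket in
    [0, 100), so the same user always sees the same variant, and the
    rollout can be widened later without reshuffling existing users.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Turn a (hypothetical) new checkout on for 10% of users.
if flag_enabled("new_checkout", "user-4817", 10):
    pass  # render the new experience; otherwise fall through to the old one
```

Because assignment is deterministic, you can log each user’s bucket alongside your success metrics and compare flag-on vs. flag-off cohorts directly.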
Product & UX use cases for experimentation
- For digital products, improve the functionality and user experience throughout the customer journey through experimentation.
- Feature flag testing/toggling and full release testing to ensure users are happy with any new functionality or experience (and new features are working properly).
- Use the optimization processes to validate and guide new product developments before you invest in creating and launching them.
- Learn about your customers and create data-backed personas. Understand different users’ wants, needs, and what elements of your product resonate most with them.
Developers can leverage experimentation with prototypes in production, algorithms, API improvements, architecture frameworks, and migrations, allowing you to test out new ideas with fewer resources and less risk.
When implementing a new feature flag strategy, make sure you investigate the technical debt of implementing feature flags at scale with your given tool. Dropping code blocks everywhere that make calls to third parties can get expensive in terms of both time and performance.
At SiteSpect, we have reduced some of those issues by including more information about what experience should be shown from the user’s initial browser request. We can do this because our solution sits in the flow of traffic like a CDN. This can minimize technical debt when things get to scale and also helps you compare full releases, not just features. I believe developers can benefit immensely from an optimization strategy, both in terms of improving the metrics that matter to the team and in offering more freedom and resources to develop better code.
Developer use cases for experimentation
- Feature flag testing/toggling to ensure a feature is performing as expected and doesn’t break.
- Test a platform migration by routing users between the legacy platform and the new platform.
- Test new algorithms, API improvements, and architecture frameworks.
Mitigating the risk of any release and having the ability to QA in production for business, engagement, and speed metrics can be a game-changer for DevOps.
The key here is to do proper canary testing (testing new releases on small, specific segments of users) and Blue/Green testing (toggling between different environments or code candidates) as a normal part of the release process. As you are rolling out a release, your testing tool can point some users to servers with a new release and everyone else to servers with the old release. This means you’re actually testing the release itself as opposed to a feature of that release–no code blocks required. This will help you fundamentally understand the impact that a release has on your business.
We have seen this functionality used by more mature businesses that move to continuous release models, where each part of the experience is broken into separate apps that have separate release cycles. Pointing any user to any combination of these apps with immediate feedback on any type of metric is where you start to see true optimization of the customer experience.
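The routing logic behind a canary rollout like this can be sketched simply. This is an illustrative model (the pool names and server addresses are hypothetical), not a production router:

```python
import hashlib

# Hypothetical server pools for a canary / Blue-Green style rollout.
POOLS = {
    "stable": ["10.0.0.1", "10.0.0.2"],
    "canary": ["10.0.1.1"],
}

def route(user_id, canary_percent):
    """Send a stable slice of users to the canary pool, everyone else to stable.

    Hashing the user id keeps each visitor pinned to one release for the
    whole experiment, so business metrics can be compared release vs. release.
    """
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    pool = "canary" if digest % 100 < canary_percent else "stable"
    servers = POOLS[pool]
    # Spread load across the chosen pool's servers.
    return servers[digest % len(servers)]
```

Widening `canary_percent` from 5 to 50 to 100 rolls the release out gradually; setting it back to 0 is an instant rollback, with no code blocks in the application itself.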
DevOps use cases for experimentation
- Test a full release to a specific, targeted audience to ensure it is working properly and not harming the user experience.
Digital optimization doesn’t have to stop at the software layer. The same experimentation and risk mitigation that applies to a page design, feature, or release can also be used to understand the impact on your platform, infrastructure, or network.
For example, you can use it to gain a better understanding of how your CDN is impacting your business. Or, to better understand the business impact of moving from your current data center to AWS. With the control to define which audiences will be routed to your new app version or platform, and easily roll back any of these actions, you minimize downtime during any transition.
Network Operations use cases for experimentation
- Optimize site performance by analyzing how everything (A/B tests, new releases, third-party platforms) affects speed at a granular level.
Think of all the use cases and benefits we have already discussed and how those can be tracked and evaluated. For example, you can easily answer a question like: what was the ROI of last year’s capital expenditure on moving to a Single Page Application? A project like that can span multiple months (hopefully not years) and involve multiple teams from across the organization.
Finance can leverage and aggregate the data from each group and better manage the entire business moving forward. This goes to show that you can, at the corporate level, track and evaluate how truly CX-centric your company is.
Finance use cases for experimentation
- Test out new payment providers, and payment or returns processes.
- Experiment with new finance processes to assess ROI or efficiencies if implemented.
No matter how mature an organization is, there is probably at least one area that isn’t currently benefiting from experimentation and optimization — even if it’s just a matter of sharing data. Grab a virtual lunch with some of your coworkers and see what opportunities you have available and how you can start implementing the skills and practices to run experiments throughout your organization.