Welcome to Briefly Experimental
This edition was written by Haley Carpenter, Senior CX Strategist at Speero.
Every two weeks we'll deliver the best experimentation content and commentary, curated by a member of the Speero team. We'll break things down into the four key pillars needed for any successful experimentation program.
Strategy & Culture
🧠 Operationalizing breaks for our brains
Microsoft's brain wave research confirms what many of us have been feeling: back-to-back video calls cause additional stress, less engagement, and poor focus.
Thank goodness the remedy is simple and doable. Give yourself quick breaks between meetings. One way to do this is by starting your meetings a few minutes later or ending them a few minutes earlier.
I’ve actually started trying to cut meetings in half or nix them from my schedule entirely. If I can discuss a topic with people asynchronously, I push for that instead.
These efforts have helped my fatigue and stress quite a bit, but honestly, even just knowing my company supports these decisions has been a relief in and of itself. Give your team the freedom and authority to make these kinds of decisions. In addition, try to operationalize break-taking into your daily work cadence, providing guidance on when to (and when not to) book meetings with colleagues.
Process & Methodology
💲 The perfect pricing page?
Patrick Campbell, CEO of ProfitWell, considers Slack's pricing page to be "about as close to pricing perfection as any company is going to get."
Why? Slack does an amazing job of matching customers' willingness to pay (WTP) with its subscription tiers. The company also understands which features increase WTP and retention, such as the number of integrations a customer is using.
Watch the Slack pricing page teardown
🔥 Google Page Experience: how it impacts optimization & UX
We try not to include too much of our own work in this newsletter but this piece was too good to pass up.
It's probably the most detailed article you'll find on the recent Google Core Web Vitals updates and how they affect your approach to UX and optimization. If you want to ensure your experimentation work doesn't hurt your search ranking, have a read.
People & Skills
🤝 Stop 'syncing' and start using the right rituals to fuel growth
There’s nothing worse than realizing you got nothing out of a 60-minute meeting. Too often meetings achieve “fake alignment”: it seems like everyone is on the same page, but in reality people either didn't have enough time in the meeting to think critically about the topic, or simply fell into groupthink.
In this article, the team at Reforge suggests that instead of holding sync meetings, you should use the time to "drive company growth through processes that foster shared experiences, language, purpose, and mindset." To help you do this, they outline five types of rituals, how to use them, and why they work.
👀 Job opportunities
Here are a few interesting roles that have been posted in the past week:
- Conversion Optimisation Specialist at Moneyfarm (London, UK)
- Website Optimisation Analyst at Estée Lauder (London, UK)
- Growth Optimisation Specialist at ThinkMoney (Manchester, UK)
- Global Head of Digital Optimisation & Delivery at Merlin Entertainment (Basingstoke / Poole, UK)
- Manager, Conversion Rate Optimization at Root Inc (Remote, USA)
- CRO Analyst at Chubb (Florida, USA)
- Senior Product Manager (Experimentation) at NerdWallet (Remote, USA)
- Growth Manager, CRO at Atom Finance (New York, USA)
- Sr Growth Manager, Experimentation and Monetization at Malwarebytes (California, USA)
- Senior Product Manager, Experimentation at ZipRecruiter (Remote, USA)
We're looking for someone to join the Speero team too. If you know any great Digital Project Managers looking for a remote UK or East Coast US role, send them our way.
Data & Tools
🧐 Top 5 mistakes with statistics in A/B testing
Here's Georgi Georgiev's list of the top five statistics mistakes in A/B testing.
It's a great list, but one top mistake I often see (that isn't listed) is teams not doing pre-test calculations, or doing them and then ignoring the results completely.
I’ve seen teams conclude tests after just a few days or weeks with barely any traffic, and no significant indications of any kind. Yet they share results company-wide as though their findings are concrete and actionable.
In data we trust, but from experience, you should always question any data presented to you. Reading the article above should give you a good refresher on the pitfalls to avoid so you can trust your numbers.
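To make the pre-test calculation point concrete, here is a minimal sketch of the kind of sample-size estimate a team should run before launching a test. It uses the standard two-proportion z-test approximation; the 5% baseline rate and 10% relative lift are illustrative numbers, not from the article.

```python
from statistics import NormalDist
import math

def sample_size_per_variant(baseline_rate, mde_relative, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion z-test.

    baseline_rate: current conversion rate (e.g. 0.05 for 5%)
    mde_relative:  minimum detectable effect, relative (e.g. 0.10 for a +10% lift)
    alpha:         two-sided significance level
    power:         desired statistical power (1 - beta)
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2
    return math.ceil(n)

# Detecting a 10% relative lift on a 5% baseline takes tens of thousands
# of visitors per variant, which is why concluding after "a few days with
# barely any traffic" is so unreliable.
n = sample_size_per_variant(0.05, 0.10)
```

Running the number before the test, and then actually waiting for it, is the whole point: if your traffic can't reach the required sample size in a reasonable window, the test can't detect the effect you care about.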
📏 What KPIs should you use to measure experimentation?
Christophe Perrin, Group Product Manager at Booking.com, wrote about how they measure their internal product - the experimentation platform.
What really stood out is the importance of selecting KPIs that match the experimentation program's level of maturity. As businesses mature, what was once a north star metric becomes more of a vanity metric.
Given that Booking.com has 16 years of experience working on experimentation, their maturity is very high, so for them, measuring "experimentation quality" is key.
Christophe shares how they arrived at the quality metric, how they quantify 'quality', and their plans to iterate on this measurement in the future. Well worth a read.
📢 Chad Sanderson discusses experimentation metrics
If you haven’t watched Ben Labay’s Testing Insights show on YouTube, finish this newsletter and go there next. In each episode Ben interviews someone from a different business, giving you a glimpse into how different teams tackle experimentation.
One of my favorite episodes is with Chad Sanderson from Convoy, specifically the discussion on how to break down experimentation metrics categorically; it's good thinking material after you've read the previous article.
The pair discuss using program metrics, platform metrics, and customer metrics as a way to categorize the different types of KPIs.
I agree that breaking out revenue and efficiency as their own categories isn't necessarily best (technically both can fall under the other category headers), but perhaps they should be subcategories?
Watch the episode and let me know how you structure your experimentation metrics.