Author: Martin P.
Title: Content Marketer
EP 28: User Research Methods Framework
If we knew what it was we were doing, it would not be called research, would it? — Albert Einstein
Yo, yo, Martin P. is back in the house with the latest news, events, and frameworks in the world of experimentation.
The Google Optimize sunset, a $1000 call for a transparent review, sexy hypotheses… let’s see what this week has in store for us.
This Week in Experimentation:
Blueprint of the week: User research methods framework — helps you plan which research/validation method is right for your business needs. Link.
Talk of the week: Making A/B Testing Hypothesis Sexy — What’s a hypothesis? How vital is it? How can you make it sexy, and does it actually need to be sexy? Link.
Read #1: Wynter CMO Survey 2023 — 79% of respondents say they're going to invest in conversion rate optimization over more paid acquisition. Link.
Read #2: Google Optimize Sunset, What Next — if you were running experiments there, this is bad news. Here’s what you should do, according to Deborah O’Malley. Link.
Read #3: How to create an optimized experimentation strategy using the ResearchXL model — don’t test until you have insight into your users' FUDs and the current barriers in your UX. Link.
Opinion of the week: Dennis van der Heijden — I’ll give $1000 USD to the first Google Optimize customer that publishes a transparent and public Google Sheet with all A/B testing tools mentioned in the CXL 2023 A/B testing tool article. Link.
Event of the week: Going Beyond A/B Testing for Faster Results — What if you could win more gradually, rather than waiting for the ‘final answer’ at the end of an experiment? Link.
Blueprint of the Week: User research methods framework
This framework supports you in planning which research/validation method is best suited to your business needs. Use it to guide user research planning and strategy, or to identify which research activity best fits the research objective or business case based on the data type, effort, and level of confidence (i.e. value) required.
Use Cases:
- Assess the suitability of different user research methods depending on your research objectives or business questions.
- Understand the effort vs. value involved in different user research methods to help plan the most appropriate methodology.
- Identify opportunities to combine research methods to cover a range of research "types" and increase the strength of the signal.
Link.
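To make the framework’s three dimensions a little more concrete, here’s a rough, purely illustrative sketch in Python. The methods, scores, and the `shortlist` helper are hypothetical examples of how you might encode data type, effort, and confidence — they are not Speero’s actual scoring.

```python
from dataclasses import dataclass

@dataclass
class Method:
    name: str
    data_type: str   # "qualitative" or "quantitative"
    effort: int      # 1 (low) to 5 (high)
    confidence: int  # 1 (weak signal) to 5 (strong signal)

# Hypothetical example methods and scores, purely for illustration.
METHODS = [
    Method("Customer interviews", "qualitative", 4, 3),
    Method("On-site polls", "qualitative", 2, 2),
    Method("Analytics review", "quantitative", 2, 3),
    Method("A/B testing", "quantitative", 5, 5),
]

def shortlist(data_type: str, max_effort: int, min_confidence: int) -> list:
    """Return methods that match the research objective's constraints."""
    return [
        m for m in METHODS
        if m.data_type == data_type
        and m.effort <= max_effort
        and m.confidence >= min_confidence
    ]

# Example: low-effort quantitative options with at least a moderate signal.
for m in shortlist("quantitative", max_effort=3, min_confidence=3):
    print(m.name)
```

Running the shortlist once per data type is one way to spot combinations of methods, per the third use case above.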
Talk of the Week: Making A/B Testing Hypothesis Sexy
Link.
What’s a hypothesis? How vital is it? How can you make it sexy, and does it actually need to be sexy?
If you run A/B tests, your hypothesis matters a lot. The hypothesis is the foundation of the A/B test.
If you skip it or treat it as a checkbox exercise, you set a weak foundation. And we all know what happens with weak foundations.
Johann Van Tonder, after a decade in A/B testing, says this is one of the root problems he sees in many experimentation programs.
If you wanna build a good foundation, take a look at the video.
Link.
Reads of the Week:
Read #1: Wynter CMO Survey 2023 — Link.
Four out of 5 companies (79% of respondents) say they're going to invest in conversion rate optimization over more paid acquisition. Now, it’s no surprise that CMOs want to double down on experimentation/CRO. And for good reason.
Leads keep getting more costly, don’t they? In 2022, the average cost per lead (CPL) went up for 91% of industries compared to 2021. For the B2B tech sector specifically, the average CPL comes in at $208.
And if you don’t have more money to push into a lead-generating machine, your best way to grow is to convert the leads you already have.
When fixing your funnel, start from the end. Fix problems in the later stages first, because influencing customers there tends to provide the biggest ROI in the long run.
Plug leaks first, instead of pouring more water into a leaky bucket.
Read #2: Google Optimize Sunset, What Next? — Link.
Google Optimize is shutting down. If you were running experiments there, this is bad news. Here’s what you should do, according to Deborah O’Malley:
1. Start saving and documenting your old tests today. Otherwise, they’ll be gone with the wind after September 30th. You will want to look back and see what worked, what didn’t, and what’s good to present to stakeholders (a minimal archiving sketch follows this list).
2. Look for testing platform alternatives: Convert.com and VWO have new low-cost plans, and Optimizely, Kameleoon, AB Tasty, Adobe Target, SiteSpect, and many other tools are on the table.
3. Think about other types of experimentation: Google Optimize was criticized as a platform catering to smaller experimentation teams — ones suffering from low traffic. There may be more effective experimentation measures that don’t require as much traffic, including UX testing, consumer surveys, exit polls, or customer interviews.
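If you’re wondering what “saving and documenting” could look like in practice, here’s a minimal sketch. Google Optimize doesn’t offer a bulk-export API, so the assumption is that you copy each experiment’s summary out of its report by hand; the field names, the example row, and the optimize_archive.csv filename are illustrative choices, not anything Optimize produces.

```python
import csv
from pathlib import Path

# Hand-collected experiment summaries. The fields and the example row are
# hypothetical placeholders, not an official Google Optimize export format.
experiments = [
    {
        "name": "Homepage hero copy test",
        "start_date": "2023-01-10",
        "end_date": "2023-02-07",
        "hypothesis": "A benefit-led headline lifts sign-ups",
        "result": "Variant B: +8% sign-up rate",
        "decision": "Rolled out variant B",
    },
]

# Write the summaries to a durable CSV you can share with stakeholders later.
archive = Path("optimize_archive.csv")
with archive.open("w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(experiments[0].keys()))
    writer.writeheader()
    writer.writerows(experiments)

print(f"Archived {len(experiments)} experiment(s) to {archive}")
```

Keeping screenshots of the result graphs alongside the CSV is worth the extra minute, since the numbers alone rarely tell stakeholders the full story.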
Read #3: How to create an optimized experimentation strategy using the ResearchXL model — Link.
Experimentation shouldn’t be left to chance; if you’re committed to data-based customer experiences, you need to have a clear and well-researched testing strategy.
Emma Travis, strategist at Speero, explains what the ResearchXL model is and how it sets you up for experimentation success.
A good customer experience optimization (CXO) strategy relies on A/B testing, but testing shouldn’t be random.
While it’s tempting to dive in with one of the myriad available tools, don’t start testing until you have clarity and insight into your users' FUDs (fears, uncertainties, and doubts) and the current barriers in your user experience.
Focus on a detailed research methodology and on solving genuine problems for your users. This way, you’ll build confidence in testing and identify customer needs and conversion points that serve both your users and your biz.
Opinion of the Week:
“I’ll give a $1000 USD to the first Google Optimize customer that publishes a transparent and public Google Sheet with all A/B testing tools mentioned in the CXL 2023 A/B testing tool article (in the comment).
Conditions:
1) Take every tool mentioned in the article, sign up (or try) and compare its features.
2) Try and get an A/A and A/B experiment live (which should be the start of every tool) on a real domain.
3) You probably have to do demos (but you will be the hero if you do them, not the other ten thousand people).
4) You can publish it on any domain…
5) You can NOT be a Convert customer, affiliate, writer, former employee, or have worked with our tool for the last 5 years.
6) Don’t be a jerk and also have no affiliation with any of the other vendors (I won’t be able to check this obviously so I just have to trust you with this)
7) Compare pricing 50,000 / 100,000 / 500,000 / 1M and 5M tested visitors.
8) Don’t ever share with me or my team that you are writing this blog or making this sheet (even in demos).
9) Write a blog post about the experience of the demo, what vendors can improve and how you feel about them.
10) You will be the hero of thousands, make a name for yourself and help out the community. As compensation for the carbon this competition is going to create we will offset by planting 500 trees 🌳 using The International Small Group and Tree Planting Program (TIST) (we are fans) using the Digital Humani integration, our team made (available on GitHub if interested).
11) Tag me when you're done and DM me your email so I can check the records that you actually did a demo. It’s impossible you have this done before the end of February, so don’t try and put GPT-3.5 up for the task, do it yourself.
You're my hero!
PS: Share and comment if you also need this Google Sheet as we will post its final version when I get the confirmation when it’s done.
PPS: Sparsh Gupta / VWO is willing to put up another $1000 in cash and $500 in TIST trees 🌲 for the second blogpost and second Google Sheet that comes up… we would all be really grateful if people don’t have to take hundreds of individual demos”
Event of the Week: Going Beyond A/B Testing for Faster Results
What if you could win more gradually, rather than waiting for the ‘final answer’ at the end of an experiment? What if you could earn while you learn?
Well, there is a way to go beyond A/B testing and swap low-performing options with fresh options, smoothly.
We’ve dealt with similar situations: Speero helped MongoDB build its testing program from zero to 100 tests per year.
In this webinar from Intellimize, Sam McAlear, MongoDB Director of Growth, Ben Labay, Speero Managing Director, and Guy Yalif, Intellimize CEO, will show you how stand-out experimentation programs move beyond A/B testing to learn faster and convert more.
Come and learn:
- How to determine what kind of test to run and how long it should run for
- Why the methodology behind testing matters
- How to move beyond just A/B testing for faster results
Link.
That’s it for another week.
If you want me to share your podcast or blog post, just reply to this email or reach out to me personally at Martin@speero.com
PS — Think anyone could use all this info? Share the newsletter with them!