Jeremy Epperson is the Chief Growth Officer at ConversionAdvocates, and to say he knows a thing or two about CRO and experimentation would be an understatement. For our Guest Blog series, he sat down with Speero strategist Shiva Manjunath to share the wealth of knowledge he has gained as an agency consultant. Jeremy says the first 90 days of a new program are always the most challenging, but he goes on to provide a clear roadmap covering six key areas that will help you set up a successful program:
- Common roadblocks in the first 90 days
- How to source good ideas
- Tips for hypothesis writing
- The link between research and process
- How personalization fits into experimentation
- Why research and testing are two sides of one coin
Watch the video interview here
What is your background in experimentation?
“Technically, I've been doing CRO for 14 years now. But from an agency consulting perspective, I think I'm rolling up on 10 or 11. I've focused my whole career on one thing, and that's conversion rate optimization and specifically helping businesses get started. Because the first 90 days are the hardest,” Jeremy says. “There's just so many pitfalls and roadblocks and issues and bureaucracy, and I just wanted to find out, can we isolate all these issues to the point where people can skip the learning curve, they can like avoid some of the mistakes?”
Jeremy has walked over 250 businesses through the formal process of assessing, observing, and analyzing their level of maturity, and then building growth plans for them based on those inputs.
He notes that while there are brilliant people working in CRO today, they often fail to appreciate that a CRO program in year one looks dramatically different from a mature program doing one-to-one personalization.
What are common roadblocks that you see occur in those first 90 days?
“So 85-115 days is the window when you start hearing the grumbling. But 90 days is an arbitrary line we draw where we're like, ‘Hey, we have to show some kind of progress.’ And it doesn't always have to be a bunch of winning tests. Sometimes it doesn't even have to be a winning test. But there's got to be a level of buy-in, engagement, and excitement that happens. You have to get that level of trust. And what I've seen so many times, and it breaks my heart as somebody who does CRO, is ‘CRO doesn't work,’ or ‘CRO doesn't work for us.’ Because they just are not putting the things in the right place,” says Jeremy.
Jeremy adds that roadblocks fall into two big categories, which are the easiest for a novice to wrap their head around. Number one is not knowing what to test. Number two is the operational inefficiencies that arise and prevent teams from getting tests live.
If you can solve those two things, then you’ll have enough bandwidth or runway to be able to solve other problems that come up.
How do you avoid ideas that are not backed by research?
“The ways we tried to circumvent that with teams that don't have maturity is a couple of things. My favorite thing is: scrape from what you have. There is hidden gold in so many places. It's like, somebody did research and it was on the marketing team three years ago and it was never activated, and we forgot it was sitting in a drive somewhere. There's so much stuff you can pull from stakeholder interviews. There's stuff that's lying around that just hasn't been activated,” Jeremy says.
If you're really struggling to find a test, just look around you. You can find plenty of test hypotheses if you take the time to audit existing material. Another hack Jeremy uses: if a team has even a couple of customer data points to pull together, he works with them to get the hypothesis and variation count up.
Typically, an hour-long meeting with the right people can give rise to many more variations, so that teams don't get stuck. It's simple, but effective.
What advice or thoughts do you have on hypothesis writing?
“I don't like clients or internal teams getting stuck on like, ‘we have to write this thing a certain way,’ because it's missing the point. The point is we are challenging this concept. We're challenging this idea. How do we form this into a written statement, and pull together something that's pretty complex: copy, design, development, etc.? I try to focus less on ‘here's the structure we need’ and I try to focus more on presenting an idea in front of multiple people on a team that will have varied skills and backgrounds challenging that idea. Not to kill it, but what are other ways that we can think about that? What are the other ways we can implement it? So it's almost like workshop style,” says Jeremy.
Jeremy uses this workshopping method both internally with his own team, and also models it for clients so their teams can step in and support experimentation efforts. To be truly effective and impactful, you need feedback from varied backgrounds and perspectives. It also means you have to create a working dynamic where people feel comfortable but uncomfortable at the same time.
How do you get people into the mindset that you are there to help them?
“The term that I use is worldview. Like when I hire, one of the interesting things is that you can't teach worldview. You can't teach perspective. You can teach any skills. I can teach anybody design or analytics or research skills. Those are all teachable skills. But it's very hard to mold and change someone's worldview,” says Jeremy.
If you select the right people who have that natural optimization mindset, it just makes things significantly easier for your team.
Research tools are important, but so are the right processes. How do you approach process?
“My framework is 33 factors, and that's not arbitrary. I tinkered with it for a long time, with a lot of people,” Jeremy says. “We basically boil it down to a manageable amount of things that you're delineating and that you're measuring and ranking. It's not too overwhelming, but it's not too simplified either, because then you can leave out really important points. So we're basing that on your six principles, and there's a couple of things within each of those. And that gives us a good enough idea that we can peg somebody and be like, okay, now we know the exact inputs, and we're going to create a customized solution for how you need to approach this based on where you are today. And I think that's the cool thing, because the success increases exponentially when you give somebody a customized plan that fits their team. It's just a night and day difference.”
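Jeremy doesn't enumerate the 33 factors or the six principles in the interview, but the mechanics he describes (delineate factors, score them, roll them up by principle) are straightforward to picture. Here's a minimal sketch of that kind of maturity scorecard in Python, with entirely hypothetical principle and factor names and an assumed 1-5 scale, not his actual framework:

```python
from collections import defaultdict

# Hypothetical maturity scorecard: each factor is scored 1-5 and grouped
# under a principle. The real framework uses 33 factors across six
# principles; the names and scores below are illustrative only.
scores = {
    # (principle, factor): score
    ("Research", "voice-of-customer surveys"): 2,
    ("Research", "analytics audit cadence"): 3,
    ("Process", "test QA checklist"): 1,
    ("Process", "prioritization model"): 2,
    ("Culture", "executive buy-in"): 4,
}

def summarize(scores):
    """Average factor scores within each principle to peg overall maturity."""
    by_principle = defaultdict(list)
    for (principle, _factor), score in scores.items():
        by_principle[principle].append(score)
    return {p: sum(v) / len(v) for p, v in by_principle.items()}

for principle, avg in sorted(summarize(scores).items()):
    print(f"{principle}: {avg:.1f} / 5")
```

The output is a per-principle average that makes the weakest areas obvious, which is the kind of input Jeremy says the customized growth plan is built from.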
How do you view the relationship between personalization and experimentation?
“The way that I view it, it's a level of progression, it's a level of increasing maturity or sophistication. So if you can't have a high-functioning A/B testing program, to call it that, then how are we going to get more sophisticated with analytics and tying together and integrating discrete data sources? You're not going to be able to do that if you don't have the chops to do it. So it's kind of both. It's a toolbox, just like quantitative and qualitative research are a toolbox.”
For Jeremy, each tool in the toolbox is valuable on its own, but there is also a more sophisticated value when you blend them together.
You wrote a post about experimentation and research working in conjunction. Can you talk about that?
“My analogy is two sides of a coin. That's what seems to resonate when I'm talking to clients. We have plenty of math and data to support the idea that your validated win rate goes up dramatically when you add customer data and research methods. And we've seen it because we've modeled this across so many different businesses. I can go and pull all the data, and I literally database it, so I know. But the other part of it too is, if we're talking just about your first year of CRO, we do what I call a dual-track approach. What you don't want to do is sit and do research for six months—I'm exaggerating for effect—and not launch any tests,” Jeremy says.
You do need to be able to show ROI to get buy-in. That's why Jeremy focuses on how quickly he can get tests live, scraping ideas from what already exists so tests are as data-driven as possible from day one, even if no process or structure exists yet. From there, research and testing continue on parallel tracks.