Codecademy started out in 2011 as one of the first free products that taught people to code. Since then, we have helped over 45 million learners improve their lives through programming while making major improvements to our product—driven by experimentation.
Our goal is to empower the world through tech education, reaching as many learners as possible to support our vision.
As a subscription business, we knew that small repeatable wins could compound to earn us millions of dollars in additional revenue. As part of the growth team, we were tasked with experimenting on any part of the business that could drive an impact, so we focused on the big, key levers of our monetization flow.
From spaghetti testing to strategic testing
When I think back to where we were two and a half years ago, the biggest issue with our testing program was that we tried several small ideas in different places and didn’t iterate enough on concepts.
Our ideas were driven by individual opinions and changes that we saw our competitors making, which is one of the worst ways to run a testing program.
The tide turned for us when we began focusing on our own learners and what we knew about them—their perceptions of our product and consumer behavior that was supported by evidence.
This research informed better hypotheses, which allowed us to launch better tests and start unraveling bigger problems—problems we couldn't solve without changing fundamental parts of our business. That’s what led us to strategic testing. Once we realized this, we had to prioritize which areas of the business to test first.
We already had a really strong brand and top-of-funnel traffic. So we settled on growth levers at the bottom of the funnel: experimenting with pricing, the checkout page, plan mix, and our trial model.
Wins deep in the funnel would have an immediate impact on the business and deliver the growth we needed. The following are some of the strategic tests we ran at Codecademy.
Testing our pricing strategy and plan mix
In my opinion, price and plan mix are the two most powerful growth levers in any subscription business. You can capture more money upfront, lower churn, and increase user motivation all by experimenting in these two areas.
All users make a pricing decision when purchasing, making it one of the few truly global levers in your business.
Experimenting here is no small undertaking, however. Price and plan-mix are complex areas to test because they impact several areas of your business and product, so any changes need to be made carefully. In addition, you need to coordinate with other teams that touch pricing, such as marketing, finance, customer support, and sales.
The first experiment we ran widened the price gap between the monthly and annual plans in high-GDP countries, which had a massive positive impact on the number of annual subscriptions.
If you can find ways to incentivize users to enter an annual plan, you're probably capturing more revenue upfront than the revenue generated from the average lifetime of a monthly user. While it’s a large undertaking, experimenting is likely to have a big payoff in these areas.
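The upfront-revenue argument above is simple arithmetic: an annual plan wins whenever its upfront price exceeds the revenue an average monthly subscriber generates before churning. A minimal sketch, using entirely hypothetical prices and retention (Codecademy's real figures are not public):

```python
# Hypothetical numbers for illustration -- not Codecademy's actual pricing.
MONTHLY_PRICE = 39.99        # price per month on the monthly plan
ANNUAL_PRICE = 239.88        # billed upfront (a $19.99/month equivalent)
AVG_MONTHLY_LIFETIME = 5     # assumed average months a monthly subscriber stays

# Expected lifetime revenue from a monthly subscriber
monthly_ltv = MONTHLY_PRICE * AVG_MONTHLY_LIFETIME

print(f"Upfront annual revenue:    ${ANNUAL_PRICE:.2f}")
print(f"Expected monthly-plan LTV: ${monthly_ltv:.2f}")
if ANNUAL_PRICE > monthly_ltv:
    print("Annual plan captures more revenue upfront")
else:
    print("Monthly plan's expected lifetime revenue is higher")
```

With these assumed numbers the annual plan comes out ahead, which is exactly why incentivizing annual enrollment can pay off even at a steep per-month discount.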
Testing our checkout process
When we started working with Speero, one of the first tests we ran was on the checkout page. We started by reordering the plan cards, so a user's eye would be drawn to the most attractive plans first, followed by the less-attractive plans. We also highlighted the savings on the checkout page.
This test leaned on a psychological principle called the Rule of 100, which suggests that when a discount exceeds $100, framing it as a dollar amount feels larger to users than the equivalent percentage, even though both represent the same savings. Because the savings on our annual plan was above $100, we tested showing it in dollars.
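The Rule of 100 framing can be reduced to a tiny display rule. This is an illustrative sketch, not Codecademy's actual implementation, and the prices are made up:

```python
def format_savings(full_price: float, discounted_price: float) -> str:
    """Choose the discount framing that reads largest, per the Rule of 100:
    savings above $100 are shown in dollars, smaller savings as a percentage."""
    savings = full_price - discounted_price
    if savings > 100:
        return f"Save ${savings:.0f}"
    pct = savings / full_price * 100
    return f"Save {pct:.0f}%"

# Hypothetical plan prices for illustration
print(format_savings(479.88, 239.88))  # savings over $100 -> dollar framing
print(format_savings(39.99, 29.99))    # small savings -> percentage framing
```

The first call prints "Save $240" and the second "Save 25%", matching the framing the principle predicts will feel most valuable in each range.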
We saw a 28% lift in annual Pro plans, as well as a small bump up for overall page conversions.
It was good to see a classic psychology principle play out successfully in our experiment.
Testing our free-trial models
After seeing success in our pricing and purchase flow, we then moved on to test our free-trial model, which is the way that free users can experience the paid product before fully committing. This was one of the harder tests to execute and had a high level of risk and reward because we were tinkering with the company’s core revenue engine.
The best trial models are the ones that align incentives to start a trial closest to where your users find value. But back in 2016, we had no trial model at all. Users entered the site and, if they wanted, could upgrade. Unsurprisingly, only a very small number of users opted for the paid Pro version.
In 2017, we shifted toward a trial model where everybody who enrolled in our free plan was automatically enrolled in a trial of the Pro version. While this was better than where we were before and had an impact on paid conversions, it did come with a few downsides.
For one, users experienced the whole product at once, so they weren’t able to differentiate between paid and free features.
Additionally, because we were experimenting with the core revenue model for the business, we wanted to do it in a slow and measured way. We didn't want to ship something that we saw succeed within the experiment window but would have a long-term detrimental impact on the business.
So we shipped our ideas in three stages:
- The first was aimed at existing users who had used the product for 60 days or longer. We showed them an offer for the Pro trial, which they had to opt into. We saw a strong positive result, so we kept moving.
- In the second round, we tested whether a “credit card upfront” trial model would work better for new users than the control. This was perhaps our most important test because the new user experience is one of the core pillars and revenue engines for the business. We shipped a variant, monitored it for two months, but didn't see the results we were looking for. We went back to the drawing board to see what went wrong.
- We shipped the third version a month or so later with a slightly different paywall structure. Finally, we saw a variant beat the control. It was ultimately a huge success for the business and the product.
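Deciding that "a variant beat the control," as in the staged rollout above, typically comes down to comparing two conversion rates. A common way to do that is a two-proportion z-test; the sketch below uses only the standard library and hypothetical traffic numbers (Codecademy's actual sample sizes and tooling are not described in this post):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates, pooled variance."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-tail p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 10,000 users per arm, 4.0% vs 4.6% conversion
z, p = two_proportion_z(400, 10_000, 460, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A low p-value alone isn't enough for a core-revenue change, which is why the test above was paired with a two-month monitoring window before shipping.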
Testing our messaging, clarity, and propositions
We then moved up another step in the funnel, focusing on our checkout page messaging and key decision points in the app.
The most successful tests we’ve run are based on understanding our users’ perceptions, problems, objections, fears, and uncertainties. The way we use these insights, even in small tactical testing, ultimately ladders up to bigger strategic areas, such as helping our users understand our free versus paid product.
We tested generic content versus more personalized content at checkout, based on up-funnel navigation patterns. Our aim was to increase the clarity and relevance of our product to specific users.
Such tests helped us understand which messaging really resonated with our users and what triggered them to sign up. This insight can help us in marketing—as well as product development—moving forward.
To move away from testing irrelevant ideas to more strategic experiments, you need to gain real insights and data about your users. Use experimentation to test the business’s most highly guarded areas, such as price, proposition, and even the product itself.
If you want to grow way beyond the local maximum, the only way to do that is through testing the strategic parts of your business.