What do visitors do when they land on your website?
- Do they get confused?
- Do they get stuck?
- Do they trip on your trip wires?
When we talk about user behavior, our goal is often to uncover points of friction.
It’s a win-win when customers can move seamlessly through a website experience. They should be able to realize that they need something, find it, and easily complete the purchase or desired action.
To gain valuable behavioral insights from your customers or users, in their own words, you can start with two methods:
- Collect passive feedback
- Conduct user testing
Passive feedback is the voice of customer data that you are continuously, passively collecting in the form of user surveys, polls, etc.
You can think of it as a hands-off, automated monitor that’s taking in crowdsourced data and keeping a finger on the pulse of your users.
There are several helpful tools that you can use to collect targeted passive feedback, such as Usabilla, Hotjar, and Userleap.
While passive feedback alone can’t help you fully understand user behavior, it acts as a helpful safety net. It ties together all the metadata surrounding the user experience with the feedback provided.
Additionally, if you're doing a lot of continuous code releases and something goes wrong, this type of feedback is valuable in helping you quickly spot and solve problems.
Benefits of collecting passive feedback
Two real-world examples serve to illustrate the benefits of collecting passive feedback.
Often, companies make it very difficult to provide feedback, especially on mobile sites or apps. Virgin Airlines, by contrast, added a feedback CTA to their main menu. For positive feedback (above a star-rating threshold), they then triggered a follow-up CTA to leave an app store review. This gave them a quick way to continuously capture feedback overall, while funneling positive feedback into a highly useful channel. The result: a 300% uplift in positive app store reviews.
“The company [Usabilla] was able to provide us with screen capture of what the customer sees at the time of logging the issue. Being able to see what the customer sees has made it much easier to reproduce and understand customers’ pain points in a timely manner.
“We also liked the simplicity of distributing surveys for different purposes without needing to wait for the next release deployment, and the functionality to capture useful metadata, which is useful for us because the data provides our development teams with the necessary information to troubleshoot quicker and faster.” – Brendan Valmont, Digital Product Specialist at Virgin
With a new-and-improved call to action to leave feedback, customers had an easy off-ramp to provide additional input regarding their experience.
The key here is to make giving feedback apparent and accessible across the site. Then you can use this data in creative ways, not just for fixing problems.
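The Virgin pattern can be sketched as simple routing logic. This is a hypothetical illustration, not Virgin's or Usabilla's actual implementation; the threshold value and route names are assumptions.

```typescript
// Hypothetical feedback-routing sketch: ratings at or above a threshold
// trigger an app store review CTA; everything else goes to internal follow-up.

type FeedbackRoute = "app_store_review" | "internal_followup";

interface Feedback {
  rating: number; // star rating, 1-5
  comment: string;
}

// Assumed threshold; tune against your own rating distribution.
const REVIEW_THRESHOLD = 4;

function routeFeedback(feedback: Feedback): FeedbackRoute {
  // Happy users are invited to repeat their praise publicly;
  // unhappy users get a follow-up so problems are caught early.
  return feedback.rating >= REVIEW_THRESHOLD
    ? "app_store_review"
    : "internal_followup";
}
```

The point of the split is that each bucket gets a different next step: public praise is amplified, while negative feedback is kept in-house where it can be acted on.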
TUI was using Usabilla to collect ongoing feedback from customers. One day, they implemented a new software update to their site. Suddenly, they started to get a lot of customer feedback through the Usabilla channel.
Thanks to that feedback, TUI quickly realized that 11% of their mobile traffic had been frozen out by the software update - at a potential toll of $400,000 per week in lost revenue.
The practice of collecting passive feedback allowed them to quickly act and remedy the problem.
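One crude way to catch this kind of release regression automatically is to compare today's feedback volume against a trailing baseline. A minimal sketch, assuming daily feedback counts are available; the factor of 2 is an arbitrary sensitivity setting, not part of any tool's actual alerting logic.

```typescript
// Flags a suspicious spike in feedback volume, e.g. after a code release.
// dailyCounts: feedback volume for recent prior days;
// todayCount: feedback volume observed today.

function isFeedbackSpike(
  dailyCounts: number[],
  todayCount: number,
  factor: number = 2 // assumed sensitivity; tune to your traffic patterns
): boolean {
  if (dailyCounts.length === 0) return false; // no baseline to compare against
  const baseline =
    dailyCounts.reduce((sum, n) => sum + n, 0) / dailyCounts.length;
  return todayCount > baseline * factor;
}
```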
How NOT to collect passive feedback
Many businesses make it incredibly hard for users to give feedback. Take this example from PayPal, where only logged-in users can message the business. If you're really eager to provide feedback, you might stumble across the "feedback" link in the site footer. Not ideal.
Or take the example of the blender website ninjakitchen.com. While Ninja has a good site-wide feedback option on the right-hand side of the screen, they ask visitors for a huge amount of feedback, from the overall experience to navigation and more.
An overwhelming survey request like this essentially asks users to comment on the entire user experience while simultaneously providing a horrible experience. It's as if the Ninja team unwrapped a new survey feature and, on a scale of 1-10, turned it to 11. The survey asks too many questions about too many different elements, with no clear CTA.
Just like when you target cross-sell and upsell campaigns, passive feedback needs to be targeted.
Conduct user testing
This method of collecting data on user behavior often requires you to incentivize people to go through the process of using your website and completing a set of tasks while you observe them.
User testing can be conducted at an onsite facility, but remote user testing is often the more logical option and offers better bang for your buck overall. Tools such as UserTesting make remote sessions straightforward to run.
Four user testing pitfalls
Before we dive into how to conduct user testing, let’s cover four main pitfalls to be aware of.
User testing data can be both powerful and dangerous. When findings are taken out of context or presented incorrectly, they can have a disproportionate sway on how people think about users.
You don’t want to end up in the situation where you present a specific finding about how one or two users interacted with your website, and allow that to dictate major and unwarranted changes that don’t apply more broadly.
Pitfall #1: Seeking perceptions vs. behaviors
User testing is about observing behavior.
This is one of the main differences between unmoderated and moderated research. The former is primarily used to observe behavior; the latter can achieve this too, but can also probe deeper into users' perceptions and motivations. We’ve previously written a thorough guide to moderated user research, so check it out if you need guidance.
For unmoderated user testing, give users their instructions, then get out of their way so you can focus on what they do without any interference or additional prompting. Only then will you see the natural places where users are tripping up and encountering problems within the website journey.
Pitfall #2: Giving leading information
While you won’t be asking questions during the user’s test, you will need to set the stage for their task and give them some instructions upfront (more on that later).
Whatever written or verbal material you do give them, avoid leading questions or leading tasks. For example, an instruction like “After searching, find X on the website” could lead the user to think they need to use the search functionality, which biases their behavior.
Pitfall #3: Launching all at once
The third big mistake is launching all at once. This isn't A/B testing, so we don't necessarily need the first user test to be completely analogous with the second one.
We can launch the first user test, watch it, and then adjust the tasks as needed to iron out the kinks and make the test process smoother if necessary.
Pitfall #4: Ignoring user intent
The fourth big mistake is not accounting for user intent when setting the task or scenario. Intent encompasses the following user goals:
- I want to know
- I want to go
- I want to do
- I want to buy
For one of our clients, Grubhub, we had to think about this quite a bit when setting up the user testing scenarios and flows. We had to consider the different journeys users might take depending on their intent, such as wanting to buy from a nearby restaurant, browsing deals, or filtering by cuisine type.
Each of these queries falls under the “I want to buy” user goal, but the resulting experience through the app should be different - and user testing can inform what's on those pages if you account for different user intent when designing the test scenarios.
How to conduct unmoderated user testing
Once you’re ready to tackle user testing, it’s time to set up your user testing scope. I’ll use Ninja Blenders as an example to walk through the process.
Step 1: Create a scope document
Generally, what we're doing with this type of document is establishing the plan and communicating it to various stakeholders to facilitate planning and approvals.
We're also documenting what we're doing so that six months from now when we’re ready to do it again, we can review exactly what steps we took the first time.
Within the scope document, you’ll want to include clear goals. In this example, we're focusing on the monetization funnel to eliminate friction and add clarity.
Step 2: Recruit users
We're not getting data on users’ opinions or their motivation. Because of this, we don’t need to recruit actual customers. Instead, we can recruit from a panel of testers that we will pay. It's really about finding the tripwires.
As I mentioned previously, there are many great tools available these days such as UserTesting.
If you're going to be testing frequently or have a very niche product, I recommend curating a custom panel with a tool like Amazon MTurk. To do that, you'll want some advanced screeners or a separate screening document in place.
How many testers do you want? A good rule of thumb is five testers per device. It’s smart to recruit one or two extras, in case one of them doesn't pan out.
Step 3: Setting the tasks
This is where you instruct your user tester on what you’d like them to do. You’ll provide these instructions at the beginning of the test, to guide their order of operations.
You’ll also explain that you want them to speak aloud, explaining what they are doing and why they are doing it as they move through each step of the test.
In the case of Ninja blenders, you could set up your task instructions as follows. The first question is intended to get a scenario going in the user’s mind.
- Imagine you are looking for a new blender. What is important to you when deciding on a blender to purchase?
- Find blenders you might like to buy on Google.
- Go to ninjakitchen.com
- Learn about the blenders offered. How are they different from other products on the market? What are people saying about them?
- Find a blender that you can use with hot or cold food. Once you find one you like, add it to your cart.
- Add an accessory to your cart.
- Proceed to checkout, but end your task before purchasing the items.
Don’t be too prescriptive
These steps are intended to bring the user through the entire funnel and have them articulate and rationalize their experience along the way, mimicking a typical buyer journey.
In general, we should avoid outright telling a user to do something, as this doesn't give us insight into what users would do naturally. A good tip for writing tasks is starting broad and then getting more specific.
For example, say you really want to know how users interact with an element of the homepage that is below the fold. In this case, we shouldn't ask users directly to "scroll down and use x element". Rather we should keep things broad at first and observe the user behavior. Does the user scroll down without a prompt? The next task could then be more specific. This way we can ascertain for example, that although users were positive about the element, none of them would have used/seen it in the real world (cue test idea to increase the prominence of this element).
Start from Google
No one just lands on a website out of nowhere, so to emulate a more realistic and natural user experience, we should begin all user tests with an exploratory, research-based task.
This has multiple benefits, including:
- Gathering insight into the acquisition strategy.
- Better understanding users' frame of mind and the wider user journey.
- Easing users into the research session before they arrive on your website.
“Find X” tasks
For ecommerce businesses, ask the user to find a product with a specific price range, material, or other specification, allowing you to observe their "product finding" process. This helps you assess the usability of the filters, site navigation, and so on.
For SaaS businesses, you can ask the user to compare different plans and choose one. This will help you understand what draws their attention and whether (and how) they find the information relevant to the decision.
Step 4: Code your sessions
You will want to note down all comments from the user session, as well as their behavior, e.g. anything that seems to be broken, not working very well, or causing users hesitation, confusion, frustration, or difficulty.
While reviewing the sessions, look out for:
- Benefits: What did the user like about the site? What is working well?
- Opportunities: Suggestions from users that could improve the site e.g. the user asked for a savings calculator to compare their current electricity plan to yours.
- Issues: Any bugs, technical issues, difficulties that the user faced, questions that didn't get answered, etc.
To help make sense of the sessions, highlight your notes or separate findings into different columns, sheets, or slides under the above headings.
From the coded findings, you can then summarise:
- The benefit/opportunity or issue
- Supporting quote, screenshot, or video from the sessions. This is particularly powerful if you need to get buy-in from others for the recommended solutions.
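The coding scheme above lends itself to a simple data structure. A minimal sketch follows; the note shape and field names are assumptions for illustration, not a prescribed schema.

```typescript
// Groups raw session notes into the three buckets used above:
// benefits, opportunities, and issues.

type Category = "benefit" | "opportunity" | "issue";

interface SessionNote {
  tester: string;
  category: Category;
  note: string;
  timestamp?: string; // position in the recording, handy for pulling clips
}

function groupFindings(notes: SessionNote[]): Record<Category, SessionNote[]> {
  const grouped: Record<Category, SessionNote[]> = {
    benefit: [],
    opportunity: [],
    issue: [],
  };
  for (const note of notes) {
    grouped[note.category].push(note);
  }
  return grouped;
}
```

Keeping a timestamp per note makes it cheap to pull the supporting clip or quote later when you present findings to stakeholders.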
These two methods of securing voice of customer data can provide a treasure trove of insights into how your customers are using your website and where problems occur. Both methods can be deployed relatively easily, and the payoff is huge when actions (such as experimentation hypotheses) are taken based on the research.
Consider where, when, and how best you can gather more voice of customer data for your business to help with your experimentation efforts.