What are the challenges and barriers your visitors experience when they land on your website?
- Do they get confused?
- Do they get stuck?
- Do they find what they need?
By the end of this short read, you should have a structured understanding of the processes involved in gathering a true picture of the strengths, challenges, and opportunities within your user experience journey. To gain valuable behavioral insights from your customers or users, in their own words, you can start with two methods:
- Collect passive feedback
- Conduct user testing
Passive Feedback
Passive feedback is the Voice of Customer data that you are continuously, passively collecting in the form of user surveys, polls and feedback forms.
You can think of it as a hands-off, automated monitor that’s taking in crowdsourced data and keeping a finger on the pulse of your users' evolving behavior patterns. There are several helpful tools that you can use to collect targeted passive feedback, such as Usabilla, Hotjar, and Userleap (recently rebranded to Sprig).
While passive feedback alone can’t help you fully understand user behavior, it acts as a helpful safety net. It ties together all the metadata surrounding the user experience with the feedback provided. Additionally, if you're doing a lot of continuous code releases and something goes wrong, this type of feedback is valuable in helping you quickly spot and solve problems.
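To make the metadata point concrete, here is a minimal sketch of the kind of context worth capturing alongside each piece of feedback. The record shape and field names are illustrative assumptions, not any specific tool's payload:

```typescript
// A minimal sketch of the context worth capturing alongside passive feedback.
// The record shape and field names are assumptions, not a real tool's payload.

interface FeedbackRecord {
  message: string;        // the user's own words
  url: string;            // where on the site the feedback was left
  userAgent: string;      // browser/device context for reproducing issues
  releaseVersion: string; // ties feedback to a specific code release
  timestamp: string;      // when it happened, useful for spotting regressions
}

function captureFeedback(message: string): FeedbackRecord {
  return {
    message,
    url: window.location.href,
    userAgent: navigator.userAgent,
    releaseVersion: "2024.05.1", // assumption: injected at build time in practice
    timestamp: new Date().toISOString(),
  };
}
```

If a release breaks something, feedback arriving with this metadata lets you filter by release version and URL to spot the problem quickly.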
Benefits of collecting passive feedback
Two real-world examples serve to illustrate the benefits of collecting passive feedback. The first example is Virgin Airlines.
Often, companies make it very difficult to provide feedback, especially on mobile sites or apps, but Virgin Airlines added a feedback call to action (CTA) to their main menu. For positive feedback (above a star rating threshold), they then triggered a CTA to leave an app store review. This provided a quick way to continuously catch feedback overall, while funnelling the positive feedback into a highly useful and visible channel. The result was a 300% uplift in positive app store reviews.
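The routing logic behind this pattern can be sketched in a few lines. This is a hypothetical sketch, not Virgin's actual implementation; the function names and the four-star threshold are assumptions:

```typescript
// Hypothetical sketch of rating-based feedback routing, as in the Virgin example.
// Function names and the 4-star threshold are assumptions, not Virgin's code.

interface FeedbackEvent {
  rating: number;   // 1-5 star rating captured by the in-app feedback CTA
  comment?: string; // optional free-text feedback
}

const POSITIVE_THRESHOLD = 4;

function promptAppStoreReview(): void {
  // In a real app this would call the platform's in-app review API
  console.log("Enjoying the app? Leave us a review!");
}

function logFeedbackInternally(event: FeedbackEvent): void {
  // Negative feedback stays actionable and private instead of going public
  console.log(`Internal feedback: ${event.comment ?? "(no comment)"}`);
}

function routeFeedback(event: FeedbackEvent): void {
  if (event.rating >= POSITIVE_THRESHOLD) {
    promptAppStoreReview(); // funnel happy users into the visible channel
  } else {
    logFeedbackInternally(event);
  }
}

routeFeedback({ rating: 5 });
routeFeedback({ rating: 2, comment: "Check-in kept failing" });
```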
“The company was able to provide us with screen captures of what the customer sees at the time of logging the issue. Being able to see what the customer sees has made it much easier to reproduce and understand customers’ pain points in a timely manner. We also liked the simplicity of distributing surveys for different purposes without needing to wait for the next release deployment, and the functionality to capture useful metadata, which is useful for us because the data provides our development teams with the necessary information to troubleshoot quicker and faster.” - Brendan Valmont, digital product specialist at Virgin. You can read the full case study here.
With a new-and-improved call to action to leave feedback, customers had an easy and intuitive method to provide additional input regarding their experience.
How NOT to collect passive feedback
Just because passive feedback is easy to implement doesn’t mean that you should do it without thought and care. Take the example of the blender website ninjakitchen.com. Ninja was asking visitors for a huge amount of feedback about their overall experience, covering ease of navigation, searchability, functionality ratings, product range, and more.
An overwhelming survey request like this is not only a huge ask of the user with no clear CTA, but also detracts from the user's overall experience. Just like when you target cross-sell and upsell campaigns, passive feedback needs to be targeted.
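As a sketch of what "targeted" can mean in practice, the snippet below gates a single focused survey question behind a page, an engagement signal, and a frequency cap. All names and thresholds here are illustrative assumptions, not a specific tool's API:

```typescript
// Illustrative targeting rules for a passive feedback prompt. The page path,
// engagement threshold, and function names are assumptions, not a tool's API.

interface PageContext {
  path: string;           // current page path
  secondsOnPage: number;  // a simple engagement signal
  surveySeenBefore: boolean;
}

function shouldAskForFeedback(ctx: PageContext): boolean {
  const isPostPurchase = ctx.path.startsWith("/order-confirmation");
  const isEngaged = ctx.secondsOnPage > 30;
  // One focused question, one audience, one moment - not a site-wide barrage
  return isPostPurchase && isEngaged && !ctx.surveySeenBefore;
}

const ctx: PageContext = {
  path: "/order-confirmation/123",
  secondsOnPage: 45,
  surveySeenBefore: false,
};

if (shouldAskForFeedback(ctx)) {
  console.log("Survey: Was anything about checkout unclear?");
}
```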
User testing
This user behavior analysis process often requires you to incentivize selected users to test the site by completing a set of predetermined tasks, while you observe them.
User testing can be conducted at an on-site facility, but remote user testing is often the more logical option and offers better bang for your buck. Popular tools to conduct remote user testing include:
- UserZoom - Design iteration and experience measurement solutions
- UserTesting - Web and mobile testing around Marketing, UX, Design, and new product development (NPD)
- Userlytics - Multichannel, Prototype, App, and Systems testing, now with live chat
- UserBob - Fast and affordable, with a focus on reducing bounce rates
4 User testing pitfalls
Before we dive into how to conduct user testing, let’s cover four main pitfalls to be aware of.
User testing data can be both powerful and dangerous. That’s because context is everything: when taken out of context, the findings can have a disproportionate sway on how people think about users if they’re not interpreted and presented correctly.
You don’t want to end up in a situation where you present a specific finding about how one or two users interacted with your website, and allow that to dictate major and unwarranted changes that don’t apply to the broader spectrum of users.
Pitfall #1: Seeking perceptions vs. behaviors
User testing is about observing behavior. This is one of the main differences between unmoderated and moderated research: the former is primarily used to observe behavior, while the latter can achieve this but can also be used to probe deeper into users' perceptions and motivations. We’ve previously written a thorough guide to moderated user research, so check it out if you need guidance.
For unmoderated user testing, give users their instructions, then get out of their way so you can focus on what they do without any interference or additional prompting. Only then will you see the natural places where users are encountering problems within the website journey.
Pitfall #2: Giving leading information
While you won’t be asking questions during the users’ test in unmoderated testing, you will need to set the stage for their task and give them some instructions upfront (more on that later).
Ensure that you avoid leading questions or leading tasks, whether you provide written or verbal instructions. For example, an instruction like “After searching, find X on the website” could lead the user to think they need to use the search functionality, which biases their behavior.
Pitfall #3: Launching all at once
The third big mistake is launching all user tests at once. This isn't A/B testing, so the first user test doesn't need to be identical to the second.
We can launch the first user test, observe, then adjust the tasks where needed to make the test process smoother if necessary.
Pitfall #4: Ignoring user intent
The fourth big mistake is not accounting for user intent when setting the task or scenario. Intent encompasses the following user goals:
- I want to know
- I want to go
- I want to do
- I want to buy
For one of our clients, Grubhub, we had to think about this quite a bit when setting up the user testing scenarios and flows. We had to consider the different journeys users might follow depending on their intent, such as wanting to buy from a nearby restaurant, hunting for deals, or searching by cuisine type. Each of these queries falls under the “I want to buy” user goal, but the resulting experience through the app should be different - and user testing can inform what's on those pages if you account for different user intent when designing the test scenarios.
How to conduct unmoderated user testing
Setting the scope
Once you’re ready to tackle user testing, it’s time to set up your scope. Let's go back to Ninja Blenders and use them as an example to walk through the process.
Step 1: Create a scope document
Generally, what we're doing with this type of document is establishing the plan and communicating it to various stakeholders to facilitate planning and approvals. We're also documenting what we're doing so that six months from now, when we’re ready to do it again, we can review exactly what steps we took the first time.
Here’s a template scope document you can use.
Step 2: Set goals
Within the scope document, you’ll want to include clear goals. In this example, we're focusing on the monetization funnel to eliminate friction and add clarity.
Step 3: Recruit users
As we’re not getting data on users’ opinions or their motivation, we don’t need to recruit actual customers. We can simply use a panel of testers; remember this is about finding the tripwires.
As I mentioned previously, there are many great tools available these days, such as UserTesting, to help you recruit participants and run the research. If you're going to be testing frequently or have a very niche product, I recommend curating a custom panel with a tool like Amazon MTurk. To do that, you'll want some advanced screener questions or an additional scope document such as this one.
A good rule of thumb for the number of testers you need is five per device. It’s smart to recruit one or two extras, in case one of them doesn't pan out.
Step 4: Set the tasks
This is where you instruct your testers on what you’d like them to do. You’ll provide these instructions at the beginning of the test, to guide their order of operations.
You’ll also explain that you want them to ‘speak aloud’, explaining what they are doing and why they are doing it as they move through each step of the test.
In the case of the Ninja blender, you could set up your task instructions as follows. The first question is intended to get a scenario going in the user’s mind.
- Imagine you are looking for a new blender. What is important to you when deciding on a blender to purchase?
- Find blenders you might like to buy on Google.
- Navigate to ninjakitchen.com
- Learn about the blenders offered. How are they different from other products on the market? What are people saying about them?
- Find a blender that you can use with hot or cold food. Once you find one you like, add it to your cart.
- Add an accessory to your cart.
- Proceed to checkout, but end your task before purchasing the items.
Tips for task setting
Don’t be too prescriptive
These steps are intended to bring the user through the entire funnel and have them articulate and rationalize their experience along the way, to mimic a typical buyer journey. In general, we should avoid outright telling a user to do something, as this doesn't give us insight into what users would do naturally. A good tip for writing tasks is starting broad and then getting more specific.
For example, suppose you are looking to establish how users interact with an element of the homepage that is below the fold. In this case, we shouldn't ask users directly to "scroll down and use x element". Rather, we should keep things broad at first and observe the user behavior: does the user scroll down without a prompt? The next task could then be more specific and ask them to use the element. This way we can ascertain, for example, that although users were positive about the element, none of them would have used or seen it below the fold in the real world (cue a test idea to increase the prominence of this element).
Start from Google
No one just lands on a website out of nowhere, so to emulate a more realistic and natural user experience, we should begin all user tests with an exploratory, research-based task.
This has multiple benefits:
- It gathers insight into the acquisition strategy.
- It helps us better understand users' frame of mind and the wider user journey.
- It helps ease users into the research session before they arrive on your website.
The "find X" task
For eCommerce businesses, ask the user to find a product with a specific price range/material or other specifications, allowing you to observe their "product finding" process. This helps you observe the usability of the filters and site navigation.
For SaaS businesses, you can ask the user to compare different plans and choose one. This will help you understand what draws their attention and how they find (or don't find) the information relevant to the decision.
Step 5: Code your sessions
You will want to note down all comments from the user session, as well as their behavior, e.g. anything that seems to be broken, not working very well, or causing users hesitation, confusion, frustration, or difficulty.
While reviewing the sessions, look out for:
- Benefits: What did the user like about the site? What is working well?
- Opportunities: Suggestions from users that could improve the site e.g. the user asked for a savings calculator to compare their current electricity plan to yours.
- Issues: Any bugs, technical issues, difficulties that the user faced, questions that didn't get answered.
To help make sense of the sessions, highlight your notes or separate findings into different columns, sheets, or slides under the above headings (a simple coding sketch follows the list below). From the coded findings you can then summarise:
- The benefit/opportunity or issue
- Supporting quote, screenshot, or video from the sessions. This is particularly powerful if you need to get buy-in from others for the recommended solutions.
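If you prefer a structured record over sheets and slides, here is a minimal sketch of one possible coding schema. The field names, grouping, and example values are illustrative assumptions, not a standard:

```typescript
// A minimal sketch of one possible coding schema for session notes.
// Field names and example values are illustrative, not a standard.

type FindingType = "benefit" | "opportunity" | "issue";

interface Finding {
  type: FindingType;
  tester: string;    // anonymised tester ID
  task: number;      // which task the note relates to
  note: string;      // what was observed, in plain language
  evidence?: string; // quote, screenshot, or video timestamp for buy-in
}

const findings: Finding[] = [
  { type: "issue", tester: "T2", task: 5, note: "Could not find the hot/cold filter", evidence: "04:12" },
  { type: "benefit", tester: "T1", task: 6, note: "Praised the product comparison table" },
];

// Group findings under the three headings for the summary
const grouped: Record<FindingType, Finding[]> = { benefit: [], opportunity: [], issue: [] };
for (const f of findings) {
  grouped[f.type].push(f);
}

console.log(`Issues: ${grouped.issue.length}, Benefits: ${grouped.benefit.length}`);
```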
Summary
These two methods of securing Voice of Customer data can provide a treasure trove of insights into how your customers are using your website and where problems occur. Both methods are relatively easy to set up, and the payoff is huge when thought and action (such as forming experimentation hypotheses) follow from the research.
Consider where, when, and how best you can gather more Voice of Customer data for your business to help with your experimentation efforts. Here you can learn more about our conversion research methodology, ResearchXL, and the processes we use at Speero to discover what motivates your users.