User testing is an effective form of research that can provide valuable insights into how real people navigate to and within your website.
While no research method will ever provide an exact replica of real user behavior, it is possible to set up your user testing sessions in a way that uncovers more accurate insights.
On The Research Lab, Speero Experimentation Strategist Emma Travis shared her top three recommendations for setting up more effective user testing sessions.
You can watch the full guide here.
Tip 1: Use pre-session questions
“My first tip is to include pre-session questions in your research script. Whether you’re running remote or moderated sessions, it’s important to ensure the participant is open and willing to provide feedback. Although observing behavior is as important as verbal feedback, the magic really happens when the two combine. Pre-session questions help the participants ease into the session. They also help the participant understand that we’re interested in what they have to say, and finally, they help prime the participant for the tasks,” said Emma.
For example, if the research you’re running is for a B2B business, ask about:
- The participant’s company
- The participant’s role
- The participant’s daily responsibilities
If the research you’re running is for a B2C business, find out about:
- The participant’s typical habits
- The participant’s main purchasing considerations
When you begin the session with these questions, the relevant information will be front and center in your participant’s mind as they move through the remaining tasks. This in turn increases the chances that they complete the tasks realistically, keeping their genuine needs and considerations in mind.
Tip 2: Start broad, then get specific
“My next tip relates to how to write the tasks. My advice is to start broad and then get more specific. This is a good way to avoid all of the tasks being too prescriptive, while also making sure you meet the specific research goals. Starting with a broad task, giving the user the chance to explore the website as they would usually (albeit with a goal in mind), will allow you to observe natural user behavior. If you have specific areas of the website or pieces of functionality you’re interested in, you can firstly understand if the participant found or interacted with this area, page, or functionality, which is interesting in itself. Later tasks should then be tailored to the specific areas, pages, and functionality you’re interested in,” said Emma.
For instance, imagine that you’re specifically interested in the filter functionality on your e-commerce website. Your first task should be something like: “You’ve arrived at this website looking for some new shirts to wear for work. Use the website to find something suitable for you.” This gives the participant a clear goal but leaves them free to decide how to complete the task.
While completing the above task, users may or may not use:
- The search box functionality
- The main navigation menu
- The filters
The next task you assign can get more specific, such as: “You’re looking for a blue check shirt to wear for work.” You want to avoid outright telling the user what to do, but giving a more specific goal increases the chance that they will use the filters.
If you’re running moderated sessions, you can then ask follow-up questions after observing the participant’s actions to find out why they did or did not use certain tools, such as the filters. Maybe they didn’t notice them, or maybe they didn’t find them useful.
“Something else I like to include is what I call an ‘acquisition’ task. No one just arrives on a website out of nowhere. Even users who come to the site directly had some trigger to get them there: maybe a friend or family member told them about your brand, or perhaps they saw your advert on the side of a bus. Use data from your analytics to understand how users are arriving at your website and use this to inform your first task,” said Emma.
For example, if the majority of your users arrive at your site from Google, set a task that starts from a Google search for the product or service you offer. It can be illuminating to observe whether participants click on your ads, and whether they end up on your website or a competitor’s site, assuming the task doesn’t mention your brand name.
Tip 3: Consider your words carefully
“My final tip in writing an effective script for user testing sessions is to be super careful with the phrasing you use in your tasks, scenarios, and questions. The term ‘user testing’ is problematic in some ways in itself, as it suggests that there’s a right or wrong, a pass or a fail. If a user can’t complete a task on your website, that’s on you. It’s not necessarily the user’s fault. It’s pretty rare that a research participant will ever hear the term ‘user testing,’ but we need to apply this same logic to the script,” Emma said.
Even if you are using “dummy data” in the test so that users don’t fill out real forms or complete real checkouts, you want to avoid words such as the following:
- Pretend
- Fake
- Test
Asking participants to enter “test@test.com” when checking out, for example, may subconsciously influence them and change their subsequent behavior and feedback as the session continues. This in turn reduces the genuine insight you will uncover about the usability of your form or checkout process.
Rules around collecting personally identifiable information (PII) may prohibit you from using real user data in such a scenario, but you can still avoid the word “test” and bridge the gap by giving each participant a different set of made-up information, if desired.
Because no research method will ever provide an exact replica of real behavior, getting as close to the truth as possible will ultimately rely on using a variety of research methods and drawing on quantitative data.
When your user testing is organized as effectively as possible, you will be better positioned to trust your insights and achieve your research goals.