
Author: Martin P.
Title: Content Marketer
Learn. Test. Grow. Join the experimentation revolution.
EP 11: Blueprint of the Week: Heuristic Scoring
Two weeks ago, I talked about how to do a UX Heuristic Review of your site/app. This week, I’ll discuss how you can score the tasks and insights gleaned from it.
As in the pic above, UX heuristic sessions gather top-level insights from a broad range of experts. But we know we can’t go really deep in one hour.
This is where Heuristic Scoring comes in. This free Template Scoring Spreadsheet from Speero includes all the different guidelines and instructions you need for scoring.
The scoring instructions:
Scoring Key:
0 = No, there is no evidence that the website meets this guideline
1 = Yes, there is some evidence that the website meets the guideline
2 = Yes, there is strong evidence that the website meets this guideline / exceeds expectations
Feel free to add notes/explanation/links to screenshots/URLs etc. to support discussion and reporting.
Once you download and fill in the spreadsheet, send it to at least 3 members of the project team. Each team member should review the key pages of the site against the guidelines, scoring accordingly and adding notes on the rationale for their scores where relevant.
Once complete, the PM should combine the scores from the individual spreadsheets into one and then calculate the averages for each guideline and the 5 UX principles.
Guidelines that have a total score of 0 are key opportunities that need to be worked on.
Some discussion should take place within the project team if scores are significantly different, e.g. one person scores a guideline 0 and someone else scores the same guideline with a 2.
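If you'd rather combine the scores with a script than by hand, here's a minimal Python sketch of the PM's aggregation step. The reviewer names and guideline labels are hypothetical placeholders, not from Speero's template, and the flag thresholds just mirror the rules above: all zeros = key opportunity, a 0 and a 2 on the same guideline = worth a discussion.

```python
from statistics import mean

def summarize(scores):
    """Average each guideline's 0-2 scores and flag items per the rules above."""
    summary = {}
    for guideline, by_reviewer in scores.items():
        vals = list(by_reviewer.values())
        flags = []
        if sum(vals) == 0:
            flags.append("KEY OPPORTUNITY")   # nobody found any evidence
        if max(vals) - min(vals) == 2:
            flags.append("DISCUSS")           # one reviewer gave 0, another gave 2
        summary[guideline] = (mean(vals), flags)
    return summary

# Hypothetical reviewers and guidelines, purely for illustration.
scores = {
    "Clear value proposition": {"Ana": 2, "Ben": 1, "Cal": 2},
    "Visible system status":   {"Ana": 0, "Ben": 0, "Cal": 0},
    "Consistent navigation":   {"Ana": 0, "Ben": 2, "Cal": 1},
}

for guideline, (avg, flags) in summarize(scores).items():
    note = f"  [{', '.join(flags)}]" if flags else ""
    print(f"{guideline}: avg {avg:.2f}{note}")
```

The same rollup works for the five UX principles: group the guidelines by principle and average the averages.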
As well as this helping to identify more specific insights and opportunities, this activity gives us a top-level view of where the main opportunities lie in terms of the 5 main UX principles.
Talk of the Week:
As Global Experimentation Lead at Specsavers, Melanie is constantly balancing the need to experiment with the need to perform.
With 11 years of experimentation experience under her belt, she's in charge of coordinating experimentation work across 10 countries, each with a unique local market within a business that's traditionally prized brick-and-mortar operations.
In this episode of Testing Insights, she tackles questions like:
1. How should experimentation programs function across channel owners?
2. How does Specsavers distribute its experimentation workflow?
3. How can experiment learnings be communicated across teams?
4. What is the biggest obstacle to positive experimentation ROI?
Reads of the Week:
Why statistical significance is killing your tests: Even though statistical significance is one of the “holy benchmarks” for deciding when to stop a test or roll out a new feature, waiting to collect enough data for statistical significance can damage your testing velocity. Here’s how you can double your experimentation rate and output. Link.
Laws of UX: a top collection of UX best practices that designers should consider when building UIs. Explore all the “UX laws,” from the Doherty Threshold and Fitts’s Law to the Von Restorff and Zeigarnik effects. Link.
How Data Maturity Drives Business Success: a new IDC white paper that reveals a direct correlation between business success and data maturity. Regardless of industry, size, or location, data-mature companies enjoy nearly 2.5X better business outcomes than those that aren’t. Link.