A guide to carrying out usability reviews
Think that you need to be a usability expert to carry out a usability review? Well, I won’t deny that it helps (spoken like a true expert!) but since user experience certainly isn’t rocket science, anyone can have a good stab at carrying out a usability review and learn a great deal in the process. In this article I walk through a method for examining the usability of an interactive system (website, application, mobile app etc.) using a combination of a scenario-based and a heuristic-based (i.e. best practice guidelines) review. I also discuss when usability reviews are most useful and outline some of the pros and cons of the technique.
What are usability reviews?
Usability reviews are a structured means of examining the usability of an interactive system by evaluating it against a set of recognised usability best practice principles. Reviews are usually carried out by usability experts, but don’t let that put you off carrying out your own, because with a bit of know-how and a good set of guidelines anyone can have a go.
Broadly speaking there are two different types of usability review – scenario-based reviews and heuristic-based reviews. Scenario-based reviews evaluate a system against likely user scenarios, such as buying a product on an e-commerce website. Heuristic reviews, on the other hand, evaluate an interface against a set of usability heuristics (i.e. best practice), such as links being clear, descriptive and well labelled. The two are by no means mutually exclusive and are often most effective when used together (as you will see).
Usability reviews – the good news
Usability reviews have a lot going for them as a usability technique, including:
- Usability reviews are quick and cheap to carry out. A comprehensive usability review can often be carried out in just one or two days.
- Usability reviews can be carried out by relative novices (provided that they have received at least some training in the technique and have a good set of guidelines to follow).
- Usability reviews can have a wide scope. Whereas usability testing generally evaluates a few specific tasks, usability reviews can quickly cover lots of different functions, usage scenarios and usability areas for a system.
- Usability reviews are great for competitive benchmarking. Different systems can be easily compared against the same usability criteria.
Usability reviews – the bad news
Alas it’s not all good news. Usability reviews also have a number of shortcomings that you should keep in mind:
- Usability reviews don’t actually tell you how usable a system is, just how usable it should be. The only way to really test the usability of a system is to do just that – test it, using usability testing and by examining real-world usage.
- Important usability issues can be missed, and suspected usability issues might not be issues at all, because usability reviews are by their nature educated guesses.
- Usability reviews can be inconsistent. Numerous studies have shown that multiple evaluators evaluating the same system often identify markedly different sets of usability issues (known as the evaluator effect).
- Usability reviews are subjective and because there’s no empirical evidence it can be difficult to argue the case for fixing suspected usability issues.
- Usability reviews are dependent on the expertise of the evaluator. Although usability reviews can be carried out by relative novices, if adequate training and guidance is not provided then results can be inconsistent and significant usability issues can be missed.
How to carry out a usability review
The usability review method that I’m going to walk through involves carrying out a scenario-based review in the form of a cognitive walkthrough, and then a heuristic-based review using a usability review scorecard that I’ve put together (based on usability best practice principles). I’ve found that using the two techniques together works really well because it means that lots of bases are covered and ensures that a system is evaluated against likely usage. Carrying out the cognitive walkthrough first also helps build up a good understanding of the system prior to the heuristic-based review. Want to know more? Here’s how to carry out a comprehensive usability review.
1. Define your usability review scenarios
A cognitive walkthrough provides a means of evaluating the usability of a system by testing how well it supports a common user task in the form of a scenario. You basically take a scenario, such as a punter placing a bet on a betting website, and walk through the steps that this user might take to achieve their goal. To do this you should first define some user scenarios to walk through. These should cover common and important user tasks that are crucial to the success of the system. For example, if you were evaluating a mobile phone you might look at adding a new contact, sending a text message and making a phone call as just some of the user scenarios to review. For each scenario you should answer the following questions:
- Who is using the system? Has he or she used it before? You don’t need to build a rich persona for your user, just enough information to start making educated guesses about their behaviour.
- What is he or she trying to do? What task is he or she attempting to undertake? Is this the first time this person has attempted this task?
- Why is he or she using the system? What is their goal?
- Where is he or she using the system? If a website is being reviewed, which browser is this person likely to use?
The number of scenarios you review will depend on how extensive the usability review needs to be, and how wide the scope of the system is. For most reviews you’ll find that you will probably only need to cover 2–5 scenarios. It’s also a good idea to cover multiple scenarios with the same persona wherever possible.
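If it helps to keep the who/what/why/where answers consistent across scenarios, they can be captured as simple structured records. Here’s a minimal sketch in Python; the field names and the example scenario are my own invention, not part of any official template:

```python
from dataclasses import dataclass


@dataclass
class Scenario:
    """One usability review scenario: who, what, why and where."""
    who: str    # the persona, including whether they've used the system before
    what: str   # the task being attempted, and whether it's a first attempt
    why: str    # the user's underlying goal
    where: str  # the context of use, e.g. browser, device or location


# Example scenario for a mobile phone review (illustrative only)
add_contact = Scenario(
    who="Sam, a first-time user of this handset",
    what="Add a new contact from a recent call (first attempt)",
    why="Save a colleague's number so it can be found again later",
    where="On the move, using the handset's built-in contacts app",
)

print(add_contact.what)
```

Keeping each scenario in the same four-field shape makes it easy to spot when a scenario is underspecified before the walkthrough begins.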
2. Walkthrough each scenario
Having defined each scenario you’ll now need to walk through the steps that your persona is likely to take to achieve their goal. Of course it’s impossible to know exactly how someone will undertake a task, so this is very much a best guess based on asking the following questions at each step:
- Will the user know what to do?
Is there a clear route for the user to take? Is it obvious what the user needs to do at this step?
- Will the user see how to do it?
Is the call to action obvious? Is it clear how the user completes this step?
- Will the user know whether their action was correct?
What sort of feedback is provided? Is it clear whether the user is on the correct path or not?
You should continue the walkthrough for each scenario until you think the user is likely to have achieved their goal, or conversely is likely to have given up. Exactly how you capture each walkthrough is of course up to you. In the past I’ve found that a screenshot of each page, annotated with the questions above (Will the user know what to do? etc.) and their answers for each step, makes a good template.
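One way to log a walkthrough, if you prefer a structured record over annotated screenshots, is a simple table of steps with yes/no answers to the three questions. This is only an illustrative sketch – the function, field names and example betting-site steps are assumptions of mine, not part of the method itself:

```python
# Walkthrough log: one record per step, answering the three questions above.
steps = []


def record_step(description, knows_what_to_do, sees_how, gets_feedback, notes=""):
    """Append one walkthrough step with yes/no answers to the three questions."""
    steps.append({
        "step": description,
        "will_user_know_what_to_do": knows_what_to_do,
        "will_user_see_how_to_do_it": sees_how,
        "will_user_know_action_was_correct": gets_feedback,
        "notes": notes,
    })


# Walking through "place a bet" on a hypothetical betting website
record_step("Find the event on the homepage", True, True, True)
record_step("Add a selection to the bet slip", True, False, True,
            notes="Odds are clickable but nothing indicates this")

# Any step with a 'no' answer is a suspected usability issue worth noting
issues = [s for s in steps if not (s["will_user_know_what_to_do"]
                                   and s["will_user_see_how_to_do_it"]
                                   and s["will_user_know_action_was_correct"])]
print(f"{len(issues)} suspected issue(s) found")
```

The 'no' answers then feed naturally into the list of suspected usability issues for the review write-up.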
3. Fill out the usability review scorecard
Having carried out a walkthrough for each scenario you should now be in a good position to carry out a heuristic review. Rather than using Nielsen’s 10 usability heuristics I’ve found it easier to use more explicit usability heuristics based on usability best practice principles, such as making sure that error messages are descriptive and written in plain English. Nielsen’s usability heuristics are certainly useful and are something to keep in mind when carrying out a usability review, but I find that they can be somewhat subjective and that less experienced reviewers can struggle to directly apply them. More explicit usability heuristics are more transparent, making it easier to see the link between a system and its judged usability, and are easier for less experienced reviewers to follow.
To carry out the heuristic part of the usability review simply download the Excel template below and enter a score (very poor, poor, moderate, good etc.) and comments for each of the 45 best practice usability principles. The principles cover:
- Features and functionality
- Homepage / starting page
- Content & text
- Performance (e.g. response rate and absence of crashing)
The scorecard will deliver a final usability score (out of 100), allowing different systems to be compared against the same set of usability criteria. This is especially useful for competitive benchmarking, but please bear in mind that a high score doesn’t necessarily mean that something is usable, merely that it follows best practice principles (if that makes sense). Heuristics are weighted to ensure that more important points, such as ‘Features and functionality meet common user goals and objectives’, have a greater effect on the overall score than less significant ones. If you’re somewhat old school and prefer pencil and paper to typing comments directly in, you can also download a PDF template for doing just this.
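To make the weighting idea concrete, here is a rough sketch of how a weighted scorecard can arrive at a score out of 100. The heuristics, weights, rating scale values and ratings below are invented for illustration – the actual template’s 45 principles and its weightings will differ:

```python
# Illustrative weighted scorecard: each heuristic gets a rating and a weight,
# and the final score is the achieved total as a percentage of the maximum.
RATING_VALUES = {"very poor": 0, "poor": 1, "moderate": 2, "good": 3, "very good": 4}

# (heuristic, weight) pairs; a higher weight means a more important principle
heuristics = [
    ("Features and functionality meet common user goals and objectives", 3),
    ("Homepage / starting page clearly communicates the system's purpose", 2),
    ("Error messages are descriptive and written in plain English", 2),
    ("Response times are acceptable and the system does not crash", 1),
]

# The reviewer's ratings, one per heuristic, in the same order
ratings = ["good", "moderate", "poor", "very good"]

max_rating = max(RATING_VALUES.values())
achieved = sum(w * RATING_VALUES[r] for (_, w), r in zip(heuristics, ratings))
possible = sum(w * max_rating for _, w in heuristics)
score = round(100 * achieved / possible)
print(f"Usability score: {score}/100")  # prints "Usability score: 59/100"
```

Because the weights sit in the denominator as well as the numerator, adding or removing heuristics doesn’t distort the out-of-100 scale, which is what makes scores comparable across systems.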
When are usability reviews most useful?
Usability reviews are perhaps most useful for a first-pass evaluation of a design (all the way from conceptual to high fidelity), and for evaluating a pre-existing design to investigate potential areas for improvement. Also, don’t think that it’s a case of either carrying out a usability review or usability testing, because, like bananas and custard, or gin and tonic, the two are a great combination. A usability review will give you an idea of some of the usability issues that might exist, and usability testing will allow you to validate these assumptions and to explore any areas of concern in more detail.
Usability reviews are also incredibly useful for carrying out competitive benchmarking because they allow you to easily compare the usability of a set of systems against the same criteria. For example, you might work out the usability score for a raft of competitors to see how they compare to your system.
More about usability reviews
- Expert usability review vs. usability testing (Webcredible)
- The Evaluator Effect: A Chilling Fact about Usability Evaluation Methods (International Journal of Human-Computer Interaction)