Usability testing hints, tips and guidelines
Usability testing (a.k.a. user testing) is the undisputed daddy of usability techniques – it’s the Arnold Schwarzenegger of the usability world. On the face of it, it also seems incredibly simple and easy to do. You merely observe users as they use your product or prototype and then apply what you’ve learnt to improving the user experience. But like many simple ideas, whilst it might be relatively easy to do, it’s difficult to do well. There’s an awful lot to think about when it comes to usability testing, so to help you get to grips with this mighty technique I’ve distilled some of what I’ve learnt from the hundreds of usability tests I’ve run over the years into some useful hints, tips and guidelines.
Before the sessions
As they say, ‘fail to prepare, prepare to fail’ and this is certainly true when it comes to usability testing. Here are some of the things to think about before your usability testing sessions, from writing the test script to recruiting participants.
Recruiting participants

Usability testing 101: make sure that you recruit participants who reflect the users of your product (I can’t stress the importance of this enough). For example, if you have a product that is used by managers and you carry out the testing with their secretaries, you’re largely wasting your time. Try to recruit 3-5 users for each distinct user group; for example, if you have an ecommerce website you might recruit some existing and some new customers. I’ve found that about 8-10 users is generally about right for most usability tests (despite what Jakob Nielsen would have you believe), and don’t forget to recruit 1 or 2 spare participants in case of no-shows – you’ll always get them!
When recruiting participants it’s also important to make sure that they know exactly where they need to go, when (although don’t expect them to turn up exactly on time) and who to ask for. It’s a good idea to give them a map and directions, although be aware that they’ll still probably get lost and more than likely turn up late. Also, rather than calling it ‘usability testing’, which sounds a bit frightening, it’s better to tell participants that they will be taking part in some ‘customer feedback sessions’. Much less scary sounding for all involved.
Planning sessions
Usability testing 101 part 2: Carry out a dry run of your usability test. Is there time to cover the tasks you want to? Are the tasks clear? Will participants understand what they need to do? If you don’t carry out a dry run you’ll live to regret it because you’ll probably find out that you’ve tried to cram in too many tasks and that participants don’t understand what they need to do.
When it comes to scheduling sessions I’ve found that it’s best to keep them around the 1 hour mark. Any less than this and you’re not utilising your precious time with participants as much as you should be. Any more than this and participants can start to get grouchy. If you do need to run longer sessions then I’d recommend giving participants a break midway through, otherwise you’ll have some very surly participants on your hands. Also don’t forget to leave at least 30 minutes in-between sessions to allow for overruns and late participants, and to give you time to properly prepare. You don’t want to be rushing around in a mad panic because someone is delayed in traffic.
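To make the timings concrete, here’s a throwaway Python sketch that lays out a day of 1 hour sessions with 30 minute buffers (the start time, date and participant count are just placeholders):

```python
from datetime import datetime, timedelta

# Hypothetical day plan: 1-hour sessions with a 30-minute buffer in between
# to absorb overruns, late arrivals and preparation time.
start = datetime(2024, 5, 1, 9, 0)  # first session of the day (placeholder)
session = timedelta(hours=1)
gap = timedelta(minutes=30)

for i in range(5):  # 5 participants (placeholder)
    begins = start + i * (session + gap)
    print(f"Participant {i + 1}: {begins:%H:%M} - {begins + session:%H:%M}")
```

With these numbers the last session ends at 16:00, which also tells you how many participants you can realistically fit into a day.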
As I’ve already said, ‘fail to prepare, prepare to fail’ and it makes sense to think about what could go wrong during the sessions and what you can do to either prevent this from happening, or at least deal with it. For example, if you’re testing a prototype system you might want a back-up in case it’s not working, in the form of screen shots or perhaps a static prototype.
Writing a test script
When writing your test script concentrate on the key goals and tasks for your product. What must a user be able to do? What are the key user journeys? Also try to make tasks as realistic as possible. You want your tasks to reflect how users will actually use the product, not what you assume the key tasks will be, or what you think will test well.
Where possible try to use user-generated tasks (sometimes called interview-based tasks) instead of prescribed tasks. User-generated tasks are based on discussions with participants and will be personal to them – for example, asking them to find a product that they’d be interested in buying, or a suitable present for a loved one. Also consider using scenarios to add context to a task and to help participants understand what they need to do. For example, you might ask participants to imagine that they’ve bought a new car and need to get an insurance quote. It’s tempting to try to cram loads and loads of tasks into a session, but try to resist doing this because you’re unlikely to get through them all. It might however be a good idea to have some extra tasks up your sleeve, because participants will invariably complete tasks at different rates and you don’t want Speedy Gonzales finishing too early.
If you’ve got a complex task you want participants to undertake, think about breaking it up into separate smaller tasks so that participants are not overwhelmed. You’ll also want participants to work on only one task at a time, again so that you don’t overload them.
Room set-up
The room you use for the usability testing should reflect the sort of environment the product will be used in. For example, if you’re testing a games console you’ll want a lounge-type room; if you’re testing an office product you’ll want an office-type room. Also try to use the sort of computer set-up (screen resolution, browser, operating system and so on) that your users are likely to have. For example, testing with a massive 28” monitor isn’t the best of ideas, unless of course your users are likely to use monitors that size.
Whilst it’s nice to have a web cam for usability testing, because it means that you can record the participant’s face during tests, it’s by no means essential. I would however strongly recommend recording the on-screen activity together with the dialogue (so you’ll at least need a microphone), so that you can go back and replay parts of a session. If you can’t stretch to fancy usability testing software like Morae you can always use a free screen recording tool and a basic microphone.
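On the free tool front, one option is to drive ffmpeg from a small script to capture the screen and microphone together into one file per session. A minimal sketch, assuming a Linux/X11 machine with PulseAudio (the input devices and flags differ on Windows and macOS):

```python
import subprocess

# Record on-screen activity plus the microphone into a single file via ffmpeg.
# Assumes Linux/X11 with PulseAudio; adjust the capture devices for your set-up.
subprocess.run([
    "ffmpeg",
    "-f", "x11grab", "-framerate", "15", "-i", ":0.0",  # on-screen activity
    "-f", "pulse", "-i", "default",                     # microphone / dialogue
    "session_01.mkv",                                   # one file per session
])
```

Stop the recording with Ctrl+C (or `q`) between sessions and change the output filename for the next participant.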
Have the room set up so that you can see both the screen and the participant’s face, as you’ll want to be able to make eye contact and see their expressions. For a desktop computer, sitting to the side usually works best. Any observers within the room should be discreet and out of sight, and you should certainly have no more than 2 in the room at any one time (you don’t want to spook the poor participants). Ideally your observers will be in a different room altogether (i.e. watching via a video and audio feed), but of course this isn’t always possible. It’s also a good idea to have a test set-up checklist so that you can get everything right before a session – for example, that the microphone is working, that any audio and video feeds are working, and that there is a copy of the test script and consent form to hand.
Pre-session brief
Before the sessions begin you’ll want to brief any observers to let them know what to expect and how they should behave (even if they’re in a different room, as office walls are generally not that thick!). For example, you don’t want observers giggling in the background or rushing in midway through a session to tell participants where they’re going wrong! It’s also a good idea to leave observers with a copy of the test script so that they know what will be covered and what questions are likely to be asked.
During the sessions
Usability testing sessions can be stressful for all involved. Here are some of the things to think about to ensure that you get the absolute most out of those precious minutes with your participants.
Introducing the session

Whilst participants are waiting for the session to begin it can be a good idea to ask them to fill out their consent form and any pre-session questionnaires (e.g. background, level of experience, expectations etc.). This frees you up to get things ready for the session. When you are introducing the session it’s a good idea to have a checklist of things to go through, such as what participants can expect to be doing, how long the session will take and the fact that the session will be recorded. Try to resist the temptation to read directly from a script, because this can often sound very robotic and artificial; equally, trying to remember everything off the top of your head can easily lead to something important being forgotten – hence the checklist. Let participants know that they can ask questions at any time (and take a break at any time), but that you won’t be able to help them with the tasks. After all, your users aren’t going to have an assistant on hand when using your product, so why should the usability testing be any different?
Before you start the hard work it’s sometimes useful to have a warm-up task. This is to ease participants into the session and to help build a rapport. For example, you might ask them for their impressions of the homepage of a site.
Setting tasks
When asking participants to have a go at tasks I’ve found it’s generally best to verbally talk them through the task and then have the task details to hand if they need them. You can also check that they’ve understood the task by asking them to repeat back what you’ve just asked them to do. Be careful with how you word and articulate tasks, because you don’t want to inadvertently lead participants and accidentally give the game away. If there are inputs for a task, such as an offer code or a delivery address, make sure that they’re available (you don’t want the test to be one of memory) and in the right form. For example, if you’re testing the use of gift vouchers you might mock up some example gift vouchers for participants to use.
Before some tasks you might want to ask participants about their expectations. How hard do they expect a task to be? How long do they think it will take? Perceptions are obviously influenced by expectations so it’s important to find out what those expectations are. For example, a task that participants find hard but expect to find hard is probably not as serious as a task that participants find hard but expect to be easy.
Because participants will complete tasks at different rates it’s best to schedule the most important tasks first. This ensures that you at least get results for the key user goals and tasks. Also don’t be afraid to change tasks and to deviate from the script. The test script shouldn’t be set in stone, and you might find, for example, that a key task for a particular participant has been left out. You might also change the script part way through the usability testing sessions as you refine the tasks to cover and the questions to ask.
Asking questions
For each task it’s a good idea to make a list of the questions that you’ll want to ask, and to make a note of what you want to find out. This will help when it comes to pinpointing what to look out for and identifying points of discussion. You don’t of course have to stick to these religiously, but there might be questions that you’ll want to ask of all the participants.
Any questions that you do ask should generally be open questions, such as ‘What would you expect to happen?’, ‘Where would you go?’ and ‘What would you do?’. Mirror questions are often a good way to get participants to explain themselves, because they force participants to expand on their comments. You basically replay to participants what they initially said so that they can clarify it. For example, “You said you weren’t sure what this text means…”. Avoid overtly leading questions, such as those hinting at the path the participant should take. You obviously don’t want to make the tasks too difficult, but equally you don’t want to give too much away.
Finally try to avoid jumping in during a session and interrupting a participant. Even though the temptation is there to jump in and help, wait until it’s clear that someone will not complete a task or really needs help. If in doubt keep quiet.
Taking notes
When it comes to making notes, a pen and paper are king (at least in my humble opinion). I wouldn’t recommend using a computer because the typing sounds can be surprisingly disruptive, not to mention annoying, and unless you’re a very quick typist you’re unlikely to be able to capture as much information anyway. It’s a good idea to add time labels (e.g. 10:45) to quotes and incidents that you might want to return to, and to use notation to identify major points and issues within your notes. For example, you might use asterisks (i.e. ‘*’) or a different colour to signify something very interesting. This makes it easier to find them again when replaying the session.
I’ve found that it can be difficult to facilitate a test and to take notes at the same time, which is why I’d recommend using a facilitator and separate observer / note taker. Since running sessions can be quite tiring it’s also a good idea to take it in turns so that the facilitator gets a rest. If you don’t have a separate note taker then making notes directly on the test script is a good idea. This means that you don’t constantly have to switch between your notes and the script.
Wrapping up the session
At the end of the session it’s often a good idea to use a questionnaire to gather some quantitative feedback. For example, you might use the System Usability Scale (I’ve prepared an example System Usability Scale questionnaire you could use). Be mindful however that you’ll probably be working with a relatively small sample size, so the quantitative data you gather needs to be handled with care.
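For what it’s worth, scoring the SUS is mechanical: each odd-numbered item contributes its response minus 1, each even-numbered item contributes 5 minus its response, and the total (out of 40) is multiplied by 2.5 to give a score out of 100. A minimal Python sketch (the example responses are made up):

```python
def sus_score(responses):
    """Turn ten 1-5 SUS responses (items 1-10, in order) into a 0-100 score."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# One participant's (made-up) responses to items 1-10:
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```

Even then, with only 8-10 participants it’s better to eyeball the spread of individual scores than to lean on an average.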
Of course, you don’t always have to run a session to the end. Don’t be afraid to stop a session midway through. For example, you might find that a participant doesn’t have the level of experience you thought they did, or wouldn’t actually use the product at all.
Once you’ve wrapped up the session don’t forget to give the participant their reward, be it money, vouchers or even just a cuddly toy. You’ll also want a record that participants have received their reward so don’t forget to get them to sign on the dotted line.
After the sessions
Phew. The sessions are all over and you can put your feet up. Well hold your horses because your work isn’t quite finished yet. Don’t forget that usability testing is just a means to an end and that end is finding out how the product is performing, and how it can be improved. Here are some of the things to think about when it comes to analysing and reporting the results of usability testing.
Analysing the results

Speed is of the essence when it comes to analysing results. Go through your notes as soon as possible, while the sessions are still fresh in your mind. You might even find it useful to go through your notes at the end of a day of testing to mark out the most significant issues and findings. If there have been a number of observers for the sessions then discuss the findings as a group, because it’s likely that other people will have taken away things that you might have missed, and vice versa.
Whether you’re analysing results in a group or on your own, affinity diagramming is a good way to bring some order to the chaos. I’ve found that using different coloured cards or post-it notes can work well. For example, pink for usability issues, blue for quotes and yellow for interesting findings.
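If your notes happen to be digital, you can get a crude approximation of the same grouping in a few lines of code. A minimal sketch (the tags and notes are made up for illustration, with tags standing in for the card colours):

```python
from collections import defaultdict

# Tag each raw note, then group by tag - a crude digital affinity diagram.
# Tags mirror the card colours: issue (pink), quote (blue), finding (yellow).
notes = [
    ("issue", "Couldn't find the delivery costs"),
    ("quote", "'I'd expect the basket to be top right'"),
    ("issue", "Missed the continue button below the fold"),
    ("finding", "Everyone used search rather than the menus"),
]

groups = defaultdict(list)
for tag, note in notes:
    groups[tag].append(note)

for tag, items in sorted(groups.items()):
    print(f"{tag.upper()} ({len(items)})")
    for item in items:
        print("  -", item)
```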
Reporting results
When it comes to reporting the results of usability testing I’m very much of the ‘just enough’ school of thought. There’s no point in documenting any more than you have to, whether that means a short report, a presentation, or just a discussion within the team. Not only is it purgatory to have to write a massive usability testing report (believe me, I’ve been there), but no one will ever be bothered to read it anyway!
When reporting usability testing results it’s best to concentrate on the most significant and serious findings, because not everyone is going to be interested in all the nitty gritty detail. For this reason it’s a good idea to always include a summary of the main points. Also think visual: you ideally want lots of screen shots and perhaps even video clips to showcase issues and findings. If you are making a highlights reel (and I wouldn’t recommend it unless you feel it’s very important, as it’s a lot of work), wait until you’ve identified the main findings and issues, because you’ll probably want a clip for each of these.
Try to ensure that you get to discuss the findings in person, so that you can answer any questions and emphasise important points. After all, it’s much easier to dismiss an important finding if there isn’t someone there to explain exactly why it’s so significant! Also remember to be nice. Consider not just what you’re saying but also how you’re saying it. If you’re reporting usability testing results back to a project team you don’t want to alienate them by continually slamming their product.
More about usability testing
Usability testing is a big subject and there are lots of really good books and articles out there. To get you started here are some other usability testing articles worth checking out:
- 8 guidelines for usability testing
- 7 common usability testing mistakes
- UsabilityNet guide to usability testing
And some good books to read:
- Rocket Surgery Made Easy by Steve Krug
- Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests by Jeffrey Rubin and Dana Chisnell
- A Practical Guide to Usability Testing by Joseph Dumas and Janice Redish
- Usability Engineering by Jakob Nielsen
Enjoy and ‘Hasta la vista, Baby….’