12 tips for better data informed design
10 minute read
In my last post (Data informed design, not data-driven design) I outlined why you should be following data informed design, rather than data-driven design: why it should be designers and the design team driving the design process, not the data. Now that’s all well and good, I hear you say, but how precisely do I carry out data informed design? Throw me a frickin’ bone here… Well, worry not. I can’t cover everything you need to know about data informed design in one article (not without it being the longest blog article known to man), but I can certainly give you some tips and advice for better data informed design.
1. Focus on key design decisions
Gathering data to inform each and every design decision, no matter how small, is likely to be a lengthy, not to mention painful, process. This is why it’s best to focus your efforts on key design decisions. Certainly, in the spirit of the aggregation of marginal gains, you can use the power of data to optimise a design within an inch of its life, but you’ll get the biggest bang for your data buck by focusing on the big decisions: for example, evaluating new concepts, new features and perhaps alternative designs. Try to focus on getting data for the make-or-break decisions, because the details won’t matter if the overall concept doesn’t work.
2. Define your hypothesis
Those of you who were concentrating during your science lessons at school should be familiar with the concept of a hypothesis: making an educated guess (note that a guess should be ‘educated’, not just a wild stab in the dark) and then gathering data to investigate whether that educated guess is correct or not. A hypothesis is often a good starting point for data informed design because it helps you to think about the sort of results that you want to see, and the sort of data that you’ll need to capture to test it. A hypothesis also helps you to think about what success should look like. In fact, you can sometimes work backwards from your picture of success and think about what sort of design changes you might make to drive your success factors.
To get you started, there’s a very useful hypothesis statement in Jeff Gothelf’s excellent Lean UX book (I highly recommend reading it) that provides some thought starters for a data informed design hypothesis.
We believe that [doing this / building this feature / creating this experience]
For [these people/personas]
Will achieve [this outcome]
We will know this to be true when we see [this feedback / quantitative measure / qualitative insight]
Alternatively, you could use the revised hypothesis kit from Craig Sullivan and Michael Aagaard.
1. Because we saw (qualitative & quantitative data)
2. We believe that (change) for (population) will cause (outcome)
3. We expect to see (data metric change) within (business cycles)
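To make either template concrete, here’s a purely illustrative, filled-in example. The feature, population and numbers below are all invented, and the Python dataclass is just one convenient way of keeping hypotheses consistent and shareable; a sticky note works just as well.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One hypothesis, following the revised hypothesis kit above."""
    evidence: str    # 1. Because we saw (qualitative & quantitative data)
    change: str      # 2. We believe that (change)
    population: str  #    for (population)
    outcome: str     #    will cause (outcome)
    metric: str      # 3. We expect to see (data metric change)
    timeframe: str   #    within (business cycles)

# Purely illustrative example – every figure here is invented.
checkout_hypothesis = Hypothesis(
    evidence="40% of mobile users abandon checkout at the address form",
    change="replacing the address form with a postcode lookup",
    population="mobile visitors",
    outcome="fewer abandoned checkouts",
    metric="a 5% relative increase in checkout conversion rate",
    timeframe="two weekly business cycles",
)
```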

3. Choose your KPIs wisely
For those who aren’t familiar with the acronym, KPI stands for key performance indicator. As the name suggests, KPIs tell you how well something is performing. For a digital product, such as an app or website, KPIs might include the number of users, the average customer satisfaction rating, the conversion rate for a desired action (i.e. the percentage of users that carry out that desired action) and so on. KPIs to a manager are like catnip to a cat – they’ll always crave more, more, MORE…
You should define your KPIs, together with the sort of change that you hope to see (an increase or decrease, and by how much), as part of your hypothesis. However, be sure to choose your KPIs wisely, because some KPIs can be deceptive, and KPIs taken in isolation can be very deceptive. For example, if you have an advertising-driven revenue stream for your website you will no doubt want to maximise the amount of time your visitors spend on your site (more eyeballs = more advertising revenue). You will therefore want to track KPIs such as average session duration, which gives you an indication of how long visitors are spending on the site. You might test a new navigation system in the hope that visitors will discover more of the content and therefore spend more time on the site. Average time on the site goes up, so you think that the new navigation is doing its job. However, an increase in time on the site could also be caused by users struggling with the new navigation system. They could simply be spending more time trying to find the content they’re looking for. This is why it’s so important to think carefully about the KPIs that you will be using and to gather a breadth of KPIs, and of data in general (more on that later).
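To see why a breadth of KPIs matters, here’s a minimal sketch in Python. The session records are invented, but they illustrate the navigation example above: average session duration looks healthy on its own, yet pairing it with a complementary KPI (here, whether visitors actually reached the content they were after) tells a rather different story.

```python
# Invented session records: (duration in seconds, did the visitor reach content?)
sessions = [
    (310, True), (95, True), (420, False), (610, False), (180, True),
]

avg_duration = sum(duration for duration, _ in sessions) / len(sessions)
content_rate = sum(1 for _, reached in sessions if reached) / len(sessions)

print(f"Average session duration: {avg_duration:.0f}s")  # 323s – looks healthy...
print(f"Reached content: {content_rate:.0%}")            # 60% – ...but 2 in 5 never found it
```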
4. Benchmark, benchmark, benchmark
How do you know if a design change has had a positive or negative effect? How do you know if design A performs better than design B, or even design C? Simple – benchmark them. When you’re gathering data to help inform design you should always be looking for a benchmark to compare that data against. A benchmark gives you a reference point, a point of comparison, a stake in the ground. Without a benchmark, how do you know if the performance is great, rather than merely good, or, worse, mediocre?
You might benchmark against a current design, an alternative design or even a competitor. For example, if you’re running a classic A/B test you will randomly assign users to design A or design B and run both designs in parallel (just remember to keep returning users in the design they saw previously). This allows you to compare like for like data between the two designs. How do the KPIs compare? What about the user comments and feedback?
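As an aside, one common way of keeping returning users in the same design (and it’s only one of several approaches) is to bucket users via a deterministic hash of a stable user ID, as sketched below in Python. The assignment looks random across users but is repeatable for any individual user.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user so they always see the same design.

    Hashing a stable user ID together with the experiment name (so that
    different tests bucket independently) gives a random-looking but
    repeatable assignment.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# A returning user gets the same design on every visit:
assert assign_variant("user-123", "nav-redesign") == assign_variant("user-123", "nav-redesign")
```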
Comparing like for like data is very important because otherwise it’s an unfair comparison. Certainly, if you’re benchmarking against historical data, make sure that the time frames are comparable. For example, the sort of usage that a product or service gets on a Monday might be quite different from a Saturday, and you don’t want those differences unfairly skewing your benchmarking results.

5. Make sure you can capture the data you need
Data, data everywhere, but none of it is what you need! There’s nothing more frustrating than testing out a new design, or a design change, and then discovering that the data you need is not being captured. Or worse still, that the data you thought was being captured isn’t actually being captured at all. Save for some Superman-style reversal of time, you can’t retrospectively collect usage data, so make sure that you’re definitely capturing what you need before something goes live (I’m still confused as to how flying backwards around the world, as per the first Superman film, should reverse time rather than simply destabilising the planet and unwittingly killing everyone and everything on it).
It’s a good idea to outline the data that you plan to capture, and how you intend to do this, upfront, so that you can have the data capture mechanisms that you need in place. In addition to the usual analytics tools you can also use remote user testing tools such as usertesting.com, Userzoom, Loop11 and Validately to capture rich usage data, including session replays, task completion rates and user satisfaction ratings. You can even use these tools to collect data for competitor products and services, so now there’s no excuse when the boss asks, “how do we compare to…?”.
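One lightweight way of making sure those capture mechanisms really are in place is to write the tracking plan down explicitly and validate events against it before launch. The event and property names below are invented for illustration; the point is simply to agree up front what will be captured.

```python
# An illustrative tracking plan, agreed before launch. All names are invented.
TRACKING_PLAN = {
    "search_performed":   ["query", "results_count"],
    "product_viewed":     ["product_id", "source"],
    "checkout_started":   ["basket_value", "item_count"],
    "checkout_completed": ["order_id", "basket_value"],
}

def validate_event(name: str, properties: dict) -> None:
    """Fail loudly (in testing or staging) if an event doesn't match the plan."""
    if name not in TRACKING_PLAN:
        raise ValueError(f"Unplanned event: {name}")
    missing = set(TRACKING_PLAN[name]) - set(properties)
    if missing:
        raise ValueError(f"{name} is missing properties: {missing}")

validate_event("product_viewed", {"product_id": "sku-42", "source": "search"})
```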
6. Get data as early as possible
“Fail fast, fail often”. A bit defeatist, but that’s what we’re told, isn’t it? Try something, see if it works and then iterate. But to see if something works you should be collecting data, and that data has to come from real users, using a real live product or service, doesn’t it? Anything else is a bit made up. A bit false. A bit phoney. Well, no, actually. The earlier you can get data the better, and it’s very hard to get early data if you have to wait for something to be fully implemented first.
This is why getting data from prototypes is so useful. You can prototype something, get some early indicative data through user testing (remote or face-to-face) and then determine where to go from there. Perhaps the design needs a few changes here and there but overall looks promising, or perhaps it should be culled immediately to put it out of its misery. With rapid prototyping tools such as Axure, InVision and Proto.io it’s now ridiculously easy to quickly mock something up and get some data, and as you’ll see from my next tip, the sort of qualitative data you get from user testing is just as important as BIG quantitative data.
7. Capture a breadth of data
Data is the fuel that launches and then propels the data informed design rocket, and as NASA will testify, it takes a lot, and I mean a lot, of fuel to launch a rocket (a whopping 1,733 tonnes of the stuff, apparently). In the past I’ve come across the view that only quantitative data sources, such as analytics, surveys and A/B tests, can provide that fuel; that only quantitative data can give you the sort of BIG numbers that are required. However, whilst such data sources are great, don’t limit yourself to purely quantitative data. You want a breadth of data to get the fullest picture of what is going on and why.
Qualitative data sources, such as user testing, focus groups, diary studies and user interviews are an incredibly important source of data. Qualitative data is a valuable supplement to quantitative data – it’s the afterburner that really gets your design rocket going and helps to steer it in the right direction. Quantitative data will tell you the ‘what’, but it usually takes qualitative data to tell you the ‘why’.

8. Stagger exposure to stop the s**t hitting the fan
Avoid the potential disaster of a poorly performing redesign (Marks & Spencer in the UK recently managed an epic redesign fail for their website) by staggering users’ exposure to a new design. Rather than presenting a new design to all users, or even half of them, it’s a good idea to start small and then ramp up exposure once you get an idea of how the design is performing. For example, you might go from 5% to 10%, then to 25%, 50% and eventually 100% (assuming everything is OK). Of course, the percentages you can get away with whilst still collecting a decent set of data will depend on the number of users you have, but as a general rule you want to start small and only go up when you’re happy with how things are progressing. Of course you’ll have thoroughly evaluated prototypes with users before anything goes live, and you’ll also be collecting qualitative data along the way, so there’s really nothing to worry about, is there?
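Here’s a minimal sketch of one way to implement such a ramp, reusing the deterministic hashing idea from tip 4 so that ramping up only ever adds users; nobody who has already seen the new design gets flipped back to the old one.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percentage: float) -> bool:
    """Expose roughly `percentage`% of users to a new design, deterministically.

    Because each user's bucket is stable, going 5% -> 10% -> 25% -> 50% -> 100%
    only adds users: everyone already exposed stays exposed.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000  # 0..9999, i.e. 0.01% granularity
    return bucket < percentage * 100

# The ramp from the text: 5% -> 10% -> 25% -> 50% -> 100%
for pct in (5, 10, 25, 50, 100):
    exposed = sum(in_rollout(f"user-{i}", "new-nav", pct) for i in range(10_000))
    print(f"{pct:>3}% target -> {exposed / 100:.1f}% actually exposed")
```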
9. Don’t be too quick to kill designs
If a design isn’t performing as well as you’d hoped there is often pressure to kill it and move on to the next thing. But hold your horses, because like a fine wine or a Radiohead album, sometimes you just need to give it a little more time. With any new design you should consider the time, and indeed effort, it takes for users to adjust. People don’t like change. Change is scary. Change is unsettling. You only have to remember the collective uproar that occurred when Microsoft had the audacity to remove the Start menu from Windows 8 (good to see it back in Windows 10) to see that users don’t like change. It’s therefore important to give new designs a little time to bed in. It’s only once users have had a chance to get used to a new design that its true performance can be gauged.
10. Don’t jump to conclusions
Data can be a tricky customer, a very tricky customer. Analysing data can sometimes feel a bit like a whodunnit. You might know what has happened (perhaps a design has been murdered by the users), but you don’t necessarily know why. There might be tantalising clues in the data, but finding them, and putting all the pieces together, will take the sort of detective work that the great Sherlock Holmes would be proud of.
Try to ensure that you have a good breadth and depth of data before announcing your findings to the wider world. It can be all too easy to jump to conclusions, to put 2 and 2 together and come up with 5, and you don’t want to be shot down because, on closer examination, the data simply doesn’t stack up.
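A quick sanity check before announcing anything is to ask whether the difference you’re seeing could plausibly be noise. Here’s a small, standard-library-only sketch of a two-proportion z-test; the conversion figures are invented for illustration.

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for 'do designs A and B really convert differently?'."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # normal CDF via erf

# Invented numbers: B converts at 11.4% vs A's 9.6%, which sounds like a win,
# but with samples this small the p-value is ~0.35 – far above the usual 0.05
# cut-off, so the difference could easily be noise.
print(f"p-value: {two_proportion_p_value(48, 500, 57, 500):.2f}")
```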
11. Always communicate the context
Out of context it can be all too easy to misinterpret, and indeed misrepresent, data. As Mark Twain famously reminded us, “There are lies, damned lies and statistics”, and the same is certainly true of design data. This is why it’s so important to outline the context whenever design data is discussed, communicated and presented. Let stakeholders know the context in which the data should be interpreted: for example, whether it’s being compared to an existing design, an alternative design or historical data. Also outline the context in which the data was captured. Not all data is the same, and some data is certainly more valid than other data. By outlining the context you can help others better understand the data and, more importantly, its ramifications.

12. Don’t just analyse the data, do something with it
I’ve saved the most important advice for last. If you remember only one thing from this article (in which case please concentrate a bit more next time) then it should be this: Capturing design data is all well and good, but it’s what you do with it that counts.
Data informed design should be a continual process. You design, you collect and analyse data, and then you iterate. You design, you collect and analyse data, and then you iterate. You design, you collect and analyse data, and then you iterate – you get the idea. It’s all too easy to get lost in the data and forget that data is just a means to an end, and that end should be delivering a fantastic user experience to your users.
See also
- How to choose the right UX metrics for your product (Google Ventures)
- A Hands-On Guide To Data-Driven Design (Usabilla)
- Six Myths about Data-Driven Design (Smashing Magazine)
- Data-Driven Design In The Real World (Smashing Magazine)
- The Ultimate Guide To A/B Testing (Smashing Magazine)