Great product design is all about the individuals using the system. That’s a tenet that underpins the greatest design-thinkers and agencies worldwide and it’s absolutely true. In fact, there are entire frameworks of design built around it: service design, customer experience design, human-centred design and user experience design are just the first few off the top of my head. But there’s a trap here for young players. Who is that user?
If you can’t answer that question with unbiased, research-founded evidence, then terms like UX become less a framework for success and more a post-hoc rationalisation for a pre-existing belief. That’s a big problem.
As a designer, the way you’re most likely to fall into this trap is when you assume the individual you’re designing for is you. Or that it’s people you know. People who conform to your existing biases and behave in the way you expect them to.
When UX designers approach a project with these pre-packaged assumptions in mind, it’s all too easy to cherry-pick the evidence that supports these assumptions and ignore the evidence that doesn’t. And the end result is that you design a perfect system for people who never really existed to begin with.
Don’t worry, the good news is that we’re not the first industry to run into this problem. It’s called confirmation bias, and it’s been a plague of innovation for as long as humans have been innovators.
Even better news: there’s a cure.
The cure is science
Art vs. science. A competition that’s been going on since the dawn of time, right? Well, actually no. Not at all.
Some of the greatest artists and designers in history have also been some of our greatest scientists. Leonardo da Vinci is obviously the first to leap to mind, but he was far from the last. The fact is that the best design is always underpinned by one common element – the scientific method. It’s the most powerful tool we have for uncovering truth.
So how does it work? Well, according to the man himself, Aristotle – whose systematic approach to observation laid the groundwork for what we now call the scientific method – it goes a little something like this:
1. Observe some aspect of the universe.
2. Invent a tentative description, called a hypothesis, that is consistent with your observations.
3. Use that hypothesis to make predictions about what you haven’t yet seen firsthand.
4. Test those predictions by experiments or further observations, and modify the hypothesis in light of your results.
5. Repeat steps 3 and 4 until there are no discrepancies between hypothesis and experiment and/or observation.
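Those steps map neatly onto a loop. Here’s a toy Python sketch of the hypothesise–predict–test cycle – every function name and the example numbers below are invented for illustration, not part of any real research tool:

```python
def scientific_method(initial_hypothesis, predict, run_experiment, revise,
                      max_rounds=20):
    """Loop steps 3 and 4 until prediction matches observation (step 5)."""
    hypothesis = initial_hypothesis
    for _ in range(max_rounds):
        prediction = predict(hypothesis)
        observation = run_experiment()
        if prediction == observation:
            return hypothesis  # no discrepancy left: a workable theory
        hypothesis = revise(hypothesis, observation)  # modify and try again
    raise RuntimeError("Hypothesis never converged; back to step 2")


# Toy usage: guessing how many form fields users will actually complete.
truth = 7
theory = scientific_method(
    initial_hypothesis=1,
    predict=lambda h: h,            # our prediction is just the guess itself
    run_experiment=lambda: truth,   # stand-in for a real measurement
    revise=lambda h, obs: obs,      # adopt what we actually observed
)
print(theory)  # 7
```

The point of the sketch is the shape of the loop: you don’t stop at your first guess, you keep revising until the evidence stops disagreeing with you.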
Complete step five and you’ve got yourself what’s called a theory, my friend. And that’s a framework based in fact that you can really work with.
But David, I hear you exclaim, I’m designing an app, not hunting for up quarks in a supercollider – what’s this got to do with me?
I’m so glad you asked.
Lab coat first, Photoshop second
Whenever you approach a UX project, the first people you’re going to have to contend with are internal stakeholders. This can be a trap in itself, because although business owners and marketing managers may be more familiar with their customer demographic than you are, they’re just as susceptible to confirmation bias as anyone else.
Tim from accounting believes that the most important thing to users is information about payment schedules, because that’s what he takes the most calls about throughout the day. That belief should be taken with a grain of salt.
Similarly, Lauren the CEO’s belief that stock price information belongs above the fold on the homepage, because that’s what matters to her, should also be questioned.
This doesn’t mean that either of them is wrong, but it does mean they need to be proved right before you can open Photoshop. More often than not, this can be done using the existing website’s analytics data, and I’d strongly recommend Google Analytics or Adobe Omniture as your best friends when disabusing internal stakeholders of firmly held (and factually unfounded) biases.
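As a sketch of what that sanity check might look like, here’s a few lines of pandas over a made-up page-level export. The column names and figures are invented for illustration – a real Google Analytics export will be shaped differently:

```python
import pandas as pd

# Hypothetical page-level analytics export (all names and numbers invented).
pages = pd.DataFrame({
    "page": ["/home", "/payment-schedules", "/stock-price", "/products"],
    "pageviews": [52000, 1800, 400, 31000],
    "avg_time_on_page_s": [12, 95, 8, 60],
})

# What share of traffic does each page actually get?
total = pages["pageviews"].sum()
pages["share_of_views"] = pages["pageviews"] / total

# Tim's claim: payment schedules are what users care about most.
payment_share = pages.loc[
    pages["page"] == "/payment-schedules", "share_of_views"
].iloc[0]
print(f"Payment schedules: {payment_share:.1%} of pageviews")
# → Payment schedules: 2.1% of pageviews
```

In this invented dataset, Tim’s calls clearly aren’t representative of overall traffic – which is exactly the kind of evidence that moves a conversation past gut feel.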
Know thine user, know thine experience
In the process of researching, proving or disproving the assumptions of internal stakeholders, it will be almost impossible not to form some of your own. The easy path here is to think ‘well, I’ve already done the research, so mine must be correct.’ Unfortunately, that path leads to mistakes.
Once you’ve formed your own idea about who the user is, that needs to be tested too. And at this stage of overcoming confirmation bias we need to look beyond the preliminary analytics data of the existing website, because, although informative, that data is ultimately only as good as the current user experience. It’s time to run a user test.
Your user test should consist of a cross-section of all user groups (target audience and admin staff alike) in order to reveal the qualitative and quantitative information your design process has been craving.
Now, it’s worth saying here that there will always be issues with the process. No one is perfect at this. But by taking cues from the scientific community, UX researchers are constantly refining best practices, and there’s a lot of great advice out there that you can leverage in your own work.
Once you have your sample groups lined up and ready to come in, it’s time to form your questions. Gary Barber from Radharc has some great tips around reducing bias in your questioning, which I’m going to paraphrase here because he really sums it up perfectly.
Don’t let your question affect the answer
Researchers call these ‘leading questions’ – questions phrased in a way that nudges a user towards a particular answer. If you ask ‘did you find the checkout form lengthy and difficult to complete?’, the obvious answer is ‘well yeah, I guess it was.’
On the other hand, asking ‘what did you find easiest about the checkout form? What was the most challenging aspect?’ requires that the user think about their experience before answering.
Leave emotions at the door
If you ask people what they like or dislike about just about anything, they’ll give an emotion-driven response that’s hard to correct for. If I ask why you don’t like peanut butter, you’ll likely respond that it tastes bad. There’s not a lot I can do to improve my recipe based on that.
Instead of asking what someone likes, ask what they want to achieve and how that could be better facilitated.
Don’t get technical
Nobody likes industry jargon. Nobody. Leave your business-speak behind when conducting user tests and keep questions in easy-to-understand, plain English. We tend to answer questions in imitation of the way they were asked, so if you ask a question in a confusing manner you’ll likely get a confusing (or confused) response.
Not all answers are created equal
It goes without saying that more detailed answers are more useful, and you should certainly prompt your users to elaborate where possible, but don’t go hitting your head against the brick wall of a short answer for too long. If a user won’t expand on an answer, move on to the next question (or the next interviewee) or you’ll just be wasting everyone’s time.
If you’re following the scientific process (you are, right?), then you’ll need several rounds of these interviews with small groups to get the best possible results and narrow your hypothesis down to a truly workable theory. Between each round of user tests, it’s always a good idea to cross-reference your results against the existing analytics data and see how things match up.
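That cross-referencing can be as simple as lining the two datasets up side by side. A hedged pandas sketch – the task names, success rates and funnel figures below are all invented for illustration:

```python
import pandas as pd

# Invented task-success rates from a round of moderated user tests...
tests = pd.DataFrame({
    "task": ["find_pricing", "complete_checkout", "contact_support"],
    "success_rate": [0.90, 0.45, 0.70],
})

# ...and completion rates for the matching funnels in your analytics export.
analytics = pd.DataFrame({
    "task": ["find_pricing", "complete_checkout", "contact_support"],
    "funnel_completion": [0.85, 0.40, 0.90],
})

merged = tests.merge(analytics, on="task")
merged["gap"] = (merged["success_rate"] - merged["funnel_completion"]).abs()

# A large gap flags a place where lab results and live behaviour disagree
# and your hypothesis needs another round of revision.
print(merged.sort_values("gap", ascending=False))
```

Where the two sources agree, you can trust your hypothesis a little more; where they diverge, that’s your next round of questions.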
Once you’ve finally arrived at your theory, you can be confident that confirmation bias is firmly dead and buried.
Now comes the fun part!