Research bias manifests itself in various ways. One of the most important is the way we ask our questions: ask them one way and we get one answer; ask them another way and we get a quite different response.
In Anthony Jay and Jonathan Lynn's brilliant Yes Minister – as relevant today as when it first aired nearly 40 years ago – the biasing effects of leading questions are laid bare by everyone's favourite mandarin, the silver-tongued Sir Humphrey Appleby.
Of course, Sir Humphrey's point of reference is quantitative research – an opinion poll, to be precise. And it's plain to see how such a poll, with its carefully constructed flow, set questions and pre-determined answers, can push the interviewee in a particular direction.
But qualitative research is quite different: it is deliberately fluid and flexible. Its direction is determined not just by the interviewer, but by the interviewee too. Questions and answers are anything but set in stone. It's a conversation, not a questionnaire.
So what can qualitative researchers learn from Sir Humphrey's dismantling of our quantitative cousins? And what should we do to avoid the bias he so neatly exposes?
1. Make sure context reflects reality
When exploring stimulus material – concepts, advertising ideas or pack designs, for example – researchers often like to discuss category or brand perceptions before presenting it.
So, if they're exploring concepts for Dyson, they might have a chat about people's frustrations with vacuum cleaners. If they are researching advertising ideas for Head & Shoulders, they might start by exploring how people feel about dandruff.
It is, we are told, a way of couching the discussion in an appropriate context, a way of getting people into the 'right frame of mind'.
Except it isn't.
In fact, it amounts to sensitising respondents to the matter at hand in a way that would never happen in the real world. It creates a bias that would never exist in reality.
Think about it for a moment. When was the last time you thought deeply about your frustrations with vacuum cleaners just before seeing an ad for Dyson? When was the last time you had a 10-minute discussion about dandruff before seeing an ad for Head & Shoulders?
Almost certainly never.
And if you had, your reactions to the ideas would have been quite different.
Which is why you shouldn't do it in research.
2. When moderating, be alert to what’s gone before
As Sir Humphrey shows, what you choose to discuss at the start can have a huge influence on what happens later on.
In fact, I would go further. The influence of what has gone before is inevitable and inescapable.
It sets the unwritten rules for what is expected and accepted later on, for what is seen as 'right' and 'wrong'. Emotionally, it evokes feelings and associations that unavoidably affect mood and colour responses.
So, when researchers are presenting concepts, ads or pack designs, and they ask respondents to ‘forget about the ideas you’ve seen before and just focus on this idea’, I blanch.
While it may be well-intentioned, it is futile. Respondents are not in control of their memories. They cannot ‘forget’ what has gone before. Whether consciously or subconsciously, earlier ideas and the discussion around them will inevitably influence reactions to the new ones.
The researcher's job is not to eliminate this bias. It is to be aware of it, and to take it into account when analysing their findings.
3. Rotate and randomise
Like asking open questions (point 4 below), ‘rotating’ stimulus (i.e. showing ideas in a different order in each interview) should be second nature to researchers.
By giving each idea an equal opportunity to 'go first', it allows the researcher to get a 'clean' read on each idea, unpolluted by responses to ideas preceding it.
And by ensuring that a particular idea doesn't directly follow another particular idea in multiple interviews, it mitigates the bias that might occur as a result of that specific ordering combination.
For similar reasons, coding should be randomised in such a way that a 'natural hierarchy' cannot be inferred. Concepts shouldn’t be labelled as ‘1’, ‘2’, ‘3’, etc., but given random codes: ‘G6’, ‘X3’, ‘M8’, and so on.
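For anyone who prepares a rotation plan in a script rather than by hand, here is a minimal sketch of how it might be done. The Python below is purely illustrative; the function names, seed and concept labels are hypothetical rather than any standard tool.

```python
import random
import string

def rotation_schedule(ideas, n_interviews, seed=None):
    """Build a showing order for each interview: each idea 'goes first'
    an equal number of times, and the ideas that follow are shuffled so
    no fixed pairing recurs across the whole sample."""
    rng = random.Random(seed)
    schedule = []
    for i in range(n_interviews):
        first = ideas[i % len(ideas)]                    # rotate the lead idea
        rest = [idea for idea in ideas if idea != first]
        rng.shuffle(rest)                                # vary what follows what
        schedule.append([first] + rest)
    return schedule

def random_codes(ideas, seed=None):
    """Assign arbitrary labels ('G6', 'X3', ...) so no natural hierarchy
    can be read into '1', '2', '3'."""
    rng = random.Random(seed)
    codes = set()
    while len(codes) < len(ideas):
        codes.add(rng.choice(string.ascii_uppercase) + str(rng.randint(0, 9)))
    return dict(zip(ideas, list(codes)))

concepts = ["Concept A", "Concept B", "Concept C"]       # hypothetical stimulus
for order in rotation_schedule(concepts, 6, seed=1):
    print(order)
print(random_codes(concepts, seed=1))
```

The design choice in this sketch is deliberate: cycling the lead idea gives each one an equal share of ‘clean’ first exposures, while shuffling the remainder avoids the same idea always following the same other idea.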
Unfortunately, I’ve been party to many occasions on which rotation and randomised coding were simply not carried out.
Expediencies such as making interviews easier to follow for watching clients, or not having to go through the time-consuming task of re-ordering documents, are sometimes allowed to take precedence.
They shouldn’t.
And before anyone worries about causing discomfort to their client, there really is no need. In my experience, clients are only too willing to come on board when a clear explanation is provided: it is, after all, they who will ultimately reap the dividend.
4. Ask open questions
This is a basic rule of thumb that any good researcher knows only too well. And it sounds easy.
But carrying it out consistently is much harder, even for the most seasoned professionals.
In day-to-day life, it is our habit to converse in a closed fashion: ‘That must have made you feel happy’ and ‘I'm sure you were disappointed when that happened’ are the stuff of everyday conversation.
Such responses help us to demonstrate understanding. They enable us to ‘help out’ the other person by giving them a ‘hook’ for their response. And they just feel natural, because we use them all the time.
But they implicitly set a context for the answer: they imply that you must have felt something, and that happiness or disappointment are the valid frames of reference. And, importantly, they hint at the moderator's point of view before the respondent has had a chance to air, or even consider, their own feelings, potentially biasing the conversation.
Their open equivalents, phrases such as ‘How did that make you feel?’ and ‘Can you tell me more about that?’ are less everyday. They do not demonstrate understanding, at least not explicitly. They provide no ‘hook’, and can therefore be difficult to answer. And they come less naturally to the questioner.
But they make far fewer assumptions. They imply nothing about the way in which they should be answered. They are a blank canvas, giving the respondent the chance to articulate their answer in the way they see fit. They avoid bias.
5. Ask participants to explain their responses
The impulse to scratch below the surface of every response, to ask what has produced it, where it has come from, should be second nature to researchers.
Prompts such as ‘What makes you say that?’ and ‘Where do you get that idea from?’ are some of the most powerful tools in a researcher’s toolbox.
They help us understand whether an interviewee's responses are the result of influences in the interview itself - such as the topics already covered, the language used by the moderator, or the inputs of other interviewees - or whether they represent the more generalised, longer-term thoughts and feelings that are usually of more interest when digging for insights or developing ideas.
They also mitigate the subjective bias inherent in the researcher's analysis. They prevent us from over-relying on our own beliefs and experience when interpreting what we've seen and heard. They give us more context, more clues, and lead us to more accurate, less biased conclusions.
So, if you want to do your research the way it should be done – impartial, dispassionate and unprejudiced – then keep these guidelines in mind.
On the other hand, if you're looking to prove a pre-determined point, well, I think we all know a man(darin) who can help.