10 ways to ensure your online surveys deliver quality data.

Surveys remain an invaluable tool in helping brands understand their audiences and find new opportunities. But as creating online surveys becomes more democratised, there is an increasing danger of poor question design and UX leading to bad data. Bad data that can lead to bad decision-making and potentially costly misdirected choices.

In this post we look at 10 common pitfalls, pratfalls and facepalms that we regularly, and increasingly, see in survey design.

Read on for some tips on boosting your questionnaire design skills and ensuring your surveys deliver quality data.

 

 

Have we really had enough of experts?

Going back to the early days of online surveys in the ‘00s, creating and launching a survey was typically the preserve of highly trained researchers. Starting out in one of the big research agencies of the day, you would put in months of training before being let loose on a live survey. Extensive grounding in different question types and their purposes (Categorical, Dichotomous, Ordinal, Likert, Semantic Differential, Conjoint, etc), question psychology, sampling techniques, survey UX, and the cleaning and processing of data was the norm. Good survey design was both a science and a craft that took years to master.

Today, anyone can launch a survey to their customers or to their target market. No data science training is needed. No survey craft to learn. Just a credit card and your choice of many self-serve survey platforms and you are good to go. But as great as removing barriers to research is, with more surveys come more bad surveys. And more bad data. Ask the wrong type of question, or ask it in the wrong way, and the resulting data will lead to poorly informed (and potentially costly) decisions.

But many of the issues with surveys are easy to spot and avoid if you know what to look out for. Below we’ve compiled a list of 10 survey watch-outs and related tips for delivering good data. Hopefully, it will help inform and inspire great data collection.


1. Avoid subjective ‘fuzzy’ answer lists and scales

This first watch-out is one of the most widespread: ‘fuzzy’ answer lists. These include numbered rating scales (1 to 10, 1 to 5, etc), star ratings and answers using words like ‘regularly’ and ‘occasionally’. These types of answers are all open to interpretation and, despite their widespread use, they should be avoided whenever possible. If your answer list isn’t fully defined, or it could mean different things to different people, then it’s going to be hard to answer accurately and equally hard to make sense of the resulting data.

For any good survey question, you should be able to define every potential answer in any list or scale shown to participants. This will be much easier for the participant, researcher and research audience to understand. If you can’t do this, then the question is either incomplete or unbalanced, or it has redundant answers - so it is time to redesign.

The most familiar survey question of all, the Net Promoter Score, is a major offender here. Despite its success, it’s something of a poster boy for bad question design. Most of us will have completed an NPS question, where we are asked to rate our likelihood to recommend a product or service on a scale where 10 is ‘Extremely likely’ and 0 is ‘Not at all likely’. In most cases, the rest of the scale is not defined, with no indication that seven points on the scale, from 0 to 6, are negative (‘Detractors’) and just two, 9 and 10, are positive (‘Promoters’). Most people would logically think a recommendation score of 8 or 9 out of 10 was a very positive result, but with NPS a score of ‘8’ is considered ‘Passive’ - it has no real value. Personally, I’d be very likely to recommend a brand I give a score of ‘8’ to. Meanwhile, a seemingly middling score of 6 is treated the same as a score of 1 (!). So whilst stakeholders might demand it as a KPI, you can see why you wouldn’t want to hang any big decisions on such a skewed question as NPS alone.
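For anyone who wants to see how those buckets feed the headline number, here is a minimal Python sketch of the standard NPS calculation (the example responses are hypothetical). It shows why a respondent giving an 8 adds nothing to the score while a 6 counts fully against it.

```python
# Minimal sketch of the standard NPS calculation, illustrating the skew
# described above: 9-10 count as Promoters, 7-8 as Passives (ignored)
# and everything from 0 to 6 counts as a Detractor.

def nps(scores):
    """Return the Net Promoter Score (-100 to +100) for a list of 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical responses: five customers scoring 8 produce an NPS of zero,
# while a couple of 'middling' 6s drag an otherwise strong score down.
print(nps([8, 8, 8, 8, 8]))   # 0
print(nps([9, 9, 9, 6, 6]))   # 20
```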

So how do we fix it? Taking the standard NPS question (QA below), we’d suggest the alternative (QB) that defines every answer point, uses a balanced answer scale (see also point 2), removes some of the unnecessary granularity in an 11-point scale and uses ‘Definitely’ to provide a clearer interpretation of the intention to recommend. We’ve also provided context for the question wording (see point 7).

A TYPICAL NPS ADVOCACY QUESTION

QA. How likely is it that you would recommend brand X to a friend or colleague?

0 (Not at all likely)
1
2
3
4
5
6
7
8
9
10 (Extremely likely)

ADVOCACY (ASKED A BETTER WAY):

QB. If you were asked to recommend a [insert category / purpose], how likely would you be to recommend brand X?

I would…

Definitely recommend
Probably recommend
Be unsure whether to recommend
Probably not recommend
Definitely not recommend

2. Balance your answer scales

Any good survey question should aim to collect unbiased data; it should not steer the respondent to answer one way more than another as a result of the question design. If you’re asking people to rate the quality of a concept, product, experience or brand, they should be given as much chance to answer negatively as positively. Otherwise, you’ll get results that will likely overclaim the positive.

But more and more, we’re seeing this fundamental of good survey design being overlooked or, worse, purposefully gamed. There is an increasing trend of scales biased towards the positive across customer satisfaction studies, purchase intention questions and opinion polls. When you see more positive answer options than negative, you should always take the resulting data with a pinch of salt. Think of it as having your homework marked, but with the only grades the teacher can choose from running from A* to B.

A real-life example: the question below (QA) is used by Google to ask users of its news feed whether the content shown fits their interests. You don’t have to be a trained quant researcher to spot a couple of things clearly wrong with this question. The first is the obvious bias towards positive answers (three options) vs. negative (one option). The other is the difference, or lack of one, between ‘Excellent’ and ‘Great’. Having both makes no sense other than as an attempt to boost positive responses. Whether this bias is intentional we’ll leave you to decide.

FEEDBACK QUESTION FROM GOOGLE NEWS FEED

QA. How is this recommendation?

Excellent
Great
Good
OK
Bad

So how do we fix it? There are a few different ways you could balance a scale like this. We prefer either a five-point Likert-style scale (QB), where ‘OK’ sits in the middle, or, if the UI allows, a four-point scale tailored to the relevance purpose of the question (QC).

QUESTION ASKED A BETTER WAY - OPTION 1
QB. How is this recommendation?

Excellent
Good
OK
Poor
Very poor

QUESTION ASKED A BETTER WAY - OPTION 2
QC. How relevant is this story recommendation?

Very relevant
Quite relevant
Not very relevant
Not at all relevant
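As a quick sanity check before a survey goes live, it can help to tally the positive, neutral and negative options in each scale and flag any imbalance. Below is a minimal Python sketch of that idea; the valence assigned to each label is an editorial judgement made when writing the scale, not something a tool can detect for you.

```python
# Illustrative check: flag answer scales offering more positive than negative
# options (or vice versa). The '+'/'0'/'-' valence of each label is assigned
# by hand when the scale is written - it is a judgement call, not detection.
from collections import Counter

def check_balance(scale):
    """scale is a list of (label, valence) pairs, with valence in {'+', '0', '-'}."""
    counts = Counter(valence for _, valence in scale)
    return counts["+"] == counts["-"], counts

google_style = [("Excellent", "+"), ("Great", "+"), ("Good", "+"), ("OK", "0"), ("Bad", "-")]
balanced_5pt = [("Excellent", "+"), ("Good", "+"), ("OK", "0"), ("Poor", "-"), ("Very poor", "-")]

print(check_balance(google_style))   # (False, ...) - three positive options vs. one negative
print(check_balance(balanced_5pt))   # (True, ...)  - two positive, two negative, one neutral
```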

3. Filter out leading questions

One of the survey design fails we see most often is leading questions that push participants towards answering in a certain way. Often these are innocent, coming from an unconscious bias based on being too close to the product / service / brand being researched. A lack of research expertise comes into it too. We’ve seen market-leading survey SaaS platforms that come loaded with biased templated questions that get used without much thought (e.g. How well do our products meet your needs?). And again, at worst, these biased questions can be designed to deliver a preferred response for a research sponsor.

A recent example we saw was in a survey asked of visitors to a well-known car review website. The question asked ‘How long would you be happy to wait for a car you wanted to be delivered from a dealer?’, with an answer scale starting at 1 month and running, via several options, up to 12 months. Personally, I wouldn’t expect to wait anything more than a week for a car delivery, but the shortest time frame forced me into saying 1 month. There was no option to choose anything shorter. You can already see the chart headline: ‘100% of car buyers are happy to wait a month’ for their cars to be delivered. Nope. They are not.

Making important decisions based on (mis)leading data is not going to help anyone in the long run. Good research should always seek an independent view, with research clearly separated from the team responsible for creating a product, service, advert, UX/CX, etc. If that’s not possible for you, check your questions to make sure they are not subject to bias (conscious or unconscious) and test your surveys in person with other people to make sure the answers they want to give are available.

Below are a handful of leading questions (QAs) we have seen doing the rounds and some matched alternatives (QBs) that should be clearly less biased.


QA1. What do you like most about this design / concept / etc?
QB1. What, if anything, did you particularly like or dislike about this design / concept / etc?

QA2. How well does this product / service / feature meet your needs?
QB2. How would you rate this product’s / service’s / feature’s ability to meet your needs?

QA3. How much would you pay for this product / service?
QB3. Would you buy this product / service if it was priced at £XX/$XX? [repeat with alternate price points].

QA4. How much do you agree that [insert statement]?
QB4. To what extent would you either agree or disagree that [insert statement]?

QA5. How easy did you find it to complete this task on the app?
QB5. How would you rate the app on ease of completing this task?

QA6. How important is it to you that [insert statement]?
QB6. Is [insert statement] important or unimportant to you? How much?

4. Use complete but not overly long answer lists

When understanding the reasons for an opinion, decision or behaviour, surveys will often contain multiple-choice questions with a list of potential reasons for participants to choose from (e.g. What’s most important to you when choosing which mobile phone handset to purchase?). These types of questions often take the most thought to compile; they need to be derived from hypotheses and insight. The result is that some survey writers don’t do enough thinking, or research, and leave out potential key drivers. Others go to the other extreme and create a wordy list of twenty options to choose from. Both approaches deliver poor data. A balance is needed - you want to cover the most important themes and discover new trends, but go too granular and no participant will bother to read all your possible answers.

So how do we fix it? To generate a complete answer list, the ideal is to use existing insight or qualitative research. If time or budget doesn’t allow for this, do some desk research, brainstorm possible answers with your colleagues, or use ChatGPT to create a starter list. Then refine the list into clear, snappy and consumer-friendly answers. Do this and you’ll probably cover most bases, and you can then fill the gaps with an open-ended ‘Another reason’ response option to make sure you haven’t missed something important, or a new trend.

And whilst we want answer lists to be exhaustive, we also want them to be read, so we’d recommend keeping lists to a maximum of ten relevant options. Where possible, group answers into macro themes and paraphrase reasons rather than writing each one out separately with full and perfect grammar. For example, rather than have multiple answers around one area, like ‘It was too expensive’, ‘I didn’t have enough money’, ‘It wasn’t good value for money’ and ‘I was waiting for a sale’, just group them into one related answer, e.g. ‘Too expensive / Poor value’.

5. Keep it relevant; use survey routing and answer piping

The best surveys are personalised to the user, with dynamic techniques used to keep content relevant and avoid wasting people’s valuable time. This helps keep participants engaged, reduces drop-outs and improves the quality of responses. But many surveys ask all questions of all participants, and ask them in exactly the same way. This is a frustrating, dull experience that is likely to lead to poor-quality data.

So how do we fix this? The best survey platforms have dynamic features like ‘piping’ and ‘routing’ as standard. Routing ensures you only ask questions on the subjects (interests, behaviour, products, brands) that are relevant to each participant. This can be done on a question-by-question basis using previous responses, or from customer data uploaded with the sample file as hidden variables. Piping is used to reference previous answers, making the survey feel more human, contextualised and engaging (more like a conversational interview). At its most basic, this can be just piping a previous answer into the question wording (e.g. Why did you choose [Samsung] for your current mobile phone?), but it can also be used to dynamically change the list of answers and the context of questions (e.g. You said you were going to buy a new mobile phone [in the next 6 months], how likely are you to buy [Samsung] again?).

The five questions below demonstrate an example survey flow that combines routing and piping to ensure survey questions (and data) are relevant across a multi-category product survey.

EXAMPLE USES OF ROUTING & PIPING

Q1. Which of the following products, if any, do you expect to buy in the coming 12 months?
ANSWER LIST INCLUDING MOBILE PHONE, TABLET, LAPTOP PLUS NONE OF THESE. THE PARTICIPANT CHOOSES ‘MOBILE PHONE’

Q2. Which of the following [mobile phone] brands had you heard of before today?
PIPE IN CATEGORY. ANSWER LIST OF 10 MOBILE PHONE BRANDS. THE PARTICIPANT CHOOSES 5 BRANDS

Q3. And, which of these brands would you consider for your next [mobile phone]?
PIPE IN JUST THE 5 BRANDS CHOSEN AT Q2 AS THE LIST OF ANSWERS. PARTICIPANT CHOOSES ‘SAMSUNG’

Q4. Which of the following models of [Samsung] [mobile phone] do you find most appealing?
SHOW MODEL LIST / IMAGE SELECTION OF SAMSUNG PHONE MODELS.

Q5. Why did you choose the [Galaxy Fold] as the most appealing [Samsung] [mobile phone]?
PIPE IN CHOSEN MODEL NAME, THE IMAGE AND PRODUCT CATEGORY
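Most survey platforms handle routing and piping for you through their own interfaces, but if it helps to see the logic spelled out, here is a simplified Python sketch of the flow above. The helper functions, brand list and model names are illustrative assumptions, not any particular platform’s API.

```python
# Simplified sketch of the routing and piping logic in the example flow above.
# The helpers, brand list and model names are illustrative only - real survey
# platforms expose this through their own routing and piping features.

def ask_one(question, options):
    """Stand-in for the platform: show a single-choice question, return a canned pick."""
    print(question)
    return options[0]  # placeholder: pretend the participant picks the first option

def ask_many(question, options):
    """Stand-in for the platform: show a multi-select question, return canned picks."""
    print(question)
    return options[:2]  # placeholder: pretend the participant picks the first two

category = ask_one("Which of the following products, if any, do you expect to buy "
                   "in the coming 12 months?",
                   ["Mobile phone", "Tablet", "Laptop", "None of these"])

if category != "None of these":                        # ROUTING: only continue if relevant
    aware = ask_many(f"Which of the following {category.lower()} brands had you "
                     f"heard of before today?",        # PIPING: category into the wording
                     ["Samsung", "Apple", "Google", "Sony", "Xiaomi"])

    brand = ask_one(f"And, which of these brands would you consider for your "
                    f"next {category.lower()}?",
                    aware)                             # PIPING: show only the brands chosen above

    model = ask_one(f"Which of the following models of {brand} {category.lower()} "
                    f"do you find most appealing?",
                    ["Galaxy Fold", "Galaxy S24"])     # assumed model list, for illustration

    reason = ask_one(f"Why did you choose the {model} as the most appealing "
                     f"{brand} {category.lower()}?",
                     ["(open text response)"])
    print(category, brand, model, reason)
```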


6. Keep survey language succinct, simple and familiar

Using simple and clear language helps ensure participants quickly understand the question's purpose and provide accurate responses. People will avoid reading long-winded or complicated text that adds to their cognitive load. It’s always best to keep question and answer wording short and succinct.

It’s easy to become too close to a subject and assume your survey participants will know the latest acronyms, innovations and category quirks. But they typically won’t. Most people are not like you and your colleagues. You need to avoid jargon or industry speak with general consumers. Examples of things to avoid can come from the market category (e.g. Internet of Things, SaaS), business language and acronyms (e.g. KPI, CRM, OKR, USP) and even research survey jargon itself (e.g. what do ‘Somewhat applies’ and ‘Other’ actually mean?).

It’s also important to use familiar, everyday language. Asking frequency of use as ‘How often do you [insert activity]?’ is much simpler than ‘With what frequency do you [insert activity]?’. And market researchers can be pretty guilty here. Take the ‘Other (specify)’ answer that appears in many surveys. It means nothing to the average person, yet it remains very common in online surveys (a hangover from interviewer instructions on telephone surveys). To fix this, use contextual answers like ‘Another reason (please tell us)’ or ‘In another way (please tell us)’, which make much more sense and encourage better engagement with the question.

7. Frame questions with context & don’t be vague

Take NPS again - the gift that keeps on giving when looking at bad survey design. Another of its failings is that the question is typically not framed within a scenario. It usually just jumps in with a question like ‘How likely are you to recommend Microsoft?’. Without the scenario of being asked by a friend to recommend a brand for a specific purpose, I’m always going to give a low score (probably a 5 or 6), however great the brand is. People just don’t go around recommending brands, especially those like Microsoft that are well-known already. But it would be a different result if I was asked ‘If you were asked by a friend which brand they should use for creating a spreadsheet, how likely would you be to recommend Microsoft?’. In that case, I’d be much more positive and probably give a strong 8 (not that it would help boost the NPS score!).

So how do we fix this? Any good survey question should be based on actual behaviour where possible, e.g. ‘Have you ever been asked to recommend a company that provides [insert category / need]? IF YES: Who did you recommend?’. But if you take the typical ‘likelihood of doing something in the future’ approach, a specific, framed scenario will be better than one that lacks context or makes assumptions about behaviour, e.g. ‘If you were asked by a friend to recommend a brand that does [insert category / need], how likely would you be to recommend brand X?’.

8. Don’t ask impossible-to-answer questions

Sometimes what we’d like to know as researchers can’t be framed or answered in a simple survey question. It’s well documented that people don’t always behave as they say they do, or will, in surveys. But this is rarely down to an attempt to mislead or misdirect. It’s more likely that the survey questions have unrealistic expectations of what a participant can accurately know, calculate, recall or predict.

Questions that would involve getting out a calculator, bank statement, comparison website, diary or even a proverbial crystal ball should be avoided. Below are some examples of the types of questions that should be dismissed:

Q1. How much did you personally spend on technology products in 2022?
ISSUES: Too big a time period, too long ago, undefined category

Q2. How much more or less do you expect to spend on technology products in 2023 vs. 2022?
ISSUES: Difficult to predict. Difficult to calculate

Q3. How many hours do you expect to use social media this month?
ISSUES: Difficult to predict. Difficult to calculate. Lack of behavioural awareness

Q4. How often do you check your social media feeds each day?
ISSUES: Micro-moments are often not recalled. Answer will be different day to day

Q5. Which of the following product ideas do you think will be most successful?
ISSUES: Assumes market knowledge. No context on pricing, sales channels, etc.

Q6. How often would you buy this product if it was available today?
ISSUES: No competitive or promotional context. Impossible to predict

Typically, these types of questions should be avoided altogether, but sometimes there are workarounds. Instead of questions about future prediction (e.g. How many hours do you plan to watch streaming video services this weekend?), ask about behaviour in the recent past (e.g. How many hours did you spend watching streaming video services last weekend?). Try to break time periods into smaller chunks to get a more accurate response (e.g. How much time did you spend on social media yesterday? vs. How much time did you spend using social media in the past week?). Ask about personal preferences rather than market predictions (e.g. Which of the following do you find most appealing? vs. Which of the following do you think would sell the most?).

9. Don’t let quality checks become quality fails

Understanding and adapting to learned behaviour is crucial in good UX. People expect sites and apps to work the same way as others they have seen and used. On auto-pilot they will do what they have done elsewhere. Online survey completion is no different.

Building in too many clever quality checks and trick questions to catch out rogue survey participants can sometimes do more harm than good. Rotating answer scales (e.g. putting negative answers first instead of the expected positive) is common practice in survey design as a way to check people are paying attention, but it can backfire. Participants will naturally expect the first answers to run positive to negative - top to bottom, left to right. Mess with this expectation and some people will unintentionally give the wrong answer. Follow-up open-ended questions we’ve asked when people rated an experience negatively (on a rotated question) have consistently highlighted this as enough of an issue for us to ditch what we once thought was good practice.

Use rotations sparingly, and certainly do not flip the scale order for the same participant within the same survey. When using questions designed to catch out untrustworthy participants, it’s best not to use an important question or KPI for the purpose. Techniques such as asking the same question in two different ways at the start and end of the survey, using CAPTCHA-inspired image checks and monitoring response speed will help filter out bogus responses without ruining crucial data. But even then, use these sparingly so as not to patronise, confuse or frustrate survey participants.
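As a rough illustration of those lighter-touch checks, the sketch below flags (rather than silently deletes) responses that were completed implausibly fast or that contradict themselves on a repeated question. The field names and thresholds are assumptions for the example; in practice you would tune them to your own survey length and data.

```python
# Rough illustration of light-touch quality checks: flag responses completed
# implausibly fast or giving contradictory answers to a repeated question.
# Field names and thresholds are assumptions for this example only.

MIN_SECONDS = 120      # assumed minimum plausible completion time for this survey
MAX_DRIFT = 1          # allow the repeated question's answers to differ by one scale point

responses = [
    {"id": 1, "seconds_taken": 45,  "q2_satisfaction": 5, "q18_satisfaction_repeat": 1},
    {"id": 2, "seconds_taken": 310, "q2_satisfaction": 4, "q18_satisfaction_repeat": 4},
]

def quality_flags(response):
    flags = []
    if response["seconds_taken"] < MIN_SECONDS:
        flags.append("speeding")
    if abs(response["q2_satisfaction"] - response["q18_satisfaction_repeat"]) > MAX_DRIFT:
        flags.append("inconsistent")
    return flags

for r in responses:
    print(r["id"], quality_flags(r) or "ok")   # 1 ['speeding', 'inconsistent'] / 2 ok
```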

10. Check and follow UX best practice, across devices

The last of our 10 is a big one. Like any online experience, following UX best practice, heuristics and testing is crucial for a good survey experience and accurate engagement. Surveys that are easy to comprehend, quick to complete, visually appealing and aligned with expectations will lead to higher response rates and more insightful, accurate data.

We’ll expand on this with specific examples in a separate post later in the year but for now, always aim to do the following when designing surveys:

  • Think mobile-first but be responsive: Any good survey will aim to be inclusive within the target audience. Minimising participant restrictions based on device, time or location of completion is essential for this. To achieve this, create truly responsive surveys that automatically adjust to device / screen size. And with 2 in 3 survey responses now typically coming from mobile, designing for and testing on mobile is the preferred route.

  • Keep the UI clean and simple: A minimal but engaging UI that puts the focus on the questions, answers and content being tested will perform best. Avoid cluttering the screen with too many elements, background images or excessive branding. Use a clear font and ensure good contrast between text and background.

  • Use images to support easy completion: Visual aids such as images, buttons, icons and text enhancements, when used well, can enhance understanding and reduce cognitive effort. For example, using icons to represent product categories, brand logos instead of names, button-based answers instead of text check-boxes and bold text to signify important elements will all help.

  • Help users orientate themselves within the survey: Tell users how long the survey will take. Break the survey into sections with a clear purpose and transparently guide people through completion (e.g. ‘To understand the needs of different people we have a few profile questions’). Use a progress bar or percentage indicator to show participants how far they have come and how much is left to complete.

  • Trial and test the survey: Always test the survey with a small group of people, across a range of devices and screen sizes, to identify any usability issues and ensure that the survey is user-friendly. Your testers should ideally include people who represent the target audience and are not part of the project or client team. Gather feedback on navigation and comprehension, and use it to optimise the experience.


 

At their best, surveys can uncover new opportunities, unlock potential and support better decision-making. They can provide rich and robust data based on the people who matter most - your audience. But getting the most from surveys requires experience and expertise.

If you’d like to know more about how our surveys can help your brand, get in touch today.

 
