If you want to know someone’s height you can take out a tape measure and measure it. Alternatively you could ask ‘how tall are you?’ Both options are forms of measurement and both are subject to error. Errors can arise from how we take or record the measurement, for example, or from our choice of measure (tape or question). Our awareness of measurement error and its impact on survey data quality has grown considerably over the past 40 years. A significant contribution has come from the Cognitive Aspects of Survey Methods (CASM) movement, a coming together of cognitive and social psychologists with survey methodologists and statisticians to develop methods and theories to understand and assess measurement error.
Last month I, along with around 360 others interested in measurement error, attended the second Questionnaire Design, Development, Evaluation and Testing (#QDET2) conference in Miami, Florida. The conference provided some useful insights on our understanding of measurement error in surveys, what we should be doing to tackle this error, and where further research is needed. It also emphasised how surveys are changing in response to the digital revolution taking place across the globe. For example, in his keynote address, Mario Callegaro highlighted how digital technology is transforming how we collect data and, by implication, how we design and test data collection tools. Using objective measurement (sensors and wearables) as part of surveys, paradata to improve survey design, and administrative data to augment survey data are examples of where digital technology is changing the kinds of data we can collect and opening up new possibilities.
However, it would be wrong to think that the message from QDET2 was that we would (or should) no longer be undertaking surveys and asking people questions about their lives, their opinions and values. There is still a place for the survey questionnaire as a data collection tool but care is needed. Here are some of the ideas I took away from QDET2 that I think are helpful in designing survey questions and making use of other types of data.
It’s vital to ask the right questions…
Measurement error can occur when we ask people the wrong questions. Pretesting can check that survey participants have stored in their memory, and can recall, the information being sought. However, as researchers we also need to be clear about what it is we want to measure (conceptual clarity) and about whom. Selecting the right pretesting methods at the right stage in the survey design process is important, and there were papers that proposed frameworks to guide decision-making in this area.
…in the right way…
The language we use to communicate with participants is important. We need to ensure that questions use language that participants understand in the way we intend. In cross-national and cross-cultural studies this is particularly important, but the principle also applies to national and local surveys. Rigorous specification, translation and pretesting procedures aim to reduce such errors, but we can also make better use of linguistic resources and of statistical models that predict the risk of measurement error.
…with the right response options
Our choice of answer options can affect participants’ responses, as can the way in which those options are presented. For example, the direction of a rating scale (e.g. agree–disagree versus disagree–agree) or a frequency scale (high–low versus low–high) can produce different results. We don’t really know why this occurs, and further research to develop and test theory is needed.
Do not underestimate the importance of design
Visual layout is as important as question wording, and design features can affect responses. This is particularly true in the design of web and mobile surveys. Usability testing is an essential part of questionnaire development in the digital era, and there is much that we as social survey researchers can learn from the usability and user experience (UX) community.
We need to think about the future of interviewing
Interviewers can play an important role in encouraging people to take part in our surveys, but we also know that interviewer characteristics, attitudes and behaviours can have negative impacts on data quality. With the growing role of digital methods of data collection, we need to work out what the role of the survey interviewer will be in future.
QDET2 highlighted how far our understanding of measurement error has come since the first QDET conference in 2003, and how multi-disciplinary the field is. There is a wide range of methods at our disposal to help us develop and test questions and predict measurement error, but we have a long way to go. We need more theory-building activity to inform our decision-making, and a greater understanding of how context and the characteristics of the people who take part in our surveys affect measurement error. Making good use of the digital tools at our disposal to design, develop, evaluate and test questions (and other measurement instruments) is challenging in a fast-changing technology landscape. Sharing our knowledge and ideas, successes and failures, will help us to make progress more rapidly.
All the presentations from the conference are now available on the QDET2 website. The Questionnaire Design Development & Testing LinkedIn group provides a platform for further sharing of ideas and knowledge.