A09: Scales and Don't Know Answers
This session ends at 1:20.
When Don’t Know is not an Option: The Motivations behind Choosing the Midpoint in Five-Point Likert Type Scales
University of Gothenburg, Sweden
Relevance & Research Question: Likert items, a common attitude measure in surveys, typically have five response categories labelled ‘strongly agree,’ ‘agree,’ ‘disagree,’ and ‘strongly disagree,’ with a midpoint labelled ‘neither agree nor disagree’ to capture an ordered (intermediary) attitude, positioned at an equal distance from ‘disagree’ and ‘agree.’ But how do individuals who respond to Likert-type questions actually interpret the midpoint? If respondents select the midpoint for reasons other than expressing a middle position, this violates the assumption of an ordered response scale and raises questions about the accuracy of the resulting estimates. We investigate how respondents motivate their choice of the midpoint, and how this varies when ‘don’t know’ is included as a response option.
Methods & Data: An online survey experiment was fielded in 2018 with 6,393 members of the Swedish Citizen Panel. All participants were exposed to three 5-point attitude items in a split-sample design, with or without ‘don’t know’ as an additional option. To assess the reasons for choosing the midpoint, we asked respondents who selected the midpoint (as well as ‘don’t know,’ when included) to explain why in open-ended questions.
Results: Besides expressing a middle position, we found four general motivations for choosing the midpoint of the 5-point Likert scale: ambivalence, lack of knowledge, no opinion, and indifference. Ambivalence and lack of knowledge were the most frequent. Including ‘don’t know’ as a response option yielded fewer ‘lack of knowledge’ motivations and more respondents indicating ambivalence as the reason. However, even when ‘don’t know’ was included, a notable share of respondents still referred to lack of knowledge as their reason for choosing the midpoint.
Added Value: The findings are consistent with previous research, indicating that respondents to Likert-type questions choose the midpoint for several reasons besides expressing a middle position. While including ‘don’t know’ as a response option has been suggested as a possible solution, we find that this alone does not eradicate the problem. Instead, more diverse and item-specific measures are likely needed to reduce ambiguity in how to interpret the midpoint of Likert items.
Effects of using numeric instead of semantic labels in rating scales
GESIS - Leibniz Institute for the Social Sciences, Germany
Relevance & Research Question:
Web surveys are increasingly being completed on smartphones. This trend heightens the need to optimize question design for smaller screens and thereby provide respondents with a better survey experience, lessen burden, and increase response quality. In this regard, it has been suggested to replace semantic labels of rating scales (e.g., “strongly like”) with numeric labels (e.g., “+5”). However, research on the applicability of these scales is sparse, especially with respect to interactions with other scale characteristics. To address this research gap, we investigated the effects of using numeric labels on response behavior in comparison to semantic labels. Moreover, we tested how these effects vary across scale orientations (positive-negative vs. negative-positive) and scale formats (agree-disagree vs. construct-specific).
Methods & Data:
Our experiment was implemented in a web survey on “Politics and Work” fielded in Germany in November 2018 (N=4,200). The survey was quota-sampled from an access panel. Respondents were randomly allocated to conditions in a 2x2x2 between-subjects design in which we varied the scale labels (numeric vs. semantic), scale orientation (positive-negative vs. negative-positive), and scale format (agree-disagree vs. construct-specific) of a rating scale comprising 10 items presented item by item. The experimental variations were assessed using several response quality indicators (e.g., agreement, primacy effects, response times, straightlining).
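The allocation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual implementation; the factor names and the `allocate` helper are hypothetical.

```python
# Illustrative sketch (not the authors' code): randomly allocating
# respondents to a 2x2x2 between-subjects design like the one described.
import itertools
import random

# The three experimentally varied scale characteristics (assumed labels).
FACTORS = {
    "labels": ["numeric", "semantic"],
    "orientation": ["positive-negative", "negative-positive"],
    "format": ["agree-disagree", "construct-specific"],
}

# The Cartesian product yields all 8 experimental conditions.
CONDITIONS = list(itertools.product(*FACTORS.values()))

def allocate(respondent_ids, seed=0):
    """Assign each respondent independently to one of the 8 conditions."""
    rng = random.Random(seed)
    return {rid: rng.choice(CONDITIONS) for rid in respondent_ids}

assignments = allocate(range(4200))  # N=4,200 as in the reported survey
```

Because each respondent is drawn independently, cell sizes are only approximately equal; a blocked or stratified scheme would balance them exactly.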
Results:
In our preliminary analyses, we found that semantic labels resulted in more agreement than numeric labels. Numeric labels also produced larger primacy effects, which occurred primarily with construct-specific scales rather than agree-disagree scales. Differences in response times were also moderated by the other scale characteristics. For instance, items with numeric scale labels took longer to answer when scales ran from negative to positive, but not in the reverse orientation.
Added Value:
Our study adds to the sparse knowledge about the usability of numeric scale labels. Moreover, it extends previous studies by investigating the interaction between different scale characteristics and identifying scenarios in which numeric scale labels may be applied and when they are better avoided.
Do we know what to do with “Don’t Know”?
Kantar Public, United Kingdom
Relevance & Research Question:
Much evidence exists on the treatment of ‘Don’t Know’ (DK) response options in interviewer-administered questionnaires, including arguments over whether they should be offered explicitly. With the move to online self-completion or mixed-mode designs, it is unclear how best to deal with DK and other ‘spontaneous’ codes.
The current approach for online self-completion questions at Kantar Public is to ‘hide’ DK codes and only make them available when respondents try to move on without selecting an answer. Usability testing has uncovered issues with this approach: respondents are often unaware of how to select a DK response and feel forced to select an alternative. This raises questions about whether the current approach risks producing inaccurate data.
Methods & Data:
This paper presents results from an experiment conducted on the UK’s Understanding Society Innovation Panel (IP11) that compared different treatment of ‘Don’t know’ (DK) response codes within a self-completion online questionnaire.
Our experiment compared three approaches:
- Treatment 1 - To ‘hide’ DK codes and only make them available if respondents try to move on without selecting an answer
- Treatment 2 - As above but with a specific prompt at each question on how to view additional options
- Treatment 3 - Including DK codes as part of the main response lists (so they are always visible)
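The three treatments above amount to different display rules for the DK code. A minimal sketch, assuming a generic agree-disagree question; the function name and option wording are illustrative, not Kantar Public's implementation.

```python
# Hypothetical sketch of the three DK treatments as question-display logic.
SUBSTANTIVE = ["Strongly agree", "Agree", "Neither agree nor disagree",
               "Disagree", "Strongly disagree"]

def render_question(treatment, attempted_skip=False):
    """Return the options (and any prompt) shown under each treatment.

    Treatment 1: DK hidden until the respondent tries to move on unanswered.
    Treatment 2: as treatment 1, plus an explicit prompt about hidden options.
    Treatment 3: DK always visible in the main response list.
    """
    show_dk = (treatment == 3) or attempted_skip
    prompt = ("Click 'Next' without answering to see additional options."
              if treatment == 2 else None)
    options = SUBSTANTIVE + (["Don't know"] if show_dk else [])
    return {"options": options, "prompt": prompt}
```

The only behavioural difference between treatments 1 and 2 is the upfront prompt; treatment 3 changes what is visible before any interaction.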
Results:
Analysis was conducted across 26 questions. Treatment 3 consistently elicited a higher proportion of DK responses than treatment 1. For self-assessed health measures, the differences tended to be small (typically on the order of one or two percentage points), but they were particularly pronounced for attitudinal questions on low-salience issues.
When asked about the benefits and risks of nuclear energy, only 11% of those exposed to treatment 1 answered DK, compared with 23% under treatment 2 and 33% under treatment 3. Our analysis suggests that treatment 3 effectively discourages the reporting of ‘non-attitudes’: under treatment 1, 57% of those who reported knowing ‘nothing at all’ about nuclear energy nevertheless provided a valid (non-DK) answer to the benefits and risks question; this fell to 34% under treatment 2 and 16% under treatment 3.
The Presentation of Don't Know Answer Options in Web Surveys: an Experiment with the NatCen Panel
NatCen Social Research, United Kingdom
While it is possible to collect ‘spontaneous’ answers of ‘Don’t Know’ (DK) in interviewer-administered surveys, this is less straightforward in self-completion questionnaires. An important decision in web questionnaire design is therefore whether, and how, to include a DK option. Including a DK option may increase the amount of ‘missing’ data and encourage satisficing, while omitting it may negatively affect the data, as respondents who genuinely do not know are forced to give a false answer or may exit the survey entirely.
Several approaches have been suggested for web surveys. These include (1) offering a DK option up-front with visual separation from substantive answer options, (2) offering a DK option up-front but probing for more information after a DK answer, and (3) only showing a DK option if a respondent tries to skip a question without giving an answer.
This study tests these approaches by collecting experimental data using the NatCen panel, a probability-based sequential mixed-mode panel. Additionally, the study tests the effects of more explicitly explaining the functionality of option (3) to respondents upfront.
The study looks at the effects of these approaches on the number of DK answers alongside measures of data quality. Further insight into respondents’ cognitive processes is gained from follow-up closed and open probes at the end of the survey, asking why respondents answered the questions as they did.