Conference Agenda

Overview and details of the sessions of this conference.

 
Session Overview
Session
A 6.2: Cognitive Processes
Time:
Friday, 11/Sep/2020:
3:30 - 4:30

Session Chair: Otto Hellwig, respondi AG & DGOF, Germany

Presentations

Using survey design to encourage honesty in online surveys

Steve Wigmore, Jon Puleston

Kantar, United Kingdom

Relevance & Research Question:

There can be multiple reasons why data collected in online surveys may differ from the “truth”. Surveys that do not collect data from smartphones, for example, will carry bias from a skewed sample that does not reflect the modern world. The way individual questions are asked may be subject to inherent biases, and some respondents may find the survey experience itself frustrating or confusing, which will affect their willingness to answer truthfully.

Methods & Data:

This paper will discuss key psychological motivations for respondents to answer surveys truthfully, even when this requires them to make more of an effort for the same financial incentive. What drives individuals to tell the truth, and how can survey design help to reward such honesty? We will look at a number of questioning techniques that reflect real-life decision making and make it easier for respondents to answer truthfully. Conversely, we will also examine methods for validating data to reduce overclaim from aspirational respondents.
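To make the validation idea concrete, the sketch below shows one widely used overclaim check: embedding a fictitious (“trap”) item in an awareness list and flagging respondents who claim familiarity with it. The brand names, column labels, and flagging rule are illustrative assumptions, not the specific technique used in this study.

```python
# Minimal sketch of a fictitious-item ("trap") check for overclaiming.
# Brand names, column labels, and the flagging rule are hypothetical;
# they illustrate the general idea, not the specific Kantar method.
import pandas as pd

# Awareness responses: 1 = "I have heard of this brand", 0 = "I have not".
responses = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "aware_brand_a": [1, 0, 1, 1],
    "aware_brand_b": [0, 1, 1, 0],
    "aware_fictitious_brand": [0, 0, 1, 1],  # brand that does not exist
})

# Respondents who claim awareness of the non-existent brand are flagged as
# likely overclaimers; their remaining answers can then be validated more
# carefully or excluded from analysis.
responses["overclaim_flag"] = responses["aware_fictitious_brand"] == 1
print(responses[["respondent_id", "overclaim_flag"]])
```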

Results:

By conducting a number of research-on-research surveys on the Kantar panel, across a range of subjects and countries, we have seen the direct impact of questioning techniques designed to encourage honesty in data collection and to validate or trap respondents who are prepared to answer dishonestly. We will present the results of this research and provide key learnings that can be applied directly in online questionnaires.

Added Value:

Many research companies and end clients use the results of online research as an important part of their insight generation process or tracking studies. By using the techniques presented in this paper, they can be more confident of collecting higher-quality, more honest data from more engaged respondents. We would encourage anyone involved in the design of online surveys to take these findings into consideration.



What Is Gained by Asking Retrospective Probes after an Online, Think-Aloud Cognitive Interview

William Paul Mockovak

U.S. Bureau of Labor Statistics, United States of America

Relevance & Research Question: Researchers have conducted cognitive testing online through the use of web-based probing. However, Lenzner and Neuert (2017) mention that, of several possible cognitive interviewing techniques, they applied only one technique: verbal probing. They also suggest that, given the technical feasibility of creating an audio and screen recording of a web respondent’s answering process, future studies should look into whether web respondents can be motivated to perform think-aloud tasks while answering an online questionnaire. Using an online instrument to guide the process, this study demonstrated that unmoderated, think-aloud cognitive interviewing could be successfully conducted online, and that the use of retrospective probes after the think-aloud portion was completed resulted in additional insights.

Methods & Data: Think-aloud cognitive interviewing, immediately followed by the use of retrospective web-based probing, was conducted online using a commercially available online testing platform and separate software for displaying survey instructions and questions. Twenty-five participants tested 9 questions dealing with the cognitive demands of occupations. Videos lasting a maximum of 20 minutes captured screen activity and each test participant’s think-aloud narration. A trained coder used the video recordings to code the think-aloud narration and participants’ answers to the retrospective web-based probing questions.

Results: Twenty-five cognitive interviews were successfully conducted. A total of 41 potential problems were uncovered, with 78% (32) identified in the think-aloud section and an additional 22% (9) identified in the retrospective, web-based probing section. The problems identified dealt mostly with comprehension and response-selection issues. Findings agreed with results from a field test of the interviewer-administered questions, and findings from both studies were used to revise the survey questions.

Added Value: A think-aloud online test proved successful at identifying problems with survey questions. Moreover, the online think-aloud testing and retrospective web-based probing were easier, faster, and less expensive to conduct. Online and field testing yielded similar results, but online testing had the added advantage that respondent problems could be shared as video recordings, which provided clearer examples of those problems for use in interviewer training and manuals.



Investigating the impact of violations of the left and top means first heuristic on response behavior and data quality in a probability-based online panel

Jan Karem Höhne1,2, Ting Yan3

1University of Mannheim, Germany; 2RECSM-Universitat Pompeu Fabra, Spain; 3Westat, United States of America

Relevance & Research Question: Online surveys are an established data collection mode that uses written language to provide information. The written language is accompanied by visual elements, such as presentation forms and shapes. However, research has shown that visual elements influence response behavior because respondents sometimes use interpretive heuristics to make sense of them. One such heuristic is the “left and top means first” (LTMF) heuristic, which suggests that respondents expect a response scale to run consistently from left to right or from top to bottom.

Methods & Data: In this study, we build on the “order of the response options” experiment by Tourangeau, Couper, and Conrad (2004) and extend it by investigating the consequences for response behavior and data quality when response scales violate the LTMF heuristic. We conducted an experiment in the probability-based German Internet Panel in July 2019 and randomly assigned respondents to one of two groups: the first group (n = 2,346) received response options presented in a consistent order (agree strongly, agree, it depends, disagree, disagree strongly), while the second group (n = 2,341) received response options presented in an inconsistent order (it depends, agree strongly, disagree strongly, agree, disagree).
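As an illustration, a minimal sketch of this kind of randomized assignment is given below. The function and variable names are assumptions for illustration only, not the German Internet Panel's actual implementation.

```python
# Minimal sketch of randomly assigning respondents to a consistent vs.
# an inconsistent response-option order; names are illustrative only.
import random

CONSISTENT_ORDER = [
    "agree strongly", "agree", "it depends", "disagree", "disagree strongly",
]
INCONSISTENT_ORDER = [
    "it depends", "agree strongly", "disagree strongly", "agree", "disagree",
]

def assign_option_order(respondent_id: int) -> list[str]:
    """Assign a respondent to one of the two experimental groups (50/50)."""
    rng = random.Random(f"ltmf-2019-{respondent_id}")  # reproducible per respondent
    if rng.random() < 0.5:
        return CONSISTENT_ORDER    # group 1: follows the LTMF heuristic
    return INCONSISTENT_ORDER      # group 2: violates the LTMF heuristic

print(assign_option_order(12345))
```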

Results: The results reveal significantly different response distributions between the two experimental groups. We also found that inconsistently ordered response options significantly increase response times and decrease data quality in terms of criterion validity. These findings indicate that order discrepancies confuse respondents and increase the overall response effort, as reflected in response times. They also affect response distributions, reducing data quality.
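To illustrate how such group comparisons can be run, the sketch below applies a chi-square test to the response distributions and a Mann-Whitney U test to the response times. All numbers are fabricated placeholders, not data or results from this study.

```python
# Minimal sketch of comparing the two experimental groups; all values are
# placeholders, not data from the German Internet Panel study.
import numpy as np
from scipy import stats

# Answer counts per response category (columns), one row per group
# (consistent vs. inconsistent option order); values are illustrative only.
answer_counts = np.array([
    [620, 910, 340, 310, 166],   # consistent order
    [540, 820, 520, 300, 161],   # inconsistent order
])
chi2, p_dist, dof, _ = stats.chi2_contingency(answer_counts)
print(f"Response distributions: chi2={chi2:.1f}, df={dof}, p={p_dist:.4f}")

# Per-respondent response times in seconds; illustrative values only.
rng = np.random.default_rng(0)
times_consistent = rng.lognormal(mean=2.0, sigma=0.4, size=2346)
times_inconsistent = rng.lognormal(mean=2.2, sigma=0.4, size=2341)
u_stat, p_time = stats.mannwhitneyu(times_consistent, times_inconsistent)
print(f"Response times: U={u_stat:.0f}, p={p_time:.4g}")
```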

Added Value: We recommend presenting response options in a consistent order, in line with the design strategy implied by the LTMF heuristic. Otherwise, violations of the heuristic may affect the outcomes of survey measures and thus the conclusions drawn from them.