Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session A4.1: Respondent Behavior and Data Quality I
Time: Friday, 10/Sept/2021, 11:00 - 12:00 CEST

Session Chair: Florian Keusch, University of Mannheim, Germany

Presentations

Satisficing Behavior across Time: Assessing Negative Panel Conditioning Using a Randomized Experiment

Fabienne Kraemer1, Henning Silber1, Bella Struminskaya2, Michael Bosnjak3, Joanna Koßmann3, Bernd Weiß1

1GESIS - Leibniz-Institute for the Social Sciences, Germany; 2Utrecht University, Department of Methodology and Statistics, Netherlands; 3ZPID - Leibniz-Institute for Psychology, Germany

Relevance and Research Question:

Satisficing response behavior (i.e., taking shortcuts in the response process) is a threat to data quality. Previous research provides mixed evidence on whether satisficing increases over time in a panel study, impairing the quality of survey responses in later waves (e.g., Schonlau & Toepoel 2015; Sun et al. 2019). However, these studies were non-experimental, so little is known about what accounts for possible increases. Specifically, past research did not distinguish between the effects of general survey experience (process learning) and familiarity with specific questions (content learning).

Methods and Data:

Participants of a non-probability German online access panel (n=882) were randomly assigned to one of two groups. The experimental group received target questions in all six panel waves, whereas the control group received these questions only in the last wave. The target questions included six between-subject question design experiments, manipulating (1) the response order, (2) whether the question included a ‘don’t know’ option, and (3) whether respondents received a question in the agree/disagree or the construct-specific response format. Our design, in which all respondents have the same survey experience (process learning), allows us to test whether respondents increasingly employ satisficing response strategies when answering identical questions repeatedly (content learning).
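
As a minimal sketch of how one such between-subject experiment could be evaluated wave by wave, consider the following Python/pandas code. It assumes a hypothetical flat file with one row per respondent and wave; the file name and the columns wave, response_order, and chose_first_listed are illustrative and not taken from the study.

    import pandas as pd

    # Hypothetical data: one row per respondent and wave, with the assigned
    # response-order condition and whether the first-listed option was chosen.
    df = pd.read_csv("panel_responses.csv")

    def primacy_effect(wave_df: pd.DataFrame) -> float:
        """Difference (in percentage points) in the share choosing the
        first-listed option between the two response-order conditions."""
        by_order = wave_df.groupby("response_order")["chose_first_listed"].mean()
        return 100 * (by_order["original"] - by_order["reversed"])

    # Track the effect wave by wave to see whether satisficing increases.
    for wave, wave_df in df.groupby("wave"):
        print(f"Wave {wave}: primacy effect = {primacy_effect(wave_df):+.1f} pp")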

Results:

As the study will not be completed until the end of March 2021, we conducted preliminary analyses using within-subject comparisons of the first three waves of the experimental group. The question design experiments provide evidence of all three forms of satisficing (i.e., primacy effects, acquiescence, and ‘don’t know’ responding) in each of the three waves of the study. These response effects have an average magnitude of 10 to 15 percentage points. However, there is no clear pattern of increase or decrease in satisficing over time, which disconfirms the content learning hypothesis.

Added value:

Currently, it is unclear how process and content learning affect satisficing response behavior across waves of longitudinal studies. Our findings contribute to the understanding of whether unwanted learning effects arise when respondents are asked to complete identical survey questions repeatedly, a study design that is critical for monitoring social change.



Consistency in Straightlining across Waves in the Understanding Society Longitudinal Survey

Olga Maslovskaya

University of Southampton, United Kingdom

Relevance & Research Question: Straightlining is an important indicator of poor data quality. It can be identified when respondents answer batteries of attitudinal questions. Previous research suggests that the likelihood of straightlining is higher in online data collection than in face-to-face interviews, and that it differs depending on the device respondents use in mixed-device online surveys. As many social surveys nowadays move either to mixed-mode designs with an online mode for some respondents or to online data collection as a single mode, it is important to address data quality issues in a longitudinal context. When batteries of questions are asked in different waves of a longitudinal survey, it is possible to identify whether some individuals consistently choose straightlining as a response style behaviour across waves. This paper addresses the research question of whether there is within-individual consistency in straightlining behaviour across waves in the online component of a longitudinal survey and, if so, what the characteristics of these individuals are.
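
For illustration, strict straightlining within one battery can be flagged in a few lines of Python/pandas. This is a minimal sketch under assumed data; the file name and item names are hypothetical, not actual Understanding Society variables.

    import pandas as pd

    df = pd.read_csv("wave8_responses.csv")  # hypothetical input file
    battery = ["att_item1", "att_item2", "att_item3", "att_item4", "att_item5"]

    # Strict straightlining: identical answers on every item of the battery.
    df["straightlined"] = df[battery].nunique(axis=1).eq(1)

    # A softer nondifferentiation score: the standard deviation across items
    # (0 means straightlining; larger values mean more differentiation).
    df["nondifferentiation"] = df[battery].std(axis=1)

    print(df["straightlined"].mean())  # share of straightliners in this battery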

Methods & Data: The project uses the online components of Understanding Society, Waves 8-10. These data provide a unique opportunity to study straightlining over time in an online mixed-device longitudinal survey in the UK context. In Wave 8, around 40% of households responded in the online mode, and in subsequent waves the proportions were even higher. Longitudinal data analysis is used to address the main research question.
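
Cross-wave consistency could then be assessed by joining per-wave straightlining flags on the person identifier. The sketch below assumes per-wave flag files derived as in the battery-level example above; pidp is Understanding Society's cross-wave person identifier, while the file and column names are otherwise illustrative.

    import pandas as pd

    # Hypothetical per-wave files, each with pidp and a straightlining flag.
    flags = None
    for w in (8, 9, 10):
        d = pd.read_csv(f"wave{w}_flags.csv")[["pidp", "straightlined"]]
        d = d.rename(columns={"straightlined": f"straightlined_w{w}"})
        flags = d if flags is None else flags.merge(d, on="pidp", how="inner")

    # Respondents flagged in every observed wave are consistent straightliners.
    wave_cols = [c for c in flags.columns if c.startswith("straightlined_w")]
    flags["consistent_straightliner"] = flags[wave_cols].all(axis=1)
    print(flags["consistent_straightliner"].mean())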

Results: Preliminary results are already available; the final results will become available in June 2021.

Added Value: This project addresses an important issue of data quality in longitudinal mixed-device online surveys. Once the individuals who consistently choose straightlining response behaviour across waves are identified, they can be targeted during survey data collection, either through real-time data quality evaluation or by using information about data quality from a previous wave in the current wave. Tailored treatments can then be employed to improve the quality of data from these respondents.



Effects of ‘Simple Language’ on Data Quality in Web Surveys

Irina Bauer, Tanja Kunz, Tobias Gummer

GESIS – Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question:

Comprehending survey questions is an essential step in the cognitive response process that respondents go through when answering questions. Respondents who have difficulties understanding survey questions may not answer at all, drop out of the survey, give random answers, or take shortcuts in the cognitive response process – all of which can decrease data quality. Comprehension problems are especially likely among respondents with low literacy skills. We investigate whether the use of ‘Simple Language’, i.e., clear, concise, and uncomplicated language, for survey questions helps mitigate comprehension problems and thus increases data quality. ‘Simple Language’ is a linguistically simplified version of standard language, characterized by short and succinct sentences with a simple syntax that avoids foreign words, metaphors, and abstract concepts.

Methods & Data:

To investigate the impact of ‘Simple Language’ on data quality, we conducted a 10-minute web survey among 4,000 respondents of an online access panel in Germany in December 2020. Respondents were randomly assigned either to a questionnaire in ‘Standard Language’ or to a version of the questionnaire that had been translated into ‘Simple Language’. We compared both groups with respect to various measures of data quality, including item nonresponse, nondifferentiation, and speeding. In addition, we investigated several aspects of respondents’ survey assessment.
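
The quality indicators named above can be operationalized in a few lines. The following Python/pandas sketch assumes a hypothetical respondent-level export; column names such as language_version and duration_sec are illustrative, and the speeding cut-off is one common convention rather than the authors' definition.

    import pandas as pd

    df = pd.read_csv("survey_export.csv")  # hypothetical respondent-level file

    # Item nonresponse: share of unanswered substantive items per respondent
    # (here, all columns with the assumed prefix "q_").
    items = [c for c in df.columns if c.startswith("q_")]
    df["item_nonresponse"] = df[items].isna().mean(axis=1)

    # Speeding: completion time far below the median, e.g. under 40% of it
    # (an illustrative cut-off; nondifferentiation could be scored per grid
    # battery as the within-battery standard deviation).
    df["speeder"] = df["duration_sec"] < 0.4 * df["duration_sec"].median()

    # Compare the two language versions on the quality indicators.
    print(df.groupby("language_version")[["item_nonresponse", "speeder"]].mean())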

Results:

Our findings to date are based on preliminary analyses. We found an effect of language on item nonresponse: respondents who received the questionnaire in ‘Simple Language’ were more likely to provide substantive answers than respondents who received the questionnaire in ‘Standard Language’. The findings for the other quality indicators appear to be mixed and need further investigation.

Added Value:

The study contributes to a deeper understanding of the benefits of ‘Simple Language’ for question comprehension and data quality in web surveys. In addition, our findings should provide useful insights for improving the survey experience. These insights may be particularly helpful for respondents with low literacy, who are frequently underrepresented in social science surveys.