Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
A5.1: Respondent Behavior and Data Quality II
Time:
Friday, 10/Sept/2021:
1:30 - 2:30 CEST

Session Chair: Otto Hellwig, respondi/DGOF, Germany

Presentations

Looking up answers to political knowledge questions: the use of different instructions and measures for respondent behavior

Tobias Gummer1, Tanja Kunz1, Tobias Rettig2, Jan Karem Höhne3,4

1GESIS - Leibniz Institute for the Social Sciences, Germany; 2University of Mannheim; 3University of Duisburg-Essen; 4RECSM-Universitat Pompeu Fabra

Relevance & Research Question: Measures of political knowledge are crucial in various fields to determine and explain public and political phenomena. Depending on the research question, researchers are interested in capturing declarative memory (knowing information) and/or procedural memory (knowing where and how to find information). In web surveys, respondents can easily look up information, thus confounding a measure of declarative memory with procedural memory. Our study advances existing research on looking up answers to political knowledge questions in two important respects. First, we investigate whether instructions can be used to discourage or even encourage looking up answers. Second, we compare respondents’ self-reports of looking up answers with paradata on window switching behavior.

Methods & Data: We implemented a survey experiment in wave 51 of the probability-based German Internet Panel, which was fielded in January 2021. We used a between-subject design and randomly assigned respondents to four experimental groups. Group 1 (control group) received three political knowledge questions. Group 2 received an additional instruction encouraging them to look up answers. Group 3 received an instruction discouraging them from looking up answers. Group 4 was asked for a commitment to not look up answers. We captured lookups via respondents’ self-reports, paradata on window switching, and a combined measure integrating self-reports and paradata.
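The abstract does not specify how the combined measure is constructed; purely as an illustrative sketch (all names and the threshold are hypothetical, not taken from the study), one plausible operationalization flags a lookup if either data source indicates one:

# Hypothetical sketch: combining self-reported lookups with window-switching
# paradata into a single per-respondent lookup indicator.
def combined_lookup(self_report: bool, window_switches: int, threshold: int = 1) -> bool:
    """Flag a lookup if the respondent reports one OR the paradata record at
    least `threshold` switches away from the survey window."""
    return bool(self_report) or window_switches >= threshold

# A respondent who denies looking up answers but switched windows twice
# would still be flagged by the combined measure:
print(combined_lookup(self_report=False, window_switches=2))  # True

Such an "either source" rule would be consistent with the reported pattern that the combined measure indicates more lookups than self-reports alone.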

Results: Preliminary analyses show that providing respondents with instructions significantly affects their behavior. Encouraging instructions resulted in a higher share of lookups compared to the control group. Similarly, discouraging instructions and asking for a commitment reduced the share of lookups compared to the control group. We found these effects across all three measures of looking up answers. Yet, we also found significant differences between the three measures, with self-reports indicating the lowest number of lookups and the combined measure indicating the highest number.

Added Value: Our study provides evidence on the use of instructions to encourage or discourage respondents from looking up answers to political knowledge questions. Consequently, instructions can be used to reduce bias. Moreover, we provide insights into the use of paradata to supplement self-reported measures of looking up answers.



Better late than not at all? A systematic review on late responding in (web) surveys

Ellen Laupper1, Esther Kaufmann2, Ulf-Dietrich Reips2

1Swiss Federal Institute for Vocational Education and Training SFIVET, Switzerland; 2University of Konstanz

Relevance & Research Question: Using reminders is an established practice in survey methodology to increase response rates. Nevertheless, there is widespread concern that "late respondents" are less motivated to provide high-quality survey data (e.g., more item nonresponse, satisficing). There is evidence that late and early respondents differ in sociodemographic characteristics as well as in relevant study outcomes (e.g., attitudinal or behavioural measures). The continuum resistance model assumes that late respondents are similar to nonrespondents and can hence serve as a proxy for nonrespondents. Because the last review on time of responding by Olson (2013) did not address mode differences systematically and did not include web surveys, we provide an up-to-date systematic review. With this review, we want to answer the question of whether late responding varies across the different self-administered survey modes.

Methods & Data: After a comprehensive literature search, our preliminary sample consists of 122 published and unpublished studies covering several fields, e.g., health, marketing, and political science. We considered studies in English and German from 1980 to 2021. All included studies compared early and late respondents in mail or web surveys and reported sociodemographic, data quality, or study outcome differences. For each study, two independent coders recorded publication features (e.g., year, type of publication) and study features (e.g., sample size, effect sizes, response rate, operationalization of late respondents, number of reminders).
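The abstract mentions two independent coders but does not report how agreement between them was checked; as a sketch of one common check (not necessarily the one used here, and with hypothetical example codes), Cohen's kappa over the two coders' nominal codes could be computed as follows:

from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' nominal codes of the same set of studies."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    counts_a, counts_b = Counter(codes_a), Counter(codes_b)
    # Agreement expected by chance, from each coder's marginal code distribution.
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(codes_a) | set(codes_b))
    return (observed - expected) / (1 - expected)

# Hypothetical example: how each coder classified the operationalization of
# "late respondents" in four studies.
coder_1 = ["after_reminder", "last_third", "after_reminder", "last_third"]
coder_2 = ["after_reminder", "last_third", "last_third", "last_third"]
print(round(cohen_kappa(coder_1, coder_2), 2))  # 0.5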

Results: Our systematic review describes late responding in detail in relation to publication and study features. Hence, our review provides results on the relevance of late responding across different study features, with a special focus on the survey mode and its impact on data quality.

Added Value: Our review provides deeper insights into which (web) survey practices lead to which consequences in the trade-off between measurement error and nonresponse bias, and into the effect of late responding on data quality.

Literature

Olson, K. (2013). Do non-response follow-ups improve or reduce data quality? A review of the existing literature. Journal of the Royal Statistical Society: Series A (Statistics in Society), 176(1), 129–145. https://doi.org/10.1111/j.1467-985X.2012.01042.



The impact of perceived and actual respondent burden on response quality: Findings from a randomized web survey

Tanja Kunz, Tobias Gummer

GESIS - Leibniz-Institute for the Social Sciences, Germany

Relevance & Research Question: Questionnaire length has been identified as a key factor affecting response quality. A larger number of questions and the associated respondent burden are assumed to lower respondents’ motivation to thoroughly process the questions. Thus, the respondent burden that increases with each additional question respondents have to work through is likely to lower response quality. However, little is known so far about the relationship between actual and perceived respondent burden, how this relationship may change over the course of questionnaire completion, and how response quality is affected depending on the relative position of a question within the questionnaire.

Methods & Data: A web survey was conducted among respondents of an online access panel using a questionnaire of 25 to 29 minutes in length. The question order was fully randomized, allowing the effects of question position on response quality to be disentangled from the effects of the content, format, and difficulty of individual questions. Among these randomly ordered survey questions, a block of evaluation questions on self-reported burden was asked several times. Owing to the complete randomization of the survey questions and the repeated evaluation questions, changes in actual and perceived respondent burden over the course of questionnaire completion and their effect on response quality could be examined systematically. Several indicators of response quality were taken into account, among others don’t know responses, nondifferentiation, attention check failure, and the length of answers to open-ended questions.
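The abstract lists nondifferentiation among the response quality indicators without specifying its operationalization; a minimal illustrative sketch of one common operationalization (share of grid items matching the modal answer; the function and example values are hypothetical) looks like this:

from collections import Counter

def nondifferentiation(grid_answers):
    """Straightlining score for one respondent's answers to a grid of items:
    the share of items matching the modal answer (1.0 = identical answers to
    every item, i.e., maximal nondifferentiation)."""
    modal_count = Counter(grid_answers).most_common(1)[0][1]
    return modal_count / len(grid_answers)

print(nondifferentiation([3, 3, 3, 3, 3]))  # 1.0 -> pure straightlining
print(nondifferentiation([1, 4, 2, 5, 3]))  # 0.2 -> fully differentiated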

Results: We found only minor effects of actual respondent burden on response quality, whereas higher perceived respondent burden was associated with poorer response quality across a variety of question types.

Added Value: This study provides evidence of how the actual and perceived respondent burden evolves over the course of the questionnaire and how both affect response quality in web surveys. In this respect, the present study contributes to a better understanding of previous evidence on lower data quality in later parts of questionnaires.



 