Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
A13: Data Quality in (Mobile) Web Surveys
Time:
Thursday, 07/Mar/2019:
12:00 - 13:00

Session Chair: Olga Maslovskaya, University of Southampton, United Kingdom
Location: Room 154
TH Köln – University of Applied Sciences

Presentations

Out of Sight, Out of Mind? Survey Mode Effects in Objective and Subjective Questions

Joachim Schork², Cesare Antonio Fabio Riillo¹, Johann Neumayr²

¹STATEC research, Luxembourg; ²STATEC, Luxembourg

Web questionnaires are increasingly used to complement traditional data collection in mixed-mode surveys. The flexibility of mixed modes provides many advantages, such as fewer nonresponse issues, lower costs, and compensation for the decreasing availability of other data sources, e.g., fixed-line telephone numbers. However, the increased use of web data raises concerns about whether web questionnaires introduce mode-specific measurement bias, since responses given to web questionnaires may differ significantly from those given in other survey modes.

We argue that the magnitude of measurement bias strongly depends on the content of a variable and investigate differences between web and telephone data in terms of objective and subjective variables. The study is based on the Luxembourgish Labour Force Survey that collects both objective and subjective employment variables. Analysis of the raw data reveals significant differences in sample composition (e.g. participants' personal characteristics such as age or nationality) as well as in the objective variable employment status and the subjective variables wage adequacy and job satisfaction.

In order to investigate whether the differences in employment variables are caused by sample composition or by mode-specific measurement bias, we match the web and telephone samples on the variables that drive the dissimilarities in sample composition. We identify these variables by combining automatic variable selection via random forest with theory-driven selection. Based on the selected variables, we then apply Coarsened Exact Matching, which approximates a randomized experiment by reducing dissimilarities between the web and telephone samples.
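The abstract contains no code; as a rough illustration of the matching step only, the sketch below (Python with pandas, hypothetical column names such as "mode", "age", and "nationality") coarsens the balancing variables into bins and then keeps only the strata that contain respondents from both modes, which is the core idea of Coarsened Exact Matching. It is a minimal sketch under these assumptions, not the authors' actual implementation.

import pandas as pd

def coarsened_exact_match(df, mode_col, covariates, bins=4):
    """Match two survey modes on coarsened covariates (CEM core idea).

    df         : DataFrame with one row per respondent
    mode_col   : column holding the mode label, e.g. 'web' / 'telephone'
    covariates : variables driving sample-composition differences
    bins       : number of quantile bins used to coarsen numeric variables
    """
    work = df.copy()
    # Coarsen: numeric covariates into quantile bins, categoricals kept as-is.
    for col in covariates:
        if pd.api.types.is_numeric_dtype(work[col]):
            work[col + "_bin"] = pd.qcut(work[col], q=bins, duplicates="drop")
        else:
            work[col + "_bin"] = work[col]
    strata = [c + "_bin" for c in covariates]
    # Keep only strata in which both modes are present.
    counts = work.groupby(strata, observed=True)[mode_col].nunique()
    matched_keys = counts[counts == 2].index
    matched = work.set_index(strata).loc[matched_keys].reset_index()
    # Note: full CEM also weights observations within strata; omitted here.
    return matched

# Hypothetical usage with assumed column names:
# lfs = pd.read_csv("lfs_raw.csv")
# matched = coarsened_exact_match(lfs, "mode", ["age", "nationality"])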

After matching, we show that employment status is not affected by mode-specific measurement bias, but web participants report lower levels of wage adequacy and job satisfaction. Even though further research on subjective variables is advisable, our results support the implementation of mixed survey modes in official statistics such as the Labour Force Survey.


Attention checks in web surveys: The issue of false positives due to non-compliance

Henning Silber, Joss Roßmann, Tobias Gummer

GESIS - Leibniz Institut für Sozialwissenschaften, Germany

Relevance & Research Question:

All surveys, and web surveys especially, rely on respondents being mindful when answering the survey questions. Survey researchers have therefore developed a variety of indicators to assess the quality of survey data (e.g., straightlining, item nonresponse, and speeding). One indicator that has become especially popular in market research is the attention check question, which typically instructs respondents explicitly to provide a specific answer or select a specific response category (e.g., "please select 'somewhat agree' to show that you are reading carefully"). However, these tests may produce false positives: attentive respondents may deliberately refuse to comply with the instruction, fail the check, and be wrongly classified as inattentive. Such false positives threaten to confound the measure of attentiveness with compliance. Our study contributes to the knowledge about attention checks by reporting the findings of two experiments on how to optimize these checks.

Methods & Data:

Both experiments were conducted in a web survey based on a sample drawn from a German non-probability access panel (N = 3,000). Respondents were randomly assigned to answer either on a PC or on a smartphone. In the first experiment, we manipulated the reasons respondents were given for why they should comply with the attention check. In the second experiment, we varied the placement of the attention check question within a grid question to assess how well these checks measure attention.

Results:

The results of the first experiment show that more respondents pass the attention check if a specific reason is given, which suggests that the measure may be confounded with compliance. These results are complemented by the second experiment, in which respondents were more likely to pass the test if the check was placed earlier in the question sequence, suggesting that attention checks, despite their inaccuracies, are capable of measuring attention.

Added Value:

The presentation provides new insights into the usefulness of attention checks, a tool that is frequently used in web surveys to assess the mindfulness of respondents and data quality in general. In addition, we provide recommendations on how to design these checks so that they yield meaningful measures.


Effects of Survey Design and Smartphone Use on Response Quality: Evidence from a Web Survey Experiment

Joss Roßmann

GESIS Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question

The increasing prevalence of respondents answering web questionnaires on their smartphones challenges researchers either to optimize the design of their surveys for specific devices or to implement designs that adapt to the device being used. While non-adaptive designs might impair response quality on some devices, adaptive designs may limit the comparability of results across devices. Therefore, this study examined how four different adaptive and non-adaptive designs affected response quality on smartphones compared to other devices.

Methods & Data

Respondents from an opt-in online panel were randomly invited to participate in our survey experiment either on a smartphone or on a PC, notebook, or tablet. The respondents in both conditions who complied with the instruction were then randomly assigned to answer the survey in a non-adaptive PC-optimized design, an adaptive design, or a non-adaptive smartphone-optimized design with either a paging or a scrolling layout (yielding four design conditions). Overall, 4,299 respondents participated in the 2x4 fully factorial web survey experiment. We used regression modeling to study the effects of survey design and device on several indicators of response quality, such as interview duration, straightlining, and non-substantive answers, as well as on substantive responses (i.e., latent means).
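As a minimal sketch of the kind of regression modeling described, the Python snippet below fits an OLS model of one response quality indicator (interview duration) on the fully crossed design and device factors. The data are simulated, and the condition labels and effect sizes are hypothetical; this is not the author's actual specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=1)
n = 4299  # sample size reported in the abstract

# Simulated stand-in for the 2x4 factorial data (hypothetical labels).
df = pd.DataFrame({
    "device": rng.choice(["smartphone", "pc"], size=n),
    "design": rng.choice(
        ["pc_optimized", "adaptive", "sp_paging", "sp_scrolling"], size=n
    ),
})
# Assumed effect: longer interviews for the PC-optimized design on phones.
penalty = ((df["device"] == "smartphone")
           & (df["design"] == "pc_optimized")).astype(float)
df["duration"] = 20 + 4 * penalty + rng.normal(0, 5, size=n)

# OLS with a full design-by-device interaction, mirroring the 2x4 factorial.
model = smf.ols("duration ~ C(design) * C(device)", data=df).fit()
print(model.summary())

The same specification could be rerun with other indicators (e.g., straightlining or non-substantive answers) as the outcome, using a logit instead of OLS for binary indicators.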

Results

The results of our study showed that answering the survey on a smartphone in the non-adaptive PC-optimized design, in particular, impaired the respondents’ survey experience and increased the perceived and actual interview duration as well as the number of survey breakoffs. While the non-adaptive smartphone-optimized design with a paging layout produced longer interviews, we did not find negative effects of smartphone optimization on response quality on either type of device. In addition, the effects of survey design on substantive responses were mostly small and insignificant. However, our results also indicated that response quality may differ slightly between devices in an adaptive design.

Added Value

Our study shows that a non-adaptive PC-optimized design is the worst option for surveying samples that include a non-negligible share of smartphone respondents. We therefore recommend implementing either adaptive or non-adaptive smartphone-optimized designs in order to achieve high response quality in mixed-device surveys.


Roßmann-Effects of Survey Design and Smartphone Use on Response Quality-153.pdf