Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location, or select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
A2: Adapting Online Surveys for Mobile Devices
Time:
Thursday, 16/Mar/2017:
10:45 - 11:45

Session Chair: Jan Karem Höhne, University of Göttingen, Germany
Location: A 208

Presentations

Data chunking for mobile web: effects on data quality

Peter Lugtig, Vera Toepoel

Utrecht University, The Netherlands

Relevance & Research Question

Mobile phones are replacing the PC as the key device in social science data collection. In daily life, mobile phones are used for short interactions. Successful data collection strategies for mobile phones should therefore also keep tasks brief for respondents.

Questionnaires for attitude research are often very long. We argue that there is a trade-off to be made: should questionnaires on mobile devices remain long, risking dropout, or should they be split into shorter parts (from here on, 'chunks') to optimize data quality?

Methods and data

We report on an experiment conducted in the probability-based LISS panel in the Netherlands in December 2015, using a ‘within’ design of data chunking. Panelists who owned a mobile phone with an Internet connection were randomly assigned to one of three conditions (see the sketch after the list):

a) The normal survey (about 20 minutes)

b) The same survey cut into three chunks, with chunks offered one week apart

c) The same survey cut into ten chunks, with chunks offered every other day
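
To make the three conditions concrete, here is a minimal Python sketch of the invitation schedule each condition implies. The fieldwork start date and the helper name are hypothetical, not taken from the study:

    from datetime import date, timedelta

    def chunk_schedule(start: date, n_chunks: int, gap_days: int) -> list:
        # Invitation date for each chunk, spaced gap_days apart.
        return [start + timedelta(days=i * gap_days) for i in range(n_chunks)]

    start = date(2015, 12, 1)  # hypothetical fieldwork start within December 2015

    conditions = {
        "a_full_survey": chunk_schedule(start, n_chunks=1, gap_days=0),   # one ~20-minute survey
        "b_three_chunks": chunk_schedule(start, n_chunks=3, gap_days=7),  # one chunk per week
        "c_ten_chunks": chunk_schedule(start, n_chunks=10, gap_days=2),   # one chunk every other day
    }

    for name, dates in conditions.items():
        print(name, [d.isoformat() for d in dates])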

Results

First, we investigate the number of complete and incomplete responses and examine indicators of data quality (straightlining, primacy effects, survey length). We find that more respondents complete the questionnaire when it is offered in chunks (condition b, and especially condition c), but also that chunking results in more missing items. We find little evidence of effects on data quality.
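
To illustrate how two of these indicators can be operationalized, here is a minimal Python sketch (the grid and its values are hypothetical, not the study's data): straightlining is flagged when a respondent gives the identical answer to every answered item in a grid question, and the item-missing rate is the share of unanswered items per respondent.

    import numpy as np
    import pandas as pd

    # Hypothetical grid question: rows are respondents, columns are items;
    # NaN marks a missing item.
    grid = pd.DataFrame({
        "item1": [3, 5, np.nan, 2],
        "item2": [3, 4, 1, 2],
        "item3": [3, 5, 1, np.nan],
    })

    # Straightlining flag: only one distinct answer across the grid
    # (nunique ignores NaN; a crude indicator, for illustration only).
    straightlined = grid.nunique(axis=1) == 1

    # Item-missing rate: share of unanswered items per respondent.
    item_missing_rate = grid.isna().mean(axis=1)

    print(straightlined)
    print(item_missing_rate)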

Finally, we report on differences in factor structure between questionnaires completed in chunks and questionnaires completed in one go.

Added value

The idea of data chunking is not new: ‘planned missingness’ designs have been successfully implemented in web surveys in the past. This study is, however, the first to examine data chunking in the setting of mobile phone surveys. We believe that more and more data will be collected using mobile phones (already 5-25% of all web surveys are taken on mobile phones) and that understanding how to design questionnaires for mobile phones is of vital importance to survey researchers, market researchers, and anyone using such data for substantive purposes in the future.


The effect of horizontal and vertical scales on response behavior when switching to a mobile-first design

Christian Bruch, Annelies Blom, Katharina Burgdorf, Melvin John, Florian Keusch

University of Mannheim, Germany

Relevance & Research Question: The aim of this paper is to analyze the effects of horizontal and vertical scales on response behavior for smartphone versus tablet/desktop participants in the German Internet Panel (GIP). Changes in the way people use technology, in particular smartphones, affect the measurement quality of online surveys, which are increasingly becoming mixed-device surveys. On smartphone screens, for example, it is difficult to display horizontal scales, forcing survey designers to rethink the way answer options are displayed. Moving from traditional horizontal scales to smartphone-compatible vertical scales may, however, affect time series of established measures if respondents on desktops and/or smartphones answer differently on a horizontal than on a vertical scale.

Research questions of interest include:

Does the scale alignment have an effect on response behavior? In particular, do we find differential distribution effects across different alignments? And to what extent does the scale alignment affect item nonresponse and response times? Most importantly, are these effects different on desktops/tablets and smartphones?

Data: 59 experiments across six waves of the German Internet Panel, with about 3,198 respondents per wave.

Methods: ANOVAs, (multilevel) (logistic) regressions.
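
As a hedged illustration of the listed methods (this is not the authors' code; all variable names and the toy data are assumptions), a two-way ANOVA of log response times on scale alignment and device could look as follows in Python with statsmodels:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Toy data: one row per respondent, with scale alignment and device type.
    rng = np.random.default_rng(0)
    n = 400
    df = pd.DataFrame({
        "alignment": rng.choice(["horizontal", "vertical"], size=n),
        "device": rng.choice(["smartphone", "desktop_tablet"], size=n),
        "log_rt": rng.normal(loc=2.0, scale=0.3, size=n),  # log response time
    })

    # Main effects of alignment and device plus their interaction, mirroring
    # the question of whether alignment effects differ by device.
    model = smf.ols("log_rt ~ C(alignment) * C(device)", data=df).fit()
    print(anova_lm(model, typ=2))  # Type II ANOVA table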

Results: First analyses show no effect of scale alignment on variable distributions or response times for smartphone and desktop/tablet participants, indicating an unproblematic switch from horizontal to vertical scales. However, analyses of distribution effects show that respondents were significantly more likely to choose extreme scale points when presented with a vertical scale than with a horizontal scale.

Added Value: Participation in online surveys via smartphones is increasing, but horizontal scales are still common practice in questionnaire design. It is therefore of great importance to identify the effects of scale alignment on response behavior. Our 59 experiments, implemented across six waves of the GIP with about 3,198 respondents per wave, allow a comprehensive investigation of this effect.


Predictors of nonresponse at different phases in a smartphone-only Time Use Survey

Anne Elevelt, Peter Lugtig, Vera Toepoel

Utrecht University, The Netherlands

Relevance & Research Question: Smartphones are becoming increasingly important and widely used for survey completion. They offer many new possibilities for survey research: we can, for example, send pop-up questions in real time to measure participants' feelings, or record sensor data. However, as appealing as these new opportunities are, the questions we ask can become increasingly intrusive, and we risk over-asking participants, who may drop out in response. Nonresponse and nonresponse bias may differ across the phases of a study (e.g., the survey itself, pop-up questions, consent to record sensor data) because each phase is intrusive to a different degree. Fundamental methodological knowledge about nonresponse in smartphone-only studies is lacking, yet it is essential for understanding selection bias. Our main research question is therefore: how can we predict nonresponse at different phases of a smartphone-only survey?

Methods & data: We study an innovative smartphone-only Time Use Survey. The Dutch Institute for Social Research conducted its Time Use Survey in 2013 through a smartphone app, administered on two randomly chosen days of the week.

The study consisted of four phases (retention between phases is sketched after the list):

1. Invitation to participate in the study (n = 2154)

2. Participation in the Time Use Survey (n = 1610)

3. Answering pop-up questions (n = 1407)

4. Giving permission to record sensor data (e.g., GPS locations and call data) (n = 1004)
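
The phase counts above imply conditional retention rates, which the following short Python sketch computes directly from the reported n's (e.g., 1610/2154 ≈ 74.7% of invited panelists took part in the Time Use Survey, and 1004/2154 ≈ 46.6% completed all four phases):

    # Counts per phase, as reported above.
    phases = [
        ("invited", 2154),
        ("time_use_survey", 1610),
        ("pop_up_questions", 1407),
        ("sensor_consent", 1004),
    ]

    # Conditional retention: share of the previous phase reaching the next one.
    for (prev_name, prev_n), (name, n) in zip(phases, phases[1:]):
        print(f"{name}: {n}/{prev_n} = {n / prev_n:.1%} of {prev_name}")

    # Overall retention from invitation through sensor consent.
    print(f"overall: {phases[-1][1]}/{phases[0][1]} = {phases[-1][1] / phases[0][1]:.1%}")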

Results: We document the nonresponse and estimate nonresponse bias for each of the four phases. Because the data were collected in a panel, we can use predictors from earlier waves: we have not only the typical demographic information (age, sex, income) but also richer variables (personality, participation history, smartphone usage, survey attitude) that serve as covariates and predictors of nonresponse.
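
As a sketch of how such covariates could enter a model of phase-level nonresponse (hypothetical variable names and simulated toy data, not the authors' analysis), a logistic regression in Python with statsmodels:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy data: panel covariates and a 0/1 indicator of participating
    # in one phase (e.g., consenting to sensor-data recording).
    rng = np.random.default_rng(1)
    n = 1000
    df = pd.DataFrame({
        "age": rng.integers(18, 80, size=n),
        "female": rng.integers(0, 2, size=n),
        "smartphone_hours": rng.exponential(2.0, size=n),
        "participation_history": rng.uniform(0, 1, size=n),  # share of past waves completed
    })
    # Simulate participation so the toy example has signal to recover.
    logit_p = -1.0 + 2.0 * df["participation_history"] + 0.1 * df["smartphone_hours"]
    df["participated"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    # Logistic regression of phase participation on panel covariates.
    model = smf.logit(
        "participated ~ age + female + smartphone_hours + participation_history",
        data=df,
    ).fit()
    print(model.summary())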

Added value: This study provides knowledge about bias in smartphone-only studies, a field that remains relatively unexplored. We used participants from the LISS panel, which aims to be representative of the Dutch population. Knowledge about who does and who does not participate, and about how smartphone studies may be biased, is therefore valuable for any online researcher considering such a study.



 
Conference: GOR 17