Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
A 1: Smartphones in Surveys
Time:
Thursday, 10/Sep/2020:
10:30 - 11:30

Session Chair: Bella Struminskaya, Utrecht University, The Netherlands

Presentations

Effects of mobile assessment designs on participation and compliance: Experimental and meta-analytic evidence

David Richter1, Cornelia Wrzus2

1DIW Berlin, Germany; 2University of Heidelberg, Germany

When conducting mobile-phone-based assessments in people’s daily lives, researchers need to know how design characteristics (e.g., study duration, sampling frequency) affect selectivity and compliance, that is, who will participate in the study and how much information they will provide. We addressed the issue of selectivity in the Innovation Sample of the Socio-Economic Panel, whose members were invited to participate in an experience sampling method (ESM) study on happiness and were offered either feedback only (2015) or feedback plus monetary reimbursement (2016). Participation increased from 7% in 2015 to 36% in 2016, when participants received feedback and monetary reimbursement, and compliance was much higher as well (29% in 2015 vs. 86% in 2016). Furthermore, participants differed from non-participants in age and gender, but only negligibly in personality characteristics. To further examine design effects on participants’ compliance, we conducted a meta-analysis of ESM studies from 1987 to 2018, from which we coded a random subsample of 402 studies with respect to sample characteristics, study design (e.g., study duration, sampling type and frequency, sensor usage), type of incentive, as well as compliance and dropout. Initial results showed that associations between design characteristics and compliance varied with sample type and type of incentive. For example, in adolescent and young adult samples, compliance was non-linearly related to the number of assessments, whereas in adult samples compliance increased with the number of assessments. Providing incentives to participants, especially monetary incentives, predicted higher compliance rates compared to no incentives, except in physically ill samples. This latter effect is likely attributable to the high intrinsic motivation to provide information among participants dealing with chronic and other illnesses. We thus conclude that both the study design and the incentive should be adapted to the intended sample, and we offer initial empirical findings to guide these decisions.
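
The abstract does not specify the meta-analytic model, but a compliance-on-design meta-regression with a quadratic term for the number of assessments, matching the non-linear pattern described above, could look like the following sketch (all data, column names, and the weighting scheme are illustrative, not the authors' analysis):

```python
# Hypothetical sketch of a meta-regression relating study-level compliance
# to the number of scheduled assessments, with a quadratic term to allow
# the non-linear pattern described above. All values are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative per-study data: compliance rate, number of scheduled
# assessments, and sample size (a crude stand-in for inverse-variance weights).
studies = pd.DataFrame({
    "compliance":    [0.86, 0.74, 0.91, 0.65, 0.80, 0.78, 0.88, 0.70],
    "n_assessments": [30,   90,   14,   120,  60,   45,   21,   100],
    "sample_size":   [120,  85,   200,  60,   150,  95,   180,  70],
})

X = sm.add_constant(np.column_stack([
    studies["n_assessments"],
    studies["n_assessments"] ** 2,  # quadratic term for non-linearity
]))
model = sm.WLS(studies["compliance"], X, weights=studies["sample_size"]).fit()
print(model.summary())
```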



Using geofences to trigger surveys in an app

Georg-Christoph Haas1,2, Mark Trappmann1,4, Florian Keusch2, Sebastian Bähr1, Frauke Kreuter1,2,3

1Institut für Arbeitsmarkt- und Berufsforschung der Bundesagentur für Arbeit (IAB), Germany; 2University of Mannheim, Germany; 3University of Maryland, United States of America; 4University of Bamberg, Germany

Relevance & Research Question: Within the survey context, geofences can be defined as geographical spaces that trigger a survey invitation when an individual enters, leaves, or stays within this space for a prespecified amount of time. Geofences may be used to administer context-specific surveys, e.g., an evaluation survey of a shopping experience at a specific retail location. While geofencing is already used in other contexts (e.g., marketing and retail), this technology has so far been underutilized in survey research. In this talk, we will share our experiences with the implementation of geofences within an app-based data collection study. Given the limited research on this topic, we will answer the following exploratory research questions: How well did the geofencing approach work? For what reasons does geofencing fail?
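
To make the trigger conditions concrete, here is a minimal, hypothetical sketch of enter/exit/dwell detection against a circular geofence (our illustration only; the study itself relied on the Google Geofence API, as described below):

```python
# Hypothetical sketch of geofence trigger logic: a circular fence with
# enter, exit, and dwell events. Not the IAB-SMART implementation.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Geofence:
    lat: float
    lon: float
    radius_m: float   # fence radius in meters
    dwell_s: float    # minimum stay (seconds) to count as a "dwell"

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def events(fence, track):
    """Yield (timestamp, event) for a time-ordered [(t, lat, lon), ...] track."""
    inside, entered_at = False, None
    for t, lat, lon in track:
        now_inside = haversine_m(fence.lat, fence.lon, lat, lon) <= fence.radius_m
        if now_inside and not inside:
            entered_at = t
            yield t, "ENTER"
        elif not now_inside and inside:
            yield t, "EXIT"
        elif now_inside and entered_at is not None and t - entered_at >= fence.dwell_s:
            yield t, "DWELL"     # a survey invitation could be triggered here
            entered_at = None    # fire the dwell event only once per visit
        inside = now_inside
```

A survey app would send the invitation on whichever event matches the prespecified condition (entering, leaving, or staying).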

Methods & Data: In 2018, we invited participants of the PASS panel survey to download the IAB-SMART app. The app passively collected smartphone sensor data (e.g., geolocation and app usage) and administered short surveys. Overall, 687 individuals installed the app. While most in-app surveys were triggered on a predefined time schedule, one survey module was triggered by a geofence. To define geofences and trigger survey invitations, our app used the Google Geofence API.

Results: Overall, the app sent 230 invitations and received 225 responses from 104 participants. However, in only 56 of the 225 responses did participants state that they had actually been inside the geofence. Cross-validating the Google Geofence API survey triggers against our custom-built geolocation measurement in the app shows frequent mismatches between the two. Our data indicate that, in many cases, individuals should not have received a survey invitation because they were not actually within the specified geofence.
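
Such a cross-validation reduces to a simple check: for each trigger, was the app's own geolocation fix closest in time to the trigger actually within the fence radius? A hypothetical sketch, reusing the Geofence and haversine_m helpers from the previous sketch (the time-gap threshold is an assumption):

```python
# Hypothetical cross-validation sketch: flag survey triggers whose nearest
# in-app geolocation fix lies outside the geofence radius.
def mismatched_triggers(fence, triggers, fixes, max_gap_s=300):
    """triggers: [t, ...]; fixes: non-empty, time-ordered [(t, lat, lon), ...].

    Returns trigger times whose closest fix (within max_gap_s seconds)
    was outside the fence, i.e., likely false invitations.
    """
    mismatches = []
    for t in triggers:
        # Find the geolocation fix closest in time to the trigger.
        nearest = min(fixes, key=lambda f: abs(f[0] - t))
        if abs(nearest[0] - t) > max_gap_s:
            continue  # no fix close enough in time to judge this trigger
        _, lat, lon = nearest
        if haversine_m(fence.lat, fence.lon, lat, lon) > fence.radius_m:
            mismatches.append(t)
    return mismatches
```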

Added Value: The existing literature on geofencing (largely consisting of YouTube videos and short blog posts) provides only a short introduction to this technology, and virtually no use of geofencing is documented in survey research. Our presentation evaluates the reliability of geofences, shares lessons learned, and discusses the limitations of the geofencing approach for a broader audience.



Mobile-friendly design in web surveys: Increasing user convenience or additional error sources?

Jean Philippe Decieux1, Philipp Emanuel Sischka2

1University of Duisburg-Essen, Germany; 2University of Luxembourg, Luxembourg

Relevance

At the beginning of the era of online surveys, questionnaires were programmed to be answered on desktop PCs or notebooks. However, owing to technical developments such as the increasing role of mobile devices, research on online surveys has detected a rise in the share of questionnaires answered on mobile devices (MD). Survey navigation on an MD differs from navigation on a PC: it takes place on a smaller screen and usually involves a touchscreen rather than a mouse and keyboard. Due to these differences in questionnaire navigation, some traditionally used web question formats are no longer convenient to answer on an MD; the most prominent of these are matrix questions. To deal with this development, so-called mobile-friendly or responsive designs were developed, which change the layout of specific questions that are inconvenient on an MD into an alternative mobile-friendly design. In the case of matrix questions, these are split into item-by-item questions, which are assumed to be more comfortable to answer on a mobile device.
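
To make the responsive-design idea concrete, the following minimal sketch (our illustration, not GERPS survey code; all names and the device check are hypothetical) stores a matrix question once and renders it either as a single grid or as item-by-item pages:

```python
# Hypothetical sketch: one question definition, two device-dependent renderings.
from dataclasses import dataclass

@dataclass
class MatrixQuestion:
    prompt: str
    items: list   # row statements
    scale: list   # shared answer categories

def render(question, is_mobile):
    """Return a list of 'pages': one grid on desktop, one item per page on mobile."""
    if not is_mobile:
        # Desktop: a single matrix page, all items sharing the scale header.
        return [{"layout": "matrix", "prompt": question.prompt,
                 "items": question.items, "scale": question.scale}]
    # Mobile: item-by-item pages, each repeating the scale.
    return [{"layout": "single", "prompt": question.prompt,
             "item": item, "scale": question.scale}
            for item in question.items]

q = MatrixQuestion(
    prompt="How satisfied are you with ...",
    items=["your work", "your income", "your health"],
    scale=["very dissatisfied", "dissatisfied", "satisfied", "very satisfied"],
)
print(len(render(q, is_mobile=False)), "page(s) on desktop")  # 1
print(len(render(q, is_mobile=True)), "page(s) on mobile")    # 3
```

The psychometric question raised above is precisely whether these two renderings of the same question yield equivalent measurements.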

Research question

However, from a psychometric perspective, the question of whether these changes in question format produce comparable results is too often ignored. This paper therefore addresses the following research question: Do different versions of responsive designs actually produce equivalent responses?

Data & Methods

Using data from the first two waves of the German Emigration and Remigration Panel Study (GERPS), we can base our analysis on more than 19,000 cases (approx. 7,000 using an MD). As GERPS makes use of a responsive design, we are able to investigate measurement invariance between MD and desktop device groups.
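
Measurement invariance between device groups is typically tested by fitting increasingly constrained multi-group models (configural, metric, scalar) and comparing nested models with a chi-square difference test. A minimal sketch of that comparison step, with purely illustrative fit statistics (not GERPS results):

```python
# Hypothetical sketch of a chi-square difference test between two nested
# multi-group CFA models (e.g., configural vs. metric invariance).
from scipy.stats import chi2

def chisq_diff_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Likelihood-ratio test: does constraining parameters worsen model fit?"""
    d_chisq = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    p = chi2.sf(d_chisq, d_df)
    return d_chisq, d_df, p

# Illustrative values: configural (free) vs. metric (equal loadings) model.
d_chisq, d_df, p = chisq_diff_test(
    chisq_restricted=312.4, df_restricted=176,   # metric model
    chisq_free=289.1, df_free=168,               # configural model
)
print(f"Delta chi-square = {d_chisq:.1f}, delta df = {d_df}, p = {p:.3f}")
# A significant p would indicate that loadings differ between the MD and
# desktop groups, i.e., metric invariance does not hold.
```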

Results

As data management is still in progress and will be finished at the end of October, we will be able to present first-hand information based on fresh data. Initial analyses already reveal differences between the MD and desktop device versions.

Added Value

Our study is one of the first to examine the equivalence of responsive design options. It thus broadens the perspective on possible new biases and error sources arising from the increased use of MDs within web surveys.