A07: Mobile Surveys
How Do Different Device Specifications Affect Data Collection Using Mobile Devices?
University of Essex, United Kingdom
Relevance & Research Question:
Previous studies have found differences between surveys completed on mobile devices and on desktops. A wide variety of mobile devices are used to respond to surveys, yet little is known about how the specifications of these devices affect response behaviour and data quality. This research aims to answer three questions: What proportions of the variance in different response behaviour and data quality indicators can be attributed to the device and to the participant? Do specific device characteristics affect the response behaviour and data quality indicators? Do the observed effects of device characteristics remain after controlling for respondent characteristics?
Methods & Data:
Data are from the Understanding Society Spending Study One, an app-based study asking participants to photograph receipts or submit information about their purchases. It was embedded within the Understanding Society Innovation Panel, a household panel with a probability-based sample representative of Great Britain. The make, model, and operating system of the mobile devices were captured. Additional data on device characteristics, including RAM, camera quality, screen size, and processor performance, were collected using Amazon Mechanical Turk and web scraping.
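The enrichment step described above can be sketched as a simple lookup join: given the captured model code, attach the crowdsourced or scraped characteristics. The model codes, spec values, and function names below are illustrative stand-ins, not the study's actual data.

```python
# Hypothetical spec table assembled from crowdsourced/scraped sources;
# the entries are illustrative, not the study's data.
DEVICE_SPECS = {
    "SM-G950F": {"ram_gb": 4, "camera_mp": 12, "screen_in": 5.8},
    "iPhone9,3": {"ram_gb": 2, "camera_mp": 12, "screen_in": 4.7},
}

def enrich(record: dict) -> dict:
    """Attach device characteristics to a survey record by model code."""
    specs = DEVICE_SPECS.get(record["model"], {})
    return {**record, **specs}

row = enrich({"respondent": 101, "model": "SM-G950F"})
print(row["ram_gb"])  # 4
```

Records whose model code is missing from the table simply pass through unchanged, which mirrors the practical situation where specs cannot be found for every device.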
Cross-classified multilevel models were used to examine the clustering effects of both respondents and devices. The survey outcomes examined include the duration of app use, the quality of the images produced, and the type of submission made.
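The "proportion of variance attributable to the device" in a cross-classified model is usually summarised as a variance partition coefficient (VPC). A minimal sketch of the calculation, using made-up variance components rather than estimates from the study:

```python
# Hypothetical variance components for one survey outcome
# (illustrative values, not the study's estimates)
var_respondent = 1.00   # between-respondent variance
var_device = 0.25       # between-device variance
var_residual = 2.25     # residual (within-cell) variance

total = var_respondent + var_device + var_residual

# Variance partition coefficient: share of total outcome variance
# attributable to each classification in a cross-classified model
vpc_device = var_device / total
vpc_respondent = var_respondent / total
print(f"device VPC: {vpc_device:.3f}, respondent VPC: {vpc_respondent:.3f}")
# → device VPC: 0.071, respondent VPC: 0.286
```

A VPC near zero for devices would mean device clustering can safely be ignored; the abstract's finding of "sizeable proportions" for some outcomes argues against that simplification.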
Results: In some instances, sizeable proportions of the variance in survey outcomes can be attributed to the device used. Strong associations were also found between some device characteristics and survey outcomes; in some cases these were stronger than the associations between respondent characteristics and survey outcomes. Multivariate analyses produced some results consistent with device characteristics having a direct effect on survey outcomes, most prominently for certain outcomes such as the quality of scanned receipt images.
This paper first demonstrates how additional device characteristics may be collected. It then assesses the relative impact of devices and respondents, highlighting the need to consider the devices used to complete survey tasks at a more granular level than simply dichotomising into mobile versus desktop.
Does the layout make a difference? An experiment on effects of online survey layout and device on data quality
GESIS Leibniz Institute for the Social Sciences, Germany
Relevance & Research Question:
Nowadays, the majority of online surveys are effectively mixed-device studies, completed on both PCs and smartphones. This requires rethinking web survey design conventions to accommodate both device types. Previous studies have mainly examined either the effect of the device or the effect of the layout on data quality separately. Furthermore, most studies suffer from respondents' self-selection of a preferred device, so measurement effects cannot be disentangled from selection effects. To address this gap, we examine the combined effect of device and layout on data quality.
Methods & Data:
In an experimental study we applied a 2x3 factorial design to test the main effects and interactions of two factors: 1) the device on which respondents were invited to participate (desktop vs. mobile device) and 2) the online survey layout presented (optimized for desktop vs. optimized for smartphones vs. built-in adaptive layout).
In October 2018, respondents from an online access panel in Germany were randomly invited to one of six experimental groups. We applied quota sampling on age, sex, and education. Overall, 3,300 respondents finished the survey, which results in about 550 respondents per treatment group.
The experimental design allows us to examine how the treatment groups differ on several indicators of data quality (e.g., break-off rates, duration, response styles, and characters in open-ended questions) and their evaluation of survey experience.
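The 2x3 assignment can be sketched as seeded, reproducible randomisation into the six cells. The cell labels and seeding scheme below are our assumptions for illustration, not the panel vendor's actual procedure.

```python
import random

DEVICES = ["desktop", "mobile"]
LAYOUTS = ["desktop-optimized", "smartphone-optimized", "adaptive"]
CELLS = [(d, l) for d in DEVICES for l in LAYOUTS]  # 2 x 3 = 6 groups

def assign(respondent_id: int, seed: str = "panel-2018") -> tuple:
    """Reproducibly assign a respondent to one of the six experimental cells."""
    rng = random.Random(f"{seed}:{respondent_id}")  # per-respondent seed
    return rng.choice(CELLS)

# The same respondent always lands in the same cell
print(assign(42) == assign(42))  # True
```

Seeding per respondent keeps the assignment auditable: rerunning the script reproduces the experimental groups exactly, which matters when invitations and reminders are sent in separate batches.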
Results: So far, preliminary analyses show that break-off rates are generally higher among smartphone users than desktop users (19% vs. 9%) and are especially high in the group that received the desktop-optimized version of the questionnaire on a smartphone (26%). In contrast to previous research, overall survey evaluation does not differ between the treatment groups. Detailed analyses of the more advanced indicators will be available in early 2019.
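A gap of this size (9% vs. 19%) can be checked with a standard two-proportion z-test. Only the percentages come from the abstract; the invitation counts below are assumptions for illustration, since the abstract reports completions rather than invitations.

```python
from math import sqrt, erf

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical invitation counts per device arm (assumed, not reported)
n_desktop, n_mobile = 1800, 2000
z, p = two_prop_ztest(round(0.09 * n_desktop), n_desktop,
                      round(0.19 * n_mobile), n_mobile)
print(f"z = {z:.2f}, p = {p:.4g}")
```

With samples of this order, a ten-point difference in break-off rates is far outside sampling noise, which is consistent with the abstract treating it as a substantive device effect.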
This study will provide evidence on the effect of layout choices on data quality, depending on the device used. Furthermore, it offers information about comparability of results in mixed-device studies and practical guidance for designing mixed-device studies.
Dispelling Smartphone Data Collection Myths: Uptake and Data Quality in the UK Office for National Statistics (ONS) Large Random Probability Mixed-Device Online Survey Experiments
University of Southampton, United Kingdom
Relevance & Research Question: Social surveys are increasingly conducted via online data collection, and they have also started to embrace smartphones for data collection. In the UK, there is a significant move towards online data collection, including the ambition to move established household surveys such as the Labour Force Survey (LFS), as well as the next UK Census in 2021, online. Since most online surveys allow participants to respond not only via PCs/laptops and tablets but also via smartphones, it is important to understand the associated data quality issues. Concerns still exist that smartphones produce lower data quality, and little is known yet about longitudinal mixed-device social surveys in the UK. This research is timely and fills the knowledge gaps in these areas.
Methods & Data: We use Office for National Statistics (ONS) data for LFS online experiments (Test 1, Test 1b and Test 2) which were conducted in 2017. The main aim of these experiments was to move LFS online.
Descriptive analysis followed by linear, logistic, or multinomial logistic regression, depending on the outcome variable, is used to study the data quality indicators associated with different devices. The following data quality indicators are assessed in cross-sectional and longitudinal contexts: break-off rates, response latencies, timeout rates, restart rates, and differential reporting.
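The choice of regression family follows the measurement level of each indicator. The mapping below is our reading of the indicator types, not the authors' stated specification:

```python
# Illustrative mapping from the data quality indicators named above to a
# regression family; the pairings are our interpretation, not the study's.
MODEL_FOR_OUTCOME = {
    "response_latency": "linear",          # continuous duration
    "break_off": "logistic",               # binary: completed vs. broke off
    "timeout": "logistic",                 # binary event indicator
    "restart": "logistic",                 # binary event indicator
    "differential_reporting": "multinomial logistic",  # categorical response
}

def choose_model(indicator: str) -> str:
    """Return the regression family assumed for a data quality indicator."""
    return MODEL_FOR_OUTCOME[indicator]

print(choose_model("break_off"))  # logistic
```

Making the mapping explicit in analysis code helps keep the cross-sectional and longitudinal runs consistent when the same indicators are modelled in both contexts.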
Results: This paper compares data quality between smartphones and other devices in the UK ONS large-scale social survey experiments, and assesses data quality between devices in the longitudinal context. The good news is that we can be less concerned about allowing survey completion on smartphones where a mobile-first questionnaire design is used, as data quality is not very different across devices.
Added Value: The findings from this analysis will be instrumental to a better understanding of data quality issues associated with mixed-device surveys in UK cross-sectional and longitudinal contexts, in general and specifically for informing online versions of social surveys and the next UK Census in 2021. The results can help improve survey design and response rates, as well as reduce survey costs and effort.