Conference Agenda

Session A 3.1: New Technologies in Surveys
Time: Thursday, 10/Sep/2020, 3:30 - 4:30

Session Chair: Bella Struminskaya, Utrecht University, The Netherlands

Presentations

Effects of the Self-View Window during Video-Mediated Survey Interviews: An Eye-tracking Study

Shelley Feuer1, Michael F. Schober2

1U.S. Census Bureau, United States of America; 2The New School for Social Research, United States of America

Relevance & Research Question: In video-mediated (Skype) survey interviews, how does the small self-view window affect people's disclosure of sensitive information and their self-reported feelings of comfort during the interview? This study replicates and expands on previous research by (a) tracking where video survey respondents look on the screen—at the interviewer, at the self-view, or elsewhere—while answering questions and (b) examining how gaze location and duration differ for sensitive vs. nonsensitive questions and for more and less socially desirable answers.

Methods & Data: In a laboratory experiment, 133 respondents answered sensitive questions (e.g., sexual behaviors) and nonsensitive questions (e.g., reading novels) taken from large-scale US government and social scientific surveys over Skype. Respondents were randomly assigned to a condition with or without a self-view window, and interviewers were unaware of the self-view manipulation. Gaze was recorded using an unobtrusive eye-tracking system.

Results: The results show that respondents who could see themselves looked more at the interviewer during question-answer sequences about sensitive (compared to nonsensitive) questions, while respondents without a self-view window did not. Respondents who looked more at the self-view window reported feeling less self-conscious and less worried about how they presented to the interviewer during the interview. Additionally, the self-view window increased disclosure for a subset of sensitive questions, specifically, total number of sex partners and frequency of alcohol use. Respondents who could see themselves reported perceiving the interviewer as more empathic, and reported having thought more about what they said (arguably reflecting increased self-awareness). For all respondents, gaze aversion—looking away from the screen entirely—was linked to sensitive (or socially undesirable) responses and self-presentation concerns.

Added Value: Together, the findings demonstrate that gaze patterns in video-mediated interviews can be informative about respondents’ experience and their response processes. The promise is that findings like these can contribute to the design of new, potentially cost-saving video-based data collection interfaces. This study also provides necessary groundwork for continued investigation not only of mode effects on disclosure in surveys (as one measure of response accuracy) but also of interactive discourse more generally.



Measuring expenditure with a mobile app: How do nonprobability and probability panels compare?

Carina Cornesse1, Annette Jäckle2, Alexander Wenz1,2, Mick Couper3

1University of Mannheim, Germany; 2University of Essex, United Kingdom; 3University of Michigan, United States of America

Relevance & Research Question: A number of studies have compared nonprobability and probability-based panels, but mostly with regard to survey sample accuracy. In this presentation, we compare nonprobability and probability-based panels on a new dimension: we examine what happens when panel members are asked to use a mobile app to record their spending. We answer the following research questions: Do different types of people participate in the app study? Are there differences in how participants use the app? Do differences between samples matter for key outcomes? And do differences between samples remain after weighting?

Methods & Data: To answer our research questions, we use data from Spending Study 2, an app study implemented from May to December 2018 in two different panels in Great Britain: the Understanding Society Innovation Panel, a probability-based panel, and Lightspeed UK, a nonprobability online access panel. In both panels, participants were asked to download a mobile app and use it for one month to report their spending. In our presentation, we compare the app data collected from the participants of the two panels.

Results: Our analyses show that different types of people participate in the app study in the nonprobability and the probability-based panel, both in terms of socio-demographic characteristics and with regard to digital affinity and financial behavior. Furthermore, the app study leads to different conclusions about key substantive outcomes, such as the total amount and type of spending. Moreover, differences between the app study samples on substantive variables remain after weighting for socio-demographic characteristics. Only the way in which the app study participants use the app does not seem to differ between the panels.

Added Value: Our study contributes to the ongoing discussion on nonprobability and probability-based panels by adding new empirical evidence. Moreover, our study is the first to examine app study data rather than survey data. Furthermore, it covers a wide range of data quality aspects, including sample accuracy, respondent participation behavior, and weighting procedures. We thereby contribute to widening the debate to non-survey data and multi-dimensional data quality assessments.



Are respondents on the move when filling out a mobile web survey? Evidence from an app- and browser-based survey of the general population

Jessica Herzing1, Caroline Roberts1, Daniel Gatica-Perez2

1Université de Lausanne, Switzerland; 2EPFL and Idiap, Switzerland

Relevance & Research Question: Mobile devices are designed to be used while people are on the move. In the context of a mobile web survey, researchers should consider the potential consequences of respondent mobility for data quality. Being exposed to sources of distraction could result in suboptimal answers and an increased risk of breakoff. This study investigates whether there are between-device differences (web and mobile web browser vs. smartphone app) in terms of the context in which questionnaires are completed. We consider: 1) day, time and location of survey participation; 2) whether participants’ location changes during completion; and 3) whether differences in completion context are related to breakoffs and item nonresponse.

Methods & Data: We use data from an experiment embedded in a three-wave probability-based general population survey conducted in Switzerland in 2019 (N = 2,000). Half the sample was assigned to an app-based survey; the other half to a browser-based survey encouraging mobile web completion. We use a combination of questionnaire data (on current location), paradata (timestamps and location indicators), and respondents’ photos of their surroundings taken at the beginning and end of the survey to gain insight into completion conditions.

Results: Our results suggest that only a minority of respondents were ‘on the move’ while filling out the survey questionnaires. Mobile web browser users were more likely to answer in the evening, while PC browser users tended to respond in the late afternoon. Photographs indicate that app users tended to complete the survey at home, although the app was designed to be used on the move (using a modular design with questionnaire chunks that took less than three minutes to complete). Furthermore, app users were unwilling to move outside to complete a different photo task.

Added Value: The findings inform the design of mobile web surveys, providing insights into ways to optimise data collection protocols (e.g. by tailoring the timing of survey requests in a mixed-device panel design) and to improve the onboarding procedure for smartphone app respondents. The provision of unique log-in credentials may have inhibited participant mobility and limited the possibility of taking advantage of this key feature of mobile internet technology.