Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
A6.3: Voice Recording in Surveys
Time:
Friday, 10/Sept/2021:
3:10 - 4:20 CEST

Session Chair: Bella Struminskaya, Utrecht University, The Netherlands

Presentations

Willingness to provide voice-recordings in the LISS panel

Katharina Meitinger1, Matthias Schonlau2

1Utrecht University, Netherlands; 2University of Waterloo, Canada

Relevance & Research Question: Technological advancements now make it possible to explore the potential of voice recordings for open-ended questions in smartphone surveys (e.g., Revilla & Couper 2019). Voice recordings may also be useful for web surveys covering the general population. However, it is unclear whether respondents are willing to provide voice recordings, which respondents prefer to do so, and which prefer to type their responses to open-ended questions.

Methods & Data: We report on an experiment implemented in the LISS panel in December 2020. Respondents were randomly assigned to a voice-recording-only group, a text-recording-only group, or a group that could choose between voice and text recording. We will report who prefers voice recordings and which factors influence these preferences (e.g., perceived anonymity of the data, presence of bystanders during data collection).
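As an illustration of the three-arm design, a minimal sketch of the random assignment is given below. The abstract does not specify the allocation mechanism, so the equal allocation probabilities and the TypeScript rendering are assumptions, not the LISS implementation:

```typescript
type Arm = "voice-only" | "text-only" | "choice";

// Assign a respondent to one of the three experimental arms.
// Equal allocation probabilities are assumed here; the actual
// LISS randomization scheme is not described in the abstract.
function assignArm(): Arm {
  const arms: Arm[] = ["voice-only", "text-only", "choice"];
  return arms[Math.floor(Math.random() * arms.length)];
}
```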

Results: Preliminary analyses indicate that respondents show a strong preference for providing written rather than voice recordings. We expect that respondents who are concerned about the anonymity of their data and those who had bystanders present during data collection are even less willing to provide voice recordings.

Added Value: This research provides important insights into whether voice recording is a viable alternative for collecting answers to open-ended questions in general social surveys. The results also reveal factors that need to be addressed to increase respondents’ willingness to provide such data.



Audio and voice inputs in mobile surveys: Who prefers these communication channels, and why?

Timo Lenzner1, Jan Karem Höhne2,3

1GESIS - Leibniz Institute for the Social Sciences, Germany; 2University of Duisburg-Essen, Germany; 3Universitat Pompeu Fabra, Research and Expertise Centre for Survey Methodology, Barcelona, Spain

Relevance & Research Question: Technological advancements and changes in online survey participation pave the way for new forms of data collection. In particular, the increasing share of smartphone respondents in online surveys invites a reconsideration of the prevailing communication channels, both to make the communication between researchers and respondents more natural and to collect high-quality data. For example, if respondents participate in online surveys via a smartphone, pre-recorded audio files can be employed so that the questions are read out loud to them (audio channel). Moreover, in this setting, respondents’ answers can be collected using the voice recording function of smartphones (voice channel). So far, little is known about whether respondents are willing to accept this kind of change in communication channels. In this study, we therefore investigate respondents’ willingness to participate in smartphone-based online surveys in which the questions are read aloud and answers are given orally via voice input.
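To make the two channels concrete, the sketch below shows how a browser-based smartphone survey could play a pre-recorded question and record a spoken answer, using the standard HTMLAudioElement and MediaRecorder browser APIs. The file name, the upload endpoint, and the 30-second stop are hypothetical placeholders; the abstract does not describe the authors’ actual implementation.

```typescript
// Sketch: audio channel (play a pre-recorded question) and
// voice channel (record the respondent's spoken answer).
// "question1.mp3" and uploadAnswer() are hypothetical placeholders.

async function askByVoice(): Promise<void> {
  // Audio channel: read the question out loud via a pre-recorded file.
  const question = new Audio("question1.mp3");
  await question.play();

  // Voice channel: capture the answer with the device microphone.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    const answer = new Blob(chunks, { type: "audio/webm" });
    uploadAnswer(answer); // hypothetical survey-backend upload
  };
  recorder.start();
  // A real instrument would stop when the respondent taps "done";
  // stopping after 30 seconds here is only a placeholder.
  setTimeout(() => recorder.stop(), 30_000);
}

// Hypothetical upload helper; a real survey would POST to its backend.
function uploadAnswer(blob: Blob): void {
  void fetch("/api/answers", { method: "POST", body: blob });
}
```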

Methods & Data: We conducted a survey with 2,146 respondents recruited from an online access panel. Respondents received two willingness questions, one on the audio channel and one on the voice channel, each followed up by an open-ended question asking for the reasons for their (non)willingness to use these communication channels. The study was fielded in Germany in November 2020.

Results: The data are still being analyzed. We will report the results as follows: first, we analyze how many respondents reported being (un)willing to use the audio and/or voice channel when answering a survey. Next, we examine the reasons they provided for their (non)willingness. Finally, we examine which respondent characteristics (e.g., gender, age, educational level, professional qualification, use of internet-enabled devices, self-reported internet and smartphone skills, and affinity for technology) are associated with higher levels of willingness.

Added Value: This study adds to the scarce literature on respondents’ (non)willingness to answer surveys using the audio playback and voice recording functions of their smartphones. To our knowledge, it is the first study to examine the reasons for respondents’ (non)willingness by means of open-ended questions.



Effect of Explicit Voice-to-Text Instructions on Unit Nonresponse and Measurement Errors in a General Population Web Survey

Z. Tuba Suzer-Gurtekin, Yingjia Fu, Peter Sparks, Richard Curtin

University of Michigan, United States of America

Relevance & Research Question: Among web survey design principles, one of the most frequently cited considerations is reducing respondent burden. This stems largely from the self-administered nature of web surveys and respondents’ reliance on their own technology. Reduced respondent burden is often hypothesized to be associated with lower nonresponse and measurement errors. One way to operationalize reduced burden is to adapt technology that is widely used for other tasks to survey-taking. Digital voice assistants are one such widely used technology, and their adaptation has the potential to improve nonresponse and measurement quality in web survey data: the Pew Research Center reports that 42% of U.S. adults use digital voice assistants on their smartphones (Pew Research Center, 2021).

Methods & Data: This study presents results from a randomized experiment in which respondents in the experimental arm were told they could use voice-to-text instead of typing for six open-ended follow-up questions. This option was presented only in the smartphone layout of an address-based sampling web survey of the U.S. adult population.
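One common way to offer in-page voice-to-text in mobile browsers is the Web Speech API, sketched below. The abstract does not state how the voice-to-text option was implemented (it may simply rely on the smartphone keyboard’s built-in dictation), so this sketch is an illustrative assumption rather than the study’s design; the element id "answer-box" is also hypothetical.

```typescript
// Sketch: in-page voice-to-text for an open-ended question using the
// Web Speech API (Chrome exposes it as webkitSpeechRecognition).
// The element id "answer-box" is a hypothetical placeholder.

const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

function dictateAnswer(): void {
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = "en-US";
  recognition.onresult = (event: any) => {
    // Write the transcribed speech into the open-ended answer field.
    const transcript = event.results[0][0].transcript;
    const box = document.getElementById("answer-box") as HTMLTextAreaElement;
    box.value = transcript;
  };
  recognition.start(); // browser prompts for microphone permission
}
```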

Results: We will report (1) completion rates by device and initiation type (typing, QR code, email link), (2) item nonresponse rates, (3) codeable and noncodeable response rates, and (4) the mean number of words in open-ended responses, each compared across the two experimental arms.
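For metric (4), a minimal sketch of computing the mean number of words per open-ended response by arm is given below; the record shape and arm labels are hypothetical, not the study’s actual data layout.

```typescript
// Hypothetical record shape for one open-ended response.
interface OpenResponse {
  arm: "voice-to-text" | "typing-only";
  text: string;
}

// Mean number of words in nonempty open-ended responses for one arm.
function meanWords(responses: OpenResponse[], arm: OpenResponse["arm"]): number {
  const counts = responses
    .filter((r) => r.arm === arm && r.text.trim().length > 0)
    .map((r) => r.text.trim().split(/\s+/).length);
  return counts.length ? counts.reduce((a, b) => a + b, 0) / counts.length : 0;
}
```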

Added Value: Our monthly data since January 2017 show an increase in completion rates on smartphones, and this study will serve as a baseline for further understanding the general population’s survey-taking behavior on smartphones.