Conference Agenda

Session Overview
Session
A9: Interactive Features and Innovations in Web Surveys
Time:
Friday, 17/Mar/2017:
11:40 - 13:00

Session Chair: Stefan Niebrügge, INNOFACT AG, Germany
Location: A 208

Presentations

Conversational Survey Frontends: How Can Chatbots Improve Online Surveys?

Christopher Harms, Sebastian Schmidt

SKOPOS GmbH & Co. KG, Huerth, Germany

Relevance & Research Question:

Even though online chats have been around for a long time, the tremendous success of WhatsApp and Facebook Messenger has fundamentally changed how people interact and exchange information. With the appearance of “intelligent”, machine-learning-based chatbots, we expect the areas of application to become even more versatile, which raises the question of how chatbots can be utilized for market research purposes.

Chatbots consist of two components: (a) a frontend that serves as the user's point of interaction and (b) a backend, often based on Natural Language Processing algorithms, that handles user requests and sends appropriate responses back to the user.
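For illustration, a minimal sketch of this two-part design (in Python, with invented question texts and a simple in-memory session store; not the authors' actual system) could look as follows: the messenger frontend forwards each incoming user message to a backend handler, which records the answer and replies with the next survey question.

    # Hypothetical backend handler for a chatbot survey frontend.
    SURVEY = [
        {"id": "q1", "text": "How many hours per day do you watch video content?", "type": "open"},
        {"id": "q2", "text": "How do you usually get to work?", "type": "choice",
         "options": ["Car", "Public transport", "Bike", "On foot"]},
    ]

    sessions = {}  # user_id -> index of the next question to ask


    def handle_message(user_id, message):
        """Record the incoming answer (if any) and return the next question text."""
        idx = sessions.get(user_id, 0)
        if idx > 0:
            print(f"answer from {user_id} to {SURVEY[idx - 1]['id']}: {message}")
        if idx >= len(SURVEY):
            return "Thank you, that was the last question!"
        sessions[user_id] = idx + 1
        question = SURVEY[idx]
        if question["type"] == "choice":
            return question["text"] + " (" + " / ".join(question["options"]) + ")"
        return question["text"]


    if __name__ == "__main__":
        print(handle_message("u1", "Hi"))             # asks the first question
        print(handle_message("u1", "About 2 hours"))  # records q1, asks q2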

Focusing on the frontend, we wondered how a chat interface impacts answer behavior. We were especially curious about the effects that engaging respondents in a conversational manner has on response rates, data quality and survey fatigue.

Methods & Data:

We developed a chatbot interface that delivers survey questions to the user. Our aim was to create an unobtrusive, responsive frontend that feels familiar to users of messenger services. A sample of 600 participants from a commercial online panel was randomly assigned to either a traditional online questionnaire or the chatbot interface. Both questionnaires included exactly the same questions, covering media consumption and mobility as well as the perception of the questionnaire itself. Different answer types were presented to respondents, such as open-ended questions, Likert scales and multiple-choice questions.

(The study was pre-registered at http://aspredicted.org/blind.php/?x=cb5cqk)
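A simple way to implement the random assignment described above (sketched here in Python with made-up panelist IDs; not the study's actual sampling code) is to shuffle the panel sample and alternate conditions:

    import random
    from collections import Counter

    # 600 hypothetical panelist IDs, randomly split between the two conditions
    panelist_ids = [f"panelist_{i:03d}" for i in range(600)]
    random.seed(42)
    random.shuffle(panelist_ids)

    assignment = {pid: ("chatbot" if i % 2 == 0 else "traditional")
                  for i, pid in enumerate(panelist_ids)}

    print(Counter(assignment.values()))  # 300 respondents per condition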

Results:

Results will be available in early January 2017.

Added Value:

Chatbots as survey frontends not only offer a new yet familiar interface for respondents. They also allow integration into different contexts via the developer APIs of Facebook Messenger or other messaging services, which makes it easy to recruit and survey participants without redirecting them to a separate questionnaire.

Thus, it is important to evaluate the benefits and potential pitfalls of using such frontends, especially as future applications of chatbots in online surveys may include AI capabilities to analyze responses and adapt questions in real time.


Harms-Conversational Survey Frontends-172.pdf

Willingness of online panelists to perform additional tasks

Melanie Revilla1, Mick Couper2

1RECSM-Universitat Pompeu Fabra, Spain; 2University of Michigan, USA

Relevance & Research Question: People’s willingness to share data with researchers is the fundamental raw material for a lot of research. So far, researchers have mainly asked respondents to share data in the form of answers to survey questions. However, there is a growing interest in using alternative sources of data. Some of these data can be used without further issues (e.g. publicly shared social media data); for others, people's willingness to share them is a requirement. Despite the growing interest in using and combining different data sources, little is known about people’s willingness to share these other kinds of data with researchers. In this study, we aim to: 1) provide information about people's willingness to share different types of data; 2) explore the reasons for their acceptance or refusal; and 3) determine which variables can affect the willingness to perform these additional tasks.

Methods & Data: In a survey implemented in 2016 in Spain, around 1,400 panelists of the Netquest online access panel were asked about their hypothetical willingness to share different types of data: allowing passive measurement on devices they already use; wearing special devices to passively monitor activity; being provided with measurement devices and then self-reporting the results; providing physical specimens or bodily fluids (e.g. saliva); and others. Open questions were used to follow up on the reasons for acceptance or refusal in the case of using a tracker.

Results: The results suggest that the acceptance level is quite low in general, but there are large differences across tasks and respondents. The main reasons given for both acceptance and refusal relate to privacy, security and trust. Further analyses exploring the differences in levels of willingness show that we are able to identify factors that predict such willingness (attitude toward sharing, perceived benefit of research, trust in anonymity, attitude toward surveys, etc.).

Added Value: This study provides new information about the willingness of online panelists to share data, extending prior research, which has largely focused on a single type of data and has not explored correlates of willingness.


Revilla-Willingness of online panelists to perform additional tasks-118.pdf

Automatic versus Manual Forwarding in Web Surveys

Arto Tapani Selkälä1, Mick P. Couper2

1University of Lapland, Finland; 2University of Michigan, United States

Keywords: auto forwarding, cognitive burden, information accessibility, response time, paradata

In this paper we extend previous work on automatic forwarding (AF) versus manual forwarding (MF) to examine the effect on the cognitive response process. We expect respondent cognitive burden to increase as a combined function of low information accessibility and auto forwarding. We experimentally tested manual versus auto forwarding for varying levels of information accessibility (low versus high) and need for consistency of responses (low versus high). We expect AF to perform better when information is readily accessible and the need to access previous questions (consistency) is low.

Methods & Data:

Undergraduate students at two universities in Finland were randomly assigned to six independent web survey conditions in two experiments (n=3028 and n=5004). The experimental design was an incomplete factorial with three independent variables: forwarding procedure, information accessibility and consistency requirement. Total response times and elapsed times on individual items, taken from paradata, were analyzed using linear regression models and multilevel models. Returns to and shifts between items, as well as straight-line responding and consistent responding, were also examined.
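To make the modeling step concrete, the sketch below fits a linear regression of total response time on the three experimental factors using statsmodels; the data are simulated purely for illustration (column names, sample size and effect sizes are assumptions, not the study's data).

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 400  # simulated respondents, for illustration only

    # Between-subjects factors coded 0/1 (hypothetical coding)
    df = pd.DataFrame({
        "manual_forwarding": rng.integers(0, 2, n),
        "low_accessibility": rng.integers(0, 2, n),
        "consistency_required": rng.integers(0, 2, n),
    })

    # Simulated total response times in seconds, loosely mirroring the reported main effects
    df["total_time"] = (300
                        + 27 * df["manual_forwarding"]
                        + 20 * df["low_accessibility"]
                        + 14 * df["consistency_required"]
                        + rng.normal(0, 30, n))

    # Linear regression of total response time on the three experimental factors;
    # item-level elapsed times nested within respondents could analogously be fit
    # with a multilevel model, e.g. smf.mixedlm(..., groups=...).
    fit = smf.ols("total_time ~ manual_forwarding + low_accessibility + consistency_required",
                  data=df).fit()
    print(fit.summary())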

Results:

Total response time was 27 seconds longer on average in the MF surveys. Less-accessible information and the consistency requirement both increased total response time (by 20 seconds and 14 seconds on average, respectively). All three main effects were statistically significant. Contrary to expectation, no significant interactions were found.

MF respondents were significantly more likely than AF respondents to change answers on experimentally manipulated items. For example, 13 percent of the manually forwarded group changed their responses on the question conveying less-accessible information, compared with 2-4 percent in the control groups. AF did not increase straight-line responding. However, across experiments 1 and 2 we found slight evidence that AF enhances consistent responding when the need to access previous questions is high. AF respondents in experiment 2 also self-reported that responding was easier than MF respondents did (a difference of 10%).

Added Value:

There are many proponents of auto forwarding in web surveys. This paper provides one of the few carefully designed and theoretically motivated studies to explore this important design choice under experimentally varied conditions. These results will help shape practice.


Selkälä-Automatic versus Manual Forwarding in Web Surveys-193.pdf

Learning from Mouse Movements: Improving Web Questionnaire and Respondents’ User Experience through Passive Data Collection

Florian Keusch1,2, Sarah Brockhaus1,3, Felix Henninger1, Rachel Horwitz4, Pascal Kieslich1, Frauke Kreuter1,2,5, Malte Schierholz1,5

1University of Mannheim, Germany; 2University of Maryland, USA; 3LMU Munich, Germany; 4U.S. Census Bureau, USA; 5Institute for Employment Research, Germany

Relevance & Research Question:

While tracking mouse movements is common in other areas of usability testing (e.g., web design, e-learning), applying mouse movement tracking as a tool for web questionnaire testing is relatively new and has so far been mostly limited to lab studies. In the current study, we operationalize the collection of specific mouse movements on a large scale outside the lab, and we experimentally vary the type of difficulty in survey questions to see whether different movements are associated with different cognitive processing.

Methods & Data:

The data for this study come from a web survey of 1,250 people who are employed, unemployed, job seekers, recipients of unemployment benefit II, or active labor market program participants. The study was conducted by the Institute for Employment Research in Nuremberg, Germany, in fall 2016. The questionnaire includes factual, opinion, and problem-solving questions with a variety of response formats, such as radio buttons and sliders. We experimentally vary the difficulty and complexity of items between respondents to show how complexity affects behavior, and we collect and log participants' mouse movements as they complete the online survey.

Results:

We find that unsorted response lists are associated with more mouse movements and more vertical regressions than sorted lists. We also find that the yes/no format results in more mouse movements and more horizontal flips than check-all-that-apply questions, and we observe specific patterns of mouse movements on sensitive questions and when difficult terms are used.
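To illustrate how such indicators can be derived from logged cursor trajectories, the following sketch (with made-up coordinate samples and simple metric definitions that are our assumptions, not the authors' processing pipeline) counts horizontal flips and vertical regressions in a sequence of (x, y) positions:

    # Derive simple movement indicators from a logged cursor trajectory.
    def horizontal_flips(trajectory):
        """Count changes of direction along the x axis (left <-> right)."""
        flips, last_dir = 0, 0
        for (x0, _), (x1, _) in zip(trajectory, trajectory[1:]):
            direction = (x1 > x0) - (x1 < x0)
            if direction and last_dir and direction != last_dir:
                flips += 1
            if direction:
                last_dir = direction
        return flips


    def vertical_regressions(trajectory):
        """Count upward movements, i.e. returns toward earlier (higher) page regions."""
        return sum(1 for (_, y0), (_, y1) in zip(trajectory, trajectory[1:]) if y1 < y0)


    if __name__ == "__main__":
        samples = [(100, 40), (140, 60), (120, 90), (160, 80), (150, 120)]  # made-up (x, y) samples
        print("horizontal flips:", horizontal_flips(samples))          # 3
        print("vertical regressions:", vertical_regressions(samples))  # 1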

Added Value:

By collecting and analyzing participants' mouse movement data during completion of the questionnaire, we show how complexity affects response behavior and the degree to which indicators of uncertainty are related to the veracity of answers. Our results constitute initial steps toward real-time analysis of the collected paradata and provide a building block for adaptive questionnaires that detect and resolve respondents' difficulties online, leading to more accurate survey data.


Keusch-Learning from Mouse Movements-174.pdf


 