Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
P 1.3: Poster III
Time:
Thursday, 09/Sept/2021:
12:50 - 13:50 CEST

sponsored by GIM

Presentations

Willingness to participate in in-the-moment surveys triggered by online behaviors

Carlos Ochoa, Melanie Revilla

Research and Expertise Centre for Survey Methodology, Universitat Pompeu Fabra

Relevance & Research Question:

Surveys are a fundamental tool of empirical research. However, they have limitations that may produce errors. One of the best-known limitations concerns memory recall: people can have difficulty recalling relevant details about events of interest to researchers. Passive data partially solve this problem. For instance, online behaviours are increasingly researched using tracking software (a “meter”) installed on the browsing devices of members of opt-in online panels, registering which URLs they visit. However, such a meter also suffers from new sources of error (e.g., the meter may temporarily fail to collect data). Moreover, part of the objective information cannot be collected passively, and subjective information is not directly observable. Therefore, some information gaps must be filled, and some information must be validated. Asking participants about such missing or dubious information through web surveys conducted at the precise moment an event of interest is detected has the potential to fill this gap. However, the extent to which people are willing to participate raises doubts about the applicability of this method. Using a conjoint experiment, this paper explores which parameters affect the willingness to participate in in-the-moment web surveys triggered by the online activity recorded by a meter that participants install on their devices.
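To make the triggering mechanism concrete, the following Python sketch shows how a meter might launch an in-the-moment survey invitation when a visited URL matches an event of interest. It is a minimal illustration under assumed names, not the actual meter software: TARGET_DOMAINS and send_survey_invitation() are hypothetical.

```python
# Minimal sketch of a meter-triggered in-the-moment survey invitation.
# Not the actual meter implementation: TARGET_DOMAINS and
# send_survey_invitation() are hypothetical illustrations.
from urllib.parse import urlparse

# Hypothetical domains that define an "event of interest"
TARGET_DOMAINS = {"example-shop.com", "example-news.com"}

def send_survey_invitation(url: str) -> None:
    # Placeholder: a real meter would push a notification or open a web survey
    print(f"In-the-moment survey triggered by visit to {url}")

def on_url_visited(url: str) -> None:
    """Called by the meter for every URL the participant visits."""
    domain = urlparse(url).netloc
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in TARGET_DOMAINS:
        send_survey_invitation(url)

on_url_visited("https://www.example-shop.com/checkout")  # triggers an invitation
```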

Methods & Data:

A cross-sectional study will be conducted to ask members of an opt-in panel (Netquest) in Spain about their willingness to participate in in-the-moment surveys. A choice-based conjoint analysis will be used to determine the influence of different parameters and of different participant characteristics.

Results:

This research is in progress; results are expected in July 2021. Three key parameters are expected to play a crucial role in the willingness to participate: the length of the interview, the maximum time allowed to participate, and the incentivization.
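As an illustration of the planned design, the following sketch generates choice-based conjoint tasks from the three parameters named above. The attribute levels are hypothetical placeholders, not the levels of the actual study.

```python
# Illustrative generation of choice-based conjoint tasks from the three
# parameters named above; the attribute levels are hypothetical.
import itertools
import random

ATTRIBUTES = {
    "interview_length": ["1 minute", "5 minutes", "10 minutes"],
    "max_time_to_participate": ["15 minutes", "1 hour", "24 hours"],
    "incentive": ["no extra points", "standard points", "double points"],
}

# Full factorial of candidate profiles (3 x 3 x 3 = 27)
profiles = [dict(zip(ATTRIBUTES, combo))
            for combo in itertools.product(*ATTRIBUTES.values())]

def choice_task(n_alternatives=2):
    """Draw one choice set of distinct profiles to show to a respondent."""
    return random.sample(profiles, n_alternatives)

random.seed(42)
print(choice_task())
```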

Added Value:

This research will make it possible to design effective in-the-moment data collection experiments and thereby demonstrate the actual value of this method. The use of a conjoint experiment is a new approach to exploring the willingness to participate in research activities and may lead to a better understanding of the factors that influence participation.



Memory Effects in Online Panel Surveys: Investigating Respondents’ Ability to Recall Responses from a Previous Panel Wave

Tobias Rettig¹, Bella Struminskaya², Annelies G. Blom¹

¹University of Mannheim, Germany; ²Utrecht University, the Netherlands

Relevance & Research Question:

Repeated measurements of the same questions from the same respondents have several applications in survey research, such as longitudinal studies, pretest-posttest experiments, and the evaluation of measurement quality. However, respondents’ memory of their previous responses can introduce measurement error into repeated questions. While this issue has recently received renewed interest from researchers, most studies have only investigated respondents’ ability to recall their responses within cross-sectional surveys. The present study aims to fill this gap by investigating how well respondents in a probability-based online panel can recall their responses after four months, i.e., in a longitudinal setting.

Methods & Data:

Respondents of the German Internet Panel (GIP) received two questions on environmental awareness at the beginning of the November 2018 wave. Four months later, respondents were asked (1) whether they could recall their responses to these questions, (2) to repeat their responses, and (3) how certain they were about their recalled answers. We compare the proportions of respondents who correctly repeated their previous response among those who stated that they could recall it and those who did not. We also investigate possible correlates of correctly recalling previous responses, including question type, socio-demographics, panel experience, and perceived response burden.
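The core comparison can be sketched as a simple contingency-table analysis. The counts below are made up for illustration and do not come from the GIP data; scipy is assumed to be available.

```python
# Sketch of the proportion comparison described above, using made-up counts
# (not GIP data). Rows: whether respondents said they could recall their
# response; columns: whether the repeated response actually matched.
from scipy.stats import chi2_contingency

#            correct  incorrect
table = [[120,  80],   # "I can recall my response"
         [ 90, 210]]   # "I cannot recall my response"

chi2, p, dof, expected = chi2_contingency(table)
print(f"correct | claimed recall:    {table[0][0] / sum(table[0]):.2f}")
print(f"correct | no claimed recall: {table[1][0] / sum(table[1]):.2f}")
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```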

Results:

Preliminary results indicate that respondents can correctly repeat their previous response in about 29% of all cases. Responses to attitude and behavior questions were more likely to be recalled than responses to belief questions, as were extreme responses. Age, gender, education, panel experience, perceived response burden, switching devices between waves, and participation in the panel wave between the initial questions and their repetitions had no significant effects on recall ability.
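One way the correlates above might be modeled is a logistic regression of recall correctness on respondent characteristics. The sketch below fits such a model on simulated data; the variable names are illustrative and do not follow the GIP codebook.

```python
# Sketch of modeling the correlates listed above via logistic regression,
# fit on simulated data; variable names are illustrative, not the GIP codebook.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "correct_recall": rng.integers(0, 2, n),  # 1 = previous response repeated correctly
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
    "panel_waves": rng.integers(1, 60, n),    # panel experience
    "burden": rng.integers(1, 6, n),          # perceived response burden (1-5)
})

model = smf.logit("correct_recall ~ age + female + panel_waves + burden",
                  data=df).fit(disp=False)
print(model.summary())
```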

Added Value:

The implications of respondents’ ability to recall their previous responses in longitudinal studies are nearly unexplored. This study is the first to examine respondents’ recall ability after an interval of four months, which is realistic for longitudinal settings, and is thus an important step in determining adequate time intervals between question repetitions for different types of questions.



Default layout settings of sliders and their problems

Florian Röser, Stefanie Winter, Sandra Billasch

University of Applied Sciences Darmstadt, Germany

Relevance & Research Question:

In online survey practice, sliders are increasingly used to answer questions or to measure attitudes and agreement. In the social sciences, however, the rating scale is still the most widely used scale type. The question arises as to whether the default layout settings of these two scale types in online survey systems affect respondents’ answers (in the first instance, independent of question content).

Methods & Data:

We used a 2 (rating scale vs. slider) x 2 (default vs. adjusted layout) factorial experimental design. Each subject answered two personality questionnaires taken from the ZIS database (an open-access repository for measurement instruments): a questionnaire with an agreement scale (Big Five Inventory-SOEP (BFI-S); Schupp & Gerlitz, 2014), originally with 7 response options, and a questionnaire with adjective pairs (Personality-Adjective Scales PASK5; Brandstätter, 2014), originally with 9 levels. In one setting, the default slider layout of the LimeSurvey survey tool was used. In the other setting, the slider layout was adjusted so that the endpoints of the slider aligned with the positions of the first and last response options on the rating scale.
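A 2 x 2 design like this one can be analyzed with a two-way ANOVA. The sketch below uses simulated scores, so only the model structure, not the data, mirrors the actual study.

```python
# Sketch of a 2 (scale type) x 2 (layout) analysis on simulated trait scores;
# the data are random, so only the model structure mirrors the actual study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 344  # sample size reported under Results
df = pd.DataFrame({
    "scale_type": rng.choice(["rating", "slider"], n),
    "layout": rng.choice(["default", "adjusted"], n),
    "score": rng.normal(4.0, 1.0, n),  # placeholder personality trait score
})

model = smf.ols("score ~ C(scale_type) * C(layout)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and interaction
```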

Results:

A total of 344 subjects participated in the study. For most personality traits, and regardless of the questionnaire, responses on the slider differed significantly between the default and the adjusted design. With the default slider design, responses shifted significantly toward the middle compared to the rating scale.

Added Value:

With this study we were able to show that using a slider with the default layout in online surveys can lead to different results than a classical rating scale, and that this effect can be prevented by adjusting the slider layout. This result should caution online researchers against simply switching answer types while relying on default layout settings, and it should stimulate further research into the exact causes and conditions of this effect.