Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
A 6.3: Attrition and Response
Time:
Friday, 11/Sep/2020:
3:30 - 4:30

Session Chair: Florian Keusch, University of Mannheim, Germany

Presentations

Personalizing Interventions with Machine Learning to Reduce Panel Attrition

Alexander Wenz1,2, Annelies G. Blom1, Ulrich Krieger1, Marina Fikel1

1University of Mannheim, Germany; 2University of Essex, United Kingdom

Relevance & Research Question: This study compares the effectiveness of individually targeted and standardized interventions in reducing panel attrition. We propose using machine learning to identify sample members at high risk of attrition and to target interventions at the individual level. Attrition is a major concern in longitudinal surveys because it can reduce the precision of survey estimates, introduce bias, and increase costs. Various efforts have been made to reduce attrition, such as using different contact protocols or incentives. Most often, these approaches have been standardized, treating all sample members in the same way. More recently, this standardization has been challenged in favor of survey designs in which features are targeted to different sample members. Our research question is: Can personalized interventions make survey operations more effective?
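For illustration only, a minimal sketch of the kind of attrition-propensity model such an approach implies, assuming a hypothetical data file and predictor set (not the authors' actual specification):

# Hypothetical sketch: attrition-propensity model. The file, feature names,
# and outcome column are illustrative assumptions, not the study's setup.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

panel = pd.read_csv("panel_history.csv")  # one row per sample member
features = ["waves_completed", "n_missed_waves",
            "days_since_last_response", "item_nonresponse_rate"]
X, y = panel[features], panel["attrited_next_wave"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Propensity scores flag members at high risk of attrition, who could then
# be prioritized for targeted interventions.
propensity = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, propensity))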

Methods & Data: We use data from the German Internet Panel, a probability-based online panel of the general population in Germany, which interviews respondents every two months. Respondents receive study invitations via email and a €4 incentive per completed survey. To evaluate the effectiveness of different interventions on attrition, we implemented an experiment in 2018 using a standardized procedure. N = 4,710 sample members were randomly allocated to one of three experimental groups and treated in the same way within each group: Group 1 received an additional €10 incentive, Group 2 received an additional postcard invitation, and Group 3 served as the control group.

Results: Preliminary results suggest that the standardized interventions were only effective for sample members interviewed for the first time (the postcard significantly reduced the attrition rate by 3 percentage points; the incentive had no effect), but not for those in subsequent waves. In a further analysis, we conduct a counterfactual simulation investigating the effect of these interventions if (1) only people with high attrition propensities were targeted, and (2) these people received the treatment predicted to be most effective for them.
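A hedged sketch of that counterfactual targeting step, assuming per-treatment outcome models fitted on the experimental arms; all names, columns, and the threshold are hypothetical:

# Illustrative counterfactual targeting: for each high-propensity member,
# pick the treatment whose fitted outcome model predicts the lowest
# attrition probability. Models, columns, and threshold are assumptions.
import pandas as pd

TREATMENTS = ["control", "incentive", "postcard"]

def best_treatment(member_features, arm_models):
    """Treatment with the lowest predicted attrition probability for one member."""
    preds = {t: arm_models[t].predict_proba(member_features)[0, 1]
             for t in TREATMENTS}
    return min(preds, key=preds.get)

def simulate_targeted_design(panel, propensity, arm_models, threshold=0.7):
    """Counterfactual design: treat only members above the propensity threshold,
    each with the treatment predicted to work best for them."""
    assigned = []
    for i, row in panel.iterrows():
        member = row.to_frame().T  # one-row DataFrame for predict_proba
        if propensity.loc[i] >= threshold:
            assigned.append(best_treatment(member, arm_models))
        else:
            assigned.append("control")
    return pd.Series(assigned, index=panel.index, name="assigned_treatment")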

Added Value: This study provides novel evidence on the effectiveness of personalized interventions in reducing attrition. In 2020, we will complement the predictive models with prescriptive models for targeting panel members during fieldwork within a cost-benefit framework.



Now, later, or never? Using response time patterns to predict panel attrition

Isabella Luise Minderop, Bernd Weiß

GESIS Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question:

Keeping respondents who have a high likelihood of attriting from a panel in the sample is a central task for (online) probability-based panel infrastructures. This is especially important when respondents at risk of dropping out differ notably from other respondents. Hence, it is key to identify those respondents and prevent them from dropping out. Previous research has shown that response behavior in previous waves, e.g., response or nonresponse, is a good predictor of the next wave's response. However, response behavior can be described in more detail, for example by taking paradata such as the time until survey return into account. Until now, time until survey return has mostly been studied in cross-sectional contexts, which offer no opportunity to examine panel attrition. In this study, we investigate whether (a) respondents who return their survey late more often than others and (b) respondents whose response behavior changes over time are more likely to attrite from a panel survey.

Methods & Data:

Our study relies on data from the GESIS Panel, a German bi-monthly probability-based mixed-mode panel (n = 5,000) that collects data in web and mail mode. We calculated the number of days respondents required to return the survey from online and postal time stamps. Based on this information, we distinguish early response, late response, and nonresponse. Further, we identify individual response patterns by combining this information across multiple waves, calculating the relative frequency of late responses and the number of changes in a respondent's response pattern.
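For illustration, a minimal sketch of how such response-timing features could be derived, assuming a hypothetical wide-format table of days-to-return per wave and an arbitrary late cut-off (both are assumptions, not the GESIS Panel's definitions):

# Hypothetical sketch: derive response-timing features per respondent.
# The input file, the 7-day late cut-off, and the column layout are assumptions.
import pandas as pd

def timing_category(days, late_cutoff=7):
    """Classify one wave outcome as 'early', 'late', or 'nonresponse'."""
    if pd.isna(days):
        return "nonresponse"
    return "late" if days > late_cutoff else "early"

# One row per respondent, one column per wave, values = days until return
# (NaN = nonresponse), computed from online and postal time stamps.
days_to_return = pd.read_csv("days_to_return.csv", index_col="respondent_id")
categories = days_to_return.applymap(timing_category)

# Relative frequency of late responses across all observed waves.
share_late = (categories == "late").mean(axis=1)

# Number of changes in the response pattern between consecutive waves
# (comparison with the shifted frame always flags the first wave, hence -1).
n_changes = (categories != categories.shift(axis=1)).sum(axis=1) - 1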

Results:

Preliminary results show that the likelihood of attrition increases by 0.16 percentage points for respondents who always return their survey late compared to those who always reply early. Further, respondents who change their response timing every wave are 0.43 percentage points more likely to attrite.

Added Value:

The time until survey return is an easily available form of paradata. We show that the frequency of late responses, as well as changes in response time patterns, predicts attrition just as well as previously used models that rely on survey evaluations or respondents' available time, information which might not always be available.



A unique panel for unique people. How gamification has helped us to make our online panel future-proof

Conny Ifill, Robin Setzer

Norstat Deutschland GmbH, Germany

Relevance & Research Question: For many years, online panels have been struggling with ever lower response rates and, on average, shorter membership durations. The responses to this threatening challenge are manifold. Simply put, panels either have to lower their quality standards to sustain a high recruitment volume, or they have to increase the loyalty and activity rate of their then more costly recruited members. We decided to invest in the longevity of our member base by relaunching our panel in 18 European countries and introducing game mechanics to our panelists.

Methods & Data: We have strictly followed a research-based process to identify the motivations and pain points of our panel members. With the help of focus groups and iterative user testing, we successively developed a panelist-centric platform with a new visual design, new functions for the user, and game mechanics to better engage with our members. An integral part of the whole project was (and still is) accompanying research. Among the KPIs we continuously monitor over time are panel composition (i.e., demographics), panel performance (e.g., churn rate, response rate), and panelist satisfaction.

Results: Our first results are very promising. We see that all target groups increased their activity and loyalty levels. To our satisfaction, especially hard-to-reach segments (e.g., young men) experienced a significant boost. As a result, our panel has become more balanced and better performing than before.

The evaluation of this transition is ongoing, especially as we are still introducing new features and making smaller adjustments to existing functions. We are planning to share the current status of this long-term project with the audience of the conference.

Added Value: While the comparability of data is highly valued in research, the dynamic nature of digitalization requires us to adapt our methods from time to time. Our case shows that research methodology can evolve without compromising its quality standards. We believe that this is partly because the whole process was based on and accompanied by research.


