Conference Agenda

General Online Research 2019

A10: Learning Effects, Recall, and Panel Conditioning
Friday, 08/Mar/2019:
2:15 - 3:15

Session Chair: Bella Struminskaya, Utrecht University & DGOF, The Netherlands
Location: Room Z28
TH Köln – University of Applied Sciences


Dynamics and moderators of panel conditioning effects. A meta-analysis.

Tanja Burgard1, Michael Bosnjak1, Nadine Kasten2

1ZPID - Leibniz Institute for Psychology Information, Germany; 2University of Trier, Germany

Relevance & Research Question:

Panel conditioning is a learning effect that can endanger the representativeness and validity of results from panel studies. It describes changes in attitudes or behaviors themselves, or in the way they are reported, caused by participation in previous survey waves. This meta-analysis examines which moderators affect the strength of panel conditioning. Moreover, the development of panel conditioning over time is investigated.

Methods & Data:

The literature search was conducted using the broad search interface CLICsearch. To be included, articles had to report randomized or quasi-experiments involving a control group of fresh respondents, or comparable information from a registry, and at least one group of conditioned respondents. Both groups had to be exposed to identical survey questions to enable between-group comparisons of quantitative survey outcomes. Twenty studies met these criteria.

Data were collected on four levels: first, general information on the report; second, information on the sample composition and the conduct of the study; third, information on the type of intervention, such as incentives or conditioning frequency; and finally, the outcome measures for the differences between the control group and the corresponding treatment group. The effect sizes used for the meta-analysis are standardized mean differences.
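In its common (Cohen's d) form, the standardized mean difference underlying these effect sizes is the difference between the group means divided by the pooled standard deviation. A minimal sketch of that computation, assuming raw group data (illustrative only; the meta-analysis itself was run with the metafor package in R):

```javascript
// Standardized mean difference (Cohen's d) between a conditioned
// (treatment) group and a fresh (control) group.
// Illustrative sketch, not the authors' actual analysis code.

function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function variance(xs) {
  // Sample variance (denominator n - 1).
  const m = mean(xs);
  return xs.reduce((a, b) => a + (b - m) ** 2, 0) / (xs.length - 1);
}

function smd(treatment, control) {
  const n1 = treatment.length;
  const n2 = control.length;
  // Pooled standard deviation of the two groups.
  const sdPooled = Math.sqrt(
    ((n1 - 1) * variance(treatment) + (n2 - 1) * variance(control)) /
      (n1 + n2 - 2)
  );
  return (mean(treatment) - mean(control)) / sdPooled;
}
```

A positive value then indicates that conditioned respondents score higher than fresh respondents on the outcome in question.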

Using the metafor package in R, four-level mixed-effects models will be estimated to account for the hierarchical data structure. To test for a time effect, the influence of the year of data collection on the strength of panel conditioning will be examined. Afterwards, further characteristics of the intervention will be tested as moderators.


Results:

The first calculations indicate that the type of question is the moderator with the greatest impact on the strength of panel conditioning. Knowledge questions suffer the most from panel conditioning, followed by attitudinal questions. A time effect of the year of data collection cannot be detected with the available data.

Added Value:

The meta-analysis will reveal which kinds of questions are particularly affected by panel conditioning. From the results, recommendations on the implementation of panel surveys, such as the optimal frequency of and time intervals between waves, will be derived.

Burgard-Dynamics and moderators of panel conditioning effects A meta-analysis-139.pdf

Recalling Survey Answers: A Comparison Across Question Types and Different Levels of Online Panel Experience

Tobias Rettig1, Jan Karem Höhne1,2, Annelies Blom1

1University of Mannheim; 2RECSM-Universitat Pompeu Fabra

Relevance & Research Question:

Measuring attitudes, behaviors, and beliefs over time is an important strategy for drawing conclusions about social developments. Longitudinal study designs are also important for evaluating the measurement quality (i.e., reliability and validity) of data collection methods. However, one serious concern associated with repeated survey measurements is that memory effects can affect the precision of parameter estimates. So far, there is only a small body of research on respondents’ ability to recall previous answers. In this study, we therefore investigate respondents’ ability to recall their answers to previous questions.

Methods & Data:

We conducted an online survey experiment defined by question type (i.e., attitude, behavior, and belief) in the November 2018 wave of the probability-based German Internet Panel. To evaluate respondents’ recall ability, we employed follow-up questions asking whether they recalled their answers, what their answers were, and how certain they were about recalling them.


Results:

The results indicate that respondents recall their answers, irrespective of the question type. Interestingly, respondents are more likely to recall answers to behavior questions than to attitude or belief questions. In addition, respondents who give extreme answers are much more likely to recall their answers.

Added Value:

Our empirical findings indicate that respondents have a high recall ability. Consequently, the precision of parameter estimates is a serious concern in studies with repeated survey measurements.

Looking up the right answer: Errors of optimization when answering political knowledge questions in web surveys

Jan Karem Höhne1,2, Carina Cornesse1, Stephan Schlosser3, Mick P. Couper4, Annelies Blom1

1University of Mannheim, Germany; 2RECSM-Universitat Pompeu Fabra, Spain; 3University of Göttingen, Germany; 4University of Michigan, USA

Relevance & Research Question: Political knowledge is an important determinant of outcomes in public opinion research and political science, and it can have a profound impact on governmental decision-making processes. However, some respondents look up the right answer (e.g., on the Internet), which inflates political knowledge scores and can be seen as a kind of “optimizing error” (Yan, 2006) committed by engaged respondents with good intentions. As indicated by previous research, this response behavior is detectable in web surveys using indirect methods. In this study, we investigate optimizing errors when answering political knowledge questions in web surveys by using paradata. More precisely, we use JavaScript “OnBlur” functions that enable us to detect whether respondents switch away from the web survey to search for the correct answer on the Internet using the same device.
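The client-side idea behind such OnBlur paradata can be sketched as follows: handlers on the browser's blur and focus events record each time the survey page loses and regains focus, and the resulting timestamps are submitted alongside the answers. This is an illustrative reconstruction under assumed names (createFocusTracker is hypothetical), not the authors' actual instrument code:

```javascript
// Illustrative sketch of OnBlur-based paradata collection:
// record when the survey page loses and regains focus, so that
// switching away (e.g., to look up an answer) leaves a trace.
// Hypothetical helper names; not the authors' instrument code.

function createFocusTracker(now = () => Date.now()) {
  const events = [];
  return {
    // Called when the survey page loses focus (window blur).
    onBlur() { events.push({ type: "blur", time: now() }); },
    // Called when the respondent returns (window focus).
    onFocus() { events.push({ type: "focus", time: now() }); },
    // Paradata to be submitted together with the answers.
    paradata() { return events.slice(); },
    // Did the respondent switch away at least once?
    switchedAway() { return events.some((e) => e.type === "blur"); },
  };
}

// In a browser, the tracker would be wired to the window events:
// const tracker = createFocusTracker();
// window.addEventListener("blur", () => tracker.onBlur());
// window.addEventListener("focus", () => tracker.onFocus());
```

Paired blur/focus timestamps also yield how long respondents were away, which can be combined with the response-time measurements described below.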

Methods & Data: We conducted a web survey experiment in a German non-probability access panel (N = 3,332), using a two-step split-ballot design with four groups defined by device type (i.e., PC and smartphone) and question difficulty (i.e., open and closed response format). Our expectation is that looking up answers is more likely on PCs and with open response formats. Additionally, we measured response times in milliseconds, employed self-report questions, and measured several respondent characteristics.

Results: The preliminary results indicate that respondents indeed switch away from the web survey page to search for the right answer on the Internet. This finding is supported by both the JavaScript “OnBlur” functions and respondents’ self-reports. In line with our expectations, this behavior is more common on PCs and with open response formats.

Added Value: The findings provide new insights into optimizing errors when answering knowledge questions. Furthermore, they reveal that paradata are a promising way to observe response behavior that may otherwise lead to incorrect inferences about respondents’ knowledge as measured in web surveys.