General Online Research 2019
A10: Learning Effects, Recall, and Panel Conditioning
Dynamics and moderators of panel conditioning effects. A meta-analysis.
1ZPID - Leibniz Institute for Psychology Information, Germany; 2University of Trier, Germany
Relevance & Research Question:
Panel conditioning is a learning effect that can endanger the representativeness and validity of results from panel studies. It describes changes in attitudes or behaviors themselves, or in the way they are reported, that result from participation in previous survey waves. This meta-analysis examines which moderators affect the strength of panel conditioning. Moreover, the development of panel conditioning over time is investigated.
Methods & Data:
The literature search was conducted using the broad search interface CLICsearch. To be included, articles had to report randomized or quasi-experiments involving a control group of fresh respondents, or validation information from a registry, and at least one group of conditioned respondents. Both groups had to be exposed to identical survey questions to enable between-group comparisons of quantitative survey outcomes. Twenty studies met these criteria.
Data were collected on four levels: first, general information on the report; second, information on the sample composition and how the study was conducted; third, information on the kind of intervention, such as incentives or conditioning frequency; and finally, outcome measures for the differences between the control group and a corresponding treatment group. The effect sizes used for the meta-analysis are standardized mean differences.
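As a minimal sketch of how such an effect size can be computed, the following assumes the standardized mean difference with Hedges' small-sample correction (Hedges' g); the abstract does not state which variant was used, and the function name and example values are hypothetical:

```python
import math

def standardized_mean_difference(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d with Hedges' approximate small-sample correction (Hedges' g)."""
    # Pooled standard deviation across treatment (conditioned) and control (fresh) groups
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    # Approximate small-sample correction factor J
    j = 1 - 3 / (4 * (n_t + n_c) - 9)
    return j * d

# Hypothetical example: conditioned vs. fresh respondents on a knowledge score
g = standardized_mean_difference(mean_t=3.4, mean_c=3.0,
                                 sd_t=1.2, sd_c=1.1, n_t=150, n_c=160)
```

A positive g here would indicate that conditioned respondents score higher than fresh respondents on the same item.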
Four-level mixed-effects models, fitted with the metafor package in R, are used to account for the hierarchical data structure. To test the time effect, the influence of the year of data collection on the strength of panel conditioning is examined. Afterwards, further characteristics of the intervention are tested as moderators.
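The actual analysis uses four-level models in metafor; as a simplified two-level illustration of the underlying idea of random-effects pooling, a DerSimonian-Laird estimator can be sketched as follows (function name and input values are hypothetical, not data from the meta-analysis):

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling: a two-level simplification
    of the four-level mixed-effects models described in the text."""
    k = len(effects)
    w = [1 / v for v in variances]
    # Fixed-effect (inverse-variance weighted) estimate
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    # Method-of-moments estimate of between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Re-weight with tau^2 added to each study's sampling variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = (1 / sum(w_star)) ** 0.5
    return pooled, tau2, se

# Hypothetical effect sizes (standardized mean differences) and sampling variances
pooled, tau2, se = random_effects_pool([0.35, 0.20, 0.50, 0.10],
                                       [0.02, 0.03, 0.025, 0.04])
```

The multilevel models in the actual analysis additionally partition heterogeneity across samples and outcomes nested within studies, which this two-level sketch does not attempt.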
Results: First calculations indicate that the type of question is the moderator with the greatest impact on the strength of panel conditioning. Knowledge questions suffer the most from panel conditioning, followed by attitudinal questions. A time effect of the year of data collection cannot be detected with the available data.
Added Value: The meta-analysis reveals which kinds of questions are particularly affected by panel conditioning. From this, recommendations on the implementation of panel surveys, such as the optimal frequency of and time intervals between waves, are derived.
Recalling Survey Answers: A Comparison Across Question Types and Different Levels of Online Panel Experience
1University of Mannheim; 2RECSM-Universitat Pompeu Fabra
Relevance & Research Question:
Measuring attitudes, behaviors, and beliefs over time is an important strategy for drawing conclusions about social developments. Longitudinal study designs are also important for evaluating the measurement quality (i.e., reliability and validity) of data collection methods. However, one serious concern with repeated survey measurements is that memory effects can affect the precision of parameter estimates. So far, only a small body of research deals with respondents’ ability to recall previous answers. In this study, we therefore investigate respondents’ ability to recall their answers to previous questions.
Methods & Data:
We conducted an online survey experiment defined by question type (i.e., attitude, behavior, and belief) in the November 2018 wave of the probability-based German Internet Panel. To evaluate respondents’ recall ability, we employed follow-up questions asking whether they recall their answers, what their answers were, and how certain they are about recalling their answers.
Results: The results indicate that respondents recall their answers, irrespective of the question type. Interestingly, respondents are more likely to recall answers to behavior questions than to attitude or belief questions. In addition, respondents who give extreme answers are much more likely to recall their answers.
Added Value: Our empirical findings indicate that respondents have a high recall ability. Consequently, memory effects are a serious concern for the precision of parameter estimates in studies with repeated survey measurements.
Looking up the right answer: Errors of optimization when answering political knowledge questions in web surveys
1University of Mannheim, Germany; 2RECSM-Universitat Pompeu Fabra, Spain; 3University of Göttingen, Germany; 4University of Michigan, USA
Methods & Data: We conducted a web survey experiment in a German non-probability access panel (N = 3,332), using a two-step split-ballot design with four groups defined by device type (i.e., PC and smartphone) and response format (i.e., open and closed). We expect that looking up the answer is more likely on PCs and with open response formats. Additionally, we measured response times in milliseconds, employed self-report questions, and measured several respondent characteristics.
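As a hedged sketch of how such response-time paradata might be screened for lookup behavior, the following flags correct answers given after unusually long response times; the function name, the 30-second cutoff, and the example values are illustrative assumptions, not the authors' method:

```python
def flag_possible_lookups(response_times_ms, correct, threshold_ms=30_000):
    """Flag respondents whose correct answers came after unusually long
    response times, a pattern consistent with looking the answer up.
    threshold_ms is an illustrative cutoff, not a validated value."""
    return [t > threshold_ms and ok
            for t, ok in zip(response_times_ms, correct)]

# Hypothetical paradata: response times (ms) and answer correctness
flags = flag_possible_lookups([8_000, 45_000, 12_000, 62_000],
                              [True, True, False, True])
# flags -> [False, True, False, True]
```

In practice a fixed cutoff is crude; combining response times with other paradata (e.g., leaving the survey window) would give a stronger indicator.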
Added Value: The findings provide new insights into optimizing errors when answering knowledge questions. Furthermore, they reveal that paradata are a promising way to observe response behavior that may otherwise lead to incorrect inferences about respondents’ knowledge measured in web surveys.