Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
A6.4: Representativity in Online Panels
Time:
Friday, 10/Sept/2021:
3:10 - 4:20 CEST

Session Chair: Ines Schaurer, City of Mannheim, Germany

Presentations

Investigating self-selection bias of online surveys on COVID-19 pandemic-related outcomes and health characteristics

Bernd Weiß

GESIS - Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question: The coronavirus SARS-CoV-2 outbreak has stimulated numerous online surveys that are mainly based on online convenience samples where participants select themselves. The results are, nevertheless, often generalized to the general population. Based upon a probability-based sample that includes online and mail-mode respondents, we will tackle the following research questions assuming that the sample of online respondents mimics respondents of an online convenience survey: (1) Do online (CAWI) respondents systematically differ from offline (PAPI) respondents with respect to COVID-19-related outcomes (e.g., pandemic-related attitudes or behavior) and health characteristics (e.g., preconditions, risk group)? (2) Do internet users (in the CAWI and the PAPI mode) systematically differ from non-internet users with respect to COVID-19-related outcomes and health characteristics?

Methods & Data: The analyses utilize data from the German GESIS Panel, a probability-based mixed-mode access panel comprising about 5,000 online and mail-mode respondents. Upon recruitment, respondents’ preferred mode, i.e., CAWI or PAPI, was determined via a sequential mixed-mode design. The GESIS Panel was among the first surveys in Germany to collect data on the coronavirus outbreak, starting in March 2020. Since then, five additional waves have been fielded, allowing cross-sectional and longitudinal comparisons between the two survey modes (CAWI vs. PAPI) and groups (internet vs. non-internet users), respectively. Statistical analyses address mode and group comparisons regarding COVID-19-related outcomes, such as pandemic-related attitudes or behavior, as well as health characteristics.
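A cross-sectional mode comparison of this kind can be sketched in a few lines. The item, response categories, and numbers below are illustrative assumptions, not GESIS Panel data:

```python
# Illustrative sketch: compare the response distribution of one
# pandemic-related item between CAWI and PAPI respondents, expressed as
# percentage-point differences per category. Toy data, not real survey data.
from collections import Counter

def mode_difference(cawi_responses, papi_responses):
    """Per-category difference in response shares (CAWI minus PAPI), in points."""
    categories = sorted(set(cawi_responses) | set(papi_responses))

    def shares(responses):
        counts = Counter(responses)
        n = len(responses)
        return {c: counts.get(c, 0) / n for c in categories}

    cawi, papi = shares(cawi_responses), shares(papi_responses)
    return {c: round(100 * (cawi[c] - papi[c]), 1) for c in categories}

# Hypothetical item "belong to a risk group" (yes/no) by survey mode
cawi = ["yes"] * 30 + ["no"] * 70
papi = ["yes"] * 45 + ["no"] * 55
print(mode_difference(cawi, papi))  # {'no': 15.0, 'yes': -15.0}
```

A real analysis would of course add significance tests and survey weights; the sketch only shows the shape of the comparison.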

Results: Preliminary analyses reveal only small differences between the two modes/groups with respect to some behavioral and attitudinal pandemic-related outcomes. However, larger systematic mode differences can be reported for health characteristics (e.g., “belong to a risk group”). Further analyses will focus on differences between internet and non-internet users.

Added Value: With a focus on the current COVID-19 pandemic, the results of this study add to the existing literature that cautions against the use of self-selected online surveys for population inference and policy measures.



Relationships between variables in probability-based and nonprobability online panels

Carina Cornesse, Tobias Rettig, Annelies G. Blom

University of Mannheim, Germany

Relevance & Research Question:

Commercial nonprobability online panels have grown in popularity in recent years due to their relatively low cost and easy availability. However, a number of studies have shown that probability-based surveys lead to more accurate univariate estimates than nonprobability surveys. Some researchers claim that while they do not produce accurate univariate estimates, nonprobability surveys are “fit for purpose” when conducting bivariate and multivariate analyses. Very little research to date has investigated these claims, which is an important gap we aim to fill with this study.

Methods & Data:

We investigate the accuracy of bivariate and multivariate estimates in probability-based and nonprobability online panels using data from a large-scale comparison study that included data collection in two academic probability-based online panels and eight commercial nonprobability online panels in Germany with identical questionnaires and field periods. For each of the online panels, we calculate bivariate associations as well as multivariate models and compare the results to the expected outcomes based on theory and gold-standard benchmarks, examining whether the direction and statistical significance of the coefficients accurately reflect the expected outcomes.
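The core of this comparison, checking whether an estimated association points in the theoretically expected direction, can be sketched as follows. The variables, data, and expected sign are illustrative assumptions, not the study's panels:

```python
# Hedged sketch of the sign check described above: estimate a bivariate
# association (here, a simple OLS slope of "votes conservative" on age) per
# panel and compare its direction with the theoretically expected one.
def slope(x, y):
    """OLS slope of y on x (bivariate association)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

def direction_matches(x, y, expected_sign):
    """True if the estimated slope has the theoretically expected sign."""
    return (slope(x, y) > 0) == (expected_sign > 0)

# Toy panel in which older respondents vote conservative more often
# (expected direction: positive, per the theory discussed in the abstract)
age = [25, 35, 45, 55, 65, 75]
conservative = [0, 0, 0, 1, 1, 1]
print(direction_matches(age, conservative, expected_sign=+1))  # True
```

The study additionally checks statistical significance and multivariate models; the sketch covers only the direction criterion.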

Results:

Preliminary results on key political variables (e.g., voter turnout) indicate a high variability in the findings gained from the different online panels. While the results from some panels are mostly in line with the expected results based on theory and findings from gold-standard survey benchmarks, others diverge substantially. For example, contrary to expectations, some panel results indicate that older people are less likely to vote conservative than younger people. Further analyses will extend these comparisons to health-related items (subjective health, BMI) and psychological indicators (Big Five, need for cognition).

Added Value:

Research on the accuracy of bivariate and multivariate estimates in probability-based and nonprobability online panels is so far very sparse. However, the growing popularity of online panels as a whole and nonprobability online access panels in particular warrant deeper investigation into the accuracy of the results obtained from these panels and into the question of whether nonprobability panels are indeed “fit for purpose” for such analyses.



Sampling in Online Surveys in Latin America: Assessing Matching vs. "Black Box" Approaches

Oscar Castorena1, Noam Lupu1, Maitagorri H Schade2, Elizabeth J Zechmeister1

1Vanderbilt University; 2Agora Verkehrswende

Relevance & Research Question: Online surveys, a comparatively low-cost and low-effort medium, have become increasingly common in international survey research projects as internet access continues to expand. At the same time, conventional probabilistic sample designs are often impossible when utilizing commercial online panels. Especially in regions with comparatively low internet penetration, this raises the question of how well nonprobabilistic approaches can approximate best-practice offline methodologies, and what a best practice for online sampling should look like when parts of the population are excluded by default from the sampling frame.

Methods & Data: For this study, we investigate one well-established approach to generating as-good-as-possible nonprobability samples from online panels, sample matching, in three Latin American countries. In each country, we collected two samples of at least 1,000 responses each: one through the standard commercial “black box” approach and one through an original sample-matching approach. This experimental design permits a comparison of matched samples to samples resulting from a panel provider's standard approach, as well as to census extracts and representative population surveys. To assess the quality of each sample, we compute mean absolute errors between sample and reference-population distributions for the categories of benchmark questions and of standard demographic indicators.
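The benchmark comparison described above reduces to a simple calculation: the mean absolute error between a sample's category shares and the reference population's. The categories and numbers below are illustrative, not the study's data:

```python
# Minimal sketch: mean absolute error (MAE) over categories between a
# sample's shares and a benchmark's (e.g., census) shares. Toy numbers only.
def mean_absolute_error(sample_shares, benchmark_shares):
    """MAE over categories; both arguments are {category: share} dicts."""
    cats = benchmark_shares.keys()
    return sum(abs(sample_shares[c] - benchmark_shares[c]) for c in cats) / len(cats)

# Hypothetical age distribution of a reference population and two samples
benchmark = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
black_box = {"18-34": 0.45, "35-54": 0.35, "55+": 0.20}  # standard panel sample
matched   = {"18-34": 0.33, "35-54": 0.34, "55+": 0.33}  # sample-matched

print(round(mean_absolute_error(black_box, benchmark), 3))  # 0.1
print(round(mean_absolute_error(matched, benchmark), 3))    # 0.02
```

In this toy example the matched sample tracks the benchmark more closely, which is the pattern the study tests across many benchmark and demographic items.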

Results: The results show that, compared to the standard approach, sample matching yields a better reproduction of benchmark questions not employed in the sample design. To conclude, the paper discusses the benefits and drawbacks of choosing a custom sampling approach over a standard one.

Added Value: We demonstrate that fully transparent and reproducible sampling approaches are possible, if not common, in nonprobabilistic commercial online surveys, and that they can measurably improve the quality of online samples. We also illuminate the possible practical drawbacks in deploying such a custom-made sampling method, adding a useful reference for those wishing to apply such an “outside the black box” approach to drawing samples from online panels provided to the survey research community by commercial firms.



Conference: GOR 21