Session Chair: Bella Struminskaya, Utrecht University, The Netherlands
Evaluating data quality in the UK probability-based online panel
Olga Maslovskaya1, Gabi Durrant1, Curtis Jessop2
1University of Southampton, United Kingdom; 2NatCen Social Research, United Kingdom
Relevance and Research Question: We live in a digital age characterised by high levels of technology use, and surveys have also started adopting new technologies for data collection. Across the world there is a move towards online data collection, driven by falling response rates and pressure to reduce survey costs. Evidence is needed to demonstrate that an online data collection strategy will work and produce reliable data that can be confidently used for policy decisions. No research has so far assessed data quality in the UK NatCen probability-based online panel; this paper is timely and fills that gap in knowledge. It aims to compare data quality in the NatCen probability-based online panel with that in non-probability-based panels (YouGov, Populus and Panelbase). It also compares the NatCen online panel with the British Social Attitudes (BSA) survey, the probability-based face-to-face survey from which the NatCen panel was recruited.
Methods and Data: The following surveys are used for the analysis: the NatCen online panel, BSA Wave 18 data, and data from the YouGov, Populus and Panelbase non-probability-based online panels.
Various absolute and relative measures of difference will be used for the analysis, such as the mean absolute difference and the Duncan Dissimilarity Index, among others. This analysis will help us investigate how sample quality might drive differences in point estimates between probability and non-probability samples.
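To make the two named measures concrete, a minimal sketch is given below. The formulas are standard (the Duncan Dissimilarity Index is half the sum of absolute differences between two proportion distributions), but the example category proportions are invented for illustration and are not results from the paper.

```python
def dissimilarity_index(p, q):
    """Duncan Dissimilarity Index: half the sum of absolute
    differences between two proportion distributions."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def mean_absolute_difference(p, q):
    """Mean absolute difference across the category estimates."""
    return sum(abs(a - b) for a, b in zip(p, q)) / len(p)

# Hypothetical proportions for one three-category attitude item,
# estimated from a probability-based and a non-probability sample:
prob_sample    = [0.30, 0.45, 0.25]
nonprob_sample = [0.25, 0.40, 0.35]

print(round(dissimilarity_index(prob_sample, nonprob_sample), 4))      # → 0.1
print(round(mean_absolute_difference(prob_sample, nonprob_sample), 4))
```

An index of 0.10 can be read as the share of one distribution that would have to shift categories to match the other, which makes it convenient for comparing panels across many items on a common scale.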
Results: The preliminary results suggest that there are differences in point estimates between probability- and non-probability-based samples.
Added value: This paper compares data quality across a “gold standard” probability-based survey that collects data through face-to-face interviewing, a probability-based online panel, and non-probability-based online panels. Recommendations will be provided for future waves of data collection and for new probability-based as well as non-probability-based online panels.
Building 'Public Voice', a new random sample panel in the UK
Kantar, United Kingdom
Relevance & Research Question:
The purpose of this paper is to describe the building of a new random sample mixed-mode panel in the UK ('Public Voice'), focusing on its various design features and how each component influenced the final composition of the panel.
Methods & Data:
The Public Voice panel has been built via a combination of two recruitment methods: (i) face-to-face interviewing, and (ii) web/paper surveying. As far as possible, measurement methods have been unified, including the use of a self-completion section within the face-to-face interview for collecting initial opinion and (potentially) sensitive data. The same address sampling frame was used for both methods. For this initial phase, the objective was to recruit c.2,400 individuals to the panel, split evenly by method.
Results:
The response rates to the two recruitment survey methods were aligned with expectations (c.40% for the interview survey, c.8% for the web/paper survey), as were the observable biases. Presenting the panel up front (an experimental manipulation) did not lower the web/paper recruitment survey response rate compared to introducing it at the end of the survey. Respondent agreement to join the panel was much higher than expected in the web/paper survey (>90%). Contact details were of generally high quality in the face-to-face and web modes but less so in the paper mode. [More results to come]
Added Value:
This paper adds to the evidence base for what works when building survey panels with a probabilistic sample base. In particular, the use of a dual-design recruitment method is novel.
Predictors of Mode Choice in a Probability-based Mixed-Mode Panel
David Bretschi, Bernd Weiß
GESIS Leibniz Institute for the Social Sciences, Germany
Relevance & Research Question: Even with a growing number of Internet users in Germany, a substantial proportion of respondents with Internet access still choose to participate in the mail mode when given a choice. We know little about the characteristics of these reluctant respondents, as most survey designs do not make it possible to measure potential predictors of mode choice before individuals make a decision. This study aims to fill this gap by investigating which personal characteristics of respondents in a mixed-mode panel are related to their willingness to respond via the web mode.
Methods & Data: We use data from multiple waves of the GESIS Panel, a probability-based mixed-mode panel in Germany (N=5,700). In October/November 2018, a web-push intervention motivated around 20 percent of 1,896 panelists previously using the mail mode to complete the survey via the web mode. We measured potential predictors of mode choice in waves before the intervention. These predictors include indicators of web-skills, web usage, attitudes to the Internet, and privacy concerns. Our study design allows us to investigate how those predictors are associated with mode choice of panelists who switched to the web and those who refused to do so.
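The abstract does not name the statistical model, but an analysis of this kind is often run as a logistic regression of the switch decision on the earlier-wave predictors. The sketch below is a hypothetical illustration of that setup: the data are simulated, the predictor names, effect sizes, and the hand-rolled gradient-descent fit are all assumptions for demonstration, not the GESIS Panel analysis itself.

```python
import math
import random

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic regression by plain gradient descent.
    Returns coefficients with the intercept first."""
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                      # gradient of the log-loss
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Simulated panelists: predictors measured in earlier waves
# (web-skills score, daily web use 0/1, privacy-concern score);
# outcome = 1 if the panelist switched to the web mode.
random.seed(1)
X, y = [], []
for _ in range(500):
    skills, usage, privacy = random.random(), random.randint(0, 1), random.random()
    # Assumed mechanism: skills and usage raise the switch
    # probability; privacy concerns (here) play no role.
    p = 1.0 / (1.0 + math.exp(-(-2.0 + 2.5 * skills + 1.0 * usage)))
    X.append([skills, usage, privacy])
    y.append(1 if random.random() < p else 0)

w = fit_logistic(X, y)
print([round(c, 2) for c in w])  # intercept, skills, usage, privacy coefficients
```

With data generated this way, the fitted skills and usage coefficients come out positive while the privacy coefficient stays near zero, mirroring the pattern the predictors-of-mode-choice design is meant to detect.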
Results: Preliminary results suggest that web-skills and web usage are important predictors of mode choice. In contrast, general privacy concerns do not seem to affect the decision to respond via the web mode, but attitudes towards the Internet do.
Added Value: This study will provide new insights into how the characteristics of respondents predict their decision to participate in web surveys. Learning more about the mode choice process and response propensities of web surveys is important to develop effective web-push methods for cross-sectional and longitudinal studies.