Conference Agenda

Session Overview
Session
A6.2: Web Probing and Survey Design
Time:
Friday, 10/Sept/2021:
3:10 - 4:20 CEST

Session Chair: Florian Keusch, University of Mannheim, Germany

Presentations

What is the optimal design of multiple probes implemented in web surveys?

Cornelia Neuert, Timo Lenzner

GESIS, Germany

The method of web probing integrates open-ended questions (probes) into online surveys to evaluate survey questions. When multiple probes are asked, they can be presented either on a single subsequent survey page (scrolling design) or on separate subsequent pages (paging design). The former requires respondents to scroll down the page to see and answer all probes, but the probes are presented together and independently of the survey question. The latter presents each probe separately, and respondents only discover how many and what kinds of probes they will receive by navigating successive survey pages. A third alternative is to implement the probes on the same page as the question being tested (embedded design). This may have the advantage that the probes are directly linked to the survey question and that the response process is still fresh in respondents’ memory. On the negative side, it makes the response task more complex and might affect how respondents answer the survey question presented on the same page.

In this paper, we examine whether multiple probes should be presented on the same page as the question being tested, on a subsequent page that requires respondents to scroll down, or on separate, consecutive questionnaire pages.

Based on a sample of 2,200 German panelists from an online access panel, we conducted a web experiment in which we varied both presentation format and probe order to investigate which format produced the highest data quality and the lowest drop-out rate. Respondents were randomly assigned to one of three conditions: an embedded design, a paging design, and a scrolling design. The study was fielded in November 2020.

We expect the embedded design and the scrolling design to make the response task more complex, resulting in lower data quality compared to the paging design.

We will use the following data-quality indicators: amount of probe nonresponse, number of uninterpretable answers, number of dropouts, number of words per probe, and survey satisfaction. However, the research is still in progress, and results are therefore not yet available.
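Purely as an illustrative sketch of how such indicators might be computed from raw probe responses (the column names, the three-word cut-off for flagging very short answers, and the pandas-based workflow are assumptions for the example, not the authors' actual coding scheme):

```python
# Illustrative sketch only: column names and the three-word cut-off are
# assumptions for the example, not the authors' actual coding scheme.
import pandas as pd


def probe_quality_indicators(df: pd.DataFrame, probe_cols: list) -> pd.DataFrame:
    """Compute simple per-probe data-quality indicators."""
    rows = []
    for col in probe_cols:
        answers = df[col].fillna("").str.strip()
        words = answers.str.split().str.len()             # words per probe answer
        rows.append({
            "probe": col,
            "nonresponse_rate": (answers == "").mean(),    # share of empty answers
            "mean_words": words[words > 0].mean(),         # length of substantive answers
            "share_under_3_words": ((words > 0) & (words < 3)).mean(),  # rough proxy for uninterpretable answers
            "n": len(answers),
        })
    return pd.DataFrame(rows)


# Toy example with invented answers
toy = pd.DataFrame({
    "probe_1": ["I thought of my monthly net income", None, "dk"],
    "probe_2": ["All people living in my household", "Only myself", ""],
})
print(probe_quality_indicators(toy, ["probe_1", "probe_2"]))
```

In practice, uninterpretable answers would be identified by human coders; the word-count flag above is only a rough automated proxy.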

The results will provide information on how (multiple) open-ended questions should be implemented to achieve the best possible response quality.



Analysis of Open-text Time Reference Web Probes on a COVID-19 Survey

Kristen L. Cibelli Hibben, Valerie Ryan, Travis Hoppe, Paul Scanlon, Kristen Miller

National Center for Health Statistics, United States

Relevance & Research Question: There is debate about using “since the Coronavirus pandemic began” as a time reference for survey questions. We present an analysis of three open-ended web probes to examine the timeframe respondents had in mind when presented with this phrase, as well as “when the Coronavirus pandemic first began to affect” their lives and why. The following research questions are addressed: How consistently do people understand when “the Coronavirus pandemic began”? To what extent does this align with when the pandemic began affecting their lives? Methodologically, what is the quality of responses to the open-ended probes and how might this differ by key socio-demographics?

Methods & Data: Data are from Round 1 of the Research and Development Survey (RANDS) during COVID-19, developed by researchers at the United States’ National Center for Health Statistics (NCHS). The National Opinion Research Center (NORC) at the University of Chicago collected the data on behalf of NCHS from June 9, 2020 to July 6, 2020 using their AmeriSpeak® Panel. AmeriSpeak® is a probability-based panel representative of the US adult, English-speaking, non-institutionalized household population. The data for all three probes are open text. A rules-based machine learning approach was developed to automate the data cleaning for the two probes about timeframes. In combination with hand review, topic modeling and other computer-assisted approaches were used to examine the content and quality of responses to the third probe.
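The abstract does not specify the topic-modeling setup. Purely as a hedged illustration of the general approach (not the actual NCHS/RANDS pipeline), a minimal sketch using scikit-learn’s latent Dirichlet allocation on invented open-text responses might look like this:

```python
# Minimal sketch of topic modeling on open-text probe responses.
# The responses and the number of topics are invented for illustration;
# this is not the actual NCHS/RANDS analysis pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

responses = [
    "when schools closed and we started working from home",
    "my job furloughed me in late march",
    "when the lockdown began and the stores shut down",
    "once my kids' school went remote and work moved home",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(responses)          # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Top words per topic, for manual review alongside hand coding
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```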

Results: Results show that respondents do not have a uniform understanding of when the pandemic began, and there is little alignment between when people think the pandemic began and when it began affecting their lives. Preliminary data-quality findings indicate that most respondents gave valid answers to the two date probes, but a wider range in response quality and greater variation among key population subgroups are observed for the third probe.

Added Value: This analysis sheds light on the use of the phrase “since the Coronavirus pandemic began” as a time reference and helps us understand when and how the pandemic began affecting people’s lives. Methodologically, we implemented new and innovative data science approaches for the analysis of open-ended web probes.



Reducing Respondent Burden with Efficient Survey Invitation Design

Hafsteinn Einarsson, Alexandru Cernat, Natalie Shlomo

University of Manchester, United Kingdom

Relevance & Research Questions:

Increasing costs of data collection and the issue of non-response in social surveys have led to a proliferation of mixed-mode and self-administered web surveys. In this context, understanding how the design and content of survey invitations influence propensities to participate could prove beneficial to survey organisations. Reducing respondent burden with efficient invitation design may increase the number of early responders, increase the number of overall responses, and reduce non-response bias.

Methods & Data:

This study implemented a randomised experiment in which two design features thought to be associated with respondent burden were randomly manipulated: the length of the text and the location of the survey invitation link. The experiment was carried out in a sequential mixed-mode survey among young adults (aged 18-35) in Iceland.

Results:

Results show that participants were more likely to participate in the initial web survey when they received shorter survey invitation letters and when the survey link was placed in the middle of the letter, although further contacts by other modes mitigated these differences in the full survey results. Additionally, short letters with links in the middle performed well compared to other letter types in terms of non-response bias and mean squared error for those characteristics available in the National Register.
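As a hedged illustration of how nonresponse bias and mean squared error against register benchmarks can be quantified (the estimate, sample size, and benchmark values below are invented, not taken from the Icelandic National Register or the study), one conventional decomposition treats MSE as squared bias plus sampling variance:

```python
# Illustrative sketch: the estimate, sample size, and register benchmark
# below are invented, not taken from the Icelandic National Register.
import math


def bias_and_mse(estimate: float, variance: float, benchmark: float):
    """Nonresponse bias of a respondent-based estimate against a register
    benchmark, with MSE decomposed as bias^2 + sampling variance."""
    bias = estimate - benchmark
    mse = bias ** 2 + variance
    return bias, mse


# Hypothetical example: share of women among respondents in one
# invitation-letter arm vs. the register value for the target population.
p_hat, n, p_register = 0.53, 400, 0.50
var_hat = p_hat * (1 - p_hat) / n          # binomial variance of the estimate
bias, mse = bias_and_mse(p_hat, var_hat, p_register)
print(f"bias = {bias:+.3f}, MSE = {mse:.5f}, RMSE = {math.sqrt(mse):.3f}")
```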

Added Value:

These findings indicate that the concept of respondent burden can be extended to mailed invitations to web surveys. Design choices for survey invitations, such as their length and the placement of participation instructions, can affect propensities to respond to the web survey, resulting in cost savings for survey organisations.



Recruitment to a probability-based panel: question positioning, staggering information, and allowing people to say they’re ‘not sure’

Curtis Jessop, Marta Mezzanzanica

NatCen, United Kingdom

Key words: Surveys, Online panels, Recruitment

Relevance & Research Question:

The recruitment stage is a key step in the set-up of a probability-based panel study: a lower recruitment rate risks introducing bias and limits what subsequent interventions to minimise non-response can achieve. This paper looks at how the positioning of the recruitment question relative to the offer of an incentive for participating in the recruitment survey, and how staggering information about joining the Panel while allowing participants to say they are ‘not sure’, affect recruitment and participation rates.

Methods & Data:

A split-sample experiment was implemented in the 2020 British Social Attitudes survey, a probability-based push-to-web survey in which participants were invited to join the NatCen Panel. Of the 3,964 participants, a random half were asked whether they would like to join the Panel immediately before being asked what type of incentive they would like, and the other half were asked immediately after.

In addition, a random half were presented with all information about joining the panel up front, while the other half were presented with basic information but given the option to ask for more. This group was then provided with more information and asked again, but was allowed to say they were still unsure.

Results:

There was no significant difference in the proportion of people agreeing to join the panel, or taking part in the first panel survey, by the positioning of the recruitment question. In contrast, participants who were allowed to say they were ‘not sure’ were more likely to agree to join the panel, although this difference was no longer significant when looking at the proportion that took part in the first survey wave.

Added Value:

Findings from this study will inform the future design of recruitment questions for panel studies. More generally, they provide evidence on the use of an ‘unsure’ option in consent questions, and on how moving away from a binary, ‘in the moment’ approach might affect data collection.



 