Conference Agenda

Session Overview
Session A11: Methods to Improve Questionnaires
Time: Friday, 08/Mar/2019, 3:30 - 4:30

Session Chair: Stephanie Gaaw, Technische Universität Dresden, Germany
Location: Room Z28, TH Köln – University of Applied Sciences

Presentations

Context Effects in Online Probing of Sensitive Topics – Explorations Using Survey Data and Paradata

Patricia Hadler

GESIS - Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question

Surveys on socially undesirable behavior are likely to include questions on both the behavior itself and related attitudes. Context effects on sensitive topics have frequently been analyzed in the survey setting; however, little is known about their impact on cognitive pretesting. Online probing offers an anonymous, self-administered setting in which to study these effects. Using questions about delinquency, this study examines the impact of question order on probe responses to sensitive behavioral and attitudinal questions. The analyses combine survey and probe answers with client-side paradata.

Methods & Data

A behavioral and an attitudinal question were presented in randomized order, each followed by an open probe. A client-side paradata script collected item-level response times, answer changes, page revisits, time spent answering the probe, and corrections to the probe response. Over 320 respondents in Germany and the US participated in the survey via an online access panel.
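A minimal sketch of how such client-side paradata collection might look in the browser is given below; the element ids, event choices, and the /paradata endpoint are illustrative assumptions, not the instrument actually used in the study.

// Hypothetical client-side paradata sketch (TypeScript); ids and the
// submission endpoint are assumptions made for illustration only.
const pageLoadedAt = performance.now();
const log: { event: string; item: string; ms: number }[] = [];

function record(event: string, item: string): void {
  log.push({ event, item, ms: Math.round(performance.now() - pageLoadedAt) });
}

// Item-level response times and answer changes on the closed survey question
document.querySelectorAll<HTMLInputElement>('input[name="q_behavior"]').forEach((radio) => {
  radio.addEventListener('change', () => record('answer_change', 'q_behavior'));
});

// Time spent on the open probe and corrections (deletions) to the probe text
const probe = document.querySelector<HTMLTextAreaElement>('#probe_text');
probe?.addEventListener('focus', () => record('probe_focus', 'probe_text'));
probe?.addEventListener('keydown', (e) => {
  if (e.key === 'Backspace' || e.key === 'Delete') record('probe_correction', 'probe_text');
});

// Page revisits via back/forward navigation
window.addEventListener('pageshow', (e) => {
  if (e.persisted) record('page_revisit', 'page');
});

// Send the collected paradata together with the answers when the page is submitted
document.querySelector('#next_button')?.addEventListener('click', () => {
  navigator.sendBeacon('/paradata', JSON.stringify(log));
});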

Results

Probes following the behavioral question show a higher level of nonresponse. These answers do not differ strongly with question order; however, respondents report a more lenient attitude in the probe following the attitudinal question when the preceding question asked about their behavior. Moreover, probes following the behavioral question are likely to include both behavioral and attitudinal content, whereas probes following the attitudinal question generally do not reference past behavior.

The length of the probe response varies with question order only for the probe following the behavioral question. Probe responses with attitudinal content are associated with more text corrections in both probes. Response times for the probes mainly depend on the answer given to the survey question and on the content of the probe; they correlate strongly with the response time of the survey question for respondents giving non-substantive answers.

Added Value

The study analyses context effects in online pretesting of sensitive topics by combining survey and probe responses with paradata from both. The results strongly support testing question order during pretesting, as question order affects the results of both online probing and the survey responses themselves.


Taking Respondents Seriously: Feedback in Mixed-Device Studies

Katharina Meitinger1, Henning Silber2, Jessica Daikeler2, Christoph Beuthner2

1Utrecht University; 2GESIS Leibniz Institute for the Social Sciences

Relevance & Research Question: Feedback questions evaluating respondents’ satisfaction with a survey are an important source of information for survey designers to improve surveys and predict future participation behavior. When asked as a closed-ended question, they provide quick-to-analyze, standardized insights into respondents’ satisfaction with relevant aspects of a survey. When asked as an open-ended question, the analysis is more time-consuming, but the qualitative data can disclose valuable in-depth information, e.g., whether respondents faced previously undetected technical or formatting problems, or which items were affected by social desirability. Additionally, feedback questions are particularly important in mixed-device studies, since mobile respondents might encounter particular challenges (e.g., suboptimal visual design) when responding to a web survey.

However, there are several research gaps regarding the optimal design of feedback questions in mixed-device studies: 1) It is unclear whether mobile respondents provide response quality comparable to that of PC respondents. 2) It is unclear whether a question order effect occurs when a combination of closed- and open-ended feedback questions is used.

Methods & Data: An experiment was implemented in a mixed-device survey with 3,374 German respondents from a non-probability web panel in 2018. The survey addressed a variety of topics, including consent to data linkage. Respondents were randomly assigned to either the PC or mobile group. In this experiment, we manipulated the question order and screen presentation of closed and open-ended feedback questions. The analysis focuses on a comparison between mobile and PC respondents. The indicators of response quality used in the analysis are item-nonresponse, response time, number of themes mentioned, and content (issues mentioned).

Results: We found question order effects and notable response quality differences between PC and mobile users. Whereas the closed feedback format clearly reduced nonresponse and increased the number of mentioned topics, the open-ended format provided unique insights into respondents’ opinions toward consent requests for data linkage that were not detected by the closed format.

Added Value: This presentation provides valuable insights into the optimal implementation of feedback questions in mixed-device studies.


List-style open-ended questions in Web surveys: A comparison of three visual layouts

Tanja Kunz1, Katharina Meitinger2

1GESIS Leibniz Institute for the Social Sciences, Germany; 2Utrecht University, the Netherlands

Relevance & Research Question: Previous studies on the visual layout of open-ended questions in web surveys have shown that respondents are responsive to verbal and visual design variations of the answer boxes. Findings consistently showed that several list-style answer boxes, as compared to one large answer box, elicit more elaborate answers. By contrast, a dynamically growing number of list-style answer boxes, where respondents initially see only one answer box and further boxes are displayed only after they have clicked into the previous one, seems to be less effective. The use of follow-up probes, in turn, where respondents are asked to provide further information on the following screen, increases the number of themes. Despite the higher response elaboration achieved with all these methods, item nonresponse remains an important issue. This paper compares the three methods in order to identify the design that best elicits elaborated answers to list-style open-ended questions without increasing item nonresponse.

Methods & Data: In two experiments embedded in a web survey on “Politics and Voting Behavior”, respondents from a nonprobability online panel (N = 4,371) in Germany were randomly assigned to one of three experimental conditions: (a) a static design with six fixed list-style answer boxes, (b) a dynamic design with up to six list-style answer boxes displayed one after the other depending on whether respondents clicked into the previous one, and (c) a follow-up probe design providing three fixed list-style answer boxes on the initial screen and three more on the next screen. The open-ended questions were on “satisfaction with democracy” and “current problems in Germany”, respectively.
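As an illustration of the dynamic layout in condition (b), a minimal browser-side sketch is given below; the element ids and the assumption of six pre-rendered text inputs are purely illustrative, not the actual implementation used in the experiments.

// Hypothetical sketch of the dynamic list-style layout (condition b, TypeScript);
// assumes six pre-rendered text inputs with ids box1..box6 (illustrative only).
const MAX_BOXES = 6;

// Initially show only the first answer box
for (let i = 2; i <= MAX_BOXES; i++) {
  const box = document.getElementById(`box${i}`);
  if (box) box.style.display = 'none';
}

// Reveal the next answer box once the respondent clicks into the previous one
for (let i = 1; i < MAX_BOXES; i++) {
  document.getElementById(`box${i}`)?.addEventListener('focus', () => {
    const next = document.getElementById(`box${i + 1}`);
    if (next) next.style.display = 'block';
  });
}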

Results: Findings showed that especially the follow-up probe design yielded more elaborated answers in terms of the number of characters and the number of themes mentioned, whereas a dynamic design was least effective. Overall, no differences in item nonresponse were found between the three visual layouts.

Added Value: The study contributes to the systematic assessment of visual design variations in list-style open-ended questions, helping to identify the layout that yields the best data quality when such questions are implemented in web surveys.

