Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
A6: Satisficing in Web Surveys
Thursday, 16/Mar/2017:
17:00 - 18:20

Session Chair: Edith Desiree de Leeuw, Utrecht University, The Netherlands
Location: A 208


The good, the bad and the ugly data: using indicators to get high quality survey respondents from online access panels

Daniel Althaus

Splendid Research, Germany

Relevance & Research Question: One of the main issues with using online access panel members for surveys is avoiding invalid responses. To establish quality and increase trust, indicators of the answering behavior known as satisficing have been suggested. These include completing surveys in very short amounts of time, answering in patterns and providing inconsistent answers. Much research so far has focused on assessing the validity of these indicators. But how do the indicators covary? And how can surveys be engineered to minimize invalid responses?

Methods & Data: Post-hoc statistical analysis was performed on 20,000 online interviews generated by online surveys on celebrities for the Human Brand Index project, all conducted in the proprietary MOBROG Online Access Panel. All surveys had the same basic structure, containing five grid questions of different lengths and a switch in scale direction in the fifth grid. They also gave respondents the possibility to enter invalid data on their age and household size. Overall interview time was measured.

Results: About 50% of respondents do not score on any indicator of bad data quality. Another 25% exhibit only one sign of invalid answers, often due to inconsistencies caused by the switch of scale direction in the fifth grid. There is a subpopulation of about 10% of online access panel members who score simultaneously on almost all indicators; 80% of them manage to adapt successfully to the switch in scale direction while speeding and straightlining. Available time is a factor: answer quality is highest among high school and college students, pensioners and unemployed people, and lowest among 30- to 40-year-olds in full-time employment. Involvement also helps: die-hard fans and haters of celebrities have higher percentages of high-quality data.

Added Value: Using several indicators to screen out respondents whose answers are invalid is a more precise and fair way to ensure data quality than using just one. Loaded questions are avoided by most professional survey takers and confuse respondents whose answers otherwise exhibit no signs of satisficing.
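The multi-indicator screening described above can be sketched in code. This is an illustrative example, not the study's actual implementation: the field names, thresholds, and the plausibility rule are all assumptions made for the sketch.

```python
# Hypothetical sketch: flag a respondent on several satisficing indicators
# and count how many indicators fire, so screening can require more than one.

def straightlined(grid_answers):
    """True if the same response was chosen for every item in a grid."""
    return len(set(grid_answers)) == 1

def flag_respondent(resp, min_seconds=120):
    # min_seconds is an assumed speeding threshold, not the study's value.
    flags = {
        "speeding": resp["duration_s"] < min_seconds,
        "straightlining": any(straightlined(g) for g in resp["grids"]),
        # Assumed plausibility rule for age and household size.
        "implausible": not (14 <= resp["age"] <= 99) or resp["household"] > 10,
    }
    flags["n_flags"] = sum(flags.values())  # booleans sum as 0/1
    return flags

resp = {"duration_s": 95, "grids": [[3, 3, 3, 3, 3], [1, 4, 2, 5, 3]],
        "age": 34, "household": 2}
print(flag_respondent(resp))
```

Requiring, say, two or more flags before exclusion implements the "several indicators are fairer than one" point: a respondent who merely stumbles over the scale-direction switch is not screened out.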

Althaus-The good, the bad and the ugly data-221.pptx

Is Clean Data Good Data? Data Cleaning and Bias Reduction

Randall K. Thomas, France M. Barlas, Nicole R. Buttermore

GfK Custom Research, United States of America

Relevance and Research Question:

Many researchers have argued that, in order to improve accuracy, we should clean our data by excluding participants who exhibit sub-optimal behaviors, such as speeding or non-differentiation. Some researchers have gone so far as to incorporate ‘trap’ questions in their surveys to catch such participants. Increasingly, researchers are suggesting more extensive cleaning criteria to identify larger portions of respondents for removal and replacement. This not only raises questions about the validity of the survey results, but also has cost implications, as replacement sample is often required. Our research question focused on the effects of the extent of data cleaning on data quality.

Methods and Data:

We used data from three surveys that contained items which allowed us to estimate bias, including items for which external benchmarks existed from reputable sample surveys along with actual election outcomes. Survey 1 had 1,847 participants from GfK’s U.S. probability-based KnowledgePanel® and 3,342 participants from non-probability online samples (NPS) in a study of the 2016 Florida presidential primary. Survey 2 had 1,671 participants from KnowledgePanel and 3,311 from non-probability online samples, fielded for the 2014 general elections in Georgia and Illinois. Survey 3 was a 2016 national election study with 2,367 respondents from KnowledgePanel. Each study had questions that paralleled benchmarks established with high-quality federal data.


We examined how varying the proportion of respondents removed based on increasingly aggressive data cleaning criteria (e.g., speeding) affected bias and external validity of survey estimates. We compared using all cases versus cleaning out from 2.5% up to 50% of the sample cases based on speed of completion.
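The cleaning procedure described above — dropping the fastest completers and checking the surviving estimate against an external benchmark — can be sketched as follows. The data, variable names, and benchmark value are invented for illustration; this is not the authors' code.

```python
# Hypothetical sketch: remove the fastest drop_fraction of respondents by
# completion time, then measure bias of a proportion estimate against an
# external benchmark (e.g., an election outcome).

def bias_after_cleaning(records, benchmark, drop_fraction):
    """records: list of (completion_seconds, answer) with answer in {0, 1}.
    Returns estimate - benchmark after removing the fastest respondents."""
    kept = sorted(records, key=lambda r: r[0])  # fastest first
    n_drop = int(len(kept) * drop_fraction)
    kept = kept[n_drop:]
    estimate = sum(answer for _, answer in kept) / len(kept)
    return estimate - benchmark

# Toy data: completion time in seconds and a binary survey answer.
records = [(60, 1), (75, 1), (120, 0), (180, 1), (240, 0),
           (300, 1), (360, 0), (420, 1), (480, 0), (600, 1)]
print(bias_after_cleaning(records, benchmark=0.55, drop_fraction=0.2))
```

Sweeping `drop_fraction` from 0.025 to 0.5, as in the study design, shows how bias responds to increasingly aggressive cleaning; in this toy data, dropping the two fastest (both "1" answers) actually moves the estimate away from the benchmark.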

Results:

Consistent with our initial investigation of other studies, the NPS showed higher bias than the probability-based KnowledgePanel sample. However, more rigorous case deletion generally did not reduce bias for either sample source, and in some cases higher levels of cleaning increased bias slightly.

Added Value:

Some cleaning might not affect data estimates and correlational measures; however, excessive cleaning may increase bias, achieving the opposite of the intended effect while increasing survey costs at the same time.

How Stable is Satisficing in Online Panel Surveys?

Joss Roßmann

GESIS Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question

Satisficing response behavior is a severe threat to the data quality of web-based surveys. Yet, to date, no study has systematically explored the stability of satisficing in repeated interviews of the same respondents over time. Gaining novel insights into this issue is particularly important for survey methodologists and practitioners in the field of online panel research, because the effectiveness of approaches to cope with satisficing depends, among other things, on the stability of the response behavior over time.

Methods & Data

The present study used data from three waves of an online panel survey on politics and elections in Germany to analyze the respondents’ response behavior over time. For each wave of the panel, respondents were classified as either optimizers or satisficers using latent class analysis with five common indicators of satisficing response behavior (i.e., speeding, straightlining, don’t know answers, mid-point responses, and nonsubstantive answers to an open-ended question).


Results

The results of our study showed that between 10.2% and 10.9% of the respondents engaged in satisficing across the waves of the panel survey. Furthermore, we observed a certain degree of stability in optimizing and satisficing over time. Nevertheless, the response behavior of the participants was far from completely stable across the panel waves. This finding indicates that time-varying characteristics of the respondents and the context of a survey are important explanatory factors.

Added Value

Our study provides evidence that satisficing is not a completely stable, trait-like characteristic of respondents. Rather, satisficing should be seen as a response strategy that is affected by both time-stable and time-varying characteristics of respondents and the context of a survey. Thus, we conclude that approaches to cope with this response behavior should focus on motivating satisficing respondents rather than removing them from online panels.
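The stability question at the heart of this abstract can be made concrete with a small sketch: given each respondent's per-wave classification (here produced by latent class analysis in the study; hard-coded toy labels below), compute the share whose label never changes. All data and names are hypothetical.

```python
# Hypothetical sketch: proportion of panel respondents whose
# optimizer/satisficer classification is identical in every wave.

def stability(classifications):
    """classifications: dict respondent_id -> list of per-wave labels."""
    stable = sum(1 for labels in classifications.values()
                 if len(set(labels)) == 1)
    return stable / len(classifications)

waves = {
    "r1": ["optimizer", "optimizer", "optimizer"],
    "r2": ["satisficer", "optimizer", "optimizer"],
    "r3": ["satisficer", "satisficer", "satisficer"],
    "r4": ["optimizer", "satisficer", "optimizer"],
}
print(stability(waves))  # r1 and r3 are stable -> 0.5
```

A value well below 1.0, as the study reports, is what motivates treating satisficing as a context-dependent strategy rather than a fixed trait.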

Does the Exposure to an Instructed Response Item Attention Check Affect Response Behavior?

Tobias Gummer, Joss Roßmann, Henning Silber

GESIS, Germany

Relevance & Research Question: Providing high-quality answers requires respondents to thoroughly process survey questions. Accordingly, identifying inattentive respondents is a challenge for web survey methodologists. Instructed response items (IRIs) are one tool to detect inattentive respondents. An IRI is included as one item in a grid and instructs respondents to mark a specific response category (e.g., “click strongly agree”). To date, it has not been established whether making respondents aware that they are being checked has positive or negative spill-over effects on response behavior. Consequently, we investigated how exposure to an IRI attention check affects response behavior and, thus, answer quality.

Methods & Data: We rely on data from a web-based survey fielded in January 2013 in Germany (participation rate = 25.3%, N = 1,034). The sample was drawn from an offline-recruited access panel. We randomly split the sample into three groups: two treatment groups and one control group. Both treatment groups received an IRI in a grid with 7 items (at the beginning vs. the end of the questionnaire). The control group received the same grid but without the IRI. To assess the effect of exposure to an IRI on data quality, we compared the following 8 indicators of questionable response behavior between the three experimental groups: straightlining, speeding, choosing “don’t know”, item nonresponse, inconsistent answers, implausible answers, and respondents’ self-reported motivation and effort.
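The three-arm design can be sketched as follows: a reproducible random split into two treatment groups and a control, plus a per-group straightlining rate for the between-group comparison. Group names, the seed, and the data are illustrative assumptions, not the study's materials.

```python
# Hypothetical sketch: random assignment to IRI-at-start, IRI-at-end, and
# control groups, and comparison of a straightlining rate between groups.
import random

def assign_groups(respondent_ids, seed=2013):
    rng = random.Random(seed)  # fixed seed for a reproducible split
    ids = list(respondent_ids)
    rng.shuffle(ids)
    third = len(ids) // 3
    return {"iri_start": ids[:third],
            "iri_end": ids[third:2 * third],
            "control": ids[2 * third:]}

def straightlining_rate(grid_answers_by_id, group_ids):
    """Share of a group that gave identical answers to every grid item."""
    flagged = sum(1 for rid in group_ids
                  if len(set(grid_answers_by_id[rid])) == 1)
    return flagged / len(group_ids)

groups = assign_groups(range(9))
print({name: len(ids) for name, ids in groups.items()})
```

Comparing `straightlining_rate` across the three groups mirrors the study's test of whether an early IRI suppresses straightlining relative to the control.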

Results: Overall, our study did not provide evidence that exposure to an IRI affected response behavior. The only notable exception was straightlining: respondents who received an attention check at the beginning of the questionnaire straightlined less frequently in grid questions than respondents who were not made aware that they were being checked.

Added Value: Our experimental study provides insights into the implications of using attention checks in surveys – a topic on which research is surprisingly sparse. While our study is encouraging in that the IRIs did not provoke a negative backlash, it also means that we did not find IRIs to raise respondents’ awareness and, thus, enhance overall data quality.

Conference: GOR 17