Conference Agenda

Overview and details of the sessions of this conference. Please select a date or room to show only the sessions held on that day or in that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
A 5: Response Quality & Fraudulent Respondent Behaviour
Time:
Thursday, 19/Mar/2015:
15:45 - 16:45

Session Chair: Ines Schaurer, GESIS - Leibniz Institute for the Social Sciences
Location: Room 248
Fachhochschule Köln/ Cologne University of Applied Sciences
Claudiusstr. 1, 50678 Cologne

Presentations

Are professional respondents a threat to probability-based online panels?

Joop J. Hox, Edith D. de Leeuw

Utrecht University, the Netherlands

Relevance & Research Question

A major concern about the quality of non-probability online panels centers on the presence of ‘professional’ respondents, and several studies have addressed this (AAPOR Task Force, 2010). Although the definition of a professional respondent varies, common indicators are the number of panel memberships and the number of surveys completed in a specific period. In reaction to criticism of non-probability panels, probability-based online panels have been established (e.g., LISS in the Netherlands, GIP in Germany, ELIPS in France, and Knowledge Networks in the USA), reflecting the need for representativeness. However, probability-based panels suffer initial nonresponse during panel formation, with the danger of selective nonresponse. Research question: Are probability-based panels a safeguard against professional respondents? Or are they comparable to opt-in panels regarding the number of panel memberships and the survey behavior of their members?

Methods & Data

In the Netherlands, a large study (NOPVO) of 19 opt-in online panels reports on professional respondents (Vonk, Van Ossenbruggen & Willems, 2006). We partly replicated their study in two Dutch probability-based online panels: the first is a probability sample of the general population of the Netherlands (LISS panel), and the second a probability sample of the four largest ethnic minority groups in the Netherlands (LISS immigrant panel). We posed the same questions on panel membership, reasons to participate in online research, and the number of questionnaires completed in the last four weeks. We also asked questions on internet use and mobile devices.

Results

In the two probability-based Dutch online panels, the number of panel memberships was lower than in the NOPVO panels: 84.5% of the LISS members did not belong to other panels, and 80.3% of the ‘minority panel’ members did not have multiple panel memberships, while in the NOPVO panel study only 38% were not members of multiple panels. In the NOPVO study, on average more than 80% of the respondents reported having completed more than one questionnaire in the past four weeks, while in the two probability-based panels this was less than 40%.

Added Value

The study assesses the prevalence of professional respondents in general probability-based online panels and in probability-based panels for special groups, and establishes profiles of professional respondents.

Hox-Are professional respondents a threat to probability-based online panels-127.pdf

PageFocus: Using Paradata to Detect and Prevent Cheating in Online Achievement Tests

Birk Diedenhofen, Stefan Trost, Jochen Musch

University of Duesseldorf, Germany

Relevance & Research Question:

Cheating participants threaten the validity of unproctored online achievement tests. To address this problem, we developed PageFocus, a JavaScript that determines whether and how frequently participants abandon test pages by switching to another window or browser tab. To validate whether PageFocus can detect and even prevent cheating, we conducted a parallel lab and web validation study.
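The published PageFocus script itself is not reproduced in this abstract, but the general technique it describes — recording page-abandonment paradata via window focus events — can be sketched as follows. The class name, method names, and the popup callback below are hypothetical illustrations, not the actual PageFocus API.

```javascript
// Minimal sketch of focus-loss paradata collection, in the spirit of the
// PageFocus approach described above. Names and API are illustrative only.
class FocusTracker {
  constructor() {
    // One entry per departure: when the page lost focus and when it regained it.
    this.departures = [];
  }

  // Record that the participant left the test page (e.g. switched tabs).
  recordBlur(timestamp) {
    this.departures.push({ leftAt: timestamp, returnedAt: null });
  }

  // Record the participant's return to the page.
  recordFocus(timestamp) {
    const last = this.departures[this.departures.length - 1];
    if (last && last.returnedAt === null) {
      last.returnedAt = timestamp;
    }
  }

  // How often the page was abandoned.
  get departureCount() {
    return this.departures.length;
  }

  // Total milliseconds spent away from the page (completed departures only).
  totalTimeAway() {
    return this.departures
      .filter((d) => d.returnedAt !== null)
      .reduce((sum, d) => sum + (d.returnedAt - d.leftAt), 0);
  }

  // In a browser, wire the tracker to window events; onDeparture could show
  // a popup warning, as in the experiment described below.
  attach(win, onDeparture) {
    win.addEventListener('blur', () => {
      this.recordBlur(Date.now());
      if (onDeparture) onDeparture(this.departureCount);
    });
    win.addEventListener('focus', () => this.recordFocus(Date.now()));
  }
}
```

In a survey page one might call `tracker.attach(window, () => alert('Please stay on the test page'))` and submit the collected counts alongside the answers as paradata.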

Methods & Data:

115 lab participants and 194 members of an online panel completed a knowledge test consisting of 16 items that were difficult to answer but easy to look up on the Internet. In the experiment, one half of the participants were invited to cheat by looking up solutions on the web. In the second half of the test, a popup message warned participants not to cheat whenever PageFocus detected that a participant was leaving the test pages.

Results:

As expected, test-takers invited to cheat abandoned the test pages more frequently and achieved higher scores. Presenting a popup warning successfully reduced cheating rates. With operating system data and the test-takers' self-reports as external criteria, very high sensitivity and specificity were found for cheating detection based on the PageFocus script. Concurrent evidence from the lab and web samples suggests that our lab results generalize to online testing contexts.

Added Value:

Participants switching back and forth between a test and an unrelated application running in the background pose a serious problem for online achievement testing. Our experiment shows that PageFocus can not only address this problem by detecting cheating participants; it also reduces cheating by presenting a popup warning whenever cheating is detected. Our results suggest that the validity of all kinds of online achievement tests can be improved by adding the PageFocus script to test pages, making PageFocus a promising new tool for improving data quality in online research.

Counting confusion: The role of attitude importance and item clarity in extreme responding

Anton Örn Karlsson1, Vaka Vésteinsdóttir1, Fanney Thorsdottir1, Nick Allum2

1University of Iceland, Iceland; 2University of Essex, United Kingdom

Relevance & Research Question:

Response sets can bias survey results, leading to erroneous conclusions. It is therefore important for survey researchers to assess the form and extent of response bias in order to ensure data quality. The extreme response set (ERS) is one form of response set that has been identified as a possible threat to survey quality. In short, ERS refers to the tendency of respondents to simplify the response scale of survey questions by using the outermost scale points more frequently than the scale points near the middle. There are indications that ERS may be linked to cultural differences, making comparisons between different cultural groups difficult.

This study assesses to what extent respondents in an Internet survey display a pattern resembling ERS as a function of attitude importance, as opposed to resorting to ERS because of difficulties with the response task.

Methods & Data:

The data were from a 2011 study on immigrants in the Netherlands, part of the Dutch LISS panel administered by CentERdata. ERS was assessed by counting the number of times each respondent selected the most extreme response option in a set of items measuring attitudes towards different ethnic and cultural groups. The main independent variables were to what extent the respondents felt the issue of immigration was important and to what extent they felt the questions asked were unclear.
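The ERS measure described here is a simple per-respondent count of extreme selections. As a minimal sketch (the 5-point scale endpoints and the data layout are assumptions for illustration, not details from the study):

```javascript
// Count how often a respondent chose the outermost points of the scale.
// min/max default to a 5-point scale (1 and 5 as the extreme options).
function extremeResponseCount(responses, min = 1, max = 5) {
  return responses.filter((r) => r === min || r === max).length;
}

// Score a whole sample: one ERS count per respondent.
// Each respondent object is assumed to carry an `answers` array.
function ersScores(sample) {
  return sample.map((respondent) => extremeResponseCount(respondent.answers));
}
```

For example, a respondent with answers `[1, 3, 5, 5, 2]` would receive an ERS count of 3.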

Results:

First results suggest a significant interaction between attitude importance and the clarity of the questions. When attitude importance was low, ERS was higher among respondents who felt the questions were unclear; when attitude importance was high, the opposite held: ERS was lower among respondents who felt the items were unclear.

Added Value:

The study provides added understanding of the mechanism underlying extreme responding by separating responses of high intensity from responses that can be taken as an indicator of a response set.


 
Contact and Legal Notice · Contact Address:
Conference: GOR 15
Conference Software - ConfTool Pro 2.6.76
© 2001 - 2014 by H. Weinreich, Hamburg, Germany