Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads where available).

Session Overview
C10: Response and Measurement
Friday, 02/Mar/2018:
2:15 - 3:15

Session Chair: Bella Struminskaya, Utrecht University & DGOF, The Netherlands
Location: Room 147


When Less is More: Improving Respondent Experience with the Sociometric Framework

Randall K. Thomas, Frances M. Barlas

GfK Custom Research, United States of America

Relevance & Research Question: The psychometric framework focuses on accurate measurement in the classification, diagnosis, placement, or evaluation of individuals. To measure individuals accurately, it is necessary to detect small differences between them. Increasing the number of items and the number of responses used to measure a specific concept is one way to improve the differentiation, reliability, and validity of measurement. Many survey researchers have been trained in the psychometric approach and apply it in their questionnaire design. However, in sample surveys we are interested in measuring concepts for groups, not individuals. As such, a reconsideration of the applicability of the psychometric measurement framework for sample surveys is warranted. Thomas (2017) outlined a new way to organize thinking about sample surveys that focuses on measurement reliability and validity based on the number of respondents rather than the number of items or responses used. This new sociometric measurement framework significantly reduces the need for many redundant items in sample surveys, as well as the need for the longer scales with many responses that resulted from misapplication of the psychometric framework. This new focus comes at just the right time, as more respondents are taking surveys on smartphones, which impose limits on screen real estate.

Methods & Data: In this paper, we summarize a number of studies which show that, by applying this sociometric framework, we can produce reliable and valid data. We also describe a number of alternative metrics that, when used with various resampling techniques, provide alternative indicators of measurement validity and reliability (e.g., group split-half reliability) for the new, simpler measurement formats supported by the sociometric framework.
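The abstract names group split-half reliability but does not spell out its computation. A minimal sketch of one plausible resampling version is shown below (the function name, the simulated data, and the exact procedure — correlating per-item group means across random half-samples of respondents — are assumptions for illustration, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)

def group_split_half(data, n_splits=1000):
    """Hypothetical group-level split-half reliability estimate.

    data: (n_respondents, n_items) array of item responses.
    For each random split of respondents into two halves, the per-item
    group means of the two halves are correlated across items; the mean
    correlation over all splits is returned.
    """
    n = data.shape[0]
    corrs = []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        half_a, half_b = idx[: n // 2], idx[n // 2:]
        means_a = data[half_a].mean(axis=0)   # group estimates, half A
        means_b = data[half_b].mean(axis=0)   # group estimates, half B
        corrs.append(np.corrcoef(means_a, means_b)[0, 1])
    return float(np.mean(corrs))

# Simulated example: 500 respondents, 10 items with distinct true means
sim = rng.normal(loc=np.linspace(2, 4, 10), scale=1.0, size=(500, 10))
r = group_split_half(sim)
```

With enough respondents, the two half-sample means converge on the same group values, so `r` approaches 1 even for single-item measures — which is the sociometric point that reliability can rest on the number of respondents rather than the number of items.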

Results: We apply these techniques to a number of different items across studies to demonstrate how these reliability and validity indicators are related to each other and can usefully supplant traditional psychometric indicators of reliability and validity.

Added Value: This paper provides a solid foundation for new measurement methodologies for the new challenges confronting online surveys. Based on these new indicators of reliability and validity, we can reduce both the number of items and number of responses used for scales, reducing survey burden for respondents.

Thomas-When Less is More-219.pdf

Is it possible to select respondents at random in push-to-web surveys when using address-based samples and postal contact?

Andrew Cleary1, Alex Cernat2, Peter Lynn3, Gerry Nicolaas1

1Ipsos MORI, United Kingdom; 2University of Manchester; 3University of Essex

In this study we test alternative methods for selecting respondents within households when using postal contact to encourage a random probability sample of the population to go online and complete a questionnaire. Lists of addresses are often used as a sampling frame, and it is therefore essential that the letters instruct who at the address should complete the questionnaire. There is evidence from postal surveys as well as push-to-web surveys that a substantial proportion of households get the selection wrong when asked to apply a procedure such as last/next birthday. To counter this, some studies have instead allowed all adults to complete the survey, but this can encourage fraud when coupled with a conditional incentive.

In a pilot study commissioned by The European Union Agency for Fundamental Rights (FRA), an experiment was conducted in 18 EU countries. Households were asked to select up to two or three adults, depending on the average household size in the country. This ensures that the risk of selection bias is minimised given that most households include no more than two or three adults. Two main treatments were tested: (a) the letter provides login details up front for two or three adults; and (b) the letter requests any adult member to take part and on completion of the questionnaire, an additional one or two adults are asked to take part (only if there are two or more adults in the household). Within the second treatment, two methods for selecting the additional adult(s) are tested: (b1) household choice; and (b2) online random selection.
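Treatment (b2), online random selection of the additional adult(s), can be sketched as follows (the function and its parameters are hypothetical, not the FRA pilot's actual implementation; it assumes the first respondent lists the household's adults in some fixed order):

```python
import random

def select_additional_adults(n_adults, max_total=3, seed=None):
    """Hypothetical sketch of treatment (b2): after the first volunteer
    (adult 1) completes the questionnaire, randomly select up to
    max_total - 1 additional adults from the remaining household members.

    n_adults: number of adults the first respondent reports.
    Returns the positions (2..n_adults) of the selected additional
    adults in the order the respondent listed them.
    """
    rng = random.Random(seed)           # seed only for reproducibility
    remaining = list(range(2, n_adults + 1))
    k = min(max_total - 1, len(remaining))
    return sorted(rng.sample(remaining, k))

# Single-adult household: nobody else to select
solo = select_additional_adults(1)
# Four-adult household, cap of three interviews: two more are drawn
extra = select_additional_adults(4, max_total=3, seed=1)
```

Because each remaining adult is drawn with equal probability, this removes the household's discretion that treatment (b1) allows, at the cost of requiring the first respondent to report the household composition accurately.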

The results were not available at the time of writing this abstract. However, at the conference we will present (a) compliance rates; (b) address-level response rates; (c) the number of completed questionnaires; and (d) the impact on sample composition.

This study will provide new evidence on how to instruct respondent selection when using address-based samples and postal contact for push-to-web surveys. It builds on prior research which has demonstrated that a substantial proportion of people do not follow instructions provided in letters, such as the commonly used last/next birthday methods.

Cleary-Is it possible to select respondents at random in push-to-web surveys when using address-based samples.pdf

Solving the “Satisfaction Paradox”: Advances in Measuring Satisfaction

Hubertus Hofkirchner

Prediki Prognosedienste GmbH, Austria

Measuring the Unmeasurable: Two Cohorts, Two Methods, Four Results, Six Permutations

Relevance & Research Question:

Measuring satisfaction is difficult, be it customer or political satisfaction, for many reasons. Satisfied people often keep quiet, while dissatisfied ones are more likely to speak up or complain. Satisfaction is difficult to quantify, and it is unclear whether questionnaire surveys reflect absolute satisfaction correctly. Last, there is the “Satisfaction Paradox”: averaging detailed satisfaction scores yields a worse score than asking for overall satisfaction directly.
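A small worked example may make the paradox concrete (all numbers here are hypothetical, invented purely for illustration and not taken from the study):

```python
# Hypothetical illustration of the "Satisfaction Paradox": the mean of a
# respondent's detailed attribute ratings comes out lower than the single
# overall satisfaction rating the same respondent gives (scale 1-10).
detailed = {"waiting time": 5, "friendliness": 7, "price": 4, "quality": 8}
overall = 7  # the same respondent's single overall rating

detail_avg = sum(detailed.values()) / len(detailed)  # (5+7+4+8)/4 = 6.0
gap = overall - detail_avg                           # 7 - 6.0 = 1.0
```

The overall judgment need not be the arithmetic mean of the parts; respondents may weight attributes unevenly or answer the two question types from different mental processes, which is the gap the paradox refers to.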

Methods & Data:

In recent years, the prediction market method has expanded its usefulness beyond its origin, predicting election outcomes. Researching new product ideas or concepts, optimised pricing, customer and political satisfaction are now emerging as promising applications.

In a recent project, we found indications that the Gold Standard of satisfaction research, a questionnaire presented to a random customer sample, may give inferior results compared to a self-selected crowd and a prediction market, considering the underlying purpose of such research.

We will present a case study on citizens’ political satisfaction, comparing the results of Prediki PROMPT, a new quali-quant method based on advanced prediction market technology, with traditional questionnaire results. Our case is based on two cohorts - a representative one of n=1,000 and a self-selected one of n=1,500 - each doing both exercises, which produces four data series and six relative comparisons.

Results:

These combinations not only shed light on citizen satisfaction (or lack thereof) with Austria’s central government. We will show how the four results compare. Differences point to System 1 vs. System 2 responses, and relative errors indicate that an absolute measure of satisfaction is in fact possible, but that the current Gold Standard is not it. We will present how crowdsourcing yields a more authentic interpretation of these results, for more insight into why satisfaction levels are as they are.

Added Value:

A better read on customer satisfaction will yield significant financial and non-financial benefits for clients and governments alike. It will increase customer loyalty, thus secure more repeat business. It will focus businesses and organisations on the right actions to increase customer satisfaction while saving money and time on measures which do not.

Conference: GOR 18