Conference Agenda

Overview and details of the sessions of this conference. Select a date or room to show only sessions on that day or at that location; select a single session for a detailed view (with abstracts and downloads where available).

Session Overview
A 10: Improving Questionnaires
Friday, 20/Mar/2015:
14:00 - 15:00

Session Chair: Daniela Wetzelhütter, Johannes Kepler University
Location: Room 248
Fachhochschule Köln / Cologne University of Applied Sciences
Claudiusstr. 1, 50678 Cologne


Coding Surveys on their Item Characteristics: Reliability Diagnostics

Frank Bais1, Barry Schouten1,2, Vera Toepoel1

1Utrecht University, The Netherlands; 2Statistics Netherlands, The Netherlands

Relevance & Research Question:

More and more surveys use multiple modes, supplementing or replacing traditional interviewer modes with the web. In multi-mode questionnaire design, some consideration is usually given to mode-specific measurement error. Despite this consideration, however, these measurement effects are frequently unexpectedly large and hamper publication. There is therefore a strong incentive to better predict measurement effects. Measurement effects are determined by the interplay between characteristics of the questionnaire and characteristics of the respondents. In our research, we investigate the existence and utility of so-called questionnaire and respondent profiles, in which these characteristics are summarized, for predicting measurement effects. As a first research question, we ask whether questionnaires can be coded reliably on item characteristics that are suggested in the literature as influential in mode-specific measurement effects.

Methods & Data:

We constructed a typology of item characteristics from the literature and applied it to a wide range of surveys: the Dutch Labour Force Survey of Statistics Netherlands and the core studies of the LISS panel of CentERdata. For all surveys, 16 item characteristics are coded by two main coders; the 7 of these 16 item characteristics that are assumed to be relatively influential in evoking measurement error are also coded by a third coder. Reliability diagnostics are derived for the various item characteristics.
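The abstract does not name the reliability diagnostics derived from the coding scores. As a hypothetical illustration, Cohen's kappa is one common agreement measure for two coders; a minimal sketch with invented binary codes (all item labels and ratings below are made up):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders on one item characteristic."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed share of items on which both coders agree
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Agreement expected by chance from each coder's marginal code frequencies
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two coders rating 8 items on a binary characteristic (e.g. "sensitive": yes/no)
coder1 = ["yes", "yes", "no", "no", "yes", "no", "no", "yes"]
coder2 = ["yes", "no", "no", "no", "yes", "no", "yes", "yes"]
print(cohens_kappa(coder1, coder2))  # 0.5: moderate agreement
```

A kappa near 1 indicates near-perfect agreement beyond chance, values around 0.4-0.6 only moderate agreement, which is the kind of pattern the Results below report for the influential characteristics.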


Results:

Analyses of the survey coding scores indicate a relatively low reliability for characteristics that the literature suggests are influential in mode-specific measurement effects: sensitivity to socially desirable answers, potential presumption of filter question, emotional charge, centrality, and language complexity.

Added Value:

It is investigated to what extent coding of questionnaires on their item characteristics is reliable and to what extent questionnaire profiles can be constructed based on this coding. Along with process data and register data that are linked to individual respondents who have filled out multiple questionnaires, the questionnaire and respondent profiles might shed light on the occurrence and scope of measurement effects for specific respondents and specific questionnaire characteristics across survey modes.
Bais-Coding Surveys on their Item Characteristics-185.pdf

Approaches for Evaluating Online Survey Response Quality

Nils Glück1,2

1Cologne University of Applied Sciences, Germany; 2QuestBack GmbH, Germany

Relevance & Research Question: Online questionnaires are a common tool for research companies. While survey software products offer many features, respondent fraud and indifferent or inattentive respondent behaviour remain critical issues. How can such low-quality responses be identified in an automated process?

Methods & Data: The author proposes a post-fieldwork approach which is based on behaviour pattern detection and does not rely on control or trap questions. Using response quality indicators as well as discriminant analysis, logistic regression and an optional flag variable, responses are classified with regard to their quality. The 17 indicators focus on aspects such as response differentiation in open-ended questions, the time spent answering the survey and monotonous behaviour in response to matrix questions. For the procedure to work, the survey should include open-ended questions, several matrix questions and a minimum of ten questions overall. An incentivized survey containing quality-related trap questions and other control measures is sent out to a Facebook river sample (n = 134) as well as a commercial panel sample (n = 1,000). This survey is used to generate a standard classification. Another five survey data sets from past real-case projects are finally used to examine the effectiveness of the procedure developed (157 <= n <= 2,603). R is used for calculating the indicators, SPSS for discriminant and regression analysis. The automation process is specifically designed for QuestBack EFS software.
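The 17 indicators and their cut-offs are not listed in the abstract, and the actual classification uses discriminant analysis and logistic regression in SPSS. Purely as a sketch of the idea, two invented indicators (completion time and straightlining on matrix questions) can be combined into a crude quality flag; the threshold values below are assumptions, not the author's:

```python
def straightlining_share(matrix_responses):
    """Share of matrix (grid) questions answered with one identical value in
    every row -- a simple indicator of monotonous response behaviour."""
    flat = sum(len(set(rows)) == 1 for rows in matrix_responses)
    return flat / len(matrix_responses)

def flag_low_quality(respondent, min_seconds=120, max_straightlining=0.5):
    """Combine two illustrative indicators into a single low-quality flag."""
    speeding = respondent["duration_seconds"] < min_seconds
    monotonous = straightlining_share(respondent["matrix_answers"]) > max_straightlining
    return speeding and monotonous

# Hypothetical respondent: fast completion, straightlined 2 of 3 matrix questions
r = {"duration_seconds": 95,
     "matrix_answers": [[3, 3, 3, 3], [5, 5, 5, 5], [2, 4, 1, 3]]}
print(flag_low_quality(r))  # True
```

In the actual procedure, such indicator values feed a multivariate classification trained on the trap-question survey rather than fixed per-indicator thresholds.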

Results: Depending on the data, the procedure identifies between 2.5 and 5.2 per cent of all respondents as low-quality respondents. Judging from their indicator values, their behaviour strongly suggests poor response quality; their removal from the sample should therefore be considered.

Added Value: The approach offers straightforward ways of judging whether survey responses should be considered trustworthy relative to one another. This knowledge supports post-fieldwork data cleansing and reduces distortion effects from low-quality data. The procedure is ready for implementation in the EFS software.
Glück-Approaches for Evaluating Online Survey Response Quality-193.pdf

Deep impact or no impact, evaluating opportunities for a new question type: Statement allocation on importance-performance-grid

Sebastian Schmidt

SKOPOS GmbH & Co. KG, Germany

Relevance & Research Question:

Standardized grid question types in online questionnaires can be regarded as the backbone of modern quantitative research. Grid questions allow comparability across survey waves. Furthermore, it is well established that varying the way grid questions are displayed strongly affects response behavior. Despite its importance in everyday research, this question type lacks the ability to take full advantage of the opportunities the web currently offers, such as using media elements to express opinions or visualizing certain aspects, e.g. by using pictures.

Against this background, the author will explore an innovative approach to importance-performance analysis (IPA) whereby respondents position particular customer satisfaction aspects directly on a grid which is divided into four so-called action areas: “concentrate here”, “keep up the good work”, “low priority” and “possible overkill”. We will assess whether this approach allows identifying critical performance factors as known from the traditional IPA. Benefits of this approach could be increased respondent engagement, time savings and a more distinct prioritization of aspects. Pitfalls such as a lack of comprehensibility and satisfaction patterns need to be considered to assess future usage of this approach.

Methods & Data:

The author conducted a customer satisfaction survey via an online access panel, applying a split-half design to examine the effects of this new grid-replacing question type. Respondents were randomly assigned to rate their customer satisfaction statements either using the traditional importance-performance analysis (IPA) or the new approach explained above, whereby statements are assigned directly to the four-field grid. A comparison of both designs will reveal differences in satisfaction and importance ratings. The analysis will also assess the comprehensibility of the grid approach, while differences in interview duration and respondent engagement are examined as well.
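In the traditional IPA arm of such a design, each aspect's mean importance and performance ratings are plotted and assigned to one of the four action areas. The abstract does not specify the quadrant boundaries; a minimal sketch using the grand means as boundaries (one common convention) with entirely hypothetical aspects and 5-point-scale ratings:

```python
def ipa_quadrant(importance, performance, imp_mean, perf_mean):
    """Assign a satisfaction aspect to one of the four IPA action areas,
    using the supplied means as the quadrant boundaries."""
    if importance >= imp_mean:
        return "concentrate here" if performance < perf_mean else "keep up the good work"
    return "low priority" if performance < perf_mean else "possible overkill"

# Hypothetical (importance, performance) mean ratings per aspect
aspects = {"delivery speed": (4.6, 2.8), "packaging": (2.1, 2.5),
           "support": (4.2, 4.4), "newsletter": (1.8, 4.1)}
imp_mean = sum(i for i, _ in aspects.values()) / len(aspects)   # 3.175
perf_mean = sum(p for _, p in aspects.values()) / len(aspects)  # 3.45
for name, (imp, perf) in aspects.items():
    print(name, "->", ipa_quadrant(imp, perf, imp_mean, perf_mean))
```

The new question type skips this derivation step: respondents place each statement directly into one of the four areas, and the split-half comparison asks whether both routes prioritize the same aspects.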


Results:

Available by the end of January.

Added Value:

The author will show to what extent a direct assignment of customer satisfaction statements on the IPA grid can be considered comprehensible and valid for the purpose of analyzing customer satisfaction, highlighting possible advantages and potential risks, and ultimately concluding whether future usage is reasonable and beneficial.

Schmidt-Deep impact or no impact, evaluating opportunities-205.pdf

Conference: GOR 15