Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
A7: Measurement in Web Surveys
Time:
Friday, 17/Mar/2017:
9:00 - 10:00

Session Chair: Peter Lugtig, Utrecht University, The Netherlands
Location: A 208

Presentations

Clarification features in closed-ended questions and their impact on scale effects

Anke Metzler, Marek Fuchs

Darmstadt University of Technology, Germany

Relevance & Research Question:

Previous research on clarification features in Web surveys has shown that they are an effective means of improving response quality in open-ended questions. However, little is known about their influence on response quality in closed-ended questions. Results from the literature indicate that respondents use the range and content of response categories as relevant information when generating an answer (scale effects). Given the findings concerning clarification features in open-ended questions, we assume (1) that they are similarly effective in closed-ended questions and (2) that they may have a stronger effect on the response process than the range of the response categories, potentially reducing scale effects.
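To illustrate the scale effect at issue: the same true behavior falls into a high category on a low-frequency scale but into a low category on a high-frequency scale, and respondents read this positional information as a cue to the "typical" answer. A minimal sketch in Python, with purely illustrative category boundaries (not those used in the experiments):

import bisect

# Hypothetical low- and high-frequency category boundaries for a question
# such as "How many hours per day do you watch TV?"; the experiments'
# actual questions and boundaries may differ.
LOW_SCALE_BOUNDS = [0.5, 1.0, 1.5, 2.0]   # categories: <=0.5, 0.5-1, 1-1.5, 1.5-2, >2
HIGH_SCALE_BOUNDS = [2.5, 3.0, 3.5, 4.0]  # categories: <=2.5, 2.5-3, 3-3.5, 3.5-4, >4

def category(hours, bounds):
    """Return the 0-based index of the category containing `hours`."""
    return bisect.bisect_left(bounds, hours)

# The same true behavior (2 hours/day) lands in the second-highest category
# of the low-frequency scale but in the lowest category of the high-frequency
# scale; this positional cue is what drives scale effects.
print(category(2.0, LOW_SCALE_BOUNDS))   # 3
print(category(2.0, HIGH_SCALE_BOUNDS))  # 0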

Methods & Data:

Experiments 1 and 2 were conducted in two randomized field experimental Web surveys (n=4,620; n=944). Using a between-subjects design, we assessed the effectiveness of clarification features in closed-ended frequency questions. Two types of clarification features were tested, aimed at either clarifying the question meaning (definitions) or motivating respondents to search their memories for relevant information (motivating statements). Questions and clarification features were designed such that respondents in the experimental groups with clarification features were expected to provide either higher or lower frequencies than respondents in the control groups without clarification features.

Experiment 3 was conducted in a randomized field experimental Web survey (n=944). A between-subjects design was implemented in closed-ended questions to test low- and high-frequency scales without clarification features, with definitions, or with motivating statements (a 2 × 3 factorial design). The magnitude of scale effects served as the dependent variable.
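As a hedged illustration of the assignment logic (a sketch of the described design, not the authors' implementation), the 2 × 3 design amounts to random allocation of respondents across six cells:

import itertools
import random

# Scale range (low vs. high frequency) crossed with clarification feature
# (none, definition, motivating statement) gives six experimental cells.
SCALE_RANGES = ["low_frequency", "high_frequency"]
CLARIFICATIONS = ["none", "definition", "motivating_statement"]
CELLS = list(itertools.product(SCALE_RANGES, CLARIFICATIONS))

def assign(respondent_ids):
    """Randomly assign each respondent to one of the six cells."""
    return {rid: random.choice(CELLS) for rid in respondent_ids}

conditions = assign(range(944))  # n = 944, as reported above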

Results:

Overall, clarification features are effective in influencing the responses provided. Results indicate that definitions yield stronger effects than motivating statements. Furthermore, scale effects are smaller for respondents receiving clarification features than for respondents in the control group. Again, definitions are more effective in reducing the scale effect than motivating statements.

Added Value:

The use of definitions in closed-ended questions has a positive effect on survey responses and helps improve data quality. Definitions seem to have the potential to counteract scale effects, whereas motivating statements do not show any effect.


Metzler-Clarification features in close ended questions and their impact-178.pdf

Is Higher Endorsement in Yes-No Grids Due to Acquiescence Bias vs. Salience in Response?

Randall K. Thomas¹, Frances M. Barlas¹, Nicole R. Buttermore¹, Jolene D. Smyth²

¹GfK Custom Research, United States of America; ²University of Nebraska at Lincoln

Relevance and Research Question:

A common method used to obtain data efficiently in online studies is the Yes-No Grid. Elements in Yes-No Grids are endorsed at higher rates than when they occur in a multiple response format (i.e., select all that apply). Prior research (Smyth et al., 2005; Thomas & Klein, 2005) suggested that this may be due to increased consideration of each element: asking for a response to each element increases access to less proximal memories (the salience hypothesis). Alternatively, Callegaro et al. (2015) proposed that acquiescence bias most likely explains the heightened endorsement. Acquiescence bias results from socialization that encourages people to be agreeable, leading to a greater tendency to select 'agree' in an agree-disagree response format or 'yes' in a yes-no choice format. Our research question tested the viability of these alternative explanations.

Methods and Data:

We present two studies, with 1,127 and 1,449 respondents respectively, testing the divergent predictions of the salience and acquiescence bias hypotheses. Each study had a 2 × 5 factorial design; the studies differed only in the brands used for evaluation. Respondents were randomly assigned to a rating dimension (either descriptive or agreement) and to one of five response formats. Each participant rated four brands with which they had some familiarity along six different attributes (e.g., 'Is distinctive', 'Is expensive'). Those assigned the descriptive dimension were asked whether the attributes described each brand, with response formats being a multiple response format, a dichotomous format ('Describes/Does not describe' or 'Yes-No'), or a trichotomous format. The agreement dimension had comparable response formats (e.g., 'Agree/Do not agree', 'Yes-No').
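The resulting design can be sketched as follows; the two rating dimensions are taken from the abstract, while the exact composition of the five response formats (one multiple response, two dichotomous, and two trichotomous variants) is an assumption for illustration:

import itertools

# Rating dimension crossed with five response formats gives ten cells.
DIMENSIONS = ["descriptive", "agreement"]
FORMATS = [
    "multiple_response",      # select all that apply
    "dichotomous_labelled",   # e.g., 'Describes' / 'Does not describe'
    "dichotomous_yes_no",     # 'Yes' / 'No'
    "trichotomous_labelled",  # assumed third-option variant
    "trichotomous_yes_no",    # assumed third-option variant
]
CELLS = list(itertools.product(DIMENSIONS, FORMATS))
assert len(CELLS) == 10  # the 2 x 5 between-subjects design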

Results:

We found that the trichotomous formats had the highest endorsement rates and the multiple response formats the lowest, regardless of the response labels used, supporting the increased-consideration (salience) hypothesis. In addition, selection rates for 'Describes', 'Yes', and 'Agree' did not differ significantly in either the dichotomous or the trichotomous formats, disconfirming the acquiescence bias explanation.
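The kind of comparison reported here can be illustrated with a chi-square test on an endorsement-by-format contingency table; the counts below are invented purely for illustration and are not the study's data:

from scipy.stats import chi2_contingency

# Invented counts of (endorsed, not endorsed) for three of the formats;
# the actual counts are in the paper.
observed = [
    [310, 690],  # multiple response format (lowest endorsement)
    [420, 580],  # dichotomous yes-no
    [480, 520],  # trichotomous (highest endorsement)
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")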

Added Value:

Contrary to recommendations by Krosnick & Presser (2010), both yes-no and agree-disagree formats are efficient and valid response formats.


Evaluation of Agree-Disagree Versus Construct-Specific Scales in a Multi-Device Web Survey

Tanja Kunz

TU Darmstadt, Germany

Relevance & Research Question: Rating scales with agree-disagree response options are among the most widely used question formats for measuring attitudes, opinions, or behaviors in Web surveys, especially because several items can be combined in a grid irrespective of whether the items measure the same or different constructs. Nevertheless, there is an ongoing debate on whether construct-specific (CS) scales are preferable to agree-disagree (AD) scales with regard to data quality and cognitive burden. Furthermore, because respondents increasingly arrive at Web surveys via mobile devices, conventional grids are frequently being replaced by standalone (or item-by-item) question formats, at least for mobile respondents. Thus the question arises: since standalone question formats are already on the rise in multi-device Web surveys, why not make use of construct-specific scales?

Methods & Data: The present experiment was designed to gain a better understanding of how data quality and cognitive burden are affected by different scale formats. In a Web survey conducted among university applicants (n=4,477), a between-subjects design was implemented to examine three scale formats: an AD-grid, an AD-standalone, and a CS-standalone format. Moreover, scale direction was varied, with response options presented in either a positive-to-negative or the reverse order. Several indicators of data quality and cognitive burden were examined (e.g., response times, primacy effects, straightlining, and central tendency); two of these are sketched below.
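Two of the indicators named above, straightlining and central tendency, can be computed directly from a respondent-by-item response matrix. An assumed illustration (not the study's analysis code), with answers coded 1-5 on a five-point scale:

import numpy as np

# Respondent-by-item matrix of answers on a 5-point scale (invented data).
responses = np.array([
    [3, 3, 3, 3, 3, 3],  # a straightliner
    [1, 5, 2, 4, 3, 5],
    [3, 2, 3, 3, 4, 3],
])

def straightlining(rows):
    """1 if a respondent gave the identical answer to every item."""
    return (rows == rows[:, [0]]).all(axis=1).astype(int)

def central_tendency(rows, midpoint=3):
    """Share of items answered with the scale midpoint."""
    return (rows == midpoint).mean(axis=1)

print(straightlining(responses))     # [1 0 0]
print(central_tendency(responses))   # [1.    0.167 0.667] (approx.)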

Results: Findings suggest that construct-specific scales prevent respondents from rushing through a series of items, thus encouraging more thoughtful processing of the scale content without unnecessarily burdening respondents. Moreover, construct-specific scales are less susceptible to variations in scale direction than agree-disagree scales. On the downside, respondents are more inclined to choose the middle category in construct-specific scales than in agree-disagree scales.

Added Value: The findings provide a better understanding of the differences in respondents' processing of construct-specific scales compared to agree-disagree scales. Moreover, there is convincing evidence that construct-specific scales are a suitable alternative to agree-disagree scales for both mobile and desktop respondents in multi-device Web surveys.


Kunz-Evaluation of Agree-Disagree Versus Construct-Specific Scales-152.pdf


 