Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session: A 2: Measurement
Time: Thursday, 19/Mar/2015, 10:45 - 11:45

Session Chair: Oliver Bastian Tristan Franken, TU Dresden
Location: Room 248
Fachhochschule Köln / Cologne University of Applied Sciences
Claudiusstr. 1, 50678 Cologne

Presentations

Click, Touch, Slide: Impact of the Implementation of Graphical Rating Scales on Data Quality in Mobile and Desktop Settings

Frederik Funke (1,2), Vera Toepoel (3)

1: datamethods.net; 2: LINK Institut, Germany; 3: Utrecht University, The Netherlands

Relevance & Research Question: Rating scales (e.g., agree-disagree scales) can be implemented in different ways. Besides standard HTML radio buttons, different graphical rating scales are available. This study focuses on slider scales (e.g., Funke, Reips & Thomas, 2011) and visual analogue scales (VAS; e.g., Couper et al., 2006), two scales that differ in the way they are operated. VAS and radio buttons are operated by clicking only, which makes a marker appear on the previously empty scale. Slider scales consist of a handle that is visible from the beginning and has to be moved. Different implementations are possible: sliders can be operated either by sliding only or by a combination of sliding and clicking. This study aims at identifying the best way of implementing graphical rating scales.
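The distinction between click-only and slide-based operation can be made concrete with a short sketch. The following browser TypeScript is purely illustrative (the element handles, function names, and the 0-1 value range are assumptions, not the instrument used in the study): a VAS records a value from a click on an empty track, while a slider exposes a handle that has to be dragged and may or may not also accept clicks on the track.

// Visual analogue scale: clicking the empty track places a marker and records
// a value between 0 and 1; there is no pre-set handle.
function initVas(track: HTMLElement, onRate: (value: number) => void): void {
  track.addEventListener("click", (e: MouseEvent) => {
    const rect = track.getBoundingClientRect();
    onRate((e.clientX - rect.left) / rect.width);
  });
}

// Slider: a handle that is visible from the start and has to be moved.
// Whether a plain click on the track also moves the handle ("click and slide")
// or is ignored ("slide only") is the implementation choice compared above.
function initSlider(track: HTMLElement, handle: HTMLElement, allowClick: boolean,
                    onRate: (value: number) => void): void {
  const moveTo = (clientX: number) => {
    const rect = track.getBoundingClientRect();
    const value = Math.min(1, Math.max(0, (clientX - rect.left) / rect.width));
    handle.style.left = `${value * 100}%`;
    onRate(value);
  };
  handle.addEventListener("pointerdown", () => {
    const onMove = (move: PointerEvent) => moveTo(move.clientX);
    window.addEventListener("pointermove", onMove);
    window.addEventListener("pointerup",
      () => window.removeEventListener("pointermove", onMove), { once: true });
  });
  if (allowClick) {
    track.addEventListener("click", (e: MouseEvent) => moveTo(e.clientX));
  }
}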

Methods & Data: The sample consisted of N = 4180 respondents who were randomly assigned to a questionnaire consisting of either radio buttons, slide-only sliders, click-and-slide sliders, or VAS. As actual use could also depend on the respondent’s device, a comparison between desktop computers (N = 1406), smartphones (N = 1372), and tablets (N = 1402) was made. As a second experimental factor, the response scale consisted of either 5, 7, or 11 options.
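For illustration only, a hypothetical sketch of how such a crossed random assignment could be drawn per respondent (the factor levels are taken from the abstract; the type and function names are invented for this example, and the device type is observed rather than assigned):

type ScaleType = "radio" | "slider-slide-only" | "slider-click-and-slide" | "vas";
type ScaleLength = 5 | 7 | 11;

// Draw one scale type and one scale length independently and uniformly.
function assignCondition(): { scaleType: ScaleType; scaleLength: ScaleLength } {
  const types: ScaleType[] = ["radio", "slider-slide-only", "slider-click-and-slide", "vas"];
  const lengths: ScaleLength[] = [5, 7, 11];
  return {
    scaleType: types[Math.floor(Math.random() * types.length)],
    scaleLength: lengths[Math.floor(Math.random() * lengths.length)],
  };
}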

Results: Item nonresponse was highest with sliders (7.3% and 7.7%), followed by VAS (5.5%), and lowest with radio buttons (3.8%). Respondents with low formal education in particular produced missing data with sliders that could only be slid but not clicked. No differences were found in mean ratings, response times, or evaluation of the questionnaire. The way sliders were operated did affect the results, whereas the number of response options had no systematic effect.

Added Value: Higher rates of item nonresponse argue against the use of slider scales, especially those that can only be operated by sliding. Overall, it is recommended to use radio buttons or VAS.

Dynamic Drag-and-Drop Rating Scales in Web Surveys

Tanja Kunz

Darmstadt University of Technology, Germany

Relevance & Research Question: In Web surveys, rating scales are typically presented in grid (or matrix) questions. Besides benefits such as the neat arrangement and efficient processing of rating scale items, grid questions carry an increased risk of respondents relying on cognitive shortcuts to reduce their cognitive and navigational effort. Even though a wide range of visual and dynamic features is available for the design of Web survey questions, new types of rating scale designs beyond grid questions with conventional radio buttons have rarely been used so far. In this study, two rating scale procedures using drag-and-drop as a more interactive data input method are applied: respondents drag the response options towards the rating scale items (“drag-response”) or, in reverse, the rating scale items towards the response options (“drag-item”). Both drag-and-drop scales aim at encouraging respondents to process rating scale items more attentively and carefully instead of simply relying on cognitive shortcuts.
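As a rough sketch of how such drag-and-drop input can be wired in a browser (illustrative only; the study’s actual implementation is not described here), the two variants differ only in which set of elements is draggable and which acts as a drop target:

// Generic wiring: make one set of elements draggable and the other a set of
// drop targets; "record" stores which element was dropped on which target.
function wireDragAndDrop(draggables: Iterable<HTMLElement>, targets: Iterable<HTMLElement>,
                         record: (targetId: string, draggedId: string) => void): void {
  for (const el of draggables) {
    el.draggable = true;
    el.addEventListener("dragstart", (e: DragEvent) => {
      e.dataTransfer?.setData("text/plain", el.id);
    });
  }
  for (const target of targets) {
    target.addEventListener("dragover", (e: DragEvent) => e.preventDefault()); // allow dropping
    target.addEventListener("drop", (e: DragEvent) => {
      e.preventDefault();
      record(target.id, e.dataTransfer?.getData("text/plain") ?? "");
    });
  }
}

// "drag-response": wireDragAndDrop(responseOptions, scaleItems, record)
// "drag-item":     wireDragAndDrop(scaleItems, responseOptions, record)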

Methods & Data: In two randomized field experimental Web surveys conducted among university applicants (n=5,977 and n=7,395), various between-subjects designs were implemented to assess how effectively the drag-response and drag-item scales, compared to a standard grid question, reduce respondents’ susceptibility to cognitive shortcuts, measured in terms of systematic response tendencies commonly encountered with rating scales such as nondifferentiation and primacy effects. Furthermore, item missing data and response times were examined.
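One common way to operationalize nondifferentiation, shown here only as an illustration and not necessarily the measure applied in this study, is the within-respondent standard deviation of the ratings given across the items of a grid:

// Within-respondent standard deviation of the ratings in one grid;
// 0 means identical answers to every item (straightlining).
function nondifferentiation(ratings: number[]): number {
  const answered = ratings.filter((r) => !Number.isNaN(r));
  if (answered.length < 2) return NaN; // too few answers to judge
  const mean = answered.reduce((sum, r) => sum + r, 0) / answered.length;
  const variance = answered.reduce((sum, r) => sum + (r - mean) ** 2, 0) / answered.length;
  return Math.sqrt(variance);
}

// A straightlining respondent vs. a differentiating one:
console.log(nondifferentiation([3, 3, 3, 3, 3])); // 0
console.log(nondifferentiation([1, 5, 2, 4, 3])); // about 1.41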

Results: Findings revealed that the quality of answers to rating scales may profit from the respondents’ higher attentiveness and carefulness with both drag-and-drop scales, which is reflected in a decreased susceptibility to systematic response tendencies. At the same time, however, results also showed that both drag-and-drop scales entail a greater respondent burden than conventional radio button scales, as indicated by more item missing data and longer response times.

Added Value: This study provides a comprehensive examination of the potentials and limitations of new drag-and-drop procedures as an interactive data input method for rating scales in Web surveys. In addition, findings contribute to a better understanding of the cognitive processing of rating scales in Web surveys.
Kunz-Dynamic Drag-and-Drop Rating Scales in Web Surveys-186.pdf

Positioning of Clarification Features in Open Frequency and Open Narrative Questions

Anke Metzler, Marek Fuchs

Darmstadt University of Technology, Germany

Relevance & Research Question: The lack of interviewer assistance adds to response burden in Web surveys and increases the risk that respondents misinterpret survey questions. Clarification features and instructions are seen as an effective means of improving question understanding and response behavior. However, clarification features often suffer from limited attention. Thus, they need to be positioned exactly where they are needed (Dillman, 2000). In the past it has been suggested to place clarification features after the question text. By contrast, recent findings from an eye-tracking study indicated that clarification features concerning question meaning are particularly noticed when they are presented before the question text, whereas formatting instructions should be placed after the response options (Kunz & Fuchs, 2012). This study aims to answer the question of whether the optimal positioning of clarification features depends on the cognitive stage addressed by the instructions.

Methods & Data: In four randomized field experimental Web surveys, a between-subjects design was implemented to test the effectiveness of three different positions of clarification features in open frequency and open narrative questions: after the question text, before the question text, and after the response options. Clarification features concerning the meaning of the question, the retrieval of a response, and the formatting of the response were tested.
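A minimal sketch of the three experimental positions, assuming a simple question container with a question text element and a response options block (the structure and all names are invented for this illustration, not the authors’ markup):

type ClarificationPosition = "before-question" | "after-question" | "after-response-options";

// Inserts the clarification element at one of the three positions compared in
// the experiments; "question" is the container, "questionText" and
// "responseOptions" are assumed to be its children.
function placeClarification(question: HTMLElement, questionText: HTMLElement,
                            responseOptions: HTMLElement, clarification: HTMLElement,
                            position: ClarificationPosition): void {
  switch (position) {
    case "before-question":
      question.insertBefore(clarification, questionText);
      break;
    case "after-question":
      questionText.insertAdjacentElement("afterend", clarification);
      break;
    case "after-response-options":
      responseOptions.insertAdjacentElement("afterend", clarification);
      break;
  }
}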

Results: Results indicate that clarification features, regardless of the processing stage of the question-answer process they address, are best positioned after the question text or after the response options. Instructions placed before the question stem yielded the smallest effect.

Added Value: The use of clarification features in Web surveys has a positive effect on survey responses and helps improve data quality. The optimal position of instructions does not depend on the cognitive stage of the question-answer process addressed. Survey researchers should avoid placing clarification features before the question, since this position seems to be least efficient. However, results also indicate that clarification features positioned after the input field are similarly effective to clarification features positioned after the question stem.

Anke-Positioning of Clarification Features in Open Frequency and Open Narrative Questions-155.pdf


 
Conference: GOR 15