
General Online Research 2011

March 14-16, 2011, Heinrich-Heine University of Düsseldorf


Conference Agenda

Overview and details of the sessions of this conference. Please select a date or room to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session

A4: Visual & Interaction Design

Time: Tuesday, 15/Mar/2011: 4:00pm - 5:00pm
Session Chair: Frederik Funke

Presentations

Should we use the progress bar in online surveys? A meta-analysis of experiments manipulating progress indicators

Mario Callegaro1, Yongwei Yang2, Ana Villar3

1Google, United States of America; 2Gallup Inc, United States of America; 3Stanford University, United States of America

a) Relevance & Research question:

Although the use of a progress bar seems to be standard in many online surveys, there is no consensus in the literature regarding its effect on survey drop-off rates. Researchers hope that a progress bar helps reduce drop-off rates by giving respondents a sense of the survey's length and allowing them to monitor their progress through it.

b) Methods & Data:

In this meta-analysis we analyzed 27 randomized experiments that compared the drop-off rate of an experimental group that completed an online survey with a progress bar shown to the drop-off rate of a control group to whom the progress bar was not shown. In all studies, drop-offs were defined as respondents who did not fully complete the survey. Three types of bars were analyzed: a) linear or constant, b) fast first then slow, and c) slow first then fast.

c) Results:

Random effects analysis was used to compute odds ratios (OR) for each study. Because the dependent variable was drop-off rate, an OR greater than 1 indicates that the progress bar group had a higher drop-off rate, while an OR lower than 1 indicates that the progress bar group had a lower drop-off rate. The OR for the 13 studies using a constant progress bar is 1.065 (p=0.304). The OR for the 7 studies using a fast-to-slow progress bar is 0.835 (p=0.131), whereas the OR for the 7 studies presenting the slow-to-fast progress bar is 1.564 (p=0.002). These preliminary results suggest that, contrary to widespread expectations, a constant progress indicator does not help reduce drop-off rates, while there is some indication that the fast-to-slow indicator does. Furthermore, the slow-to-fast bar increases drop-off rates compared to not showing a progress bar. We do not recommend the fast-to-slow progress indicator because it is ethically questionable and against the AAPOR and ESOMAR codes.
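For readers less familiar with this metric, the following is a minimal sketch (in Python, with made-up counts not taken from any of the 27 studies) of how a single experiment's odds ratio is formed from its drop-off counts and why OR > 1 corresponds to a higher drop-off rate in the progress-bar group:

# Hypothetical drop-off counts for a single experiment (not from any of the 27 studies)
bar_drop, bar_complete = 120, 880      # progress-bar group
ctrl_drop, ctrl_complete = 100, 900    # control group without a progress bar

# Odds of dropping off in each group
odds_bar = bar_drop / bar_complete
odds_ctrl = ctrl_drop / ctrl_complete

# OR > 1: the progress-bar group dropped off more often; OR < 1: less often
odds_ratio = odds_bar / odds_ctrl
print(f"OR = {odds_ratio:.3f}")        # about 1.23 for these made-up counts

The meta-analysis pools such study-level odds ratios under a random-effects model rather than averaging raw drop-off rates.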

d) Added value:

To our knowledge this is the first meta-analysis on the topic. Additional literature searches will be performed, and we are awaiting data from some authors to add to the study.

Callegaro-Should we use the progress bar in online surveys A meta-analysis-151.pdf

Slider Scales Causing Serious Problems With Less Educated Respondents

Frederik Funke1, Ulf-Dietrich Reips2, Randal K. Thomas3

1,2Universidad de Deusto and IKERBASQUE (Basque Science Foundation), Spain; 3ICF International, USA

(a) Relevance & Research Question:

Rating scales can considerably affect data quality regarding mean ratings, distribution of answers, response time, and item nonresponse (e.g., Couper, Conrad, & Tourangeau, 2007; Healey, 2007; Heerwegh & Loosveldt, 2002; Krosnick, 1999; Krosnick & Fabrigar, 1997). However, because implementation is easy, designers of Web surveys are tempted to use special rating scales without knowing much about their impact on data quality. This presentation focuses on how slider scales may harm survey data. Nevertheless, changes in rating scales are sometimes inevitable, especially when scrolling should be avoided on Internet devices with an upright display (e.g., smartphones; for problems see Couper, Tourangeau, Conrad, & Crawford, 2004).

(b) Methods & Data:

In a 2 x 2 Web experiment, the type of rating scale (5-point Java-based slider versus 5-point HTML radio-button scale) and the spatial orientation on the screen were manipulated. On a single Web page, respondents (N = 779) had to evaluate two product concepts, counterbalanced for order. For analysis, respondents' reported education was recoded into two groups: below college degree and at least college degree (e.g., B.A. or B.S.).

(c) Results:

Overall, break-off was significantly higher with slider scales than with radio-button scales, chi2(1, N = 779) = 12.81, p < .001, odds ratio = 6.92. Whereas respondents in the group with low education had problems with slider scales, chi2(1, N = 451) = 5.89, p = .018, odds ratio = 5.45, no difference in break-off was observed in the group of respondents with a high formal education, chi2(1, N = 321) = 1.66, p = 1.000. Additionally, task duration was considerably higher with slider scales, F(1, 703) = 638.23, p < .001, eta2 = .48. Furthermore, fewer respondents chose the middle category with slider scales. Spatial orientation of the rating scale had no significant influence on break-off or the distribution of values.
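As an illustration of the kind of comparison reported here, a minimal sketch (in Python, using illustrative counts rather than the study's actual data) of a chi-square test and odds ratio for break-off by scale type might look like this:

import numpy as np
from scipy.stats import chi2_contingency

# Illustrative break-off counts (not the actual study data):
# rows = scale type (slider / radio buttons), columns = broke off / finished
table = np.array([[28, 362],   # slider scale
                  [ 4, 385]])  # radio-button scale

chi2, p, dof, expected = chi2_contingency(table, correction=False)

# Odds ratio of breaking off with sliders relative to radio buttons
odds_ratio = (table[0, 0] / table[0, 1]) / (table[1, 0] / table[1, 1])
print(f"chi2({dof}, N = {table.sum()}) = {chi2:.2f}, p = {p:.4f}, OR = {odds_ratio:.2f}")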

(d) Added Value:

The interaction between rating scale and educational level is a serious argument against the use of Java-based slider scales in general. Overall, it seems that horizontal and vertical layouts can be used interchangeably.


Drop-out rates during completion of an occupation search tree in web-surveys

Kea Tijdens

University of Amsterdam, The Netherlands

Occupation is a key variable in socio-economic research and is predominantly asked using an open response format, followed by field- or office-coding. Web surveys are at a disadvantage here because unidentifiable or overly aggregated responses cannot be corrected during survey completion. Two solutions can improve respondents' self-identification: online recoding of text, or a search tree linked to an occupational database. The latter is commonly used by online job sites. Statistical agencies consider the measurement of occupation in web surveys risky.

The paper uses the 2010q2 data for the UK, Belgium, and the Netherlands (16,680 observations) from the continuous, multi-country WageIndicator web survey on work and wages, which employs a 3-tier search tree with a choice set of approximately 1,600 occupational titles. This paper investigates:

• What are dropout rates during search tree completion?

• What is the completion time for completed and non-completed search trees?

• How often do respondents use the open-ended question following the search tree to further detail their occupation?

• Are dropout rates during search tree completion explained by the length of search paths or individual characteristics?

• Does search tree completion time depend on characteristics of the survey, the search tree, or the respondent's education?

A new dataset was created, consisting of the survey data, the time stamps, and data on the length of the search tree (words and characters). The findings show that drop-out rates for the search tree are approximately 10%, taking into account an overall drop-out rate of 50%. The base model indeed reveals that the more characters respondents have to read, the higher the likelihood of drop-out, though the effect is larger and significant for the number of characters in the 1st tier compared to the 2nd tier. Drop-out chances in tier 1 are lower for employees than for employment status groups with less clearly defined occupations, such as the unemployed, students, or housewives. No significant relationship is found between the number of characters in tier 1 and the time needed to complete tier 1, but for tiers 2 and 3 the number of characters and the respective completion times are positively related. The text data were analysed separately, revealing that respondents tend to report a more disaggregated job title.
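As a rough illustration of how such a base model could be specified, the following sketch fits a logistic regression of drop-out on tier-1 text length; the variable names and simulated data are assumptions for illustration only and are not the WageIndicator data or the author's actual model:

import numpy as np
import statsmodels.api as sm

# Simulated illustration (not the WageIndicator data): one row per respondent
rng = np.random.default_rng(0)
n_chars_tier1 = rng.integers(200, 2000, size=500)              # characters shown in tier 1
dropped_out = rng.binomial(1, 0.05 + 0.00005 * n_chars_tier1)  # 1 = abandoned the search tree

# Logistic regression of drop-out on tier-1 text length
X = sm.add_constant(n_chars_tier1.astype(float))
model = sm.Logit(dropped_out, X).fit(disp=False)
print(model.summary())

A positive, significant coefficient on the text-length term would correspond to the reported pattern that longer tier-1 text goes with a higher likelihood of drop-out.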

Tijdens-Drop-out rates during completion of an occupation search tree-138.pdf

 