Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session: A4.2: Scale and Question Format
Time: Friday, 10 September 2021, 11:00–12:00 CEST

Session Chair: Bella Struminskaya, Utrecht University, The Netherlands

Presentations

Investigating Direction Effects Across Rating Scales with Five and Seven Points in a Probability-based Online Panel

Jan Karem Höhne1, Dagmar Krebs2

1University of Duisburg-Essen, Germany; 2University of Gießen, Germany

Relevance & Research Question: In social science research, survey questions with rating scales are a common method for measuring respondents’ attitudes and opinions. Compared to other rating scale characteristics, rating scale direction and its effects on response behavior have received little attention in previous research. In addition, a large part of the research on scale direction effects has focused solely on differences at the observational level. To contribute to the current state of research, we investigate the size of scale direction effects across five- and seven-point rating scales by analyzing observed and latent response distributions. We also investigate latent means and the equidistance between scale points.

Methods & Data: For this purpose, we conducted a survey experiment in the probability-based German Internet Panel (N = 4,676) in July 2019 and randomly assigned respondents to one of four experimental groups defined by scale direction (decremental or incremental) and scale length (five- or seven-point). All four experimental groups received identical questions on achievement motivation with end-labeled, vertically aligned scales and no numeric values. Questions were presented one per page, i.e., a single-question presentation.
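As a minimal sketch (not the authors' actual implementation), the 2 x 2 randomization described above could look as follows; all names and the seed are illustrative.

```python
import random
from itertools import product

# The 2 x 2 design: scale direction (decremental vs. incremental) crossed
# with scale length (five- vs. seven-point) yields four experimental cells.
DIRECTIONS = ("decremental", "incremental")
LENGTHS = (5, 7)
GROUPS = list(product(DIRECTIONS, LENGTHS))  # four cells

def assign_group(rng: random.Random) -> tuple[str, int]:
    """Assign a respondent to one of the four cells with equal probability."""
    return rng.choice(GROUPS)

rng = random.Random(2019)  # fixed seed for reproducibility (illustrative)
assignments = [assign_group(rng) for _ in range(4676)]  # N = 4,676 respondents
```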

Results: The results reveal substantial direction differences between five- and seven-point rating scales. Five-point scales seem to be relatively robust against scale direction effects, whereas seven-point scales seem to be prone to them. These findings are supported by both the observed and latent response distributions. However, equidistance between scale points is (somewhat) better for seven-point than for five-point scales.
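For illustration, a comparison at the observational level (not the authors' latent-variable analysis) could test whether the response distribution differs by scale direction; the counts below are invented purely for demonstration.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical response counts for a seven-point item under the two
# direction conditions (made-up numbers, for demonstration only).
decremental = np.array([120, 180, 260, 310, 240, 150, 90])
incremental = np.array([90, 150, 230, 320, 270, 170, 120])

# Chi-square test of independence between direction and response category.
chi2, p, dof, _ = chi2_contingency(np.vstack([decremental, incremental]))
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```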

Added Value: Our results indicate that researchers should keep the direction of rating scales in mind because it can affect respondents’ response behavior. The same applies to scale length. Overall, there is a trade-off between direction effects and equidistance when it comes to five- and seven-point rating scales.



Serious Tinder Research: Click vs. Swipe mechanism in mobile implicit research

Holger Lütters1, Steffen Schmidt2, Malte Friedrich-Freksa3, Oskar Küsgen4

1HTW Berlin, Germany; 2LINK Marketing Services AG, Switzerland; 3GapFish GmbH, Germany; 4pangea labs GmbH, Germany

Relevance & Research Question:

Implicit Association Testing (IAT) in the tradition of Greenwald et al. has been established for decades. The first experimental designs, which used the keyboard to track respondents' answers, are still in use (see Project Implicit, implicit.harvard.edu). Some companies transferred the mechanism from the desktop to the mobile environment without specifically adapting it to the opportunities of touch-screen interaction.
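For background, the classic keyboard IAT is conventionally scored with the improved D-score of Greenwald, Nosek & Banaji (2003). The sketch below shows a simplified version of that scoring; it is context for the paradigm, not the authors' SAT scoring.

```python
import numpy as np

def d_score(compatible_ms: np.ndarray, incompatible_ms: np.ndarray) -> float:
    """Simplified IAT D-score after Greenwald, Nosek & Banaji (2003):
    trials slower than 10,000 ms are dropped, and the difference in block
    mean latencies is scaled by the inclusive standard deviation of all
    retained trials. (Sketch only; the full algorithm also handles error
    trials and practice blocks.)"""
    comp = compatible_ms[compatible_ms <= 10_000]
    incomp = incompatible_ms[incompatible_ms <= 10_000]
    pooled_sd = np.concatenate([comp, incomp]).std(ddof=1)
    return (incomp.mean() - comp.mean()) / pooled_sd
```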

The idea of this new approach is to combine the established swiping mechanism, inspired by the dating app Tinder, with a background response-time measurement as a means of implicit measurement in brand research.

Method & Data:

C.G. Jung's work on archetypes serves as a framework for measuring brand relationship strength toward several pharmaceutical vaccine brands related to the fight against COVID-19 at the implicit level, using an implicit single association test (SAT).

The online representative sample (n > 1,000), drawn from a professional panel in Germany, allows the manipulation of several experimental conditions in the mobile-only survey approach.

The data collection approach compares the established mechanism of clicking with the approach of swiping answers (Tinder-style answers). In terms of content, the study deals with COVID-19 vaccine brands.
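A hypothetical per-trial paradata record for such a click-vs.-swipe comparison might look as follows; the field names are illustrative, not the authors' actual data model.

```python
from dataclasses import dataclass

@dataclass
class TrialRecord:
    respondent_id: str
    brand: str        # vaccine brand shown as stimulus
    archetype: str    # archetype attribute being associated
    mechanism: str    # "click" or "swipe" (randomized condition)
    response: bool    # association endorsed or rejected
    latency_ms: int   # background response-time measurement

record = TrialRecord("r001", "BrandA", "Hero", "swipe", True, 684)
```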

Results:

The analysis shows differences in the answer patterns of the two technically distinct approaches. The authors discuss the validity of data collection on mobile devices. Additionally, paradata on respondents' behaviour are discussed, as the swipe approach may be a good option to keep respondents' motivation up during an intense interview, resulting in lower cost and effort for the digital researcher.

Added Value:

The study is meant to inspire researchers to adapt their established methodological settings to the world of mobile research. The very serious measurement approach even turns out to be fun for some of the respondents. In an overfished respondent environment, this seems to open a door to more sustainable research with less fatigue and a higher willingness to participate. The contribution shows that Serious Tinder Research is more than just a joke (even though it started as a fun experiment).



The effects of the number of items per screen in mixed-device web surveys

Tobias Baier, Marek Fuchs

TU Darmstadt, Germany

Background:

When applying multi-item rating scales in web surveys, a key design choice is the number of items presented on a single screen. Research suggests that it may be preferable to restrict the number of items per screen and instead increase the number of pages (Grady, Greenspan & Liu, 2018; Roßmann, Gummer & Silber, 2017; Toepoel et al., 2009). In mixed-device web surveys, multi-item rating scales are typically presented in a matrix format for large screens such as PCs and in a vertical item-by-item format for small screens such as smartphones (Revilla, Toninelli & Ochoa, 2017). For PC respondents, splitting a matrix over several pages is expected to counteract cognitive shortcuts (satisficing behaviour) because the visual load is lower than with one large matrix on a single screen. Smartphone respondents who receive the item-by-item format do not experience a high visual load even if all items are on a single screen, as only a few items are visible at the same time. However, they have to scroll more extensively, which is expected to induce more fatigue than presenting fewer items on more screens.

Method:

To investigate the effects of the number of items per screen, we will field a survey among members of the non-probability online panel of respondi in the spring of 2021. Respondents will be randomly assigned to a device type to use for survey completion and to one of three experimental conditions that vary the presentation of several rating scales.

Results:

Results will be reported for response times, drop-out rates, item missing data, straightlining, and non-differentiation.
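Two of these outcome measures are often operationalized as below; this is a hedged sketch of common conventions, and the authors' exact operationalizations may differ.

```python
import numpy as np

def straightlining(responses: np.ndarray) -> float:
    """Share of respondents giving the identical answer to every item in a
    grid (rows = respondents, columns = items)."""
    return float(np.mean(np.all(responses == responses[:, [0]], axis=1)))

def nondifferentiation(responses: np.ndarray) -> np.ndarray:
    """Per-respondent standard deviation across items; lower values indicate
    less differentiation between items."""
    return responses.std(axis=1, ddof=1)

answers = np.array([[3, 3, 3, 3], [1, 4, 2, 5], [2, 2, 3, 2]])
print(straightlining(answers))      # 1/3 of rows are straightlined
print(nondifferentiation(answers))  # 0.0 for the first respondent
```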

Added value:

This paper contributes to research on the optimal presentation of multi-item rating scales in mixed-device web surveys. The results will show whether decreasing the number of items per screen at the expense of more survey pages is beneficial for both the matrix format on a PC and the item-by-item format on a smartphone.