Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
A02: New Technologies and Human-like Interviewing
Time:
Thursday, 07/Mar/2019:
10:45 - 11:45

Session Chair: Oliver Tabino, Q Agentur für Forschung GmbH, Germany
Location: Room Z28
TH Köln – University of Applied Sciences

Presentations

Adapting surveys to the modern world: comparing a researchmessenger design to a regular responsive design for online surveys

Vera Toepoel, Peter Lugtig, Marieke Haan, Bella Struminskaya, Anne Elevelt

Utrecht University, The Netherlands

Relevance & Research Question: In recent years, surveys have been adapted to mobile devices. This has resulted in mobile-friendly designs, where surveys are responsive to the device being used. How mobile-friendly a survey is depends largely on the design of the survey software (e.g., how grid questions are handled, paging versus scrolling designs, visibility, tile design, etc.) and on the length of the survey. An innovative way to administer questions is via a researchmessenger, a WhatsApp-like survey tool that communicates with respondents the way one does via WhatsApp (see www.researchmessenger.com). In this study we compare a researchmessenger layout to a responsive survey layout in order to investigate whether the researchmessenger provides results similar to a responsive survey layout, and whether it leads to greater respondent involvement and satisfaction.

Methods & Data: The experiment was carried out in 2018 using panel members from Amazon Mechanical Turk in the United States. Respondents were randomly assigned to either the researchmessenger survey or the regular responsive survey. In addition, we randomly varied the type of questions (long answer scale, short answer scale, open-ended). We used four blocks of questions, covering politics, news, sports, and health. To investigate question order effects, and possible respondent fatigue depending on the type of survey, we randomly ordered the blocks of questions. In total, 1,728 respondents completed the survey.
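To make the design concrete, here is a minimal Python sketch of the two randomizations described above. The condition labels and function are invented for illustration; the abstract does not describe the study's actual implementation.

```python
import random

# Hypothetical condition labels for the two randomizations described
# in the abstract: layout assignment and question-block ordering.
LAYOUTS = ["researchmessenger", "responsive"]
SCALE_TYPES = ["long_scale", "short_scale", "open_ended"]
BLOCKS = ["politics", "news", "sports", "health"]

def randomize_respondent(rng: random.Random) -> dict:
    """Draw an experimental condition for one respondent."""
    return {
        "layout": rng.choice(LAYOUTS),
        "scale_type": rng.choice(SCALE_TYPES),
        "block_order": rng.sample(BLOCKS, k=len(BLOCKS)),
    }

rng = random.Random(2018)  # fixed seed for a reproducible illustration
print(randomize_respondent(rng))
```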

Results: We are currently analyzing the results. We will investigate response quality (e.g., response distributions/mean scores, number of check-all-that-apply selections, number of don't knows, item missingness and dropout, use of the back button), survey duration, and respondents' evaluation of the questionnaire. Because respondents could self-select into a particular device, we will also compare results obtained across devices. We will show a video of the layout of both the researchmessenger and the regular survey.
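As an illustration of the kind of response-quality indicators listed above, the following pandas sketch computes a few of them per experimental condition. The data and column names are toy assumptions, not the study's.

```python
import pandas as pd

# Toy respondent-level data with hypothetical column names;
# the study's actual variables and indicators may differ.
df = pd.DataFrame({
    "layout": ["researchmessenger", "responsive", "responsive"],
    "q_politics": [4.0, None, 5.0],   # None = item missing
    "dont_know":  [0, 1, 0],          # count of don't-know answers
    "duration_s": [312, 405, 298],
})

quality = df.groupby("layout").agg(
    mean_score=("q_politics", "mean"),
    item_missing_rate=("q_politics", lambda s: s.isna().mean()),
    dont_know_rate=("dont_know", "mean"),
    median_duration_s=("duration_s", "median"),
)
print(quality)
```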

Added Value: The experiment identifies recommendable design characteristics for online surveys at a time when survey practitioners need to rethink their survey designs, since more and more surveys are completed on mobile phones and response rates are declining.


Toepoel-Adapting surveys to the modern world-183.pdf

Voice Recording in Mobile Web Surveys - Evidence From an Experiment on Open-Ended Responses to the "Final Comment"

Konstantin Leonardo Gavras

University of Mannheim, Germany

Relevance & Research Question: In times of increased usage of mobile devices, the user experience has changed dramatically. Interacting with IoT devices via voice prompts has become common among wide shares of the public. However, survey research has not yet acknowledged these changes in online mobile behavior (Singer/Couper 2017). Most mobile web surveys only allow respondents to answer open-ended questions by typing them in manually. To realize the full potential of mobile devices, mobile web surveys should allow respondents to record their answers vocally. Using an experiment on mobile devices, I show that voice recording has potential for recruiting new respondents to open-ended questions, but slightly alters respondents' behavior.

Methods & Data: The experiment was part of the GLES 2018 pre-test, with 1,566 respondents on mobile devices in Germany. Respondents were required to take the survey on mobile devices to avoid self-selection. To avoid ceiling effects, I attached the experiment to the final comment of the survey, requiring respondents to comment either manually or via voice recording.

Results: The results of this experiment provide evidence that voice recording techniques allow survey researchers to recruit new target groups for open-ended questions in mobile web surveys. However, this innovation is accompanied by minor behavioral differences in response styles. Respondents who are older, have a lower level of education, and are less politically interested are more likely to choose voice recording over writing down open-ended responses. Furthermore, I am able to show that vocally recorded responses are on average friendlier than written comments, providing first evidence that social desirability bias might increase in this survey mode.
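An uptake result of this kind would typically come from a logistic regression of voice-recording use on respondent characteristics. The sketch below simulates such data and fits that model; the variable names, coefficients, and statsmodels workflow are illustrative assumptions, not the author's analysis (only the directions of the simulated effects follow the abstract).

```python
import numpy as np
import statsmodels.api as sm

# Simulated respondent traits (hypothetical, not GLES data).
rng = np.random.default_rng(0)
n = 500
age = rng.integers(18, 80, n)
low_education = rng.integers(0, 2, n)
political_interest = rng.integers(1, 6, n)

# Outcome simulated to loosely match the reported directions:
# older, lower-educated, less interested -> more voice recording.
logit = -3 + 0.03 * age + 0.5 * low_education - 0.3 * political_interest
used_voice = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([age, low_education, political_interest]))
model = sm.Logit(used_voice.astype(int), X).fit(disp=0)
print(model.summary())
```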

Added Value: Using a large-scale experiment, I was able to show that voice recording is a feasible alternative for gathering responses to open-ended questions in mobile web surveys. Besides increasing coverage in general, voice recording can motivate respondents who are underrepresented in open-ended questions to provide answers. Combined with automated transcription tools, voice recording allows researchers to ask additional open-ended questions in mobile web surveys, realizing the full potential of mobile devices for survey research.
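The abstract does not name a transcription tool. As one hedged illustration, the open-source SpeechRecognition package for Python can turn a recorded answer into text; the file name and language setting below are assumptions for a German-language survey.

```python
import speech_recognition as sr

# Stand-in for whatever transcription tool the study actually used:
# SpeechRecognition with Google's free web speech API.
recognizer = sr.Recognizer()

def transcribe(wav_path: str, language: str = "de-DE") -> str:
    """Transcribe a WAV file of a spoken open-ended response."""
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)  # read the whole file
    try:
        return recognizer.recognize_google(audio, language=language)
    except sr.UnknownValueError:
        return ""  # speech was unintelligible

# Hypothetical file name for one recorded final comment:
# print(transcribe("final_comment_0001.wav"))
```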


Gavras-Voice Recording in Mobile Web Surveys-217.pdf

How well is remote webcam eye tracking working? - An empirical validation of Sticky and Eyes Decide against Tobii

Michael Wörmann

Facit Digital GmbH, Germany

Relevance & Research Question:

Keywords: evaluation of webcam eye tracking; Sticky; Eyes Decide; Tobii

Webcam-based remote eye tracking has been on the market for some time now, but is still viewed critically by many, as the accuracy of results measured by a webcam appears questionable. We evaluated two webcam eye tracking solutions, Sticky and Eyes Decide, and compared them to Tobii, an established offline eye tracking solution. In addition to comparing the eye tracking results, we also compared the handling and applicability of the two tools.

Methods & Data:

Keywords: UX laboratory setting; standardized conditions; specification of scenarios; moderated test; HD webcam; Tobii T-2-60

We conducted 30 individual interviews in the Facit Digital UX lab in Munich. Three independent samples of 10 participants each were tested with either Sticky, Eyes Decide, or Tobii. Two German websites (Fressnapf / Capri Sun) were presented with suitable use cases. Heatmaps and selected areas of interest were compared, and metrics were compared using t-tests.
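Since metrics from independent samples of 10 were compared using t-tests, a comparison along these lines can be sketched with SciPy. The fixation counts below are toy numbers, not the study's data.

```python
from scipy import stats

# Hypothetical fixation counts on one area of interest for the
# Tobii reference sample versus the Sticky sample (n = 10 each).
tobii_fixations  = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]
sticky_fixations = [10, 13, 8, 11, 9, 12, 9, 14, 10, 12]

# Welch's t-test for two independent samples; the abstract does not
# say whether equal variances were assumed, so we do not assume them.
t_stat, p_value = stats.ttest_ind(tobii_fixations, sticky_fixations,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```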

Results:

Keywords: limited field of application; low fit of heatmap data; good fit of areas of interest

The heatmaps from Eyes Decide and Sticky showed a poor fit with our Tobii reference. Moreover, the areas participants reported having looked at only partly matched the recorded heatmap data. Numeric values matched somewhat better with Tobii for both tools.

Both Sticky and Eyes Decide have constraints that limit their field of application. For Sticky, the high share of unusable results and the missing raw data for individual participants allow only limited results and increase the recruiting effort. Eyes Decide offers participant instructions in English only, which limits the possible target group, and has a tedious test setup.

Added Value:

Keywords: empirical analysis; independent; classification; areas of application

The study offers an empirical analysis of the advantages and disadvantages of two online eye tracking solutions, in terms of test creation, test conduct, and results, in comparison to Tobii. The study also allows a classification of the tools and shows suitable areas of application.


Wörmann-How well is remote webcam eye tracking working-113.pptx