Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Poster Session I: Online Methodology and Applied Online Research
Thursday, 19/Mar/2015:
14:00 - 15:30


Virtual Teams at Work: Do Attractive Interfaces Improve Performance?

Maria Douneva1, Russell Haines2, Meinald T. Thielsch1

1University of Münster, Germany; 2Old Dominion University, USA

Relevance and Research Question:

Online research has acknowledged the importance of website and interface aesthetics by examining their strong influence on users’ attitudes and reactions. However, evidence for effects on task performance is limited and mixed. By manipulating chat background colour in a within-subjects design, this study investigates the effects of an unattractive green vs. an attractive blue chat interface on task performance in a web-based collaborative setting.

Methods and Data:

During the study, participants communicated via chat and email using a custom-made browser application. Each of them performed the simulated role of a nurse, a doctor, a laboratory technician, or a specialist as a member of an emergency response team that had to diagnose patients within a given time. The data of 184 participants (53.3% male, mean age = 21 years, SD = 4.8) during three rounds of the same task were analysed for effects of the colour manipulation on mood, affect, and team performance (measured as the number of diagnosed patients after each round). Chat background colour was randomly varied in rounds 2 and 3.


Results:

Although a t-test shows that participants clearly preferred the attractive over the unattractive version (p < .001, d = 0.5), analyses of variance reveal that neither their attitudes (mood or affect) nor their performance at the group level is significantly influenced by the colour manipulation. Still, participants in the unattractive colour condition have the lowest mood and affect scores. Likewise, there is a tendency towards better performance after seeing the attractive colour in round 2 (d = .08) and in round 3 (d = .12).
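As a back-of-the-envelope illustration of the statistics reported above, the sketch below computes a paired t-test and Cohen's d for a within-subjects preference comparison. The ten rating pairs are invented for illustration; the study's actual data are not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical preference ratings (1-7) for the attractive vs. the
# unattractive interface; each row position is one participant.
attractive = np.array([5, 6, 5, 7, 6, 5, 6, 7, 5, 6], dtype=float)
unattractive = np.array([4, 5, 5, 6, 5, 4, 6, 6, 4, 5], dtype=float)

# Paired (within-subjects) t-test on the same participants
t, p = stats.ttest_rel(attractive, unattractive)

# Cohen's d for paired data: mean difference / SD of the differences
diff = attractive - unattractive
d = diff.mean() / diff.std(ddof=1)
print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```

With real data the same two lines of `scipy.stats` output would give the preference test (p < .001) and the effect size (d = 0.5) quoted in the abstract.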

Added Value:

While one can find several effects of aesthetics on attitudes and online user reaction measurements in the literature, none have directly examined the effects of aesthetics on performance, especially in a virtual team setting. The results here challenge the assumption that “attractive things work better”, which also proposes that mood mediates between aesthetics and performance. Additionally, our study offers future directions for research on the link between web aesthetics and performance, namely varying the performance criterion, the nature of the task, and the aesthetics manipulation used.
Douneva-Virtual Teams at Work-236.pdf

Connecting Offline and Online Surveys: Reconsidering Respondent Determinants in Attribute Bias

Sae Okura1, Yohei Kobashi1, Leslie Tkach-Kawasaki1, Manuela Hartwig2, Yutaka Tsujinaka1

1University of Tsukuba, Japan; 2Free University of Berlin, Germany

Relevance & Research Question

While internet-based surveys are considered methodologically biased (Rasmussen 2008), previous studies suggest that bias involving political attitudes can be corrected partially through weighting based on respondent attributes (Taniguchi & Taniguchi 2008). However, the relationship between bias deriving from online surveys and attribute bias is unclear, giving rise to the question: Does mitigation accidentally arise due to respondent determinants in attribute bias?

To investigate further, in considering the possible universality of bias produced by online surveys, we demonstrate that bias could originate in the fact that respondents use online surveys (as Internet users), and that reassessment of attributes, including educational level, reveals the possibility of mitigation.

Method & Data

Our data are based on (1) a 2014 online survey administered to respondents in Germany, Japan, the U.S., and Korea; and (2) a 2013 door-to-door survey undertaken in Japan, wherein most survey items were identical to the 2014 online format. We first compared the results of the survey with census data from each country to investigate the universality of the resulting bias. By correcting for inherent bias among Internet users, drawing on each country’s current situation, we then tested the validity of bias correction in terms of political party support. Focusing on the combined results for the two Japanese surveys, we obtained our results by applying a propensity-score adjustment.
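The correction idea can be illustrated in miniature. The sketch below uses simple post-stratification weights, a simpler relative of the propensity-score adjustment the authors apply, to align an online sample's education shares with census benchmarks. The two college shares are the Japan figures quoted in the abstract; everything else is hypothetical.

```python
# Shares of college graduates: 66.0% among online respondents vs.
# 37.3% in the general public (Japan figures from the abstract).
sample_share = {"college": 0.660, "non_college": 0.340}  # online respondents
census_share = {"college": 0.373, "non_college": 0.627}  # general public

# Weight for each education cell: population share / sample share.
# Over-represented college graduates are down-weighted, under-represented
# non-graduates are up-weighted.
weights = {cell: census_share[cell] / sample_share[cell] for cell in sample_share}
print(weights)
```

A full propensity-score version would instead model the probability of being an online respondent from several attributes at once and weight by the inverse of that probability.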


Results

According to our results, there are intrinsically common gaps between online respondents and the general public in terms of education (i.e., in Japan, college graduates at 66.0% versus 37.3%, respectively) and job status in the four countries. Consistent with previous research in Japan, correcting the bias for education is critical, and this tendency is supported. Next, the correlation between education and political attitudes, and possible corrections for other attributes, will be statistically investigated [Reference: Table 1].

Added Value

The strength of our research lies in our approach to detecting universal bias and its contributing mechanism through our analysis of highly comparable cross-national data sets. Our results clearly suggest that the universal bias in online surveys is based on inherent social features, including education, in each advanced nation.
Okura-Connecting Offline and Online Surveys-196.pdf

Higher response rates at the expense of validity? Consequences of the implementation of the ‘forced response’ option within online surveys

Jean Philippe Décieux2, Alexandra Mergener1, Kristina Neufang3, Philipp Sischka2

1Federal Institute for Vocational Education and Training (BIBB), Germany; 2University of Luxembourg, Luxembourg; 3University of Trier, Germany

Due to the low cost and the ability to reach thousands of people in a short amount of time, online surveys have become well established as a source of data for research. As a result, many non-professionals gather their data through online questionnaires, which are often of low quality due to having been operationalised poorly (Jacob/Heinz/Décieux 2013; Schnell/Hill/Esser 2011).

A popular example of this is the ‘forced response’ option, whose impact is analysed within this research project.

The ‘forced response’ option is commonly described as a way to force the respondent to give an answer to each question that is asked. In most online survey software, it is easily enabled via a checkbox.


There has been a tremendous increase in the use of this option; however, those deploying it are often not aware of the possible consequences. In software manuals, this option is praised as a strategy that significantly reduces item non-response.

In contrast, research studies raise many doubts about this strategy (Kaczmirek 2005; Peytchev/Crawford 2005; Dillman/Smyth/Christian 2009; Schnell/Hill/Esser 2011; Jacob/Heinz/Décieux 2013). These doubts are based on the assumption that respondents typically have plausible reasons for not answering a question (such as not understanding the question, the absence of an appropriate category, or personal reasons, e.g. privacy).

Research Question:

Our thesis is that forcing respondents to select an answer may lead to two outcomes:
- increasing unit non-response (increased dropout rates)
- decreasing validity of the answers (lying or random answers).

Methods and Data:

To analyse the consequences of implementing the ‘forced response’ option, we use split-ballot field experiments. Our analysis focuses especially on dropout rates and response behaviour. Our first split-ballot experiment was carried out in July 2014 (n = 1056), and a second experiment is planned for February 2015, so that we will be able to present results based on strong data evidence.

First results:

If respondents are forced to answer each question, they will
- cancel the study earlier and
- choose the response category “No” more often (for sensitive issues).
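A split-ballot dropout analysis like the one described above can be sketched as a 2×2 chi-square test. The counts below are invented (the experiment's raw data are not given here); only the overall n = 1056 matches the abstract.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical split-ballot outcome, n = 1056 split evenly.
# Rows: condition (forced response, control); columns: (dropped out, completed)
table = np.array([
    [120, 408],  # forced response: higher dropout (invented)
    [ 75, 453],  # control (invented)
])

# Chi-square test of independence between condition and dropout
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

With real field data, the same test (or a two-proportion z-test) would show whether the forced-response condition produces significantly earlier cancellation.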

Décieux-Higher response rates at the expense of validity Consequences of the implementation of the ‘forced re.pdf

A comparison of two eye-tracking supported cognitive pretesting techniques

Cornelia Neuert, Timo Lenzner

Gesis - Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question: In questionnaire pretesting, supplementing cognitive interviewing with eye tracking is a promising new method that provides additional insights into respondents’ cognitive processes while answering survey questions. When incorporating eye tracking into cognitive interviewing, two techniques seem to be particularly useful. In the first technique (retrospective probing), cognitive interviewers first monitor participants’ eye movements and note down any peculiarities in their reading patterns, and then ask targeted probing questions about these peculiarities in a subsequent cognitive interview. In the second technique (gaze video cued retrospective probing), respondents are additionally shown a video of their eye movements during the cognitive interview. This video stimulus is supposed to serve as a visual cue that may better enable respondents to remember their thoughts while answering the questions. We compare gaze video cued retrospective probing with retrospective probing without any cue when it comes to identifying problematic survey questions by addressing the following research questions: 1) Do both techniques differ in terms of numbers of problems identified? 2) Do both techniques differ in types of problems identified? 3) Does using a gaze video stimulate the participants in different ways when commenting on their behavior?

Methods & Data: In a lab experiment, participants' eye movements (N=42) were tracked while they completed six questions of an online questionnaire. Simultaneously, their reading patterns were monitored by an interviewer for evidence of response problems. After completing the online survey, a cognitive interview was conducted. In the retrospective probing condition, probing questions were asked if peculiar reading patterns were observed during the eye-tracking session (e.g., re-readings of specific words). In the other condition, participants were shown a video of their recorded eye movements in addition to receiving probing questions about the questions displayed.

Results: Results show that both techniques did not differ in terms of the total number of problems identified. However, gaze video cued retrospective probing identified fewer unique problems and fewer different types of problems than pure retrospective probing.

Added Value: Our experimental study offers first insights into the usefulness of a gaze video cue in conjunction with the method of cognitive interviewing.

Neuert-A comparison of two eye-tracking supported cognitive pretesting techniques-136.pdf

Development and Validation of the "Participatory Market Communication Scale" (PMCS)

Stefan Beckert, Alena Kirchenbauer, Julia Niemann, Alexander Schulze

Universität Hohenheim, Germany

Relevance & Research Question: Recent media developments – especially the social web – paved the way for recipients’ active participation in advertising and market communication. Until now, this phenomenon of participative market communication (PMC) was mainly examined using qualitative approaches. To gain a better understanding of the motives for PMC, we developed a quantitative test instrument, the "Participatory Market Communication Scale" (PMCS). We draw on three dimensions identified by Berthon et al. (2008: 10f): “intrinsic enjoyment”, “self-promotion” and “change perception”. Based on the fact that most PMC campaigns offer a competition to stimulate participation, a fourth dimension “reward” was added.

Methods & Data: To develop the scale, items representing the four dimensions mentioned above were formulated and tested using qualitative and quantitative pretest techniques. The final scale contained 13 items and was evaluated in a multi-thematic online survey. The sample was quota-sampled by age, gender and formal education and can be considered representative of the German online population (n = 448).

Results: The items show satisfactory values for item difficulty and item variance. However, the assumed multi-dimensionality of the construct could not be confirmed. Neither exploratory nor confirmatory factor analysis led to results indicating the four supposed dimensions. Especially the three dimensions “intrinsic enjoyment”, “self-promotion” and “change perception” that were derived from the qualitative study of Berthon et al. (2008) do not differentiate. All items belonging to these three dimensions load on the same factor, which we called “action motivation” (α = .93). The second factor, “reward motivation” (α = .77), consists of the items representing the extrinsic motivation to participate due to the offered reward.
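The reported reliabilities (α = .93, α = .77) are Cronbach's alpha values. A minimal sketch of how alpha is computed from a respondents × items score matrix, using invented 5-point ratings:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Invented 5-point ratings for three items of one hypothetical subscale
data = np.array([
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
    [1, 2, 1],
    [3, 3, 4],
], dtype=float)
print(round(cronbach_alpha(data), 2))
```

Items that covary strongly (as in this toy matrix) push alpha towards 1, which is the pattern behind the single “action motivation” factor the authors report.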

Added Value: The PMCS is a suitable instrument for advertising research. Due to its two-dimensionality, the scale is flexible in its application. Advertising material that offers a reward for PMC activities can be tested by adding the items for “reward motivation”, so these items can be seen as an add-on module of the PMCS.


Berthon, Pierre / Pitt, Leyland / Campbell, Colin (2008). When consumers create the ad. California Management Review, 50(4), 6-30.

Beckert-Development and Validation of the Participatory Market Communication Scale-238.pdf

Webdatanet & Webdatametrics

Pablo de Pedraza

University of Amsterdam, The Netherlands

WEBDATANET is a unique multidisciplinary European network bringing together leading web-based data collection experts, (web) survey methodologists, psychologists, sociologists, linguists, media researchers, Internet scientists, economists and public opinion researchers from 31 European Member States plus USA, Brazil and Russia.

By addressing methodological issues of web-based data collection (surveys, experiments, tests, non-reactive data collection, and mobile Internet research) and fostering its scientific usage, WEBDATANET aims to contribute to the theoretical and empirical foundations of web-based data collection, stimulate its integration into the entire research process (i-science), and enhance the integrity and legitimacy of these new forms of data collection.

The Master in Webdatametrics – web-based data collection and analysis – offers a top-quality programme in the area of Internet-based data collection methods and Big Data analysis. The programme was created by the Webdatametrics Academic Board, elected from members of the COST network Webdatanet. It brings together academics and researchers from leading universities and departments in Europe.

Who is Your Customer? A Data-Driven Approach to B2B Customer & Competitive Analysis

Carol Scovotti1, Ross Scovotti2

1University of Wisconsin-Whitewater, United States of America; 2NueMedia, LLC, United States of America

Relevance & Research Question: Customer and competitive information needed by firms in the business marketplace (B2B) is not as readily available as consumer data. Niches are smaller and lines of business more specialized. Customer preference data is difficult to collect and few materials, machinery, and product suppliers have extensive marketing analytics expertise. Nonetheless, businesses on the commercial side of the supply chain must understand customer needs and preferences. They must also be aware of alternatives that interest their prospects. Industry information disseminators like trade publishers, web portal managers, and associations can increase their value to members, subscribers, advertisers, etc. by providing customer and competitive intelligence that independently would be difficult to gather. This study examines an innovative use of customer and competitive data collected in a “State of the Industry” report conducted by a US-based site developer and manager of web portals for the commercial woodworking, finishing and countertop industries.

Methods & Data: Over 42,000 practitioners in these industries were emailed an invitation to participate in an online survey in August 2014. Almost 5000 responded (11.9%), providing information about the size, nature and scope of their businesses, and the media they use to gather company, product, and business process information. They also identified the equipment, software, and supplies used to produce their products, including current vendors and competitors they would consider in the future. These data were then grouped by product category to identify buying behaviors.

Results: Results vary by company within a product category. For example, the customer profile of a premium quartz countertop manufacturer is a fabricating specialist with 11-25 employees and $1-5 million in annual revenue who attends trade shows. The profile of a popular laminate maker is a cabinet maker or contractor that also makes countertops, has 2-10 employees and less than $1 million in annual revenue, and finds product information through search engines and email.

Added Value: These examples demonstrate that with a little planning and the right questions, B2B information disseminators can supply their clientele with enhanced demographics and media usage information so they can identify, communicate with, and ultimately convert viable prospects into customers.
Scovotti-Who is Your Customer A Data-Driven Approach to B2B Customer & Competitive Analysis-173.pptx

Research on Pilot Survey for Mixed Mode Effects: Face to Face Survey and Internet Survey

KyuHo Shim, KyungEun Lim

Statistics Korea, Republic of Korea (South Korea)

Relevance and Research Question: Nowadays, household surveys such as the Labor Force Survey or the Household Finances and Living Conditions Survey face a rising “response burden” in Korea. In order to reduce this response burden, Statistics Korea is using a mixed-mode design in household surveys. But questions remain concerning how much the effects of these various modes differ, and how mixed-mode surveys can be designed to reduce these differences. With experimental surveys on mixed-mode effects, we are redesigning our mixed-mode survey process and trying to discover the best statistical method to estimate mode effects.

Methods and Data: We conducted a pilot survey to research mode effects. We designed the pilot survey with two survey modes, a face-to-face interview and an Internet survey, and added two selection conditions, one allowing households to select the survey mode and one not allowing them to do so. This yielded four survey groups in total. We sampled 1,600 household members randomly; 800 households could select the interview or Internet survey, and the others could not select a particular mode. The questionnaire included general social survey questions, up to 60 questions in total.

Results: Finally, we collected data from 766 household members, with a response rate of almost 42%. For some questions, there were mode effects between the face-to-face interview and the Internet survey. For example, for the question “How many times do you contact your parents in a week?”, respondents in the Internet survey reported less contact than those who opted for the face-to-face interview. We can show more results that contain mode effects.

Added Value: Using the results of the pilot survey, we can redesign the mixed-mode survey process. We can also apply estimation methods to the mixed-mode data.
Shim-Research on Pilot Survey for Mixed Mode Effects-233.pdf

Not to Be Considered Harmful: Mobile Users Do Not Spoil Data Quality in Web Surveys

Jana Sommer, Birk Diedenhofen, Jochen Musch

University of Duesseldorf, Germany

Relevance & Research Question:

The number of respondents accessing web surveys using a mobile device (smartphone or tablet) has rapidly been increasing over the last few years. Compared to desktop computers, mobile devices have smaller screens, different input options, and are used in a larger variety of locations and situations. The suspicion that data quality may suffer when respondents access surveys using mobile devices has stimulated a growing body of research that has mainly focused on paradata and web survey design. The question of whether there are mode effects on the validity of web survey data, however, has been examined in only a few studies. To add to this research, we compared the quality of responses produced by mobile and desktop users responding to a political online survey. To examine data quality, we determined the consistency of the participants’ responses, and validated them against various internal and external criteria.

Methods & Data:

We collected the data of a large sample of participants in a political online survey conducted on the occasion of the 2013 German federal election. The internal-consistency reliability of a political knowledge test was used as an indicator of data quality. As additional indicators, we determined the consistency between self-reported voting intention for the German federal election in 2013 and party preference, coalition preference, and self-reported voting behavior in the previous election. Moreover, we examined the agreement between self-reported voting intention and the actual outcome of the election.


Results:

We found no difference between mobile users and desktop users for any indicator of data quality. In fact, the agreement between self-reported voting intention and the actual election result was larger for mobile users than for desktop users.

Added Value:

The present investigation adds to the ongoing discussion whether there are mode effects leading to a poorer quality of data submitted by respondents using mobile devices. Our findings suggest that the participation of mobile users does not compromise data quality, and that researchers do not need to worry about the participation of mobile respondents in web surveys.

Conference: GOR 15