Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
F 1: Poster Session (Part I)
Thursday, 07/Mar/2019:
2:15 - 3:30

Location: Gallery
TH Köln – University of Applied Sciences


Finding the trolls lurking beneath the news. A two-step approach to identify perceived propaganda through machine learning.

Vlad Achimescu

University of Mannheim, Germany

Relevance & Research Question: Recently, numerous attempts by foreign actors to manipulate public opinion have been uncovered, in which false accounts were employed to spread propaganda online. Eastern Europe is highly exposed and vulnerable to this type of political astroturfing. Users of online newspaper forums have been vocal in calling out some posters as ‘Russian trolls’, in an act of informal flagging. I investigate the potential of using these informal flags to predict perceived propaganda with machine learning models in a two-step approach.

Methods & Data: Over 200,000 comments posted to articles published in 2017 on a large Romanian online newspaper were scraped. Using specific keywords and manual classification, informal flags are identified. Supervised machine learning (regularized logistic regression and random forests) is used in two steps. The first step predicts whether a comment is an informal flag or not, based on word content and metadata (Model 1). In the second step, flagged messages are labeled as potential propaganda and another model predicts whether a message would be flagged or not (Model 2), using the same features as Model 1.
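The two-step procedure can be sketched as follows. This is a minimal illustration with toy data and hypothetical labels, not the authors' actual pipeline; for simplicity it uses word content only (TF-IDF), omitting the metadata features the abstract mentions:

```python
# Minimal sketch of the two-step flag/propaganda classification.
# Toy data and hypothetical labels -- not the authors' actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: train a classifier that detects informal flags
# (comments that call another poster a troll).
flag_texts = ["you are a paid russian troll", "nice article, thanks",
              "another troll from the farm", "i disagree with the author",
              "obvious kremlin bot account", "great analysis of the economy"]
is_flag = [1, 0, 1, 0, 1, 0]  # 1 = comment flags someone as a troll

model1 = make_pipeline(TfidfVectorizer(), LogisticRegression())
model1.fit(flag_texts, is_flag)

# Apply Model 1 to unlabeled comments to discover additional flags.
unlabeled = ["this troll is paid by moscow", "lovely weather today"]
new_flags = model1.predict(unlabeled)

# Step 2: messages that *received* flags are labeled as potential
# propaganda, and a second model predicts which messages would be flagged.
replied_to = ["our glorious leader is always right",
              "the bridge reopens on monday"]
was_flagged = [1, 0]  # 1 = the message drew an informal flag
model2 = make_pipeline(TfidfVectorizer(), LogisticRegression())
model2.fit(replied_to, was_flagged)
```

In the abstract's design, the flags discovered by Model 1 on unlabeled data are added to the training set of Model 2, which is what improves accuracy from 0.76 to 0.85.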

Results: Through manual classification, 350 informal flags are identified. The best model in the first task has a precision of 0.69 on the test set. Applying this model to unlabeled data, 430 additional informal flags are discovered. Using both initial and additional flags in Model 2 improves prediction accuracy from 0.76 to 0.85, compared to using initial flags only. Random forests show improved performance over regularized logistic regression. Word content is key for identifying flags, while metadata is essential for identifying messages posted by trolls. Informal flaggers write shorter messages that get positive ratings, while trolls tend to obtain more negative ratings.

Added Value: This research contributes to the identification of online propaganda using computational text analysis. It shows the potential of externalizing the labeling process to members of online communities, but it also highlights the risks of misclassification. The improved accuracy of the two-step approach shows that it is necessary to periodically update the labeling process rather than rely on a fixed model.

Achimescu-Finding the trolls lurking beneath the news A two-step approach-280.pdf

Do We Blame it for Its Gender? How Specific Gender Cues Affect the Evaluation of Virtual Online Assistants

Carolin Straßmann, Annika Arndt, Anna Dahm, Dennis Nissen, Björn Zwickler, Bijko Regy, Melissa Güven, Simon Schulz, Sabrina Eimler

Hochschule Ruhr West, Germany


Virtual online assistants give us recommendations on websites or help us with our daily lives. These agents mostly have a humanoid design and are associated with a gender. Based on media equation theory (Reeves & Nass, 1996), assistants trigger the same social responses as humans. Consequently, gender stereotypes are applied, which have been found to affect the perception of the agent (cf. Nowak & Fox, 2018). The gender of the assistant can be conveyed by different cues, which might make these stereotypes more or less salient. The present study investigates the effect of different cues representing the assistant’s gender on its evaluation.


An online experiment with a 2 × 3 × 2 between-subjects design was conducted, varying gender (male vs. female), gender cue (represented by name, embodied character, or voice) and interaction quality (flawless vs. incorrect interaction). A total of 138 people (52 female; Mage = 23.93, SDage = 8.19) completed the questionnaire. Participants then evaluated the assistant with regard to warmth and competence.
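Crossing the three factors yields twelve experimental cells, with each participant randomly assigned to exactly one of them. A generic sketch (the factor labels are taken from the abstract; the assignment procedure itself is illustrative, not the authors'):

```python
# Sketch of a 2 x 3 x 2 between-subjects design: cross the factor
# levels and randomly assign each participant to one condition.
import itertools
import random

genders = ["male", "female"]
gender_cues = ["name", "embodied character", "voice"]
interaction_quality = ["flawless", "incorrect"]

# Crossing the factors yields 2 * 3 * 2 = 12 experimental conditions.
conditions = list(itertools.product(genders, gender_cues,
                                    interaction_quality))

# Each of the 138 participants is randomly assigned to one condition.
random.seed(0)
assignment = {pid: random.choice(conditions) for pid in range(138)}
```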


Results indicate that female assistants were perceived as warmer than male ones (F(1, 135) = 4.58, p = .034, η2 = 0.03) and that the agent was evaluated as more competent after a flawless interaction than after an incorrect one (F(1, 135) = 4.07, p = .046, η2 = 0.03). Moreover, the representations of gender differed with regard to warmth (F(2, 135) = 4.76, p = .010, η2 = 0.07): the voice was perceived as significantly less warm than the name or the embodied character. Additionally, a three-way interaction between all independent variables occurred with regard to warmth (F(2, 135) = 3.20, p = .044, η2 = 0.05): for female agents represented through an embodied character, an interaction with a failure led to a higher warmth evaluation than a flawless interaction.


The study’s findings emphasize that gender stereotypes and their consequences are deeply rooted in human nature. Moreover, specific representations of the assistant’s gender seem to boost the application of gender stereotypes.

Straßmann-Do We Blame it for Its Gender How Specific Gender Cues Affect the Evaluation-285.pdf

Teaching Practical Tasks with Virtual Reality and Augmented Reality: An Experimental Study Comparing Learning Outcomes

Alexander Arntz, Sabrina Eimler, Uwe Handmann

Hochschule Ruhr West, Germany


Currently, the effectiveness of Virtual Reality (VR) and Augmented Reality (AR) systems as teaching methods for practical skills is largely unexplored. Studies exploring whether these systems can provide the same or better learning outcomes than a text-instructed practical task are still missing. This abstract describes results from an experimental study on computer-assembly tasks, combined with pre- and post-condition online questionnaires.


Three conditions (VR, AR and a real setup) were used to teach participants how to assemble a standard desktop computer. Each condition was divided into two parts: (1) participants were confronted with their specific scenario; (2) participants had to complete a real practice task one week later. The experimental setup was accompanied by pre- and post-condition online questionnaires (using SoSciSurvey). Besides performance data (i.e. learning outcome), wellbeing, prior knowledge of the task and the system used, as well as system usability measures, were assessed. The survey helped to determine the learning outcome through a quiz that queried the designation, function and correct assembly of the components. Time required to complete the task and the error rate were collected using a checklist.


Results concerning the learning outcome showed that participants in the VR condition outperformed those who learned from the real setup (M = 10.0, SD = 0.0 [virtual reality] vs. M = 8.95, SD = 1.27 [control]). Furthermore, results from the assembly-duration assessment demonstrated that the VR-group participants completed their tasks 6.62% faster than the control group. Regarding the identification of hardware parts, both groups improved significantly in the post-condition compared to the first test run, indicating learning progress. However, as the VR group achieved better average scores and a larger difference between the trials, the results indicate a better performance by participants assigned to the VR condition.


The results show that VR and AR systems can exceed a text-based approach in terms of learning outcome. The effectiveness of these systems implies a major benefit for the educational landscape, as learning content that is not realizable in terms of cost, distance or logistics could be designed as an immersive and engaging experience.

Arntz-Teaching Practical Tasks with Virtual Reality and Augmented Reality-284.pptx

Web Survey on e-grocery consumers’ attitudes: An efficient design experiment that mixes stated preference and rating conjoint tasks

Orlando Marco Belcore1, Luigi Dell'Olio2, Massimo Di Gangi1

1Università degli Studi di Messina, Italy; 2Universidad de Cantabria, Santander, Spain

Relevance & Research Question: Digital infrastructures have changed everyday life, helping us to solve different tasks. E-grocery represents a new frontier for e-commerce, so a web-based survey has been developed with the aims of reaching very diverse samples, capturing consumers’ feelings, understanding their perceived value and providing information on real behaviours. This proposal represents an effective instrument to evaluate the future demand for e-grocery services and their impact on urban areas.

Methods & Data: The proposed web survey consists of three fundamental sections: a revealed preferences (RP) section, an efficient experimental design as a Stated Preference (SP) exercise, and a rating-based conjoint task. To help people who are not familiar with e-grocery and choice experiments, multimedia content has been developed inside the website and the survey. Moreover, to overcome the limitations of SP experiments when complex situations are studied, the scenarios were divided into three steps, and variables were introduced by combining images and descriptions, creating an artificial purchase timeline that helps respondents handle a wider range of variables by solving simple tasks.

Starting from January 2019, respondents have been recruited by spreading the survey via QR-code touchpoints and social media.

Results: Administering the survey to experienced consumers and newcomers across countries supports a more realistic evaluation and increases data reliability. The recursive choice task within the SP section allows the relative weight of each variable and its cut-off points to be evaluated. Data from Likert-scale ratings are used to strengthen the reliability of the SP experiment, pointing out the existence of patterns and situations that lead decision makers to select a specific purchase strategy, thus increasing clustering flexibility for analysts.

Added Value: This experiment, which presents respondents with a complete cognitive process, underlines the potential of supplementing data from random utility theory with conjoint analysis to evaluate consumers’ attitudes, expectations and choices. The introduction of a multi-channel purchase option overcomes the limitation of merely agreeing or disagreeing with an online strategy, and the “timeline” solution increases reliability while satisfying the requirements of simplicity and accuracy. This will allow us to strengthen classical latent class models.

Belcore-Web Survey on e-grocery consumers’ attitudes- An efficient design experiment that mixes stated p.pdf

When Gender-Bias Meets Fake-News - Results of Two Experimental Online-Studies

Sarah Bludau1, Gabriel Brandenberg2, Lukas Erle2, Sabrina Eimler2

1University of Duisburg-Essen, Germany; 2University of Applied Science Ruhr West, Germany

RELEVANCE & RESEARCH QUESTION: Online media are perceived to be credible sources of information and up-to-date news. However, this also implies new possibilities to publish false information accessible to a large population. Information is evaluated unconsciously, resulting in biased interpretations and attributions. Goldberg (1968) demonstrated that articles written by female authors are perceived as less credible than identical articles written by male authors. As a result, gender stereotypes could facilitate a higher credibility of false information in online settings. It is assumed that these biases also apply in online media, with a variety of consequences for individuals and society.

METHODS & DATA: In two online studies (N = 226; N = 95), four stimulus articles were presented in a 2 (male vs. female author) × 2 (reported misuse of a technical innovation by men vs. women) between-subjects design. Participants were randomly assigned to one experimental condition and asked about their perception of the text (e.g. quality, style, credibility) and the author (e.g. warmth, competence). The second study additionally considered perceived authenticity.

RESULTS: Results yield a main effect of the author’s gender on perceived credibility (F(1, 224) = 5.04, p < 0.05, η2 = 0.05), with higher scores for male authors. Participants considered articles presenting male misuse of a technical innovation to be less credible (F(1, 224) = 4.54, p < 0.05, η2 = 0.02). There is also an interaction effect showing that articles describing female misuse of a technical innovation written by a male author are evaluated as most credible (F(1, 222) = 4.01, p < 0.05, η2 = 0.02), further confirming the assumption of a gender bias in online media. Female authors’ warmth was perceived as higher than males’ (F(1, 224) = 11.08, p = .001, η² = .05), whereas no difference was found regarding the perceived competence of the author. The second study showed that perceived authenticity has an impact on author ratings.

ADDED VALUE: Results indicate that, despite a change in the prevalence of female authors (e.g. bloggers, influencers) on the internet, a certain reproduction and stability of gender stereotypes still exists. Also, gender biases are at least partially intertwined with news credibility. Further results and limitations will be discussed.

Bludau-When Gender-Bias Meets Fake-News-283.pdf

Making Online Research Findable, Accessible, Interoperable and Reusable (FAIR)

Ines Drefs

GO FAIR International Support & Coordination Office, Germany

Across all disciplines, research is faced with digitalization and the ensuing expectations towards sharing digital(ized) data, especially when publicly funded. In the field of online research, data are digital and thus machine-readable by nature. Hence, not only are expectations high in terms of full exploitation of these research data; there is also increased potential for data sharing and re-use within the field of online research. To help researchers manage their data, the so-called FAIR principles have been widely promoted as guidelines implying that research data should be made Findable, Accessible, Interoperable and Reusable. How can data FAIRification best be realized in online research? How can online researchers benefit from synergies when developing solutions for FAIR data management?

In the transition to FAIR, early movers from various disciplines and regions have started to organize themselves as so-called Implementation Networks (INs) of the GO FAIR initiative. GO FAIR INs consist of individuals, institutions and organisations committed to making services and data FAIR. At a practical level, this happens on the basis of three interactive processes which constitute the pillars of GO FAIR: GO CHANGE refers to IN activities that foster a socio-cultural change toward data sharing in the broader scientific system. The GO TRAIN process is fostered by INs who develop training curricula focused on FAIR Data Stewardship as well as certification schemes for pertinent competencies. GO BUILD refers to INs’ efforts of developing technical standards and infrastructure components needed to create an Internet of FAIR Data and Services.

Since its kick-off in 2018, GO FAIR has seen the emergence of more than 30 Implementation Networks, now in various stages of development. Collectively, the INs span a broad range of actors including research communities, service providers, librarians and funders.

GO FAIR participants benefit from workshops and meetings for knowledge exchange and knowledge transfer organized by the initiative’s Support and Coordination Office. By synchronizing their “FAIRification” efforts, the INs create synergies and avoid fragmentation as well as silo formation. INs can be joined at any time, and new INs can be launched. As such, the GO FAIR initiative is entirely open, inclusive and stakeholder-driven.

Drefs-Making Online Research Findable, Accessible, Interoperable and Reusable-269.pdf

Fightclub - Market research vs. UX research

Lisa Dust1, Christian Graf2

1Facts and Stories GmbH, Germany; 2UXessible GbR, Germany

Relevance & Research Question: The relationship between UX research (user research as part of user experience design) on the one hand and market research on the other has been discussed again and again lately. While one part of the community tends to emphasize differences between the two fields, the other points out clear overlaps. The only point of agreement seems to be that both market and UX research are important. So, in the view of the protagonists, what is the relationship between the two, and what role does each play in the typical lifecycle of a product? Where are the differences and commonalities?

Methods & Data: An online survey with closed and open questions was conducted; N = 37 professionals completed it.

Results: We found that the relevance of market research is seen as especially strong in the research and market-entry stages (not surprisingly). UX research is regarded as especially strong in the conceptual and implementation stages. Interestingly, when looked at across all stages, the two complement each other.

Added Value: Lisa as a representative of market research and Christian as the one with UX research experience have embarked on a journey of discovery. They present controversial points for you and report on their (provisional) results. They want to enable a discussion, so that everyone has a better idea about each other's strengths and how to use them adequately.

Dust-Fightclub - Market research vs UX research-286.pdf

Survey Attitude Scale (SAS): Are Measurements Comparable Among Different Samples of Students from German Higher Education Institutions?

Isabelle Fiedler, Ulrike Schwabe, Swetlana Sudheimer, Nadin Kastirke, Gritt Fehring

Deutsches Zentrum für Hochschul- und Wissenschaftsforschung (DZHW), Germany

Among other factors, general attitudes towards surveys are part of respondents’ motivation for survey participation. There is empirical evidence that these attitudes predict participants’ willingness to perform supportively during (online) surveys (de Leeuw et al. 2017; Jungermann/Stocké 2017; Stocké 2006). The Survey Attitude Scale (SAS), as proposed by de Leeuw et al. (2010), differentiates between three dimensions: (i) survey enjoyment, (ii) survey value, and (iii) survey burden. Referring to de Leeuw et al. (2017), we investigate whether SAS measurements can be compared across different online survey samples of students from German Higher Education Institutions (HEI).

Therefore, we implemented the nine-item short form of the SAS, adopted from the GESIS Online Panel (Struminskaya et al. 2015), at the beginning of three recently conducted online surveys of German students and PhD students: first, the HISBUS Online Access Panel – a periodic cross-sectional study of higher-education students on current study-specific issues (winter 2017/2018: n=4,895); second, the seventh online survey of the National Educational Panel Study (NEPS) – Starting Cohort “First-Year Students” (winter 2018: n=4,939); and third, a quantitative pretest among PhD students within the National Academics Panel Study (Nacaps; spring 2018: n=2,424). To validate the original scale in each dataset, we use confirmatory factor analysis (CFA).

Comparing the CFA results, our empirical findings indicate that the latent structure of the SAS is reproducible in all three samples. Factor loadings as well as reliability scores support the theoretical structure adequately. Thereby, our findings support the validity of the proposed nine-item short form of the SAS, for both new and repeat respondents.

By showing that the standardized short SAS instrument works for different samples, we contribute to the existing literature. Since the analyses of de Leeuw et al. (2017) are based on four general population surveys, we complete the picture specifically for young, highly educated respondents. For further research, we aim to pool our data to investigate more sophisticated methods for ensuring measurement equivalence (Chen 2007).

Fiedler-Survey Attitude Scale-291.pdf

Embedding the first question in the e-mail invitation: the effect on web survey response

Marco Fornea1, Chiara Respi2, Beatrice Bartoli1, Manuela Ravagnan1

1Demetra srl, Italy; 2University of Milano-Bicocca, Italy

Relevance & Research Question: Low response rates in web surveys are a challenging issue. Researchers have explored several response inducements, e.g. when inviting participants through e-mail. However, we are aware of only two studies that focus on questions embedded in the e-mail invitation. The main aim of this poster is to assess the impact of tailoring the e-mail invitation text on response. In particular, we evaluate the impact of an e-mail invitation that includes the first question of the web questionnaire vs. a standard e-mail invitation on survey participation, questionnaire completion and completion time, break-offs, and respondent composition.

Methods & Data: We use experimental data from a web survey conducted on delegates of the trade union “Italian General Confederation of Labour”. Sample members (N=5,494) were stratified by geographic area and type of trade-union category, and then randomly assigned (within the strata) to two groups: the “link” and the “first question” e-mail invitation group. The text of the e-mail sent to the two groups differed only in the final statement: in the “first question” group, the first question of the questionnaire was reported at the end of the e-mail, while in the “link” group the survey link was included. To analyse our data, we adopt both bivariate and multivariate analyses.
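Stratified random assignment of this kind can be sketched as follows. This is a generic illustration with made-up strata and member counts, not the authors' exact procedure: within each area-by-category stratum, members are shuffled and split between the two invitation groups.

```python
# Sketch of stratified random assignment to two invitation groups.
# Made-up members and strata -- illustrative only.
import random
from collections import defaultdict

random.seed(42)

# Hypothetical sample members: (member_id, geographic_area, category).
members = [(i,
            random.choice(["north", "centre", "south"]),
            random.choice(["metal", "services", "public"]))
           for i in range(200)]

# Group members into strata defined by area x trade-union category.
strata = defaultdict(list)
for member_id, area, category in members:
    strata[(area, category)].append(member_id)

# Within each stratum, shuffle and split between the two groups,
# so both groups mirror the stratum composition of the sample.
groups = {"link": [], "first_question": []}
for stratum_members in strata.values():
    random.shuffle(stratum_members)
    half = len(stratum_members) // 2
    groups["link"].extend(stratum_members[:half])
    groups["first_question"].extend(stratum_members[half:])
```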

Results: Preliminary findings show that the “first question” group is more likely to complete the questionnaire than the “link” group. The higher break-off rate for the “first question” group suggests that the embedded invitation is also effective in stimulating “reluctant” respondents to start the questionnaire. Moreover, there are no significant differences between the two groups in completion time. Lastly, respondents from geographic areas where Internet access is less widespread are more likely to respond when invited through an embedded e-mail.

Added Value: We believe that our work may contribute to expanding the knowledge on the effectiveness of embedding a question in the e-mail invitation. Indeed, to the best of our knowledge, this is the first study that looks at the impact of the embedded invitation on completion time and respondent composition.

Fornea-Embedding the first question in the e-mail invitation-264.pdf

Conference: GOR 19
Conference Software - ConfTool Pro 2.6.118
© 2001 - 2018 by Dr. H. Weinreich, Hamburg, Germany