Conference Agenda
F 3: Poster Session (Part III)
Presentations
Style for Success? A Study on the Impact of Avatars' Styling on Perceived Competence and Warmth
Hochschule Ruhr West, Germany
Relevance & Research Question: Avatars representing humans in virtual environments are used in many online scenarios. One future application might be a digital assessment center, in which candidates are represented by avatars to create an inclusive application process. Based on the media equation theory (Reeves & Nass, 1996), prior evidence showing that styling influences the evaluation of women (Klatt, Eimler & Krämer, 2016) might also apply to avatars. It remains open, however, whether and how this evaluation affects the perception of the represented candidate's capabilities. This study therefore investigates the influence of an avatar's styling on its perception and on the perceived leadership abilities of the represented human.
Methods & Data: To examine this question, we conducted an online experiment with a 2 x 2 x 2 (skirt/pants, loose hair/braid, with/without makeup) between-subjects design. To enhance generalizability, two different figures were evaluated and collapsed for the analyses. Overall, 143 participants (55 female, Mage = 30.31, SDage = 13.28) evaluated the virtual woman with respect to warmth, competence, status, and leadership abilities.
Results: Avatars with makeup were rated as more competent (F(1,135) = 5.801, p = .026, η² = .036), were ascribed higher leadership ability (F(1,135) = 7.309, p = .008, η² = .051), and had a greater chance of getting hired (F(1,135) = 4.01, p = .047, η² = .029) than avatars without makeup. Additionally, avatars with a braid were perceived as more competent (F(1,135) = 6.578, p = .011, η² = .041), were associated with higher leadership ability (F(1,135) = 7.274, p = .008, η² = .051), and had greater chances of getting the job (F(1,135) = 5.85, p = .017, η² = .042) than those with loose hair. Moreover, for avatars with loose hair, no makeup led to a higher warmth perception than makeup (F(1,135) = 5.565, p = .020, η² = .040). No differences for clothing occurred.
Added Value: The results show that the look of a digital avatar must be designed with care, because styling affects the perception of the avatar, which in turn evokes differences in the perceived capabilities of the human it represents.
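The effects above are reported as between-subjects F-tests. As a minimal sketch of how such an F-statistic is computed, the following computes a one-way between-subjects ANOVA in pure Python; the two groups stand in for one factor (e.g., makeup vs. no makeup), and the ratings are invented for illustration, not the study's data.

```python
# One-way between-subjects ANOVA F-statistic, sketching the kind of test
# reported above. The ratings below are hypothetical, not the study's data.

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares: group sizes times squared mean deviations
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: deviations from each group's own mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

makeup    = [4.1, 3.8, 4.5, 4.0, 4.3]   # hypothetical competence ratings
no_makeup = [3.2, 3.6, 3.1, 3.9, 3.4]
f, df1, df2 = one_way_anova_f([makeup, no_makeup])
print(f"F({df1},{df2}) = {f:.3f}")
```

The study itself crossed three such factors in a factorial design, so each reported F comes from a model partitioning the variance across all three factors; the sketch shows only the single-factor case.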
Fake News: On the Influence of Warnings and Personality
University of Würzburg, Germany
Relevance & Research Question: Misinformation on social media, often called fake news, has moved to the center of public discussion. One way to combat the spread of fake news is to attach warning messages (e.g., "contested by independent fact-checkers") to social media posts. However, the effectiveness of warning labels is debated, and empirical research in this field is rare. Based on prior theories from offline contexts, we expected warning messages to decrease the perceived accuracy of such posts and the intention to share them. Importantly, we assumed that the effect of warnings varies with users' personality, in particular the Dark Triad (narcissism, Machiavellianism, and psychopathy). Individuals with high Dark Triad scores are low on empathy, disregard others, and tend to disrespect justice and truth. These traits should therefore predict higher accuracy ascribed to social media posts carrying fake news warning labels and a higher intention to share misinformation.
Methods & Data: An online experiment was conducted (N = 438). Facebook posts with and without warning messages were prepared that highlighted a subsequent news article. After reading the news article, perceived accuracy and the intention to share the content were assessed, and narcissism, Machiavellianism, and psychopathy scores were obtained. The experiment used a one-factorial between-subjects design with Dark Triad scores as continuous moderators.
Results: On average, the effect of warning messages on perceived accuracy and intention to share was small. As expected, the impact of warning messages decreased with participants' Machiavellianism and psychopathy: the more individuals are predisposed to disregard others and to disrespect justice and truth, the less warning messages affect their judgment and intended behavior.
Added Value: Warning messages are an often-discussed tool to counter the spread of fake news, but little is known about their immediate psychological effects. This project addressed this research gap and showed that personality traits predict the handling of misinformation.

Looking back. Moving forward. 20 years of GOR.
Norstat Group, Germany
Relevance & Research Question: After the 20th GOR conference took place in Cologne last year, we thought it was time to examine the evolution of the event. Its beginnings date back to the very early days of online research, and it has persisted through a dynamic era of rapidly changing technologies. We wanted to know what topics appeared and disappeared over the course of time and how the focus of the research community may have shifted.
Methods & Data: With the help of DGOF's managing director Birgit Bujard, we collected and consolidated all available abstracts from the twenty events held between 1997 and 2018. As the setup of the conference has changed over the decades, the data had to be formatted, cleansed, and translated into English in order to create a comparable data set. Based on these data, we conducted a text analysis and visualized our most interesting findings.
Results: Our infographic shows that there are temporarily trending topics, but also topics that seem deeply ingrained in the DNA of GOR and the online research community. We show the development of certain key topics as well as other KPIs that illustrate the evolution of the conference.
Added Value: Our poster captures some of the identity of the General Online Research conference, but is also meant to surprise and, we hope, entertain the conference participants.
Respondents' Behavior in Web Surveys: Comparing Positioning Effects of a Scale on Impulsive Behavior
GESIS – Leibniz Institute for the Social Sciences, Germany
Relevance & Research Question: Previous research has shown that the quality of survey data is affected by questionnaire length. As the number of questions respondents have to answer grows, they can become bored, tired, and annoyed. This may increase respondent burden and decrease the motivation to provide meaningful answers, which in turn raises the risk of satisficing behavior.
Methods & Data: This paper investigates the effects of item positioning on data quality in two web surveys: an eye-tracking study with 130 participants and an online survey with 900 respondents. In each study, respondents answered a grid question on impulsive behavior consisting of eight items with a five-point response scale. The scale was randomly placed either at the beginning or at the end of the web questionnaire.
Results: The position of the scale was hypothesized to influence a variety of indicators of data quality and response behavior in both web surveys: item nonresponse, response times, response differentiation, as well as measures of attention and cognitive effort operationalized by fixation counts and fixation times (available only for the eye-tracking study). Results show that data quality is lower for questions positioned later in a questionnaire, reflected in less item differentiation, shorter response times, shorter fixation times, and fewer fixations.
Added Value: This study adds to the existing research on the optimal positioning of variables in surveys.
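One of the data-quality indicators named above, response differentiation, can be operationalized in several ways. The sketch below computes two common, illustrative variants for a single respondent's answers to an eight-item, five-point grid: the share of the modal response (1.0 means pure straightlining) and the spread of the answers; the exact metric used in the study is not specified here.

```python
# Hedged sketch of a "response differentiation" indicator: how varied one
# respondent's answers are across the eight grid items on a 5-point scale.
# Low differentiation (e.g., straightlining) is one sign of satisficing.
# The metric choices are illustrative, not the study's exact definitions.
from collections import Counter
from statistics import pstdev

def differentiation(responses):
    """Return (share of modal answer, population SD) for one respondent."""
    modal_share = Counter(responses).most_common(1)[0][1] / len(responses)
    return modal_share, pstdev(responses)

straightliner = [3, 3, 3, 3, 3, 3, 3, 3]   # identical answer on every item
varied        = [1, 4, 2, 5, 3, 2, 4, 1]
print(differentiation(straightliner))   # (1.0, 0.0) -> no differentiation
print(differentiation(varied))
```

In practice such scores would be computed per respondent and compared between the early-placement and late-placement conditions.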
Fit for Industry 4.0? – Results of an Empirical Study
Bielefeld University of Applied Sciences, Germany
Relevance & Research Question: Digitization and the IoT have become drivers of a far-reaching transformation process in companies worldwide. Companies now face the challenge of shaping this change while considering people and the organisation in addition to technology. The aim of this study was therefore to examine the effects of digitization on companies' employment and competence requirements, differentiated by employee group.
Methods & Data: Following preliminary literature research and qualitative expert interviews [n = 6], a research framework was developed that consists of two interconnected levels: [1] requirements of internal and external digitization and [2] qualifications and competencies of different occupation groups. Based on this, a quantitative online study was conducted from Oct. 2017 to Jan. 2018. Participants [n = 150] were recruited through a personal approach and consisted of company representatives from Germany with expertise in digitization and HR.
Results: Concerning the internal focus, 73% of the companies surveyed have an ERP system and 69% have an intranet, but only 36% use a cloud system, 29% data analytics, and 8% AR/VR. Regarding external factors, 50% have neither an online shop nor a platform for customer communication; 77%, however, confirm that they are examining new digital business models. Additionally, most respondents do not expect digitization to affect employment levels but do expect a change in tasks and a greater need for training, especially for skilled workers (85%; academics 84%, unskilled workers 66%). Openness to change is regarded as the most important competency across employee groups, followed by the ability to learn for unskilled (90%) and skilled (88%) workers, the ability to think in context for academics (97%), and communication skills for managers (96%). While the most important task for managers is the design of framework conditions, for the other employee groups it will be working with new technologies and data analysis.
Added Value: The results show the status quo and untapped potential of these companies. It is clear that, alongside IT and media skills, companies face further qualification needs and new areas of responsibility in the course of digital transformation, which differ by occupation group.
How to Catch an Online Survey Cheater
Demetra opinioni.net s.r.l., Italy
Relevance & Research Question: Online surveys are self-administered by respondents who receive incentives for completing questionnaires. Some respondents apply minimal cognitive effort in order to finish quickly and collect the incentive. This can trigger behavior such as not reading questions carefully, racing through the survey, or intentional cheating, resulting in poor data quality. This paper investigates the behavior of cheaters among online respondents from a non-probability-based panel, analyzing seven techniques for detecting cheaters, applied in different combinations, in order to find an efficient methodology that excludes as many cheaters as possible without eliminating honest panelists.
Methods & Data: We used data from two web surveys conducted in Italy in January 2019 on members of our own panel, Opinioni.net, which comprises 20,558 active panelists. The two surveys share the following characteristics: a sample size of 1,000, the same target population, and a food consumption topic. Sample members were stratified by geographic area, gender, and age in order to be representative of the Italian population. In both questionnaires we asked a particular question that served as the target variable. The techniques used to detect cheaters are: direct instructions in the body of a question, straightlining checks, speeder checks, trap questions (fake brands/names), open-ended question checks, multiple unlikely events in screening questions, and consistency checks. We used the first survey as a training set to determine a method for identifying cheaters. In particular, we compared the estimates of the target variable under each check, and under every combination of checks, against a "gold standard". Once the method was defined, we validated it using the second survey as a test set.
Results: Preliminary findings show that removing respondents who fail a single quality control question does not improve data quality. In our analysis, participants should be flagged for removal only if they fail at least two quality control measurements.
Added Value: Our paper expands knowledge of cheaters and of the techniques to identify them. The main value of this work is the number of quality controls tested.
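The decision rule reported above (exclude only respondents failing at least two checks) can be sketched as a simple flagging function. The check names and thresholds below are hypothetical stand-ins for four of the seven techniques listed, not the paper's exact operationalizations.

```python
# Sketch of the "fail at least two quality checks" exclusion rule described
# above. Field names and the 120-second speeder threshold are assumptions
# for illustration, not the paper's actual definitions.

def flag_cheater(respondent, min_failures=2):
    """Return True if the respondent fails >= min_failures quality checks."""
    checks = [
        respondent["failed_trap_question"],          # chose a fake brand/name
        respondent["seconds"] < 120,                 # speeder check (assumed cutoff)
        respondent["straightlined_grid"],            # identical grid answers
        len(respondent["open_answer"].strip()) < 3,  # empty/garbage open end
    ]
    return sum(checks) >= min_failures               # count failed checks

honest  = {"failed_trap_question": False, "seconds": 400,
           "straightlined_grid": False, "open_answer": "I prefer fresh pasta."}
cheater = {"failed_trap_question": True, "seconds": 80,
           "straightlined_grid": True, "open_answer": ""}
print(flag_cheater(honest), flag_cheater(cheater))   # False True
```

Raising `min_failures` trades recall for precision: fewer honest panelists are removed, but more cheaters slip through, which is exactly the trade-off the paper evaluates against its gold standard.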
How Much Text Is Too Much? Assessing Respondent Attention to Instruction Texts Depending on Text Length
University of Mannheim, Germany
Relevance & Research Question: Whether respondents pay adequate attention to a questionnaire and the stimuli within it has concerned survey researchers for decades. One way of assessing attention is to ask respondents for specific answers or actions, known as an instructional manipulation check (IMC). Previous research in this field has largely dealt with the question of whether respondents read texts at all, not with how much text they can be expected to read. I fill this gap in the literature by including an IMC in an online panel survey and systematically varying the length of the surrounding text.
Methods & Data: The data stem from the November 2018 wave of the German Internet Panel (GIP), an online panel representative of the German population. About halfway into the questionnaire, respondents were instructed not to answer a specific question but to continue by clicking the GIP logo instead. This instruction was "hidden" in the question text, whose length was experimentally varied across four conditions: (1) only the instruction was displayed, (2) the instruction was placed in one paragraph of text, (3) the instruction was placed in the second of two paragraphs, and (4) the instruction was placed in the fourth of four paragraphs.
Results: Whether respondents carefully read a text strongly depends on its length. The passing rate for the IMC ranges from about 80% for the shortest to under 40% for the longest text condition. The more text respondents are asked to read, the fewer of them actually do so. While lower attention from respondents on mobile devices is a commonly voiced concern, I find no evidence to support it.
Added Value: Respondents have often been treated as either attentive or inattentive, yet my results show that whether respondents carefully read a text strongly depends on how much text they are asked to read. Respondent attention can therefore be optimized by keeping stimuli short. The results also indicate that respondents using mobile devices do not pay less attention to the survey.
PC versus Mobile Survey Modes: Are People's Life Evaluations Comparable?
STATEC Research, Luxembourg; MZES, University of Mannheim, Germany
The literature on mixed-mode surveys has long investigated whether face-to-face, telephone, and online survey modes permit the collection of reliable data. Much less is known about the potential bias associated with using different devices to answer online surveys. We compare subjective well-being measures collected over the web via PC and mobile devices to test whether the survey device affects people's evaluations of their well-being. We use unique, nationally representative data from Luxembourg containing five measures of subjective well-being collected in 2017. Multinomial logit models combined with Coarsened Exact Matching indicate that the survey tool affects life satisfaction scores. On a scale from 1 to 5, where higher scores stand for greater satisfaction, respondents using mobile phones are more likely to choose the highest well-being category and less likely to choose the fourth category; we observe no statistical difference for the remaining three categories. We test the robustness of our findings using three alternative proxies of subjective well-being; for these, the survey tool induces no statistically significant difference in reported well-being. We discuss the potential consequences of our findings for statistical inference.
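Coarsened Exact Matching, as used above, makes PC and mobile respondents comparable before estimation by binning covariates and keeping only strata that contain both groups. The sketch below shows the pruning step in pure Python; the covariates (age, gender), the 15-year age bins, and the toy sample are assumptions for illustration, not the STATEC data or the paper's actual coarsening.

```python
# Minimal sketch of Coarsened Exact Matching (CEM): coarsen continuous
# covariates into bins, form exact-match strata, and keep only strata that
# contain both device groups. Bins and variables are illustrative only.

def coarsen_age(age):
    return age // 15                     # 15-year bins (assumed coarsening)

def cem_matched(sample):
    """Keep respondents in (age-bin, gender) strata containing both devices."""
    strata = {}
    for r in sample:
        strata.setdefault((coarsen_age(r["age"]), r["gender"]), []).append(r)
    matched = []
    for members in strata.values():
        if {m["device"] for m in members} == {"pc", "mobile"}:
            matched.extend(members)      # stratum has PC and mobile -> keep
    return matched

sample = [
    {"age": 25, "gender": "f", "device": "pc",     "life_sat": 4},
    {"age": 28, "gender": "f", "device": "mobile", "life_sat": 5},
    {"age": 70, "gender": "m", "device": "pc",     "life_sat": 3},  # no mobile match
]
print(len(cem_matched(sample)))   # 2 -> the unmatched 70-year-old is pruned
```

The multinomial logit for the five life-satisfaction categories would then be estimated on the matched sample, typically with stratum weights.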
Conference: GOR 19
Conference Software - ConfTool Pro 2.6.118 © 2001 - 2018 by Dr. H. Weinreich, Hamburg, Germany