Overview and details of the sessions of this conference.
C02: Fake News, Fake Users
Integrating Artificial Intelligence (AI) and the Human Crowd to Tackle 'Fake News': A Design Proposal
1University of Cologne, Dept. of Media and Technology Mgmt., Cologne, Germany; 2EYEO GmbH, Cologne, Germany
Relevance & Research Question:
Fake news has been around since politics began. Information - especially news - is often conflated with 'truth' simply because online content seems to reflect the 'real world'. However, our conscious or unconscious agreement or disagreement with an author's words does not make them true or untrue. The key to successfully detecting fake news is reliable and unbiased data sources, which are exceedingly hard to obtain and rarely openly accessible. The question arises of how we can deploy technology to inform users about the trustworthiness of a particular piece of news or source of information.
Methods & Data (Proposed Design):
We conceptualize AI-based decision 'support' combined with human content rating (a crowdsourcing paradigm) to tackle the challenge. In a first design phase, an open-source browser extension provides access to Metacert's data sources of fact-checking websites and uses that data to rate the trustworthiness of (inspected) news websites. A simple traffic light system - based on straightforward (transparent) analytics - indicates the trustworthiness of web content or entire websites, leaving it up to the user whether to continue reading. Users can anonymously offer feedback, which is likely to increase both the number of rated items and the granularity of ratings. After evaluation by fact checkers, the accumulated feedback shall be used to continuously update the database underpinning the system.
In a second design phase, this 'crowd-collected' data continuously feeds the AI-grounded sensing for supervised learning. While early AI-based ratings are likely to be wrong, the designed system should 'learn' fast from many user inputs. Eventually, ratings could be moved to the blockchain to improve transparency and enable third-party contributions to the database.
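As a rough illustration of the proposed rating logic, the following sketch combines fact-checker verdicts and crowd feedback into a traffic-light label. All function names, weights, and thresholds here are hypothetical; the abstract does not specify the underlying analytics.

```python
def trust_score(factcheck_flags, crowd_votes):
    """Combine fact-checker verdicts and crowd feedback into a 0..1 trust score.

    factcheck_flags: list of booleans, True = flagged as false by a fact checker
    crowd_votes: list of ints, +1 = 'trustworthy', -1 = 'not trustworthy'
    """
    # Start from a neutral prior of 0.5 when no signal is available.
    score = 0.5
    if factcheck_flags:
        # Fraction of fact checkers that did NOT flag the item.
        score = 1 - sum(factcheck_flags) / len(factcheck_flags)
    if crowd_votes:
        # Blend in the crowd signal, mapped from [-1, 1] to [0, 1].
        crowd = (sum(crowd_votes) / len(crowd_votes) + 1) / 2
        score = 0.7 * score + 0.3 * crowd  # weights are illustrative
    return score


def traffic_light(score):
    """Map a trust score to the proposed traffic-light labels."""
    if score >= 0.7:
        return "green"
    if score >= 0.4:
        return "amber"
    return "red"
```

In the proposed design, the crowd signal would only enter the score after evaluation by fact checkers, keeping the analytics transparent to the end user.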
Results & Added Value:
The design detects 'fake news' by exploiting strong network effects (the value of a good to one user depends on the number of other users) on a multi-sided platform (users, data providers, and web publishers), which requires trust - resulting from credible input data - among data providers, web publishers, and users.
Fact or Fake? A mediapsychological perspective on children judging the credibility of news
Universität Würzburg, Germany
Relevance & Research Question
The internet and particularly social media have become an important source of information, resulting in an increased risk of encountering so-called “fake news”. Consequently, information competence - the ability to detect incorrect and unreliable information - constitutes a key subdomain of digital competence. From a mediapsychological perspective, this study aims for (1) a methodological approach to assess information competence and (2) initial insights into children’s information competence.
Methods & Data
In a pretest (n = 53 students), four news articles (two “true”, two “fake”) were selected as stimulus material. A total of 247 German students (127 male) participated in the main study, ranging in age from 10 to 19 years (M = 13.66; SD = 2.36). In a 2×3×3 experimental design, participants were randomly assigned to two news articles (“true” vs. “fake”), with the articles varying in channel (print, online, social media), source (Bild, Süddeutsche, anonymous), and topic. Participants evaluated these items regarding credibility and verisimilitude. Further, self-reports assessed impulsivity, extraversion, need for cognition (NFC), media use, and information competence (e.g. trust in media, preferred sources of information).
Results
Descriptive analyses revealed that 83.62% of the “true” articles and 77.86% of the “fake” articles were identified correctly, with print articles perceived as most truthful. Interindividual differences affected the evaluation of verisimilitude: incorrectly classifying “fake” articles was significantly negatively associated with age (r = -.145, p = .001). Further, NFC and personality were significantly correlated with different aspects of children’s media use. The final model with all predictors (NFC, extraversion, impulsivity, media use, source, channel, topic, and age) accounted for a significant proportion of the total variation in participants’ evaluation of verisimilitude (adj. R² = .424, F[41, 452] = 9.849, p < .001), with NFC, age, topic, and newspaper use contributing significantly.
Added Value
Information competence has become essential in our digitized news world. Our study provides first insights into children’s evaluation of news articles and their ability to distinguish correct from incorrect information. First conclusions about the predictors constituting information competence are drawn, and initial ideas for pedagogical interventions are derived and discussed.
Fake it till they take it? Pseudo user effects and pseudo user literacy
Relevance & Research Question: Social bots, click farm employees, and micro workers—so-called pseudo users—can inflate the number of likes on online messages. In doing so, they can manipulate genuine users’ credibility perceptions of, attitudes towards, and intentions to engage with (political) online messages. One way to fight this effect is by fostering pseudo user literacy. However, experimental evidence for this effect is missing. The present study tackles this gap by exploring the following questions: How does endorsement by large numbers of pseudo users affect 1) credibility perceptions of, 2) attitudes towards, and 3) intentions to engage with social media content among pseudo-user-literate and pseudo-user-illiterate individuals?
Methods & Data: I conducted an online survey with a 2 (information about pseudo users vs. control) × 2 (pseudo user likes vs. control) between-subjects design (N = 201). To increase variance in pseudo user literacy throughout the sample, participants were randomly assigned to watch a video about social bots before completing a literacy measure. Subsequently, an Instagram post by a fictitious health insurance company was shown to all participants. I chose this topic because pretests suggested that respondents would not collectively tend towards a pro or contra position. Participants saw one of two versions of the post: one had zero likes; the other was liked by 316,609 pseudo user accounts, as respondents were made to discover.
Results: Three moderation analyses tested the research questions (predictor: pseudo likes; moderator: literacy; dependent variables: credibility perceptions, attitudes, or engagement intention). No significant unconditional or conditional effects were found in any of the analyses.
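A moderation analysis of this kind regresses the outcome on the predictor, the moderator, and their interaction; a significant interaction coefficient would indicate a conditional (moderated) effect. The sketch below shows the model structure on simulated null data. The variable names and the simulation are illustrative only, not the study's actual data or analysis software.

```python
import numpy as np

def moderation_ols(x, m, y):
    """Fit y ~ x + m + x*m by ordinary least squares.

    Returns the coefficients (intercept, x, m, interaction). A large
    interaction coefficient (relative to its standard error) would
    indicate that the effect of x on y depends on the moderator m.
    """
    X = np.column_stack([np.ones_like(x), x, m, x * m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Simulated example: 0/1 treatment (pseudo likes) and a continuous
# literacy moderator. Under this null model, credibility depends on
# neither the treatment nor the interaction.
rng = np.random.default_rng(0)
n = 200
likes = rng.integers(0, 2, n).astype(float)
literacy = rng.normal(0, 1, n)
credibility = 4.0 + rng.normal(0, 1, n)
beta = moderation_ols(likes, literacy, credibility)
```

With null data like this, the fitted treatment and interaction coefficients stay close to zero, mirroring the pattern of non-significant effects reported above.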
Added value: The analyses indicate that even those social media users who are unaware of pseudo users may not be particularly prone to having their credibility evaluations, attitudes, and behavioral intentions sabotaged by them. From that perspective, pseudo users may not be the threat to online discourse that dystopian debates have suggested. At the same time, the results suggest that literate individuals make little effort to defend their evaluations, attitudes, and behaviors against manipulation attempts by pseudo users. This finding underlines the need to critically evaluate the uniform call for media literacy campaigns.
Conference: GOR 19