Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads if available).

Session Overview
Session
C 2: Hate Speech and Fake News
Time:
Thursday, 10/Sep/2020:
11:40 - 1:00

Session Chair: Pirmin Stöckle, University of Mannheim, Germany

Presentations

Building Trust in Fake Sources: An Experiment

Paul C. Bauer1, Bernhard Clemm von Hohenberg2

1MZES Mannheim, Germany; 2European University Institute, Italy

Relevance and research question:

Today, social media like Facebook and WhatsApp allow anyone to produce and disseminate “news”, which makes it harder for people to decide which sources to trust. While much recent research has focused on the items of (mis)information that people believe, less is known about what makes people trust a given source. We focus on three source characteristics: whether the source is known (vs. unknown), on which channel people receive its content (Facebook vs. website), and whether previous information by that source was congruent (vs. incongruent) with someone's worldview.

Methods & Data:

In a pre-registered online survey experiment with a German quota sample (n = 1,980), we expose subjects to a series of news reports manipulated in a 2x2x2 design. Using HTML, we create highly realistic stimulus material that is responsive for both mobile and desktop respondents. We measure whether people believe that a report is true and whether they would share it.
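A 2x2x2 design of this kind crosses the three binary source factors named above into eight experimental cells. The sketch below is illustrative only (factor names, simple random assignment, and the seed are our assumptions, not the authors' implementation):

```python
import itertools
import random

# The three binary source factors described in the abstract (names are hypothetical)
factors = {
    "source_known": [True, False],       # known vs. unknown source
    "channel": ["facebook", "website"],  # where the report is encountered
    "congruent": [True, False],          # prior info congruent with worldview
}

# All 2 x 2 x 2 = 8 experimental conditions
conditions = [dict(zip(factors, values))
              for values in itertools.product(*factors.values())]

def assign(subjects, seed=0):
    """Randomly assign each subject to one of the eight cells."""
    rng = random.Random(seed)
    return {s: rng.choice(conditions) for s in subjects}

assignment = assign(range(1980))  # quota sample size from the abstract
```

In a real experiment one would typically use blocked or balanced randomization rather than independent draws, so that cell sizes stay near-equal.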

Results:

We find that individuals have a higher level of belief in and a somewhat higher propensity to share news reports by sources they know. Against our expectation, the source effect on belief is larger on Facebook than on websites. Our most crucial finding concerns the impact of congruence between facts and subjects’ world view. People are more likely to believe a news report by a source that has previously given them congruent information—if the source is unknown.

Added value:

We re-evaluate older insights from the source credibility literature in a digital context, accounting for the fact that social media has changed the way sources appear to news consumers. We further provide causal evidence that explains why people tend to trust ideologically aligned sources.



Social Media and the Disruption of Democracy

Jennifer Roberton1, Matt Browne2, François Erner1

1respondi; 2Global Progress

Relevance & Research Question:

It seems nostalgic to recall that the early days of the internet inspired hopes for a more egalitarian and democratic society. Some of this promise has been fulfilled: connectivity has enabled new forms of collective mobilization and made human knowledge accessible to anyone. But we are also living with the side effects of the internet. Among them is pervasive disinformation in the polity, which is weakening the integrity of our democracies and bringing people to the streets.

Fake news, hostile disinformation campaigns and polarization of the political debate have combined to undermine the shared narrative that once bound societies together. Trust in the institutions of democracy has been eroded. Tribalism and a virulent form of populism are the hallmarks of contemporary politics. The rules of politics are being rewritten.

Conducted as part of a multi-stakeholder dialogue with the social media platforms on the renovation of democracy, our research explores both the impact of social media on democratic society and the impact of democratic disruptions on the reputation of the social media platforms themselves.

Methods

20-minute surveys conducted in June and July 2019 in France, Germany and the UK (n = 500 in each country, representative for age and gender). All respondents agreed to install software that monitors their online activity, and all had been tracked for 12 months before participating in the research.

K-means segmentation combining declarative and passive data. Declarative data includes each respondent's attitude towards “traditional” fake news. Passive data mainly captures the types of sites where respondents find information (mainstream news websites or user-generated content, for instance).
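A k-means segmentation on combined declarative and passive features can be sketched as follows. This is a minimal, dependency-free illustration: the feature names, the synthetic respondent data, and the choice of three clusters are our assumptions, not the authors' pipeline.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns a cluster label for each point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # Update step: move each center to the mean of its assigned points
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(dim) / len(members)
                                   for dim in zip(*members))
    return labels

# Hypothetical respondent vectors: (attitude toward fake news on a 1-5 scale,
# share of visits to mainstream news sites, share to user-generated content)
rng = random.Random(1)
respondents = [(rng.uniform(1, 5), rng.random(), rng.random())
               for _ in range(500)]
labels = kmeans(respondents, k=3)
```

In practice one would standardize the features first, since declarative (Likert-scale) and passive (share-of-visits) variables live on different scales and would otherwise be weighted unevenly.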

Results

This research reveals three paradoxes of digital democracy.

1. Those who supported, and benefited from, the digital revolution the most are those who trust the GAFAs the least and think they now need to be controlled.

2. Those who trust democratic institutions the least are those who believe most in Facebook's political virtues.

3. Believing in fake news is less a cognitive matter than a political statement.

Added value

This research describes the mechanisms by which Facebook takes advantage of fake news.



What Should We Be Allowed to Post? Citizens’ Preferences for Online Hate Speech Regulation

Simon Munzert1, Richard Traunmüller2, Andrew Guess3, Pablo Barbera4, JungHwan Yang5

1Hertie School of Governance, Germany; 2University of Frankfurt, Germany; 3Princeton University, United States of America; 4USC, United States of America; 5UIUC, United States of America

Relevance & Research Question:

In the age of social media, the questions of what one is allowed to say and how hate speech should be regulated are ever more contested. We hypothesize that content- and context-specific factors influence citizens’ perceptions of the offensiveness of online content and also shape preferences for the action that should be taken. This has implications for the legitimacy of hate speech regulation.

Methods & Data:

We present a pre-registered study analyzing citizens’ preferences for online hate speech regulation. The study is embedded in nationally representative online panels in the US and Germany (about 1,300 respondents, opt-in panels operated by YouGov). We construct vignettes in the form of social media posts that vary along key dimensions of hate speech regulation, such as sender/target characteristics (e.g., gender and ethnicity), message content, and the target’s reaction (e.g., counter-aggression or blocking/reporting). Respondents are asked to judge the posts with regard to their offensiveness and the consequences the sender should face. Furthermore, the vignette task was embedded in a framing experiment that motivated it by (a) looming government regulation protecting potential victims of hate speech, (b) civil rights groups advocating against online censorship, or (c) a neutral frame.

Results:

While for about half (48%) of the posts respondents saw no need for action by the platform provider, for 11% of the posts they would have liked to see the sender banned permanently from the platform. Violent messages are evaluated substantially more critically than insulting or vilifying messages. At the individual level, we find that women are significantly more likely than men to regard the posts as offensive or hateful. With regard to the framing experiment, we find that, compared to the control group, respondents confronted with the government prime are 20 percentage points less likely to demand no action in response to offensive posts.

Added Value:

While governments around the world are moving to regulate hate speech, little is known about what is deemed acceptable or unacceptable speech online across different parts of the population and societal contexts. We provide initial evidence that could inform future debates on hate speech regulation.