Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Date: Thursday, 09/Sept/2021
11:00 CEST Track A: Survey Research: Advancements in Online and Mobile Web Surveys
 
11:00 CEST Track B: Data Science: From Big Data to Smart Data
 
11:00 CEST Track C: Politics, Public Opinion, and Communication
 
11:00 CEST Track D: Digital Methods in Applied Research
 
11:00 CEST Track T: GOR Thesis Award 2021
 
11:00 - 11:30 CEST GOR 21 Conference Kick-off
 
11:30 - 12:30 CEST A1: Probability-Based Online Panel Research
Session Chair: Florian Keusch, University of Mannheim, Germany
 
 

The Long-Term Impact of Different Offline Population Inclusion Strategies in Probability-Based Online Panels: Evidence From the German Internet Panel and the GESIS Panel

Carina Cornesse1, Ines Schaurer2

1University of Mannheim; 2GESIS - Leibniz Institute for the Social Sciences

Relevance & Research Question:

While online panels offer numerous advantages, they are often criticized for excluding the offline population. Some probability-based online panels have developed offline population inclusion strategies: providing internet equipment and offering an alternative survey mode. Our research questions are:

1. To what extent does including the offline population have a lasting positive impact across the survey waves of probability-based online panels?

2. Is the impact of including the offline population different when providing internet equipment than when offering an offline participation mode?

3. Is the impact of offering an alternative participation mode different when extending the alternative mode offer to reluctant internet users than when only making the offer to non-internet users?

Methods & Data:

For our analyses, we use data from two probability-based online panels in Germany: the GIP (which provides members of the offline population with internet equipment) and the GESIS Panel (which offers members of the offline population as well as reluctant internet users the possibility of participating in the panel via postal mail surveys). We assess the impact of including the offline population in the GIP and GESIS Panel across their first 12 panel survey waves regarding two panel quality indicators: survey participation (as measured using response rates) and sample accuracy (as measured using the Average Absolute Relative Bias). Our analyses are based on nearly 10,000 online panel members, among them more than 2,000 members of the offline population.
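For readers unfamiliar with the second indicator, the Average Absolute Relative Bias averages the absolute relative deviations of sample estimates from external benchmark values across categories. A minimal sketch with a hypothetical education benchmark (not the authors' data) could look like this:

```python
import numpy as np

def aarb(sample_props, benchmark_props):
    """Average Absolute Relative Bias across benchmark categories.

    sample_props, benchmark_props: proportions for the same categories
    (e.g., education levels), each summing to roughly 1.
    """
    sample_props = np.asarray(sample_props, dtype=float)
    benchmark_props = np.asarray(benchmark_props, dtype=float)
    # relative bias per category, then averaged over categories
    return np.mean(np.abs(sample_props - benchmark_props) / benchmark_props)

# hypothetical example: education distribution in the panel vs. a census benchmark
panel = [0.25, 0.45, 0.30]    # low / medium / high education shares in the panel
census = [0.35, 0.45, 0.20]   # corresponding official benchmark shares
print(f"AARB = {aarb(panel, census):.3f}")   # 0.262 in this toy example
```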

Results:

We find that, even though recruitment and/or panel wave response rates are lower among members of the offline population than among members of the online population, including the offline population has a positive long-term effect in both panels, which is particularly due to the success of the inclusion strategies in reducing biases in education. In addition, it pays off to offer an offline population inclusion strategy to people who use the internet but do not want to use it for the purpose of completing online surveys.

Added Value:

Ours is the first study to compare the impact of different offline population inclusion approaches in probability-based online panels.



Why do people participate in probability-based online panel surveys?

Sebastian Kocar, Paul J. Lavrakas

Australian National University, Australia

Relevance & Research Question: Survey methodology as a research discipline is predominantly based on quantitative evidence about particular methods since it is fundamentally a quantitative research approach. Probability-based online panels are relatively few in number and there are many knowledge gaps that merit investigation. In particular, more evidence is required to understand the successes and failures in recruiting and maintaining the on-going participation of sampled panelists. In this study, we aim to identify the main motivation factors and barriers in all stages of the online panel lifecycle – recruitment to the panel, wave-by-wave data collection, and voluntary/opt-out attrition.

Methods & Data: The data were collected with an open-ended question in a panel survey and semi-structured qualitative interviews. First, 1,500 panelists provided open-ended verbatim responses about their motivations for joining the panel, gathered in a 2019 wave of Life in Australia™. Between April 2020 and February 2021, fifteen of these panelists were classified into three distinct groups based on their panel response behavior and participated in an in-depth qualitative interview. Each of these panelists also completed a detailed personality inventory (DiSC test). Due to the COVID-19 crisis, the in-depth interviews were conducted virtually or over the phone.

Results: The results showed that (1) having the opportunity to provide valuable information, (2) supporting research, (3) having a say and (4) sharing their opinions were the most common reasons reported for people joining the panel and completing panel surveys. The most commonly reported barriers were (1) major life change, (2) length of surveys, (3) survey topics and (4) repetitive or difficult questions. In terms of personality types (DiSC), we can report that non-respondents on average scored much lower on dominance and higher on steadiness than frequent respondents.

Added Value: The study uses qualitative data to link the reported motivation and barriers with the existing survey participation theories, including social exchange theory, self-perception and compliance heuristics. It also relates the theories and the panelists’ reporting of their online panel behavior with their personality types. At the end, we turn the evidence from this study into practical recruitment and panel maintenance solutions for online panels.

 
11:30 - 12:30 CEST B1: Digital Trace Data and Mobile Data Collection
Session Chair: Stefan Oglesby, data IQ AG, Switzerland
 
 

The Smartphone Usage Divide: Differences in People's Smartphone Behavior and Implications for Mobile Data Collection

Alexander Wenz, Florian Keusch

University of Mannheim, Germany

Relevance & Research Question: Researchers increasingly use smartphones for data collection, not only to implement mobile web questionnaires and diaries but also to capture new forms of data from the in-built sensors, such as GPS positioning or acceleration. Existing research on coverage error in these types of studies has distinguished between smartphone owners and non-owners. With increasing smartphone use in the general population, however, the digital divide of the "haves" and "have-nots" has shifted towards inequalities related to the skills and usage patterns of smartphone technology. In this paper, we examine people’s smartphone usage pattern and its implications for the future scope of mobile data collection.

Methods & Data: We collected survey data from six samples of smartphone owners in Germany and Austria between 2016 and 2020 (three probability samples: n1=3,956; n2=2,186; n3=632; three nonprobability samples: n4=2,623; n5=2,525; n6=1,214). Respondents were asked about their frequency of smartphone use, their level of smartphone skills, and the activities that they carry out on their smartphone. To identify different types of smartphone users, we conduct a latent class analysis (LCA), which classifies individuals based on their similarity in smartphone usage patterns.
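Latent class analysis with binary activity indicators is, in essence, a mixture of independent Bernoulli distributions fitted by EM. The toy sketch below (not the authors' implementation, which likely relies on dedicated LCA software) illustrates the idea on simulated 0/1 data:

```python
import numpy as np

def bernoulli_lca(X, n_classes=3, n_iter=200, seed=0):
    """Toy latent class analysis: EM for a mixture of independent Bernoullis.

    X: (n_respondents, n_items) binary matrix, e.g. 0/1 indicators of
       smartphone activities (banking, social media, email, ...).
    Returns class sizes, item-response probabilities, and posterior memberships.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)               # class sizes
    theta = rng.uniform(0.25, 0.75, size=(n_classes, m))   # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: log P(x_i | class k) under independent Bernoullis
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)             # responsibilities
        # M-step: update class sizes and item-response probabilities
        pi = post.mean(axis=0)
        theta = (post.T @ X + 1e-6) / (post.sum(axis=0)[:, None] + 2e-6)
    return pi, theta, post

# hypothetical usage with simulated 0/1 activity data
X = (np.random.default_rng(1).random((500, 8)) < 0.4).astype(float)
pi, theta, post = bernoulli_lca(X, n_classes=3)
print("class sizes:", np.round(pi, 2))
```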

Results: First, we will assess which smartphone usage types can be identified in the samples of smartphone owners. Second, we will examine whether the different smartphone usage types vary systematically by socio-demographic characteristics, privacy concerns towards research activities on a smartphone, and key survey variables. Third, we will investigate how the composition of the smartphone usage types changes over time.

Added Value: Smartphone-based studies, even those relying on passive data collection, require participants to be able to engage with their device, such as downloading an app or activating location tracking. Therefore, researchers not only need to understand which subgroups of the population have access to smartphone technology but also how people are able to use the technology. By studying smartphone usage patterns among smartphone owners in Germany and Austria, this paper provides initial empirical evidence on this important issue.



Digital trace data collection through data donation

Laura Boeschoten, Daniel Oberski

Utrecht University, The Netherlands

Relevance and Research Question

Digital traces left by citizens during the course of life hold an enormous potential for social-scientific discoveries, because they measure aspects of social life that are difficult to measure by traditional means. Typically, digital traces are collected through APIs and web scraping. However, this is not always suitable for social-scientific research questions. Disadvantages are that the data cannot be used for questions at the individual level, that only public data are provided, that the data pertain to a non-random subset of the platform's users, and that the users who generate the data cannot be contacted for their consent. We aim to develop an alternative workflow that overcomes these issues.

Method

We propose a workflow that analyses digital traces by using data download packages (DDPs). As of May 2018, any entity that processes the personal data of citizens of the European Union is legally obligated by the GDPR to provide that data to the data subject upon request in digital format. Most major private data processing entities, comprising social media platforms, smartphone systems, search engines, photo storage, e-mail, banks, energy providers, and online shops comply with this right.

Our proposed workflow consists of five steps. First, data subjects are recruited as respondents using standard survey sampling techniques. Next, respondents request their DDPs with various providers, storing these locally on their own device. Stored DDPs are then locally processed to extract relevant research variables, after which consent is requested of the respondent to send these derived variables to the researcher for analysis.
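A minimal sketch of the local extraction step (step three), with a purely hypothetical DDP layout and derived variables, could look as follows; the actual software and DDP formats differ by platform:

```python
import json
import zipfile
from collections import Counter

def extract_features(ddp_zip_path):
    """Locally derive aggregate research variables from a data download package.

    Hypothetical layout: the DDP zip contains 'posts.json', a list of objects
    with a 'timestamp' field ("YYYY-MM-DD...") and a 'text' field. Only the
    derived aggregates (never the raw posts) would be shared after consent.
    """
    with zipfile.ZipFile(ddp_zip_path) as zf:
        with zf.open("posts.json") as f:
            posts = json.load(f)
    per_month = Counter(p["timestamp"][:7] for p in posts)   # e.g. "2021-03"
    return {
        "n_posts": len(posts),
        "posts_per_month": dict(per_month),
        "mean_post_length": sum(len(p.get("text", "")) for p in posts) / max(len(posts), 1),
    }

# features = extract_features("my_ddp.zip")   # run locally on the respondent's device
# -> shown to the respondent, sent to the researcher only after explicit consent
```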

Results and added value

We will present a proof-of-concept of developed software that enables the proposed workflow together with some use-cases. By using the workflow and the developed software, researchers can answer research questions with digital trace data while overcoming the current measurement issues and issues with informed consent.



Smartphone behavior during the Corona pandemic – How Germans used apps in 2020.

Konrad Grzegorz Blaszkiewicz1,2, Qais Kasem1, Clara Sophie Vetter1,3, Ionut Andone1,2, Alexander Markowetz1,4

1Murmuras, Germany; 2University of Bonn, Germany; 3University of Amsterdam, Netherlands; 4Philipps University of Marburg, Germany

The year 2020 dramatically changed our everyday routines. With the social distancing measures related to the Corona pandemic, connecting virtually helped us cope with isolation. While no single tool provides a whole picture of these changes, smartphones capture a significant part of online behavior. We looked at the usage of top smartphone apps to answer the following research questions:

What were the most popular smartphone apps in 2020?

How did they differ by demographic group and occupation?

How did they change over the year?

Were these changes COVID-19 related?

Methods & Data:

Our academic partners recruited 1,070 participants from Germany for scientific purposes. We collected their real-time app usage data via the Murmuras app with fully GDPR-compliant consent and conducted exploratory data analysis.
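As an illustration of what such exploratory analysis of event-level app logs might look like, the following pandas sketch aggregates usage by app and month; the data frame and its column names are hypothetical stand-ins, not the Murmuras data structure:

```python
import pandas as pd

# toy stand-in for the event-level app log; columns and values are hypothetical
usage = pd.DataFrame({
    "user_id":      [1, 1, 1, 2, 2],
    "app":          ["WhatsApp", "Instagram", "WhatsApp", "YouTube", "WhatsApp"],
    "start":        pd.to_datetime(["2020-03-01 09:00", "2020-03-01 12:00",
                                    "2020-04-02 09:30", "2020-03-01 20:00",
                                    "2020-04-02 10:00"]),
    "duration_min": [12.0, 25.0, 8.0, 40.0, 5.0],
})

# average daily minutes per app (among user-days on which the app was used)
daily = (usage.groupby(["user_id", usage["start"].dt.date, "app"])["duration_min"]
              .sum()
              .groupby("app").mean()
              .sort_values(ascending=False))
print(daily)

# total usage per month, to trace lockdown-related shifts over 2020
print(usage.groupby(usage["start"].dt.to_period("M"))["duration_min"].sum())
```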

Results:

Our participants spent on average almost 4 hours using their smartphones. The top five apps (WhatsApp, Instagram, YouTube, Chrome, and Facebook) are popular across all demographic groups and together account for almost 50% of all usage we recorded. Three of them (WhatsApp, Instagram, and Facebook) capture most of our online social and communication behavior. WhatsApp remains at the top for all demographic groups. Instagram is used longer by women, younger people, and students. Facebook is still the second most-used app for people above 30 and for employed people.

Phone usage increased significantly in March and April, months marked by a large number of COVID-19 cases and a strict lockdown. Usage of social and communication apps in these months increased by over 20%. Time spent in entertainment and media apps showed a slight decrease in March and a rapid increase in the following months. Interestingly, with the second wave of the pandemic in autumn, we see an increase in the media and social categories but no change in communication apps.

Added Value:

By using real usage data rather than questionnaires or app-store-based estimates, our study brings a better understanding of online behavior in the year 2020. We look into demographic and occupational differences as well as changes throughout the year and the influence of lockdowns. This new perspective provides insight into the changes in our habits brought about by the COVID-19 pandemic.

 
11:30 - 12:30 CEST C1: Social Media and Public Opinion
Session Chair: Pirmin Stöckle, University of Mannheim, Germany
 
 

The Discourse about Racism on German Social Media - A Big Data Analysis

Anna Karmann, Dorian Tsolak, Stefan Knauff, H. Long Nguyen, Simon Kühne, Hendrik Lücking

Bielefeld University, Germany

Relevance & Research Question:

Racism is a social practice encompassing both actions and rationales for action, which naturalize differences between humans and thus take for granted the objective reality of race (Fields & Fields 2012). In 2020, events such as the terrorist attack in Hanau and the death of George Floyd illustrated the omnipresence of racism in its different facets globally. Thus, a new debate about racism in society emerged, which was conducted on social media to a considerable extent. Both the rise of hashtag-based activism and the emergence of filter bubbles attest to the influence of social media on societal discourses.

Methods & Data:

Our study is concerned with the systematic measurement of the prevalence and magnitude of ‘racist’ discourses. By analyzing social media text data from Twitter, we draw conclusions regarding how these discourses vary over time and region.

Our database covers the period from October 2018 to the present day and comprises nearly 1 billion German tweets (~1.1 million tweets per day). We employ a combination of word embedding models and topic modeling techniques to identify clusters that include discourse about racism (Sia 2020). We link regional time-series information to augment our dataset with social-structural data.
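As a simplified illustration of the topic-modeling step (one of several possible approaches; not the authors' actual pipeline, which also uses word embeddings), the sketch below fits an LDA model on a toy stand-in corpus and inspects the top words per topic to flag racism-related clusters:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# toy stand-in corpus; in practice this would be the preprocessed German tweets
tweets = [
    "debatte ueber rassismus nach hanau",
    "george floyd proteste auch in deutschland",
    "fussball bundesliga ergebnisse heute",
    "rassismus debatte in den medien",
    "wetter heute sonnig und warm",
]

vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(dtm)          # per-tweet topic proportions

# inspect top words per topic to flag topics that belong to the racism discourse
terms = vectorizer.get_feature_names_out()
for k, component in enumerate(lda.components_):
    top = [terms[i] for i in component.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```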

Results:

We find that the discourse about racism in Germany peaked in the summer of 2020 and made up about 4% of all German tweets in that timeframe. Most notably, every third tweet in this discourse has been retweeted by other users, which is indicative of a highly active network structure. Analyses using the regional data reveal distinct spatial differences across regions of Germany, not only in the prevalence but also in the perception of the racist discourse. Regression models using social-structural data can account for some of this regional variance.

Added Value:

Our approach allows us to detect changing trends and continuities of the racist and anti-racist discourse over 2.5 years, differentiated by region. Our rich data on an abundance of different topics enables us to connect the discourse about racism to closely related topics discussed on social media.



Assessing when social media can complement surveys and when not: a longitudinal case study

Maud Reveilhac, Davide Morselli

Lausanne University (Switzerland), Faculty of social and political sciences, Institute of social sciences, Life Course and Social Inequality Research Centre

Researchers capitalizing on social media data to study public opinion have aimed at creating point estimates like those produced by opinion surveys (e.g., Klašnja et al. 2018). In doing so, most attempts are directed at whether social media data can predict election outcomes (see the review by Rousidis et al. 2020). Other studies have investigated how the social media agenda and the public agenda of a representative public correlate and what affects the rhythms of attention (e.g., Stier et al. 2018). Our study is situated at the nexus of these two approaches and seeks to assess under what circumstances social media data can reliably complement survey data collection.

We rely on a two-year longitudinal collection of tweets posted by more than 100,000 identified Swiss users. We compare the tweets with survey data across a range of topic areas.

In a first research step, we assess the extent to which Twitter data can validly reflect trends found in traditional public opinion measures, such as voting decisions in popular votes and main political concerns. Concerning the former, text similarity measures between tweets about popular votes and open-ended pro and contra survey arguments about the same voting objects allow us to recover the majority voting decision. Concerning the latter, we show a discrepancy between the offline and online public agendas, especially in the ranking of the importance of policy concerns.
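A minimal sketch of such a text-similarity comparison, using TF-IDF vectors and cosine similarity on toy inputs (the authors' actual similarity measures and preprocessing may differ):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# hypothetical inputs: tweets about a popular vote, and the pro/contra arguments
# from the open-ended survey answers on the same voting object
tweets = ["the initiative protects workers", "this proposal is far too expensive"]
pro_args = ["it protects workers and families"]
con_args = ["too expensive for taxpayers and firms"]

vec = TfidfVectorizer().fit(tweets + pro_args + con_args)
T, P, C = vec.transform(tweets), vec.transform(pro_args), vec.transform(con_args)

# each tweet is assigned to the side (pro/contra) whose arguments it resembles most
pro_sim = cosine_similarity(T, P).max(axis=1)
con_sim = cosine_similarity(T, C).max(axis=1)
for text, p, c in zip(tweets, pro_sim, con_sim):
    print(f"{text!r}: pro={p:.2f}, contra={c:.2f} -> {'pro' if p > c else 'contra'}")
```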

Besides the alignment of both data sources, there are numerous ways in which social media can complement survey data. Most notably, social media data are very reactive to events and can thus offer a useful complement to survey data when accounting for social movements, since the timing of a survey might not always coincide with the timing of a protest. Our results reflect major “real-life” events (e.g., strikes and mobilizations) and allow us to extract salient aspects surrounding these events.

Our study disentangles the circumstances in which social media reflect survey patterns, especially by looking at voting objects and main political concerns. It also highlights circumstances in which social media provide complementary insights to surveys, especially for the detection and analysis of social movements.



Personal Agenda Setting? The effect of following patterns on social media during Election

Yaron Ariel, Vered Elishar-Malka, Dana Weimann-Saks

Max Stern Academic College of Emek Yezreel, Israel

Relevance and research question:

Agenda-setting studies assume a correlation of agendas, in which the media agenda influences the audiences' agendas. This assumption has been continuously challenged in the current multi-channel online environment, where traditional media operate alongside social media accounts. Thus, scholars should posit that different audiences (perhaps even at the individual-user level) could form a “personal agenda-setting.” We explored differences in the salience of agenda topics among those exposed to content through different following patterns in online social networks.

Methods and data:

Respondents representing the Israeli voting population for the March 2020 elections were recruited from an online panel to create a cluster sample. After a filtering question about the use of online social networks, the study is based on the answers of 448 respondents. The questionnaire examined voting intentions, the topics on the respondents' agenda, and patterns of following candidates on the online social networks Facebook, Twitter, and Instagram.

Results:

When the prominent topics on the general agenda were examined, it was found that 48% of the respondents mentioned a security incident, 35% a health crisis, 22% a welfare issue, 20% an economic crisis, and 17% a coalition formation. Nonetheless, considerable differences exist when inspecting the respondents' following patterns: for example, there is a significant difference (t = 1.74, p < .05) between those who follow politicians' accounts on social networks and those who do not. Among followers, the topic of ‘Welfare’ was ranked significantly higher. Respondents who follow politicians on Twitter tend to rank the ‘Economic crisis’ higher (t = 1.8, p < .05). There is also a significant difference (t = 2.07, p < .05) for exclusive followers of the leading opposition candidate (Benjamin Gantz), who ranked the topic of ‘Health’ higher. Multivariate analyses were conducted to identify the personal agendas of specific topics in relation to several combinations of following patterns.

Added value:

This study implies that the traditional approach to agenda-setting research is less suitable for studying users’ agendas in online environments. A better understanding of how online agendas form is paramount when examining users’ passive and active exposure to political content through social networks.

 
11:30 - 12:30 CEST D1: GOR Best Practice Award 2021 Competition I
Session Chair: Alexandra Wachenfeld-Schell, GIM Gesellschaft für Innovative Marktforschung mbH, Germany
Session Chair: Otto Hellwig, respondi/DGOF, Germany

in German

sponsored by respondi
 
 

Mobility Monitoring COVID-19 in Switzerland

Beat Fischer1, Peter Moser2

1intervista AG, Switzerland; 2Statistical Office of the Canton of Zurich, Switzerland

Relevance & Research Question:

With the outbreak of the Corona pandemic in Switzerland, the authorities took measures and issued recommendations to severely restrict mobility behaviour. The questions arose as to whether the population would adhere to the measures and recommendations and what influence this would generally have on mobility behaviour in Switzerland.

Methods & Data:

On behalf of the Statistical Office of the Canton of Zurich, the Federal Statistical Office and the COVID-19 Science Task Force, the research institute intervista launched the Mobility Monitoring COVID-19 in March 2020. A geolocation tracking panel with 3,000 participants serves as the basis, and their locations are continuously recorded via a smartphone app. With the data, the distances travelled, the means of transport used, the purpose of mobility and the proportion of commuters are analysed in detail on a daily basis. Since the panel was already set up before the outbreak of the pandemic, data could be analysed retrospectively from 1 January 2020 onwards. The project is still running and new results are published on an ongoing basis.
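To illustrate the kind of processing behind the distance indicator, the sketch below sums great-circle distances between consecutive GPS fixes per panelist and day; the data frame and its columns are hypothetical, and the production pipeline (mode detection, trip purposes) is of course far more elaborate:

```python
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between consecutive GPS fixes."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# hypothetical location stream: one row per GPS fix
gps = pd.DataFrame({
    "panelist": [1, 1, 1, 1],
    "timestamp": pd.to_datetime(["2020-03-15 08:00", "2020-03-15 08:30",
                                 "2020-03-15 17:00", "2020-03-16 09:00"]),
    "lat": [47.37, 47.39, 47.37, 47.37],
    "lon": [8.54, 8.51, 8.54, 8.55],
}).sort_values(["panelist", "timestamp"])

# distance from the previous fix, reset at the start of each panelist's trace
gps["step_km"] = haversine_km(gps["lat"].shift(), gps["lon"].shift(),
                              gps["lat"], gps["lon"])
gps.loc[gps["panelist"] != gps["panelist"].shift(), "step_km"] = 0.0

daily_km = gps.groupby(["panelist", gps["timestamp"].dt.date])["step_km"].sum()
print(daily_km)
```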

Results:

With this study, which provides almost live data, current developments in mobility behaviour can be clearly traced. It showed that after the lockdown in March 2020, average daily distances fell from around 40 km to less than 15 km, and that both older and younger people cut back considerably. Commuting shares decreased and public transport has been used significantly less since the outbreak of the pandemic. At times, the use of public transport even dropped by about 80% compared to the time before the pandemic.

Added Value:

This study is of considerable value to the authorities as a tool for managing the pandemic. With the results, the effectiveness of the measures taken and recommendations made could be directly monitored. By disseminating the results in the media, the population received immediate feedback on how the social norm regarding mobility was changing, which may have additionally strengthened the effect of the measures taken. The monitoring also provides important planning data for the economy and serves as a basis for various scientific research projects.



Shifting consumer needs in the travel industry due to Covid-19 – AI based Big Data Analysis of User Generated Content

Johanna Schoenberger1, Jens Heydenreich2

1Dadora GmbH, Germany; 2Versicherungskammer Bayern, Germany

Relevance & Research Question: How does COVID-19 change the consumer needs of various types of Germany-based tourists, and how can a travel insurer provide maximum assistance in meeting these needs (of tourists, but also of tour operators and travel agencies)?

Methods & Data: Starting in May 2020, we analysed 14 million discussions from German-speaking travel forums (User Generated Content) using Artificial Intelligence, Natural Language Processing and Machine Learning.

Results: Awareness of (mostly) uncontrollable risks before and during a trip has increased significantly among all travelers due to the Corona pandemic. The desire to take out insurance for possible, unforeseeable reasons for cancellation, as well as for advice and support in disputes with tour operators, portals, etc., has grown considerably. Above all, the need for information when planning a trip has shifted noticeably, at least in the short term. Although Covid-19 only really hit Germany at the end of March / beginning of April 2020, results were already available by the end of May 2020. The derivation of measures therefore started as early as June 2020.

Added Value: The study made an important contribution to VKB's innovation challenge "ReStart Reise 2021" to identify and prioritize product and service elements for travel insurance. The most relevant results have been introduced to the market by VKB, such as a medical concierge service (https://www.vkb.de/content/services/reiseservices/) and coverage of COVID-related risks (https://www.urv.de/content/privatkunden/reiseversicherungen/covidschutz/), which are intended to positively support the travel and booking behavior of customers.



Hungry for Innovation: The Case of SV Group's Augmented Insights Brand Concept Fit Analysis

Steffen Schmidt1, Stephanie Naegeli2, Tobias Lang2, Jonathan T. Mall3

1LINK Marketing Services AG, Switzerland; 2SV (Schweiz) AG, Switzerland; 3Neuro Flash, Germany

Relevance & Research Question: SV Group is challenged to develop new catering concepts and adapt current ones, especially against the backdrop of the Corona pandemic and the emergence of new trends, such as increased home office work or changing office behavior. The aim of the empirical study was to analyze the fit and sharpen the positioning of different concepts and brands in order to not only remain the number one caterer in Switzerland, but also to continue to grow by developing new innovative catering concepts.

Methods & Data: First, an AI-based neurosemiotic Big Data web technique was used to uncover associations on the topic of "lunch and snacks at work" as initial input for the b2b and b2c online survey. For the survey itself, an implicit association test and the MaxDiff method were used. Universal structural modeling (USM) with Bayesian neural networks was applied to identify the most salient implicit associations. In addition, TURF analyses using MaxDiff scores uncovered the top feature combinations that resonate with the most consumers. The USM and MaxDiff-TURF results were in turn used as input for further neurosemiotic Big Data web analysis to create an extended association network. A total of n=250 b2c participants and n=248 b2b participants were surveyed in November 2020.
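For readers unfamiliar with TURF (total unduplicated reach and frequency), the sketch below shows the core idea on toy data: exhaustively search for the feature bundle that reaches the largest share of respondents. The feature names and the MaxDiff-derived "reach sets" are hypothetical, not the study's data:

```python
from itertools import combinations

# hypothetical MaxDiff output: for each respondent, the set of concept features
# that exceed their individual "would use this" threshold
reach_sets = [
    {"salad_bar", "coffee_corner"},
    {"takeaway_boxes", "coffee_corner"},
    {"salad_bar", "vegan_menu"},
    {"takeaway_boxes"},
    {"vegan_menu", "coffee_corner"},
]
features = sorted(set().union(*reach_sets))

def turf(reach_sets, features, k):
    """Exhaustive TURF: find the k-feature bundle reaching the most respondents."""
    best_combo, best_reach = None, -1
    for combo in combinations(features, k):
        reached = sum(1 for s in reach_sets if s & set(combo))
        if reached > best_reach:
            best_combo, best_reach = combo, reached
    return best_combo, best_reach / len(reach_sets)

print(turf(reach_sets, features, k=2))   # best 2-feature bundle and its reach
```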

Results: The results showed that most, but not all, of the catering concepts and only one brand offered sufficient activation potential. Considering the extended association network, four specific clusters were identified, which in turn were used as communicative input for the roll-out of the respective concepts. This four-cluster network enabled highly targeted and evidence-based positioning, especially when it comes to triggering the right associations at each touchpoint (website, app, etc.) in consumers' minds.

Added Value: The combination of methods created an innovative augmented insights loop from the start of the research to its end and beyond, building an evidence-based management foundation. The AI-based neurosemiotic Big Data web insights analysis was the initial starting point, but also the means to further refine the insights uncovered by the other advanced methods. In addition, it can now be used on a daily basis to review and, if necessary, optimize human-generated content (e.g., claims, product descriptions) in light of the identified salient association network without further surveying consumers. This approach ensures both substance and speed for better management with evidence.

 
11:30 - 12:30 CEST GOR Thesis Award 2021 Competition
Session Chair: Olaf Wenzel, Wenzel Marktforschung, Germany

sponsored by Tivian
 
 

Generalized Zero and Few Shot Transfer for Facial Forgery Detection

Shivangi Aneja

Technical University of Munich, Germany

Relevance & Research Question:

With recent developments in computer graphics and deep learning, it is now possible to create high-quality fake videos that look extremely realistic. Over the last two years, there has been tremendous progress in the creation of these altered videos, especially Deepfakes. This has several benign applications in computer graphics, but on the other hand, it can also have dangerous implications for society, such as in political propaganda and public shaming. In particular, fake videos of politicians can be used to spread misinformation. This calls for urgency in building a reliable fake video detector. New manipulation methods come out every day. So, even if we build a reliable detector for fake videos generated with one manipulation method, the question remains how successfully it will detect videos forged with a different and unseen manipulation method. This thesis is a step in this direction. Taking advantage of available fake video creation methods and using as few images as possible from a new and unseen manipulation method, the aim is to build a universal detector that detects, to the best of its capability, most of the fraudulent videos surfacing on the internet.

Methods & Data:

We begin the thesis by exploring the relationship between different computer-graphics and learning-based manipulation methods, i.e., we evaluate how well a model trained with one manipulation method generalizes to a different and unseen manipulation method. We then investigate how to boost the performance for a different manipulation method or dataset in case of limited data availability. For this, we explored a variety of transfer learning approaches and proposed a new transfer learning technique and an augmentation strategy. This proposed technique was found to be surprisingly effective in detecting facial manipulations in zero-shot (when the model has no knowledge about new videos) and few-shot (when the model has seen very few frames from the new videos) settings.

We used a standard classification backbone architecture (ResNet) for all our experiments and evaluated different pointwise metric-based domain transfer methods such as MMD, Deep CORAL, CCSA, and d-SNE. Since none of these methods worked well on unseen videos and datasets, we proposed a distribution-based approach in which we model each of our classes (real or fake) as a component of a mixture model; our model learns these distribution components, which we enforce with a loss function based on the Wasserstein distance. Inspired by our insights, we also propose a simple data augmentation strategy that spatially mixes up images from the same classes but different domains. The proposed loss function and augmentation cumulatively perform better than existing state-of-the-art supervised methods as well as transfer learning methods. We benchmarked our results on several face forgery datasets such as FaceForensics++, Google DF, and AIF, and even evaluated our results on in-the-wild deepfake videos (Dessa dataset).

The FaceForensics++ dataset provides fake videos created with 4 different manipulation techniques (Face2Face, FaceSwap, Deepfakes, and Neural Textures) and the corresponding real videos. The Google DF dataset provides high-quality deepfake videos. The AIF dataset, donated to the authors by the AI Foundation, is the most challenging; it consists of deepfake videos generated under very poor illumination conditions and in cluttered environments. Finally, we used the Dessa dataset, which consists of high-quality deepfake videos downloaded from YouTube.
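The spatial mix-up augmentation described above can be illustrated with a minimal numpy sketch that pastes a random patch of one image into another image of the same class but a different source domain; this is a simplified stand-in, not the exact augmentation used in the thesis:

```python
import numpy as np

def same_class_spatial_mix(img_a, img_b, rng=None):
    """Mix two images of the same class (real/fake) but different source domains
    by pasting a random rectangular patch of one into the other.

    img_a, img_b: (H, W, C) arrays of identical shape; the label is unchanged.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = img_a.shape[:2]
    ph, pw = rng.integers(h // 4, h // 2 + 1), rng.integers(w // 4, w // 2 + 1)
    top, left = rng.integers(0, h - ph + 1), rng.integers(0, w - pw + 1)
    mixed = img_a.copy()
    mixed[top:top + ph, left:left + pw] = img_b[top:top + ph, left:left + pw]
    return mixed

# hypothetical usage: a Face2Face fake mixed with a Deepfakes fake (both labeled "fake")
a = np.random.rand(128, 128, 3)
b = np.random.rand(128, 128, 3)
augmented = same_class_spatial_mix(a, b)
```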

Results:

We compare our results with current state-of-the-art transfer learning methods, and the experimental evaluation suggests that our approach consistently outperforms them. We also provide a thorough analysis of transferability among different manipulation methods, which gives a clear picture of which methods are more closely related to each other and exhibit good transfer. We notice that combined learning- and graphics-based methods transfer relatively well among each other, whereas purely graphics-based methods do not exhibit transfer. Additionally, we compare transfer across different datasets to explore out-of-distribution generalization. Overall, we achieve a large 10% improvement (64% to 74%) over the baseline for cross-dataset generalization where the model has never seen the videos (zero-shot) and a 7% improvement (78% to 85%) for few-shot transfer on in-the-wild deepfake videos.

Added Value:

The standard supervised classification models built by researchers detect fakes very well on the datasets they are trained on, but fail to generalize to unseen videos and datasets, a problem commonly known as out-of-domain generalization. With this thesis, we combat these failure cases and were able to successfully build an unsupervised algorithm in which our model has no or very little knowledge about the unseen datasets and is still able to generalize much better than standard supervised methods. Our proposed technique generalizes better than other state-of-the-art methods and hence generates more reliable predictions, and can thus be deployed to detect in-the-wild videos on social media and video sharing platforms. The proposed method is novel and effective: the thesis proposes a new loss function based on learning the class distributions that empirically generalizes much better than other loss functions. The added spatial augmentation further boosts the performance of our model by 2-3%. The proposed technique is not limited to faces but can also be applied to various other domains where the datasets are diverse and scarce.



How Does Broadband Supply Affect the Participation in Panel Surveys? An analysis of mode choice and panel attrition

Maikel Schwerdtfeger1,2

1GESIS - Leibniz-Institut für Sozialwissenschaften, Germany; 2University of Mannheim

Relevance & Research Question:

Over the last decades, online surveys have become a crucial part of quantitative research in the social sciences. This development yielded coverage strategies such as implementing mixed-mode surveys and motivated many scientific studies to investigate coverage problems. From the perspective of research on coverage, having a broadband connection often implies that people can participate in online surveys without any problems. In reality, the quality of the broadband supply can vary massively and thereby affect the online experience. Negative experiences lower the motivation to use online services and thus also reduce individual skills and preferences. Considering this, I expect that regional differences in broadband supply have a major impact on survey participation behavior, which leads me to the following research questions:

1st Research Question: How does the broadband supply affect the participation mode choice in a mixed-mode panel survey?

2nd Research Question: How does broadband supply determine attrition in panel surveys?

Methods & Data:

In order to investigate the effects of broadband supply on participation mode choice and panel attrition, I combine geospatial broadband data of the German “Breitbandatlas” and geocoded survey data of the recruitment interview and 16 waves of the mixed-mode GESIS Panel. The geospatial broadband data classifies 432 administrative districts in Germany into five ordinal categories according to their proportion of broadband supply with at least 50 Mbit/s, which is seen as a threshold value for sufficient data transmission.

To answer the first research question, I apply a binomial logistic regression model to estimate the odds of choosing the online participation mode based on broadband supply, internet familiarity, and further control variables. Besides broadband supply, I included internet familiarity as a substantially relevant independent variable based on previous research results in the field of participation mode choice.
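As an illustration of this first model, a minimal sketch with statsmodels is given below; the file name and variable names are hypothetical stand-ins for the combined GESIS Panel and Breitbandatlas data, not the actual analysis file:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical analysis file: one row per recruited panelist
# online_mode: 1 = chose the web mode, 0 = chose the mail mode
# broadband:  ordinal district-level supply category (1-5, from the Breitbandatlas)
# familiarity: internet familiarity score; age, education, gender as controls
df = pd.read_csv("gesis_panel_recruitment.csv")

model = smf.logit(
    "online_mode ~ C(broadband) + familiarity + age + C(education) + C(gender)",
    data=df,
).fit()
print(model.summary())
print(np.exp(model.params))   # odds ratios for easier interpretation
```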

Following the theoretical background (see 2.2. Mode choice), I expect a person deciding between online or offline participation in a recruitment interview to consider their last and most prominent internet experiences with a particular focus on their internet familiarity and their perceived waiting times. The waiting times are largely affected by the data transmission rate of the available broadband supply.

Consequently, I derive the following two hypotheses for participation mode choice in mixed-mode panel surveys that provide web-based and paper questionnaires:

1st Hypothesis: Having a more pronounced internet familiarity increases the probability of deciding for online participation in a mixed-mode panel.

2nd Hypothesis: Living in a region with better broadband supply increases the probability of deciding for online participation in a mixed-mode panel.

To answer the second research question, I apply a Cox regression model to estimate the hazard ratios of panel dropout based on broadband supply, perceived survey duration, and further control variables. Besides broadband supply, I considered perceived survey duration as substantially relevant based on previous research results in the field of panel attrition.
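Similarly, a minimal sketch of the attrition model using the lifelines implementation of Cox regression, again with a hypothetical file and hypothetical column names:

```python
import pandas as pd
from lifelines import CoxPHFitter

# hypothetical attrition file: one row per online panelist
# waves_until_dropout: number of waves observed; dropped_out: 1 = attrited, 0 = censored
# broadband: district-level supply category; perceived_duration: evaluated survey length
df = pd.read_csv("gesis_panel_attrition.csv")

cph = CoxPHFitter()
cph.fit(
    df[["waves_until_dropout", "dropped_out", "broadband", "perceived_duration",
        "age", "education_years"]],
    duration_col="waves_until_dropout",
    event_col="dropped_out",
)
cph.print_summary()   # hazard ratios for broadband supply and perceived duration
```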

According to the theoretical background (see 2.3. Panel attrition), I expect a person in a panel survey to constantly evaluate their satisfaction and burden of participation, whereas the flow experience and the perceived expenditure of time are the crucial factors in the decision process. The flow experience is largely determined by the quality of the available broadband supply. Consequently, I derive the following two hypotheses for attrition in panel surveys:

3rd Hypothesis: Living in a region with better broadband supply decreases the risk of attrition in an online panel survey.

4th Hypothesis: Evaluating the survey duration as shorter decreases the risk of attrition in an online panel survey.

Results:

The results of the first analysis show that both living in a region with better broadband supply and having a higher internet familiarity increase the probability of choosing the online mode in a mixed-mode panel survey. However, the effect of internet familiarity is found to be substantially more powerful and stable.

The results of the second analysis show that a longer perceived survey duration increases the risk of panel dropout, whereas the effect of broadband supply is small, opposite to the hypothesis, and not significant.

For the interpretation of the results in the overall context, it must be noted that the classification of about 400 administrative districts in Germany into five groups with different proportions of sufficient broadband supply is not ideal for the purpose of this analysis. Despite this limitation, the weak effect of broadband supply in the first analysis suggests greater potential in this methodological approach. In the discussion section, I provide further details on this issue and an outlook for a follow-up study that can test the presented methodological approach with more precise broadband data.

Added Value:

The present study aims to expand methodological research in the context of online surveys in two different ways. First, the approach of combining geospatial data on broadband supply and survey data is a novelty in survey methodology. The advantage is that there is no need to ask additional questions about the quality of the internet connection, which reduces survey duration. Additionally, geospatial data is not affected by motivated or unintentional misreporting of respondents. This is particularly important in the case of information that is excessively biased by subjective perceptions or by misjudgments due to lack of knowledge or interest. Technical details on broadband supply are vulnerable to this kind of bias.

Second, analyzing response behavior in the context of available broadband supply allows conclusions to be drawn about whether participants with poor broadband supply still choose the online mode and, if so, whether they have a higher probability of panel attrition than panelists with better broadband supply. These conclusions can be used to develop targeting strategies that actively guide the participation mode choice based on the panelists' residence, thereby reducing the likelihood of panel attrition.



Voice in Online Interview Research

Aleksei Tiutchev

HTW Berlin, Germany

Relevance & Research Question: Recently, voice and speech technologies have improved significantly, reaching high speech recognition accuracy for the English language. Among other fields, these technologies can also be applied in market research. In recent years, only a few studies have addressed the possibility of using speech recognition in online market research. This thesis investigates the possibility of incorporating speech recognition technology into online surveys in various languages on six continents. The research question is: “What is the impact of voice in global online interviewing, using the example of several languages and countries, regarding…

... technological capabilities of participants?

... willingness to participate?

... quality of voice answers?

... the respondents’ level of engagement?

... respondents’ satisfaction?”

Methods & Data: Based on a review of the current state of speech recognition and related literature, online questionnaires with voice input and text input in five languages (English, German, French, Russian, and Spanish) were created and distributed through an online panel to 19 countries. The questionnaires consisted of 40 questions, including 14 open questions on various topics, which participants could answer either with text or with voice depending on their technical possibilities and willingness to participate in a voice study. In addition to the open questions, the surveys included Kano Model questions to measure how the respondents perceive the possibility of answering the survey with voice, a Net Promoter Score question, and others. The data were collected between September 3, 2020, and October 27, 2020, and 1,958 completed questionnaires became the focus of the study. Of all completed surveys, 1,000 were filled in with text input, whereas 958 were filled in with voice input. The collected data were analysed with IBM SPSS Statistics v.27.
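As a small aside, the Net Promoter Score mentioned above is computed as the share of promoters (ratings 9-10) minus the share of detractors (ratings 0-6); a sketch on made-up ratings, not the study's data:

```python
import pandas as pd

def nps(scores):
    """Net Promoter Score from 0-10 recommendation ratings:
    % promoters (9-10) minus % detractors (0-6)."""
    s = pd.Series(scores)
    return 100 * ((s >= 9).mean() - (s <= 6).mean())

# hypothetical comparison of the two input conditions
voice_scores = [10, 9, 8, 7, 9, 6, 10]
text_scores = [8, 7, 9, 6, 5, 10, 7]
print(f"NPS voice: {nps(voice_scores):.0f}, NPS text: {nps(text_scores):.0f}")
```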

Results: The results of the study demonstrated that the technological capabilities of the respondents to participate in the voice research varied from country to country. The highest number of browsers and devices that support voice input was observed in developing countries. Those countries also had the highest number of participants who use smartphones to fill in the questionnaires. At the same time, in developed countries, due to the popularity of iOS devices, which did not support voice input, it was more challenging to conduct voice research. Even with the technical possibilities in place, 43 per cent of respondents were still unwilling to grant access to their microphones. The answers collected through voice input were 1.8 times longer than the text input answers. At the same time, questions with voice input took on average two seconds more time to answer. Moreover, surveys with voice input had a dropout rate twice as high. Participants with voice input were more satisfied with the surveys and showed a very high willingness to participate in voice studies again. Meanwhile, respondents’ technological capabilities to participate in voice surveys, dropout rates, response times, and the quality of voice answers differed significantly depending on the country. Analysis of the Kano Model questions demonstrated the participants’ indifference to the possibility of answering the surveys with voice. Key Driver Analysis demonstrated that categories such as tech-savviness, early adoption, or data security concerns did not influence respondents’ willingness to participate in voice research again. Meanwhile, the most important categories that influenced this decision were frequency of internet usage and information-seeking behaviour.

Added Value: The study results have partially confirmed previous research on the use of speech recognition in online questionnaires with regard to higher dropout rates and longer answers (in terms of characters) for voice input. At the same time, some results have contradicted previous studies, as the voice answers turned out to be longer in time than text input answers, thus not confirming the lower response burden of voice input in online surveys. In addition, the results of the study have complemented existing research and provided more information about the use of voice input in online surveys in different countries. The technology is still new and currently not all devices support it, which makes the research more complicated, more expensive, and more time-consuming in countries where many devices lack support. Starting from the technological possibilities of voice questionnaires through to dropout rates and the amount of data received with voice input, everything varied significantly and depended notably on the geographical location of the study. Even though voice input in online surveys requires more effort, demands higher costs for participant recruitment, and the transcriptions are not perfect in terms of quality, especially in non-English languages, marketers and researchers in different industries might consider using voice input in their studies to receive extensive, high-quality data through online questionnaires. This method may allow professionals to conduct research among people who otherwise cannot or do not want to participate in classical text surveys.

 
12:30 - 12:50 CEST Break
 
12:50 - 13:50 CEST P 1.1: Poster I
sponsored by GIM
 
 

Role of risk and trust beliefs in willingness to submit photos in mobile surveys

Jošt Bartol1,2, Vasja Vehovar1, Andraž Petrovčič1

1Centre for Social Informatics, Faculty of Social Sciences, University of Ljubljana, Slovenia; 2Faculty of Arts, University of Ljubljana, Slovenia

Relevance & Research Question: Smartphones provide promising new ways to collect survey data. An interesting option is to ask respondents to submit photos. However, virtually no research exists on how trust in surveyors’ proper handling of submitted photos and risk beliefs related to submitting photos in mobile surveys affect the willingness to submit photos. Thus, we addressed three research questions: (1) What are smartphone users’ attitudes toward submitting photos in a mobile survey? (2) How do trust and risk beliefs differ according to the sensitivity of photos? (3) How do trust and risk beliefs affect the willingness to submit photos in a mobile survey?

Methods & Data: A follow-up subsample of respondents from the Slovenian Public Opinion Survey was used (n = 280 smartphone users). Respondents were presented with a hypothetical scenario of a mobile survey requesting three different photos: a window panorama, an open refrigerator, and a selfie. The respondents were first asked in an open-ended question to write their thoughts about the scenario. Next, they were asked about their willingness to submit the three photos and to indicate their trust and risk beliefs for each. The data were analyzed qualitatively (open-ended question), and quantitatively by three regression models.

Results: The respondents believed that submitting photos can be a threat to anonymity, and they would only submit photos that they did not perceive as too sensitive in terms of possible abuse. Interestingly, 47.9% of respondents would submit a photo of a window panorama, 40.4% a photo of an open refrigerator, and only 8.6% a selfie. Additionally, photos perceived as more sensitive were associated with lower trust and higher risk beliefs. Moreover, trust beliefs increased the willingness to submit photos while risk beliefs decreased it.

Added Value: The study indicates that only photos that respondents do not perceive as a threat to their anonymity can be collected in mobile surveys. Indeed, risk and trust beliefs play an important role in the decision to submit photos. Future research might investigate different types of trust and risk beliefs as well as study respondents’ actual submission of photos in mobile surveys.



Survey Attitudes and Political Engagement: Not Correlated as Expected for Highly Qualified and Professional Respondents

Isabelle Fiedler, Thorsten Euler, Ulrike Schwabe, Andrea Schulze, Swetlana Sudheimer

German Centre for Higher Education Research and Science Studies, Germany

Relevance & Research Question:

In times of declining response rates, investigating the determinants of survey participation in general and panel participation in particular is of special importance. Empirical evidence indicates that general attitudes towards surveys do predict willingness to participate in (online) surveys (de Leeuw et al. 2017; Jungermann et al. 2019). Beyond survey attitudes themselves, however, political engagement can be seen as another predictor of survey participation (Silber et al. 2020). The underlying assumption is that answering questions is one way to express personal opinions. Therefore, we analyse to what extent survey attitudes and political engagement are associated.

Methods & Data:

We use data from two different panel surveys of highly qualified respondents: starting cohort 5 of the National Educational Panel Study (NEPS, n=3,879) and the 2009 cohort of the DZHW graduate panel (DZHW GP, n=619). Both surveys include the Survey Attitude Scale (SAS) in its nine-item short form as proposed by de Leeuw et al. (2019) as well as different measures of political engagement.

Results:

Overall, our results show only weak and few significant correlations between the three dimensions of the SAS and different measures of political engagement. Survey Value shows significant positive correlations with different measures of social trust and with political interest. In contrast, Survey Burden is significantly negatively associated with participation in the last national election, general trust in others, and general political activities. Finally, we find significant positive correlations between Survey Enjoyment and political interest as well as membership in a political party or association.

Added Value:

In sum, our empirical findings do not show the theoretically expected strong associations between the SAS and political engagement. However, our sample consists of participants in already well-established panel studies. Being asked in the 14th wave in the case of NEPS and in the third wave (within ten years) in the case of the DZHW GP, they can be regarded as professional respondents. Consequently, we suggest replicating the study by Silber et al. (2020) with a freshly sampled group of highly qualified respondents, because it is interesting to contrast this group against the general population.

 
12:50 - 13:50 CEST P 1.2: Poster II
sponsored by GIM
 
 

Covid-19 and the attempt to understand the new normal – A behavioral science approach

Prof. Dirk Frank1,2, Evelyn Kiepfer2, Manuela Richter2

1University of Applied Sciences Pforzheim; 2ISM GLOBAL DYNAMICS, Germany

The market research industry has reacted to the massive uncertainties regarding future consumer behaviour emerging from the corona pandemic. It is providing the various stakeholders from industry and society with numerous studies intended to guide them through the thicket of the New Normal: What (changed) attitudes do consumers show because of Corona? How do priorities in purchasing behaviour change, as do our needs? Most published studies follow the classical “explicit” attitude measurement paradigm using scaled answers. As a consequence, most findings claiming to predict future or describe current consumer behaviour in the pandemic suffer from the well-researched “say-do gap” and the general weakness of explicit attitude measures in predicting real behaviour. In an international study we applied an implicit, reaction-time-based methodology to assess Covid-related attitudes (towards politics, nutrition, vaccination, health-related behaviours) to highlight differences between countries in coping with Corona and to show a methodological approach for separating pure lip service from real behaviour intentions.

Led by our Polish research partner NEUROHM, the large-scale global comparative study “COVID-19 Fever” was conducted between late April and early May 2020, followed by a national wave in Germany in January 2021 to assess attitudes towards vaccination in more detail. The international study was conducted in ten countries with 1,000 respondents each as a syndicated project involving universities and commercial research agencies specializing in behavioural economics. The theoretical basis of NEUROHM's measurement model (iCode, see also Ohme, Matukin & Wicher 2020) is the “Attitude Accessibility” model of Fazio (1989). iCode is an algorithm that allows the calculation of a confidence index (CI), which integrates the explicit and implicit measures of attitudes in one score, showing the tension between rationalizing opinions and the underlying security and trustworthiness in the form of implicit confidence.

The results clearly showed the need to distinguish between superficial, socially desirable answers and implicit, well-internalised beliefs when it comes to coping with Covid-19. If politicians or companies want to develop sound strategies based on highly predictable behaviours of consumers or citizens, they should add research paradigms from behavioural economics to their studies.



Gender and Artificial Intelligence – Differences Regarding the Perception, Competence Self-Assessment and Trust

Swetlana Franken, Nina Mauritz

Bielefeld University of Applied Sciences, Germany

Relevance & Research Question:

Technical progress through digitalisation is constantly increasing. Currently, the most relevant and technically sophisticated technology is artificial intelligence (AI). Due to the strong influence of AI, it is necessary that it meets with broad social acceptance. However, it is apparent that the prerequisites for this are distributed differently according to gender. Women are less frequently involved in research and development on AI. What are the differences between men and women in their perception, evaluation, development, and use of AI in the workplace?

Methods & Data:

A quantitative online survey consisting of 45 items was conducted among company representatives and students from July to September 2020 (N = 382; age: M = 35.9, SD = 13.5; 69.6% female; 61.4% university degree). To determine differences in the variables of interest, a t-test or ANOVA was calculated if the prerequisites were fulfilled.
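For illustration, the sketch below runs the two kinds of group comparison reported in this study on simulated scores (the group sizes and values are made up, not the study's data); the Mann-Whitney U test is the non-parametric fallback used when the t-test's prerequisites are not met:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical item scores (e.g., trust in AI) for female and male respondents
trust_female = rng.normal(3.2, 0.9, 222)
trust_male = rng.normal(3.6, 0.9, 97)

# independent-samples t-test, used when normality/variance assumptions hold
t, p = stats.ttest_ind(trust_female, trust_male)
print(f"t = {t:.2f}, p = {p:.3f}")

# Mann-Whitney U as the non-parametric alternative
u, p_u = stats.mannwhitneyu(trust_female, trust_male)
print(f"U = {u:.1f}, p = {p_u:.3f}")
```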

Results:

The results show that men, in contrast to women, see more opportunities in AI (t(317) = -2.88, N = 319, p = .004), rate their own AI-competence higher (t(317) = -6.65, N = 319, p < .001), and trust more in AI (U = 8401.00, Z = -3.604, p < .001). One reason for the significant results could be the fact that men are more involved and have more experience with AI than women (χ² (2, N = 319) = 7.902, p = .019). Men and women agree in their desire for better traceability in AI-decision-making processes (t(317) = .375, N = 319, p = .708), and both show a high motivation for further training (t(317) = -.522, N = 319, p = .602).

Added Value:

Developing one's own AI-competence takes away fears and promotes trust and acceptance towards AI – an important prerequisite for openness towards AI. Promoting interest in and the willingness to deal with AI can at the same time sensitize people to the possible risks of AI applications in terms of prejudice and discrimination and mobilize more women to engage in AI development.

 
12:50 - 13:50 CEST P 1.3: Poster III
sponsored by GIM
 
 

Willingness to participate in in-the-moment surveys triggered by online behaviors

Carlos Ochoa, Melanie Revilla

Research and Expertise Centre for Survey Methodology, Universitat Pompeu Fabra

Relevance & Research Question:

Surveys are a fundamental tool of empirical research. However, surveys have limitations that may produce errors. One of their most well-known limitations is related to memory recall errors: people can have difficulties recalling relevant data related to events of interest to researchers. Passive data solve this problem partially. For instance, online behaviours are increasingly researched using tracking software (a “meter”) installed on the browsing devices of members of opt-in online panels, registering which URLs they visit. However, such a meter also suffers from new sources of error (e.g., the meter may temporarily fail to collect data). Moreover, part of the objective information cannot be collected passively, and subjective information is not directly observable. Therefore, some information gaps must be filled, and some information must be validated. Asking participants about such missing/dubious information using web surveys conducted at the precise moment an event of interest is detected has the potential to fill the gap. However, the extent to which people may be willing to participate raises doubts about the applicability of this method. This paper explores, using a conjoint experiment, which parameters affect the willingness to participate in in-the-moment web surveys triggered by the online activity recorded by a meter installed by the participants on their devices.

Methods & Data:

A cross-sectional study will be developed to ask members of an opt-in panel (Netquest) in Spain about their willingness to participate in in-the-moment surveys. A choice-based conjoint analysis will be used to determine the influence of different design parameters and of participants' characteristics.
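
A minimal sketch of how choice tasks for such a choice-based conjoint could be assembled is shown below; the attribute levels (survey length, maximum time allowed, incentive) are assumptions based on the parameters named in this abstract, not the actual Netquest design.

```python
# Minimal sketch (assumed attribute levels, not the actual study design):
# assembling choice tasks for a choice-based conjoint on in-the-moment surveys.
import itertools
import random

attributes = {
    "survey_length": ["2 min", "5 min", "10 min"],
    "time_allowed": ["15 min", "1 hour", "24 hours"],
    "incentive": ["none", "0.50 EUR", "2 EUR"],
}

# Full factorial of profiles (3 x 3 x 3 = 27 combinations)
profiles = [dict(zip(attributes, combo))
            for combo in itertools.product(*attributes.values())]

random.seed(1)

def make_choice_task(n_alternatives=2):
    """Draw distinct profiles to present side by side in one choice task."""
    return random.sample(profiles, n_alternatives)

# Example: 8 choice tasks per respondent, two alternatives each
tasks = [make_choice_task() for _ in range(8)]
for i, task in enumerate(tasks, 1):
    print(f"Task {i}: {task[0]}  vs.  {task[1]}")
```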

Results:

This research is in progress; results are expected in July 2021. Three key parameters are expected to play a crucial role in the willingness to participate: the length of the interview, the maximum time allowed to participate, and the incentive offered.

Added Value:

This research will make it possible to design effective experiments for collecting data in the moment and to demonstrate the actual value of this method. The use of a conjoint experiment is a new approach to exploring the willingness to participate in research activities and may lead to a better understanding of the factors that influence participation.



Memory Effects in Online Panel Surveys: Investigating Respondents’ Ability to Recall Responses from a Previous Panel Wave

Tobias Rettig1, Bella Struminskaya2, Annelies G. Blom1

1University of Mannheim, Germany; 2Utrecht University, the Netherlands

Relevance & Research Question:

Repeated measurements of the same questions from the same respondents have several applications in survey research, including longitudinal studies, pretest-posttest experiments, and the evaluation of measurement quality. However, respondents’ memory of their previous responses can introduce measurement error into repeated questions. While this issue has recently received renewed interest from researchers, most studies have only investigated respondents’ ability to recall their responses within cross-sectional surveys. The present study aims to fill this gap by investigating how well respondents in a probability-based online panel can recall their responses in a longitudinal setting after 4 months.

Methods & Data:

Respondents of the German Internet Panel (GIP) received 2 questions on environmental awareness at the beginning of the November 2018 wave. Four months later, respondents were asked (1) whether they could recall their responses to these questions, (2) to repeat their responses, and (3) how certain they were about their recalled answer. We compare the proportions of respondents who correctly repeated their previous response among those who stated that they could recall it and those who did not. We also investigate possible correlates of correctly recalling previous responses, including question type, socio-demographics, panel experience, and perceived response burden.
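
The core comparison can be illustrated with a small mock dataset (not the GIP data): the share of correctly repeated responses is computed separately for respondents who state they recall their previous answer and those who do not.

```python
# Illustrative sketch with mock data: comparing correct recall among
# respondents who claim to remember their previous answer vs. those who do not.
import pandas as pd

df = pd.DataFrame({
    "claims_recall":   [True, True, False, True, False, False, True, False],
    "wave1_response":  [3, 5, 2, 4, 1, 3, 5, 2],
    "repeat_response": [3, 4, 2, 4, 2, 1, 5, 3],
})

df["correct_recall"] = df["wave1_response"] == df["repeat_response"]
print(df.groupby("claims_recall")["correct_recall"].mean())
```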

Results:

Preliminary results indicate that respondents can correctly repeat their previous response in about 29% of all cases. Responses to attitude and behavior questions were more likely recalled than responses to belief questions, as were extreme responses. Age, gender, education, panel experience, perceived response burden, switching devices between waves and participation in the panel wave between the initial questions and repetitions did not have significant effects on recall ability.

Added Value:

The implications of respondents’ ability to recall their previous responses in longitudinal studies are nearly unexplored. This study is the first to examine respondents’ recall ability after a realistic interval for longitudinal settings of 4 months, which is an important step in determining adequate time intervals between question repetitions in longitudinal studies for different types of questions.



Default layout settings of sliders and their problems

Florian Röser, Stefanie Winter, Sandra Billasch

University of Applied Sciences Darmstadt, Germany

Relevance & Research Question:

In online survey practice, sliders are increasingly used to answer questions or to measure attitudes and agreement. In the social sciences, however, the rating scale is still the most widely used scale type. The question arises as to whether the default layout settings of these two scale types in online survey systems affect respondents’ answers (initially independent of the content of the questions).

Methods & Data:

We used a 2 (rating scale vs. slider) x 2 (default vs. adjusted layout) factorial experimental design. Each subject answered 2 personality questionnaires, which were taken from the ZIS (open access repository for measurement instruments) database: A questionnaire with an agreement scale (Big Five Inventory-SOEP (BFI-S); Schupp & Gerlitz, 2014) with originally 7 response options and a questionnaire with adjective pairs (Personality-Adjective Scales PASK5; Brandstätter, 2014) with originally 9 levels. In one setting, the default layout for a slider was used in the LimeSurvey survey tool. In another setting, the layout of the slider was adjusted so that the endpoints of the slider stopped where the first and last crosses could be placed on the scale.

Results:

A total of 344 subjects participated in the study. For most personality traits (regardless of the questionnaire), slider responses differed significantly between the default and the adjusted design. With the default slider design, responses shifted significantly toward the middle of the scale compared to the rating scale.

Added Value:

With this study we were able to show that using a slider with the default layout in online surveys can lead to different results than a classical rating scale, and that this effect can be prevented by adjusting the slider's layout. This result should make online researchers cautious about simply changing an answer type while relying on default layout settings, and it should stimulate further research into the exact causes and conditions of this effect.

 
12:50 - 1:50 CEST P 1.4: Poster IV
sponsored by GIM
 
 

Inequalities in e-government use among older adults: The digital divide approach

Dennis Rosenberg

University of Haifa, Israel

During the past two decades, governments across the globe have been utilizing the online space to provide their information and services. Studies report that several population groups, including older adults, obtain governmental information and services via the Internet at relatively low rates. However, little attempt has been made to understand what distinguishes e-government adopters from non-adopters in later life. The goal of the current study was to examine socio-demographic disparities in e-government use among older adults through the lens of the digital divide approach. The data for the current study were obtained from the 2017 Israel Social Survey. The sample (N = 1173) included older adults (aged 60 and older) who responded either positively or negatively to the item assessing e-government use in the three months prior to the survey. Logistic regression served for the multivariable analysis. The results suggest that being male, of younger age, having an academic level of education, being married, and using the Internet on a daily basis increase the likelihood of e-government use among older adults. These results lead to the conclusion that the digital divide characterizes e-government use in later life, similar to other uses of the Internet. The results emphasize the need to further socialize older adults in using government services online, in light of the ongoing transition of these services into the online sphere, the numerous advantages of providing these services online, and their major relevance in later life.
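
A hedged sketch of the type of analysis described (logistic regression of e-government use on socio-demographic predictors) is given below; it uses simulated data rather than the Israel Social Survey, and the variable names are placeholders.

```python
# Hedged sketch (mock data, not the Israel Social Survey): logistic regression
# of e-government use on socio-demographic predictors of the kind discussed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "egov_use": rng.integers(0, 2, n),          # 1 = used e-government services
    "male": rng.integers(0, 2, n),
    "age": rng.integers(60, 90, n),
    "academic_degree": rng.integers(0, 2, n),
    "married": rng.integers(0, 2, n),
    "daily_internet": rng.integers(0, 2, n),
})

model = smf.logit(
    "egov_use ~ male + age + academic_degree + married + daily_internet",
    data=df,
).fit(disp=False)
print(model.summary())
# Odds ratios are often easier to interpret than raw coefficients:
print(np.exp(model.params))
```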



Ethnic differences in utilization of online sources to obtain health information: A test of the social inequality hypotheses

Dennis Rosenberg

University of Haifa, Israel

Relevance & Research Question: People tend to utilize multiple sources of health information. Although ethnic differences in online health information search has been studied, little is known about such differences in utilization of specific online health information sources and their variety. The research question is: do ethnic groups differ in their likelihood of utilizing online sources of health information?

Methods & Data: The data were attained from the 2017 Israel Social Survey. The study population included adults aged 20 and older (N = 1764). Logistic regression was used as a multivariate statistical technique.

Results: Jews were more likely than Arabs to search for health information using the call centers or sites of Health Funds and other sites, and more likely to search for health information using more than one type of site. In contrast, Arabs were more likely to search for health information on the website of the Ministry of Health.

Added Value: The study used social inequality theories for examination of ethnic differences in use of online health information sources, while referring to specific sources of such information and their variety.



Recommendations in times of crisis - an analysis of YouTube's algorithms

Sophia Schmid

Kantar Public, Germany

Relevance and research question:

In recent years, the video platform YouTube has become an increasingly important source of information, particularly for young users. During the Covid-19 pandemic, almost a fifth of all Germans used YouTube to find information on the pandemic. At the same time, disinformation on social media reached a peak in what the WHO called an “infodemic”. We therefore set up a study on the amount of disinformation and the degree of media diversity in YouTube’s video recommendations. As recommendations are an important driver of video reach, we were interested in whether YouTube’s recommendation algorithms promote disinformation. Moreover, the analysis set out to determine which videos, channels, and topics dominate video recommendations.

Methods and data:

The study consisted of a three-step research design. As a first step, a custom-built algorithmic tool recorded almost 34,000 YouTube recommendations consisting of over 8,000 videos. After enriching those with metadata, we quantitatively analysed variables like channel type, number of views or likelihood of disinformation. In a second step, we selected 210 videos and quantitatively coded them on the specific topic or amount of disinformation. Finally, a qualitative content analysis of 25 videos delved into the characteristics and commonalities of disinformative videos. The poster will detail the specific make-up of this three-step methodology.
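
The metadata-enrichment step can be illustrated generically (this is not the custom-built tool used in the study): given a list of recommended video IDs, the public YouTube Data API v3 returns channel, title, and view counts. The API key and video IDs below are placeholders, and the fields actually analysed in the study may differ.

```python
# Generic illustration: enriching recommended video IDs with metadata via the
# public YouTube Data API v3. API_KEY and VIDEO_IDS are placeholders.
import requests

API_KEY = "YOUR_API_KEY"
VIDEO_IDS = ["dQw4w9WgXcQ"]  # hypothetical IDs collected from recommendations

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/videos",
    params={"part": "snippet,statistics",
            "id": ",".join(VIDEO_IDS),
            "key": API_KEY},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("items", []):
    print({
        "video_id": item["id"],
        "channel": item["snippet"]["channelTitle"],
        "title": item["snippet"]["title"],
        "views": int(item["statistics"].get("viewCount", 0)),
    })
```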

Results:

Our study showed that on the one hand, YouTube seems to have altered its recommendation algorithms so they recommend less disinformation, even though it is still present. However, on the other hand, the algorithm severely limits media diversity. Only a handful of videos and channels dominate the recommendations, without it being apparent which characteristics make a video more likely to be recommended.

Added value:

This study shows how a “big data” approach can be combined with more traditional research methodologies to provide extensive insight into the structures of social media. Moreover, it helped assess the extent of disinformation on YouTube and provided a window into what social media recommendation algorithms prioritise and how.



Residential preferences on German online accommodation platforms

Timo Schnepf

BIBB, Germany

Relevance & Research Question: Online accommodation platforms are a currently untapped source for investigating the demand side of the housing market in urban areas. I show how this data source can be used to study the individual residential preferences (RP) of different social groups for 236 districts mentioned in requests from 4 German cities. Such information is otherwise hard to collect with regular survey methods. Furthermore, I build a comparable “socio-economic residential preferences index” (SERPI) for each district and city, based on the differing residential preferences of academics and jobseekers.

Methods & Data: Between July 2019 and April 2020, I scraped housing requests uploaded to Ebay-Kleinanzeigen for 8 German cities, collecting 19,123 individual requests. Online accommodation requests serve as a good data source for natural language processing tasks as they are highly structured. I used named entity recognition and word matching to extract (i) (informal) (sub-)district residential preferences and (ii) socio-economic characteristics of the apartment seekers, for instance employment status, occupational status, family status, and maximum rent. I assume the sample is biased towards individuals who ‘seize every chance’ to find a new apartment.
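
As a sketch of the general idea (not the author's actual pipeline), district mentions in a request can be extracted by phrase matching against a gazetteer with spaCy; the district list and example request below are made up, and the German language model must be installed separately.

```python
# Sketch of phrase matching district mentions in a housing request with spaCy.
# Requires the German model: python -m spacy download de_core_news_sm
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("de_core_news_sm")

# Hypothetical gazetteer of (sub-)district names for one city
districts = ["Sternschanze", "Farmsen-Berne", "Eimsbüttel", "Altona"]

matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("DISTRICT", [nlp.make_doc(d) for d in districts])

request = ("Suche 2-Zimmer-Wohnung in Sternschanze oder Altona, "
           "max. 900 Euro warm, bin berufstätig.")
doc = nlp(request)

mentioned = {doc[start:end].text for _, start, end in matcher(doc)}
print(mentioned)  # {'Sternschanze', 'Altona'}
```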

Results: I find the largest differences between the residential preferences of academics and jobseekers in Hamburg (SERPI = 0.89), Munich (0.83), and Cologne (0.73), and the smallest differences in Berlin (0.36). Among the 236 districts in those four cities, the district "Farmsen-Berne" (Hamburg) shows the strongest concentration of residential preferences from jobseekers but almost no RP from academics (SERPI = -5.32). The strongest concentration of RP from academics is also found in Hamburg, in "Sternschanze" (3.10). Berlin's districts show the lowest levels of RP segregation. (More on the dashboard https://germanlivingpreferences.herokuapp.com/ (user: kleinanzeigen, pw: showme))

Added Value: The study presents a new approach for urban research to investigate residential preferences from actual apartment seekers - potentially in real time. The SERPI is a new instrument to investigate spatial segregation and gentrification processes. Further research could - for instance - investigate the causes of group specific RP.

 
12:50 - 1:50 CEST P 1.5: Poster V
sponsored by GIM
 
 

Does the Way Demanding Questions Are Presented Affect Respondents’ Answers? Experimental Evidence from Recent Mixed-Device Surveys

Thorsten Euler, Isabelle Fiedler, Andrea Schulze, Ulrike Schwabe, Swetlana Sudheimer

Deutsches Zentrum für Hochschul- und Wissenschaftsforschung (DZHW), Germany

Within the framework of total survey error, systematic bias – which negatively affects data quality – can occur either on the side of measurement or on the side of representation (Groves et al. 2004). We address measurement bias by asking whether the way questions are presented in online surveys affects response behaviour (Rossmann et al. 2018). As online surveys are mixed-device surveys (Lugtig & Toepoel 2015), questions are presented differently to mobile and non-mobile respondents.

To answer this question, we implemented a split-half survey experiment in two recently conducted online surveys of students in the summer terms of 2020 (n=29,389) and 2021 (n=10,044). As examples of cognitively demanding questions, we use two items: (i) time use (for private and study-related activities) during the semester and the semester break, and (ii) sources of income per month. Both are open questions. We regard them as cognitively demanding because the retrospective information needs to be retrieved without any pre-formulated categories being offered; the retrieval process relies only on accessible information. As mobile devices provide less display space, item grids are split into single parts. Thus, we expect that the answers given depend on the way the question is presented. To test our assumptions, the item grid is split differently for mobile devices.

To check for differences in response behaviour, we first show descriptives for break-offs, response times, and missing values for all groups. In a second step, we compare means and standard deviations between the control and experimental groups. Our results indicate that response behaviour differs depending on how the question is presented. However, these patterns are quite mixed across the specific questions asked.

Overall, our results have direct implications for designing mixed-device surveys for highly qualified populations. Especially among students, using mobile devices to participate in surveys is becoming more relevant. Thus, the question of how cognitively demanding questions are presented is of special importance for designing self-administered online surveys: the presentation context affects answering behaviour. We close by reflecting on the generalizability of our findings.



Psychological factors as mediators of second screen usage during viewing sport broadcasts

Dana Weimann-Saks, Vered Elishar-Malka, Yaron Ariel

Max Stern Academic College of Emek Yezreel, Israel

Relevance and research question: One of the major sports events in the world is the World Cup. This study examines the effect of enjoyment of, and transportation into, the broadcast events on using social media as a second screen. The use of second screens (watching television while using another digital device, usually a smartphone or tablet) may be considered a type of “media multitasking”, as it affects the viewers’ attention, the information they receive, and their social conduct during the broadcast.

We assumed that a negative correlation would be found between the level of enjoyment of watching the sports event and second screen usage during the broadcasts. Moreover, we assumed that the correlation between enjoyment and second screen usage would be mediated by transportation into the broadcasts.

Method and data: An online representative sample of the Israeli population was obtained during the final ten days of the World Cup, from the quarterfinals to the final. 454 respondents completed the questionnaire.

Results: Findings revealed that using social media while watching the World Cup broadcasts is strongly correlated with the enjoyment of watching the broadcasts. As assumed, the use of social media for non-game-related purposes declined as enjoyment of the broadcast increased (r = –.35, p < .001). Contrary to our first hypothesis, the use of social media for game-related purposes increased as enjoyment of the broadcast increased (r = .31, p < .001). Examining the role of transportation as a mediating variable revealed that the more enjoyment participants experienced, the more transported they were into the game, which led to a significant rise in their game-related usage of social media and a significant decline in their non-game-related usage [F(3, 439) = 42.80, p < .001, R2 = 16.32%].
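
A hedged sketch of a simple product-of-coefficients mediation check of the kind reported (enjoyment, via transportation, to game-related use) is shown below; it runs on simulated data with statsmodels and is not the authors' analysis.

```python
# Hedged sketch (simulated data): a product-of-coefficients mediation check of
# enjoyment -> transportation -> game-related second-screen use with two OLS models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 454
enjoyment = rng.normal(size=n)
transportation = 0.6 * enjoyment + rng.normal(size=n)             # mediator
game_related_use = 0.5 * transportation + 0.1 * enjoyment + rng.normal(size=n)

df = pd.DataFrame({"enjoyment": enjoyment,
                   "transportation": transportation,
                   "use": game_related_use})

m_mediator = smf.ols("transportation ~ enjoyment", df).fit()        # path a
m_outcome = smf.ols("use ~ enjoyment + transportation", df).fit()   # paths b, c'
indirect = m_mediator.params["enjoyment"] * m_outcome.params["transportation"]
print(f"indirect effect (a*b) = {indirect:.3f}, "
      f"direct effect (c') = {m_outcome.params['enjoyment']:.3f}")
```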

Added value: The function of social media as a second screen depends on the relevancy of the usages to the broadcast. These findings contribute to our understanding of the effects of psychological factors (enjoyment and transportation) on second screen usage in the context of live television broadcasts of major sports events.



Measuring self-assessed (in)ability vs. knowledge-based (in)certainty in detecting Fake-News and its forcing or inhibiting effect on its spread.

Daniela Wetzelhütter, Sebastian Martin

University of Applied Sciences Upper Austria, Austria

Relevance & Research Question: The coronavirus crisis has been accompanied by an immense corona media hype. Unfortunately, conflicting information also circulated from the people who were actually managing the crisis (e.g. regarding a curfew or the benefit of face masks). This may have created insecurity about the credibility of information around the pandemic. Unsurprisingly, since insecurity may lead to misinformation being believed, the "digital crisis team" in Austria uncovered 150 different Fake-News stories within a single week in March 2020. A problem here is that social media users are known to be ambivalent about the usefulness of fact-checking and verification services. However, according to current knowledge, trust in the source seems to be the most important factor in the spread of Fake-News anyway. Nonetheless, the question arises: To what extent does the (in)certainty of recognizing Fake-News force or inhibit the spread of Fake-News?

Methods & Data: To answer the research question, a scale was developed to capture knowledge-based (in)certainty in recognizing Fake-News. The indicators measuring the certainty of recognizing true or untrue headlines were derived from a content analysis of 119 newspaper reports on Fake-News (March to December 2020). In addition, a single indicator was used for the self-assessment of this ability. Data were collected with both a calibration sample (students) and a validation sample (n=201).

Results: A measurement instrument to capture knowledge-based (in)certainty was developed. The reliability of the scale is acceptable but leaves room for improvement; it depends on the selection of the headlines on the one hand and on the response scale on the other. The test of construct validity shows that both self-assessed (in)ability and knowledge-based (in)certainty play a subordinate role in the forwarding of Fake-News, although the former shows significant influences depending on the motive for forwarding Fake-News.

Added Value: Based on the results, more in-depth research is now possible to elicit why knowledge of and self-assessed skill in detecting Fake-News contribute less to the spreading or stopping of Fake-News than might be assumed.

 
1:50 - 2:00 CEST Break
 
2:00 - 3:00 CEST A2: Recruitment for Probability-Based Panels
Session Chair: Bella Struminskaya, Utrecht University, the Netherlands
 
 

Enhancing Participation in Probability-Based Online Panels: Two Incentive Experiments and their Effects on Response and Panel Recruitment

Nils Witte1, Ines Schaurer2, Jette Schröder2, Jean Philippe Décieux3, Andreas Ette1

1Federal Institute for Population Research, Germany; 2GESIS; 3University of Duisburg-Essen

Relevance & Research Question

There are two critical steps when setting up online panels that rely exclusively on mail invitations. The first is the transition from the paper invitation letter to a digital online questionnaire. Survey methods aim to minimize the effort for users and to increase the attractiveness and benefits of potential participation. However, nonresponse at the initial wave of a panel survey is not the only critical step to consider. The second is the transition from initial wave participation to panel recruitment. Little is known about the potential of incentives to enhance both transitions, from offline invitation to online participation and on to panel recruitment. We investigate how mail-based online panel recruitment can be facilitated through incentives.

Methods & Data

The analysis relies on two incentive experiments and their effects on panel recruitment and the intermediate participation in the recruitment survey. The experiments were implemented in the context of the German Emigration and Remigration Panel Study and encompass two samples of randomly sampled persons. Tested incentives include a conditional lottery, conditional monetary incentives, and the combination of unconditional money-in-hand with conditional monetary incentives. Furthermore, we assess the costs of panel recruitment per realized interview.

Results

Multivariate analyses indicate that low combined incentives (€5/€5) or, where unconditional disbursement is unfeasible, high conditional incentives (€20) are most effective in enhancing panel participation. In terms of demographic bias, low combined incentives (€5/€5) and €10 conditional incentives are the favored options. The budget options from the perspective of panel recruitment include the lottery and the €10 conditional incentive which break even at net sample sizes of 1,000.

Added Value

The key contribution of our research is a better understanding of how different forms of incentives facilitate a successful transition from postal mail invitation to online survey participation and panel recruitment.



Comparing face-to-face and online recruitment approaches: evidence from a probability-based panel in the UK

Curtis Jessop

NatCen, United Kingdom

Key words: Surveys, Online panels, Recruitment

Relevance & Research Question:

The recruitment stage is a key step in the set-up of a probability-based panel study, but it can also represent a substantial cost. A face-to-face recruitment approach in particular can be expensive, but a lower recruitment rate from a push-to-web approach risks introducing bias and putting a limit on what subsequent interventions to minimise non-response can achieve. This paper presents findings on using face-to-face and push-to-web recruitment approaches when recruiting to the NatCen Panel.

Methods & Data:

The NatCen Panel is recruited from participants in the British Social Attitudes survey (BSA). While normally conducted face-to-face, the 2020 BSA was conducted using a push-to-web approach in response to the Covid-19 pandemic. This study compares the recruitment rates and overall response rates of the face-to-face survey and push-to-web recruitment approaches. It also compares the demographic profile of panel survey participants recruited using each approach to explore to what extent any differences in recruitment and response rates translate into bias in the sample.

Results:

We find that, despite a higher recruitment rate and participation rate in panel surveys, the overall response rate using a push-to-web recruitment approach is substantially lower than when using a face-to-face recruitment approach due to lower response rates at the recruitment interview. There are also differences in the sample profile. For example, people recruited using a push-to-web approach were more likely to be younger, better-off financially, heavier internet users, and interested in politics.

Added Value:

Findings from this study will inform the future design of recruitment for panel studies, providing evidence on the likely trade-offs that will need to be made between costs and sample quality.



Building an Online Panel of Migrants in Germany: A Comparison of Sampling Methods

Mariel McKone Leonard1, Sabrina J. Mayer1,2, Jörg Dollmann1,3

1German Center for Integration and Migration Research (DeZIM), Germany; 2University of Duisburg-Essen, Germany; 3Mannheim Center for European Social Research (MZES), University of Mannheim, Germany

Relevance and Research Question

Underrepresentation of members of ethnic minority or immigrant-origin groups in most panels available to researchers hinders the study of these individuals’ experiences of daily life as well as of racism and discrimination, and of how these groups are affected by and react to important events.

Several approaches for reaching these groups exist but each method introduces biases. Onomastic classification is the current gold standard for identifying minority individuals; however, it is cost-intensive and has been shown to systematically miss well-integrated individuals. Respondent-driven sampling is increasingly popular for sampling rare or hidden individuals, while Facebook samples are the easiest and least expensive method to implement, but yield non-probability samples.

In order to identify the most efficient and representative methods of sampling and recruiting potential participants, we compare three different sampling methods with regard to the resulting biases in distributions.

Methods and Data

We compare three sampling methods:

(1) mail push-to-web recruitment of a probability sample with name-based identification (onomastic classification)

(2) web-based respondent-driven sampling (web-RDS)

(3) Facebook convenience sampling

In order to systematically test these methods against each other, we designed a set of experimental conditions. We test these conditions by sampling and recruiting a national sample of 1st-generation Portuguese migrants and their children.

We will compare the conditions based on factors which may affect recruitment into a national German online panel, such as degree of integration, survey language and self-assessed language fluency, and income. Because we give individuals in the probability sample the option to respond via mail or web, we will additionally be able to compare differences across survey modes.

Results

We began fielding the probability sample condition at the beginning of March. We anticipate fielding of the additional conditions from April until June. This will allow us time to conduct analyses and develop preliminary results prior to the conference start date.

Added Value

Our paper will present an overview of our implementation of each method; our evaluation criteria; and preliminary results. We will provide a more realistic understanding of the potential biases, strengths, and weaknesses of each method, thus supporting researchers in making better informed methods choices.

 
2:00 - 3:00 CEST B2: Geodata in Market and Survey Research
Session Chair: Simon Kühne, Bielefeld University, Germany
 
 

Innovative segmentation using microgeography: How to identify consumers with high environmental awareness on a precise regional basis

Franziska Kern, Julia Kroth

infas360, Germany

Relevance & Research question: Sustainability is the topic of the day. But how can customers who tend towards an ecological lifestyle be identified? Customer segmentation is a well-known method to create more efficient marketing and sales strategies. One of the core problems is the intelligent linking of very specific customer data with suitable general market data and ultimately the precise local determination of potential. This paper shows an innovative way to classify potential buyers at address level.

Methods & Data: First, a survey is conducted with about 10,000 respondents. Questions on general needs, actions, and attitudes are used to calculate a sustainability score. By geocoding the respondents’ addresses, the results can be enriched with more than 700 microgeographic variables, including information on sociodemographics, building type, living environment, energy consumption, and rent. A cluster analysis identifies five sustainability types, two of which tend towards sustainability. By means of a discriminant analysis, the generated segments are transferred to all 22 million addresses and 41 million households in Germany.
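
Schematically, the two analytical steps (clustering survey respondents into types, then transferring the typology to the full address base via discriminant analysis) can be sketched as follows; the data are synthetic and use far fewer variables than the more than 700 mentioned above.

```python
# Schematic sketch (synthetic data): cluster respondents, then transfer the
# typology to all addresses via discriminant analysis.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
survey_features = rng.normal(size=(10_000, 12))  # microgeographic variables, respondents
all_addresses = rng.normal(size=(100_000, 12))   # same variables, full address base

# Step 1: cluster respondents into five sustainability types
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(survey_features)
types = kmeans.labels_

# Step 2: train a discriminant model on the clustered respondents ...
lda = LinearDiscriminantAnalysis().fit(survey_features, types)

# ... and assign every address in the full base to a type
predicted_types = lda.predict(all_addresses)
print(np.bincount(predicted_types))
```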

Results: As a result, households prone to a sustainable lifestyle can be easily identified. Type 1 of these sustainable households consists mostly of married couples with children, on average 51 years old, living in single-family houses with solar panels in medium-sized towns and rural communities. They have a monthly net income of €2,500 to €3,500, and sustainability, innovation, family, and pragmatism are important to them. Type 2 representatives live in apartment buildings in big cities, are between 18 and 49 years old, often single, and have a monthly income between €1,500 and €2,500. To facilitate the application of the typology in marketing and sales practice, typical representatives of both cluster types were developed and described as personas.

Added value: The resulting information can be combined with existing customer data and thus used to identify corresponding sustainability attitudes within one's own customer portfolio. For the specific acquisition of new customers, the address-specific knowledge can be aggregated to any higher (micro-) geographical level and then used in sales and marketing strategies. Advertising activities can then be precisely targeted to the right type of potential buyers.



GPS paradata: methods for CAPI interviewers fieldwork monitoring and data quality

Daniil Lebedev, Aigul Klimova

HSE University, Moscow, Russia

Relevance & Research Question:

In recent years there has been a steady increase of interest in sensor- and app-based data collection which can provide new insights into human behavior. However, the quality of such data lacks research focus and still needs further exploration. The aim of this paper was to compare various methods of using GPS paradata in CAPI surveys for monitoring interviewers and assess GPS data quality differences among different CAPI interviewers and survey regions.

Methods & Data:

We compared geofencing (the distance between locations recorded at the beginning and end of an interview), curbstoning (testing whether groups of interview locations are too densely clustered within an area), and interwave geofencing (the distance between the locations of interviews with the same respondent across panel waves) to check whether they identify interviews with lower data quality in terms of completion times, criterion validity, and test-retest reliability, based on CAPI data from the 26th and 27th waves of the Russia Longitudinal Monitoring Survey with 491 and 631 respondents, respectively. Regarding GPS data quality, we compared the missing data rate and the accuracy of geolocation measures across interviewers and regions.
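
The geofencing check itself reduces to a distance computation between the coordinates recorded at the start and end of an interview; the sketch below uses the haversine formula and an assumed flagging threshold, not the thresholds used in the study.

```python
# Illustrative geofencing check: flag interviews that moved more than a
# (hypothetical) threshold between start and end GPS fixes.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in metres."""
    r = 6_371_000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

THRESHOLD_M = 200  # assumed flagging threshold

start = (55.7558, 37.6173)  # lat/lon at interview start
end = (55.7601, 37.6250)    # lat/lon at interview end
dist = haversine_m(*start, *end)
print(f"moved {dist:.0f} m -> {'flag' if dist > THRESHOLD_M else 'ok'}")
```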

Results:

We found that the geofencing method was quite efficient in flagging “suspicious” interviews with lower data quality. The curbstoning method can also be quite useful; however, the challenge lies in selecting thresholds for the size of the area and the density of interviews within it. In addition, using accuracy-based measures as the GPS measurement error, instead of selecting a fixed threshold, was found to be more efficient. In terms of GPS data quality, the region of an interview proved to be the main factor associated with missing data and measurement error, with remote regions providing GPS data of higher quality.

Added Value:

The comparison of various monitoring methods shows how GPS paradata can be used and which approaches allow detecting interviewers who deliver lower data quality. Assessing GPS data quality is useful for the future employment of geolocation data in social science research, as it reveals possible sources of measurement and item nonresponse errors.



Combining Survey Data and Big Data to Rich Data – The Case of Facebook Activities of Political Parties on the Local Level

Mario Rudolf Roberto Datts, Martin Schultze

University of Hildesheim, Germany

Relevance & Research Question: In times of big data, we have growing amounts of easily accessible data that can be used to describe human behavior and organizations’ activities at large sample sizes without the well-known biases of survey data. Yet big data is not particularly good at measuring attitudes and opinions, or information that has not been made public. Thus, the question arises of how survey data and big data can be combined to enable a more complete picture of reality.

Methods & Data: As a case study, we analyse the Facebook communication of political parties in Germany. We seek to describe and explain Facebook activities. While the descriptive part of our study can be investigated on the basis of data gathered via Facebook’s official web interface, the “why” is examined via an online survey among the district associations of the most important political parties in Germany (n = 2,370), which began on 2 May 2017 and ended on 16 June 2017.

Results: By combining big data and survey data, we are able to describe the Facebook usage of the district associations over a period of eight years, as well as identify several key factors explaining the very different Facebook activities of political parties in Germany, such as the number of members and the chairperson’s expectations regarding the merits of social media for political communication. Furthermore, we can show that almost half of our respondents perceive their local party chapter as very active, while API data indicate that they are at most moderate social media communicators.

Added Value: Only by combining survey data and big data was it possible to draw a rich picture of the political usage of Facebook at the local level in Germany. Our findings also indicate that “objective” big data and individuals’ perceptions of the same issue might differ substantially. Thus, we recommend analysts to combine big data and survey data whenever possible and to remain aware of the limitations of each.

 
2:00 - 3:00 CEST C2: Misinformation
Session Chair: Anna Rysina, Kantar GmbH, Germany
 
 

Emotional framing and the effectiveness of corrective information

Pirmin Stöckle

University of Mannheim, Germany

Relevance & Research Question:

Concerns about various forms of misinformation and its fast dissemination through online media have generated huge interest into ways to effectively correct false claims. An under-explored mechanism in this research is the role of distinct emotions. How do emotional appeals interact with corrective information? Specifically, I focus on the emotion of disgust, which has been shown to be linked to the moralization of attitudes, which in turn reduces the impact of empirical evidence on attitudes and makes compromise less likely. Substantively, I investigate the issue of genetically modified (GM) food. I hypothesize that (i) emotionally framed misinformation induces disgust and moralizes attitudes towards GM food, (ii) that this effect endures in the face of neutral correction even if the factual misperception is corrected, and (iii) that an emotional counter-frame reduces this enduring effect of the original frame.

Methods & Data:

I implement a pre-registered survey experiment within a panel study based on a probability sample of the general population in Germany (N ≈ 4,000). The experiment follows a between-subjects 3 x 3 factorial design manipulating both misinformation (none, low-emotion frame, high-emotion frame) and corrective information (none, neutral, emotional counter-frame). The informational treatments consist of fabricated but realistic online news reports based on the actual case of a later retracted study claiming to find a connection between GM corn and cancer. As outcomes, I measure factual beliefs about GM food safety, policy opinions, moral conviction, and emotional responses to GM food.
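
For illustration, random assignment to the nine cells of the 3 x 3 between-subjects design could look like the following sketch; the cell labels paraphrase the abstract, and the simple randomization shown here ignores any blocking or balancing the pre-registered design may use.

```python
# Minimal sketch of random assignment to a 3 x 3 between-subjects design
# (cell labels paraphrased from the abstract; not the study's actual procedure).
import itertools
import random

misinformation = ["none", "low-emotion frame", "high-emotion frame"]
correction = ["none", "neutral", "emotional counter-frame"]
cells = list(itertools.product(misinformation, correction))  # 9 conditions

random.seed(2021)
respondents = [f"id_{i:04d}" for i in range(12)]
assignment = {r: random.choice(cells) for r in respondents}
for r, cell in assignment.items():
    print(r, cell)
```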

Results: - not yet available -

Added Value:

In the view of many scientists, genetic engineering provides avenues with large potential benefits, which may be impeded by public resistance possibly originating from misleading claims easily disseminated through online media. Against this background, this study provides evidence on the effect of emotionally charged disinformation on perceptions of GM food, and ways to effectively correct false claims. In a broader perspective, these results inform further studies and policy interventions on other issues where disinformation loads on strong emotions, ranging from social policy over immigration to health interventions such as vaccinations.



Forwarding Pandemic Online Rumors in Israel and in Wuhan, China

Vered Elishar-Malka1, Shuo Seah2, Dana Weimann-Saks1, Yaron Ariel1, Gabriel Weimann3

1Academic College of Emek Yezreel; 2Huazhong University of Science and Technology, China; 3University of Haifa

Relevance and research question: Starting in the last quarter of 2019, the COVID-19 virus led to an almost unprecedented global pandemic with severe socioeconomic and political implications and challenges. As in many other large-scale emergencies, the media has played several crucial roles, among them as a channel of rumormongering. Since social media have penetrated our lives, they have become the central platform for spreading and sharing rumors, including about the COVID-19 epidemic. Based on the Theory of Planned Behavior and the Uses and Gratifications theory, this study explored the factors that affected social media users' willingness to spread pandemic-related rumors in Wuhan, China, and in Israel, via each country's leading social media platform (WeChat and WhatsApp, respectively).

Methods and data: We tested a multivariate model of factors that influence the forwarding of COVID-19 online rumors. Using an online survey conducted simultaneously in both countries between April and May 2020, 415 WeChat and 503 WhatsApp users reported their patterns of exposure to and spread of COVID-19 rumors. As part of the questionnaire, users were also asked to report their motives for doing so.

Results: The main result was that in Wuhan, personal needs, negative emotions, and the ability to gather information significantly predicted willingness to forward rumors. In contrast, rumors' credibility was found to be a significant predictor in the regression model. In Israel, only the first two predictors, personal needs and negative emotions, were found significant. The best predictor in Wuhan was personal needs, and the best predictor in Israel was negative emotions.

Added value: This study's findings demonstrate the significant roles that WeChat and WhatsApp, the leading social media in China and Israel, respectively, play in local users' lives during a severe national and global crisis. Despite the major differences between the two societies, several interesting similarities were found: in both cases, individual impetuses, shaped by personal needs and degree of negative feelings, were the leading motives behind spreading rumors over social networks. These findings may also help health authorities in planning the right communication strategies during similar situations.



Acceptance or Escape: A Study on the embrace of Correction of Misinformation on YouTube

Junmo Song

Yonsei University, Korea, Republic of (South Korea)

Relevance & Research Question:

YouTube is one of the most important channels for producing and consuming political news in South Korea. A distinctive characteristic of YouTube is that not only traditional media but also Internet-based new media and individual news producers can freely provide news, because the platform does not play an active gatekeeping role.

In 2020, rumors of the death of North Korea's leader Kim Jong-un were reported indiscriminately by both traditional media and individual channels, but a definite correction was made at the national level. This study therefore uses this case as a kind of natural experiment to explore responses to correction.

This study aims to analyze the difference in response between producers and audiences when fake information circulating on the YouTube platform is corrected and not. Ultimately, this study seeks to explore the conditions under which correction of misinformation accelerates or alleviates political radicalization.

Methods & Data:

Videos and comments were collected from the top 437 Korean channels in the Politics/News/Social category on YouTube. Data were collected through the YouTube API provided by Google. Channels were then classified into two groups: traditional media and new media (including individual channels). In addition, the political orientation of comments was classified as progressive or conservative through supervised learning.

Results:

In a pilot analysis, the number of comments generally decreased after the correction on both media and individual channels. In particular, the number of comments on conservative individual channels decreased drastically.

In addition, after the misinformation was corrected, the difference in political orientation between comments from individual channels and media outlets has significantly decreased or disappeared.

However, existing conservative users did not change their opinions in response to the correction of the misinformation; instead, they were observed to move immediately to other issues and consume content there.

Added Value:

YouTube has been analyzed relatively less in the context of politics than other platforms such as social networking sites and online communities. This study examines how misinformation is received in a political context through the case of Korea, where YouTube has a profound influence on politics.

 
2:00 - 3:00 CEST D2: GOR Best Practice Award 2021 Competition II
Session Chair: Otto Hellwig, respondi/DGOF, Germany
Session Chair: Alexandra Wachenfeld-Schell, GIM Gesellschaft für Innovative Marktforschung mbH, Germany

sponsored by respondi
 
 

High Spirits – with No Alcohol?! Going digital with Design Thinking in the non-alcoholic drinks category – a case study in unlocking the power of digital for creative NPD tasks

Irina Caliste3, Christian Rieder2, Janine Katzberg1, Edward Appleton1

1Happy Thinking People, Germany; 2Happy Thinking People, Switzerland; 3Bataillard AG

Relevance and Research Question

Our client – a Swiss wine distribution company - wished to improve its position in the growing non-alcoholic drink category.

They were looking for a step-change in their innovation approach: embracing consumer centricity, digital working & Design Thinking principles.

Budgets were tight, and timing short. Could we help?

Design Thinking is proven and used widely offline – but 100% digital applications are still embryonic.

In this project we demonstrated how a careful mix of online qual tools – real-time and asynchronous – allowed us to innovate successfully, covering both ideation and validation phases in a highly efficient manner.

Methods and Data

Phase 1 involved a DIY-style pre-task to help stakeholders get to know their consumers – talking to friends, relatives about category experiences.

A digital workshop followed: all participants shared their experiences and identified the most promising customer types. Detailed personas were worked up, with a range of core needs.

External experts delivered short pep-talks as inspiration boosters.

Initial ideas were then developed – multiple prototypes visualised rapidly by an online scribbler.

Phase 2 was about interrogating & evaluating the ideas from phase 1.

“Real consumers” (recruited to match the personas) interacted directly with the client groups.

Customers re-joined later on for a high-speed pitch session: As in the TV format “The Dragon’s Den” (role-reversal), client groups presented their ideas to real customers.

Online mobile polling was used for a final voting session – individual voices helping to optimize the concepts.

Results

• A broad, rich range of actionable new ideas was generated.

• The client team was enthused. The desired mind-shift to Consumer Centricity and openness to innovation was achieved – a key step-change hoped for by the innovation manager & company CEO.

• DIY & a fusion of professional online qual research approaches complemented one another well. No quality was lost.

Added Value

• Digital Design Thinking works well and extremely efficiently for online creativity tasks.

• The rules of F2F co-creation success – playful, time-boxed, competitive, smaller groups – were all applicable online.

• Consumers jumping in and out of the workshop day is a new, efficient use of their time.

• Overall: creativity and online can work very well hand-in-hand!



The dm Corona Insight Generator – A mixed method approach

Oliver Tabino1, Mareike Oehrl1, Thomas Gruber2

1Q Agentur für Forschung, Germany; 2dm-drogerie markt GmbH + Co. KG, Germany

Relevance & Research Question:

As one of the biggest German drugstore brands, dm is confronted by the Corona pandemic with several major challenges across different areas and units. How political, social, and medical developments affect consumers, consumer behaviour, fears and concerns at the PoS, and the image of dm are key issues in this project.

Methods & Data:

dm needed above all fast, timely, reliable insights on current and highly dynamic developments. Weekly trend reports at the beginning of the project could only be achieved through a mix of methods and a highly efficient and flexible research process.

Diversity: we set up a very diverse project team to cover different point of views and lifeworlds.

Intelligence of the Q crowd: internal knowledge management platform collecting weak signals, observations.

Web crawler: capturing, structuring and analysing the web

Social Listening: Tracking, reviewing and quantifying previously found trends

Netnography: content analytical approach to capture, understand and interpret need states

Google Trends Analyses: uncovering linked topics and search patterns from a consumers’ perspective

AI: automated detection of trends.

Last but not least: expertise and research experience.

Because of extreme time pressure, we opted for agile and tight project management.

The project includes special process steps:

Regular editorial meetings between dm and Q to challenge trends and weak signals before reporting and to check relevance for dm.

Extremely open communication between client and agency, which enables a deep understanding of dm’s questions and a quick and tailor-made preparation of insights.

Results:

The results are presented in a customised format. It is suitable for management and includes exemplary trend manifestations as well as concrete recommendations for dm. In addition, the results are embedded in a context of society as a whole:

Overview and classification of all found trends in a trend map.

The reporting cycle has been changed depending on the social dynamics and dm’s requirements.

Q also conducted short-term on-demand analyses.

Added Value:

The results are made available in an internal dm network for the different departments and units and are used by dm branches, communication teams, marketing, product development and corporate strategy.

The reports work at different company levels (granular and concrete vs. strategic) and for different areas such as marketing, private label development, communication, etc. In addition, the insights offer touchpoints for dm’s key product categories such as colour cosmetics, skincare, washing and cleaning, etc.



The end of slide presentations as we know them: How to efficiently and effectively communicate results from market research?

Andreas Krämer1, Sandra Böhrs2, Susanne Ilemann2, Johannes Hercher3

1exeo Strategic Consulting AG, Germany; 2simpleshow gmbh, Germany; 3Rogator AG, Germany

Relevance & Research Question:

Videos are becoming increasingly popular in market research when it comes to capturing information (Balkan & Kholod 2015). At the same time, results from studies can be communicated in a targeted manner in the form of a video. This is especially true for explainer videos, i.e., short (1-3 min.) animated videos that convey key messages. Today, different platforms offer DIY explainer video production based on AI (Krämer & Böhrs 2020). However, a key question is whether information can be conveyed better via explainer video than via slide presentation. Another open question is whether learning effects can be improved through interaction.

Methods & Data:

As part of a customer survey (n=472, March 2021) by simpleshow, a leading provider of explainer videos, current results on the topic of home office were presented within an experimental design (a randomized 2x2 factorial design), in addition to questions on customer satisfaction and ease of use. In the test, an explainer video and a slide presentation were used as the formats. Both formats were presented once without interaction and once with interaction (additional questions on the topic). Afterwards, a knowledge test was used to check how well the study results had been conveyed. In addition, the participants rated the type of presentation as well as subjective effects.

Results:

The explainer video format achieves significantly better results in knowledge transfer than presenting the results as a slide presentation. With a maximum achievable score of 7, the explainer video without interaction achieves a value of 5.0, while the slide format achieves only 2.2 points. The differences are highly statistically significant and show a strong effect size. The interaction only leads to slightly better results in combination with the slide presentation. The subjective evaluation of the presentation format shows similar differences between the test groups. Taking viewing time into account, the explainer video without interaction achieves by far the best result.

Added Value:

The study results firstly demonstrate clear advantages of knowledge transfer through explanatory videos in comparison with conventional slide presentations. Secondly, it appears that in the context of short presentations, interaction (additional questions about the topic) does not significantly increase learning, but it does increase viewing time. Thirdly: Beyond the actual experiment, the study results underline that explainer videos can also play an important role in the presentation of market research results in the future.

 
3:00 - 3:10 CEST Break
 
3:10 - 4:10 CEST Keynote 1
 
 

Election polling is not dead: Forecasts can be improved using wisdom-of-crowds questions

Mirta Galesic

Santa Fe Institute, United States of America

Election forecasts can be improved by adding wisdom-of-crowds questions to election polls. In particular, asking people about the percentage of their social contacts who might vote for different political options (the social-circle question) improved predictions compared to traditional polling questions about participants’ own voting intentions in three recent U.S. elections (2016, 2018, and 2020), as well as in three recent elections in European countries with a larger number of political options (the 2017 French, 2017 Dutch, and 2018 Swedish elections). Using data from large national online panels, we investigate three reasons that might underlie these improvements: an implicitly more diverse sample, decreased social desirability, and anticipation of social influences on how people will vote. Another way to use the wisdom of crowds is to ask people to forecast who will win the election (the election-winner question). We find that the social-circle question can be used to select individuals who are better election-winner forecasters, as they typically report more diverse social circles. A combination of social-circle, election-winner, and traditional own-intention questions performed best in the 2018 and 2020 U.S. elections. Taken together, our results suggest that election polling can produce accurate results when traditional questions are augmented with wisdom-of-crowds questions.
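
A deliberately simplified sketch of the social-circle idea (not the authors' estimation procedure): respondents' reports about their contacts' vote shares are averaged into a crowd-based forecast and set against the classic own-intention estimate; all numbers below are invented.

```python
# Simplified sketch of social-circle vs. own-intention forecasts (invented data).
import numpy as np

# Each row: one respondent's report of the % of their contacts voting for A vs B
social_circle = np.array([
    [60, 40],
    [55, 45],
    [70, 30],
    [45, 55],
])
own_intention = np.array(["A", "A", "B", "B", "A"])  # traditional polling question

social_circle_forecast = social_circle.mean(axis=0)           # crowd-based estimate
own_intention_forecast = [
    (own_intention == c).mean() * 100 for c in ("A", "B")     # classic estimate
]
print("social-circle:", social_circle_forecast)
print("own intention:", own_intention_forecast)
```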

 
4:10 - 4:20 CEST Break
 
4:20 - 5:30 CEST A3: New Technologies in Surveys
Session Chair: Ines Schaurer, City of Mannheim, Germany
 
 

Participation of household panel members in daily burst measurement using a mobile app

Annette Jäckle1, Jonathan Burton1, Mick Couper2, Brienna Perelli-Harris3, Jim Vine1

1University of Essex, United Kingdom; 2University of Michigan, USA; 3University of Southampton, United Kingdom

Relevance:

Mobile applications offer exciting new opportunities to collect data, either passively using inbuilt sensors, or actively with respondents entering data into an app. However, in general population studies using mobile apps, participation rates have to date been very low, ranging between 10 and 20 percent. In this paper we experimentally test the effects of different protocols for implementing mobile apps on participation rates and biases.

Methods:

We used the Understanding Society Innovation Panel, a probability sample of households in Great Britain that interviews all household members aged 16+ annually. During the 2020 annual interview, respondents were asked to download an app and use it every evening for 14 days to answer questions about their experiences and wellbeing that day. We experimentally varied: i) at what point in the annual interview we asked respondents to participate in the wellbeing study (early vs. late), ii) the length of the daily questionnaire (2 vs 10 mins), iii) the incentive offered for the annual interview (ranging from £10 to £30), and iv) the incentives for completing the app study (in addition to £1 a day: no bonus; £10 bonus for completing all days; £2.50 bonus a day on four random days).

Results:

Of the 2,270 Innovation Panel respondents, 978 used the app at least once (43%). The length of the daily questionnaire, the incentives for the annual interview, and the incentives for the app study had no effects on whether respondents downloaded the app during the interview, whether they used the app at least once, or the number of days they used the app. However, respondents who were invited to the app study early in the annual interview were 8 percentage points more likely to participate than those invited late in the interview (47% vs 39%, p<0.001) and respondents who completed the annual interview online were 28 percentage points more likely to participate than those who completed the interview by phone (48% vs 20%, p<0.001). Further analyses will examine the reasons for non-participation and resulting biases.

Value:

This study provides empirically based guidance on best practice for data collection using mobile apps.



App-Diaries – What works, what doesn’t? Results from an in-depth pretest for the German Time-Use-Survey

Daniel Knapp, Johannes Volk, Karen Blanke

Federal Statistical Office Germany (Destatis)

Relevance & Research Question:

The last official German Time-Use-Survey (TUS) in 2012/2013 was based mainly on paper mode. In order to modernize the German TUS for 2022, two new modes were added – an app and a web instrument. As the literature on how to design specific elements of a diary-based TUS App is still scarce, our goal was to derive best-practice guidelines on what works and what doesn’t when it comes to designing and implementing such an App-Diary (e.g. whether and how to implement hierarchical vs. open text activity search functionalities).

Methods & Data:

Results are based on an in-depth qualitative pretest with 30 test persons in Germany. Test persons were asked to (1) fill out a detailed time-use diary app for two days, (2) document first impressions, issues, and bugs in a short questionnaire, and (3) participate in individual follow-up cognitive interviews. Combining these data allowed us to evaluate various functionalities and implementations in detail.

Results:

Final results of the pretest are still in progress and will be submitted at a later date. The presentation will also include a brief overview of the upcoming federal German Time-Use-Survey 2022 and its transformation towards an online-first design.

Added Value:

The study provides new insights that further expand the literature on how to design a diary-based time-use app in the context of the harmonized European Time-Use-Survey. It focuses on specific elements of a diary-based app and proposes best-practice guidelines on several detailed aspects, such as app structure, diary overview, and activity search functionality.



Using text analytics to identify safeguarding concerns within free-text comments

Sylvie Hobden, Joanna Barry, Fiona Moss, Lloyd Nellis

Ipsos MORI, United Kingdom

Relevance & Research Question:

Ipsos MORI conducts the Adult Inpatient and Maternity surveys on behalf of the Care Quality Commission (CQC). Both surveys collect patient feedback on recent healthcare experiences via a mixture of multiple choice and free-text questions. As the unstructured free-text comments could potentially disclose harm, all comments are manually reviewed and allocated a flag indicating whether any safeguarding concerns are disclosed. Flagged comments are escalated to the CQC for investigation. We piloted an approach that uses machine learning to make this process more efficient.

Methods & Data:

IBM SPSS Modeler was used to construct a model that was developed through multiple stages. We aimed to use the model to separate safeguarding comments (which require review and escalation) from non-safeguarding comments (which may only require spot-checking of a random sample); a simplified sketch of such a workflow follows the numbered steps below.

1. 2019 Adult Inpatient and Maternity pilot comments (n=9,862), which had previously been manually reviewed for safeguarding issues, were used to train the model to identify potential safeguarding comments. The model identified a relatively small pool of such comments.

2. The model output was compared with the previous manual review to assess accuracy. Where the model failed to identify safeguarding comments correctly, a qualitative review was conducted to identify how the model should be revised to increase accuracy.

3. 2019 Adult Inpatient and Maternity mainstage comments (n=60,754) were analysed by the model. This sample was independent of the pilot sample, allowing us to check that the model's accuracy generalised across survey comments.
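The workflow above was implemented in IBM SPSS Modeler; purely for illustration, an analogous train-then-validate text classifier could be sketched as follows (scikit-learn, the toy comments, and the flag coding are assumptions, not the authors' implementation):

# Illustrative sketch of the train/validate workflow (not the SPSS Modeler model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

# Stage 1: train on manually reviewed pilot comments (hypothetical examples).
pilot_comments = ["I was left alone in pain for hours",
                  "Staff were friendly and the ward was clean"]
pilot_flags = [1, 0]  # 1 = safeguarding concern, 0 = no concern

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(class_weight="balanced", max_iter=1000))
model.fit(pilot_comments, pilot_flags)

# Stage 2: compare predictions against the manual review to find misclassifications.
print(classification_report(pilot_flags, model.predict(pilot_comments)))

# Stage 3: apply the (revised) model to independent mainstage comments; only
# comments flagged as potential safeguarding concerns go forward for review.
mainstage = ["Food was cold", "A nurse shouted at me and refused to help"]
to_review = [c for c, flag in zip(mainstage, model.predict(mainstage)) if flag == 1]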

Results:

On average, the model identified 44% of comments as non-safeguarding with high accuracy. Given the scale of the surveys, this could equate to around 27,000 fewer comments needing manual review each year. This would provide cost savings and enable safeguarding comments to be escalated to the CQC more quickly. We are currently exploring how the model will be used for the 2020/2021 surveys.

Added Value:

Text analytics uses machine learning to assist in translating large volumes of unstructured text into structured data. This is an innovative application of the approach which has resulted in substantial efficiency gains and could be developed and implemented on other surveys.

 
4:20 - 5:30 CEST B3: Smartphone Sensors and Passive Data Collection
Session Chair: Simon Kühne, Bielefeld University, Germany
 
 

Online Data Generated by Voice Assistants – Data Collection and Analysis Using the Example of the Google Assistant

Rabea Bieckmann

Ruhr-Universität Bochum, Germany

Relevance & Research Question:

Voice assistants play an increasing role in many people's everyday lives. They can be found in cars, cell phones, smart speakers, and watches, and their fields of application keep growing. Their use is seldom questioned, even though children now grow up with them and voice assistants are often a person's only "conversation partner" during the day. At the same time, a large amount of data is automatically generated and ongoing online logs in the form of conversations are created. The question arises as to how this mass of personal data can be used for sociological research and, based on this, what the special features of communication between humans and voice assistants are.

Methods & Data:

The data consist of conversation logs of one person's interactions with the Google Assistant over a whole year. In addition, there is information about whereabouts, music the person listened to, shopping lists, and many other aspects. The entries in the logs carry time markers and, in most cases, are stored together with the recorded audio files. The logs can be downloaded as PDF files from the user's personal account. They are strictly anonymized and examined with a qualitative approach using conversation analysis.
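Since the processing pipeline is not described in detail, the following is only an illustrative sketch of how time-stamped entries might be pulled out of such an exported log; the file name, timestamp format, and regular expression are assumptions.

import re
from pypdf import PdfReader  # third-party package for reading PDF exports

# Illustrative only: extract time-stamped entries from an exported activity log.
reader = PdfReader("assistant_activity_export.pdf")  # hypothetical file name
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Assumed entry format, e.g. "12 Mar 2020, 18:42:05  <utterance>"
pattern = re.compile(r"(\d{1,2} \w{3} \d{4}, \d{2}:\d{2}:\d{2})\s+(.+)")
entries = [(m.group(1), m.group(2).strip()) for m in pattern.finditer(text)]

for timestamp, utterance in entries[:5]:
    print(timestamp, "|", utterance)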

Results:

Collecting and processing the data for sociological research requires considerable effort. The barriers to obtaining the data are high, but once available, the data are of great value because they contain an enormous amount of information. Communication between human and voice assistant is also very distinctive, as it differs greatly from other forms of communication: it is characterized by an imperative way of speaking, paraphrases, and constant repair mechanisms. The personalization of the voice assistant is another key finding in the analysis of human-technology communication.

Added Value:

The study provides initial results and suggestions for approaches to the sociological handling of data from voice assistants. In addition, the findings on the specifics of communication between people and voice assistants are relevant because these assistants are increasingly becoming part of households, workplaces, and public space, and are thus changing social dynamics.



Eyes, Eyes, Baby: BYOD Smartphone Eye Tracking

Holger Lütters1, Antje Venjakob2

1HTW Berlin, Germany; 2oculid UG (haftungsbeschränkt), Germany

Relevance & Research Question

The methodology of eye tracking is an established toolset typically used in a laboratory setting. The established infrared hardware produces solid results but cannot be taken into the remote testing field. Researchers have accepted the lower quality of webcam-based tracking as a trade-off for better access to more diverse research samples.

With the rise of smartphones as the preferred digital device, the methodology has so far not kept pace. Tests of app concepts or mobile websites still take place in a confined environment of established hardware that is, in effect, better suited to eye tracking on larger screens.

The approach presented here brings the technology right into the hands of research participants, who can use their own device's camera while performing research tasks. The idea of BYOD (bring your own device) is not new, but it now comes with a high-tech toolset of exceptional quality.

Methods & Data

The presented approach offers an online framework for setting up studies, allowing even less tech-savvy researchers to design, distribute, and analyze a smartphone eye-tracking test. The tool captures a participant's eye movements and touch interactions on the screen. Recording thinking aloud helps to better understand the individual's attention while performing research tasks. The entire interaction data set is uploaded to the online platform and can be analyzed individually or comparatively.

The contribution presents first experiments with the new eye-tracking app from the Berlin-based start-up Oculid, showing how advertising material, online task solving, and a market research questionnaire can be eye tracked and how user behaviour can be analyzed.

Results

The contribution will show the process of setting up, distributing, and analyzing a study, drawing on several experiments performed by external researchers with the tool. The entire process of set-up, field recruitment, connection to external tools, and analysis will be explained, including its advantages, insights, and challenges.

Added Value

Smartphone usage is not only growing in quantity; mobile camera technology is also increasingly outperforming non-mobile installations. The smartphone BYOD concept may therefore be more than just competitive.



Separating the wheat from the chaff: a combination of passive and declarative data to identify unreliable news media

Denis Bonnay1,2, Philippe Schmitt1,3

1Respondi; 2Université Paris Nanterre; 3Toulouse School of Economics

Relevance & Research Question: Fake news website detection

Hype aside, fake news has grown massively and threatens the proper functioning of our democracies. The detection of fake news has thus become a major focus of research both in the social media industry and in academia. While most approaches aim at classifying news items as fake or legitimate, one may also look at the problem in terms of sources' reliability, aiming at a classification of news emitters as trustworthy or deceptive. Our aim in the present research is to explore the prospects of an automated solution to this problem by trying to predict and extend an existing, manually created classification of news sources in France.

Methods & Data: browsing data, random forest, NLP, deep learning

A sample of 3,192 French panelists aged 16 to 85 had their online browsing activity recorded for one year, from November 2019 to October 2020. Additionally, a survey was conducted in May 2020 to gather information about their socio-demographics and degrees of belief in various fake news stories. On this basis, we use four kinds of predictors: (1) websites' traffic (mean time spent, etc.), (2) origins of traffic, (3) websites' audience features, and (4) types of articles read (clustering title embeddings obtained via a fine-tuned BERT language model). Our predictive target is a binary, adjusted version of Le Monde's media classification, in which outlets are either reliable or not (61% vs. 39% of the total sample).

Results:

Predictions are made with a random forest algorithm and evaluated with K-fold cross-validation (K = 10). Combining all sets of variables, we achieve 75.42% accuracy on the test set. The top five predictors are average age, number of pages viewed, total time spent on websites, category of preceding visits, and panelist clusters based on degrees of belief in fake news.
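A minimal sketch of this modelling step (random forest evaluated with 10-fold cross-validation), assuming a feature matrix built from the four predictor sets and a binary reliability label; the placeholder data and parameter choices below are illustrative, not the study's actual setup:

# Sketch of the modelling step: random forest with 10-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))    # placeholder for traffic/audience/NLP features
y = rng.integers(0, 2, size=500)  # placeholder for the reliable/unreliable label

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"Mean 10-fold accuracy: {scores.mean():.2%}")

# Feature importances, analogous to the 'top 5 predictors' reported above.
clf.fit(X, y)
print("Top 5 feature indices:", np.argsort(clf.feature_importances_)[::-1][:5])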

Added Value: combining passive and declarative data

Combining passive and declarative data is becoming a new standard for online research. In this study, we show the potential of such an approach for fake news detection, which is usually tackled by means of brute-force NLP or pattern-based algorithms.



Measuring smartphone operating system versions in surveys: How to identify who has devices compatible with survey apps

Jim Vine1, Jonathan Burton1, Mick Couper2, Annette Jäckle1

1University of Essex, United Kingdom; 2University of Michigan, USA

Relevance:

Data collection using mobile apps relies on sample members having compatible smartphones, in terms of operating system (OS) and OS version. This potentially introduces selection bias. Measuring OS version is however difficult. In this paper we compare the quality of data on smartphone OS version collected with different methods. This research arose from analyses of the uptake of the coronavirus test & trace app in the UK, which requires smartphones running Android 6.0 and up or iOS 13.5 and up.

Methods:

We use data from the Understanding Society COVID-19 study, a probability sample aged 16+ in the UK. The analyses are based on 10,563 web respondents who reported having an Android or iOS smartphone. We compare three ways of measuring smartphone OS version: i) using the user agent string (UAS), which captures characteristics of the device used to complete the survey, ii) asking respondents to report the make and model of their smartphone and matching that to an external database, and iii) asking respondents to report the OS version of their smartphone (by checking its settings, typing “whatismyos.com” into its browser, or scanning a QR code opening that webpage).
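To illustrate the first of these approaches, a simplified check of whether a user agent string indicates an OS version compatible with the tracing app (Android 6.0+ or iOS 13.5+) might look as follows; the regular expressions are deliberately simplified, and production code would normally rely on a dedicated user-agent parsing library:

import re

# Simplified sketch: infer OS version from a user agent string and check it
# against the tracing app's requirements (Android 6.0+, iOS 13.5+).
def tracing_app_compatible(user_agent):
    android = re.search(r"Android (\d+)(?:\.(\d+))?", user_agent)
    ios = re.search(r"(?:iPhone|iPad).*? OS (\d+)[._](\d+)", user_agent)
    if android:
        return int(android.group(1)) >= 6
    if ios:
        return (int(ios.group(1)), int(ios.group(2))) >= (13, 5)
    return None  # not a smartphone user agent: OS version unknown

print(tracing_app_compatible("Mozilla/5.0 (Linux; Android 5.1; SM-G531F)"))             # False
print(tracing_app_compatible("Mozilla/5.0 (iPhone; CPU iPhone OS 13_5 like Mac OS X)"))  # True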

Results:

The UAS provided a smartphone OS version for just 58% of respondents, as the rest did not use a smartphone to complete the survey; 5% of the OS versions were too old to use the coronavirus app.

Matching the self-reported smartphone make and model to a database provided an OS version for 88% of respondents; only 2% did not answer the question, but 10% of answers could not be matched to the database; 10% of OS versions were too old for the app.

When asked for the OS version of their smartphone, 66% answered, 31% said don’t know and 3% refused or gave an incomplete answer; 15% reported an OS version that was too old.

Further analyses will examine the reasons respondents gave for not providing the OS version and cross-validate the three measures.

Added Value:

This study provides evidence on how to identify sample members who have smartphones with the required OS version for mobile app-based data collection.

 
4:20 - 5:30 CEST C3: COVID-19 and Crisis Communication
Session Chair: Pirmin Stöckle, University of Mannheim, Germany
 
 

The Mannheim Corona Study - Design, Implementation and Data Quality

Carina Cornesse, Ulrich Krieger

SFB 884, University of Mannheim, Germany

Relevance & Research Question:

The outbreak of COVID-19 has sparked a sudden demand for fast, frequent, and accurate data on the societal impact of the pandemic. To meet this demand quickly and efficiently, within days of the first containment measures in Germany in March 2020, we set up the Mannheim Corona Study (MCS), a rotating panel survey with daily data collection built on the long-standing probability-based online panel infrastructure of the German Internet Panel (GIP). In a team effort, our research group was able to provide political decision makers and the general public with key information on social and economic developments from as early as March 2020, as well as to advance social scientific knowledge through in-depth interdisciplinary research.

Methods & Data:

This presentation gives insights into the MCS methodology and study design. We will provide a detailed account of how we adapted the GIP to create the MCS and describe the daily data collection, processing, and communication routines that were the cornerstones of our MCS methodology. In addition, we will provide insights into the necessary preconditions that allowed us to react so quickly and set up the MCS so early in the pandemic. Furthermore, we will discuss the quality of the MCS data in terms of the development of response rates as well as sample representativeness across the course of the MCS study period.

Results:

Our results show how the German Internet Panel could be transformed into an agile measurement tool in times of crisis. Participation rates were stable over the 16 weeks of data collection. Data quality indicators such as the Average Absolute Relative Bias, which compares key survey indicators to the German Mikrozensus, show stably low deviations from the benchmark.
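For reference, the Average Absolute Relative Bias is commonly defined as the mean absolute relative deviation of the survey estimates from their benchmark values (a sketch of the usual definition; the exact specification used in the MCS may differ):

AARB = \frac{1}{K} \sum_{k=1}^{K} \frac{|\hat{p}_k - p_k^{B}|}{p_k^{B}}

where \hat{p}_k is the survey estimate for benchmark category k and p_k^{B} is the corresponding Mikrozensus benchmark value.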

Added Value:

In this presentation we demonstrate how an existing research infrastructure can be quickly transformed into an instrument for measuring important societal changes or crisis events.



Tracking and driving behaviour with survey and metered data: The influence of incentives on the uptake of a COVID-19 contact tracing app

Holger Nowak, Myrto Papoutsi

respondi, Germany

Relevance & Research Question:

Tracing the chain of infections is a substantial part of the strategy against SARS-CoV-2. But how is the German Corona-Warn-App (CWA) used? Who are the users? Could uptake be boosted simply by informing the population, or are monetary incentives more effective? We study these questions by combining survey data with passively metered behavioral data. Passive metering not only measures app usage more accurately but also helps to understand sensitive behaviour that is affected by social desirability.

Methods & Data:

More than 100 days of fieldwork (June to September 2020): a survey with 2,500 participants and 1,100 participants of the passive tracking panel, which measures usage of the CWA.

3 wave survey:

• Baseline. Random assignment to 2 informational treatments and a control group

• Re-measurement of attitudes and behaviour. Assignment to 3 monetary treatments and a control group

• Last measurement

The control group contains not only surveyed respondents but also part of the metered panel that was not interviewed.

Results:

First, we provide evidence on covariates linked with app usage. We observe higher usage rates among people who are already well informed and adhere to public health guidelines. Furthermore, a higher proportion of more highly educated, digitally competent, and older people use the app, as well as those who report trusting the government. We show that the impact of the information treatments on uptake is negligible, whereas small financial offers increase app usage substantially.

Added Value:

Due to the app's privacy-by-default approach, individual-level determinants of usage have been difficult to identify. This study provides important behavioral evidence and highlights the advantage of passive data for measuring potentially socially desirable behaviour, as well as complex over-time behaviour that is difficult to self-report. It also shows how such data can be combined with an experimental design to evaluate the effects of possible policy interventions. While the nature of the online access panel prohibits strong conclusions about overall usage rates in the population of interest (smartphone users whose phones are technically compatible with the tracing app, a population that is virtually impossible to sample from directly), conditional usage rates across different demographic and behavioral groups are informative about app usage.



Are people more likely to listen to experts than to authorities during the Covid-19 crisis? The case of crisis communication on Twitter during the Covid-19 pandemic in Germany

Larissa Drescher1, Katja Aue1, Wiebke Schär2, Anne Götz2, Kerstin Dressel2, Jutta Roosen1

1c3 team, Germany; 2sine - Süddeutsches Institut für empirische Sozialforschung e.V. | sine-Institut gGmbH, Germany

Relevance & Research Question:

The worldwide spread of the Covid-19 virus has led to an increased need for information related to the pandemic. Social media plays an important role in the population's search for information.

Both authorities and Covid-19 experts use Twitter to share their own statements and opinions directly with the Twitter community, unfiltered and independently of traditional media. Little is known about the Twitter communication behavior of these actors. This study aims to analyze the characteristics and differences of authorities and experts in their Covid-19 communication on Twitter.

Methods & Data: The evaluation is carried out using sentiment analysis and quantitative text analysis. Tweets posted between January 2020 and January 2021 by 40 German accounts, experts (n = 18) and public health authorities (n = 22), are analyzed. For the analysis, 35,645 relevant tweets covering Covid-19 topics were identified. This study is commissioned by the Federal Office for Radiation Protection in Germany.

Results: First findings show that experts (58.6%) have 1.4 times more followers and tweet more often about Covid-19 than authorities (41.4%). Because they cover a much broader range of topics, authorities tweet significantly more about non-Covid-19 topics in 2020 than experts do. Another important finding is that the volume of Covid-19 tweets follows the curve of Covid-19 cases, including lower Twitter activity during the summer of 2020. Remarkable differences also emerge in the structural, content, and style elements of crisis communication tweets. While authorities' Covid-19 tweets are clearly designed to follow the known rules of successful social media communication, with a higher rate of structural elements such as hashtags, URLs, and images, experts' tweets are much plainer. Conversely, experts address their followers more directly through style elements such as the use of the first or second person. Overall, experts' Covid-19 tweets are considerably more successful than those of authorities, as shown by a mean retweet rate seven times that of authorities.
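As a toy illustration of the kind of quantitative text analysis behind these comparisons (the tweet texts, group labels, and counting rules below are assumptions, not the study's actual pipeline), structural elements can be counted per tweet and averaged by group:

import re

# Toy sketch: count structural elements (hashtags, URLs, mentions) per tweet
# and compare group averages. Tweets and group labels are hypothetical.
tweets = [
    {"group": "expert",    "text": "New preprint on aerosol transmission https://t.co/x #COVID19"},
    {"group": "authority", "text": "Please follow the rules. Info: https://t.co/y #Corona @example_office"},
]

def structural_elements(text):
    return {"hashtags": len(re.findall(r"#\w+", text)),
            "urls": len(re.findall(r"https?://\S+", text)),
            "mentions": len(re.findall(r"@\w+", text))}

totals = {}
for t in tweets:
    g = totals.setdefault(t["group"], {"hashtags": 0, "urls": 0, "mentions": 0, "n": 0})
    g["n"] += 1
    for key, value in structural_elements(t["text"]).items():
        g[key] += value

for group, g in totals.items():
    print(group, {k: g[k] / g["n"] for k in ("hashtags", "urls", "mentions")})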

Added Value: The results of this study provide not only insights into risk and crisis communication during the Covid-19 pandemic, but also helpful conclusions for future (health) crisis situations, particularly for communication between authorities and the population.



Targeted communication in weather warnings: An experimental approach

Julia Asseburg1, Nathalie Popovic2

1LINK Institut, Switzerland; 2MeteoSchweiz, Switzerland

Relevance & Research Question: Weather warnings, risk communication

Weather warnings inform the public about potentially dangerous weather events so that people can take precautionary measures to avoid harm and damage. However, weather warnings are often not user-oriented, which leads to poor understanding and low compliance rates. The present study focuses on the question of which elements of a warning message are most important in influencing risk perception and intended behavioural change.

Methods & Data: Vignette experiment, implicit associations, Web survey experiment

Using a single association test in a survey vignette experiment with 2,000 Swiss citizens from all three language regions, we focus on the implicit associations that citizens have, or do not have, when they see a warning message with varying elements (physical values, impact information, behavioural recommendations, warning level, and labelling of the warning level). We test for associations with different concepts that play a role in the pre-decisional process of a warning response (e.g. personal relevance, risk perception). The experimental setup allows us to test for causal relationships between the different elements of a warning message and the intended behavioural response. Measuring the implicit associations enables us to better understand the first reactions triggered by the warning elements and how they shape intended behavior.

Results: Multi-level analyses

Results show that risk and relevance have to be addressed unconsciously for weather warnings to affect the intention to act. The emphasis on behavioural recommendations and potential effects in weather warnings has a wake-up-call character. In a nutshell, people need to know to what extent the weather can affect their well-being and what they can do to protect themselves.

Added Value: Targeted communication to the public

First, by conducting a survey vignette experiment in combination with the single association test, we apply an experimental setup that opens the black box of how targeted communication is perceived. Second, the results add direct practical value as they inform the development of user-oriented weather warnings. Finally, the study contributes to research on risk perception and communication by providing further insight into the cognitive process that underlies the decision to take protective action.

 
4:20 - 5:30 CEST D3: ResearchTech
Session Chair: Stefan Oglesby, data IQ AG, Switzerland
 
 

ResearchTech: what are the implications for the insight industry?

Steve Mast

Delvinia, Canada

Recently, "ResearchTech" or "ResTech" has emerged as a new term in the world of consumer and data insights. Leading experts view it as the "next big thing". Indeed, a new generation of online platforms and tools is fundamentally changing the relationship between researchers and decision makers. ResearchTech is expected to boost the agility of the research process, increase speed, and massively expand the circle of users of data-based insights. The presentation will give a brief introduction to the current state of ResearchTech, highlight relevant use cases, and discuss current and future implications for marketers and insight professionals.



Leveraging deep language models to predict advertising effectiveness

Christian Scheier

aimpower GmbH, Germany

While advertising testing has become more agile in the past few years, it still takes considerable time and effort to develop and deploy these tests, analyze results, and derive key learnings on which to act.

This often means that tests are only conducted at the end of creative development. Moreover, many of these tests lack a clear predictive relationship with actual in-market results.

We show that, by leveraging recent developments in deep language modelling, it becomes possible to predict actual sales results from just a single open-ended question that respondents answer after being exposed to the copy. Additional metrics then provide immediate insights into the reasons for a concept's success or failure. By implementing this solution as a SaaS platform, organizations for the first time have the opportunity to evaluate concepts and advertising assets along the entire development process, quickly iterating across versions to optimize ad effectiveness and thus sales.
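The vendor's actual stack is not disclosed; purely as a rough illustration of the general idea (a pre-trained language model turns the open-ended answer into an embedding, and a simple regression maps it to an in-market outcome), one might sketch it as follows, with sentence-transformers and ridge regression as stand-ins and the data entirely hypothetical:

# Rough sketch: embed open-ended responses with a pre-trained language model
# and regress an in-market outcome on the embeddings. Illustrative only.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import Ridge

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical training data: one open-ended answer per tested ad plus its sales lift.
answers = ["Made me laugh, I'd definitely try it",
           "Confusing, I didn't get what it was for"]
sales_lift = [2.3, -0.4]

X = encoder.encode(answers)              # dense sentence embeddings
model = Ridge(alpha=1.0).fit(X, sales_lift)

new_answer = ["Beautiful visuals but the brand was forgettable"]
print(model.predict(encoder.encode(new_answer)))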

The solution will be presented live, with (anonymized) insights into how clients (FMCG Top 3) actually use it and which results they have achieved.



Opendata for better customer understanding

Christian Becker

FREESIXTYFIVE, Germany

Open data is everywhere, and companies need to keep pace with an accelerating speed of change that challenges communication, products, services, and the business model itself.

Research has always helped to analyze data and provide solid insights for strategic planning.

FREESIXTYFIVE developed a SMART INTELLIGENCE framework to deliver interoperability throughout the research stack.

By delivering synchronized open data, we enrich structured research data with real-time insights. Thanks to this "hypercontexting", we are able to quickly develop individual market maps and data training models across different industries. Smart Market Intelligence empowers customers to validate new markets, assess innovative product ideas, or optimize their existing market activities through a better understanding of their customers and users.

The innovative framework will be illustrated with specific, real-world use cases, including the Gaming industry.



Advent of Emotion AI in Consumer Research

Lava Kumar

Entropik Tech, India

Emotion AI adds the 3 As (Accuracy, Agility, and Actionability) to augment traditional approaches to consumer research. With more than 90% accuracy and computer-vision-based methods, emotion insights make it easy for brands to humanize their media, digital, and shopper experiences, thus building a positive emotional connection with their customers and increasing conversions.

Join Lava Kumar, CPO and Founder of Entropik Tech, as he talks about:

• Introducing Emotion AI to Consumer Research

• Facial Coding, Eye Tracking, Voice AI, and Brainwave Mapping

• The 3As of Emotion AI: Accuracy, Agility, and Actionability

• Emotion AI in Media, Digital and Shopper Research

 
8:00 - 10:00 CEST Virtual GOR 21 Party
sponsored by mo'web