Conference Agenda

Session
P 1.1: Poster I
Time: Thursday, 10/Sep/2020, 1:20 - 2:20


Presentations

Reproducible and dynamic meta-analyses with PsychOpen CAMA

Tanja Burgard, Robert Studtrucker, Michael Bosnjak

ZPID - Leibniz Institute for Psychology Information, Germany

Relevance & Research Question:

A problem observed by Lakens, Hilgard, & Staaks (2016) is that many published meta-analyses remain static and are not reproducible. The reproducibility of meta-analyses is crucial for several reasons. First, to enable the research community to update meta-analyses in case of new evidence. Second, to give other researchers the opportunity to use subsets of meta-analytic data. Third, to enable the application of new statistical procedures and to test their effects on the results of a meta-analysis.

We plan to set up an infrastructure for the dynamic curation and analysis of meta-analyses in psychology. A CAMA (Community Augmented Meta-Analysis) serves as an open repository for meta-analytic data, provides basic analysis tools, makes meta-analytic data accessible and can be used and augmented by the scientific community as a dynamic resource (Tsuji, Bergmann, & Cristia, 2014).

Methods & Data:

We created templates to standardize the data structure and variable naming of meta-analyses. This standardization is crucial for the planned CAMA, as it enables the interoperability of data and analysis scripts. Using these templates, we standardized data from meta-analyses in two different psychological domains (cognitive development and survey methodology) and replicated basic meta-analytic outputs with the standardized data sets and analysis scripts.
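As an illustration, a template-conformance check can be kept very simple. The following Python sketch validates a data set against a hypothetical column schema; the column names and types are invented for illustration and are not the actual PsychOpen CAMA template.

    import pandas as pd

    # Hypothetical template: required columns for one standardized
    # meta-analytic data set. The real PsychOpen CAMA templates may
    # define different names and fields.
    REQUIRED_COLUMNS = {
        "study_id": "object",      # unique study label
        "effect_size": "float64",  # standardized effect size (e.g., Hedges' g)
        "variance": "float64",     # sampling variance of the effect size
        "n_total": "int64",        # total sample size
    }

    def validate_against_template(df: pd.DataFrame) -> list:
        """Return a list of template violations; an empty list means conformant."""
        problems = []
        for col, dtype in REQUIRED_COLUMNS.items():
            if col not in df.columns:
                problems.append(f"missing column: {col}")
            elif str(df[col].dtype) != dtype:
                problems.append(f"{col}: expected {dtype}, got {df[col].dtype}")
        return problems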

Results:

We succeeded in standardizing various meta-analyses in a common format using our templates and in replicating the results of these meta-analyses with the standardized data sets. For the planned CAMA, we tested analysis scripts for various meta-analytic outputs, such as funnel plots, forest plots, power plots, and meta-regression.
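A minimal sketch of such a replication step is shown below, using the meta-analysis utilities in the Python package statsmodels on invented toy data; the abstract does not state which software the authors' scripts use, so this is an assumption for illustration only.

    import numpy as np
    from statsmodels.stats.meta_analysis import combine_effects

    # Toy data standing in for one standardized data set: per-study
    # effect sizes and their sampling variances (values invented).
    effect = np.array([0.42, 0.10, 0.55, 0.27, 0.38])
    variance = np.array([0.010, 0.025, 0.018, 0.008, 0.030])

    # Random-effects pooling; summary_frame() reports the fixed- and
    # random-effects estimates with confidence intervals.
    result = combine_effects(effect, variance)
    print(result.summary_frame())

    # Forest plot of the individual and pooled effects.
    fig = result.plot_forest()
    fig.savefig("forest_plot.png")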

Added Value:

Interoperability and standardization are important requirements for the efficient use of open data in general (Braunschweig, Eberius, Thiele, & Lehner, 2012). Moreover, the templates and analysis scripts presented here serve as the basis for the development of PsychOpen CAMA, a tool for the research community to collect data and conduct meta-analyses in psychology collaboratively.



Survey Attitude Scale (SAS) Revised: A Randomized Controlled Trial Among Higher Education Graduates in Germany

Thorsten Euler, Ulrike Schwabe, Nadin Kastirke, Isabelle Fiedler, Swetlana Sudheimer

German Centre for Higher Education Research and Science Studies, Germany

Relevance & Research Question:

Ample empirical evidence indicates that general attitudes towards surveys predict willingness to participate in (online) surveys (de Leeuw et al. 2017; Jungermann/Stocké 2017; Stocké 2006). The nine-item short form of the Survey Attitude Scale (SAS), as proposed by de Leeuw et al. (2010, 2019), differentiates between three dimensions: (i) survey enjoyment, (ii) survey value, and (iii) survey burden. Previous analyses of different datasets have shown that two dimensions in particular, survey value and survey burden, do not perform satisfactorily with respect to internal consistency and factor loadings across samples (Fiedler et al. 2019). Following de Leeuw et al. (2019), we therefore investigate whether the SAS can be further improved by reformulating single items and adding new ones from the existing literature (Stocké 2014; Rogelberg et al. 2001; Stocké/Langfeldt 2003).

Methods & Data:

Consequently, we implemented the proposed German version of the SAS, adopted from the GESIS Online Panel (Struminskaya et al. 2015), in a recent online survey of German higher education graduates (October - December 2019, n = 1,378). Furthermore, we realised a survey experiment with a split-half design aiming to improve the SAS by varying the wording of four items and adding one supplemental item per dimension. To compare both scales, we use confirmatory factor analysis (CFA) and measures of internal consistency within both groups.
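For readers unfamiliar with this analysis step, a minimal sketch of a group-wise CFA in Python follows, assuming the third-party semopy package and placeholder item names; the authors' actual analyses may well use different software (e.g., R/lavaan), so this is an illustrative assumption, not their code.

    import pandas as pd
    import semopy  # third-party SEM package, assumed available

    # Three-factor measurement model for the nine-item SAS; the item
    # names (enj1, ..., bur3) are placeholders for the actual variables.
    SAS_MODEL = """
    enjoyment =~ enj1 + enj2 + enj3
    value     =~ val1 + val2 + val3
    burden    =~ bur1 + bur2 + bur3
    """

    def fit_sas_cfa(items: pd.DataFrame):
        """Fit the CFA; return parameter estimates and fit statistics."""
        model = semopy.Model(SAS_MODEL)
        model.fit(items)
        return model.inspect(), semopy.calc_stats(model)

    # Split-half comparison: fit the same model separately per group, e.g.
    # estimates_c, fit_c = fit_sas_cfa(data[data["group"] == "control"])
    # estimates_e, fit_e = fit_sas_cfa(data[data["group"] == "experimental"])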

Results:

Comparing the CFA results, our empirical findings indicate that the latent structure of the SAS is reproducible in the experimental as well as in the control group. Factor loadings as well as reliability scores support the theoretical structure adequately. However, we find evidence that changes in item wording (harmonizing the use of terms and avoiding mention of the survey mode) can partially improve the internal validity of the scale.

Added Value:

Overall, the standardized short SAS is a promising instrument for survey researchers. By intensively validating the proposed instrument in an experimental setting, we contribute to the existing literature. Since de Leeuw et al. (2019) also report shortcomings of the scale, we show possibilities for further improvement.



"Magic methods", bigger data and AI - Do they endanger quality criteria in online surveys?

Stephanie Gaaw, Cathleen M. Stuetzer, Stephanie Hartmann, Johannes Winter

Technical University Dresden, Germany

Relevance & Research Question: Quality criteria in the field of (online) surveys have been established for quite a long time and are therefore viewed as settled. With the emergence of new methodological approaches, such as new sampling procedures, it is questionable whether those criteria are still up to date and how current research achieves a methodologically sound reconstruction of quality. This contribution therefore deals with the current state of the art in meeting quality criteria of online surveys in times of big data, self-learning algorithms, and AI.

Methods & Data: On the basis of a narrative literature review, the current state of research is presented. Findings from both academic and applied research are brought together and transferred into recommendations for action. Current (academic) contributions were analysed for challenges and potentials related to quality assurance procedures in the area of online research. In addition, scientific standards and current codes of conduct for the industry were elaborated and examined for their adaptability and scalability for market, opinion, and social research.

Results: The results are currently being processed. However, general quality criteria such as objectivity, reliability, and validity persist. The construct of "representativeness", though, is still under discussion, and it is not yet clear whether the gold standard of a representative survey works online without manipulation procedures. A fresh view on quality criteria in online contexts therefore seems essential in order to give orientation for the successful implementation of new methods of online research in the future.

Added Value: The aim is to make a sustainable contribution in the field of quality assessment for both academic and applied online research, especially online surveys. A particular benefit for applied research is to address problems such as survey fatigue and acceptance issues.



Semi-automation of qualitative content analysis based on online research

Annette Hoxtell

HWTK University of Applied Sciences, Germany

Relevance & Research Question:

According to the 2019 GRIT Report, market researchers consider research and analysis automation a crucial opportunity for their industry. Although qualitative studies are harder to automate than quantitative ones, the automation of qualitative content analysis, a major qualitative evaluation method, is already partially feasible and expected to be developed further.

Main research question: How can qualitative content analysis be (semi-)automated?

Sub-questions: What would an automated qualitative research process as a whole look like? How does it advance online research?

Methods & Data: The semi-automation of qualitative content analysis, as well as of the research process as a whole, is conceptualized based on the hermeneutic method, which is applied to a non-automated study carried out by the author using case-study methodology. Automation approaches already in use are identified through a systematic literature review.

Results: Currently, qualitative content analysis and the qualitative research process as a whole can only be semi-automated since they depend on continuous human-machine interaction. Full automation seems feasible with the advance of artificial intelligence. It would be based on online and mobile technologies.
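To make the notion of continuous human-machine interaction concrete, here is a minimal, hypothetical sketch of one semi-automated coding step in Python: the machine proposes category codes from a keyword dictionary and a human coder confirms or corrects them. The codebook and workflow are illustrative assumptions, not the author's system.

    # Hypothetical keyword dictionary mapping category codes to cue terms;
    # in a real project this would come from the researcher's coding frame.
    CODEBOOK = {
        "price": ["cost", "expensive", "cheap", "fee"],
        "location": ["close", "distance", "commute", "city"],
    }

    def propose_codes(segment: str) -> list:
        """Machine step: propose candidate codes for one text segment."""
        text = segment.lower()
        return [code for code, cues in CODEBOOK.items()
                if any(cue in text for cue in cues)]

    def code_segment(segment: str) -> list:
        """Human-in-the-loop step: a coder confirms or corrects proposals."""
        proposed = propose_codes(segment)
        print("Segment:", repr(segment))
        print("Proposed codes:", proposed)
        answer = input("Accept (y) or type comma-separated codes: ")
        if answer.strip().lower() == "y":
            return proposed
        return [c.strip() for c in answer.split(",") if c.strip()]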

Added Value: This poster highlights a roadmap for the automation of qualitative research comprising qualitative content analysis, an increasingly important topic in qualitative social research, and especially in market research.



Assessing the Reliability and Validity of a Four-Dimensional Measure of Socially Desirable Responding

Rebekka Kluge1, Maximilian Etzel1, Joseph Walter Sakshaug2, Henning Silber1

1GESIS Leibniz Institute for the Social Sciences, Germany; 2Institute for Employment Research (IAB), Germany

Relevance & Research Question: Socially desirable responding (SDR), understood as the tendency of respondents to present themselves in surveys in the best possible light, is often conceived of as a one- or two-dimensional construct. The two short scales for Egoistic (E-SDR) and Moralistic Socially Desirable Responding (M-SDR), in contrast, treat SDR as a four-dimensional construct, which represents the most comprehensive conceptualization of SDR. Nevertheless, these short scales have not yet been applied and validated in a general population study. Such an application is important for measuring and controlling for social desirability bias in general population surveys. Therefore, we test the reliability and validity of both short scales empirically to provide a practical measure of the four dimensions of SDR in self-administered surveys.

Methods & Data: The items of the source versions of the E-SDR and M-SDR scales were translated into German using the team approach. To avoid measuring a response behavior rather than social desirability bias, we balanced negatively and positively formulated items. The scales together comprise 20 items. We integrated these 20 items into a questionnaire within a mixed-mode mail- and web-based survey conducted in the city of Mannheim, Germany (N ≈ 1,000 participants). The sample was selected via simple random sampling (SRS).

We assess the reliability and validity of the E-SDR and M-SDR using several analytical methods. To test reliability, we aim to compute Cronbach's alpha, the test-retest stability of the two short scales, and the item-total correlation. To investigate validity, we will test construct validity with confirmatory factor analysis (CFA). To measure discriminant and convergent validity, we correlate the two short scales with the Big Five traits Extraversion, Agreeableness, Conscientiousness, Emotional Stability, and Openness.
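As a minimal sketch of the planned reliability computations, the following Python functions implement Cronbach's alpha and the corrected item-total correlation for a table of item responses; the column layout is a placeholder and this is not the authors' analysis code.

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/variance(total)).

        Each column of `items` is one scale item, each row one respondent."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances / total_variance)

    def corrected_item_total(items: pd.DataFrame) -> pd.Series:
        """Correlation of each item with the sum of all remaining items."""
        total = items.sum(axis=1)
        return pd.Series({col: items[col].corr(total - items[col])
                          for col in items.columns})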

Results: The field period will be from November 2019 to December 2019, and the first results will be available in February 2020.

Added Value: Based on our findings, we can evaluate the four-dimensional measurement of SDR with the E-SDR and M-SDR short scales in self-administered population surveys. If the measurement turns out to be reliable and valid, it can be used in future general population surveys to control for SDR.