Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
P 1.4: Poster IV
Time:
Thursday, 10/Sep/2020:
1:20 - 2:20


Presentations

Data quality in ambulatory assessment studies: Investigating the role of participant burden and presentation form

Charlotte Ottenstein

University of Koblenz-Landau, Germany

Relevance & Research Question: Parallel to the technical development of mobile devices and smartphones, interest in conducting ambulatory assessment studies has grown rapidly over the last decades. Participants in these studies are usually asked to repeatedly fill out short questionnaires. Besides numerous advantages, such as reduced recall bias and high ecological validity, these studies place a higher burden on participants than cross-sectional or classical longitudinal studies. In our study, we experimentally manipulated participant burden (questionnaire length low vs. high) to investigate whether higher participant burden leads to lower data quality (e.g., as indicated by the compliance rate and careless responding indices). Moreover, we aimed to analyze the effects of participant burden on the association between state extraversion and pleasant mood. We provided the questionnaires on two different platforms (questionnaire app vs. online questionnaire with a link via e-mail) to investigate differences in the usability of these two presentation forms.
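
The abstract does not spell out how careless responding would be quantified; a common indicator is the longstring index, sketched below as a minimal, hypothetical illustration (the function name and data layout are assumptions, not taken from the study).

    import numpy as np

    def longstring_index(responses):
        """Return the length of the longest run of identical consecutive answers.

        A common careless-responding indicator: long runs of the same rating
        suggest inattentive (straightlined) responding.
        """
        responses = np.asarray(responses)
        if responses.size == 0:
            return 0
        longest = current = 1
        for prev, curr in zip(responses[:-1], responses[1:]):
            current = current + 1 if curr == prev else 1
            longest = max(longest, current)
        return longest

    # Example: a run of four identical ratings yields a longstring of 4.
    print(longstring_index([3, 3, 3, 3, 1, 5, 2]))  # -> 4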

Methods & Data: Data were collected via online questionnaires and a smartphone application. After an initial online questionnaire (socio-demographic measures), participants were randomly assigned to one of four experimental groups (short vs. long questionnaires × app vs. online questionnaire). The ambulatory assessment phase lasted three weeks (one prompt per day in the evening). Participants rated situational characteristics, momentary mood, state personality, daily life satisfaction, depression, anxiety, stress, and subjective burden due to study participation. At the end of the study, participants filled out a short online questionnaire about their overall impression and technical issues. We computed the required sample size for mean differences (two-way ANOVA); 245 participants were needed to detect a small to medium effect (f = 0.18, power > 80%).
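
For context, the reported sample size is consistent with an a priori power analysis for a single 1-df main effect. A minimal sketch using statsmodels, assuming the effect size is Cohen's f (the exact power-analysis settings are not given in the abstract):

    from statsmodels.stats.power import FTestAnovaPower

    # Approximate the a priori power analysis for a 1-df contrast
    # (e.g., the main effect of burden in the 2x2 design). Assumption:
    # the reported effect size is Cohen's f.
    n_total = FTestAnovaPower().solve_power(
        effect_size=0.18,  # small to medium effect (Cohen's f)
        alpha=0.05,
        power=0.80,
        k_groups=2,        # two groups -> numerator df = 1
    )
    print(round(n_total))  # about 244, close to the reported 245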

Results: Data collection is still running. The results of the preregistered hypotheses will be presented at the conference.

Added Value: This study can add insights into the effects of participant burden on data quality in ambulatory assessment studies. The results might serve as a basis for recommendations on the design of such studies. It would be desirable to design future ambulatory assessment studies in such a way that participants are not overly stressed by their participation.



Guess what I am doing? Identifying Physical Activities from Accelerometer Data by Machine Learning and Deep Learning

Joris Mulder, Natalia Kieruj, Pradeep Kumar, Seyit Hocuk

CentERdata - Tilburg University, The Netherlands

Relevance & Research Question:

Accelerometers or actigraphs have long been a costly investment for measuring physical activity, but nowadays they have become much more affordable. Currently, they are used in many research projects, providing highly detailed, objectively measured sensory data. Whereas self-report data might miss everyday active behaviors (e.g., walking to the shop, climbing stairs), accelerometer data provide a more complete picture of physical activity. The main objective of this research is to identify specific activity patterns using machine learning techniques; the secondary objective is to improve the accuracy of this identification by validating activities against time-use and survey data.

Methods & Data:

Activity data were collected through a large-scale accelerometer study in the probability-based Dutch LISS panel, consisting of 5,000 households. A total of 1,200 respondents participated in the study and wore a GeneActiv device for eight days and nights, measuring physical activity around the clock. In addition, a diverse group of 20 people labeled specific activity patterns by wearing the device while performing the activities. These labeled data were used to train supervised machine-learning models (i.e., support vector machine, random forest) to detect specific activity patterns; a deep learning model was trained to further enhance detection of the activities. Moreover, 450 respondents from the accelerometer study also participated in a time-use study in the LISS panel, recording their daily activities for two days (a weekday and a weekend day) on a smartphone using a time-use app. These labeled activities were used to validate the predicted activities.
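
No implementation details are given in the abstract; the sketch below shows, with placeholder data and hypothetical window features, how the two classifier families mentioned could be trained and compared in scikit-learn.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder input: one row per fixed-length accelerometer window with
    # summary features (e.g., mean, standard deviation, dominant frequency
    # per axis) and one activity label per window.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 12))
    y = rng.choice(["sitting", "walking", "cycling"], size=1000)

    for name, model in [
        ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
        ("Random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ]:
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: mean CV accuracy {scores.mean():.2f}")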

Results:

Activity patterns of specific activities (i.e., sleeping, sitting, walking, cycling, jogging, tooth brushing) were successfully identified using machine learning. The deep learning model increased predictive power and better distinguished between specific activities. The time-use data proved useful for further validating certain hard-to-identify activities (i.e., cycling).
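
The abstract does not describe the deep learning architecture; one common choice for raw accelerometer windows is a small 1-D convolutional network, sketched here in PyTorch purely as an assumed illustration.

    import torch
    from torch import nn

    class ActivityCNN(nn.Module):
        """Hypothetical 1-D CNN over raw accelerometer windows (3 axes x 256 samples)."""

        def __init__(self, n_classes: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(3, 32, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # global average pooling over time
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x).squeeze(-1))

    model = ActivityCNN(n_classes=6)         # six example activity classes
    logits = model(torch.randn(8, 3, 256))   # batch of 8 windows
    print(logits.shape)                      # torch.Size([8, 6])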

Added Value:

We show how machine learning and deep learning can identify specific activity types from an accelerometer signal and how to validate the identified activities with time-use data. Gaining insight into physical activity behavior can, for instance, be useful for health and activity research.



Embedding Citizen Surveys in Effective Local Participation Strategies

Fabian Lauterbach, Marc Schaefer

wer denkt was GmbH, Germany

Relevance & Research Question: Citizen surveys as an initiating and innovative form of participation are becoming increasingly popular. They are advocated as a cost-effective and purposeful method of enhancing the public basis of the policy-making process, thus representing an appealing first step towards participation for local governments. With the rapid advance and increasing acceptance of the Internet, it is now possible to reach a sufficiently large number of people from various population groups. However, in order to exploit these advantages to their full potential, it is important to gain insights into how to maximise the perceived impact and success for citizens. In short, how should local municipalities design and follow up on citizen surveys?

Methods & Data: We present key insights into citizen surveys as a participatory driving force, based on more than twenty citizen surveys of various sizes and on various topics, with over 12,000 participants in numerous municipalities (e.g. Alsfeld, Friedrichshafen, Konstanz, Marburg). More precisely, our main focus lies on effective communication with the target population at the beginning of the process and on the subsequent processing, visualisation and presentation of survey results.

Results: Citizen surveys can serve as an initiating process for enhancing political mobilisation and participation in the context of broader political processes, provided that rules and conditions are communicated early and clearly. Consulting citizens first but then deciding contrary to their input is the worst imaginable approach, and yet it still occurs continuously in practice. Key factors that contribute to the success of a survey include an objective evaluation, a thorough analysis and the use of its results as future guidelines for policy-making.

Added Value: While citizen surveys are particularly well suited for initiating participation, it often remains unclear how citizens perceive the impact of their participation and the overall success of the survey. Although there has been extensive research and debate about specific survey design, the issues of preparing and following up with citizens in order to promote responsiveness and efficiency have, up until now, been widely neglected. Accordingly, we seek to advance knowledge on these essential, yet scarcely studied, stages of implementation.



Cognitive load in multi-device web surveys - Disentangling the mobile device effect

Ellen Laupper, Lars Balzer

Swiss Federal Institute for Vocational Education and Training SFIVET, Switzerland

Relevance & Research Question: Increased survey completion time for respondents completing web surveys on mobile devices is one of the most persistent findings. Furthermore, completion time is often used as a direct measure of cognitive load. However, as the measurement and interpretation of completion time face various challenges, our study examined which possible sources of device differences add to cognitive load, operationalized both as completion time (objective indicator) and as perceived cognitive load (subjective indicator). Furthermore, we wanted to examine whether cognitive load functions as a mediator between these sources and several data quality indices, as proposed in the "Model of the impact of the mode of data collection on the data collected" by Tourangeau and colleagues.

Methods & Data: An extra questionnaire was added to our institution's mobile-optimized, routinely used online course evaluation questionnaire. Key variables such as distraction, multitasking, presence of others, attitude toward course evaluation in general, and mobile device use were assessed. Additionally, paradata such as device type and completion time were collected.

The sample consisted of participants in 107 mostly one-day continuing training courses for VET/PET professionals from the Italian-speaking part of Switzerland (N = 1,795).
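
The mediation structure proposed in the research question (sources of device differences affecting data quality via cognitive load) can be illustrated with a simple product-of-coefficients test; the sketch below uses simulated data and hypothetical variable names, not the study's actual measures.

    import numpy as np
    import statsmodels.api as sm

    # Simulated illustration: distraction (source) -> completion time as
    # cognitive-load indicator (mediator) -> item nonresponse (outcome).
    rng = np.random.default_rng(1)
    n = 500
    distraction = rng.normal(size=n)
    load = 0.5 * distraction + rng.normal(size=n)                      # a-path
    nonresponse = 0.4 * load + 0.1 * distraction + rng.normal(size=n)  # b-path + direct

    a = sm.OLS(load, sm.add_constant(distraction)).fit().params[1]
    b = sm.OLS(nonresponse,
               sm.add_constant(np.column_stack([load, distraction]))).fit().params[1]
    print(f"indirect effect a*b = {a * b:.2f}")  # roughly 0.5 * 0.4 = 0.20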

Results: Consistent with previous research, we found a self-selection bias in mobile device use, more reported distractions in the mobile completion situation, longer completion times and a higher perceived cognitive load. Several data quality indices, such as breakoff rate and item nonresponse, were higher as well, whereas straightlining was less prevalent. In addition, we found that the key variables in our study predicted the objective and the subjective indicator of cognitive load differently and to varying degrees.

Added Value: The presented study suggests that cognitive load is a multifaceted construct. Its findings add to the limited existing knowledge about which survey factors are related to which aspects of cognitive load and how these, in turn, are related to different data quality indices.



Assessing Panel Conditioning in the GESIS Panel: Comparing Novice and Experienced Respondents

Fabienne Kraemer1, Joanna Koßmann2, Michael Bosnjak2, Henning Silber1, Bella Struminskaya3, Bernd Weiß1

1GESIS Leibniz Institute for the Social Sciences, Germany; 2ZPID - Leibniz-Institute for Psychology Information, Germany; 3Utrecht University, The Netherlands

Relevance and Research Question:

Longitudinal surveys allow researchers to study stability and change over time and to make statements about causal relationships. However, panel studies also have methodological drawbacks, such as the threat of panel conditioning effects (PCE), which are defined as artificial changes over time due to repeated survey participation. Accordingly, researchers cannot differentiate "real" change in respondents' attitudes, knowledge, and behavior from change that occurred solely as a result of prior survey participation, which may undermine the results of their analyses. Therefore, a closer analysis of the existence and magnitude of PCE is crucial.

Methods and Data:

In the present research, we will investigate the existence and magnitude of PCE within the GESIS Panel, a probability-based mixed-mode access panel administered bimonthly to a random sample of the German-speaking population aged 18 years and over. To account for panel attrition, a refreshment sample was drawn in 2016. The incorporation of this refreshment sample makes it possible to conduct between-subject comparisons across the different cohorts of the panel in order to identify PCE. We expect differences between the cohorts regarding response latencies, the extent of straightlining, the prevalence of don't-know responses, and the extent of socially desirable responding. First, we expect more experienced respondents to show shorter response latencies due to previous reflection on and familiarity with the answering process. Second, experienced respondents are expected to show more satisficing (straightlining, speeding, selection of don't-know options). Finally, becoming familiar with the survey process might decrease the likelihood of socially desirable responding among experienced respondents.
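
Indicators such as straightlining are standard but not operationalized in the abstract; below is a minimal sketch of a per-respondent straightlining flag, using hypothetical item names and cohort labels.

    import pandas as pd

    # Hypothetical data: one row per respondent, answers to a grid of
    # Likert items q1..q5, plus the panel cohort the respondent joined.
    df = pd.DataFrame({
        "cohort": ["2013", "2016", "2013", "2016"],
        "q1": [3, 2, 4, 5], "q2": [3, 4, 4, 5],
        "q3": [3, 1, 4, 5], "q4": [3, 5, 4, 5], "q5": [3, 2, 4, 5],
    })

    items = df.filter(regex=r"^q\d+$")
    df["straightlined"] = items.nunique(axis=1).eq(1)  # same answer on every item
    print(df.groupby("cohort")["straightlined"].mean())  # straightlining rate per cohort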

Results:

Since this research is work in progress, related to a DFG-funded project that started only last December, we do not have results yet, but we will present first results at the GOR conference in March.

Added value:

PCE can negatively affect the validity of widely used longitudinal surveys and thus undermine the results of a multitude of analyses based on the respective panel data. Therefore, our findings will contribute to the investigation of the effects of PCE on data quality and may encourage similar analyses with comparable data sets in other countries.



 