Conference Agenda

Session Overview
Session
Poster Part (I)
Time:
Thursday, 16/Mar/2017:
14:00 - 15:30


Presentations

Bayesian Combining of Web Survey Data from Probability- and Non-Probability Samples for Survey Estimation

Joseph Sakshaug1, Arkadiusz Wisniowski1, Diego Perez-Ruiz1, Annelies Blom2

1University of Manchester, United Kingdom; 2University of Mannheim, Germany

Sample surveys are frequently used in the social sciences to measure and describe large populations. While probability-based sample surveys are considered the standard by which valid population-based inferences can be made, there has been increased interest in the use of non-probability samples to study public opinion and human behavior, particularly through web surveys. This increased interest is driven by multiple factors, such as the costs of recruiting a probability-based sample, which can be significant. A second factor is the popularity of the web as a survey platform, which has led to increased adoption of online access panels that can deliver cheaper and timelier survey results than traditional probability-based surveys. However, online access panels are heavily criticized because they do not employ probability sampling methods to recruit panel members, and therefore the mathematical probability theory that underlies valid statistical inference cannot be applied. While non-probability-based surveys are not ideal for making population-based inferences, their attractive cost properties make them potentially useful as a supplement to traditional probability-based data collection. In this paper, we examine this notion by combining probability and non-probability web survey samples under a Bayesian framework. The Bayesian paradigm is well suited to this situation as it permits the integration of multiple data sources and offers the potential for increased precision in estimation. On the other hand, combining probability samples with non-probability samples that could be biased may offset gains in efficiency. Thus, there is likely to be a bias-precision tradeoff when combining probability and non-probability samples. We examine this tradeoff using the German Internet Panel (GIP), a nationally representative, probability-based web survey, in combination with a set of non-probability-based web surveys that fielded a subset of the GIP questionnaire during the same time period. We apply the Bayesian combining framework to produce estimates of survey items and compare them to the probability-based estimates alone. We examine the accuracy and precision of the resulting survey estimates to determine whether combining the probability and non-probability samples yields valid inferences (and a likely cost savings) relative to the probability survey alone.
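
The abstract does not spell out the combining model, so the following is only a minimal illustrative sketch, not the authors' estimator: a normal-normal, precision-weighted pooling of a probability-based estimate with a possibly biased non-probability estimate. All names and numbers are assumptions; the point is simply to make the bias-precision tradeoff concrete.

# Sketch: precision-weighted Bayesian pooling of two survey estimates of a mean.
# The non-probability estimate acts as an informative prior; values are invented.
import numpy as np

def pool_estimates(mean_p, se_p, mean_np, se_np):
    """Posterior mean and standard error under a simple normal-normal model."""
    prec_p, prec_np = 1.0 / se_p**2, 1.0 / se_np**2
    post_var = 1.0 / (prec_p + prec_np)
    post_mean = post_var * (prec_p * mean_p + prec_np * mean_np)
    return post_mean, np.sqrt(post_var)

truth = 50.0
mean_p, se_p = 50.5, 1.5    # probability-based (GIP-like) estimate: unbiased, less precise
mean_np, se_np = 52.0, 0.8  # non-probability estimate: cheaper, more precise, biased upward

post_mean, post_se = pool_estimates(mean_p, se_p, mean_np, se_np)
print(f"pooled estimate {post_mean:.2f} (SE {post_se:.2f}) vs. truth {truth}")
# The pooled SE is smaller than either input SE, but the pooled mean inherits part
# of the non-probability bias: the bias-precision tradeoff described above.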


Comparing cross-cultural cognitive interviews and online probing for the assessment of cross-cultural measurement equivalence

Jule Adriaans, Michael Weinhardt

Bielefeld University, Germany

Relevance & Research Question:

When measuring concepts cross-culturally, measurement equivalence is essential for yielding meaningful results. As part of the questionnaire design process, cross-cultural cognitive interviewing (CCCI) is commonly used to identify possible threats to measurement equivalence. CCCI is a version of standard cognitive interviewing used to assess the cognitive processes underlying survey responses in personal interviews. For pragmatic reasons, CCCI is usually carried out with small sample sizes and involves the use of different probing techniques. The relatively new tool of online probing (OP) combines features of CCCI with the advantages of an online survey, achieving a greater sample size and broader coverage of concepts. This study investigates whether OP can be an efficient alternative to CCCI in developing cross-cultural questionnaires by comparing response quality and substantive results.

Methods & Data:

In this study, both CCCI and OP are applied in the design process of a cross-cultural questionnaire on justice attitudes. Existing items that measure justice attitudes will be presented to respondents in CCCI and OP, followed by comprehension and category-selection probes. A convenience sample of university students and employees with an international background will be recruited, focusing on the languages German, English, and Russian. The response quality of both methods will be evaluated by comparing nonresponse as well as response length. In a second step, we will analyze whether both methods identify similar threats to measurement equivalence.

Results:

The study is work in progress; preliminary results will be available for the conference.

Added Value:

While CCCI as a method yields a higher level of interactivity and is assumed to produce higher-quality data, OP can be implemented in web surveys, which allow for larger sample sizes in the evaluation of threats to cross-cultural equivalence. By comparing the results of the two methods, we study their relative benefits for the assessment of cross-cultural equivalence. We expect OP to prove a useful additional technique in the development of questionnaires, especially in cross-cultural settings.


Adriaans-Comparing cross-cultural cognitive interviews and online probing-273.pdf

How much does the mode of response matter? A comparison of web-based and mail-based response when examining sensitive issues in social surveys

Aki Koivula, Pekka Räsänen, Outi Sarpila

University of Turku, Finland

Relevance & Research Question: It is argued that traditional ways of collecting social surveys are threatened by rising data-collection costs and declining response rates. In an attempt to solve this problem, researchers have started to utilize cheaper and easier data collection methods, especially those focusing on various types of online data. Current research on survey methodology has criticized the sample-to-population representativeness of many online surveys. At the same time, however, research on how the mode of data collection affects responses is almost completely lacking. This paper examines whether responses to a web questionnaire differ from responses to a mail questionnaire when examining respondents’ attitudes towards sensitive issues such as immigration.

Methods & Data: Our data are derived from the International Social Survey Program (ISSP) 2013. We selected two countries for the analysis, Finland (n = 1,243) and Norway (n = 1,585), both of which applied a similar data collection technique (self-administered mail survey and web survey).

Results: We found that respondents tend to respond more negatively towards immigration in the mail questionnaire than in the web questionnaire. The results also indicate that, although the popularity of web surveys has increased in recent years, the mode of response is still associated with socio-demographic background and therefore has an impact on responses.

Added Value: We suggest that the mixed-mode survey is a reliable method of data collection, especially after controlling for background variables and their interactions with the response mode.


Koivula-How much does the mode of response matter A comparison of web-based and mail-based response when.pdf

Impact of using profiling or passive data to select the sample of web surveys

Melanie Revilla1, Carlos Ochoa2

1RECSM-Universitat Pompeu Fabra, Spain; 2Netquest, Spain

Relevance & Research Question: Probability-based sampling is the gold standard for surveys of the general population. However, when researchers are interested in more specific populations, for instance the consumers of a particular brand, much research relies on data from opt-in online panels.

This paper investigates, in the context of non-probability-based online panels, different ways to select a sample of consumers: without previous information, using profiling information, or using passive data from a tracker installed on the panelists' devices. In addition, it investigates the effect of sending the survey closer to the moment of truth, which is expected to reduce memory limitations in recall questions.

Methods & Data: The data were collected in Spain in 2016 by the online fieldwork company Netquest. The samples for administering a web survey about respondents' experience of visiting the websites of different airline companies were selected in four different ways (without previous information, using profiling information, using passive data within 48 hours after the visit, or using passive data later) and compared on several aspects: participation, efficiency, data quality and accuracy, survey evaluation, etc.

Results: The main results were the following:

- Using additional information (profiling or passive) to select the sample leads to clear improvements in terms of levels of participation and fieldwork efficiency, but not in terms of data quality or accuracy.

- Doing the survey closer to the "moment-of-truth" further improves the fieldwork efficiency, but not the other aspects.

- We also observed differences across the different samples in respondents' socio-demographic characteristics and in the survey evaluation. This suggests that, depending on the sample selection method, we might end up with different profiles of respondents.

Added Value: To our knowledge, this is the first study to examine the possibility of using passive data from a tracker to select the sample for a web survey and to do in-the-moment research within an online panel. Overall, it suggests that using additional information from profiling or passive data is recommendable, whereas contacting panelists within 48 hours after the event of interest does not yield further improvement.


Revilla-Impact of using profiling or passive data to select the sample of web surveys-261.pdf

The influence of Forced Answering on response behavior in Online Surveys: A reactance effect?

Philipp Sischka1, Alexandra Mergener2, Kristina Neufang3, Jean Philippe Décieux1

1University of Luxembourg, Luxembourg; 2Federal Institute for Vocational Education and Training (BIBB), Germany; 3University of Trier, Germany

Recent studies have shown that the use of the forced answering (FA) option in online surveys reduces data quality. In particular, they found that forcing respondents to answer questions in order to proceed through the questionnaire leads to higher dropout rates and lower answer quality. However, no study has yet investigated the psychological mechanism behind the effect of FA on dropout and data quality. This response behavior has often been interpreted as a psychological reactance reaction. Psychological Reactance Theory (PRT) predicts that reactance appears when an individual's freedom is threatened and cannot be directly restored. Reactance describes the motivation to restore this loss of freedom. Respondents could experience FA as a loss of freedom, as they are denied the choice to leave a question unanswered. According to PRT, possible reactions in this situation are to quit survey participation, to fake answers, or to show satisficing tendencies.

This study explores the psychological mechanism that affects response behavior in the FA condition (compared to the non-FA condition). Our major hypothesis is that forcing respondents to answer will cause reactance, which translates into higher dropout rates, lower answer quality, and satisficing behavior.

We used an online survey experiment (n = 914) with two conditions (forced and non-forced answering instructions). Throughout the whole questionnaire, a dropout button was implemented on each page. In both conditions, this button led to the same page that fully compliant participants reached at the end of the questionnaire. Reactance was measured with a self-constructed reactance scale. To determine answer quality, we used self-reports of faking as well as the analysis of answers to open-ended questions.

Zero-order effects showed that FA increased state reactance and questionnaire dropout and reduced answer length in open-ended questions. Mediation analysis (condition -> state reactance -> dropout/answer quality) supported the hypothesis of reactance as the underlying psychological mechanism behind the negative FA effects on data quality.
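
The abstract does not include analysis code; as a rough sketch of the mediation logic (condition -> state reactance -> answer quality), the following uses simulated data and two OLS regressions estimated with statsmodels. All variable names and effect sizes are assumptions, and the authors' actual models may differ (e.g., logistic models for dropout).

# Sketch of a simple mediation decomposition on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 914
forced = rng.integers(0, 2, n)                            # 0 = non-FA, 1 = FA condition
reactance = 2.0 + 1.0 * forced + rng.normal(0, 1, n)      # FA raises state reactance
answer_len = 80 - 10 * reactance + rng.normal(0, 15, n)   # reactance shortens open answers
df = pd.DataFrame({"forced": forced, "reactance": reactance, "answer_len": answer_len})

a = smf.ols("reactance ~ forced", df).fit().params["forced"]   # path a: condition -> reactance
fit_b = smf.ols("answer_len ~ reactance + forced", df).fit()
b = fit_b.params["reactance"]                                  # path b: reactance -> outcome
direct = fit_b.params["forced"]                                # direct effect c'
print(f"indirect effect a*b = {a * b:.2f}, direct effect = {direct:.2f}")
# A sizeable indirect effect alongside a small direct effect is the pattern that
# would support reactance as the mediating mechanism.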

This is the first study to offer statistical evidence for the often-proposed reactance effect on response behavior. It provides a basis for a deeper psychological reflection on the use of the FA option.


Sischka-The influence of Forced Answering on response behavior-280.pdf

Using smartphone sensors for data collection: towards a research synthesis

Bella Struminskaya, Peter Lugtig

Utrecht University, The Netherlands

Relevance & Research Question: Given the rapid proliferation of smartphones, the potential offered by smartphone measurement for social and market research is substantial. However, smartphone sensor data collection poses several challenges regarding willingness to allow such measurement, sample selection, and data quality. In recent years, a number of studies have emerged that use smartphone sensor measurement. The aim of this presentation is to systematize the available findings to answer the following research questions: 1) What are the rates and determinants of willingness to participate in studies involving smartphone sensor measurement? 2) How do participants differ from nonparticipants? 3) Does the use of smartphone sensors improve data accuracy?

Methods & Data: The research synthesis is based on literature identified in online journal databases, conference presentations, and working papers in which smartphones (owned by participants or provided) are used for passive data collection via built-in sensors or apps. The qualitative review will provide an overview of the domains in which sensor measurement is used and whether it leads to more accuracy and less respondent burden. The quantitative review aims to estimate nonwillingness and nonparticipation effect sizes and the role of sensor types (e.g., GPS, QR-code scanner, camera), study characteristics, and respondent characteristics.

Results: This is a study in progress; the results are therefore still preliminary. The body of available literature consists of three groups of studies from various domains. The first group consists of (small-scale) studies of volunteers that focus on implementing smartphone measurement and on practical issues. The second group consists of studies of hypothetical willingness to participate, while the third group comprises relatively rare implementations of sensor measurements in (large-scale) population studies. The rates of willingness and participation vary considerably between studies. There are some indications that smartphone sensor data collection can improve accuracy and reduce respondent burden.

Added Value: Using smartphone sensors for data collection can reduce self-report errors and respondent burden if certain questions are substituted by such passive measurement. Systematizing available empirical evidence and identifying research gaps will help researchers target their resources towards studies that will allow more efficient use of this data collection method.


Asking for Consent to the Collection of Geographical Information

Barbara Felderer, Annelies Blom

University of Mannheim, Germany

In online surveys, a lot of paradata can be captured as a byproduct of data collection, for example information about devices and IP addresses. Even though much of this information is automatically sent by the browser, its storage and use by researchers is not always compatible with data protection guidelines, and informed consent from respondents is required.

We study consent to a request to automatically collect geographical information in a large German online panel. Respondents of wave 4 of the German Internet Panel were asked for consent to automatically track their location using JavaScript. If consent was provided, the IP address was stored and longitude and latitude were derived from it. In addition, the same respondents were asked to report their location (city name and postal code).
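
The abstract does not say how the coordinates were derived from the stored IP addresses; purely as a hypothetical sketch, the lookup could be done against a local MaxMind GeoLite2 City database via the geoip2 Python package. The database path and IP address below are placeholders.

# Sketch: derive latitude/longitude (plus city and postal code) from an IP address.
import geoip2.database

def locate(ip_address, db_path="GeoLite2-City.mmdb"):
    """Return (latitude, longitude, city, postal code) for an IP address."""
    with geoip2.database.Reader(db_path) as reader:
        rec = reader.city(ip_address)
        return rec.location.latitude, rec.location.longitude, rec.city.name, rec.postal.code

# Placeholder IP from the documentation range; a real lookup needs an address
# that is actually covered by the database.
lat, lon, city, postal = locate("203.0.113.7")
print(lat, lon, city, postal)
# The derived coordinates can then be compared with the self-reported city name
# and postal code to assess the quality of both measurements.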

Geographical information on the location where respondents fill in the survey is valuable for both substantive and methodological research. Spatial identifiers can be used to link outside information to the survey to enrich the data set with additional explanatory variables, for example weather or climate data or distances to public places such as supermarkets, green spaces, or schools.

Automatically collected geographical information can be assumed to be of higher quality than reported locations, especially for respondents who fill in the survey in unfamiliar places or on the road. While response burden is lower for the automated collection, tracking of IP addresses can also be perceived as intrusive and can raise data protection concerns among respondents.

We address the following research questions:

1. What is the acceptance among the general population in Germany of digitally collecting information on their geographical location?

2. Do people who consent to the digital collection of their geographical location differ from people who fill in information about their location manually in terms of socio-demographic and personality characteristics?

While about 95% of respondents report a city name or postal code, only about 60% consent to the digital collection. Both reporting a location and consenting to digital collection are influenced by personal characteristics, and different characteristics determine the willingness to provide the two types of information.


Pictures in Online Surveys: To Greet or Avoid?

Manuela Schmid, Bernad Batinic

JKU - Johannes Kepler University, Austria

Relevance & Research Question: The visual design of online surveys is a decisive factor, as it contributes to participants' motivation to complete the survey. Pictures often fulfill the function of encouraging people, but the question remains whether they distort participants' self-evaluations in online surveys. The aim of this study was to test whether pictures presented at the beginning of the survey lead to an increase or decrease – depending on the type of picture – in participants' self-reported evaluations of work-life fusion and well-being.

Methods & Data: On the basis of an experimental design, 321 participants provided information on their well-being, work-life fusion, work-life conflict, and burnout in an online survey. At the beginning of the survey, participants in the experimental groups were shown a picture. Participants were randomly assigned to three groups: group 1 saw a picture of a person at work who seemed relaxed and was sitting on the beach; group 2 saw a picture of a person in the office who seemed stressed and was screaming; group 3 saw no picture. Univariate and multivariate ANOVAs were conducted.
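
As a minimal illustration of the univariate part of the analysis, the following sketch runs a one-way ANOVA comparing the three picture conditions on a single self-report score using scipy. The data are simulated and the group sizes and score values are assumptions, not the authors' data.

# Sketch: one-way ANOVA across the three experimental groups on simulated scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
relaxed_picture = rng.normal(3.5, 0.8, size=107)   # group 1: relaxed beach picture
stressed_picture = rng.normal(3.5, 0.8, size=107)  # group 2: stressed office picture
no_picture = rng.normal(3.5, 0.8, size=107)        # group 3: control, no picture

f_stat, p_value = stats.f_oneway(relaxed_picture, stressed_picture, no_picture)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A non-significant p-value, as reported in the abstract, would indicate that the
# pictures did not measurably distort the self-evaluations.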

Results: The pictures did not significantly distort participants' self-evaluations. All three experimental groups showed similar self-reports of well-being, work-life fusion, work-life conflict, and burnout. In addition, self-reports given shortly after the picture was shown did not differ significantly from those given after a longer time lag.

Added Value: At present, studies on the effects of pictures in online surveys are rather sparse. Our study contributes to the current state of research on the question of whether pictures can be used in online surveys without distorting participants' self-evaluations.


Schmid-Pictures in Online Surveys-205.pdf

Effects of additional reminders on survey participation and panel unsubscription

Maria Andreasson, Johan Martinsson, Elias Markstedt

University of Gothenburg, Sweden

Relevance & Research Question: Many surveys today are challenged by falling response rates or by the difficulty of recruiting panel members. It is often tempting for survey practitioners to send additional reminders in order to achieve higher response rates. There is widespread agreement that several reminders and follow-up contacts do yield higher response rates. However, it is often unclear when the reminder effects become saturated, so that adding more reminders no longer increases response rates or even has negative effects, such as an increase in panel unsubscription rates.

Methods & Data: With an experimental set-up using members of the Citizen Panel, a non-commercial web panel run by the Laboratory of Opinion Research at the University of Gothenburg, this study examines the impact of adding several reminders to a web survey on survey participation rates, completion rates, and panel unsubscription rates. 10,000 invited respondents were randomized into four groups and assigned to receive a maximum of zero, one, two, or three reminders during a three-week period. Reminders were only sent to those who had not answered the survey before a certain date.
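
A minimal sketch of this assignment scheme, with invented variable names and response data (not the authors' code): invitees are randomized to a maximum of zero to three reminders, and each reminder wave goes only to assigned panelists who have not yet responded.

# Sketch: randomize 10,000 invitees into four reminder arms and select reminder recipients.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000
panel = pd.DataFrame({
    "panelist_id": range(n),
    "max_reminders": rng.choice([0, 1, 2, 3], size=n),  # four experimental arms
    "responded": False,
})

def reminder_recipients(panel, wave):
    """Return the panelists due a reminder in this wave (1, 2, or 3)."""
    due = panel[(~panel["responded"]) & (panel["max_reminders"] >= wave)]
    return due.index  # in the real study these would receive an e-mail reminder

# Example: simulate some early responses, then select recipients of the first reminder.
panel.loc[rng.choice(n, size=2_000, replace=False), "responded"] = True
print(len(reminder_recipients(panel, wave=1)), "panelists receive the first reminder")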

Results: The results show that going from zero to one reminder increases the participation rate by eleven percentage points, from one to two by four percentage points, and from two to three by two and a half percentage points. As the number of reminders increases, the share of people who complete the entire survey after starting it also increases, as does the share of the invited sample who instead unsubscribe permanently from the panel, although this negative consequence becomes pronounced only with the third reminder.

Added Value: As expected, adding more reminders increases survey participation rates. Another positive effect of adding more reminders found in this study is that they also increase completion rates, thus yielding more complete data and fewer survey breakoffs. Although adding several reminders increases participation and completion rates, it unfortunately also seems to make more people leave the panel.


Andreasson-Effects of additional reminders on survey participation and panel unsubscription-218.pdf


 
Conference: GOR 17