Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
C 5: GOR Thesis Award 2015 Competition I: Dissertation
Thursday, 19/Mar/2015:
15:45 - 16:45

Session Chair: Meinald T. Thielsch, University of Muenster
Session Chair: Frederik Funke, LINK Institut
Location: Room 154
Fachhochschule Köln/ Cologne University of Applied Sciences
Claudiusstr. 1, 50678 Cologne


Open-ended questions in Web surveys - Using visual and adaptive questionnaire design to improve narrative responses

Matthias Emde

Universität Hamburg, Germany

Relevance & Research Question:

One of the most significant decisions when designing survey questions is whether the questions will be posed as closed-ended or open-ended. Closed-ended questions require respondents to choose from a set of provided response options, while open-ended questions are answered by respondents in their own words. Open-ended questions offer the benefit of not constraining responses, allowing respondents to answer freely and elaborate on their responses. Narrative open-ended questions are especially useful when no suitable answer categories are available for a closed-ended question format, or when providing response options might bias respondents. Open-ended questions are also powerful tools for collecting more detailed and specific responses from large samples of respondents. However, open-ended questions are burdensome to answer and suffer from higher rates of item-nonresponse. This thesis aims to improve narrative open-ended questions in Web surveys by using visual and adaptive questionnaire design.

Previous research on open-ended questions demonstrated that respondents react to the size and design of the answer box offered with an open-ended question in Web surveys. Larger answer boxes seem to pose an additional burden as compared to smaller answer boxes. At the same time, larger answer boxes work as a stimulus that increases the length of the response provided by those respondents who actually answer the question. By varying the visual design of answer boxes, this thesis seeks ways to improve narrative open-ended questions. Beyond the influence of different answer-box sizes, the effectiveness of a counter associated with the answer box is tested. In addition, answer boxes that grow dynamically in size were compared to answer boxes that respondents adjusted in size themselves.

Besides varying the visual appearance of narrative open-ended questions and the answer boxes used, the interactive nature of the internet allows a multiplicity of ways to integrate interactive features into a survey. Web surveys can be adapted individually to groups of respondents: based on previous answers, it is feasible to present specifically designed questions that engage respondents. This thesis puts two adaptive design approaches to improving narrative open-ended questions to the test.

Methods & Data:

Beyond the influence of three different answer-box sizes, the effectiveness of a counter associated with the answer box that continuously indicates the number of characters left to type is tested. In addition, answer boxes that grow dynamically in size were compared to answer boxes whose size respondents adjusted themselves via a plus or minus button.
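The counter mechanic described here reduces to a simple remaining-characters display. A minimal sketch follows; the character limit is a hypothetical placeholder, since the thesis varied the counter's default value rather than fixing one:

```python
def characters_left(text: str, limit: int = 500) -> int:
    """Number of characters remaining that a counter would display
    next to the answer box. The default limit of 500 is illustrative,
    not a value taken from the thesis."""
    return max(0, limit - len(text))
```

The `max(0, ...)` guard keeps the displayed value from going negative if a respondent types past the limit.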

Further, this thesis puts two adaptive design approaches to improving narrative open-ended questions to the test. The amount of information respondents typed into the response box of an initial open-ended question was used to assign them to a custom-size answer box later in the survey. In addition, a follow-up probe was tested in which respondents who did not respond to a narrative open-ended question were presented with the same question in a closed format, to obtain at least some information from them.
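The adaptive assignment amounts to mapping the length of an initial response to a later answer-box height. A sketch under assumed thresholds (the cut-offs and row counts below are illustrative placeholders, not the values used in the thesis):

```python
def assign_box_rows(initial_response: str) -> int:
    """Assign a custom answer-box height (in text-area rows) for a
    later question, based on how much a respondent typed into an
    initial open-ended question. Thresholds and row counts are
    hypothetical, chosen only to illustrate the mechanism."""
    n_chars = len(initial_response.strip())
    if n_chars < 100:   # terse respondents: small box
        return 3
    if n_chars < 300:   # moderate respondents: medium box
        return 7
    return 12           # elaborate respondents: large box
```

The design choice is that box size follows demonstrated motivation, rather than imposing one size on all respondents.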

All experiments were embedded in large-scale surveys among university applicants or students.


Results:

While larger answer boxes were expected to pose an additional burden, we found no influence of answer-box size on item-nonresponse. Using a counter indicating the number of characters left curtailed response length when the default counter value was set low, and increased response length when the default value was set high. However, a low-value counter limited the number of words respondents used, but not the number of topics they reported: respondents always seemed to report what they intended to report. Automatically growing answer-box designs did not improve response length or the number of topics reported to narrative open-ended questions. In the respondent-adjusted design, respondents were able to set the answer-box size themselves. Since they were aware of the box-size adjustment, this design corresponds better with the question–answer process than the dynamically growing answer spaces. As a result, respondents reported more topics and produced longer responses with the self-adjusted answer-box design.

Adapting individually sized answer boxes increased only the length of responses to narrative open-ended questions; the answer-box size does not seem to pose a higher burden to respond. As in the visual design experiments, responses can be improved using the adaptive answer-box size assignment, but the willingness to respond was not affected by any of the designs tested. In order to improve response rates, the final experiment in this thesis used a closed-ended follow-up probe to combine the strengths of closed- and open-ended questions. Switching to a closed-ended question is not ideal, but the design accomplished the aim of getting at least some information from former non-respondents. In the initial open-ended question, respondents provided fewer topics but elaborated on them. In the closed-ended follow-up probe, respondents checked more answer categories than the number of topics reported in the open-ended question, most likely because the categories were readily at hand. Overall, the probe succeeded in getting information from those respondents who had neglected to answer the same question in an open format.

Added Value:

Overall, the visual design experiments demonstrate that it is well worth paying attention to the visual and adaptive design of open-ended questions in Web surveys, and that well-designed open-ended questions are a powerful tool for collecting specific data from large samples of respondents.

Further results provide preliminary support for the effectiveness of a Web survey design that adapts the type and visual design of survey questions to the motivation and capabilities of the respondent. While previous studies on the design of open-ended narrative questions aimed to enhance the effectiveness of design features meant to influence response behavior (in particular of less-motivated respondents), the adaptive design changes the questionnaire in order to get the most out of each respondent, consistent with their motivation and capabilities.

Data quality in probability-based online panels: Nonresponse, attrition, and panel conditioning

Bella Struminskaya

GESIS - Leibniz Institute for the Social Sciences, Germany

Online panels offer the benefits of lower costs, timeliness, and access to large numbers of respondents compared to surveys that use more traditional modes of data collection (face-to-face, telephone, and mail). Probability-based online panels, in which potential respondents are selected randomly from a frame covering the target population, allow for drawing valid conclusions about the population of interest. With nonresponse having increased in recent years in surveys using the traditional modes of data collection (de Leeuw & de Heer, 2002), probability-based online panels are an appealing alternative. However, whether the quality of data collected by probability-based online panels is comparable to the quality of data attained by traditional data collection methods is questionable. Probability-based online panels are also expensive to recruit and maintain, raising the question: are the costs of constructing and maintaining them justified, given the quality of data that can be obtained?

There are several sources of errors in online panel surveys: excluding non-Internet users may result in coverage error; if persons selected for the study cannot be reached or do not want to participate, it may result in nonresponse error; study participants may choose to stop participating in later waves (attrition). Furthermore, by taking surveys regularly, respondents can learn to answer dishonestly or answer filter questions negatively to reduce the burden of participation (panel conditioning). All these errors can accumulate in the survey estimates and the conclusions based on these estimates may therefore be misleading.

The goal of this dissertation is to study the quality of data obtained with probability-based online panels. The dissertation aims at advancing the understanding of the causes of errors and at guiding the design decisions when recruiting and maintaining probability-based online panels. This dissertation evaluates the overall quality of estimates from an online panel and focuses on potential sources of error: nonresponse during the recruitment interview, nonresponse to the first online survey, panel attrition, panel conditioning, and the effects of the survey mode.

This dissertation consists of five studies, which are theoretically integrated by the overarching framework of the Total Survey Error (Biemer, 2010; Groves, 1989; Groves & Lyberg, 2010). The framework is extended by including theoretical knowledge on two special types of panel survey errors – panel nonresponse (attrition) and panel conditioning (Kalton, Kasprzyk, & McMillen, 1989). The empirical analyses are based on the data from a probability-based telephone-recruited online panel of Internet users in Germany: the GESIS Online Panel Pilot.

The error sources are studied in connection to the recruitment and operating steps typical for probability-based online panels. Each chapter studies a different aspect of data quality. The chapters are written as individual papers, each addressing specific research questions.

Chapter 1 introduces the theoretical framework of data quality and the Total Survey Error. The goal of Chapter 2 is to evaluate the goodness of the final estimates collected in the online panel. The data from the online panel are compared to the data from two high-quality face-to-face reference surveys: the German General Social Survey “ALLBUS” and the German sample of the European Social Survey (ESS). Furthermore, since researchers may not only be interested in single estimates, I study to what extent survey results are comparable when the data are used for modeling social phenomena. The results show several differences among the surveys on most of the socio-demographic and attitudinal variables; however, these differences average to a few percentage points. To account for the design decision to exclude non-Internet users, post-survey adjustments were performed. However, post-stratification weighting did not bring the estimates from the online panel closer to the estimates from the reference surveys.
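The post-stratification weighting referred to here amounts to re-weighting sample strata to known population shares. A minimal sketch with made-up strata and counts (not the variables or figures from the dissertation):

```python
def post_stratification_weights(sample_counts, population_shares):
    """Post-stratification weight per stratum: the stratum's known
    population share divided by its share in the realized sample."""
    n = sum(sample_counts.values())
    return {g: population_shares[g] / (sample_counts[g] / n)
            for g in sample_counts}

# Hypothetical strata: younger people over-represented in an online sample
sample_counts = {"18-34": 500, "35-54": 300, "55+": 200}
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
weights = post_stratification_weights(sample_counts, population_shares)
# The over-represented 18-34 stratum is down-weighted (0.6),
# the under-represented 55+ stratum is up-weighted (1.75).
```

As the chapter reports, such re-weighting can correct marginal distributions on the weighting variables without necessarily moving substantive estimates closer to a reference survey.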

Chapter 3 focuses on nonresponse, studying the influence of respondent characteristics, design features (incentives and fieldwork agency), and respondent characteristics specific to the survey mode (Internet experience, online survey experience). The results indicate that participation in the panel is selective: previous experience with the Internet and online surveys predicts willingness to participate and actual participation in the panel. Incentives and fieldwork agencies that performed the recruitment also influence the decision to participate.

Chapter 4 studies why panel members choose to stay in the panel or discontinue participation. The main question is whether respondents are motivated by intrinsic factors (survey experience) or extrinsic factors (monetary incentives). The findings indicate that respondents who view surveys as long, difficult, or too personal are likely to attrite, and that incentives (although negatively related to attrition) do not compensate for this burdensome experience.

Chapter 5 focuses on panel conditioning due to learning the survey process. To find out if more experienced respondents answer differently than less experienced respondents, I conducted two experiments, in which the order of the questionnaires was switched. The findings indicate limited evidence of advantageous panel conditioning and no evidence of disadvantageous panel conditioning.

Chapter 6 studies mode system effects (i.e., differences in the estimates resulting from the whole process by which they were collected). The data from the online panel are compared to two reference surveys: ALLBUS 2010 and ALLBUS 2012. Both face-to-face surveys were conducted by the same fieldwork agency and employed an almost identical design; together they therefore serve as a “reference mode” when compared with the online panel. I use questions with identical wordings that were present in both ALLBUS surveys and replicated in the online panel. Differences in sample composition among the surveys are adjusted for by propensity-score weighting. The results show that the online panel and the two reference surveys differ in attitudinal measures; for factual questions, however, the reference surveys differ from the online panel but not from each other. Judging by the effect sizes, the magnitude of the differences is small. The overall conclusion is that data from the online panel are fairly comparable to data from high-quality face-to-face surveys.
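Propensity-score weighting of this kind is commonly done by modeling membership in the online sample versus the reference sample and then weighting online cases by the propensity odds. A sketch of the weighting step, assuming the propensities have already been estimated (e.g., by a logistic regression on socio-demographics); the values below are made up:

```python
def propensity_weights(propensities):
    """Propensity-odds weights: given each online-panel respondent's
    estimated propensity p of belonging to the online sample rather
    than the reference survey, weight the case by (1 - p) / p so the
    weighted online sample mirrors the reference sample's covariate
    composition. The propensities are assumed inputs, not estimated here."""
    return [(1 - p) / p for p in propensities]

# Hypothetical estimated propensities for three online respondents
weights = propensity_weights([0.5, 0.8, 0.2])
# p = 0.5 -> weight 1.0; p = 0.8 (over-represented profile) -> 0.25;
# p = 0.2 (under-represented profile) -> 4.0
```

Respondents whose profiles are common in the online sample relative to the reference are down-weighted, and rare profiles are up-weighted.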

The results of this dissertation provide additional insight into the processes that contribute to the quality of data produced by probability-based online panels. These results can guide researchers who plan to build online panels of Internet users or of the general population. The results will also prove useful for existing panels that consider switching to the online mode.

Structure, change over time, and outcomes of research collaboration networks: the case of GRAND

Zack Hayat

The Interdisciplinary Center, Herzliya, Israel

In this dissertation, I study the interplay between the structure of a collaborative research network and the research outcomes produced by its members. To achieve this goal, I examined GRAND (an acronym for Graphics, Animation and New Media), a Canadian network of over 200 researchers funded by the Canadian government. I use the social network analysis (SNA) framework to understand the structure of the GRAND collaborative research network and how it changes over time. I then look at how the network's structural characteristics change over time and how they interplay with the researchers' outcomes. Subsequently, I explain this interplay by discussing these structural changes as conditions that can potentially affect a researcher's social capital and, in turn, the researcher's outcomes.

Using data collected through two online surveys (the first conducted in September 2010, the second in March 2013) and a paper-based research-outcomes survey (conducted in May 2013), I was able to capture the research networks (their structure and change over time) of GRAND researchers. These networks captured four types of interaction among GRAND researchers: co-authorship of scholarly publications, communication activity, advice exchange, and interpersonal acquaintanceship. I was also able to obtain these researchers' perceptions of their research outcomes. My sample consisted of 101 GRAND researchers; a subset of these (N=50) were then interviewed.

The research outcomes were evaluated along the four dimensions presented by Cummings and Kiesler (2005): knowledge, training, outreach, and collaboration outcomes. The network structure and its change over time were measured in terms of three centrality measures (degree, betweenness, and eigenvector), density, heterogeneity, and effective size (i.e., the number of researchers to which a researcher is connected, minus a "redundancy" factor). Using a combination of social network analysis and statistical regression analysis, I examined the interplay between GRAND researchers' research outcomes and their network structure and change over time.
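For binary undirected networks, the effective-size measure described above has a compact form (Borgatti's simplification of Burt's formula): n - 2t/n, where n is the number of alters and t the number of ties among them. A minimal sketch on a made-up ego network:

```python
def effective_size(adj, ego):
    """Effective size of an ego's network in a binary undirected
    graph, via Borgatti's simplification n - 2t/n: the number of
    alters minus the redundancy contributed by ties among them.
    `adj` maps each node to the set of its neighbours; the toy
    network below is illustrative only."""
    alters = adj[ego]
    n = len(alters)
    if n == 0:
        return 0.0
    # count each undirected tie among the alters exactly once
    t = sum(1 for a in alters for b in adj[a] if b in alters and a < b)
    return n - 2 * t / n

adj = {
    "ego": {"A", "B", "C", "D"},
    "A": {"ego", "B"},
    "B": {"ego", "A"},
    "C": {"ego"},
    "D": {"ego"},
}
# ego has 4 alters with one tie among them (A-B):
# effective size = 4 - 2*1/4 = 3.5; degree centrality is just len(adj["ego"])
```

The redundant A-B tie is what lowers the effective size below the raw degree of 4.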

More specifically I found that: (1) Knowledge outcomes (i.e. gains in new knowledge) are positively correlated with the density of the co-authorship network, and with the betweenness centrality, heterogeneity and effective size of advice network; knowledge outcomes are also negatively correlated with the degree centrality of the co-authorship network; (2) Training outcomes (i.e. training of students and post-docs) are positively correlated with the size and density of the advice network, and with the density of the communication network; training outcomes are also negatively correlated with the effective size of the co-authorship network; (3) Outreach outcomes (i.e. the formation of new partnerships/relationships) are positively correlated with the effective size of both the acquaintanceship and advice networks, as well as with the degree and eigenvector centrality of the co-authorship network; finally (4) Collaboration outcomes (i.e. collaboration that has begun and will continue beyond the scope of GRAND) are positively correlated with the degree centrality of the advice network as well as with the degree centrality and density of the co-authorship network. I then distilled and interpreted these results and framed them within the context of related literature. This interpretation was largely based on the insight I gained through the semi-structured interviews I conducted with 50 GRAND researchers.

In this dissertation, I analyze GRAND as a case study of a research network. This enables me to provide a holistic, in-depth investigation of the range, population, and structure of an interdisciplinary, multi-institutional research network. My findings lend support to the argument that social capital and social networks, when combined, yield richer theory and better predictions than when used individually. The social network analysis conducted in this research offered precise measures of the social structure as well as of the changes in the GRAND research network. The social-capital-driven findings helped move beyond the relations themselves to an understanding of how personal relationships or social structures can either facilitate or hinder the achievement of different research outcomes. These results help substantiate previous work while drawing attention to the importance of analyzing interpersonal networks when studying factors affecting research outcomes. This direction for future study is especially relevant, as research collaboration continues to increase both in scope and in importance.

Conference: GOR 15