A3: Panel Quality
Who is leaving our panel: panel attrition and personality traits
CentERdata, The Netherlands
Relevance & Research Question
Internet surveys are by far the fastest and cheapest way to gather data, and longitudinal data are a rich and valuable source of information for researchers and policy makers. Internet panels, which combine the advantages of the Internet with those of longitudinal data collection, are therefore increasingly used. Much research has been done on the difficulties of reaching people for an Internet panel (Feskens et al., 2006, 2007; Schmeets et al., 2003; Stoop, 2005; Vis & Marchand, 2011). However, these studies mainly focused on background variables such as age, socioeconomic status, marital status and origin. This paper investigates the role that different personality characteristics play in attrition from Internet panels.
Methods & Data
Our research is conducted in CentERdata's LISS panel, which combines a probability sample and traditional recruitment procedures with online interviewing. The panel consists of about 5,000 households and is representative of the Dutch-speaking population. A special feature of this panel is that people without Internet access are provided with the necessary equipment so that they are able to participate.
To investigate whether people with specific personality characteristics are more inclined to end their panel participation, we use data from 2008, 2009 and 2010. More specifically, we examine whether people with specific Big Five characteristics (measured by Goldberg's 50-item IPIP questionnaire) are more inclined to leave the panel than others. In addition, we analyze whether people with different survey attitudes (measured by items on survey enjoyment, survey value and survey burden) are more likely to stop participating. Finally, we look at the Inclusion of Other in the Self scale (Aron & Aron, 1992), which measures interpersonal closeness (here, closeness to the panel).
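The kind of analysis described above can be sketched as a toy comparison of attrition rates by trait level. This is an illustrative sketch only, not the authors' code: the sample size, trait distribution and effect size are all assumptions made up for the example.

```python
# Illustrative sketch (not the authors' analysis): compare panel attrition
# rates between low and high scorers on one Big Five trait. All numbers
# (effect size, score scale, attrition base rate) are assumptions.
import random

random.seed(42)

# Hypothetical panel members: a conscientiousness score (1-5 scale, as an
# IPIP item average might be) and whether they left the panel within a year.
# A negative relation is built in purely for illustration.
members = []
for _ in range(5000):
    conscientiousness = random.uniform(1, 5)
    p_attrition = 0.30 - 0.04 * conscientiousness  # assumed effect
    members.append((conscientiousness, random.random() < p_attrition))

# Compare attrition among low scorers (< 3) and high scorers (>= 3).
low = [left for score, left in members if score < 3]
high = [left for score, left in members if score >= 3]
rate_low = sum(low) / len(low)
rate_high = sum(high) / len(high)
print(f"attrition, low conscientiousness:  {rate_low:.3f}")
print(f"attrition, high conscientiousness: {rate_high:.3f}")
```

In the actual study one would of course model all five traits (plus survey attitude and IOS scores) jointly, e.g. in a regression on the attrition indicator, rather than splitting on a single trait.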
Added Value
A lot of time, energy and money is spent on building Internet panels, but what happens after that? This paper shows which personality traits play a role in panel attrition, so that panel quality can be optimized.
Rich Profiles – Or: What's the problem with self-disclosure data?
ODC Services GmbH, Germany
Relevance & Research Question: Profile data in online panels consist mainly of self-disclosure data provided by the panelists. Unfortunately, there are some general problems with self-disclosure data, e.g. data quality (are you willing to provide high-quality information?), the identification of special target groups (are you a LOHAS / early adopter?) or specific response behaviour (are you always one of the first panelists to react to our invitations?). This contribution deals with the question of which simple metrics can be used to profile panelists externally, and what impact these additional profile data have on sampling.
Methods & Data: In a first exploratory study, we collected a variety of data on response behavior, data quality and special target groups. Based on these data, we developed a short profiling questionnaire to predict panelists' response behavior. In a second study we evaluated the accuracy of our profiling method by comparing the response behavior of profiled and non-profiled panelists.
Results: In general, the additional profile data can be used to identify panelists who better match the specific requirements of a study, especially for recruitment to qualitative online studies, where the willingness to provide information voluntarily is crucial.
Added Value: Profiling in online panels usually aims only at the information itself, not at the panelist's performance when providing it. With our contribution we want to show that profile data can also be used for sampling when studies place special requirements on response behavior, making it possible to improve data quality. Nonetheless, we do not want to discuss only the possibilities of this method, but also the limits we have encountered in practice.
Quota Controls: Science or merely Sciencey?
Survey Sampling International, United Kingdom
Relevance & Research Question
The market research industry is wedded to quota controls. We apply Age and Gender quotas without a second thought as to why, or indeed whether they are doing any good at all. Our argument is that, in the modern online sampling world, a different set of stratifications must be used; our old assumptions simply do not apply.
Why not? The answer, in common with so many of the problems of sampling in online research, lies in the frame. The frame in traditional research was close to the population; a quota-controlled random sample would therefore tend to produce samples that, within the quota strata, also contained representative numbers of all other attitudes and behaviours. This is not the case with online access panels.
Methods and Data
Our experiment uses our US panel; the topic, eye colour, is unrelated to Age and Gender but strongly related to Ethnicity. We drew two samples: the first strictly controlled on Age, Gender and Region, the second controlled on Ethnicity alone. Our Age/Gender/Region ‘nat rep’ sample should underestimate the number of people with brown eyes; the Ethnicity-controlled sample we expect to estimate eye colour extremely well.
At the same time, a third sample will be drawn that is simply “random enough”. Our expectation is that this sample will also under-perform on eye colour and will match the findings of “nat rep” sample 1.
A second experiment will be undertaken in which the variable of interest is unrelated to anything: left- or right-handedness. Our hypothesis is that all three samples will perform equally well.
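The logic of the first experiment can be sketched as a toy simulation. This is an illustration only, not the authors' data: the group shares and eye-colour rates below are invented numbers chosen to make the mechanism visible.

```python
# Toy simulation (all composition and eye-colour figures are assumptions,
# not the authors' data): why age/gender quotas cannot correct a frame
# that is skewed on ethnicity when the outcome depends on ethnicity.
import random

random.seed(1)

# Assumed population: 60% group A (80% brown eyes), 40% group B (30% brown
# eyes). True brown-eye rate = 0.6*0.8 + 0.4*0.3 = 0.60.
POP_SHARE = {"A": 0.60, "B": 0.40}
BROWN_RATE = {"A": 0.80, "B": 0.30}

# Assumed panel frame: skewed on ethnicity (group A under-represented) but,
# for simplicity, already balanced on age and gender within each group.
FRAME_SHARE = {"A": 0.35, "B": 0.65}

def draw_sample(shares, n=100_000):
    """Draw respondents by group share; return the estimated brown-eye rate."""
    brown = 0
    for _ in range(n):
        group = "A" if random.random() < shares["A"] else "B"
        brown += random.random() < BROWN_RATE[group]
    return brown / n

# Sample 1: age/gender/region quotas leave the frame's ethnic skew intact,
# so the estimate inherits the frame's composition and is biased.
est_agr = draw_sample(FRAME_SHARE)

# Sample 2: quotas on ethnicity restore the population composition.
est_eth = draw_sample(POP_SHARE)

true_rate = sum(POP_SHARE[g] * BROWN_RATE[g] for g in POP_SHARE)
print(f"true brown-eye rate:       {true_rate:.3f}")
print(f"age/gender quota estimate: {est_agr:.3f}")
print(f"ethnicity quota estimate:  {est_eth:.3f}")
```

Under these assumed numbers, the age/gender-quota sample lands near the frame's rate (about 0.475) while the ethnicity-quota sample recovers the population rate of 0.60, which is exactly the pattern the experiment is designed to test.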
The results are precisely as predicted.
Researchers, particularly in the commercial world, apply quota controls to ensure “representivity” as a matter of practice: they do it because they have been told to, and it is part of the folklore of market research. This is not sustainable in a world where we are no longer dealing with essentially complete frames. More science and less folklore needs to be applied to make the best of an increasingly unscientific world.