B05: Images and Virtual Reality in Market Research
If I can virtually touch it, I’ll buy it? Analysing the influence of (non-)interactive product presentations in the online-grocery sector
Rheinische Fachhochschule Köln, Germany
Relevance & Research Question:
Online-grocery retail has grown significantly in recent years, yet many consumers still criticise the lack of opportunity to test food before buying (BDVM, 2018). “Need for Touch” (NFT), the desire to touch products before purchase (Peck & Childers, 2003), is one reason consumers refuse to buy food online (Lichters, Kühn & Sarstedt, 2016). But how can food be made more tangible online? Current studies show that presentation formats such as “embodiment” (a photo of a hand holding the food) or “360-degree rotations” lead to better product evaluations than plain photos (e.g. Elder & Krishna, 2012; Choi & Taylor, 2014). However, none of these studies has experimentally tested such presentations as a surrogate for a high level of NFT in an online-grocery-shop setting.
Methods & Data:
A mixed experimental 3×2×2 online design was used, rated by a representative sample of the REWE-Payback online panel (N = 500). It compared presentation (photo, embodiment, 360-degree rotation), product (apple, noodles; repeated measures), and NFT (high, low) on six product-evaluation variables (e.g. involvement and purchase intention), with perspective-taking as a covariate. To obtain more detailed data on activation, the auto-rotating 360-degree condition was split into self-rotation and no-self-rotation based on paradata and self-disclosure.
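The split of the 360-degree-rotation condition into self-rotation and no-self-rotation from paradata could be sketched as follows. This is a minimal illustration; the event names and the one-drag threshold are assumptions, not the study's actual logging schema:

```python
# Classify participants in the 360-degree-rotation condition as
# "self-rotation" vs. "no-self-rotation" from interaction paradata.
# Event names ("view_open", "drag_start", "drag_end") and the
# one-event threshold are illustrative assumptions.

def classify_rotation(paradata_events):
    """Return 'self-rotation' if the participant manually dragged
    the 360-degree view at least once, else 'no-self-rotation'."""
    drags = [e for e in paradata_events if e["type"] == "drag_start"]
    return "self-rotation" if len(drags) >= 1 else "no-self-rotation"

# Example paradata logs for two hypothetical participants
p1 = [{"type": "view_open"}, {"type": "drag_start"}, {"type": "drag_end"}]
p2 = [{"type": "view_open"}]  # only watched the auto-rotation

print(classify_rotation(p1))  # self-rotation
print(classify_rotation(p2))  # no-self-rotation
```

In practice such a classification would be cross-checked against the self-disclosure item, as the abstract describes.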
Results:
Repeated-measures ANCOVAs show significant main effects of presentation on all dependent variables across both products (p ≤ 0.001; f = 0.17–0.43): the 360-degree rotation is the best presentation format, followed by the photo, while, contrary to expectations, embodiment scores worst. In addition, self-rotation leads to a significantly higher purchase intention still, whereas no-self-rotation is comparable to the photo (p ≤ 0.001; f = 0.25). Moreover, the interaction analysis tentatively indicates that when consumers rotate the noodles themselves, purchase intention no longer differs between those with a high and a low need to touch products; self-rotation thus appears to act as a surrogate for high NFT (p ≥ 0.18; f = 0.01).
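For readers interpreting the reported effect sizes: Cohen's f can be derived from (partial) eta squared via f = √(η² / (1 − η²)), so the reported range f = 0.17–0.43 spans small-to-medium through large effects. A quick sketch of the conversion (the η² inputs below are back-calculated for illustration, not values from the study):

```python
import math

def cohens_f(eta_sq):
    """Convert (partial) eta squared to Cohen's f:
    f = sqrt(eta^2 / (1 - eta^2))."""
    return math.sqrt(eta_sq / (1.0 - eta_sq))

# Illustrative values only -- not the study's actual eta squared:
print(round(cohens_f(0.0281), 2))  # 0.17 (small-to-medium effect)
print(round(cohens_f(0.1561), 2))  # 0.43 (large effect)
```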
Added Value:
Overall, this study underlines the importance of an activating shopping experience in online-grocery retail. Consumers desire more vivid and tangible product presentations, such as 360-degree rotations of food, to feel more assured in online purchase decisions through virtual touch.
Mobile Detection of Visual Brand Touchpoints
Relevance & Research Question:
Consumers encounter brands frequently in everyday life: when opening the refrigerator, passing billboards on a tram ride, viewing ads in print, TV, or online media, etc. Measuring such brand touchpoints can be both a valuable feedback channel for marketing and a rich source for market research. However, measuring all touchpoints of a single consumer requires many data sources and often delivers only incomplete or indirect data. We address the question of how visual brand touchpoints can be measured with a single-source approach, and we compare our results to a traditional questionnaire.
Methods & Data:
In this research, we present an approach to capture the brand, time, duration, and location of brand touchpoints in real time by applying computer-vision methods on a low-cost mobile hardware prototype. We use a deep convolutional neural network for real-time logo detection on a smartphone that captures images from a USB webcam mounted on the frame of a pair of sunglasses. A mobile app collects the detected brand logos, time, and location for further analysis. To guarantee privacy, no images are stored; only textual results are saved. We apply this approach in a case study in which we collect data from 26 participants walking a reference route with 17 known potential touchpoints for 5 brands, identified by 5 logos and 2 letterings. We then survey the participants with a questionnaire, asking for a protocol of their remembered touchpoints for the selected brands.
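The privacy-preserving logging loop described above can be sketched as follows. The detector is stubbed (the study uses a deep CNN), and the record fields and brand names are illustrative assumptions; the point of the sketch is that each frame is reduced to a textual (brand, time, location) record and the image itself is never stored:

```python
# Sketch of the touchpoint logging loop: a per-frame logo detector
# (stubbed here; the actual system runs a deep CNN on-device) yields
# brand names, and only textual records are kept -- never the images.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Touchpoint:
    brand: str
    timestamp: str
    lat: float
    lon: float

def detect_logos(frame):
    """Stub standing in for the CNN logo detector."""
    return frame.get("visible_logos", [])

def log_touchpoints(frames, records):
    for frame in frames:
        for brand in detect_logos(frame):
            records.append(Touchpoint(
                brand=brand,
                timestamp=datetime.now(timezone.utc).isoformat(),
                lat=frame["lat"], lon=frame["lon"],
            ))
        # the frame image is discarded here; only text persists

records = []
frames = [  # hypothetical camera frames along the route
    {"visible_logos": ["BrandA"], "lat": 50.94, "lon": 6.96},
    {"visible_logos": [], "lat": 50.94, "lon": 6.96},
    {"visible_logos": ["BrandA", "BrandB"], "lat": 50.95, "lon": 6.97},
]
log_touchpoints(frames, records)
print([r.brand for r in records])  # ['BrandA', 'BrandA', 'BrandB']
```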
Results:
The performance evaluation of the mobile logo detection shows that 94% of logos are detected. In the case study, participants recall only 64% of all detected touchpoints, and the touchpoint sequence from the questionnaire overlaps with the detected one by only 39%.
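The abstract does not specify how the 39% sequence overlap is computed. One plausible metric is the length of the longest common subsequence (LCS) between the detected and recalled touchpoint sequences, relative to the detected sequence; a sketch under that assumption, with made-up sequences:

```python
# Sequence overlap via longest common subsequence (LCS); this metric
# choice is an assumption -- the abstract does not name its measure.

def lcs_length(a, b):
    """Length of the longest common subsequence of two sequences,
    via the standard dynamic-programming table."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def sequence_overlap(detected, recalled):
    """Share of the detected touchpoint sequence preserved in recall."""
    return lcs_length(detected, recalled) / len(detected)

# Hypothetical walk: 8 detected touchpoints, 4 recalled in partial order
detected = ["A", "B", "A", "C", "D", "B", "E", "A"]
recalled = ["A", "C", "B", "E"]
print(sequence_overlap(detected, recalled))  # 0.5
```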
Added Value:
Single-source detection of visual brand touchpoints provides valuable data for determining the influence and effectiveness of marketing measures. The case study shows that our method yields more reliable and more complete data than questionnaires. As an additional benefit, our approach also captures the exact time, duration, and location of each touchpoint, revealing more detailed insights into consumers’ encounters with brands.
Revealing consumer-brand interactions from social media pictures – a case study from the fast-moving consumer goods industry
Relevance & Research Question:
A multitude of pictures is posted on social media every day, shedding light not only on consumers’ social lives but also on their interactions with brands, such as holding, drinking from, or even hugging a soda bottle. These pictures represent a valuable source of knowledge for marketing that is infeasible to explore manually. This research addresses the question of how consumer-brand interactions can be recognized and characterized automatically.
Methods & Data:
We present an approach to reveal types of consumer-brand interactions from social media images by combining methods from computer vision and statistics. First, an image-captioning method based on convolutional and recurrent neural networks estimates the polarity, involvement, and purpose of the consumer-brand interaction and describes it in natural language. A clustering algorithm then groups the images by polarity, involvement, and purpose and characterizes the resulting clusters by the subjects, predicates, and objects of the sentences describing the interactions.
We apply this approach in a case study from the fast-moving consumer goods (FMCG) market. The dataset comprises approx. 950,000 public user-generated images posted on social media over a period of 5 years and related to 26 popular FMCG brands.
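The grouping step — collecting images with the same estimated (polarity, involvement, purpose) triple and characterizing each group by its captions — can be illustrated with a simple sketch. The attribute values and captions below are invented for illustration; they are not study data:

```python
# Group image annotations by their (polarity, involvement, purpose)
# triple; the clusters are then characterized by their captions.
# All values below are illustrative, not taken from the study.
from collections import defaultdict

annotations = [
    {"polarity": "positive", "involvement": "high", "purpose": "usage",
     "caption": "a woman drinks from a soda bottle"},
    {"polarity": "positive", "involvement": "high", "purpose": "usage",
     "caption": "a man drinks a soda on a bench"},
    {"polarity": "neutral", "involvement": "low", "purpose": "encounter",
     "caption": "a soda bottle stands on a table"},
]

def group_interactions(annotations):
    """Group images by their (polarity, involvement, purpose) triple
    and collect the captions that characterize each group."""
    clusters = defaultdict(list)
    for ann in annotations:
        key = (ann["polarity"], ann["involvement"], ann["purpose"])
        clusters[key].append(ann["caption"])
    return dict(clusters)

clusters = group_interactions(annotations)
print(len(clusters))  # 2
```

The study's actual clustering operates on the estimated attributes and the parsed subject/predicate/object of each generated sentence; this sketch only shows the grouping idea.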
Results:
The evaluation of the image-captioning approach yields good performance: polarity, involvement, and purpose of consumer-brand interactions are estimated correctly in 72% of the images on average, and a correct sentence with respect to subject, predicate, and object is generated for 70% of the images.
The cluster analysis reveals 6 types of consumer-brand interactions for fast-moving consumer goods, ranging from random encounter, pre-usage, active usage, happy moment, and emotional engagement to endorsement.
Added Value:
In contrast to existing visual-analytics approaches, not only static objects such as products or people but also dynamic interactions between consumers and brands are discovered. For example, it is possible not only to detect a woman and a soda bottle but also to differentiate whether she is sitting next to it or kissing it. Marketers are thus in a better position to estimate brand popularity and to tailor marketing campaigns or products to real-life usage scenarios.
Conference: GOR 19