Mobile Web Surveys: A Total Survey Error Perspective
Daily activity sees data constantly flowing through cameras, the internet, satellites, radio frequencies, sensors, private appliances, cars, smartphones, tablets and the like. Among all the tools currently in use, mobile devices, especially mobile phones, smartphones and tablets, are the most widespread, and their use has become prevalent in everyday life in both developed and developing countries. Shopping, reading newspapers, participating in forums, designing and completing surveys, communicating with friends and making new ones, filing tax returns and getting involved in politics are all examples of how ingrained mobile technology is in modern lifestyles.
Mobile devices support a wide range of heterogeneous activities and, as a result, have great potential in terms of the different types of data that can be collected. This book explores the use of mobile devices to collect, analyse and apply research data in various research contexts, aiming to provide detailed and up-to-date knowledge on what is a comparatively new field of study. It does so by considering several aspects: the main methodological possibilities and issues; comparison and integration with more traditional survey modes or ways of participating in research; the quality of collected data; use in commercial market research; the representativeness of studies based only on the mobile population; analysis of the current spread of mobile devices in several countries; and so on. The book thus presents research findings from a wide range of countries and contexts.
This book was developed in the framework of WebDataNet's Task Force 19. WebDataNet was created in 2009 by a group of researchers focused on the discussion of data collection methods. Supported by the European Union programme for the Coordination of Science and Technology, WebDataNet has become a unique, multidisciplinary network that has brought together leading web-based data collection experts from several institutions and disciplines, with relevant backgrounds, from more than 35 different countries.
This article analyses differences in data quality between PC and mobile web surveys. The comparison used the following indicators: (1) response rates, (2) response-order effects, (3) social desirability of responses, (4) item nonresponse, and (5) responses to open-ended questions. An experiment comparing the two methods shows that the mobile web survey yields a lower response rate than the PC web survey. In addition, answers to open-ended questions completed on a mobile device are significantly shorter than those obtained on a PC. At the same time, no differences between the modes were found in the level of social desirability of responses, the extent of item nonresponse, or the strength of response-order effects.
In this paper, we conduct a meta-analysis of breakoff rates in mobile web surveys. We test whether optimization of web surveys for mobile devices, invitation mode (SMS vs. e-mail), survey length, expected duration stated in the invitation, survey design (scrolling vs. paging), prerecruitment, number of reminders, design complexity (grids, drop-down questions, sliders, images, progress indicator), incentives, an opportunity to skip survey questions, and an opportunity to select the preferred mode (PC or mobile web) have an effect on breakoffs. The meta-analysis is based on 14 studies (39 independent samples) conducted using online panels, both probability-based and non-probability-based. We found that mobile-optimized surveys, e-mail invitations, shorter surveys, prerecruitment, more reminders, a less complex design, and an opportunity to choose the preferred survey mode decrease breakoff rates in mobile web surveys. No effect was found for a scrolling design, incentives, indicating expected duration in the invitation, or an opportunity to skip survey questions.
A large number of findings in survey research suggest that responses to sensitive questions are situational and can vary with context. The methodological literature demonstrates that social desirability biases are less prevalent in self-administered surveys, particularly in Web surveys, where there is no interviewer and less risk of presenting oneself in an unfavorable light. Since the number of mobile Web browser users is growing, we focused our study on the effects of different devices (PC or cell phone) in Web surveys on respondents' willingness to report sensitive information. To reduce selection bias, we carried out a two-wave cross-over experiment using a volunteer online access panel in Russia. Participants were asked to complete the questionnaire in both survey modes: PC and mobile Web. We hypothesized that features of mobile Web usage may affect response accuracy and lead to more socially desirable responses than in the PC Web survey mode. Consistent with our hypothesis, we found significant differences by mode in the reporting of alcohol consumption, but other sensitive questions did not show similar effects. We also found that the presence of familiar bystanders had an impact on responses, while the presence of strangers had no significant effect in either survey mode. Contrary to expectations, we found no evidence that completing the questionnaire at home, or trust in data confidentiality, had a positive impact on the level of reporting. These results could help survey practitioners design Web surveys and improve the quality of data collected on different devices.