Mobile Web Surveys: A Total Survey Error Perspective
While mobile phones have posed a challenge for telephone survey researchers for some time, the Internet and Web capabilities of these phones have only begun to receive attention in the last few years. Internet-enabled smartphones can affect survey data collection in a number of ways, and the implications for the various sources of survey error are only now being fully explored. There are three broad approaches to the opportunities and challenges posed by the mobile Web.
Daily activity sees data constantly flowing through cameras, the Internet, satellites, radio frequencies, sensors, household appliances, cars, smartphones, tablets and the like. Among all the tools currently in use, mobile devices, especially mobile phones, smartphones and tablets, are the most widespread, and their use has become prevalent in everyday life in both developed and developing countries. Shopping, reading newspapers, participating in forums, designing and completing surveys, communicating with friends and making new ones, filing tax returns and getting involved in politics are all examples of how ingrained mobile technology is in modern life.
Mobile devices support a wide range of heterogeneous activities and, as a result, have great potential in terms of the different types of data that can be collected. This book explores the use of mobile devices to collect, analyse and apply research data. It focuses on the use of mobile devices in various research contexts, aiming to provide detailed and up-to-date knowledge of what is a comparatively new field of study. It does so by considering several aspects: the main methodological possibilities and issues; comparison and integration with more traditional survey modes or ways of participating in research; the quality of the collected data; use in commercial market research; the representativeness of studies based only on the mobile population; analysis of the current spread of mobile devices in several countries; and so on. Thus, the book provides research findings from a wide range of countries and contexts.
This book was developed in the framework of WebDataNet’s Task Force 19. WebDataNet was created in 2009 by a group of researchers focused on the discussion of data collection methods. Supported by the European Union programme for the Coordination of Science and Technology, WebDataNet has become a unique, multidisciplinary network that brings together leading web-based data collection experts from a variety of institutions, disciplines and backgrounds in more than 35 countries.
While grids or matrix questions are a widely used format in PC web surveys, there is no agreement on the appropriate format in mobile web surveys. We conducted a two-wave experiment in an opt-in panel in Russia, varying the question format (grid versus item-by-item) and the device respondents used for survey completion (smartphone versus PC). In total, 1,678 respondents completed the survey in the assigned conditions in the first wave and 1,079 in the second wave. Overall, we found somewhat higher measurement error in the grid format in both the mobile and PC web conditions. We found almost no significant effect of question format on the test–retest correlations between the latent scores in the two waves and no differences in breakoff rates between the question formats. A multigroup comparison showed some measurement equivalence between the question formats; however, the degree of equivalence varied with scale length, with a longer scale producing some differences in measurement equivalence between the conditions. Levels of straightlining were higher in the grid than in the item-by-item format. In addition, concurrent validity was lower in the grid format in both the PC and mobile web conditions. Finally, subjective indicators of respondent burden showed that the grid format increased reported technical difficulties and lowered respondents' subjective evaluations of the survey.
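To make two of the quality indicators above concrete, the following is a minimal sketch of how straightlining and a test-retest correlation could be computed from long-format response data. The file name, column names and the five-item scale are illustrative assumptions, not the study's actual data, and the sketch uses simple sum scores rather than the latent scores analysed in the chapter.

```python
# Hypothetical sketch: straightlining and test-retest correlation by question format.
# Assumed columns: respondent_id, wave (1 or 2), condition ("grid" or "item_by_item"),
# and item1..item5 holding the scale responses.
import pandas as pd

df = pd.read_csv("responses.csv")
items = ["item1", "item2", "item3", "item4", "item5"]

# Straightlining: share of wave-1 respondents giving the identical answer to every scale item.
wave1 = df[df["wave"] == 1]
straightlined = wave1[items].nunique(axis=1).eq(1)
print(straightlined.groupby(wave1["condition"]).mean())

# Test-retest correlation: correlate each respondent's wave-1 and wave-2 sum scores.
scores = (
    df.assign(score=df[items].sum(axis=1))
      .pivot_table(index=["respondent_id", "condition"], columns="wave", values="score")
      .dropna()
      .reset_index()
)
retest = {cond: grp[1].corr(grp[2]) for cond, grp in scores.groupby("condition")}
print(retest)
```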
The considerable growth in the number of smart mobile devices with fast Internet connections poses new challenges for survey researchers. In this article, I compare data quality between two survey modes: self-administered web surveys completed on personal computers and those completed on mobile phones. Data quality is compared on five indicators: (a) completion rates, (b) response order effects, (c) social desirability, (d) non-substantive responses, and (e) length of open answers. I hypothesized that mobile web surveys would result in lower completion rates, stronger response order effects, and less elaborate answers to open-ended questions, with no difference expected in the level of reporting on sensitive items or in the rate of non-substantive responses. To test these assumptions, an experiment with the two survey modes was conducted in a volunteer online access panel in Russia. As expected, mobile web was associated with a lower completion rate, shorter open answers, and similar levels of socially undesirable and non-substantive responses. Contrary to expectations, however, no stronger primacy effects were found in the mobile web mode.
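As a rough illustration of how a completion-rate comparison of this kind can be tested, the sketch below runs a two-proportion z-test on the number of completes out of the number of respondents who started in each mode. The counts are invented for the example and do not come from the chapter.

```python
# Illustrative two-proportion z-test for completion rates (PC vs. mobile web); counts are made up.
from statsmodels.stats.proportion import proportions_ztest

completes = [820, 640]   # hypothetical completed interviews: PC, mobile web
starts = [1000, 1000]    # hypothetical respondents who started in each condition

z, p_value = proportions_ztest(count=completes, nobs=starts)
print(f"PC completion rate:     {completes[0] / starts[0]:.1%}")
print(f"Mobile completion rate: {completes[1] / starts[1]:.1%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```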
In this paper, we conduct a meta-analysis of breakoff rates in mobile web surveys. We test whether optimization of web surveys for mobile devices, invitation mode (SMS vs. e-mail), survey length, stating the expected duration in the invitation, survey design (scrolling vs. paging), prerecruitment, number of reminders, design complexity (grids, drop-down questions, sliders, images, progress indicators), incentives, the opportunity to skip survey questions, and the opportunity to select the preferred mode (PC or mobile web) have an effect on breakoffs. The meta-analysis is based on 14 studies (39 independent samples) conducted in online panels, both probability-based and non-probability-based. We found that mobile-optimized surveys, e-mail invitations, shorter surveys, prerecruitment, more reminders, a less complex design, and the opportunity to choose the preferred survey mode decrease breakoff rates in mobile web surveys. No effects of a scrolling design, incentives, stating the expected duration in the invitation, or the opportunity to skip survey questions were found.
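For readers unfamiliar with how breakoff rates from independent samples can be pooled, the following is a hedged sketch of a standard random-effects approach (logit-transformed proportions with DerSimonian-Laird estimation of the between-study variance). The four samples below are invented; they are not the 39 samples analysed in the paper, and the paper's own estimation strategy may differ.

```python
# Minimal random-effects pooling of breakoff rates (DerSimonian-Laird on logit proportions).
import numpy as np

breakoffs = np.array([120, 45, 200, 80])     # hypothetical breakoffs per sample
starters = np.array([1000, 500, 1500, 600])  # hypothetical respondents who started

p = breakoffs / starters
y = np.log(p / (1 - p))                           # logit of each breakoff rate
v = 1 / breakoffs + 1 / (starters - breakoffs)    # approximate variance of each logit

w = 1 / v                                         # inverse-variance (fixed-effect) weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)                # heterogeneity statistic
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1 / (v + tau2)                           # random-effects weights
y_random = np.sum(w_star * y) / np.sum(w_star)
pooled_rate = 1 / (1 + np.exp(-y_random))         # back-transform to a proportion
print(f"tau^2 = {tau2:.3f}, pooled breakoff rate = {pooled_rate:.1%}")
```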
There is some evidence that questionnaire design (scrolling or paging) and invitation mode (SMS or e-mail) have an impact on response rates in web surveys completed on personal computers (PCs). This paper examines whether these findings can be generalized to mobile web surveys. First, we explore the effect of scrolling versus paging design on the breakoff rate, item nonresponse, and completion time in mobile web surveys. Second, we investigate which type of invitation and reminder mode (SMS or e-mail) is more effective in terms of producing higher participation rates and maximizing the percentage of respondents who complete the survey via a mobile device rather than a PC. The paper summarizes the results of an experiment conducted among members of a volunteer online access panel in Russia, who were asked to complete the survey using a mobile device. We find that the scrolling design leads to significantly faster completion times, lower (though not significantly lower) breakoff rates, fewer technical problems, and higher subjective ratings of the questionnaire. We also find that SMS invitations are more effective than e-mail invitations in mobile web surveys.
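Because completion times are typically right-skewed, one common way to compare them between a scrolling and a paging design is an independent-samples t-test on log-transformed times. The sketch below illustrates this with simulated data; the numbers are made up and are not the experiment's results.

```python
# Illustrative comparison of completion times between scrolling and paging designs (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
scrolling_seconds = rng.lognormal(mean=6.0, sigma=0.4, size=300)  # hypothetical times
paging_seconds = rng.lognormal(mean=6.2, sigma=0.4, size=300)     # hypothetical times

t, p_value = stats.ttest_ind(np.log(scrolling_seconds), np.log(paging_seconds))
print(f"median scrolling: {np.median(scrolling_seconds):.0f} s, "
      f"median paging: {np.median(paging_seconds):.0f} s")
print(f"t = {t:.2f}, p = {p_value:.4f}")
```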