Mobile Web Surveys: A Total Survey Error Perspective
While mobile phones have posed challenges for telephone survey researchers for some time, the Internet and Web capabilities of these devices have begun to receive attention only in the last few years. Internet-enabled smartphones can affect survey data collection in a number of ways, and the implications for various sources of survey error are only now being fully explored. Three broad approaches to the opportunities and challenges posed by the mobile Web can be distinguished.
In daily life, data flow constantly through cameras, the Internet, satellites, radio frequencies, sensors, household appliances, cars, smartphones, tablets and the like. Among these tools, mobile devices, especially mobile phones, smartphones and tablets, are the most widespread, and their use has become pervasive in everyday life in both developed and developing countries. Shopping, reading newspapers, participating in forums, designing and completing surveys, communicating with friends and making new ones, filing tax returns and getting involved in politics are all examples of how deeply ingrained mobile technology is in modern life.
Mobile devices allow a wide range of heterogeneous activities and, as a result, have great potential in terms of the different types of data that can be collected. The use of mobile devices to collect, analyse and apply research data is explored here. This book focuses on the use of mobile devices in various research contexts, aiming to provide detailed and up-to-date knowledge of what is a comparatively new field of study. It does so from several angles: the main methodological possibilities and issues; comparison and integration with more traditional survey modes and ways of participating in research; the quality of the collected data; use in commercial market research; the representativeness of studies based only on the mobile population; the current spread of mobile devices in several countries; and so on. The book thus brings together research findings from a wide range of countries and contexts.
This book was developed in the framework of WebDataNet's Task Force 19. WebDataNet was created in 2009 by a group of researchers focused on the discussion of data collection methods. Supported by the European Cooperation in Science and Technology (COST) programme of the European Union, WebDataNet has become a unique, multidisciplinary network that has brought together leading web-based data collection experts from several institutions, disciplines, and relevant backgrounds in more than 35 different countries.
While grids or matrix questions are a widely used format in PC web surveys, there is no agreement on the best format for mobile web surveys. We conducted a two-wave experiment in an opt-in panel in Russia, varying the question format (grid versus item-by-item) and the device respondents used for survey completion (smartphone versus PC). In total, 1,678 respondents completed the survey in the assigned conditions in the first wave and 1,079 in the second wave. Overall, we found somewhat higher measurement error in the grid format in both the mobile and PC web conditions. We found almost no significant effect of the question format on test–retest correlations between the latent scores in the two waves, and no differences in breakoff rates between the question formats. The multigroup comparison showed some measurement equivalence between the question formats; however, this varied with scale length, the longer scale producing some differences in measurement equivalence between the conditions. Levels of straightlining were higher in the grid than in the item-by-item format. In addition, concurrent validity was lower in the grid format in both the PC and mobile web conditions. Finally, subjective indicators of respondent burden showed that the grid format increased reported technical difficulties and lowered respondents' evaluation of the survey.
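To make the straightlining comparison concrete, the sketch below shows one common way to flag nondifferentiation: counting respondents who pick the same scale point for every item in a battery and testing the difference in rates between conditions. The data, sample sizes, and injected straightliner shares are invented for illustration; this is a minimal sketch, not the study's actual analysis code.

```python
# A minimal sketch of a straightlining (nondifferentiation) check,
# assuming responses are stored as a respondents-by-items matrix of
# scale points. All data here are simulated.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(42)

def straightliners(responses: np.ndarray) -> np.ndarray:
    """True where a respondent gave the same answer to every item."""
    return (responses == responses[:, [0]]).all(axis=1)

# Simulated 5-point answers to a 10-item battery in two conditions.
grid = rng.integers(1, 6, size=(800, 10))
item_by_item = rng.integers(1, 6, size=(800, 10))

# Inject known shares of straightliners so the comparison is visible.
grid[:80] = grid[:80, [0]]                  # ~10% in the grid condition
item_by_item[:40] = item_by_item[:40, [0]]  # ~5% in item-by-item

flags = [straightliners(grid), straightliners(item_by_item)]
counts = np.array([f.sum() for f in flags])
nobs = np.array([len(f) for f in flags])

# Two-proportion z-test for the difference in straightlining rates.
z, p = proportions_ztest(counts, nobs)
print(f"grid {counts[0]/nobs[0]:.1%} vs item-by-item "
      f"{counts[1]/nobs[1]:.1%}, p = {p:.4f}")
```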
Previous studies have not found effective ways of encouraging participants to use smartphones to complete web surveys. We hypothesized that conditional differential incentives (with the amount depending on the device the respondent uses to complete the web survey) can increase both overall participation rates and the proportion of respondents who use a particular device. We conducted an experiment in a volunteer online access panel in Russia, with 5,474 invitations sent to regular mobile Internet users. We varied the invitation mode (SMS vs. e-mail) and the encouragement to use a particular device for completing the survey: mobile phone or personal computer (PC). SMS invitations increased the proportion of mobile web respondents, while e-mail invitations increased the proportion of PC web respondents. As expected, differential incentives increased overall participation rates by 8-10 percentage points when higher incentives were offered for completing the survey on a mobile phone. Contrary to expectations, offering higher incentives to PC web respondents did not produce higher participation rates than the control condition. Both encouraging the use of a mobile phone and offering higher incentives were effective at increasing the proportion of respondents using mobile devices. In terms of both participation rates and the proportion of respondents using mobile devices, offering incentives 50% higher was as efficient as offering incentives 100% higher for mobile web respondents. Offering higher incentives to mobile web respondents also affected sample composition: participation rates were significantly higher among females and respondents with higher education.
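As a rough illustration of how an uplift of this size might be assessed, the snippet below computes the percentage-point difference in participation between two conditions with a Wald confidence interval for the difference of two proportions. The counts are invented and chosen only to land near the 8-10 point range reported above; this is back-of-the-envelope arithmetic, not the study's analysis.

```python
# Sketch: difference in participation rates between an incentive
# condition and a control condition. Counts are hypothetical.
import math

def rate_diff_ci(k1: int, n1: int, k2: int, n2: int, z: float = 1.96):
    """Wald confidence interval for the difference of two proportions."""
    p1, p2 = k1 / n1, k2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff, (diff - z * se, diff + z * se)

# Hypothetical counts: higher mobile incentive vs. control.
diff, (lo, hi) = rate_diff_ci(k1=390, n1=1300, k2=270, n2=1300)
print(f"uplift = {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```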
There is some evidence that a scrolling design may reduce breakoffs in mobile web surveys compared to a paging design, but there is little empirical evidence to guide the choice of the optimal number of items per page. We investigate the effect of the number of items presented on a page on data quality in two types of questionnaires: with and without user-controlled skips. Three versions of a 30-item instrument were compared, with 5, 15 or all 30 questions presented on a page, in two different surveys, one with skips and one without. We found that displaying all 30 items on one page reduced the breakoff rate by almost a third compared to presenting 5 items per page in the questionnaire without skips; however, the difference was not statistically significant. In both surveys, with and without skips, completion times were significantly lower in the 30-items-per-page condition, but item nonresponse rates were also higher. We offer some practical recommendations to guide design choices for mobile web questionnaires.
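For clarity, the sketch below computes the two data-quality indicators compared in this experiment, the breakoff rate and the item nonresponse rate, from a hypothetical respondent-level table. The column names and values are assumptions made for illustration only.

```python
# Minimal sketch of two data-quality indicators, computed from a
# hypothetical respondent-level table. Columns and values are invented.
import pandas as pd

df = pd.DataFrame({
    "condition":      ["5/page", "5/page", "30/page", "30/page"],
    "completed":      [True, False, True, True],
    "items_shown":    [30, 10, 30, 30],   # items seen before finishing or breaking off
    "items_answered": [29, 8, 25, 30],
})

# Breakoff rate: share of respondents who started but did not finish.
breakoff = 1 - df.groupby("condition")["completed"].mean()

# Item nonresponse rate: share of displayed items left unanswered.
totals = df.groupby("condition")[["items_shown", "items_answered"]].sum()
item_nonresponse = 1 - totals["items_answered"] / totals["items_shown"]

print(breakoff, item_nonresponse, sep="\n")
```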
In this paper, we conduct a meta-analysis of breakoff rates in mobile web surveys. We test whether optimization of web surveys for mobile devices, invitation mode (SMS vs. e-mail), survey length, stating the expected duration in the invitation, survey design (scrolling vs. paging), prerecruitment, number of reminders, design complexity (grids, drop-down questions, sliders, images, progress indicator), incentives, an opportunity to skip survey questions, and an opportunity to select the preferred mode (PC or mobile web) have an effect on breakoffs. The meta-analysis is based on 14 studies (39 independent samples) conducted in online panels, both probability-based and non-probability-based. We found that mobile-optimized surveys, e-mail invitations, shorter surveys, prerecruitment, more reminders, a less complex design, and an opportunity to choose the preferred survey mode decrease breakoff rates in mobile web surveys. No effects of a scrolling design, incentives, stating the expected duration in the invitation, or an opportunity to skip survey questions were found.
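To illustrate the pooling step behind a meta-analysis of rates, the sketch below applies the standard DerSimonian-Laird random-effects estimator to logit-transformed breakoff rates. The study counts are invented, and the estimator is a textbook method rather than necessarily the exact model used in the paper.

```python
# Sketch of DerSimonian-Laird random-effects pooling of breakoff
# rates on the logit scale. Study counts are hypothetical.
import numpy as np

# (breakoffs, sample size) for a few hypothetical samples.
studies = [(45, 500), (80, 620), (30, 410), (120, 900)]
k = np.array([b for b, _ in studies], dtype=float)
n = np.array([m for _, m in studies], dtype=float)

# Logit-transformed rates with approximate within-study variances.
p = k / n
y = np.log(p / (1 - p))
v = 1 / k + 1 / (n - k)

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = 1 / v
y_fixed = (w * y).sum() / w.sum()
q = (w * (y - y_fixed) ** 2).sum()
c = w.sum() - (w ** 2).sum() / w.sum()
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects pooled estimate, back-transformed to a rate.
w_re = 1 / (v + tau2)
y_re = (w_re * y).sum() / w_re.sum()
print(f"pooled breakoff rate ~ {1 / (1 + np.exp(-y_re)):.1%}")
```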