Publications in this section: 3
Article
Mavletova A. M. Social Science Computer Review. 2013. Vol. 31. No. 6. P. 725-743.

The considerable growth in the number of smart mobile devices with fast Internet connections poses new challenges for survey researchers. In this article, I compare data quality between two survey modes: self-administered web surveys completed on personal computers and those completed on mobile phones. Data quality is compared on five indicators: (a) completion rates, (b) response order effects, (c) social desirability, (d) non-substantive responses, and (e) length of open answers. I hypothesized that mobile web surveys would result in lower completion rates, stronger response order effects, and less elaborate answers to open-ended questions. No difference was expected in the level of reporting on sensitive items or in the rate of non-substantive responses. To test these assumptions, an experiment with the two survey modes was conducted using a volunteer online access panel in Russia. As expected, the mobile web mode was associated with a lower completion rate, shorter open answers, and similar levels of socially undesirable and non-substantive responses. However, no stronger primacy effects were found in the mobile web mode.

Added: May 3, 2013
Article
Mavletova A. M., Couper M. P., Lebedev D. Social Science Computer Review. 2018. Vol. 36. No. 6. P. 647-668.

While grids or matrix questions are a widely used format in PC web surveys, there is no agreement on the best format for mobile web surveys. We conducted a two-wave experiment in an opt-in panel in Russia, varying the question format (grid vs. item-by-item) and the device respondents used for survey completion (smartphone vs. PC). In total, 1,678 respondents completed the survey in the assigned conditions in the first wave and 1,079 in the second wave. Overall, we found somewhat higher measurement error in the grid format in both the mobile and PC web conditions. We found almost no significant effect of the question format on test–retest correlations between the latent scores in the two waves, and no differences in breakoff rates between the question formats. A multigroup comparison showed some measurement equivalence between the question formats; however, the differences varied with the length of the scale, with a longer scale producing some differences in measurement equivalence between the conditions. Levels of straightlining were higher in the grid than in the item-by-item format. In addition, concurrent validity was lower in the grid format in both the PC and mobile web conditions. Finally, subjective indicators of respondent burden showed that the grid format increased reported technical difficulties and lowered respondents' subjective evaluation of the survey.

Added: Nov 1, 2017
Article
Mavletova A. M. Social Science Computer Review. 2015. Vol. 33. No. 3. P. 372-398.

Several studies have measured a gamification effect in web surveys among adults; however, no experiments have been published that focus on younger respondents. In this article, data quality is compared across three conditions among children and adolescents 7–15 years old: (1) a text-only survey, (2) a visual survey with an attractive design and images, and (3) a gamified survey. To test the gamification effect, an experiment was conducted among 1,050 children using a volunteer online access panel in Russia. The gamified survey produced completion times more than a third longer than the text-only survey. A higher overall item nonresponse rate was found in both the gamified and visual surveys; however, this was mainly due to the Flash-based questions in these conditions. Fewer respondents straightlined or used middle responses in the gamified and visual surveys. The gamified survey was also less burdensome to complete: children requested help with the survey questions less often and found the survey more enjoyable and easier. Moreover, their subjective evaluation of the completion time did not differ from the other two conditions. Overall, we suggest that the gamification effect in web surveys among children should be explored further.

Added: Aug 21, 2014