Automated assessment of learner text complexity
EFL methodology has long recognized the importance of giving learners of foreign languages regular and prompt feedback on their speech production, both written and oral, and over the past two decades various tools for automated instant feedback have appeared. This paper presents an application that measures text complexity and translates the results into feedback on the author's language proficiency. Along with standard text complexity features, the tool takes into account features that are significant for Russian learners of English. The application advises students on how to improve the weaker aspects of an evaluated essay by displaying the statistics of the relevant linguistic features of the text in two colours, one marking features at the stronger level and one marking those at the weaker level. We identify which text features are most relevant for assessing essays written in English by Russian students. We analyzed 3,440 texts from the Russian Error-Annotated English Learner Corpus and, for each of them, computed the values of the text complexity criteria. We then applied machine learning and statistical methods to predict the grade an essay would receive.
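The pipeline described above, extracting complexity features from each essay and then mapping them to a proficiency estimate, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the feature names, the sample text, and the threshold-based grading helper are all hypothetical stand-ins, and the paper's Russian-learner-specific criteria and trained models are not reproduced here.

```python
import re

def complexity_features(text):
    """Compute a few generic, illustrative text-complexity features.

    These are common surface measures (sentence length, lexical
    diversity, word length); the real application uses a richer set,
    including features specific to Russian learners of English.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    n_words = len(words)
    return {
        "avg_sentence_len": n_words / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(n_words, 1),
        "avg_word_len": sum(len(w) for w in words) / max(n_words, 1),
    }

def grade_band(feats):
    """Map features to a coarse band via hypothetical thresholds.

    In the paper this step is done by trained machine-learning models,
    not hand-picked cut-offs; the numbers below are purely illustrative.
    """
    score = feats["avg_sentence_len"] * 0.1 + feats["type_token_ratio"]
    return "stronger" if score >= 1.5 else "weaker"

sample = ("Learners benefit from quick feedback. "
          "Automated tools can estimate the complexity of an essay "
          "and suggest which aspects to improve.")
feats = complexity_features(sample)
band = grade_band(feats)
```

In the application itself, the per-feature statistics computed at the first step are what get shown to the student in two colours, while the grade prediction comes from models fitted on the annotated corpus.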