MULTI-LEVEL STUDENT ESSAY FEEDBACK IN A LEARNER CORPUS
The paper presents the results of applying computer tools and applications to the automated and semi-automated syntactic, lexical, and error analysis of student essays in a learner corpus. The texts in the corpus were written in English by Russian learners of English. The experiment consisted of comparing parameters of different types and at different levels between the essays graded highest by professional examiners and those graded lowest in a pool of about 2000 essays. At the first stage of the experiment, the authors applied a syntactic tool to parse the sentences; they then analyzed the results of lexical observations on those texts, and finally collected statistics on the errors identified in manual expert annotation. The parameters whose values differed markedly between the “good” and the “bad” essays are regarded by the authors as useful components of the feedback a student should receive for a text uploaded into the learner corpus.
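The comparison described above can be sketched informally as follows: for each measured parameter, contrast its distribution in the highest-graded and lowest-graded essays, and keep the parameters whose group means differ substantially. All parameter names and values below are illustrative assumptions, not the paper's actual features or data.

```python
# Hypothetical sketch: select parameters that discriminate between
# "good" and "bad" essays. Feature names and numbers are invented
# for illustration only.
from statistics import mean, stdev

def discriminative_parameters(good, bad, threshold=1.0):
    """Return parameters whose group means differ by more than
    `threshold` pooled standard deviations (a rough effect size)."""
    selected = {}
    for param in good:
        g, b = good[param], bad[param]
        pooled = (stdev(g) + stdev(b)) / 2 or 1e-9
        effect = abs(mean(g) - mean(b)) / pooled
        if effect > threshold:
            selected[param] = round(effect, 2)
    return selected

# Toy per-essay measurements for two groups (illustrative only).
good_essays = {
    "mean_sentence_length": [18.2, 19.5, 17.8, 20.1],
    "errors_per_100_words": [1.1, 0.8, 1.3, 0.9],
}
bad_essays = {
    "mean_sentence_length": [17.9, 18.4, 19.0, 18.6],
    "errors_per_100_words": [5.2, 6.1, 4.8, 5.7],
}

result = discriminative_parameters(good_essays, bad_essays)
print(result)
```

In this toy run, only the error-rate parameter separates the groups, so it would be the kind of parameter the authors consider worth reporting back to the student; sentence length, whose distributions overlap, would not.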