Course Handbook for Promoting Sustainable Excellence in English Language Testing and Assessment
Language Assessment at Kazan (Volga region) Federal University (KFU). In this paper, the local impact of the course at KFU is examined at four levels: Reactions, Learning Changes, Behaviour and Results. Impact data collected at KFU include end-of-session written feedback, pre- and post-course questionnaires, classroom observation, interviews, concept maps, teacher portfolios, written assignments, tests/examinations and participant journal entries. Viewed as the first step in conducting a full Student Needs Analysis, the research is intended to inform the design and delivery of Language Assessment courses for graduates majoring in English, Linguistics or Pedagogy elsewhere. The methods, techniques and tools developed by the authors may also be adapted for application to any university course, either during piloting or following its introduction.
Although a variety of written olympiads (language competitions) in English exist, fairly little is known about how they differ from traditional forms of language assessment. In Russia, olympiads in the English language are gaining currency because they provide an opportunity to reveal pupils’ creative thinking and intellectual abilities. The present study examined the major differences between language olympiads and traditional forms of language assessment. Five main olympiads in the English language are compared in terms of their levels, assessed skills and task types, and their distinctive features are outlined. The results of piloting a new written olympiad in the English language, the Higher School of Economics’ “Vysshaya proba” (Highest Degree), are analyzed. A set of test items was developed for 120 secondary school pupils in Moscow to find out whether they could easily cope with a non-traditional form of assessment, namely the language olympiad. The results indicate that language competitions, as a form of alternative assessment, may be introduced in schools to encourage better learning.
Background: Only a limited number of aphasia language tests exist in the majority of the world’s commonly spoken languages. Furthermore, few aphasia tests in languages other than English have been standardised and normed, and few have supportive psychometric data pertaining to reliability and validity. The lack of standardised assessment tools across many of the world’s languages poses serious challenges to clinical practice and research in aphasia. Aims: The current review addresses this lack of assessment tools by providing conceptual and statistical guidance for developing aphasia assessment tools and establishing their psychometric properties. Main Contribution: A list of aphasia tests in the 20 most widely spoken languages is included. The pitfalls of translating an existing test into a new language versus creating a new test are outlined. Factors to be considered in determining test content are discussed. Further, a description of test items corresponding to different language functions is provided, with special emphasis on implementing important controls in test design. Next, a broad review of the principal psychometric properties relevant to aphasia tests is presented, with specific statistical guidance for establishing the psychometric properties of standardised assessment tools. Conclusions: This article may be used to help guide future work on developing, standardising and validating aphasia language tests. The considerations discussed are also applicable to the development of standardised tests of other cognitive functions.