Discovering dialectal differences based on oral corpora
This paper discusses a method for detecting statistically significant linguistic differences between corpora while factoring in possible variability within the corpora being compared. Specifically, we compare two small corpora of dialects of Even, Bystraja and Lamunkhin Even, in an attempt to identify morphemes that are more frequent in one corpus than in the other. To investigate whether such a difference might be due to the over-representation of a speaker who happens to be an outlier in the use of a particular morpheme, we use DP ("deviation of proportions"), a measure of how evenly a given linguistic feature is distributed across the subcorpora of a corpus.
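The DP measure used here (Gries's "deviation of proportions") compares each subcorpus's expected share of a feature (proportional to subcorpus size) with its observed share. A minimal sketch, assuming equal-sized speaker subcorpora as a toy example; the function name and figures below are illustrative, not taken from the paper:

```python
def dp(part_sizes, part_freqs):
    """Gries's DP: 0 = the feature is spread across the parts exactly
    as their sizes predict; values near 1 = the feature is concentrated
    in few parts (e.g. a single outlier speaker)."""
    total_size = sum(part_sizes)
    total_freq = sum(part_freqs)
    expected = [s / total_size for s in part_sizes]  # each part's share of the corpus
    observed = [f / total_freq for f in part_freqs]  # each part's share of the hits
    return 0.5 * sum(abs(o - e) for o, e in zip(observed, expected))

# Three equally sized speaker subcorpora, all 30 hits from one speaker:
print(dp([1000, 1000, 1000], [30, 0, 0]))   # ~0.667: badly skewed
# Hits spread in proportion to subcorpus size:
print(dp([1000, 1000, 1000], [10, 10, 10])) # 0.0: perfectly even
```

A morpheme that is frequent overall but has a DP close to 1 is a candidate for being a single speaker's idiosyncrasy rather than a genuine dialectal difference.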
The article discusses the most recent trends in the development of the English progressive. A corpus-based approach is seen as an effective means of establishing the reliability of the retrieved data, and it helps track the major diachronic trend of increasing frequency of the progressive aspect since the beginning of the 20th century. The article specifically deals with the extension of the progressive to new constructions, such as the modal, present perfect and past perfect passive progressive, and also accounts for the use of progressive forms in contextual environments not generally characteristic of them.
The paper describes the current state of the corpus of Modern Albanian, which is being created as a joint effort of the Institute of Linguistics (St. Petersburg) and the Department of Linguistics of the Higher School of Economics, Moscow.
This paper is devoted to two tools for creating morphologically annotated linguistic corpora: UniParser and the EANC platform. The EANC platform is the database and search framework originally developed for the Eastern Armenian National Corpus (www.eanc.net) and later adapted for other languages. UniParser is an automated morphological analysis tool developed specifically for creating corpora of languages with relatively small numbers of native speakers, for which developing parsers from scratch is not feasible. It has been designed for use with the EANC platform and generates XML output in the EANC format.
UniParser and the EANC platform have already been used to create corpora of several languages — Albanian, Kalmyk, Lezgian and Ossetic, of which the Ossetic corpus is the largest (5 million tokens, with 10 million planned for 2013) — and are currently being employed in the construction of corpora of Buryat and Modern Greek. This paper describes the general architecture of the EANC platform and UniParser, using the Ossetic corpus as an example of the advantages and disadvantages of the described approach.
The project we present, the Russian Learner Translator Corpus (RusLTC), is a multiple learner translator corpus that stores Russian students’ translations out of and into English. The project is being developed by a cross-functional team of translator trainers and computational linguists in Russia. Translations are collected from several Russian universities; all are produced by students majoring in translation as part of routine and exam assignments or as submissions to translation contests. As of March 2014, RusLTC contains a total of nearly 1.2 million word tokens, 258 source texts and 1,795 translations. The paper gives a brief overview of related research, describes the corpus structure and the corpus-building technologies used, and covers the query tool features and our error annotation solutions. In the final part we summarize RusLTC-based research and its current practical applications, and suggest research prospects and possibilities.
Four electronic corpora created in 2011 within the framework of the “Corpus Linguistics: the Albanian, Kalmyk, Lezgian, and Ossetic Languages” Program of Fundamental Research of the RAS are presented. The interface and functionality of these corpora are described, the engineering problems solved in their creation are elucidated, and the prospects for their development are discussed. Particular emphasis is placed on the compilation of dictionaries and the automatic grammatical markup of the corpora.
The aim of the article is to inform professional readers of the potential of corpus analysis for L2 teaching, based on our own experience of implementing corpus-based activities in the L2 classroom. The paper is divided into four sections: Introduction (1), Corpus Tools (2), Examples of Classroom Use (3) and Conclusion (4). The Introduction outlines the recent corpus-driven changes in attitudes to language statistics, which are reflected in corpus-informed textbooks. Section Two, which has nine subsections, deals with corpus tools and the notions of corpus analysis (concordance, collocation and colligation search, corpus statistics, semantic prosody, etc.) in the L2 teaching context. In particular, we discuss condensed reading, vertical scanning of concordances for lexico-grammatical profiling, and other teaching tools for developing L2 linguistic competence. These are then supported (Section 3) by corpus-oriented classroom activities, with possible teaching outcomes outlined. Some experience-based comments are also given regarding the language level of students who could benefit from corpus data analysis. Based on our research results, the Conclusion elaborates on the idea of corpus competence and on the need for corpus tools to be used by both language teaching professionals and students.
Theoretically, as the pioneers of data-driven learning initially suggested, not only the researcher but also the learner should be given the chance to study language through corpora. The article argues that corpus tools for collocation search, together with colligation detection (i.e. probable grammatical structures), are a powerful means of developing both language and research skills.
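The kind of concordance search a learner would run can be sketched as a minimal keyword-in-context (KWIC) routine; the function, its parameters and the sample text below are our own illustration, not part of any tool discussed in the article:

```python
import re

def kwic(text, keyword, window=4):
    """Minimal keyword-in-context concordance: for every hit, return a
    line with `window` tokens of left and right context, keyword centred,
    ready for the 'vertical scanning' of aligned concordance lines."""
    tokens = re.findall(r"\w+|[^\w\s]", text.lower())
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left:>30} | {tok} | {right}")
    return lines

sample = ("The corpus shows recurring patterns. A corpus can teach learners. "
          "Students search the corpus for collocations.")
for line in kwic(sample, "corpus"):
    print(line)
```

Scanning the aligned left and right contexts of such lines is exactly how learners can notice collocational and colligational patterns for themselves.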
In addition to corpus-based activities and the theoretical grounding behind them, we also share our own experience of compiling a corpus of professional discourse. Both the idea and the practicality of a small university-made corpus are evaluated, and a brief comparison of a diversified corpus (such as the British National Corpus) with a “home-made” corpus is provided.
The research also draws attention to the term “chunk of language”, which has been adopted by Western teaching methodology and is considered in the paper in frequency-probability (corpus) terms. It is suggested that a chunk of language bigger than a collocation lends itself to discovery through a combination of various corpus tools. Such frequent language chunks (e.g. there is certain stigma attached to...) account for a large part of a native speaker’s vocabulary and fluency. They are believed to be stored in memory in great numbers and retrieved virtually undivided, though chunks may undergo minor colligational adjustments in speech. We believe that the discovery of frequent language chunks by learners can be organized as an educational research activity under the guidance of a language instructor; some examples of such research activities are provided in Section 3.
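As a rough illustration of how such chunks can surface from frequency counts, the sketch below extracts recurring contiguous n-grams from a text; the function name, parameters and sample sentence are hypothetical, standing in for the combination of corpus tools described above:

```python
import re
from collections import Counter

def frequent_chunks(text, n=4, min_freq=2):
    """Count contiguous n-grams ('chunks') and keep those recurring at
    least min_freq times -- a crude frequency-based stand-in for chunk
    discovery with corpus tools."""
    tokens = re.findall(r"\w+", text.lower())
    grams = (" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    counts = Counter(grams)
    return [(g, c) for g, c in counts.most_common() if c >= min_freq]

sample = ("There is certain stigma attached to smoking, just as "
          "there is certain stigma attached to littering.")
for chunk, freq in frequent_chunks(sample):
    print(freq, chunk)
```

On real corpus data, a learner would raise `min_freq` and vary `n` to separate genuinely entrenched chunks from chance repetitions.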
Thus, the article will equip the reader with a clear understanding of the potential of corpus linguistics in the foreign language classroom, as well as with the capacity and confidence to engage in corpus analysis. It may be particularly beneficial for non-native speakers of English who teach English in an ESP context, since we believe that corpus research ends the monopoly of language intuition, which is gradually being replaced by corpus statistics.