Supplementary Proceedings of the 7th International Conference on Analysis of Images, Social Networks and Texts (AIST-SUP 2018), Moscow, Russia, July 5-7, 2018
We present an obstacle-avoiding path-planning method for a first-person shooter video game, based on a Voronoi diagram adjusted with a tactical component. We use a visibility measure to aggregate information on cover positions in offline and online game modes. To incorporate online learning based on a frag map, we introduce a path-finding algorithm that minimizes the probability of walking through dangerous zones and, conversely, chooses the best shooting positions when observing a map level. Several implementations of collision-free path finding are compared in terms of efficiency, team goal achievement, and path length.
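The danger-aware path search described above can be sketched as Dijkstra's algorithm over the Voronoi roadmap, with each edge cost combining geometric length and a frag-map penalty. This is a minimal illustration, not the paper's implementation; the graph layout, the `danger` scores, and the trade-off parameter `alpha` are all hypothetical.

```python
import heapq

def safest_path(graph, danger, start, goal, alpha=1.0):
    """Dijkstra over a roadmap where each edge cost is its geometric
    length plus a penalty proportional to the frag-map danger of the
    target node; alpha trades path length against safety."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, length in graph.get(u, []):
            cost = d + length + alpha * danger.get(v, 0.0)
            if cost < dist.get(v, float("inf")):
                dist[v] = cost
                prev[v] = u
                heapq.heappush(pq, (cost, v))
    # Walk predecessors back from the goal to recover the path.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))
```

With a high danger score on one node, the search detours around it even when both routes have equal length.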
The goal of this study is to analyze the contribution of the journal Social Networks to the field of social network analysis and, as a result, to improve the methodology that reflects the theoretical contribution of empirical articles along three dimensions: theory building, theory testing, and applied method. In addition, the paper examines the co-evolution of journals within the field of social network analysis. We build a model of the network of social network journals and identify the place that Social Networks occupies within this network, along with its unique impact.
The paper considers the task of the morphemic analysis of Russian words and compares the efficiency of several proposed models. These models can be divided into three groups: derivational and inflectional rule-based, probabilistic, and hybrid models. The latter achieved state-of-the-art results of 0.848 F-score on a test set of 500 Russian words. The models use dictionaries of morphs and words and information about the part of speech and other morphological features of the word. Importantly, our solution takes into account synchronic word-formative relations between words. This allows for analyzing words in any grammatical form, as well as previously unseen words. Our system, which we make freely available to the community, also features morphemic annotation of entire texts and search for specified morphs.
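The dictionary-based side of such models can be illustrated with a small dynamic-programming sketch that splits a word into a sequence of known morphs. This is a deliberate simplification under assumed inputs (a flat set of morph strings, no morphological features), not the paper's rule-based or hybrid models.

```python
def segment(word, morphs):
    """Dynamic programming over prefix positions: best[i] holds the
    shortest morph sequence covering word[:i], built by extending any
    covered prefix with a dictionary morph. Returns None if the word
    cannot be fully covered."""
    best = {0: []}
    for i in range(1, len(word) + 1):
        for j in range(i):
            if j in best and word[j:i] in morphs:
                cand = best[j] + [word[j:i]]
                if i not in best or len(cand) < len(best[i]):
                    best[i] = cand
    return best.get(len(word))
```

Preferring fewer morphs is one of several plausible tie-breaking criteria; a probabilistic model would instead score each candidate segmentation.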
The paper presents an attempt to solve the task of aspect-based sentiment analysis in the domain of Russian-language hotel reviews, using distributed representation of words. The authors follow an approach similar to [Blinov, Kotelnikov, 2014], but applied to a different domain and using different parameters. The authors also present a new dataset that is made available to the community. To build the vector space of words with word2vec, a corpus comprising 50 329 hotel reviews was constructed. The next step was the compilation of aspect and sentiment lexicons in the vector space obtained. The lexicon construction approach was based on iteratively expanding a small set of initially specified terms. Finally, the sentiment of aspects in actual reviews was calculated given the aspect and sentiment terms found in the text and their weights, i.e. cosine similarity to the initial terms. The model was tested on a corpus of 6 876 texts from the same domain.
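The iterative lexicon expansion described above can be sketched as repeatedly adding every vocabulary word whose cosine similarity to a current lexicon entry exceeds a threshold, with the similarity serving as the term's weight. The toy vectors, threshold, and round count below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def expand_lexicon(vectors, seeds, threshold=0.6, rounds=3):
    """Grow a seed lexicon in the embedding space: on each round, add
    every word whose cosine similarity to some lexicon entry exceeds
    the threshold, keeping the best similarity as the term's weight."""
    lexicon = {w: 1.0 for w in seeds}
    for _ in range(rounds):
        added = {}
        for word, vec in vectors.items():
            if word in lexicon:
                continue
            for entry in lexicon:
                ev = vectors[entry]
                sim = float(np.dot(vec, ev) /
                            (np.linalg.norm(vec) * np.linalg.norm(ev)))
                if sim > threshold:
                    added[word] = max(sim, added.get(word, 0.0))
        if not added:  # fixed point reached: nothing new was close enough
            break
        lexicon.update(added)
    return lexicon
```

In the paper the vectors would come from the word2vec model trained on the review corpus; here any word-to-vector mapping works.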
Attentional neural networks have achieved remarkable results for a number of tasks in the past few years. The fascinating success of neural networks with an attention mechanism in natural language processing, especially in machine translation, suggests that these models can capture the meaning of ambiguous words from their context. In this paper we introduce a new method for constructing vectors of ambiguous word occurrences for word sense induction, based on the recently introduced Transformer model, which achieved state-of-the-art results for machine translation. Similarly to the CBOW model for constructing word embeddings, we train the Transformer to predict a word from its context and use its trained parameters for word sense induction. On some datasets the proposed method outperforms the simple but hard-to-beat baseline, which was among the best three methods in the recent shared task on word sense induction for the Russian language, RUSSE-WSI2018. On one of the datasets our method beats the top result from the competition. Furthermore, we explore how different methods of weighting word embeddings affect the performance in word sense induction. Together with weighted sums of word2vec vectors, we explore the performance of vectors from the Transformer's hidden layers and introduce a combined approach that improves on previous results.
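The weighted-sum representation mentioned above can be sketched as follows: each occurrence of an ambiguous word is encoded as a weighted average of its context-word embeddings, and the resulting occurrence vectors are then clustered to induce senses. The uniform default weights and toy embeddings are assumptions for illustration; the paper compares several weighting schemes and Transformer hidden states.

```python
import numpy as np

def occurrence_vector(embeddings, context, weights=None):
    """Represent one occurrence of an ambiguous word as a weighted
    average of its context-word embeddings; out-of-vocabulary context
    words are skipped, and weights default to uniform."""
    words = [w for w in context if w in embeddings]
    if not words:
        return None
    if weights is None:
        weights = {w: 1.0 for w in words}
    total = sum(weights.get(w, 1.0) for w in words)
    vec = sum(weights.get(w, 1.0) * np.asarray(embeddings[w]) for w in words)
    return vec / total
```

Any standard clustering algorithm (e.g. k-means over these vectors) can then group occurrences into induced senses.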