Total publications in this section: 737
Book
Ponomareva M. A., Droganova K. A., Smurov I. et al. Iss. 7. Florence: Association for Computational Linguistics, 2019.

This paper provides a comprehensive overview of the gapping dataset for Russian, which consists of 7.5k sentences with gapping (as well as 15k relevant negative sentences) and comprises data from various genres: news, fiction, social media and technical texts. The dataset was prepared for the Automatic Gapping Resolution Shared Task for Russian (AGRR-2019), a competition aimed at stimulating the development of NLP tools and methods for processing ellipsis. In this paper, we pay special attention to the gapping resolution methods introduced within the shared task, as well as to an alternative test set which illustrates that our corpus is a diverse and representative sample of Russian gapping, sufficient for the effective application of machine learning techniques.
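
As a rough illustration of the binary detection side of the task (deciding whether a sentence contains gapping at all), here is a minimal scoring sketch. It is not the shared-task evaluation code; the toy label lists are invented, and in practice the 0/1 labels would be read from the corpus files.

```python
# Minimal sketch (not the shared-task code): scoring binary "gapping present?"
# predictions against gold labels with precision, recall and F1.
# The toy label lists below are invented for illustration only.

def binary_scores(gold, pred):
    """Precision, recall and F1 for the positive (gapping) class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 1 = sentence contains gapping, 0 = negative sentence.
gold = [1, 0, 1, 1, 0, 0]
pred = [1, 0, 0, 1, 0, 1]
print("P=%.3f R=%.3f F1=%.3f" % binary_scores(gold, pred))
```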

Added: Sep 5, 2019
Book
Edited by: A. Elizarov, N. V. Loukachevitch. Vol. 2780. CEUR-WS, 2020.
Added: Jun 16, 2021
Book
Edited by: D. I. Ignatov. Aachen: CEUR Workshop Proceedings, 2019.

The workshop concentrates on an interdisciplinary approach to modelling human behavior that incorporates data mining and expert knowledge from the behavioral sciences. Data analysis results extracted from the clean data of laboratory experiments will be compared with noisy industrial datasets, e.g. from the web. Insights from the behavioral sciences will help data scientists, and behavioral scientists will find new research inspiration in industrial data science. Market leaders in Big Data, such as Microsoft, Facebook, and Google, have already realized the importance of experimental economics know-how for their business.

In Experimental Economics, although financial rewards restrict subjects' preferences in experiments, the exclusive application of analytical game theory is not enough to explain the collected data. This calls for the development and evaluation of more sophisticated models. The more data is used for evaluation, the more statistical significance can be achieved. Since large amounts of behavioral data are required to scan for regularities, along with automated agents needed to simulate and intervene in human interactions, Machine Learning is the tool of choice for research in Experimental Economics. This workshop is aimed at bringing together researchers from both Data Analysis and Economics in order to achieve mutually beneficial results.

Added: Nov 19, 2019
Book
Edited by: J. Baixeries, S. Boytcheva, O. Pianykh et al. Iss. 6. EasyChair, 2018.

This volume contains proceedings of the first Workshop on Data Analysis in Medicine held in May 2017 at the National Research University Higher School of Economics, Moscow. The volume contains one invited paper by Dr. Svetla Boytcheva, 6 regular contributions and 2 project proposals, carefully selected and reviewed by at least two reviewers from the international program committee. The papers accepted for publication report on different aspects of analysis of medical data, among them treatment of data on particular diseases (Consolidated mathematical growth model of Breast Cancer CoMBreC, Artificial neural networks for prediction of final height in children with growth hormone deficiency), methods of data analysis (analysis of rare diseases, methods of machine learning and Big Data, subgroup discovery for treatment optimization), and instrumental tools (explanation-oriented methods of data analysis in medicine, information support features of the medical research process, modeling framework for medical data semantic transformations, radiology quality management and peer-review system). The organizers of the workshop would like to thank the reviewers for their careful work and all contributors and participants of the workshop.

Added: Jun 8, 2018
Book
Edited by: R. Tagiew, D. I. Ignatov, A. Hilbert et al. Vol. 1968. Aachen: CEUR Workshop Proceedings, 2017.

The workshop concentrates on an interdisciplinary approach to modeling human behavior that incorporates data mining and/or expert knowledge from the behavioral sciences. Data analysis results extracted from the clean data of laboratory experiments can be compared with noisy industrial datasets, e.g. from the web. Insights from the behavioral sciences will help data scientists, and behavioral scientists will find new research inspiration in industrial data science. Market leaders in Big Data, such as Microsoft, Facebook, and Google, have already realized the importance of experimental economics know-how for their business. In Experimental Economics, although financial rewards restrict subjects' preferences in experiments, the exclusive application of analytical game theory is not enough to explain the collected data. This calls for the development and evaluation of more sophisticated models. The more data is used for evaluation, the more statistical significance can be achieved. Since large amounts of behavioral data are required to scan for regularities, along with automated agents needed to simulate and intervene in human interactions, Machine Learning is the tool of choice for research in Experimental Economics. This workshop is aimed at bringing together researchers from both Data Analysis and Economics in order to achieve mutually beneficial results.


Added: Oct 10, 2017
Book
Vol. 29. Iss. 4. M.: 2017.

<TBD>

Added: Aug 28, 2017
Book
Vol. 28. Iss. 3. M.: 2016.

Proceedings of ISP RAS is a double-blind peer-reviewed journal publishing scientific articles in the areas of system programming, software engineering, and computer science. The journal's goal is to develop a respected network of knowledge in the above-mentioned areas by publishing high-quality articles in open access. The journal is intended for researchers, students, and practitioners.

Added: Sep 14, 2016
Book
M.: 2016.

The four preceding editions of the FCA4AI Workshop showed that many researchers working in Artificial Intelligence are deeply interested in a well-founded method for classification and mining such as Formal Concept Analysis (see http://www.fca4ai.hse.ru/). The first edition of FCA4AI was co-located with ECAI 2012 in Montpellier, the second one with IJCAI 2013 in Beijing, the third one with ECAI 2014 in Prague, and finally the fourth and most recent one with IJCAI 2015 in Buenos Aires. In addition, the proceedings of all these preceding editions have been published as CEUR Proceedings (http://ceur-ws.org/Vol-939/, http://ceur-ws.org/Vol-1058/, http://ceur-ws.org/Vol-1257/ and http://ceur-ws.org/Vol-1430/).

This year, the fifth workshop has again attracted many different researchers working on current and important topics, e.g. theory, fuzzy FCA, dependencies, classification, mining of linked data, navigation, visualization, and various applications. This shows the diversity and the richness of the relations between FCA and AI.

Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at data analysis and classification. FCA allows one to build a concept lattice and a system of dependencies (implications) which can be used for many AI needs, e.g. knowledge discovery, learning, knowledge representation, reasoning, ontology engineering, as well as information retrieval and text processing. As we can see, there are many “natural links” between FCA and AI. Recent years have been witnessing increased scientific activity around FCA; in particular, a strand of work has emerged that is aimed at extending the possibilities of FCA w.r.t. knowledge processing, such as work on pattern structures and relational context analysis. These extensions are aimed at allowing FCA to deal with data more complex than just binary data, both from the data analysis and knowledge discovery points of view and from the knowledge representation point of view, including, e.g., ontology engineering. All these investigations provide new possibilities for AI activities in the framework of FCA. Accordingly, in this workshop, we are interested in two main issues:

How can FCA support AI activities such as knowledge processing (knowledge discov- ery, knowledge representation and reasoning), learning (clustering, pattern and data mining), natural language processing, and information retrieval.

How can FCA be extended in order to help AI researchers to solve new and complex problems in their domains.

The workshop is dedicated to discussing such issues. This year, the papers submitted to the workshop were carefully peer-reviewed by three members of the program committee, and the 14 papers with the highest scores were selected. We thank all the PC members for their reviews and all the authors for their contributions.
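
To make the concept-lattice machinery mentioned in this preface concrete, the following minimal sketch (not taken from the proceedings; the toy objects, attributes and incidence table are invented for illustration) enumerates all formal concepts of a small binary context by closing subsets of objects:

```python
# Minimal illustration of FCA: enumerate the formal concepts of a tiny binary
# context (objects x attributes) by brute-force closure of object subsets.
from itertools import combinations

objects = ["doc1", "doc2", "doc3"]
attributes = ["math", "logic", "data"]
incidence = {
    ("doc1", "math"): True, ("doc1", "logic"): True,  ("doc1", "data"): False,
    ("doc2", "math"): True, ("doc2", "logic"): False, ("doc2", "data"): True,
    ("doc3", "math"): True, ("doc3", "logic"): True,  ("doc3", "data"): True,
}

def intent(objs):
    """Attributes shared by all objects in objs."""
    return {a for a in attributes if all(incidence[(o, a)] for o in objs)}

def extent(attrs):
    """Objects possessing all attributes in attrs."""
    return {o for o in objects if all(incidence[(o, a)] for a in attrs)}

# A formal concept is a pair (A, B) with extent(B) == A and intent(A) == B.
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        b = intent(set(objs))      # close the object subset to its intent
        a = extent(b)              # ... and back to the corresponding extent
        concepts.add((frozenset(a), frozenset(b)))

for a, b in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(a), sorted(b))
```

Each printed pair is a formal concept, i.e. a maximal set of objects together with the set of attributes they all share; ordering the concepts by inclusion of their extents yields the concept lattice.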


Added: Oct 6, 2016
Book
Iss. 1058. Beijing: CEUR Workshop Proceedings, 2013.

This is the second edition of the FCA4AI workshop, the first edition being associated with the ECAI 2012 Conference, held in Montpellier in August 2012 (see http://www.fca4ai.hse.ru/). In particular, the first edition of the workshop showed that there are many AI researchers interested in FCA. Based on that, the three co-editors decided to organize a second edition of the FCA4AI workshop at the IJCAI 2013 Conference in Beijing.

Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at data analysis and classification. FCA allows one to build a concept lattice and a system of dependencies (implications) which can be used for many AI needs, e.g. knowledge processing involving learning, knowledge discovery, knowledge representation and reasoning, ontology engineering, as well as information retrieval and text processing. Thus, there exist many “natural links” between FCA and AI.

Recent years have been witnessing increased scientific activity around FCA; in particular, a strand of work has emerged that is aimed at extending the possibilities of FCA w.r.t. knowledge processing, such as work on pattern structures and relational context analysis. These extensions are aimed at allowing FCA to deal with data more complex than just binary data, both from the data analysis and knowledge discovery points of view and from the knowledge representation point of view, including, e.g., ontology engineering. All these works extend the capabilities of FCA and offer new possibilities for AI activities in the framework of FCA. Accordingly, in this workshop, we are interested in two main issues:

- How can FCA support AI activities such as knowledge processing (knowledge discovery, knowledge representation and reasoning), learning (clustering, pattern and data mining), natural language processing, and information retrieval.

- How can FCA be extended in order to help AI researchers to solve new and complex problems in their domains.

The workshop is dedicated to discussing such issues.

The papers submitted to the workshop were carefully peer-reviewed by two members of the program committee and 11 papers with the highest scores were selected. We thank all the PC members for their reviews and all the authors for their contributions. We also thank the organizing committee of ECAI-2012 and especially workshop chairs Jerome Lang and Michele Sebag for the support of the workshop.

Added: Oct 26, 2014
Book
M.: Higher School of Economics Publishing House, 2018.
Added: Jan 23, 2019
Book
Edited by: V. Mkhitarian, S. Sidorov. Iss. 85: Advances in Computer Science Research. Atlantis Press, 2019.

The Third Workshop on Computer Modelling in Decision Making (CMDM 2018) was held at Saratov State University (Saratov, Russia) within the VII International Youth Research and Practice Conference ‘Mathematical and Computer Modelling in Economics, Insurance and Risk Management’. The workshop's main topic is computer and mathematical modelling for decision making in finance, insurance, banking, economic forecasting, investment and financial analysis. Researchers, postgraduate students and academics, as well as finance, banking, insurance and government professionals, participated in the workshop.

Added: Oct 28, 2019
Book
Edited by: J. Impagliazzo, V. Shilov. Los Alamitos: IEEE Computer Society, 2014.

The conference successfully achieved its objectives of uniting scientists from different scientific schools and directions, preserving the historical and scientific heritage of informatics, and representing it in various ways. Apart from running the conference, the SoRuCom community develops the website of the Virtual Computer Museum, publishes books and articles, and holds local conferences and seminars. This volume contains the best papers selected from the many reports presented at the conference.

Added: Mar 13, 2016
Book
Kuznetsov S., Napoli A., Rudolph S. M.: CEUR Workshop Proceedings, 2012.


Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at data analysis and classification. FCA allows one to build a concept lattice and a system of dependencies (implications) which can be used for many AI needs, e.g. knowledge processing involving learning, knowledge discovery, knowledge representation and reasoning, ontology engineering, as well as information retrieval and text processing. Thus, there exist many “natural links” between FCA and AI.

Recent years have been witnessing increased scientific activity around FCA; in particular, a strand of work has emerged that is aimed at extending the possibilities of FCA w.r.t. knowledge processing, such as work on pattern structures and relational context analysis. These extensions are aimed at allowing FCA to deal with data more complex than just binary data, both from the data analysis and knowledge discovery points of view and from the knowledge representation point of view, including, e.g., ontology engineering. All these works extend the capabilities of FCA and offer new possibilities for AI activities in the framework of FCA. Accordingly, in this workshop, we are interested in two main issues:

- How can FCA support AI activities such as knowledge processing (knowledge discovery, knowledge representation and reasoning), learning (clustering, pattern and data mining), natural language processing, and information retrieval.

- How can FCA be extended in order to help AI researchers to solve new and complex problems in their domains.

The workshop is dedicated to discussing such issues. The papers submitted to the workshop were carefully peer-reviewed by two members of the program committee and the 11 papers with the highest scores were selected. We thank all the PC members for their reviews and all the authors for their contributions. We also thank the organizing committee of ECAI-2012 and especially workshop chairs Jérôme Lang and Michele Sebag for the support of the workshop.

Added: Jan 30, 2013
Book
Tulchinsky G. L. St. Petersburg: Lan, 2011.
Added: Oct 5, 2012
Book
Edited by: O. Lyashevskaya, M. Kopotev, A. Mustajoki. Abingdon: Routledge, 2018.

This edited collection presents a range of methods that can be used to analyse linguistic data quantitatively. A series of case studies of Russian data, spanning different aspects of modern linguistics, serves as the basis for a discussion of methodological and theoretical issues in linguistic data analysis. The book presents current trends in quantitative linguistics, evaluates methods, and presents the advantages and disadvantages of each. The chapters contain introductions to the methods and relevant references for further reading.

The Russian language, despite being one of the most studied in the world, until recently has been little explored quantitatively. After a burst of research activity in the years 1960-1980, quantitative studies of Russian vanished. They are now reappearing in an entirely different context. Today we have large and deeply annotated corpora available for extended quantitative research, such as the Russian National Corpus, ruWac, RuTenTen, to name just a few (websites for these and other resources will be found in a special section in the References). The present volume is intended to fill the lacuna between the available data and the methods that can be applied to studying them.

Our goal is to present current trends in researching Russian quantitative linguistics, to evaluate the research methods vis-à-vis Russian data, and to show both the advantages and the disadvantages of the methods. We especially encouraged our authors to focus on evaluating statistical methods and new models of analysis. New findings concern applicability, evaluation, and the challenges that arise from using quantitative approaches to Russian data.

Added: Oct 11, 2016
Book
Xanthopoulos P., Pardalos P. M., Trafalis T. B. NY: Springer, 2013.

Summarizes the latest applications of robust optimization in data mining.

An essential accompaniment for theoreticians and data miners. Data uncertainty is a concept closely related to most real-life applications that involve data collection and interpretation. Examples can be found in data acquired with biomedical instruments or other experimental techniques. Integrating robust optimization into existing data mining techniques aims to create new algorithms resilient to error and noise.

Added: Dec 19, 2012
Book
Edited by: S. Kuznetsov, D. Slezak, D. H. Hepting et al. Vol. 6743. Berlin; Heidelberg: Springer, 2011.

This volume contains papers presented at the 13th International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing (RSFDGrC), held during June 25–27, 2011, at the National Research University Higher School of Economics (NRU HSE) in Moscow, Russia. RSFDGrC is a series of scientific events spanning the last 15 years. It investigates the meeting points among the four major disciplines outlined in its title, with respect to both foundations and applications. In 2011, RSFDGrC was co-organized with the 4th International Conference on Pattern Recognition and Machine Intelligence (PReMI), providing a great opportunity for multi-faceted interaction between scientists and practitioners. There were 83 paper submissions from over 20 countries. Each submission was reviewed by at least three Chairs or PC members. We accepted 34 regular papers (41%). In order to stimulate the exchange of research ideas, we also accepted 15 short papers. All 49 papers are distributed among 10 thematic sections of this volume. The conference program featured five invited talks given by Jiawei Han, Vladik Kreinovich, Guoyin Wang, Radim Belohlavek, and C.A. Murthy, as well as two tutorials given by Marcin Szczuka and Richard Jensen. Their corresponding papers and abstracts are gathered in the first two sections of this volume.

Added: Aug 31, 2012
Book
Edited by: M. Ojeda-Aciego, D. I. Ignatov, A. Lepskiy. Vol. 1687. CEUR Workshop Proceedings, 2016.

This volume contains the papers presented at the Second International Workshop on Soft Computing Applications and Knowledge Discovery (SCAKD 2016), held on July 18, 2016 at the National Research University Higher School of Economics, Moscow, Russia. Soft computing is a collection of methodologies that aim to exploit tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness and low solution cost in real-life tasks. The workshop aims to present high-quality scientific results and promising research in the area of soft computing and data mining, particularly by young researchers, with the objective of bringing their work into focus while promoting collaborative research activities. By holding the workshop in conjunction with CLA 2016, we hope to provide the participants with exposure and interaction with eminent scientists, engineers, and researchers in the related fields. Each submission has been reviewed by at least two Program Committee members. Six regular papers have been accepted for publication, as well as four research proposals. The program also includes an invited industry talk by representatives of the ExactPro company on "Using intelligent systems and structural analysis to ensure orderly operations of the modern trading and exchange platforms". We would like to thank all the authors of submitted papers and the Program Committee members for their commitment. We are grateful to our invited speaker and our sponsors: National Research University Higher School of Economics (Moscow, Russia), the Russian Foundation for Basic Research, and ExactPro. Finally, we would like to acknowledge the EasyChair system, which helped us to manage the reviewing process.

Added: Sep 28, 2016
Book
Savchenko A. Switzerland: Springer, 2016.

A unified methodology for categorizing various complex objects is presented in this book. Through probability theory, novel asymptotically minimax criteria suitable for practical applications in imaging and data analysis are examined, including special cases such as the Jensen-Shannon divergence and the probabilistic neural network. An optimal approximate nearest-neighbor search algorithm, which allows faster classification of databases, is featured. Rough set theory, sequential analysis and granular computing are used to improve the performance of the hierarchical classifiers. Practical examples in face identification (including deep neural networks), isolated command recognition in a voice control system, and classification of visemes captured by the Kinect depth camera are included. This approach creates fast and accurate search procedures by using exact probability densities of applied dissimilarity measures.
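
As a rough illustration of the kind of dissimilarity measure mentioned above, the sketch below (not the book's implementation; the reference histograms, labels and helper names are invented) uses the Jensen-Shannon divergence to pick the nearest reference distribution for a query histogram:

```python
# Minimal sketch: Jensen-Shannon divergence as a dissimilarity measure for
# nearest-neighbor classification of feature histograms. Toy data only.
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def js_divergence(p, q):
    """Jensen-Shannon divergence: a symmetric, bounded smoothing of KL."""
    p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
    m = 0.5 * (p / p.sum() + q / q.sum())
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def nearest_neighbor(query, references):
    """Label of the reference histogram closest to `query` under JS divergence."""
    return min(references, key=lambda item: js_divergence(query, item[1]))[0]

# Toy reference "models": normalized feature histograms with class labels.
references = [
    ("class_a", np.array([0.6, 0.3, 0.1])),
    ("class_b", np.array([0.1, 0.2, 0.7])),
]
query = np.array([0.55, 0.35, 0.10])
print(nearest_neighbor(query, references))  # expected: class_a
```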

This book can be used as a guide for independent study and as supplementary material for a technically oriented graduate course in intelligent systems and data mining. Students and researchers interested in the theoretical and practical aspects of intelligent classification systems will find answers to:

- Why does the conventional implementation of the naive Bayesian approach not work well in image classification?

- How to deal with insufficient performance of hierarchical classification systems?


- Is it possible to prevent an exhaustive search of the nearest neighbor in a database?

Added: Apr 12, 2016
Book
Edited by: M. Akhin, V. M. Itsykson, B. Novikov et al. St. Petersburg: OOO "Tsifrovaya Fabrika 'Bystryi Tsvet'", 2017.

The Second Conference on Software Engineering and Information Management (SEIM-2017) aims to bring together students, researchers and practitioners in different areas of software engineering and information management. We consider SEIM-2017 to be a stepping stone for young researchers, which should help them familiarize themselves with the conference workflow, practice writing academic papers, gather valuable feedback about their research and expand their research network. The conference welcomes submissions on a wide range of topics, including but not limited to:

• Algorithms and data structures • Cloud systems • Coding theory • Compilers • Crowdsourcing • Data storage and processing • Development management • Digital signal processing • Distributed systems • E-commerce / e-government • Empirical software engineering • High-performance computing • Information retrieval • Information security • Intelligent data analysis • Internet of Things • Machine learning • Mobile systems • Modelling • Natural language processing • Networks and telecommunications • (Non-)relational databases • Operating systems • Programming languages • Recommendation systems • Robotics • Semantic web • Social networks • Software analysis • Software testing • Software verification • Software virtualization • Software-defined networks • Theoretical computer science

In total, we received 35 papers, each reviewed by at least 3 members of the Program Committee, of which 8 were selected for publication in CEUR-WS.org, 8 for indexing in RSCI, and 4 were accepted as talk-only to allow the young authors to experience the process of a scientific conference. We would like to thank the members of our Program Committee for their great work and contribution to the success of our conference!

These proceedings include the SEIM-2017 papers which were selected by the Program Committee for publication in RSCI. These papers passed not only the original review procedure but also an additional round of post-review incorporating the conference feedback. We thank the authors for their submissions to SEIM 2017 and hope to see them in the future! Furthermore, we would also like to thank Tatiana Mironova and Sergey Zherevchuk for their great help in organizing the conference, Computer Science Center for hosting the event, and JetBrains Research for their overall support of this endeavour! Additional information about the SEIM conference series can be found on the conference website at: http://2017.seim-conf.org/

Added: Nov 9, 2018
Book
Zykov S. V., Gromoff A., Kazantsev N. Hershey: IGI Global, 2019.

Sustaining a competitive edge in today’s business world requires innovative approaches to product, service, and management systems design and performance. Advances in computing technologies have presented managers with additional challenges as well as further opportunities to enhance their business models.

Software Engineering for Enterprise System Agility: Emerging Research and Opportunities is a collection of innovative research that identifies the critical technological and management factors in ensuring the agility of business systems and investigates process improvement and optimization through software development. Featuring coverage on a broad range of topics such as business architecture, cloud computing, and agility patterns, this publication is ideally designed for business managers, business professionals, software developers, academicians, researchers, and upper-level students interested in current research on strategies for improving the flexibility and agility of businesses and their systems.

Added: Mar 21, 2018