CLA 2016: Proceedings of the Thirteenth International Conference on Concept Lattices and Their Applications. CEUR Workshop Proceedings
The 13th International Conference on Concept Lattices and Their Applications (CLA 2016) was held at the National Research University Higher School of Economics, Moscow, Russia, from July 18 to July 22, 2016. The CLA conference, organized since 2002, aims to offer everyone interested in Formal Concept Analysis, and more generally in concept lattices or Galois lattices, an advanced view of some of the latest research trends and applications in this field. It also aims to bring together students, professors, researchers, and engineers involved in all aspects of the study of concept lattices, from theory to implementations and practical applications. As the diversity of the selected papers shows, there is a wide range of research directions around data and knowledge processing, including data mining, knowledge discovery, knowledge representation, reasoning, and pattern recognition, together with logic, algebra, and lattice theory.

The program of the conference includes four keynote talks given by the following distinguished researchers: Lev D. Beklemishev (Mathematical Institute of the Russian Academy of Sciences, Moscow), Jérôme Euzenat (Inria Grenoble Rhône-Alpes), Bernhard Ganter (TU Dresden), and Boris G. Mirkin (National Research University Higher School of Economics, Moscow). This volume includes the selected papers and the abstracts of the invited talks. This year, 46 papers were submitted, of which 28 were accepted as regular papers.

We would like to thank the contributing authors for their valuable work, as well as the members of the program committee and the external reviewers, who analyzed the papers with care. All of them contributed to the continuing quality and importance of CLA, highlighting its key role in the field. We would also like to thank the steering committee of CLA for giving us the opportunity to lead this edition of the conference, the participants for their support, and the people in charge of the organization, especially Larisa I. Antropova, Ekaterina L. Chernyak, Dmitry I. Ignatov, and Olga V. Maksimenkova, whose help was invaluable on many occasions and who contributed to the success of the event. We would like to thank our sponsors: the National Research University Higher School of Economics, the ExactPro company, and the Russian Foundation for Basic Research. Finally, the conference was managed (quite easily) with the EasyChair system for many tasks, including paper submission, reviewing, and selection.
Nowadays, decision tree learning is one of the most popular classification and regression techniques. Though decision trees are not very accurate on their own, they make very good base learners for advanced tree-based methods such as random forests and gradient boosted trees. However, using ensembles of trees deteriorates the interpretability of the final model. Another issue is that decision tree learning can be seen as a greedy search for a good classification hypothesis in terms of some information-based criterion such as Gini impurity or information gain; for small data sets, however, a global search may be feasible. In this paper, we propose an FCA-based lazy classification technique in which each test instance is classified with a set of the best rules (in terms of some information-based criterion). In a set of benchmark experiments, the proposed strategy is compared with decision tree and nearest-neighbor learning.
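The information-based criterion mentioned in the abstract can be illustrated with a small sketch. The function names and the impurity-reduction scoring below are illustrative assumptions, not the paper's actual implementation:

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity of a label sample: 1 - sum over classes of p_c^2."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def rule_score(covered, all_labels):
    """Impurity reduction achieved by a rule covering the sublist `covered`
    of `all_labels`; a lazy classifier could rank candidate rules by this."""
    rest = list(all_labels)
    for y in covered:
        rest.remove(y)  # labels of objects the rule does not cover
    n, k = len(all_labels), len(covered)
    split = (k / n) * gini_impurity(covered) + ((n - k) / n) * gini_impurity(rest)
    return gini_impurity(all_labels) - split
```

A rule whose covered objects all share one class maximizes this score, matching the intuition that the "best" rules separate a class cleanly.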
Pattern structures are known to provide a tool for predictive modeling and classification. However, in order to generate classification rules, a concept lattice must be built, a procedure that may take considerable time and resources. Previous work showed that this problem can be avoided with the so-called lazy associative classification algorithm, which does not require lattice construction and is applicable to classification problems such as credit scoring. In this paper, we adapt this method to the case of a continuous target variable, i.e., the regression problem, and apply it to recovery rate forecasting. We perform parameter tuning, assess the accuracy of the algorithm on bank data, and compare it to the models adopted in the bank's system and to other benchmarks.
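A minimal sketch of how such a lazy scheme might extend to a numeric target, assuming interval patterns over numeric features; the function names and the averaging scheme are illustrative assumptions, not the method evaluated in the paper:

```python
def interval_pattern(x, z):
    """Meet of two numeric descriptions in an interval pattern structure:
    the componentwise interval [min, max]."""
    return [(min(a, b), max(a, b)) for a, b in zip(x, z)]

def covers(pattern, x):
    """True if vector x lies inside every interval of `pattern`."""
    return all(lo <= v <= hi for (lo, hi), v in zip(pattern, x))

def lazy_predict(test_x, train_X, train_y):
    """Lazy regression sketch: for each training object z, build the interval
    pattern of (test_x, z), average the targets of all training objects that
    the pattern covers, then average these per-pattern means."""
    means = []
    for z in train_X:
        p = interval_pattern(test_x, z)
        covered = [y for x, y in zip(train_X, train_y) if covers(p, x)]
        means.append(sum(covered) / len(covered))  # non-empty: z covers itself
    return sum(means) / len(means)
```

No lattice is built: each prediction touches only the patterns generated between the test object and individual training objects, which is what makes the approach "lazy".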
Approximate cluster structures are formal concepts and n-concepts with added numerical intensity weights. The talk presents theoretical results and computational methods for approximate clustering and n-clustering as extensions of the algebraic-geometrical properties of numerical matrices (SVD and the like) to situations in which one or more elements of the solutions to be found are expressed by binary vectors. The theory embraces such methods as k-means, consensus clustering, network clustering, biclustering, and triclustering, and provides natural data-analysis criteria, effective algorithms, and interpretation tools.