Relations between Proto-Fuzzy Concepts, Crisply Generated Fuzzy Concepts and Interval Pattern Structures
Relationships between proto-fuzzy concepts, crisply generated fuzzy concepts, and pattern structures are considered. It is shown that proto-fuzzy concepts are closely related to crisply generated fuzzy concepts in the sense that the mappings involved in the definitions coincide for crisp subsets of attributes. Moreover, a proto-fuzzy concept determines a crisp subset of attributes, which generates a (crisply generated) fuzzy concept. However, the reverse is true only in part: given a crisp subset of attributes, one can find a proto-fuzzy concept whose intent includes (but not necessarily coincides with) the given subset of attributes. Interval pattern concepts are shown to be related to crisply generated formal concepts. In particular, every crisply closed subset of objects is an extent of an interval pattern concept. Also, we establish some properties of the collection of formal concepts for a given fuzzy context.
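As a notational reminder (the symbols below follow standard fuzzy FCA conventions and are assumed here, not taken from the abstract), the fuzzy derivation operators over a complete residuated lattice L and the notion of a crisply generated fuzzy concept can be stated as follows.

```latex
% Standard fuzzy derivation operators for a fuzzy context <X, Y, I> with
% I : X x Y -> L over a complete residuated lattice L (notation assumed here,
% not taken from the abstract).
\[
  A^{\uparrow}(y) = \bigwedge_{x \in X} \bigl( A(x) \rightarrow I(x,y) \bigr),
  \qquad
  B^{\downarrow}(x) = \bigwedge_{y \in Y} \bigl( B(y) \rightarrow I(x,y) \bigr).
\]
% A fuzzy concept <A, B> (with A^{\uparrow} = B and B^{\downarrow} = A) is crisply
% generated if A = D^{\downarrow} for some crisp subset D of the attributes Y;
% its intent is then B = D^{\downarrow\uparrow}.
```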
In this paper, we consider computational problems related to finding implications in an explicitly given formal context or via queries to an oracle. We are concerned with two types of problems: enumerating implications (or association rules) and finding a single implication satisfying certain conditions. We present complexity results for some of these problems and leave others open. The paper is not meant as a comprehensive survey, but rather as a subjective selection of interesting problems.
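As a concrete reminder of the central notion, the sketch below checks whether an attribute implication A → B holds in an explicitly given formal context, using the fact that A → B holds iff B ⊆ A''. The encoding of the context as a dict from objects to attribute sets is an assumption made only for this illustration.

```python
# Minimal sketch: checking an attribute implication in a formal context.
# The context is a dict mapping each object to its set of attributes
# (an encoding assumed only for this illustration).

def context_attributes(context):
    return set().union(*context.values()) if context else set()

def intent(context, objects):
    """Attributes shared by all given objects (A' for a set of objects)."""
    attrs = [context[g] for g in objects]
    return set.intersection(*attrs) if attrs else context_attributes(context)

def extent(context, attributes):
    """Objects having all given attributes (B' for a set of attributes)."""
    return {g for g, attrs in context.items() if attributes <= attrs}

def implication_holds(context, premise, conclusion):
    """A -> B holds iff B is contained in the closure A''."""
    closure = intent(context, extent(context, set(premise)))
    return set(conclusion) <= closure

# Example usage on a toy context.
ctx = {
    "g1": {"a", "b"},
    "g2": {"a", "b", "c"},
    "g3": {"b", "c"},
}
print(implication_holds(ctx, {"a"}, {"b"}))  # True: every object with a also has b
print(implication_holds(ctx, {"b"}, {"c"}))  # False: g1 has b but not c
```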
There are many measures of pattern usefulness in data mining. This paper focuses on the measures used in Formal Concept Analysis (FCA); in particular, concept stability is a popular relevancy measure in FCA. Experimental results of this paper show that high stability of a pattern in a dataset derived from the general population suggests that the stability of that pattern is also high in another dataset derived from the same population. In the second part of the paper, a new estimate of stability is introduced and studied. Its performance is evaluated experimentally and it is shown to be more efficient.
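For orientation, the (intensional) stability of a concept (A, B) is usually defined as the fraction of subsets of the extent A whose derivation still yields the intent B. The brute-force sketch below only illustrates this textbook definition (it is exponential in |A|); it is not the estimate proposed in the paper.

```python
# Brute-force illustration of intensional concept stability:
# stability(A, B) = |{ C subset of A : C' = B }| / 2^|A|.
# This is only the textbook definition, not the paper's new estimate.
from itertools import chain, combinations

def intent(context, objects, all_attributes):
    """Attributes shared by all objects in the set (the derivation C')."""
    result = set(all_attributes)
    for g in objects:
        result &= context[g]
    return result

def stability(context, extent_A, intent_B):
    all_attributes = set().union(*context.values())
    objects = list(extent_A)
    subsets = chain.from_iterable(
        combinations(objects, r) for r in range(len(objects) + 1)
    )
    hits = sum(1 for C in subsets if intent(context, C, all_attributes) == set(intent_B))
    return hits / 2 ** len(objects)

ctx = {"g1": {"a", "b"}, "g2": {"a", "b"}, "g3": {"a", "c"}}
# (A, B) = ({g1, g2}, {a, b}) is a formal concept of ctx.
print(stability(ctx, {"g1", "g2"}, {"a", "b"}))  # 0.75: {g1}, {g2}, {g1,g2} yield {a,b}
```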
Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at data analysis and classification, introduced and detailed in the book of Bernhard Ganter and Rudolf Wille, "Formal Concept Analysis", Springer 1999. The area came into being in the early 1980s and has since then spawned over 10000 scientific publications and a variety of practically deployed tools. FCA allows one to build, from a data table with objects in rows and attributes in columns, a taxonomic data structure called a concept lattice, which can be used for many purposes, especially for Knowledge Discovery and Information Retrieval. The "Formal Concept Analysis Meets Information Retrieval" (FCAIR) workshop, co-located with the 35th European Conference on Information Retrieval (ECIR 2013), was intended, on the one hand, to attract researchers from the FCA community to a broad discussion of FCA-based research on information retrieval and, on the other hand, to promote ideas, models, and methods of FCA in the Information Retrieval community. This volume contains 11 contributions to the FCAIR workshop (including 3 abstracts for invited talks and a tutorial) held in Moscow on March 24, 2013. All submissions were assessed by at least two reviewers from the program committee of the workshop, to which we express our gratitude. We would also like to thank the co-organizers and sponsors of the FCAIR workshop: the Russian Foundation for Basic Research, the National Research University Higher School of Economics, and Yandex.
We propose an approach for approximately completing a TBox w.r.t. a fixed model. By asking implication questions to a domain expert, our method approximates the subsumption relationships that hold in the expert's model and enriches the TBox with the newly discovered relationships between a given set of concept names. Our approach is based on Angluin's exact learning framework and on the attribute exploration method from Formal Concept Analysis. It brings together the best of both approaches to ask only polynomially many questions of the domain expert.
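The interaction pattern shared by attribute exploration and Angluin-style exact learning can be pictured as a simple question/counterexample loop. The sketch below is only a schematic skeleton with a hypothetical expert interface (is_valid / counterexample) and a given candidate stream; it is not the paper's actual procedure or its candidate-generation strategy.

```python
# Schematic question/counterexample loop in the spirit of attribute exploration
# and exact learning. The Expert interface and the candidate generator are
# hypothetical placeholders, not the procedure from the paper.

class Expert:
    def is_valid(self, premise, conclusion):
        """Does the implication premise -> conclusion hold in the expert's model?"""
        raise NotImplementedError

    def counterexample(self, premise, conclusion):
        """Return an object (its set of attributes) witnessing that it does not."""
        raise NotImplementedError

def explore(candidates, expert):
    accepted = []   # implications confirmed by the expert
    examples = []   # counterexamples collected so far
    for premise, conclusion in candidates:
        # Skip candidates already refuted by a known counterexample.
        if any(premise <= ex and not conclusion <= ex for ex in examples):
            continue
        if expert.is_valid(premise, conclusion):
            accepted.append((premise, conclusion))
        else:
            examples.append(expert.counterexample(premise, conclusion))
    return accepted, examples
```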
This paper presents a further development of distributed multimodal clustering. We introduce a new version of the multimodal clustering algorithm for distributed processing with Apache Hadoop on computer clusters. Its implementation allows a user to perform clustering on data with modality greater than two. We provide the time and space complexity of the algorithm and justify its relevance. The algorithm is adapted to the MapReduce distributed processing model. The program, implemented by means of the Apache Hadoop framework, is able to perform parallel computing on thousands of nodes.
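As background only, the MapReduce model referred to above can be pictured with a tiny in-memory simulation: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. This is a generic illustration of the processing model, not the paper's clustering algorithm or its Hadoop implementation.

```python
# Tiny in-memory illustration of the MapReduce processing model
# (generic; not the multimodal clustering algorithm from the paper).
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    # Map phase: each record may emit several (key, value) pairs.
    shuffled = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            shuffled[key].append(value)   # shuffle: group values by key
    # Reduce phase: aggregate the values collected for each key.
    return {key: reducer(key, values) for key, values in shuffled.items()}

# Example: counting how often each attribute occurs across objects.
data = [("g1", {"a", "b"}), ("g2", {"a"}), ("g3", {"b", "c"})]
counts = map_reduce(
    data,
    mapper=lambda rec: [(attr, 1) for attr in rec[1]],
    reducer=lambda key, values: sum(values),
)
print(counts)  # e.g. {'a': 2, 'b': 2, 'c': 1}
```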
This book constitutes the proceedings of the 15th International Conference on Formal Concept Analysis, ICFCA 2019, held in Frankfurt am Main, Germany, in June 2019.
The 15 full papers and 5 short papers presented in this volume were carefully reviewed and selected from 36 submissions. The book also contains four invited contributions in full paper length.
The field of Formal Concept Analysis (FCA) originated in the 1980s in Darmstadt as a subfield of mathematical order theory, with prior developments in other research groups. Its original motivation was to consider complete lattices as lattices of concepts, drawing on philosophy and mathematics alike. FCA has since then developed into a wide research area with applications much beyond its original motivation, for example in logic, data mining, learning, and psychology.
We propose an algorithm for learning the Horn envelope of an arbitrary domain using an expert, or an oracle, capable of answering certain types of queries about this domain. Attribute exploration from formal concept analysis is a procedure that solves this problem, but the number of queries it may ask is exponential in the size of the resulting Horn formula in the worst case. We recall a well-known polynomial-time algorithm for learning Horn formulas with membership and equivalence queries and modify it to obtain a polynomial-time probably approximately correct algorithm for learning the Horn envelope of an arbitrary domain.
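To make the term concrete: the Horn envelope of a domain is the strongest Horn theory that holds in it, and its models are exactly the closure of the domain's models under componentwise intersection. The brute-force sketch below only illustrates this semantic characterization; it is not the query-based learning algorithm discussed in the paper.

```python
# Brute-force illustration of the models of the Horn envelope:
# close a given set of models (each a set of true variables) under intersection.
# This only illustrates the semantic characterization; it is not the
# polynomial-time query-based algorithm from the paper.

def horn_envelope_models(models):
    closed = {frozenset(m) for m in models}
    changed = True
    while changed:
        changed = False
        for m1 in list(closed):
            for m2 in list(closed):
                meet = m1 & m2
                if meet not in closed:
                    closed.add(meet)
                    changed = True
    return closed

# Example: {a, b} and {b, c} are models, so their intersection {b} must also
# be a model of the Horn envelope.
result = horn_envelope_models([{"a", "b"}, {"b", "c"}])
print(sorted(sorted(m) for m in result))  # [['a', 'b'], ['b'], ['b', 'c']]
```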
A model for organizing cargo transportation between two node stations connected by a railway line containing a certain number of intermediate stations is considered. Cargo moves in one direction. Such a situation may occur, for example, if one of the node stations is located in a region that produces raw material for a manufacturing industry located in the region of the other node station. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, and the rule of distribution of cargo to the final node station. The process of cargo transportation is governed by a given control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow, which results in the need for a "correct" extension of solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, we were able to construct these quasi-solutions numerically and determine their rate of growth. We note that the main technical difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) of the solutions on a number of model parameters characterizing the control rule, the technologies for cargo transportation, and the intensity of cargo supply at a node station.
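For reference, the classical fourth-order Runge–Kutta step mentioned above has the standard form sketched below; the right-hand side f, step size h, and state are generic placeholders, and the handling of the nonlocal restrictions and of the jumps of the quasi-solutions is specific to the paper and not reproduced here.

```python
# Classical fourth-order Runge-Kutta step for y' = f(t, y).
# Generic integrator sketch; the nonlocal linear restrictions and the jumps
# of the quasi-solutions from the paper are not modelled here.

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Example: y' = y, y(0) = 1, so y(1) should be close to e = 2.71828...
print(integrate(lambda t, y: y, 0.0, 1.0, 0.01, 100))
```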
This proceedings publication is a compilation of selected contributions from the "Third International Conference on the Dynamics of Information Systems", which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own area of study.