Classification of a Sequence of Objects with the Fuzzy Decoding Method
The problem of recognition of a sequence of objects (e.g., video-based image recognition, phoneme recognition) is explored. A generalization of the fuzzy phonetic decoding method is proposed under the assumption that the distribution of the classified object is of exponential type. Its preliminary phase associates each model object with a fuzzy set of model classes whose grades of membership are defined as the confusion probabilities estimated with the Kullback-Leibler divergence between model distributions. First, each object (e.g., a frame) in a classified sequence is put in correspondence with a fuzzy set whose grades are defined as the posterior probabilities. Next, this fuzzy set is intersected with the fuzzy set corresponding to the nearest neighbor. Finally, the arithmetic mean of these fuzzy intersections is assigned to the decision for the whole sequence. In this paper we propose not to restrict the method to the Kullback-Leibler discrimination, but to estimate the grades of membership of models and query objects based on an arbitrary distance with an appropriate scale factor. Experimental results for state-of-the-art similarity measures in the problem of recognizing isolated Russian vowel phonemes and words are presented. It is shown that a correct choice of the scale parameter can significantly increase the recognition accuracy.
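The sequence-decision rule described in this abstract can be sketched roughly as follows. This is a minimal illustration under our own assumptions: the function name, the softmax-style posterior estimate from scaled distances, and the use of elementwise minimum as the fuzzy intersection are choices made here for concreteness, not details taken from the paper.

```python
import numpy as np

def fuzzy_decode(frame_dists, model_conf, scale=1.0):
    """Sketch of a generalized fuzzy decoding rule.

    frame_dists: (T, C) array of distances from each of T frames
                 to each of C model classes (any distance measure).
    model_conf:  (C, C) array of model-to-model confusion grades,
                 e.g. exp(-divergence) between model distributions.
    """
    # Grade of membership of each frame in each class:
    # normalized exp(-distance/scale), a posterior-style estimate.
    grades = np.exp(-frame_dists / scale)
    grades /= grades.sum(axis=1, keepdims=True)

    # Fuzzy intersection (elementwise min) with the fuzzy set
    # of each frame's nearest model class.
    nearest = frame_dists.argmin(axis=1)
    intersect = np.minimum(grades, model_conf[nearest])

    # Arithmetic mean of per-frame intersections -> sequence decision.
    return int(intersect.mean(axis=0).argmax())
```

Here the `scale` parameter plays the role of the scale factor whose choice the abstract reports to be significant for accuracy.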
A definition of a phoneme as a fuzzy set of minimal speech units from the model database is proposed. On the basis of this definition and the Kullback-Leibler minimum information discrimination principle, a novel phoneme recognition algorithm has been developed as an enhancement of the phonetic decoding method. Experimental results for the problems of isolated vowel recognition and word recognition in Russian are presented. It is shown that the proposed method is characterized by increased recognition accuracy and reliability in comparison with the phonetic decoding method.
The paper gives a brief introduction to multiple classifier systems and describes a particular algorithm that improves classification accuracy by recommending an algorithm for each object. The recommendation is made under the hypothesis that a classifier is likely to predict the label of an object correctly if it has correctly classified the object's neighbors. The process of assigning a classifier to each object here involves the apparatus of Formal Concept Analysis. We explain the principle of the algorithm on a toy example and describe experiments with real-world datasets.
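The underlying hypothesis, that local past accuracy predicts correctness, can be illustrated with a simplified nearest-neighbor sketch. Note that the paper itself uses Formal Concept Analysis for this assignment, not the k-nearest-neighbor shortcut below; the function name and all parameters are ours.

```python
import numpy as np

def recommend_classifier(x, train_X, correct, k=3):
    # Recommend, for object x, the classifier that was correct most
    # often on x's k nearest training neighbors (the hypothesis that
    # local past accuracy predicts correctness on x).
    # correct: (N, M) 0/1 array - classifier j was correct on object i.
    dists = np.linalg.norm(train_X - x, axis=1)
    nn = np.argsort(dists)[:k]
    return int(correct[nn].sum(axis=0).argmax())
```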
Soft Computing (SC) is a consortium of fuzzy logic (FL), neurocomputing (NC), evolutionary computing (EC), probabilistic computing (PC), chaotic computing (CC) and parts of machine learning theory (ML). SC is the foundation for computational intelligence and is leading to the development of numerous hybrid intelligent information, control and decision-making systems. The methodology of computing with words (CW) is an important development in the evolution of cognitive science, natural language processing, artificial intelligence, and various existing scientific theories. This is because CW can enrich the existing scientific theories and the above-mentioned fields, giving them the capability of using natural languages to operate on perception-based information, not only measurement-based information. Indeed, in many real-world problems in the natural sciences as well as in industrial engineering, economics, and business, there is often a need to deal with both perception-based and measurement-based information. In the case of perception-based information, the available information is not precise enough to justify the use of numbers. Such information is usually described in natural languages rather than in strict (idealized) mathematical expressions. A strong need has therefore appeared for a new approach, theory and technology for the development of knowledge representation, computing, and reasoning tools that allow the creation of systems with a high MIQ (machine intelligence quotient). The sessions of ICSCCW-2011 will focus on the development and application of Soft Computing technology and the computing-with-words paradigm in system analysis, decision and control.
This volume contains papers presented at the 13th International Conference on Rough Sets, Fuzzy Sets and Granular Computing (RSFDGrC) held during June 25–27, 2011, at the National Research University Higher School of Economics (NRU HSE) in Moscow, Russia. RSFDGrC is a series of scientific events spanning the last 15 years. It investigates the meeting points among the four major disciplines outlined in its title, with respect to both foundations and applications. In 2011, RSFDGrC was co-organized with the 4th International Conference on Pattern Recognition and Machine Intelligence (PReMI), providing a great opportunity for multi-faceted interaction between scientists and practitioners. There were 83 paper submissions from over 20 countries. Each submission was reviewed by at least three Chairs or PC members. We accepted 34 regular papers (41%). In order to stimulate the exchange of research ideas, we also accepted 15 short papers. All 49 papers are distributed among 10 thematic sections of this volume. The conference program featured five invited talks given by Jiawei Han, Vladik Kreinovich, Guoyin Wang, Radim Belohlavek, and C.A. Murthy, as well as two tutorials given by Marcin Szczuka and Richard Jensen. Their corresponding papers and abstracts are gathered in the first two sections of this volume.
Symbolic classifiers allow for solving the classification task and provide the reasons for the classifier's decisions. Such classifiers have been studied by a large number of researchers and are known under a number of names, including tests, JSM-hypotheses, version spaces, emerging patterns, proper predictors of a target class, representative sets, etc. Here we consider such classifiers with a restriction on counter-examples and discuss them in terms of pattern structures. We show how such classifiers are related. In particular, we discuss the equivalence between good maximally redundant tests and minimal JSM-hypotheses, and between minimal representations of version spaces and good irredundant tests.
The problem of automatic image recognition based on the minimum information discrimination principle is formulated and solved. A comparison of color histograms in the Kullback-Leibler information metric is proposed. It is combined with the method of directed enumeration of alternatives, as opposed to complete enumeration of competing hypotheses. Results of an experimental study of the Kullback-Leibler discrimination in the problem of face recognition with a large database are presented. It is shown that the proposed algorithm is characterized by increased accuracy and reliability of image recognition.
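The minimum-information-discrimination decision on color histograms can be sketched in a few lines. This is an illustrative sketch only: the function names and the smoothing constant `eps` are our assumptions, and the directed enumeration of alternatives mentioned in the abstract is replaced here by plain exhaustive search for brevity.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # Kullback-Leibler discrimination between two histograms;
    # eps avoids log(0) for empty bins, then both are renormalized.
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def nearest_model(query_hist, db_hists):
    # Minimum-information-discrimination decision: choose the model
    # whose histogram minimizes the KL divergence from the query.
    divs = [kl_divergence(query_hist, h) for h in db_hists]
    return int(np.argmin(divs))
```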
A model for organizing cargo transportation between two node stations connected by a railway line containing a certain number of intermediate stations is considered. The cargo moves in one direction. Such a situation may occur, for example, if one of the node stations is located in a region that produces raw material for a manufacturing industry located in the region of the other node station. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, and the rule of distribution of cargo to the final node stations. The process of cargo transportation is governed by a fixed rule of control. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow. This results in the need for a "correct" extension of solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, we were able to construct these quasi-solutions numerically and determine their rate of growth. We note that the main technical difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) of solutions on a number of model parameters characterizing the rule of control, the technologies for cargo transportation, and the intensity of cargo supply at a node station.
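The numerical backbone mentioned in the abstract is the classical fourth-order Runge–Kutta scheme; a generic single-step sketch is shown below. This is only the standard RK4 step, not the paper's full procedure: handling the nonlocal linear restrictions and the gaps of the quasi-solutions is the hard part and is not reproduced here.

```python
def rk4_step(f, t, y, h):
    # One step of the classical fourth-order Runge-Kutta method
    # for y' = f(t, y), where y is a list of state components.
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
```

For example, integrating y' = y from y(0) = 1 over [0, 1] with a small step reproduces e to high accuracy, which is the kind of accuracy needed to resolve the growth rates of the quasi-solutions.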
Event logs collected by modern information and technical systems usually contain enough data for automated discovery of process models. A variety of algorithms have been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad-hoc selected parts of a log still lacks a full-fledged implementation. This paper describes a ROLAP-based method of multidimensional event log storage for process mining. The result of the log analysis is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad-hoc selection of criteria and a value of occurrence probability.
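The ranked directed graph described above can be sketched as a dictionary of transition edges weighted by occurrence probability. This is a simplified illustration under our own assumptions (function name, probability definition as a share of all observed transitions, and the `min_prob` threshold standing in for the analyst's probability cut-off); the paper's ROLAP storage layer is not modeled here.

```python
from collections import Counter

def transition_graph(traces, min_prob=0.0):
    # Build a directed graph of event transitions from a list of
    # event sequences, keeping only edges whose occurrence
    # probability reaches the analyst's threshold.
    counts = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    total = sum(counts.values())
    return {edge: n / total
            for edge, n in counts.items() if n / total >= min_prob}
```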
The geographic information system (GIS) is based on the first and only Russian Imperial Census of 1897 and the First All-Union Census of the Soviet Union of 1926. The GIS features vector data (shapefiles) of all provinces of the two states. For the 1897 census, there is information about linguistic, religious, and social estate groups. The part based on the 1926 census features nationality. Both shapefiles include information on gender and on rural and urban population. The GIS allows for producing any necessary maps for individual studies of the period that require the administrative boundaries and demographic information.
It is well known that the class of sets that can be computed by polynomial-size circuits is equal to the class of sets that are polynomial-time reducible to a sparse set. It is widely believed, but unfortunately up to now unproven, that there are sets in EXP^NP, or even in EXP, that are not computable by polynomial-size circuits and hence are not reducible to a sparse set. In this paper we study this question in a more restricted setting: what is the computational complexity of sparse sets that are self-reducible? It follows from earlier work of Lozano and Torán (in: Mathematical Systems Theory, 1991) that EXP^NP does not have sparse self-reducible hard sets. We define a natural version of self-reduction, tree-self-reducibility, and show that NEXP does not have sparse tree-self-reducible hard sets. We also construct an oracle relative to which all of EXP is reducible to a sparse tree-self-reducible set. These lower bounds are corollaries of more general results about the computational complexity of sparse self-reducible sets, and can be interpreted as super-polynomial circuit lower bounds for NEXP.