The maximum-likelihood dissimilarity method in image recognition based on deep neural networks
In this paper we focus on image recognition with small sample sizes, based on the nearest neighbor rule and matching of high-dimensional feature vectors extracted with a deep convolutional neural network. We propose a novel recognition algorithm based on the maximum likelihood method for the joint density of the dissimilarities between an observed image and the available instances in the training set. This likelihood is estimated using the known asymptotic normal distribution of the Jensen-Shannon divergence between image features, provided the latter can be treated as probability density estimates. This asymptotic behavior agrees with well-known experimental estimates of the distributions of dissimilarity distances between high-dimensional vectors. An experimental study of unconstrained face recognition on the LFW (Labeled Faces in the Wild) and YTF (YouTube Faces) datasets demonstrates that the proposed approach increases recognition accuracy by 1-5% compared with conventional classifiers.
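As an illustration of the matching step, the nearest neighbor rule with the Jensen-Shannon divergence can be sketched as follows. This is a minimal sketch, not the proposed maximum-likelihood algorithm itself: the feature vectors are assumed to be non-negative and normalized so they can be treated as discrete probability estimates, and all function names are illustrative.

```python
from math import log

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions,
    given as equal-length sequences of non-negative numbers summing to 1."""
    m = [0.5 * (pi + qi) for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence; eps avoids log(0) for empty bins
        return sum(ai * log((ai + eps) / (bi + eps)) for ai, bi in zip(a, b))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def nearest_neighbor(query, gallery):
    """Index of the training feature vector with the minimal JSD to the query."""
    return min(range(len(gallery)), key=lambda i: jsd(query, gallery[i]))
```

In this setting the features would come from a deep convolutional network and be normalized to unit sum; the proposed method then refines this plain nearest neighbor rule by maximizing the joint likelihood of the whole set of dissimilarities.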
The article is devoted to the pattern recognition task with a database containing a small number of samples per class. By mapping local continuous feature vectors to a discrete range, this problem is reduced to the statistical classification of a set of discrete finite patterns. It is demonstrated that the Bayesian decision implemented in the PNN, under the assumption that probability distributions can be estimated with the Parzen kernel and a Gaussian window of fixed variance for all classes, is not optimal for the classification of a set of patterns. We present a novel modification of the PNN with homogeneity testing that gives an optimal solution of the latter task under the same assumption about the probability densities. By exploiting the discrete nature of the patterns, our modification avoids the well-known drawbacks of the memory-based approach implemented in both the PNN and the PNN with homogeneity testing, namely low classification speed and high memory requirements: it requires only the storage and processing of the histograms of the input and training samples. We present the results of an experimental study of two practically important tasks: 1) Russian text authorship attribution with character n-gram features; and 2) face recognition on well-known datasets (AT&T, FERET and JAFFE) with a comparison of color and gradient-orientation histograms. Our results support the claim that the proposed network improves accuracy by 1-7% and is much more robust to changes in the smoothing parameter of the Gaussian kernel than the original PNN.
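The histogram-based processing described above can be sketched for the authorship-attribution case roughly as follows, assuming character bigram features and a chi-square distance between normalized histograms. All names here are illustrative, and the Gaussian-kernel machinery of the PNN is deliberately omitted; only the histogram storage-and-comparison idea is shown.

```python
from collections import Counter

def ngram_histogram(texts, n=2):
    """Normalized histogram of character n-grams over a list of samples."""
    counts = Counter()
    for text in texts:
        counts.update(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def chi_square(h1, h2):
    """Chi-square distance over the union of observed n-grams.

    Every key in the union occurs in at least one histogram,
    so the denominator is always positive."""
    keys = set(h1) | set(h2)
    return sum((h1.get(k, 0.0) - h2.get(k, 0.0)) ** 2 /
               (h1.get(k, 0.0) + h2.get(k, 0.0)) for k in keys)

def classify(text, class_histograms, n=2):
    """Assign the text to the class with the closest n-gram histogram."""
    h = ngram_histogram([text], n)
    return min(class_histograms, key=lambda c: chi_square(h, class_histograms[c]))
```

Only one histogram per class needs to be stored, which is what removes the memory and speed drawbacks of keeping every training sample.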
The use of the probabilistic neural network with homogeneity testing is proposed for the image recognition problem. This decision rule is shown to be optimal in the Bayesian sense when the task is formulated as statistical testing for homogeneity of the feature sets of the query and model images. The approach suffers from low computational efficiency when the number of classes and the dimensionality of the feature set are large. We explore how to overcome this limitation in the case of discrete features by synthesizing a novel recognition criterion based on comparing the histograms of the query and model images. It is shown that a particular case of this criterion is the nearest neighbor rule with popular similarity measures, namely the chi-square distance and the Jensen-Shannon divergence. The results of an experimental study of face recognition on widely used databases (AT&T, JAFFE) are presented. The proposed approach achieves better recognition accuracy than the conventional solution, which reduces the recognition task to statistical classification.
A unified methodology for categorizing various complex objects is presented in this book. Using probability theory, novel asymptotically minimax criteria suitable for practical applications in imaging and data analysis are examined, including special cases such as the Jensen-Shannon divergence and the probabilistic neural network. An optimal approximate nearest neighbor search algorithm, which allows faster classification of large databases, is featured. Rough set theory, sequential analysis and granular computing are used to improve the performance of hierarchical classifiers. Practical examples in face identification (including deep neural networks), isolated command recognition in a voice control system, and classification of visemes captured by the Kinect depth camera are included. This approach yields fast and accurate search procedures by using the exact probability densities of the applied dissimilarity measures.
This book can be used as a guide for independent study and as supplementary material for a technically oriented graduate course in intelligent systems and data mining. Students and researchers interested in the theoretical and practical aspects of intelligent classification systems will find answers to:
- Why does the conventional implementation of the naive Bayesian approach not work well in image classification?
- How to deal with insufficient performance of hierarchical classification systems?
- Is it possible to prevent an exhaustive search of the nearest neighbor in a database?
In the paper we present a new notion of a stochastic monotone measure and its application to image processing. By definition, a stochastic monotone measure is a random variable taking values in the set of monotone measures; it can describe the choice of random features in image processing. In this setting, a monotone measure describes the uncertainty in choosing the most informative set of features, and its stochastic behavior is explained by noise that can corrupt the images.
This book constitutes the refereed proceedings of the 12th Industrial Conference on Data Mining, ICDM 2012, held in Berlin, Germany, in July 2012. The 22 revised full papers presented were carefully reviewed and selected from 97 submissions. The papers are organized in topical sections on data mining in medicine and biology; data mining for the energy industry; data mining in traffic and logistics; data mining in telecommunication; data mining in engineering; theory in data mining; theory in data mining: clustering; and theory in data mining: association rule mining and decision rule mining.
The problem of controlling a nonlinear plant subject to uncontrollable disturbances is considered in the framework of a differential game. Optimal controls are synthesized by transforming the nonlinear equation of the original plant into a differential equation with state-dependent parameters. A quadratic cost functional allows the synthesis conditions to be formulated as the need to solve a Riccati equation. The solution of the Riccati equation with state-dependent parameters is obtained in symbolic form using algebraic methods, which generalizes a number of previously published theoretical results and yields rather constructive solutions for a number of control problem formulations.
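For reference, the standard state-dependent Riccati equation (SDRE) scheme that underlies this kind of construction can be sketched as follows; the notation is the conventional one for SDRE control and is not taken from the article itself. The plant is brought to state-dependent coefficient form with a quadratic cost:

```latex
\dot{x} = A(x)\,x + B(x)\,u, \qquad
J = \int_0^{\infty} \bigl( x^{\top} Q\, x + u^{\top} R\, u \bigr)\, dt .
```

Minimizing $J$ then leads, at each state $x$, to an algebraic Riccati equation with state-dependent parameters and the corresponding feedback law:

```latex
A(x)^{\top} P(x) + P(x) A(x) - P(x) B(x) R^{-1} B(x)^{\top} P(x) + Q = 0,
\qquad
u = -R^{-1} B(x)^{\top} P(x)\, x .
```

Solving this state-dependent Riccati equation in symbolic form is the step the article carries out with algebraic methods.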
The article is based on the fact that the growing demand for master data management systems has not yet produced a commonly accepted methodology for their design and development. The article offers two mathematical models that allow a designer of a master data management system to formally describe the system before development and to verify its quality using measurements unique to master data management systems.