Classification of dangerous situations for small sample size problem in maintenance decision support systems
In this paper we examine the task of maintenance decision support in classifying dangerous situations discovered by a monitoring system. This task is reduced to the contextual multi-armed bandit problem. We highlight the small sample size problem that arises in this task because failures are rather rare. A novel algorithm based on nearest neighbor search is proposed. An experimental study is provided for several synthetic datasets in which the situations are described by either simple features or grayscale images. It is shown that our algorithm outperforms well-known contextual multi-armed bandit methods with the Upper Confidence Bound and softmax stochastic search strategies.
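The core idea above — estimating each decision's expected reward from the nearest previously observed situations, and exploring with a softmax rule — can be sketched as follows. The feature dimensionality, arm count, reward model, neighborhood size and temperature are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(q, tau=0.5):
    """Softmax (Boltzmann) exploration probabilities over reward estimates q."""
    z = np.exp((q - q.max()) / tau)
    return z / z.sum()

def knn_reward_estimates(x, history, n_arms, k=3, prior=0.5):
    """Estimate each arm's reward from the k nearest past contexts
    in which that arm was chosen; fall back to a prior if none exist."""
    q = np.full(n_arms, prior)
    for arm in range(n_arms):
        past = [(c, r) for c, a, r in history if a == arm]
        if past:
            nearest = sorted(past, key=lambda cr: np.linalg.norm(cr[0] - x))
            q[arm] = np.mean([r for _, r in nearest[:k]])
    return q

# toy loop: arm 0 is the correct decision when x[0] > 0, arm 1 otherwise
history = []
for _ in range(300):
    x = rng.normal(size=2)
    q = knn_reward_estimates(x, history, n_arms=2)
    arm = rng.choice(2, p=softmax(q))
    reward = float((arm == 0) == (x[0] > 0))
    history.append((x, arm, reward))
```

As the history grows, the nearest-neighbor estimates become accurate in each region of the context space, so the softmax policy concentrates on the correct arm.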
Decision support in equipment condition monitoring systems with image processing is analyzed. Long-run accumulation of information about earlier decisions is used to make the proposed approach adaptive. It is shown that, unlike conventional classification problems, the recognition of abnormalities uses training samples supplemented with reward estimates of earlier decisions and can be tackled with reinforcement learning algorithms. We consider the basic stages of contextual multi-armed bandit algorithms, during which the probability distribution of each state is estimated to assess the current knowledge of the states, and the decision space is explored to increase decision-making efficiency. We propose a new decision-making method that uses a probabilistic neural network to classify the abnormal situation and the softmax rule to explore the decision space. A modelling experiment in image processing was carried out to show that our approach achieves higher abnormality detection accuracy than other known methods, especially for small initial training samples.
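A minimal sketch of the classification stage described above: a probabilistic neural network scores each class with a Gaussian Parzen estimate, and a softmax over the scores yields exploration probabilities over the decisions. The bandwidth, temperature and toy data are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def pnn_scores(x, train_X, train_y, sigma=0.3):
    """Per-class Parzen density estimate with a Gaussian kernel:
    the averaged kernel response of x to each class's instances."""
    scores = {}
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)
        scores[c] = float(np.exp(-d2 / (2 * sigma ** 2)).mean())
    return scores

def softmax_policy(scores, tau=0.1):
    """Softmax over class scores: probabilities with which each
    decision would be tried during exploration."""
    classes = sorted(scores)
    s = np.array([scores[c] for c in classes])
    p = np.exp((s - s.max()) / tau)
    return dict(zip(classes, p / p.sum()))

# toy usage: two well-separated classes in 2-D feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (20, 2)), rng.normal(2.0, 0.2, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
scores = pnn_scores(np.array([0.1, -0.1]), X, y)
```

The PNN needs no training beyond storing the instances, which is why it suits the small-sample regime the abstract emphasizes.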
A unified methodology for categorizing various complex objects is presented in this book. Through probability theory, novel asymptotically minimax criteria suitable for practical applications in imaging and data analysis are examined, including special cases such as the Jensen-Shannon divergence and the probabilistic neural network. An optimal approximate nearest neighbor search algorithm, which enables faster classification over large databases, is featured. Rough set theory, sequential analysis and granular computing are used to improve the performance of hierarchical classifiers. Practical examples in face identification (including deep neural networks), isolated command recognition in voice control systems, and classification of visemes captured by the Kinect depth camera are included. This approach yields fast and accurate search procedures by using exact probability densities of the applied dissimilarity measures.
This book can be used as a guide for independent study and as supplementary material for a technically oriented graduate course in intelligent systems and data mining. Students and researchers interested in the theoretical and practical aspects of intelligent classification systems will find answers to:
- Why does the conventional implementation of the naive Bayesian approach not work well in image classification?
- How can the insufficient performance of hierarchical classification systems be dealt with?
- Is it possible to avoid an exhaustive search for the nearest neighbor in a database?
The paper gives a brief introduction to multiple classifier systems and describes a particular algorithm that improves classification accuracy by recommending an algorithm for each object. This recommendation is made under the hypothesis that a classifier is likely to predict the label of an object correctly if it has correctly classified that object's neighbors. The process of assigning a classifier to each object here involves the apparatus of Formal Concept Analysis. We explain the principle of the algorithm on a toy example and describe experiments with real-world datasets.
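The neighbor-based recommendation hypothesis can be illustrated, without the Formal Concept Analysis machinery the paper uses, by a simpler rule: for each test object, pick the classifier with the best record on that object's k nearest training neighbors. The function names and toy data below are assumptions for illustration.

```python
import numpy as np

def recommend_classifier(x, train_X, correct, k=5):
    """correct[i, j] == 1 iff classifier j labeled training object i
    correctly; recommend the classifier with the highest accuracy
    on the k nearest neighbors of x."""
    d = np.linalg.norm(train_X - x, axis=1)
    nn = np.argsort(d)[:k]
    return int(np.argmax(correct[nn].mean(axis=0)))

# toy setup: classifier 0 is reliable when x0 < 0, classifier 1 when x0 >= 0
rng = np.random.default_rng(2)
train_X = rng.normal(size=(200, 2))
left = train_X[:, 0] < 0
correct = np.column_stack([left, ~left]).astype(float)
choice = recommend_classifier(np.array([-1.5, 0.0]), train_X, correct)
```

The recommendation is purely local: a classifier that is weak globally can still be chosen in the regions where it has been reliable.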
This book constitutes the refereed proceedings of the 6th IAPR TC3 International Workshop on Artificial Neural Networks in Pattern Recognition, ANNPR 2014, held in Montreal, QC, Canada, in October 2014. The 24 revised full papers presented were carefully reviewed and selected from 37 submissions for inclusion in this volume. They cover a large range of topics in the field of learning algorithms and architectures, discussing the latest research, results, and ideas in these areas.
In this paper, we use robust optimization models to formulate support vector machines (SVMs) with polyhedral uncertainties in the input data points. The formulations in our models are nonlinear; we use Lagrange multipliers to derive the first-order optimality conditions and reformulation methods to solve these problems. In addition, we propose models for transductive SVMs with input uncertainties.
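One standard way to make such a formulation tractable, sketched here under the assumption of polyhedral uncertainty sets $X_i = \{x : A_i x \le b_i\}$ (the paper's exact notation and reformulation may differ), is to dualize the worst-case constraint:

```latex
% Robust soft-margin SVM: the margin constraint must hold for every
% realization of the i-th input inside its polyhedron X_i.
\min_{w,\,b,\,\xi \ge 0}\; \tfrac{1}{2}\|w\|^2 + C\sum_i \xi_i
\quad \text{s.t.} \quad
y_i\,(w^\top x + b) \ge 1 - \xi_i \;\; \forall x \in \{x : A_i x \le b_i\}.

% The worst case over x is a linear program; by LP duality the
% semi-infinite constraint is equivalent to a finite one with
% multipliers \lambda_i:
\exists\, \lambda_i \ge 0:\quad
-\,b_i^\top \lambda_i + y_i\, b \ge 1 - \xi_i,
\qquad A_i^\top \lambda_i = -\,y_i\, w.
```

The dual variables $\lambda_i$ play the role of the Lagrange multipliers mentioned in the abstract: they turn an infinite family of constraints into finitely many linear ones.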
The probabilistic neural network (PNN) is a well-known instance-based learning algorithm, widely used in various pattern classification and regression tasks when only a rather small number of instances is available for each class. A known disadvantage of this network is the high computational complexity of classification. The common way to overcome this drawback is reduction techniques that select the most typical instances. Such an approach biases the estimates of the class probability distributions and, in turn, decreases classification accuracy. In this paper we examine another possible solution: replacing the Gaussian Parzen kernel with the orthogonal-series Fejér kernel and using the naive assumption of feature independence. It is shown that our approach achieves much better runtime complexity than either the original PNN or its modification with preliminary clustering of the training set.
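An illustrative sketch of the idea (not the paper's exact estimator): a per-feature cosine-series density estimate whose coefficients carry Fejér (Cesàro) damping, combined across features under the naive independence assumption. After a single O(n) fitting pass, evaluating a class density costs O(m) per feature instead of O(n), which is the runtime gain the abstract refers to. The basis choice, the scaling of features to [0, π], and the toy data are assumptions.

```python
import numpy as np

def phi(k, x):
    """Orthonormal cosine basis on [0, pi]."""
    x = np.asarray(x, dtype=float)
    if k == 0:
        return np.full_like(x, 1.0 / np.sqrt(np.pi))
    return np.sqrt(2.0 / np.pi) * np.cos(k * x)

def fit_series(X, m=8):
    """Per-class fitting: per-feature series coefficients with Fejér
    (Cesàro) damping. This pass is O(n) per feature; evaluating the
    density later is O(m) per feature, independent of n."""
    n_features = X.shape[1]
    coef = np.zeros((m + 1, n_features))
    for k in range(m + 1):
        for j in range(n_features):
            coef[k, j] = phi(k, X[:, j]).mean()
    damp = 1.0 - np.arange(m + 1) / (m + 1.0)   # Fejér weights
    return coef * damp[:, None]

def naive_density(x, coef):
    """Naive-independence product of the 1-D series density estimates
    (clipped below, since a truncated series can dip negative)."""
    m1, n_features = coef.shape
    dens = 1.0
    for j in range(n_features):
        f_j = sum(coef[k, j] * phi(k, x[j]) for k in range(m1))
        dens *= max(float(f_j), 1e-9)
    return dens

# toy usage: two classes with features pre-scaled into [0, pi]
rng = np.random.default_rng(3)
X0 = np.clip(rng.normal(0.8, 0.2, (100, 2)), 0.0, np.pi)
X1 = np.clip(rng.normal(2.3, 0.2, (100, 2)), 0.0, np.pi)
c0, c1 = fit_series(X0), fit_series(X1)
```

Classification then compares `naive_density` across the per-class coefficient tables, exactly as a PNN compares its per-class Parzen scores, but without touching the stored instances at query time.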
We propose extensions of the classical JSM-method and the Naïve Bayesian classifier for the case of triadic relational data. We performed a series of experiments on various types of data (both real and synthetic) to estimate the quality of the classification techniques and compare them with other classification algorithms that generate hypotheses, e.g., ID3 and Random Forest. In addition to classification precision and recall, we also evaluated the time performance of the proposed methods.