Fuzzy Analysis and Deep Convolution Neural Networks in Still-to-video Recognition
We discuss the video classification problem based on matching feature vectors extracted from each frame by deep convolutional neural networks. We propose a novel recognition method based on the representation of each frame as a fuzzy set of reference classes whose degrees of membership are defined by the asymptotic distribution of the Kullback–Leibler information divergence and its relation to the maximum likelihood method. In order to increase the classification accuracy, we perform the fuzzy intersection (with the product triangular norm) of these sets. An experimental study with the YTF (YouTube Faces) and IJB-A (IARPA Janus Benchmark A) video datasets and the VGGFace, ResFace and LightCNN descriptors shows that the proposed approach increases recognition accuracy by 2–6% compared with known classification methods.
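To make the aggregation step concrete, here is a minimal sketch of the fuzzy intersection with the product triangular norm, assuming the per-frame membership degrees have already been computed; the array layout and function names are illustrative, not the paper's code:

```python
import numpy as np

# Minimal sketch: memberships[t, c] is the membership degree of frame t in
# reference class c, assumed to be computed beforehand from the per-frame
# distances (the layout and names are illustrative).
def classify_video(memberships: np.ndarray) -> int:
    """Fuzzy intersection of the per-frame fuzzy sets via the product t-norm."""
    # The product t-norm over frames is a plain product of membership degrees;
    # summing logarithms avoids numerical underflow for long videos.
    log_joint = np.log(np.clip(memberships, 1e-12, None)).sum(axis=0)
    return int(np.argmax(log_joint))

# Toy usage: 5 frames, 3 reference classes, each row normalized to sum to 1.
rng = np.random.default_rng(0)
print(classify_video(rng.dirichlet(np.ones(3), size=5)))
```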
The article is devoted to the pattern recognition task with a database containing a small number of samples per class. By mapping local continuous feature vectors to a discrete range, this problem is reduced to the statistical classification of a set of discrete finite patterns. It is demonstrated that the Bayesian decision implemented in the probabilistic neural network (PNN), under the assumption that probability distributions can be estimated using the Parzen kernel and a Gaussian window with a fixed variance for all classes, is not optimal in the classification of a set of patterns. We present here a novel modification of the PNN with homogeneity testing which gives an optimal solution of the latter task under the same assumption about the probability densities. By exploiting the discrete nature of the patterns, our modification avoids the well-known drawbacks of the memory-based approach implemented in both the PNN and the PNN with homogeneity testing, namely low classification speed and high memory requirements. Our modification only requires the storage and processing of the histograms of the input and training samples. We present the results of an experimental study of two practically important tasks: 1) authorship attribution of Russian texts with character n-gram features; and 2) face recognition with well-known datasets (AT&T, FERET and JAFFE) and a comparison of color and gradient-orientation histograms. Our results support the statement that the proposed network provides better accuracy (by 1-7%) and is much more resistant to changes of the smoothing parameter of the Gaussian kernel function than the original PNN.
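As an illustration of the histogram-only processing, the following sketch classifies a discrete pattern by comparing smoothed histograms; the symmetric KL-type divergence below is a stand-in for the paper's exact homogeneity-testing statistic, and all names are illustrative:

```python
import numpy as np

# Minimal sketch of histogram-only classification of discrete patterns;
# n_bins is the size of the discrete range the features are mapped to.
def histogram(pattern, n_bins):
    h = np.bincount(pattern, minlength=n_bins).astype(float) + 1e-3  # smoothing
    return h / h.sum()

def divergence(p, q):
    # Symmetric KL-type divergence between two histograms (a stand-in
    # for the homogeneity-testing statistic of the paper).
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def classify(query_hist, class_hists):
    """Pick the class whose training histogram is closest to the query's."""
    return min(class_hists, key=lambda c: divergence(query_hist, class_hists[c]))

# Toy usage with two classes over a 4-symbol alphabet.
train = {"a": histogram(np.array([0, 0, 1, 2]), 4),
         "b": histogram(np.array([3, 3, 2, 3]), 4)}
print(classify(histogram(np.array([0, 1, 0, 0]), 4), train))
```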
Autonomous taxis are in high demand in smart city scenarios. Such taxis travel along a well-specified path and therefore require only two kinds of parameters: detection parameters and control parameters. The detection parameters comprise turn detection and obstacle detection; the control parameters comprise steering control and speed control. In this paper, a novel autonomous taxi model is proposed for the smart city scenario. Deep learning has been used to model the human driver's capabilities for the autonomous taxi. A hierarchical Deep Neural Network (DNN) architecture is utilized to train the various driving aspects. At the first level, the proposed DNN architecture classifies the road as straight or turning, while a parallel DNN detects obstacles. At the second level, the DNN discriminates the turning direction, i.e. left or right, for the steering and speed controls. Two multi-layered DNNs were run on an Nvidia Tesla K40 GPU-based system with a Core i7 processor. The mean squared errors (MSE) for the speed and steering-angle parameters were 0.018 and 0.0248 percent, respectively, with a 15 millisecond real-time response delay.
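A minimal sketch of the two-level decision cascade described above; turn_net, obstacle_net and direction_net are placeholders for the trained DNNs, and the numeric speed and steering values are purely illustrative:

```python
# Minimal sketch of the hierarchical decision cascade; the three networks are
# placeholders for the trained DNNs, and the outputs are illustrative.
def drive_step(frame, turn_net, obstacle_net, direction_net):
    if obstacle_net(frame):                # level 1, parallel branch: obstacle?
        return {"speed": 0.0, "steering": 0.0}   # stop for the obstacle
    if not turn_net(frame):                # level 1: straight vs. turning road
        return {"speed": 1.0, "steering": 0.0}   # full speed, straight ahead
    direction = direction_net(frame)       # level 2: discriminate left / right
    return {"speed": 0.4, "steering": -0.5 if direction == "left" else 0.5}
```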
On the informatics and software side, questions of practical security are linked to algorithms for processing the unstructured information in video frames obtained by cross-platform registration systems. Compression solutions become crucially important when the data rate of the video stream exceeds the traffic capacity of the communication network. The basic image processing approach we exploited is to maintain the highest resolution for the main part of the surveyed object (for example, a person's face or figure) while minimizing the information traffic from the image background by artificially substituting it with a homogeneous color filling. This method allowed us to obtain a significant compression ratio (up to 7000).
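The background substitution step can be sketched as follows, assuming the bounding box of the surveyed object is already provided by an external detector; the function and parameter names are illustrative:

```python
import numpy as np

# Minimal sketch of the background substitution step; the bounding box of the
# surveyed object (x, y, w, h) is assumed to come from an external detector.
def substitute_background(frame: np.ndarray, box, fill=(128, 128, 128)):
    x, y, w, h = box
    out = np.empty_like(frame)
    out[:] = fill                                    # homogeneous color filling
    out[y:y + h, x:x + w] = frame[y:y + h, x:x + w]  # object kept at full detail
    return out
```

The uniform background compresses to almost nothing under any standard codec, which is what drives the high overall compression ratio.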
The performance of machine learning methods is heavily dependent on the choice of data representation (or features) on which they are applied. The rapidly developing field of representation learning is concerned with questions surrounding how we can best learn meaningful and useful representations of data. We take a broad view of the field and include topics such as deep learning and feature learning, metric learning, compositional modeling, structured prediction, reinforcement learning, and issues regarding large-scale learning and non-convex optimization. The range of domains to which these techniques apply is also very broad, from vision to speech recognition, text understanding, gaming, music, etc.
An ensemble of classifiers has been built to solve the problem of video image recognition. The paper offers a way to estimate the a posteriori probability that an image belongs to a particular class in the case of an arbitrary distance measure and the nearest neighbor method. This estimate is shown to be equivalent to the optimal naive Bayesian estimate when the Kullback-Leibler divergence is used as the proximity measure. The block diagram of a video image recognition system is presented. The system features automatic adaptation of the list of images of identical objects that is fed to the committee machine input. The system is tested on the face recognition task using popular databases (FERET, AT&T, Yale) and the results are discussed.
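A minimal sketch of such a posterior estimate over histogram features, under the assumption stated above: with the Kullback-Leibler divergence as the proximity measure, exp(-n*KL) is proportional to the multinomial likelihood of the query histogram, so normalizing these scores yields naive-Bayes-style posteriors (the exact estimator of the paper may differ):

```python
import numpy as np

# Minimal sketch (hypothetical form): exp(-n * KL(query || class)) is
# proportional to the multinomial likelihood of the query histogram, so
# normalizing over classes gives a posteriori probability estimates.
def posteriors(query_hist, class_hists, n):
    eps = 1e-12
    q = query_hist + eps
    kl = np.array([np.sum(q * np.log(q / (h + eps))) for h in class_hists])
    scores = np.exp(-n * kl)       # n = number of observed features per image
    return scores / scores.sum()   # estimated a posteriori probabilities
```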
A unified methodology for categorizing various complex objects is presented in this book. Through probability theory, novel asymptotically minimax criteria suitable for practical applications in imaging and data analysis are examined, including special cases such as the Jensen-Shannon divergence and the probabilistic neural network. An optimal approximate nearest neighbor search algorithm, which allows faster classification of databases, is featured. Rough set theory, sequential analysis and granular computing are used to improve the performance of the hierarchical classifiers. Practical examples in face identification (including deep neural networks), isolated command recognition in a voice control system and classification of visemes captured by the Kinect depth camera are included. This approach creates fast and accurate search procedures by using the exact probability densities of the applied dissimilarity measures.
This book can be used as a guide for independent study and as supplementary material for a technically oriented graduate course in intelligent systems and data mining. Students and researchers interested in the theoretical and practical aspects of intelligent classification systems will find answers to:
- Why does the conventional implementation of the naive Bayesian approach not work well in image classification?
- How can one deal with the insufficient performance of hierarchical classification systems?
- Is it possible to avoid an exhaustive search for the nearest neighbor in a database?
Since the works of Specht, probabilistic neural networks (PNNs) have attracted researchers due to their high training speed and their equivalence to the optimal Bayesian decision of the classification task. However, it is known that the conventional implementation of the PNN is not optimal in the statistical recognition of a set of patterns. In this article we present a novel modification of the PNN and prove that it is optimal in this task under the general assumptions of the Bayes classifier. The modification is based on a reduction of the recognition task to the homogeneity testing problem. In the experiment we examine the problem of authorship attribution of Russian texts. Our results support the statement that the proposed network provides better accuracy and is much more resistant to changes of the smoothing parameter of the Gaussian kernel function than the original PNN.
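For reference, the Parzen window estimate with a Gaussian kernel that underlies the original PNN can be written in the standard form (here $\sigma$ is the smoothing parameter mentioned above, $d$ the feature dimension, $n_c$ the number of training samples $\mathbf{x}_{c,j}$ of class $c$, and $P(c)$ the prior):

$$\hat{f}_c(\mathbf{x}) = \frac{1}{n_c\,(2\pi\sigma^2)^{d/2}} \sum_{j=1}^{n_c} \exp\left(-\frac{\lVert \mathbf{x}-\mathbf{x}_{c,j} \rVert^2}{2\sigma^2}\right), \qquad \hat{c}(\mathbf{x}) = \arg\max_c P(c)\,\hat{f}_c(\mathbf{x}).$$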
A model for organizing cargo transportation between two node stations connected by a railway line containing a certain number of intermediate stations is considered. The movement of cargo is in one direction. Such a situation may occur, for example, if one of the node stations is located in a region that produces raw materials for a manufacturing industry located in the region of the other node station. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, and the rules of distribution of cargo to the final node stations. The process of cargo transportation follows a specified control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow, which results in the need for a "correct" extension of the solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge-Kutta method, we were able to construct these quasi-solutions numerically and determine their rate of growth. We note that, technically, the main difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) of the solutions on the parameters of the model characterizing the control rule, the technologies for cargo transportation and the intensity of cargo supply at the node station.
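For reference, a minimal sketch of the classical fourth-order Runge-Kutta step used for the numerical construction; the right-hand side f of the system, the nonlocal linear restrictions and the gaps of the quasi-solutions are all outside its scope:

```python
# One classical fourth-order Runge-Kutta step for y' = f(t, y); y may be a
# scalar or a NumPy vector. The nonlocal restrictions and the gaps of the
# quasi-solutions are handled outside this elementary step.
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```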
Event logs collected by modern information and technical systems usually contain enough data for automated discovery of process models. A variety of algorithms has been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad-hoc selected parts of a log still has not received a full-fledged implementation. This paper describes an ROLAP-based method of multidimensional event log storage for process mining. The result of the analysis of the log is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by ad-hoc selection of criteria and a chosen value of occurrence probability.
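A minimal sketch of the ranking step, assuming a sublog has already been selected: it builds a directly-follows relation over the traces and assigns each edge its occurrence probability among the transitions leaving the same event (all names are illustrative):

```python
from collections import Counter, defaultdict

# Minimal sketch of ranking event transitions by occurrence probability in an
# already selected sublog; each trace is a sequence of event names.
def directly_follows(traces):
    counts, outgoing = Counter(), defaultdict(int)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):   # consecutive pairs of events
            counts[(a, b)] += 1
            outgoing[a] += 1
    # probability of edge (a, b) among all transitions leaving event a
    return {edge: n / outgoing[edge[0]] for edge, n in counts.items()}

# Toy sublog of three traces.
log = [["start", "check", "pay"], ["start", "check", "reject"], ["start", "pay"]]
print(directly_follows(log))
```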
The geographic information system (GIS) is based on the first and only Russian Imperial Census of 1897 and the First All-Union Census of the Soviet Union of 1926. The GIS features vector data (shapefiles) of all provinces of the two states. For the 1897 census, there is information about linguistic, religious, and social estate groups. The part based on the 1926 census features nationality. Both shapefiles include information on gender and on the rural and urban population. The GIS allows for producing any maps needed for individual studies of the period that require administrative boundaries and demographic information.
Existing approaches suggest that IT strategy should be a reflection of business strategy. However, in practice organisations often do not follow the business strategy even if it is formally declared. In these conditions, IT strategy can be viewed not as a plan but as an organisational shared view on the role of information systems. This approach reflects only a top-down perspective of IT strategy, so it can be supplemented by a strategic behaviour pattern (i.e., a more or less standard response to changes, formed as a result of previous experience) to implement a bottom-up approach. Two components that can help to establish an effective reaction to new IT initiatives are proposed here: a model of IT-related decision making, and an efficiency measurement metric to estimate the maturity of business processes and the appropriate IT. The usage of the proposed tools is demonstrated in practical cases.