Total publications in this section: 4
Article
Savchenko A. Pattern Recognition. 2012. Vol. 45. No. 8. P. 2952-2961.

The article is devoted to the problem of image recognition in real-time applications with a large database containing hundreds of classes. The directed enumeration method is examined as an alternative to exhaustive search. This method has two advantages. First, it can be applied with similarity measures that do not satisfy metric properties (the chi-square distance, Kullback-Leibler information discrimination, etc.). Second, it increases recognition speed even in the most difficult cases, which is very important in practical terms: the cases in which many neighbors lie at very similar distances. In this paper we present the results of an experimental study of the directed enumeration method, comparing color and gradient-orientation histograms, applied to face recognition on well-known datasets (Essex, FERET). It is shown that the proposed method increases the computational efficiency of automatic image recognition by a factor of 3-12 compared with a conventional nearest-neighbor classifier.
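As a point of reference, here is a minimal Python sketch of the exhaustive nearest-neighbor baseline that the directed enumeration method accelerates, using the two non-metric similarity measures named in the abstract. It is not the article's method itself; the function names and toy data are illustrative assumptions.

    import numpy as np

    def chi_square(p, q, eps=1e-10):
        # Chi-square distance between two normalized histograms (not a metric).
        return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

    def kl_divergence(p, q, eps=1e-10):
        # Kullback-Leibler information discrimination I(p : q); also non-metric.
        return np.sum(p * np.log((p + eps) / (q + eps)))

    def nearest_neighbor(query, database, dist=chi_square):
        # Brute-force NN classifier: the conventional baseline the article speeds up.
        return int(np.argmin([dist(query, ref) for ref in database]))

    # Toy usage: three reference histograms, one query near the second of them.
    rng = np.random.default_rng(0)
    db = [h / h.sum() for h in rng.random((3, 16))]
    q = db[1] + 0.01 * rng.random(16)
    q /= q.sum()
    print(nearest_neighbor(q, db))                      # -> 1
    print(nearest_neighbor(q, db, dist=kl_divergence))  # same answer here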

Added: Jun 9, 2012
Article
Savchenko A. Pattern Recognition. 2017. Vol. 61. P. 459-469.

An exhaustive search over all classes cannot be implemented in real time if the database contains a large number of classes. In this paper we introduce a novel probabilistic approximate nearest-neighbor (NN) method. Unlike most known fast approximate NN algorithms, our method is not heuristic. The joint probabilistic densities (likelihoods) of the distances to previously checked reference objects are estimated for each class, and the next reference instance is selected from the class with the maximal likelihood. To deal with the quadratic memory requirement of this approach, we propose a modification that processes the distances from all instances to a small set of pivots chosen by farthest-first traversal. An experimental study of face recognition with histograms of oriented gradients and deep neural network-based image features shows that the proposed method is much faster than the known approximate NN algorithms for medium-sized databases.
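One ingredient the abstract names concretely, pivot selection by farthest-first traversal, admits a short sketch. This is a generic rendering under a Euclidean toy distance, not the paper's code; all names and data are assumptions.

    import numpy as np

    def farthest_first_pivots(points, k, dist, start=0):
        # Greedy farthest-first traversal: begin with an arbitrary point,
        # then repeatedly add the point farthest from the current pivot set.
        pivots = [start]
        d_min = np.array([dist(points[start], p) for p in points])
        for _ in range(k - 1):
            nxt = int(np.argmax(d_min))          # farthest from all chosen pivots
            pivots.append(nxt)
            d_new = np.array([dist(points[nxt], p) for p in points])
            d_min = np.minimum(d_min, d_new)     # update distance to the pivot set
        return pivots

    # Toy usage with a Euclidean distance on random 8-dimensional points.
    rng = np.random.default_rng(1)
    pts = rng.random((100, 8))
    euclid = lambda a, b: float(np.linalg.norm(a - b))
    print(farthest_first_pivots(pts, k=5, dist=euclid))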

Added: Aug 30, 2016
Article
Mirkin B., Amorim R. Pattern Recognition. 2012. Vol. 45. No. 3. P. 1061-1075.

This paper takes another step toward overcoming a drawback of K-Means, its lack of defense against noisy features, by using feature weights in the criterion. The Weighted K-Means method of Huang et al. is extended to the corresponding Minkowski metric for measuring distances. Under the Minkowski metric, the feature weights become intuitively appealing feature-rescaling factors in a conventional K-Means criterion. To show how this can address another issue of K-Means, initialization, a method that initializes K-Means with anomalous clusters is adapted. The Minkowski-metric-based method is experimentally validated on datasets from the UCI Machine Learning Repository and on generated sets of Gaussian clusters, both as they are and with additional uniform random noise features, and appears competitive with other K-Means-based feature-weighting algorithms.
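The two quantities this abstract turns on, the feature-weighted Minkowski distance and the dispersion-based weight update, can be sketched briefly. The sketch follows the standard Weighted K-Means update of Huang et al. transplanted to exponent p (requiring p > 1); the function names and toy data are assumptions, not the authors' code.

    import numpy as np

    def minkowski_weighted_dist(x, center, weights, p):
        # Weighted Minkowski criterion term: sum_v w_v^p * |x_v - c_v|^p
        # (the p-th root is omitted, as it does not change the argmin).
        return np.sum((weights ** p) * np.abs(x - center) ** p)

    def update_weights(cluster_points, center, p, eps=1e-10):
        # Weights from within-cluster Minkowski dispersions D_v, so that
        # low-dispersion features get larger weights:
        #   w_v = 1 / sum_u (D_v / D_u)^(1 / (p - 1))
        disp = np.sum(np.abs(cluster_points - center) ** p, axis=0) + eps
        ratio = (disp[:, None] / disp[None, :]) ** (1.0 / (p - 1.0))
        return 1.0 / np.sum(ratio, axis=1)

    # Toy usage: for p = 2 the weights are driven by feature variances,
    # so the low-variance first feature receives the largest weight.
    rng = np.random.default_rng(2)
    pts = rng.normal(size=(50, 3)) * np.array([0.1, 1.0, 5.0])
    print(update_weights(pts, pts.mean(axis=0), p=2.0))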

Added: Nov 26, 2012
Article
Mirkin B., Amorim R., Makarenkov V. et al. Pattern Recognition. 2017. Vol. 67. P. 62-72.

The Minkowski weighted K-means (MWK-means) is a recently developed clustering algorithm capable of computing feature weights. The cluster-specific weights in MWK-means follow the intuitive idea that a feature with low variance should receive a greater weight than a feature with high variance. The final clustering found by this algorithm depends on the choice of the Minkowski distance exponent. This paper explores the possibility of using the central Minkowski partition in the ensemble of all Minkowski partitions for selecting an optimal value of the Minkowski exponent. The central Minkowski partition also appears to be a good consensus partition. Furthermore, we discovered some striking correlations between the Minkowski profile, defined as a mapping of the Minkowski exponent values to the average similarity values of the optimal Minkowski partitions, and the Adjusted Rand Index vectors resulting from the comparison of the obtained partitions to the ground truth. Our findings are confirmed by a series of computational experiments involving synthetic Gaussian clusters and real-world data.
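A hedged sketch of the selection rule described here: compute, for each exponent's partition, its average similarity to the partitions obtained at all other exponents (the profile), and take the partition maximizing that average as the central one. The Adjusted Rand Index is used as the similarity measure for illustration; the partitions, names, and data below are assumptions.

    import numpy as np
    from sklearn.metrics import adjusted_rand_score

    def minkowski_profile(partitions):
        # For each partition, the mean ARI similarity to all other partitions.
        n = len(partitions)
        return np.array([
            np.mean([adjusted_rand_score(partitions[i], partitions[j])
                     for j in range(n) if j != i])
            for i in range(n)
        ])

    # Toy usage: three partitions of six objects, e.g. from three exponents.
    parts = [np.array([0, 0, 1, 1, 2, 2]),
             np.array([0, 0, 1, 1, 2, 2]),
             np.array([0, 1, 1, 2, 2, 0])]
    prof = minkowski_profile(parts)
    central = int(np.argmax(prof))   # index of the central Minkowski partition
    print(prof, central)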

Added: Mar 30, 2017