Modeling Human Perception of Image Quality
Humans can determine image quality instantly and intuitively, but the mechanism of human perception of image quality is unknown. The purpose of this work was to identify the most important quantitative metrics responsible for the human perception of digital image quality. Digital images from two different datasets—CT tomography (MedSet) and scenic photographs of trees (TreeSet)—were presented in random pairs to unbiased human viewers. The observers were then asked to select the better-quality image from each pair. The resulting human-perceived image quality (HPIQ) ranks were obtained from these pairwise comparisons with two different ranking approaches. Using various digital image quality metrics reported in the literature, we built two models to predict the observed HPIQ rankings and to identify the most important HPIQ predictors. Evaluating the quality of our HPIQ models as the fraction of falsely predicted pairwise comparisons (inverted image pairs), we obtained 70–71% correct HPIQ predictions for the first approach and 73–76% for the second. Taking into account that 10–14% of inverted pairs were already present in the original rankings, the limitations of the models, and the small number of principal HPIQ predictors used, we find this result very satisfactory. We obtained a small set of the most significant quantitative image metrics associated with the human perception of image quality. This can be used for automatic image quality ranking, machine learning, and quality-improvement algorithms.
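The ranking and evaluation steps described in this abstract can be sketched in a few lines. The snippet below is an illustrative stand-in, not the paper's actual procedure: it ranks images by a simple win count over observer votes (the paper's two ranking approaches are not detailed here) and computes the fraction of inverted pairs used as the quality measure; all image names and votes are hypothetical.

```python
from itertools import combinations

def rank_by_wins(items, comparisons):
    """Rank items by how often each was preferred in pairwise comparisons.

    comparisons: list of (winner, loser) pairs from human observers.
    A simple win-count ranking, used here as a hypothetical stand-in
    for the paper's two ranking approaches.
    """
    wins = {item: 0 for item in items}
    for winner, _loser in comparisons:
        wins[winner] += 1
    # Higher win count -> better perceived quality (index 0 is best).
    return sorted(items, key=lambda item: -wins[item])

def inverted_pair_fraction(true_rank, predicted_rank):
    """Fraction of item pairs whose order differs between two rankings."""
    pos = {item: k for k, item in enumerate(predicted_rank)}
    pairs = list(combinations(true_rank, 2))
    inverted = sum(1 for a, b in pairs if pos[a] > pos[b])
    return inverted / len(pairs)

# Hypothetical observer votes: (preferred image, rejected image).
images = ["img1", "img2", "img3", "img4"]
votes = [("img1", "img2"), ("img1", "img3"), ("img2", "img3"),
         ("img1", "img4"), ("img4", "img2"), ("img3", "img4")]
hpiq = rank_by_wins(images, votes)
print(hpiq, inverted_pair_fraction(images, hpiq))
```

A model's predicted ranking would then be scored against the human one with `inverted_pair_fraction`; 0.0 means perfect agreement.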
This book constitutes the proceedings of the Third International Conference on Analysis of Images, Social Networks and Texts, AIST 2014, held in Yekaterinburg, Russia, in April 2014. The 11 full and 10 short papers were carefully reviewed and selected from 74 submissions. They are presented together with 3 short industrial papers, 4 invited papers and tutorials. The papers deal with topics such as analysis of images and videos; natural language processing and computational linguistics; social network analysis; machine learning and data mining; recommender systems and collaborative technologies; semantic web, ontologies and their applications; analysis of socio-economic data.
In this paper, we propose an iterative algorithm for edge detection. The presented method uses a 2D Gaussian function and the Prewitt operator. First, edge points are selected by a statistical method. Second, boundary tracking is performed by rotating the Gaussian function and varying its parameters. The algorithm has been tested on a medical phantom. Additionally, an implementation on multicore GPUs has been designed for better performance.
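The Prewitt operator the method builds on can be illustrated directly. This is a minimal sketch of plain Prewitt gradient-magnitude computation, not the authors' full iterative algorithm (the Gaussian rotation and the statistical selection of edge points are omitted):

```python
import math

# Prewitt kernels for horizontal and vertical gradients.
PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def prewitt_magnitude(img):
    """Gradient magnitude via the Prewitt operator (valid region only).

    img is a 2D list of grayscale values; returns a (h-2) x (w-2) map of
    sqrt(gx^2 + gy^2), where gx and gy are the two kernel responses.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0.0
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    v = img[y + dy][x + dx]
                    gx += PREWITT_X[dy + 1][dx + 1] * v
                    gy += PREWITT_Y[dy + 1][dx + 1] * v
            out[y - 1][x - 1] = math.hypot(gx, gy)
    return out

# A tiny test image with a vertical step edge between columns 1 and 2.
step = [[0, 0, 255, 255]] * 4
mag = prewitt_magnitude(step)
```

High values of `mag` mark candidate edge points, which the paper's method would then filter statistically and track along the boundary.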
This book constitutes the proceedings of the Fourth International Conference on Analysis of Images, Social Networks and Texts, AIST 2015, held in Yekaterinburg, Russia, in April 2015. The 24 full and 8 short papers were carefully reviewed and selected from 140 submissions. The papers are organized in topical sections on analysis of images and videos; pattern recognition and machine learning; social network analysis; text mining and natural language processing.
We consider image denoising procedures based on computationally efficient tree-serial parametric dynamic programming, on different representations of the image lattice as sets of acyclic graphs, and on a new type of non-convex regularization that allows a priori preferences to be set flexibly. Experimental results in image denoising, as well as comparisons with related methods, are provided. A new extended version of the multi-quadratic dynamic programming procedures for image denoising, proposed here, shows improved accuracy for images of different types.
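The flavor of such dynamic programming denoising can be shown on the simplest acyclic graph, a 1D chain. The sketch below is an illustrative special case under assumed costs, not the paper's tree-serial procedure: it exactly minimizes a quadratic data term plus a truncated-quadratic (hence non-convex) smoothness term by a Viterbi-style pass over a discrete label set.

```python
def dp_denoise_1d(signal, labels, lam, tau):
    """Exact MAP denoising on a 1D chain by dynamic programming.

    Minimizes  sum_i (y_i - x_i)^2 + lam * sum_i min((x_i - x_{i+1})^2, tau)
    over x_i in `labels`.  The truncated quadratic is a simple non-convex
    regularizer that tolerates sharp jumps; energy and label set are
    illustrative assumptions, not the paper's model.
    """
    n, m = len(signal), len(labels)
    cost = [(signal[0] - labels[j]) ** 2 for j in range(m)]
    back = []  # back[i-1][j]: best label index at position i-1 given label j at i
    for i in range(1, n):
        new_cost, prev = [], []
        for j in range(m):
            def trans(k):
                return cost[k] + lam * min((labels[k] - labels[j]) ** 2, tau)
            best_k = min(range(m), key=trans)
            new_cost.append(trans(best_k) + (signal[i] - labels[j]) ** 2)
            prev.append(best_k)
        cost = new_cost
        back.append(prev)
    # Backtrack the optimal labeling from the best final label.
    j = min(range(m), key=lambda k: cost[k])
    path = [j]
    for prev in reversed(back):
        j = prev[j]
        path.append(j)
    path.reverse()
    return [labels[j] for j in path]

# A noisy step signal snaps onto the two assumed labels.
denoised = dp_denoise_1d([0.1, -0.2, 0.0, 5.1, 4.9, 5.2], [0, 5],
                         lam=1.0, tau=4.0)
```

On a tree, the same min-sum recursion runs leaf-to-root instead of left-to-right, which is what makes acyclic-graph representations of the lattice attractive.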
The authors investigate the formation of transfer fees for professional football players. They analyze the influence on the fee of factors that define the "human capital" of an athlete, such as age, professional achievements, and "level of publicity", i.e., the player's ability to attract spectators' attention. It is determined that the strength of the influence of professional achievements diminishes with age, and that taking the "level of publicity" into account significantly improves the quality of transfer fee modeling.
Conformance checking is a subarea of process mining that studies relations between designed processes, also called process models, and records of observed processes, also called event logs. In the last decade, research in conformance checking has proposed a plethora of techniques for characterizing the discrepancies between process models and event logs. Often, these techniques are also applied to measure the quality of process models automatically discovered from event logs. Recently, the process mining community has initiated a discussion on the desired properties of such measures. This discussion witnesses the lack of measures with the desired properties and the lack of properties intended for measures that support partially matching processes, i.e., processes that are not identical but differ in some steps. The paper at hand addresses these limitations. Firstly, it extends the recently introduced precision and recall conformance measures between process models and event logs that possess the desired property of monotonicity with the support of partially matching processes. Secondly, it introduces new intuitively desired properties of conformance measures that support partially matching processes and shows that our measures indeed possess them. The new measures have been implemented in a publicly available tool. The reported qualitative and quantitative evaluations based on our implementation demonstrate the feasibility of using the proposed measures in industrial settings.
Let X be a semimartingale that is locally square integrable and admits the canonical decompositions X = M + A and X = M′ + A′ with respect to measures P and P′. Let γ be the density of A − A′ with respect to C = ⟨M⟩ + ⟨M′⟩ in the Lebesgue decomposition. Then there is a version h of the Hellinger process h(1/2; P, P′) such that (1 − Δh)⁻² · h ⪰ (1/8) γ² · C, P- and P′-a.s. This inequality is related to a generalization of the Cramér–Rao inequality to the case of a filtered space. The author gives applications to a continuous-time linear regression model as well as to a discrete-time autoregression model with martingale errors.
This proceedings publication is a compilation of selected contributions from the “Third International Conference on the Dynamics of Information Systems” which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own area of study.