Digital Perfusion Phantoms for Visual Perfusion Validation
Despite the increasingly broad use of perfusion applications, there is still no generally accessible means of verifying them: the apparent plausibility of perfusion maps and the good faith of perfusion software vendors remain the only grounds for acceptance. Perfusion applications are thus among the very few clinical tools that lack practical, objective, hands-on validation. MATERIALS AND METHODS. To address this problem, we introduce digital perfusion phantoms (DPPs) - numerically simulated DICOM image sequences specifically designed to have known perfusion maps with simple visual patterns. Processing a DPP sequence with any perfusion algorithm or software of choice and comparing the results with the expected DPP patterns provides a robust and straightforward way to control the quality of perfusion analysis, software, and protocols. RESULTS. The deviations from the expected DPP maps observed in each perfusion software package provided clear visualization of processing differences and possible perfusion implementation errors. CONCLUSION. Perfusion implementation errors, often hidden behind real-data anatomy and noise, become highly visible with DPPs. We strongly recommend using DPPs to verify the quality of perfusion applications.
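The DPP idea can be illustrated with a minimal numeric sketch: generate an image sequence whose pixel time-intensity curves follow a known bolus model, scaled so that a ground-truth perfusion map forms a simple left-to-right gradient. All function names and parameter values below are illustrative assumptions, not the authors' implementation (which produces DICOM output).

```python
import numpy as np

def gamma_variate(t, t0, alpha, beta):
    """Gamma-variate bolus curve, a common model for contrast passage."""
    tt = np.clip(t - t0, 0.0, None)
    return (tt ** alpha) * np.exp(-tt / beta)

def make_dpp(nx=64, ny=64, nt=40, dt=1.0):
    """Synthetic perfusion sequence whose ground-truth map (area under
    the time-intensity curve) grows linearly from left to right."""
    t = np.arange(nt) * dt
    base = gamma_variate(t, t0=5.0, alpha=3.0, beta=1.5)
    base /= base.sum() * dt                  # normalize to unit area
    scale = np.linspace(0.0, 100.0, nx)      # known ground-truth values
    frames = np.empty((nt, ny, nx))
    for k in range(nt):
        frames[k] = scale[None, :] * base[k]  # same pattern in every row
    truth = np.tile(scale, (ny, 1))
    return frames, truth

frames, truth = make_dpp()
auc = frames.sum(axis=0) * 1.0  # per-pixel area under the curve (dt = 1.0)
# any correct perfusion tool should recover the simple gradient `truth`
```

Feeding such a sequence to a perfusion package and comparing its output map against `truth` exposes deviations immediately, because any error disturbs the expected simple visual pattern.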
We study the estimation of the covariance matrix Σ of a p-dimensional normal random vector based on n independent observations corrupted by additive noise. Only a general nonparametric assumption is imposed on the distribution of the noise, without any sparsity constraint on its covariance matrix. In this high-dimensional semiparametric deconvolution problem, we propose spectral thresholding estimators that are adaptive to the sparsity of Σ. We establish an oracle inequality for these estimators under model misspecification and derive non-asymptotic minimax convergence rates that are shown to be logarithmic in log p/n. We also discuss the estimation of low-rank matrices based on indirect observations, as well as the generalization to elliptical distributions. The finite-sample performance of the thresholding estimators is illustrated in a numerical example.
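A standard building block behind sparsity-adaptive covariance estimation is entrywise soft-thresholding of the sample covariance at the universal level c·sqrt(log p / n). The sketch below shows this generic construction on noise-corrupted data; it is not the paper's exact spectral estimator, and the constant `c` is an illustrative tuning choice.

```python
import numpy as np

def soft_threshold_cov(X, c=1.0):
    """Entrywise soft-thresholding of the sample covariance at level
    c * sqrt(log p / n); the diagonal is left untouched."""
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)
    lam = c * np.sqrt(np.log(p) / n)
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

rng = np.random.default_rng(0)
Sigma = np.eye(30)
Sigma[0, 1] = Sigma[1, 0] = 0.8          # one strong off-diagonal entry
X = rng.multivariate_normal(np.zeros(30), Sigma, size=500)
X += rng.normal(scale=0.1, size=X.shape)  # additive noise corruption
S_hat = soft_threshold_cov(X, c=0.5)
# the strong entry survives thresholding; most spurious entries vanish
```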
We study the problem of designing an optimal two-dimensional circularly symmetric convolution kernel (or point spread function, PSF) with circular support of a chosen radius R. Such a function is optimal for estimating an unknown signal (image) from an observation obtained through a convolution-type distortion with additive random noise. The technique is then generalized to the case of an imprecisely known or random PSF of the measurement distortion. It is shown that the construction of the optimal convolution kernel reduces to a one-dimensional Fredholm equation of the first or second kind on the interval [0, R]. If the reconstruction PSF is sought in a finite-dimensional class of functions, the problem naturally reduces to a finite-dimensional optimization problem or even to a system of linear equations. We also analyze how reconstruction quality depends on the radius of the convolution kernel, which allows one to find a good balance between computational complexity and image reconstruction quality.
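The reduction to a linear system can be sketched for a Fredholm equation of the first kind on [0, R]: discretize the integral by quadrature and solve the resulting (ill-posed) system with Tikhonov regularization. The kernel `K`, right-hand side `b`, and regularization parameter below are placeholder assumptions; in the paper they would come from the distortion PSF and the noise statistics.

```python
import numpy as np

# Hypothetical radial kernel and right-hand side for illustration only.
def K(r, s):
    return np.exp(-(r - s) ** 2)

def b(r):
    return np.exp(-r ** 2)

R, m = 2.0, 200
r = np.linspace(0.0, R, m)
h = r[1] - r[0]
A = K(r[:, None], r[None, :]) * h        # quadrature discretization of the integral
rhs = b(r)

# First-kind equations are ill-posed, so solve the Tikhonov-regularized
# normal equations (A^T A + alpha I) w = A^T rhs instead of A w = rhs.
alpha = 1e-3
w = np.linalg.solve(A.T @ A + alpha * np.eye(m), A.T @ rhs)
```

Shrinking R shrinks the system size `m` needed for a given grid step, which is the computational side of the complexity/quality trade-off mentioned above.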
In this paper we study the problem of density deconvolution under general assumptions on the measurement error distribution. Typically, deconvolution estimators are constructed using Fourier transform techniques, and it is assumed that the characteristic function of the measurement errors does not have zeros on the real line. This assumption is rather strong and is not fulfilled in many cases of interest. In this paper we develop a methodology for constructing optimal density deconvolution estimators in the general setting that covers both vanishing and non-vanishing characteristic functions of the measurement errors. We derive upper bounds on the risk of the proposed estimators and provide sufficient conditions under which zeros of the corresponding characteristic function have no effect on estimation accuracy. Moreover, we show that the derived conditions are also necessary in some specific problem instances.
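For context, the classical Fourier-inversion deconvolution estimator that the abstract refers to looks as follows: estimate the characteristic function of the observations, divide by the (assumed known, nowhere-vanishing) error characteristic function, taper, and invert. The sketch below uses Laplace errors, whose characteristic function has no real zeros; the taper and bandwidth choices are illustrative assumptions, and removing the no-zeros assumption is precisely the paper's contribution.

```python
import numpy as np

def deconvolution_density(Y, err_cf, h, x):
    """Classical Fourier deconvolution density estimator on grid x.
    Requires err_cf (error characteristic function) to be nonzero
    on [-1/h, 1/h]."""
    t = np.linspace(-1.0 / h, 1.0 / h, 512)
    emp_cf = np.exp(1j * t[:, None] * Y[None, :]).mean(axis=1)  # empirical cf of Y
    taper = np.maximum(1.0 - (t * h) ** 2, 0.0)                 # smooth cutoff
    integrand = emp_cf * taper / err_cf(t)                      # estimate of cf of X
    dt = t[1] - t[0]
    f = (np.exp(-1j * np.outer(x, t)) * integrand[None, :]).sum(axis=1)
    return (f * dt / (2 * np.pi)).real

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, 2000)            # target density: standard normal
eps = rng.laplace(0.0, 0.3, 2000)         # Laplace errors: cf = 1/(1 + b^2 t^2)
Y = X + eps
lap_cf = lambda t: 1.0 / (1.0 + (0.3 * t) ** 2)
x = np.linspace(-3.0, 3.0, 61)
f_hat = deconvolution_density(Y, lap_cf, h=0.3, x=x)
# f_hat peaks near x = 0, roughly tracking the N(0, 1) density
```

When `err_cf` vanishes somewhere on the real line, the division step blows up, which is why the general setting treated in the paper requires a different construction.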
A model is considered for organizing cargo transportation between two node stations connected by a railway line that contains a certain number of intermediate stations. The cargo moves in one direction. Such a situation may occur, for example, if one of the node stations is located in a region that produces raw materials for a manufacturing industry located in the region of the other node station. Freight traffic is organized by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, and the rule for distributing cargo to the final node stations. The process of cargo transportation follows a prescribed control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow, which makes it necessary to "correctly" extend the solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, we were able to construct these quasi-solutions numerically and determine their rate of growth. We note that the main technical difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) on a number of model parameters characterizing the control rule, the cargo transportation technologies, and the intensity of cargo supply at a node station.
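The numerical workhorse mentioned above is the classical fourth-order Runge–Kutta scheme. A minimal generic implementation is sketched below on a toy equation; the paper's actual system with nonlocal restrictions is not reproduced here.

```python
import numpy as np

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta integration of y' = f(t, y)
    from t0 to t1 with n uniform steps; returns all intermediate states."""
    t, y = t0, np.asarray(y0, dtype=float)
    h = (t1 - t0) / n
    out = [y.copy()]
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        out.append(y.copy())
    return np.array(out)

# toy check on y' = y, y(0) = 1, whose exact solution is e^t
sol = rk4(lambda t, y: y, [1.0], 0.0, 1.0, 100)
```

For the quasi-solutions of the paper, such an integrator would be restarted after each jump point, with the jump sizes dictated by the nonlocal linear restrictions.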
Event logs collected by modern information and technical systems usually contain enough data for automated process model discovery. A variety of algorithms have been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad hoc selected parts of a log has not yet received a full-fledged implementation. This paper describes an ROLAP-based method of multidimensional event log storage for process mining. The result of the log analysis is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad hoc selection of criteria and an occurrence probability value.
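The core of such a probability-ranked graph can be sketched in a few lines: count directly-follows transitions across traces and normalize per source event. This is a generic directly-follows construction for illustration, not the paper's ROLAP implementation.

```python
from collections import Counter, defaultdict

def transition_graph(log):
    """Directed graph over events: edge weight = empirical probability
    of the successor among all transitions leaving a given event."""
    counts = defaultdict(Counter)
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    graph = {}
    for a, succ in counts.items():
        total = sum(succ.values())
        graph[a] = {b: c / total for b, c in succ.items()}
    return graph

log = [["start", "check", "ship"],
       ["start", "check", "reject"],
       ["start", "check", "ship"]]
g = transition_graph(log)
# g["check"] -> {"ship": 2/3, "reject": 1/3}
```

Filtering edges by a minimum probability before rendering yields the ranked view described above; in the ROLAP setting, the sublog itself is first selected by slicing the multidimensional log storage.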
The geographic information system (GIS) is based on the first and only Russian Imperial Census of 1897 and the First All-Union Census of the Soviet Union of 1926. The GIS features vector data (shapefiles) of all provinces of the two states. For the 1897 census, there is information about linguistic, religious, and social estate groups; the part based on the 1926 census features nationality. Both shapefiles include information on gender and on rural and urban population. The GIS allows producing any maps needed for individual studies of the period that require administrative boundaries and demographic information.
Existing approaches suggest that IT strategy should be a reflection of business strategy. In practice, however, organisations often do not follow their business strategy even when it is formally declared. Under these conditions, IT strategy can be viewed not as a plan but as an organisational shared view of the role of information systems. This approach reflects only a top-down perspective on IT strategy, so it can be supplemented by a strategic behaviour pattern (i.e., a more or less standard response to changes, formed as a result of previous experience) to implement a bottom-up approach. Two components that can help establish an effective reaction to new IT initiatives are proposed here: a model of IT-related decision making, and an efficiency measurement metric for estimating the maturity of business processes and the corresponding IT. The use of the proposed tools is demonstrated in practical cases.