NONPARAMETRIC METHODS FOR ESTIMATING NOMINAL WAGE RIGIDITY
The present study analyzes the supply side of the residential housing market in Perm, Russia, focusing on sellers' heterogeneity. Many indicators of heterogeneity were considered in previous research, and all of them were shown to have a substantial impact on housing prices and time on the market. However, a gap remains in evaluating sellers' pricing strategies over time, mostly because of unavailable data. The current study isolates the effect of time on price using data on asking-price dynamics. We employ a semiparametric sample-selection estimation procedure that accounts for unobserved property characteristics and the non-random selection of objects out of the sample. We consider two main types of sellers, real estate agents and property owners, and show that real estate agents appear to be more impatient than property owners: they set a lower initial asking price and are more likely to revise it over time if the object is not sold.
A probabilistic neural network (PNN) is a well-known instance-based learning algorithm widely used in various pattern classification and regression tasks when only a rather small number of instances per class is available. A known disadvantage of this network is the high computational complexity of classification. A common way to overcome this drawback is reduction techniques that select the most typical instances. Such an approach biases the estimates of the class probability distributions and, in turn, decreases the classification accuracy. In this paper we examine another possible solution: replacing the Gaussian window and the Parzen kernel with the orthogonal-series Fejér kernel and using the naïve assumption of feature independence. It is shown that our approach achieves much better runtime complexity than either the original PNN or its modification with preliminary clustering of the training set.
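The classification step of a standard PNN amounts to a Parzen-window density estimate per class with a Gaussian kernel, followed by a maximum-a-posteriori decision; the Fejér-kernel modification described above changes only the kernel. A minimal NumPy sketch of the Gaussian baseline (function name and bandwidth value are illustrative, not taken from the paper):

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Classify x with a basic PNN: Gaussian Parzen estimate per class."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        # Squared distances from x to every training instance of class c
        d2 = np.sum((Xc - x) ** 2, axis=1)
        # Parzen-window estimate of the class-conditional density at x
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    # Pick the class with the largest estimated density
    return max(scores, key=scores.get)
```

Note that the sum runs over every stored instance of each class, which is exactly the runtime cost the abstract refers to; instance-reduction techniques shrink `Xc`, while the kernel-replacement approach changes the per-instance computation instead.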
In many applications, real high-dimensional data occupy only a very small part of the high-dimensional 'observation space' and have small intrinsic dimension. The most popular model of such data is the Manifold model, which assumes that the data lie on or near an unknown manifold (the Data Manifold, DM) of lower dimensionality embedded in an ambient high-dimensional input space (the Manifold Assumption about high-dimensional data). Manifold Learning is the Dimensionality Reduction problem under the Manifold Assumption about the processed data; its goal is to construct a low-dimensional parameterization of the DM (global low-dimensional coordinates on the DM) from a finite dataset sampled from the DM.
The Manifold Assumption means that a local neighborhood of each manifold point is equivalent to an area of low-dimensional Euclidean space. Because of this, most Manifold Learning algorithms consist of two parts: a 'local part', in which certain characteristics reflecting the low-dimensional local structure of the neighborhoods of all sample points are constructed via nonparametric estimation, and a 'global part', in which global low-dimensional coordinates on the DM are constructed by solving a certain convex optimization problem for a specific cost function depending on the local characteristics. Both the statistical properties of the 'local part' and its average over the manifold are considered in the paper. The article is an extension of the paper (Yanovich, 2016) to the case of nonparametric estimation.
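The two-part structure can be illustrated with Laplacian Eigenmaps, one concrete Manifold Learning algorithm of this kind: the 'local part' builds heat-kernel weights on a k-nearest-neighbor graph, and the 'global part' obtains the embedding by minimizing a quadratic cost, which reduces to an eigenproblem for the graph Laplacian. A hedged NumPy sketch (parameter names and defaults are illustrative; this is not the paper's own method):

```python
import numpy as np

def laplacian_eigenmaps(X, n_neighbors=5, n_components=2, t=1.0):
    """Embed the rows of X into n_components dimensions."""
    n = X.shape[0]
    # Pairwise squared distances between all sample points
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    # 'Local part': heat-kernel weights on the k-NN graph
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:n_neighbors + 1]  # skip the point itself
        W[i, idx] = np.exp(-d2[i, idx] / t)
    W = np.maximum(W, W.T)            # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                         # unnormalized graph Laplacian
    # 'Global part': bottom non-trivial eigenvectors of the normalized
    # Laplacian D^{-1/2} L D^{-1/2} give the low-dimensional coordinates
    Dm = np.diag(1.0 / np.sqrt(np.diag(D)))
    _, vecs = np.linalg.eigh(Dm @ L @ Dm)
    return Dm @ vecs[:, 1:n_components + 1]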