Article
Gaussian processes with multidimensional distribution inputs via optimal transport and Hilbertian embedding
We address the problem of distinguishing patients with autism spectrum disorders from subjects without pathology based on graphs of structural brain connectivity (connectomes). To this end, we propose to exploit possible differences between the partitions of graphs into subgraphs that are characteristic of connectomes from the norm and pathology groups. We apply four clustering methods to obtain partitions of the connectomes into subgraphs. We estimate pairwise distances between the resulting partitions and use them to construct a kernel for an SVM classifier. We then combine the resulting classifiers into a two-level model via stacking. The classification quality of the two-level model reaches 0.73 in terms of area under the ROC curve (ROC AUC).
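The pipeline above rests on two steps: measuring a distance between two partitions of the same graph, and turning pairwise distances into an SVM kernel. A minimal Python sketch of those two steps follows; the variation-of-information distance and the Gaussian transform `exp(-d^2 / (2*sigma^2))` are illustrative choices of ours, since the abstract does not specify which partition distance or kernel construction is used.

```python
# Sketch: distance between graph partitions + Gaussian kernel matrix.
# Variation of information (VI) and the Gaussian transform are assumed
# choices for illustration, not necessarily the paper's exact ones.
import math
from collections import Counter

def variation_of_information(part_a, part_b):
    """VI distance between two partitions, given as equal-length label lists."""
    n = len(part_a)
    assert n == len(part_b)
    ca, cb = Counter(part_a), Counter(part_b)
    joint = Counter(zip(part_a, part_b))
    vi = 0.0
    for (a, b), n_ab in joint.items():
        p_ab = n_ab / n
        p_a, p_b = ca[a] / n, cb[b] / n
        # Accumulates H(A|B) + H(B|A) term by term; 0 for identical partitions.
        vi -= p_ab * (math.log(p_ab / p_a) + math.log(p_ab / p_b))
    return vi

def gaussian_kernel(distances, sigma=1.0):
    """Turn a pairwise distance matrix into a Gaussian similarity matrix."""
    return [[math.exp(-d * d / (2 * sigma ** 2)) for d in row] for row in distances]

parts = [
    [0, 0, 1, 1],   # partition of 4 nodes into two clusters
    [0, 0, 1, 1],   # identical partition -> distance 0, similarity 1
    [0, 1, 0, 1],   # maximally crossed partition
]
D = [[variation_of_information(p, q) for q in parts] for p in parts]
K = gaussian_kernel(D)
print(round(D[0][1], 6), round(K[0][1], 6))
```

A matrix like `K` can then be passed to an SVM that accepts a precomputed kernel (e.g. scikit-learn's `SVC(kernel='precomputed')`), one such classifier per clustering method, before stacking their outputs.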
Varying coefficient models are useful generalizations of parametric linear models. They allow for parameters that depend on a covariate or that develop in time. They have a wide range of applications in time series analysis and regression. In time series analysis they have turned out to be a powerful approach for inference on behavioral and structural changes over time. In this paper, we are concerned with high-dimensional varying coefficient models, including the time-varying coefficient model. Most studies in high-dimensional nonparametric models treat penalization of series estimators. On the other hand, kernel smoothing is a well-established, well-understood and successful approach in nonparametric estimation, in particular in the time-varying coefficient model. But not much has been done for kernel smoothing in high-dimensional models. In this paper we close this gap and develop a penalized kernel smoothing approach for sparse high-dimensional models. The proposed estimators make use of a novel penalization scheme working with kernel smoothing. We establish a general and systematic theoretical analysis in high dimensions. This complements recent alternative approaches that are based on basis approximations and that allow more direct arguments to carry over insights from high-dimensional linear models. Furthermore, we develop theory not only for regression with independent observations but also for locally stationary time series in high-dimensional sparse varying coefficient models. The development of theory for locally stationary processes in a high-dimensional setting creates technical challenges. We also address issues of numerical implementation and of data-adaptive selection of tuning parameters for penalization. The finite sample performance of the proposed methods is studied by simulations and is illustrated by an empirical analysis of NASDAQ composite index data.
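To make the idea of penalized kernel smoothing concrete, the following is a minimal Python sketch for a sparse time-varying coefficient model y_t = x_t' beta(t) + eps_t. All names (`penalized_kernel_fit`, the bandwidth `h`, the penalty level `lam`) are ours; the sketch uses local constant smoothing with an Epanechnikov kernel and a one-step soft-thresholding rule in place of the paper's actual penalization scheme.

```python
# Hedged sketch: kernel-weighted least squares at a time point u, followed by
# lasso-type soft-thresholding to produce a sparse estimate of beta(u).
# This one-step thresholded estimator is an illustration of the general idea,
# not the paper's exact procedure.
import numpy as np

def epanechnikov(z):
    """Epanechnikov kernel weights, supported on [-1, 1]."""
    return np.where(np.abs(z) <= 1.0, 0.75 * (1.0 - z ** 2), 0.0)

def soft_threshold(b, lam):
    """Lasso-type shrinkage: coefficients below lam in magnitude become exactly 0."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

def penalized_kernel_fit(t, X, y, u, h, lam, ridge=1e-8):
    """Kernel-weighted least squares around u, then soft-thresholding.

    Thresholding the weighted LS solution only approximates the penalized
    optimum (it is exact under an orthonormal design); kept here for brevity."""
    w = epanechnikov((t - u) / h)
    A = X.T @ (X * w[:, None]) + ridge * np.eye(X.shape[1])
    b = np.linalg.solve(A, X.T @ (w * y))
    return soft_threshold(b, lam)

# Simulated sparse model: only the first of three coefficients is active,
# with beta_1(t) = sin(2*pi*t).
rng = np.random.default_rng(0)
n = 1000
t = np.linspace(0.0, 1.0, n)
X = rng.standard_normal((n, 3))
y = X[:, 0] * np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(n)

beta_hat = penalized_kernel_fit(t, X, y, u=0.25, h=0.15, lam=0.1)
print(np.round(beta_hat, 3))
```

At u = 0.25 the active coefficient is near its peak value 1, so the first entry of `beta_hat` comes out large while the two inactive coefficients are thresholded to exact zeros, which is the sparsity behavior the penalization is meant to deliver.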
This book constitutes the refereed proceedings of the 6th IAPR TC3 International Workshop on Artificial Neural Networks in Pattern Recognition, ANNPR 2014, held in Montreal, QC, Canada, in October 2014. The 24 revised full papers presented were carefully reviewed and selected from 37 submissions for inclusion in this volume. They cover a broad range of topics in the field of learning algorithms and architectures and discuss the latest research, results, and ideas in these areas.