Novelty Detection Using Elliptical Fuzzy Clustering in a Reproducing Kernel Hilbert Space
The two-volume set LNCS 11508 and 11509 constitutes the refereed proceedings of the 18th International Conference on Artificial Intelligence and Soft Computing, ICAISC 2019, held in Zakopane, Poland, in June 2019.
The 122 revised full papers presented were carefully reviewed and selected from 333 submissions. The papers included in the first volume are organized in the following five parts: neural networks and their applications; fuzzy systems and their applications; evolutionary algorithms and their applications; pattern classification; artificial intelligence in modeling and simulation.
The papers included in the second volume are organized in the following five parts: computer vision, image and speech analysis; bioinformatics, biometrics, and medical applications; data mining; various problems of artificial intelligence; agent systems, robotics and control.
Varying coefficient models are useful generalizations of parametric linear models. They allow for parameters that depend on a covariate or that develop over time, and they have a wide range of applications in time series analysis and regression. In time series analysis they have turned out to be a powerful approach for inferring behavioral and structural changes over time. In this paper, we are concerned with high-dimensional varying coefficient models, including the time-varying coefficient model. Most studies in high-dimensional nonparametric models treat penalization of series estimators. On the other hand, kernel smoothing is a well-established, well-understood and successful approach in nonparametric estimation, in particular for the time-varying coefficient model, but little has been done for kernel smoothing in high-dimensional models. In this paper we close this gap and develop a penalized kernel smoothing approach for sparse high-dimensional models. The proposed estimators make use of a novel penalization scheme that works with kernel smoothing. We establish a general and systematic theoretical analysis in high dimensions. This complements recent alternative approaches that are based on basis approximations and that allow more direct arguments for carrying over insights from high-dimensional linear models. Furthermore, we develop theory not only for regression with independent observations but also for locally stationary time series in high-dimensional sparse varying coefficient models. The development of theory for locally stationary processes in a high-dimensional setting creates technical challenges. We also address issues of numerical implementation and of data-adaptive selection of tuning parameters for penalization. The finite sample performance of the proposed methods is studied by simulations and illustrated by an empirical analysis of NASDAQ composite index data.
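As a rough illustration of the kind of estimator involved, the sketch below fits the coefficient vector of a time-varying coefficient model at a single time point by kernel-weighted coordinate-descent lasso. The Epanechnikov kernel, the plain lasso penalty, and all parameter values are assumptions made for illustration; the paper's own penalization scheme is not reproduced here.

```python
# Minimal sketch: kernel-weighted lasso for y_t = x_t' beta(t/T) + eps_t.
# The penalty is an ordinary lasso, NOT the paper's penalization scheme.
import numpy as np

def epanechnikov(z):
    return 0.75 * np.maximum(1.0 - z**2, 0.0)

def local_lasso_fit(X, y, u, h, lam, n_iter=200):
    """Estimate beta(u) by kernel-weighted coordinate-descent lasso."""
    T, p = X.shape
    t = np.arange(1, T + 1) / T
    w = epanechnikov((t - u) / h)                  # localization weights around u
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual w.r.t. coordinate j
            rho = np.sum(w * X[:, j] * r)
            z = np.sum(w * X[:, j] ** 2)
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / (z + 1e-12)
    return beta

# toy usage: sparse time-varying signal, only the first coefficient is active
rng = np.random.default_rng(0)
T, p = 400, 50
X = rng.standard_normal((T, p))
t = np.arange(1, T + 1) / T
y = np.sin(2 * np.pi * t) * X[:, 0] + 0.1 * rng.standard_normal(T)
print(np.round(local_lasso_fit(X, y, u=0.25, h=0.1, lam=2.0)[:3], 2))
```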
This paper tackles the problem of brain network classification with machine learning algorithms using the spectra of the networks' matrices. Two approaches are discussed: first, linear and tree-based models are trained on the vectors of sorted eigenvalues of the adjacency matrix, the Laplacian matrix and the normalized Laplacian; second, an SVM classifier is trained with kernels based on information divergence between the eigenvalue distributions. The latter approach gives promising results in the classification of autism spectrum disorder versus typical development and of carriers versus noncarriers of an allele associated with a high risk of Alzheimer's disease.
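The second approach can be sketched as follows, assuming a Jensen-Shannon divergence between eigenvalue histograms and a kernel of the form exp(-gamma * D^2); the divergences and kernel constructions actually used in the paper may differ.

```python
# Hedged sketch: SVM with a precomputed kernel built from a divergence
# between Laplacian eigenvalue distributions of brain networks.
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.svm import SVC

def laplacian_spectrum_hist(A, bins=30):
    """Normalized histogram of graph-Laplacian eigenvalues on the shared range [0, 2n]."""
    L = np.diag(A.sum(axis=1)) - A
    ev = np.linalg.eigvalsh(L)
    hist, _ = np.histogram(ev, bins=bins, range=(0.0, 2.0 * A.shape[0]))
    return hist / hist.sum()

def divergence_kernel(hists, gamma=1.0):
    """Kernel matrix K[i, j] = exp(-gamma * JSD(h_i, h_j)^2)."""
    n = len(hists)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.exp(-gamma * jensenshannon(hists[i], hists[j]) ** 2)
    return K

# toy usage: random symmetric "connectivity" matrices with random labels
rng = np.random.default_rng(1)
graphs = []
for _ in range(12):
    M = rng.random((20, 20))
    A = (M + M.T) / 2
    np.fill_diagonal(A, 0.0)
    graphs.append(A)
labels = rng.integers(0, 2, size=12)
H = [laplacian_spectrum_hist(A) for A in graphs]
clf = SVC(kernel="precomputed").fit(divergence_kernel(H), labels)
print(clf.predict(divergence_kernel(H))[:5])
```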
We define a most specific generalization of a fuzzy set of topics assigned to leaves of the rooted tree of a domain taxonomy. This generalization lifts the set to its 'head subject' in the higher ranks of the taxonomy tree. The head subject is supposed to 'tightly' cover the query set, possibly bringing in some errors, both 'gaps' and 'offshoots'. Our method globally minimizes a penalty function combining the numbers of head subjects, gaps and offshoots, each weighted differently. We apply this to a collection of about 18,000 research papers published in Springer journals on Data Science over the past 20 years. We extract a taxonomy of Data Science from the Association for Computing Machinery Computing Classification System 2012 (ACM-CCS). We find fuzzy clusters of leaf topics over the text collection and use the lifted head subjects of the thematic clusters to comment on the tendencies of current research in the corresponding aspects of the domain.
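A crisp, simplified sketch of the lifting idea is given below. The penalty weights, the bottom-up dynamic program, and the crisp (non-fuzzy) treatment of the query set are assumptions made for illustration only; the paper's method works with fuzzy membership weights and its own penalty weighting.

```python
# Hedged sketch: lift a query set of leaf topics to "head subjects" on a rooted
# taxonomy tree, minimizing  lambda_head*#heads + lambda_gap*#gaps + lambda_off*#offshoots.
LAMBDA_HEAD, LAMBDA_GAP, LAMBDA_OFF = 1.0, 0.2, 0.8

def collect_leaves(tree, node):
    children = tree.get(node, [])
    if not children:
        return [node]
    return [l for c in children for l in collect_leaves(tree, c)]

def lift(tree, node, query):
    """Return (cost, heads) covering the query leaves in the subtree of `node`.
    `tree` maps each node to its list of children; leaves have no children."""
    children = tree.get(node, [])
    if not children:                                   # leaf topic
        if node in query:
            # either head itself (no gaps) or leave it as an offshoot
            return min((LAMBDA_HEAD, [node]), (LAMBDA_OFF, []))
        return (0.0, [])
    # option 1: combine the children's best solutions
    child_costs, child_heads = zip(*(lift(tree, c, query) for c in children))
    combine = (sum(child_costs), [h for hs in child_heads for h in hs])
    # option 2: make this node the head, paying one gap per non-query leaf below
    leaves = collect_leaves(tree, node)
    n_query = sum(1 for l in leaves if l in query)
    if n_query == 0:
        return combine
    head = (LAMBDA_HEAD + LAMBDA_GAP * (len(leaves) - n_query), [node])
    return min(combine, head, key=lambda t: t[0])

# toy taxonomy: root -> {clustering -> {kmeans, spectral}, ranking -> {pagerank}}
tree = {"root": ["clustering", "ranking"],
        "clustering": ["kmeans", "spectral"], "ranking": ["pagerank"]}
print(lift(tree, "root", {"kmeans", "spectral"}))      # lifts to 'clustering'
```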
This book constitutes the refereed proceedings of the 6th IAPR TC3 International Workshop on Artificial Neural Networks in Pattern Recognition, ANNPR 2014, held in Montreal, QC, Canada, in October 2014. The 24 revised full papers presented were carefully reviewed and selected from 37 submissions for inclusion in this volume. They cover a wide range of topics in the field of learning algorithms and architectures and discuss the latest research, results, and ideas in these areas.
We define and find a most specific generalization of a fuzzy set of topics assigned to leaves of the rooted tree of a taxonomy. This generalization lifts the set to a "head subject" in the higher ranks of the taxonomy, which is supposed to "tightly" cover the query set, possibly bringing in some errors, both "gaps" and "offshoots". The method globally minimizes a penalty combining head subjects, gaps and offshoots. We apply this to extract research tendencies from a collection of about 18,000 research papers published in Springer journals on data science. We consider a taxonomy of Data Science based on the Association for Computing Machinery Computing Classification System 2012 (ACM-CCS). We find fuzzy clusters of leaf topics over the text collection and use the thematic clusters' head subjects to comment on the tendencies of research.
In vivo evaluation of brain white matter maturation is still a challenging task with no existing gold standard. In this article we propose an original approach to evaluate the early maturation of the white matter bundles, based on a comparison of infant and adult groups using the Mahalanobis distance computed from four complementary MRI parameters: quantitative qT1 and qT2 relaxation times, and longitudinal λ║ and transverse λ⊥ diffusivities from diffusion tensor imaging. Such a multi-parametric approach is expected to describe maturational asynchrony better than conventional univariate approaches because it takes into account the complementary dependencies of the parameters on different maturational processes, notably the decrease in water content and the myelination. Our approach was tested on 17 healthy infants (aged 3 to 21 weeks) for 18 different bundles. It confirmed the maturational asynchrony across the bundles in detail: the spino-thalamic tract, the optic radiations, the cortico-spinal tract and the fornix have the most advanced maturation, while the superior longitudinal and arcuate fasciculi, the anterior limb of the internal capsule and the external capsule have the most delayed maturation. Furthermore, this approach was more reliable than univariate approaches, as it revealed more maturational relationships between the bundles and did not violate a priori assumptions on the temporal order of bundle maturation. Mahalanobis distances decreased exponentially with age in all bundles, with the only difference between bundles explained by different onsets of maturation. Estimation of these relative delays confirmed that the most dramatic changes occur during the first post-natal year.
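A minimal sketch of the multi-parametric comparison, under stated assumptions, is given below: each infant bundle is summarized by the four MRI parameters, its Mahalanobis distance to the adult group is computed from the adult mean and covariance, and an exponential decrease with age is then fitted. The array shapes, parameter values and the exact exponential form are illustrative assumptions, not data or formulas taken from the paper.

```python
# Hedged sketch: Mahalanobis distance of infant bundles to an adult reference
# group over four MRI parameters (qT1, qT2, lambda_par, lambda_perp).
import numpy as np
from scipy.optimize import curve_fit

def mahalanobis_to_adults(infant_params, adult_params):
    """infant_params: (n_infants, 4); adult_params: (n_adults, 4) for one bundle."""
    mu = adult_params.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(adult_params, rowvar=False))
    diff = infant_params - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# toy data for one bundle: 17 infants, 20 adults, 4 parameters each
rng = np.random.default_rng(2)
adults = rng.normal([800, 80, 1.2, 0.5], [50, 8, 0.1, 0.05], size=(20, 4))
ages = np.linspace(3, 21, 17)                           # age in weeks
infants = adults.mean(0) * (1 + np.exp(-ages / 10))[:, None] \
          + rng.normal(0, 0.05, size=(17, 4)) * adults.std(0)
d = mahalanobis_to_adults(infants, adults)

# exponential decrease of the distance with age, as reported in the abstract
model = lambda age, a, tau, c: a * np.exp(-age / tau) + c
(a, tau, c), _ = curve_fit(model, ages, d, p0=(d.max(), 10.0, d.min()))
print(f"fitted decay constant tau ~ {tau:.1f} weeks")
```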