Book chapter
Topic Models Regularization and Initialization for Regression Problems
We propose a new method of feature extraction for regression problems with text data that transforms sparse texts into dense features using regularized topic models. We also discuss the problem of topic model initialization and propose a new approach based on Naive Bayes. Compared with many alternatives, this approach achieves quality comparable to vector space models using as few as ten topics. It also outperforms other methods of feature generation based on topic modeling, such as PLSA and Supervised LDA.
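The overall pipeline (sparse bag-of-words → dense topic features → regression) can be illustrated with a minimal sketch. This is not the authors' regularized model or Naive Bayes initialization; it uses scikit-learn's plain LDA as a stand-in, with toy documents and targets invented for illustration:

```python
# Sketch: dense topic features for regression (toy data, plain LDA stand-in).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import Ridge

docs = [
    "interest rates rise as the central bank tightens policy",
    "the team won the championship after a late goal",
    "bond yields fall while inflation expectations cool",
    "the striker scored twice in the final match",
]
y = [3.1, 0.2, 2.8, 0.1]  # toy regression targets

counts = CountVectorizer().fit_transform(docs)      # sparse term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0)
dense = lda.fit_transform(counts)                   # dense doc-topic features
model = Ridge().fit(dense, y)                       # regression on topic space
print(dense.shape)  # (4, 2): the paper uses ~10 topics, 2 here for toy data
```

The point is dimensionality: the regressor sees a handful of dense topic proportions instead of a high-dimensional sparse term space.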
The paper proposes a substantial classification of collocates (pairs of words that tend to co-occur), along with heuristics that can help to attribute a word pair to the proper type automatically.
The best-studied type is frequent phrases, which includes idioms, lexicographic collocations, and syntactic selection. Pairs of this type are known to occur at a short distance and can be singled out by choosing a narrow window for collecting co-occurrence data.
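The narrow-window signal can be sketched as follows. This is an illustrative counter, not the paper's heuristics; the function name and the example sentence are invented:

```python
# Sketch: count word pairs co-occurring within a narrow window, the signal
# used to single out the "frequent phrase" collocate type.
from collections import Counter

def window_pairs(tokens, window=2):
    """Count unordered word pairs co-occurring within `window` tokens."""
    pairs = Counter()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1 : i + 1 + window]:
            if v != w:
                pairs[tuple(sorted((w, v)))] += 1
    return pairs

tokens = "kick the bucket and kick the ball".split()
print(window_pairs(tokens).most_common(2))  # frequent short-distance pairs
```

Widening the window would instead pick up topically related pairs, which is why a narrow window isolates the phrase type.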
The next most salient type is topically related pairs. These can be identified by considering word frequencies in individual documents, as in the well-known distributional topic models.
The third type is pairs that occur in repeated text fragments, such as popular quotes or standard legal formulae. The characteristic feature of these is that the fragment contains several aligned words repeated in the same sequence. Such pairs are normally filtered out for most practical purposes, but filtering is usually applied only to exact repeats; we propose a method of capturing inexact repetition.
Hypothetically, one could also expect to find a fourth type: collocate pairs linked by an intrinsic semantic relation or a long-distance syntactic relation. Such a link would guarantee co-occurrence within a certain relatively restricted range of distances, a range narrower than in the case of a purely topical connection, but not as narrow as in repeats. However, we did not find many cases of this sort in the preliminary empirical study.
The main objective of the workshop is to bring together researchers who are interested in applications of topic models and improving their output. Our goal is to create a broad platform for researchers to share ideas that could improve the usability and interpretation of topic models. We hope this workshop will promote topic model applications in other research areas, making their use more effective.
We received a total of 12 paper submissions from around the world, which were subject to a rigorous peer review process by an international program committee. A total of 8 papers were accepted for publication and appear in the workshop proceedings. In keeping with the theme of "Post-processing and Applications", there was strong representation of topic model applications among the accepted papers.
Optimization, simulation and control are very powerful tools in engineering and mathematics, and play an increasingly important role. Because of their various real-world applications in industries such as finance, economics, and telecommunications, research in these fields is accelerating at a rapid pace, and there have been major algorithmic and theoretical developments in these fields in the last decade.
This volume brings together the latest developments in these areas of research and presents applications of these results to a wide range of real-world problems.
- Collection of selected contributions giving a state-of-the-art account of recent developments in the field
- Covers a broad range of topics in optimization and optimal control, including unique applications
- Written by an international group of experts in their respective disciplines
- Broad audience of researchers, practitioners, and advanced graduate students in applied mathematics and engineering

The paper describes the results of an experimental study of statistical topic models applied to the task of automatic single-word term extraction. The English part of the Europarl parallel corpus from the socio-political domain and Russian articles taken from online banking magazines were used as target text collections. The experiments demonstrate that topic information can improve the quality of single-word term extraction regardless of the subject area and the target language.
The paper describes the results of an experimental study of topic models applied to the task of single-word term extraction. The experiments encompass several probabilistic and non-probabilistic topic models and demonstrate that topic information improves the quality of term extraction, and that NMF with KL-divergence minimization performs best among the models under study.
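The distinction between NMF objectives can be shown with scikit-learn, where the KL-minimizing variant described above corresponds to `beta_loss="kullback-leibler"` with the multiplicative-update solver. A hedged sketch on a random toy term-count matrix (the data and topic count are invented):

```python
# Sketch: NMF with Frobenius vs. KL-divergence loss in scikit-learn.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(20, 50)).astype(float)  # toy term-count matrix

nmf_fro = NMF(n_components=5, init="nndsvda", random_state=0)
W_fro = nmf_fro.fit_transform(X)                   # baseline Frobenius loss

nmf_kl = NMF(n_components=5, init="nndsvda", solver="mu",
             beta_loss="kullback-leibler", random_state=0)
W_kl = nmf_kl.fit_transform(X)                     # KL loss, "mu" solver
H_kl = nmf_kl.components_                          # topic-term weights
print(W_kl.shape, H_kl.shape)                      # (20, 5) (5, 50)
```

For term extraction, the topic-term matrix `H_kl` is the object of interest: topic-specific weights of candidate terms.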
The book includes abstracts of reports presented at the IX International Conference on Optimization Methods and Applications "Optimization and Applications" (OPTIMA-2018), held in Petrovac, Montenegro, October 1–5, 2018.
ARTM advantages:
- ARTM is much simpler than Bayesian inference
- ARTM focuses on formalizing task-specific requirements
- ARTM simplifies multi-objective PTM learning
- ARTM reduces barriers to entry into the PTM research field
- ARTM encourages the development of a regularization library

ARTM restrictions:
- Choosing a regularization path is a new open issue for PTMs
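The core ARTM idea can be sketched in a few lines of NumPy: a PLSA-style EM loop whose M-step adds the regularizer's gradient term to the expected counts. This is a toy sketch, not the BigARTM library; the corpus, topic count, and the single sparsing coefficient `tau` are all invented for illustration:

```python
# Sketch: PLSA EM with an additive sparsing regularizer on phi, in the
# spirit of ARTM's M-step  phi_wt ∝ max(n_wt + phi_wt * dR/dphi_wt, 0).
# For the sparsing regularizer R = -tau * sum log phi, that term is -tau.
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_words, n_topics, tau = 10, 30, 3, 0.1
ndw = rng.poisson(1.0, size=(n_docs, n_words)).astype(float)  # doc-word counts

phi = rng.random((n_words, n_topics)); phi /= phi.sum(0)      # p(w|t)
theta = rng.random((n_topics, n_docs)); theta /= theta.sum(0) # p(t|d)

for _ in range(50):
    pdw = phi @ theta + 1e-12                   # E-step: p(w|d), (words, docs)
    ntw = phi * ((ndw.T / pdw) @ theta.T)       # expected topic-word counts
    ntd = theta * (phi.T @ (ndw.T / pdw))       # expected topic-doc counts
    phi = np.maximum(ntw - tau, 0)              # M-step with sparsing term
    phi /= phi.sum(0) + 1e-12
    theta = ntd / ntd.sum(0)

print(float(np.mean(phi == 0)))  # fraction of phi entries zeroed by sparsing
```

Other regularizers (smoothing, decorrelation) plug into the same M-step additively, which is what makes multi-objective learning a matter of summing regularizer terms.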