8 publications found
Article
Rodomanov A., Kropotov D. Journal of Machine Learning Research. 2016. Vol. 48. P. 2597-2605.

We consider the problem of optimizing the strongly convex sum of a finite number of convex functions. Standard algorithms for solving this problem in the class of incremental/stochastic methods have at most a linear convergence rate. We propose a new incremental method whose convergence rate is superlinear: the Newton-type incremental method (NIM). The idea of the method is to introduce a model of the objective with the same sum-of-functions structure and to update a single component of the model per iteration. We prove that NIM has a superlinear local convergence rate and a linear global convergence rate. Experiments show that the method is very effective for problems with a large number of functions and a small number of variables.
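
The model-update idea can be sketched on a toy one-dimensional problem (an illustrative sketch, not the paper's implementation; the quadratic components and all names are assumptions):

```python
# Hedged sketch of a Newton-type incremental scheme: keep a quadratic model
# of each component f_i around a stored point z_i, minimize the summed model,
# and refresh a single component per iteration.

def f_grad(c, x):  # f_i(x) = 0.5*(x - c)^2, so f_i'(x) = x - c, f_i''(x) = 1
    return x - c

centers = [1.0, 3.0, 5.0, 7.0]          # data defining each component f_i
n = len(centers)

z = [0.0] * n                           # linearization points of the model
g = [f_grad(c, 0.0) for c in centers]   # stored gradients f_i'(z_i)
h = [1.0] * n                           # stored Hessians f_i''(z_i) (constant here)

x = 0.0
for t in range(20):
    # minimize the summed model: sum_i (g_i + h_i*(x - z_i)) = 0
    x = sum(h[i] * z[i] - g[i] for i in range(n)) / sum(h)
    j = t % n                           # update a single component per iteration
    z[j] = x
    g[j] = f_grad(centers[j], x)

print(x)  # minimizer of the sum: mean(centers) = 4.0
```

Because each toy component is itself quadratic, the model is exact and the iterates reach the minimizer of the sum immediately; for general convex components the stored gradients and Hessians only approximate each f_i near z_i.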

Added: March 11, 2017
Article
Bartunov S., Vetrov D., Kondrashkin D. et al. Journal of Machine Learning Research. 2016. Vol. 51. P. 130-138.

The recently proposed Skip-gram model is a powerful method for learning high-dimensional word representations that capture rich semantic relationships between words. However, Skip-gram, like most prior work on learning word representations, does not take word ambiguity into account and maintains only a single representation per word. Although a number of Skip-gram modifications have been proposed to overcome this limitation and learn multi-prototype word representations, they either require a known number of word meanings or learn them using greedy heuristics. In this paper we propose the Adaptive Skip-gram model, a nonparametric Bayesian extension of Skip-gram capable of automatically learning the required number of representations for all words at the desired semantic resolution. We derive an efficient online variational learning algorithm for the model and empirically demonstrate its efficiency on the word-sense induction task.
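
The nonparametric ingredient that lets the number of senses adapt can be illustrated with a truncated stick-breaking prior (a hedged sketch of the general Dirichlet-process idea, not AdaGram's training code; the parameter values and the "active" threshold are illustrative):

```python
# A truncated stick-breaking prior concentrates mass on a few senses, so only
# as many prototypes as the data supports carry non-negligible probability.
import random

def stick_breaking(alpha, truncation, rng):
    """Sample sense probabilities from a truncated stick-breaking process."""
    weights, remaining = [], 1.0
    for _ in range(truncation):
        v = rng.betavariate(1.0, alpha)
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights

rng = random.Random(0)
probs = stick_breaking(alpha=0.1, truncation=10, rng=rng)
# count senses that carry non-negligible probability mass
active = sum(p > 0.01 for p in probs)
print(active, round(sum(probs), 3))
```

In the model itself such a prior sits over the sense assignments of each word, and the variational posterior prunes unused prototypes automatically.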

Added: October 1, 2016
Article
Andresen A., Spokoiny V. Journal of Machine Learning Research. 2016. No. 17(63). P. 1-53.

We derive two convergence results for a sequential alternating maximization procedure that approximates the maximizer of random functionals such as the realized log-likelihood in maximum likelihood estimation. We show that the sequence attains the same deviation properties as shown for the profile M-estimator by Andresen and Spokoiny (2013), that is, a finite-sample Wilks and Fisher theorem. Further, under slightly stronger smoothness constraints on the random functional, we show nearly linear convergence to the global maximizer if the starting point for the procedure is well chosen.
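
The alternating scheme can be sketched on a toy two-block concave objective (the objective and its closed-form block updates are assumptions for illustration, not the paper's functional):

```python
# Sequential alternating maximization of f(a, b) = -(a - b)^2 - (a - 1)^2:
# each step maximizes over one block of variables with the other held fixed.

def argmax_a(b):   # solve df/da = 0:  -2(a - b) - 2(a - 1) = 0
    return (b + 1.0) / 2.0

def argmax_b(a):   # solve df/db = 0:  2(a - b) = 0
    return a

a, b = 5.0, -3.0                 # the starting point matters for the theory
for _ in range(50):
    a = argmax_a(b)
    b = argmax_b(a)

print(round(a, 6), round(b, 6))  # both approach the global maximizer (1, 1)
```

Here the composed update is a contraction (a ← (a + 1)/2), which is the kind of linear convergence the paper establishes, under far weaker assumptions, for the random likelihood functional.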

Added: September 8, 2016
Article
Santoro A., Bartunov S., Botvinick M. et al. Journal of Machine Learning Research. 2016. Vol. 48.
Added: October 19, 2016
Article
V'yugin V. Journal of Machine Learning Research. 2011. Vol. 12. P. 33-56.

In this paper the sequential prediction problem with expert advice is considered for the case where the losses suffered by experts at each step cannot be bounded in advance. We present a modification of the Kalai and Vempala follow-the-perturbed-leader algorithm in which the weights depend on the past losses of the experts. New notions of the volume and the scaled fluctuation of a game are introduced. We present a probabilistic algorithm that is protected from unrestrictedly large one-step losses. This algorithm achieves optimal performance when the scaled fluctuations of the experts' one-step losses tend to zero.
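
Plain follow-the-perturbed-leader can be sketched as follows (a generic FPL sketch with a fixed perturbation scale, not the paper's loss-adaptive weighting; the loss sequence, scale, and seed are illustrative):

```python
# Follow-the-perturbed-leader: at each step, follow the expert whose
# cumulative past loss minus a random exponential perturbation is smallest.
import random

rng = random.Random(42)
n_experts = 3
cum_loss = [0.0] * n_experts
my_loss = 0.0
eps = 1.0                               # perturbation scale (illustrative)

for t in range(200):
    losses = [0.1, 0.5, 0.9]            # toy losses; expert 0 is best overall
    perturbed = [cum_loss[i] - rng.expovariate(eps) for i in range(n_experts)]
    leader = min(range(n_experts), key=lambda i: perturbed[i])
    my_loss += losses[leader]           # suffer the chosen expert's loss
    for i in range(n_experts):
        cum_loss[i] += losses[i]

regret = my_loss - min(cum_loss)        # loss relative to the best expert
print(regret >= 0.0)
```

The paper's contribution is precisely what this sketch lacks: when one-step losses are unbounded, a fixed perturbation scale fails, so the weights must adapt to the past losses through the volume and scaled fluctuation of the game.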

Added: March 18, 2013
Article
Vetrov D., Osokin A., Rodomanov A. et al. Journal of Machine Learning Research. 2014.

In this paper we present a new framework for dealing with probabilistic graphical models. Our approach relies on the recently proposed Tensor Train format (TT-format) of a tensor, which, while compact, allows for efficient application of linear algebra operations. We present a way to convert the energy of a Markov random field to the TT-format and show how the properties of the TT-format can be exploited to attack the tasks of partition function estimation and MAP inference. We provide theoretical guarantees on the accuracy of the proposed algorithm for estimating the partition function and compare our methods against several state-of-the-art algorithms.
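
The computational benefit of the TT-format for partition function estimation can be sketched in a toy setting: if the unnormalized probability tensor is given exactly by TT cores, the sum over all n^d configurations collapses to a product of d small matrices (the random cores below are purely illustrative; the paper additionally handles converting MRF energies to TT-format and controls the approximation error):

```python
# If P(x1,...,xd) = G1[x1] @ G2[x2] @ ... @ Gd[xd] (its TT cores),
# then sum_x P(x) = (sum_x1 G1[x1]) @ ... @ (sum_xd Gd[xd]).
from itertools import product
import numpy as np

rng = np.random.default_rng(0)
d, n, r = 6, 3, 4                        # variables, states per variable, TT-rank
cores = ([rng.random((n, 1, r))]
         + [rng.random((n, r, r)) for _ in range(d - 2)]
         + [rng.random((n, r, 1))])

# naive partition function: explicit sum over all n**d configurations
Z_naive = 0.0
for x in product(range(n), repeat=d):
    m = np.eye(1)
    for k, xk in enumerate(x):
        m = m @ cores[k][xk]
    Z_naive += m[0, 0]

# TT-based partition function: sum each core over its state first
m = np.eye(1)
for core in cores:
    m = m @ core.sum(axis=0)
Z_tt = m[0, 0]

print(np.isclose(Z_naive, Z_tt))  # same value, exponentially cheaper
```

The TT route costs O(d * n * r^2) matrix work instead of O(n^d) terms, which is what makes partition function estimation tractable once the energy is in TT-format.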

Added: March 18, 2015
Article
Lisitsyn S., Widmer C., Garcia F. J. Journal of Machine Learning Research. 2013. Vol. 14. P. 2355-2359.
Added: December 17, 2016
Article
Bartunov S. O., Vetrov D. Journal of Machine Learning Research. 2014. Vol. 32. No. 1. P. 1404-1412.
Added: July 9, 2014