Improving the Precision of Wireless Localization Algorithms: ML Techniques for Indoor Positioning
Due to the tremendous increase in the number of wearable devices and proximity-based services, the need for improved indoor localization techniques has become more significant. Positioning hardware continues to evolve alongside various software-based approaches, many of them powered by Machine Learning (ML). In this paper, we apply ML algorithms to signal parameters collected in a real-life indoor localization system based on Ultra-Wideband (UWB) technology in order to analyze and classify the signal. The contribution aims to answer the question of whether an indoor positioning system can benefit from utilizing ML for signal parameter analysis to increase its location accuracy, reliability, and robustness across various environments. To this end, we compare different ML approaches and detail the trade-off between computational speed and accuracy.
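The abstract does not include code; as a minimal, hypothetical sketch of the kind of comparison it describes, the snippet below trains two classifiers on synthetic UWB-like signal parameters (the feature names, value ranges, and data are illustrative assumptions, not the paper's dataset) and records their test accuracy, which is the basic ingredient of a speed-versus-accuracy trade-off study.

```python
# Illustrative sketch only: synthetic LOS/NLOS classification from
# hypothetical UWB signal parameters. Not the paper's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Assumed features: received power (dBm), first-path power (dBm), rise time (ns)
X_los = rng.normal([-80.0, -82.0, 5.0], [2.0, 2.0, 1.0], size=(n // 2, 3))
X_nlos = rng.normal([-85.0, -92.0, 12.0], [3.0, 3.0, 2.0], size=(n // 2, 3))
X = np.vstack([X_los, X_nlos])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 0 = LOS, 1 = NLOS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for name, clf in [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("random_forest", RandomForestClassifier(n_estimators=50, random_state=0)),
]:
    clf.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, clf.predict(X_te))
```

In a real study one would also time `fit` and `predict` for each model to quantify the computational side of the trade-off.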
We present a model for freight train time prediction based on station network analysis and dedicated feature engineering. We discuss the first pipeline to improve freight flight duration prediction in Russia. While every freight company uses only the reference book published by RZD (Russian Railways), which is based on railroad distances and has an accuracy measured in days, we argue that flight duration can be predicted with an error of less than twenty hours, decreasing to twelve hours for certain types of freight trains.
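The pipeline itself is not shown in the abstract; the following is a hedged sketch of a duration-regression setup on synthetic data (the features, the duration model, and the noise level are invented assumptions) that illustrates how an engineered-feature regressor can be evaluated against an hour-scale error target.

```python
# Illustrative sketch only: regression of trip duration (hours) from
# hypothetical engineered features on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
distance_km = rng.uniform(200, 3000, n)      # assumed route length
n_stops = rng.integers(1, 20, n)             # assumed intermediate stations
train_type = rng.integers(0, 3, n)           # hypothetical freight-train category
# Synthetic ground truth: travel time + per-stop delay + type offset + noise
duration_h = (distance_km / 45.0 + 1.5 * n_stops
              + 4.0 * train_type + rng.normal(0, 6, n))

X = np.column_stack([distance_km, n_stops, train_type])
X_tr, X_te, y_tr, y_te = train_test_split(X, duration_h,
                                          test_size=0.25, random_state=1)

model = GradientBoostingRegressor(random_state=1).fit(X_tr, y_tr)
mae_hours = mean_absolute_error(y_te, model.predict(X_te))
```

With station-network features in place of these toy columns, the same evaluation loop would report whether the sub-twenty-hour error claim holds on held-out trips.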
A measurement of the charm-mixing parameter yCP using D0 → K+K−, D0 → π+π−, and D0 → K−π+ decays is reported. The D0 mesons are required to originate from semimuonic decays of B− and B0 mesons. These decays are partially reconstructed in a data set of proton-proton collisions at center-of-mass energies of 7 and 8 TeV collected with the LHCb experiment and corresponding to an integrated luminosity of 3 fb−1. The yCP parameter is measured to be (0.57 ± 0.13(stat) ± 0.09(syst))%, in agreement with, and as precise as, the current world-average value.
The law of accelerating returns can be viewed as a concept describing the acceleration of technological progress: tools are used to develop more advanced tools, which in turn are applied to create even more advanced tools, and so on. A similar idea has been implemented in algorithms for advancing artificial intelligence. In this paper, the results of applying these algorithms in games are discussed. Real-life tasks, however, are more complicated. The game-theoretic approach can be applied to make the transition from theoretical, unrealistic games to more complex and practical tasks. Applications of the game-theoretic approach to advancing artificial intelligence in solving tasks in the credit industry are proposed.
Proceedings of Machine Learning Research: Volume 97: International Conference on Machine Learning, 9-15 June 2019, Long Beach, California, USA
The book by Adrian Mackenzie, professor in the Department of Sociology at Lancaster University, is of a kind unprecedented within the emerging, but still limited, literature in the humanities and social sciences that explores how machine learning (ML) works. The spectacular advances of this branch of artificial intelligence (AI) in recent years have eclipsed the other approaches in the field and have suddenly turned AI into a social and political problem. Several authors have already insisted on the need to focus attention on the tools of AI themselves, pointing out the limits of work that deals only with the social effects of "algorithms". As the anthropologist of science and technology Nick Seaver observes, most work on the subject frets over "algorithms" or "big data", stressing their harmful, even catastrophic, effects on society without ever specifying exactly what they are. The transfer of knowledge and perspectives between specialists in AI and in the humanities and social sciences (in both directions, moreover) is nevertheless indispensable for producing an informed and effective critique.
A search for CP violation in the Cabibbo-suppressed D0 → K+K−π+π− decay mode is performed using an amplitude analysis. The measurement uses a sample of pp collisions recorded by the LHCb experiment during 2011 and 2012, corresponding to an integrated luminosity of 3.0 fb−1. The D0 mesons are reconstructed from semileptonic b-hadron decays into D0μ−X final states. The selected sample contains more than 160 000 signal decays, allowing the most precise amplitude modelling of this D0 decay to date. The obtained amplitude model is used to perform the search for CP violation. The result is compatible with CP symmetry, with a sensitivity ranging from 1% to 15% depending on the amplitude considered.
Bayesian inference is known to provide a general framework for incorporating prior knowledge or specific properties into machine learning models by carefully choosing a prior distribution. In this work, we propose a new type of prior distribution for convolutional neural networks, the deep weight prior (DWP), which exploits generative models to encourage a specific structure in trained convolutional filters, e.g., spatial correlations of weights. We define the DWP in the form of an implicit distribution and propose a method for variational inference with this type of implicit prior. In experiments, we show that the DWP improves the performance of Bayesian neural networks when training data are limited, and that initializing weights with samples from the DWP accelerates the training of conventional convolutional neural networks.
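One of the claimed benefits is initializing convolutional filters with samples from the learned prior. As a minimal, hypothetical illustration of that idea (not the paper's method), the sketch below stands in for the learned generative model with a fixed random linear decoder: filters are drawn by decoding latent codes rather than sampled i.i.d., so they inherit shared structure from the decoder.

```python
# Illustrative sketch only: sampling 3x3 conv filters by decoding latent
# codes, mimicking initialization from a deep-weight-prior-style generative
# model. The linear decoder here is a placeholder assumption; in the paper
# this would be a generative model trained on filters from source tasks.
import numpy as np

rng = np.random.default_rng(42)

latent_dim = 4
# Placeholder "decoder": latent code (dim 4) -> flattened 3x3 filter (dim 9)
W_dec = rng.normal(0.0, 0.5, size=(9, latent_dim))

def sample_filters_from_prior(n_filters):
    """Draw filters by decoding latent samples instead of sampling
    each weight independently."""
    z = rng.normal(size=(n_filters, latent_dim))
    return (z @ W_dec.T).reshape(n_filters, 3, 3)

filters = sample_filters_from_prior(16)
```

These samples could then be assigned as the initial kernels of a convolutional layer before standard training.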