Numerical Optimization for the Artificial Retina Algorithm
High-energy physics experiments rely on the reconstruction of the trajectories of particles produced at the interaction point. This is a challenging task, especially in the high-track-multiplicity environment generated by p-p collisions at LHC energies. A typical event includes hundreds of signal examples (interesting decays) and a significant amount of noise (uninteresting examples). This work describes a modification of the Artificial Retina algorithm for fast track finding: numerical optimization methods were adopted for the fast local track search. This approach allows for a considerable reduction of the total computational time per event. Test results on a simplified simulated model of the LHCb VELO (VErtex LOcator) detector are presented. This approach is also well suited to massively parallel implementations such as GPGPU, which looks very attractive in the context of upcoming detector upgrades.
One of the most challenging data analysis tasks of modern High Energy Physics experiments is the identification of particles. In these proceedings we review new approaches used for particle identification at the LHCb experiment. Machine-learning-based techniques are used to identify the species of charged and neutral particles using several observables obtained from the LHCb sub-detectors. We show the performance of various solutions based on Neural Network and Boosted Decision Tree models.
Reconstruction and identification in the calorimeters of modern High Energy Physics experiments is a complicated task. Solutions are usually driven by a priori knowledge about the expected properties of the reconstructed objects. Such an approach is also used to distinguish single photons in the electromagnetic calorimeter of the LHCb detector at the LHC from overlapping photons produced by high-momentum π0 decays. We studied an alternative solution based on applying machine learning techniques to the primary calorimeter information, namely the energies collected in individual cells around the energy cluster. Constructing such a discriminator from “first principles” allowed the separation performance to be improved from 80% to 93%, which means reducing the primary-photon fake rate by a factor of two. In this presentation we discuss different approaches to the problem, the architecture of the classifier and its optimization, and compare the performance of the ML approach with the classical one.
The production of W and Z bosons in association with jets is studied in the forward region of proton-proton collisions collected at a centre-of-mass energy of 8 TeV by the LHCb experiment, corresponding to an integrated luminosity of 1.98 ± 0.02 fb−1 . The W boson is identified using its decay to a muon and a neutrino, while the Z boson is identified through its decay to a muon pair. Total cross-sections are measured and combined into charge ratios, asymmetries, and ratios of W+jet and Z+jet production cross-sections. Differential measurements are also performed as a function of both boson and jet kinematic variables. All results are in agreement with Standard Model predictions.
This book includes abstracts of the reports presented at the IX International Conference on Optimization Methods and Applications “Optimization and Applications” (OPTIMA-2018), held in Petrovac, Montenegro, October 1–5, 2018.
Stochastic Local Search (SLS) is one of the most popular approaches to the Boolean satisfiability problem, and solvers based on this algorithm have made substantial progress over the years. However, nearly all state-of-the-art SLS solvers do not attempt to find a good starting point, instead using random initial values. We present a heuristic for finding an initial assignment based on non-linear optimization of a continuous extension of the given Boolean formula. The heuristic works by optimizing a continuous function that represents the formula and then converting the result into discrete values. We also provide an experimental evaluation of the new heuristic implemented in the ProbSAT solver.
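A minimal sketch of the described idea, assuming a CNF formula encoded as lists of signed integers (DIMACS-style literals): each clause gets a continuous "unsatisfaction" score, the scores are summed over the formula, and coordinate-wise gradient descent on relaxed variables in [0, 1] is followed by rounding at 0.5. The particular loss shape and optimizer are illustrative choices of this sketch, not the paper's actual formulation:

```python
def clause_loss(clause, v):
    """Continuous "unsatisfaction" of one clause: the product over its
    literals of (1 - value) for positive and (value) for negative literals,
    so the clause loss is 0 exactly when some literal is fully satisfied."""
    p = 1.0
    for lit in clause:
        x = v[abs(lit)]
        p *= (1.0 - x) if lit > 0 else x
    return p

def formula_loss(cnf, v):
    return sum(clause_loss(c, v) for c in cnf)

def initial_assignment(cnf, n_vars, steps=500, lr=0.5):
    """Relax variables to [0, 1], minimize the continuous loss by
    coordinate-wise finite-difference gradient descent, then round."""
    v = {i: 0.5 for i in range(1, n_vars + 1)}  # start undecided
    eps = 1e-5
    for _ in range(steps):
        for i in v:
            base = formula_loss(cnf, v)
            v[i] += eps
            g = (formula_loss(cnf, v) - base) / eps
            v[i] -= eps
            v[i] = min(1.0, max(0.0, v[i] - lr * g))
    return {i: v[i] > 0.5 for i in v}
```

On a small satisfiable formula the rounded result is a satisfying assignment, which an SLS solver could then use as its starting point.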
Optimization, simulation, and control are very powerful tools in engineering and mathematics, and they play an increasingly important role. Because of their various real-world applications in industries such as finance, economics, and telecommunications, research in these fields is accelerating at a rapid pace, and the last decade has seen major algorithmic and theoretical developments.
This volume brings together the latest developments in these areas of research and presents applications of these results to a wide range of real-world problems.
- Collection of selected contributions giving a state-of-the-art account of recent developments in the field
- Covers a broad range of topics in optimization and optimal control, including unique applications
- Written by an international group of experts in their respective disciplines
- Broad audience of researchers, practitioners, and advanced graduate students in applied mathematics and engineering
We propose a new method of feature extraction for regression problems with text data that transforms sparse texts into dense features using regularized topic models. We also discuss the problem of topic model initialization and propose a new approach based on Naive Bayes. This approach is compared to many others, and it achieves quality comparable to vector space models using as few as ten topics. It also outperforms other methods of feature generation based on topic modeling, such as PLSA and Supervised LDA.
A full amplitude analysis of Λ0b → J/ψ pπ− decays is performed with a data sample acquired with the LHCb detector from 7 and 8 TeV pp collisions, corresponding to an integrated luminosity of 3 fb−1. A significantly better description of the data is achieved when, in addition to the previously observed nucleon excitations N → pπ−, either the Pc(4380)+ and Pc(4450)+ → J/ψ p states, previously observed in Λ0b → J/ψ pK− decays, or the Zc(4200)− → J/ψ π− state, previously reported in B0 → J/ψ K+π− decays, or all three, are included in the amplitude models. The data support a model containing all three exotic states, with a significance of more than three standard deviations. Within uncertainties, the data are consistent with the Pc(4380)+ and Pc(4450)+ production rates expected from their previous observation, taking account of Cabibbo suppression.
A model is considered for organizing cargo transportation between two node stations connected by a railway line that contains a certain number of intermediate stations. The cargo moves in one direction. Such a situation may occur, for example, if one of the node stations is located in a region that produces raw material for a manufacturing industry located in the region of the other node station. Freight traffic is organized by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, and the rule of distribution of cargo to the final node stations. The process of cargo transportation follows a set control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow. This results in the need for a “correct” extension of solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, we were able to numerically construct these quasi-solutions and determine their rate of growth. We note that, technically, the main difficulty consisted in obtaining quasi-solutions that satisfy the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) of the solutions on a number of model parameters characterizing the control rule, the cargo transportation technologies, and the intensity of cargo arrival at a node station.
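The numerical backbone mentioned above is the classical fourth-order Runge–Kutta method. For reference, a minimal generic RK4 integrator is sketched below; the handling of the nonlocal restrictions and of the solution jumps is specific to the paper and is not reproduced here:

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method
    for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 in n equal RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

For example, integrating y' = y from y(0) = 1 over [0, 1] with 100 steps reproduces e to about eight decimal places, consistent with the method's O(h⁴) global accuracy.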
Event logs collected by modern information and technical systems usually contain enough data for automated discovery of process models. A variety of algorithms has been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad-hoc selected parts of a log has still not received a full-fledged implementation. This paper describes an ROLAP-based method of multidimensional event log storage for process mining. The result of the log analysis is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad-hoc selection of criteria and a value of occurrence probability.
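As an illustration of the visualization step only, a toy sketch of building such a graph from a list of traces: each edge is weighted by the probability of that transition among all transitions leaving the source event, and edges below a threshold are dropped. The data layout and this probability definition are assumptions of the sketch, not the paper's ROLAP implementation:

```python
from collections import Counter

def transition_graph(log, min_prob=0.0):
    """Build a directed transition graph from a list of traces.

    Returns a dict mapping (source_event, target_event) to the probability
    of that transition among all transitions leaving source_event.
    Edges with probability below `min_prob` are dropped.
    """
    counts = Counter()    # how often each directed pair occurs
    outgoing = Counter()  # how many transitions leave each event
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
            outgoing[a] += 1
    return {(a, b): c / outgoing[a]
            for (a, b), c in counts.items()
            if c / outgoing[a] >= min_prob}
```

Raising `min_prob` mimics the abstract's idea of restricting the graph to event sequences above a chosen occurrence probability.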
The dynamics of a two-component Davydov–Scott (DS) soliton with a small mismatch of the initial location or velocity of the high-frequency (HF) component was investigated within the framework of a Zakharov-type system of two coupled equations for the HF and low-frequency (LF) fields. In this system, the HF field is described by a linear Schrödinger equation with a potential, generated by the LF component, that varies in time and space. The LF component is described by a Korteweg–de Vries equation with a term accounting for the quadratic influence of the HF field on the LF field. The oscillation frequency of the DS soliton's components was found analytically using the balance equation. The perturbed DS soliton was shown to be stable. The analytical results were confirmed by numerical simulations.