We study the Chandler wobble (CW) of the pole from 1846 to 2017, extracted by Panteleev filtering. The CW has a period of 433 days, a varying average amplitude of 0.13 milliarcseconds (mas), and a phase jump of pi in the 1930s. The CW amplitude decreases strongly (almost to zero) in the 1930s and 2010s, with the phase jump occurring in the 1930s. The envelope model contains 83- and 42-year quasi-periodicities. We argue that the first can be represented by a 166-year variation of the envelope, crossing zero in the 1930s. We reconstruct the Chandler input excitation based on the Euler-Liouville equation. Its amplitude has 20-year variations. We explain this with a simple model and prove that they appear as a consequence of the 42-year modulation of the CW. The excitation amplifies the amplitude of the CW for 20 years and then damps it for another 20 years. Analysis of the modulated CW signal in a sliding window demonstrates a specific effect, which we call the "escargot effect": an instantaneous "virtual" retrograde component appears in a signal that is purely prograde over long time intervals. The shape of the Chandler excitation envelope is similar to this instantaneous retrograde component, which reflects the changes in ellipticity of the approximating ellipse.
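The excitation reconstruction described above can be sketched numerically. This is a hedged illustration, not the authors' code: it uses the standard linearized Euler-Liouville relation chi = p + (i/sigma_c) dp/dt for the complex pole coordinate p = x - i*y, with an assumed quality factor Q = 100 and the 433-day period from the text.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not the paper's pipeline):
# reconstruct the excitation chi(t) from pole motion p(t) via
#   chi = p + (i / sigma_c) * dp/dt,
# where sigma_c is the complex Chandler frequency.
T_CW, Q = 433.0, 100.0                      # period [days] and assumed Q
sigma_c = 2 * np.pi / T_CW * (1 + 1j / (2 * Q))

def excitation(p, dt):
    """Finite-difference Euler-Liouville inversion (central differences)."""
    dpdt = np.gradient(p, dt)
    return p + 1j * dpdt / sigma_c

# Sanity check: a freely decaying Chandler oscillation needs ~zero excitation.
dt = 1.0                                     # daily sampling
t = np.arange(0, 20 * T_CW, dt)
p_free = 0.13 * np.exp(1j * sigma_c * t)     # 0.13 mas amplitude, as in the text
chi = excitation(p_free, dt)
print(np.max(np.abs(chi[1:-1])) < 1e-3)      # True: residual is discretization error
```

With real pole-coordinate series, the amplitude of chi obtained this way is what carries the 20-year variations discussed in the abstract.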
LHC experiments generate up to 10^12 events per year. This paper describes Event Index – an event search system. Event Index's primary function is quickly selecting subsets of events from a combination of conditions, such as the estimated decay channel or stripping lines output. Event Index is essentially Apache Lucene optimized for read-only indexes distributed over independent shards on independent nodes.
During LHC Run 1, the LHCb experiment recorded around 10^11 collision events. This paper describes Event Index — an event search system. Its primary function is to quickly select subsets of events from a combination of conditions, such as the estimated decay channel or number of hits in a subdetector. Event Index is essentially Apache Lucene optimized for read-only indexes distributed over independent shards on independent nodes.
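The core operation described — selecting events by an AND-combination of conditions over sharded read-only indexes — can be illustrated with a toy inverted index. This is a hedged sketch, not the actual Event Index or Lucene code; the field names ("channel", "nhits_bin") and event ids are invented for illustration.

```python
# Toy inverted-index sketch of conjunctive event selection over shards.
# Field names and values are illustrative, not the LHCb schema.
from collections import defaultdict

class Shard:
    def __init__(self):
        self.postings = defaultdict(set)     # (field, value) -> set of event ids

    def add(self, event_id, fields):
        for kv in fields.items():
            self.postings[kv].add(event_id)

    def select(self, conditions):
        """Intersect posting lists for an AND-combination of conditions."""
        sets = [self.postings[kv] for kv in conditions.items()]
        return set.intersection(*sets) if sets else set()

# Two independent shards; a real deployment queries them in parallel
# on independent nodes and merges the results.
shards = [Shard(), Shard()]
shards[0].add(1, {"channel": "B->K*mumu", "nhits_bin": "high"})
shards[0].add(2, {"channel": "B->K*mumu", "nhits_bin": "low"})
shards[1].add(3, {"channel": "B->K*mumu", "nhits_bin": "high"})

query = {"channel": "B->K*mumu", "nhits_bin": "high"}
hits = sorted(e for s in shards for e in s.select(query))
print(hits)  # [1, 3]
```

Because each shard is read-only, posting lists can be stored in compressed, memory-mapped form, which is the property Lucene exploits in this setting.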
© Published under licence by IOP Publishing Ltd. Modern science clearly demands a higher level of reproducibility and collaboration. To make research fully reproducible, one has to take care of several aspects: research protocol description, data access, environment preservation, workflow pipeline, and analysis script preservation. Version control systems like git help with the workflow and analysis scripts part. Virtualization techniques like Docker or Vagrant can help deal with environments. Jupyter notebooks are a powerful platform for conducting research in a collaborative manner. We present project Everware, which seamlessly integrates git repository management systems such as GitHub or GitLab, Docker, and Jupyter, helping with a) sharing the results of real research and b) boosting education activities. With the help of Everware one can share not only the final artifacts of research but the full depth of the research process. This has been shown to be extremely helpful during the organization of several data analysis hackathons and machine learning schools. Using Everware, participants could start from an existing solution instead of starting from scratch, and could begin contributing immediately. Everware allows its users to make use of their own computational resources to run the workflows they are interested in, which leads to higher scalability of the toolkit.
In this study we considered active region 09415 of the 23rd cycle of solar activity, which was observed with 2D spatial resolution at three frequencies: 17 and 34 GHz with the Nobeyama Radioheliograph (NoRH) and 5.7 GHz with the Siberian Solar Radio Telescope (SSRT). We detected the rapid development of a compact microwave source above the neutral line of the magnetic field of the leading sunspot (NLS source) a few hours before an X-class flare. The position of this source coincides with the location of the maximum magnetic field gradient at the photosphere.
Cryo-filters are essential when studying the electronic properties of nanoscale structures at very low temperatures. In this report we present a simple measurement methodology and experimental impedance characteristics of customized lumped filters cooled down to 4.2 K in the 10 Hz – 500 MHz frequency range. In particular, we tested home-made permalloy-core RL filters, the Murata™ Chip Ferrite Bead filter, and Toshiba™ Amobeads™ cores. We use a high-frequency generalization of the four-terminal sensing method to account for wiring retardation effects, which are important when working with ultralow-temperature systems.
In this paper we experimentally studied the influence of the geometrical parameters of planar O-ring resonators on their Q-factor and losses. We systematically changed the gap between the bus waveguide and the ring, as well as the width of the ring. We found the highest Q = 5×10^5 for a gap of 2.0 μm and a ring width of 2 μm. This work is important for further on-chip SFWM applications, since the generation rate of the biphoton field strongly depends on the quality factor as Q^3.
The increasing luminosities of future Large Hadron Collider runs and the next generation of collider experiments will require an unprecedented number of simulated events to be produced. Such large-scale productions are extremely demanding in terms of computing resources. Thus new approaches to event generation and simulation of detector responses are needed. In LHCb, the accurate simulation of Cherenkov detectors takes a sizeable fraction of CPU time. An alternative approach is described here, in which high-level reconstructed observables are generated with a generative neural network, bypassing low-level details. This network is trained to reproduce the particle species likelihood function values based on the track kinematic parameters and detector occupancy. The fast simulation is trained using real data samples collected by LHCb during Run 2. We demonstrate that this approach provides high-fidelity results.
Supercomputing of the exascale era is going to be inevitably limited by power efficiency. Nowadays, different possible CPU architectures are being considered. Recently, the development of ARM processors has reached the point where their floating-point performance can be seriously considered for a range of scientific applications. In this work we present an analysis of the floating-point performance of the latest ARM cores and their efficiency for classical molecular dynamics algorithms.
An analysis is presented of experimental data in which fluid–fluid phase transitions are observed for different substances at high temperatures, with triple points on melting curves. Viscosity drops point to the structural character of the transition, whereas conductivity jumps are reminiscent of both a semiconductor-to-metal transition and a plasma nature. The slope of the phase-equilibrium dependencies of pressure on temperature and the consequent change of the specific volume, which follows from the Clapeyron–Clausius equation, are discussed. P(V, T) surfaces are presented and discussed for the phase transitions considered in the vicinity of the triple points. The cases of abnormal P(T) dependencies on curves of phase equilibrium are the focus of the discussion. In particular, a P(V, T) surface is presented for which both the fluid–fluid and melting P(T) curves are abnormal. Particular attention is paid to warm dense hydrogen and deuterium, where remarkable contradictions exist between the data of different authors. The possible connection of the P(V, T) surface peculiarities with the experimental data uncertainties is outlined.
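The sign argument behind the "abnormal" slopes can be made explicit with the Clapeyron–Clausius relation dP/dT = ΔS/ΔV along the equilibrium curve: a negative slope forces the entropy and specific-volume jumps to have opposite signs. A minimal numerical sketch, with invented illustrative numbers (not data from the paper):

```python
# Clapeyron-Clausius relation along a phase-equilibrium curve:
#   dP/dT = dS / dV   =>   dV = dS / (dP/dT).
# Numbers below are illustrative only.
def volume_change(dS, dPdT):
    """Specific-volume jump [m^3/kg] implied by entropy jump and slope."""
    return dS / dPdT

dS = 2.0e3                           # J/(K*kg), entropy gained by the high-T phase (assumed)
print(volume_change(dS, +1.0e7))     # normal slope (Pa/K)  -> positive volume jump
print(volume_change(dS, -1.0e7))     # abnormal slope       -> negative volume jump
```

So an abnormal P(T) curve with ΔS > 0 implies the denser phase lies on the high-temperature side, which is the situation examined near the triple points in the text.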
The recently discovered quasi-two-dimensional metal-organic compound (C4H12N2)(Cu2Cl6) (abbreviated PHCC) is an example of a spin-gap magnet. Its ground state is a nonmagnetic singlet separated from the triplet excitations by an energy gap of approximately 1 meV. This compound allows partial substitution of chlorine ions by bromine, which results in modulation of the affected exchange bonds. We have found by means of electron spin resonance spectroscopy that this doping results in the formation of gapless S = 1 paramagnetic centers. These centers can be interpreted as triplet excitations trapped in a potential well created by the doping.
In this paper we apply the multifractal formalism to the analysis of the statistical behaviour of topic models under variation of the number of topics. Fractal analysis of topic models shows that self-similar fractal clusters exist in large textual collections. We provide numerical results for three topic models (PLSA, ARTM, LDA with Gibbs sampling) on two datasets: an English-language one and a Russian-language one. We demonstrate that the formation of clusters occurs precisely in the transition regions. Linear regions do not lead to changes in the fractals; therefore, it is sufficient to find transition regions when studying textual collections. Accordingly, the problem of analysing the evolution of topic models can be reduced to the problem of searching for transition regions in topic models.
A three-dimensional artistic fractal tomography method that implements glasses-free 3D visualization of fractal worlds in layered media is proposed. It is designed for glasses-free 3D viewing of digital art objects and films containing fractal content. Prospects for the application of this method in art galleries and the film industry are considered.
We investigate the existence and the orthogonality of the generalized Jack symmetric functions which play an important role in the AGT relations. We show their orthogonality by deforming them to the generalized Macdonald symmetric functions.
Metal nanoparticles (NPs) serve as important tools for many modern technologies. However, the proper microscopic models of the interaction between ultrashort laser pulses and metal NPs are currently not very well developed in many cases. One part of the problem is the description of the warm dense matter that is formed in NPs after intense irradiation. Another part of the problem is the description of the electromagnetic waves around NPs. Description of wave propagation requires the solution of Maxwell's equations and the finite-difference time-domain (FDTD) method is the classic approach for solving them. There are many commercial and free implementations of FDTD, including the open source software that supports graphics processing unit (GPU) acceleration. In this report we present the results on the FDTD calculations for different cases of the interaction between ultrashort laser pulses and metal nanoparticles. Following our previous results, we analyze the efficiency of the GPU acceleration of the FDTD algorithm.
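The FDTD method named above can be illustrated in its simplest one-dimensional form. This is a hedged vacuum sketch of the classic Yee leapfrog scheme, not the GPU-accelerated solver discussed in the report: E and H live on staggered grids and are updated alternately in time.

```python
import numpy as np

# Minimal 1D Yee FDTD sketch in normalized units (dx = dt, Courant number 1):
#   H[k] += E[k+1] - E[k]          (curl of E drives H)
#   E[k] += H[k] - H[k-1]          (curl of H drives E)
# Grid ends keep E = 0, i.e. perfectly conducting (reflecting) boundaries.
nx, nt = 200, 300
E = np.zeros(nx)
H = np.zeros(nx - 1)
for n in range(nt):
    H += np.diff(E)                              # staggered H update
    E[1:-1] += np.diff(H)                        # staggered E update (interior)
    E[50] += np.exp(-((n - 30) / 10.0) ** 2)     # soft Gaussian source

print(np.isfinite(E).all() and abs(E).max() > 0)  # True: pulse propagates stably
```

Real NP simulations add material dispersion models and absorbing boundaries on top of this update loop, and the loop's regular stencil structure is exactly what makes GPU acceleration effective.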
We consider the Hegselmann-Krause bounded confidence model of opinion dynamics. We assume that the opinion of an agent is influenced not only by other agents, but also by external random noises. The case of independent normally distributed external noises is considered. We perform computer modeling of the deterministic and stochastic models. The properties of the models were analyzed and the differences in their behavior were revealed. We study the dependence of the number of confidence clusters on the parameters of the problem, such as the initial profile of opinions, the level of confidence, and the variance of the noise.
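The noisy Hegselmann-Krause update can be sketched compactly. This is an illustrative implementation under the standard formulation (each agent averages opinions within its confidence level eps, plus an optional independent Gaussian perturbation); parameter values are assumptions, not the paper's settings.

```python
import numpy as np

def hk_step(x, eps, noise_std=0.0, rng=None):
    """One synchronous Hegselmann-Krause step, optionally with Gaussian noise."""
    neighbours = (np.abs(x[:, None] - x[None, :]) <= eps).astype(float)
    x_new = (neighbours @ x) / neighbours.sum(axis=1)   # mean over confidants
    if noise_std > 0.0:
        x_new += rng.normal(0.0, noise_std, size=x.shape)
    return x_new

def clusters(x, tol=1e-3):
    """Number of confidence clusters = distinct opinion values up to tol."""
    xs = np.sort(x)
    return 1 + int(np.sum(np.diff(xs) > tol))

x = np.linspace(0.0, 1.0, 101)      # uniform initial opinion profile
for _ in range(50):                 # deterministic run: noise_std = 0
    x = hk_step(x, eps=0.15)
print(clusters(x))                  # a handful of clusters for eps = 0.15
```

Passing a nonzero noise_std (with rng = np.random.default_rng(...)) gives the stochastic variant, whose cluster structure fluctuates instead of freezing — the behavioral difference the abstract refers to.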
The results of numerical calculations for the mathematical model proposed for describing the magnetization in a thin film of a ferromagnetic semiconductor at temperatures below the Curie temperature in the presence of an external electric field are presented. The theoretical prediction of the existence of a piecewise continuous solution, which describes the presence of the phase transition boundary for magnetization inside the film, is confirmed. The location of this phase transition boundary depends on the external electric field and temperature.
Statistical physics is a branch of physics whose mathematical methods can be applied beyond purely physical problems, for example in interdisciplinary studies of social phenomena, which have a stochastic nature. The aim of the paper is to display the opportunities for using the methods of the natural sciences in the social sciences. The suggested example is joint research in demography, sociology, statistics, and ethnography on ethnically mixed families: marital couples in which the husband and the wife consider themselves as belonging to different ethnicities. It is demonstrated that applying the reasoning used in the kinetic theory helps to introduce a new measure that describes mutual attitudes for a specific combination of ethnicities. The idea behind calculating this measure is quite simple. We relate the number of marriages expected under fully random "collisions" of "particles" (persons), paired irrespective of their type, to the phenomenology: the actual number of families for a given combination of husband's and wife's ethnicities observed in the population censuses. By "collision" we mean any form of personal or social interaction (meeting, conversation, participation in small groups at work, family, schooling, tourism, journeys, sports, etc.). This measure may be called the inter-ethnic propensity, or its inverse value the inter-ethnic distance. One more new measure is used to describe the propensity to form an ethnically mixed marriage with a spouse of any different ethnicity. Numerically it is calculated as the share of ethnically mixed families of a given ethnicity among all the families of this ethnicity. By analogy with chemistry, it may be called "valency". It is shown that in such a multiethnic country as Russia, neither measure can be estimated adequately in this direct form.
The reason is the significant inhomogeneity of the distribution of ethnicities over the territory of the country. Some peoples have their own national republics; others lack such an administrative-territorial organization but reside in a small number of regions. However, this does not mean that the measures introduced are wrong. Before calculating them, one has to perform a so-called "geographical" decomposition that explicitly takes into account the extent of the territorial distribution of the population of all the ethnicities in the country by region. In terms of the kinetic approach for gases, the analogy is the varying density of different particles over the volume they occupy, which must be taken into account when considering their physical properties. The paper also aims to show that using methods from the natural sciences yields a much clearer explanation and simpler understanding, modeling, and interpretation of the processes under consideration. Descriptions of the models and measures mentioned and the results of the suggested approach were published in the new electronic journal Demographic Review (Demograficheskoe obozrenie, in Russian) and presented at international conferences at the NRU Higher School of Economics and Moscow State University. As a new, not yet solved problem statement in ethnography, an analogy with thermodynamics is suggested for the analysis of the ethnic structure of a population and its evolution. Some questions in this field are: Does the entropy of the composition of a population by ethnicity actually grow over time? Can the dynamics of the population of the USA, considered as the well-known "melting pot" of ethnicities, be interpreted in a way similar to the second law of thermodynamics? Why is this law not valid in the general case for the ethnic structure of a population at the level of a city or country?
The paper is devoted to constructing efficient metaheuristic algorithms for discrete optimization problems. In particular, we consider the vehicle routing problem and apply an original ant colony optimization method to solve it. In addition, some parts of the algorithm are separated out for parallel computing. Experiments are performed to compare the efficiency of these methods.
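A minimal ant-colony sketch conveys the mechanism: ants build routes probabilistically from pheromone and inverse distance, pheromone evaporates, and good routes are reinforced. This is a hedged textbook illustration, not the authors' original method; a single-vehicle tour is used for brevity, and the capacity constraints of the full VRP are omitted.

```python
import math
import random

def route_length(route, dist):
    """Length of a closed tour visiting every node once."""
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))

def ant_colony(dist, n_ants=20, n_iter=50, alpha=1.0, beta=2.0, rho=0.5, seed=1):
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]              # pheromone matrix
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):                      # each ant builds a tour
            route, todo = [0], set(range(1, n))
            while todo:
                i, cand = route[-1], list(todo)
                w = [tau[i][j] ** alpha / dist[i][j] ** beta for j in cand]
                nxt = rng.choices(cand, weights=w)[0]
                route.append(nxt)
                todo.remove(nxt)
            L = route_length(route, dist)
            if L < best_len:
                best, best_len = route, L
        for i in range(n):                           # evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for a, b in zip(best, best[1:] + best[:1]):  # reinforce the best tour
            tau[a][b] += 1.0 / best_len
            tau[b][a] += 1.0 / best_len
    return best, best_len

# Four customers on a unit square: the optimal closed tour has length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
route, length = ant_colony(dist)
print(round(length, 3))  # 4.0
```

The tour-construction loop for independent ants is exactly the part that parallelizes naturally, which is the decomposition the abstract refers to.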
Identifying the production flavour of neutral B mesons is one of the most important components in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, we propose a probabilistic model that efficiently combines information from reconstructed vertices and tracks using machine learning. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.