Supercomputing in the exascale era will inevitably be limited by power efficiency, and different CPU architectures are currently being considered with this constraint in mind. Recently the development of ARM processors has reached the point where their floating point performance can be seriously considered for a range of scientific applications. In this work we present an analysis of the floating point performance of the latest ARM cores and their efficiency for the algorithms of classical molecular dynamics.
An analysis is presented of experimental data in which fluid–fluid phase transitions are observed for different substances at high temperatures, with triple points on the melting curves. Viscosity drops point to the structural character of the transition, whereas conductivity jumps are reminiscent of both a semiconductor-to-metal transition and a plasma mechanism. The slope of the phase-equilibrium dependence of pressure on temperature, and the consequent change of the specific volume that follows from the Clapeyron–Clausius equation, are discussed. P(V, T) surfaces are presented and discussed for the phase transitions considered in the vicinity of the triple points. The cases of abnormal P(T) dependencies on the curves of phase equilibrium are the focus of the discussion. In particular, a P(V, T) surface is presented for which both the fluid–fluid and the melting P(T) curves are abnormal. Particular attention is paid to warm dense hydrogen and deuterium, where remarkable contradictions exist between the data of different authors. The possible connection of the P(V, T) surface peculiarities with the experimental data uncertainties is outlined.
The recently discovered quasi-two-dimensional metal-organic compound (C4H12N2)(Cu2Cl6) (abbreviated PHCC) is an example of a spin-gap magnet. Its ground state is a nonmagnetic singlet separated from the triplet excitations by an energy gap of approximately 1 meV. This compound allows partial substitution of chlorine ions by bromine, which results in the modulation of the affected exchange bonds. We have found by means of electron spin resonance spectroscopy that this doping results in the formation of gapless S = 1 paramagnetic centers. These centers can be interpreted as triplet excitations trapped in a potential well created by the doping.
In this paper we apply the multifractal formalism to the analysis of the statistical behaviour of topic models under variation of the number of topics. Fractal analysis of topic models allows us to show that self-similar fractal clusters exist in large textual collections. We provide numerical results for three topic models (PLSA, ARTM, and LDA with Gibbs sampling) on two datasets, one in English and one in Russian. We demonstrate that the formation of clusters occurs precisely in the transition regions. Linear regions do not lead to changes in the fractals; therefore, it is sufficient to find the transition regions for the study of textual collections. Accordingly, the problem of analysing the evolution of topic models can be reduced to the problem of searching for transition regions in topic models.
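Self-similar clusters of the kind mentioned above are typically characterized by a fractal dimension. As an illustrative sketch only (a single box-counting estimate, not the authors' full multifractal formalism, and with an invented test point set), the standard box-counting estimator looks like this:

```python
import numpy as np

def box_counting_dimension(points, eps_values):
    """Estimate the box-counting (fractal) dimension of a 2-D point set.

    points: (N, 2) array of coordinates scaled to the unit square.
    eps_values: iterable of box sizes to test.
    Returns the slope of log N(eps) versus log(1/eps).
    """
    counts = []
    for eps in eps_values:
        # Assign each point to a grid cell of side eps, count occupied cells.
        cells = np.floor(points / eps).astype(int)
        counts.append(len({tuple(c) for c in cells}))
    log_inv_eps = np.log(1.0 / np.asarray(eps_values))
    log_counts = np.log(counts)
    slope, _ = np.polyfit(log_inv_eps, log_counts, 1)
    return slope

# Sanity check: a densely filled unit square should give dimension close to 2.
rng = np.random.default_rng(0)
pts = rng.random((50_000, 2))
dim = box_counting_dimension(pts, [0.2, 0.1, 0.05, 0.025])
```

A genuinely fractal point cloud (for example, document embeddings forming self-similar clusters) would yield a non-integer slope instead.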
A three-dimensional artistic fractal tomography method that implements glasses-free 3D visualization of fractal worlds in layered media is proposed. It is designed for glasses-free 3D viewing of digital art objects and films containing fractal content. Prospects for the development of this method in art galleries and the film industry are considered.
We investigate the existence and the orthogonality of the generalized Jack symmetric functions, which play an important role in the AGT relations. We show their orthogonality by deforming them into the generalized Macdonald symmetric functions.
Metal nanoparticles (NPs) serve as important tools for many modern technologies. However, proper microscopic models of the interaction between ultrashort laser pulses and metal NPs are in many cases not yet well developed. One part of the problem is the description of the warm dense matter that is formed in NPs after intense irradiation. Another part is the description of the electromagnetic waves around the NPs. Describing wave propagation requires the solution of Maxwell's equations, and the finite-difference time-domain (FDTD) method is the classic approach for solving them. There are many commercial and free implementations of FDTD, including open-source software that supports graphics processing unit (GPU) acceleration. In this report we present results of FDTD calculations for different cases of the interaction between ultrashort laser pulses and metal nanoparticles. Following our previous results, we analyze the efficiency of GPU acceleration of the FDTD algorithm.
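The explicit leapfrog update that makes FDTD so amenable to GPU acceleration can be illustrated with a minimal one-dimensional Yee-scheme sketch in normalized units. The grid size, Courant number, and Gaussian source below are illustrative choices, not parameters from this work:

```python
import numpy as np

nz, nt = 400, 500
ez = np.zeros(nz)          # electric field on the integer grid
hy = np.zeros(nz - 1)      # magnetic field on the staggered half-step grid
courant = 0.5              # S = c*dt/dz; S <= 1 is required for stability in 1-D

for n in range(nt):
    # Leapfrog: update H from the spatial difference of E, then E from H.
    hy += courant * np.diff(ez)
    ez[1:-1] += courant * np.diff(hy)
    # Soft Gaussian source injected at the grid centre.
    ez[nz // 2] += np.exp(-0.5 * ((n - 30) / 10.0) ** 2)

# For a stable scheme the field stays finite as the pulse propagates outward.
peak = float(np.max(np.abs(ez)))
```

Both update lines are embarrassingly parallel over grid points, which is precisely why 2-D and 3-D versions of this stencil map well onto GPUs.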
We consider the Hegselmann-Krause bounded confidence model of opinion dynamics. We assume that the opinion of an agent is influenced not only by other agents, but also by external random noise; the case of independent normally distributed noise is considered. We perform computer modeling of the deterministic and stochastic models, analyze their properties, and reveal the differences in their behavior. We study the dependence of the number of confidence clusters on the parameters of the problem, such as the initial profile of opinions, the confidence level, and the variance of the noise.
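A minimal sketch of such a simulation, assuming synchronous updates and a simple gap-based cluster count (details the abstract does not specify), might look like this; setting sigma = 0 recovers the deterministic model:

```python
import numpy as np

def hk_step(x, eps, sigma, rng):
    """One synchronous Hegselmann-Krause update with additive Gaussian noise.

    Each agent moves to the mean opinion of all agents within confidence
    level eps of its own opinion, then receives independent noise of
    standard deviation sigma (sigma = 0 gives the deterministic model).
    """
    within = np.abs(x[:, None] - x[None, :]) <= eps   # confidence neighbourhoods
    new_x = (within * x[None, :]).sum(axis=1) / within.sum(axis=1)
    return new_x + rng.normal(0.0, sigma, size=x.shape)

def count_clusters(x, tol=1e-2):
    """Count opinion clusters: sorted opinions split where a gap exceeds tol."""
    xs = np.sort(x)
    return 1 + int(np.sum(np.diff(xs) > tol))

rng = np.random.default_rng(1)
x = rng.random(200)                               # initial opinion profile on [0, 1]
for _ in range(50):
    x = hk_step(x, eps=0.2, sigma=0.0, rng=rng)   # deterministic case
n_clusters = count_clusters(x)
```

In the deterministic case opinions stay inside the convex hull of the initial profile and freeze into a few clusters; with sigma > 0 the cluster count fluctuates, which is the difference in behavior the abstract refers to.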
The results of numerical calculations for the mathematical model proposed for describing the magnetization in a thin film of a ferromagnetic semiconductor at temperatures below the Curie temperature in the presence of an external electric field are presented. The theoretical prediction of the existence of a piecewise continuous solution, which describes the presence of the phase transition boundary for magnetization inside the film, is confirmed. The location of this phase transition boundary depends on the external electric field and temperature.
Statistical physics is a branch that applies mathematical methods not only to physical problems: its methods extend to interdisciplinary studies of many social phenomena, because those phenomena have a stochastic nature. The aim of this paper is to display the opportunities for using the methods of the natural sciences in the social sciences. The suggested example is joint research in demography, sociology, statistics, and ethnography on ethnically mixed families: marital couples in which the husband and the wife consider themselves as belonging to different ethnicities. It is demonstrated that applying the reasoning used in the kinetic theory helps us to introduce a new measure that describes mutual attitudes for a specific combination of ethnicities. The idea behind calculating this measure is quite simple. We relate the number of marriages expected from fully random collisions of "particles" (persons) connecting irrespective of their type, and the phenomenology: the actual number of families for a given combination of husband's and wife's ethnicity observed in the population censuses. By "collision" we mean any form of personal or social interaction (meeting, conversation, participation in small groups at work, family, schooling, tourism, journeys, sports, etc.). This measure may be called an inter-ethnic propensity, or its inverse value an inter-ethnic distance. One more new measure is used to describe the propensity to form an ethnically mixed marriage with a spouse of any different ethnicity. Numerically it is calculated as the share of ethnically mixed families of a given ethnicity among all the families of this ethnicity; by analogy with chemistry, it may be called "valency". It is shown that in a multiethnic country like Russia neither measure can be adequately estimated at the national level.
The reason is a significant inhomogeneity of the distribution of ethnicities across the territory of the country. Some of these peoples have their own national republics; some have no such administrative-territorial organization but reside in a small number of regions. However, this does not mean that the measures introduced are wrong. Before calculating them, we simply have to perform a so-called "geographical" decomposition that explicitly takes into account the fact and the extent of the territorial distribution of the population of every ethnicity across the regions of the country. In terms of the kinetic approach for gases, this is analogous to the varying density of different particles across the volume they occupy, which must be taken into account when considering their physical properties. The paper also aims to show that using methods from the natural sciences yields clearer explanation, simpler understanding, modeling, and interpretation of the processes under consideration. Descriptions of the models and measures mentioned, and the results of the suggested approach, were published in the new electronic journal Demographic Review (Demograficheskoe obozrenie, in Russian) and presented at international conferences at the NRU Higher School of Economics and Moscow State University. As a new, not yet solved, problem statement in ethnography, an analogy with thermodynamics is suggested for the analysis of the ethnic structure of a population and its evolution. Some questions in this field are: Does the entropy actually grow over time, as applied to the composition of a population by ethnicities? Can the dynamics of the population of the USA, considered the well-known "melting pot" of ethnicities, be interpreted in a way similar to the second law of thermodynamics? Why is this law not valid in the general case for the ethnic structure of a population at the level of a city or a country?
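The propensity and "valency" measures described above can be sketched numerically. The toy marriage counts below are invented for illustration and are not census data; the propensity is taken here as observed over expected-under-random-mixing, so that its inverse plays the role of the inter-ethnic distance:

```python
from collections import Counter

marriages = {                     # (husband ethnicity, wife ethnicity) -> count
    ("A", "A"): 800, ("A", "B"): 50,
    ("B", "A"): 40,  ("B", "B"): 110,
}
total = sum(marriages.values())
husbands, wives = Counter(), Counter()
for (h, w), n in marriages.items():
    husbands[h] += n
    wives[w] += n

def propensity(h, w):
    """Observed / expected-under-random-collisions for a husband-wife pair.
    1.0 means no preference; below 1.0 suggests an inter-ethnic distance."""
    expected = husbands[h] * wives[w] / total
    return marriages[(h, w)] / expected

def valency(ethnicity):
    """Share of mixed families among all families involving this ethnicity."""
    involved = sum(n for (h, w), n in marriages.items() if ethnicity in (h, w))
    mixed = sum(n for (h, w), n in marriages.items()
                if ethnicity in (h, w) and h != w)
    return mixed / involved
```

The "geographical" decomposition discussed above would amount to computing these ratios region by region, with region-specific totals, rather than once for the whole country.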
The paper is devoted to constructing efficient metaheuristic algorithms for discrete optimization problems. In particular, we consider the vehicle routing problem and apply an original ant colony optimization method to solve it. In addition, some parts of the algorithm are separated out for parallel computation. Experimental results are presented to compare the efficiency of these methods.
Identifying the production flavour of neutral B mesons is one of the most important components in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model is proposed which efficiently combines information from reconstructed vertices and tracks using machine learning. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.
We investigated the relation between deposition conditions and the parameters of ultra-thin VN films. In the experimental study we varied the substrate temperature, the Ar and N2 partial pressures, and the deposition rate. The study allowed us to obtain films with transition temperatures close to the bulk values and to use such samples to fabricate superconducting single-photon detectors.
A theoretical model describing the spontaneous magnetization of a ferromagnetic semiconductor (InMn)As film in the presence of an external electric field directed across the film is considered. It is assumed that the ions of the manganese impurity with spin 5/2 are acceptors, have a uniform spatial distribution inside the semiconductor, and do not change their position under the action of an external field. The motion of holes with spin 1/2 changes their spatial distribution under the action of the field. The exchange interaction between manganese ions and holes allows the formation of a magnetization that is non-uniform across the film thickness.
In particular, the existence of a piecewise continuous solution describing the presence of a phase transition boundary for magnetization inside a ferromagnetic semiconductor film is shown.
Simulation of the generation of entangled (Bell) states of two qubits using unipolar picosecond pulses was performed. As an example, a system of two coupled superconducting flux qubits interacting with fluxons propagating along an integrated Josephson transmission line has been considered. The influence of the pulse shape and of quantum noise on the accuracy of the Bell-state initialization, and ways to control the nonlocal entangled states, are discussed.
We present results on the integration of the two major GPGPU APIs with a reactor-based event processing model in C++ that utilizes coroutines. Given the current lack of a universally usable GPGPU programming interface that gives optimal performance, and the ongoing debates about the style of implementing asynchronous computing in C++, we present a working implementation that allows a uniform and seamless approach to writing C++ code with continuations, enabling processing on CPUs or on CUDA/OpenCL accelerators. Performance results are provided showing that, if corner cases are avoided, this approach has a negligible performance cost in latency.
We investigate geometrical aspects of a spatial evolutionary game based on the Prisoner's dilemma. We analyze the geometrical structure of the spatial distribution of cooperators and defectors in the steady-state regime of the evolution. We develop an algorithm for identifying the interfaces between clusters of cooperators and defectors, and measure the fractal properties of these interfaces.
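A minimal sketch of such a simulation, assuming Nowak-May style imitate-the-best-neighbour dynamics on a periodic square lattice (the lattice size, the temptation payoff b, and the update rule are illustrative assumptions, not necessarily the authors' exact model), together with a simple interface measurement over lattice edges:

```python
import numpy as np

rng = np.random.default_rng(2)
L, b = 64, 1.65                       # lattice size; temptation to defect
s = rng.integers(0, 2, size=(L, L))   # 1 = cooperator, 0 = defector

def payoffs(s):
    """Payoff of each site against its four nearest neighbours (periodic).
    A cooperator earns 1 per cooperating neighbour; a defector earns b per
    cooperating neighbour (weak Prisoner's dilemma with P = S = 0)."""
    coop_neigh = sum(np.roll(s, sh, axis=ax)
                     for ax in (0, 1) for sh in (1, -1))
    return np.where(s == 1, coop_neigh, b * coop_neigh)

for _ in range(100):
    p = payoffs(s)
    # Each site imitates the strategy of its best-scoring neighbour,
    # keeping its own strategy on ties (deterministic imitation).
    best_p, best_s = p.copy(), s.copy()
    for ax in (0, 1):
        for sh in (1, -1):
            np_, ns_ = np.roll(p, sh, axis=ax), np.roll(s, sh, axis=ax)
            better = np_ > best_p
            best_p = np.where(better, np_, best_p)
            best_s = np.where(better, ns_, best_s)
    s = best_s

# Interface length: number of lattice edges joining unlike strategies.
interface = int((s != np.roll(s, 1, axis=0)).sum() +
                (s != np.roll(s, 1, axis=1)).sum())
```

Repeating the interface count on coarse-grained copies of the lattice (box counting over the interface edges) would then give the fractal dimension of the cluster boundaries.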
Data quality monitoring (DQM) is crucial in a high-energy physics experiment to ensure the correct functioning of the experimental apparatus during data taking. DQM at LHCb is carried out in two phases. The first is performed on-site, in real time, using unprocessed data directly from the LHCb detector; the second, also performed on-site, requires the reconstruction of the data selected by the LHCb trigger system and occurs later. For the LHC Run II data taking, the LHCb collaboration has re-engineered the DQM protocols and the DQM graphical interface, moving the latter to a web-based monitoring system called Monet, thus allowing researchers to perform the second phase off-site. To support the operator's task, Monet is also equipped with an automated, fully configurable alarm system, allowing its use not only for DQM purposes, but also to track and assess the quality of LHCb software and simulation over time.
The main b-physics trigger algorithm used by the LHCb experiment is the so-called topological trigger. The topological trigger selects vertices which are a) detached from the primary proton-proton collision and b) compatible with coming from the decay of a b-hadron. In the LHC Run 1, this trigger, which utilized a custom boosted decision tree algorithm, selected a nearly 100% pure sample of b-hadrons with a typical efficiency of 60-70%; its output was used in about 60% of LHCb papers. This talk presents studies carried out to optimize the topological trigger for LHC Run 2. In particular, we have carried out a detailed comparison of various machine learning classifier algorithms, e.g., AdaBoost, MatrixNet and neural networks. The topological trigger algorithm is designed to select all "interesting" decays of b-hadrons, but cannot be trained on every such decay. Studies have therefore been performed to determine how to optimize the performance of the classification algorithm on decays not used in the training. Methods studied include cascading, ensembling and blending techniques. Furthermore, novel boosting techniques have been implemented that will help reduce systematic uncertainties in Run 2 measurements. We demonstrate that the reoptimized topological trigger is expected to significantly improve on the Run 1 performance for a wide range of b-hadron decays.
The LHCb experiment stores around 10^11 collision events per year. A typical physics analysis deals with a final sample of up to 10^7 events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams in order to increase the efficiency of the user analysis jobs that read these data. The efficiency of the scheme heavily depends on the stream composition: by putting similar lines together and balancing the stream sizes it is possible to reduce the overhead. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (the Turbo stream) at the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and will be applied to the data to be recorded in 2017.
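The grouping idea can be illustrated with a toy sketch: merge lines whose selected event sets overlap strongly, so that a job reading one line's stream reads as few unrelated events as possible. The event sets, the Jaccard-similarity criterion, and the greedy merging below are invented for illustration; the paper's actual optimization method is more involved:

```python
from itertools import combinations

lines = {                              # line name -> set of selected event ids
    "line_a": {1, 2, 3, 4},
    "line_b": {3, 4, 5},
    "line_c": {10, 11},
    "line_d": {11, 12},
}

def jaccard(s1, s2):
    """Overlap of two event sets: |intersection| / |union|."""
    return len(s1 & s2) / len(s1 | s2)

# Start with one stream per line, then greedily merge the most similar pair
# of streams while their event-set similarity exceeds a threshold.
streams = {name: {name} for name in lines}          # stream -> member lines
events = {name: set(evts) for name, evts in lines.items()}  # stream -> events
THRESHOLD = 0.2
while len(streams) > 1:
    a, b = max(combinations(streams, 2),
               key=lambda p: jaccard(events[p[0]], events[p[1]]))
    if jaccard(events[a], events[b]) < THRESHOLD:
        break
    streams[a] |= streams.pop(b)
    events[a] |= events.pop(b)
```

With these toy sets, line_a merges with line_b and line_c with line_d, leaving two streams with little duplicated data; balancing stream sizes could be added as a second term in the merge criterion.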