Generative Models for Fast Calorimeter Simulation
Simulation is one of the key components in high-energy physics. Historically, it has relied on Monte Carlo methods, which require a tremendous amount of computing resources. These methods may struggle to meet the expected needs of the High-Luminosity Large Hadron Collider (HL-LHC), so the experiments urgently need new fast simulation techniques. We introduce a new deep learning framework based on Generative Adversarial Networks that can be faster than traditional simulation methods by five orders of magnitude while maintaining reasonable simulation accuracy. This approach will allow physicists to produce the amount of simulated data needed by the coming HL-LHC experiments using limited computing resources.
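To illustrate the adversarial setup the abstract refers to, here is a minimal sketch in plain NumPy; it is a one-dimensional toy (a Gaussian energy deposit standing in for real calorimeter responses, with a linear generator and a logistic-regression discriminator), not the paper's actual architecture. All names and hyperparameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "real" calorimeter data: 1-D Gaussian energy deposits.
def sample_real(n):
    return rng.normal(loc=5.0, scale=1.0, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: affine map from latent noise z to a fake sample.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: logistic regression separating real from fake.
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(size=(batch, 1))
    fake = z @ g_w + g_b
    real = sample_real(batch)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(x @ d_w + d_b)
        grad = p - label                  # d(BCE loss)/d(logit)
        d_w -= lr * x.T @ grad / batch
        d_b -= lr * grad.mean(0)

    # Generator update: push D(G(z)) toward 1, back-propagating through D.
    p = sigmoid(fake @ d_w + d_b)
    grad_logit = p - 1.0
    grad_fake = grad_logit @ d_w.T
    g_w -= lr * z.T @ grad_fake / batch
    g_b -= lr * grad_fake.mean(0)

# After training, generation is a single cheap matrix product --
# this is where the speed-up over Monte Carlo simulation comes from.
samples = rng.normal(size=(1000, 1)) @ g_w + g_b
```

The key point of the sketch is the last two lines: once trained, drawing a simulated event is one linear pass over noise, which is what makes GAN-based simulation orders of magnitude faster than step-by-step Monte Carlo particle transport.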
The monograph presents results from Professor A. Shalumov's Research School of Modeling, Information Technology and Automated Systems (Russia). ASONIKA, the program developed by the school, is reviewed here with respect to the reliability and quality of devices; it simulates electronics and chips under harmonic and random vibration, single and multiple impacts, linear acceleration and acoustic noise, and steady-state and transient thermal effects. Calculations are performed for thermal stress during changes in temperature and power over time, and for the number of cycles to fatigue failure under mechanical loads as well as under cyclic thermal effects. Simulation results are taken into account in the reliability analysis. Models, the software interface, and simulation examples are presented.
For engineers and scientists involved in the design automation of electronics.
It is difficult to imagine an enterprise, company, firm, educational organization, or healthcare organization that does not deal with information systems. The openness and flexibility of information systems provide flexible and effective management. It is therefore necessary to adapt an information system to changing conditions and to integrate it with other systems, such as a simulation system. This makes it possible to change business processes, to carry out their reengineering, to anticipate the consequences of any event, and to take different risks into account.
Nested Petri nets (NP-nets) are Petri nets with net tokens, an extension of high-level Petri nets for modeling active objects, mobility, and dynamics in distributed systems. In this paper we present an algorithm for translating two-level NP-nets into behaviorally equivalent Colored Petri nets, with a view to applying CPN methods and tools to nested Petri net analysis. We prove that the proposed translation preserves dynamic semantics in terms of bisimulation equivalence.
Financial markets have always been attractive as a means of increasing one's wealth, and those who make accurate predictions take the prize. Forecasting models such as linear ones are simple to compute; however, they give rough approximations of the underlying relationships in the data, thus producing poor forecasts. The solution to this issue could be nonlinear models, which try to fit the data and capture the relationships with higher accuracy. Previous research seems to prove this statement from the statistician's point of view, which might be of little use to an investor. Therefore, the focus of this paper is on the comparison of three types of models (nonlinear: ANN, STAR; and linear: AR) in terms of financial performance. Our research is based on the initial code for GAUSS and papers by Dick van Dijk. The data used are the monthly S&P 500 Index values from 1970 to 2012, provided by Robert Shiller's website. Forecasting of index changes begins in 1995 and ends in 2012, providing up-to-date results for 14 model specifications. The best model proves to be the flexible ANN, beating the linear AR in the majority of cases and leaving the underperforming, heavily parameterized STAR model behind. Thus, it is evident that the more flexible nonlinear models outperform the heavily parameterized ones as well as linear models for the S&P 500 Index. The introduced type of performance evaluation has a more comprehensible application to financial market analysis.
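The linear AR benchmark used in comparisons like this one can be sketched in a few lines of NumPy. The example below is illustrative only: it fits an AR(1) model by least squares on synthetic data (a hypothetical stand-in for index returns, not the S&P 500 series or the paper's GAUSS code) and compares its one-step-ahead forecasts against a naive zero-change benchmark.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for monthly index returns: an AR(1) process
# y_t = phi * y_{t-1} + eps_t with known phi = 0.4.
n, phi_true = 500, 0.4
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal(scale=0.5)

# Fit AR(1) by ordinary least squares on the training window.
train = 350
lagged, target = y[:train - 1], y[1:train]
phi_hat = (lagged @ target) / (lagged @ lagged)

# One-step-ahead forecasts over the held-out test window.
preds = phi_hat * y[train - 1:n - 1]
actual = y[train:]
rmse_ar = np.sqrt(np.mean((actual - preds) ** 2))
rmse_naive = np.sqrt(np.mean(actual ** 2))  # zero-change benchmark
```

The out-of-sample split mirrors the paper's design (fit on history, evaluate on a later window); the nonlinear ANN and STAR models would simply replace the one-parameter linear map with a richer function of the lagged values.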
This paper considers integrated information systems for corporate planning and budgeting. Four groups of practical tasks that exceed the typical functionality of special-purpose planning and budgeting information systems are identified. Several classes of information systems (simulation, statistical analysis, financial analysis and modeling, group decision making, business intelligence) that may ensure the completeness of corporate planning and budgeting are identified as solutions complementary to special-purpose planning and budgeting systems.
A model is considered for organizing cargo transportation between two node stations connected by a railway line containing a certain number of intermediate stations. The movement of cargo is in one direction. Such a situation may occur, for example, if one of the node stations is located in a region that produces raw material for the manufacturing industry located in the region of the other node station. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, and the rule of distribution of cargo to the final node stations. The process of cargo transportation follows a set rule of control. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow, which results in the need for a "correct" extension of solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge-Kutta method, we were able to construct these quasi-solutions numerically and determine their rate of growth. We note that the main technical difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) on a number of model parameters characterizing the rule of control, the technologies for cargo transportation, and the intensity of cargo supply at a node station.
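The numerical workhorse mentioned above, the classical fourth-order Runge-Kutta scheme, can be stated in a few lines. This is a generic sketch of the method itself (the cargo-transportation system and its nonlocal restrictions are not reproduced here); the test problem y' = y is an assumption chosen because its exact solution is known.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, steps):
    """March the RK4 scheme from t0 to t1 in equal steps."""
    t, y = t0, np.asarray(y0, dtype=float)
    h = (t1 - t0) / steps
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Sanity check on y' = y, y(0) = 1, whose exact solution is e^t:
approx = integrate(lambda t, y: y, 0.0, [1.0], 1.0, 100)
```

With 100 steps on [0, 1] the global error is of order h^4, i.e. around 1e-9 for this problem; in the paper's setting the same stepper would be restarted at each discontinuity point of the quasi-solution.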
Event logs collected by modern information and technical systems usually contain enough data for automated process model discovery. A variety of algorithms has been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad hoc selected parts of a log has not yet received a full-fledged implementation. This paper describes an ROLAP-based method of multidimensional event log storage for process mining. The result of the log analysis is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad hoc selection of criteria and a threshold on occurrence probability.
The dynamics of a two-component Davydov-Scott (DS) soliton with a small mismatch of the initial location or velocity of the high-frequency (HF) component was investigated within the framework of a Zakharov-type system of two coupled equations for the HF and low-frequency (LF) fields. In this system, the HF field is described by the linear Schrödinger equation with a potential generated by the LF component varying in time and space. The LF component is described by the Korteweg-de Vries equation with a term representing the quadratic influence of the HF field on the LF field. The frequency of the oscillations of the DS soliton's components was found analytically using the balance equation. The perturbed DS soliton was shown to be stable. The analytical results were confirmed by numerical simulations.
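A minimal sketch of such a coupled system, in one common normalization (the coupling constants $\alpha$, $\beta$ and the KdV scaling are assumptions for illustration, not necessarily the paper's exact form):

```latex
% HF field: linear Schrodinger equation with the LF field v(x,t) as potential
i\,u_t + u_{xx} + \alpha\, v\, u = 0,
% LF field: KdV equation with a quadratic influence of the HF field
v_t + 6\, v v_x + v_{xxx} + \beta \left( |u|^2 \right)_x = 0.
```

Here $u(x,t)$ is the HF envelope and $v(x,t)$ the LF field; the term $\alpha v u$ is the LF-generated potential in the Schrödinger equation, and $\beta(|u|^2)_x$ is the quadratic back-action of the HF field on the KdV component.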
The objective of the Handbook of CO₂ in Power Systems is to present the state-of-the-art developments in power systems that take CO₂ emissions into account. The book covers power system operation modeling with CO₂ emission considerations, CO₂ market mechanism modeling, CO₂ regulation policy modeling, carbon price forecasting, and carbon capture modeling. For each subject, at least one article authored by a world specialist in the specific domain is included.