Development of the Personal Virtual Computer Platform at South Ural State University
We studied the performance of two algorithms for processing the results of molecular dynamics (MD) simulations on modern computing platforms: calculation of the radial distribution function (RDF) and calculation of energies. We found that both algorithms parallelize effectively both on shared-memory systems and on distributed-memory clusters. For processing the results of medium-sized MD systems, the parallelization efficiency of the RDF calculation remains close to 1 for up to 100 cores, and for the energy calculation, up to 500 cores. We also found that the preferred parallelization scheme for the energy calculation depends on the number of CPU cores: parallelization over atom indices is more effective on a small number of cores, while parallelization over the spatial distribution of atoms is preferable on a large number of cores.
In universities and technical colleges offering relevant IT qualifications, multiple streams, courses, and specializations may use the same software products for training purposes within a single semester. University IT services must therefore meet the challenge of building an infrastructure of educational applications that can support the educational process. We note that the number of specializations that study information technology grows every year (for example, HSE offers minor disciplines in which students from any field can enroll). In recent years, online courses have also become popular. If the load is not planned ahead with future trends in mind, the capacity of even the most high-tech infrastructure will be insufficient. The corresponding infrastructure load must be calculated while the disciplines are being planned, so that appropriate facilities can be reserved and an effective learning process organized.
Software developers use a variety of benchmarking tools, but these are complex and do not provide the information needed by those who plan the educational process.
This article discusses the construction of a simulation model that supports educational process planning. The simulation is carried out using the AnyLogic 7 tool. The aim of this work is to develop a simulation model for estimating the load on an information system used in the educational process. In addition to the description of the model, the article presents the results of calculations for various deployment options of the information system (a private cloud or a server at the university). The simulation results were confirmed by data obtained during practical classes at the university. The model makes it possible to plan the educational process so as to achieve a uniform load on the services. If necessary, it also supports a decision on where to host the educational information system: on university servers or in a private cloud.
This book constitutes the refereed proceedings of the Third Russian Supercomputing Days, RuSCDays 2017, held in Moscow, Russia, in September 2017. The 41 revised full papers and one revised short paper presented were carefully reviewed and selected from 120 submissions. The papers are organized in topical sections on parallel algorithms; supercomputer simulation; high performance architectures, tools and technologies.
Computer simulation is a fast-growing approach to research in the sciences, complementing experimental and analytical work. The main goal of the conference is the development of methods and algorithms that take into account trends in hardware development and can help intensify research. The conference should serve as a venue where senior scientists and students have the opportunity to talk to each other and exchange ideas and views on developments in high-performance computing across the sciences.
The performance of the molecular dynamics software package Gromacs was measured on various hardware: desktop computers, clusters based on x86_64 or many-integrated-core (MIC) processors, and heterogeneous systems with gaming graphics cards or general-purpose GPUs. The optimal choice of hardware for molecular dynamics simulations is discussed.
High-performance computing plays an increasingly important role in modern science and technology. However, the lack of convenient interfaces and automation tools greatly complicates the widespread use of HPC resources among scientists. The paper presents an approach to solving these problems based on Everest, a web-based distributed computing platform. The platform enables convenient access to HPC resources by means of domain-specific computational web services, development and execution of many-task applications, and pooling of multiple resources for running distributed computations. The paper describes the improvements that have been made to the platform based on the experience of integration with the resources of supercomputing centers. The use of HPC resources via Everest is demonstrated with a loosely coupled many-task application for solving global optimization problems.
Nowadays, a wide spectrum of Intel Xeon processors is available. The new Zen CPU architecture developed by AMD has extended the number of options for x86_64 HPC hardware. Moreover, Nvidia has released a custom 64-bit Denver architecture based on the ARM instruction set. This large number of options makes the optimal CPU choice for prospective HPC systems a non-trivial procedure. Such a co-design procedure should follow the requests of the end-user community. Modern computational materials science studies are among the major consumers of HPC resources worldwide, and the VASP code is perhaps the most popular tool for this research. In this work, we discuss a benchmark metric and results based on a VASP test model that allow us to compare different hardware and to identify the best options with respect to the energy-to-solution criterion.
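The energy-to-solution criterion can be sketched as follows. This is a minimal illustration, not the paper's benchmark: the power figures, timings, and function name are hypothetical placeholders, assuming energy-to-solution is computed as power draw times wall-clock time for a fixed benchmark job.

```python
def energy_to_solution(node_power_w, wall_time_s, n_nodes=1):
    """Energy (in joules) consumed to complete the fixed benchmark job:
    power per node * wall-clock time * number of nodes."""
    return node_power_w * wall_time_s * n_nodes

# Compare two hypothetical systems running the same VASP-like test model.
# A faster but more power-hungry node can lose to a slower, frugal one.
e_a = energy_to_solution(node_power_w=400, wall_time_s=1200)  # system A
e_b = energy_to_solution(node_power_w=250, wall_time_s=1600)  # system B
best = "A" if e_a < e_b else "B"  # lower energy-to-solution wins
```

The point of the metric is that time-to-solution alone can mislead: system A above finishes sooner, yet system B completes the same job using less total energy.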