Simplifying the Use of Clouds for Scientific Computing with Everest
Cloud computing has emerged as a new paradigm for on-demand access to a vast pool of computing resources, providing an alternative to on-premises resources. This paper discusses the challenges of using cloud computing infrastructures for scientific computing. An approach based on the Everest platform that addresses these challenges is presented, along with a prototype integration of Everest with Google Compute Engine. The proposed integration enables Everest users to seamlessly provision and use cloud-based computing resources for running different types of workloads, including HPC and HTC applications. In contrast to other efforts, the presented approach also supports building and sharing domain-specific web services that automate the execution of applications on dynamically provisioned cloud resources or hybrid infrastructures.
Widespread acceptance and adoption of cloud computing calls for the adaptation and development of existing risk assessment models for information systems. The approach suggested in this article can be used for the risk assessment of information systems built on cloud computing technology and for assessing the effectiveness of security measures.
One of the most important problems in the development of automated systems for scientific research is ensuring the efficient use of computing resources. The algorithm for dividing tasks among the processors of the molecular-dynamics modeling subsystem of the "Slag Melt" research information system is described. The authors propose a method for optimizing the algorithm, as well as for estimating and calculating the efficiency of the system and improving its operation.
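The abstract does not detail the division algorithm itself. As a baseline illustration of static task division among processors, a common greedy heuristic (longest-processing-time-first) can be sketched as follows; the function name and cost model are illustrative, not taken from the paper:

```python
import heapq

def divide_tasks(task_costs, n_processors):
    """Greedy LPT partitioning: assign each task, heaviest first,
    to the currently least-loaded processor (illustrative sketch)."""
    # Min-heap of (current_load, processor_index)
    heap = [(0.0, p) for p in range(n_processors)]
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_processors)}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        assignment[p].append(task)
        heapq.heappush(heap, (load + cost, p))
    return assignment
```

Any real molecular-dynamics workload would additionally account for inter-processor communication costs, which this sketch ignores.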
The proceedings of the 11th International Conference on Service-Oriented Computing (ICSOC 2013), held in Berlin, Germany, December 2–5, 2013, contain high-quality research papers that represent the latest results, ideas, and positions in the field of service-oriented computing. Since the first meeting more than ten years ago, ICSOC has grown to become the premier international forum for academics, industry researchers, and practitioners to share, report, and discuss their ground-breaking work. ICSOC 2013 continued this tradition, focusing in particular on emerging trends at the intersection of service-oriented computing, cloud computing, and big data.
The International Science and Technology Conference "Modern Networking Technologies (MoNeTec): SDN&NFV Next Generation of Computational Infrastructure" was dedicated to Software-Defined Networking (SDN) and Network Function Virtualization (NFV). These technologies have emerged as the hottest networking trends of the past few years. The conference proceedings present papers discussing a broad range of SDN&NFV topics.
Almost all of the technologies that are now part of the cloud paradigm existed before, but until recently the market had no offerings that brought these technologies together in a single commercially attractive solution. Over the last decade, however, public cloud services have emerged through which these technologies became, on the one hand, available to developers and, on the other, comprehensible to the business community. Yet many of the features that make cloud computing attractive may conflict with traditional models of information security.
Because cloud computing brings new challenges in the field of information security, it is imperative for organizations to control the process of information risk management in the cloud. In this article, on the basis of the Common Vulnerability Scoring System (CVSS), which makes it possible to determine a qualitative indicator of the exposure of information systems to vulnerabilities while taking environmental factors into account, we propose a method of risk assessment for different types of cloud deployment environments.
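To give a feel for how environmental factors enter a CVSS-style calculation, the sketch below weights each base impact component by the security requirement of the deployment environment. This is a simplified illustration on a 0–1 scale, not the full CVSS formula and not the authors' exact method:

```python
def environmental_impact(conf_impact, integ_impact, avail_impact,
                         conf_req, integ_req, avail_req):
    """CVSS-style adjusted impact: each base impact (0..1) is scaled by the
    environment's security requirement weight (e.g. 0.5 low, 1.0 medium,
    1.5 high), then combined multiplicatively, as in the CVSS v2
    environmental score. Simplified illustration only."""
    adjusted = 1 - (1 - conf_impact * conf_req) \
                 * (1 - integ_impact * integ_req) \
                 * (1 - avail_impact * avail_req)
    return min(adjusted, 1.0)  # cap at the maximum possible impact
```

In a public-cloud deployment, for example, a higher confidentiality requirement weight raises the adjusted impact of the same base vulnerability relative to a private deployment.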
In information risk management, determining the applicability of cloud services for an organization is impossible without understanding the context in which the organization operates and the consequences of the possible types of threats it may face as a result of its activities. This paper proposes a risk assessment approach used in selecting the most appropriate configuration of a cloud computing environment from the point of view of security requirements. Applying risk assessment to different types of cloud deployment environments reveals the proportion of possible attacks that can be countered and makes it possible to correlate the amount of damage with the total cost of ownership of the organization's entire IT infrastructure.
Cloud-based technologies have proliferated in the past few years, while the manufacturing industry has moved towards digitization and networking. Accordingly, cloud-based technologies have been adopted in the development of new-generation manufacturing systems which orchestrate different activities, including product design, process and task planning, production, and customer service. These new cloud-ingrained technologies have the potential to change the collaboration of product development partners, the processing and sharing of information, and the utilization rates of critical equipment. Cloud-based technologies affect many aspects of manufacturing activities, and they therefore have the power to enable new business models in the manufacturing industry or change existing ones. Based on a literature review, this paper analyzes the latest requirements, challenges, and trends of the manufacturing industry. It structures the findings in a coherent manner and further hypothesizes how cloud computing may address the identified requirements and challenges as well as realize or support new concepts in manufacturing.
A model is considered for organizing cargo transportation between two node stations connected by a railway line containing a certain number of intermediate stations. The movement of cargo occurs in one direction. Such a situation may arise, for example, if one of the node stations is located in a region that produces raw materials for a manufacturing industry located in the region of the other node station. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, and the rule of distribution of cargo at the final node station. The process of cargo transportation follows a given control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow, which leads to the need for a "correct" extension of solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, we were able to numerically construct these quasi-solutions and determine their rate of growth. We note that, technically, the main difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) on a number of model parameters characterizing the control rule, the cargo transportation technologies, and the intensity of cargo supply at the node station.
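For reference, the classical fourth-order Runge–Kutta step mentioned above has the following standard form; this is only the generic integrator, not the authors' full scheme for handling the nonlocal linear restrictions:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Between the countably many jump points, the quasi-solutions can be advanced with such steps; the jumps themselves must be imposed separately to satisfy the nonlocal restrictions.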
Event logs collected by modern information and technical systems usually contain enough data for automated process model discovery. A variety of algorithms has been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad hoc selected parts of a log still lacks a full-fledged implementation. This paper describes a ROLAP-based method of multidimensional event log storage for process mining. The result of the log analysis is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad hoc selection of criteria and a threshold on occurrence probability.
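The graph described above can be approximated by a directly-follows graph whose edges are weighted by transition probabilities. A minimal sketch, assuming a log represented as a list of traces (the function name and log format are illustrative, not the paper's ROLAP schema):

```python
from collections import Counter

def directly_follows_graph(log):
    """Build a directly-follows graph from an event log (a list of traces,
    each trace a list of activity names). Edge weight = probability that
    the source activity is directly followed by the target."""
    pair_counts = Counter()  # how often activity a is directly followed by b
    out_counts = Counter()   # how often activity a has any direct successor
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            pair_counts[(a, b)] += 1
            out_counts[a] += 1
    return {(a, b): n / out_counts[a] for (a, b), n in pair_counts.items()}
```

Filtering this dictionary by a probability threshold corresponds to the paper's ranking of event sequences by occurrence probability, applied per transition rather than per sequence.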
The geographic information system (GIS) is based on the first and only Russian Imperial Census of 1897 and the First All-Union Census of the Soviet Union of 1926. The GIS features vector data (shapefiles) of all provinces of the two states. For the 1897 census, there is information about linguistic, religious, and social estate groups. The part based on the 1926 census features nationality. Both shapefiles include information on gender and on rural and urban population. The GIS allows for producing any maps needed for individual studies of the period which require the administrative boundaries and demographic information.
It is well-known that the class of sets that can be computed by polynomial size circuits is equal to the class of sets that are polynomial time reducible to a sparse set. It is widely believed, but unfortunately up to now unproven, that there are sets in EXP^NP, or even in EXP, that are not computable by polynomial size circuits and hence are not reducible to a sparse set. In this paper we study this question in a more restricted setting: what is the computational complexity of sparse sets that are selfreducible? It follows from earlier work of Lozano and Torán (in: Mathematical systems theory, 1991) that EXP^NP does not have sparse selfreducible hard sets. We define a natural version of selfreduction, tree-selfreducibility, and show that NEXP does not have sparse tree-selfreducible hard sets. We also construct an oracle relative to which all of EXP is reducible to a sparse tree-selfreducible set. These lower bounds are corollaries of more general results about the computational complexity of sparse sets that are selfreducible, and can be interpreted as super-polynomial circuit lower bounds for NEXP.