A Risk Assessment Model for Information Security of Cloud-Based Information Systems
Widespread acceptance and adoption of cloud computing calls for the adaptation and development of existing risk assessment models for information systems. The approach suggested in this article can be used to assess the risks of information systems functioning on the basis of cloud computing technology and to assess the effectiveness of security measures.
The proceedings of the 11th International Conference on Service-Oriented Computing (ICSOC 2013), held in Berlin, Germany, December 2–5, 2013, contain high-quality research papers that represent the latest results, ideas, and positions in the field of service-oriented computing. Since the first meeting more than ten years ago, ICSOC has grown to become the premier international forum for academics, industry researchers, and practitioners to share, report, and discuss their ground-breaking work. ICSOC 2013 continued this tradition, focusing in particular on emerging trends at the intersection of service-oriented computing, cloud computing, and big data.
The International Science and Technology Conference "Modern Networking Technologies (MoNeTec): SDN&NFV Next Generation of Computational Infrastructure" was dedicated to Software-Defined Networking (SDN) and Network Function Virtualization (NFV). These technologies have emerged as the hottest networking trends of the past few years. The conference proceedings present papers discussing a broad scope of SDN&NFV topics.
Almost all of the technologies that now form the cloud paradigm existed before, but until recently the market offered no solutions that brought these emerging technologies together into a single commercially attractive product. Over the last decade, however, public cloud services have appeared that make these technologies available to developers on the one hand and comprehensible to the business community on the other. Yet many of the features that make cloud computing attractive may conflict with traditional models of information security.
Because cloud computing brings new challenges in the field of information security, it is imperative for organizations to control the process of information risk management in the cloud. In this article, building on the Common Vulnerability Scoring System (CVSS), which makes it possible to determine a qualitative indicator of the exposure of information systems to vulnerabilities while taking environmental factors into account, we propose a method of risk assessment for different types of cloud deployment environments.
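As background for the CVSS-based approach mentioned above, the following is a minimal sketch (not the article's method) of the published CVSS v3.1 base-score formula, restricted to vectors with Scope: Unchanged; the metric weights are taken from the public CVSS v3.1 specification:

```python
import math

# CVSS v3.1 metric weights (Scope: Unchanged only), from the public spec.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact levels

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for Scope: Unchanged vectors."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # The spec's "Roundup": smallest value with one decimal place >= x.
    return math.ceil(min(impact + exploitability, 10.0) * 10) / 10

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (a well-known critical vector)
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```

Environmental adjustment (re-weighting C/I/A requirements per deployment, as the article's context suggests) would multiply the impact sub-scores by per-organization requirement factors; that extension is omitted here.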
In information risk management, determining the applicability of cloud services for an organization is impossible without understanding the context in which the organization operates and the consequences of the possible types of threats it may face as a result of its activities. This paper proposes a risk assessment approach for selecting the most appropriate configuration of a cloud computing environment from the point of view of security requirements. Applying the risk assessment to different types of cloud deployment reveals how well each deployment counters possible attacks and makes it possible to correlate the amount of damage with the total cost of ownership of the organization's entire IT infrastructure.
Some provisions of SWOT analysis, and assessments of its productivity in business, are criticized.
The use of hardware virtualization for ensuring information security is discussed. A review of various approaches to improving the security of software systems based on virtualization is given. A review of possible scenarios of the use of virtualization by intruders is also presented. The application domains and limitations of the available solutions and the prospects for future development in the field are discussed.
This paper overviews the core technologies implemented by a comparatively new class of products on the information security market: web application firewalls. Web applications are a widely used and convenient way of giving remote users access to corporate information resources. A web application can, however, become a single point of failure, rendering the entire information infrastructure unreachable for legitimate clients. To prevent malicious access attempts against endpoint information resources and, by extension, against the web server, a new class of information security solutions has been created. Web application firewalls function at the highest, seventh layer of the ISO/OSI model and serve as a controlling tunnel for all the traffic heading to and from a company's web application server(s). To ensure decent levels of traffic monitoring and intrusion prevention, web application firewalls are equipped with various mechanisms for checking the "normalness" of a data exchange session. These mechanisms include protocol check routines, machine learning techniques, traffic signature analysis, and more dedicated means such as prevention of denial-of-service, XSS, and CSRF attacks. The ability to research and add user rules to be processed along with vendor-provided ones is important, since every company has its own security policy, and therefore the web application firewall should give security engineers ways to tweak its rules to reflect that policy more precisely. This research is based on wide practical experience in integrating web application firewalls into the security landscape of various organizations, administering them, and customizing them. We illustrate our survey of available filtering mechanisms and their implementations with example product features from market leaders, schemes, and screenshots from real web application firewall systems.
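One of the mechanisms named above, traffic signature analysis with user-defined rules alongside vendor rules, can be illustrated with a minimal sketch. This is not any vendor's implementation; the rule patterns and the `/admin/backup` policy rule are hypothetical examples:

```python
import re

# Each rule is a compiled regex plus a label. Vendor rules and user rules
# are evaluated identically, so site-specific policy is just extra rules.
VENDOR_RULES = [
    (re.compile(r"<script\b", re.I), "xss-script-tag"),
    (re.compile(r"\bunion\s+select\b", re.I), "sqli-union-select"),
    (re.compile(r"\.\./"), "path-traversal"),
]

def inspect(payload, user_rules=()):
    """Return the labels of all signature rules matched by the payload."""
    return [label for rx, label in list(VENDOR_RULES) + list(user_rules)
            if rx.search(payload)]

# A user rule reflecting a hypothetical local security policy:
user_rules = [(re.compile(r"/admin/backup"), "policy-block-backup-endpoint")]

print(inspect("id=1 UNION SELECT password FROM users", user_rules))
print(inspect("GET /admin/backup?full=1", user_rules))
```

Real products layer protocol validation, anomaly scoring, and learning-based models on top of such signature matching, but the rule-evaluation core follows this shape.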
In this paper we present a virtualization-based approach to protecting the execution of trusted applications inside a potentially compromised operating system. In our approach, we do not isolate the application from other processes in any way; instead, we use the hypervisor to control processes inside the OS and to prevent undesired actions on application resources. The only requirement for our technique to work is the presence of hardware support for virtualization; no modifications to the application or the OS are required.
Generalized error-locating codes are discussed. An algorithm for calculation of the upper bound of the probability of erroneous decoding for known code parameters and the input error probability is given. Based on this algorithm, an algorithm for selection of the code parameters for a specified design and input and output error probabilities is constructed. The lower bound of the probability of erroneous decoding is given. Examples of the dependence of the probability of erroneous decoding on the input error probability are given and the behavior of the obtained curves is explained.
Event logs collected by modern information and technical systems usually contain enough data for automated discovery of process models. A variety of algorithms have been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad-hoc selected parts of a log has not yet received a full-fledged implementation. This paper describes an ROLAP-based method of multidimensional event log storage for process mining. The result of the log analysis is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad-hoc selection of criteria and an occurrence probability value.
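The graph described above, event sequences ranked by occurrence probability, can be sketched in its simplest form as a directly-follows graph with edge probabilities. This is a minimal illustration, not the paper's ROLAP implementation; the activity names are invented:

```python
from collections import Counter

def dfg_with_probabilities(log):
    """Build directly-follows edges from an event log (a list of traces,
    each a list of activity names) and weight each edge by its probability
    among all outgoing edges of the source activity."""
    edge_counts = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            edge_counts[(a, b)] += 1
    out_totals = Counter()
    for (a, _), n in edge_counts.items():
        out_totals[a] += n
    return {(a, b): n / out_totals[a] for (a, b), n in edge_counts.items()}

log = [
    ["register", "check", "approve"],
    ["register", "check", "reject"],
    ["register", "check", "approve"],
]
probs = dfg_with_probabilities(log)
print(probs[("check", "approve")])  # 2 of the 3 edges leaving "check"
```

Filtering this dictionary by a probability threshold gives the ranked sublog views the abstract refers to: edges below the chosen occurrence probability are simply dropped before rendering the graph.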
It is well-known that the class of sets that can be computed by polynomial size circuits is equal to the class of sets that are polynomial time reducible to a sparse set. It is widely believed, but unfortunately up to now unproven, that there are sets in EXPNP, or even in EXP that are not computable by polynomial size circuits and hence are not reducible to a sparse set. In this paper we study this question in a more restricted setting: what is the computational complexity of sparse sets that are selfreducible? It follows from earlier work of Lozano and Torán (in: Mathematical systems theory, 1991) that EXPNP does not have sparse selfreducible hard sets. We define a natural version of selfreduction, tree-selfreducibility, and show that NEXP does not have sparse tree-selfreducible hard sets. We also construct an oracle relative to which all of EXP is reducible to a sparse tree-selfreducible set. These lower bounds are corollaries of more general results about the computational complexity of sparse sets that are selfreducible, and can be interpreted as super-polynomial circuit lower bounds for NEXP.
The objective of the Handbook of CO₂ in Power Systems is to present the state-of-the-art developments in power systems that take CO₂ emissions into account. The book includes power systems operation modeling with CO₂ emissions considerations, CO₂ market mechanism modeling, CO₂ regulation policy modeling, carbon price forecasting, and carbon capture modeling. For each of the subjects, at least one article authored by a world specialist in the specific domain is included.