Software-defined networking (SDN) is an approach to building computer networks that separates and abstracts the data and control planes of these systems. In an SDN, a centralized controller manages a distributed set of switches. A set of open commands for packet forwarding and flow-table updating was defined in the form of a protocol known as OpenFlow. In this paper, we describe an abstract formal model of SDN, introduce a tentative language for the specification of SDN forwarding policies, and formally set up model-checking problems for SDNs.
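The controller/switch split described above can be pictured with a minimal sketch. All names here (`Switch`, `install_rule`, `forward`) are hypothetical illustrations, not part of the OpenFlow specification: the control plane installs match/action rules, and the data plane forwards packets by table lookup.

```python
# Minimal sketch of the data/control-plane split in SDN (illustrative only):
# a "controller" installs match/action rules into a switch's flow table,
# and the "switch" forwards packets by first-match lookup.

class Switch:
    def __init__(self):
        self.flow_table = []  # list of (match_fields, action) pairs

    def install_rule(self, match, action):
        """Called by the centralized controller (control plane)."""
        self.flow_table.append((match, action))

    def forward(self, packet):
        """Data plane: first matching rule wins; the default action is 'drop'."""
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"

sw = Switch()
sw.install_rule({"dst": "10.0.0.2"}, "out_port_2")
print(sw.forward({"src": "10.0.0.1", "dst": "10.0.0.2"}))  # out_port_2
print(sw.forward({"src": "10.0.0.1", "dst": "10.0.0.9"}))  # drop
```

A forwarding policy, in this toy view, is just a constraint on which rule sets the controller may install.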
The authors of [1, 2] suggested a model of information distortion by white noise. The present work discusses the asymptotic behavior of CRC error probabilities at low values of p, the probability of distortion of a transferred information bit. On the basis of the theoretical results, the probability of recognizing a packet as undistorted when it contains at least one distortion is assessed for two specific protocols, E1 and ETSI EN 302 307, as well as in the examples.
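The quantity discussed above (an error pattern slipping past the CRC) can be estimated by direct simulation. The sketch below is our own illustration under the same i.i.d. bit-flip model with probability p; the generator polynomial (CRC-8, x^8 + x^2 + x + 1) and all function names are assumptions, not taken from the cited protocols.

```python
import random

def crc_bits(bits, poly_bits):
    # Bitwise polynomial division over GF(2); returns a remainder of
    # length len(poly_bits) - 1.
    reg = list(bits) + [0] * (len(poly_bits) - 1)
    for i in range(len(bits)):
        if reg[i]:
            for j, pb in enumerate(poly_bits):
                reg[i + j] ^= pb
    return reg[len(bits):]

POLY = [1, 0, 0, 0, 0, 0, 1, 1, 1]  # x^8 + x^2 + x + 1 (CRC-8), k = 8

def undetected_error_rate(n, p, trials, rng):
    """Count frames where at least one bit flipped, and where the receiver
    nevertheless accepts the frame (recomputed CRC matches the CRC field)."""
    undetected = at_least_one = 0
    for _ in range(trials):
        data = [rng.randint(0, 1) for _ in range(n)]
        frame = data + crc_bits(data, POLY)             # transmitted packet
        recv = [b ^ (rng.random() < p) for b in frame]  # each bit flips w.p. p
        if recv != frame:
            at_least_one += 1
            if crc_bits(recv[:n], POLY) == recv[n:]:
                undetected += 1
    return undetected, at_least_one

rng = random.Random(7)
u, e = undetected_error_rate(n=32, p=0.01, trials=2000, rng=rng)
print(u, e)  # undetected errors are rare compared to detected ones at small p
```

At small p, single-bit errors dominate, and a CRC detects all of them, which is why the asymptotic analysis focuses on the rare multi-bit patterns divisible by the generator.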
We study the distribution of values of a cyclic redundancy check (CRC) of length k computed over a data section of volume n. The behavior of the CRC value distribution is examined for large n and fixed values of k (k = const, n → ∞). Using character theory, we find conditions for the asymptotic uniformity of the CRC distribution. The asymptotic results can be applied to assessing the errors of a number of protocols, such as USB, X.25, HDLC, Bluetooth, and Ethernet.
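The uniformity claim can be checked empirically. The sketch below is our own illustration, not the paper's construction: it computes a k-bit CRC (here k = 4 with a hypothetical degree-4 generator) over uniformly random n-bit blocks and measures the deviation of the empirical CRC histogram from the uniform distribution on the 2^k values.

```python
from collections import Counter
import random

def crc_int(data, n, poly, k):
    # data: an integer holding n message bits; returns the k-bit CRC remainder
    # of data * x^k modulo the generator polynomial `poly` (degree k).
    reg = data << k
    for i in reversed(range(n)):
        if reg >> (i + k) & 1:
            reg ^= poly << i
    return reg  # the remainder fits in k bits

k, poly = 4, 0b10011          # x^4 + x + 1 (degree-4 generator, k = 4)
rng = random.Random(1)
for n in (8, 64, 512):
    counts = Counter(crc_int(rng.getrandbits(n), n, poly, k)
                     for _ in range(20000))
    dev = max(abs(c / 20000 - 1 / 2**k) for c in counts.values())
    print(n, round(dev, 4))   # deviation from the uniform frequency 1/2^k
```

For uniformly random data the CRC map is balanced, so the deviation stays at sampling-noise level; the paper's interest is in non-uniform data models, where character sums decide whether the same uniformity emerges as n → ∞.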
In this work, we derive an attainable upper bound on the degree of distinguishability of a connected permutation automaton with a given diameter.
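For a concrete reading of "degree of distinguishability," here is a small sketch under our own assumptions (a Moore-style permutation automaton: each input letter acts as a permutation of the states, and each state carries an output label). BFS over state pairs finds the length of the shortest distinguishing word for each pair; the maximum over all distinguishable pairs is the degree.

```python
from collections import deque

def degree_of_distinguishability(perms, out):
    """perms: {letter: permutation as a list}, out: output label per state.
    Returns the max, over distinguishable state pairs, of the shortest
    distinguishing word length."""
    n = len(out)
    dist = {}
    q = deque()
    for s in range(n):
        for t in range(s + 1, n):
            if out[s] != out[t]:
                dist[(s, t)] = 0   # distinguished by the empty word
                q.append((s, t))
    while q:
        s, t = q.popleft()
        for perm in perms.values():
            # predecessors under a permutation come from its inverse
            ps, pt = perm.index(s), perm.index(t)
            pair = (min(ps, pt), max(ps, pt))
            if pair[0] != pair[1] and pair not in dist:
                dist[pair] = dist[(s, t)] + 1
                q.append(pair)
    return max(dist.values()) if dist else 0

# Two-letter example: 'a' is a 3-cycle, 'b' a transposition.
perms = {"a": [1, 2, 0], "b": [1, 0, 2]}
print(degree_of_distinguishability(perms, out=[0, 0, 1]))  # 1
```

The bound in the paper relates this quantity to the diameter of the automaton's transition graph; the sketch only computes the quantity itself.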
Nested Petri nets (NP-nets) are an extension of the Petri net formalism within the “nets-within-nets” approach, in which tokens in a marking are themselves Petri nets that have autonomous behavior and synchronize with the system net. The NP-net formalism allows modeling multilevel multiagent systems with dynamic structure in a natural way. Currently, there is no tool support for NP-net simulation and analysis. This paper proposes a translation of NP-nets into colored Petri nets and the use of CPN Tools as a virtual machine for NP-net modeling, simulation, and automatic verification.
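A toy sketch of the nets-within-nets idea, with all encodings our own (not the paper's translation): a net-token is represented by its own marking, which mirrors how such a token could be carried as a structured colour in a colored Petri net.

```python
from collections import Counter

def fire_inner(marking, pre, post):
    """Fire an inner-net transition if enabled; return the new marking or None."""
    if all(marking[p] >= cnt for p, cnt in pre.items()):
        m = Counter(marking)
        m.subtract(pre)
        m.update(post)
        return +m          # drop zero-count entries
    return None

# Inner net: one transition t moving a token from place 'idle' to 'busy'.
token = Counter({"idle": 1})
system = {"P1": [token], "P2": []}   # system-net marking: P1 holds a net-token

# Synchronized step: a system transition moves the net-token P1 -> P2
# while the token itself fires t (idle -> busy).
tok = system["P1"].pop()
updated = fire_inner(tok, pre={"idle": 1}, post={"busy": 1})
if updated is not None:
    system["P2"].append(updated)
print(system["P2"])  # [Counter({'busy': 1})]
```

In the proposed translation, the colour set of a place plays the role of the `Counter` here: the inner net's state travels with the token, so CPN Tools can simulate both levels at once.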
Process mining is a new direction in the field of modeling and analysis of processes, in which an important role is played by information from event logs that record the history of system behavior. Methods and approaches used in process mining are often based on various heuristics, and experiments with large event logs are crucial for substantiating and comparing the developed methods and algorithms. These experiments are very time consuming, so their automation is an important task in process mining. This paper presents the DPMine language, developed specifically to describe and carry out process mining experiments. The basic concepts of the DPMine language, as well as the principles and mechanisms of its extension, are described. Ways of integrating the DPMine language as dynamically loaded components into the VTMine modeling tool are considered. A sample experiment in building a fuzzy model of a process from a data log stored in a normalized database is given.
To verify real-time properties of UML statecharts, one may apply UPPAAL, a toolbox for model checking of real-time systems. One of the most suitable ways to specify an operational semantics of UML statecharts is to invoke the formal model of Hierarchical Timed Automata. Since the modeling language of UPPAAL is based on Networks of Timed Automata, one has to provide a conversion of Hierarchical Timed Automata into Networks of Timed Automata. In this paper, we describe this conversion algorithm and prove that it is correct with respect to the UPPAAL query language, which is based on a subset of Timed CTL.
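One ingredient of any such conversion is flattening the state hierarchy into the flat state names a network of timed automata can use. The sketch below illustrates only that ingredient, under our own encoding (nested dicts for composite states), and is not the paper's algorithm, which also has to handle clocks, entry/exit actions, and concurrent regions.

```python
def flatten(hierarchy, prefix=()):
    """A composite state is a dict of substates; leaves are None.
    Each flat state is the full path from the root to a leaf."""
    flat = []
    for name, sub in hierarchy.items():
        path = prefix + (name,)
        if sub is None:
            flat.append(".".join(path))      # a leaf becomes one flat state
        else:
            flat.extend(flatten(sub, path))  # recurse into a composite state
    return flat

statechart = {"Off": None,
              "On": {"Heating": {"Low": None, "High": None},
                     "Cooling": None}}
print(flatten(statechart))
# ['Off', 'On.Heating.Low', 'On.Heating.High', 'On.Cooling']
```

Correctness of the full conversion then amounts to showing that every Timed CTL query in the UPPAAL subset holds of the flat network iff it holds of the hierarchical original.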
The intent of this paper is to describe a static analysis tool under development. The main idea behind the design of this tool is to use type and effect systems for the static analysis of real programs. The tool uses LLVM bitcode files as input, thus extending the set of analyzed languages to those supported by the LLVM compiler infrastructure. It uses its own parser of bitcode files and a program model similar to that of LLVM. The approach taken is to research the feasibility of designing instruments for static analysis by applying known type-and-effect-system-based defect detection algorithms to the LLVM bitcode language and, effectively, to the original source code.
A wireless network affected by a broadcast storm attack has been considered in order to determine the availability of autonomous nodes and their ability to perform functional tasks under information influence. The conditions for hacker attacks by a potential intruder have been determined. An availability analysis of wireless devices has been carried out. A model for determining the technical characteristics of wireless self-organizing network devices has been proposed.
Process mining is an emerging discipline related to process management, formal process models, and data mining. One of its main tasks is model synthesis (discovery) based on event logs. A wide range of algorithms for process model discovery, analysis, and enhancement has been developed. Real-life event logs often contain noise of different types. In this paper, we describe the main causes of noise in event logs and study the effect of noise on the performance of process discovery algorithms. Experimental results of applying the main process discovery algorithms to artificial event logs with noise are provided. Specially generated event logs with noise of different types were processed using four basic discovery techniques. Although modern algorithms can cope with some types of noise, in most cases their use does not lead to a satisfactory result. Thus, there is a need for more sophisticated algorithms that can deal with noise of different types.
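The noise types used in the sketch below (dropping an event, swapping adjacent events, inserting an alien activity) are common illustrations, not necessarily the paper's exact taxonomy; the snippet shows how such noise could be injected into traces to produce test logs for discovery algorithms.

```python
import random

def add_noise(trace, kind, rng):
    """Return a copy of `trace` distorted by one noise operation."""
    t = list(trace)
    if kind == "drop" and t:
        del t[rng.randrange(len(t))]          # lose an event
    elif kind == "swap" and len(t) > 1:
        i = rng.randrange(len(t) - 1)
        t[i], t[i + 1] = t[i + 1], t[i]       # reorder adjacent events
    elif kind == "insert":
        t.insert(rng.randrange(len(t) + 1), "alien")  # spurious activity
    return t

rng = random.Random(0)
clean_log = [["a", "b", "c", "d"]] * 5
noisy_log = [add_noise(tr, rng.choice(["drop", "swap", "insert"]), rng)
             for tr in clean_log]
print(noisy_log)
```

A discovery algorithm is then run on `noisy_log` and its output model is compared with the model mined from the clean log, which is essentially the experimental setup the paper describes.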
Finite state transducers over semigroups can be regarded as a formal model of sequential reactive programs that operate in interaction with their environment. Upon receiving a piece of data, such a program performs a sequence of actions and displays the current result. Programs of this kind typically arise in the implementation of computer drivers, on-line algorithms, and control procedures. In many cases, verification of such programs can be reduced to the minimization and equivalence-checking problems for finite state transducers. Minimization of a transducer over a semigroup is performed in three stages: first, the greatest common left divisors are computed for all states of the transducer; next, the transducer is brought to a reduced form by pulling all such divisors “upstream”; and finally, a minimization algorithm for finite state automata is applied to the reduced transducer.
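The first two stages can be illustrated for the special case of the free monoid, where a greatest common left divisor is just the longest common prefix of a state's future outputs. This is a simplified sketch under our own assumptions: the example transducer is acyclic and its states are processed in reverse topological order, whereas the general construction needs a fixpoint computation.

```python
import os

def lcp(words):
    """Longest common prefix = greatest common left divisor in the free monoid."""
    return os.path.commonprefix(words) if words else ""

# transitions[state] = list of (input, output, next_state); [] marks a final state.
trans = {
    2: [],
    1: [("a", "bc", 2), ("b", "bd", 2)],
    0: [("a", "xb", 1)],
}

# Stage 1: GCLD of each state's future outputs (reverse topological order).
gcld = {}
for q in sorted(trans, reverse=True):
    gcld[q] = lcp([w + gcld.get(t, "") for (_, w, t) in trans[q]])

# Stage 2: pull divisors upstream -- each transition output w becomes
# gcld(q)^{-1} * w * gcld(t).
reduced = {q: [(a, (w + gcld[t])[len(gcld[q]):], t) for (a, w, t) in rules]
           for q, rules in trans.items()}
print(gcld)     # {2: '', 1: 'b', 0: 'xbb'}
print(reduced)  # common output prefixes have moved toward the initial state
```

After this reduction, outputs are emitted as early as possible, so ordinary automata minimization (stage three) can merge states that only differed in when, not what, they emitted.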
First-order program schemata represent one of the simplest models of sequential imperative programs intended for solving verification and optimization problems. We consider the decidable relation of logic-term equivalence on these schemata and the problem of their size minimization while preserving logic-term equivalence. We prove that this problem is decidable. Further, we show that first-order program schemata supplied with logic-term equivalence and finite-state deterministic transducers operating over substitutions can be translated into each other. This relationship makes it possible to adapt equivalence-checking and minimization algorithms developed for one of these models of computation to the solution of the same problems for the other. In addition, on the basis of the discovered relationship, we describe a subclass of first-order program schemata whose minimization can be performed in polynomial time by means of known techniques for the minimization of finite-state transducers operating over semigroups. Finally, we demonstrate that, in the general case, the minimization problem for finite-state transducers over semigroups may have several nonisomorphic solutions.
One of the most serious problems in program analysis is dealing with function calls. While function inlining is the traditional approach to this problem, it suffers from increased analysis complexity due to state space explosion. In recent years, Craig interpolation has been successfully used in the context of bounded model checking for function summarization, which allows one to replace a complete function body with a succinct summary and thereby reduce the complexity; unfortunately, this technique can be applied only to a pair of formulae whose conjunction is unsatisfiable. In this work-in-progress paper, we present an approach to function summarization based on Craig interpolation that overcomes this limitation by using random model sampling. It captures interesting input/output relations, strengthening satisfiable formulae into unsatisfiable ones and thus allowing the use of Craig interpolation. Evaluation results show the applicability of this approach; in our future work we plan to perform a more extensive evaluation on real-world examples. © 2015, Allerton Press, Inc.
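As a loose, solver-free analogy for "random model sampling captures input/output relations" (not the paper's SMT-based implementation), the sketch below samples random inputs of a function, records the observed input/output pairs, and generalizes them into a candidate summary predicate that could stand in for the function body. The function `f` and the linear-fit generalization are hypothetical examples.

```python
import random

def f(x):                      # function to summarize (hypothetical example)
    return 2 * x + 1

rng = random.Random(42)
samples = [(x, f(x)) for x in (rng.randint(-100, 100) for _ in range(50))]

# Generalize: fit the relation y = a*x + b from two distinct samples and
# check it against the rest -- a stand-in for an interpolant-shaped summary.
(x0, y0) = samples[0]
(x1, y1) = next(s for s in samples if s[0] != x0)
a = (y1 - y0) // (x1 - x0)
b = y0 - a * x0
summary = lambda x, y: y == a * x + b
assert all(summary(x, y) for x, y in samples)  # summary covers every sample
print(a, b)  # candidate summary: y = 2*x + 1
```

In the actual approach, such sampled models strengthen a satisfiable formula into an unsatisfiable pair, so that a Craig interpolant between the two can serve as the succinct summary.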
Properties of using various assembler commands, as well as their combined application, have been investigated in order to reach a final determination as to whether they belong to a known program. Conclusions on the effect of the ratio used in the creation of unified signatures on the identification results are presented.
The behavior of many systems can be properly described only by taking time constraints into account, and this motivates the adaptation of existing Finite State Machine (FSM)-based test derivation methods to timed models. In this paper, we propose a method for deriving conformance tests with guaranteed fault coverage for a complete, possibly nondeterministic FSM with a single clock; such Timed FSMs (TFSMs) are widely used for describing the behavior of software and digital devices. The fault domain contains every complete TFSM with known upper bounds on the number of states and a finite boundary on input time guards. The proposed method relies on an appropriate FSM abstraction of the given TFSM; the test is derived against this FSM abstraction and contains timed input sequences. Shorter test suites can be derived for a restricted fault domain, for instance, when the smallest duration of an input time guard is larger than two. Moreover, the obtained test suites can be reduced, while preserving completeness, when all input time guards of the specification and the implementation under test are right-closed (or all are left-closed). Experiments are conducted to study the length of test suites constructed by the different methods.
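A minimal sketch of a single-clock TFSM under our own encoding (not the paper's formalism): transitions are guarded by time intervals over the clock value, the clock resets on every input, and overlapping intervals give nondeterminism. A timed input sequence then pairs each input with its arrival time.

```python
# (state, input) -> list of (lo, hi, next_state, output); the guard is
# [lo, hi), with hi = None meaning no upper bound.
tfsm = {
    ("s0", "a"): [(0, 2, "s1", "x"), (2, None, "s0", "y")],
    ("s1", "a"): [(0, None, "s0", "y")],
}

def step(state, inp, t):
    """All transitions enabled when input `inp` arrives at clock value t."""
    return [(nxt, out) for lo, hi, nxt, out in tfsm.get((state, inp), [])
            if lo <= t and (hi is None or t < hi)]

print(step("s0", "a", 1))    # [('s1', 'x')]
print(step("s0", "a", 3.5))  # [('s0', 'y')]
```

An FSM abstraction of such a machine replaces each guard by a finite set of representative delays (e.g., interval endpoints and midpoints), so that untimed test derivation methods can be applied and their timed sequences recovered afterwards.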