Process mining is a relatively new field of computer science that deals with process discovery and analysis based on event logs. In this paper we consider the problem of discovering a high-level business process model from a low-level event log, i.e. the automatic synthesis of process models from the information stored in the event logs of information systems. Events in a high-level model are abstract events, which can be refined into low-level subprocesses whose behavior is recorded in event logs. Model synthesis is intensively studied within process mining research, but the literature mainly considers models and event logs of the same granularity. Here we present an algorithm for discovering high-level acyclic process models from event logs and a specified partition of low-level events into subsets associated with the abstract events of a high-level model.
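The input data shape described above can be illustrated with a minimal sketch. This is not the discovery algorithm of the paper, only the abstraction step it starts from: the partition of low-level events is assumed to be given as a mapping from each low-level event to its abstract event, and all event names are hypothetical.

```python
# A minimal sketch of the input data shape, not of the discovery algorithm
# itself.  The partition of low-level events is assumed to be given as a
# mapping from each low-level event to its abstract (high-level) event;
# all event names are illustrative.

def abstract_trace(trace, partition):
    """Relabel each low-level event with its abstract event and collapse
    consecutive repetitions of the same abstract event."""
    abstracted = []
    for event in trace:
        high = partition[event]
        if not abstracted or abstracted[-1] != high:
            abstracted.append(high)
    return abstracted

# Hypothetical partition: 'receive' and 'validate' refine abstract event 'A';
# 'pack' and 'ship' refine abstract event 'B'.
partition = {'receive': 'A', 'validate': 'A', 'pack': 'B', 'ship': 'B'}
print(abstract_trace(['receive', 'validate', 'pack', 'ship'], partition))
# -> ['A', 'B']
```

Note that interleavings of subprocesses survive the relabeling (e.g. a trace alternating between the two subsets yields 'A', 'B', 'A'), which is exactly what makes discovery of the high-level control flow nontrivial.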

Process mining is a relatively new field of computer science that deals with process discovery and analysis based on event logs. In this paper we consider the problem of checking conformance between models and event logs. Conformance checking is intensively studied within process mining research, but until now only models and event logs of the same granularity have been considered in the literature. Here we present and justify a method for checking conformance between a high-level model (e.g. one built by an expert) and a low-level log (generated by a system).

A parabolic partial differential equation 𝑢′_𝑡(𝑡, 𝑥) = 𝐿𝑢(𝑡, 𝑥) is considered, where 𝐿 is a linear second-order differential operator with time-independent coefficients, which may depend on 𝑥. We assume that the spatial coordinate 𝑥 belongs to a finite- or infinite-dimensional real separable Hilbert space 𝐻. Assuming the existence of a strongly continuous resolving semigroup for this equation, we construct a representation of this semigroup by a Feynman formula, i.e. we write it in the form of the limit of a multiple integral over 𝐻 as the multiplicity of the integral tends to infinity. This representation gives a unique solution to the Cauchy problem in the uniform closure of the set of smooth cylindrical functions on 𝐻. Moreover, this solution depends continuously on the initial condition. In the case where the coefficient of the first-derivative term in 𝐿 vanishes we prove that the strongly continuous resolving semigroup exists (this implies the existence of the unique solution to the Cauchy problem in the class mentioned above) and that the solution to the Cauchy problem depends continuously on the coefficients of the equation.
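The general shape of a Feynman formula of the kind described can be sketched as follows. The notation here is schematic and not taken from the paper: F(t) stands for a Chernoff-equivalent family of integral operators whose Gaussian-type kernels p_t are built from the coefficients of L, and μ for a suitable Gaussian measure on H (in infinite dimension there is no Lebesgue measure to integrate against).

```latex
% Schematic shape of a Feynman formula for the resolving semigroup
% (illustrative notation: F(t) is a Chernoff-equivalent family of
% integral operators with kernels p_t built from the coefficients of L,
% and \mu is a suitable Gaussian measure on H):
e^{tL}f(x_0)
  = \lim_{n\to\infty}\bigl(F(t/n)\bigr)^{n} f(x_0)
  = \lim_{n\to\infty}\int_H \cdots \int_H
      f(x_n)\prod_{k=1}^{n} p_{t/n}(x_{k-1},x_k)\,\mu(dx_1)\cdots\mu(dx_n).
```

The multiplicity of the integral is the number n of time subdivisions, which tends to infinity in the limit, matching the description in the abstract.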

Information systems (IS) produce numerous traces and logs at runtime. In the context of SOA-based (service-oriented architecture) IS, these logs contain details about sequences of process and service calls. Modern application monitoring and error tracking tools provide only rather straightforward log search and filtering functionality. However, "clever" analysis of the logs is highly useful, since it can provide valuable insights into the system architecture and the interaction of business domains and services. Here we take runtime event logs (trace data) of a large booking system and discover violations of architectural guidelines as well as common anti-patterns. We apply mature process mining techniques for the discovery and analysis of these logs. Process mining aims to discover, analyze, and improve processes on the basis of IS behavior recorded as event logs. With several specific examples, we show successful applications of process mining to system runtime analysis and motivate further research in this area.

The article considers the problem of estimating the autoregressive model parameters of elementary speech units such as phonemes. An iterative algorithm based on Newton's numerical minimization technique is suggested for finding the autoregressive model of a phoneme specified by multiple samples of it. For this purpose, analytical expressions for the gradient and the Hessian of the Kullback–Leibler information divergence between autoregressive models were derived. Experimental studies on a set of English phonemes showed that the developed algorithm requires less computational effort for large amounts of data, and that its iteration count depends little on the amount of input data, as opposed to the reference phoneme selection algorithm based on the criterion of the minimum sum of information divergences. Moreover, the proposed algorithm allows finding models of phonemes that provide a higher probability of correct recognition.
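The abstract does not reproduce the formulas, but the overall scheme — Newton iterations on an analytically differentiated sum of Kullback–Leibler divergences — can be illustrated on a one-dimensional stand-in, with zero-mean Gaussian variances in place of autoregressive models. All names and parameters below are illustrative, not from the paper.

```python
# One-dimensional stand-in for the Newton scheme: minimize the sum of
# KL(N(0, v_i) || N(0, v)) = (v_i/v - 1 - ln(v_i/v)) / 2 over the model
# variance v, using closed-form gradient and Hessian (illustrative only;
# the paper works with autoregressive models, not scalar Gaussians).

def newton_centroid(variances, v0=1.0, tol=1e-10, max_iter=100):
    """Newton iterations for the divergence-sum minimizer."""
    v = v0
    for _ in range(max_iter):
        grad = sum(0.5 * (1.0 / v - vi / v**2) for vi in variances)
        hess = sum(0.5 * (2.0 * vi / v**3 - 1.0 / v**2) for vi in variances)
        step = grad / hess
        v -= step
        if abs(step) < tol:
            break
    return v

samples = [0.8, 1.1, 1.4, 0.9]
print(newton_centroid(samples))  # converges to the mean of the variances
```

In this toy case the stationarity condition gives the arithmetic mean of the v_i in closed form, so the Newton iteration is verifiable; in the autoregressive case the gradient and Hessian are the analytical expressions derived in the paper.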

Modern methods and libraries for high-quality pseudorandom number generation and for the generation of parallel random number streams for Monte Carlo simulations are considered. The probability equidistribution property, and the parameters for which this property holds in dimensions up to the logarithm of the mesh size, are considered for multiple recursive generators.
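A multiple recursive generator (MRG) follows the linear recurrence x_n = (a_1 x_{n-1} + … + a_k x_{n-k}) mod m, with uniform output u_n = x_n / m. A minimal sketch, using toy parameters rather than those of any recommended generator, is:

```python
class MRG:
    """Minimal multiple recursive generator:
    x_n = (a_1 x_{n-1} + ... + a_k x_{n-k}) mod m, output u_n = x_n / m.
    The parameters below are toy values, not a recommended generator."""

    def __init__(self, coeffs, modulus, seed):
        assert len(seed) == len(coeffs) and any(seed)  # non-zero initial state
        self.a = list(coeffs)
        self.m = modulus
        self.state = list(seed)  # oldest first: x_{n-k}, ..., x_{n-1}

    def next_uniform(self):
        # a_1 pairs with the newest state element, a_k with the oldest.
        x = sum(a * s for a, s in zip(self.a, reversed(self.state))) % self.m
        self.state = self.state[1:] + [x]
        return x / self.m

gen = MRG(coeffs=[2, 3], modulus=2**31 - 1, seed=[12345, 67890])
print([gen.next_uniform() for _ in range(3)])
```

Production-quality MRGs choose k, m and the a_i so that the period is huge and the equidistribution property mentioned above holds in high dimensions; parallel streams are then obtained by jumping ahead in the same sequence.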

Process mining is a new direction in the field of modeling and analysis of processes, in which the use of information from event logs describing the history of the system behavior plays an important role. Methods and approaches used in process mining are often based on various heuristics, and experiments with large event logs are crucial for studying and comparing the developed methods and algorithms. Such experiments are very time-consuming, so the automation of experiments is an important task in the field of process mining. This paper presents the language DPMine, developed specifically to describe and carry out experiments on the discovery and analysis of process models. The basic concepts of the DPMine language, as well as the principles and mechanisms of its extension, are described. Ways of integrating the DPMine language as dynamically loaded components into the VTMine modeling tool are considered. An illustrative example of an experiment to build a fuzzy model of a process discovered from log data stored in a normalized database is given.

Mathematical models of distributed computing based on the calculus of mobile processes ($\pi$-calculus) are widely used for checking the information security properties of cryptographic protocols. Since the $\pi$-calculus is Turing-complete, this problem is undecidable in the general case. Therefore, research is carried out only for some special classes of $\pi$-calculus processes with restricted computational capabilities, for example, for non-recursive processes, in which all runs have a bounded length, for processes with a bounded number of parallel components, etc. However, even in these cases, the proposed checking procedures are time-consuming. We assume that this is due to the very nature of $\pi$-calculus processes. The goal of this paper is to show that even for the weakest model of a passive adversary and for relatively simple protocols that use only the basic $\pi$-calculus operations, the task of checking the information security properties of these protocols is co-NP-complete.

Complex information systems are often implemented using more than one programming language. Sometimes this variety takes the form of one host language and one or a few string-embedded languages. The textual representation of clauses in a string-embedded language is built at run time by a host program and then analyzed, compiled or interpreted by a dedicated runtime component (database, web browser, etc.). Most general-purpose programming languages may play the role of the host; one of the most evident examples of a string-embedded language is dynamic SQL, which is specified in the ISO SQL standard and is supported by the majority of DBMSs. Standard IDE functionality such as code completion or syntax highlighting can really help developers who use this technique. There are several tools providing this functionality, but they all process only one concrete string-embedded language and cannot be easily extended to support another language. We present a platform which makes it easy to create tools for string-embedded language processing.

During the life cycle of an information system (IS), its actual behaviour may diverge from the original system model. However, for IS support it is very important to have an up-to-date model that reflects the current system behaviour. To correct the model, information from the event log of the system may be used. In this paper, we consider the problem of process model adjustment (correction) using information from an event log. The input data for this task are the initial process model (a Petri net) and the event log. The result of the correction should be a new process model that reflects the real IS behavior better than the initial model. The new model could also be built from scratch, for example, with the help of one of the known algorithms for the automatic synthesis of a process model from an event log. However, this may lead to crucial changes in the structure of the original model, making it difficult to compare the new model with the initial one and hindering its understanding and analysis. It is important to keep as much of the initial structure of the model as possible. In this paper, we propose a method for process model correction based on the principle of "divide and conquer". The initial model is decomposed into several fragments. For each fragment, its conformance to the event log is checked. Fragments which do not match the log are replaced by newly synthesized ones. The new model is then assembled from the fragments via transition fusion. The experiments demonstrate that our correction algorithm gives good results when used to correct local discrepancies. The paper presents the description of the algorithm, a formal justification of its correctness, as well as the results of experimental testing on some artificial examples.
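The decompose–check–replace scheme can be sketched schematically. In this toy version a "model" is simplified to an ordered list of fragments, each being the set of activity sequences it allows, and real conformance checking and re-synthesis (replay-based fitness, discovery algorithms) are replaced by trivial stand-ins; all names are hypothetical.

```python
# Schematic illustration of divide-and-conquer model correction.
# Fragments and log segments are toy stand-ins for Petri net fragments
# and sublogs; conforms() and resynthesize() stand in for replay-based
# conformance checking and discovery from the log.

def conforms(fragment, observed):
    return observed in fragment          # stand-in for a replay-based check

def resynthesize(observed):
    return {observed}                    # stand-in for discovery from the log

def correct_model(fragments, log_segments):
    corrected = []
    for fragment, observed in zip(fragments, log_segments):
        if conforms(fragment, observed):
            corrected.append(fragment)   # keep the original structure
        else:
            corrected.append(resynthesize(observed))
    return corrected

model = [{('a', 'b')}, {('c', 'd')}]
log = [('a', 'b'), ('c', 'e')]           # the second fragment deviates
print(correct_model(model, log))         # only the second fragment is replaced
```

In the actual method the fragments come from decomposing a Petri net, and the corrected fragments are reassembled into one model via transition fusion, which is what preserves the initial structure wherever the log conforms.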

One of the simplest models of computation suitable for representing the behaviour of reactive systems is a finite state transducer which operates over an input alphabet of control signals and an output alphabet of basic actions. The behaviour of such a reactive system displays itself in the correspondence between flows of control signals and compositions of basic actions performed by the system. We believe that behaviour of this kind requires more suitable and expressive means for formal specification than conventional LTL. In this paper, we define a new (as far as we know) extension LP-LTL of Linear Temporal Logic specifically intended for describing the properties of transducer computations. In this extension the temporal operators are parameterized by sets of words (languages) which represent distinguished flows of control signals that impact a reactive system. Basic predicates in our variant of the temporal logic are also languages, over the alphabet of basic actions of a transducer; they represent the expected response of the transducer to the specified environmental influences. In our earlier papers, we considered the model checking problem for LP-LTL and LP-CTL and showed that this problem has effective solutions. The aim of this paper is to estimate the expressive power of LP-LTL by comparing it with some well-known logics widely used in computer science for the specification of reactive systems behaviour. We discovered that a restricted variant LP-1-LTL of our logic is more expressive than LTL and that another restricted variant LP-n-LTL has the same expressive power as the monadic second order logic S1S.

First-order program schemata form one of the simplest models of sequential imperative programs intended for solving verification and optimization problems. We consider the decidable relation of logical-thermal equivalence of these schemata and the problem of their size minimization while preserving logical-thermal equivalence. We prove that this problem is decidable. Further, we show that first-order program schemata supplied with logical-thermal equivalence and finite state deterministic transducers operating over substitutions are mutually translatable into each other. This relationship implies that the equivalence checking problem and the minimization problem for these transducers are also decidable. In addition, on the basis of the discovered relationship, we have found a subclass of first-order program schemata whose minimization can be performed in polynomial time by means of known techniques for the minimization of finite state transducers operating over semigroups. Finally, we demonstrate that in the general case the minimization problem for finite state transducers over semigroups may have several non-isomorphic solutions.

Finite state transducers over semigroups can be regarded as a formal model of sequential reactive programs that operate in interaction with their environment. On receiving a piece of data, such a program performs a sequence of actions and displays the current result. Programs of this kind usually arise in the implementation of computer drivers, on-line algorithms, and control procedures. In many cases the verification of such programs can be reduced to the minimization and equivalence checking problems for finite state transducers. Minimization of a transducer over a semigroup is performed in three stages. First, the greatest common left divisors are computed for all states of the transducer; next, the transducer is brought to a reduced form by pulling all such divisors "upstream"; and finally, a minimization algorithm for finite state automata is applied to the reduced transducer.
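For the special case of transducers over the free monoid of words, the greatest common left divisor of a set of output words is simply their longest common prefix, so the first stage can be sketched as a fixpoint computation. This is a simplified sketch, assuming that final states have empty final output; the data layout and names are illustrative.

```python
from os.path import commonprefix  # longest common prefix of a list of strings

def state_gcld(transitions, finals):
    """First stage of minimization over the free monoid of words:
    for every state, compute the greatest common left divisor (here:
    longest common prefix) of the outputs of all successful runs from it.
    transitions: dict state -> list of (input_symbol, output_word, next_state)
    finals: set of accepting states (assumed to have empty final output)."""
    states = set(transitions) | set(finals)
    for ts in transitions.values():
        states |= {q for _, _, q in ts}
    lcp = {q: ('' if q in finals else None) for q in states}  # None = unknown
    changed = True
    while changed:
        changed = False
        for q in states:
            candidates = [''] if q in finals else []
            candidates += [w + lcp[p] for _, w, p in transitions.get(q, [])
                           if lcp[p] is not None]
            if candidates:
                new = commonprefix(candidates)
                if new != lcp[q]:
                    lcp[q] = new
                    changed = True
    return lcp

# Toy transducer: both transitions from state 0 emit outputs starting
# with 'ab', so 'ab' is the divisor that can be pulled "upstream".
T = {0: [('x', 'abc', 1), ('y', 'abd', 1)], 1: []}
print(state_gcld(T, finals={1}))  # -> {0: 'ab', 1: ''}
```

The value computed for each state is what gets pulled "upstream" in the second stage; after that rewriting, a standard finite automaton minimization applies to the reduced transducer.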