Working paper
UnconstrainedMiner: Efficient Discovery of Generalized Declarative Process Models
Information systems in different domains, such as healthcare, tourism, banking, and government, record operational behavior in the form of event logs. The process mining discipline offers dozens of techniques to discover, analyze, and visualize processes running in information systems, based on their event logs. The representational bias (the language used to represent processes) plays an important role in process discovery. In this work, the BPMN (Business Process Model and Notation) language is chosen as the representational bias and as a starting point for process discovery, analysis, and enhancement. BPMN is a common process modeling language, widely used by consultants, managers, analysts, and software engineers in various application domains. This work aims to bridge the gap between process mining techniques and BPMN. Existing techniques are often limited to a single perspective, e.g., only the control flow, subprocesses, or resources. The goal of this work is to fully support the BPMN specification in the context of process mining and to suggest a unified and integrated approach to the discovery, analysis, and enhancement of hierarchical high-level BPMN models. The proposed approach is supported by tools that enable users to analyze discovered processes in BPMN-compliant tools and even to automate their execution using existing BPMN engines.
Event logs collected by modern information and technical systems usually contain enough data for the automated discovery of process models. A variety of algorithms has been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad-hoc selected parts of an event log still lacks a full-fledged implementation. This paper describes an ROLAP-based method of multidimensional event log storage for process mining. The result of the log analysis is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad-hoc selection of criteria and an occurrence probability threshold.
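The paper stores the multidimensional log in an ROLAP back end; the Python sketch below only illustrates the analysis front end on a flat in-memory event table, with case_id, activity, and timestamp as assumed field names. It reads the ranked directed graph in one simple way: select a sublog by ad-hoc criteria, build the directly-follows graph, and keep edges above a chosen occurrence probability.

```python
from collections import Counter, defaultdict

def sublog(events, **criteria):
    """Ad-hoc slice of a flat event table (one dict per event)."""
    return [e for e in events
            if all(e.get(k) == v for k, v in criteria.items())]

def dfg_probabilities(events):
    """Directly-follows graph with edges weighted by occurrence probability."""
    traces = defaultdict(list)
    for e in sorted(events, key=lambda e: e["timestamp"]):
        traces[e["case_id"]].append(e["activity"])
    edges = Counter()
    for acts in traces.values():
        edges.update(zip(acts, acts[1:]))
    total = sum(edges.values()) or 1
    return {edge: n / total for edge, n in edges.items()}

def prune(graph, threshold):
    """Keep only edges whose probability meets the analyst-chosen threshold."""
    return {e: p for e, p in graph.items() if p >= threshold}
```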
In this paper, we consider an approach to the reverse engineering of UML sequence diagrams from event logs of information systems with a service-oriented architecture (SOA). UML sequence diagrams are graphical models well suited to representing interactions in heterogeneous component systems; in particular, these include the increasingly popular SOA-based information systems. The approach deals with execution traces of SOA systems, represented in the form of event logs. Event logs are created by almost all modern information systems, primarily for debugging purposes. In contrast to conventional reverse engineering techniques that require source code for analysis, our approach to inferring UML sequence diagrams deals only with the available logs and some heuristic knowledge. Our method consists of several stages of building UML sequence diagrams according to different perspectives set by the analyst: mapping log attributes to diagram elements (thereby determining the level of abstraction), grouping several components of a diagram, and building hierarchical diagrams. We propose to group some of the diagram components (messages and lifelines) based on regular expressions and to build hierarchical diagrams using nested fragments. The approach is evaluated in a software prototype implemented as a Microsoft Visio add-in. The add-in builds a UML sequence diagram from a given event log according to a set of customizable settings.
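As a minimal sketch of the regex-based grouping stage (attribute names and patterns are invented for the example), assuming each event carries the sender and receiver of one message:

```python
import re

# Hypothetical log-to-diagram mapping: each event is one message;
# the "from"/"to" attributes are mapped to lifelines.
events = [
    {"from": "OrderService.v1", "to": "PaymentService", "op": "charge"},
    {"from": "OrderService.v2", "to": "PaymentService", "op": "charge"},
]

# Grouping rule: lifelines whose names match the pattern are collapsed
# into one component, raising the level of abstraction.
GROUPS = {r"OrderService\..*": "OrderService"}

def lifeline(name):
    for pattern, label in GROUPS.items():
        if re.fullmatch(pattern, name):
            return label
    return name

messages = [(lifeline(e["from"]), lifeline(e["to"]), e["op"]) for e in events]
# Both events collapse to ('OrderService', 'PaymentService', 'charge').
```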
This book constitutes the proceedings of the 35th International Conference on Application and Theory of Petri Nets and Concurrency, PETRI NETS 2014, held in Tunis, Tunisia, in June 2014. The 15 regular papers and 4 tool papers presented in this volume were carefully reviewed and selected from 48 submissions. In addition, the book contains 3 invited talks in full paper length. The papers cover various topics in the field of Petri nets and related models of concurrency.
Operational processes leave trails in the information systems supporting them. Such event data are the starting point for process mining – an emerging scientific discipline relating modeled and observed behavior. The relevance of process mining is increasing as more and more event data become available. The increasing volume of such data (“Big Data”) provides both opportunities and challenges for process mining. In this paper we focus on two particular types of process mining: process discovery (learning a process model from example behavior recorded in an event log) and conformance checking (diagnosing and quantifying discrepancies between observed behavior and modeled behavior). These tasks become challenging when there are hundreds or even thousands of different activities and millions of cases. Typically, process mining algorithms are linear in the number of cases and exponential in the number of different activities. This paper proposes a very general divide-and-conquer approach that decomposes the event log based on a partitioning of activities. Unlike existing approaches, this paper does not assume a particular process representation (e.g., Petri nets or BPMN) and allows for various decomposition strategies (e.g., SESE- or passage-based decomposition). Moreover, the generic divide-and-conquer approach reveals the core requirements for decomposing process discovery and conformance checking problems.
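As a minimal illustration of the decomposition idea (not the paper's formal construction), the Python sketch below projects an event log onto a partitioning of activities; the activity names are invented for the example.

```python
def project(traces, activities):
    """Project each trace onto one activity cluster; empty projections are dropped."""
    keep = set(activities)
    return [[a for a in t if a in keep]
            for t in traces if any(a in keep for a in t)]

traces = [["register", "check", "pay", "ship"],
          ["register", "check", "reject"]]
partitions = [{"register", "check"}, {"pay", "ship", "reject"}]

# Discovery (or conformance checking) then runs per sublog;
# the per-cluster results are merged afterwards.
sublogs = [project(traces, part) for part in partitions]
```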
Process mining is a relatively new field of computer science which deals with process discovery and analysis based on event logs. In this work, we consider the problem of discovering workflow nets with cancellation regions from event logs. Cancellations occur in the majority of real-life event logs, yet in spite of the huge number of existing process mining techniques, little has been done on the discovery of cancellation regions. We show that the state-based region algorithm yields labeled Petri nets with an overcomplicated control flow structure for logs with cancellations. We propose a novel method to discover cancellation regions from the transition systems built from event logs and show how to construct an equivalent workflow net with reset arcs that simplifies the control flow structure.
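Reset-arc semantics, which the proposed construction targets, can be stated in a few lines; the Python sketch below is a generic firing rule, not the discovery algorithm itself:

```python
def fire(marking, transition):
    """Fire a transition of a net with reset arcs.

    marking maps places to token counts; transition is a triple
    (inputs, outputs, resets). A reset arc empties its place regardless
    of how many tokens it holds, which is what makes cancellation
    regions compact to model.
    """
    inputs, outputs, resets = transition
    if any(marking.get(p, 0) < 1 for p in inputs):
        raise ValueError("transition is not enabled")
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in resets:            # cancellation: clear the whole region
        m[p] = 0
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m
```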
Process mining aims to discover and analyze processes by extracting information from event logs. Process mining discovery algorithms deal with large data sets to automatically learn process models. As more event data become available, there is the desire to learn larger and more complex process models. To tackle problems related to the readability of the resulting model and to ensure tractability, various decomposition methods have been proposed. This paper presents a novel decomposition approach for discovering more readable models from event logs on the basis of a priori knowledge about the event log structure: regular and special cases of the process execution are treated separately. The transition system corresponding to a given event log is decomposed into a regular part and a specific part. Then one of the known discovery algorithms is applied to both parts, and finally these models are combined into a single process model. It is proven that the structural and behavioral properties of the submodels are inherited by the unified process model. The proposed discovery algorithm is illustrated using a running example.
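The paper performs the split on the transition system itself; as a rough log-level analogue (an assumption, not the paper's algorithm), frequent trace variants can be treated as the regular part and rare ones as the special part:

```python
from collections import Counter

def split_log(traces, threshold):
    """Separate frequent (regular) behavior from rare (special) cases."""
    variants = Counter(tuple(t) for t in traces)
    total = sum(variants.values())
    regular = [list(v) for v, n in variants.items() if n / total >= threshold]
    special = [list(v) for v, n in variants.items() if n / total < threshold]
    return regular, special

# Discovery then runs on each part, and the two submodels are combined.
```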
Recent breakthroughs in process mining research make it possible to discover, analyze, and improve business processes based on event data. The growth of event data provides many opportunities but also imposes new challenges. Process mining is typically done for an isolated, well-defined process in steady state. However, the boundaries of a process may be fluid and there is a need to continuously view event data from different angles. This paper proposes the notion of process cubes where events and process models are organized using different dimensions. Each cell in the process cube corresponds to a set of events and can be used to discover a process model, to check conformance with respect to some process model, or to discover bottlenecks. The idea is related to the well-known OLAP (Online Analytical Processing) data cubes and associated operations such as slice, dice, roll-up, and drill-down. However, there are also significant differences because of the process-related nature of event data. For example, process discovery based on events is incomparable to computing the average or sum over a set of numerical values. Moreover, dimensions related to process instances (e.g., cases are split into gold and silver customers), subprocesses (e.g., acquisition versus delivery), organizational entities (e.g., backoffice versus frontoffice), and time (e.g., 2010, 2011, 2012, and 2013) are semantically different and it is challenging to slice, dice, roll-up, and drill-down process mining results efficiently.
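A minimal sketch of the slice and dice operations on a flat event set (dimension names are hypothetical); the process-specific part, i.e. running discovery or conformance checking per cell, is where the paper's challenges lie:

```python
def slice_cube(events, dimension, value):
    """OLAP-style slice: fix one dimension to a value and drop it."""
    return [{k: v for k, v in e.items() if k != dimension}
            for e in events if e.get(dimension) == value]

def dice(events, **allowed):
    """Dice: restrict several dimensions to sets of allowed values."""
    return [e for e in events
            if all(e.get(d) in values for d, values in allowed.items())]

# Example: dice(events, customer={"gold"}, year={2012, 2013}) yields one
# cell's event set, which is then fed to a discovery algorithm.
```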
A model is considered for organizing cargo transportation between two node stations connected by a railway line that contains a certain number of intermediate stations. The movement of cargo is in one direction. Such a situation may occur, for example, if one of the node stations is located in a region that produces raw material for the manufacturing industry located in the region of the other node station. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, and the rule of distribution of cargo to the final node stations. The process of cargo transportation is governed by a given control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow, which results in the need for a “correct” extension of the solutions of the system of differential equations to a class of quasi-solutions, whose distinctive feature is discontinuities (gaps) at a countable number of points. Using the fourth-order Runge–Kutta method, these quasi-solutions were constructed numerically and their rate of growth was determined. We note that the main technical difficulty consisted in obtaining quasi-solutions that satisfy the nonlocal linear restrictions. Furthermore, we investigated how the quasi-solutions and, in particular, the sizes of their gaps (jumps) depend on the model parameters characterizing the control rule, the cargo transportation technologies, and the intensity of cargo arrival at the node station.
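The model-specific part (restarting the integration at the discontinuity points so that the nonlocal linear restrictions hold) is not reproduced here; the sketch below is only the standard fourth-order Runge–Kutta step the computation is built on, written in Python for a vector-valued right-hand side.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y),
    with the state y given as a list of components."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
```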
The geographic information system (GIS) is based on the first and only Russian Imperial Census of 1897 and the First All-Union Census of the Soviet Union of 1926. The GIS features vector data (shapefiles) of all provinces of the two states. For the 1897 census, there is information about linguistic, religious, and social estate groups. The part based on the 1926 census features nationality. Both shapefiles include information on gender and on the rural and urban population. The GIS allows for producing any maps necessary for individual studies of the period that require administrative boundaries and demographic information.
Existing approaches suggest that IT strategy should be a reflection of business strategy. However, in practice organisations often do not follow the business strategy even if it is formally declared. In these conditions, IT strategy can be viewed not as a plan, but as an organisational shared view on the role of information systems. This approach reflects only a top-down perspective on IT strategy, so it can be supplemented by a strategic behaviour pattern (i.e., a more or less standard response to changes, formed as a result of previous experience) to implement a bottom-up approach. Two components that can help to establish an effective reaction to new IT initiatives are proposed here: a model of IT-related decision making, and an efficiency measurement metric to estimate the maturity of business processes and of the corresponding IT. The usage of the proposed tools is demonstrated in practical cases.
I give the explicit formula for the (set-theoretical) system of Resultants of m+1 homogeneous polynomials in n+1 variables.
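The paper's general formula is not reproduced here. As a reminder of the classical base case m = n = 1 (two binary forms f of degree d and g of degree e), the resultant is the Sylvester determinant, with e shifted rows of a-coefficients and d shifted rows of b-coefficients:

```latex
% f = a_0 x^d + a_1 x^{d-1} y + \dots + a_d y^d,
% g = b_0 x^e + b_1 x^{e-1} y + \dots + b_e y^e;
% the Sylvester matrix is (d+e) x (d+e).
\operatorname{Res}(f,g) = \det
\begin{pmatrix}
a_0 & a_1 & \cdots & a_d &        &        \\
    & \ddots &     &     & \ddots &        \\
    &        & a_0 & a_1 & \cdots & a_d    \\
b_0 & b_1 & \cdots & b_e &        &        \\
    & \ddots &     &     & \ddots &        \\
    &        & b_0 & b_1 & \cdots & b_e
\end{pmatrix}
```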