Article
Discovery of Cancellation Regions within Process Mining Techniques
Process mining is a relatively new field of computer science which deals with process discovery and analysis based on event logs. In this work we consider the problem of discovering workflow nets with cancellation regions from event logs. Cancellations occur in the majority of real-life event logs. In spite of the huge number of process mining techniques, little has been done on the discovery of cancellation regions. We show that the state-based region algorithm produces labeled Petri nets with an overcomplicated control-flow structure for logs with cancellations. We propose a novel method to discover cancellation regions from the transition systems built on event logs and show how to construct an equivalent workflow net with reset arcs that simplifies the control-flow structure.
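As a minimal illustration of the reset-arc semantics mentioned above (a sketch only, not the discovery method itself; all place and transition names are invented), the following Python snippet shows how firing a transition that carries reset arcs empties the places of a cancellation region, which is the usual way cancellation is encoded in workflow nets with reset arcs:

```python
from collections import Counter

def fire(marking, consume, produce, reset_places=()):
    """Fire one transition: consume/produce are Counters over places;
    reset_places are emptied regardless of their current token count."""
    if any(marking[p] < n for p, n in consume.items()):
        raise ValueError("transition not enabled")
    new_marking = Counter(marking)
    new_marking.subtract(consume)
    for p in reset_places:              # cancellation: wipe the whole region
        new_marking[p] = 0
    new_marking.update(produce)
    return +new_marking                 # drop zero entries

# Cancelling an order clears all places of the (hypothetical) cancellation region.
m = Counter({"await_payment": 1, "await_shipment": 1, "cancel_requested": 1})
print(fire(m,
           consume=Counter({"cancel_requested": 1}),
           produce=Counter({"cancelled": 1}),
           reset_places=("await_payment", "await_shipment")))
# Counter({'cancelled': 1})
```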
Process mining techniques relate observed behavior to modeled behavior, e.g., by automatically discovering a process model from an event log. Process mining is not limited to process discovery and also includes conformance checking and model enhancement. Conformance checking techniques are used to diagnose deviations of the observed behavior, as recorded in the event log, from some process model. Model enhancement allows process models to be extended using additional perspectives, conformance and performance information. In recent years, BPMN (Business Process Model and Notation) 2.0 has become a de facto standard for modeling business processes in industry. This paper presents the BPMN support currently available in ProM. ProM is the best-known and most widely used open-source process mining framework. ProM’s functionality for discovering, analyzing and enhancing BPMN models is discussed. Support of the BPMN 2.0 standard will help ProM users to bridge the gap between formal models (such as Petri nets, causal nets and others) and the process models used by practitioners.
Process mining is a new direction in the field of modeling and analysis of processes, in which information from event logs describing the history of system behavior plays an important role. Methods and approaches used in process mining are often based on various heuristics, and experiments with large event logs are crucial for studying and comparing the developed methods and algorithms. Such experiments are very time consuming, so automation of experiments is an important task in the field of process mining. This paper presents the language DPMine, developed specifically to describe and carry out experiments on the discovery and analysis of process models. The basic concepts of the DPMine language, as well as the principles and mechanisms of its extension, are described. Ways of integrating the DPMine language into the VTMine modeling tool as dynamically loaded components are considered. An illustrative example of an experiment to build a fuzzy model of the process discovered from log data stored in a normalized database is given.
This book constitutes the proceedings of the 37th International Conference on Application and Theory of Petri Nets and Concurrency, PETRI NETS 2016, held in Toruń, Poland, in June 2016. Petri Nets 2016 was co-located with the Application of Concurrency to System Design Conference, ACSD 2016. The 16 papers, including 3 tool papers, presented in this volume together with 4 invited talks were carefully reviewed and selected from 42 submissions. Papers presenting original research on the application or theory of Petri nets, as well as contributions addressing topics relevant to the general field of distributed and concurrent systems, are presented within this volume.
These are the proceedings of the International Workshop on Petri Nets and Software Engineering (PNSE’13) and the International Workshop on Modeling and Business Environments (ModBE’13), held in Milano, Italy, on June 24–25, 2013. Both were co-located events of Petri Nets 2013, the 34th International Conference on Application and Theory of Petri Nets and Concurrency.
PNSE'13 presents the use of Petri Nets (P/T-Nets, Coloured Petri Nets and extensions) in the formal process of software engineering, covering modelling, validation, and verification, as well as their applications and the tools supporting these disciplines.
ModBE’13 provides a forum for researchers from interested communities to investigate, experience, compare, contrast and discuss solutions for modeling in business environments with Petri nets and other modeling techniques.
This volume constitutes the proceedings of the 34th International Conference on Application and Theory of Petri Nets and Concurrency (PETRI NETS 2013). The Petri Net conferences serve as annual meeting places to discuss the progress in the field of Petri nets and related models of concurrency. They provide a forum for researchers to present and discuss both applications and theoretical developments in this area. Novel tools and substantial enhancements to existing tools can also be presented. The satellite program of the conference comprised three workshops, a Petri net course including basic and advanced tutorials, and an additional tutorial on the work of Carl Adam Petri and Anatol W. Holt.
Resource-driven automata (RDA) are finite automata sitting in the nodes of a finite system net and asynchronously consuming/producing shared resources through input/output system ports (arcs of the system net). RDAs themselves may be resources for each other, thus allowing a highly flexible model structure. It was proved earlier that RDA-nets are expressively equivalent to Petri nets. In this paper the new formalism of cellular RDAs is introduced. Cellular RDAs are RDA-nets with an infinite, regularly structured system net. We build a hierarchy of cellular RDA classes on the basis of restrictions on the underlying grid. The expressive power of several major classes of 1-dimensional grids is studied.
In this work we consider the modeling of services with workflow modules, which are a subclass of Petri nets. The service compatibility problem asks whether two Web services fit together, i.e., whether the composed system is sound. We study the complementarity of the resources produced and consumed by services, which is a necessary condition for service compatibility. The resources produced and consumed by a Web service are described as a multiset language. We define an algebra of multiset languages and present an algorithm for checking the conformance of resources for two given structured workflow modules.
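As a loose illustration of the multiset view of resources (a simplified sketch, not the algebra of multiset languages or the conformance-checking algorithm from the paper; the resource names are hypothetical), one can check whether the resources produced by one service cover those consumed by another:

```python
from collections import Counter

def covers(produced: Counter, consumed: Counter) -> bool:
    """True iff the produced multiset contains at least as many copies of
    every resource as the consumed multiset requires."""
    return all(produced[r] >= n for r, n in consumed.items())

service_a_produces = Counter({"invoice": 1, "goods": 2})
service_b_consumes = Counter({"invoice": 1, "goods": 1})
print(covers(service_a_produces, service_b_consumes))  # True
```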
Process-aware information systems (PAIS) enable developing models of process interaction, monitoring the accuracy of their execution, and checking whether processes interact with each other properly. PAIS can generate large data logs that contain information about the interaction of processes over time. Studying PAIS logs with the purpose of data mining and modeling lies within the scope of process mining. A number of tools have been developed for process mining, including the most widely used one, ProM, whose functionality is extended through plugins. To perform an object-aware experiment one has to run multiple plugins sequentially. This process becomes extremely time-consuming in the case of large-scale experiments involving a large number of plugins. The paper proposes a concept of the DPMine/P language for process modeling and analysis to be implemented in ProM. The language under development aims at joining the separate stages of an experiment into a single sequence, that is, an experiment model. The implementation of the basic semantics of the language is done through the concepts of blocks, ports, connectors and schemes. These items are discussed in detail in the paper, and examples of their use for specific tasks are presented there as well.
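As a purely conceptual Python sketch of the block/port/connector/scheme idea described above (it is neither DPMine/P syntax nor the actual ProM implementation, and all class and method names are invented for illustration), separate stages can be wired into a single executable experiment model like this:

```python
class Block:
    """A processing stage with named input ports fed by upstream blocks."""
    def __init__(self, name, run):
        self.name, self.run = name, run
        self.inputs = {}                 # port name -> (upstream block, its output port)

    def connect(self, in_port, upstream_block, out_port="out"):
        """Connector: wire an upstream block's output port to our input port."""
        self.inputs[in_port] = (upstream_block, out_port)
        return self

    def execute(self, cache=None):
        """Run upstream blocks first, then this block; results are memoized."""
        cache = {} if cache is None else cache
        if self.name not in cache:
            args = {p: b.execute(cache)[o] for p, (b, o) in self.inputs.items()}
            cache[self.name] = self.run(**args)
        return cache[self.name]

# A tiny "scheme": load a log, then discover a (stub) model from it.
load_log = Block("load_log", lambda: {"out": [["a", "b"], ["a", "c"]]})
discover = Block("discover", lambda log: {"out": f"model from {len(log)} traces"})
discover.connect("log", load_log)
print(discover.execute()["out"])   # model from 2 traces
```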
A model for organizing cargo transportation between two node stations connected by a railway line which contains a certain number of intermediate stations is considered. The movement of cargo is in one direction. Such a situation may occur, for example, if one of the node stations is located in a region which produces raw material for a manufacturing industry located in another region, where the other node station lies. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, as well as the rule of distribution of cargo to the final node stations. The process of cargo transportation is governed by a given control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. This model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow. This results in the need for a “correct” extension of the solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, we were able to numerically construct these quasi-solutions and determine their rate of growth. Note that, from a technical standpoint, the main difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) of the solutions, on a number of model parameters characterizing the control rule, the technologies for cargo transportation, and the intensity of cargo arrival at a node station.
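For reference, the classical fourth-order Runge–Kutta step used for such a numerical construction is sketched below; the right-hand side and the parameters are generic placeholders, not the actual cargo-transportation system with its nonlocal restrictions:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """Advance y' = f(t, y) by one step of size h with classical RK4."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example on a linear test system y' = A y (placeholder dynamics).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda t, y: A @ y
y = np.array([1.0, 0.0])
for step in range(100):
    y = rk4_step(f, step * 0.01, y, 0.01)
print(y)
```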
Event logs collected by modern information and technical systems usually contain enough data for automated process model discovery. A variety of algorithms has been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad-hoc selected parts of an event log still lacks a full-fledged implementation. This paper describes an ROLAP-based method of multidimensional event log storage for process mining. The result of the analysis of the log is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad-hoc selection of criteria and a value of occurrence probability.
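As a simplified illustration (a sketch under assumed inputs, not the ROLAP implementation described in the paper; the activity names are invented), such a directed graph can be obtained from a selected sublog by counting directly-follows pairs and normalizing the counts into occurrence probabilities:

```python
from collections import Counter, defaultdict

def dfg_with_probabilities(traces):
    """traces: iterable of activity sequences; returns {(a, b): P(b follows a)}."""
    edge_counts = Counter()
    out_counts = defaultdict(int)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            edge_counts[(a, b)] += 1
            out_counts[a] += 1
    return {(a, b): c / out_counts[a] for (a, b), c in edge_counts.items()}

sublog = [["register", "check", "approve"],
          ["register", "check", "reject"],
          ["register", "approve"]]
for edge, p in sorted(dfg_with_probabilities(sublog).items(), key=lambda e: -e[1]):
    print(edge, round(p, 2))
```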
The geographic information system (GIS) is based on the first and only Russian Imperial Census of 1897 and the First All-Union Census of the Soviet Union of 1926. The GIS features vector data (shapefiles) of all provinces of the two states. For the 1897 census, there is information about linguistic, religious, and social estate groups. The part based on the 1926 census features nationality. Both shapefiles include information on gender and on the rural and urban population. The GIS allows for producing any maps necessary for individual studies of the period that require administrative boundaries and demographic information.
Existing approaches suggest that IT strategy should be a reflection of business strategy. However, in practice organisations often do not follow their business strategy even if it is formally declared. In these conditions, IT strategy can be viewed not as a plan, but as an organisational shared view on the role of information systems. This approach generally reflects only a top-down perspective of IT strategy, so it can be supplemented by a strategic behaviour pattern (i.e., a more or less standard response to changes, formed as a result of previous experience) to implement a bottom-up approach. Two components that can help to establish an effective reaction to new IT initiatives are proposed here: a model of IT-related decision making, and an efficiency measurement metric to estimate the maturity of business processes and the corresponding IT. The usage of the proposed tools is demonstrated in practical cases.