Book
Analysis of Images, Social Networks and Texts. 7th International Conference AIST 2018
This book constitutes the proceedings of the 7th International Conference on Analysis of Images, Social Networks and Texts, AIST 2018, held in Moscow, Russia, in July 2018.
The 29 full papers were carefully reviewed and selected from 107 submissions (of which 26 papers were rejected without being reviewed). The papers are organized in topical sections on natural language processing; analysis of images and video; general topics of data analysis; analysis of dynamic behavior through event data; optimization problems on graphs and network structures; and innovative systems.
Process mining is a new discipline aimed at constructing process models from event logs. Recently, several methods for discovering transition systems from event logs have been introduced. By viewing these transition systems as finite state machines, classical algorithms for deriving regular expressions can be applied. Regular expressions allow representing sequential process models hierarchically, using sequence, choice, and iteration patterns. The aim of this work is to apply and tune an algorithm that derives regular expressions from transition systems within the process mining domain.
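A minimal sketch of the classical state-elimination construction alluded to above: intermediate states of the transition system are removed one by one, and arcs are recombined with sequence, choice, and iteration operators until a single regular expression connects the initial and final states. The data layout and the toy transition system are illustrative assumptions, not the tuned algorithm from the paper.

```python
def eliminate_states(states, edges, start, final):
    """edges: dict mapping (src, dst) -> regular expression for that arc."""
    for q in [s for s in states if s not in (start, final)]:
        loop = edges.get((q, q))
        star = f"({loop})*" if loop else ""          # iteration pattern
        ins = [(s, r) for (s, d), r in edges.items() if d == q and s != q]
        outs = [(d, r) for (s, d), r in edges.items() if s == q and d != q]
        for s, rin in ins:
            for d, rout in outs:
                via = f"{rin}{star}{rout}"           # sequence pattern
                old = edges.get((s, d))
                edges[(s, d)] = f"({old}|{via})" if old else via  # choice
        # drop all arcs touching the eliminated state q
        edges = {k: v for k, v in edges.items() if q not in k}
    return edges.get((start, final), "")

# toy behaviour: activity a, then zero or more b's, then c
print(eliminate_states({"i", "p", "f"},
                       {("i", "p"): "a", ("p", "p"): "b", ("p", "f"): "c"},
                       "i", "f"))                    # prints a(b)*c
```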
The paper investigates several techniques for hypernymy extraction from a large collection of dictionary definitions in Russian. First, definitions from different dictionaries are clustered; then single words and multiword expressions are extracted as hypernym candidates. A classification-based approach over pre-trained word embeddings is implemented as a complementary technique. In total, we extracted about 40K unique hypernym candidates for 22K word entries. Evaluation showed that the proposed methods, applied to a large collection of dictionary data, are a viable option for automatic extraction of hyponym/hypernym pairs. The obtained data are available for research purposes.
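A minimal sketch of the complementary classification-based technique: pairs of a word entry and a hypernym candidate are scored with a classifier over pre-trained word embeddings. The pairwise feature scheme (concatenation plus difference) and the logistic regression model are plausible assumptions, not the exact setup of the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# emb: dict mapping a word to its pre-trained vector (assumed loaded
# elsewhere, e.g. from a Russian word2vec or fastText model)
def pair_features(emb, word, cand):
    v, h = emb[word], emb[cand]
    return np.concatenate([v, h, v - h])   # concatenation + difference

# train_pairs: (entry, candidate, label) triples with label 1 for a true
# hypernym; a hypothetical training data format
def train(emb, train_pairs):
    X = np.array([pair_features(emb, w, c) for w, c, _ in train_pairs])
    y = np.array([label for _, _, label in train_pairs])
    return LogisticRegression(max_iter=1000).fit(X, y)

def rank_candidates(model, emb, word, candidates):
    X = np.array([pair_features(emb, word, c) for c in candidates])
    return sorted(zip(candidates, model.predict_proba(X)[:, 1]),
                  key=lambda p: -p[1])
```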
Process mining deals with various types of formal models. Some of them are used at intermediate stages of synthesis and analysis, whereas others are the desired goals themselves. Transition systems (TSs) are widely used in both scenarios. Process discovery, a special case of the synthesis problem, tries to find patterns in event logs. In this paper, we propose a new approach to the discovery problem based on recurrent neural networks (RNNs). Here, an event log serves as a training sample for a neural network; the algorithm extracts the RNN's internal states as the desired TS that describes the behavior present in the log. Models derived by the approach contain all behaviors from the event log (i.e., they are perfectly fitting) and vary in simplicity and precision, the key model quality metrics. One of the main advantages of the neural method is its natural ability to detect and merge common behavioral parts that are scattered across the log. The paper studies the proposed method, its properties, and cases where applying this approach is sensible compared to other methods of TS synthesis.
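A minimal sketch of the extraction step: traces from the event log are replayed through a trained RNN, its hidden states are quantised, and each observed (state, event, next state) triple becomes an arc of the transition system. The quantisation by rounding and the `embed` helper are simplifying assumptions; merging equal quantised states is what lets common behaviour scattered across the log collapse into shared TS states.

```python
import torch

def extract_ts(rnn, embed, traces):
    """rnn: a trained torch.nn.RNN; embed: maps an event name to an input
    tensor (both assumed given). Returns the arcs of the discovered TS."""
    transitions = set()
    for trace in traces:                         # e.g. ["a", "b", "c"]
        h = torch.zeros(1, 1, rnn.hidden_size)   # initial hidden state
        state = tuple((h.flatten() * 10).round().tolist())  # quantise
        for event in trace:
            x = embed(event).view(1, 1, -1)
            _, h = rnn(x, h)
            nxt = tuple((h.flatten() * 10).round().tolist())
            transitions.add((state, event, nxt)) # arc labelled by the event
            state = nxt
    return transitions
```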
This paper deals with automatic classification of questions in the Russian language. In contrast to previously used methods, we introduce a convolutional neural network for question classification. We took advantage of an existing corpus of 2008 questions, manually annotated in accordance with a pragmatic 14-class typology. We modified the data by reducing the typology to 13 classes, expanding the dataset, and improving the representativeness of some of the question types. The training data, in a combined representation of word embeddings and binary regular-expression-based features, was used for supervised learning to approach the task of question tagging. As the baseline model, we tested the convolutional neural network against a state-of-the-art Russian-language question classification algorithm: an SVM classifier with a linear kernel and questions represented as word trigram counts (60.22% accuracy on the new dataset). We also tested several widely used machine learning methods (logistic regression, Bernoulli Naïve Bayes) trained on the new question representation. The best result of 72.38% accuracy (micro) was achieved with the CNN model. We also ran experiments on pertinent feature selection with a simple Multinomial Naïve Bayes classifier, using word features only, add-1 smoothing, and no strategy for out-of-vocabulary words. Surprisingly, the setting with the top 1200 informative word features (by PPMI) and equal priors achieved only slightly lower accuracy, 70.72%, which still beats the baseline by a large margin.
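A minimal sketch of the Multinomial Naive Bayes setting described at the end of the abstract: word count features, add-1 smoothing, equal class priors, and selection of the top informative words by PPMI between word and class. The top-1200 cut-off follows the paper; the surrounding pipeline and the toy `questions`/`labels` data are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def top_k_by_ppmi(X, y, k=1200):
    """Select the k words with the highest PPMI with any class."""
    X = (X > 0).astype(float)                 # word presence per question
    n = X.shape[0]
    p_w = X.sum(axis=0) / n                   # P(w)
    best = np.zeros(X.shape[1])
    for c in np.unique(y):
        mask = (y == c)
        p_wc = X[mask].sum(axis=0) / n        # P(w, c)
        with np.errstate(divide="ignore", invalid="ignore"):
            pmi = np.log(p_wc / (p_w * mask.mean()))
        ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0), 0)
        best = np.maximum(best, ppmi)         # a word's best class PPMI
    return np.argsort(best)[-k:]

# toy stand-ins for the question corpus and its 13-class labels
questions = ["кто автор романа", "когда началась война", "где находится музей"]
labels = np.array(["PERSON", "DATE", "LOCATION"])

X = CountVectorizer().fit_transform(questions).toarray()
keep = top_k_by_ppmi(X, labels)
clf = MultinomialNB(alpha=1.0, fit_prior=False)  # add-1 smoothing, equal priors
clf.fit(X[:, keep], labels)
```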

Event logs collected by modern information and technical systems usually contain enough data for automated process model discovery. A variety of algorithms has been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad hoc selected parts of a log still lacks a full-fledged implementation. This paper describes an ROLAP-based method of multidimensional event log storage for process mining. The result of analyzing the log is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad hoc selection of criteria and a threshold on occurrence probability.
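A minimal sketch of the graph construction described above: all observed event sequences are unioned into a directed graph whose arcs are ranked by occurrence probability, and the analyst filters the graph with an ad hoc threshold. The list-of-traces log format is an assumption; in the paper the sublogs come from ROLAP selections.

```python
from collections import Counter

def sequence_graph(traces):
    arcs = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            arcs[(a, b)] += 1                 # directly-follows counts
    total = sum(arcs.values())
    return {arc: n / total for arc, n in arcs.items()}

def filter_graph(graph, min_prob):
    # keep only arcs at or above the analyst-chosen probability threshold
    return {arc: p for arc, p in graph.items() if p >= min_prob}

log = [["register", "check", "pay"], ["register", "pay"]]
print(filter_graph(sequence_graph(log), 0.2))
```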
This book constitutes the proceedings of the First Asia Pacific Conference on Business Process Management held in Beijing, China, in August 2013.
In all, 19 contributions from seven countries were submitted. Following an extensive review process by an international Program Committee, seven full papers and one short paper were accepted for publication in this book and presentation at the conference. In addition, a keynote by Wil van der Aalst is also included.
Operational processes leave trails in the information systems supporting them. Such event data are the starting point for process mining – an emerging scientific discipline relating modeled and observed behavior. The relevance of process mining is increasing as more and more event data become available. The increasing volume of such data (“Big Data”) provides both opportunities and challenges for process mining. In this paper we focus on two particular types of process mining: process discovery (learning a process model from example behavior recorded in an event log) and conformance checking (diagnosing and quantifying discrepancies between observed behavior and modeled behavior). These tasks become challenging when there are hundreds or even thousands of different activities and millions of cases. Typically, process mining algorithms are linear in the number of cases and exponential in the number of different activities. This paper proposes a very general divide-and-conquer approach that decomposes the event log based on a partitioning of activities. Unlike existing approaches, this paper does not assume a particular process representation (e.g., Petri nets or BPMN) and allows for various decomposition strategies (e.g., SESE- or passage-based decomposition). Moreover, the generic divide-and-conquer approach reveals the core requirements for decomposing process discovery and conformance checking problems.
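A minimal sketch of the divide-and-conquer idea above: the event log is projected onto each cell of an activity partitioning, yielding sublogs that can be mined or checked independently. The projection is generic; the partitioning itself (e.g. SESE- or passage-based) is assumed to be supplied by one of the strategies the paper mentions.

```python
def project(log, activities):
    """Keep only the events of each trace that fall in `activities`."""
    return [[e for e in trace if e in activities] for trace in log]

def decompose(log, partition):
    return [project(log, set(cell)) for cell in partition]

log = [["a", "b", "c", "d"], ["a", "c", "b", "d"]]
sublogs = decompose(log, [["a", "b"], ["c", "d"]])
# each sublog can now be mined or checked separately, and the per-cell
# diagnostics combined into results for the original problem
print(sublogs)
```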
Human reasoning tends to distinguish things that change from things that do not. The latter are commonly expressed in reasoning as objects, which may represent classes or instances, with classes further divided into concept types and relation types. These became the main concern of knowledge engineering and have proved well tractable by computers. The former kind of thing, meanwhile, inevitably evokes consideration not only of a "thing-that-changes" but also of a "change-of-a-thing", and thus demands that the change itself be treated as another entity that needs to be comprehended and handled. This special entity, approached from different perspectives as event, (changeable) state, transformation, process, scenario and the like, remains a controversial philosophical, linguistic and scientific notion and has received notably less systematic attention from knowledge engineers than non-changing things. In particular, there is no clarity in how to express change in knowledge engineering: as some specific concept or relation type, as a statement or proposition in which a subject is related to predicate(s), or in another way. There seems to be agreement among scientists that time has to be related, explicitly or implicitly, to everything we regard as change; but how it should be related, and whether it should be time exactly or some more generic property or condition, is also a matter of debate.

To bring together researchers who study the representation of change in knowledge engineering, in both fundamental and applied aspects, a workshop on Modeling States, Events, Processes and Scenarios (MSEPS 2013) was held on 12 January 2013 within the framework of the 20th International Conference on Conceptual Structures (ICCS 2013) in Mumbai, India. Seven submissions were selected for presentation; they cover major approaches to the representation of change and address such diverse domains of knowledge as biology, geology, oceanography, physics and chemistry, as well as some multidisciplinary contexts.

Concept maps of biological and other transformations were presented by Meena Kharatmal and Nagarjuna Gadiradju. Their approach stems from Sowa's conceptual graphs and represents the vision of change as a particular type of concept or, more likely, relation, defined by meaning rather than by formal properties. The work of Prima Gustiene and Remigijus Gustas follows a congenial approach but develops a different notation for representing change, based on specified actor dependencies, in application to business issues concerning privacy-related data. Nataly Zhukova, Oksana Smirnova and Dmitry Ignatov explore the structure of oceanographic data with regard to the possibility of representing it by event ontologies and conceptual graphs. Vladimir Anokhin and Biju Longhinos examine another Earth science, geotectonics, and demonstrate that its long-standing methodological problems call for the application of knowledge engineering methods, primarily the engineering of knowledge about events and processes. They suggest a draft strategy for applying knowledge engineering in geotectonics and call for a joint interdisciplinary effort in this direction. Doji Lokku and Anuradha Alladi introduce a concept of "purposefulness" of any human action and suggest a modeling approach based on it in the context of systems theory. In this approach, intellectual means for reaching a purpose are regarded either as the structure of a system in which the purpose is achieved, or as a process that takes place in this system.
These means are exposed to different concerns of knowledge, which may be either favorable or not to achieving the purpose. The resulting framework can perhaps be described in a conceptual-graph-related way, but it is also clearly interpretable as a statement-based pattern, more or less resembling the event bush (Pshenichny et al., 2009). This links all the aforementioned works with the last two contributions, which represent an approach based on understanding change as a succession of events (including at least one event), each event being expressed as a statement with one subject and a finite number of predicates. The method of the event bush, which materializes this approach and was previously applied mostly in the geosciences, is demonstrated here in application to physical modeling by Cyril Pshenichny, Roberto Carniel and Paolo Diviacco, and to chemical and experimental issues by Cyril Pshenichny.

The reported results and their discussion form an agenda for future meetings, discussions and publications. This agenda includes, though is not limited to:
- logical tools for process modeling,
- visual notations for dynamic knowledge representation,
- graph languages and graph semantics,
- semantic science applications,
- event-driven reasoning,
- ontological modeling of events and time,
- process mining,
- modeling of events, states, processes and scenarios in particular domains and interdisciplinary contexts.

The workshop marked the formation of a new sub-discipline in knowledge engineering, and future effort will be directed at consolidating its conceptual base and transforming the existing diversity of approaches to representing change into an arsenal of complementary tools sharpened for various ranges of tasks in different domains.
BPM 2013 was the 11th conference in a series that provides a prestigious forum for researchers and practitioners in the field of business process management (BPM). The conference was organized by Tsinghua University, China, and took place during August 26–30, 2013, in Beijing, China. Compared to previous editions of BPM, this year we noted a lower focus by authors on topics like process modeling, while we also observed a considerable growth of submissions regarding areas like process mining, conformance/compliance checking, and process model matching. The integrated consideration of processes and data remains popular, and novel viewpoints focus, among others, on data completeness in business processes, the modeling and runtime support of event streaming in business processes, and business process architectures.
The practical relevance of process mining is increasing as more and more event data become available. Process mining techniques aim to discover, monitor and improve real processes by extracting knowledge from event logs. The two most prominent process mining tasks are: (i) process discovery: learning a process model from example behavior recorded in an event log, and (ii) conformance checking: diagnosing and quantifying discrepancies between observed behavior and modeled behavior. The increasing volume of event data provides both opportunities and challenges for process mining. Existing process mining techniques have problems dealing with large event logs referring to many different activities. Therefore, we propose a generic approach to decompose process mining problems. The decomposition approach is generic and can be combined with different existing process discovery and conformance checking techniques. It is possible to split computationally challenging process mining problems into many smaller problems that can be analyzed easily and whose results can be combined into solutions for the original problems.
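A minimal sketch of how decomposed results can be recombined, complementing the projection sketch above: for a valid decomposition, a trace fits the overall model exactly when each of its projections fits the corresponding model fragment. The `fits_fragment` callback stands in for any existing conformance checker and is an assumption, not a prescribed interface.

```python
def trace_fits(trace, partition, fragments, fits_fragment):
    """A trace fits overall iff every projection fits its fragment."""
    for cell, fragment in zip(partition, fragments):
        projected = [e for e in trace if e in cell]
        if not fits_fragment(projected, fragment):
            return False          # a deviation in any fragment breaks fitness
    return True

def log_fitness(log, partition, fragments, fits_fragment):
    ok = sum(trace_fits(t, partition, fragments, fits_fragment) for t in log)
    return ok / len(log)          # fraction of perfectly fitting traces
```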
Grading dozens of Petri net models manually is a tedious and error-prone task. In this paper, we present Grade/CPN, a tool supporting the grading of Colored Petri nets modeled in CPN Tools. The tool is extensible, configurable, and can check static and dynamic properties. It automatically handles tedious tasks like checking that good modeling practice is adhered to, and supports tasks that are difficult to automate, such as checking model legibility. We propose and support the Britney Temporal Logic, which can be used to guide the simulator and to check temporal properties. We report our experiences with using the tool in a course with 100 participants.
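A minimal illustration of checking simple temporal properties over simulation traces, in the spirit of (though much simpler than) the Britney Temporal Logic mentioned above; the trace format and the property being checked are assumptions, not the tool's actual syntax.

```python
def eventually(pred, trace):
    return any(pred(step) for step in trace)

def always(pred, trace):
    return all(pred(step) for step in trace)

# e.g. every simulated run must eventually fire a hypothetical
# "ReceiveAck" transition
traces = [["Send", "Timeout", "Send", "ReceiveAck"]]
print(all(eventually(lambda t: t == "ReceiveAck", tr) for tr in traces))
```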
Recent breakthroughs in process mining research make it possible to discover, analyze, and improve business processes based on event data. The growth of event data provides many opportunities but also imposes new challenges. Process mining is typically done for an isolated well-defined process in steady-state. However, the boundaries of a process may be fluid and there is a need to continuously view event data from different angles. This paper proposes the notion of process cubes where events and process models are organized using different dimensions. Each cell in the process cube corresponds to a set of events and can be used to discover a process model, to check conformance with respect to some process model, or to discover bottlenecks. The idea is related to the well-known OLAP (Online Analytical Processing) data cubes and associated operations such as slice, dice, roll-up, and drill-down. However, there are also significant differences because of the process-related nature of event data. For example, process discovery based on events is incomparable to computing the average or sum over a set of numerical values. Moreover, dimensions related to process instances (e.g. cases are split into gold and silver customers), subprocesses (e.g. acquisition versus delivery), organizational entities (e.g. backoffice versus frontoffice), and time (e.g., 2010, 2011, 2012, and 2013) are semantically different and it is challenging to slice, dice, roll-up, and drill-down process mining results efficiently.
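A minimal sketch of the process-cube idea: events carry dimension attributes, each cell of the cube collects the events at one coordinate, and OLAP-style operations such as slice reduce the cube before discovery or conformance checking is applied per cell. The attribute names and the dictionary representation are illustrative assumptions.

```python
from collections import defaultdict

def build_cube(events, dims):
    cube = defaultdict(list)
    for e in events:
        cell = tuple(e[d] for d in dims)      # one coordinate per dimension
        cube[cell].append(e)
    return cube

def slice_cube(cube, dims, dim, value):
    """OLAP-style slice: fix one dimension to a value, then drop it."""
    i = dims.index(dim)
    out = defaultdict(list)
    for cell, evs in cube.items():
        if cell[i] == value:
            out[cell[:i] + cell[i + 1:]].extend(evs)
    return out

events = [
    {"case": 1, "activity": "pay", "customer": "gold", "year": 2012},
    {"case": 2, "activity": "pay", "customer": "silver", "year": 2013},
]
cube = build_cube(events, ["customer", "year"])
print(slice_cube(cube, ["customer", "year"], "customer", "gold"))
```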
This paper presents preliminary results of a research project aimed at combining ontology-based information retrieval technology and process mining tools. Ontologies describing both data domains and data sources are used to search for news on the Internet and to extract facts. Process mining tools allow finding regularities and relations between single events or event types in order to construct formal models of processes, which can then be used for ensuing analysis by experts. The applicability of the approach is studied on the example of environmental technogenic disasters caused by oil spills and the events that followed them. Ontologies allow adjusting the approach to new domains.
A model is considered for organizing cargo transportation between two node stations connected by a railway line that contains a certain number of intermediate stations. The movement of cargo is in one direction. Such a situation may occur, for example, if one of the node stations is located in a region that produces raw material for a manufacturing industry located in the region of the other node station. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for accepting cargo at the initial node station, the rules of interaction between neighboring stations, and the rule for distributing cargo to the final node station. The process of cargo transportation is governed by a given control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow. This results in the need for a “correct” extension of solutions of the system of differential equations to a class of quasi-solutions, whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, we were able to construct these quasi-solutions numerically and determine their rate of growth. We note that the main technical difficulty consisted in obtaining quasi-solutions that satisfy the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions, and in particular of the sizes of the gaps (jumps) in the solutions, on a number of model parameters characterizing the control rule, the technologies for cargo transportation, and the intensity of cargo supply at a node station.
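A minimal fourth-order Runge–Kutta sketch of the kind used to build the quasi-solutions numerically. The right-hand side here is a toy placeholder; the actual finite-dimensional system and the handling of its nonlocal linear restrictions (which produce the jumps) are defined in the paper and are not reproduced.

```python
import numpy as np

def rk4_step(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, x0, t0, t1, n):
    h = (t1 - t0) / n
    xs, x = [x0], np.asarray(x0, dtype=float)
    for i in range(n):
        x = rk4_step(f, t0 + i * h, x, h)
        xs.append(x)
    return np.array(xs)

# toy right-hand side standing in for the cargo-flow dynamics
trajectory = integrate(lambda t, x: -0.5 * x, np.array([1.0]), 0.0, 10.0, 100)
```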
The geographic information system (GIS) is based on the first and only Russian Imperial Census of 1897 and on the First All-Union Census of the Soviet Union of 1926. The GIS features vector data (shapefiles) of all provinces of the two states. For the 1897 census, there is information about linguistic, religious, and social estate groups. The part based on the 1926 census features nationality. Both shapefiles include information on gender and on rural and urban population. The GIS allows for producing any maps needed for individual studies of the period that require administrative boundaries and demographic information.
Existing approaches suggest that IT strategy should be a reflection of business strategy. In practice, however, organisations often do not follow business strategy even if it is formally declared. In these conditions, IT strategy can be viewed not as a plan but as an organisational shared view on the role of information systems. This approach generally reflects only a top-down perspective of IT strategy, so it can be supplemented by a strategic behaviour pattern (i.e., a more or less standard response to changes, formed as a result of previous experience) to implement a bottom-up approach. Two components that can help to establish an effective reaction to new IT initiatives are proposed here: a model of IT-related decision making, and an efficiency measurement metric to estimate the maturity of business processes and the appropriate IT. The usage of the proposed tools is demonstrated in practical cases.
This proceedings publication is a compilation of selected contributions from the “Third International Conference on the Dynamics of Information Systems,” which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of the dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own areas of study.