Analysis of Patients' Clinical Pathways in Medical Institutions Based on Hard and Fuzzy Clustering Methods
Modeling the processes in a healthcare system plays a large role in understanding its activities and serves as a basis for increasing the efficiency of medical institutions. The tasks of analyzing and modeling large volumes of urban healthcare data with machine learning methods are of particular importance and relevance for the development of industry solutions within the digitalization of the economy, where data is the key factor of production. This research considers the problem of automatically analyzing and determining groups of patients' clinical pathways using clustering methods. Existing projects in this area reflect great interest in such studies on the part of the scientific community; however, a number of methodological approaches still need to be developed for their practical application in urban outpatient institutions, taking into account the specifics of the organization being analyzed. The aim of the study is to improve the quality of management and the segmentation of the input patient flow in urban medical institutions on the basis of cluster analysis methods, with a view to the further development of recommendation services. One approach to achieving this goal is the development and implementation of clinical pathways, or patient trajectories. In general, a patient's clinical pathway can be interpreted as the trajectory the patient follows when receiving medical services in the respective institutions. This article presents an approach to forming groups of patient routes using the hierarchical agglomerative algorithm with the Ward method and Additive Regularization of Topic Models (ARTM). A computational experiment based on public data on the routes of patients diagnosed with sepsis is described. A distinctive feature of the proposed approach is that it not only automates the identification of similar groups of patient trajectories but also takes clinical pathway patterns into account to form recommendations for allocating the resources of a medical institution. The proposed approach to segmenting the heterogeneous input flow of patients in urban medical institutions on the basis of clustering consists of the following steps: 1) preparing the data of the medical institution in the event-log format; 2) encoding patient routes; 3) determining the upper limit of clinical pathway length; 4) hierarchical agglomerative clustering; 5) additive regularization of topic models (ARTM); 6) identifying popular patient route patterns. The resulting clusters of routes serve as the foundation for the further development of a simulation model of a medical institution and for providing recommendations to patients. In addition, these groups may underlie the development of a robotic process automation (RPA) system, which simulates human actions and makes it possible to automate the interpretation of data for managing the institution's resources.
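A minimal sketch of steps 2 to 4 of this pipeline, assuming a toy set of routes and a simple activity-count encoding (both illustrative choices, not the article's exact scheme):

```python
# Sketch of steps 2-4: encode patient routes and cluster them with
# hierarchical agglomerative clustering using Ward linkage.
# The routes and the count-based encoding are illustrative assumptions.
from collections import Counter

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Each patient route is a sequence of medical services (event-log activities).
routes = [
    ["registration", "triage", "lab", "icu"],
    ["registration", "triage", "lab", "ward"],
    ["registration", "icu", "icu", "ward"],
]

# Encode routes as activity-count vectors over a shared vocabulary.
vocab = sorted({a for r in routes for a in r})
X = np.array([[Counter(r)[a] for a in vocab] for r in routes], dtype=float)

# Ward linkage merges the pair of clusters that minimizes the increase
# in within-cluster variance at each step.
Z = linkage(X, method="ward")

# Cut the dendrogram into a fixed number of route groups.
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(range(len(routes)), labels)))
```

Cutting the dendrogram at a fixed cluster count is one option; a distance threshold over the Ward merge heights works equally well when the number of route groups is not known in advance.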
We present a comprehensive analysis of the business models of large, medium-sized and small Russian commercial banks from 2006 to 2009. The banks are grouped on the basis of homogeneity criteria applied to their financial and operational outcomes, taking into account the structure of their assets and liabilities, profitability, and liquidity ratios. The results show how the banks adjusted their business models before and after the financial turmoil that took place in 2008. In addition, the prevailing banking business models observed for the leading banks in Russia are defined. Banks that frequently changed their business models are identified and analyzed.
Operational processes leave trails in the information systems supporting them. Such event data are the starting point for process mining – an emerging scientific discipline relating modeled and observed behavior. The relevance of process mining is increasing as more and more event data become available. The increasing volume of such data (“Big Data”) provides both opportunities and challenges for process mining. In this paper we focus on two particular types of process mining: process discovery (learning a process model from example behavior recorded in an event log) and conformance checking (diagnosing and quantifying discrepancies between observed behavior and modeled behavior). These tasks become challenging when there are hundreds or even thousands of different activities and millions of cases. Typically, process mining algorithms are linear in the number of cases and exponential in the number of different activities. This paper proposes a very general divide-and-conquer approach that decomposes the event log based on a partitioning of activities. Unlike existing approaches, this paper does not assume a particular process representation (e.g., Petri nets or BPMN) and allows for various decomposition strategies (e.g., SESE- or passage-based decomposition). Moreover, the generic divide-and-conquer approach reveals the core requirements for decomposing process discovery and conformance checking problems.
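To make the divide-and-conquer idea concrete, here is a minimal sketch, assuming a toy event log of activity sequences and an arbitrary two-block activity partition rather than a specific SESE- or passage-based strategy:

```python
# Illustrative sketch of activity-based decomposition: project an event
# log onto disjoint activity subsets, producing one sublog per subset
# that can be discovered or conformance-checked independently.
# The log and the partition below are toy assumptions.
from typing import Dict, FrozenSet, List

Trace = List[str]
Log = List[Trace]

def project(log: Log, activities: FrozenSet[str]) -> Log:
    """Keep only the events whose activity belongs to the given subset."""
    return [[a for a in trace if a in activities] for trace in log]

def decompose(log: Log, partition: List[FrozenSet[str]]) -> Dict[FrozenSet[str], Log]:
    """One sublog per activity block; each can be mined in isolation."""
    return {subset: project(log, subset) for subset in partition}

log = [["a", "b", "c", "d"], ["a", "c", "b", "d"]]
partition = [frozenset({"a", "b"}), frozenset({"c", "d"})]
for subset, sublog in decompose(log, partition).items():
    print(sorted(subset), sublog)
```

Because each sublog contains only a fraction of the activities, the exponential part of the cost is paid per block rather than over the whole activity set, which is where the divide-and-conquer gain comes from.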
Human reasoning customarily distinguishes between things that change and things that do not. The latter are commonly expressed in reasoning as objects, which may represent classes or instances, with classes further divided into concept types and relation types. These became the main subject of knowledge engineering and have proven well tractable by computers. The former kind of thing, meanwhile, inevitably evokes consideration not only of a ``thing-that-changes'' but also of a ``change-of-a-thing'', and thus implies that change itself is another entity that needs to be comprehended and handled. This special entity, treated from different perspectives as event, (changeable) state, transformation, process, scenario and the like, remains a controversial philosophical, linguistic and scientific notion and has received notably less systematic attention from knowledge engineers than non-changing things. In particular, there is no clarity about how to express change in knowledge engineering: as some specific concept or relation type, as a statement or proposition in which a subject is related to predicate(s), or in another way. Scientists seem to agree that time has to be related, explicitly or implicitly, to everything we regard as change, but the way it should be related, and whether it should be time itself or some more generic property or condition, is also a matter of debate. To bring together the researchers who study the representation of change in knowledge engineering in both fundamental and applied aspects, a workshop on Modeling States, Events, Processes and Scenarios (MSEPS 2013) was held on 12 January 2013 in the framework of the 20th International Conference on Conceptual Structures (ICCS 2013) in Mumbai, India. Seven submissions covering major approaches to the representation of change were selected for presentation; they address such diverse domains of knowledge as biology, geology, oceanography, physics, chemistry and some multidisciplinary contexts. Concept maps of biological and other transformations were presented by Meena Kharatmal and Nagarjuna Gadiradju. Their approach stems from Sowa's conceptual graphs and represents change as a particular type of concept or, more likely, relation, defined by meaning rather than by formal properties. The work of Prima Gustiene and Remigijus Gustas follows a kindred approach but develops a different notation for representing change, based on specified actor dependencies and applied to business issues concerning privacy-related data. Nataly Zhukova, Oksana Smirnova and Dmitry Ignatov explore the structure of oceanographic data with regard to the possibility of representing them with event ontologies and conceptual graphs. Vladimir Anokhin and Biju Longhinos examine another Earth science, geotectonics, and demonstrate that its long-standing methodological problems call for the application of knowledge engineering methods, primarily the engineering of knowledge about events and processes. They suggest a draft strategy for applying knowledge engineering in geotectonics and call for a joint interdisciplinary effort in this direction. Doji Lokku and Anuradha Alladi introduce a concept of ``purposefulness'' for any human action and suggest a modeling approach based on it in the context of systems theory. In this approach, the intellectual means for reaching a purpose are regarded either as the structure of a system in which the purpose is achieved, or as a process that takes place in this system.
These means are exposed to different concerns of knowledge, which may be either favorable or unfavorable to achieving the purpose. The resulting framework can perhaps be described in a conceptual-graph-related way, but it is also clearly interpretable as a statement-based pattern more or less resembling the event bush (Pshenichny et al., 2009). This links all the aforementioned works with the last two contributions, which represent an approach based on understanding change as a succession of events (containing at least one event), each event being expressed as a statement with one subject and a finite number of predicates. The event bush method that embodies this approach, previously applied mostly in the geosciences, is demonstrated here in application to physical modeling by Cyril Pshenichny, Roberto Carniel and Paolo Diviacco, and to chemical and experimental issues by Cyril Pshenichny. The reported results and their discussion form an agenda for future meetings, discussions and publications. This agenda includes, though is not limited to:
- logical tools for process modeling,
- visual notations for dynamic knowledge representation,
- graph languages and graph semantics,
- semantic science applications,
- event-driven reasoning,
- ontological modeling of events and time,
- process mining,
- modeling of events, states, processes and scenarios in particular domains and interdisciplinary contexts.
The workshop marked the formation of a new sub-discipline within knowledge engineering, and future efforts will be directed at consolidating its conceptual base and transforming the existing diversity of approaches to representing change into an arsenal of complementary tools, each sharpened for its own range of tasks in different domains.
BPM 2013 was the 11th conference in a series that provides a prestigious forum for researchers and practitioners in the field of business process management (BPM). The conference was organized by Tsinghua University, China, and took place during August 26–30, 2013, in Beijing, China. Compared to previous editions of BPM, authors this year focused less on topics like process modeling, while we observed considerable growth in submissions concerning areas like process mining, conformance/compliance checking, and process model matching. The integrated consideration of processes and data remains popular, and novel viewpoints address, among other things, data completeness in business processes, the modeling and runtime support of event streaming in business processes, and business process architectures.
Data Correcting Algorithms in Combinatorial Optimization focuses on algorithmic applications of well-known polynomially solvable special cases of computationally intractable problems. The purpose of the text is to design practically efficient algorithms for solving wide classes of combinatorial optimization problems. Researchers, students and engineers will benefit from the new bounds and branching rules when developing efficient branch-and-bound type computational algorithms. The book examines applications to the Traveling Salesman Problem and its variations, the Maximum Weight Independent Set Problem, different classes of allocation and cluster analysis problems, as well as some classes of scheduling problems. It introduces the data correcting approach, which answers the following questions: how to construct a bound for the original intractable problem, and on which element of the corrected instance one should branch so that the total size of the search tree is minimized. The computing time needed for solving intractable problems can thus be adjusted to the requirements of solving real-world problems.
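As a rough illustration of the branch-and-bound machinery the data correcting approach builds on, the sketch below solves a tiny Maximum Weight Independent Set instance; the simple sum-of-remaining-weights bound is our stand-in, not the book's data correcting bound:

```python
# Compact branch-and-bound sketch for Maximum Weight Independent Set.
# The bound (current value plus all weights still available) is a
# deliberately simple placeholder for the stronger, polynomially
# computable bounds used by the data correcting approach.
def mwis(weights, adj):
    """weights: dict node -> weight; adj: dict node -> set of neighbours."""
    best = [0.0]
    nodes = sorted(weights, key=weights.get, reverse=True)

    def branch(i, chosen, value):
        # Prune: even taking every remaining node cannot beat the incumbent.
        if value + sum(weights[v] for v in nodes[i:]) <= best[0]:
            return
        if i == len(nodes):
            best[0] = max(best[0], value)
            return
        v = nodes[i]
        if not (adj[v] & chosen):            # branch 1: take v if feasible
            branch(i + 1, chosen | {v}, value + weights[v])
        branch(i + 1, chosen, value)         # branch 2: skip v

    branch(0, set(), 0.0)
    return best[0]

weights = {"a": 3, "b": 2, "c": 4}
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(mwis(weights, adj))  # "a" and "c" are independent: 7
```

The tighter the bound, the smaller the search tree, which is exactly the trade-off the data correcting approach is designed to control.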
Recent breakthroughs in process mining research make it possible to discover, analyze, and improve business processes based on event data. The growth of event data provides many opportunities but also imposes new challenges. Process mining is typically done for an isolated, well-defined process in steady state. However, the boundaries of a process may be fluid, and there is a need to continuously view event data from different angles. This paper proposes the notion of process cubes, where events and process models are organized using different dimensions. Each cell in the process cube corresponds to a set of events and can be used to discover a process model, to check conformance with respect to some process model, or to discover bottlenecks. The idea is related to the well-known OLAP (Online Analytical Processing) data cubes and associated operations such as slice, dice, roll-up, and drill-down. However, there are also significant differences because of the process-related nature of event data. For example, process discovery based on events is fundamentally different from computing the average or sum over a set of numerical values. Moreover, dimensions related to process instances (e.g., cases split into gold and silver customers), subprocesses (e.g., acquisition versus delivery), organizational entities (e.g., back office versus front office), and time (e.g., 2010, 2011, 2012, and 2013) are semantically different, and it is challenging to slice, dice, roll-up, and drill-down process mining results efficiently.
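A toy sketch of the cube idea, with hypothetical event attributes serving as dimensions (case, activity, customer class, year); slicing and dicing yield sublogs that could each feed discovery or conformance checking:

```python
import pandas as pd

# Events carry dimensional attributes; every combination of dimension
# values selects a cell of the cube, i.e. a sublog of events.
events = pd.DataFrame({
    "case":     ["c1", "c1", "c2", "c2", "c3"],
    "activity": ["order", "ship", "order", "cancel", "order"],
    "customer": ["gold", "gold", "silver", "silver", "gold"],
    "year":     [2012, 2012, 2013, 2013, 2013],
})

# Slice: fix one value of one dimension (customer = gold).
print(events[events["customer"] == "gold"])

# Dice / drill-down: one cell per (customer, year) combination; each
# resulting sublog could feed process discovery or conformance checking.
for (customer, year), cell in events.groupby(["customer", "year"]):
    print(customer, year, list(zip(cell["case"], cell["activity"])))
```

Unlike a numeric OLAP cube, aggregating a cell here means mining a model from its events, which is why the classical roll-up and drill-down operations need rethinking.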
For the development of technological innovations, it is essential to ensure competent, modern commercialization within the framework of balanced business models. A multifactor cluster analysis of the business models of contemporary high-technology companies and industries shows that the most effective commercialization takes place within the framework of four basic models. A company's profitability does not depend directly on the level of its technologies but is determined by the quality of its business model. Moreover, trends in high-technology industries demonstrate growing segmentation and differentiation of markets and more frequent use of value network models.
The article analyzes the regional differentiation of microentrepreneurship development and indices of judicial statistics on the basis of current statistical records. It demonstrates the capabilities of cluster analysis for revealing typological groups of Russian regions according to the level of entrepreneurial activity and the results of law enforcement practice.
This paper presents a pattern-based behavioral analysis of the 100 largest Russian commercial banks by total assets over an eight-year period: from the first quarter of 1999 to the second quarter of 2007. Bank performance indicators are analyzed, and structural similarities in the development of the banks are examined. A cluster analysis is applied to determine banks with a similar structure of operations. This analysis makes it possible to estimate how the structure of the Russian banking system has changed over time. In particular, it allows us to identify prevailing patterns in the behavior of Russian commercial banks and to analyze the stability of their positions within a particular pattern.
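A hedged sketch of this kind of analysis, with invented indicator ratios and k-means standing in for whatever clustering procedure the paper actually uses:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: banks; columns: invented structure-of-operations ratios,
# e.g. loans/assets, securities/assets, deposits/liabilities.
indicators = np.array([
    [0.70, 0.10, 0.60],
    [0.65, 0.15, 0.55],
    [0.20, 0.60, 0.30],
    [0.25, 0.55, 0.35],
])

# Standardize so that no single ratio dominates the distance metric.
X = StandardScaler().fit_transform(indicators)

# Group banks with a similar structure of operations; repeating the
# clustering per quarter would show how banks migrate between patterns
# over time, as in the stability analysis described above.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```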
How strongly does the degree of trust in basic social and political institutions among people from different countries depend on their individual characteristics? To answer this question, three types of models have been estimated using data from the fifth wave of the World Values Survey: the first based on the assumption of a relationship common to all countries; the second taking into account the heterogeneity of countries (through the introduction of country-level variables); and the third applying a preliminary subdivision of countries into five clusters. The results obtained have been used to suggest possible actions to increase public confidence in the basic institutions.
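A minimal sketch of the third strategy (cluster countries first, then estimate a model within each cluster), with entirely hypothetical variables and synthetic data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
country_trait = rng.uniform(1.0, 50.0, size=n)   # e.g. GDP per capita
age = rng.uniform(18.0, 80.0, size=n)            # individual characteristic
trust = (rng.random(n) < 0.5).astype(int)        # 1 = trusts institutions

# Step 1: subdivide observations into country clusters by the aggregate trait.
km = KMeans(n_clusters=2, n_init=10, random_state=0)
cluster = km.fit_predict(country_trait.reshape(-1, 1))

# Step 2: estimate a separate individual-level trust model inside each cluster.
for c in range(2):
    mask = cluster == c
    model = LogisticRegression().fit(age[mask].reshape(-1, 1), trust[mask])
    print(c, float(model.coef_[0, 0]))
```

Comparing the per-cluster coefficients with those of the pooled model is what reveals whether the relationship really differs across groups of countries.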
This paper presents preliminary results of a research project aimed at combining ontology-based information retrieval technology and process mining tools. Ontologies describing both data domains and data sources are used to search for news on the Internet and to extract facts. Process mining tools allow finding regularities and relations between single events or event types in order to construct formal models of processes, which can then be used for subsequent analysis by experts. The applicability of the approach is studied using the example of environmental technogenic disasters caused by oil spills and the events that follow them. The ontologies allow the approach to be adjusted to new domains.
I give an explicit formula for the (set-theoretical) system of resultants of m+1 homogeneous polynomials in n+1 variables.
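For background (this is the standard setting, not the paper's explicit formula), the common-zero condition defines the resultant variety:

```latex
% Resultant variety of m+1 forms f_0, ..., f_m on projective n-space:
% the systems of coefficients for which the forms share a common zero.
\[
  \nabla \;=\; \bigl\{\, (f_0,\dots,f_m) \;:\; \exists\, x \in \mathbb{P}^n,\;
  f_0(x) = \dots = f_m(x) = 0 \,\bigr\}.
\]
% For m = n, \nabla is a hypersurface cut out by the classical resultant
% \mathrm{Res}(f_0,\dots,f_n); for m > n it has codimension m - n + 1 and is
% described set-theoretically by a system of resultants, whose explicit
% form is the subject of the paper.
```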