Neuroimaging studies are accumulating fast. A significant number of them use functional magnetic resonance imaging (fMRI) and report stereotactic brain coordinates. In the last 15 years, meta-analytic software tools have been developed to identify overarching agreement across studies (e.g., http://www.brainmap.org/). Meta-analytic studies help establish statistical concordance and quantitatively summarize large amounts of evidence. To date there are 944 papers on fMRI meta-analyses, as indexed by Web of Science (WOS; 28/04/18). Before analyzing coordinates, researchers have to compile and systematically review the relevant literature and extract the stereotactic coordinates. One way of pooling information from the articles is to search them manually and extract the relevant data by hand, such as coordinates (i.e., foci), contrasts (i.e., experiments) and types of analyses (whole-brain or region of interest). Another approach is offered by software with pre-extracted information, such as Sleuth (http://brainmap.org/sleuth/), Neurosynth (http://neurosynth.org/) and other open-source programs. Critically, these tools do not have up-to-date datasets and cover only a limited number of studies (e.g., 11,406 papers in Neurosynth and 3,294 papers in Sleuth 2.4 as of 28/04/2018), whereas a WOS search for the keyword (“fMRI”) yields 61,976 papers. To improve the quality of the manual search for area-based meta-analyses and to speed up the identification of foci of interest, we developed CoordsFinder, a standalone graphical-interface application that addresses the challenge of processing multiple fMRI articles reporting data in coordinate space. The software is written in WPF (C# and XAML), based on .NET Framework 4.5.2, and runs on Microsoft Windows 7 or higher.
CoordsFinder takes a list of foci uploaded manually by the user and searches for them inside a specified folder containing the PDF files of the papers, as PDF is the most common file format for articles. Foci coordinates are found both in tables and in the plain text of the articles. The uploaded foci file may contain coordinates in MNI or TAL space, and the software indicates each type. In the current version, CoordsFinder can explore only files stored on the user's computer, and it processes about 274 papers per minute on a typical computer. In practice, this software provides a solution for automatically extracting coordinates from multiple articles, and thus for effectively organizing and further analyzing data already available in the literature.
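The abstract does not publish CoordsFinder's matching rules, but the core task it describes, locating known coordinate triplets in text extracted from PDFs, can be sketched in a few lines. The regular expression, the function name `find_foci` and the `tolerance` parameter below are illustrative assumptions rather than the tool's actual (C#) implementation:

```python
import re

# Assumed pattern: three signed numbers (x, y, z) separated by whitespace,
# commas, or semicolons -- the typical layout of coordinate triplets in
# fMRI result tables and running text.
COORD_RE = re.compile(
    r"(-?\d{1,3}(?:\.\d+)?)[,;\s]+"
    r"(-?\d{1,3}(?:\.\d+)?)[,;\s]+"
    r"(-?\d{1,3}(?:\.\d+)?)"
)

def find_foci(text, targets, tolerance=0.0):
    """Return the target foci (x, y, z) that occur in the extracted text."""
    found = []
    for m in COORD_RE.finditer(text):
        triplet = tuple(float(g) for g in m.groups())
        for t in targets:
            if all(abs(a - b) <= tolerance for a, b in zip(triplet, t)):
                found.append(t)
    return found

page = "Activation peaked at -42, 16, 28 (MNI) and 10 54 2."
print(find_foci(page, [(-42.0, 16.0, 28.0), (0.0, 0.0, 0.0)]))
```

A real tool would first need a PDF text-extraction step (e.g., a PDF parsing library), which is omitted here.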
The Workshop on Program Semantics, Specification and Verification: Theory and Applications is the leading event in Russia in the field of applying formal methods to software analysis. The proceedings of the ninth workshop are dedicated to formalisms for program semantics, formal models and verification, programming and specification languages, and algebraic and logical aspects of programming.
This book constitutes the refereed proceedings of the 14th International Workshop on Enterprise and Organizational Modeling and Simulation, EOMAS 2018, held in Tallinn, Estonia, in June 2018. The main focus of EOMAS is on the role, importance, and application of modeling and simulation within the extended organizational and enterprise context. The 11 full papers presented in this volume were carefully reviewed and selected from 22 submissions. They were organized in topical sections on conceptual modeling, enterprise engineering, and formal methods.
This state-of-the-art survey is dedicated to the memory of Emmanuil Markovich Braverman (1931-1977), a pioneer in developing machine learning theory. The 12 revised full papers and 4 short papers included in this volume were presented at the conference "Braverman Readings in Machine Learning: Key Ideas from Inception to Current State" held in Boston, MA, USA, in April 2017, commemorating the 40th anniversary of Emmanuil Braverman's death. The papers present an overview of some of Braverman's ideas and approaches. The collection is divided into three parts. The first part bridges the past and the present; its main contents relate to the concept of the kernel function and its application to signal and image analysis as well as clustering. The second part presents a set of extensions of Braverman's work to issues of current interest in both the theory and applications of machine learning. The third part includes short essays by a friend, a student, and a colleague.
This book constitutes the refereed proceedings of the 7th Conference on Artificial Intelligence and Natural Language, AINL 2018, held in St. Petersburg, Russia, in October 2018. The 19 revised full papers were carefully reviewed and selected from 56 submissions and cover a wide range of topics, including morphology and word-level semantics, sentence and discourse representations, corpus linguistics, language resources, and social interaction analysis.
This book constitutes extended, revised and selected papers from the 7th International Conference on Optimization Problems and Their Applications, OPTA 2018, held in Omsk, Russia in July 2018. The 27 papers presented in this volume were carefully reviewed and selected from a total of 73 submissions. The papers are listed in thematic sections, namely location problems, scheduling and routing problems, optimization problems in data analysis, mathematical programming, game theory and economical applications, applied optimization problems and metaheuristics.
Information systems in different domains, such as healthcare, tourism, banking and government, record operational behavior in the form of event logs. The process mining discipline offers dozens of techniques to discover, analyze, and visualize processes running in information systems, based on their event logs. The representational bias (the language used to represent processes) plays an important role in process discovery. In this work, the BPMN (Business Process Model and Notation) language was chosen as the representational bias and as a starting point for process discovery, analysis and enhancement. BPMN is a common process modeling language, widely used by consultants, managers, analysts, and software engineers in various application domains. This work aims to bridge the gap between process mining techniques and BPMN. Existing techniques are often limited to a single perspective, e.g., just the control flow, subprocesses, or just the resources. The goal of this work is to fully support the BPMN specification in the context of process mining and to suggest a unified and integrated approach allowing for the discovery, analysis and enhancement of hierarchical high-level BPMN models. The approach proposed in this thesis is supported by tools that enable users to analyze discovered processes in BPMN-compliant tools and even automate their execution using existing BPMN engines.
This volume contains the proceedings of the first Workshop on Data Analysis in Medicine held in May 2017 at the National Research University Higher School of Economics, Moscow. It contains one invited paper by Dr. Svetla Boytcheva, 6 regular contributions and 2 project proposals, carefully selected and reviewed by at least two reviewers from the international program committee. The papers accepted for publication report on different aspects of the analysis of medical data, among them the treatment of data on particular diseases (the consolidated mathematical growth model of breast cancer CoMBreC, artificial neural networks for prediction of final height in children with growth hormone deficiency), methods of data analysis (analysis of rare diseases, methods of machine learning and Big Data, subgroup discovery for treatment optimization), and instrumental tools (explanation-oriented methods of data analysis in medicine, information support features of the medical research process, a modeling framework for medical data semantic transformations, a radiology quality management and peer-review system). The organizers of the workshop would like to thank the reviewers for their careful work, and all contributors and participants of the workshop.
The materials of the International Scientific and Practical Conference are presented below.
The Conference reflects the modern state of innovation in education, science, industry and the socio-economic sphere from the standpoint of introducing new information technologies.
It will be of interest to a wide range of researchers, teachers, graduate students and professionals in the field of innovation and information technologies.
This book discusses smart, agile software development methods and their applications for enterprise crisis management, presenting a systematic approach that promotes agility and crisis management in software engineering. The key finding is that these crises are caused by both technology-based and human-related factors. Although mission-critical, human-related issues are often neglected. To manage the crises, the book suggests an efficient agile methodology including a set of models, methods, patterns, practices and tools. Together, these make up a survival toolkit for large-scale software development in crises. Further, the book analyses lifecycles and methodologies, focusing on their impact on the project timeline and budget, and incorporates a set of industry-based patterns, practices and case studies, combining academic concepts and practices of software engineering.
Sustaining a competitive edge in today’s business world requires innovative approaches to product, service, and management systems design and performance. Advances in computing technologies have presented managers with additional challenges as well as further opportunities to enhance their business models.
Software Engineering for Enterprise System Agility: Emerging Research and Opportunities is a collection of innovative research that identifies the critical technological and management factors in ensuring the agility of business systems and investigates process improvement and optimization through software development. Featuring coverage on a broad range of topics such as business architecture, cloud computing, and agility patterns, this publication is ideally designed for business managers, business professionals, software developers, academicians, researchers, and upper-level students interested in current research on strategies for improving the flexibility and agility of businesses and their systems.
This edited collection presents a range of methods that can be used to analyse linguistic data quantitatively. A series of case studies of Russian data spanning different aspects of modern linguistics serve as the basis for a discussion of methodological and theoretical issues in linguistic data analysis. The book presents current trends in quantitative linguistics, evaluates methods and presents the advantages and disadvantages of each. The chapters contain introductions to the methods and relevant references for further reading.
The Russian language, despite being one of the most studied in the world, had until recently been little explored quantitatively. After a burst of research activity in the years 1960-1980, quantitative studies of Russian vanished. They are now reappearing in an entirely different context. Today we have large and deeply annotated corpora available for extended quantitative research, such as the Russian National Corpus, ruWac and RuTenTen, to name just a few (websites for these and other resources can be found in a special section in the References). The present volume is intended to fill the lacuna between the available data and the methods that can be applied to studying them.
Our goal is to present current trends in researching Russian quantitative linguistics, to evaluate the research methods vis-à-vis Russian data, and to show both the advantages and the disadvantages of the methods. We especially encouraged our authors to focus on evaluating statistical methods and new models of analysis. New findings concern applicability, evaluation, and the challenges that arise from using quantitative approaches to Russian data.
Session 1. Uncertainty in measurements and calculations. Probabilistic methods in information processing. The Bayesian approach
Session 2. Systems simulation. Control of complex objects under conditions of uncertainty
Session 3. Neurocomputing networks, genetic algorithms and their applications
Session 4. Methods and tools for the design of expert systems and decision support systems
Session 5. Intelligent measurement systems. New approaches in measurements: intellectual, soft and fuzzy measurements
Session 6. Environmental information systems
Session 7. Application of decision support systems in the economy and the social sphere
The Second Conference on Software Engineering and Information Management (SEIM-2017) aims to bring together students, researchers and practitioners in different areas of software engineering and information management. We consider SEIM-2017 to be a stepping stone for young researchers: it should help them familiarize themselves with the conference workflow, practice writing academic papers, gather valuable feedback on their research and expand their research network. The conference welcomes submissions on a wide range of topics, including but not limited to:
• Algorithms and data structures
• Cloud systems
• Coding theory
• Compilers
• Crowdsourcing
• Data storage and processing
• Development management
• Digital signal processing
• Distributed systems
• E-commerce / e-government
• Empirical software engineering
• High-performance computing
• Information retrieval
• Information security
• Intelligent data analysis
• Internet of Things
• Machine learning
• Mobile systems
• Modelling
• Natural language processing
• Networks and telecommunications
• (Non-)relational databases
• Operating systems
• Programming languages
• Recommendation systems
• Robotics
• Semantic web
• Social networks
• Software analysis
• Software testing
• Software verification
• Software virtualization
• Software-defined networks
• Theoretical computer science
In total, we received 35 papers, each reviewed by at least 3 members of the Program Committee. Of these, 8 were selected for publication in CEUR-WS.org, 8 for indexing in RSCI, and 4 were accepted as talk-only to allow the young authors to experience the process of a scientific conference. We would like to thank the members of our Program Committee for their great work and contribution to the success of our conference! These proceedings include the SEIM-2017 papers that were selected by the Program Committee for publication in RSCI.
These papers passed not only the original review procedure, but also an additional round of post-review incorporating the conference feedback. We thank the authors for their submissions to SEIM 2017 and hope to see them in the future! Furthermore, we would also like to thank Tatiana Mironova and Sergey Zherevchuk for their great help in organizing the conference, the Computer Science Center for hosting the event, and JetBrains Research for their overall support of this endeavour! Additional information about the SEIM conference series can be found on the conference website at: http://2017.seim-conf.org/
This book comprises the 34 selected papers of the meeting mentioned on the cover and title page.
We propose a method (TT-GP) for approximate inference in Gaussian Process (GP) models. We build on previous scalable GP research including stochastic variational inference based on inducing inputs, kernel interpolation, and structure exploiting algebra. The key idea of our method is to use Tensor Train decomposition for variational parameters, which allows us to train GPs with billions of inducing inputs and achieve state-of-the-art results on several benchmarks. Further, our approach allows for training kernels based on deep neural networks without any modifications to the underlying GP model. A neural network learns a multidimensional embedding for the data, which is used by the GP to make the final prediction. We train GP and neural network parameters end-to-end without pretraining, through maximization of GP marginal likelihood. We show the efficiency of the proposed approach on several regression and classification benchmark datasets including MNIST, CIFAR-10, and Airline.
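TT-GP stores its variational parameters in the Tensor Train format. As background on that format (not the authors' implementation), a minimal TT-SVD sketch shows how a full tensor is factored into a chain of three-way cores by sequential SVDs; with no rank truncation the cores reconstruct the tensor exactly:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a d-way tensor into Tensor Train cores via sequential SVD."""
    shape = tensor.shape
    d = len(shape)
    cores, rank = [], 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r_new = max(1, int(np.sum(s > eps)))  # keep numerically nonzero ranks
        cores.append(u[:, :r_new].reshape(rank, shape[k], r_new))
        mat = (np.diag(s[:r_new]) @ vt[:r_new]).reshape(r_new * shape[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

rng = np.random.default_rng(0)
t = rng.standard_normal((4, 5, 6))
print(np.allclose(tt_reconstruct(tt_svd(t)), t))  # -> True
```

TT-GP's gain comes from operating on such cores directly (never forming the full tensor), which is what makes billions of inducing inputs tractable.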
An entropy dissipative spatial discretization has recently been constructed for the multidimensional gas dynamics equations based on their preliminary parabolic quasi-gasdynamic (QGD) regularization. In this paper, an explicit finite-difference scheme with such a discretization is verified on several versions of the 1D Riemann problem, both well-known in the literature and new. The scheme is compared with previously constructed QGD-schemes and its merits are noted. Practical convergence rates in the mesh $L^1$-norm are computed. We also analyze, in the nonlinear setting, the practical relevance of recently derived necessary conditions for $L^2$-dissipativity of the Cauchy problem for a linearized QGD-scheme as the Mach number grows.
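The "practical convergence rates" mentioned above are conventionally obtained by comparing errors on successively refined meshes; assuming the standard two-mesh formula $p = \log(e_h/e_{h/r})/\log r$ (the paper's exact procedure is not spelled out in the abstract), a minimal sketch:

```python
import math

def practical_rate(e_coarse, e_fine, refinement=2.0):
    """Observed convergence order from errors on meshes with steps h and h/refinement."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# A first-order-accurate scheme halves its error when the mesh step halves:
print(practical_rate(0.08, 0.04))  # -> 1.0
```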
In the commentary to Jens Mammen’s book A New Logical Foundation for Psychology (2017), three issues are discussed. The first one concerns possible interrelations of: (a) others’ irreplaceability and existential irretrievability rigorously proved by Mammen; and (b) morality and attitudes to the others. Lem’s criticism of Heidegger’s existential philosophy, which paradoxically ignores mass homicide, is discussed in the context of topology of being. Different attitudes to the other as irreplaceable and irretrievable (e.g., in case of apprehension and execution of a murderer) are analyzed. The second issue concerns the possibility of true duplicates of the same person. The paradox of copied complexity is introduced. The third issue concerns reductionism (including brain reductionism) and opportunities to deduce various phenomena of development (mental development, actual genesis of creative thinking, etc.) from the new logical foundation for psychology built by Mammen.
The 3-coloring problem for a given graph consists in verifying whether it is possible to divide the vertex set of the graph into three subsets of pairwise nonadjacent vertices. A complete complexity classification is known for this problem for the hereditary classes defined by triples of forbidden induced subgraphs, each on at most 5 vertices. In this article, quadruples of forbidden induced subgraphs, each on at most 5 vertices, are under consideration. For all but three of the corresponding hereditary classes, the computational status of the 3-coloring problem is determined. For two of the remaining three classes, we prove their polynomial equivalence and polynomial reducibility to the third class.
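For intuition about the problem itself (not the paper's classification machinery), a brute-force check that a graph's vertices can be split into three independent sets; this takes exponential time in general, which is why pinning down the complexity status per hereditary class matters:

```python
from itertools import product

def is_3_colorable(n, edges):
    """Brute force: can vertices 0..n-1 be divided into three subsets of
    pairwise nonadjacent vertices (i.e., properly 3-colored)?"""
    for coloring in product(range(3), repeat=n):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False

triangle = [(0, 1), (1, 2), (0, 2)]
k4 = triangle + [(0, 3), (1, 3), (2, 3)]
print(is_3_colorable(3, triangle), is_3_colorable(4, k4))  # -> True False
```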
The main goal of this paper is to study the interconnections between credit ratings and financial indicators of industrial companies from the BRICS countries. We use the method of patterns, one of the modern methods of nonlinear modeling, to identify groups of heterogeneous objects with different influence on ratings. Additionally, we estimate a Tobit regression model for the selected groups and establish some credit rating patterns for the BRICS industrial companies. Our results from the Tobit model may have practical implications for short-term financial management.
The mass adoption of mobile cardiographs is already leading both to explosive quantitative growth in the number of patients available for ECG study, recorded daily outside the hospital (big data in cardiology), and to the emergence of qualitatively new opportunities for studying long-term oscillatory processes (over weeks, months, and years) in the dynamics of the individual state of a patient's cardiovascular system.
The article demonstrates that these new opportunities for long-term continuous monitoring of the cardiovascular system state in mass patient populations make it possible to reveal regularities (data mining) in cardiovascular system dynamics, leading to the hypothesis that an adequate cardiovascular system model exists as a distributed nonlinear self-oscillating system of the FPU recurrence model class. The existence of a meaningful mathematical model of the cardiovascular system within the framework of the FPU auto-recurrence, as a refinement of the traditional black-box model, further allows us to offer new computational methods for ECG analysis and prediction of cardiovascular system dynamics for refined diagnosis and evaluation of treatment effectiveness.
Models of dependent type theories are contextual categories with some additional structure. We prove that if a theory T has enough structure, then the category T-Mod of its models carries the structure of a model category. We also show that if T has Σ-types, then weak equivalences can be characterized in terms of homotopy categories of models.
The paper considers the use of convolutional neural networks for the concurrent recognition of a person's gender and age from video recordings of their face. The emphasis is on incorporating the approach into mobile video-recording software. We have investigated the fusion of decisions obtained during the processing of each video frame, including the use of a classifier committee based on Dempster–Shafer theory. We propose a novel age prediction method that evaluates the expectation over the most probable ages. We have compared existing neural-net models with a specially trained modification of the MobileNet convolutional network with two outputs. Experimental results are given for such data collections as Kinect, IJB-A, Indian Movie and EmotiW. Compared with other conventional methods, our approach makes it possible to increase the age and gender recognition accuracy by 2-5% and 5-10%, respectively.
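The exact estimator behind "the expectation over the most probable ages" is not spelled out in the abstract. One plausible reading, sketched here as a hypothetical illustration (the `top_k` cutoff and the renormalisation step are our assumptions, not the paper's), takes the probability-weighted mean over the k most probable age classes of a softmax output:

```python
import numpy as np

def expected_age(probs, ages, top_k=3):
    """Hypothetical sketch: predict age as the probability-weighted mean of
    the top_k most probable age classes from a softmax output."""
    probs = np.asarray(probs, dtype=float)
    top = np.argsort(probs)[-top_k:]          # indices of the most probable ages
    weights = probs[top] / probs[top].sum()   # renormalise over the top-k classes
    return float(np.dot(weights, np.asarray(ages, dtype=float)[top]))

ages = [20, 25, 30, 35]
print(expected_age([0.1, 0.5, 0.3, 0.1], ages, top_k=2))
```

Restricting the expectation to the most probable classes, rather than averaging over the whole distribution, keeps low-probability outlier ages from dragging the estimate.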
Cardiovascular disease associated with metabolic syndrome has a high prevalence, but the mechanistic basis of metabolic cardiomyopathy remains poorly understood. We characterised the cardiac transcriptome in a murine metabolic syndrome (MetS) model (LDLR−/−; ob/ob, DKO) relative to the healthy, control heart (C57BL/6, WT) and the transcriptional changes induced by ACE-inhibition in those hearts. RNA-Seq, differential gene expression and transcription factor analysis identified 288 genes differentially expressed between DKO and WT hearts implicating 72 pathways. Hallmarks of metabolic cardiomyopathy were increased activity in integrin-linked kinase signalling, Rho signalling, dendritic cell maturation, production of nitric oxide and reactive oxygen species in macrophages, atherosclerosis, LXR-RXR signalling, cardiac hypertrophy, and acute phase response pathways. ACE-inhibition had a limited effect on gene expression in WT (55 genes, 23 pathways), and a prominent effect in DKO hearts (1143 genes, 104 pathways). In DKO hearts, ACE-I appears to counteract some of the MetS-specific pathways, while also activating cardioprotective mechanisms. We conclude that MetS and control murine hearts have unique transcriptional profiles and exhibit a partially specific transcriptional response to ACE-inhibition.