The CCIS series is devoted to the publication of proceedings of computer science conferences. Its aim is to efficiently disseminate original research results in informatics in printed and electronic form. While the focus is on publication of peer-reviewed full papers presenting mature work, inclusion of reviewed short papers reporting on work in progress is welcome, too. Besides globally relevant meetings with internationally representative program committees guaranteeing a strict peer-reviewing and paper selection process, conferences run by societies or of high regional or national relevance are also considered for publication.
In online social networks, high-level features of user behavior, such as character traits, can be predicted from user profile data and connections. Recent publications use data from online social networks to detect people with a propensity for depression or a depression diagnosis. In this study, we investigate the capabilities of previously published methods and metrics applied to the Russian online social network VKontakte. We gathered user profile data from the most popular communities about suicide and depression on VK.com and performed a comparative analysis between their members and randomly sampled users. We used not only standard user attributes such as age, gender, or number of friends but also structural properties of their egocentric networks, with results similar to the study of suicide propensity in the Japanese social network Mixi.com. Our goal is to test the approach and models in this new setting and to propose enhancements to the research design and analysis. We examine the resulting classifiers to identify profile features that can indicate a user's propensity for depression, in order to provide tools for early depression detection. Finally, we discuss further work that might improve our analysis and transfer the results to practical applications.
The book includes 64 papers submitted to the International Conference on Computational Linguistics and Intellectual Technologies, Dialogue 2019, and presents a broad spectrum of theoretical and applied research on natural language description, language modeling, and the creation of applied computer technologies.
Computer simulations are nowadays a firmly established third pillar of modern natural sciences, complementing experimentation and paper-and-pencil theoretical studies. Simulations, experiments in silico, prove indispensable in diverse areas of research in physics and other natural sciences. This volume collects papers based on presentations delivered at the Second International Conference on Computer Simulations in Physics and beyond (CSP2017), which took place October 9–12, 2017 in Moscow. The Conference, which continues a biannual tradition started by an inaugural conference in 2015, took place on the campus of the A.N. Tikhonov Moscow Institute of Electronics and Mathematics and was jointly organized by the National Research University Higher School of Economics, the Landau Institute for Theoretical Physics, and the Science Center in Chernogolovka. As the name implies, the Conference is a multidisciplinary meeting with a focus on computational physics and related subjects. Indeed, methods of computational physics prove useful in a broad spectrum of research in multiple branches of the natural sciences, and this volume provides a sample. We hope that this volume will interest a wide range of readers, and we are already looking forward to the next conference in this biannual series.
This is the first textbook on attribute exploration, its theory, its algorithms for applications, and some of its many possible generalizations. Attribute exploration is useful for acquiring structured knowledge through an interactive process, by asking queries to an expert. Generalizations that handle incomplete, faulty, or imprecise data are discussed, but the focus lies on knowledge extraction from a reliable information source.
The method is based on Formal Concept Analysis, a mathematical theory of concepts and concept hierarchies, and uses its expressive diagrams. The presentation is self-contained. It provides an introduction to Formal Concept Analysis with emphasis on its ability to derive algebraic structures from qualitative data, which can be represented in meaningful and precise graphics.
This book constitutes the proceedings of the 20th International Conference on Conceptual Structures, ICCS 2013, held in Mumbai, India, in January 2013. The 22 full papers presented were carefully reviewed and selected from 43 submissions for inclusion in the book. The volume also contains 3 invited talks. ICCS focuses on the useful representation and analysis of conceptual knowledge with research and business applications. It advances the theory and practice in connecting the user's conceptual approach to problem solving with the formal structures that computer applications need to bring their productivity to bear. Conceptual structures (CS) represent a family of approaches that builds on the successes of artificial intelligence, business intelligence, computational linguistics, conceptual modeling, information and Web technologies, user modeling, and knowledge management.
This volume contains a collection of papers based on lectures and presentations delivered at the International Conference on Constructive Nonsmooth Analysis (CNSA) held in St. Petersburg, Russia, from June 18 to 23, 2012. This conference was organized to mark the 50th anniversary of the birth of nonsmooth analysis and nondifferentiable optimization and was dedicated to J.-J. Moreau and the late B.N. Pshenichnyi, A.M. Rubinov, and N.Z. Shor, whose contributions to NSA and NDO remain invaluable.
The first four chapters of the book are devoted to the theory of nonsmooth analysis. Chapters 5–8 contain new results in nonsmooth mechanics and the calculus of variations. Chapters 9–13 are related to nondifferentiable optimization, and the volume concludes with four chapters of historical interest, including tributes to three giants of nonsmooth analysis, convexity, and optimization: Alexandr Alexandrov, Leonid Kantorovich, and Alex Rubinov. The last chapter provides an overview and important snapshots of the 50-year history of convex analysis and optimization.
This is a textbook in data analysis. Its contents are heavily influenced by the idea that data analysis should help in enhancing and augmenting knowledge of the domain as represented by the concepts and statements of relation between them. According to this view, two main pathways for data analysis are summarization, for developing and augmenting concepts, and correlation, for enhancing and establishing relations. Visualization, in this context, is a way of presenting results in a cognitively comfortable way. The term summarization is understood quite broadly here to embrace not only simple summaries like totals and means, but also more complex summaries such as the principal components of a set of features or cluster structures in a set of entities.
The material presented in this perspective makes a unique mix of subjects from the fields of statistical data analysis, data mining, and computational intelligence, which follow different systems of presentation.
This book concentrates on in-depth explanation of a few methods addressing core issues, rather than on presentation of the multitude of methods popular among scientists. An added value of this edition is that I try to address two features of the brave new world that materialized after the first edition was written in 2010: the emergence of "Data Science" and changes in students' cognitive skills in the process of global digitalization. The birth of Data Science gives me more opportunities for delineating the field of data analysis. An overwhelming majority of both theoreticians and practitioners are inclined to consider the notions of "data analysis" (DA) and "machine learning" (ML) as synonymous. There are, however, at least two differences between the two. First comes the difference in perspectives: ML aims to equip computers with methods and rules to see through regularities of the environment and behave accordingly, whereas DA aims to enhance conceptual understanding. These goals are not inconsistent, which explains the huge overlap between DA and ML; however, there are situations in which the two perspectives diverge. Regarding current students' cognitive habits, I came to the conclusion that they prefer to get immediately into the thick of it. Therefore, I streamlined the presentation of multidimensional methods. These methods are now organized in four chapters, one of which presents correlation learning (Chapter 3). Three other chapters present summarization methods, both quantitative (Chapter 2) and categorical (Chapters 4 and 5). Chapter 4 relates to finding and characterizing partitions by using K-means clustering and its extensions. Chapter 5 relates to hierarchical and separative cluster structures.
Using the encoder-decoder data recovery approach brings forth a number of mathematically proven interrelations between methods used for addressing such practical issues as the analysis of mixed-scale data, data standardization, the number of clusters, cluster interpretation, etc. An obvious bias towards summarization over correlation can be explained, first, by the fact that most texts in the field are biased in the opposite direction and, second, by my personal preferences. Categorical summarization, that is, clustering, is considered not just a method of DA but rather a model of classification as a concept in knowledge engineering. Also, in this edition, I somewhat relaxed the "presentation/formulation/computation" narrative structure, which was omnipresent in the first edition, to be able to do things in one go. Chapter 1 presents the author's view of the DA mainstream, or core, as well as of a few Data Science issues in general. Specifically, I bring forward novel material on the role of DA, including its successes and pitfalls (Section 1.4), and on classification as a special form of knowledge (Section 1.5). Overall, my goal is to show the reader that Data Science is not a well-formed part of knowledge yet but rather a piece of science-in-the-making.
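Since the partitioning chapter is built around K-means clustering, a generic sketch of the method (Lloyd's algorithm in plain Python) may help fix ideas. This is not the book's own code; the function name `kmeans` and its parameters are ours, chosen for illustration.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means (Lloyd's algorithm): alternate between assigning
    points to their nearest center and moving each center to the mean
    of its cluster, until the centers stop moving."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initialize with k random points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: attach each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # update step: move each center to its cluster's mean
        new_centers = []
        for j, cl in enumerate(clusters):
            if cl:
                dim = len(cl[0])
                new_centers.append(tuple(sum(p[d] for p in cl) / len(cl)
                                         for d in range(dim)))
            else:
                new_centers.append(centers[j])  # keep an empty cluster's center
        if new_centers == centers:            # converged
            break
        centers = new_centers
    return centers, clusters

# two well-separated blobs are recovered as two clusters of three points each
pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
       (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
centers, clusters = kmeans(pts, 2)
```

The book's extensions (choosing the number of clusters, mixed-scale data, cluster interpretation) all build on this basic alternating scheme.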
Crisis is a burning issue; it is not a phenomenon that can be conquered once and for all. The current approach to crisis is optimized collaboration, which allows for manageable, measurable, and predictable software development. Crisis is a new reality to live and work with. The current software development crisis dates back to the 1960s. The root cause of crisis is an imbalance between resources and options. Understanding the nature of crisis helps to understand the reasons for future crises.
This book is a navigator through lifecycle models, methodologies, principles, and practices for predictable and efficient software development in crisis, i.e., under rapid requirement changes, resource deficits, and other uncertainties. The opening chapters therefore survey the major approaches to software development and their applicability in crisis. The subsequent narrative is case-based; it involves large-scale software implementations in different industries and knowledge transfer processes in IT education. The book suggests a set of principles that can reconcile the client's and the developer's views of the future software product in order to avoid or mitigate a crisis.
The book will be helpful for students, postdocs, theorists, and practitioners in software development. It suggests proven principles and practices of crisis management for software development.
This CCIS volume published by Springer contains the post-proceedings of the XXI International Conference on Data Analytics and Management in Data Intensive Domains (DAMDID/RCDL 2019) that took place during October 15–18 at the Kazan Federal University, Russia.
DAMDID is held as a multidisciplinary forum of researchers and practitioners from various domains of science and research, promoting cooperation and exchange of ideas in the area of data analysis and management in domains driven by data-intensive research. Approaches to data analysis and management being developed in specific data-intensive domains (DID) of X-informatics (such as X = astro, bio, chemo, geo, med, neuro, physics, chemistry, material science, etc.), social sciences, as well as in various branches of informatics, industry, new technologies, finance, and business are expected to contribute to the conference content.
The Data Analytics and Management in Data Intensive Domains conference (DAMDID) is traditionally planned as a multidisciplinary forum of researchers and practitioners from various domains of science and research, promoting cooperation and exchange of ideas in the area of data analysis and management in domains driven by data-intensive research. Approaches to data analysis and management being developed in specific data-intensive domains (DID) of X-informatics (such as X = astro, bio, chemo, geo, medicine, neuro, physics, etc.), social sciences, as well as in various branches of informatics, industry, new technologies, finance, and business constitute the universe of the conference discourse. The DAMDID conference was formed in 2015 through a transformation of the RCDL conference ("Digital Libraries: Advanced Methods and Technologies, Digital Collections", http://rcdl.ru), so that continuity with RCDL has been preserved after many years of its successful work.
We study synchronization aspects of parallel discrete event simulation (PDES) algorithms. Our analysis is based on the recently introduced model of virtual-time evolution in an optimistic synchronization algorithm. This model connects synchronization aspects with the properties of the profile of the local virtual times. The main parameter of the model is a "growth rate" q = 1/(1 + b), where b is the mean rollback length. We measure the average utilization of events and the desynchronization between logical processes as functions of the parameter q. We find that there is a phase transition between an "active phase", in which the average utilization of processing time is finite, and an "absorbing state" with zero utilization; the utilization vanishes at a critical point qc ≈ 0.136. The average desynchronization degree (i.e., the variance of the local virtual times) grows with the parameter q. We also investigate the influence of sparse distant communications between logical processes and find that they do not drastically change the synchronization properties of the optimistic synchronization algorithm, in sharp contrast with the conservative algorithm. Finally, we compare our results with existing case-study simulations.
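The utilization-versus-q measurement described in the abstract can be illustrated with a deliberately simplified toy model. This sketch is our own construction, not the paper's model: we assume each logical process (LP) carries a local virtual time (LVT), each attempted event survives with probability q and is rolled back otherwise, and we report the surviving fraction (a stand-in for utilization) and the LVT variance (a stand-in for desynchronization). The function name `simulate_profile` and all parameter values are illustrative assumptions; in particular, this trivial rule does not reproduce the paper's phase transition at qc ≈ 0.136.

```python
import random

def simulate_profile(n_lps=100, steps=2000, q=0.5, seed=0):
    """Toy LVT-profile model, loosely inspired by the growth-rate
    parameter q = 1/(1 + b) from the abstract (our simplification):
      - at each step a random LP tries to advance its LVT by 1;
      - with probability 1 - q the advance is rolled back;
      - utilization = fraction of attempted events that survive;
      - desynchronization = variance of the final LVT profile."""
    rng = random.Random(seed)
    lvt = [0] * n_lps
    committed = 0
    for _ in range(steps):
        i = rng.randrange(n_lps)
        lvt[i] += 1
        if rng.random() > q:        # rollback with probability 1 - q
            lvt[i] -= 1
        else:
            committed += 1
    mean = sum(lvt) / n_lps
    variance = sum((t - mean) ** 2 for t in lvt) / n_lps
    return committed / steps, variance

# with q = 1 (no rollbacks) every attempted event survives
util, desync = simulate_profile(n_lps=10, steps=500, q=1.0, seed=1)
```

In this toy the mean utilization is simply q; the interest of the model studied in the paper lies precisely in the nontrivial interaction between LPs that such a simplification leaves out.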
This book constitutes the refereed proceedings of the 18th International Conference on Data Analytics and Management in Data Intensive Domains, DAMDID/RCDL 2016, held in Ershovo, Moscow, Russia, in October 2016.
The 16 revised full papers presented together with one invited talk and two keynote papers were carefully reviewed and selected from 57 submissions. The papers are organized in topical sections on semantic modeling in data intensive domains; knowledge and learning management; text mining; data infrastructures in astrophysics; data analysis; research infrastructures; position paper.
Data Mining in Agriculture represents a comprehensive effort to provide graduate students and researchers with an analytical text on data mining techniques applied to agriculture and environment-related fields. This book presents both theoretical and practical insights, with a focus on presenting the context of each data mining technique intuitively, with ample concrete examples represented graphically and with algorithms written in MATLAB®.
Doctoral students were invited to the Doctoral Consortium held in conjunction with the main conference of ECIR 2013. The Doctoral Consortium aimed to provide a constructive setting for presentations and discussions of doctoral students' research projects with senior researchers and other participating students. The two main goals of the Doctoral Consortium were: 1) to advise students regarding current critical issues in their research; and 2) to make students aware of the strengths and weaknesses of their research as viewed from different perspectives. The Doctoral Consortium was aimed at students in the middle of their thesis projects; at a minimum, students ought to have formulated their research problem, theoretical framework, and suggested methods, and at a maximum, students ought to have just initiated data analysis. The Doctoral Consortium took place on Sunday, March 24, 2013, at the ECIR 2013 venue, and participation was by invitation only. The format was designed as follows: the doctoral students presented summaries of their work to the other participating doctoral students and the senior researchers. Each presentation was followed by a plenary discussion and an individual discussion with one senior advising researcher. The discussions in the group and with the advisors were intended to help the doctoral students reflect on and carry on with their thesis work.
The book presents the most important aspects of safe digital image workflows, starting from the basic practical implications and gradually uncovering the underlying concepts and algorithms. With an easy-to-follow, down-to-earth presentation style, the text helps you to optimize your diagnostic imaging projects and connect the dots of medical informatics.