Book
12th International Reasoning Web Summer School, RW 2016
The question of whether an ontology can safely be replaced by another, possibly simpler, one is fundamental for many ontology engineering and maintenance tasks. It underpins, for example, ontology versioning, ontology modularization, forgetting, and knowledge exchange. What ‘safe replacement’ means depends on the intended application of the ontology. If, for example, the ontology is used to query data, then the answers to any relevant ontology-mediated query should be the same over any relevant data set; if, in contrast, it is used for conceptual reasoning, then the entailed subsumptions between concept expressions should coincide. This gives rise to different notions of ontology inseparability, such as query inseparability and concept inseparability, which generalize corresponding notions of conservative extensions. In this chapter, we survey results on various notions of inseparability in the context of description logic ontologies, discussing their applications, useful model-theoretic characterizations, algorithms for deciding whether two ontologies are inseparable (and, in some cases, for computing the difference between them if they are not), and the computational complexity of these problems.
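To fix intuition, the two notions of inseparability mentioned above are commonly formalised along the following lines (the exact definitions depend on the description logic and query language at hand; Σ is a signature of concept and role names, the T_i are TBoxes, and cert denotes certain answers; this is standard notation rather than a quotation from the chapter):

    \[ \mathcal{T}_1 \equiv_{\Sigma}^{\mathrm{c}} \mathcal{T}_2 \quad\text{iff}\quad \big(\mathcal{T}_1 \models C \sqsubseteq D \;\Longleftrightarrow\; \mathcal{T}_2 \models C \sqsubseteq D\big) \text{ for all concepts } C, D \text{ over } \Sigma \]

    \[ \mathcal{T}_1 \equiv_{\Sigma}^{\mathrm{q}} \mathcal{T}_2 \quad\text{iff}\quad \mathrm{cert}(q,\mathcal{T}_1,\mathcal{A}) = \mathrm{cert}(q,\mathcal{T}_2,\mathcal{A}) \text{ for all } \Sigma\text{-ABoxes } \mathcal{A} \text{ and } \Sigma\text{-queries } q \]

Conservative extensions are recovered as the special case where \(\mathcal{T}_1 \subseteq \mathcal{T}_2\) and \(\Sigma\) is the signature of \(\mathcal{T}_1\).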

The number of space objects will grow severalfold within a few years due to the planned launches of constellations of thousands of microsatellites. This leads to a significant increase in the threat of satellite collisions, and spacecraft must perform collision avoidance maneuvers to mitigate the risk. According to publicly available information, conjunction events are currently handled manually by operators on the ground. Manual maneuver planning requires qualified personnel and will be impractical for constellations of thousands of satellites. In this paper, we propose a new modular autonomous collision avoidance system called "Space Navigator". It is based on a novel maneuver optimization approach that combines domain knowledge with Reinforcement Learning methods.
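Purely as an illustration of the kind of trade-off such a maneuver planner optimises, and not the Space Navigator implementation itself, a minimal sketch might weigh residual collision probability against propellant use and choose among discrete delta-v candidates with an epsilon-greedy rule standing in for the learned policy; every name and constant below is an assumption.

    import random

    def maneuver_reward(collision_probability, delta_v, risk_weight=1e4):
        # Hypothetical objective: penalise residual collision risk heavily, fuel use mildly.
        return -(risk_weight * collision_probability + delta_v)

    def choose_maneuver(candidates, estimate_risk, epsilon=0.1):
        """Epsilon-greedy choice over discrete delta-v candidates; estimate_risk is an
        assumed domain model mapping a maneuver to post-maneuver collision probability."""
        if random.random() < epsilon:
            return random.choice(candidates)                               # explore
        return max(candidates,
                   key=lambda dv: maneuver_reward(estimate_risk(dv), dv))  # exploit
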
Heaps are well-studied fundamental data structures with myriad applications, both theoretical and practical. We consider the problem of designing a heap with an “optimal” extract-min operation. Assuming an arbitrary linear ordering of keys, a heap with n elements typically takes O(log n) time to extract the minimum. Extracting all elements faster is impossible, as this would violate the Ω(n log n) lower bound for comparison-based sorting. It is known, however, that it takes only O(n + k log k) time to sort just the k smallest of n given elements, which suggests that there might be a faster heap whose extract-min performance depends on the number of elements extracted so far. In this paper we show that this is indeed the case. We present a heap variant that performs insert in O(1) time and takes only O(log* n + log k) time to carry out the k-th extraction (where log* denotes the iterated logarithm). All of the above bounds are worst-case.
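For orientation, the baseline that these bounds improve on can be sketched with Python's heapq: heapify costs O(n) and each extraction O(log n), so extracting the k smallest elements costs O(n + k log n) overall, whereas the heap proposed in the paper charges only O(log* n + log k) to the k-th extraction. The sketch below is this plain baseline, not the paper's data structure.

    import heapq

    def k_smallest_baseline(items, k):
        """Baseline only: O(n) heapify followed by k extract-min calls at O(log n) each,
        i.e. O(n + k log n) in total; the paper's structure lowers the per-extraction cost."""
        heap = list(items)
        heapq.heapify(heap)                      # builds the heap in O(n)
        return [heapq.heappop(heap)              # each pop costs O(log n)
                for _ in range(min(k, len(heap)))]

For comparison, heapq.nsmallest(k, items) in the standard library provides similar functionality.
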
A linguistic method for determining whether a given text is a rumor or disinformation is proposed, based on web mining and a linguistic technique for comparing two text fragments. We hypothesize about a family of content generation algorithms that are capable of producing deception from a portion of genuine, original text. We then propose a disinformation detection algorithm that finds a candidate source of the text on the web and compares it with the given text, applying parse thicket technology. A parse thicket is a graph built from a sequence of parse trees augmented with inter-sentence relations for anaphora and rhetorical structure. We evaluate our algorithm in the domain of customer reviews, considering a product review as an instance of possible deception. The evaluation confirms that this is a plausible way to detect rumors and deception in web documents.
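The detection pipeline described above can be outlined roughly as follows; this is a hypothetical sketch that assumes helpers for web search, parse thicket construction, and thicket similarity are supplied by the caller (none of these names come from the paper).

    def detect_deception(text, search_web, build_parse_thicket, thicket_similarity,
                         threshold=0.5):
        """Hypothetical outline: find a candidate source for the text on the web,
        build parse thickets for both fragments, and flag the text when the
        thickets diverge. All helper functions and the threshold are assumptions."""
        candidate = search_web(text)              # candidate original source, if any
        if candidate is None:
            return None                           # nothing comparable found on the web
        given_thicket = build_parse_thicket(text)
        source_thicket = build_parse_thicket(candidate)
        score = thicket_similarity(given_thicket, source_thicket)
        return score < threshold                  # low overlap suggests altered content
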
Currently, a company's competitiveness largely depends on how well it uses the opportunities offered by modern information technologies. The Internet of Things, big data, blockchain, and artificial intelligence technologies bring companies to a new level of interaction and competition, provide new opportunities for building logistics processes, and reshape supply chain management. It is no secret that an important factor of success in the market is the possession of information, or rather knowledge. The spread of Internet technologies and mobile devices means that hardly any event now goes unrecorded by people or devices: a visit to a store, a purchase, a trip, a meeting, and so on. Information about these and other events is often freely available in the form of text messages on social networks, blogs, and news websites, but the challenge is to find it, analyze it, and draw conclusions from it. Text mining tools help to solve these tasks. The article reviews the experience of using text mining tools to solve problems in management, marketing, and finance, and discusses possible applications of text mining in logistics and supply chain management. It describes the text mining process and the main typical text mining tasks, and reviews the functionality of modern text mining tools.
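As a concrete illustration of one typical text mining step (not tied to any particular tool reviewed in the article), the sketch below turns a few toy logistics-related snippets into a TF-IDF term-weight matrix and pulls out the top-weighted terms per document, a simple form of keyword extraction; it assumes scikit-learn is available and the example texts are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer

    # Toy example: turn free-text event reports (e.g. delay notices scraped from
    # news or social media) into a term-weight matrix for downstream analysis.
    docs = [
        "port congestion delays container shipments",
        "warehouse automation cuts order processing time",
        "carrier announces fuel surcharge on long-haul routes",
    ]
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(docs)            # sparse document-term matrix
    terms = vectorizer.get_feature_names_out()
    for row, doc in zip(tfidf.toarray(), docs):
        top = sorted(zip(row, terms), reverse=True)[:3]   # top-weighted terms per document
        print(doc, "->", [t for _, t in top])
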
We assess and compare computer science skills among final-year computer science undergraduates (seniors) in four major economic and political powers that produce approximately half of the science, technology, engineering, and mathematics graduates in the world. We find that seniors in the United States substantially outperform seniors in China, India, and Russia by 0.76–0.88 SDs and score comparably with seniors in elite institutions in these countries. Seniors in elite institutions in the United States further outperform seniors in elite institutions in China, India, and Russia by ∼0.85 SDs. The skills advantage of the United States is not because it has a large proportion of high-scoring international students. Finally, males score consistently but only moderately higher (0.16–0.41 SDs) than females within all four countries.