Checking the Data Complexity of Ontology-Mediated Queries: A Case Study with Non-uniform CSPs and Polyanna
It has recently been shown that first-order- and datalog-rewritability of ontology-mediated queries (OMQs) with expressive ontologies can be checked in NExpTime using a reduction to CSPs. In this paper, we present a case study for OMQs with Boolean conjunctive queries and a fixed ontology consisting of a single covering axiom A → F ∨ T, possibly supplemented with a disjointness axiom for T and F. The ultimate aim is to classify such OMQs according to their data complexity: AC0, L, NL, P or coNP. We report on our experience with trying to distinguish between OMQs in P and coNP using the reduction to CSPs and the Polyanna software for finding polymorphisms.
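The tractability test the abstract alludes to can be illustrated in miniature. This is not the Polyanna implementation; it is a toy brute-force sketch that searches for a majority polymorphism of a binary relational template over a small domain (a majority polymorphism is one sufficient condition under which the corresponding CSP is tractable). All function and variable names here are illustrative, not taken from the paper or from Polyanna.

```python
from itertools import product

def is_polymorphism(f, relation, arity=3):
    # f maps length-`arity` tuples of domain elements to a domain element;
    # relation is a set of pairs. f is a polymorphism of `relation` iff
    # applying f coordinate-wise to any `arity` pairs of the relation
    # yields another pair of the relation.
    for edges in product(relation, repeat=arity):
        a = f(tuple(e[0] for e in edges))
        b = f(tuple(e[1] for e in edges))
        if (a, b) not in relation:
            return False
    return True

def is_majority(f, domain):
    # Majority identities: f(x,x,y) = f(x,y,x) = f(y,x,x) = x.
    return all(f((x, x, y)) == x and f((x, y, x)) == x and f((y, x, x)) == x
               for x in domain for y in domain)

def find_majority_polymorphism(domain, relation):
    # Brute-force over all ternary operations on the domain
    # (feasible only for very small domains: |D| ** (|D| ** 3) tables).
    triples = list(product(domain, repeat=3))
    for values in product(domain, repeat=len(triples)):
        table = dict(zip(triples, values))
        f = table.__getitem__
        if is_majority(f, domain) and is_polymorphism(f, relation):
            return table
    return None

# Template for graph 2-colouring: the complete graph K2 as a symmetric edge relation.
k2 = {(0, 1), (1, 0)}
print(find_majority_polymorphism([0, 1], k2) is not None)  # True: CSP(K2) is tractable
```

On a two-element domain the majority identities determine the operation completely (every triple has a repeated coordinate), so the search effectively checks whether that single operation preserves the template. Tools like Polyanna use far more refined algorithms for larger templates.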
We design a decidable extension of the description logic SROIQ underlying the Web Ontology Language OWL 2. The new logic, called SR+OIQ, supports a controlled use of role axioms whose right-hand side may contain role chains or role unions. We give a tableau algorithm for checking concept satisfiability with respect to SR+OIQ ontologies and prove its soundness, completeness and termination.
We design temporal description logics (TDLs) suitable for reasoning about temporal conceptual data models and investigate their computational complexity. Our formalisms are based on DL-Lite logics with three types of concept inclusions (ranging from atomic concept inclusions and disjointness to the full Booleans), as well as cardinality constraints and role inclusions. The logics are interpreted over the Cartesian products of object domains and the flow of time (ℤ, <), satisfying the constant domain assumption. Concept and role inclusions of the TBox hold at all moments of time (globally), and data assertions of the ABox hold at specified moments of time. To express temporal constraints of conceptual data models, the languages are equipped with flexible and rigid roles, standard future and past temporal operators on concepts, and operators “always” and “sometime” on roles. The most expressive of our TDLs (which can capture lifespan cardinalities and either qualitative or quantitative evolution constraints) turns out to be undecidable. However, by omitting some of the temporal operators on concepts/roles or by restricting the form of concept inclusions, we construct logics whose complexity ranges between NLogSpace and PSpace. These positive results are obtained by reduction to various clausal fragments of propositional temporal logic, which opens a way to employ propositional or first-order temporal provers for reasoning about temporal data models.
This book constitutes a collection of selected contributions from the 12th International Conference on Perspectives in Business Informatics Research, BIR 2013, held in Warsaw, Poland, in September 2013. Overall, 54 submissions were rigorously reviewed by 41 members of the Program Committee representing 21 countries. As a result, 19 full and 5 short papers from 12 countries have been selected for publication in this volume. This book also includes the two keynotes by Witold Abramowicz and Bernhard Thalheim. The papers cover many aspects of business information research and have been organized in topical sections on: business process management; enterprise and knowledge architectures; organizations and information systems development; information systems and services; and applications.
The paper describes the development of a portal on the development and use of tools based on (meta)modelling (DSM, DSL, etc.). The architecture of the portal, its information retrieval subsystem and its document management facilities are described.
The purpose of the portal is to create a "self-developing" resource that provides intelligent search, automatic processing of search results (documents and sources), and easy navigation over the resources found. The implementation is based on an ontology approach.
The main feature of the suggested methods is an integrated approach to development, based on a multi-level ontology repository. The portal allows users to search and analyse information, to create and study models, and to publish research results. The software supports flexible customization. The main topic of this paper is intelligent information search based on semantic indexing, automatic document classification, tracking of semantic links between documents, and automatic summarization.
This book constitutes the refereed proceedings of the 4th Conference on Knowledge Engineering and the Semantic Web, KESW 2013, held in St. Petersburg, Russia, in October 2013. The 18 revised full papers presented together with 7 short system descriptions were carefully reviewed and selected from 52 submissions. The papers address research issues related to knowledge representation, semantic web, and linked data.
Today, many problems within a particular problem domain can be solved using a domain-specific language (DSL). To use a DSL, it must either be created from scratch or selected from existing ones. Creating a completely new DSL in most cases requires high financial and time costs. Selecting an appropriate existing DSL is a labour-intensive task, because actions such as walking through every DSL and deciding whether it can handle the problem are done manually. This problem arises because there is no DSL repository and no tools for matching a suitable DSL to a specific task. This paper presents an approach to automated detection of DSL requirements (as an ontology-based structure) and automated DSL matching for a specific task.
This paper further investigates the succinctness landscape of query rewriting in OWL 2 QL. We clarify the worst-case size of positive existential (PE), non-recursive Datalog (NDL), and first-order (FO) rewritings for various classes of tree-like conjunctive queries, ranging from linear queries up to bounded treewidth queries. More specifically, we establish a superpolynomial lower bound on the size of PE-rewritings that holds already for linear queries and TBoxes of depth 2. For NDL-rewritings, we show that polynomial-size rewritings always exist for tree-shaped queries with a bounded number of leaves (and arbitrary TBoxes), and for bounded treewidth queries and bounded depth TBoxes. Finally, we show that the succinctness problems concerning FO-rewritings are equivalent to well-known problems in Boolean circuit complexity. Along with known results, this yields a complete picture of the succinctness landscape for the considered classes of queries and TBoxes.
The aim of this article is to highlight the relationships between contemporary tendencies in the humanities (the new ontologies) and contemporary architectural practices. The author articulates the distinction between the optics of the "old ontologies" and the new ones. The ontologies considered new are flat, free from the classical opposition between the whole and the parts, and based on the modality of possibility rather than obligation. Objects and practices traditionally referred to as architecture appear to be based on the principles of the "old ontologies". For them, the human being is an exceptional object compared to others; the part-to-whole relationships reflect either the superiority of the whole (society) or the superiority of the part (the individual); finally, they aim at creating an "it has to be this way" picture. The new ontologies seem impossible to apply to architecture in its traditional meaning. Nevertheless, a two-fold link between the new ontologies and architecture can be established. On the one hand, the former offer a new language for describing the variety of traditional architecture and accept that all directions, styles and buildings are of equal ontological rank. On the other hand, the new ontologies enable some new architectural practices (computer architecture, architecture of virtual space and speculative architecture) which do not substitute for traditional architecture, but accompany it.
Keywords: new ontologies, flat ontologies, architecture, computer architecture, architecture of virtual space, speculative architecture