A Survey of the Development of Lexical Query Optimization Methods
The presented overview is concerned with lexical query optimization and covers papers published over the last four decades. The paper consists of five sections. The first section (Introduction) classifies query optimization techniques into semantic optimizations and lexical optimizations. Semantic optimizations usually rely on data integrity rules that are stored in the metadata part of a database, and on data statistics. This kind of optimization is discussed in many textbooks and papers. Lexical optimizations (more often called rewriting) use only the text of a query and no other information about the database and its structure. Lexical optimizations are further classified into query transformation, query amelioration, and query reduction. The second section of the paper discusses techniques of query transformation, such as predicate pushdown and the transformation of nested queries into queries with joins. Query amelioration is the topic of the third section, with a focus on magic set optimizations. The fourth section covers query reduction optimizations; it briefly describes traditional approaches (such as tableau-based ones) and considers in more detail three new algorithms proposed by the authors. The fifth section concludes the paper.
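To make the first class of transformations concrete, the following minimal Python sketch pushes a filter predicate below a join when the predicate references columns from only one join input. The names Scan, Join, and Filter and the toy plan are assumptions for the example, not code from any of the surveyed papers.

from dataclasses import dataclass

@dataclass
class Scan:
    table: str
    columns: frozenset  # column names provided by this input

@dataclass
class Join:
    left: object
    right: object

@dataclass
class Filter:
    predicate_columns: frozenset  # columns the predicate refers to
    predicate: str                # textual form, for display only
    child: object

def columns(node):
    """Set of columns produced by a subtree."""
    if isinstance(node, Scan):
        return node.columns
    if isinstance(node, Filter):
        return columns(node.child)
    return columns(node.left) | columns(node.right)

def push_down(node):
    """Push each Filter as close to the scans as possible."""
    if isinstance(node, Filter) and isinstance(node.child, Join):
        join = node.child
        if node.predicate_columns <= columns(join.left):
            return Join(push_down(Filter(node.predicate_columns,
                                         node.predicate, join.left)),
                        push_down(join.right))
        if node.predicate_columns <= columns(join.right):
            return Join(push_down(join.left),
                        push_down(Filter(node.predicate_columns,
                                         node.predicate, join.right)))
    if isinstance(node, Join):
        return Join(push_down(node.left), push_down(node.right))
    if isinstance(node, Filter):
        return Filter(node.predicate_columns, node.predicate,
                      push_down(node.child))
    return node

# SELECT ... FROM emp JOIN dept ... WHERE emp.salary > 100
plan = Filter(frozenset({"salary"}), "salary > 100",
              Join(Scan("emp", frozenset({"emp_id", "salary"})),
                   Scan("dept", frozenset({"dept_id", "name"}))))
print(push_down(plan))  # the Filter now sits directly above the emp scan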
The problem of effective and efficient approximate query evaluation is addressed as a special case of multi-objective optimization with two criteria: the computational resources consumed and the quality of the result. The proposed optimization and execution model supports an interactive trade-off between quality and speed.
Query optimization techniques have proved to be essential for the high performance of database management systems. In the context of new querying paradigms, such as similarity-based search, exact query evaluation is neither computationally feasible nor meaningful, and approximate query evaluation is the only reasonable option.
In this paper, the problem of resource allocation for the approximate evaluation of complex queries is considered. An approximate algorithm for near-optimal resource allocation is presented, providing the best feasible quality of output subject to a limited total cost of a query. The results of experiments show that the approximate resource allocation algorithm is accurate and efficient.
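The abstract does not give the algorithm itself; as a hedged sketch of one standard approach to such budgeted allocation, the following Python fragment grants cost units greedily to the operator with the best marginal quality gain, assuming concave per-operator quality profiles. The profile format and names are illustrative, not the paper's.

import heapq

def allocate(profiles, budget):
    """profiles: one list per operator, where profile[i] is the quality
    reached after spending i cost units (non-decreasing, concave).
    Returns the per-operator spend for the given total budget."""
    spend = [0] * len(profiles)
    # Max-heap of (negated marginal gain, operator index).
    heap = [(-(p[1] - p[0]), i) for i, p in enumerate(profiles) if len(p) > 1]
    heapq.heapify(heap)
    for _ in range(budget):
        if not heap:
            break
        gain, i = heapq.heappop(heap)
        spend[i] += 1
        p, s = profiles[i], spend[i]
        if s + 1 < len(p):  # push this operator's next marginal step
            heapq.heappush(heap, (-(p[s + 1] - p[s]), i))
    return spend

# Two operators: the first improves quickly and then flattens out.
profiles = [[0.0, 0.6, 0.8, 0.9, 0.95],
            [0.0, 0.3, 0.55, 0.75, 0.9]]
print(allocate(profiles, 5))  # [2, 3]: spend follows the marginal gains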
This paper describes enhanced algorithms for lexical query optimization. These algorithms detect and remove redundant conditions from a query restriction in order to simplify it. The paper also presents the results of implementing these optimization techniques and their effect on query processing speed. The paper includes four sections. The first section (Introduction) provides the general context of the paper. The second section describes three proposed algorithms for lexical query optimization. The first one is the absorption algorithm, which makes it possible to find and remove a wide class of conditions that are redundant but not textually equal even after standardization of the whole restriction expression. The second algorithm is an adaptation of the well-known Quine–McCluskey algorithm, originally designed for the minimization of Boolean functions. The last algorithm is based on techniques for optimizing systems of linear inequalities. The third section discusses the results of an efficiency evaluation of the proposed algorithms. The fourth section concludes the paper.
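A minimal Python sketch of the absorption idea follows; it is a simplified, set-based version for illustration, not the paper's algorithm, which also detects redundant conditions that are not textually equal. Here each disjunct of a restriction in disjunctive normal form is a frozenset of atomic conditions, and a disjunct is redundant if another disjunct is a subset of it, since A OR (A AND B) simplifies to A.

def absorb(dnf):
    """dnf: iterable of frozensets of atomic conditions.
    Returns the disjuncts that survive absorption."""
    disjuncts = sorted(set(dnf), key=len)  # shorter disjuncts absorb longer ones
    kept = []
    for d in disjuncts:
        if not any(k <= d for k in kept):  # already implied by a kept disjunct
            kept.append(d)
    return kept

restriction = [frozenset({"x > 1"}),
               frozenset({"x > 1", "y = 2"}),   # absorbed by {x > 1}
               frozenset({"y = 2", "z < 0"})]
print(absorb(restriction))  # [{'x > 1'}, {'y = 2', 'z < 0'}]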
We present the architecture and technologies underpinning the OBDA system Ontop, which takes full advantage of storing data in relational databases. We discuss the theoretical foundations of Ontop: tree-witness query rewriting, T-mappings, and optimisations based on database integrity constraints and SQL features. We analyse the performance of Ontop in a series of experiments and demonstrate that, for standard ontologies, queries, and data stored in relational databases, Ontop is fast, efficient, and produces SQL rewritings of high quality.
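As a hedged illustration of the T-mapping idea only (not Ontop's actual API or data structures), the Python sketch below compiles a toy subclass hierarchy into the mapping, so that a query over a superclass unfolds into a union of the SQL definitions of its subclasses; all class names and SQL are assumptions for the example.

# Direct mappings: class -> list of SQL queries producing its instances.
mapping = {
    ":Employee": ["SELECT id FROM employees"],
    ":Manager":  ["SELECT id FROM managers"],
}
# Ontology axioms: subclass -> superclass, e.g. :Manager is a :Employee.
subclass_of = {":Manager": ":Employee", ":Employee": ":Staff"}

def t_mapping(mapping, subclass_of):
    """Saturate the mapping with the ontology's subclass hierarchy."""
    saturated = {cls: list(sqls) for cls, sqls in mapping.items()}
    for cls in mapping:
        sup = subclass_of.get(cls)
        while sup is not None:  # walk up the hierarchy
            saturated.setdefault(sup, []).extend(mapping[cls])
            sup = subclass_of.get(sup)
    return saturated

saturated = t_mapping(mapping, subclass_of)
# A query for ?x a :Staff now unfolds into a UNION over both tables:
print(" UNION ".join(saturated[":Staff"]))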
In recent years, native RDF stores have made enormous progress in closing the performance gap with RDBMSs. This gap, albeit smaller, still prevents the adoption of RDF stores in scenarios with high requirements on responsiveness. We try to bridge the gap and present a native RDF store, “OntoQuad”, and its fundamental design principles. Building on previous research, we develop a vector database schema for quadruples, its realization on index data structures, and ways to efficiently join two and more data sets simultaneously. We also offer approaches to optimizing the SPARQL query execution plan based on heuristic transformations. Query performance and efficiency are checked and confirmed on the BSBM benchmark. The results of the study can be taken into consideration in the development of RDF DBMSs suitable for storing large volumes of Semantic Web data, as well as in the creation of large-scale repositories of semantic data.
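A minimal sketch of the kind of join such a store performs when two index scans return bindings ordered by the shared join variable follows; this is assumed Python for illustration, not OntoQuad's implementation.

def merge_join(left, right, key=lambda row: row[0]):
    """left, right: lists of tuples sorted by `key`. Yields joined rows."""
    i = j = 0
    while i < len(left) and j < len(right):
        kl, kr = key(left[i]), key(right[j])
        if kl < kr:
            i += 1
        elif kl > kr:
            j += 1
        else:
            # Gather the equal-key runs on both sides, emit their product.
            i2 = i
            while i2 < len(left) and key(left[i2]) == kl:
                i2 += 1
            j2 = j
            while j2 < len(right) and key(right[j2]) == kl:
                j2 += 1
            for l in left[i:i2]:
                for r in right[j:j2]:
                    yield l + r
            i, j = i2, j2

# (?y, ?x) bindings from pattern (?x :knows ?y), sorted by ?y:
knows = [("bob", "ann"), ("bob", "eve"), ("cat", "dan")]
# (?y, ?n) bindings from pattern (?y :name ?n), sorted by ?y:
names = [("bob", "Bob"), ("cat", "Cat")]
print(list(merge_join(knows, names)))
# [('bob', 'ann', 'bob', 'Bob'), ('bob', 'eve', 'bob', 'Bob'),
#  ('cat', 'dan', 'cat', 'Cat')]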
A model for organizing cargo transportation between two node stations connected by a railway line with a certain number of intermediate stations is considered. Cargo moves in one direction. Such a situation may occur, for example, if one of the node stations is located in a region that produces raw materials for a manufacturing industry located in the region of the other node station. Freight traffic is organized by means of a number of technologies. These technologies determine the rules for accepting cargo at the initial node station, the rules of interaction between neighboring stations, and the rules for distributing cargo to the final node stations. The transportation process is governed by a given control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow, which leads to the need for a “correct” extension of solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, we were able to construct these quasi-solutions numerically and determine their rate of growth. We note that the main technical difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions, and in particular of the sizes of the gaps (jumps) of the solutions, on a number of model parameters characterizing the control rule, the transportation technologies, and the intensity of cargo arrival at the node station.
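For reference, the classical fourth-order Runge–Kutta scheme used for the numerical construction can be sketched generically as follows; this is a textbook step for a system y' = f(t, y), not the paper's model-specific code.

import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step of size h for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, steps):
    """Integrate from t0 to t1 in a fixed number of RK4 steps."""
    h = (t1 - t0) / steps
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Sanity check on y' = y, y(0) = 1: the result approximates e.
print(integrate(lambda t, y: y, 0.0, [1.0], 1.0, 100))  # ~[2.71828]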
Event logs collected by modern information and technical systems usually contain enough data for automated discovery of process models. A variety of algorithms has been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad-hoc selected parts of a log still has no full-fledged implementation. This paper describes an ROLAP-based method of multidimensional event log storage for process mining. The result of analyzing the log is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad-hoc selection of criteria and an occurrence probability value.
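A minimal Python sketch of the final step, assumed for illustration rather than taken from the implementation: traces drawn from the log are folded into a directly-follows graph whose edges carry transition probabilities, which can then be ranked or thresholded.

from collections import Counter

def dfg(traces):
    """Build a directly-follows graph with transition probabilities."""
    edge_counts = Counter()
    out_counts = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            edge_counts[(a, b)] += 1
            out_counts[a] += 1
    return {edge: n / out_counts[edge[0]]
            for edge, n in edge_counts.items()}

traces = [["register", "check", "approve"],
          ["register", "check", "reject"],
          ["register", "approve"]]
for (a, b), p in sorted(dfg(traces).items(), key=lambda e: -e[1]):
    print(f"{a} -> {b}: {p:.2f}")
# register -> check: 0.67, check -> approve: 0.50, ...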
The geographic information system (GIS) is based on the first and only Russian Imperial Census of 1897 and the First All-Union Census of the Soviet Union of 1926. The GIS features vector data (shapefiles) of all provinces of the two states. For the 1897 census, there is information about linguistic, religious, and social-estate groups. The part based on the 1926 census features nationality. Both shapefiles include information on gender and on rural and urban population. The GIS makes it possible to produce any maps needed for individual studies of the period that require administrative boundaries and demographic information.
Existing approaches suggest that IT strategy should be a reflection of business strategy. In practice, however, organisations often do not follow their business strategy even if it is formally declared. Under these conditions, IT strategy can be viewed not as a plan but as an organisational shared view on the role of information systems. This approach reflects only a top-down perspective on IT strategy, so it can be supplemented by a strategic behaviour pattern (i.e., a more or less standard response to changes, formed as a result of previous experience) to implement a bottom-up approach. Two components that can help to establish an effective reaction to new IT initiatives are proposed here: a model of IT-related decision making, and an efficiency measurement metric to estimate the maturity of business processes and the appropriate IT. The use of the proposed tools is demonstrated in practical cases.