Cascade heap: Towards time-optimal extractions
Nowadays environmental science is experiencing tremendous growth of raster data: N-dimensional (N-d) arrays coming mainly from numerical simulation and Earth remote sensing. An array DBMS is a tool to streamline raster data processing. However, raster data are usually stored in files, not in databases, and numerous command-line tools exist for processing raster files. This paper describes a distributed array DBMS under development that partially delegates raster data processing to such tools. Our DBMS offers a new N-d array data model to abstract from the files and the tools, and processes data in a distributed fashion directly in their native file formats. As a case study, popular satellite altimetry data were used for experiments carried out on 8- and 16-node clusters in the Microsoft Azure Cloud. The new array DBMS is up to 70× faster than SciDB, which is the only freely available distributed array DBMS to date.
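Distributed processing of this kind hinges on partitioning an N-d array into chunks that workers (or delegated command-line tools) can handle independently. A minimal sketch of chunk enumeration in Python (the helper name and interface are hypothetical, not the DBMS's actual API):

```python
from itertools import product

def tile_slices(shape, tile):
    """Yield tuples of slice objects that partition an N-d array of the
    given shape into tiles of at most the given tile shape (edge tiles
    may be smaller).  Each tile can then be dispatched to a worker or
    handed to an external tool operating on its own chunk of the file."""
    axes = []
    for dim, t in zip(shape, tile):
        # Per-axis list of slices covering [0, dim) in steps of t.
        axes.append([slice(s, min(s + t, dim)) for s in range(0, dim, t)])
    # Cartesian product of per-axis slices enumerates all N-d tiles.
    return list(product(*axes))
```

For example, a 4×6 array split into 2×3 tiles yields four chunks, each addressable with ordinary slice indexing.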
Heaps are well-studied fundamental data structures with myriad applications, both theoretical and practical. We consider the problem of designing a heap with an “optimal” extract-min operation. Assuming an arbitrary linear ordering of keys, a heap with n elements typically takes O(log n) time to extract the minimum. Extracting all elements faster is impossible, as this would violate the Ω(n log n) lower bound for comparison-based sorting. It is known, however, that it takes only O(n + k log k) time to sort just the k smallest elements out of n given, which suggests that there might be a faster heap whose extract-min performance depends on the number of elements extracted so far. In this paper we show that this is indeed the case. We present a version of the heap that performs insert in O(1) time and takes only O(log* n + log k) time to carry out the k-th extraction (where log* denotes the iterated logarithm). All the above bounds are worst-case. © 2018, Springer Science+Business Media, LLC, part of Springer Nature.
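The partial-sorting observation that motivates the paper can be illustrated with a standard heap (a minimal Python sketch, not the paper's cascade heap; this simple variant achieves O(n + k log n), while the stronger O(n + k log k) bound cited in the abstract requires a more refined selection scheme):

```python
import heapq

def k_smallest(items, k):
    """Return the k smallest elements of items in sorted order.

    heapq.heapify builds the heap bottom-up in O(n); each of the k pops
    then costs O(log n), so the total is O(n + k log n) -- already far
    below the O(n log n) cost of fully sorting when k is small.
    """
    heap = list(items)
    heapq.heapify(heap)          # O(n) bottom-up heap construction
    return [heapq.heappop(heap) for _ in range(k)]
```

For instance, `k_smallest([5, 1, 4, 2, 3], 2)` returns `[1, 2]` without ever ordering the remaining elements.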
Relativisation involves dependencies which, although unbounded, are constrained with respect to certain island domains. The Lambek calculus L can provide a very rudimentary account of relativisation limited to unbounded peripheral extraction; the Lambek calculus with bracket modalities Lb can further condition this account according to island domains. However, in naïve parsing/theorem-proving by backward-chaining sequent proof search for Lb, the bracketed island domains, which can be indefinitely nested, have to be specified in the linguistic input. In realistic parsing, word order is given, but such hierarchical bracketing structure cannot be assumed to be given. In this paper we show how parsing can be realised so that the bracketing structure is induced during backward-chaining sequent proof search with Lb.
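The backward-chaining sequent proof search the paper builds on can be illustrated for the plain product-free Lambek calculus L (a minimal Python sketch under that restriction; Lb's bracket modalities, the paper's actual contribution, are not modeled here):

```python
# Formulas: ('at', name) | ('/', C, B) for C/B | ('\\', B, C) for B\C.
def prove(gamma, a):
    """Backward-chaining cut-free sequent proof search for Gamma |- a
    in product-free L.  Terminates because every premise has strictly
    fewer connectives than its conclusion."""
    # The right rules for / and \ are invertible, so apply them first.
    if a[0] == '/':                        # Gamma |- C/B  iff  Gamma, B |- C
        return prove(gamma + [a[2]], a[1])
    if a[0] == '\\':                       # Gamma |- B\C  iff  B, Gamma |- C
        return prove([a[1]] + gamma, a[2])
    if gamma == [a]:                       # axiom: A |- A
        return True
    # Left rules: choose a slash formula and a nonempty argument span.
    n = len(gamma)
    for i, f in enumerate(gamma):
        if f[0] == '/':                    # f = C/B consumes material to its right
            for j in range(i + 2, n + 1):
                if prove(gamma[i+1:j], f[2]) and \
                   prove(gamma[:i] + [f[1]] + gamma[j:], a):
                    return True
        elif f[0] == '\\':                 # f = B\C consumes material to its left
            for j in range(i):
                if prove(gamma[j:i], f[1]) and \
                   prove(gamma[:j] + [f[2]] + gamma[i+1:], a):
                    return True
    return False
```

With `np` and `s` atomic and a transitive verb typed `(np\s)/np`, the sequent `np, (np\s)/np, np |- s` is derivable, while `np, np |- s` is not; in Lb this same search would additionally have to decide where island brackets go, which is the problem the paper addresses.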
We assess and compare computer science skills among final-year computer science undergraduates (seniors) in four major economic and political powers that produce approximately half of the science, technology, engineering, and mathematics graduates in the world. We find that seniors in the United States substantially outperform seniors in China, India, and Russia by 0.76–0.88 SDs and score comparably with seniors in elite institutions in these countries. Seniors in elite institutions in the United States further outperform seniors in elite institutions in China, India, and Russia by ∼0.85 SDs. The skills advantage of the United States is not because it has a large proportion of high-scoring international students. Finally, males score consistently but only moderately higher (0.16–0.41 SDs) than females within all four countries.
This book constitutes the refereed proceedings of the Applied Informatics and Cybernetics in Intelligent Systems section of the 9th Computer Science On-line Conference 2020 (CSOC 2020), held on-line in April 2020; it is Vol. 3 of the CSOC 2020 proceedings. Papers in this part discuss modern cybernetics and applied informatics in technical systems. Across all sections, CSOC 2020 received more than 270 submissions from more than 35 countries: more than 65% of accepted submissions came from Europe, 21% from Asia, 8% from Africa, 4% from America, and 2% from Australia. The CSOC conference intends to provide an international forum for the discussion of the latest high-quality research results in all areas related to computer science. It is held on-line, and the modern communication technologies it broadly uses improve on the traditional concept of scientific conferences, bringing equal opportunity to participate for researchers around the world.
The article examines the problem of defining the term computer simulation in the context of scientific experiments. The first part analyzes the method for classifying variants of the term proposed by Duran, as the most successful for demonstrating the significant contradictions that exist among philosophers regarding the place and role of computer simulations in the philosophy of science. In the second part of the article, the author formulates the term by identifying the main features of computer simulations, drawing on a study of the nature of experimental data as the transfer of traces of an experiment from a graphematic space to a representational one. Following the concept of transposition, the author derives a relevant definition from this account of the essence of computer simulations, claiming a new epistemological significance for such scientific experiments in the philosophy of science.
Data management and analysis is one of the fastest-growing and most challenging areas of research and development in both academia and industry. Numerous types of applications and services have been studied and re-examined in this field, resulting in this edited volume, which includes chapters on effective approaches for dealing with the inherent complexity of data management and analysis. The volume contains practical case studies and will appeal to students, researchers, and professionals working in data management and analysis in the business, education, healthcare, and bioinformatics areas.