A fast scaling algorithm for the weighted triangle-free 2-matching problem
A perfect 2-matching in an undirected graph G = (V, E) is a function x: E → {0, 1, 2} such that for each node v ∈ V the sum of the values x(e) over all edges e incident to v equals 2. If supp(x) = {e ∈ E ∣ x(e) ≠ 0} contains no triangles, then x is called triangle-free. Polyhedrally speaking, triangle-free 2-matchings are harder than 2-matchings but easier than usual 1-matchings. Given edge costs c: E → ℝ₊, a natural combinatorial problem consists in finding a perfect triangle-free 2-matching of minimum total cost. For this problem, Cornuéjols and Pulleyblank devised a combinatorial strongly polynomial algorithm, which can be implemented to run in O(VE log V) time. (Here we write V, E to denote the cardinalities |V|, |E|.) If edge costs are integers in the range [0, C], then for both 1- and 2-matchings faster scaling algorithms are known that find optimal solutions within O(√(V α(E,V) log V) · E log(VC)) and O(√V · E log(VC)) time, respectively, where α denotes the inverse Ackermann function. So far, no efficient cost-scaling algorithm has been known for finding a minimum-cost perfect triangle-free 2-matching. The present paper fills this gap by presenting such an algorithm with time complexity O(√V · E log V · log(VC)).
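To make the definition concrete, here is a small Python sketch (illustrative only, not part of the paper and unrelated to the scaling algorithm itself) that checks whether a given assignment x is a perfect triangle-free 2-matching; the graph representation and the function name are assumptions made for this example.

```python
from itertools import combinations

def is_perfect_triangle_free_2_matching(nodes, edges, x):
    """Check the definition from the abstract.

    nodes -- iterable of vertices
    edges -- list of frozensets {u, v} representing simple undirected edges
    x     -- dict mapping each edge to a value in {0, 1, 2}
    """
    # Every edge value must lie in {0, 1, 2}.
    if any(x[e] not in (0, 1, 2) for e in edges):
        return False

    # Perfectness: the x-values on edges incident to each node sum to 2.
    for v in nodes:
        if sum(x[e] for e in edges if v in e) != 2:
            return False

    # Triangle-freeness: the support supp(x) = {e : x(e) != 0}
    # must not contain all three edges of any triangle.
    support = {e for e in edges if x[e] != 0}
    for a, b, c in combinations(nodes, 3):
        triangle = {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))}
        if triangle <= support:
            return False
    return True

# Usage: a 4-cycle with every edge set to 1 is a perfect triangle-free 2-matching.
cycle = [frozenset(p) for p in [(0, 1), (1, 2), (2, 3), (3, 0)]]
print(is_perfect_triangle_free_2_matching(range(4), cycle, {e: 1 for e in cycle}))  # True
```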
The author of this article compares the rules for accounting for inventories and for their disclosure in financial (accounting) statements under international and local standards, and assesses the impact of the differences between these rules on the reported financial condition of an organization.
This volume contains the papers presented at the 6th International Conference on Similarity Search and Applications (SISAP 2013), held in A Coruña, Spain, during October 2–4, 2013. The International Conference on Similarity Search and Applications (SISAP) is an annual forum for researchers and application developers in the area of similarity data management. It focuses on technological problems shared by many application domains, such as data mining, information retrieval, computer vision, pattern recognition, computational biology, geography, biometrics, machine learning, and many others that need similarity searching as a necessary supporting service. Traditionally, SISAP conferences have emphasized distance-based searching, but in general the conference is concerned with both the effectiveness and the efficiency aspects of any similarity search approach.
In this paper, we present a modification of dynamic programming algorithms (DPA), which we call graphical algorithms (GrA). For some single-machine scheduling problems, it is shown that the time complexity of the GrA is lower than that of the standard DPA. Moreover, the average running time of the GrA is often substantially smaller. A GrA can also handle large-scale instances as well as instances whose parameters are not integer. For some problems, the GrA has polynomial time complexity, in contrast to the pseudo-polynomial complexity of the DPA.
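To illustrate the kind of standard DPA being improved upon, here is a minimal Python sketch (not taken from the paper; the problem choice and all names are assumptions for this example) of a classical pseudo-polynomial dynamic program for the single-machine problem of minimizing the weighted number of tardy jobs, 1 || Σ w_j U_j. Its state space is indexed by the integer total processing time of the on-time jobs, which is exactly the parameter-sized dimension that makes the running time pseudo-polynomial.

```python
def min_weighted_tardy(jobs):
    """Pseudo-polynomial DP for 1 || sum w_j U_j.

    jobs -- list of (p, d, w): integer processing time, due date, weight.
    Returns the minimum total weight of tardy jobs.
    """
    # Consider jobs in EDD (earliest due date) order.
    jobs = sorted(jobs, key=lambda job: job[1])
    horizon = sum(p for p, _, _ in jobs)

    INF = float("inf")
    # dp[t] = minimum weight of tardy jobs so far, given that the
    # on-time jobs processed so far finish exactly at time t.
    dp = [0] + [INF] * horizon

    for p, d, w in jobs:
        new = [INF] * (horizon + 1)
        for t in range(horizon + 1):
            if dp[t] == INF:
                continue
            # Option 1: the job is tardy; pay its weight, times unchanged.
            new[t] = min(new[t], dp[t] + w)
            # Option 2: the job is on time; it must finish by its due date.
            if t + p <= d:
                new[t + p] = min(new[t + p], dp[t])
        dp = new

    return min(dp)

# Usage: three jobs (p, d, w). The DP table has O(sum p) columns,
# which is why the running time is pseudo-polynomial.
print(min_weighted_tardy([(2, 3, 10), (3, 5, 5), (2, 4, 7)]))  # -> 5
```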
Information systems have developed in parallel with computer science, although they have roots in different disciplines, including mathematics, engineering, and cybernetics. Research in information systems is by nature highly interdisciplinary. As evidenced by the chapters in this book, the dynamics of information systems has many diverse applications. The book presents state-of-the-art work on theory and practice relevant to the dynamics of information systems. First, the book covers algorithmic approaches to numerical computations with infinite and infinitesimal numbers. The book also presents important problems arising in service-oriented systems, such as dynamic composition, analysis of modern service-oriented information systems, and estimation of customer service times on a rail network from GPS data. After that, the book addresses the complexity of problems arising in stochastic and distributed systems. In addition, the book discusses modulating communication to improve multi-agent learning convergence. Network issues, in particular minimum-risk maximum clique problems, vulnerability of sensor networks, influence diffusion, community detection, and link prediction in social network analysis, as well as a comparative analysis of algorithms for transmission network expansion planning, are described in subsequent chapters. We thank all the authors and anonymous referees for their advice and expertise in providing valuable contributions, which improved the quality of this book. Furthermore, we want to thank Springer for helping us to produce this book.
We revisit the problems of computing the maximal and the minimal non-empty suffixes of a substring of a longer text of length n, introduced by Babenko, Kolesnichenko and Starikovskaya [CPM’13]. For the minimal suffix problem we show that for any 1 ≤ τ ≤ log n there exists a linear-space data structure with O(τ) query time and O(n log n / τ) preprocessing time. As a sample application, we show that this data structure can be used to compute the Lyndon decomposition of any substring of the text in O(kτ) time, where k is the number of distinct factors in the decomposition. For the maximal suffix problem we give a linear-space structure with O(1) query time and O(n) preprocessing time, i.e., we manage to achieve both the optimal query and the optimal construction time simultaneously.
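As a self-contained illustration of the Lyndon decomposition mentioned above (a whole-string computation, separate from the substring data structures of the paper), the following Python sketch implements Duval's classical linear-time factorization of a string into lexicographically non-increasing Lyndon words.

```python
def lyndon_decomposition(s):
    """Duval's algorithm: factor s into Lyndon words w1 >= w2 >= ... >= wk
    (lexicographically non-increasing) with s = w1 w2 ... wk, in O(len(s)) time."""
    n = len(s)
    i = 0
    factors = []
    while i < n:
        j, k = i + 1, i
        # Extend the current candidate while it remains a prefix of a power
        # of a Lyndon word.
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        # Output the completed Lyndon factors of length j - k.
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

# Example: "banana" factors as "b" · "an" · "an" · "a".
print(lyndon_decomposition("banana"))  # ['b', 'an', 'an', 'a']
```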