Procedia Computer Science. 2nd International Conference on Information Technology and Quantitative Management, ITQM 2014. National Research University Higher School of Economics (HSE) in Moscow (Russia) on June 3-5, 2014
In this paper, the characteristics of lexicological synthesis of slightly formalized text documents are presented. This technology significantly reduces the labour cost of creating text documents. It also improves text quality by reducing the probability of errors during document formation and by enforcing the requirements on document design. Additional advantages of this way of creating documents are a smaller volume of stored information and improved security of documents transmitted over communication channels.
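The storage-reduction claim can be made concrete with a small sketch. The abstract does not describe the mechanism, so the following is a hypothetical illustration, assuming documents are represented as references into a shared library of standard text fragments plus fill-in values; only the compact specification needs to be stored or transmitted.

```python
# Hypothetical fragment library (names and texts are invented for
# illustration; the paper's actual representation is not specified).
FRAGMENTS = {
    "greeting": "Dear {name},",
    "notice": "Your contract {contract_id} expires on {date}.",
    "closing": "Sincerely, {sender}",
}

def synthesize(spec):
    """Assemble a document from a list of (fragment_id, values) pairs.

    The spec is far smaller than the resulting text, which is the
    source of the storage and transmission savings the abstract mentions.
    """
    return "\n".join(FRAGMENTS[fid].format(**values) for fid, values in spec)
```

For example, `synthesize([("greeting", {"name": "Ivan"}), ("closing", {"sender": "HR"})])` expands two short references into full document text.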
In this paper we consider the problem of finding a spanning k-tree of minimum weight in a complete weighted graph, which has a number of applications in the design of reliable telecommunication networks. This problem is known to be NP-hard. We propose four effective heuristics: the first is based on the idea of the well-known Prim's algorithm, the second on a dynamic programming approach, and the other two use iterative improvement from a starting solution. A preliminary numerical experiment was performed to compare the effectiveness of the proposed algorithms with known heuristics and exact algorithms.
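The abstract does not spell out the Prim-style heuristic, but its general shape can be sketched: a k-tree is built from a (k+1)-clique by repeatedly attaching a new vertex to an existing k-clique, so a greedy variant starts from a cheap (k+1)-clique and always attaches the cheapest remaining vertex. The function below is an assumed sketch of this idea, not the paper's exact algorithm; `weights` is a symmetric cost matrix.

```python
import itertools

def greedy_spanning_k_tree(weights, n, k):
    """Greedy Prim-like heuristic (a sketch, not the paper's algorithm):
    grow a spanning k-tree by repeatedly attaching the cheapest
    unattached vertex to some existing k-clique."""
    # Start from the minimum-weight (k+1)-clique.
    start = min(itertools.combinations(range(n), k + 1),
                key=lambda c: sum(weights[u][v]
                                  for u, v in itertools.combinations(c, 2)))
    in_tree = set(start)
    edges = {frozenset(e) for e in itertools.combinations(start, 2)}
    total = sum(weights[u][v] for u, v in itertools.combinations(start, 2))
    # k-cliques currently available as attachment points.
    cliques = [set(c) for c in itertools.combinations(start, k)]
    while len(in_tree) < n:
        # Cheapest (vertex, k-clique) attachment over all candidates.
        cost, v, c = min((sum(weights[v][u] for u in c), v, c)
                         for v in range(n) if v not in in_tree
                         for c in cliques)
        total += cost
        in_tree.add(v)
        edges.update(frozenset((u, v)) for u in c)
        # Attaching v to c creates new k-cliques: v plus each
        # (k-1)-subset of c.
        cliques.extend(set(sub) | {v}
                       for sub in itertools.combinations(c, k - 1))
    return edges, total
```

Note the exhaustive minimum over all vertex/clique pairs keeps the sketch simple at the cost of speed; a practical implementation would maintain a priority queue of attachment costs, as Prim's algorithm does for ordinary spanning trees.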
In this paper we propose several possible modifications of the OAC-triclustering algorithms based on the prime operators. This method, based on the framework of Formal Concept Analysis, showed rather promising results in previous research. However, while it is fast and efficient with respect to such measures as average density of the output, diversity, coverage, and noise tolerance, it produces a rather large number of triclusters, which makes it almost impossible for an expert to check the results manually. We show that the proposed post-processing techniques not only reduce the size of the output while keeping good values of these measures, but also preserve the time complexity of the original algorithm.
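One common way to shrink a tricluster collection is to remove duplicates and triclusters nested inside larger ones; the abstract does not name its specific post-processing techniques, so the sketch below is only an assumed example of this kind of filter. A tricluster is taken as a triple of sets (objects, attributes, conditions).

```python
def remove_nested_triclusters(triclusters):
    """Illustrative post-processing step (an assumed technique, not
    necessarily the paper's): deduplicate the collection and drop any
    tricluster all three components of which are contained in another
    tricluster's components."""
    # Deduplicate; frozensets make the triples hashable.
    unique = list({(frozenset(x), frozenset(y), frozenset(z))
                   for x, y, z in triclusters})
    kept = []
    for x, y, z in unique:
        nested = any((x, y, z) != (x2, y2, z2)
                     and x <= x2 and y <= y2 and z <= z2
                     for x2, y2, z2 in unique)
        if not nested:
            kept.append((set(x), set(y), set(z)))
    return kept
```

This quadratic scan is enough for an illustration; note that a filter like this only discards redundant triclusters, so output measures such as coverage computed over the kept maximal triclusters are unaffected.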