Lower bounds on almost-separating binary codes
Separating codes have been used in areas as diverse as automata synthesis, technical diagnosis, and traitor-tracing schemes. In this paper, we study a weak version of separating codes called almost separating codes. More precisely, we derive lower bounds on the rate of almost separating codes. Our main result shows that these lower bounds are greater than the best currently known lower bounds for ordinary separating codes. Moreover, we also show how almost separating codes can be used to construct a family of fingerprinting codes.
This article uses case studies of visual art installations to elaborate an alternative view of the way art is experienced by museum and gallery visitors. In particular, it is argued that the orthodox and influential decoding perspective in the sociology of art overlooks the situated and experiential nature of art, especially when art takes the form of installations. In order to study experiences of art installations, this article draws on recent developments in cultural sociology and the sociology of music to reintroduce the idea of mediation into thinking about and with art. A focus on processes of mediation allows me to address the communications and interactions which emerged at the particular art installation under consideration here, a piece called PharmaConcert by Evgeniy Chertoplyasov that was displayed at the Winzavod Art Centre in Moscow in 2011. Detailed analysis of the forms of interactions at this exhibition shows that as audience members perceive artworks, they transform abstract expectations of artworks into a series of specific and situated actions. Simultaneously, other mediation processes reassemble the audiences through shared experience of contested meanings of an artwork. The paper challenges the orthodox sociological notion of what an ‘audience’ is and instead sees audiences as an emerging form of communication and interaction specific to a particular artwork / installation.
A novel method for computing the discrete Fourier transform (DFT) over a finite field, based on the Goertzel-Blahut algorithm, is described. In terms of multiplicative complexity, the novel method is currently the best one for computing the DFT over even extensions of the characteristic-two finite field.
We introduce a new compression scheme for high-dimensional vectors that approximates each vector by a sum of M codewords coming from M different codebooks. We show that the proposed scheme permits efficient distance and scalar product computations between compressed and uncompressed vectors. We further suggest vector encoding and codebook learning algorithms that minimize the coding error within the proposed scheme. In the experiments, we demonstrate that the proposed compression can be used instead of or together with product quantization. Compared to product quantization and its optimized versions, the proposed approach leads to lower coding approximation errors, higher accuracy of approximate nearest neighbor search on datasets of visual descriptors, and lower image classification error whenever the classifiers are learned on or applied to compressed vectors.
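The "sum of M codewords from M codebooks" idea can be sketched as follows. The greedy residual encoder below is an assumption for illustration (the paper's actual encoding and codebook learning algorithms may differ); what the sketch does show exactly is why scalar products against compressed vectors are cheap: by linearity, a query's dot product with the approximation is a sum of M precomputed table entries.

```python
import numpy as np

def encode(x, codebooks):
    """Greedily pick one codeword per codebook to shrink the residual."""
    residual = x.copy()
    indices = []
    for cb in codebooks:                         # cb has shape (K, D)
        j = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        indices.append(j)
        residual = residual - cb[j]
    return indices

def decode(indices, codebooks):
    """Reconstruct the approximation as a sum of the selected codewords."""
    return sum(cb[j] for cb, j in zip(codebooks, indices))

rng = np.random.default_rng(0)
D, K, M = 8, 16, 4                               # toy sizes for illustration
codebooks = [rng.normal(size=(K, D)) for _ in range(M)]
x = rng.normal(size=D)

idx = encode(x, codebooks)                       # compressed form: M small indices
approx = decode(idx, codebooks)

# Efficient scalar products with a query q: precompute lookup tables
# <q, c> for every codeword once; each compressed vector then costs
# only M table lookups and additions.
q = rng.normal(size=D)
tables = [cb @ q for cb in codebooks]            # M tables of K entries each
fast_dot = sum(t[j] for t, j in zip(tables, idx))
exact_dot = float(approx @ q)                    # equal by linearity
```

The same table trick gives squared distances to compressed vectors, up to cross-terms between codebooks that can likewise be precomputed.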
This paper studies the distribution of k-bit CRC values computed over data blocks of length n. We show that the CRC value can be represented as a sum of specially formed random vectors in the k-dimensional vector space GF(2)^k over the two-element field GF(2). If the message or the errors are modeled as sequences of independent random binary values, the CRC can be regarded as a sum of independent random vector components.
The paper examines the behavior of the CRC value distribution as n grows large with k fixed. Using character theory, we find conditions for the asymptotic uniformity of the CRC distribution.
These asymptotic results can be applied to error estimation for various telecommunication protocols (e.g., USB, X.25, HDLC, Bluetooth, Ethernet, and others).
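The sum-of-random-vectors representation can be illustrated with a toy CRC (the generator polynomial and message length below are hypothetical choices, not taken from the paper). Because polynomial division is linear over GF(2), the CRC of a message is the XOR of fixed k-bit contribution vectors, one per set message bit; with independent random message bits, the CRC is thus a sum of independent random vectors in GF(2)^k.

```python
import random

k, poly = 4, 0b10011                 # toy generator x^4 + x + 1
n = 16                               # toy message length

def crc(bits):
    """Bitwise remainder of the message polynomial times x^k, mod poly."""
    reg = 0
    for b in bits + [0] * k:         # append k zeros as a standard CRC does
        reg = (reg << 1) | b
        if reg >> k:                 # degree reached k: reduce by the generator
            reg ^= poly
    return reg                       # k-bit CRC value

# Contribution vector of bit position i: the CRC of the unit-impulse message.
contrib = [crc([int(j == i) for j in range(n)]) for i in range(n)]

random.seed(1)
msg = [random.randint(0, 1) for _ in range(n)]
xor_of_contribs = 0
for i, b in enumerate(msg):
    if b:
        xor_of_contribs ^= contrib[i]
# By linearity over GF(2), xor_of_contribs equals crc(msg).
```

Each set bit thus contributes an independent random summand, which is the structure the character-theoretic analysis of the limiting distribution builds on.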
An arithmetic theory of oppositions is devised by comparing expressions, Boolean bitstrings, and integers. This leads to a set of correspondences between three domains of investigation, namely: logic, geometry, and arithmetic. The structural properties of each area are investigated in turn, before justifying the procedure as a whole. To finish, I show how this helps to improve the logical calculus of oppositions, through the consideration of corresponding operations between integers.
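The logic/arithmetic correspondence can be given a toy illustration (the encoding below is a hypothetical choice made for this sketch, not the paper's): statements become 4-bit strings, i.e. integers 0..15, and the classical oppositions become simple bitwise relations between those integers.

```python
WIDTH = 4
FULL = (1 << WIDTH) - 1              # the bitstring 1111

def contradictory(a, b):             # exact complements
    return a ^ b == FULL

def contrary(a, b):                  # never both true, can both be false
    return a & b == 0 and a | b != FULL

def subcontrary(a, b):               # never both false, can both be true
    return a | b == FULL and a & b != 0

def subaltern(a, b):                 # a strictly implies b
    return a & b == a and a != b

# One encoding of the A, E, I, O corners that realizes the classical square:
A, E, I, O = 0b1000, 0b0001, 0b1110, 0b0111
square_holds = (contradictory(A, O) and contradictory(E, I)
                and contrary(A, E) and subcontrary(I, O)
                and subaltern(A, I) and subaltern(E, O))
```

Reading the bitstrings as integers turns each opposition test into ordinary arithmetic on those integers, which is the kind of correspondence the abstract describes.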
This book constitutes the refereed proceedings of the 44th International Conference on Current Trends in Theory and Practice of Computer Science, SOFSEM 2018, held in Krems, Austria, in January/February 2018. The 48 papers presented in this volume were carefully reviewed and selected from 97 submissions. They were organized in topical sections named: foundations of computer science; software engineering: advanced methods, applications, and tools; data, information and knowledge engineering; network science and parameterized complexity; model-based software engineering; computational models and complexity; software quality assurance and transformation; graph structure and computation; business processes, protocols, and mobile networks; mobile robots and server systems; automata, complexity, completeness; recognition and generation; optimization, probabilistic analysis, and sorting; filters, configurations, and picture encoding; machine learning; text searching algorithms; and data model engineering.
This book constitutes the refereed post-conference proceedings of the 29th International Workshop on Combinatorial Algorithms, IWOCA 2018, held in Singapore in July 2018. The 31 regular papers presented in this volume were carefully reviewed and selected from 69 submissions. They cover diverse areas of combinatorial algorithms: complexity theory, graph theory and combinatorics, combinatorial optimization, cryptography and information security, algorithms on strings and graphs, graph drawing and labelling, computational algebra and geometry, computational biology, probabilistic and randomised algorithms, algorithms for big data analytics, and new paradigms of computation.
This article proposes a mutual adaptation of the procedure for extracting a digital watermark from video sequences and the decoding of fingerprinting codes. The proposed solution comprises two interrelated techniques. The first is the use of "soft" instead of "hard" decision making during watermark extraction, so that additional information on the reliability of each extracted bit is obtained from the hidden channel. The second is the inclusion of this reliability information in the decoding procedure. Together, these techniques allow fingerprinting codes to work more effectively and to achieve zero accusation error rates.
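The soft-decision idea can be sketched as follows. The extractor is assumed to output, for every extracted bit, a reliability in [0, 1] taken from the hidden channel (this interface, the tiny code, and the weighting rule below are illustrative assumptions, not the article's actual scheme). The decoder then scores each user's codeword by reliability-weighted agreement instead of a hard Hamming distance, so unreliable positions contribute little to the decision.

```python
import numpy as np

def soft_score(extracted, reliability, codeword):
    """Reliability-weighted correlation between extracted bits and a codeword."""
    e = 2 * np.asarray(extracted) - 1        # map bits {0,1} -> {-1,+1}
    c = 2 * np.asarray(codeword) - 1
    return float(np.sum(np.asarray(reliability) * e * c))

# Tiny illustrative fingerprinting code: three users, five watermark positions.
codebook = np.array([[0, 1, 1, 0, 1],
                     [1, 1, 0, 0, 0],
                     [0, 0, 1, 1, 1]])
extracted = [0, 1, 1, 0, 0]                  # last two extracted bits damaged...
reliability = [0.9, 0.8, 0.95, 0.2, 0.1]     # ...and flagged as unreliable

scores = [soft_score(extracted, reliability, cw) for cw in codebook]
accused = int(np.argmax(scores))             # highest-scoring user
```

Here user 0's codeword disagrees with the extracted sequence only at a low-reliability position, so the weighting leaves its score clearly ahead of the others.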