Formal language theory has deep connections with areas such as static code analysis, graph database querying, formal verification, and compressed data processing. Many application problems can be formulated in terms of language intersection. The Bar-Hillel theorem states that context-free languages are closed under intersection with regular languages. The theorem has a constructive proof and thus provides a formal justification of the correctness of the algorithms behind the applications mentioned above. A mechanization of the Bar-Hillel theorem is therefore both a fundamental result of formal language theory and a basis for certified implementations of these algorithms. In this work, we present a mechanized proof of the Bar-Hillel theorem in Coq.
We present a new concept of biclique as a tool for preimage attacks, which employs many powerful techniques from the differential cryptanalysis of block ciphers and hash functions. The new tool has proved widely applicable, inspiring many authors to publish new results on the full versions of AES, KASUMI, IDEA, and Square. In this paper, we show how our concept leads to the first cryptanalysis of the round-reduced Skein hash function and describe an attack on the SHA-2 hash function that covers more rounds than before.
Human anatomical brain networks derived from the analysis of neuroimaging data are known to demonstrate modular organization. Modules, or communities, of cortical brain regions capture information about the structure of connections in the entire network. Hence, anatomical changes in network connectivity (e.g., caused by a certain disease) should translate into changes in the community structure of brain regions. This means that essential structural differences between phenotypes (e.g., healthy and diseased) should be reflected in how brain networks cluster into communities. To test this hypothesis, we propose a pipeline to classify brain networks based on their underlying community structure. We consider network partitions into both non-overlapping and overlapping communities and introduce a distance between connectomes based on whether or not they cluster into modules similarly. We then construct a classifier that uses partition-based kernels to predict a phenotype from brain networks. We demonstrate the performance of the proposed approach on the task of classifying structural connectomes of healthy subjects against those with mild cognitive impairment and Alzheimer’s disease.
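As a minimal illustration of a partition-based distance, the agreement of two community partitions over the same set of brain regions can be scored pairwise, for instance with the Rand index; this is a simple stand-in for exposition, not necessarily the exact distance or kernels used in the paper. The sketch assumes partitions are given as dictionaries mapping region labels to community labels:

```python
from itertools import combinations

def rand_similarity(part_a, part_b):
    """Rand-index similarity of two partitions of the same node set.

    Counts node pairs on which the partitions agree: both place the
    pair in one community, or both place it in different communities.
    Returns a value in [0, 1]; identical partitions score 1.0.
    """
    nodes = sorted(part_a)
    agree = total = 0
    for u, v in combinations(nodes, 2):
        total += 1
        same_a = part_a[u] == part_a[v]
        same_b = part_b[u] == part_b[v]
        agree += (same_a == same_b)
    return agree / total
```

A kernel matrix K[i][j] = rand_similarity(parts[i], parts[j]) over all subjects can then be fed to any kernel classifier (e.g., an SVM with a precomputed kernel).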
Our submission to SV-COMP’21 is based on the software verification framework CPAchecker and implements an extension to the thread-modular approach. It considers every thread separately, but in a special environment which models thread interactions. The environment is expressed by projections of normal transitions in each thread. A projection contains a description of possible effects over shared data and synchronization primitives, as well as the conditions of its application. By adjusting the precision of the projections, one can find a balance between the speed and the precision of the overall analysis.
Implementation on top of the CPAchecker framework allows combining our approach with existing algorithms and analyses. Evaluation on the sv-benchmarks suite confirms the scalability and soundness of the approach.
There is a high demand for practical methods and tools that ensure the total correctness of critical software components. A usual assumption is that the machine code (or binary code) generated by a compiler follows the semantics of the programming language. Unfortunately, modern compilers such as GCC and LLVM are too complex to be thoroughly verified, and bugs in the generated code are not uncommon. As an alternative, we suggest proving that the generated machine code still satisfies the functional properties expressed at the source code level. The suggested approach takes an ACSL-annotated C function along with its binary code and checks that the binary code meets the ACSL annotations. The main steps are as follows: (1) disassembling the machine code and extracting its semantics; (2) adapting the annotations to the machine level and generating the verification conditions. The implementation utilizes MicroTESK, Frama-C, Why3, and other tools. To illustrate the solution, we use the RISC-V microprocessor architecture; however, the approach is largely independent of the target platform, as it is based on formal specifications of the instruction set. It is worth noting that the presented method can also serve as a test oracle for compiler testing.
We developed a project risk rating (PRR) for telecommunication companies. It provides qualitative risk score assessments of capital expenditure (capex) projects in order to rank them by severity of exposure, to check their fit with the company’s risk profile and, ultimately, to combine projects into an efficient project portfolio with the lowest risk for a given return. We discussed the definition, functions, and advantages of investment controlling and presented a reference model of its main subsystem, project portfolio controlling, which is responsible for building the efficient capex project portfolio. We then developed the PRR model, worked out an example of the PRR scorecard, and discussed the advantages of the PRR over existing risk assessment tools in project portfolio management.
Finite state transducers over semigroups can be regarded as a formal model of sequential reactive programs. In this paper, we introduce a uniform technique for effectively checking functionality, k-valuedness, equivalence, and inclusion for this model of computation in the case when the semigroup these transducers operate over is embeddable in a decidable group.
Let G = (V, E) be a digraph with disjoint sets of sources S ⊂ V and sinks T ⊂ V endowed with an S–T flow f : E → Z+. It is a well-known fact that f decomposes into a sum Σ_{s,t} f_{st} of s–t flows f_{st} between all pairs of sources s ∈ S and sinks t ∈ T. In the usual RAM model, such a decomposition can be found in O(E log(V²/E)) time. The present paper concerns the complexity of this problem in the external memory model (introduced by Aggarwal and Vitter). The internal memory algorithm involves random memory access and thus becomes inefficient. We propose two novel methods. The first one requires O(Sort(E) log(V²/E)) I/Os and the second one takes O(Sort(E) log U) expected I/Os (where U denotes the maximum value of f).
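The classical in-memory decomposition can be sketched as repeated path stripping: trace a path of positive flow from a source to a sink and subtract its bottleneck value. The sketch below is an illustrative RAM-model version (assuming an integral, acyclic flow that satisfies conservation at every non-terminal vertex, given as an edge-to-value dictionary); it is not one of the external-memory algorithms proposed in the paper:

```python
from collections import defaultdict

def decompose_flow(flow, sources, sinks):
    """Decompose an S-T flow {(u, v): value} into s-t path flows.

    Returns a list of (path, value) pairs whose edge-wise sum equals
    the input flow. Assumes the flow is integral, acyclic, and
    conserved at every vertex outside sources and sinks.
    """
    flow = dict(flow)                      # remaining flow on each edge
    out = defaultdict(list)                # adjacency over flow-carrying edges
    for (u, v), val in flow.items():
        if val > 0:
            out[u].append(v)
    paths = []
    for s in sources:
        # Repeatedly trace a path of positive flow from s to some sink.
        while any(flow.get((s, v), 0) > 0 for v in out[s]):
            path, u = [s], s
            while u not in sinks:
                # Conservation guarantees a positive outgoing edge exists.
                u = next(v for v in out[u] if flow.get((u, v), 0) > 0)
                path.append(u)
            # Subtract the bottleneck value along the traced path.
            bottleneck = min(flow[(path[i], path[i + 1])]
                             for i in range(len(path) - 1))
            for i in range(len(path) - 1):
                flow[(path[i], path[i + 1])] -= bottleneck
            paths.append((path, bottleneck))
    return paths
```

Each pass saturates at least one edge, so at most E paths are produced; the external-memory versions in the paper avoid the random memory accesses this loop performs.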
Although working with imaginaries is already widely acknowledged in Internet studies, their use in Internet policy studies remains limited. In this paper, I approach the study of Internet governance from the perspective of “sociotechnical imaginaries”. I argue that this concept can be especially useful in studying authoritarian policy settings and in elucidating the relationship between stakeholders in the field of Internet governance.
The development of new HPC architectures proceeds faster than the corresponding adjustment of the algorithms for such fundamental mathematical models as quantum and classical molecular dynamics. There is a need for clear guiding criteria for the computational efficiency of a particular model on particular hardware. The LINPACK benchmark alone can no longer serve this role. In this work, we consider a practical metric: the time to solution versus the peak computational performance of a given hardware system. Using this metric, we compare different hardware for the CP2K and LAMMPS software packages widely used for atomistic modeling. The metric can serve as a universal, unambiguous scale that ranks different types of supercomputers.
The paper addresses the problem of hair removal in dermatology applications. The proposed hair removal algorithm is based on Gabor filtering and PDE-based image reconstruction. It also includes an edge-sharpening stage that uses a new warping algorithm. The idea of warping is to move pixels from the neighborhood of a blurred edge closer to the edge. The proposed technique preserves the overall luminosity and textures of the image while making the edges sharper and less noisy.
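To illustrate the Gabor-filtering stage: hair strands are thin, oriented structures, so convolving the image with a bank of Gabor kernels at several orientations gives strong responses along hairs, which can be thresholded into a hair mask before PDE-based inpainting. The following is a minimal sketch of real-valued Gabor kernel generation; the kernel parameters and the exact filter bank used in the paper are assumptions here:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Build a real-valued 2-D Gabor kernel as a list of rows.

    A Gaussian envelope of scale sigma modulates a cosine carrier of
    the given wavelength, rotated to orientation theta (radians).
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates to the filter orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel
```

Convolving with kernels for theta in, say, 8 evenly spaced orientations and taking the per-pixel maximum response is one common way to build the orientation-independent hair map.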