In this chapter we examine the logical connections between the various descriptions of the Scientific Revolution proposed by Alexandre Koyré. We propose an attentive and detailed reading of texts written by Koyré in different periods of his life in order to identify the various aspects of his interpretation of the revolution in thought that occurred in early modern Europe. His most famous description of the Scientific Revolution (the dual characterization) indicates two aspects of the process that led to the emergence of classical physics: the "destruction of the Cosmos" and the "geometrization of space". However, Koyré frequently used other expressions to characterize the period, such as the "mathematization of Nature", or the transition "from the world of more-or-less to the universe of precision" and "from the closed world to the open universe". One might expect that Koyré would try to reduce his initial dual characterization to a single formula. I argue here that, on the contrary, the duality of the description had a special meaning which permits us to keep in focus the complexity of the intellectual change that occurred during the 17th century, when a new science was rising from a new conception of reality, and a new world-view was emerging from the new science.

The following topics about subgroups of the Cremona groups are discussed: (1) maximal tori; (2) conjugacy and classification of diagonalizable subgroups of codimensions 0 and 1; (3) conjugacy of finite abelian subgroups; (4) algebraicity of normalizers of diagonalizable subgroups; (5) torsion primes.

Because modeling a digital system design as a whole during debugging is a complex and high-dimensional task, decomposition is applied: algebraic models of decomposition methods are proposed, namely vertical and horizontal structural decomposition, functional decomposition, and decomposition based on error types. An algebraic model of digital system software is presented, in which the software is treated as a semigroup of operators.

In big data problems the data are usually collected at many sites, have a huge volume, and new pieces of data are constantly generated. It is often impractical, and sometimes impossible, to collect all the data needed for a research project on one computer, since one computer would not be able to process them in a reasonable time. An appropriate data analysis algorithm should, working in parallel on many computers, extract from each set of raw data some intermediate compact "information", gradually combine and update it, and finally use the accumulated information to produce the result. When new data appear, it must extract information from them, add it to the accumulated information, and eventually update the result. We consider several examples of a suitable transformation of processing algorithms, and discuss specific features of the emerging information spaces and, in particular, their algebraic properties. We also show that the information space can often be equipped with an order relation that reflects the "quality" of the information.
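
A minimal sketch of this scheme, with illustrative names not taken from the paper: the "information" extracted from each raw-data chunk is a compact summary that merges associatively and commutatively (so chunks may be processed on different machines in any order), and a simple order relation compares summaries by how much data they absorbed.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Summary:
    """Compact 'information' extracted from a chunk of raw data."""
    count: int = 0
    total: float = 0.0

    def merge(self, other: "Summary") -> "Summary":
        # Combining two summaries loses nothing needed for the final result;
        # the operation is associative and commutative.
        return Summary(self.count + other.count, self.total + other.total)

    def __le__(self, other: "Summary") -> bool:
        # One possible order relation: a summary built from more data
        # counts as "at least as informative".
        return self.count <= other.count

    def mean(self) -> float:
        # Final result produced from the accumulated information.
        return self.total / self.count

def extract(chunk: list[float]) -> Summary:
    return Summary(len(chunk), sum(chunk))

# Chunks processed independently (possibly on different machines), then combined:
parts = [extract(c) for c in ([1.0, 2.0], [3.0], [4.0, 5.0, 6.0])]
combined = Summary()
for p in parts:
    combined = combined.merge(p)
print(combined.mean())  # → 3.5
```

When a new chunk arrives, `extract` produces its summary and one more `merge` updates the accumulated information, matching the incremental regime described above.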

Algorithmic statistics studies explanations of observed data that are good in the algorithmic sense: an explanation should be simple, i.e., should have small Kolmogorov complexity, and should capture all the algorithmically discoverable regularities in the data. However, this idea cannot be used in practice as-is, because Kolmogorov complexity is not computable.

In recent years a resource-bounded algorithmic statistics has been developed [7, 8]. In this paper we prove a polynomial-time version of the following result of 'classic' algorithmic statistics.

Assume that some data were obtained as the result of some unknown experiment. What kind of data should we expect in a similar situation (repeating the same experiment)? It turns out that the answer to this question can be formulated in terms of algorithmic statistics [6]. We prove a polynomial-time version of this result under a reasonable complexity-theoretic assumption. The same assumption was used by Antunes and Fortnow [1].

Algorithmic statistics has two different (and almost orthogonal) motivations. From the philosophical point of view, it tries to formalize how statistics works and why some statistical models are better than others. Once this notion of a "good model" is introduced, a natural question arises: is it possible that for some piece of data there is no good model? If so, how often do such bad ("non-stochastic") data appear "in real life"? Another, more technical motivation comes from algorithmic information theory. In this theory a notion of complexity of a finite object (= the amount of information in this object) is introduced; it assigns to every object some number, called its algorithmic complexity (or Kolmogorov complexity). Algorithmic statistics provides a more fine-grained classification: for each finite object some curve is defined that characterizes its behavior. It turns out that several different definitions give (approximately) the same curve. In this survey we try to provide an exposition of the main results in the field (including full proofs for the most important ones), as well as some historical comments. We assume that the reader is familiar with the main notions of algorithmic information theory (Kolmogorov complexity).

In algorithmic statistics the quality of a statistical hypothesis (a model) P for data x is measured by two parameters: the Kolmogorov complexity of the hypothesis and the probability P(x). A class of models S_ij that are the best from this point of view was discovered. However, these models are too abstract. To restrict the class of hypotheses for the data, Vereshchagin introduced the notion of a strong model for it. An object is called normal if it can be explained using strong models no worse than without this restriction. In this paper we show that there are "many types" of normal strings. Our second result states that there is a normal object x such that no model S_ij is strong for x. Our last result states that every best-fit strong model for a normal object is again a normal object.

Algorithmic statistics considers the following problem: given a binary string x (e.g., some experimental data), find a "good" explanation of this data. It uses algorithmic information theory to define formally what is a good explanation. In this paper we extend this framework in two directions.

First, the explanations are not only interesting in themselves but are also used for prediction: we want to know what kind of data we may reasonably expect in similar situations (repeating the same experiment). We show that some kind of hierarchy can be constructed both in terms of algorithmic statistics and using the notion of a priori probability, and these two approaches turn out to be equivalent (Theorem 5).

Second, a more realistic approach, which goes back to machine learning theory, assumes that we have not a single data string x but a set of "positive examples" x_1, ..., x_l that all belong to some unknown set A, a property that we want to learn. We want this set A to contain all positive examples and to be as small and simple as possible. We show how algorithmic statistics can be extended to cover this situation (Theorem 11).

A survey of main results in algorithmic statistics

Purpose: The purpose of the article is to determine the prospects for improving the system of emergency medical aid and services in the conditions of the digital economy and to develop an algorithm for this system's work on the basis of the Internet of Things. Methodology: The methods of systematization, logical analysis, and block schemes are used. Results: As a result of studying the peculiarities of the applied universal algorithm of the work of the system of emergency medical aid and services, the current problems and their causal connections are determined. It is substantiated that in the conditions of the digital economy there is a possibility of full-scale technological modernization of the system of emergency medical aid and services, which allows improving it by solving all of the identified problems together. An algorithm for the work of the system of emergency medical aid and services on the basis of the Internet of Things is developed. Recommendations: The offered algorithm is recommended for practical application, as it ensures the following advantages: an automatic call for emergency medical aid when necessary; a substantial reduction of the patient's waiting time for a transport vehicle providing emergency medical aid and services; reduction and automation of the organizational procedures that accompany the provision of emergency medical aid and services; overcoming the deficit or absence of the medication necessary for providing highly effective emergency medical aid and services; and an increase in the competence of the medical staff who provide emergency medical aid and services due to the systematic collection of feedback from patients. These advantages make it possible to guarantee the timely provision of emergency medical aid and services and insurance payments compensating the expenditures of medical organizations, thus increasing the effectiveness of the system of emergency medical aid and services. © Springer Nature Switzerland AG 2019.

The paper presents algorithms for the automatic detection of non-stationary periods of cardiac rhythm during professional activity. During work and subsequent rest, an operator passes through the phases of mobilization, stabilization, work, recovery, and rest. The amplitude and frequency of non-stationary periods of cardiac rhythm indicate the person's resistance to stressful conditions. We introduce and analyze a number of algorithms for non-stationary phase extraction: different approaches to preliminary phase detection, threshold extraction, and final phase extraction are studied experimentally. These algorithms are based on local extremum computation and on the analysis of histograms of linear regression coefficients. The algorithms do not need any labeled datasets for training and can be applied to any person individually. The suggested algorithms were experimentally compared and evaluated by human experts.
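
A simplified sketch of one ingredient of such algorithms (the data, window size, and threshold here are invented for illustration; the paper itself extracts thresholds from coefficient histograms): slopes of local linear regressions over a sliding window, where a large |slope| marks a candidate non-stationary (transition) segment.

```python
def window_slope(y):
    # Least-squares slope of the values y against x = 0, 1, ..., n-1.
    n = len(y)
    mx = (n - 1) / 2
    my = sum(y) / n
    num = sum((x - mx) * (v - my) for x, v in enumerate(y))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

def nonstationary_mask(series, win=5, thr=0.5):
    # Mark window positions whose local linear trend exceeds the threshold.
    slopes = [window_slope(series[i:i + win])
              for i in range(len(series) - win + 1)]
    return [abs(s) > thr for s in slopes]

# Toy RR-interval series: a flat segment followed by a rapid rise
# (e.g., the onset of a transition between phases):
rr = [800] * 10 + [800 + 40 * k for k in range(1, 11)]
mask = nonstationary_mask(rr, win=5, thr=10.0)
```

The flat windows yield slope 0 and are marked stationary, while windows inside the rising segment yield slope 40 and are flagged; a histogram of these slopes could then supply a data-driven threshold instead of the fixed one used here.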

Companies that are IT-industry leaders run from several tens to several hundreds of projects simultaneously. The main problem is to decide whether a project fits the current strategic goals and resource limits of the company. This leads firms to the issue of project portfolio selection; the challenge is to choose the subset of all projects that best satisfies the strategic objectives of the company. In this article we propose a multi-objective mathematical model of the project portfolio selection problem, defined on fuzzy trapezoidal numbers. We provide an overview of methods for solving this problem: a branch-and-bound approach, an adaptive parameter variation scheme based on the epsilon-constraint method, the ant colony optimization method, and a genetic algorithm. After this analysis, we choose the ant colony optimization method and the SPEA II method, which is a modification of a genetic algorithm, and describe the implementation of these methods applied to the project portfolio selection problem. The ant colony optimization is based on the max-min ant system with one pheromone structure and one ant colony. Three modifications of our SPEA II implementation were considered. The first adaptation uses binary tournament selection, while the second uses the rank selection method. The last one is based on another way of generating the initial population: part of the population is generated in a non-random manner by solving a one-criterion optimization problem, which makes the initial population stronger than one generated completely at random.
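
A minimal sketch of the binary tournament selection step mentioned for the first SPEA II adaptation (the encoding and scalar fitness below are illustrative assumptions, not the paper's multi-objective model): a portfolio is a bit vector over the project set, infeasible portfolios are penalized, and two randomly drawn candidates compete on fitness.

```python
import random

def portfolio_value(bits, values, costs, budget):
    # Illustrative single-criterion fitness: total value of selected
    # projects, with a penalty for exceeding the resource budget.
    cost = sum(c for b, c in zip(bits, costs) if b)
    if cost > budget:
        return -1.0  # infeasible portfolio
    return float(sum(v for b, v in zip(bits, values) if b))

def binary_tournament(population, fitness, rng):
    # Draw two distinct candidates; the fitter one becomes a parent.
    a, b = rng.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

rng = random.Random(1)
values = [9, 5, 6, 3]   # per-project value (hypothetical data)
costs  = [4, 3, 2, 2]   # per-project resource demand
pop = [tuple(rng.randint(0, 1) for _ in values) for _ in range(8)]
fit = lambda p: portfolio_value(p, values, costs, budget=7)
parent = binary_tournament(pop, fit, rng)
```

In a full SPEA II loop this selection would be driven by Pareto strength and density rather than a single scalar, but the tournament mechanics are the same.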

We consider the NP-hard scheduling problem of minimizing total tardiness on a single machine. We present a number of polynomial and pseudo-polynomial algorithms for special cases of the problem. Based on these algorithms, we give an algorithm that solves the Even-Odd Partition problem in pseudo-polynomial time. This algorithm for the Partition problem can handle instances with non-integer parameters.
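
For context, the standard pseudo-polynomial dynamic program behind such results, shown here for the ordinary Partition problem with integer inputs (the paper's Even-Odd variant adds positional constraints, and its algorithm also handles non-integer parameters, neither of which this sketch models): reachable subset sums are built item by item in O(n · S) time.

```python
def can_partition(nums):
    # Decide whether nums splits into two subsets with equal sums,
    # by building the set of reachable subset sums up to total/2.
    total = sum(nums)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}
    for a in nums:
        reachable |= {s + a for s in reachable if s + a <= target}
    return target in reachable

print(can_partition([3, 1, 1, 2, 2, 1]))  # True: {3, 2} vs {1, 1, 2, 1}
```

The running time is polynomial in the numeric value of the total sum rather than in its bit length, which is exactly the sense in which such algorithms are "pseudo-polynomial".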

The problem of optimal control is formulated for a class of nonlinear objects that can be represented as objects with a linear structure and state-dependent parameters. The linear structure of the transformed nonlinear system and the quadratic quality functional allow the synthesis of optimal control, i.e. of the regulator parameters, to move from searching for solutions of the Hamilton-Jacobi equation to an equation of the Riccati type with state-dependent parameters. The main difficulty in implementing such optimal control is finding a solution to this equation at the pace of the object's functioning. The paper proposes an algorithmic method of parametric optimization of the regulator, based on the necessary optimality conditions for the control system under consideration. The constructed algorithms can be used both to optimize non-stationary objects themselves, if the corresponding parameters are selected for this purpose, and to optimize the entire controlled system by the corresponding parametric adjustment of the regulators. The effectiveness of the developed algorithms is demonstrated on an example of drug treatment of patients with HIV.
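
A scalar illustration of the state-dependent Riccati idea (the system dx/dt = a(x)x + bu and the weights below are invented for the example, not taken from the paper): at each state, the algebraic Riccati equation 2·a(x)·p − (b²/r)·p² + q = 0 is solved in closed form for p(x) ≥ 0, giving the feedback u = −(b·p(x)/r)·x, recomputed "at the pace of the object's functioning".

```python
import math

def sdre_gain(a, b, q, r):
    # Positive root of the scalar state-dependent Riccati equation
    #   2*a*p - (b**2 / r) * p**2 + q = 0,
    # then the feedback gain k = b*p/r.
    p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
    return b * p / r

def simulate(x0, steps=2000, dt=0.01, q=1.0, r=1.0, b=1.0):
    # Euler simulation of dx/dt = a(x)*x + b*u with the gain
    # recomputed at every step from the current state.
    x = x0
    for _ in range(steps):
        a = 1.0 + x * x               # state-dependent A(x) (unstable open loop)
        u = -sdre_gain(a, b, q, r) * x
        x += dt * (a * x + b * u)
    return x

x_final = simulate(0.5)
```

With these weights the closed-loop dynamics reduce to dx/dt = −√(a(x)² + q)·x, so the state decays from any initial condition even though the open-loop system is unstable.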

This paper aims to analyse the philosophical premises on which the idea of the unity of law (the identity of a legal system) is based. In the history of legal philosophy this idea found its main arguments in the presumption of the totality of legal regulation. Such totality translated the philosophical tenets of holism, according to which law is not limited to positive-law rules and institutes. To substantiate the idea of the systematicity of law, one can turn to the modern debates about the logic of social cohesion and construct legal-system identity as a purely intellectual hypothesis necessary for thinking about law. This integrity can be described as a unity of discourse, or as a unity of societal practices. This reconstruction of the integrity of law can be extended by appealing to the basic ideas of normative legal philosophy (from Hart and Kelsen to Raz and Dworkin) and is reconcilable with the Alchourrón–Bulygin conception of normative systems.

While the number of non-tariff barriers worldwide is rising, the EAEU is pursuing a reduction of NTBs and an alignment of technical standards with the EU. However, immediate benefits to European companies have not yet materialised.

This paper describes an approach to the architecture of a broadcasting complex for multi-camera online live video streaming, based on a combination of approaches, protocols, and equipment from related fields: IPTV, CCTV, and computer networks. The proposed approach builds on an already established commercial service and represents the results of experimental development.

Tragedy as a literary genre and theatrical form was introduced to Russia around 1750 by Aleksandr Sumarokov (1717–1777), a dramatist and stage director active at the courts of Empresses Elizabeth (r. 1741–1761) and Catherine II (r. 1762–1796). In Petersburg, as in other European capitals, theatrical performances were a central element of what Grigorii Gukovskii called the "spectacle of the imperial court". Richard Wortman elaborates on this concept in his now-standard work, "Scenarios of Power: Myth and Ceremony in Russian Monarchy".

One of the most popular claims in the systemic transition literature since the second half of the 1990s is that the different experiences of the CEE and Baltic states, on the one hand, and of most of the CIS countries, on the other, are embedded in different social norms and values, which encourage reform efforts in the new EU member states and impede them in some CIS countries.

As follows from the very idea of discourse, it cannot be discussed apart from a particular usage. The social, political, and cultural characteristics of its participants are reflected in it. At the same time, various voices belonging to previous discursive practices can intermingle in it, which creates new shades of meaning. In this connection, the notion of interdiscursivity is spelled out in the present article and compared with the notion of "intertextuality". The former is illustrated with examples borrowed from B. Obama's victory speech (2008). All of them add a certain musicality to the address, as they present links to songs popular in America.