Data aggregation for p-median problems
In this paper, we use a pseudo-Boolean formulation of the p-median problem and, by means of data aggregation, provide a compact representation of p-median problem instances. We present computational results demonstrating this compactification on benchmark instances. We then use our representation to explain why some p-median problem instances are more difficult to solve to optimality than other instances of the same size. We also derive a preprocessing rule based on our formulation and describe equivalent p-median problem instances: identically sized instances that are guaranteed to have identical optimal solutions.
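To make the pseudo-Boolean view concrete, here is a minimal sketch (with a made-up 4x4 cost matrix, not an instance from the paper) of the standard construction: each client contributes a telescoping sum of cost differences, each difference multiplied by a monomial in variables y_i (y_i = 1 meaning facility i is closed), and summing like monomials across clients is exactly the aggregation that compactifies the instance.

```python
from collections import defaultdict

def pmedian_polynomial(C):
    """Aggregated pseudo-Boolean polynomial of a PMP cost matrix C,
    where C[i][j] is the cost of serving client j from facility i
    and y_i = 1 means facility i is closed."""
    poly = defaultdict(int)
    m, n = len(C), len(C[0])
    for j in range(n):
        order = sorted(range(m), key=lambda i: C[i][j])  # cheapest facility first
        poly[frozenset()] += C[order[0]][j]              # constant term
        for k in range(1, m):
            delta = C[order[k]][j] - C[order[k - 1]][j]
            if delta:  # zero differences vanish, shrinking the polynomial
                poly[frozenset(order[:k])] += delta      # aggregate like monomials
    return dict(poly)

def cost_of_opening(poly, closed):
    """Total service cost when the facilities in `closed` are not opened:
    a monomial contributes iff all of its y-variables equal 1."""
    return sum(c for mono, c in poly.items() if mono <= closed)

# A toy 4-facility / 4-client instance (illustrative only).
C = [[0, 2, 7, 5],
     [2, 0, 4, 6],
     [7, 4, 0, 3],
     [5, 6, 3, 0]]
poly = pmedian_polynomial(C)
# Opening facilities {0, 2}, i.e. closing {1, 3}, costs 5.
print(cost_of_opening(poly, frozenset({1, 3})))
```

Like monomials coming from different clients are merged, so the aggregated polynomial typically has fewer terms than the cost matrix has entries; in the toy instance above the twelve non-constant client terms aggregate into ten monomials.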
Lately, the problem of cell formation (CF) has gained considerable attention in the industrial engineering literature. Since it was formulated more than 50 years ago, the problem has incorporated additional industrial factors and constraints, while its solution methods have steadily improved in solution quality and CPU time. However, despite all these efforts, the available solution methods (including those for a popular model based on the p-median problem, PMP) are prone to two major types of errors. The first error (the modeling error) occurs when the intended objective function of the CF (as a rule, formulated verbally) is substituted by the objective function of the PMP. The second error (the algorithmic error) occurs as a direct result of applying a heuristic for solving the PMP. In this paper we show that for instances that make sense in practice, the modeling error induced by the PMP is negligible. We exclude the algorithmic error completely by solving the adjusted pseudo-Boolean formulation of the PMP exactly, which takes less than one second on a general-purpose PC with general-purpose software. Our experimental study shows that the PMP-based model produces high-quality cells and in most cases outperforms several contemporary approaches.
Over the past ten years, algorithms and technologies for network structure analysis have been applied to financial markets, among other approaches. The first step of such an analysis is to describe the financial market under consideration via the correlation matrix of stock prices over a certain period of time. The second step is to build a graph in which vertices represent stocks and edge weights represent correlation coefficients between the corresponding stocks. In this paper we suggest a new method of analyzing stock markets based on dividing a market into several substructures (called stars) in which all stocks are strongly correlated with a leading (central, median) stock. The method is based on the p-median model, a feasible solution of which is represented by a collection of stars. Our method finds an exact solution for relatively small markets (fewer than 1000 stocks) and a high-quality approximate solution for large markets (many thousands of stocks). We observed an important "median nesting" property of the returned solutions: the p leading stocks, or medians, of the stars are repeated in the solution for p+1 stars.
Over the past fifteen years, network analysis has become a powerful tool for studying financial markets. In this work we analyze the stock markets of the USA and Sweden. We study cluster structures of a market network constructed from the correlation matrix of returns of the stocks traded in each of these markets. Such cluster structures are obtained by means of the P-Median Problem (PMP), whose objective is to maximize the total correlation between a set of p stocks, called medians, and the remaining stocks. Every cluster structure is an undirected, disconnected, weighted graph in which every connected component (cluster) is a star, i.e., a tree with one central node (the median) and several leaf nodes connected to the median by weighted edges. Our main observation is that in non-crisis periods cluster structures change rather chaotically, while during crises they show more stable behavior and fewer changes. Thus, increasing stability of the cluster structure of a market graph obtained via the PMP could serve as an indicator of a coming crisis.
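As a small illustration of this star decomposition (a brute-force sketch on a made-up 5x5 correlation matrix, not real market data, and not the exact pseudo-Boolean solver used in these papers): choose p medians maximizing the total correlation, then attach every non-median stock to its most correlated median.

```python
from itertools import combinations

def star_clusters(R, p):
    """Partition stocks 0..n-1 into p stars. R is a symmetric correlation
    matrix; the p medians maximize the sum, over non-median stocks, of the
    correlation with the most correlated median (brute force, small n only)."""
    n = len(R)

    def total(meds):
        return sum(max(R[i][j] for i in meds) for j in range(n) if j not in meds)

    best = max(combinations(range(n), p), key=total)
    stars = {i: [] for i in best}
    for j in range(n):
        if j not in best:
            # attach stock j to the median it correlates with most strongly
            stars[max(best, key=lambda i: R[i][j])].append(j)
    return stars, total(best)

# Toy correlation matrix: stocks {0, 1} and {2, 3, 4} are strongly
# correlated within each group, weakly across groups.
R = [[1.0, 0.9, 0.2, 0.1, 0.3],
     [0.9, 1.0, 0.3, 0.2, 0.1],
     [0.2, 0.3, 1.0, 0.8, 0.7],
     [0.1, 0.2, 0.8, 1.0, 0.6],
     [0.3, 0.1, 0.7, 0.6, 1.0]]
stars, total_corr = star_clusters(R, 2)
print(stars)  # two stars, with medians 0 and 2
```

The two stars recover the two correlated groups, with stock 0 serving stock 1 and stock 2 serving stocks 3 and 4.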
The article considers the views of L. N. Tolstoy not only as a representative, but also as a completer, of the Enlightenment. A comparison of his philosophy with the ideas of Spinoza and Diderot makes it possible to clarify some aspects of the transition to Tolstoy's unique religious and philosophical doctrine. The general and specific features of the three philosophers are subjected to special analysis. Particular attention is paid to the way of thinking, the attitude toward science, and the specifics of the worldviews of Tolstoy and Diderot. An important aspect examined is the contradiction between the way of thinking and the way of life of the three philosophers.
Tolstoy's transition from a rational perception of life to its religious and existential bases is shown. Tolstoy gradually moves away from the idea of a natural man toward the idea of a man who lives by the commandments of Christ. Starting from an Enlightenment worldview, Tolstoy ended by creating a religious and philosophical doctrine that remained relevant for the 20th century.
This important new book offers the first full-length interpretation of the thought of Martin Heidegger with respect to irony. In a radical reading of Heidegger's major works (from Being and Time through the ‘Rector's Address' and the ‘Letter on Humanism' to ‘The Origin of the Work of Art' and the Spiegel interview), Andrew Haas does not claim that Heidegger is simply being ironic. Rather he argues that Heidegger's writings make such an interpretation possible - perhaps even necessary.
Heidegger begins Being and Time with a quote from Plato, a thinker famous for his insistence upon Socratic irony. The Irony of Heidegger takes seriously the apparently curious decision to introduce the threat of irony even as philosophy begins in earnest to raise the question of the meaning of being. Through a detailed and thorough reading of Heidegger's major texts and the fundamental questions they raise, Haas reveals that one of the most important philosophers of the 20th century can be read with as much irony as earnestness. The Irony of Heidegger attempts to show that the essence of this irony lies in uncertainty, and that the entire project of onto-heno-chrono-phenomenology therefore needs to be called into question.
The article is concerned with the notions of technology in the essays of Ernst and Friedrich Georg Jünger. The special problem of the connection between technology and freedom is discussed in the broader context of the criticism of culture and the technocracy debate in German intellectual history of the first half of the 20th century.
This proceedings publication is a compilation of selected contributions from the "Third International Conference on the Dynamics of Information Systems," which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own areas of study.
A formula for an unbiased estimate of the coefficient of determination of a linear regression model is obtained. The estimate is computed from a sample drawn from a multivariate normal distribution. It is proposed as an alternative criterion for the choice of regression factors.
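The abstract does not reproduce the estimator itself. For context, a minimal sketch of the ordinary coefficient of determination and the familiar adjusted R-squared, a common degrees-of-freedom correction that is only approximately unbiased and is not the paper's exact formula, on toy data:

```python
def r_squared(y, y_hat):
    """Ordinary coefficient of determination: 1 - SS_res / SS_tot."""
    y_bar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

def adjusted_r_squared(r2, n, k):
    """Degrees-of-freedom correction for n observations and k regressors
    (intercept excluded); it penalizes adding factors, which is why such
    corrected estimates are used as criteria for factor selection."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

y = [1, 2, 3, 4]              # observed responses (toy data)
y_hat = [1.1, 1.9, 3.2, 3.8]  # fitted values from some one-factor model
r2 = r_squared(y, y_hat)
print(r2, adjusted_r_squared(r2, n=4, k=1))
```

Because the correction shrinks R-squared as regressors are added, comparing the corrected values across candidate factor sets serves the same model-selection purpose the abstract describes.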