Universal gradient methods for convex optimization problems
The direct and inverse projections (DIP) method was proposed to reduce the feature space to a given dimension; it is oriented toward problems of randomized machine learning and is based on "direct" and "inverse" projection procedures. The "projector" matrices are determined by maximizing relative entropy. Information losses are estimated by the absolute error computed with the Kullback–Leibler function (the SRC method). An example illustrating these methods is given.
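The pipeline described above (project down, project back, score the information loss with a Kullback–Leibler function) can be sketched as follows. This is a minimal illustration only: the projector here is a plain SVD-based linear map, not the entropy-maximizing projector of the DIP method, and the function names are hypothetical.

```python
import numpy as np

def dip_sketch(X, k):
    """Project n x m data X down to k dimensions ("direct" projection)
    and back to n x m ("inverse" projection).

    Illustrative stand-in: uses the top-k right singular vectors of X
    as the projector, not the entropy-maximizing projector of DIP.
    """
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:k]            # k x m "direct" projector matrix
    Z = X @ P.T           # projected data, n x k
    X_rec = Z @ P         # "inverse" projection back to n x m
    return X_rec

def kl_loss(X, X_rec, eps=1e-12):
    """Estimate information loss as the Kullback-Leibler divergence
    between normalized nonnegative versions of X and its reconstruction."""
    p = np.abs(X).ravel()
    p = p / p.sum()
    q = np.abs(X_rec).ravel() + eps   # eps guards against log(0)
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
X = rng.random((50, 10))
X_rec = dip_sketch(X, k=4)
loss = kl_loss(X, X_rec)   # small nonnegative number
```

The KL loss shrinks toward zero as `k` approaches the original dimension, which mirrors how the SRC-style absolute-error estimate quantifies what the reduction discards.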
In 2006, Russia amended its competition law to add the concepts of 'collective dominance' and its abuse. This was seen as an attempt to address the common problem of 'conscious parallelism' among firms in concentrated industries. Critics feared that enforcement of this provision would become tantamount to government regulation of prices. In this paper we examine the enforcement experience to date, looking especially closely at sanctions imposed on firms in the oil industry. Some difficulties and complications experienced in enforcement are analysed, and some alternative strategies for addressing anticompetitive behaviour in concentrated industries are discussed.