The unit commitment problem is of central importance in power system operations, since it aims to reduce power production cost by optimally scheduling the commitments of generation units. At the same time, it is a challenging problem because it involves a large number of integer variables. With the increasing penetration of renewable energy sources in power systems, power system operation and control are more affected by uncertainty than before. This paper discusses a stochastic unit commitment model that takes into account various uncertainties affecting thermal energy demand and two types of power generators, namely quick-start and non-quick-start generators. The resulting problem is a stochastic mixed-integer program with discrete decision variables in both the first and second stages. To solve this difficult problem, a method based on Benders decomposition is applied. Numerical experiments show that the proposed algorithm solves the stochastic unit commitment problem efficiently, especially for instances with large numbers of scenarios.
We consider reformulations of a class of bilevel linear integer programs as equivalent linear mixed-integer programs (linear MIPs). The most common technique to reformulate such programs as a single-level problem is to replace the lower-level linear optimization problem by its Karush–Kuhn–Tucker (KKT) optimality conditions. Employing the strong duality (SD) property of linear programs is an alternative way to perform such transformations. In this note, we describe two SD-based reformulations whose key idea is to exploit the binary expansion of the upper-level integer variables. We compare the performance of an off-the-shelf MIP solver on the SD-based reformulations against the KKT-based one and show that the SD-based approaches can lead to orders-of-magnitude reductions in computational times for certain classes of instances.
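For intuition, the binary expansion that underlies such reformulations replaces each bounded upper-level integer variable x ∈ {0, …, U} by binary digits b_k with x = Σ_k 2^k b_k; products of the binaries with continuous dual variables can then be linearized with standard techniques. A minimal sketch of the expansion itself (function names are illustrative, not from the paper):

```python
def binary_expansion(x, upper_bound):
    """Digits b_k of a bounded integer 0 <= x <= upper_bound,
    so that x == sum(2**k * b_k).  The number of digits depends
    only on the bound, as a reformulation requires."""
    num_digits = max(upper_bound.bit_length(), 1)
    return [(x >> k) & 1 for k in range(num_digits)]

def reconstruct(bits):
    """Inverse map: recover the integer from its binary digits."""
    return sum(b << k for k, b in enumerate(bits))
```

For example, with U = 20 five binary variables represent any feasible value, so a single general-integer variable becomes five 0–1 variables in the reformulated MIP.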
Flow variations over time generalize standard network flows by introducing an element of time. In contrast to the classical case of static flows, a flow over time in such a network specifies a flow rate entering an arc for each point in time. In this setting, the capacity of an arc limits the rate of flow into the arc at each point in time. Traditionally, flows over time are computed in time-expanded networks that contain one copy of the original network for each discrete time step. While this method makes available the whole algorithmic toolbox developed for static network flows, its drawback is the enormous size of the time-expanded network. In this paper, we extend the results about the minimum flow problem to network flows (with n nodes and m arcs) in which the time-varying lower bounds can involve both the source and the sink nodes (as in Fathabadi et al.) and also one additional node other than the source and the sink nodes. It is shown that this problem for the set (Formula presented.) of time points can be solved by at most n minimum flow computations, by suitably extending the dynamic minimum flow algorithm and reoptimization techniques. The running time of the presented algorithm is (Formula presented.).
In this paper, we consider the well-known resource-constrained project scheduling problem. We argue that even a special case of this problem with a single resource type is not approximable in polynomial time within a constant approximation ratio. We prove that there exist instances for which the optimal makespan values of the non-preemptive and preemptive problems have a ratio of O(log n), where n is the number of jobs. This means that there exist instances for which the lower bound of Mingozzi et al. has a relative error as large as O(log n), and the calculation of this bound is an NP-hard problem. In addition, we prove that there exists a type of instances for which known polynomial-time approximation algorithms have an approximation ratio of at least O(√n), and known lower bounds have a relative error of at least O(log n). This type of instances corresponds to the single-machine parallel-batch scheduling problem 1|p-batch, b=∞|C_max.
Performance impacts of ordering and production control policies in the presence of capacity disruptions are studied on the real-life example of a retail supply chain with product perishability considerations. Constraints on product perishability typically result in reductions in safety stock and increases in transportation frequency. Consideration of production capacity disruption risks may, in turn, lead to safety stock increases. This trade-off is approached with the help of a simulation model that is used to compare supply chain performance impacts under coordinated and non-coordinated ordering and production control policies. Real data from a fast-moving consumer goods company are used to perform simulations and to derive novel managerial insights and practical recommendations on inventory, on-time delivery, and service level control. In particular, for the first time, the effect of ‘postponed redundancy’ has been observed. Moreover, a coordinated production–ordering contingency policy in the supply chain within and after the disruption period has been developed and tested to reduce the negative impacts of the ‘postponed redundancy’. The lessons learned from the experiments provide evidence that a coordinated policy is advantageous for inventory dynamics stabilization, improvement in on-time delivery, and variation reduction in customer service level.
This survey paper attempts to cover a broad range of topics related to computational biomedicine. The field has been attracting great attention due to the many benefits it can bring to society, and new technological and theoretical advances have enabled considerable progress. Problems emerging in this field are traditionally challenging from many perspectives. In this paper, we consider the influence of big data on the field, the problems associated with massive datasets in biomedicine, and ways to address these problems. We analyze the most commonly used machine learning and feature mining tools, as well as several new trends and tendencies for computational biomedicine, such as deep learning and biological networks.
In this paper we consider the analysis of an M/D[Y]/1 vacation queue with periodically gated discipline. The motivation for introducing the new periodically gated discipline lies in modeling a kind of contention-based bandwidth reservation mechanism applied in wireless networks. The analysis approach applied here consists of two steps and is based on appropriately chosen characteristic epochs of the system. We provide approximate expressions for the probability-generating function of the number of customers at an arbitrary epoch as well as for the Laplace–Stieltjes transform and the mean of the steady-state waiting time. Several numerical examples are also provided. In the second part of the paper we discuss how to apply the periodically gated vacation model to non-real-time uplink traffic in IEEE 802.16-based wireless broadband networks.
We study a general model of multi-dimensional screening for discrete types of consumers without the single-crossing condition or any other essential restrictions. Such generality motivates us to introduce graph theory into the optimization by treating each combination of active constraints as a digraph. Our relaxation of the constraints (a slackness parameter) excludes bunching and cycles among the constraints. Then the only possible solution structures are rivers, which are acyclic rooted digraphs, and the Lagrange multipliers can be used to characterize the solutions. Relying on these propositions, we propose and justify an optimization algorithm. In our experiments, its branch-and-bound version with a good starting plan requires fewer iterations than a complete search among all rivers.
Research into the market graph is attracting increasing attention in stock market analysis. One of the important problems connected with the market graph is its identification from observations. The standard way of identifying the market graph is a simple procedure based on statistical estimates of Pearson correlations between pairs of stocks. Recently, a new class of statistical procedures for market graph identification was introduced, and the optimality of these procedures in the Pearson correlation Gaussian network was proved. However, these procedures are highly reliable only for Gaussian multivariate distributions of stock attributes. One way to correct this problem is to consider different networks generated by different measures of pairwise similarity of stocks. A new and promising model in this context is the sign similarity network. In this paper, the market graph identification problem in the sign similarity network is considered. A new class of statistical procedures for market graph identification is introduced and the optimality of these procedures is proved. Numerical experiments reveal an essential difference in quality between optimal procedures in sign similarity and Pearson correlation networks. In particular, it is observed that the quality of the optimal identification procedure in the sign similarity network is not sensitive to assumptions on the distribution of stock attributes.
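For intuition, a natural sample estimate of sign similarity is the fraction of observation periods in which two stocks' returns have the same sign; thresholding this measure yields a market graph. A minimal sketch under that assumption (the paper's exact estimator and identification procedures may differ, and the function names are illustrative):

```python
def sign_similarity(returns_i, returns_j):
    """Sample estimate of the probability that two stocks' returns
    have the same sign -- a distribution-free similarity measure."""
    assert len(returns_i) == len(returns_j)
    same = sum((a >= 0) == (b >= 0) for a, b in zip(returns_i, returns_j))
    return same / len(returns_i)

def market_graph_edges(returns, threshold):
    """Connect every pair of stocks whose sign similarity exceeds the threshold."""
    n = len(returns)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sign_similarity(returns[i], returns[j]) > threshold]
```

Because only the signs of the returns enter the estimate, the measure is insensitive to monotone distortions of the return distribution, which matches the reported robustness to distributional assumptions.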
We present an analysis of the distribution of voting power in the Reichstag of the Weimar Republic based on the outcomes of the nine general elections in the period 1919–1933. The paper contains a brief description of the political and electoral system of the Weimar Republic and a characterization of the main political actors and their political views. The power distributions are evaluated by means of the Banzhaf index and two new indices which take into account the parties' preferences to coalesce. A model is constructed to evaluate the parties' preferences with reference to the closeness of the ideological positions in a one-dimensional political space.
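The Banzhaf index counts, for each player, the winning coalitions in which it is critical, i.e. those that become losing when the player leaves. A brute-force sketch for a plain weighted voting game (the paper's preference-based indices are more elaborate, and this enumeration is only practical for small games):

```python
from itertools import combinations

def banzhaf_index(weights, quota):
    """Normalized Banzhaf power index for a weighted voting game:
    count each player's swings (winning coalitions that lose without it),
    then normalize so the indices sum to one."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:                      # winning coalition
                for i in coalition:
                    if total - weights[i] < quota:  # i is critical
                        swings[i] += 1
    total_swings = sum(swings)  # assumes at least one winning coalition
    return [s / total_swings for s in swings]
```

A classic example: with seat weights (50, 49, 1) and quota 51, the indices are (3/5, 1/5, 1/5), so the one-seat party is exactly as powerful as the 49-seat party.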
This paper studies bankruptcy problems with nontransferable utility (NTU) as a generalization of bankruptcy problems with a monetary estate and claims. Following the theory on bankruptcy problems with transferable utility (TU), we introduce a duality notion for NTU-bankruptcy rules and derive several axiomatic characterizations of the proportional rule and the constrained relative equal awards rule.
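For intuition, the classic TU (monetary) counterparts of these rules are easy to state: the proportional rule divides the estate in proportion to claims, and the constrained equal awards rule gives each claimant min(claim, λ), with λ chosen so the awards exhaust the estate. A sketch of the TU versions (the paper's NTU rules generalize these; the bisection is one simple way to find λ):

```python
def proportional_rule(estate, claims):
    """Each claimant receives a share of the estate proportional to its claim."""
    total = sum(claims)
    return [estate * c / total for c in claims]

def constrained_equal_awards(estate, claims, tol=1e-9):
    """Each claimant receives min(claim, lam), with lam found by bisection
    so that the awards sum to the estate (assumes estate <= sum(claims))."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < estate:
            lo = lam
        else:
            hi = lam
    return [min(c, (lo + hi) / 2) for c in claims]
```

For an estate of 100 and claims (100, 60, 40), the proportional rule awards (50, 30, 20), while constrained equal awards gives everyone 100/3, since no claim binds at that level.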
Support vector machines (SVMs) constitute one of the best-known classes of supervised learning algorithms. Basic SVM models deal with the situation where the exact values of the data points are known. This paper studies SVMs when the data points are uncertain. When some properties of the distributions are known, chance-constrained SVM can be used to ensure a small probability of misclassification for the uncertain data. Since an infinite number of distributions may share the known properties, robust chance-constrained SVM requires efficient transformations of the chance constraints to make the problem solvable. In this paper, robust chance-constrained SVM with second-order moment information is studied, and we obtain equivalent semidefinite programming and second-order cone programming reformulations. A geometric interpretation is presented and numerical experiments are conducted. Three types of estimation errors for the mean and covariance information are studied, and the corresponding formulations and techniques to handle them are presented.
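A standard route for the second-order moment case: by a Cantelli/Chebyshev-type bound, requiring P(y(wᵀx + b) ≥ 0) ≥ 1 − ε over all distributions with mean μ and covariance Σ reduces to the second-order cone condition y(wᵀμ + b) ≥ κ√(wᵀΣw) with κ = √((1−ε)/ε). A sketch of a feasibility check for this condition (illustrative only; the paper's exact reformulations and error-handling techniques may differ):

```python
import numpy as np

def robust_margin_ok(w, b, mu, Sigma, y, eps):
    """Check the distributionally robust chance constraint for one point:
    under every distribution with mean mu and covariance Sigma,
    P(y * (w.x + b) >= 0) >= 1 - eps holds iff
    y * (w.mu + b) >= kappa * sqrt(w' Sigma w), kappa = sqrt((1 - eps) / eps)
    (a Cantelli/Chebyshev-type worst-case bound)."""
    kappa = np.sqrt((1 - eps) / eps)
    margin = y * (np.dot(w, mu) + b)   # signed distance term at the mean
    spread = np.sqrt(w @ Sigma @ w)    # standard deviation along w
    return margin >= kappa * spread
```

Note that tightening ε increases κ, so the same hyperplane can satisfy the constraint at one risk level and violate it at a stricter one.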
Uncertainty is a concept associated with data acquisition and analysis, usually appearing in the form of noise or measurement error, often due to some technological constraint. In supervised learning, uncertainty affects classification accuracy and yields low-quality solutions. For this reason, it is essential to develop machine learning algorithms able to handle imprecise data efficiently. In this paper we study this problem from a robust optimization perspective. We consider a supervised learning algorithm based on generalized eigenvalues and we provide a robust counterpart formulation and solution for the case of ellipsoidal uncertainty sets. We demonstrate the performance of the proposed robust scheme on artificial and benchmark datasets from the University of California, Irvine (UCI) machine learning repository, and we compare the results against a robust implementation of Support Vector Machines.
Supervised classification is one of the most powerful techniques to analyze data when a priori information is available on the membership of data samples to classes. Since the labeling process can be both expensive and time-consuming, it is interesting to investigate semi-supervised algorithms that can produce classification models taking advantage of unlabeled samples. In this paper we propose LapReGEC, a novel technique that introduces a Laplacian regularization term in a generalized eigenvalue classifier. As a result, we produce models that are both accurate and parsimonious in terms of the labeled data needed. We empirically show that the obtained classifier compares well with other techniques, using as few as 5% of the points as labeled data to compute the models.
In this paper, we consider some scheduling problems on a single machine in which weighted or unweighted total tardiness has to be maximized, in contrast to the usual minimization problems. These problems are theoretically important and also have practical interpretations. For the weighted tardiness maximization problem, we present an NP-hardness proof and a pseudo-polynomial solution algorithm. For the unweighted total tardiness maximization problem with release dates, NP-hardness is proven. Complexity results for some other classical objective functions (e.g., the number of tardy jobs, total completion time) and various additional constraints (e.g., deadlines, weights and/or release dates of jobs may be given) are presented as well.
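For concreteness, the total weighted tardiness of a permutation schedule without idle times is Σ_j w_j · max(0, C_j − d_j), and maximization asks for the worst permutation. A brute-force sketch for tiny instances (illustration only; the paper's pseudo-polynomial algorithm avoids this exponential enumeration):

```python
from itertools import permutations

def total_weighted_tardiness(processing, due, weights, order):
    """Total weighted tardiness of a permutation schedule on one machine,
    jobs processed back to back starting at time zero (no idle times)."""
    t, total = 0, 0
    for j in order:
        t += processing[j]                        # completion time C_j
        total += weights[j] * max(0, t - due[j])  # tardiness T_j = max(0, C_j - d_j)
    return total

def max_total_tardiness_bruteforce(processing, due, weights):
    """Maximize total weighted tardiness over all n! permutations."""
    return max(total_weighted_tardiness(processing, due, weights, order)
               for order in permutations(range(len(processing))))
```

For example, with processing times (2, 3), due dates (2, 3), and unit weights, sequencing job 2 first yields total tardiness 3, whereas the earliest-due-date order yields only 2, so maximization picks the former.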
We consider the problem of maximizing total tardiness on a single machine, where the first job starts at time zero and idle times between the processing of jobs are not allowed. We present a modification of an exact pseudo-polynomial algorithm based on a graphical approach, which has a polynomial running time. This result settles the previously open complexity status of the problem under consideration.