This paper describes BBMCW, a new efficient exact maximum clique algorithm tailored to large sparse graphs that can be bit-encoded directly into memory without a heavy performance penalty. Such graphs occur in real-life problems when some form of locality can be exploited to reduce their scale; one example is the correspondence graphs derived from data association problems. The new algorithm is based on the bit-parallel kernel used by the BBMC family of published exact algorithms. BBMCW employs a new bitstring encoding that we denote ‘watched’, because it is reminiscent of the ‘watched literal’ technique used in satisfiability and other constraint problems. The new encoding reduces the number of spurious operations computed by the BBMC bit-parallel kernel on large sparse graphs. BBMCW also improves on the bound computation proposed in the literature for bit-parallel solvers. Experimental results show that the new algorithm outperforms prior algorithms on data sets of both real and synthetic sparse graphs. On the real data sets, the performance improvement averages more than two orders of magnitude with respect to the state-of-the-art exact solver IncMaxCLQ.
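The idea behind a ‘watched’ bitstring can be illustrated with a small sketch (our own simplification, not the BBMCW implementation): alongside the bit blocks, each set keeps the indices of its first and last non-empty blocks, so that a set intersection on a sparse graph scans only the overlapping watched range and skips runs of empty blocks. The class and method names below are hypothetical.

```python
# Hypothetical sketch of a "watched" bitstring: besides the 64-bit
# blocks, we track the first and last non-empty block indices so that
# set operations on sparse sets skip long runs of empty blocks.

class WatchedBitSet:
    def __init__(self, capacity):
        self.nblocks = (capacity + 63) // 64
        self.blocks = [0] * self.nblocks
        self.lo = self.nblocks  # first non-empty block (sentinel: none)
        self.hi = -1            # last non-empty block

    def add(self, v):
        b = v // 64
        self.blocks[b] |= 1 << (v % 64)
        self.lo = min(self.lo, b)
        self.hi = max(self.hi, b)

    def intersect(self, other):
        """AND with another set, scanning only the watched overlap."""
        out = WatchedBitSet(self.nblocks * 64)
        lo, hi = max(self.lo, other.lo), min(self.hi, other.hi)
        for b in range(lo, hi + 1):
            w = self.blocks[b] & other.blocks[b]
            if w:
                out.blocks[b] = w
                out.lo = min(out.lo, b)
                out.hi = max(out.hi, b)
        return out

    def members(self):
        return [b * 64 + i
                for b in range(max(self.lo, 0), self.hi + 1)
                for i in range(64) if self.blocks[b] >> i & 1]
```

In a sparse graph most neighbourhood bitsets touch only a few blocks, so narrowing every AND to the watched range is what removes the spurious word-level operations the abstract refers to.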
In this paper we consider the two-stage stochastic linear assignment (2SSLA) problem, a stochastic extension of the classical deterministic linear assignment problem. For each agent and job, the decision maker has to decide whether to make the assignment now or to wait for the second stage. Assignments whose decisions are deferred to the second stage are then completed once the scenario is realized. We discuss two greedy approximation algorithms from the literature and derive a simple necessary optimality condition that generalizes the key ideas behind both of these approaches. Building on this result, we then design a new greedy approximation method. Theoretical observations and the results of computational experiments are also presented.
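The two-stage structure can be made concrete with a toy sketch (our own illustration, not the paper's algorithm or its optimality condition): fixing a partial assignment in stage one, completing the rest optimally in each scenario, and greedily fixing the pair with the largest expected saving. All function names and the brute-force completion are our simplifications for tiny instances.

```python
# Illustrative two-stage evaluation for a toy 2SSLA instance: fix some
# first-stage assignments, complete the assignment optimally under each
# scenario, and take the expectation over scenarios.
from itertools import permutations

def complete_cost(cost, agents, jobs):
    """Cheapest completion assigning remaining agents to remaining jobs."""
    if not agents:
        return 0.0
    return min(sum(cost[a][j] for a, j in zip(agents, perm))
               for perm in permutations(jobs))

def expected_cost(first_cost, scen_costs, probs, first_stage):
    """first_stage: dict agent -> job, fixed in stage one."""
    n = len(first_cost)
    rest_a = [a for a in range(n) if a not in first_stage]
    rest_j = [j for j in range(n) if j not in first_stage.values()]
    now = sum(first_cost[a][j] for a, j in first_stage.items())
    later = sum(p * complete_cost(c, rest_a, rest_j)
                for p, c in zip(probs, scen_costs))
    return now + later

def greedy(first_cost, scen_costs, probs):
    """Greedily fix the first-stage pair with the largest expected saving."""
    chosen = {}
    n = len(first_cost)
    while True:
        base = expected_cost(first_cost, scen_costs, probs, chosen)
        best = min(((expected_cost(first_cost, scen_costs, probs,
                                   {**chosen, a: j}), a, j)
                    for a in range(n) if a not in chosen
                    for j in range(n) if j not in chosen.values()),
                   default=(base, None, None))
        if best[1] is None or best[0] >= base:
            return chosen, base
        chosen[best[1]] = best[2]
```

The brute-force completion makes this exponential in the instance size; it only serves to show how a first-stage commitment trades a certain cost now against an expected recourse cost later.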
The preemptive single machine scheduling problem of minimizing the total weighted completion time with arbitrary processing times and release dates is an important NP-hard problem in scheduling theory. In this paper we present an efficient high-quality heuristic for this problem based on the WSRPT (Weighted Shortest Remaining Processing Time) rule. The running time of the suggested algorithm grows only quadratically with the number of jobs. Our computational study shows that very large instances can be solved within very small CPU times, and that the average error is always less than 0.1%.
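The basic WSRPT priority rule can be sketched with a minimal event-driven simulation (our own sketch of the plain rule, not the paper's refined heuristic): whenever a job is released or completed, preempt and run the available job with the largest weight-to-remaining-processing-time ratio.

```python
# Minimal event-driven simulation of the plain WSRPT rule: at each
# release or completion, run the available job maximizing w_j / remaining_j.
def wsrpt(jobs):
    """jobs: list of (release, processing, weight) triples.
    Returns the total weighted completion time of the WSRPT schedule."""
    remaining = [p for _, p, _ in jobs]
    t, done, total = 0.0, 0, 0.0
    while done < len(jobs):
        avail = [j for j, (r, _, _) in enumerate(jobs)
                 if r <= t and remaining[j] > 0]
        if not avail:  # idle until the next release of an unfinished job
            t = min(r for j, (r, _, _) in enumerate(jobs)
                    if remaining[j] > 0 and r > t)
            continue
        j = max(avail, key=lambda k: jobs[k][2] / remaining[k])
        # run j until it finishes or the next release, whichever is first
        horizon = min([r for r, _, _ in jobs if r > t], default=float("inf"))
        step = min(remaining[j], horizon - t)
        remaining[j] -= step
        t += step
        if remaining[j] == 0:
            done += 1
            total += jobs[j][2] * t
    return total
```

For example, with jobs (release, processing, weight) = (0, 4, 1) and (1, 1, 10), the heavy short job preempts the long one at time 1, finishing at time 2, and the long job completes at time 5. Re-evaluating priorities only at releases and completions is what keeps a careful implementation quadratic in the number of jobs.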
We propose a novel algorithm portfolio model that incorporates time series forecasting techniques to predict online the performance of its constituent algorithms. The predictions are then used to allocate computational resources among the constituent algorithms. The proposed model is demonstrated on parallel algorithm portfolios consisting of three popular metaheuristics, namely tabu search, variable neighbourhood search, and multistart local search. Moving average and exponential smoothing techniques are employed for forecasting purposes. A challenging combinatorial problem, namely the detection of circulant weighing matrices, is selected as the testbed for the analysis of the proposed approach. Experimental evidence and statistical analysis provide insight into the performance of the proposed algorithms and reveal the benefits of using forecasting techniques for resource allocation in algorithm portfolios.
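A hedged sketch of the forecast-then-allocate loop (our own illustration, not the paper's model): each algorithm's observed progress is folded into a forecast by simple exponential smoothing, and the next time budget is shared in proportion to the forecasts. The smoothing update s ← αx + (1 − α)s is the standard one; the proportional-share policy and all names here are our illustrative choices.

```python
# Sketch of forecast-driven resource allocation in an algorithm
# portfolio, using simple exponential smoothing as the forecaster.
class SmoothedPortfolio:
    def __init__(self, n_algorithms, alpha=0.3):
        self.alpha = alpha
        self.forecast = [1.0] * n_algorithms  # optimistic uniform start

    def observe(self, i, progress):
        """Fold algorithm i's latest measured progress into its forecast
        via simple exponential smoothing."""
        self.forecast[i] = (self.alpha * progress
                            + (1 - self.alpha) * self.forecast[i])

    def allocate(self, budget):
        """Split `budget` time units proportionally to the forecasts."""
        total = sum(self.forecast)
        return [budget * f / total for f in self.forecast]
```

A moving-average forecaster would drop in the same way, replacing the smoothing update with the mean of the last k observations; the allocation step is unchanged.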
This paper deals with the problem of preemptive scheduling in a two-stage supply chain framework comprising a production stage and a transportation stage. In the production stage, jobs are processed on a manufacturer's bounded serial batching machine; preemptions are allowed, and a set-up time is required before each new batch is processed. In the transportation stage, each batch is delivered to a customer by a single vehicle. The objective is to minimize the makespan by making mutually coordinated decisions for both stages. Two versions are studied: in the first, all jobs are available to be processed at time zero; in the second, jobs have different release times. An time algorithm is developed for the first version, and we show that it produces an optimal schedule for the entire problem. For the second version, based on several useful properties, we design an time heuristic and a novel lower bound. The worst-case performance ratio of our heuristic is bounded by 2. Our computational study on random instances of different scales shows that high-quality solutions are returned in reasonable computation times for both small-scale and large-scale instances.
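The coupling between the two stages can be illustrated with a toy makespan evaluation for a fixed batch sequence (our own simplification, not the paper's algorithm): the serial batching machine pays a set-up time before each batch, and the single vehicle delivers finished batches one at a time, so a batch may wait either for the machine or for the vehicle's return. The assumption of a fixed round-trip time per delivery is ours.

```python
# Toy makespan evaluation for a fixed batch sequence in the two-stage
# setting: serial batching with per-batch setup, then delivery by a
# single vehicle with a fixed round trip of 2*travel per batch.
def makespan(batches, setup, travel):
    """batches: list of lists of job processing times, in sequence order."""
    machine_done, vehicle_free, last_delivery = 0.0, 0.0, 0.0
    for batch in batches:
        machine_done += setup + sum(batch)        # setup, then the batch's jobs
        depart = max(machine_done, vehicle_free)  # wait for batch AND vehicle
        last_delivery = depart + travel           # batch reaches the customer
        vehicle_free = depart + 2 * travel        # vehicle returns empty
    return last_delivery
```

Even this toy version shows why the two stages must be coordinated: packing jobs into fewer batches saves set-ups but can leave the vehicle idle, while many small batches can make the vehicle's round trips the bottleneck.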