Title: Optimizing the extraction of timber from a large forest

Authors: Miguel Constantino (CMAFcIO, Ciências, ULisboa), Marta Mesquita (CMAFcIO, ISA, ULisboa), Susete Marques (Centro de Estudos Florestais and Laboratory Terra, ISA, ULisboa) and José G. Borges (Centro de Estudos Florestais and Laboratory Terra, ISA, ULisboa)

Abstract: One of the main challenges in forest management planning involves decisions regarding the extraction of timber from the forest, namely the construction and maintenance of forest roads, the gathering of timber at loading sites in management units, and the transport to the final destination (warehouse or factory).

Management units are represented on a map by polygons defined by vertices and edges. Forest roads can be built along polygon edges. In addition, there is a network of public roads that cross the forest. The intersection of any of these public roads with an edge of a polygon also defines a vertex (exit vertex). In each period of a planning horizon, timber harvested from a management unit is gathered at one of its vertices. It is then loaded and transported by trucks that travel along forest roads until they reach an exit vertex and take a public road to the final destination. A forest road segment only exists in a period if it is built in that period, or if it existed in the previous period and is maintained.

A natural MILP formulation considers binary variables associated with road segment construction and maintenance, and continuous variables corresponding to the timber flow on road segment {i,j} in the direction from i to j in period t. Among others, big-M constraints are needed to relate the flow variables to the binary variables that guarantee the existence of road {i,j}. Unfortunately, the best big-M value is very large compared to the values the flow variables take in most solutions, and the formulation provides weak linear relaxation bounds. To improve the formulation, one can disaggregate the flow variables into multiflow variables with an index u corresponding to the management unit of origin of the timber flow. This allows the reduction of the big-M value, leading to very tight linear relaxation bounds. However, the number of variables increases prohibitively.
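
For intuition, a schematic version of the linking constraints (with illustrative notation, not necessarily the authors' own: flow variables f_{ijt}, multiflow variables f^u_{ijt}, road-existence binaries y_{ijt}, and h_{ut} the volume available for transport from unit u in period t) could read

\[ f_{ijt} + f_{jit} \le M \, y_{ijt}, \qquad M \approx \text{total harvest volume over all units,} \]
\[ f^u_{ijt} + f^u_{jit} \le h_{ut} \, y_{ijt} \quad \forall u, \qquad f_{ijt} = \sum_{u} f^u_{ijt}. \]

The disaggregated constraints replace the single large value M by the unit-specific bounds h_{ut}, which is what tightens the linear relaxation, at the cost of one extra index on the flow variables.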

We exploit some features of a real-world problem, arising in a forest in Northwest Portugal, to improve the big-M values in the initial formulation and to substantially reduce the number of multiflow variables in the strengthened formulation. We present computational experience and discuss the applicability of the proposed solution approach.


Title: ε-constraint based representation methods for multi-objective integer programming

Authors: Mariana Mesquita da Cunha (CEG-IST, IST, ULisboa), José Rui Figueira (CEG-IST, IST, ULisboa) and Ana Paula Barbosa-Póvoa (CEG-IST, IST, ULisboa)

Abstract: Many decision problems, although sometimes modelled as single-objective, are, in fact, multi-objective problems. However, in real-world problems, not only is generating the whole Pareto front often impractical, but showing and analysing it with a decision maker may also be overwhelming. Hence, to simplify the interaction process with the decision maker, one option is to show a representation of the Pareto front and to focus on generating only the desired subset of Pareto-front solutions. Considering that a good subset of solutions is one that accounts for uniformity, coverage and cardinality, three generation algorithms based on the ε-constraint method, each addressing one of these concerns, are put forward. Additionally, the algorithm that targets the subset's cardinality and the one that focuses on uniformity include procedures that skip the search for redundant solutions. Finally, all three algorithms are insensitive to the quality of the estimation of the Pareto-front extremes. Results show that the proposed algorithms, apart from generating representations of the Pareto front in an efficient way, are also competitive when generating the full Pareto front.
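
For context, the basic ε-constraint scalarization that such representation methods build on (written here for a bi-objective integer program; the notation is generic, not taken from the talk) is

\[ \min \; f_1(x) \quad \text{s.t.} \quad f_2(x) \le \varepsilon, \; x \in X, \]

where solving the problem for a sequence of values of \varepsilon yields a sequence of (weakly) efficient solutions; how the values of \varepsilon are chosen is what determines the uniformity, coverage and cardinality of the resulting representation.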


Title: Decision Space Robust Representations for Discrete Multi-Objective Optimization Problems

Authors: Michael Stiglmayr (School of Mathematics and Natural Sciences, University of Wuppertal), José Rui Figueira (CEG-IST, IST, ULisboa), Kathrin Klamroth (School of Mathematics and Natural Sciences, University of Wuppertal), Luis Paquete (Department of Informatics Engineering, CISUC, University of Coimbra), Britta Schulze (School of Mathematics and Natural Sciences, University of Wuppertal)

Abstract: We introduce robustness measures in the context of discrete multi-objective programming problems. The proposed measures are in line with the concept of decision robustness, which considers the uncertainty with respect to the implementation of a specific solution. An efficient solution is considered to be decision robust if many solutions in its neighborhood are efficient as well. This rather new area of research differs from robustness concepts dealing with imperfect knowledge of data parameters. Our approach relies on a two-phase procedure, where in the first phase the set of all efficient solutions is computed, and in the second phase the neighborhood of each of these solutions is determined. The indicators we propose are based on the knowledge of these neighborhoods.
We extend these robustness measures to subsets of efficient solutions, so that the robustness of a representation can be considered a representation quality measure alongside coverage, uniformity, or the ε-indicator.
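
As a toy illustration of a neighborhood-based indicator (assuming binary decision vectors and a Hamming-distance-1 neighborhood; the actual indicators proposed in the talk may differ), one could compute, for each efficient solution, the fraction of its neighbors that are also efficient:

    from itertools import combinations

    def hamming_neighbors(x, flips=1):
        # All 0/1 vectors obtained from x by flipping 'flips' coordinates.
        for positions in combinations(range(len(x)), flips):
            y = list(x)
            for p in positions:
                y[p] = 1 - y[p]
            yield tuple(y)

    def robustness_scores(efficient_set):
        # Fraction of Hamming-1 neighbors of each efficient solution that are efficient too.
        eff = set(map(tuple, efficient_set))
        return {x: sum(nb in eff for nb in hamming_neighbors(x)) / len(x) for x in eff}

(A binary vector of dimension n has exactly n Hamming-1 neighbors, hence the division by len(x).)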


Title: A decomposition algorithm for the robust berth allocation and quay crane scheduling problem under uncertainty

Authors: Filipe Rodrigues (ISEG, ULisboa) and Agostinho Agra (Universidade de Aveiro)

Abstract: We consider an integrated berth allocation and quay crane scheduling problem where the arrival times of the vessels may be affected by uncertainty. The problem is modelled as a two-stage robust mixed integer program where the berth allocation decisions are taken before the exact arrival times are known, and the crane assignment and scheduling operations are adjusted to the arrival times. The robust problem is solved by a decomposition algorithm that decomposes the problem into a master problem and a separation problem. The second-stage decisions are represented by binary variables that strongly affect the overall efficiency of the decomposition algorithm. In this talk, we present several improvement strategies to make the proposed algorithm more efficient. Such strategies can easily be extended to other optimization problems and their positive effect is clearly shown by the computational results.
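
In outline, a decomposition of this kind alternates between the two problems until the bounds meet. The sketch below is generic (the 'master' and 'separation' objects and their methods are hypothetical placeholders, not the authors' implementation):

    def decompose(master, separation, tol=1e-6, max_iters=100):
        # Generic loop: the master fixes first-stage decisions for the scenarios
        # seen so far; the separation problem returns a worst-case realization
        # of the uncertain data (here, vessel arrival times) for those decisions.
        scenarios = [separation.nominal_scenario()]
        first_stage = None
        for _ in range(max_iters):
            first_stage, lower_bound = master.solve(scenarios)
            worst, upper_bound = separation.solve(first_stage)
            if upper_bound - lower_bound <= tol:
                break
            scenarios.append(worst)  # add the corresponding second-stage variables/cuts
        return first_stage

In the problem of the talk, the first-stage decisions are the berth allocations, while the crane assignment and scheduling operations form the second stage, adjusted to the arrival times.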


Title: Designing Master Surgery Schedules with Downstream Unit Integration via Stochastic Programming

Authors: Daniel Santos (CEG-IST, IST, ULisboa) and Inês Marques

Abstract: Surgical activity has a substantial impact on all areas of a hospital. Additionally, social concerns arise related to equity and speed of access. Therefore, operating room management is paramount in modern society. This work studies the master surgery scheduling problem, that is, the problem of assigning surgical specialties to operating room blocks, where a block represents a shift of an operating room. For a master surgery schedule to be applicable in practice, multiple considerations must be taken into account. The particular focus of this work is on the integration of downstream units (i.e., wards or the ICU). Although, in a tactical planning scenario, operational bed requirements are unknown, these may be estimated based on historical data. We propose a novel stochastic programming model that captures the uncertainty in the bed requirements, with a recourse function that penalizes the overutilization of beds. A solution approach based on Benders decomposition is developed, and results for generated instances mimicking real-life data are presented.
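
Schematically, a two-stage model of this kind (with illustrative notation, not necessarily the authors': x the specialty-to-block assignment, \xi_s a bed-requirement scenario with probability \pi_s, C_{dw} the bed capacity of unit w on day d, D_{dw} the bed demand induced by x under \xi_s, and \rho the overutilization penalty) has the form

\[ \min_{x \in X} \; c(x) + \sum_{s} \pi_s \, Q(x, \xi_s), \qquad Q(x, \xi_s) = \rho \sum_{d, w} \big[ D_{dw}(x, \xi_s) - C_{dw} \big]^{+}, \]

where [\,\cdot\,]^{+} denotes the positive part. A recourse function with this separable structure is what makes a Benders-type decomposition natural.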


Title: New Piecewise Relaxation with a Logarithmic Partitioning Scheme for Quadratically Constrained Problems

Authors: Pedro Castro (CMAFcIO, ULisboa)

Abstract: The multiparametric disaggregation technique (MDT) can be used to approximate (Teles et al., 2013) or relax (Kolodziej et al., 2013; Castro, 2016) bilinear terms in the context of the global optimization of (mixed-integer) quadratically constrained problems, (MI)QCPs. Its main advantage compared to spatial branch-and-bound algorithms is that bilinear terms are tackled simultaneously instead of sequentially, following the partitioning of one of the variables in every bilinear term. As a consequence, reductions of orders of magnitude in computational time can be achieved when it is incorporated in two-stage MILP-NLP global optimization algorithms. The piecewise relaxation from MDT can consider any number, N, of intervals in the partition, but it only becomes a logarithmic partitioning scheme when N is a power of the chosen basis for numeric representation. In contrast, the logarithmic partitioning scheme (LR) of Misener et al. (2011) can handle any value of N. The major difference is that LR needs only a logarithmic number of binary variables per partitioned variable instead of N, leading to superior computational performance with increasing N. Aiming at closing the gap between MDT and LR, we now generalize the MDT relaxation to benefit from a logarithmic partitioning scheme for a wider range of values of N. The idea is to select the optimal interval in the partition by using a mixed-radix numeral system, following the prime factorization of N. The mixed-radix MDT is compared to its predecessor, the single-base MDT (linear partitioning scheme), and to LR, by considering benchmark instances of the well-known q- and tp-formulations of the pooling problem. Note that all mixed-integer linear programming (MILP) relaxations compute the same upper bound for the profit maximization problem, so we can simply use computational time to evaluate performance. The results over a set of 66 data points show that the new mixed-radix formulation has a 74.2% probability of being the fastest, compared to just 6.1% for the single-base MDT. Thus, the generalization of the MDT relaxation has a major impact on computational performance.
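
A minimal sketch of the underlying mixed-radix idea (illustrative only, not the paper's MILP relaxation itself): an interval index in {0, ..., N-1} is encoded by digits whose radices follow the prime factorization of N, and in the relaxation each such digit would be selected through binary variables.

    def prime_factors(n):
        # Prime factorization with multiplicity, e.g. 12 -> [2, 2, 3].
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    def mixed_radix_digits(index, radices):
        # Digits of 'index' in the mixed-radix system given by 'radices'
        # (least significant digit first).
        digits = []
        for r in radices:
            digits.append(index % r)
            index //= r
        return digits

    # Example: N = 12 intervals -> radices [2, 2, 3]; interval 7 has digits [1, 1, 1],
    # since 7 = 1 + 1*2 + 1*(2*2).
    print(mixed_radix_digits(7, prime_factors(12)))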