Combinatorial Benders' Cuts for the Strip Packing Problem

Combinatorial Benders' Cuts for the Strip Packing Problem

We study the strip packing problem, in which a set of two-dimensional rectangular items has to be packed in a rectangular strip of fixed width and infinite height, with the aim of minimizing the height used. The problem is important because it models a large number of real-world applications, including cutting operations where stocks of materials such as paper or wood come in large rolls and have to be cut with minimum waste, scheduling problems in which tasks require a contiguous subset of identical resources, and container loading problems arising in the transportation of items that cannot be stacked one over the other. The strip packing problem has been attacked in the literature with several heuristic and exact algorithms; nevertheless, benchmark instances of small size remain unsolved to proven optimality. In this paper we propose a new exact method that solves a large number of the open benchmark instances within a limited computational effort. Our method is based on a Benders' decomposition, in which in the master we cut items into unit-width slices and pack them contiguously in the strip, and in the slave we attempt to reconstruct the rectangular items by fixing the vertical positions of their unit-width slices. If the slave proves that the reconstruction of the items is not possible, then a cut is added to the master, and the algorithm is reiterated. We show that both the master and the slave are strongly NP-hard problems and solve them with tailored preprocessing, lower and upper bounding techniques, and exact algorithms. We also propose several new techniques to improve the standard Benders' cuts, using the so-called combinatorial Benders' cuts, and an additional lifting procedure. Extensive computational tests show that the proposed algorithm provides a substantial breakthrough with respect to previously published algorithms.


Introduction
In the strip packing problem (SPP) we are given a set N = {1, 2, ..., n} of rectangular items, each having width w_j and height h_j, and a rectangular strip of width W and infinite height. The aim is to pack the items in the strip by minimizing the height used for the packing. Items cannot overlap, must be packed with their edges parallel to the borders of the strip, and cannot be rotated. An SPP solution is depicted in Figure 1(a), where a set of seven items is packed in a strip of width W = 10, by using minimum height z = 9.
The SPP is important because it models a large number of real-world applications. It models cutting applications in the manufacturing industry, where stocks of materials such as paper, wood, glass, and metal come in large rolls and have to be cut by minimizing waste; see, e.g., Gilmore and Gomory (1965). It also models scheduling problems in which tasks require a contiguous subset of identical resources, see, e.g., Augustine et al. (2009), and packing problems arising in the transportation of items that cannot be stacked one over the other, see, e.g., Iori et al. (2007).
The SPP is a challenging combinatorial problem. It is NP-hard in the strong sense, and also very difficult to solve in practice. Benchmark instances proposed decades ago and containing just 20 items remain unsolved to proven optimality despite dozens of attempts. In terms of exact algorithms, the best results for the SPP have been obtained by the use of combinatorial branch-and-bound algorithms that build solutions by packing items one at a time in the strip. Among these, we cite the algorithms by Martello et al. (2003), Lesh et al. (2004), Bekrar et al. (2007), Alvarez-Valdes et al. (2009), Kenmochi et al. (2009), Boschetti and Montaletti (2010), and Arahori et al. (2012). Techniques based on other concepts have also been developed: mixed integer linear programs (MILP) were used in Sawaya and Grossmann (2005), Westerlund et al. (2007), and Castro and Oliveira (2011), whereas SAT-based algorithms were developed in Grandcolas and Pinto (2010) and Soh et al. (2010).
In terms of approximation schemes, Harren et al. (2011) presented a (5/3 + ε)-approximation algorithm running in polynomial time. Kenyon and Rémila (2000) proposed an asymptotic fully polynomial time approximation scheme providing a solution of cost not higher than (1 + ε) opt + O(1/ε²), where opt is the optimal solution value. Jansen and Solis-Oba (2009) presented an asymptotic polynomial time approximation scheme. Concerning heuristic algorithms with good practical computational performance, almost all metaheuristic paradigms have been applied to the SPP. In recent years, good results have been obtained with a GRASP technique in Alvarez-Valdes et al. (2008), a squeaky wheel optimization methodology in Burke et al. (2011), a two-stage heuristic in Leung et al. (2011), and a skyline heuristic in Wei et al. (2011).
The majority of the exact algorithms for the SPP make use of the two following relaxations, obtained by "cutting" items into unit-height or unit-width slices, respectively. The first relaxation is based on the well-known bin packing problem, in which a set of weighted items has to be packed into the minimum number of capacitated bins.
Definition 1. The bin packing problem with contiguity constraints (1CBP) is the relaxation of the SPP obtained by cutting each item j into h_j slices of height 1 and width w_j, and the strip into bins of height 1 and width W. The aim is to pack the slices into the minimum number of bins, by ensuring that slices derived from the same item are contiguous one to the other: if the kth slice of item j (j ∈ N, k = 1, 2, ..., h_j) is packed in bin i, then the (k+1)th slice, if any, must be packed in bin i + 1.
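The contiguity requirement of Definition 1 can be checked mechanically. Below is a minimal sketch (not from the paper) in which `bins[i]` lists the `(item, slice)` pairs packed in bin i, slices are indexed from 0, and `heights[j]` is h_j:

```python
def contiguous(bins, heights):
    """Check the 1CBP contiguity constraint: if slice k of item j is in
    bin i, then slice k+1 (if it exists) must be in bin i+1."""
    where = {}                              # (item, slice) -> bin index
    for i, content in enumerate(bins):
        for item, k in content:
            where[(item, k)] = i
    for (item, k), i in where.items():
        if k + 1 < heights[item] and where.get((item, k + 1)) != i + 1:
            return False
    return True
```

For instance, an item of height 2 split across bins 0 and 1 is contiguous, while the same slices placed in bins 0 and 2 are not.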
The second relaxation is based on the standard parallel processor scheduling problem (P||C_max), in which a set of jobs having a given processing time has to be scheduled on a set of processors, so as to minimize the largest total processing time assigned to a processor (makespan).
Definition 2. The parallel processor scheduling problem with contiguity constraints (P|cont|C_max) is the relaxation of the SPP obtained by cutting each item j into w_j slices of width 1 and height h_j, and the strip into W vertical slices. We associate each item slice with a job having processing time h_j, and each vertical strip slice with a processor. The aim is to assign the slices to the processors, by minimizing the makespan and ensuring contiguity between the slices of the same item (if the kth slice of item j is assigned to processor i, then the (k+1)th slice, if any, must be assigned to processor i + 1).
Figure 1(b) shows an optimal solution for the 1CBP relaxation of the SPP instance of Figure 1(a): items have been cut horizontally and packed in nine bins. Similarly, Figure 1(c) gives an optimal solution to the relaxation induced by the P|cont|C_max on the same SPP instance: items have been cut vertically and assigned to the W processors using minimum makespan 9. (Note that, in the literature, most graphical representations of bin packing and parallel processor scheduling problems, when not related to the SPP, draw bins as vertical containers and processors as horizontal lines, so they are rotated by 90° with respect to our figures.) Both the 1CBP and the P|cont|C_max are known to be strongly NP-hard. From a practical point of view they are, however, easier than the SPP, and the solution values they provide (which can differ from one another) are usually tight lower bounds on the optimal SPP height.
Several algorithms, starting from Martello et al. (2003) and notably including Alvarez-Valdes et al. (2009), also used the 1CBP and/or the P|cont|C_max solution to try to compute a feasible solution for the original SPP instance. Suppose we are given the P|cont|C_max solution of Figure 1(c); we can try to obtain the feasible SPP solution of Figure 1(a) by adjusting the y-coordinates of the slices, so that all slices belonging to the same item are at the same y-coordinate, while ensuring that no items overlap. This problem can be defined as follows.
Definition 3. Given a feasible P|cont|C_max solution using makespan z, in which the first slice of an item j ∈ N is packed in processor x_j, problem y-check is to determine whether there exists an array (y_j), with 0 ≤ y_j ≤ z − h_j, such that the solution in which every item j is packed with its bottom left corner in position (x_j, y_j) is feasible for the SPP.
Problem y-check is strongly NP-complete (this is proved in §3.1), and most of the attempts developed in the literature for its solution are heuristic. The approach we propose is very innovative with respect to the literature, because we solve problem y-check with an exact algorithm and, most important, we use it in a systematic way to optimally solve the SPP.
In particular, we propose a new exact algorithm for the SPP that exploits the full potential of the introduced relaxations by means of a Benders' decomposition. In the first step, in the master problem, we solve to optimality the P|cont|C_max relaxation. Then we try to obtain a feasible SPP solution, by solving the slave problem y-check. If a feasible solution is achieved, then it is also optimal, and hence we terminate. Otherwise a Benders' cut prohibiting the current P|cont|C_max solution is added to the master, and the procedure is reiterated.
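The overall loop can be sketched as follows. This is a minimal illustration, not the paper's implementation: `solve_master` (an exact P|cont|C_max solver over the current cut pool) and `y_check` (the exact slave) are hypothetical stand-ins for the algorithms developed later in the paper.

```python
def benders_spp(items, W, solve_master, y_check):
    """Benders-style loop for the SPP.
    items: list of (w_j, h_j); returns (height, [(x_j, y_j), ...])."""
    cuts = []  # each cut: a set of (item, column) pairs that must not all recur
    while True:
        # master: optimal P|cont|Cmax respecting all cuts found so far
        z, x_cols = solve_master(items, W, cuts)   # x_cols[j] = column of item j
        # slave: try to find consistent y-coordinates for the fixed columns
        feasible, ys = y_check(items, W, z, x_cols)
        if feasible:
            return z, list(zip(x_cols, ys))        # optimal SPP solution
        # forbid this assignment of first-slice columns (a Benders' cut)
        cuts.append({(j, x_cols[j]) for j in range(len(items))})
```

On termination the master height z is a valid lower bound that the slave has certified as achievable, hence optimal.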
Benders' cuts are known to be weak in practice, and hence we try to strengthen them by borrowing the concept of combinatorial Benders' cuts, introduced in Codato and Fischetti (2006). In practice we look for a minimal infeasible subset, i.e., a minimal subset of items that still causes the infeasibility of the considered solution, and introduce the cut only for this subset, rather than for the complete set of items. After this has been done, we further improve the cut by means of a tailored lifting procedure based on the solution of linear programs.
The three problems we address (master, slave, and search for the minimal infeasible subset) are all difficult, and may need to be solved several times. However, our resulting algorithm is usually fast in practice and, owing also to the large number of optimization techniques that we propose, obtains very good computational results on the benchmark sets of instances.

Main Contributions of This Paper
The main contributions of this work are the following:
• we propose an innovative Benders' decomposition that models the SPP and exploits the full potential of the P|cont|C_max relaxation;
• we present several preprocessing, lower bounding, heuristic, and exact algorithms for the master problem (derived from the P|cont|C_max), taken from the related literature or newly developed;
• we prove that the slave problem in our decomposition (problem y-check) is strongly NP-hard;
• we solve problem y-check with an algorithm based on new preprocessing techniques and a new enumeration tree enriched with fathoming criteria, and show that this method is highly efficient in practice;
• we propose nontrivial ways to strengthen the Benders' cuts into combinatorial Benders' cuts, and present a new effective lifting procedure based on the solution of a linear model;
• we design an overall algorithm for the SPP and test it on the benchmark instances, obtaining very good computational results. In particular, we solve for the first time to proven optimality instance cgcut03 by Christofides and Whitlock (1977), and instances gcut04 and gcut11 by Beasley (1985). We provide 34 new proven optimal solutions for the 500 instances proposed by Berkey and Wang (1987) and Martello and Vigo (1998). We obtain, on average, better solutions than all previously published exact algorithms, with a comparable or smaller computational effort.

A Benders' Decomposition for the Strip Packing Problem
We first provide the necessary notation, and then describe our decomposition approach and the prior work in the related literature.

Notation
We suppose the strip is located in the positive quadrant of the Cartesian coordinate system, with its bottom left corner located in position (0, 0), as shown in Figure 1(a). Let H be a valid upper bound on the optimal solution value. For simplicity we call rows the H unit-height bins obtained by cutting the strip horizontally (see Figure 1(b)), and columns the W unit-width processors obtained by cutting the strip vertically (see Figure 1(c)). Rows are numbered from 0 to H − 1, and columns from 0 to W − 1. We say that an item covers a row (respectively, a column) if the row (column) intersects the item in the considered packing (e.g., item 3 in Figure 1(a) covers rows 2 and 3, and columns 2–5).
We say that an item j is packed in position (p_j, r_j) if its bottom left corner has x-coordinate equal to p_j and y-coordinate equal to r_j (e.g., item 3 in Figure 1(a) is packed in (2, 2)). For feasibility we must have 0 ≤ p_j ≤ W − w_j and 0 ≤ r_j ≤ H − h_j. This set of feasible positions may be reduced by considering the well-known principle of normal patterns by Herz (1972) and Christofides and Whitlock (1977), which states that there is an optimal solution in which each item is moved as far down and as far left as possible (hence touching, at its left and at its bottom, either the strip or the border of another item). To this aim we define

$\mathcal{W}_j = \{p: p = \sum_{i \in S} w_i,\ S \subseteq N \setminus \{j\},\ 0 \le p \le W - w_j\}$  (1)
$\mathcal{H}_j = \{r: r = \sum_{i \in S} h_i,\ S \subseteq N \setminus \{j\},\ 0 \le r \le H - h_j\}$  (2)

the sets of normal patterns for item j along the x- and y-axis, respectively. Sets $\mathcal{W}_j$ and $\mathcal{H}_j$ are computed using a standard dynamic programming (DP) technique; see, e.g., Christofides and Whitlock (1977). We similarly define

$\mathcal{W}_j(q) = \{p \in \mathcal{W}_j: q - w_j + 1 \le p \le q\}$  (3)
$\mathcal{H}_j(t) = \{r \in \mathcal{H}_j: t - h_j + 1 \le r \le t\}$  (4)

the subset of normal patterns along the x-axis for which item j covers column q, and the subset of normal patterns along the y-axis for which item j covers row t, respectively. Finally, let $\mathcal{W} = \bigcup_{j \in N} \mathcal{W}_j$ and $\mathcal{H} = \bigcup_{j \in N} \mathcal{H}_j$ be the global sets of normal patterns along the x- and y-axis, respectively.
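The normal-pattern sets can be computed with a standard subset-sum DP over the sizes of the other items. A minimal sketch (one axis; the other axis is symmetric):

```python
def normal_patterns(sizes, cap, j):
    """Normal patterns for item j along one axis: every coordinate that is
    the sum of sizes of a subset of the other items and still leaves room
    for item j (i.e., is at most cap - sizes[j])."""
    reachable = [False] * (cap - sizes[j] + 1)
    reachable[0] = True
    for i, s in enumerate(sizes):
        if i == j:
            continue
        # 0/1 subset-sum DP: scan downward so each item is used at most once
        for p in range(len(reachable) - 1 - s, -1, -1):
            if reachable[p]:
                reachable[p + s] = True
    return [p for p, ok in enumerate(reachable) if ok]
```

For example, with widths (3, 2, 4) and W = 10, item 0 can only start at x ∈ {0, 2, 4, 6}: the sums of subsets of {2, 4} that do not exceed W − 3.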

A Mathematical-Logic Model
The SPP can be modeled by using two sets of variables: a binary variable x_jp taking value 1 if item j is packed in column p, and 0 otherwise, and a continuous nonnegative variable y_j giving the height of the bottom border of item j. A single variable z is then used to define the total height of the solution. The SPP can be described through the following mathematical-logic model:

$\min z$  (5)
$\sum_{p \in \mathcal{W}_j} x_{jp} = 1, \quad j \in N$  (6)
$\sum_{j \in N} \sum_{p \in \mathcal{W}_j(q)} h_j x_{jp} \le z, \quad q \in \mathcal{W}$  (7)
$y_j + h_j \le z, \quad j \in N$  (8)
$\mathrm{nonoverlap}\bigl([y_j, y_j + h_j]: j \in N,\ \sum_{p \in \mathcal{W}_j(q)} x_{jp} = 1\bigr), \quad q \in \mathcal{W}$  (9)
$x_{jp} \in \{0, 1\}, \quad j \in N,\ p \in \mathcal{W}_j$  (10)
$y_j \ge 0, \quad j \in N$  (11)

Constraints (6) impose that each item is packed in exactly one column. Constraints (7) force z to be not smaller than the total height of the items that cover any column q, whereas constraints (8) force z to be not smaller than the upper border of any item j. Note that constraints (7) are not strictly necessary for the correctness of the model, but are essential for our decomposition approach. Logical constraints (9) impose that the vertical intervals [y_j, y_j + h_j] corresponding to the set of items that cover the same column q do not overlap.

The Decomposition Approach
In the classical decomposition approach by Benders (1962), the aim is to solve an MILP problem $P: \min\{c^T y + f(x): Ay + g(x) \ge b,\ y \ge 0,\ x \in D_x\}$. The method starts by finding a vector $\bar{x} \in D_x$, and considers the linear slave problem $SP: \min\{c^T y + f(\bar{x}): Ay \ge b - g(\bar{x}),\ y \ge 0\}$, which can be solved by means of the dual slave $SD: \max\{u^T(b - g(\bar{x})) + f(\bar{x}): u^T A \le c^T,\ u \ge 0\}$. A solution $\bar{u}$ of SD induces a linear constraint $z \ge \bar{u}^T(b - g(x)) + f(x)$, the so-called Benders' cut, that is used to populate the master problem $MP: \min\{z: z \ge u_k^T(b - g(x)) + f(x),\ k = 1, 2, \ldots, K,\ x \in D_x\}$, where $u_1, u_2, \ldots, u_K$ are the solutions of the K dual problems obtained by iterating the above procedure.
A special case occurs when c = 0 and we start by optimally solving the master problem obtained by removing variables y from P, i.e., by setting $\bar{x} = \arg\min\{f(x): x \in D_x\}$. The slave SP then becomes a feasibility check on the system $\{Ay \ge b - g(\bar{x}),\ y \ge 0\}$. If SP has a solution $\bar{y}$, then $(\bar{x}, \bar{y})$ is an optimal solution to P. If, instead, SP has no feasible solution, then $\bar{x}$ is not feasible for P, and we know that at least one of the $x_j$ variables must take a value different from $\bar{x}_j$. We write this condition as a linear constraint and add it to the master problem.
A better implementation does not add the cut containing all the x variables, but finds a smaller (possibly minimal) subset of variables that still induces infeasibility in the slave problem, and uses this set to derive the cut. The resulting constraint is called a combinatorial Benders' cut in Codato and Fischetti (2006), but the method can also be seen as an implementation of the logic-based Benders' decomposition approach presented in Hooker (2000), Jain and Grossmann (2001), and Hooker and Ottosson (2003). We finally observe that, in the case of combinatorial Benders' cuts, the slave need not be a continuous linear model.
For the SPP, when we remove variables y from model (5)-(11), we obtain an MILP that models the P|cont|C_max problem, namely:

(P|cont|C_max) $\min z$  (12)
$\sum_{p \in \mathcal{W}_j} x_{jp} = 1, \quad j \in N$  (13)
$\sum_{j \in N} \sum_{p \in \mathcal{W}_j(q)} h_j x_{jp} \le z, \quad q \in \mathcal{W}$  (14)
$x_{jp} \in \{0, 1\}, \quad j \in N,\ p \in \mathcal{W}_j$  (15)

Model (12)-(15) was used in Boschetti and Montaletti (2010) to obtain their lower bound $L^{BM}_{F1}$. Suppose now an integer solution $\sigma = (z^s, x^s_{jp})$ to (12)-(15) has been computed: the slave is then to find a feasible solution, if any, to problem y-check:

$y_j + h_j \le z^s, \quad j \in N$  (16)
$\mathrm{nonoverlap}\bigl([y_j, y_j + h_j]: j \in N,\ \sum_{p \in \mathcal{W}_j(q)} x^s_{jp} = 1\bigr), \quad q \in \mathcal{W}$  (17)
$y_j \ge 0, \quad j \in N$  (18)
If y-check returns a feasible solution, then we have obtained an optimal solution of the original SPP instance. Otherwise we forbid the current P|cont|C_max solution σ by adding a cut to the master problem. To this aim, let

$p^s_j = \text{the } x\text{-coordinate of the first slice of item } j \text{ in solution } \sigma$  (19)

The Benders' cut is then

$\sum_{j \in N} x_{j p^s_j} \le n - 1$  (20)

Suppose now we can find a reduced subset of items $C^s \subseteq N$ such that, if we pack all its items in positions p^s_j, then problem y-check is still infeasible (the way in which we look for $C^s$ is discussed in §4). Then we obtain the combinatorial Benders' cut

$\sum_{j \in C^s} x_{j p^s_j} \le |C^s| - 1$  (21)

We can then model the SPP as the following master problem:

$\min z$  (22)
$\sum_{p \in \mathcal{W}_j} x_{jp} = 1, \quad j \in N$  (23)
$\sum_{j \in N} \sum_{p \in \mathcal{W}_j(q)} h_j x_{jp} \le z, \quad q \in \mathcal{W}$  (24)
$\sum_{j \in C^s} x_{j p^s_j} \le |C^s| - 1, \quad \text{for each infeasible subset } C^s \text{ detected}$  (25)
$x_{jp} \in \{0, 1\}, \quad j \in N,\ p \in \mathcal{W}_j$  (26)

An appealing aspect of this decomposition is that it allows us to develop tailored optimization techniques for both the master and the slave, taking advantage of their combinatorial structures.
In particular, we solve the slave with preprocessing techniques and an enumeration tree enriched with fathoming criteria, obtaining an algorithm (see §3) that is very fast in practice. For the master, we found it computationally convenient to develop an iterative procedure that attempts different tentative strip heights, in an interval given by valid lower and upper bounds. At each attempt it solves the recognition version of the master, and updates the tentative strip height accordingly. The procedure is described in detail in §5, and is based on preprocessing techniques, a large set of lower and upper bounding algorithms, and the direct solution of the MILP (22)-(26) with delayed cut generation.
The latter algorithm largely benefits from techniques aimed at finding improved Benders' cuts, which we describe in §4. The literature on this area of research is quite recent, so we briefly summarize it in the next section.

Prior Work
The concept of primal decomposition of an MILP was originally proposed by Benders (1962), who studied the case in which the master results in an MILP and the slave in an LP. Later, Geoffrion (1972) generalized it to the case in which the slave is also an MILP.
In recent years, Hooker and Ottosson (2003) presented the concept of logic-based Benders' decomposition, a general framework in which both master and slave are MILPs, and the slave is solved by logical deduction methods, whose outcome is used to produce valid cuts. An interesting use of this approach is the one in which the master is solved by using standard MILP optimization, and the slave with a constraint program (CP). Successful examples of this type of decomposition have been proposed by, e.g., Jain and Grossmann (2001) and Hooker (2007). Jain and Grossmann (2001) present hybrid MILP/CP decomposition methods to solve a class of problems where the variables in the slave have zero coefficient in the original objective function, and apply them to the problem of scheduling jobs on parallel machines with release and due dates, while minimizing the sum of input processing costs. The master is an MILP that produces an assignment of jobs to machines and is solved with IBM Ilog Cplex. The slave is a CP that checks the feasibility of each assignment of jobs to a machine, and is solved with IBM Ilog Scheduler. Hooker (2007) proposes a similar approach to solve again a class of scheduling problems on parallel machines, but with the aim of minimizing either cost, makespan, or total tardiness.
Note that our algorithm differs from the ones by Jain and Grossmann (2001) and Hooker (2007), because it solves master and slave with dedicated combinatorial algorithms, and considers the more general case in which the activities of the machines are not independent of one another, but strictly related (intuitively, an item j must be assigned to w_j machines).
The name "combinatorial Benders' cuts" was introduced by Codato and Fischetti (2006), who studied a decomposition in which the master is an ILP involving binary variables x, and the slave is an LP involving continuous variables y. Whenever a solution to the master is infeasible for the slave, they look for a minimal infeasible subsystem (MIS) of the LP associated with the slave, and then introduce in the master the corresponding cut. Since the problem of determining a MIS is NP-hard, they make use of a greedy algorithm. Moreover, they limit their study to the case in which the constraints relating x and y are given by linear inequalities, each containing a single entry of the x array. Our approach differs from the one by Codato and Fischetti (2006) in several aspects, the most important being the fact that the slave is not an LP, but a strongly NP-complete problem.

Solution of the Slave Problem
We are given the input of the SPP, plus a tentative strip height z^s and a vector (p^s_j) that gives the x-coordinate (i.e., column) in which item j is packed. In our decomposition, vector (p^s_j) is obtained from a starting solution $\sigma = (z^s, x^s_{jp})$ to (12)-(15) by using (19). In this section we describe how to solve the resulting y-check problem (see Definition 3), which calls for the determination of the y-coordinate to be assigned to each item so as to obtain a feasible SPP solution of height z^s, if any exists. We first discuss the problem complexity, and then present our solution algorithm.
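As a baseline, y-check can be stated as an exhaustive search over candidate y-coordinates. The sketch below (not the paper's algorithm, which is far more sophisticated) is only viable for a handful of items, but makes the feasibility condition precise: items sharing a column must occupy disjoint vertical intervals.

```python
from itertools import product

def y_check_bruteforce(items, z, xs):
    """items: list of (w_j, h_j); xs[j] = x-coordinate p_j^s of item j.
    Return a feasible tuple of y-coordinates for strip height z, or None."""
    n = len(items)
    def x_overlap(i, j):
        # items i and j share at least one column
        return xs[i] < xs[j] + items[j][0] and xs[j] < xs[i] + items[i][0]
    for ys in product(*(range(z - h + 1) for _, h in items)):
        ok = True
        for i in range(n):
            for j in range(i + 1, n):
                if x_overlap(i, j) and ys[i] < ys[j] + items[j][1] \
                        and ys[j] < ys[i] + items[i][1]:
                    ok = False       # vertical intervals overlap: infeasible
                    break
            if not ok:
                break
        if ok:
            return ys
    return None
```

The search space is $\prod_j (z - h_j + 1)$, which is exactly why the paper's preprocessing and enumeration tree are needed.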

Complexity
Let us consider the SPP solution depicted in Figure 2, where nine items are packed in a strip of height z^s = 2B + 3, with B a given positive integer. All items have width 1, with the exception of items 3, 4, and 5, which have width 3. They are packed at the x-coordinates p^s = (0, 4, 0, 1, 2, 1, 3, 3, 1), and have the heights shown in the figure. Items 3 and 5 cannot be packed at the same y-coordinate, because they would overlap; so, because of the presence of items 1 and 2, item 3 must be assigned to the bottom of the strip and item 5 to the top (the obvious symmetric solution, where 3 is at the top and 5 at the bottom, is also possible). A further examination of the remaining items shows that the only feasible y-check solution for items 4, 6, 7, 8, and 9 is the one depicted in the figure. This solution leaves two empty buckets of width 1 and height B. These buckets can be used to prove the NP-completeness of y-check, by using a transformation from the following problem.
Definition 4. Partition. Given n items, each having weight $s_j \in \mathbb{Z}^+$ (j = 1, 2, ..., n), find a subset $S \subseteq \{1, 2, \ldots, n\}$ such that $\sum_{j \in S} s_j = \sum_{j=1}^{n} s_j / 2$, if any exists.
Proof. Given an instance of Partition with n items, each having weight s_j, we construct an instance of y-check with 9 + n items. We first set $B = \sum_{j=1}^{n} s_j / 2$ and z^s = 2B + 3. The first nine items that we select for the y-check instance are those depicted in Figure 2, whose dimensions and x-coordinates are described at the beginning of §3.1. We have shown above that these nine items can be feasibly packed in a strip of height z^s if and only if they are given y-coordinates as in Figure 2. Thus, in each feasible y-check solution, two empty buckets of width 1 and height B are left at the x-coordinate 2 of the strip. We complete the instance by adding a y-check item for each item of Partition. These n items have width w_j = 1, x-coordinate p^s_j = 2, and height equal to the weight of the corresponding Partition item, i.e., h_j = s_{j−9} for j = 10, 11, ..., 9 + n. In a feasible y-check solution, items 10, 11, ..., 9 + n must be packed inside the two buckets of height B, so y-check has a feasible solution if and only if Partition has one. Since Partition is NP-complete, the same holds for y-check.
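The reduction is mechanical enough to write down. A minimal sketch of the construction of the added items (the nine gadget items of Figure 2 are assumed to be created separately; the field names are illustrative, not from the paper):

```python
def partition_to_ycheck(weights):
    """Given Partition weights, build the parameters of the y-check
    instance of the proof: B, the strip height z^s = 2B + 3, and one
    unit-width item of height s at x-coordinate 2 per Partition item."""
    total = sum(weights)
    # an odd total makes Partition trivially infeasible; the reduction
    # assumes B = total/2 is integer
    assert total % 2 == 0
    B = total // 2
    z_s = 2 * B + 3
    extra = [{"w": 1, "h": s, "x": 2} for s in weights]
    return B, z_s, extra
```

Packing the extra items into the two height-B buckets then corresponds exactly to splitting the weights into two halves of total B each.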
By using an extension of the above lemma, we can prove the following stronger result: problem y-check is NP-complete in the strong sense.

Algorithms for Problem y-Check
For the solution of y-check we developed several algorithms, and obtained the best computational performance by using a combinatorial enumeration tree, enriched by reduction and fathoming criteria. The resulting algorithm, called the y-check algorithm in the remainder of the paper, starts with three new preprocessing techniques, invoked in sequence one after the other.
Preprocessing 1: Merge Items. For any item j, let us define L_j (respectively, R_j) as the subset of items that can be packed at the left (respectively, at the right) of j. Formally,

$L_j = \{i \in N \setminus \{j\}: p^s_i + w_i \le p^s_j\}$  (27)
$R_j = \{i \in N \setminus \{j\}: p^s_i \ge p^s_j + w_j\}$  (28)

In a first step, we consider items one at a time, by nonincreasing order of p^s_j. For a given item j, if h_i ≤ h_j holds for all i ∈ L_j, then we attempt to pack the items of L_j in the substrip of width p^s_j and height h_j, by invoking the y-check enumeration tree described at the end of this section. If all items in L_j fit into the substrip, then we merge L_j and j into a unique item, say k, having width w_k = p^s_j + w_j, height h_k = h_j, and p^s_k = 0. This preserves the optimality of the solution, because no other item can enter the induced substrip. This first step is based on ideas proposed by Clautiaux et al. (2007) and Alvarez-Valdes et al. (2009) for two-dimensional packing, but extends them to problem y-check.
In a second step, if not all the items in L_j fit into the substrip, or if some items i have h_i > h_j, then we remove items from L_j in an iterative way. We proceed from left to right: let p̄ be the first column occupied by an item in L_j, and w̄ the largest width of an item in L_j packed in p̄. We check whether L_j can be exactly partitioned into two subsets, one completely contained in columns 0, 1, ..., p̄ + w̄ − 1, and one completely contained in columns p̄ + w̄, p̄ + w̄ + 1, ..., p^s_j − 1. If this is possible, then we focus our search on the latter group of columns. Formally, we check whether there are no items i ∈ L_j having p^s_i < p̄ + w̄ and p^s_i + w_i > p̄ + w̄. If no item with this property exists, then we set $L_j = \{i \in L_j: \bar{p} + \bar{w} < p^s_i + w_i \le p^s_j\}$. In this way, we are left with a reduced set of items and a reduced substrip of width p^s_j − (p̄ + w̄) and height h_j. Once again, no item outside L_j may enter this reduced substrip at the left of j, and hence we invoke the enumeration tree to try to merge j and the reduced set L_j. If instead L_j cannot be partitioned, then we increase p̄ to the next column where an item of L_j is packed, and reattempt the partition.
We reiterate until a merging is obtained or L_j is empty. We then repeat the process with R_j, for which we iterate from right to left. For an example, consider Figure 1(c) and j = 4. At the left, L_4 = {2} and no merging is possible. At the right, R_4 = {5, 7} at the first iteration and no merging is possible. At the second iteration R_4 = {5}, and items 4 and 5 are merged into a unique item of width 3 and height 2.
Preprocessing 2: Lift Item Widths. We consider items one at a time, by nondecreasing width, breaking ties by nondecreasing height. For any item j, we compute L_j and R_j using (27) and (28), respectively. Let $\ell_j = \max_{i \in L_j}(p^s_i + w_i)$ if L_j is not empty, and $\ell_j = 0$ otherwise. Similarly, let $r_j = \min_{i \in R_j} p^s_i$ if R_j is not empty, and $r_j = W$ otherwise. We move item j as far left as possible, by setting p^s_j = ℓ_j, and then enlarge its width as much as possible, by setting w_j = r_j − ℓ_j. We then reiterate with the next item.
Note that this preserves the optimality of the solution, because no item can be packed side by side with j in the columns between ℓ_j and r_j. Consider again Figure 1(c). This second preprocessing would produce w_3 = 5, w_4 = 5 (recall that items 4 and 5 were merged by the previous preprocessing), and w_6 = 8. The outcome is depicted in Figure 3(a).
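Preprocessing 2 is short enough to sketch directly. A minimal illustration, assuming items are dicts with keys `w`, `h`, `x` (an assumed representation, not the paper's data structure):

```python
def lift_widths(items, W):
    """Lift item widths in place: for each item j, in nondecreasing-width
    order (ties by height), move it against its left neighbours and widen
    it up to its right neighbours."""
    order = sorted(range(len(items)),
                   key=lambda j: (items[j]["w"], items[j]["h"]))
    for j in order:
        it = items[j]
        # L_j: items entirely to the left; R_j: items entirely to the right
        left = [i for i in items if i is not it and i["x"] + i["w"] <= it["x"]]
        right = [i for i in items if i is not it and i["x"] >= it["x"] + it["w"]]
        l = max((i["x"] + i["w"] for i in left), default=0)
        r = min((i["x"] for i in right), default=W)
        it["x"], it["w"] = l, r - l
    return items
```

Any item sharing no column with j belongs to one of the two neighbour sets, so the widened j only claims columns where nothing can stand beside it.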
Preprocessing 3: Shrink the Strip. This technique is based on the following simple idea. Suppose that a column p is occupied by a set S of items, and that a feasible packing for these items exists. A consequence is that, if no item outside S occupies column p + 1, then the packing of the items in S is also feasible for column p + 1. In practice, it is enough to check the feasibility only for those columns where the left border of an item is packed. Thus we remove all other columns from the instance and reduce the widths of the items and of the strip accordingly. Consider, for example, the instance of Figure 3(a). The only columns that we keep are column 0 (p^s_1 = p^s_2 = 0), column 2 (p^s_3 = p^s_4 = p^s_6 = 2), and column 7 (p^s_7 = 7). The instance that is given to the successive y-check enumeration tree is depicted in Figure 3(b).
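Preprocessing 3 amounts to deleting every column that holds no left border and remapping coordinates. A minimal sketch, on the same assumed dict representation as above:

```python
def shrink_strip(items, W):
    """Keep only the columns containing some item's left border; remap
    x-coordinates and widths in place and return the new strip width."""
    kept = sorted({it["x"] for it in items})
    new_x = {p: k for k, p in enumerate(kept)}
    for it in items:
        right = it["x"] + it["w"]          # first column past the item
        # new width = number of kept columns the item covers
        it["w"] = sum(1 for p in kept if it["x"] <= p < right)
        it["x"] = new_x[it["x"]]
    return len(kept)
```

For instance, items starting at columns 0, 2, and 7 of a width-9 strip collapse to columns 0, 1, and 2 of a width-3 strip.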
Note that these y-check preprocessing techniques considerably reduce the size of the addressed instances, and are much more effective than the standard techniques for the SPP (see, e.g., Boschetti and Montaletti 2010, and references therein), because they largely benefit from the additional information given by the vector (p^s_j). At the end of the preprocessing phase, all items having width equal to the width of the strip are packed at the bottom of the strip and removed from the instance. The packing of the remaining items is attempted with the following exact algorithm.
Enumeration Tree for Problem y-Check. This procedure constructs partial solutions by adding one item at a time, starting from an empty solution. For the sake of simplicity we continue using the standard notation adopted so far (n, W, H, w_j, h_j), but recall that these values may have been modified by the preprocessing. Following a notation common in two-dimensional packing, we define the skyline as the line that touches the top of the packed items; see the dashed line in Figure 3(c). Let h^used(p) be the height of the skyline in column p (i.e., h^used(p) gives the sum of the heights of the packed items and of the possibly created holes that cover column p). Let us also define the niche as the horizontal segment of the skyline that has the smallest value of h^used; see again Figure 3(c). If more horizontal segments having the same smallest value of h^used exist, then the niche is the leftmost one. In the following we suppose that the niche starts in column ℓ and ends in column r (with r included in the niche).
We define h^left and h^right as the heights of the left and the right border of the niche, respectively, and compute them as h^left = h^used(ℓ − 1) if ℓ > 0, and h^left = ∞ otherwise, and, similarly, h^right = h^used(r + 1) if r < W − 1, and h^right = ∞ otherwise.
Let $\tilde{N} \subseteq N$ be the set of items still to be packed at a given node, and $N_r \subseteq \tilde{N}$ the set of items that can be packed in the niche, i.e., $N_r = \{j \in \tilde{N}: p^s_j \ge \ell \text{ and } p^s_j + w_j \le r + 1\}$.
The enumeration tree has one node for each partial solution, and branches on the items that can be packed in the associated niche. At the root node the strip is empty and the niche corresponds to the whole strip. We branch by selecting an item j ∈ N_r, in order, and packing it in position p^s_j. The last descendant node is obtained by packing no item at all in the niche; in this case, we close the whole niche and lift the skyline by setting h^used(p) = min{h^left, h^right} for p = ℓ, ℓ + 1, ..., r. When an item j is packed, if p^s_j > ℓ, then we close the rectangular space of height h_j starting in ℓ and terminating in p^s_j − 1, by setting h^used(p) = min{h^left, h^used(p) + h_j} for all p = ℓ, ℓ + 1, ..., p^s_j − 1. This is done to kill symmetries, because the solutions in which an item is packed in this rectangular space have already been explored by previous nodes, because of the sorting. The resulting tree is explored in depth-first search.
At each node the following fathoming criteria are used:
1. Let h^pack_p be the total height of the items in N̄ that cover column p, for p = 0, 1, ..., W − 1. If h^used_p + h^pack_p > z^s holds for a certain column p, then the current node is fathomed.
2. If N_ℓr contains at least one item, say j, such that h^used_{p^s_j} + h_j ≤ min{h_left, h_right}, then we create a single descendant node by packing j in p^s_j, and skip the (dominated) node in which no item is packed in the niche.
3. If there are two items j and k in N_ℓr, with j < k, having w_j = w_k, h_j = h_k, and p^s_j = p^s_k, then we fathom those nodes that attempt packing k before j (because the same solution is found by the "twin" node that packs first j and then k).
4. When packing item j, we check whether there is an item k having w_k = w_j, k > j, and being already packed in p^s_j at height h^used_{p^s_j} − h_k, i.e., at the top of the skyline. If this is the case, then the node is fathomed (again, the same solution is found by the twin node that packs first j and then k; note that in this case h_j can be different from h_k).
5. When packing item j, if p^s_j > ℓ holds, then we check whether there exists an item k ∈ N_ℓr, k ≠ j, that enters completely in the rectangle at the left of j that would be closed by the packing of j, i.e., an item k having p^s_k ≥ ℓ, p^s_k + w_k ≤ p^s_j, and h_k ≤ min{h_left − h^used_ℓ, h_j}. If such a k exists, then we fathom the node (because the same solution is found by packing first k and then j).
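The skyline bookkeeping used by the enumeration tree can be sketched as follows (a minimal Python sketch; the list representation of the skyline, the function name, and the convention that the strip sides act as borders of infinite height are our own assumptions, not the paper's):

```python
def find_niche(h_used):
    """Locate the niche: the left-most horizontal segment of the skyline
    with the smallest height. Returns (l, r, h_left, h_right), where the
    niche spans columns l..r and h_left/h_right are the heights of its
    borders (the strip sides count as walls of infinite height)."""
    W = len(h_used)
    h_min = min(h_used)
    l = h_used.index(h_min)              # left-most column at minimum height
    r = l
    while r + 1 < W and h_used[r + 1] == h_min:
        r += 1                           # extend the segment to the right
    INF = float("inf")
    h_left = h_used[l - 1] if l > 0 else INF
    h_right = h_used[r + 1] if r + 1 < W else INF
    return l, r, h_left, h_right
```

For instance, for the skyline [3, 1, 1, 2] the niche is columns 1..2 with borders of heights 3 and 2.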

Improving and Lifting the Benders' Cuts
In the following, we suppose we are given a solution (z^s, x^s), which is infeasible for y-check and has to be cut off from the master problem. To improve the standard Benders' cuts (20), we developed a procedure that performs four steps (all newly developed ideas). In the first three steps, described in §4.1, we look for minimal infeasible subsets, i.e., minimal subsets of items that still produce an infeasible y-check instance, and use them to derive the stronger cuts (21). In the last step, described in §4.2, we try to further lift the cut by adding x_jp variables through the solution of linear models. In the following, recall that vector p^s gives the x-coordinates at which items are packed in the solution.

Finding Minimal Infeasible Subsets of Items
The problem of determining a minimal infeasible subset is NP-hard, because the underlying recognition problem, y-check, is NP-complete; thus, we content ourselves with solving it in a greedy fashion.
The first step of our procedure looks for vertical cuts in the packing induced by x^s, i.e., for columns p such that the item set N can be partitioned into two sets, N_1 = {j ∈ N : p^s_j + w_j ≤ p} and N_2 = {j ∈ N : p^s_j ≥ p}, with N_1 ∪ N_2 = N. If such a column exists, then the packing of the items in N_1 is not influenced by the packing of the items in N_2, and vice versa. Thus, we reexecute the y-check algorithm on both N_1 and N_2 and determine which subset is infeasible (at least one is, because N is). Clearly, if k vertical cuts are found, the set of items is partitioned into k + 1 subsets accordingly. The successive steps of our procedure are then executed on the resulting infeasible subset(s) of items.
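The search for vertical cuts admits a compact sketch (illustrative Python; the dict format j -> (p^s_j, w_j) for the fixed item coordinates is our assumption):

```python
from collections import defaultdict

def split_at_vertical_cuts(items, W):
    """Split the packing at vertical cuts: columns p (0 < p < W) that no
    item straddles. `items` maps an item id to (x, w), where x is the
    fixed left coordinate taken from the master solution.
    Returns the groups of item ids, from left to right."""
    crossed = set()
    for x, w in items.values():
        # interior column boundaries spanned by the item
        crossed.update(range(x + 1, x + w))
    cuts = [p for p in range(1, W) if p not in crossed]

    def group_of(x):
        # the number of cuts at or before x identifies the segment holding x
        return sum(1 for p in cuts if p <= x)

    groups = defaultdict(list)
    for j, (x, w) in items.items():
        groups[group_of(x)].append(j)
    return [groups[g] for g in sorted(groups)]
```

Each returned group is then passed to the y-check algorithm separately.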
The second step tries to remove one column at a time from the strip. It first removes column 0 and all items j having p^s_j = 0. If the reduced instance is infeasible for y-check, then it continues by removing the next column from the left in which at least one item is packed, together with all items packed in that column, and it reiterates as long as the reduced instance remains infeasible. Then, it starts from the right, by selecting column W − 1 and all items that occupy it, removing them from the instance, and invoking the y-check algorithm. Again, it reiterates as long as the reduced instance remains infeasible. The instance obtained at the end of this process possibly has a reduced strip width and a smaller number of items, but it is still a cause of infeasibility, and is then passed to the next step for further reduction.
Downloaded from informs.org by [58.7.36.26] on 28 April 2014, at 15:17. For personal use only, all rights reserved.
The third step considers the items one at a time, according to a given ordering. It removes the current item from the instance and reexecutes the y-check algorithm on the reduced instance. If the outcome is feasible, then the item is reinserted; otherwise, we have found a reduced cause of infeasibility and keep the current item out. In any case, we reiterate with the next item until all items have been scanned. At the end of the scan, we are left with a reduced subset of items that still induces an infeasibility. The result of this step depends on the order in which items are selected, so we perform several attempts with different orderings. The first attempt selects items by nondecreasing area. In the second attempt we assign a success score to each item: the score is initially set to 0 at the beginning of the solution of the current SPP instance, and is then increased by one unit whenever the removal of the item was successful in previous iterations of the decomposition algorithm (i.e., whenever it led to a reduced instance that was still infeasible); the second attempt then selects items by nondecreasing success score. The third attempt simply selects items randomly, and is executed 10 times.
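The greedy reduction of the third step can be sketched as follows (Python sketch; `is_infeasible` stands in for a call to the y-check algorithm, and the orderings are supplied by the caller, so all names here are our assumptions):

```python
def reduce_infeasible_set(items, is_infeasible, orderings):
    """Greedily shrink an infeasible item set: try removing one item at a
    time, and keep it out if the reduced set is still infeasible.
    `is_infeasible(subset)` models the y-check oracle.
    Returns the smallest reduced set found over all orderings."""
    best = set(items)
    for order in orderings:
        current = set(items)
        for j in order:
            trial = current - {j}
            if trial and is_infeasible(trial):
                current = trial          # j is not needed for infeasibility
        if len(current) < len(best):
            best = current
    return best
```

The result is order-dependent, which is exactly why the paper runs the scan with several orderings (area, success score, and random).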
A simple hash list keeps track of the reduced instances for which the y-check algorithm has been invoked, so as to avoid duplicate calls. At the end of these three steps we typically obtain a few minimal infeasible subsets of items C^s ⊆ N, which can be used in the cuts of type (21).

Lifting the Cut
The fourth and last step of our procedure aims at lifting (21), for a certain C^s ⊆ N, as follows. For each item j, let us define N^s_j as the subset of items that vertically overlap with j in the solution, i.e., the set of items that have at least one column in common with j. Suppose now that we move j from p^s_j to a different position at its left or at its right, keeping all other items in their original positions. As long as N^s_j remains the same, we know that the solution is still infeasible for y-check. Without the need of supplementary calls to y-check, we can thus obtain a set of x-coordinates for the left border of j that keep the solution infeasible. In terms of binary variables, we can add to the left-hand side of the cut all the x_jp variables corresponding to the selected x-coordinates, without affecting the right-hand side (because just one of the coordinates can be selected for packing j), thus lifting the original cut.
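For a single item j, the set of alternative left borders that preserve the overlap set can be computed directly (an illustrative Python sketch; note that the paper instead selects the intervals jointly for all items of C^s via the LP of §4.2, so this is only the one-item special case, and the dict-based data format is our assumption):

```python
def liftable_positions(j, x, w, W):
    """Candidate left borders p for item j such that the set of items
    sharing at least one column with j is unchanged; for every such p the
    y-check instance stays infeasible, so x_{jp} can join the lifted cut.
    x[i] and w[i] give each item's fixed left coordinate and width."""
    def overlaps(p):
        # items whose horizontal intervals intersect [p, p + w[j])
        return {i for i in x if i != j
                and p < x[i] + w[i] and x[i] < p + w[j]}
    base = overlaps(x[j])
    return [p for p in range(0, W - w[j] + 1) if overlaps(p) == base]
```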
To obtain the most effective lifting we proceed in the following way. For each item j ∈ C^s we introduce two nonnegative variables, l^s_j and r^s_j, and denote by [l^s_j, r^s_j] the interval along the x-axis in which we look for the x-coordinate of j. Let also N^s_{pj} be the subset of items that vertically overlap with j when j is packed in column p ∈ [l^s_j, r^s_j]. If N^s_{pj} = N^s_j for all j ∈ C^s, then problem y-check is still infeasible. We can thus find the largest lifting by solving the LP (29)-(32). Constraints (30) impose that items j and i overlap in any solution in which j is packed in any column of [l^s_j, r^s_j] and i in any column of [l^s_i, r^s_i] (to this aim, note that, if i ∈ N^s_j, then j ∈ N^s_i). Constraints (31) and (32) state that, for any item j, the selected interval [l^s_j, r^s_j] is such that (i) the item always lies inside the strip, and (ii) the original position p^s_j of the item is inside the interval. We then use the optimal solution (l̄^s_j, r̄^s_j) to (29)-(32) to obtain the lifted combinatorial Benders' cut (33). Summarizing, starting from the original Benders' cut (20), we obtain several (much stronger) lifted combinatorial Benders' cuts (33) and add all of them to the master problem. The computational effectiveness of this procedure is shown in §6.

An Exact Algorithm for the Strip Packing Problem
The mathematical models and the algorithms presented in the previous sections have been combined into an overall algorithm for the solution of the SPP. This algorithm also makes use of several additional techniques to speed up its convergence to the optimum, either best practices coming from the related literature or newly developed procedures.
For this reason we called it BLUE, from Benders' decomposition with lower and upper bound enhancements.
An informal pseudo-code is outlined in Algorithm 1. Intuitively, BLUE first preprocesses the instance and computes an upper bound U and a lower bound L on the optimal solution value. Then, as long as L is strictly smaller than U, it solves the recognition version of the SPP, called SPP(L), in which the height of the strip is fixed to L and the aim is to find a feasible packing of the items not exceeding L, if any (the problem is also known in the literature as the two-dimensional orthogonal packing problem; see, e.g., Clautiaux et al. 2007). The SPP(L) instance is first passed to a preprocessing procedure, and then solved by two exact methods, namely, a combinatorial branch-and-bound and the Benders' decomposition of §2. In the remainder of this section we give the details of each step of the algorithm.

Preprocessing and Bounds
In the following we suppose items are sorted by nonincreasing width, breaking ties by nonincreasing height. Algorithm BLUE first preprocesses the instance using the three techniques described in Section 2.2 of Boschetti and Montaletti (2010). The first technique aims at packing large items at the bottom of the strip, and requires running a heuristic (in our case, the algorithm described below to compute U) on a subinstance. The second technique computes the maximum total width W̄ of a subset of items that can be packed side by side without exceeding the strip width W (by solving a standard subset sum problem); if the resulting value is strictly smaller than W, it reduces the strip width by setting W = W̄. The third technique computes, for each item j in order, the maximum total width w̄_j of a subset of items that can be packed side by side with j without exceeding W (again by solving a subset sum problem); if w_j + w̄_j < W, it increases the item width by setting w_j = W − w̄_j.
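Both subset-sum computations rest on the same dynamic program, which might look as follows (Python sketch; the function name is ours):

```python
def max_reachable_width(widths, W):
    """Largest total width <= W achievable by a subset of the given items
    packed side by side (a standard 0/1 subset-sum DP)."""
    reachable = [True] + [False] * W
    for w in widths:
        # iterate downward so each item is used at most once
        for c in range(W, w - 1, -1):
            if reachable[c - w]:
                reachable[c] = True
    return max(c for c in range(W + 1) if reachable[c])
```

The strip-width reduction calls it on all item widths with budget W; the item-width lifting calls it, for each item j, on the other items' widths with budget W − w_j, and then sets w_j = W − w̄_j when w_j + w̄_j < W.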
Lower Bounds. To obtain a valid lower bound we first make use of three polynomial-time procedures from the literature: 1. The simple lower bound L_1 = max{⌈Σ_{j∈N} w_j h_j / W⌉, max_{j∈N} h_j}.
2. A more sophisticated lower bound, L_2, based on the best-performing dual feasible functions. In practice, L_2 is evaluated as L^BM_dff, described in Section 3.2 of Boschetti and Montaletti (2010), but with the inclusion of only the first three dual feasible functions (the fourth one was disregarded because it is more time consuming and not computationally effective on our instances).
3. A third lower bound, L_3, obtained by invoking the alternating constructive procedure described in Section 4.2.6 of Alvarez-Valdes et al. (2009).
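As an illustration, the simple bound L_1 above can be computed in a few lines (Python sketch; items are assumed to be given as (width, height) pairs):

```python
import math

def lower_bound_l1(items, W):
    """L1 = max(ceil(total item area / W), tallest item height): the strip
    must accommodate the total area, and at least the tallest item."""
    area = sum(w * h for w, h in items)
    return max(math.ceil(area / W), max(h for _, h in items))
```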
Then we invoke two newly developed and more time-consuming procedures. The first is obtained by considering the relaxation of the 1CBP (see Definition 1) in which we remove the constraint that horizontal slices must be packed contiguously with one another, and only impose that each bin contains at most one slice of each item. The problem, known in the literature as the noncontiguous bin packing problem (NCBP), can be modeled as follows.
We define a pattern t as a subset of items whose total width is not larger than W, and describe it by a column (a_1t, ..., a_jt, ..., a_nt)^T ∈ {0,1}^n, where a_jt takes value 1 if item j is in pattern t, and 0 otherwise. Let T be the family of all patterns containing at most one slice of each item, and let z_t be an integer variable giving the number of times that pattern t is used (t ∈ T). The NCBP is then

min Σ_{t∈T} z_t (34)
s.t. Σ_{t∈T} a_jt z_t ≥ h_j, j ∈ N, (35)
z_t ≥ 0 and integer, t ∈ T. (36)

Model (34)-(36) corresponds to model F2 by Boschetti and Montaletti (2010), who solve it by generating a priori the entire set of undominated patterns to compute their lower bound L^BM_F2. In the literature, the NCBP relaxation is also sometimes referred to as the "bar relaxation"; see, e.g., Belov and Rohling (2013). Since the NCBP is strongly NP-hard, we content ourselves with the continuous relaxation of (34)-(36), which we solve by the standard column generation algorithm originally proposed by Gilmore and Gomory (1961) for the cutting stock problem. The fact that an item should appear at most once in a pattern is easily taken into account in the slave knapsack problem that has to be solved to generate columns: it suffices to associate a binary variable with each item j (instead of an integer variable not greater than h_j, as in the standard algorithm for the cutting stock problem). This column generation procedure is quite standard for one-dimensional packing problems but, as far as we know, this is the first time it is applied to the solution of the NCBP.
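The pricing problem of this column generation is a 0/1 knapsack over the item widths. A possible dynamic-programming sketch (Python; the master LP and the dual values are assumed to come from an LP solver, which is not shown, and the function name is ours):

```python
def price_pattern(widths, duals, W):
    """Pricing step for the NCBP column generation: a 0/1 knapsack (each
    item at most once per pattern, unlike the classic cutting stock)
    maximizing the dual value of the pattern under the width budget W.
    Returns (best dual value, chosen item indices); a new column enters
    the master when the value exceeds 1."""
    n = len(widths)
    best = [[0.0] * (W + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        w, d = widths[j - 1], duals[j - 1]
        for c in range(W + 1):
            best[j][c] = best[j - 1][c]
            if c >= w and best[j - 1][c - w] + d > best[j][c]:
                best[j][c] = best[j - 1][c - w] + d
    # backtrack to recover the pattern
    pattern, c = [], W
    for j in range(n, 0, -1):
        if best[j][c] != best[j - 1][c]:
            pattern.append(j - 1)
            c -= widths[j - 1]
    return best[n][W], sorted(pattern)
```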
Let L_4 be the round-up of the resulting solution value. Now that we know that all horizontal slices can be packed (noncontiguously) in L_4 bins, we check whether the same holds for the vertical slices. We thus fix the height of the strip to L_4, "cut" items and strip using the vertical orientation, and again solve the continuous relaxation of (34)-(36). In this second attempt, patterns are combinations of items whose total height does not exceed L_4, and h_j is replaced by w_j in (35). If the resulting solution value is greater than W, then no feasible packing of the vertical slices exists, so we increase L_4 by one. We reiterate this second attempt as long as its solution value is greater than W.
The last procedure is obtained by solving the root node of the MILP (12)-(15) for the P cont C max, and storing the rounded-up value of the resulting makespan as L_5. The lower bound we obtain is L = max_{i=1,...,5} L_i, where L_1, L_2, and L_3 have polynomial complexity, L_4 has pseudopolynomial complexity (if the ellipsoid algorithm is used to solve the LPs; see Caprara and Monaci 2009) and is fast in practice, whereas computing L_5 is strongly NP-hard and can be time consuming for the large instances (this is why at this step we limit its solution to the root node).
Upper Bounds. To obtain a valid upper bound we start by invoking (our implementation of) the algorithm by Leung et al. (2011). This is a two-stage approach, in which the first stage is a constructive heuristic and the second is an improvement procedure based on simulated annealing. In our implementation we impose a limit of 10^4 iterations on the simulated annealing, to lower the computation time. We call U_1 the resulting solution value.
We then invoke a new heuristic based on the solution of the 1CBP (recall Figure 1(b)). At each iteration this heuristic selects an unpacked item and packs all its slices left-justified in the bins, starting from the bottom-most bin that has enough residual space to accommodate a slice. It reiterates until all items are packed. Bins are closed once they cannot accommodate any further item. The resulting partial solutions have a classical staircase structure. In detail, the first item j is selected randomly, and its slices are packed at the left of bins 0, 1, ..., h_j − 1. Let r be the index of the bottom-most open bin. The next item is chosen by a score-based method whose aim is to fill bin r in the best possible way. A score v_j is assigned to each item j that is still unpacked, with v_j initially set to 2 if j is as large as the residual space in r, and 0 otherwise. Then, if by packing j in r the top of j is as high as the first horizontal segment at its left in the staircase, v_j is increased by 2; if instead the top of j is as high as the top of any other horizontal segment in the staircase but the first, v_j is increased by 1. The item with the highest score is selected (ties are broken randomly) and all its slices are packed.
After the heuristic has packed all items, if the resulting 1CBP solution value is smaller than U_1, then we try to obtain a feasible SPP solution by invoking the y-check algorithm, with a limit of 2·10^5 iterations (determined on the basis of preliminary computational experiments and discussed below in §6.2) and three CPU seconds (necessary to limit the effort on instances having large values of n or W). The instances for which y-check was invoked are stored in a hash table, so as to avoid checking them twice. The algorithm is executed 10 times, and the best solution value is stored in U_2. We then compute U = min{U_1, U_2}.
Summarizing, the initialization of Algorithm BLUE computes, in order, U_1, L_1, L_2, L_3, L_4, L_5, and U_2, and updates U and L accordingly. The execution is stopped as soon as L = U.

Closing the Gap
After the preprocessing and the computation of the bounds, if L is strictly smaller than U, then we enter a loop in which we try to solve SPP(L). We first try to lift the item heights. To this aim we use the third preprocessing technique described in §5.1, but work on heights instead of widths. Formally, we compute for each item j, in order, the maximum total height h̄_j of a subset of items that can be packed vertically over j without exceeding L; if h_j + h̄_j < L, we set h_j = L − h̄_j.
As a general remark, it is usually "easier" (i.e., computationally faster) to solve a P cont C max problem with a small number of columns than a 1CBP with a large number of bins. For this reason we compute the sets of normal patterns along the x- and y-axes, by using (1) and (2), respectively, and setting H = L. Then, if the total number of normal patterns along the y-axis is smaller than that along the x-axis, we "rotate" the instance, i.e., we exchange L with W, and h_j with w_j for all items j. Clearly, after the instance is solved we restore the original dimensions. Note that the two terms in the check give the number of variables in the resulting master problems with one orientation or the other; see (26).
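Normal patterns along one axis can be generated by a standard subset-sum recursion. The sketch below follows the common definition from the packing literature and is not necessarily identical to the paper's equations (1)-(2):

```python
def normal_patterns(sizes, j, limit):
    """Normal patterns for item j along one axis: the coordinates at which
    j can start, i.e., the sums of the sizes of subsets of the OTHER items,
    not exceeding limit - sizes[j]."""
    cap = limit - sizes[j]
    reachable = {0}
    for i, s in enumerate(sizes):
        if i != j:
            reachable |= {c + s for c in reachable if c + s <= cap}
    return sorted(reachable)
```

Summing |normal_patterns(...)| over all items, once with the widths and once with the heights, gives the two terms compared in the rotation check.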
We then invoke the two exact procedures for solving the SPP(L), both described in detail below. If these procedures prove that a feasible solution of height L exists, then we stop the algorithm with a proof of optimality. If instead they prove that no feasible solution of height L exists, then we increase L and reiterate: we compute the minimum integer ℓ > L such that there exists a combination of item heights whose total value is exactly ℓ, and then set L = ℓ. If the two procedures fail in proving either feasibility or infeasibility at height L, then the algorithm terminates by returning a heuristic solution.
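The update of L can be sketched as a subset-sum search over the item heights (Python sketch; it returns None when no combination exceeds L):

```python
def next_candidate_height(heights, L):
    """Smallest integer > L equal to the total height of some subset of
    items; used to raise the trial height after SPP(L) is proven
    infeasible."""
    total = sum(heights)
    reachable = [True] + [False] * total
    for h in heights:
        # iterate downward so each item is counted at most once
        for c in range(total, h - 1, -1):
            if reachable[c - h]:
                reachable[c] = True
    for c in range(L + 1, total + 1):
        if reachable[c]:
            return c
    return None
```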
Branch and Bound for the SPP(L). The first attempt to find an exact SPP(L) solution is based on a branch and bound (B&B) for the recognition version of the P cont C max in which the makespan is fixed to L. This B&B starts with W empty columns, and enumerates solutions by packing one item at a time in the left-most column that still has some residual space to accommodate items. Packing an item j in a column p means, in our approach, packing all the slices of j at the bottom of columns p, p + 1, ..., p + w_j − 1. As a consequence, the resulting partial solutions have a staircase structure.
Let h̄_p be the total height of the slices packed in column p in a partial solution. At each node, the B&B first selects the left-most column p that still has h̄_p < L. It then creates a descendant node for any item j satisfying h̄_p + h_j ≤ L: it packs j in p, computes h_min as the minimum height of the items still to be packed, and then sets h̄_q = h̄_q + h_j if h̄_q + h_j + h_min ≤ L, and h̄_q = L otherwise, for q = p, p + 1, ..., p + w_j − 1. If column p is not empty (i.e., h̄_p > 0 holds), then the B&B also creates an additional node in which it packs no item at all in p; in this case it sets h̄_p = L.
At any node of the tree we apply the following fathoming criteria: 1. If there are two items j and k, with h_j = h_k, w_j = w_k, and j > k, then we pack j only after k has been packed.

Among them, instances gcut01 and gcut03 are very easy and were solved by almost all algorithms in a short time. Instance gcut02 was already solved by BM10 in seven seconds, and by AIT12 in 255 seconds, whereas we need less than two seconds. Instance gcut04 was still an open problem, and we solve it in less than nine seconds.
The remaining nine instances are characterized by a large width, from 500 to 3,000. They were addressed only by BM10, which could solve six of them, whereas we can solve one instance more, gcut11, with a similar computational effort. For the two instances still unsolved to proven optimality, the intervals between the lower and upper bound values provided by BM10 are, respectively, [5,814, 5,895] for gcut08 and [4,776, 4,947] for gcut13. The optimal solutions of the two most important open problems that we closed in these two sets, namely cgcut03 and gcut04, are depicted in the appendix. They are very interesting, because they are characterized by complex nonguillotine structures that create large holes and make it difficult to compute both the lower and the upper bound. The solutions that we obtained on all other instances are available on our website.
The details of the results on the n instances by Burke et al. (2004) are given in Table 4. In terms of exact algorithms, this set was addressed only by KINYN09 (just the first 12 instances, using algorithm StaircasePP) and BM10. Algorithm BLUE largely outperforms these two exact algorithms, solving more instances to proven optimality, and usually in less time. Note that these instances have been built ad hoc to have zero waste, and hence represent an interesting test bed more for heuristics than for exact algorithms (indeed, that was their original purpose), because the computation of sophisticated lower bounds is useless on them. Indeed, for the three instances that we could not solve to optimality, good upper bound values were already provided by BM10 (81 for n08, 301 for n12, and 961 for n13), but optimal solutions were then found by the heuristic of Wei et al. (2011).
The details of the results on the 10 classes proposed by Berkey and Wang (1987) and Martello and Vigo (1998) are given in Table 5. Each class contains 50 instances, divided into five groups of 10 instances, one for each value of n ∈ {20, 40, 60, 80, 100}. These sets have been addressed by APT08, BM10, and AIT12. Each line in the table gives the number of optimal solutions and the average time (over the instances solved to proven optimality) for each group and each algorithm. We present results only for those groups in which at least one instance was solved by at least one of the algorithms. BLUE is on average faster and provides a higher number of proven optimal solutions than the previous algorithms. Notably, it solves for the first time all instances with n = 20 of classes 3, 4, and 5.

Evaluation of the Behavior of BLUE
The most effective approaches published in the SPP literature are branch-and-bound algorithms based on the idea of building solutions by packing one item at a time in the strip. Algorithm BLUE has a completely different approach that, intuitively, divides items into slices, packs the slices, and then attempts the reconstruction of the original items. This innovative approach appears to perform better on all benchmark instances, as can be noted, for example, in Figure 4. It is important to notice that all the components of BLUE contribute to the good results. The root node solves to optimality 279 instances out of 560, with an average time of about 10 seconds. Our new upper bound U_2 improves 154 times on the upper bound U_1 that we derived from the literature. The new lower bound L_4 improves 122 times on the maximum among the first three lower bounds that we took from the literature (L_1, L_2, and L_3), and L_5 then obtains 15 further improvements.
The main loop decreases the upper bound in 49 cases and, most important, increases the lower bound in 106 cases. In this way it solves to optimality the other 98 instances. The loop is iterated on average just once per instance. Inside the loop, the feasibility or infeasibility of an instance is proven 39 times by the branch and bound and 328 times by the Benders' decomposition. Overall, the Benders' decomposition is the most important part of the algorithm, because it performs very well on the difficult instances. As mentioned in §5.2, the standard delayed cut generation method is the implementation of the decomposition that gave the best results.
Table 6. Evaluation of the impact of the different cuts on some previously unsolved instances from the 10 classes.
We now evaluate in more detail the y-check algorithm and the procedure to lift the Benders' cuts. Considering the overall run on the 560 instances, the y-check algorithm has been invoked 2,490 times by the exact algorithms of §5.2. It terminated within the given iteration limit in 96.6% of the cases. In Figure 5 we show the evolution, in terms of percentage of instances solved, over the first two seconds of computation. In just 0.01 seconds the algorithm solves 40% of the instances. This is because the algorithm is very quick in finding a feasible solution, if any: it takes just 0.01 seconds on average, and 0.24 seconds in the worst case, to prove feasibility. Then the task becomes harder, but still the algorithm needs just 0.8 seconds on average to prove the infeasibility of an instance. Since the problem is strongly NP-hard, it is not surprising that a few y-check instances remain unsolved after one minute. This behavior motivated the choice of the maximum number of iterations given to the y-check algorithm, which is quite small inside the heuristic used to compute U_2 (2·10^5, see §5.1), where the aim is to find feasible solutions, and much larger inside the exact B&B for the SPP(L) (2·10^7, see §5.2), where proving infeasibility is also important.
In Table 6, the impact of the procedures to combine and lift the Benders' cuts is evaluated on a few successful examples, taken from instances in the 10 classes that were not solved to proven optimality by previously published algorithms. We compare the results obtained by a version of BLUE that uses the Benders' cuts (20), a version that uses the combinatorial Benders' cuts (21) obtained by the procedure of §4.1, and the final version that uses the lifted combinatorial Benders' cuts (33). For each configuration and each instance we report the optimality of the solution (opt), the computational time (sec), the total time spent inside the MILP of the decomposition (sec MILP), the number of added cuts (num cuts), the number of calls to the y-check algorithm (num y-ch.), and the total time they required (sec y-ch.).
The standard cuts solve three out of six instances. The combinatorial cuts do not increase the number of optimal solutions, but can considerably decrease the computational effort, as happens for example on instance 05-40-01, where the time decreases from 610 to 17 seconds. The lifted cuts consistently improve on the previous configurations, solving all instances to optimality and decreasing the computational effort. The cuts are efficient both in strengthening the formulation, thus reducing sec MILP, and in reducing the number of calls to, and the time spent in, problem y-check.

Conclusions
We proposed an innovative algorithm for the exact solution of the strip packing problem, which is based on a Benders' decomposition and is enriched with several tailored techniques. We proved that the slave problem arising in the decomposition is difficult, but solved it with an algorithm that is efficient in practice. We improved the standard Benders' cuts by means of combinatorial Benders' cuts and a lifting procedure.
Figure 1. (a) An optimal SPP solution; (b) the 1CBP relaxation; (c) the P cont C max relaxation.
Figure 2. A framework to reduce Partition to y-check.

Figure 4. Number of optimal solutions (out of 50) per class, from the most difficult to the easiest.

Table 2. Results and comparison on the cgcut instances.

Table 3. Results and comparison on the gcut instances.

Table 4. Results and comparison on the n instances.