The Meet-in-the-Middle Principle for Cutting and Packing Problems

Cutting and packing (C&P) is a fundamental research area that models a large number of managerial and industrial optimization problems. A solution to a C&P problem basically consists of a set of one- or multi-dimensional items packed in/cut from one or more bins, satisfying the problem constraints and minimizing a given objective function. Normal patterns are a well-known C&P technique used to build solutions in which each item is aligned to the bottom of the bin along each dimension. The rationale for their use is that they can reduce the search space while preserving optimality, but their drawback is that their number grows considerably as the number of items and the size of the bin increase. In this paper we propose a new set of patterns, called meet-in-the-middle, that leads to several interesting results. Their computation is achieved with the same time complexity as that of the normal patterns, but their number is never higher, and in practical applications it frequently shows reductions of about 50%. The new patterns are applied to improve some state-of-the-art C&P techniques, including arc-flow formulations, combinatorial branch-and-bound algorithms, and mixed integer linear programs. The efficacy of the improved techniques is assessed by extensive computational tests on a number of relevant applications.


Introduction
A solution to a cutting and packing (C&P) problem basically consists of a set of one- or multi-dimensional items packed in (or cut from) one or more bins, satisfying some constraints and minimizing a given objective function. Typical constraints impose that all items lie entirely within the bin in which they are packed and that they do not overlap one another. Typical objective functions require the maximization of some item profits (knapsack problems) or the minimization of the number of selected bins (bin packing problems and variants).
C&P problems are a fundamental research area in the field of Operations Research, as they model several real-world problems, arising for example in production industry (see, e.g., Vanderbeck 2001), transportation (see, e.g., Iori et al. 2007), and container loading (see, e.g., Bortfeldt and Wäscher 2013), to cite just a few. We refer to Wäscher et al. (2007) for an extensive typology of the several C&P problems and for further hints on their applications.
Normal patterns are a well-known C&P technique that builds solutions in which each item is aligned to the bottom of the bin along each dimension. They were independently introduced first by Herz (1972) (who called them canonical dissections) and then by Christofides and Whitlock (1977) in the context of two-dimensional cutting, and have been used since in literally hundreds of algorithms. Consider for example Figure 1: Figure 1-(a) gives a solution to a general two-dimensional problem where eight items are feasibly packed in a single bin; Figure 1-(b) provides an equivalent solution satisfying the principle of normal patterns. Intuitively, solution (a) can be transformed into solution (b) by repeatedly moving each item to its left and/or down until its border touches that of another item or that of the bin. On the basis of this observation, Herz (1972) and Christofides and Whitlock (1977) defined the set of normal patterns as the set of all possible item width combinations, and then proved that the search for an optimum may be limited to solutions where each item is packed in a normal pattern.
The drawback of normal patterns is that their efficacy is noticeable almost only at the beginning of the bin, and then decreases considerably towards the end of it. Intuitively, a certain width p is a normal pattern if there exists a combination of item widths whose sum is p. This might be difficult to obtain for small p values, but becomes easier for large values. In practice, when the number of items is high and the bin width is large, the effect of the normal patterns tends to be irrelevant in the second half of the bin.
Several attempts have been made in the literature to overcome this drawback. In this paper we continue this line of research and propose an idea that proved to be very effective in practice. It consists of a new set of patterns, called meet-in-the-middle (MIM), obtained by aligning items along each dimension either to the bottom of the bin or to the top of it. In detail, we consider the first dimension of the bin (the width), fix a certain threshold value along it (for example, the half bin width), force all items whose left border is at the left of the threshold to be left aligned, and force the remaining items to be right aligned. The process is then repeated for the successive dimensions. Consider again Figure 1 and suppose that the thresholds for the two dimensions are fixed to, respectively, the half bin width and the half bin height (dashed lines). Then, using the MIM idea, solution (a) is transformed into solution (c). As will be shown in the next section, the search for an optimum may be limited to solutions satisfying the MIM patterns.
The idea is simple, yet it provides interesting results. The number of MIM patterns is never higher than that of the normal ones, and in practical applications it is much smaller. Moreover, further reductions may be obtained by additional preprocessing techniques. In addition, the computational effort required to compute the MIM patterns is the same as that required for the normal ones. We also note that the MIM principle is useful to reduce not only the normal patterns, but also other forms of patterns that have been presented in the literature (and indeed our computational work builds upon the patterns by Boschetti et al. 2002).
To the best of our knowledge the MIM principle has not been investigated in the literature, and in the next sections we describe its application to some classical C&P problems. Indeed, the MIM patterns easily apply to several optimization techniques, such as mathematical formulations, where they make it possible to reduce the number of variables and constraints, and combinatorial enumeration trees, where they can be used to fathom unnecessary nodes.
The name that we adopted comes from cryptography ("meet-in-the-middle attack"). It has been used previously in the C&P literature by Horowitz and Sahni (1974) to describe their branch-and-bound algorithm for the binary knapsack problem. They divide the input item set, having n items, into two mutually exclusive subsets having n/2 items each. They enumerate the partial solutions on each subset, and then merge the partial solutions to build an optimal one. The complexity of their algorithm is O(2^{n/2}), which is the best known for the problem. Their approach is very different from the one that we propose here, as it divides the set of items instead of dividing the space of the bin along each dimension.
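The split-and-merge scheme of Horowitz and Sahni can be sketched as follows (an illustrative Python rendering, not the authors' implementation; all names and the sample data are ours): enumerate all subsets of each half, sort one half by weight, and combine by binary search.

```python
from bisect import bisect_right

def knapsack_mitm(weights, profits, capacity):
    """Meet-in-the-middle for the binary knapsack: enumerate the 2^(n/2)
    subsets of each half, then merge the two partial-solution lists."""
    n = len(weights)
    half = n // 2

    def enumerate_half(indices):
        # All (weight, profit) pairs over subsets of the given indices.
        sols = [(0, 0)]
        for i in indices:
            sols += [(w + weights[i], p + profits[i]) for (w, p) in sols]
        return sols

    left = enumerate_half(range(half))
    right = sorted(enumerate_half(range(half, n)))
    # For each prefix of the weight-sorted right list, keep the best profit.
    best_upto, running = [], 0
    for _, p in right:
        running = max(running, p)
        best_upto.append(running)
    right_weights = [w for w, _ in right]
    best = 0
    for w, p in left:
        if w <= capacity:
            j = bisect_right(right_weights, capacity - w) - 1
            best = max(best, p + best_upto[j])
    return best
```

For instance, with weights (3, 4, 5, 6), profits (4, 5, 6, 7), and capacity 10, the best solution packs the items of weight 4 and 6 for a profit of 12.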
The remainder of the paper is organized as follows. In Section 2 the related literature is reviewed, the MIM patterns are presented, and some of their properties are discussed. The remaining sections describe some relevant applications to well-known C&P problems. The classical cutting stock and bin packing problems are solved in Section 3, the "old" two-dimensional two-stage cutting stock problem in Section 4, and the fundamental two-dimensional orthogonal packing problem in Section 5.
All sections provide evidence for our claims by means of extensive computational tests. These have been obtained by implementing all algorithms in C++ and running them on a PC equipped with an Intel 2.667 GHz Westmere EP X5650 processor. We used Cplex 12.6 as the mixed integer linear programming (MILP) solver, forcing it to run on a single thread.

Meet-in-the-Middle Principle
We consider a generic orthogonal C&P problem in k dimensions, in which we are given a set I = {1, 2, . . ., n} of items and a bin. Both the items and the bin are k-dimensional rectangular boxes. Each item i ∈ I has width w_i^d and the bin has width W^d, for d = 1, 2, . . ., k. We suppose that all widths are positive integer values.
A feasible solution is a packing of an item set I′ ⊆ I in the bin, such that all items are completely contained in the bin and no two items overlap. Two feasible solutions are equivalent if they pack the same item set I′. We say that a reduction property preserves optimality if it possibly removes some solutions but guarantees that, for any set I′ ⊆ I, at least one solution is kept among all equivalent solutions.
We make use of a Cartesian coordinate system whose axes are parallel to the edges of the bin (see Figure 1). For a given box, we call lowest the box corner that is closest to the origin of the system (in two dimensions, the lowest corner is the bottom-left one). We say that an item i is packed in position p_i if its lowest corner is in p_i.
Our techniques apply to any dimension d, but for the sake of clarity in the remainder of this section we focus on the first dimension, that is, the width. When no confusion arises, we thus write for short W instead of W^d and w_i instead of w_i^d. For descriptive purposes, in the next Sections 2.1-2.3 we return a number of times to a small running example (Example 1).

Normal Patterns and Known Reductions
According to Herz (1972) and Christofides and Whitlock (1977), the set of normal patterns can be formally defined as

N_0 = {x = Σ_{j∈S} w_j : x ≤ W, S ⊆ I}.  (1)

For Example 1, we have N_0 = {0, 5, 10, 12, 15, 17, 20, 22, 25, 27}. This set was introduced in the context of cutting, and also includes patterns positioned towards the end of the bin that are only needed to model cuts for the right borders of the items. If one is interested instead in modeling only the positions where the items can be packed (lowest corners), as in our case, then N_0 can be conveniently reduced. Following Beasley (1985a), this results in

N = {x ∈ N_0 : x ≤ W − w_min},  (2)

where w_min = min_{j∈I} {w_j}. Set N models the simple fact that no item can be packed with its lowest corner after W − w_min. In the remainder of the paper we refer to (2) when we mention normal patterns. For Example 1, we have N = {0, 5, 10, 12, 15, 17, 20, 22}.

Terno et al. (1987) (see also Scheithauer and Terno 1996) attempt to reduce N_0 as follows. For a given pattern p ∈ N_0, let w(W − p) = max {x = Σ_{j∈S} w_j : x ≤ W − p, S ⊆ I}. If p + w(W − p) < W, then packing an item in p leads to a loss of at least W − p − w(W − p) width units. If there is a subsequent pattern q, with p < q ≤ W − w(W − p), then the packing of an item in p can be skipped, as an equivalent or better solution can be obtained by packing the item in q (as this would lead to a loss that is not greater). Moreover, Terno et al. (1987) noticed that for the computation of the w(W − p) values one can directly use the entries in N_0. Formally, by setting w(W − p) = max {x ∈ N_0 : x ≤ W − p}, the set T_0 of the so-called reduced raster points (raster points for short in the following) is then

T_0 = {W − w(W − p) : p ∈ N_0}.  (3)

For Example 1, we have T_0 = {0, 5, 10, 12, 15, 17, 22, 27}. The only item that can be packed in 20 has width 5, but this option is skipped as an equivalent solution can be obtained by packing the item in 22. Similarly to what was done for N_0, here we reduce the raster points by computing T = {x ∈ T_0 : x ≤ W − w_min}. Boschetti et al.
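Under the assumption that Example 1 consists of five items of width 5 and one item of width 12 in a bin of width W = 27 (an instance consistent with the sets above, though the paper's exact data may differ), the sets N_0 and T_0 can be computed with the following short sketch (names ours):

```python
def normal_patterns(widths, W):
    # Definition (1): subset sums of the item widths, via a 0/1 DP in O(nW).
    reach = [False] * (W + 1)
    reach[0] = True
    for w in widths:
        for p in range(W - w, -1, -1):
            if reach[p]:
                reach[p + w] = True
    return [p for p in range(W + 1) if reach[p]]

def raster_points(widths, W):
    # Terno et al.: T0 = { W - w(W - p) : p in N0 },
    # with w(x) = max{ q in N0 : q <= x }.
    n0 = normal_patterns(widths, W)
    floor_n0 = lambda x: max(q for q in n0 if q <= x)  # 0 is always in N0
    return sorted({W - floor_n0(W - p) for p in n0})

widths, W = [5, 5, 5, 5, 5, 12], 27  # assumed data for Example 1
print(normal_patterns(widths, W))    # [0, 5, 10, 12, 15, 17, 20, 22, 25, 27]
print(raster_points(widths, W))      # [0, 5, 10, 12, 15, 17, 22, 27]
```

Note how position 20 belongs to N_0 but not to T_0, as discussed above.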
(2002) followed a different strategy, and conveniently reduced the set of normal patterns for a given item i by computing its possible patterns as combinations of the items other than i. Formally, let

B_i = {x = Σ_{j∈S} w_j : x ≤ W − w_i, S ⊆ I \ {i}} for i ∈ I, and B = ∪_{i∈I} B_i.  (4)

As shown in Christofides and Whitlock (1977), the computation of N_0 may be obtained by a standard dynamic programming procedure, which we report in Algorithm 1. Procedure NormalPatterns works in O(nW): it first computes the feasible item width combinations using a support array T, and then stores the resulting values in N_0. The computation of B_i, for any i ∈ I, may be obtained by invoking NormalPatterns(I \ {i}; W − w_i), and thus the computation of B requires O(n^2 W).

Algorithm 1 NormalPatterns(I; W)
1: Require: I: set of items, W: bin width
2: T ← [0 to W]: an array with all entries initialized to 0
3: T[0] ← 1
4: for i ∈ I do
5:   for p = W − w_i down to 0 do
6:     if T[p] = 1 then T[p + w_i] ← 1
7:   end for
8: end for
9: return N_0 = {p ∈ {0, 1, . . ., W} : T[p] = 1}

In the following we pursue the idea of reducing the number of patterns for each item, thus building upon the regular normal patterns in (4). This is done because these patterns led to good computational results in several applications (see, e.g., Boschetti and Montaletti 2010 and Côté et al. 2014a), and because they model in a natural way choices made in standard C&P techniques such as arc-flow formulations and combinatorial branch-and-bound algorithms. However, our results can be easily adapted to the simpler case of a direct computation of the standard normal patterns starting from (2).
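As a concrete sketch of the B_i computation just described, the per-item sets of (4) can be obtained by n calls to the same dynamic program (an illustrative Python rendering with made-up instance data; names are ours):

```python
def item_patterns(widths, W):
    """B_i of (4): normal patterns of the items other than i,
    restricted to x <= W - w_i. Overall O(n^2 W)."""
    def reachable(ws, cap):
        # subset sums of ws not exceeding cap
        if cap < 0:
            return []
        reach = [False] * (cap + 1)
        reach[0] = True
        for w in ws:
            for p in range(cap - w, -1, -1):
                if reach[p]:
                    reach[p + w] = True
        return [p for p in range(cap + 1) if reach[p]]

    return [reachable(widths[:i] + widths[i + 1:], W - wi)
            for i, wi in enumerate(widths)]
```

For example, with four items of widths (6, 5, 3, 2) and W = 8 (an illustrative instance), B_1 = {0, 2}, B_2 = {0, 2, 3}, B_3 = {0, 2, 5}, and B_4 = {0, 3, 5, 6}.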
We note that other attempts have been made in the literature to reduce the normal patterns. Among these we mention: Côté et al. (2014a), who use a quick preprocessing technique in the spirit of the raster points; Côté et al. (2014b), who divide a set of two-dimensional items into two subsets according to an input packing ordering, and then pack the first half at the bottom of the bin and the second half at the top; and Alvarez-Valdes et al. (2005), who propose reduction rules for the specific case where just two item widths are given (pallet loading).

Meet-in-the-Middle Patterns
The meet-in-the-middle (MIM) patterns are defined for each item i ∈ I and for a threshold t ∈ {1, 2, . . ., W} as the combination of two types of patterns. First the left patterns are computed as

L_it = {x = Σ_{j∈S} w_j : x ≤ min{t − 1, W − w_i}, S ⊆ I \ {i}},  (5)

and then the right patterns as

R_it = {W − w_i − x : x = Σ_{j∈S} w_j, x ≤ W − w_i − t, S ⊆ I \ {i}}.  (6)

In practice, an item is packed in a left pattern when the coordinate x of its lowest corner is at the left of t (x ≤ t − 1), and in a right pattern otherwise (x ≥ t). Refer again to Figure 1-(c) for a graphical example. The "min" function in (5) is used to impose that an item i is not packed after W − w_i. Note also that large items having width w_i > W − t are always left aligned, because R_it = ∅ when W − w_i − t < 0. The set of MIM patterns for item i is then simply obtained as

M_it = L_it ∪ R_it,  (7)

and the overall set as

M_t = ∪_{i∈I} M_it.  (8)

The computation of each M_it set may be obtained in O(nW) by running Algorithm 2. The left patterns are computed by determining all item combinations, other than the selected item i, whose total width does not exceed t − 1 and the residual space left by w_i in the bin. For the right patterns, we first compute the standard set of (left-aligned) normal patterns whose total width does not exceed the residual space, if any, obtained by subtracting from the bin width both w_i and t (consider that NormalPatterns returns the empty set when W − w_i − t < 0). Then we obtain R_it by mapping each left-aligned pattern p into a right-aligned pattern W − w_i − p.
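Definitions (5)-(7) translate directly into code. The following sketch (names and instance data ours) computes the left and right patterns of a single item for a given threshold:

```python
def mim_patterns_item(widths, i, W, t):
    """L_it and R_it of (5)-(6) for item i and threshold t."""
    wi = widths[i]
    others = widths[:i] + widths[i + 1:]

    def sums_up_to(cap):
        # normal-pattern DP over the other items, restricted to values <= cap
        if cap < 0:
            return []
        reach = [False] * (cap + 1)
        reach[0] = True
        for w in others:
            for p in range(cap - w, -1, -1):
                if reach[p]:
                    reach[p + w] = True
        return [p for p in range(cap + 1) if reach[p]]

    left = sums_up_to(min(t - 1, W - wi))
    right = sorted(W - wi - p for p in sums_up_to(W - wi - t))
    return left, right
```

On an illustrative instance with widths (6, 5, 3, 2) and W = 8, the item of width 2 with t = 4 gets left patterns {0, 3} and right pattern {6}; for t = W the right set is empty and the left set equals B_i, in line with Proposition 2 below.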
Some interesting properties may be noticed for the MIM patterns.
Proposition 1 Optimality is preserved by considering only solutions where all items are packed in MIM patterns.
Proof The proof follows the footsteps of the simple one in Herz (1972) and Christofides and Whitlock (1977). Suppose a feasible packing for a generic item set I′ ⊆ I is provided. Then select an item i ∈ I′ not packed in a MIM pattern, if any, and repeat the following procedure for each dimension: if the lowest corner of i is at the left of t, then move i as much as possible to the left, otherwise move it as much as possible to the right. Reiterate until all items are packed in a MIM pattern. The thesis follows because the procedure holds for any I′ ⊆ I.
Proposition 2 The set of MIM patterns computed for t = W is equivalent to the set B.
Proof By replacing t with W in (5) and (6), we obtain L_iW = B_i and R_iW = ∅ for every item i ∈ I, and thus M_W = ∪_{i∈I} B_i = B.

As will be shown in Section 2.4 below, the value taken by t may have a strong impact on the cardinality of the resulting set M_t. We thus define the minimal set of MIM patterns as

M = M_{t_min}, with t_min = arg min_{t∈{1,2,...,W}} Σ_{i∈I} |M_it|.  (9)

A direct consequence of Proposition 2 and of the minimality of M is the following.

Proposition 3 |M| ≤ |B| (and consequently |M| ≤ |N |).
The computation of the minimal set M may be trivially obtained by invoking Algorithm 2 for each value of t and storing the best result according to (9), thus using a time complexity of O(n^2 W^2). A better implementation reduces the computational effort as follows.

Proposition 4 The minimal set M may be computed in O(n^2 W) time.
Proof The proof is based on Algorithm 3, which we describe in detail. We first compute all sets B_i and we sum them together to determine the number of left patterns having value p and the number of right patterns having value W − w_i − p (steps 2-9). The arrays T_left and T_right store the resulting values. The same arrays are then used to compute the cumulative number of patterns in an incremental way, on the basis of the following observation.
Let us consider two threshold values t_1 and t_2, with t_2 ≥ t_1 + 1. We rewrite equation (5) twice, by first replacing t with t_1 and then with t_2. Then, by computing the difference between the two resulting equations, we obtain

L_{it_2} = L_{it_1} ∪ {x ∈ B_i : t_1 ≤ x ≤ t_2 − 1}.  (10)

In other words, the left patterns up to t_2 − 1 may be computed by summing those up to t_1 − 1 and those in the interval [t_1, t_2 − 1]. The same reasoning applies to the right patterns, by using an incremental process from right to left.
Coming back to Algorithm 3, the incremental computation of the left and right patterns is performed at steps 10-13. Then, steps 14-20 determine the threshold value t_min for which the overall number of patterns is a minimum, and steps 21-29 build the resulting MIM patterns.
An important remark is thus that the minimal set of MIM patterns may reduce the set of regular normal patterns defined in (4), and may be computed with the same algorithmic complexity. Note that the same remark applies when considering the original normal patterns in (2): the resulting number of MIM patterns would not exceed that of the normal patterns because of Proposition 3; their computation would require O(nW) (the same complexity required for N) by a simplified version of Algorithm 3 in which the computation of all B_i is replaced by that of N, and the double loops in i and p are replaced by single loops in p.

Algorithm 3 MinimalMIMSet(I; W)
1: Require: I: set of items, W: bin width
2: T_left, T_right ← [0 to W]: two arrays with all entries initialized at zero
3: for i ∈ I do
4:   B_i ← NormalPatterns(I \ {i}; W − w_i)
5:   for p ∈ B_i do
6:     T_left[p] ← T_left[p] + 1
7:     T_right[W − w_i − p] ← T_right[W − w_i − p] + 1
8:   end for
9: end for
10: for p = 1 to W do
11:   T_left[p] ← T_left[p] + T_left[p − 1]
12:   T_right[W − p] ← T_right[W − p] + T_right[W − p + 1]
13: end for
14: t_min ← 1, z_min ← T_left[0] + T_right[1]
15: for t = 2 to W do
16:   if T_left[t − 1] + T_right[t] < z_min then
17:     z_min ← T_left[t − 1] + T_right[t]
18:     t_min ← t
19:   end if
20: end for
21: M ← ∅
22: for i ∈ I do
23:   M_i ← ∅
24:   for p ∈ B_i do
25:     if p ≤ t_min − 1 then M_i ← M_i ∪ {p} end if
26:     if W − w_i − p ≥ t_min then M_i ← M_i ∪ {W − w_i − p} end if
27:   end for
28:   M ← M ∪ M_i
29: end for
30: return M

Note also that one could be interested in using a different criterion for the minimality of M, for example replacing Σ_{i∈I} |M_it| with |M_t| in (9). In this case too Proposition 4 holds via a trivial modification of Algorithm 3, in which at steps 6 and 7 the entries are simply set to one instead of being incremented, so that each pattern value is counted only once.
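The cumulative-count computation of t_min described in the proof of Proposition 4 can be sketched in Python as follows (a sketch of the described semantics, with our own names, not the authors' implementation):

```python
def minimal_mim_threshold(B, widths, W):
    """Pick t minimizing sum_i |M_it| with two cumulative arrays.
    B[i] is the set B_i; the sweep costs O(nW) once the B_i are known."""
    T_left = [0] * (W + 1)
    T_right = [0] * (W + 1)
    for Bi, wi in zip(B, widths):
        for p in Bi:
            T_left[p] += 1            # a left pattern of value p
            T_right[W - wi - p] += 1  # a right pattern of value W - wi - p
    for p in range(1, W + 1):         # prefix sums: left patterns <= p
        T_left[p] += T_left[p - 1]
    for p in range(W - 1, -1, -1):    # suffix sums: right patterns >= p
        T_right[p] += T_right[p + 1]
    # for threshold t: sum_i |M_it| = T_left[t - 1] + T_right[t]
    return min(range(1, W + 1), key=lambda t: T_left[t - 1] + T_right[t])
```

On the illustrative instance with widths (6, 5, 3, 2) and W = 8 used earlier in this section, the sweep selects t = 2.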

The Meet-in-the-Middle Principle for Cutting and Packing Problems CIRRELT-2016-28

Further Reduction by Preprocessing Criteria
The MIM patterns can be reduced, while preserving optimality, by applying two criteria.
Proposition 5 Consider a feasible solution packing a generic item set I′ ⊆ I in the bin. Consider then an item k ∈ I′ and a threshold value t. Among the equivalent solutions that pack I′, there exists one satisfying both the MIM patterns and the following condition: if t ≤ ⌈(W − w_k)/2⌉, then k is packed in a right pattern, else it is packed in a left pattern.
Proof Let us first prove the "if" part. Given a solution where k is packed in a left pattern, we transform it into an equivalent solution satisfying the MIM patterns and where k is packed in a right pattern. We use a two-step procedure (refer to Figure 2 below). In the first step we perform a reflection of the original solution using the axis x = W/2, and obtain a mirror solution. An item i originally packed in a pattern p_i is packed in a pattern p′_i = W − p_i − w_i in the mirror solution. The mirror solution is feasible but does not necessarily satisfy the MIM patterns, thus the second step of our procedure simply moves as much as possible to the left all items i packed in a pattern p′_i ≤ t − 1, and to the right all items i packed in p′_i ≥ t. The new solution is feasible and satisfies the MIM principle. To guarantee that in the new solution item k is in the right patterns, we need to show that p′_k ≥ t. As k was packed in a left pattern, we have p_k ≤ t − 1, and hence p′_k = W − p_k − w_k ≥ W − w_k − t + 1 ≥ t, where the last inequality follows from t ≤ ⌈(W − w_k)/2⌉ (which implies 2t ≤ W − w_k + 1). A similar reasoning applies to the "else" part by using the fact that t ≥ ⌈(W − w_k)/2⌉ + 1, and this concludes the proof.
Observe for example Figure 2, which depicts the two-step procedure used in the proof. We can thus apply Proposition 5 to reduce the search space while preserving optimality. We found it computationally convenient to apply it in the following way.
Preprocessing 1 Select an item k of minimum width, remove it from the computation of the left patterns when t ≤ ⌈(W − w k )/2⌉ and of the right patterns when t ≥ ⌈(W − w k )/2⌉ + 1.
Classical C&P preprocessing techniques attempt to increase the widths of the items as much as possible while preserving optimality (see, e.g., Boschetti et al. 2002). This usually results in more constrained packings that can be easier to solve in practice (because bounding techniques can have an improved performance). Here we pursue this line of research, but try to increase the width of an item when it is packed in a particular MIM pattern, and then show how this can also be used to reduce the number of patterns. To this aim, let us denote by w_kp the (possibly enlarged) width taken by item k when it is packed in pattern p.

Proposition 6 Consider an item k ∈ I and a pattern p ∈ M_kt for an arbitrary threshold t.

A) If p ∈ L_kt, let q = min {s ∈ ∪_{i∈I\{k}} M_it : s ≥ p + w_k}, setting q = W if no such pattern exists. Then, optimality is preserved by setting w_kp = q − p.

B) If p ∈ R_kt, let q = max {s + w_i : i ∈ I \ {k}, s ∈ M_it, s + w_i ≤ p}, setting q = 0 if no such position exists. Then, optimality is preserved by setting R_kt = R_kt ∪ {q} \ {p}, and setting w_kq = p + w_k − q.

Proof Let us first concentrate on part A). Parameter q gives the value of the leftmost pattern that can be used for packing an item at the right of item k, and takes the value W if no such pattern exists. If q > p + w_k, then, because of Proposition 1, we know that among the optimal solutions there exists one in which no item is packed in a position belonging to the width interval from p + w_k to q − 1 (where there are no MIM patterns). We can thus increase the item width to w_kp = q − p, because this does not cause an overlapping with other items in any solution that does not violate the MIM principle.
The proof of part B) is somehow specular.In this case we consider the set of all item packings at the left of k, and in this set we compute q as the rightmost position where an item can end (0 if no such position exists).Then, in a solution satisfying the MIM principle no item can have its lowest corner in the interval between q and p.Thus, we can move to the left the current right pattern, from p to q, and then enlarge the item width to w kq = p + w k − q (thus preserving the same value for the right border of item k when packed in this pattern), knowing that the packing of k in q will not cause any overlapping with other items.
Proposition 7 Consider an item k ∈ I and two patterns p, s ∈ M kt with p < s.By using Proposition 6, enlarge the width of the item when it is packed in the two patterns to, respectively, w kp and w ks .If s + w ks ≤ p + w kp , then the removal of pattern p from M kt preserves optimality.
Proof There are two cases of interest. If item k is packed in pattern s, then it occupies the width interval [s, s + w_ks]. If instead it is packed in p, then it occupies the interval [p, p + w_kp]. If p < s and s + w_ks ≤ p + w_kp, then [s, s + w_ks] ⊆ [p, p + w_kp]. Consequently any solution where k is packed in p can be replaced by an equivalent one in which k is packed in s, without affecting optimality. Note that the opposite does not hold, as there could be items whose right border is in the interval between p and s, which can be packed side by side with k when it is packed in s, but not when it is packed in p.
The above results lead to our second preprocessing criterion.
Preprocessing 2 Enlarge the widths of all items in all patterns using Proposition 6, first for all the left patterns and then for all the right ones.Then remove redundant patterns, first left and then right, following Proposition 7.
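For the left patterns of a single item, the two steps of Preprocessing 2 can be sketched as follows (a simplified, one-item illustration; `other_patterns` stands for the union of the MIM patterns of the other items, and all names and data are ours):

```python
def preprocess_left_patterns(patterns_k, w_k, other_patterns, W):
    """Proposition 6 A): enlarge w_kp to q - p, where q is the leftmost
    pattern of another item at or after p + w_k (W if none).
    Proposition 7: drop p if a later pattern s has s + w_ks <= p + w_kp."""
    enlarged = {}
    for p in patterns_k:
        q = min((s for s in other_patterns if s >= p + w_k), default=W)
        enlarged[p] = q - p
    pats = sorted(enlarged)
    kept = [p for i, p in enumerate(pats)
            if not any(s + enlarged[s] <= p + enlarged[p]
                       for s in pats[i + 1:])]
    return kept, enlarged
```

For instance, with patterns {0, 3, 4} for an item of width 2, other patterns {0, 3, 6}, and W = 8, the widths become 3, 3, and 2 respectively, and pattern 3 is removed because the enlarged interval [4, 6] of pattern 4 is contained in its enlarged interval [3, 6].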

Evaluation
We conclude this section by presenting a numerical evaluation of the size of the different sets of patterns that we discussed. We concentrate on the widths of three well-known sets of two-dimensional instances, namely, the cgcut instances by Christofides and Whitlock (1977), and the gcut and ngcut instances by Beasley (1985a,b).
The results that we obtained are summarized in Table 1, where the size of each set of patterns is obtained by summing the patterns of all items. In detail, the set N_i is the subset of N that can be used to pack item i, and is computed as N_i = {x ∈ N : x ≤ W − w_i}. Among the methods in the literature, the raster points provide on average the best values and are particularly effective on the gcut instances. The MIM patterns always provide equivalent or larger reductions than those in the literature, with a single exception on instance gcut05, where |M| > |T|. They are particularly effective for large values of the bin width: the reduction that they achieve is quite limited for the ngcut instances, becomes larger for the cgcut ones, and is very relevant for the gcut ones. In particular, for 10 out of 13 gcut instances the reduction in terms of total number of patterns is higher than 70%.
In Figure 3 we show the evolution of the MIM patterns for instance ngcut10 under different threshold values t. For the rightmost value, t = W, the set of MIM patterns coincides with B (Proposition 2). The number of left patterns increases when t increases, the opposite happens for the number of right patterns, and in t_min = 13 their sum achieves the minimum value 123. The preprocessing techniques always decrease the number of MIM patterns, and also lead to a minimum in t_min = 13 (of value 102). This is a typical behavior noticed for many instances.


Application I: Bin Packing and Cutting Stock Problem
The bin packing problem (BPP) requires packing (cutting) a set of n one-dimensional items, each having width w_i, into (from) the minimum number of identical bins of capacity W. The cutting stock problem (CSP) is the BPP version in which all items having the same width are grouped together. The CSP input consists of m item types, where each type i has width w_i and a number of copies (demand) equal to d_i (and n = Σ_{i=1}^{m} d_i). Branch-and-price algorithms are the most powerful technique to solve the BPP and the CSP, but in recent years, thanks also to the progress of commercial MILP solvers, pseudo-polynomial formulations became a valid alternative, see Delorme et al. (2016, forthcoming).
To the best of our knowledge, the most famous among these formulations is the arc-flow one by Valério de Carvalho (1999), which explicitly refers to the CSP. Let G = (V, A) be a digraph where V = {0, 1, . . ., W} is the set of vertices representing all partial bin fillings, and A is the set of arcs (p, q) representing either (i) the packing of an item of width q − p starting from the partial bin filling p ("item arc"), or (ii) an empty portion of the bin between fillings p and q ("loss arc"). By introducing x_pq as an integer variable giving the number of times arc (p, q) ∈ A is selected, and defining δ−(q) (resp. δ+(q)) as the set of arcs entering (resp. leaving) a vertex q, the CSP can be modeled as

min z  (11)

s.t.  Σ_{(q,r)∈δ+(q)} x_qr − Σ_{(p,q)∈δ−(q)} x_pq = z if q = 0, −z if q = W, 0 otherwise, for q ∈ V,  (12)

Σ_{(p,p+w_i)∈A} x_{p,p+w_i} ≥ d_i  for i = 1, 2, . . ., m,  (13)

x_pq ≥ 0 and integer  for (p, q) ∈ A.  (14)
Constraints (12) impose flow conservation, whereas constraints (13) state that all item demands must be fulfilled. Each possible packing of a bin is thus represented by a path from 0 to W in the digraph, and the aim is to minimize the number of selected paths. The computational behavior of model (11)-(14) strictly depends on the set of arcs, which should guarantee optimality but at the same time be as small as possible. To this aim, Valério de Carvalho (1999) preliminarily sorted the items by non-increasing width, and then created only those item arcs that fulfill this sorting. In this way, the largest item can only start at zero, the second largest item can start at zero or right after the largest one, and so on (clearly, this would not preserve optimality for problems where items have two or more dimensions). He then imposed that loss arcs cannot be used before item arcs, and he created only unit-width loss arcs in the interval [w_min, w_min + 1, . . ., W]. In the following we call this the normal arc-flow formulation.
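This symmetry-breaking rule can be sketched as follows (our own reconstruction for illustration; only the item arcs are built, loss arcs are omitted):

```python
def csp_item_arcs(widths, demands, W):
    """Item arcs of the normal arc-flow graph: types sorted by
    non-increasing width; a copy of type i may start only at a filling
    reachable with the previous types and the earlier copies of i."""
    order = sorted(range(len(widths)), key=lambda i: -widths[i])
    reach = {0}                       # fillings reachable so far
    arcs = set()
    for i in order:
        w, d = widths[i], demands[i]
        starts = set(reach)           # valid starts for the first copy
        for _ in range(d):
            new = {p + w for p in starts if p + w <= W}
            arcs |= {(p, p + w, i) for p in starts if p + w <= W}
            reach |= new
            starts = new              # the next copy starts after this one
    return arcs
```

On the instance of Figure 4 below (w = (6, 5, 3, 2), d = (1, 1, 2, 2), W = 8), this produces 10 distinct item arcs, matching the count reported for the normal arc-flow formulation.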
Here we show how further improvements can be obtained with the MIM principle. Suppose again that items are sorted by non-increasing width. Moreover, for any i = 1, 2, . . ., m, let d̃^i_j = d_j for all j = 1, 2, . . ., i − 1 and d̃^i_i = d_i − 1.

Proposition 8 In the normal arc-flow formulation, the set of patterns where an item i can be packed (i.e., partial bin fillings where an item arc can start) is given by

B′_i = {x = Σ_{j=1}^{i} a_j w_j : x ≤ W − w_i, a_j integer, 0 ≤ a_j ≤ d̃^i_j for j = 1, 2, . . ., i}.  (15)

In practice, B′_i is the subset of B_i (recall that items having the same width are grouped together in item types in the CSP notation) which is induced by the adopted ordering: a copy of item type i can have its lowest corner in a pattern created by combinations of the previous items and of the first d_i − 1 copies of i.
In our implementation, we take advantage of the item ordering to obtain a simple computation of all the B′_i sets. Indeed, it is enough to run a modified version of Algorithm 1 that provides all B′_i sets in just one call. Details are provided in the electronic companion to this paper. We also introduce two small reductions with respect to Valério de Carvalho (1999): in all our formulations we remove the original unit-width loss arcs and introduce only loss arcs that connect pairs of consecutive vertices in B′; we remove a loss arc connecting two vertices if there is an item arc connecting the same two vertices (this is possible because of the "≥" in (13)).
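A one-pass computation consistent with this description can be sketched as follows (our reconstruction, not the version in the electronic companion): process the types in non-increasing width order, snapshot the reachable fillings for type i after adding d_i − 1 of its copies, then add the last copy before moving to the next type.

```python
def all_bprime(widths, demands, W):
    """All B'_i of (15) in a single sweep; types are assumed to be
    indexed in non-increasing width order."""
    reach = [False] * (W + 1)
    reach[0] = True

    def add_copy(w):
        # add one copy of width w to the reachable fillings (0/1 DP step)
        for p in range(W - w, -1, -1):
            if reach[p]:
                reach[p + w] = True

    bprime = []
    for w, d in zip(widths, demands):
        for _ in range(d - 1):        # the first d_i - 1 copies of type i
            add_copy(w)
        bprime.append([p for p in range(W - w + 1) if reach[p]])
        add_copy(w)                   # last copy, available to later types
    return bprime
```

On the instance of Figure 4 (w = (6, 5, 3, 2), d = (1, 1, 2, 2), W = 8), this yields B′ = ({0}, {0}, {0, 3, 5}, {0, 2, 3, 5, 6}).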
Similarly to what was seen for the regular normal patterns in (15), the MIM patterns may also be restated by considering the item sorting. We can formally define the left and right patterns for the CSP as

L′_it = {x ∈ B′_i : x ≤ min{t − 1, W − w_i}}  (16)

and

R′_it = {W − w_i − x : x ∈ B′_i, x ≤ W − w_i − t}  (17)

for i ∈ I, and then obtain the minimal set M′ of MIM patterns by following (7), (8), and (9). In our implementation, we compute M′ by using a modified version of Algorithm 3 that takes advantage of the item sorting. Also this algorithm is provided in the electronic companion. Once the minimal set has been obtained, we use it to build a reduced set A of arcs by considering, for each item i, only those item arcs that start in a MIM pattern. We obtain a small reduction by imposing each item of width larger than W/2 to have its lowest corner in 0. A further reduction is obtained by applying Preprocessing 2. We disregard instead Preprocessing 1 because it is incompatible with the adopted non-increasing width sorting. An example of the graphs associated with the three arc-flow formulations (normal, MIM-based, and MIM-based plus Preprocessing 2) is given in Figure 4. It refers to a CSP instance with w = (6, 5, 3, 2), d = (1, 1, 2, 2), and W = 8. Figure 4-(a) presents the normal arc-flow formulation, which contains 10 item arcs (depicted in straight lines) and 6 loss arcs (dotted lines). Figure 4-(b) gives the MIM-based formulation computed for t = 2, which contains 4 left arcs (straight lines), 5 right arcs (dashed lines), and 4 loss arcs (dotted lines). Figure 4-(c) shows the further reduction obtained by Preprocessing 2: item arc (4,6) is enlarged to (3,6) and then removed because it is dominated by (3,5); consequently, the two loss arcs (3,4) and (4,5) are also removed. The three arc-flow formulations have been tested on the classical CSP and BPP benchmark sets, with a time limit of 1200 seconds per instance (refer to Delorme et al.
2016, forthcoming, for details on the benchmark sets). Table 2 shows the results that we obtained. The table first reports the name of the set, the number of instances (#inst.), the average number of items (n), and the average bin width (W). Then, for each formulation, it reports the average number of arcs (|A|), the total number of instances unsolved to proven optimality (#fails), and the average number of elapsed seconds (sec). The minimum values of #fails for each group of instances are highlighted in bold. The last line reports the overall total numbers of instances and fails, and the overall average numbers of arcs and seconds. The arc-flow formulation using the MIM patterns needs roughly 50% fewer arcs than the normal one, and it is more efficient. This behavior is particularly evident for the set Scholl 3: the average number of arcs decreases from about 1.5 million to about 50,000; the MIM-based formulation solves all instances to proven optimality, while the normal one could not solve any. A further reduction in the number of arcs is obtained by the preprocessing. Overall, the MIM-based formulation solves to proven optimality 48 more instances than the normal one and requires a smaller average computational time. The formulation using the preprocessing technique solves 4 more instances to proven optimality with a similar computational effort.

Application II: Non-Exact Two-Stage Cutting Stock Problem

In this section we solve the non-exact two-stage guillotine cutting stock problem (2S-CSP), which is the generalization of the CSP of Section 3 in which: (i) items and bins are two-dimensional rectangles; (ii) items can be produced from the bins by a series of successive guillotine cuts, that is, cuts that traverse the bin (or the residual bin portion under processing) entirely from one edge to the other; (iii) just two series of cuts can be used; and (iv) a final trimming stage is possibly adopted to remove waste. In practice each bin is first cut along its height into horizontal
strips (1st stage), these strips are then cut vertically across their widths (2nd stage), and then, if the obtained items are higher than the required ones, a final horizontal cutting stage is adopted to remove the waste. The 2S-CSP was introduced in the 1960s by Gilmore and Gomory (1965), and has attracted the interest of many researchers and practitioners in the following decades because it naturally models cutting problems that arise in many production industries such as steel, wood, and glass. The problem has been solved with several optimization techniques, including MILP models by Lodi et al. (2004), arc-flow formulations by Macedo et al. (2010) and Silva et al. (2010), and branch-and-price algorithms by Alves et al. (2009) and Mrad et al. (2013).
Here we apply the MIM patterns to the formulation by Macedo et al. (2010), which we briefly recall. To ease notation, we denote the height of the items by h_i instead of w_{i2} and that of the bins by H instead of W_2. Let m* be the number of different item heights and {h*_1, h*_2, . . ., h*_{m*}} the corresponding set. We create m* + 1 digraphs G_0, G_1, . . ., G_{m*} as follows. G_0 = (V_0, A_0) is a standard digraph associated with the 1st stage cut, where V_0 = {0, 1, . . ., H} and A_0 is the set of arcs (a, b) representing either the cutting of a strip of height b − a starting at height a, or an empty portion of a bin between heights a and b. G_s = (V_s, A_s) is instead a multi-digraph associated with a 2nd stage cut on a strip s of height h*_s, for s ∈ {1, 2, . . ., m*}, where V_s = {0, 1, . . ., W} and A_s contains arcs (d, e, i) of two types: for i ∈ I, arc (d, d + w_i, i) refers to the cut of an item i starting at width d; for i = 0, arc (d, e, 0) is a loss arc representing an empty portion of the strip between widths d and e. Let also A_s(i) ⊆ A_s define the subset of arcs referring to a given item i ∈ I.
Following Macedo et al. (2010), the set A_0 is constructed by creating all patterns where an item can start, as in (15), but replacing widths with heights and preliminarily sorting items according to non-increasing height. The sets A_s are built using the same principle, but imposing that only arcs referring to items i of height h_i = h*_s can start from vertex 0. In terms of decision variables, let z_0 be the number of used bins, z_s the number of adopted strips of height h*_s, y_{ab} the number of times in which arc (a, b) is used for a 1st stage cut, and x^s_{dei} the number of times in which arc (d, e, i) is used for a 2nd stage cut on a strip s. The 2S-CSP is thus:

min z_0   (18)
s.t.  Σ_{(a,b)∈δ−(b)} y_{ab} − Σ_{(b,c)∈δ+(b)} y_{bc} = −z_0 if b = 0, z_0 if b = H, 0 otherwise,  ∀ b ∈ V_0,   (19)
z_s = Σ_{(a,a+h*_s)∈A_0} y_{a,a+h*_s},  s = 1, 2, . . ., m*,   (20)
Σ_{(d,e,i)∈δ−(e)} x^s_{dei} − Σ_{(e,f,i)∈δ+(e)} x^s_{efi} = −z_s if e = 0, z_s if e = W, 0 otherwise,  ∀ e ∈ V_s, s = 1, 2, . . ., m*,   (21)
Σ_{s=1,2,...,m*} Σ_{(d,d+w_i,i)∈A_s(i)} x^s_{d,d+w_i,i} ≥ d_i,  ∀ i ∈ I,   (22)
y, x, z ≥ 0 and integer.

Flow conservation is imposed for the 1st stage by constraints (19) and for the 2nd stage by constraints (21). Constraints (20) link together the y and z variables, and constraints (22) impose that all demands are fulfilled. We call this arc-flow formulation normal. Similarly to what was done for the CSP, also for the 2S-CSP we improve the normal formulation by replacing the regular normal patterns with the MIM patterns, and then with the MIM patterns plus Preprocessing 2. The way in which we impose these modifications follows the footsteps of what was done in Section 3 for the CSP, but uses the item orderings suggested by Macedo et al. (2010).
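A minimal sketch of how the 2nd-stage item arcs of a set A_s can be generated, assuming the rule above that only items of height equal to h*_s may start at vertex 0. Demand multiplicities and loss arcs are omitted for brevity, and the function name is hypothetical.

```python
def strip_arcs(widths, heights, W, h_strip):
    """Item arcs (d, e, i) of the 2nd-stage multigraph for a strip of height
    h_strip: an arc (d, d + w_i, i) cuts item i starting at width d.
    Only items of height exactly h_strip may start at vertex 0, so that the
    first cut of a strip defines its height."""
    fit = [i for i in range(len(widths)) if heights[i] <= h_strip]
    reach = {0}                      # vertices reachable by item arcs
    arcs = []
    for d in range(W):               # forward construction, left to right
        if d not in reach:
            continue
        for i in fit:
            if d == 0 and heights[i] != h_strip:
                continue             # height-defining items only at vertex 0
            e = d + widths[i]
            if e <= W:
                arcs.append((d, e, i))
                reach.add(e)
    return arcs
```

For instance, extending the CSP data of Figure 4 with hypothetical heights (4, 4, 2, 2), a strip of height 4 can start only with one of the first two items, and the shorter items then appear only deeper in the strip.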
The results of our implementations are shown in Table 3. The columns have the same meanings as those in Table 2. The formulations have been tested on the two publicly available benchmark sets for the 2S-CSP, A and ATP, considering both the case in which the 1st stage cut is along the height (A(H) and ATP(H)) and the case in which it is along the width (A(W) and ATP(W)). Following previous works in the literature, each model was allowed a time limit of 7200 seconds. The MIM-based formulation improves on the normal one by reducing the average number of arcs (to almost one third) and the average computational effort, finding 5 additional proven optimal solutions. The use of Preprocessing 2 reduces the number of arcs by an additional 5%, but does not help improve the number of proven optimal solutions, as the number of fails increases from 16 to 17. We believe this fact may be attributed to a worsening of the behavior of the automatic Cplex heuristics, which fail to find a good feasible solution for two instances (for which a quick solution was instead found when Preprocessing 2 was not used).
In Table 4 we compare our best algorithm with those in the literature, which we call for short MMH for Mrad et al. (2013), MAV for Macedo et al. (2010), LMV for Lodi et al. (2004), AMM for Alves et al. (2009), and SAV for Silva et al. (2010). The MIM-based arc-flow formulation has a smaller number of fails than the other algorithms. Algorithm MAV is also based on the normal arc-flow formulation, but it includes a number of improvements and is very competitive, as it solves all instances of the set A(H) to proven optimality. Still, the MIM-based arc-flow formulation is considerably faster.

5 Application III: Two-Dimensional Orthogonal Packing Problem

The two-dimensional orthogonal packing problem (2OPP) is the basic feasibility test of determining whether a set I of rectangular items fits or not into a rectangular bin. Rotation of the items is not allowed. The 2OPP arises as a subproblem in many two-dimensional C&P problems, such as knapsack, bin packing, and strip packing. It has been tackled with several algorithms, including, e.g., constraint programming techniques by Clautiaux et al. (2008), and mixed methods by Mesyagutov et al. (2012) and Belov and Rohling (2013). Here we solve it first by means of branch-and-bound algorithms and then by primal decomposition techniques. Once again we ease notation by writing h_i instead of w_{i2} and H instead of W_2 (recall the general problem definition in Section 2).

Combinatorial Branch-and-Bound Algorithms
Combinatorial branch-and-bound (B&B) algorithms for C&P attempt to build solutions by packing one item at a time in the bin, according to a specific enumeration rule. Here we use a basic B&B that builds upon the rule by Boschetti and Montaletti (2010). We start from the empty bin and pack items one at a time from the bottom to the top. At a given partial packing, let the skyline represent the set of the top borders of the items (or of the bottom of the bin where no item has been packed yet), that is, a set of consecutive horizontal segments positioned at different y-coordinates.
Let the niche be the segment of the skyline positioned at the smallest y-coordinate (breaking ties by smallest x-coordinate). The borders of the niche are the vertical segments at its left and right. For example, in Figure 1-(b) the niche is the horizontal segment [17, 20] at y-coordinate 0, its left border is the vertical segment [0, 4], and its right border is the vertical segment [0, 20]. Let p be the x-coordinate of the leftmost position of the current niche. Note that p ∈ B, because it is a feasible combination of item widths. We attempt packing in p any item i that can fit and for which the condition p ∈ B_i is satisfied. We select items by non-increasing order of width, and create a new node in the enumeration tree for each resulting packing. We also create a last additional node, which we call loss node, in which no item is packed in p. Let ĥ be the minimum among the heights of the left and right borders of the current niche and the heights of the residual items to be packed. When a loss node is explored, the portion of the niche going from p to the successive pattern in B and having height ĥ is closed and considered unavailable for packing. To this aim, the set B is re-evaluated at every node by taking into consideration only the items that still have to be packed. The next pattern in B inside the niche is selected, if any, and the process is iterated, from left to right. When no such pattern exists, the current niche is closed and the next niche is computed. The tree is explored in depth-first fashion. Nodes are fathomed by the use of a simple bound comparing the area of the residual items to be packed with the residual bin area (continuous bound), and by the so-called DP-cuts of Kenmochi et al. (2009).
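The skyline and niche concepts can be sketched in a few lines. The segment representation and helper names below are ours, chosen for illustration; the example mirrors the niche [17, 20] at y-coordinate 0 mentioned above.

```python
def find_niche(skyline):
    """Return the niche of a skyline given as (x_start, x_end, y) segments:
    the segment at the smallest y, ties broken by smallest x_start."""
    return min(skyline, key=lambda seg: (seg[2], seg[0]))

def pack_in_niche(skyline, w, h):
    """Pack an item of width w and height h at the leftmost position of the
    niche and return the updated skyline (merging of adjacent segments at
    equal height is omitted for brevity)."""
    x0, x1, y = find_niche(skyline)
    assert x0 + w <= x1, "item does not fit in the niche"
    out = [s for s in skyline if s != (x0, x1, y)]
    out.append((x0, x0 + w, y + h))      # top border of the new item
    if x0 + w < x1:
        out.append((x0 + w, x1, y))      # residual part of the niche
    return sorted(out)
```

The sketch only maintains the skyline; in the actual B&B this update is interleaved with the pattern tests p ∈ B_i and with the closing of loss areas.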
Many improvements could be applied to this simple scheme (preprocessing techniques, improved computations of ĥ, techniques to fathom nodes, . . .), but this is beyond the scope of this paper.
Here, similarly to what was done in the previous sections, we call this basic B&B technique normal and attempt to improve it by the use of the MIM patterns. The first improved version is the one in which B is replaced by M. This reduces the number of nodes because: (i) only items i for which p ∈ M_i are selected for packing in p, and (ii) the distance between two consecutive patterns in M is typically larger than in B, and hence the area made unavailable when a loss node is selected is larger. The second version also makes use of Preprocessings 1 and 2. The last improved version also changes the way in which the tree is explored, by including a new branching scheme (MIM-branch). Let t_min be the threshold value used to build M (see Section 2). If p < t_min, then we proceed from left to right in the selection of the positions in the niche, as done in the previous branching schemes; otherwise, we proceed from right to left. In other words, if p ≥ t_min, then we select for packing the first position q from the right, and pack there all items i for which q ∈ M_i. A computational evaluation of these four techniques is proposed in Table 5. Each algorithm was run with a time limit of 900 seconds. We selected the well-known 2OPP benchmark instances, namely sets E, C, N, and T, plus the two sets of instances created by Mesyagutov et al. (2012), which we call MSB-450 and MSB-630. We refer to Mesyagutov et al. (2012) for details on all benchmark sets. The column headings are the same as in the previous tables, with the exception of "nodes", which reports the number, in millions, of explored nodes.

Table 5: Evaluation of different B&B algorithms on 2OPP instances (nodes = 10^6 explored nodes).
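The MIM-branch position selection described above can be sketched as follows: positions inside the current niche are taken left to right while the leftmost candidate lies below t_min, and right to left otherwise. The function name and interface are hypothetical simplifications.

```python
def next_position(patterns, lo, hi, t_min):
    """Select the next packing position among the patterns falling inside
    the current niche [lo, hi): the leftmost one if it lies below the
    threshold t_min (left-to-right scan), the rightmost one otherwise
    (right-to-left scan). Returns None when the niche is exhausted."""
    inside = [p for p in patterns if lo <= p < hi]
    if not inside:
        return None                 # niche exhausted: close it
    p = min(inside)
    return p if p < t_min else max(inside)
```

Because MIM patterns cluster near the two ends of the bin, this rule tends to fill a niche from both borders toward the middle, which is where the reduction in explored nodes comes from.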
The normal B&B fails to provide a certified proof of feasibility or infeasibility for 345 out of 1213 instances. The effect of the MIM patterns is not relevant for sets E, C, N, and T, but it is quite remarkable for the MSB sets, where #fails decreases by 37 units. The use of the new branching scheme is effective, because it allows the algorithm to solve 10 more instances and obtain a strong decrease in the number of explored nodes.

Primal Decomposition Methods

The model of Côté et al. (2014a) is based on a primal decomposition with combinatorial Benders cuts. The decomposition first takes care of the horizontal positions of the items, and makes use of a binary variable x_{ip} taking the value 1 if item i is packed in pattern p along the x-axis, and 0 otherwise. Let B_{i,q} denote the subset of patterns for which item i occupies position q, formally B_{i,q} = {p ∈ B_i : q − w_i + 1 ≤ p ≤ q}. Then the 2OPP can be modeled as the following integer linear feasibility test:

Σ_{p∈B_i} x_{ip} = 1,  ∀ i ∈ I,   (26)
Σ_{i∈I} Σ_{p∈B_{i,q}} h_i x_{ip} ≤ H,  ∀ q = 0, 1, . . ., W − 1,   (27)
Σ_{i∈I} x_{i,p^s_i} ≤ n − 1,  ∀ s infeasible for the SP,   (28)
x_{ip} ∈ {0, 1},  ∀ i ∈ I, p ∈ B_i.   (29)

Constraints (26) impose that each item is packed once. Constraints (27) state that the sum of the heights of the items covering a certain pattern q does not exceed the bin height. Before discussing constraints (28), let us focus on the sub-model induced by (26)-(27) and (29). Solving this sub-model requires finding an allocation of unit-width slices of the items into the bin, in such a way that all slices of an item are contiguous with one another. This problem (known in the literature as the bar relaxation or as the bin packing problem with contiguity constraints) corresponds to the first master problem (MP) of the proposed decomposition. Suppose a solution s for the MP is found, in which each item i ∈ I is packed in a pattern p^s_i. Then the slave problem (SP) is to determine a set of vertical positions for all the items that leads to a packing without overlapping, if any. If such a set is found, then the model returns a feasible 2OPP solution; otherwise, a feasibility cut (28) is added to the MP to disregard solution s. The approach works well when the simple feasibility cuts are improved into the much stronger lifted combinatorial Benders cuts, as discussed in Côté et al. (2014a).
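Two building blocks of the master problem can be illustrated directly from the definitions above: the first helper computes the sets B_{i,q}, and the second checks the bar-relaxation constraints (27) for a given x-assignment. Both are illustrative sketches with hypothetical names, not the implementation of Côté et al. (2014a).

```python
def patterns_covering(B_i, w_i, q):
    """B_{i,q}: patterns of item i that make it occupy unit-width slice q,
    i.e. {p in B_i : q - w_i + 1 <= p <= q}."""
    return {p for p in B_i if q - w_i + 1 <= p <= q}

def bar_feasible(assignment, widths, heights, W, H):
    """Check constraints (27): for every slice q, the total height of the
    items whose horizontal placement covers q must not exceed H.
    `assignment` maps item index i to its chosen pattern p."""
    load = [0] * W
    for i, p in assignment.items():
        for q in range(p, p + widths[i]):
            load[q] += heights[i]
    return all(v <= H for v in load)
```

When `bar_feasible` holds but the slave problem finds no overlap-free vertical placement, the assignment {i: p^s_i} is exactly what the feasibility cut (28) forbids from reappearing.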
Here we solve the problem by using the same algorithm as in Côté et al. (2014a): the MP is solved with Cplex, the SP with a dedicated branch-and-bound (Section 3 of their article), and the feasibility cuts are improved with greedy procedures and a lifting based on linear programming (Section 4 of their article). The only difference is that we replace the regular patterns B with the MIM patterns M in model (26)-(29).
The results that we obtained are presented in Table 6. The column headings are the same used for the previous tables, with the addition of "var", which denotes the average number of x_{ip} variables in the different mathematical models. Each algorithm was run with a time limit of 900 seconds. The normal decomposition fails for 181 instances. The MIM patterns are effective in reducing the number of variables and thus allow the algorithm to close 25 more instances. The two preprocessing techniques are not effective on sets N and T. This probably happens because these sets are composed entirely of 2OPP feasible instances with zero waste ("perfect packing" instances), which have been created mostly with the aim of testing heuristic algorithms, and, as noticed in Section 4, preprocessing may have a slightly negative influence on the automatic Cplex heuristics. The preprocessing techniques provide instead positive improvements on the two MSB sets, where they allow the decomposition method to find 5 more proven optimal solutions.

Conclusions
In this paper we proposed a principle to reduce the number of patterns in multi-dimensional cutting and packing (C&P) problems. It consists of a new set of patterns, called meet-in-the-middle (MIM), obtained by aligning items along each dimension either to the bottom of the bin or to the top of it.
The computation of the MIM patterns does not require additional effort with respect to previous methods in the literature and usually leads to a smaller number of patterns. Further reduction criteria can also be applied. Extensive computational tests showed the efficiency of the proposed techniques on a number of relevant C&P problems.
The MIM principle can be used in several optimization algorithms, because it usually reduces the number of variables required by mathematical models and the number of nodes explored by combinatorial branch-and-bound algorithms. The principle can be adapted to a large number of applications, not only in C&P but also in other combinatorial optimization fields, such as vehicle routing and scheduling. There is thus a large number of possible future research directions.
Consider k = 2 and t = 4 ≤ ⌈(W − w_k)/2⌉ = 9. Figure 2-(a) gives a solution in which k is packed in a left pattern; Figure 2-(b) provides the corresponding mirror solution; and Figure 2-(c) presents the solution obtained at the end of the second step of our procedure, satisfying the MIM principle and having k packed in a right pattern.

Figure 2: (a) solution satisfying MIM and having item k = 2 packed in a left pattern; (b) mirror solution; (c) solution satisfying MIM and having k packed in a right pattern.

Figure 3: MIM patterns for different values of t on instance ngcut10.
For the MIM patterns we tested both versions, in which either Σ_{i∈I} |M_{is}| or |M_s| is minimized in (9). In the last line we show the percentage reductions of a set, say X, with respect to the normal patterns, computed as 100(Σ_i |X_i| − Σ_i |N_i|)/Σ_i |N_i| and 100(|X| − |N|)/|N|.
Côté et al. (2014a) solved the strip packing problem by iteratively calling an inner model to test the feasibility of 2OPP instances. Here we describe their model for the 2OPP, which is based on a primal decomposition with combinatorial Benders cuts.

Table 1: Computational evaluation (% reductions evaluated with respect to N).
For the literature, the table gives the cardinality of the sets (namely, |N|, |B|, and |T|) and the total number of patterns obtained by


Table 2: Impact of the MIM patterns on standard CSP instances.

Table 3: Impact of the MIM patterns on the 2S-CSP.


Table 6: Evaluation of different decomposition algorithms on 2OPP instances.