Splitting Methods for a Class of Horizontal Linear Complementarity Problems

In this paper, we propose two splitting methods for solving horizontal linear complementarity problems characterized by matrices with positive diagonal elements. The proposed procedures are based on the Jacobi and on the Gauss–Seidel iterations and differ from existing techniques in that they act directly and simultaneously on both matrices of the problem. We prove the convergence of the methods under some assumptions on the diagonal dominance of the matrices of the problem. Several numerical experiments, including large-scale problems of practical interest, demonstrate the capabilities of the proposed methods in various situations.


Introduction
Horizontal linear complementarity problems (HLCPs) are a well-known generalization of linear complementarity problems (LCPs) [1] and have applications in many different fields, including structural mechanics, mechanical and electrical engineering, and transportation science. Several solution techniques have therefore been devised, and new ones are proposed to this day. In particular, some popular approaches are based on Interior Point (IP) methods (see, e.g., [2]) or rely on reducing the HLCP to an LCP.


The Projected Jacobi Iteration for HLCPs
Let us consider the HLCP(A, B, c): find x, y ∈ R^n such that

Ax − By = c,  x ≥ 0,  y ≥ 0,  x^T y = 0,  (1)

with A, B ∈ R^{n×n} having positive diagonal elements. Given two arbitrary vectors x^(0), y^(0) ∈ R^n, for k = 0, 1, …, let x_i^(k) and y_i^(k) indicate the i-th components of x^(k) and y^(k), respectively. Collecting the terms in x^(k) and in y^(k) on the right-hand side of the system, for i = 1, …, n we obtain {x_i^(k+1), y_i^(k+1)} as the solution of the scalar HLCP

a_ii x_i − b_ii y_i = w_i^(k),  x_i ≥ 0,  y_i ≥ 0,  x_i y_i = 0,  (2)

with a_ii, b_ii > 0 by the positivity of the diagonals of A and B and with

w_i^(k) := c_i − Σ_{j≠i} a_ij x_j^(k) + Σ_{j≠i} b_ij y_j^(k),  (3)

where, for compactness, we have set Σ_{j≠i} := Σ_{j=1, j≠i}^{n}.
Thus, for i = 1, …, n, the solution {x_i^(k+1), y_i^(k+1)} of (2) has the simple form

x_i^(k+1) = max{0, w_i^(k)/a_ii},  y_i^(k+1) = max{0, −w_i^(k)/b_ii}.  (4)

We refer to this method as the projected Jacobi method for HLCPs. In the following, it is useful to denote w_i^(k)+ := max{0, w_i^(k)} and w_i^(k)− := max{0, −w_i^(k)}, so that

w_i^(k) = w_i^(k)+ − w_i^(k)−,  (5)

and to introduce a few results involving sums and differences of the terms w_i^(k)+ and w_i^(k)−. In particular, it is easy to notice that w_i^(k)+ + w_i^(k)− = |w_i^(k)|. Moreover, for any two arbitrary real numbers c, d (with c_+ := max{0, c} and c_− := max{0, −c}), we have

|c_+ − d_+| + |c_− − d_−| = |c − d|.  (6)

Indeed, if c > d ≥ 0 or if 0 > c ≥ d, one of the two terms in absolute value on the left-hand side vanishes and the equality is readily proved. Similarly, if c ≥ 0 > d, the left-hand side becomes |c| + |d|, with |c| = c and |d| = −d. Then, c − d = |c − d| and the equality holds true. All other possible cases easily follow. Using these results, we hereafter analyze the convergence of the projected Jacobi method. We first assume A and B strictly diagonally dominant, a case in which (assuming also the positivity of the diagonals) the solution of the HLCP(A, B, c) is unique [12].

Theorem 2.1 Let A, B ∈ R^{n×n} be strictly diagonally dominant by columns with positive diagonal elements. Then, the sequence {x^(k), y^(k)} generated by (3)-(4) converges to the unique solution (x*, y*) of the HLCP(A, B, c) for all initial vectors x^(0), y^(0).
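As an illustration, the update (3)-(4) can be sketched in a few lines of NumPy (a sketch with our own naming, not the authors' implementation; it assumes A, B, c are given as arrays with positive diagonals in A and B):

```python
import numpy as np

def projected_jacobi(A, B, c, x0, y0, iters=500):
    """Sketch of the projected Jacobi iteration for HLCP(A, B, c):
    find x, y >= 0 with A x - B y = c and x^T y = 0.
    Assumes positive diagonal elements in A and B."""
    x, y = x0.astype(float).copy(), y0.astype(float).copy()
    da, db = np.diag(A).copy(), np.diag(B).copy()
    for _ in range(iters):
        # w_i^(k) = c_i - sum_{j != i} a_ij x_j^(k) + sum_{j != i} b_ij y_j^(k)
        w = c - (A @ x - da * x) + (B @ y - db * y)
        x = np.maximum(0.0, w / da)    # x_i^(k+1) = max{0, w_i^(k) / a_ii}
        y = np.maximum(0.0, -w / db)   # y_i^(k+1) = max{0, -w_i^(k) / b_ii}
    return x, y
```

Note that every iterate satisfies complementarity by construction: for each i, at most one of x_i and y_i is nonzero, depending on the sign of w_i^(k).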
Proof Let us denote

w*_i := c_i − Σ_{j≠i} a_ij x*_j + Σ_{j≠i} b_ij y*_j,  (7)

and consider that we can thus write¹ x*_i = w*_i+/a_ii and y*_i = w*_i−/b_ii. Then, noticing that, by (5), we can express the iterates in (4) as x_i^(k+1) = w_i^(k)+/a_ii and y_i^(k+1) = w_i^(k)−/b_ii, and considering the definitions of w_i^(k) in (3) and of w*_i in (7), we can express the residual at the i-th index and at the k-th iteration as

w_i^(k) − w*_i = −Σ_{j≠i} (a_ij/a_jj)(w_j^(k−1)+ − w*_j+) + Σ_{j≠i} (b_ij/b_jj)(w_j^(k−1)− − w*_j−).  (8)

In the trivial case of A and B diagonal, the residual is evidently zero in a single iteration. Otherwise, taking absolute values in (8), summing over i = 1, …, n and applying the triangle inequality, we can write

Σ_{i=1}^{n} |w_i^(k) − w*_i| ≤ Σ_{j=1}^{n} (Σ_{i≠j} |a_ij|/a_jj) |w_j^(k−1)+ − w*_j+| + Σ_{j=1}^{n} (Σ_{i≠j} |b_ij|/b_jj) |w_j^(k−1)− − w*_j−|,  (9)

where, in the last passage, we have reversed the order of the summations [13, p. 36] and have brought the terms not dependent on the inner summation index out of the inner sums.

¹ It is easy to prove that max{0, w*_i/a_ii} and max{0, −w*_i/b_ii} are the solution components x*_i, y*_i of the HLCP(A, B, c), i = 1, …, n. Indeed, replacing them in the i-th row of Ax − By = c satisfies the equation by (7). By the positivity of a_ii and b_ii, we then have the nonnegativity of x*_i and y*_i and that x*_i is positive only when y*_i = 0 (and vice versa).

If we now define

μ := max_{l=1,…,n} max{ Σ_{j≠l} |a_jl|/a_ll , Σ_{j≠l} |b_jl|/b_ll },

we can make a further evaluation and write

Σ_{i=1}^{n} |w_i^(k) − w*_i| ≤ μ Σ_{j=1}^{n} ( |w_j^(k−1)+ − w*_j+| + |w_j^(k−1)− − w*_j−| ) = μ Σ_{j=1}^{n} |w_j^(k−1) − w*_j|,

where the last equality follows from (6) and where μ ∈ ]0, 1[ by the strict column diagonal dominance of A and B. Then, proceeding iteratively, we find

Σ_{i=1}^{n} |w_i^(k) − w*_i| ≤ μ^k Σ_{j=1}^{n} |w_j^(0) − w*_j|,

so that lim_{k→∞} |w_i^(k) − w*_i| = 0 for any generic i-th index. This implies that w_i^(k) → w*_i for k → ∞ for all i = 1, …, n and then, by (4) and the continuity of the projections, lim_{k→∞} x^(k) = x* and lim_{k→∞} y^(k) = y*. □
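The geometric decay just derived can be checked numerically. The following sketch (our own illustrative setup, not from the paper) verifies Σ_i |w_i^(k) − w*_i| ≤ μ Σ_i |w_i^(k−1) − w*_i| on a random strictly column diagonally dominant pair:

```python
import numpy as np

# Build strictly column diagonally dominant A, B with positive diagonals
# and verify that the l1-error of w contracts by at least mu per step.
rng = np.random.default_rng(1)
n = 15
A = rng.uniform(-1.0, 1.0, (n, n))
B = rng.uniform(-1.0, 1.0, (n, n))
for M in (A, B):
    np.fill_diagonal(M, 0.0)
    np.fill_diagonal(M, np.abs(M).sum(axis=0) + 1.0)
c = rng.uniform(-1.0, 1.0, n)
da, db = np.diag(A), np.diag(B)

def w_update(w):
    """Map w^(k-1) to w^(k) as implied by (3)-(4)."""
    x = np.maximum(0.0, w / da)
    y = np.maximum(0.0, -w / db)
    return c - (A @ x - da * x) + (B @ y - db * y)

# mu: largest off-diagonal absolute column sum divided by the diagonal
mu = max((np.abs(A).sum(axis=0) / da - 1.0).max(),
         (np.abs(B).sum(axis=0) / db - 1.0).max())

w_star = c.copy()
for _ in range(2000):          # approximate the fixed point w*
    w_star = w_update(w_star)

errs = []                      # l1-errors along the iteration
w = rng.uniform(-5.0, 5.0, n)
for _ in range(30):
    errs.append(np.abs(w - w_star).sum())
    w = w_update(w)
errs.append(np.abs(w - w_star).sum())
```

Each consecutive pair of entries in `errs` should then satisfy the contraction with factor μ, up to rounding.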

The Projected Gauss-Seidel Iteration for HLCPs
If we apply a Gauss-Seidel splitting to A and B instead of the Jacobi splitting employed in the previous section, we can analogously formulate a projected Gauss-Seidel method. In this regard, let us again consider (1) with A, B real matrices with positive diagonal elements. Then, given two arbitrary vectors x^(0) ∈ R^n and y^(0) ∈ R^n, for k = 0, 1, … and for i = 1, …, n, let {x_i^(k+1), y_i^(k+1)} be the solution of the scalar horizontal linear complementarity problem

a_ii x_i − b_ii y_i = w̃_i^(k),  x_i ≥ 0,  y_i ≥ 0,  x_i y_i = 0,  (13)

with a_ii, b_ii > 0 by the positivity of the diagonals of A and B and with

w̃_i^(k) := c_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k) + Σ_{j<i} b_ij y_j^(k+1) + Σ_{j>i} b_ij y_j^(k).  (14)

Thus, for i = 1, …, n, the solution of (13) has the simple form

x_i^(k+1) = max{0, w̃_i^(k)/a_ii},  y_i^(k+1) = max{0, −w̃_i^(k)/b_ii}.  (15)

We refer to this method as the projected Gauss-Seidel method for HLCPs. Let us now prove that the iterates computed by (15) converge to the solution (x*, y*).

Theorem 3.1 Let A, B ∈ R^{n×n} be strictly diagonally dominant by columns with positive diagonal elements. Then, the sequence {x^(k), y^(k)} generated by (14)-(15) converges to the unique solution (x*, y*) of the HLCP(A, B, c) for all initial vectors x^(0), y^(0).

Proof Similarly to what was done for the projected Jacobi method, we define w̃*_i as in (7) and, by (15), we can write the solution {x_i^(k+1), y_i^(k+1)} as

x_i^(k+1) = w̃_i^(k)+/a_ii,  y_i^(k+1) = w̃_i^(k)−/b_ii.  (17)

By (14) and (17) and proceeding similarly as in Theorem 2.1, we find that, at the generic k-th iteration, we have

w̃_i^(k) − w̃*_i = −Σ_{j<i} (a_ij/a_jj)(w̃_j^(k)+ − w̃*_j+) − Σ_{j>i} (a_ij/a_jj)(w̃_j^(k−1)+ − w̃*_j+) + Σ_{j<i} (b_ij/b_jj)(w̃_j^(k)− − w̃*_j−) + Σ_{j>i} (b_ij/b_jj)(w̃_j^(k−1)− − w̃*_j−).  (18)

On the other hand, by (6), we also have the bound (19), since Σ_{l≠h} |a_lh|/a_hh and Σ_{l≠h} |b_lh|/b_hh are strictly smaller than one by the strict column diagonal dominance of A and B. Combining (18) and (19), we then obtain the chain of inequalities (20). Hence, subtracting Σ_{j=1}^{n} |w̃_j^(k) − w̃*_j| from all members of the previous chain of inequalities, we obtain a relation between successive iterations. Consider the middle expression. By (6), we can write any of its terms with the coefficients

ξ^1_j := 1 − Σ_{i≠j} |a_ij|/a_jj,  ξ^2_j := 1 − Σ_{i≠j} |b_ij|/b_jj,  (21)

with ξ^1_j > 0 by the strict column diagonal dominance of A and, analogously, ξ^2_j > 0 by the strict column diagonal dominance of B, for j = 1, …, n. Moreover, considering the first and the last members of (20) and proceeding iteratively, we also obtain (22). Thus, the sequence

Σ_{j=1}^{n} |w̃_j^(k) − w̃*_j|,  k = 0, 1, …,  (23)

converges to zero, where we have exploited (20) with the middle member written by using (21) and (22); as in Theorem 2.1, this yields the convergence of the iterates (15) to (x*, y*). □
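As for the Jacobi case, the sweep (14)-(15) can be sketched with NumPy (again an illustrative sketch with our own names, not the authors' implementation; components with j < i are reused within the same sweep):

```python
import numpy as np

def projected_gauss_seidel(A, B, c, x0, y0, iters=500):
    """Sketch of the projected Gauss-Seidel iteration for HLCP(A, B, c).
    Assumes positive diagonal elements in A and B."""
    n = len(c)
    x, y = x0.astype(float).copy(), y0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            # x[j], y[j] already hold the (k+1)-st values for j < i,
            # so this w is the w~_i^(k) of the Gauss-Seidel splitting
            w = c[i] - (A[i] @ x - A[i, i] * x[i]) \
                     + (B[i] @ y - B[i, i] * y[i])
            x[i] = max(0.0, w / A[i, i])
            y[i] = max(0.0, -w / B[i, i])
    return x, y
```

The only difference with respect to the Jacobi sketch is that the components of x and y are overwritten in place, so each scalar problem sees the most recent values.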

Convergence for A, B Not Strictly Diagonally Dominant
We now provide some remarks on the uniqueness of the solution and on the convergence of the presented procedures when A and B are not strictly diagonally dominant by columns. The proofs here provided exploit the concepts of column representativeness and of the column W-property [12]. Regarding the convergence of the methods, let us start from the projected Gauss-Seidel method. Assume, for simplicity and with no loss of generality, that only the first column of A and B is strictly diagonally dominant. Proceeding as in Theorem 3.1, we find the same result as in (24), where, however, the coefficients associated with the columns j = 2, …, n vanish. We thus only find that w̃_1^(k) → w̃*_1 for k → ∞. By proceeding as in Theorem 3.1, we find (25). Moreover, by the column diagonal dominance of A and B, this inequality still holds in strict sense, since the middle expression can be obtained analogously as in (21)-(22), with ξ^1_j = ξ^2_j = 0 for j = 2, …, n, as A and B are (not strictly) diagonally dominant in the columns j = 2, …, n. For compactness, let us denote by α_k the first member of (25). Then, the inequality between the first and the last members can be written compactly as α_k < β_k + α_{k−1}, where β_k contains the terms of iteration k of the last member of (25). Moreover, α_k is bounded, since all of its terms are contained in the k-th term of (23), which is bounded. Then, since we also have that β_k tends to zero for k → ∞, the sequence {α_k} converges and, by (25), the middle member must tend to zero as well. However, a_12 a_22 > 0 and b_12 b_22 > 0 by hypothesis. Thus, w̃_2^(k) → w*_2 for k → ∞ must hold true. By induction, the same applies also to the successive components of the residual.
Proceeding as we did for the projected Gauss-Seidel method, we thus find |w_1^(k) − w*_1| → 0 for k → ∞ and, by induction, we finally have w_j^(k) → w*_j for k → ∞ also for the remaining components j = 2, …, n.

Numerical Experiments
In this section, we present the numerical experiments that we use to show the effectiveness of the proposed methods.
First, we solve a set of problems where A and B are randomly generated. Full, random matrices are generated by the rand function in MATLAB 2015b and scaled so that every component belongs to the interval [−10, 10]. Then, diagonal elements are set equal to the sum of the absolute values of the respective column plus, potentially, a random number in ]0, 1[. In particular, we consider both matrices where strict diagonal dominance holds for all columns and full, column diagonally dominant matrices where strict diagonal dominance holds only at the first column. The HLCPs defined by these matrices admit a unique solution for any c. Thus, c is also generated at random, with values between −10 and 10. We also consider problems where random matrices have a particular structure, like triangular strictly diagonally dominant matrices or matrices where all off-diagonal elements have the same sign. In all cases, initial iterates are chosen as random sequences of zeros and ones satisfying complementarity. Since all these problems are meant as a first validation of the procedures, we do not impose a stopping condition, but we simply evaluate the residual after a given number k* of iterations by replacing x and y in Ax − By = c by the computed iterates x^(k*) and y^(k*) (which also satisfy complementarity by the projection).
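For concreteness, the generation just described might be sketched as follows (our own naming; NumPy's generator stands in for MATLAB's rand):

```python
import numpy as np

rng = np.random.default_rng(7)   # arbitrary seed, for reproducibility

def random_hlcp(n, strict_all_columns=True):
    """Sketch of the random test generation described above: entries
    scaled to [-10, 10], then each diagonal element set to the absolute
    column sum of the off-diagonal entries plus a random shift in ]0, 1[,
    either for every column (SDD case) or for the first column only
    (DD case)."""
    A = rng.uniform(-10.0, 10.0, (n, n))
    B = rng.uniform(-10.0, 10.0, (n, n))
    for M in (A, B):
        np.fill_diagonal(M, 0.0)
        shift = rng.uniform(0.0, 1.0, n)
        if not strict_all_columns:
            shift[1:] = 0.0      # strict dominance only at the first column
        np.fill_diagonal(M, np.abs(M).sum(axis=0) + shift)
    c = rng.uniform(-10.0, 10.0, n)
    return A, B, c
```

By construction the diagonals are positive, and each column is diagonally dominant, strictly wherever the random shift is applied.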
Then, we consider HLCPs arising in mechanical engineering to model cavitation (which is the formation of gaseous bubbles) in hydrodynamic lubrication. The formulation of the problem in complementarity form can be found in [15], while we follow [11] for the HLCP formulation arising from finite difference discretizations of the differential problem. The complementarity variables represent here pressure and density and are thus denoted by p and r (instead of x and y). Moreover, they depend on the space variables, which we denote by ζ in 1D problems and by (ζ, η) in 2D problems.
We solve the four test problems presented in [11] and identify them by P1, P2, P3 and P4, respectively. These problems, whose solution is known, encompass various cases which can occur. In particular, P1, P2 and P3 regard 1D cases, while P4 refers to a 2D case. In all these problems, A and B are M-matrices. Moreover, in 1D, A is tridiagonal and B is lower bidiagonal. In 2D, A is block tridiagonal and B is block lower bidiagonal.

Table 1 Mean and maximum residuals after 20 iterations for 100 HLCP(A, B, c)

We compare our results with those in [11], where the test problems were solved by an IP method. In order to perform a consistent comparison, we enforce a stopping criterion as similar as possible to that of the IP method in [11]. We then stop the procedures as soon as the norms of the residual and of the difference between two successive iterates become smaller than a given tolerance tol. Due to the low computational complexity of the iterations, the stopping criterion is checked every 1000 iterations. All experiments have been performed in a Unix environment on a laptop equipped with a dual-core 2.7 GHz Intel Core i5 processor (Broadwell series).

Results and Analysis
We first validate the procedures by applying the projected Jacobi and Gauss-Seidel methods to solving series of 100 HLCPs(A, B, c) with random A, B and c, generated as described in Sect. 5. For each problem, we compute the l2-norm of the residual after 20 iterations of the methods. Finally, we evaluate whether the algorithms converged by analyzing the arithmetic mean and the maximum of all these residuals, thus ensuring that the procedures converged for every single problem. We repeat the process for problems of different dimensions n. In Table 1, by SDD we refer to problems with strictly column diagonally dominant matrices, while DD refers to A, B random, full and column diagonally dominant matrices with positive diagonal entries and with strict diagonal dominance holding only at the first column. Moreover, in all tables, we use the e-notation to denote the powers of 10.
In all cases, we compute the correct solution of the considered HLCPs. Indeed, the mean and the maximum of the l2-norms of the residuals of the considered problems are always small. Moreover, both methods converge quickly for this kind of problem: both the mean and the maximum of the residuals decrease rapidly to values on the order of 10^−13–10^−14. More significant differences between the Jacobi and the Gauss-Seidel procedures can only be seen when n is small, but the residuals are nonetheless on the order of at most 10^−8–10^−9 also in this case.
After this first validation by full, random matrices, we then pass to the analysis of structured matrices, which are not only more interesting but also potentially more challenging.

Table 2 Mean and maximum residuals after 20,000 iterations for 100 HLCP(A, B, c) with A, B generated at random as in Sect. 5

In particular, we consider HLCPs(A, B, c) where A and B are column diagonally dominant matrices (strictly at the first column) with positive diagonal elements, A has negative off-diagonal elements and B has positive off-diagonal elements. In this case, several evaluations performed in the convergence theorems apply with the equality sign. The results for these problems for n = 100 after 20,000 iterations are reported in Table 2.
Again, both algorithms converge in all cases. As expected, however, convergence is slower. Notwithstanding this, the proposed procedures are not computationally onerous and, as we will see later, they are competitive also when structured matrices are used. Finally, we also notice a significant difference between the projected Jacobi and Gauss-Seidel iterations: indeed, the convergence of the Gauss-Seidel method appears much faster for these problems.
Lastly, we provide a few experimental insights into the convergence of the proposed methods. On the one hand, experiments conducted with completely random, full matrices generally converge also when A and B are row diagonally dominant. However, row diagonal dominance does not ensure, in general, the convergence of the algorithms. To see this experimentally, let us consider A a lower triangular matrix with positive diagonal elements and negative off-diagonal elements and B an upper triangular matrix with positive elements. By the structure of A and B, column diagonal dominance implies that several rows are not diagonally dominant. Analogously, if A and B are row diagonally dominant, several columns are, in general, not diagonally dominant. We can then more accurately study the effect of diagonal dominance on convergence. Let us then consider the cases of A and B strictly column or row diagonally dominant and apply the methods to solving 100 problems of this kind with n = 100. The results after 20 iterations are reported in Table 3.
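The triangular construction described above can be reproduced as follows (an illustrative sketch with our own names; we run the projected Jacobi iteration only on the column diagonally dominant case, which is the one covered by the convergence analysis):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 15

# A: lower triangular with negative off-diagonal entries; B: upper
# triangular with positive off-diagonal entries; both made strictly
# *column* diagonally dominant with positive diagonals (note that several
# rows of such matrices are then not diagonally dominant)
A = np.tril(-rng.uniform(0.5, 1.0, (n, n)), -1)
B = np.triu(rng.uniform(0.5, 1.0, (n, n)), 1)
for M in (A, B):
    np.fill_diagonal(M, np.abs(M).sum(axis=0) + 1.0)
c = rng.uniform(-10.0, 10.0, n)

def projected_jacobi(A, B, c, iters):
    """Sketch of the projected Jacobi iteration (our own helper)."""
    x, y = np.zeros(len(c)), np.zeros(len(c))
    da, db = np.diag(A), np.diag(B)
    for _ in range(iters):
        w = c - (A @ x - da * x) + (B @ y - db * y)
        x, y = np.maximum(0.0, w / da), np.maximum(0.0, -w / db)
    return x, y

x, y = projected_jacobi(A, B, c, iters=2000)
res = np.linalg.norm(A @ x - B @ y - c)
```

Swapping the roles of rows and columns in the dominance-enforcing step yields the row diagonally dominant variant, for which convergence is not guaranteed.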
As expected, and consistently with the convergence analysis performed earlier, column diagonal dominance ensures the convergence of both the projected Jacobi and Gauss-Seidel iterations. Instead, row diagonal dominance is, in general, not sufficient for the methods to converge. This is consistent also with the role of the columns of A and B in horizontal complementarity, highlighted by the role of the column W-property for the uniqueness of the solution of HLCPs.
Let us now pass to the analysis of horizontal complementarity problems arising in practical applications. In particular, let us consider the HLCPs arising in hydrodynamic lubrication and described in Sect. 5. Both the projected Jacobi and the Gauss-Seidel iterations have been implemented so as to exploit sparsity, in order to increase the efficiency of the procedures. Thus, computational times remain reasonable also when thousands of iterations are run in large problems.
Starting from the 1D problems, the results for P1 and P3 with tol = 10^−8 are shown in Table 4, where we report the number of iterations it, the computational time t and the l2-norms of the residual res and of the differences between two successive iterates at convergence, Δp and Δr. The computed solutions for n = 100 are reported in Fig. 1. The results for P2 are analogous to those for P1 and hence not reported.
Both the Jacobi and the Gauss-Seidel iterations converge in all cases and the efficiency of the algorithms is apparent, especially when n is quite small. Nonetheless, this is enough to compute accurate solutions of 1D problems: indeed, the curves in Fig. 1, obtained with n = 100 in t ≈ 10^−2 s, are in perfect agreement with results in the literature (see [11]), further validating the proposed approaches. Increasing n, more iterations are needed, but computational times remain, nonetheless, reasonable. In this context, it is also interesting to notice that the Jacobi iteration converges in roughly twice as many iterations as the Gauss-Seidel iteration.
Finally, we analyze 2D problems with tol = 10^−4. In this context, we analyze the behavior of the proposed procedures in larger scale problems. Table 5 reports the results obtained applying the projected Gauss-Seidel iteration to solving problem P4. Analogous results can be obtained using the projected Jacobi method, with a difference in efficiency comparable to the 1D cases. For instance, with n = 100, projected Jacobi requires 10,000 iterations and t = 2.86 s to compute a solution with the same accuracy as the one computed by projected Gauss-Seidel in Table 5. Figure 2 represents the complementarity solutions computed by the Gauss-Seidel method for n = 500. We remark that, dealing with 2D problems, A and B are block matrices of order n^2. We therefore consider cases where A and B are up to 250,000 × 250,000.
Again, both Jacobi and Gauss-Seidel methods converge for all considered n. Computational times are remarkable as well. For instance, in [11], P4 was solved by an IP method with a tolerance of 10 −4 on the norm of the residual and of the increment. Using an efficient implementation (where inner linear systems were solved by efficient PETSc routines [16]), solving the problem with n = 100 required more than 600 s. Here, on the other hand, we achieved the same accuracy in less than 2 s using the same discretization.
Considering larger dimensions, which were not treated in previous works, the number of iterations and the computational times increase, but the procedures always converge. Moreover, computational times remain unproblematic, especially considering that all computations have been performed serially.

Table 4
Results for 1D HLCPs arising in hydrodynamic lubrication

Conclusions
We have presented two splitting procedures for solving HLCPs characterized by matrices with positive diagonal elements. Contrary to existing techniques, the proposed methods act directly on splittings of both matrices of the problem. Hence, the iteration is fast and simple to implement. We have proved the convergence of the proposed methods under some conditions on the diagonal dominance of A and B. Then, we have performed several numerical experiments, demonstrating the capabilities of the procedures both in random test problems and in HLCPs of practical interest. For the latter problems, in several cases, computational times have been smaller than those in the literature. This highlights the efficiency of the procedures and is interesting also in view of future developments of this study to include other splitting techniques.