The standard linear programming problem is
$$ \text{(P)} \qquad \min c^T x \quad \text{s.t.} \quad Ax = b, \; x \geq 0 . $$
Without loss of generality we can take $A$ to have full row rank. We let $x^*$ be an optimal point.
The dual to the standard problem is
$$ \text{(D)} \qquad \max b^T y \quad \text{s.t.} \quad A^T y + z = c, \; z \geq 0 . $$
The variables $y$ are called dual variables and the variables $z$ are slack variables.
Theorem. If $x$ is feasible for (P) and $(y,z)$ is feasible for (D), then
$$ c^T x \geq b^T y . $$
Further, if $x$ is optimal for (P) and $(y,z)$ is optimal for (D), then
$$ c^T x = b^T y \quad \text{and} \quad x_i z_i = 0 \text{ for all } i . $$
The second condition is called complementary slackness.
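The weak duality relation can be checked numerically; the following NumPy sketch uses a small illustrative LP (the data are an assumption, not from the text) and verifies that the duality gap $c^Tx - b^Ty$ equals $x^Tz$ for any primal-dual feasible pair:

```python
import numpy as np

# Illustrative LP: min -3*x1 - 2*x2
# s.t. x1 + x2 + x3 = 4, 2*x1 + x2 + x4 = 6, x >= 0
A = np.array([[1.0, 1, 1, 0], [2, 1, 0, 1]])
b = np.array([4.0, 6])
c = np.array([-3.0, -2, 0, 0])

x = np.array([1.0, 1, 2, 3])      # primal feasible: A @ x == b, x >= 0
y = np.array([-1.0, -1])          # dual feasible: A.T @ y + z == c ...
z = c - A.T @ y                   # ... with slack variables z >= 0

assert np.allclose(A @ x, b) and (x >= 0).all() and (z >= 0).all()

gap = c @ x - b @ y               # duality gap, nonnegative by the theorem
print(gap, x @ z)                 # the gap equals x^T z
```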
A basic solution is given by a set $B$ of column indices such that the square matrix $A_B$ formed by the corresponding columns of $A$ is nonsingular. Then
$$ x_B = A_B^{-1} b, \qquad x_N = 0 . $$
Suppose $x$ is optimal. Then
$$ y = A_B^{-T} c_B \quad \text{and} \quad z_N = c_N - A_N^T y \geq 0 , $$
and hence $c^T x = c_B^T A_B^{-1} b = b^T y$.
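The optimality test for a basis can be sketched in NumPy; the LP data and the basis choice below are illustrative assumptions:

```python
import numpy as np

A = np.array([[1.0, 1, 1, 0], [2, 1, 0, 1]])
b = np.array([4.0, 6])
c = np.array([-3.0, -2, 0, 0])

B, N = [0, 1], [2, 3]                  # candidate basis (assumed)
xB = np.linalg.solve(A[:, B], b)       # basic variables x_B = A_B^{-1} b
y = np.linalg.solve(A[:, B].T, c[B])   # dual variables y = A_B^{-T} c_B
zN = c[N] - A[:, N].T @ y              # reduced costs of nonbasic variables

# Feasibility (x_B >= 0) plus nonnegative reduced costs => optimal basis
assert (xB >= 0).all() and (zN >= 0).all()
print(c[B] @ xB, b @ y)                # equal primal and dual optimal values
```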
The constrained primal linear programming problem is
$$ \text{(CP)} \qquad \min c^T x \quad \text{s.t.} \quad Ax = b, \; l \leq x \leq u . $$
Given lower bounded variables $x_L = l_L$ and upper bounded variables $x_U = u_U$, the problem becomes
$$ \min c_B^T x_B + c_L^T l_L + c_U^T u_U \quad \text{s.t.} \quad A_B x_B = b - A_L l_L - A_U u_U , \; l_B \leq x_B \leq u_B , $$
and the dual problem is
$$ \max b^T y + l^T z_l - u^T z_u \quad \text{s.t.} \quad A^T y + z_l - z_u = c, \; z_l, z_u \geq 0 . $$
Note that the objective function can be written
$$ c^T x = c_B^T x_B + c_L^T x_L + c_U^T x_U . $$
To define a basic solution, we need to split the variables into three sets: the basic variables $x_B$, the lower bounded variables $x_L$ and the upper bounded variables $x_U$. A basic solution then satisfies
$$ x_L = l_L, \qquad x_U = u_U, \qquad x_B = A_B^{-1}\bigl(b - A_L l_L - A_U u_U\bigr) . $$
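Computing such a basic solution is a small linear solve; in this NumPy sketch the matrix, bounds and variable partition are illustrative assumptions:

```python
import numpy as np

A = np.array([[1.0, 0, 1, 1], [0, 1, 1, 2]])
b = np.array([5.0, 5])
l = np.array([0.0, 0, 1, 0])       # lower bounds
u = np.array([10.0, 10, 10, 2])    # upper bounds

B, L, U = [0, 1], [2], [3]         # basic / at-lower-bound / at-upper-bound
rhs = b - A[:, L] @ l[L] - A[:, U] @ u[U]
xB = np.linalg.solve(A[:, B], rhs) # x_B = A_B^{-1}(b - A_L l_L - A_U u_U)

x = np.empty(4)
x[B], x[L], x[U] = xB, l[L], u[U]
assert np.allclose(A @ x, b)                        # equations satisfied
assert (l[B] <= xB).all() and (xB <= u[B]).all()    # basic vars in bounds
print(x)
```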
The most general form of the linear programming problem is
$$ \min c^T x \quad \text{s.t.} \quad a_l \leq Ax \leq a_u, \; x_l \leq x \leq x_u . $$
Define auxiliary variables $w = Ax$, with bounds $a_l \leq w \leq a_u$. The primal and auxiliary variables can be either basic, lower, upper or fixed. A lower or upper variable satisfies $v_i = l_i$ or $v_i = u_i$, and a fixed variable satisfies $v_i = l_i = u_i$ (alternatively, $l_i = u_i$).
After the introduction of auxiliary variables, the system can be written in the form (CP) above, with equations
$$ Ax - w = 0 . $$
If there are $n$ variables, $m_1$ inequality constraints and $m_2$ equality constraints, then the total number of basic variables is $m_1 + m_2$, since there are $m_1 + m_2$ equations and each equation determines one basic variable.
Given a basic solution, we can split the equations and variables into basic and nonbasic parts, so that we have the equations
$$ A_B x_B + A_N x_N = b . $$
The basic variables are therefore given by
$$ x_B = A_B^{-1}\bigl(b - A_N x_N\bigr) . $$
Since the objective function is given by $c^T x = c_B^T x_B + c_N^T x_N$, we have
$$ c^T x = c_B^T A_B^{-1} b + \bigl(c_N^T - c_B^T A_B^{-1} A_N\bigr) x_N . $$
Set dual variables
$$ y^T = c_B^T A_B^{-1}, \qquad z_N^T = c_N^T - y^T A_N . $$
This shows how the objective changes with changes in $x_N$ and $b$. The main difficulty in performing an update is that the number of basic variables may change during an algorithm.
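The resulting identity $c^Tx = y^Tb + z_N^Tx_N$ can be checked numerically; this NumPy sketch reuses an assumed small example:

```python
import numpy as np

A = np.array([[1.0, 1, 1, 0], [2, 1, 0, 1]])
b = np.array([4.0, 6])
c = np.array([-3.0, -2, 0, 0])
B, N = [0, 1], [2, 3]

y = np.linalg.solve(A[:, B].T, c[B])   # y^T = c_B^T A_B^{-1}
zN = c[N] - A[:, N].T @ y              # reduced costs z_N

xN = np.array([1.0, 1])                # arbitrary nonbasic values
xB = np.linalg.solve(A[:, B], b - A[:, N] @ xN)
x = np.empty(4); x[B], x[N] = xB, xN

print(c @ x, y @ b + zN @ xN)          # identical objective values
```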
Given a partition of variables into upper, lower and basic, we can test for a solution of the constrained feasibility problem: the partition yields a feasible point if and only if
$$ l_B \leq A_B^{-1}\bigl(b - A_L l_L - A_U u_U\bigr) \leq u_B , $$
since $x_L = l_L$ and $x_U = u_U$.
Due to numerical errors, the solution to a feasibility problem may be inaccurate. It is therefore important to be able to validate solutions to feasibility problems. One approach is to use rational arithmetic, but this is frequently too expensive. The alternative is to use interval arithmetic. For this, we need our problems to be robustly solvable.
Note that robustly solving an inequality is straightforward; we merely change $\leq$ to $<$. To robustly solve a system of linear equalities, we need to choose a basis and express the basic variables in terms of the non-basic variables.
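Such a validation step might be sketched as follows, certifying the strict inequality $A^Ty < c$ componentwise with a toy interval type that rounds outward via `math.nextafter` (an assumption for illustration; a real implementation would use a proper interval library with directed rounding):

```python
import math

class Ivl:
    """Toy interval with outward rounding (illustrative only)."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (hi if hi is not None else lo)
    def __add__(self, o):
        return Ivl(math.nextafter(self.lo + o.lo, -math.inf),
                   math.nextafter(self.hi + o.hi, math.inf))
    def __mul__(self, o):
        ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Ivl(math.nextafter(min(ps), -math.inf),
                   math.nextafter(max(ps), math.inf))

def robustly_dual_feasible(A, y, c):
    """Certify A^T y < c: interval upper bound strictly below each c_j."""
    m, n = len(A), len(A[0])
    for j in range(n):
        acc = Ivl(0.0)
        for i in range(m):
            acc = acc + Ivl(A[i][j]) * Ivl(y[i])
        if not acc.hi < c[j]:
            return False
    return True

A = [[1.0, 1, 1, 0], [2, 1, 0, 1]]
c = [-3.0, -2, 0, 0]
print(robustly_dual_feasible(A, [-2.0, -2.0], c))  # strict: certified
print(robustly_dual_feasible(A, [-1.0, -1.0], c))  # equality: rejected
```

Note that the check is conservative: a point satisfying $A^Ty = c$ in some component is rejected, since the outward-rounded upper bound cannot lie strictly below $c_j$.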
Given a basis $B$, and if necessary lower and upper variables $x_L$ and $x_U$, we can attempt to solve robust feasibility problems by finding a robust basis. We can also set the nonbasic variables to exact values, as long as the basic variables robustly satisfy the constraints. To check that a basis gives a solution to the robust dual feasibility problem, the defining strict inequalities must be verified, e.g. using interval arithmetic.
TODO: It is not clear whether a robustly solvable problem has a robust basic solution.
A robust certificate for the primal feasibility problem is a base $B$ such that $A_B$ is nonsingular, and an $x > 0$ such that $Ax = b$; this is equivalent to robust solvability of the primal feasibility problem. A robust certificate of infeasibility is a point $y$ such that $A^T y < 0$ and $b^T y > 0$.
Theorem. Suppose $(A, b)$ is robust. Then either $A$ has full row rank and there exists $x > 0$ such that $Ax = b$, or there exists $y$ such that $A^T y < 0$ and $b^T y > 0$.
A robust certificate for the dual feasibility problem is a point $y$ such that $A^T y < c$. A robust certificate of infeasibility is a base $B$ such that $A_B$ is nonsingular, and a vector $x_N \geq 0$ such that $x_B = -A_B^{-1} A_N x_N \geq 0$ and $c^T x < 0$. We prove $Ax = 0$ by setting $x_B = -A_B^{-1} A_N x_N$, so that $A_B x_B + A_N x_N = 0$.
Theorem. Suppose $(A, c)$ is robust. Then either there exists $y$ such that $A^T y < c$, or $A$ has full row rank and there exists $x \geq 0$ such that $Ax = 0$ and $c^T x < 0$.
Proof. If $A^T y < c$, then this holds also for sufficiently small perturbations of $A$ and $c$. If $Ax = 0$ and $A_B$ is nonsingular, then $x_B = -A_B^{-1} A_N x_N$. Perturbing $A$ and keeping $x_N$ constant, we obtain a perturbation of $x_B$, and hence a certificate for the perturbed problem.
Conversely, suppose the problem is robustly solvable. Then the problem with $c$ replaced by $c - \epsilon \mathbf{1}$ is solvable for $\epsilon > 0$ sufficiently small. Hence there exists $y$ such that $A^T y \leq c - \epsilon \mathbf{1} < c$.
Suppose the problem is robustly unsolvable. Let $E$ be a matrix with all positive entries. Since the problem is robustly unsolvable, $(A + \epsilon E)^T y \leq c$ is unsolvable for some $\epsilon > 0$. Then there exists $x \geq 0$ such that $(A + \epsilon E) x = 0$, $x \neq 0$ and $c^T x < 0$. Then if $x \neq 0$, then $E x > 0$, and $A x = -\epsilon E x < 0$ since $x \geq 0$ and $x \neq 0$.
To solve the robust primal feasibility problem
$$ Ax = b, \qquad x > 0 , $$
we choose a vector $e > 0$ and consider the problem
$$ \min \sigma \quad \text{s.t.} \quad Ax = b, \; x + \sigma e \geq 0 . $$
Writing $\tilde{x} = x + \sigma e$ and splitting $\sigma = \sigma^+ - \sigma^-$, we obtain a standard primal optimisation problem. If the optimal value is negative, then we have found $(x, \sigma)$ with $\sigma < 0$ such that $x \geq -\sigma e > 0$, so taking this $x$, we have $Ax = b$ and $x > 0$. If the optimal value is non-negative, then we can attempt to solve the dual robust feasibility problem $A^T y < 0$, $b^T y > 0$. Since if this problem is feasible, it has unbounded solutions, we can instead look for a positive optimal value of a normalised version of the problem.
To solve the robust dual feasibility problem
$$ A^T y < c , $$
we choose $e > 0$ and consider the problem
$$ \max \sigma \quad \text{s.t.} \quad A^T y + \sigma e \leq c . $$
Introducing slack variables, we obtain a standard dual optimisation problem. If the optimal value is positive, then we have found $y$ such that $A^T y \leq c - \sigma e < c$. If the optimal value is zero or negative, then we can attempt to solve the primal robust feasibility problem. However, even if the dual feasibility problem is unsolvable (i.e. the optimal value is negative), the primal may become solvable by a perturbation of $A$. We therefore consider a robust version and introduce a perturbation parameter to make a problem whose value is stable under small perturbations. Note that if this problem has negative value, then we have found $x \geq 0$ such that $Ax = 0$, $x \neq 0$ and $c^T x < 0$, which implies that the original dual problem has no solution. Taking such an $x$, we obtain a robust certificate of infeasibility.
Suppose we wish to update a basis $B$ of the standard linear programming problem, replacing a basic variable $x_p$ with a nonbasic variable $x_q$. In general, if $B' = (B \setminus \{p\}) \cup \{q\}$, then $A_{B'}$ differs from $A_B$ in a single column, so $A_{B'}^{-1}$ can be obtained from $A_B^{-1}$ by a rank-one update.
Suppose we wish to update a dual feasible basis of the standard linear programming problem; note that the reduced costs satisfy $z_N = c_N - A_N^T y \geq 0$.
Remark: Since we do not assume the existence of $\pm\infty$ in our number types, we use $x \geq 0$ for the one-sided constraint; this is the only unbounded constraint we allow.
See Chvátal [Chapter 8, p. 129] for more details on constrained feasibility.
For a linear programming problem of standard form, with $A$ an $m \times n$ matrix, the number of iterations of the simplex algorithm for practical problems grows approximately linearly in $m$.
The homogeneous self-dual algorithm is a method for simultaneously computing optimality and feasibility. Consider the primal and dual problems
$$ \text{(P)} \quad \min c^T x \;\; \text{s.t.} \;\; Ax = b, \; x \geq 0 ; \qquad \text{(D)} \quad \max b^T y \;\; \text{s.t.} \;\; A^T y \leq c . $$
The homogeneous self-dual problem is
$$ \text{(HSD)} \quad \min (n+1)\theta \quad \text{s.t.} \quad
\begin{aligned}
 Ax - b\tau + \bar{b}\theta &= 0, \\
 -A^T y + c\tau - \bar{c}\theta &\geq 0, \\
 b^T y - c^T x + \bar{z}\theta &\geq 0, \\
 -\bar{b}^T y + \bar{c}^T x - \bar{z}\tau &= -(n+1),
\end{aligned}
\qquad x \geq 0, \; \tau \geq 0 ,
$$
where $\bar{b} = b - A\mathbf{1}$, $\bar{c} = c - \mathbf{1}$ and $\bar{z} = c^T\mathbf{1} + 1$.
A feasible point is always given by $x = \mathbf{1}$, $y = 0$, $\tau = 1$, $\theta = 1$. When $\theta = 0$, the first three equations reduce to $Ax = b\tau$, $A^T y \leq c\tau$, and $c^T x \leq b^T y$. If $s$ and $\kappa$ denote the slack variables of the second and third constraints, the final constraint becomes
$$ x^T s + \tau\kappa = (n+1)\theta . $$
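The stated feasible starting point can be verified numerically; in this NumPy sketch the LP data are random and the homogenising data $\bar{b}$, $\bar{c}$, $\bar{z}$ follow the formulation above:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

e = np.ones(n)
bbar = b - A @ e              # \bar{b} = b - A 1
cbar = c - e                  # \bar{c} = c - 1
zbar = c @ e + 1.0            # \bar{z} = c^T 1 + 1

x, y, tau, theta = e, np.zeros(m), 1.0, 1.0
s = -A.T @ y + c * tau - cbar * theta     # slack of the 2nd constraint
kappa = b @ y - c @ x + zbar * theta      # slack of the 3rd constraint

assert np.allclose(A @ x - b * tau + bbar * theta, 0)     # 1st equation
assert np.allclose(s, np.ones(n)) and np.isclose(kappa, 1.0)
assert np.isclose(-bbar @ y + cbar @ x - zbar * tau, -(n + 1))
assert np.isclose(x @ s + tau * kappa, (n + 1) * theta)   # complementarity
```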
We have the following result:
Theorem. (HSD) has optimal value $0$, and there is an optimal solution with $x + s > 0$ and $\tau + \kappa > 0$ (a strictly self-complementary solution).
To relate (HSD) to (P) and (D), we have: if $\tau > 0$ at such a solution, then $x/\tau$ is optimal for (P) and $y/\tau$ is optimal for (D); if instead $\kappa > 0$, then (P) or (D) is infeasible.
To relate (HSD) to the central path, we have: the points satisfying $x_i s_i = \mu$ for all $i$ and $\tau\kappa = \mu$ form the central path of (HSD), which passes through the feasible point above at $\mu = 1$.
A potential function is given by
$$ \psi(x, \tau, s, \kappa) = (n + 1 + \rho)\log\bigl(x^T s + \tau\kappa\bigr) - \sum_{i=1}^n \log(x_i s_i) - \log(\tau\kappa) . $$
See David G. Luenberger and Yinyu Ye, "Linear and Nonlinear Programming".
In the Geometry module, we need to solve the following linear programming problems to test intersection.
We notice that by introducing slack variables, we can convert all problems into a standard linear programming problem with equality constraints.
We can convert the standard dual feasibility problem $A^T y \leq c$ into a primal linear programming problem, but it is not so straightforward to convert a constrained dual feasibility problem into its dual. Instead we add slack variables $z$ and solve $A^T y + z = c$, $z \geq 0$. We can use the reduced simplex algorithm to take advantage of sparseness.
We shall be most concerned with the following problem:
$$ A^T y \leq c, \qquad l \leq y \leq u . $$
Without the bounds on $y$, it is easy to show that no solution exists if there exists $x$ such that $Ax = 0$, $x \geq 0$, $x \neq 0$ and $c^T x \leq 0$.
To solve the problem, introduce extended variables, slack variables and a value $\sigma$. The original (dual) feasibility problem becomes an optimisation problem over the extended variables, whereas the primal feasibility problem becomes its dual. The original problem is feasible if the optimum value is positive, and infeasible if it is negative.
Robust primal and dual basic feasible solutions are easily found by taking strictly interior values for the primal and dual variables and scaling appropriately.
We can now apply the standard interior-point update. Although the matrix has been augmented, we can still arrange the computation so that we only need to invert a matrix with one extra row. Hence the dimension of the matrix inversion problem is equal to the dimension of the underlying space plus one.
Consider feasibility of the linear programming problem
$$ Ax \leq b, \qquad x \geq 0 . $$
Special cases of this occur in, for example, testing whether the image of a constraint set intersects another constraint set.
Introduce slack variables $w \geq 0$ to transform the problem to
$$ Ax + w = b, \qquad x, w \geq 0 . $$
The duality conditions are
$$ Ax = b, \qquad A^T y + z = c, \qquad x, z \geq 0 , $$
and the value function is
$$ c^T x - b^T y = x^T z . $$
The complementary slackness conditions are
$$ x_i z_i = 0 \quad \text{for all } i . $$
Consider a step updating the variables $(x, y, z)$ in an attempt to solve the complementarity system. The Newton equations are
$$ A\,\Delta x = r_p, \qquad A^T \Delta y + \Delta z = r_d, \qquad Z\,\Delta x + X\,\Delta z = \sigma\mu\mathbf{1} - XZ\mathbf{1} , $$
where $r_p = b - Ax$ and $r_d = c - A^T y - z$ are the residuals, which are typically zero.
Assuming the residuals are zero, we have
$$ \Delta z = -A^T \Delta y , $$
and hence $Z\,\Delta x - X A^T \Delta y = \sigma\mu\mathbf{1} - XZ\mathbf{1}$, so $X^{-1}Z\,\Delta x - A^T \Delta y = \sigma\mu X^{-1}\mathbf{1} - z$. Combined with the equation $A\,\Delta x = 0$, we obtain the system
$$ \begin{pmatrix} -X^{-1}Z & A^T \\ A & 0 \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = \begin{pmatrix} z - \sigma\mu X^{-1}\mathbf{1} \\ 0 \end{pmatrix} . $$
The left-hand matrix is symmetric, so can be (fairly) easily factorised. Further, the off-diagonal blocks $A$ are independent of the current iterate, so the initial part of the factorisation can be pre-computed.
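One Newton step of this scheme might look as follows in NumPy; the LP data and the strictly feasible starting point are illustrative assumptions:

```python
import numpy as np

A = np.array([[1.0, 1, 1, 0], [2, 1, 0, 1]])
b = np.array([4.0, 6])
c = np.array([-3.0, -2, 0, 0])

x = np.array([1.0, 1, 2, 3])          # strictly feasible primal point
y = np.array([-2.0, -2])              # strictly feasible dual point
z = c - A.T @ y                       # positive dual slacks
m, n = A.shape

mu = x @ z / n                        # duality measure
sigma = 0.5                           # centering parameter

# Symmetric system [[-X^{-1}Z, A^T], [A, 0]] (dx, dy) = (z - sigma*mu/x, 0)
K = np.block([[-np.diag(z / x), A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([z - sigma * mu / x, np.zeros(m)])
d = np.linalg.solve(K, rhs)
dx, dy = d[:n], d[n:]
dz = -A.T @ dy                        # recovered from the dual equation

# The step stays in the null space of A and satisfies the linearised
# complementarity equation Z dx + X dz = sigma*mu*1 - X Z 1.
assert np.allclose(A @ dx, 0)
assert np.allclose(z * dx + x * dz, sigma * mu - x * z)
```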
Indeed, it may be better to completely eliminate the equality constraints at the start of the problem by explicitly solving for some of the variables $x$. If we have a square submatrix $A_B$ of $A$ which is invertible, then
$$ x_B = A_B^{-1}\bigl(b - A_N x_N\bigr) , $$
and the inequalities for $x$ become
$$ x_N \geq 0, \qquad A_B^{-1}\bigl(b - A_N x_N\bigr) \geq 0 . $$
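A NumPy sketch of this elimination, with an assumed invertible submatrix:

```python
import numpy as np

A = np.array([[1.0, 1, 1, 0], [2, 1, 0, 1]])
b = np.array([4.0, 6])
B, N = [0, 1], [2, 3]                 # assumed invertible column split

# Reduced description: x_B(x_N) = A_B^{-1} b - (A_B^{-1} A_N) x_N
AB_inv = np.linalg.inv(A[:, B])
p = AB_inv @ b
Q = AB_inv @ A[:, N]

xN = np.array([1.0, 1])               # any choice of the free variables
xB = p - Q @ xN
x = np.empty(4); x[B], x[N] = xB, xN

assert np.allclose(A @ x, b)          # equality constraints hold exactly
feasible = (xN >= 0).all() and (xB >= 0).all()
print(x, feasible)
```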