# linprog(method='simplex')

scipy.optimize.linprog(c, method='simplex', callback=None, options={'c0': None, 'A': None, 'b': None, 'maxiter': 1000, 'disp': False, 'tol': 1e-12, 'bland': False})

Minimize a linear objective function subject to linear equality and non-negativity constraints using the two-phase simplex method. Linear programming is intended to solve problems of the following form:

Minimize:

c @ x


Subject to:

A @ x == b
x >= 0

Parameters

c : 1D array
    Coefficients of the linear objective function to be minimized.
c0 : float
    Constant term in the objective function due to fixed (and eliminated) variables. (Purely for display.)
A : 2D array
    2D array such that A @ x gives the values of the equality constraints at x.
b : 1D array
    1D array of values representing the right-hand side of each equality constraint (row) in A.
callback : callable, optional (simplex only)
    If a callback function is provided, it will be called within each iteration of the simplex algorithm. The callback must accept a scipy.optimize.OptimizeResult consisting of the following fields:

    x : 1D array
        The independent variable vector which optimizes the linear programming problem.
    fun : float
        Value of the objective function.
    success : bool
        True if the algorithm succeeded in finding an optimal solution.
    slack : 1D array
        The values of the slack variables. Each slack variable corresponds to an inequality constraint. If the slack is zero, the corresponding constraint is active.
    con : 1D array
        The (nominally zero) residuals of the equality constraints, that is, b - A_eq @ x.
    phase : int
        The phase of the optimization being executed. In phase 1 a basic feasible solution is sought, and the tableau T has an additional row representing an alternate objective function.
    status : int
        An integer representing the exit status of the optimization:

        0 : Optimization terminated successfully
        1 : Iteration limit reached
        2 : Problem appears to be infeasible
        3 : Problem appears to be unbounded
        4 : Serious numerical difficulties encountered

    nit : int
        The number of iterations performed.
    message : str
        A string descriptor of the exit status of the optimization.

Other Parameters

maxiter : int
    The maximum number of iterations to perform.
disp : bool
    If True, print the exit status message to sys.stdout.
tol : float
    The tolerance which determines when a solution is "close enough" to zero in Phase 1 to be considered a basic feasible solution, or close enough to positive to serve as an optimal solution.
bland : bool
    If True, use Bland's anti-cycling rule [3] to choose pivots, which prevents cycling. If False, choose pivots that should lead to a converged solution more quickly. The latter method is subject to cycling (non-convergence) in rare instances.

Returns

x : 1D array
    Solution vector.
status : int
    An integer representing the exit status of the optimization:

    0 : Optimization terminated successfully
    1 : Iteration limit reached
    2 : Problem appears to be infeasible
    3 : Problem appears to be unbounded
    4 : Serious numerical difficulties encountered

message : str
    A string descriptor of the exit status of the optimization.
iteration : int
    The number of iterations taken to solve the problem.
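As a minimal usage sketch (not from the original page), the callback described above can be used to record each simplex iterate. Note that recent SciPy releases have removed method='simplex', so the sketch falls back to the default solver, without the callback, when that happens:

```python
import numpy as np
from scipy.optimize import linprog

iterates = []

def log_iterate(res):
    # res is a scipy.optimize.OptimizeResult with the fields listed above.
    iterates.append(np.array(res.x, copy=True))

# Minimize -x0 + 4*x1 subject to A_ub @ x <= b_ub and x >= 0 (default bounds).
c = [-1.0, 4.0]
A_ub = [[-3.0, 1.0], [1.0, 2.0]]
b_ub = [6.0, 4.0]

try:
    # The legacy simplex solver supports a per-iteration callback.
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='simplex',
                  callback=log_iterate)
except ValueError:
    # method='simplex' has been removed from newer SciPy versions;
    # fall back to the default solver (which does not take this callback).
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)

print(res.x, res.fun)  # optimum at x = [4, 0] with objective -4
```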

Notes

The expected problem formulation differs between the top-level linprog module and the method-specific solvers. The method-specific solvers expect a problem in standard form:

Minimize:

c @ x


Subject to:

A @ x == b
x >= 0


Whereas the top-level linprog module expects a problem of the form:

Minimize:

c @ x


Subject to:

A_ub @ x <= b_ub
A_eq @ x == b_eq
lb <= x <= ub


where lb = 0 and ub = None unless set in bounds.
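A brief sketch of this top-level form, passing explicit bounds where `ub = None` leaves a variable unbounded above (the problem data here is illustrative):

```python
from scipy.optimize import linprog

# Minimize -x0 + 4*x1 subject to the two inequalities below.
c = [-1.0, 4.0]
A_ub = [[-3.0, 1.0], [1.0, 2.0]]
b_ub = [6.0, 4.0]

# x0 is free in both directions; x1 has lower bound -3 and no upper bound.
bounds = [(None, None), (-3, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)  # x = [10, -3], objective -22
```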

The original problem contains equality, upper-bound, and variable constraints, whereas the method-specific solver requires equality constraints and variable non-negativity.

The linprog module converts the original problem to standard form by converting the simple bounds to upper-bound constraints, introducing non-negative slack variables for inequality constraints, and expressing unbounded variables as the difference between two non-negative variables.
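The conversion described above can be sketched by hand with NumPy (variable names here are illustrative, not part of the linprog API): inequalities gain one non-negative slack column each, and a free variable x0 is split as x0 = x0_pos - x0_neg with both parts non-negative.

```python
import numpy as np

# Original (top-level form) data: A_ub @ x <= b_ub, with x0 unbounded.
A_ub = np.array([[-3.0, 1.0],
                 [1.0, 2.0]])
b_ub = np.array([6.0, 4.0])
c = np.array([-1.0, 4.0])
m, n = A_ub.shape

# Split the free variable x0 into positive and negative parts, then
# append one slack column per inequality row.
A_split = np.hstack([A_ub[:, :1], -A_ub[:, :1], A_ub[:, 1:]])
A_std = np.hstack([A_split, np.eye(m)])               # equality constraint matrix
c_std = np.concatenate([c[:1], -c[:1], c[1:], np.zeros(m)])  # slacks cost nothing
b_std = b_ub

# Standard form: minimize c_std @ z  subject to  A_std @ z == b_std, z >= 0,
# where z = [x0_pos, x0_neg, x1, s0, s1].
print(A_std.shape)  # (2, 5)
```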

References

[1] Dantzig, George B., Linear Programming and Extensions. Rand Corporation Research Study, Princeton Univ. Press, Princeton, NJ, 1963.
[2] Hillier, S.H. and Lieberman, G.J. (1995), "Introduction to Mathematical Programming", McGraw-Hill, Chapter 4.
[3] Bland, Robert G. New finite pivoting rules for the simplex method. Mathematics of Operations Research (2), 1977: pp. 103-107.
