SciPy

linprog(method='simplex')

scipy.optimize.linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None, bounds=None, method='simplex', callback=None, options={'disp': False, 'bland': False, 'tol': 1e-12, 'maxiter': 1000})

Solve the following linear programming problem via a two-phase simplex algorithm.

minimize:    c^T * x

subject to:  A_ub * x <= b_ub
             A_eq * x == b_eq
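
Only minimization is supported directly; a maximization problem can be handled by negating the objective coefficients c and then negating the optimal value. A minimal sketch of a call, with made-up data that is not part of this reference:

>>> # Illustrative data: minimize -x0 - 2*x1 subject to x0 + x1 <= 3,
>>> # with the default bounds keeping x0, x1 >= 0.
>>> from scipy.optimize import linprog
>>> res = linprog(c=[-1, -2], A_ub=[[1, 1]], b_ub=[3], method='simplex')
>>> # To maximize instead, pass -c and negate res.fun afterwards.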
Parameters:

c : array_like

Coefficients of the linear objective function to be minimized.

A_ub : array_like

2-D array which, when matrix-multiplied by x, gives the values of the upper-bound inequality constraints at x.

b_ub : array_like

1-D array of values representing the upper-bound of each inequality constraint (row) in A_ub.

A_eq : array_like

2-D array which, when matrix-multiplied by x, gives the values of the equality constraints at x.

b_eq : array_like

1-D array of values representing the RHS of each equality constraint (row) in A_eq.

bounds : array_like

The bounds for each independent variable in the solution, which can take one of three forms:

None : The default bounds; all variables are non-negative.
(lb, ub) : If a 2-element sequence is provided, the same lower bound (lb) and upper bound (ub) will be applied to all variables.
[(lb_0, ub_0), (lb_1, ub_1), ...] : If an n x 2 sequence is provided, each variable x_i will be bounded by lb[i] and ub[i].

Infinite bounds are specified using -np.inf (negative) or np.inf (positive). A short sketch of the bound forms (and of the callback signature) follows the parameter descriptions.

callback : callable

If a callback function is provided, it will be called within each iteration of the simplex algorithm. The callback must have the signature callback(xk, **kwargs) where xk is the current solution vector and kwargs is a dictionary containing the following:

"tableau" : The current Simplex algorithm tableau.
"nit" : The current iteration.
"pivot" : The pivot (row, column) used for the next iteration.
"phase" : Whether the algorithm is in Phase 1 or Phase 2.
"bv" : A structured array containing a string representation of each basic variable and its current value.
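
A sketch of both the bounds forms and the callback signature, using made-up data (the problem and the callback body below are illustrative only):

>>> import numpy as np
>>> from scipy.optimize import linprog
>>> c = [1, 1]
>>> A = [[1, 2]]
>>> b = [4]
>>> # The three documented bound forms:
>>> bounds_default = None                     # all variables non-negative
>>> bounds_shared = (0, np.inf)               # same (lb, ub) for every variable
>>> bounds_per_var = [(0, np.inf), (-3, 10)]  # one (lb, ub) pair per variable
>>> def show_progress(xk, **kwargs):
...     # Called once per iteration; kwargs carries "tableau", "nit",
...     # "pivot", "phase" and "bv" as described above.
...     print(kwargs["phase"], kwargs["nit"], xk)
>>> res = linprog(c, A_ub=A, b_ub=b, bounds=bounds_per_var,
...               callback=show_progress, method='simplex')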

Returns:

A scipy.optimize.OptimizeResult consisting of the following fields:

x : ndarray
    The independent variable vector which optimizes the linear
    programming problem.
slack : ndarray
    The values of the slack variables.  Each slack variable corresponds
    to an inequality constraint.  If the slack is zero, then the
    corresponding constraint is active.
success : bool
    Returns True if the algorithm succeeded in finding an optimal
    solution.
status : int
    An integer representing the exit status of the optimization:
     0 : Optimization terminated successfully
     1 : Iteration limit reached
     2 : Problem appears to be infeasible
     3 : Problem appears to be unbounded
nit : int
    The number of iterations performed.
message : str
    A string descriptor of the exit status of the optimization.
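
As a brief sketch of reading these fields (the problem data here are made up for illustration):

>>> from scipy.optimize import linprog
>>> res = linprog([2, 3], A_ub=[[1, 1]], b_ub=[5], method='simplex')
>>> if res.success:                     # equivalent to res.status == 0
...     print(res.x, res.fun, res.slack)
... else:
...     print(res.status, res.message)  # status 1, 2 or 3, see codes above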

See also

For documentation of the remaining parameters, see scipy.optimize.linprog.

Options:

maxiter : int

The maximum number of iterations to perform.

disp : bool

If True, print exit status message to sys.stdout

tol : float

The tolerance which determines when a solution is "close enough" to zero in Phase 1 to be considered a basic feasible solution, or close enough to positive to serve as an optimal solution.

bland : bool

If True, use Bland's anti-cycling rule [R502] to choose pivots to prevent cycling. If False, choose pivots which should lead to a converged solution more quickly. The latter method is subject to cycling (non-convergence) in rare instances.
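
These options are all passed through the options dict; a sketch with arbitrary illustrative values (not recommended settings):

>>> from scipy.optimize import linprog
>>> res = linprog([-1, 2], A_ub=[[3, 1]], b_ub=[9], method='simplex',
...               options={'maxiter': 5000, 'disp': True,
...                        'tol': 1e-9, 'bland': True})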

References

[R500] Dantzig, George B., Linear Programming and Extensions. Rand Corporation Research Study, Princeton Univ. Press, Princeton, NJ, 1963.
[R501] Hillier, S.H. and Lieberman, G.J. (1995), "Introduction to Mathematical Programming", McGraw-Hill, Chapter 4.
[R502] Bland, Robert G., New finite pivoting rules for the simplex method. Mathematics of Operations Research (2), 1977: pp. 103-107.

Examples

Consider the following problem:

Minimize: f = -1*x[0] + 4*x[1]

Subject to: -3*x[0] + 1*x[1] <= 6
            1*x[0] + 2*x[1] <= 4
            x[1] >= -3

where: -inf <= x[0] <= inf

This problem deviates from the standard linear programming problem. In standard form, linear programming problems assume the variables x are non-negative. Since the variables don’t have standard bounds where 0 <= x <= inf, the bounds of the variables must be explicitly set.

There are two upper-bound constraints, which can be expressed as

dot(A_ub, x) <= b_ub

The input for this problem is as follows:

>>> from scipy.optimize import linprog
>>> c = [-1, 4]
>>> A = [[-3, 1], [1, 2]]
>>> b = [6, 4]
>>> x0_bnds = (None, None)
>>> x1_bnds = (-3, None)
>>> res = linprog(c, A, b, bounds=(x0_bnds, x1_bnds),
...               options={"disp": True})
>>> print(res)
Optimization terminated successfully.
     Current function value: -22.000000
     Iterations: 1
status: 0
x: array([ 10.,  -3.])
slack: array([ 39.,   0.])
nit: 1
message: 'Optimization terminated successfully.'
fun: -22.0
success: True
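
Equality constraints are supplied in the same way through A_eq and b_eq. A hedged sketch with made-up data (not part of the original example):

>>> # Illustrative problem: minimize x0 + 2*x1 subject to x0 + x1 == 2
>>> # and x0 - x1 <= 1, with the default non-negative bounds.
>>> from scipy.optimize import linprog
>>> res = linprog([1, 2], A_ub=[[1, -1]], b_ub=[1], A_eq=[[1, 1]], b_eq=[2],
...               method='simplex')
>>> # For this data the optimum should be near x = [1.5, 0.5], fun = 2.5.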