linprog(method='highs')

scipy.optimize.linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None, bounds=None, method='highs', callback=None, options={'maxiter': None, 'disp': False, 'presolve': True, 'time_limit': None, 'dual_feasibility_tolerance': None, 'primal_feasibility_tolerance': None, 'ipm_optimality_tolerance': None, 'simplex_dual_edge_weight_strategy': None}, x0=None)

Linear programming: minimize a linear objective function subject to linear equality and inequality constraints using one of the HiGHS solvers.

Linear programming solves problems of the following form:

\[\begin{split}\min_x \ & c^T x \\ \mbox{such that} \ & A_{ub} x \leq b_{ub},\\ & A_{eq} x = b_{eq},\\ & l \leq x \leq u ,\end{split}\]

where \(x\) is a vector of decision variables; \(c\), \(b_{ub}\), \(b_{eq}\), \(l\), and \(u\) are vectors; and \(A_{ub}\) and \(A_{eq}\) are matrices.

Alternatively, that’s:

minimize:

c @ x

such that:

A_ub @ x <= b_ub
A_eq @ x == b_eq
lb <= x <= ub

Note that by default lb = 0 and ub = None unless specified with bounds.
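
For example, the following sketch (the numbers are illustrative and not part of this reference) sets up and solves a small problem of this form:

>>> from scipy.optimize import linprog
>>> c = [-1, 4]
>>> A_ub = [[-3, 1], [1, 2]]
>>> b_ub = [6, 4]
>>> bounds = [(None, None), (-3, None)]   # x0 free, x1 >= -3
>>> res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method='highs')
>>> print(res.fun)
-22.0
>>> res.x
array([10., -3.])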

Parameters
c : 1-D array

The coefficients of the linear objective function to be minimized.

A_ub : 2-D array, optional

The inequality constraint matrix. Each row of A_ub specifies the coefficients of a linear inequality constraint on x.

b_ub : 1-D array, optional

The inequality constraint vector. Each element represents an upper bound on the corresponding value of A_ub @ x.

A_eq : 2-D array, optional

The equality constraint matrix. Each row of A_eq specifies the coefficients of a linear equality constraint on x.

b_eq : 1-D array, optional

The equality constraint vector. Each element of A_eq @ x must equal the corresponding element of b_eq.

bounds : sequence, optional

A sequence of (min, max) pairs for each element in x, defining the minimum and maximum values of that decision variable. Use None to indicate that there is no bound. By default, bounds are (0, None) (all decision variables are non-negative). If a single tuple (min, max) is provided, then min and max will serve as bounds for all decision variables (see the sketch following the parameter descriptions).

method : str

This is the method-specific documentation for ‘highs’, which chooses automatically between ‘highs-ds’ and ‘highs-ipm’. The legacy methods ‘interior-point’, ‘revised simplex’, and ‘simplex’ are also available.
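
As a sketch of the two accepted forms of bounds (illustrative values only):

>>> from scipy.optimize import linprog
>>> c = [1, 1, 1]
>>> # One (min, max) pair per variable: x0 >= 0, x1 >= -2, -1 <= x2 <= 1
>>> res = linprog(c, bounds=[(0, None), (-2, None), (-1, 1)], method='highs')
>>> res.x
array([ 0., -2., -1.])
>>> # A single (min, max) pair applies to every variable; (0, None) is the default
>>> res = linprog(c, bounds=(0, None), method='highs')
>>> res.x
array([0., 0., 0.])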

Returns
res : OptimizeResult

A scipy.optimize.OptimizeResult consisting of the fields below (a short sketch of inspecting them follows the field descriptions):

x : 1-D array

The values of the decision variables that minimize the objective function while satisfying the constraints.

fun : float

The optimal value of the objective function c @ x.

slack : 1-D array

The (nominally positive) values of the slack, b_ub - A_ub @ x.

con : 1-D array

The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x.

success : bool

True when the algorithm succeeds in finding an optimal solution.

status : int

An integer representing the exit status of the algorithm.

0 : Optimization terminated successfully.

1 : Iteration or time limit reached.

2 : Problem appears to be infeasible.

3 : Problem appears to be unbounded.

4 : The HiGHS solver ran into a problem.

message : str

A string descriptor of the exit status of the algorithm.

nit : int

The total number of iterations performed. For the HiGHS simplex method, this includes iterations in all phases. For the HiGHS interior-point method, this does not include crossover iterations.

crossover_nit : int

The number of primal/dual pushes performed during the crossover routine for the HiGHS interior-point method. This is 0 for the HiGHS simplex method.

ineqlin : OptimizeResult

Solution and sensitivity information corresponding to the inequality constraints, b_ub. A dictionary consisting of the fields:

residual : np.ndarray

The (nominally positive) values of the slack variables, b_ub - A_ub @ x. This quantity is also commonly referred to as “slack”.

marginals : np.ndarray

The sensitivity (partial derivative) of the objective function with respect to the right-hand side of the inequality constraints, b_ub.

eqlin : OptimizeResult

Solution and sensitivity information corresponding to the equality constraints, b_eq. A dictionary consisting of the fields:

residual : np.ndarray

The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x.

marginals : np.ndarray

The sensitivity (partial derivative) of the objective function with respect to the right-hand side of the equality constraints, b_eq.

lower, upper : OptimizeResult

Solution and sensitivity information corresponding to the lower and upper bounds on decision variables, bounds.

residual : np.ndarray

The (nominally positive) values of the quantity x - lb (lower) or ub - x (upper).

marginals : np.ndarray

The sensitivity (partial derivative) of the objective function with respect to the lower and upper bounds.
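
A short sketch of inspecting these fields, reusing the illustrative data from the earlier example (the sensitivity fields ineqlin, eqlin, lower, and upper are reported for the HiGHS methods):

>>> from scipy.optimize import linprog
>>> res = linprog(c=[-1, 4], A_ub=[[-3, 1], [1, 2]], b_ub=[6, 4],
...               bounds=[(None, None), (-3, None)], method='highs')
>>> print(res.status, res.success)
0 True
>>> res.slack                  # b_ub - A_ub @ x, nominally positive
array([39.,  0.])
>>> res.ineqlin.marginals      # shadow prices of the inequality constraints
array([-0., -1.])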

See also

For documentation of the remaining parameters, see scipy.optimize.linprog.

Options
maxiter : int

The maximum number of iterations to perform in either phase. For ‘highs-ipm’, this does not include the number of crossover iterations. Default is the largest possible value for an int on the platform.

disp : bool (default: False)

Set to True if indicators of optimization status are to be printed to the console during optimization.

presolve : bool (default: True)

Presolve attempts to identify trivial infeasibilities, identify trivial unboundedness, and simplify the problem before sending it to the main solver. It is generally recommended to keep the default setting True; set to False if presolve is to be disabled.

time_limit : float

The maximum time in seconds allotted to solve the problem; default is the largest possible value for a double on the platform.

dual_feasibility_tolerance : double (default: 1e-07)

Dual feasibility tolerance for ‘highs-ds’. The minimum of this and primal_feasibility_tolerance is used for the feasibility tolerance of ‘highs-ipm’.

primal_feasibility_tolerance : double (default: 1e-07)

Primal feasibility tolerance for ‘highs-ds’. The minimum of this and dual_feasibility_tolerance is used for the feasibility tolerance of ‘highs-ipm’.

ipm_optimality_tolerance : double (default: 1e-08)

Optimality tolerance for ‘highs-ipm’. Minimum allowable value is 1e-12.

simplex_dual_edge_weight_strategy : str (default: None)

Strategy for simplex dual edge weights. The default, None, automatically selects one of the following.

'dantzig' uses Dantzig’s original strategy of choosing the most negative reduced cost.

'devex' uses the strategy described in [15].

'steepest' uses the exact steepest edge strategy as described in [16].

'steepest-devex' begins with the exact steepest edge strategy until the computation is too costly or inexact and then switches to the devex method.

Currently, None always selects 'steepest-devex', but this may change as new options become available.

unknown_options : dict

Optional arguments not used by this particular solver. If unknown_options is non-empty, a warning is issued listing all unused options.
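
These options are passed to linprog through its options dictionary; a sketch with illustrative values:

>>> from scipy.optimize import linprog
>>> opts = {'presolve': True,
...         'time_limit': 10.0,
...         'primal_feasibility_tolerance': 1e-9,
...         'dual_feasibility_tolerance': 1e-9}
>>> res = linprog(c=[-1, 4], A_ub=[[-3, 1], [1, 2]], b_ub=[6, 4],
...               bounds=[(None, None), (-3, None)], method='highs',
...               options=opts)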

Notes

Method ‘highs-ds’ is a wrapper of the C++ high performance dual revised simplex implementation (HSOL) [13], [14]. Method ‘highs-ipm’ is a wrapper of a C++ implementation of an interior-point method [13]; it features a crossover routine, so it is as accurate as a simplex solver. Method ‘highs’ chooses between the two automatically. For new code involving linprog, we recommend explicitly choosing one of these three method values instead of the legacy methods ‘interior-point’, ‘revised simplex’, and ‘simplex’.
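
For instance, either HiGHS variant can be requested explicitly instead of letting ‘highs’ choose (a sketch with the same illustrative data as above):

>>> from scipy.optimize import linprog
>>> args = dict(c=[-1, 4], A_ub=[[-3, 1], [1, 2]], b_ub=[6, 4],
...             bounds=[(None, None), (-3, None)])
>>> res_ds = linprog(**args, method='highs-ds')    # dual revised simplex
>>> res_ipm = linprog(**args, method='highs-ipm')  # interior-point with crossover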

The result fields ineqlin, eqlin, lower, and upper all contain marginals, or partial derivatives of the objective function with respect to the right-hand side of each constraint. These partial derivatives are also referred to as “Lagrange multipliers”, “dual values”, and “shadow prices”. The sign convention of marginals is opposite that of Lagrange multipliers produced by many nonlinear solvers.
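
A sketch of checking the sign convention numerically (same illustrative data): increasing an element of b_ub by a small amount changes the optimal objective value by roughly the corresponding marginal times that amount.

>>> from scipy.optimize import linprog
>>> A_ub, bounds = [[-3, 1], [1, 2]], [(None, None), (-3, None)]
>>> res = linprog(c=[-1, 4], A_ub=A_ub, b_ub=[6, 4], bounds=bounds,
...               method='highs')
>>> eps = 0.1
>>> perturbed = linprog(c=[-1, 4], A_ub=A_ub, b_ub=[6, 4 + eps], bounds=bounds,
...                     method='highs')
>>> print(round(perturbed.fun - res.fun, 6))   # approximately marginals[1] * eps
-0.1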

References

13

Huangfu, Q., Galabova, I., Feldmeier, M., and Hall, J. A. J. “HiGHS - high performance software for linear optimization.” Accessed 4/16/2020 at https://www.maths.ed.ac.uk/hall/HiGHS/#guide

14

Huangfu, Q. and Hall, J. A. J. “Parallelizing the dual revised simplex method.” Mathematical Programming Computation, 10 (1), 119-142, 2018. DOI: 10.1007/s12532-017-0130-5

15

Harris, Paula MJ. “Pivot selection methods of the Devex LP code.” Mathematical Programming 5.1 (1973): 1-28.

16

Goldfarb, Donald, and John Ker Reid. “A practicable steepest-edge simplex algorithm.” Mathematical Programming 12.1 (1977): 361-371.