scipy.optimize.line_search
- scipy.optimize.line_search(f, myfprime, xk, pk, gfk=None, old_fval=None, old_old_fval=None, args=(), c1=0.0001, c2=0.9, amax=None, extra_condition=None, maxiter=10)
Find alpha that satisfies strong Wolfe conditions.
- Parameters:
- f : callable f(x,*args)
Objective function.
- myfprime : callable f'(x,*args)
Objective function gradient.
- xk : ndarray
Starting point.
- pk : ndarray
Search direction.
- gfk : ndarray, optional
Gradient value for x=xk (xk being the current parameter estimate). Will be recomputed if omitted.
- old_fval : float, optional
Function value for x=xk. Will be recomputed if omitted.
- old_old_fval : float, optional
Function value for the point preceding x=xk.
- args : tuple, optional
Additional arguments passed to objective function.
- c1 : float, optional
Parameter for Armijo condition rule.
- c2 : float, optional
Parameter for curvature condition rule.
- amax : float, optional
Maximum step size.
- extra_condition : callable, optional
A callable of the form extra_condition(alpha, x, f, g) returning a boolean. Arguments are the proposed step alpha and the corresponding x, f and g values. The line search accepts the value of alpha only if this callable returns True. If the callable returns False for the step length, the algorithm will continue with new iterates. The callable is only called for iterates satisfying the strong Wolfe conditions; see the sketch after this parameter list.
- maxiter : int, optional
Maximum number of iterations to perform.
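For illustration, the following is a minimal sketch of such a callable, one that vetoes any step longer than a cap (the cap of 1.5, the helper names, and the small quadratic objective are hypothetical choices for this sketch, not part of SciPy):

>>> import numpy as np
>>> from scipy.optimize import line_search
>>> def cap_step(alpha, x, f, g):
...     # Hypothetical extra condition: reject steps longer than 1.5,
...     # even when they already satisfy the strong Wolfe conditions.
...     return alpha <= 1.5
>>> def quad(x):
...     return x[0]**2 + x[1]**2
>>> def quad_grad(x):
...     return np.array([2*x[0], 2*x[1]])
>>> res = line_search(quad, quad_grad, np.array([1.8, 1.7]),
...                   np.array([-1.0, -1.0]), extra_condition=cap_step)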
- Returns:
- alpha : float or None
Alpha for which x_new = x0 + alpha * pk, or None if the line search algorithm did not converge.
- fc : int
Number of function evaluations made.
- gc : int
Number of gradient evaluations made.
- new_fval : float or None
New function value f(x_new) = f(x0 + alpha*pk), or None if the line search algorithm did not converge.
- old_fval : float
Old function value f(x0).
- new_slope : float or None
The local slope along the search direction at the new value <myfprime(x_new), pk>, or None if the line search algorithm did not converge.
Notes
Uses the line search algorithm to enforce strong Wolfe conditions. See Wright and Nocedal, ‘Numerical Optimization’, 1999, pp. 59-61.
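Concretely, a step length alpha satisfies the strong Wolfe conditions when it passes both the Armijo (sufficient decrease) test with parameter c1 and the strong curvature test with parameter c2. The checker below is a minimal sketch that spells the two tests out; it is written for this note and is not part of SciPy's API:

>>> import numpy as np
>>> def satisfies_strong_wolfe(f, fprime, xk, pk, alpha, c1=1e-4, c2=0.9):
...     slope0 = np.dot(fprime(xk), pk)  # directional derivative at xk
...     # Armijo: f decreases by at least a c1 fraction of the linear model.
...     armijo = f(xk + alpha * pk) <= f(xk) + c1 * alpha * slope0
...     # Strong curvature: the slope magnitude shrinks by a factor c2.
...     curved = abs(np.dot(fprime(xk + alpha * pk), pk)) <= c2 * abs(slope0)
...     return armijo and curved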
Examples
>>> import numpy as np
>>> from scipy.optimize import line_search
An objective function and its gradient are defined.
>>> def obj_func(x):
...     return (x[0])**2 + (x[1])**2
>>> def obj_grad(x):
...     return [2*x[0], 2*x[1]]
We can find alpha that satisfies strong Wolfe conditions.
>>> start_point = np.array([1.8, 1.7])
>>> search_gradient = np.array([-1.0, -1.0])
>>> line_search(obj_func, obj_grad, start_point, search_gradient)
(1.0, 2, 1, 1.1300000000000001, 6.13, [1.6, 1.4])
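The first element of the returned tuple is the accepted step length, so the new iterate follows by applying it along the search direction (continuing the example above, where the returned alpha is 1.0):

>>> alpha = line_search(obj_func, obj_grad, start_point, search_gradient)[0]
>>> start_point + alpha * search_gradient
array([0.8, 0.7])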