Minimize a function using the downhill simplex algorithm.
This algorithm only uses function values, not derivatives or second derivatives.
Parameters
func : callable func(x,*args)
    The objective function to be minimized.
x0 : ndarray
    Initial guess.
args : tuple, optional
    Extra arguments passed to func.
callback : callable, optional
    Called after each iteration, as callback(xk), where xk is the current parameter vector.
xtol : float, optional
    Absolute error in xopt between iterations that is acceptable for convergence.
ftol : float, optional
    Absolute error in func(xopt) between iterations that is acceptable for convergence.
maxiter : int, optional
    Maximum number of iterations to perform.
maxfun : int, optional
    Maximum number of function evaluations to make.
full_output : bool, optional
    Set to True if fopt and warnflag outputs are desired.
disp : bool, optional
    Set to True to print convergence messages.
retall : bool, optional
    Set to True to return the list of solutions at each iteration.
Returns
xopt : ndarray
    Parameter vector that minimizes the function.
fopt : float
    Value of the function at the minimum: fopt = func(xopt).
iter : int
    Number of iterations performed.
funcalls : int
    Number of function calls made.
warnflag : int
    1 : maximum number of function evaluations made. 2 : maximum number of iterations reached.
allvecs : list
    Solution at each iteration (only returned if retall is True).
See also
minimize : Interface to minimization algorithms for multivariate functions; see the 'Nelder-Mead' method in particular.
Notes
Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables.
This algorithm has a long history of successful use in applications, but it will usually be slower than an algorithm that uses first or second derivative information. In practice, it can have poor performance in high-dimensional problems and is not robust when minimizing complicated functions. Additionally, there is currently no complete theory describing when the algorithm will successfully converge to the minimum, or how fast it will converge if it does.
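The notes above describe the method only qualitatively. As a concrete illustration of how the reflect/expand/contract/shrink updates use only function values, the following is a minimal, self-contained sketch with the standard coefficients (reflection 1, expansion 2, contraction 0.5, shrink 0.5). It is a simplified illustration, not SciPy's fmin implementation; the nelder_mead name, the initial-simplex construction, and the stopping test are ad hoc choices made for this sketch.

import numpy as np

def nelder_mead(f, x0, xtol=1e-8, ftol=1e-8, maxiter=1000):
    # Standard coefficients: reflection, expansion, contraction, shrink.
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5
    x0 = np.asarray(x0, dtype=float)
    n = x0.size

    # Initial simplex: x0 plus one point perturbed along each coordinate axis
    # (an arbitrary but common construction, chosen here for illustration).
    points = [x0]
    for i in range(n):
        xi = x0.copy()
        xi[i] += 0.05 if xi[i] != 0 else 0.00025
        points.append(xi)
    simplex = np.array(points)
    fvals = np.array([f(x) for x in simplex])

    for _ in range(maxiter):
        # Sort vertices from best (lowest f) to worst.
        order = np.argsort(fvals)
        simplex, fvals = simplex[order], fvals[order]

        # Stop when both the simplex and the function values have collapsed.
        if (np.max(np.abs(simplex[1:] - simplex[0])) <= xtol
                and np.max(np.abs(fvals[1:] - fvals[0])) <= ftol):
            break

        centroid = simplex[:-1].mean(axis=0)  # centroid of all vertices except the worst

        # Reflect the worst vertex through the centroid.
        xr = centroid + alpha * (centroid - simplex[-1])
        fr = f(xr)
        if fr < fvals[0]:
            # Reflection is the new best point: try expanding further.
            xe = centroid + gamma * (xr - centroid)
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fvals[-2]:
            # Better than the second-worst vertex: accept the reflection.
            simplex[-1], fvals[-1] = xr, fr
        else:
            # Contract the worst vertex toward the centroid.
            xc = centroid + rho * (simplex[-1] - centroid)
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:
                # Contraction failed: shrink every vertex toward the best one.
                simplex[1:] = simplex[0] + sigma * (simplex[1:] - simplex[0])
                fvals[1:] = [f(x) for x in simplex[1:]]

    best = int(np.argmin(fvals))
    return simplex[best], fvals[best]

On a smooth, low-dimensional function this sketch behaves similarly to the routine documented here, but it omits the maxfun bound, callback hook, and output reporting described in the Parameters above.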
References
[R76] Nelder, J.A. and Mead, R. (1965), “A simplex method for function minimization”, The Computer Journal, 7, pp. 308-313.
[R77] Wright, M.H. (1996), “Direct Search Methods: Once Scorned, Now Respectable”, in Numerical Analysis 1995, Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis, D.F. Griffiths and G.A. Watson (Eds.), Addison Wesley Longman, Harlow, UK, pp. 191-208.
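Examples
A small usage sketch of the routine documented above. The two-variable objective below is chosen only for illustration (it is not part of this page); with full_output=True the extra return values listed under Returns are available.

>>> import numpy as np
>>> from scipy.optimize import fmin
>>> def objective(x):  # illustrative Rosenbrock-style objective
...     return (x[0] - 1.0)**2 + 100.0 * (x[1] - x[0]**2)**2
>>> x0 = np.array([-1.2, 1.0])
>>> xopt, fopt, niter, funcalls, warnflag = fmin(
...     objective, x0, xtol=1e-8, ftol=1e-8, full_output=True, disp=False)
>>> bool(np.allclose(xopt, [1.0, 1.0]))
True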