__Keywords:__ functions, optimization, nonlinear equations, numerical computing, iteration, algorithms, derivatives, convergence condition, order of convergence

A. T. Kogut

This paper considers the problem of unconstrained optimization of a multivariable function for given values of the minimized function, where numerical methods for computing the roots of nonlinear equations can be applied. An interpolation approach is chosen for constructing the recurrent procedures, and a linearization scheme based on approximating Taylor polynomials is proposed, leading to two-stage and multi-step structures of the computational algorithms.
Use of the simplest linearization scheme, involving only the first derivatives of the minimized function, leads to Newton's method, which is well known and widely used in computing practice. In this paper, the term Newton's method denotes the procedure for numerically solving the equation expressing the first-order condition for an extremum of a function, i.e. setting its derivative equal to zero.
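In this sense, the method solves g(x) = f'(x) = 0 by the iteration x_{k+1} = x_k − g(x_k)/g'(x_k). A minimal one-dimensional sketch (the test function f(x) = x⁴/4 − x and all identifiers are illustrative assumptions, not taken from the paper):

```python
def newton_minimize(df, d2f, x0, tol=1e-10, max_iter=50):
    """Newton iteration on the stationarity equation f'(x) = 0:
    x_{k+1} = x_k - f'(x_k) / f''(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative test function f(x) = x**4 / 4 - x, minimized at x = 1.
df = lambda x: x**3 - 1     # f'(x)
d2f = lambda x: 3 * x**2    # f''(x)
x_star = newton_minimize(df, d2f, x0=2.0)
```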
When the second derivative is used in the polynomial approximant, the vector of the difference between the k-th and (k+1)-th approximations enters the expression nonlinearly, i.e. in implicit form. To obtain an explicit form, one of these vectors is treated as a known value, which leads to two forms of polynomial approximation, or linearization schemes.
Further construction of the iterative methods is related to determining the value of this vector. Two approaches are proposed. In the first, the vector is computed by some first-order method, e.g. Newton's method. This yields a two-stage procedure: at the first stage the difference vector is determined by Newton's method, and at the second stage the root value is refined by a second-order iteration algorithm.
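The abstract does not spell out the second-stage formula. One plausible sketch, assuming the quadratic Taylor model g + g'Δ + ½g''Δ² = 0 is made linear by substituting the Newton predictor into the quadratic term (which reproduces a Halley-type third-order step), is:

```python
def two_stage_step(g, dg, d2g, x):
    """One iteration of a two-stage scheme (a Halley-type sketch, an
    assumption rather than the paper's exact recurrence):
    stage 1 predicts the difference vector by Newton's method;
    stage 2 refines it via the second-order Taylor polynomial,
    with the Newton predictor substituted into the quadratic term."""
    delta_n = -g(x) / dg(x)                            # stage 1: Newton step
    delta = -g(x) / (dg(x) + 0.5 * d2g(x) * delta_n)   # stage 2: refined step
    return x + delta

# Illustrative equation g(x) = x**3 - 1 = 0 (root x = 1).
g = lambda x: x**3 - 1
dg = lambda x: 3 * x**2
d2g = lambda x: 6 * x
x = 2.0
for _ in range(20):
    x = two_stage_step(g, dg, d2g, x)
```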
The multi-step iteration procedures use only second derivatives; they arise when the unknown vector is the difference between the k-th and (k+1)-th approximations to the required root of the nonlinear equation.
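The abstract gives no explicit recurrence for the multi-step scheme. For comparison only, the classical secant method on the stationarity equation g(x) = f'(x) = 0 illustrates the kind of multi-step iteration, reusing previous approximations, whose order of convergence (≈ 1.618) the multi-step algorithms are said to attain:

```python
def secant_root(g, x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration for g(x) = 0: each new iterate is built from
    the two previous approximations (a multi-step scheme)."""
    for _ in range(max_iter):
        denom = g(x1) - g(x0)
        if denom == 0:
            break
        x0, x1 = x1, x1 - g(x1) * (x1 - x0) / denom
        if abs(x1 - x0) < tol:
            break
    return x1

# Illustrative equation g(x) = x**3 - 1 = 0 (root x = 1).
root = secant_root(lambda x: x**3 - 1, x0=0.5, x1=2.0)
```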
Analytic formulas defining the conditions and rate of convergence are derived. It is shown that the two-stage polynomial approximation algorithms have third-order convergence, while the multi-step algorithms have an order of convergence similar to that of the secant method.
The convergence conditions of the proposed iterative procedures, as well as of the well-known Newton's method, are derived on the basis of Taylor's formula. The resulting equalities contain the difference between the (k+1)-th approximation and the exact value of the root on the left-hand side and the corresponding difference for the k-th approximation on the right-hand side; the order of convergence is determined by the power in which the latter enters.
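Such a power can also be estimated numerically: if e_{k+1} ≈ C·e_k^p, then p ≈ ln(e_{k+1}/e_k) / ln(e_k/e_{k−1}). A sketch checking this for Newton's method on the illustrative equation x³ − 1 = 0 (all names and the test equation are assumptions for illustration):

```python
import math

def newton_errors(g, dg, x0, root, n=5):
    """Record the errors e_k = |x_k - root| of Newton iteration on g(x) = 0."""
    errs, x = [], x0
    for _ in range(n):
        x -= g(x) / dg(x)
        errs.append(abs(x - root))
    return errs

e = newton_errors(lambda x: x**3 - 1, lambda x: 3 * x**2, x0=1.5, root=1.0)
# Empirical order: p ≈ ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}); for Newton, p ≈ 2.
p = math.log(e[3] / e[2]) / math.log(e[2] / e[1])
```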
Simulation results for the two-stage and multi-step second-order algorithms in the minimization of a test function are presented. They confirm the reliability of the analytic formulas and the theoretical analysis. The developed algorithms are recommended for applied optimization problems.
The article is intended for specialists working in the areas of control, computer engineering, and information science.
