Optim initial step length guess

Tip: This example is also available as a Jupyter notebook: optim_initialstep.ipynb

This example shows how to use the initial step length procedures from LineSearches together with Optim. We solve the Rosenbrock problem with two different initial step length guesses.

First, run Newton with the (default) initial guess and line search procedures.

using Optim, LineSearches
import OptimTestProblems.MultivariateProblems
UP = MultivariateProblems.UnconstrainedProblems
prob = UP.examples["Rosenbrock"]

algo_st = Newton(alphaguess = InitialStatic(), linesearch = HagerZhang())
res_st = Optim.optimize(prob.f, prob.g!, prob.h!, prob.initial_x, method=algo_st)
Results of Optimization Algorithm
 * Algorithm: Newton's Method
 * Starting Point: [-1.2,1.0]
 * Minimizer: [1.0000000000000033,1.0000000000000067]
 * Minimum: 1.109336e-29
 * Iterations: 23
 * Convergence: true
   * |x - x'| ≤ 0.0e+00: false
     |x - x'| = 1.13e-08
   * |f(x) - f(x')| ≤ 0.0e+00 |f(x)|: false
     |f(x) - f(x')| = 6.35e+13 |f(x)|
   * |g(x)| ≤ 1.0e-08: true
     |g(x)| = 6.66e-15
   * Stopped by an increasing objective: false
   * Reached Maximum Number of Iterations: false
 * Objective Calls: 71
 * Gradient Calls: 71
 * Hessian Calls: 23
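
If you prefer to compare runs programmatically rather than reading the printed summaries, the result object can be queried with Optim's accessor functions. A minimal sketch, assuming the res_st object from the run above:

Optim.minimizer(res_st)   # point returned by the optimizer
Optim.minimum(res_st)     # objective value at that point
Optim.iterations(res_st)  # number of iterations
Optim.f_calls(res_st)     # objective evaluations
Optim.g_calls(res_st)     # gradient evaluations
Optim.h_calls(res_st)     # Hessian evaluations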

We can now try with the initial step length guess from Hager and Zhang.

algo_hz = Newton(alphaguess = InitialHagerZhang(α0=1.0), linesearch = HagerZhang())
res_hz = Optim.optimize(prob.f, prob.g!, prob.h!, prob.initial_x, method=algo_hz)
Results of Optimization Algorithm
 * Algorithm: Newton's Method
 * Starting Point: [-1.2,1.0]
 * Minimizer: [0.999999999585337,0.9999999991702594]
 * Minimum: 1.719626e-19
 * Iterations: 24
 * Convergence: true
   * |x - x'| ≤ 0.0e+00: false
     |x - x'| = 4.16e-06
   * |f(x) - f(x')| ≤ 0.0e+00 |f(x)|: false
     |f(x) - f(x')| = 3.65e+09 |f(x)|
   * |g(x)| ≤ 1.0e-08: true
     |g(x)| = 6.63e-10
   * Stopped by an increasing objective: false
   * Reached Maximum Number of Iterations: false
 * Objective Calls: 61
 * Gradient Calls: 38
 * Hessian Calls: 24

From the results we see that the Hager-Zhang initial step length guess reduces the number of objective calls (71 to 61) and gradient calls (71 to 38), at the cost of one additional iteration.
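
LineSearches also provides other initial step length procedures, for example InitialPrevious, which reuses the step length accepted in the previous iteration. A minimal sketch of trying it on the same problem, mirroring the runs above (results not shown here):

algo_prev = Newton(alphaguess = InitialPrevious(), linesearch = HagerZhang())
res_prev = Optim.optimize(prob.f, prob.g!, prob.h!, prob.initial_x, method=algo_prev)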

This page was generated using Literate.jl.