[Insight-developers] v3 optimizers, vnl cost function adaptor and parameter scaling

M Stauffer -V- mstauff at verizon.net
Tue Oct 2 17:25:31 EDT 2012


Hi,

I'm trying to understand the behavior of the v3 optimizers and
SingleValuedVnlCostFunctionAdaptor.

The optimizers (e.g. LBFGSOptimizer) scale up the initial parameters by
the user-set scales in StartOptimization().
The comments say:
  // We also scale the initial parameters up if scales are defined.
  // This compensates for later scaling them down in the cost function adaptor
  // and at the end of this function.
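
Roughly, what I see in StartOptimization() is something like this
(paraphrased from memory, so details and names may be a bit off):

  ParametersType parameters = this->GetInitialPosition();
  if ( m_ScalesInitialized )
    {
    ScalesType scales = this->GetScales();
    for ( unsigned int i = 0; i < parameters.size(); i++ )
      {
      parameters[i] *= scales[i];    // scale up before handing to vnl
      }
    }

  m_VnlOptimizer->minimize( parameters );  // vnl works in the scaled space

  if ( m_ScalesInitialized )
    {
    ScalesType scales = this->GetScales();
    for ( unsigned int i = 0; i < parameters.size(); i++ )
      {
      parameters[i] /= scales[i];    // scale back down at the end
      }
    }
  this->SetCurrentPosition( parameters );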

Then, in the cost-function adaptor, the parameters supplied by the
optimizer are scaled back down to their original values before the
metric is called. The metric is evaluated at those values, and the
resulting derivative is also scaled down by the scales before it is
returned to vnl.
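
The adaptor's f()/gradf() look to me roughly like this (again
paraphrased, not the actual code, and the names are approximate):

  // f(): convert the internal (scaled) parameters back to external ones
  for ( unsigned int i = 0; i < nParams; i++ )
    {
    parameters[i] = inparameters[i] / scales[i];
    }
  value = m_CostFunction->GetValue( parameters );

  // gradf(): evaluate the derivative at the external parameters, then
  // divide it by the scales as well before handing it back to vnl
  m_CostFunction->GetDerivative( parameters, derivative );
  for ( unsigned int i = 0; i < nParams; i++ )
    {
    gradient[i] = derivative[i] / scales[i];
    }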

Then, at the end of optimization, the parameters are scaled back down
to their original values.

Does anyone know why this is done? It seems the vnl optimizers are
effectively getting a gradient that's scaled down by scales^2 relative
to their internal parameters.
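
A 1-D illustration of what I mean, with made-up numbers and a scale
s = 10:

  double s    = 10.0;
  double p    = 2.0;        // external parameter
  double pInt = s * p;      // internal parameter seen by vnl      = 20.0
  double dMdp = 6.0;        // metric derivative w.r.t. external p
  double gInt = dMdp / s;   // gradient handed back to vnl         = 0.6

The internal parameter is larger by a factor of s while the internal
gradient is smaller by a factor of s, so the gradient-to-parameter
ratio the vnl optimizer sees is reduced by s^2.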

Thanks,
Michael


