[Insight-users] Parameter scales for registration (second try)

brian avants stnava at gmail.com
Tue May 7 10:40:54 EDT 2013


also - to take away a bit of the "mystery" surrounding v4 optimization,
let's see how the gradient descent AdvanceOneStep function works:

void
GradientDescentOptimizerv4
::AdvanceOneStep()
{
  itkDebugMacro("AdvanceOneStep");

  /* Begin threaded gradient modification.
   * Scale by gradient scales, then estimate the learning
   * rate if options are set to (using the scaled gradient),
   * then modify by learning rate. The m_Gradient variable
   * is modified in-place. */
  this->ModifyGradientByScales();
  this->EstimateLearningRate();
  this->ModifyGradientByLearningRate();

  try
    {
    /* Pass gradient to transform and let it do its own updating */
    this->m_Metric->UpdateTransformParameters( this->m_Gradient );
    }
  catch ( ExceptionObject & )
    {
    this->m_StopCondition = UPDATE_PARAMETERS_ERROR;
    this->m_StopConditionDescription << "UpdateTransformParameters error";
    this->StopOptimization();

    // Pass exception to caller
    throw;
    }

  this->InvokeEvent( IterationEvent() );
}
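
per parameter, the three Modify* calls above boil down to roughly this (just a
sketch - the real code is threaded over sub-ranges and skips the work when the
scales are identity):

  /* sketch: divide the gradient by the per-parameter scales ... */
  for ( SizeValueType j = 0; j < this->m_Gradient.GetSize(); ++j )
    {
    this->m_Gradient[j] /= this->m_Scales[j];
    }

  /* EstimateLearningRate() may update m_LearningRate here if a
   * scales / step estimator was set on the optimizer */

  /* ... then multiply by the learning rate */
  for ( SizeValueType j = 0; j < this->m_Gradient.GetSize(); ++j )
    {
    this->m_Gradient[j] *= this->m_LearningRate;
    }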


i hope this does not look too convoluted.  then the base metric class does
this:

template<unsigned int TFixedDimension, unsigned int TMovingDimension,
         class TVirtualImage>
void
ObjectToObjectMetric<TFixedDimension, TMovingDimension, TVirtualImage>
::UpdateTransformParameters( const DerivativeType & derivative,
                             ParametersValueType factor )
{
  /* Rely on transform::UpdateTransformParameters to verify proper
   * size of derivative */
  this->m_MovingTransform->UpdateTransformParameters( derivative, factor );
}
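
the moving transform then applies the update itself.  for a simple
(non-composing) transform, the default behavior in the transform base class is
just an additive update, roughly like this (a sketch, simplified from the
actual implementation, which also checks that the update size matches the
parameter count):

  NumberOfParametersType numberOfParameters = this->GetNumberOfParameters();
  for ( NumberOfParametersType k = 0; k < numberOfParameters; ++k )
    {
    this->m_Parameters[k] += update[k] * factor;
    }
  this->SetParameters( this->m_Parameters );

(composing transforms, e.g. displacement field transforms, override this to
compose or smooth the update instead.)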


so the transform parameters should be updated in a way that is consistent
with:

newPosition[j] = currentPosition[j] + transformedGradient[j] * factor / scales[j];

factor defaults to 1 ... anyway, as you can infer from the above discussion,
even the basic gradient descent optimizer can be used to take "regular steps"
if you want.
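
for example, to keep the step size fixed rather than estimated, something
along these lines should work (a sketch - "metric" / MetricType are assumed to
be set up elsewhere, and the numbers are just placeholders):

  typedef itk::GradientDescentOptimizerv4 OptimizerType;
  OptimizerType::Pointer optimizer = OptimizerType::New();
  optimizer->SetMetric( metric );
  optimizer->SetNumberOfIterations( 100 );

  // keep the learning rate fixed instead of estimating it
  optimizer->SetDoEstimateLearningRateAtEachIteration( false );
  optimizer->SetDoEstimateLearningRateOnce( false );
  optimizer->SetLearningRate( 0.1 );

  // and let a scales estimator handle the per-parameter scaling
  typedef itk::RegistrationParameterScalesFromPhysicalShift<MetricType>
    ScalesEstimatorType;
  ScalesEstimatorType::Pointer scalesEstimator = ScalesEstimatorType::New();
  scalesEstimator->SetMetric( metric );
  optimizer->SetScalesEstimator( scalesEstimator );

  optimizer->StartOptimization();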



brian




On Tue, May 7, 2013 at 10:23 AM, brian avants <stnava at gmail.com> wrote:

> brad
>
> did this issue ever go up on jira?  i do remember discussing with you at a
> meeting.   our solution is in the v4 optimizers.
>
> the trivial additive parameter update doesn't work in more general cases,
> e.g. when you need to compose parameters with parameter updates.
>
> to resolve this limitation, the v4 optimizers pass the update step to the
> transformations
>
> this implements the idea that  " the transforms know how to update
> themselves "
>
> there are several other differences, as nick pointed out, that reduce the
> need for users to experiment with scales .
>
> for basic scenarios like the one being discussed by joel, i prefer the
> conjugate gradient optimizer with line search.
>
> itkConjugateGradientLineSearchOptimizerv4.h
>
> when combined with the scale estimators, this leads to registration
> algorithms with very few parameters to tune: 1 parameter if you don't
> consider multi-resolution.
>
>
> brian
>
>
>
>
> On Tue, May 7, 2013 at 9:27 AM, Nick Tustison <ntustison at gmail.com> wrote:
>
>> Hi Brad,
>>
>> I certainly don't disagree with Joel's findings.  It seems like a
>> good fix which should be put up on gerrit.  There were several
>> components that we kept in upgrading the registration framework.
>> The optimizers weren't one of them.
>>
>> Also, could you elaborate a bit more on the "convoluted" aspects
>> of parameter advancement?  There's probably a reason for it and
>> we could explain why.
>>
>> Nick
>>
>>
>>
>> On May 7, 2013, at 8:58 AM, Bradley Lowekamp <blowekamp at mail.nih.gov>
>> wrote:
>>
>> > Nick,
>> >
>> > What we are observing is an algorithmic bug in the
>> > RegularStepGradientDescentOptimizer. The ITKv4 optimizers have quite a
>> > convoluted way to advance the parameters, and likely don't contain this bug.
>> >
>> >
>> > I think the figure Joel put together does a good job of illustrating
>> > the issue:
>> >
>> > http://i.imgur.com/DE6xqQ5.png
>> >
>> >
>> > I just think the math here:
>> >
>> > https://github.com/Kitware/ITK/blob/master/Modules/Numerics/Optimizers/src/itkRegularStepGradientDescentOptimizer.cxx#L44
>> >
>> > newPosition[j] = currentPosition[j] + transformedGradient[j] * factor;
>> >
>> > should be:
>> >
>> > newPosition[j] = currentPosition[j] + transformedGradient[j] * factor / scales[j];
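>> >
>> > In context, the corrected loop in StepAlongGradient() would look roughly
>> > like this (a sketch, not a tested patch):
>> >
>> >   const ScalesType & scales = this->GetScales();
>> >   for ( unsigned int j = 0; j < spaceDimension; j++ )
>> >     {
>> >     newPosition[j] = currentPosition[j]
>> >                      + transformedGradient[j] * factor / scales[j];
>> >     }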
>> >
>> > Brad
>> >
>> >
>> > On May 7, 2013, at 8:07 AM, Nick Tustison <ntustison at gmail.com> wrote:
>> >
>> >> Not quite.  See below for a relevant block of code.
>> >> The optimizer can take an optional scales estimator.
>> >>
>> >>
>> >>   typedef itk::RegistrationParameterScalesFromPhysicalShift<MetricType>
>> >>     ScalesEstimatorType;
>> >>   typename ScalesEstimatorType::Pointer scalesEstimator =
>> >>     ScalesEstimatorType::New();
>> >>   scalesEstimator->SetMetric( singleMetric );
>> >>   scalesEstimator->SetTransformForward( true );
>> >>
>> >>   typedef itk::ConjugateGradientLineSearchOptimizerv4
>> >>     ConjugateGradientDescentOptimizerType;
>> >>   typename ConjugateGradientDescentOptimizerType::Pointer optimizer =
>> >>     ConjugateGradientDescentOptimizerType::New();
>> >>   optimizer->SetLowerLimit( 0 );
>> >>   optimizer->SetUpperLimit( 2 );
>> >>   optimizer->SetEpsilon( 0.2 );
>> >>   //    optimizer->SetMaximumLineSearchIterations( 20 );
>> >>   optimizer->SetLearningRate( learningRate );
>> >>   optimizer->SetMaximumStepSizeInPhysicalUnits( learningRate );
>> >>   optimizer->SetNumberOfIterations( currentStageIterations[0] );
>> >>   optimizer->SetScalesEstimator( scalesEstimator );
>> >>   optimizer->SetMinimumConvergenceValue( convergenceThreshold );
>> >>   optimizer->SetConvergenceWindowSize( convergenceWindowSize );
>> >>   optimizer->SetDoEstimateLearningRateAtEachIteration(
>> >>     this->m_DoEstimateLearningRateAtEachIteration );
>> >>   optimizer->SetDoEstimateLearningRateOnce(
>> >>     !this->m_DoEstimateLearningRateAtEachIteration );
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> On May 7, 2013, at 8:01 AM, Joël Schaerer <joel.schaerer at gmail.com>
>> wrote:
>> >>
>> >>> Hi Nick,
>> >>>
>> >>> I did indeed have a look at these new classes (not a very thorough
>> >>> one, I must confess). However, if I understand correctly, they allow
>> >>> estimating the parameter scales, but don't change the way the scales
>> >>> are used by the optimizer?
>> >>>
>> >>> joel
>> >>>
>> >>> On 07/05/2013 13:52, Nick Tustison wrote:
>> >>>> Hi Brad,
>> >>>>
>> >>>> Have you seen the work we did with the class
>> >>>>
>> >>>>
>> >>>> http://www.itk.org/Doxygen/html/classitk_1_1RegistrationParameterScalesEstimator.html
>> >>>>
>> >>>> and its derived classes for the v4 framework?  They describe
>> >>>> a couple different approaches to scaling the gradient for use
>> >>>> with the v4 optimizers.
>> >>>>
>> >>>> Nick
>> >>>>
>> >>>>
>> >>>> On May 7, 2013, at 6:59 AM, Bradley Lowekamp <blowekamp at mail.nih.gov>
>> wrote:
>> >>>>
>> >>>>> Hello Joel,
>> >>>>>
>> >>>>> I have encountered the same issue. I ended up creating my own
>> >>>>> "ScaledRegularStepGradientDescentOptimizer" derived from the ITK one.
>> >>>>> Please find it attached. Please note, I don't think I have migrated
>> >>>>> this code to ITKv4... but I am not certain.
>> >>>>>
>> >>>>> I reported this issue to the ITKv4 registration team, but I am not
>> >>>>> sure what happened to it.
>> >>>>>
>> >>>>> I also tried to make the change in ITK a while ago, and a large
>> >>>>> number of the registration tests failed... not sure if the results
>> >>>>> were better or worse, they were just different.
>> >>>>>
>> >>>>> Brad
>> >>>>>
>> >>>>> <itkScaledRegularStepGradientDescentOptimizer.h>
>> >>>>>
>> >>>>> On Apr 25, 2013, at 11:10 AM, Joël Schaerer <
>> joel.schaerer at gmail.com> wrote:
>> >>>>>
>> >>>>>
>> >>>
>> >>
>> >
>>
>>
>

