[Insight-users] Another question about the optimizer.

Luis Ibanez luis.ibanez at kitware.com
Mon, 23 Jun 2003 13:26:22 -0400


Hi Raghavendra,

If the optimizer step is so large that it can span from
one minimum to another, then what you may want to do is
simply reduce the step size.

It is a basic assumption of regular descent algorithms
that each step spans a monotonic section of the cost
function.
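
For reference, the step lengths are controlled through the
public API of the optimizer. A minimal sketch, assuming the
usual itk::RegularStepGradientDescentOptimizer setup (the
numeric values below are only placeholders to tune for your
metric):

    #include "itkRegularStepGradientDescentOptimizer.h"

    int main()
    {
      typedef itk::RegularStepGradientDescentOptimizer OptimizerType;
      OptimizerType::Pointer optimizer = OptimizerType::New();

      // Placeholder values: tune them for your metric.
      optimizer->SetMaximumStepLength( 0.1   );  // largest allowed step
      optimizer->SetMinimumStepLength( 0.001 );  // convergence threshold

      return 0;
    }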

But... if you want to go ahead with your approach, that
sounds reasonable too. It may be easier for you to simply
copy the code from the itkRegularStepGradientDescent
optimizer into a new optimizer, and then modify the
code in lines 203-224 to add your rule.
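
To make this concrete, here is a rough sketch of the rule in
question. This is NOT the actual ITK source: the member names
follow the ITK conventions, but the class is a hypothetical
simplification, and the "no improvement" rule is only a
suggestion of where your modification could go:

    #include <vector>

    // Hypothetical sketch, not the real
    // itkRegularStepGradientDescentOptimizer. It only
    // illustrates the step-halving logic discussed below.
    class SketchOptimizer
    {
    public:
      void AdvanceOneStep()
      {
        // Scalar product between the current and previous gradients.
        // A negative value means the angle between them exceeds
        // 90 degrees, i.e. the descent direction has reversed.
        double scalarProduct = 0.0;
        for (unsigned int i = 0; i < m_Gradient.size(); ++i)
          {
          scalarProduct += m_Gradient[i] * m_PreviousGradient[i];
          }

        if (scalarProduct < 0.0)
          {
          // Direction change: halve the step length. This is the
          // rule the existing optimizer applies.
          m_CurrentStepLength /= 2.0;
          }

        // --- A hypothetical place for your additional rule: ---
        // if (m_CurrentValue >= m_PreviousValue)  // no improvement
        //   {
        //   m_CurrentStepLength /= 2.0;  // shrink the step...
        //   return;                      // ...and skip this update
        //   }

        // ... step a distance of m_CurrentStepLength along the
        // normalized gradient and update the parameters ...
      }

    private:
      std::vector<double> m_Gradient;
      std::vector<double> m_PreviousGradient;
      double m_CurrentStepLength = 1.0;
      double m_CurrentValue      = 0.0;
      double m_PreviousValue     = 0.0;
    };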


Regards,


    Luis




---
Raghavendra Chandrashekara wrote:
> Hi Luis,
> 
> But what would happen if there are two minima which are very close 
> together and I set the initial step length to be too large? Isn't it 
> possible to jump from one minimum to the other when stepping along the 
> gradient direction? Then, when the step length is halved because of the 
> change in direction, wouldn't the optimizer get stuck in the second 
> minimum, when the first minimum is what we really want?
> 
> Thanks,
> 
> Raghavendra
> 
> Luis Ibanez wrote:
> 
>>
>> Hi Raghavendra
>>
>> The RegularStepGradientDescentOptimizer is
>> already doing all this for you.
>>
>> Please look at the code in
>>
>> Insight/Code/Numerics/
>>    itkRegularStepGradientDescentOptimizer.cxx
>>
>> in particular look at lines 203 to 224.
>>
>> The optimizer checks whether the new gradient
>> vector makes an angle of more than 90 degrees
>> with the previous gradient, and if so, it
>> halves m_CurrentStepLength.
>>
>> You may want to look at this code and verify
>> whether it is doing what you want.  If, after
>> that, you find that it is worth creating a
>> variant of the optimizer, we will be happy to
>> add it to the set of optimizers in the toolkit.
>>
>> Please let us know what you find.
>>
>>
>> Thanks
>>
>>
>>    Luis
>>
>>
>> ------------------------------------
>> Raghavendra Chandrashekara wrote:
>>
>>> Hi Luis,
>>>
>>> I am trying to control the itk::RegularStepGradientDescentOptimizer 
>>> so that it doesn't move in the gradient direction if there is no 
>>> improvement. What I am doing is storing the metric value from the 
>>> previous iteration and the current iteration. After the optimizer 
>>> has moved in the gradient direction, I check to see if there is any 
>>> improvement. If not, I would like to reduce the step length by a 
>>> factor of 2 and try again.
>>>
>>> But I've come across two problems, and I'm not sure of the best 
>>> way to solve them:
>>>
>>> (1) There is no function to set the current step length. So I am 
>>> reducing the maximum step length by a factor of 2.
>>>
>>> (2) Because the optimizer has already moved in the gradient direction, 
>>> I would like to move back one step, but there is no function which 
>>> allows me to do this.
>>>
>>> Please can you tell me if what I am trying to do is sensible or if 
>>> there is another way of achieving the same thing?
>>>
>>> Thanks,
>>>
>>> Raghavendra
>>>