[Insight-users] Re: [Insight-developers] a small bug in itkConjugateGradientOptimizer

Zachary Pincus zpincus at stanford.edu
Wed Jun 22 17:24:48 EDT 2005


Also, the OnePlusOneEvolutionaryOptimizer (
http://itk.org/Insight/Doxygen/html/classitk_1_1OnePlusOneEvolutionaryOptimizer.html )
is an optimizer with
a simulated annealing flavor. It takes random steps within a certain  
radius in parameter space, and adjusts that radius (anisotropically) to  
selectively favor directions which have produced good results in the  
past. With each "good" step, the radius grows, and with each "bad" step  
it shrinks (akin to the "cooling" that you do toward the end of a  
simulated annealing run).
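
For example, a minimal configuration looks something like this (a
sketch only; it assumes costFunction and initialParameters are already
set up elsewhere, and the numeric values are just illustrative):

  #include "itkOnePlusOneEvolutionaryOptimizer.h"
  #include "itkNormalVariateGenerator.h"

  typedef itk::OnePlusOneEvolutionaryOptimizer    OptimizerType;
  typedef itk::Statistics::NormalVariateGenerator GeneratorType;

  GeneratorType::Pointer generator = GeneratorType::New();
  generator->Initialize( 12345 );   // seed for the random steps

  OptimizerType::Pointer optimizer = OptimizerType::New();
  optimizer->SetNormalVariateGenerator( generator );
  // Initialize( initialRadius, growFactor, shrinkFactor ): the radius
  // grows on each "good" step and shrinks on each "bad" one, as
  // described above.
  optimizer->Initialize( 1.0, 1.05, 0.95 );
  optimizer->SetEpsilon( 1e-6 );          // stop once the radius is tiny
  optimizer->SetMaximumIteration( 200 );

  optimizer->SetCostFunction( costFunction );         // assumed configured
  optimizer->SetInitialPosition( initialParameters ); // assumed configured
  optimizer->StartOptimization();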

Zach

On Jun 22, 2005, at 2:08 PM, Karthik Krishnan wrote:

>
>
> Einstein, Daniel R wrote:
>
>> Anish,
>>  As far as I can tell, all of the optimization algorithms from Netlib
>> are local. Global optimization is considerably harder and requires
>> much more crunching. Examples are simulated annealing, multi-starts
>> (i.e., stochastically sampling the solution space), particle swarm
>> methods, and sequential response surfaces. I am new enough to ITK
>> that I cannot say which, if any, of these might usefully be
>> implemented in ITK.
>
> http://itk.org/Insight/Doxygen/html/classitk_1_1SPSAOptimizer.html was  
> recently added
>
>> Particle swarm methods are an interesting option because they are so
>> easy to program. Be advised, however, that there are no global
>> methods that guarantee convergence.
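>>
>> To illustrate how simple a multi-start scheme is, here is a sketch
>> (generic C++, not ITK; ParametersType, numberOfStarts, randomStart(),
>> localMinimize(), and costFunction() are placeholders for whatever
>> local machinery you already have):
>>
>>   #include <limits>
>>
>>   ParametersType bestParams;
>>   double bestValue = std::numeric_limits<double>::max();
>>   for ( unsigned int i = 0; i < numberOfStarts; ++i )
>>     {
>>     // Stochastically sample the solution space, then descend locally.
>>     ParametersType start  = randomStart();
>>     ParametersType result = localMinimize( start );
>>     const double   value  = costFunction( result );
>>     if ( value < bestValue )   // keep the best local minimum seen
>>       {
>>       bestValue  = value;
>>       bestParams = result;
>>       }
>>     }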
>>  Dan
>>
>> Daniel R Einstein, PhD
>> Biological Monitoring and Modeling
>> Pacific Northwest National Laboratory
>> P.O. Box 999; MSIN P7-59
>> Richland, WA 99352
>> Tel: 509/ 376-2924
>> Fax: 509/376-9064
>> daniel.einstein at pnl.gov
>>
>>
>> ------------------------------------------------------------------------
>> *From:* insight-users-bounces+daniel.einstein=pnl.gov at itk.org *On
>> Behalf Of* Ashish Poddar
>> *Sent:* Wednesday, June 22, 2005 10:58 AM
>> *To:* Luis Ibanez
>> *Cc:* insight-users at itk.org
>> *Subject:* [Insight-users] Re: [Insight-developers] a small bug in
>> itkConjugateGradientOptimizer
>>
>> Hi,
>>  Most of the optimizers I have come across help in finding the local
>> minimum around the given initial approximation. But in that case I
>> mostly end up in the wrong place. Is there any algorithm that somehow
>> scans the global space and helps in determining the global minimum?
>>  Another problem I am facing with the conjugate gradient method is
>> that the scales do not work for the conjugate gradient optimizer. The
>> scales are only taken into account in the very first initialization
>> step and are never considered again in any of the later iterations.
>> I want to fix some of the parameters by setting the scale to some
>> extreme value (I used to set 100 or something for the regular step
>> gradient descent optimizer and it served the purpose very
>> conveniently).
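>>
>>  For reference, this is roughly how I set the scales (a sketch; the
>> registration setup around it is omitted):
>>
>>   #include "itkRegularStepGradientDescentOptimizer.h"
>>
>>   typedef itk::RegularStepGradientDescentOptimizer OptimizerType;
>>   OptimizerType::Pointer optimizer = OptimizerType::New();
>>
>>   // One entry per transform parameter; a very large scale shrinks
>>   // that parameter's effective step, holding it nearly fixed.
>>   OptimizerType::ScalesType scales( transform->GetNumberOfParameters() );
>>   scales.Fill( 1.0 );
>>   scales[ 0 ] = 100.0;   // e.g. freeze the first parameter
>>   optimizer->SetScales( scales );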
>>  Any help will be highly appreciated,
>> with regards,
>> Ashish.
>>
>>  On 5/18/05, *Luis Ibanez* <luis.ibanez at kitware.com> wrote:
>>
>>
>>     Hi Ashish,
>>
>>     The Conjugate Gradient method is only convenient when the cost
>>     function has smooth second derivatives.  If your cost function
>>     is noisy, it is unlikely that this optimizer will behave nicely.
>>
>>     Note that it is common to find that Image Metrics are rather
>>     noisy functions.
>>
>>
>>
>>        Regards,
>>
>>
>>           Luis
>>
>>
>>
>>     --------------------
>>
>>     Ashish Poddar wrote:
>>
>>     > Hi,
>>     >
>>     > I am also struggling right now with the initialization options
>>     > for the Conjugate Gradient optimizer, for which I could not find
>>     > any examples. While searching I came across an example for the
>>     > Levenberg Marquardt Optimizer, which seems to have a similar
>>     > interface to that of the conjugate gradient optimizer. However,
>>     > the same initialization did not work for Conjugate Gradient. If
>>     > someone can point out any reference for Conjugate Gradient, it
>>     > would be great.
>>     >
>>     > Earlier I was using the regular step gradient descent optimizer
>>     > with these parameters:
>>     > Transform - Centered Affine
>>     > Scale for first 9 parameters - 1.0
>>     > Scale for next 6 parameters - 0.0001
>>     > Number of Iterations - 400
>>     > Minimum Step Length - 0.0001
>>     > Maximum Step Length - 0.005
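>>     >
>>     > In code, that corresponds to roughly the following (a sketch;
>>     > the rest of the registration pipeline is omitted):
>>     >
>>     >   // 3D CenteredAffineTransform: 9 matrix + 3 center +
>>     >   // 3 translation = 15 parameters
>>     >   OptimizerType::ScalesType scales( 15 );
>>     >   for ( int i = 0; i < 9;  ++i ) scales[ i ] = 1.0;
>>     >   for ( int i = 9; i < 15; ++i ) scales[ i ] = 0.0001;
>>     >   optimizer->SetScales( scales );
>>     >   optimizer->SetNumberOfIterations( 400 );
>>     >   optimizer->SetMinimumStepLength( 0.0001 );
>>     >   optimizer->SetMaximumStepLength( 0.005 );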
>>     >
>>     > Any help will be highly appreciated,
>>     > with regards,
>>     > Ashish.
>>     >
>>     >
>>     >
>>     > On 5/17/05, Ashish Poddar <ahpoddar at gmail.com> wrote:
>>     >
>>     >>Hi Luis,
>>     >>
>>     >>Thank you for the quick action. Probably a similar change is
>>     >>required for the Levenberg Marquardt Optimizer too.
>>     >>
>>     >>with regards,
>>     >>Ashish.
>>     >>
>>     >>On 5/16/05, Luis Ibanez <luis.ibanez at kitware.com> wrote:
>>     >>
>>     >>>Hi Ashish,
>>     >>>
>>     >>>Thanks for pointing this out.
>>     >>>
>>     >>>You are right, the GetValue() method should be const.
>>     >>>
>>     >>>A fix has now been committed to the CVS repository.
>>     >>>
>>     >>>Please let us know if you encounter any other problem.
>>     >>>
>>     >>>    Thanks
>>     >>>
>>     >>>       Luis
>>     >>>
>>     >>>----------------------
>>     >>>Ashish Poddar wrote:
>>     >>>
>>     >>>>Hi,
>>     >>>>
>>     >>>>I am not sure whether it qualifies as a bug or not, but it
>>     >>>>surely affects the re-usability and pluggability model of the
>>     >>>>ITK library.
>>     >>>>
>>     >>>>The GetValue() function in the ConjugateGradientOptimizer class
>>     >>>>is currently
>>     >>>>
>>     >>>>MeasureType GetValue();
>>     >>>>
>>     >>>>but in the RegularStepGradientDescentOptimizer class it is
>>     >>>>defined by macro as
>>     >>>>
>>     >>>>MeasureType GetValue() const;
>>     >>>>
>>     >>>>which is an interface mismatch... I encountered this when I
>>     >>>>replaced the regular step gradient descent optimizer with the
>>     >>>>conjugate gradient optimizer. In the observer I was using a
>>     >>>>const reference to the optimizer and displaying the value (just
>>     >>>>the example that is available for the same, nothing new =D)...
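>>     >>>>
>>     >>>>A minimal sketch of where this bites (observer-style code; the
>>     >>>>dynamic_cast mirrors the standard observer example):
>>     >>>>
>>     >>>>  const OptimizerType * optimizer =
>>     >>>>    dynamic_cast< const OptimizerType * >( object );
>>     >>>>  // Compiles when GetValue() is const (RegularStepGradient-
>>     >>>>  // Descent), but fails for ConjugateGradientOptimizer: a
>>     >>>>  // non-const GetValue() cannot be called through a const
>>     >>>>  // pointer.
>>     >>>>  std::cout << optimizer->GetValue() << std::endl;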
>>     >>>>
>>     >>>>with regards,
>>     >>>>Ashish.
>>     >>>>
>>     >>>
>>     >>>
>>     >>--
>>     >>Ashish Poddar
>>     >>Have an acceptable reason for accepting anything.
>>     >>Y:ashish_poddar | MSN:ashish_poddar at yahoo.com
>>     >>
>>     >
>>     >
>>     >
>>
>>
>>
>>
>>
>>
>> -- 
>> Ashish Poddar
>> Have an acceptable reason for accepting anything.
>> Y:ashish_poddar | MSN:ashish_poddar at yahoo.com  
>>
>> ------------------------------------------------------------------------
>>
>> _______________________________________________
>> Insight-users mailing list
>> Insight-users at itk.org
>> http://www.itk.org/mailman/listinfo/insight-users
>>
> _______________________________________________
> Insight-users mailing list
> Insight-users at itk.org
> http://www.itk.org/mailman/listinfo/insight-users
>


