<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=us-ascii">
<META content="MSHTML 6.00.2900.2668" name=GENERATOR></HEAD>
<BODY>
<DIV dir=ltr align=left><SPAN class=018441218-22062005><FONT face=Arial
color=#0000ff size=2>Anish,</FONT></SPAN></DIV>
<DIV dir=ltr align=left><SPAN class=018441218-22062005><FONT face=Arial
color=#0000ff size=2></FONT></SPAN> </DIV>
<DIV dir=ltr align=left><SPAN class=018441218-22062005><FONT face=Arial
color=#0000ff size=2>As far as I can tell, all of the optimization algorithms
from Netlib are local. Global optimization is considerably harder, and requires
much more number crunching. Examples include simulated annealing, multi-start
methods (i.e. stochastically sampling the solution space), particle swarm
methods, and sequential response surfaces. I am new enough to ITK that I cannot
say which, if any, of these might usefully be implemented in ITK. Particle swarm
methods are an interesting option because they are so easy to program. Be
advised, however, that no global method can guarantee
convergence.</FONT></SPAN></DIV>
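Since particle swarm methods are indeed easy to program, here is a minimal, non-ITK sketch of the idea, minimizing an illustrative multimodal 1-D cost function (a Rastrigin-like curve). The function, coefficient values, and names are illustrative assumptions, not anything from ITK:

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Multimodal 1-D test cost (Rastrigin-like): global minimum at x = 0,
// with many local minima near the other integers.
double cost(double x) {
    const double kTwoPi = 6.283185307179586;
    return x * x + 10.0 * (1.0 - std::cos(kTwoPi * x));
}

// Minimal particle swarm optimizer: each particle is pulled toward its own
// best position and toward the swarm's best position, with randomized pull
// strengths, so the swarm explores globally before contracting.
double psoMinimize(int nParticles, int nIters, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> init(-5.0, 5.0);
    std::uniform_real_distribution<double> unit(0.0, 1.0);

    std::vector<double> x(nParticles), v(nParticles, 0.0), pBest(nParticles);
    double gBest = init(rng);
    for (int i = 0; i < nParticles; ++i) {
        x[i] = init(rng);
        pBest[i] = x[i];
        if (cost(x[i]) < cost(gBest)) gBest = x[i];
    }
    const double w = 0.7, c1 = 1.5, c2 = 1.5;  // common textbook coefficients
    for (int it = 0; it < nIters; ++it) {
        for (int i = 0; i < nParticles; ++i) {
            v[i] = w * v[i] + c1 * unit(rng) * (pBest[i] - x[i])
                            + c2 * unit(rng) * (gBest - x[i]);
            x[i] += v[i];
            if (cost(x[i]) < cost(pBest[i])) pBest[i] = x[i];
            if (cost(x[i]) < cost(gBest)) gBest = x[i];
        }
    }
    return gBest;
}
```

With a handful of particles this reliably lands in the global basin of such a low-dimensional test function, though, as noted above, no global method comes with a convergence guarantee.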
<DIV dir=ltr align=left><SPAN class=018441218-22062005><FONT face=Arial
color=#0000ff size=2></FONT></SPAN> </DIV>
<DIV dir=ltr align=left><SPAN class=018441218-22062005><FONT face=Arial
color=#0000ff size=2>Dan</FONT></SPAN></DIV>
<DIV> </DIV>
<P><SPAN lang=en-us><FONT face="Times New Roman">Daniel R Einstein,
PhD<BR>Biological Monitoring and Modeling<BR>Pacific Northwest National
Laboratory<BR>P.O. Box 999; MSIN P7-59<BR>Richland, WA 99352<BR>Tel:
509/376-2924<BR>Fax: 509/376-9064<BR></FONT></SPAN><A
href="mailto:daniel.einstein@pnl.gov"><SPAN lang=en-us><U><FONT
face="Times New Roman"
color=#0000ff>daniel.einstein@pnl.gov</FONT></U></SPAN></A><SPAN
lang=en-us></SPAN> </P>
<DIV> </DIV><BR>
<DIV class=OutlookMessageHeader lang=en-us dir=ltr align=left>
<HR tabIndex=-1>
<FONT face=Tahoma size=2><B>From:</B>
insight-users-bounces+daniel.einstein=pnl.gov@itk.org
[mailto:insight-users-bounces+daniel.einstein=pnl.gov@itk.org] <B>On Behalf Of
</B>Ashish Poddar<BR><B>Sent:</B> Wednesday, June 22, 2005 10:58
AM<BR><B>To:</B> Luis Ibanez<BR><B>Cc:</B>
insight-users@itk.org<BR><B>Subject:</B> [Insight-users] Re: [Insight-developers] a small bug
in itkConjugateGradientOptimizer<BR></FONT><BR></DIV>
<DIV></DIV>
<DIV>Hi,</DIV>
<DIV> </DIV>
<DIV>Most of the optimizers that I have come across find a local
minimum around the given initial approximation, but in that case I mostly end up
in the wrong place. Is there any algorithm that somehow scans the global space
and helps determine the global minimum? </DIV>
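One simple way to probe the global space, mentioned as "multi-starts" in the reply above, is to restart a local optimizer from many random initial points and keep the best result. A minimal sketch, using an illustrative multimodal 1-D cost and a crude fixed-step gradient descent as the local optimizer (not ITK code; all names are made up for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <random>

const double kTwoPi = 6.283185307179586;

// Illustrative multimodal cost: global minimum at x = 0, local minima near
// the other integers.
double cost(double x) { return x * x + 10.0 * (1.0 - std::cos(kTwoPi * x)); }
double grad(double x) { return 2.0 * x + 10.0 * kTwoPi * std::sin(kTwoPi * x); }

// A crude local optimizer: fixed-step gradient descent. It converges to
// whichever local minimum owns the basin containing x0.
double localDescent(double x0) {
    double x = x0;
    for (int i = 0; i < 500; ++i) x -= 1e-3 * grad(x);
    return x;
}

// Multi-start: run the local optimizer from many random initial points and
// keep the best local minimum found.
double multiStart(int nStarts, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> init(-5.0, 5.0);
    double best = localDescent(init(rng));
    for (int i = 1; i < nStarts; ++i) {
        double x = localDescent(init(rng));
        if (cost(x) < cost(best)) best = x;
    }
    return best;
}
```

The same wrapper idea applies to any local optimizer: the more starts, the higher the chance one of them falls into the global basin, but there is no guarantee.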
<DIV> </DIV>
<DIV>Another problem that I am facing with the conjugate gradient method is that
the scales do not work for the conjugate gradient optimizer. The scales are only
taken into account in the very first initialization step and are never
considered again in any of the later iterations. I want to fix some of the
parameters by setting the scale to some extreme value (I used to set 100 or
something for the regular step gradient descent optimizer, and it served the
purpose very conveniently). </DIV>
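The "extreme scale" trick can be sketched as follows. This is an illustration of the idea only, not ITK's actual implementation; the divide-gradient-by-scale scheme and all names here are assumptions for the sake of the example:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One gradient descent update with per-parameter scales: each gradient
// component is divided by its scale before the step is taken, so a very
// large scale makes the corresponding parameter effectively immobile.
// Illustrative scheme only, not ITK code.
std::vector<double> scaledStep(const std::vector<double>& params,
                               const std::vector<double>& gradient,
                               const std::vector<double>& scales,
                               double stepLength) {
    std::vector<double> next(params.size());
    for (std::size_t i = 0; i < params.size(); ++i)
        next[i] = params[i] - stepLength * gradient[i] / scales[i];
    return next;
}
```

For instance, with scales {1.0, 1e8} the second parameter barely moves at all, which is the "freezing" effect described above; this only works if the optimizer actually consults the scales on every iteration.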
<DIV> </DIV>
<DIV>Any help will be highly appreciated,</DIV>
<DIV>with regards,</DIV>
<DIV>Ashish.<BR><BR> </DIV>
<DIV><SPAN class=gmail_quote>On 5/18/05, <B class=gmail_sendername>Luis
Ibanez</B> <<A
href="mailto:luis.ibanez@kitware.com">luis.ibanez@kitware.com</A>>
wrote:</SPAN>
<BLOCKQUOTE class=gmail_quote
style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid"><BR>Hi
Ashish,<BR><BR>The Conjugate Gradient method is only convenient when the
cost<BR>function has smooth second derivatives. If your cost
function <BR>is noisy, it is unlikely that this optimizer will behave
nicely.<BR><BR>Note that it is common to find that Image Metrics are
rather<BR>noisy functions.<BR><BR><BR><BR>
Regards,<BR><BR><BR> Luis<BR><BR><BR><BR>--------------------<BR><BR>Ashish
Poddar wrote:<BR><BR>> Hi,<BR>><BR>> I am also struggling right now
with the initialization options for the<BR>> Conjugate Gradient, for which I
could not find any examples. While <BR>> searching I came across an example
for the Levenberg Marquardt Optimizer,<BR>> which seems to have a similar
interface to that of the conjugate<BR>> gradient optimizer. However, the same
initialization did not work <BR>> for the Conjugate Gradient. If someone can
point out any reference for<BR>> Conjugate Gradient, it would be
great.<BR>><BR>> Earlier I was using the regular step gradient descent
optimizer with these<BR>> parameters: <BR>> Transform - Centered
Affine<BR>> Scale for first 9 parameters - 1.0<BR>> Scale for next 6
parameters - 0.0001<BR>> Number of Iterations - 400<BR>> Minimum Step
Length - 0.0001<BR>> Maximum Step Length - 0.005<BR>><BR>> Any help
will be highly appreciated,<BR>> with regards,<BR>>
Ashish.<BR>><BR>><BR>><BR>> On 5/17/05, Ashish Poddar <<A
href="mailto:ahpoddar@gmail.com">ahpoddar@gmail.com</A>> wrote:
<BR>><BR>>>Hi Luis,<BR>>><BR>>>Thank you for the quick
action. Probably a similar change is required<BR>>>for the Levenberg Marquardt
Optimizer too.<BR>>><BR>>>with regards,<BR>>>Ashish.
<BR>>><BR>>>On 5/16/05, Luis Ibanez <<A
href="mailto:luis.ibanez@kitware.com">luis.ibanez@kitware.com</A>>
wrote:<BR>>><BR>>>>Hi
Ashish,<BR>>>><BR>>>>Thanks for pointing this out.
<BR>>>><BR>>>>You are right, the GetValue() method should be
const.<BR>>>><BR>>>>A fix has now been committed to the CVS
repository.<BR>>>><BR>>>>Please let us know if you encounter
any other problem.
<BR>>>><BR>>>> Thanks<BR>>>><BR>>>>
Luis<BR>>>><BR>>>>----------------------<BR>>>>Ashish
Poddar wrote:<BR>>>><BR>>>>>Hi,<BR>>>>>
<BR>>>>>I am not sure whether it qualifies as a bug or not, but
it surely affects<BR>>>>>the re-usability and pluggability model of
the ITK library.<BR>>>>><BR>>>>>The GetValue() function in
the ConjugateGradientOptimizer class is currently
<BR>>>>><BR>>>>>MeasureType
GetValue();<BR>>>>><BR>>>>>but in the
RegularStepGradientDescentOptimizer class it is defined by a macro
as<BR>>>>><BR>>>>>MeasureType GetValue() const;
<BR>>>>><BR>>>>>which is an interface mismatch...
I encountered this when I replaced<BR>>>>>the regular step gradient
descent optimizer with the conjugate gradient<BR>>>>>optimizer. In the
observer I was using a const reference to the <BR>>>>>optimizer
and displaying the value (just the example which
is<BR>>>>>available for the same, nothing new
=D)...<BR>>>>><BR>>>>>with
regards,<BR>>>>>Ashish.
<BR>>>>><BR>>>><BR>>>><BR>>>--<BR>>>Ashish
Poddar<BR>>>Have an acceptable reason for accepting
anything.<BR>>>Y:ashish_poddar | <A
href="mailto:MSN:ashish_poddar@yahoo.com">MSN:ashish_poddar@yahoo.com</A><BR>>><BR>><BR>><BR>><BR><BR><BR><BR></BLOCKQUOTE></DIV><BR><BR><BR>--
<BR>Ashish Poddar<BR>Have an acceptable reason for accepting
anything.<BR>Y:ashish_poddar | <A
href="mailto:MSN:ashish_poddar@yahoo.com">MSN:ashish_poddar@yahoo.com</A>
</BODY></HTML>