Hi Sean,

A) LBFGSB:

http://www.ece.northwestern.edu/~nocedal/lbfgsb.html

References:

R. H. Byrd, P. Lu and J. Nocedal. A Limited Memory Algorithm for Bound
Constrained Optimization (1995), SIAM Journal on Scientific and
Statistical Computing, 16, 5, pp. 1190-1208.
http://www.ece.northwestern.edu/~nocedal/PSfiles/limited.ps.gz

C. Zhu, R. H. Byrd and J. Nocedal. L-BFGS-B: Algorithm 778: L-BFGS-B,
FORTRAN routines for large scale bound constrained optimization (1997),
ACM Transactions on Mathematical Software, Vol. 23, Num. 4, pp. 550-560.
http://www.ece.northwestern.edu/~nocedal/PSfiles/lbfgsb.ps.gz

From the second reference:
"At each iteration a limited memory BFGS approximation of the Hessian is
updated.
This limited memory matrix is used to define a quadratic model of the
objective function 'f'.
A search direction is then computed using a two-stage approach: first, the
gradient projection
method is used to identify a set of active variables, i.e. variables that
will be held at their bound;
then the quadratic model is approximately minimized with respect to the
free variables. The
search direction is defined to be the vector leading from the current
iterate to this approximate
minimizer. Finally a line search is performed along the search direction
using the subroutine
described in [17]. A novel feature of the algorithm is that the limited
memory BFGS matrices are
represented in a compact form [7] that is efficient for bound constrained
problems."
B) RegularStepGradientDescent (a minimal sketch of this loop follows the list)
1. The user sets an initial step length
2. Start at one point in the parametric space
3. Compute the gradient of the cost function at that point
4. Advance along that direction by the step length
5. Compute the angle between the current directed step and the previous one
6. If the angle is obtuse (the direction has reversed), assume that it has
gone too far and reduce the step length by the relaxation factor
(default 0.5)
7. Check the gradient magnitude tolerance (stop if necessary)
8. Check the function tolerance (stop if necessary)
9. Check the number of iterations (stop if necessary)
10. Go to 3
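
Below is a self-contained C++ sketch of that loop on a toy 2-D quadratic
cost. It is not the ITK implementation; the cost function, tolerances,
and variable names are illustrative only (step 8 uses a minimum step
length as a stand-in for the function-tolerance check).

    // Sketch of the regular-step gradient descent loop listed above,
    // on the toy cost f(x) = x0^2 + 4*x1^2.  NOT the ITK code.
    #include <array>
    #include <cmath>
    #include <cstdio>

    using Point = std::array<double, 2>;

    double cost(const Point& p)     { return p[0]*p[0] + 4.0*p[1]*p[1]; }
    Point  gradient(const Point& p) { return { 2.0*p[0], 8.0*p[1] }; }

    int main() {
        Point x = { 4.0, 3.0 };        // 2. start at one point
        double stepLength = 1.0;       // 1. user-set initial step length
        const double relaxation = 0.5; // relaxation factor (default 0.5)
        const double gradTol = 1e-6;   // 7. gradient-magnitude tolerance
        const double minStep = 1e-8;   // 8. stand-in tolerance
        const int maxIter = 200;       // 9. iteration budget

        Point prevGrad = { 0.0, 0.0 };
        // 9./10. iteration check; each pass repeats from step 3
        for (int iter = 0; iter < maxIter; ++iter) {
            Point g = gradient(x);     // 3. gradient at the current point
            double gNorm = std::sqrt(g[0]*g[0] + g[1]*g[1]);
            if (gNorm < gradTol) break;          // 7. stop if necessary

            // 5./6. A negative dot product between the current and
            // previous gradients means an obtuse angle (direction
            // reversed): we overshot, so relax the step length.
            double dot = g[0]*prevGrad[0] + g[1]*prevGrad[1];
            if (iter > 0 && dot < 0.0) stepLength *= relaxation;
            if (stepLength < minStep) break;     // 8. stop if necessary

            // 4. advance by the step length along the descent direction
            x[0] -= stepLength * g[0] / gNorm;
            x[1] -= stepLength * g[1] / gNorm;
            prevGrad = g;
            std::printf("iter %3d  f(x) = %g\n", iter, cost(x));
        }
        return 0;
    }

Note the contrast with A): this optimizer takes fixed-length steps along
the raw gradient and only shrinks the step when it detects overshoot,
while LBFGSB builds a quadratic model of 'f' and natively supports bound
constraints on the parameters.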
Please let us know if you have further questions,
Thanks
Luis
------------------------------
On Fri, Jul 10, 2009 at 4:08 PM, Sean Lee <kevinseanlee at live.com> wrote:
> Hi, Luis,
> Thank you very much for your suggestions and I will check all those
> factors. I will get back to you later.
>
> Would you mind explaining the difference between the two optimizers:
> itkRegularStepGradientDescentOptimizer
> itkLBFGSBOptimizer
>
> Thanks.
>
> Sean
>
>