[Insight-users] Re: On MultiResMIRegistration
Luis Ibanez
luis.ibanez@kitware.com
Sat, 16 Nov 2002 10:25:16 -0500
Hi Valli,
If you notice that increasing the number of iterations
results in quite different matrices, this may indicate
that the optimization is unstable.
Here are several recommendations
(and probably Lydia can give us sounder advice too):
1) You may want to trace the evolution of the optimization
process. In order to do so, you can take advantage of the
Observer/Event mechanism in ITK. The optimizer invokes an
IterationEvent at every iteration. You can create an
Observer and register it with the optimizer in order
to get feedback at every iteration. This will allow you
to monitor how the transform is changing in the parameter
space. With this insight you will be in a better position
to determine whether the optimization needs more iterations,
a larger learning rate, or a smaller learning rate.
You can find an example of an Observer in the file
Insight/Testing/Code/Algorithms/itkCommandIterationUpdate.h
This class (deriving from itk::Command) watches for
iteration events in optimizers and prints the set of
parameters at every iteration. Please note that with an
affine transform in 3D the set of parameters is just the
dump of the matrix coefficients followed by the translation
components. You will then see 12 numbers: the first 9 will
be the matrix coefficients (row by row), and the last three
will be the translation components (x,y,z).
When the optimization is well behaved these numbers should
evolve in a monotonic way, without much oscillation.
A rough sketch of such an observer is included below.
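Something along these lines should work (the optimizer type here,
itk::GradientDescentOptimizer, is just an example; substitute whichever
optimizer your registration is actually configured with):

  #include <iostream>
  #include "itkCommand.h"
  #include "itkGradientDescentOptimizer.h"

  class CommandIterationUpdate : public itk::Command
  {
  public:
    typedef CommandIterationUpdate    Self;
    typedef itk::Command              Superclass;
    typedef itk::SmartPointer<Self>   Pointer;
    itkNewMacro( Self );

    // NOTE: replace this with the optimizer used in your registration.
    typedef itk::GradientDescentOptimizer   OptimizerType;
    typedef const OptimizerType *           OptimizerPointer;

    void Execute( itk::Object * caller, const itk::EventObject & event )
      {
      this->Execute( (const itk::Object *) caller, event );
      }

    void Execute( const itk::Object * object, const itk::EventObject & event )
      {
      OptimizerPointer optimizer = dynamic_cast< OptimizerPointer >( object );
      if( !optimizer || !itk::IterationEvent().CheckEvent( &event ) )
        {
        return;
        }
      // With a 3D affine transform this prints 12 numbers per iteration:
      // 9 matrix coefficients (row by row) followed by 3 translations.
      std::cout << optimizer->GetCurrentIteration() << "   "
                << optimizer->GetCurrentPosition() << std::endl;
      }

  protected:
    CommandIterationUpdate() {}
  };

You then connect it to the optimizer with

  CommandIterationUpdate::Pointer observer = CommandIterationUpdate::New();
  optimizer->AddObserver( itk::IterationEvent(), observer );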
2) Because you are using a multiresolution approach, there is
some freedom in how many iterations you want to perform
at each level of the pyramid and what learning rate to use
for each one.
I would recommend that you start by tuning the low resolution
level first (and don't even look at the others). Make sure
that you find a combination of parameters with which the first
level converges. The final result of the first level doesn't
have to be very precise (e.g. your 1.75 degree angle) but
rather should go consistently to the same value and
stay stable even if you increase the iterations for this
particular level.
Once you get the lowest level to converge, don't touch its
parameters; just proceed to analyze the second level.
Play again with the learning rate and number of iterations
on this level until you get a consistent convergence
(that is, more iterations will not change the result).
Keep repeating this procedure level by level until you get
to the high resolution level. You may expect the transform
at each level to get closer and closer to the theoretical
value (1.75 degrees rotation around X). One way of setting
the per-level parameters in code is sketched below.
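For what it is worth, here is a rough sketch of one way to set a
different learning rate and number of iterations per level. It assumes
that the MultiResolutionImageRegistrationMethod invokes an IterationEvent
at the beginning of every resolution level (recent versions do), and that
the optimizer provides SetLearningRate()/SetNumberOfIterations(), as
GradientDescentOptimizer does. The class name and the schedules are only
illustrative:

  #include "itkCommand.h"

  // TRegistration: e.g. itk::MultiResolutionImageRegistrationMethod<...>
  // TOptimizer   : e.g. itk::GradientDescentOptimizer
  template < typename TRegistration, typename TOptimizer >
  class RegistrationLevelUpdate : public itk::Command
  {
  public:
    typedef RegistrationLevelUpdate   Self;
    typedef itk::Command              Superclass;
    typedef itk::SmartPointer<Self>   Pointer;
    itkNewMacro( Self );

    void SetOptimizer( TOptimizer * optimizer )
      { m_Optimizer = optimizer; }

    void Execute( itk::Object * caller, const itk::EventObject & event )
      {
      if( !itk::IterationEvent().CheckEvent( &event ) )
        {
        return;
        }
      TRegistration * registration = dynamic_cast< TRegistration * >( caller );
      if( !registration || !m_Optimizer )
        {
        return;
        }
      const unsigned long level = registration->GetCurrentLevel();

      // Illustrative per-level schedules, coarse to fine: tune one level
      // at a time and freeze it before moving on to the next.
      static const double       rates[4]      = { 1e-6, 1e-5, 5e-6, 1e-6 };
      static const unsigned int iterations[4] = { 2500, 1000, 1000, 10 };

      if( level < 4 )
        {
        m_Optimizer->SetLearningRate( rates[ level ] );
        m_Optimizer->SetNumberOfIterations( iterations[ level ] );
        }
      }

    void Execute( const itk::Object *, const itk::EventObject & ) {}

  protected:
    RegistrationLevelUpdate() : m_Optimizer( 0 ) {}

  private:
    TOptimizer * m_Optimizer;
  };

You would call SetOptimizer() on this command and then register it with
the registration method (not the optimizer) via
AddObserver( itk::IterationEvent(), ... ).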
3) Be careful, though, when judging how different one rotation
matrix is from another. It is hard to interpret a matrix
directly from its coefficients. A more reliable way of
comparing is to convert the matrix into an itk::Versor
(which is just a unit quaternion). Versors represent only
the rotational part of a quaternion and keep the norm
at unity.
By converting the matrix to a Versor you can easily
compare the axis around which the rotation is happening
and the angle of that rotation, as in the sketch below.
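Here is a small sketch of this comparison, using the matrix you reported.
Note that Versor::Set() expects a pure rotation, so the recovered matrix
is first projected onto the closest rotation with an SVD:

  #include <iostream>
  #include "itkMatrix.h"
  #include "itkVersor.h"
  #include "vnl/algo/vnl_svd.h"

  int main()
  {
    // Rotation part of the affine matrix reported by the registration.
    vnl_matrix< double > m( 3, 3 );
    m[0][0] =  0.999999;  m[0][1] = -0.000407;  m[0][2] =  0.001650;
    m[1][0] =  0.000470;  m[1][1] =  0.999254;  m[1][2] = -0.038604;
    m[2][0] = -0.001633;  m[2][1] =  0.038604;  m[2][2] =  0.999253;

    // The recovered matrix is usually not exactly orthogonal, so take the
    // closest pure rotation (polar decomposition via SVD) before handing
    // it to the Versor.
    vnl_svd< double > svd( m );
    vnl_matrix< double > r = svd.U() * svd.V().transpose();

    itk::Matrix< double, 3, 3 > rotation;
    for( unsigned int i = 0; i < 3; i++ )
      {
      for( unsigned int j = 0; j < 3; j++ )
        {
        rotation[i][j] = r[i][j];
        }
      }

    itk::Versor< double > versor;
    versor.Set( rotation );   // unit quaternion: rotation only, unit norm

    // The axis should be close to (1,0,0) and the angle can be compared
    // directly against the expected 1.75 degrees.
    std::cout << "Axis  : " << versor.GetAxis() << std::endl;
    std::cout << "Angle : " << versor.GetAngle() * 180.0 / 3.14159265358979
              << " degrees" << std::endl;

    return 0;
  }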
4) You could make things easier at the start by not using a full
AffineTransform but rather a rigid 3D transform. This will reduce the
number of parameters in the optimization from 12 to 7. That is,
the optimizer will be exploring a 7-dimensional space instead
of a 12-dimensional one.
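For instance, itk::QuaternionRigidTransform (one of the rigid 3D
transforms in the toolkit, with 4 quaternion components plus 3
translations) gives you the 7-parameter search space, as this tiny
check illustrates:

  #include <iostream>
  #include "itkAffineTransform.h"
  #include "itkQuaternionRigidTransform.h"

  int main()
  {
    itk::AffineTransform< double, 3 >::Pointer affine =
      itk::AffineTransform< double, 3 >::New();
    itk::QuaternionRigidTransform< double >::Pointer rigid =
      itk::QuaternionRigidTransform< double >::New();

    std::cout << "Affine parameters: "
              << affine->GetNumberOfParameters() << std::endl;   // 12
    std::cout << "Rigid  parameters: "
              << rigid->GetNumberOfParameters()  << std::endl;   // 7
    return 0;
  }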
----
Unfortunately Registration is not a black box (nor is Segmentation)
into which you can just feed images and extract results blindly.
We could imagine, in the future, plugging a smart class on top
of the Multiresolution Registration to implement a strategy for
convergence. That would relieve you from having to sit there
fine-tuning parameters. Probably a soft-computing approach
would be appropriate for this. [Soft-computing is the combination
of Neural Networks, Fuzzy Logic and Evolutionary Algorithms;
they are quite efficient at adapting to new situations....]
---
Please let us know if you have further questions,
Thanks
Luis
===========================================================
valli gummadi wrote:
> Dear Mr. Luis,
> When I increased the number of iterations I could get the
> following matrix.
> Matrix:
> 0.999999 -0.000407 0.001650
> 0.000470 0.999254 -0.038604
> -0.001633 0.038604 0.999253
> Offset Values:
> -0.111432 -0.675801 -3.244812
>
> I have taken 4 levels.
> iterations are 2500, 1000, 1000, 10.
> learning rates are 1e-6, 1e-5, 5e-6, 1e-6.
>
> When I try to increase the iterations further, the matrix values are totally
> different from the expected values. In the transformation matrix,
> -0.03864 is closer to -sin(1.75). But additional rotations are taking
> place around the other axes. Why is this occurring? Please give some
> suggestions regarding this.
>
> Regards,
> Valli.