[Insight-users] mattesMI and BSpline mix up

Stefan Klein stefan at isi.uu.nl
Wed Mar 16 14:09:08 EST 2005


Hi all,

We are troubled. :-)
The itk::MattesMutualInformationImageToImageMetric is the reason for this.
More specifically: its close relationship with the itk::BSplineDeformableTransform.


It seems that the MattesMutualInformation metric uses some hacks to speed up
its computations. We distinguish two cases:

1. Hacks to accelerate the computation of the derivatives of the
deformation field with respect to the B-spline coefficients (i.e. the
Jacobian). These take advantage of the sparseness of the Jacobian of a
BSplineDeformableTransform (a small illustration follows this list).

2. Hacks to accelerate the computation of the transformed (mapped) points,
with the help of precomputed B-spline weights and parameter indices.
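
To make the sparseness in case 1 concrete: the deformation at a point depends
only on the (SplineOrder+1)^Dimension control points in its support region, so
only a small fraction of the Jacobian is nonzero. A small stand-alone
illustration (the 20x20x20 control-point grid is just a made-up example, not
taken from our code):

    #include <iostream>

    int main()
    {
      const unsigned int  Dimension   = 3;
      const unsigned int  SplineOrder = 3;                    // cubic B-splines
      const unsigned long gridNodes   = 20UL * 20UL * 20UL;   // hypothetical grid
      const unsigned long numberOfParameters = Dimension * gridNodes;

      // Number of control points whose coefficients influence one mapped point:
      unsigned long support = 1;
      for ( unsigned int d = 0; d < Dimension; ++d )
        {
        support *= ( SplineOrder + 1 );    // (SplineOrder+1)^Dimension = 64
        }

      // The Jacobian at a point has Dimension x NumberOfParameters entries,
      // but only Dimension * support of them are nonzero.
      std::cout << "Jacobian entries (total)  : "
                << Dimension * numberOfParameters << std::endl;
      std::cout << "Jacobian entries (nonzero): "
                << Dimension * support << std::endl;
      return 0;
    }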


The speed advantages are clear.
In our opinion, the disadvantages are:

1. The 'oh so nice' registration framework is disturbed: functionality of a
Transform is copy-pasted into a Metric. This contradicts the design
philosophy (but OK, that's just philosophy).

2. If a new transform were added that, for example, also has a sparse
Jacobian, more hacks would be necessary in the
itk::MattesMutualInformationImageToImageMetric, leading to very ugly code.

3. If a new transform were written that inherits from the
BSplineDeformableTransform, it would be treated as a BSplineDeformableTransform
by the itk::MattesMutualInformationImageToImageMetric, which is not
necessarily correct. The derived transform may, for example, change the way
the TransformPoint method computes its result. The
itk::MattesMutualInformationImageToImageMetric would simply override this
behaviour again, because it computes the transformed point internally.

In fact, we have run into these problems ourselves when writing new transforms.


Wouldn't it be possible to have both: acceleration while still conforming to
the registration framework?

Possible solutions for the two cases mentioned above:

1. Add a method GetSparseJacobian to itk::Transform, which returns the
Jacobian in a smart data structure; something like this:
    array[ "a nonzero element value of the Jacobian together with its position" ]
That way a metric would not have to loop over all Jacobian elements to find
the nonzero ones, so it would give the same acceleration as the current
solution.
Whether a metric should use the GetSparseJacobian or the GetJacobian
method could be set by the user, with a boolean.
Of course, not every Transform has to implement this GetSparseJacobian
method; it just has to be declared in the base class itk::Transform.
(A rough sketch of the interface is given after this list.)

2. This case is more difficult. It might be solved by adding the following
functions to the itk::Transform class (also part of the sketch below):

	void DoPreComputationsForPoints( array of points for which precomputations
	have to be done )
	OutputPointType TransformPoint( index into the array of points for which
	precomputed data is available )   // to make sure that there is a
	TransformPoint method that actually uses the precomputed data
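
To make both proposals a bit more concrete, here is a rough, stand-alone
sketch of the interface additions we have in mind. None of this is existing
ITK code; the class and type names (TransformSketch, SparseJacobianEntry, the
point-type stand-ins) and all signatures are only suggestions:

    #include <vector>

    // Stand-ins for the usual itk::Point types, only to keep the sketch
    // self-contained.
    struct InputPointType  { double coords[ 3 ]; };
    struct OutputPointType { double coords[ 3 ]; };

    // One nonzero Jacobian element together with its position.
    struct SparseJacobianEntry
    {
      unsigned int  Row;     // output (space) dimension
      unsigned long Column;  // parameter index
      double        Value;
    };
    typedef std::vector< SparseJacobianEntry > SparseJacobianType;

    // Suggested additions to the itk::Transform base class.
    class TransformSketch
    {
    public:
      virtual ~TransformSketch() {}

      // --- proposal 1: sparse Jacobian ----------------------------------

      // Whether GetSparseJacobian is worth calling; the choice between the
      // sparse and the dense Jacobian could also be a boolean on the metric.
      virtual bool HasSparseJacobian() const { return false; }

      // Return only the nonzero Jacobian elements and their positions, so
      // that a metric never loops over all NumberOfParameters columns.
      virtual const SparseJacobianType &
      GetSparseJacobian( const InputPointType & point ) const
        {
        (void) point;            // default: empty, i.e. "not implemented"
        return m_SparseJacobian;
        }

      // --- proposal 2: precomputed mapped points ------------------------

      // The metric hands over its sample points once; a B-spline transform
      // would cache the interpolation weights and parameter indices here.
      virtual void DoPreComputationsForPoints(
        const std::vector< InputPointType > & samplePoints )
        {
        m_SamplePoints = samplePoints;
        }

      // TransformPoint overload that uses the precomputed data; sampleIndex
      // refers to the array passed to DoPreComputationsForPoints.
      virtual OutputPointType TransformPoint( unsigned long sampleIndex ) const
        {
        (void) sampleIndex;
        OutputPointType mappedPoint = { { 0.0, 0.0, 0.0 } };
        // ... a derived transform would combine its cached weights with the
        //     current parameters here ...
        return mappedPoint;
        }

    protected:
      SparseJacobianType            m_SparseJacobian;
      std::vector< InputPointType > m_SamplePoints;
    };

With such an interface the B-spline transform would keep its weight/index
bookkeeping to itself, and a metric would only see the generic base-class
interface.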


What is your opinion about these things?

Marius Staring and Stefan Klein.


