[Insight-developers] accessing metrics

Luis Ibanez ibanez@choroid.cs.unc.edu
Thu, 31 Jan 2002 00:15:54 -0500 (EST)


Hi Brian,


That's a great idea, and it shouldn't be hard
to implement!


Right now the metrics are computed over the
"LargestPossibleRegion" of the target. We might
want to change that to the "RequestedRegion" or,
even better (as you propose), to a user-provided
region. That will only require adding a SetRegion()
method to the metrics, and it can easily be done in
the metrics base class: "SimilarityRegistrationMetric".
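
Just to make that concrete, here is a minimal sketch of what
the addition could look like (the class name "MetricBase" and
the member names are only placeholders for illustration, not
the actual declaration of SimilarityRegistrationMetric):

  // Sketch only: "MetricBase" stands in for the metrics base
  // class, and the member names are assumptions.
  template <class TTarget>
  class MetricBase
  {
  public:
    typedef typename TTarget::RegionType RegionType;

    // Region of the target over which the metric will be evaluated.
    // When it has not been set, the LargestPossibleRegion is used.
    void SetRegion( const RegionType & region )
      {
      m_Region = region;
      m_RegionIsSet = true;
      }

  protected:
    RegionType m_Region;
    bool       m_RegionIsSet;
  };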

As for the second part of your proposal, a generic
filter could be a better way to go, in particular
after following the discussion on the BloxImages.

A single filter could take care of all the types of
metrics. What is unclear to me from your message is
this point: in principle the metric does not vary
over the region of the image but over the space of
parameters of the transform.

For example, given two 2D images A and B, and a rigid
transform in 2D, we could explore the values of the
metric for a range of translations and rotations.

For all of them, the computation will be limited to
the region provided by the user (in terms of the 
target image).

The difficult point will then be to define a way
of specifying how to sample the space of parameters
of the transform. For example: sample rotations
from 0 to 30 degrees every 2 degrees, sample
translations in X from -10 to 10 millimeters every
1 millimeter, and sample translations in Y from -5
to 15 millimeters every 5 millimeters.
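
One simple way of expressing such a specification could be a
small range description per transform parameter. The struct
below is just a sketch (its name and fields are hypothetical),
filled with the values of the example above:

  #include <cmath>

  // Hypothetical description of how to sample one parameter
  // of the transform.
  struct ParameterRange
  {
    double start;  // first value to evaluate
    double stop;   // last value to evaluate
    double step;   // increment between consecutive samples

    // Number of samples implied by the range (both ends included).
    unsigned long NumberOfSamples() const
      {
      return static_cast<unsigned long>(
               std::floor( ( stop - start ) / step ) ) + 1;
      }
  };

  // The example above: rotations 0..30 degrees every 2 degrees,
  // X translations -10..10 mm every 1 mm, and Y translations
  // -5..15 mm every 5 mm.
  const ParameterRange rotationZ    = {   0.0, 30.0, 2.0 };  // 16 samples
  const ParameterRange translationX = { -10.0, 10.0, 1.0 };  // 21 samples
  const ParameterRange translationY = {  -5.0, 15.0, 5.0 };  //  5 samples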

Given that the rigid 2D transform has three parameters,
the filter that explores this space will generate as
output a 3D image in which each pixel position is
associated with a particular combination of parameters
of the transform, while the pixel value is the result
of evaluating the metric with that transform.

In some way, that will be the equivalent of a registration
method that works by "exhaustive" search. This is always
a good thing to try before attacking the problem with an 
optimizer that is supposed to find the best path in such
a space.

The sampling of the parameter space could be performed by
an ImageIterator aware of the interpretation of its position
as a point in the parameter space of the transform.
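
A rough sketch of that idea follows (written against the
itk::Image and ImageRegionIteratorWithIndex interface as I
understand it; EvaluateMetric() is only a placeholder for
whatever the metric/transform pair actually computes over the
user-provided region of the target):

  #include "itkImage.h"
  #include "itkImageRegionIteratorWithIndex.h"

  // Placeholder: configure the rigid 2D transform with
  // (tx, ty, angle), then evaluate the metric between the
  // two 2D images over the user-provided region.
  double EvaluateMetric( double tx, double ty, double angle );

  typedef itk::Image< double, 3 > ParameterSpaceImageType;

  void FillParameterSpace( ParameterSpaceImageType * output )
  {
    typedef itk::ImageRegionIteratorWithIndex<
                            ParameterSpaceImageType > IteratorType;

    const ParameterSpaceImageType::SpacingType & spacing =
                                              output->GetSpacing();
    const ParameterSpaceImageType::PointType & origin =
                                              output->GetOrigin();

    IteratorType it( output, output->GetRequestedRegion() );
    for ( it.GoToBegin(); !it.IsAtEnd(); ++it )
      {
      const ParameterSpaceImageType::IndexType index = it.GetIndex();

      // Interpret the pixel position as a point in the parameter
      // space: X -> translation in X, Y -> translation in Y,
      // Z -> rotation around Z.
      const double tx    = origin[0] + index[0] * spacing[0];
      const double ty    = origin[1] + index[1] * spacing[1];
      const double angle = origin[2] + index[2] * spacing[2];

      it.Set( EvaluateMetric( tx, ty, angle ) );
      }
  }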


The filter will then have (for this example)

Input1 = Image2D
Input2 = Image2D
Input3 = Metric
Input4 = Transform

Output = Image3D on which
         X direction = translation in X
         Y direction = translation in Y
         Z direction = rotation around Z

The  "RequestedRegion" of the output image along with its
spacing values will completly define the way of sampling
the parameter space.
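
For the sampling of the example above, that setup could look
like the following sketch (itk::Image interface assumed; the
size, spacing and origin are just the ranges translated into
image terms):

  #include "itkImage.h"

  typedef itk::Image< double, 3 > ParameterSpaceImageType;

  // Encode the sampling of the parameter space in the geometry
  // of the output image:
  //   X: translation in X, -10..10 mm every 1 mm  -> 21 samples
  //   Y: translation in Y,  -5..15 mm every 5 mm  ->  5 samples
  //   Z: rotation around Z,  0..30 deg every 2 deg -> 16 samples
  ParameterSpaceImageType::Pointer CreateParameterSpaceImage()
  {
    ParameterSpaceImageType::Pointer image =
                                  ParameterSpaceImageType::New();

    ParameterSpaceImageType::SizeType size = { { 21, 5, 16 } };

    ParameterSpaceImageType::RegionType region;
    region.SetSize( size );
    image->SetRegions( region );

    double spacing[3] = {   1.0,  5.0, 2.0 };  // step per parameter
    double origin[3]  = { -10.0, -5.0, 0.0 };  // first value per parameter
    image->SetSpacing( spacing );
    image->SetOrigin( origin );

    image->Allocate();

    return image;
  }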




   Luis