ITK/ImageRegistration

Image Registration Process

  • Declare types
  • Instantiate objects via New()
  • Connect components and images to the ImageRegistrationMethod using Set methods
  • Set initial transform parameters
  • Setup each component
    • optimization parameters (e.g., for the RegularStepGradientDescent optimizer: number of iterations, minimum and maximum step lengths, transform parameter scales)
    • Metric parameters (e.g., for Mutual Information: number of samples, ...)
  • Connect up observers
  • StartRegistration()
    • Inside a try/catch block
  • Get the last transform parameters
  • Create the registered image using the ResampleImageFilter

Based on: Lydia Ng, "Overview: ITK Registration Methods," SPIE 2005: Medical Image Registration and Segmentation with ITK, February 12, 2005
File name: SPIE2005-RegistrationMethodsOverview.ppt
Available at: http://www.itk.org/cgi-bin/cvsweb.cgi/CourseWare/Training/?root=InsightDocuments
Last accessed on: January 06, 2006
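
The steps above can be sketched as follows using the classic (ITK 3.x) API. The choice of a mean-squares metric, a translation transform, the optimizer settings, and the command-line arguments are illustrative assumptions, not prescriptions from the course material.

#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkImageRegistrationMethod.h"
#include "itkMeanSquaresImageToImageMetric.h"
#include "itkRegularStepGradientDescentOptimizer.h"
#include "itkTranslationTransform.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkResampleImageFilter.h"

int main( int argc, char * argv[] )
{
  if( argc < 4 )
    {
    std::cerr << argv[0] << "  fixedImage  movingImage  outputImage" << std::endl;
    return 1;
    }

  // Declare types
  const unsigned int Dimension = 2;
  typedef unsigned char PixelType;
  typedef itk::Image< PixelType, Dimension >                           ImageType;
  typedef itk::ImageRegistrationMethod< ImageType, ImageType >         RegistrationType;
  typedef itk::MeanSquaresImageToImageMetric< ImageType, ImageType >   MetricType;
  typedef itk::RegularStepGradientDescentOptimizer                     OptimizerType;
  typedef itk::TranslationTransform< double, Dimension >               TransformType;
  typedef itk::LinearInterpolateImageFunction< ImageType, double >     InterpolatorType;

  // Instantiate objects via New()
  RegistrationType::Pointer registration = RegistrationType::New();
  MetricType::Pointer       metric       = MetricType::New();
  OptimizerType::Pointer    optimizer    = OptimizerType::New();
  TransformType::Pointer    transform    = TransformType::New();
  InterpolatorType::Pointer interpolator = InterpolatorType::New();

  typedef itk::ImageFileReader< ImageType > ReaderType;
  ReaderType::Pointer fixedReader  = ReaderType::New();
  ReaderType::Pointer movingReader = ReaderType::New();
  fixedReader->SetFileName( argv[1] );
  movingReader->SetFileName( argv[2] );
  fixedReader->Update();
  movingReader->Update();

  // Connect components and images to the ImageRegistrationMethod
  registration->SetMetric( metric );
  registration->SetOptimizer( optimizer );
  registration->SetTransform( transform );
  registration->SetInterpolator( interpolator );
  registration->SetFixedImage( fixedReader->GetOutput() );
  registration->SetMovingImage( movingReader->GetOutput() );
  registration->SetFixedImageRegion( fixedReader->GetOutput()->GetBufferedRegion() );

  // Set initial transform parameters and set up each component
  transform->SetIdentity();
  registration->SetInitialTransformParameters( transform->GetParameters() );
  optimizer->SetMaximumStepLength( 4.0 );   // illustrative values
  optimizer->SetMinimumStepLength( 0.01 );
  optimizer->SetNumberOfIterations( 200 );
  // Observers could be attached to the optimizer here (see the step-length discussion below).

  // StartRegistration() inside a try/catch block
  try
    {
    registration->StartRegistration();
    }
  catch( itk::ExceptionObject & excep )
    {
    std::cerr << excep << std::endl;
    return 1;
    }

  // Get the last transform parameters and resample the moving image
  transform->SetParameters( registration->GetLastTransformParameters() );

  typedef itk::ResampleImageFilter< ImageType, ImageType > ResampleType;
  ResampleType::Pointer resampler = ResampleType::New();
  resampler->SetInput( movingReader->GetOutput() );
  resampler->SetTransform( transform );
  resampler->SetSize( fixedReader->GetOutput()->GetLargestPossibleRegion().GetSize() );
  resampler->SetOutputOrigin( fixedReader->GetOutput()->GetOrigin() );
  resampler->SetOutputSpacing( fixedReader->GetOutput()->GetSpacing() );
  resampler->SetDefaultPixelValue( 0 );

  typedef itk::ImageFileWriter< ImageType > WriterType;
  WriterType::Pointer writer = WriterType::New();
  writer->SetFileName( argv[3] );
  writer->SetInput( resampler->GetOutput() );
  writer->Update();

  return 0;
}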

Image Registration Components

Image Similarity Metrics


Transforms

How centered transforms work:

p' = R (p - C) + C + T

where p' is the transformed point, p the original point, C the center of rotation, T the translation, and R the rotation matrix.

This can be rewritten as

p' = R p + Offset,    where Offset = -R C + C + T

It is clear that for any transform with any set of parameters, it is always possible to find an equivalent transform satisfying

p' = R (p - 0) + T',    where the center is 0 and the translation T' takes the value of the offset above.

This is how transforms operate by default in ITK.

http://public.kitware.com/pipermail/insight-users/2005-July/013868.html
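
As a small illustration (assumed here, not taken from the mailing-list thread), the relation Offset = -RC + C + T can be checked with an itk::Euler2DTransform; the angle, center, and translation values are arbitrary.

#include "itkEuler2DTransform.h"
#include <iostream>

int main()
{
  typedef itk::Euler2DTransform< double > TransformType;
  TransformType::Pointer transform = TransformType::New();

  TransformType::InputPointType center;
  center[0] = 10.0;  center[1] = 20.0;

  TransformType::OutputVectorType translation;
  translation[0] = 5.0;  translation[1] = -3.0;

  transform->SetAngle( 0.5 );               // rotation R, in radians
  transform->SetCenter( center );           // C
  transform->SetTranslation( translation ); // T

  // ITK stores the equivalent (matrix, offset) pair internally:
  // p' = R p + Offset, with Offset = -R C + C + T
  std::cout << "Offset kept by ITK: " << transform->GetOffset() << std::endl;

  // Both forms give the same result for an arbitrary point p.
  TransformType::InputPointType p;
  p[0] = 1.0;  p[1] = 2.0;
  std::cout << "Transformed point:  " << transform->TransformPoint( p ) << std::endl;

  return 0;
}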


The GetInverse method changes both the matrix and the offset, since it has to compute the inverse matrix.

The correct way to invert a transformation while retaining the option of defining your own center is to do it in the following order:

transform2->SetCenter( transform1->GetCenter() );
transform1->GetInverse( transform2 );

transform2 will now contain the inverse of transform1 and will keep transform1's center. Reversing the order of the two statements will produce an incorrect transform.

http://public.kitware.com/pipermail/insight-users/2005-July/013866.html
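
A compact sketch of that ordering (an assumed example; the itk::AffineTransform type and parameter values are chosen here only for illustration):

#include "itkAffineTransform.h"
#include <iostream>

int main()
{
  typedef itk::AffineTransform< double, 2 > TransformType;
  TransformType::Pointer transform1 = TransformType::New();
  TransformType::Pointer transform2 = TransformType::New();

  TransformType::InputPointType center;
  center[0] = 10.0;  center[1] = 20.0;
  transform1->SetCenter( center );
  transform1->Rotate2D( 0.3 );

  // Copy the center first, then compute the inverse into transform2.
  transform2->SetCenter( transform1->GetCenter() );
  transform1->GetInverse( transform2 );

  // A point mapped forward and then through the inverse comes back unchanged.
  TransformType::InputPointType p;
  p[0] = 1.0;  p[1] = 2.0;
  std::cout << transform2->TransformPoint( transform1->TransformPoint( p ) ) << std::endl;

  return 0;
}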

Optimizers

Interpolators

  • itk::NearestNeighborInterpolateImageFunction (http://www.itk.org/Doxygen/html/classitk_1_1NearestNeighborInterpolateImageFunction.html)
    • Assumes the image is piecewise constant
  • itk::LinearInterpolateImageFunction (http://www.itk.org/Doxygen/html/classitk_1_1LinearInterpolateImageFunction.html)
    • Assumes the image is piecewise linear
  • itk::BSplineInterpolateImageFunction (http://www.itk.org/Doxygen/html/classitk_1_1BSplineInterpolateImageFunction.html)
    • The underlying image is represented using B-spline basis functions
    • On connection, an image of B-spline coefficients is computed
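
For instance (an assumed snippet, not part of the course slides), a B-spline interpolator can be created and handed to a registration method as follows; the spline order of 3 is an illustrative choice:

#include "itkImage.h"
#include "itkBSplineInterpolateImageFunction.h"

int main()
{
  typedef itk::Image< float, 2 >                                     ImageType;
  typedef itk::BSplineInterpolateImageFunction< ImageType, double >  InterpolatorType;

  InterpolatorType::Pointer interpolator = InterpolatorType::New();
  interpolator->SetSplineOrder( 3 );   // cubic B-spline basis functions
  // The interpolator would then be passed to the registration method with
  // registration->SetInterpolator( interpolator );
  return 0;
}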

From: Lydia Ng, "Overview: ITK Registration Methods," SPIE 2005: Medical Image Registration and Segmentation with ITK, February 12, 2005
File name: SPIE2005-RegistrationMethodsOverview.ppt
Available at: http://www.itk.org/cgi-bin/cvsweb.cgi/CourseWare/Training/?root=InsightDocuments
Last accessed on: January 06, 2006

Parameter Selection

Optimizers

"[...] In order to get some insight on how to tune the parameters for your optimization, probably the best thing to do is to study the characteristics of the Metric when computed over your images. Please find attached a small program that allows you to explore the values of the metric using a translation transform. You want to plot these values in order to get a feeling on the level of noise of the Metric, the presence of local minima, and the overall reproducibility of the Metric function itself. (you will have to convert the dimension to 3D, since the example was configured for 2D)

Please run this program first using the *same* input image as fixed and moving images, and let the translation be evaluated for a range that is close to 1/2 of the image extent (physical extent in millimeters).

Plot the values using your favorite plotting program. (anything from GNUPlot to Excel).

For examples of how these plots will look, please take a look at the Image Registration course:

    http://www.cs.rpi.edu/courses/spring04/imagereg/

In particular, look at lectures 8 and 9.

[...] It is not productive to fight with the registration + optimization until you find a way of generating relatively smooth Metric plots. Note that this is just an exercise in learning how to tune the parameters. Once you figure out the parameters, you will not need to plot the Metric anymore.

Make sure that origin and spacing are correctly set in your images before you start computing all these metric plots."


#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkMattesMutualInformationImageToImageMetric.h"
#include "itkTranslationTransform.h"
#include "itkLinearInterpolateImageFunction.h"


int main( int argc, char * argv[] )
{
  if( argc < 3 )
    {
    std::cerr << "Usage: " << std::endl;
    std::cerr << argv[0] << "  fixedImage  movingImage" << std::endl;
    return 1;
    }

  const     unsigned int   Dimension = 2;
  typedef   unsigned char  PixelType;

  typedef itk::Image< PixelType, Dimension >   ImageType;


  typedef itk::ImageFileReader< ImageType >  ReaderType;

  ReaderType::Pointer fixedReader  = ReaderType::New();
  ReaderType::Pointer movingReader = ReaderType::New();

  fixedReader->SetFileName(  argv[ 1 ] );
  movingReader->SetFileName( argv[ 2 ] );

  try 
    {
    fixedReader->Update();
    movingReader->Update();
    }
  catch( itk::ExceptionObject & excep )
    {
    std::cerr << "Exception caught!" << std::endl;
    std::cerr << excep << std::endl;
    return -1;
    }


  typedef itk::MattesMutualInformationImageToImageMetric< ImageType, ImageType >  MetricType;

  MetricType::Pointer metric = MetricType::New();



  typedef itk::TranslationTransform< double, Dimension >  TransformType;

  TransformType::Pointer transform = TransformType::New();



  typedef itk::LinearInterpolateImageFunction< 
                                 ImageType, double >  InterpolatorType;

  InterpolatorType::Pointer interpolator = InterpolatorType::New();


  metric->SetInterpolator( interpolator );
  metric->SetTransform( transform );

  metric->SetNumberOfHistogramBins( 20 );
  metric->SetNumberOfSpatialSamples( 10000 );

  transform->SetIdentity();

  ImageType::ConstPointer fixedImage  = fixedReader->GetOutput();
  ImageType::ConstPointer movingImage = movingReader->GetOutput();

  metric->SetFixedImage(  fixedImage  );
  metric->SetMovingImage( movingImage );

  metric->SetFixedImageRegion(  fixedImage->GetBufferedRegion()  );

  try 
    {
    metric->Initialize();
    }
  catch( itk::ExceptionObject & excep )
    {
    std::cerr << "Exception catched !" << std::endl;
    std::cerr << excep << std::endl;
    return -1;
    }


  MetricType::TransformParametersType displacement( Dimension );

  int rangex = 50;
  int rangey = 50;
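  // Evaluate the metric over a grid of integer translations and print
  // "dx  dy  value" triplets that can be plotted with gnuplot or Excel.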

  for( int dx = -rangex; dx <= rangex; dx++ )
    {
    for( int dy = -rangey; dy <= rangey; dy++ )
      {
      displacement[0] = dx;
      displacement[1] = dy;
      const double value = metric->GetValue( displacement );

      std::cout << dx << "   "  << dy << "   " << value << std::endl;
      }
    }

  std::cout << std::endl;


  return 0;
}


http://public.kitware.com/pipermail/insight-users/2004-April/008043.html



Optimizer->SetScales()


The rule of thumb is to figure out how much each of the transform parameters is expected to change during your registration, and then rescale that range to [-1:1].


In the case that you know the anticipated range of translations and rotations,


"[...]if you are doing 2D rigid you will have a 2D transform with three parameters:

     Tx  translation in millimeters along X
     Ty  translation in millimeters along Y
     R   rotation in radians

and you anticipate that your images need a correction of the order of 10 to 50 millimeters in translation and 0.01 to 0.1 radians in rotation, then you should put scales:

     scale[0] = 1/50;     scale for Tx
     scale[1] = 1/50;     scale for Ty
     scale[2] = 1/0.1;    scale for Rotation

Of course, those will be just "good values to start with". You will still need to refine them according to the behavior of the optimizer."

http://public.kitware.com/pipermail/insight-users/2005-April/012896.html
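
A sketch (assumed, not part of the quoted message) of applying these scales with itk::RegularStepGradientDescentOptimizer and an itk::Euler2DTransform. Note that in ITK the rigid 2D parameters are ordered [angle, Tx, Ty], so the rotation scale goes in the first slot:

#include "itkEuler2DTransform.h"
#include "itkRegularStepGradientDescentOptimizer.h"

int main()
{
  typedef itk::Euler2DTransform< double >          TransformType;
  typedef itk::RegularStepGradientDescentOptimizer OptimizerType;

  TransformType::Pointer transform = TransformType::New();
  OptimizerType::Pointer optimizer = OptimizerType::New();

  OptimizerType::ScalesType scales( transform->GetNumberOfParameters() );
  scales[0] = 1.0 / 0.1;   // rotation, anticipated correction ~0.1 radians
  scales[1] = 1.0 / 50.0;  // Tx, anticipated correction ~50 mm
  scales[2] = 1.0 / 50.0;  // Ty, anticipated correction ~50 mm

  optimizer->SetScales( scales );
  return 0;
}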


In the case that you do not know the anticipated range of translations and rotations,


"[...]the recommendation for the scaling of translation parameters versus rotation parameter is to use a factor proportional to the diagonal length of the image.

For your case, you have 100 pixels with 1 mm / pixel, therefore the physical extent of your image is

       100mm  X  100mm  X 100mm

The diagonal of the image bounding box is

         sqrt(3) * 100 mm

which is about

             173.2

An extra factor of 10X is usually useful, so you should probably try a factor of

    1.0 / ( 10 x 173.2 )  =  1.0 / 1732.0

You could use this same factor for the three components of the translation, or you could estimate an independent factor for each component in the way it is done in the VolView plugin.

Note that these factors are not expected to be computed precisely. Their purpose is simply to bring the rotational and translational parameters to a similar numerical scale.

By default, they are quite disproportionate: rotations are in radians, therefore in a range of about -1:1, while translations are in millimeters, and for an image of 100mm you can probably expect translations as large as 50mm."

http://public.kitware.com/pipermail/insight-users/2004-July/009558.html
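
The computation above can be written out as follows (an assumed sketch; the 100 x 100 x 100 image at 1 mm spacing is the example from the quoted message):

#include "itkImage.h"
#include <cmath>
#include <iostream>

int main()
{
  const unsigned int Dimension = 3;
  typedef itk::Image< short, Dimension > ImageType;

  // Stand-in image: 100 x 100 x 100 voxels at 1 mm spacing.
  ImageType::Pointer image = ImageType::New();
  ImageType::RegionType region;
  ImageType::SizeType size;
  size.Fill( 100 );
  region.SetSize( size );
  image->SetRegions( region );
  ImageType::SpacingType spacing;
  spacing.Fill( 1.0 );
  image->SetSpacing( spacing );

  // Physical diagonal of the bounding box: sqrt( sum_i (size_i * spacing_i)^2 ).
  double diagonal2 = 0.0;
  for( unsigned int i = 0; i < Dimension; ++i )
    {
    const double extent = size[i] * spacing[i];
    diagonal2 += extent * extent;
    }
  const double diagonal = std::sqrt( diagonal2 );   // ~173.2 mm here

  // Extra factor of 10 as suggested above.
  const double translationScale = 1.0 / ( 10.0 * diagonal );
  std::cout << "translation scale = " << translationScale << std::endl;

  return 0;
}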


In short,


"[...]for an 3D AffineTransform, you get 12 parameters: the first 9 are the coefficients of the matrix (representing rotation, scale and shearing) the last 3 are the components of a translation vector. You want then to provide an array of 12 values with the first 9 being =1.0 and the last three being on the range of 1.0 / the image size (in millimeters)."

http://public.kitware.com/pipermail/insight-users/2002-October/001400.html
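
A sketch (assumed) of that scale assignment for a 3D AffineTransform; the 100 mm image extent is an illustrative value:

#include "itkAffineTransform.h"
#include "itkRegularStepGradientDescentOptimizer.h"

int main()
{
  typedef itk::AffineTransform< double, 3 >        TransformType;
  typedef itk::RegularStepGradientDescentOptimizer OptimizerType;

  TransformType::Pointer transform = TransformType::New();
  OptimizerType::Pointer optimizer = OptimizerType::New();

  const double imageExtentInMM = 100.0;   // illustrative value

  OptimizerType::ScalesType scales( transform->GetNumberOfParameters() ); // 12
  scales.Fill( 1.0 );                     // 9 matrix coefficients
  scales[9]  = 1.0 / imageExtentInMM;     // translation X
  scales[10] = 1.0 / imageExtentInMM;     // translation Y
  scales[11] = 1.0 / imageExtentInMM;     // translation Z
  optimizer->SetScales( scales );

  return 0;
}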


itk::RegularStepGradientDescentOptimizer->SetMaximumStepLength()

itk::RegularStepGradientDescentOptimizer->SetMinimumStepLength()


"[...]There is no magic recipe for selecting one. You probably want to start experimenting with a small value (e.g. 0.01) and plot the metric evaluations during the registration process. If you observe that the metric values are fairly monotonic, that means that you can safely increment the step length. Such an increment has the advantage of reducing the time required to reach an extrema of the cost function (the image metric in this case). You could restart the registration with larger values of the step length, as long as you don't observe a noisy and/or erratic behavior on the Metric values.

Step length issues are discussed in the course material from the "Image Registration Techniques" course at RPI.

  http://www.cs.rpi.edu/courses/spring04/imagereg/

for example in lecture 9:

  http://www.cs.rpi.edu/courses/spring04/imagereg/lecture09.ppt"


http://public.kitware.com/pipermail/insight-users/2004-July/009558.html
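
A sketch (assumed, not from the quoted message) of setting the step lengths and attaching an observer that prints the metric value at each iteration, so the behavior described above can be plotted:

#include "itkRegularStepGradientDescentOptimizer.h"
#include "itkCommand.h"
#include <iostream>

class IterationPrinter : public itk::Command
{
public:
  typedef IterationPrinter          Self;
  typedef itk::Command              Superclass;
  typedef itk::SmartPointer< Self > Pointer;
  itkNewMacro( Self );

  typedef itk::RegularStepGradientDescentOptimizer OptimizerType;

  void Execute( itk::Object * caller, const itk::EventObject & event )
    {
    Execute( (const itk::Object *) caller, event );
    }

  void Execute( const itk::Object * object, const itk::EventObject & event )
    {
    const OptimizerType * optimizer = dynamic_cast< const OptimizerType * >( object );
    if( optimizer && itk::IterationEvent().CheckEvent( &event ) )
      {
      // Print "iteration  metricValue" pairs for later plotting.
      std::cout << optimizer->GetCurrentIteration() << "  "
                << optimizer->GetValue() << std::endl;
      }
    }

protected:
  IterationPrinter() {}
};

int main()
{
  typedef itk::RegularStepGradientDescentOptimizer OptimizerType;
  OptimizerType::Pointer optimizer = OptimizerType::New();

  optimizer->SetMaximumStepLength( 0.01 );   // conservative starting value
  optimizer->SetMinimumStepLength( 0.001 );  // convergence threshold
  optimizer->SetNumberOfIterations( 200 );
  optimizer->AddObserver( itk::IterationEvent(), IterationPrinter::New() );

  // The optimizer would then be connected to an ImageRegistrationMethod.
  // If the printed metric values stay fairly monotonic, the maximum step
  // length can safely be increased on the next run.
  return 0;
}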



Use Cases

MultiResMIRegistration Application

"When I first implemented the Viola and Wells, the results I got was pretty pathetic as well. I[t] took me several months to finally figure out how to make it work. The main lessons I learnt are:

  • you need to do the registration in a multiresolution way
  • you got to use lots of iterations (order of thousands per level)
  • you got to normalize the image in some way otherwise you will need to reselect the standard deviation (parzen window) width for each and every image you want to register
  • you can't set your learning rate too high or you will quickly walk out of your capture region
  • you need to scale between rotation parameters and translation parameters

I have worked out for myself a set of heuristics [...]

  • start the multiresolution so that coarsest level is approx isotropic
  • don't downsample too much (say about 64x64 in-plane)
  • normalize the image to mean zero, std of 1
  • use stddev of approx 0.4
  • use 50-80 sample points (using more is too costly)
  • set translation scale to approx the size of image in mm
  • set learning rate conservatively say 1e-3

I used these "rules" as a start and then refined. Setting these parameters is an art and will depend on the images and modality.

My heuristic for the translation scale comes from asking: if we rotate about the center by x degrees, how does that translate to motion at the edge of the image?"

http://www.itk.org/pipermail/insight-users/2002-April/000276.html
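
The heuristics above can be wired together roughly as follows. This is an assumed sketch, not the MultiResMIRegistration source: it uses itk::NormalizeImageFilter for the zero-mean/unit-variance normalization, the Viola-Wells itk::MutualInformationImageToImageMetric with a 0.4 standard deviation and ~60 spatial samples, a conservative 1e-3 learning rate, and a three-level pyramid; readers, the fixed image region, and StartRegistration() are omitted.

#include "itkImage.h"
#include "itkNormalizeImageFilter.h"
#include "itkMultiResolutionImageRegistrationMethod.h"
#include "itkMultiResolutionPyramidImageFilter.h"
#include "itkMutualInformationImageToImageMetric.h"
#include "itkGradientDescentOptimizer.h"
#include "itkTranslationTransform.h"
#include "itkLinearInterpolateImageFunction.h"

int main()
{
  const unsigned int Dimension = 3;
  typedef itk::Image< short, Dimension >  InputImageType;
  typedef itk::Image< float, Dimension >  InternalImageType;

  // Normalize the inputs to mean 0, standard deviation 1.
  typedef itk::NormalizeImageFilter< InputImageType, InternalImageType > NormalizeType;
  NormalizeType::Pointer fixedNormalizer  = NormalizeType::New();
  NormalizeType::Pointer movingNormalizer = NormalizeType::New();
  // fixedNormalizer->SetInput( ... );  movingNormalizer->SetInput( ... );

  typedef itk::MultiResolutionImageRegistrationMethod<
                 InternalImageType, InternalImageType >           RegistrationType;
  typedef itk::MutualInformationImageToImageMetric<
                 InternalImageType, InternalImageType >           MetricType;
  typedef itk::GradientDescentOptimizer                           OptimizerType;
  typedef itk::TranslationTransform< double, Dimension >          TransformType;
  typedef itk::LinearInterpolateImageFunction<
                 InternalImageType, double >                      InterpolatorType;
  typedef itk::MultiResolutionPyramidImageFilter<
                 InternalImageType, InternalImageType >           PyramidType;

  RegistrationType::Pointer registration = RegistrationType::New();
  MetricType::Pointer       metric       = MetricType::New();
  OptimizerType::Pointer    optimizer    = OptimizerType::New();
  TransformType::Pointer    transform    = TransformType::New();
  InterpolatorType::Pointer interpolator = InterpolatorType::New();
  PyramidType::Pointer      fixedPyramid  = PyramidType::New();
  PyramidType::Pointer      movingPyramid = PyramidType::New();

  // Parzen window width of ~0.4 works for images normalized as above.
  metric->SetFixedImageStandardDeviation( 0.4 );
  metric->SetMovingImageStandardDeviation( 0.4 );
  metric->SetNumberOfSpatialSamples( 60 );       // 50-80 sample points

  optimizer->SetLearningRate( 1e-3 );            // conservative learning rate
  optimizer->SetNumberOfIterations( 2000 );      // thousands of iterations per level
  optimizer->MaximizeOn();                       // Viola-Wells MI is maximized

  registration->SetMetric( metric );
  registration->SetOptimizer( optimizer );
  registration->SetTransform( transform );
  registration->SetInterpolator( interpolator );
  registration->SetFixedImagePyramid( fixedPyramid );
  registration->SetMovingImagePyramid( movingPyramid );
  registration->SetNumberOfLevels( 3 );          // coarsest level roughly isotropic
  registration->SetFixedImage( fixedNormalizer->GetOutput() );
  registration->SetMovingImage( movingNormalizer->GetOutput() );
  registration->SetInitialTransformParameters( transform->GetParameters() );

  return 0;
}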


CT-MRI Brain Registration

PET-CT Registration


