[Insight-users] mapping points from moving image to fixed image
Luis Ibanez
luis.ibanez at kitware.com
Fri Oct 16 11:23:44 EDT 2009
Hi John,
That sounds like a reasonable approach.
Regarding your question about multi-threading the computation
of the inverse: Yes, it shouldn't be too hard to convert this filter.
If you look at the GenerateData() method in
Insight/Code/BasicFilters/itkInverseDeformationFieldImageFilter.txx
lines 219-281.
It has an initial call to:
this->PrepareKernelBaseSpline();
followed by a main loop visiting all the pixels of the output
image and computing their corresponding deformation vectors.
The main loop can easily be split across multiple threads and moved
to a ThreadedGenerateData() method.
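As a rough, untested sketch (namespace boilerplate omitted; the exact
region and threadId types should be checked against the filter's header),
the refactoring would look something like this:

  // Runs once, in a single thread, before the worker threads start:
  // the single-threaded setup stays here.
  template< class TInputImage, class TOutputImage >
  void
  InverseDeformationFieldImageFilter< TInputImage, TOutputImage >
  ::BeforeThreadedGenerateData()
  {
    this->PrepareKernelBaseSpline();
  }

  // Called once per thread, each thread getting its own piece of the
  // output region; the existing per-pixel loop moves here unchanged.
  template< class TInputImage, class TOutputImage >
  void
  InverseDeformationFieldImageFilter< TInputImage, TOutputImage >
  ::ThreadedGenerateData( const OutputImageRegionType & outputRegionForThread,
                          int threadId )
  {
    itk::ImageRegionIteratorWithIndex< TOutputImage >
      outIt( this->GetOutput(), outputRegionForThread );
    for( outIt.GoToBegin(); !outIt.IsAtEnd(); ++outIt )
      {
      // ... same deformation-vector computation as in the current
      //     main loop of GenerateData() ...
      }
    (void) threadId; // only needed for progress reporting
  }
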
The PrepareKernelBaseSpline() method has a main loop in which
points are used as landmarks, followed by a heavy computation
in m_KernelTransform->ComputeWMatrix(), whose core is the
solution of a linear system...
Whether that solution can be multi-threaded requires more
investigation...
I would suggest starting by profiling the filter with a run-time
profiler, or by simply inserting itkTimeProbes into it.
Once we have timings for the filter, we can evaluate where it is
worth investing effort to parallelize the code.
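For instance, a minimal sketch of the itkTimeProbes approach (probe names
are made up here, and itkTimeProbe.h has to be included), placed inside
GenerateData(), would be:

  itk::TimeProbe kernelProbe;
  itk::TimeProbe loopProbe;

  kernelProbe.Start();
  this->PrepareKernelBaseSpline();
  kernelProbe.Stop();

  loopProbe.Start();
  // ... existing main loop over the output pixels ...
  loopProbe.Stop();

  std::cout << "PrepareKernelBaseSpline : "
            << kernelProbe.GetMeanTime() << " seconds" << std::endl;
  std::cout << "Main pixel loop         : "
            << loopProbe.GetMeanTime() << " seconds" << std::endl;

That breakdown will tell us whether the pixel loop or the W-matrix
computation dominates, and therefore where multi-threading pays off.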
Please let us know if you would like to pursue this initiative,
Thanks
Luis
-----------------------------------------------------------
On Thu, Oct 15, 2009 at 6:13 PM, John Drozd <john.drozd at gmail.com> wrote:
> Hi,
>
> Rather than calculating the deformation and inverse deformation field, I
> decided to deform the subject into the atlas, and that way I ended up with a
> transform (and deformation field) from the atlas to the subject, in which
> points can be transformed easily.
>
> john
>
> On Thu, Oct 15, 2009 at 1:01 PM, John Drozd <john.drozd at gmail.com> wrote:
>>
>> Hi Luis,
>>
>> I made some small corrections in my email below in red. Also this time I
>> remembered to carbon copy the insight users mailing list.
>>
>> thanks,
>> john
>>
>> On Thu, Oct 15, 2009 at 10:52 AM, John Drozd <john.drozd at gmail.com> wrote:
>>>
>>> Hello Luis,
>>>
>>> I used itkInverseDeformationFieldImageFilter.h to calculate an inverse
>>> deformation field from the moving subject to the fixed subject. I tried to
>>> read the deformation field back in from vtk format, but reading the
>>> deformationField.vtk file took forever, so instead I appended the inverse
>>> deformation field calculation to the end of the original deformable
>>> registration code that first calculated the deformation field (from the
>>> fixed subject to the moving subject). I was wondering about setting the
>>> index of the region. I arbitrarily set it to 0,0,0 as done in the testing code
>>> Insight/Testing/Code/BasicFilters/itkInverseDeformationFieldImageFilterTest.cxx
>>> Does the choice of the index make a difference? Also, it took almost 1
>>> hour to calculate the inverse deformation field. I was wondering if there is
>>> a way to optimize the code so that it can run on the 8 processors of my
>>> computer. I noticed that the registration procedure used all 8 processors.
>>>
>>> To validate my code, I would like to use the calculated inverse deformation
>>> field to map the physical coordinates of points from the moving subject to
>>> physical coordinates in the fixed subject, and then back to the moving
>>> subject, to check that I recover the original point locations. Is there any
>>> example code that would show me how to do this? I am currently looking over
>>> ThinPlateSplineWarp.cxx, which transforms some points.
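>>>
>>> Roughly, what I have in mind is something like the following untested
>>> sketch, where inverseField would be the output of the inverse filter in
>>> the code below, and movingPoint is a TransformType::InputPointType in the
>>> moving (atlas) image:
>>>
>>>   // look up the displacement stored in the inverse field at the moving point
>>>   DeformationFieldType::IndexType index;
>>>   inverseField->TransformPhysicalPointToIndex( movingPoint, index );
>>>   VectorType displacement = inverseField->GetPixel( index );
>>>
>>>   // apply it to obtain the corresponding point in the fixed image
>>>   TransformType::InputPointType fixedPoint;
>>>   for( unsigned int i = 0; i < ImageDimension; i++ )
>>>     {
>>>     fixedPoint[i] = movingPoint[i] + displacement[i];
>>>     }
>>>
>>>   // map it back with the forward BSpline transform and compare
>>>   TransformType::OutputPointType roundTrip = transform->TransformPoint( fixedPoint );
>>>   std::cout << "round-trip error = "
>>>             << roundTrip.EuclideanDistanceTo( movingPoint ) << std::endl;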
>>>
>>> Eventually, I want to use the moving subject as an atlas and use the
>>> inverse deformation field to translate physical coordinates of points from
>>> ventricles in the moving subject atlas to physical coordinates of points of
>>> ventricles in the fixed subject, and then use these as seeds in the fixed
>>> subject in a region growing algorithm to segment the ventricles and
>>> calculate the volumes of the ventricles in the fixed subject. This way I can
>>> have an automated segmentation code.
>>>
>>> thank you
>>> john
>>>
>>> Below is my code
>>>
>>> #if defined(_MSC_VER)
>>> #pragma warning ( disable : 4786 )
>>> #endif
>>>
>>>
>>> /*
>>> ./DeformableRegistration8andinverse correctedsubject1.dcm
>>> correctedsubject2.dcm outputImage.mhd differenceOutput.mhd
>>> differenceBeforeRegistration.mhd deformationField.vtk
>>> inversedeformationField.vtk
>>> */
>>>
>>>
>>> // Software Guide : BeginLatex
>>> //
>>> // This example illustrates the use of the \doxygen{BSplineDeformableTransform}
>>> // class for performing registration of two $3D$ images and for the case of
>>> // multi-modality images. The image metric of choice in this case is the
>>> // \doxygen{MattesMutualInformationImageToImageMetric}.
>>> //
>>> // \index{itk::BSplineDeformableTransform}
>>> // \index{itk::BSplineDeformableTransform!DeformableRegistration}
>>> // \index{itk::LBFGSBOptimizer}
>>> //
>>> //
>>> // Software Guide : EndLatex
>>>
>>> #include "itkImageRegistrationMethod.h"
>>> #include "itkMattesMutualInformationImageToImageMetric.h"
>>> #include "itkLinearInterpolateImageFunction.h"
>>> #include "itkImage.h"
>>>
>>> #include "itkTimeProbesCollectorBase.h"
>>>
>>> //added per itkInverseDeformationFieldImageFilterTest.cxx
>>> #include "itkVector.h"
>>> #include "itkInverseDeformationFieldImageFilter.h"
>>> #include "itkImageRegionIteratorWithIndex.h"
>>> //#include "itkImageFileWriter.h"
>>> #include "itkFilterWatcher.h"
>>> //end of added code
>>>
>>> #ifdef ITK_USE_REVIEW
>>> #include "itkMemoryProbesCollectorBase.h"
>>> #define itkProbesCreate() \
>>> itk::TimeProbesCollectorBase chronometer; \
>>> itk::MemoryProbesCollectorBase memorymeter
>>> #define itkProbesStart( text ) memorymeter.Start( text ); chronometer.Start( text )
>>> #define itkProbesStop( text ) chronometer.Stop( text ); memorymeter.Stop( text )
>>> #define itkProbesReport( stream ) chronometer.Report( stream ); memorymeter.Report( stream )
>>> #else
>>> #define itkProbesCreate() \
>>> itk::TimeProbesCollectorBase chronometer
>>> #define itkProbesStart( text ) chronometer.Start( text )
>>> #define itkProbesStop( text ) chronometer.Stop( text )
>>> #define itkProbesReport( stream ) chronometer.Report( stream )
>>> #endif
>>>
>>>
>>> // Software Guide : BeginLatex
>>> //
>>> // The following are the most relevant headers to this example.
>>> //
>>> // \index{itk::BSplineDeformableTransform!header}
>>> // \index{itk::LBFGSBOptimizer!header}
>>> //
>>> // Software Guide : EndLatex
>>>
>>> // Software Guide : BeginCodeSnippet
>>> #include "itkBSplineDeformableTransform.h"
>>> #include "itkLBFGSBOptimizer.h"
>>> // Software Guide : EndCodeSnippet
>>>
>>> // Software Guide : BeginLatex
>>> //
>>> // The parameter space of the \code{BSplineDeformableTransform} is composed
>>> // of the set of all the deformations associated with the nodes of the BSpline
>>> // grid. This large number of parameters makes it possible to represent a wide
>>> // variety of deformations, but it comes at the price of requiring a
>>> // significant amount of computation time.
>>> //
>>> // \index{itk::BSplineDeformableTransform!header}
>>> //
>>> // Software Guide : EndLatex
>>>
>>> #include "itkImageFileReader.h"
>>> #include "itkImageFileWriter.h"
>>>
>>> #include "itkResampleImageFilter.h"
>>> #include "itkCastImageFilter.h"
>>> #include "itkSquaredDifferenceImageFilter.h"
>>>
>>>
>>> #include "itkTransformFileReader.h"
>>>
>>> //added from ReadAtlasAndSubject.cxx
>>> #include "itkGDCMImageIO.h"
>>> //end of added code
>>>
>>> // The following section of code implements a Command observer
>>> // used to monitor the evolution of the registration process.
>>> //
>>> #include "itkCommand.h"
>>> class CommandIterationUpdate : public itk::Command
>>> {
>>> public:
>>> typedef CommandIterationUpdate Self;
>>> typedef itk::Command Superclass;
>>> typedef itk::SmartPointer<Self> Pointer;
>>> itkNewMacro( Self );
>>> protected:
>>> CommandIterationUpdate() {};
>>> public:
>>> typedef itk::LBFGSBOptimizer OptimizerType;
>>> typedef const OptimizerType * OptimizerPointer;
>>>
>>> void Execute(itk::Object *caller, const itk::EventObject & event)
>>> {
>>> Execute( (const itk::Object *)caller, event);
>>> }
>>>
>>> void Execute(const itk::Object * object, const itk::EventObject &
>>> event)
>>> {
>>> OptimizerPointer optimizer =
>>> dynamic_cast< OptimizerPointer >( object );
>>> if( !(itk::IterationEvent().CheckEvent( &event )) )
>>> {
>>> return;
>>> }
>>> std::cout << optimizer->GetCurrentIteration() << " ";
>>> std::cout << optimizer->GetCachedValue() << " ";
>>> std::cout << optimizer->GetInfinityNormOfProjectedGradient() <<
>>> std::endl;
>>> }
>>> };
>>>
>>>
>>> int main( int argc, char *argv[] )
>>> {
>>> if( argc < 4 )
>>> {
>>> std::cerr << "Missing Parameters " << std::endl;
>>> std::cerr << "Usage: " << argv[0];
>>> std::cerr << " fixedImageFile movingImageFile outputImagefile ";
>>> std::cerr << " [differenceOutputfile] [differenceBeforeRegistration]
>>> ";
>>>
>>> //std::cerr << " [deformationField] ";
>>> std::cerr << " [deformationField] " << " [inversedeformationField] ";
>>>
>>> std::cerr << " [useExplicitPDFderivatives ] [useCachingBSplineWeights
>>> ] ";
>>> std::cerr << " [filenameForFinalTransformParameters] ";
>>> std::cerr << std::endl;
>>> return EXIT_FAILURE;
>>> }
>>>
>>> const unsigned int ImageDimension = 3;
>>> typedef signed short PixelType;
>>>
>>> typedef itk::Image< PixelType, ImageDimension > FixedImageType;
>>> typedef itk::Image< PixelType, ImageDimension > MovingImageType;
>>>
>>>
>>> // Software Guide : BeginLatex
>>> //
>>> // We now instantiate the type of the \code{BSplineDeformableTransform}, using
>>> // as template parameters the type used for coordinate representation, the
>>> // dimension of the space, and the order of the BSpline.
>>> //
>>> // \index{BSplineDeformableTransform!New}
>>> // \index{BSplineDeformableTransform!Instantiation}
>>> //
>>> // Software Guide : EndLatex
>>>
>>> // Software Guide : BeginCodeSnippet
>>> const unsigned int SpaceDimension = ImageDimension;
>>> const unsigned int SplineOrder = 3;
>>> typedef double CoordinateRepType;
>>>
>>> typedef itk::BSplineDeformableTransform<
>>> CoordinateRepType,
>>> SpaceDimension,
>>> SplineOrder > TransformType;
>>> // Software Guide : EndCodeSnippet
>>>
>>>
>>> typedef itk::LBFGSBOptimizer OptimizerType;
>>>
>>>
>>> typedef itk::MattesMutualInformationImageToImageMetric<
>>> FixedImageType,
>>> MovingImageType > MetricType;
>>>
>>> typedef itk:: LinearInterpolateImageFunction<
>>> MovingImageType,
>>> double >
>>> InterpolatorType;
>>>
>>> typedef itk::ImageRegistrationMethod<
>>> FixedImageType,
>>> MovingImageType >
>>> RegistrationType;
>>>
>>> MetricType::Pointer metric = MetricType::New();
>>> OptimizerType::Pointer optimizer = OptimizerType::New();
>>> InterpolatorType::Pointer interpolator = InterpolatorType::New();
>>> RegistrationType::Pointer registration = RegistrationType::New();
>>>
>>>
>>> registration->SetMetric( metric );
>>> registration->SetOptimizer( optimizer );
>>> registration->SetInterpolator( interpolator );
>>>
>>>
>>> // Software Guide : BeginLatex
>>> //
>>> // The transform object is constructed below and passed to the registration
>>> // method.
>>> // \index{itk::RegistrationMethod!SetTransform()}
>>> //
>>> // Software Guide : EndLatex
>>>
>>> // Software Guide : BeginCodeSnippet
>>> TransformType::Pointer transform = TransformType::New();
>>> registration->SetTransform( transform );
>>> // Software Guide : EndCodeSnippet
>>>
>>>
>>> //
>>> // In general, you must first solve an Affine registration between
>>> // the images before attempting to solve a deformable registration.
>>> // If you have solved an affine transform, it can be loaded into the
>>> // BSplineDeformableTransform as a "bulk" transform that will be
>>> // pre-composed with the deformation computed by the BSpline.
>>> // The following code loads such an initial transform if one is
>>> // available.
>>> //
>>> typedef itk::TransformFileReader TransformReaderType;
>>> typedef itk::AffineTransform<double, 3> AffineTransformType;
>>>
>>> TransformReaderType::Pointer transformReader =
>>> TransformReaderType::New();
>>>
>>> if( argc > 11 )
>>> {
>>> std::cout << "Loading Transform: " << argv[11] << std::endl;
>>> transformReader->SetFileName( argv[11] );
>>> transformReader->Update();
>>>
>>> typedef TransformReaderType::TransformListType * TransformListType;
>>> TransformListType transforms = transformReader->GetTransformList();
>>> TransformReaderType::TransformListType::const_iterator tit =
>>> transforms->begin();
>>> if( !strcmp((*tit)->GetNameOfClass(),"AffineTransform") )
>>> {
>>> AffineTransformType::Pointer affine_read =
>>> static_cast<AffineTransformType*>((*tit).GetPointer());
>>> AffineTransformType::Pointer affine_transform =
>>> dynamic_cast< AffineTransformType * >(
>>> affine_read.GetPointer() );
>>>
>>> if( affine_transform )
>>> {
>>> transform->SetBulkTransform( affine_transform );
>>> }
>>> }
>>> else
>>> {
>>> std::cerr << "Bulk transform wasn't an affine transform." <<
>>> std::endl;
>>> return EXIT_FAILURE;
>>> }
>>> }
>>>
>>>
>>> typedef itk::ImageFileReader< FixedImageType > FixedImageReaderType;
>>> typedef itk::ImageFileReader< MovingImageType > MovingImageReaderType;
>>>
>>> FixedImageReaderType::Pointer fixedImageReader =
>>> FixedImageReaderType::New();
>>> MovingImageReaderType::Pointer movingImageReader =
>>> MovingImageReaderType::New();
>>>
>>> fixedImageReader->SetFileName( argv[1] );
>>> movingImageReader->SetFileName( argv[2] );
>>>
>>> //added from ReadAtlasAndSubject.cxx
>>> typedef itk::GDCMImageIO ImageIOTypefixed;
>>> ImageIOTypefixed::Pointer gdcmImageIOfixed = ImageIOTypefixed::New();
>>> fixedImageReader->SetImageIO( gdcmImageIOfixed );
>>>
>>> typedef itk::GDCMImageIO ImageIOTypemoving;
>>> ImageIOTypemoving::Pointer gdcmImageIOmoving =
>>> ImageIOTypemoving::New();
>>> movingImageReader->SetImageIO( gdcmImageIOmoving );
>>> //end of added code
>>>
>>> movingImageReader->Update();
>>> fixedImageReader->Update();
>>>
>>> FixedImageType::ConstPointer fixedImage =
>>> fixedImageReader->GetOutput();
>>>
>>> //added
>>> MovingImageType::ConstPointer movingImage =
>>> movingImageReader->GetOutput();
>>>
>>> registration->SetFixedImage( fixedImage );
>>> registration->SetMovingImage( movingImageReader->GetOutput() );
>>>
>>> //fixedImageReader->Update();
>>>
>>> FixedImageType::RegionType fixedRegion =
>>> fixedImage->GetBufferedRegion();
>>>
>>> registration->SetFixedImageRegion( fixedRegion );
>>>
>>> // Software Guide : BeginLatex
>>> //
>>> // Here we define the parameters of the BSplineDeformableTransform grid. We
>>> // arbitrarily decide to use a grid with $5 \times 5$ nodes within the image.
>>> // The reader should note that the BSpline computation requires a
>>> // finite support region ( 1 grid node at the lower borders and 2
>>> // grid nodes at upper borders). Therefore in this example, we set
>>> // the grid size to be $8 \times 8$ and place the grid origin such that
>>> // grid node (1,1) coincides with the first pixel in the fixed image.
>>> //
>>> // \index{BSplineDeformableTransform}
>>> //
>>> // Software Guide : EndLatex
>>>
>>> unsigned int numberOfGridNodesInOneDimension = 5;
>>>
>>> if( argc > 10 )
>>> {
>>> numberOfGridNodesInOneDimension = atoi( argv[10] );
>>> }
>>>
>>> // Software Guide : BeginCodeSnippet
>>> typedef TransformType::RegionType RegionType;
>>> RegionType bsplineRegion;
>>> RegionType::SizeType gridSizeOnImage;
>>> RegionType::SizeType gridBorderSize;
>>> RegionType::SizeType totalGridSize;
>>>
>>> gridSizeOnImage.Fill( numberOfGridNodesInOneDimension );
>>> gridBorderSize.Fill( SplineOrder ); // Border for spline order = 3 ( 1 lower, 2 upper )
>>> totalGridSize = gridSizeOnImage + gridBorderSize;
>>>
>>> bsplineRegion.SetSize( totalGridSize );
>>> // Software Guide : EndCodeSnippet
>>>
>>>
>>> // Software Guide : BeginCodeSnippet
>>> typedef TransformType::SpacingType SpacingType;
>>> SpacingType spacing = fixedImage->GetSpacing();
>>>
>>> typedef TransformType::OriginType OriginType;
>>> OriginType origin = fixedImage->GetOrigin();
>>>
>>> FixedImageType::SizeType fixedImageSize = fixedRegion.GetSize();
>>>
>>> for(unsigned int r=0; r<ImageDimension; r++)
>>> {
>>> spacing[r] *= static_cast<double>(fixedImageSize[r] - 1) /
>>> static_cast<double>(gridSizeOnImage[r] - 1);
>>> }
>>>
>>> FixedImageType::DirectionType gridDirection =
>>> fixedImage->GetDirection();
>>> SpacingType gridOriginOffset = gridDirection * spacing;
>>>
>>> OriginType gridOrigin = origin - gridOriginOffset;
>>>
>>> transform->SetGridSpacing( spacing );
>>> transform->SetGridOrigin( gridOrigin );
>>> transform->SetGridRegion( bsplineRegion );
>>> transform->SetGridDirection( gridDirection );
>>>
>>>
>>> typedef TransformType::ParametersType ParametersType;
>>>
>>> const unsigned int numberOfParameters =
>>> transform->GetNumberOfParameters();
>>>
>>> ParametersType parameters( numberOfParameters );
>>>
>>> parameters.Fill( 0.0 );
>>>
>>> transform->SetParameters( parameters );
>>> // Software Guide : EndCodeSnippet
>>>
>>>
>>>
>>> // Software Guide : BeginLatex
>>> //
>>> // We now pass the parameters of the current transform as the initial
>>> // parameters to be used when the registration process starts.
>>> //
>>> // Software Guide : EndLatex
>>>
>>> // Software Guide : BeginCodeSnippet
>>> registration->SetInitialTransformParameters( transform->GetParameters()
>>> );
>>> // Software Guide : EndCodeSnippet
>>>
>>>
>>> // Software Guide : BeginLatex
>>> //
>>> // Next we set the parameters of the LBFGSB Optimizer.
>>> //
>>> // Software Guide : EndLatex
>>>
>>>
>>> // Software Guide : BeginCodeSnippet
>>> OptimizerType::BoundSelectionType boundSelect(
>>> transform->GetNumberOfParameters() );
>>> OptimizerType::BoundValueType upperBound(
>>> transform->GetNumberOfParameters() );
>>> OptimizerType::BoundValueType lowerBound(
>>> transform->GetNumberOfParameters() );
>>>
>>> boundSelect.Fill( 0 );
>>> upperBound.Fill( 0.0 );
>>> lowerBound.Fill( 0.0 );
>>>
>>> optimizer->SetBoundSelection( boundSelect );
>>> optimizer->SetUpperBound( upperBound );
>>> optimizer->SetLowerBound( lowerBound );
>>>
>>> optimizer->SetCostFunctionConvergenceFactor( 1.e7 );
>>> optimizer->SetProjectedGradientTolerance( 1e-6 );
>>> optimizer->SetMaximumNumberOfIterations( 200 );
>>> optimizer->SetMaximumNumberOfEvaluations( 30 );
>>> optimizer->SetMaximumNumberOfCorrections( 5 );
>>> // Software Guide : EndCodeSnippet
>>>
>>> // Create the Command observer and register it with the optimizer.
>>> //
>>> CommandIterationUpdate::Pointer observer =
>>> CommandIterationUpdate::New();
>>> optimizer->AddObserver( itk::IterationEvent(), observer );
>>>
>>>
>>> // Software Guide : BeginLatex
>>> //
>>> // Next we set the parameters of the Mattes Mutual Information Metric.
>>> //
>>> // Software Guide : EndLatex
>>>
>>> // Software Guide : BeginCodeSnippet
>>> metric->SetNumberOfHistogramBins( 50 );
>>>
>>> const unsigned int numberOfSamples =
>>> static_cast<unsigned int>( fixedRegion.GetNumberOfPixels() * 20.0 /
>>> 100.0 );
>>>
>>> metric->SetNumberOfSpatialSamples( numberOfSamples );
>>> // Software Guide : EndCodeSnippet
>>>
>>>
>>> // Software Guide : BeginLatex
>>> //
>>> // Given that the Mattes Mutual Information metric uses a random iterator in
>>> // order to collect the samples from the images, it is usually convenient to
>>> // initialize the seed of the random number generator.
>>> //
>>> // \index{itk::Mattes\-Mutual\-Information\-Image\-To\-Image\-Metric!ReinitializeSeed()}
>>> //
>>> // Software Guide : EndLatex
>>>
>>> // Software Guide : BeginCodeSnippet
>>> metric->ReinitializeSeed( 76926294 );
>>> // Software Guide : EndCodeSnippet
>>>
>>> if( argc > 8 )
>>> {
>>> // Define whether to calculate the metric derivative by explicitly
>>> // computing the derivatives of the joint PDF with respect to the Transform
>>> // parameters, or doing it by progressively accumulating contributions from
>>> // each bin in the joint PDF.
>>> metric->SetUseExplicitPDFDerivatives( atoi( argv[8] ) );
>>> }
>>>
>>> if( argc > 9 )
>>> {
>>> // Define whether to cache the BSpline weights and indexes corresponding to
>>> // each one of the samples used to compute the metric. Enabling caching will
>>> // make the algorithm run faster, but it will have a cost on the amount of
>>> // memory that needs to be allocated. This option is only relevant when
>>> // using the BSplineDeformableTransform.
>>> metric->SetUseCachingOfBSplineWeights( atoi( argv[9] ) );
>>> }
>>>
>>>
>>> // Add time and memory probes
>>> itkProbesCreate();
>>>
>>> std::cout << std::endl << "Starting Registration" << std::endl;
>>>
>>> try
>>> {
>>> itkProbesStart( "Registration" );
>>> registration->StartRegistration();
>>> itkProbesStop( "Registration" );
>>> }
>>> catch( itk::ExceptionObject & err )
>>> {
>>> std::cerr << "ExceptionObject caught !" << std::endl;
>>> std::cerr << err << std::endl;
>>> return EXIT_FAILURE;
>>> }
>>>
>>> OptimizerType::ParametersType finalParameters =
>>> registration->GetLastTransformParameters();
>>>
>>>
>>> // Report the time and memory taken by the registration
>>> itkProbesReport( std::cout );
>>>
>>> // Software Guide : BeginCodeSnippet
>>> transform->SetParameters( finalParameters );
>>> // Software Guide : EndCodeSnippet
>>>
>>>
>>> typedef itk::ResampleImageFilter<
>>> MovingImageType,
>>> FixedImageType > ResampleFilterType;
>>>
>>> ResampleFilterType::Pointer resample = ResampleFilterType::New();
>>>
>>> resample->SetTransform( transform );
>>> resample->SetInput( movingImageReader->GetOutput() );
>>>
>>> resample->SetSize( fixedImage->GetLargestPossibleRegion().GetSize()
>>> );
>>> resample->SetOutputOrigin( fixedImage->GetOrigin() );
>>> resample->SetOutputSpacing( fixedImage->GetSpacing() );
>>> resample->SetOutputDirection( fixedImage->GetDirection() );
>>>
>>> // This value is set to zero in order to make it easier to perform
>>> // regression testing in this example. However, for a didactic
>>> // exercise it would be better to set it to a medium gray value
>>> // such as 100 or 128.
>>> //resample->SetDefaultPixelValue( 0 );
>>> resample->SetDefaultPixelValue( 0 );
>>>
>>> typedef signed short OutputPixelType;
>>>
>>> typedef itk::Image< OutputPixelType, ImageDimension > OutputImageType;
>>>
>>> typedef itk::CastImageFilter<
>>> FixedImageType,
>>> OutputImageType > CastFilterType;
>>>
>>> typedef itk::ImageFileWriter< OutputImageType > WriterType;
>>>
>>>
>>> WriterType::Pointer writer = WriterType::New();
>>> CastFilterType::Pointer caster = CastFilterType::New();
>>>
>>> //added from ReadAtlasAndSubject.cxx
>>> /*typedef itk::GDCMImageIO ImageIOType;
>>> ImageIOType::Pointer gdcmImageIO = ImageIOType::New();
>>>
>>> writer->UseInputMetaDataDictionaryOff();
>>> writer->SetImageIO( gdcmImageIO );*/
>>> //end of added code
>>>
>>> writer->SetFileName( argv[3] );
>>>
>>> writer->UseInputMetaDataDictionaryOff();
>>> //writer->SetImageIO( gdcmImageIOAtlas );
>>>
>>> caster->SetInput( resample->GetOutput() );
>>> writer->SetInput( caster->GetOutput() );
>>>
>>>
>>> try
>>> {
>>> writer->Update();
>>> }
>>> catch( itk::ExceptionObject & err )
>>> {
>>> std::cerr << "ExceptionObject caught !" << std::endl;
>>> std::cerr << err << std::endl;
>>> return EXIT_FAILURE;
>>> }
>>>
>>>
>>>
>>> typedef itk::SquaredDifferenceImageFilter<
>>> FixedImageType,
>>> FixedImageType,
>>> OutputImageType > DifferenceFilterType;
>>>
>>> DifferenceFilterType::Pointer difference = DifferenceFilterType::New();
>>>
>>> WriterType::Pointer writer2 = WriterType::New();
>>>
>>> //added
>>> writer2->UseInputMetaDataDictionaryOff();
>>> //end of added code
>>>
>>>
>>> writer2->SetInput( difference->GetOutput() );
>>>
>>>
>>> // Compute the difference image between the
>>> // fixed and resampled moving image.
>>> if( argc > 4 )
>>> {
>>> difference->SetInput1( fixedImageReader->GetOutput() );
>>> difference->SetInput2( resample->GetOutput() );
>>> writer2->SetFileName( argv[4] );
>>> try
>>> {
>>> writer2->Update();
>>> }
>>> catch( itk::ExceptionObject & err )
>>> {
>>> std::cerr << "ExceptionObject caught !" << std::endl;
>>> std::cerr << err << std::endl;
>>> return EXIT_FAILURE;
>>> }
>>> }
>>>
>>>
>>> // Compute the difference image between the
>>> // fixed and moving image before registration.
>>> if( argc > 5 )
>>> {
>>> writer2->SetFileName( argv[5] );
>>> difference->SetInput1( fixedImageReader->GetOutput() );
>>> difference->SetInput2( movingImageReader->GetOutput() );
>>> try
>>> {
>>> writer2->Update();
>>> }
>>> catch( itk::ExceptionObject & err )
>>> {
>>> std::cerr << "ExceptionObject caught !" << std::endl;
>>> std::cerr << err << std::endl;
>>> return EXIT_FAILURE;
>>> }
>>> }
>>>
>>> // Generate the explicit deformation field and inverse deformation field
>>> // resulting from the registration.
>>> if( argc > 6 )
>>> {
>>>
>>> typedef itk::Vector< float, ImageDimension > VectorType;
>>> typedef itk::Image< VectorType, ImageDimension >
>>> DeformationFieldType;
>>>
>>> typedef itk::InverseDeformationFieldImageFilter<
>>> DeformationFieldType,
>>> DeformationFieldType
>>> > FilterType;
>>>
>>> FilterType::Pointer filter = FilterType::New();
>>>
>>> DeformationFieldType::Pointer field = DeformationFieldType::New();
>>> field->SetRegions( fixedRegion );
>>> field->SetOrigin( fixedImage->GetOrigin() );
>>> field->SetSpacing( fixedImage->GetSpacing() );
>>> field->SetDirection( fixedImage->GetDirection() );
>>> field->Allocate();
>>>
>>> typedef itk::ImageRegionIterator< DeformationFieldType >
>>> FieldIterator;
>>> FieldIterator fi( field, fixedRegion );
>>>
>>> fi.GoToBegin();
>>>
>>> TransformType::InputPointType fixedPoint;
>>> TransformType::OutputPointType movingPoint;
>>> DeformationFieldType::IndexType index;
>>>
>>> VectorType displacement;
>>>
>>> while( ! fi.IsAtEnd() )
>>> {
>>> index = fi.GetIndex();
>>> field->TransformIndexToPhysicalPoint( index, fixedPoint );
>>> movingPoint = transform->TransformPoint( fixedPoint );
>>> displacement = movingPoint - fixedPoint;
>>> fi.Set( displacement );
>>> ++fi;
>>> }
>>>
>>> typedef itk::ImageFileWriter< DeformationFieldType >
>>> FieldWriterType;
>>> FieldWriterType::Pointer fieldWriter = FieldWriterType::New();
>>>
>>> fieldWriter->SetInput( field );
>>>
>>> fieldWriter->SetFileName( argv[6] );
>>> try
>>> {
>>> fieldWriter->Update();
>>> }
>>> catch( itk::ExceptionObject & excp )
>>> {
>>> std::cerr << "Exception thrown " << std::endl;
>>> std::cerr << excp << std::endl;
>>> return EXIT_FAILURE;
>>> }
>>>
>>>
>>> // Generate the explicit inverse deformation field resulting from
>>> // the registration.
>>>
>>> DeformationFieldType::SpacingType spacing;
>>> //spacing.Fill( 1.0 );
>>>
>>> DeformationFieldType::PointType origin;
>>> //origin.Fill( 0.0 );
>>>
>>> DeformationFieldType::RegionType region;
>>> DeformationFieldType::SizeType size;
>>> DeformationFieldType::IndexType start;
>>>
>>> /*size[0] = movingImage->GetLargestPossibleRegion().GetSize()[0];
>>> size[1] = movingImage->GetLargestPossibleRegion().GetSize()[1];
>>> size[2] = movingImage->GetLargestPossibleRegion().GetSize()[2];
>>> */
>>>
>>> size[0] = fixedImage->GetLargestPossibleRegion().GetSize()[0];
>>> size[1] = fixedImage->GetLargestPossibleRegion().GetSize()[1];
>>> size[2] = fixedImage->GetLargestPossibleRegion().GetSize()[2];
>>>
>>>
>>> std::cout << "size[0] "<< size[0] << std::endl;
>>> std::cout << "size[1] "<< size[1] << std::endl;
>>> std::cout << "size[2] "<< size[2] << std::endl;
>>>
>>> region.SetSize( size );
>>>
>>> start[0] = 0;
>>> start[1] = 0;
>>> start[2] = 0;
>>>
>>>
>>>
>>> region.SetIndex( start );
>>>
>>> field->SetOrigin( field->GetOrigin() );
>>> field->SetSpacing( field->GetSpacing() );
>>>
>>> field->SetRegions( region );
>>>
>>>
>>> field->Allocate();
>>>
>>> VectorType pixelValue;
>>>
>>> filter->SetOutputSpacing( field->GetSpacing() );
>>>
>>> filter->SetOutputOrigin( field->GetOrigin() );
>>>
>>> filter->SetSize( size );
>>>
>>>
>>>
>>> filter->SetInput( field );
>>>
>>>
>>> filter->SetSubsamplingFactor( 16 );
>>>
>>> try
>>> {
>>> std::cout << "updating inverse deformation filter" << std::endl;
>>> filter->UpdateLargestPossibleRegion();
>>> }
>>> catch( itk::ExceptionObject & excp )
>>> {
>>> std::cerr << "Exception thrown " << std::endl;
>>> std::cerr << excp << std::endl;
>>> }
>>>
>>> std::cout << "finished updating inverse deformation filter" <<
>>> std::endl;
>>>
>>>
>>> // Write an image for regression testing
>>> typedef itk::ImageFileWriter< DeformationFieldType > InvWriterType;
>>>
>>> InvWriterType::Pointer writerinv = InvWriterType::New();
>>>
>>> writerinv->SetInput (filter->GetOutput());
>>> writerinv->SetFileName( argv[7] );
>>>
>>> try
>>> {
>>> std::cout << "writing inverse deformation filter" << std::endl;
>>> writerinv->Update();
>>> }
>>> catch( itk::ExceptionObject & excp )
>>> {
>>> std::cerr << "Exception thrown by writer" << std::endl;
>>> std::cerr << excp << std::endl;
>>> return EXIT_FAILURE;
>>> }
>>>
>>> std::cout << "finished writing inverse deformation filter" <<
>>> std::endl;
>>>
>>> }
>>>
>>> // Optionally, save the transform parameters in a file
>>> if( argc > 10 )
>>> {
>>> std::ofstream parametersFile;
>>> parametersFile.open( argv[10] );
>>> parametersFile << finalParameters << std::endl;
>>> parametersFile.close();
>>> }
>>>
>>> return EXIT_SUCCESS;
>>> }
>>>
>>> #undef itkProbesCreate
>>> #undef itkProbesStart
>>> #undef itkProbesStop
>>> #undef itkProbesReport
>>>
>>>
>>>
>>> On Mon, Oct 12, 2009 at 12:29 PM, Luis Ibanez <luis.ibanez at kitware.com>
>>> wrote:
>>>>
>>>> Hi John,
>>>>
>>>> 1) The Deformation Field resulting from a Deformable Registration
>>>> process in ITK, contains the vectors that point from the physical
>>>> coordinates of a point in the Fixed image reference system to
>>>> its corresponding point in the Moving image reference system.
>>>>
>>>> 2) You could combine that deformation field with the WarpImageFilter
>>>> in order to map intensities from the Moving image reference system
>>>> into the Fixed Image reference system. That is: to resample the
>>>> Moving image into the image grid of the Fixed image.
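>>>>
>>>> A minimal sketch of that combination (the names movingImage, fixedImage
>>>> and deformationField stand in for your own objects) would be:
>>>>
>>>>   #include "itkWarpImageFilter.h"
>>>>
>>>>   typedef itk::WarpImageFilter< MovingImageType,
>>>>                                 FixedImageType,
>>>>                                 DeformationFieldType > WarpFilterType;
>>>>
>>>>   WarpFilterType::Pointer warper = WarpFilterType::New();
>>>>   warper->SetInput( movingImage );
>>>>   warper->SetDeformationField( deformationField );
>>>>   warper->SetOutputSpacing( fixedImage->GetSpacing() );
>>>>   warper->SetOutputOrigin( fixedImage->GetOrigin() );
>>>>   warper->Update();   // the output lives on the fixed image grid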
>>>>
>>>> 3) If you want to map points (not pixel values) from the Moving image
>>>> to the Fixed image, then you need to invert the deformation field.
>>>> Simply negating (multiplying by -1 ) the initial deformation field
>>>> will NOT do the trick.
>>>>
>>>> 4) You may want to look at the following two options for inverting
>>>> deformation fields:
>>>>
>>>> 4.1) Insight/Code/BasicFilters/itkInverseDeformationFieldImageFilter.h
>>>>
>>>> 4.2)
>>>> Insight/Code/BasicFilters/itkIterativeInverseDeformationFieldImageFilter.h
>>>>
>>>> The first method uses a KernelTransform ( such as ThinPlateSplines)
>>>> as an intermediate helper for computing the inverse field.
>>>>
>>>> The second method computes the inverse field iteratively, based
>>>> on F * R = I (the forward field composed with the Reverse field should
>>>> be equal to the Identity), by redistributing the residual errors of
>>>> Error = F * R - I
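>>>>
>>>> A minimal usage sketch for this second option (the parameter values here
>>>> are only placeholders to be tuned for your data) would be:
>>>>
>>>>   typedef itk::IterativeInverseDeformationFieldImageFilter<
>>>>                    DeformationFieldType, DeformationFieldType > InverterType;
>>>>
>>>>   InverterType::Pointer inverter = InverterType::New();
>>>>   inverter->SetInput( forwardField );
>>>>   inverter->SetNumberOfIterations( 20 );
>>>>   inverter->SetStopValue( 0.1 );   // residual tolerance, in physical units
>>>>   inverter->Update();
>>>>
>>>>   DeformationFieldType::Pointer reverseField = inverter->GetOutput();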
>>>>
>>>>
>>>> 5) An alternative option can be found in
>>>>
>>>> InsightApplications/InverseConsistentLandmarkRegistration
>>>>
>>>> Based on the paper:
>>>>
>>>> Consistent Landmark and Intensity-Based Image Registration
>>>> by H. J. Johnson* and G. E. Christensen
>>>> IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 21, NO. 5, MAY 2002
>>>>
>>>>
>>>>
>>>> Please give it a try and let us know if you find
>>>> any problems.
>>>>
>>>>
>>>>
>>>> Thanks
>>>>
>>>>
>>>> Luis
>>>>
>>>>
>>>> --------------------------------------------------------
>>>> On Sat, Oct 10, 2009 at 4:12 AM, John Drozd <john.drozd at gmail.com>
>>>> wrote:
>>>> > Hello,
>>>> >
>>>> > I used deformable registration using DeformableRegistration8.cxx to
>>>> > register
>>>> > one brain subject from one person with another brain subject from
>>>> > another
>>>> > person. I now have a deformation field file that was outputted from
>>>> > deformableregistration8.cxx. I was reading on the internet the
>>>> > following
>>>> > link
>>>> > http://www.itk.org/pipermail/insight-users/2009-April/030004.html that
>>>> > describes how to map points from the fixed image to the moving image
>>>> > using a
>>>> > deformation field. If i want to map points from the moving image to
>>>> > the
>>>> > fixed image, can I use the same procedure but take the negative of the
>>>> > deformation field, and if so how can i negate the deformation field
>>>> > and use
>>>> > it with ResampleImagefilter? My logic is that when you take the
>>>> > negative of
>>>> > a vector, this reverses its direction.
>>>> >
>>>> > Thank you,
>>>> >
>>>> > john
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > John Drozd
>>>> > Postdoctoral Fellow
>>>> > Imaging Research Laboratories
>>>> > Robarts Research Institute
>>>> > Room 1256
>>>> > 100 Perth Drive
>>>> > London, Ontario, Canada
>>>> > N6A 5K8
>>>> > (519) 661-2111 ext. 25306
>>>> >
>>>
>>>
>>>
>>> --
>>> John Drozd
>>> Postdoctoral Fellow
>>> Imaging Research Laboratories
>>> Robarts Research Institute
>>> Room 1256
>>> 100 Perth Drive
>>> London, Ontario, Canada
>>> N6A 5K8
>>> (519) 661-2111 ext. 25306
>>
>>
>>
>> --
>> John Drozd
>> Postdoctoral Fellow
>> Imaging Research Laboratories
>> Robarts Research Institute
>> Room 1256
>> 100 Perth Drive
>> London, Ontario, Canada
>> N6A 5K8
>> (519) 661-2111 ext. 25306
>
>
>
> --
> John Drozd
> Postdoctoral Fellow
> Imaging Research Laboratories
> Robarts Research Institute
> Room 1256
> 100 Perth Drive
> London, Ontario, Canada
> N6A 5K8
> (519) 661-2111 ext. 25306
>