[Insight-users] Image transformation tool that resizes the FOV
Frank Ezekiel
frank.ezekiel at ucsf.edu
Tue Apr 7 18:01:27 EDT 2009
Luis:
This is great, your suggested approach worked perfectly on axial data! (Thanks for all the help)
However, it is failing on sagittal and coronal data, with the position corrections apparently applied to the wrong axis.
So, do you have any suggestions on what criteria should be used to test for orientation? One approach would be the one David Clunie published (the code below), but if there is an "ITK-ish" solution I'd rather use that.
Thoughts?
Frank
#include <math.h>

char GetMajorAxisFromPatientRelativeDirectionCosine(double x, double y, double z)
{
    char axis = 0;
    double obliquityThresholdCosineValue = 0.25;
    char OrientationX = x < 0 ? 'R' : 'L';
    char OrientationY = y < 0 ? 'A' : 'P';
    char OrientationZ = z < 0 ? 'F' : 'H';
    double absX = fabs(x);
    double absY = fabs(y);
    double absZ = fabs(z);
    // The tests here really don't need to check the other dimensions,
    // just the threshold, since the sum of the squares should be == 1.0,
    // but just in case ...
    if (absX > obliquityThresholdCosineValue && absX > absY && absX > absZ) {
        axis = OrientationX;
    }
    else if (absY > obliquityThresholdCosineValue && absY > absX && absY > absZ) {
        axis = OrientationY;
    }
    else if (absZ > obliquityThresholdCosineValue && absZ > absX && absZ > absY) {
        axis = OrientationZ;
    }
    return axis;
}
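
The closest "ITK-ish" equivalent I can picture (untested sketch, and itk::SpatialOrientationAdapter::FromDirectionCosines() may already do something similar) would be to run the same major-axis test on each column of the OrientedImage's direction matrix, since column j holds the patient-relative direction cosines of image axis j:

#include "itkOrientedImage.h"
#include <iostream>

typedef itk::OrientedImage< float, 3 > ImageType;

// Reuses GetMajorAxisFromPatientRelativeDirectionCosine() from above.
void PrintDominantPatientAxes(const ImageType * image)
{
    const ImageType::DirectionType & dir = image->GetDirection();
    for (unsigned int j = 0; j < 3; ++j)   // j = image axis (matrix column)
    {
        char axis = GetMajorAxisFromPatientRelativeDirectionCosine(
            dir[0][j], dir[1][j], dir[2][j]);
        std::cout << "image axis " << j << " is mostly '" << axis << "'" << std::endl;
    }
}
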
At 04:16 PM 4/2/2009, Luis Ibanez wrote:
>Hi Frank,
>
>Your list of items looks right.
>
>It seems, however, that you use different terminology...
>
>In practice you need to do:
>
>A) Use the ResampleImageFilter
>B) Use the IdentityTransform
>C) Pick your favorite Interpolator
>D) Pick the new pixel spacing of the output image
>E) Pick the new number of pixels (along each dimension)
> of the output image.
>F) With (D and E), compute the new origin of the output
> image, in such a way that the center of the image
> remains in place.
>
>
>To compute (F) you can simply do the math inside of a
>for loop:
>
>
>  PointType o1;
>  SpacingType d1;
>  SizeType s1;
>
>  o1 = inputImage->GetOrigin();
>  s1 = inputImage->GetBufferedRegion().GetSize();
>  d1 = inputImage->GetSpacing();
>
>  // s2 and d2 hold the output size (E) and spacing (D) that you picked.
>  SizeType s2;
>  SpacingType d2;
>
>  PointType o2;
>  for( unsigned int i=0; i<3; i++)
>  {
>    double center = o1[i] + s1[i] * d1[i] / 2.0;
>    o2[i] = center - s2[i] * d2[i] / 2.0;
>  }
>
>  resampleFilter->SetOutputOrigin( o2 );
>  resampleFilter->SetSize( s2 );
>  resampleFilter->SetOutputSpacing( d2 );
>
>  resampleFilter->SetOutputDirection(
>    inputImage->GetDirection() ); // no rotations
>
>  resampleFilter->SetInput( inputImage );
>  resampleFilter->SetDefaultPixelValue( 0.0 );
>
>  resampleFilter->Update();
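>
>Note: the per-component origin math above assumes an axis-aligned
>direction matrix (plain axial data). For sagittal/coronal or oblique
>direction cosines, a direction-aware variant (rough, untested sketch
>reusing o1, s1, d1, s2, d2 and o2 from above) is to let the input image
>do the index-to-physical mapping and then step back from the center
>along the output directions:
>
>  // itk::ContinuousIndex comes from itkContinuousIndex.h.
>  // Pixel-center convention is used for both images, so the
>  // half-pixel offsets cancel out.
>  itk::ContinuousIndex< double, 3 > centerIndex;
>  for( unsigned int i=0; i<3; i++)
>  {
>    centerIndex[i] = ( s1[i] - 1 ) / 2.0;
>  }
>
>  PointType centerPoint;
>  inputImage->TransformContinuousIndexToPhysicalPoint( centerIndex, centerPoint );
>
>  // Output origin = center - Direction * ( spacing * half extent in index space ),
>  // which preserves the physical center for any orientation.
>  DirectionType dir = inputImage->GetDirection(); // DirectionType = ImageType::DirectionType
>  for( unsigned int i=0; i<3; i++)
>  {
>    o2[i] = centerPoint[i];
>    for( unsigned int j=0; j<3; j++)
>    {
>      o2[i] -= dir[i][j] * d2[j] * ( s2[j] - 1 ) / 2.0;
>    }
>  }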
>
>
>
>That should do the trick...
>
>
>
> Regards,
>
>
> Luis
>
>
>----------------------
>Frank Ezekiel wrote:
>>That is helpful.
>>So let me describe this to see if I have it right.
>>Since the angulation doesn't change, my "fixed" and "moving" coordinate frames are the same.
>>I need to :
>>1) Transform the original origin from "image coordinates" into physical coordinates
>>2) Reposition the origin to correctly reflect my expected change in FOV size for each dimension (add 1/2 of the extent change to the position on each of the 3 axes)
>>3) Transform the new origin back into image coordinates
>>4) Set the position of my output image using this new value
>>5) Fill in the output image with the source image centered and otherwise zero filled
>>
>>Is this right?
>>
>>What ITK functions are available to transform the origin point into physical coordinates and then back to image coordinates?
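>>
>>(Guessing from the ImageBase API, TransformIndexToPhysicalPoint() and
>>TransformPhysicalPointToIndex() look like the relevant calls; on an
>>OrientedImage they should honor the direction cosines. Roughly, with
>>"image" standing in for the OrientedImage pointer:)
>>
>>  ImageType::IndexType index = image->GetLargestPossibleRegion().GetIndex();
>>  ImageType::PointType physicalPoint;
>>  image->TransformIndexToPhysicalPoint( index, physicalPoint );  // pixels -> mm
>>  image->TransformPhysicalPointToIndex( physicalPoint, index );  // mm -> pixels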
>>
>>Thanks again,
>>
>>Frank
>>
>>
>>
>>At 02:24 PM 4/2/2009, you wrote:
>>
>>>Hi Frank,
>>>
>>>Thanks for the clarification.
>>>
>>>Yes, I don't think that the orientation should change at all.
>>>
>>>It seems that your main concern is to keep the center of
>>>the output image aligned with the center of the input image.
>>>
>>>That will require that you properly compute the Origin
>>>of the output image.
>>>
>>>The best way to visualize this is by drawing a sketch of
>>>the grids from both images. Do this in the context of the
>>>Physical reference system, do not do this in the context
>>>of pixels. Things become a lot clearer when you identify
>>>all the elements in (mm).
>>>
>>>The pseudo computation of the Output origin is
>>>
>>>InputCenterPoint = InputOriginPoint + InputSize*InputSpacing / 2
>>>OutputCenterPoint = InputCenterPoint
>>>OutputOriginPoint = OutputCenterPoint - OutputSize * OutputSpacing / 2
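>>>
>>>For example, along one axis (taking InputOrigin = 0 just for
>>>illustration), with InputSize = 40, InputSpacing = 1, OutputSize = 256,
>>>OutputSpacing = 1:
>>>
>>>InputCenterPoint  = 0 + 40 * 1 / 2    =   20 mm
>>>OutputOriginPoint = 20 - 256 * 1 / 2  = -108 mm
>>>
>>>so the original 40 mm of data ends up centered in the new 256 mm extent.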
>>>
>>>
>>>In order to fill the additional empty space with "zero" you can
>>>use the DefaultPixelValue parameter of the ResampleImageFilter.
>>>
>>>
>>> Regards,
>>>
>>>
>>> Luis
>>>
>>>
>>>----------------------------------------------------------------------------------
>>>On Thu, Apr 2, 2009 at 4:30 PM, Frank Ezekiel <frank.ezekiel at ucsf.edu> wrote:
>>>
>>>>Hi Luis:
>>>>
>>>> Yes, by "DCM" I mean the direction cosine matrix.
>>>>
>>>> BUT, I'm not looking to rotate the image. Just to extend the boundaries of the field-of-view.
>>>>
>>>> Another point I forgot to mention is that these are "OrientedImages".
>>>>
>>>> I'm looking to change the FOV of the image. For example:
>>>>
>>>>Starting image: 40mm by 20mm by 40mm, 1mm per pixel
>>>>Ending image: 256mm by 256mm by 256mm, 1 mm per pixel, WITH SAME CENTER POINT.
>>>>
>>>> Obviously in this example all of the additional "space" would be zero filled.
>>>>
>>>> So if I understand your hint correctly, the DCM is not expected to change in this scenario, and presumably I just need to figure out the math for modifying the "position" point so that the correct amount of field-of-view extension is applied on each of the 3 axes based on the DCM values.
>>>>
>>>>Frank
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>At 12:57 PM 4/2/2009, you wrote:
>>>>
>>>>
>>>>
>>>>>Hi Frank,
>>>>>
>>>>>I assume that by "DCM" you are referring to
>>>>>the Direction Cosines Matrix. Is that right?
>>>>>
>>>>>If so, I don't quite see why you expect,
>>>>>or want, the direction cosines to be modified
>>>>>if you are just changing the resolution of the
>>>>>image.
>>>>>
>>>>>
>>>>>Are you intending to *rotate* the image grid ?
>>>>>
>>>>>
>>>>>If so, then what you want to use is the following
>>>>>combination:
>>>>>
>>>>>
>>>>> * ResampleImageFilter
>>>>> * VersorRigid3DTransform
>>>>> * LinearInterpolator
>>>>>
>>>>>
>>>>>Set the rotation that you expect/want, in the Versor
>>>>>(unit quaternion) parameter of the Transform, and then
>>>>>run the ResampleImageFilter. The output image will have
>>>>>the description of the new output Grid.
>>>>>
>>>>>
>>>>>Note that another alternative is to use the IdentityTransform
>>>>>and to set a non-identity matrix in the OutputDirection()
>>>>>of the ResampleImageFilter. In other words, you set in the
>>>>>ResampleImageFilter the DCM (direction cosines) values that
>>>>>you want to see in the output image. These two procedures
>>>>>are *not* equivalent. The one with the VersorRigid3DTransform
>>>>>is what you would do in the context of image registration, when
>>>>>you are relating two different coordinate systems. While the
>>>>>IdentityTransform is what you would use when you simply want
>>>>>to resample the image with a new image grid.
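>>>>>
>>>>>Roughly, the IdentityTransform route would look like this (a sketch
>>>>>from memory, untested; the desired* variables are placeholders for
>>>>>whatever output grid you want):
>>>>>
>>>>>  // headers: itkResampleImageFilter.h, itkIdentityTransform.h,
>>>>>  //          itkLinearInterpolateImageFunction.h
>>>>>  typedef itk::OrientedImage< float, 3 >                   ImageType;
>>>>>  typedef itk::ResampleImageFilter< ImageType, ImageType > ResampleType;
>>>>>  typedef itk::IdentityTransform< double, 3 >              TransformType;
>>>>>  typedef itk::LinearInterpolateImageFunction< ImageType, double >
>>>>>                                                            InterpolatorType;
>>>>>
>>>>>  ResampleType::Pointer resampler = ResampleType::New();
>>>>>  resampler->SetTransform( TransformType::New() );
>>>>>  resampler->SetInterpolator( InterpolatorType::New() );
>>>>>
>>>>>  // Hand the filter the grid you want to see in the output image:
>>>>>  resampler->SetOutputDirection( desiredDirection );
>>>>>  resampler->SetOutputSpacing( desiredSpacing );
>>>>>  resampler->SetSize( desiredSize );
>>>>>  resampler->SetOutputOrigin( desiredOrigin );
>>>>>
>>>>>  resampler->SetInput( inputImage );
>>>>>  resampler->Update();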
>>>>>
>>>>>
>>>>>You will find this explained in detail in the
>>>>>ITK Software Guide.
>>>>>
>>>>> http://www.itk.org/ItkSoftwareGuide.pdf
>>>>>
>>>>>in particular in
>>>>>Section 6.9.4. "Resample Image Filter"
>>>>>in pdf-pages: 254-274.
>>>>>
>>>>>
>>>>>
>>>>> Regards,
>>>>>
>>>>>
>>>>> Luis
>>>>>
>>>>>
>>>>>
>>>>>---------------------
>>>>>Frank Ezekiel wrote:
>>>>>
>>>>>>Luis:
>>>>>> I'm not sure this example meets my needs. I'm working with 3D data, and want to EXPAND the FOV with appropriate modifications to the DCM and quaternion records.
>>>>>> So, my ideal app would take a 3D input nifti file of any resolution and dimensions and produce one of the requested resolution and dimensions.
>>>>>>
>>>>>>The command line might look like this:
>>>>>>Prog SourceData TargetFile Output: xdim ydim zdim xres yres zres
>>>>>>"ModifyImageExtent InputImageName OutputImageName 256 256 256 1.0 1.0 1.0"
>>>>>>
>>>>>> So far I've looked at the ResampleVolumesToBeIsotropic example, and this works just fine for changing the image extent. BUT, it does not show examples of how to use ITK to modify the DCM and quaternion because in this example the image extent is not changing.
>>>>>>
>>>>>> Do you have any other examples and/or functions to suggest?
>>>>>>Thanks in advance,
>>>>>>Frank
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>At 02:01 PM 3/26/2009, you wrote:
>>>>>>
>>>>>>
>>>>>>>Hi Frank,
>>>>>>>
>>>>>>>It looks like you simply need to extract a region of interest
>>>>>>>out of an input nifti file.
>>>>>>>
>>>>>>>You may want to look at the example:
>>>>>>>
>>>>>>> Insight/Examples/IO
>>>>>>> ImageReadRegionOfInterestWrite.cxx
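>>>>>>>
>>>>>>>If I remember right, the core of that example is
>>>>>>>itk::RegionOfInterestImageFilter. A rough, untested sketch, keeping
>>>>>>>the central 200 mm of a 256 mm volume at 1 mm per pixel ("reader"
>>>>>>>stands for your itk::ImageFileReader):
>>>>>>>
>>>>>>>  typedef itk::Image< signed short, 3 > ImageType;
>>>>>>>  typedef itk::RegionOfInterestImageFilter< ImageType, ImageType > ROIFilterType;
>>>>>>>
>>>>>>>  ROIFilterType::Pointer roiFilter = ROIFilterType::New();
>>>>>>>
>>>>>>>  ImageType::IndexType start;  // first kept pixel: (256 - 200) / 2 = 28
>>>>>>>  ImageType::SizeType  size;   // kept extent in pixels
>>>>>>>  start.Fill( 28 );
>>>>>>>  size.Fill( 200 );
>>>>>>>
>>>>>>>  ImageType::RegionType desiredRegion( start, size );
>>>>>>>  roiFilter->SetRegionOfInterest( desiredRegion );
>>>>>>>  roiFilter->SetInput( reader->GetOutput() );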
>>>>>>>
>>>>>>>
>>>>>>>Regards,
>>>>>>>
>>>>>>>
>>>>>>> Luis
>>>>>>>
>>>>>>>
>>>>>>>---------------------
>>>>>>>Frank Ezekiel wrote:
>>>>>>>
>>>>>>>
>>>>>>>>Hi Luis:
>>>>>>>> I'm looking for a tool that will transform a 3d nifti file containing structural data so that the FOV is changed and the header information (DCM, Quaternion) is properly modified. For example, I might want to transform an image within a 256 mm cubed volume such that it only contains the central 200mm on each axis.
>>>>>>>> Can you suggest any existing ITK examples? Or point me towards the functions I'll need?
>>>>>>>>Thanks as always,
>>>>>>>>Frank