[Insight-developers] Image as an OrientedImage Progress : Introducing Coordinate System Tree ?

Luis Ibanez luis.ibanez at kitware.com
Mon Oct 6 09:10:21 EDT 2008


Hi Steve,

Thanks for looking at the proposal.

About your comments/questions:


1) Yes, the Orientation support has been fully implemented
    in itk::Image.
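
    For reference, here is a minimal sketch of what that looks
    like on a plain itk::Image (the index and direction values
    are placeholders, not part of the original discussion):

        #include "itkImage.h"

        typedef itk::Image< short, 3 > ImageType;

        ImageType::Pointer image = ImageType::New();

        // Direction cosines: each column is one image axis
        // expressed in physical coordinates (identity here,
        // as a placeholder).
        ImageType::DirectionType direction;
        direction.SetIdentity();
        image->SetDirection( direction );

        // The direction participates in the index <-> physical
        // point mapping.
        ImageType::IndexType index = {{ 10, 20, 5 }};
        ImageType::PointType point;
        image->TransformIndexToPhysicalPoint( index, point );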

2) Yes, introducing the scene graph representation would
    mean deprecating itk::OrientedImage. Not because it is
    wrong to use it, but because it would be redundant with
    the behavior of itk::Image.

    In any case, itk::OrientedImage was always intended to be
    a transitional device.


3) For a 2D image embedded in 3D space:

    We will instantiate the image with dimension 2.

    The transform node above it in the scene graph will have
    two 3D vectors. There are 2 vectors because each one
    corresponds to one of the axes of the 2D image. The vectors
    have 3 components because they are embedded in 3D space.

    You bring up a good point about the image Origin. The
    transform nodes will have to carry a Translation in addition
    to the direction cosines. That translation will be in 3D
    coordinates and will place the image corner in 3D with
    respect to the parent node (good catch).

    Looking again at the SpatialObjects, we will have to decouple
    the image dimension from the space dimension. That could be done
    by adding another template parameter to this class.
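
    Here is a rough sketch of the mapping such a transform node
    would store: two 3D direction vectors arranged as a 3 x 2
    matrix, plus a 3D translation. The values and the
    free-standing use of itk::Matrix are illustrative only;
    there is no such node class in ITK today:

        #include "itkMatrix.h"
        #include "itkPoint.h"
        #include "itkVector.h"

        // Each column is one axis of the 2D image expressed
        // in the 3D host space (placeholder values).
        itk::Matrix< double, 3, 2 > directions;
        directions( 0, 0 ) = 1.0;  directions( 0, 1 ) = 0.0;
        directions( 1, 0 ) = 0.0;  directions( 1, 1 ) = 1.0;
        directions( 2, 0 ) = 0.0;  directions( 2, 1 ) = 0.0;

        // 3D translation placing the image corner with respect
        // to the parent node (placeholder values).
        itk::Vector< double, 3 > translation;
        translation[0] = 10.0;
        translation[1] = -5.0;
        translation[2] = 30.0;

        // A 2D physical point of the image maps into the 3D
        // parent space as:
        itk::Point< double, 2 > p2;
        p2[0] = 12.5;
        p2[1] =  7.0;

        itk::Point< double, 3 > p3 = directions * p2 + translation;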

4) For a 3D image embedded in 3D space:

    Yes, the transform node above the image should include a
    Translation in 3D, for the purpose of locating the image
    in 3D space.
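
    As a small sketch, a plain itk::TranslationTransform can
    stand in for that node (the offset and index values are
    placeholders):

        #include "itkImage.h"
        #include "itkTranslationTransform.h"

        typedef itk::Image< short, 3 >                  ImageType;
        typedef itk::TranslationTransform< double, 3 >  TransformType;

        ImageType::Pointer image = ImageType::New();

        // Transform node placing the image in the parent 3D space.
        TransformType::Pointer nodeTransform = TransformType::New();
        TransformType::OutputVectorType offset;
        offset[0] = 100.0;
        offset[1] =   0.0;
        offset[2] = -25.0;
        nodeTransform->SetOffset( offset );

        // Index -> image physical point -> parent space point.
        ImageType::IndexType index = {{ 1, 2, 3 }};
        ImageType::PointType imagePoint;
        image->TransformIndexToPhysicalPoint( index, imagePoint );

        ImageType::PointType parentPoint =
          nodeTransform->TransformPoint( imagePoint );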


5) For a 4D image embedded in 3D space:

    If the sampling across time is not uniform, then the data
    cannot be considered a single 4D image, but simply a
    collection of 3D images.

    ITK assumes regular spacing along every dimension.

    If the spacing along time is not regular, then it won't
    be possible to represent the data as a single 4D image.

    Of course, there is always the option of resampling/interpolating
    across time, in order to get a regularly sampled 4D image.
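
    Once the time samples are regular, the temporal step simply
    becomes the fourth spacing component of a 4D itk::Image
    (the numbers below are placeholders):

        #include "itkImage.h"

        typedef itk::Image< float, 4 > ImageType;

        ImageType::Pointer image = ImageType::New();

        ImageType::SpacingType spacing;
        spacing[0] = 1.0;   // x (mm)
        spacing[1] = 1.0;   // y (mm)
        spacing[2] = 2.5;   // z (mm)
        spacing[3] = 3.0;   // time step, assumed constant
        image->SetSpacing( spacing );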



      Luis


--------------------------
Steve M. Robbins wrote:
> Hi Luis et al.,
> 
> I'm still trying to get my head around your proposal.
> 
> Incidentally, what is the status of
> http://www.itk.org/Wiki/Proposals:Orientation?  The basic proposal
> appears to be "add direction cosines to itk::Image", which is now
> done, right?
> 
> 
> On Tue, Sep 23, 2008 at 11:01:03AM -0400, Luis Ibanez wrote:
> 
>>Here is a proposal:
>>
>>
>>It seems that we could get around the challenges of image orientation
>>by introducing in ITK a formal representation of coordinate system
>>hierarchies (most of which is already implemented in SpatialObjects).
> 
> 
> What does this proposal mean for itk::OrientedImage?  Would it be
> deprecated?  Discouraged in favour of ImageSpatialObject?
> 
> 
> 
>>Here is how it could work:
>>
>>Case A: Image 2D embedded in a 3D Space:
>>========================================
>>
>>   The image Directions will be 2D x 2D, and the image will
>>   be the child node of another node that represents the
>>   3D host space. The link between the two nodes will contain
>>   a 2D x 3D matrix transform mapping the two coordinate axes
>>   of the 2D image to the 3D host space.
> 
> 
> Do I understand correctly that the Image would be instantiated with
> TDimension=2?  Does this image hold the pixel spacing and handle
> conversion from IndexType to a (2D) "physical point";
> i.e. Image::TransformIndexToPhysicalPoint()?  What is the Origin set
> to?  What are the direction cosines set to?
> 
> Naively, it seems that you have to leave the 3D origin and 3D
> direction cosine information in the child-to-parent transformation.
> The 2D origin and directions would be fixed to (0,0) and the identity?
> 
> Currently, it looks like ImageSpatialObject always contains an image
> of the same dimension, so the spatial object would have to be 2D.  It
> also appears that a child SpatialObject is always the same dimension
> as its parent.  In order to have a 2D image in a 3D SpatialObject,
> would you relax one or both of these constraints?
> 
> 
> 
> 
>>Case B: Image 3D embedded in a 3D space:
>>========================================
>>
>>   The image directions will be 3D x 3D.
>>   There is no need for a parent node, but for consistency we could
>>   always have a parent node representing the 3D host space.
>>   The transform between the image node and the host space node
>>   will be a 3D x 3D identity transform.
> 
> 
> Alternatively -- for consistency with Case A -- you could keep the
> origin and direction information in the child-to-parent
> transformation.
> 
> Introducing the SpatialObject means that for the "ND image in an ND
> space" case, there are two places to store the index-to-world
> transformation.  I worry that this makes it too easy to get it wrong.
> 
> 
> 
>>Case C: Image 4D embedded in a 3D space:
>>=========================================
>>
>>   The image directions will be 4D x 4D and they will represent
>>   the concept of "simultaneity of acquisition" from the scanner.
>>
>>   The image node will be a child of the parent host space node,
>>   and the transform relating them will be a 4D x 3D transform
>>   that will represent the actual orientation of the dataset
>>   in the 3D world.
> 
> 
> Suppose the 4D data is a time series of 3D volumes.  The data may not
> be sampled uniformly in time.  What is the Image's spacing set to?
> 
> 
> -Steve

