[Insight-developers] Blox Images

Damion Shelton dmshelto@andrew.cmu.edu
Tue, 29 Jan 2002 21:31:53 -0500


Ok... I've modified my position somewhat (see the end), but still have some 
concerns. I had this paragraph further down, but I think I'll lead off with 
it:

I guess the way I think of the problem is something like the itk::Vector 
case. Vectors have a Normalize() member function, which operates on the 
data stored in the vector. You wouldn't implement a class 
NormalizeVectorToVectorFilter, because normalization is a fundamental 
sort of thing you do with vectors. Many of the things the different 
BloxImages/Pixels can do operate like this.
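To make the analogy concrete, here's a rough sketch (an illustrative 
stand-in, not the real itk::Vector code) of the design point: an 
operation fundamental to a type lives on the type itself rather than in 
a separate filter class.

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch (not the real itk::Vector): an operation that is
// fundamental to a type lives on the type itself, rather than in a
// hypothetical NormalizeVectorToVectorFilter class.
struct VectorSketch
{
  double v[3];

  double Norm() const
  {
    return std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
  }

  // Normalize in place, in the spirit of itk::Vector's Normalize().
  void Normalize()
  {
    const double n = Norm();
    for (double & c : v)
    {
      c /= n;
    }
  }
};
```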

I suppose I also don't think of BloxImages as "true images", despite their 
parentage. They are a data structure that can be navigated (superficially) 
in a manner similar to images, but there is a lot about them that is not 
very image-like.

> I agree that normal filters cannot work with the Blox images, however,
> there is no reason someone shouldn't be able to write new filters that can
> work with a BloxImage.

The class BloxImage actually doesn't do anything at present. It's an 
artifact of the pixeltraits days. However, it seems to make sense, and will 
probably be useful in the future. It currently is templated in exactly the 
same fashion as Image itself.

Because Blox is kind of a general concept, the functionality of a given 
blox image is determined by its pixel type. There are two subclasses of 
BloxImage, called BloxCoreAtomImage and BloxBoundaryPointImage, with pixel 
types of BloxCoreAtomPixel and BloxBoundaryPointPixel respectively. These 
are fixed in each of the two subclasses (i.e. not template parameters).
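The subclassing pattern amounts to something like the following sketch 
(simplified, hypothetical names, not the real ITK declarations): each 
subclass fixes its pixel type instead of exposing it as a template 
parameter.

```cpp
#include <type_traits>

// Hedged sketch of the subclassing described above (simplified, not the
// real ITK declarations): each subclass fixes its pixel type instead of
// leaving it as a template parameter.
struct BloxCoreAtomPixelSketch {};
struct BloxBoundaryPointPixelSketch {};

template <typename TBloxPixelType, unsigned int VImageDimension = 3>
struct BloxImageSketch {};

template <unsigned int VImageDimension = 3>
struct BloxCoreAtomImageSketch
  : BloxImageSketch<BloxCoreAtomPixelSketch, VImageDimension> {};

template <unsigned int VImageDimension = 3>
struct BloxBoundaryPointImageSketch
  : BloxImageSketch<BloxBoundaryPointPixelSketch, VImageDimension> {};
```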

Keep in mind that aside from the general concept of a blox being a linked 
list stored as what is essentially a "large" pixel, these two different 
blox images actually do very different things. So, aside from the way you 
navigate them, there's actually little functional similarity between the 
two.
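The "linked list stored as a large pixel" idea can be sketched like this 
(illustrative names, not the real ITK types): each pixel is itself a 
list of items, so the "image" is really a grid of lists.

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <vector>

// Hedged sketch of the "large pixel" idea (illustrative names, not the
// real ITK types): each pixel holds a linked list of items.
struct ItemSketch
{
  double position[3];
};

template <typename TItem>
struct BloxPixelSketch
{
  std::list<TItem> items;  // the linked list stored in one "large" pixel
};

// The blox "image" is then just a flat grid of such pixels.
template <typename TItem>
struct BloxGridSketch
{
  std::vector< BloxPixelSketch<TItem> > pixels;
  explicit BloxGridSketch(std::size_t n) : pixels(n) {}
};
```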

Therefore, the idea of a filter "working with" a blox image is kind of 
nebulous. It has to work with a _particular_ blox image (a particular pixel 
type).

> The set of operations defined for blox images
> shouldn't be limited by the methods implemented by the BloxImage classes.

Agreed. However, the current implementation doesn't prevent that. All the 
current implementation says is that "filters" (functions) defined within 
the BloxImage subclasses can only be used on those classes, which is in 
fact the case, since the operations require a specific pixel and image type.

>  This is the idea of the pipeline: new filters can be added to perform new
> operations on the existing class representations of their input/output
> data.  The documentation could simply state that only filters with the
> suffix "BloxImageFilter" can be used with a BloxImage (see below).

Hmmmm.... weeelllll.... ok, I'll give you this one.

> Since BloxImage inherits from Image, and the filters only apply to
> BloxImage types, the names could be:
>
>   BoundaryPointToCoreAtomBloxImageFilter
>   CoreAtomToCoreAtomEigenanalyzedBloxImageFilter
>
> Note the "BloxImageFilter" suffix.

I'll concede this as well, and this is probably a good idea (although not 
for the eigen analysis example). The most likely candidate would be 
conversion of BoundaryPointImages to CoreAtomImages, which involves 
converting between two different data types. I don't have any problem with 
implementing that.

> Some filters are meant to work only with vector data, others with only
> scalar.  I see no difference here.  It will still be okay to produce
> BloxImage outputs, which could potentially become inputs to new filters
> written in the future.

The analogy would hold if the idea of a BloxPixel weren't so general. A 
BloxPixel can hold literally anything; as an extreme example, it could 
even hold a list of images at every pixel location. So, while some 
filters might work only on scalar or vector data, a "blox filter" would 
be similar to an edge detector that only took unsigned short ints as 
input and only produced vectors of doubles as output.

Again, I'm not arguing that implementing such a beast is impossible, but in 
the case where the legal operations on the data are so closely tied to the 
representation of the data, the argument is a little less convincing.

> The template argument of the BloxImage could be
> reduced to specify only the representation used inside the blox pixel,
> and/or the dimension of the image (to help lock down the type of
> input/output image).

This is already the case:

template <typename TBloxPixelType, unsigned int VImageDimension = 3, traits...>

> Changing the implementation to a pipeline filter may not increase the
> clarity of the code for this specific case, but it will greatly help in
> consistency with the rest of ITK.  This consistency will automatically
> increase clarity overall.

The problem is that anything you do with BloxPixels is a "specific case". 
Here's what I think can be successfully changed:

itkGradientImageToBloxBoundaryPointImageFilter - handles creation of 
boundary point images, given an image of covariant vectors representing 
non-normalized gradients

itkBoundaryPointToCoreAtomBloxImageFilter - creates core atoms given an 
image of boundary points
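As a rough sketch of what the second conversion amounts to (simplified, 
hypothetical names and a stand-in pairing rule, not the actual ITK 
filter interface), a "filter" here is just a function from one blox 
image type to another:

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <vector>

// Hedged sketch of a boundary-point-to-core-atom conversion (names and
// the pairing rule are illustrative, not the real ITK algorithm).
struct BoundaryPoint { double position[3]; };
struct CoreAtom      { BoundaryPoint ends[2]; };

using BoundaryPointImage = std::vector< std::list<BoundaryPoint> >;
using CoreAtomImage      = std::vector< std::list<CoreAtom> >;

// Pairs consecutive boundary points within each pixel's list, as a
// stand-in for the real core-atom search.
CoreAtomImage
BoundaryPointToCoreAtomSketch(const BoundaryPointImage & input)
{
  CoreAtomImage output(input.size());
  for (std::size_t i = 0; i < input.size(); ++i)
  {
    auto it = input[i].begin();
    while (it != input[i].end())
    {
      auto first = it++;
      if (it == input[i].end()) { break; }
      output[i].push_back(CoreAtom{{*first, *it}});
      ++it;
    }
  }
  return output;
}
```

The point is only that the conversion consumes one pixel type and 
produces a different one, which is what makes it a natural pipeline 
filter.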

The remaining operations one performs on a core atom image, eigenanalysis 
and voting, modify ivars of CoreAtomPixels - you don't get a "new" image as 
a result, so I'm uncomfortable implementing these as filters. If you did, 
you'd either:

1) Use the same input and output image types, with the input having "empty" 
ivars and the output having "filled"

2) Use different types, with additional ivars in the output image

Neither of these is particularly satisfactory. However, the two filters 
mentioned above should be easy to construct and I'd be happy to do so.
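The in-place alternative I'm leaning toward would look roughly like this 
sketch (hypothetical names, not the real ITK API): eigenanalysis fills 
in member variables of an existing pixel rather than producing a new 
image, which is why a member function feels more natural here than a 
filter.

```cpp
#include <cassert>

// Hedged sketch (hypothetical names, not the real ITK API): the
// operation modifies ivars in place; no new image is produced.
struct CoreAtomPixelSketch
{
  double eigenvalues[3] = {0.0, 0.0, 0.0};
  bool analyzed = false;

  void DoEigenanalysis()
  {
    // A real implementation would run an eigen-decomposition here and
    // store the results in the ivars above.
    analyzed = true;
  }
};
```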

-Damion-