[Insight-developers] Blox Images

Damion Shelton dmshelto@andrew.cmu.edu
Tue, 29 Jan 2002 17:11:28 -0500


We worried about this for a while when designing the blox hierarchy...

> It appears that these classes are performing filtering right in the Image
> subclass, which does not follow the pipeline mechanism.

That's correct. These are not "normal images", nor are they intended to be. 
The problem is that a linked list at each pixel is a fundamentally 
different sort of thing from most other primitive data types: there is no 
sensible conversion between the two, so you would get compile errors any 
time you attempted to combine blox images with "normal" filters, or blox 
filters with "normal" images.

The input and output data types are also "locked". The "output" of the 
routine:

>   /** Walk the source image, find core atoms, store them.  */
>   void FindCoreAtoms();

produces a very specific type of output (a BloxCoreAtomItem), for 
which there is no possible cast to other data types (ints, floats, etc.). 
Most of the other filter-type functions found in the blox classes are 
similar. You would end up with a bunch of filters with names like:

BloxBoundaryPointImageToBloxCoreAtomImageImageToImageFilter

or even worse:

BloxCoreAtomImageToBloxCoreAtomEigenanalyzedImageToImageFilter

Names aside, the "bad" thing about these filters is that they could _only_ 
take a very specific kind of input image and would only produce a very 
specific kind of output image.
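
For the record, here's roughly what one of those filters would have to 
look like (purely hypothetical - nothing like this exists in the toolkit). 
Both template arguments of ImageToImageFilter are pinned to one concrete 
image type each, so the class is usable for exactly one conversion:

  // Hypothetical filter - illustrative only.
  #include "itkImageToImageFilter.h"
  #include "itkBloxBoundaryPointImage.h"
  #include "itkBloxCoreAtomImage.h"

  template <unsigned int VImageDimension>
  class BloxBoundaryPointImageToBloxCoreAtomImageFilter
    : public itk::ImageToImageFilter<
        itk::BloxBoundaryPointImage<VImageDimension>,
        itk::BloxCoreAtomImage<VImageDimension> >
  {
    // GenerateData() would do nothing but call the equivalent of
    // FindCoreAtoms() on its output - an extra class that adds no
    // new capability.
  };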

> I think these classes should be changed to filters.  A normal itk::Image
> should be sufficient to store the Blox image pixels, given the proper
> pixel type as its first template argument.

Subclassing BloxImage from Image allows the addition of image parameters 
that are relevant to blox images but not to normal images. For example, 
without adding an m_NumItems parameter to BloxImage (which may or may not 
be implemented yet - I don't remember offhand), it's impossible to tell how 
many items are stored in the BloxImage without explicitly counting them 
every time you want that information.

In other words, a blox image is not just a regular image with a special 
pixel type.
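
A minimal sketch of what I mean (whether m_NumItems is actually in the 
repository yet I don't recall, so treat this as illustrative):

  // Illustrative sketch - an image that knows how many items it holds.
  #include "itkImage.h"

  template <typename TBloxPixelType, unsigned int VImageDimension>
  class BloxImage
    : public itk::Image<TBloxPixelType, VImageDimension>
  {
  public:
    // Maintained as items are inserted, so querying the total is
    // O(1) instead of a walk over every pixel's linked list.
    unsigned long GetNumItems() const { return m_NumItems; }

  protected:
    BloxImage() : m_NumItems(0) {}
    unsigned long m_NumItems;
  };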

> A filter should be able to
> take the source image and produce the corresponding Blox image using the
> same analysis done by the FindBoundaryPoints method.  The pipeline
> mechanism would take care of much of the work currently implemented by
> hand in the test.

I agree that it would work, but I don't know that implementing it that way 
would increase the clarity or efficiency of the code. We do gain quite a 
bit by subclassing from Image - the ability to use iterators, in 
particular - but I don't know that adhering to the pipeline architecture 
(in this instance) really gains anyone anything, and it creates a large 
number of very specific filters that don't allow us to do anything we 
can't already do.
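
As an example of what the subclassing buys us, a standard region iterator 
works on a blox image unchanged; the only difference is that each "pixel" 
you visit is itself a list of items. (A rough usage sketch - the exact 
typedefs are from memory.)

  // Illustrative usage - iterate a blox image like any other image.
  #include "itkBloxCoreAtomImage.h"
  #include "itkImageRegionIterator.h"

  void WalkBloxImage()
  {
    typedef itk::BloxCoreAtomImage<3> BloxImageType;

    BloxImageType::Pointer bloxImage = BloxImageType::New();
    // ... set regions, allocate, and populate the image ...

    itk::ImageRegionIterator<BloxImageType>
      it( bloxImage, bloxImage->GetLargestPossibleRegion() );

    for ( it.GoToBegin(); !it.IsAtEnd(); ++it )
    {
      BloxImageType::PixelType & pixel = it.Value();
      // ... walk the linked list of items stored at this position ...
    }
  }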

I don't have any philosophical problem with "smart" images that are more 
than "dumb" pixel containers, as long as that idea isn't abused. 
Comments/thoughts?

-Damion-