[Insight-developers] Blox Images

Brad King brad.king@kitware.com
Tue, 29 Jan 2002 22:21:21 -0500 (EST)


Damion,

> stored in the vector. You wouldn't implement a class 
> NormalizeVectorToVectorFilter because the normalization operation is a 
> fundamental sort of thing you do with vectors.
However, one might implement a filter to take an input vector image and
produce an output image in which every pixel has been normalized.
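To make the distinction concrete, here is a minimal sketch of such a per-pixel normalization filter. The types are plain C++ stand-ins, not the actual ITK classes; a real implementation would be an itk::ImageToImageFilter over an image of itk::Vector pixels.

```cpp
#include <array>
#include <cmath>
#include <vector>

// Illustrative stand-ins: a "vector image" is a flat vector of
// fixed-size pixel vectors.  Not the real ITK image types.
using Pixel = std::array<double, 3>;
using VectorImage = std::vector<Pixel>;

// Produce an output image in which every pixel vector has been
// normalized to unit length (zero vectors pass through unchanged).
VectorImage NormalizeVectorImage(const VectorImage& input) {
  VectorImage output;
  output.reserve(input.size());
  for (const Pixel& p : input) {
    const double norm = std::sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
    if (norm == 0.0) {
      output.push_back(p);
    } else {
      output.push_back({p[0]/norm, p[1]/norm, p[2]/norm});
    }
  }
  return output;
}
```

The point is that normalizing a single vector is a fundamental vector operation, while applying it across every pixel of an image is exactly what a filter is for.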

> I suppose I also don't think of BloxImages as "true images", despite
> their parentage. They are a data structure that can be navigated
> (superficially)  in a manner similar to images, but there is a lot
> about them that is not very image-like.
Very true, but the Mesh isn't very image-like either, and it is still
included in the pipeline architecture.

> template <typename TBloxPixelType, unsigned int VImageDimension=3, 
> traits...>
Oops, you are right, of course.

> > Changing the implementation to a pipeline filter may not increase the
> > clarity of the code for this specific case, but it will greatly help in
> > consistency with the rest of ITK.  This consistency will automatically
> > increase clarity overall.
> 
> The problem is that anything you do with BloxPixels is a "specific case". 
I meant "specific case" as if the blox classes were a stand-alone example
independent of ITK.  They wouldn't be confusing in that case.  As part of
ITK, though, the pipeline should be consistent.

> Here's what I think can be successfully changed:
> 
> itkGradientImageToBloxBoundaryPointImageFilter - handles creation of 
> boundary point images given an image of covariant vectors representing 
> non-normalized gradients
> 
> itkBoundaryPointToCoreAtomBloxImageFilter - creates core atoms given an 
> image of boundary points
Okay, these two would take care of most of my concern.  Currently these
appear to be done by setting an "Input" separate from the pipeline and
then calling some other method.  They should be able to fit on the end of
an imaging pipeline so that modifications to the pipeline further back
cause them to be re-executed automatically on the next update.
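The re-execution behavior I have in mind can be sketched with modified-time bookkeeping. This is only an illustration of the idea, not the real itk::ProcessObject API: each stage records when it last ran, and Update() walks backward through the pipeline, re-executing a stage only when its parameters or its input have changed since.

```cpp
// Minimal demand-driven pipeline sketch (illustrative, not ITK's
// actual implementation).
struct Filter {
  Filter* input = nullptr;         // upstream filter, if any
  unsigned long modifiedTime = 1;  // when this stage was last modified
  unsigned long executeTime = 0;   // when this stage last ran
  int executions = 0;              // for demonstration only

  static unsigned long& GlobalClock() {
    static unsigned long t = 1;
    return t;
  }

  // Mark this stage's parameters (or data) as changed.
  void Modified() { modifiedTime = ++GlobalClock(); }

  // Recursively bring the pipeline up to date, re-running a stage
  // only if it is out of date with respect to itself or its input.
  void Update() {
    if (input) input->Update();
    const unsigned long inputTime = input ? input->executeTime : 0;
    if (executeTime < modifiedTime || executeTime < inputTime) {
      ++executions;                // GenerateData() would run here
      executeTime = ++GlobalClock();
    }
  }
};
```

With the blox filters hooked up this way, calling Update on the last filter after changing something upstream re-executes exactly the stages that are out of date, with no separate "set an Input, then call some other method" step.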

> The remaining operations one performs on a core atom image, eigenanalysis 
> and voting, modify ivars of CoreAtomPixels - you don't get a "new" image as 
> a result, so I'm uncomfortable implementing these as filters. If you did, 
> you'd either:
> 
> 1) Use the same input and output image types, with the input having "empty" 
> ivars and the output having "filled"
> 
> 2) Use different types, with additional ivars in the output image
I see two options here:
  - Implement the filter to perform the analysis on the CoreAtomBloxImage.
    Output is produced by copying the input, and then running the
    algorithm to fill in the ivars.

  - Consider the output of itkBoundaryPointToCoreAtomBloxImageFilter,
    a CoreAtomBloxImage, to be the final output of the pipeline.  The
    program would then perform the remaining operations with the
    provided methods.  It would have to call Update on the last filter
    in the pipeline to make sure everything is up to date.  This is
    similar to the current approach in the blox test, but moves the
    dividing line closer to the end of processing.
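The first option can be sketched as follows. The names and the "analysis" are placeholders, not the real core-atom classes or eigenanalysis; the point is only that copying the input and then filling in the ivars lets the same pixel type serve as both the "empty" input and the "filled" output, so the operation still fits the filter model.

```cpp
#include <optional>
#include <vector>

// Illustrative pixel with an ivar that is unset until analysis runs.
struct CoreAtomPixel {
  double value = 0.0;
  std::optional<double> eigenvalue;  // "empty" until filled in
};
using CoreAtomImage = std::vector<CoreAtomPixel>;

// Output is produced by copying the input image and then running the
// analysis to fill in the per-pixel ivars; the input is left untouched.
CoreAtomImage EigenAnalysisFilter(const CoreAtomImage& input) {
  CoreAtomImage output = input;        // copy the input...
  for (CoreAtomPixel& p : output) {
    p.eigenvalue = p.value * p.value;  // ...then fill in the ivars
  }                                    // (placeholder computation)
  return output;
}
```

The copy is the price of keeping the pipeline's input/output semantics; for blox images, which are typically much smaller than the images they summarize, that price should be modest.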

-Brad