[Insight-users] Performance issues for single slices
Luis Ibanez
luis.ibanez at kitware.com
Thu, 19 Jun 2003 15:56:43 -0400
Hi Simon
---------------------
Simon Warfield wrote:
>
> I am giving an example of the latter - running a 2D filtering process on
> 2D data extracted from a 3D volume --- the motivation being that e.g. a
> user may interactively tune parameters on the 2D slices and then run the
> presumably longer but same filtering on the 3D with the adjusted
> parameters. An example might be a noise smoothing filter.
>
I agree with you that it is useful and ergonomic to process a
single slice in order to get a feel for the right parameters
to use on the 3D volume. Many of the developers do this in
their applications. However, as Ross said, this is an
application issue: something done only for the convenience of
the user interaction.
ITK provides the ExtractImageFilter for dealing with these cases.
http://www.itk.org/Insight/Doxygen/html/classitk_1_1ExtractImageFilter.html
You extract the desired 2D slice image from the 3D volume, and
then you can proceed to feed it into a 2D pipeline.
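A minimal sketch of that pattern (not from the original message;
the file name, the slice number and the 2D smoothing filter are
just placeholders) could look like this:

    #include "itkImage.h"
    #include "itkImageFileReader.h"
    #include "itkExtractImageFilter.h"
    #include "itkDiscreteGaussianImageFilter.h"

    int main()
    {
      typedef itk::Image< float, 3 >  VolumeType;
      typedef itk::Image< float, 2 >  SliceType;

      typedef itk::ImageFileReader< VolumeType > ReaderType;
      ReaderType::Pointer reader = ReaderType::New();
      reader->SetFileName( "volume.mhd" );   // placeholder file name
      reader->UpdateOutputInformation();     // read meta-data only

      // Keep the full x-y extent and collapse z: a size of 0 in the
      // third dimension tells ExtractImageFilter to drop that
      // dimension and produce a genuinely 2D image.
      VolumeType::RegionType region =
        reader->GetOutput()->GetLargestPossibleRegion();
      VolumeType::SizeType  size  = region.GetSize();
      VolumeType::IndexType start = region.GetIndex();
      const unsigned int sliceNumber = 10;   // slice of interest
      start[2] = sliceNumber;
      size[2]  = 0;
      region.SetIndex( start );
      region.SetSize( size );

      typedef itk::ExtractImageFilter< VolumeType, SliceType >
        ExtractType;
      ExtractType::Pointer extractor = ExtractType::New();
      extractor->SetInput( reader->GetOutput() );
      extractor->SetExtractionRegion( region );
      // Newer ITK releases additionally require choosing how the
      // direction information is collapsed when a dimension is removed.
      extractor->SetDirectionCollapseToIdentity();

      // Any 2D filter can now consume the slice; Gaussian smoothing
      // stands in here for the noise-smoothing example.
      typedef itk::DiscreteGaussianImageFilter< SliceType, SliceType >
        SmootherType;
      SmootherType::Pointer smoother = SmootherType::New();
      smoother->SetInput( extractor->GetOutput() );
      smoother->SetVariance( 1.0 );
      smoother->Update();

      return 0;
    }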
Notice that parameters fitted in a 2D image do not necessarily
apply well to the 3D image from which the slice was selected.
Typical cases are the multiplier of the ConfidenceConnected filter
for region growing and the curvature scaling for level set filters
like ShapeDetection and GeodesicActiveContours. Image dimension is
relevant for those parameters.
Instead of extracting a slice and doing 2D processing, you
could take advantage of the streaming mechanism in ITK to
request that a filter process only a restricted region of
the output image. In this case the region will be the desired
(Nx x Ny x 1) image associated with a single slice. The advantage
of this approach is that you get the exact same slice you would
have obtained by first processing the entire volume and then
extracting one slice.
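A rough sketch of that approach (again, not from the original
message; the median filter, the file name and the slice number are
placeholders, and the filter must be one that supports streaming of
requested regions):

    #include "itkImage.h"
    #include "itkImageFileReader.h"
    #include "itkMedianImageFilter.h"

    int main()
    {
      typedef itk::Image< float, 3 >                         ImageType;
      typedef itk::ImageFileReader< ImageType >              ReaderType;
      typedef itk::MedianImageFilter< ImageType, ImageType > FilterType;

      ReaderType::Pointer reader = ReaderType::New();
      reader->SetFileName( "volume.mhd" );   // placeholder file name

      FilterType::Pointer filter = FilterType::New();
      filter->SetInput( reader->GetOutput() );

      // Ask the pipeline for the image meta-data only, so that the
      // largest possible region is known before any pixel is read.
      filter->UpdateOutputInformation();

      // Build the Nx x Ny x 1 region corresponding to one slice.
      ImageType::RegionType region =
        filter->GetOutput()->GetLargestPossibleRegion();
      ImageType::SizeType  size  = region.GetSize();
      ImageType::IndexType start = region.GetIndex();
      const unsigned int sliceNumber = 10;   // slice of interest
      start[2] = sliceNumber;
      size[2]  = 1;
      region.SetIndex( start );
      region.SetSize( size );

      // Restrict the requested region before updating: only this
      // slice (plus whatever neighborhood padding the filter needs
      // on the input side) will be read and processed.
      filter->GetOutput()->SetRequestedRegion( region );
      filter->Update();

      return 0;
    }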
>
> What makes it more computationally expensive to process
> a 1xNxM set of voxels than to process an NxM set of voxels ?
>
The extra computation comes from the fact that the filter has to
keep exploring all dimensions of the image (3D in this example),
looking for potential pixel neighbors. Most of these queries will
simply report that there is no neighboring pixel there,
so a lot of time is wasted answering boundary questions.
In the ITK implementation all those useless questions are factored
out and resolved at compile time if you instantiate your
filter as 2D.
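As an illustration (a sketch, not part of the original message; the
mean filter is only a stand-in for any neighborhood-based filter),
compare the two instantiations:

    #include "itkImage.h"
    #include "itkMeanImageFilter.h"

    typedef itk::Image< float, 2 >  Slice2DType;
    typedef itk::Image< float, 3 >  Volume3DType;

    // 2D instantiation: with the default radius of 1 each neighborhood
    // holds 3x3 = 9 pixels and there is no third axis to test at all.
    typedef itk::MeanImageFilter< Slice2DType, Slice2DType >
      Smoother2DType;

    // 3D instantiation fed with an Nx x Ny x 1 volume: each neighborhood
    // holds 3x3x3 = 27 offsets, and the z-boundary question has to be
    // answered for every single pixel of the slice.
    typedef itk::MeanImageFilter< Volume3DType, Volume3DType >
      Smoother3DType;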
>>
>> One of the reasons, as I recall, we templated the ITK code by
>> dimension was to avoid all of the case statements associated with
>> checking on the dimensionality of the data.
>>
> So it is basically a design choice to simplify implementation
> with some performance implications ?
>
ITK filters are N-dimensional. This versatility is achieved by the use
of ImageIterators, which are in charge of visiting the image pixels.
Thanks to ImageIterators you don't have to deal with the nightmare
of nested for() loops in the filter implementation. Note that even an
apparently trivial filter like a convolution filter gets pretty
ugly when implemented to run on a 5D image, especially if you include
the proper management of boundary conditions (as ITK does).
ImageIterators are dimension-aware: they take on the responsibility
of dealing with the intricacies of visiting all the image pixels
and of querying the values of their neighbors.
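A small sketch (not from the original message) of what this buys
you: the same loop body works unchanged for any dimension, with no
nested for() loops in sight.

    #include "itkImage.h"
    #include "itkImageRegionIterator.h"

    // Fill an image of ANY dimension with a constant value.  The same
    // single loop serves 2D, 3D, 5D, ... images.
    template< unsigned int VDimension >
    void FillWithConstant( itk::Image< float, VDimension > * image,
                           float value )
    {
      typedef itk::Image< float, VDimension > ImageType;
      itk::ImageRegionIterator< ImageType >
        it( image, image->GetBufferedRegion() );
      for ( it.GoToBegin(); !it.IsAtEnd(); ++it )
        {
        it.Set( value );
        }
    }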
When we pass a degenerate volume to a 3D filter, we are forcing the
ImageIterator to evaluate the third dimension all the time. A native 2D
iteration, on the other hand, looks for values only in the dimensions
where there is actually data available.
>
> I just have a feeling that a filter should work on a NxMxO data set,
> irrespective of the magnitude of N, or M or O. A filter that doesn't
> work or goes slow for any of N,M,O = 1 is broken.
>
Not quite:
"broken" is probably not the right adjective to use here.
Most filters based on neighborhood computations require a reasonable
image size, e.g. at least as big as the neighborhood itself.
Consider the following cases:
A) What should be the result of mathematical morphology
erosion applied to a volume made of a single slice?
If we assume that the boundary condition is that the image is
zero in the rest of the 3D space, then 3D erosion will simply
eliminate all the slice data. If we assume mirror or replicated
conditions, then we are wasting computations on trivial operations.
B) How can we interpret the evolution of a 3D level set in a single
slice?
Is the zero set running along the flat surface of the slice?
Should we imagine the slice as being replicated ad infinitum in
the 3rd dimension, and hence waste computation in that direction?
C) What should a 3D gradient filter return when applied to a
single slice?
Should it return a 2D image with 3D vectors?
And if so, what should the third component of such vectors be?
Zero? Or infinity?
I would actually mistrust any N-D filter that accepts
(N-1)-dimensional data sets. Chances are that its mathematical
implementation is questionable.
Regards,
Luis