[Insight-users] Performance issues for single slices
Simon Warfield
warfield at bwh . harvard . edu
Fri, 20 Jun 2003 11:19:42 -0400
Hi Luis,
Thanks for the comments. The thinking behind the current design is clear.
Luis Ibanez wrote:
> Hi Simon
>
> ---------------------
> Simon Warfield wrote:
> >
> > I am giving an example of the latter - running a 2D filtering
> > process on 2D data extracted from a 3D volume --- the motivation
> > being that e.g. a user may interactively tune parameters on the
> > 2D slices and then run the presumably longer but same filtering
> > on the 3D with the adjusted parameters. An example might be a
> > noise smoothing filter.
> >
>
> I agree with you in that it is useful and ergonomic to process a
> single slice in order to get a feeling of the right parameters
> to use in the 3D volume. Many of the developers are doing this
> for their applications. However, as Ross said, this is an application
> issue. Something that is done only for the convenience of the user
> interaction.
There is a fuzzy boundary between application issues and toolkit
issues. After all, the toolkit would not be valuable unless
applications were built from it. So there is a question of how much
complexity should live in the toolkit implementations, where it is
implemented once and automatically regression tested, and how much
complexity should be left for the application developers to handle,
where certain idioms for working with ITK may be re-implemented over
and over.
>
>
> ITK provides the ExtractImageFilter for dealing with these cases.
> http://www.itk.org/Insight/Doxygen/html/classitk_1_1ExtractImageFilter.html
> You extract the desired 2D slice image from the 3D volume, and
> then you can proceed to feed it into a 2D pipeline.
>
> Notice that parameters fitted in a 2D image do not necessarily
> apply well to the 3D image from which the slice was selected.
> Typical cases are the multiplier of the ConfidenceConnected filter
> for region growing and the curvature scaling for level set filters
> like ShapeDetection and GeodesicActiveContours. Image dimension is
> relevant for those parameters.
Absolutely true.
>
>
>
> Instead of extracting a slice and doing 2D processing, you
> could take advantage of the streaming mechanism in ITK for
> requesting a filter to process only a restricted region of
> the output image. In this case the region will be the desired
> (Nx * Ny * 1 ) image associated with a single slice. The advantage
> of this approach is that you get the exact same slice you would
> have obtained if you first process the entire volume and then
> extract one slice.
This is a valuable capability.
>
>
>
> >
> > What makes it more computationally expensive to process
> > a 1xNxM set of voxels than to process an NxM set of voxels ?
> >
>
>
>
> The extra computations are due to the fact that the filter has to
> keep exploring all dimensions of the image (3D in this example),
> looking for potential pixel neighbors. Most of these queries will
> simply return saying that there is no neighbor pixel there, ...
> so a lot of time is wasted asking boundary questions.
>
> In the ITK implementation all those useless questions are factored
> out and resolved at compile time if you instantiate your
> filter as being 2D.
>
>
>
> >>
> >> One of the reasons, as I recall, we templated the ITK code by
> >> dimension was to avoid all of the case statements associated with
> >> checking on the dimensionality of the data.
> >>
> > So it is basically a design choice to simplify implementation
> > with some performance implications ?
> >
>
>
>
> ITK filters are N-dimensional. This versatility is achieved by the use
> of ImageIterators, which are in charge of visiting the image pixels.
> Thanks to ImageIterators you don't have to deal with the nightmare
> of nested for()-loops in the filter implementation. Note that even an
> apparently trivial filter like the convolution filter will get pretty
> ugly when implemented to run on a 5D image, especially if you include
> the right management of boundary conditions (as ITK does).
So we can say of ITK filters: they are N-dimensional; they generate
correct results at boundaries; and if you have (N-1)-dimensional data,
or you run the ExtractImageFilter to get (N-1)-dimensional data, a
template instantiation for that dimension will work just fine, or you
can use streaming to handle it just fine.
But N-dimensional filters aren't happy if one of the dimensions
consists largely of boundary. Along that dimension, the iterators
spend much of their time checking for data that does not exist.
>
>
> ImageIterators are dimension-aware; they assume the responsibility
> of dealing with the intricacies of visiting all the image pixels
> and querying the values of their neighbors.
>
> When we pass a degenerate volume to a 3D filter, we are forcing the
> ImageIterator to evaluate the third dimension all the time. A native 2D
> iteration will, on the other hand, look for values only in the dimensions
> where there is actually data available.
Let's imagine an application developer using ITK to tackle the
implementation of a tool that lets a user filter some 3D data.
Case a: Imagine ImageIterators deal with degenerate volumes efficiently.
The application developer uses ITK filters to build an application
enabling users to process sub-volumes of interest, selected at runtime.
Everything is wonderful and simple for the developer and the user.
The benefit of having this in ITK is that the testing mechanism
ensures that once it is done right, it stays that way.
Case b: ImageIterators are simplified and do not deal with degenerate
volumes efficiently.
The application developer uses ITK filters to build an application
enabling users to process sub-volumes of interest, selected at runtime.
The developer discovers a performance problem when a user asks to
process certain subsets of the data, or loads certain acquisitions. A
debugging and performance monitoring process ensues, and the developer
identifies the ImageIterator handling of certain cases as the problem.
So the developer instantiates the processing filters not just for
N-dimensional data, but for every dimension 1, 2, ..., N-1 as well,
then introduces a large switch statement and special-cases the handling
of certain subsets of the data so as to get efficient filtering on all
legal inputs.
The cost to the application developer community of not having this in
ITK is that the process gets repeated by every application developer
who hits the problem, and that application code may never get
regression tested, which hurts the user community as well.
>
>
>
> >
> > I just have a feeling that a filter should work on a NxMxO data set,
> > irrespective of the magnitude of N, or M or O. A filter that doesn't
> > work or goes slow for any of N,M,O = 1 is broken.
> >
>
> Not quite,
>
> "broken" is probably not the right adjective to use here.
>
> Most filters based on neighborhood computation require a reasonable
> image size. E.g. at least as big as the neighborhood size.
Of course, appropriate handling of boundary conditions is important
whatever the size of the input data. If you were to run a filter on,
say, a thin-slab CT acquisition, where the radiation dose is minimized
by acquiring only the anatomy around a tumor, I would hope the
boundaries of the slab would still be treated in a sensible manner,
despite an entire head not having been acquired.
Since ITK is templated over dimension and therefore provides an
(N-1)-dimensional instantiation of each N-dimensional filter, there is
clearly a meaningful definition of each filter on a subset of the data.
>
>
>
> Consider the following cases:
>
>
> A) What should be the result of mathematical morphology
> erosion applied in a volume made of a single slice ?
>
> If we assume that the boundary conditions are that the image is
> zero in the rest of the 3D space... then 3D erosion will simply
> eliminate all the slice data. If we assume mirror or replicated
> conditions then we are wasting computations on trivial operations.
The boundary-condition assumptions are independent of the size of the
data. Whether we assume 0 outside the image, or mirroring or
replication of the boundary, the operations can be efficiently
implemented for NxMx1 data.
>
>
>
> B) How can we interpret the evolution of a 3D level set in a single
> slice ?
>
> Is the zero set running along the flat surface of the slice ?
> Should we imagine the slice as being replicated ad infinitum in
> the 3rd dimension, and hence waste computation in that direction ?
>
>
> C) What should a 3D gradient filter return when applied to a
> single slice ?
>
> Should it return a 2D image with 3D vectors ?
> and if so,... what should the third component of such vectors be ?
> zero? or infinity ?
>
>
>
> I would actually mistrust any N-D filter that accepts
> (N-1)-dimensional data sets. Chances are that its mathematical
> implementation is questionable.
Consider the efficient distance transform algorithm of Breu et al.
(PAMI 1995), or that of Maurer et al. (IPMI 2001 and PAMI 2003),
entitled ``A Linear Time Algorithm for Computing Exact Euclidean
Distance Transforms of Binary Images in Arbitrary Dimensions''. This
paper describes an optimally efficient exact distance transform
algorithm that operates by recursion over dimension, where the
N-dimensional solution requires first computing the (N-1)-dimensional
solution.
>
>
>
> Regards,
>
>
>
> Luis
>
>
>
>
--
Simon