[Insight-users] Lack of memory for segmentation
Miller, James V (Research)
millerjv at crd.ge.com
Fri Nov 18 09:16:50 EST 2005
There are a number of strategies for managing memory.
1) SetReleaseDataFlagOn() on a filter will instruct the pipeline to release
a filter's bulk data once a downstream filter has consumed its output. So if
you have A -> B -> C and set ReleaseDataFlag on A and B, then once B updates,
the memory associated with the output of A will be freed, then once C updates,
the memory associated with the output of B will be freed.
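A minimal sketch of option 1. The filter types and file name here are
illustrative (a reader -> median -> threshold chain standing in for
A -> B -> C), not from the original post:

```cpp
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkMedianImageFilter.h"
#include "itkThresholdImageFilter.h"

int main()
{
  typedef itk::Image<float, 3>                         ImageType;
  typedef itk::ImageFileReader<ImageType>              ReaderType;
  typedef itk::MedianImageFilter<ImageType, ImageType> MedianType;
  typedef itk::ThresholdImageFilter<ImageType>         ThresholdType;

  // A -> B -> C : reader -> median -> threshold
  ReaderType::Pointer    reader    = ReaderType::New();
  MedianType::Pointer    median    = MedianType::New();
  ThresholdType::Pointer threshold = ThresholdType::New();

  reader->SetFileName("volume.mhd");   // illustrative file name
  median->SetInput(reader->GetOutput());
  threshold->SetInput(median->GetOutput());

  // Release A's output once B has consumed it, and B's once C has.
  reader->ReleaseDataFlagOn();
  median->ReleaseDataFlagOn();

  threshold->Update();
  return 0;
}
```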
2) StreamingImageFilter. You can place a StreamingImageFilter at the end of
your pipeline (or anywhere within the pipeline) and tell the StreamingImageFilter
to divide the problem into N sections. The pipeline will then execute N times,
each time running on a subset of the data and piecing the final result together.
As mentioned, not all filters support streaming, and it is not always
obvious which filters do. But here is a general rule of thumb:
any filter that does not rely on having access to every pixel in an image
in order to produce the value for a single pixel SHOULD be streamable. So
simple pixelwise filters (like thresholding, adding, shift/scaling) should
stream. Neighborhood filters (like convolution, derivatives, morphology)
should also stream. Even things like anisotropic diffusion should stream,
because one can calculate a neighborhood of influence based on the number
of iterations. In contrast, operations like FFT, region growing, level
sets, and the entry-level watershed do not stream.
One way to see if filters stream is to just try it. Put a streaming image
filter in the pipeline and place SimpleFilterWatchers on the filters in the
pipeline and see how many times each filter in the pipeline updates.
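Putting options 2 and the watcher test together, a hedged sketch
(filter types, file name, and the division count are illustrative):

```cpp
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkMedianImageFilter.h"
#include "itkStreamingImageFilter.h"
#include "itkSimpleFilterWatcher.h"

int main()
{
  typedef itk::Image<float, 3>                            ImageType;
  typedef itk::ImageFileReader<ImageType>                 ReaderType;
  typedef itk::MedianImageFilter<ImageType, ImageType>    MedianType;
  typedef itk::StreamingImageFilter<ImageType, ImageType> StreamerType;

  ReaderType::Pointer   reader   = ReaderType::New();
  MedianType::Pointer   median   = MedianType::New();
  StreamerType::Pointer streamer = StreamerType::New();

  reader->SetFileName("volume.mhd");        // illustrative file name
  median->SetInput(reader->GetOutput());
  streamer->SetInput(median->GetOutput());
  streamer->SetNumberOfStreamDivisions(8);  // run the pipeline in 8 pieces

  // The watcher prints start/end events; a streamable filter should
  // report one execution per stream division (8 here).
  itk::SimpleFilterWatcher watcher(median, "median");

  streamer->Update();
  return 0;
}
```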
On another note, a 32-bit Windows system only allows a process to access a
little over 1 GB of memory (unless you are running the server editions of
Windows). This limit applies regardless of how much memory you have in the
computer.
Jim
-----Original Message-----
From: insight-users-bounces+millerjv=crd.ge.com at itk.org
[mailto:insight-users-bounces+millerjv=crd.ge.com at itk.org]On Behalf Of
Atwood, Robert C
Sent: Friday, November 18, 2005 8:49 AM
To: Olivier Rousseau; insight-users at itk.org
Subject: RE: [Insight-users] Lack of memory for segmentation
I have experienced similar problems, in fact I was thinking of asking
for some more help on the list...
First, some basic problems need to be ruled out.
The data type used to store the information: what is yours? 600 x 600 x
600 x 4 bytes (float data) = 864,000,000 bytes.
Are you on a Windows or *nix system? I find Windows doesn't like big
programs: I think it defaults to a 2/2 split, reserving 2 GB of address
space for the kernel and giving your program at most 2 GB even though you
have 4 GB available.
Is it a 32-bit processor? Then you cannot (easily) address more than 4 GB
for the whole process (data and stacks), so the amount you can use for
data will be less than 4 GB.
Filtering a volume creates a new data space about the same size as the
original, so once you have a few connections in a pipeline you will
approach that 4 GB limit, since each one could use about 860 MB.
My pipeline just does these operations: import, extract region of
interest, extract and print a slice, 3D median with selected kernel,
extract and print a slice, cast to float, apply nonlinear diffusion
filter, cast to original data type, extract and print a slice, write the
volume file. It would exceed the available memory with an image of about
300 MB (with a region of interest covering nearly the whole thing) if I
did not apply the method mentioned below.
Hopefully this brings you to the same point as me for this problem. I
think we both need to find out the best way to release the memory used by
previous filters in the pipeline. Can the filters be directed to do so
automatically when in a pipeline? I saw something about 'streaming', but
as I recall this is not fully implemented so far; is this what we want?
Currently I use brace-delimited scopes, which seems like a bit of a
kludge since it demolishes the nice syntax of the pipeline.
Below is abridged code showing what I have done.
[set up data types etc.]
MyImage::Pointer filteredImage;
[ read the image using importFilter, similar to the example that came
with ITK ]
{
    MyPreFilter::Pointer prefilter = MyPreFilter::New();
    prefilter->SetInput(importFilter->GetOutput());
    prefilter->Update(); /* actually in a try/catch */
    filteredImage = prefilter->GetOutput();
    filteredImage->Update(); /* actually in a try/catch */
} /* end block of existence for prefilter */
free(imagedata); /* my raw data, not handled by an ITK smart pointer */
To see what's going on, I used numerous snippets of the following and
compiled with -DVERBOSE:
#ifdef VERBOSE
system("free");
#endif /*VERBOSE*/
-----Original Message-----
From: insight-users-bounces+r.atwood=imperial.ac.uk at itk.org
[mailto:insight-users-bounces+r.atwood=imperial.ac.uk at itk.org] On Behalf
Of Olivier Rousseau
Sent: 17 November 2005 19:57
To: insight-users at itk.org
Subject: [Insight-users] Lack of memory for segmentation
Hi,
I am trying to segment a 3D volume that is 600x600x600 pixels. I ran
ShapeDetectionLevelSetFilter,
but an exception is thrown during the segmentation saying that I'm
running out of memory.
I am surprised, since the computer I am using has 4 GB of RAM.
Is it possible that I am doing something wrong?
Otherwise, what size of 3D volume can I expect to be able to segment?
Or, is it possible to run this segmentation algorithm on a cluster?
Thanks
Olivier
_______________________________________________
Insight-users mailing list
Insight-users at itk.org
http://www.itk.org/mailman/listinfo/insight-users