[vtkusers] Optimizing memory usage with MemoryLimitImageDataStreamer (continued)

Kalle Pahajoki kalle.pahajoki at gmail.com
Tue Nov 7 13:23:15 EST 2006


Ignore the previous message with the same title, it was accidentally sent
prematurely.

Hi

I'm developing software with VTK that sometimes needs to process very
large datasets. It is quite easy to run into a situation where the software
eats up all memory and has to be killed. I've had some success with using
streaming (especially the vtkMemoryLimitImageDataStreamer), but I'm not sure
I'm making use of its full potential. Therefore, I have a couple of
questions.

Our software is built so that there can be a variable number of
processing filters between the data source and the rendering. Currently,
the software doesn't build a pipeline in the traditional sense, where you
connect all the filters together and then update the last one; instead,
most of the steps Update the pipeline themselves and pass the processed
data forward.

A typical "pipeline" inside the program might be:

XMLImageDataReader -> Reslice -> \
                                  }- Merge the datasets to an RGB image ->
Extract single slice -> Show it (outside VTK)
XMLImageDataReader -> Reslice -> /

1) Am I correct in assuming that it would be more efficient to construct
such a pipeline without updating it, and then execute the whole pipeline
with streaming, than it is to execute each part of the pipeline with
streaming by itself?

To clarify, currently (because none of the steps really knows about the
preceding steps) I do something like this, which seems a bit unclean:

XMLImageDataReader -> Reslice -> Streamer -> \
                                              }- Merge the datasets to an
RGB image -> Streamer -> ... and so on
XMLImageDataReader -> Reslice -> Streamer -> /


2) Can the readers utilize streaming? In the above pipeline example, can
the whole pipeline really be streamed, or is it necessary to read the whole
dataset into memory and only after that stream the pieces through the
pipeline?

3) If you have any other tips for managing the processing of large
datasets (using Python, if that makes a difference), they'd be more than
welcome.

Kalle
