[vtkusers] Streaming Pipeline
the.render.dude at gmail.com
Thu Mar 19 10:11:42 EDT 2009
We had determined that this subset of the code is where we were having
problems, so it's what we were exercising. Typically we have an Image Viewer
attached to the end of the pipeline, with a few other modules thrown in.
The dataset is a small subsample of the actual data. The full data is too large
to fit into memory without either going parallel or streaming, hence the test I
posted.
A typical use case is for a user to read in a particular image, drill into a
subregion and trace a feature through the layers above and below (typically
around 10 images on either side of the target). They then either make some
adjustments or edit some meta-data and then move to another subregion on the
same layer, or zoom out, switch layers, and repeat the process.
So, issue number 1 is the data-handling piece, which is why I was looking at
the image streaming/streaming executive. The second issue is that the
graphical display of large images is rather pokey at 6k, and we'll be
moving to tiled 6k images for even higher resolution.
Hope that clarifies what we're doing and maybe gives some insight for a solution.
On Thu, Mar 19, 2009 at 5:43 AM, Kevin H. Hobbs <hobbsk at ohiou.edu> wrote:
> Mark Bolstad wrote:
>> Hello list,
>> Can someone look at this simple pipeline and explain if I'm using the
>> StreamingPipeline correctly? It's operating on a set of 6k x 6k images, but
>> any large image set would work. I would expect that the streamer would help
>> reduce the memory usage, but I'm blowing the memory on my mac no matter how
>> I set it,
> I'm very confused by that code, but it looks like you are trying to load a
> series of images, combine them into a volume, and have the whole thing
> colorized. Is that correct?
> I cannot see any sink (a writer or a renderer) for these data.
> It looks like you are trying to stream the reading of each piece.
> Typically slice readers (like .png and .jpeg) cannot read any less than
> one slice at a time. When asked for part of a slice, they read the whole
> thing and throw out the rest.
> I find it's best to save streaming until the very end so that as much of
> the pipeline as possible is streamed. If you can use a streamed writer at
> the end of the pipeline, then it's possible to avoid EVER loading even one
> copy of a whole volume.
> The PNG reader can do the appending of many slices on its own; all you have
> to do is give it the file pattern and, I think, the number of slices.
> Why doesn't your pipeline look like this?
> vtkPNGReader -> vtkLookupTable -> vtkImageDataStreamer
> This pipeline could require as little RAM as the final volume plus a few pieces.
> If you want to write the colorized volume to disk in say vti format your
> pipeline could look like this:
> vtkPNGReader -> vtkLookupTable -> vtkXMLImageDataWriter
> and use as little RAM as one slice plus a few pieces of one slice.