[vtk-developers] Streaming slow, requires a lot of memory
tom fogal
tfogal at apollo.sr.unh.edu
Tue Jun 14 20:46:47 EDT 2005
Hey all, we're working with growing datasets that have in some cases
started to exceed physical RAM on our workstations. I've been working on
utilizing streaming to help mitigate the problem, and it seems that I'm
doing everything correctly and still not seeing the improvements I was
hoping for.
I've written a converter that takes our in-house format and writes out a
vtkXMLRectilinearGrid file with a configurable number of pieces. I've
tried pieces of ~80 meg / field and ~160 meg / field, where there
are 3 vector fields and 3 scalar fields in the output rectilinear grid.
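For reference, the output stage of the converter is roughly the following
sketch (hypothetical names; it assumes a vtkRectilinearGrid has already been
assembled from our format, and uses the vtkXMLWriter piece controls
SetNumberOfPieces / SetWritePiece as I understand them in 4.4):

```cpp
// Sketch of the converter's output stage (VTK 4.x style pipeline).
// WritePieces() and its arguments are illustrative, not our real code.
#include "vtkRectilinearGrid.h"
#include "vtkXMLRectilinearGridWriter.h"

void WritePieces(vtkRectilinearGrid *grid, int numPieces)
{
  vtkXMLRectilinearGridWriter *writer = vtkXMLRectilinearGridWriter::New();
  writer->SetInput(grid);               // VTK 4.x connection, pre-pipeline-rework
  writer->SetNumberOfPieces(numPieces); // split the dataset into N pieces
  writer->SetWritePiece(-1);            // -1: write every piece, not just one
  writer->SetFileName("output.vtr");
  writer->Write();
  writer->Delete();
}
```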
In both MayaVi and a custom C++ app I've developed, the datasets take a
prohibitive amount of memory (and thus time) to work with.
I was under the impression that once I had a reader/format that
supported streaming, the maximum amount of memory that would be
required for processing would be the size of a piece plus some caching
overhead. It seems as if the entire dataset is being read in regardless
of what portions are to be used.
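Concretely, the behavior I expected is what the VTK 4.x update-extent API
seems to promise: request one piece at a time from the reader's output, and
only that piece gets read and held in memory. A minimal sketch of what I mean
(assuming the reader actually honors piece requests):

```cpp
// Expected streaming pattern in VTK 4.x: iterate over pieces, asking the
// reader's output to update only the current piece each time.
#include "vtkRectilinearGrid.h"
#include "vtkXMLRectilinearGridReader.h"

void ProcessByPiece(const char *fname, int numPieces)
{
  vtkXMLRectilinearGridReader *reader = vtkXMLRectilinearGridReader::New();
  reader->SetFileName(fname);
  vtkRectilinearGrid *out = reader->GetOutput();
  for (int piece = 0; piece < numPieces; ++piece)
    {
    out->SetUpdateExtent(piece, numPieces); // request just this one piece
    out->Update();                          // reader should load only it...
    // ...but in practice memory use looks like the whole dataset is read.
    }
  reader->Delete();
}
```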
I'm using VTK 4.4. I haven't checked recently, but I vaguely remember
comparing the output of 'grep -r "Piece" *' on VTK CVS versus VTK 4.4 a
month or two ago and seeing tremendous differences. Are there a
significant number of filters / algorithms in VTK 4.4 that don't take
advantage of streaming? And if one pipeline component doesn't 'obey'
streaming, does that ruin the fun for the rest of the pipeline?
Thanks for a great library, and any clarification you could offer,
-tom