[vtkusers] Rendering of large data sets
Adrian Friebel Work
friebel at izbi.uni-leipzig.de
Tue Mar 13 11:15:31 EDT 2012
Hello,
First, my problem:
I have binary image data (1024x1024x~100 voxels, ~140 MB) of a segmented network.
The substructures of this network are very thin, only a few pixels wide.
I extracted a skeleton of this network, converted it into a graph and
post-processed the graph.
Now I want to visualize the segmented network together with the graph.
Since the network is very big and the substructures are so small, it is
necessary to visualize only a region of interest (ROI), but it should be
possible to move the whole segmented network together with the graph
through this region (or, the other way round, to move the ROI over the data).
My approach at the moment:
Read the image data and extract the contour, so that I get a vtkPolyData
object. I feed this into a vtkPolyDataMapper, add some clipping planes, and
make the actor a bit transparent to be able to see the graph inside. The graph
is also converted into vtkPolyData (mapper, clipping planes, ...). With GUI
sliders I set new positions for the actors. In principle everything
works fine, as long as I use small data sets (300x300x35).
If I want to visualize the original data sets I run into problems:
- The contour filtering takes quite a while + needs huge amounts of
memory. On my windows machine it results in a crash. On Linux it works.
- The rendering is extremely slow (for data set sizes of ~512x512x100)
when I try to interact with the objects (rotate, move them through the
ROI, ...).
I already use ReleaseDataFlagOn on my filters for a smaller memory
footprint, and ImmediateModeRenderingOn on the mappers (which I know makes
rendering slow, but otherwise the PolyData object is too large). I also
tried the decimation filters. The only one that runs in acceptable time on
the bigger data sets is vtkQuadricClustering, but it doesn't preserve the
topology, which is a problem to some extent. And I am not sure whether the
memory consumption stays acceptable if I use a larger number of divisions
to get more accurate results.
Are there any tricks, should I use another approach (e.g. volume
rendering?), or is the data set simply too large?
I would greatly appreciate any thoughts, remarks, or of course
solutions you come up with. :)
Best regards,
Adrian.