[vtkusers] Multi block structured grid support?
jnorris at mcs.anl.gov
Fri Sep 28 13:09:46 EDT 2001
Once upon a time, Prabhu Ramachandran wrote:
> I was thinking of ways to support multi-block structured data for VTK.
> This kind of data is frequently used by CFD users. I did a search on
> the topic and the only relevant thread I got was the one here:
> David Marshall also expressed interest in developing a
> vtkCompositeStructuredGrid class that treats multiple blocks as a
> single data set. I think this is an important thing to be able to do.
> I would appreciate it if someone who has done this could let us know
> about their work. Or could someone *please* give me a few tips as to
> where I should start looking to see if this is possible and how one
> might go about doing it in the "right" way?
I've had to do a similar thing for my visualization tool, but it has to
support both structured and unstructured blocks. The only real solution
available to me was to merge the blocks into one vtkUnstructuredGrid. This
has quite a few disadvantages, of course: filters/routines for unstructured
grids are generally slower than the corresponding filters/routines for
structured grids, yet I'm forced to use the slower ones. There's also the
added memory overhead, since the cell structure is no longer implicit.
A vtkCompositeGrid class that supported multiple blocks and used the most
efficient version of a particular filter/routine for each block would be
a dream come true.
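The memory overhead mentioned above is easy to make concrete: a structured
grid's hexahedral cells are implicit in its point dimensions, while an
unstructured grid must store eight point ids per hexahedron explicitly. A
minimal pure-Python sketch (illustrative only, not VTK code; the function
name and i-fastest point numbering are my assumptions, though the numbering
matches VTK's structured-grid convention):

```python
def hex_connectivity(ni, nj, nk):
    """Build the explicit hexahedron connectivity that a structured grid
    keeps implicit.

    Points are numbered i-fastest; an (ni, nj, nk)-point grid has
    (ni-1)*(nj-1)*(nk-1) hexahedral cells, each stored as 8 point ids.
    """
    def pid(i, j, k):
        # Flat point id for point (i, j, k), i varying fastest.
        return i + ni * (j + nj * k)

    cells = []
    for k in range(nk - 1):
        for j in range(nj - 1):
            for i in range(ni - 1):
                cells.append((
                    pid(i, j, k), pid(i + 1, j, k),
                    pid(i + 1, j + 1, k), pid(i, j + 1, k),
                    pid(i, j, k + 1), pid(i + 1, j, k + 1),
                    pid(i + 1, j + 1, k + 1), pid(i, j + 1, k + 1),
                ))
    return cells

# A 10x10x10-point block: 9*9*9 = 729 cells, 8 ids each -- connectivity
# storage a structured grid avoids entirely.
cells = hex_connectivity(10, 10, 10)
```

Multiply that by the number of blocks and it is clear why merging everything
into one unstructured grid is costly for large CFD cases.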
> AFAIK, the parallel framework allows people to split their
> visualization (and data) amongst different processors and yet treat
> the visualization itself as one, i.e. the algorithms are applied to
> the whole data and not to individual pieces. So there is already some
> kind of multi-block support across multiple machines. Shouldn't it be
> easy to use this to support true multi-block data?
The client-server version of my vis tool uses a parallel server, but I'm
taking advantage of the fact that the data being visualized was generated on
the same server. This means that the data is already distributed across
the nodes (we have a linux cluster). Why merge the data, only to have to
partition it back up again? My server consists of a master process on the
server front-end, and slave processes on the compute nodes. The slaves read
the local data and perform the requested operations on that data (isosurfaces,
glyphs, etc). The polydata is collected by the master process and sent to
the client, which does the rendering. The trickiest part was generating
the surface (via vtkGeometryFilter), since you get a lot of false surfaces
when you're dealing with a bunch of separate blocks instead of the merged
dataset. I had to write a filter to find and remove polygons that existed
on more than one slave process. I'm not really happy with it as it stands;
it works right now, but I'm sure that I could break it if I tried.
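The idea behind that filter can be sketched: a false surface at a shared
block boundary is a polygon that two processes both generate with identical
geometry, so each face can be keyed by its sorted (and rounded, to tolerate
floating-point noise) vertex coordinates, and any key seen more than once is
dropped. This is a hypothetical stand-alone sketch under that assumption, not
the actual filter from the post:

```python
from collections import Counter

def face_key(points, ndigits=9):
    """Order-independent key for a polygon: its vertices, rounded and sorted.

    Two processes that emit the same block-boundary face get the same key
    even if they list the vertices in a different order.
    """
    return tuple(sorted(tuple(round(c, ndigits) for c in p) for p in points))

def remove_shared_faces(faces_per_process):
    """Drop every face whose geometry appears on more than one process.

    faces_per_process: one list of faces per slave process, each face a
    sequence of (x, y, z) vertex tuples. Returns the filtered lists.
    """
    counts = Counter(face_key(f)
                     for faces in faces_per_process for f in faces)
    return [[f for f in faces if counts[face_key(f)] == 1]
            for faces in faces_per_process]

# Two blocks sharing the unit square at x == 1: the shared interface quad
# is removed from both, and each block keeps its outer face.
left  = [[(0,0,0), (0,1,0), (0,1,1), (0,0,1)],   # outer face of block 0
         [(1,0,0), (1,1,0), (1,1,1), (1,0,1)]]   # shared interface
right = [[(1,0,0), (1,0,1), (1,1,1), (1,1,0)],   # same interface, reordered
         [(2,0,0), (2,1,0), (2,1,1), (2,0,1)]]   # outer face of block 1
kept = remove_shared_faces([left, right])
```

In a real parallel setting the keys would have to be gathered to one process
(or all-to-all exchanged) before counting, and exactly coincident vertices
are assumed; grids whose block boundaries don't share point locations would
need a tolerance-based match instead.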
Center for Simulation of Advanced Rockets