[Paraview] ParaView in parallel
Paul Edwards
paul.m.edwards at gmail.com
Thu Oct 1 12:33:56 EDT 2009
Hi,
Thanks for your responses. My dataset is a multi-block structure consisting of
zones, each containing a single volume and multiple surfaces, e.g.:
- zone1 (vtkMultiBlockDataSet)
  - volume (vtkUnstructuredGrid)
  - surfaces (vtkMultiBlockDataSet)
    - surface1 (vtkPolyData)
    - surface2 (vtkPolyData)
    - ...
- ...
- zoneN
  - volume
  - surfaces
    - surface1
    - surface2
    - ...
I'm not sure how to keep this structure when writing a parallel reader. Do I
just return exactly the same block structure from each MPI process, but with
different data in each?
Thanks,
Paul
2009/9/23 Moreland, Kenneth <kmorel at sandia.gov>
> You said your data is multiblock, but you did not say how many blocks it
> has. D3 is going to run its partitioning algorithm on each block
> independently. This is bad if you have lots of blocks; you will end up with
> lots of tiny pieces on all the processes.
>
> To make your reader "parallel", you basically just have to set
> vtkStreamingDemandDrivenPipeline::MAXIMUM_NUMBER_OF_PIECES() in the output
> information in the RequestInformation call (usually you set it to -1 to
> allow the downstream pipeline to request any number of pieces). Then, in
> the RequestData call, read
> vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_PIECES() and
> vtkStreamingDemandDrivenPipeline::UPDATE_PIECE_NUMBER(), which in ParaView
> correspond to the number of processes and the local rank.
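>
> A minimal sketch of those two callbacks, assuming a reader derived from
> vtkMultiBlockDataSetAlgorithm (the class name MyReader is illustrative,
> not taken from an actual reader):
>
>   #include "vtkInformation.h"
>   #include "vtkInformationVector.h"
>   #include "vtkStreamingDemandDrivenPipeline.h"
>
>   int MyReader::RequestInformation(vtkInformation*, vtkInformationVector**,
>                                    vtkInformationVector* outputVector)
>   {
>     vtkInformation* outInfo = outputVector->GetInformationObject(0);
>     // Advertise that this reader can produce any number of pieces.
>     outInfo->Set(
>       vtkStreamingDemandDrivenPipeline::MAXIMUM_NUMBER_OF_PIECES(), -1);
>     return 1;
>   }
>
>   int MyReader::RequestData(vtkInformation*, vtkInformationVector**,
>                             vtkInformationVector* outputVector)
>   {
>     vtkInformation* outInfo = outputVector->GetInformationObject(0);
>     // On a parallel ParaView server these correspond to the number of
>     // server processes and the rank of this process.
>     int numPieces = outInfo->Get(
>       vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_PIECES());
>     int piece = outInfo->Get(
>       vtkStreamingDemandDrivenPipeline::UPDATE_PIECE_NUMBER());
>     // ... open the file and read only the data that belongs to "piece" ...
>     return 1;
>   }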
>
> If your data typically has lots of blocks, I would design the reader to
> simply assign blocks to pieces so that each piece has about the same amount
> of data and then read in the blocks associated with the requested piece.
> This should yield the simplest code and the fastest reading times, and the
> approach (generally) does not require you to worry about ghost cells.
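>
> One simple way to do that assignment (purely illustrative; the member
> this->NumberOfZones is hypothetical) is to deal the blocks out round-robin,
> which keeps the pieces balanced as long as the blocks are of similar size:
>
>   // Inside RequestData, after fetching numPieces and piece as above.
>   for (int blockId = 0; blockId < this->NumberOfZones; ++blockId)
>   {
>     if (blockId % numPieces != piece)
>     {
>       continue; // some other process will read this block
>     }
>     // Read block "blockId" from the file and insert it into the output
>     // vtkMultiBlockDataSet at the same index it has in the file.
>   }
>
> If the blocks vary a lot in size, you could instead sort them by size and
> assign each one greedily to the piece that currently has the least data.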
>
> The simplest example of a reader that responds to piece requests is the
> vtkParticleReader. That reader breaks up a list of points into the
> requested pieces, but it is not a stretch to do the same thing with the
> blocks.
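>
> For reference, the piece logic in a reader like that boils down to
> splitting a flat list of N items into contiguous ranges (a simplified
> illustration of the idea, not the actual vtkParticleReader code):
>
>   // With N = 10 items and 4 pieces this gives the ranges
>   // [0,2), [2,5), [5,7), [7,10).
>   vtkIdType begin = static_cast<vtkIdType>(piece) * N / numPieces;
>   vtkIdType end   = static_cast<vtkIdType>(piece + 1) * N / numPieces;
>   // Read items begin .. end-1 only.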
>
> -Ken
>
>
>
> On 9/23/09 3:09 AM, "Paul Edwards" <paul.m.edwards at gmail.com> wrote:
>
> Hi,
>
> I've been experimenting with ParaView in parallel without much success.
> Below I have listed my setup, experiences, and questions!
>
> *Setup*
> My setup is a 6-node gigabit cluster (without graphics cards), compiled
> with OSMesa. The filesystem is only a RAID on the frontend, exported over
> NFS. Each node has 8 cores, and I experimented with running both 1 core per
> node and 8 cores per node.
>
> *Data*
> The data I am reading consists of multiple blocks, where each block
> contains an unstructured grid and a multiblock structure of polydata
> surfaces. The data is about 10 million points in total.
>
> *Reader*
> My reader is not written for parallel, so I loaded the mesh and partitioned
> it with D3. The partitioning took a long time (approximately 2 minutes); is
> this normal? Once the data was partitioned I didn't really notice much
> difference in speed, and the rendering performance was considerably worse
> than on a single node with the same data. Is this just a result of using
> OSMesa? If so, does anyone have suggestions for how many nodes to run on?
> I also tried saving the data once it was partitioned, but loading it back
> actually took longer (is this due to the lack of a parallel filesystem?).
>
> *Filters*
> I tried to run some of my custom filters, but one that calculates scalar
> variables for point data didn't display the variables in the GUI after
> running the filter. Do I need to do something different for parallel
> filters?
>
> Finally, is there any documentation (or simple examples) for implementing
> parallel readers in ParaView? And does anyone have suggestions for how to
> split up the data?
>
> Thanks,
> Paul
>
>
>
> Kenneth Moreland
> Sandia National Laboratories
> email: kmorel at sandia.gov
> phone: (505) 844-8919
> web: http://www.cs.unm.edu/~kmorel
>
>