[Paraview] pvbatch, MPI and MultiBlock data sets

Yves Rogez yves.rogez at obs.ujf-grenoble.fr
Tue Jan 15 07:20:31 EST 2013


Hello,

I'm trying to parallelize a process using pvbatch and MPI, with a
MultiBlock data set, and thus using the VTK composite pipeline.
I made a sample Python program that is representative of what I have to do:

--------------------------------------------------------------------------------------------------

    from paraview.simple import *

    r = servermanager.sources.XMLMultiBlockDataReader()
    r.FileName = "input.vtm"

    # Defining a sample fake data processing
    nbTs = 1000
    ts = {}
    for tIndex in range(0, nbTs):
        ts[tIndex] = servermanager.filters.Transform()
        if tIndex == 0:
            ts[tIndex].Input = r
        else:
            ts[tIndex].Input = ts[tIndex - 1]
        ts[tIndex].Transform.Scale = [1.01, 1.01, 1.01]

    w = servermanager.writers.XMLMultiBlockDataWriter()
    w.Input = ts[nbTs - 1]
    w.FileName = "output.vtm"

    w.UpdatePipeline()

--------------------------------------------------------------------------------------------------

I launch that using "mpiexec -np 4 pvbatch myscript.py".
Everything runs fine, but it takes longer with MPI than with a plain
"pvbatch myscript.py".

By monitoring RAM, I noticed that the data seems to be loaded once per
MPI process, and (maybe) all the MPI processes do exactly the same
job, so the whole data set is computed four times.
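
For what it's worth, here is the kind of check I have in mind to see
whether pvbatch really runs as four MPI partitions (a minimal sketch; I
am assuming vtkProcessModule is reachable through servermanager and
that, without --symmetric, the script only executes on the root
process):

    from paraview import servermanager

    # Ask ParaView how many MPI partitions it sees and which one is
    # running this script. Without --symmetric only the root process
    # executes the Python code, so a single line of output is expected.
    pm = servermanager.vtkProcessModule.GetProcessModule()
    print("partition %d of %d"
          % (pm.GetPartitionId(), pm.GetNumberOfLocalPartitions()))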

Why aren't the blocks of my MultiBlock data set distributed over the
MPI processes?
What am I doing wrong?

Many thanks for any help,

Yves