[Paraview] pvbatch, MPI and MultiBlock data sets

Yves Rogez yves.rogez at obs.ujf-grenoble.fr
Tue Jan 15 11:17:06 EST 2013


OK, trying the ProcessIdScalars filter led me to look at the results in more
detail (with the spreadsheet view); a script sketch of this check follows the
observations below.

My input: 50 blocks, each containing 1 polydata made of 1 point and 1 vertex
cell.

With a single process:
     I get 50 blocks, each containing 1 point and 1 vertex cell, with PID = 0
     and block number = ( index of the block + 1 ).
     So this is OK.

With MPI (2 processes):
     I get 50 blocks, each containing 2 points and 2 vertex cells, with
     strange block numbers:
         block id 0 -> pt 1 has BN=2, pt 2 has BN=3
         block id 1 -> BN=(5,6)
         block id 2 -> BN=(8,9)
         block id 3 -> BN=(11,12), and so on...
     It seems that:
         BN for Point1 = ( ( BlockID + 1 ) * 3 ) - 1
         BN for Point2 = ( BlockID + 1 ) * 3

     Point 1 of a block always has PID=0 and point 2 always has PID=1.
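
For reference, a minimal pvbatch sketch of this kind of check (I actually
inspected the values in the spreadsheet view; the script below is only
illustrative, the output file and script names are assumptions, and it follows
the same servermanager style as the original script):

--------------------------------------------------------------------------------------------------

from paraview.simple import *

# Read the multiblock input and tag every point with the rank of the
# MPI process that owns its block.
r = servermanager.sources.XMLMultiBlockDataReader()
r.FileName = "input.vtm"

pids = servermanager.filters.ProcessIdScalars()
pids.Input = r

# Write the result so the per-point "ProcessId" array and the block
# structure can be inspected afterwards.
w = servermanager.writers.XMLMultiBlockDataWriter()
w.Input = pids
w.FileName = "output_pid.vtm"

w.UpdatePipeline()

--------------------------------------------------------------------------------------------------

Launched for instance with "mpiexec -np 2 pvbatch check_pid.py".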

Maybe I did something wrong when generating my input (please find a zip
attached)?
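
In case it helps, here is a minimal sketch of how such an input could be
generated with plain VTK Python (50 blocks, each a polydata holding one point
and one vertex cell). The actual generator is in the attached zip and may
differ; the point coordinates and file name below are just placeholders:

--------------------------------------------------------------------------------------------------

import vtk

# Build a multiblock data set with 50 leaf blocks, each a polydata
# containing a single point and a single vertex cell.
mb = vtk.vtkMultiBlockDataSet()
mb.SetNumberOfBlocks(50)

for i in range(50):
    pts = vtk.vtkPoints()
    pts.InsertNextPoint(float(i), 0.0, 0.0)

    verts = vtk.vtkCellArray()
    verts.InsertNextCell(1)
    verts.InsertCellPoint(0)

    pd = vtk.vtkPolyData()
    pd.SetPoints(pts)
    pd.SetVerts(verts)

    mb.SetBlock(i, pd)

# Write the composite data set as input.vtm plus its per-block pieces.
w = vtk.vtkXMLMultiBlockDataWriter()
w.SetFileName("input.vtm")
w.SetInputData(mb)
w.Write()

--------------------------------------------------------------------------------------------------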

Yves Rogez

IPAG
Institut de Planétologie et d'Astrophysique de Grenoble
Bat D de Physique - BP. 53 - 38041 Grenoble - FRANCE

tel : +33 (0)4 76 63 52 80
lab : +33 (0)4 76 63 52 89
On 15/01/2013 16:25, Utkarsh Ayachit wrote:
> Just to make sure, your ParaView is built with MPI support enabled,
> right? XMLMultiBlockDataReader does distribute the blocks to read
> among the processes. Try applying a "ProcessIdScalars" filter in the
> middle and then look at the ProcessId assigned to the blocks in the
> data. They should show how the blocks were distributed.
>
> Utkarsh
>
> On Tue, Jan 15, 2013 at 7:20 AM, Yves Rogez
> <yves.rogez at obs.ujf-grenoble.fr> wrote:
>> Hello,
>>
>> I'm trying to parallelize a process using pvbatch and MPI with MultiBlock
>> data sets, thus using the VTK composite pipeline.
>> I made a sample Python program that is representative of what I have to do:
>>
>> --------------------------------------------------------------------------------------------------
>>
>> from paraview.simple import *
>>
>> r = servermanager.sources.XMLMultiBlockDataReader()
>> r.FileName = "input.vtm"
>>
>> # Defining a sample fake data processing
>> nbTs = 1000
>> ts = {}
>> for tIndex in range( 0, nbTs ):
>>      ts[tIndex] = servermanager.filters.Transform()
>>      if tIndex == 0:
>>          ts[tIndex].Input = r
>>      else:
>>          ts[tIndex].Input = ts[tIndex - 1]
>>      ts[tIndex].Transform.Scale = [1.01,1.01,1.01]
>>
>> w = servermanager.writers.XMLMultiBlockDataWriter()
>> w.Input = ts[nbTs - 1]
>> w.FileName = "output.vtm"
>>
>> w.UpdatePipeline()
>>
>> --------------------------------------------------------------------------------------------------
>>
>> I launch that using "mpiexec -np 4 pvbatch myscript.py"
>> Everything runs well, but it takes longer with MPI than with just "pvbatch
>> myscript.py".
>>
>> By monitoring RAM, I noticed that the data seems to be loaded once per MPI
>> process, and that (maybe) all the MPI processes do exactly the same job,
>> computing all the data four times.
>>
>> Why aren't the blocks of my MultiBlock data set dispatched over the MPI
>> processes?
>> What am I doing wrong?
>>
>> Many thanks for any help,
>>
>> Yves
>>
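
Regarding the question above about whether ParaView is built with MPI support:
a minimal sketch, assuming servermanager exposes vtkProcessModule as usual, to
check from a pvbatch script how many MPI ranks it actually sees (the script
name "check.py" is a placeholder):

--------------------------------------------------------------------------------------------------

from paraview import servermanager

# Report this rank's id and the total number of MPI partitions.
# With an MPI-enabled build launched as "mpiexec -np 4 pvbatch check.py",
# the partition count should be 4; a non-MPI build reports 1.
pm = servermanager.vtkProcessModule.GetProcessModule()
print("rank %d of %d" % (pm.GetPartitionId(), pm.GetNumberOfLocalPartitions()))

--------------------------------------------------------------------------------------------------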

-------------- next part --------------
A non-text attachment was scrubbed...
Name: input.zip
Type: application/x-zip-compressed
Size: 31394 bytes
Desc: not available
URL: <http://www.paraview.org/pipermail/paraview/attachments/20130115/846d8900/attachment-0001.bin>

