[Paraview] pvbatch, MPI and MultiBlock data sets
Yves Rogez
yves.rogez at obs.ujf-grenoble.fr
Tue Jan 29 02:53:09 EST 2013
Any ideas on this problem?
Thanks,
Yves Rogez
*IPAG*
/Institut de Planétologie et d'Astrophysique de Grenoble /
Bat D de Physique - BP. 53 - 38041 Grenoble - FRANCE
tel : +33 (0)4 76 63 52 80
lab : +33 (0)4 76 63 52 89
On 15/01/2013 17:25, Yves Rogez wrote:
> And also the output file for MPI 2 processes...
>
> Yves Rogez
>
> On 15/01/2013 16:25, Utkarsh Ayachit wrote:
>> Just to make sure, your ParaView is built with MPI support enabled,
>> right? XMLMultiBlockDataReader does distribute the blocks to read
>> among the processes. Try applying a "ProcessIdScalars" filter in the
>> middle and then look at the ProcessId assigned to the blocks in the
>> data. They should show how the blocks were distributed.
>>
>> Utkarsh
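To illustrate the idea behind the suggestion above: when ParaView is built with MPI, the composite reader is expected to split the blocks across ranks rather than have every rank load everything. Here is a toy, plain-Python illustration of such a split (the round-robin scheme is an assumption for illustration only; it is not necessarily ParaView's actual distribution policy):

```python
# Toy illustration (plain Python, NOT ParaView code): one way a composite
# reader could distribute blocks across MPI ranks. The round-robin scheme
# is an assumption for illustration; ParaView's actual policy may differ.
def assign_blocks(num_blocks, num_ranks):
    """Return {rank: [block indices]} using a round-robin split."""
    assignment = {rank: [] for rank in range(num_ranks)}
    for block in range(num_blocks):
        assignment[block % num_ranks].append(block)
    return assignment

# With 8 blocks and 4 ranks, each rank reads 2 blocks instead of all 8.
print(assign_blocks(8, 4))
```

If the ProcessId values reported by the filter show every block on rank 0 (or identical data on every rank), the blocks are not being distributed at all, which would match the symptoms described below.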
>>
>> On Tue, Jan 15, 2013 at 7:20 AM, Yves Rogez
>> <yves.rogez at obs.ujf-grenoble.fr> wrote:
>>> Hello,
>>>
>>> I'm trying to parallelize a process using pvbatch and MPI with a MultiBlock
>>> data set, and thus the VTK composite pipeline.
>>> I made a sample Python program that is representative of what I have to do:
>>>
>>> --------------------------------------------------------------------------------------------------
>>>
>>> from paraview.simple import *
>>>
>>> r = servermanager.sources.XMLMultiBlockDataReader()
>>> r.FileName = "input.vtm"
>>>
>>> # Defining a sample fake data processing
>>> nbTs = 1000
>>> ts = {}
>>> for tIndex in range( 0, nbTs ):
>>>     ts[tIndex] = servermanager.filters.Transform()
>>>     if tIndex == 0:
>>>         ts[tIndex].Input = r
>>>     else:
>>>         ts[tIndex].Input = ts[tIndex - 1]
>>>     ts[tIndex].Transform.Scale = [1.01, 1.01, 1.01]
>>>
>>> w = servermanager.writers.XMLMultiBlockDataWriter()
>>> w.Input = ts[nbTs - 1]
>>> w.FileName = "output.vtm"
>>>
>>> w.UpdatePipeline()
>>>
>>> --------------------------------------------------------------------------------------------------
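As an aside, chaining 1000 Transform filters that each scale by 1.01 compounds the scale multiplicatively, so the net scale factor of the fake workload can be sanity-checked with plain arithmetic (independent of ParaView):

```python
# Net scale factor after chaining nbTs transforms that each scale by 1.01.
# Each transform multiplies the previous scale, so the result is 1.01**nbTs.
nbTs = 1000
net_scale = 1.01 ** nbTs
print(net_scale)  # a factor of roughly 2.1e4
```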
>>>
>>> I launch it using "mpiexec -np 4 pvbatch myscript.py".
>>> Everything runs fine, but it takes longer with MPI than with plain "pvbatch
>>> myscript.py".
>>>
>>> By monitoring RAM, I noticed that the data seems to be loaded once by each
>>> MPI process, and that (maybe) all the MPI processes do exactly the same job,
>>> computing all the data four times.
>>>
>>> Why aren't the blocks of my MultiBlock data set dispatched over the MPI
>>> processes?
>>> What am I doing wrong?
>>>
>>> Many thanks for any help,
>>>
>>> Yves
>>>
>>> _______________________________________________
>>> Powered by www.kitware.com
>>>
>>> Visit other Kitware open-source projects at
>>> http://www.kitware.com/opensource/opensource.html
>>>
>>> Please keep messages on-topic and check the ParaView Wiki at:
>>> http://paraview.org/Wiki/ParaView
>>>
>>> Follow this link to subscribe/unsubscribe:
>>> http://www.paraview.org/mailman/listinfo/paraview
>>>
>