[Paraview] pvpython vtkMPIController usage? rank is always 0

Ephraim Obermaier ephraimobermaier at gmail.com
Wed May 17 01:26:58 EDT 2017


Thank you, "mpirun -n 2 pvbatch --mpi --symmetric test.py" runs as expected.

But I am now using pure Python with properly set library paths. It's a bit
annoying that these pvbatch pitfalls aren't documented in the vtkMPIController
class reference or in
http://www.paraview.org/ParaView3/Doc/Nightly/www/py-doc/. What other
surprises should I expect when using pvbatch?

Thank you!
Ephraim


2017-05-16 21:22 GMT+02:00 David E DeMarle <dave.demarle at kitware.com>:

> Run with --symmetric.
>
> Without it, only the root node reads the script; it tells the rest of the
> nodes what to do via ParaView's proxy mechanisms (which take effect only
> for vtkSMProxy and its subclasses).
> With it, every node reads and executes the script, and all nodes do their
> own parts behind the proxies.
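> A minimal sketch of what that looks like in practice (untested, and reusing
> the vtk-level calls from your own script rather than anything ParaView-specific):
>
> import vtk
>
> # Under "mpirun -n 2 pvbatch --mpi --symmetric test.py" every rank executes
> # this file, so the global controller reports the real MPI rank and size.
> c = vtk.vtkMultiProcessController.GetGlobalController()
> rank = c.GetLocalProcessId()
> size = c.GetNumberOfProcesses()
>
> if rank == 0:
>     src = vtk.vtkSphereSource()
>     src.Update()
>     c.Send(src.GetOutput(), 1, 1234)   # send the polydata to rank 1, tag 1234
> else:
>     pd = vtk.vtkPolyData()
>     c.Receive(pd, 0, 1234)             # receive from rank 0, same tag
>     print "rank", rank, "received", pd.GetNumberOfPoints(), "points"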
>
>
>
> David E DeMarle
> Kitware, Inc.
> Principal Engineer
> 21 Corporate Drive
> Clifton Park, NY 12065-8662
> Phone: 518-881-4909
>
> On Tue, May 16, 2017 at 3:14 PM, Ephraim Obermaier <ephraimobermaier at gmail.com> wrote:
>
>> Thank you all for suggesting "pvbatch --mpi".
>> At least this now reports two processes (size=2), but the updated test.py
>> (below) hangs with the following output:
>>
>> $ mpirun -n 2 pvbatch --mpi test.py
>> comm: <type 'vtkParallelMPIPython.vtkMPIController'>
>> rank: 0
>> size: 2
>> Process 0
>>
>> Why is "Process 1" not printed, and why does the program hang instead of
>> finishing?
>> The file test.py was simplified to:
>>
>> import vtk
>> c = vtk.vtkMultiProcessController.GetGlobalController()
>> print "comm:",type(c)
>> rank = c.GetLocalProcessId()
>> print "rank:",rank
>> size = c.GetNumberOfProcesses()
>> print "size:",size
>> if rank == 0:
>>   print "Process 0"
>> else:
>>   print "Process 1"
>> c.Finalize()
>>
>> Thank you!
>> Ephraim
>>
>>
>>
>> 2017-05-16 19:11 GMT+02:00 David E DeMarle <dave.demarle at kitware.com>:
>>
>>> Try your script within pvbatch.
>>>
>>> pvpython is analogous to the Qt client application: it is (usually) not
>>> part of an MPI execution environment. Either one can connect to an
>>> MPI-parallel pvserver.
>>> pvbatch is a Python interface that is meant to be run on the server. It
>>> is directly connected to the pvserver.
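>>> Roughly, the client side of that split looks like the sketch below (a sketch
>>> only; the host, port, and pipeline are assumed, not taken from your setup):
>>>
>>> # Assumed workflow: the parallel server was started separately, e.g.
>>> #   mpirun -n 16 pvserver --server-port=11111
>>> from paraview.simple import Connect, Sphere, Show, Render
>>>
>>> Connect("localhost", 11111)   # attach this client to the parallel pvserver
>>> sphere = Sphere()             # proxy objects; the work runs on the server ranks
>>> Show(sphere)
>>> Render()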
>>>
>>>
>>> David E DeMarle
>>> Kitware, Inc.
>>> Principal Engineer
>>> 21 Corporate Drive
>>> Clifton Park, NY 12065-8662
>>> Phone: 518-881-4909
>>>
>>> On Tue, May 16, 2017 at 1:07 PM, Ephraim Obermaier <ephraimobermaier at gmail.com> wrote:
>>>
>>>> Hello,
>>>> I am trying to use VTK's MPI communication from pvpython, running with
>>>> OpenMPI's mpirun. It seems like ParaView hasn't enabled the MPI
>>>> capabilities for VTK, although it was compiled from source with
>>>> PARAVIEW_USE_MPI=ON and correctly found the system OpenMPI-2.0.0 libraries
>>>> and includes.
>>>>
>>>> I am running the short example below with the command "mpirun -n 2
>>>> pvpython test.py". The full output is also attached.
>>>> In short, both MPI processes report rank=0 and size=1, and their
>>>> controller is a vtkDummyController, although I expected ranks 0 and 1,
>>>> size=2, and a vtkMPIController.
>>>>
>>>> Is it possible to determine the problem with the given information? Do
>>>> I need extra CMake settings besides "PARAVIEW_USE_MPI=ON" to enable MPI for
>>>> VTK?
>>>> ParaView by itself runs fine in parallel, and I can start several
>>>> parallel pvservers using "mpirun -n 16 pvserver".
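>>>> (Would a check like the one below be a reasonable way to tell whether the
>>>> MPI controller was wrapped into the build at all? It is only a guess based
>>>> on the class names I see, so the module/attribute layout is an assumption.)
>>>>
>>>> import vtk
>>>> # If PARAVIEW_USE_MPI took effect, vtkMPIController should be wrapped;
>>>> # otherwise the attribute lookup fails with AttributeError.
>>>> try:
>>>>     print "MPI controller available:", vtk.vtkMPIController().GetClassName()
>>>> except AttributeError:
>>>>     print "vtkMPIController is not wrapped; MPI support seems to be missing"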
>>>>
>>>> --- test.py: ---
>>>> import vtk
>>>>
>>>> c = vtk.vtkMultiProcessController.GetGlobalController()
>>>>
>>>> print "comm:",type(c)
>>>> rank = c.GetLocalProcessId()
>>>> print "rank:",rank
>>>> size = c.GetNumberOfProcesses()
>>>> print "size:",size
>>>>
>>>> if rank == 0:
>>>>     ssource = vtk.vtkSphereSource()
>>>>     ssource.Update()
>>>>     print " 0 sending."
>>>>     c.Send(ssource.GetOutput(), 1, 1234)
>>>> else:
>>>>     sphere = vtk.vtkPolyData()
>>>>     print " 1 receiving."
>>>>     c.Receive(sphere, 0, 1234)
>>>>     print sphere
>>>>
>>>> --- Test run: ---
>>>> $ mpirun -n 2 pvpython test.py
>>>> comm: <type 'vtkParallelCorePython.vtkDummyController'>
>>>> rank: 0
>>>> size: 1
>>>>  0 sending.
>>>> Warning: In /home/user/.local/easybuild/build/ParaView/5.3.0/foss-2016b-mpi/ParaView-v5.3.0/VTK/Parallel/Core/vtkDummyCommunicator.h, line 47
>>>> vtkDummyCommunicator (0x1ff74e0): There is no one to send to.
>>>> [... 7 more times the same Warning...]
>>>>
>>>> comm: <type 'vtkParallelCorePython.vtkDummyController'>
>>>> rank: 0
>>>> size: 1
>>>>  0 sending.
>>>> Warning: In /home/user/.local/easybuild/build/ParaView/5.3.0/foss-2016b-mpi/ParaView-v5.3.0/VTK/Parallel/Core/vtkDummyCommunicator.h, line 47
>>>> vtkDummyCommunicator (0x22c14e0): There is no one to send to.
>>>> [... 7 more times the same Warning...]
>>>> --- end of output ---
>>>>
>>>> Thank you!
>>>> Ephraim
>>>>
>>>>
>>>>
>>>
>>
>

