<div dir="ltr"><div><div>Run with --symmetric.<br></div><br>Without it, only root node reads the script and it tells the rest of the nodes what to do via paraview's proxy mechanisms (which take effect only for vtkSMProxy and subclasses).<br></div>With it, every node reads and executes the script and all nodes do their own parts behind the proxies.<br><div><br><br></div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>David E DeMarle<br>Kitware, Inc.<br>Principal Engineer<br>21 Corporate Drive<br>Clifton Park, NY 12065-8662<br>Phone: 518-881-4909</div></div></div></div></div></div>
<br><div class="gmail_quote">On Tue, May 16, 2017 at 3:14 PM, Ephraim Obermaier <span dir="ltr"><<a href="mailto:ephraimobermaier@gmail.com" target="_blank">ephraimobermaier@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thank you all for suggesting "pvbatch --mpi".<br>At least, this returns size=2 processes, but the updated test.py (below) hangs with the following output:<br><br>$ mpirun -n 2 pvbatch --mpi test.py <br>comm: <type 'vtkParallelMPIPython.<wbr>vtkMPIController'><br>rank: 0<br>size: 2<br>Process 0<br><br>Why is "Process 1" not printed, and why does the program hang instead of finishing?<br>The file test.py was simplified to:<span class=""><br><br>import vtk<br>c = vtk.vtkMultiProcessController.<wbr>GetGlobalController()<br>print "comm:",type(c)<br>rank = c.GetLocalProcessId()<br>print "rank:",rank<br>size = c.GetNumberOfProcesses()<br>print "size:",size<br>if rank == 0:<br></span> print "Process 0"<br>else:<br> print "Process 1"<br>c.Finalize()<br><br>Thank you!<span class="HOEnZb"><font color="#888888"><br>Ephraim<br><br><br></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">2017-05-16 19:11 GMT+02:00 David E DeMarle <span dir="ltr"><<a href="mailto:dave.demarle@kitware.com" target="_blank">dave.demarle@kitware.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>Try your script within pvbatch.<br><br>pvpython is analogous to the Qt client application, it (usually) is not part of an MPI execution environment. Either one can connect to an MPI parallel pvserver.<br></div></div>pvbatch is a python interface that is meant to be run on the server. It is directly connected to the pvserver.<br><br><br><br><br><div><div><div><br></div></div></div></div><div class="gmail_extra"><br clear="all"><div><div class="m_880591844498205545m_2985275498583997113gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div class="m_880591844498205545m_2985275498583997113gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>David E DeMarle<br>Kitware, Inc.<br>Principal Engineer<br>21 Corporate Drive<br>Clifton Park, NY 12065-8662<br>Phone: <a href="tel:(518)%20881-4909" value="+15188814909" target="_blank">518-881-4909</a></div></div></div></div></div></div>
<br><div class="gmail_quote"><div><div class="m_880591844498205545h5">On Tue, May 16, 2017 at 1:07 PM, Ephraim Obermaier <span dir="ltr"><<a href="mailto:ephraimobermaier@gmail.com" target="_blank">ephraimobermaier@gmail.com</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="m_880591844498205545h5"><div dir="ltr"><div><div><div>Hello,<br>I am trying to use VTK's MPI communication from pvpython, running with OpenMPI's mpirun. It seems like ParaView hasn't enabled the MPI capabilities for VTK, although it was compiled from source with PARAVIEW_USE_MPI=ON and correctly found the system OpenMPI-2.0.0 libraries and includes.<br><br>I am running the short example below with the command "mpirun -n 2 pvpython test.py". The full output is also attached.<br>In short, both MPI processes report rank=0 and size=1 and their controller is a vtkDummyController although I expected rank=0..1, size=2 and a vtkMPIController.<br><br>Is it possible to determine the problem with the given information? Do I need extra CMake settings besides "PARAVIEW_USE_MPI=ON" to enable MPI for VTK?<br>ParaView by itself runs fine in parallel, and I can start several parallel pvservers using "mpirun -n 16 pvserver".<br><br>--- test.py: ---<br>import vtk<br><br>c = vtk.vtkMultiProcessController.<wbr>GetGlobalController()<br><br>print "comm:",type(c)<br>rank = c.GetLocalProcessId()<br>print "rank:",rank<br>size = c.GetNumberOfProcesses()<br>print "size:",size<br><br>if rank == 0:<br> ssource = vtk.vtkSphereSource()<br> ssource.Update()<br> print " 0 sending."<br> c.Send(ssource.GetOutput(), 1, 1234)<br>else:<br> sphere = vtk.vtkPolyData()<br> print " 1 receiving."<br> c.Receive(sphere, 0, 1234)<br> print sphere<br><br>--- Test run: ---<br>$ mpirun -n 2 pvpython test.py <br>comm: <type 'vtkParallelCorePython.vtkDumm<wbr>yController'><br>rank: 0<br>size: 1<br> 0 sending.<br>Warning: In /home/user/.local/easybuild/bu<wbr>ild/ParaView/5.3.0/foss-2016b-<wbr>mpi/ParaView-v5.3.0/VTK/Parall<wbr>el/Core/vtkDummyCommunicator.h<wbr>, line 47<br>vtkDummyCommunicator (0x1ff74e0): There is no one to send to.<br>[... 7 more times the same Warning...]<br><br>comm: <type 'vtkParallelCorePython.vtkDumm<wbr>yController'><br>rank: 0<br>size: 1<br> 0 sending.<br>Warning: In /home/user/.local/easybuild/bu<wbr>ild/ParaView/5.3.0/foss-2016b-<wbr>mpi/ParaView-v5.3.0/VTK/Parall<wbr>el/Core/vtkDummyCommunicator.h<wbr>, line 47<br>vtkDummyCommunicator (0x22c14e0): There is no one to send to.<br>[... 7 more times the same Warning...]<br></div>--- end of output ---<br><br></div>Thank you!<br></div>Ephraim<br></div><div id="m_880591844498205545m_2985275498583997113m_7351775176557461903DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2"><br> <table style="border-top:1px solid #d3d4de">