[Paraview] MPI Socket in use

Adam Dershowitz, Ph.D., P.E. adershowitz at exponent.com
Tue Apr 17 15:39:03 EDT 2012


The addresses are different, but the shared objects seem to be the same.

--Adam



On Apr 17, 2012, at 12:17 PM, John Patchett wrote:

> Sometimes, depending on the process launcher your MPI uses, the environment the mpiexec'ed processes see is not the same as the one in your current shell, since interactive and non-interactive shells differ in which dot files they read.
> 
> You could try
>   ssh <hostname> ldd /home/dersh/projects/ParaView-bin/bin/pvserver
> or
>   rsh <hostname> ldd /home/dersh/projects/ParaView-bin/bin/pvserver
> If you get the same output as
>   ldd /home/dersh/projects/ParaView-bin/bin/pvserver
> then I'm barking up the wrong tree.
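> 
> For example, something along these lines (a sketch; "cfd" stands in for the
> host name only because it appears in the error output below, and it assumes
> passwordless ssh to that node):
> 
>   # ldd as seen by a non-interactive (ssh) shell
>   ssh cfd ldd /home/dersh/projects/ParaView-bin/bin/pvserver > /tmp/ldd_ssh.txt
>   # ldd as seen by the current interactive shell
>   ldd /home/dersh/projects/ParaView-bin/bin/pvserver > /tmp/ldd_local.txt
>   # any difference points at dot-file / environment discrepancies
>   diff /tmp/ldd_local.txt /tmp/ldd_ssh.txt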
> 
> Good Luck,
> --John.
> 
> On Tue, Apr 17, 2012 at 1:02 PM, Utkarsh Ayachit <utkarsh.ayachit at kitware.com> wrote:
> Are there multiple versions of MPI installed, by any chance?
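> 
> A quick way to check (a sketch; the grep pattern is only illustrative):
> 
>   # which launcher is on the PATH, and which MPI it comes from
>   which mpirun mpiexec
>   mpirun --version
>   # which MPI libraries pvserver actually resolves at run time
>   ldd /home/dersh/projects/ParaView-bin/bin/pvserver | grep -i mpi
> 
> If the launcher and the libraries come from different MPI installations
> (for example LAM vs. Open MPI), a mismatch like that could produce exactly
> the symptoms you are seeing.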
> 
> Utkarsh
> 
> On Tue, Apr 17, 2012 at 2:51 PM, Adam Dershowitz
> <adershowitz at exponent.com> wrote:
> > Attached are the results of ldd libvtkParallel.so (it looks to me like it
> > is linking against the MPI libraries), and also CMakeCache.txt.
> >
> >
> >
> >
> >
> > -----Original Message-----
> > From: Utkarsh Ayachit [mailto:utkarsh.ayachit at kitware.com]
> > Sent: Tue 4/17/2012 6:14 AM
> > To: Adam Dershowitz
> > Cc: paraview at paraview.org
> > Subject: Re: [Paraview] MPI Socket in use
> >
> > Do you mind posting your CMakeCache.txt file? Also do a "ldd
> > libvtkParallel.so". Let's verify that vtkParallel is linking against
> > MPI libs.
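> > 
> > Something like this would do (a sketch; run it from wherever
> > libvtkParallel.so ended up in your build tree):
> > 
> >   # the MPI libraries should show up in the dependency list
> >   ldd libvtkParallel.so | grep -i mpi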
> >
> > Utkarsh
> >
> > On Tue, Apr 17, 2012 at 12:26 AM, Adam Dershowitz, Ph.D., P.E.
> > <adershowitz at exponent.com> wrote:
> >> I am sure that I used ccmake and changed PARAVIEW_USE_MPI to ON (I also
> >> enabled Python and pointed to the OpenMPI compiler, so that it then filled
> >> in most of the MPI variables, as the note explains).  Then I did
> >> configure and generate.  Finally I did make and sudo make install.  All
> >> seemed to work fine, and ParaView runs fine with a single processor.
> >> I even went so far as to make a new, empty directory and rebuild
> >> there, with the same results.
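> >>
> >> Roughly, the steps were equivalent to the following (a sketch; the exact
> >> variable names beyond PARAVIEW_USE_MPI, and the compiler wrapper used,
> >> are from memory and only illustrative):
> >>
> >>   cd /home/dersh/projects/ParaView-bin
> >>   cmake /home/dersh/projects/ParaView-3.14.1-Source \
> >>     -DPARAVIEW_USE_MPI=ON \
> >>     -DPARAVIEW_ENABLE_PYTHON=ON \
> >>     -DMPI_COMPILER=$(which mpicxx)
> >>   make
> >>   sudo make install
> >>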
> >> I have also tried explicitly giving the full path, just to make sure
> >> that there isn't some other pvserver around:
> >> mpirun -np 3 /home/dersh/projects/ParaView-bin/bin/pvserver
> >> with exactly the same results.
> >> I have gone back, and now I see that MPIEXEC_MAX_NUMPROCS and
> >> VTK_MPI_MAX_NUMPROCS are set to 2.  But I get the same error if I try to
> >> run it with -np 2, so clearly just using MPI is failing.
> >>
> >> I realize that the problem is consistent with ParaView not being built
> >> with MPI, but I definitely set the flag on.  Are there some other
> >> variables that have to be set?  Clearly something is not being built
> >> correctly, but I am not sure what it is.
> >>
> >> Thanks,
> >>
> >> --Adam
> >>
> >>
> >>
> >> On Apr 16, 2012, at 7:09 PM, Utkarsh Ayachit wrote:
> >>
> >>> You may want to verify the PARAVIEW_USE_MPI flag again and ensure that
> >>> the pvserver you're running is indeed the one built with PARAVIEW_USE_MPI
> >>> set to ON.  The problem you're seeing is typical when ParaView is not
> >>> built with MPI.
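> >>>
> >>> For example, something along these lines (a sketch; substitute your own
> >>> build directory):
> >>>
> >>>   # check what the configuration actually recorded
> >>>   grep -iE 'PARAVIEW_USE_MPI|^MPI_' /home/dersh/projects/ParaView-bin/CMakeCache.txt
> >>>   # and confirm which pvserver binary is first on the PATH
> >>>   which pvserver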
> >>>
> >>> Utkarsh
> >>>
> >>> On Mon, Apr 16, 2012 at 8:10 PM, Adam Dershowitz
> >>> <adershowitz at exponent.com> wrote:
> >>>> I just built ParaView on an openSUSE Linux box.  When I use the GUI and
> >>>> a single connection, it seems to work fine.  But if I try to use multiple
> >>>> CPUs, or run with MPI, it fails.
> >>>> I do have OpenMPI installed.
> >>>> When I first started getting the error, I googled around and found that
> >>>> maybe "make install" would help (I had been running it just from the
> >>>> build directory).  But I am getting the same error after installing.  I
> >>>> also added my OpenMPI libraries to LD_LIBRARY_PATH (when I first tried to
> >>>> run it I had other errors about a shared library).
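> >>>>
> >>>> The LD_LIBRARY_PATH change was along these lines (the OpenMPI lib
> >>>> directory below is only an example; use whatever path your distribution
> >>>> installs it to):
> >>>>
> >>>>   export LD_LIBRARY_PATH=/usr/lib64/mpi/gcc/openmpi/lib64:$LD_LIBRARY_PATH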
> >>>>
> >>>> I did build with PARAVIEW_USE_MPI set to ON.  It looks as though one
> >>>> pvserver will run, but any additional ones find the port already in use.
> >>>> Clearly something is not right about how MPI is being handled, but I am
> >>>> not sure how to fix it.
> >>>>
> >>>>
> >>>> If I try mpirun, here is the error I get:
> >>>>
> >>>>
> >>>> mpiexec -np 3  pvserver
> >>>> Waiting for client...
> >>>> Connection URL: cs://cfd:11111
> >>>> Accepting connection(s): cfd:11111
> >>>> Waiting for client...
> >>>> Connection URL: cs://cfd:11111
> >>>> ERROR: In /home/dersh/projects/ParaView-3.14.1-Source/VTK/Common/vtkSocket.cxx, line 206
> >>>> vtkServerSocket (0xebd970): Socket error in call to bind. Address already in use.
> >>>>
> >>>> ERROR: In /home/dersh/projects/ParaView-3.14.1-Source/ParaViewCore/ClientServerCore/vtkTCPNetworkAccessManager.cxx, line 343
> >>>> vtkTCPNetworkAccessManager (0x661800): Failed to set up server socket.
> >>>>
> >>>> Exiting...
> >>>> Waiting for client...
> >>>> Connection URL: cs://cfd:11111
> >>>> ERROR: In /home/dersh/projects/ParaView-3.14.1-Source/VTK/Common/vtkSocket.cxx, line 206
> >>>> vtkServerSocket (0xebd970): Socket error in call to bind. Address already in use.
> >>>>
> >>>> ERROR: In /home/dersh/projects/ParaView-3.14.1-Source/ParaViewCore/ClientServerCore/vtkTCPNetworkAccessManager.cxx, line 343
> >>>> vtkTCPNetworkAccessManager (0x661800): Failed to set up server socket.
> >>>>
> >>>> Exiting...
> >>>>
> >>>> -----------------------------------------------------------------------------
> >>>> It seems that [at least] one of the processes that was started with
> >>>> mpirun did not invoke MPI_INIT before quitting (it is possible that
> >>>> more than one process did not invoke MPI_INIT -- mpirun was only
> >>>> notified of the first one, which was on node n0).
> >>>>
> >>>> mpirun can *only* be used with MPI programs (i.e., programs that
> >>>> invoke MPI_INIT and MPI_FINALIZE).  You can use the "lamexec" program
> >>>> to run non-MPI programs over the lambooted nodes.
> >>>>
> >>>> -----------------------------------------------------------------------------
> >>>> mpirun failed with exit status 252
> >>>>
> >>>> Any suggestions would be greatly appreciated.
> >>>>
> >>>> thanks,
> >>>>
> >>>> --Adam
> >>>>
> >>>>
> >>
> >


