[Paraview] MPI-enabled pvserver using SSH

Kent Eschenberg eschenbe at psc.edu
Wed Jun 13 15:33:42 EDT 2007


Hi Kyle,

Sounds like a problem I had using the MPICH-1 library on a parallel system 
running Redhat Enterprise. MPICH had to be rebuilt to replace rsh with ssh. 
Once it has been built with rsh, it cannot be switched to ssh by setting 
options or environment variables.
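
For reference, this is roughly what the rebuild looked like in my case. The 
version number and install prefix below are examples only; check the MPICH-1 
installation guide for your release, but the key point is setting RSHCOMMAND 
to ssh at configure time:

```shell
# Sketch of rebuilding MPICH-1 (ch_p4 device) so that process startup
# uses ssh instead of rsh. Paths and version are illustrative.
cd mpich-1.2.7p1
RSHCOMMAND=ssh ./configure --prefix=/usr/local/mpich-ssh \
    --with-device=ch_p4
make
make install
```

After installing, make sure mpirun (and the ParaView build) pick up the new 
installation rather than the old rsh-based one.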

It seems you also have another problem: pvserver should not need to connect to 
an X server if it uses Mesa for rendering. You won't need mangled Mesa, but 
you do need to ensure that all the relevant CMake flags are set appropriately.
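
As a rough sketch, the cache entries involved look something like the 
following. The exact variable names and Mesa paths depend on your 
ParaView/VTK version and where OSMesa is installed, so treat these as 
examples to verify in ccmake rather than a recipe:

```shell
# Sketch: configuring the ParaView build against OSMesa for offscreen
# software rendering (no X server needed on the compute nodes).
# Paths below are placeholders for your Mesa installation.
cmake \
  -DVTK_OPENGL_HAS_OSMESA:BOOL=ON \
  -DOSMESA_INCLUDE_DIR:PATH=/usr/include \
  -DOSMESA_LIBRARY:FILEPATH=/usr/lib/libOSMesa.so \
  /path/to/ParaView-2.6
```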

P.S. It is always helpful when posting a question to mention the system and MPI 
package that you are using.

Kent
Pittsburgh Supercomputing Center

KLemmons at beckman.com wrote:
> 
> Hello,
> 
> I compiled ParaView 2.6 with MPI enabled, tested it locally (it works), 
> and then tried it with a few pvservers on the nodes of my cluster. I see 
> the splash screen, but (paradoxically enough) an invisible window is 
> created whose title in the task bar is "unable to open display." This 
> happens both physically, from the cluster head node, and remotely from 
> our control box (which connects to the head node via SSH with 
> OpenGL-enabled X-forwarding).  Searching the archives and the web has 
> led me to only one possible cause: when pvserver spawns the child 
> servers, they are unable to open an X display (for rendering purposes?). 
> If they are looking for an open display, they won't find one, because X 
> is not running on the nodes (although it is installed and working).  
> Looking at the output of ps aux, I noticed something very interesting:
> foobar      3597  0.0  0.2   4440  2216 pts/14   S    14:27   0:00     
>  |       \_ /usr/bin/rsh n101 -l foobar -n /usr/local/bin/pvserver 
> cluster 36515 \-p4amslave \-p4yourname n101 \-p4rmrank 1
> ...
> foobar      3956  0.0  0.2   4440  2216 pts/14   S    14:27   0:00     
>  |       \_ /usr/bin/rsh n109 -l foobar -n /usr/local/bin/pvserver 
> cluster 36515 \-p4amslave \-p4yourname n109 \-p4rmrank 9
> This uses rsh, which as far as I know does not do X forwarding.  What 
> makes this interesting is that I cannot find this remote shell command 
> specified anywhere in the mpirun scripts (they all use ssh by default), 
> and this was verified by our MPI hello world program:
> foobar     10621  1.2  0.2   4444  2220 pts/14   S    14:29   0:00     
>  |       \_ /usr/bin/ssh n101 -l foobar -n /foo/bar/mpi/mpihw cluster 
> 48369 \-p4amslave \-p4yourname n101 \-p4rmrank 1
> ...
> foobar     10869  2.0  0.2   4440  2216 pts/14   S    14:29   0:00     
>  |       \_ /usr/bin/ssh n109 -l foobar -n /foo/bar/mpi/mpihw cluster 
> 48369 \-p4amslave \-p4yourname n109 \-p4rmrank 9
> 
> My thought is that using ssh might allow me to either (a) enable 
> X-forwarding by default (via the ssh config file) or (b) specify 
> "/usr/bin/ssh -X -Y" as the remote shell command/options, once I figure 
> out where to set it.
> 
> I have been unsuccessful in locating where pvserver determines to use 
> rsh instead of ssh, as it is not specified (from what I can tell) 
> anywhere in the CMake configuration or in the build files, and I cannot 
> find in the documentation where this setting might be located.
> 
> My questions are as follows:
> 1. Why is my pvserver trying to use rsh when the default would appear to 
> be ssh? Could this cause the problem I am describing?
> 2. Are there any other ways for the nodes to connect to a display 
> without X-forwarding? (did I misconfigure something?)
> 3. If it should be using ssh and the reason why it isn't is not obvious 
> from my above description, what would the next step be in determining my 
> problem?
> 
> Thanks,
> ~Kyle
