[Paraview] paraview - client-server

burlen burlen.loring at gmail.com
Fri Feb 5 15:37:00 EST 2010


The only thing about a reverse connection is that the user then has to 
open a port on the local firewall, which is not always possible.
> I think the technique Burlen described takes advantage of the 
> interactive queues.
It relies on a fast queue. Actually I use this technique with the 
production/normal/long queues on Pleiades (NASA NAS), Spur, and Ranger 
(TACC). So far, with a few exceptions, I have been able to get as many 
nodes as I want in a matter of seconds or a few minutes. A large PV job 
(~100 nodes) is a drop in the bucket for most of these systems, so they 
seem to get scheduled quickly.

The interactive job (qsub -I) lets you get the hostname of the node and 
set up the tunnel before you start ParaView. You have to set the tunnel 
up first and then start PV, because once PV starts and is listening on a 
port, ssh refuses to use that port in the tunnel. I used two ssh 
sessions, the second to create the tunnel, because something about the 
way the batch system logs you into the compute node prevents the tunnel 
from being created inside the job. I don't know if it's specific to the 
systems I am using or a general fact of life.
> I wasn't able to do that on the last cluster I worked on because it 
> was not possible to ssh to a compute node.
I don't know if it would work for you, but on the systems mentioned I haven't 
needed to ssh to the compute node: if you have its hostname (the interactive 
job will display it), you can set up the tunnel from the front end.

    fe$ ~C<enter>
    -L ZZZZZ:NODE:YYYYY
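
The placeholders follow the example quoted below: ZZZZZ is a local port on 
your workstation, YYYYY the port pvserver will listen on, and NODE the 
compute node's hostname. If you'd rather not use the ~C escape, here is a 
sketch of the same forward created in one shot from the workstation 
(substitute your username and real values for the placeholders):

    # from the workstation: forward local port ZZZZZ through fe to NODE:YYYYY
    ws$ ssh -L ZZZZZ:NODE:YYYYY user@fe

Either way the ParaView client then connects to localhost:ZZZZZ.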

An additional factor to consider when devising an automated solution for 
PV: in order to log in to the front end, in addition to a password we 
are required to enter a code from an RSA SecurID device that NASA sent 
us; no passwordless login is allowed there. An automated solution would 
hopefully take that scenario into account.
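
One building block that might help with that scenario (just a sketch of an 
idea, not something I have wired into ParaView) is OpenSSH connection 
multiplexing: authenticate to the front end once, interactively, with 
password plus SecurID code, and let later ssh commands and tunnels reuse 
that authenticated connection without prompting again:

    # ~/.ssh/config on the workstation ("fe" is a placeholder host entry)
    Host fe
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p

    ws$ ssh fe              # log in once with password + SecurID code
    ws$ ssh fe hostname     # reuses the master connection, no new prompt

An automated script could then open its tunnels and submit jobs over the 
existing master connection.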

pat marion wrote:
> I wasn't able to do that on the last cluster I worked on because it 
> was not possible to ssh to a compute node.
>
> Usually I just set up a reverse tunnel from the login node to my work 
> machine.  The server starts up on the compute nodes and does a reverse 
> connection to the login node which then travels through the tunnel to 
> my work machine.  All you need to know is the IP of the login node.  
> Often a login node has 2 different IPs, one is for the network 
> interface to the outside, and the other is a network interface for 
> communicating with compute nodes.
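>
> A minimal sketch of that kind of setup (hostnames, the username, and the
> port numbers are placeholders; the pvserver flags are the usual
> reverse-connection options, as I understand them):
>
>     # on the workstation: make port 11111 on the login node come back here
>     # (sshd on the login node must allow GatewayPorts, otherwise the
>     # forwarded port only listens on the login node's loopback)
>     ws$ ssh -R 11111:localhost:11111 user@login-node
>
>     # in the batch job on the compute nodes: reverse-connect to the login
>     # node's internal IP; the client is already listening on its end
>     mpiexec pvserver --reverse-connection --client-host=LOGIN_INTERNAL_IP --server-port=11111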
>
> Pat
>
> On Fri, Feb 5, 2010 at 2:13 PM, Berk Geveci <berk.geveci at kitware.com 
> <mailto:berk.geveci at kitware.com>> wrote:
>
>     I was thinking about this. Is it not possible to run a script that
>     starts the ParaView server, figures out where the first node is
>     and then sets up an ssh tunnel to that node through the login node?
>
>
>     On Fri, Feb 5, 2010 at 1:46 PM, pat marion <pat.marion at kitware.com
>     <mailto:pat.marion at kitware.com>> wrote:
>
>         If admins are concerned about opening ports then maybe you
>         should use ssh tunnels instead of portfwd.  I like to
>         recommend portfwd because I find it simpler to use, but
>         usually an ssh tunnel can work just as well.  And you don't
>         have to worry about killing portfwd when you're done.  If you
>         can get an ssh tunnel to work then you should be able to set up
>         the tunnel in your custom command when starting the server.
>
>         Pat
>
>
>         On Fri, Feb 5, 2010 at 1:27 PM, Rakesh Hammond
>         <rakesh.hammond at googlemail.com
>         <mailto:rakesh.hammond at googlemail.com>> wrote:
>
>             Hi,
>
>             Thanks for your replies - I think the plan is to have two
>             queues on our new machine (parallel and interactive). What
>             Pat described here is what we have been thinking about; I
>             just thought there must be a more elegant solution than
>             this - for example, in VisIt you don't have to include the
>             portfwd bit. Perhaps something to consider for future PV
>             releases?  I assume these kinds of issues are common on
>             production platforms?
>
>             Another point to consider is that, from a procurement point
>             of view, people don't really care about visualization; all
>             they want is more TeraFlops etc., and then they expect the
>             visualization to be fitted into the system (i.e. no
>             separate network between login and compute nodes, and only
>             the login nodes will see the file systems, etc.).
>
>             I did set it up to do client-server in reverse; some
>             direct responses to what is suggested below:
>
>                 You mentioned generating MOAB job scripts on the fly
>                 through a PV XML input dialog.  All you would need to
>                 do would be to add some extra code to your job script
>                 template.  The extra code would generate a portfwd
>                 config file with a port chosen by the user.  The user
>                 would have to pick a port number other than 11111, a
>                 random port that hopefully is not in use.  When the
>                 start server button is clicked on the client it could
>                 run a custom command that ssh's to a login node,
>                 starts portfwd with the custom config file, and
>                 submits the MOAB script.  Now the user just waits for
>                 pvserver to connect back to the login node, which is
>                 forwarded straight to the workstation.  If the compute
>                 node can't connect to the login node by name
>                 (`hostname`) you can use /sbin/ifconfig to figure out
>                 the exact IP.
>
>             Yes - agreed, the only problem is that sys admin folks are
>             very funny about opening several ports - I think I have to
>             come up with some clever way to make them think they are not
>             opening many ports!  We thought we could keep a log of
>             which ports are available and which are in use, so each time
>             it connects it can work out which ones to use (this may avoid
>             using a random port number that is "hopefully" not in use).
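>
>             One way to avoid both the bookkeeping and the "hopefully
>             unused" guess would be to have the job script probe for a
>             free port at submission time. A rough sketch, assuming
>             netstat and shuf are available on the login node:
>
>                 # pick a port in a fixed range that nothing is listening on yet
>                 PORT=$(shuf -i 20000-29999 -n 1)
>                 while netstat -tln | grep -q ":${PORT} "; do
>                     PORT=$(shuf -i 20000-29999 -n 1)
>                 done
>                 echo "using port ${PORT}"
>
>             The chosen port could then be substituted into the generated
>             portfwd config (or ssh tunnel) and reported back to the client.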
>              
>
>                 I can't think of an elegant way to kill portfwd when
>                 the session is over, but you can probably come up with
>                 something.  Sorry I can't offer any specific details. 
>                 You might want to read this wiki page too, it
>                 describes such a system in use:
>
>             Yes - this will be an issue, but we thought that we could
>             just kill the portfwd process at the end, when the MOAB
>             script finishes.  The portfwd will be started at the user
>             level - so the user should be able to kill their own jobs as
>             part of the script (well, that would be the theory!)
>
>             When I get something to work, I will post something here
>             on how I did it...
>
>
>             Regards,
>             Rakesh
>
>
>                 On Thu, Feb 4, 2010 at 3:05 PM, burlen
>                 <burlen.loring at gmail.com
>                 <mailto:burlen.loring at gmail.com>> wrote:
>
>                     Hi,
>
>                     I have used ssh for this on such systems. As long
>                     as the batch system gives you exclusive use of the
>                     set of compute nodes (usually the case), you
>                     shouldn't have to worry about ports being used by
>                     others because the tunnel is through your ssh
>                     connection. It's not automated though. Here is how
>                     I do it:
>
>                     I use two terminals on my workstation, in the
>                     following denoted by t1$ and t2$, say fe is the
>                     front end on your cluster. In the first terminal:
>
>                       t1$ ssh fe
>                       t1$ qsub -I -V -l select=XX -l walltime=XX:XX:XX
>
>
>                     XX is replaced by your values. The job starts and
>                     you're automatically ssh'd into some compute node,
>                     which we'll say has hostname NODE. In the second
>                     terminal:
>
>                       t2$ ssh fe
>                       t2$ ~C<enter>
>                       -L ZZZZZ:NODE:YYYYY
>
>
>                     The ~C bit is an escape sequence that sets up the
>                     port forward. ZZZZZ is a port number on your
>                     workstation. YYYYY is a port number on the server
>                     that is not blocked by the cluster's internal
>                     firewall (see your sys admin). Now back to
>                     terminal one, and your waiting compute node:
>
>                       t1$ module load PV3-modulefile
>                       t1$ mpiexec pvserver --server-port=YYYYY
>
>
>                     The module is what sets up the LD_LIBRARY_PATH and
>                     other paths for your ParaView server install (see
>                     your sys admin). Now ParaView is running on the
>                     cluster. You start the ParaView client locally and
>                     connect over port ZZZZZ on localhost.
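>
>                     If your client supports it, you can also skip the
>                     connect dialog and point it at the tunnel from the
>                     command line (I believe the option is --server-url;
>                     check your client version):
>
>                       ws$ paraview --server-url=cs://localhost:ZZZZZ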
>
>
>                     That's what I do; if you come up with some
>                     automated script, though, that would be killer.
>                     Burlen
>
>
>
>
>                     Bart Janssens wrote:
>
>                         On Thursday 04 February 2010 07:59:46 pm
>                         Rakesh Hammond wrote:
>                          
>
>                             I am no expert on this type of stuff, but
>                             I can see how this would work
>                             - the question is if you have multiple
>                             users connecting at the same
>                             time, obviously you can't forward
>                             everything into 11111 for example.
>
>                                
>
>
>                         Hi Rakesh,
>
>                         If you use reverse connections, the compute
>                         nodes only need to be able to connect to
>                         outside machines (i.e. the workstations).
>                         Turning on NAT on a gateway machine, i.e. the
>                         frontend, should be sufficient for that, and
>                         no port forwarding is needed. This works on a
>                         standard Rocks setup, which enables the
>                         frontend as gateway by default.
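>
>                         On a frontend that doesn't do this by default,
>                         the same effect can be had with standard Linux
>                         NAT. A minimal sketch, run as root on the
>                         gateway (the external interface name eth0 is a
>                         placeholder):
>
>                           # let the gateway forward packets and masquerade
>                           # traffic from the compute nodes heading out
>                           echo 1 > /proc/sys/net/ipv4/ip_forward
>                           iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE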
>
>                         Cheers,
>
>                         Bart
>                          
>
>
>
>
>
>
>
>
>
>


