[Paraview] paraview - client-server

pat marion pat.marion at kitware.com
Fri Feb 5 13:46:15 EST 2010


If admins are concerned about opening ports, then maybe you should use ssh
tunnels instead of portfwd.  I like to recommend portfwd because I find it
simpler to use, but usually an ssh tunnel can work just as well.  And you
don't have to worry about killing portfwd when you're done.  If you can get
an ssh tunnel to work, then you should be able to set up the tunnel in your
custom command when starting the server.
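
For example (a sketch - the login node name, user, and port 22222 are
placeholders), a remote forward opened from the workstation makes a port
on the login node tunnel back to the client listening on 11111:

   workstation$ ssh -N -R 22222:localhost:11111 user@login-node

pvserver would then reverse-connect to login-node:22222, and the tunnel
dies with the ssh process, so there is nothing left over to kill.  Note
that sshd on the login node needs GatewayPorts enabled for the compute
nodes to reach the forwarded port.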

Pat

On Fri, Feb 5, 2010 at 1:27 PM, Rakesh Hammond <
rakesh.hammond at googlemail.com> wrote:

> Hi,
>
> Thanks for your replies - I think the plan is to have two queues on our new
> machine (parallel and interactive). What Pat described here is what we have
> been thinking about; I just thought there must be a more elegant solution
> than this - for example, in VisIt you don't have to include the portfwd bit.
> Perhaps something to consider for future PV releases?  I assume these kinds
> of issues are common on production platforms?
>
> Another point to consider: from a procurement point of view, people don't
> really care about visualization; all they want is more teraflops, and then
> they expect the visualization to be fitted into the system (i.e. no
> separate network between login and compute nodes, and only the login nodes
> see the file systems, etc.)
>
> I did set it up to do client-server in reverse; some direct responses to
> what is suggested below.
>
>> You mentioned generating MOAB job scripts on the fly through a pv xml input
>> dialog.  All you would need to do is add some extra code to your job script
>> template.  The extra code would generate a portfwd config file with a port
>> chosen by the user.  The user would have to pick a port number other than
>> 11111 - a random port that hopefully is not in use.  When the start server
>> button is clicked on the client, it could run a custom command that ssh's
>> to a login node, starts portfwd with the custom config file, and submits
>> the MOAB script.  Now the user just waits for pvserver to connect back to
>> the login node, which is forwarded straight to the workstation.  If the
>> compute node can't connect to the login node by name (`hostname`), you can
>> use /sbin/ifconfig to figure out the exact IP.
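>>
>> For instance (assuming the old Linux ifconfig output where the address is
>> printed as "inet addr:", and that the relevant interface is eth0):
>>
>>   /sbin/ifconfig eth0 | awk '/inet addr/{print substr($2,6)}'
>>
>> would print just the IPv4 address.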
>>
> Yes - agreed, the only problem is that the sysadmin folks are very funny
> about opening several ports - I think I have to come up with some clever way
> to make them think they are not opening many ports!  We thought we could
> keep a log of which ports are available and which are in use, so each time
> it connects it can work out which ones to use (this may avoid the use of a
> random port number that is "hopefully" not in use).
>
>
>> I can't think of an elegant way to kill portfwd when the session is over,
>> but you can probably come up with something.  Sorry I can't offer any
>> specific details.  You might want to read this wiki page too; it describes
>> such a system in use:
>>
> Yes - this will be an issue, but we thought that we could just kill the
> portfwd job at the end, when the MOAB script finishes.  The portfwd process
> will be started at the user level - so the user should be able to kill their
> own jobs as part of the script (well, that would be the theory!)
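>
> A rough sketch of that cleanup at the tail of the MOAB job script (all
> names here are hypothetical):
>
>   mpiexec pvserver -rc --client-host=$LOGIN_NODE --server-port=$PORT
>   # pvserver has exited, so tear down this user's forwarder on the login node
>   ssh $LOGIN_NODE "pkill -u $USER -f portfwd"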
>
> When I get something to work, I will post something here on how I did
> it....
>
>
> Regards,
> Rakesh
>
>>
>> On Thu, Feb 4, 2010 at 3:05 PM, burlen <burlen.loring at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have used ssh for this on such systems. As long as the batch system
>>> gives you exclusive use of the set of compute nodes (usually the case), you
>>> shouldn't have to worry about ports being used by others, because the
>>> tunnel is through your ssh connection. It's not automated though. Here is
>>> how I do it:
>>>
>>> I use two terminals on my workstation, denoted in the following by t1$
>>> and t2$; say fe is the front end of your cluster. In the first terminal:
>>>
>>>   t1$ ssh fe
>>>   t1$ qsub -I -V -l select=XX -l walltime=XX:XX:XX
>>>
>>>
>>> XX is replaced by your values. The job starts and you're automatically
>>> ssh'd into some compute node, which we'll say has hostname NODE. In the
>>> second terminal:
>>>
>>>   t2$ ssh fe
>>>   t2$ ~C<enter>
>>>   -L ZZZZZ:NODE:YYYYY
>>>
>>>
>>> The ~C bit is an escape sequence that sets up the port forward. ZZZZZ is
>>> a port number on your workstation. YYYYY is a port number on the server
>>> that is not blocked by the cluster's internal firewall (see your sys
>>> admin). Now back to terminal one, and your waiting compute node:
>>>
>>>   t1$ module load PV3-modulefile
>>>   t1$ mpiexec pvserver --server-port=YYYYY
>>>
>>>
>>> The module is what sets up LD_LIBRARY_PATH and the paths for your
>>> ParaView server install (see your sys admin). Now ParaView is running on
>>> the cluster. You start the ParaView client locally and connect over port
>>> ZZZZZ on localhost.
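>>>
>>> (Equivalently, instead of the ~C escape you could open the second
>>> connection with the forward already in place:
>>>
>>>   t2$ ssh -L ZZZZZ:NODE:YYYYY fe
>>>
>>> which sets up the same forward in one step.)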
>>>
>>>
>>> That's what I do. If you come up with some automated script, though, that
>>> would be killer.
>>> Burlen
>>>
>>>
>>>
>>>
>>> Bart Janssens wrote:
>>>
>>>> On Thursday 04 February 2010 07:59:46 pm Rakesh Hammond wrote:
>>>>
>>>>> I am no expert on this type of stuff, but I can see how this would work
>>>>> - the question is what happens if you have multiple users connecting at
>>>>> the same time; obviously you can't forward everything onto 11111, for
>>>>> example.
>>>>
>>>> Hi Rakesh,
>>>>
>>>> If you use reverse connections, the compute nodes only need to be able
>>>> to connect to outside machines (i.e. the workstations). Turning on NAT on a
>>>> gateway machine, i.e. the frontend, should be sufficient for that, and no
>>>> port forwarding is needed. This works on a standard Rocks setup, which
>>>> enables the frontend as gateway by default.
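>>>>
>>>> For example (the hostname and port are placeholders), with the client
>>>> listening for a reverse connection on 11111, the server side would be
>>>> started with something like:
>>>>
>>>>   mpiexec pvserver --reverse-connection --client-host=my-workstation --server-port=11111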
>>>>
>>>> Cheers,
>>>>
>>>> Bart