Well, in this case we're not tiling it either.

Upon some further digging, it looks like it was just a version mismatch problem
(probably should add a check for that at startup). I was running a CVS build on
the cluster and 2.4.4 on the client; using 2.4.4 on both works beautifully.

Now for the next question :) PV_USE_TRANSMIT - does that still work?

I have a file that only exists on the head node (process 0), but when I try to
open it (with PV_USE_TRANSMIT set to 1 on all nodes) I get "unable to open PLY
file" errors on all the processors and it seems to lock up.
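For reference, the launch I have in mind looks something like this (just a
sketch; the -x flag for exporting environment variables to the other ranks is
an OpenMPI-ism, so substitute whatever your MPI uses to propagate the
environment):

    # make sure every pvserver rank sees PV_USE_TRANSMIT=1
    export PV_USE_TRANSMIT=1
    mpirun -np 12 -x PV_USE_TRANSMIT pvserver -rc

If the variable only reaches rank 0, I'd expect exactly the kind of "unable to
open" errors described above on the satellites, so that's the first thing I'm
double-checking.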
On 8/10/06, Sean Ziegeler, Contractor <seanzig.ctr@navo.hpc.mil> wrote:
Randall,
As you probably know, our configuration is very similar to yours in that
only the head node(s) have external access. We routinely use
client/server mode successfully in conjunction with parallel
processing. As stated by others, you simply have to force the first
rank (#0) MPI process to run on a head node. If that is your only
concern, you should be ok.

The difference, however, is that we are using MPIRenderModule (we aren't
tiling the display). You'll want to be sure IceTDesktop doesn't
deviate, but from the other responses on the list, I'm guessing not.

-Sean

On Thu, 2006-08-10 at 12:10, Berk Geveci wrote:
> Only the first node has to be connected to the client. They talk to
> each other with MPI.
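One common way to force rank #0 onto the head node, as Sean suggests above, is
simply to list the head node first in the machine file handed to mpirun. A
rough sketch only (the hostnames and the MPICH-style -machinefile option are
placeholders; check what your particular MPI launcher expects):

    # machines.txt - head node listed first, so rank 0 is started there
    headnode
    node01
    node02
    node03
    # ...and so on through the remaining nodes

    mpirun -np 12 -machinefile machines.txt pvserver -rc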
>
> On 8/10/06, Randall Hand <randall.hand@gmail.com> wrote:
> Yes, this is the tile cluster (plasma).
>
> We brought it up with our admins onsite, and they don't think
> that setting up NAT would be any big problem, so they're
> looking into it now. What we want to do is use the
> IceTDesktop render mode to parallel process across our 12
> nodes with the display piped to a client (probably Windows)
> that's not part of the cluster. It was my understanding (and
> experience) that in this setup (mpirun -np 12 pvserver -rc)
> each pvserver process will want an independent connection
> to the client, which would require full network connectivity
> if it's any host other than the head node.
>
> Am I mistaken?
>
> On 8/10/06, Andy Cedilnik <andy.cedilnik@kitware.com> wrote:
> Hello,
>
> Actually, as far as I remember there is no need for connectivity
> from satellite nodes. I am pretty sure Randall's setup does not
> use NAT, so there is no way his satellite nodes can access the
> Internet. Randall, this is the tile cluster?
>
> But, again, I see no reason for satellite nodes to access the
> client. They will try to access render server nodes if you do
> render/data server separation.
>
> Andy
>
> Wylie, Brian wrote:
> > Sandia uses reverse connect on all of our cluster deployments.
> >
> > Even though you can't 'see' the cluster nodes from outside, you
> > can often 'see' the outside from a cluster node.
> >
> > If a cluster node cannot ping an external IP, then perhaps
> > install NAT on the cluster? (I'm a bit out of my area here...
> > but that's what our administrators have done if there was an
> > issue.)
> >
> > If you want I can forward your email to our cluster folks...
> >
> > Brian Wylie - Org 1424
> > Sandia National Laboratories
> > MS 0822 - Building 880/A1-J
> > (505)844-2238 FAX(505)845-0833
> >
> > ------------------------------------------------------------------------
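In case it helps anyone reading later: the NAT being suggested usually amounts
to ordinary Linux masquerading on the head node. A rough sketch only - eth0
stands in for whatever the external interface actually is, and the satellite
nodes also need their default route pointed at the head node:

    # on the head node, as root: enable forwarding and masquerade traffic
    # leaving through the external interface (eth0 is just a placeholder)
    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE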
<br>> > *From:*<br>> paraview-bounces+bnwylie=<a href="mailto:sandia.gov@paraview.org" target="_blank" onclick="return top.js.OpenExtLink(window,event,this)">sandia.gov@paraview.org
</a><br>> ><br>> [mailto:
<a href="mailto:paraview-bounces+bnwylie=sandia.gov@paraview.org" target="_blank" onclick="return top.js.OpenExtLink(window,event,this)">paraview-bounces+bnwylie=sandia.gov@paraview.org</a>] *On<br>> > Behalf Of *Randall Hand
<br>> > *Sent:* Wednesday, August 09, 2006 1:46 PM
<br>> > *To:* Paraview List<br>> > *Subject:* [Paraview] Paraview client-Server<br>> ><br>> > On our cluster, only the head node has true
<br>> internet<br>> > connectivity. The remaining cluster nodes are<br>> on an internal<br>> > network with no visibility except to each other
<br>> & the head node.<br>> ><br>> > In this configuration, is there any way to run<br>> Paraview in<br>> > Parallel-server mode to a client anywhere other
<br>> than the head<br>> > node? I would presume that there would need to<br>> be some kind of<br>> > Relay on the head node to make this happen.
<br>> ><br>> > This came up a few months ago with kitware, but<br>> I'm kinda curious<br>> > if anyone else out there has a different
<br>> solution (/poke Bryan,<br>> > Kenneth, *@doe :) ).<br>> ><br>> > --<br>> > ----------------------------------------
<br>> > Randall Hand<br>> > Visualization Scientist<br>> > ERDC MSRC-ITL<br>> ><br>> ><br>> ------------------------------------------------------------------------
>
> --
> ----------------------------------------
> Randall Hand
> Visualization Scientist
> ERDC MSRC-ITL
--
----------------------------------------
Randall Hand
Visualization Scientist
ERDC MSRC-ITL