Hi Burlen,

Yes I am, for the purpose of testing on our debug queue. But you are bang on about setting the DISPLAY environment variable.

So, setting in the ParaView modulefile

    setenv LIBGL_ALWAYS_INDIRECT 1

or in bash in the job script

    export LIBGL_ALWAYS_INDIRECT=1

but also adding

    export DISPLAY=:1

is needed to render on the GPU. With those set, the parallelSphere.py example renders with no errors across multiple nodes.

My mpirun command looks like this:

    mpirun pvbatch parallelSphere.py

Note that we have PBS Pro installed, so mpirun determines the number of processes/cores from the

    #PBS -l select=.....

line, with no explicit -np flag needed.

So now to see if I can render something awesome!

Chris B
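Putting those pieces together, a minimal PBS Pro job script along these lines might look like the sketch below. The queue name, module name, and select line are illustrative assumptions; adjust them for your site.

    #!/bin/bash
    #PBS -q debugq                          # assumed debug queue name
    #PBS -l select=2:ncpus=12:mpiprocs=12   # e.g. 2 nodes x 12 cores each
    #PBS -l walltime=00:10:00

    # Load the site's ParaView environment module (name is an assumption)
    module load paraview/4.2.0.1

    # Render through the GPU-backed X server
    export LIBGL_ALWAYS_INDIRECT=1
    export DISPLAY=:1

    cd "$PBS_O_WORKDIR"

    # No -np needed: with PBS Pro integration, mpirun takes the process
    # count from the #PBS -l select=... line above
    mpirun pvbatch parallelSphere.py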
On 27/10/2014, at 11:10 PM, Burlen Loring wrote:
Hi Christopher,

Are you by any chance logged in with ssh X11 forwarding (ssh -X ...)? The error you report often comes up in that context, and X forwarding would not be the right way to run ParaView on your cluster.
Depending on how your cluster is set up, you may need to start the X server before launching ParaView, and make sure to close it after ParaView exits. In that scenario your xorg.conf would specify the nvidia driver and a screen for each GPU, which you would then reference, via the DISPLAY variable, in the shell used to start ParaView. If you already have X11 running and screens configured, then it's just a matter of setting the DISPLAY variable correctly. When there are multiple GPUs per node, you'd need to set the display using the MPI rank modulo the number of GPUs per node.
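As a rough sketch of that rank-to-GPU mapping, assuming OpenMPI (which exports OMPI_COMM_WORLD_LOCAL_RANK to each process) and X screens :0.0 and :0.1 configured per node, a small wrapper script could pick the display before exec'ing pvbatch:

    #!/bin/bash
    # pvbatch_wrapper.sh -- hypothetical helper; launch it as:
    #   mpirun ./pvbatch_wrapper.sh parallelSphere.py
    GPUS_PER_NODE=2   # assumption: two GPUs (hence two X screens) per node
    # OpenMPI exports the node-local rank; other MPI stacks use other variables
    LOCAL_RANK=${OMPI_COMM_WORLD_LOCAL_RANK:-0}
    # Map each local rank onto one X screen: :0.0, :0.1, :0.0, ...
    export DISPLAY=:0.$(( LOCAL_RANK % GPUS_PER_NODE ))
    exec pvbatch "$@"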
I'm not sure it matters that much, but I don't think you want the --use-offscreen-rendering option.

Burlen
On 10/26/2014 10:23 PM, R C Bording wrote:
Hi,
I managed to get a working version of ParaView 4.2.0.1 on our GPU cluster, but when I try to run the parallelSphere.py script on more than one node it just hangs. It works like it is supposed to up to 12 cores on a single node. I am still trying to work out if I am running on the GPU (Tesla C2070).

Here is the list of cmake configuration options:
IBS_TOOL_CONFIGURE='-DCMAKE_BUILD_TYPE=Release \
-DParaView_FROM_GIT=OFF \
-DParaView_URL=$MYGROUP/vis/src/ParaView-v4.2.0-source.tar.gz \
-DENABLE_boost=ON \
-DENABLE_cgns=OFF \
-DENABLE_ffmpeg=ON \
-DENABLE_fontconfig=ON \
-DENABLE_freetype=ON \
-DENABLE_hdf5=ON \
-DENABLE_libxml2=ON \
-DENABLE_matplotlib=ON \
-DENABLE_mesa=OFF \
-DENABLE_mpi=ON \
-DENABLE_numpy=ON \
-DENABLE_osmesa=OFF \
-DENABLE_paraview=ON \
-DENABLE_png=ON \
-DENABLE_python=ON \
-DENABLE_qhull=ON \
-DENABLE_qt=ON \
-DENABLE_silo=ON \
-DENABLE_szip=ON \
-DENABLE_visitbridge=ON \
-DMPI_CXX_LIBRARIES:STRING="$MPI_HOME/lib/libmpi_cxx.so" \
-DMPI_C_LIBRARIES:STRING="$MPI_HOME/lib/libmpi.so" \
-DMPI_LIBRARY:FILEPATH="$MPI_HOME/lib/libmpi_cxx.so" \
-DMPI_CXX_INCLUDE_PATH:STRING="$MPI_HOME/include" \
-DMPI_C_INCLUDE_PATH:STRING="$MPI_HOME/include" \
-DUSE_SYSTEM_mpi=ON \
-DUSE_SYSTEM_python=OFF \
-DUSE_SYSTEM_qt=OFF \
-DUSE_SYSTEM_zlib=OFF '
The goal is to be able to support batch rendering on the whole cluster, ~96 nodes.

Also, do I need to set another environment variable in my ParaView module to make the Xlib warning go away?

[cbording@f100 Paraview]$ mpirun -n 12 pvbatch --use-offscreen-rendering parallelSphere.py
Xlib: extension "NV-GLX" missing on display "localhost:50.0".
Xlib: extension "NV-GLX" missing on display "localhost:50.0".
Xlib: extension "NV-GLX" missing on display "localhost:50.0".
Xlib: extension "NV-GLX" missing on display "localhost:50.0".
Xlib: extension "NV-GLX" missing on display "localhost:50.0".
Xlib: extension "NV-GLX" missing on display "localhost:50.0".

Is this related to my not being able to run across multiple nodes?
R. Christopher Bording
Supercomputing Team-iVEC@UWA
E: cbording@ivec.org
T: +61 8 6488 6905

26 Dick Perry Avenue,
Technology Park
Kensington, Western Australia.
6151