[Paraview] HPC paraview

Burlen Loring bloring at lbl.gov
Thu Oct 30 13:40:00 EDT 2014


To verify, submit a small job on one node. In the shell where you start 
pvserver, after you have done all the environment setup ParaView needs, 
run "glxinfo" and look at the GL vendor, GL version, and GL renderer 
strings; they should report your Tesla. That can be done from your batch 
script or at the command prompt. On some systems, e.g. Cray, the mpiexec 
command (aprun) runs the program on a different set of nodes than the 
batch script itself, so on a system like that run "aprun -n 1 glxinfo".
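For example, a minimal check along these lines (assuming glxinfo is 
installed on the compute node and DISPLAY points at its X server; the 
grep pattern is just to trim the output):

    # run on the compute node, interactively or from the batch script
    glxinfo | grep -i -E "vendor|renderer|version"
    # on Cray-like systems where the batch script runs on a service node:
    aprun -n 1 glxinfo | grep -i -E "vendor|renderer|version"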

On 10/29/2014 08:29 PM, R C Bording wrote:
> Hi Burlen,
>   Thanks again.  Is there an easy way to verify that ParaView is 
> using the GPU? Ours are Tesla C2075s.
>
> Do I need to change the "mode"(?) setting so it is rendering rather 
> than compute?
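On the question of the card's mode, a minimal sketch assuming nvidia-smi 
is in PATH; note the compute mode governs CUDA contexts and should not 
affect OpenGL rendering, so the glxinfo check above is the more direct test:

    # query the compute mode of GPU 0
    nvidia-smi -q -d COMPUTE -i 0
    # reset to DEFAULT if it reports EXCLUSIVE_* or PROHIBITED (needs root):
    # nvidia-smi -i 0 -c DEFAULT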
>
> R. Christopher Bording
> Supercomputing Team-iVEC at UWA
> E: cbording at ivec.org <mailto:cbording at ivec.org>
> T: +61 8 6488 6905
>
> 26 Dick Perry Avenue,
> Technology Park
> Kensington, Western Australia.
> 6151
>
>
>
>
>
> On 27/10/2014, at 11:10 PM, Burlen Loring wrote:
>
>> Hi Christopher,
>>
>> Are you by any chance logged in with ssh X11 forwarding (ssh -X ...)? 
>> It seems the error you report comes up often in that context. X 
>> forwarding would not be the right way to run PV on your cluster.
>>
>> Depending on how your cluster is set up, you may need to start the 
>> X server before launching PV, and make sure to close it after PV 
>> exits. In that scenario your xorg.conf would specify the nvidia 
>> driver and a screen for each GPU, which you would reference in the 
>> shell used to start PV through the DISPLAY variable. If you already 
>> have X11 running and screens configured, then it's just a matter of 
>> setting the DISPLAY variable correctly. When there are multiple GPUs 
>> per node, you'd need to set the display using the MPI rank modulo the 
>> number of GPUs per node.
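A minimal sketch of such a per-rank wrapper, assuming OpenMPI (which 
exports OMPI_COMM_WORLD_LOCAL_RANK), one X screen per GPU on display :0, 
and a hypothetical script name and GPU count that are site-specific:

    #!/bin/bash
    # set_display.sh (hypothetical): pick an X screen from the node-local MPI rank
    GPUS_PER_NODE=2                              # adjust to your hardware
    LOCAL_RANK=${OMPI_COMM_WORLD_LOCAL_RANK:-0}  # OpenMPI; other MPIs/schedulers differ
    export DISPLAY=:0.$(( LOCAL_RANK % GPUS_PER_NODE ))
    exec "$@"

It would then be invoked as, e.g., mpirun -n 24 ./set_display.sh pvbatch 
parallelSphere.py, so each rank renders on its own GPU screen.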
>>
>> I'm not sure it matters that much, but I don't think you want the 
>> --use-offscreen-rendering option.
>>
>> Burlen
>>
>> On 10/26/2014 10:23 PM, R C Bording wrote:
>>> Hi,
>>>  Managed to get a working build of ParaView 4.2.0.1 on our GPU 
>>> cluster, but when I try to run the
>>> parallelSphere.py script on more than one node it just hangs.  It works 
>>> as it is supposed to up to 12 cores on a single node.  I am still 
>>> trying to work out whether I am running on the GPU (Tesla C2070).
>>>
>>> Here is the list of CMake configuration options:
>>>
>>> IBS_TOOL_CONFIGURE='-DCMAKE_BUILD_TYPE=Release \
>>> -DParaView_FROM_GIT=OFF \
>>> -DParaView_URL=$MYGROUP/vis/src/ParaView-v4.2.0-source.tar.gz \
>>> -DENABLE_boost=ON \
>>> -DENABLE_cgns=OFF \
>>> -DENABLE_ffmpeg=ON \
>>> -DENABLE_fontconfig=ON \
>>> -DENABLE_freetype=ON \
>>> -DENABLE_hdf5=ON \
>>> -DENABLE_libxml2=ON \
>>> -DENABLE_matplotlib=ON \
>>> -DENABLE_mesa=OFF \
>>> -DENABLE_mpi=ON \
>>> -DENABLE_numpy=ON \
>>> -DENABLE_osmesa=OFF \
>>> -DENABLE_paraview=ON \
>>> -DENABLE_png=ON \
>>> -DENABLE_python=ON \
>>> -DENABLE_qhull=ON \
>>> -DENABLE_qt=ON \
>>> -DENABLE_silo=ON \
>>> -DENABLE_szip=ON \
>>> -DENABLE_visitbridge=ON \
>>> -DMPI_CXX_LIBRARIES:STRING="$MPI_HOME/lib/libmpi_cxx.so" \
>>> -DMPI_C_LIBRARIES:STRING="$MPI_HOME/lib/libmpi.so" \
>>> -DMPI_LIBRARY:FILEPATH="$MPI_HOME/lib/libmpi_cxx.so" \
>>> -DMPI_CXX_INCLUDE_PATH:STRING="$MPI_HOME/include" \
>>> -DMPI_C_INCLUDE_PATH:STRING="$MPI_HOME/include" \
>>> -DUSE_SYSTEM_mpi=ON \
>>> -DUSE_SYSTEM_python=OFF \
>>> -DUSE_SYSTEM_qt=OFF \
>>> -DUSE_SYSTEM_zlib=OFF '
>>>
>>> The goal is to be able to support batch rendering on the whole 
>>> cluster ~96 nodes.
>>>
>>> Also, do I need to set another environment variable in my ParaView 
>>> module to make the Xlib
>>> warning go away?
>>>
>>> [cbording at f100 Paraview]$ mpirun -n 12 pvbatch 
>>> --use-offscreen-rendering parallelSphere.py
>>> Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
>>> Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
>>> Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
>>> Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
>>> Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
>>> Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
>>>
>>> Is this related to my not being able to run across multiple nodes?
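For what it's worth, a display of the form "localhost:50.0" is what SSH 
X forwarding typically creates; pointing DISPLAY at the node-local X 
server instead (the screen number is site-specific) is usually what makes 
the NV-GLX warning go away, for example:

    echo $DISPLAY        # localhost:50.0 indicates an SSH-forwarded display
    export DISPLAY=:0.0  # node-local X server / GPU screen (adjust to your xorg.conf)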
>>>
>>> R. Christopher Bording
>>> Supercomputing Team-iVEC at UWA
>>> E: cbording at ivec.org <mailto:cbording at ivec.org>
>>> T: +61 8 6488 6905
>>>
>>> 26 Dick Perry Avenue,
>>> Technology Park
>>> Kensington, Western Australia.
>>> 6151
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>
