[Paraview] HPC ParaView

R C Bording cbording at ivec.org
Mon Oct 27 01:23:47 EDT 2014


Hi,
I managed to get a working build of ParaView 4.2.0.1 on our GPU cluster, but when I try to run the
parallelSphere.py script on more than one node it just hangs. It works as expected with up to 12 cores on a single node. I am also still trying to work out whether I am actually rendering on the GPUs (Tesla C2070).
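To check whether the GPU is actually in use, my plan is just to watch nvidia-smi on the compute node while pvbatch runs; a minimal sketch (f100 is one of our node names):

# poll GPU utilization once per second while the job is running
ssh f100 nvidia-smi -l 1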

Here is the list of CMake configuration options:

IBS_TOOL_CONFIGURE='-DCMAKE_BUILD_TYPE=Release \
-DParaView_FROM_GIT=OFF \
-DParaView_URL=$MYGROUP/vis/src/ParaView-v4.2.0-source.tar.gz \
-DENABLE_boost=ON \
-DENABLE_cgns=OFF \
-DENABLE_ffmpeg=ON \
-DENABLE_fontconfig=ON \
-DENABLE_freetype=ON \
-DENABLE_hdf5=ON \
-DENABLE_libxml2=ON \
-DENABLE_matplotlib=ON \
-DENABLE_mesa=OFF \
-DENABLE_mpi=ON \
-DENABLE_numpy=ON \
-DENABLE_osmesa=OFF \
-DENABLE_paraview=ON \
-DENABLE_png=ON \
-DENABLE_python=ON \
-DENABLE_qhull=ON \
-DENABLE_qt=ON \
-DENABLE_silo=ON \
-DENABLE_szip=ON \
-DENABLE_visitbridge=ON \
-DMPI_CXX_LIBRARIES:STRING="$MPI_HOME/lib/libmpi_cxx.so" \
-DMPI_C_LIBRARIES:STRING="$MPI_HOME/lib/libmpi.so" \
-DMPI_LIBRARY:FILEPATH="$MPI_HOME/lib/libmpi_cxx.so" \
-DMPI_CXX_INCLUDE_PATH:STRING="$MPI_HOME/include" \
-DMPI_C_INCLUDE_PATH:STRING="$MPI_HOME/include" \
-DUSE_SYSTEM_mpi=ON \
-DUSE_SYSTEM_python=OFF \
-DUSE_SYSTEM_qt=OFF \
-DUSE_SYSTEM_zlib=OFF '
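If it turns out the renderer is falling back to X/GLX rather than the GPU, one fallback I am considering for headless batch rendering is a software OSMesa build; a sketch of the option changes, assuming the same ENABLE_* switches shown above (OSMesa and the Qt GUI are, as I understand it, mutually exclusive):

-DENABLE_osmesa=ON \
-DENABLE_mesa=OFF \
-DENABLE_qt=OFF \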

The goal is to support batch rendering across the whole cluster (~96 nodes).
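For the multi-node case, what I have in mind is something like the following; a minimal sketch assuming Open MPI and a hostfile (node names and slot counts are hypothetical):

# run 24 ranks across two 12-core nodes
cat > hosts.txt <<EOF
f100 slots=12
f101 slots=12
EOF
mpirun -np 24 --hostfile hosts.txt pvbatch --use-offscreen-rendering parallelSphere.py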

Also, do I need to set another environment variable in my ParaView module to make the Xlib
warning go away?

[cbording at f100 Paraview]$ mpirun -n 12 pvbatch --use-offscreen-rendering parallelSphere.py
Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
Xlib:  extension "NV-GLX" missing on display "localhost:50.0".

Is this related to my not being able to run across multiple nodes?
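From what I can tell, the "NV-GLX" warning means DISPLAY is pointing at an SSH-forwarded X connection (localhost:50.0) rather than the node's local NVIDIA X server. If that is right, one fix would be to point DISPLAY at the local server; a sketch, assuming an X server with the NVIDIA driver is running on display :0 of each node:

# in the job script (the modulefile equivalent would be "setenv DISPLAY :0");
# the display number :0 is an assumption about our node setup
export DISPLAY=:0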

R. Christopher Bording
Supercomputing Team-iVEC at UWA
E: cbording at ivec.org
T: +61 8 6488 6905

26 Dick Perry Avenue, 
Technology Park
Kensington, Western Australia 6151
