[Paraview] GPU-based remote rendering
Favre Jean
jfavre at cscs.ch
Tue Jul 16 17:00:56 EDT 2013
Hello
I am configuring a new cluster with two GPUs per node. When doing server-side rendering, I get the dirty pixels shown in the attached screenshot once I increase the pixel resolution beyond a certain threshold. This happens when using a single GPU per node, with a command like
mpiexec -n 8 -env DISPLAY :0.0 /apps/castor/ParaView/4.0/bin/pvserver -rc -ch=148.***.***.*** -sp=11111
My ParaView is compiled with ICET_USE_OPENGL:BOOL=ON. What could cause these dirty pixels? My driver is libGL.so.319.17.
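For completeness, here is a quick way to confirm that a given X screen maps to the expected GPU (a minimal check, assuming glxinfo and nvidia-smi are available on the compute node):

  # Query the OpenGL vendor/renderer seen through each X screen
  DISPLAY=:0.0 glxinfo | grep -E "OpenGL (vendor|renderer) string"
  DISPLAY=:0.1 glxinfo | grep -E "OpenGL (vendor|renderer) string"
  # List the GPUs and driver version as the node sees them
  nvidia-smi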
Second problem: I'd really like to use both GPUs. I run (for example)
mpiexec -n 8 -env DISPLAY :0.0 /apps/castor/ParaView/4.0/bin/pvserver -rc -ch=148.***.***.*** -sp=11111 : -n 8 -env DISPLAY :0.1 /apps/castor/ParaView/4.0/bin/pvserver -rc -ch=148.***.***.*** -sp=11111
and the whole thing crashes right at initialization with
ERROR: In /apps/eiger/src/ParaView-v4.0.1-source/VTK/Parallel/Core/vtkSocketCommunicator.cxx, line 812
vtkSocketCommunicator (0x200d5f390): Could not receive tag. 144432
I have searched the mailing list and the wiki, but most messages refer to tiled displays, whereas I just want to distribute the GPUs over the pvserver processes. What have I forgotten? I am running 4.0.1.
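In case it is relevant, the alternative I am considering instead of the MPMD syntax above is a small wrapper script that picks one X screen per local rank. This is a sketch only: it assumes the MPI launcher exports a local-rank variable (MV2_COMM_WORLD_LOCAL_RANK under MVAPICH2, OMPI_COMM_WORLD_LOCAL_RANK under Open MPI), and the script name pvserver-gpu.sh is my own.

  #!/bin/sh
  # pvserver-gpu.sh: route each local MPI rank to one of the two X screens
  # Assumes the launcher exports a local-rank variable; adjust for your MPI.
  LOCAL_RANK=${MV2_COMM_WORLD_LOCAL_RANK:-${OMPI_COMM_WORLD_LOCAL_RANK:-0}}
  export DISPLAY=:0.$((LOCAL_RANK % 2))
  exec /apps/castor/ParaView/4.0/bin/pvserver -rc -ch=148.***.***.*** -sp=11111

which would then be launched with a single homogeneous command, e.g.

  mpiexec -n 16 ./pvserver-gpu.sh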
-----------------
Jean
CSCS
-------------- next part --------------
A non-text attachment was scrubbed...
Name: remoteGPU.png
Type: image/png
Size: 217886 bytes
Desc: remoteGPU.png
URL: <http://www.paraview.org/pipermail/paraview/attachments/20130716/52710812/attachment-0001.png>