[Paraview] Running pvserver on multiple GPUs per node

Robert Sawko robertsawko at gmail.com
Fri Mar 2 05:40:23 EST 2018


Hi,

I have a distinct impression that I have already asked this question, but I
can't find it in the archives. Apologies if I am repeating myself.

I am working on a cluster which has multiple GPUs per node, 4-6 to be more
precise. I got ParaView to work with the EGL backend in client-server mode,
and it has been a really good experience so far, but one always wants more...
It seems to me that I am using only one GPU even when running in parallel.
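
The way I have been checking this (a quick sanity check, nothing rigorous)
is to watch per-device utilisation with nvidia-smi while the client is
rendering, and only device 0 ever shows any load:

# poll GPU index, utilisation and memory every 2 seconds while rendering
nvidia-smi --query-gpu=index,name,utilization.gpu,memory.used \
    --format=csv -l 2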

I am looking for some advice here. I am using this page as a reference:
https://www.paraview.org/Wiki/Setting_up_a_ParaView_Server#Multiple_GPUs_Per_Node
and I am using Open MPI as my MPI implementation.

# 1. This works for me but uses only a single GPU
mpirun -report-bindings -map-by core -bind-to core \
    -np 20 pvserver -sp=22221 --disable-xdisplay-test
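
# 1b. A variant I have been considering for the socket concern I describe
# below; if I read the Open MPI man page right, ppr (processes per
# resource) should keep 10 ranks on each socket. Untested, and it assumes
# my nodes have two 10-core sockets.
mpirun -report-bindings -map-by ppr:10:socket -bind-to core \
    -np 20 pvserver -sp=22221 --disable-xdisplay-test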

# 2. This probably doesn't work, but I am not sure...
mpirun -report-bindings -map-by core -bind-to core \
    -np 10 pvserver -sp=22221 --disable-xdisplay-test --egl-device-index=0 : \
    -np 10 pvserver --disable-xdisplay-test --egl-device-index=1
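
# 3. Another sketch I have been toying with (not properly tested): a wrapper
# script that picks the EGL device from the rank, so every rank gets the
# same command line and GPUs are assigned round-robin.
cat > pvserver-wrapper.sh <<'EOF'
#!/bin/sh
# OMPI_COMM_WORLD_RANK is set by Open MPI; '% 2' assumes two GPUs per node
exec pvserver --egl-device-index=$((OMPI_COMM_WORLD_RANK % 2)) "$@"
EOF
chmod +x pvserver-wrapper.sh
mpirun -report-bindings -map-by core -bind-to core \
    -np 20 ./pvserver-wrapper.sh -sp=22221 --disable-xdisplay-test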


A general question: is it OK to drop the -display option when using EGL?

With option 1 I am afraid that the processes are spread out across both
sockets, and I think there may be a slight communication overhead between
processes on one socket and a GPU plugged into the other, but perhaps I am
being paranoid. How can I measure this?
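
The closest thing I have found so far is nvidia-smi's topology matrix,
which at least shows which cores each GPU is local to (the CPU Affinity
column), so the bindings reported by mpirun can be checked against it:

# print the GPU/CPU topology matrix, including per-GPU CPU affinity
nvidia-smi topo -m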

With option 2: am I just abusing the syntax, or is this the right way to do
it? nvidia-smi tells me I am using two GPUs now. How do I make sure that the
processes talking to the nth GPU are bound to the right socket? And is a
single -sp option correct there?
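
For the binding part, one idea I have been sketching (I am not even sure
rankfiles combine with the MPMD colon syntax) is an Open MPI rankfile that
pins ranks 0-9 to socket 0 and ranks 10-19 to socket 1; 'node01' and the
2 x 10-core layout are placeholders for my actual nodes:

# 'slot=socket:core' is Open MPI's rankfile syntax
for i in $(seq 0 19); do
    echo "rank $i=node01 slot=$((i / 10)):$((i % 10))"
done > myrankfile

mpirun -rf myrankfile \
    -np 10 pvserver -sp=22221 --disable-xdisplay-test --egl-device-index=0 : \
    -np 10 pvserver --disable-xdisplay-test --egl-device-index=1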

And perhaps a silly general question: how can I make a fair benchmark of this?
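
What I had in mind was something like the pvbatch script below (a rough
sketch; the sphere resolution and render count are arbitrary), run
unchanged under each launch configuration so the wall times can be
compared:

# bench.py renders a fixed workload repeatedly and reports wall time
cat > bench.py <<'EOF'
import time
from paraview.simple import *

Show(Sphere(ThetaResolution=1024, PhiResolution=1024))
view = GetActiveViewOrCreate('RenderView')
Render(view)

start = time.time()
for i in range(20):
    view.CameraPosition = [0.0, 0.0, 3.0 + 0.01 * i]  # force a re-render
    Render(view)
print('20 renders in %.2f s' % (time.time() - start))
EOF

mpirun -np 20 pvbatch bench.py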

Best wishes,
Robert
-- 
Ghosts of the upper atmosphere
https://www.youtube.com/watch?v=D7mqs6fng7o
Reference:
https://en.wikipedia.org/wiki/Upper-atmospheric_lightning

