[Paraview] Pvbatch not performing significantly better in parallel
Massimiliano Leoni
leoni.massimiliano1 at gmail.com
Sat Apr 18 06:40:02 EDT 2015
Hi everybody,
I am trying to run pvbatch in parallel to render an animation, with a very simple script that looks like this:

    import sys
    from paraview.simple import *

    # read pvsm file from command line and load it
    stateFile = sys.argv[1]
    simulation = stateFile.split("/")[-1].split(".")[0]
    servermanager.LoadState(stateFile)

    # set active view and render animation
    SetActiveView(GetRenderView())
    WriteAnimation(simulation + ".jpg", Magnification=2, Quality=2)
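As an aside, the `split("/")` chain used to derive the simulation name breaks on names containing extra dots and on non-`/` path separators; a small sketch of the same idea using `os.path` (the helper name `simulation_name` is just an illustration, not part of the original script):

```python
import os

def simulation_name(state_file):
    # "/data/run1.pvsm" -> "run1": strip the directory, then the extension
    return os.path.splitext(os.path.basename(state_file))[0]

print(simulation_name("/data/run1.pvsm"))  # -> run1
```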
I compiled ParaView from source, configuring with

    cmake -DPARAVIEW_BUILD_QT=OFF -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=OFF -DPARAVIEW_ENABLE_PYTHON=ON -DPARAVIEW_USE_MPI=ON ..

and then building everything.
I am doing a benchmark on 11GB of data distributed over many pvd/vtu
files [written by an MPI application in parallel].
I copied the data to a tmpfs folder to ensure the execution is not slowed
down by disk access.
Executing pvbatch on 1 or 16 processors doesn't really seem to change anything. In particular, I was expecting the frames to appear in blocks of 16 when running with MPI on 16 processes, but they always appear one at a time at a constant pace, which makes me suspect that the other processes aren't really contributing to the rendering.
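To check whether the extra ranks are actually attached, one can print the partition id and count from inside the pvbatch script (a minimal sketch; the ImportError fallback is only there so the snippet also runs outside a ParaView build):

```python
try:
    from paraview import servermanager
    # query the MPI partition this process belongs to
    pm = servermanager.vtkProcessModule.GetProcessModule()
    rank = pm.GetPartitionId()
    size = pm.GetNumberOfLocalPartitions()
except ImportError:
    rank, size = 0, 1  # fallback when paraview is not importable

print("rank %d of %d" % (rank, size))
```

Run under `mpirun -np 16 pvbatch`, this should print 16 distinct ranks if MPI is really in use.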
What could I be doing wrong? Any suggestion is highly appreciated.
Best regards,
Massimiliano