[Paraview] Parallel Scalability

Berk Geveci berk.geveci at kitware.com
Tue Mar 26 14:15:59 EDT 2013


Hmmm. These results are surprising to me. Is there a difference between the
1-process run and the 2-process run? Is one compiled to use Mesa and the
other accelerated OpenGL? Rendering time should not jump like that. Also,
how many cores are on this system? I am surprised that the processing time
goes up when going from 2 to 4 processes.
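
If you are not sure about the core count, a quick way to check from Python
itself (plain Python, nothing ParaView-specific):

import multiprocessing
# Number of logical cores the OS reports on the machine running the script.
print(multiprocessing.cpu_count())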

-berk


On Tue, Mar 26, 2013 at 11:40 AM, Dr. Olaf Ippisch
<olaf.ippisch at iwr.uni-heidelberg.de> wrote:

> Dear Berk,
>
> I tried your suggestion and coloured the result with the
> ProcessIdScalars filter. I can see the partitioning and it also makes
> sense, so there should not be any major load imbalance. I also added
> some timing information to the code; I attach the changed program. It is
> evident that the data input is not the bottleneck. There is also some
> speedup in the application of the filters (mostly the contour filter,
> presumably), but this is more than offset by the much longer time needed
> by the WriteImage command, which does the rendering. You can see the
> times below. As I use a fast RAID for the I/O, this is not due to disk
> speed. Do you have any ideas how I could speed up this last part?
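>
> For reference, the timing was added roughly like this (a sketch of the
> attached script, not the exact code; the iso-value is only an example):
>
> import time
> from paraview.simple import *
>
> t0 = time.time()
> reader = OpenDataFile('conc_4996800.vtr')
> print 'Setup and Transformation took ', time.time() - t0
>
> t0 = time.time()
> contour = Contour(Input=reader, Isosurfaces=[0.5])  # example iso-value
> pid = ProcessIdScalars(Input=contour)  # for the partitioning check
> pid.UpdatePipeline()                   # force the filters to execute now
> print 'Filters took ', time.time() - t0
>
> t0 = time.time()
> Show(pid)
> WriteImage('contour_00380.png')        # rendering happens here
> print 'Rendering took ', time.time() - t0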
>
> Best regards,
> Olaf
>
> 1 Process:
> Reading data from  conc_4996800.vtr
> Setup and Transformation took  1.52222299576
> Writing  contour_00380.png
> [Array: conc, Array: ProcessId]
> (0.0, 0.0)
> Filters took  83.8577969074
> Rendering took  13.8007481098
> Total runtime:  99.1809921265
>
> 2 Processes:
> Reading data from  conc_4996800.vtr
> Setup and Transformation took  1.02662491798
> Writing  contour_00380.png
> [Array: conc, Array: ProcessId]
> (0.0, 1.0)
> Filters took  54.6261291504
> Rendering took  46.8519799709
> Total runtime:  102.504925966
>
> 4 Processes:
> Reading data from  conc_4996800.vtr
> Setup and Transformation took  0.90910410881
> Writing  contour_00380.png
> [Array: conc, Array: ProcessId]
> (0.0, 3.0)
> Filters took  56.8009800911
> Rendering took  42.3190040588
> Total runtime:  100.029356956
>
>
>
> On 26.03.13 13:43, Berk Geveci wrote:
> > Hi Olaf,
> >
> > From your previous message, I am assuming that you are using vtr files.
> > In this case, the processing should scale. If you can make some example
> > files available, I can verify this. Feel free to e-mail them to me
> > directly, or I can download them from somewhere if they are too big. The
> > two potential problems are:
> >
> > - IO. You still have one disk if you are not running this on a cluster.
> > If the processing that ParaView is doing is negligible compared to the
> > time it takes to read the data, you will not see good scaling of the
> > whole script as you add more processes (see the sketch below for a way
> > to check this).
> >
> > - Load balancing. ParaView uses static load balancing when running in
> > parallel. So if the partitioning is not load balanced with respect to
> > iso-surfacing (e.g. most of the iso-surface is generated by one process
> > only), you will not see good scaling. You can check if this is the case
> > by applying Process Id Scalars to the contour output. It will color
> > polygons based on which process generated them.
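> >
> > In the Python script, both checks could look something like this (a
> > sketch; the file name is from your runs, while the iso-value and the
> > display property names are examples and may vary slightly between
> > ParaView versions):
> >
> > import time
> > from paraview.simple import *
> >
> > reader = OpenDataFile('conc_4996800.vtr')
> >
> > # IO check: execute just the reader and time it, no filters or rendering.
> > t0 = time.time()
> > reader.UpdatePipeline()
> > print 'Read took ', time.time() - t0
> >
> > # Load balancing check: color the contour by the generating process.
> > contour = Contour(Input=reader, Isosurfaces=[0.5])  # example iso-value
> > pid = ProcessIdScalars(Input=contour)
> > rep = Show(pid)
> > rep.ColorArrayName = 'ProcessId'
> > rep.ColorAttributeType = 'POINT_DATA'
> > rep.LookupTable = MakeBlueToRedLT(0.0, 3.0)  # 0 .. nprocs-1, here 4 procs
> > Render()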
> >
> > Best,
> > -berk
> >
> >
> >
> > On Mon, Mar 25, 2013 at 10:46 AM, Dr. Olaf Ippisch
> > <olaf.ippisch at iwr.uni-heidelberg.de> wrote:
> >
> >     Dear ParaView developers and users,
> >
> >     I tried to run ParaView in parallel using a Python script. I
> >     compiled a server including OpenMPI support and support for Mesa
> >     off-screen rendering, and started the server using mpirun. Then I
> >     connected from a Python script (see attachment). I could see that
> >     there were two threads, both taking 100% CPU time. However, there
> >     was absolutely no speed-up: the runtime using two processes was
> >     exactly the same as with one. The data sets were rather large
> >     (about 100 million unknowns in 3D, 512 x 512 x 405). The result
> >     looked the same as with one process, and so was the time needed. I
> >     am sure that I am making some error either in the setup or I am
> >     missing something in the Python program. Do you have any
> >     suggestions?
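> >
> >     (For completeness: the server was started with something like
> >     "mpirun -np 2 ./pvserver --use-offscreen-rendering", and the
> >     script begins with
> >
> >     from paraview.simple import *
> >     Connect('localhost')  # connect to the pvserver started by mpirun
> >
> >     before building the pipeline.)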
> >
> >     Best regards,
> >     Olaf Ippisch
> >
>
> --
> Dr. Olaf Ippisch
> Universität Heidelberg
> Interdisziplinäres Zentrum für Wissenschaftliches Rechnen
> Im Neuenheimer Feld 368, Raum 4.24
> Tel: 06221/548252   Fax: 06221/548884
> Mail: Im Neuenheimer Feld 368, 69120 Heidelberg
> e-mail: <olaf.ippisch at iwr.uni-heidelberg.de>
>