Hi Olaf,

From your previous message, I am assuming that you are using .vtr files. In
this case, the processing should scale. If you can make example files
available, I can verify this. Feel free to e-mail them to me directly, or I
can download them from somewhere if they are too big. The two potential
problems are:
- I/O. You still have only one disk if you are not running this on a
cluster. If the processing ParaView does is negligible compared to the time
it takes to read the data, you will not see good scaling of the whole
script as you add more processes.
- Load balancing. ParaView uses static load balancing when running in
parallel. So if the partitioning is not well balanced with respect to
iso-surfacing (e.g. most of the iso-surface is generated by one process
only), you will not see good scaling. You can check whether this is the
case by applying the Process Id Scalars filter to the contour output; it
colors the polygons by the rank of the process that generated them (see the
sketch below).
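
In case it is useful, here is a rough pvpython sketch of both checks: timing
the read separately from the iso-surfacing, and tagging the contour output
with Process Id Scalars. The file name, array name, and isovalue are
placeholders, and exact proxy/property names can differ a bit between
ParaView versions, so treat it as a sketch rather than a drop-in script:

    # Sketch only: run with e.g. "mpirun -np 2 pvbatch check_scaling.py",
    # or from a pvpython client after Connect()-ing to a parallel pvserver.
    # File name, array name and isovalue below are hypothetical.
    import time
    from paraview.simple import *

    # Time the reader on its own to see how much of the runtime is pure I/O.
    reader = XMLRectilinearGridReader(FileName=['example.vtr'])
    t0 = time.time()
    reader.UpdatePipeline()
    print('read time: %.2f s' % (time.time() - t0))

    # Time the iso-surfacing step separately.
    contour = Contour(Input=reader, ContourBy=['POINTS', 'pressure'],
                      Isosurfaces=[0.5])
    t0 = time.time()
    contour.UpdatePipeline()
    print('contour time: %.2f s' % (time.time() - t0))

    # Tag each piece of the iso-surface with the rank that produced it.
    # Color the result by the 'ProcessId' array (e.g. in the GUI): an even
    # mix of values means the contouring work is well balanced across ranks.
    pid = ProcessIdScalars(Input=contour)
    pid.UpdatePipeline()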
Best,
-berk
On Mon, Mar 25, 2013 at 10:46 AM, Dr. Olaf Ippisch <olaf.ippisch at iwr.uni-heidelberg.de> wrote:
> Dear ParaView developers and users,
>
> I tried to run ParaView in parallel using a Python script. I compiled a
> server including OpenMPI support and support for Mesa off-screen
> rendering and started the server using mpirun. Then I connected from a
> Python script (see attachment). I could see that there were two threads,
> both taking 100% CPU time. However, there was absolutely no speed-up.
> The runtime using two processors was exactly the same. The data sets
> were rather large (about 100 million unknowns in 3D, 512 x 512 x 405).
> The result looked like the result with one process, but the time needed
> was also the same. I am sure that I am either making some error in the
> setup or missing something in the Python program. Do you have any
> suggestions?
>
> Best regards,
> Olaf Ippisch
>
> --
> Dr. Olaf Ippisch
> Universität Heidelberg
> Interdisziplinäres Zentrum für Wissenschaftliches Rechnen
> Im Neuenheimer Feld 368, Raum 4.24
> Tel: 06221/548252 Fax: 06221/548884
> Mail: Im Neuenheimer Feld 368, 69120 Heidelberg
> e-mail: <olaf.ippisch at iwr.uni-heidelberg.de>
>
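
For reference, here is a minimal sketch of the kind of setup you describe: a
pvserver started under mpirun, and a Python client connecting to it. The
host, port, flag names, and file name below are assumptions that depend on
how your server was built and launched:

    # Sketch only. Start the parallel server first, for example:
    #   mpirun -np 2 pvserver --use-offscreen-rendering
    # (flag names can vary between ParaView builds; adjust to your install)
    from paraview.simple import *

    # Connect the Python client to the running server (default port 11111).
    Connect('localhost', 11111)

    # Sources created after Connect() live on the server ranks, so the .vtr
    # file is read and processed in parallel by the pvserver processes.
    reader = XMLRectilinearGridReader(FileName=['example.vtr'])  # hypothetical
    reader.UpdatePipeline()
    print(reader.GetDataInformation().GetNumberOfPoints())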