<div dir="ltr">Hi Karl,<div><br></div><div>Sorry for the lag in response.</div><div><br></div><div>We spent quite a bit of time looking at this issue when we were making the xdmf3 reader.</div><div><br></div><div>The new xdmf library's treatment of the data structures that pertain to the xml contents it much more efficient than the old version. </div><div><br></div><div>Also, at the ParaView level there you have two choices of how to read and xdmf file with the new reader. </div><div><br></div><div>The "(Top Level Parition)" is meant for this case. It makes it so that that every node opens its own child xdmf files. That way no memory is spent on the xml structured for contents they are not responsible for the hdf5 data for.</div><div><br></div><div>hth</div><div><br></div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature">David E DeMarle<br>Kitware, Inc.<br>R&D Engineer<br>21 Corporate Drive<br>Clifton Park, NY 12065-8662<br>Phone: 518-881-4909</div></div>
<br><div class="gmail_quote">On Wed, Dec 3, 2014 at 7:05 PM, Karl-Ulrich Bamberg <span dir="ltr"><<a href="mailto:Karl-Ulrich.Bamberg@physik.uni-muenchen.de" target="_blank">Karl-Ulrich.Bamberg@physik.uni-muenchen.de</a>></span> wrote:<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word">Hi,<div><br></div><div>Paraview is a great and handy tool. I am now trying to scale it to large data sets and have</div><div>a question related to the parallel capabilities of the Paraview-XDMF-Reader:</div><div><br></div><div>We use a PIC-Code that devides its grid into blocks distributed over the ranks.</div><div>For the output every rank writes exactly one HDF5, collecting all local blocks and all time steps.</div><div>One XDMF per rank was written describing the data, as well as a central XDMF including all the individual ones.</div><div><br></div><div>The hierarchy is </div><div><span style="white-space:pre-wrap"> </span>central-xdmf: spatial_collection->includes_for_rank_xdmf_files</div><div><span style="white-space:pre-wrap"> </span>rank_xdmf_files: temporal_collection->spatial_collectio_within_ranks->grids</div><div><br></div><div>The size of the simulation is about 1024 ranks and 256 time steps but should be increased.</div><div>For these parameters we see (via top) a memory consumption of 16GB per pvserver instance.</div><div>Directly after opening of the file, so even before "apply". </div><div><br></div><div>I guess that this is because all the pvserver instances read and parse the XDMF file?</div><div>One time the paths to the HDF5 files were wrong, the memory consumption was the same.</div><div>After "apply" there was than an error.</div><div><br></div><div>I tried "PV_USE_TRANSMIT=1" and also changed the grid hierarchy to only have:</div><div>temporal_collection->spatial_collection->grids</div><div><br></div><div>This directly in one file, that was finally 1GB on disk and around 16GB in memory with lxml2 via python.</div><div><br></div><div>But it was to no effort, still every instance was consuming around 16GB</div><div><br></div><div>Is there any chance that pvserver can parallelize on the top-level so every pvserver instance only reads some of the "include" files.</div><div>Or is there a different approach to store the grid-patches (all the same resolution right now) in HDF5?</div><div><br></div><div>All suggestions are highly appreciated :-)</div><div><br></div><div>Thank you all very much for any support,</div><div>Best regards</div><div>--</div><div><span style="border-collapse:separate;color:rgb(0,0,0);font-family:Helvetica;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;font-size:medium"><span style="border-collapse:separate;color:rgb(0,0,0);font-family:Helvetica;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;font-size:medium"><div style="word-wrap:break-word"><div>Dipl.-Phys. Karl-Ulrich Bamberg<br></div><br><div>Ludwig-Maximilians-Universität München<br></div><div>Arnold-Sommerfeld-Center (ASC)<br></div><div>Computational & Plasma Physics<br></div><div>Theresienstr. 
37, D-80333 München<br></div><br><div>phone: <a href="tel:%2B49%20%280%2989%202180%204577" value="+498921804577" target="_blank">+49 (0)89 2180 4577</a><br></div><div>fax: <a href="tel:%2B49%20%280%2989%202180%2099%204577" value="+49892180994577" target="_blank">+49 (0)89 2180 99 4577</a><br></div><div>e-mail: <a href="mailto:Karl-Ulrich.Bamberg@physik.uni-muenchen.de" target="_blank">Karl-Ulrich.Bamberg@physik.uni-muenchen.de</a></div></div></span></span>
</div>
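[Editor's note: for readers following along, here is a minimal sketch of the kind of central XDMF file the quoted layout describes: a spatial collection whose children are pulled in via xi:include, one per rank file. The filenames (rank_0000.xmf and so on), the Xdmf version attribute, and the xpointer expression are illustrative assumptions, not the poster's actual output.]

# Sketch of writing a central XDMF with one xi:include per rank file.
nranks = 1024

with open("central.xmf", "w") as f:
    f.write('<?xml version="1.0" ?>\n')
    f.write('<Xdmf Version="2.0" xmlns:xi="http://www.w3.org/2001/XInclude">\n')
    f.write('  <Domain>\n')
    f.write('    <Grid GridType="Collection" CollectionType="Spatial">\n')
    for r in range(nranks):
        # Each child file holds that rank's temporal collection of grids.
        f.write('      <xi:include href="rank_%04d.xmf" '
                'xpointer="xpointer(//Xdmf/Domain/Grid)"/>\n' % r)
    f.write('    </Grid>\n')
    f.write('  </Domain>\n')
    f.write('</Xdmf>\n')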