Hi,

ParaView is a great and handy tool. I am now trying to scale it to large data sets and have
a question related to the parallel capabilities of the ParaView XDMF reader.

We use a PIC code that divides its grid into blocks distributed over the ranks.
For the output, every rank writes exactly one HDF5 file, collecting all of its local blocks and all time steps.
One XDMF file per rank is written to describe the data, plus a central XDMF file that includes all the individual ones.

The hierarchy is:
    central XDMF:    spatial_collection -> includes_for_rank_xdmf_files
    rank XDMF files: temporal_collection -> spatial_collection_within_ranks -> grids

(Minimal sketches of this layout, and of the flattened variant mentioned below, are appended at the end of this mail.)

The simulation currently runs with about 1024 ranks and 256 time steps, but both numbers should be increased.
For these parameters we see (via top) a memory consumption of 16 GB per pvserver instance,
directly after opening the file, so even before "Apply".

I guess this is because every pvserver instance reads and parses the complete XDMF tree?
Once, the paths to the HDF5 files were wrong, and the memory consumption was still the same;
the error only showed up after "Apply". So the memory apparently goes into the XDMF metadata alone, not into the HDF5 data.

I tried "PV_USE_TRANSMIT=1" and also changed the grid hierarchy to only:

    temporal_collection -> spatial_collection -> grids

all in a single file, which ended up at about 1 GB on disk and around 16 GB in memory when parsed with libxml2 via Python.

But it was to no avail; every instance was still consuming around 16 GB.

Is there any chance that pvserver can parallelize at the top level, so that every pvserver instance reads only some of the "include" files?
Or is there a different approach to storing the grid patches (currently all of the same resolution) in HDF5?

All suggestions are highly appreciated :-)
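For reference, here is a minimal sketch of the current two-level layout. All file names, grid names, dataset paths, and block dimensions below are made up for illustration; the per-rank files are pulled in via XInclude:

central.xmf:

    <?xml version="1.0" ?>
    <Xdmf Version="2.0" xmlns:xi="http://www.w3.org/2001/XInclude">
      <Domain>
        <Grid Name="AllRanks" GridType="Collection" CollectionType="Spatial">
          <xi:include href="rank_0000.xmf" xpointer="xpointer(//Xdmf/Domain/Grid)"/>
          <xi:include href="rank_0001.xmf" xpointer="xpointer(//Xdmf/Domain/Grid)"/>
          <!-- ... one include per rank, 1024 in total ... -->
        </Grid>
      </Domain>
    </Xdmf>

rank_0000.xmf (one such file per rank):

    <?xml version="1.0" ?>
    <Xdmf Version="2.0">
      <Domain>
        <Grid Name="Rank0" GridType="Collection" CollectionType="Temporal">
          <Grid Name="Step0" GridType="Collection" CollectionType="Spatial">
            <Time Value="0.0"/>
            <Grid Name="Block0" GridType="Uniform">
              <!-- 65^3 nodes = 64^3 cells per block (illustrative size) -->
              <Topology TopologyType="3DCoRectMesh" Dimensions="65 65 65"/>
              <Geometry GeometryType="ORIGIN_DXDYDZ">
                <DataItem Dimensions="3" Format="XML">0.0 0.0 0.0</DataItem>
                <DataItem Dimensions="3" Format="XML">0.1 0.1 0.1</DataItem>
              </Geometry>
              <Attribute Name="density" AttributeType="Scalar" Center="Cell">
                <!-- heavy data stays in the per-rank HDF5 file -->
                <DataItem Dimensions="64 64 64" NumberType="Float" Precision="4"
                          Format="HDF">rank_0000.h5:/step_0/block_0/density</DataItem>
              </Attribute>
            </Grid>
            <!-- ... remaining local blocks of this rank ... -->
          </Grid>
          <!-- ... 256 time steps in total ... -->
        </Grid>
      </Domain>
    </Xdmf>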
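And the flattened variant I tried, everything in one file with no includes (again only a sketch):

    <?xml version="1.0" ?>
    <Xdmf Version="2.0">
      <Domain>
        <Grid Name="TimeSeries" GridType="Collection" CollectionType="Temporal">
          <Grid Name="Step0" GridType="Collection" CollectionType="Spatial">
            <Time Value="0.0"/>
            <!-- all blocks of all ranks for this step, as Uniform grids like above -->
          </Grid>
          <!-- ... one spatial collection per time step, 256 in total ... -->
        </Grid>
      </Domain>
    </Xdmf>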
Thank you all very much for any support,
Best regards
--
Dipl.-Phys. Karl-Ulrich Bamberg

Ludwig-Maximilians-Universität München
Arnold-Sommerfeld-Center (ASC)
Computational & Plasma Physics
Theresienstr. 37, D-80333 München

phone: +49 (0)89 2180 4577
fax: +49 (0)89 2180 99 4577
e-mail: Karl-Ulrich.Bamberg@physik.uni-muenchen.de