[Paraview] Xdmf data duplication

Utkarsh Ayachit utkarsh.ayachit at kitware.com
Thu Aug 18 11:46:38 EDT 2011


Oops, my bad! I missed that the original patch I sent skipped reading
data on other processes even in the multi-grid case. I've pushed a fix.
Attached is the corrected patch (apply it to a clean version of Xdmf,
without the previous patch).

Utkarsh

On Wed, Aug 17, 2011 at 11:38 AM, Paul Melis <paul.melis at sara.nl> wrote:
> Hi Utkarsh,
>
> On 08/16/2011 06:22 PM, Utkarsh Ayachit wrote:
>> If you're writing out data that is already partitioned, you should
>> write it out as a collection of grids. Then each grid in that
>> collection is read on a separate partition.
>
> I followed your advice, but I seem to have hit another bug, this time
> with nodes not reading/showing their part of the collection. See the
> attached Xdmf file (a structural sketch follows the quoted message
> below): it contains a temporal collection of spatial collections, with
> only a single timestep, since one step is enough to show the incorrect
> behaviour. Each spatial collection consists of 16 unstructured grids,
> read from 16 HDF5 files.
>
> The behaviour I'm seeing when loading this dataset in an 8-process PV
> session is that only 1/8th of the data actually shows up. Looking at
> the process ID scalars, the range is only [0, 0], and only 1 out of 8
> of the "hypercolumn-?" sets in the multi-block dataset shows any cells
> and points. This is with PV patched with the code you sent earlier,
> btw. Loading the same set in a local PV session works fine.
>
> I can upload the corresponding HDF5 files if needed, but I'm afraid
> they are not public.
>
> Regards,
> Paul
>
>
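For reference, below is a minimal sketch of the Xdmf layout under
discussion: a temporal collection containing a single spatial
collection, whose member grids are each read on a separate partition
in a parallel run. All grid names, HDF5 file names, dataset paths and
dimensions are hypothetical placeholders, not the actual data from
this thread:

<?xml version="1.0" ?>
<Xdmf Version="2.0">
  <Domain>
    <!-- Temporal collection with a single timestep -->
    <Grid Name="TimeSeries" GridType="Collection" CollectionType="Temporal">
      <!-- Spatial collection: one member grid per partition -->
      <Grid Name="Step0" GridType="Collection" CollectionType="Spatial">
        <Time Value="0.0" />
        <!-- First of 16 unstructured grids, each backed by its own HDF5 file -->
        <Grid Name="hypercolumn-0" GridType="Uniform">
          <Topology TopologyType="Tetrahedron" NumberOfElements="1000">
            <DataItem Dimensions="1000 4" NumberType="Int" Format="HDF">
              part-0.h5:/topology
            </DataItem>
          </Topology>
          <Geometry GeometryType="XYZ">
            <DataItem Dimensions="500 3" NumberType="Float" Format="HDF">
              part-0.h5:/geometry
            </DataItem>
          </Geometry>
        </Grid>
        <!-- ... hypercolumn-1 through hypercolumn-15 follow the same pattern ... -->
      </Grid>
    </Grid>
  </Domain>
</Xdmf>

With a layout like this, each member grid should end up on a separate
partition in a parallel ParaView session, rather than the whole dataset
being duplicated on every process.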
-------------- next part --------------
A non-text attachment was scrubbed...
Name: fix.patch
Type: text/x-patch
Size: 1364 bytes
Desc: not available
URL: <http://www.paraview.org/pipermail/paraview/attachments/20110818/ac162892/attachment.bin>

