[vtkusers] MPI application writing PVTI
Favre Jean
jfavre at cscs.ch
Fri Jun 12 02:36:00 EDT 2015
I personally prefer to dump distributed data with MPI-IO into a single file of raw data, and then use an Xdmf wrapper to read that raw data back; I find it much simpler to specify extents, etc. (a minimal sketch of such an MPI-IO write is given after the XML example below).
The other advantage of using Xdmf is that I can read it with at least two different open-source visualization applications. My Xdmf light-weight data is the following, for a single attribute, written in parallel by an application running on 32 MPI tasks partitioned into an 8x4 Cartesian MPI grid, each task writing its own 64^3 block.
Please note the Format="Binary" attribute.
<?xml version="1.0" ?>
<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []>
<Xdmf xmlns:xi="http://www.w3.org/2003/XInclude" Version="2.2">
  <Domain>
    <Grid Name="Jacobi Mesh" GridType="Uniform">
      <Topology TopologyType="3DCORECTMESH" Dimensions="64 256 512"/>
      <Geometry GeometryType="ORIGIN_DXDYDZ">
        <DataItem Name="Origin" NumberType="Float" Dimensions="3" Format="XML">0. 0. 0.</DataItem>
        <DataItem Name="Spacing" NumberType="Float" Dimensions="3" Format="XML">1. 1. 1.</DataItem>
      </Geometry>
      <Attribute Name="Temperature" Active="1" AttributeType="Scalar" Center="Node">
        <DataItem Dimensions="64 256 512" NumberType="Float" Precision="4" Format="Binary">bench_raw.0000</DataItem>
      </Attribute>
    </Grid>
  </Domain>
</Xdmf>
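
For reference, here is a minimal sketch of the write side (not the actual application code): it assumes the 64x256x512 global grid from the Xdmf above, 64^3 per-task blocks, single-precision floats, and a hypothetical rank-to-block mapping for the 8x4 process grid; substitute the layout of your own Cartesian communicator.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* global and local sizes, slowest-varying dimension first (Z Y X),
       matching the Xdmf Dimensions="64 256 512" above */
    int gsizes[3] = {64, 256, 512};
    int lsizes[3] = {64,  64,  64};      /* each task owns a 64^3 block */

    /* hypothetical mapping of rank -> block position in the 8x4 (X x Y)
       process grid; adapt this to your own Cartesian communicator */
    int starts[3] = {0, (rank / 8) * 64, (rank % 8) * 64};

    MPI_Datatype filetype;
    MPI_Type_create_subarray(3, gsizes, lsizes, starts,
                             MPI_ORDER_C, MPI_FLOAT, &filetype);
    MPI_Type_commit(&filetype);

    float *block = malloc((size_t)64 * 64 * 64 * sizeof(float));
    /* ... fill block with the local solution values ... */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "bench_raw.0000",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_FLOAT, filetype, "native", MPI_INFO_NULL);
    MPI_File_write_all(fh, block, 64 * 64 * 64, MPI_FLOAT,
                       MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Type_free(&filetype);
    free(block);
    MPI_Finalize();
    return 0;
}

Each rank writes its block collectively into the correct offsets of the single raw file, and the Xdmf DataItem above simply points at the result.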
Give Xdmf a try. ParaView will read it in parallel without any problem (using the Xdmf2 reader).
-----------------
Jean M. Favre
Swiss National Supercomputing Center
CH-6900 Lugano
Switzerland