[Paraview] Large Volumes and D3

Moreland, Kenneth kmorel at sandia.gov
Tue Aug 12 15:40:03 EDT 2008


Nope.  Sorry.  D3 wasn't designed to do that.  D3 assumes it is working with unstructured data, so it will be converting your nice 3D array into a bunch of unstructured hexahedra, which requires 576 GB in topology information and 192 GB in coordinate information alone.
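Those figures follow directly from the sizes involved. A minimal sanity check, assuming 64-bit cell IDs (8 bytes each, stored as a count plus 8 point IDs per hexahedron) and double-precision coordinates (3 x 8 bytes per point), and rounding the cell count to 2048^3:

```python
# Back-of-the-envelope check of D3's memory cost when a 2048^3
# structured volume is converted to unstructured hexahedra.
# Assumed sizes: 8-byte vtkIdType and double-precision coordinates,
# typical for a 2008-era x86_64 build.

GiB = 1024 ** 3
n = 2048                          # points per axis

points = n ** 3                   # ~8.6e9 points
# Connectivity: (count + 8 point ids) per cell, 8 bytes each;
# the cell count (2047^3) is rounded to n^3 for round numbers.
topology_bytes = points * 9 * 8
coordinate_bytes = points * 3 * 8

print(topology_bytes // GiB)      # -> 576
print(coordinate_bytes // GiB)    # -> 192
```

That is where the 576 GB and 192 GB come from, and it is before any filter output or rendering structures are counted.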

D3 does not handle structured data because it is assumed that structured data will already be partitioned well.  It is typically the responsibility of the reader to respond to extent requests correctly.  The best case scenario is for the reader to use parallel HDF5 to concurrently read the volume and divide it amongst processes.
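To make the extent idea concrete, here is an illustrative sketch (plain Python, not ParaView's actual pipeline API) of how a structured reader can partition the whole extent among pvserver processes so that each rank reads only its own slab. Extents follow VTK's inclusive [imin, imax, jmin, jmax, kmin, kmax] convention; the function name `slab_extent` is my own.

```python
# Sketch: split a whole extent into per-rank slabs along the k axis,
# the way a parallel structured reader answers update-extent requests.
# Uses VTK's inclusive [imin, imax, jmin, jmax, kmin, kmax] layout.

def slab_extent(whole_extent, rank, num_ranks):
    imin, imax, jmin, jmax, kmin, kmax = whole_extent
    nk = kmax - kmin + 1
    # Split the k axis as evenly as possible; earlier ranks absorb
    # any remainder slices.
    base, rem = divmod(nk, num_ranks)
    start = kmin + rank * base + min(rank, rem)
    size = base + (1 if rank < rem else 0)
    return (imin, imax, jmin, jmax, start, start + size - 1)

whole = (0, 2047, 0, 2047, 0, 2047)
for rank in range(4):
    print(rank, slab_extent(whole, rank, 4))
# Each of 4 ranks gets a 2048 x 2048 x 512 slab (8 GB / 4 = 2 GB of
# unsigned chars each), which a parallel HDF5 read can fetch as a
# hyperslab -- no redistribution filter needed afterward.
```

With a reader that behaves this way, the data arrives already partitioned and D3 never enters the picture.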

-Ken

> -----Original Message-----
> From: paraview-bounces at paraview.org [mailto:paraview-bounces at paraview.org]
> On Behalf Of Kent Eschenberg
> Sent: Tuesday, August 12, 2008 9:18 AM
> To: ParaView
> Subject: [Paraview] Large Volumes and D3
>
> It seems this should work: read 1 array from 1 HDF5 file then distribute
> it across nodes with D3.
>
> The array is a simple volume of 2048^3, the data type is unsigned char and
> the file is 8 GB. It's about as simple as it gets!
>
> ParaView crashes when I use D3. Before it crashed, the total memory usage
> for pvserver (i.e., the sum across all 4 nodes) climbed to more than 72 GB
> (virtual) and 40 GB (resident). And I had not yet done any visualization.
>
> ParaView CVS 6/25/2008
> CentOS 5
> 4 nodes
> each node has 16 GB and two quad Xeons (x86_64)
> pvserver using 4 processes, one per node
> HDF5_ENABLE_PARALLEL:BOOL=OFF
>
> It seems there is a fatal design flaw that shows up when working with a
> large volume. Comments?
>
> Kent
> Pittsburgh Supercomputing Center
> _______________________________________________
> ParaView mailing list
> ParaView at paraview.org
> http://www.paraview.org/mailman/listinfo/paraview
