[Paraview] Volume Rendering 17GB 8.5 billion cell volume

Berk Geveci berk.geveci at kitware.com
Mon Sep 14 11:33:40 EDT 2015


Hi Niklas,

Your problem is different. It looks like one of the arrays in your data
requires 268 GB of memory. I expect that this is not the only data array, so
the total dataset is much bigger still. You are going to need to do this in
parallel. I would not recommend more than 100M elements per node, and even
that is on the really high end, assuming you have multiple MPI ranks running
per node.
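
A rough sketch of what that guideline works out to for the datasets in this
thread (the cell counts are the ones reported here; the helper is just
illustrative arithmetic):

    import math

    def nodes_needed(total_cells, cells_per_node=100e6):
        """Minimum node count if every node holds at most ~100M cells."""
        return int(math.ceil(total_cells / cells_per_node))

    print(nodes_needed(3.4e9))   # Niklas's unstructured grid -> 34 nodes
    print(nodes_needed(8.5e9))   # the 8.5 billion cell volume -> 85 nodes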

Best,
-berk

On Mon, Sep 14, 2015 at 11:28 AM, Niklas Röber <roeber at dkrz.de> wrote:

> Hey,
>
> did you find a solution for this? I have a slightly smaller data set of
> around 3.4 billion cells in a 3D unstructured grid volume (climate
> simulation data). It reads and displays the 2D slices (22 million cells
> each) fine and quite fast, but when I want to load all 150 slices, I get
> this error message:
>
> ERROR: In /tmp/xas/build/paraview_paraview_4.3.1_default_gcc48/src/ParaView/VTK/Common/Core/vtkDataArrayTemplate.txx, line 308
> vtkIdTypeArray (0x3cecaf0): Unable to allocate 33587986431 elements of size 8 bytes.
>
> ParaView consumes around 2/3 of the system's memory by this time.
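
For scale, multiplying the numbers in that error message out (plain
arithmetic, nothing ParaView-specific):

    elements = 33587986431               # count from the vtkIdTypeArray error above
    size_bytes = 8                       # vtkIdType is 8 bytes on a 64-bit build
    print(elements * size_bytes / 1e9)   # ~268.7 GB for this one array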
>
> Cheers,
> Niklas
>
> > Aashish,
> >
> > (sorry - didn't hit reply-all first time)
> >
> >> Would it be possible for you to try the OpenGL2 backend?
> > Yes - I can try this, but probably next week. I just change
> > VTK_RENDERING_BACKEND? Do you know if OSMESA has to be built with any
> > particular flags itself?
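
Not an answer to the CMake question, but one way to confirm afterwards which
render-window implementation and OpenGL capabilities a given build actually
provides, from pvpython or any Python that can import that build's VTK (a
sketch using standard VTK calls; the exact class name printed depends on the
build):

    import vtk

    rw = vtk.vtkRenderWindow()      # the factory picks this build's window class
    rw.OffScreenRenderingOn()
    rw.Render()                     # forces creation of a GL context
    print(rw.GetClassName())        # an OSMesa build reports an off-screen class
    print(rw.ReportCapabilities())  # OpenGL vendor / renderer / version strings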
> >
> > Thanks,
> >
> > DT
> >
> >
> > ________________________________________
> > From: Aashish Chaudhary [aashish.chaudhary at kitware.com]
> > Sent: Thursday, September 10, 2015 9:59 AM
> > To: David Trudgian
> > Cc: Berk Geveci; ParaView list
> > Subject: Re: [Paraview] Volume Rendering 17GB 8.5 billion cell volume
> >
> > Thanks Dave. Haven't looked at your email in detail (will do in a
> > moment), but another thought is that we may be hitting some limit on the
> > indices (MAX_INT or MAX_<TYPE>) when dealing with a very large dataset
> > such as yours.
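
A quick check of that hypothesis against the sizes in this thread, taking the
cell count from the subject line (these are just the standard 32-bit integer
limits):

    cells = 8.5e9                 # full-size volume from the subject line
    print(cells > 2**31 - 1)      # True: too many for a signed 32-bit index
    print(cells > 2**32 - 1)      # True: too many even for an unsigned one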
> >
> > Would it be possible for you to try the OpenGL2 backend?
> >
> > - Aashish
> >
> > On Thu, Sep 10, 2015 at 10:55 AM, David Trudgian
> > <david.trudgian at utsouthwestern.edu> wrote:
> > Berk (and others), thanks for your replies!
> >
> >> This is pretty awesome. I am assuming that this has something to do with
> >> things not fitting on the GPU memory or exceeding some texture memory
> >> limitation. Can you provide some more details?
> > Sure - thanks for your help.
> >
> >> * Which version of ParaView are you using?
> > This is with ParaView 4.3.1
> >
> >> * It sounds like you have multiple GPUs and multiple nodes. What is the
> >> setup? Are you running in parallel with MPI?
> > I have tried two setups, both using MPI (OpenMPI/1.8.3 on an InfiniBand
> > FDR network):
> >
> > Setup 1) ParaView 4.3.1 pvserver running with MPI across multiple cluster
> > nodes (up to 4 nodes total), each with a single Tesla K20 GPU. Have used
> > various numbers of MPI tasks. The machines have 16 physical cores with
> > hyper-threading on for 32 logical cores, 256GB RAM, and the Tesla K20 has
> > 5GB.
> >
> > ... when this didn't work we suspected we were running out of GPU memory.
> > Since we have a limited number of GPU nodes, we decided to try the CPU
> > approach...
> >
> > Setup 2) ParaView 4.3.1 rebuilt with OSMESA support, to run pvserver on a
> > larger number of cluster nodes without any GPUs. These are 16 or 24 core
> > machines with 128/256/384GB RAM. Tried various numbers of nodes (up to 16)
> > and MPI tasks per node, allowing for OSMESA threading per the docs/graphs
> > on the ParaView wiki page.
> >
> > Watching the pvserver processes when running across 16 nodes I wasn't
> > seeing more than ~2GB RAM usage per process. Across 16 nodes I ran with 8
> > tasks per node, so at 2GB each this is well under the minimum of 128GB RAM
> > per node.
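
Spelled out, the observed footprint versus the available RAM, using the
numbers reported above:

    ranks_per_node = 8
    gb_per_rank = 2
    nodes = 16
    print(ranks_per_node * gb_per_rank)          # ~16 GB in use per node (of 128 GB)
    print(nodes * ranks_per_node * gb_per_rank)  # ~256 GB in use across the cluster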
> >
> >> * If you are running parallel with MPI and you have multiple GPUs per
> >> node, did you setup the DISPLAYs to leverage the GPUs?
> > As above, only 1 GPU per node, or none once we switched to the OSMESA
> > approach to try across more nodes.
> >
> > As mentioned before, we can view a smaller version of the data without
> > issue on both GPU and OSMESA setups. I just opened a 4GB version (approx
> > 25% of full size) using the OSMESA setup on a single node (8 MPI tasks)
> > without issue. The responsiveness is really great - but the 16GB file is
> > a no-go even scaling up across 16 nodes. The VTI itself seems fine, as
> > slices and surface look as expected.
> >
> > Thanks again for any and all suggestions!
> >
> > DT
> >
> >> On Wed, Sep 9, 2015 at 5:00 PM, David Trudgian
> >> <david.trudgian at utsouthwestern.edu> wrote:
> >>
> >>> Hi,
> >>>
> >>> We have been experimenting with using ParaView to display very large
> >>> volumes from very large TIFF stacks generated by whole-brain microscopy
> >>> equipment. The test stack has dimensions of 5,368x10,695x150. The stack
> >>> is assembled in ImageJ from individual TIFFs, exported as RAW, loaded
> >>> into ParaView, and saved as a .vti for convenience. We can view slices
> >>> fine in the standalone ParaView client on a 256GB machine.
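
For reference, those dimensions multiplied out; the 2 bytes per voxel is an
inference from the ~17GB file size, not something stated explicitly:

    nx, ny, nz = 5368, 10695, 150
    points = nx * ny * nz                     # ~8.61e9 voxels
    cells = (nx - 1) * (ny - 1) * (nz - 1)    # ~8.55e9 cells, the "8.5 billion"
    print(points * 2 / 1e9)                   # ~17.2 GB at 2 bytes per voxel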
> >>>
> >>> When we attempt volume rendering on this data across multiple nodes with
> >>> MPI, nothing appears in the client. Surface view works as expected, but
> >>> on switching to volume rendering the client's display shows nothing.
> >>> There are no messages from the client or servers - no output.
> >>>
> >>> This is happening when running pvserver across GPU nodes with NVIDIA
> >>> Tesla cards, or using CPU only with OSMESA. pvserver memory usage is
> >>> well below what we have on the nodes - no memory warnings/errors.
> >>>
> >>> Data is about 17GB, 8 billion cells. If we downsize to ~4GB or ~9GB then
> >>> we can get working volume rendering. The 17GB version never works
> >>> regardless of how we scale nodes/MPI processes. The 4GB/9GB versions
> >>> work on 1 or 2 nodes.
> >>>
> >>> I am confused by the lack of rendering, as we don't have memory issues
> >>> or any other messages at all. I am wondering if there is some inherent
> >>> limitation, or if I'm missing something stupid.
> >>>
> >>> Thanks,
> >>>
> >>> Dave Trudgian
> >>>
> >>>
> > --
> > David Trudgian Ph.D.
> > Computational Scientist, BioHPC
> > UT Southwestern Medical Center
> > Dallas, TX 75390-9039
> > Tel: (214) 648-4833
> >
> >
> >
> >
> > --
> > | Aashish Chaudhary
> > | Technical Leader
> > | Kitware Inc.
> > | http://www.kitware.com/company/team/chaudhary.html
> >
> >
>
>
>
>
>