<div dir="ltr">Hi Niklas,<div><br></div><div>Your problem is different. It looks like one of the arrays in your data requires 268GBs of memory. I expect that this is not the only data array so the total dataset should be much bigger. You are going to need to do this in parallel. I would not recommend more than 100M elements / node and that's on the really high end where I assume that you have multiple MPI ranks running per node.</div><div><br></div><div>Best,</div><div>-berk</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Sep 14, 2015 at 11:28 AM, Niklas Röber <span dir="ltr"><<a href="mailto:roeber@dkrz.de" target="_blank">roeber@dkrz.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hey,<br>
<br>
did you find a solution for this? I have a somewhat smaller data set of<br>
around 3.4 billion cells in a 3D unstructured grid volume (climate<br>
simulation data). It reads and displays the 2D slices (22 million cells<br>
each) fine and quite fast, but when I try to load all 150 slices, I get<br>
this error message:<br>
<br>
ERROR: In<br>
/tmp/xas/build/paraview_paraview_4.3.1_default_gcc48/src/ParaView/VTK/Common/Core/vtkDataArrayTemplate.txx,<br>
line 308<br>
vtkIdTypeArray (0x3cecaf0): Unable to allocate 33587986431 elements of<br>
size 8 bytes.<br>
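<br>
For reference, the 268 GB figure in the reply at the top of this thread follows directly from this error. Below is a minimal back-of-the-envelope sketch (Python); the last line rests on the assumption that the "100M elements per node" guideline refers to cells, which is an interpretation rather than something stated in the thread.<br>
<br>
# Illustrative sketch only; the numbers are taken from the error message and this thread.<br>
elements = 33587986431                      # elements the vtkIdTypeArray tried to allocate<br>
bytes_per_element = 8                       # size reported in the error<br>
print(elements * bytes_per_element / 1e9)   # ~268.7 GB for this single array alone<br>
cells = 3.4e9                               # ~3.4 billion cells in the unstructured grid<br>
print(cells / 100e6)                        # ~34 nodes, if "elements per node" means cells (assumption)<br>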
<br>
ParaView consumes around 2/3 of the system's memory by this time.<br>
<br>
Cheers,<br>
Niklas<br>
<div><div class="h5"><br>
> Aashish,<br>
><br>
> (sorry - didn't hit reply-all first time)<br>
><br>
>> Would it be possible for you to try OpenGL2 backend?<br>
> Yes - I can try this, but probably next week. Do I just change VTK_RENDERING_BACKEND? Do you know if OSMESA has to be built with any particular flags itself?<br>
><br>
> Thanks,<br>
><br>
> DT<br>
><br>
><br>
> ________________________________________<br>
> From: Aashish Chaudhary [<a href="mailto:aashish.chaudhary@kitware.com">aashish.chaudhary@kitware.com</a>]<br>
> Sent: Thursday, September 10, 2015 9:59 AM<br>
> To: David Trudgian<br>
> Cc: Berk Geveci; ParaView list<br>
> Subject: Re: [Paraview] Volume Rendering 17GB 8.5 billion cell volume<br>
><br>
> Thanks Dave. I haven't looked at your email in detail (will do in a moment), but another thought would be some sort of limit we are hitting on the indices (MAX_INT or MAX_<TYPE>) being used when dealing with very large datasets such as yours.<br>
><br>
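> As a rough, purely illustrative check of that idea, here is a small Python sketch using the 5,368x10,695x150 stack dimensions quoted further down in this thread; treating the signed 32-bit maximum as the relevant threshold is an assumption, not something confirmed here.<br>
><br>
> # Hypothetical sketch; not part of the original exchange.<br>
> nx, ny, nz = 5368, 10695, 150             # dimensions of the full TIFF stack<br>
> n_cells = (nx - 1) * (ny - 1) * (nz - 1)  # ~8.55 billion cells (the "8.5 billion" in the subject)<br>
> print(n_cells)                            # 8551810002<br>
> print(n_cells > 2**31 - 1)                # True: overflows a signed 32-bit index<br>
> print(n_cells > 2**63 - 1)                # False: a 64-bit vtkIdType can still address it<br>
><br>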
> Would it be possible for you to try OpenGL2 backend?<br>
><br>
> - Aashish<br>
><br>
> On Thu, Sep 10, 2015 at 10:55 AM, David Trudgian <<a href="mailto:david.trudgian@utsouthwestern.edu">david.trudgian@utsouthwestern.edu</a>> wrote:<br>
> Berk (and others), thanks for your replies!<br>
><br>
>> This is pretty awesome. I am assuming that this has something to do with<br>
>> things not fitting on the GPU memory or exceeding some texture memory<br>
>> limitation. Can you provide some more details?<br>
> Sure - thanks for your help.<br>
><br>
>> * Which version of ParaView are you using?<br>
> This is with Paraview 4.3.1<br>
><br>
>> * It sounds like you have multiple GPUs and multiple nodes. What is the<br>
>> setup? Are you running in parallel with MPI?<br>
> I have tried this in two ways, both using MPI (OpenMPI/1.8.3 on an InfiniBand<br>
> FDR network):<br>
><br>
> Setup 1) ParaView 4.3.1 pvserver running with MPI across multiple cluster<br>
> nodes (up to 4 total), each with a single Tesla K20 GPU. I have used various<br>
> numbers of MPI tasks. The machines have 16 physical cores (32 logical cores<br>
> with hyper-threading) and 256GB RAM, and each Tesla K20 has 5GB.<br>
><br>
> ... when this didn't work we suspected running out of GPU memory. Since we<br>
> have a limited number of GPU nodes, we decided to try the CPU approach...<br>
><br>
> Setup 2) ParaView 4.3.1 rebuilt with OSMESA support, to run pvserver on a<br>
> larger number of cluster nodes without any GPUs. These are 16- or 24-core<br>
> machines with 128/256/384GB RAM. I tried various numbers of nodes (up to 16)<br>
> and MPI tasks per node, allowing for OSMESA threading per the docs/graphs on<br>
> the ParaView wiki page.<br>
><br>
> Watching the pvserver processes when running across 16 nodes, I wasn't seeing<br>
> more than ~2GB RAM usage per process. I ran with 8 tasks per node, so at 2GB<br>
> each this is well under the minimum of 128GB RAM per node.<br>
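><br>
> A quick illustrative tally of those numbers in Python; the 17GB raw-data figure comes from the original message further down, and the even split across ranks is an assumption.<br>
><br>
> # Hypothetical sketch; not from the original thread.<br>
> nodes, tasks_per_node = 16, 8<br>
> ranks = nodes * tasks_per_node       # 128 pvserver processes in total<br>
> print(17e9 / ranks / 1e6)            # ~133 MB of raw scalars per rank if split evenly<br>
> print(tasks_per_node * 2e9 / 1e9)    # ~16 GB observed per node, far below the 128GB minimum<br>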
><br>
>> * If you are running parallel with MPI and you have multiple GPUs per node,<br>
>> did you setup the DISPLAYs to leverage the GPUs?<br>
> As above, only 1 GPU per node, or none once we switched to the OSMESA approach<br>
> to try across more nodes.<br>
><br>
> As mentioned before, we can view a smaller version of the data without issue on<br>
> both GPU and OSMESA setups. I just opened a 4GB version (approx 25% of full<br>
> size) using the OSMESA setup on a single node (8 MPI tasks), and the<br>
> responsiveness is really great - but the 16GB file is a no-go even when scaling<br>
> up across 16 nodes. The VTI itself seems fine, as slices and the surface view<br>
> look as expected.<br>
><br>
> Thanks again for any and all suggestions!<br>
><br>
> DT<br>
><br>
>> On Wed, Sep 9, 2015 at 5:00 PM, David Trudgian <<br>
>> <a href="mailto:david.trudgian@utsouthwestern.edu">david.trudgian@utsouthwestern.edu</a><mailto:<a href="mailto:david.trudgian@utsouthwestern.edu">david.trudgian@utsouthwestern.edu</a>>> wrote:<br>
>><br>
>>> Hi,<br>
>>><br>
>>> We have been experimenting with using ParaView to display very large volumes<br>
>>> from TIFF stacks generated by whole-brain microscopy equipment. The test<br>
>>> stack has dimensions of 5,368x10,695x150. The stack is assembled in ImageJ<br>
>>> from individual TIFFs, exported as a RAW, loaded into ParaView, and saved as<br>
>>> a .vti for convenience. We can view slices fine in the standalone ParaView<br>
>>> client on a 256GB machine.<br>
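>>><br>
>>> A small illustrative estimate in Python of how those dimensions line up with the ~17GB file size mentioned further down; the 2 bytes per voxel (e.g. 16-bit grayscale TIFFs) is an assumption, not something stated here.<br>
>>><br>
>>> # Hypothetical sketch; only the dimensions and the ~17GB / 25% figures come from this thread.<br>
>>> nx, ny, nz = 5368, 10695, 150<br>
>>> voxels = nx * ny * nz            # ~8.6 billion samples<br>
>>> print(voxels * 2 / 1e9)          # ~17.2 GB at an assumed 2 bytes per voxel<br>
>>> print(voxels * 2 * 0.25 / 1e9)   # ~4.3 GB, roughly the 25% subset mentioned in this thread<br>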
>>><br>
>>> When we attempt volume rendering on this data across multiple nodes with<br>
>>> MPI, nothing appears in the client. Surface view works as expected, but on<br>
>>> switching to volume rendering the client's display shows nothing. There are<br>
>>> no messages from the client or servers - no output at all.<br>
>>><br>
>>> This is happening when running pvserver across GPU nodes with NVIDIA Tesla<br>
>>> cards, or using CPU only with OSMESA. pvserver memory usage is well below<br>
>>> what<br>
>>> we have on the nodes - no memory warnings/errors.<br>
>>><br>
>>> The data is about 17GB, 8 billion cells. If we downsize to ~4GB or ~9GB then<br>
>>> we get working volume rendering. The 17GB dataset never works regardless of<br>
>>> how we scale nodes/MPI processes, while the 4GB and 9GB versions work on 1<br>
>>> or 2 nodes.<br>
>>><br>
>>> I am confused by the lack of rendering, as we don't have memory issues or<br>
>>> any other messages at all. I am wondering if there is an inherent<br>
>>> limitation, or if I'm missing something stupid.<br>
>>><br>
>>> Thanks,<br>
>>><br>
>>> Dave Trudgian<br>
>>><br>
>>><br>
> --<br>
> David Trudgian Ph.D.<br>
> Computational Scientist, BioHPC<br>
> UT Southwestern Medical Center<br>
> Dallas, TX 75390-9039<br>
> Tel: <a href="tel:%28214%29%20648-4833" value="+12146484833">(214) 648-4833</a><tel:%28214%29%20648-4833><br>
><br>
><br>
><br>
> --<br>
> | Aashish Chaudhary<br>
> | Technical Leader<br>
> | Kitware Inc.<br>
> | <a href="http://www.kitware.com/company/team/chaudhary.html" rel="noreferrer" target="_blank">http://www.kitware.com/company/team/chaudhary.html</a><br>
><br>
> ________________________________<br>
><br>
> UT Southwestern Medical Center - The future of medicine, today.<br>
><br>
</div></div><div class="HOEnZb"><div class="h5"><br>
<br>
<br>
</div></div><br>_______________________________________________<br>
Powered by <a href="http://www.kitware.com" rel="noreferrer" target="_blank">www.kitware.com</a><br>
<br>
Visit other Kitware open-source projects at <a href="http://www.kitware.com/opensource/opensource.html" rel="noreferrer" target="_blank">http://www.kitware.com/opensource/opensource.html</a><br>
<br>
Please keep messages on-topic and check the ParaView Wiki at: <a href="http://paraview.org/Wiki/ParaView" rel="noreferrer" target="_blank">http://paraview.org/Wiki/ParaView</a><br>
<br>
Search the list archives at: <a href="http://markmail.org/search/?q=ParaView" rel="noreferrer" target="_blank">http://markmail.org/search/?q=ParaView</a><br>
<br>
Follow this link to subscribe/unsubscribe:<br>
<a href="http://public.kitware.com/mailman/listinfo/paraview" rel="noreferrer" target="_blank">http://public.kitware.com/mailman/listinfo/paraview</a><br>
<br></blockquote></div><br></div>