[Paraview] Potential Memory Error in CAVE mode

Faiz Abidi fabidi89 at vt.edu
Wed Mar 15 20:26:37 EDT 2017


Thanks Utkarsh and Aashish for your replies. CC'ing some more people since
this may be of interest to them too.

Replying point by point for clarity.

1. Our pipeline involves TurboVNC, VirtualGL, and ParaView 5.2.0. We have
a four-walled CAVE-like system where each wall is driven by two
projectors. We are running mono, not stereo. The ParaView client connects
to remote HPC servers that have 512 GB of memory and GPUs. Our data is in
PVTU format.
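
For reference, the server side is launched roughly like this (the path is
a placeholder; in reality this all sits behind TurboVNC/VirtualGL):

    # Eight pvserver ranks, one per projector, reading the CAVE
    # geometry from the PVX file; the client then connects as usual.
    mpirun -np 8 pvserver /path/to/cave.pvx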

2. I have mostly ruled out TurboVNC and VirtualGL as the cause, since the
same pipeline works with both of them in place as long as I don't use the
CAVE mode. But I can't be 100% certain about that either.

3. This setup works for small data sets (I have tested a few million
points), but I don't know the threshold above which this error occurs. I
can do some more testing to find out, though.

4. Question: if this is a memory issue, is it a per-rank (per-process)
memory issue? System-wide I have 512 GB, and at no point while the data is
loading do I see more than 200 GB consumed. There could be a memory spike
that crashes ParaView so quickly that I can't see it in /proc/meminfo, but
I am not sure.
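
In case it helps, this is the kind of sampling loop I plan to use to catch
a short-lived spike (assuming the server processes are named pvserver;
plain pgrep/awk, nothing exotic):

    # Log the resident memory (VmRSS) of every pvserver rank every
    # 0.2 s, so that a brief allocation spike still shows up.
    while sleep 0.2; do
        ts=$(date +%T)
        for pid in $(pgrep -x pvserver); do
            echo "$ts pid $pid $(awk '/^VmRSS/ {print $2, $3}' /proc/$pid/status)"
        done
    done | tee pvserver-rss.log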

5. Question: If it is indeed a per-rank memory issue, do you think using
more processes could help divide the workload? And on that note, is it
even possible to use more processes than there are displays defined in the
PVX file? I have read about tiled displays and the -tdx and -tdy flags,
but I assume those won't work in CAVE mode? Currently I use only 8
processes, each driving one projector. If there is a way to bump up the
process count while defining only 8 displays in the PVX file, please let
me know.
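
To make the question concrete, here is my understanding of the two launch
styles (the rank counts are only examples):

    # Tiled display: a 4x2 wall of tiles; ranks beyond the 8 tiles are
    # allowed and simply take part in parallel rendering/compositing.
    mpirun -np 16 pvserver -tdx=4 -tdy=2

    # CAVE mode: the displays come from the PVX file, and the rank
    # count seems tied to the number of <Machine> entries in it.
    mpirun -np 8 pvserver cave.pvx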

6. Question: With regard to #5, I actually tried to do tiled rendering in
the sense that for each display in my PVX file (attached), I defined two
geometries (each covering half the wall). I did load small data
successfully this way and was able to use 16 processes, but ParaView still
crashed with memory errors on the big data.
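
For concreteness, the attached PVX follows roughly this pattern
(coordinates simplified to a unit cube, and only the front wall shown):

    <?xml version="1.0" ?>
    <pvx>
      <Process Type="client" />
      <Process Type="server">
        <!-- left half of the front wall, driven by one rank -->
        <Machine Name="front-left" Environment="DISPLAY=:0.0"
                 LowerLeft="-1 -1 -1" LowerRight="0 -1 -1" UpperLeft="-1 1 -1" />
        <!-- right half of the front wall, driven by the next rank -->
        <Machine Name="front-right" Environment="DISPLAY=:0.1"
                 LowerLeft="0 -1 -1" LowerRight="1 -1 -1" UpperLeft="0 1 -1" />
        <!-- ...and similarly for the remaining walls... -->
      </Process>
    </pvx>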

7. Question: On a related note, when I check the Memory Inspector while
ParaView is connected in CAVE mode, I see the attached picture
(pvserver.png). This seems off: although I request 16 processes in this
case, every process ID appears twice in the Memory Inspector, and
apparently only the first 8 processes do any work while the remaining 8
sit idle?

8. I just tested my updated PVX file in which I tried tiling the walls,
and loading a 100-million-point PVTU file produced different warnings and
errors, as attached (errors.png). I don't get any errors with a
10-million-point file and everything else the same. I think these errors
may be due to how MPI implements and handles memory? Not sure again.

Sorry if this is too much information for one email; I am just trying to
figure out whether this is indeed a limitation of the current
implementation of ParaView in CAVE mode. If so, I would have to cut my
data size down significantly to get any results.

Appreciate all the help!

On Wed, Mar 15, 2017 at 11:06 AM, Aashish Chaudhary <
aashish.chaudhary at kitware.com> wrote:

> Faiz,
>
> Can you remind us about your setup again so that we can see if something
> can be done to reduce per rank memory requirements? Also, I am assuming the
> current setup works if the data is small?
>
>
>
> On Wed, Mar 15, 2017 at 10:47 AM Utkarsh Ayachit <
> utkarsh.ayachit at kitware.com> wrote:
>
>> You may indeed be running into a limitation of current implementation.
>> To support fast rendering requirements for CAVE, ParaView duplicates
>> geometry between all rendering ranks. Seems like you're running out of
>> memory during that process.
>>
>> On Mon, Mar 13, 2017 at 3:10 PM, Faiz Abidi <fabidi89 at vt.edu> wrote:
>> > Hi again community!
>> >
>> > I have been testing Paraview 5.2.0 in a CAVE mode with pretty big data
>> (1+
>> > billion points) while remotely connected to some HPC servers.
>> >
>> > PROBLEM: In CAVE mode, I get the below error when I try to change mode
>> from
>> > "Outline" to "Points". Same issue while doing other things as well like
>> > applying a Glyph (Sphere), etc.
>> >
>> > ERROR: terminate called after throwing an instance of 'std::bad_alloc'
>> > what():  std::bad_alloc
>> >
>> > The same issue doesn't occur if I don't use the CAVE mode, in that I
>> > simply connect a ParaView client to the HPC servers and don't pass any
>> > pvx file.
>> >
>> > I read some similar online discussions pointing towards memory issues,
>> > but it's hard for me to believe that, given a) I have hundreds of gigs
>> > of memory and most of it remains empty even with my big data loaded,
>> > and b) the issue doesn't occur when not in a CAVE mode.
>> >
>> > Anyone experienced any such similar issues?
>> >
>> > Thanks for all the help!
>> > --
>> > Faiz Abidi | Master's Student at Virginia Tech | www.faizabidi.com |
>> > +1-540-998-6636
>> >
>


-- 
Faiz Abidi | Master's Student at Virginia Tech | www.faizabidi.com |
+1-540-998-6636
Attachments:
- pvserver.png (image/png): <http://public.kitware.com/pipermail/paraview/attachments/20170315/970c6a8f/attachment-0002.png>
- cave-mono-2tiles.pvx: <http://public.kitware.com/pipermail/paraview/attachments/20170315/970c6a8f/attachment-0001.obj>
- errors.png (image/png): <http://public.kitware.com/pipermail/paraview/attachments/20170315/970c6a8f/attachment-0003.png>

