<div dir="ltr"><div><div><div><div>Thanks for testing and reporting back! <br><br></div>I'll see if I can motivate someone else to look at the other memory leaks since I'm not as familiar with that code.<br><br></div>FYI: the vtkPKdTree changes are now in VTK master and will likely make it into PV 5.5.<br><br></div>Best,<br></div>Andy<br><div><div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Feb 26, 2018 at 1:47 PM, <span dir="ltr"><<a href="mailto:yvan.fournier@free.fr" target="_blank">yvan.fournier@free.fr</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello Andy,<br>

Here is a log (edited to remove unrelated OpenMPI Valgrind warnings) using the patch from your merge request.

It seems the first 2 warnings (related to the kd-tree) have disappeared. The rest is unchanged.

I also just ran a test regarding transparency issues a colleague reported. I did not reproduce the issues when running on 224 MPI ranks with a relatively simple test case (a tube bundle with only 4 tubes), which is the same test case I have been using for some time. So there seem to be issues with more complex geometries, but no apparent regression relative to PV 5.0.1. I'll need to find a more complex yet non-confidential test case for that issue, so I'll keep you updated when I can...

The same user ran for 3 days over the weekend (several hundred time steps) on that complex mesh, using point sprites, with no crash, so the memory leaks don't seem as bad as they used to be (the biggest leak was on my side, where the mesh was missing smart pointers...). I haven't seen his images yet (reportedly mostly working well, but with some transparency parallel compositing issues, whether using a transparent boundary or point sprites).

Best regards,

Yvan

----- Original Message -----
From: "Andy Bauer" <andy.bauer@kitware.com>
To: "Yvan Fournier" <yvan.fournier@free.fr>
Cc: "Paraview (paraview@paraview.org)" <paraview@paraview.org>
Sent: Saturday, February 24, 2018 18:07:17
Subject: Re: [Paraview] Memory leaks in Catalyst ?
<span class=""><br>
<br>
<br>
<br>
<br>
Hi Yvan,<br>
<br>
I have a merge request into VTK at https://gitlab.kitware.com/vtk/vtk/merge_requests/3971 that hopefully improves the memory use. I still have 2 tests that are failing, and I also want to run the ParaView tests, so there's a bit more work to do. For the most part it seems OK though, so if you want to take those changes and test them on your end, I wouldn't mind some extra testing, especially since we're getting so close to the ParaView 5.5 release.

Best,
Andy


On Thu, Feb 22, 2018 at 8:26 PM, Yvan Fournier <yvan.fournier@free.fr> wrote:

Hi Andy,

Thanks for checking. Fixing my own bug (by adding vtkSmartPointer where needed in my adaptor) fixed what seemed to be the largest issue on a small test case. A colleague is testing this on a larger case (for a real application) and should provide me some feedback on that longer-running case.
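
For reference, the fix is roughly of this kind (a minimal sketch, not the actual adaptor code; the grid type, function name and "input" channel name here are just placeholders):

  #include <vtkCPDataDescription.h>
  #include <vtkCPInputDataDescription.h>
  #include <vtkSmartPointer.h>
  #include <vtkUnstructuredGrid.h>

  void export_mesh(vtkCPDataDescription *dataDescription)
  {
    // A raw vtkUnstructuredGrid::New() that is never Delete()'d leaks;
    // a vtkSmartPointer releases its reference when it goes out of scope.
    vtkSmartPointer<vtkUnstructuredGrid> mesh =
      vtkSmartPointer<vtkUnstructuredGrid>::New();

    // ... fill points, cells and fields here ...

    // SetGrid() takes its own reference, so nothing is leaked when
    // the smart pointer is destroyed at the end of this function.
    dataDescription->GetInputDescriptionByName("input")->SetGrid(mesh);
  }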

He also observed some artifacts when using transparency on a boundary/surface mesh with llvmpipe (not fixed by adding -DDEFAULT_SOFTWARE_DEPTH_BITS=31 to Mesa's CFLAGS and CPPFLAGS, but reminiscent of issues I had observed with ParaView 5.0 which were fixed in 5.0.1). OpenSWR seemed to lead to crashes. I'll start by testing this on one of my simpler (non-confidential) benchmark cases.

So I'll probably be running a series of additional tests (to update a series from 2 years ago) and keep you informed if I encounter any issues (and possibly send a few non-confidential screenshots if everything is working well).

Cheers,

Yvan

On Thu, 2018-02-22 at 17:33 -0500, Andy Bauer wrote:

Hi Yvan,

After looking at the code, the vtkPKdTree ones look like they could be real leaks, especially in vtkPKdTree::InitializeRegionAssignmentLists(). It seems like a good idea to replace the int **ProcessAssignmentMap with something like a std::vector, and probably the other raw-pointer member variables there as well. I'll spend some time refactoring vtkPKdTree to make sure that the memory management is leak free.
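
Roughly, the kind of change I have in mind looks like this (a simplified sketch, not the actual vtkPKdTree code; class and method names are illustrative):

  #include <vector>

  // Simplified stand-in for the raw-pointer members in vtkPKdTree:
  // with new[]/delete[], every early return or re-initialization is a
  // chance to leak, while std::vector frees its storage automatically.
  class RegionAssignments
  {
  public:
    void Initialize(int numProcesses)
    {
      // one list of region ids per process, no manual delete[] needed
      this->ProcessAssignmentMap.assign(numProcesses, std::vector<int>());
    }

    void AssignRegionToProcess(int regionId, int processId)
    {
      this->ProcessAssignmentMap[processId].push_back(regionId);
    }

  private:
    std::vector<std::vector<int>> ProcessAssignmentMap;
  };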

I don't see anything that suspicious with respect to ParaView in the other leak reports, though that doesn't necessarily mean that they aren't leaks.

Cheers,
Andy

On Thu, Feb 22, 2018 at 4:53 PM, Yvan Fournier <yvan.fournier@free.fr> wrote:

Hello,

Running under Valgrind (memcheck, with --leak-check=full), I have some warnings about ParaView/Catalyst possibly leaking memory.

Catalyst is called from Code_Saturne, whose adapter code (using ParaView Python adapters from C++) is here: https://www.code-saturne.org/viewvc/saturne/trunk/src/fvm/fvm_to_catalyst.cxx?revision=11048&view=markup , using the attached results.py script.

I fixed a leak in my own code following the Valgrind warnings, but some remaining warnings seem related to calls I have no direct control over, so I attach a log (from one MPI rank) of Valgrind warnings (edited to remove OpenMPI initialization-related warnings). The first part contains the memcheck warnings, and the part after "HEAP SUMMARY" contains the memory leak info.

I'm not sure if the leaks are "one time only" (not too much of an issue), or can occur at every output timestep (30 in this example, for a small case with about 8000 mesh elements per MPI rank), so any opinion / checking on that would be welcome.

Best regards,

Yvan Fournier