[Paraview] peak memory use on remote nodes
David E DeMarle
dave.demarle at kitware.com
Sat Apr 23 09:07:14 EDT 2016
Sorry I've got no insight on this.
On Apr 21, 2016 6:29 AM, "Biddiscombe, John A." <biddisco at cscs.ch> wrote:
Dave
So it turns out that aprun/alps won’t let me run a script with dstat to
monitor the memory use, and using craypat makes the execution take too long
(weeks it seems). So I tried importing part of the dstat python code into my
paraview python script.
Using this I am able to query memory use on the node and dump it out.
Question: if I start a background python thread that polls memory like
this
#####################
# start a thread that prints mem use
#####################
import threading

# 'stats' and 'dstat' come from the dstat code imported above;
# stop_thread is a module-level flag used to end the polling.
stop_thread = 0

def dump_memory():
    global stop_thread
    line = ''
    for o in stats:
        o.extract()
        line = line + ' ' + o.show()
    print line + dstat.ansi['reset']
    # schedule another call to this function in 0.5 seconds
    if stop_thread == 0:
        threading.Timer(0.5, dump_memory).start()
and set it going at the start of my paraview python pipeline: will it
actually run and collect info while the paraview script is executing the
rest of the pipeline?
Intuition and brief googling tell me that this won't work.
Can it be done?
JB
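In general a background thread of this kind does keep running while the main
script executes, because CPython releases the GIL during I/O and inside
long-running C extension calls (such as VTK/ParaView filters). As a minimal
sketch of the idea, not JB's actual dstat-based code: a daemon thread that
samples VmRSS from /proc/self/status (Linux only) on a fixed interval, with a
stop event instead of re-arming a Timer. The MemoryPoller class name and
read_vm_rss helper are illustrative inventions, not ParaView API.

```python
import threading
import time

def read_vm_rss():
    """Return the current resident set size in kB from /proc/self/status (Linux)."""
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])
    return 0

class MemoryPoller(threading.Thread):
    """Daemon thread that records a memory sample every `interval` seconds."""
    def __init__(self, interval=0.5):
        super().__init__(daemon=True)
        self.interval = interval
        self.samples = []
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            self.samples.append(read_vm_rss())
            # wait() doubles as a sleep that can be interrupted by stop()
            self._stop_event.wait(self.interval)

    def stop(self):
        self._stop_event.set()
        self.join()

poller = MemoryPoller(interval=0.1)
poller.start()
# ... the rest of the ParaView pipeline would run here ...
time.sleep(0.3)
poller.stop()
print('peak sampled RSS (kB):', max(poller.samples))
```

A plain loop with Event.wait avoids the Timer-rearming pattern in the snippet
above, which spawns a fresh thread for every sample.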
*From:* ParaView [mailto:paraview-bounces at paraview.org] *On Behalf Of
*Biddiscombe,
John A.
*Sent:* 18 April 2016 14:53
*To:* David E DeMarle
*Cc:* paraview at paraview.org
*Subject:* Re: [Paraview] peak memory use on remote nodes
Thanks Dave
A colleague here suggested I put my
aprun pvbatch <blah> into a script and add a 'dtrace' command at the start
of the script. This will then dump out a text file on each node, and after
the job completes I can grep for the info I need.
I’m going to give that a try first as it seems straightforward, but I will
revert to your suggestion if I fail and nothing else comes up.
cheers
JB
*From:* David E DeMarle [mailto:dave.demarle at kitware.com
<dave.demarle at kitware.com>]
*Sent:* 18 April 2016 14:48
*To:* Biddiscombe, John A.
*Cc:* paraview at paraview.org
*Subject:* Re: [Paraview] peak memory use on remote nodes
Perhaps extend the benchmark to glean the VmHWM entry from
/proc/<PID>/status, on Linux at least. Get the number at t1 and at t2 and
subtract to see what the maximum was in between those two times.
Seems the right place to do that is kwsys/SystemInformation.cxx, but you
could prototype in the script.
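A minimal script-level prototype of this suggestion might look like the
following. VmHWM is the kernel's "high water mark" for resident set size, so
it only ever grows; sampling it at t1 and t2 and subtracting shows how much
the peak rose in between. The vm_hwm_kb helper name and the bytearray
allocation are illustrative assumptions, not existing ParaView code.

```python
def vm_hwm_kb(pid=None):
    """Peak resident set size (VmHWM) in kB from /proc/<pid>/status (Linux)."""
    path = '/proc/%s/status' % (pid if pid is not None else 'self')
    with open(path) as f:
        for line in f:
            if line.startswith('VmHWM:'):
                return int(line.split()[1])
    raise RuntimeError('VmHWM not found in ' + path)

hwm_t1 = vm_hwm_kb()
# stand-in for the pipeline: touch ~8 MB so the high-water mark can rise
data = bytearray(8 * 1024 * 1024)
hwm_t2 = vm_hwm_kb()
print('peak RSS grew by', hwm_t2 - hwm_t1, 'kB between t1 and t2')
```

Since VmHWM is monotonic, the difference is always >= 0; to re-baseline the
peak between runs without restarting the process, Linux also accepts writing
"5" to /proc/self/clear_refs, though that is outside this sketch.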
Hope that helps and please post some code. Seems like a very useful thing
to have.
David E DeMarle
Kitware, Inc.
R&D Engineer
21 Corporate Drive
Clifton Park, NY 12065-8662
Phone: 518-881-4909
On Mon, Apr 18, 2016 at 2:42 AM, Biddiscombe, John A. <biddisco at cscs.ch>
wrote:
The python scripting module "paraview.benchmark" allows one to get the
memory use via "paraview.benchmark.get_memuse" - but I presume this reports
the memory in use by the system at the moment the function is called.
Does anyone know a way of recording the peak memory use on remote nodes
between t1 and t2 - where t1 and t2 are the start and stop of either a
function, job or even paraview python script on the remote node?
thanks
JB
--
John Biddiscombe, email: biddisco at cscs.ch
http://www.cscs.ch/
CSCS, Swiss National Supercomputing Centre | Tel: +41 (91) 610.82.07
Via Trevano 131, 6900 Lugano, Switzerland | Fax: +41 (91) 610.82.82
_______________________________________________
Powered by www.kitware.com
Visit other Kitware open-source projects at
http://www.kitware.com/opensource/opensource.html
Please keep messages on-topic and check the ParaView Wiki at:
http://paraview.org/Wiki/ParaView
Search the list archives at: http://markmail.org/search/?q=ParaView
Follow this link to subscribe/unsubscribe:
http://public.kitware.com/mailman/listinfo/paraview