<div dir="ltr">Can you share your Python script? Another thought is that your Python script was added to each process instead of the subset of processes that are supposed to do the calculation on it. For example, the Python script that is supposed to generate the image should only be added through a vtkCPPythonScriptPipeline on those 8 processes.<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 4, 2016 at 2:48 AM, Ufuk Utku Turuncoglu (BE) <span dir="ltr"><<a href="mailto:u.utku.turuncoglu@be.itu.edu.tr" target="_blank">u.utku.turuncoglu@be.itu.edu.tr</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
On Thu, Aug 4, 2016 at 2:48 AM, Ufuk Utku Turuncoglu (BE) <u.utku.turuncoglu@be.itu.edu.tr> wrote:

Hi,
After getting help from the list, I finished the initial implementation of the code, but now I am seeing some strange behavior from Catalyst. The prototype code works with the allinputsgridwriter.py script and can write a multi-piece dataset in VTK format without any problem. The code also handles four different input ports to receive data on different grid structures and dimensions (2d/3d); a sketch of that kind of multi-port setup follows below.
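For reference, such a multi-port adaptor registers one named input per grid, roughly as follows (the port names here are placeholders, not the real ones):

#include <vtkCPDataDescription.h>
#include <vtkCPInputDataDescription.h>

// Sketch: one named input port per grid/dimension combination.
void RegisterInputs(vtkCPDataDescription* description)
{
  description->AddInput("ocean2d");
  description->AddInput("ocean3d");
  description->AddInput("atm2d");
  description->AddInput("atm3d");
}

// Every time step, each port is then handed its own grid, e.g.:
//   description->GetInputDescriptionByName("ocean3d")->SetGrid(oceanGrid3d);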
The main problem is that if I try to use the same code to output a png file after creating an iso-surface from a single 3d field (141x115x14 = 227K), it hangs. If I check the utilization of the processors (on Linux, CentOS 7.1), I see:

  PID USER     PR NI    VIRT    RES    SHR S  %CPU %MEM    TIME+ COMMAND
12064 turuncu  20  0 1232644 216400  77388 R 100.0  0.7 10:44.17 main.x
12068 turuncu  20  0 1672156 483712  70420 R 100.0  1.5 10:44.17 main.x
12069 turuncu  20  0 1660620 266716  70500 R 100.0  0.8 10:44.26 main.x
12070 turuncu  20  0 1660412 267204  71204 R 100.0  0.8 10:44.22 main.x
12071 turuncu  20  0 1659988 266644  71360 R 100.0  0.8 10:44.18 main.x
12065 turuncu  20  0 1220328 202224  77620 R  99.7  0.6 10:44.08 main.x
12066 turuncu  20  0 1220236 204696  77444 R  99.7  0.6 10:44.16 main.x
12067 turuncu  20  0 1219644 199116  77152 R  99.7  0.6 10:44.18 main.x
12078 turuncu  20  0 1704272 286924 102940 S  10.6  0.9  1:12.91 main.x
12074 turuncu  20  0 1704488 287668 103456 S  10.0  0.9  1:08.50 main.x
12072 turuncu  20  0 1704444 287488 103316 S   9.6  0.9  1:09.09 main.x
12076 turuncu  20  0 1704648 287268 102848 S   9.6  0.9  1:10.16 main.x
12073 turuncu  20  0 1704132 284128 103384 S   9.3  0.9  1:05.27 main.x
12077 turuncu  20  0 1706236 286228 103380 S   9.3  0.9  1:05.49 main.x
12079 turuncu  20  0 1699944 278800 102864 S   9.3  0.9  1:05.87 main.x
12075 turuncu  20  0 1704356 284408 103436 S   8.6  0.9  1:07.03 main.x

This seems normal, because the co-processing component runs on only a subset of the resources (the 8 processes with utilization around 99 percent). The GPU utilization (from the nvidia-smi command) is:

+------------------------------------------------------+
| NVIDIA-SMI 352.79     Driver Version: 352.79         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro K5200        Off  | 0000:42:00.0      On |                  Off |
| 26%   42C    P8    14W / 150W |    227MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1937    G   /usr/bin/Xorg                                   81MiB |
|    0      3817    G   /usr/bin/gnome-shell                           110MiB |
|    0      9551    G   paraview                                        16MiB |
+-----------------------------------------------------------------------------+

So, the GPU is not overloaded in this case. I tested the code with two different versions of ParaView (5.0.0 and 5.1.0). The results are the same in both cases, even when I create the co-processing Python scripts with the same version of ParaView that was used to compile the code. I also tried a 2d field (141x115), but the result is the same and the code still hangs. A different machine (macOS + ParaView 5.0.0) works without problems. There might be an issue with Linux or with the installation, but I am not sure, and it was working before. Is there any flag or tool that would let me analyze ParaView more deeply to find the source of the problem?

Regards,

--ufuk