[Paraview] fundamental doubts and problems with ParaView3

Raashid Baig raashid.b at rediffmail.com
Tue Jun 26 03:31:19 EDT 2007


Some more fundamental doubts and problems with ParaView3, compiled
from source with LAM/MPI support.

When I start pvserver on one machine (hostname=ged, ip=10.101.11.71)
and run a script through pvpython on another machine (hostname=bean,
ip=10.101.11.70) that connects to the server, there is no problem and
the results are rendered perfectly. The script I use is given below.

import paraview

# Connect to the pvserver running on ged
paraview.ActiveConnection = paraview.Connect("10.101.11.71", 11111)

# Read the unstructured-grid dataset
reader = paraview.CreateProxy("sources", "XMLUnstructuredGridReader", "sources")
reader.SetFileName("fire_ug.vtu")
reader.UpdateVTKObjects()

# Extract isosurfaces at five contour values
isosurface_filter = paraview.CreateProxy("filters", "Contour", "filters")
isosurface_filter.SetInput(reader)
isosurface_filter.SetContourValues(298.38, 396.84, 495.31, 593.775, 692)
isosurface_filter.UpdateVTKObjects()

# Add an outline of the dataset, this time via the pyProxy wrapper
outline_filter = paraview.CreateProxy("filters", "OutlineFilter", "filters")
pyproxy5 = paraview.pyProxy(outline_filter)
pyproxy5.SetInput(reader)

# Create the render window; a threshold of 0 forces remote rendering
renWin = paraview.CreateRenderWindow()
renWin.SetRemoteRenderThreshold(0)
renWin.UpdateVTKObjects()

# Show both filters in the render window
display1 = paraview.CreateDisplay(isosurface_filter, renWin)
display2 = paraview.CreateDisplay(outline_filter, renWin)

renWin.ResetCamera()
renWin.StillRender()
raw_input("Enter")      # keep the window up until Enter is pressed
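
For completeness, this is roughly how I launch the two sides in the
single-server case (the script file name isosurface.py is just an
example):

On ged (the server):

raashid@ged:~/ParaView3-unix/bin$ ./pvserver

On bean (the client):

raashid@bean:~/ParaView3-unix/bin$ ./pvpython isosurface.py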

The problem arises when I start pvserver in parallel on two machines
(hostname=ged, ip=10.101.11.71 and hostname=ender, ip=10.101.11.72),
with ged being the master node. When I try to run the same script
through pvpython on a different computer (hostname=bean,
ip=10.101.11.70), the script crashes. The output on the server end is:

raashid@ged:~/ParaView3-unix/bin$ lamnodes
n0      ged.local:1:origin,this_node
n1      ender.local:1:
raashid@ged:~/ParaView3-unix/bin$ mpirun -np 2 ./pvserver
Listen on port: 11111
Waiting for client...
Client connected.
MPI_Send: process in local group is dead (rank 0, comm 2)
Rank (0, MPI_COMM_WORLD): Call stack within LAM:
Rank (0, MPI_COMM_WORLD):  - MPI_Send()
Rank (0, MPI_COMM_WORLD):  - main()
-----------------------------------------------------------------------------
One of the processes started by mpirun has exited with a nonzero exit
code.  This typically indicates that the process finished in error.
If your process did not finish in error, be sure to include a "return
0" or "exit(0)" in your C code before exiting the application.

PID 27473 failed on node n1 (10.101.11.72) due to signal 6.
-----------------------------------------------------------------------------
Process id: 1 >> ERROR: In /home/raashid/ParaView3/Servers/Common/vtkProcessModule.cxx, line 971
vtkProcessModule (0x80f1d68): Invalid arguments to vtkClientServerStream::Invoke.  There must be at least two arguments.  The first must be an object and the second a string.
while processing
Message 0 = Invoke
  Argument 0 = id_value {34}
  Argument 1 = string_value {AddRenderer}
  Argument 2 = int32_value {0}
  Argument 3 = id_value {32}


Process id: 1 >> ERROR: In /home/raashid/ParaView3/Servers/Common/vtkProcessModule.cxx, line 973
vtkProcessModule (0x80f1d68): Aborting execution for debugging purposes.


The script on the client end is:

raashid@bean:~/ParaView3-unix/bin$ ./pvpython
Python 2.5.1 (r251:54863, May  2 2007, 16:53:27) 
[GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import paraview
>>> paraview.ActiveConnection = paraview.Connect("10.101.11.71", 11111)
>>> reader = paraview.CreateProxy("sources", "XMLUnstructuredGridReader", "sources")
>>> reader.SetFileName("fire_ug.vtu")
>>> reader.UpdateVTKObjects()
>>> isosurface_filter = paraview.CreateProxy("filters", "Contour", "filters")
>>> isosurface_filter.SetInput(reader)
>>> isosurface_filter.SetContourValues(298.38, 396.84, 495.31, 593.775, 692)
>>> isosurface_filter.UpdateVTKObjects()
>>> renWin = paraview.CreateRenderWindow()


The pvserver crashed as soon as I tried to create the render window.

***************************************************************

The other doubt I have: in the second case described above (pvserver
running on two different computers, 10.101.11.71 and 10.101.11.72, and
pvpython running on another, 10.101.11.70), if I run the same script,
how should I make the ActiveConnection? On the client side the script
should connect to the master node of the pvserver ("10.101.11.71" in
this case), but the scripts on the server side, I suppose, should just
connect to "localhost". Am I correct, or am I making some mistake?
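
To make the question concrete, here is a sketch of the two connection
calls I have in mind; the addresses are the ones from the setup above,
and whether "localhost" is right for the server side is exactly what I
am unsure about:

import paraview

# On the client (bean): connect to the master node of the parallel pvserver
paraview.ActiveConnection = paraview.Connect("10.101.11.71", 11111)

# On the server nodes (ged/ender), if a script is run there at all,
# I suppose it should connect to the local process instead:
# paraview.ActiveConnection = paraview.Connect("localhost", 11111)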

********************************************************************

I finished compiling ParaView with MPI support, using the standard
compilers (gcc) and LAM/MPI. The problem is that I still get floating
point exceptions when trying to start the stand-alone version of
paraview that was compiled with MPI support.

I checked out the sources from CVS and built ParaView3 two days ago.
There was absolutely no error during the build, but when I try to
launch the stand-alone version of paraview that was compiled with MPI
support, it raises an error:

raashid@bean:~/ParaView3-unix/bin$ ./paraview 
Floating point exception (core dumped)

The hardware configuration of the computer is an Intel Core 2 Duo
2.4 GHz, 2 GB DDR2 RAM, and an NVIDIA 7600 graphics card. The system
runs Ubuntu 7.04.

I built ParaView3 with debug symbols on, and I compiled the sources to
support parallel processing through LAM. The version of LAM installed
is 7.1.2-1.
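
If it is relevant, the LAM installation itself can be checked with
laminfo, which (in a standard LAM/MPI install) prints the version and
build configuration:

raashid@bean:~$ laminfo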

The gdb output is as follows:

raashid@bean:~/ParaView3-unix/bin$ gdb ./paraview 
GNU gdb 6.6-debian
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i486-linux-gnu"...
Using host libthread_db library "/lib/tls/i686/cmov/libthread_db.so.1".
(gdb) run
Starting program: /home/raashid/ParaView3-unix/bin/paraview 
[Thread debugging using libthread_db enabled]
[New Thread -1319647504 (LWP 27488)]

Program received signal SIGFPE, Arithmetic exception.
[Switching to Thread -1319647504 (LWP 27488)]
0xb20461e6 in ?? () from /lib/tls/i686/cmov/libc.so.6

The backtrace output is as follows:

(gdb) bt
#0  0xb20461e6 in ?? () from /lib/tls/i686/cmov/libc.so.6
#1  0xb211f240 in ?? () from /lib/tls/i686/cmov/libc.so.6
#2  0xb211f120 in ?? () from /lib/tls/i686/cmov/libc.so.6
#3  0xb211f558 in ?? () from /lib/tls/i686/cmov/libc.so.6
#4  0xb211dff4 in ?? () from /lib/tls/i686/cmov/libc.so.6
#5  0xb136a2f8 in ?? ()
#6  0xb136a410 in ?? ()
#7  0xbfcaf86c in ?? ()
#8  0xb2048575 in ?? () from /lib/tls/i686/cmov/libc.so.6
#9  0x00000218 in ?? ()
#10 0xb57d8000 in ?? ()
#11 0xb211f150 in ?? () from /lib/tls/i686/cmov/libc.so.6
#12 0x000000f8 in ?? ()
#13 0x00000004 in ?? ()
#14 0x00000001 in ?? ()
#15 0x00000040 in ?? ()
#16 0xbfcaf850 in ?? ()
#17 0x000ffd08 in ?? ()
#18 0x000ffbf0 in ?? ()
#19 0xb2107880 in ?? () from /lib/tls/i686/cmov/libc.so.6
#20 0xb211f150 in ?? () from /lib/tls/i686/cmov/libc.so.6
#21 0x00000001 in ?? ()
#22 0x00100000 in ?? ()
---Type <return> to continue, or q <return> to quit---
#23 0x00000000 in ?? ()
  