[Paraview] about your MPI problem
haifangzhou
haifang_zhou at 163.com
Wed Mar 29 19:50:01 EST 2006
Christoph,
I have read your questions on the ParaView mailing list. We have compiled and installed ParaView with MPI on an 8-node cluster and met the same problems as you, but we solved them later.
#1:
==========
$ mpirun -machinefile ~/machinefile.txt -np 2 ./paraview
p1_27474: p4_error: alloc_p4_msg: Message size exceeds P4s maximum
message size: 321213977
rm_l_1_27475: (257.175131) net_send: could not write to fd=5, errno = 32
==========
The first problem is caused by the MPI message size limit. You can configure MPI to enlarge this size.
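Roughly, this is what we did. Take it only as a sketch: it assumes MPICH-1 with the ch_p4 device, and the header path and default value in the comments are from memory, so check your own source tree for where P4_MAX_MSGLEN is actually defined before changing anything.
==========
# locate the compile-time limit in the MPICH-1 source (path may differ)
cd mpich-1.2.7/mpid/ch_p4/p4/include
grep -n P4_MAX_MSGLEN p4_sock_util.h
# raise the value, for example to 512 MB:
#   #define P4_MAX_MSGLEN 536870912
# then reconfigure, rebuild and reinstall MPICH the same way you built it
# before, and rebuild ParaView against the new library
==========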
#2:
==========
$ mpirun -machinefile ~/machinefile.txt -np 2 ./paraview
Process id: 1 >> ERROR: In
/home/moder/paraview-3D/paraview-2.4.2-clone2/VTK/Rendering/vtkXOpenGLRenderWindow.cxx,
line 1319
vtkXOpenGLRenderWindow (0xb576c78): bad X server connection.
p1_22329: p4_error: interrupt SIGSEGV: 11
rm_l_1_22330: (1360.557878) net_send: could not write to fd=5, errno = 32
==========
The second error is caused by the Linux X server. You should set $DISPLAY to localhost and run the command 'xhost +' so that remote processes are allowed to open a window locally.
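For example (only a sketch; the host names and display number are placeholders for your own setup):
==========
# on the node that owns the display, relax X access control
xhost +                       # or safer: xhost +node1 +node2
# make sure the ParaView processes see a usable DISPLAY, then launch
export DISPLAY=localhost:0.0
mpirun -machinefile ~/machinefile.txt -np 2 ./paraview
==========
Note that a bare 'xhost +' disables X access control completely, so you may prefer to list only your cluster nodes.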
One more request: can you share your big test data with me? We want to do some tests on the parallel performance of ParaView, but cannot find a large enough dataset.
Best,
Nancy