[Paraview] client/server error for large superquadrics

Moreland, Kenneth kmorel at sandia.gov
Wed Jun 13 10:06:14 EDT 2007


Sorry.  I was not very clear.  As you know, you have two programs
running.  First, you have the paraview client application running on
your laptop.  Second, you have a server (pvserver) running on your
cluster which is run "under MPI" (that is, launched with mpirun and has
multiple processes).

 

What I believe is happening is that pvserver is trying to make an X
window.  This is because pvserver has the ability to remote render your
data, which requires a rendering context.  However, your cluster must be
set up in such a way that pvserver is unable to open an X window (either
there is no X server running on your server nodes or the access
permissions are set up wrong).
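A quick way to tell whether this is the problem is to try opening an X
connection from one of the server nodes yourself.  This is only a sketch;
it assumes the node runs an X server on display :0 and has the standard
xhost and xdpyinfo utilities installed.

```shell
# Run this on a server node. Assumes the X server (if any) is on :0.
export DISPLAY=:0

# Permit local (non-networked) clients, such as the mpirun-launched
# pvserver processes, to connect to this X server.
xhost +local:

# If this cannot open the display, pvserver will fail the same way
# when it tries to create its rendering context.
xdpyinfo >/dev/null && echo "X connection OK" || echo "X connection failed"
```

If the xdpyinfo check fails here, pvserver's remote rendering will fail
on that node too.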

 

The reason why you only see this on large data is that ParaView is smart
about where it renders data.  If your data is small, it will transfer it
to the client (your laptop) to render locally because it can be faster
overall.  If your data is large, ParaView will try to render it remotely
(on the cluster).  At this point pvserver will try to create X windows
and, hence, the crash.

 

As Brian mentioned in an email he sent while I was writing this, the
quickest way to get around this is to turn off the remote rendering
feature.  Go to Edit -> Settings.  Then click on the Server tab and
uncheck the box next to "Composite Threshold" to turn off remote
rendering.  This will force ParaView to always send all of the geometry
to your laptop for rendering and you should no longer see this problem.

 

However, in the long run it would be better to fix your install of
pvserver to allow remote rendering.  Moving the geometry to your laptop
can be horribly inefficient for large data.  You can do this in two
ways.  First, if you have graphics cards on your cluster, it is best to
set up the nodes of your cluster to be running an X server and to allow
any local X connections.  Second, if you do not have hardware
acceleration anyway, you can compile ParaView on your server with OSMesa
support.  To do that, compile ParaView using the Mesa 3D library
(www.mesa3d.org) as your OpenGL implementation (setting the
OPENGL_INCLUDE_DIR, OPENGL_gl_LIBRARY, and OPENGL_glu_LIBRARY CMake
variables) and turning on OSMesa support (turn on VTK_OPENGL_HAS_OSMESA
and then set the OSMESA_INCLUDE_DIR and OSMESA_LIBRARY CMake variables).
Of course, once you have done this you will need to re-check the
Composite Threshold box to turn remote rendering back on.
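As a concrete sketch of that OSMesa configuration: the CMake variable
names below are the ones mentioned above, but the Mesa install prefix
and the ParaView source path are placeholders you would substitute for
your own layout.

```shell
# Hypothetical out-of-source build. $MESA is wherever you installed
# the Mesa 3D library built with OSMesa support.
MESA=/path/to/mesa

cmake /path/to/ParaView3 \
  -DOPENGL_INCLUDE_DIR=$MESA/include \
  -DOPENGL_gl_LIBRARY=$MESA/lib/libGL.so \
  -DOPENGL_glu_LIBRARY=$MESA/lib/libGLU.so \
  -DVTK_OPENGL_HAS_OSMESA=ON \
  -DOSMESA_INCLUDE_DIR=$MESA/include \
  -DOSMESA_LIBRARY=$MESA/lib/libOSMesa.so

make
```

With a pvserver built this way, launching it with the
--use-offscreen-rendering flag should keep it from ever attempting an X
connection.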

 

-Ken

 

________________________________

From: Glanfield, Wayne [mailto:Wayne.Glanfield at uk.renaultf1.com] 
Sent: Wednesday, June 13, 2007 6:56 AM
To: Moreland, Kenneth; paraview at paraview.org
Subject: RE: [Paraview] client/server error for large superquadrics

 

Hi Kenneth,

 

Thanks for your reply.  I'm not quite sure I understand what you mean
when you say to open up the X host to MPI, as it appears to be working
for small superquadrics.  Looking back at my original email, it's
probably not clear what I was trying to explain.

 

I am running the ParaView 3.0.1 client from my laptop and doing a manual
reverse connection from the remote data server (sl-cfd-08) and remote
render server (sl-cfd-08), both of which appear to have to run on the
same machine for this to work. To test the configuration I have been
generating superquadric sources. This works for smaller resolution
values, e.g. 256x256, but fails when using larger values. The fact that
this configuration appears to work makes me think that the setup is OK.

 

The following is the output once the connections are made, and the error
when using larger resolution values:

 

sl-cfd-08:~ # clear; /usr/local/mpich-1.2.7/ch_p4/bin/mpirun -np 6
-machinefile /apps/CFD/machinefile
/apps/CFD/paraview-3.0.1/bin/pvdataserver -rc -ch=10.1.114.8

Connected to client

WaitForConnection: id :0  Port:50125

Received Hello from process 0

WaitForConnection: id :2  Port:44046

Received Hello from process 2

WaitForConnection: id :1  Port:43025

Received Hello from process 1 

 

sl-cfd-08:~ # clear; /usr/local/mpich-1.2.7/ch_p4/bin/mpirun -np 4
-machinefile /apps/CFD/machinefile
/apps/CFD/paraview-3.0.1/bin/pvrenderserver -rc -ch=10.1.114.8

WaitForConnection: id :3  Port:33852 

RenderServer: Connected to client

Connect: id :0  host: localhost  Port:50125

Connect: id :2  host: localhost  Port:44046

Connect: id :3  host: localhost  Port:33852

Connect: id :1  host: localhost  Port:43025

Process id: 1 >> ERROR: In
/apps/CFD/ParaView3/VTK/Rendering/vtkXOpenGLRenderWindow.cxx, line 1376

vtkXOpenGLRenderWindow (0x56d190): bad X server connection.
DISPLAY=p1_12661:  p4_error: interrupt SIGSEGV: 11

Process id: 2 >> ERROR: In
/apps/CFD/ParaView3/VTK/Rendering/vtkXOpenGLRenderWindow.cxx, line 1376

vtkXOpenGLRenderWindow (0x56d190): bad X server connection.
DISPLAY=p2_26245:  p4_error: interrupt SIGSEGV: 11

Process id: 3 >> ERROR: In
/apps/CFD/ParaView3/VTK/Rendering/vtkXOpenGLRenderWindow.cxx, line 1376

vtkXOpenGLRenderWindow (0x56d190): bad X server connection.
DISPLAY=p3_29085:  p4_error: interrupt SIGSEGV: 11

rm_l_2_26250: (62.367188) net_send: could not write to fd=5, errno = 32

rm_l_1_12666: (63.015625) net_send: could not write to fd=5, errno = 32

rm_l_3_29090: (61.468750) net_send: could not write to fd=5, errno = 32

sl-cfd-08:~ #

Received Hello from process 3

 

 

Wayne
________________________________

From: Moreland, Kenneth [mailto:kmorel at sandia.gov] 
Sent: 12 June 2007 14:32
To: Glanfield, Wayne; paraview at paraview.org
Subject: RE: [Paraview] client/server error for large superquadrics

 

It looks like your server is crashing because it is trying to open a
connection to the local X host and is failing.  You should be able to
correct this by either opening up the X host to the MPI job or compiling
with OSMesa (in which case the --use-offscreen-rendering flag will
prevent a connection to any X host).

 

On a larger issue, ParaView 2 was able to correctly detect this, give a
warning to the user at the client, and perform all rendering locally on
the client.  Has this functionality gone away with ParaView 3?  If so,
we need to bring it back.

 

-Ken

 

________________________________

From: paraview-bounces+kmorel=sandia.gov at paraview.org
[mailto:paraview-bounces+kmorel=sandia.gov at paraview.org] On Behalf Of
Glanfield, Wayne
Sent: Tuesday, June 05, 2007 3:12 AM
To: paraview at paraview.org
Subject: [Paraview] client/server error for large superquadrics

 

I am experiencing a problem with ParaView 3.0.1 when generating large
(e.g. 1024x1024) Superquadric sources while testing client/server mode.
Smaller values, e.g. 1024x16 or 128x128, work OK. Large values such as
1024x1024 work OK in local mode. I have installed the repository
version of PV3 and MPICH-1.2.7 and everything appeared to install
correctly. I have tried it with and without offscreen rendering. Clients
and servers have the following configuration: Sun Ultra 40, AMD
Opteron, SLED 10.1, NVIDIA Quadro FX 3450 GPU with the latest NVIDIA
drivers.

 

Does anyone have any ideas why this is happening?

 

Regards

Wayne

 

I receive the following client and server errors

 

CLIENT

 

ERROR: In /apps/CFD/ParaView3/Servers/Common/vtkServerConnection.cxx,
line 67

vtkServerConnection (0xedd500): Server Connection Closed!

 

SERVER (MPI MASTER)

 

l-cfd-08:/apps/CFD/downloads/mpi/mpich-1.2.7p1 #
/usr/local/mpich-1.2.7/ch_p4/bin/mpirun -np 8 -machinefile
/apps/CFD/machinefile /apps/CFD/paraview-3.0.1/bin/pvserver -rc
-ch=sl-cfd-07 --connect-id=11111 --use-offscreen-rendering

Connected to client

Process id: 2 >> ERROR: In
/apps/CFD/ParaView3/VTK/Rendering/vtkXOpenGLRenderWindow.cxx, line 1376

vtkXOpenGLRenderWindow (0x7fedd0): bad X server connection.
DISPLAY=p2_5680:  p4_error: interrupt SIGSEGV: 11

rm_l_2_5685: (81.078125) net_send: could not write to fd=5, errno = 32

Process id: 6 >> ERROR: In
/apps/CFD/ParaView3/VTK/Rendering/vtkXOpenGLRenderWindow.cxx, line 1376

vtkXOpenGLRenderWindow (0x7fea00): bad X server connection.
DISPLAY=p6_5698:  p4_error: interrupt SIGSEGV: 11

Process id: 1 >> ERROR: In
/apps/CFD/ParaView3/VTK/Rendering/vtkXOpenGLRenderWindow.cxx, line 1376

vtkXOpenGLRenderWindow (0x7fea00): bad X server connection.
DISPLAY=Process id: 3 >> ERROR: In
/apps/CFD/ParaView3/VTK/Rendering/vtkXOpenGLRenderWindow.cxx, line 1376

vtkXOpenGLRenderWindow (0x7fea00): bad X server connection.
DISPLAY=Process id: 5 >> ERROR: In
/apps/CFD/ParaView3/VTK/Rendering/vtkXOpenGLRenderWindow.cxx, line 1376

vtkXOpenGLRenderWindow (0x7fea00): bad X server connection.
DISPLAY=Process id: 4 >> ERROR: In
/apps/CFD/ParaView3/VTK/Rendering/vtkXOpenGLRenderWindow.cxx, line 1376

vtkXOpenGLRenderWindow (0x7fea00): bad X server connection.
DISPLAY=p4_5689:  p4_error: interrupt SIGSEGV: 11

Process id: 7 >> ERROR: In
/apps/CFD/ParaView3/VTK/Rendering/vtkXOpenGLRenderWindow.cxx, line 1376

vtkXOpenGLRenderWindow (0x7fea00): bad X server connection.
DISPLAY=p7_3036:  p4_error: interrupt SIGSEGV: 11

rm_l_7_3041: (77.406250) net_send: could not write to fd=5, errno = 32

p1_3015:  p4_error: interrupt SIGSEGV: 11

rm_l_1_3020: (82.078125) net_send: could not write to fd=5, errno = 32

p5_3029:  p4_error: interrupt SIGSEGV: 11

p3_3022:  p4_error: interrupt SIGSEGV: 11

rm_l_3_3027: (80.535156) net_send: could not write to fd=5, errno = 32

rm_l_6_5703: (78.105469) net_send: could not write to fd=5, errno = 32

rm_l_4_5694: (79.593750) net_send: could not write to fd=5, errno = 32

rm_l_5_3034: (79.039062) net_send: could not write to fd=5, errno = 32

sl-cfd-08:/apps/CFD/downloads/mpi/mpich-1.2.7p1 # p2_5680: (85.132812)
net_send: could not write to fd=5, errno = 32

p5_3029: (89.078125) net_send: could not write to fd=5, errno = 32

p7_3036: (87.464844) net_send: could not write to fd=5, errno = 32

p6_5698: (88.152344) net_send: could not write to fd=5, errno = 32

p3_3022: (90.593750) net_send: could not write to fd=5, errno = 32

p4_5689: (89.652344) net_send: could not write to fd=5, errno = 32

p1_3015: (94.136719) net_send: could not write to fd=5, errno = 32

 

 

The following information is a list of processes which are running when
I manually set up the client and servers:

 

 

CLIENT SL-CFD-07

 

root     27926  2.1  1.0 246744 85764 pts/1    SL+  09:32   0:12
/apps/CFD/paraview-3.0.1/bin/paraview --connect-id=11111

 

 

MPI MASTER SERVER 1 (SL-CFD-08)

 

sl-cfd-08:/apps/CFD/downloads/mpi/mpich-1.2.7p1 #
/usr/local/mpich-1.2.7/ch_p4/bin/mpirun -np 8 -machinefile
/apps/CFD/machinefile /apps/CFD/paraview-3.0.1/bin/pvserver -rc
-ch=sl-cfd-07 --connect-id=11111 --use-offscreen-rendering

Connected to client

 

 

root      4737  0.0  0.0   8336  1684 pts/3    S+   09:36   0:00 /bin/sh
/usr/local/mpich-1.2.7/ch_p4/bin/mpirun -np 8 -machinefile
/apps/CFD/machinefile /apps/CFD/paraview-3

root      4968  0.3  0.2 161004 36668 pts/3    S+   09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver -rc -ch=sl-cfd-07
--connect-id=11111 --use-offscreen-rendering -p4pg /r

root      4969  0.0  0.0 156904  5632 pts/3    S+   09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver -rc -ch=sl-cfd-07
--connect-id=11111 --use-offscreen-rendering -p4pg /r

root      4970  0.0  0.0   6388   676 pts/3    S+   09:36   0:00
/usr/bin/rsh sl-cfd-09 -l root -n /apps/CFD/paraview-3.0.1/bin/pvserver
sl-cfd-08 58395 \-p4amslave \-p4yourn

root      4971  0.0  0.0   6388   672 pts/3    S+   09:36   0:00
/usr/bin/rsh sl-cfd-08 -l root -n /apps/CFD/paraview-3.0.1/bin/pvserver
sl-cfd-08 58395 \-p4amslave \-p4yourn

root      4972  0.0  0.0  24840  1216 ?        Ss   09:36   0:00 in.rshd
-aL

root      4973  0.2  0.2 160308 36396 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-08 -p4rmrank 2

root      4978  0.0  0.0   6388   676 pts/3    S+   09:36   0:00
/usr/bin/rsh sl-cfd-09 -l root -n /apps/CFD/paraview-3.0.1/bin/pvserver
sl-cfd-08 58395 \-p4amslave \-p4yourn

root      4979  0.0  0.0 156344  5952 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-08 -p4rmrank 2

root      4980  0.0  0.0   6388   676 pts/3    S+   09:36   0:00
/usr/bin/rsh sl-cfd-08 -l root -n /apps/CFD/paraview-3.0.1/bin/pvserver
sl-cfd-08 58395 \-p4amslave \-p4yourn

root      4981  0.0  0.0  24836  1208 ?        Ss   09:36   0:00 in.rshd
-aL

root      4982  0.2  0.2 160312 36392 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-08 -p4rmrank 4

root      4987  0.0  0.0   6384   676 pts/3    S+   09:36   0:00
/usr/bin/rsh sl-cfd-09 -l root -n /apps/CFD/paraview-3.0.1/bin/pvserver
sl-cfd-08 58395 \-p4amslave \-p4yourn

root      4988  0.0  0.0 156348  5956 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-08 -p4rmrank 4

root      4989  0.0  0.0   6384   676 pts/3    S+   09:36   0:00
/usr/bin/rsh sl-cfd-08 -l root -n /apps/CFD/paraview-3.0.1/bin/pvserver
sl-cfd-08 58395 \-p4amslave \-p4yourn

root      4990  0.0  0.0  24844  1212 ?        Ss   09:36   0:00 in.rshd
-aL

root      4991  0.2  0.2 160308 36388 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-08 -p4rmrank 6

root      4996  0.0  0.0   6384   672 pts/3    S+   09:36   0:00
/usr/bin/rsh sl-cfd-09 -l root -n /apps/CFD/paraview-3.0.1/bin/pvserver
sl-cfd-08 58395 \-p4amslave \-p4yourn

root      4997  0.0  0.0 156344  5956 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-08 -p4rmrank 6

 

 

SERVER 2 (SL-CFD-09)

 

root      2984  6.5  0.2 160308 36396 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-09 -p4rmrank 1

root      2989  0.0  0.0 156344  5180 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-09 -p4rmrank 1

root      2990  0.0  0.0  24840  1216 ?        Ss   09:36   0:00 in.rshd
-aL

root      2991  7.2  0.2 160308 36392 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-09 -p4rmrank 3

root      2996  0.0  0.0 156344  5956 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-09 -p4rmrank 3

root      2997  0.0  0.0  24848  1216 ?        Ss   09:36   0:00 in.rshd
-aL

root      2998  8.7  0.2 160312 36396 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-09 -p4rmrank 5

root      3003  0.0  0.0 156348  5964 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-09 -p4rmrank 5

root      3004  0.0  0.0  24848  1216 ?        Ss   09:36   0:00 in.rshd
-aL

root      3005 10.0  0.2 160312 36392 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-09 -p4rmrank 7

root      3010  0.0  0.0 156348  5960 ?        S    09:36   0:00
/apps/CFD/paraview-3.0.1/bin/pvserver sl-cfd-08 58395   4amslave
-p4yourname sl-cfd-09 -p4rmrank 7

 

 

 

---------------------------------------------------------------------
 
For further information on the Renault F1 Team visit our web site at
www.renaultf1.com. 
Renault F1 Team Limited
Registered in England no. 1806337
Registered Office: 16 Old Bailey London EC4M 7EG
 
 
WARNING: please ensure that you have adequate virus protection in place
before you open or detach any documents attached to this email.
 
This e-mail may constitute privileged information. If you are not the
intended recipient, you have received this confidential email and any
attachments transmitted with it in error and you must not disclose copy,
circulate or in any other way use or rely on this information.
 
E-mails to and from the Renault F1 Team are monitored for operational
reasons and in accordance with lawful business practices.
 
The contents of this email are those of the individual and do not
necessarily represent the views of the company.
 
Please note that this e-mail has been created in the knowledge that
Internet e-mail is not a 100% secure communications medium. We advise
that you understand and observe this lack of security when e-mailing us.
 
If you have received this email in error please forward to:
is.helpdesk at uk.renaultf1.com quoting the sender, then delete the message
and any attached documents
---------------------------------------------------------------------