[Paraview] mpi + pvtk = tcl/tk errors ?

Berk Geveci berk.geveci@kitware.com
31 Jan 2003 11:13:57 -0500


This is a common problem. Your slave nodes are not getting the DISPLAY
variable. Depending on your configuration and your mpi launcher, you
need to find the right way of letting the slaves know where to display.
I usually do this by setting the DISPLAY variable in a login file, i.e.
my .bashrc has the following line:

export DISPLAY=:0
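
A quick way to check that the slaves actually pick this up in a
non-interactive remote shell (which is roughly the kind of shell the
rsh/ssh startup of mpirun gives your slave processes) is something
along these lines; this is just a sketch and assumes password-less ssh
to the nodes, bash as the login shell and one hostname per line in
~/machines:

# print the DISPLAY each node's remote shell ends up with
for node in $(cat ~/machines); do
    ssh "$node" 'echo "$(hostname): DISPLAY=$DISPLAY"'
done

If DISPLAY comes back empty here, chances are it is missing for pvtk
as well.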

Of course, setting DISPLAY unconditionally like this is not an ideal
solution since it disables X11 forwarding through ssh, but since I
don't use the cluster for anything other than running ParaView, I
don't care. If you want the display to go somewhere else, replace :0
with whatever is appropriate. You also have to grant access to the
display properly, using xhost or xauth or something (a rough xhost
example is below, after the mpirun line).

If the mpich you are using is configured to use mpd, the following
might work:

mpirun -np 4 -machinefile ~/machines pvtk [script] -MPDENV- DISPLAY=:0
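
As for the xhost part: if, for example, you point the slaves at the
head node's X server with something like DISPLAY=headnode:0 (headnode
and node1 through node3 are just placeholder hostnames here), you
would open access on the head node along these lines:

# allow the slave nodes to connect to this machine's X server
xhost +node1 +node2 +node3

xauth (cookie based access) is the tidier way to do this, but on a
private cluster network the xhost one-liner is usually enough.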

-Berk

On Fri, 2003-01-31 at 10:56, Ian Watkins wrote:
> Hello,
> 
> I'm trying to run paraview/pvtk on a 4-node cluster (Red Hat 7.2),
> using mpich 1.2.4. I'm not the sysadmin, so ParaView/pvtk is not
> installed on each node, but in my home area (which is cross-mounted
> on all nodes).
> 
> If I run the following script through vtk on the head node, there are no
> errors.
> 
> # ParaView Version 0.6
> 
> package require vtkio
> package require vtkrendering
> package require vtkparallel
> puts "Success"
> exit
> 
> As you can see, the script isn't very complicated.
> 
> On any other node, running the script produces:
> 
> no display name and no $DISPLAY environment variable
> ::vtk::load_component: tk could not be found.
> can't find package Tk
> Tk was not found: the VTK rendering package can not be used... Please
> check that your Tcl/Tk installation is correct. Windows users should
> also check that the program used to open/execute Tcl files is the Tk
> shell (wish), not the Tcl shell (tclsh).
> can't find package vtkrendering
>     while executing
> "package require vtkrendering"
>     (file "check.tcl" line 4)
> 
> I've attempted to solve the problem by moving all executables and
> libraries to all nodes, but the same error occurs (all nodes are
> identical). I've also added "wm withdraw ." as the first executable
> line, but then vtk fails with:
> 
> Application initialization failed: no display name and no $DISPLAY
> environment variable
> Error in startup script: invalid command name "wm"
>     while executing
> "wm withdraw ."
>     (file "check.tcl" line 2)
> 
> The child nodes are on a private network, and not world accessible.
> (The head node has 2 nics, one for world, one for the private network.)
> 
> I've verified that wish and tclsh exist on all machines.  (In fact, both
> wish and tclsh run the above script fine on the head node.)
> 
> I've attempted to do X11 forwarding from the child nodes.  I've tried
> setting the DISPLAY environment var, but nothing seems to work.
> 
> I imagine at least one person is asking where the mpi problem is.  When
> I began trying to run pvtk, I was using:
> mpirun -np 4 -machinefile ~/machines pvtk [script]
> The system would puke, giving an error similar to the above, three
> times. That is when I began attempting to run a smaller script on
> each node.
> 
> Any help would be greatly appreciated.
> Ian Watkins
> 