ParaView/Jaguarpf
How to install and run ParaView on Jaguarpf for Adios Plugin
Jaguarpf background
The XT5 partition contains 18,688 compute nodes in addition to dedicated login/service nodes. Each compute node contains dual hex-core AMD Opteron 2435 (Istanbul) processors running at 2.6GHz, 16GB of DDR2-800 memory, and a SeaStar 2+ router. The resulting partition contains 224,256 processing cores, 300TB of memory, and a peak performance of 2.3 petaflop/s (2.3 quadrillion floating point operations per second).
Introduction
A system like Jaguar has both front (login) nodes and compute nodes. These nodes usually do not run the same OS, which is why cross compilers are typically used on the front nodes. A cross compiler compiles code on a computer A to produce a binary for a computer B whose architecture or other characteristics would otherwise make binaries incompatible between the two machines. In the Cray XT5 case, both kinds of node are Linux-based, so serial code can be compiled with the cross compiler and still run on a front node. But this was not the case some time ago, and even now it is not always true. So, as long as the MPI library is not involved, we might use the cross compiler for front-node code such as CMake, git, and so on.
Compilation and Module loading
Depending on where you want to execute your application (front or compute node), you will not set up the same compiler. Therefore, you may need to load, unload, or even swap some modules to use the proper one. By default, a cross compiler is set up, so you will need to unload it if you want to compile an application on the front node for the front node.
For example, if you want to check which compiler is currently set, you can type
> which cc
/opt/cray/xt-asyncpe/3.7/bin/cc
To see which modules are currently loaded, you can type
> module list
To see which modules are available, you can type
> module avail
In order to load or unload a given module, you can do
> module list
Currently Loaded Modulefiles:
  1) modules/3.1.6
  2) DefApps
  3) torque/2.4.1b1-snap.200905191614
  4) moab/5.3.6
  5) /opt/cray/xt-asyncpe/default/modulefiles/xtpe-istanbul
  6) cray/MySQL/5.0.64-1.0000.2342.16.1
  7) xtpe-target-cnl
  8) xt-service/2.2.41A
  9) xt-os/2.2.41A
 10) xt-boot/2.2.41A
 11) xt-lustre-ss/2.2.41_1.6.5
 12) cray/job/1.5.5-0.1_2.0202.18632.46.1
 13) cray/csa/3.0.0-1_2.0202.18623.63.1
 14) cray/account/1.0.0-2.0202.18612.42.3
 15) cray/projdb/1.0.0-1.0202.18638.45.1
 16) Base-opts/2.2.41A
 17) pgi/10.3.0
 18) xt-libsci/10.4.4
 19) pmi/1.0-1.0000.7628.10.2.ss
 20) xt-mpt/4.0.0
 21) xt-pe/2.2.41A
 22) xt-asyncpe/3.7
 23) PrgEnv-pgi/2.2.41A
> which cc
/opt/cray/xt-asyncpe/3.7/bin/cc
> module unload PrgEnv-pgi Base-opts
> module list
Currently Loaded Modulefiles:
  1) modules/3.1.6
  2) DefApps
  3) torque/2.4.1b1-snap.200905191614
  4) moab/5.3.6
  5) /opt/cray/xt-asyncpe/default/modulefiles/xtpe-istanbul
  6) cray/MySQL/5.0.64-1.0000.2342.16.1
> which cc
/usr/bin/cc
> module load PrgEnv-pgi Base-opts
> which cc
/opt/cray/xt-asyncpe/3.7/bin/cc
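Since mixing up the native and cross toolchains is the most common pitfall, the "which cc" check above can be scripted. A minimal sketch, assuming the Cray wrappers live under /opt/cray and the native compiler under /usr/bin:

```shell
# Classify which cc is currently active, so a build is not started
# with the wrong compiler by mistake.
classify_cc() {
    case "$1" in
        /opt/cray/*) echo "cross (compute-node target)" ;;
        /usr/bin/*)  echo "native (front-node target)" ;;
        *)           echo "unknown: $1" ;;
    esac
}

# Report on the compiler found in the current environment (if any).
classify_cc "$(command -v cc || echo none)"
```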
Next step
The question is: where should I put the code that I have to compile?
Systems like this have networked storage with different characteristics: a high-performance file system based on Lustre, but with time limits on usage (files are purged), and other areas that are backed up but slow because they rely on NFS.
A good place to put a common tool that your project will use is the project's base directory. If you don't have access to one, you can simply use your home directory, but be aware that disk access there will be painful due to the network.
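Picking a base directory with a fallback to the home directory can be scripted. A minimal sketch; /ccs/proj/myproject is a hypothetical project path, substitute your own allocation's directory:

```shell
# Prefer the project area; fall back to $HOME when it is missing or
# not writable. /ccs/proj/myproject is a placeholder, not a real path.
PROJ_DIR="/ccs/proj/myproject"
if [ -d "$PROJ_DIR" ] && [ -w "$PROJ_DIR" ]; then
    BASE="$PROJ_DIR/paraview"
else
    BASE="$HOME/paraview"   # fallback: slower networked home directory
fi
mkdir -p "$BASE"
echo "Using base directory: $BASE"
```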
How to Install and Compile ParaView
Prerequisites
Applications such as VTK and ParaView rely on CMake as their configuration system, so you will need to install it in order to configure ParaView before building it. Moreover, you may also want git, to pull your work from a given repository. Usually git can be loaded from a module, but if you need a newer version you will have no choice but to build it yourself. To deal with these required pieces of software, Pat Marion developed a build system that simplifies the installation/compilation procedure for the whole stack.
Please retrieve the ParaViewAutoBuild project in the location of your choice.
Configuring ParaViewAutoBuild
Edit the file ParaViewAutoBuild/build_options.sh. For the configuration that you are going to use, update "base=..." to point to a directory that will contain the source, build, and install trees of each application that you download and build.
The current configuration is set on the top of the file with the field
platform=jaguarpfgcc
In our case, we are going to edit the following set:
set_jaguarpfgcc_options()
{
    base=/.../opt/
    toolchain_file=cray-cnl-gnu-toolchain.cmake
    make_command="make -j2"
    use_wget=0
    broken_git_install=0
    c_cross_compiler=cc
    cxx_cross_compiler=CC
    paraview_cross_cxx_flags="-O2"
}
Running ParaViewAutoBuild
Then, based on a set of available targets, you will be able to install components such as git, CMake, and so on. To do so, you rely on the ./auto_build.sh script. This script takes one argument: the component that you want to install. For example, to download, compile, and install git, just type
> ./auto_build.sh do_git
Installing tools needed on the front nodes
Remember, you are going to build tools that will be executed on the front nodes. Therefore, you need to unload any cross compiler and just use the local one.
In our case, just type
> module unload PrgEnv-pgi Base-opts
> which cc
/usr/bin/cc
Then, for ParaView, you can either call
> ./auto_build.sh do_paraview_native_prereqs
or, if you want to do it step by step to be sure that each step has succeeded, run the individual targets. Since the target is composed as follows
do_paraview_native_prereqs()
{
    do_git
    do_cmake
    do_python_download
    do_python_build_native
    do_osmesa_download
    do_osmesa_build_native
    do_paraview_download
}
you can type
> ./auto_build.sh do_git
> ./auto_build.sh do_cmake
> ./auto_build.sh do_python_download
> ./auto_build.sh do_python_build_native
> ./auto_build.sh do_osmesa_download
> ./auto_build.sh do_osmesa_build_native
> ./auto_build.sh do_paraview_download
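These step-by-step invocations can also be wrapped in a small driver that stops at the first failure, so a broken step is not silently skipped. A sketch, assuming auto_build.sh sits in the current directory:

```shell
# Location of the build script; override BUILD to point elsewhere.
BUILD=${BUILD:-./auto_build.sh}

# Run the front-node prerequisite targets in order; abort on the
# first one that fails.
run_prereqs() {
    for target in do_git do_cmake do_python_download do_python_build_native \
                  do_osmesa_download do_osmesa_build_native do_paraview_download
    do
        echo "==> $target"
        "$BUILD" "$target" || { echo "FAILED: $target" >&2; return 1; }
    done
}

# Only run when the build script is actually present.
if [ -x "$BUILD" ]; then
    run_prereqs
fi
```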
Installing ParaView for the compute nodes
Remember, you are now going to build tools that will be executed on the compute nodes. Therefore, you need to use the cross compiler, in particular if you are in the same terminal where you launched the previous auto_build.sh runs for the front nodes.
In our case, just type
> module load PrgEnv-gnu Base-opts
or, if another cross compiler was set,
> module swap PrgEnv-pgi PrgEnv-gnu
But make sure that, in the end, you have something like this:
> which cc
/opt/cray/xt-asyncpe/3.7/bin/cc
- CAUTION: We use the gcc compiler and not the PGI one, because invalid optimizations in some VTK filters produce incorrect results.
Then execute the cross-compilation part:
> ./auto_build.sh do_toolchains
> ./auto_build.sh do_python_build_cross
> ./auto_build.sh do_osmesa_build_cross
> ./auto_build.sh do_paraview_configure_hosttools
> ./auto_build.sh do_paraview_build_hosttools
> ./auto_build.sh do_paraview_configure_cross
> ./auto_build.sh do_paraview_build_cross
For further compilation and CMake configuration, you can go to the build directory at BASE_DIR/source/paraview/build-cross.
To configure your project
> make edit_cache
To compile
> make -j2
The default values to set for ADIOS are:
ADIOS_INCLUDE_PATH
  /ccs/proj/e2e/demo/adios_xt5.gnu/include;/ccs/proj/csc025/marionp/gccbuild/install/adios/include
ADIOS_LIBRARY
  /ccs/proj/e2e/demo/adios_xt5.gnu/lib/libadios.a
ADIOS_READ_LIBRARY
  /ccs/proj/e2e/demo/adios_xt5.gnu/lib/libadiosread.a;/sw/xt5/adios/1.2.1/cnl2.2_gnu4.4.4/spaces/lib/libdart2.a;/sw/xt5/adios/1.2.1/cnl2.2_gnu4.4.4/spaces/lib/libspaces.a
ADIOS_READ_NO_MPI_LIBRARY
  /ccs/proj/e2e/demo/adios_xt5.gnu/lib/libadiosread_nompi.a;/sw/xt5/adios/1.2.1/cnl2.2_gnu4.4.4/spaces/lib/libdart2.a;/sw/xt5/adios/1.2.1/cnl2.2_gnu4.4.4/spaces/lib/libspaces.a
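As an alternative to the interactive "make edit_cache", these cache entries can be set non-interactively on the CMake command line, run from the build-cross directory. A sketch using the default paths listed above (verify they still exist on your system; note that the semicolon-separated lists must be quoted so the shell does not split them):

```shell
# Re-run the ParaView configure step with the ADIOS cache entries set
# explicitly. Paths are the defaults from the table above.
cmake \
  -DADIOS_INCLUDE_PATH="/ccs/proj/e2e/demo/adios_xt5.gnu/include;/ccs/proj/csc025/marionp/gccbuild/install/adios/include" \
  -DADIOS_LIBRARY="/ccs/proj/e2e/demo/adios_xt5.gnu/lib/libadios.a" \
  -DADIOS_READ_LIBRARY="/ccs/proj/e2e/demo/adios_xt5.gnu/lib/libadiosread.a;/sw/xt5/adios/1.2.1/cnl2.2_gnu4.4.4/spaces/lib/libdart2.a;/sw/xt5/adios/1.2.1/cnl2.2_gnu4.4.4/spaces/lib/libspaces.a" \
  -DADIOS_READ_NO_MPI_LIBRARY="/ccs/proj/e2e/demo/adios_xt5.gnu/lib/libadiosread_nompi.a;/sw/xt5/adios/1.2.1/cnl2.2_gnu4.4.4/spaces/lib/libdart2.a;/sw/xt5/adios/1.2.1/cnl2.2_gnu4.4.4/spaces/lib/libspaces.a" \
  .
```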