[Paraview] [Non-DoD Source] Building on Cray systems
Tim Gallagher
tim.gallagher at gatech.edu
Fri Feb 5 05:25:13 EST 2016
Good morning!
Thanks Andy and Richard for the advice.
After banging my head against my keyboard for most of yesterday, I couldn't get either approach to work on Copper.
For Andy's suggestion of building PV 5.0 with the Cray cross-compiling script, I got a bunch of errors towards the end. It looks like many things did build, but then the errors about an unknown system kicked in and it all seemed to fall apart. I attached the output log as cray_build_pv5.0.txt.
Richard, I tried your setup as well. I initially tried to avoid the step of building CMake, but eventually something in the ParaView Superbuild failed because it needed CMake 2.8.11 and Copper only has 2.8.10. So I took a step back and tried to build CMake itself. When I just do what is in your script -- configure and then make -- the build fails quickly because it says it cannot statically link shared libraries. So I tried building CMake with the Catamount.cmake toolchain that we use for our application code (the configure step I used is sketched below the error output). The initial configuration took over an hour; it sits at 95% for a long time while it checks for various headers/features. It eventually finishes, but the build then fails with:
tgallagh@copper01:~/cmake/build> make
[ 3%] Built target cmsys
[ 4%] Built target cmsys_c
[ 6%] Built target cmzlib
[ 6%] Building C object Utilities/cmcurl/lib/CMakeFiles/cmcurl.dir/strerror.c.o
/u/tgallagh/cmake/cmake/Utilities/cmcurl/lib/strerror.c:32:6: error: #error "strerror_r MUST be either POSIX, glibc or vxworks-style"
# error "strerror_r MUST be either POSIX, glibc or vxworks-style"
^
make[2]: *** [Utilities/cmcurl/lib/CMakeFiles/cmcurl.dir/strerror.c.o] Error 1
make[1]: *** [Utilities/cmcurl/lib/CMakeFiles/cmcurl.dir/all] Error 2
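For reference, the configure step I'm running with the toolchain is roughly the following (the toolchain path is a placeholder here; Catamount.cmake is our in-house static-linking toolchain file):

cd ~/cmake/build
# configure the CMake checkout in ~/cmake/cmake against our static Catamount toolchain
cmake ../cmake -DCMAKE_TOOLCHAIN_FILE=/path/to/Catamount.cmake
make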
So, taking a big step back to look at the big picture -- am I (and our lab) just using Cray systems in a fundamentally incorrect way? We've always struggled to get things to build on Cray, whereas we have never had issues on SGI/Intel, BlueGene, or IBM machines (even when they were PowerPC). We used to hit all of these same issues with our CFD code on Cray as well, and the only solution we found was to build all of our code's libraries statically and drop support for shared libraries.
I appreciate the help with the scripts. If there's any follow-up advice on what I'm doing wrong in either approach, that would be great.
Thanks,
Tim
----- Original Message -----
From: "Andy Bauer" <andy.bauer at kitware.com>
To: "tim gallagher" <tim.gallagher at gatech.edu>
Cc: "paraview" <paraview at paraview.org>, "Richard C Angelini (Rick) CIV USARMY RDECOM ARL (US)" <richard.c.angelini.civ at mail.mil>
Sent: Thursday, February 4, 2016 9:37:11 AM
Subject: Re: [Paraview] [Non-DoD Source] Building on Cray systems
Hi Tim,
I would recommend the ParaView superbuild script at Scripts/Sites/Cray-PrgEnv-cross-compile.sh for building PV 5.0.
The options you want to give it are the following:
Cray-PrgEnv-cross-compile.sh <comp> </path/to/cmake> </temp/download/directory> </install/directory>
Here, <comp> is the string used to load the programming environment on those machines, e.g. "PrgEnv-gnu" (the exact spelling -- gnu, gcc, or Gnu -- varies by machine).
I used this to build a PV 5.0 pre-release version on Cori at NERSC and it worked just fine. It doesn't have an option to freeze Python, but after running the script you can go into the newly created cross/paraview/src/paraview-build subdirectory and enable that there.
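For example, an invocation looks roughly like this (the paths below are placeholders, not the ones I actually used on Cori):

# arguments: programming-environment string, path to cmake, download directory, install directory
./Scripts/Sites/Cray-PrgEnv-cross-compile.sh PrgEnv-gnu \
    /path/to/cmake /path/to/downloads /path/to/install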
Another thing: if you're using the Intel compilers, you may need to do a "module load gcc" when building and running your code. Intel's C++11 support relies on the GCC headers and libraries, and without gcc loaded you will get errors like "missing GLIBC".
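In other words, something along these lines before configuring and building (the module names are approximate and machine-dependent):

# load the Intel programming environment plus the GCC headers/libraries its C++11 support needs
module load PrgEnv-intel
module load gcc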
Good luck and let us know how it goes!
Best,
Andy
On Thu, Feb 4, 2016 at 9:24 AM, Tim Gallagher < tim.gallagher at gatech.edu > wrote:
Andy,
We don't really care about the compiler or MPI used for paraview. Our code only supports Intel and GNU, but for simplicity I usually build paraview with GNU so everybody can use it. We usually use the default MPI for a system also, which on copper is cray-mpich/7.1.0 currently.
When we build our code, we have to specify the Catamount toolchain so everything is statically linked, because we haven't really figured out how to get everything working with shared libraries on the compute nodes. When we first set up our build environment, shared libraries weren't an option. If we go that route, will I need the FREEZE_PYTHON option, since shared linking won't be available?
I suppose the proper answer is that we should update our build environment for shared linking rather than static. It's been on my to-do list for a while now, but I haven't been able to write the proper toolchain file for it.
It appears that on Copper at least (I haven't checked the others), the system install has the libvtkPVPythonCatalyst* libraries (I misspoke in my previous email), but it does not have the development files produced by the PARAVIEW_INSTALL_DEVELOPMENT_FILES option. That and PARAVIEW_ENABLE_COPROCESSING are the only options we need beyond the standard set of build options, roughly as sketched below.
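Roughly (the source path here is a placeholder):

# extra flags on top of our standard ParaView configuration
cmake /path/to/paraview-source \
  -DPARAVIEW_INSTALL_DEVELOPMENT_FILES=ON \
  -DPARAVIEW_ENABLE_COPROCESSING=ON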
Tim
From: "Andy Bauer" < andy.bauer at kitware.com >
To: "Richard C Angelini (Rick) CIV USARMY RDECOM ARL (US)" < richard.c.angelini.civ at mail.mil >
Cc: "tim gallagher" < tim.gallagher at gatech.edu >, "paraview" < paraview at paraview.org >
Sent: Thursday, February 4, 2016 9:15:03 AM
Subject: Re: [Paraview] [Non-DoD Source] Building on Cray systems
Hi Rick,
Did you build ParaView with PARAVIEW_INSTALL_DEVELOPMENT_FILES enabled? Tim will need that for using Catalyst if he's going to be using your builds but not if he's going to do his own.
Tim, some questions on what you need:
* Do you have a specific compiler and version you want/need to use? Same thing for MPI implementation.
* Do you have a specific version of ParaView that you want to use?
I would recommend using the superbuild tools to build statically with Python and Mesa. The other libraries can also be built with the superbuild for convenience (definitely use the system MPI, though), even though for Catalyst you probably won't need many of them. The FREEZE_PYTHON option is for statically linking the other Python modules into the executable. This is definitely useful when running with a high number of MPI ranks, since loading a module (e.g. paraview.simple) in parallel can really kill the file system if thousands of processes are simultaneously trying to load a bunch of Python modules. Note, though, that this isn't needed for the Catalyst Python script itself, since that is handled specially: process 0 reads the file and broadcasts it to all of the other processes.
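As a very rough sketch of the configure step (the option spellings below are from memory and differ between superbuild versions, so treat them as assumptions and check the CMake cache for the exact names):

# static superbuild with Python and OSMesa, using the system MPI, with Python frozen in
cmake /path/to/ParaViewSuperbuild \
  -DBUILD_SHARED_LIBS=OFF \
  -DENABLE_python=ON \
  -DENABLE_osmesa=ON \
  -DUSE_SYSTEM_mpi=ON \
  -DPARAVIEW_FREEZE_PYTHON=ON
make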
Cheers,
Andy
On Thu, Feb 4, 2016 at 8:54 AM, Angelini, Richard C (Rick) CIV USARMY RDECOM ARL (US) < richard.c.angelini.civ at mail.mil > wrote:
Tim - I've already built ParaView on all of these systems - there are modules available to load various versions of ParaView. If you need to do your own builds to support specific functionality, I can provide you with the build scripts we use on those systems.
-----Original Message-----
From: ParaView [mailto: paraview-bounces at paraview.org ] On Behalf Of Tim
Gallagher
Sent: Thursday, February 04, 2016 8:25 AM
To: paraview < paraview at paraview.org >
Subject: [Non-DoD Source] [Paraview] Building on Cray systems
Hi everybody,
I'm about to embark on the always-fun process of building ParaView on Cray systems, specifically Copper (ERDC), Garnet (ERDC) and Excalibur (ARL). Little is ever easy on these systems, and I've never succeeded at building ParaView on them in the past. However, we want to run with co-processing on the compute nodes, so it's time to try again.
I saw there are some build scripts in the ParaviewSuperbuild for Cray systems. Does anybody know of any documentation or examples on how to use them? What dependencies do I need to build using the superbuild, and what can I use that is already on the system? For example, python, HDF5, zlib, etc. are all available, but do I need to build my own versions?
Is it possible to build just ParaView (not using the superbuild) against the system-installed modules? Does the FREEZE_PYTHON option work, or help eliminate the issues with running on the compute nodes?
If anybody has any advice on the best way to go, I would greatly appreciate
it. We need to have python, co-processing, and off-screen rendering enabled;
otherwise, it's just the standard build options.
Thanks!
Tim
Attachment: cray_build_pv5.0.txt
URL: <http://public.kitware.com/pipermail/paraview/attachments/20160205/8fb8918f/attachment-0001.txt>