KwGrid:Partners/Argonne National Lab

==Argonne National Lab, Mathematics and Computer Science Division==


* Contact(s): Michael E. Papka


The [http://www.mcs.anl.gov MCS division] at [http://www.anl.gov Argonne] operates a significant computing environment in support of a wide range of research and computational science. User communities include local researchers, Argonne scientists, and the national scientific community. Argonne facilities include three major parallel computing clusters, visualization systems, advanced display environments, collaborative environments, high-capacity network links and a diverse set of testbeds.

Revision as of 16:54, 4 February 2005


As one of the five participants in the NSF's Distributed Terascale Facility, MCS, in conjunction with the University of Chicago Computation Institute, operates the TeraGrid's visualization facility. The entire TeraGrid is a 13.6 TF grid of distributed clusters built on Intel Itanium 2 (McKinley) processors, with over 6 TB of memory and more than 600 TB of disk space. The full machine is distributed among NCSA, SDSC, Caltech, the Pittsburgh Supercomputing Center, and the Computation Institute at Argonne. The individual clusters are connected by a dedicated 40 Gb/s link that serves as the machine's backbone. Argonne's component of the TeraGrid consists of 63 dual IA-64 nodes for computation, 96 dual Pentium 4 nodes with Quadro4 900 XGL graphics accelerators for visualization, and 20 TB of storage.

Argonne operates a second supercomputer that is available to Argonne researchers and collaborators for production computing. This terascale Linux cluster has 350 compute nodes, each with a 2.4 GHz Xeon processor and 1.5 GB of RAM. The cluster uses Myrinet 2000 and Ethernet as interconnects and has 20 TB of online storage in PVFS and GFS file systems.

In addition, Argonne has a cluster dedicated to computer science and open-source development called "Chiba City". Chiba City has 512 Pentium III 550 MHz CPUs for computation, 32 Pentium III 550 MHz CPUs for visualization, and 8 TB of disk. Chiba City is a unique testbed that is principally used for system software development and testing.

Argonne has substantial visualization devices as well, each of which can be driven by the TeraGrid visualization cluster, by Chiba City, or by a number of smaller dedicated clusters. These devices include a 4-wall CAVE, the ActiveMural (an ~15 million pixel large-format tiled display), and several smaller tiled displays such as the portable MicroMural2, which has ~6 million pixels.
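To put the tiled-display pixel counts above in context, a display's total resolution is simply the per-tile resolution multiplied by the number of tiles. A minimal sketch of that arithmetic (the tile grid and per-tile resolution below are illustrative assumptions, not the actual ActiveMural or MicroMural2 configurations):

```python
def tiled_display_pixels(cols: int, rows: int, tile_w: int, tile_h: int) -> int:
    """Total pixel count of a tiled display: tiles times per-tile resolution."""
    return cols * rows * tile_w * tile_h

# Hypothetical example: a 5x4 grid of XGA (1024x768) tiles
# gives roughly the ~15 million pixels cited for a large-format wall.
total = tiled_display_pixels(5, 4, 1024, 768)
print(total)  # 15728640, i.e. about 15.7 million pixels
```

The same function applied to a smaller grid (e.g. 2x4 XGA tiles, about 6.3 million pixels) is in the range quoted for the portable MicroMural2.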

Finally, Argonne currently supports numerous Access Grid nodes, ranging from AG nodes in continual daily use to AG2 development nodes.

{{KwGrid:Footer}}