Implementation, performance, and science results from a 30.7 TFLOPS IBM BladeCenter cluster

Publication: Contribution to journal › Research article › Contributed › Peer-reviewed

Contributors

  • Craig A. Stewart, Indiana University Bloomington (Author)
  • Matthew R. Link, Indiana University Bloomington (Author)
  • D. Scott McCaulay, Indiana University Bloomington (Author)
  • Greg Rodgers, IBM (Author)
  • George W. Turner, Indiana University Bloomington (Author)
  • David Y. Hancock (Author)
  • Peng Wang (Author)
  • Faisal Saied (Author)
  • Marlon Pierce (Author)
  • Ross Aiken (Author)
  • Matthias S. Müller, Center for Information Services and High Performance Computing (ZIH) (Author)
  • Matthias Jurenz, Center for Information Services and High Performance Computing (ZIH) (Author)
  • Matthias Lieber, Center for Information Services and High Performance Computing (ZIH) (Author)
  • Jenett Tillotson (Author)
  • Beth A. Plale (Author)

Abstract

This paper describes Indiana University's implementation, performance testing, and use of a large high performance computing system. IU's Big Red, a 20.48 TFLOPS IBM e1350 BladeCenter cluster, appeared in the 27th Top500 list as the 23rd-fastest supercomputer in the world in June 2006. In spring 2007, this computer was upgraded to 30.72 TFLOPS. The e1350 BladeCenter architecture, with two internal networks accessible to users and user applications and two networks used exclusively for system management, has enabled the system to scale well on many important applications while remaining straightforward to manage. Implementing a system based on the JS21 blade and PowerPC 970MP processor within the US TeraGrid presented certain challenges, given that Intel-compatible processors dominate the TeraGrid. However, the particular characteristics of the PowerPC have made it highly popular among certain application communities, particularly users of molecular dynamics and weather forecasting codes. A critical aspect of Big Red's implementation has been a focus on Science Gateways, which provide graphical interfaces to systems supporting end-to-end scientific workflows. Several Science Gateways have been implemented that access Big Red as a computational resource, some via the TeraGrid and some not affiliated with the TeraGrid. In summary, Big Red has been successfully integrated with the TeraGrid and is used by many researchers locally at IU via grids and Science Gateways. It has been a success in enabling scientific discoveries at IU and, via the TeraGrid, across the US.

Details

Original language: English
Pages (from - to): 157-174
Number of pages: 18
Journal: Concurrency and Computation: Practice and Experience
Volume: 22
Issue number: 2
Publication status: Published - 2009
Peer-review status: Yes

External IDs

Scopus 73949106522
ORCID /0000-0003-3137-0648/work/142238855

Keywords

  • blade cluster, HPC, TeraGrid