
HPC Development Milestones at NUS Computer Centre

2012
  • Launch of the HPC Cloud Service (Pay-Per-Use service), which allows researchers to acquire dedicated HPC resources with quick turnaround time and flexibility.
  • Launch of the HPC managed service (Condominium service) to free researchers from HPC system operation and maintenance chores.
  • Introduction of two new HPC clusters with a total of 2240 CPU cores, expanding the HPC cluster capacity by more than 70%. The new clusters came with 8 fat nodes, each with 40 cores and 256GB of memory.
  • Introduction of a new GPU system with more than 16,000 GPU cores.

2011
  • Completion of the HPC data centre development, which enabled the expansion of various HPC resources and services to meet demand.
  • Introduction of a new InfiniBand network (bandwidth capacity of 40Gbps) to integrate all HPC clusters and the parallel file system with a high-speed interconnect.

2010
  • Introduction of a new HPC cluster with hexa-core CPUs, which added a total of 1152 cores and expanded the HPC cluster pool capacity by more than 50%.
  • The central HPC facility delivered more than 10,000 research simulations a month.

2009
  • Implementation of the GPFS-based parallel file system with a capacity of 120TB as a high-performance workspace for data-intensive applications.
  • Launch of the HPCBio Portal, providing convenient web-based access to more than 20 biomedical applications.
  • Conclusion of the HPC Challenge, with some winning projects achieving speedups of up to 80 times for their simulations.

2008
  • User authentication and access control was integrated with the central Active Directory to enable single account and password access to both HPC and non-HPC resources and services.
  • User home directory was expanded and integrated with the central storage and file system to enable seamless access of files and data across laptop, desktop and HPC systems.
  • HPC Portal was upgraded to enable online account registration, cutting the account application time from days to around one hour.
  • The second multi-core HPC cluster was introduced with a total of 768 cores, expanding the overall cluster computing pool to more than 1200 CPU cores.

2007
  • The University introduced the first multi-core server cluster with a total of 336 processor cores, doubling its HPC capability to an aggregated computing power of 1.99 Teraflops for researchers.
  • The first Windows-based HPC cluster was introduced to provide staff with parallel computing resources accessible right from their desktop PCs.

2006
  • TCG@NUS (Tera-scale Campus Grid at NUS) won the CIO Award 2006, beating more than 100 nominations from the public and private sectors in the region. The award recognised the cross-faculty effort in harnessing idle computing cycles from existing desktop PCs on campus.

2005
  • The PC Grid expansion was planned to cover up to 1,000 PCs, with a Data Grid to support bioinformatics applications.

2004
  • The Grid Innovation Zone was established with IBM and Intel to promote Grid computing technology.
  • As part of the NUS Campus Grid project, a Grid Portal and the first Access Grid node on campus were developed.
  • The adoption of IA64 technology and further expansion of the cluster system raised the capacity to 844.84 Gflops.

2003
  • The first Grid computing system (a PC Grid with 120 PCs) was developed. The combined Grid and cluster implementation boosted the computing capacity a further threefold (593.80 Gflops).

2002
  • Adoption of open-source cluster technology boosted the HPC capacity more than threefold (193.16 Gflops).
  • Implementation of the high-performance remote visualisation system.

2001
  • The HPC Portal was launched to provide anytime, anywhere web-based access to HPC resources.

2000
  • The installation of the Compaq Alpha HPC systems boosted the HPC capacity more than fourfold (52.36 Gflops).
  • SAN storage was introduced to enhance the storage capacity for HPC.

1998
  • The installation of the SGI Origin2000 HPC system and the adoption of the cc-NUMA architecture boosted the HPC capacity about fourfold (9.52 Gflops).

1996
  • The number of research projects supported exceeded 100 for the first time.

1995
  • The Supercomputing & Visualisation Unit was set up at Computer Centre to support and promote High Performance Computing on campus.
  • NUS installed the region's first Cray vector supercomputer (Cray J90) on campus (2.4 Gflops; Gflops: billions of floating-point operations per second).
  • NUS set up the Visualisation Laboratory at the Computer Centre to provide high-end scientific visualisation resources to support research activities on campus. The Laboratory was equipped with the state-of-the-art SGI Onyx visualisation system. An MOU was signed by NUS and SGI to promote high-end visualisation technology on campus.
