Welcome to Supercomputing at Swinburne

Green II Overview

This is the main page for the latest Swinburne supercomputer which incorporates gSTAR
- the GPU Supercomputer for Theoretical Astrophysics Research.

This new facility features a compute component that is a hybrid of traditional x64 processing cores (the CPUs) and graphics processing units (GPUs). The compute nodes are combined with a petascale data store, and the entire system is networked with QDR InfiniBand.

Breakthroughs in modern scientific research are intricately linked to the availability of computational resources. This is particularly true of astrophysics, which represents a major research effort in Australia and has a growing reliance on High Performance Computing to solve some of its most complex problems. As such, there is an ongoing need to push the boundaries of computational performance, and GPUs offer an enticing opportunity in this direction. Designed to maximise graphics throughput, an individual GPU has hundreds of processing engines that can act in parallel to rapidly complete suitable tasks. To date, GPUs have been successfully applied to a range of scientific endeavours, including astrophysics examples at Swinburne such as the real-time processing of data collected at the Parkes radio telescope and N-body simulations of the rich stellar environments of star clusters.

The performance-to-power ratio of GPUs offers a new angle on green computing: six of the new gSTAR nodes provide the same theoretical compute performance as the entire "Green Machine" (the CPU-based predecessor). However, fully utilising GPUs requires programming effort. Recognising this, Swinburne has invested resources in developing research software for GPUs (including PhD projects), with a focus on the CUDA programming language; this work has led to Swinburne being recognised as a CUDA Research Centre by NVIDIA.
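To give a flavour of the CUDA programming model mentioned above, the sketch below shows a minimal kernel in which each GPU thread handles one array element, so the GPU's hundreds of processing engines work in parallel. The names and sizes are purely illustrative and are not taken from any gSTAR code:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element: y[i] = a*x[i] + y[i].
// This "one thread per element" pattern is how simple, uniform work
// is spread across the GPU's many cores at once.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard the final partial block
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy inputs to the GPU's own memory.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block; enough blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f\n", hy[0]);  // 2*1 + 2 = 4.0
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```

Compiled with `nvcc`, this runs unchanged on the Tesla C2070 and M2090 GPUs described below; real research codes differ mainly in the complexity of the kernel, not in this launch pattern.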

See here for an overview of the account application procedure.


The gSTAR Compute Nodes

  • HPC Wiki for usage information
  • Ganglia cluster monitor

The purpose of gSTAR is to provide the national astrophysical community with a GPU-based facility for performing world-class simulations and to enable rapid processing of telescope data. Funding for gSTAR is provided by an Education Investment Fund (EIF) grant obtained in co-operation with (and administered by) Astronomy Australia Limited (AAL). It is hosted at Swinburne and operated as a national facility.

The gSTAR hardware is provided by SGI.

There are 50 standard SGI C3108-TY11 nodes that each contain:

  • 2 six-core Westmere processors at 2.66 GHz
    (each processor is 64-bit Intel Xeon 5650)
  • 48 GB RAM
  • 2 NVIDIA Tesla C2070 GPUs (each with 6 GB RAM).

There are also 3 high-density GPU nodes that have the same CPU capabilities as the standard nodes but each contain 7 NVIDIA Tesla M2090 GPUs. All GPUs perform at greater than 1 Tflop/s (single precision).

Time on gSTAR is nominally split 75% for national astronomy use and 25% for Swinburne-only use, although in practice 40% of the full system (gSTAR plus swinSTAR; see below) is made available for national astronomy use. Up to half of the astronomy time is allocated through a merit-based proposal scheme judged by the Astronomy Supercomputer Time Allocation Committee (ASTAC), a committee of AAL. The remaining astronomy time is available through a general-access job queue.
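To make the allocation arithmetic concrete, the fractions above can be combined as follows. The node counts are taken from this page; treating every node as an equal unit of compute is a simplifying assumption for illustration only:

```python
# Illustrative allocation arithmetic, treating each node as one equal unit.
GSTAR_NODES = 50 + 3      # standard + high-density gSTAR nodes
SWINSTAR_NODES = 86 + 4   # standard + large-memory swinSTAR nodes
total = GSTAR_NODES + SWINSTAR_NODES

# 40% of the full system is available for national astronomy use.
astronomy_share = 0.40 * total

# Up to half of the astronomy time goes through ASTAC merit allocations;
# the remainder is served by the general-access queue.
astac_share = astronomy_share / 2
general_share = astronomy_share - astac_share

print(f"total nodes: {total}")
print(f"national astronomy share: {astronomy_share:.1f} node-equivalents")
print(f"ASTAC merit: {astac_share:.1f}, general queue: {general_share:.1f}")
```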

Accounts are open to all astronomers at publicly funded institutions in Australia and to all Swinburne staff and students. Details can be found here.

Calls for proposals to secure large allocations of the national astronomy time will be publicised on the AAL website and through the Astronomical Society of Australia.


The swinSTAR Compute Nodes

There are 86 standard swinSTAR nodes that each contain:

  • 2 eight-core Sandy Bridge processors at 2.2 GHz
    (each processor is 64-bit Intel Xeon E5-2660)
  • 64 GB RAM
  • PCI-e Gen3 motherboard.

64 of these nodes contain an NVIDIA Tesla K10 GPU.
There are also 4 large-memory nodes that each contain 512 GB RAM and 32 cores.

These nodes are available to Swinburne staff/students as well as gSTAR users. The same account works on all gSTAR and swinSTAR nodes.

The swinSTAR nodes are the long-term successor to the Green Machine supercomputer, which has operated since 2007.


Storage and Interconnect

The storage system provides 1.7 petabytes of usable disk space served by a Lustre file system.

Approximately 200 terabytes will be available to non-Swinburne gSTAR users.

The storage facility is networked to the gSTAR and swinSTAR nodes via non-blocking QDR InfiniBand (40 Gb/s).


Further Information

For more information on gSTAR or swinSTAR contact:
  A/Prof Jarrod Hurley
Centre for Astrophysics & Supercomputing
Swinburne University of Technology
PO Box 218
Hawthorn VIC 3122
Australia

Phone: +61-3 9214 5787
Fax: +61-3 9214 8797

Student Research Focus

Georgios Vernardos started his PhD at Swinburne in 2011 and is using GPUs to study cosmological gravitational microlensing.

There is strong evidence that the power sources for quasars are supermassive black holes lurking in their centres. However, these regions are far too small to be resolved by our best telescopes. Luckily, the deflection of light by the presence of mass, a phenomenon known as gravitational lensing, can act as a “gravitational telescope” which we can use to understand distant quasars. The advent of GPUs in astronomy greatly speeds up simulations in this domain of cosmological microlensing. Using the gSTAR supercomputer, I am investigating microlensed systems by generating tens of thousands of theoretical simulations. These results will be crucial for future planned all-sky surveys, which are expected to discover thousands of new microlensed quasars.

Partners/Acknowledgements

Many thanks to Swinburne ITS who maintain the facility.