
Welcome to Supercomputing at Swinburne

Supercomputing Overview

Since its inception in 1998, the Centre for Astrophysics and Supercomputing has run a supercomputing facility on behalf of Swinburne University of Technology. Originally a Linux Beowulf cluster, the supercomputer evolved in 2007 into a fully integrated rack-mounted system called Green, with a theoretical peak speed in excess of 10 Teraflop/s (10 trillion floating-point operations per second). Further evolution throughout 2011/12 saw the installation of a new supercomputer incorporating graphics processing unit (GPU) hardware, which pushed performance well beyond 100 Teraflop/s. This became known as Green II (or g2 for short) and included the gSTAR national facility (see below). The current machine, OzSTAR, is the next generation, with Petaflop/s-scale performance.

The Swinburne supercomputers have proven to be excellent research tools in areas of astronomy ranging from simulations of structure formation in the Universe to the processing of data collected from radio telescopes. They are also used by CAS staff to render content for 3-D animations and movies. More generally, the supercomputers are available to Swinburne University researchers and their collaborators, and also serve as a national facility for astronomy.

For detailed information on the current supercomputing environment, including how to gain access and user guides, see the OzSTAR user documentation.

Supercomputing at Swinburne is made possible by the dedicated support of the Swinburne ITS team.


A key role of our latest supercomputer is to underpin the computational efforts of the Swinburne-hosted Centre of Excellence in Gravitational Wave Discovery (OzGrav). It also continues our tradition of operating as a national facility for the astronomy community through the GPU Supercomputer for Theoretical Astrophysics Research (gSTAR) program, which receives funding from Astronomy Australia Limited for hardware and software support. OzSTAR is thus a natural marriage of these two nationally facing streams. Primarily, however, OzSTAR is available to drive research forward across all academic disciplines at Swinburne.

OzSTAR comprises:

  • 115 Dell EMC PowerEdge R740 compute nodes;
  • 4,140 x86 Intel Skylake Xeon Gold 6140 CPU cores at 2.3 GHz;
  • 230 NVIDIA Tesla P100 12 GB GPUs (one per CPU socket);
  • 272 Intel Xeon Phi cores at 1.6 GHz across 4 C6320p KNL nodes;
  • a high-speed, low-latency network fabric able to move data across each building block at over 100 Gb/s;
  • 5 Petabytes of usable storage on a highly available Lustre-ZFS filesystem with 30 GB/s throughput.
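Quoted peak speeds like those above follow from a simple product of core count, clock rate, and floating-point operations per cycle. As a rough sketch, the CPU partition's theoretical peak can be estimated as below; note that the 32 FLOPs/cycle figure is an assumption about the Skylake AVX-512 pipeline, not a number taken from this page:

```python
# Estimate the theoretical peak of the CPU partition.
# Assumption: each Skylake core retires 32 double-precision FLOPs per cycle
# (2 AVX-512 FMA units x 8 doubles x 2 ops for fused multiply-add).
cores = 4140          # Intel Skylake Xeon Gold 6140 cores (from the spec list)
clock_ghz = 2.3       # base clock in GHz (from the spec list)
flops_per_cycle = 32  # assumed microarchitectural figure

# GHz x FLOPs/cycle gives GFlop/s per core; divide by 1000 for TFlop/s.
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
print(f"CPU peak: {peak_tflops:.0f} TFlop/s")  # prints "CPU peak: 305 TFlop/s"
```

The same arithmetic, with the GPUs' own peak rates added in, is how headline "Petaflop/s-scale" figures for a whole machine are usually derived.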

More information on this project can be found in the OzSTAR user documentation.
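Machines of this class are typically accessed through a batch scheduler rather than by logging into compute nodes directly. The sketch below is a hypothetical Slurm-style job script; the scheduler, option values, and executable name are illustrative assumptions, not taken from the OzSTAR documentation, which should be consulted for the actual submission procedure.

```shell
#!/bin/bash
# Hypothetical batch script (Slurm-style syntax assumed).
# All resource values and the executable name are placeholders.
#SBATCH --job-name=demo-run
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4     # a few CPU cores
#SBATCH --gres=gpu:1          # one GPU, e.g. a Tesla P100
#SBATCH --mem=16G
#SBATCH --time=01:00:00

srun ./my_simulation          # placeholder executable
```

Such a script would be submitted with `sbatch script.sh` and held in the queue until the requested resources become free.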

Green II - gSTAR and swinSTAR

Installed at Swinburne in 2011 and 2012 under the banner of Green II (or g2), this supercomputer incorporates two compute facilities. The first is the GPU Supercomputer for Theoretical Astrophysics Research (gSTAR), purchased through a $1.04M Education Investment Fund grant obtained via Astronomy Australia Limited (AAL). This part of the supercomputer is aimed primarily at the national astronomy community, with time also available for Swinburne staff and students of all research fields. The second compute facility is the Swinburne Supercomputer for Theoretical Academic Research (swinSTAR), funded by Swinburne and available to all Swinburne staff and students. Both compute facilities are networked together with a three-petabyte data store.


The Green Machine

This incarnation of the supercomputer was installed at Swinburne in May 2007. It comprised 145 Dell PowerEdge 1950 nodes, each with:
  • 2 quad-core Clovertown processors at 2.33 GHz
    (each processor a 64-bit low-voltage Intel Xeon 5138)
  • 16 GB RAM
  • 2 x 500 GB drives

The Clovertown processors offered performance gains along with improved performance per Watt over previous processors, hence the cluster's name: the Green Machine.

To complement the data storage capabilities of the supercomputer, the Centre also had over 100 TB of RAID5 disk and 77 TB of magnetic tape (in the form of 3 S4 DLT tape robots) available for long-term data storage.

The nodes were managed by a head node, which distributed jobs across the cluster via a queue system run by the Moab cluster management software. The operating system was CentOS 5.

Earlier Cluster History

Prior to May 2007, the CAS supercomputer was a Linux Beowulf cluster. In 2002 it became only the second machine in Australia to exceed 1 Tflop/s in performance, and by 2004 further expansion provided a theoretical peak speed of 2 Tflop/s. The cluster comprised the following hardware:

  • 200 Pentium 4 3.2 GHz nodes
  • 32 Pentium 4 3.0 GHz nodes
  • 90 Dual Pentium 4 2.2 GHz server class nodes.

The operating system on this cluster was SUSE Linux, and the nodes were networked with Gigabit Ethernet. The cluster routinely operated at 100% capacity; by 2007, more than four years after the last major supercomputer upgrade and with CAS having grown in both people and projects, it was ripe for replacement. Even so, some components of this cluster remain in use at Swinburne.

Information and Access

For more information about the Swinburne supercomputing program contact:
  Prof Jarrod Hurley
Centre for Astrophysics & Supercomputing
Swinburne University of Technology
PO Box 218
Hawthorn VIC 3122

Phone: +61-3 9214 5787
Fax: +61-3 9214 8797