Systems and clusters
The software environment on the RRZE HPC systems has been kept as uniform as possible, so that users can switch between clusters without having to start from scratch. Documentation for this environment can be found under HPC environment.
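On module-based HPC environments such as this one, software is typically selected per login session. A minimal sketch, assuming the standard environment-modules commands are available on the cluster front ends (the module names shown are placeholders, not necessarily the ones provided at RRZE):

```shell
# List the software made available through the modules system
module avail

# Load a compiler and an MPI library for the current session
# (intel64 and openmpi are example names; check `module avail`
# for the names actually provided on each cluster)
module load intel64
module load openmpi

# Show which modules are currently loaded
module list
```

Because the same module names are offered across the clusters wherever possible, job scripts written this way usually need little or no change when moving between systems.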
The following overview lists the systems to which the HPC service group can provide access.
The RRZE emmy-Cluster consists of 560 compute nodes with twenty Intel Xeon E5-2660v2 cores each. It is intended for massively parallel jobs.
The RRZE lima-Cluster consists of 500 compute nodes with twelve Intel Xeon 5650 cores each (6000 cores + 6000 SMT threads overall). This is the current main RRZE workhorse for parallel jobs.
The RRZE Woodcrest-Cluster consists of 217 compute nodes with four Intel Xeon 5160 cores each (868 cores overall). It was the previous main system and has since been repurposed as a serial throughput cluster.
The RRZE Tinyblue-Cluster consists of 84 compute nodes with eight Intel Xeon 5550 cores each (672 cores + 672 SMT threads overall).
The RRZE TinyGPU-Cluster consists of 7 compute nodes with two NVIDIA GTX980 GPU boards each, one compute node with four NVIDIA C20xx GPU boards, and one compute node with an NVIDIA Tesla K20c and a GeForce GTX680 GPU board.
"memoryhog" and the TinyFat Compute-Cluster are intended for memory intensive programs.
The Linux-Testcluster consists of a variety of systems (IA32/EM64T/AMD64/IA64) intended for benchmarking and software testing.
RRZE operates a cluster installation (16 nodes with 192 AMD Opteron "Istanbul" cores overall and 32 GB of memory per node) running Microsoft Windows Compute Cluster Server 2008.
A dedicated login machine can be used as an access portal to reach the other HPC systems from outside the university network. This is necessary because most of the HPC systems use private IP address ranges that can only be reached directly from inside the FAU network.
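Access through such a portal can be configured once on the client side; a minimal sketch, assuming an OpenSSH client (7.3 or newer, for ProxyJump) and using placeholder host and account names rather than the actual RRZE ones:

```
# ~/.ssh/config -- reach an internal front end via the access portal
Host hpc-portal
    HostName portal.example.fau.de    # placeholder for the actual portal host
    User your_hpc_account             # placeholder account name

Host cluster-frontend
    HostName cluster-frontend         # internal name, resolvable from the portal
    User your_hpc_account
    ProxyJump hpc-portal              # tunnel the connection through the portal
```

With such an entry in place, `ssh cluster-frontend` from outside the university network transparently hops through the portal; `scp` and `rsync` over SSH work the same way.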
RRZE operates a powerful storage system for HPC.
Users who require very large amounts of computing power can also apply for access to the HPC systems of the "Leibniz-Rechenzentrum der Bayerischen Akademie der Wissenschaften" (Leibniz Supercomputing Centre) in Munich.