Latest news

(resolved) Disruption of the fiber-optic link of the administrative network between Halbmondstraße and the RRZE

May 22, 2017

Due to a fault in the fiber-optic link of the administrative network (ZUV) between the Halbmondstr. node and the RRZE, the administrative workstations in the Schloss as well as in Halbmondstr., Krankenhausstr. and Turnstr. currently cannot reach any of the central services or the Internet.

Downtime of HPC clusters LiMa, TinyFAT & parts of Woody on Mon, May 15 (FINISHED)

May 12, 2017

Due to urgent work on the power grid, the HPC clusters LiMa, TinyFAT & parts of Woody (w10xx = :sb and w12xx = :sl16) as well as Memoryhog have to be shut down on Monday, May 15th starting at 7 o’clock in the morning. As usual, jobs that would collide with the downtime will be postponed.

FAUbox maintenance postponed to May 4, 2017

May 2, 2017

Due to scheduling conflicts, the FAUbox maintenance has been postponed.

memoryhog and the TinyFat cluster

Memoryhog and the TinyFat cluster are intended for running serial or moderately parallel (OpenMP) applications that require large amounts of memory within a single machine.

Since October 2009, a machine called memoryhog has been available at RRZE. The actual physical machine behind the name has changed twice; the current incarnation has existed since March 2011.

  • HP DL385 G7
  • 2 CPU sockets (16 CPU cores) with AMD Opteron 6134 ("Magny Cours") CPUs - 2.3 GHz
  • 128 GBytes of main memory
  • running Ubuntu LTS

To access the machine, simply connect via SSH to
memoryhog.rrze.uni-erlangen.de. As with most HPC systems, it is only reachable from inside the uni-erlangen network. To access the machine from elsewhere, you need to use the dialog servers.
There is no reservation or batch system for this machine, so be considerate of other users on the machine.
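Assembled as commands, the two access paths look like the following sketch. The username and the dialog-server hostname are placeholders, not real account or host names; the snippet only prints the commands so you can copy and adapt them:

```shell
MH_USER=yourusername                      # placeholder account name
MH_HOST=memoryhog.rrze.uni-erlangen.de

# from inside the uni-erlangen network, connect directly:
echo "ssh ${MH_USER}@${MH_HOST}"

# from anywhere else, jump through a dialog server first; the hostname
# below is a placeholder -- substitute an actual dialog server:
echo "ssh -J ${MH_USER}@dialogserver.uni-erlangen.de ${MH_USER}@${MH_HOST}"
```

`ssh -J` (ProxyJump) requires OpenSSH 7.3 or newer; on older clients, two consecutive ssh invocations achieve the same thing.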

Processes hogging too many resources or running for too long will be killed without notice.

TinyFat cluster

TinyFat is a cluster of nodes that have large amounts of main memory. It is basically a cluster of memoryhogs with access managed through a batch system.

TinyFat currently consists of these nodes:

  • 16 x
    • HP DL385 G7
    • 2 CPU sockets (16 CPU cores) with AMD Opteron 6134 ("Magny Cours") CPUs - 2.3 GHz
    • 128 GBytes of main memory
  • 1 x
    • HP DL580 G7
    • 4 CPU sockets (32 CPU cores + 32 SMT threads) with Intel Xeon X7560 ("Nehalem EX") CPUs - 2.27 GHz
    • 512 GBytes of main memory
  • 1 x
    • HP DL585 G7
    • 4 CPU sockets (48 CPU cores) with AMD Opteron 6176 SE ("Magny Cours") CPUs - 2.3 GHz
    • 192 GBytes of main memory
  • 3 x
    • SuperMicro
    • 2 CPU sockets (28 CPU cores + 28 SMT threads) with Intel Xeon E5-2680 v4 ("Broadwell") CPUs - 2.4 GHz
    • 512 GBytes of main memory
  • 8 x
    • SuperMicro
    • 2 CPU sockets (12 CPU cores + 12 SMT threads) with Intel Xeon E5-2643 v4 ("Broadwell") CPUs - 3.4 GHz
    • 256 GBytes of main memory
All nodes run Ubuntu LTS. Parts of the cluster are connected by fully non-blocking QDR InfiniBand; parts have 10 Gigabit Ethernet connections.

Access to TinyFat is through the Woody frontends; there are no separate frontends for TinyFat, so when you connect you will be randomly routed to one of the Woody frontends. See the documentation for the Woodcrest cluster for information about these frontends. Although the TinyFat compute nodes actually run Ubuntu LTS, the environment is compatible: programs compiled for Woody will also run on TinyFat.

For submitting jobs, you have to use the command qsub.tinyfat instead of the normal qsub.
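A minimal submission might look like the following sketch. The resource options use common Torque-style syntax, and the script name and application are made up, so adjust them to your case:

```shell
# write a minimal job script requesting one 16-core TinyFat node
cat > tinyfat_job.sh <<'EOF'
#!/bin/bash
#PBS -l nodes=1:ppn=16
#PBS -l walltime=01:00:00
cd "$PBS_O_WORKDIR"
./my_large_memory_app        # placeholder for your application
EOF

# submit with the TinyFat wrapper instead of the normal qsub:
# qsub.tinyfat tinyfat_job.sh
```

As noted above, jobs that would collide with an announced downtime are postponed by the batch system, so request only the walltime you actually need.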

Further information

AMD Opteron "Magny Cours" processor series

The "Magny Cours" processors are used in many of the machines with large amounts of memory because they have many memory channels, four per socket, which makes building such machines cheaper. They are eight- or twelve-core processors with two dies per socket. The following graphic illustrates the structure of a two-socket Magny Cours node equipped with the twelve-core processor variant:

Block diagram of a two socket Magny Cours system

As you can see, inside the physical processor package (dashed line) there are actually two completely separate dies that are linked through a wide HyperTransport connection. Each die has two memory channels, which combined gives the four channels per socket. A two-socket Magny Cours system is therefore, in essence, a four-socket system.
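On Linux you can observe this four-domain structure directly. One quick way is numactl; it is a standard Linux tool, but its availability on a given node is an assumption, hence the guard:

```shell
# list the NUMA domains with their CPUs and local memory; on a
# two-socket Magny Cours node this reports four domains, not two
if command -v numactl >/dev/null 2>&1; then
    numa_info=$(numactl --hardware)
else
    numa_info="numactl not installed"
fi
echo "$numa_info"
```

The same layout can also be read from /sys/devices/system/node/ if numactl is unavailable.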

Parallel programs

memoryhog and all the nodes of TinyFat are ccNUMA machines, some with a very complex structure. Therefore, especially with OpenMP programs, it is essential to pin your threads to the right cores. See the OpenMP pinning section in our software environment documentation. If you are unsure how the cores are numbered on a machine, i.e. which core number is on which socket, use /apps/likwid/stable/bin/likwid-topology or /apps/likwid/stable/bin/likwid-topology -g.
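Put together, a typical session might look like the following sketch. The install path for likwid-topology is the one given above; that likwid-pin sits in the same directory, and the application name, are assumptions:

```shell
LIKWID=/apps/likwid/stable/bin      # install path from this page

# step 1: inspect the topology; -g adds an ASCII block diagram.
# guarded so the snippet degrades gracefully off the cluster:
if [ -x "${LIKWID}/likwid-topology" ]; then
    "${LIKWID}/likwid-topology" -g
else
    echo "likwid-topology not found under ${LIKWID}"
fi

# step 2: once the core numbering is known, pin the OpenMP threads,
# e.g. to the 16 physical cores of a Magny Cours node:
# "${LIKWID}/likwid-pin" -c 0-15 ./my_openmp_app
```

likwid-pin sets the thread affinity before the program starts, so the application itself needs no code changes.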

Last modified: March 8, 2017



RRZE - Regionales RechenZentrum Erlangen, Martensstraße 1, D-91058 Erlangen | Tel.: +49 9131 8527031 | Fax: +49 9131 302941

