Windows HPC2012 Compute Cluster

The RRZE's Windows cluster runs Microsoft Windows Compute Cluster Server 2012. This is a Windows Server 2012 R2 based system with an installation of MS HPC Pack 2012 R2. Thanks to a cooperation with Microsoft, we can provide this service to all customers interested in developing and running high-performance scientific software in a Windows environment.

The Windows cluster includes the following components:

  • 16 compute nodes with dual-socket boards and two hexa-core AMD Opteron Istanbul processors (2.6 GHz) each.
  • 32 GBytes of RAM per compute node, i.e. 2.6 GBytes per core (ccNUMA architecture)
  • Head node with 8 GBytes of RAM and 4 Intel Xeon based cores (2.53 GHz)
  • Front node with 8 GBytes of RAM and 4 Intel Xeon based cores (2.53 GHz)


Account request for new users

  • Please fill out the required user account request forms provided here and submit them to the RRZE by fax (09131 / 85 29966) or
    mail (Service-Theke, Regionales Rechenzentrum Erlangen (RRZE), Martensstraße 1, 91058 Erlangen).
  • Furthermore, please contact hpc@rrze.fau.de for additional information.
  • For information on daily operations, please enroll in the mailing list.


Access and File Systems

Access to the Machine

Access to the system is granted via the frontend windowscc.rrze.uni-erlangen.de. The frontend and the cluster nodes all have private IP addresses and can only be reached directly from within the university network. If you want to connect from the outside, use an appropriate SSH tunnel to port 3389 of windowscc through the official dialog server cshpc. Please connect using the RDP protocol, either with the Windows Remote Desktop Client or the rdesktop tool under UNIX/Linux:

  • Windows Remote Desktop Client: This program is part of every Windows XP (or higher) installation and can be found under "Accessories"-"Communications". The client allows you to make local resources (notably disks) visible on the remote system, which greatly facilitates file transfers.
  • rdesktop: This is an open source program that is part of all major Linux distributions. Use it by specifying the server to connect to as an argument:

    rdesktop -a 16 -f -k de windowscc.rrze.uni-erlangen.de

    The option -a 16 specifies a color depth of 16 bits, -f turns on fullscreen mode and -k de is required if you use a German keyboard layout. To leave fullscreen mode you can type ctrl-alt-enter. Killing the client leaves your session running and you can reconnect at any time.

    You can make client directories available on the remote server using option -r disk:<share>=<pathname>. The directory under <pathname> will then be accessible on the server under the UNC \\tsclient\<share>.

Walkthrough:

From within the university network or connected via VPN, working on a Unix platform:
  • Open a shell and type:
    rdesktop -f -k de windowscc.rrze.uni-erlangen.de
    Enter your password and login.
From within the university network or connected via VPN, working on a Windows platform:
  • Click Start -> All Programs -> Accessories -> Remote Desktop Connection (or Start -> Run, type "mstsc" and press enter).
    Connect and login to:
    windowscc.rrze.uni-erlangen.de
From outside the university network, working on a Unix platform:
  • Use the NXClient described here.
    Connect and login to:
    cshpc.rrze.uni-erlangen.de
  • Open a shell and enter:
    xfreerdp /u:YOURUSERNAME /v:windowscc.rrze.uni-erlangen.de /bpp:16 /f
    Replace YOURUSERNAME with your username and enter your password when requested. On the first login, you will also have to accept the server's certificate.
From outside the university network, working on a Windows platform:
  • Download the RRZE-preconfigured NXClient from here and install it.
    Connect and login to:
    cshpc.rrze.uni-erlangen.de
  • Open a shell and type:
    xfreerdp /u:YOURUSERNAME /v:windowscc.rrze.uni-erlangen.de /bpp:16 /f
    Replace YOURUSERNAME with your username and enter your password when requested. On the first login, you will also have to accept the server's certificate.

File Systems

On the head node you should not use the My Documents folder, as space is very limited. Furthermore, this folder is only visible on that very node, and each of the compute nodes has its own "My Documents" for each user. We therefore provide a globally visible share for each user under \\aycasamba.rrze.uni-erlangen.de\hpc_vault\<group>\<username>, which can be read and written from all nodes. This is the place to put all your development data, binaries, and the input/output data of jobs.

Programs Directory

C:\Programme

Home and Working Directory

\\aycasamba.rrze.uni-erlangen.de\hpc_vault\<group>\<username>

Old Storage

\\ccsmaster.rrze.uni-erlangen.de\ccsshare\<username>


Software Development

Microsoft Visual Studio 2008

Visual Studio 2008 is installed on the frontend. By default you can use all the programming languages that the IDE supports, most notably C/C++. There is, however, no Fortran compiler available from Microsoft. If you want to develop in Fortran, you have to use the installed Intel compiler in version 9.1. When creating a new project, you can select Intel Fortran right away. For projects that are to use Intel's C/C++ compiler, you first have to create a standard C/C++ project and then convert it to the Intel project system using the appropriate entry in the project's context menu in the solution explorer.

OpenMP

Intel 9.1 compiler suite (C/C++ and Fortran)

Go to the "Language" section of the project properties. There you can either choose "Generate Parallel Code" for the "Process OpenMP Directives" option, or select "Generate Sequential Code", which links against a stub OpenMP library so that the code runs in serial mode.

Microsoft C/C++ compiler

Go to the "Language" section of "C/C++" and choose "Yes" for the "OpenMP Support" option.

There is a known problem with the current version of the Microsoft Visual Studio C/C++ compiler in conjunction with OpenMP. The OpenMP specification does not require the omp.h file to be included unless you use the OpenMP runtime functions. If you do not include the omp.h file in your OpenMP program when using the Microsoft compiler, you will get an error message at runtime that a dynamic library is missing. The workaround is to include omp.h in all your OpenMP programs.

Setting the number of threads

Microsoft C/C++ compiler

The number of OpenMP threads can be controlled by setting the environment variable OMP_NUM_THREADS to the desired value. To do this inside Microsoft Visual Studio, you have to change the project settings, choose "Debug" on the left side and then add the OMP_NUM_THREADS variable to "Environment" on the right side.

Intel 9.1 compiler suite

There seems to be no option for changing the environment when using the Intel project system (instead of the MS compiler). Thus, binaries produced by the Intel compiler will run with the maximum number of threads, which equals the number of cores on the node, when started from the IDE. Inside batch jobs, you can of course set the OMP_NUM_THREADS variable as desired. This problem is being investigated.

Java

JRE

The front node as well as the cluster nodes are equipped with a recent Java Runtime Environment located in:
C:\Programme\java\java-current
Please find the Java executable under:
C:\Programme\java\java-current\bin\java.exe

JDK

Furthermore, the front node has a recent JDK version installed under:
C:\Programme\Java\jdk-current\

CVS and SVN

CVS and SVN are also installed on the frontend, under
C:\Program Files\TortoiseCVS
and
C:\Program Files\TortoiseSVN
Both programs can be used via their Windows Explorer right-click extensions.


Important Libraries

OpenMP

The way to use OpenMP depends on which compiler you want to use, i.e. if your VS project uses the Microsoft or the Intel project system (see above).

MPI

Microsoft MPI (recommended)

Boost Library

A recent version of the Boost x64 library can be found on every node under:
C:\Programme\boost\boost-current\lib


Batch Processing

Compute Cluster Job Manager

General Access

  • To access the "Job Manager" hit the start button, choose "All Programs", choose "Microsoft HPC Pack 2012 R2" and click Compute Cluster Job Manager.
  • Click "Submit Job" in the "File" menu.
  • Insert the desired job name, a job template (see below), and a description.
  • On the processor pane, specify the number of processors.
  • On the task pane, add a task for each executable you want to schedule.
  • Click "Submit".

Command Line Job Manager

  • More functionality, thanks to the possible use of scripting, is provided by the command line tool job. It is accessible from any command prompt of the Compute Cluster Server.
  • Example (reserves 8 processors and runs one executable):
    job submit /numprocessors:8 /stdout:\\aycasamba.rrze.uni-erlangen.de\hpc_vault\<group>\<username>\iptest.txt hostname.exe
  • Example (reserves 8 processors and runs the executable once for each processor):
    job submit /numprocessors:8 /stdout:\\aycasamba.rrze.uni-erlangen.de\hpc_vault\<group>\<username>\iptest.txt mpiexec hostname.exe
  • All cluster-related command line tools are described on the Microsoft Compute Cluster Command Line Reference page.

Job Templates

Job templates are the Microsoft implementation of different cluster queues for different computational requirements and categories.
At the moment we provide the following job templates for you.

LongTermCalculation

This is the default calculation template which you may use to run your heavy-duty jobs.
The LongTermCalculation template has the following restrictions, which are verified automatically on job submission:

  • The maximum runtime is 2 days.
  • The node groups to run the job on have to include Group_LongTermCalculation.

TestCalculation (suspended, since the test node is used for testing 2008 R2)

This template is designed for code and other testing and has a guaranteed short turnaround time. You may test whether your program runs, aborts, or writes logs as intended without having to wait in the production queue.
Only one node is assigned here, so you should run only a lightweight version of your code using this template. The TestCalculation template has the following restrictions, which are verified automatically on job submission:

  • The maximum runtime is 1 hour.
  • The node groups to run the job on have to include Group_TestCalculation.


Documentation

We have set up a small documentation collection which you can find on your desktop on HPC2k8Front2.
We recommend that you read the articles numbered below 500.
Thank you! Your AdminTeam.

Last modified: 23 February 2016
