HQS@HPCII

HQS@HPCII is the successor of HQS@HPC. The project officially started on January 1, 2009. For a related project, see also Cluster perturbation theory for electron-phonon systems.

Abstract

The challenge of understanding the complex physical properties of highly correlated quantum systems has stimulated intense work on generic microscopic model Hamiltonians. Topical issues are charge, spin, and heat transport, quantum phase transitions, particle real-time dynamics, as well as quantum-fluctuation, temperature, detuning, and decoherence effects, in particular in electronically low-dimensional materials or in geometrically restricted quantum systems and devices. The project addresses these problems through large-scale numerical investigations on high-end supercomputers. In particular, we apply the density matrix renormalization group (DMRG) scheme to examine the intervening metallic phase at the transition between the spin-density-wave and charge-density-wave phases in the one-dimensional Holstein-Hubbard model. Moreover, we explore charge transport within a correlated, fluctuating background medium by means of an effective lattice model with a novel form of fermion-boson coupling. Combining exact diagonalization, DMRG, and kernel polynomial methods, we study the ground-state and spectral properties of this model and discuss the possibility of a metal-insulator quantum phase transition in relation to Mott and Peierls transition scenarios. By means of recently developed Chebyshev expansion and Chebyshev space techniques we investigate the time evolution of finite quantum systems and inspect the effects of coupling quantum systems to fermionic and bosonic baths.
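The Chebyshev approach to time evolution mentioned above can be illustrated with a minimal sketch. The following Python code is an illustration only, not the project's production software; the tight-binding chain Hamiltonian, the expansion order, and the spectral safety margin are assumptions chosen for demonstration. It propagates a state with exp(-iHt) by expanding the propagator in Chebyshev polynomials with Bessel-function coefficients and compares the result against exact diagonalization:

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind

def chebyshev_propagate(H, psi0, t, order=50):
    """Approximate exp(-i H t) |psi0> by a Chebyshev expansion.

    H must be Hermitian; dense linear algebra is used here only
    because the toy example is small.
    """
    # Rescale the spectrum of H into [-1, 1] (required by Chebyshev polynomials)
    evals = np.linalg.eigvalsh(H)
    a = (evals[-1] - evals[0]) / 2 * 1.01  # small safety margin
    b = (evals[-1] + evals[0]) / 2
    Hs = (H - b * np.eye(len(H))) / a

    # Three-term recurrence: v_{k+1} = 2 Hs v_k - v_{k-1}
    v_prev = psi0.astype(complex)
    v_curr = Hs @ v_prev
    result = jv(0, a * t) * v_prev + 2 * (-1j) * jv(1, a * t) * v_curr
    for k in range(2, order):
        v_next = 2 * (Hs @ v_curr) - v_prev
        result += 2 * (-1j) ** k * jv(k, a * t) * v_next
        v_prev, v_curr = v_curr, v_next
    return np.exp(-1j * b * t) * result  # undo the spectral shift

# Toy example: 8-site tight-binding chain, particle starting in the middle
N = 8
H = -np.eye(N, k=1) - np.eye(N, k=-1)
psi0 = np.zeros(N)
psi0[N // 2] = 1.0
psi_t = chebyshev_propagate(H, psi0, t=2.0)

# Reference: exact propagation via full diagonalization
w, U = np.linalg.eigh(H)
psi_exact = U @ (np.exp(-1j * w * 2.0) * (U.conj().T @ psi0))
print(np.linalg.norm(psi_t - psi_exact))  # error shrinks rapidly with `order`
```

Because the Bessel coefficients decay faster than exponentially once the order exceeds the rescaled time span, moderate expansion orders already give near-machine-precision accuracy; in large-scale applications the eigendecomposition used here for rescaling would be replaced by cheap spectral bounds.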

As the predecessor KONWIHR project HQS@HPC has shown, the importance of high-performance numerical software cannot be overstated, even when the most advanced algorithms are used. An explicit goal of this project is therefore the further advancement of our high-performance implementations. The "hot spot" in our exact diagonalization (ED) codes is sparse matrix-vector multiplication (sMVM). We will employ recent developments in sMVM optimization to improve the performance of ED. Furthermore, we will make use of data structures that enable architecture-specific data access optimizations. For shared-memory and hybrid ED codes, correct ccNUMA page placement will be paramount. As the rigid boundary conditions for ccNUMA placement work against optimal load balancing, we will thoroughly evaluate hybrid, hierarchical implementations that map ideally to the core-socket-node-cluster structure of modern HPC systems.
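For illustration, the compressed row storage (CRS) layout that underlies such sMVM kernels can be sketched as follows. This is a plain Python rendering for clarity, not the optimized, threaded implementation the project targets; the toy matrix is an assumption for demonstration. The indirect gather x[col_idx[j]] in the inner loop is the irregular memory access that sMVM optimizations and ccNUMA-aware data placement address:

```python
import numpy as np

# Compressed Row Storage (CRS): three flat arrays describe the sparse matrix.
#   val     - nonzero values, stored row by row
#   col_idx - column index of each nonzero
#   row_ptr - start offset of each row in val/col_idx (length nrows + 1)

def spmv_crs(val, col_idx, row_ptr, x):
    """Compute y = A @ x for a matrix A in CRS format."""
    nrows = len(row_ptr) - 1
    y = np.zeros(nrows)
    for i in range(nrows):
        s = 0.0
        for j in range(row_ptr[i], row_ptr[i + 1]):
            s += val[j] * x[col_idx[j]]  # indirect gather: the sMVM hot spot
        y[i] = s
    return y

# Toy matrix: [[4, 0, 1],
#              [0, 3, 0],
#              [2, 0, 5]]
val = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
print(spmv_crs(val, col_idx, row_ptr, x))  # [ 7.  6. 17.]
```

In a compiled, OpenMP-threaded version of this loop, each thread would initialize ("first-touch") the rows of val and y that it later works on, so that ccNUMA systems place the pages in the local memory of the executing socket; the tension between that static placement and dynamic load balancing is exactly the trade-off discussed above.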

The project HQS@HPCII is partially funded by KONWIHR (Competence Network for Technical and Scientific High Performance Computing in Bavaria).

Contact

Project managers:

  • Prof. Dr. Holger Fehske
    Lehrstuhl für Theoretische Physik II
    Ernst-Moritz-Arndt-Universität Greifswald
    17489 Greifswald
    +49 (0)3834 86 4760
    holger.fehske@physik.uni-greifswald.de

  • Dr. Georg Hager
    Regionales Rechenzentrum Erlangen
    HPC Services
    Martensstr. 1
    91058 Erlangen
    +49 (0)9131 85 28973
    georg.hager@rrze.fau.de

Staff (from January 1, 2009)

  • Dipl.-Phys. Gerald Schubert

Partners

The following people are closely involved in HQS@HPCII:

  • Dr. Gerhard Wellein, Regionales Rechenzentrum Erlangen. gerhard.wellein@rrze.fau.de

  • Dr. Andreas Alvermann, Institut für Physik der Universität Greifswald. andreas.alvermann@physik.uni-greifswald.de

  • Dr. Satoshi Ejima, Institut für Physik der Universität Greifswald. satoshi.ejima@physik.uni-greifswald.de

Cooperations

We cooperate with several scientists within and outside Germany:

  • Prof. Dr. Eric Jeckelmann (Institut für Theoretische Physik der Universität Hannover)

  • Prof. Dr. David M. Edwards (Department of Mathematics, Imperial College, London)

  • Prof. Dr. Alan R. Bishop (Los Alamos National Laboratory, T Division Director)

Publications

Relevant publications (up to 2008):

  • H. Fehske, R. Schneider and A. Weiße, Computational Many-Particle Physics, Lect. Notes Phys. 739 (Springer, Berlin Heidelberg 2008).
  • A. Alvermann and H. Fehske, Phys. Rev. B 77, 045125 (2008): Chebyshev approach to quantum systems coupled to a bath.
  • A. Alvermann, D. M. Edwards, and H. Fehske, Phys. Rev. Lett., 98, 056602 (2007): Boson controlled quantum transport.
  • G. Hager, T. Zeiser and G. Wellein, Workshop on Large-Scale Parallel Processing 2008, arXiv:0712.2302: Data access optimizations for highly threaded multi-core CPUs with multiple memory controllers.
  • A. Weiße, G. Wellein, A. Alvermann, and H. Fehske, Rev. Mod. Phys. 78, 275 (2006): The kernel polynomial method.
  • G. Hager, T. Zeiser, J. Treibig and G. Wellein, In: Proceedings of the 2nd Russian-German Advanced Research Workshop on Computational Science and High Performance Computing, HLRS, Stuttgart, March 14 - 16, 2005. Optimizing performance on modern HPC systems: learning from simple kernel benchmarks.
  • G. Hager, E. Jeckelmann, H. Fehske, and G. Wellein, J. Comput. Phys. 194, 795 (2004): Parallelization strategies for density matrix renormalization group algorithms on shared-memory systems.
  • H. Fehske, G. Wellein, G. Hager, A. Weiße, and A. R. Bishop, Phys. Rev. B 69, 165115 (2004): Quantum lattice dynamical effects on the single-particle excitations in 1D Mott and Peierls insulators.
  • G. Wellein, G. Hager, A. Basermann and H. Fehske, In: J.M.L.M. Palma, J. Dongarra (eds.): High Performance Computing for Computational Science - VECPAR2002, Porto, June 26-28, 2002. Berlin, Springer, 2003. Fast sparse matrix-vector multiplication for TFlop/s computers.

Last modified: 13 March 2012
