High Performance Computing (HPC)

Illustration for the High Performance Computing (HPC) service
Photo: ZIM

The ZIM operates compute clusters for High Performance Computing (HPC) for the University of Potsdam and supports its researchers in using the supercomputing systems of the National High Performance Computing (NHR) alliance.


Two HPC clusters are currently in operation:


1. The ZIM HPC Cluster

The ZIM HPC cluster is accessible under the host name “login1.hpc.uni-potsdam.de”. It consists of:

  •  redundant head nodes
  •  redundant file servers with a connection to the ZIM-SAN
  •  at least 17 compute nodes with a current CPU architecture, five of them with GPUs (3 Nvidia V100 or 4 A100 each)
  •  an InfiniBand fabric

The storage systems are accessible from the head nodes and all compute nodes. Data on the scratch storage is not backed up by ZIM; users are themselves responsible for backing up any data they need in the longer term.

The job and resource scheduler SLURM distributes compute jobs across the nodes. Direct user access to the compute nodes is not possible without a job or resource allocation.
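A minimal SLURM batch script might look as follows; the resource values and the program name are placeholders, and the actual partitions and limits are described in the documentation at https://docs.hpc.uni-potsdam.de/:

```shell
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --nodes=1                 # number of compute nodes
#SBATCH --ntasks=4                # number of parallel tasks
#SBATCH --time=00:30:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=slurm-%j.out     # output file (%j = job ID)

# srun launches the tasks on the resources SLURM has allocated;
# ./my_program is a placeholder for your own executable.
srun ./my_program
```

Such a script would be submitted with “sbatch job.sh”; “squeue -u $USER” then shows its queue status.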

2. The Jarvis Cluster

The cluster procured via a DFG large-scale research proposal has the following key data:

  •  Redundantly available management and login nodes
  •  Fast parallel scratch storage
  •  42 CPU nodes (AMD Genoa, 2x 192 cores)
  •  4 highmem nodes (2.3 TB RAM)
  •  2 GPU nodes (2x H100 with 80 GB and 94 GB VRAM each)
  •  InfiniBand fabric

Documentation

The user documentation for the HPC cluster at the University of Potsdam can be found at this link: https://docs.hpc.uni-potsdam.de/

Dashboards: https://monitor.hpc.uni-potsdam.de

First steps: https://docs.hpc.uni-potsdam.de/overview/first_steps.html

Information on Jarvis: https://docs.hpc.uni-potsdam.de/jarvis/

Steps for new users of the compute cluster

  • Install an SSH client on your local machine (if you do not have one already) and generate a key pair in the ed25519 format, e.g. with "ssh-keygen -t ed25519".
  • To unlock access, log into Account.UP and navigate to the Other → High Performance Computing page, where you can enter your SSH key (and optionally join a workgroup).
  • On the HPC cluster, SLURM is used as the job and resource scheduler. We are happy to answer questions about its use at hpc-service@uni-potsdam.de.
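The key generation from the steps above can be sketched as follows; the file path and key comment are examples (by default ssh-keygen writes to ~/.ssh/id_ed25519), and the login line uses a placeholder user name:

```shell
#!/bin/sh
# Generate an Ed25519 key pair for HPC access. The target path is an
# example; by default ssh-keygen writes to ~/.ssh/id_ed25519.
KEYDIR="$(mktemp -d)"
KEYFILE="$KEYDIR/id_ed25519_hpc"
ssh-keygen -t ed25519 -f "$KEYFILE" -N "" -C "hpc-access"

# This is the public key to paste into Account.UP:
cat "$KEYFILE.pub"

# After the key is registered, log in (replace <username> with your
# University of Potsdam account name):
# ssh -i "$KEYFILE" <username>@login1.hpc.uni-potsdam.de
```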

Technical details

Information on using the NHR (National High Performance Computing)