
PRIME – Programming Rig for Modern Engineering


GETTING ACCESS

You must be associated with the Faculty of Technical Sciences, Aarhus University to get access.

Employees:
Contact kknu@ece.au.dk to get access instructions.

Employees who use the cluster (either directly or through their students) are expected to contribute to its future development and maintenance by including funding for the cluster in their research applications.

Students and PhD students:
Contact kknu@ece.au.dk, with your supervisor in cc, to get access instructions.

Supervisors:
Please send an email to kknu@ece.au.dk when your student is no longer active.

Use in courses:
Please send an email to kknu@ece.au.dk with the name of the course and its start and end dates.

A high-performance computing (HPC) cluster for modern engineering, based on CentOS, Slurm, InfiniBand, and NFS. Topics studied include nanomaterials, computational dynamics, tissue engineering, fracture mechanics, energy systems, molecular dynamics, computational fluid dynamics, and many more.

HARDWARE

  • 31 compute nodes with 24-64 cores each
  • 1444 cores in total
  • 256-512 GB memory per node
  • InfiniBand QDR, FDR, and HDR interconnect
  • 6 NVIDIA A100 GPUs
  • 185 TB shared NFS disk space
  • Slurm scheduler
  • CentOS 7
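Jobs on the cluster are submitted through the Slurm scheduler. As a rough sketch, a minimal batch script might look like the one below; the module name and resource limits are assumptions for illustration, so check the cluster's actual partitions and installed modules before use.

```shell
#!/bin/bash
#SBATCH --job-name=example        # job name shown in the queue
#SBATCH --nodes=1                 # request a single compute node
#SBATCH --ntasks=24               # 24 tasks (nodes have 24-64 cores)
#SBATCH --mem=64G                 # memory per node (nodes have 256-512 GB)
#SBATCH --time=02:00:00           # wall-clock limit
#SBATCH --output=%x-%j.out        # output file named <job-name>-<job-id>.out

# "mpi/openmpi" is a hypothetical module name; run `module avail` to see
# what is actually installed on the cluster.
module load mpi/openmpi

# Launch the program through Slurm's task launcher.
srun ./my_simulation
```

Submit with `sbatch job.sh` and monitor with `squeue -u $USER`.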

Contact

Keld Erik Knudsen

Member of Administrative Staff

Address

Inge Lehmanns Gade 10

DK-8000 Aarhus N