AgResearch eRI Compute System

    The AgResearch eResearch Infrastructure (eRI) Compute cluster comprises the resources in the table below.

    Specifications

    PARTITION      SLURM CPUS   S:C:T    NODES   NODELIST            Mem/Node
    compute        256          2:64:2   6       compute-[0-5]       1T
    gpu            96           2:24:2   1       gpu-0
    hugemem        256          2:64:2   2       hugemem-[0-1]       4T
    interactive    8            8:1:1    3       interactive-[0-2]   15G
    vgpu           32           32:1:1   4       vgpu-[0-3]

    S:C:T stands for sockets : cores : threads

    SLURM CPUS refers to the number of consumable CPU resources each node offers to the cluster and depends on the scheduler's configuration.
    S × C × T = SLURM CPUS (e.g. 2 × 64 × 2 = 256 for the compute partition)
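
    To see these values directly from the scheduler, the standard Slurm client tools can be used from a login node. The sketch below (assuming sinfo and scontrol are available on your PATH) prints the same columns as the table above, then the full detail for one compute node:

        # Partition, CPUs per node, S:C:T, node count, nodelist and memory per node (in MB)
        sinfo --format="%P %c %z %D %N %m"

        # Full configuration of a single node, e.g. the first compute node
        scontrol show node compute-0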

    CPUs

    All CPUs currently in the cluster are AMD EPYC 7713 (Milan) processors with a base clock of 2.0 GHz and a maximum boost of 3.675 GHz.

    NB: virtualised compute nodes (e.g. in the interactive and vgpu partitions) use the same underlying CPUs, but the processor may be reported differently inside the virtualised compute node instance.
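
    As a quick check of how a node's CPUs are presented, a minimal sketch (assuming standard Slurm srun and the Linux lscpu utility are available) is to run lscpu on a physical and on a virtualised node and compare the reported model and topology:

        # CPU model and topology as seen inside a physical compute node
        srun --partition=compute --ntasks=1 lscpu

        # The same command on a virtualised interactive node may report a different topology
        srun --partition=interactive --ntasks=1 lscpu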

    GPUs

    The gpu partition contains a single node with two NVIDIA A100 GPGPUs (PCIe, 40 GB).

    The vgpu partition contains four virtualised compute nodes, each with a single NVIDIA A10 GPGPU (PCIe, 24 GB).
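
    A minimal batch script sketch for requesting a GPU is shown below. The partition names come from the table above; the job name, CPU, memory and time values are illustrative placeholders, and the exact GRES string may differ on eRI:

        #!/bin/bash
        #SBATCH --job-name=gpu-test        # placeholder job name
        #SBATCH --partition=gpu            # use vgpu for an A10 instead of an A100
        #SBATCH --gres=gpu:1               # request one GPU; the GRES name depends on the site configuration
        #SBATCH --cpus-per-task=8          # illustrative values only
        #SBATCH --mem=32G
        #SBATCH --time=01:00:00

        # Show which GPU was allocated to the job
        nvidia-smi

    The script is submitted with sbatch; on recent Slurm versions, --gpus-per-node=1 can be used in place of --gres=gpu:1.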

    Related content

    How to check access permissions on eRI storage
    AgR eRI compute cluster - applications as modules