Discovery Resource Overview

CARC's general-use HPC cluster Discovery has over 470 compute nodes available for researchers to use.

Note: Discovery is a shared resource, so there are limits on the size and duration of jobs. This ensures that everyone has a chance to run jobs. For details on these limits, see Running Jobs.

For general CARC system specifications, see our High Performance Computing page.

Discovery Cluster Overview video

Partitions and compute nodes

Discovery has a few general-use Slurm partitions, each with a separate job queue, that are available to all researchers. The table below describes the intended purpose of each partition:

| Partition | Purpose |
|-----------|---------|
| main | Serial and small-to-medium parallel jobs (single node or multiple nodes) |
| epyc-64 | Medium-to-large parallel jobs (single node or multiple nodes) |
| gpu | Jobs requiring GPU nodes |
| oneweek | Long-running jobs (up to 7 days) |
| largemem | Jobs requiring larger amounts of memory (up to 1 TB) |
| debug | Short-running jobs for debugging purposes |
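
For example, a job is sent to a particular partition with the --partition option in a Slurm job script. The following is a minimal sketch; the resource values are placeholders, ./my_program stands in for your actual program, and your job may also need to specify a project account with --account:

    #!/bin/bash
    #SBATCH --partition=debug     # one of the partitions listed above
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=8G
    #SBATCH --time=00:30:00

    # Replace with the program you actually want to run
    ./my_program

Save the script to a file and submit it with the sbatch command.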

Each partition has a different mix of compute nodes; the table below describes the available nodes by partition. Each node typically has two sockets, each with one multi-core processor, and an equal number of cores per processor. In the table, the CPUs/node column refers to logical CPUs, where 1 logical CPU = 1 core = 1 thread.

| Partition | CPU model | CPU frequency | CPUs/node | GPU model | GPUs/node | Memory/node | Nodes |
|-----------|-----------|---------------|-----------|-----------|-----------|-------------|-------|
| main | xeon-2640v3 | 2.60 GHz | 16 | --- | --- | 59 GB | 84 |
| main | xeon-2640v4 | 2.40 GHz | 20 | --- | --- | 59 GB | 76 |
| main | xeon-4116 | 2.10 GHz | 24 | --- | --- | 89 GB | 39 |
| main | xeon-4116 | 2.10 GHz | 24 | --- | --- | 184 GB | 41 |
| main | xeon-2640v3 | 2.60 GHz | 16 | K40 | 2 | 59 GB | 17 |
| main | xeon-2640v4 | 2.40 GHz | 20 | K40 | 2 | 59 GB | 41 |
| epyc-64 | epyc-7542 | 2.90 GHz | 64 | --- | --- | 248 GB | 32 |
| epyc-64 | epyc-7513 | 2.60 GHz | 64 | --- | --- | 248 GB | 61 |
| gpu | xeon-6130 | 2.10 GHz | 32 | V100 | 2 | 184 GB | 29 |
| gpu | xeon-2640v4 | 2.40 GHz | 20 | P100 | 2 | 123 GB | 38 |
| gpu | epyc-7282 | 2.80 GHz | 32 | A40 | 2 | 248 GB | 12 |
| gpu | epyc-7513 | 2.60 GHz | 64 | A100 | 2 | 248 GB | 12 |
| oneweek | xeon-2650v2 | 2.60 GHz | 16 | --- | --- | 123 GB | 35 |
| oneweek | xeon-2650v2 | 2.60 GHz | 16 | --- | --- | 248 GB | 12 |
| oneweek | xeon-2640v3 | 2.60 GHz | 16 | --- | --- | 59 GB | 2 |
| largemem | epyc-7513 | 2.60 GHz | 64 | --- | --- | 998 GB | 4 |
| debug | xeon-2650v2 | 2.60 GHz | 16 | --- | --- | 59 GB | 4 |
| debug | xeon-2640v3 | 2.60 GHz | 16 | K40 | 2 | 59 GB | 1 |
| debug | xeon-2640v4 | 2.40 GHz | 20 | P100 | 2 | 123 GB | 1 |

Note: Use the nodeinfo command to view similar information in real time.
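
If you prefer standard Slurm commands, sinfo reports much of the same information. The following is a short sketch; the gpu partition is used only as an example:

    # Summary of partitions, node states, and node counts
    sinfo

    # CPUs, memory (in MB), and GPUs (GRES) for each node in a partition
    sinfo --partition=gpu --Node --format="%N %c %m %G"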

A few commands provide more detailed node information. For CPU details, use the lscpu command. For nodes with GPUs, the nvidia-smi command and its various options provide information about the GPUs. In addition, after running module load gcc/11.2.0 hwloc, you can use the lstopo command to view a node's topology.
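
For example, you could run these commands from an interactive session on a compute node. This is a sketch only; the salloc options shown are placeholders:

    # Request a short interactive session on a compute node
    salloc --partition=debug --time=00:10:00 --cpus-per-task=2

    # CPU details for the node you land on
    lscpu

    # GPU details (only useful on GPU nodes)
    nvidia-smi

    # Socket/core/cache/NUMA layout of the node
    module load gcc/11.2.0 hwloc
    lstopo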

CPU microarchitectures and instruction set extensions

Different CPU models also offer different CPU instruction set extensions. Compiled programs can use these extensions to boost performance. The following is a summary table:

| CPU model | Microarchitecture | Partitions | SSE | SSE2 | SSE3 | SSE4 | AVX | AVX2 | AVX-512 |
|-----------|-------------------|------------|-----|------|------|------|-----|------|---------|
| xeon-2650v2 | ivybridge | oneweek, debug | ✓ | ✓ | ✓ | ✓ | ✓ | | |
| xeon-2640v3 | haswell | main, oneweek, debug | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| xeon-2640v4 | broadwell | main, gpu, debug | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| xeon-4116 | skylake_avx512 | main | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| xeon-6130 | skylake_avx512 | gpu | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| epyc-7542 | zen2 | epyc-64 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| epyc-7513 | zen3 | epyc-64, gpu, largemem | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| epyc-7282 | zen2 | gpu | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
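
For example, when compiling your own code you can target these microarchitectures with GCC's -march option. This is a minimal sketch, where my_program.c is a placeholder source file:

    # Build for the oldest microarchitecture you plan to run on,
    # so the binary runs on every node type listed above
    gcc -O3 -march=ivybridge -o my_program my_program.c

    # Or, when compiling on the compute node where the job will run,
    # optimize for that node's own CPU
    gcc -O3 -march=native -o my_program my_program.c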

Use the lscpu command while logged in to a compute node to list all available CPU flags.
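
For example, a quick way to check whether a node supports a particular extension, using only standard tools:

    # Full flag list for the current node
    lscpu | grep -i '^flags'

    # Check for AVX-family extensions specifically
    lscpu | grep -o -i 'avx[0-9a-z_]*' | sort -u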

GPU specifications in the GPU partition

There are four kinds of GPUs in the GPU partition: A100, A40, V100, and P100. The following is a summary table of their specifications:

| GPU model | Architecture | Memory | Memory bandwidth | Base clock speed | CUDA cores | Tensor cores | Single precision performance (FP32) | Double precision performance (FP64) |
|-----------|--------------|--------|------------------|------------------|------------|--------------|--------------------------------------|--------------------------------------|
| A100 | ampere | 40 GB | 1.6 TB/s | 765 MHz | 6912 | 432 | 19.5 TFLOPS | 9.7 TFLOPS |
| A40 | ampere | 48 GB | 696 GB/s | 1305 MHz | 10752 | 336 | 37.4 TFLOPS | 584.6 GFLOPS |
| V100 | volta | 32 GB | 900 GB/s | 1230 MHz | 5120 | 640 | 14 TFLOPS | 7 TFLOPS |
| P100 | pascal | 16 GB | 732 GB/s | 1189 MHz | 3584 | n/a | 9.3 TFLOPS | 4.7 TFLOPS |
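
To target a specific GPU model, you can typically use Slurm's --gres option. The following is a sketch only; the GRES type names (a100, a40, v100, p100) are assumptions, so confirm the exact names with the nodeinfo command before using them:

    # Request one GPU of a specific model in the gpu partition
    # (the gres type name is assumed; check nodeinfo for the exact spelling)
    salloc --partition=gpu --gres=gpu:a100:1 --time=01:00:00

    # Once on the node, confirm the GPU model and its memory
    nvidia-smi --query-gpu=name,memory.total --format=csv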