For questions, please create a ticket at https://support.ccs.uky.edu/servicedesk/customer/portal



The current HPC cluster is a traditional batch-processing cluster, with high-speed interconnects and a shared filesystem.

The Lipscomb Compute Cluster (LCC) consists of

  • 2 Admin nodes (Motherships)

    | Intel Processor Number | Processor Class | Cores per node | Nodes In Cluster | Memory per node (GB) | Network | Node Names |
    |---|---|---|---|---|---|---|
    | Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz | Haswell | 24 | 2 | 128 | Infiniband FDR (56Gbps) | mothership[1-2] |
  • 6 Login nodes

    | Intel Processor Number | Processor Class | Cores per node | Nodes In Cluster | Memory per node (GB) | Network | Node Names |
    |---|---|---|---|---|---|---|
    | E5-2670 | Haswell | 24 | 6 | 128 | Infiniband FDR (56Gbps) | login[001-006] |
  • 198 Compute nodes

    | Node Type | Intel Processor Number | Processor Class | Cores per node | Nodes In Cluster | Total Cores in Cluster | Memory per node (GB) | GPU Type | Total GPUs | GPU RAM | Network | Node Names |
    |---|---|---|---|---|---|---|---|---|---|---|---|
    | Skylake Nodes | 6130 | Skylake | 32 | 56 | 1,792 | 192 | – | – | – | Infiniband EDR (100Gbps) | skylake[001-056] |
    | Skylake with NVIDIA P100 cards | 6130 | Skylake | 32 | 2 | 64 | 192 | P100 | 8 | 16GB | Infiniband EDR (100Gbps) | gpdnode[001-002] |
    | Skylake with NVIDIA P100 cards | 6130 | Skylake | 32 | 10 | 320 | 192 | P100 | 40 | 12GB | Infiniband EDR (100Gbps) | gphnode[001-010] |
    | Skylake with NVIDIA V100 cards | 6130 | Skylake | 32 | 6 | 192 | 192 | V100 | 24 | 32GB | Infiniband EDR (100Gbps) | gvnode[001-006] |
    | Cascade Nodes | 6252 | Cascade | 48 | 52 | 2,496 | 192 | – | – | – | Infiniband EDR/2 (50Gbps) | cascade[001-052] |
    | Cascade Nodes | 6252 | Cascade | 48 | 60 | 2,880 | 192 | – | – | – | Infiniband EDR (100Gbps) | cascadeb[001-060] |
    | Cascade with NVIDIA V100 cards | 6230 | Cascade | 40 | 12 | 480 | 182 | V100 | 48 | 32GB | Infiniband EDR (100Gbps) | gvnodeb[001-012] |
  • 1 Data transfer node

    | Node Type | Intel Processor Number | Processor Class | Cores per node | Nodes In Cluster | Total Cores in Cluster | Memory per node (GB) | Network | Node Names |
    |---|---|---|---|---|---|---|---|---|
    | DTN Node | 6152 | Skylake | 44 | 1 | 44 | 192 | Ethernet (40Gbps external), Infiniband EDR (100Gbps internal) | dtn |
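The data transfer node is the intended path for bulk data movement in and out of the cluster; its 40Gbps external link is provisioned for transfers, unlike the login nodes. A command sketch, assuming `rsync` over SSH — the fully qualified hostname, username, and paths below are placeholders, not values from this page:

```shell
# Pull a results directory from LCC through the data transfer node.
# Hostname and paths are hypothetical placeholders -- substitute real ones
# from your account information.
rsync -avP user@dtn.example.uky.edu:/path/to/results/ ./results/
```

Here `-a` preserves permissions and timestamps, `-v` lists files as they transfer, and `-P` shows progress and lets interrupted transfers resume.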


  • Lenovo GPFS (DSS-G) parallel file system 1: 1.3PB usable (1.9PB raw)
  • Lenovo GPFS (DSS-G) parallel file system 2: 1.6PB usable (2.2PB raw)
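On a batch-processing cluster like this, work is submitted to a scheduler from the login nodes rather than run on them directly. Assuming Slurm (the scheduler is not named on this page), a minimal script requesting one GPU on a V100 node might look like the following — the partition name, module name, and executable are placeholders, not real values from this cluster:

```shell
#!/bin/bash
# Hypothetical Slurm batch script: partition, module, and program names
# are placeholders -- consult the cluster's job-submission documentation.
#SBATCH --job-name=gpu-example
#SBATCH --partition=gpu           # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --gres=gpu:1              # one GPU (e.g. a V100 on gvnode/gvnodeb)
#SBATCH --mem=32G
#SBATCH --time=01:00:00

module load cuda                  # placeholder module name
nvidia-smi                        # show the allocated GPU
srun ./my_gpu_program             # placeholder executable
```

Submitted with `sbatch job.sh`; job status can then be checked with `squeue -u $USER`.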