For questions, please create a ticket at https://support.ccs.uky.edu/servicedesk/customer/portal



The current HPC cluster is a traditional batch-processing cluster with high-speed interconnects and a shared filesystem.
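Because work runs through a batch system rather than interactively, jobs are typically described in a script and submitted from a login node. A minimal sketch, assuming a SLURM scheduler (the scheduler in use is not stated on this page; confirm with CCS support) and one of the 16-core nodes listed below:

```shell
# Hedged sketch: assumes SLURM is the batch scheduler (not confirmed by
# this page); partition names and core counts must match the real cluster.
cat > hello_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello          # job name shown in the queue
#SBATCH --nodes=1                 # request one compute node
#SBATCH --ntasks-per-node=16      # e.g. a 16-core Sandy Bridge cnode
#SBATCH --time=00:05:00           # wall-clock limit
echo "Running on $(hostname)"
EOF
# Submit from a login node with:  sbatch hello_job.sh
```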

The Lipscomb Compute Cluster (LCC) consists of

  • 2 Admin nodes
Intel Processor Number | Processor Class | Cores per node | Nodes in Cluster | Memory per node (GB) | Network | Node Names
E5-2670 | Sandy Bridge | 16 | 2 | 64 | InfiniBand FDR (56 Gbps) | mothership[1-2]
  • 6 login nodes
Intel Processor Number | Processor Class | Cores per node | Nodes in Cluster | Memory per node (GB) | Network | Node Names
E5-2670 | Sandy Bridge | 16 | 6 | 64 | InfiniBand FDR (56 Gbps) | login[001-006]
  • 297 legacy compute nodes (migrated from DLX2/DLX3)
Node Type | Intel Processor Number | Processor Class | Cores per node | Nodes in Cluster | Total Cores | Memory per node (GB) | Network | Node Names
Migrated from DLX2, Regular Nodes | E5-2670 | Sandy Bridge | 16 | 248 | 4,096 | 64 | InfiniBand FDR (56 Gbps) | cnode[001-200,209-232,237-256]
Migrated from DLX2, FAT Nodes | E5-4640 | Sandy Bridge | 32 | 8 | 256 | 512 | InfiniBand FDR (56 Gbps) | fnode[001-008]
Migrated from DLX2, GPU Nodes | E5-2670 | Sandy Bridge | 16 | 4 | 64 | 64 | InfiniBand FDR (56 Gbps) | gnode[017-020]
Migrated from DLX2, JUMBO Nodes | E7-4820 | Westmere | 32 | 1 | 32 | 3,000 | InfiniBand FDR (56 Gbps) | fnode000
Migrated from DLX3, Core 24 Nodes | E5-2670 | Haswell | 24 | 20 | 480 | 16 | InfiniBand EDR (100 Gbps) | haswell[001-020]
  • 198 new compute nodes
Node Type | Intel Processor Number | Processor Class | Cores per node | Nodes in Cluster | Total Cores in Cluster | Memory per node (GB) | GPU Type | Total GPUs | GPU RAM | Network | Node Names
Skylake Nodes | 6130 | Skylake | 32 | 56 | 1,792 | 192 | - | - | - | InfiniBand EDR (100 Gbps) | skylake[001-056]
Skylake with NVIDIA P100 cards | 6130 | Skylake | 32 | 2 | 64 | 192 | P100 | 8 | 16 GB | InfiniBand EDR (100 Gbps) | gpdnode[001-002]
Skylake with NVIDIA P100 cards | 6130 | Skylake | 32 | 10 | 320 | 192 | P100 | 40 | 12 GB | InfiniBand EDR (100 Gbps) | gphnode[001-010]
Skylake with NVIDIA V100 cards | 6130 | Skylake | 32 | 6 | 192 | 192 | V100 | 24 | 32 GB | InfiniBand EDR (100 Gbps) | gvnode[001-006]
Cascade Nodes | 6252 | Cascade | 48 | 52 | 2,496 | 192 | - | - | - | InfiniBand EDR/2 (50 Gbps) | cascade[001-052]
Cascade Nodes | 6252 | Cascade | 48 | 60 | 2,880 | 192 | - | - | - | InfiniBand EDR (100 Gbps) | cascadeb[001-060]
Cascade with NVIDIA V100 cards | 6230 | Cascade | 40 | 12 | 240 | 182 | V100 | 48 | 32 GB | InfiniBand EDR (100 Gbps) | gvnodeb[001-012]
  • 1 Data transfer node
Node Type | Intel Processor Number | Processor Class | Cores per node | Nodes in Cluster | Total Cores in Cluster | Memory per node (GB) | Network | Node Names
DTN Node | 6152 | Cascade | 44 | 1 | 44 | 192 | Ethernet (40 Gbps external), InfiniBand EDR (100 Gbps internal) | dtn
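Large file transfers should go through the data transfer node rather than the login nodes, since it carries the 40 Gbps external link. A hedged sketch, where the hostname and remote path are placeholders (the externally reachable DTN address is not given on this page):

```shell
# Placeholders -- confirm the real DTN hostname and your project path
# with CCS before use.
DTN="user@dtn.uky.edu"               # hypothetical external address
SRC="./inputs/"                      # local data to push
DEST="$DTN:/project/mylab/inputs/"   # hypothetical remote path
# -a preserves permissions/times, -v verbose, -P shows progress and
# makes interrupted transfers resumable.
echo rsync -avP "$SRC" "$DEST"       # remove 'echo' to run for real
```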


Lenovo GPFS (DSS-G) parallel file system 1: 1.3 PB usable (1.9 PB raw)

Lenovo GPFS (DSS-G) parallel file system 2: 1.6 PB usable (2.2 PB raw)
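The GPFS filesystems are shared across the cluster, but their mount points are not listed on this page; one way to find them and check free space is to list the mounted filesystems from a login node:

```shell
# The GPFS mount points are not stated on this page; listing all mounts
# with their size and free space (GNU coreutils `df`) will show them.
df -h --output=target,fstype,size,avail | head -n 10
```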