The current HPC cluster is a traditional batch-processing cluster with high-speed interconnects and a shared filesystem. Short usage sketches follow the hardware inventory below.
The Lipscomb Compute Cluster (LCC) consists of:

- 2 admin nodes

| Intel Processor Number | Processor Class | Cores per Node | Nodes in Cluster | Memory per Node (GB) | Network | Node Names |
| --- | --- | --- | --- | --- | --- | --- |
| E5-2670 | Sandy Bridge | 16 | 2 | 64 | InfiniBand FDR (56 Gbps) | mothership[1-2] |
- 6 login nodes

| Intel Processor Number | Processor Class | Cores per Node | Nodes in Cluster | Memory per Node (GB) | Network | Node Names |
| --- | --- | --- | --- | --- | --- | --- |
| E5-2670 | Sandy Bridge | 16 | 6 | 64 | InfiniBand FDR (56 Gbps) | login[001-006] |
- 297 legacy compute nodes (migrated from DLX2/DLX3)

| Node Type | Intel Processor Number | Processor Class | Cores per Node | Nodes in Cluster | Total Cores | Memory per Node (GB) | Network | Node Names |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Migrated from DLX2 regular nodes | E5-2670 | Sandy Bridge | 16 | 248 | 4,096 | 64 | InfiniBand FDR (56 Gbps) | cnode[001-200,209-232,237-256] |
| Migrated from DLX2 FAT nodes | E5-4640 | Sandy Bridge | 32 | 8 | 256 | 512 | InfiniBand FDR (56 Gbps) | fnode[001-008] |
| Migrated from DLX2 GPU nodes | E5-2670 | Sandy Bridge | 16 | 4 | 64 | 64 | InfiniBand FDR (56 Gbps) | gnode[017-020] |
| Migrated from DLX2 JUMBO node | E7-4820 | Westmere | 32 | 1 | 32 | 3,000 | InfiniBand FDR (56 Gbps) | fnode000 |
| Migrated from DLX3 24-core nodes | E5-2670 | Haswell | 24 | 20 | 480 | 16 | InfiniBand EDR (100 Gbps) | haswell[001-020] |
- 198 new compute nodes

| Node Type | Intel Processor Number | Processor Class | Cores per Node | Nodes in Cluster | Total Cores in Cluster | Memory per Node (GB) | GPU Type | Total GPUs | GPU RAM | Network | Node Names |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Skylake nodes | 6130 | Skylake | 32 | 56 | 1,792 | 192 | | | | InfiniBand EDR (100 Gbps) | skylake[001-056] |
| Skylake with NVIDIA P100 cards | 6130 | Skylake | 32 | 2 | 64 | 192 | P100 | 8 | 16 GB | InfiniBand EDR (100 Gbps) | gpdnode[001-002] |
| Skylake with NVIDIA P100 cards | 6130 | Skylake | 32 | 10 | 320 | 192 | P100 | 40 | 12 GB | InfiniBand EDR (100 Gbps) | gphnode[001-010] |
| Skylake with NVIDIA V100 cards | 6130 | Skylake | 32 | 6 | 192 | 192 | V100 | 24 | 32 GB | InfiniBand EDR (100 Gbps) | gvnode[001-006] |
| Cascade nodes | 6252 | Cascade | 48 | 52 | 2,496 | 192 | | | | InfiniBand EDR/2 (50 Gbps) | cascade[001-052] |
| Cascade nodes | 6252 | Cascade | 48 | 60 | 2,880 | 192 | | | | InfiniBand EDR (100 Gbps) | cascadeb[001-060] |
| Cascade with NVIDIA V100 cards | 6230 | Cascade | 40 | 12 | 480 | 182 | V100 | 48 | 32 GB | InfiniBand EDR (100 Gbps) | gvnodeb[001-012] |
- 1 data transfer node (DTN)

| Node Type | Intel Processor Number | Processor Class | Cores per Node | Nodes in Cluster | Total Cores in Cluster | Memory per Node (GB) | Network | Node Names |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DTN node | 6152 | Cascade | 44 | 1 | 44 | 192 | Ethernet 40 Gbps (external), InfiniBand EDR 100 Gbps (internal) | dtn |
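Work on a batch cluster like this is normally submitted through the scheduler with a resource request that fits one of the node types above. The sketch below assumes a Slurm scheduler; the partition names and the executable are placeholders, not values taken from this page (check `sinfo` and local documentation for the real ones).

```bash
#!/bin/bash
# Minimal Slurm batch script sketch. Partition names and the executable are
# placeholders (assumptions), not values taken from this page.
#SBATCH --job-name=example
#SBATCH --partition=normal          # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48        # fits a 48-core Cascade node from the table above
#SBATCH --mem=180G                  # stay within the 192 GB per-node memory
#SBATCH --time=01:00:00

# For a GPU node (e.g. one V100 on a gvnode), the request might instead include:
#   #SBATCH --partition=gpu         # placeholder partition name
#   #SBATCH --gres=gpu:1

srun ./my_program                   # placeholder executable
```

Such a script would be submitted with `sbatch job.sh`, and its status checked with `squeue -u $USER`.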
Storage is provided by two Lenovo GPFS (DSS-G) parallel file systems:

- Parallel file system 1: 1.3 PB usable (1.9 PB raw)
- Parallel file system 2: 1.6 PB usable (2.2 PB raw)
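Bulk data is usually staged onto these file systems through the data transfer node rather than the login nodes. A minimal sketch, assuming standard rsync over SSH; only the short host name `dtn` comes from the table above, while the fully qualified hostname, username, and destination path are placeholders:

```bash
# Copy a local dataset to cluster storage through the data transfer node.
# Hostname, username, and destination path are placeholders -- substitute the
# real DTN address and your own directory on the GPFS file system.
rsync -avP ./my_dataset/ username@dtn.example.edu:/path/to/project/my_dataset/
```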