LEAP - High-Performance Computing Cluster

As its name suggests, the LEAP (Learning, Exploration, Analysis, and Processing) next-generation High-Performance Computing (HPC) Cluster represents a significant advancement in Texas State University's computing capabilities, further enabling the varied pursuits of a broad and growing research community. With more than three times the node count of its predecessor, STAR (123 nodes to STAR's 36), and nearly six times the number of processor cores (3,532 to STAR's 616), LEAP offers a compute capacity 14 times that of STAR: 135 TFlops (135 trillion floating-point operations per second). Combined with LEAP's introduction of a parallel file system and a high-speed interconnect, this compute resource represents a true LEAP in computing capabilities for Texas State University researchers, turning research endeavors into realities.
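The 135 TFlops figure is consistent with a simple back-of-the-envelope peak estimate. The Python sketch below is illustrative arithmetic only, assuming 16 double-precision floating-point operations per core per cycle (the usual AVX2 fused multiply-add figure for Broadwell-class Xeons); it is not a measured benchmark.

    # Rough check of LEAP's quoted 135 TFlops theoretical peak (illustrative only).
    cores = 3532          # total CPU cores quoted for LEAP
    clock_ghz = 2.4       # base clock of the Broadwell Xeon parts
    flops_per_cycle = 16  # assumed: 4-wide double-precision FMA on two vector units

    peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
    print(f"Theoretical peak: {peak_tflops:.1f} TFlops")  # ~135.6 TFlops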


System Details: The LEAP Dell PowerEdge C6320 Cluster is configured with 120 compute nodes, each with 28 CPU cores via two 14-core 2.4 GHz Intel Xeon E5-2680 v4 (Broadwell) processors. With 128 GB of memory and 400 GB of SSD storage per node, the compute nodes provide an aggregate of 15 TB of memory and 48 TB of local storage.

Additionally, LEAP features two large-memory (1.5 TB) nodes, each with 72 CPU cores via four 18-core 2.4 GHz Intel Xeon E7-8867 v4 (Broadwell) processors. Compute nodes have access to a 1.5 PB GPFS parallel file system. An FDR InfiniBand high-speed network fabric interconnects the nodes with a point-to-point bandwidth of 40 Gb/s (unidirectional).
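As a quick sanity check on the aggregate figures quoted above, the short Python sketch below reproduces the memory and local-storage totals from the per-node numbers (illustrative only; the exact totals depend on whether binary or decimal unit conventions are used).

    # Aggregate memory and local SSD storage across the 120 compute nodes.
    compute_nodes = 120
    mem_per_node_gb = 128   # GB of memory per compute node
    ssd_per_node_gb = 400   # GB of local SSD per compute node

    total_mem_tb = compute_nodes * mem_per_node_gb / 1024   # binary units: ~15 TB
    total_ssd_tb = compute_nodes * ssd_per_node_gb / 1000   # decimal units: 48 TB
    print(f"Aggregate memory: {total_mem_tb:.0f} TB, local SSD: {total_ssd_tb:.0f} TB")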

Fueling Research and Discovery: LEAP is designed to provide cyberinfrastructure covering a diverse application base with complex workflows. The system is architected for capacity computing, optimized for quick turnaround on small- to modest-scale jobs while still providing ample resources for jobs that scale. The local SSDs on each compute node benefit applications that exhibit random-access data patterns or require fast access to significant amounts of node-local scratch space. The large memory per node (128 GB) makes LEAP well suited to shared-memory applications and MPI codes with large per-process memory footprints. The AVX2-enabled Intel Broadwell processors provide excellent performance for applications with vectorizable loops or that make heavy use of optimized math libraries. Multi-node jobs and MPI jobs sourcing and/or generating large datasets benefit significantly from the FDR InfiniBand high-speed interconnect and the GPFS parallel file system.
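To make the workload description above concrete, the sketch below shows the general shape of a job that exercises both an optimized math library (NumPy's BLAS-backed matrix multiply, which can take advantage of the AVX2 vector units) and the high-speed interconnect (an MPI reduction across ranks). It assumes mpi4py and NumPy are available in the user's environment, which this section does not specify; scheduler, module, and mpirun/srun launch details are likewise omitted.

    # Hypothetical multi-rank example: each rank performs a BLAS-backed matrix
    # multiply locally, then the per-rank checksums are combined with MPI.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    c = a @ b                    # vectorized math-library call on each rank

    local_sum = c.sum()
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)  # communication over the fabric

    if rank == 0:
        print(f"{size} ranks, combined checksum: {total:.3e}")

In practice a job like this would be launched across multiple nodes through the cluster's batch scheduler, with per-rank memory sized to fit within the 128 GB available on each compute node.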