The Research Computing Office currently provides access to two high-performance computing (HPC) clusters. Simply put, an HPC cluster is a set of computers (nodes) that work together to solve computational problems. Clusters are typically used for large-memory jobs, jobs that need the combined computational power of multiple nodes, or large numbers of independent jobs that can run in parallel.
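Heavy workloads on either cluster are typically expressed with a parallel programming framework such as MPI and launched through the cluster's job scheduler. As a rough illustration only, the sketch below distributes a simple summation across the processes of a multi-node job using mpi4py; mpi4py and an MPI runtime are assumptions here, not packages confirmed by this page.

    # Minimal multi-node parallel job sketch using mpi4py (assumed to be available).
    # Each MPI process (rank) computes a partial sum; rank 0 combines the results.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID within the job
    size = comm.Get_size()   # total number of processes across the allocated nodes

    # Stride the work across ranks so every process handles a disjoint slice.
    local_sum = sum(range(rank, 1_000_000, size))
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"{size} processes computed total = {total}")

A script like this would be submitted through the cluster's scheduler (described for each cluster below) rather than run interactively on a login node.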


People
Dr. Dureid El Moghraby, Acting Director of Research Computing
Vinisha Pillai, Senior System Administrator (UNIX)
Sanyo John Illikkal, HPC Senior System Administrator
Dr. Sergio Martinez, Scientific Computing Support and Outreach Senior Specialist

Khalifa University HPC Cluster
  • Master node: For resource scheduling using IBM Platform HPC software.
  • Submit nodes: 4 submit nodes for command-line job submission from the end user's computer.
  • Compute nodes: 80 compute nodes with a total of 1,920 processor cores (a quick arithmetic check of these totals appears after this list); of these 80 nodes:
    • 64 nodes are based on the 12-core Intel Xeon E5-2690 v3 (Haswell) processor and achieved 48.8 teraflops (a standard measure of computational throughput) using the Linpack benchmark.
    • 16 nodes are based on the 12-core Intel Xeon E5-2697 v2 (Ivy Bridge) processor and achieved 7.7 teraflops using the Linpack benchmark.
  • GPU nodes: 4 GPU (graphics processing unit) nodes with 2 GPU cards per node, for a total of 8 GPUs and 19,968 CUDA cores. The 8 GPU cards are NVIDIA Tesla K20m and achieved 8.1 teraflops.
  • Interconnectivity: All nodes are interconnected over an InfiniBand FDR network (56 Gbps) in a fully non-blocking architecture.
  • Data Storage: The cluster is connected to the KU central data storage for home directories, application directories, scratch space, and research data.
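The totals listed for this cluster can be sanity-checked with a few lines of arithmetic. The sketch below assumes dual-socket compute nodes and 2,496 CUDA cores per K20m card; neither figure is stated explicitly above, so treat both as assumptions.

    # Rough check of the Khalifa University HPC Cluster totals.
    # Assumptions: dual-socket compute nodes; 2,496 CUDA cores per K20m card.
    haswell_cores   = 64 * 2 * 12   # 64 nodes x 2 sockets x 12-core E5-2690 v3
    ivybridge_cores = 16 * 2 * 12   # 16 nodes x 2 sockets x 12-core E5-2697 v2
    print(haswell_cores + ivybridge_cores)   # 1920 -> the 1,920 cores listed above

    cuda_cores = 4 * 2 * 2496       # 4 GPU nodes x 2 K20m cards x 2,496 CUDA cores
    print(cuda_cores)               # 19968 -> the 19,968 CUDA cores listed above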
SAN Campus HPC Cluster
  • Master node: For resource scheduling using Bright Cluster Manager with the PBS Pro job scheduler.
  • Login nodes: 2 login nodes for command-line job submission from the end user's computer.
  • Compute nodes: 73 compute nodes with a total of 1,004 processor cores; of these 73 nodes:
    • 41 nodes are based on the 8-core Intel Xeon E5-2630 v3 (Haswell) processor and achieved 19 teraflops using the Linpack benchmark.
    • 23 nodes are based on the 6-core Intel Xeon E5-2630 v2 (Ivy Bridge) processor and achieved 3.97 teraflops using the Linpack benchmark.
    • 9 nodes are based on the 4-core Intel Xeon E5530 (Nehalem) processor and achieved 0.59 teraflops using the Linpack benchmark.
  • GPU nodes: 6 GPU nodes with 4 GPU cards per node, for a total of 24 GPUs and 119,808 CUDA cores (see the GPU sketch after this list). The 24 GPU cards are NVIDIA Tesla K80 and achieved 3.17 teraflops.
  • Interconnectivity: All nodes are interconnected over an InfiniBand FDR network (56 Gbps) in a fully non-blocking architecture.
  • Data Storage: The SAN Campus HPC cluster has isolated direct-attached storage for home directories, application directories, and scratch space.
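For work that targets the GPU nodes of either cluster, the CUDA cores listed above are what a GPU kernel's threads run on. The sketch below uses Numba's CUDA support purely as an illustration; Numba and a CUDA toolkit on the GPU nodes are assumptions, not software confirmed by this page.

    # Minimal GPU kernel sketch; Numba is an assumption, not a listed cluster package.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale(x, factor, out):
        # Each GPU thread handles one array element.
        i = cuda.grid(1)
        if i < x.size:
            out[i] = x[i] * factor

    x = np.arange(1_000_000, dtype=np.float32)
    out = np.zeros_like(x)

    threads_per_block = 256
    blocks = (x.size + threads_per_block - 1) // threads_per_block
    scale[blocks, threads_per_block](x, np.float32(2.0), out)   # executes on one GPU card

    assert np.allclose(out, 2.0 * x)

In practice such a job would be submitted to a GPU queue through the scheduler so that it lands on one of the GPU nodes described above.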