High Performance Computing (HPC)
For many, the thought of supercomputers conjures up images of entire rooms or buildings filled with large machines. We are well aware of the progression to micro-computers and the miniaturization of hardware – the supercomputers of yesterday now fit into our pockets. While devices and computer hardware have grown smaller and more powerful, we have entered a new era of evolving technologies that demand the power of a modern kind of supercomputer: the high-performance computer (HPC).
Think of modern trends in technology: Big Data Analytics, Machine Learning, Neural Networks, Blockchain, and more. Yet again, we have found ourselves requiring more resources than are readily available in powerful workstations and modern network servers. Enter HPC environments…
HPCs are specialized computers designed to perform resource-intensive workloads, and they come in many forms. Each is a union of computer components combined to create a high-performance system able to process specific types of workloads.
A common design is a set of network servers interconnected by a low-latency, high-performance network, receiving instructions and jobs from a central component called a scheduler, which load-balances and forwards each job to the most appropriate (or least busy) server capable of completing it. This architecture is called an HPC cluster, and its configuration can change from project to project, depending on each project's requirements. This distributed architecture makes it feasible to run heavily parallelized applications: executed serially (one instruction at a time) they would take far longer, whereas each server (or even each CPU core within a single server) can execute a separate component of the job, with the partial results later joined together to form the final solution.
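The split-and-join pattern described above applies whether the workers are servers in a cluster or cores on a single accelerator. As a minimal sketch (hypothetical example code, assuming the CUDA toolkit), the following program splits a large summation across many parallel workers, each producing a partial result that the host later joins into the final answer:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each block computes a partial sum of its slice of the input;
// the host then joins the partial sums into the final answer.
__global__ void partialSum(const float *in, float *blockSums, int n) {
    __shared__ float cache[256];
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    cache[threadIdx.x] = (tid < n) ? in[tid] : 0.0f;
    __syncthreads();
    // Tree reduction within the block.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        blockSums[blockIdx.x] = cache[0];
}

int main() {
    const int n = 1 << 20;                 // about one million elements
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;

    float *in, *blockSums;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&blockSums, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    partialSum<<<blocks, threads>>>(in, blockSums, n);  // split the work
    cudaDeviceSynchronize();

    float total = 0.0f;                                 // join the results
    for (int i = 0; i < blocks; ++i) total += blockSums[i];
    printf("sum = %.0f\n", total);

    cudaFree(in);
    cudaFree(blockSums);
    return 0;
}
```

In an HPC cluster the same idea operates at a larger scale: the scheduler distributes the pieces to servers instead of a kernel launch distributing them to cores, but the serial "join" step at the end is the same.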
Have an HPC based idea you would like to bring to life?
Here at RRC’s ACE Project Space, we have an HPC environment with a Graphical Processing Unit (GPU) server, which is available to students, companies, and researchers to explore their ideas. Our HPC uses a modern accelerator component (an NVIDIA Tesla P100), installed on a powerful network server, to bring together thousands of available cores to process heavy computational workloads. GPUs are not limited to graphical applications and image and video rendering, although they were designed to do that very well. While our GPU server has 12 general-purpose embedded cores, its capability is extended by the NVIDIA Tesla P100 GPU Accelerator, which contains many specialized GPU cores. By using numerous cores in parallel, workloads can be sped up significantly by splitting the work among the GPU cores.
With multiple cores available, parallel programming models can be leveraged. This is a powerful tool for workloads that would take months to execute on a conventional CPU architecture, such as deep machine learning and cryptocurrency mining. To enable programmers to get the most from the architecture, NVIDIA (the developer of the Tesla accelerator) created CUDA, a parallel computing platform and programming model for code that directly manages and interacts with the GPU architecture and its components.
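To give a feel for the programming model, here is a hedged sketch (hypothetical example code, not our production setup) of SAXPY (`y = a*x + y`) in CUDA. It shows the two things the text mentions: directly managing GPU memory, and launching one thread per array element so the cores work in parallel:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// SAXPY kernel: each GPU thread updates exactly one array element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 16;
    size_t bytes = n * sizeof(float);

    // Host buffers.
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Explicit device memory management, a core part of the CUDA model.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f\n", hy[0]);   // 2*1 + 2 = 4

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

On an accelerator like the Tesla P100, the thousands of threads launched here execute across the GPU cores simultaneously rather than one element at a time.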
Have a project proposal using our HPCs? Fill out our application form:
Below is a diagram of our current HPC Environment, as of early 2019. We plan on making some upgrades to our system, so the environment is subject to improvement.