Cray HPC

What is HPC?

Climate change, mapping the human genome, unravelling the mysteries of black holes and structural engineering are all scientific areas that have something in common: they all require enormous amounts of computing power. Commercial industries also consume a lot of compute power for things such as financial market analysis, oil & gas exploration, movie special effects, artificial intelligence and machine learning.

Introduction

Those tasks are too much for your average home computer and even a high-end server. The calculations required when operating at scale far exceed their abilities. High-Performance Computing (HPC) is where this computational power can come from.

In this post, I describe what HPC is, why it is used, how it works, and how it is applied in business and scientific research.

What is HPC?

High-Performance Computing (HPC) is a relative measure for systems that process data and perform computations at very high speeds. HPC systems are constantly evolving and are typically defined as those appearing on the TOP500 list; as technology improves, the lowest-performing systems fall off the list.

An interesting trend: the top HPC systems of 15 years ago perform roughly on par with home devices today. In other words, every year the home consumer can buy devices as powerful as the most powerful machines of 15 years earlier. By the same logic, the top HPC systems of today, widely known as supercomputers, should equal the power of the devices we carry around in 15 years' time.

IBM Blue Gene/Q Mira Supercomputer

HPC is typically not a single machine but a collection of many machines networked together to perform as one. The combined power can deliver performance many times that of a single computer or server. For comparison, an average laptop can perform around 4 GFLOPS (4 billion calculations per second), while the top HPC systems, known as supercomputers, can perform over 95,000,000 GFLOPS (95 petaflops).

Supercomputers can cost anywhere from $40 million to $1 billion to build, with a median of around $6 per GFLOP. Compared to a typical high-end mobile phone at around $0.20 per GFLOP, that might seem pretty steep. However, there is much more going on in a supercomputer than raw GFLOPS.
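To put those figures together, here is a quick back-of-the-envelope check in Python. The numbers are the illustrative ones quoted above, not benchmarks:

    # Rough arithmetic on the figures quoted above (illustrative, not a benchmark).
    laptop_gflops = 4                  # ~4 billion calculations per second
    super_gflops = 95_000_000          # ~95 petaflops

    # A workload that keeps the supercomputer busy for one hour would keep
    # the laptop busy for millennia.
    workload = super_gflops * 3600                     # GFLOP in one supercomputer-hour
    laptop_years = workload / laptop_gflops / 3600 / 24 / 365
    print(f"Laptop time: {laptop_years:,.0f} years")   # ~2,711 years

    # At $6 per GFLOP of capacity, a 95-petaflop machine lands around $570 million,
    # comfortably inside the $40M-$1B range above.
    print(f"Build cost: ${super_gflops * 6 / 1e6:,.0f} million")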

Why is HPC important?

“Today, to Out-Compute is to Out-Compete” – the European Technology Platform for HPC (ETP4HPC), describing the importance of HPC from an economic growth point of view.

HPC is used to tackle big problems: the ones that impact our future, our wellbeing and prosperity. Work in progress today spans climate change and weather patterns, and medical research, including the development of treatments for diseases. In industry, HPC is used to develop better-performing products, including new materials and safer, more efficient vehicles. It is what moves conversations from science fiction to reality. In other cases, it creates science fiction, by rendering the CGI and special effects used in countless blockbuster movies!

All areas of science require vast amounts of computing power, which is why HPC is an important element of research. By producing sophisticated scripts, scientists are able to model or simulate real-world conditions. A script can take in variations of the conditions being tested and determine which would lead to the most desirable outcomes: that could be finding the medicine that works best for a specific patient, or how the materials of a building react to an earthquake. Realistic models require lots of data to set up conditions that mimic a real or theoretical scenario.
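As a toy illustration of that kind of parameter sweep, here is a minimal sketch in Python. The model and its parameter names are made up for the example, not taken from any real study:

    # A toy parameter sweep, in the spirit of the simulations described above.
    # The "model" and its parameters are hypothetical, purely for illustration.
    def simulate(stiffness, damping):
        # Stand-in for a real structural model: returns a damage score for a
        # building under a fixed earthquake load (lower is better).
        return abs(1.0 / (stiffness * damping) - 0.5)

    # Try combinations of conditions and keep the most desirable outcome.
    candidates = [(k, c) for k in (1.0, 2.0, 4.0) for c in (0.5, 1.0, 2.0)]
    best = min(candidates, key=lambda p: simulate(*p))
    print("Best (stiffness, damping):", best)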

Trying to compute some of these problems on a normal computer would take months or years. An average computer would crash just trying to open some of these data sets, the data is so large. And that's before we even get to analysing it!

The scientists who use HPC do not want to be concerned with the software, algorithms or hardware that make HPC effective. They want to progress their theories, shape future thinking and get answers to the questions they have. HPC can get them there.

Other achievements, such as the recent imaging of a black hole, would have been impossible without HPC. It is helping to answer some of the big questions, like “where do we come from?” and “what is the universe made of?”

The first image of a black hole – Credit: Event Horizon Telescope Collaboration

How does HPC work?

HPC is not a single machine but a system of multiple machines connected together. A simplistic way to think about this: if one machine takes 100 hours to compute a problem, then 100 machines could, in principle, solve the same problem in 1 hour.

In reality, connecting 100 machines together introduces inefficiencies: the resources have to be coordinated, and some portion of any problem cannot be split up at all, which caps the achievable speedup.
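A common way to model this is Amdahl's law: whatever fraction of the work is inherently serial limits the speedup, no matter how many machines are added. A minimal sketch in Python, assuming a hypothetical 5% serial (coordination) fraction:

    # Amdahl's law: speedup is limited by the serial fraction of the work.
    # The 5% serial fraction is an assumed, illustrative figure.
    def speedup(n_machines, serial_fraction=0.05):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_machines)

    for n in (1, 10, 100, 1000):
        hours = 100 / speedup(n)   # the 100-hour problem from above
        print(f"{n:4d} machines -> ~{hours:5.1f} hours")
    # 100 machines give roughly a 17x speedup (about 6 hours), not 100x.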

In some cases the speed of the computations performed is critical. When dealing with a developing storm or viral outbreak, the actions that can be taken are determined by how quickly information can be processed.

The individual machines in a connected setup are called nodes; this is where processing is performed. Groups of nodes are called clusters, and an HPC system can have many clusters. They are managed by a scheduler, usually a separate machine set up specifically to manage how the nodes are used and to split a large problem into small parts for the nodes to work on. The results from each node are sent to a data store.

Simple diagram of a High-Performance Computing system
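As a concrete (if simplified) example of a problem being split across nodes, here is a sketch using MPI via the mpi4py library. The post does not name a particular programming model, so MPI here is an assumption, though it is the standard one in HPC:

    # Sketch of splitting work across processes with MPI. Run with, e.g.,
    # `mpirun -n 4 python sum_squares.py` (assumes mpi4py is installed).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID (one or more per node)
    size = comm.Get_size()   # total number of processes

    # The root process splits the problem into one chunk per process...
    chunks = [list(range(i, 1000, size)) for i in range(size)] if rank == 0 else None
    work = comm.scatter(chunks, root=0)

    # ...each process computes its part, and the results are combined.
    partial = sum(x * x for x in work)
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print("Sum of squares below 1000:", total)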

The architecture takes into account the capabilities of each part of the system, which comprises three main parts: storage, network and compute. An effective HPC system looks to maximise each of these parts in unison. Hardware, software and communication speeds all play a role in determining how well HPC works.

HPC use cases

The most common users of HPC systems are scientific researchers, engineers and academic institutions.

HPC is also a tool that can help address the Grand Societal Challenges we are facing:

  • Health, demographic change and well-being
  • Food security, sustainable agriculture and forestry, marine and inland water research, and the Bioeconomy
  • Secure, clean and efficient energy
  • Smart, green and integrated transport
  • Climate action, environment, resource efficiency and raw materials

HPC is also used in these industries:

  • Climate Change and Earth Science
      • World Community Grid is one of the most active volunteer computing projects, using computing power contributed by volunteers to solve HPC-like problems. A big part of their work is in climate research.
  • Manufacturing and Heavy Industry
      • LS-DYNA is an expansive finite element analysis (FEA) suite used by engineers for automotive, aerospace, construction, military, product manufacturing, and bioengineering applications.
      • Abaqus Simulia encompasses a range of interoperable products, including Standard (general-purpose FEA), CAE (computer-aided engineering), Explicit (advanced FEA), CFD (computational fluid dynamics) and Multiphysics (multiphysics interactions).
  • Financial Markets

Closing

HPC is a time machine. Not only does it indicate the computing power that will be available to the mass public in the future, but it also allows us to model and simulate conditions that do not yet exist. Time can be accelerated so we can see the impact of our actions, in both medicine and climate. It can tell us what products will break under certain stresses, where markets will tend to go and where to find new sources of energy. We can travel back in time to the beginning of the universe and see stars forming and collapsing. Most of all, it allows us to understand ourselves: where did we come from, and where are we going?

In the next post, I will explain what distributed computing is, how the LiteSolve distributed computing platform provides tools for scientists, application developers and the general public, and how to get involved in shaping the future with HPC.
