Viper: the University of Hull’s High Performance Computer (HPC)
Viper, the University of Hull’s High Performance Computing facility, is used by research staff and students across many disciplines, and is a valuable tool for anyone whose research has a computational element.
Within the university sector it is one of the leading High Performance Computing centres and among the highest rated in the north of England. It represents a significant research investment, made to meet the ever-growing demands of the University’s research community.
Present research includes studies of the Galaxy, vibrational effects of molecules, semiconductor effects and computational linguistics.
What is HPC?
HPC refers to the use of many computers working in parallel to perform large and complex tasks. A large task is split into smaller units, which are shared across the computers in the cluster; combined with a fast interconnect between those computers, this allows such tasks to be processed efficiently and gives a very high throughput.
Typical HPC Workflows
There are many uses of Viper, depending on whether you simply want to get through a lot of data (capacity computing) or you need a lot of power to complete a single large job in a reasonable amount of time (capability computing).
Viper can be used in a number of different ways to perform computational research analysis. Some of these are listed below:
Parallel (Single node)
This is where a job runs on a single node (i.e. up to 28 cores), typically using OpenMP, MPI (e.g. Open MPI) or one of the installed software packages.
Parallel (Multiple nodes)
This is where a job needs more cores than a single node provides (presently 28). Such jobs typically use MPI (e.g. Open MPI) to communicate between nodes, possibly combined with OpenMP within each node, or one of the installed software packages.
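Multi-node jobs are normally submitted through a batch scheduler. The details of Viper's scheduler, module names and the program name are not given here, so the script below is only an illustrative sketch assuming a SLURM-style system; the actual directives and names on Viper may differ.

```shell
#!/bin/bash
#SBATCH --job-name=mpi_example
#SBATCH --nodes=4              # four compute nodes
#SBATCH --ntasks-per-node=28   # one MPI rank per core
#SBATCH --time=01:00:00

# load an MPI implementation (module name is illustrative)
module load openmpi

# launch 4 x 28 = 112 MPI ranks across the nodes
mpirun ./my_mpi_program
```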
High memory
This is where a job requires a large amount of physical memory (currently up to 1 TB per node) to run. This would typically be a very large data set, or processing that requires a large memory model to hold intermediate data.
GPU
Viper has 4 GPU nodes which can be used for GPU-accelerated calculations and high-end graphics work. NVIDIA CUDA, OpenCL and OpenACC are examples of programming models for these nodes. Examples of installed software are ParaView and VirtualGL.
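As an illustration of the CUDA programming model mentioned above, here is a generic vector-addition sketch (not code from Viper's installed software; it needs an Nvidia GPU and the CUDA toolkit, compiled with `nvcc`):

```cuda
#include <cstdio>

// Each GPU thread adds one pair of elements: the loop over the data
// is replaced by a grid of lightweight threads.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // unified memory keeps the example short: accessible from CPU and GPU
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // enough blocks to cover n
    add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();                   // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);             // 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```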
Visualisation
Viper has 2 visualisation nodes which can be used for interactive visualisation and viewing 3D models. An example of installed software is Avizo (https://www.fei.com/software/amira-avizo/).
Viper is based on the Linux operating system and comprises approximately 5,500 processing cores, with the following specialised areas:
– 180 compute nodes, each with 2x 14-core Broadwell E5-2680v4 processors (2.4–3.3 GHz) and 128 GB DDR4 RAM
– 4 high-memory nodes, each with 4x 10-core Haswell E5-4620v3 processors (2.0 GHz) and 1 TB DDR4 RAM
– 4 GPU nodes, each identical to a compute node with the addition of an Nvidia A40 GPU card
– 2 visualisation nodes, each with 2x Nvidia GTX 980 Ti
– Intel Omni-Path interconnect (100 Gb/s node-switch and switch-switch)
– 500 TB parallel file system (BeeGFS)
– 4 racks with dedicated cooling and hot-aisle containment
– Additional rack for storage and management components
– Dedicated, high-efficiency chiller on AS3 roof for cooling
– UPS and generator power failover