Viper is the University of Hull’s first central High Performance Computing (HPC) cluster. The culmination of a journey that started in 2012, Viper came online to users as a pilot service in June 2016. Within the university sector it is one of the leading High Performance Computing centres and the highest rated in the north of the UK.
Built on state-of-the-art technology, Viper is already proving to be a key facility for many researchers, enabling new and novel approaches that would not have been possible before. Researchers across the university used more than 1.25 million core hours across nearly 100,000 jobs in the first month of operation.
HPC clusters consist of a large number of interconnected computers that can be used together for large parallel tasks, or independently, with each CPU core running a different user's task. In the case of Viper, we have 190 compute nodes connected via Intel's latest Omni-Path technology. Among these compute nodes are dedicated high-memory nodes with 1TB of RAM, dedicated GPU nodes with multiple NVIDIA K40 GPU cards, and dedicated visualisation nodes. For the research community at Hull, this means that whatever the computational requirements of their research, we should have the resources to meet their needs. HPC clusters such as Viper can be used to run specifically written code, with a range of compilers and libraries available for those developing in C/C++, Fortran, Java and other languages. Alternatively, common desktop applications such as MATLAB, COMSOL or SAS can also be run, where the large core count can mean running multiple experiments concurrently.
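To give a flavour of how work typically reaches a cluster like this, here is a minimal batch-job sketch, assuming a SLURM-style scheduler. The job name, partition name, module name and program name are all illustrative placeholders, not Viper-specific values:

```bash
#!/bin/bash
# Hypothetical batch script for a SLURM-style scheduler.
# All names below (partition, module, binary) are placeholders.
#SBATCH --job-name=example-job     # name shown in the job queue
#SBATCH --ntasks=28                # number of CPU cores requested
#SBATCH --time=01:00:00            # wall-clock time limit (hh:mm:ss)
#SBATCH --partition=compute        # target a standard compute node

module load gcc                    # load a compiler toolchain
srun ./my_simulation input.dat     # launch the program across the allocated cores
```

A script like this would be submitted with `sbatch job.sh`, after which the scheduler queues the job and runs it when the requested cores become free, writing output to a log file.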
In the short time since going live we have seen adoption of the service, both in terms of registered users and core hours used, far exceed what we anticipated.
This blog will carry updates from the HPC service at the University of Hull, written both by those supporting the facility and by those using Viper, covering our experience with the service and topics relevant to HPC more broadly.