Research Software Engineering

What is Research Software Engineering?

Often abbreviated to RSE, Research Software Engineering reflects the fact that software has become essential to research. People in research groups “who write code, not papers” have existed in some of the traditionally computationally heavy disciplines for decades. With the expansion of software and its tools into every area of academia, this has now become a very specialised area, particularly within HPC systems such as Viper and on many-core hardware such as GPUs.

The skills and experience within the team allow you to structure the development of your research software, significantly aiding your research workflow. This also supports a proper development cycle for your code, from inception to maturity, with a proper maintenance procedure.

This stage can also help with your grant application, providing the information that is now often required when seeking additional funding.

What can we do?

Design

The most important part of any software project is the design. Many professionals believe this stage should take up around 60% of the whole cycle, but it is often neglected, with catastrophic consequences for the development cycle and long overruns occurring as a result. Getting this part ‘right’ is therefore essential.

Code Optimisation

In many cases code already exists, and we can help to optimise it.

  • One approach is to make the code run on multiple cores within a single HPC node, which tends to be the easiest way to modify single-core code. This uses shared-memory processing; OpenMP is one example (see the first sketch after this list).
  • Another is to modify the code so that it works across multiple nodes, using a software library called MPI (Message Passing Interface); the second sketch after this list illustrates this.
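
As a rough illustration of the shared-memory approach, the sketch below parallelises a simple loop with OpenMP; the array sizes and loop body are placeholders rather than real research code.

    /* A minimal OpenMP sketch: share the iterations of a loop between
       the cores of a single node (illustrative only).
       Compile with, for example:  gcc -fopenmp saxpy_omp.c -o saxpy_omp */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1000000;
        double *x = malloc(n * sizeof(double));
        double *y = malloc(n * sizeof(double));
        const double a = 2.0;

        for (int i = 0; i < n; i++) {   /* set up some example data */
            x[i] = i;
            y[i] = 1.0;
        }

        /* The pragma asks the compiler to split this loop across the
           cores of the node, all of which share the same memory. */
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %f, using up to %d threads\n", y[0], omp_get_max_threads());
        free(x);
        free(y);
        return 0;
    }

For the distributed-memory approach, the equally minimal MPI sketch below has each process (rank) compute a partial result, which is then combined over the network; again, the values involved are placeholders.

    /* A minimal MPI sketch: combine partial results from processes that
       may be running on different nodes (illustrative only).
       Compile with, for example:  mpicc reduce_mpi.c -o reduce_mpi
       Run with, for example:      mpirun -np 4 ./reduce_mpi             */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?  */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes?  */

        /* Each rank contributes a partial value; MPI_Reduce sums them
           on rank 0, with the messages travelling over the interconnect
           when ranks sit on different nodes. */
        double partial = (double)rank;
        double total = 0.0;
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Sum over %d ranks = %f\n", size, total);

        MPI_Finalize();
        return 0;
    }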

We have a team familiar with C/C++, Fortran, Python and Perl, with experience of OpenMP and Open MPI, and, on GPUs, CUDA and OpenACC.

Code Modification

This is particularly useful when code is ported from a different system, or to a different architecture within the same system (for example, from CPU computing on the HPC to GPU computing).
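
As a rough illustration of moving a CPU loop onto a GPU, the sketch below uses an OpenACC directive; the arrays and arithmetic are placeholders, and the same loop could equally be ported with CUDA.

    /* A minimal OpenACC sketch: offload a simple loop to a GPU
       (illustrative only).  Compile with an OpenACC-aware compiler,
       for example:  nvc -acc vecadd_acc.c -o vecadd_acc                */
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static float a[N], b[N], c[N];

        for (int i = 0; i < N; i++) {   /* set up some example data */
            a[i] = (float)i;
            b[i] = 2.0f;
        }

        /* The directive asks the compiler to copy the arrays to the GPU,
           run the loop there with the elements spread across the GPU's
           many cores, and copy the result back.  A compiler without
           OpenACC support simply ignores it and runs the loop on the CPU. */
        #pragma acc parallel loop copyin(a, b) copyout(c)
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[N-1] = %f\n", c[N - 1]);
        return 0;
    }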

HPC Computing

Viper has the majority of its processing cores in its compute nodes, and most software is run here as a mixture of single-node, multi-core jobs and multi-node tasks that use cores across many different nodes, which communicate with each other over a very fast network (ours is Intel’s Omni-Path).

GPU Computing

Viper has four dedicated nodes, each with an Nvidia A40 accelerator card, or GPU. These are cards with 2880 processing cores and 12 GB of memory (independent of the main system memory). The processing cores (sometimes called streaming cores) are small, dedicated units, which makes them ideal for array manipulation where each unit changes one element at a time, but all of them do so together.

It is becoming increasingly common to use a general-purpose graphics processing unit (GPGPU) as a modified form of stream processor (or vector processor), running compute kernels. One area this lends itself to is machine learning, and more so deep learning, using programming libraries such as Google’s TensorFlow, an area the team also has some experience with.

Not everything is suited to GPU acceleration, and sometimes it can actually be slower than a more traditional CPU-based approach; however, some jobs have shown a significant speed-up of anything from 2x to 30x.