TESTS OF DIFFERENT MPI IMPLEMENTATIONS IN HPC/KVM CLUSTER

Speaker

Mr E. Alexandrov (LIT JINR)

Description

This work explored the possibility of combining a cloud and a high-performance computing (HPC) cluster into a single system. The Message Passing Interface (MPI) is the most popular programming technology for parallel computations on HPC clusters, so combining a cloud with an HPC cluster raises the question of how efficiently MPI performs when run on virtual machines. In this paper, several MPI implementations (Intel MPI, OpenMPI) were tested on virtual machines based on the Kernel-based Virtual Machine (KVM) [1] hypervisor. The testbed was a Dell PowerEdge FX2 server consisting of 8 computing units, each with 2x Intel Xeon E5-2680 v3 CPUs and 256 GB RAM; the computing units were interconnected by an integrated 10 Gbit/s switch. During the tests, all available combinations of local network connections and virtual network card types supported by the KVM hypervisor were investigated. For each combination, network bandwidth and processor load were measured. Testing covered two cases: 1) virtual machines placed on different physical computing units; 2) virtual machines placed on the same physical computing unit. The following tests were run with the various MPI implementations: the Intel MPI Benchmarks [2], a program for calculating the current-voltage characteristics (CVC) of long Josephson junctions [3], the GIMM_FPEIVE science package [4], and others. The results on the efficiency of MPI tasks executed in the HPC/KVM cluster are presented.

The work was financially supported by RFBR grant No. 15-29-01217.

References

1. Chirammal H. D., Mukhedkar P., Vettathu A. Mastering KVM Virtualization. August 2016.
2. Intel MPI Benchmarks User Guide. https://software.intel.com/en-us/imb-user-guide
3. Atanasova P., Bashashin M. V., Rahmonov I. R., Shukrinov Yu. M., Volohova A. V., Zemlyanaya E. V. Numerical approach and parallel implementation for computer simulation of stacked long Josephson junctions // Computer Research and Modeling. Vol. 8, No. 4, 2016. Pp. 593-604.
4. Alexandrov E., Amirkhanov I. et al. Principles of Software Construction for Simulation of Physical Processes on Hybrid Computing Systems (on the Example of GIMM FPEIP Complex) // Bulletin of PFUR. Series: Mathematics. Information Sciences. Physics. No. 2, 2014. Pp. 197-205.
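As an illustration of the bandwidth measurement described in the abstract, the sketch below shows a minimal MPI ping-pong test in C, in the spirit of the PingPong pattern from the Intel MPI Benchmarks [2]. It is a hypothetical example, not the exact benchmark used in the study; the message size and repetition count are illustrative assumptions.

/*
 * Minimal point-to-point bandwidth sketch (ping-pong between two
 * ranks). Illustrative only; message size and repetition count are
 * assumptions, not the parameters used in the study.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (4 * 1024 * 1024)  /* 4 MiB per message */
#define REPS      100                /* round trips to time */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    char *buf = malloc(MSG_BYTES);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            /* rank 1 echoes each message back */
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double dt = MPI_Wtime() - t0;

    if (rank == 0) {
        /* two transfers per round trip */
        double gbytes = 2.0 * REPS * (double)MSG_BYTES / 1e9;
        printf("bandwidth: %.2f GB/s\n", gbytes / dt);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Such a program is compiled with the mpicc wrapper of the chosen MPI implementation and launched with two ranks (e.g. mpirun -np 2 ./pingpong), placing the ranks either on the same physical computing unit or on two different units to reproduce the two placement cases above.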

Primary author

Mr E. Alexandrov (LIT JINR)
