Evaluation of Network Interrupt Tuning on Virtualized Application Performance
InfiniBand, as a high-bandwidth, low-latency network interconnect, has for most of the last decade been regarded as the fabric of choice for HPC clusters. It is deployed in many commodity clusters and, as of the June 2011 TOP500 list, is used as the communication network in 41.2% of all systems. However, even as InfiniBand usage continues to grow, several factors hinder full utilization of the technology's capabilities. A recent paper proposed that tuning network interrupt parameters can bring virtualized network performance closer to native performance. We plan to further evaluate the impact of network interrupt tuning on virtualized application performance, using hardware metrics such as cache misses and TLB misses.
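As a rough sketch of the kind of tuning and measurement this involves on a Linux host: the commands below pin an InfiniBand completion interrupt to a fixed core and collect the cache- and TLB-miss counters we intend to study. The IRQ number (53), the CPU mask, and the benchmark command are placeholders for illustration, not values from the proposal.

```shell
# Find the IRQ line(s) used by the HCA (here assuming a Mellanox mlx4 device).
grep mlx4 /proc/interrupts

# Pin that interrupt to core 2 by writing a CPU mask (0x4 = core 2)
# to its smp_affinity file; 53 is a placeholder IRQ number.
echo 4 > /proc/irq/53/smp_affinity

# Collect the hardware counters of interest (cache and TLB misses)
# while an MPI benchmark runs; requires the Linux perf tool.
perf stat -e cache-misses,dTLB-load-misses,iTLB-load-misses \
    mpirun -np 4 ./mpi_benchmark
```

Comparing these counters between the native and virtualized runs, with and without interrupt tuning, is the kind of evaluation the project proposes.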
Use of FutureSystems
We intend to install a custom operating system and device drivers in order to enable PCI passthrough. This setup is required to run performance evaluations of several benchmark suites across different configurations of virtual machines and the native hardware. The benchmarks use MPI for communication.
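A minimal sketch of the host-side PCI-passthrough setup we have in mind, assuming a KVM host of that era and a Mellanox HCA; the PCI address 0000:06:00.0, the vendor/device ID pair, and the disk image name are placeholders:

```shell
# Enable the IOMMU at boot (Intel hosts) by adding this to the
# kernel command line:
#   intel_iommu=on

# Unbind the HCA from the host driver and hand it to pci-stub
# so qemu-kvm can assign it to a guest (IDs are placeholders).
echo "15b3 673c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:06:00.0 > /sys/bus/pci/devices/0000:06:00.0/driver/unbind
echo 0000:06:00.0 > /sys/bus/pci/drivers/pci-stub/bind

# Boot a guest with the device passed through.
qemu-kvm -m 4096 -smp 4 -device pci-assign,host=06:00.0 disk.img
```

With the device assigned directly to the guest, the virtual machine drives the HCA with its own driver, which is what makes the native-versus-virtualized comparison meaningful.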
Scale of Use
We plan to run the full set of comparisons on all allocated machines, which we estimate will require two months, including installation and setup time.