Comparison of Network Emulation Methods

Project Details

Project Lead
David Hancock 
Project Manager
David Hancock 
Institution
Indiana University, UITS - Research Technologies  
Discipline
Computer Science (401) 

Abstract

Dedicated network impairment devices are an accepted method to emulate network latency and loss, but they are expensive and not available to most users. This project will compare the performance of a dedicated network impairment device with that of network emulation performed within the Linux kernel, using TCP traffic to evaluate both approaches under the same latency and loss parameters.
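As a point of reference, kernel-based emulation of this kind is typically done with the tc/netem queueing discipline. The following is a minimal sketch; the interface name (eth1) and the delay and loss values are assumptions, not parameters from this project, and the commands require root.

```shell
# Add an emulated 32 ms one-way delay with 0.1% packet loss on the
# sender's outgoing interface (interface name is an assumption):
tc qdisc add dev eth1 root netem delay 32ms loss 0.1%

# Confirm the qdisc is installed:
tc qdisc show dev eth1

# Remove the emulation when finished:
tc qdisc del dev eth1 root
```

Because netem shapes only egress traffic, a delay applied on one host affects one direction; emulating a full RTT requires applying delay on both hosts or splitting it across the two directions.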

Intellectual Merit

This testing will be used as validation of WAN impairment of TCP and LNET over a single 100 gigabit Ethernet circuit being conducted in Germany through a partnership with the Technische Universität Dresden.

Broader Impacts

Results will be widely disseminated through a joint paper with Technische Universität Dresden and presented to Internet2 members in April.

Scale of Use

We will only be using the network impairment device connected to dedicated servers and storage in the IU Bloomington data center.

Results

The experiment measured host-to-host Iperf TCP performance while increasing the number of parallel streams and inducing RTT latency with FutureGrid's Spirent XGEM network impairment device.  The hosts were two IBM x3650s with Broadcom NetXtreme II BCM57710 NICs.  Red Hat Enterprise Linux 5.5 was installed on each host, with stock kernel tuning left in place.  One Ethernet interface (eth0) on each host was connected back-to-back, while the second (eth1) passed through the Spirent XGEM and a Nexus 7018 using an untagged VLAN, as illustrated in the attached diagram.
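An Iperf test of this shape can be reproduced with invocations along the following lines; the receiver address and stream count here are illustrative assumptions, not the exact commands used in this project.

```shell
# On the receiving host, start an Iperf TCP server:
iperf -s

# On the sending host, run a five-minute (-t 300) unidirectional TCP
# test with 32 parallel streams (-P 32); the address is a placeholder:
iperf -c 10.0.0.2 -t 300 -P 32
```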



The direct host-to-host link saw an average delay of 0.040 ms, while the path through the XGEM (0.004 ms) and Nexus (0.026 ms) averaged 0.080 ms.

Dual five-minute unidirectional TCP Iperf tests were conducted, one across the direct path and one across the switched path.  The tests were initiated independently but started at approximately the same time, within +/- 3 seconds of each other.  Results were gathered for each direct (D) and switched (S) test.  Follow-up tests increased the number of parallel streams Iperf transmits (command-line option -P): one, sixteen, thirty-two, sixty-four, and ninety-six.  Delay was added via the Spirent in increments of default (0.080 ms), 4.00 ms, 8.00 ms, 16.00 ms, 32.00 ms, 64.00 ms, 96.00 ms, and 128.00 ms RTT.  This matrix yielded forty data points.  The experiments were then repeated with two different kernel tuning profiles, increasing the data points to 80 and then 120.  The data points and graph (switched path only) show that as delay increased, overall TCP performance improved as the number of parallel streams was increased.
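The benefit of parallel streams at high RTT is consistent with the bandwidth-delay product: a single TCP connection with a fixed window cannot keep a fast, long path full, while N streams contribute N windows of in-flight data. A quick sketch of the in-flight data needed at the largest RTT in the test matrix, assuming a 10 Gbit/s line rate (an assumption; the actual link rate is not stated above):

```shell
# Bandwidth-delay product: bytes in flight needed to fill the link.
rate_bits=10000000000        # assumed line rate, 10 Gbit/s
rtt_ms=128                   # largest RTT step in the test matrix
bdp_bytes=$(( rate_bits / 8 * rtt_ms / 1000 ))
echo "BDP at ${rtt_ms} ms RTT: ${bdp_bytes} bytes"   # 160000000 bytes (~153 MiB)
```

With stock kernel tuning, per-connection TCP buffers are far smaller than this, so adding streams (or enlarging the tuned buffer limits, as in the follow-up profiles) is what recovers throughput as RTT grows.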

Detailed results can be found in the attached text and Excel files.