Network-Aware Task Scheduling in Hadoop

Project Details

Project Lead
Lei Ye 
Project Manager
Lei Ye 
Project Members
Youngkyoon Suh  
Supporting Experts
Jerome Mitchell, Tak-Lon Wu  
Institution
University of Arizona, Computer Science Department  
Discipline
Computer Science (401) 

Abstract

Hadoop, an open-source MapReduce platform, has been widely used for big data processing in large-scale distributed systems. Deployed on scalable cloud infrastructure in data centers, Hadoop runs on a virtualized cluster, and its performance is closely tied to its task scheduler, which implicitly assumes that cluster nodes are homogeneous. However, performance variation (referred to as jitter) exists among the virtualized nodes and impacts the overall performance of workloads running on Hadoop. The variation comes from a number of sources, such as network congestion, disk contention, CPU overload, memory exhaustion, and system failures and crashes. The scheduler must account for these problems and take additional metrics into consideration when dispatching tasks to individual nodes, where both the task tracker and data storage run. The conventional scheduling principle is to place a computing task as close to its data as possible: if a task is assigned to a node that does not hold the necessary data, the node must fetch the data from a remote node, sacrificing performance and wasting energy. Several existing research projects have pointed out that network topology affects Hadoop performance and that overloaded or congested paths arise in the Hadoop network under intensive workloads. In our initial experiments, we found interesting results in traces collected while running intensive workloads such as the Hive benchmark, PigMix, Nutch, and TPC-H: a large number of map tasks were scheduled remotely, where network traffic becomes an issue. We therefore plan to design a new data-locality- and network-aware scheduling algorithm for Hadoop to address these network traffic issues. The proposed scheduling algorithm not only follows the data-locality principle but also uses cluster network traffic information to direct task assignment.
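To make the idea concrete, one way a scheduler could combine data locality with network traffic is to rank candidate nodes by a weighted penalty: remote placement costs extra, and remote placement over a congested link costs more still. The sketch below is purely illustrative; the function name, weights, and data structures are our assumptions, not the project's actual algorithm.

```python
# Illustrative sketch of data-locality- and network-aware node ranking.
# All names, weights, and inputs are hypothetical, not the real scheduler.

def rank_nodes(task_replicas, candidates, link_utilization,
               locality_weight=1.0, network_weight=0.5):
    """Return candidate nodes ordered from best to worst.

    task_replicas    -- set of node ids holding the task's input block
    candidates       -- node ids with a free task slot
    link_utilization -- dict: node id -> access-link utilization in [0, 1]
    """
    def score(node):
        # A local task needs no input transfer; a remote one pays a penalty.
        locality_penalty = 0.0 if node in task_replicas else 1.0
        # A remote assignment over a congested link is penalized further.
        congestion = link_utilization.get(node, 0.0)
        return (locality_weight * locality_penalty
                + network_weight * congestion * locality_penalty)

    return sorted(candidates, key=score)
```

For example, with replicas on n1 and n2, free slots on n1, n3, and n4, and n3's link at 90% utilization versus n4's at 10%, the ranking prefers the local node n1 first and the lightly loaded remote node n4 over the congested n3.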
We have already designed and implemented a Hadoop simulator named HadoopSim, integrated with the NS-3 network simulator. HadoopSim replays collected traces and simulates all Hadoop events and data transfers over the network. We will design the new data-locality- and network-aware task scheduler in HadoopSim and quantify it, in terms of performance and energy efficiency, against several existing schedulers such as the FIFO, capacity, and fair schedulers.
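The replay idea at the core of such a simulator can be illustrated with a generic discrete-event loop: trace records are processed in timestamp order, and handling one event may schedule follow-up events whose delays a network model (NS-3 in HadoopSim's case) would supply. This is a minimal, self-contained sketch of the technique, not HadoopSim's actual code; the event names and the fixed transfer delay are assumptions.

```python
import heapq

# Generic discrete-event replay sketch (not HadoopSim itself): each trace
# record is (timestamp, event_name). Events are processed in time order,
# and a remotely scheduled map spawns a "transfer_done" event after a
# modeled network delay (which a real simulator would get from NS-3).

def replay(trace, transfer_delay=2.0):
    """Replay (time, event) records in timestamp order and return the log."""
    queue = list(trace)
    heapq.heapify(queue)          # min-heap ordered by timestamp
    log = []
    while queue:
        time, event = heapq.heappop(queue)
        log.append((time, event))
        if event == "map_remote":
            # A remote map must fetch its input block; model the transfer.
            heapq.heappush(queue, (time + transfer_delay, "transfer_done"))
    return log
```

Replaying `[(0.0, "map_local"), (1.0, "map_remote")]` yields the two input events followed by a `transfer_done` at time 3.0, showing how remote task placement injects extra network activity into the simulated timeline.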

Intellectual Merit

We propose novel scheduling algorithms that capture network traffic, enabling the realization of improved Hadoop systems. Completion of this project will result in: (1) an understanding of the interactions between node communication and task scheduling in Hadoop; (2) an understanding of the critical design issues in realizing a network-aware scheduler for Hadoop; and (3) novel, energy-efficient, and improved network topologies for Hadoop systems. Quantification of the proposed scheduler will further allow the development of a better decision-making process that minimizes the running time of Hadoop workloads. Finally, the proposed mechanisms and artifacts will enable further research on improved Hadoop systems of any scale.

Broader Impacts

Optimizing the performance and energy consumption of large-scale distributed systems like Hadoop for big data processing is a key direction for designing and building large data centers. Performance and energy consumption are critical concerns of today's computing infrastructure, affecting all aspects of computing. The proposed mechanisms will enable the building of Hadoop systems that schedule tasks based on network topology and traffic information, resulting in lower energy consumption while still providing high performance and accessibility. Furthermore, the optimizations also have impact outside of computing and can affect society at large: more energy-efficient systems have a lower carbon footprint, making facilities greener and more sustainable. This research will also help train the next generation of computer scientists at both the graduate and undergraduate levels. The results of this research will be integrated into the Department's curriculum. Finally, all software and tools developed as part of this project will be made available to the broader research community.

Scale of Use

The experiments will consider multiple configurations, scaling from 50 to 300 or more nodes, for about 14 days.