
Speculative Execution in Hadoop


Hadoop monitors task progress using a progress score between 0 and 1.

For a map task, the progress score is the fraction of input data read.

For a reduce task, the execution is divided into three phases, each of which accounts for 1/3 of the score:
• The copy phase, when the task fetches map outputs.
• The sort phase, when map outputs are sorted by key.
• The reduce phase, when the user-defined reduce function is applied to the list of values associated with each key.
In each phase, the score is the fraction of data processed.
For example,
• a task halfway through the copy phase has a progress score of (1/2) × (1/3) = 1/6
• a task halfway through the reduce phase has a progress score of 1/3 + 1/3 + (1/2) × (1/3) = 5/6
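To make the arithmetic concrete, here is a minimal sketch of the scoring rule (the class, enum, and method names are invented for illustration; this is not Hadoop's actual code):

```java
// Illustrative progress scoring, not the real Hadoop API.
public class ProgressScore {

    enum ReducePhase { COPY, SORT, REDUCE }

    // A map task's score is simply the fraction of input read.
    static double mapScore(double fractionOfInputRead) {
        return fractionOfInputRead;
    }

    // A reduce task's score: each completed phase contributes 1/3,
    // plus 1/3 times the fraction done in the current phase.
    static double reduceScore(ReducePhase phase, double fractionDoneInPhase) {
        int completedPhases = phase.ordinal(); // COPY=0, SORT=1, REDUCE=2
        return (completedPhases + fractionDoneInPhase) / 3.0;
    }

    public static void main(String[] args) {
        System.out.println(reduceScore(ReducePhase.COPY, 0.5));   // 0.1666... = 1/6
        System.out.println(reduceScore(ReducePhase.REDUCE, 0.5)); // 0.8333... = 5/6
    }
}
```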

Hadoop compares each task's progress score against the average for its category (maps or reduces) to decide when to speculate. A task is considered slow when its progress score falls below the category average by a fixed threshold (0.2 in the heuristic described in the referenced paper) and the task has been running for at least one minute. The scheduler also ensures that at most one speculative copy of each task is running at a time.

When running multiple jobs, Hadoop uses a FIFO discipline: the earliest-submitted job is asked for a task to run first, then the second, and so on. There is also a priority system for placing jobs into higher-priority queues.
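As a rough sketch of the slow-task check (the Task record and its fields are invented for illustration; only the 0.2 threshold and one-minute minimum runtime come from the referenced paper):

```java
import java.util.List;

// Illustrative straggler check, not Hadoop source code.
public class SpeculationCheck {
    static final double THRESHOLD = 0.2;        // assumed per the paper
    static final long MIN_RUNTIME_MS = 60_000;  // one minute, assumed per the paper

    // Hypothetical task record: progress in [0,1], elapsed runtime,
    // and whether a speculative copy is already running.
    record Task(double progress, long runtimeMs, boolean hasSpeculativeCopy) {}

    // A task qualifies for speculation when it trails its category's
    // average progress by THRESHOLD, has run long enough, and no
    // speculative copy of it already exists.
    static boolean shouldSpeculate(Task t, List<Task> sameCategory) {
        double avg = sameCategory.stream()
                                 .mapToDouble(Task::progress)
                                 .average()
                                 .orElse(0.0);
        return t.runtimeMs() >= MIN_RUNTIME_MS
            && t.progress() < avg - THRESHOLD
            && !t.hasSpeculativeCopy();
    }

    public static void main(String[] args) {
        List<Task> reduces = List.of(
            new Task(0.90, 120_000, false),
            new Task(0.85, 120_000, false),
            new Task(0.30, 120_000, false)); // trails the ~0.68 average
        System.out.println(shouldSpeculate(reduces.get(2), reduces)); // true
    }
}
```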

Reference: M. Zaharia, A. Konwinski, A. D. Joseph, R. Katz, and I. Stoica, "Improving MapReduce Performance in Heterogeneous Environments," OSDI 2008.

 

Posted on February 24, 2012 in Hadoop, MapReduce

 
