Analysis of Hadoop's Performance Under Failures
Failures are common in today's data-center environments and can significantly degrade the performance of important jobs running on top of large-scale computing frameworks. In this paper, the authors analyze Hadoop's behavior under compute-node and process failures. Surprisingly, they find that even a single failure can have a large detrimental effect on job running times. They uncover several important design decisions underlying this distressing behavior: the inefficiency of Hadoop's statistical speculative-execution algorithm, the lack of failure-information sharing between tasks, and the overloading of TCP failure semantics. They hope that their study will add new dimensions to the pursuit of robust large-scale computing framework designs.
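To make the speculative-execution point concrete, here is a minimal sketch (not Hadoop's actual implementation) of the kind of statistical rule such schedulers use: a task becomes a candidate for a speculative backup copy when its progress rate falls well below the mean rate of its peers. The function name, the one-standard-deviation threshold, and the input format are all illustrative assumptions.

```python
from statistics import mean, stdev

def speculation_candidates(progress_rates, threshold=1.0):
    """Return indices of tasks whose progress rate lags the rest.

    Illustrative sketch, not Hadoop's code: a task is a speculation
    candidate if its rate is more than `threshold` standard deviations
    below the mean rate across all currently running tasks.
    """
    if len(progress_rates) < 2:
        return []  # nothing to compare against
    mu = mean(progress_rates)
    sigma = stdev(progress_rates)
    return [i for i, rate in enumerate(progress_rates)
            if rate < mu - threshold * sigma]

# One clearly stalled task among healthy peers is flagged:
speculation_candidates([1.0, 0.9, 1.1, 0.05])   # -> [3]

# But a stalled task can inflate the variance enough to hide itself,
# illustrating why a purely statistical rule may fail to react to a
# failure-induced straggler:
speculation_candidates([1.0, 0.1])              # -> []
```

The second call hints at the failure mode the authors describe: a single failed or stalled node skews the very statistics used to detect stragglers, so no backup task is launched and the job stalls.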