Enhancing NameNode Fault Tolerance in Hadoop Distributed File System

In today's cloud computing environment, Hadoop is widely used to handle huge volumes of data, from tens of terabytes to petabytes, on commodity hardware, with the Hadoop Distributed File System (HDFS) providing storage and the MapReduce framework providing parallel data processing. In Hadoop version 1.0.3, there is a single metadata server, called the NameNode, which stores the entire file system metadata in main memory, and most I/O operations depend on this critical metadata. Hadoop becomes unavailable if the NameNode crashes, which can happen when its main memory is exhausted by multiple concurrent accesses.
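
To make the memory pressure on the NameNode concrete, the following minimal Java sketch estimates NameNode heap consumption under the commonly quoted rule of thumb of roughly 150 bytes of heap per file, directory, and block object. Both that per-object figure and the workload numbers are illustrative assumptions, not values taken from the paper.

```java
/**
 * Rough NameNode heap estimate.
 * Assumption: each file, directory, and block object costs on the order of
 * 150 bytes of NameNode heap (a widely cited rule of thumb, not a figure
 * from the abstract above).
 */
public class NameNodeHeapEstimate {

    // Assumed per-object cost in bytes; real cost varies by Hadoop version.
    private static final long BYTES_PER_OBJECT = 150L;

    static long estimateHeapBytes(long files, long directories, long blocks) {
        return (files + directories + blocks) * BYTES_PER_OBJECT;
    }

    public static void main(String[] args) {
        // Hypothetical cluster: 100M files, 10M directories, ~1.2 blocks per file.
        long files = 100_000_000L;
        long directories = 10_000_000L;
        long blocks = 120_000_000L;

        long bytes = estimateHeapBytes(files, directories, blocks);
        System.out.printf("Estimated NameNode heap: %.1f GB%n", bytes / 1e9);
    }
}
```

Under these assumed numbers the single NameNode would need on the order of 30 GB of heap just for metadata, which illustrates why a lone, memory-bound NameNode is both a capacity bottleneck and a single point of failure.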

Provided by: International Journal of Computer Applications | Topic: Big Data | Date Added: Feb 2014 | Format: PDF
