The Big Data Recovery System for Hadoop Cluster

Due to the rapid growth of data storage at many internet service companies, unstructured data is now routinely generated at terabyte (TB) and petabyte (PB) scale, and Hadoop is widely used to handle such data volumes. The reliability and availability of the cluster must therefore be maintained. To achieve high availability in Hadoop, failures should be recovered from as early as possible, or avoided altogether. In particular, a failure of the HDFS NameNode, which runs on the master node, degrades the performance of the entire Hadoop cluster, since it is a single point of failure.
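One common way to address the NameNode single point of failure described above is HDFS NameNode high availability with automatic failover. The following is a minimal sketch of the relevant `hdfs-site.xml` properties, assuming a hypothetical nameservice called `mycluster` with two NameNodes (`nn1`, `nn2`) on hosts `master1` and `master2`; actual values depend on the cluster:

```xml
<configuration>
  <!-- Logical name for the HA nameservice (hypothetical) -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- Two NameNodes: one active, one standby -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>master1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>master2:8020</value>
  </property>
  <!-- Let ZooKeeper-based failover controllers switch to the
       standby automatically when the active NameNode fails -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
```

With this configuration, the standby NameNode can take over when the active one fails, so a master-node failure no longer makes the whole cluster unavailable.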

Resource Details

Provided by:
International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE)