International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE)
Big data refers to massive volumes of data. Hadoop is a framework for processing such large datasets: the Hadoop Distributed File System (HDFS) is used to store big data, and the MapReduce programming model is used to retrieve and analyze it. Datasets of terabyte scale are easily stored on HDFS and analyzed using MapReduce. The authors' approach aims to study the issues related to the Hadoop MapReduce architecture and to propose solutions to those problems. In MapReduce, reliability plays an important role when data is analyzed.
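To illustrate the MapReduce programming model the abstract refers to, the following is a minimal sketch of the map, shuffle, and reduce phases applied to a word-count job. It is a hypothetical single-machine illustration in Python, not the authors' Hadoop implementation; in a real Hadoop job these phases run as distributed Java tasks over HDFS splits.

```python
from collections import defaultdict

def map_phase(split):
    # Mapper: emit an intermediate (word, 1) pair for every word in the input split
    for word in split.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle/sort: group all intermediate values by key before reducing
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: aggregate the grouped counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

# Hypothetical input splits standing in for HDFS blocks
splits = ["big data needs hadoop", "hadoop stores big data on hdfs"]
pairs = [kv for split in splits for kv in map_phase(split)]
counts = reduce_phase(shuffle(pairs))
```

Reliability in a real cluster comes from re-executing failed map or reduce tasks on other nodes, which is possible because each phase is a pure function of its input split.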