DO-CACHE: A Mechanism for Hadoop Using MapReduce in Big Data
In today's big data world, services such as Google, Amazon, and Facebook generate enormous volumes of data every day. Data at this scale is difficult to store, manage, and analyze, and it must be processed on many machines arranged in parallel. Running a single method over such a huge volume of data takes a long time and can cause failures in a distributed system. To address this problem, the Hadoop framework was developed for big data processing, and it is now used in many organizations.
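To illustrate the processing model Hadoop builds on, the following is a minimal sketch of the MapReduce pattern in plain Python (not the Hadoop API itself): a map phase emits key-value pairs, a shuffle phase groups them by key, and a reduce phase aggregates each group. The word-count task and all function names here are illustrative assumptions.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group emitted values by key, as Hadoop does
    # between the map and reduce stages
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values (here, sum the counts)
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big", "data processing"]
result = reduce_phase(shuffle(map_phase(lines)))
print(result)  # {'big': 2, 'data': 2, 'processing': 1}
```

In Hadoop, each phase runs in parallel across the cluster's nodes, which is what makes the model scale to datasets far larger than a single machine can hold.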