Today's digital era has led to a rapid escalation in dataset sizes. Such datasets are termed "big data" because of their massive volume, variety, and velocity, and they are stored in distributed file system architectures. Hadoop is a framework that provides the Hadoop Distributed File System (HDFS) for storage and MapReduce for processing large data sets in a distributed computing environment. Task assignment is handled by schedulers, which aim to guarantee fair allocation of resources among users. When a user submits a job, it is placed into a job queue.
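The queueing and fair-allocation idea above can be sketched as follows. This is a minimal illustrative simulation, not Hadoop's actual scheduler code: the class name `FairJobQueue` and its methods are hypothetical, and the fairness rule (serve the user with the fewest jobs scheduled so far) is a simplified stand-in for the policies of real schedulers such as Hadoop's Fair Scheduler.

```python
from collections import deque, defaultdict

# Hypothetical sketch of fair job scheduling (not Hadoop's implementation):
# submitted jobs land in a per-user queue, and the scheduler next serves the
# user who has received the fewest job slots so far.
class FairJobQueue:
    def __init__(self):
        self.queues = defaultdict(deque)   # per-user FIFO job queues
        self.scheduled = defaultdict(int)  # jobs already granted per user

    def submit(self, user, job):
        # A submitted job is placed into the submitting user's queue.
        self.queues[user].append(job)

    def next_job(self):
        # Among users with pending jobs, pick the one with the fewest
        # jobs scheduled so far, approximating fair sharing.
        pending = [u for u, q in self.queues.items() if q]
        if not pending:
            return None
        user = min(pending, key=lambda u: self.scheduled[u])
        self.scheduled[user] += 1
        return self.queues[user].popleft()

if __name__ == "__main__":
    q = FairJobQueue()
    q.submit("alice", "wordcount-1")
    q.submit("alice", "wordcount-2")
    q.submit("bob", "terasort-1")
    print(q.next_job())  # alice's first job
    print(q.next_job())  # bob's job runs before alice's second
    print(q.next_job())
```

Under this toy policy, two jobs from one user do not starve another user's single job, which is the essence of the fair allocation that Hadoop schedulers aim to provide.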