
Network bandwidth is a scarce resource in big data environments, so data locality is a fundamental problem for data-parallel frameworks such as Hadoop and Spark. Existing approaches address this problem by scheduling computational tasks near the input data, taking into account each server's free time, data placement, and data transfer costs. However, such approaches usually assume identical data transfer costs across servers, even though a multicore server's data transfer cost increases with the number of data-remote tasks assigned to it. As a result, they fail to minimize data-processing time effectively. As a solution, we propose DynDL (Dynamic Data Locality), a novel data-locality-aware task-scheduling model that handles dynamic data transfer costs for multicore servers.
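To make the idea concrete, here is a minimal Python sketch, not the authors' DynDL model: a greedy scheduler whose per-server transfer cost grows with the number of data-remote tasks already assigned to that server. The unit compute time, linear cost growth, and server names are illustrative assumptions.

```python
# Hypothetical sketch (not DynDL itself): greedy task assignment where a
# server's data transfer cost rises with the data-remote tasks it already holds.

def schedule(tasks, servers, base_transfer_cost=1.0):
    """tasks: list of (task_id, data_server) pairs, where data_server
    holds the task's input data; servers: dict server -> free time."""
    assignment = {}
    remote_counts = {s: 0 for s in servers}  # data-remote tasks per server
    finish_time = dict(servers)              # running completion estimate
    for task_id, data_server in tasks:
        best_server, best_cost = None, float("inf")
        for s in servers:
            if s == data_server:
                transfer = 0.0               # data-local: no transfer needed
            else:
                # dynamic cost: grows with remote tasks already on this server
                transfer = base_transfer_cost * (1 + remote_counts[s])
            cost = finish_time[s] + 1.0 + transfer  # 1.0 = unit compute time
            if cost < best_cost:
                best_server, best_cost = s, cost
        assignment[task_id] = best_server
        finish_time[best_server] = best_cost
        if best_server != data_server:
            remote_counts[best_server] += 1
    return assignment
```

With all input data on one server, the rising transfer cost eventually makes offloading a task to a remote server cheaper than queuing it locally, which a fixed-cost model would mis-estimate.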

Article Details

How to Cite
D. Radhika, C. Shalini, T. Uma Maheswari, S. Pooja, & G. Subatharani. (2021). Designing a dynamic task scheduler in MapReduce for Hadoop framework. International Journal of Intellectual Advancements and Research in Engineering Computations, 7(2), 2289–2293. Retrieved from