Within the scientific community, many high-performance applications are used to run experiments on data sets. These data sets can be very large in size or in number, and either situation can overload the centralized manager that schedules the system. The authors minimize the manager's role by using a distributed hash table: every file is assigned a fixed "home" location where it resides whenever it is to be used, which reduces location-maintenance overhead. They further reduce the strain on the central manager by using a counting Bloom filter.
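To make the two techniques concrete, here is a minimal sketch (in Python, with hypothetical names; this is not the authors' implementation): a hash of a file's name deterministically picks its "home" node, so any peer can locate the file without consulting a central manager, and a counting Bloom filter tracks membership with counters instead of bits so entries can be removed as well as added.

```python
import hashlib

def home_node(filename: str, nodes: list) -> str:
    """Map a file to a fixed 'home' node via a hash of its name,
    so lookups need no central manager (illustrative sketch only)."""
    digest = int(hashlib.sha1(filename.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

class CountingBloomFilter:
    """Bloom filter variant using small counters instead of bits,
    allowing deletions at the cost of extra memory."""
    def __init__(self, size: int = 1024, num_hashes: int = 3):
        self.size = size
        self.num_hashes = num_hashes
        self.counters = [0] * size

    def _indexes(self, item: str):
        # Derive several hash positions by salting the item.
        for i in range(self.num_hashes):
            h = hashlib.sha1(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item: str) -> None:
        for idx in self._indexes(item):
            self.counters[idx] += 1

    def remove(self, item: str) -> None:
        for idx in self._indexes(item):
            if self.counters[idx] > 0:
                self.counters[idx] -= 1

    def might_contain(self, item: str) -> bool:
        # No false negatives; false positives are possible.
        return all(self.counters[idx] > 0 for idx in self._indexes(item))
```

In such a design, the filter lets a node cheaply answer "might this file be here?" before doing an expensive lookup, and the counters allow the answer to change when a file is evicted, which a plain bit-vector Bloom filter cannot do.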