Data Centers
Along with the rise of cloud computing, data centers are being reinvented through virtualization, servers and high-performance computing. Find out more with the latest white papers and case studies.
-
Provably-Efficient Job Scheduling for Energy and Fairness in Geographically Distributed Data Centers
Decreasing soaring energy costs is imperative in large data centers. Meanwhile, limited computational resources need to be fairly allocated among different organizations. Latency is another major concern for resource management. However, energy cost, resource allocation fairness, and latency are important but often conflicting metrics when scheduling data center workloads....
Provided By Institute of Electrical & Electronic Engineers
-
Grid Computing for Different Applications
Grid computing delivers on the potential of the growth and abundance of network-connected systems and bandwidth: computation, collaboration and communication over the Advanced Web. At the heart of grid computing is a computing infrastructure that provides dependable, consistent, pervasive and inexpensive access to computational capabilities. The use of grid...
Provided By International Journal of Computer Technology and Applications
-
Highly Expandable Reconfigurable Platform Using Multi-FPGA based Boards
Reconfigurable computing has become an essential area of research over the last few decades. By placing computationally intensive applications in the reconfigurable logic area of a system, remarkable performance gains have been achieved. Among the different research directions in the domain of reconfigurable computing, the use of multiple reconfigurable devices...
Provided By International Journal of Computer Applications
-
A Balanced Scheduling Algorithm With Fault Tolerance and Task Migration Based on Primary Static Mapping (PSM) in Grid
In this paper, the authors present a balanced scheduling algorithm that considers fault tolerance and task migration when allocating independent tasks in grid systems. Resource scheduling and its management are great challenges in heterogeneous environments. Hence, load balancing is one of the best solutions to achieve these goals....
Provided By Payam Chu
-
Energy Optimizations for Data Center Network: Formulation and Its Solution
Data centers consume an increasing amount of power nowadays; with the expanding number of data centers and their growing scale, power consumption has become a thorny issue. While the main efforts of this research area focus on server and storage power reduction, network devices, as key components of data...
Provided By Nanyang Technological University
-
Smart Grids: A New Framework for Efficient Power Management in Datacenter Networks
The energy demand in the enterprise market segment calls for a supply format that accommodates all generation and storage options with active participation by end users in demand response. Basically, with today's High Power Computing (HPC), a highly reliable, scalable, and cost-effective energy solution that will satisfy power demands and...
Provided By The Schwa
-
Workflow Management for High Availability of Resources in Grid Environment
The objective of this paper is to propose workflow management for high availability of resources in grid computing. The grid provides the infrastructure for grid workflows that manage grid applications. This infrastructure focuses on large-scale resource sharing, innovative applications, and a high-performance orientation. Grid computing enables the sharing, selection and...
Provided By EuroJournals
-
Characterizing the Impact of the Workload on the Value of Dynamic Resizing in Data Centers
Energy consumption imposes a significant cost on data centers; yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in developing designs that allow the service capacity to be dynamically resized to match the current workload....
Provided By Cornell University
-
Novel Design of a 9T SRAM Cell With Reduced Leakage for Embedded Cache Memory Application
Low-power memory design is one of the most challenging aspects of VLSI design. Trends in technology scaling have led to the domination of leakage power dissipation. With a large number of low-power techniques being adopted for logic circuits, it is necessary to design low-power SRAM cells that can reduce...
Provided By EuroJournals
-
Double Auction-Inspired Meta-Scheduling of Parallel Applications on Global Grids
Meta-schedulers map jobs to computational resources that are part of a grid, such as clusters, which in turn have their own local job schedulers. Existing Grid meta-schedulers either target system-centric metrics, such as utilization and throughput, or prioritize applications based on utility metrics provided by the users. The system-centric approach...
Provided By University of Medicine and Pharmacy
-
A Survey of Game Theoretic Approaches in Smart Grid
The concept of the smart grid, which transforms the age-old power grid into a smart and intelligent electric power distribution system, is currently a hot research topic. The smart grid offers the merging of electrical power engineering technologies with network communications. Game theory has featured as an interesting technique, adopted by...
Provided By Institute of Electrical & Electronic Engineers
-
Cutting Down Electricity Cost in Internet Data Centers by Using Energy Storage
Electricity consumption comprises a significant fraction of the total operating cost in data centers. System operators are required to reduce the electricity bill as much as possible. In this paper, the authors consider utilizing the available energy storage capability in data centers to reduce the electricity bill under a real-time electricity market. A Lyapunov optimization technique...
Provided By Institute of Electrical & Electronic Engineers
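To make the flavor of this approach concrete, here is a minimal sketch of a drift-plus-penalty style charge/discharge rule of the kind Lyapunov optimization produces; the price series, the `v_weight` trade-off parameter, and the battery model are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch: a drift-plus-penalty style charge/discharge rule for a data
# center battery under real-time prices. All names and parameters are
# illustrative assumptions, not taken from the paper.

def storage_decision(price, battery_level, capacity, v_weight=50.0, max_rate=10.0):
    """Return energy to charge (+) or discharge (-) this time slot."""
    queue = battery_level - capacity / 2.0          # virtual queue: distance from half-full
    # Drift-plus-penalty intuition: charge when V*price + queue is negative,
    # discharge when it is positive.
    score = v_weight * price + queue
    if score < 0:
        return min(max_rate, capacity - battery_level)   # cheap power: charge
    return -min(max_rate, battery_level)                  # expensive power: discharge

# Toy usage: a day of fluctuating prices and a constant IT load per slot.
prices = [0.05, 0.04, 0.06, 0.12, 0.15, 0.11, 0.07, 0.05]
level, capacity, bill, demand = 20.0, 40.0, 0.0, 8.0
for p in prices:
    delta = storage_decision(p, level, capacity)
    level += delta
    bill += p * max(demand + delta, 0.0)    # energy bought from the grid this slot
print(f"final battery level {level:.1f}, total bill {bill:.2f}")
```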
-
A Novel Approach to Enhance Reliability in Hadoop with Reduced Storage Requirement
A data grid should provide fast, reliable and transparent access to data for a large number of heterogeneous and geographically distributed users. The volume of data handled in a data grid is on the order of petabytes. Hadoop is a data grid framework. In Hadoop, data is split into blocks...
Provided By EuroJournals
-
Optimal Performance Mapping on Reconfigurable Architecture for Multimedia Applications
Coarse-Grained Reconfigurable Architectures (CGRAs) are capable of achieving both goals of high performance and flexibility. CGRAs not only improve performance by exploiting the features of repetitive computations, but also can adapt to diverse computations by dynamically changing configurations of an array of its internal Processing Elements (PEs) and their interconnections....
Provided By EuroJournals
-
Robust Dynamic Provable Data Possession
Remote Data Checking (RDC) allows clients to efficiently check the integrity of data stored at untrusted servers. This allows data owners to assess the risk of outsourcing data in the cloud, making RDC a valuable tool for data auditing. A robust RDC scheme incorporates mechanisms to mitigate arbitrary amounts of...
Provided By New Jersey Institute of Technology
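As a rough intuition for remote data checking, the sketch below shows a naive challenge-response audit in which the client keeps per-block hashes and spot-checks random blocks; real RDC/PDP schemes use compact homomorphic tags and error-correcting redundancy, so this is only an assumption-laden illustration of the idea, not the paper's protocol.

```python
# Hedged sketch of naive remote spot-checking: the client keeps per-block
# hashes and challenges the server on random blocks. Real RDC schemes are far
# more storage- and bandwidth-efficient; this only conveys the idea.
import hashlib
import random

def block_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Client side: split the file and keep the hashes locally.
file_blocks = [f"block-{i}".encode() for i in range(100)]
client_hashes = [block_hash(b) for b in file_blocks]

# Server side: stores the blocks (here it silently corrupts one).
server_blocks = list(file_blocks)
server_blocks[42] = b"corrupted"

# Challenge: the client samples random block indices and verifies them.
random.seed(5)
challenge = random.sample(range(len(file_blocks)), k=10)
ok = all(block_hash(server_blocks[i]) == client_hashes[i] for i in challenge)
print("audit passed" if ok else "audit detected corruption")
```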
-
A Survey on Dynamic Job Scheduling in Grid Environment Based on Heuristic Algorithms
Computational Grids are a new trend in distributed computing systems. They allow the sharing of geographically distributed resources in an efficient way, extending the boundaries of what the authors perceive as distributed computing. Various sciences can benefit from the use of grids to solve CPU-intensive problems, creating potential benefits to...
Provided By Creative Commons
-
Minimum Cost Virtually Single Storage Private Grid Using OGA
In this paper, the authors construct a low-cost, virtually single-storage private grid using Object-based Grid Architecture (OGA). In OGA, data and process privacy and inter-communication are enhanced through object-oriented concepts. In general, the grid is affected, without scheduling, by down time and dedicated resource....
Provided By EuroJournals
-
Energy Footprint of Advanced Dense Numerical Linear Algebra Using Tile Algorithms on Multicore Architecture
The authors propose to study the impact on the energy footprint of two advanced algorithmic strategies in the context of high-performance dense linear algebra libraries: mixed-precision algorithms with iterative refinement, which run at the peak performance of single-precision floating-point arithmetic while achieving double-precision accuracy, and...
Provided By University of Tehran
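The first of the two strategies, mixed-precision iterative refinement, can be illustrated with a short sketch: solve in single precision, then correct the solution using residuals computed in double precision. The test matrix and the fresh solve per iteration (rather than reusing a factorization, as a real library would) are simplifying assumptions.

```python
# Hedged sketch of mixed-precision iterative refinement. It illustrates the
# idea only; the paper's tile algorithms and library internals are not
# reproduced here.
import numpy as np

rng = np.random.default_rng(3)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

A32, b32 = A.astype(np.float32), b.astype(np.float32)
x = np.linalg.solve(A32, b32).astype(np.float64)  # cheap single-precision solve

for it in range(5):
    r = b - A @ x                                  # residual in double precision
    dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += dx                                        # refine toward double accuracy
    print(f"iteration {it}: residual norm {np.linalg.norm(r):.2e}")
```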
-
LISP Part 3 - Deployed Network and Use-Cases
In this webcast, the presenters take a deep dive into the various use cases LISP supports. From low-opex multi-homing and provider-independent addressing to data center and mobility applications, they show how one architectural solution can solve many of the critical networking problems faced today.
Provided By Oleksiy Kovyrin
-
Rethinking the Architecture Design of Data Center Networks
In the rising tide of the Internet of Things, more and more things in the world are connected to the Internet. Recently, data has kept growing at a rate more than four times that predicted by Moore's law. This explosion of data comes from various sources such as mobile...
Provided By Hong Kong University of Science and Technology
-
Data Clustering and Topology Preservation Using 3D Visualization of Self Organizing Maps
The Self Organizing Map (SOM) is regarded as an excellent computational tool that can be used in data mining and data exploration processes. The SOM usually creates a set of prototype vectors representing the data set and carries out a topology-preserving projection from the high-dimensional input space onto a low-dimensional...
Provided By International Association of Engineers
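For readers unfamiliar with how a SOM builds its prototype vectors and topology-preserving projection, the following minimal training loop may help; the grid size, learning-rate schedule, and Gaussian neighbourhood are common textbook choices assumed here, not details taken from the paper.

```python
# Hedged sketch of SOM training: find the best matching unit for each sample
# and pull it (and its grid neighbours) toward the sample.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 3))          # 200 samples, 3 features
grid_h, grid_w = 5, 5
weights = rng.normal(size=(grid_h, grid_w, 3))
coords = np.dstack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"))

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)            # decaying learning rate
    radius = 2.0 * (1 - epoch / 20) + 0.5  # decaying neighbourhood radius
    for x in data:
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best matching unit
        grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
        influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
        weights += lr * influence[..., None] * (x - weights)

print("prototype for unit (0, 0):", weights[0, 0])
```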
-
Techila: High Performance Computing ISV Switches From Amazon, Adds $328,000 in Revenue
Techila is an Independent Software Vendor (ISV) based in Tampere, Finland. Its Techila middleware solution enables organizations that run computationally intensive applications to access high-performance computing capabilities - without investing in new hardware. Because its flagship solution required installing and supporting servers at customer sites, it was difficult for Techila...
Provided By Microsoft
-
System Networks Drive the Next Generation of Automated, Dynamic Datacenters by IDC
This IDC white paper discusses the move to 10GbE, convergence and Big Data for enterprise data centers. It describes the role that IBM System Networking is playing to provide innovation and value to IT organizations for building smarter data center networks today and planning for the future.
Provided By IBM
-
Unified Computing Platform Makes Server Allocation Around 60 Times Faster
DATEV eG offers software solutions and IT services across Europe for tax advisers, accountants and attorneys, as well as their clients. DATEV wanted a modern, easy-to-manage, failsafe, cost-saving and future-proof infrastructure solution that would integrate well within its existing rack server environment. The personalized infrastructure solution is based on a...
Provided By Cisco
-
Multiple Sequence Alignment on the Grid Computing Using Cache Technique
Multiple sequence alignment is an important and popular problem in molecular biology. It is a basic problem whose solution can be used to prove and discover the similarity of a new sequence with other existing sequences, to define the evolutionary process of a family of sequences, as well as...
Provided By International Journal of Computer Science and Telecommunications
-
Best Practices - Handling IT Equipment in a Data Center
As the authors' technology-driven society continues to increase data processing and storage, the strain and demand on the Information Technology (IT) industry is much higher than in most other industries. One of the most common tasks in any data center facility is the physical handling of servers and other...
Provided By ServerLIFT
-
Review of Deadlock Free Dynamic Reconfiguration in High-Speed LANS
Computer performance has increased in recent years, so the communication subsystem has become the main bottleneck within the system. Switch-based interconnection networks are therefore used in current high-performance systems to address this problem. Whenever a topological change occurs, the...
Provided By IPA Journals
-
From Grid Computing to Cloud Computing & Security Issues in Cloud Computing
The cloud is a next-generation platform that provides dynamic resource pools, virtualization and high availability. The paper begins by discussing the concepts of distributed computing and grid computing, then focuses on the concept of cloud computing and its characteristics. This paper provides a brief introduction to...
Provided By TechniaJOURNAL
-
Distributed Data Mining in the Grid Environment
Grid computing has emerged as an important new branch of distributed computing focused on large-scale resource sharing and a high-performance orientation. In many applications, it is necessary to perform analysis of very large data sets. The data are often large and geographically distributed, and their complexity is increasing. In this area...
Provided By International Journal of Engineering and Innovative Technology (IJEIT)
-
Study of Maximum Power Point Tracking Using Perturb and Observe Method
The need for renewable energy sources is on the rise because of the acute energy crisis in the world today. India plans to produce 20 gigawatts of solar power by the year 2020, whereas the authors have realized less than half a gigawatt of that potential as of March...
Provided By International Journal of Advanced Research in Computer Engineering & Technology
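The perturb and observe rule itself is compact enough to sketch: perturb the operating voltage, observe whether power rose or fell, and keep or reverse the perturbation direction accordingly. The toy photovoltaic curve below is an assumption purely for illustration.

```python
# Hedged sketch of the Perturb and Observe (P&O) MPPT rule over a toy PV curve.

def pv_power(v):
    """Toy photovoltaic power curve with a single maximum (illustrative)."""
    return max(0.0, v * (10.0 - 0.5 * v))   # peaks around v = 10

def perturb_and_observe(v0=5.0, step=0.2, iterations=50):
    v, p_prev, direction = v0, pv_power(v0), +1
    for _ in range(iterations):
        v += direction * step           # perturb the operating voltage
        p = pv_power(v)                 # observe the resulting power
        if p < p_prev:                  # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point ~{v_mpp:.2f} V, power ~{p_mpp:.2f} W")
```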
-
Grid Scheduling Using PSO With SPV Rule
Grid computing can be defined as applying the resources of many computers in a network to a problem that requires a great number of processing cycles or access to large amounts of data. However, in the field of grid computing, task scheduling is a big challenge. The task...
Provided By International Journal of Advanced Research in Computer Science and Electronics Engineering
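The SPV (smallest position value) rule is the piece that lets a continuous PSO particle encode a discrete schedule: the task whose position component is smallest is scheduled first. The greedy assignment onto identical resources and the makespan objective below are simplifying assumptions, not the paper's exact model.

```python
# Hedged sketch of the Smallest Position Value (SPV) mapping used with PSO for
# grid scheduling, plus a toy makespan evaluation of the resulting order.
import numpy as np

def spv_permutation(position):
    """Task order: the dimension with the smallest value is scheduled first."""
    return np.argsort(position)

def makespan(order, task_lengths, n_resources):
    """Greedy list scheduling of the SPV order onto identical resources."""
    finish = np.zeros(n_resources)
    for task in order:
        r = np.argmin(finish)              # earliest-available resource
        finish[r] += task_lengths[task]
    return finish.max()

rng = np.random.default_rng(1)
task_lengths = rng.integers(5, 30, size=10)
particle = rng.uniform(-1, 1, size=10)     # one PSO particle position
order = spv_permutation(particle)
print("task order:", order, "makespan:", makespan(order, task_lengths, 4))
```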
-
Trust Management System for Computational Grids
Computing systems are moving towards open, service-oriented, dynamic and distributed architectures. Environments such as grids support the coordination of various resources that span multiple administrative domains. The diverse nature of grid resources introduces the challenge of establishing trust between communicating entities. In this paper, the authors propose...
Provided By EuroJournals
-
Centralized and Distributed Replica Placement Algorithms for Data Grid
Common hurdles in the data grid are: ensuring availability, improving fault tolerance, reducing file access time, minimizing file transfer cost and controlling network bandwidth usage. The appropriate solution to overcome these hurdles is data replication. Two data replication algorithms are proposed in this paper for the storage servers of geographically...
Provided By EuroJournals
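As a hedged illustration of what a centralized replica placement algorithm can look like, the sketch below greedily adds the replica site that most reduces a total access-cost estimate until a replica budget is exhausted; the latency/request-rate cost model is an assumption and the paper's algorithms may differ substantially.

```python
# Hedged sketch of greedy centralized replica placement over a toy cost model.
import numpy as np

def greedy_placement(latency, requests, budget):
    """latency[i][j]: cost for site i to fetch from site j; requests[i]: rate."""
    n = latency.shape[0]
    replicas = set()

    def total_cost(reps):
        # Each site reads from its cheapest replica.
        return sum(requests[i] * min(latency[i][j] for j in reps) for i in range(n))

    for _ in range(budget):
        best = min((s for s in range(n) if s not in replicas),
                   key=lambda s: total_cost(replicas | {s}))
        replicas.add(best)                 # place the replica with the largest gain
    return replicas

rng = np.random.default_rng(2)
latency = rng.integers(1, 20, size=(6, 6))
np.fill_diagonal(latency, 0)
requests = rng.integers(1, 10, size=6)
print("replica sites:", greedy_placement(latency, requests, budget=2))
```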
-
Optimizing Latency and Throughput for Spawning Processes on Massively Multi-Core Processors
The execution of an SPMD application involves running multiple instances of a process with possibly varying arguments. With the widespread adoption of massively multi-core processors, there has been a focus on harnessing the abundant compute resources effectively in a power-efficient manner. Although much work has been done towards optimizing distributed...
Provided By Association for Computing Machinery
-
ViaWest Supports the Global Outreach Efforts of an International Organization on a Mission to Alleviate Suffering, Poverty and Oppression
As an international organization on a mission to alleviate suffering, poverty and oppression, Mercy Corps' primary focus is on serving others. With serving others as its top priority, Mercy Corps needed to focus on its core IT infrastructure to ensure a quick and reliable response during times of need. Mercy Corps required...
Provided By Viaway
-
Energy-Aware Service Allocation
In this paper the authors examine the problem of energy-aware resource allocation for hosting long-term services or on-demand compute jobs in clusters, e.g., deployed as part of computing infrastructures. They formalize the problem as three constrained optimization problems: maximize job performance under power consumption constraints, minimize power consumption under job...
Provided By University of Hawai'i
-
Integrating Web-Enabled Energy-Aware Smart Homes to the Smart Grid
Energy conservation is a global issue with great implications. High energy demands and environmental concerns force the transformation of electricity grids into smart grids, towards more rational utilization of energy. Embedded computing and smart metering transform houses into energy-aware environments, allowing residents to make informed choices about electricity. Web technologies...
Provided By University of Cumberlands
-
Dynamic Adaptive Virtual Core Mapping to Improve Power, Energy, and Performance in Multi-Socket Multicores
Consider a multithreaded parallel application running inside a multicore virtual machine context that is itself hosted on a multi-socket multicore physical machine. How should the VMM map virtual cores to physical cores? The authors compare a local mapping, which compacts virtual cores to processor sockets, and an interleaved mapping, which...
Provided By Association for Computing Machinery
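The two policies being compared reduce to simple index arithmetic, sketched below for a hypothetical machine with two sockets of four cores each; the exact mapping logic in the paper's VMM may differ.

```python
# Hedged sketch of the two virtual-to-physical core mappings the paper compares.

def local_mapping(vcore, cores_per_socket):
    """Compact: fill socket 0 completely before spilling to socket 1, etc."""
    return divmod(vcore, cores_per_socket)          # (socket, core within socket)

def interleaved_mapping(vcore, sockets):
    """Interleaved: spread consecutive virtual cores across sockets round-robin."""
    return (vcore % sockets, vcore // sockets)

for v in range(8):
    print(v, local_mapping(v, cores_per_socket=4), interleaved_mapping(v, sockets=2))
```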
-
Association Based Grid Resource Allocation Algorithm
Grid-association, a mechanism for coordinated sharing of distributed clusters based on computational economy, allows the transparent use of resources from the association when local resources are insufficient to meet users' requirements. The use of a computational-economy methodology in coordinating resource allocation not only facilitates the...
Provided By EuroJournals
-
Optimization of Security Communication Wired Network by Means of Genetic Algorithms
Realizing a secure wired network is very critical when the network must be installed in an environment full of restrictions and constraints, such as historical palaces characterized by unique architectural features. The purpose of this paper is to illustrate an advanced installation design technique for secure wired networks...
Provided By Scientific Research
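For orientation, a bare-bones genetic algorithm loop of the kind such design tools build on is sketched below; the bit-string encoding and the placeholder fitness function are assumptions standing in for the paper's cable-routing objective.

```python
# Hedged sketch of a basic genetic algorithm: selection, crossover, mutation.
import random

random.seed(4)
GENES, POP, GENERATIONS = 16, 30, 40

def fitness(chromosome):
    """Placeholder objective: reward alternating bits (stands in for a real cost)."""
    return sum(1 for a, b in zip(chromosome, chromosome[1:]) if a != b)

def crossover(p1, p2):
    cut = random.randrange(1, GENES)
    return p1[:cut] + p2[cut:]

def mutate(c, rate=0.05):
    return [g ^ 1 if random.random() < rate else g for g in c]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]                      # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```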
-
A New Approach to Grid Scheduling Using Random Weighted Genetic Algorithm with Fault Tolerance Strategy
The grid provides users with a huge amount of computational resources in a distributed manner, which the authors can use to perform their tasks over these grid environments. These resources are geographically distributed around the globe and are dynamically available. Hence, to schedule them for actual use, they need to consider various...
Provided By West Bend Broadcasting, Inc.
-
Coordinated Load Management in Peer-to-Peer Coupled Federated Grid Systems
This paper proposes a coordinated load management protocol for Peer-To-Peer (P2P) coupled federated Grid systems. The participants in the system, such as the resource providers and the consumers who belong to multiple control domains, work together to enable a coordinated federation. The coordinated load management protocol embeds a logical spatial...
Provided By Springer Healthcare
-
Searching for Software on the EGEE Infrastructure
Several large-scale Grid infrastructures are currently in operation around the world, federating an impressive collection of computational resources, a wide variety of application software, and hundreds of user communities. To better serve the current and prospective users of Grid infrastructures, it is important to develop advanced software retrieval services that...
Provided By Springer Healthcare
-
Fault-Management in P2P-MPI
The authors present in this paper a study on fault management in a grid middleware. The middleware is their home-grown software called P2P-MPI. This framework is MPJ compliant, allows users to execute message passing parallel programs, and its objective is to support environments using commodity hardware. Hence, running programs is...
Provided By Springer Healthcare
-
Numerical Methods for Solving Turbulent Flows by Using Parallel Technologies
A parallel implementation of an algorithm for the numerical solution of the Navier-Stokes equations for large eddy simulation (LES) of turbulence is presented in this research. The dynamic Smagorinsky model is applied for sub-grid simulation of turbulence. The numerical algorithm was worked out using a scheme of splitting on physical parameters. At the first...
Provided By Scientific Research
-
IaaS Clouds Vs. Clusters for HPC: A Performance Study
The increasing amount of data collected in the fields of physics and bio-informatics allows researchers to build realistic, and therefore accurate, models/simulations and gain a deeper understanding of complex systems. This analysis is often at the cost of greatly increased processing requirements. Cloud computing, which provides on demand resources, can...
Provided By IARIA
-
Towards Green HPC Blueprints
Effectiveness and power consumption are becoming major problems in high-performance computing. A number of researchers are working on methodologies to increase the efficiency of these systems at both the hardware and software levels. Several "green" technologies are explained in this paper along with their pros and cons, with the aim of...
Provided By IARIA
-
System Reliability of Fault Tolerant Data Center
A Single Point Of Failure (SPOF) in system operations is a weak point of system reliability. The Mean Time To Failure (MTTF) of system operations is determined by the shortest component MTTF in the system. A Tier IV data center is designed to eliminate SPOFs. Data center system reliability is not...
Provided By IARIA
-
VisIO: Enabling Interactive Visualization of Ultra-Scale, Time Series Data Via High-Bandwidth Distributed I/O Systems
Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network often are a significant impediment to interactive visualization of time-varying...
Provided By University of Central Florida
-
Chirp: A Practical Global Filesystem for Cluster and Grid Computing
Traditional distributed file system technologies designed for local and campus area networks do not adapt well to wide area grid computing environments. To address this problem, the authors have designed the Chirp distributed file system, which is designed from the ground up to meet the needs of grid computing. Chirp...
Provided By University of Northern Iowa
-
Grid Computing: The Next Decade
The evolution of the global scientific Cyber Infrastructure (CI) has, over the last 10+ years, led to a large diversity of CI instances. While specialized, competing and alternative CI building blocks are inherent to a healthy ecosystem, it also becomes apparent that the increasing degree of fragmentation is hindering interoperation,...
Provided By University of Northern Iowa
-
GrayWulf: Scalable Clustered Architecture for Data Intensive Computing
Data-intensive computing presents a significant challenge for traditional supercomputing architectures that maximize FLOPS, since CPU speed has surpassed the I/O capabilities of HPC systems and Beowulf clusters. The authors present the architecture for a three-tier commodity-component cluster designed for a range of data-intensive computations operating on petascale...
Provided By University of Hawaii
-
SHIP: Scalable Hierarchical Power Control for Large-Scale Data Centers
In today's data centers, precisely controlling server power consumption is an essential way to avoid system failures caused by power capacity overload or overheating due to increasingly high server density. While various power control strategies have been recently proposed, existing solutions are not scalable to control the power consumption of...
Provided By University of Tennessee
-
Power Optimization With Performance Assurance for Multi-Tier Applications in Virtualized Data Centers
Modern data centers must provide performance assurance for complex system software such as multi-tier web applications. In addition, the power consumption of data centers needs to be minimized to reduce operating costs and avoid system overheating. Various power-efficient performance management strategies have been proposed based on Dynamic Voltage and Frequency...
Provided By University of Tennessee
-
Hierarchical Power Control for Large-Scale Data Centers
In today's data centers, precisely controlling power consumption is an essential way to reduce operating costs, and avoid system failures caused by power capacity overload or overheating due to increasingly high server density. While various power control strategies have been recently proposed, existing solutions are not scalable to control the...
Provided By University of Tennessee
-
Cluster-Level Feedback Power Control for Performance Optimization
Power control is becoming a key challenge for effectively operating a modern data center. In addition to reducing operating costs, precisely controlling power consumption is an essential way to avoid system failures caused by power capacity overload or overheating due to increasingly high server density. Control-theoretic techniques have recently shown...
Provided By University of Tennessee
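A toy version of such a control loop is sketched below: a proportional controller nudges a cluster-wide DVFS level so that a modeled power draw tracks a budget. The cubic power model and the gain are illustrative assumptions, not the paper's controller design.

```python
# Hedged sketch of cluster-level feedback power control with a proportional
# controller and a toy power model.

def power_model(freq_level, base=100.0, dynamic=200.0):
    """Toy cluster power draw in watts as a function of a 0..1 frequency level."""
    return base + dynamic * freq_level ** 3

def control_loop(budget=220.0, steps=15, gain=0.002):
    freq = 1.0
    for t in range(steps):
        measured = power_model(freq)
        error = budget - measured                     # positive: room to speed up
        freq = min(1.0, max(0.1, freq + gain * error))  # adjust DVFS level
        print(f"step {t}: power {measured:6.1f} W, freq level {freq:.2f}")

control_loop()
```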
-
Towards Optimal Sensor Placement for Hot Server Detection in Data Centers
Recent studies have shown that a significant portion of the total energy consumption of many data centers is caused by the inefficient operation of their cooling systems. Without effective thermal monitoring with accurate location information, the cooling systems often use unnecessarily low temperature set points to overcool the entire room,...
Provided By University of Tennessee
-
Co-Con: Coordinated Control of Power and Application Performance for Virtualized Server Clusters
Today's data centers face two critical challenges. First, various customers need to be assured that their required service-level agreements, such as response time and throughput, are met. Second, server power consumption must be controlled in order to avoid failures caused by power capacity overload or system overheating due to increasingly high...
Provided By University of Tennessee
-
Adaptive Power Control for Server Clusters
Power control is becoming a key challenge for effectively operating a modern data center. In addition to reducing operating costs, precisely controlling power consumption is an essential way to avoid system failures caused by power capacity overload or overheating due to increasingly high density. Control-theoretic techniques have recently shown a lot...
Provided By University of Tennessee
-
SHIP: A Scalable Hierarchical Power Control Architecture for Large-Scale Data Centers
In today's data centers, precisely controlling server power consumption is an essential way to avoid system failures caused by power capacity overload or overheating due to increasingly high server density. While various power control strategies have been recently proposed, existing solutions are not scalable to control the power consumption of...
Provided By University of Tennessee
-
MDCSim: A Multi-Tier Data Center Simulation Platform
Performance and power issues are becoming increasingly important in the design of large, cluster-based multi-tier data centers for supporting a multitude of services. The design and analysis of such large, complex distributed systems often suffers from the lack of an adequate physical infrastructure and from cost constraints, especially in...
Provided By Pennsylvania State University
-
Challenges in Improving the Survivability of Data Centers
The survivability of data centers is critical to the survivability of the whole enterprise computing system. Building a truly survivable data center faces several daunting challenges, and existing techniques are still quite limited in handling them. This paper briefly discusses a particular set of challenges the authors view as...
Provided By Pennsylvania State University
-
MROrchestrator: A Fine-Grained Resource Orchestration Framework for MapReduce Clusters
Efficient resource management in data centers and clouds running large distributed data processing frameworks like MapReduce is crucial for enhancing the performance of hosted applications and increasing resource utilization. However, existing resource scheduling schemes in Hadoop MapReduce allocate resources at the granularity of fixed-size, static portions of nodes, called slots....
Provided By Pennsylvania State University
-
Applications of Data Mining in the Management of Performance and Power in Data Centers
Performance and power issues are becoming increasingly important in the design of large data centers for supporting a multitude of services. There are many perspectives of addressing these issues using various computer science principles. In this paper, the author will discuss the applications of data mining techniques to manage power...
Provided By Pennsylvania State University
-
Perspectives On Distressed Assets: Banking And Securities Outlook
In December 2009, Deloitte's Center for Banking Solutions published its Outlook 2010, summarizing five trends that would dominate the banking and securities industry. Dealing with distressed assets was top of the list, and the report pointed out that 2010 represents something of a watershed moment, as banks in particular face...
Provided By Deloitte LLP
-
Coordinating Government Funding of File System and I/O Research Through the High End Computing University Research Activity
In 2003, the High End Computing Revitalization Task Force designated file systems and I/O as an area in need of national focus. The purpose of the High End Computing Interagency Working Group (HECIWG) is to coordinate government spending on File Systems and I/O (FSIO) R&D by all the government agencies...
Provided By NASA
-
Hedge Portfolios In Markets With Price Discontinuities
The authors consider a market consisting of multiple assets under jump-diffusion dynamics with European style options written on these assets. It is well-known that such markets are incomplete in the Harrison and Pliska sense. They derive a pricing relation by adopting a Radon-Nikodym derivative based on the exponential martingale of...
Provided By University of Technology Sydney
-
Divide & Conquer: Overcoming Computer Forensic Backlog Through Distributed Processing and Division of Labor
Until an organization is able to efficiently leverage existing resources, it will find itself trapped in the vicious cycle of too much work, too few people. Implementing a solution that amplifies existing resources by streamlining the investigative process and getting the most out of an organization's hardware is a permanent...
Provided By Access Data
-
FutureGrid Image Repository: A Generic Catalog and Storage System for Heterogeneous Virtual Machine Images
FutureGrid (FG) is an experimental, high-performance testbed that supports HPC, cloud and grid computing experiments for both application and computer scientists. FutureGrid includes the use of virtualization technology to allow the support of a wide range of operating systems in order to provide a testbed for various cloud computing infrastructure...
Provided By Indiana University
-
Sigiri: Uniform Abstraction for Large-Scale Compute Resource Interactions
Scientists who conduct mid-range computationally heavy modeling and analysis often scramble to find sufficient computational resources to test and run their codes. The science they carry out is not petascale or even terascale science but the computational needs often go beyond what can be satisfied by their university. With the...
Provided By Indiana University
-
Energy-Aware High Performance Computing - A Taxonomy Study
Reducing energy consumption and building a sustainable computing infrastructure have become major goals of the high-performance computing community. A number of research projects have been carried out in the field of energy-aware high-performance computing. This paper categorizes energy-aware computing methods for the high-end...
Provided By Indiana University
-
TIDeFlow: The Time Iterated Dependency Flow Execution Model
The many-core revolution brought forward by recent advances in computer architecture has created immense challenges in the writing of parallel programs for High Performance Computing (HPC). Development of parallel HPC programs remains an art, and a universal doctrine for synchronization, scheduling and execution in general has not been found for...
Provided By University of Delaware
-
Polytasks: A Compressed Task Representation for HPC Runtimes
The increased number of execution units in many-core processors is driving numerous paradigm changes in parallel systems. Previous techniques that focused solely upon obtaining correct results are being rendered obsolete unless they can also provide results efficiently. This paper dives into the particular problem of efficiently supporting fine-grained task creation...
Provided By University of Delaware
-
Sigiri: Towards A Light-Weight Job Management System for Large Scale Systems
e-Science applications are often compute and data intensive, requiring large-scale compute systems for execution. Large-scale systems, however, support a variety of resource management interfaces that an end user must adapt to for compute job submission and management. Grid middleware solutions abstract these heterogeneous resource managers and offer a single unified...
Provided By Indiana University
-
ESG - Avoiding the Hazards of IT Consolidation
In an effort to reduce costs and streamline operations, today's large, distributed organizations are investing more in data center transformation, consolidation, and server virtualization. And yet many IT consolidation projects suffer from poor planning and implementation issues that ultimately impact application performance and user productivity. This technology brief investigates the...
Provided By Riverbed
-
SWARM: Scheduling Large-Scale Jobs Over the Loosely-Coupled HPC Clusters
Compute-intensive scientific applications are heavily reliant on the available quantity of computing resources. The Grid paradigm provides a large-scale computing environment for scientific users. However, conventional Grid job submission tools do not provide a high-level job scheduling environment for these users across multiple institutions. For an extremely large number of...
Provided By Indiana University
-
The Design and Implementation of a Multi-Level Content-Addressable Checkpoint File System
Long-running HPC applications guard against node failures by writing checkpoints to parallel file systems. Writing these checkpoints on petascale-class machines has proven difficult, and the increased concurrency demands of exascale computing will exacerbate this problem. To meet checkpointing demands and sustain application-perceived throughput at exascale, multi-tiered hierarchical storage architectures...
Provided By Indiana University
-
GoDEL: A Multidirectional Dataflow Execution Model for Large-Scale Computing
As the emerging trends in hardware architecture, guided by performance, power efficiency and complexity, drive one towards massive processor parallelism, there has been a renewed interest in dataflow models for large-scale computing. Dataflow programming models, being declarative in nature, lead to improved programmability at scale by implicitly managing the computation...
Provided By Indiana University
-
Banks' Great Bailout Of 2008-2009
This paper examines government policies aimed at rescuing banks from the effects of the financial crisis of 2007-2009. To delimit the scope of the analysis, the authors concentrate on the fiscal side of interventions and ignore, by design, the monetary policy reaction to the crisis. The policy response to the...
Provided By Indiana University