Data Centers
Along with the rise of cloud computing, data centers are being reinvented through virtualization, servers, and high-performance computing. Find out more with the latest white papers and case studies.
-
A Survey of Different Residential Energy Consumption Controlling Techniques for Autonomous DSM in Future Smart Grid Communications
In this paper, the authors present a survey of residential load controlling techniques to implement demand side management in the future smart grid. The power generation sector is facing important challenges, in both quality and quantity, to meet the increasing requirements of consumers. Energy efficiency, reliability, economics and integration of new energy resources...
Provided By COMSATS Institute of Information Technology
-
Home Energy Management Systems in Future Smart Grids
The authors present a detailed review of various Home Energy Management Schemes (HEMS). HEMS will increase savings and reduce both peak demand and the Peak-to-Average Ratio (PAR); a worked PAR example follows this entry. Among the various applications of smart grid technologies, home energy management is probably the most important one to be addressed. Various steps have been taken by...
Provided By COMSATS Institute of Information Technology
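For context on the PAR objective mentioned above, this is the standard definition, with a small illustrative example that is not taken from the paper:

\mathrm{PAR} = \frac{\max_{t} P(t)}{\frac{1}{T}\int_{0}^{T} P(t)\,dt}

A household that averages 1 kW over a day but draws 5 kW during the evening peak has PAR = 5/1 = 5; shifting deferrable appliances so the peak never exceeds 2.5 kW halves the PAR to 2.5, which is exactly the kind of reduction HEMS scheduling aims for.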
-
Temporal Bandwidth-Intensive Virtual Network Allocation Optimization in a Data Center Network
In this paper, the authors consider bandwidth-intensive services for customers that want Virtual Networks (VN) in a data center environment. In particular, they consider this problem in a temporal context where bandwidth-intensive requests from each VN may arrive randomly at a review point, which may last for a certain duration. Thus,...
Provided By University of Missouri-Columbia
-
Joint Virtual Machine Assignment and Traffic Engineering for Green Data Center Networks
The popularization of cloud computing brings an emerging concern about the energy consumption in big data centers. Besides the servers, the energy consumed by the network in a data center is also considerable. Existing works for improving network energy efficiency are mainly focused on traffic engineering, i.e., consolidating flows and...
Provided By University of Western Australia
-
Subsea Supplier Improves Application and Data Center Performance
Innovative technologies and the rising price of natural resources have pushed more companies into exploring and developing offshore oil and gas fields. The company wanted to support business activities globally by providing up-to-date reporting, virtualizing and improving the performance of servers running SAP applications, and streamlining data center management. Updated data...
Provided By Cisco
-
Two-Level Throughput and Latency IO Control for Parallel File Systems
Existing parallel file systems are unable to provide both throughput and response-time guarantees for concurrent parallel applications. This limitation prevents different, competing applications from getting their desired performance as High-Performance Computing (HPC) systems continue to scale up and be used in a shared environment. This paper presents a new...
Provided By Florida International University
-
Dynamic Management Techniques For Increasing Energy Efficiency within a Data Center
Today, data centers provide the global community with an indispensable service: nearly unlimited access to almost any kind of information imaginable, by supporting most Internet services, such as web hosting and e-commerce. Because of their capacity and their work, data centers have various impacts on...
Provided By University of British Columbia - Department of Computer Science
-
Data Center Best Practices: Managing Data with Cloud Computing
Data centers are looking at cloud computing as a way to deliver the necessary flexibility, scalability, efficiency and speed. This article looks at data center best practices and focuses on how companies can leverage data in innovative ways to better respond to changing market requirements and demands.
Provided By Oracle
-
Energy-Optimized Dynamic Deferral of Workload for Capacity Provisioning in Data Centers
In this paper, the authors explore the opportunity for energy cost savings in data centers by utilizing the flexibility from Service Level Agreements (SLAs), and propose a novel approach for capacity provisioning under bounded latency requirements of the workload. They investigate how many servers to keep active and how...
Provided By Institute of Electrical & Electronic Engineers
-
Locally Repairable Codes With Multiple Repair Alternatives
Distributed storage systems need to store data redundantly in order to provide some fault-tolerance and guarantee system reliability. Different coding techniques have been proposed to provide the required redundancy more efficiently than traditional replication schemes. However, compared to replication, coding techniques are less efficient for repairing lost redundancy, as they...
Provided By Nanyang Technological University
-
Bank Rakyat Indonesia Expands Domestic Coverage With Unified Data Center Architecture
PT Bank Rakyat Indonesia (BRI) is constantly seeking out new markets to serve. Its challenges were keen competition in targeted markets, the need to align data center services with business expansion, and the need to optimize data center operations. The bank chose Cisco to overcome these challenges. They implemented Cisco Unified Data Center...
Provided By Cisco Systems
-
Non-Monetary Fair Scheduling - A Cooperative Game Theory Approach
The authors consider a multi-organizational system in which each organization contributes processors to the global pool but also jobs to be processed on the common resources. The fairness of the scheduling algorithm is essential for the stability and even for the existence of such systems (as organizations may refuse to...
Provided By Association for Computing Machinery
-
Preventive Inference Control in Data-centric Business Models
Inference control is a modern topic in data usage management, especially in the context of data-centric business models. However, it is generally not well understood how protection mechanisms could be designed to protect the users. The contributions of this paper are threefold: firstly, it describes the inference problem and relate...
Provided By University of Frankfurt
-
An Implementation of Binomial Method of Option Pricing using Parallel Computing
The Binomial method of option pricing is based on iterating over discounted option payoffs in a recursive fashion to calculate the present value of an option; a serial sketch of this recursion follows this entry. Implementing the Binomial method to exploit the resources of a parallel computing cluster is non-trivial, as the method is not easily parallelizable. The authors...
Provided By University of Mary Washington
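As background for the recursion described in this abstract, below is a minimal serial sketch of the Cox-Ross-Rubinstein binomial pricer for a European call, written in C++. It is an illustration only, not the authors' parallel implementation; the function name and parameters are chosen for this sketch.

#include <algorithm>
#include <cmath>
#include <vector>

// Serial CRR binomial pricer for a European call option.
// S: spot price, K: strike, r: risk-free rate, sigma: volatility,
// T: time to maturity (years), n: number of tree steps.
double binomial_call(double S, double K, double r, double sigma,
                     double T, int n) {
    const double dt = T / n;
    const double u = std::exp(sigma * std::sqrt(dt));    // up factor
    const double d = 1.0 / u;                            // down factor
    const double p = (std::exp(r * dt) - d) / (u - d);   // risk-neutral up probability
    const double disc = std::exp(-r * dt);

    // Option payoffs at the terminal nodes of the tree.
    std::vector<double> v(n + 1);
    for (int i = 0; i <= n; ++i)
        v[i] = std::max(S * std::pow(u, i) * std::pow(d, n - i) - K, 0.0);

    // Backward induction: each level depends on the level after it.
    for (int step = n; step > 0; --step)
        for (int i = 0; i < step; ++i)
            v[i] = disc * (p * v[i + 1] + (1.0 - p) * v[i]);

    return v[0];  // present value at the root
}

Within a single tree level the node updates are independent, so a parallel version can split each level across workers, but the levels themselves must be processed in order; that dependency is what makes parallelization non-trivial.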
-
Asset Management Company Builds New, Agile Data Centers
Allianz Global Investors (AGI), a subsidiary of Allianz and a world leader in asset management built largely through the acquisition of top asset managers, faced the challenge of streamlining data center management by centralizing and consolidating five existing data centers and standardizing on platforms and applications across the acquired assets. They chose Cisco...
Provided By Cisco
-
Speeding Up Searching in B-cloud System
With the enormous day-by-day growth in data volume, storage demands have imposed crucial requirements on data centers, where data access plays an essential role in estimating the effectiveness of data storage systems. Accordingly, a lot of effort has gone into speeding up searching in huge...
Provided By Huazhong University of Science & Technology
-
Data Center Resource Management with Temporal Dynamic Workload
The proliferation of Internet services is driving data center expansion in both size and number. More importantly, energy consumption (as part of the Total Cost of Ownership (TCO)) has become a social concern. For a given workload demand, data center operators want to minimize their TCO. On the...
Provided By University of Missouri-Columbia
-
Job Scheduling Algorithm for Computational Grid in Grid Computing Environment
Grid computing has emerged as a distributed methodology that coordinates resources spread across a heterogeneous distributed environment. Grid computing allows sharing of resources from heterogeneous and distributed locations. Grid computing has a wide variety of application areas, including science, medicine and research. But there are also some...
Provided By International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE)
-
Global Load Balancing and Fault Tolerant Scheduling in Computational Grid
Grid computing is a form of distributed computing that uses geographically and administratively disparate resources found on the network. These resources may include processing power, storage capacity, specific data, and other hardware such as input and output devices. In grid computing, individual users can access computers and data transparently, without...
Provided By International Journal of Engineering and Innovative Technology (IJEIT)
-
Hybrid Optical and Electrical Network Flows Scheduling in Cloud Data Centers
Hybrid intra-data centre networks, with optical and electrical capabilities, have been attracting research interest in recent years. This is attributed to the emergence of new bandwidth-greedy applications and novel computing paradigms. A key decision to make in networks of this type is the selection and placement of suitable flows for...
Provided By University of Essen
-
Brennercom Invests in Cloud and High-Density Innovation at Its Data Center
Brennercom is an Italian communications provider active over a vast area stretching from the provinces of Trentino-Alto Adige across the territory. Its challenges were to enhance data center capabilities with a flexible, high-performance solution able to deliver cloud and virtualization services, and to develop a new business model. They chose...
Provided By Cisco
-
Avoid making major mistakes in data center virtualization (Spanish)
With increasing demand for data centers, network infrastructure may become more complex and more costly. In this article from Cisco, you'll discover how to avoid the five major errors during consolidation and virtualization of data centers and how to correct them. Read the report.
Provided By Cisco
-
Scheduling Tightly-Coupled Applications on Heterogeneous Desktop Grids
Platforms that comprise volatile processors, such as desktop grids, have been traditionally used for executing independent-task applications. In this paper, the authors examine the scheduling of tightly-coupled iterative master-worker applications onto volatile processors. The main challenge is that workers must be simultaneously available for the application to make progress. They...
Provided By University of Hawai'i
-
Electrical Efficiency Measurement for Data Centers
Data center electrical efficiency is rarely planned or managed. The unfortunate result is that most data centers waste substantial amounts of electricity. Today it is both possible and prudent to plan, measure, and improve data center efficiency; a worked example of the standard PUE metric follows this entry. In addition to reducing electrical consumption, efficiency improvements can gain users higher IT...
Provided By APC by Schneider Electric
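For readers new to the topic, the most widely used efficiency metric is Power Usage Effectiveness (PUE), defined as

\mathrm{PUE} = \frac{\text{total facility power}}{\text{IT equipment power}}, \qquad \mathrm{DCiE} = \frac{1}{\mathrm{PUE}}

As a worked example (not taken from the paper): a facility drawing 1,500 kW to deliver 1,000 kW to IT loads has PUE = 1.5 and DCiE of about 67%; trimming cooling and distribution losses so the same IT load needs only 1,300 kW at the meter improves the PUE to 1.3.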
-
Choosing Between Room, Row, and Rack-based Cooling for Data Centers
The latest generation of high-density and variable-density IT equipment creates conditions that traditional data center cooling was never intended to address, resulting in cooling systems that are oversized, inefficient, and unpredictable. Room, row, and rack-based cooling methods have been developed to address these problems. This paper describes these improved cooling...
Provided By APC by Schneider Electric
-
Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency
Both hot-air and cold-air containment can improve the predictability and efficiency of traditional data center cooling systems. While both approaches minimize the mixing of hot and cold air, there are practical differences in implementation and operation that have significant consequences on work environment conditions, PUE, and economizer mode hours. The...
Provided By APC by Schneider Electric
-
Ten Errors to Avoid When Commissioning a Data Center
Data center commissioning can deliver an unbiased evaluation of whether a newly constructed data center will be an operational success or a failure. Proper execution of the commissioning process is a critical step in determining how the data center operates as an integrated system. The documentation produced as a result...
Provided By APC by Schneider Electric
-
Containerized Power and Cooling Modules for Data Centers
Standardized, pre-assembled and integrated data center facility power and cooling modules are at least 60% faster to deploy, and provide a first cost savings of 13% or more compared to traditional data center power and cooling infrastructure. Facility modules, also referred to in the data center industry as containerized power...
Provided By APC by Schneider Electric
-
TCO Analysis of a Traditional Data Center vs. a Scalable, Containerized Data Center
Standardized, scalable, pre-assembled, and integrated data center facility power and cooling modules provide a “total cost of ownership” (TCO) savings of 30% compared to traditional, built-out data center power and cooling infrastructure. Avoiding overbuilt capacity and scaling the design over time contributes to a significant percentage of the overall savings....
Provided By APC by Schneider Electric
-
Cooling Entire Data Centers Using Only Row Cooling
Row cooling is emerging as a practical total cooling solution for new data centers due to its inherent high efficiency and predictable performance. Yet some IT equipment in data centers appears incompatible with row cooling because it is not arranged in neat rows due to the nature of the equipment...
Provided By APC by Schneider Electric
-
Data Center Temperature Rise During a Cooling System Outage
The data center architecture and its IT load significantly affect the amount of time available for continued IT operation following a loss of cooling; a rough first-order illustration follows this entry. Some data center trends such as increasing power density, warmer supply temperatures, the “right-sizing” of cooling equipment, and the use of air containment may actually increase...
Provided By APC by Schneider Electric
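As a rough first-order illustration of the physics involved (a deliberately simplified assumption, not the paper's model): if cooling stops entirely and only the room air absorbs the heat, the air temperature rises at approximately

\frac{dT}{dt} \approx \frac{P_{\mathrm{IT}}}{m_{\mathrm{air}}\, c_p}

where P_IT is the IT heat load, m_air the mass of air in the room, and c_p ≈ 1.0 kJ/(kg·K). A 100 kW load in a room containing roughly 1,000 kg of air would then heat the air by about 0.1 K per second. In practice the thermal mass of equipment, floors, and walls, plus any ride-through cooling, slows the rise considerably, which is why architecture and density matter so much.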
-
A C++11 Implementation of Arbitrary-Rank Tensors for High-Performance Computing
In this paper, the authors discuss an efficient implementation of tensors of arbitrary rank using some of the idioms introduced by the recently published C++ ISO Standard (C++11); a minimal sketch of the underlying idiom follows this entry. With the aim of providing a basic building block for high-performance computing, a single Array class template is carefully crafted, from...
Provided By Cornell University
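To give a flavor of the C++11 machinery such an implementation typically relies on (variadic templates and compile-time rank checks), here is a minimal sketch. The class name and interface are illustrative only and are not the paper's actual Array template.

#include <array>
#include <cstddef>
#include <vector>

// Illustrative rank-N array: extents are fixed at construction and the
// elements are stored contiguously in row-major order.
template <typename T, std::size_t Rank>
class RankedArray {
public:
    template <typename... Dims>
    explicit RankedArray(Dims... dims)
        : extents_{{static_cast<std::size_t>(dims)...}},
          data_(count(extents_)) {
        static_assert(sizeof...(Dims) == Rank, "wrong number of extents");
    }

    // Row-major indexing: offset = ((i0*n1 + i1)*n2 + i2) * ...
    template <typename... Idx>
    T& operator()(Idx... idx) {
        static_assert(sizeof...(Idx) == Rank, "wrong number of indices");
        const std::array<std::size_t, Rank> ii{{static_cast<std::size_t>(idx)...}};
        std::size_t offset = 0;
        for (std::size_t d = 0; d < Rank; ++d)
            offset = offset * extents_[d] + ii[d];
        return data_[offset];
    }

private:
    static std::size_t count(const std::array<std::size_t, Rank>& e) {
        std::size_t n = 1;
        for (std::size_t d = 0; d < Rank; ++d) n *= e[d];
        return n;
    }

    std::array<std::size_t, Rank> extents_;
    std::vector<T> data_;
};

// Usage: RankedArray<double, 3> t(4, 4, 4); t(1, 2, 3) = 5.0;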
-
Embrace the New Era of Data Centre Services and the Cloud
Read this white paper to learn about the trends affecting the data center and how cloud services are playing a huge role in helping organizations to modernize, improve utilization, and future-proof their data centers for current and future success.
Provided By Hewlett Packard
-
Insurance Company Virtualizes Data Center and Desktops
Promutuel originated 160 years ago when farmers from Quebec had the innovative idea to band together and create their own insurance company. Its challenges were runaway HQ data center server costs, with continual expansion straining space, power, and cooling budgets, and differing WAN configurations and server setups at 150 remote...
Provided By Cisco
-
Data Center Physical Infrastructure: Optimising Business Value
To stay competitive in today’s rapidly changing business world, companies must update the way they view the value of their investment in data center physical infrastructure (DCPI). No longer are simply availability and upfront cost sufficient to make adequate business decisions. Agility, or business flexibility, and low total cost of...
Provided By Schneider Electric - AU
-
Top 10 Mistakes in Data Center Operations: Operating Efficient and Effective Data Centers
How can you avoid making major mistakes when operating and maintaining your data center(s)? The key lies in the methodology behind your operations and maintenance program. All too often, companies put immense amounts of capital and expertise into the design of their facilities. However, when construction is complete, data center...
Provided By Schneider Electric - AU
-
Solutions for a More Energy-Efficient Data Center: From Water to Fresh-Air Cooling
This white paper discusses recent trends in the industry toward increasing energy efficiency of the data center in order to save on power and cooling costs without reducing performance. Included in the discussion is a review of the key components in the data center that can affect power consumption and...
Provided By IBM
-
Algorithmics and IBM Platform Computing solution for financial markets
This paper describes best practices on how to spread the computational demands of advanced risk analytics across a dynamic grid computing environment. With a more advanced and capable computing infrastructure, banks can move from reactively measuring risk to actively managing risk based on timely insights drawn from rigorous analysis, all...
Provided By IBM
-
Grid Maturity Report
As grid technology surpasses its tenth year in the banking industry, it’s more important than ever for banks to determine how mature grid technology performs alongside newer innovations like cloud and big data. The new survey from Excelian seeks to help organizations baseline themselves against each other and to help...
Provided By IBM
-
A General Framework for Achieving Energy Efficiency in Data Center Networks
The popularization of cloud computing has brought a concern over the energy consumption in data centers. In addition to the energy consumed by servers, the energy consumed by the large number of network devices emerges as a significant problem. Existing work on energy-efficient data center networking primarily focuses on traffic...
Provided By Tsinghua University
-
Minimizing Data Center Cooling and Server Power Costs
Data centers are often criticized for the high power consumption of running information technology equipment and air conditioning. The paper focuses on total data center power optimization through a mathematical problem formulation, along with an efficient algorithm to bring down the total data center power cost to...
Provided By University of Southern California (Marshall)
-
Information Acquisition And Under-diversification
If an investor wants to form a portfolio of risky assets and can exert effort to collect information on the future value of these assets before he invests, which assets should he learn about? The best assets to acquire information about are ones the investor expects to hold. But the...
Provided By National Bureau of Economic Research
-
Ratings Shopping And Asset Complexity: A Theory Of Ratings Inflation
Many identify inflated credit ratings as one contributor to the recent financial market turmoil. The authors develop an equilibrium model of the market for ratings and use it to examine possible origins of and cures for ratings inflation. In the model, asset issuers can shop for ratings - observe multiple...
Provided By National Bureau of Economic Research
-
Equity Depletion From Government-Guaranteed Debt
Government guarantees of private debt deplete equity. The depletion is greatest during periods when the probability of a guarantee payoff is highest. In a setting otherwise subject to Modigliani-Miller neutrality, firms issue guaranteed debt up to the limit the government permits. Declines in asset values raise debt in relation to...
Provided By National Bureau of Economic Research
-
Risk Price Dynamics
The authors present a novel approach to depicting asset pricing dynamics by characterizing shock exposures and prices for alternative investment horizons. They quantify the shock exposures in terms of elasticities that measure the impact of a current shock on future cash-flow growth. The elasticities are designed to accommodate nonlinearities in...
Provided By National Bureau of Economic Research
-
Cognition And Economic Outcomes In The Health And Retirement Survey
Dimensions of cognitive skills are potentially important but often neglected determinants of the central economic outcomes that shape overall well-being over the life course. There exists enormous variation among households in their rates of wealth accumulation, their holdings of financial assets, and the relative risk in their chosen asset portfolios...
Provided By National Bureau of Economic Research
-
Disasters Implied By Equity Index Options
The authors use prices of equity index options to quantify the impact of extreme events on asset returns. They define extreme events as departures from normality of the log of the pricing kernel and summarize their impact with high-order cumulants: skewness, kurtosis, and so on. They show that high-order cumulants...
Provided By National Bureau of Economic Research
-
Challenges for Parallel I/O in Grid Computing
With virtually limitless resources, GRID computing has the potential to solve large-scale scientific problems that eclipse even applications that run on the largest computing clusters today. The architecture of a computing GRID simply consists of a heterogeneous network infrastructure connecting heterogeneous machines presumed to be larger than most clusters of...
Provided By Northwestern University
-
Achieving Target MTTF by Duplicating Reliability-Critical Components in High Performance Computing Systems
Mean Time To Failure (MTTF) is a commonly accepted metric for reliability. In this paper, the authors present a novel approach to achieve a desired MTTF with minimum redundancy; an illustration of the effect of duplication on MTTF follows this entry. They analyze the failure behavior of large-scale systems using failure logs collected by Los Alamos National Laboratory. They analyze the...
Provided By Northwestern University
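For intuition about why duplicating a reliability-critical component helps (a textbook illustration assuming exponential failures and no repair, not necessarily the authors' exact model): a single component with failure rate \lambda has \mathrm{MTTF} = 1/\lambda, while a duplicated pair that survives as long as either copy works has

\mathrm{MTTF}_{\mathrm{pair}} = \int_{0}^{\infty} \left(2e^{-\lambda t} - e^{-2\lambda t}\right) dt = \frac{3}{2\lambda}

i.e., duplication raises the expected time to failure by 50% even before repair is considered, and the gain is largest when applied to the components that dominate system failures.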
-
Evaluating the Impact of Data Center Network Architectures on Application Performance in Virtualized Environments
In recent years, Data Center Network (DCN) architectures (e.g., DCell, FiConn, BCube, FatTree, and VL2) received a surge of interest from both the industry and academia. However, none of existing efforts provide an in-depth understanding of the impact of these architectures on application performance in practical multi-tier systems under realistic...
Provided By Northwestern University
-
Dynamic Software Updates for Parallel High Performance Applications
Despite using multiple concurrent processors, a typical high performance parallel application is long-running, taking hours, even days to arrive at a solution. To modify a running high performance parallel application, the programmer has to stop the computation, change the code, redeploy, and enqueue the updated version to be scheduled to...
Provided By Virginia Tech
-
Reusable Software Components for Accelerator-Based Clusters
The emerging accelerator-based heterogeneous clusters, comprising specialized processors such as the IBM Cell and GPUs, have exhibited excellent price to performance ratio as well as high energy-efficiency. However, developing and maintaining software for such systems is fraught with challenges, especially for modern High-Performance Computing (HPC) applications that can benefit the...
Provided By Virginia Tech
-
GePSeA: A General-Purpose Software Acceleration Framework for Lightweight Task Offloading
Specialized hardware accelerators have helped to improve application performance for many years. And as the authors scale to hundreds and thousands of cores, complex tasks, such as advanced application specific processing, need to be offloaded to these accelerators in order to achieve better performance scalability. However, such specialized hardware accelerators...
Provided By Virginia Tech
-
On the Energy Efficiency of Graphics Processing Units for Scientific Computing
The Graphics Processing Unit (GPU) has emerged as a computational accelerator that dramatically reduces the time to discovery in High-End Computing (HEC). However, while today's state-of-the-art GPU can easily reduce the execution time of a parallel code by many orders of magnitude, it arguably comes at the expense of significant...
Provided By Virginia Tech
-
Grid Learning Classifiers - A Web Based Interface
The toolkit for learning classifier systems for grid data mining is a communication channel between remote users and the gridclass system. The gridclass system is a system for grid data mining, a grid computing approach to distributed data mining. This toolkit is a web-based system, so end users can set the...
Provided By University of Milano-Bicocca
-
Asset Based Lending: Is Now the Time?
Asset Based Lending (ABL) at its most basic is simply lending secured by an asset pledged by a borrower. However, in the financial services world it has become more narrowly defined as lending focused on the collateral asset's value for the loan structure. This is in contrast to the more...
Provided By Wisemar
-
Avoiding Overages by Deferred Aggregate Demand for PEV Charging on the Smart Grid
The authors model the aggregate overnight demand for electricity by a large community of (possibly hybrid) Plug-in Electric Vehicles (PEVs) each of whose power demand follows a prescribed profile and is interruptible. Rather than a spot-price system for household consumers (which would necessarily need to be operated by automated means...
Provided By Uppror Media Group
-
Efficient Access to Many Small Files in a Filesystem for Grid Computing
Many potential users of grid computing systems have a need to manage large numbers of small files. However, computing and storage grids are generally optimized for the management of large files. As a result, users with small files achieve performance several orders of magnitude worse than possible. Archival tools and...
Provided By University of Notre Dame
-
Applying Feedback Control to a Replica Management System
Many modern storage systems used for large-scale scientific systems are multiple use, independently administrated clusters or grids. A common technique to gain storage reliability over a long period of time is the creation of data replicas on multiple servers, but in the presence of server failures, ongoing corrective action must...
Provided By University of Notre Dame
-
Dynamic Partitioning of the Cache Hierarchy in Shared Data Centers
Due to the imperative need to reduce the management costs of large data centers, operators multiplex several concurrent database applications on a server farm connected to shared network attached storage. Determining and enforcing per-application resource quotas in the resulting cache hierarchy, on the fly, poses a complex resource allocation problem...
Provided By VLDB Endowment
-
Out-of-Order Processing: A New Architecture for High-Performance Stream Systems
Many stream-processing systems enforce an order on data streams during query evaluation to help unblock blocking operators and purge state from stateful operators. Such In-Order Processing (IOP) systems not only must enforce order on input streams, but also require that query operators preserve order. This order preserving requirement constrains the...
Provided By VLDB Endowment
-
An Agent-Based Service Discovery Algorithm Using Agent Directors for Grid Computing
Grid computing has emerged as a viable method to solve computational and data-intensive problems which are executable in various domains from business computing to scientific research. However, grid environments are largely heterogeneous, distributed and dynamic, all of which increase the complexities involved in developing grid applications. Several software systems have been...
Provided By Azad University
-
Resource Provisioning based on Preempting Virtual Machines in Resource Sharing Environments
Resource provisioning is one of the main challenges in large-scale resource sharing environments such as federated Grids. Recently, many resource management systems in these environments have started to use the lease abstraction and Virtual Machines (VMs) for resource provisioning. In resource sharing environments resource providers serve requests from external users...
Provided By University of Medicine and Pharmacy
-
Double Auction-Inspired Meta-Scheduling of Parallel Applications on Global Grids
Meta-schedulers map jobs to computational resources that are part of a grid, such as clusters, that in turn have their own local job schedulers. Existing Grid meta-schedulers either target system-centric metrics, such as utilization and throughput, or prioritize applications based on utility metrics provided by the users. The system-centric approach...
Provided By University of Medicine and Pharmacy
-
Five Layer Security Architecture & Policies for Grid Computing System
In this paper, the authors describe the four-layer architecture of the Grid Computing System and analyze security requirements and problems existing in Grid Computing Systems. This paper presents a new approach of a five-layer security architecture for Grid Computing Systems, defines a new set of security policies, and gives their representation. State...
Provided By International Journal of Computer Science and Information Technologies
-
Information-Aware Scheduling Strategies for Desktop Grid Environment
In this paper, the authors show how it is possible to build information-aware schedulers able to outperform the Work Queue with Replication Fault-Tolerant Scheduler (WQR-FT). They propose different scheduling policies considering information about resources and applications. They discuss two task selection policies and four machine selection policies that, when...
Provided By International Journal of Computer Science and Information Technologies
-
Dynamic Deferral of Workload for Capacity Provisioning in Data Centers
The recent increase in energy prices has led researchers to find better ways of capacity provisioning in data centers to reduce energy wastage due to variation in workload. This paper explores the opportunity for cost savings by utilizing the flexibility from Service Level Agreements (SLAs) and proposes a novel approach...
Provided By Shandong Institute of Business And Technology
-
Design and Implementation of GUISET-Driven Authentication Framework
Authentication is the process of identifying a user on the basis of the credentials provided. In reality, the user does not necessarily have to be a person; it can be an application that is making a remote call from the intranet or Internet. Different Grid domains provide different security policies...
Provided By University of Zimbabwe
-
Channel Middle-Tier Strategy For A Leading North American Financial Services Company
The challenges faced by banking and financial services firms globally are to reduce the application complexity in the client's IT environment due to the growing number of information delivery channels, and to rationalize the overlapping business transactions (Balance Inquiry, Transaction History, Account List, Payments, Transfer) across multiple channels (Online Banking, eTM, Sales, Service,...
Provided By Patni
-
Construction Industry Advisor
A corporation is an independent legal entity, separate from the people who own, control and manage it. Contractors often choose this business structure because it insulates them from personal liability - only the business's assets are at risk. This generally means owners need not worry about creditors seizing their personal...
Provided By GBQ
-
Parallelising Wavefront Applications on General-Purpose GPU Devices
Pipelined wavefront applications form a large portion of the high performance scientific computing workloads at supercomputing centres such as LANL in the United States and AWE in the United Kingdom. This paper investigates the viability of utilising Graphics Processing Units (GPUs) for the acceleration of these codes, using NVIDIA's Compute...
Provided By University of Warsaw
-
Performance Prediction for Running Workflows Under Role-Based Authorization Mechanisms
When investigating the performance of running scientific/commercial workflows in parallel and distributed systems, the authors often take into account only the resources allocated to the tasks constituting the workflow, assuming that computational resources will accept the tasks and execute them to completion once the processors are available. In reality, and...
Provided By University of Warsaw
-
A Framework for Data Center Scale Dynamic Resource Allocation Algorithms
The scale and complexity of online applications and e-business infrastructures has led service providers to rely on the capabilities of large-scale hosting platforms, i.e., data centers. Dynamic Resource Allocation (DRA) algorithms have been shown to allow server resource allocation to be matched with application workload, which can improve server resource...
Provided By University of Warsaw
-
A Novel Soft Computing Based Algorithm for the Control of Dynamic Uncertain Systems - An Application to DC-DC Converters
Both soft computing based controllers and sliding mode controllers have been utilized to regulate the output voltage of DC-DC converters in response to changes in the load and the input voltage. Although both control techniques possess desirable characteristics, they have disadvantages which prevent them from being applied extensively. Many researchers...
Provided By St. Peter's University
-
High Performance Computing Network for Cloud Environment Using Simulators
Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new form of website. The GUI that controls cloud computing directly controls the hardware resources and user applications. The difficult part of cloud computing is deploying it in a real environment. Its...
Provided By Karpagam University
-
Uncertainty In The Public Debt Market And Stochastic Long-Run Growth
In a continuous time model, a representative household has to allocate its investment and consumption in an optimal manner under conditions of uncertainty. In the present paper it is hypothesized that there are two types of assets: a risk-free and a risky asset. The risk-free asset is assumed to be...
Provided By University of Macedonia
-
Introduction to Virtual Desktop Architectures
Enterprises have embraced various forms of virtualization during the past few decades. Server virtualization, made popular in the early 2000s through efficient hypervisor technology, made it possible for enterprises to do more with less - increasing datacenter capacity by allocating server workloads on otherwise unused or underused servers. This directly...
Provided By RingCube Technologies
-
Data Center Efficiency and Optimization
Learn about strategies to optimize compute performance in data centers facing power and cooling constraints and improve data center efficiency.
Provided By Intel Corporation
-
Intelligent Power Management Improves Rack Density
Intel® Intelligent Power Node Manager is a smart way to optimize and manage power and cooling resources in the data center.
Provided By Intel Corporation
-
Performance Evaluation of DSR in Various Placement Environments
The main method for evaluating the performance of MANETs is simulation. This paper focuses on the Dynamic Source Routing (DSR) protocol and evaluates its performance in three different placement environments, namely random, grid and uniform. The authors investigated the QoS metrics, namely average jitter, average end-to-end delay, packet delivery...
Provided By Andhra University