Data Centers
Along with the rise of cloud computing, data centers are being reinvented via virtualization, servers and high-performance computing. Find out more with the latest white papers and case studies.
-
Driving operational excellence with predictive analytics
Manage physical and virtual assets, maintain your infrastructure and capital equipment and maximize the efficiency of your people, processes and assets with predictive analytics.
Provided By IBM
-
Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer
In this paper, the authors investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel benchmarks Multi-Zone SP-MZ and BT-MZ, an earthquake simulation PEQdyna, an aerospace application PMLB and a 3D particle-in-cell application GTC) on a large-scale multithreaded BlueGene/Q supercomputer at Argonne National Laboratory, and quantify the...
Provided By Texas A&M International University
-
Data Center Energy Cost Minimization: A Spatio-Temporal Scheduling Approach
Cloud computing is supported by an infrastructure known as Internet Data Center (IDC). As cloud computing thrives, the energy consumption and cost for IDCs are exploding. There is growing interest in energy cost minimization for IDCs in deregulated electricity markets. In this paper, the authors study how to leverage both...
Provided By Institute of Electrical & Electronic Engineers
-
Video: Getting up to speed on the 21st century data center
Today’s data centers are more critical than ever as the nerve centers of modern business. There are new technologies that can make data centers more flexible, more powerful, and far more efficient. Of course, there’s also the cloud and service...
Provided By ZDNet
-
Video: Q&A on Next Generation Networks with the Editors in Chief
The rise of cloud computing, big data, and video is driving tremendous demand for faster and more efficient networks. ZDNet editor in chief Larry Dignan and TechRepublic editor in chief Jason Hiner discuss the leading technologies and vendors that are driving the next generation of networking and how organizations are...
Provided By ZDNet
-
Video: Q&A on Next Generation Networks with the Editors in Chief (AU)
The rise of cloud computing, big data, and video is driving tremendous demand for faster and more efficient networks. ZDNet editor in chief Larry Dignan and TechRepublic editor in chief Jason Hiner discuss the leading technologies and vendors that are driving the next generation of networking and how organizations are...
Provided By ZDNet
-
MultiGreen: Cost-Minimizing Multi-Source Datacenter Power Supply with Online Control
Faced with soaring power costs, a large carbon emission footprint and unpredictable power outages, more and more modern Cloud Service Providers (CSPs) are beginning to mitigate these challenges by equipping their Datacenter Power Supply System (DPSS) with multiple sources: a smart grid with time-varying electricity prices, an Uninterrupted Power Supply (UPS) of finite...
Provided By Huazhong University of Science & Technology
-
SmartDPSS: Cost-Minimizing Multi-Source Power Supply for Datacenters with Arbitrary Demand
To tackle soaring power costs, significant carbon emission and unexpected power outage, Cloud Service Providers (CSPs) typically equip their Datacenters with a Power Supply System (DPSS) nurtured by multiple sources: smart grid with time-varying electricity prices, uninterrupted power supply (UPS), and renewable energy with intermittent and uncertain supply. It remains...
Provided By Huazhong University of Science & Technology
-
University of Tennessee Knoxville Boosts Competitive Edge With Citrix Virtualization on Flexpod
The University of Tennessee (UT), Knoxville, is one of the oldest public universities in the U.S. Its challenge was to support an anywhere, anytime student computing environment while conserving IT and financial resources. It built an application and desktop virtualization delivery system with Citrix based on the FlexPod data center platform from Cisco...
Provided By Cisco
-
Preparing the Physical Infrastructure of Receiving Data Centres for Consolidation
The lack of on-site resources, increasing demand for constant availability, and changes in the way equipment is deployed have heightened your day-to-day challenges and pressures in ensuring the health of your company's IT physical infrastructure. This paper gives examples of what is becoming a standard architecture for preparing...
Provided By APC by Schneider Electric
-
Efficient Parallel Partition-based Algorithms for Similarity Search and Join with Edit Distance Constraints
The quantity of data in real-world applications is growing significantly while the data quality is still a big problem. Similarity search and similarity join are two important operations to address the poor data quality problem. Although many similarity search and join algorithms have been proposed, they did not utilize the...
Provided By Association for Computing Machinery
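The pigeonhole idea behind partition-based filtering can be sketched in a few lines of Python (the function names and the even-segment scheme here are illustrative, not the paper's actual algorithm): if two strings are within edit distance τ, then splitting one into τ+1 segments guarantees at least one segment appears unchanged as a substring of the other, so segment matching can prune candidate pairs before an exact dynamic-programming verification.

```python
def edit_distance(a, b):
    # classic Levenshtein DP, O(|a||b|) time, O(|b|) space
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def partitions(s, k):
    # split s into k near-equal contiguous segments
    q, r = divmod(len(s), k)
    segs, i = [], 0
    for t in range(k):
        step = q + (1 if t < r else 0)
        segs.append(s[i:i + step])
        i += step
    return segs

def similarity_join(strings, tau):
    # pigeonhole filter: if ed(a, b) <= tau, at least one of a's tau+1
    # segments survives the <= tau edits and occurs verbatim inside b
    out = []
    for i in range(len(strings)):
        for j in range(i + 1, len(strings)):
            a, b = strings[i], strings[j]
            if any(seg and seg in b for seg in partitions(a, tau + 1)):
                if edit_distance(a, b) <= tau:   # exact verification
                    out.append((a, b))
    return out
```

The filter never drops a true match (only false candidates survive to verification), which is why partition-based methods can parallelize the cheap filtering step and reserve the expensive DP for the few remaining pairs.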
-
Military-Standard Data Center Management
In the case of the North Atlantic Treaty Organization (NATO), lives depend on it. Its challenges were to make IT operations faster, better, and cheaper and to move to a greenfield data center without disrupting military operations. NATO chose Cisco to overcome these challenges, implementing the Cisco Unified Data Center solution based...
Provided By Cisco
-
Heterogeneous Delay Tolerant Task Scheduling and Energy Management in the Smart Grid With Renewable Energy
The smart grid is the new generation of electricity grid that can efficiently utilize new distributed sources of energy (e.g., harvested renewable energy) and allow for dynamic electricity pricing. In this paper, the authors investigate the cost minimization problem for an end-user, such as a home, a community, or a business,...
Provided By Institute of Electrical & Electronic Engineers
-
A Framework for High Performance Simulation of Transactional Data Grid Platforms
One reason for the success of in-memory (transactional) data grids lies in their ability to fit the elasticity requirements imposed by the cloud-oriented pay-as-you-go cost model. In fact, by relying on in-memory data maintenance, these platforms can be dynamically resized by simply setting up (or shutting down) instances of so...
Provided By Association for Computing Machinery
-
Safeguarding the Status of Europe's Largest Port City
Rotterdam, the second-largest city in the Netherlands, is also notable as home to the largest port in Europe. Its challenges were to assure the future delivery of local government and port-based services while reducing costs and improving efficiency. The city chose Cisco to overcome these problems, deploying a FlexPod architecture integrating Cisco Nexus data center switches...
Provided By Cisco
-
Minimizing Flow Completion Times in Data Centers
For provisioning large-scale online applications such as web search, social networks and advertisement systems, data centers face extreme challenges in providing low latency for short flows (that result from end-user actions) and high throughput for background flows (that are needed to maintain data consistency and structure across massively distributed systems)....
Provided By University of Pitesti
-
Task Scheduling in Grid Computing Systems Constrained by Resource Availability
In this paper, a survey of some of the approaches to task scheduling in grid computing systems based on resource availability is discussed. To improve task scheduling, resource availability can be predicted by multistate availability predictors. Execution time of each application is modeled by a random variable. Each application is...
Provided By International Journal of Engineering and Innovative Technology (IJEIT)
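As a toy illustration of scheduling constrained by predicted availability (the completion-time inflation model below is an assumption made for illustration, not the survey's formulation), a scheduler can rank resources by expected completion time:

```python
def pick_resource(task_time, resources):
    """Choose the resource with the lowest expected completion time.

    resources: list of (name, speed, predicted_availability) tuples, where
    predicted_availability in (0, 1] would come from a multistate
    availability predictor. Dividing the nominal run time by availability
    is a simple illustrative way to penalize unreliable resources.
    """
    return min(resources, key=lambda r: (task_time / r[1]) / r[2])[0]
```

With this model a fast but flaky machine can still win if its speed advantage outweighs its predicted downtime, and loses once availability drops far enough.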
-
Middleware and Toolkits in Grid Computing
The increasing demand for more computing power and data storage capacity in many fields of business, research, engineering, medicine and science has driven the emergence of Grid Computing. Grid computing provides an environment in which computers are interconnected with each other in such a manner as to make execution faster...
Provided By Lovely Professional University
-
A Taxonomy of Performance Assurance Methodologies and Its Application in High Performance Computer Architectures
In this paper, the authors present a systematic approach to the complex problem of high confidence performance assurance of high performance architectures based on methods used over several generations of industrial microprocessors. A taxonomy is presented for performance assurance through three key stages of a product life cycle-high level performance,...
Provided By Intel Corporation
-
Grid Enabled Environment for Image Processing Applications: A Review
Grid computing is an emerging technology which is finding its importance in handling large volumes of data. It is used to share and coordinate distributed processing resources to achieve supercomputing capability. A high-end computing facility and mass data storage are needed for any image processing applications, which are...
Provided By International Journal of Computer Applications
-
Path Consolidation for Dynamic Right-Sizing of Data Center Networks
Data center topologies typically consist of multi-rooted trees with many equal-cost paths between a given pair of hosts. Existing power optimization techniques do not utilize this property of data center networks for power proportionality. In this paper, the authors exploit this opportunity and show that significant energy savings can be...
Provided By University of Calgary
-
Profit-Optimal and Stability-Aware Load Curtailment in Smart Grids
A key feature of future smart grids is demand response. With the integration of a two-way communication infrastructure, a smart grid allows its operator to monitor the production and usage of power in real time. In this paper, the authors optimize operator profits for the different cases of load curtailment,...
Provided By Purdue Federal Credit Union
-
The CORE Storage Primitive: Cross-Object Redundancy for Efficient Data Repair & Access in Erasure Coded Storage
Erasure codes are an integral part of many distributed storage systems aimed at Big Data, since they provide high fault-tolerance for low overheads. However, traditional erasure codes are inefficient on reading stored data in degraded environments (when nodes might be unavailable), and on replenishing lost data (vital for long term...
Provided By Nanyang Technological University
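The degraded-read cost that motivates this line of work can be seen even with single-parity XOR coding, far simpler than the CORE primitive itself: rebuilding one lost block requires reading all of the surviving blocks.

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

data = [b"abcd", b"efgh", b"ijkl"]  # k = 3 data blocks
parity = xor_blocks(data)           # one parity block tolerates one loss
# degraded read of block 1: must fetch the other k-1 data blocks plus parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"efgh"
```

Even in this minimal case, reading one lost 4-byte block costs 12 bytes of network traffic; cross-object redundancy schemes like CORE aim to cut exactly this repair and degraded-access overhead.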
-
Temporal Load Balancing With Service Delay Guarantees for Data Center Energy Cost Optimization
Cloud computing services are becoming an integral part of people's daily lives. These services are supported by infrastructure known as Internet Data Centers (IDCs). As demand for cloud computing services soars, the energy consumed by IDCs is skyrocketing. Both academia and industry have paid great attention to energy management of IDCs. This...
Provided By Stanford Technology Ventures Program
-
Online Control of Datacenter Power Supply Under Uncertain Demand and Renewable Energy
Modern Cloud Service Providers (CSPs) equip their Datacenter Power Supply System (DPSS) with multiple sources to mitigate power cost, carbon emission, and power outages: a multi-market grid with time-varying energy prices, a finite-capacity Uninterrupted Power Supply (UPS), and certain volumes of intermittent renewable energy. With the presence of uncertain renewable sources...
Provided By Huazhong University of Science & Technology
-
Yank: Enabling Green Data Centers to Pull the Plug
Balancing a data center's reliability, cost, and carbon emissions is challenging. For instance, data centers designed for high availability require a continuous flow of power to keep servers powered on, and must limit their use of clean, but intermittent, renewable energy sources. In this paper, the authors present Yank, which...
Provided By University of Massachusetts
-
Transitioning Business Continuity To The Cloud
In the past, server, storage and data center redundancy were the only options that met corporate resilience objectives. Read this white paper to gain an understanding of the resiliency challenges facing organizations today and how you can transition to cloud-based resiliency.
Provided By IBM
-
Build A High-Performance Security Organization
Companies must not only protect their organization from danger but must continue to compete in the marketplace. Companies that ignore security do so at their own risk. Read this Forrester Research, Inc. report to gain an understanding of what it takes to build a high-performance security practice.
Provided By IBM
-
Vblock Helps Eclipse Resurrect Intellectual Property From Aging Infrastructure and Take Flight
Eclipse Aerospace, maker of technologically advanced twin-engine jets, wanted to replace outdated systems with a virtualized data center that supports the business more efficiently and cost-effectively. The company chose Cisco to overcome these challenges, deploying Vblock Systems running SAP business applications and Siemens Teamcenter and NX CAD server software. The...
Provided By vcdgear.com
-
Improve IT Productivity with Unified Data Center
Increase ROI and improve productivity. See how the Cisco Unified Data Center can help you achieve these two goals by simplifying, standardizing, and automating IT processes, so you can turn your data center into an innovation center.
Provided By Cisco
-
Power Struggles: Revisiting the RISC vs. CISC Debate on Contemporary ARM and X86 Architectures
RISC vs. CISC wars raged in the 1980s when chip area and processor design complexity were the primary constraints and desktops and servers exclusively dominated the computing landscape. Today, energy and power are the primary design constraints and the computing landscape is significantly different: growth in tablets and smartphones running...
Provided By University of Winnipeg
-
Considerations for Owning Versus Outsourcing Data Center Physical Infrastructure
When faced with the decision of upgrading an existing data center, building new, or leasing space in a retail colocation data center, there are both quantitative and qualitative differences to consider. The 10-year TCO may favor upgrading or building over outsourcing; however, this paper demonstrates that the economics may be...
Provided By Schneider Corporation
-
Numerical Methods for Solving Turbulent Flows by Using Parallel Technologies
A parallel implementation of an algorithm for the numerical solution of the Navier-Stokes equations for large eddy simulation (LES) of turbulence is presented in this research. The dynamic Smagorinsky model is applied for sub-grid simulation of turbulence. The numerical algorithm was worked out using a scheme of splitting on physical parameters. At the first...
Provided By Scientific Research
-
Demand Based Hierarchical QoS Using Storage Resource Pools
The high degree of storage consolidation in modern virtualized datacenters requires flexible and efficient ways to allocate IO resources among Virtual Machines (VMs). Existing IO resource management techniques have two main deficiencies: they are restricted in their ability to allocate resources across multiple hosts sharing a storage device, and they...
Provided By Wayne Stansfield
-
A Reliable Schedule with Budget Constraints in Grid Computing
An application executing in a Grid environment may encounter a failure. This can be overcome by a reliable scheduler, which plays the major role of allocating applications to reliable resources based on the reliability requirements given by the users. The reliability requirement...
Provided By Karunya University
-
Survey on Resource Allocation in Grid
Resource allocation is a problem that arises whenever different resources, which have preferences in participating, have to be distributed among multiple autonomous entities. A distributed system is composed of such computers which are separately located and connected with each other through a network...
Provided By International Journal of Engineering and Innovative Technology (IJEIT)
-
High Performance Computation Through Slicing and Value Replacement With CCDD Approach
In the software development and maintenance stages, programmers need to debug the software frequently. Software fault localization is one of the most expensive, tedious and time-intensive activities in program debugging. A common approach to fixing software errors is computing the suspiciousness of program elements according to failed test executions and passed...
Provided By International Journal of Innovative Technology and Exploring Engineering (IJITEE)
-
Optimization of Large Scale of Files Transfer in Meteorological Grid
In this paper, the authors focus on performance enhancement of large scale of small files transfer, which is critical to the performance of meteorological grid. GridFTP and compression techniques are used to optimize the efficiency. The transfer parameters are configured before transmission, such as extended block mode (Mode E), TCP...
Provided By Nanjing University
-
Parallel Computing for Accelerated Texture Classification with Local Binary Pattern Descriptors Using OpenCL
In this paper, a novel parallelized implementation of rotation invariant texture classification using heterogeneous computing platforms like the CPU and Graphics Processing Unit (GPU) is proposed. A complete modeling of the LBP operator as well as its improved versions, Complete Local Binary Patterns (CLBP) and Multi-scale Local Binary Patterns (MLBP)...
Provided By International Journal of Computer Applications
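A serial Python sketch of the basic 8-neighbour LBP operator that such papers parallelize (the bit ordering below is an arbitrary convention chosen for illustration; the CLBP/MLBP variants and the OpenCL kernels are not shown):

```python
def lbp_codes(img):
    """img: 2D list of intensity ints. Returns an 8-bit LBP code for each
    interior pixel: a neighbour >= the centre contributes a 1 bit,
    walking clockwise from the top-left neighbour."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:  # threshold against the centre
                    code |= 1 << bit
            row.append(code)
        out.append(row)
    return out
```

Because every pixel's code depends only on its own 3x3 neighbourhood, the two inner loops are embarrassingly parallel, which is what makes the operator a natural fit for GPU execution.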
-
An Optimized Algorithm for Enhancement of Performance of Distributed Computing System
Distributed Computing System (DCS) presents a platform consisting of multiple computing nodes connected in some fashion to which various modules of a task can be assigned. A node is any device connected to a computer network. Nodes can be computers or various other network applications. A task should be assigned...
Provided By International Journal of Computer Applications
-
GPS/GPRS/GSM Based Mobile Asset Tracking
Global Positioning Satellites (GPS) enable the tracking of all kinds of mobile assets accurately and provide their real time positions to the owners on a 24 by 7 basis over the GPRS/GSM link. While GPS provides the latitude/longitude information of the mobile asset at a given time, this information can...
Provided By Calsoft Labs
-
Towards an Advanced Distributed Computing
In recent years grids and peer-to-peer networks have gained popularity as favourable platforms for the next generation of parallel and distributed computing. Although grid computing was conceived in research organizations to support initially compute-intensive scientific applications, enterprises of all types are beginning to recognize this technology as a foundation for...
Provided By Academy of Sciences of the Czech Republic
-
Rossignol: Design & Manufacturing International Organization Deploys Riverbed Steelhead Appliances to Consolidate Its IT Systems to a Private Cloud
Rossignol is a leading designer and manufacturer of skis, bindings, ski boots, snowboards, technical equipment and clothing. Rossignol's IT infrastructure was expanding, and the company wanted to centralize its IT resources to a private cloud in order to improve overall IT management and reduce costs. However, Rossignol was aware that...
Provided By Riverbed Technology
-
Mercury: Interactive Riverbed Steelhead Appliances Reduce Backup Windows by 75%
Mercury Interactive, the global leader in Business Technology Optimization (BTO), is one of the largest enterprise software companies in the world. Mercury Interactive wanted to ensure that backup and replication for the software development team was providing the high level of data protection the company needed. With an ever-growing...
Provided By Riverbed Technology
-
Gaseous and Particulate Contamination Guidelines for Data Centers
The recent increase in the rate of hardware failures in data centers located in environments high in sulfur-bearing gases, highlighted by the number of recent publications on the subject, led to the need for this white paper, which recommends that in addition to temperature-humidity control, dust and gaseous contamination should also be monitored...
Provided By AMD
-
Adaptive Security and Privacy in Smart Grids: A Software Engineering Vision
Despite the benefits offered by smart grids, energy producers, distributors and consumers are increasingly concerned about possible security and privacy threats. These threats typically manifest themselves at runtime as new usage scenarios arise and vulnerabilities are discovered. Adaptive security and privacy promise to address these threats by increasing awareness and...
Provided By The Only Solution
-
Cross-Vendor Migration
Virtualization enables live migration, the ability to move a running guest from one physical machine to another with little or no disruption to the guest or its users. Live migration allows various load-balancing and high-availability techniques to be implemented by the hypervisor and datacenter management software. Unfortunately for live migration,...
Provided By Advanced Micro Devices
-
Defining Characteristics of QFabric
QFabric is the name of a packet-switched networking technology purpose-built to enable the construction of highly efficient, cost effective, dynamic, and easy to manage data centers over a wide range of scales using standard off-the-shelf computing, storage, and services elements. The purpose of this note is to introduce a set...
Provided By Juniper
-
Green ICT - The Greening of Business
All over the world, IT is playing an increasingly important role - in both business and individuals' private lives. It is also consuming ever greater amounts of energy and is therefore the source of significant CO₂ emissions. IT now causes the release of as much carbon dioxide into the atmosphere...
Provided By T-Systems International
-
The Effect of Intellectual Property Commoditization on Business Strategy
In the early 21st century a greater focus on intangible assets is gaining ground. This marks a paradigm shift in how intangible assets - specifically, Intellectual Property (IP) - are viewed from an economic standpoint. Patents, while once viewed as tools used primarily for the prevention of copying of inventions,...
Provided By Nerac
-
Best Business Intelligence (BI) Practices to Transform Your Data Into Information
Organizations have implemented enterprise applications such as ERP and CRM for business process execution but have failed to address the key enterprise asset: data. In the current economic environment, enterprises that fail to capitalize on their data assets jeopardize their cost structures and their earnings potential. It is vital for...
Provided By ZSL
-
Virtualization Implementation Model for Cost Effective & Efficient Data Centers
Data centers form a key part of the infrastructure upon which a variety of information technology services are built. They provide the capabilities of centralized repository for storage, management, networking and dissemination of data. With the rapid increase in the capacity and size of data centers, there is a continuous...
Provided By Universiti Teknikal Malaysia Melaka
-
Passing the Data Center Litmus Test
Having developed the very first 100 percent Web-based time tracking solution, Timesheet, Austin-based Journyx knows firsthand the importance of having a reliable data center backing up its data and services. The challenge was to maximize the reliability and performance of its services. Because of the wide geographical coverage area of...
Provided By Core NAP
-
Small Data Center Service With Big Data Center Capabilities
When RipCode - the first company to address video transcoding as a true network appliance - identified the need for a hosted data center environment to place its products for client trials and internal R&D, the company sought out a trusted partner with a secure track record, superior connectivity rates,...
Provided By Core NAP
-
A Quick Primer on Data Center Tier Classifications
Today, a number of companies are either consolidating their data centers or implementing new data center projects. Their decisions are often based on the appropriate "tier level" of the IT facility. The tier level is determined by an industry standard classification system for infrastructure performance. A four-tier system rates a...
Provided By Switch and Data
-
The Case for Thermoelectrics in the Data Center: A New Approach and Use of an Old Science
Over the past several years, corporate enterprises and the Information Technology (IT) industry in general have increased their awareness of the "Greening" of the data center, or the "Greening of IT", for the main purpose of reducing energy consumption and costs. Many standards, technologies and best practices have been...
Provided By Applied Methodologies
-
Making the Right Move: Major Disaster Spurs Hancock Bank to Find a New Home
Imagine the roof ripped off an office building, with equipment strewn hundreds of yards away and water making an island out of the datacenter. A "Total Loss" event - Hurricane Katrina - forced Hancock Bank to find a temporary home for its IT infrastructure while building a state-of-the-art facility from...
Provided By SunGard Availability Services
-
Grid and Cloud Computing Technology: Interoperability and Standardization for the Telecommunications Industry
The European Telecommunications Standards Institute formed the Grid Technical Committee (TC GRID) in 2006 with participation from over 20 organizations spanning private sector, government, and academia. It had a specific mandate to address the convergence of the IT and Telecommunications industries in particular in the domain of interoperability. Grid computing,...
Provided By ETSI
-
Making Geo-Replicated Systems Fast as Possible, Consistent When Necessary
Online services distribute and replicate state across geographically diverse data centers and direct user requests to the closest or least loaded site. While effectively ensuring low latency responses, this approach is at odds with maintaining cross-site consistency. The authors make three contributions to address this tension; they propose RedBlue consistency,...
Provided By Max Planck Institute for Software Systems
-
Why Cloud Computing Is Here to Stay
While energy prices and CPU power consumption are out of one's control, operating expenses can be reduced by optimizing the amount of labor needed to perform day-to-day maintenance, upgrades and provisioning of new equipment. Let's take a look at where the data center dollars go. Costs associated with server management and administration...
Provided By Freedom OSS
-
Backing Up Enterprise Applications: Transaction Consistency Is Key
In this webcast, the presenter starts by explaining what Volume Shadow Copy Services (VSS) are. The presenter also covers backing up enterprise applications and advises customers and clients on server, desktop, and application virtualization in addition to cloud computing strategies and...
Provided By Vee Software
-
Grid Operating System: Making Dynamic Virtual Services in Organizations
The structure of a grid system should be such that even a small personal computer can avail itself of the facilities of many supercomputers at a time. A grid is formed from a number of supercomputers that are used and participate in the computations. Grid computing has no straightforward way to control and...
Provided By International Journal of Computer Theory and Engineering (IJCTE)
-
Data Clustering and Topology Preservation Using 3D Visualization of Self Organizing Maps
The Self Organizing Map (SOM) is regarded as an excellent computational tool that can be used in data mining and data exploration processes. The SOM usually creates a set of prototype vectors representing the data set and carries out a topology preserving projection from high-dimensional input space onto a low-dimensional...
Provided By International Association of Engineers
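A minimal SOM training loop, sketched under common assumptions (Gaussian neighbourhood, linearly decaying learning rate and radius; this is not the authors' 3D-visualization pipeline), shows how the prototype vectors and the topology-preserving update work:

```python
import math
import random

def train_som(data, grid_w, grid_h, dim, epochs=60, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny SOM on `data` (a list of dim-length vectors)."""
    rng = random.Random(seed)
    # one randomly initialised prototype vector per grid node
    w = [[[rng.random() for _ in range(dim)] for _ in range(grid_w)]
         for _ in range(grid_h)]
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - e / epochs) + 0.01    # shrinking neighbourhood
        for x in data:
            # best-matching unit: node whose prototype is closest to x
            by, bx = min(((y, c) for y in range(grid_h) for c in range(grid_w)),
                         key=lambda p: sum((w[p[0]][p[1]][d] - x[d]) ** 2
                                           for d in range(dim)))
            # pull the BMU and, with Gaussian falloff, its grid neighbours
            # toward x -- this coupling is what preserves topology
            for y in range(grid_h):
                for c in range(grid_w):
                    g = math.exp(-((y - by) ** 2 + (c - bx) ** 2)
                                 / (2 * sigma * sigma))
                    for d in range(dim):
                        w[y][c][d] += lr * g * (x[d] - w[y][c][d])
    return w
```

After training on two well-separated points with a 1x2 grid, each grid node's prototype settles near one of the points, which is the behaviour the projection and visualization build on.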
-
Distributed S-Net: Cluster and Grid Computing without the Hassle
S-NET is a declarative coordination language and component technology primarily aimed at modern multi-core/ many-core chip architectures. It builds on the concept of stream processing to structure dynamically evolving networks of communicating asynchronous components. Components themselves are implemented using a conventional language suitable for the application domain. The authors present...
Provided By University of Almeria
-
Authorisation Infrastructure for On-Demand Network Resource Provisioning
High performance Grid applications require a high speed network infrastructure that should be capable of providing network connectivity service on-demand. This paper presents results of the development of the AuthoriZation (AuthZ) infrastructure for on-demand multi-domain Network Resource Provisioning (NRP). The authors propose a general Complex Resource Provisioning (CRP) model that can...
Provided By University of Almeria
-
Security Services Lifecycle Management in On-Demand Infrastructure Services Provisioning
Modern e-Science and high technology industry require high-performance and complicated network and computer infrastructure to support distributed collaborating groups of researchers and applications that should be provisioned on-demand. The effective use and management of the dynamically provisioned services can be achieved by using the Service Delivery Framework (SDF) proposed by...
Provided By University of Almeria
-
Increasing Data Center Energy Efficiency by Monitoring and Targeting to Heating and Cooling Metrics
Data center heat density has been increasing since the advent of the server. This has become particularly problematic during the past few years as data center managers have struggled to cope with heat and power intensity problems. These issues have resulted in enormous energy bills and a rising carbon impact....
Provided By TRENDPOINT SYSTEMS
-
IT-Enabled Integration of Renewables: A Concept for the Smart Power Grid
The wide utilization of information and communication technologies is hoped to enable a more efficient and sustainable operation of electric power grids. This paper analyses the benefits of smart power grids for the integration of renewable energy resources into the existing grid infrastructure. Therefore, the concept of a smart power grid...
Provided By EURASIP
-
Designing Green Datacenters
Until environmentally friendly concepts and public concerns took hold, traditional datacenters had fixed their eyes on maximum uptime. Times have changed, and datacenters are now concentrating on conserving energy. This paper provides some tips that can contribute towards building eco-friendly datacenters. It ends by saying that it's important to keep a...
Provided By vishalvasu.com
-
Multi Objectives Heuristic Algorithm for Grid Computing
Grid computing provides the means of using and sharing heterogeneous resources that are geographically distributed to solve complex scientific or technical problems. Task scheduling is critical to achieving high performance on grid computing environment. The objective of the scheduling process is to map each task with specific requirements to a...
Provided By King Abdulaziz University
-
An Efficient Model to Minimize Makespan On Data Grids Considering Local Tasks
Scheduling divisible loads on the resources of a grid has gained great attention in recent years. In grids, load scheduling plays a crucial role in achieving high utilization of resources. Many scheduling algorithms assume that grid resources such as CPU power are constant, which means that they are static...
Provided By Al-Azhar University
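The abstract stops short of the model itself; as a minimal sketch of the underlying divisible-load idea (the linear speed model and the names below are assumptions, not the paper's algorithm), a load can be split across resources whose effective speed is reduced by local tasks so that all chunks finish at the same time, which minimizes the makespan:

```python
# Hypothetical divisible-load sketch: split a total load W across resources
# whose effective speed is reduced by their local tasks, so every chunk
# finishes simultaneously (no resource sits idle at the end).

def split_load(W, speeds, local_shares):
    """speeds: raw CPU power; local_shares: fraction consumed by local tasks."""
    eff = [s * (1 - l) for s, l in zip(speeds, local_shares)]
    total = sum(eff)
    alloc = [W * e / total for e in eff]   # chunk size per resource
    makespan = W / total                   # identical finish time for all
    return alloc, makespan

alloc, makespan = split_load(100, [4.0, 2.0, 4.0], [0.5, 0.0, 0.0])
# eff = [2, 2, 4]; alloc = [25, 25, 50]; makespan = 12.5
```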
-
WATS: Workload-Aware Task Scheduling in Asymmetric Multi-Core Architectures
Asymmetric Multi-Core (AMC) architectures have shown high performance as well as power efficiency. However, current parallel programming environments do not perform well on AMC due to their assumption that all cores are symmetric and provide equal performance. Their random task scheduling policies, such as task-stealing, can result in unbalanced workloads...
Provided By Shanghai Institute of Applied Physics, Chinese Academy of Sciences
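As a hedged illustration of why workload-aware placement matters on asymmetric cores (this is not the WATS algorithm; the core speeds and task costs are invented), a longest-processing-time heuristic with earliest-completion-time assignment naturally steers heavy tasks toward fast cores:

```python
# Illustrative sketch: place tasks on asymmetric cores, largest first (LPT),
# each on the core where it would finish earliest. A symmetric-cores scheduler
# that ignores core speed would balance raw task counts instead and lose time.

def place(tasks, core_speeds):
    free_at = [0.0] * len(core_speeds)                 # when each core frees up
    assignment = []
    for cost in sorted(tasks, reverse=True):           # largest tasks first
        finish = [free_at[c] + cost / s for c, s in enumerate(core_speeds)]
        c = min(range(len(core_speeds)), key=finish.__getitem__)
        free_at[c] = finish[c]
        assignment.append((cost, c))
    return assignment, max(free_at)

# One big core (speed 4) and one little core (speed 1):
assignment, makespan = place([8, 4, 4, 2], [4.0, 1.0])
```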
-
Gaining Secure Assets Using Integrated Components of Grid Security Infrastructure (GSI) Creating Inside Grid Environment
The existence of grid computing in the near future is an admitted reality. The ubiquity of grid computing connections to desktops has brought boons to scientists as well as cause for concern, since digital assets may be unknowingly exposed. Grid Security Infrastructure...
Provided By University of Pune
-
Towards Novel and Efficient Security Architecture for Role-Based Access Control in Grid Computing
Of late, a necessity has arisen to distribute computing applications across grids. Increasingly, these applications depend on services such as data transfer, data portal services, and job submission. Owing to the fact that the distribution of services and resources in wide-area networks is heterogeneous, dynamic, and multi-domain,...
Provided By Government College of Engineering
-
Quorum Based Distributed Mutual System
The main goal of a distributed computing system is to connect users and resources in a transparent, open, and scalable way. Ideally, this arrangement is drastically more fault-tolerant and more powerful than many combinations of stand-alone computer systems. Openness is the property of distributed systems such that each subsystem...
Provided By Osmania University
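The intersection property behind quorum-based mutual exclusion can be shown in a few lines. The following sketch (node names invented for illustration) builds all majority quorums of a node set and checks that any two overlap, which is what prevents two processes from each collecting a full quorum of grants at the same time:

```python
# Minimal sketch of the quorum idea: any two majority subsets of the node set
# intersect, so two processes can never both hold a complete quorum of grants,
# which guarantees mutual exclusion.

from itertools import combinations

def majority_quorums(nodes):
    k = len(nodes) // 2 + 1                    # strict majority size
    return [set(q) for q in combinations(nodes, k)]

nodes = ["n1", "n2", "n3", "n4", "n5"]
quorums = majority_quorums(nodes)

# The pairwise-intersection property that mutual exclusion relies on:
assert all(a & b for a in quorums for b in quorums)
```

Real quorum schemes (e.g. Maekawa-style grids) shrink quorum sizes below a majority while preserving this intersection property.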
-
Grid-HPA: Predicting Resource Requirements of a Job in the Grid Computing Environment
For complete support of Quality of Service, it is better that the environment itself predicts the resource requirements of a job by using special methods in Grid computing. Exact and correct prediction enables exact matching of required resources with available resources. After the execution of each job, the used resources...
Provided By Iran University of Science and Technology
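As a minimal, hypothetical illustration of history-based prediction (the exponential smoothing model and its factor are assumptions, not Grid-HPA's actual method), the resource usage recorded after each run can feed an estimate for the next submission:

```python
# Hypothetical sketch: predict a job's next resource requirement as an
# exponentially weighted average of its measured past usage, so recent
# runs count more than old ones. The smoothing factor is an assumption.

def predict(history, alpha=0.5):
    """history: measured resource usage (e.g. CPU-hours) of past runs."""
    estimate = history[0]
    for observed in history[1:]:
        estimate = alpha * observed + (1 - alpha) * estimate
    return estimate

est = predict([10.0, 12.0, 11.0])   # est = 11.0
```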
-
Efficient and Collective Global, Local Memory Management for High Performance Cluster Computing
The first inspiration for cluster computing was developed in the 1960s by IBM as an alternative way of linking large mainframes to provide a more cost-effective form of commercial parallelism. However, cluster computing did not gain momentum until the convergence of three important trends in the 1980s: high-performance microprocessors, high-speed...
Provided By Osmania University
-
Report on Web Application Firewall
Rsignia's paper describes how to protect, secure, and manage IT applications, infrastructure, and digital assets. Today's federal intelligence and law enforcement agencies need to locate, filter, capture, and analyze sensitive information from large volumes of internet traffic, while at all times complying with the appropriate laws and regulations, such as Lawful...
Provided By Rsignia
-
Grid Computing Based Model for Remote Monitoring of Energy Flow and Prediction of HT Line Loss in Power Distribution System
Grid Computing has been identified as an important new technology in scientific and engineering fields as well as many commercial and industrial enterprises. Grid Computing is a form of distributed computing that involves coordination and sharing of computing application, data storage or network resources across dynamic and geographically dispersed organizations....
Provided By JATIT