Oak Ridge National Laboratory

Displaying 1-33 of 33 results

  • White Papers // Jan 2014

    A Survey of Techniques For Improving Energy Efficiency in Embedded Computing Systems

    Recent technological advances have greatly improved the performance and features of embedded systems. With the number of mobile devices alone now approaching the population of Earth, embedded systems have truly become ubiquitous. These trends, however, have also made the task of managing their power consumption extremely challenging...

    Provided By Oak Ridge National Laboratory

  • White Papers // Jul 2013

    Toward a Performance/Resilience Tool for Hardware/Software Co-Design of High-Performance Computing Systems

    xSim is a simulation-based performance investigation toolkit that permits running High-Performance Computing (HPC) applications in a controlled environment with millions of concurrent execution threads, while observing application performance in a simulated extreme-scale system for hardware/software co-design. The presented paper details newly developed features for xSim that permit the injection of...

    Provided By Oak Ridge National Laboratory

  • White Papers // May 2013

    A Temporal Locality-Aware Page-Mapped Flash Translation Layer

    The poor performance of random writes has been a cause of major concern which needs to be addressed to better utilize the potential of flash in enterprise-scale environments. The authors examine one of the important causes of this poor performance: the design of the Flash Translation Layer (FTL) which performs...

    Provided By Oak Ridge National Laboratory
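
    For readers of the entry above: a page-mapped Flash Translation Layer keeps a logical-to-physical page map and services updates out of place, which is why random-write behavior is so sensitive to the FTL design. The toy C sketch below illustrates only that basic mapping idea; it is not the paper's temporal-locality-aware design, the sizes are illustrative, and garbage collection and wear leveling are omitted.

        /* Toy page-mapped FTL sketch: logical page -> physical flash page. */
        #include <stdint.h>
        #include <stdio.h>

        #define LOGICAL_PAGES 1024u
        #define PHYS_PAGES    1280u          /* over-provisioned flash pages */
        #define UNMAPPED      0xFFFFFFFFu

        static uint32_t l2p[LOGICAL_PAGES];  /* logical-to-physical page map */
        static uint8_t  valid[PHYS_PAGES];   /* valid bit per physical page  */
        static uint32_t next_free = 0;       /* next free physical page      */

        static void ftl_init(void) {
            for (uint32_t i = 0; i < LOGICAL_PAGES; i++) l2p[i] = UNMAPPED;
        }

        /* An update always programs a fresh physical page and invalidates the
           old copy; garbage collection (omitted here) reclaims it later. */
        static uint32_t ftl_write(uint32_t lpn) {
            if (l2p[lpn] != UNMAPPED) valid[l2p[lpn]] = 0;
            uint32_t ppn = next_free++ % PHYS_PAGES;
            valid[ppn] = 1;
            l2p[lpn]   = ppn;
            return ppn;                      /* physical page to program */
        }

        int main(void) {
            ftl_init();
            printf("lpn 7 -> ppn %u\n", ftl_write(7));  /* first write          */
            printf("lpn 7 -> ppn %u\n", ftl_write(7));  /* update, new location */
            return 0;
        }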

  • White Papers // Jan 2013

    Recovering Transient Data: Automated On-Demand Data Reconstruction and Offloading for Supercomputers

    It has become a national priority to build and use PetaFlop supercomputers. The dependability of such large systems has been recognized as a key issue that can impact their usability. Even with smaller, existing machines, failures are the norm rather than an exception. Research has shown that storage systems are...

    Provided By Oak Ridge National Laboratory

  • White Papers // May 2012

    Automatic Construction of Anomaly Detectors from Graphical Models

    Detection of rare or previously unseen attacks in cyber security presents a central challenge: how does one search for a sufficiently wide variety of types of anomalies and yet allow the process to scale to increasingly complex data? In particular, creating each anomaly detector manually and training each one separately...

    Provided By Oak Ridge National Laboratory

  • White Papers // May 2012

    Dead Phish: An Examination of Deactivated Phishing Sites

    Efforts to combat phishing and fraud online often center around filtering the phishing messages and disabling phishing Web sites to prevent users from being deceived. Two potential approaches to disabling a phishing site are to eliminate the required DNS records to reach the site and to remove the site from...

    Provided By Oak Ridge National Laboratory

  • White Papers // Feb 2012

    NVMalloc: Exposing an Aggregate SSD Store as a Memory Partition in Extreme-Scale Machines

    DRAM is a precious resource in extreme-scale machines and is increasingly becoming scarce, mainly due to the growing number of cores per node. On future multi-petaflop and exaflop machines, the memory pressure is likely to be so severe that the authors need to rethink their memory usage models. Fortunately, the...

    Provided By Oak Ridge National Laboratory
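
    For readers of the entry above: a generic way to let a working set spill from DRAM to flash is to back a large buffer with a file on a node-local SSD via mmap. The POSIX sketch below shows only that general pattern; it is not the NVMalloc interface, and the mount path and buffer size are assumptions made for illustration.

        /* Back a large buffer with an SSD-resident file via mmap (illustrative). */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void) {
            const size_t size = (size_t)1 << 30;        /* 1 GiB working buffer   */
            const char *path = "/mnt/ssd/scratch.bin";  /* hypothetical SSD mount */

            int fd = open(path, O_RDWR | O_CREAT, 0600);
            if (fd < 0 || ftruncate(fd, (off_t)size) != 0) {
                perror("open/ftruncate");
                return 1;
            }

            /* The mapping behaves like ordinary memory; dirty pages are written
               back to the SSD file instead of occupying DRAM permanently. */
            double *buf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (buf == MAP_FAILED) { perror("mmap"); return 1; }

            for (size_t i = 0; i < size / sizeof(double); i += 512)
                buf[i] = (double)i;                     /* touch one double per 4 KiB page */

            msync(buf, size, MS_SYNC);                  /* flush dirty pages to flash */
            munmap(buf, size);
            close(fd);
            return 0;
        }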

  • White Papers // Feb 2012

    Identifying Opportunities for Byte-Addressable Non-Volatile Memory in Extreme-Scale Scientific Applications

    Exascale computing platforms are expected to become available later this decade. Although the precise details of the exascale systems are not yet known, it is rather certain that these systems will bring with them grand challenges in several different dimensions. Future exascale systems face extreme power challenges. To improve power...

    Provided By Oak Ridge National Laboratory

  • White Papers // Jan 2012

    Just-in-Time Staging of Large Input Data for Supercomputing Jobs

    High performance computing is facing a data deluge from state-of-the-art colliders and observatories. Large data-sets from these facilities, and other end-user sites, are often inputs to intensive analyses on modern supercomputers. Timely staging in of input data at the supercomputer's local storage can not only optimize space usage, but also...

    Provided By Oak Ridge National Laboratory

  • White Papers // Jun 2011

    HybridStore: A Cost-Efficient, High-Performance Storage System Combining SSDs and HDDs

    Unlike the use of DRAM for caching or buffering, certain idiosyncrasies of NAND Flash-based Solid-State Drives (SSDs) make their integration into existing systems non-trivial. Flash memory suffers from limits on its reliability, is an order of magnitude more expensive than the magnetic Hard Disk Drives (HDDs), and can sometimes be...

    Provided By Oak Ridge National Laboratory

  • White Papers // Feb 2011

    Challenges in Securing the Interface Between the Cloud and Pervasive Systems

    Cloud computing presents an opportunity for pervasive systems to leverage computational and storage resources to accomplish tasks that would not normally be possible on such resource-constrained devices. Cloud computing can enable hardware designers to build lighter systems that last longer and are more mobile. Despite the advantages cloud computing offers...

    Provided By Oak Ridge National Laboratory

  • White Papers // Feb 2011

    Embracing the Cloud for Better Cyber Security

    The future of cyber security is inextricably tied to the future of computing. Organizational needs and economic factors will drive computing outcomes. Cyber security researchers and practitioners must recognize the path of computing evolution and position themselves to influence the process to incorporate security as an inherent property. The best...

    Provided By Oak Ridge National Laboratory

  • White Papers // Nov 2010

    Workload Characterization of a Leadership Class Storage Cluster

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and architecting new storage systems based on observed workload patterns. In this paper, the authors characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge...

    Provided By Oak Ridge National Laboratory

  • White Papers // Nov 2010

    Collective Prefetching for Parallel I/O Systems

    Data prefetching can be beneficial for improving parallel I/O system performance, but the amount of benefit depends on how efficiently and swiftly prefetches can be done. In this paper, the authors propose a new prefetching strategy, called collective prefetching. The idea is to exploit the correlation among I/O accesses of...

    Provided By Oak Ridge National Laboratory

  • White Papers // Oct 2010

    Enabling Data Discovery Through Virtual Internet Repositories

    Mercury is a federated metadata harvesting, search, and retrieval tool based on both open source software and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. A major new version of Mercury was developed...

    Provided By Oak Ridge National Laboratory

  • White Papers // Jun 2010

    Aggregation of Real-Time System Monitoring Data for Analyzing Large-Scale Parallel and Distributed Computing Environments

    The authors present a monitoring system for large-scale parallel and distributed computing environments that allows accuracy to be traded off in a tunable fashion to gain scalability without compromising fidelity. The approach relies on classifying each gathered monitoring metric based on individual needs and on aggregating messages containing classes of individual monitoring...

    Provided By Oak Ridge National Laboratory

  • White Papers // Mar 2010

    Multi Stage Attack Detection System for Network Administrators Using Data Mining

    In this paper, the authors present a method to discover, visualize, and predict the behavior patterns of attackers in a network-based system. They propose a system that is able to discover temporal patterns of intrusion which reveal the behaviors of attackers, using alerts generated by an Intrusion Detection System (IDS). They use...

    Provided By Oak Ridge National Laboratory

  • White Papers // Feb 2010

    Cybersecurity Through Real-Time Distributed Control Systems

    Critical infrastructure sites and facilities are becoming increasingly dependent on interconnected physical and cyber-based Real-Time Distributed Control Systems (RTDCSs). A mounting cybersecurity threat results from the nature of these ubiquitous and sometimes unrestrained communications interconnections. Much work is under way in numerous organizations to characterize the cyber threat, determine means...

    Provided By Oak Ridge National Laboratory

  • White Papers // Jan 2010

    Reconciling Scratch Space Consumption, Exposure, and Volatility to Achieve Timely Staging of Job Input Data

    Innovative scientific applications and emerging dense data sources are creating a data deluge for high-end computing systems. Processing such large input data typically involves copying (or staging) onto the supercomputer's specialized high-speed storage, scratch space, for sustained high I/O throughput. The current practice of conservatively staging data as early as...

    Provided By Oak Ridge National Laboratory

  • White Papers // Aug 2009

    Scheduling Dense Linear Algebra Operations on Multicore Processors

    State-of-the-art dense linear algebra software, such as the LAPACK and ScaLAPACK libraries, suffers performance losses on multicore processors due to their inability to fully exploit thread-level parallelism. At the same time, the coarse-grain dataflow model gains popularity as a paradigm for programming multicore architectures. This paper looks at implementing classic...

    Provided By Oak Ridge National Laboratory
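
    For readers of the entry above: in a coarse-grain dataflow style, each operation becomes a task and execution order is driven by data dependences rather than a fixed loop order. The OpenMP sketch below illustrates only that general idea, with scalars standing in for matrix tiles; it is not the paper's scheduler, and it assumes a compiler with OpenMP task support (e.g. built with -fopenmp).

        /* Dataflow-style tasking sketch: tasks run as their inputs become ready. */
        #include <stdio.h>

        int main(void) {
            double a = 1.0, b = 2.0, c = 0.0;   /* stand-ins for matrix tiles */
            #pragma omp parallel
            #pragma omp single
            {
                #pragma omp task depend(inout: a)                 /* "factor" step       */
                a = a * 2.0;
                #pragma omp task depend(in: a) depend(inout: b)   /* "update" consumes a */
                b = b + a;
                #pragma omp task depend(in: a, b) depend(out: c)  /* final combine       */
                c = a + b;
            }   /* implicit barrier: all tasks complete here */
            printf("c = %f\n", c);              /* prints 6.000000 */
            return 0;
        }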

  • White Papers // Jul 2009

    A Stigmergy Approach for Open Source Software Developer Community Simulation

    The stigmergy collaboration approach provides a hypothesized explanation about how online groups work together. In this research, the authors present a stigmergy approach for building an agent-based Open Source Software (OSS) developer community collaboration simulation. They use a group of actors who collaborate on OSS projects as their frame of...

    Provided By Oak Ridge National Laboratory

  • White Papers // May 2009

    Earth System Grid Authentication Infrastructure: Integrating Local Authentication, OpenID and PKI

    Climate scientists face a wide variety of practical problems, but there exists an overarching need to efficiently access and manipulate climate model data. Increasingly, for example, researchers must assemble and analyze large datasets that are archived in different formats on disparate platforms, and must extract portions of datasets to compute...

    Provided By Oak Ridge National Laboratory

  • White Papers // Nov 2008

    Performance of RDMA-Capable Storage Protocols on Wide-Area Network

    Because of its high throughput, low CPU utilization, and direct data placement, RDMA (Remote Direct Memory Access) has been adopted for transport in a number of storage protocols, such as NFS and iSCSI. In this presentation, the author provides a performance evaluation of RDMA-based NFS and iSCSI on Wide-Area Network...

    Provided By Oak Ridge National Laboratory

  • White Papers // Sep 2008

    Early Evaluation of IBM BlueGene/P

    BlueGene/P (BG/P) is the second generation BlueGene architecture from IBM, succeeding BlueGene/L (BG/L). BG/P is a System-on-Chip (SoC) design that uses four PowerPC 450 cores operating at 850 MHz with a double precision, dual pipe floating point unit per core. These chips are connected with multiple interconnection networks including a...

    Provided By Oak Ridge National Laboratory

  • White Papers // Jul 2008

    An Analysis of HPC Benchmarks in Virtual Machine Environments

    Virtualization technology has been gaining acceptance in the scientific community due to its overall flexibility in running HPC applications. It has been reported that a specific class of applications is better suited to a particular type of virtualization scheme or implementation. For example, Xen has been shown to perform with...

    Provided By Oak Ridge National Laboratory

  • White Papers // Jun 2008

    ParColl: Partitioned Collective I/O on the Cray XT

    Collective I/O orchestrates I/O from parallel processes by aggregating fine-grained requests into large ones. However, its performance is typically a fraction of the potential I/O bandwidth on large scale platforms such as Cray XT. Based on the authors' analysis, the time spent in global process synchronization dominates the actual time...

    Provided By Oak Ridge National Laboratory
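
    For readers of the entry above: in MPI-IO, each process can issue its share of a file write through a collective call, which gives the I/O layer a chance to merge many small requests into fewer large, well-aligned ones. The sketch below shows that standard pattern with MPI_File_write_at_all; the file name, buffer size, and offsets are illustrative, and ParColl's partitioning scheme itself is not implemented here.

        /* Each rank writes a contiguous block at its own offset via a collective call. */
        #include <mpi.h>

        #define COUNT 1024                      /* doubles contributed per rank */

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);

            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            double buf[COUNT];
            for (int i = 0; i < COUNT; i++) buf[i] = rank + i * 1e-3;

            MPI_File fh;
            MPI_File_open(MPI_COMM_WORLD, "out.dat",
                          MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

            /* The _all (collective) variant lets MPI-IO aggregate the ranks'
               fine-grained requests before they reach the file system. */
            MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(double);
            MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_DOUBLE,
                                  MPI_STATUS_IGNORE);

            MPI_File_close(&fh);
            MPI_Finalize();
            return 0;
        }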

  • White Papers // Feb 2008

    Xen-Based HPC: A Parallel I/O Perspective

    Virtualization using Xen-based virtual machine environments has yet to permeate the field of High Performance Computing (HPC). One major requirement for HPC is the availability of scalable, high-performance I/O. Conventional wisdom suggests that virtualization of system services must lead to degraded performance. In this paper, the authors take...

    Provided By Oak Ridge National Laboratory

  • White Papers // Jan 2008

    Performance Characterization and Optimization of Parallel I/O on the Cray XT

    In this paper, the authors present an extensive characterization, tuning, and optimization of parallel I/O on the Cray XT supercomputer, named Jaguar, at Oak Ridge National Laboratory. They have characterized the performance and scalability for different levels of storage hierarchy including a single Lustre object storage target, a single S2A...

    Provided By Oak Ridge National Laboratory

  • White Papers // Jan 2008

    System-Level Virtualization for High Performance Computing

    System-level virtualization has been a research topic since the 1970s but has regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g., Intel-VT and AMD-V). However, the majority of system-level virtualization projects are guided...

    Provided By Oak Ridge National Laboratory

  • White Papers // Oct 2007

    Virtualized Environments for the Harness High Performance Computing Workbench

    In this paper, the authors describe recent accomplishments in providing a virtualized environment concept and prototype for scientific application development and deployment as part of the Harness High-Performance Computing (HPC) workbench research effort. The presented paper focuses on tools and mechanisms that simplify scientific application development and deployment tasks, such...

    Provided By Oak Ridge National Laboratory

  • White Papers // May 2007

    Efficiency Evaluation of Cray XT Parallel IO Stack

    PetaScale computing platforms need to be coupled with efficient IO subsystems that can deliver commensurate IO throughput to scientific applications. In order to gain insights into the deliverable IO efficiency on the Cray XT platform at ORNL, this paper presents an in-depth efficiency evaluation of its parallel IO software stack....

    Provided By Oak Ridge National Laboratory

  • White Papers // Oct 2006

    Towards High Availability for High-Performance Computing System Services: Accomplishments and Limitations

    High-Performance Computing (HPC) plays a significant role for the scientific research community as an enabling technology. Scientific HPC applications, like the Terascale Supernova Initiative or the Community Climate System Model (CCSM), help to understand the complex nature of open research questions and drive the race for scientific discovery through advanced...

    Provided By Oak Ridge National Laboratory

  • White Papers // Apr 2006

    MOLAR: Adaptive Runtime Support for High-End Computing Operating and Runtime Systems

    MOLAR is a multi-institutional research effort that concentrates on adaptive, reliable, and efficient Operating and Runtime System (OS/R) solutions for ultra-scale, high-end scientific computing on the next generation of supercomputers. This paper addresses the challenges outlined in FAST-OS (Forum to Address Scalable Technology for runtime and Operating Systems) and HECRTF...

    Provided By Oak Ridge National Laboratory
