Oak Ridge National Laboratory

Displaying 1-23 of 23 results

  • White Papers // May 2013

    A Temporal Locality-Aware Page-Mapped Flash Translation Layer

    The poor performance of random writes has been a major concern that must be addressed to better utilize the potential of flash in enterprise-scale environments. The authors examine one important cause of this poor performance: the design of the Flash Translation Layer (FTL), which performs...

    Provided By Oak Ridge National Laboratory
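
    As a rough illustration of the mapping design discussed above, the sketch below keeps the full logical-to-physical page map on flash and caches recently used entries in an LRU table, in the spirit of demand-based, temporal-locality-aware FTLs. All names and sizes are invented for this sketch, not taken from the paper.

        from collections import OrderedDict

        class PageMappedFTL:
            """Toy page-mapped FTL with an SRAM-resident LRU mapping cache."""

            def __init__(self, cache_slots=1024):
                self.cache_slots = cache_slots
                self.cache = OrderedDict()  # hot logical->physical mappings
                self.flash_map = {}         # full map (lives on flash in reality)
                self.next_free = 0          # naive free-page allocator

            def _cache_insert(self, lpn, ppn):
                self.cache[lpn] = ppn
                self.cache.move_to_end(lpn)
                if len(self.cache) > self.cache_slots:
                    self.cache.popitem(last=False)  # evict the coldest mapping

            def read(self, lpn):
                if lpn in self.cache:               # hit: temporal locality pays off
                    self.cache.move_to_end(lpn)
                    return self.cache[lpn]
                ppn = self.flash_map.get(lpn)       # miss: costs an extra flash read
                if ppn is not None:
                    self._cache_insert(lpn, ppn)
                return ppn

            def write(self, lpn):
                ppn = self.next_free                # out-of-place write to a new page
                self.next_free += 1
                self.flash_map[lpn] = ppn
                self._cache_insert(lpn, ppn)
                return ppn

    Random writes with little temporal locality keep missing this mapping cache, which is one way the FTL design itself amplifies their cost.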

  • White Papers // Jan 2013

    Recovering Transient Data: Automated On-Demand Data Reconstruction and Offloading for Supercomputers

    It has become a national priority to build and use PetaFlop supercomputers. The dependability of such large systems has been recognized as a key issue that can impact their usability. Even with smaller, existing machines, failures are the norm rather than the exception. Research has shown that storage systems are...

    Provided By Oak Ridge National Laboratory

  • White Papers // May 2012

    Automatic Construction of Anomaly Detectors from Graphical Models

    Detection of rare or previously unseen attacks in cyber security presents a central challenge: how does one search for a sufficiently wide variety of anomaly types and yet allow the process to scale to increasingly complex data? In particular, creating each anomaly detector manually and training each one separately...

    Provided By Oak Ridge National Laboratory
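
    To make the construction concrete, here is a minimal analogue: derive a detector automatically from a fitted probabilistic model by thresholding log-likelihood. The paper works with richer graphical models; this sketch uses the simplest fully factorized Gaussian, and all names and thresholds are illustrative.

        import numpy as np

        def make_detector(train, q=0.01):
            """Return a detector that flags points the fitted model finds rare."""
            mu = train.mean(axis=0)
            sigma = train.std(axis=0) + 1e-9             # avoid division by zero
            def log_lik(x):
                z = (x - mu) / sigma
                return -0.5 * np.sum(z**2 + np.log(2 * np.pi * sigma**2), axis=-1)
            threshold = np.quantile(log_lik(train), q)   # calibrate on training data
            return lambda x: log_lik(x) < threshold      # True => anomalous

        train = np.random.randn(5000, 8)                 # "normal" behavior
        detect = make_detector(train)
        print(detect(np.full(8, 6.0)))                   # extreme point -> True

    Because the detector is generated from the model rather than hand-built, each new model yields a new detector for free, which is what lets the approach scale.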

  • White Papers // May 2012

    Dead Phish: An Examination of Deactivated Phishing Sites

    Efforts to combat online phishing and fraud often center on filtering the phishing messages and disabling phishing Web sites to prevent users from being deceived. Two potential approaches to disabling a phishing site are to eliminate the required DNS records to reach the site and to remove the site from...

    Provided By Oak Ridge National Laboratory
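
    The two takedown routes the abstract names can be told apart mechanically: a site whose DNS records were pulled no longer resolves, while a site removed from its host still resolves but serves nothing. A hedged sketch, with an invented helper name and a deliberately crude liveness test:

        import socket
        import urllib.request

        def takedown_status(domain):
            """Classify how a reported phishing site appears to have been disabled."""
            try:
                socket.gethostbyname(domain)          # do DNS records still exist?
            except socket.gaierror:
                return "dns-records-removed"
            try:
                urllib.request.urlopen(f"http://{domain}/", timeout=5)
            except Exception:
                return "site-removed"                 # resolves, but content is gone
            return "still-live"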

  • White Papers // Feb 2012

    NVMalloc: Exposing an Aggregate SSD Store as a Memory Partition in Extreme-Scale Machines

    DRAM is a precious resource in extreme-scale machines and is becoming increasingly scarce, mainly due to the growing number of cores per node. On future multi-petaflop and exaflop machines, the memory pressure is likely to be so severe that the authors need to rethink their memory usage models. Fortunately, the...

    Provided By Oak Ridge National Laboratory
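
    The core trick, treating flash as a spill target for memory, can be approximated in a few lines by memory-mapping an SSD-backed file. This is only a sketch of the idea; the paper's NVMalloc library aggregates node-local SSDs behind a malloc-style interface, and the path below is made up.

        import mmap
        import os

        def nvmalloc(path, nbytes):
            """Expose an SSD-backed file as a byte-addressable buffer."""
            fd = os.open(path, os.O_CREAT | os.O_RDWR)
            os.ftruncate(fd, nbytes)          # reserve the region on flash
            buf = mmap.mmap(fd, nbytes)       # loads/stores now hit the SSD store
            os.close(fd)                      # the mapping stays valid after close
            return buf

        region = nvmalloc("/mnt/ssd/heap.bin", 1 << 20)   # illustrative mount point
        region[0:5] = b"hello"                            # use it like memory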

  • White Papers // Jan 2012

    Just-in-Time Staging of Large Input Data for Supercomputing Jobs

    High performance computing is facing a data deluge from state-of-the-art colliders and observatories. Large datasets from these facilities, and other end-user sites, are often inputs to intensive analyses on modern supercomputers. Timely staging of input data into the supercomputer's local storage can not only optimize space usage but also...

    Provided By Oak Ridge National Laboratory
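
    The "just-in-time" part reduces to simple arithmetic: begin staging no earlier than the job's scheduled start minus the transfer time, plus some slack. A back-of-the-envelope sketch with invented parameters:

        def latest_stage_start(job_start_s, input_bytes, bw_bytes_per_s, slack_s=600):
            """Latest moment staging can begin and still finish before the job runs."""
            transfer_s = input_bytes / bw_bytes_per_s
            return job_start_s - transfer_s - slack_s

        # 10 TB over a ~1.25 GB/s (10 Gb/s) link takes ~8,000 s, so staging
        # must begin roughly 2.4 hours (with slack) before the job starts.
        print(latest_stage_start(job_start_s=100_000, input_bytes=10e12,
                                 bw_bytes_per_s=1.25e9))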

  • White Papers // Jun 2011

    HybridStore: A Cost-Efficient, High-Performance Storage System Combining SSDs and HDDs

    Unlike the use of DRAM for caching or buffering, certain idiosyncrasies of NAND Flash-based Solid-State Drives (SSDs) make their integration into existing systems non-trivial. Flash memory suffers from limits on its reliability, is an order of magnitude more expensive than magnetic Hard Disk Drives (HDDs), and can sometimes be...

    Provided By Oak Ridge National Laboratory
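
    The cost argument is easy to see with a toy provisioning model: HDDs buy capacity cheaply, SSDs buy IOPS cheaply, so a mix can beat either alone. The prices and device figures below are invented for illustration, not taken from the paper.

        import math

        def provision(capacity_tb, iops_target,
                      hdd=dict(tb=2.0, iops=150, cost=100),
                      ssd=dict(tb=0.2, iops=30_000, cost=400)):
            """Size a naive SSD/HDD mix: capacity from HDDs, residual IOPS from SSDs."""
            hdds = math.ceil(capacity_tb / hdd["tb"])
            iops_gap = max(0, iops_target - hdds * hdd["iops"])
            ssds = math.ceil(iops_gap / ssd["iops"])
            return hdds, ssds, hdds * hdd["cost"] + ssds * ssd["cost"]

        print(provision(capacity_tb=100, iops_target=50_000))   # (50, 2, 5800)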

  • White Papers // Feb 2011

    Challenges in Securing the Interface Between the Cloud and Pervasive Systems

    Cloud computing presents an opportunity for pervasive systems to leverage computational and storage resources to accomplish tasks that would not normally be possible on such resource-constrained devices. Cloud computing can enable hardware designers to build lighter systems that last longer and are more mobile. Despite the advantages cloud computing offers...

    Provided By Oak Ridge National Laboratory

  • White Papers // Feb 2011

    Embracing the Cloud for Better Cyber Security

    The future of cyber security is inextricably tied to the future of computing. Organizational needs and economic factors will drive computing outcomes. Cyber security researchers and practitioners must recognize the path of computing evolution and position themselves to influence the process to incorporate security as an inherent property. The best...

    Provided By Oak Ridge National Laboratory

  • White Papers // Nov 2010

    Workload Characterization of a Leadership Class Storage Cluster

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and architecting new storage systems based on observed workload patterns. In this paper, the authors characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge...

    Provided By Oak Ridge National Laboratory

  • White Papers // Oct 2010

    Enabling Data Discovery Through Virtual Internet Repositories

    Mercury is a federated metadata harvesting, search, and retrieval tool based on both open-source software and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. A major new version of Mercury was developed...

    Provided By Oak Ridge National Laboratory

  • White Papers // Mar 2010

    Multi Stage Attack Detection System for Network Administrators Using Data Mining

    In this paper, the authors present a method to discover, visualize, and predict behavior patterns of attackers in a network-based system. They propose a system that discovers temporal patterns of intrusion, revealing attacker behavior from alerts generated by an Intrusion Detection System (IDS). They use...

    Provided By Oak Ridge National Laboratory
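
    A minimal stand-in for the temporal pattern idea: group IDS alerts by source and read off each source's alert-type sequence within a time window. Field names and the window are invented for this sketch.

        from collections import defaultdict

        def attack_sequences(alerts, window_s=3600):
            """Per-source alert sequences; alerts are (timestamp_s, src_ip, alert_type)."""
            by_src = defaultdict(list)
            for ts, src, kind in sorted(alerts):
                by_src[src].append((ts, kind))
            sequences = {}
            for src, events in by_src.items():
                start = events[0][0]
                seq = [kind for ts, kind in events if ts - start <= window_s]
                if len(seq) > 1:                     # multi-stage: more than one step
                    sequences[src] = seq
            return sequences

        alerts = [(0, "10.0.0.9", "PORT_SCAN"),
                  (900, "10.0.0.9", "BRUTE_FORCE"),
                  (1800, "10.0.0.9", "PRIV_ESC")]
        print(attack_sequences(alerts))   # {'10.0.0.9': ['PORT_SCAN', 'BRUTE_FORCE', 'PRIV_ESC']}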

  • White Papers // Feb 2010

    Cybersecurity Through Real-Time Distributed Control Systems

    Critical infrastructure sites and facilities are becoming increasingly dependent on interconnected physical and cyber-based Real-Time Distributed Control Systems (RTDCSs). A mounting cybersecurity threat results from the nature of these ubiquitous and sometimes unrestrained communications interconnections. Much work is under way in numerous organizations to characterize the cyber threat, determine means...

    Provided By Oak Ridge National Laboratory

  • White Papers // Jan 2010

    Reconciling Scratch Space Consumption, Exposure, and Volatility to Achieve Timely Staging of Job Input Data

    Innovative scientific applications and emerging dense data sources are creating a data deluge for high-end computing systems. Processing such large input data typically involves copying (or staging) it onto the supercomputer's specialized high-speed storage, the scratch space, for sustained high I/O throughput. The current practice of conservatively staging data as early as...

    Provided By Oak Ridge National Laboratory
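
    The exposure half of the tradeoff is also just arithmetic: data staged conservatively early sits idle on volatile scratch, at risk of purging or failure, until the job starts. A sketch with made-up numbers (compare the just-in-time calculation under the Jan 2012 entry above):

        def exposure_hours(stage_start_s, transfer_s, job_start_s):
            """Hours staged input sits idle (and at risk) on scratch before use."""
            return max(0.0, (job_start_s - (stage_start_s + transfer_s)) / 3600)

        # Staging a 2-hour transfer a full day early leaves ~22 idle hours of
        # exposure; delaying the staging start shrinks the window toward zero.
        print(exposure_hours(stage_start_s=0, transfer_s=7_200, job_start_s=86_400))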

  • White Papers // Jul 2009

    A Stigmergy Approach for Open Source Software Developer Community Simulation

    The stigmergy collaboration approach provides a hypothesized explanation of how online groups work together. In this research, the authors present a stigmergy approach for building an agent-based Open Source Software (OSS) developer community collaboration simulation. They use groups of actors who collaborate on OSS projects as their frame of...

    Provided By Oak Ridge National Laboratory
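
    Stigmergy is easy to show in miniature: agents leave traces on a shared artifact, and later agents work where the traces are strongest, so coordination emerges without direct communication. The loop below is a generic illustration, not calibrated to the paper's OSS data.

        import random

        def simulate(steps=200, tasks=5, agents=20, decay=0.99):
            """Toy stigmergy loop over a shared trace vector."""
            trace = [1.0] * tasks                        # the shared medium
            for _ in range(steps):
                for _ in range(agents):
                    task = random.choices(range(tasks), weights=trace)[0]
                    trace[task] += 1.0                   # work reinforces the trace
                trace = [t * decay for t in trace]       # old traces evaporate
            return trace

        print(simulate())   # activity concentrates on a few tasks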

  • White Papers // May 2009

    Earth System Grid Authentication Infrastructure: Integrating Local Authentication, OpenID and PKI

    Climate scientists face a wide variety of practical problems, but there exists an overarching need to efficiently access and manipulate climate model data. Increasingly, for example, researchers must assemble and analyze large datasets that are archived in different formats on disparate platforms, and must extract portions of datasets to compute...

    Provided By Oak Ridge National Laboratory

  • White Papers // Nov 2008

    Performance of RDMA-Capable Storage Protocols on Wide-Area Network

    Because of its high throughput, low CPU utilization, and direct data placement, RDMA (Remote Direct Memory Access) has been adopted for transport in a number of storage protocols, such as NFS and iSCSI. In this presentation, the author provides a performance evaluation of RDMA-based NFS and iSCSI on Wide-Area Network...

    Provided By Oak Ridge National Laboratory

  • White Papers // Sep 2008

    Early Evaluation of IBM BlueGene/P

    BlueGene/P (BG/P) is the second generation BlueGene architecture from IBM, succeeding BlueGene/L (BG/L). BG/P is a System-on-Chip (SoC) design that uses four PowerPC 450 cores operating at 850 MHz with a double precision, dual pipe floating point unit per core. These chips are connected with multiple interconnection networks including a...

    Provided By Oak Ridge National Laboratory
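
    The quoted specifications pin down the chip's peak rate: the dual-pipe double-precision FPU can retire one fused multiply-add (2 flops) per pipe per cycle, i.e. 4 flops per core per cycle.

        cores = 4
        clock_hz = 850e6
        flops_per_cycle = 4   # dual-pipe FPU, one fused multiply-add per pipe
        peak = cores * clock_hz * flops_per_cycle
        print(f"{peak / 1e9:.1f} GFLOPS per chip")   # 13.6 GFLOPS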

  • White Papers // Jun 2008

    ParColl: Partitioned Collective I/O on the Cray XT

    Collective I/O orchestrates I/O from parallel processes by aggregating fine-grained requests into large ones. However, its performance is typically a fraction of the potential I/O bandwidth on large scale platforms such as Cray XT. Based on the authors' analysis, the time spent in global process synchronization dominates the actual time...

    Provided By Oak Ridge National Laboratory
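
    The aggregation step at the heart of collective I/O is just coalescing: sort the processes' fine-grained requests and merge contiguous runs into large writes. The sketch below shows only that step; ParColl's contribution (not shown) is doing it within process partitions so that synchronization stays local rather than global.

        def coalesce(requests):
            """Merge contiguous (offset, data) fragments into large writes."""
            merged = []
            for off, data in sorted(requests):
                prev_off, prev_data = merged[-1] if merged else (None, b"")
                if merged and prev_off + len(prev_data) == off:
                    merged[-1] = (prev_off, prev_data + data)   # extend the run
                else:
                    merged.append((off, data))
            return merged

        reqs = [(0, b"aa"), (4, b"cc"), (2, b"bb")]
        print(coalesce(reqs))   # [(0, b'aabbcc')] -- three small writes become one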

  • White Papers // Feb 2008

    Xen-Based HPC: A Parallel I/O Perspective

    Virtualization using a Xen-based virtual machine environment has yet to permeate the field of High Performance Computing (HPC). One major requirement for HPC is the availability of scalable, high-performance I/O. Conventional wisdom suggests that virtualization of system services must lead to degraded performance. In this paper, the authors take...

    Provided By Oak Ridge National Laboratory

  • White Papers // Jan 2008

    Performance Characterization and Optimization of Parallel I/O on the Cray XT

    In this paper, the authors present an extensive characterization, tuning, and optimization of parallel I/O on the Cray XT supercomputer, named Jaguar, at Oak Ridge National Laboratory. They have characterized the performance and scalability for different levels of storage hierarchy including a single Lustre object storage target, a single S2A...

    Provided By Oak Ridge National Laboratory

  • White Papers // May 2007

    Efficiency Evaluation of Cray XT Parallel IO Stack

    PetaScale computing platforms need to be coupled with efficient IO subsystems that can deliver commensurate IO throughput to scientific applications. In order to gain insights into the deliverable IO efficiency on the Cray XT platform at ORNL, this paper presents an in-depth efficiency evaluation of its parallel IO software stack....

    Provided By Oak Ridge National Laboratory

  • White Papers // Oct 2006

    Towards High Availability for High-Performance Computing System Services: Accomplishments and Limitations

    High-Performance Computing (HPC) plays a significant role for the scientific research community as an enabling technology. Scientific HPC applications, like the Terascale Supernova Initiative or the Community Climate System Model (CCSM), help to understand the complex nature of open research questions and drive the race for scientific discovery through advanced...

    Provided By Oak Ridge National Laboratory
