Lawrence Berkeley National Laboratory

Displaying 1-11 of 11 results

  • White Papers // Oct 2011

    Automatic Scaling of OpenMP Beyond Shared Memory

    OpenMP is an explicit parallel programming model that offers reasonable productivity. Its memory model assumes a shared address space, and hence the direct translation - as done by common OpenMP compilers - requires an underlying shared-memory architecture. Many lab machines include tens of processors, built from commodity components, and thus include...

    Provided By Lawrence Berkeley National Laboratory
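
    The shared-address-space assumption mentioned above is visible in even the simplest OpenMP program: every thread indexes the same arrays directly, which is exactly what a compiler must translate away on a distributed-memory machine. A minimal illustrative sketch (not code from the paper):

    ```c
    #include <stdio.h>

    /* Sum an array with an OpenMP worksharing loop and reduction.
     * All threads read the same array a[] through shared memory --
     * the assumption the paper's translation scheme works around.
     * If compiled without OpenMP support, the pragma is ignored and
     * the loop runs serially with the same result. */
    double parallel_sum(const double *a, int n) {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += a[i];
        return sum;
    }

    int main(void) {
        double a[1000];
        for (int i = 0; i < 1000; i++)
            a[i] = 1.0;
        printf("sum = %f\n", parallel_sum(a, 1000)); /* sum = 1000.000000 */
        return 0;
    }
    ```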

  • White Papers // Sep 2011

    Motivation, Design, Deployment and Evolution of a Guaranteed Bandwidth Network Service

    Much of modern science is dependent on high-performance distributed computing and data handling. This distributed infrastructure, in turn, depends on high-speed networks and services - especially when the science infrastructure is widely distributed geographically - to enable the science, because the science is dependent on high throughput so...

    Provided By Lawrence Berkeley National Laboratory

  • White Papers // Aug 2011

    Optimized Pre-Copy Live Migration for Memory Intensive Applications

    Live migration is a widely used technique for resource consolidation and fault tolerance. KVM and Xen use iterative pre-copy approaches which work well in practice for commercial applications. In this paper, the authors study pre-copy live migration of MPI and OpenMP scientific applications running on KVM and present a detailed...

    Provided By Lawrence Berkeley National Laboratory
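
    The iterative pre-copy approach the abstract refers to repeatedly transfers the pages dirtied during the previous round, until the remaining dirty set is small enough to stop the VM and copy the rest. A toy model of that convergence behavior (the rate model and all numbers are illustrative assumptions, not the paper's measurements):

    ```c
    #include <stdio.h>

    /* Toy model of iterative pre-copy. Each round re-sends the pages
     * dirtied while the previous round was in flight; here the guest
     * re-dirties a fraction `dirty_rate` of whatever was just sent.
     * Pre-copy stops when the dirty set fits under `stop_threshold`
     * pages (the final stop-and-copy phase) or after `max_rounds`.
     * Returns the number of pre-copy rounds performed. */
    int precopy_rounds(long total_pages, double dirty_rate,
                       long stop_threshold, int max_rounds) {
        long dirty = total_pages;  /* the first round sends everything */
        int rounds = 0;
        while (dirty > stop_threshold && rounds < max_rounds) {
            dirty = (long)(dirty * dirty_rate); /* pages re-dirtied meanwhile */
            rounds++;
        }
        return rounds;
    }

    int main(void) {
        /* A write-light workload converges in a few rounds... */
        printf("light: %d rounds\n", precopy_rounds(1L << 20, 0.10, 1024, 30));
        /* ...while a memory-intensive one (the paper's focus) dirties
         * pages nearly as fast as they are sent and hits the cap. */
        printf("heavy: %d rounds\n", precopy_rounds(1L << 20, 0.95, 1024, 30));
        return 0;
    }
    ```

    The heavy case is why memory-intensive MPI/OpenMP applications are hard for pre-copy: the dirty set shrinks too slowly, so migration either drags on or falls back to a long stop-and-copy.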

  • White Papers // Dec 2010

    Performance Analysis of High Performance Computing Applications on the Amazon Web Services Cloud

    Cloud computing has seen tremendous growth, particularly for commercial web applications. The on-demand, pay-as-you-go model creates a flexible and cost-effective means to access compute resources. For these reasons, the scientific computing community has shown increasing interest in exploring cloud computing. However, the underlying implementation and performance of clouds are very...

    Provided By Lawrence Berkeley National Laboratory

  • White Papers // Nov 2010

    A Flexible Reservation Algorithm for Advance Network Provisioning

    Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservations. Network reservation systems, such as ESnet's OSCARS, establish secure virtual circuits with guaranteed bandwidth for a requested bandwidth and length of time. However, users currently cannot inquire about bandwidth availability,...

    Provided By Lawrence Berkeley National Laboratory
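
    At its core, the availability query described above amounts to scanning the schedule of committed reservations for an interval with enough residual bandwidth. A simplified sketch, assuming a discrete time grid and a single link (the paper's algorithm operates on a full network graph, which this does not attempt):

    ```c
    #include <stdio.h>

    #define SLOTS 24  /* one-hour slots over a day; illustrative only */

    /* committed[t] = bandwidth already reserved in slot t (Gbps).
     * Find the earliest start slot s such that `need` Gbps is free
     * for `duration` consecutive slots on a link of `capacity` Gbps.
     * Returns s, or -1 if no window fits -- the kind of availability
     * query the abstract says users currently cannot make. */
    int earliest_window(const double committed[SLOTS], double capacity,
                        double need, int duration) {
        for (int s = 0; s + duration <= SLOTS; s++) {
            int ok = 1;
            for (int t = s; t < s + duration; t++) {
                if (committed[t] + need > capacity) { ok = 0; break; }
            }
            if (ok) return s;
        }
        return -1;
    }

    int main(void) {
        double committed[SLOTS] = {0};
        for (int t = 2; t < 6; t++)
            committed[t] = 8.0;  /* an already-busy morning block */
        /* 4 Gbps for 3 slots on a 10 Gbps link: slots 2-5 have only
         * 2 Gbps free, so the first feasible start is slot 6. */
        printf("start slot: %d\n",
               earliest_window(committed, 10.0, 4.0, 3)); /* start slot: 6 */
        return 0;
    }
    ```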

  • White Papers // Apr 2010

    Resource Management in the Tessellation Manycore OS

    Tessellation is a manycore OS targeted at the resource management challenges of emerging client devices, including the need for real-time and QoS guarantees. It is predicated on two central ideas: Space-Time Partitioning (STP) and two-level scheduling. STP provides performance isolation and strong partitioning of resources among interacting software components, called...

    Provided By Lawrence Berkeley National Laboratory

  • White Papers // Mar 2010

    Outside the Closed World: On Using Machine Learning for Network Intrusion Detection

    In network intrusion detection research, one popular strategy for finding attacks is monitoring a network's activity for anomalies: deviations from profiles of normality previously learned from benign traffic, typically identified using tools borrowed from the machine learning community. However, despite extensive academic research one finds a striking gap in terms...

    Provided By Lawrence Berkeley National Laboratory

  • White Papers // Jan 2010

    Experiences With TCP/IP Over an ATM OC12 WAN

    This paper discusses the performance testing experiences of a 622.08 Mbps OC12 link. The link will be used for large bulk data transfer, and as such, of interest are both the ATM level throughput rates and end-to-end TCP/IP throughput rates. Tests were done to evaluate the ATM switches, the IP...

    Provided By Lawrence Berkeley National Laboratory

  • White Papers // Apr 2009

    Best Practices for Data Centers: Lessons Learned from Benchmarking 22 Data Centers

    Over the past few years, the authors benchmarked 22 data center buildings. From this effort, they realized that data centers can be over 40 times as energy intensive as conventional office buildings. Studying the more efficient of these facilities enabled them to compile a set of "best-practice" technologies for energy...

    Provided By Lawrence Berkeley National Laboratory

  • White Papers // Mar 2009

    Mathematical and Statistical Opportunities in Cyber Security

    The role of mathematics in a complex system such as the Internet has yet to be deeply explored. This paper summarizes some of the important and pressing problems in cyber security from the viewpoint of open science environments. The author starts by posing the question "What fundamental problems exist within...

    Provided By Lawrence Berkeley National Laboratory

  • White Papers // Dec 2006

    Scientific Computing Kernels on the Cell Processor

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this paper, the authors examine the potential...

    Provided By Lawrence Berkeley National Laboratory