Lawrence Berkeley National Laboratory

Displaying 1-9 of 9 results

  • White Papers // Sep 2011

    Motivation, Design, Deployment and Evolution of a Guaranteed Bandwidth Network Service

    Much of modern science depends on high-performance distributed computing and data handling. This distributed infrastructure, in turn, depends on high-speed networks and services to enable the science, especially when the science infrastructure is widely distributed geographically, because the science depends on high throughput so...

    Provided By Lawrence Berkeley National Laboratory

  • White Papers // Aug 2011

    Optimized Pre-Copy Live Migration for Memory Intensive Applications

    Live migration is a widely used technique for resource consolidation and fault tolerance. KVM and Xen use iterative pre-copy approaches, which work well in practice for commercial applications. In this paper, the authors study pre-copy live migration of MPI and OpenMP scientific applications running on KVM and present a detailed... (A sketch of the pre-copy loop follows this entry.)

    Provided By Lawrence Berkeley National Laboratory
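
    The iterative pre-copy approach named in the abstract is simple to state, even though production hypervisors implement it in C with hardware dirty-page tracking. Below is a minimal Python sketch of the control loop against a toy VM model; ToyVM and all its names are illustrative stand-ins, not KVM or Xen APIs. It also shows why memory-intensive applications are hard: when the guest dirties pages faster than a round can send them, the loop never converges and simply hits its round limit.

        import random

        class ToyVM:
            """A stand-in for a hypervisor interface; real systems track
            dirty pages in hardware rather than simulating writes."""
            def __init__(self, n_pages, writes_per_round):
                self.pages = set(range(n_pages))
                self.writes = writes_per_round
                self.dirty = set()
            def clear_dirty_log(self):
                self.dirty = set()
            def run_a_bit(self):                  # the guest keeps writing
                self.dirty |= {random.randrange(len(self.pages))
                               for _ in range(self.writes)}
            def dirty_pages(self):
                return set(self.dirty)

        def precopy_migrate(vm, max_rounds=30, threshold=50):
            """Send all pages, then iteratively resend dirtied pages until
            few remain; finish with a brief stop-and-copy of the residue."""
            to_send, sent_total = set(vm.pages), 0
            for _ in range(max_rounds):
                vm.clear_dirty_log()
                sent_total += len(to_send)        # "transfer" this round's pages
                vm.run_a_bit()                    # guest dirties pages meanwhile
                to_send = vm.dirty_pages()
                if len(to_send) <= threshold:
                    break                         # converged: downtime will be short
            sent_total += len(to_send)            # stop-and-copy: VM paused here
            return sent_total, len(to_send)

        vm = ToyVM(n_pages=10_000, writes_per_round=200)
        total, downtime_pages = precopy_migrate(vm)
        print(f"pages sent: {total}, pages copied during downtime: {downtime_pages}")

    With a write rate above the threshold, as here, the dirty set plateaus and the migration sends well over the 10,000 pages of actual memory, which is the overhead the paper's optimizations target.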

  • White Papers // Dec 2010

    Performance Analysis of High Performance Computing Applications on the Amazon Web Services Cloud

    Cloud computing has seen tremendous growth, particularly for commercial web applications. The on-demand, pay-as-you-go model creates a flexible and cost-effective means to access compute resources. For these reasons, the scientific computing community has shown increasing interest in exploring cloud computing. However, the underlying implementation and performance of clouds are very...

    Provided By Lawrence Berkeley National Laboratory

  • White Papers // Nov 2010

    A Flexible Reservation Algorithm for Advance Network Provisioning

    Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservations. Network reservation systems such as ESnet's OSCARS establish secure virtual circuits with guaranteed bandwidth for a specified length of time. However, users currently cannot inquire about bandwidth availability,... (A slot-based reservation search is sketched after this entry.)

    Provided By Lawrence Berkeley National Laboratory
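
    To make the reservation problem concrete, here is a minimal Python sketch assuming time is discretized into slots and residual[t] is the unreserved bandwidth of a single link in slot t. OSCARS schedules whole multi-hop circuits, and the slot model and names here are illustrative only, not the paper's algorithm.

        def earliest_feasible_start(residual, bandwidth, duration):
            """First slot t where `bandwidth` fits in all of [t, t + duration)."""
            for t in range(len(residual) - duration + 1):
                if all(residual[t + k] >= bandwidth for k in range(duration)):
                    return t
            return None                           # no window can satisfy the request

        def reserve(residual, bandwidth, duration):
            """Commit the earliest feasible window; return its start slot."""
            t = earliest_feasible_start(residual, bandwidth, duration)
            if t is not None:
                for k in range(duration):
                    residual[t + k] -= bandwidth
            return t

        # Example: a 10 Gbps link with hourly slots, partly reserved in hours 2-3.
        residual = [10, 10, 4, 4, 10, 10, 10, 10]
        print(reserve(residual, bandwidth=8, duration=3))   # -> 4

    A flexible variant, as the abstract hints, would go further: when the requested window cannot be granted, it would report alternative start times or the maximum bandwidth still available, instead of simply rejecting the request.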

  • White Papers // Mar 2010

    Outside the Closed World: On Using Machine Learning for Network Intrusion Detection

    In network intrusion detection research, one popular strategy for finding attacks is monitoring a network's activity for anomalies: deviations from profiles of normality previously learned from benign traffic, typically identified using tools borrowed from the machine learning community. However, despite extensive academic research, one finds a striking gap in terms... (The profile-and-threshold strategy is sketched after this entry.)

    Provided By Lawrence Berkeley National Laboratory
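
    The strategy the abstract describes, learning a profile of normality from benign traffic and flagging deviations, can be reduced to a few lines. The Python sketch below uses a single Gaussian feature and a 3-sigma threshold, both illustrative choices not taken from the paper; the paper's argument is precisely that this apparent simplicity hides hard operational problems.

        import statistics

        def learn_profile(benign_samples):
            """Fit a (mean, stdev) profile to one benign traffic feature,
            e.g. connections per minute from a host."""
            return statistics.mean(benign_samples), statistics.stdev(benign_samples)

        def is_anomalous(value, profile, threshold=3.0):
            """Flag values more than `threshold` standard deviations from normal."""
            mean, stdev = profile
            return abs(value - mean) > threshold * stdev

        benign = [42, 39, 45, 41, 38, 44, 40, 43]    # connections/minute
        profile = learn_profile(benign)
        print(is_anomalous(41, profile))    # False: consistent with the profile
        print(is_anomalous(400, profile))   # True: a large deviation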

  • White Papers // Jan 2010

    Experiences With TCP/IP Over an ATM OC12 WAN

    This paper discusses experiences from performance testing of a 622.08 Mbps OC12 link. The link will be used for large bulk data transfer, so both the ATM-level throughput rates and the end-to-end TCP/IP throughput rates are of interest. Tests were done to evaluate the ATM switches, the IP... (The framing overhead arithmetic is sketched after this entry.)

    Provided By Lawrence Berkeley National Laboratory
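
    Much of the gap between the 622.08 Mbps line rate and achievable TCP throughput is framing arithmetic. The Python sketch below works through it using standard figures (an OC-12c payload of 599.04 Mbps after SONET overhead, 53-byte ATM cells carrying 48 payload bytes, and a 9180-byte classical-IP-over-ATM MTU); these are textbook numbers, not measurements from the paper.

        ATM_CELL_RATE = 599.04e6      # bits/s left after SONET framing on OC-12c
        CELL, PAYLOAD = 53, 48        # ATM cell size and payload, bytes

        # Cell "tax": only 48 of every 53 bytes on the wire carry data.
        atm_payload_rate = ATM_CELL_RATE * PAYLOAD / CELL             # ~542.5 Mbps

        # One 9180-byte IP datagram: 8 bytes LLC/SNAP + 8-byte AAL5 trailer,
        # padded to a whole number of cells; 40 bytes are TCP/IP headers.
        mtu = 9180
        cells = -(-(mtu + 8 + 8) // PAYLOAD)                          # ceiling division
        tcp_rate = atm_payload_rate * (mtu - 40) / (cells * PAYLOAD)  # ~538 Mbps

        print(f"ATM payload ceiling: {atm_payload_rate / 1e6:.1f} Mbps")
        print(f"TCP payload ceiling: {tcp_rate / 1e6:.1f} Mbps")

    Anything measured below these ceilings comes from the end systems and protocol dynamics (TCP windows, switch buffering) rather than from framing.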

  • White Papers // Apr 2009

    Best Practices for Data Centers: Lessons Learned from Benchmarking 22 Data Centers

    Over the past few years, the authors benchmarked 22 data center buildings. From this effort, they realized that data centers can be over 40 times as energy intensive as conventional office buildings. Studying the more efficient of these facilities enabled them to compile a set of "best practice" technologies for energy... (An illustrative PUE calculation follows this entry.)

    Provided By Lawrence Berkeley National Laboratory
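
    A common way to express the kind of efficiency gap such benchmarking reveals is Power Usage Effectiveness, the ratio of total facility power to IT equipment power. The Python sketch below uses made-up loads to show how best-practice measures (better cooling, more efficient power distribution) move the number; it is an illustration, not data from the 22 sites, and the paper may use its own metrics.

        def pue(it_kw, cooling_kw, distribution_loss_kw, lighting_kw):
            """Power Usage Effectiveness: total facility power / IT power.
            1.0 is the ideal; legacy facilities often measure 2.0 or worse."""
            total = it_kw + cooling_kw + distribution_loss_kw + lighting_kw
            return total / it_kw

        before = pue(it_kw=500, cooling_kw=400, distribution_loss_kw=75, lighting_kw=25)
        after = pue(it_kw=500, cooling_kw=150, distribution_loss_kw=40, lighting_kw=10)
        print(f"PUE before: {before:.2f}, after best practices: {after:.2f}")
        # -> PUE before: 2.00, after best practices: 1.40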

  • White Papers // Mar 2009

    Mathematical and Statistical Opportunities in Cyber Security

    The role of mathematics in a complex system such as the Internet has yet to be deeply explored. This paper summarizes some of the important and pressing problems in cyber security from the viewpoint of open science environments. The author starts by posing the question "What fundamental problems exist within...

    Provided By Lawrence Berkeley National Laboratory

  • White Papers // Aug 2008

    The IceCube Data Acquisition Software: Lessons Learned During Distributed, Collaborative, Multi-Disciplined Software Development

    This paper reports on lessons learned during the development of the data acquisition software for the IceCube project - specifically, how to effectively address the unique challenges presented by a distributed, collaborative, multi-institutional, multi-disciplined project such as this. While development progress in software projects is often described solely in terms...

    Provided By Lawrence Berkeley National Laboratory
