University of Wisconsin-La Crosse

  • White Papers // May 2014

    DimmWitted: A Study of Main-Memory Statistical Analytics

    The authors perform the first study of the tradeoff space of access methods and replication to support statistical analytics using first-order methods executed in the main memory of a Non-Uniform Memory Access (NUMA) machine. Statistical analytics systems differ from conventional SQL-analytics in the amount and types of memory incoherence they...

    Provided By University of Wisconsin-La Crosse
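
    The "first-order methods" mentioned above are gradient-style solvers such as stochastic gradient descent (SGD). As a minimal, hypothetical illustration (not the paper's system), the NumPy sketch below runs plain in-memory SGD on a least-squares problem; all names and parameters are invented for the example.

      import numpy as np

      def sgd_least_squares(X, y, lr=0.01, epochs=20, seed=0):
          """Plain in-memory SGD for min_w ||X w - y||^2 (illustrative only)."""
          rng = np.random.default_rng(seed)
          n, d = X.shape
          w = np.zeros(d)
          for _ in range(epochs):
              for i in rng.permutation(n):         # visit one row (tuple) at a time
                  grad = (X[i] @ w - y[i]) * X[i]  # gradient of the per-example loss
                  w -= lr * grad
          return w

      # Toy usage: recover a known weight vector from noisy observations.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(1000, 5))
      w_true = np.arange(5, dtype=float)
      y = X @ w_true + 0.01 * rng.normal(size=1000)
      print(sgd_least_squares(X, y))               # approximately [0, 1, 2, 3, 4]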

  • White Papers // Mar 2014

    Not-So-Random Numbers in Virtualized Linux and the Whirlwind RNG

    Virtualized environments are widely thought to cause problems for software-based Random Number Generators (RNGs), due to use of Virtual Machine (VM) snapshots as well as fewer and believed-to-be lower quality entropy sources. Despite this, the authors are unaware of any published analysis of the security of critical RNGs when running...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Jan 2014

    ViewBox: Integrating Local File Systems with Cloud Storage Services

    Cloud-based file synchronization services have become enormously popular in recent years, both for their ability to synchronize files across multiple clients and for the automatic cloud backups they provide. However, despite the excellent reliability that the cloud back-end provides, the loose coupling of these services and the local file system...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Jan 2014

    Supporting x86-64 Address Translation for 100s of GPU Lanes

    Efficient memory sharing between CPU and GPU threads can greatly expand the effective set of GPGPU workloads. For increased programmability, this memory should be uniformly virtualized, necessitating compatible address translation support for GPU memory references. However, even a modest GPU might need 100s of translations per cycle (6 CUs 64...

    Provided By University of Wisconsin-La Crosse
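
    To put the "100s of translations per cycle" in perspective: assuming the truncated parenthetical refers to 6 compute units (CUs) with 64 lanes each, the peak demand is

      6 \text{ CUs} \times 64 \text{ lanes per CU} = 384 \text{ concurrent memory references per cycle}

    each of which may require its own address translation.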

  • White Papers // Jan 2014

    gem5-gpu: A Heterogeneous CPU-GPU Simulator

    gem5-gpu is a new simulator that models tightly integrated CPU-GPU systems. It builds on gem5, a modular full-system CPU simulator, and GPGPU-Sim, a detailed GPGPU simulator. gem5-gpu routes most memory accesses through Ruby, which is a highly configurable memory system in gem5. By doing this, it is able to simulate...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Jan 2014

    Coherent Network Interfaces for Fine-Grain Communication

    Historically, processor accesses to memory-mapped device registers have been marked uncachable to ensure their visibility to the device. The ubiquity of snooping cache coherence, however, makes it possible for processors and devices to interact with cachable, coherent memory operations. Using coherence can improve performance by facilitating burst transfers of whole...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Sep 2013

    Storage-Class Memory Needs Flexible Interfaces

    Emerging Storage-Class Memory (SCM) technologies bring the best of two worlds: the low latency and random access of memory together with the persistence of disks. With low-latency storage-class memory, software can be a major contributor to access latency. To minimize latency, file system architecture has to provide flexibility in customizing the file...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Aug 2013

    Distribution-Based Query Scheduling

    Query scheduling, a fundamental problem in database management systems, has recently received renewed attention, perhaps in part due to the rise of the "Database as a Service" (DaaS) model for database deployment. While there has been a great deal of work investigating different scheduling algorithms, there has been comparatively...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Aug 2013

    Online, Asynchronous Schema Change in F1

    The authors introduce a protocol for schema evolution in a globally distributed database management system with shared data, stateless servers, and no global membership. Their protocol is asynchronous - it allows different servers in the database system to transition to a new schema at different times - and online -...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Jul 2013

    CMP Directory Coherence: One Granularity Does Not Fit All

    To support legacy software, large CMPs often provide cache coherence via an on-chip directory rather than snooping. In those designs, a key challenge is maximizing the effectiveness of precious on-chip directory state. Most current directory protocols miss an opportunity by organizing all state in per-block records. To increase the "reach"...

    Provided By University of Wisconsin-La Crosse

  • White Papers // May 2013

    Box: Towards Reliability and Consistency in Dropbox-like File Synchronization Services

    Cloud-based file synchronization services, such as Dropbox, have never been more popular. They provide excellent reliability and durability in their server-side storage, and can provide a consistent view of their synchronized files across multiple clients. However, the loose coupling of these services and the local file system may, in some...

    Provided By University of Wisconsin-La Crosse

  • White Papers // May 2013

    Fault Isolation and Quick Recovery in Isolation File Systems

    High availability is critical for file systems. For desktops and laptops, local file systems directly affect data access for the user; for mobile devices, user data is also stored in a local file system; for file and storage servers, a shared cluster file system may be used to store virtual...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Mar 2013

    High-Throughput GPU-Based LDPC Decoding

    A Low-Density Parity-Check (LDPC) code is a linear block code known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. LDPC codes meet the needs of reliable and flexible communication links for a wide...

    Provided By University of Wisconsin-La Crosse
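
    For reference, the iterative sum-product algorithm named above can be sketched in a few lines of Python; this is an assumed textbook formulation for a toy parity-check matrix, not the GPU implementation the paper describes.

      import numpy as np

      def sum_product_decode(H, llr, max_iters=50):
          """Sum-product decoding of a binary LDPC code.
          H: (m, n) parity-check matrix of 0/1 ints.
          llr: length-n channel LLRs, log P(bit=0)/P(bit=1)."""
          m, n = H.shape
          checks = [np.flatnonzero(H[c]) for c in range(m)]    # variables in each check
          vars_ = [np.flatnonzero(H[:, v]) for v in range(n)]  # checks on each variable
          msg_vc = {(v, c): llr[v] for c in range(m) for v in checks[c]}  # var -> check
          msg_cv = {}                                                     # check -> var
          for _ in range(max_iters):
              # Check-node update: m_{c->v} = 2*atanh( prod_{v' != v} tanh(m_{v'->c}/2) ).
              for c in range(m):
                  for v in checks[c]:
                      others = [np.tanh(msg_vc[(u, c)] / 2.0) for u in checks[c] if u != v]
                      msg_cv[(c, v)] = 2.0 * np.arctanh(
                          np.clip(np.prod(others), -0.999999, 0.999999))
              # Variable-node update and tentative hard decision.
              total = np.array([llr[v] + sum(msg_cv[(c, v)] for c in vars_[v])
                                for v in range(n)])
              for v in range(n):
                  for c in vars_[v]:
                      msg_vc[(v, c)] = total[v] - msg_cv[(c, v)]
              x_hat = (total < 0).astype(int)
              if not np.any(H @ x_hat % 2):   # every parity check satisfied: stop early
                  break
          return x_hat

      # Toy usage with the (7,4) Hamming code's parity-check matrix standing in for H.
      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])
      llr = np.array([2.5, -1.8, 3.1, 0.4, 2.2, 1.9, 2.7])   # noisy channel observations
      print(sum_product_decode(H, llr))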

  • White Papers // Jan 2013

    Getting Real: Lessons in Transitioning Research Simulations into Hardware Systems

    Flash-based Solid-State Drives (SSDs) have revolutionized storage with their high performance. Their sophisticated internal mechanisms have led to a plethora of research on how to optimize applications, file systems, and internal SSD designs. Due to the closed nature of commercial devices though, most research on the internals of an SSD,...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Jan 2013

    Warming up Storage-Level Caches with Bonfire

    Large caches in storage servers have become essential for meeting service levels required by applications. Today, these caches often need to be warmed with data due to various scenarios, including dynamic creation of cache space and server restarts that clear cache contents. When large storage caches are warmed at the...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Jan 2013

    Generic Design Patterns for Tunable and High-Performance SSD-Based Indexes

    A number of data-intensive systems require using random hash-based indexes of various forms, e.g., hash tables, Bloom filters, and locality-sensitive hash tables. In this paper, the authors present general SSD optimization techniques that can be used to design a variety of such indexes while ensuring higher performance and easier tunability...

    Provided By University of Wisconsin-La Crosse
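
    One of the index types named above, the Bloom filter, is easy to illustrate: a compact bit array plus several hash functions that answers "possibly present" or "definitely absent". The Python sketch below is a generic illustration under assumed sizes and hash choices, not the paper's SSD-aware design.

      import hashlib

      class BloomFilter:
          """Minimal Bloom filter: m-bit array, k hash positions (illustrative only)."""
          def __init__(self, m_bits=8192, k_hashes=4):
              self.m = m_bits
              self.k = k_hashes
              self.bits = bytearray(m_bits // 8 + 1)

          def _positions(self, key):
              # Derive k bit positions from a single SHA-256 digest of the key.
              digest = hashlib.sha256(key.encode()).digest()
              for i in range(self.k):
                  chunk = int.from_bytes(digest[4 * i:4 * i + 4], "big")
                  yield chunk % self.m

          def add(self, key):
              for pos in self._positions(key):
                  self.bits[pos // 8] |= 1 << (pos % 8)

          def might_contain(self, key):
              # False positives are possible; false negatives are not.
              return all(self.bits[pos // 8] & (1 << (pos % 8))
                         for pos in self._positions(key))

      bf = BloomFilter()
      bf.add("key-42")
      print(bf.might_contain("key-42"), bf.might_contain("key-43"))  # True, (almost certainly) False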

  • White Papers // Jan 2013

    Multirate Media Streaming Using Network Coding

    Multimedia data transfers typically involve large volumes of data. Multirate multicast transmissions using layered source coding are generally used to deliver data streams to heterogeneous receivers. Network coding has been envisioned to increase throughput and deliver higher data rates than conventional source coding or no coding. The paper proposes a...

    Provided By University of Wisconsin-La Crosse
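
    As a toy illustration of the network-coding idea mentioned above, the sketch below (my own simplified assumption, coding over GF(2) with XOR) shows how a relay can combine two packets so that each receiver recovers the packet it is missing from one native packet plus the coded one.

      def xor_bytes(a, b):
          """Combine two equal-length packets over GF(2)."""
          return bytes(x ^ y for x, y in zip(a, b))

      p1 = b"frame-A!"          # packet already held by receiver 2
      p2 = b"frame-B!"          # packet already held by receiver 1
      coded = xor_bytes(p1, p2) # single coded packet broadcast by the relay

      # Receiver 1 combines the coded packet with p1 to recover p2, and vice versa.
      assert xor_bytes(coded, p1) == p2
      assert xor_bytes(coded, p2) == p1
      print("both receivers decoded their missing packet")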

  • White Papers // Sep 2012

    SymDrive: Testing Drivers without Devices

    Device drivers are critical to operating-system reliability, yet are difficult to test and debug. They run in kernel mode, which prohibits the use of many runtime program-analysis tools available for user-mode code, such as Valgrind. Their need for hardware can prevent testing altogether: over two dozen Linux and FreeBSD drivers...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Sep 2012

    Automated Concurrency-Bug Fixing

    Concurrency bugs are widespread in multithreaded programs. Fixing them is time-consuming and error-prone. The authors present CFix, a system that automates the repair of concurrency bugs. CFix works with a wide variety of concurrency-bug detectors. For each failure-inducing interleaving reported by a bug detector, CFix first determines a combination of...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Jul 2012

    Fast Peak-to-Peak Restart for SSD Buffer Pool Extension

    A promising use of flash SSDs in a DBMS is to extend the main memory buffer pool by caching in the SSD selected pages that are evicted from the buffer pool. These schemes have been shown to produce big performance gains in the steady state. Simple methods...

    Provided By University of Wisconsin-La Crosse
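
    A rough mental model of the SSD buffer-pool extension described above (my own simplification, not the paper's restart algorithm): pages evicted from the in-memory pool are demoted to an SSD tier and can be served from there instead of going back to disk.

      from collections import OrderedDict

      class TieredBufferPool:
          """Toy two-tier cache: a small RAM LRU backed by a larger 'SSD' LRU."""
          def __init__(self, ram_pages, ssd_pages, read_from_disk):
              self.ram = OrderedDict()
              self.ssd = OrderedDict()
              self.ram_cap, self.ssd_cap = ram_pages, ssd_pages
              self.read_from_disk = read_from_disk  # callable: page_id -> bytes

          def get(self, page_id):
              if page_id in self.ram:                  # RAM hit
                  self.ram.move_to_end(page_id)
                  return self.ram[page_id]
              if page_id in self.ssd:                  # SSD hit: promote to RAM
                  data = self.ssd.pop(page_id)
              else:                                    # miss: fetch from disk
                  data = self.read_from_disk(page_id)
              self._insert_ram(page_id, data)
              return data

          def _insert_ram(self, page_id, data):
              self.ram[page_id] = data
              if len(self.ram) > self.ram_cap:         # evict the coldest RAM page...
                  victim, vdata = self.ram.popitem(last=False)
                  self.ssd[victim] = vdata             # ...and demote it to the SSD tier
                  if len(self.ssd) > self.ssd_cap:
                      self.ssd.popitem(last=False)

      pool = TieredBufferPool(ram_pages=2, ssd_pages=4,
                              read_from_disk=lambda p: b"page-%d" % p)
      for p in [1, 2, 3, 1, 2, 3]:
          pool.get(p)                                  # later accesses hit the SSD tier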

  • White Papers // Apr 2012

    Operating Systems Should Manage Accelerators

    The inexorable demand for computing power has led to increasing interest in accelerator-based designs. An accelerator is a specialized hardware unit that can perform a set of tasks with much higher performance or power efficiency than a general-purpose CPU. They may be embedded in the pipeline as a functional unit, as...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Feb 2012

    De-Indirection for Flash-Based SSDs With Nameless Writes

    The authors present Nameless Writes, a new device interface that removes the need for indirection in modern Solid-State storage Devices (SSDs). Nameless writes allow the device to choose the location of a write; only then is the client informed of the name (i.e., address) where the block now resides. Doing...

    Provided By University of Wisconsin-La Crosse
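
    To make the interface concrete: under nameless writes the client submits only data, the device picks a physical location, and the chosen address flows back to the client, which records it in its metadata. The Python sketch below is a hypothetical illustration of that contract, not the authors' actual device interface.

      class NamelessDevice:
          """Toy block device exposing a nameless-write style interface."""
          def __init__(self, num_blocks):
              self.blocks = [None] * num_blocks
              self.next_free = 0                   # stand-in for the device's allocator

          def nameless_write(self, data):
              """The device picks the location; the chosen address is returned."""
              addr = self.next_free
              self.blocks[addr] = data
              self.next_free += 1
              return addr

          def read(self, addr):
              return self.blocks[addr]

      # The client (e.g., a file system) records the returned address in its metadata.
      dev = NamelessDevice(num_blocks=8)
      inode_pointers = [dev.nameless_write(b"block-0"), dev.nameless_write(b"block-1")]
      assert dev.read(inode_pointers[1]) == b"block-1"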

  • White Papers // Feb 2012

    On the Dispersions of Three Network Information Theory Problems

    The authors characterize fundamental limits for distributed lossless source coding (the Slepian-Wolf problem), the multiple-access channel and the asymmetric broadcast channel in the finite blocklength setting. For the Slepian-Wolf problem, they introduce a fundamental quantity known as the entropy dispersion matrix, which is analogous to the scalar dispersion quantities that...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Jan 2012

    Consistency Without Ordering

    Modern file systems use ordering points to maintain consistency in the face of system crashes. However, such ordering leads to lower performance, higher complexity, and a strong and perhaps naive dependence on lower layers to correctly enforce the ordering of writes. In this paper, the authors introduce the No-order File...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Jan 2012

    Finite Blocklength Slepian-Wolf Coding

    The authors characterize the fundamental limits for distributed lossless source coding (Slepian-Wolf) in the finite blocklength regime. They introduce a fundamental quantity known as the entropy dispersion matrix, which is analogous to scalar dispersion quantities. They show that if this matrix is positive-definite, the optimal rate region under the constraint...

    Provided By University of Wisconsin-La Crosse
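
    For context, the classical (asymptotic) Slepian-Wolf region for two correlated sources X_1 and X_2 is the set of rate pairs satisfying the entropy constraints below; the paper's contribution is a second-order, finite-blocklength refinement of this region driven by the entropy dispersion matrix (see the paper for the exact statement).

      R_1 \ge H(X_1 \mid X_2), \qquad
      R_2 \ge H(X_2 \mid X_1), \qquad
      R_1 + R_2 \ge H(X_1, X_2)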

  • White Papers // Oct 2011

    Canonical Estimation in a Rare-Events Regime

    The authors propose a general methodology for performing statistical inference within a 'rare-events regime' that was recently suggested by Wagner, Viswanath and Kulkarni. Their approach allows one to easily establish consistent estimators for a very large class of canonical estimation problems, in a large alphabet setting. These include the problems...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Sep 2011

    When Free Is Not Really Free: What Does It Cost to Run a Database Workload in the Cloud?

    One of the greatest hurdles associated with deploying traditional on-site Relational DataBase Management Systems (RDBMSs) is the overall complexity of choosing, configuring, and maintaining the RDBMS as well as the server it operates on. The current computing trend towards cloud-based Database-as-a-Service (DaaS) as an alternative to traditional on-site Relational DataBase...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Apr 2011

    Quarantine: Fault Tolerance for Concurrent Servers with Data-Driven Selective Isolation

    Commodity CPUs are now parallel processors. Parallelism has become a tenet in most processor designs, with some researchers suggesting that the era of manycore systems consisting of thousands of cores is fast approaching. The authors present Quarantine, a system that enables data-driven selective isolation within concurrent server applications. Instead...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Apr 2011

    Coerced Cache Eviction and Discreet Mode Journaling: Dealing with Misbehaving Disks

    The authors present Coerced Cache Eviction (CCE), a new method to force writes to disk in the presence of a disk cache that does not properly obey write-cache configuration or flush requests. They demonstrate the utility of CCE by building a new journaling mode within the Linux ext3 file system....

    Provided By University of Wisconsin-La Crosse

  • White Papers // Mar 2011

    Rethinking Query Processing for Energy Efficiency: Slowing Down to Win the Race

    The biggest change in the TPC benchmarks in over two decades is now well underway - namely the addition of an energy efficiency metric along with traditional performance metrics. This change is fueled by the growing, real, and urgent demand for energy-efficient database processing. Database query processing engines must now...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Dec 2010

    Making the Common Case the Only Case with Anticipatory Memory Allocation

    The authors present Anticipatory Memory Allocation (AMA), a new method to build kernel code that is robust to memory allocation failures. AMA avoids the usual difficulties in handling allocation failures through a novel combination of static and dynamic techniques. Specifically, a developer, with assistance from AMA static analysis tools, determines...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Jun 2010

    Removing The Costs Of Indirection in Flash-based SSDs with Nameless Writes

    The authors present nameless writes, a new interface that obviates the need for indirection in modern Solid-State storage Devices (SSDs). Nameless writes allow the device to pick the location of a write and only then inform the client above of the decision. Doing so keeps control of block allocation decisions...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Feb 2010

    Estimating The Effects Of Dormitory Living On Student Performance

    Many large universities require freshmen to live in dormitories on the basis that living on campus leads to better classroom performance and lower dropout incidence. Large universities also provide a number of academic services in dormitories such as tutoring and student organizations that encourage an environment conducive to learning....

    Provided By University of Wisconsin-La Crosse

  • White Papers // Feb 2010

    End-to-End Data Integrity for File Systems: A ZFS Case Study

    The authors present a study of the effects of disk and memory corruption on file system data integrity. Their analysis focuses on Sun's ZFS, a modern commercial offering with numerous reliability mechanisms. Through careful and thorough fault injection, they show that ZFS is robust to a wide range of disk...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Nov 2009

    Password Authentication from a Human Factors Perspective: Results of a Survey among End-Users

    Considering that many organizations today are extremely dependent on information technology, Computer and Information Security (CIS) has become a critical concern from a business viewpoint. CIS is concerned with protecting the confidentiality, integrity, and accessibility of information when using computer systems. Much research has been conducted on CIS in recent years....

    Provided By University of Wisconsin-La Crosse

  • White Papers // Oct 2009

    Discovery-Driven Graph Summarization

    Large graph datasets are ubiquitous in many domains, including social networking and biology. Graph summarization techniques are crucial in such domains as they can assist in uncovering useful insights about the patterns hidden in the underlying data. One important type of graph summarization is to produce small and informative summaries...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Sep 2009

    Why panic()? Improving Reliability with Restartable File Systems

    The file system is one of the most critical components of the operating system. Almost all applications running in the operating system require file systems to be available for their proper operation. Though file-system availability is critical in many cases, very little work has been done on tolerating file system...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Jun 2009

    Complexity of Component-Based Development of Embedded Systems

    The paper discusses complexity of Component-Based Development (CBD) of embedded systems. Although CBD has its merits, it must be augmented with methods to control the complexities that arise due to resource constraints, timeliness, and run-time deployment of components in embedded system development. Software component specification, system-level testing, and run-time reliability...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Oct 2008

    SQCK: A Declarative File System Checker

    The lowly state of the art for file system checking and repair does not match what is needed to keep important data available for users. Current file system checkers, such as e2fsck, are complex pieces of imperfect code written in low-level languages. The authors introduce SQCK, a file system checker...

    Provided By University of Wisconsin-La Crosse
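
    The "declarative" idea above can be pictured as loading file-system metadata into tables and phrasing each consistency rule as a query. The sketch below is a hypothetical illustration using Python's built-in sqlite3; the table layout, column names, and check are invented and are not SQCK's actual schema or rules.

      import sqlite3

      # Hypothetical metadata tables; SQCK's real schema and checks differ.
      db = sqlite3.connect(":memory:")
      db.executescript("""
          CREATE TABLE inodes (ino INTEGER PRIMARY KEY, link_count INTEGER);
          CREATE TABLE dirents (dir_ino INTEGER, name TEXT, child_ino INTEGER);
      """)
      db.executemany("INSERT INTO inodes VALUES (?, ?)", [(2, 0), (11, 2), (12, 2)])
      db.executemany("INSERT INTO dirents VALUES (?, ?, ?)",
                     [(2, "a", 11), (2, "b", 12), (11, "c", 12)])

      # Declarative check: an inode's recorded link count must match the number of
      # directory entries that reference it.
      violations = db.execute("""
          SELECT i.ino, i.link_count, COUNT(d.child_ino) AS actual
          FROM inodes i LEFT JOIN dirents d ON d.child_ino = i.ino
          GROUP BY i.ino, i.link_count
          HAVING i.link_count != COUNT(d.child_ino)
      """).fetchall()
      print(violations)   # inode 11 claims 2 links but only 1 dirent points to it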

  • White Papers // May 2008

    Towards Realistic File-System Benchmarks with CodeMRI

    Benchmarks are crucial to understanding software systems and assessing their performance. In file-system research, synthetic benchmarks are accepted and widely used as substitutes for more realistic and complex workloads. However, synthetic benchmarks are largely based on the benchmark writer's interpretation of the real workload, and how it exercises the system...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Jan 2008

    EIO: Error Handling is Occasionally Correct

    The reliability of file systems depends in part on how well they propagate errors. The authors develop a static analysis technique, EDP, that analyzes how file systems and storage device drivers propagate error codes. Running their EDP analysis on all file systems and 3 major storage device drivers in Linux...

    Provided By University of Wisconsin-La Crosse

  • White Papers // Mar 2006

    Dependability Analysis of Virtual Memory Systems

    Recent research has shown that even modern hard disks have complex failure modes that do not conform to "fail-stop" operation. Disks exhibit partial failures like block access errors and block corruption. Commodity operating systems are required to deal with such failures as commodity hard disks are known to be failure-prone....

    Provided By University of Wisconsin-La Crosse

  • White Papers // Oct 2007

    Future Superscalar Processors Based on Instruction Compounding

    Future processor cores will achieve both high performance and power efficiency by using relatively simple hardware designs. This paper proposes and describes such a future processor in some detail, to illustrate the key features of future designs. A key feature of the future processor is the use of compounded or...

    Provided By University of Wisconsin-La Crosse

  • White Papers // May 2008

    Combinational Test Generation for Acyclic Sequential Circuits Using a Balanced ATPG Model

    To create a combinational ATPG model for an acyclic sequential circuit, all unbalanced fanouts, i.e., fanouts reconverging with different sequential depths, are moved toward primary inputs using a retiming-like transformation. All flip-flops are then shorted and unbalanced primary input fanouts are split as additional primary inputs. A combinational test vector...

    Provided By University of Wisconsin-La Crosse
