Carnegie Mellon University

  • White Papers // Jan 2010

    Configuring Your Web Browser and Using WebISO

    To be compatible with the Carnegie Mellon Web Portal and services provided by Administrative Computing, Computing Services and the Office of Technology for Education, your web browser must meet the following requirements: 1) The browser must be configured to accept cookies. 2) The browser must be configured to run JavaScript. 3)...

    Provided By Carnegie Mellon University

  • White Papers // Jan 2010

    Dynamic Source Routing in Ad Hoc Wireless Networks

    An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration. In such an environment, it may be necessary for one mobile host to enlist the aid of other hosts in forwarding a packet to its...

    Provided By Carnegie Mellon University
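
    The flooding-based route discovery this line of work describes can be sketched in a few lines. Below is a minimal simulation, assuming a static topology given as an adjacency dict; the node names and the `discover_route` helper are invented for illustration, not taken from the paper.

```python
# A minimal sketch of DSR-style route discovery over a static topology.
# Each route request carries a route record that grows hop by hop.
from collections import deque

def discover_route(graph, src, dst):
    """Flood a route request; each hop appends itself to the route record.
    Returns the first complete source route found (BFS order), or None."""
    queue = deque([[src]])          # each entry is a partial route record
    seen = {src}                    # nodes that already rebroadcast the request
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == dst:
            return route            # a route reply would carry this back to src
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(route + [neighbor])
    return None

links = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(discover_route(links, "A", "E"))  # shortest-hop route record
```

    In the real protocol the request is rebroadcast wirelessly and the reply retraces the recorded route; the BFS here only captures the "enlist other hosts to forward" idea.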

  • White Papers // Jan 2010

    Physical Layer-Constrained Routing in Ad-hoc Wireless Networks: A Modified AODV Protocol with Power Control

    Routing in Ad Hoc wireless networks is not only a problem of finding a route with shortest length, but it is also a problem of finding a stable and good quality communication route in order to avoid any unnecessary packet loss. In this paper, authors propose a modified ad hoc...

    Provided By Carnegie Mellon University

  • White Papers // Jan 2010

    On the Approximability of Some Network Design Problems

    Approximation algorithms have had much success in the area of network design, with both combinatorial and linear-programming based techniques leading to many constant factor approximation algorithms. Despite these successes, several basic network design problems have eluded the quest for constant-factor approximations, with the current best approximation guarantees being logarithmic or...

    Provided By Carnegie Mellon University

  • White Papers // Jan 2010

    A Resource Allocation Model for QoS Management

    Quality of Service (QoS) has been receiving wide attention in recent years in many research communities including networking, multimedia systems and distributed systems. In large distributed systems such as those used in defense systems, on demand service and inter-networked systems, applications contending for system resources must satisfy timing, reliability and...

    Provided By Carnegie Mellon University

  • Webcasts // Jan 2010

    SoS Architecture Evaluation and Quality Attribute Specification

    A System of Systems (SoS) can experience costly rework, schedule overruns, and failure to achieve performance goals, stemming from problems that surface late in the development life cycle or after the SoS is in operation. One prominent reason for these severe integration and operational problems is inconsistency, ambiguity, and omission in...

    Provided By Carnegie Mellon University

  • White Papers // Jan 2010

    Modeling TCP-Vegas Under On/Off Traffic

    There has been a significant amount of research toward modeling variants of the Transmission Control Protocol (TCP) in order to understand the impact of this protocol on file transmission times and network utilization. Analytical models have emerged as a way to reduce the time required for evaluation when compared with...

    Provided By Carnegie Mellon University

  • White Papers // Dec 2009

    Efficient Similarity Estimation for Systems Exploiting Data Redundancy

    Many modern systems exploit data redundancy to improve efficiency. These systems split data into chunks, generate identifiers for each of them, and compare the identifiers among other data items to identify duplicate chunks. As a result, chunk size becomes a critical parameter for the efficiency of these systems: it trades...

    Provided By Carnegie Mellon University
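
    The chunk-size trade-off the abstract alludes to is easy to see in a toy model. The sketch below assumes naive fixed-size chunking; real systems often use content-defined chunk boundaries, which this deliberately omits.

```python
# A toy illustration of chunk-based redundancy detection: split data into
# chunks, hash each chunk into an identifier, and count repeated identifiers.
import hashlib

def chunk_ids(data: bytes, chunk_size: int):
    """Split data into fixed-size chunks and return one SHA-1 id per chunk."""
    return [hashlib.sha1(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def duplicate_fraction(data: bytes, chunk_size: int) -> float:
    """Fraction of chunks whose identifier was already seen."""
    ids = chunk_ids(data, chunk_size)
    return 1 - len(set(ids)) / len(ids)

data = b"abcdabcdabcdabcdXYZ!" * 4
# Smaller chunks expose more redundancy, at the cost of more identifiers
# to store and compare -- the efficiency trade the abstract describes.
print(duplicate_fraction(data, 4), duplicate_fraction(data, 16))
```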

  • White Papers // Dec 2009

    A Contractual Anonymity System

    The authors propose, develop, and implement techniques for achieving contractual anonymity. In contractual anonymity, a user and service provider enter into an anonymity contract. The user is guaranteed anonymity and message unlinkability from the contractual anonymity system unless she breaks the contract. The service provider is guaranteed that it can...

    Provided By Carnegie Mellon University

  • White Papers // Dec 2009

    Coding Without Your Crystal Ball: Unanticipated Object-Oriented Reuse

    In many ways, existing languages place unrealistic expectations on library and framework designers, allowing some varieties of client reuse only if it is explicitly - sometimes manually - supported. Instead, the authors should aim for the ideal: a language design that reduces the amount of prognostication that is required on...

    Provided By Carnegie Mellon University

  • White Papers // Nov 2009

    LSM-Based Secure System Monitoring Using Kernel Protection Schemes

    Monitoring a process and its file I/O behaviors is important for security inspection of a data center server against intrusions, malware infection and information leakage. In the case of the Linux kernel 2.6, a set of hook functions called the Linux Security Module (LSM) has been implemented in order to...

    Provided By Carnegie Mellon University

  • White Papers // Nov 2009

    BitShred: Fast, Scalable Code Reuse Detection in Binary Code

    Many experts believe that new malware is created at a rate faster than legitimate software. For example, in 2007 over one million new malware samples were collected by a major security solution vendor. However, it is often speculated, though to the best of the authors' knowledge unproven, that new malware...

    Provided By Carnegie Mellon University
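
    The underlying similarity computation can be illustrated with Jaccard similarity over byte n-grams. Note this is a generic stand-in for the idea, not the authors' exact fingerprinting scheme: BitShred hashes fingerprints into compact bit vectors to reach tera-scale, which this small-data sketch skips.

```python
# A hedged sketch of code-similarity scoring in the spirit of code reuse
# detection: Jaccard similarity over byte 4-gram sets of two binaries.
def ngrams(data: bytes, n: int = 4):
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard(a: bytes, b: bytes, n: int = 4) -> float:
    fa, fb = ngrams(a, n), ngrams(b, n)
    return len(fa & fb) / len(fa | fb)

# Hypothetical byte strings standing in for disassembled malware samples.
original  = b"push ebp; mov ebp, esp; call unpack; ret"
variant   = b"push ebp; mov ebp, esp; call decode; ret"
unrelated = b"the quick brown fox jumps over the lazy dog"
print(jaccard(original, variant) > jaccard(original, unrelated))  # True
```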

  • White Papers // Nov 2009

    Computer Generation of Efficient Software Viterbi Decoders

    This paper presents a program generator for fast software Viterbi decoders for arbitrary convolutional codes. The input to the generator is a specification of the code and a single-instruction multiple-data (SIMD) vector length. The output is an optimized C implementation of the decoder that uses explicit Intel SSE vector instructions....

    Provided By Carnegie Mellon University
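
    As a reference point for what such generated decoders compute, here is a plain-Python add-compare-select sketch for the standard K=3, rate-1/2 convolutional code (generators 7 and 5 octal). The paper's generator emits optimized SIMD C; this sketch only illustrates the recurrence, not the vectorization.

```python
G = (0b111, 0b101)  # generator polynomials for the K=3, rate-1/2 code

def encode(bits):
    """Convolutionally encode; state holds the two previous input bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received, nbits):
    """Hard-decision Viterbi decoding via the add-compare-select recurrence."""
    INF = float("inf")
    metrics = [0] + [INF] * 3              # start in the all-zero state
    paths = [[] for _ in range(4)]
    for t in range(nbits):
        r = received[2 * t: 2 * t + 2]
        new_m, new_p = [INF] * 4, [None] * 4
        for state in range(4):
            if metrics[state] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | state
                expect = [bin(reg & g).count("1") & 1 for g in G]
                cost = metrics[state] + sum(x != y for x, y in zip(expect, r))
                nxt = reg >> 1
                if cost < new_m[nxt]:      # add-compare-select
                    new_m[nxt], new_p[nxt] = cost, paths[state] + [b]
        metrics, paths = new_m, new_p
    return paths[min(range(4), key=lambda s: metrics[s])]

print(viterbi(encode([1, 1, 0, 1]), 4))  # recovers [1, 1, 0, 1]
```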

  • Webcasts // Nov 2009

    Talking Architects With Len Bass

    In this webcast, the presenter talks with Len Bass, co-author of Software Architecture in Practice, about how quality attributes (non-functional requirements) can be treated as "first-class citizens" of a project in an agile development environment (20 minutes, 15 seconds).

    Provided By Carnegie Mellon University

  • White Papers // Oct 2009

    When and How to Change Quorums on Wide Area Networks

    In wide-area settings, unpredictable events, such as flash crowds caused by nearly instantaneous popularity of services, can cause servers that are expected to respond quickly to instead suddenly respond slowly. This presents a problem for achieving consistently good performance in quorum-based distributed systems, in which clients must choose which quorums...

    Provided By Carnegie Mellon University

  • White Papers // Oct 2009

    Online and Stochastic Survivable Network Design

    This paper discusses online and stochastic network design, focusing on the edge-connectivity survivable network design problem: given a graph with edge costs and edge-connectivity requirements between sets of vertices, find a minimum-cost network that provides the required connectivity. This problem has been known to admit good approximation...

    Provided By Carnegie Mellon University

  • White Papers // Oct 2009

    Access Control for Home Data Sharing: Attitudes, Needs and Practices

    As digital content becomes more prevalent in the home, nontechnical users are increasingly interested in sharing that content with others and accessing it from multiple devices. Not much is known about how these users think about controlling access to this data. To better understand this, the authors conducted semi-structured, in-situ...

    Provided By Carnegie Mellon University

  • White Papers // Oct 2009

    Tree Embeddings for Two-Edge-Connected Network Design

    The group Steiner problem is a classical network design problem where the authors are given a graph and a collection of groups of vertices, and want to build a min-cost subgraph that connects the root vertex to at least one vertex from each group. What if they wanted to build...

    Provided By Carnegie Mellon University

  • White Papers // Oct 2009

    A Random Dynamical Systems Approach to Filtering in Large-Scale Networks

    Networked Control Systems (NCS) have been proposed as the paradigm to model, design and analyze control systems where the effects of computation and communication on the performance of the closed loop system cannot be neglected and need to be incorporated in the model. NCS are amenable to describe large-scale systems...

    Provided By Carnegie Mellon University

  • White Papers // Oct 2009

    A Narrow Waist for Multipath Routing

    Many applications can use multipath routing to improve reliability or throughput, and many multipath routing protocols exist. Despite this diversity of mechanisms and applications, no common interface exists to allow an application to select these paths. This paper presents a design for such a common interface, called path bits. Path...

    Provided By Carnegie Mellon University

  • White Papers // Oct 2009

    Understanding and Maturing the Data-Intensive Scalable Computing Storage Substrate

    Modern science has available to it, and is more productively pursued with, massive amounts of data, typically either gathered from sensors or output from some simulation or processing. Data Intensive Scalable Computing (DISC) couples computational resources with the data storage and access capabilities to handle massive data science quickly and...

    Provided By Carnegie Mellon University

  • White Papers // Oct 2009

    Secure Design Patterns

    The cost of fixing system vulnerabilities and the risk associated with vulnerabilities after system deployment are high for both developers and end users. While there are a number of best practices available to address the issue of software security vulnerabilities, these practices are often difficult to reuse due to the...

    Provided By Carnegie Mellon University

  • Webcasts // Oct 2009

    The Survivability Analysis Framework (SAF)

    This webinar was developed to address the following research questions: How can mission survivability be maintained as interoperability of systems increases? How can operational impacts (such as information security) be tied to technology changes in operational mission execution?

    Provided By Carnegie Mellon University

  • White Papers // Sep 2009

    Authenticated Communication and Computation in Known-Topology Networks With a Trusted Authority

    The authors show that two distinguishing properties of sensor networks, i.e., the presence of a trusted base station, and the pre-knowledge of the fixed network topology, can yield security protocols that are both communication-efficient and highly general. They show new protocols for broadcast authentication, credential dissemination and node-to-node signatures. For...

    Provided By Carnegie Mellon University
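
    Communication-efficient broadcast authentication with a trusted base station typically builds on one-way hash chains. Below is a generic sketch of that primitive as an illustration of the idea, not the paper's exact protocols; the `make_chain`/`verify` names are invented here.

```python
# One-way hash-chain commitment, the primitive behind many broadcast
# authentication schemes (e.g. TESLA-style delayed key disclosure).
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """chain[i] = h^i(seed). The base station publishes chain[n] as the
    commitment, then discloses chain[n-1], chain[n-2], ... over time."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

def verify(disclosed: bytes, commitment: bytes, max_steps: int) -> bool:
    """A receiver checks a disclosed value by hashing forward until it
    reaches the commitment (or any previously verified chain value)."""
    x = disclosed
    for _ in range(max_steps):
        x = h(x)
        if x == commitment:
            return True
    return False

chain = make_chain(b"secret-seed", 10)
commitment = chain[10]
print(verify(chain[7], commitment, 5))   # True: h^3(chain[7]) == chain[10]
print(verify(b"forged", commitment, 5))  # False
```

    The one-way property means a forger who sees disclosed values cannot compute the not-yet-disclosed earlier elements, which is what makes the scheme cheap enough for sensor nodes.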

  • White Papers // Sep 2009

    Distributed Consensus Algorithms in Sensor Networks: Quantized Data and Random Link Failures

    The paper studies the problem of distributed average consensus in sensor networks with quantized data and random link failures. To achieve consensus, dither (small noise) is added to the sensor states before quantization. When the quantizer range is unbounded (countable number of quantizer levels), stochastic approximation shows that consensus is...

    Provided By Carnegie Mellon University
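
    For context, the basic quantization-free average-consensus recursion the paper builds on looks like the sketch below; in the paper, dither and quantization are applied to each state before it is exchanged. The weight and ring topology here are arbitrary illustrative choices.

```python
# Distributed average consensus: each node mixes its state with its
# neighbors' states. With a doubly-stochastic mixing rule the network
# average is preserved every step, so all states converge to it.
def consensus_step(states, neighbors, weight=0.3):
    return [x + weight * sum(states[j] - x for j in neighbors[i])
            for i, x in enumerate(states)]

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # 4-node ring
states = [4.0, 0.0, 2.0, 6.0]                         # average is 3.0
for _ in range(50):
    states = consensus_step(states, ring)
print(states)  # every state approaches the initial average, 3.0
```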

  • White Papers // Sep 2009

    A Study of User-Friendly Hash Comparison Schemes

    Several security protocols require a human to compare two hash values to ensure successful completion. When the hash values are represented as long sequences of numbers, humans may make a mistake or require significant time and patience to accurately compare the hash values. To improve usability during comparison, a number...

    Provided By Carnegie Mellon University
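
    One family of schemes studied in this space renders hash bits as pronounceable words. A toy sketch follows; the eight-entry `WORDS` list is invented for illustration, whereas deployed schemes use large curated lists (such as the PGP word list) so each word carries more bits.

```python
# Render the low bits of a digest as short words that two humans can
# read aloud and compare, instead of long hexadecimal strings.
import hashlib

WORDS = ["acid", "bloom", "cedar", "delta", "ember", "flint", "grove",
         "heron"]  # hypothetical 8-word list -> 3 bits per word

def hash_to_words(digest: bytes, count: int = 4):
    bits = int.from_bytes(digest, "big")
    out = []
    for _ in range(count):
        out.append(WORDS[bits & 0b111])  # consume 3 bits per word
        bits >>= 3
    return " ".join(out)

digest = hashlib.sha256(b"session-key").digest()
print(hash_to_words(digest))  # same key material -> same four words
```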

  • White Papers // Sep 2009

    Weight Optimization for Consensus Algorithms With Correlated Switching Topology

    The authors design the weights in consensus algorithms with spatially correlated random topologies. These arise in networks with spatially correlated random link failures and in networks with randomized averaging protocols. They show that the weight optimization problem is convex for both symmetric and asymmetric random graphs. With symmetric random networks, they...

    Provided By Carnegie Mellon University

  • White Papers // Sep 2009

    Privacy-Preserving Relationship Path Discovery in Social Networks

    As social networking sites continue to proliferate and are used for an increasing variety of purposes, the privacy risks raised by these sites' full access to user data become uncomfortable. A decentralized social network would help alleviate this problem, but offering the functionalities of social networking...

    Provided By Carnegie Mellon University

  • White Papers // Sep 2009

    A Language for Large Ensembles of Independently Executing Nodes

    The authors address how to write programs for distributed computing systems in which the network topology can change dynamically. Examples of such systems, which they call ensembles, include programmable sensor networks (where the network topology can change due to failures in the nodes or links) and modular robotics systems (whose...

    Provided By Carnegie Mellon University

  • White Papers // Sep 2009

    SmartCard Prototype

    The SmartCard is envisioned as a tool for users to conveniently access the results of the 300 Cities virtual experiment. In that study, the effectiveness of various strategies for modifying taxpaying behavior will be examined by means of simulation, using demographically varied population sets representative of specific urban areas. The...

    Provided By Carnegie Mellon University

  • White Papers // Sep 2009

    Hyrax: Cloud Computing on Mobile Devices Using MapReduce

    Today's smartphones operate independently of each other, using only local computing, sensing, networking, and storage capabilities and functions provided by remote Internet services. It is generally difficult or expensive for one smartphone to share data and computing resources with another. Data is shared through centralized services, requiring expensive uploads and...

    Provided By Carnegie Mellon University

  • White Papers // Sep 2009

    Formal Methods for Privacy

    Privacy means something different to everyone. Against a vast and rich canvas of diverse types of privacy rights and violations, the authors argue technology's dual role in privacy: new technologies raise new threats to privacy rights and new technologies can help preserve privacy. Formal methods, as just one class of...

    Provided By Carnegie Mellon University

  • Podcasts // Sep 2009

    The Smart Grid: Managing Electrical Power Distribution and Use

    In this podcast, the speaker discusses the growing digitization of electrical power distribution (referred to as the smart grid) and some of the related security and privacy issues. He also introduces new work at the SEI on a smart grid maturity model that will be discussed in more detail in...

    Provided By Carnegie Mellon University

  • Webcasts // Sep 2009

    How to Effectively Evaluate Software Architecture and Identify Risks

    Software architecture is critical for business success. Think about it. Solid architecture prevents defects and system failures. It saves money and gets quality products to the market faster. Most software-reliant systems are required to be modifiable and reliable. They may also need to be secure, interoperable, and portable.

    Provided By Carnegie Mellon University

  • White Papers // Aug 2009

    High Dimensional Consensus in Large-Scale Networks: Theory and Applications

    In this paper, the authors develop the theory of High Dimensional Consensus (HDC), a general class of distributed algorithms in large-scale networks. HDC relies only on local information, local communication, and low-order computation, and, hence, is ideally suited to implement network tasks under resource constraints, e.g., in sparse networks with...

    Provided By Carnegie Mellon University

  • White Papers // Aug 2009

    A Flexible Approach to Embedded Network Multicast Authentication

    Distributed embedded systems are becoming increasingly vulnerable to attack as they are connected to external networks. Unfortunately, they often have no built-in authentication capability. Multicast authentication mechanisms required to secure embedded networks must function within the unique constraints of these systems, making it difficult to apply previously proposed schemes. The...

    Provided By Carnegie Mellon University

  • White Papers // Aug 2009

    Impact of Clustering on the BER Performance of Ad Hoc Wireless Networks

    Ad hoc wireless networks are characterized by multi-hop radio communications. The spatial distribution of the nodes is seldom perfectly regular. In particular, in a realistic ad hoc wireless network communication scenario, the nodes are likely to be clustered, i.e., to configure themselves in subgroups such that the nodes inside each...

    Provided By Carnegie Mellon University

  • White Papers // Aug 2009

    An Empirical Analysis of Mobile Voice and SMS Service: A Structural Model

    In addition to the wireless telephony boom, a similar exponentially increasing trend in wireless data services, for example Short Message Service (SMS), is visible as technology advances. The authors develop a structural model to examine user demand for voice and SMS services. Specifically, they measure the own- and the cross-price...

    Provided By Carnegie Mellon University

  • White Papers // Aug 2009

    Chip-Level Redundancy in Distributed Shared-Memory Multiprocessors

    Distributed Shared-Memory (DSM) multiprocessors provide a scalable hardware platform, but lack the necessary redundancy for mainframe-level reliability and availability. Chip-level redundancy in a DSM server faces a key challenge: the increased latency to check results among redundant components. To address performance overheads, the authors propose a checking filter that reduces...

    Provided By Carnegie Mellon University

  • White Papers // Aug 2009

    A Supervised Factorial Acoustic Model for Simultaneous Multiparticipant Vocal Activity Detection in Close-Talk Microphone Recordings of Meetings

    The authors have implemented a supervised acoustic model (AM) for Vocal Activity Detection (VAD) in conversations with an arbitrary number of participants, and analyzed its performance with respect to the unsupervised AM baseline. Analysis consisted of a broad exploration of several parameters, two of which (inclusion of NLED features and decoding constraints on the...

    Provided By Carnegie Mellon University

  • White Papers // Jan 2013

    Modulation Coding for Flash Memories

    The aggressive scaling down of flash memories has threatened data reliability since the scaling down of cell sizes gives rise to more serious degradation mechanisms such as cell-to-cell interference and lateral charge spreading. The effect of these mechanisms has pattern dependency and some data patterns are more vulnerable than other...

    Provided By Carnegie Mellon University

  • White Papers // Dec 2012

    Evaluating Row Buffer Locality in Future Non-Volatile Main Memories

    DRAM-based main memories have read operations that destroy the read data, and as a result, must buffer large amounts of data on each array access to keep chip costs low. Unfortunately, system-level trends such as increased memory contention in multi-core architectures and data mapping schemes that improve memory parallelism may...

    Provided By Carnegie Mellon University

  • White Papers // Jul 2010

    Polonium: Tera-Scale Graph Mining for Malware Detection

    The authors present Polonium, a scalable and effective technology for detecting malware. They evaluated it with the largest anonymized file submissions dataset ever published, which spans over 60 terabytes of disk space. They formulated the problem of detecting malware as a large-scale graph mining and inference task, for which they...

    Provided By Carnegie Mellon University

  • White Papers // Sep 2006

    Is There a Cost to Privacy Breaches? An Event Study

    While the literature on information security economics has begun to investigate the stock market impact of security breaches and vulnerability announcements, little more than anecdotal evidence exists on the effects of privacy breaches. In this paper the authors present the first comprehensive analysis of the impact of a company's privacy...

    Provided By Carnegie Mellon University

  • White Papers // Mar 2011

    Polonium: Tera-Scale Graph Mining and Inference for Malware Detection

    The authors present Polonium, a novel Symantec technology that detects malware through large-scale graph inference. Based on the scalable belief propagation algorithm, Polonium infers every file's reputation, flagging files with low reputation as malware. They evaluated Polonium with a billion-node graph constructed from the largest file submissions dataset ever published...

    Provided By Carnegie Mellon University

  • White Papers // Jun 2011

    The Effect of Online Privacy Information on Purchasing Behavior: An Experimental Study

    Although online retailers detail their privacy practices in online privacy policies, this information often remains invisible to consumers, who seldom make the effort to read and understand those policies. This paper reports on research undertaken to determine whether a more prominent display of privacy information will cause consumers to incorporate...

    Provided By Carnegie Mellon University

  • White Papers // Sep 2012

    Insider Threats to Cloud Computing: Directions for New Research Challenges

    Cloud computing related insider threats are often listed as a serious concern by security researchers, but to date this threat has not been thoroughly explored. The authors believe the fundamental nature of current insider threats will remain relatively unchanged in a cloud environment, but the paradigm does reveal new exploit...

    Provided By Carnegie Mellon University

  • White Papers // Aug 2011

    Investigating the Viability of Bufferless NoCs in Modern Chip Multi-Processor Systems

    Chip Multi-Processors (CMP) are quickly growing to dozens and potentially hundreds of cores, and as such the design of the interconnect for on-chip resources has become an important field of study. Of the available topologies, tiled mesh networks are an appealing approach in tiled CMPs, as they are relatively...

    Provided By Carnegie Mellon University

  • White Papers // Jun 2011

    A Loadable Task Execution Recorder for Hierarchical Scheduling in Linux

    In this paper, the authors present a Hierarchical Scheduling Framework (HSF) recorder for Linux-based operating systems. The HSF recorder is a loadable kernel module that is capable of recording tasks and servers without requiring any kernel modifications. Hence, it complies with the reliability and stability requirements in the area of...

    Provided By Carnegie Mellon University

  • White Papers // Sep 2010

    Scheduling Parallel Real-Time Tasks on Multi-Core Processors

    Massively multi-core processors are rapidly gaining market share with major chip vendors offering an ever increasing number of cores per processor. From a programming perspective, the sequential programming model does not scale very well for such multi-core systems. Parallel programming models such as OpenMP present promising solutions for more effectively...

    Provided By Carnegie Mellon University

  • White Papers // Mar 2012

    Towards Adaptive GPU Resource Management for Embedded Real-Time Systems

    In this paper, the authors present two conceptual frameworks for GPU applications to adjust their task execution times based on total workload. These frameworks enable smart GPU resource management when many applications share GPU resources while the workloads of those applications vary. Application developers can explicitly adjust the number of...

    Provided By Carnegie Mellon University

  • White Papers // Jun 2011

    Resource Sharing in GPU-Accelerated Windowing Systems

    Recent windowing systems allow graphics applications to directly access the Graphics Processing Unit (GPU) for fast rendering. However, application tasks that render frames on the GPU contend heavily with the windowing server that also accesses the GPU to blit the rendered frames to the screen. This resource-sharing nature of direct...

    Provided By Carnegie Mellon University

  • White Papers // Jun 2012

    ExSched: An External CPU Scheduler Framework for Real-Time Systems

    Scheduling theory and algorithms have been well studied in the real-time systems literature. Many useful approaches and solutions have appeared in different problem domains. While their theoretical effectiveness has been extensively discussed, the community is now facing implementation challenges that show the impact of the algorithms in practice. In this...

    Provided By Carnegie Mellon University

  • White Papers // Oct 2013

    GOTCHA Password Hackers!

    The authors introduce GOTCHAs (Generating panOptic Turing tests to tell Computers and Humans Apart) as a way of preventing automated offline dictionary attacks against user selected passwords. A GOTCHA is a randomized puzzle generation protocol, which involves interaction between a computer and a human. Informally, a GOTCHA should satisfy two...

    Provided By Carnegie Mellon University

  • White Papers // Feb 2010

    Concurrent Autonomous Self-Test for Uncore Components in System-on-Chips

    Concurrent autonomous self-test, or online self-test, allows a system to test itself, concurrently during normal operation, with no system downtime visible to the end-user. Online self-test is important for overcoming major reliability challenges such as early-life failures and circuit aging in future System-on-Chips (SoCs). To ensure required levels of overall...

    Provided By Carnegie Mellon University

  • White Papers // Jul 2011

    Congestion Control for Scalability in Bufferless On-Chip Networks

    In this paper, the authors present Network-on-Chip (NoC) design and contrast it to traditional network design, highlighting both similarities and differences between NoCs and traditional networks. As an initial case study, they examine network congestion in bufferless NoCs. They show that congestion manifests itself differently in a NoC than in...

    Provided By Carnegie Mellon University

  • White Papers // Mar 2014

    The Heterogeneous Block Architecture

    This paper makes two new observations that lead to a new heterogeneous core design. First, the authors observe that most serial code exhibits fine-grained heterogeneity: at the scale of tens or hundreds of instructions, regions of code fit different micro-architectures better (at the same point or at different points in...

    Provided By Carnegie Mellon University

  • White Papers // Jan 2014

    Bounding Memory Interference Delay in COTS-based Multi-Core Systems

    In Commercial-Off-The-Shelf (COTS) multi-core systems, a task running on one core can be delayed by other tasks running simultaneously on other cores due to interference in the shared DRAM main memory. Such memory interference delay can be large and highly variable, thereby posing a significant challenge for the design of...

    Provided By Carnegie Mellon University

  • White Papers // Dec 2010

    CHIPPER: A Low-Complexity Bufferless Deflection Router

    As Chip Multi-Processors (CMPs) scale to tens or hundreds of nodes, the interconnect becomes a significant factor in cost, energy consumption and performance. Recent work has explored many design tradeoffs for Networks-on-Chip (NoCs) with novel router architectures to reduce hardware cost. In particular, recent work proposes bufferless deflection routing to...

    Provided By Carnegie Mellon University

  • White Papers // Dec 2013

    Exploiting Compressed Block Size as an Indicator of Future Reuse

    The authors introduce a set of new Compression-Aware Management Policies (CAMP) for on-chip caches that employ data compression. Their management policies are based on two key ideas. First, they show that it is possible to build a more efficient management policy for compressed caches if the compressed block size is...

    Provided By Carnegie Mellon University

  • White Papers // Jan 2013

    Application-to-Core Mapping Policies to Reduce Memory System Interference in Multi-Core Systems

    Future many-core processors are likely to concurrently execute a large number of diverse applications. How these applications are mapped to cores largely determines the interference between these applications in critical shared hardware resources. This paper proposes new application-to-core mapping policies to improve system performance by reducing inter-application interference in the...

    Provided By Carnegie Mellon University

  • White Papers // May 2011

    Application-to-Core Mapping Policies to Reduce Interference in On-Chip Networks

    As the industry moves toward many-core processors, Network-on-Chips (NoCs) will likely become the communication backbone of future microprocessor designs. The NoC is a critical shared resource and its effective utilization is essential for improving overall system performance and fairness. In this paper, the authors propose application-to-core mapping policies to reduce...

    Provided By Carnegie Mellon University

  • White Papers // Oct 2012

    Flash Correct-and-Refresh: Retention-Aware Error Management for Increased Flash Memory Lifetime

    With the continued scaling of NAND flash and multi-level cell technology, flash-based storage has gained widespread use in systems ranging from mobile platforms to enterprise servers. However, the robustness of NAND flash cells is an increasing concern, especially at nanometer-regime process geometries. NAND flash memory bit error rate increases exponentially...

    Provided By Carnegie Mellon University

  • White Papers // Mar 2008

    Flexible Hardware Acceleration for Instruction-Grain Program Monitoring

    Instruction-grain program monitoring tools, which check and analyze executing programs at the granularity of individual instructions, are invaluable for quickly detecting bugs and security attacks and then limiting their damage (via containment and/or recovery). Unfortunately, their fine-grain nature implies very high monitoring overheads for software-only tools, which are typically based...

    Provided By Carnegie Mellon University

  • White Papers // Nov 2006

    Fast Random Walk with Restart and Its Applications

    The authors propose fast solutions to the problem of computing random-walk-with-restart proximity scores on large graphs. The heart of their approach is to exploit two important properties shared by many real graphs: linear correlations and block-wise, community-like structure. They exploit the linearity by using low-rank matrix approximation and the community structure by graph partitioning, followed by the Sherman-Morrison...
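
    The computation being accelerated is the standard random-walk-with-restart iteration. A minimal sketch of that baseline (plain power iteration, not the authors' low-rank/partitioning method; the restart probability `c` here is an illustrative choice):

```python
import numpy as np

def rwr(W, seed, c=0.3, iters=100):
    """Random walk with restart by plain power iteration.
    W    : column-normalized adjacency matrix (n x n)
    seed : index of the query node
    c    : restart probability (illustrative value)
    Iterates r = (1 - c) * W r + c * e until (approximate) convergence,
    where e is the indicator vector of the seed node."""
    n = W.shape[0]
    e = np.zeros(n)
    e[seed] = 1.0
    r = e.copy()
    for _ in range(iters):
        r = (1.0 - c) * (W @ r) + c * e
    return r
```

    The resulting vector r ranks nodes by proximity to the seed; the paper's contribution is avoiding this per-query iteration via precomputation.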

    Provided By Carnegie Mellon University

  • White Papers // Sep 2013

    Program Interference in MLC NAND Flash Memory: Characterization, Modeling, and Mitigation

    As NAND flash memory continues to scale down to smaller process technology nodes, its reliability and endurance are degrading. One important source of reduced reliability is the phenomenon of program interference: when a flash cell is programmed to a value, the programming operation affects the threshold voltage of not only...

    Provided By Carnegie Mellon University

  • White Papers // Sep 2013

    LightTx: A Lightweight Transactional Design in Flash-based SSDs to Support Flexible Transactions

    Flash memory has accelerated the architectural evolution of storage systems with its unique characteristics compared to magnetic disks. The no-overwrite property of flash memory has been leveraged to efficiently support transactions, a commonly used mechanism in systems to provide consistency. However, existing transaction designs embedded in flash-based Solid State Drives...

    Provided By Carnegie Mellon University

  • White Papers // Jun 2013

    Memory Scaling: A Systems Architecture Perspective

    The memory system is a fundamental performance and energy bottleneck in almost all computing systems. Recent system design, application, and technology trends that require more capacity, bandwidth, efficiency, and predictability out of the memory system make it an even more important system bottleneck. At the same time, DRAM technology is...

    Provided By Carnegie Mellon University

  • White Papers // Oct 2013

    Challenges in Security and Privacy for Mobile Edge-Clouds

    Mobile devices such as smartphones and tablets are ubiquitous today, and many of them possess significant computation power, powerful sensors such as high-resolution cameras and GPS sensors, and a wealth of sensor data such as photos, videos, and location information. Collections of mobile devices in close geographical proximity present both...

    Provided By Carnegie Mellon University

  • White Papers // Aug 2013

    A Proof of Correctness for Egalitarian Paxos

    In this paper the authors present a proof of correctness for Egalitarian Paxos (EPaxos), a new distributed consensus algorithm based on Paxos. EPaxos achieves three goals: availability without interruption as long as a simple majority of replicas are reachable - its availability is not interrupted when replicas crash or fail...

    Provided By Carnegie Mellon University

  • White Papers // Jun 2013

    A Case for Efficient Hardware/Software Cooperative Management of Storage and Memory

    Most applications manipulate persistent data, yet traditional systems decouple data manipulation from persistence in a two-level storage model. Programming languages and system software manipulate data in one set of formats in volatile main memory (DRAM) using a load/store interface, while storage systems maintain persistence in another set of formats in...

    Provided By Carnegie Mellon University

  • White Papers // Feb 2014

    SpringFS: Bridging Agility and Performance in Elastic Distributed Storage

    Elastic storage systems can be expanded or contracted to meet current demand, allowing servers to be turned off or used for other tasks. However, the usefulness of an elastic distributed storage system is limited by its agility: how quickly it can increase or decrease its number of servers. Due...

    Provided By Carnegie Mellon University

  • White Papers // Dec 2013

    Tetrisched: Space-Time Scheduling for Heterogeneous Datacenters

    Tetrisched is a new scheduler that explicitly considers both job-specific preferences and estimated job runtimes in its allocation of resources. Combined, this information allows tetrisched to provide higher overall value to complex application mixes consolidated on heterogeneous collections of machines. Job-specific preferences, provided by tenants in the form of composable...

    Provided By Carnegie Mellon University

  • White Papers // Jan 2014

    Toward Strong, Usable Access Control for Shared Distributed Data

    As non-expert users produce increasing amounts of personal digital data, usable access control becomes critical. Current approaches often fail, because they insufficiently protect data or confuse users about policy specification. This paper presents Penumbra, a distributed file system with access control designed to match users' mental models while providing principled...

    Provided By Carnegie Mellon University

  • White Papers // Nov 2013

    More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server

    The authors propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to...

    Provided By Carnegie Mellon University

  • White Papers // Jan 2012

    ZZFS: A Hybrid Device and Cloud File System for Spontaneous Users

    A good execution of data placement, caching and consistency policies across a user's personal devices has always been hard. Unpredictable networks, capricious user behavior such as leaving devices on or off, and non-uniform energy-saving policies constantly interfere with the good intentions of a storage system's policies. This paper's contribution is to...

    Provided By Carnegie Mellon University

  • White Papers // Feb 2012

    Near-Real-Time Inference of File-Level Mutations from Virtual Disk Writes

    The authors describe a new mechanism for cloud computing enabling near-real-time monitoring of virtual disk write streams across an entire cloud. Their solution has low IO overhead for the guest VM, low latency to file-level mutation notification, and a layered design for scalability. They achieve low IO overhead by duplicating...

    Provided By Carnegie Mellon University

  • White Papers // May 2012

    TABLEFS: Embedding a NoSQL Database Inside the Local File System

    Conventional file systems are optimized for large file transfers instead of workloads that are dominated by metadata and small file accesses. This paper examines using techniques adopted from NoSQL databases to manage file system metadata and small files, which feature a high rate of change and efficient out-of-core data representation. A...
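
    The core idea, packing per-file metadata into rows of an ordered key-value table keyed by parent directory and filename, can be illustrated with a toy in-memory stand-in (a real implementation would sit on an LSM-tree store such as LevelDB; the class and method names here are illustrative, not the paper's API):

```python
class MetadataTable:
    """Toy sketch of file-system metadata kept as key-value rows,
    keyed by (parent directory inode, filename). A dict stands in
    for the on-disk NoSQL store."""

    def __init__(self):
        self.kv = {}

    def create(self, parent_inode, name, attrs):
        # One row per directory entry: attributes live in the value.
        self.kv[(parent_inode, name)] = attrs

    def lookup(self, parent_inode, name):
        # Path-component lookup is a single point query.
        return self.kv.get((parent_inode, name))

    def readdir(self, parent_inode):
        # All entries of a directory share the same key prefix,
        # so in a sorted store a directory scan is a range scan.
        return [name for (p, name) in self.kv if p == parent_inode]
```

    Because entries of one directory share a key prefix, a sorted store turns directory listing into a sequential range scan, which is where the small-file and metadata performance benefit comes from.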

    Provided By Carnegie Mellon University