North Carolina State University

  • White Papers // Aug 2009

    Power Efficient Traffic Grooming in Optical WDM Networks

    Power awareness in networking is attracting more attention as trends in the energy consumption of the Internet raise growing concerns about the environmental impact and sustainability of network expansion. Building energy-efficient equipment is definitely an integral part of the solution. However, such a strategy should be complemented with appropriate...

    Provided By North Carolina State University

  • White Papers // Aug 2009

    Towards Efficient Designs for In-Network Computing With Noisy Wireless Channels

    In this paper, the authors study distributed function computation in a noisy multi-hop wireless network, in which n nodes are uniformly and independently distributed in a unit square. Each node holds an m-bit integer per instance and the computation is started after each node collects N readings. The goal is...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    Architectural Support for Internet Evolution and Innovation

    The architecture of the modern Internet encompasses a large number of principles, concepts and assumptions that have evolved over several decades. In this paper, the authors argue that while the current architecture houses an effective design, it is not itself effective in enabling evolution. To achieve the latter goal, they...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    The Impact Of Exchange Rate Volatility On Plant-level Investment: Evidence From

    The authors estimate the impact of exchange rate volatility on firms' investment decisions in a developing country setting. Employing plant-level panel data from the Colombian Manufacturing Census, they estimate a dynamic investment equation using the system-GMM estimator developed by Arellano and Bover (1995) and Blundell and Bond (1998). They find...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    Business Modeling Via Commitments

    Existing computer science approaches to business modeling offer low-level abstractions such as data and control flows, which fail to capture the business intent underlying the interactions that are central to real-life business models. In contrast, existing management science approaches are high-level, but they are not only semiformal, they are also...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    A Hierarchical Model for Multigranular Optical Networks

    The authors present a hierarchical algorithm for grooming light-paths into wavebands, and routing wavebands over a network of multi-granular switching nodes. This algorithm focuses on lowering the number of wavelengths W and ports over the network while being conceptually simple, scalable, and consistent with the way networks are operated and...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    Fault Localization for Firewall Policies

    Firewalls are the mainstay of enterprise security and the most widely adopted technology for protecting private networks. Ensuring the correctness of firewall policies through testing is important. In firewall policy testing, test inputs are packets and test outputs are decisions. Packets with unexpected (expected) evaluated decisions are classified as failed...

    Provided By North Carolina State University
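
    The testing setup described in this entry can be pictured with a toy first-match firewall evaluator; the rule set, packet fields, and expected decisions below are hypothetical illustrations, not the paper's benchmark policies.

      # Minimal sketch: evaluate packets against a first-match rule list and
      # flag failed tests (evaluated decision != expected decision).
      def evaluate(rules, packet):
          for pred, decision in rules:           # first-match semantics
              if pred(packet):
                  return decision
          return "deny"                          # default decision

      rules = [
          (lambda p: p["dport"] == 22 and p["src"].startswith("10."), "accept"),
          (lambda p: p["dport"] == 80, "accept"),
      ]

      tests = [  # (packet, expected decision)
          ({"src": "10.0.0.5", "dport": 22}, "accept"),
          ({"src": "192.168.1.9", "dport": 22}, "deny"),
          ({"src": "10.0.0.5", "dport": 443}, "accept"),   # fails: the policy denies it
      ]

      failed = [(p, exp, evaluate(rules, p)) for p, exp in tests
                if evaluate(rules, p) != exp]
      for packet, expected, got in failed:
          print("failed test:", packet, "expected", expected, "got", got)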

  • White Papers // Jul 2009

    Correctness Properties for Multiagent Systems

    What distinguishes multiagent systems from other software systems is their emphasis on the interactions among autonomous, heterogeneous agents. This paper motivates and characterizes correctness properties for multiagent systems. These properties are centered on commitments, and capture correctness at a high level. In contrast to existing approaches, commitments underlie key correctness...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    Core-Selectability in Chip Multiprocessors

    The centralized structures necessary for the extraction of Instruction-Level Parallelism (ILP) are consuming progressively smaller portions of the total die area of Chip Multi-Processors (CMP). The reason for this is that scaling these structures does not enhance general performance as much as scaling the cache and interconnect. However, the fact...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    An Empirical Study of Security Problem Reports in Linux Distributions

    Existing work focuses primarily on the analysis of the general category of problem reports, or limits its attention to observations on the number of security problems reported in open source projects. Existing studies on problem reports in open source projects focus primarily on the analysis of the general category of...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    ReFormat: Automatic Reverse Engineering of Encrypted Messages

    Automatic protocol reverse engineering has recently received significant attention due to its importance to many security applications. However, previous methods are all limited in analyzing only plain-text communications wherein the exchanged messages are not encrypted. In this paper, the authors propose ReFormat, a system that aims at deriving the message...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Towards Well-Behaved Schema Evolution

    The authors study the problem of schema evolution in the RDF data model. RDF and the RDFS schema language are W3C standards for flexibly modeling and sharing data on the web. Although schema evolution has been intensively studied in the database and knowledge-representation communities, only recently has progress been made...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Super-Diffusive Behavior of Mobile Nodes and Its Impact on Routing Protocol Performance

    Mobility is the most important component in Mobile Ad-hoc NETworks (MANETs) and Delay Tolerant Networks (DTNs). In this paper, the authors first investigate numerous GPS mobility traces of human mobile nodes and observe super-diffusive behavior in all GPS traces, which is characterized by a 'faster-than-linear' growth rate of the Mean...

    Provided By North Carolina State University
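
    The 'faster-than-linear' growth of the Mean Square Displacement (MSD) mentioned in this entry can be checked on a 2D trace as sketched below; the synthetic traces and the range of time lags are assumptions, not the paper's GPS datasets.

      import numpy as np

      # Estimate the MSD growth exponent alpha from a 2D position trace:
      # MSD(t) ~ t**alpha; alpha = 1 is normal diffusion, alpha > 1 is super-diffusive.
      def msd_exponent(xy, max_lag=200):
          lags = np.arange(1, max_lag)
          msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1))
                          for lag in lags])
          alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)   # slope in log-log space
          return alpha

      rng = np.random.default_rng(0)
      brownian = np.cumsum(rng.normal(size=(5000, 2)), axis=0)            # alpha ~ 1
      ballistic = np.cumsum(1 + 0.1 * rng.normal(size=(5000, 2)), axis=0) # alpha ~ 2
      print("brownian alpha  ~", round(msd_exponent(brownian), 2))
      print("ballistic alpha ~", round(msd_exponent(ballistic), 2))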

  • White Papers // Jun 2009

    Architecture Support for Improving Bulk Memory Copying and Initialization Performance

    Bulk memory copying and initialization is one of the most ubiquitous operations performed in current computer systems by both user applications and operating systems. While many current systems rely on a loop of loads and stores, there are proposals to introduce a single instruction to perform bulk memory copying. While...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Authenticated Data Compression in Delay Tolerant Wireless Sensor Networks

    Delay Tolerant Wireless Sensor Networks (DTWSNs) are sensor networks where continuous connectivity between the sensor nodes and their final destinations (e.g., the base station) cannot be guaranteed. Storage constraints are particularly a concern in DTWSNs, since each node may have to store sensed data for a long period of time...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Detection of Multiple-Duty-Related Security Leakage in Access Control Policies

    Access control mechanisms control which subjects (such as users or processes) have access to which resources. To facilitate managing access control, policy authors increasingly write access control policies in XACML. Access control policies written in XACML could be amenable to multiple-duty-related security leakage, which grants unauthorized access to a user...

    Provided By North Carolina State University
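
    A toy check in the spirit of the leakage this entry describes: a request carrying several duties (roles) at once may be granted a permission that no single duty would receive on its own. The miniature rule set and role names are illustrative, not real XACML.

      # Each rule grants a permission when its required roles are a subset of the
      # request's roles.  Leakage: a multi-role request gains a permission that
      # none of its roles obtains individually.
      rules = [({"teacher", "student"}, "modify-grades")]   # hypothetical combined-duty rule

      def permissions(roles, rules):
          return {perm for required, perm in rules if required <= roles}

      def leaked(roles, rules):
          combined = permissions(roles, rules)
          per_role = set().union(*(permissions({r}, rules) for r in roles))
          return combined - per_role

      print(leaked({"teacher", "student"}, rules))   # {'modify-grades'} -> flagged as leakage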

  • White Papers // Jun 2009

    Randomized Differential DSSS: Jamming-Resistant Wireless Broadcast Communication

    Jamming resistance is crucial for applications where reliable wireless communication is required. Spread spectrum techniques such as Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS) have been used as countermeasures against jamming attacks. Traditional anti-jamming techniques require that senders and receivers share a secret key in order...

    Provided By North Carolina State University
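
    To make the DSSS baseline mentioned in this entry concrete, here is a minimal spread/despread example using a shared +/-1 chip sequence and correlation at the receiver; the chip length, shared seed, and noise model are assumptions for illustration, not the paper's randomized differential scheme.

      import random

      CHIPS_PER_BIT = 64
      random.seed(42)                                    # shared secret seed (assumption)
      chips = [random.choice((-1, 1)) for _ in range(CHIPS_PER_BIT)]

      def spread(bits):
          return [b * c for b in bits for c in chips]    # each bit in {-1,+1} times the chip sequence

      def despread(samples):
          bits = []
          for i in range(0, len(samples), CHIPS_PER_BIT):
              corr = sum(s * c for s, c in zip(samples[i:i + CHIPS_PER_BIT], chips))
              bits.append(1 if corr >= 0 else -1)        # sign of the correlation recovers the bit
          return bits

      tx = spread([1, -1, 1, 1])
      noisy = [s + random.gauss(0, 3) for s in tx]       # strong noise / interference
      print(despread(noisy))                             # -> [1, -1, 1, 1] with high probability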

  • White Papers // Jun 2009

    Hybrid Full/Incremental Checkpoint/Restart for MPI Jobs in HPC Environments

    As the number of cores in high-performance computing environments keeps increasing, faults are becoming common place. Checkpointing addresses such faults but captures full process images even though only a subset of the process image changes between checkpoints. The authors have designed a high-performance hybrid disk-based full/incremental checkpointing technique for MPI...

    Provided By North Carolina State University
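
    A minimal sketch of the full-versus-incremental idea described in this entry, at the level of "pages" held in a dictionary: a full checkpoint stores every page, while an incremental checkpoint stores only pages whose hash changed since the previous checkpoint. The page granularity, hashing, and restore path are illustrative assumptions, not the paper's MPI mechanism.

      import hashlib, copy

      def digest(page):
          return hashlib.sha256(page).hexdigest()

      def full_checkpoint(memory):
          return {"pages": copy.deepcopy(memory),
                  "hashes": {k: digest(v) for k, v in memory.items()}}

      def incremental_checkpoint(memory, last):
          changed = {k: v for k, v in memory.items()
                     if digest(v) != last["hashes"].get(k)}
          return {"pages": copy.deepcopy(changed),
                  "hashes": {k: digest(v) for k, v in memory.items()}}

      def restore(full, incrementals):
          memory = dict(full["pages"])
          for ckpt in incrementals:              # replay incremental deltas in order
              memory.update(ckpt["pages"])
          return memory

      memory = {0: b"code", 1: b"heap-v1", 2: b"stack"}
      base = full_checkpoint(memory)
      memory[1] = b"heap-v2"                     # only one page is dirtied
      delta = incremental_checkpoint(memory, base)
      print(len(delta["pages"]), "page(s) in the incremental checkpoint")
      print(restore(base, [delta]) == memory)    # True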

  • White Papers // Jun 2009

    SHIELDSTRAP: A Secure Bootstrap Architecture

    Many systems may have security requirements such as protecting the privacy of data and code stored in the system, ensuring integrity of computations, or preventing the execution of unauthorized code. It is becoming increasingly difficult to ensure such protections as hardware-based attacks, in addition to software attacks, become more widespread...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    The Role of Internet Service Providers in Cyber Security

    The current level of insecurity of the Internet is a worldwide problem that has resulted in a multitude of costs for businesses, governments, and individuals. Past research (e.g., Frith, 2005; Gallaher, Rowe, Rogozhin, & Link, 2006) suggests that one significant factor in these cyber security problems is the inadequate level...

    Provided By North Carolina State University

  • White Papers // May 2009

    Hash-Based Sequential Aggregate and Forward Secure Signature for Unattended Wireless Sensor Networks

    Unattended Wireless Sensor Networks (UWSNs) operating in hostile environments face great security and performance challenges due to the lack of continuous real-time communication between senders (sensors) and receivers (e.g., mobile data collectors, static sinks). The lack of real-time communication forces sensors to accumulate the sensed data possibly for long time...

    Provided By North Carolina State University
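
    One way to see the forward-security ingredient mentioned in this entry: evolve the per-round key through a one-way hash chain, so that compromising the current key does not expose keys (and thus authenticated readings) from earlier rounds. This HMAC-based sketch and its parameters are illustrative, not the paper's aggregate signature construction.

      import hashlib, hmac

      def evolve(key):
          return hashlib.sha256(b"evolve" + key).digest()   # one-way key update

      def tag(key, reading):
          return hmac.new(key, reading, hashlib.sha256).hexdigest()

      key = b"initial-secret"                # installed before deployment (assumption)
      readings = [b"t=1 temp=20", b"t=2 temp=21", b"t=3 temp=19"]

      tags = []
      for r in readings:
          tags.append(tag(key, r))           # authenticate the reading under the current key
          key = evolve(key)                  # then discard the old key

      # An attacker who captures `key` now cannot recompute earlier tags,
      # because evolve() cannot be inverted to recover previous keys.
      print(tags)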

  • White Papers // May 2009

    Analysis on the Kalman Filter Performance in GPS/INS Integration at Different Noise Levels, Sampling Periods and Curvatures

    Kalman Filters (KF) have been extensively used in the integration of Global Positioning System (GPS) and Inertial Navigation System (INS) data. Often, the GPS data is used as a benchmark to update the INS data. In this paper, an analysis of integration of GPS data with INS data using an...

    Provided By North Carolina State University
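
    A one-dimensional constant-velocity Kalman filter gives a minimal picture of the GPS/INS update loop this entry discusses: the motion model predicts the state, and each GPS position fix corrects it. The noise levels, sampling period, and 1-D setting are assumptions for illustration.

      import numpy as np

      dt, q, r = 1.0, 0.05, 4.0                 # sampling period, process noise, GPS noise (assumed)
      F = np.array([[1, dt], [0, 1]])           # constant-velocity state transition
      H = np.array([[1.0, 0.0]])                # GPS measures position only
      Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
      R = np.array([[r]])

      x = np.array([[0.0], [1.0]])              # state: [position, velocity]
      P = np.eye(2)

      rng = np.random.default_rng(1)
      for k in range(1, 20):
          # Predict (INS-style propagation)
          x = F @ x
          P = F @ P @ F.T + Q
          # Update with a noisy GPS position fix (truth: position = k)
          z = np.array([[k + rng.normal(0, np.sqrt(r))]])
          y = z - H @ x
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ y
          P = (np.eye(2) - K @ H) @ P

      print("estimated position/velocity:", x.ravel().round(2))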

  • White Papers // Jan 2009

    PFetch: Software Prefetching Exploiting Temporal Predictability of Memory Access Streams

    CPU speeds have increased faster than the rate of improvement in memory access latencies in the recent past. As a result, with programs that suffer excessive cache misses, the CPU will increasingly be stalled waiting for the memory system to provide the requested memory line. Prefetching is a latency hiding...

    Provided By North Carolina State University

  • White Papers // Sep 2008

    Merging State and Preserving Timing Anomalies in Pipelines of High-End Processors

    Many embedded systems are subject to temporal constraints that require advance guarantees on meeting deadlines. Such systems rely on static analysis to safely bound the Worst-Case Execution Time (WCET) of tasks. Designers of these systems are forced to avoid state-of-the-art processors due to their inherent architectural complexity (such as out-of-order...

    Provided By North Carolina State University

  • White Papers // Aug 2008

    Security-Aware Resource Optimization in Distributed Service Computing

    In this paper, the authors consider a set of computer resources used by a service provider to host enterprise applications for customer services subject to a Service Level Agreement (SLA). The SLA defines three QoS metrics, namely, trustworthiness, percentile response time and availability. They first give an overview of current...

    Provided By North Carolina State University

  • White Papers // Jul 2008

    Dynamic Thread Assignment on Heterogeneous Multiprocessor Architectures

    In a multi-programmed computing environment, threads of execution exhibit different runtime characteristics and hardware resource requirements. Not only do the behaviors of distinct threads differ, but each thread may also present diversity in its performance and resource usage over time. A heterogeneous Chip Multi-Processor (CMP) architecture consists of processor cores...

    Provided By North Carolina State University

  • White Papers // Jul 2008

    Performance Assessment and Compensation for Secure Networked Control Systems

    Network-Control-Systems (NCS) have been gaining popularity due to their high potential in widespread applications and are becoming realizable due to rapid advancements in embedded systems and wireless communication technologies. This paper addresses the issue of NCS information security as well as its time-sensitive performance and the trade-off between them. A PI controller implemented on...

    Provided By North Carolina State University

  • White Papers // Jul 2008

    Exploiting Locality to Ameliorate Packet Queue Contention and Serialization

    Packet processing systems maintain high throughput despite relatively high memory latencies by exploiting the coarse-grained parallelism available between packets. In particular, multiple processors are used to overlap the processing of multiple packets. Packet queuing - the fundamental mechanism enabling packet scheduling, differentiated services, and traffic isolation - requires a read-modify-write...

    Provided By North Carolina State University

  • White Papers // Mar 2008

    Hybrid Timing Analysis of Modern Processor Pipelines via Hardware/Software Interactions

    Embedded systems are often subject to constraints that require determinism to ensure that task deadlines are met. Such systems are referred to as real-time systems. Schedulability analysis provides a firm basis to ensure that tasks meet their deadlines for which knowledge of Worst-Case Execution Time (WCET) bounds is a critical...

    Provided By North Carolina State University

  • White Papers // Nov 2007

    Worst-Case Execution Time Analysis of Security Policies for Deeply Embedded Real-Time Systems

    Deeply embedded systems often have unique constraints because of their small size and vital roles in critical infrastructure. Problems include limitations on code size, limited access to the actual hardware, etc. These problems become more critical in real-time systems where security policies must not only work within the above limitations...

    Provided By North Carolina State University

  • White Papers // May 2007

    Fused Two-Level Branch Prediction with Ahead Calculation

    In this paper, the authors propose a Fused Two-Level (FTL) branch predictor combined with an ahead calculation method. The FTL predictor is derived from the fusion hybrid predictor. It achieves high accuracy by adopting PA p-based GEometrical History Length (GEHL) prediction, which is an effective prediction scheme exploiting local histories....

    Provided By North Carolina State University
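
    A stripped-down GEHL-style predictor illustrates the idea behind the scheme in this entry: several tables of signed counters are indexed by hashes of geometrically growing history lengths, their sum gives the prediction, and the counters are trained toward the outcome. The table sizes, history lengths, and hash below are assumptions, not the FTL design.

      # Minimal GEHL-style predictor: sum of signed counters selected by
      # hashing the PC with geometric history lengths (0, 2, 4, 8, 16).
      LENGTHS = [0, 2, 4, 8, 16]
      TABLE_BITS = 10
      tables = [[0] * (1 << TABLE_BITS) for _ in LENGTHS]
      history = []                                   # global branch outcomes, most recent last

      def index(pc, length):
          bits = sum(b << i for i, b in enumerate(history[-length:])) if length else 0
          return (pc ^ bits ^ (bits >> TABLE_BITS)) & ((1 << TABLE_BITS) - 1)

      def predict(pc):
          return sum(t[index(pc, L)] for t, L in zip(tables, LENGTHS)) >= 0

      def update(pc, taken):
          for t, L in zip(tables, LENGTHS):
              i = index(pc, L)
              t[i] = max(-32, min(31, t[i] + (1 if taken else -1)))   # saturating counters
          history.append(1 if taken else 0)

      correct = 0
      for n in range(1000):                          # toy branch: taken 3 of every 4 times
          taken = (n % 4) != 0
          correct += predict(0x40) == taken
          update(0x40, taken)
      print("accuracy:", correct / 1000)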

  • White Papers // Apr 2007

    Conformance Checking of Access Control Policies Specified in XACML

    Access control is one of the most fundamental and widely used security mechanisms. Access control mechanisms control which principals such as users or processes have access to which resources in a system. To facilitate managing and maintaining access control, access control policies are increasingly written in specification languages such as...

    Provided By North Carolina State University

  • White Papers // Apr 2007

    Information Security with Real-Time Operation: Performance Assessment for Next Generation Wireless Distributed Networked-Control-Systems

    Distributed Network-Control-Systems (D-NCS) are a multidisciplinary effort whose aim is to produce a network structure and components that are capable of integrating sensors, actuators, communication, and control algorithms in a manner to suit real-time applications. They have been gaining popularity due to their high potential in widespread applications and becoming...

    Provided By North Carolina State University

  • White Papers // Feb 2007

    The ChoicePoint Dilemma: How Data Brokers Should Handle the Privacy of Personal Information

    In 2005, there was a significant increase in the number of security and privacy breaches disclosed to the public. Leading the charge was ChoicePoint, a data broker that suffered fraudulent access to its vast databases of personal information. ChoicePoint and other data brokers exist in a largely unregulated environment, in...

    Provided By North Carolina State University

  • White Papers // Jan 2007

    Using Deception to Hide Things from Hackers: Processes, Principles, and Techniques

    Deception offers one means of hiding things from an adversary. This paper introduces a model for understanding, comparing and developing methods of deceptive hiding. The model characterizes deceptive hiding in terms of how it defeats the underlying processes that an adversary uses to discover the hidden thing. An adversary's process...

    Provided By North Carolina State University

  • White Papers // Aug 2006

    The State of ZettaRAM

    Computer architectures are heavily influenced by parameters imposed by memory technologies. Memory hierarchies, virtual memory, prefetching, multithreading, and large-window processors are some well-known examples of architectural innovations influenced by memory constraints. This paper surveys ZettaRAM, a nascent memory technology based on molecular electronics. From patents and papers, the authors distill...

    Provided By North Carolina State University

  • White Papers // Aug 2006

    Assertion-Based Microarchitecture Design for Improved Fault Tolerance

    Protection against transient faults is an important constraint in high-performance processor design. One strategy for achieving efficient reliability is to apply targeted fault checking/masking techniques to different units within an overall reliability regimen. In this paper, the authors propose a novel class of targeted fault checks that verify the functioning...

    Provided By North Carolina State University

  • White Papers // Jun 2006

    A Framework for Identifying Compromised Nodes in Sensor Networks

    Sensor networks are often subject to physical attacks. Once a node's cryptographic key is compromised, an attacker may completely impersonate it, and introduce arbitrary false information into the network. Basic cryptographic security mechanisms are often not effective in this situation. Most techniques to address this problem focus on detecting and...

    Provided By North Carolina State University

  • White Papers // Jan 2006

    Non-Uniform Program Analysis & Repeatable Execution Constraints: Exploiting Out-of-Order Processors in Real-Time Systems

    In this paper the authors enable easy, tight, and safe timing analysis of contemporary complex processors. They exploit the fact that out-of-order processors can be analyzed via simulation in the absence of variable control-flow. In their first technique, Non-Uniform Program Analysis (NUPA), program segments with a single flow of control...

    Provided By North Carolina State University

  • White Papers // Jun 2014

    InVis: An EDM Tool for Graphical Rendering and Analysis of Student Interaction Data

    InVis is a novel visualization tool developed to explore, navigate, and catalog student interaction data. InVis processes datasets collected from interactive educational systems, such as intelligent tutoring systems and homework helpers, and visualizes the student data as graphs. This visual representation of data provides an interactive environment with...

    Provided By North Carolina State University

  • White Papers // Sep 2014

    A Semantics-Oriented Storage Model for Big Heterogeneous RDF Data

    Increasing availability of RDF data covering different domains is enabling ad-hoc integration of different kinds of data to suit varying needs. This usually results in large collections of data such as the billion triple challenge datasets or SNOMED CT that are not just "Big" in the sense of volume but...

    Provided By North Carolina State University

  • White Papers // Jul 2013

    A Unified View of Non-monotonic Core Selection and Application Steering in Heterogeneous Chip Multiprocessors

    A single-ISA Heterogeneous Chip Multi-Processor (HCMP) is an attractive substrate to improve single-thread performance and energy efficiency in the dark silicon era. The authors consider HCMPs comprised of non-monotonic core types where each core type is performance-optimized to different instruction level behavior and hence cannot be ranked - different program...

    Provided By North Carolina State University

  • White Papers // Aug 2012

    A Physical Design Study of FabScalar-generated Superscalar Cores

    FabScalar is a recently published toolset for automatically composing synthesizable Register-Transfer-Level (RTL) designs of diverse superscalar cores of different pipeline widths, depths, and sizes. The output of FabScalar is a synthesizable RTL description of the desired core. While...

    Provided By North Carolina State University

  • White Papers // Aug 2009

    Towards a Unifying Approach in Understanding Security Problems

    To evaluate security in the context of software reliability engineering, it is necessary to analyze security problems, actual exploits, and their relationship with an understanding of the operational behavior of the system. That can be done in terms of the effort involved in security exploits, through classic reliability factors such...

    Provided By North Carolina State University

  • White Papers // Sep 2012

    Is Link Signature Dependable for Wireless Security?

    Link signature, which refers to the unique and reciprocal wireless channel between a pair of transceivers, has gained significant attention recently due to its effectiveness in signal authentication and shared secret construction for various wireless applications. A fundamental assumption of this technique is that the wireless signals received at two...

    Provided By North Carolina State University

  • White Papers // Jun 2011

    EMFS: Email-Based Personal Cloud Storage

    Though a variety of cloud storage services have been offered recently, they have not yet provided users with transparent and cost-effective personal data storage. Services like Google Docs offer easy file access and sharing, but tie storage to internal data formats and specific applications. Meanwhile, services like Dropbox offer general-purpose...

    Provided By North Carolina State University

  • White Papers // Dec 2012

    Scheduling Cloud Capacity for Time-Varying Customer Demand

    As utility computing resources become more ubiquitous, service providers increasingly look to the cloud for an in-full or in-part infrastructure to serve utility computing customers on demand. Given the costs associated with cloud infrastructure, dynamic scheduling of cloud resources can significantly lower costs while providing an acceptable service level. The...

    Provided By North Carolina State University
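
    As a minimal sketch of the scheduling problem in this entry, the snippet below sizes hourly capacity from a demand forecast, keeping a small safety margin and never scaling down by more than a fixed step to avoid thrashing. The forecast numbers, per-server capacity, margin, and scale-down limit are all assumptions.

      import math

      def schedule(forecast, per_server, margin=0.2, max_scale_down=2):
          plan, current = [], 0
          for demand in forecast:
              needed = math.ceil(demand * (1 + margin) / per_server)
              if needed < current:
                  needed = max(needed, current - max_scale_down)   # dampen scale-down
              current = needed
              plan.append(current)
          return plan

      hourly_demand = [120, 180, 260, 400, 380, 220, 90]   # requests/sec (hypothetical)
      print(schedule(hourly_demand, per_server=50))         # servers to run each hour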

  • White Papers // Jan 2014

    IPSec/VPN Security Policy: Correctness, Conflict Detection and Resolution

    IPSec (Internet Security Protocol suite) functions will be executed correctly only if its policies are correctly specified and configured. Manual IPSec policy configuration is inefficient and error-prone. An erroneous policy could lead to communication blockade or serious security breach. In addition, even if policies are specified correctly in each domain,...

    Provided By North Carolina State University
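
    A toy illustration of the kind of conflict detection this entry mentions: two policies whose traffic selectors overlap but whose actions differ are flagged. The selectors here are plain port ranges and the actions are simplified; real IPSec policy analysis involves far richer selectors and orderings.

      # Each policy: (name, (low_port, high_port), action)
      policies = [
          ("P1", (0, 1023), "protect"),      # hypothetical example policies
          ("P2", (80, 80), "bypass"),
          ("P3", (2000, 3000), "discard"),
      ]

      def overlaps(a, b):
          return a[0] <= b[1] and b[0] <= a[1]

      conflicts = [(p, q) for i, p in enumerate(policies) for q in policies[i + 1:]
                   if overlaps(p[1], q[1]) and p[2] != q[2]]

      for p, q in conflicts:
          print("conflict:", p[0], "vs", q[0], "overlapping selectors, different actions")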

  • White Papers // Feb 2013

    Directory-Oblivious Capacity Sharing in Tiled CMPs

    In bus-based CMPs with private caches, Capacity Sharing is applied by spilling victim cache blocks from over-utilized caches to under-utilized ones. If a spilled block is needed, it can be retrieved by posting a miss on the bus. Prior work in this domain focused on Capacity Sharing design and put...

    Provided By North Carolina State University

  • White Papers // Jan 2013

    Flexible Capacity Partitioning in Many-Core Tiled CMPs

    Chip Multi-Processors (CMP) have become a mainstream computing platform. As transistor dimensions shrink and the number of cores increases, more scalable CMP architectures will emerge. Recently, tiled architectures have shown such scalable characteristics and been used in many industry chips. The memory hierarchy in tiled architectures presents interesting design challenges....

    Provided By North Carolina State University

  • White Papers // Feb 2012

    Evaluating Dynamics and Bottlenecks of Memory Collaboration in Cluster Systems

    With the fast development of highly-integrated distributed systems (cluster systems), designers face interesting memory hierarchy design choices while attempting to avoid the notorious disk swapping. Swapping to the free remote memory through Memory Collaboration has demonstrated its cost-effectiveness compared to over-provisioning the cluster for peak load requirements. Recent memory collaboration...

    Provided By North Carolina State University

  • White Papers // Feb 2012

    Data Sharing in MultiThreaded Applications and Its Impact on Chip Design

    Analytical modeling is becoming an increasingly important technique used in the design of chip multiprocessors. Most such models assume multi-programmed workload mixes and either ignore or oversimplify the behavior of multi-threaded applications. In particular, data sharing observed in multi-threaded applications, and its impact on chip design decisions, has not been...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    Impact of Data Sharing on CMP Design: A Study Based on Analytical Modeling

    Over the past few years, Chip Multi Processor (CMP) architecture has become the dominating hardware architecture across a spectrum of computing machinery - personal computing devices, workstations, commercial and scientific servers, and warehouse scale computers. The sheer complexity involved in the design and verification of each unit in a CMP...

    Provided By North Carolina State University

  • White Papers // Dec 2010

    Architectural Framework for Supporting Operating System Survivability

    The ever increasing size and complexity of Operating System (OS) kernel code bring an inevitable increase in the number of security vulnerabilities that can be exploited by attackers. A successful security attack on the kernel has a profound impact that may affect all processes running on it. In this paper,...

    Provided By North Carolina State University

  • White Papers // Dec 2009

    Defining Anomalous Behavior for Phase Change Memory

    Traditional memory systems based on memory technologies such as DRAM are fast approaching their cost and power limits. Alternative memory technologies such as Phase Change Memory (PCM) are being widely researched as a scalable, cost- and power-efficient alternative for DRAM. However, a PCM memory cell has a limited endurance of...

    Provided By North Carolina State University

  • White Papers // May 2012

    Understanding the Limits of Capacity Sharing in CMP Private Caches

    Chip Multi Processor (CMP) systems present interesting design challenges at the lower levels of the cache hierarchy. Private L2 caches allow easier processor-cache design reuse, thus scaling better than a system with a shared L2 cache, while offering better performance isolation and lower access latency. While some private cache management...

    Provided By North Carolina State University

  • White Papers // Aug 2009

    SHIELDSTRAP: Making Secure Processors Truly Secure

    Many systems may have security requirements such as protecting the privacy of data and code stored in the system, ensuring integrity of computations, or preventing the execution of unauthorized code. It is becoming increasingly difficult to ensure such protections as hardware-based attacks, in addition to software attacks, become more widespread...

    Provided By North Carolina State University

  • White Papers // Sep 2009

    Memory Management Thread for Heap Allocation Intensive Sequential Applications

    Dynamic memory management is one of the most ubiquitous and expensive operations in many C/C++ applications. Some C/C++ programs might spend up to one third of their execution time in dynamic memory management routines. With multicore processors as a mainstream architecture, it is important to investigate how dynamic memory management...

    Provided By North Carolina State University
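
    The idea of offloading memory-management work to a helper thread can be pictured with a small producer/consumer sketch: the main thread hands finished objects to a background thread that performs the (stand-in) cleanup. This only illustrates the offloading pattern in Python; the paper targets C/C++ allocators.

      import queue, threading

      free_queue = queue.Queue()

      def mm_thread():
          while True:
              obj = free_queue.get()
              if obj is None:                 # shutdown sentinel
                  break
              obj.clear()                     # stand-in for the expensive deallocation work

      worker = threading.Thread(target=mm_thread, daemon=True)
      worker.start()

      for i in range(1000):
          buf = [i] * 1024                    # "allocate" and use an object
          free_queue.put(buf)                 # defer cleanup instead of doing it inline

      free_queue.put(None)
      worker.join()
      print("all deferred cleanup handled by the helper thread")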

  • White Papers // Feb 2014

    Understanding the Tradeoffs Between Software-Managed Vs. Hardware-Managed Caches in GPUs

    On-chip caches are commonly used in computer systems to hide long off-chip memory access latencies. To manage on-chip caches, either software-managed or hardware-managed schemes can be employed. State-of-art accelerators, such as the NVIDIA Fermi or Kepler GPUs and Intel's forthcoming MIC "Knights Landing" (KNL), support both software-managed caches, aka. shared...

    Provided By North Carolina State University

  • White Papers // Jan 2014

    Warp-Level Divergence in GPUs: Characterization, Impact, and Mitigation

    High throughput architectures rely on high Thread-Level Parallelism (TLP) to hide execution latencies. In state-of-art Graphics Processing Units (GPUs), threads are organized in a grid of Thread Blocks (TBs) and each TB contains tens to hundreds of threads. With a TB-level resource management scheme, all the resource required by a...

    Provided By North Carolina State University

  • White Papers // Jun 2012

    Fixing Performance Bugs: An Empirical Study of Open-Source GPGPU Programs

    Given the extraordinary computational power of modern Graphics Processing Units (GPUs), general purpose computation on GPUs (GPGPU) has become an increasingly important platform for high performance computing. To better understand how well the GPU resource has been utilized by application developers and then to facilitate them to develop high performance...

    Provided By North Carolina State University

  • White Papers // Feb 2013

    Adaptive Cache Bypassing for Inclusive Last Level Caches

    Cache hierarchy designs, including bypassing, replacement, and the inclusion property, have significant performance impact. Recent works on high performance caches have shown that cache bypassing is an effective technique to enhance the Last Level Cache (LLC) performance. However, commonly used inclusive cache hierarchy cannot benefit from this technique because bypassing...

    Provided By North Carolina State University

  • White Papers // Feb 2012

    Locality Principle Revisited: A Probability-Based Quantitative Approach

    This paper revisits the fundamental concept of the locality of references and proposes to quantify it as a conditional probability: in an address stream, given that an address is accessed, how likely it is that the same address (temporal locality) or an address within its neighborhood (spatial locality) will be accessed...

    Provided By North Carolina State University
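
    The conditional-probability view described in this entry can be computed directly from an address trace: given that an address is referenced, how often is the same address (temporal) or a nearby address (spatial) referenced again within a window of future references? The trace, window, and neighborhood size below are assumptions.

      def locality(trace, window=16, neighborhood=4):
          temporal = spatial = 0
          for i, addr in enumerate(trace[:-1]):
              future = trace[i + 1:i + 1 + window]
              temporal += any(a == addr for a in future)
              spatial += any(0 < abs(a - addr) <= neighborhood for a in future)
          n = len(trace) - 1
          return temporal / n, spatial / n

      # A toy trace: a loop sweeping an array (spatial) plus a hot variable (temporal).
      trace = []
      for rep in range(8):
          for a in range(100, 132):
              trace += [a, 5000]             # 5000 is the frequently reused address
      t, s = locality(trace)
      print(f"P(temporal reuse) = {t:.2f}, P(spatial reuse) = {s:.2f}")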

  • White Papers // Feb 2012

    CPU-Assisted GPGPU on Fused CPU-GPU Architectures

    This paper presents a novel approach to utilize the CPU resource to facilitate the execution of GPGPU programs on fused CPU-GPU architectures. In the authors' model of fused architectures, the GPU and the CPU are integrated on the same die and share the on-chip L3 cache and off-chip memory, similar...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    Time-Ordered Event Traces: A New Debugging Primitive for Concurrency Bugs

    Non-determinism makes concurrent bugs extremely difficult to reproduce and to debug. In this paper, the authors propose a new debugging primitive to facilitate the debugging process by exposing this non-deterministic behavior to the programmer. The key idea is to generate a time-ordered trace of events such as function calls/returns and...

    Provided By North Carolina State University
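
    A minimal version of the primitive described in this entry: a decorator appends time-ordered call/return events from all threads into one shared trace, using a monotonic clock and a lock so the ordering survives interleaving. The traced functions and the in-memory event list are illustrative choices.

      import threading, time, functools

      trace, trace_lock = [], threading.Lock()

      def traced(fn):
          @functools.wraps(fn)
          def wrapper(*args, **kwargs):
              with trace_lock:
                  trace.append((time.monotonic(), threading.get_ident(), "call", fn.__name__))
              try:
                  return fn(*args, **kwargs)
              finally:
                  with trace_lock:
                      trace.append((time.monotonic(), threading.get_ident(), "return", fn.__name__))
          return wrapper

      @traced
      def worker(n):
          time.sleep(0.01 * n)

      threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
      for t in threads: t.start()
      for t in threads: t.join()

      for ts, tid, kind, name in sorted(trace):      # already time-ordered; sort is a safeguard
          print(f"{ts:.6f} thread={tid} {kind} {name}")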

  • White Papers // Jan 2011

    Access Map Pattern Matching for High Performance Data Cache Prefetch

    Hardware data prefetching is widely adopted to hide long memory latency. A hardware data pre-fetcher predicts the memory address that will be accessed in the near future and fetches the data at the predicted address into the cache memory in advance. To detect memory access patterns such as a constant...

    Provided By North Carolina State University
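
    A reduced form of the pattern detection described in this entry: track recently accessed cache-line addresses in a small access map, and when a constant stride is observed, prefetch the next few lines ahead. The map size, stride check, and prefetch degree are assumptions, not the paper's hardware design.

      from collections import deque

      class StridePrefetcher:
          def __init__(self, history=4, degree=2):
              self.recent = deque(maxlen=history)   # access map of recent line addresses
              self.degree = degree

          def access(self, line_addr):
              self.recent.append(line_addr)
              if len(self.recent) < 3:
                  return []
              strides = {b - a for a, b in zip(self.recent, list(self.recent)[1:])}
              if len(strides) == 1 and (stride := strides.pop()) != 0:
                  return [line_addr + stride * i for i in range(1, self.degree + 1)]
              return []                              # no constant stride detected

      pf = StridePrefetcher()
      for line in [100, 102, 104, 106, 108]:
          print("access", line, "-> prefetch", pf.access(line))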
