North Carolina State University

  • White Papers // Jul 2009

    The Impact Of Exchange Rate Volatility On Plant-level Investment: Evidence From Colombia

    The authors estimate the impact of exchange rate volatility on firms' investment decisions in a developing country setting. Employing plant-level panel data from the Colombian Manufacturing Census, they estimate a dynamic investment equation using the system-GMM estimator developed by Arellano and Bover (1995) and Blundell and Bond (1998). They find...

    Provided By North Carolina State University
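
    As a rough illustration of the estimation setup described above, a dynamic investment equation with an exchange-rate-volatility regressor typically takes a form like the following (the controls and notation here are generic assumptions, not the paper's):

        \frac{I_{it}}{K_{i,t-1}} = \alpha\,\frac{I_{i,t-1}}{K_{i,t-2}} + \beta\,\sigma^{FX}_{t} + \gamma' x_{it} + \mu_i + \varepsilon_{it}

    Here I_{it} is plant i's investment, K its capital stock, \sigma^{FX}_{t} exchange rate volatility, x_{it} plant-level controls, and \mu_i a plant fixed effect; system-GMM instruments the lagged dependent variable with deeper lags in levels and first differences.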

  • White Papers // Jul 2009

    A Hierarchical Model for Multigranular Optical Networks

    The authors present a hierarchical algorithm for grooming light-paths into wavebands, and routing wavebands over a network of multi-granular switching nodes. This algorithm focuses on lowering the number of wavelengths W and ports over the network while being conceptually simple, scalable, and consistent with the way networks are operated and...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    Business Modeling Via Commitments

    Existing computer science approaches to business modeling offer low-level abstractions such as data and control flows, which fail to capture the business intent underlying the interactions that are central to real-life business models. In contrast, existing management science approaches are high-level, but they are not only semiformal, they are also...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    Fault Localization for Firewall Policies

    Firewalls are the mainstay of enterprise security and the most widely adopted technology for protecting private networks. Ensuring the correctness of firewall policies through testing is important. In firewall policy testing, test inputs are packets and test outputs are decisions. Packets with unexpected (expected) evaluated decisions are classified as failed...

    Provided By North Carolina State University
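
    A minimal sketch in Python of the testing setup the abstract describes, using first-match evaluation semantics and hypothetical rule names; the deliberately faulty policy shows how a failed test implicates a rule:

        def evaluate(policy, packet):
            """First-match semantics: return the decision and name of the first matching rule."""
            for name, predicate, decision in policy:
                if predicate(packet):
                    return decision, name
            return "deny", "default"

        # A deliberately faulty toy policy: the broad accept rule r1 shadows
        # the telnet deny rule r2 that should take precedence.
        policy = [
            ("r1", lambda p: p["dport"] < 1024, "accept"),
            ("r2", lambda p: p["dport"] == 23, "deny"),
        ]

        # Test inputs are packets; expected decisions serve as the oracle.
        tests = [({"dport": 80}, "accept"), ({"dport": 23}, "deny")]
        for packet, expected in tests:
            decision, rule = evaluate(policy, packet)
            status = "passed" if decision == expected else "FAILED"
            print(packet, status, "via", rule)  # the FAILED test implicates r1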

  • White Papers // Jul 2009

    Correctness Properties for Multiagent Systems

    What distinguishes multiagent systems from other software systems is their emphasis on the interactions among autonomous, heterogeneous agents. This paper motivates and characterizes correctness properties for multiagent systems. These properties are centered on commitments, and capture correctness at a high level. In contrast to existing approaches, commitments underlie key correctness...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    Core-Selectability in Chip Multiprocessors

    The centralized structures necessary for the extraction of Instruction-Level Parallelism (ILP) are consuming progressively smaller portions of the total die area of Chip Multi-Processors (CMP). The reason for this is that scaling these structures does not enhance general performance as much as scaling the cache and interconnect. However, the fact...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    An Empirical Study of Security Problem Reports in Linux Distributions

    Existing studies on problem reports in open source projects focus primarily on the analysis of the general category of problem reports, or limit their attention to observations on the number of security problems reported...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    ReFormat: Automatic Reverse Engineering of Encrypted Messages

    Automatic protocol reverse engineering has recently received significant attention due to its importance to many security applications. However, previous methods are all limited in analyzing only plain-text communications wherein the exchanged messages are not encrypted. In this paper, the authors propose ReFormat, a system that aims at deriving the message...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Towards Well-Behaved Schema Evolution

    The authors study the problem of schema evolution in the RDF data model. RDF and the RDFS schema language are W3C standards for flexibly modeling and sharing data on the web. Although schema evolution has been intensively studied in the database and knowledge-representation communities, only recently has progress been made...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Super-Diffusive Behavior of Mobile Nodes and Its Impact on Routing Protocol Performance

    Mobility is the most important component in Mobile Ad-hoc NETworks (MANETs) and Delay Tolerant Networks (DTNs). In this paper, the authors first investigate numerous GPS mobility traces of human mobile nodes and observe super-diffusive behavior in all GPS traces, which is characterized by a 'Faster-than-linear' growth rate of the Mean...

    Provided By North Carolina State University
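
    The "faster-than-linear" growth the abstract refers to is conventionally stated in terms of the Mean Square Displacement (MSD); a sketch of the standard characterization:

        \langle d^{2}(t) \rangle \propto t^{\alpha}, \qquad \alpha = 1 \text{ (normal diffusion)}, \qquad 1 < \alpha \le 2 \text{ (super-diffusive)}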

  • White Papers // Jun 2009

    Architecture Support for Improving Bulk Memory Copying and Initialization Performance

    Bulk memory copying and initialization is one of the most ubiquitous operations performed in current computer systems by both user applications and operating systems. While many current systems rely on a loop of loads and stores, there are proposals to introduce a single instruction to perform bulk memory copying. While...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Authenticated Data Compression in Delay Tolerant Wireless Sensor Networks

    Delay Tolerant Wireless Sensor Networks (DTWSNs) are sensor networks where continuous connectivity between the sensor nodes and their final destinations (e.g., the base station) cannot be guaranteed. Storage constraints are particularly a concern in DTWSNs, since each node may have to store sensed data for a long period of time...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Randomized Differential DSSS: Jamming-Resistant Wireless Broadcast Communication

    Jamming resistance is crucial for applications where reliable wireless communication is required. Spread spectrum techniques such as Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS) have been used as countermeasures against jamming attacks. Traditional anti-jamming techniques require that senders and receivers share a secret key in order...

    Provided By North Carolina State University
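
    For context, a toy conventional DSSS spread/despread in Python; it illustrates only the shared-key spreading that the abstract says traditional anti-jamming schemes require, not the paper's randomized differential variant:

        import random

        CHIPS = [1, -1, 1, 1, -1, 1, -1, -1]   # shared spreading code (the secret key)

        def spread(bits):
            """Replace each data bit (+1/-1) with the keyed chip sequence."""
            return [b * c for b in bits for c in CHIPS]

        def despread(signal):
            """Correlate each chip-length window against the shared code."""
            n = len(CHIPS)
            return [1 if sum(s * c for s, c in zip(signal[i:i + n], CHIPS)) > 0 else -1
                    for i in range(0, len(signal), n)]

        data = [1, -1, 1]
        noisy = [s + random.gauss(0, 0.5) for s in spread(data)]  # additive interference
        print(despread(noisy) == data)  # correlation averages the interference out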

  • White Papers // Jun 2009

    Detection of Multiple-Duty-Related Security Leakage in Access Control Policies

    Access control mechanisms control which subjects (such as users or processes) have access to which resources. To facilitate managing access control, policy authors increasingly write access control policies in XACML. Access control policies written in XACML could be amenable to multiple-duty-related security leakage, which grants unauthorized access to a user...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Hybrid Full/Incremental Checkpoint/Restart for MPI Jobs in HPC Environments

    As the number of cores in high-performance computing environments keeps increasing, faults are becoming commonplace. Checkpointing addresses such faults but captures full process images even though only a subset of the process image changes between checkpoints. The authors have designed a high-performance hybrid disk-based full/incremental checkpointing technique for MPI...

    Provided By North Carolina State University
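
    A small Python sketch of the incremental-checkpoint idea (persist only the blocks that changed since the last checkpoint); the hash-based dirty detection and block size are assumptions for illustration, as production systems typically track dirty pages in hardware:

        import hashlib

        BLOCK = 4096

        def checkpoint(image, prev_hashes):
            """Return (dirty_blocks, new_hashes) for the byte string `image`."""
            dirty, hashes = {}, {}
            for off in range(0, len(image), BLOCK):
                block = image[off:off + BLOCK]
                h = hashlib.sha1(block).digest()
                hashes[off] = h
                if prev_hashes.get(off) != h:
                    dirty[off] = block  # only this block is written to disk
            return dirty, hashes

        img = bytearray(4 * BLOCK)
        _, hashes = checkpoint(bytes(img), {})          # full checkpoint: 4 blocks
        img[BLOCK + 10] = 0xFF                          # one page changes
        dirty, hashes = checkpoint(bytes(img), hashes)  # incremental checkpoint
        print(len(dirty))                               # -> 1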

  • White Papers // Jun 2009

    SHIELDSTRAP: A Secure Bootstrap Architecture

    Many systems may have security requirements such as protecting the privacy of data and code stored in the system, ensuring integrity of computations, or preventing the execution of unauthorized code. It is becoming increasingly difficult to ensure such protections as hardware-based attacks, in addition to software attacks, become more widespread...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    The Role of Internet Service Providers in Cyber Security

    The current level of insecurity of the Internet is a worldwide problem that has resulted in a multitude of costs for businesses, governments, and individuals. Past research (e.g., Frith, 2005; Gallaher, Rowe, Rogozhin, & Link, 2006) suggests that one significant factor in these cyber security problems is the inadequate level...

    Provided By North Carolina State University

  • White Papers // May 2009

    Hash-Based Sequential Aggregate and Forward Secure Signature for Unattended Wireless Sensor Networks

    Unattended Wireless Sensor Networks (UWSNs) operating in hostile environments face great security and performance challenges due to the lack of continuous real-time communication between senders (sensors) and receivers (e.g., mobile data collectors, static sinks). The lack of real-time communication forces sensors to accumulate the sensed data possibly for long time...

    Provided By North Carolina State University

  • White Papers // May 2009

    Analysis on the Kalman Filter Performance in GPS/INS Integration at Different Noise Levels, Sampling Periods and Curvatures

    Kalman Filters (KF) have been extensively used in the integration of Global Positioning System (GPS) and Inertial Navigation System (INS) data. Often, the GPS data is used as a benchmark to update the INS data. In this paper, an analysis of integration of GPS data with INS data using an...

    Provided By North Carolina State University
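
    A minimal 1-D Kalman filter in the spirit of the integration described above, with the INS-derived displacement as the prediction and GPS fixes as the measurement update; the motion model and noise levels are illustrative assumptions:

        def kalman_step(x, P, u, z, Q=0.1, R=4.0):
            # Predict using the INS-derived displacement u (process noise Q).
            x_pred = x + u
            P_pred = P + Q
            # Update with the GPS position fix z (measurement noise R).
            K = P_pred / (P_pred + R)       # Kalman gain
            return x_pred + K * (z - x_pred), (1 - K) * P_pred

        x, P = 0.0, 1.0
        for u, z in [(1.0, 1.3), (1.0, 2.1), (1.0, 2.8)]:
            x, P = kalman_step(x, P, u, z)
            print(round(x, 2), round(P, 2))   # the error variance P shrinks each step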

  • White Papers // May 2009

    Netset: Automating Network Performance Evaluation

    Performance measurement and comparison is integral to almost any kind of networking research. However, the authors currently lack a general framework under which network-based tests can be carried out. Thus researchers tend to design their tests based on their own ingenuity, making it difficult to repeat these tests and compare...

    Provided By North Carolina State University

  • White Papers // May 2009

    Predictive Control of Multiple UGVs in a NCS With Adaptive Bandwidth Allocation

    In network based path tracking control of systems with multiple Unmanned Ground Vehicles (UGVs), performance can be affected by network constraints including time-varying network delays and bandwidth limitation. Network delay has previously been compensated for using Smith-predictor-based techniques and gain scheduling to limit UGV motion. The predictive gain...

    Provided By North Carolina State University

  • White Papers // May 2009

    Improving the Availability of Supercomputer Job Input Data Using Temporal Replication

    Storage systems in supercomputers are a major reason for service interruptions. RAID solutions alone cannot provide sufficient protection as 1) growing average disk recovery times make RAID groups increasingly vulnerable to disk failures during reconstruction, and 2) RAID does not help with higher-level faults such as failed I/O nodes. This paper...

    Provided By North Carolina State University

  • White Papers // Apr 2009

    Structured Variational Methods for Distributed Inference: Convergence Analysis and Performance-Complexity Tradeoff

    In this paper, the asymptotic performance of a recently proposed distributed inference framework, structured variational methods, is investigated. The authors first distinguish the intra- and inter-cluster inference algorithms as vertex and edge processes respectively. Their difference is illustrated, and convergence rate is derived for the intra-cluster inference procedure which is...

    Provided By North Carolina State University

  • White Papers // Apr 2009

    FREEDM Software Controller Architecture for a Solid State Transformer

    The Future Renewable Electric Energy Delivery and Management (FREEDM) project aims at providing an efficient electric power grid integrating alternative generating sources and storage with existing power systems to facilitate green energy in a highly distributed and scalable manner. One of the central aspects of the Reliable and Secure...

    Provided By North Carolina State University

  • White Papers // Apr 2009

    The FREEDM Architecture of Fault Tolerant Network Routing through Software Overlays

    Control decisions of intelligent devices in critical infrastructure can have a significant impact on human life and the environment. Ensuring that the appropriate data is available is crucial in making informed decisions. Such considerations are becoming increasingly important in today's cyber-physical systems that combine computational decision making on the cyber...

    Provided By North Carolina State University

  • White Papers // Apr 2009

    Remote Attestation to Dynamic System Properties: Towards Providing Complete System Integrity Evidence

    Remote attestation of system integrity is an essential part of trusted computing. However, current remote attestation techniques only provide integrity proofs of static properties of the system. To address this problem the authors present a novel remote dynamic attestation system named ReDAS (Remote Dynamic Attestation System) that provides integrity evidence...

    Provided By North Carolina State University

  • White Papers // Mar 2009

    Selecting Trustworthy Service in Service-Oriented Environments

    Most current service selection approaches in service-oriented environments fail to capture the dynamic relationships between services, or assume that complete knowledge of the service composition is known a priori. In these cases, problems may arise when consumers are not aware of the underlying composition behind services. The authors propose...

    Provided By North Carolina State University

  • White Papers // Mar 2009

    Feedback-Directed Page Placement for CcNUMA Via Hardware-Generated Memory Traces

    Non-Uniform Memory Architectures with cache coherence (ccNUMA) are becoming increasingly common, not just for large-scale high performance platforms but also in the context of multi-core architectures. Under ccNUMA, data placement may influence overall application performance significantly as references resolved locally to a processor/core impose lower latencies than remote ones. This...

    Provided By North Carolina State University

  • White Papers // Feb 2009

    Lightweight Remote Image Management for Secure Code Dissemination in Wireless Sensor Networks

    Wireless sensor networks are considered ideal candidates for a wide range of applications. It is desirable and sometimes necessary to reprogram sensor nodes through wireless links after they are deployed to remove bugs or add new functionalities. Several approaches (e.g., Seluge, Sluice) have been proposed recently for secure code dissemination...

    Provided By North Carolina State University

  • White Papers // Feb 2009

    Improving Software Quality Via Code Searching and Mining

    An enormous amount of open source code is available on the Internet, and various Code Search Engines (CSEs) serve as a means for searching in open source code. However, usage of CSEs is often limited to simple tasks such as searching for relevant code examples. This paper presents...

    Provided By North Carolina State University

  • White Papers // Feb 2009

    An Empirical Study of Testing File-System-Dependent Software With Mock Objects

    Unit testing is a technique of testing a single unit of a program in isolation. The testability of the unit under test can be reduced when the unit interacts with its environment. The construction of high-covering unit tests and their execution require appropriate interactions with the environment such as a...

    Provided By North Carolina State University
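
    A short Python example of the technique under study: the file-system dependency is replaced with a mock object so the unit can be tested in isolation. The function and file names are hypothetical:

        from unittest import mock

        def count_config_lines(path, opener=open):
            """Unit under test: touches the file system only through `opener`."""
            with opener(path) as f:
                return sum(1 for line in f.read().splitlines() if line.strip())

        # mock_open substitutes an in-memory double for the real file system,
        # so the test needs no actual file and no environment setup.
        fake = mock.mock_open(read_data="host=a\n\nport=1\n")
        print(count_config_lines("app.cfg", opener=fake))  # -> 2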

  • White Papers // Feb 2009

    Service Performance and Analysis in Cloud Computing

    Cloud computing is a new cost-efficient computing paradigm in which information and computer power can be accessed from a Web browser by customers. Understanding the characteristics of computer service performance has become critical for service applications in cloud computing. For the commercial success of this new computing paradigm, the ability...

    Provided By North Carolina State University

  • White Papers // Feb 2009

    Mining Exception-Handling Rules as Sequence Association Rules

    Programming languages such as Java and C++ provide exception-handling constructs such as try-catch to handle exception conditions that arise during program execution. Under these exception conditions, programs follow paths different from normal execution paths; these additional paths are referred to as exception paths. Applications developed based on these programming languages...

    Provided By North Carolina State University
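
    A toy Python example of an exception path and of the kind of sequence association rule such mining can recover (e.g., "a connection opened before the try block must be closed on the exception path as well"); the database scenario is an assumption:

        import sqlite3

        def read_row(db_path, query):
            conn = sqlite3.connect(db_path)
            try:
                return conn.execute(query).fetchone()  # may raise OperationalError
            finally:
                conn.close()  # the mined rule: close() on normal and exception paths

        print(read_row(":memory:", "SELECT 1"))  # -> (1,)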

  • White Papers // Feb 2009

    Guided Path Exploration for Regression Test Generation

    Regression test generation aims at generating a test suite that can detect behavioral differences between the original and the modified versions of a program. Regression test generation can be automated by using Dynamic Symbolic Execution (DSE), a state-of-the-art test generation technique, to generate a test suite achieving high structural coverage....

    Provided By North Carolina State University

  • White Papers // Feb 2009

    Test Selection for Result Inspection Via Mining Predicate Rules

    It is labor-intensive to manually verify the outputs of a large set of tests that are not equipped with test oracles. Test selection helps to reduce this cost by selecting a small subset of tests that are likely to reveal faults. A promising approach is to dynamically mine operational models...

    Provided By North Carolina State University

  • White Papers // Feb 2009

    Using VCL Technology to Implement Distributed Reconfigurable Data Centers and Computational Services for Educational Institutions

    In the context of educational institutions, small distributed data centers and labs are becoming increasingly expensive to provision, support and maintain on their own. This leads to preferences towards centralized and integrated data center resource management and network access to the resources. In turn, the data centers are undergoing a...

    Provided By North Carolina State University

  • White Papers // Jan 2009

    DiffQ: Practical Differential Backlog Congestion Control for Wireless Networks

    Congestion control in wireless multi-hop networks is challenging and complicated for two reasons. First, interference is ubiquitous and causes loss in the shared medium. Second, wireless multi-hop networks are characterized by the use of diverse and dynamically changing routing paths. Traditional end point based congestion control protocols are ineffective...

    Provided By North Carolina State University

  • White Papers // Jan 2009

    PFetch: Software Prefetching Exploiting Temporal Predictability of Memory Access Streams

    CPU speeds have increased faster than the rate of improvement in memory access latencies in the recent past. As a result, with programs that suffer excessive cache misses, the CPU will increasingly be stalled waiting for the memory system to provide the requested memory line. Prefetching is a latency hiding...

    Provided By North Carolina State University

  • White Papers // Jan 2009

    Treat-Before-Trick: Free-Riding Prevention for BitTorrent-Like Peer-to-Peer Networks

    In P2P file sharing systems, free-riders who use others' resources without sharing their own cause system-wide performance degradation. Existing techniques to counter free-riders are either complex (and thus not widely deployed), or easy to bypass (and therefore not effective). This paper proposes a simple yet highly effective free-rider prevention scheme...

    Provided By North Carolina State University

  • White Papers // Jan 2009

    Adaptive Quickest Change Detection With Unknown Parameter

    Quickest detection of an abrupt distribution change with an unknown time-varying parameter is considered. A novel adaptive approach is proposed to tackle this problem, which is shown to outperform the celebrated Parallel CUSUM Test. Performance is evaluated through theoretical analysis and numerical simulations. Quickest detection is a technique to...

    Provided By North Carolina State University
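
    For reference, a standard one-sided CUSUM test for a known mean shift in Gaussian noise, sketched in Python; the paper's adaptive scheme for an unknown, time-varying post-change parameter is not implemented here:

        import random

        def cusum(samples, mu0, mu1, sigma, h):
            """Alarm when the running log-likelihood ratio statistic exceeds h."""
            s = 0.0
            for n, x in enumerate(samples, 1):
                llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
                s = max(0.0, s + llr)   # reset at zero: one-sided CUSUM
                if s > h:
                    return n            # alarm time (sample index)
            return None

        # 50 pre-change samples from N(0,1), then 50 post-change from N(1,1).
        data = [random.gauss(0, 1) for _ in range(50)] + \
               [random.gauss(1, 1) for _ in range(50)]
        print(cusum(data, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0))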

  • White Papers // Jan 2006

    Non-Uniform Program Analysis & Repeatable Execution Constraints: Exploiting Out-of-Order Processors in Real-Time Systems

    In this paper, the authors enable easy, tight, and safe timing analysis of contemporary complex processors. They exploit the fact that out-of-order processors can be analyzed via simulation in the absence of variable control-flow. In their first technique, Non-Uniform Program Analysis (NUPA), program segments with a single flow of control...

    Provided By North Carolina State University

  • White Papers // Aug 2006

    The State of ZettaRAM

    Computer architectures are heavily influenced by parameters imposed by memory technologies. Memory hierarchies, virtual memory, prefetching, multithreading, and large-window processors are some well-known examples of architectural innovations influenced by memory constraints. This paper surveys ZettaRAM, a nascent memory technology based on molecular electronics. From patents and papers, the authors distill...

    Provided By North Carolina State University

  • White Papers // Aug 2006

    Assertion-Based Microarchitecture Design for Improved Fault Tolerance

    Protection against transient faults is an important constraint in high-performance processor design. One strategy for achieving efficient reliability is to apply targeted fault checking/masking techniques to different units within an overall reliability regimen. In this paper, the authors propose a novel class of targeted fault checks that verify the functioning...

    Provided By North Carolina State University

  • White Papers // Mar 2014

    NoCMsg: Scalable NoC-Based Message Passing

    Current processor designs with ever more cores may ensure that theoretical compute performance still follows past increases (resulting from Moore's law), but they also increasingly present a challenge to hardware and software alike. As the core count increases, the Network-on-Chip (NoC) topology has changed from buses over rings and fully...

    Provided By North Carolina State University

  • White Papers // Feb 2014

    Understanding the Tradeoffs between Software-Managed vs. Hardware-Managed Caches in GPUs

    On-chip caches are commonly used in computer systems to hide long off-chip memory access latencies. To manage on-chip caches, either software-managed or hardware-managed schemes can be employed. State-of-the-art accelerators, such as the NVIDIA Fermi or Kepler GPUs and Intel's forthcoming MIC "KNights Landing" (KNL), support both software-managed caches, aka. shared...

    Provided By North Carolina State University

  • White Papers // Jan 2014

    Communication Characteristics of Large-Scale Scientific Applications for Contemporary Cluster Architectures

    In this paper, the authors examine the explicit communication characteristics of several sophisticated scientific applications, which, by themselves, constitute a representative suite of publicly available benchmarks for large cluster architectures. By focusing on the Message Passing Interface (MPI) and by using hardware counters on the microprocessor, they observe each application's...

    Provided By North Carolina State University

  • White Papers // Jun 2014

    ScalaJack: Customized Scalable Tracing with in-situ Data Analysis

    Root cause diagnosis of large-scale HPC applications often fails because tools, specifically trace-based ones, can no longer record all metrics they measure. The authors address this problem by combining customized tracing and providing support for in-situ data analysis via ScalaJack, a framework with customizable instrumentation and pluggable extension capabilities for...

    Provided By North Carolina State University

  • White Papers // Aug 2011

    Memory Trace Compression and Replay for SPMD Systems using Extended PRSDs

    Concurrency levels in large-scale supercomputers are rising exponentially, and shared-memory nodes with hundreds of cores and non-uniform memory access latencies are expected within the next decade. However, even current petascale systems with tens of cores per node suffer from memory bottlenecks. As core counts increase, memory issues will become critical...

    Provided By North Carolina State University

  • White Papers // Jan 2010

    Data-Intensive Document Clustering on GPU Clusters

    Document clustering is a central method to mine massive amounts of data. Due to the explosion of raw documents generated on the Internet and the necessity to analyze them efficiently in various intelligent information systems, clustering techniques have reached their limitations on single processors. Instead of single processors, general purpose...

    Provided By North Carolina State University

  • White Papers // Dec 2012

    Auto-Generation and Auto-Tuning of 3D Stencil Codes on Homogeneous and Heterogeneous GPU Clusters

    In this paper, the authors develop and evaluate search and optimization techniques for auto-tuning 3D stencil (nearest-neighbor) computations on GPUs. Observations indicate that parameter tuning is necessary for heterogeneous GPUs to achieve optimal performance with respect to a search space. Their proposed framework takes a concise specification of stencil...

    Provided By North Carolina State University

  • White Papers // Sep 2009

    A Programming Model for Massive Data Parallelism with Data Dependencies

    Accelerating processors can often be more cost and energy effective for a wide range of data-parallel computing problems than general-purpose processors. For Graphics Processor Units (GPUs), this is particularly the case when program development is aided by environments, such as NVIDIA's Compute Unified Device Architecture (CUDA), which dramatically reduces the...

    Provided By North Carolina State University

  • White Papers // Sep 2011

    A Tunable, Software-based DRAM Error Detection and Correction Library for HPC

    Proposed exascale systems will present a number of considerable resiliency challenges. In particular, DRAM soft-errors, or bit-flips, are expected to greatly increase due to the increased memory density of these systems. Current hardware-based fault-tolerance methods will be unsuitable for addressing the expected soft error rate. As a result, additional...

    Provided By North Carolina State University

  • White Papers // Oct 2011

    Auto-Generation of Communication Benchmark Traces

    Benchmarks are essential for evaluating HPC hardware and software for petascale machines and beyond. But benchmark creation is a tedious manual process. As a result, benchmarks tend to lag behind the development of complex scientific codes. The authors' paper automates the creation of communication benchmarks. Given an MPI application, they...

    Provided By North Carolina State University

  • White Papers // Dec 2013

    Performance Assessment of A Multi-block Incompressible Navier-Stokes Solver using Directive-based GPU Programming in a Cluster Environment

    OpenACC, a directive-based GPU programming standard, is emerging as a promising technology for massively-parallel accelerators, such as General-Purpose computing on Graphics Processing Units (GPGPU), Accelerated Processing Unit (APU) and Many Integrated Core architecture (MIC). The heterogeneous nature of these accelerators calls for careful design of parallel algorithms and data management,...

    Provided By North Carolina State University

  • White Papers // Mar 2008

    Hybrid Timing Analysis of Modern Processor Pipelines via Hardware/Software Interactions

    Embedded systems are often subject to constraints that require determinism to ensure that task deadlines are met. Such systems are referred to as real-time systems. Schedulability analysis provides a firm basis to ensure that tasks meet their deadlines for which knowledge of Worst-Case Execution Time (WCET) bounds is a critical...

    Provided By North Carolina State University

  • White Papers // Sep 2008

    Merging State and Preserving Timing Anomalies in Pipelines of High-End Processors

    Many embedded systems are subject to temporal constraints that require advance guarantees on meeting deadlines. Such systems rely on static analysis to safely bound Worst-Case Execution Time (WCET) bounds of tasks. Designers of these systems are forced to avoid state-of-the-art processors due to their inherent architectural complexity (such as out-of-order...

    Provided By North Carolina State University

  • White Papers // Jun 2011

    GStream: A General-Purpose Data Streaming Framework on GPU Clusters

    Emerging accelerating architectures, such as GPUs, have proved successful in providing significant performance gains to various application domains. However, their viability to operate on general streaming data is still ambiguous. In this paper, the authors propose GStream, a general-purpose, scalable data streaming framework on GPUs. The contributions of GStream are...

    Provided By North Carolina State University

  • White Papers // Mar 2012

    Fault Resilient Real-Time Design for NoC Architectures

    Performance and time to market requirements cause many real-time designers to consider Commercial Off-The-Shelf (COTS) components for real-time cyber-physical systems. Massive multi-core embedded processors with Network-on-Chip (NoC) designs to facilitate core-to-core communication are becoming common in COTS. These architectures benefit real-time scheduling, but they also pose predictability challenges. In...

    Provided By North Carolina State University

  • White Papers // Mar 2012

    Low Contention Mapping of Real-Time Tasks onto a TilePro 64 Core Processor

    Predictability of task execution is paramount for real-time systems so that upper bounds of execution times can be determined via static timing analysis. Static timing analysis on Network-on-Chip (NoC) processors may result in unsafe underestimations when the underlying communication paths are not considered. This stems from contention on the underlying...

    Provided By North Carolina State University

  • White Papers // Jan 2012

    ScalaBenchGen: Auto-Generation of Communication Benchmark Traces

    Benchmarks are essential for evaluating HPC hardware and software for petascale machines and beyond. But benchmark creation is a tedious manual process. As a result, benchmarks tend to lag behind the development of complex scientific codes. This work contributes an automated approach to the creation of communication benchmarks. Given an...

    Provided By North Carolina State University

  • White Papers // Jun 2012

    CuNesl: Compiling Nested Data-Parallel Languages for SIMT Architectures

    Data-parallel languages feature fine-grained parallel primitives that can be supported by compilers targeting modern many-core architectures where data parallelism must be exploited to fully utilize the hardware. Previous research has focused on converting data-parallel languages for SIMD (Single Instruction Multiple Data) architectures. However, directly applying them to today's SIMT (Single...

    Provided By North Carolina State University

  • White Papers // Jan 2014

    Soft Error Protection via Fault-Resilient Data Representations

    Embedded systems are increasingly deployed in harsh environments that their components were not necessarily designed for. As a result, systems may have to sustain transient faults, i.e., both single-bit soft errors caused by radiation from space and transient errors caused by lower signal/noise ratio in smaller fabrication sizes. Hardware can...

    Provided By North Carolina State University

  • White Papers // Jun 2011

    A Fault Observant Real-Time Embedded Design for Network-on-Chip Control Systems

    Performance and time to market requirements cause many real-time designers to consider Commercial Off-The-Shelf (COTS) components for real-time systems. Massive multi-core embedded processors with Network-on-Chip (NoC) designs to facilitate core-to-core communication are becoming common in COTS. These architectures benefit real-time scheduling, but they also pose predictability challenges. In this...

    Provided By North Carolina State University

  • White Papers // Dec 2013

    Exploiting Data Representation for Fault Tolerance

    The authors explore the link between data representation and soft errors in dot products. They present an analytic model for the absolute error introduced should a soft error corrupt a bit in an IEEE-754 floating-point number. They show how this finding relates to the fundamental linear algebra concepts of normalization...

    Provided By North Carolina State University
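
    The dependence of the absolute error on the position of the corrupted bit can be seen directly by flipping bits of an IEEE-754 double in Python; this is a sketch of the phenomenon, not the paper's analytic model:

        import struct

        def flip_bit(x, k):
            """Flip bit k (0 = least significant) of a 64-bit double."""
            (bits,) = struct.unpack("<Q", struct.pack("<d", x))
            (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << k)))
            return y

        x = 1.0
        print(abs(flip_bit(x, 2) - x))   # low mantissa bit: error ~1e-15
        print(abs(flip_bit(x, 52) - x))  # lowest exponent bit: the value halves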

  • White Papers // Feb 2010

    Stealthy Malware Detection and Monitoring through VMM-Based "Out-of-the-Box" Semantic View Reconstruction

    An alarming trend in recent malware incidents is that they are armed with stealthy techniques to detect, evade, and subvert malware detection facilities of the victim. On the defensive side, a fundamental limitation of traditional host-based antimalware systems is that they run inside the very hosts they are protecting ("in-the-box"),...

    Provided By North Carolina State University

  • White Papers // Feb 2013

    Adaptive Cache Bypassing for Inclusive Last Level Caches

    Cache hierarchy designs, including bypassing, replacement, and the inclusion property, have significant performance impact. Recent works on high performance caches have shown that cache bypassing is an effective technique to enhance the Last Level Cache (LLC) performance. However, commonly used inclusive cache hierarchy cannot benefit from this technique because bypassing...

    Provided By North Carolina State University

  • White Papers // Feb 2012

    Locality Principle Revisited: A Probability-Based Quantitative Approach

    This paper revisits the fundamental concept of the locality of references and proposes to quantify it as a conditional probability: in an address stream, given the condition that an address is accessed, how likely the same address (temporal locality) or an address within its neighborhood (spatial locality) will be accessed...

    Provided By North Carolina State University
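
    A miniature version of the proposed quantification for temporal locality, computed over a toy address stream in Python; the window size and trace are illustrative assumptions:

        def temporal_locality(trace, window):
            """P(an accessed address is accessed again within `window` later accesses)."""
            hits = sum(1 for i, addr in enumerate(trace[:-1])
                       if addr in trace[i + 1:i + 1 + window])
            return hits / (len(trace) - 1)

        trace = [0x10, 0x14, 0x10, 0x20, 0x10, 0x24, 0x20, 0x10]
        print(temporal_locality(trace, window=3))  # -> 4/7, roughly 0.57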

  • White Papers // Jun 2012

    Fixing Performance Bugs: An Empirical Study of Open-Source GPGPU Programs

    Given the extraordinary computational power of modern Graphics Processing Units (GPUs), general purpose computation on GPUs (GPGPU) has become an increasingly important platform for high performance computing. To better understand how well the GPU resource has been utilized by application developers, and to help them develop high performance...

    Provided By North Carolina State University

  • White Papers // Feb 2012

    CPU-Assisted GPGPU on Fused CPU-GPU Architectures

    This paper presents a novel approach to utilize the CPU resource to facilitate the execution of GPGPU programs on fused CPU-GPU architectures. In the authors' model of fused architectures, the GPU and the CPU are integrated on the same die and share the on-chip L3 cache and off-chip memory, similar...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    Time-Ordered Event Traces: A New Debugging Primitive for Concurrency Bugs

    Non-determinism makes concurrent bugs extremely difficult to reproduce and to debug. In this paper, the authors propose a new debugging primitive to facilitate the debugging process by exposing this non-deterministic behavior to the programmer. The key idea is to generate a time-ordered trace of events such as function calls/returns and...

    Provided By North Carolina State University

  • White Papers // Jan 2014

    Warp-Level Divergence in GPUs: Characterization, Impact, and Mitigation

    High throughput architectures rely on high Thread-Level Parallelism (TLP) to hide execution latencies. In state-of-the-art Graphics Processing Units (GPUs), threads are organized in a grid of Thread Blocks (TBs) and each TB contains tens to hundreds of threads. With a TB-level resource management scheme, all the resources required by a...

    Provided By North Carolina State University

  • White Papers // Aug 2010

    Abstracting and Applying Business Modeling Patterns from RosettaNet

    RosettaNet is a leading industry effort that creates standards for business interactions among the participants in a supply chain. The RosettaNet standard defines over 100 Partner Interface Processes (PIPs) through which the participants can exchange business documents necessary to enact a supply chain. However, each PIP specifies the business interactions...

    Provided By North Carolina State University

  • White Papers // Feb 2014

    Automatic Identification of Application I/O Signatures from Noisy Server-Side Traces

    Competing workloads on a shared storage system cause I/O resource contention and application performance vagaries. This problem is already evident in today's HPC storage systems and is likely to become acute at exascale. The authors need more interaction between application I/O requirements and system software tools to help alleviate the...

    Provided By North Carolina State University

  • White Papers // Jan 2014

    IPSec/VPN Security Policy: Correctness, Conflict Detection and Resolution

    IPSec (Internet Protocol Security) functions will be executed correctly only if its policies are correctly specified and configured. Manual IPSec policy configuration is inefficient and error-prone. An erroneous policy could lead to communication blockade or serious security breach. In addition, even if policies are specified correctly in each domain,...

    Provided By North Carolina State University

  • White Papers // Jun 2006

    A Framework for Identifying Compromised Nodes in Sensor Networks

    Sensor networks are often subject to physical attacks. Once a node's cryptographic key is compromised, an attacker may completely impersonate it, and introduce arbitrary false information into the network. Basic cryptographic security mechanisms are often not effective in this situation. Most techniques to address this problem focus on detecting and...

    Provided By North Carolina State University

  • White Papers // Jun 2011

    EMFS: Email-Based Personal Cloud Storage

    Though a variety of cloud storage services have been offered recently, they have not yet provided users with transparent and cost-effective personal data storage. Services like Google Docs offer easy file access and sharing, but tie storage with internal data formats and specific applications. Meanwhile, services like Dropbox offer general-purpose...

    Provided By North Carolina State University

  • White Papers // Dec 2012

    Scheduling Cloud Capacity for Time-Varying Customer Demand

    As utility computing resources become more ubiquitous, service providers increasingly look to the cloud for a full or partial infrastructure to serve utility computing customers on demand. Given the costs associated with cloud infrastructure, dynamic scheduling of cloud resources can significantly lower costs while providing an acceptable service level. The...

    Provided By North Carolina State University