North Carolina State University

  • White Papers // Jul 2009

    The Impact of Exchange Rate Volatility on Plant-Level Investment: Evidence From Colombia

    The authors estimate the impact of exchange rate volatility on firms' investment decisions in a developing country setting. Employing plant-level panel data from the Colombian Manufacturing Census, they estimate a dynamic investment equation using the system-GMM estimator developed by Arellano and Bover (1995) and Blundell and Bond (1998). They find...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    A Hierarchical Model for Multigranular Optical Networks

    The authors present a hierarchical algorithm for grooming light-paths into wavebands, and routing wavebands over a network of multi-granular switching nodes. This algorithm focuses on lowering the number of wavelengths W and ports over the network while being conceptually simple, scalable, and consistent with the way networks are operated and...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    Business Modeling Via Commitments

    Existing computer science approaches to business modeling offer low-level abstractions such as data and control flows, which fail to capture the business intent underlying the interactions that are central to real-life business models. In contrast, existing management science approaches are high-level but not only are these semiformal, they are also...
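
    A commitment is commonly formalized as C(debtor, creditor, antecedent, consequent). The sketch below (not the authors' formalism; the merchant/customer example is hypothetical) models the basic lifecycle of such a commitment:

```python
from dataclasses import dataclass

# Sketch (not the authors' formalism) of a commitment
# C(debtor, creditor, antecedent, consequent): the debtor commits to the
# creditor to bring about the consequent if the antecedent comes to hold.
@dataclass
class Commitment:
    debtor: str
    creditor: str
    antecedent: str
    consequent: str
    state: str = "conditional"

    def update(self, facts):
        # Detach: the antecedent holds, so the commitment becomes unconditional.
        if self.state == "conditional" and self.antecedent in facts:
            self.state = "detached"
        # Discharge: the consequent holds, so the commitment is satisfied.
        if self.state in ("conditional", "detached") and self.consequent in facts:
            self.state = "discharged"
        return self.state

# Hypothetical example: a merchant commits to ship goods once the customer pays.
c = Commitment("merchant", "customer", "paid", "shipped")
print(c.update({"paid"}))             # -> detached
print(c.update({"paid", "shipped"}))  # -> discharged
```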

    Provided By North Carolina State University

  • White Papers // Jul 2009

    Fault Localization for Firewall Policies

    Firewalls are the mainstay of enterprise security and the most widely adopted technology for protecting private networks. Ensuring the correctness of firewall policies through testing is important. In firewall policy testing, test inputs are packets and test outputs are decisions. Packets with unexpected (expected) evaluated decisions are classified as failed...
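
    The passed/failed classification the abstract describes can be illustrated with a toy first-match policy; the rules, packets, and field names below are hypothetical, not from the paper:

```python
# Toy first-match firewall; rules, packets, and field names are hypothetical.
def evaluate(rules, packet, default="deny"):
    for predicate, decision in rules:
        if predicate(packet):
            return decision
    return default

rules = [
    (lambda p: p["dport"] == 22 and p["src"].startswith("10."), "accept"),
    (lambda p: p["dport"] == 22, "deny"),
    (lambda p: p["dport"] == 80, "accept"),
]

# A test pairs a packet with the decision the policy author expects.
tests = [
    ({"src": "10.0.0.5", "dport": 22}, "accept"),
    ({"src": "8.8.8.8",  "dport": 22}, "deny"),
    ({"src": "8.8.8.8",  "dport": 80}, "deny"),  # author expected deny here
]

results = []
for packet, expected in tests:
    actual = evaluate(rules, packet)
    # An unexpected decision marks the test as failed and points fault
    # localization at the rules that matched the packet.
    results.append("passed" if actual == expected else "failed")
print(results)  # -> ['passed', 'passed', 'failed']
```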

    Provided By North Carolina State University

  • White Papers // Jul 2009

    Correctness Properties for Multiagent Systems

    What distinguishes multiagent systems from other software systems is their emphasis on the interactions among autonomous, heterogeneous agents. This paper motivates and characterizes correctness properties for multiagent systems. These properties are centered on commitments, and capture correctness at a high level. In contrast to existing approaches, commitments underlie key correctness...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    Core-Selectability in Chip Multiprocessors

    The centralized structures necessary for the extraction of Instruction-Level Parallelism (ILP) are consuming progressively smaller portions of the total die area of Chip Multi-Processors (CMP). The reason for this is that scaling these structures does not enhance general performance as much as scaling the cache and interconnect. However, the fact...

    Provided By North Carolina State University

  • White Papers // Jul 2009

    An Empirical Study of Security Problem Reports in Linux Distributions

    Existing studies on problem reports in open source projects focus primarily on the analysis of the general category of problem reports, or limit their attention to observations on the number of security problems reported...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    ReFormat: Automatic Reverse Engineering of Encrypted Messages

    Automatic protocol reverse engineering has recently received significant attention due to its importance to many security applications. However, previous methods are all limited in analyzing only plain-text communications wherein the exchanged messages are not encrypted. In this paper, the authors propose ReFormat, a system that aims at deriving the message...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Towards Well-Behaved Schema Evolution

    The authors study the problem of schema evolution in the RDF data model. RDF and the RDFS schema language are W3C standards for flexibly modeling and sharing data on the web. Although schema evolution has been intensively studied in the database and knowledge-representation communities, only recently has progress been made...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Super-Diffusive Behavior of Mobile Nodes and Its Impact on Routing Protocol Performance

    Mobility is the most important component in Mobile Ad-hoc NETworks (MANETs) and Delay Tolerant Networks (DTNs). In this paper, the authors first investigate numerous GPS mobility traces of human mobile nodes and observe super-diffusive behavior in all GPS traces, which is characterized by a 'Faster-than-linear' growth rate of the Mean...
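
    The super-diffusive signature, a faster-than-linear growth of the Mean Squared Displacement MSD(t) ~ t**alpha with alpha > 1, can be estimated from any position trace. A minimal 1-D sketch on synthetic data (not the authors' GPS analysis):

```python
import math

# Mean Squared Displacement of a 1-D position trace at a given lag;
# MSD(t) ~ t**alpha with alpha > 1 indicates super-diffusive mobility
# (alpha = 1 is ordinary diffusion, alpha = 2 is ballistic motion).
def msd(positions, lag):
    diffs = [(positions[i + lag] - positions[i]) ** 2
             for i in range(len(positions) - lag)]
    return sum(diffs) / len(diffs)

def growth_exponent(positions, lag1=1, lag2=8):
    # Slope of log MSD vs. log lag between two lags.
    return (math.log(msd(positions, lag2)) - math.log(msd(positions, lag1))) \
           / (math.log(lag2) - math.log(lag1))

# Ballistic motion (constant velocity) is maximally super-diffusive.
ballistic = [0.5 * t for t in range(200)]
print(round(growth_exponent(ballistic), 2))  # -> 2.0
```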

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Architecture Support for Improving Bulk Memory Copying and Initialization Performance

    Bulk memory copying and initialization is one of the most ubiquitous operations performed in current computer systems by both user applications and operating systems. While many current systems rely on a loop of loads and stores, there are proposals to introduce a single instruction to perform bulk memory copying. While...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Authenticated Data Compression in Delay Tolerant Wireless Sensor Networks

    Delay Tolerant Wireless Sensor Networks (DTWSNs) are sensor networks where continuous connectivity between the sensor nodes and their final destinations (e.g., the base station) cannot be guaranteed. Storage constraints are particularly a concern in DTWSNs, since each node may have to store sensed data for a long period of time...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Randomized Differential DSSS: Jamming-Resistant Wireless Broadcast Communication

    Jamming resistance is crucial for applications where reliable wireless communication is required. Spread spectrum techniques such as Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS) have been used as countermeasures against jamming attacks. Traditional anti-jamming techniques require that senders and receivers share a secret key in order...
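
    As a toy illustration of the shared-secret assumption in classical DSSS (not the paper's randomized differential scheme), each data bit is spread into chips with a pre-shared pseudo-noise code; despreading by majority correlation tolerates a few jammed chips. The 8-chip code is hypothetical:

```python
# Hypothetical pre-shared pseudo-noise code (the shared secret that
# traditional anti-jamming schemes assume).
CODE = [1, 0, 1, 1, 0, 0, 1, 0]

def spread(bits):
    # Each data bit is XORed against every chip of the code.
    return [b ^ c for b in bits for c in CODE]

def despread(chips):
    bits = []
    for i in range(0, len(chips), len(CODE)):
        block = chips[i:i + len(CODE)]
        # Correlate: count chips matching the code; a majority vote
        # recovers the bit even if a jammer flips a few chips.
        matches = sum(ch == c for ch, c in zip(block, CODE))
        bits.append(0 if matches > len(CODE) // 2 else 1)
    return bits

data = [1, 0, 1, 1]
chips = spread(data)
chips[3] ^= 1  # jammer flips a chip of the first bit
chips[9] ^= 1  # ...and one of the second bit
assert despread(chips) == data  # still decodes correctly
```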

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Detection of Multiple-Duty-Related Security Leakage in Access Control Policies

    Access control mechanisms control which subjects (such as users or processes) have access to which resources. To facilitate managing access control, policy authors increasingly write access control policies in XACML. Access control policies written in XACML could be amenable to multiple-duty-related security leakage, which grants unauthorized access to a user...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    Hybrid Full/Incremental Checkpoint/Restart for MPI Jobs in HPC Environments

    As the number of cores in high-performance computing environments keeps increasing, faults are becoming commonplace. Checkpointing addresses such faults but captures full process images even though only a subset of the process image changes between checkpoints. The authors have designed a high-performance hybrid disk-based full/incremental checkpointing technique for MPI...
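
    The full/incremental idea can be sketched by hashing fixed-size blocks of the process image and saving only blocks whose hash changed; the block size and data here are hypothetical, and the authors' system operates on real MPI process images:

```python
import hashlib

BLOCK = 4  # bytes per block (tiny, for illustration only)

# A full checkpoint saves every block; an incremental one saves only
# blocks whose hash changed since the previous checkpoint.
def checkpoint(memory, prev_hashes):
    saved, hashes = {}, []
    for i in range(0, len(memory), BLOCK):
        block = bytes(memory[i:i + BLOCK])
        h = hashlib.sha256(block).hexdigest()
        hashes.append(h)
        idx = i // BLOCK
        if prev_hashes is None or idx >= len(prev_hashes) or prev_hashes[idx] != h:
            saved[i] = block  # first-time or dirty block
    return saved, hashes

mem = bytearray(b"AAAABBBBCCCC")
full, hashes = checkpoint(mem, None)    # full checkpoint: all 3 blocks saved
mem[4:8] = b"XXXX"                      # mutate one block between checkpoints
incr, hashes = checkpoint(mem, hashes)  # incremental: only 1 block saved
assert len(full) == 3 and len(incr) == 1
```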

    Provided By North Carolina State University

  • White Papers // Jun 2009

    SHIELDSTRAP: A Secure Bootstrap Architecture

    Many systems may have security requirements such as protecting the privacy of data and code stored in the system, ensuring integrity of computations, or preventing the execution of unauthorized code. It is becoming increasingly difficult to ensure such protections as hardware-based attacks, in addition to software attacks, become more widespread...

    Provided By North Carolina State University

  • White Papers // Jun 2009

    The Role of Internet Service Providers in Cyber Security

    The current level of insecurity of the Internet is a worldwide problem that has resulted in a multitude of costs for businesses, governments, and individuals. Past research (e.g., Frith, 2005; Gallaher, Rowe, Rogozhin, & Link, 2006) suggests that one significant factor in these cyber security problems is the inadequate level...

    Provided By North Carolina State University

  • White Papers // May 2009

    Hash-Based Sequential Aggregate and Forward Secure Signature for Unattended Wireless Sensor Networks

    Unattended Wireless Sensor Networks (UWSNs) operating in hostile environments face great security and performance challenges due to the lack of continuous real-time communication between senders (sensors) and receivers (e.g., mobile data collectors, static sinks). The lack of real-time communication forces sensors to accumulate the sensed data possibly for long time...

    Provided By North Carolina State University

  • White Papers // May 2009

    Analysis on the Kalman Filter Performance in GPS/INS Integration at Different Noise Levels, Sampling Periods and Curvatures

    Kalman Filters (KF) have been extensively used in the integration of Global Positioning System (GPS) and Inertial Navigation System (INS) data. Often, the GPS data is used as a benchmark to update the INS data. In this paper, an analysis of integration of GPS data with INS data using an...
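
    The predict/update pattern behind GPS/INS integration reduces, in the scalar case, to a few lines; the noise values and measurements below are illustrative, not from the paper:

```python
# Minimal scalar Kalman filter sketch of the GPS/INS pattern: the INS
# supplies the prediction (dead reckoning), the GPS fix updates it.
def kalman_step(x, p, ins_delta, gps_z, q, r):
    # Predict: propagate the INS-estimated motion and grow the uncertainty.
    x_pred = x + ins_delta
    p_pred = p + q
    # Update: blend in the GPS measurement, weighted by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (gps_z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
# The INS reports a 1.0 move each step; the (noisier, r > q) GPS reads position.
for gps in [1.2, 1.9, 3.1]:
    x, p = kalman_step(x, p, 1.0, gps, q=0.01, r=0.25)
# The estimate tracks the motion while the uncertainty p shrinks.
```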

    Provided By North Carolina State University

  • White Papers // May 2009

    Netset: Automating Network Performance Evaluation

    Performance measurement and comparison is integral to almost any kind of networking research. However, the authors currently lack a general framework under which network-based tests can be carried out. Thus researchers tend to design their tests based on their own ingenuity, making it difficult to repeat these tests and compare...

    Provided By North Carolina State University

  • White Papers // May 2009

    Predictive Control of Multiple UGVs in a NCS With Adaptive Bandwidth Allocation

    In network-based path tracking control of systems with multiple Unmanned Ground Vehicles (UGVs), performance can be affected by network constraints including time-varying network delays and bandwidth limitation. Network delay has previously been compensated for using Smith-predictor-based techniques and gain scheduling to limit UGV motion. The predictive gain...

    Provided By North Carolina State University

  • White Papers // May 2009

    Improving the Availability of Supercomputer Job Input Data Using Temporal Replication

    Storage systems in supercomputers are a major reason for service interruptions. RAID solutions alone cannot provide sufficient protection as 1) growing average disk recovery times make RAID groups increasingly vulnerable to disk failures during reconstruction, and 2) RAID does not help with higher-level faults such as failed I/O nodes. This paper...

    Provided By North Carolina State University

  • White Papers // Apr 2009

    Structured Variational Methods for Distributed Inference: Convergence Analysis and Performance-Complexity Tradeoff

    In this paper, the asymptotic performance of a recently proposed distributed inference framework, structured variational methods, is investigated. The authors first distinguish the intra- and inter-cluster inference algorithms as vertex and edge processes respectively. Their difference is illustrated, and convergence rate is derived for the intra-cluster inference procedure which is...

    Provided By North Carolina State University

  • White Papers // Apr 2009

    FREEDM Software Controller Architecture for a Solid State Transformer

    The Future Renewable Electric Energy Delivery and Management (FREEDM) project aims at providing an efficient electric power grid integrating alternative generating sources and storage with existing power systems to facilitate green energy in a highly distributed and scalable manner. One of the central aspects of the Reliable and Secure...

    Provided By North Carolina State University

  • White Papers // Apr 2009

    The FREEDM Architecture of Fault Tolerant Network Routing through Software Overlays

    Control decisions of intelligent devices in critical infrastructure can have a significant impact on human life and the environment. Ensuring that the appropriate data is available is crucial in making informed decisions. Such considerations are becoming increasingly important in today's cyber-physical systems that combine computational decision making on the cyber...

    Provided By North Carolina State University

  • White Papers // Apr 2009

    Remote Attestation to Dynamic System Properties: Towards Providing Complete System Integrity Evidence

    Remote attestation of system integrity is an essential part of trusted computing. However, current remote attestation techniques only provide integrity proofs of static properties of the system. To address this problem the authors present a novel remote dynamic attestation system named ReDAS (Remote Dynamic Attestation System) that provides integrity evidence...

    Provided By North Carolina State University

  • White Papers // Mar 2009

    Selecting Trustworthy Service in Service-Oriented Environments

    Most current service selection approaches in service-oriented environments fail to capture the dynamic relationships between services, or assume that complete knowledge of service composition is known a priori. In these cases, problems may arise when consumers are not aware of the underlying composition behind services. The authors propose...

    Provided By North Carolina State University

  • White Papers // Mar 2009

    Feedback-Directed Page Placement for CcNUMA Via Hardware-Generated Memory Traces

    Non-Uniform Memory Architectures with cache coherence (ccNUMA) are becoming increasingly common, not just for large-scale high performance platforms but also in the context of multi-core architectures. Under ccNUMA, data placement may influence overall application performance significantly as references resolved locally to a processor/core impose lower latencies than remote ones. This...

    Provided By North Carolina State University

  • White Papers // Feb 2009

    Lightweight Remote Image Management for Secure Code Dissemination in Wireless Sensor Networks

    Wireless sensor networks are considered ideal candidates for a wide range of applications. It is desirable and sometimes necessary to reprogram sensor nodes through wireless links after they are deployed to remove bugs or add new functionalities. Several approaches (e.g., Seluge, Sluice) have been proposed recently for secure code dissemination...

    Provided By North Carolina State University

  • White Papers // Feb 2009

    Improving Software Quality Via Code Searching and Mining

    An enormous amount of open source code is available on the Internet, and various Code Search Engines (CSEs) serve as a means for searching it. However, usage of CSEs is often limited to simple tasks such as searching for relevant code examples. This paper presents...

    Provided By North Carolina State University

  • White Papers // Feb 2009

    An Empirical Study of Testing File-System-Dependent Software With Mock Objects

    Unit testing is a technique of testing a single unit of a program in isolation. The testability of the unit under test can be reduced when the unit interacts with its environment. The construction of high-covering unit tests and their execution require appropriate interactions with the environment such as a...

    Provided By North Carolina State University

  • White Papers // Feb 2009

    Service Performance and Analysis in Cloud Computing

    Cloud computing is a new cost-efficient computing paradigm in which information and computing power can be accessed from a Web browser by customers. Understanding the characteristics of computer service performance has become critical for service applications in cloud computing. For the commercial success of this new computing paradigm, the ability...

    Provided By North Carolina State University

  • White Papers // Feb 2009

    Mining Exception-Handling Rules as Sequence Association Rules

    Programming languages such as Java and C++ provide exception-handling constructs such as try-catch to handle exception conditions that arise during program execution. Under these exception conditions, programs follow paths different from normal execution paths; these additional paths are referred to as exception paths. Applications developed based on these programming languages...

    Provided By North Carolina State University

  • White Papers // Feb 2009

    Guided Path Exploration for Regression Test Generation

    Regression test generation aims at generating a test suite that can detect behavioral differences between the original and the modified versions of a program. Regression test generation can be automated by using Dynamic Symbolic Execution (DSE), a state-of-the-art test generation technique, to generate a test suite achieving high structural coverage....

    Provided By North Carolina State University

  • White Papers // Feb 2009

    Test Selection for Result Inspection Via Mining Predicate Rules

    It is labor-intensive to manually verify the outputs of a large set of tests that are not equipped with test oracles. Test selection helps to reduce this cost by selecting a small subset of tests that are likely to reveal faults. A promising approach is to dynamically mine operational models...

    Provided By North Carolina State University

  • White Papers // Jan 2009

    PFetch: Software Prefetching Exploiting Temporal Predictability of Memory Access Streams

    CPU speeds have increased faster than the rate of improvement in memory access latencies in the recent past. As a result, with programs that suffer excessive cache misses, the CPU will increasingly be stalled waiting for the memory system to provide the requested memory line. Prefetching is a latency hiding...

    Provided By North Carolina State University

  • White Papers // Sep 2008

    Merging State and Preserving Timing Anomalies in Pipelines of High-End Processors

    Many embedded systems are subject to temporal constraints that require advance guarantees on meeting deadlines. Such systems rely on static analysis to safely determine Worst-Case Execution Time (WCET) bounds of tasks. Designers of these systems are forced to avoid state-of-the-art processors due to their inherent architectural complexity (such as out-of-order...

    Provided By North Carolina State University

  • White Papers // Aug 2008

    Security-Aware Resource Optimization in Distributed Service Computing

    In this paper, the authors consider a set of computer resources used by a service provider to host enterprise applications for customer services subject to a Service Level Agreement (SLA). The SLA defines three QoS metrics, namely, trustworthiness, percentile response time and availability. They first give an overview of current...

    Provided By North Carolina State University

  • White Papers // Jul 2008

    Dynamic Thread Assignment on Heterogeneous Multiprocessor Architectures

    In a multi-programmed computing environment, threads of execution exhibit different runtime characteristics and hardware resource requirements. Not only do the behaviors of distinct threads differ, but each thread may also present diversity in its performance and resource usage over time. A heterogeneous Chip Multi-Processor (CMP) architecture consists of processor cores...

    Provided By North Carolina State University

  • White Papers // Jul 2008

    Performance Assessment and Compensation for Secure Networked Control Systems

    Networked Control Systems (NCS) have been gaining popularity due to their high potential in widespread applications, and are becoming realizable due to rapid advancements in embedded systems and wireless communication technologies. This paper addresses the issue of NCS information security as well as its time-sensitive performance and their trade-off. A PI controller implemented on...

    Provided By North Carolina State University

  • White Papers // Oct 2011

    AutoGeneration of Communication Benchmark Traces

    Benchmarks are essential for evaluating HPC hardware and software for petascale machines and beyond. But benchmark creation is a tedious manual process. As a result, benchmarks tend to lag behind the development of complex scientific codes. The authors' paper automates the creation of communication benchmarks. Given an MPI application, they...

    Provided By North Carolina State University

  • White Papers // Dec 2013

    Performance Assessment of A Multi-block Incompressible Navier-Stokes Solver using Directive-based GPU Programming in a Cluster Environment

    OpenACC, a directive-based GPU programming standard, is emerging as a promising technology for massively-parallel accelerators, such as General-Purpose computing on Graphics Processing Units (GPGPU), Accelerated Processing Unit (APU) and Many Integrated Core architecture (MIC). The heterogeneous nature of these accelerators calls for careful designs of parallel algorithms and data management,...

    Provided By North Carolina State University

  • White Papers // Mar 2008

    Hybrid Timing Analysis of Modern Processor Pipelines via Hardware/Software Interactions

    Embedded systems are often subject to constraints that require determinism to ensure that task deadlines are met. Such systems are referred to as real-time systems. Schedulability analysis provides a firm basis to ensure that tasks meet their deadlines for which knowledge of Worst-Case Execution Time (WCET) bounds is a critical...

    Provided By North Carolina State University

  • White Papers // Jun 2011

    GStream: A General-Purpose Data Streaming Framework on GPU Clusters

    Emerging accelerating architectures, such as GPUs, have proved successful in providing significant performance gains to various application domains. However, their viability to operate on general streaming data is still ambiguous. In this paper, the authors propose GStream, a general-purpose, scalable data streaming framework on GPUs. The contributions of GStream are...

    Provided By North Carolina State University

  • White Papers // Mar 2012

    Fault Resilient Real-Time Design for NoC Architectures

    Performance and time to market requirements cause many real-time designers to consider Commercial Off-The-Shelf (COTS) components for real-time cyber-physical systems. Massive multi-core embedded processors with Network-on-Chip (NoC) designs to facilitate core-to-core communication are becoming common in COTS. These architectures benefit real-time scheduling, but they also pose predictability challenges. In...

    Provided By North Carolina State University

  • White Papers // Mar 2012

    Low Contention Mapping of Real-Time Tasks onto a TilePro 64 Core Processor

    Predictability of task execution is paramount for real-time systems so that upper bounds of execution times can be determined via static timing analysis. Static timing analysis on Network-on-Chip (NoC) processors may result in unsafe underestimations when the underlying communication paths are not considered. This stems from contention on the underlying...

    Provided By North Carolina State University

  • White Papers // Jan 2012

    ScalaBenchGen: Auto-Generation of Communication Benchmark Traces

    Benchmarks are essential for evaluating HPC hardware and software for petascale machines and beyond. But benchmark creation is a tedious manual process. As a result, benchmarks tend to lag behind the development of complex scientific codes. This work contributes an automated approach to the creation of communication benchmarks. Given an...

    Provided By North Carolina State University

  • White Papers // Jun 2012

    CuNesl: Compiling Nested Data-Parallel Languages for SIMT Architectures

    Data-parallel languages feature fine-grained parallel primitives that can be supported by compilers targeting modern many-core architectures where data parallelism must be exploited to fully utilize the hardware. Previous research has focused on converting data-parallel languages for SIMD (Single Instruction Multiple Data) architectures. However, directly applying them to today's SIMT (Single...

    Provided By North Carolina State University

  • White Papers // Jan 2014

    Soft Error Protection via Fault-Resilient Data Representations

    Embedded systems are increasingly deployed in harsh environments that their components were not necessarily designed for. As a result, systems may have to sustain transient faults, i.e., both single-bit soft errors caused by radiation from space and transient errors caused by lower signal/noise ratio in smaller fabrication sizes. Hardware can...

    Provided By North Carolina State University

  • White Papers // Jun 2011

    A Fault Observant Real-Time Embedded Design for Network-on-Chip Control Systems

    Performance and time to market requirements cause many real-time designers to consider Commercial Off-The-Shelf (COTS) components for real-time systems. Massive multi-core embedded processors with Network-on-Chip (NoC) designs to facilitate core-to-core communication are becoming common in COTS. These architectures benefit real-time scheduling, but they also pose predictability challenges. In this...

    Provided By North Carolina State University

  • White Papers // Dec 2013

    Exploiting Data Representation for Fault Tolerance

    The authors explore the link between data representation and soft errors in dot products. They present an analytic model for the absolute error introduced should a soft error corrupt a bit in an IEEE-754 floating-point number. They show how this finding relates to the fundamental linear algebra concepts of normalization...
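
    The question the analytic model answers can be probed directly: flip one bit of an IEEE-754 double and measure the absolute error. A sketch, with bit positions chosen purely for illustration:

```python
import struct

# Flip bit k (0 = least significant) of a 64-bit IEEE-754 double and
# return the perturbed value, simulating a single-bit soft error.
def flip_bit(x, k):
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << k)))
    return y

x = 3.0
low = abs(flip_bit(x, 0) - x)    # low mantissa bit: error of one ulp
high = abs(flip_bit(x, 61) - x)  # high exponent bit: astronomically large error
assert low < 1e-15 and high > 1e100
```

The absolute error thus depends entirely on which field of the representation the flipped bit lands in, which is the link between data representation and soft-error impact the abstract refers to.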

    Provided By North Carolina State University

  • White Papers // Feb 2010

    Stealthy Malware Detection and Monitoring through VMM-Based \"Out-of-the-Box\" Semantic View Reconstruction

    An alarming trend in recent malware incidents is that they are armed with stealthy techniques to detect, evade, and subvert malware detection facilities of the victim. On the defensive side, a fundamental limitation of traditional host-based antimalware systems is that they run inside the very hosts they are protecting (\"In-the-box\"),...

    Provided By North Carolina State University

  • White Papers // Feb 2013

    Adaptive Cache Bypassing for Inclusive Last Level Caches

    Cache hierarchy designs, including bypassing, replacement, and the inclusion property, have significant performance impact. Recent work on high performance caches has shown that cache bypassing is an effective technique to enhance Last Level Cache (LLC) performance. However, the commonly used inclusive cache hierarchy cannot benefit from this technique because bypassing...

    Provided By North Carolina State University

  • White Papers // Feb 2012

    Locality Principle Revisited: A Probability-Based Quantitative Approach

    This paper revisits the fundamental concept of the locality of references and proposes to quantify it as a conditional probability: in an address stream, given the condition that an address is accessed, how likely the same address (temporal locality) or an address within its neighborhood (spatial locality) will be accessed...
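
    The proposed conditional probability for temporal locality can be computed directly from an address stream; the window size and traces below are hypothetical:

```python
# Temporal locality as a conditional probability: given that an address is
# accessed, how likely is the same address accessed again within the next
# `window` references? (Spatial locality would test a neighborhood instead.)
def temporal_locality(trace, window=4):
    hits = 0
    for i, addr in enumerate(trace[:-1]):
        if addr in trace[i + 1:i + 1 + window]:
            hits += 1
    return hits / (len(trace) - 1)

loopy  = [0, 1, 2, 0, 1, 2, 0, 1, 2]  # tight loop: frequent reuse
stream = [0, 1, 2, 3, 4, 5, 6, 7, 8]  # streaming: no reuse at all
print(temporal_locality(loopy))   # -> 0.75 (end-of-trace accesses never recur)
print(temporal_locality(stream))  # -> 0.0
```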

    Provided By North Carolina State University

  • White Papers // Jun 2012

    Fixing Performance Bugs: An Empirical Study of Open-Source GPGPU Programs

    Given the extraordinary computational power of modern Graphics Processing Units (GPUs), general purpose computation on GPUs (GPGPU) has become an increasingly important platform for high performance computing. To better understand how well the GPU resource has been utilized by application developers, and then to help them develop high performance...

    Provided By North Carolina State University

  • White Papers // Feb 2012

    CPU-Assisted GPGPU on Fused CPU-GPU Architectures

    This paper presents a novel approach to utilize the CPU resource to facilitate the execution of GPGPU programs on fused CPU-GPU architectures. In the authors' model of fused architectures, the GPU and the CPU are integrated on the same die and share the on-chip L3 cache and off-chip memory, similar...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    Time-Ordered Event Traces: A New Debugging Primitive for Concurrency Bugs

    Non-determinism makes concurrency bugs extremely difficult to reproduce and to debug. In this paper, the authors propose a new debugging primitive to facilitate the debugging process by exposing this non-deterministic behavior to the programmer. The key idea is to generate a time-ordered trace of events such as function calls/returns and...
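
    The primitive can be sketched as a lock-protected global log of timestamped call/return events; the authors' system works at a much lower level, so this is only illustrative:

```python
import itertools
import threading
import time

# Worker threads append timestamped call/return events to one global,
# lock-protected log, exposing the actual thread interleaving.
log = []
lock = threading.Lock()
seq = itertools.count()  # global sequence number, robust to timestamp ties

def trace(event):
    with lock:
        log.append((next(seq), time.monotonic_ns(),
                    threading.current_thread().name, event))

def worker(n):
    trace(f"call f{n}")
    trace(f"return f{n}")

threads = [threading.Thread(target=worker, args=(i,), name=f"T{i}")
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

for s, ts, tid, event in log:
    print(s, tid, event)  # events in the order they actually happened
```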

    Provided By North Carolina State University

  • White Papers // Jan 2014

    Warp-Level Divergence in GPUs: Characterization, Impact, and Mitigation

    High throughput architectures rely on high Thread-Level Parallelism (TLP) to hide execution latencies. In state-of-art Graphics Processing Units (GPUs), threads are organized in a grid of Thread Blocks (TBs) and each TB contains tens to hundreds of threads. With a TB-level resource management scheme, all the resource required by a...

    Provided By North Carolina State University

  • White Papers // Feb 2014

    Understanding the Tradeoffs Between Software-Managed Vs. Hardware-Managed Caches in GPUs

    On-chip caches are commonly used in computer systems to hide long off-chip memory access latencies. To manage on-chip caches, either software-managed or hardware-managed schemes can be employed. State-of-art accelerators, such as the NVIDIA Fermi or Kepler GPUs and Intel's forthcoming MIC "Knights Landing" (KNL), support both software-managed caches, a.k.a. shared...

    Provided By North Carolina State University

  • White Papers // Aug 2010

    Abstracting and Applying Business Modeling Patterns from RosettaNet

    RosettaNet is a leading industry effort that creates standards for business interactions among the participants in a supply chain. The RosettaNet standard defines over 100 Partner Interface Processes (PIPs) through which the participants can exchange business documents necessary to enact a supply chain. However, each PIP specifies the business interactions...

    Provided By North Carolina State University

  • White Papers // Feb 2014

    Automatic Identification of Application I/O Signatures from Noisy Server-Side Traces

    Competing workloads on a shared storage system cause I/O resource contention and application performance vagaries. This problem is already evident in today's HPC storage systems and is likely to become acute at exascale. The authors need more interaction between application I/O requirements and system software tools to help alleviate the...

    Provided By North Carolina State University
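The abstract's goal is to pull an application's I/O signature out of a noisy server-side trace. A hedged toy sketch of one way that separation could work (a simple median-based threshold, not the authors' method): intervals whose I/O volume stands well above the background noise floor are kept as the signature.

```python
# Hypothetical sketch: recover a periodic application I/O signature from a
# noisy per-interval throughput trace by keeping only intervals whose volume
# exceeds a noise threshold derived from the trace median.
def extract_signature(trace, factor=3):
    s = sorted(trace)
    median = s[len(s) // 2]          # robust estimate of background noise
    threshold = factor * median
    return [i for i, v in enumerate(trace) if v > threshold]

noisy = [2, 1, 90, 2, 3, 88, 1, 2, 95, 3]   # bursts every 3rd interval
bursts = extract_signature(noisy)
```

The recovered burst indices are evenly spaced, exposing the application's periodicity despite the low-level noise between bursts.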

  • White Papers // Jan 2014

    IPSec/VPN Security Policy: Correctness, Conflict Detection and Resolution

    IPSec (Internet Protocol Security) functions will be executed correctly only if the policies are correctly specified and configured. Manual IPSec policy configuration is inefficient and error-prone. An erroneous policy could lead to communication blockade or serious security breach. In addition, even if policies are specified correctly in each domain,...

    Provided By North Carolina State University
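A minimal sketch of the conflict-detection idea in the abstract, using a deliberately simplified selector model (port ranges only; real IPSec selectors also cover addresses and protocols): two rules conflict when their traffic selectors overlap but their required actions differ.

```python
# Hypothetical sketch: flag policy conflicts where two rules match
# overlapping port ranges but prescribe different actions, e.g. one says
# "encrypt" while another says "bypass" for the same traffic.
def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def find_conflicts(rules):
    conflicts = []
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            (sel1, act1), (sel2, act2) = rules[i], rules[j]
            if overlaps(sel1, sel2) and act1 != act2:
                conflicts.append((i, j))
    return conflicts

policy = [
    ((0, 1023), "encrypt"),    # all well-known ports
    ((80, 80), "bypass"),      # HTTP exempted -- conflicts with rule 0
    ((2000, 3000), "encrypt"),
]
```

Rule 1 carves HTTP out of a range rule 0 already claims with a different action, exactly the kind of specification error manual configuration invites.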

  • White Papers // Jul 2009

    An Empirical Study of Security Problem Reports in Linux Distributions

    Existing studies on problem reports in open source projects focus primarily on the analysis of the general category of problem reports, or limit their attention to observations on the number of security problems reported in...

    Provided By North Carolina State University

  • White Papers // Jun 2006

    A Framework for Identifying Compromised Nodes in Sensor Networks

    Sensor networks are often subject to physical attacks. Once a node's cryptographic key is compromised, an attacker may completely impersonate it, and introduce arbitrary false information into the network. Basic cryptographic security mechanisms are often not effective in this situation. Most techniques to address this problem focus on detecting and...

    Provided By North Carolina State University
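The abstract notes that cryptography alone cannot catch a compromised node that injects plausible-looking false data. A hedged toy sketch of one detection heuristic in that spirit (a neighborhood-median outlier test, not the framework itself): a node whose reported reading deviates far from its peers is flagged for further scrutiny.

```python
# Hypothetical sketch: flag sensor nodes whose readings deviate sharply
# from the median of the group -- a crude application-level check that a
# valid cryptographic key cannot defeat, since it judges the data itself.
def suspicious_nodes(readings, max_dev=10.0):
    s = sorted(readings.values())
    median = s[len(s) // 2]
    return sorted(n for n, v in readings.items() if abs(v - median) > max_dev)

readings = {"n1": 21.0, "n2": 22.5, "n3": 21.8, "n4": 95.0, "n5": 20.9}
flagged = suspicious_nodes(readings)
```

Node n4's reading of 95.0 against a ~21-degree neighborhood is flagged even though the node could have signed it with a perfectly valid compromised key.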

  • White Papers // Jun 2011

    EMFS: Email-Based Personal Cloud Storage

    Though a variety of cloud storage services have been offered recently, they have not yet provided users with transparent and cost-effective personal data storage. Services like Google Docs offer easy file access and sharing, but tie storage with internal data formats and specific applications. Meanwhile, services like Dropbox offer general-purpose...

    Provided By North Carolina State University
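An email-backed store like the one this abstract describes has to fit file data into attachment-sized messages. A minimal sketch of that storage layer, with an artificially tiny chunk size for illustration (real providers allow attachments in the tens of megabytes):

```python
# Hypothetical sketch of EMFS-style chunking: split file data into pieces
# small enough to ship as individual attachments, then reassemble them in
# order on read. CHUNK is an assumed limit, shrunk for demonstration.
CHUNK = 16  # bytes; real attachment limits are orders of magnitude larger

def split(data):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def join(chunks):
    return b"".join(chunks)

payload = b"the quick brown fox jumps over the lazy dog"
parts = split(payload)
```

Each part maps to one message in the mail store; reading the file back is just fetching the parts and concatenating them in sequence.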

  • White Papers // Dec 2012

    Scheduling Cloud Capacity for Time-Varying Customer Demand

    As utility computing resources become more ubiquitous, service providers increasingly look to the cloud for an in-full or in-part infrastructure to serve utility computing customers on demand. Given the costs associated with cloud infrastructure, dynamic scheduling of cloud resources can significantly lower costs while providing an acceptable service level. The...

    Provided By North Carolina State University
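The scheduling problem in the abstract reduces, in its simplest form, to sizing capacity per time slot against forecast demand. A hedged sketch under assumed numbers (per-server capacity, headroom factor, and price are all illustrative, not from the paper):

```python
import math

# Hypothetical sketch: provision whole servers per time slot so demand is
# met with safety headroom; cost follows from the server count and an
# assumed hourly price.
def schedule(demand, per_server_capacity, headroom=1.2):
    return [math.ceil(d * headroom / per_server_capacity) for d in demand]

hourly_demand = [100, 250, 400, 150]   # requests/sec in each slot
servers = schedule(hourly_demand, per_server_capacity=60)
cost = sum(servers) * 0.10             # assumed $0.10 per server-hour
```

Scaling the fleet slot by slot (2, 5, 8, then 3 servers here) rather than provisioning for the 400-req/sec peak throughout is where the cost saving the abstract mentions comes from.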

  • White Papers // Sep 2012

    Is Link Signature Dependable for Wireless Security?

    Link signature, which refers to the unique and reciprocal wireless channel between a pair of transceivers, has gained significant attention recently due to its effectiveness in signal authentication and shared secret construction for various wireless applications. A fundamental assumption of this technique is that the wireless signals received at two...

    Provided By North Carolina State University
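The reciprocity assumption the abstract questions can be illustrated with a toy authentication check (illustrative channel vectors and threshold, not real measurements): the two ends of a link should observe highly correlated channels, while a transmitter elsewhere sees a different one.

```python
# Hypothetical sketch of link-signature authentication: accept a signal only
# if its measured channel correlates strongly with the stored signature.
def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def authentic(sig, measured, threshold=0.9):
    return correlation(sig, measured) > threshold

alice = [0.9, 0.2, 0.5, 0.7, 0.1]       # channel measured at one end
bob = [0.88, 0.21, 0.52, 0.69, 0.12]    # reciprocal channel + small noise
eve = [0.3, 0.8, 0.1, 0.4, 0.9]         # different location, different channel
```

The paper's contribution is probing when this clean separation breaks down, i.e., when an attacker's channel is not as uncorrelated as the assumption requires.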

  • White Papers // Jul 2008

    Performance Assessment and Compensation for Secure Networked Control Systems

    Networked Control Systems (NCS) have been gaining popularity due to their high potential for widespread application, and are becoming realizable thanks to rapid advancements in embedded systems and wireless communication technologies. This paper addresses NCS information security as well as its time-sensitive performance and the trade-off between them. A PI controller implemented on...

    Provided By North Carolina State University
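The abstract's testbed centers on a PI controller, whose time-sensitive performance is what security processing can degrade. A hedged sketch of the discrete PI update (gains and sampling period are assumed values, not the paper's): network delay or cryptographic overhead would effectively stretch the sampling period Ts.

```python
# Hypothetical sketch of a discrete PI control law:
#   u[k] = Kp * e[k] + Ki * integral(e)
# Security processing adds latency, which in an NCS shows up as a longer
# effective Ts and therefore degraded control performance.
def pi_step(error, state, kp=2.0, ki=0.5, ts=0.1):
    state += error * ts                 # accumulate the error integral
    return kp * error + ki * state, state

setpoint, y, state = 1.0, 0.0, 0.0      # constant error: no plant modeled
outputs = []
for _ in range(3):
    u, state = pi_step(setpoint - y, state)
    outputs.append(round(u, 3))
```

With a constant error the proportional term stays fixed while the integral term grows each step, so the control output ramps upward.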

  • White Papers // Apr 2007

    Conformance Checking of Access Control Policies Specified in XACML

    Access control is one of the most fundamental and widely used security mechanisms. Access control mechanisms control which principals such as users or processes have access to which resources in a system. To facilitate managing and maintaining access control, access control policies are increasingly written in specification languages such as...

    Provided By North Carolina State University
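A minimal sketch of the evaluation semantics a conformance checker exercises, modeled loosely on XACML's first-applicable rule-combining algorithm (the predicates and effects below are illustrative, not XACML syntax): the first rule whose target matches the request decides it.

```python
# Hypothetical sketch of first-applicable policy evaluation: walk the rules
# in order; the first rule whose target matches the request yields its
# effect, and a default applies if nothing matches.
def evaluate(rules, request, default="Deny"):
    for target, effect in rules:
        if target(request):
            return effect
    return default

rules = [
    (lambda r: r["role"] == "admin", "Permit"),
    (lambda r: r["resource"] == "public", "Permit"),
    (lambda r: True, "Deny"),          # explicit catch-all
]
```

Conformance checking then amounts to confirming that, for a suite of request/decision pairs, the policy's evaluated decisions match the specification's expected ones.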

  • White Papers // Nov 2007

    Worst-Case Execution Time Analysis of Security Policies for Deeply Embedded Real-Time Systems

    Deeply embedded systems often have unique constraints because of their small size and vital roles in critical infrastructure. Problems include limitations on code size, limited access to the actual hardware, etc. These problems become more critical in real-time systems where security policies must not only work within the above limitations...

    Provided By North Carolina State University

  • White Papers // Aug 2008

    Security-Aware Resource Optimization in Distributed Service Computing

    In this paper, the authors consider a set of computer resources used by a service provider to host enterprise applications for customer services subject to a Service Level Agreement (SLA). The SLA defines three QoS metrics, namely, trustworthiness, percentile response time and availability. They first give an overview of current...

    Provided By North Carolina State University
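One of the three SLA metrics the abstract names is percentile response time. A hedged sketch of how that metric would be checked against measurements (the samples, percentile, and bound here are invented for illustration):

```python
# Hypothetical sketch: the SLA holds if the p-th percentile of measured
# response times stays below the agreed bound. Uses a simple nearest-rank
# percentile on sorted samples.
def percentile(samples, p):
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

times = [120, 90, 200, 110, 95, 130, 105, 500, 115, 100]   # ms
p95 = percentile(times, 95)
meets_sla = p95 <= 300   # assumed 300 ms bound at the 95th percentile
```

A single 500 ms outlier is enough to blow the 95th-percentile bound here, which is why percentile metrics are far stricter than averages in an SLA.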

  • White Papers // Nov 2013

    PAQO: Preference-Aware Query Optimization for Decentralized Database Systems

    The declarative nature of SQL has traditionally been a major strength. Users simply state what information they are interested in, and the database management system determines the best plan for retrieving it. A consequence of this model is that should a user ever want to specify some aspect of how...

    Provided By North Carolina State University
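The abstract's point is letting users constrain how a query runs, not just what it returns. A hedged toy sketch of the preference idea (plan structure, sites, and costs are all invented): plans that would evaluate a sensitive column at a disallowed site are dropped, then the cheapest survivor wins.

```python
# Hypothetical sketch of preference-aware optimization: filter candidate
# plans by a user placement constraint ("never ship `salary` to siteB"),
# then pick the cheapest remaining plan as the optimizer normally would.
plans = [
    {"cost": 10, "placement": {"salary": "siteB"}},   # cheapest, but violates
    {"cost": 14, "placement": {"salary": "siteA"}},
    {"cost": 12, "placement": {"salary": "siteA"}},
]

def best_plan(plans, column, forbidden_site):
    allowed = [p for p in plans if p["placement"].get(column) != forbidden_site]
    return min(allowed, key=lambda p: p["cost"])

chosen = best_plan(plans, "salary", "siteB")
```

The optimizer accepts a slightly costlier plan (12 vs. 10) in exchange for honoring the user's placement preference, which is the tradeoff PAQO makes explicit.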

  • White Papers // Apr 2007

    Information Security with Real-Time Operation: Performance Assessment for Next Generation Wireless Distributed Networked-Control-Systems

    Distributed Networked Control Systems (D-NCS) are a multidisciplinary effort aimed at producing a network structure and components capable of integrating sensors, actuators, communication, and control algorithms in a manner suited to real-time applications. They have been gaining popularity due to their high potential for widespread application, and are becoming...

    Provided By North Carolina State University

  • White Papers // Aug 2009

    Towards a Unifying Approach in Understanding Security Problems

    To evaluate security in the context of software reliability engineering, it is necessary to analyze security problems, actual exploits, and their relationship with an understanding of the operational behavior of the system. That can be done in terms of the effort involved in security exploits, through classic reliability factors such...

    Provided By North Carolina State University

  • White Papers // Jan 2007

    Using Deception to Hide Things from Hackers: Processes, Principles, and Techniques

    Deception offers one means of hiding things from an adversary. This paper introduces a model for understanding, comparing and developing methods of deceptive hiding. The model characterizes deceptive hiding in terms of how it defeats the underlying processes that an adversary uses to discover the hidden thing. An adversary's process...

    Provided By North Carolina State University

  • White Papers // Mar 2013

    Reasonableness Meets Requirements: Regulating Security and Privacy in Software

    Software security and privacy issues regularly grab headlines amid fears of identity theft, data breaches, and threats to security. Policymakers have responded with a variety of approaches to combat such risk. Suggested measures include promulgation of strict rules, enactment of open-ended standards, and, at times, abstention in favor of allowing...

    Provided By North Carolina State University

  • White Papers // Feb 2007

    The ChoicePoint Dilemma: How Data Brokers Should Handle the Privacy of Personal Information

    In 2005, there was a significant increase in the number of security and privacy breaches disclosed to the public. Leading the charge was ChoicePoint, a data broker that suffered fraudulent access to its vast databases of personal information. ChoicePoint and other data brokers exist in a largely unregulated environment, in...

    Provided By North Carolina State University

  • White Papers // Mar 2012

    Low Contention Mapping of Real-Time Tasks onto a TilePro 64 Core Processor

    Predictability of task execution is paramount for real-time systems so that upper bounds of execution times can be determined via static timing analysis. Static timing analysis on Network-on-Chip (NoC) processors may result in unsafe underestimations when the underlying communication paths are not considered. This stems from contention on the underlying...

    Provided By North Carolina State University
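The contention the abstract warns about arises when two flows' routes share a link on the NoC mesh. A minimal sketch under the common dimension-ordered (XY) routing discipline, which the TilePro-class mesh interconnects typically use (the flow endpoints below are illustrative):

```python
# Hypothetical sketch: with XY routing on a mesh NoC, a flow's route is
# deterministic (move in X first, then Y). Two flows contend on every
# directed link their routes share; counting shared links estimates the
# interference a safe timing analysis must bound.
def xy_route(src, dst):
    (x, y), (dx, dy) = src, dst
    links = []
    while x != dx:                      # traverse the X dimension first
        nx = x + (1 if dx > x else -1)
        links.append(((x, y), (nx, y)))
        x = nx
    while y != dy:                      # then traverse the Y dimension
        ny = y + (1 if dy > y else -1)
        links.append(((x, y), (x, ny)))
        y = ny
    return links

def contention(flow_a, flow_b):
    return len(set(xy_route(*flow_a)) & set(xy_route(*flow_b)))
```

A low-contention task mapping is then one that places communicating tasks so that pairwise contention counts like these stay at or near zero.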