North Carolina State University

  • White Papers // Sep 2014

    A Semantics-Oriented Storage Model for Big Heterogeneous RDF Data

    Increasing availability of RDF data covering different domains is enabling ad-hoc integration of different kinds of data to suit varying needs. This usually results in large collections of data such as the Billion Triple Challenge datasets or SNOMED CT that are not just "Big" in the sense of volume but...

    Provided By North Carolina State University
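
    To make the storage idea concrete, here is a minimal sketch, assuming a vertical-partitioning layout (grouping triples by predicate); the paper's actual semantics-oriented model is not described in this excerpt:

        # Illustrative only: bucket RDF (subject, predicate, object) triples
        # into per-predicate tables, a common baseline layout for big RDF data.
        from collections import defaultdict

        triples = [
            ("ex:alice", "ex:knows", "ex:bob"),
            ("ex:alice", "ex:age", "34"),
            ("ex:bob", "ex:knows", "ex:carol"),
        ]

        tables = defaultdict(list)      # predicate -> [(subject, object)]
        for s, p, o in triples:
            tables[p].append((s, o))

        # A query touching one predicate now scans only that table.
        print(tables["ex:knows"])       # [('ex:alice', 'ex:bob'), ('ex:bob', 'ex:carol')]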

  • White Papers // Jun 2014

    ScalaJack: Customized Scalable Tracing with in-situ Data Analysis

    Root cause diagnosis of large-scale HPC applications often fails because tools, specifically trace-based ones, can no longer record all metrics they measure. The authors address this problem by combining customized tracing and providing support for in-situ data analysis via ScalaJack, a framework with customizable instrumentation and pluggable extension capabilities for...

    Provided By North Carolina State University
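
    The combination of customized tracing with pluggable in-situ analysis can be pictured with a small sketch (a hypothetical illustration, not the ScalaJack API): analyzers consume events as they occur, so no full trace file has to be recorded:

        import functools
        import statistics
        import time

        class MeanDuration:
            """Pluggable in-situ analyzer: aggregates instead of recording."""
            def __init__(self):
                self.samples = []
            def on_event(self, name, dt):
                self.samples.append(dt)
            def report(self):
                return statistics.mean(self.samples)

        analyzers = [MeanDuration()]

        def traced(fn):                  # customizable instrumentation point
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                t0 = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    dt = time.perf_counter() - t0
                    for analyzer in analyzers:   # in-situ, no trace file
                        analyzer.on_event(fn.__name__, dt)
            return wrapper

        @traced
        def work(n):
            sum(range(n))

        for _ in range(100):
            work(10_000)
        print(analyzers[0].report())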

  • White Papers // Jun 2014

    InVis: An EDM Tool for Graphical Rendering and Analysis of Student Interaction Data

    InVis is a novel visualization tool that was developed to explore, navigate and catalog student interaction data. InVis processes datasets collected from interactive educational systems such as intelligent tutoring systems and homework helpers and visualizes the student data as graphs. This visual representation of data provides an interactive environment with...

    Provided By North Carolina State University

  • White Papers // Mar 2014

    NoCMsg: Scalable NoC-Based Message Passing

    Current processor designs with ever more cores may ensure that theoretical compute performance still follows past increases (resulting from Moore's law), but they also increasingly present a challenge to hardware and software alike. As the core count increases, the Network-on-Chip (NoC) topology has changed from buses over rings and fully...

    Provided By North Carolina State University
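
    As a point of reference for NoC message passing, the sketch below computes the path a message takes under dimension-ordered (X-then-Y) routing on a 2D mesh; this is a textbook routing scheme, not necessarily the one NoCMsg uses:

        def xy_route(src, dst, width):
            """Yield the core IDs a message traverses on a width x width mesh."""
            x, y = src % width, src // width
            dx, dy = dst % width, dst // width
            while x != dx:              # route along the X dimension first
                x += 1 if dx > x else -1
                yield y * width + x
            while y != dy:              # then along the Y dimension
                y += 1 if dy > y else -1
                yield y * width + x

        # Message from core 0 to core 63 on an 8x8 (64-core) mesh:
        print(list(xy_route(0, 63, 8)))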

  • White Papers // Feb 2014

    Tools for Simulation and Benchmark Generation at Exascale

    The path to exascale High-Performance Computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architecture choices is an important component of HPC hardware/software co-design. Simulations...

    Provided By North Carolina State University

  • White Papers // Feb 2014

    Automatic Identification of Application I/O Signatures from Noisy Server-Side Traces

    Competing workloads on a shared storage system cause I/O resource contention and application performance vagaries. This problem is already evident in today's HPC storage systems and is likely to become acute at exascale. More interaction between application I/O requirements and system software tools is needed to help alleviate the...

    Provided By North Carolina State University

  • White Papers // Feb 2014

    Understanding the Tradeoffs Between Software-Managed vs. Hardware-Managed Caches in GPUs

    On-chip caches are commonly used in computer systems to hide long off-chip memory access latencies. To manage on-chip caches, either software-managed or hardware-managed schemes can be employed. State-of-the-art accelerators, such as the NVIDIA Fermi or Kepler GPUs and Intel's forthcoming MIC "Knights Landing" (KNL), support both software-managed caches, a.k.a. shared...

    Provided By North Carolina State University

  • White Papers // Jan 2014

    Warp-Level Divergence in GPUs: Characterization, Impact, and Mitigation

    High-throughput architectures rely on high Thread-Level Parallelism (TLP) to hide execution latencies. In state-of-the-art Graphics Processing Units (GPUs), threads are organized in a grid of Thread Blocks (TBs) and each TB contains tens to hundreds of threads. With a TB-level resource management scheme, all the resources required by a...

    Provided By North Carolina State University
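
    The cost of TB-level resource management can be seen with some simple arithmetic (the per-SM budgets below are assumed for illustration): a whole thread block's worth of registers and shared memory must be free before any of its warps can be dispatched:

        regs_per_sm, smem_per_sm = 65536, 49152    # assumed per-SM budgets
        threads_per_tb, regs_per_thread = 256, 40
        smem_per_tb = 12288                        # bytes per thread block

        tbs_by_regs = regs_per_sm // (threads_per_tb * regs_per_thread)  # 6
        tbs_by_smem = smem_per_sm // smem_per_tb                         # 4
        resident_tbs = min(tbs_by_regs, tbs_by_smem)                     # 4
        print(resident_tbs * threads_per_tb, "resident threads per SM")  # 1024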

  • White Papers // Jan 2014

    Fair Caching in a Chip Multiprocessor Architecture

    In this paper, the authors present a detailed study of fairness in cache sharing between threads in a Chip Multi-Processor (CMP) architecture. Prior work in CMP architectures has only studied throughput optimization techniques for a shared cache. The issue of fairness in cache sharing, and its relation to throughput, has...

    Provided By North Carolina State University

  • White Papers // Jan 2014

    Soft Error Protection via Fault-Resilient Data Representations

    Embedded systems are increasingly deployed in harsh environments that their components were not necessarily designed for. As a result, systems may have to sustain transient faults, i.e., both single-bit soft errors caused by radiation from space and transient errors caused by lower signal/noise ratio in smaller fabrication sizes. Hardware can...

    Provided By North Carolina State University

  • White Papers // Jan 2014

    Communication Characteristics of Large-Scale Scientific Applications for Contemporary Cluster Architectures

    In this paper, the authors examine the explicit communication characteristics of several sophisticated scientific applications, which, by themselves, constitute a representative suite of publicly available benchmarks for large cluster architectures. By focusing on the Message Passing Interface (MPI) and by using hardware counters on the microprocessor, they observe each application's...

    Provided By North Carolina State University

  • White Papers // Jan 2014

    IPSec/VPN Security Policy: Correctness, Conflict Detection and Resolution

    IPsec (Internet Protocol Security) functions will be executed correctly only if its policies are correctly specified and configured. Manual IPsec policy configuration is inefficient and error-prone. An erroneous policy could lead to a communication blockade or a serious security breach. In addition, even if policies are specified correctly in each domain,...

    Provided By North Carolina State University
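
    One flavor of the conflict-detection problem can be sketched in a few lines (a hypothetical simplification: real IPsec policies match on full 5-tuples and selectors, not just address ranges):

        # Flag a conflict when two rules cover overlapping address ranges but
        # prescribe different actions.
        def overlaps(r1, r2):
            return r1["lo"] <= r2["hi"] and r2["lo"] <= r1["hi"]

        rules = [
            {"lo": 0x0A000000, "hi": 0x0A00FFFF, "action": "encrypt"},  # 10.0.0.0/16
            {"lo": 0x0A000100, "hi": 0x0A0001FF, "action": "bypass"},   # 10.0.1.0/24
        ]
        for i, a in enumerate(rules):
            for b in rules[i + 1:]:
                if overlaps(a, b) and a["action"] != b["action"]:
                    print("conflict:", a, "vs", b)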

  • White Papers // Dec 2013

    Performance Assessment of A Multi-block Incompressible Navier-Stokes Solver using Directive-based GPU Programming in a Cluster Environment

    OpenACC, a directive-based GPU programming standard, is emerging as a promising technology for massively-parallel accelerators, such as General-Purpose computing on Graphics Processing Units (GPGPU), Accelerated Processing Unit (APU) and Many Integrated Core architecture (MIC). The heterogeneous nature of these accelerators calls for careful designs of parallel algorithms and data management,...

    Provided By North Carolina State University

  • White Papers // Dec 2013

    Exploiting Data Representation for Fault Tolerance

    The authors explore the link between data representation and soft errors in dot products. They present an analytic model for the absolute error introduced should a soft error corrupt a bit in an IEEE-754 floating-point number. They show how this finding relates to the fundamental linear algebra concepts of normalization...

    Provided By North Carolina State University
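
    The key observation is easy to reproduce: the absolute error caused by a single bit flip in an IEEE-754 double depends drastically on which bit is hit. A minimal sketch (not the authors' analytic model):

        import struct

        def flip_bit(x: float, k: int) -> float:
            """Return x with bit k (0 = LSB) of its 64-bit pattern flipped."""
            (bits,) = struct.unpack("<Q", struct.pack("<d", x))
            (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << k)))
            return y

        x = 1.0
        for k in (0, 52, 62):   # low mantissa bit, exponent LSB, high exponent bit
            print(f"bit {k:2d}: |error| = {abs(flip_bit(x, k) - x)}")
        # bit  0: ~2.2e-16, bit 52: 0.5, bit 62: inf (exponent becomes 0x7FF)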

  • White Papers // Nov 2013

    PAQO: Preference-Aware Query Optimization for Decentralized Database Systems

    The declarative nature of SQL has traditionally been a major strength. Users simply state what information they are interested in, and the database management system determines the best plan for retrieving it. A consequence of this model is that should a user ever want to specify some aspect of how...

    Provided By North Carolina State University

  • White Papers // Sep 2013

    WHYPER: Towards Automating Risk Assessment of Mobile Applications

    In this paper, the authors present the first step in addressing this challenge. Specifically, they focus on permissions for a given application and examine whether the application description provides any indication of why the application needs a permission. They present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify...

    Provided By North Carolina State University

  • White Papers // Jul 2013

    A Unified View of Non-monotonic Core Selection and Application Steering in Heterogeneous Chip Multiprocessors

    A single-ISA Heterogeneous Chip Multi-Processor (HCMP) is an attractive substrate to improve single-thread performance and energy efficiency in the dark silicon era. The authors consider HCMPs composed of non-monotonic core types where each core type is performance-optimized for different instruction-level behavior and hence cannot be ranked - different program...

    Provided By North Carolina State University

  • White Papers // Jun 2013

    MetaSymploit: Day-One Defense Against Script-Based Attacks with Security-Enhanced Symbolic Analysis

    In this paper, the authors propose MetaSymploit, the first system of fast attack script analysis and automatic signature generation for a network Intrusion Detection System (IDS). As soon as a new attack script is developed and distributed, MetaSymploit uses security-enhanced symbolic execution to quickly analyze the script and automatically generate...

    Provided By North Carolina State University

  • White Papers // Mar 2013

    Reasonableness Meets Requirements: Regulating Security and Privacy in Software

    Software security and privacy issues regularly grab headlines amid fears of identity theft, data breaches, and threats to security. Policymakers have responded with a variety of approaches to combat such risk. Suggested measures include promulgation of strict rules, enactment of open-ended standards, and, at times, abstention in favor of allowing...

    Provided By North Carolina State University

  • White Papers // Feb 2013

    Taming Hosted Hypervisors With (Mostly) Deprivileged Execution

    Recent years have witnessed increased adoption of hosted hypervisors in virtualized computer systems. By non-intrusively extending commodity OSs, hosted hypervisors can effectively take advantage of a variety of mature and stable features as well as the existing broad user base of commodity OSs. However, virtualizing a computer system is still...

    Provided By North Carolina State University

  • White Papers // Feb 2013

    Adaptive Cache Bypassing for Inclusive Last Level Caches

    Cache hierarchy designs, including bypassing, replacement, and the inclusion property, have significant performance impact. Recent works on high performance caches have shown that cache bypassing is an effective technique to enhance the Last Level Cache (LLC) performance. However, commonly used inclusive cache hierarchies cannot benefit from this technique because bypassing...

    Provided By North Carolina State University

  • White Papers // Feb 2013

    Directory-Oblivious Capacity Sharing in Tiled CMPs

    In bus-based CMPs with private caches, Capacity Sharing is applied by spilling victim cache blocks from over-utilized caches to under-utilized ones. If a spilled block is needed, it can be retrieved by posting a miss on the bus. Prior work in this domain focused on Capacity Sharing design and put...

    Provided By North Carolina State University

  • White Papers // Jan 2013

    Flexible Capacity Partitioning in Many-Core Tiled CMPs

    Chip Multi-Processors (CMP) have become a mainstream computing platform. As transistor dimensions shrink and the number of cores increases, more scalable CMP architectures will emerge. Recently, tiled architectures have shown such scalable characteristics and been used in many industry chips. The memory hierarchy in tiled architectures presents interesting design challenges....

    Provided By North Carolina State University

  • White Papers // Jan 2013

    QuickSense: Fast and Energy-Efficient Channel Sensing for Dynamic Spectrum Access Networks

    Spectrum sensing, the task of discovering spectrum usage at a given location, is a fundamental problem in dynamic spectrum access networks. While sensing in narrow spectrum bands is well studied in previous work, wideband spectrum sensing is challenging since a wideband radio is generally too expensive and power consuming for...

    Provided By North Carolina State University

  • White Papers // Jan 2013

    Characterizing Link Connectivity for Opportunistic Mobile Networking: Does Mobility Suffice?

    With the recent drastic growth in the number of users carrying smart mobile devices, it is not hard to envision opportunistic ad-hoc communications taking place among such devices carried by humans. This, however, poses a new challenge to conventional link-level metrics, defined solely on the basis of user mobility, such as...

    Provided By North Carolina State University

  • White Papers // Jan 2013

    A Semantic Protocol-Based Approach for Developing Business Processes

    A (business) protocol is a modular, public specification of an interaction among different roles that achieves a desired purpose. The authors model protocols in terms of the commitments of the participating roles. Commitments enable reasoning about actions, thus allowing the participants to comply with protocols while acting flexibly to exploit...

    Provided By North Carolina State University

  • White Papers // Jan 2013

    Modeling Flexible Business Processes

    Current approaches of designing business processes rely on traditional workflow technologies and thus take a logically centralized view of processes. Processes designed in that manner assume the participants will act as invoked, thus limiting their flexibility or autonomy. Flexibility is in conflict with both reusability and compliance. The authors propose...

    Provided By North Carolina State University

  • White Papers // Dec 2012

    Scheduling Cloud Capacity for Time-Varying Customer Demand

    As utility computing resources become more ubiquitous, service providers increasingly look to the cloud for an in-full or in-part infrastructure to serve utility computing customers on demand. Given the costs associated with cloud infrastructure, dynamic scheduling of cloud resources can significantly lower costs while providing an acceptable service level. The...

    Provided By North Carolina State University
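
    The cost intuition behind dynamic scheduling can be shown with a toy capacity plan (all numbers below are made up for illustration):

        import math

        forecast = [120, 95, 80, 150, 240, 310, 280, 180]  # requests/sec, hourly
        capacity_per_instance = 50                          # requests/sec
        headroom, price_per_hour = 1.2, 0.10                # safety margin, $/hr

        plan = [math.ceil(d * headroom / capacity_per_instance) for d in forecast]
        static_plan = max(plan) * len(forecast)             # peak-provisioned
        print("dynamic:", plan, "cost = $%.2f" % (sum(plan) * price_per_hour))
        print("static peak cost = $%.2f" % (static_plan * price_per_hour))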

  • White Papers // Dec 2012

    Auto-Generation and Auto-Tuning of 3D Stencil Codes on Homogeneous and Heterogeneous GPU Clusters

    In this paper, the authors develop and evaluate search and optimization techniques for auto-tuning 3D stencil (nearest-neighbor) computations on GPUs. Observations indicate that parameter tuning is necessary for heterogeneous GPUs to achieve optimal performance with respect to a search space. Their proposed framework takes a concise specification of stencil...

    Provided By North Carolina State University
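
    The auto-tuning loop itself is simple to sketch: time each candidate configuration and keep the best. The CPU/NumPy version below only mimics the search structure; a real GPU tuner would sweep launch parameters such as thread-block dimensions:

        import itertools
        import time
        import numpy as np

        def stencil7(u):
            """7-point nearest-neighbor stencil on the interior of u."""
            return (u[1:-1, 1:-1, 1:-1] * 0.4 +
                    (u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1] +
                     u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1] +
                     u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]) * 0.1)

        u = np.random.rand(64, 64, 64)
        best = None
        for bx, by in itertools.product((8, 16, 32), repeat=2):
            t0 = time.perf_counter()
            for x in range(0, 62, bx):      # sweep the interior in tiles
                for y in range(0, 62, by):
                    stencil7(u[x:x + bx + 2, y:y + by + 2, :])
            cand = (time.perf_counter() - t0, bx, by)
            if best is None or cand < best:
                best = cand
        print("best (seconds, bx, by):", best)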

  • White Papers // Nov 2012

    On the Accurate Identification of Network Service Dependencies in Distributed Systems

    The automated identification of network service dependencies remains a challenging problem in the administration of large distributed systems. Advances in developing solutions for this problem have immediate and tangible benefits to operators in the field. When the dependencies of the services in a network are better-understood, planning for and responding...

    Provided By North Carolina State University

  • White Papers // Oct 2012

    HadISD: A Quality-Controlled Global Synoptic Report Database for Selected Variables at Long-Term Stations From 1973 - 2011

    In this paper, the authors describe the creation of HadISD: an automatically quality-controlled, synoptic-resolution dataset of temperature, dewpoint temperature, sea-level pressure, wind speed, wind direction and cloud cover from global weather stations for 1973 - 2011. The full dataset consists of over 6000 stations, with 3427 long-term stations deemed...

    Provided By North Carolina State University

  • White Papers // Sep 2012

    Collaborative Assessment of Functional Reliability in Wireless Networks

    Nodes that are part of a multi-hop wireless network, typically deployed in mission critical settings, are expected to perform specific functions. Establishing a notion of reliability of the nodes with respect to each function (referred to as Functional Reliability or FR) is essential for efficient operations and management of the...

    Provided By North Carolina State University

  • White Papers // Sep 2012

    Is Link Signature Dependable for Wireless Security?

    Link signature, which refers to the unique and reciprocal wireless channel between a pair of transceivers, has gained significant attention recently due to its effectiveness in signal authentication and shared secret construction for various wireless applications. A fundamental assumption of this technique is that the wireless signals received at two...

    Provided By North Carolina State University

  • White Papers // Sep 2012

    An Efficient Algorithm for Solving Traffic Grooming Problems in Optical Networks

    The authors consider the Virtual Topology and Traffic Routing (VTTR) problem, a sub-problem of traffic grooming that arises as a fundamental network design problem in optical networks. The objective of VTTR is to determine the minimum number of light-paths so as to satisfy a set of traffic demands, and does...

    Provided By North Carolina State University

  • White Papers // Sep 2012

    Scalable Optimal Traffic Grooming in WDM Rings Incorporating Fast RWA Formulation

    The authors present a scalable formulation for the traffic grooming problem in WDM ring networks. Specifically, they modify the ILP formulation to replace the constraints related to Routing and Wavelength Assignment (RWA), typically based on a link approach, with a new set of constraints based on the Maximal Independent Set...

    Provided By North Carolina State University

  • White Papers // Sep 2012

    Reducing Data Movement Costs Using Energy-Efficient, Active Computation on SSD

    Modern scientific discovery often involves running complex application simulations on supercomputers, followed by a sequence of data analysis tasks on smaller clusters. This offline approach suffers from significant data movement costs such as redundant I/O, storage bandwidth bottleneck, and wasted CPU cycles, all of which contribute to increased energy consumption...

    Provided By North Carolina State University

  • White Papers // Aug 2012

    A Physical Design Study of FabScalar-generated Superscalar Cores

    FabScalar is a recently published toolset for automatically composing synthesizable Register-Transfer-Level (RTL) designs of diverse superscalar cores of different pipeline widths, depths, and sizes. The output of FabScalar is a synthesizable RTL description of the desired core. While...

    Provided By North Carolina State University

  • White Papers // Aug 2012

    Network Virtualization: Technologies, Perspectives, and Frontiers

    Network virtualization refers to a broad set of technologies. Commercial solutions have been offered by the industry for years, while more recently the academic community has emphasized virtualization as an enabler for network architecture research, deployment, and experimentation. The authors review the entire spectrum of relevant approaches with the goal...

    Provided By North Carolina State University

  • White Papers // Aug 2012

    A Fast Path-Based ILP Formulation for Offline RWA in Mesh Optical Networks

    Routing and Wavelength Assignment (RWA) is a fundamental problem in the design and control of optical networks. The authors introduce the concept of symmetric RWA solutions and present a new ILP formulation to construct such solutions optimally. The formulation scales to mesh topologies representative of backbone and regional networks. Numerical results demonstrate that the...

    Provided By North Carolina State University

  • White Papers // Mar 2010

    Countering Persistent Kernel Rootkits Through Systematic Hook Discovery

    Kernel rootkits, as one of the most elusive types of malware, pose significant challenges for investigation and defense. Among the most notable are persistent kernel rootkits, a special type of kernel rootkits that implant persistent kernel hooks to tamper with the kernel execution to hide their presence. To defend against...

    Provided By North Carolina State University

  • White Papers // Jun 2010

    Transparent Protection of Commodity OS Kernels Using Hardware Virtualization

    Kernel rootkits are among the most insidious threats to computer security today. By employing various code injection techniques, they are able to maintain an omnipotent presence in the compromised OS kernels. Existing preventive countermeasures typically employ virtualization technology as part of their solutions. However, they are still limited in either...

    Provided By North Carolina State University

  • White Papers // Sep 2009

    HIMA: A Hypervisor-Based Integrity Measurement Agent

    Integrity measurement is a key issue in building trust in distributed systems. A good solution to integrity measurement has to provide both strong isolation between the measurement agent and the measurement target and Time Of Check To Time Of Use (TOCTTOU) consistency (i.e., the consistency between measured version and executed...

    Provided By North Carolina State University

  • White Papers // May 2010

    HyperSafe: A Lightweight Approach to Provide Lifetime Hypervisor Control-Flow Integrity

    Virtualization is being widely adopted in today's computing systems. Its unique security advantages in isolating and introspecting commodity OSes as Virtual Machines (VMs) have enabled a wide spectrum of applications. However, a common, fundamental assumption is the presence of a trustworthy hypervisor. Unfortunately, the large code base of commodity hypervisors...

    Provided By North Carolina State University

  • White Papers // Nov 2009

    Towards Performance, System and Security Issues in Secure Processor Architectures

    With the tremendous amount of digital information stored on today's computer systems, and with the increasing motivation and ability of malicious attackers to target this wealth of information, computer security has become an increasingly important topic. An important research effort towards such computer security issues focuses on protecting the privacy...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    A New Internet Architecture to Enable Software Defined Optics and Evolving Optical Switching Models

    The design of the SILO network architecture of fine-grain services was based on three fundamental principles. First, SILO generalizes the concept of layering and decouples layers from services, making it possible to introduce easily new functionality and innovations into the architecture. Second, cross-layer interactions are explicitly supported by extending the...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    Retrofitting Unit Tests for Parameterized Unit Testing

    Recent advances in software testing introduced Parameterized Unit Tests (PUT), which accept parameters, unlike Conventional Unit Tests (CUT), which do not accept parameters. PUTs are more beneficial than CUTs with regards to fault detection capability, since PUTs help describe the behaviors of methods under test for all test arguments. In...

    Provided By North Carolina State University

  • White Papers // Aug 2010

    From Quality to Utility: Adaptive Service Selection Framework

    The authors consider an approach to service selection wherein service consumers choose services with desired nonfunctional properties to maximize their utility. A consumer's utility from using a service clearly depends upon the qualities offered by the service. Many existing service selection approaches support agents estimating trustworthiness of services based on...

    Provided By North Carolina State University

  • White Papers // Jul 2010

    Using Focus Groups In Preliminary Instrument Development: Expected And Unexpected Lessons Learned

    Focus groups can be utilized effectively across various stages of instrument development. This paper details selected aspects of a process in which they were employed at the initial stages of item generation and refinement in a study of occupational stereotyping. The process yielded rich contextual information about the worldview and...

    Provided By North Carolina State University

  • White Papers // Sep 2009

    Methodology for Engineering Affective Social Applications

    Affective applications are becoming increasingly mainstream in entertainment and education. Yet, current techniques for building such applications are limited, and the maintenance and use of affect is in essence handcrafted in each application. The Koko architecture describes middleware that reduces the burden of incorporating affect into applications, thereby enabling developers...

    Provided By North Carolina State University

  • White Papers // Mar 2010

    Protocol Refinement: Formalization and Verification

    A proper definition of protocols and protocol refinement is crucial to designing multiagent systems. Rigidly defined protocols can require significant rework for even minor changes. Loosely defined protocols can require significant reasoning capabilities within each agent. Protocol definitions based on commitments are a middle ground. The authors formalize a model...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    Trustworthy Service Caching: Cooperative Search in P2P Information Systems

    The authors are developing an approach for P2P information systems, where the peers are modeled as autonomous agents. Agents provide services or give referrals to one another to help find trustworthy services. They consider the important case of information services that can be cached. Agents request information services through high-level...

    Provided By North Carolina State University

  • White Papers // Mar 2010

    A Value-Based Energy Manager for Wireless Sensor Networks

    Recent research in Wireless Sensor Networks (WSNs) focuses on the definition of a reusable software module to encapsulate energy management concerns. One such paper defines an Energy Management Architecture (EMA), which schedules application tasks in terms of static priorities and a simple request/grant API. Under EMA, application goals are expressed...

    Provided By North Carolina State University

  • White Papers // Oct 2010

    Teaching and Training Developer-Testing Techniques and Tool Support

    Developer testing is a type of testing where developers test their code as they write it, as opposed to testing done by a separate quality assurance organization. Developer testing has been widely recognized as an important and valuable means of improving software reliability, as it exposes faults early in the...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    Analyzing the Energy-Time Tradeoff in High-Performance Computing Applications

    Although users of high-performance computing are most interested in raw performance, both energy and power consumption have become critical concerns. One approach to lowering energy and power is to use high-performance cluster nodes that have several power-performance states, so that the energy-time tradeoff can be dynamically adjusted. This paper analyzes...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    Bounding Preemption Delay Within Data Cache Reference Patterns for Real-Time Tasks

    Caches have become invaluable for higher-end architectures to hide, in part, the increasing gap between processor speed and memory access times. While the effect of caches on timing predictability of single real-time tasks has been the focus of much research, bounding the overhead of cache warm-ups after preemptions remains a...

    Provided By North Carolina State University

  • White Papers // Jun 2010

    On Predictability of System Anomalies in Real World

    As computer systems become increasingly complex, system anomalies have become major concerns in system management. In this paper, the authors present a comprehensive measurement study to quantify the predictability of different system anomalies. Online anomaly prediction allows the system to foresee impending anomalies so as to take proper actions to...

    Provided By North Carolina State University

  • White Papers // Dec 2009

    Testing Access Control Policies

    As software systems become more and more complex, and are deployed to manage a large amount of sensitive information and resources, specifying and managing correct access control policies is critical and yet challenging. Policy testing is an important means to increasing confidence in the correctness of specified policies and their...

    Provided By North Carolina State University

  • White Papers // Sep 2009

    Reggae: Automated Test Generation for Programs Using Complex Regular Expressions

    Test coverage such as branch coverage is commonly measured to assess the sufficiency of test inputs. To reduce tedious manual efforts in generating high-covering test inputs, various automated techniques have been proposed. Some recent effective techniques include Dynamic Symbolic Execution (DSE) based on path exploration. However, these existing DSE techniques...

    Provided By North Carolina State University

  • White Papers // Mar 2010

    Identifying Security Bug Reports Via Text Mining: An Industrial Case Study

    A bug-tracking system such as Bugzilla contains Bug Reports (BRs) collected from various sources such as development teams, testing teams, and end users. When bug reporters submit bug reports to a bug-tracking system, the bug reporters need to label the bug reports as security bug reports (SBRs) or not, to...

    Provided By North Carolina State University

  • White Papers // Jan 2010

    Automated Behavioral Regression Testing

    When a program is modified during software evolution, developers typically run the new version of the program against its existing test suite to validate that the changes made to the program did not introduce unintended side effects (i.e., regression faults). This kind of regression testing can be effective in identifying...

    Provided By North Carolina State University

  • White Papers // Apr 2010

    Mining Likely Properties of Access Control Policies Via Association Rule Mining

    Access control mechanisms are used to control which principals (such as users or processes) have access to which resources based on access control policies. To ensure the correctness of access control policies, policy authors conduct policy verification to check whether certain properties are satisfied by a policy. However, these properties...

    Provided By North Carolina State University

  • White Papers // May 2010

    Does Erasure Coding Have a Role to Play in My Data Center?

    Today replication has become the de facto standard for storing data within and across data centers that process data-intensive workloads. Erasure coding (a form of software RAID), although heavily researched and theoretically more space-efficient than replication, has complex tradeoffs which are not well understood by practitioners. Today's data centers have...

    Provided By North Carolina State University
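
    The basic tradeoff is simple arithmetic; the parameters below are generic examples, not figures from the paper:

        def replication(copies):
            # n copies tolerate (n - 1) lost nodes at (n - 1) * 100% overhead
            return {"overhead": copies - 1.0, "failures": copies - 1}

        def erasure(k, m):
            # RS(k, m): k data + m parity blocks tolerate m losses at m/k overhead
            return {"overhead": m / k, "failures": m}

        print("3x replication:", replication(3))   # 200% overhead, 2 failures
        print("RS(10, 4):     ", erasure(10, 4))   # 40% overhead, 4 failures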

  • White Papers // Oct 2009

    SecureMR: A Service Integrity Assurance Framework for MapReduce

    MapReduce has become increasingly popular as a powerful parallel data processing model. To deploy MapReduce as a data processing service over open systems such as service oriented architecture, cloud computing, and volunteer computing, the authors must provide necessary security mechanisms to protect the integrity of MapReduce data processing services. In...

    Provided By North Carolina State University

  • White Papers // Aug 2010

    Flow Isolation in Optical Networks

    Innovation and technology advances in computer networking have spawned new and exciting telecommunications applications and services that have transformed every aspect of business and commerce, education and scientific exploration, entertainment and social interaction, and government and defense services. In particular, the Internet has changed the way people and corporations interact,...

    Provided By North Carolina State University

  • White Papers // Jun 2010

    PAC: Pattern-Driven Application Consolidation for Efficient Cloud Computing

    To reduce cloud system resource cost, application consolidation is a must. In this paper, the authors present a novel Pattern-driven Application Consolidation (PAC) system to achieve efficient resource sharing in virtualized cloud computing infrastructures. PAC employs signal processing techniques to dynamically discover significant patterns called signatures of different applications and...

    Provided By North Carolina State University

  • White Papers // Dec 2010

    A Timed Logic for Modeling and Reasoning About Security Protocols

    Many logical methods are usually considered suitable to express the static properties of security protocols while unsuitable to model dynamic processes or properties. However, a security protocol itself is in fact a dynamic process over time, and sometimes it is important to be able to express time-dependent security properties of...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    PRESS: PRedictive Elastic ReSource Scaling for Cloud Systems

    Cloud systems require elastic resource allocation to minimize resource provisioning costs while meeting Service Level Objectives (SLOs). In this paper, the authors present a novel PRedictive Elastic reSource Scaling (PRESS) scheme for cloud systems. PRESS unobtrusively extracts fine-grained dynamic patterns in application resource demands and adjusts their resource allocations automatically....

    Provided By North Carolina State University
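
    A toy sketch of the pattern-extraction idea (illustrative only, not the PRESS algorithm): find the dominant period in a demand series with an FFT, then provision the next window from the last observed cycle plus a margin:

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(480)                                   # minutes
        demand = 50 + 30 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 3, t.size)

        spectrum = np.abs(np.fft.rfft(demand - demand.mean()))
        period = t.size // int(np.argmax(spectrum))          # dominant cycle
        next_window = demand[-period:]                       # replay last cycle
        allocation = 1.1 * next_window.max()                 # 10% safety margin
        print("period =", period, "allocate =", round(allocation, 1))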

  • White Papers // Jun 2010

    Kernel Malware Analysis With Un-Tampered and Temporal Views of Dynamic Kernel Memory

    Dynamic kernel memory has been a popular target of recent kernel malware due to the difficulty of determining the status of volatile dynamic kernel objects. Some existing approaches use kernel memory mapping to identify dynamic kernel objects and check kernel integrity. The snapshot-based memory maps generated by these approaches are...

    Provided By North Carolina State University

  • White Papers // Aug 2010

    DKSM: Subverting Virtual Machine Introspection for Fun and Profit

    Virtual Machine (VM) introspection is a powerful technique for determining the specific aspects of guest VM execution from outside the VM. Unfortunately, existing introspection solutions share a common questionable assumption. This assumption is embodied in the expectation that original kernel data structures are respected by the untrusted guest and thus...

    Provided By North Carolina State University

  • White Papers // Aug 2010

    First Step Towards Automatic Correction of Firewall Policy Faults

    In this paper, the authors make three major contributions. First, they propose the first comprehensive fault model for firewall policies including five types of faults. For each type of fault, they present an automatic correction technique. Second, they propose the first systematic approach that employs these five techniques to automatically...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    Using Prime Numbers for Cache Indexing to Eliminate Conflict Misses

    Using alternative cache indexing/hashing functions is a popular technique to reduce conflict misses by achieving a more uniform cache access distribution across the sets in the cache. Although various alternative hashing functions have been demonstrated to eliminate the worst case conflict behavior, no study has really analyzed the pathological behavior...

    Provided By North Carolina State University
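
    The pathological case is easy to demonstrate: a power-of-two stride maps every address to the same set under conventional mod-2^k indexing, while a prime number of sets spreads the same addresses uniformly:

        from collections import Counter

        addresses = range(0, 64 * 1024, 256)     # stride-256 access pattern

        mod_pow2 = Counter(a % 128 for a in addresses)    # 128-set cache
        mod_prime = Counter(a % 127 for a in addresses)   # 127 sets (prime)

        print("distinct sets used, mod 128:", len(mod_pow2))    # 1 -> conflicts
        print("distinct sets used, mod 127:", len(mod_prime))   # 127 -> uniform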

  • White Papers // Jan 2011

    The Incremental Deployability of RTT-Based Congestion Avoidance for High Speed TCP Internet Connections

    The research focuses on end-to-end congestion avoidance algorithms that use Round Trip Time (RTT) fluctuations as an indicator of the level of network congestion. The algorithms are referred to as delay-based congestion avoidance or DCA. Due to the economics associated with deploying change within an existing network, the authors are...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    Asymmetric Multiprocessing for Simultaneous Multithreading Processors

    Simultaneous Multithreading (SMT) has become common in commercially available processors with hardware support for dual contexts of execution. However, performance of SMT systems has been disappointing for many applications. Consequently, many SMT systems are operated in a single-context configuration to achieve better average throughput, depending on the application domain. This...

    Provided By North Carolina State University

  • White Papers // Mar 2011

    Probabilistic Communication and I/O Tracing With Deterministic Replay at Scale

    With today's petascale supercomputers, applications often exhibit low efficiency, such as poor communication and I/O performance that can be diagnosed by analysis tools. However, these tools either produce extremely large trace files that complicate performance analysis, or sacrifice accuracy to collect high-level statistical information using crude averaging. This work contributes...

    Provided By North Carolina State University

  • White Papers // Jan 2010

    Large-Scale Multi-Dimensional Document Clustering on GPU Clusters

    Document clustering plays an important role in data mining systems. Recently, a flocking-based document clustering algorithm has been proposed to solve the problem through simulation resembling the flocking behavior of birds in nature. This method is superior to other clustering algorithms, including k-means, in the sense that the outcome is...

    Provided By North Carolina State University

  • White Papers // Jan 2011

    Challenges for Cyber-Physical Systems: Security, Timing Analysis and Soft Error Protection

    The power grid represents a distributed cyber-physical system that is essential to everyday life. Large-scale blackouts are known to have a severe economic and safety impact, as historical events have shown. The severity of power outages' impact on daily life is increasing continuously as the power distribution grid...

    Provided By North Carolina State University

  • White Papers // May 2010

    Making DRAM Refresh Predictable

    Embedded control systems with hard real-time constraints require that deadlines are met at all times or the system may malfunction with potentially catastrophic consequences. Schedulability theory can assure deadlines for a given task set when periods and Worst-Case Execution Times (WCETs) of tasks are known. While periods are generally derived...

    Provided By North Carolina State University

  • White Papers // May 2010

    Fault Tolerant Network Routing Through Software Overlays for Intelligent Power Grids

    Control decisions of intelligent devices in critical infrastructure can have a significant impact on human life and the environment. Ensuring that the appropriate data is available is crucial for making informed decisions. Such considerations are becoming increasingly important in today's cyber-physical systems that combine computational decision making on the cyber...

    Provided By North Carolina State University

  • White Papers // Sep 2010

    Hybrid Checkpointing for MPI Jobs in HPC Environments

    As the core count in high-performance computing systems keeps increasing, faults are becoming commonplace. Checkpointing addresses such faults but captures full process images even though only a subset of the process image changes between checkpoints. The authors have designed a hybrid checkpointing technique for MPI tasks of high-performance applications....

    Provided By North Carolina State University
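
    The incremental half of the idea can be sketched as follows (a toy model: real checkpointers track dirty pages through the MMU rather than by hashing):

        import hashlib

        PAGE = 4096

        def dirty_pages(image: bytes, prev_hashes: dict) -> dict:
            """Return only the pages that changed since the last checkpoint."""
            delta = {}
            for off in range(0, len(image), PAGE):
                page = image[off:off + PAGE]
                digest = hashlib.sha256(page).digest()
                if prev_hashes.get(off) != digest:
                    delta[off] = page          # checkpoint just this page
                    prev_hashes[off] = digest
            return delta

        mem = bytearray(16 * PAGE)
        hashes = {}
        print(len(dirty_pages(bytes(mem), hashes)))   # 16: full first checkpoint
        mem[5 * PAGE] = 1
        print(len(dirty_pages(bytes(mem), hashes)))   # 1: incremental delta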