Cornell University

  • White Papers // Jun 2014

    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. The authors analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter...

    Provided By Cornell University

  • White Papers // Jun 2014

    Integration of a Predictive, Continuous Time Neural Network Into Securities Market Trading Operations

    In this paper, the authors have presented an example of deep learning, namely the integration of a predictive, continuous time recurrent neural network into trading and risk assessment operations. During application within a trading environment, the potential need to adapt technical analysis indicators so that their use might be continued and...

    Provided By Cornell University

  • White Papers // May 2014

    Bargaining-Based Mobile Data Offloading

    The unprecedented growth of mobile data traffic challenges the performance and economic viability of today's cellular networks, and calls for novel network architectures and communication solutions. Data offloading through third-party WiFi or femtocell Access Points (APs) can effectively alleviate the cellular network congestion in a low operational and capital expenditure....

    Provided By Cornell University

  • White Papers // May 2014

    Algebraic Codes and a New Physical Layer Transmission Protocol for Wireless Distributed Storage Systems

    In a wireless storage system, having to communicate over a fading channel makes repair transmissions prone to physical layer errors. The first approach to combat fading is to utilize the existing optimal space-time codes. However, it was recently pointed out that such codes are in general too complex to decode...

    Provided By Cornell University

  • White Papers // May 2014

    The SQL++ Semi-structured Data Model and Query Language: A Capabilities Survey of SQL-on-Hadoop, NoSQL and NewSQL Databases

    Numerous SQL-on-Hadoop, NewSQL and NoSQL databases provide semi-structured data model and query language capabilities, but it is difficult to compare these capabilities. Many differences between the data models and (especially) between the query languages are superficial, but nonetheless distract from the essential differences. Other query language differences are direct derivatives...

    Provided By Cornell University

  • White Papers // May 2014

    Repair for Distributed Storage Systems in Packet Erasure Networks

    Reliability is essential for storing files in many applications of distributed storage systems. To maintain reliability, when a storage node fails, a new node should be regenerated by a repair process. Most of the previous results on the repair problem assume perfect (error-free) links in the networks. However, in practice,...

    Provided By Cornell University

  • White Papers // May 2014

    Full-Duplex Cloud Radio Access Networks: An Information-Theoretic Viewpoint

    The conventional design of cellular systems prescribes the separation of uplink and downlink transmissions via time-division or frequency-division duplex. Recent advances in analog- and digital-domain self-interference cancellation challenge the need for this arrangement and open up the possibility to operate base stations, especially low-power ones, in a full-duplex...

    Provided By Cornell University

  • White Papers // May 2014

    NScale: Neighborhood-Centric Large-Scale Graph Analytics in the Cloud

    There is an increasing interest in executing rich and complex analysis tasks over large-scale graphs, many of which require processing and reasoning about a large number of multi-hop neighborhoods or sub-graphs in the graph. Examples of such tasks include ego network analysis, motif counting, finding social circles, personalized recommendations, link... (An illustrative ego-network extraction sketch follows this entry.)

    Provided By Cornell University
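
    The snippet above mentions multi-hop neighborhood (ego network) analysis; NScale itself is not described further in this listing. As a minimal sketch of that extraction step, assuming the networkx library (which the paper is not stated to use):

      # Minimal sketch: extracting 2-hop ego networks, one of the
      # neighborhood-centric tasks mentioned in the entry above.
      import networkx as nx

      def ego_subgraphs(graph, radius=2):
          """Yield (vertex, subgraph induced by its radius-hop neighborhood)."""
          for v in graph.nodes:
              yield v, nx.ego_graph(graph, v, radius=radius)

      if __name__ == "__main__":
          g = nx.karate_club_graph()                 # small built-in example graph
          for v, sub in ego_subgraphs(g, radius=2):
              print(v, sub.number_of_nodes(), sub.number_of_edges())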

  • White Papers // May 2014

    Massively Parallel Processor Architectures for Resource-Aware Computing

    The authors present a class of massively parallel processor architectures called invasive Tightly Coupled Processor Arrays (TCPAs). The presented processor class is a highly parameterizable template, which can be tailored before runtime to fulfill customers' requirements such as performance, area cost, and energy efficiency. These programmable accelerators are well suited...

    Provided By Cornell University

  • White Papers // May 2014

    Anytime Control Using Input Sequences With Markovian Processor Availability

    The authors study an anytime control algorithm for situations where the processing resources available for control are time-varying in an a priori unknown fashion. Thus, at times, processing resources are insufficient to calculate control inputs. To address this issue, the algorithm calculates sequences of tentative future control inputs whenever possible,...

    Provided By Cornell University

  • White Papers // May 2014

    Emulated ASIC Power and Temperature Monitor System for FPGA Prototyping of an Invasive MPSoC Computing Architecture

    In this contribution the emulation of an ASIC Temperature and Power Monitoring system (TPMon) for FPGA prototyping is presented and tested to control processor temperatures under different control targets and operating strategies. The approach for emulating the power monitor is based on an instruction-level energy model. For emulating the temperature...

    Provided By Cornell University

  • White Papers // May 2014

    NetSecCC: A Scalable and Fault-tolerant Architecture without Outsourcing Cloud Network Security

    Modern cloud computing platforms based on virtual machine monitors host a variety of complex business services that present many network security vulnerabilities. At present, the traditional architecture employs a number of security devices at the front-end of cloud computing to protect its network security. Under the new environment, however, this approach cannot...

    Provided By Cornell University

  • White Papers // Apr 2014

    Intelligent Resource Allocation Technique For Desktop-as-a-Service in Cloud Environment

    The specialty of desktop-as-a-service cloud computing is that users can access their desktops and execute applications in virtual desktops on remote servers. Resource management and resource utilization are most significant in the area of desktop-as-a-service cloud computing; however, handling a large number of clients in the most efficient manner...

    Provided By Cornell University

  • White Papers // Apr 2014

    Computing an Optimal Control Policy for an Energy Storage

    The authors introduce StoDynProg, a small library created to solve Optimal Control problems arising in the management of Renewable Power Sources, in particular when coupled with an Energy Storage System. The library implements generic Stochastic Dynamic Programming (SDP) numerical methods which can solve a large class of Dynamic Optimization problems.... (An illustrative dynamic-programming sketch follows this entry.)

    Provided By Cornell University
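
    StoDynProg's actual API is not shown in this listing. Below is a minimal, hypothetical backward-induction sketch of the kind of storage-control dynamic program such a library targets; the state grid, price signal and cost function are all made up for illustration:

      # Hypothetical dynamic-programming sketch for an energy storage control
      # problem.  Every name and parameter here is illustrative; StoDynProg's
      # real interface is not shown in the entry above.
      import numpy as np

      levels  = np.linspace(0.0, 1.0, 21)        # discretized state of charge
      actions = np.linspace(-0.2, 0.2, 9)        # charge (+) / discharge (-) per step
      horizon = 24                               # e.g. hours
      price   = 1.0 + 0.5 * np.sin(np.arange(horizon) / 24 * 2 * np.pi)

      def step_cost(t, action):
          # pay when charging, earn when discharging (made-up cost model)
          return price[t] * action

      V = np.zeros(len(levels))                  # terminal value function
      for t in reversed(range(horizon)):
          V_new = np.empty_like(V)
          for i, soc in enumerate(levels):
              best = np.inf
              for a in actions:
                  nxt = soc + a
                  if not (0.0 <= nxt <= 1.0):
                      continue                   # infeasible charge level
                  j = int(round(nxt * (len(levels) - 1)))   # nearest grid point
                  best = min(best, step_cost(t, a) + V[j])
              V_new[i] = best
          V = V_new

      print("value of starting half-full:", V[len(levels) // 2])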

  • White Papers // Apr 2014

    Automated Classification of Airborne Laser Scanning Point Clouds

    Making sense of the physical world has always been at the core of mapping. Up until recently, this has always depended on using the human eye. Using airborne lasers, it has become possible to quickly "see" more of the world in many more dimensions. The resulting enormous point clouds serve...

    Provided By Cornell University

  • White Papers // Apr 2014

    Cache-Oblivious VAT-Algorithms

    Modern processors have a memory hierarchy and use virtual memory. The authors concentrate on two-levels of the hierarchy and refer to the faster memory as the cache. Data is moved between the fast and the slow memory in blocks of contiguous memory cells, and only data residing in the fast...

    Provided By Cornell University

  • White Papers // Apr 2014

    A Signal Processor for Gaussian Message Passing

    In this paper, the authors present a novel signal processing unit built upon the theory of factor graphs, which is able to address a wide range of signal processing algorithms. More specifically, the demonstrated Factor Graph Processor (FGP) is tailored to Gaussian message passing algorithms. They show how to use...

    Provided By Cornell University

  • White Papers // Apr 2014

    CernVM Online and Cloud Gateway: A Uniform Interface for CernVM Contextualization and Deployment

    In a virtualized environment, contextualization is the process of configuring a VM instance for the needs of various deployment use cases. Contextualization in CernVM can be done by passing a handwritten context to the user data field of cloud APIs, when running CernVM on the cloud, or by using CernVM...

    Provided By Cornell University

  • White Papers // Apr 2014

    GraphGen: An FPGA Framework for Vertex-Centric Graph Computation

    Vertex-centric graph computations are widely used in many machine learning and data mining applications that operate on graph data structures. This paper presents GraphGen, a vertex-centric framework that targets FPGA for hardware acceleration of graph computations. GraphGen accepts a vertex-centric graph specification and automatically compiles it onto an application-specific synthesized...

    Provided By Cornell University

  • White Papers // Apr 2014

    Hardware Efficient WiMAX Deinterleaver Capable of Address Generation for Random Interleaving Depths

    The variation in the prescribed modulation schemes and code rates for WiMAX interleaver design, as defined by IEEE 802.16 standard, demands a plethora of hardware if all the modulation schemes and code rates have to be unified into a single electronic device. Add to this the complexities involved with the...

    Provided By Cornell University

  • White Papers // Apr 2014

    A New Multi-Tiered Solid State Disk Using SLC/MLC Combined Flash Memory

    Storing digital information and ensuring accurate, steady and uninterrupted access to the data are considered fundamental challenges in enterprise-class organizations and companies. In recent years, new types of storage systems such as Solid State Disks (SSD) have been introduced. Unlike hard disks that have a mechanical structure, SSDs are based...

    Provided By Cornell University

  • White Papers // Apr 2014

    Towards Cloud Computing: A SWOT Analysis on Its Adoption in SMEs

    Over the past few years, the emergence of cloud computing has driven a notable evolution in the IT industry by putting forward an 'everything as a service' idea. Cloud computing is of growing interest to companies throughout the world, but there are many barriers associated with its adoption which should be...

    Provided By Cornell University

  • White Papers // Apr 2014

    Estimation of Optimized Energy and Latency Constraint for Task Allocation in 3D Network on Chip

    In a Network on Chip (NoC) based system, energy consumption is affected by task scheduling and allocation schemes, which affect the performance of the system. In this paper, the authors test the pre-existing proposed algorithms and introduce a new energy-efficient algorithm for 3D NoC architecture. An efficient dynamic and cluster...

    Provided By Cornell University

  • White Papers // Apr 2014

    Web Log Data Analysis by Enhanced Fuzzy C Means Clustering

    The World Wide Web is a huge repository of information, and there is a tremendous increase in the volume of information daily. The number of users is also increasing day by day. To reduce users' browsing time, a lot of research has taken place. Web Usage Mining is a type of web... (An illustrative fuzzy c-means sketch follows this entry.)

    Provided By Cornell University
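
    The paper's enhanced variant is not described in the snippet above; as a baseline reference, a plain fuzzy c-means sketch in numpy is shown below, with random vectors standing in for web-log session features:

      # Plain fuzzy c-means sketch (the "enhanced" variant in the entry above
      # is not described in the snippet).  Feature vectors here are random
      # stand-ins for web-log session features.
      import numpy as np

      def fuzzy_c_means(X, c=3, m=2.0, iters=100, eps=1e-6, seed=None):
          rng = np.random.default_rng(seed)
          n = X.shape[0]
          U = rng.random((n, c))
          U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per point
          for _ in range(iters):
              Um = U ** m
              centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
              U_new /= U_new.sum(axis=1, keepdims=True)
              done = np.abs(U_new - U).max() < eps
              U = U_new
              if done:
                  break
          return centers, U

      if __name__ == "__main__":
          sessions = np.random.default_rng(0).random((200, 5))   # 200 sessions, 5 features
          centers, U = fuzzy_c_means(sessions, c=3)
          print(centers.shape, U.argmax(axis=1)[:10])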

  • White Papers // Mar 2014

    Tile Optimization for Area in FPGA Based Hardware Acceleration of Peptide Identification

    Advances in life sciences over the last few decades have led to the generation of a huge amount of biological data. Computing research has become a vital part in driving biological discovery where analysis and categorization of biological data are involved. String matching algorithms can be applied for protein/gene sequence...

    Provided By Cornell University

  • White Papers // Mar 2014

    Design Architecture-Based on Web Server and Application Cluster in Cloud Environment

    Cloud has been a computational and storage solution for many data-centric organizations. The problem those organizations face today is searching the data stored in the cloud in an efficient manner. A framework is required to distribute the work of searching and fetching across thousands of computers. The data in...

    Provided By Cornell University

  • White Papers // Mar 2014

    Increasing Flash Memory Lifetime by Dynamic Voltage Allocation for Constant Mutual Information

    The read channel in Flash memory systems degrades over time because the Fowler-Nordheim tunneling used to apply charge to the floating gate eventually compromises the integrity of the cell because of tunnel oxide degradation. While degradation is commonly measured in the number of program/erase cycles experienced by a cell, the...

    Provided By Cornell University

  • White Papers // Mar 2014

    Applying Mathematical Models in Cloud Computing: A Survey

    As more and more information on individuals and companies is placed in the cloud, concerns are beginning to grow about just how safe an environment it is. It is better to prevent security threats before they enter the systems, and there is no way this can be prevented...

    Provided By Cornell University

  • White Papers // Mar 2014

    Capacity of a Nonlinear Optical Channel with Finite Memory

    The channel capacity of a nonlinear, dispersive fiberoptic link is revisited. To this end, the popular Gaussian Noise (GN) model is extended with a parameter to account for the finite memory of realistic fiber channels. This finite-memory model is harder to analyze mathematically but, in contrast to previous models, it...

    Provided By Cornell University

  • White Papers // Mar 2014

    Noise Facilitation in Associative Memories of Exponential Capacity

    Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions...

    Provided By Cornell University

  • White Papers // Mar 2014

    Readout Optical System of Sapphire Disks Intended for Long-Term Data Storage

    The development of long-term data storage technology is one of the pressing problems of our time. This paper presents the results of implementing a technical solution for long-term data storage technology, proposed a few years ago, on the basis of single-crystal sapphire. It is shown that the problem of...

    Provided By Cornell University

  • White Papers // Mar 2014

    Era of Big Data Processing: A New Approach via Tensor Networks and Tensor Decompositions

    Modern applications such as computational neuroscience, neuroinformatics and pattern/image recognition generate massive amounts of multidimensional data with multiple aspects and high dimensionality. Big data require novel technologies to efficiently process massive datasets within tolerable elapsed times. Such a new emerging technology for multidimensional big data is a multi-way analysis via...

    Provided By Cornell University

  • White Papers // Mar 2014

    Large-Scale Geospatial Processing on Multi-Core and Many-Core Processors: Evaluations on CPUs, GPUs and MICs

    Geospatial Processing, such as queries based on point-to-polyline shortest distance and point-in-polygon test, are fundamental to many scientific and engineering applications, including post-processing large-scale environmental and climate model outputs and analyzing traffic and travel patterns from massive GPS collections in transportation engineering and urban studies. Commodity parallel hardware, such as...

    Provided By Cornell University

  • White Papers // Feb 2014

    Two Stage Prediction Process with Gradient Descent Methods Aligning with the Data Privacy Preservation

    Privacy preservation emphasizes the authorization of data, which signifies that data should be accessed only by authorized users. Ensuring the privacy of data is considered one of the challenging tasks in data management. The generalization of data with varying concept hierarchies seems to be an interesting solution. This paper proposes...

    Provided By Cornell University

  • White Papers // Feb 2014

    Distributed Storage over Unidirectional Ring Networks

    In this paper, the authors study distributed storage problems over unidirectional ring networks, whose storage nodes form a directed ring and data is transmitted along the same direction. The original data is distributed for storage on these nodes. Each user can connect to one and only one storage node to download...

    Provided By Cornell University

  • White Papers // Feb 2014

    Intensional RDB Manifesto: a Unifying NewSQL Model for Flexible Big Data

    In this paper, the authors present a new family of Intensional RDBs (IRDBs) which extends the traditional RDBs with Big Data and flexible 'open schema' features, able to preserve the user-defined relational database schemas and all preexisting user applications containing the SQL statements for a deployment of such...

    Provided By Cornell University

  • White Papers // Feb 2014

    Energy and Latency Aware Application Mapping Algorithm & Optimization for Homogeneous 3D Network on Chip

    Energy efficiency is one of the most critical issues in the design of System on Chip. In Network-on-Chip (NoC) based systems, energy consumption is influenced dramatically by the mapping of Intellectual Property (IP) cores, which affects the performance of the system. In this paper, the authors test the previously proposed algorithms and...

    Provided By Cornell University

  • White Papers // Feb 2014

    The Case for Cloud Service Trustmarks and Assurance-as-a-Service

    Cloud computing represents a significant economic opportunity for Europe. However, this growth is threatened by adoption barriers largely related to trust. This position paper examines trust and confidence issues in cloud computing and advances a case for addressing them through the implementation of a novel trustmark scheme for cloud service...

    Provided By Cornell University

  • White Papers // Feb 2014

    Control Loop Feedback Mechanism for Generic Array Logic Chip Multiprocessor

    A control loop feedback mechanism for a generic array logic chip multiprocessor is presented. The approach is based on a control-loop feedback mechanism to maximize the efficiency of exploiting available resources such as CPU time, operating frequency, etc. Each Processing Element (PE) in the architecture is equipped with a frequency scaling module responsible...

    Provided By Cornell University

  • White Papers // Feb 2014

    A Comparative Study of Load Balancing Algorithms in Cloud Computing Environment

    Cloud computing is a new trend emerging in the IT environment with huge requirements of infrastructure and resources. Load balancing is an important aspect of the cloud computing environment. An efficient load balancing scheme ensures efficient resource utilization by provisioning resources to cloud users on demand in a pay-as-you-go manner. Load balancing may even...

    Provided By Cornell University

  • White Papers // Nov 2013

    Big Data Analytics in Future Internet of Things

    Current research on the Internet of Things (IoT) mainly focuses on how to enable general objects to see, hear, and smell the physical world for themselves, and make them connected to share the observations. In this paper, the authors argue that being connected alone is not enough; beyond that, general objects should...

    Provided By Cornell University

  • White Papers // Nov 2013

    A Big Data Approach to Computational Creativity

    Computational creativity is an emerging branch of artificial intelligence that places computers in the center of the creative process. Broadly, creativity involves a generative step to produce many ideas and a selective step to determine the ones that are the best. Many previous attempts at computational creativity, however, have not...

    Provided By Cornell University

  • White Papers // Nov 2013

    On the Inequality of the 3V's of Big Data Architectural Paradigms: A Case for Heterogeneity

    The well-known 3V architectural paradigm for Big Data introduced by Laney (2011) provides a simplified framework for defining the architecture of a big data platform to be deployed in various scenarios tackling processing of massive datasets. While additional components such as Variability and Veracity have been discussed as an extension...

    Provided By Cornell University

  • White Papers // Jul 2013

    BigDataBench: a Big Data Benchmark Suite from Web Search Engines

    In this paper, the authors present joint research efforts on big data benchmarking with several industrial partners. Considering the complexity, diversity, workload churns, and rapid evolution of big data systems, they take an incremental approach in big data benchmarking. For the first step, they pay attention to search engines, which...

    Provided By Cornell University

  • White Papers // Oct 2012

    Fast Data in the Era of Big Data: Twitter's Real-Time Related Query Suggestion Architecture

    The authors present the architecture behind Twitter's real-time related query suggestion and spelling correction service. Although these tasks have received much attention in the web search literature, the Twitter context introduces a real-time "twist": after significant breaking news events, they aim to provide relevant results within minutes. This paper provides...

    Provided By Cornell University

  • White Papers // Oct 2013

    Analyzing Big Data with Dynamic Quantum Clustering

    How does one search for a needle in a multi-dimensional haystack without knowing what a needle is and without knowing if there is one in the haystack? This kind of problem requires a paradigm shift - away from hypothesis driven searches of the data - towards a methodology that lets...

    Provided By Cornell University

  • White Papers // Sep 2013

    Unmixing Incoherent Structures of Big Data by Randomized or Greedy Decomposition

    Learning big data by matrix decomposition always suffers from expensive computation, mixing of complicated structures and noise. In this paper, the authors study more adaptive models and efficient algorithms that decompose a data matrix as the sum of semantic components with incoherent structures. They firstly introduce "GO Decomposition (GoDec)", an... (An illustrative low-rank-plus-sparse sketch follows this entry.)

    Provided By Cornell University
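
    A simplified sketch of the low-rank-plus-sparse alternation behind GoDec follows; the randomized acceleration described in the paper is omitted, and all parameter names below are assumptions of this sketch rather than the paper's notation:

      # Simplified sketch of the low-rank + sparse idea behind GoDec:
      # alternately project onto rank-r matrices (truncated SVD) and onto
      # matrices with at most 'card' nonzero entries.  The paper's randomized
      # speed-ups are not reproduced here.
      import numpy as np

      def godec_like(X, rank=5, card=300, iters=50):
          L = np.zeros_like(X)
          S = np.zeros_like(X)
          for _ in range(iters):
              # low-rank step: best rank-r approximation of X - S
              U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
              L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
              # sparse step: keep the 'card' largest-magnitude entries of X - L
              R = X - L
              thresh = np.sort(np.abs(R), axis=None)[-card]
              S = np.where(np.abs(R) >= thresh, R, 0.0)
          return L, S

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          truth = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
          noisy = truth.copy()
          idx = rng.choice(truth.size, 300, replace=False)
          noisy.flat[idx] += 10 * rng.standard_normal(300)     # sparse outliers
          L, S = godec_like(noisy, rank=5, card=300)
          print("low-rank error:", np.linalg.norm(L - truth) / np.linalg.norm(truth))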

  • White Papers // Feb 2014

    Big Data and the SP Theory of Intelligence

    This paper is about how the SP theory of intelligence and its realization in the SP machine may, with advantage, be applied to the management and analysis of big data. The issues which are discussed are general and need to be addressed in any scenario. The SP system introduced in...

    Provided By Cornell University

  • White Papers // Feb 2010

    Performance and Stability of the Chelonia Storage Cloud

    In this paper, the authors present the Chelonia storage cloud middleware. It was designed to fill the requirements gap between those of large, sophisticated scientific collaborations which have adopted the grid paradigm for their distributed storage needs, and of corporate business communities which are gravitating towards the cloud paradigm. The...

    Provided By Cornell University

  • White Papers // Nov 2009

    One-Bit Stochastic Resonance Storage Device

    The increasing capacity of modern computers, driven by Moore's Law, is accompanied by smaller noise margins and higher error rates. In this paper, the authors propose a memory device, consisting of a ring of two identical overdamped bistable forward-coupled oscillators, which may serve as a building block in a larger...

    Provided By Cornell University

  • White Papers // Feb 2013

    Maximum-Likelihood Sequence Detector for Dynamic Mode High Density Probe Storage

    There is an increasing need for high density data storage devices driven by the increased demand of consumer electronics. In this paper, the authors consider a data storage system that operates by encoding information as topographic profiles on a polymer medium. A cantilever probe with a sharp tip (few nm...

    Provided By Cornell University

  • White Papers // Feb 2007

    Dynamic Control of a Flow-Rack Automated Storage and Retrieval System

    In this paper, the authors propose a control scheme based on Colored Petri Nets (CPNs) for a flow-rack automated storage and retrieval system. The AS/RS is modeled using CPNs, and the developed model has been used to capture and provide the rack state. They introduce in the control...

    Provided By Cornell University

  • White Papers // Feb 2008

    Joint Equalization and Decoding for Nonlinear Two-Dimensional Intersymbol Interference Channels with Application to Optical Storage

    An algorithm that performs joint equalization and decoding for nonlinear two-dimensional intersymbol interference channels is presented. The algorithm performs sum-product message-passing on a factor graph that represents the underlying system. The TWO-Dimensional Optical Storage (TWODOS) technology is an example of a system with nonlinear two-dimensional intersymbol interference. Simulations for the...

    Provided By Cornell University

  • White Papers // Feb 2008

    Cryptography in the Bounded Quantum-Storage Model

    The authors initiate the study of two-party cryptographic primitives with unconditional security, assuming that the adversary's quantum memory is of bounded size. They show that oblivious transfer and bit commitment can be implemented in this model using protocols where honest parties need no quantum memory, whereas an adversarial player needs...

    Provided By Cornell University

  • White Papers // Aug 2013

    Intensional view of General Single Processor Operating Systems

    Operating systems are currently viewed ostensively. As a result they mean different things to different people. The ostensive character makes it hard to understand OSes formally. An intensional view can enable better formal work, and also offer constructive support for some important problems, e.g. OS architecture. This paper argues...

    Provided By Cornell University

  • White Papers // May 2013

    Analysis of a Non-Work Conserving Generalized Processor Sharing Queue

    In this paper, the authors consider a non work-conserving Generalized Processor Sharing (GPS) system composed of two queues with Poisson arrivals and exponential service times. Using general results due to Fayolle et al, they first establish the stability condition for this system. They then determine the functional equation satisfied by...

    Provided By Cornell University

  • White Papers // Aug 2012

    A Model for Minimizing Active Processor Time

    Power management strategies have been widely studied in the scheduling literature. Many of the models are motivated by the energy consumption of the processor. Consider, alternatively, the energy consumed by the operation of large storage systems. Data is stored in memory which may be turned on and off, and each...

    Provided By Cornell University

  • White Papers // Jun 2012

    Best Practices for HPM-Assisted Performance Engineering on Modern Multicore Processors

    Many tools and libraries employ Hardware Performance Monitoring (HPM) on modern processors, and using this data for performance assessment and as a starting point for code optimizations is very popular. However, such data is only useful if it is interpreted with care, and if the right metrics are chosen for...

    Provided By Cornell University

  • White Papers // Mar 2012

    The Distributed Network Processor: A Novel Off-Chip and On-Chip Interconnection Network Architecture

    One of the most demanding challenges for the designers of parallel computing architectures is to deliver an efficient network infrastructure providing low latency, high bandwidth communications while preserving scalability. Besides off-chip communications between processors, recent multi-tile (i.e. multi-core) architectures face the challenge of an efficient on-chip interconnection network between processor's...

    Provided By Cornell University

  • White Papers // Nov 2011

    Design and Simulation of an 8-Bit Dedicated Processor for Calculating the Sine and Cosine of an Angle Using the CORDIC Algorithm

    In this paper, the authors describe the design and simulation of an 8-bit dedicated processor for calculating the Sine and Cosine of an Angle using the CORDIC Algorithm (COordinate Rotation DIgital Computer), a simple and efficient algorithm to calculate hyperbolic and trigonometric functions. They have proposed a dedicated processor system, modeled... (An illustrative software sketch of CORDIC follows this entry.)

    Provided By Cornell University
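
    As a software companion to the entry above, here is a plain floating-point sketch of rotation-mode CORDIC for sine and cosine; the paper's 8-bit fixed-point hardware datapath is not reproduced:

      # Software sketch of rotation-mode CORDIC for sine/cosine.  The paper
      # describes an 8-bit hardware processor; this floating-point version
      # only illustrates the shift-and-add iteration itself.
      import math

      N = 24                                            # number of iterations
      ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
      K = 1.0
      for i in range(N):
          K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # cumulative gain correction

      def cordic_sin_cos(theta):
          """Return (sin(theta), cos(theta)) for theta in roughly [-pi/2, pi/2]."""
          x, y, z = K, 0.0, theta
          for i in range(N):
              d = 1.0 if z >= 0 else -1.0
              x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
              z -= d * ANGLES[i]
          return y, x

      if __name__ == "__main__":
          s, c = cordic_sin_cos(math.pi / 6)
          print(s, c, math.sin(math.pi / 6), math.cos(math.pi / 6))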

  • White Papers // Sep 2013

    Knowledge-Based Expressive Technologies Within Cloud Computing Environments

    In this paper, the authors describe the development of a comprehensive approach for knowledge processing within e-Science tasks. Considering task solving within a simulation-driven approach, a set of knowledge-based procedures for task definition and composite application processing can be identified. These procedures could be supported by the use of domain-specific...

    Provided By Cornell University

  • White Papers // Oct 2011

    Cameleon Language Part 1: Processor

    Emergence is the way complex systems arise out of a multiplicity of relatively simple interactions between primitives. An example of emergence is the Lego game, where the primitives are plastic bricks and their interaction is the interlocking. Since programming problems become more and more complex and transverse, their vision is...

    Provided By Cornell University

  • White Papers // Oct 2013

    Load Balancing Using Ant Colony in Cloud Computing

    Ants are very small insects. They are capable of finding food even though they are completely blind. Ants live in their nest, and their job is to search for food when they get hungry. The authors are not interested in their living style, such as how they live or how they sleep....

    Provided By Cornell University

  • White Papers // Feb 2010

    Efficient Implementation of Elliptic Curve Cryptography Using Low-power Digital Signal Processor

    The strength of RSA lies in the integer factorization problem: given a number n, one has to find its prime factors. This becomes quite complicated when dealing with large numbers. RSA (Rivest, Shamir and Adleman) is used as a public key exchange and key agreement... (A toy illustrative example follows this entry.)

    Provided By Cornell University
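
    To illustrate the integer-factorization point made in the snippet above, here is a toy textbook-RSA example with deliberately tiny primes (never secure in practice); the paper's DSP-based elliptic curve implementation is not reproduced:

      # Toy textbook-RSA illustration of the integer-factorization point made
      # above: with tiny primes the modulus is trivially factorable, which is
      # exactly why real keys use very large primes.
      p, q = 61, 53                       # toy primes (insecure on purpose)
      n = p * q                           # public modulus
      phi = (p - 1) * (q - 1)
      e = 17                              # public exponent, coprime with phi
      d = pow(e, -1, phi)                 # private exponent (Python 3.8+)

      msg = 42
      cipher = pow(msg, e, n)
      assert pow(cipher, d, n) == msg     # decryption recovers the message

      # "Breaking" the toy key by trial-division factorization of n:
      factor = next(f for f in range(2, n) if n % f == 0)
      print("recovered factors:", factor, n // factor)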

  • White Papers // Dec 2013

    Cache-Aware Static Scheduling for Hard Real-Time Multicore Systems Based on Communication Affinities

    The growing need for continuous processing capabilities has led to the development of multicore systems with a complex cache hierarchy. Such multicore systems are generally designed for improving the performance in average case, while hard real-time systems must consider worst-case scenarios. An open challenge is therefore to efficiently schedule hard...

    Provided By Cornell University

  • White Papers // Jul 2011

    Speed Scaling on Parallel Processors with Migration

    The authors study the problem of scheduling a set of jobs with release dates, deadlines and processing requirements (or works), on parallel speed-scaled processors so as to minimize the total energy consumption. They consider that both preemption and migration of jobs are allowed. An exact polynomial-time algorithm has been proposed... (A formula sketch of the standard speed-scaling model follows this entry.)

    Provided By Cornell University
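
    The snippet is truncated before stating the energy model; the standard speed-scaling model commonly assumed in this literature (and assumed here) is that a processor running at speed s draws power s^alpha with alpha > 1, so the objective and constraints take roughly the form

      % Standard speed-scaling model (assumed here; the snippet above does not state it)
      \[
        P(s) = s^{\alpha}, \quad \alpha > 1,
        \qquad
        E = \sum_{p} \int_{0}^{T} s_p(t)^{\alpha}\,\mathrm{d}t ,
      \]
      % each job j with release date r_j, deadline d_j and work w_j must be completed
      \[
        \int_{r_j}^{d_j} s_j(t)\,\mathrm{d}t \;\ge\; w_j ,
      \]

    where $s_p(t)$ is the speed of processor $p$ and $s_j(t)$ is the speed devoted to job $j$ at time $t$ on whichever processor it currently occupies.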

  • White Papers // May 2011

    A Simple Multi-Processor Computer Based on Subleq

    Subleq (Subtract and Branch on result less than or equal to zero) is both an instruction set and a programming language for a One Instruction Set Computer (OISC). The authors describe a hardware implementation of an array of 28 one-instruction Subleq processors on a low-cost FPGA board. Their test results demonstrate... (An illustrative software interpreter sketch follows this entry.)

    Provided By Cornell University
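
    The one-instruction Subleq ISA mentioned above is small enough to illustrate directly; below is a tiny software interpreter sketch (the paper's contribution is the FPGA processor array, which is not reproduced here):

      # Tiny interpreter for the one-instruction Subleq ISA mentioned above:
      # each instruction (a, b, c) does mem[b] -= mem[a], then jumps to c if
      # the result is <= 0 (a negative target halts), else falls through.
      def run_subleq(mem, pc=0, max_steps=10_000):
          for _ in range(max_steps):
              a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
              mem[b] -= mem[a]
              pc = c if mem[b] <= 0 else pc + 3
              if pc < 0:
                  break                          # halt convention
          return mem

      # Example program: copy the value in cell 13 into cell 12.
      prog = [12, 12, 3,     # clear the target cell
              13, 14, 6,     # scratch -= source  (scratch = -source)
              14, 12, 9,     # target -= scratch  (target = source); next is 9 either way
              14, 14, -1,    # scratch -= scratch = 0 <= 0 -> jump to -1 (halt)
              0, 7, 0]       # data cells: target, source, scratch
      print(run_subleq(prog)[12])   # -> 7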

  • White Papers // Jul 2010

    Solving k-Nearest Neighbor Problem on Multiple Graphics Processors

    A recommendation system is a software system to predict customers' unknown preferences from known preferences. In a recommendation system, customers' preferences are encoded into vectors, and finding the nearest vectors to each vector is an essential part. This vector-searching part of the problem is called a k-nearest neighbor problem. The... (An illustrative brute-force sketch follows this entry.)

    Provided By Cornell University
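
    The multi-GPU partitioning is not shown in the snippet; as a reference for the kernel being parallelized, here is brute-force k-nearest-neighbor search in numpy:

      # Brute-force k-nearest-neighbor search in numpy.  This is the kernel
      # the entry above distributes across multiple GPUs; the GPU partitioning
      # itself is not reproduced here.
      import numpy as np

      def knn(queries, data, k=5):
          """Return indices of the k nearest data vectors for each query (L2 metric)."""
          # squared distances via |q - d|^2 = |q|^2 - 2 q.d + |d|^2
          d2 = (np.sum(queries**2, axis=1)[:, None]
                - 2.0 * queries @ data.T
                + np.sum(data**2, axis=1)[None, :])
          return np.argsort(d2, axis=1)[:, :k]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          items = rng.random((1000, 16))      # e.g. preference vectors
          users = rng.random((5, 16))
          print(knn(users, items, k=3))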

  • White Papers // Oct 2012

    A Flexible Design for Optimization of Hardware Architecture in Distributed Arithmetic based FIR Filters

    FIR filters are used in many performance/power critical applications such as mobile communication devices, analogue-to-digital converters and digital signal processing applications. Design of appropriate FIR filters usually causes the order of the filter to be increased. Synthesis and tape-out of high-order FIR filters with reasonable delay, area and power... (A behavioral reference sketch follows this entry.)

    Provided By Cornell University
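
    The distributed-arithmetic hardware optimization is not shown in the snippet; as a behavioral reference for what such hardware computes, a direct-form FIR filter is simply a sliding dot product:

      # Behavioral reference for an FIR filter: y[n] = sum_k b[k] * x[n-k].
      # The entry above is about a distributed-arithmetic hardware architecture
      # for this computation; that hardware detail is not reproduced here.
      import numpy as np

      def fir_filter(b, x):
          """Direct-form FIR: filter input x with coefficient vector b."""
          y = np.zeros(len(x))
          for n in range(len(x)):
              for k, coeff in enumerate(b):
                  if n - k >= 0:
                      y[n] += coeff * x[n - k]
          return y

      if __name__ == "__main__":
          b = np.array([0.25, 0.5, 0.25])            # simple low-pass taps
          x = np.sin(np.linspace(0, 10, 50))
          # sanity check against numpy's convolution
          assert np.allclose(fir_filter(b, x), np.convolve(x, b)[:len(x)])
          print(fir_filter(b, x)[:5])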

  • White Papers // Aug 2010

    Associative Control Processor With a Rigid Structure

    Developing logical-linguistic control models for complex technical systems occupies an important place among various real-world problems; for example, in a case study applying a logical-linguistic control model to vehicle crashworthiness modeling, the approach of applying an associative processor to the decision-making problem was proposed. It focuses on hardware...

    Provided By Cornell University

  • White Papers // Jan 2014

    Hardware Implementation of Four Byte Per Clock RC4 Algorithm

    In the field of cryptography, the 2-bytes-in-1-clock design is to date the best known RC4 hardware design, while the 1-byte-in-1-clock and 1-byte-in-3-clocks designs are the best known implementations. The design algorithm considers two consecutive bytes together and processes them in 2 clocks. The design... (A software reference sketch follows this entry.)

    Provided By Cornell University
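
    For reference, the RC4 keystream algorithm (KSA followed by PRGA) whose hardware throughput the paper optimizes is shown below in plain software form; the 4-bytes-per-clock datapath itself is not reproduced:

      # Software reference for the RC4 keystream (KSA + PRGA) whose hardware
      # throughput the entry above optimizes; the 4-bytes-per-clock datapath
      # is not reproduced here.
      def rc4_keystream(key, n):
          # key-scheduling algorithm (KSA)
          S = list(range(256))
          j = 0
          for i in range(256):
              j = (j + S[i] + key[i % len(key)]) % 256
              S[i], S[j] = S[j], S[i]
          # pseudo-random generation algorithm (PRGA)
          out, i, j = [], 0, 0
          for _ in range(n):
              i = (i + 1) % 256
              j = (j + S[i]) % 256
              S[i], S[j] = S[j], S[i]
              out.append(S[(S[i] + S[j]) % 256])
          return bytes(out)

      # Known test vector: key "Key" -> keystream begins EB 9F 77 81 B7 34 CA 72 A7
      print(rc4_keystream(b"Key", 9).hex())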

  • White Papers // Feb 2014

    Application of Selective Algorithm for Effective Resource Provisioning in Cloud Computing Environment

    The continued modern-day demand for resource-hungry services and applications in the IT sector has led to the development of cloud computing. A cloud computing environment involves high-cost infrastructure on one hand and needs large-scale computational resources on the other. These resources need to be provisioned (allocated and scheduled) to...

    Provided By Cornell University

  • White Papers // Feb 2014

    Concept of Feedback in Future Computing Models to Cloud Systems

    Currently, it is urgent to ensure QoS in distributed computing systems. This has become especially important with the development and spread of cloud services. Big data structures are becoming heavily distributed. It is necessary to consider communication channels, data transmission systems, virtualization, and scalability in the future design of computational models in...

    Provided By Cornell University

  • White Papers // Aug 2013

    Secure Authentication of Cloud Data Mining API

    Cloud computing is a revolutionary concept that has brought a paradigm shift in the IT world. This has made it possible to manage and run businesses without even setting up an IT infrastructure. It offers multifold benefits to the users moving to a cloud, while posing unknown security and privacy...

    Provided By Cornell University