VLDB Endowment

  • White Papers // Nov 2011

    PIQL: Success-Tolerant Query Processing in the Cloud

    Newly-released web applications often succumb to a "Success Disaster," where overloaded database machines and resulting high response times destroy a previously good user experience. Unfortunately, the data independence provided by a traditional relational database system, while useful for agile development, only exacerbates the problem by hiding potentially expensive queries under...

    Provided By VLDB Endowment

  • White Papers // Sep 2011

    DivDB: A System for Diversifying Query Results

    With the availability of very large databases, an exploratory query can easily lead to a vast answer set, typically based on an answer's relevance (i.e., top-k, tf-idf) to the user query. Navigating through such an answer set requires huge effort and users give up after perusing through the first few...

    Provided By VLDB Endowment

  • White Papers // Sep 2011

    Online Data Fusion

    The Web contains a significant volume of structured data in various domains, but much of this data is dirty and erroneous, and errors can be propagated through copying. While data integration techniques allow querying structured data on the Web, they take the union of the answers retrieved from different sources...

    Provided By VLDB Endowment

  • White Papers // Sep 2011

    Summary Graphs for Relational Database Schemas

    Increasingly complex databases need ever more sophisticated tools to help users understand their schemas and interact with the data. Existing tools fall short of either providing the "Big picture," or of presenting useful connectivity information. In this paper, the authors define summary graphs, a novel approach for summarizing schemas. Given...

    Provided By VLDB Endowment

  • White Papers // Sep 2011

    RemusDB: Transparent High Availability for Database Systems

    In this paper, the authors present a technique for building a High-Availability (HA) DataBase Management System (DBMS). The proposed technique can be applied to any DBMS with little or no customization, and with reasonable performance overhead. Their approach is based on Remus, a commodity HA solution implemented in the virtualization...

    Provided By VLDB Endowment

  • White Papers // Sep 2011

    A Framework for Supporting DBMS-Like Indexes in the Cloud

    To support "Database as a service" (DaaS) in the cloud, the database system is expected to provide functionality similar to that of a centralized DBMS, such as efficient processing of ad hoc queries. The system must therefore support DBMS-like indexes, possibly a few indexes for each table to provide fast location of...

    Provided By VLDB Endowment

  • White Papers // Sep 2011

    Scalable SPARQL Querying of Large RDF Graphs

    The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a...

    Provided By VLDB Endowment

  • White Papers // Sep 2011

    MapReduce Programming and Cost-Based Optimization? Crossing This Chasm With Starfish

    MapReduce has emerged as a viable competitor to database systems in big data analytics. MapReduce programs are being written for a wide variety of application domains including business data processing, text analysis, natural language processing, Web graph and social network analysis, and computational science. However, MapReduce systems lack a feature...

    Provided By VLDB Endowment

  • White Papers // Sep 2011

    Proactive Detection and Repair of Data Corruption: Towards a Hassle-Free Declarative Approach With Amulet

    Occasional corruption of stored data is an unfortunate byproduct of the complexity of modern systems. Hardware errors, software bugs, and mistakes by human administrators can corrupt important sources of data. The dominant practice to deal with data corruption today involves administrators writing ad hoc scripts that run data-integrity tests at...

    Provided By VLDB Endowment

  • White Papers // Jul 2011

    Efficient Probabilistic Reverse Nearest Neighbor Query Processing on Uncertain Data

    Given a query object q, a Reverse Nearest Neighbor (RNN) query in a common certain database returns the objects having q as their nearest neighbor. A new challenge for databases is dealing with uncertain objects. In this paper, the authors consider Probabilistic Reverse Nearest Neighbor (PRNN) queries, which return the...

    Provided By VLDB Endowment
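
    The certain-data RNN query defined in the entry above is easy to state as a brute-force scan. The sketch below is purely illustrative (the function names are mine, and it ignores the paper's probabilistic PRNN setting and any indexing):

```python
import math

def dist(a, b):
    # Euclidean distance between two 2-D points
    return math.hypot(a[0] - b[0], a[1] - b[1])

def rnn(q, points):
    """Return the data points that have the query point q as their
    nearest neighbor, with the other data points as rivals."""
    result = []
    for p in points:
        d_q = dist(p, q)
        # distance from p to its closest *other* data point
        d_data = min((dist(p, o) for o in points if o is not p),
                     default=float("inf"))
        if d_q < d_data:
            result.append(p)
    return result
```

    For example, with q = (5, 0) and data points (0, 0), (10, 0), and (100, 100), the first two points are closer to q than to any other data point, so both belong to the RNN result; the third is not.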

  • White Papers // Jun 2011

    Monitoring Reverse Top-k Queries Over Mobile Devices

    Location-based queries are widely employed to retrieve useful information based on the user's geographical position. For example, a tourist walking around a city may seek points of interest (e.g., restaurants) in her vicinity that satisfy her preferences (e.g., cheap and highly-rated). A top-k query defined by the user preferences...

    Provided By VLDB Endowment

  • White Papers // Apr 2011

    Albatross: Lightweight Elasticity in Shared Storage Databases for the Cloud Using Live Data Migration

    Database systems serving cloud platforms must serve large numbers of applications (or tenants). In addition to managing tenants with small data footprints, different schemas, and variable load patterns, such multitenant data platforms must minimize their operating costs by efficient resource sharing. When deployed over a pay-per-use infrastructure, elastic scaling and...

    Provided By VLDB Endowment

  • White Papers // Mar 2011

    Automatic Optimization for MapReduce Programs

    The MapReduce distributed programming framework has become popular, despite evidence that current implementations are inefficient, requiring far more hardware than traditional relational databases to complete similar tasks. MapReduce jobs are amenable to many traditional database query optimizations (B+Trees for selections, column-store-style techniques for projections, etc.), but existing systems do not...

    Provided By VLDB Endowment

  • White Papers // Mar 2011

    CoPhy: A Scalable, Portable, and Interactive Index Advisor for Large Workloads

    Index tuning, i.e., selecting the indexes appropriate for a workload, is a crucial problem in database system tuning. In this paper, the authors solve index tuning for large problem instances that are common in practice, e.g., thousands of queries in the workload, thousands of candidate indexes and several hard and...

    Provided By VLDB Endowment

  • White Papers // Feb 2011

    High Throughput Transaction Executions on Graphics Processors

    OLTP (On-Line Transaction Processing) is an important class of business systems in various traditional and emerging online services. Due to the increasing number of users, OLTP systems require high throughput for executing tens of thousands of transactions in a short time period. Encouraged by the recent success of GPGPU (General-Purpose computation...

    Provided By VLDB Endowment

  • White Papers // Feb 2011

    Incrementally Maintaining Classification Using an RDBMS

    The proliferation of imprecise data has motivated both researchers and the database industry to push statistical techniques into Relational DataBase Management Systems (RDBMSes). The authors study strategies to maintain model-based views for a popular statistical technique, classification, inside an RDBMS in the presence of updates (to the set of training...

    Provided By VLDB Endowment

  • White Papers // Feb 2011

    Distributed Inference and Query Processing for RFID Tracking and Monitoring

    In this paper, the authors present the design of a scalable, distributed stream processing system for RFID tracking and monitoring. Since RFID data lacks containment and location information that is key to query processing, they propose to combine location and containment inference with stream query processing in a single architecture,...

    Provided By VLDB Endowment

  • White Papers // Feb 2011

    Fast Sparse Matrix-Vector Multiplication on GPUs: Implications for Graph Mining

    Scaling up the sparse matrix-vector multiplication kernel on modern Graphics Processing Units (GPU) has been at the heart of numerous studies in both academia and industry. In this paper the authors present a novel non-parametric, self-tunable, approach to data representation for computing this kernel, particularly targeting sparse matrices representing power-law...

    Provided By VLDB Endowment
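
    The kernel that the entry above tunes on GPUs is standard sparse matrix-vector multiplication. A plain CPU version over the common CSR (compressed sparse row) layout looks like this (an illustrative sketch, not the paper's GPU code):

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a matrix A stored in CSR form:
    values  - nonzero entries, row by row
    col_idx - column index of each nonzero
    row_ptr - offset into values where each row starts (len = rows + 1)
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

    On a GPU the outer loop is parallelized across threads, and the choice of data representation (CSR versus alternatives) is exactly what tuning work of this kind targets for power-law matrices.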

  • White Papers // Feb 2011

    Automatic Wrappers for Large Scale Web Extraction

    The authors present a generic framework to make wrapper induction algorithms tolerant to noise in the training data. This enables one to learn wrappers in a completely unsupervised manner from automatically and cheaply obtained noisy training data, e.g., using dictionaries and regular expressions. By removing the site-level supervision that wrapper-based...

    Provided By VLDB Endowment

  • White Papers // Jan 2011

    Graph Indexing of Road Networks for Shortest Path Queries With Label Restrictions

    The current widespread use of location-based services and GPS technologies has revived interest in very fast and scalable shortest path queries. The authors introduce a new shortest path query type in which dynamic constraints may be placed on the allowable set of edges that can appear on a valid shortest...

    Provided By VLDB Endowment

  • White Papers // Nov 2010

    Efficient Processing of Top-k Spatial Preference Queries

    Top-k spatial preference queries return a ranked set of the k best data objects based on the scores of feature objects in their spatial neighborhood. Despite the wide range of location-based applications that rely on spatial preference queries, existing algorithms incur non-negligible processing cost resulting in high response time. The...

    Provided By VLDB Endowment
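
    The scoring scheme described above (rank data objects by the scores of feature objects in their spatial neighborhood) can be sketched by brute force. The names and the "best feature within radius r" aggregate are my own illustrative choices, not the paper's algorithms:

```python
import heapq
import math

def top_k_preference(objects, features, r, k):
    """Return the k data objects with the best-scoring feature objects
    in their neighborhood. features are (x, y, score) triples."""
    def score(p):
        # best score among feature objects within distance r of p
        return max((w for (fx, fy, w) in features
                    if math.hypot(fx - p[0], fy - p[1]) <= r),
                   default=0.0)
    return heapq.nlargest(k, objects, key=score)
```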

  • White Papers // Oct 2010

    ROXXI: Reviving Witness DOcuments to EXplore EXtracted Information

    In recent years, there has been considerable research on information extraction and constructing RDF knowledge bases. In general, the goal is to extract all relevant information from a corpus of documents, store it into an ontology, and answer future queries based only on the created knowledge base. Thus, the original...

    Provided By VLDB Endowment

  • White Papers // Sep 2010

    Nearest Neighbor Search With Strong Location Privacy

    The tremendous growth of the Internet has significantly reduced the cost of obtaining and sharing information about individuals, raising many concerns about user privacy. Spatial queries pose an additional threat to privacy because the location of a query may be sufficient to reveal sensitive information about the querier. In this...

    Provided By VLDB Endowment

  • White Papers // Sep 2010

    Secure Personal Data Servers: A Vision Paper

    An increasing amount of personal data is automatically gathered and stored on servers by administrations, hospitals, insurance companies, etc. Citizens themselves often count on internet companies to store their data and make it reliable and highly available through the internet. However, these benefits must be weighed against privacy risks incurred...

    Provided By VLDB Endowment

  • White Papers // Sep 2010

    Automatic Rule Refinement for Information Extraction

    Rule-based information extraction from text is increasingly being used to populate databases and to support structured queries on unstructured text. Specification of suitable information extraction rules requires considerable skill and standard practice is to refine rules iteratively, with substantial effort. In this paper, the authors show that techniques developed in...

    Provided By VLDB Endowment

  • White Papers // Sep 2010

    PolicyReplay: Misconfiguration-Response Queries for Data Breach Reporting

    Recent legislation has increased the requirements of organizations to report data breaches, or unauthorized access to data. While access control policies are used to restrict access to a database, these policies are complex and difficult to configure. As a result, misconfigurations sometimes allow users access to unauthorized data. In this...

    Provided By VLDB Endowment

  • White Papers // Sep 2010

    Dynamic Join Optimization in Multi-Hop Wireless Sensor Networks

    To enable smart environments and self-tuning data centers, the authors are developing the Aspen system for integrating physical sensor data, as well as stream data coming from machine logical state, and database or Web data from the Internet. A key component of this system is a query processor optimized for...

    Provided By VLDB Endowment

  • White Papers // Sep 2010

    On Dense Pattern Mining in Graph Streams

    Many massive web and communication network applications create data which can be represented as a massive sequential stream of edges. For example, conversations in a telecommunication network or messages in a social network can be represented as a massive stream of edges. Such streams are typically very large, because of...

    Provided By VLDB Endowment

  • White Papers // Sep 2010

    Read-Once Functions and Query Evaluation in Probabilistic Databases

    Probabilistic databases hold the promise of being a viable means for large-scale uncertainty management, increasingly needed in a number of real-world application domains. However, query evaluation in probabilistic databases remains a computational challenge. Prior work on efficient exact query evaluation in probabilistic databases has largely concentrated on query-centric formulations (e.g.,...

    Provided By VLDB Endowment

  • White Papers // Sep 2010

    Efficient B-Tree Based Indexing for Cloud Data Processing

    There has been increasing interest in deploying a storage system on the Cloud to support applications that require massive scalability and high throughput in the storage layer. Examples of such systems include Amazon's Dynamo and Google's BigTable. Cloud storage systems are designed to meet several essential requirements of data-intensive applications: manageability,...

    Provided By VLDB Endowment

  • White Papers // Sep 2010

    PAO: Power-Efficient Attribution of Outliers in Wireless Sensor Networks

    Sensor nodes constitute inexpensive, disposable devices that are often scattered in harsh environments of interest so as to collect and communicate desired measurements of monitored quantities. Due to the commodity hardware used in the construction of sensor nodes, the readings of sensors are frequently tainted with outliers. Given the presence...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    Using XMorph to Transform XML Data

    XMorph is a new, shape polymorphic, domain-specific XML query language. A query in a shape polymorphic language adapts to the shape of the input, freeing the user from having to know the input's shape and making the query applicable to a wide variety of differently shaped inputs. An XMorph query...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    Active Complex Event Processing: Applications in Real-Time Health Care

    The analysis of many real-world event-based applications has revealed that existing Complex Event Processing (CEP) technology, while effective for efficient pattern matching on event streams, is limited in its capability to react in real time to detected opportunities and risks or to environmental changes. The authors are the first to tackle...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    Thirteen New Players in the Team: A Ferry-Based LINQ to SQL Provider

    The authors demonstrate an efficient LINQ to SQL provider and its significant impact on the runtime performance of LINQ programs that process large data volumes. This alternative provider is based on Ferry, compilation technology that lets relational database systems participate in the evaluation of first-order functional programs over nested, ordered...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    AXART: Enabling Collaborative Work With AXML Artifacts

    The workflow models have been essentially operation-centric for many years, ignoring almost completely the data aspects. Recently, a new paradigm of data-centric workflows, called business artifacts, has been introduced by Nigam and Caswell. The authors follow this approach and propose a model where artifacts are XML documents that evolve in...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    iFlow: An Approach for Fast and Reliable Internet-Scale Stream Processing Utilizing Detouring and Replication

    The authors propose to demonstrate iFlow, the replication-based system that supports both fast and reliable processing of data streams over the Internet. iFlow uses a low degree of replication in conjunction with detouring techniques to overcome network outages. iFlow also deploys replicas in a manner that improves performance and availability...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    Peer Coordination Through Distributed Triggers

    This is a demonstration of data coordination in a peer data management system through the use of distributed triggers. These triggers express, in a declarative manner, individual security and consistency requirements of peers that cannot be ensured by default in the P2P environment. Peers manage to handle in a transparent...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    Seaform: Search As You Type in Forms

    Form-style interfaces have been widely used to allow users to access information. In this demonstration paper, the authors develop a new search paradigm in form-style query interfaces, called SEAFORM (which stands for SEarch-As-You-Type in FORMS), which computes answers on-the-fly as a user types in a query letter by letter and...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    TimeTrails: A System for Exploring Spatio Temporal Information in Documents

    Spatial and temporal data have become ubiquitous in many application domains such as the geosciences or life sciences. Sophisticated database management systems are employed to manage such structured data. However, an important source of spatio-temporal information that has not been fully utilized is unstructured text documents. In this paper, combinations...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    Interactive Route Search in the Presence of Order Constraints

    A route search is an enhancement of an ordinary geographic search. Instead of merely returning a set of entities, the result is a route that goes via entities that are relevant to the search. The input to the problem consists of several search queries, and each query defines a type...

    Provided By VLDB Endowment

  • White Papers // Oct 2009

    Xplus: A SQL-Tuning-Aware Query Optimizer

    The need to improve a suboptimal execution plan picked by the query optimizer for a repeatedly run SQL query arises routinely. Complex expressions, skewed or correlated data, and changing conditions can cause the optimizer to make mistakes. For example, the optimizer may pick a poor join order, overlook an important...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Tuning Database Configuration Parameters With iTuned

    Database systems have a large number of configuration parameters that control memory distribution, I/O optimization, costing of query plans, parallelism, many aspects of logging, recovery, and other behavior. Regular users and even expert database administrators struggle to tune these parameters for good performance. The wave of research on improving database...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Supporting Real-World Activities in Database Management Systems

    Databases are integral to many application domains in which the cycle of processing the data is complex and may involve real-world activities that are external to the database, e.g., wet-lab experiments, manual measurements, and collecting instrument readings. As a result, an update operation in the database may render dependent data...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Anonymization of Set-Valued Data Via Top-Down, Local Generalization

    Set-valued data, in which a set of values are associated with an individual, is common in databases ranging from market basket data, to medical databases of patients' symptoms and behaviors, to query engine search logs. Anonymizing this data is important if people are to reconcile the conflicting demands arising from...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Promotion Analysis in Multi-Dimensional Space

    Promotion is one of the key ingredients in marketing. It is often desirable to find merit in an object (e.g., product, person, organization, or service) and promote it in an appropriate community. In this paper, the authors propose a novel functionality, called promotion analysis through ranking, for promoting a given...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    StatAdvisor: Recommending Statistical Views

    Database statistics are crucial to cost-based optimizers for estimating the execution cost of a query plan. Using traditional basic statistics on base tables requires adopting unrealistic assumptions to estimate the cardinalities of intermediate results, which usually causes large estimation errors that can be several orders of magnitude. Modern commercial database...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Comparing Stars: On Approximating Graph Edit Distance

    Graph data have become ubiquitous and manipulating them based on similarity is essential for many applications. Graph edit distance is one of the most widely accepted measures to determine similarities between graphs and has extensive applications in the fields of pattern recognition, computer vision etc. Unfortunately, the problem of graph...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Data Processing on FPGAs

    Computer architectures are quickly changing toward heterogeneous many-core systems. Such a trend opens up interesting opportunities but also raises immense challenges since the efficient use of heterogeneous many-core systems is not a trivial problem. In this paper, the authors explore how to program data processing operators on top of Field-Programmable...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Evaluating Clustering in Subspace Projections of High Dimensional Data

    Knowledge discovery in databases provides database owners with new information about patterns in their data. Clustering is a traditional data mining task for automatic grouping of objects. Cluster detection is based on similarity between objects, typically measured with respect to distance functions. In high dimensional spaces, effects attributed to the...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Laconic Schema Mappings: Computing the Core With SQL Queries

    A schema mapping is a declarative specification of the relationship between instances of a source schema and a target schema. The data exchange (or data translation) problem asks: given an instance over the source schema, materialize an instance (or solution) over the target schema that satisfies the schema mapping. In...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Towards Low Carbon Similarity Search With Compressed Sketches

    Sketches are compact bit string representations of objects. Objects that have the same sketch are stored in the same database bucket. By calculating the hamming distance of the sketches, an estimation of the similarity of their respective objects can be obtained. Objects that are close to each other are expected...

    Provided By VLDB Endowment
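
    The core idea quoted above (estimate object similarity from the Hamming distance of compact bit sketches) can be illustrated with a toy random-hyperplane sketch. This is a generic SimHash-style construction under my own naming, not the paper's compressed sketches:

```python
import random

def make_planes(dim, bits, seed=42):
    # one random hyperplane per sketch bit
    rng = random.Random(seed)
    return [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(bits)]

def sketch(vec, planes):
    """Encode a vector as one bit per hyperplane: the sign of the dot product."""
    bits = 0
    for plane in planes:
        dot = sum(v * p for v, p in zip(vec, plane))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

def hamming(a, b):
    """Hamming distance between two integer-encoded sketches."""
    return bin(a ^ b).count("1")
```

    Vectors pointing in similar directions agree on most bits, so a small Hamming distance between sketches suggests high similarity between the underlying objects.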

  • White Papers // Aug 2009

    A Wavelet Transform for Efficient Consolidation of Sensor Relations With Quality Guarantees

    Answering queries with a low selectivity in wireless sensor networks is a challenging problem. A simple tree-based data collection is communication-intensive and costly in terms of energy. Prior work has addressed the problem by approximating query results based on models of sensor readings. This cuts communication effort if the accuracy...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Lahar Demonstration: Warehousing Markovian Streams

    Lahar is a warehousing system for Markovian streams - a common class of uncertain data streams produced via inference on probabilistic models. Example Markovian streams include text inferred from speech, location streams inferred from GPS or RFID readings, and human activity streams inferred from sensor data. Lahar supports OLAP-style queries...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Streams on Wires - A Query Compiler for FPGAs

    Taking advantage of many-core, heterogeneous hardware for data processing tasks is a difficult problem. In this paper, the authors consider the use of FPGAs for data stream processing as co-processors in many-core architectures. The authors present Glacier, a component library and compositional compiler that transforms continuous queries into logic circuits...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    XPEDIA: XML Processing for Data Integration

    Data Integration engines increasingly need to provide sophisticated processing options for XML data. In the past, it was adequate for these engines to support basic shredding and XML generation capabilities. However, with the steady growth of XML in applications and databases, integration platforms need to provide more direct operations on...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Scalable Verification for Outsourced Dynamic Databases

    Query answers from servers operated by third parties need to be verified, as the third parties may not be trusted or their servers may be compromised. Most of the existing authentication methods construct validity proofs based on the Merkle Hash Tree (MHT). The MHT, however, imposes severe concurrency constraints that...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    PLANET: Massively Parallel Learning of Tree Ensembles With MapReduce

    Classification and regression tree learning on massive datasets is a common data mining task at Google, yet many state of the art tree learning algorithms require training data to reside in memory on a single machine. While more scalable implementations of tree learning have been proposed, they typically require specialized...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Efficient Rewriting of XPath Queries Using Query Set Specifications

    The authors study the problem of querying XML data sources that accept only a limited set of queries, such as sources accessible by Web services, which can implement very large (potentially infinite) families of XPath queries. To compactly specify such families of queries they adopt the Query Set Specifications formalism...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Answering Table Augmentation Queries From Unstructured Lists on the Web

    The authors present the design of a system for assembling a table from a few example rows by harnessing the huge corpus of information-rich but unstructured lists on the web. They developed a totally unsupervised, end-to-end approach which, given the sample query rows, retrieves HTML lists...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Reasoning About Record Matching Rules

    Record matching is the problem of identifying tuples in one or more relations that refer to the same real-world entity. This problem is also known as record linkage, merge-purge, duplicate detection, and object identification. The need for record matching is evident. In data integration it is necessary to collate...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Preventing Bad Plans by Bounding the Impact of Cardinality Estimation Errors

    Query optimizers rely on accurate estimations of the sizes of intermediate results. Wrong size estimations can lead to overly expensive execution plans. The authors first define the q-error to measure deviations of size estimates from actual sizes. The q-error enables the derivation of two important results: The authors provide bounds...

    Provided By VLDB Endowment
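
    The q-error mentioned above measures multiplicative deviation; it is commonly defined as the larger of the two ratios between estimated and actual sizes. A minimal transcription of that definition (positive sizes assumed):

```python
def q_error(estimate, actual):
    """Multiplicative deviation of a cardinality estimate:
    max(estimate / actual, actual / estimate); 1.0 means a perfect estimate."""
    if estimate <= 0 or actual <= 0:
        raise ValueError("sizes must be positive")
    return max(estimate / actual, actual / estimate)
```

    Unlike additive or relative error, this measure penalizes under- and over-estimation symmetrically, which is what makes bounds on plan quality possible.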

  • White Papers // Aug 2009

    WOLVES: Achieving Correct Provenance Analysis by Detecting and Resolving Unsound Workflow Views

    Workflow views abstract groups of tasks in a workflow into composite tasks, and are used for simplifying provenance analysis, workflow sharing and reuse. An unsound view does not preserve the dataflow between tasks in the workflow, and can therefore cause incorrect provenance analysis. In this demo the authors present WOLVES,...

    Provided By VLDB Endowment

  • White Papers // Jun 2009

    RankIE: Document Retrieval on Ranked Entity Graphs

    Developer communities built around software products, like the SAP Community Network, provide a knowledge base for recurring problems and their solutions. Due to the large amount of content maintained in such communities, e.g., in forums, finding relevant solutions is a major challenge beyond the scope of common keyword-based search engines....

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Efficient Retrieval of the Top-k Most Relevant Spatial Web Objects

    The conventional Internet is acquiring a geo-spatial dimension. Web documents are being geo-tagged, and geo-referenced objects such as points of interest are being associated with descriptive text documents. The resulting fusion of geo-location and documents enables a new kind of top-k query that takes into account both location proximity and...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Schema-Based Independence Analysis for XML Updates

    Query-update independence analysis is the problem of determining whether an update affects the results of a query. Query-update independence is useful for avoiding recomputation of materialized views and may have applications to access control and concurrency control. This paper develops static analysis techniques for query-update independence problems involving core XQuery...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Pangea: An Eager Database Replication Middleware Guaranteeing Snapshot Isolation Without Modification of Database Servers

    Recently, several middleware-based approaches to database replication have been proposed. Implementing all replication functionality in a middleware layer avoids the high cost of modifying existing database servers or building new ones from scratch. However, it is a significant challenge to design middleware that enhances performance and scalability without...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Output Space Sampling for Graph Patterns

    Recent interest in graph pattern mining has shifted from finding all frequent subgraphs to obtaining a small subset of frequent subgraphs that are representative, discriminative, or significant. The main motivation behind this shift is to cope with the scalability problem that graph mining algorithms suffer from when mining databases of large...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Optimal Random Perturbation at Multiple Privacy Levels

    Random perturbation is a popular method of computing anonymized data for privacy preserving data mining. It is simple to apply, ensures strong privacy protection, and permits effective mining of a large variety of data patterns. However, all the existing studies with good privacy guarantees focus on perturbation at a single...
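The single-level case the abstract contrasts against can be sketched as additive uniform noise: each value is published perturbed, yet aggregates remain estimable because the noise has zero mean. This is an illustrative sketch only, not the paper's multi-level scheme; the function names and the uniform noise model are assumptions:

```python
import random

def perturb(values, bound, seed=None):
    """Single-level additive random perturbation: publish each value
    as value + noise, with noise drawn uniformly from [-bound, bound].
    (The paper studies how to coordinate such perturbation consistently
    across multiple privacy levels; this shows only one level.)"""
    rng = random.Random(seed)
    return [v + rng.uniform(-bound, bound) for v in values]

def estimate_mean(perturbed):
    """Uniform noise has zero mean, so the sample mean of the
    perturbed values is an unbiased estimate of the original mean."""
    return sum(perturbed) / len(perturbed)
```

The tension the paper addresses arises when the same data must be released at several noise levels: naively drawing independent noise per level lets an adversary average the releases to cancel the perturbation.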

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Scalable Delivery of Stream Query Results

    Continuous queries over data streams typically produce large volumes of result streams. To scale up the system, one should carefully study the problem of delivering the result streams to the end users, which, unfortunately, is often overlooked in existing systems. In this paper, the authors leverage a Distributed Publish/Subscribe System (DPSS),...

    Provided By VLDB Endowment

  • White Papers // Aug 2009

    Enabling Approximate Querying in Sensor Networks

    Data approximation is a popular means to support energy-efficient query processing in sensor networks. Conventional data approximation methods require users to specify fixed error bounds a priori to address the trade-off between result accuracy and energy efficiency of queries. The authors argue that this can be infeasible and inefficient when,...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    Efficient and Effective Similarity Search Over Probabilistic Data Based on Earth Mover's Distance

    Probabilistic data is arriving in a new deluge, driven by technical advances in geographical tracking, multimedia processing, sensor networks, and RFID. While similarity search is an important functionality for manipulating probabilistic data, it raises new challenges for traditional relational databases. The problem stems from the limited effectiveness...
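For intuition about the distance measure in the title: on one-dimensional histograms of equal total mass, the Earth Mover's Distance reduces to the L1 distance between their cumulative distributions. This minimal sketch is illustrative background, not the paper's (general, multi-dimensional) formulation:

```python
def emd_1d(hist_p, hist_q):
    """Earth Mover's Distance between two 1-D histograms with equal
    total mass and unit bin spacing: the sum of absolute differences
    of their running prefix sums. Each unit of mass moved one bin
    costs one unit of work."""
    assert len(hist_p) == len(hist_q)
    total = 0.0
    cum = 0.0
    for p, q in zip(hist_p, hist_q):
        cum += p - q          # surplus mass that must flow rightward
        total += abs(cum)     # work to carry that surplus one bin
    return total
```

For example, moving one unit of mass across two bins costs 2, matching the intuition of "earth moving" as mass times distance.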

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    MCDB-R: Risk Analysis in the Database

    Enterprises often need to assess and manage the risk arising from uncertainty in their data. Such uncertainty is typically modeled as a probability distribution over the uncertain data values, specified by means of a complex (often predictive) stochastic model. The probability distribution over data values leads to a probability distribution...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    Scalable Probabilistic Databases With Factor Graphs and MCMC

    Incorporating probabilities into the semantics of incomplete databases has posed many challenges, forcing systems to sacrifice modeling power, scalability, or treatment of relational algebra operators. The authors propose an alternative approach where the underlying relational database always represents a single world, and an external factor graph encodes a distribution over...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    On Multi-Column Foreign Key Discovery

    A foreign/primary key relationship between relational tables is one of the most important constraints in a database. From a data analysis perspective, discovering foreign keys is a crucial step in understanding and working with the data. Nevertheless, more often than not, foreign key constraints are not specified in the data,...
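A necessary condition for a candidate (single- or multi-column) foreign key is an inclusion dependency: every value combination in the referencing columns must appear among the referenced columns. A minimal check can be sketched as below; this is generic background, not the paper's discovery algorithm, and the function and field names are assumptions:

```python
def inclusion_holds(fk_rows, pk_rows, fk_cols, pk_cols):
    """Check the inclusion dependency R[fk_cols] <= S[pk_cols]:
    every (possibly multi-column) value tuple in the candidate
    foreign key columns must occur in the candidate key columns."""
    pk_values = {tuple(row[c] for c in pk_cols) for row in pk_rows}
    return all(tuple(row[c] for c in fk_cols) in pk_values
               for row in fk_rows)
```

Inclusion alone is not sufficient: spurious inclusions are common (e.g., any column of small integers is "included" in an ID column), which is why discovery methods combine this test with additional pruning heuristics.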

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    Building Ranked Mashups of Unstructured Sources With Uncertain Information

    Mashups are situational applications that join multiple sources to better meet the information needs of Web users. Web sources can be huge databases behind query interfaces, which creates a need to rank mashup results based on user preferences. The authors present MashRank, a mashup authoring and processing system building...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    Generating Databases for Query Workloads

    To evaluate the performance of database applications and DBMSs, the authors usually execute workloads of queries on generated databases of different sizes and measure the response time. This paper introduces MyBenchmark, an offline data generation tool that takes a set of queries as input and generates database instances for which...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    CODS: Evolving Data Efficiently and Scalably in Column Oriented Databases

    Database evolution is the process of updating the schema of a database or data warehouse (schema evolution) and evolving the data to the updated schema (data evolution). Database evolution is often necessitated in relational databases due to the changes of data or workload, the suboptimal initial schema design, or the...

    Provided By VLDB Endowment

  • White Papers // Aug 2010

    The Picasso Database Query Optimizer Visualizer

    Modern database systems employ a query optimizer module to automatically identify the most efficient strategies for executing the declarative SQL queries submitted by users. The efficiency of these strategies, called "Plans", is measured in terms of "Costs" that are indicative of query response times. Optimization is a mandatory exercise since...

    Provided By VLDB Endowment