Hadoop.TS: Large-Scale Time-Series Processing

In this paper, the authors describe Hadoop.TS, a computational framework for large-scale time-series analysis. Because all components are reusable, it allows rapid prototyping of new algorithms. Generic data structures represent different types of time series, e.g. event and interevent time series, and define reliable interfaces to existing big data sets. The framework supports three major modes of operation: standalone applications, highly scalable MapReduce programs, and user-defined functions for Hadoop-based analysis frameworks. Efficient implementations of univariate and bivariate analysis algorithms are provided, e.g. for long-term correlation, cross-correlation, and event synchronization analysis on large data sets.
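The paper itself does not show code here; as a rough illustration of what a generic time-series data structure might look like, the following minimal Java sketch models an event time series and derives its interevent series. All class and method names are illustrative assumptions, not the actual Hadoop.TS API.

```java
import java.util.Arrays;

/**
 * Minimal sketch of a generic time-series container in the spirit of
 * Hadoop.TS. Names are hypothetical and chosen for illustration only.
 */
public class EventTimeSeries {
    private final double[] timestamps; // event occurrence times, sorted ascending

    public EventTimeSeries(double[] timestamps) {
        this.timestamps = timestamps.clone();
        Arrays.sort(this.timestamps);
    }

    /** Derive the interevent time series: gaps between consecutive events. */
    public double[] toInterevent() {
        double[] gaps = new double[Math.max(0, timestamps.length - 1)];
        for (int i = 1; i < timestamps.length; i++) {
            gaps[i - 1] = timestamps[i] - timestamps[i - 1];
        }
        return gaps;
    }

    public static void main(String[] args) {
        EventTimeSeries ts = new EventTimeSeries(new double[] {0.0, 1.5, 2.0, 5.5});
        System.out.println(Arrays.toString(ts.toInterevent())); // [1.5, 0.5, 3.5]
    }
}
```

In a MapReduce mode of operation, a container like this could serve as the value type passed between mappers and reducers, with the univariate and bivariate analysis algorithms operating on the derived series.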

Provided by: International Journal of Computer Applications | Topic: Data Management | Date Added: Jul 2013 | Format: PDF