Massive Genomic Data Processing and Deep Analysis
Today, large sequencing centers produce genomic data at rates of up to 10 terabytes a day, and transforming these massive volumes of noisy raw data into biological information requires complex processing. To address these needs, the authors develop a system for end-to-end processing of genomic data, covering alignment of short read sequences, variation discovery, and deep analysis. The system also employs a range of quality control mechanisms to improve data quality, along with parallel processing techniques to improve performance. Using real genomic data, the paper shows in detail how data are transformed through the workflow, demonstrates the usefulness of the end results (which are ready for use as testable hypotheses), and evaluates the effects of the quality control mechanisms, the improved algorithms, and the resulting performance gains.
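The end-to-end workflow the abstract describes (align short reads, discover variants, filter by quality) can be sketched as a minimal toy pipeline. All function names, the naive exact-match aligner, the majority-vote variant rule, and the `min_depth` coverage filter below are hypothetical illustrations under simplifying assumptions, not the authors' actual system:

```python
# Toy end-to-end sketch: align short reads to a reference, build a pileup,
# and call variants where the majority base disagrees with the reference.
# Everything here is a hypothetical illustration; real pipelines use
# mismatch-tolerant aligners and statistical variant callers.
from collections import Counter

def align_reads(reads, reference):
    """Naively place each read at its first exact match in the reference."""
    alignments = []
    for read in reads:
        pos = reference.find(read)  # real aligners tolerate mismatches
        if pos == -1:
            # crude fallback: allow one mismatch at the read's last base
            pos = reference.find(read[:-1])
        if pos != -1:
            alignments.append((pos, read))
    return alignments

def call_variants(alignments, reference, min_depth=2):
    """Report sites where the pileup majority base differs from the reference."""
    pileup = {}
    for pos, read in alignments:
        for offset, base in enumerate(read):
            pileup.setdefault(pos + offset, []).append(base)
    variants = []
    for pos, bases in sorted(pileup.items()):
        if len(bases) < min_depth:
            continue  # quality control: skip low-coverage sites
        top, _ = Counter(bases).most_common(1)[0]
        if top != reference[pos]:
            variants.append((pos, reference[pos], top))
    return variants

reference = "ACGTACGTAC"
# three exact reads plus three reads carrying a substitution at position 4
reads = ["ACGT", "CGTA", "GTAC", "ACGTG", "ACGTG", "ACGTG"]
variants = call_variants(align_reads(reads, reference), reference)
```

Here the coverage threshold plays the role of a quality-control gate: low-depth sites are dropped before calling, mirroring the abstract's point that filtering noisy raw data precedes downstream analysis.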