Given a dataset containing sensitive personal information, a statistical database answers aggregate queries in a manner that preserves individual privacy. The authors consider the problem of constructing a statistical database using output perturbation, which protects privacy by injecting a small amount of noise into each query result. They show that the state-of-the-art approach, ε-differential privacy, suffers from two severe deficiencies: it incurs prohibitive computation overhead, and it can answer only a limited number of queries, after which the statistical database must be shut down. To remedy the problem, they develop a new technique that enforces ε-differential privacy at economical cost.
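To make the output-perturbation idea concrete, the sketch below shows the standard Laplace mechanism, the textbook way to achieve ε-differential privacy by adding noise calibrated to a query's sensitivity. This illustrates the baseline approach the abstract critiques, not the authors' improved technique; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Sketch of the standard Laplace mechanism (not the paper's method).

    Adds Laplace noise with scale = sensitivity / epsilon to the true
    query answer, which yields epsilon-differential privacy for a query
    whose answer changes by at most `sensitivity` when one individual's
    record is added or removed.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale)

# Example: a COUNT query has sensitivity 1, since adding or removing
# one person changes the count by at most 1.
noisy_count = laplace_mechanism(true_answer=1000, sensitivity=1.0, epsilon=0.1)
```

Smaller ε means stronger privacy but larger noise; repeated queries consume the privacy budget, which is exactly the "limited number of queries" limitation the authors address.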