Date Added: Apr 2012
As location-based services become widespread, many users feel exposed to serious privacy threats, making privacy protection a major challenge for such applications. A widely used approach is perturbation, which adds artificial noise to a position and returns an obfuscated measurement to the requester. The authors' main finding is that, unless the noise is chosen properly, these methods do not withstand attacks based on probabilistic analysis. In this paper, they define a strong adversary model that uses probability calculus to de-obfuscate location measurements. The model has general applicability and can evaluate the resistance of any location-obfuscation technique. They then propose UniLO, an obfuscation operator that resists such an adversary.
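To make the idea concrete, here is a minimal sketch of noise-based perturbation and the kind of probabilistic attack it can fall to. The `perturb` and `average_attack` functions are hypothetical illustrations, not the paper's UniLO operator or its adversary model: the sketch assumes zero-mean noise drawn uniformly from a disk, and shows that averaging repeated obfuscated readings of the same position converges back toward the true location.

```python
import math
import random

def perturb(lat, lon, radius_m):
    """Illustrative perturbation: draw a point uniformly at random from a
    disk of the given radius (in metres) around the true position.
    (Hypothetical sketch, not the paper's UniLO operator.)"""
    # Taking sqrt of the radial draw gives a uniform distribution over the
    # disk's area rather than clustering points near the centre.
    r = radius_m * math.sqrt(random.random())
    theta = random.uniform(0.0, 2.0 * math.pi)
    # Rough metres-to-degrees conversion, valid away from the poles.
    dlat = (r * math.cos(theta)) / 111_320
    dlon = (r * math.sin(theta)) / (111_320 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

def average_attack(samples):
    """Naive probabilistic attack: given repeated obfuscated readings of the
    same position, zero-mean noise averages out, so the sample mean
    re-approximates the true location."""
    lats, lons = zip(*samples)
    return sum(lats) / len(lats), sum(lons) / len(lons)

if __name__ == "__main__":
    random.seed(0)
    true_lat, true_lon = 45.0, 9.0
    readings = [perturb(true_lat, true_lon, 200) for _ in range(5000)]
    est_lat, est_lon = average_attack(readings)
    print(f"estimate: ({est_lat:.5f}, {est_lon:.5f})")  # close to (45.0, 9.0)
```

This is exactly the failure mode the authors warn about: noise that is independent across queries leaks the true position under repeated observation, which is why the choice of noise distribution matters.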