Provided by: University of California San Francisco
Date Added: Feb 2010
Security issues are crucial in a number of machine learning applications, especially in scenarios dealing with human activity rather than natural phenomena (e.g., information ranking, spam detection, malware detection, etc.). In such cases, learning algorithms must be expected to cope with data that has been manipulated to hamper decision making. Although some previous work has addressed the handling of malicious data in the context of supervised learning, very little is known about the behavior of anomaly detection methods in such scenarios.