Key Challenges in Defending Against Malicious Socialbots
Source: University of Bristol
The ease with which people adopt online personas and relationships has created a soft spot that cybercriminals are willing to exploit. Advances in artificial intelligence make it feasible to design bots that sense, think, and act cooperatively in social settings, much like human beings. In the wrong hands, such bots can be used to infiltrate online communities, build up trust over time, and then send personalized messages to elicit information, sway opinions, and call people to action. In this position paper, the authors observe that defending against such malicious bots raises a set of unique challenges relating to web automation, online-offline identity binding, and usable security.