Learn how artificially created situations influence our behavior online and what, if anything, can be done to reduce the influence of social defaults.
Recently, there's been a fair amount of press coverage discussing how trolls and bots are influencing public opinion, legislation, and even elections in the United States.
As to trolls, we can assume members of the press are not referring to mythical creatures such as the one who lives under a bridge in the story Three Billy Goats Gruff. In his LinkedIn post What is the difference between a Troll and an Internet Bot, Jarrett Potts, a senior sales manager at Lenovo, defines a troll as:
"A person who sows discord on the internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a newsgroup, forum, chat room, or blog) with the intent of provoking readers into an emotional reaction."
The trolls currently making news are likely anonymous state-sponsored individuals trying to alter public opinion by blanket posting on social media and/or popular blogs.
SEE: Social Media and Web Usage Policy (Tech Pro Research)
Why trolling works
In their 2014 seminal paper Social Defaults: Observed Choices Become Choice Defaults (PDF), authors Young Eun Huh, Joachim Vosgerau, and Carey K. Morewedge write:
"We suggest that just as observing others' behavior can induce behavioral mimicry, observing others' choices can induce choice mimicry. Observing others' choices may cause their choices to become default options, which are automatically adopted unless consumers believe it is inappropriate to imitate those choices or have sufficiently strong preferences, cognitive resources, and motivation to diverge before choosing."
What role do bots play?
This is how Potts defines a bot:
"An internet bot--also known as web robot, WWW robot, or simply bot--is a software application that runs automated tasks (scripts) over the internet. Typically, bots perform tasks that are both simple and structurally repetitive, at a much higher rate than would be possible for a human alone."
Bots are especially helpful for gathering information and interacting with instant messaging programs, Internet Relay Chat (IRC), and various other web interfaces--saving time and effort for the person or organization controlling the bots. For example, travelers tend to get frustrated trying to find good hotel accommodations at a bargain price. The people at SnapTravel offer a service that takes much of the pain away: Using Natural Language Processing and artificial intelligence (AI), SnapTravel programmers have created chatbots that let travelers find and book rooms via SMS texting and Facebook Messenger.
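To make the idea concrete, here is a minimal sketch of the kind of keyword-based intent matching that sits beneath a conversational booking bot. Everything here is hypothetical and greatly simplified; services like SnapTravel rely on far more sophisticated NLP models.

```python
import re

# Hypothetical intent patterns a travel chatbot might match against
# an incoming SMS message. Pattern names and rules are illustrative only.
INTENT_PATTERNS = {
    "book_room": re.compile(r"\b(book|reserve|need)\b.*\b(room|hotel)\b", re.I),
    "check_price": re.compile(r"\b(price|cost|rate|deal)\b", re.I),
}

def classify_intent(message: str) -> str:
    """Return the first matching intent name, or 'unknown'."""
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(message):
            return intent
    return "unknown"

print(classify_intent("I need a hotel room in Boston tonight"))  # book_room
print(classify_intent("What's the rate for Friday?"))            # check_price
```

Once an intent is recognized, a production bot would extract details (dates, city, budget) and hand them to a booking backend--the part that actually saves the traveler time and effort.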
SEE: How to implement AI and machine learning (ZDNet/TechRepublic special report, PDF)
Those savings in time and effort are not lost on those trolling the internet. Since the number of posts required to create social defaults is significant, why not use bots to make the intended misinformation go viral? "Our research demonstrates that automatic forms of social influence are more pervasive than previously thought," explain Huh, Vosgerau, and Morewedge. "The automatic processes that underlie behavioral mimicry appear to not only influence nonverbal communication, emotions, and behavior when people interact, but [also lead people to] adopt the same preferences as other consumers."
Put simply, by obtaining or creating bot software, internet trolls--governments, organizations, or individuals--can automatically post enough inaccurate commentary (fake news, for example) on social media outlets to create the "social-default situations" defined earlier.
Unfortunately, this is no longer theory
A consortium of academic researchers, led by Chengcheng Shao, has compiled evidence of social bots spreading fake news. The team's findings were published in the research paper The spread of fake news by social bots. To prevent misleading information, such as fake news, from becoming a social default, Shao and fellow researchers have developed two online platforms that help sort out what is real and what is not: Hoaxy and Botometer.
Hoaxy tracks fake-news claims. From the Hoaxy website: "Hoaxy visualizes the spread of claims and related fact-checking online. A claim may be a fake news article, hoax, rumor, conspiracy theory, satire, or even an accurate report." As to how Hoaxy (which is currently in beta) works, the researchers track the social sharing of links to stories published by two types of websites:
- Independent fact-checking organizations, such as snopes.com, politifact.com, and factcheck.org that routinely fact check unverified claims; and
- Sources that often post inaccurate, unverified, or satirical claims according to lists compiled and published by reputable news and fact-checking organizations.
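The two-list approach described above can be sketched as a simple domain lookup on each shared link. The fact-checker domains below come from the article; the low-credibility domains and the function name are illustrative assumptions, not Hoaxy's actual implementation.

```python
from urllib.parse import urlparse

# Fact-checking domains named in the article; Hoaxy's real source lists
# are compiled from reputable news and fact-checking organizations.
FACT_CHECKERS = {"snopes.com", "politifact.com", "factcheck.org"}
# Hypothetical placeholder domains standing in for low-credibility sources.
LOW_CREDIBILITY = {"example-fake-news.com", "satire-daily.example"}

def classify_link(url: str) -> str:
    """Label a shared URL as fact-checking, low-credibility, or untracked."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in FACT_CHECKERS:
        return "fact-checking"
    if domain in LOW_CREDIBILITY:
        return "low-credibility claim source"
    return "untracked"

print(classify_link("https://www.snopes.com/fact-check/some-claim/"))
```

Tracking how often each category of link is shared, and by whom, is what lets Hoaxy visualize a claim spreading alongside the fact-checks chasing it.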
Botometer checks the activity of a Twitter account and assigns it a score reflecting the likelihood that the account is bot-controlled. The higher the score, the more likely the account is being operated by a bot.
The developers of Hoaxy and Botometer are quick to point out that bot detection is difficult:
"Many criteria are used in determining whether an account is controlled by a human or a bot, and even a trained eye gets it wrong sometimes. If this task were easy to do with software, there wouldn't be any bots--Twitter would have already caught and banned them!"
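That caution matters when acting on a score. The sketch below shows one way a consumer of such a score might bucket it into rough labels; the 0-1 scale, the cutoffs, and the function name are all assumptions for illustration--Botometer publishes its own scale and, as the quote above notes, warns that no automated judgment is definitive.

```python
def interpret_bot_score(score: float, threshold: float = 0.5) -> str:
    """Map a hypothetical 0-1 bot-likelihood score to a rough label.

    The cutoffs here are illustrative assumptions, not Botometer's
    official guidance; even a trained eye gets bot detection wrong.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if score >= 0.8:
        return "likely bot"
    if score >= threshold:
        return "possibly automated"
    return "likely human"

print(interpret_bot_score(0.9))  # likely bot
print(interpret_bot_score(0.2))  # likely human
```

Keeping a middle "possibly automated" band, rather than a single pass/fail cutoff, reflects the developers' point that bot detection is inherently uncertain.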
Cybercriminals are watching
As cybercriminals and those intent on causing harm better understand the psychology behind the phenomenon of social defaults (the internet version of herd mentality), events like the swaying of elections will likely become commonplace. The best approach is to use tools such as Hoaxy and Botometer to complement, but not replace, one's own judgment based on personal research into the validity of an online claim.
Also see
- Supreme Court puts new restriction on patent trolls (ZDNet)
- How Russian trolls lie their way to the top of your news feed (CNET)
- Help me AI, you're my only hope: Tech giants to Senate on Russian election meddling (TechRepublic)
- Gallery: Did you click on these Russian-backed Facebook ads? (TechRepublic)
- When it comes to web traffic, 79% of CISOs can't tell the difference between humans and bots (TechRepublic)
- Beware of the bots: How they're created and why they matter (TechRepublic)
- Extra, extra! That fake news story might come with malware (TechRepublic)
- Troll hunting: Google launches new AI tool to counter 'toxic' and abusive comments (TechRepublic)
- Gallery: Internet trolls, ranked by state (TechRepublic)
- iHate special feature (CNET)