Since the 2016 US Presidential Election, social media platforms have been more proactive about addressing security gaps and stopping the kind of state-run psychological operations that were endemic over the last decade.

Despite these efforts, experts say digital psychological operations run by governments are evolving in ways that will require sophisticated, multi-pronged security measures.

Charity Wright, a former NSA operative and now a researcher with cybersecurity firm IntSights, detailed the situation in her new report, “PSYOPS: How States Weaponize Social Media to Disrupt Global Politics.”

“The internet started as this open, free, fun space where you get to meet people across the world in real time and it was fun and exciting. However, we live in an era where our favorite applications, to which we devote hours each day, are being weaponized against us by governments that wish to influence how we think and behave,” Wright said.

“While many people are just now hearing about misinformation campaigns, ‘fake news’, and social media bots, psychological warfare is as old as war itself,” she said.


While most of her report focuses on attacks led by Russia and China, she noted that many other countries, including the US, are now conducting these kinds of social media-based attack campaigns, partially in an effort to keep up with the world’s superpowers.

Since the release of Special Counsel Robert Mueller’s report on Russian interference in the 2016 presidential election, analysts have learned even more about the breadth and depth of how social media was used maliciously.

Russia’s state-run Internet Research Agency spent five years using Facebook, Instagram, Twitter and other sites to push real, but contentious, issues and stir fierce debate across the US.

Research firm New Knowledge released a lengthy report on the Internet Research Agency, saying the group managed to reach over 126 million people on Facebook, 20 million users on Instagram, 1.4 million users on Twitter, and uploaded over 1,000 videos to YouTube.

Wright’s study notes that the Internet Research Agency was spending up to $25 million a year on this project.

With this information, social media platforms largely did the right thing, creating new ways to shut down fake profiles and take down posts containing false information. But once the Internet Research Agency’s activities were revealed publicly, the group moved to a new strategy.

“During the 2016 election and afterwards, all this truth started coming out about the Internet Research Agency. Now that they’ve been outed, they kind of pivoted toward platforms that don’t catch them as much,” Wright said.

“So instead of creating tweets that can be analyzed, they start creating memes, literal meme warfare, because it’s harder to catch images than threatening messages or text. They’ve pivoted over to Instagram and started creating all these memes and photos and plastered it all over.”

The Internet Research Agency managed to get over 187 million engagements on Instagram, compared to 77 million on Facebook and 73 million on Twitter. This switch to Instagram was particularly worrying, Wright said, because it showed that these kinds of operations could evolve quickly and change platforms at will.

She added that in an effort to gain access to young Americans, the Internet Research Agency was also making inroads on platforms like Snapchat and TikTok.

“Users of these platforms should remain vigilant and aware that the Internet Research Agency uses ‘click farms’, or bots, to generate ‘likes’ on certain posts and accounts. This gives the audience a false sense of support for that post and the view it is trying to propagate,” Wright said.

“Awareness of this threat could help to significantly counter these efforts. Americans should use discretion when communicating with and connecting with new, previously unknown accounts.”

Wright did say that sites like Twitter were becoming much better at spotting mass disinformation campaigns and stopping them before they could spread further.

She pointed to recent efforts by China to sway public opinion on the protests in Hong Kong as an example of Twitter taking proactive measures to stop abuse of their platform.

“Last week, China’s state-run social media campaign put out some tweets about the protesters planning a terrorist attack similar to 9/11 in Hong Kong, which was totally fabricated. But they used hundreds of their bots to retweet and reshare it to incite fear into people supporting the protesters,” Wright said.

“But they became careless because they were panicking about how the situation made them look. All of a sudden you see thousands of tweets and posts, so many that it became obvious. Twitter stepped in and shut down thousands of their accounts and rebuilding that army of twitter bots may take more work.”

Twitter shut down hundreds of thousands of accounts and released a report about the entire disinformation campaign.

Google also stepped in, closing down 210 YouTube channels and releasing its own report on the problem.

“Covert, manipulative behaviors have no place on our service–they violate the fundamental principles on which our company is built. These deceptive strategies have been around for far longer than Twitter has existed. They adapt and change as the geopolitical terrain evolves worldwide and as new technologies emerge,” Twitter’s statement said.

“We will continue to be vigilant, learning from this network and proactively enforcing our policies to serve the public conversation. We hope that by being transparent and open we will empower further learning and public understanding of these nefarious tactics.”

Although Twitter had success in keeping the tweets from spreading too far, the media campaign was successful inside China, she said, because the government has total control over the internet and all media outlets.

Americans, Wright said, also had a duty to be more critical of what they read before believing it or sharing it with friends and family. Vigilance, she added, was the key to avoiding the problems that cropped up in the past.

“The internet has transitioned into a space that is regulated; governments are trying to take more control of the flow of data. The best thing for us to protect ourselves is to be more aware and to be more cautious when we’re on social media,” Wright added.

“When we see things that get us stirred up, we should question, ‘Do I know this person? Is there truth behind this or am I just going to believe everything on the internet?’ It is hard because a lot of these look real.”
