Since the election of the current US president, many news agencies have discussed the possibility that social engineering was used as part of targeted influence campaigns, and how those campaigns may have affected voters during the 2016 election.

SE at the nation-state level

Federal indictments describe the use of Facebook and Twitter by foreign agents to push narratives and sway opinion on a variety of topics, using automated accounts to re-post information in line with the desired perspective, thus tapping into the tribal mentality. Actual US citizens also used these techniques, knowingly or not, to push their own storylines and talking points.

These are great platforms for getting a message out, whether legitimate or not, but that was not the only form of SE seen during that campaign.

It has been reported that foreign agents even made the trip to US soil to interact with Americans in a variety of political and professional positions in an attempt to learn more about their targets. They met with rally organizers and local grassroots advocates to better understand which segments of the population could be manipulated to conform to their objectives. One such meeting in Texas revealed the need to focus on “purple states,” where a smaller, influenced population could have a more dramatic effect.

As an example, it is far more resource-intensive to flip a blue state to red or vice versa, but a purple state only takes a few extra voters to flip one direction or another. Targeted influence campaigns against specific populations of people can accomplish that goal while still making them think it was their idea all along. 

It all comes down to knowing your targets, goals, and obstacles well enough to maneuver with the least effort and still accomplish your goal. This is true for any type of influence campaign, whether professional or not.

It is one thing to use phishing emails and social media posts to try to influence voters, but impersonating fellow citizens offered an opportunity to gather vital data for the campaign by building rapport face-to-face. 

The media discussed the possibility that protesters may have been paid to voice a specific opinion, and it has come to light that this specific technique was used by foreign operatives at some protests. So information about these payoffs may have been available, but it was attributed to the wrong actors.

It was also determined that foreign operatives set up their own protests and marketed them through fake accounts, using social media posts and advertising targeted at specific demographics of users, to the point that thousands of people actually showed up.

While these may be extreme and rare examples, the fact that they were used at all shows the lengths a motivated group was willing to go to in an attempt to influence a specific demographic.

All of this is both disturbing and enlightening from a professional social engineer’s perspective. To know that these techniques are used this way in the real world, and can have such a huge impact on public opinion, is truly astonishing.

What can be done about all this going forward?

In the current information age, it can be difficult to determine which information is real and which is composed to solidify or change your opinion on a topic. Social engineering awareness training often promotes critical thinking as the go-to defense against these threats.

Today’s information overload tends to push people into bubbles of common thought, so they see only the information they already agree with.

To combat these threats, people need to get out of those bubbles, get offline, and talk to other people, even those they may not agree with.

Verify news sources that have extreme points of view, and even ones that consistently appear in your news feed. Does a given source try to address both sides of an issue, or does it focus like a laser on a particular narrative?

A web of trusted sources can go a long way to combat the emergence of “fake news” and automated posts flooding your social media accounts. 

Additionally, if a new piece of information “just doesn’t sit right” or gives you “a weird gut feeling,” take the time to verify its validity, even just a little; you may uncover a legion of bots and trolls that could significantly change your view of where your news is coming from.

Also, just because a juicy piece of news fits your established viewpoint doesn’t mean it is actually true or worth promoting. Passing along incorrect or slanted information only perpetuates the problem and further muddies the waters of truth.

It is very likely that similar campaigns will be run in the future, so it is vital that you, the reader, understand the threats and think critically about the information presented to you.

Sources: 
https://www.msn.com/en-us/news/technology/how-a-twitter-fight-over-bernie-sanders-revealed-a-network-of-fake-accounts/ar-BBKbTZ3?li=AA4Zoy 
https://www.npr.org/2018/02/17/586698361/the-russia-investigations-mueller-indicts-the-internet-research-agency 
https://slate.com/technology/2018/02/what-we-know-about-the-internet-research-agency-and-how-it-meddled-in-the-2016-election.html 
https://thinkprogress.org/black-matters-us-site-90625b18f262/ 
https://consultgiana.com/truth-lies-and-the-future-of-your-organization/