It’s always difficult to say anything definite about cybercrime with any modicum of confidence, since most of what happens on the “dark side” of the internet is hidden from our eyes. We have very little information on how much of it there is, who is committing the crimes and from where. We know some answers to these questions some of the time, but the bigger picture remains elusive. If we look at crime statistics, we can let out a partial sigh of relief, since the amount of cybercrime reported to the police looks like something we can actually deal with. But you can multiply that number by a thousand to account for the detected but unreported cases, and then multiply it by a thousand again to estimate how many cases go wholly undetected. And that’s probably a laughably conservative estimate. This is, in part, a problem of classification: whenever an organization boasts about how it has stopped “tens of thousands of cyber attacks”, you can be sure it counts each and every touch of a network scanner as an individual attack.
On the technical side of the attack spectrum we see only a tiny fraction of the bad activity going on in our networks, since most individuals and organizations are not looking very carefully. And I don’t blame them: computers are hard enough to use in day-to-day life, so it’s not reasonable to expect grepping log files or rigging up personal firewall and IDS platforms to become the new citizen skill required to keep your home network safe. In organizations, however, it should be a routine activity, given the same consideration as fire safety and occupational hazards. It usually isn’t.
A lot of the work protecting internet users is being done in the dark too. Passive and active technical protections are now part of even consumer hardware, and ISPs are cooperating with CERTs, companies and not-for-profit organizations to find and take down malicious content. The tough struggle between the good guys and the bad guys goes on in the background while we rush to digitize every aspect of our lives and connect every possible device to the internet. It’s a serious fight to keep all this usable, and I’m not so sure the good guys are winning it. I hope so, but I guess only time will tell.
What we do have good visibility on, though, are the types of attacks that by their nature need to surface to reach the users. I have a gut feeling that so-called social attacks are becoming more prevalent every day, surpassing the more technical intrusions and breaches as the basic mode of operations for underground criminal organizations. There are a lot of good reasons for criminals to focus their efforts on exploiting the user instead of exploiting computer systems directly – it’s a lot easier, and when successful, they can gain access to systems with legitimate credentials. Our brain’s “software” is really hard to upgrade, and how our minds work is still very opaque to us: we are only just starting to figure out how human minds work, and the field of psychology gains new insights every day.
What all social attacks have in common is that they leverage some of the better-known features of social psychology. We’ve evolved to live among other people, and successfully navigating social environments is absolutely essential for our psychological well-being. Most of the traits and skills required to thrive in a social environment are passed down genetically or inherited from our closest social circles, our family and peers. How we manifest them is shaped by culture, by our personality and by where we stand on the Big Five personality traits. There is good reason to believe that your personality affects how likely you are to become the victim of a social cyberattack. A Dutch study from 2017 found surprising and significant correlations that affect whether people become victims of cybercrime (highlights mine):
Against our expectation, a significant association was found for conscientiousness, but not for agreeableness. Individuals who were more conscientious have a decreased risk to become a victim of cybercrime (Odds Ratio [OR]: 0.981). In addition, also people who showed more emotional stability were less likely to be victimized by cybercrime (OR: 0.959), while those who were more open to experience were more likely to be a victim of cybercrime (OR: 1.044). Moreover, the control variables show that men and young people were more likely to become victims of cybercrime than women and older people.– Steve G.A. van de Weijer, PhD, and E. Rutger Leukfeldt, PhD
The cybercrime covered by the study ranged from online bullying to virus infections and hacking, so the profile of victimization might look different if only social attacks were studied. If I had to guess, conscientiousness is a personality trait that helps people adhere to security best practices and act according to secure principles like “do not click links in email messages”.
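To get a feel for what odds ratios like 0.959 and 1.044 mean in practice, here is a small sketch. The 10% baseline victimization rate is a made-up assumption (the paper reports odds ratios, not baseline probabilities); the helper simply converts a probability to odds, scales the odds, and converts back:

```python
def apply_odds_ratio(baseline_p, odds_ratio):
    """Return the new probability after multiplying the baseline
    odds (p / (1 - p)) by the given odds ratio."""
    odds = baseline_p / (1.0 - baseline_p)
    new_odds = odds * odds_ratio
    return new_odds / (1.0 + new_odds)

# Hypothetical 10% baseline risk of cybercrime victimization:
baseline = 0.10

# One extra point of emotional stability (OR 0.959 in the study)
# nudges the risk down only slightly:
print(apply_odds_ratio(baseline, 0.959))  # ≈ 0.096

# One extra point of openness to experience (OR 1.044) nudges it up:
print(apply_odds_ratio(baseline, 1.044))  # ≈ 0.104
```

The takeaway is that these per-point effects are real but small, which is one more reason not to use them to single out individuals.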
Another study by Fawn T. Ngo and Raymond Paternoster found a significant correlation between low self-control and online victimization, but this only applied to some specific types of victimization:
Our findings with respect to the general theory appears to be consistent with that reported in a previous study in which the authors found low levels of self-control predicted person-based cybercrime victimization (i.e., offenses where the individual was the specific target) but not computer-based victimization (i.e., offenses where computers were the targets; see Bossler & Holt; 2010).– Fawn T. Ngo & Raymond Paternoster
Another study, from the University of Exeter, found similar results (highlights mine):
There is strong evidence that low self-control plays a substantial role in victimization in general (Carter, 2001; Gottfredson & Hirschi, 1990; Tangney, Baumeister, & Boone, 2004) and fraud specifically (Holtfreter, Reisig, & Pratt, 2008) while reducing the effect of demographic factors such as gender and income (Holtfreter, Reisig, Leeper Piquero, & Piquero, 2010; Schreck, 1999). Individuals with low self control have difficulties controlling their emotions, leaving them vulnerable to errors in judgment (Tangney et al., 2004) that lead to less than optimal decisions when responding to scams (Langenderfer & Shimp, 2001).
Those participants who reported lower levels of completed education were also more likely to be scam compliant (for each completed level of education the odds of compliance halved: odds ratio of 0.50).
Extraverted individuals were more likely to respond, as were those who were less open to new experiences (openness) and had lower high-risk preferences (sensation seeking). Those who were not good at predicting the outcome of a scam (premeditation) and were prone to act impulsively under negative affect (urgency) were also more likely to be compliant.– David Modic & Stephen E.G. Lea
There is good reason to believe that studies exploring victimization by fraud apply to victimization by social attacks, since most social attacks are essentially a form of fraud. Lumping all criminal acts involving a computer under the blanket category of “cybercrime” can be misleading: getting phished or romance-scammed has about as much to do with a virus infection as getting scammed out of your pension has to do with getting stabbed in an alley for your wallet. Being aware of the risk factors for social attacks is useful when designing interventions against them.
With the knowledge that attackers use common psychological tricks, it can be postulated that some individuals are more at risk than others. The studies hint at this with findings on personality features playing a role in individual risk profiles, but it is currently not reasonable to use those findings to single out high-risk individuals. Observing criminal activity can provide some empirical evidence, since criminals’ process of victimization does not care about carefully controlled studies: they attack the targets that they believe yield the highest reward. That is probably why more romance scam victims seem to be women than men, while investment fraud victims are more often men. The University of Exeter study found that women are eight times more likely to respond to fraudulent communication (page 23), which places them at a higher risk of victimization in female-targeted crimes like romance scams. The higher risk for young people and men found in the Dutch study might be driven by a higher risk of technical attacks like virus infections.
How you are being played
Cybercriminals leverage your psychology by playing tricks on your mind, whether it’s a scary message from someone posing as your bank (“your account is about to be locked”) or a lucrative offer to make thousands online with just a few hours of “work” every week. The lure and scare of these messages can be very powerful, and engaging in further communication with the attacker gives them more opportunities to pull the victim deeper into the scam.
Jonathan J. Rusch from the Department of Justice detailed the social psychology phenomenon known as “routes of persuasion” in his paper titled “The ‘Social Engineering’ of Internet Fraud”. It was published in 2003, so the technology-specific examples are outdated (I haven’t seen many promises of a free 56k modem in my inbox lately), but the psychology of the attack type remains essentially the same.
The two methods of extracting compliance described are the central route of persuasion and the peripheral route of persuasion. This quote describes why the latter is of interest to scammers and cybercriminals (highlights mine):
A central route to persuasion marshals systemic and logical arguments to stimulate a favorable response, prompting the listener or reader to think deeply and reach agreement.
A peripheral route to persuasion, in contrast, relies on peripheral cues and mental shortcuts to bypass logical argument and counterargument and seek to trigger acceptance without thinking deeply about the matter. [14, 15]
As every scheme to defraud necessarily involves the offering of goods or services in ways that misrepresent their objective qualities and features, the principals in the scheme can never afford to use a direct route to persuasion, and therefore invariably fall back on methods using peripheral routes to persuasion.
One way in which a criminal can make prospective victims more susceptible to peripheral routes to persuasion is by making some statement at the outset of their interaction that triggers strong emotions, such as excitement or fear.
In other types of fraud that involve strong personal interaction, such as telemarketing fraud, criminals construct their schemes to ensure that at or near the beginning of their interaction with a prospective victim, they will make some statements or actions, such as the promise of a substantial prize worth hundreds or thousands of dollars, that will cause the prospective victim to become immediately excited. 
These surges of strong emotion, like other forms of distraction, serve to interfere with the victim’s ability to call on his or her capacity for logical thinking, such as his capacity for counterargument.  This aids the criminal in making false representations that exploit a peripheral route to persuasion.– Jonathan J. Rusch
The way these emotional triggers are manipulated via the peripheral route to persuasion can be seen in the various scams circulating on the internet. Consider this example screenshot of a phishing message sent from “Apple”:
This is a well-crafted message, with some thought put into it. Let’s walk through the average user’s reactions when they receive this message:
“But I didn’t order an iTunes gift card. Somebody must have hacked my account or stolen my credit card! Oh no!” This is the emotional response the attacker is looking for. The user will probably think their account credentials were stolen, since they are receiving a receipt from Apple; if the criminals had merely stolen their card, it’s highly unlikely they’d be receiving purchase receipts. Just reaching this insight might be enough to make the user feel at least a little bit in control of the situation.
This is a simple scare tactic. The criminals want you to get emotional and act without thinking about it too much. The user’s next thought is probably “I have to dispute this payment and let Apple know my account was hacked!”. There is a sense of urgency there, since if the user receives this receipt now, it means that the criminals are also active right now.
The user needs to quickly dispute the payment, and this is where the clever part comes – the two most striking visual aspects of the message are the Apple logo and the “Dispute Transaction” button. (I’ve also seen a version that says “recovery my account”, but that’s maybe jumping to conclusions on the user’s behalf, and the spelling error doesn’t help.)
As they scramble to regain control of their account while mentally making a list of all the photos, notes and messages they’d hate to see in the wrong hands, they’ll probably swallow the bait and click the link in the message, bringing them right to the attacker-controlled login page (which is probably piggybacking on a hacked WordPress site of a Brazilian shoe store somewhere).
The way to defend against this kind of attack is to take a deep breath, calm your emotions and really look at the message. There are several subtle clues and one very obvious clue that this message is fraudulent. Some of the subtle clues are:
- It’s addressed to “Clients”, even though Apple knows your name.
- It’s always in dollars, even if you live somewhere where you deal with Apple using your own currency.
- It’s sent from an address not belonging to @apple.com.
- The links will take you to a site that’s not an Apple controlled domain.
…and the obvious clue is:
- Why is there a big blue “dispute payment” button on a receipt? That hasn’t been there before. It’s not supposed to be there. Why is it so strikingly obvious that you can’t help but to look at it?
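The subtle clues above can even be checked mechanically. This is a deliberately naive sketch; `phishing_signals` and its parameters are illustrative inventions, not part of any real mail-filtering library, and a real filter would also inspect headers, SPF/DKIM results and URL reputation:

```python
from urllib.parse import urlparse

def phishing_signals(sender, greeting, links, expected_domain="apple.com"):
    """Return a list of red flags for a message claiming to come
    from expected_domain. Naive on purpose: it only checks the
    greeting, the sender's domain and where the links point."""
    def belongs_to(host):
        host = (host or "").lower()
        return host == expected_domain or host.endswith("." + expected_domain)

    signals = []
    if greeting.strip().lower() in {"dear client", "dear clients", "dear customer"}:
        signals.append("generic greeting instead of your name")
    if not belongs_to(sender.rsplit("@", 1)[-1]):
        signals.append("sender address is not @" + expected_domain)
    for url in links:
        if not belongs_to(urlparse(url).hostname):
            signals.append("link points outside " + expected_domain + ": " + url)
    return signals

# A message shaped like the scam above trips all three checks:
print(phishing_signals(
    sender="billing@apple-support.xyz",
    greeting="Dear Clients",
    links=["http://shoestore-example.com.br/wp-content/login.html"],
))
```

A legitimate receipt (sender under apple.com, your actual name, links to apple.com) would return an empty list; the point is that every clue on the list above is something a calm reader, or a dumb script, can verify.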
The obvious clue is, of course, the trap. If you keep a cool head and really look at the messages you receive, especially when you feel an emotion stirring, you’ll be able to spot a phishing message a mile away. But the sad truth is that phishing is usually really effective, even when the message isn’t especially targeted. The Verizon Data Breach Report for 2017 found that:
7.3% of users across multiple data contributors were successfully phished—whether via a link or an opened attachment. That begged the question, “How many users fell victim more than once over the course of a year?” The answer is, in a typical company (with 30 or more employees), about 15% of all unique users who fell victim once, also took the bait a second time. 3% of all unique users clicked more than twice, and finally less than 1% clicked more than three times.– Verizon 2017 Data Breach Investigations Report
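In absolute terms those percentages compound like this (a back-of-the-envelope sketch; the 10,000-user organization is a made-up example):

```python
users = 10_000  # hypothetical organization

phished_once = users * 0.073          # 7.3% fell for a phish at least once: 730 people
phished_twice = phished_once * 0.15   # 15% of those took the bait a second time: ~110
phished_more = phished_once * 0.03    # 3% of victims clicked more than twice: ~22

print(phished_once, phished_twice, phished_more)
```

Even the "less than 1%" tail of serial clickers leaves a handful of reliably phishable accounts in an organization this size, and an attacker only needs one.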
So, phishing works. The psychological trick is validated. The way to combat it is relentless education and detection methods inside your organization, and of course, rolling out two-factor authentication, which can stop a phisher dead in their tracks. There are, of course, ways to work around two-factor authentication protections, but opportunistic attacks targeting hundreds of thousands of users simultaneously are more common, and they usually won’t bother phishing your expiring token to exploit it in real time. After Google issued security keys to all of its 85,000+ employees, they haven’t been successfully phished since. At least, not their Google credentials.
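To see why a phisher would have to act in real time, here is a minimal sketch of the time-based one-time password algorithm (TOTP, RFC 6238) behind most authenticator apps: each code is an HMAC over a shared secret and the current 30-second window, so a stolen code goes stale almost immediately. (The hardware security keys Google deployed go further still; they sign a challenge bound to the site’s origin, so there is no code to phish at all.)

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, timestep=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time window,
    dynamically truncated to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# The RFC 6238 test secret "12345678901234567890", base32-encoded:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET))  # the code valid for the current 30-second window
```

With fixed timestamps this reproduces the RFC 6238 test vectors, which is a handy sanity check; the operational point is that any phished code expires with its window, forcing the attacker into a live relay rather than a leisurely credential harvest.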
The power of social attacks is so well known that even the most advanced groups backed by nation states usually start their hacking operations with a targeted phishing message. These messages leverage information gathered through open-source intelligence and contain target-specific content to lure the victim into opening the message or the included attachment. The success rate of spear-phishing messages is probably staggering, even though we don’t have access to the actual numbers; the continued use of such tactics is a testament to their effectiveness.
By becoming familiar with the peripheral route to persuasion you can put roadblocks on the routes criminals use to bypass your critical faculties. Being aware of the attack methods lets you design to-the-point education for the people in your organization to prevent, detect and mitigate these devastating attacks. There are also novel ways to turn spotting phishing messages into a game, with company-wide leaderboards and a one-click way to report suspicious messages.
Oh, and of course, phishing is just one of the many social attacks available to criminals. However, adopting a skeptical mindset and calming yourself before acting is good first aid against all of them.
Be careful with your emotions. They can land you in a world of trouble.