Pretexting

Pretexting is a type of social engineering attack in which an attacker creates a fabricated situation, or pretext, to lure a victim into a vulnerable position and trick them into giving up private information, specifically information that the victim would typically not share outside the context of the pretext. [1] Historically, pretexting has been described as the first stage of social engineering, and it was used by the FBI to aid in investigations. [2] A specific example of pretexting is reverse social engineering, in which the attacker tricks the victim into contacting the attacker first.

One reason for pretexting's prevalence among social engineering attacks is that it relies on manipulating the human mind to gain access to the information the attacker wants, rather than on hacking a technological system. When looking for victims, attackers watch for a variety of characteristics, such as a tendency to trust, a low perception of threat, deference to authority, and a susceptibility to react with fear or excitement in different situations. [3] [4] Pretexting attacks have grown in complexity over time, evolving from the manipulation of telephone operators in the 1900s to the Hewlett Packard scandal of the 2000s, which involved the use of social security numbers, phones, and banks. [5] Organizations currently use education frameworks on social engineering, although academic researchers have suggested possible improvements to those frameworks. [6]

Background

Social engineering

Social engineering is a psychological manipulation tactic that leads to an unwilling or unknowing response from the target or victim. [7] It is one of the top information security threats in the modern world, affecting organizations, business management, and industries. [7] Social engineering attacks are considered difficult to prevent because they are rooted in psychological manipulation. [8] These attacks can also operate at a broader scale: in other security attacks, a company that holds customer data might be breached, but in social engineering attacks both the company (specifically its workers) and the customers themselves are susceptible to being targeted. [8]

The banking industry offers an example: not only bank employees but also customers can be attacked. Rather than trying to hack a purely technological system, social engineering culprits target customers and employees directly and exploit human vulnerabilities. [8]

Though its definition in relation to cybersecurity varies across the literature, a common theme is that social engineering (in cybersecurity) exploits human vulnerabilities in order to breach entities such as computers and information technology systems. [2]

Relatively little research has been done on social engineering to date. However, a central part of the methodology in social engineering research is setting up a made-up pretext. When assessing which social engineering attacks are the most dangerous or harmful (e.g., phishing, vishing, water-holing), the type of pretext is a largely insignificant factor, since some attacks can employ multiple pretexts. Thus, pretexting is widely used, not just as an attack in its own right, but as a component of others. [9]

Pretexting in the timeline of social engineering

In cybersecurity, pretexting can be considered one of the earliest stages in the evolution of social engineering. Whereas a social engineering attack such as phishing relies on modern items such as credit cards and occurs mainly in the electronic space, pretexting was, and still can be, implemented without technology. [10]

Pretexting was one of the first examples of social engineering. The term was coined by the FBI in 1974, and the concept was often used to aid its investigations. In this phase, pretexting consisted of an attacker simply calling the victim and asking for information. [2] Pretexting attacks usually rely on persuasion tactics. After this initial phase of social engineering's evolution (1974–1983), pretexting expanded to include deception tactics as well as persuasion. As technology developed, pretexting methods developed alongside it, and the invention of social media soon gave hackers access to a much wider pool of victims. [2]

Reverse social engineering

Reverse social engineering is a more specific example of pretexting. [11] It is a non-electronic form of social engineering in which the attacker creates a pretext that manipulates the victim into contacting the attacker first, rather than the other way around.

Typically, reverse social engineering attacks involve the attacker advertising their services as a form of technical aid, thereby establishing credibility. The victim, after seeing the advertisements, is then tricked into contacting the attacker, without the attacker ever contacting the victim directly. Once a reverse social engineering attack succeeds, a wide range of follow-on social engineering attacks becomes possible because of the falsified trust between the attacker and the victim. For example, the attacker can send the victim a harmful link and claim it is a solution to the victim's problem; because of the established connection, the victim will be inclined to believe the attacker and click the harmful link. [12]

Social aspect

Pretexting was, and continues to be, seen as a useful tactic in social engineering attacks. According to researchers, this is because such attacks do not rely on technology (such as hacking into computer systems or breaching hardware). Pretexting can occur online, but it depends more on the user and on the aspects of their personality the attacker can turn to their advantage. [13] Attacks that rely on the user are harder to track and control, since each person responds to social engineering and pretexting differently; a direct attack on a computer, by contrast, can take less effort to resolve, since computers work in relatively similar ways. [13] There are certain characteristics of users that attackers pinpoint and target. In academia, some common characteristics [14] are:

Prized

If the victim is "prized", they hold some type of information that the social engineer desires. [3]

Ability to trust

Trustworthiness goes hand in hand with likability: typically, the more someone is liked, the more they are trusted. [14] Similarly, once trust is established between the social engineer (the attacker) and the victim, so is credibility. A victim who trusts easily is therefore more likely to divulge personal information to the attacker. [4]

Susceptibility to react

How easily a person reacts to events, and to what degree, can be used in a social engineer's favor. In particular, emotions like excitement and fear are often used to persuade people to divulge information. For example, a pretext could be established in which the social engineer teases an exciting prize the victim will receive if they agree to hand over their banking information. The feeling of excitement lures the victim into the pretext and persuades them to give the attacker the information sought. [14]

Low perception of threat

Despite understanding that threats exist when doing anything online, most people will perform actions that run counter to this awareness, such as clicking on random links or accepting unknown friend requests. [14] This happens because a person perceives the action as carrying a low threat or negligible negative consequence. This lack of fear, despite an awareness of the threat's presence, is another reason why social engineering attacks, especially pretexting, are prevalent. [15]

Response to authority

If the victim is submissive and compliant, an attacker is more likely to succeed when the pretext frames the attacker as some type of authority figure. [14]

Examples

Early pretexting (1970–80s)

The October 1984 article "Switching centres and Operators" detailed a common pretexting attack of the time. Attackers would contact the operators who specifically served deaf users of teletypewriters, reasoning that these operators were often more patient than regular operators and therefore easier to manipulate into giving up the information the attacker desired. [2]

Recent examples

A notable example is the Hewlett Packard scandal. Hewlett Packard wanted to know who was leaking information to journalists. To find out, the company provided private investigators with employees' personal information (such as social security numbers), and the private investigators in turn called phone companies impersonating those employees in hopes of obtaining call records. When the scandal was discovered, the CEO resigned. [16]

Socialbots are machine-operated fake social media profiles employed by social engineering attackers. On social media sites like Facebook, socialbots can send mass friend requests in order to reach as many potential victims as possible. [5] Using reverse social engineering techniques, attackers can use socialbots to harvest massive amounts of private information from social media users. [17] In 2018, a fraudster impersonated the entrepreneur Elon Musk on Twitter by altering their account name and profile picture, then ran a deceptive giveaway scam promising to multiply any cryptocurrency sent by users. The scammer simply kept the funds that were sent. The incident illustrates how pretexting can be employed as a tactic within a social engineering attack. [18]

Current education frameworks

Current education frameworks on social engineering fall into two categories: awareness and training. Awareness presents information about social engineering to the intended party to inform them about the topic. Training teaches the specific skills people need to recognize and respond to a social engineering attack if they encounter one. [6] Awareness and training can be combined into a single intensive process when constructing education frameworks.

While research has examined the effectiveness and necessity of training programs in cybersecurity education, [19] up to 70% of the information conveyed in social engineering training can be lost. [20] A study on social engineering education in banks across the Asia-Pacific found that most frameworks addressed only awareness or only training, and that phishing was the only type of social engineering attack taught. A comparison of the security policies on these banks' websites showed that they use generalized language such as "malware" and "scams" while omitting the details of the different types of social engineering attacks and examples of each. [6]

This generalization does not benefit the users being educated by these frameworks, as considerable depth is missing when a user is only taught broad terms like those above. Furthermore, purely technical methods of combating social engineering and pretexting attacks, such as firewalls and antivirus software, are ineffective. Because social engineering attacks exploit the social characteristics of human nature, countering the technology alone is insufficient. [21]

References

  1. Greitzer, F. L.; Strozer, J. R.; Cohen, S.; Moore, A. P.; Mundie, D.; Cowley, J. (May 2014). "Analysis of Unintentional Insider Threats Deriving from Social Engineering Exploits". 2014 IEEE Security and Privacy Workshops. pp. 236–250. doi:10.1109/SPW.2014.39. ISBN 978-1-4799-5103-1.
  2. Wang, Zuoguang; Sun, Limin; Zhu, Hongsong (2020). "Defining Social Engineering in Cybersecurity". IEEE Access. 8: 85094–85115. doi:10.1109/ACCESS.2020.2992807. ISSN 2169-3536.
  3. Steinmetz, Kevin F. (2020-09-07). "The Identification of a Model Victim for Social Engineering: A Qualitative Analysis". Victims & Offenders. 16 (4): 540–564. doi:10.1080/15564886.2020.1818658. ISSN 1556-4886.
  4. Algarni, Abdullah (June 2019). "What Message Characteristics Make Social Engineering Successful on Facebook: The Role of Central Route, Peripheral Route, and Perceived Risk". Information. 10 (6): 211. doi:10.3390/info10060211.
  5. Paradise, Abigail; Shabtai, Asaf; Puzis, Rami (2019-09-01). "Detecting Organization-Targeted Socialbots by Monitoring Social Network Profiles". Networks and Spatial Economics. 19 (3): 731–761. doi:10.1007/s11067-018-9406-1. ISSN 1572-9427.
  6. Ivaturi, Koteswara; Janczewski, Lech (2013-10-01). "Social Engineering Preparedness of Online Banks: An Asia-Pacific Perspective". Journal of Global Information Technology Management. 16 (4): 21–46. doi:10.1080/1097198X.2013.10845647. ISSN 1097-198X.
  7. Ghafir, Ibrahim; Saleem, Jibran; Hammoudeh, Mohammad; Faour, Hanan; Prenosil, Vaclav; Jaf, Sardar; Jabbar, Sohail; Baker, Thar (October 2018). "Security threats to critical infrastructure: the human factor". The Journal of Supercomputing. 74 (10): 4986–5002. doi:10.1007/s11227-018-2337-2. hdl:10454/17618. ISSN 0920-8542.
  8. Airehrour, David; Nair, Nisha Vasudevan; Madanian, Samaneh (2018-05-03). "Social Engineering Attacks and Countermeasures in the New Zealand Banking System: Advancing a User-Reflective Mitigation Model". Information. 9 (5): 110. doi:10.3390/info9050110. hdl:10652/4378. ISSN 2078-2489.
  9. Bleiman, Rachel (2020). An Examination in Social Engineering: The Susceptibility of Disclosing Private Security Information in College Students (Thesis). doi:10.34944/dspace/365.
  10. Chin, Tommy; Xiong, Kaiqi; Hu, Chengbin (2018). "Phishlimiter: A Phishing Detection and Mitigation Approach Using Software-Defined Networking". IEEE Access. 6: 42516–42531. doi:10.1109/ACCESS.2018.2837889. ISSN 2169-3536.
  11. Greitzer, Frank L.; Strozer, Jeremy R.; Cohen, Sholom; Moore, Andrew P.; Mundie, David; Cowley, Jennifer (May 2014). "Analysis of Unintentional Insider Threats Deriving from Social Engineering Exploits". 2014 IEEE Security and Privacy Workshops. San Jose, CA: IEEE. pp. 236–250. doi:10.1109/SPW.2014.39. ISBN 978-1-4799-5103-1.
  12. Irani, Danesh; Balduzzi, Marco; Balzarotti, Davide; Kirda, Engin; Pu, Calton (2011). Holz, Thorsten; Bos, Herbert (eds.). "Reverse Social Engineering Attacks in Online Social Networks". Detection of Intrusions and Malware, and Vulnerability Assessment. Lecture Notes in Computer Science. 6739. Berlin, Heidelberg: Springer: 55–74. doi:10.1007/978-3-642-22424-9_4. ISBN 978-3-642-22424-9.
  13. Heartfield, Ryan; Loukas, George (2018). "Protection Against Semantic Social Engineering Attacks". In Conti, Mauro; Somani, Gaurav; Poovendran, Radha (eds.). Versatile Cybersecurity. Vol. 72. Cham: Springer International Publishing. pp. 99–140. doi:10.1007/978-3-319-97643-3_4. ISBN 978-3-319-97642-6.
  14. Workman, Michael (2007-12-13). "Gaining Access with Social Engineering: An Empirical Study of the Threat". Information Systems Security. 16 (6): 315–331. doi:10.1080/10658980701788165. ISSN 1065-898X.
  15. Krombholz, Katharina; Merkl, Dieter; Weippl, Edgar (December 2012). "Fake identities in social media: A case study on the sustainability of the Facebook business model". Journal of Service Science Research. 4 (2): 175–212. doi:10.1007/s12927-012-0008-z. ISSN 2093-0720.
  16. Workman, Michael (2008). "Wisecrackers: A theory-grounded investigation of phishing and pretext social engineering threats to information security". Journal of the American Society for Information Science and Technology. 59 (4): 662–674. doi:10.1002/asi.20779. ISSN 1532-2882.
  17. Boshmaf, Yazan; Muslukhov, Ildar; Beznosov, Konstantin; Ripeanu, Matei (2013-02-04). "Design and analysis of a social botnet". Computer Networks. 57 (2): 556–578. doi:10.1016/j.comnet.2012.06.006. ISSN 1389-1286.
  18. Bhusal, Chandra Sekhar (2020). "Systematic Review on Social Engineering: Hacking by Manipulating Humans". SSRN Electronic Journal. doi:10.2139/ssrn.3720955. ISSN 1556-5068.
  19. McCrohan, Kevin F.; Engel, Kathryn; Harvey, James W. (2010-06-14). "Influence of Awareness and Training on Cyber Security". Journal of Internet Commerce. 9 (1): 23–41. doi:10.1080/15332861.2010.487415. ISSN 1533-2861.
  20. Ghafir, Ibrahim; Saleem, Jibran; Hammoudeh, Mohammad; Faour, Hanan; Prenosil, Vaclav; Jaf, Sardar; Jabbar, Sohail; Baker, Thar (2018-10-01). "Security threats to critical infrastructure: the human factor". The Journal of Supercomputing. 74 (10): 4986–5002. doi:10.1007/s11227-018-2337-2. hdl:10454/17618. ISSN 1573-0484.
  21. Heartfield, Ryan; Loukas, George; Gan, Diane (2016). "You Are Probably Not the Weakest Link: Towards Practical Prediction of Susceptibility to Semantic Social Engineering Attacks". IEEE Access. 4: 6910–6928. doi:10.1109/ACCESS.2016.2616285. ISSN 2169-3536.