Deception for Detection - Catching the Intruder’s Hand in the HoneyPot
1.0 | Introduction and Overview
Network intrusion detection and prevention tools are often plagued by false positives, which can consume an organization’s resources and reduce the effectiveness of incident response procedures against true threats. Attackers have always used forms of deception as part of their attacks, which, combined with evasion techniques, can increase their dwell time on networks.
The use of deception is not limited to attackers; defenders have historically also employed deceit as a technique to detect and defend against attackers. The saying goes that an attacker only needs to get lucky once to breach the network; once inside, however, the roles reverse. Once the perimeter of the network is breached, an attacker needs to be cautious, as any wrong move can lead to the blue team detecting the intrusion and taking remedial action: the blue team only needs to get lucky once.
Using deceptive techniques, the blue team can increase the likelihood of attackers making a wrong move, and thus the likelihood of detection. Commonly, blue teams use honeypots to accomplish this. A honeypot is any artifact or system that has been purposefully created to be enticing to attackers, and for which any attempt at use or access triggers an alarm of the highest severity. An analogy is that of leaving an Amazon-branded box marked “fragile”, the size of a flat-screen TV (empty, of course), on your front doorstep with a motion detection alarm focused on it. The moment a porch pirate sails by your street, notices this package, scurries up to your doorstep and grabs it, an eardrum-bursting alarm goes off alerting to malicious activity, and the police immediately arrive (in a perfect world). Meanwhile, your other Amazon packages, neatly obscured behind your potted plant by the courteous FedEx driver, go unnoticed and unstolen, as the porch pirate was caught trying to take something a bit more obvious, valuable and less secured.
The same principle applies in cyber security from a defender’s perspective. Imagine that the empty TV box is a random internal system purposefully configured to look critical and insecure, while the smaller boxes are actually the crown jewels. One wrong move by the attacker is all it would take to be detected on the network, especially if detections were configured to scream bloody murder were the HoneyPot to be accessed.
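To make the “any access is an alert” principle concrete, below is a minimal sketch of a network honeypot listener. It is a toy, not a hardened deployment: the port number, log format and alerting mechanism are illustrative assumptions, and a real implementation would forward the event to a SIEM or paging system rather than print it.

```python
# Minimal TCP honeypot sketch: the service is advertised nowhere
# legitimate, so any connection attempt is treated as high severity.
# Port 2222 and the alert format are illustrative assumptions.
import socket
from datetime import datetime, timezone

HONEYPOT_PORT = 2222

def alert(peer: str) -> None:
    # Stand-in for a SIEM event or pager notification.
    now = datetime.now(timezone.utc).isoformat()
    print(f"[CRITICAL] {now} honeypot touched by {peer}")

def main() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", HONEYPOT_PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                alert(f"{addr[0]}:{addr[1]}")

if __name__ == "__main__":
    main()
```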
2.0 | Problem Definition
There are two types of organizations in this world — the ones that have been breached and the ones that will be breached.
The above statement describes a fundamental shift in the way we think about network breaches. This shift has also brought about a shift in priorities, where early detection and response are prioritized over prevention. Implementing controls to prevent attacks is still a priority; however, the threat landscape is constantly evolving, with new and emerging threats outpacing the speed at which organizations can implement preventative controls. It is not feasible to prevent all attacks, and as such, organizations have switched to trying to detect, respond and recover in a timely manner. This shift has highlighted the importance of a key metric — dwell time — which is defined as the time between the initial breach of the perimeter and detection.
Even detecting, responding and recovering has posed a challenge to organizations. This is evident in the current statistics on dwell time, which did not improve in 2021, despite organizations having implemented state-of-the-art intrusion detection systems and automated endpoint detection and response capabilities (Nayyar, 2021). In a perfect world, these systems would immediately alert incident response teams to a breach, and the attacker would be locked out of the network before damage could be done.
However, this is not always the case, as evidenced by dwell time standing at 24 days in 2021 according to FireEye’s reporting. Attackers are still finding ways to evade sophisticated security controls and remain undetected for significant periods of time. The longer an attacker remains undetected in a network, the higher the impact and subsequent cost of the breach. As such, there is a business case to be made for reducing attacker dwell time.
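To make the metric itself concrete, dwell time is simply the elapsed time between the initial breach and its detection. The timestamps below are invented for illustration and merely mirror the 24-day figure cited above.

```python
# Dwell time = detection time - initial breach time.
# Both timestamps are hypothetical, chosen to mirror the 24-day figure.
from datetime import datetime

breach = datetime.fromisoformat("2021-01-04T09:30:00")
detection = datetime.fromisoformat("2021-01-28T09:30:00")

dwell = detection - breach
print(f"Dwell time: {dwell.days} days")  # -> Dwell time: 24 days
```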
How do we decrease dwell time?
Often, techniques and approaches to decrease dwell time have largely focused on the following:
- Increasing preventative controls: processes such as system hardening, the principle of least privilege, application whitelisting and multi-factor authentication have largely been implemented to prevent attacks by reducing the attack surface, and thus the likelihood of a successful attack which could have potentially evaded detective controls;
- Increasing detective controls: processes and technologies such as endpoint detection and response, intrusion detection and security information and event management have largely been implemented to detect attacks which may have bypassed or evaded preventative controls, and to coordinate the appropriate response.
These approaches have, for the most part, reduced attacker dwell time to its current level; however, the return on investment quickly diminishes as more and more preventative and detective controls are implemented.
As such, rather than implementing more controls, organizations have adopted alternative approaches that attempt to increase the effectiveness of existing controls through fine tuning, red teaming and penetration testing. However, even fine tuning has its limits, and as such, alternative ways of increasing the effectiveness of existing controls need to be sought.
An alternative to fine tuning detective controls is to pursue activities which increase the likelihood that attackers trigger detective controls early in the attack chain. The material difference between the two approaches is that fine tuning focuses on reducing false positives by configuring systems to distinguish malicious behavior from standard user behavior, while the latter focuses on increasing the likelihood that detectable malicious activity will be generated during attacks, thus alerting to an intruder.
Increasing the likelihood that malicious activity is generated during attacks is a tricky process. Historically, general user activity has often triggered false positives and is therefore often excluded from analysis, which increases the likelihood that an attacker’s evasion techniques mimicking general user activity will succeed. This can potentially be solved in various ways, one of which is classifying general user activity itself as malicious, thus increasing the likelihood that it will be detected. Classifying general user activity as malicious on all systems within the organization would lead to false positives, and as such, dedicated systems on which this type of activity can be classified as malicious need to be implemented, as sketched below.
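The idea can be expressed as a simple triage rule: events that would be benign anywhere else in the estate are escalated to critical when they originate from a designated honeypot host. The host names and event shape below are illustrative assumptions, not a prescribed schema.

```python
# Sketch of "ordinary activity is malicious on dedicated systems":
# any event sourced from a honeypot host is escalated to critical.
# Host names and the event dictionary shape are assumptions.
HONEYPOT_HOSTS = {"hp-fileserver-01", "hp-db-02"}

def triage(event: dict) -> str:
    """Return a severity label for a parsed log event."""
    if event.get("host") in HONEYPOT_HOSTS:
        # Logons, file reads, scans: all critical on a honeypot.
        return "critical"
    return "baseline"  # normal hosts keep their usual triage path

events = [
    {"host": "hp-fileserver-01", "action": "smb_logon", "user": "svc_backup"},
    {"host": "corp-laptop-17", "action": "smb_logon", "user": "jdoe"},
]
for e in events:
    print(e["host"], "->", triage(e))  # honeypot host -> critical
```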
What are the gaps in current knowledge and why is it relevant?
These systems have historically been referred to as HoneyPots — systems designed to act as a trap for attackers with the goal of detecting or deflecting attacks. While this is not a new concept, there are several gaps in knowledge on the use of HoneyPots. In particular, current resources do not adequately address the following questions in a consolidated and structured manner:
What? — What are HoneyPots? What are their characteristics and what are they used for?
Why? — Why should I consider implementing HoneyPots and what return on investment do I derive from their implementation?
How? — How do I implement these in an effective manner, given the nuances of my environment, the implications on my security posture, and my current incident response capabilities?
These gaps in knowledge fall into one of two categories:
1. Governance:
Governance defines a unified approach which provides an accountability framework and enables oversight to ensure that business objectives are met while reducing risk. Lack of a structured and defined approach can lead to wastage of resources, challenges in achieving the desired objective and increased risk.
Currently, there is no structured approach to the implementation and management of HoneyPots within organizations. Resources on HoneyPots often describe technical implementation, and defense frameworks such as MITRE D3FEND integrate honeypots as a category of deception techniques; however, neither provides a standard framework for their implementation and management.
For example, while MITRE D3FEND defines various types of decoys and honeypot environments, including basic considerations, it does not address the organization’s incident detection and response capabilities, the security requirements of the HoneyPot, the detection objectives, or a structured approach to implementation and management.
2. Taxonomy:
Taxonomy is a framework or schema for categorizing objects or concepts (Clarke, 2012). Categorization is an important process as it aids in communicating information about these objects or concepts.
Currently, a few resources, such as the MITRE D3FEND framework, provide a schema for HoneyPots; however (as of writing), the categories are broad and do not adequately consider the various subtypes, which can potentially lead to gaps in knowledge and understanding.
How does this paper address these gaps in knowledge?
This whitepaper aims to address the gaps in knowledge above by conceptualizing a framework which defines the implementation and management of HoneyPots. The framework aims to address the following specific gaps in knowledge:
- HoneyPot Categorization — Categorization and reference definitions of the various types of HoneyPots and their respective characteristics. This aims to answer the “What?” question by categorizing the various types of HoneyPots and their characteristics across various environments in a platform- and vendor-agnostic manner.
- HoneyPot Return on Investment Methodology — A methodology for assessing the return on investment of implementing HoneyPots, which aims to answer the “Why?” question (a sketch of one possible calculation follows this list).
- HoneyPot Implementation Approach — A structured approach to the implementation of HoneyPots in various environments. This aims to answer the “How?” question by providing a structured manner of maximizing the effectiveness of HoneyPots, taking into account the organization’s current incident response and detection capabilities.
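The ROI methodology itself is defined within the framework; as a rough illustration of the shape such a calculation could take, a return-on-security-investment (ROSI) style computation is sketched below. The formula and all figures are assumptions for illustration, not the framework’s prescribed method.

```python
# Hedged sketch of a ROSI-style calculation a HoneyPot ROI
# methodology could build on. All figures are invented.

def rosi(ale_before: float, ale_after: float, cost: float) -> float:
    """ROSI = (reduction in annualized loss expectancy - cost) / cost."""
    return ((ale_before - ale_after) - cost) / cost

# Assumption: earlier detection via honeypots trims breach impact,
# lowering annualized loss expectancy from 500k to 350k for a 40k spend.
print(f"ROSI: {rosi(500_000, 350_000, 40_000):.1%}")  # -> ROSI: 275.0%
```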
3.0 | The Open Apiary Framework v0.9
The Apiary Framework aims to provide a structured approach to the deployment and management of honeypots on a network to achieve the overall goal of increasing the likelihood of detecting a threat. A structured approach is important as it provides organizations with a consistent way of implementing honeypots with clear detection benefits while not reducing the overall security of the environment.
The framework is divided into the following components:
- The HoneyPot Definition: describes what a HoneyPot is and its characteristics;
- The Apiary Maturity Model: a descriptive model of the stages through which an organization progresses as it defines, implements, evolves and improves its HoneyPot strategy;
- The HoneyPots: this component of the framework describes the various types of HoneyPots which can be deployed, the approaches by which they can be deployed, the detection benefits, and the security considerations (a sketch of such a categorization record follows).
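As a hint of how these components could be made operational, the record below captures one honeypot’s categorization, deployment approach, detection objective and security considerations in a vendor-agnostic way. The field names and sample values are assumptions drawn from the component descriptions above, not the framework’s final schema.

```python
# Sketch of a vendor-agnostic honeypot record; field names are
# assumptions mirroring the framework components described above.
from dataclasses import dataclass

@dataclass
class HoneyPotSpec:
    name: str                 # identifier for the decoy
    honeypot_type: str        # category from the taxonomy
    deployment_approach: str  # how and where it is deployed
    detection_objective: str  # attacker behavior it should surface
    security_notes: str       # containment / hardening considerations

decoy = HoneyPotSpec(
    name="hp-fileserver-01",
    honeypot_type="decoy network service",
    deployment_approach="isolated VLAN, no production data",
    detection_objective="lateral movement via SMB enumeration",
    security_notes="no outbound access; alert on any connection",
)
print(decoy.honeypot_type)
```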
References
Clarke, M. (2012). The Digital Revolution. In Academic and Professional Publishing (pp. 79–98). Chandos Publishing. https://doi.org/10.1016/B978-1-84334-669-2.50004-4
Nayyar, S. (2021, May 3). Why The Dwell Time Of Cyberattacks Has Not Changed. Forbes. Retrieved November 2, 2021, from https://www.forbes.com/sites/forbestechcouncil/2021/05/03/why-the-dwell-time-of-cyberattacks-has-not-changed/?sh=6f37b473457d