Introduce impersonations to expand the scope of a deceptive story.
Decoy Artifacts and Systems allow the defender to increase the attack surface of their environment to expose more of the deception story. Additionally, they can be used to adjust the adversary’s sense of ambiguity, increasing or decreasing their uncertainty about the environment. Investigation of these decoy artifacts may impose a resource cost on the adversary, enable or block the adversary’s intended actions, encourage or discourage a specific action or response, etc.
Decoy artifacts can take a variety of forms including credentials, accounts, files/directories, browser extensions/bookmarks, system processes, etc. Decoy systems can be real, virtual, or simulated. They can be presented as one of a variety of IT devices, including user workstations, servers, networking systems, IoT/embedded devices, mobile devices, etc. Regardless of form, these decoy artifacts and systems provide a variety of opportunities for the defender. For example, decoy artifacts can be used as tripwires to produce a high-fidelity alert when accessed.
Careful planning should guide the creation and deployment of these tripwires to ensure effectiveness. For example, understanding the adversary's known TTPs will highlight which resources the adversary is likely to touch, and therefore where decoy artifacts should be placed. A thorough assessment of the defender's priority cyber assets and intellectual property should guide the placement of decoy artifacts used as tripwires.
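The tripwire idea above can be sketched in a few lines. This is a minimal, hypothetical example (the decoy path and filename are invented for illustration): it snapshots a planted file's last-access time and treats any later forward movement as an alert, since no legitimate user should ever touch the decoy. A production deployment would rely on OS audit facilities (e.g. Linux auditd rules or Windows SACLs) rather than access-time checks, which mount options such as relatime can suppress.

```python
import os

# Hypothetical decoy path; real deployments choose names and locations
# matching the adversary's known TTPs (e.g. a "passwords.xlsx" on a share).
DECOY = "/srv/share/finance/passwords.xlsx"

def record_baseline(path: str) -> int:
    """Snapshot the decoy's last-access time (ns) right after planting it."""
    return os.stat(path).st_atime_ns

def decoy_accessed(path: str, baseline_atime_ns: int) -> bool:
    """High-fidelity tripwire check: no legitimate user touches the decoy,
    so any forward movement of its access time warrants an alert."""
    return os.stat(path).st_atime_ns > baseline_atime_ns
```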
A decoy artifact can influence adversary activity in several ways, as the following examples illustrate. First, by planting decoy artifacts and systems that align with known adversary TTPs, the defender can steer the adversary toward chosen targets. For example, if a target adversary has a capability against a specific application, the defender can place this vulnerable application in the environment to motivate the adversary to exploit the decoy.
As a second example, a defender may install AV or some other security or monitoring tool in a way that is easy for the adversary to remove. If an adversary removes the tool, they may be emboldened to act more openly, believing they cannot be monitored.
The defender can also attempt to demotivate the adversary by strategically placing decoy artifacts. For example, a defender could place a selection of reverse engineering tools or monitoring applications on a known vulnerable target. This may sow confusion and raise ambiguity, reducing the adversary’s motivation to pursue that target even though it is vulnerable.
Planting decoy artifacts and systems in the environment can also lead the adversary to reveal the extent of successive campaigns against a target. For example, the defender can create and leak fake credentials both inside and outside of the network, then monitor for their use. When an adversary uses a fake credential, the defender receives a high-fidelity alert. If the credentials are unique, the defender may also be able to determine how and when the adversary collected them.
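A minimal sketch of the unique-credential idea, with invented names throughout: each leak location gets its own never-valid username, so a later sighting in an authentication log reveals both the intrusion and where the credential was harvested.

```python
import secrets

def make_honeytokens(placements):
    """Mint one unique, never-valid username per leak location so a later
    sighting pinpoints exactly where the adversary harvested it."""
    return {f"svc_{secrets.token_hex(4)}": place for place in placements}

def match_honeytoken(log_line: str, tokens: dict):
    """Return the placement whose decoy username appears in an auth-log
    line, or None. Any hit is a high-fidelity alert: these accounts are
    never used legitimately."""
    for user, place in tokens.items():
        if user in log_line:
            return place
    return None

# tokens = make_honeytokens(["internal_wiki", "leaked_paste"])
# ... plant each username at its placement, then scan auth logs with
# match_honeytoken() to learn which leak the adversary picked up.
```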
Finally, decoy artifacts can be used to impose a resource cost on the adversary. For example, the defender can create an especially enticing, but excessively large, decoy file that is time and resource consuming to exfiltrate from the target.
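A sketch of such a decoy file, with an invented path and size: filling the file with random bytes keeps it effectively incompressible, so the adversary cannot shrink the exfiltration cost with compression.

```python
import os

def write_large_decoy(path: str, size_mb: int = 512, chunk_mb: int = 8) -> int:
    """Create an enticing but oversized decoy file. Random bytes are
    effectively incompressible, so exfiltrating the file costs the
    adversary real time and bandwidth. Returns the final size in bytes."""
    chunk = chunk_mb * 1024 * 1024
    remaining = size_mb * 1024 * 1024
    with open(path, "wb") as f:
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))  # incompressible filler
            remaining -= n
    return os.path.getsize(path)

# Hypothetical enticing name; size tuned to the defender's goals:
# write_large_decoy("/srv/share/hr/employee_records_backup.db", size_mb=2048)
```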
| ATT&CK® Tactics | Adversary Vulnerability Presented |
| --- | --- |
| Impact, Discovery, Persistence, Reconnaissance, Credential Access, Defense Evasion, Initial Access, Execution, Privilege Escalation, Lateral Movement, Collection, Command and Control | When adversaries interact with the environment or personas, they are vulnerable when they collect, observe, or manipulate system artifacts or information. Manipulated data may cause them to reveal behaviors, use additional or more advanced capabilities against the target, and/or impact their dwell time. |
| Discovery, Reconnaissance, Credential Access, Collection, Command and Control, Impact, Lateral Movement, Initial Access | When adversaries interact with engagement environments and personas, their future capability, targeting, and/or infrastructure requirements are vulnerable to influence. |
| Reconnaissance, Discovery, Persistence, Impact, Collection, Initial Access | When adversaries interact with network or system resources, they are vulnerable to triggering tripwires or engaging in easily detectable, anomalous behavior. |
| Collection, Exfiltration, Command and Control | When adversaries attempt to exfiltrate, manipulate, or move massive data objects, they are vulnerable to wasting resources to accomplish the task. |
| Exfiltration, Credential Access, Command and Control | When adversaries collect manipulated artifacts, they are vulnerable to revealing their presence when they use or move the artifacts elsewhere in the engagement environment. |
| Persistence, Discovery, Credential Access, Lateral Movement, Initial Access, Defense Evasion | When adversaries discover enabled, accessible, or intentionally weakened/overly permissive resources in the environment, they are vulnerable to revealing additional or more advanced capabilities when exploiting or using those resources. |
| Persistence, Credential Access, Defense Evasion | When adversaries rely on particular resources being enabled, accessible, and/or vulnerable, their operations are vulnerable to disruption if those resources are disabled, removed, or otherwise made invulnerable. |
| Credential Access | When adversaries use brute-force techniques to access accounts or encrypted data, they are vulnerable to wasting resources if the artifact has no valid credentials or is locked in some other way. |
| Credential Access, Initial Access, Lateral Movement, Discovery, Execution | When adversaries use previously stolen information to access or move laterally within an environment, they may reveal previous collection activities. |
| Initial Access | When adversaries maintain drive-by sites, they provide a pathway for beginning engagements. They may be unable to differentiate real from deceptive victims. |
| Initial Access | When adversaries maintain drive-by sites, they may reveal information about their targeting capabilities. |
| Initial Access | When adversaries maintain drive-by sites and collect information about potential victims, they may reveal information about their targeting preferences by selecting or rejecting an arbitrary victim. |
| Initial Access | When adversaries exploit a trusted relationship, they are vulnerable to collecting and acting on manipulated data provided by the trusted party. |
| Initial Access | When adversaries exploit a trusted relationship, such as using an account to access or move in the environment, they are vulnerable to triggering tripwires or engaging in anomalous behavior. |
| Command and Control | When adversaries exfiltrate data, their data are vulnerable to observation or manipulation via Man-in-the-Middle activities. |