Use ATT&CK to Help Quantify Your Security Risk

Perfect for a summer internship, and can deliver an impressive and useful result.

Andrew Selig
System Weakness


ATT&CK Navigator example, generated by Andrew Selig

I’ve been somewhat obsessed with MITRE ATT&CK for the past four years. Companies focus on ISO 27001 and NIST 800-53 to develop policies and meet a myriad of compliance regulations; however, meeting those guidelines never gave me, as an incident response manager, strong confidence that a company would identify, mitigate, and recover from adversarial attacks. I set out to find a way to use ATT&CK to provide that confidence.

An overview of ATT&CK

Ten years ago, MITRE came out with ATT&CK, a framework focused on adversarial techniques. In short, ATT&CK breaks cyber attacks down into 14 tactics, with hundreds of techniques and sub-techniques beneath them. For example, Phishing is a technique in the Initial Access tactic, and it has three sub-techniques (Attachment, Link, via Service). Each technique has its own description, examples of its use in the wild, mitigations, and detections. That’s the 30,000-foot view; check out MITRE’s blog for a lot more information on the project.

The ATT&CK Navigator allows you to visualize the framework (Create New Layer > Enterprise). Start by opening the search panel at the top, expanding Threat Groups, scrolling down to APT29, and clicking Select.

The Navigator now shows all the techniques that belong to APT29 outlined in blue. If you were targeted by this particular group, your detections and mitigations for these techniques would be paramount for identifying and limiting exposure.

A small subset of the techniques used by APT29

Applying it to your organization

This is incredibly useful if you know the groups likely to target you. However, if you spend some time scrolling through other threat groups in the Navigator, you can see that groups use tactics and techniques across the board.

Instead of chasing after each threat group, working top down from tactics and techniques, what if we focus instead on the ground up: detections and mitigations? We take on the whole of the framework and look at how well our current control environment covers its techniques and tactics.

Leveraging an internship to map your organization

Mapping your organization to ATT&CK is incredibly valuable to the enterprise in deciding where to spend money and resources, but how does this apply to an internship? Mapping the framework lends itself to a perfect intern project:

  • The technical level is largely policy-based and similar to academic assignments. They don’t need to know the nuts and bolts of every tool, just the high-level policies across many tools.
  • An opportunity to navigate a corporate environment, interviewing teams about how their settings and policies play a role in the larger framework.
  • An understanding of how risk and controls work, and the impact they might have on operational teams.
  • A final presentation that quantifies risk, with recommendations based on a significant amount of data.

Many organizations are looking for analysts and engineers who are familiar with ATT&CK, and the project gives your intern genuine hands-on experience with tools and policies to discuss.

How to score each technique, from the bottom up

This project was largely influenced by work Olaf Hartong did back in 2019 (you should be following him and FalconForce), simplified to allow for faster decisions. The first step is to create a rubric that works for your organization, so you can determine your ability to detect each technique and give it a score. Let’s take the Spearphishing Attachment (T1566.001) sub-technique as an example:

Description:

Adversaries may send spearphishing emails with a malicious attachment in an attempt to gain access to victim systems. Spearphishing attachment is a specific variant of spearphishing. Spearphishing attachment is different from other forms of spearphishing in that it employs the use of malware attached to an email.

Detection: (screenshot of the technique’s Detection section, listing data sources such as Application Log Content and File Creation)

Mitigation: (screenshot of the technique’s Mitigations section, covering controls such as antivirus, network intrusion prevention, and user training)

As we can see with Spearphishing, most techniques do not map nicely to a single control. Oftentimes, there might be five controls to consider and weigh. You also don’t want to make the scoring so onerous that it will take a week to get through a single technique.

Consider the following questions as a starting point for quantifying each technique:

  • Are the tools and policies configured at the proper level to detect this technique?
  • Are the tools and policies deployed to the right population of devices or networks?
  • Is the telemetry present to detect quickly and proactively?

If your organization decides to weigh each question equally out of 100, a technique’s score is simply 33.3% of the answer to each of the above questions, added together.
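For the spreadsheet-averse, here is a minimal Python sketch of that equal-weight combination; the function and parameter names are illustrative placeholders rather than anything from ATT&CK or the templates mentioned later:

```python
def technique_score(configured: float, deployed: float, telemetry: float) -> float:
    """Combine the three question scores (each 0-100) into one technique score."""
    weights = (0.33, 0.33, 0.33)  # equal weights, per the 33.3% rule above
    subscores = (configured, deployed, telemetry)
    return round(sum(s * w for s, w in zip(subscores, weights)), 1)

print(technique_score(40, 60, 20))  # e.g. sub-scores of 40, 60, and 20 combine to 39.6
```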

Let’s return to our Spearphishing Attachment technique above and score it out. When we talk to an example company, we get the following brief on their control structure:

Our entire fleet is composed of Windows and Linux devices, with EDR deployed to the “moderate” level recommended by the vendor. All endpoint and server traffic goes through an intrusion detection system and web proxy; however, both only see unencrypted traffic. Our cloud environment runs in GCP and is covered by the same intrusion detection as our on-premises devices. Email is inspected using the default settings in O365, but we’re not sure exactly what that does. Users go through annual training on cybersecurity threats in accordance with our compliance requirements. Finally, firewall logs are centralized in a data lake, but we’re not sure what other logs are available.

How does our control environment stand up to an attacker using a Spearphishing Attachment?

  • We have an EDR installed and configured to the vendor’s specification. There might be some room to grow, but the “suspicious files” noted in the mitigation are likely covered by those baselines.
  • It doesn’t sound like we’ll have logs for Application Log Content or File Creation detections.
  • We are monitoring network traffic for bad things, but we’re missing everything in encrypted traffic, which is the vast majority these days. It’s helpful, but there is a lot of room to improve.
  • User training is required, but could be improved with monthly phishing simulation campaigns.
  • Default email scanning appears to be set up; however, features that would help in this case, such as SPF and DMARC, are likely not configured.

Some wins, some areas for improvement. We cross many different controls in different states of deployment. This is where the art of this scoring process meets the science. How would you score this out of 100?

Are the tools configured to identify this threat?

EDR and network monitoring are set up to detect it; employee training and email security are only partially deployed. Windows logging settings might not be configured to capture potential threats. Our intern calls it a 40; there’s a lot of work to do.

Are the tools deployed in a way to identify the threat?

EDR is deployed on all devices, all email traffic is inspected, and all users are required to attend training. However, the network and web proxy tools are not deployed in a way that sees the potential threat traffic. Our intern considers it a 60.

Is the information easily available to analysts to detect the threat?

The brief does not provide much information on centralized logging outside of the networking logs. Our intern rates it a 20.

(40 * .33) + (60 * .33) + (20 * .33) = 39.6 / 100

We have a score! Did our intern do a good job based on the brief provided by the company? Should it be higher or lower? It’s important in this process to be aware of Goodhart’s Law, and understand why you are changing things after you see the final score. Is it warranted, or do you just not want to give something as critical as identifying spearphishing a 40/100?

One technique down

There is a lot of work ahead of your intern; below are some tips and tricks:

  • Determine a priority for tactics; there’s likely too much to get done in a single internship. Which tactic is most important to the organization?
  • Do a crash course on tools, their policies, and their deployments at the outset.
  • Establish some baseline ratings. As an example, EDR will largely be the same across all techniques, so maybe establish a set number for that. But if your EDR isn’t installed on Linux, then use a set score of 0 whenever a technique relies on EDR on Linux devices for detection (a rough sketch of this baseline idea follows this list).
  • Normalize scores if the work is split up. The point above is helpful to establish baselines, but reviewing techniques as a group before finalizing their scores will net a better result.
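As an illustration of that baseline-and-override idea, here is a rough Python sketch; the control names and baseline values are hypothetical and not taken from any template:

```python
# Per-control baselines agreed up front, so every technique that leans on
# the same control starts from the same number (values are hypothetical).
BASELINES = {
    "edr_windows": 60,    # EDR tuned to the vendor's "moderate" level
    "edr_linux": 0,       # no EDR on Linux -> automatic 0 for Linux-only detections
    "network_ids": 40,    # IDS only sees unencrypted traffic
    "email_default": 50,  # default O365 inspection
}

def starting_score(controls: list[str]) -> float:
    """Average the baselines of the controls a technique relies on,
    as a starting point before the scorer adjusts it by hand."""
    return round(sum(BASELINES[c] for c in controls) / len(controls), 1)

# e.g. a technique detected mainly by Linux EDR and the network IDS starts at 20.0
print(starting_score(["edr_linux", "network_ids"]))
```

The baseline isn’t meant to automate the judgement away; it just keeps people scoring different tactics from drifting apart before the group review.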

I have compiled some templates and examples that can help document this process, as well as a script that can translate the information into a JSON file that the Navigator will accept. Check out the GitHub repo for more information.
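If you would rather roll your own exporter than use the repo’s script, a minimal sketch looks something like the following. The field names follow the Navigator layer format as I understand it (the Navigator may still prompt you to upgrade the layer version on import), and the technique IDs and scores are hypothetical:

```python
import json

# Hypothetical results: technique ID -> 0-100 coverage score
scores = {
    "T1566.001": 39.6,
    "T1059.001": 55.0,
}

layer = {
    "name": "Control coverage",
    "domain": "enterprise-attack",
    "description": "Bottom-up control coverage scores",
    "techniques": [{"techniqueID": tid, "score": s} for tid, s in scores.items()],
    # A single neutral colour fading in from white, as recommended below.
    "gradient": {"colors": ["#ffffff", "#2e7d7d"], "minValue": 0, "maxValue": 100},
}

with open("coverage_layer.json", "w") as f:
    json.dump(layer, f, indent=2)
```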

Showing off our results

It’s the end of summer and it’s presentation time. Your intern has gone through hundreds of techniques, scored them, and normalized them. The final step is to migrate the scores and results into a JSON layer file, which applies a color gradient to the scores. I’d recommend a single, neutral color combined with white (no one likes green-to-red when talking about risk).

Upload your JSON file to the Navigator. Now that’s a presentation eye chart! Let’s drive the point home and select the same threat group as above, APT29. By selecting its techniques we can now see where our stronger controls are against this actor (dark aqua) and where we need to apply resources (white).

APT29 (blue outline) compared to a filled out matrix.

Don’t leave your leadership hanging, though; provide a list of the changes observed during this exercise that would deliver the largest improvement. For our example brief above:

  • Monitor encrypted traffic with the IDS, proxy, and endpoints. Convert the IDS into an IPS so it can mitigate rather than just detect.
  • Centralize logs for all critical security infrastructure, and ensure the appropriate logging settings are configured on all devices.
  • Perform more frequent employee training, including phishing simulations.
  • Review O365 settings and increase the protections available within the portal.

Buy your intern a lunch on their last day, thank them for all their hard work, and get ready to start partnering with infrastructure, security, and compliance teams to tackle those lighter shaded techniques.

Continue to improve your results

This exercise is not something to be completed and shelved. As the controls are improved our scores should go up, and our Navigator should be composed of darker shades. We can demonstrate improvement over time. As the low-hanging fruit is tackled, new techniques will bubble up that were not addressed with the work we did on Spearphishing Attachment. As ATT&CK iterates, so should organizations, and keeping an up-to-date evaluation will allow security management to quantify the risk when asking for more resources.
