By Rudolph Araujo
What can we learn from this attack?

Having talked in Part 1 of this two-part series about the attack outlined in the Russia indictment, here we’ll dive into the underlying lessons and takeaways for security teams. There is an obvious discussion to be had about credential security: stronger passwords, multi-factor authentication, and so on. That discussion has already been had ad nauseam, so in this post we wanted to go a little deeper.

A few key areas of lessons learned emerged when examining the attack:

  • Threat model – If you are trying to protect everything all of the time, you might as well be protecting nothing. In any organization, it is critical that you understand the threat model, and that doesn’t just mean knowing who is likely to attack you. In fact, we would argue that the attacker’s identity matters less than knowing what, or indeed who, you have that is valuable. Once you know what your crown jewels are, make sure you have the right protective controls around them: network segmentation, strong authentication and authorization, audit trails, and so on.

    From a monitoring perspective, you want to flag any events tied to these critical assets and make sure any correlation and triage accounts for this (we sketch one way to do that after this list). Make sure you red team the hell out of that threat model too. If there are any weak points, you want to find them before an attacker does. As Rob Joyce, the former head of the NSA’s Tailored Access Operations, said at USENIX Enigma 2016:

    “You know the technologies you intended to use in that network. We [the nation state hacker] know the technologies that are actually in use in that network.”

  • Protecting personal email accounts – You should also think about your critical human assets. In this day and age, users’ corporate and personal online presences often blur together. The question to think about is whether the organization’s security team should take on protecting employees’ personal email and social media accounts as well. Policy and privacy issues aside, how does one even do that? Most of your current technology stack doesn’t really alert you about threats to people, but rather to IP addresses. This is therefore one of those areas where the controls are likely not going to be preventative; instead, teams will take a detect-quickly-and-respond approach. For instance, you may not be able to prevent a spear-phishing email from being delivered to a personal email account, but can you quickly detect the credential theft that follows and shut down access before (too much) damage is done? More on this later.
  • Unmanaged devices – In this attack, we saw multiple instances of devices that didn’t appear to be managed or tracked in any kind of asset database—the Linux server that still had X-Agent on it 4-5 months after the compromise was first discovered; the cloud email service used by Hillary Clinton’s personal office (not tied directly to the campaign); and the DNC’s analytics computers hosted on a third-party cloud computing service that were compromised about four months after incident response began. At the risk of repeating ourselves, you can’t protect what you don’t know about. But just as importantly, even if you know about these unmanaged assets, what is your plan to detect and respond to threats targeting them? Endpoint and log-based approaches are not likely to be successful here, so having a high-fidelity network view of these devices and their behaviors is critical.
  • Suspect destinations – As mentioned in Part 1, these attackers were sophisticated, even if their techniques weren’t; they were simply smart. For instance, in targeting personal email addresses, they knew that even if there was some kind of corporate email security solution in place, it would not be protecting personal email. As we hinted above, a defender in this case must focus on detecting the next step in the attack, e.g., credential theft. Here, security teams would be well served by detecting traffic to lookalike or typosquatted domains (e.g., a fake Google domain); we sketch one way to do this after this list. Even if you detect this traffic after a password has been stolen, you have a shot at responding quickly and closing the hole. Similarly, while many proxies and NGFWs will follow HTTP redirects to deal with those darn URL shorteners, they are almost always looking for malware. What if there is no malware involved, but just a simple “password reset” form? This is again where security analysts would benefit from being alerted about young or uncommon domains, registrars, etc. as an indicator that something might be up.
  • Visibility into the late attack lifecycle – With all three victims, it is clear that while the initial compromise took just a couple of days, the attackers then had months to complete the rest of their attack. This is not atypical of the incidents we have observed and worked on. Even the best security teams struggle with visibility into attacker activities around internal recon, lateral movement and maintaining presence. This is because, at this point, a lot of the activity just blends in with regular, business-justified activity.
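
To make the threat-model point above concrete, here is a minimal sketch of how a triage pipeline might boost alerts that touch crown-jewel assets. The asset inventory, alert fields and scoring weights below are all hypothetical; most SIEMs expose equivalent hooks for this kind of enrichment.

```python
# Hypothetical sketch: boost alert priority when a crown-jewel asset is involved.
# The asset inventory, alert schema, and weights below are illustrative only.

CROWN_JEWELS = {
    "10.0.5.12": "analytics-db",          # example critical hosts (hypothetical)
    "10.0.5.40": "finance-file-server",
}

SEVERITY_BASE = {"low": 1, "medium": 5, "high": 8}

def triage_score(alert: dict) -> int:
    """Return a triage score; alerts touching crown jewels float to the top."""
    score = SEVERITY_BASE.get(alert.get("severity", "low"), 1)
    if alert.get("src_ip") in CROWN_JEWELS or alert.get("dst_ip") in CROWN_JEWELS:
        score += 10  # crown-jewel involvement outweighs raw severity alone
    return score

alerts = [
    {"id": 1, "severity": "high",   "src_ip": "10.0.9.77", "dst_ip": "8.8.8.8"},
    {"id": 2, "severity": "medium", "src_ip": "10.0.5.12", "dst_ip": "203.0.113.9"},
]

# The medium-severity alert on the analytics host outranks the high-severity one.
for alert in sorted(alerts, key=triage_score, reverse=True):
    print(alert["id"], triage_score(alert))
```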

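On the suspect-destinations point, here is a minimal sketch of flagging lookalike domains and newly registered ones. The protected-domain list, similarity and age thresholds, and the example log record are assumptions; in practice you would feed this from DNS or proxy logs and a WHOIS or passive-DNS source.

```python
# Hypothetical sketch: flag lookalike and very young domains seen in DNS/proxy logs.
# Protected domains, thresholds, and the example record are illustrative assumptions.
from datetime import datetime, timezone
from difflib import SequenceMatcher

PROTECTED = ["google.com", "accounts.google.com"]
MAX_DOMAIN_AGE_DAYS = 30        # "young domain" threshold (assumption)
LOOKALIKE_THRESHOLD = 0.85      # very similar, but not identical, is suspicious

def looks_like_protected(domain: str) -> bool:
    """True if the domain closely resembles, but does not equal, a protected domain."""
    return any(
        domain != good and SequenceMatcher(None, domain, good).ratio() >= LOOKALIKE_THRESHOLD
        for good in PROTECTED
    )

def is_young(created: datetime) -> bool:
    """True if the domain was registered recently (creation date from WHOIS/passive DNS)."""
    return (datetime.now(timezone.utc) - created).days <= MAX_DOMAIN_AGE_DAYS

# Example: a typosquatted Google domain observed in proxy logs (hypothetical record)
observed = {"domain": "accounts-google.com",
            "created": datetime(2016, 3, 10, tzinfo=timezone.utc)}

if looks_like_protected(observed["domain"]) or is_young(observed["created"]):
    print("ALERT: suspect destination", observed["domain"])
```
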
Let’s discuss some specific examples from this case:

  • Detecting remote access—Unless the remote access tool is widely known malware, this is hard to detect. Threat hunters will find such activity by looking for telltale markers, like odd screen resolutions being used on the network, but that effort is painstaking.
  • Detecting lateral movement and privilege escalation—The tools the attackers used here, e.g., PowerShell and SMB, are common IT and end-user tools. How do you spot malicious intent then? The answer lies in looking for unusual yet recurrent behavior that matches known attacker TTPs (see the host-pair sketch after this list).
  • Detecting data exfiltration—This one seems easy, right? Alert every time you spot a large upload. Wrong! It turns out this approach is heavy on false positives. Instead, you have to factor in attributes of the traffic (the size of the upload in this case) along with attributes of the source (is this an isolated behavior for just this device, or a few devices?) and of the destination (is this one of those suspect destinations we discussed above?); the scoring sketch after this list shows one way to combine them. And you might have thrown in the towel thinking, “Well, the data is encrypted, so I can’t see what is leaving my network.” While that is true, repeat after me: “Encryption is my friend!” The reality is that encrypted traffic still tells you a lot about the endpoints on either end. For instance, it may help you identify the application doing the uploading.
  • Detecting completion activities—All attackers try to cover their tracks, which is why you will miss things by relying exclusively on endpoint or log-based approaches. The network, on the other hand, does not lie and does not forget. So complementing those approaches with network intelligence and visibility is critical: first, to detect the use of tools like CCleaner based on their network/SSL fingerprint (more on that in another post), and second, to go back in time and answer the question: what really happened?
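
For the lateral-movement point above, one common pattern is to baseline which host pairs normally talk over admin protocols and then flag pairs that are new to the environment yet keep recurring. This is a minimal sketch; the flow records, port list and thresholds are assumptions, not a complete detection.

```python
# Hypothetical sketch: flag new-but-recurring admin-protocol host pairs (SMB, WinRM).
# Flow records, port list, baseline contents, and thresholds are illustrative assumptions.
from collections import Counter

ADMIN_PORTS = {445, 5985, 5986}                 # SMB and WinRM (PowerShell remoting)
baseline_pairs = {("10.0.1.5", "10.0.2.9")}     # pairs seen during a prior baseline window

todays_flows = [
    {"src": "10.0.1.5", "dst": "10.0.2.9",  "dport": 445},
    {"src": "10.0.3.7", "dst": "10.0.2.11", "dport": 5985},
    {"src": "10.0.3.7", "dst": "10.0.2.11", "dport": 5985},
    {"src": "10.0.3.7", "dst": "10.0.2.12", "dport": 445},
]

# Count only admin-protocol pairs that were never part of the baseline.
new_pair_counts = Counter(
    (f["src"], f["dst"]) for f in todays_flows
    if f["dport"] in ADMIN_PORTS and (f["src"], f["dst"]) not in baseline_pairs
)

for pair, count in new_pair_counts.items():
    if count >= 2:  # unusual (not in baseline) yet recurrent (seen repeatedly)
        print("ALERT: possible lateral movement", pair, "seen", count, "times")
```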

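Along the same lines, here is a minimal sketch of scoring a potential exfiltration event by combining the size of the upload with how isolated the behavior is for the source and how suspect the destination is. The weights and thresholds are made up purely for illustration.

```python
# Hypothetical sketch: score a large upload by combining traffic, source, and destination
# attributes. Thresholds and weights are illustrative assumptions, not tuned values.

def exfil_score(upload_bytes: int, devices_with_behavior: int, dest_is_suspect: bool) -> int:
    score = 0
    if upload_bytes > 100 * 1024 * 1024:      # unusually large upload (traffic attribute)
        score += 3
    if devices_with_behavior <= 2:            # behavior isolated to one or two sources
        score += 3
    if dest_is_suspect:                       # e.g., a young or lookalike domain (see earlier sketch)
        score += 4
    return score

# A gigabyte pushed from a single host to a suspect destination scores far higher than the
# same volume going to a sanctioned backup service used by hundreds of hosts.
print(exfil_score(upload_bytes=1_000_000_000, devices_with_behavior=1, dest_is_suspect=True))    # 10
print(exfil_score(upload_bytes=1_000_000_000, devices_with_behavior=300, dest_is_suspect=False)) # 3
```
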
To quote again from the former head of an organization tasked with being a nation-state threat actor:

“I’ll tell you one of our worst nightmares is that out-of-band network tap that really is capturing all the data, understanding anomalous behavior going on, and somebody’s paying attention to it.”

An aside …

We have to say, it is pretty impressive how, months and years after the attack, the investigators were able to retrace its steps with such precision and detail. Even for those of us who do this day in and day out, it never ceases to amaze. It speaks to how humans with expertise can answer these questions. Now if only that kind of expertise were easily and abundantly available, right?