How Command and Control Servers Remain Resilient
One of the ways that malware on a network is spotted is via its network traffic. However, in many cases this can be difficult to detect: there have been incidents where command-and-control (C&C) servers were able to stay online and pose a problem for years. In one such case, a group of threat actors was active for more than five years and used a single C&C server for two of those years.
Malware, unlike the self-aware artificial intelligence of science fiction, generally requires direction from an attacker to function well. That’s where C&C servers come in. While these are commonly thought of as limited to use by botnets, that is no longer the case: many different threats require C&C servers to function correctly today, not just botnets.
Early C&C servers were IRC servers that controlled victim machines via chatroom commands. Since then, it has become essentially standard for malware to include some form of remote control in order to perform the following functions:
- receive commands to perform directed malicious routines
- report system information for tracking purposes
- send stolen information to an external drop zone
- allow an attacker complete control of the affected machine
The infrastructure of these C&C servers has also improved over time. Servers are able to stay in use for far longer periods of time due to the use of increasingly sophisticated techniques. C&C servers have been implemented in ways that make them resilient to takedowns, difficult to detect, and hard to trace back to their origins. In this post, we describe the most popular methodologies used to circumvent security solutions and maintain control for longer periods of time, starting with the more sophisticated techniques. This may give some insight into how attackers operate and how their activities can be stopped.
Diving into the Deep Web
The Deep Web offers anonymity to its users. As a result, various threat actors have used and abused it for their activities; it is also not easily reached by everyday Internet users, which makes the malicious traffic associated with Deep Web sites more difficult to detect and identify. Among the networks/technologies that make up the Deep Web are Tor (The Onion Router), Freenet, and I2P.
We’ve long known that the Deep Web is in use by cybercriminals. It was first “popularized” as an online marketplace, and this usage persists to this day, with illegal drugs being a very popular commodity. Crypto-ransomware families now frequently use the Deep Web to hide their payment sites, in order to make shutting these down more difficult. However, we have noted that newer variants of these malware families are using Tor and I2P for their C&C servers as well, both to allow malware to “phone home” and to retrieve the encryption keys needed by crypto-ransomware.
P2P: Joining the Peers
A peer-to-peer (P2P) network is a distributed virtual network of participants that connect to each other instead of to a central server. This makes tracking, blocking, and takedowns by security researchers more difficult. In a peer-to-peer network, “servers” and “hosts” are one and the same, making it harder to identify the actual source of commands. The actual C&C server appears to be just another peer of the botnet, and can spread new information to the other peers.
The massive peer-to-peer network that can be formed in this manner can be used to propagate binary updates, distribute configuration files and send stolen data. Among the malware families that are known to use this method are ZeroAccess, TDSS and Gameover ZeuS.
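The propagation described above can be illustrated with a small simulation. The sketch below is not taken from any real malware family; it simply models "push" gossip, where an update seeded at a single peer spreads through the mesh without any peer knowing where it originated. All names and parameters are illustrative.

```python
import random

class Peer:
    """A simulated P2P bot: tracks the newest config version it has seen."""
    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.config_version = 0   # 0 = no update received yet
        self.neighbors = []

    def gossip(self):
        # Push the newest known config version to a few random neighbors.
        for other in random.sample(self.neighbors, min(3, len(self.neighbors))):
            if other.config_version < self.config_version:
                other.config_version = self.config_version

def simulate(num_peers=50, rounds=10, seed=1):
    """Seed an update at one peer and count how many peers receive it."""
    random.seed(seed)
    peers = [Peer(i) for i in range(num_peers)]
    for p in peers:
        p.neighbors = [q for q in random.sample(peers, 5) if q is not p]
    peers[0].config_version = 1   # the hidden operator seeds a single peer
    for _ in range(rounds):
        for p in peers:
            p.gossip()
    return sum(p.config_version == 1 for p in peers)
```

After a handful of rounds the update typically reaches most of the mesh, yet an observer watching any single infected host only ever sees traffic to and from other peers, never a central server.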
Use of cloud services
To set up a C&C server, an attacker can opt to use underground services such as bulletproof hosting and run their servers from there. While this is certainly an option, attackers may prefer alternatives that aren’t as obviously malicious. One such alternative is to abuse various online services as C&C servers. Among the services used for this role are Dropbox, Facebook, Google Drive, Twitter, and Yahoo Forums. Since 2014, targeted attacks and cybercriminals alike have taken advantage of these services.
In June 2014, we saw a targeted attack that hit a government agency in Taiwan that used a PlugX RAT that stored its C&C settings in Dropbox. Around the same time, another targeted attack campaign used Google Drive to handle its callback communications. Other cases have used Dropbox to host malware, and Pinterest as a C&C channel.
Cloud infrastructure providers are a potential risk as well. Attackers can compromise existing instances and then install the modules needed to act as a C&C server. In both scenarios, however, it is nearly impossible for C&C servers to stay active for a very long time, as providers take down abusive instances once detected. This churn is not necessarily a problem for attackers: replacements are cheap to set up, and short-lived servers are harder to blacklist effectively. The chart below shows the top malware families that use C&C servers hosted on cloud infrastructure services.
Figure 1. Some malware families that have had C&C servers on cloud infrastructure services
Needle in a haystack: domain generation algorithms
Botnets use domains generated by domain generation algorithms (DGAs) to make detection of their server infrastructure more difficult. This technique was popularized by DOWNAD/Conficker years ago, which used it to generate and check between 250 and 50,000 domains a day, depending on the variant. This technique is designed to overwhelm traditional blacklisting solutions.
Since then, malware authors have formulated different algorithms in order to generate massive numbers of domains to hide their real C&C servers. As a result, DGA-using malware families such as CRILOCK, PUSHDO, NIVDORT and Gameover ZeuS had the most C&C domains in use last year. Together with the use of fast flux DNS techniques, this obscures the locations of C&C servers across various hosts.
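The core idea behind a DGA can be sketched in a few lines. The example below is illustrative only: the seed string, hashing scheme, and TLD rotation are hypothetical, not taken from any real family. The key property is determinism: the malware and the operator run the same algorithm with the same date, so the operator need only register one of the day’s candidate domains while defenders face hundreds.

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 250):
    """Derive a deterministic list of pseudo-random domains for a given day.

    Both the bot and its operator compute the same list, so the operator
    only needs to register (and pay for) one candidate out of `count`.
    """
    tlds = [".com", ".net", ".org", ".info"]
    domains = []
    for i in range(count):
        data = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(data).hexdigest()
        # Use 12 hex characters as the label and rotate through the TLDs.
        domains.append(digest[:12] + tlds[i % len(tlds)])
    return domains
```

Because the list changes every day, blacklisting yesterday’s domains does nothing against today’s, which is exactly why DGAs overwhelm static blocklists.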
Multi-level C&C servers
A typical C&C attack uses a “simple” architecture where affected victims talk directly to servers. However, there is no need for this to be the case. First-level servers may only be proxies that get their commands from a second server “higher up” on the C&C chain. This is analogous to a military chain of command, where lower-ranking officers receive orders from higher-ranking ones.
One particular advantage of this setup is that it makes the higher-level servers much more difficult to detect. Researchers can see and identify first-level servers, but unless they can inspect all of those servers’ own network traffic, they cannot locate the actual C&C server.
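The layering described above amounts to simple request forwarding. The sketch below is a minimal, hypothetical model (the command strings and function names are invented): victims are configured only with first-level proxies, so the second-level server that actually issues commands never appears in their traffic.

```python
def make_proxy(upstream):
    """A first-level 'server' that merely relays bot requests upstream.

    Victims only ever see the proxy's address; the upstream server that
    actually decides what to do stays hidden behind it.
    """
    def handle(bot_request):
        return upstream(bot_request)
    return handle

def real_c2(bot_request):
    # The second-level server holds the actual command logic.
    commands = {"checkin": "sleep 3600", "ready": "download update"}
    return commands.get(bot_request, "noop")

# Bots are configured with first-level proxies only; taking down a proxy
# costs the operator nothing but a replacement relay.
proxy_a = make_proxy(real_c2)
proxy_b = make_proxy(real_c2)
```

From a defender’s viewpoint, blocking `proxy_a` or `proxy_b` leaves `real_c2` untouched, which is why takedowns of first-level infrastructure alone rarely stop a campaign.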
Use of public registrars
Another (optional) step in setting up a C&C server is acquiring a domain. (One could use just an IP address, but this makes detection and blocking of these servers easier.) The list below shows the most popular registrars where C&C domains were registered. Nothing about this data indicates that these registrars are complicit in malicious activities; the registrars on this list are all popular and well-known. It only shows that cybercriminals are also inclined to use them. The list below is based on the most popular registrars used by the active domains we have discovered and monitor:
- Bizcn.com
- DynaDot LLC
- ENOM Inc
- GoDaddy.com
- Internet AG
- Melbourne IT
- Network Solutions
- Public Domain Registry
- R01-RU
- REGRU-RU
- RU-CENTER
- TLDS
- Todaynic.com
- Tucows Inc.
- Vitalwerks Internet Solutions
It is very difficult to know at the time of registration whether a domain will be used for fraudulent activities. Registrars with active abuse detection processes can spot these domains once they become active, so C&C servers that use domain names registered with large registrars tend to be short-lived. However, registering new domains is cheap and easy, so short lifespans matter little to attackers; this also contributes to the ever-increasing number of C&C domains.
In addition, domain privacy services are used to make it more difficult to identify the true actors behind these C&C servers. These services are perfectly legitimate: many site owners use them to protect themselves from spam (both electronic and physical). Figures 2-4 show the information revealed by WHOIS queries for such domains. Many C&C domains were seen to use these anonymity services. Domain privacy, together with other dynamic technologies such as fast-flux networks, makes tracking the attackers using publicly available information nearly impossible. In addition, some registrars do not validate the information provided, rendering it unreliable.
Figures 2-4. WHOIS results for URLs with domain privacy (click to enlarge)
Use of compromised sites
The chart above shows that botnets created with the ZeuS malware kit are among the most common users of C&C servers hosted on cloud providers. ZeuS also uses compromised sites for some aspects of its C&C functionality, because it uses a relatively simple communication infrastructure.
ZeuS downloads an encrypted configuration file from its C&C servers. This file contains all the commands and information needed by ZeuS to carry out its activities. This makes it easy for cybercriminals to use compromised sites as C&C servers, as they only need to upload their configuration file. Of course, this is still independent from the main C&C servers, which host the ZeuS control panel (and may not be hosted on compromised sites).
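ZeuS is known to protect its configuration file with the RC4 stream cipher, using a key fixed when the bot is built. The sketch below implements textbook RC4 to show the idea; the key and configuration contents are hypothetical, and real ZeuS builds add further layers (packing, integrity checks) omitted here.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4; encryption and decryption are the same operation."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Hypothetical config blob: the bot fetches this opaque file from a
# compromised site and decrypts it with the key embedded at build time.
key = b"example-build-key"                                   # illustrative key
config = b"url_server=http://203.0.113.5/gate.php\ntimer=60\n"
blob = rc4(key, config)        # the encrypted file hosted on the site
recovered = rc4(key, blob)     # what the bot sees after decryption
```

To the compromised site’s owner, the hosted file is just an opaque blob of bytes, which is part of why such sites can serve C&C duty unnoticed for long periods.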
There are multiple ways to compromise sites. Some approaches that have proven very effective are the following:
- Targeting web services that still use default settings. This includes weak passwords for management consoles, which can easily be found via Internet-wide scanning and broken into using brute-force attacks.
- Public exposure/leakage of code, access credentials, and other information. For example, developers may upload their source code into public repositories (such as GitHub), exposing credentials for others to view.
- Exploiting known vulnerabilities in web services. Zero-day vulnerabilities need not be used here; old, already-patched vulnerabilities work perfectly well, as many websites run older, unpatched versions of software that remain vulnerable. For example, the Rodecap malware (known for use in sending spam) is suspected of using compromised C&C servers. It is believed that these sites were compromised because they ran outdated (and vulnerable) versions of content management systems.
Conclusion
The techniques used by attackers have changed over the years, reflecting changes in the threat landscape as well as steps put in place over the years to mitigate these threats. Today, detecting and mitigating C&C servers can be remarkably difficult.
These techniques make traditional blacklisting less reliable; advanced solutions such as Trend Micro Deep Discovery Inspector (DDI) rely on network behavior detection that can more reliably stop these threats. A part of the broader Deep Discovery suite, DDI monitors all TCP/IP ports as well as multiple protocols in order to detect potentially malicious C&C activity over the network. The threat information gathered from DDI is also fed back into the Smart Protection Network, providing all users faster protection against new and emerging threats.
Only through the use of these new tools can network administrators successfully detect C&C activity on their networks and halt any potential threats before they can cause more damage. This protects a network from multiple threats, as well as providing more rapid mitigation for any threats that succeed in breaching an enterprise’s defenses.