Imagine this scenario: A man enters a bank after normal business hours. He presents his employee credentials to the badge scanner and is properly authenticated. He then goes to the custodial closet, grabs a mop and bucket, and proceeds toward the door that leads to the bank teller area, where he again properly authenticates himself at another badge scanner. He enters the bank teller area and begins to clean the floor. After a predetermined amount of time, the man stops cleaning the floor and begins to tamper with the lock on the bank vault.
Does this scenario sound familiar? While it may seem like a physical security problem, information security professionals should take notice, too, because an advanced persistent threat (APT) infiltrates an enterprise in much the same way as the would-be janitor. With valid authentication credentials, a digital adversary can enter an enterprise's network seemingly legitimately and proceed to wander into sensitive areas where valuable data can be stolen.
To improve APT detection, organizations are increasingly moving toward context-aware security setups, adopting an all-encompassing approach toward malware detection in place of the myopic approach of years past. In this tip, we’ll explain what is meant by context-aware security and how such an approach can help enterprises detect and defeat advanced threats.
Why context-aware security is needed
To understand contextual security, we must first understand the sort of attacks that are cropping up in enterprise settings. A common type of APT attack involves embedding Trojan horse code in PDF documents. There are many variations on this type of attack, but in general, the malicious PDF is delivered as an email attachment. When the unsuspecting email recipient opens the attachment, malicious code is unleashed, but it doesn't immediately execute. Antimalware suites often observe code for only a predetermined amount of time, so attackers program the Trojan to delay execution until the antimalware program is no longer watching. When the Trojan does finally execute, it discreetly begins collecting data and sending GET requests to commonly visited sites to test network connectivity. If it detects an active network connection, the Trojan initiates a status beacon message to a command-and-control node located somewhere in the cloud.
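One telltale sign of this beaconing behavior is its regularity: automated check-ins to a command-and-control node tend to occur at near-constant intervals, unlike human browsing. The sketch below, using hypothetical connection-log data and made-up host names, flags external hosts that are contacted on a suspiciously steady schedule.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical connection log: (timestamp in seconds, destination host).
# In practice this would be parsed from firewall or proxy logs.
conn_log = [
    (0, "c2.example.net"), (600, "c2.example.net"),
    (1200, "c2.example.net"), (1800, "c2.example.net"),
    (95, "www.news-site.example"), (4000, "www.news-site.example"),
]

def find_beacons(log, min_connections=4, max_jitter=30.0):
    """Flag hosts contacted at near-constant intervals -- a beacon pattern."""
    by_host = defaultdict(list)
    for ts, host in log:
        by_host[host].append(ts)
    suspects = []
    for host, times in by_host.items():
        if len(times) < min_connections:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Very low variation between gaps suggests an automated check-in.
        if pstdev(gaps) <= max_jitter:
            suspects.append((host, mean(gaps)))
    return suspects

print(find_beacons(conn_log))  # c2.example.net checks in every 600 seconds
```

The jitter threshold and minimum connection count are illustrative; real malware often randomizes its check-in interval, so production beacon detection uses more robust statistics than a simple standard deviation.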
At this point, the attacker exports as much data as possible and uses it for whatever means is necessary to make money, whether that be selling it on the black market, conducting industrial espionage or some other purpose. As malware authors continue to introduce new antivirus evasion techniques, enterprises must learn how to detect attacks that have slipped through the net and are living on the network. As the Mandiant APT1 report illustrated to the security community, attackers are capable of staying inside an organization’s network for years if enterprises lack robust measures to detect and remediate attacks.
Getting started with contextual security
To combat these increasingly common scenarios, organizations must implement four lines of defense. First, they must use rule sets, usually in conjunction with an intrusion detection system such as Snort. Second, they should utilize statistical and correlation methods to analyze the latest trends in malware. Third, they should monitor for unusual data exfiltration attempts. Fourth, and perhaps the most overlooked of the four, they should manually examine event logs. Let's discuss each one in greater detail.
Rule sets: In the case of rule sets, the results are mixed due to the fact that effective rule sets are typically reactionary as opposed to proactive. Simply put, rules are usually only formulated after an attack vector has been identified. This is not to say that rule sets are ineffective, however. Security administrators would be derelict in their duties if they failed to stay abreast of the latest attack vectors in an effort to formulate effective rule sets. Furthermore, formulating effective rule sets is a fundamental portion of the contextual approach to network security.
For example, an enterprise may allow employees to access network resources remotely via Secure Shell (SSH). However, if the network detects that a valid login was conducted, but it was conducted from outside of the normal geographic location, a rule can be configured to set off an alert so security administrators can be notified to look into the login more attentively. Are remote logins allowed? Yes. But what if the valid login appears to come from China? Then perhaps more scrutiny is required. Furthermore, the alert will be factored into the statistical and correlation algorithms mentioned below to provide a more high-level view of network activity.
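The geographic-anomaly rule above can be sketched in a few lines. This is a minimal illustration, not a real rule engine: the GeoIP lookup is stubbed with a static table (real deployments would query a GeoIP database), and the IP addresses and usernames are hypothetical.

```python
# Countries from which logins are expected for this hypothetical enterprise.
EXPECTED_COUNTRIES = {"US", "CA"}

# Stand-in for a real GeoIP database lookup.
GEOIP = {
    "203.0.113.7": "CN",
    "198.51.100.12": "US",
}

def check_ssh_login(username, source_ip):
    """Return an alert string for valid logins from unexpected countries."""
    country = GEOIP.get(source_ip, "UNKNOWN")
    if country not in EXPECTED_COUNTRIES:
        return f"ALERT: valid login for {username} from {source_ip} ({country})"
    return None  # Login is within policy; no alert.

print(check_ssh_login("alice", "203.0.113.7"))   # triggers an alert
print(check_ssh_login("bob", "198.51.100.12"))   # None -- within policy
```

Note that the login itself is still allowed; the rule only raises an alert for human follow-up, which is the essence of the contextual approach.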
Statistical and correlation methods: The utilization of statistical and correlation methods has been considered a viable method for quite some time, but in the case of contextual security, it is perhaps the cord that ties all of the other methods together as it meshes well with rule sets, log examinations and data exfiltration monitoring. Correlation methods are used to examine whatever alerts are currently configured and to look for relationships between each alert that is triggered. These relationships can be with regard to type of alert, port number or any other type of selector configured by the security administrator. Statistical methods do not rely on prior knowledge of an attack vector, but rather on the time and frequency of a set of alerts.
In keeping with the remote login example above, consider a scenario where all kinds of seemingly valid network logins occur throughout a given time period, perhaps throughout the course of a week. Because the logins are valid, few, if any, alerts are created. However, upon further examination of the logs, security administrators may notice a disturbing number of logins at approximately 3:00 AM local time on various days throughout the week. Now, these logins may be perfectly valid and explainable as the enterprise may employ personnel that are perpetual night owls, and therefore prefer to work through the wee hours of the morning. However, thanks to statistical and correlation methods, more analysis can be conducted into whether this is in fact normal activity.
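A simple statistical check along these lines counts how many otherwise-valid logins land in an off-hours window. The timestamps and threshold below are hypothetical; in practice the times would be parsed out of syslog or a SIEM export, and the baseline would be derived from historical activity rather than hard-coded.

```python
from datetime import datetime

# Hypothetical auth-log timestamps (local time) over the course of a week.
login_times = [
    "2013-06-03 03:02", "2013-06-04 02:58", "2013-06-05 03:11",
    "2013-06-03 09:15", "2013-06-04 14:40",
]

def off_hours_logins(times, start_hour=1, end_hour=5, threshold=3):
    """Count valid logins in the 'wee hours' window; flag if at threshold."""
    hours = [datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in times]
    count = sum(1 for h in hours if start_hour <= h < end_hour)
    return count, count >= threshold

print(off_hours_logins(login_times))  # (3, True) -- worth a closer look
```

As the article notes, a flag here is not proof of compromise; the night-owl employee is a perfectly plausible explanation, which is why the result feeds further analysis rather than an automatic block.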
Monitoring: Widely considered the most important portion of a context-aware security paradigm, examining and blocking data exfiltration attempts is the last line of defense when attempting to combat APT attacks. It is incumbent upon the security administrator to know what should and should not be leaving his or her network. The examination of all RAR files, for instance, should be a high priority; attackers commonly gather multiple files into a RAR and attempt to exfiltrate it, figuring many organizations aren’t monitoring for them. Security administrators should also consider implementing a proxy server such as Squid for all HTTP and HTTPS traffic.
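Because attackers can rename a RAR archive to anything before exfiltrating it, content inspection beats filename matching. A RAR file begins with a fixed magic-byte signature regardless of its extension, so a minimal outbound-payload check might look like the following sketch (the payloads shown are fabricated stand-ins for real traffic):

```python
# RAR archives start with fixed magic bytes, independent of file name:
# RAR 4.x: 52 61 72 21 1A 07 00; RAR 5.x: 52 61 72 21 1A 07 01 00.
RAR4_MAGIC = b"Rar!\x1a\x07\x00"
RAR5_MAGIC = b"Rar!\x1a\x07\x01\x00"

def looks_like_rar(payload: bytes) -> bool:
    """Check whether an outbound payload begins with a RAR signature."""
    return payload.startswith((RAR4_MAGIC, RAR5_MAGIC))

print(looks_like_rar(b"Rar!\x1a\x07\x00...archive data..."))  # True
print(looks_like_rar(b"PK\x03\x04...zip data..."))            # False
```

In a real deployment this check would run inside a proxy such as Squid (e.g., via an ICAP content-adaptation service) so that HTTP and HTTPS uploads can be inspected before they leave the network.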
Log examinations: Finally, enterprises should heavily emphasize the need to manually examine logs. Automating log reviews with tools such as Splunk is a popular technique, and when operating in a highly trafficked network, automation is indeed a necessity. However, when attempting to discover new attacks against a network, nothing is as effective as human observation and intuition. When manually examining logs, security pros should first and foremost look for large files (especially RAR and ZIP files) leaving the network, as mentioned above. However, one should not overlook the exfiltration of items such as log files, sensitive .xlsx files and .pst files, as all of these could (and often do) contain sensitive information. Furthermore, an examination should be done with regard to what sites are being connected to from inside the organization. Human intuition, along with informed experience, should alert the security administrator to any site that looks suspicious, which could then spawn a new network monitoring rule to block that avenue of attack in the future.
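Even manual log review benefits from a first-pass filter that surfaces the outbound transfers worth a human's attention. The sketch below assumes a hypothetical proxy-log format and made-up URLs; the point is simply to narrow thousands of lines down to the handful touching sensitive file types.

```python
import re

# Hypothetical proxy log lines: timestamp, user, method, outbound URL.
log_lines = [
    "2013-06-05 03:14 alice POST http://upload.example.net/q1-forecast.xlsx",
    "2013-06-05 09:02 bob GET http://intranet.example.com/index.html",
    "2013-06-05 03:16 alice POST http://upload.example.net/mail-archive.pst",
]

# File types the article calls out as frequent exfiltration targets.
SENSITIVE_EXTS = re.compile(r"\.(rar|zip|xlsx|pst|log)\b", re.IGNORECASE)

def flag_sensitive_transfers(lines):
    """Return outbound (POST) requests touching file types worth review."""
    return [ln for ln in lines if "POST" in ln and SENSITIVE_EXTS.search(ln)]

for hit in flag_sensitive_transfers(log_lines):
    print(hit)
```

The filter is only a triage aid: the human eyes the article emphasizes still decide whether `upload.example.net` is a legitimate destination or grounds for a new blocking rule.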
As attackers become better at hiding out on enterprise networks, organizations need to be aware of the context surrounding security events to better sniff out APTs. Yes, this means setting up the right kind of alerts based on previous attack vectors and correlating the information between triggered alerts, but most of all, this means having some human eyes monitoring data leaving the network and looking over logs. If an enterprise can’t connect all the dots across its network, it will be unable to fend off a new breed of persistent, stealthy malware.
About the author:
Brad Casey holds a Master of Science in information assurance from the University of Texas at San Antonio and has extensive experience in the areas of penetration testing, public key infrastructure, VoIP and network packet analysis. He is also knowledgeable in the areas of system administration, Active Directory and Windows Server 2008. He spent five years doing security assessment testing in the U.S. Air Force, and in his spare time, you can find him looking at Wireshark captures and playing with various Linux distributions in virtual machines.