Four excellent retrospective reports have become available recently that provide illuminating analysis of the state of information security over the past year. Mandiant's M-Trends Report, Trustwave's Global Security Report, Verizon's Data Breach Investigation Report, and the Microsoft Security Intelligence Report are all brimming with extremely useful data and solutions for IT organizations of all sizes. Two daunting issues face those who wish to digest this valuable data. The first is the sheer size of these reports, and the second is the sample bias present in each. The following article attempts to address the first concern by summarizing key findings of each report. The second issue, sample bias, should be addressed by the aggregation of all four reports. Additionally, by combining the findings of all four reports, I hope to provide a broader picture than any one report presents.
The Mandiant M-Trends report (http://www.mandiant.com/index.php/news_events/forms/m-trends_2012_an_evo... - registration required) focuses on Mandiant's case load over the previous year. Some findings should definitely be interpreted with this fact in mind. For instance, the breakdown of industries being targeted by APT should more aptly be titled the breakdown of industries which, when targeted by APT, contract with Mandiant to perform incident response and investigation.
Some key points in the M-Trends report include the finding that many attackers use readily available malware and remote access tools. This fact is somewhat surprising given that most antivirus should be able to detect these tools, but M-Trends postulates that because many of these tools have legitimate purposes they are categorized by AV into special groupings that are often allowed by organizational policy and thus escape detection. M-Trends also reports that these tools are often installed on systems alongside custom tools. This allows attackers an alternative access system should the garden variety malware be detected and removed. It seems attackers work with the off-the-shelf malware when it works and fall back to more complex custom malware if necessary.
The Mandiant report is also remarkable in that, in every case cited, legitimate credentials were used as part of the compromise. This is probably a well-known but little-examined fact. Attackers will target legitimate credentials in order to access network resources. These credentials might be acquired in a number of ways (brute force, keystroke logging, phishing, etc.) but once acquired are used to expand an attacker's foothold.
Like a growing chorus of security experts, the M-Trends report states that security breaches are inevitable and that detection and response efforts deserve more focus. The report points out that despite the enormous investment in preventative efforts (antivirus, perimeter defense, etc.) breaches occur and in the vast majority of cases cited (94%) the victim was notified by a third party.
With respect to recommended improvements, the M-Trends report points out that much of the evidence of a compromise can only be preserved remotely. Remote system event logging, recovery snapshots, system backups, etc. retained evidence of compromises in many of the incidents investigated. Remote event logging is not enough, however, as many of the events detected in cases cited might have gone unnoticed. Active log monitoring is essential to detecting an ongoing attack. The report highlights the use of common malware, which often blends into the background noise of malware detection alerting systems, and could go unnoticed or un-investigated even when it is picked up by antivirus.
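As a minimal illustration of what active monitoring of centralized logs can mean in practice, the sketch below aggregates antivirus alerts per host and flags hosts for follow-up. The log format, hostnames, and `AV_ALERT` marker are invented for the example; real deployments would parse whatever their SIEM or syslog infrastructure emits.

```python
import re
from collections import Counter

# Hypothetical centralized log line format: "<host> <event> <details>"
AV_ALERT = re.compile(r"\bAV_ALERT\b")

def flag_av_alerts(lines, threshold=1):
    """Count antivirus alerts per host and flag any host at or above
    the threshold for follow-up investigation, so alerts don't vanish
    into background noise."""
    counts = Counter()
    for line in lines:
        host, _, rest = line.partition(" ")
        if AV_ALERT.search(rest):
            counts[host] += 1
    return {h: n for h, n in counts.items() if n >= threshold}

logs = [
    "web01 AV_ALERT psexec.exe quarantined",
    "web01 LOGIN user=admin",
    "db02 AV_ALERT gsecdump.exe quarantined",
    "web01 AV_ALERT rar.exe quarantined",
]
print(flag_av_alerts(logs))  # web01 has two alerts, db02 has one
```

The point is not the trivial counting, but that the follow-up happens at all: as the M-Trends report notes, even "common" malware alerts deserve investigation.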
The M-Trends report highlights the use of web shell backdoors and the strategy of attackers to timestomp files (changing timestamps so they appear legitimate) in order to hide tools in plain sight. The use of integrity checking and filesystem monitoring software could be extremely effective in combating such tactics, as long as the software is sophisticated enough to detect new or altered files independent of timestamps or other filesystem metadata that could be altered by an attacker.
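A content-hash baseline is one way to implement such timestamp-independent integrity checking. The sketch below (my own illustration, not any specific product) compares SHA-256 digests of file contents, so a timestomped web shell still shows up as a new or altered file:

```python
import hashlib
from pathlib import Path

def baseline(root):
    """Record a content hash for every file under root. Hashes, unlike
    timestamps, cannot be 'timestomped' to hide modifications."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def diff(old, new):
    """Report files added or altered since the baseline, ignoring
    filesystem metadata entirely."""
    added = sorted(set(new) - set(old))
    changed = sorted(p for p in old if p in new and old[p] != new[p])
    return added, changed
```

In practice the baseline would be stored off-host (an attacker who can timestomp can also rewrite a local baseline), which ties back to the report's point about preserving evidence remotely.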
The Mandiant report draws a number of conclusions, many of which have been made by several similar reports, including the need for improved network architecture, creating an "investigation-ready" environment (centralized logging, security event management, asset inventory, a standing incident response team, IR tabletops, etc.), enhancing authentication and authorization mechanisms (including upgrading to Windows 7), and investing in good security people (my favorite). The final portion of the report highlights the evolutionary nature of adversaries and the blending of technique between APT and financially motivated adversaries. The report states that "visibility is paramount, smart people are more important than technology" and emphasizes incident response preparedness and capabilities in order to combat an increasingly persistent threat that adapts, shifting between sophisticated and tried-and-true strategies, to infiltrate and maintain a presence in target organizations.
The Trustwave 2012 Global Security Report (GSR) (https://www.trustwave.com/global-security-report - registration required) is a compilation of reflections on incident response as well as penetration tests conducted by Trustwave Spider Labs. Like other reports, data derived about the sources of compromise and industry of the victim should be regarded as the proportion of Trustwave clients rather than of the overall internet ecosystem.
The Trustwave report emphasizes that every business is a target, if only as a target of opportunity. Like other reports, it also points out that antivirus is largely ineffective. Similarly the report points out that existing defensive mechanisms, such as firewalls, although helpful, are not an effective replacement for secure network architecture.
One interesting conclusion of the Trustwave GSR is that third parties responsible for security of services were often the source of security breaches. This finding is particularly poignant in an era where an increasing number of services and assets are moving into the cloud, where third parties are often responsible for key components of systems' security posture. It is incumbent upon organizations, when they retain third party services, to verify those services. Sadly, however, many such organizations lack the capability to verify security processes (which is presumably why they hire third parties in the first place).
User account management, including credential management and access and authentication auditing, is highlighted in the report. Like other reports, the GSR identifies misuse of legitimate credentials as a large factor in most breaches. The failures of account management also include default or weak credentials not just on workstations, but also on networking devices and other network-enabled equipment, which are easily exploitable points of compromise.
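As a rough illustration of the default-credential failure mode the GSR describes, an audit script could sweep a device inventory for factory defaults. The credential list and inventory format here are invented for the example; a real audit would pull from an asset database and a vendor-default wordlist.

```python
# Illustrative set of factory default credential pairs.
DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def audit_credentials(devices):
    """Return the hosts whose configured credentials match a known
    default pair; these are easily exploitable points of compromise."""
    return [d["host"] for d in devices
            if (d["user"], d["password"]) in DEFAULTS]

inventory = [
    {"host": "switch01", "user": "admin", "password": "admin"},
    {"host": "printer02", "user": "svc", "password": "Xk9!fq2"},
]
print(audit_credentials(inventory))  # only switch01 is using a default
```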
Like the M-Trends report, the Trustwave report also places a special focus on malware. The Trustwave report states that attackers often hide malware in plain sight. In the Trustwave database of malware collected during incidents, antivirus could identify 60% of samples (which is much higher than the industry accepted expected identification rate of about 20%). However, custom malware, especially malware not designed to self-propagate, easily flew under the radar of antivirus. The Trustwave analysis also included custom malware developed specifically for software used in the target business. The use of malware in attack scenarios was a common trend among all four reports and should be telling.
In addition to the common trend of malware use in breaches, the Trustwave report also pointed out the commonality of data exfiltration. According to the report, HTTPS is overtaking other protocols (such as FTP) as the preferred means of exfiltration. When this method does not work, the Trustwave report, like others, noted attackers leaving backdoors and returning to manually retrieve data.
Also complementing the M-Trends report is the GSR's assertion that breach is an inevitable event and that security planning and posture must be guided by this eventuality.
Trustwave also highlights the following as simple controls that could have mitigated many breaches: "strong passwords, secure remote access, least privilege and patch management." In addition to these recommendations the Trustwave report recommends user education, account management, enterprise architecture (standardizing on certain technologies), asset management, centralized logging, and visualization to assist in data analysis of security information.
The special section "Security Weaknesses under the Microscope" examines four topics of interest that plague modern enterprises: network, email, web and mobile. Each section presents challenges facing security in the particular arena and makes recommendations for mitigation and protection strategy. The section is an attempt to take stock of the state of affairs, determine if the situation is improving or declining, and highlight the source of successes or failures.
The network section points to many common legacy problems that continue to confront modern infrastructures. Poor account management, authentication mechanisms and weak password choices are all highlighted as issues. The report also points out issues with layer 2 networking and the ease with which attackers can manipulate protocols such as Address Resolution Protocol (ARP) to change routing and intercept traffic. This issue is exacerbated by the continued use of unencrypted and legacy protocols. Remote access is also highlighted as a common source of compromise. With an ever increasing mobile workforce, this issue is certain to become a larger challenge over time (especially when combined with existing authentication mechanisms). The continued use of weak wireless architecture and protocols, storage of sensitive data outside secured zones and misconfiguration of protective mechanisms (such as firewalls) round out the top issues in the networking section.
The e-mail section highlights the overall decline in spam cluttering end user mailboxes. The report highlights the popularity of e-mail as an online activity (tied with web searches in terms of usage) and the enduring allure of e-mail as an attack vector as well as malware propagation tool. Although the report cites executable viruses as only 3% of all e-mail in their study period, that's still a hefty proportion.
The web section highlights the evolving and growing role that web applications play in compromises. The section utilizes the Web Hacking Incident Database (http://projects.webappsec.org/w/page/13246995/Web-Hacking-Incident-Database), a Web Application Security Consortium (WASC) project sponsored by Trustwave, for data to support analysis. Again, there is a sample bias as the database only includes public data about breaches and only included 300 incidents for the study period. The study did highlight the danger organizations face from "hacktivists," a theme reiterated in the Verizon Data Breach Investigations Report. The risks faced by web applications include injection attacks, logic flaws, authentication bypass, session handling flaws, information disclosure and vulnerable third party software. None of these should surprise anyone who has worked in web application security, and they track relatively closely to the OWASP Top 10 list (https://www.owasp.org/index.php/Top_10_2010).
The section on mobile was interesting in that it is one of the few studies of mobile malware among these reports. Of particular note is the emergence of mobile banking trojans, location aware malware and the focus on Android as a target platform. The section concludes, however, that mobile is still in its infancy and it is hard to predict trends.
The report includes a study of four basic controls recommended to mitigate the threats analyzed throughout the report. These include an analysis of passwords and authentication mechanisms, a study of SSL, a look at antivirus and a section titled "Walking through Firewalls." Each section is a litany of failures of the specific technology. None of the results are shocking, but they are disheartening as the weaknesses of each have long been known. Poor password choices continue to plague industry, despite their long legacy. Without a clear alternative this creates a continuing area of weakness for most organizations. The SSL section is interesting because although SSL is a cornerstone technology in securing web applications, most users don't understand it and therefore don't place much faith in SSL as a protection mechanism. The results were illustrative that there are issues with SSL, specifically around certificates, but not so much so as to conclude the system is failing. The antivirus study is fairly predictable, as are the conclusions: AV doesn't really work but it's better than nothing. The firewall section, like passwords, is somewhat disheartening. The section concludes that firewalls provide a false sense of security, are prone to misconfiguration, and in many cases are easily bypassed.
The report concludes with an "Information Security Strategy Pyramid for 2012" that recommends six steps to combat the tide of attacks and the inevitable compromise. Employee education ranks first in the recommendations, not only as a primary line of defense against phishing but also as a way to teach users to act as "first detector[s]" in an incident - reporting suspicious activity. Identification of users is the second recommendation. This includes account management, auditing, multi-factor authentication and password complexity rules. Enterprise architecture is the third recommendation - that is, standardizing on a homogeneous set of hardware and software. This reduces the cost of implementing security controls and eases the burden of auditing. Maintenance becomes easier, as does response. Setting policy for hardware and decommissioning older systems assists in this effort. The fourth recommendation is registration of assets, or an asset inventory. Knowing what's on the network is a great aid to information security, containment and response. Asset management, network access control (NAC), patch management and vulnerability scanning are all emphasized as key supporting activities as part of this initiative. Centralized logging is the fifth recommendation, allowing for easy aggregation, data mining, and review of log activity. The sixth and final recommendation is data visualization as a tool for easing the burden of reviewing massive data sets, which is the primary downfall of centralized logging efforts.
The Verizon Data Breach Investigations Report (DBIR - http://www.verizonbusiness.com/resources/reports/rp_data-breach-investig...) is the industry leading breach report. Their data collection is based on incident response engagements by the Verizon RISK team as well as a number of participating law enforcement organizations (LEOs). All data collected for the report uses the VERIS format, which anonymizes victims and incidents and provides a common taxonomy for analysis and data mining. As with other reports there is sampling bias, especially in the demographic breakdown of industries affected by compromise. Rather than representing a total proportion of the internet population they represent a proportion of clients who hired Verizon or who had cases handled by the contributing LEOs.
It is also worth noting that the DBIR is one of the few reports that addresses the recent trend of "hacktivism" and includes breach data from perpetrators like Anonymous and LulzSec. The report also does a good job of breaking down incidents based on industry size to provide unique guidance to small to medium and large organizations. This breakdown provides some interesting data points in terms of divergent threats facing the two types of organizations.
The Verizon DBIR resoundingly concludes that the vast majority of incidents involve malicious outsiders using low complexity attacks. Internal agents (including the proverbial "malicious insider") and partner agents were responsible for less than 5% of breaches examined and resulted in loss of less than 1% of total records. The report is careful to point out that breaches only include data on incidents where data was disclosed to a third party, so data thieves caught before being able to disclose were not included in the sample set.
The DBIR cites malware in almost 70% of cases examined. Malware includes malicious software like viruses and trojans, as well as key loggers. Interestingly enough, malware was found in only about 30% of breaches at large organizations, perhaps a testament to their stronger ability to deploy antivirus and manage endpoints. The vast majority of malware (over 95%) was deployed directly by an attacker, rather than being executed via an e-mail attachment, drive by download, or user execution. This finding directly contravenes the Microsoft Security Intelligence Report (see below), which found that nearly 50% of all malware involved user interaction. In over 66% of cases where malware was detected, keystroke logging functionality was discovered, which complements the findings of the M-Trends report. Similarly, the finding that the second most common function of malware was to provide data exfiltration mechanisms echoes findings of the M-Trends and Trustwave reports. Also echoing the M-Trends report is the DBIR's finding that attackers install multiple back door access mechanisms after a compromise in order to facilitate re-entry and data exfiltration. Another point of commonality is the tendency of attackers to use well known sysadmin tools for malicious purposes. Just like the M-Trends report, the DBIR recommends responding to every antivirus alert as it could be indicative of a much larger compromise.
Hacking was involved in almost 80% of cases, the difference being that hacking involved the deliberate misuse and exploitation of software systems while malware involved software deployed on targets by attackers. Like the other reports the DBIR points to remote access services as the main avenue of external hacking attacks. Web applications also figure prominently in the list of targets as do backdoor or control channels (malware).
One interesting suggestion within the DBIR is the use of tools to examine systems to look for installations of legitimate tools that could be used for malicious purposes (such as pstools). Another is the recommendation to track changing (specifically inflating) archives such as .zip or .rar files on systems as indicators of compromise.
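The archive-tracking suggestion could be prototyped with a pair of filesystem snapshots; an archive that appears or grows between snapshots may be staged data awaiting exfiltration. The extensions and function names below are my own illustration, not anything prescribed by the DBIR.

```python
import os

ARCHIVE_EXTS = (".zip", ".rar", ".7z")

def snapshot_archives(root):
    """Record the current size of every archive file under root."""
    sizes = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            if name.lower().endswith(ARCHIVE_EXTS):
                path = os.path.join(dirpath, name)
                sizes[path] = os.path.getsize(path)
    return sizes

def inflating(previous, current):
    """Archives that appeared or grew since the last snapshot --
    a possible indicator of data being staged for exfiltration."""
    return sorted(p for p, size in current.items()
                  if size > previous.get(p, 0))
```

Run periodically (and with results shipped off-host), a check like this gives responders an early hint of staging activity well before the exfiltration itself.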
The DBIR is one of the few reports to focus on the massive amounts of data compromised from servers. Although workstations and other end points probably comprise the majority of compromised hosts, servers hold the most data and their compromise can be the most damaging. By breaking incidents down not only by host compromises, but also by records lost, the DBIR paints a clear picture of the danger centralized data poses in a compromise scenario.
Although the DBIR executive summary highlights that over 90% of all attacks were not very difficult (in terms of technical challenge), the report goes on to state that it is the initial compromise, in most cases, that is not difficult. After the initial compromise, the privilege escalation, establishing a beachhead, installing backdoors and pivoting through a target network do require more advanced techniques. The initial figure in the executive summary is a bit misleading as it suggests that attackers aren't very sophisticated, when in fact, according to the DBIR, they are. Attackers use simple methods to gain initial entry, then up their game and exploit much more difficult attack vectors to retain and expand access as well as find and exfiltrate valuable data.
As in years past the DBIR points to a disappointing timeline of breach to discovery. The vast majority of breaches occur in less than an hour, and data exfiltration occurs from minutes to days after the breach. The majority of breaches aren't discovered for months and then take days to weeks to contain and recover from. This means that attackers have a large timeframe to get in, take what they want, and clean up. Making matters worse, the vast majority of victims are notified by third parties. These findings point to an abject failure of security to keep intruders out, and worse yet, to detect intrusions after they've occurred.
With respect to breach discovery, the report notes that large organizations are much better at detecting breaches than smaller ones. Echoing the Trustwave report, the DBIR vividly illustrates the ease with which analysts can spot anomalies when data is visualized properly. Graphs showing log size and remote access logs clearly demonstrate pockets of strange behavior. This "Sesame Street" approach to infosec (i.e. which of these things doesn't belong) is particularly striking because the visualization is a clear indicator of where issues are located, even in potentially voluminous log files. Complementing these examples is data showing that for the vast majority of incidents (84%), log data existed about the breach.
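The "which of these doesn't belong" check can be approximated numerically as well as visually. This sketch (my own, not from the DBIR) flags days whose log volume deviates far from the median, using the median absolute deviation so a single huge spike doesn't mask itself by skewing the average:

```python
from statistics import median

def outlier_days(daily_counts, k=5.0):
    """Flag days whose log volume deviates from the median by more
    than k times the median absolute deviation (MAD) -- a robust,
    scriptable version of eyeballing a log-size graph."""
    values = list(daily_counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # volumes are essentially flat; nothing stands out
    return sorted(day for day, v in daily_counts.items()
                  if abs(v - med) > k * mad)

counts = {"2012-01-01": 1000, "2012-01-02": 980, "2012-01-03": 1020,
          "2012-01-04": 990, "2012-01-05": 50000}
print(outlier_days(counts))  # the spike on Jan 5 is flagged
```

A graph makes the anomaly obvious to a human; a check like this makes it obvious to a cron job, which matters when the 84% of incidents with existing log evidence went unreviewed.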
The DBIR took an unusual step in this year's report and attempted to make inferences (lacking very solid data) about the effects of breaches. The report notes that typically the largest cost associated with a breach is the actual response and forensics, and that market share and company value typically emerge unscathed. The report notes a few exceptions but presents a rather optimistic picture of the effects of a compromise. Of course, the exceptions noted resulted in the victim companies going out of business, so while the likelihood is that a breach will not hurt an organization, if it does, one can conclude from the report, the damage could be fatal to an enterprise.
The report concludes with a list of the top 10 threats facing large organizations, and corresponding mitigations. These threats include: keyloggers and stolen credentials, backdoors, tampering, pretexting, phishing, brute force, and SQL injection. Broadly these fall into the categories of account management, malware, social engineering, and hacking. The mitigation strategies are well known, but for whatever reason often aren't implemented.
The Microsoft Security Intelligence Report (SIR) (http://download.microsoft.com/download/0/3/3/0331766E-3FC4-44E5-B1CA-2BD...) utilizes unique metrics to formulate findings based on malicious software identified by their Malicious Software Removal Tool (MSRT). This approach, as distinct from specific incident data, provides insights that are absent from the other reports. Despite the massive data set the limited perspective may skew some of the findings.
Overall the Microsoft SIR paints a relatively rosy picture of the threat landscape, with software security getting better, exploits moving away from focus on Microsoft products, and an increase in the ability to detect exploits and malware. While the shift away from malware targeting Microsoft software may be a result of a more robust security development lifecycle and automatic updates, it certainly isn't stopping malware from infecting systems. This trend merely demonstrates that attackers are choosing low cost (in terms of effort) targets and that Microsoft has outrun their hiking partners, not the bear.
Similarly, the report notes that the vast majority of detected malware requires user interaction. Again, this demonstrates the agility of attackers and their ability to continually re-focus on the most easily exploitable pieces of systems (which is often the end user). When technical exploits become more complex and difficult, straightforward deception and abuse of legitimate user privilege often proves sufficient.
The Microsoft report cites 0 day vulnerabilities as accounting for less than 1% of their sample, which again supports the notion of attackers chasing low hanging fruit. Although custom 0 day may account for a tiny fraction of malware, one can assume it is 100% effective and probably the method of choice for extremely targeted and stealthy attacks.
The report also categorizes rates of infection by operating system. The graphical representations of side-by-side comparison make a compelling argument for upgrading to the latest versions of Microsoft operating systems, as the rates of infection are dramatically lower for Windows 7 than Windows XP.
Much of the report discusses the newly proposed "Broad Street" taxonomy of malware. This is a new malware classification scheme that is utilized throughout the report. Although the information is useful, it seems somewhat extraneous to the analysis and conclusions of the report.
The Microsoft SIR, like other retrospective reports, makes the observation that social engineering attacks are an increasing threat to organizations. The report proposes a number of useful mitigation strategies including limiting user accounts, enforcing strong account access controls and auditing, and proactive social engineering incident response planning along with the canonical user education.
The report cites an interesting trend of decreasing vulnerability disclosures. This trend is cited as a result of better developer practices and software security. The conclusions are somewhat suspect, however. Declining disclosure could also stem from the increased monetization of the vulnerability market and the quiet death of the full disclosure security researcher community who have, in many cases, turned to profit motivated disclosures (selling vulnerabilities to brokers rather than disclosing them to the public). One needs only to turn to the VUPEN services page (http://www.vupen.com/english/services/lea-index.php) to understand why researchers may still be finding high impact vulnerabilities in prodigious numbers but not releasing them.
Similarly the report highlights a decline in reported vulnerabilities with low complexity of exploit. This is to say, exploits that are easy to carry out. This again, supports an optimistic outlook on the state of application security, but again, sample bias may be at play here. The data is culled from the National Vulnerability Database (http://nvd.nist.gov) of vulnerabilities with an assigned CVSS complexity ranking (http://www.first.org/cvss/cvss-guide.html). This list is far from comprehensive, and doesn't represent a complete, or even majority, sample of disclosed software vulnerabilities. For instance, I disclosed a vulnerability in the Drupal ImageField module in January of 2009 (http://www.madirish.net/node/280) that has never made it into the NVD.
In addition to malware, the Microsoft SIR runs through a number of other threats to end users including e-mail borne malware, phishing sites, fake antivirus scams, and social networking attacks. These sections include some interesting tidbits but ultimately all point to mitigation strategies in the "Managing Risk" section.
The "Managing Risk" section of the report provides a number of pointers to the Microsoft website, which outlines concrete steps organizations can take to protect themselves from the threats outlined in previous sections of the SIR. The websites referenced include guidance for IT administrators, software developers, and those developing education and awareness campaigns. In addition to these pointers, the "Managing Risk" section includes two subsections on malware removal and safe browsing. The malware removal section is particularly informative and highlights steps administrators can take to manually inspect and clean systems. Although the section recommends an automated tool be used for the job in most cases, it provides many interesting and useful tips for forensic investigators and incident responders.
The four reports represent a massive body of aggregate data about the state of information security in the last year. While each report is unique and worthwhile, in toto they present some very interesting aggregates, especially in terms of the commonality of threats and recommended solutions. Of these, the following stand out:
1. Aggregated log management is extremely valuable, and requires the ability to conduct meaningful analysis of this data.
2. Reducing complexity by standardizing on secure platforms greatly enhances security posture.
3. Detection of compromise is a capability that is sorely lacking in many organizations even though evidence often exists. It is important to utilize available evidence and follow up on key indicators of compromise.
4. An asset inventory is critical to improving visibility and understanding the security posture (as well as scope of any compromise) within an organization.
5. Misuse of valid credentials is a common factor to almost every breach. Attribution and authentication audit is essential to identifying patterns of compromise and determining scope of unauthorized use of legitimate credentials.
6. User education should be a critical component of information security strategies. Aware users are less likely to fall victim to attacks and more likely to alert infosec staff of anomalies and serve as an early warning system.
In terms of commonality in findings, the one point that stuck out more than any other is the inevitability of compromise. The resounding theme was that much time, effort, and money has been spent on prevention that can never be completely effective. Investment in detection and reaction capabilities is key to finding compromises, containing them quickly to limit damage, and returning organizations to normal business operations. There was little evidence presented that these capabilities exist in any effective form for most organizations. The complete lack of effective commercial solutions that address this need is probably telling.