Brian Weigel, Senior IAM Consultant, SecureITsource, Inc.
A Brief History of WannaCry
In May of 2017, a new strain of ransomware (malware that encrypts important files on the victim's machine and demands a ransom for the ability to recover the file contents) took the world by storm. It came in the wake of the ShadowBrokers' release of an NSA exploit targeting a protocol vulnerability that Microsoft had patched over a month before the outbreak (https://technet.microsoft.com/en-us/library/security/ms17-010.aspx). Fortunately, WannaCry contained some key design mistakes that allowed security researchers to help temper its spread (the discovery of a kill switch), and Microsoft released emergency patches for EOL systems to mitigate it further. Now we're facing what could effectively be called 'WannaCry v2', and it's again expected to have a massive, global impact. This raises the question: did we miss something? If so, what?
Patch Early, Patch Often
While a lot of focus has been put on the exploit WannaCry used to infect so many systems in such a short timeframe, the fact that Microsoft released a patch fixing this vulnerability over a month before the outbreak has largely fallen off the radar. This highlights a longstanding problem in the IT industry: patches are often viewed with skepticism, and organizations are often slow to implement them.
The prevailing justification for this hesitation is "it might break something that we're reliant upon". While this can happen (no QA program is perfect), as a general principle major software vendors put patches through rigorous validation and regression testing. If a patch does "break" something, one of two cases is true:
The vendor's QA process missed something. If this is the case, the vendor will gladly help resolve the issue and will appreciate the feedback to improve validation of future patches and releases. Note that this is the exception, not the norm.
Whatever the patch "broke" was reliant upon the exploit, intentionally or otherwise. If this is the case, then whatever the patch "broke" was not built, designed, or implemented properly in the first place. Work with the necessary parties to resolve the issue.
If you're still concerned about rolling out patches, there are steps you can take to stay up to date while mitigating the adverse effects of releasing a new patch or update:
Classify the patches/updates. Two basic categories are sufficient: security patches and feature updates/bug fixes. Security patches should be rolled out ASAP, while lower-priority patches can go through a more structured promotion/roll-out process.
Use automation to push updates and track update deployment. It's virtually impossible for a small team to roll out and validate a patch across 10,000 machines without some form of automation to push the update and take an inventory confirming that all devices are updated. Even one unpatched device can be enough for an exploit to gain entry into your network.
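The inventory step above can be sketched as a simple diff between a required security-patch baseline and what each machine reports as installed. This is a minimal illustration, not a real deployment tool; the machine names are hypothetical, and the KB identifiers are given only as examples of the kind of IDs a Windows inventory would return.

```python
# Hypothetical sketch: compare each machine's installed patches against a
# required security-patch baseline and report any machines with gaps.
# Machine names are illustrative; KB IDs are examples of MS17-010 rollups.

REQUIRED_SECURITY_PATCHES = {"KB4012212", "KB4012215"}

def find_unpatched(inventory):
    """inventory: dict of machine name -> set of installed patch IDs.
    Returns dict of machine name -> set of missing required patches."""
    missing = {}
    for machine, installed in inventory.items():
        gap = REQUIRED_SECURITY_PATCHES - installed
        if gap:
            missing[machine] = gap
    return missing

inventory = {
    "ws-001": {"KB4012212", "KB4012215"},  # fully patched
    "ws-002": {"KB4012212"},               # missing one rollup
    "srv-01": set(),                       # never patched
}

print(find_unpatched(inventory))
```

In practice the inventory would come from your endpoint-management tooling rather than a hard-coded dictionary, but the principle is the same: the report must cover every device, because the machines missing from it are exactly the ones you can't vouch for.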
This may all sound a bit harsh, but the simple fact is that the original outbreak would not have been nearly as bad if more people (and organizations) had been proactive in deploying patches as vendors release them. This leads to the next often-overlooked contributing factor…
Upgrade EOL and Unsupported Systems and Applications
A significant number of infected devices were running unsupported, end-of-life operating systems (Windows XP and Server 2003/2003 R2). The vendor's support policy for releasing security patches for these systems is well known and published (Server 2003 R2 reached EOL in July 2015), yet a large number of machines were still running these operating systems in production when WannaCry hit almost two years later.
There are a variety of reasons for this, ranging from compatibility of applications running on those systems to the cost of a system refresh. Whatever the case may be, your risk exposure explodes the day EOL arrives, and grows the longer these systems stay online past their EOL date. The reason is simple: new vulnerabilities are found regularly (and patched in supported systems), and attackers who find them are apt to hold onto them until EOL hits, to better ensure their attacks reach a wider audience without fear of the vendor catching wind and releasing a patch that closes that door.
The moral here is this: always be aware of upcoming end-of-life and end-of-support dates for your systems and applications. Be proactive and plan to be on a newer system before that date (preferably three to four patch cycles before, for some added breathing room). If, for some reason, you're forced to run an unsupported/EOL system, know that it is a prime target: take appropriate security precautions to protect it, and expect someone to try to exploit any new vulnerabilities discovered after its EOL date.
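Tracking those dates can be as simple as keeping a table of published EOL dates and flagging anything inside your planning window. The sketch below assumes a hand-maintained table; the window size and the date used as "today" are illustrative (the EOL dates shown are the published Microsoft dates).

```python
# Illustrative sketch: flag systems whose published end-of-life date falls
# within a planning window, or has already passed (negative days remaining).
from datetime import date

EOL_DATES = {
    "Windows Server 2003 R2": date(2015, 7, 14),  # published EOL
    "Windows 7": date(2020, 1, 14),               # published EOL
}

def eol_status(today, window_days=365):
    """Return {system: days_until_eol} for systems inside the window.
    A negative value means the system is already past EOL."""
    report = {}
    for system, eol in EOL_DATES.items():
        days_left = (eol - today).days
        if days_left <= window_days:
            report[system] = days_left
    return report

# Viewed from the day WannaCry hit, Server 2003 R2 had been EOL for ~2 years.
print(eol_status(date(2017, 5, 12)))
```

Anything this report flags with a negative number is exactly the "prime target" category described above and should be treated accordingly.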
Segment Your Network, and Patrol Your Perimeter
When you picture a critical military base, you immediately picture armed patrols along a gated, fenced perimeter, cameras everywhere, and checkpoints where you can only enter a few designated areas based on your clearance. Authorized people can enter and leave the premises, and a select few can enter more than one building or area, but no one has carte blanche access to everything. This is the same picture you should see for your enterprise network.
Firewalls and IDPS technologies are some of the oldest network protections around, and in the modern defense-in-depth model they are often just assumed to be there in some form or fashion. Many solutions focused on protecting internal assets use marketing language implying that these technologies are obsolete or ineffective, easily and instantly bypassed by any attacker. While these statements have merit in the context of a sophisticated, targeted threat, the network perimeter firewall can still be very effective at preventing 'shotgun attacks' like WannaCry that simply broadcast an exploit to anywhere that will listen.
In keeping with how WannaCry spread, I'd like to reiterate that many networks were compromised via this broadcast-style transmission. Its primary means of propagation was an SMB exploit, delivered over a network port that should almost never be exposed externally and should be tightly controlled on the internal network. Additional mitigation can be achieved by turning these technologies inward, using them to segment and control/monitor traffic inside your perimeter. With a properly segmented network, the risk of an attack like WannaCry infecting the entire network is significantly reduced: it may incapacitate one or two segments, but it should hit a wall it can't cross before it spreads too far.
Finally, it is important to make sure that you do your rounds. Scan your perimeter for problems (e.g. open ports to machines that have since been repurposed) and question any port openings for sensitive protocols. Do the same for your internal network zones. Be suspicious of any externally accessible protocol/port that could expose sensitive information or be used to launch an exploit or an attack. Some key examples of suspicious ports to look for (in keeping with WannaCry and the more recent "Petya"):
SMB (port 445)
WMI/NetBIOS (ports 135 and 137-139)
Refer to the ports and protocols listed at the following link. Be on the lookout for any that do not have a designated purpose in your externally facing environment, and be extra cautious with ports/protocols that can expose critical data or functionality (like DNS, LDAP, RDP, RPC, and SSH/Telnet): https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
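A first-pass check of the ports above can be done with nothing but the standard library, as sketched below. This is only an illustration: the target host is an assumption, a real audit would use a proper scanner (such as nmap) with authorization, and note that NetBIOS name/datagram services (137/138) are actually UDP, so a TCP connect check will not see them even when they are in use.

```python
# Minimal sketch: check which of a list of suspicious TCP ports accept a
# connection on a host. Host and port list are assumptions for illustration;
# a real audit would use an authorized scanner such as nmap.
import socket

SUSPICIOUS_PORTS = [135, 139, 445]  # RPC endpoint mapper, NetBIOS session, SMB

def open_ports(host, ports, timeout=1.0):
    """Return the subset of TCP ports on host that accept a connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                found.append(port)
    return found

print(open_ports("127.0.0.1", SUSPICIOUS_PORTS, timeout=0.5))
```

Run from outside the perimeter, anything this reports open for the ports above deserves an immediate explanation; run between internal segments, it gives a quick sanity check that your segmentation rules are actually in force.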
Securing the Person
Arguably, the biggest challenge in the IT security industry is the 'people factor'. One of the largest (if not the largest) entry vectors for an attacker is through a person who is legitimately authorized to access the target resources. Many of the largest data breaches of the past few years started with an internal employee clicking a seemingly inconspicuous link in an e-mail that appeared legitimate. Thousands of identities are stolen (and used for tax fraud) each year by someone posing as a CFO and requesting employees' tax information from a junior payroll employee.
Sadly, there is no easy way to 'patch' this problem, and as security professionals we have a limited number of tools at our disposal to address this risk. Sure, we could completely lock down every system, but then no one could do any work and the company would go under in short order. So how do we address this risk?
The short answer is 'education and encouragement'. We need to continually inform, educate, and remind our associates about the dangers lurking out there, the devils constantly knocking at our door. We need to teach them the warning signs, to always be vigilant, and to err on the side of caution. We need to encourage them to come forward openly, honestly, and comfortably if they think there's an issue. No one wants to call the security team to report a possible issue if they're afraid they'll be interrogated, chastised, belittled, and/or fired. Incident response needs to feel more like calling a (polite and responsive) customer service line for professional assistance, and less like calling the local police department as a potential suspect. It's better to trace down 100 false reports with a positive customer experience than to allow one true issue to go unreported because the customer was afraid to ask for help until it was too late.
Back Up Your Backups
As a closing point, the need for reliable backups cannot be overstated. Have backups of your systems and data. Have multiple. Have backups readily available for instant access, have backups (encrypted) in the cloud, and have backups sitting in a box in a safe in a basement. The more, the better. And test them: make sure that you can get to them, and make sure that you can recover from them. We have entered an era where the cost of storage has plummeted and a plethora of options are available. After all, an attacker can't compromise data they can't reach through a single attack vector, so establishing multiple recovery paths further improves your chances of recovering should you fall victim.
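Part of "test them" can be automated: record a checksum when each backup is taken, then periodically verify that the stored copy still matches it. This is a hedged sketch of that one step only; a real scheme would also exercise full restores end to end, since an intact file that can't be restored is still a failed backup.

```python
# Hedged sketch: verify a backup file still matches the SHA-256 digest
# recorded when the backup was taken. Detects silent corruption or tampering,
# but does not replace periodic end-to-end restore testing.
import hashlib

def sha256_of(path):
    """Stream the file in chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(path, expected_digest):
    """True if the file at path still hashes to the recorded digest."""
    return sha256_of(path) == expected_digest
```

The digest manifest should itself live somewhere an attacker who reaches the backup cannot also reach (one more reason for the multiple, separated copies described above).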
Stay secure, my friends!
SecureITsource is an authorized reseller and professional services partner with the industry's leading Identity & Access Management solution providers. Our team of experienced consultants helps our clients achieve their IAM goals by providing strategy, design, and engineering expertise.