Core 4 “Back to the Basics”
In light of the many well-publicized cyberattacks of late, the need for skilled IT security resources to both protect against and respond to rapidly evolving cyber-threats keeps growing. IT professionals tasked with protecting their companies’ cyber assets face mounting challenges: cyber threats are getting more advanced; networks are getting more complex, leading to more vulnerabilities and a greater risk of compromise; and we are playing catch-up when it comes to finding the skilled security resources to protect our networks. In this article I want to focus on the last of these, the lack of skilled resources. I have encountered numerous instances where lead security positions were filled by qualified IT personnel who lack the security background or experience needed to build a strong security program. With the cyber security field still fairly new, experienced professionals are hard to come by.
This shortage becomes even more evident when decisions about a company’s security posture are driven by a security market that often trades in fear and uncertainty. There is certainly no shortage of vendors touting their solution as a ‘must buy’ and claiming that every company in your sector already uses their technology. The problem with this approach is that the focus becomes the ‘new technology’ that must be acquired, when the most basic security controls may not even be in place.
There are many frameworks available to help the IT security professional determine what controls to implement, such as ISO, COBIT, the SANS Top 20, NIST, and the new Presidential Cyber Security Framework, to name a few. There are also industry-specific compliance guidelines and standards, such as the Payment Card Industry (PCI) standards for retail and HIPAA for healthcare. These frameworks are good, but they can be overwhelming and take time and resources to implement.
I definitely recommend following one of these frameworks and tailoring it to your organization’s specific risks. In my experience, however, when you are after quick wins with the biggest impact on securing your network, implementing these 4 basic controls can dramatically reduce a company’s risk of compromise from external malware: 1) identifying both authorized and unauthorized devices, 2) reducing administrative access, 3) identifying vulnerabilities and patching, and 4) whitelisting applications.
These 4 core controls are nothing new, and there are many articles on the web discussing similar models. The need for a strong cyber-security foundation cannot be overemphasized, which is why I am mentioning it again. A good example is the Australian Government’s Top 4 Security Strategies (http://www.asd.gov.au/infosec/mitigationstrategies.htm). The Australian Government states that 85% of the intrusions it encountered could have been prevented by implementing its top 4 strategies, which closely mirror the list above. This will most likely hold true for many small to midsize companies and even some larger ones. Attacks are becoming more advanced, but the majority still exploit known vulnerabilities. Read on for more detail on each of the 4 controls.
IDENTIFY AUTHORIZED AND UNAUTHORIZED DEVICES
Know what you are protecting. If you don’t know what is on your network, how can you protect it? This is a critical, if not the most critical, step, especially at the endpoint, since the preferred and probably most successful delivery of exploits is through malicious email opened by users on endpoints. The first task of this control is to create a list of all known or authorized devices, including endpoints, servers, networking and security devices, and so on. Once you have determined which devices are active on the network, you then need to continuously monitor for unauthorized devices such as rogue wireless access points or unknown endpoints. This has become much harder in the era of BYOD (Bring Your Own Device), and you will most likely miss some devices, but you should catch the majority. Not to worry: as your security program matures, you will find there are many tools and technologies to help address this.
A common challenge in this phase is that some devices will not respond to your tools as you sweep the network, escaping the identification process altogether. Depending on the size of your network, it is very unlikely that a single sweep will detect 100% of connected devices, but if you have invested in an NGFW (e.g., FortiGate) or IPS (e.g., SourceFire) at the perimeter that can passively identify devices based on traffic, you may be able to catch the ones you missed in the sweep. Another challenge is that devices may appear on the network for only a short period between your sweeps/scans. You might catch them at the perimeter with the technologies mentioned above; a better way, however, might be to implement port security or 802.1X. For smaller networks, you could simply disable network ports/jacks until they are needed, and hard-code MAC addresses to the active ports so that only previously authorized devices can access the network.
One of the simpler tools to help you identify devices is Nmap, but the best technology will most likely be one that can discover devices, software, and vulnerabilities and also patch them. Dell KACE is a good example of this.
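The inventory check at the heart of this control can be sketched in a few lines. This is a minimal illustration, not a full solution: the device records and MAC addresses below are hypothetical, and in practice you would parse the scanned list from a real sweep (for example, Nmap host-discovery output) rather than hard-code it.

```python
# Sketch: flag unauthorized devices by diffing a network sweep against an
# approved inventory. MAC addresses and hostnames here are made up for
# illustration; a real inventory would come from your asset database.

AUTHORIZED = {
    "00:1A:2B:3C:4D:5E": "fileserver01",
    "00:1A:2B:3C:4D:5F": "workstation-07",
}

def find_unauthorized(scanned_macs, authorized=AUTHORIZED):
    """Return MAC addresses seen on the network but not in the inventory."""
    return sorted(mac.upper() for mac in scanned_macs
                  if mac.upper() not in authorized)

# Example sweep result: one known device, one rogue device.
seen = ["00:1a:2b:3c:4d:5e", "DE:AD:BE:EF:00:01"]
print(find_unauthorized(seen))  # only the rogue MAC is reported
```

Running this comparison on a schedule, and alerting on any non-empty result, is the essence of the continuous monitoring described above.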
More information from SANS:
The SANS top 20 list has a lot more information regarding this and can be reached at https://www.sans.org/critical-security-controls/control/1.
REDUCE ADMINISTRATIVE ACCESS
Oftentimes it is just easier and faster to give administrator access to normal users for troubleshooting, installing applications, and any other task an admin would otherwise handle for them. If you are short-staffed and a user needs something right now, the easiest route is simply to grant administrator rights, figuring it is only temporary. But then it happens again with another user, and another, and before you know it the entire user base has administrative access and you never found the time to go back and remove it. Even if you trust your users to use their access appropriately (and I hear this a lot), broad admin rights make it much easier for attackers to reach privileged information and widen the scope of a breach once they are inside the network. Take the phishing/malicious email attack as an example: when a user clicks the malicious link or attachment and the exploit succeeds, the attacker gains the same access on the compromised machine as the logged-in user. If the user has no elevated privileges, the attacker’s options for further compromise are drastically reduced, unless the exploit can escalate privileges, which does happen from time to time. To implement this control, a good first step is to review your users’ access privileges at the workstation and server level. Once you have a solid list of user privileges, build a list of supported applications and functions and test to ensure they all work without administrative privileges. Almost all applications will run without administrative access.
Once you have identified which users have administrative access, monitor them. Make sure all administrator accounts are behaving normally and have not been compromised. As an administrator, you can set up a separate non-admin account for regular day-to-day use and keep a separate account for performing administrative tasks.
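The audit step above boils down to comparing group memberships against a short approved list. Here is a minimal sketch of that check; the account names, group names, and approved list are all hypothetical, and in practice the data would be pulled from Active Directory or each endpoint’s local Administrators group.

```python
# Sketch: flag accounts that hold admin rights but are not on the approved
# list. All names below are illustrative.

ADMIN_GROUPS = {"Administrators", "Domain Admins"}
APPROVED_ADMINS = {"jsmith-adm"}  # hypothetical dedicated admin account

users = {
    "jsmith":     {"Users"},                    # day-to-day account, no admin
    "jsmith-adm": {"Users", "Administrators"},  # separate, approved admin
    "intern01":   {"Users", "Administrators"},  # should not have admin
}

def unapproved_admins(accounts, approved=APPROVED_ADMINS):
    """Return accounts with admin group membership that are not approved."""
    return sorted(
        name for name, groups in accounts.items()
        if groups & ADMIN_GROUPS and name not in approved
    )

print(unapproved_admins(users))  # flags intern01
```

Anything this check flags is either access to revoke or an account to add, with justification, to the approved list.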
Limiting user rights is very doable; however, it is a bit more difficult than simply giving users full rights. You may encounter some applications that require administrator rights to run. If that is the case, you need to understand the risks and strengthen the security defenses around those systems. I would also increase detection visibility there, as it is very likely that these devices would have a greater impact on the company than those that do not require admin access to run.
Automated inventory tools can be used to audit endpoint and server devices. Dell KACE is one technology that can do this, combined with many of the other features mentioned above. Most SIEM technologies can monitor administrative privileges as well.
IDENTIFY VULNERABILITIES AND PATCH
The majority of new malware in the wild today exploits existing (known) vulnerabilities. There may be a new technique for delivering the exploit to the user, or a new evasion technique for getting past security defenses, but in the end it is still exploiting a known vulnerability that usually already has a patch available. The percentages vary, but some sources report that up to 80% of publicly known exploits target vulnerabilities for which patches exist. With numbers like that, it is safe to say that a vulnerability and patch management process is very important. A first step in building one is to create a list of the operating systems and applications deployed on your network. Then cross-reference them with the latest known vulnerabilities. Once you have identified your internal as well as external vulnerabilities, you can apply the available patches.
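The cross-referencing step can be sketched as a simple join between your software inventory and a vulnerability feed. This is a toy illustration only: the inventory is hypothetical, and a real process would match version ranges against scanner output or a feed such as NVD rather than exact strings.

```python
# Sketch: cross-reference installed software versions against known
# vulnerabilities to find what needs patching. The installed versions
# are illustrative; the two advisories are real, well-known CVEs.

installed = {
    "openssl": "1.0.1f",   # hypothetical inventory entry
    "apache":  "2.4.58",
}

# (product, affected_version, advisory)
known_vulns = [
    ("openssl", "1.0.1f", "CVE-2014-0160 (Heartbleed)"),
    ("apache",  "2.4.49", "CVE-2021-41773"),
]

def needs_patching(inventory, vulns):
    """Return advisories matching an installed product and version."""
    return [adv for prod, ver, adv in vulns
            if inventory.get(prod) == ver]

print(needs_patching(installed, known_vulns))  # flags the OpenSSL install
```

The output of a check like this is effectively your prioritized patch list, which then feeds into the testing process described below.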
Patching presents many challenges, not so much on the endpoints as on servers and other networking and security devices that are crucial to the business and cannot go down. Patching will sometimes break other functionality on a system, so proper testing needs to take place before a patch is deployed. A test network mirroring your operating systems and applications helps here, along with a test plan to verify that the system remains fully functional. Leveraging virtualization technology can facilitate this.
Many tools can help with this, including Dell KACE as well as other patch management systems such as GFI and Secunia. These technologies will inventory the operating systems and applications on the network and do the cross-referencing for you. Although they automate most of the process, it is also important to monitor security news feeds so you are aware of the latest attacks and vulnerabilities. Sometimes zero-day exploits are released and active in the wild, and it may take a while for a patch to become available. In those cases, simply being aware of the exploit is important while you wait for the patch, and putting extra monitoring focus on the affected systems is key.
WHITELIST APPLICATIONS
Once a system is compromised, more often than not additional software is downloaded and run to further infiltrate the network. Most, if not all, of these applications or tools are not needed by end users to accomplish their daily business tasks. Whitelisting allows only the applications needed for business purposes to run; any other application that tries to run is blocked. This dramatically reduces the impact of a breach.
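The core decision a whitelisting product makes can be sketched in a few lines: allow execution only if the binary’s cryptographic hash is on the approved list. The sample file contents below are placeholders; commercial products layer policy management, publisher signing rules, and OS enforcement hooks on top of this basic idea.

```python
# Sketch: hash-based whitelisting decision. Everything not explicitly
# approved is denied by default.
import hashlib

# Hypothetical approved-application hashes (SHA-256 of each binary)
WHITELIST = {
    hashlib.sha256(b"approved-app-binary").hexdigest(),
}

def is_allowed(file_bytes, whitelist=WHITELIST):
    """Return True only if the binary's hash is on the whitelist."""
    return hashlib.sha256(file_bytes).hexdigest() in whitelist

print(is_allowed(b"approved-app-binary"))  # True: known-good application
print(is_allowed(b"dropped-malware-payload"))  # False: blocked by default
```

Note the default-deny stance: a tool dropped by an attacker fails the check simply because no one ever approved it, which is what limits the blast radius of a breach.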
Depending on your environment, it may be difficult to identify all the approved applications, but this is the critical first step. You could create different whitelisting packages for different groups, and build images with these application packages already deployed. Once whitelisting is implemented, it is also important to have a streamlined process for users to request additional applications they need for business purposes. Of course, you must ensure there is a strong validation process for why an application is needed, and a good risk process to confirm it does not open up more vulnerabilities.
Many commercial tools enable whitelisting, such as Bit9, and several anti-virus vendors have either developed whitelisting technology themselves or acquired a company to incorporate it into their existing AV products; McAfee is one example. Microsoft also has this capability built in, though it lacks some of the granularity of the commercial products.
This is a highly condensed overview, and I understand there is much more work involved with these controls, plus plenty of other controls an organization will need to properly defend its network, involving a combination of technology, processes, and people. Fundamentally, though, these are what I consider the basic controls that, when implemented properly, have a significant impact in reducing risk. Spending money on the latest security products will not be as effective without the basic security controls in place to begin with.
According to a Cisco report, a survey of 1,700 executives in the UK and eight other countries found that while 75% of chief information security officers see their security tools as very or extremely effective, fewer than 50% of these respondents use standard tools such as patching and configuration management. This highlights the false sense of security that arises when you let the security market dictate your organizational decisions instead of covering the security basics, mundane as they may be.
It is tough when so many vendors are pitching solutions to solve all your problems, in an environment where the next big cyber-attack is not a matter of if but when. But not all organizations are created equal, so a risk-based approach customized to your own organization’s business is a good place to start. Take a step back and make sure you have the basics covered first, before investing in larger, more expensive products.