Gather ‘round, children, and hear tell of the old days when an enterprise network was a carefully assembled combination of hardware and software living inside a datacenter owned by an individual company. A shining firewall stood guard at the perimeter, its portals manned by the noble knights, Intrusion Detection, Network Antivirus, and Content Inspection. This heroic coterie kept everything inside safe from the anarchy that reigned outside the magical walls.
Those days are long gone. The secure castle has become a rest area that people visit before heading out to work. The small sally ports and heavily barred doors have become open freeways.
The most dramatic change of all may be that most of the activity you need to protect is now happening in other people’s kingdoms (namely, the cloud). It started with the things that were hard to maintain or that were not a part of core production: video conferencing, sales/logistics/supply-chain management, and web hosting. These days, the list includes IT backbone services—email, file storage, and collaboration/chat. In 2018, even cutting-edge research and core production services are more likely to live in AWS than on physical hardware in your lab.
This new, elastic network perimeter has some big advantages. For example, cloud providers are better at handling resource issues—both natural and attacker-inflicted. They are also better at patching than most local IT shops. They have big budgets to purchase leading security products.
The cloud has amazing advantages, but...
It's easy to get hypnotized by the promises made to you by your cloud provider and become complacent. It's also easy to miss the implications of a broken perimeter, especially since it happens one project/vendor/service at a time. Combine the two and you've got a recipe for disaster.
Following are three of the most common mistaken assumptions regarding cloud security, as well as some advice on how to defend today's elastic kingdom:
Fallacy #1: Our cloud service provider has a top-notch security team to ensure that our environment is secure. They’ve got this.
Reality check: It’s not as cut-and-dried as it may look from the outside. Security detection in a multi-tenant environment is a difficult problem. Yes, providers *do* have additional security resources, but those resources are spread thin over the mountain of data they’re responsible for, and their systems generate a large volume of alerts. While these volume challenges are solvable, security isn’t the core business of most cloud services. Their primary focus is to ensure general availability and ease of use, and those priorities translate into little appetite for blocking content or connections. After all, customers who can’t use their services because of false-positive alerts get grumpy. In contrast, a customer compromised due to provider inaction may never know about the incident. What’s more, even if the customer does find out, there is usually a clause in the service contract that eliminates or greatly limits the provider’s liability.
Always remember that no one cares about your data more than you do. If you aren’t getting logs from all of your cloud service providers, you should start. While the security use case for each type of provider will vary with the service, you should (at least) be tracking access locations and administrative actions for all remote services.
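As a minimal sketch of what “tracking access locations and administrative actions” can look like, here is a small Python pass over a provider audit log. The JSON-lines schema (`user`, `action`, `source_ip` fields) and the set of action names are hypothetical; real providers each have their own log formats, so treat this as a shape to adapt, not a drop-in tool.

```python
import json
from collections import defaultdict

# Hypothetical set of administrative action names; substitute the
# action identifiers your provider actually emits.
ADMIN_ACTIONS = {"user.create", "user.delete", "role.grant", "policy.change"}

def review_events(lines):
    """Scan JSON-lines audit events (hypothetical schema: one JSON
    object per line with "user", "action", and "source_ip" keys).
    Flag every administrative action, and flag any event where a
    user appears from a source IP not previously seen for them."""
    seen_ips = defaultdict(set)   # user -> source IPs observed so far
    findings = []
    for line in lines:
        event = json.loads(line)
        user, ip, action = event["user"], event["source_ip"], event["action"]
        if action in ADMIN_ACTIONS:
            findings.append(("admin-action", user, action, ip))
        if ip not in seen_ips[user]:
            # Don't alarm on a user's very first observed event.
            if seen_ips[user]:
                findings.append(("new-location", user, action, ip))
            seen_ips[user].add(ip)
    return findings
```

Even a simple pass like this gives you a baseline per user; the point is that you cannot build that baseline at all if the logs never leave the provider.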
Fallacy #2: It’s OK to ignore the alerts coming from my cloud service addresses, or to whitelist those address blocks. I can consider these services “internal.”
Reality check: Most SOCs suffer from alert fatigue. A large part of the job involves filtering out false positives. Often, alerts such as data-leakage alarms, command-protocol detections (SSH, RDP, etc.), or high-volume transfers are assumed to be false positives if both machines are “internal.” But it is important to remember that “officially designated a company service” and “internal” aren’t the same thing.
Cloud providers have other customers and it doesn’t take much work to figure out what cloud services your organization uses. If you use AWS, an attacker can spin up some instances and use their preferences/requirements to get those instances into the same datacenter as yours. If you are whitelisting that IP block, you may not see the beacons, command channels, or other suspicious behaviors from those instances.
If an attacker learns you use a cloud-based file-storage/sharing service, he or she can create a personal account with that service to exfiltrate data. Uploading data with that App-ID is a normal occurrence, so it may be ignored when the transfer is seen at the firewall. You can look at the cloud provider's logs, but this connection won’t show up there for a personal account.
Fallacy #3: Everything important is happening outside of my control. There’s nothing I can do.
Reality check: Although you don’t "own" the cloud services you use, you can still protect your data. You just need to adapt what you were doing before to the new environment and consider the new types of problems that come with remote operation.
Get the Basics Straight
Start out by adapting the knowledge and experience you gained while fighting problems such as compromised accounts and data exfiltration in your on-premises deployments. Spend some time looking at the service-provider logs and finding things that you can match to your detection capabilities. Do you track VPN login locations? If so, consider applying the detection logic you use there to cloud service logins. Unusual times and login locations may be relevant. If your cloud service provider has device fingerprinting, you can track the devices for each user, as well.
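The VPN-style login check described above can be sketched in a few lines of Python. The baseline structure and record format here are assumptions for illustration: a per-user set of countries learned from VPN history, and cloud login records as `(user, country, timestamp)` tuples.

```python
from datetime import datetime

# Hypothetical baseline: countries each user has previously logged
# in from, e.g. learned from VPN history.
vpn_baseline = {"alice": {"US"}, "bob": {"US", "DE"}}

BUSINESS_HOURS = range(7, 20)  # 07:00-19:59 local time (assumed policy)

def flag_cloud_logins(logins, baseline):
    """logins: iterable of (user, country, iso_timestamp) tuples.
    Flag logins from countries outside the user's baseline and
    logins outside business hours."""
    alerts = []
    for user, country, ts in logins:
        hour = datetime.fromisoformat(ts).hour
        if country not in baseline.get(user, set()):
            alerts.append((user, "unfamiliar-country", country))
        if hour not in BUSINESS_HOURS:
            alerts.append((user, "off-hours", ts))
    return alerts
```

In practice you would feed this from a GeoIP lookup on the source addresses in your provider logs, and tune the hours per team rather than globally.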
If you monitor your firewall logs for large file transfers leaving your network, you can apply the same logic to the download and preview actions in a cloud-based file storage/sharing service. You will have to screen out internal addresses if you want to replicate the “outgoing” nature of firewall-based rules, but you could instead split the population into “collection” and “exfiltration” to add internal file monitoring.
Next, become aware of new hiding places. Monitoring and securing your cloud services yourself is crucial, because the nature of the services themselves makes them an excellent means of evading detection.
For example, consider data exfiltration by an insider. You can watch USB activity on your devices, monitor outgoing connections at your external firewalls, and audit all of your printers, but the data posted to your cloud storage is accessible by browser or app from any device. To succeed, your malicious insider only needs to upload the target data at work, then download it from home. Your provider logs should always include IP addresses for user interaction—you can use these, along with reported actions, to catch specific patterns. In the previous example, uploads from company addresses followed by downloads from external addresses should set off alarm bells.
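The upload-then-download pattern above reduces to a simple correlation over the provider's activity log. This sketch assumes a hypothetical event format of `(user, action, file_id, source_ip)` tuples in time order; the internal network ranges are placeholders for your own.

```python
import ipaddress

# Placeholder internal ranges; substitute your own address space.
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]

def is_internal(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)

def exfil_candidates(events):
    """events: (user, action, file_id, source_ip) tuples in time order.
    Flag files uploaded from an internal address and later downloaded
    by the same user from an external one."""
    uploaded_inside = set()
    hits = []
    for user, action, file_id, ip in events:
        if action == "upload" and is_internal(ip):
            uploaded_inside.add((user, file_id))
        elif (action == "download" and not is_internal(ip)
              and (user, file_id) in uploaded_inside):
            hits.append((user, file_id, ip))
    return hits
```

A real implementation would also want a time window and a sharing check (the download may come from a different account the file was shared to), but the core join is this simple.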
Likewise, it's also time to rethink your email security strategy. You’ve undoubtedly trained your users not to click on links in email. You’re probably detonating suspicious file attachments and following all embedded links. However, these things can be circumvented with a variation on a watering-hole attack.
For example, let’s say that an attacker manages to compromise an Android phone with an email client and an app that stores data in the cloud and then syncs it to users' phones. The attacker could use the app to sync malware to the cloud service and then send an email luring others to the file. The click will take users one step away from the email and into a place that is “trusted.” Of course, the provider will be trying to identify and block malicious content. But cloud providers have a lot of content to process, are very sensitive to false positives, and will probably see just another file, not one associated with an email from a phone.
This is a case where whitelisting can really hurt you. Make sure that whatever content inspection/anti-malware protection you have in email doesn’t ignore “internal” links. If you have the ability to analyze executable content for malware, you should be proactive. Use file-creation events and file extensions to identify new executable files that could be pulled and processed using a simple script.
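The “simple script” for the proactive step above can be little more than an extension filter over file-creation events. The event format and the extension list here are illustrative assumptions; what you pull the flagged files into (a sandbox, an AV scanner) is up to your existing tooling.

```python
import os

# Illustrative set of extensions worth treating as executable content.
EXECUTABLE_EXTS = {".exe", ".dll", ".js", ".vbs", ".ps1", ".jar", ".apk"}

def queue_for_analysis(creation_events):
    """creation_events: (file_id, file_name) pairs taken from the
    provider's file-creation log. Return the IDs of newly created
    files worth pulling down for malware analysis."""
    to_pull = []
    for file_id, name in creation_events:
        _, ext = os.path.splitext(name.lower())
        if ext in EXECUTABLE_EXTS:
            to_pull.append(file_id)
    return to_pull
```

Extension matching is deliberately crude—attackers rename files—so if your provider exposes detected MIME types in its logs, filter on those as well.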
Top Five Cloud Security Tips
Augment your activities with the following:
- Collect the logs. Activities that affect your network may be happening completely outside of it. If you don’t get the logs from your service providers, you will be flying blind.
- Know your options. See what the security coverage and monitoring options are for your cloud service. Providers often have proactive protection measures and detections that are turned off by default. Also, check to see what accessibility and service features are enabled. Those all tend to be toggled to "on" by default. Disabling features you aren’t using and adding some access restrictions can greatly reduce your attack surface.
- Don’t whitelist your cloud providers. They have other customers and their own vulnerabilities. Always remember that your cloud services are owned by other companies and live outside your network. Never assume that they are safe.
- Adapt existing solutions to work with the cloud. The essential nature of your problems hasn’t changed, so some of your existing solutions should still be valuable. Look for ways to apply your existing coverage to the new environment.
- Assess new risks. Does the fact that a user can access a specific asset from an app on their phone change the risk? What are the implications of allowing Slackbot to access your build system—did you circumvent any source controls? Adding new services, or new features to an existing service, often has security implications that go unnoticed. You should always review those changes to avoid creating blind spots or unintentionally increasing exposure to other services. It is also a good idea to periodically review the enabled features; they are often turned on at the behest of a single end user, without consideration of the broader consequences.
Do you have other cloud security practices that you swear by? Tell us about them in the comments!