CompTIA CySA+ CS0-002 – Cloud Infrastructure Assessments

  1. Cloud Threats (OBJ 1.6)

Cloud threats. In this lesson, we're going to talk about some of the cloud threats and vulnerabilities. Because while the cloud has a lot of benefits, especially in terms of cost and operations, there are some vulnerabilities and some significant issues that we have to consider. Now, most of these vulnerabilities are going to involve identity and access management, so you really need to do a good job of securing that area, because that covers your privileges, your authorizations, and your authentication. Now, what are some of the major threats? Well, it really comes down to four key areas: insecure APIs, improper key management, insufficient logging and monitoring, and unprotected storage. First, insecure application programming interfaces, or APIs. Now, the first thing I want to give you is a word of warning here.

When you're using an API, you should always use it over an encrypted channel. That means SSL or TLS using an HTTPS connection. If you don't do that and you just use HTTP, you are asking for somebody to be able to get in there, see what you're doing, steal things like your authorization tokens, and then use those against you. This is a major issue. So you want to make sure you secure your APIs by having end-to-end encryption. Now, anytime you start receiving data when you're running an API, what should you do with it? Well, if you said input validation, you would be right. All of the data received by an API must pass server-side validation routines before you start performing operations on that data, because you want to make sure nobody's going to use it as a way to inject some code into your site and cause issues.
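To make that concrete, here is a minimal sketch of what a server-side validation routine might look like in Python. The field names and rules are hypothetical examples; the point is that every value coming into the API is checked against an allow-list before any code acts on it.

```python
# Minimal sketch of server-side validation for data received by an API.
# The field names and rules below are hypothetical examples.
import re

USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # allow-list of safe characters

def validate_order_request(payload: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the input is acceptable."""
    errors = []

    username = payload.get("username", "")
    if not USERNAME_PATTERN.fullmatch(username):
        errors.append("username must be 3-32 letters, digits, or underscores")

    try:
        quantity = int(payload.get("quantity", 0))
        if not 1 <= quantity <= 100:
            errors.append("quantity must be between 1 and 100")
    except (TypeError, ValueError):
        errors.append("quantity must be an integer")

    return errors

# Usage: reject the request before any business logic runs on the data.
print(validate_order_request({"username": "alice_1", "quantity": "5"}))   # []
print(validate_order_request({"username": "<script>", "quantity": "5"}))  # ['username must be ...']
```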

Another thing you have to consider when dealing with APIs, especially when you want to make sure your API is secure, is error handling and, more importantly, error messages. If I, as an attacker, can go against your system and start throwing requests at your API, and you start giving me error messages related to authentication and authorization, those messages can give me clues on how to exploit your system. So when you give somebody an error message, make sure it's been sanitized. You want to make it as simple as possible and just state what the error is without giving too much detail. The final thing we have to think about with APIs is making sure they're not subject to a denial of service attack, and the way we can do this is by implementing throttling or rate limiting mechanisms.

This will protect us from a denial of service attack. Essentially, an API can only handle a certain volume of requests from its users at any given time, and it needs to know what that limit is. For instance, one of the APIs I use has a rate limit of 100 requests per minute. If I make more than 100 requests per minute, it will stop me, it won't answer them, and it will start ignoring my IP for at least 20 minutes, and then I can go back and ask again. So these are things they can put in place to prevent somebody from just going, hey, give me an answer, hey, give me an answer, hey, give me an answer, over and over and over again, causing a denial of service. Now, the second main area we want to talk about is improper key management.
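As an illustration, here is a minimal sketch in Python of a fixed-window rate limiter along the lines of that example, assuming a limit of 100 requests per minute and a 20-minute block for offenders. A real service would typically enforce this at an API gateway or with a shared store such as Redis rather than an in-memory dictionary.

```python
# Minimal sketch of a fixed-window rate limiter keyed by client IP.
# The 100-requests-per-minute limit and 20-minute block mirror the example above.
import time

RATE_LIMIT = 100          # requests allowed per window
WINDOW_SECONDS = 60       # length of the counting window
BLOCK_SECONDS = 20 * 60   # how long to ignore an offending IP

_counters = {}   # ip -> (window_start, request_count)
_blocked = {}    # ip -> time the block expires

def allow_request(ip: str) -> bool:
    now = time.time()

    # Still inside a block? Ignore the request.
    if ip in _blocked and now < _blocked[ip]:
        return False

    window_start, count = _counters.get(ip, (now, 0))
    if now - window_start >= WINDOW_SECONDS:
        window_start, count = now, 0   # start a new counting window

    count += 1
    _counters[ip] = (window_start, count)

    if count > RATE_LIMIT:
        _blocked[ip] = now + BLOCK_SECONDS   # start ignoring this IP
        return False
    return True
```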

Now, this is a really important thing, because a lot of the things you're going to use your keys for are things like cryptography, authentication, and authorization. And so these are the areas that help you secure your stuff, and if you don't have proper key management, you're going to have a very insecure API. Whenever you're using an API, you need to make sure you're using secure authentication and authorization. How do you do that? Well, we've already talked about things like SAML, OAuth, and OIDC, and you want to use those to do your authentication and authorization before you access data. Another word of warning I have for you here: do not hard-code or embed your keys in the source code. We talked about this back when we talked about best coding practices.

You never want to hard-code or embed the actual key inside the source code. Why? Because if an attacker can get a hold of your source code or reverse engineer it, they're going to have access to your keys. So you never want to do that. Another thing to think about when we're dealing with keys is that anytime you have a key you no longer need, go ahead and delete it. If it's unnecessary, delete it. And anytime you move your system from a development environment to a staging environment to a production environment, you should regenerate your keys and get new ones, because the keys that have been in the development pipeline have been exposed to a lot of programmers and a lot of reviewers. So when you're going to production, you want to generate new keys that nobody knows.
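Here is a minimal sketch of what that looks like in practice: the key is read from the environment, or fetched from a managed secrets store at runtime, instead of being embedded in the source. The environment variable and secret names are hypothetical.

```python
# Minimal sketch: pull the API key from the environment (or a secrets manager)
# instead of embedding it in source code. The variable and secret names are hypothetical.
import os

api_key = os.environ.get("PAYMENTS_API_KEY")
if api_key is None:
    raise RuntimeError("PAYMENTS_API_KEY is not set; refusing to start")

# Alternatively, fetch it at runtime from a managed secrets store, e.g. AWS Secrets Manager:
# import boto3
# secret = boto3.client("secretsmanager").get_secret_value(SecretId="prod/payments/api-key")
# api_key = secret["SecretString"]
```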

Finally, the other thing you have to think about is making sure you have hardening policies in place for any of your client hosts, your servers, and your development workstations. Anything you're working on should always be hardened, and it should only run whitelisted applications, especially things that are going to touch your services and your API. Anytime you're dealing with a system that's creating these keys, it needs to be secured as well. The third main area we want to talk about is logging and monitoring, and one of the big problems is insufficient logging and monitoring of cloud services. Now, again, here's a word of warning: if you're dealing with Software as a Service, many times you're not going to have any ability to access log files or monitoring tools.

For instance, think about Gmail. That is a Software as a Service tool. If you use Gmail, can you go in there and look at your log files? Can you go in there and look at your audit logs? Can you go in there and look at your monitoring tools to see if the service is up or down? No, because that's Google's job, not your job. And so this is a weak area for us if we start using a lot of Software as a Service inside our companies. Now, remember, when you're dealing with logs, your logs have to be copied from these elastic instances into some place for long-term storage. For example, say we have a cloud service and we spin up a new virtual machine and use it for a while because we have higher demand, and then that demand is gone.

If we're storing those logs on that machine and that machine gets deprovisioned, we just lost all of those logs. So you want to make sure they're copied off to a non-elastic storage unit and kept there for long-term retention based on your data retention policies.

The fourth and final area we want to talk about is unprotected storage. Now, there are lots of ways you can do storage inside the cloud, but most storage containers are going to be referred to as one of two things: they're either going to be called buckets or blobs. When we call them buckets, that's a term we use inside of AWS. When we talk about blobs, it's usually in Microsoft Azure. Either way, we're talking about cloud storage here. Essentially, when we have a file and we want to save it someplace, we have to put it in a container, and that container, a bucket or a blob, is where we're going to store it. And that container can actually be located in lots of different places. For instance, your container could be on the East Coast or the West Coast. It could be in a specific region or any region. But the big thing is that you can't nest one container inside another. Each container is going to host its own data objects, which are the files we want to store on that system. Now, once you have that, you have to set up access control, and this is where my word of warning comes in. Access control to storage is administered through your container policies, it's done through your IAM authorizations, and it's done through object ACLs. By combining these three things, you can get a good level of security, but if you misconfigure them, it can be a problem.
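To illustrate that warning, here is a minimal sketch using the AWS SDK for Python (boto3) that audits a bucket's ACL for grants to "everyone" and then applies a public access block. The bucket name is hypothetical, and the equivalent check would look different on Azure or GCP.

```python
# Minimal sketch: audit an S3 bucket's ACL for public grants and lock it down.
# The bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")
bucket = "example-customer-data"

# Flag any ACL grant made to the AllUsers (anonymous) or AuthenticatedUsers groups.
risky_groups = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}
acl = s3.get_bucket_acl(Bucket=bucket)
for grant in acl["Grants"]:
    grantee = grant["Grantee"]
    if grantee.get("Type") == "Group" and grantee.get("URI") in risky_groups:
        print(f"Risky grant on {bucket}: {grant['Permission']} to {grantee['URI']}")

# Block public ACLs and policies on the bucket going forward.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```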

And that brings us to the other problem here. A lot of times, people have incorrect permissions. Why does that happen? Well, because when you create a new storage container, a bucket or a blob, default read/write permissions are created for it during creation. And if you don't go in and change those, they're going to be left over, and that's going to cause incorrect permissions later on. So you always want to make sure you modify those permissions to the level you need. The next thing we want to talk about here is incorrect origin settings, which is another issue with insecure storage. Now, this comes up because when you're dealing with content delivery networks, you have to configure what's known as a cross-origin resource sharing policy.

This is a CORS policy. Now, when we talk about a cross-origin resource sharing policy, this is a content delivery network policy that instructs the browser to treat requests from nominated domains as safe. Essentially, you're going to put things out into the content delivery network. These are the little edge points of the cloud service all over the world. And if you're pulling things from multiple domains, because they're all coming from different CDN edge points, those domains have to be able to trust each other, and that's what your CORS policy is going to do for you. Now, the last word of warning I have for you is that a weak CORS policy can expose your site to vulnerabilities such as cross-site scripting attacks. So you want to be aware of this, and you want to make sure your policies are written properly.
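Here is a minimal sketch of tightening a CORS policy on cloud storage, using AWS S3 as an example. The bucket name and allowed origin are hypothetical; a wildcard origin ("*") is the kind of weak policy this lesson warns about.

```python
# Minimal sketch: restrict a bucket's CORS rules to a nominated domain
# instead of a wildcard. Bucket name and origin are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="example-cdn-assets",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://www.example.com"],  # nominated domain only, not "*"
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedHeaders": ["Authorization"],
                "MaxAgeSeconds": 3600,
            }
        ]
    },
)
```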

  2. Cloud Tools (OBJ 1.4)

Cloud tools. In this lesson, we're going to talk about some cloud infrastructure assessment tools. Essentially, if we want to do some vulnerability assessments or some penetration tests against the cloud, what tools will we use? Now, when we look at the cloud, there is such a big push for everybody to migrate to the cloud these days. Unfortunately, a lot of the tools we have only work in traditional environments, and so we have to be careful when we move to the cloud to make sure we have the right tools and the right teams who know how to operate those tools so we can keep our vulnerabilities to a minimum. Now, when we talk about these different cloud tools, we're going to use them to identify VM sprawl, dormant VMs, misconfigurations, and a host of other things. Now, we've talked about VM sprawl before, but I don't think we've mentioned the idea of a dormant VM.

Dormant VMs essentially can lead to VM sprawl. Now, when we talk about a dormant VM, this is a virtual machine that was created and configured for a particular purpose and then either shut down or even left running without being properly decommissioned. If it's left running, you're going to be charged for it month after month. If you shut it down, it's still sitting in your systems and it's still a potential vulnerability. So you want to be able to identify these things and then take them down properly. Now, there are lots of different tools out there that are available for vulnerability assessments and penetration testing of your cloud infrastructure. In this lesson, we're going to talk about three of them, specifically Scout Suite, Prowler, and Pacu.
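Before we get into those tools, here is a minimal sketch of what hunting for dormant VMs might look like on AWS with the boto3 SDK: it simply lists instances sitting in the stopped state so they can be reviewed and properly decommissioned. Region selection and pagination are omitted for brevity.

```python
# Minimal sketch: list EC2 instances in the "stopped" state as dormant VM candidates.
import boto3

ec2 = boto3.client("ec2")

response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["stopped"]}]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        # Print the instance ID and when it was launched for follow-up review.
        print(instance["InstanceId"], instance.get("LaunchTime"))
```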

Now, Scout Suite is the first one we're going to talk about, and this is an open-source tool written in Python that can be used to audit instances and policies created on multi-cloud platforms. The great thing about Scout Suite is that it works on Amazon Web Services, Microsoft Azure, and Google Cloud Platform. Now, when it goes through those platforms, it can look for lots of different information. The way Scout Suite works is that it actually makes API calls to those various cloud platforms to collect data. It then compiles a report of all the objects discovered, all the instances, the storage containers, the accounts, the users, the data, the firewall ACLs, all that stuff. And then it can categorize each of those objects with a security level and define whether or not that security level has been violated.

When you do subsequent scans, this allows you to basically create a baseline and then see what's changed from there. The second tool we're going to talk about is Prowler. Now, Prowler is an auditing tool for AWS that's used to evaluate the cloud infrastructure against AWS benchmarks, GDPR compliance, and HIPAA compliance. Notice this is for AWS only. So if you're using Microsoft Azure or Google Cloud Platform, this is not going to be the tool for you. The third tool we have is Pacu. Now, this is an open-source cloud penetration testing framework used to test the security configuration of an AWS account. So again, this is not multi-cloud; it only works for Amazon Web Services. Now, I like to think about this almost like Metasploit for the cloud.

It's set up with a bunch of different exploits and a bunch of different scripts already there, and you can run them using different commands. Again, this is useful for penetration testing as well as some hands-on vulnerability assessments. Now, one of the things I want to remind you of is that any time you start scanning a cloud service, remember, you don't own the cloud, and so they may see you as an attacker. If you're going to scan a cloud service, you'd better first consult the cloud service provider's acceptable use policy before you scan those hosts and services, or they might think they're under attack and come after you. So make sure you check their acceptable use policy to confirm you're allowed to do scans and penetration tests.

For example, with AWS, there's a particular form you're going to submit to say, I plan on doing a penetration test against this range of networks with these IPs on this date and time. That way they can put an exception into their system so they know you're going to be doing a penetration test and they won't call the authorities on you. Now, for the exam, I want to make a quick note here. You do not need to know how to use any of these three tools, but you should know that they are associated with vulnerability assessments and penetration testing in the cloud. So if you know these three tools by name and you associate them with cloud computing, you're going to do fine when it comes to the exam.

  3. Cloud Forensics (OBJ 4.4)

Cloud forensics. In this lesson, we're going to talk about some of the challenges with cloud forensics. Now, we covered digital forensics earlier in this course in its own section, but there are some unique things we have to think about when we deal with the cloud. For instance, when you start doing digital forensics in the cloud, it becomes much more challenging. This is because all the resources you're talking about, the servers, the networks, and the disk storage, are technically virtual. They're not physical. This means you may not have access to them. So because the disk is virtual, you can logically access it, but you can't do a physical disk image of it, because that disk could be located anywhere in the world.

It might be located in multiple regions of the world, and it might have data that's spread out across multiple data centers all at the same time. Now, this makes it that much harder for you to get at that data. On the other side of things, the cloud makes life a lot easier for attackers as well. Attackers can use any number of clouds to perform their attack, and many attackers will actually create a multi-cloud setup to serve as their own attack platform. And so when you're trying to investigate the source of an attack and you look at the IP, you may find that it's coming from Amazon Web Services or Microsoft Azure or Google Cloud Platform. Any of these could be the source of your attack, but it's not likely that Amazon or Microsoft or Google are the ones performing it.

Instead, it's an attacker who has spun up some kind of instance there and is using it to perform their attack. Now, throughout the rest of this lesson, I want to focus on three main points you need to consider in terms of the difficulty of cloud forensics. First, performing forensics in a public cloud is going to be complicated, especially because the access you're allowed to have is going to be based on the cloud provider's SLA, their service level agreement. For example, the SLA I have with my cloud service provider doesn't allow me to jump in my car, drive to their server farm, plug in my device, and start doing a bit-by-bit copy of that server. It's not allowed. So if I want forensics performed on my cloud instance, I would have to ask them, and they would have to do it for me.

And that's only if the SLA covers it and I agree to pay them to do that work for me. The second major concern is that instances are created and destroyed very quickly due to elasticity. Now, this is a great thing in terms of our operations, but from a forensic standpoint, it makes recovery much more difficult. As these instances are created, they start taking up more disk space. When they're done, they're deleted and that disk space becomes available again. Then another customer might create a new instance that writes over your recently deleted data, making recovery much more difficult. This is a big issue. Now, a lot of cloud service providers realize this is an issue, so they offer more extensive logging and monitoring options to try to overcome it, or they take snapshots and keep them for a certain amount of time so you can go back for data recovery.
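For an account you do control yourself, one way to illustrate that snapshot idea is capturing a point-in-time copy of an instance's volume before it can be deprovisioned or overwritten. Here is a minimal sketch with boto3; the volume ID, description, and tag values are hypothetical.

```python
# Minimal sketch: preserve a point-in-time copy of an EBS volume as a snapshot
# before an elastic instance is torn down. The volume ID and tags are hypothetical.
import boto3

ec2 = boto3.client("ec2")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Evidence preservation for incident response",
    TagSpecifications=[
        {
            "ResourceType": "snapshot",
            "Tags": [{"Key": "purpose", "Value": "forensics"}],
        }
    ],
)
print(snapshot["SnapshotId"], snapshot["State"])
```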

But again, this is one of those things that's a benefit of the cloud but also a drawback. The benefit is elasticity; the drawback is that it's harder to do forensic recovery. The third issue we want to talk about here is that there are issues with chain of custody. Now, this happens because the investigator can't just go in and take the image themselves. Instead, they're relying on the cloud service provider to provide the data. So if I want an image of my server, I'm going to ask my service provider to do that for me.

Now, hopefully they're going to document and record that process as closely as possible. But again, they're not law enforcement, and so if we need a legally binding evidence collection based on our jurisdiction, that could be an issue, because we may not be able to get it. The other issue when you start talking about chain of custody is, again, that the data can be anywhere in the world. So if my data is in a server farm in Thailand, that data is subject to Thailand's data sovereignty. But I'm a U.S. company, and I don't have sovereignty over my data while it's sitting in Thailand. Now, my data isn't actually in Thailand; I'm just giving you an example. These are the things you have to consider when you start moving to the cloud, and they can make forensics much more challenging.

 
