CompTIA CySA+ CS0-002 – Analyzing Host-related IOCs Part 2

  1. Consumption (OBJ 4.3)

Consumption. In the last lesson, we looked at how you can do a basic memory analysis to look at different processes and their memory usage. Now, this is actually a big task, especially if you’re trying to do it in real time while looking for signs of malicious code or malicious behavior. So you need to find different ways to identify where you should focus your efforts, and one of those is by looking at consumption. Resource consumption is a key indicator of malicious activity, but you have to be careful here, because just because something is using a lot of resources doesn’t make it malicious. A lot of times this will occur with legitimate software too. As I’m recording this video right now, my video recording software is using up a lot of resources.

It’s being very heavy on the processor and the memory, but it’s not malicious; it’s doing what I told it to do. So you have to keep that in mind as you’re distinguishing legitimate activity from malicious activity. Now, some of the things you can look at when you’re measuring resources are things like your processor usage. Processor usage is the percentage of CPU time utilized at a per-process level. In addition to this, you might look at memory consumption, which is the amount of memory utilized at a per-process level. Again, we don’t want to look at just an application as a whole, but at a single process, because this allows us to identify whether a single process has become the victim of some kind of malicious activity.
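As a quick illustration of that per-process view, here is a hedged Linux sketch; the column list and the top-ten cutoff are just illustrative choices, not anything mandated by the lesson:

```bash
# List every process with its CPU and memory percentages, sorted by memory
# usage (highest first), and show only the header plus the top ten entries.
ps -eo pid,user,%cpu,%mem,comm --sort=-%mem | head -n 11
```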

Now, to understand this, you have to first look at your baseline, or your normal usage, just like we talked about before. If you understand what normal looks like for a process, then you can compare it against what you’re observing now, and that comparison helps you determine whether it’s suspicious. For example, right now the process that’s recording this is using about 50% of a processor. Is that normal or is that suspicious? Well, I know that normally when I record a video, it uses about 50%. If that jumped up to 75%, that would be suspicious. So understanding what normal is and what deviates from the baseline really does help you identify what’s suspicious and should be looked at further.

Now, on a Windows system, you can look at this information by going into Task Manager. Here on the screen, you can see all the different apps that are running, the amount of CPU being used, and the memory being used, and that way you can identify what is normal and what isn’t. If you click on one of those columns, you can re-sort the list. Right now we are sorting by name, but we could click on CPU or Memory and see which processes are the biggest offenders. As I look down the Memory column right now, I see one of the biggest offenders is actually Vivaldi, which is a 32-bit program, and it has six processes running underneath it. Now, as I look at it at a per-process level, I see the first one is 1.3 MB.

The second one is 15.2 MB and the third one is 155.6 MB. So if I were looking for one that might be suspicious, the first one I would look at is that third one, the 155.6 MB process, because it is so much larger than the other two. Now, does that automatically mean it’s malicious? No, of course not. But it is the one I would look at first. So while it’s really easy to use Task Manager inside of Windows, we also need tools we can use on Linux servers, because you will do some incident response there too. There are two main tools that we’re going to use for that. The first one is known as free. free is a command that outputs a summary of the used and freely available memory on a computer.

Essentially, how much memory do you have and how much is available? To run the command, just type free and hit Enter, and you’ll get a screen that looks like this. Here you can see the amount of total memory and the amount that’s used, both for the physical memory and for the swap space. Now, in addition to this, you can use the top command. The top command works a lot like Task Manager. top is a command that creates a scrollable table of every running process and is constantly refreshed so that you can see the most up-to-date statistics. I really like the top command because it’s a really easy way to look at everything. As I look at the top command, up at the top of the output I get a lot of the same information that I had from free.

I have the amount of memory, both the total and the free memory, and I can see how much is used and how much is in buffers. If I go down to the bottom, I can see the list of processes listed by PID, which is the first column. The next column shows the user that launched each process; for PID 1, which is our system daemon, that user is root. As you go across to the right, you’ll see columns for the CPU percentage, the memory percentage, the time the process has been running, and the command that was used to launch it. A lot of this looks just like what we had inside Task Manager in Windows.
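Here is a hedged recap of those two commands; the -h and -o flags are common options in modern procps-based builds of free and top, not something required by the lesson:

```bash
free -h        # memory summary in human-readable units: total, used, free, and swap
top            # interactive, constantly refreshing table of every running process
top -o %MEM    # same view, but sorted by memory usage instead of CPU
```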

Now, in addition to top, there’s actually a newer version known as htop. The htop utility provides similar functionality plus mouse support, and it has an easier-to-read output in its default configuration. If I run it, I get something that looks like this. First of all, you’ll notice there is more of a graphical display, with some color instead of just black and white. Additionally, there is a little graph in the top-left corner showing memory and CPU usage. On the right side, I can see the load average, the uptime, how many tasks we have in total, and how many are running. Here we can also see the PID on the left, then the user, and as we move to the right we see the CPU and memory percentages. This output has actually been sorted with the highest memory usage on top, which in this case is htop itself.

Now, the nice thing about htop is that it’s very easy to sort by different columns. You’ll notice here we have F6, which is the sort function. When we hit F6, we can choose whether we want to sort by PID, CPU, or memory. For instance, if I want to see which process is using the most memory, I can do that by sorting by memory. And if I did that right now, just looking down the column, I can see that bash, the last process on the list at PID 3645, would be the highest because it’s using 2.3% of the total memory.
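htop can also be started with a sort order already applied. A hedged example is below; the --sort-key option and the PERCENT_MEM and PERCENT_CPU column names exist in current htop builds, but check htop --help on your version:

```bash
htop --sort-key PERCENT_MEM   # launch htop already sorted by memory usage
htop --sort-key PERCENT_CPU   # or sorted by CPU usage instead
```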

Now, why is looking at memory important? Because it could indicate that you have memory issues, whether those are memory leaks or memory overflows. When we deal with a memory overflow, this is a means of exploiting a vulnerability inside an application to execute arbitrary code or to crash the process, while an ongoing memory leak could be used to crash the entire system. Now, if you suspect you may have a memory overflow issue, what can you do to test that? Well, you can take the code for that program and run it inside a sandboxed debugging environment. This will allow you to find the process that might be exploiting a buffer overflow condition. By doing this, you can figure out whether that code is good or malicious. By putting it in the sandbox, you can start observing its behavior and trying to identify a signature.

Once you find a signature, you as an analyst can identify these buffer overflow attacks by the signature created by that particular exploit code. Now, what do I mean by a signature? Well, it’s the characteristic way that exploit code does business. One of the most common things we see with buffer overflows is that they use what’s called a NOP sled. A NOP sled works like this. Here is a graphical depiction of my memory. Up at the top I have the shellcode, and as I go down the stack you’ll see instructions called NOPs. A NOP is essentially an instruction that tells the computer to do nothing and just move to the next memory location, continuing to execute the program. Essentially, it’s like blank space. So, for example, if I start out on the first green NOP, I’m going to execute that NOP.

It says do nothing and move to the next spot, which moves me to the next NOP. Do nothing, move to the next spot, and that one says relative jump, which means I’m going to take that jump down to the next location I’m supposed to go to, which in this case is another NOP. Once I reach that NOP, I’m going to go through all four of those NOPs. This is called a NOP sled because I’m essentially sliding across the tops of those four NOPs, like skiing down a hill. As I keep doing that, I go through all of them and get down to the final line, which is NOP, NOP, and then a relative jump. Once I get to that relative jump, I’m going to jump up to where I’m told, which in this case is the shellcode.

That shellcode contains my exploit, which means I can now run the exploit. This is essentially how a buffer overflow attack works. We try to overflow the buffer, write in a bunch of these NOPs, and then a relative jump into a location where we know our shellcode is. By doing this, even though we are essentially guessing at addresses, whenever the program tries to execute something it might land on one of these NOPs, slide down into a relative jump, and then be pushed up to the shellcode.
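To make that layout a little more concrete, here is a purely illustrative, hedged sketch, not a working exploit: it just writes a run of x86 NOP bytes followed by a placeholder where shellcode would normally sit, so you can see the kind of pattern a detection signature might key on. The 64-byte length, the file name, and the placeholder text are all arbitrary assumptions.

```bash
# Illustrative only: 64 x86 NOP bytes (0x90) followed by a placeholder marker.
for i in $(seq 1 64); do printf '\x90'; done > payload.bin
printf 'SHELLCODE_PLACEHOLDER' >> payload.bin
xxd payload.bin | head   # the long run of 90s is the "sled" a signature can match on
```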

So why would an attacker cause a memory overflow, excessive memory consumption, a memory leak, or excessive processor consumption? They do this to create a denial of service condition. Now, I know we’ve mentioned denial of service before, but as a review, remember that a denial of service is an attack meant to shut down a machine or a network, making it inaccessible for its intended purpose. So if I start using a computer and taking up a lot of its resources, it may no longer have the resources for the other things it needs to do; that is a denial of service condition. One type of denial of service attack method is to cause an application to overrun its memory buffer and trigger an execution failure. This can lead to a program crash or even a system crash, and if that happens, you have now conducted a denial of service against that system.

  2. Disk and File System (OBJ 4.3)

Disk and file systems. In this lesson, we’re going to look at the different indicators of compromise that may exist on your disk or file system. Now, as we said before, there is a lot of fileless malware out there, and while that is really prevalent in the industry, malware is still likely to leave metadata on the file system even if it is fileless. One of the common ways this happens is through staging areas. A staging area is a place where an adversary begins to collect data in preparation for data exfiltration. This may be a place like a temporary file or folder, user profile locations, data masters, logs, alternate data streams (ADS), or even files placed into the Recycle Bin. All of these are places where data can be staged while it waits to be moved off of your network and into an attacker’s network.

Now, when an attacker does this, they will often take that data, compress it or encrypt it, and then place it into the staging area. So this can be one of the common IOCs that you look for: if you start finding files that are compressed or encrypted and placed into certain directories, that could be an indication of a staging area. Another common area used by attackers is alternate data streams. Alternate data streams, or ADS, are a feature embedded inside the NTFS file system, and NTFS is the default file system used on Windows machines. You can see here on the screen I have a directory called test7, and inside of it we’re taking the text sample123 and echoing it into a file called ... (dot dot dot).

And inside that dot-dot-dot file, we’re using an alternate data stream called sampleabc.txt; that alternate data stream is where the text sample123 is actually going to be stored. You can see I’ve done the same thing here using null as the file name followed by a colon and sample.txt, so I’m placing sample.txt as an alternate data stream in the null file. In my third attempt, I tried to send it to COM1, which is a serial port device name, but on this system that wasn’t recognized. Now, when I ran dir, you can see all of the files in this particular folder. Looking at it, you see four different entries: dot, which is this directory; dot dot, which is the parent directory; and dot dot dot, which is the file I made with the alternate data stream containing that text.

And notice the null file shows zero bytes, because the main file had no information in it, but the alternate data stream did, and that data is being hidden by the operating system. Then you can see the entry null:sample.txt:$DATA, where $DATA is the stream type that NTFS appends to the name of the alternate data stream, which gets a little confusing. Again, the concept here is that an attacker can hide information in these alternate data streams so that an analyst has to work really hard to find it. Luckily for us, there are tools that will scan an entire hard drive or file share and identify all of the alternate data streams. This way, we can identify those streams and then look at them to see whether they contain information being prepared for exfiltration.
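If you want to reproduce something similar on your own NTFS volume, a hedged example in cmd.exe looks like this; the file and stream names here are arbitrary placeholders, not the exact ones from the demo:

```bat
rem Hide data in an alternate data stream attached to an otherwise empty file:
echo sample123 > normal.txt:hidden.txt
rem dir shows normal.txt as 0 bytes, but the hidden stream still holds the text:
more < normal.txt:hidden.txt
```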

In addition to looking for alternate data streams, there are tools that will scan the entire host file system for file archives, compression, and encryption types, and this will help you detect those data staging areas. Another great tool to use is a file system viewer. A file system viewer is a tool that allows you to search the file system for certain keywords very quickly, and this can include system areas such as the Recycle Bin, NTFS shadow copies, and System Volume Information stores. As you use these tools, you can start analyzing file metadata, and this will allow you to reconstruct a timeline of events that may have taken place on a particular computer.

As an analyst, one of the things you’re always trying to do is create a timeline. By creating that timeline, you can figure out exactly what the adversary has done on that system or network, and that will help you better defend against those attacks in the future and reconstitute your network after an adversary has been inside it. Here on the screen, you can see a tool called a file system browser. Notice that we can see inside the drive, and we’re not seeing just the regular files and folders like Boot and Documents and Settings, but also the hidden and system files and folders, like the ones that begin with a dollar sign.

These are shadow and system files that are normally hidden by the operating system, but by using a tool like a file system browser, you as an analyst can go in and look at them. You can also look at these types of files by using the command line. All the way back in your A+ studies, you learned about the dir command. This Windows directory command has some advanced functionality for file system analysis. There are lots of different options to consider, and we’re going to cover just three of them in this lesson. The first one is dir /Ax. When you use dir /Ax, the x is actually a variable: it filters for all the file or folder types that match the given attribute x.

So, for example, if I use dir /Ah, that’s going to give me all of the hidden files and folders. To find a full list of all the different attributes and parameters you can use, run dir /? at the command line and it will give you all the information. Next, we have dir /Q. For dir /Q, the Q tells dir to display who owns each file. This gives us all of our standard information plus the owner of each file. That’s important because if we start seeing files that should be owned by a particular user but are now owned by SYSTEM or some other driver or device account, that can indicate the file has been taken over by an adversary. And the third one is dir /R, which displays the alternate data streams for each file. This is another way for us to find those alternate data streams and figure out whether anything is being hidden inside of them.
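Putting those three switches together, here is a hedged example of how you might run them against a suspect folder; C:\Suspect is just a placeholder path:

```bat
rem Hidden files and folders only:
dir /A:H C:\Suspect
rem Standard listing plus the account that owns each file:
dir /Q C:\Suspect
rem Standard listing plus any alternate data streams (shown as file:stream:$DATA):
dir /R C:\Suspect
```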

Now, another type of disk or file system indicator of compromise occurs when you start seeing your hard drive fill up. If the drive starts losing capacity, that could be an indication that something bad is happening. It could also just be an indication that somebody is using a lot of hard drive space, but when you see something like that, you do want to question it. If you’re used to seeing lots of free space on a given hard drive in your baseline, but now you’re seeing very little space, that machine may be being used to stage information.

That means someone is taking information from all over the network and putting it on that one machine, in preparation for the adversary to take it from that machine and upload it to their own servers. When this happens, malware is essentially caching those files locally for later exfiltration over the network, or for someone to later plug in a USB device and download everything locally. Both of these are reasons why you may see the hard drive filling up and its capacity consumption getting very high. The next tools we want to talk about are disk utilization tools. Instead of just going into My Computer and eyeballing whether your hard drive is full, you can use a tool to do that. Disk utilization tools can scan a file system and retrieve a comprehensive set of statistics.

This may include a visual representation of the storage space so you know how much is being used, a directory listing of that storage space showing all the different files and folders in it, and even real-time usage of data being written to the disk. All of these are good data points you can use to develop your indicators of compromise. Now, in the rest of this lesson, we’re going to talk about some specific tools for Linux, because we just covered a lot of Windows tools. When it comes to Linux file system analysis tools, there are many of them out there, but the ones we’re going to cover are lsof, df, and du. lsof is a tool that retrieves a list of all the files that are currently open on the operating system.

This allows us to quickly get a list of all the resources that a process is currently using. Why is that? Because in Linux, everything is treated as a file: whether it’s a file, a folder, a disk, or even a resource like a printer, all of those are represented as files inside the Linux operating system. So using lsof can quickly get us a list of all of those resources. For example, let’s say I ran lsof -u root -a -p 1645. What does that say? It says I want to show all of the files currently open on this computer that were opened by the user root and that are being used by process number 1645. This way, I can find everything that’s associated with that particular process.
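Written out, that example looks like this; 1645 is just the PID used in the lesson’s example, not a special value:

```bash
# -u root : select files opened by the user root
# -a      : AND the selections together (the default is to OR them)
# -p 1645 : select files opened by process ID 1645
lsof -u root -a -p 1645
```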

This is really helpful when you start doing adversary hunting and you have a piece of information like knowing a malicious process was run, and now you want to see everything else associated with that process. This type of command allows you to do that. Next, we want to talk about df. df is a tool that retrieves how much disk space is being used by each mounted file system and how much space is available on each. Essentially, df finds out how much disk space there is across all of your different disks. The other command we want to talk about is du, a tool that enables you to retrieve how much disk space each directory is using, based on a specified directory.

So when we deal with df, we’re dealing with the entire disk; when we deal with du, we’re dealing with a specific directory. Sometimes you want to look at the full disk, and sometimes you want to look at a directory. For example, if I were investigating a server, I would be more inclined to look at a specific directory instead of the full disk. If I were looking at a workstation, on the other hand, I’d be more inclined to use df and look at the disk itself. Now, if I’m going to use du, how would I use that command? I would type something like du /var/log. This tells me how much space the log directory is using on this particular computer.
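Here are both of those written out with a couple of common convenience flags; the -h and -s options are widely available but not required, and /var/log is just the example path from the lesson:

```bash
df -h            # per-file-system totals: size, used, available, and mount point
du -sh /var/log  # one human-readable summary line for the /var/log directory tree
```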

That would be a good indication of whether I’m above, below, or at the baseline of what I expect to see. The final thing we need to talk about with disk and file system IOCs is cryptography, because a lot of the time people will encrypt files in preparation for staging, like we talked about before. You can use cryptographic analysis tools to help determine the type of encryption algorithm being used and to assess the strength of the encryption key. If you find directories or files on your drive that are encrypted, and neither you nor your user encrypted them, they might be something that’s being used for staging.

Now, as you look at that, you want to find out what’s inside those files. To do that, you’re going to have to analyze them cryptographically to figure out whether you can even open them, or whether you can determine what the key is. Just like we use encryption to protect our data from attackers, attackers use encryption to protect our data from us, because they’re trying to steal that data. So, as an analyst, you may have to recover or brute-force the user’s password to obtain the decryption key for an encrypted volume. And if some piece of malware or ransomware, or an attacker, has encrypted those files and you don’t have the key, those files are as good as gone: you won’t be able to read them, and you won’t be able to analyze them.
