SY0-501 Section 2.4-Given a scenario, implement basic forensic procedures

The five steps outlined here will help in all incident response situations. For the exam, however, there are a number of procedures and topics about which CompTIA wants you to be aware that are relevant to a forensic investigation. We strongly recommend that you familiarize yourself with these topics as you prepare for the exam.

Act in Order of Volatility

When dealing with multiple issues, address them in order of volatility (OOV); always deal with the most volatile first. Volatility can be thought of as the window of time you have to collect certain data before the opportunity is gone. Naturally, in an investigation you want to collect everything, but some data will persist longer than other data, and you cannot possibly collect all of it at once. As an example, the OOV in an investigation might run from RAM (most volatile) to hard drive data, then CDs/DVDs, and finally printouts.
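As a minimal sketch of the idea, collection tasks can simply be sorted so the most volatile source is handled first. The sources and ranks below are just the illustrative list from this section, not an official ordering:

```python
# Sketch: process evidence sources from most to least volatile.
# The source names and ranks are illustrative, not an official list.
EVIDENCE_SOURCES = [
    ("RAM contents", 1),     # most volatile: lost at power-off
    ("hard drive data", 2),
    ("CDs/DVDs", 3),
    ("printouts", 4),        # least volatile
]

def collection_order(sources):
    """Return source names sorted so the most volatile is handled first."""
    return [name for name, rank in sorted(sources, key=lambda s: s[1])]

print(collection_order(EVIDENCE_SOURCES))
# ['RAM contents', 'hard drive data', 'CDs/DVDs', 'printouts']
```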

Capture System Image

A system image is a snapshot of what exists. Capturing an image of the operating system in its exploited state can be helpful in revisiting the issue after the fact to learn more about it. As an analogy, think of germ samples that are stored in labs after major outbreaks so that scientists can revisit them later and study them further.
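At its core, imaging is a byte-for-byte copy plus a verification hash so the copy can later be proven unaltered. The sketch below is a simplified stand-in for dedicated imaging tools (real investigations use purpose-built imagers and hardware write blockers); the function name is our own:

```python
import hashlib

def capture_image(source_path, image_path, chunk_size=1 << 20):
    """Copy a source device/file to an image file byte for byte,
    returning a SHA-256 digest so the copy can be verified later.
    Simplified sketch; real imaging uses dedicated tools and
    write blockers to guarantee the source is never modified."""
    digest = hashlib.sha256()
    with open(source_path, "rb") as src, open(image_path, "wb") as img:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            img.write(chunk)
            digest.update(chunk)
    return digest.hexdigest()
```

Recording the digest at capture time is what lets you demonstrate later that the image you analyzed matches what was collected.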

Document Network Traffic and Logs

Look at network traffic and logs to see what information you can find there. This information can be useful in identifying trends associated with repeated attacks.

Capture Video

Capture any relevant video that you can. Video can later be analyzed manually in individual frames as well as run through a number of programs that can create indices of the contents.

Record Time Offset

It is quite common for workstation clocks to be slightly off from the actual time, and the same can happen with servers. Since a forensic investigation usually depends on a step-by-step account of what has happened, being able to follow events in the correct time sequence is critical. Because of this, it is imperative to record the time offset on each affected machine during the investigation. One way to do this is to add an entry to a log file and note both the actual time the entry was made and the time the system recorded for it; the difference between the two is that machine's offset.
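To make the offset concrete, here is a small sketch (the times and function name are made up for illustration) that computes a machine's clock offset against a trusted reference, as you would note it in the evidence log:

```python
from datetime import datetime, timezone

def record_time_offset(system_time, reference_time):
    """Return the machine's clock offset in seconds relative to a
    trusted reference time. A positive value means the clock runs fast."""
    return (system_time - reference_time).total_seconds()

# Example: the workstation clock reads 14:03:07 while the trusted
# reference says 14:01:52 -- the clock is running 75 seconds fast.
system = datetime(2017, 5, 4, 14, 3, 7, tzinfo=timezone.utc)
reference = datetime(2017, 5, 4, 14, 1, 52, tzinfo=timezone.utc)
print(record_time_offset(system, reference))  # 75.0
```

With the offset recorded per machine, timestamps from different systems can be normalized onto one consistent timeline.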

Take Hashes

It is important to collect as much data as possible to be able to illustrate the situation, and hashes must not be left out of the equation. NIST (the National Institute of Standards and Technology) maintains a National Software Reference Library (NSRL). One of the purposes of the NSRL is to collect “known, traceable software applications” through their hash values and store them in a Reference Data Set (RDS). The RDS can then be used by law enforcement, government agencies, and businesses to determine which files are important as evidence in criminal investigations.
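The underlying operation is straightforward: compute a file's digests and compare them against a hash set such as the RDS (which has historically included MD5 and SHA-1 values, among others). A minimal sketch, with a helper name of our own:

```python
import hashlib

def file_hashes(path):
    """Compute MD5 and SHA-1 digests of a file in one pass.
    These digest types have historically been used to look files
    up in reference hash sets such as the NSRL RDS."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return {"md5": md5.hexdigest(), "sha1": sha1.hexdigest()}
```

A match against a known-software hash set lets an investigator set aside unmodified operating system and application files and focus on what is unique to the machine.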

Capture Screenshots

Just like video, capture all relevant screenshots for later analysis. A single image can often convey information that it would take hundreds of log entries to equal.

Talk to Witnesses

It is important to talk to as many witnesses as possible, as soon after the incident as possible, to learn exactly what happened. Over time, details and recollections can change, and you want to capture witnesses' accounts before such changes occur. If at all possible, document as much of each interview as you can with video recorders, digital recorders, or whatever recording tools are available.

Track Man-Hours and Expenses

Make no mistake about it; an investigation is expensive. Track the total man-hours and expenses associated with the investigation, and be prepared to justify them if necessary to superiors, a court, or insurance agents.

Chain of Custody

An important concept to keep in mind when working with incidents is the chain of custody, which covers how evidence is secured, where it is stored, and who has access to it. When you begin to collect evidence, you must keep track of that evidence at all times and show who has it, who has seen it, and where it has been. The evidence must always be within your custody, or you’re open to dispute about possible evidence tampering.
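In practice, a chain-of-custody log reduces to an append-only record of who had each piece of evidence, when, where, and what was done with it. A minimal sketch (the field and function names are our own, not a standard format):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CustodyRecord:
    """One entry in an append-only chain-of-custody log."""
    evidence_id: str
    action: str       # e.g. "collected", "transferred", "stored"
    custodian: str    # who holds or handled the evidence
    location: str     # where the evidence is at this point
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def log_custody(chain, **fields):
    """Append a record to the chain; existing records are never edited."""
    chain.append(CustodyRecord(**fields))
    return chain
```

The essential property is that the log is only ever appended to, so any gap in custody is immediately visible.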

Big Data Analysis

One issue that will be tested with the first three exercise types (document review, walkthrough, and simulation) is called Big Data analysis. Big Data refers to data that is too large to be dealt with by traditional database management means. As of this writing, this usually means exabytes of data (a terabyte is a thousand gigabytes, a petabyte is a thousand terabytes, and an exabyte is a thousand petabytes). When systems are this large, an outage obviously has a wide-ranging impact. However, performing a cutover test is very difficult, and in some cases it is simply not practical. That does not mean, however, that you can ignore those systems in your disaster-recovery planning.
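The unit arithmetic in the parenthetical can be checked directly (decimal prefixes, as in the text):

```python
# Decimal storage units, as defined in the text.
GB = 10 ** 9          # gigabyte
TB = 1000 * GB        # terabyte  = a thousand gigabytes
PB = 1000 * TB        # petabyte  = a thousand terabytes
EB = 1000 * PB        # exabyte   = a thousand petabytes

print(EB // GB)  # prints 1000000000 (an exabyte is a billion gigabytes)
```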