SY0-501 Section 3.7- Given a scenario, use appropriate tools and techniques to discover security threats and vulnerabilities.

Interpret results of security assessment tools

Similar to packet sniffing, port scanning, and other “security tools,” vulnerability scanning can help you secure your own network, or it can be used by attackers to identify weaknesses in your systems and mount an attack against them. The idea is for you to use these tools to identify and fix weaknesses before the bad guys use them against you. The goal of running a vulnerability scanner is to identify devices on your network that are open to known vulnerabilities. Different scanners accomplish this goal through different means, and some work better than others.

Some may look for signs such as registry entries in Microsoft Windows operating systems to identify that a specific patch or update has been implemented. Others, in particular Nessus, actually attempt to exploit the vulnerability on each target device rather than relying on registry information.

Kevin Novak reviewed commercial vulnerability scanners for Network Computing Magazine in June 2003 (“VA Scanners Pinpoint Your Weak Spots”). While one of the products, Tenable Lightning, was reviewed as a front end for Nessus, Nessus itself was not tested directly against the commercial products.

One issue with vulnerability scanners is their impact on the devices they are scanning. On the one hand, you want the scan to run in the background without affecting the device. On the other hand, you want to be sure that the scan is thorough. Often, in the interest of being thorough, and depending on how the scanner gathers its information or verifies that the device is vulnerable, the scan can be intrusive and cause adverse effects, and even system crashes, on the device being scanned.

There are a number of highly rated commercial vulnerability scanning packages, including Foundstone Professional, eEye Retina, and SAINT. These products also carry a fairly hefty price tag. It is easy to justify the expense given the added network security and peace of mind, but many companies simply don’t have the sort of budget needed for these products.

While not a true vulnerability scanner, companies that rely primarily on Microsoft Windows products can use the freely available Microsoft Baseline Security Analyzer (MBSA). MBSA will scan your system and identify whether any patches are missing for products such as the Windows operating systems, Internet Information Server (IIS), SQL Server, Exchange Server, Internet Explorer, Windows Media Player, and Microsoft Office. It has had some issues in the past, and its results occasionally contain errors, but the tool is free and is generally helpful for ensuring that these products and applications are patched against known vulnerabilities. MBSA will also identify and alert you to missing or weak passwords and other common security issues.

Nessus began as an open-source product, and a free version is available. While there is a Windows graphical front end available, the core Nessus product requires Linux/Unix to run. The upside is that Linux can be obtained for free, and many versions of Linux have relatively low system requirements, so it would not be too difficult to take an old PC and set it up as a Linux server. For administrators used to operating in the Microsoft world, there will be a learning curve to get used to Linux conventions and to get the Nessus product installed.


There are various tools that you can use to scan a system and find vulnerabilities. A few are listed below.

Protocol analyzer

A protocol analyzer (also known as a packet sniffer, network analyzer, or network sniffer) is a piece of software or an integrated software/hardware system that can capture and decode network traffic. Protocol analyzers have been popular with system administrators and security professionals for decades because they are such versatile and useful tools for a network environment. From a security perspective, protocol analyzers can be used for a number of activities, such as the following:

– Detecting intrusions or undesirable traffic (an IDS/IPS must have some type of capture-and-decode ability to be able to look for suspicious or malicious traffic)

– Capturing traffic during incident response or incident handling

– Looking for evidence of botnets, Trojans, and infected systems

– Looking for unusual traffic or traffic exceeding certain thresholds

– Testing encryption between systems or applications

From a network administration perspective, protocol analyzers can be used for activities such as these:

– Analyzing network problems

– Detecting misconfigured or misbehaving applications

– Gathering and reporting network usage and traffic statistics

– Debugging client/server communications

Regardless of the intended use, a protocol analyzer must be able to see network traffic in order to capture and decode it. A software-based protocol analyzer must be able to place the NIC it is going to use to monitor network traffic in promiscuous mode (sometimes called promisc mode). Promiscuous mode tells the NIC to process every network packet it sees regardless of the intended destination. Normally, a NIC will process only broadcast packets (that are going to everyone on that subnet) and packets with the NIC’s Media Access Control (MAC) address as the destination address inside the packet. As a sniffer, the analyzer must process every packet crossing the wire, so the ability to place a NIC into promiscuous mode is critical.

Honeypots and Honeynets

As is often the case, one of the best tools for information security personnel has always been knowledge. To secure and defend a network and the information systems on that network properly, security personnel need to know what they are up against. What types of attacks are being used? What tools and techniques are popular at the moment? How effective is a certain technique? What sort of impact will this tool have on my network? Often this sort of information is passed through white papers, conferences, mailing lists, or even word of mouth. In some cases, the tool developers themselves provide much of the information in the interest of promoting better security for everyone. Information is also gathered through examination and forensic analysis, often after a major incident has already occurred and information systems are already damaged.

One of the most effective techniques for collecting this type of information is to observe activity first-hand—watching attackers as they probe, navigate, and exploit their way through a network. To accomplish this without exposing critical information systems, security researchers often use something called a honeypot. A honeypot, sometimes called a digital sandbox, is an artificial environment where attackers can be contained and observed without putting real systems at risk. A good honeypot appears to an attacker to be a real network consisting of application servers, user systems, network traffic, and so on, but in most cases it’s actually made up of one or a few systems running specialized software to simulate the user and network traffic common to most targeted networks. The figure below illustrates a simple honeypot layout in which a single system is placed on the network to deliberately attract attention from potential attackers.

There are many honeypots in use, specializing in everything from wireless to denial-of-service attacks; most are run by research, government, or law enforcement organizations. Why aren’t more businesses running honeypots? Quite simply, the time and cost are prohibitive. Honeypots take a lot of time and effort to manage and maintain, and even more effort to sort, analyze, and classify the traffic the honeypot collects. Unless they are developing security tools, most companies focus their limited security efforts on preventing attacks, and in many cases, companies aren’t even that concerned with detecting attacks as long as the attacks are blocked, are unsuccessful, and don’t affect business operations. Even though honeypots can serve as a valuable resource by luring attackers away from production systems and allowing defenders to identify and thwart potential attackers before they cause any serious damage, the costs and efforts involved deter many companies from using them.

Port scanner

Port scanning is one of the most popular reconnaissance techniques attackers use to discover services they can break into. All machines connected to a Local Area Network (LAN) or the Internet run many services that listen on well-known and not-so-well-known ports. A port scan helps the attacker find which ports are available (i.e., what service might be listening on a port). Essentially, a port scan consists of sending a message to each port, one at a time. The kind of response received indicates whether the port is used and can therefore be probed further for weaknesses.

Port Scan – Port Numbers

As you know, public IP addresses are controlled by worldwide registrars and are unique globally. Port numbers are not so controlled, but over the decades certain ports have become standard for certain services. Port numbers are unique only within a computer system. Port numbers are 16-bit unsigned numbers. The port numbers are divided into three ranges:

– Well Known Ports (0 – 1023)

– Registered Ports (1024 – 49151)

– Dynamic and/or Private Ports (49152 – 65535)
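The three ranges above can be expressed as a short helper. This is a minimal sketch; the function name `classify_port` is our own:

```python
def classify_port(port: int) -> str:
    """Classify a TCP/UDP port number into its IANA-defined range."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit unsigned: 0-65535")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic/private"

print(classify_port(80))     # well-known (HTTP)
print(classify_port(8080))   # registered
print(classify_port(55000))  # dynamic/private
```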


Port Scanning Basic Techniques

The simplest port scan tries each of the ports from 0 to 65535 on the target (i.e., sends a carefully constructed packet with that destination port number) to see which ones are open.

– TCP connect(): The connect() system call provided by an OS is used to open a connection to every interesting port on the machine. If the port is listening, connect() will succeed; otherwise the port isn’t reachable.

– Strobe: A strobe does a narrower scan, looking only for those services the attacker knows how to exploit. The name comes from one of the original TCP scanning programs, though now virtually all scanning tools include this feature.

– Ident: The ident protocol allows for the disclosure of the username of the owner of any process connected via TCP, even if that process didn’t initiate the connection. So, for example, one can connect to port 80 and then use identd to find out whether the HTTP server is running as root.
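A connect()-style scan can be sketched in a few lines of Python. To keep the example self-contained (and harmless), it scans a loopback listener we start ourselves rather than a real target:

```python
import socket

def tcp_connect_scan(host: str, ports) -> list:
    """Return the subset of `ports` that accept a TCP connection (i.e., are open)."""
    open_ports = []
    for port in ports:
        try:
            # connect() succeeds only if something is listening on the port
            with socket.create_connection((host, port), timeout=0.5):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

# Demonstration against a listener we control, to stay self-contained:
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # the OS picks a free ephemeral port
listener.listen(1)
open_port = listener.getsockname()[1]

found = tcp_connect_scan("127.0.0.1", [open_port, open_port + 1])
print(found)  # only the port we are listening on should be reported open
listener.close()
```

Real scanners such as Nmap add refinements (SYN scans, timing controls, parallelism), but the open/closed decision above is the essence of the connect() technique.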

Passive vs. active tools

Banner grabbing: As the name implies, banner grabbing looks at the banner, or header information, sent with data to find out about a system. Banners often identify the host, the operating system running on it, and other information that can be useful if you later attempt to breach its security. Banners can be grabbed with Telnet as well as with tools like netcat or Nmap.

Risk calculations

For purposes of risk assessment, both in the real world and for the exam, you should familiarize yourself with a number of terms to determine the impact an event could have:

– ALE is the annual loss expectancy value. This is a monetary measure of how much loss you could expect in a year.

– SLE is another monetary value, and it represents how much you expect to lose at any one time: the single loss expectancy. SLE can be divided into two components:

– AV (asset value)

– EF (exposure factor)

– ARO is the likelihood, often drawn from historical data, of an event occurring within a year: the annualized rate of occurrence.

When you compute risk assessment, remember this formula:

SLE × ARO = ALE
As an example, if you can reasonably expect that every SLE, which is equal to asset value (AV) times exposure factor (EF), will be the equivalent of $1,000 and that there will be seven such occurrences a year (ARO = 7), then the ALE is $7,000. Conversely, if there is only a 10 percent chance of an event occurring within a one-year period (ARO = 0.1), then the ALE drops to $100.
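The arithmetic from the example above can be checked directly. The AV/EF split of $10,000 × 0.1 is our own illustration; the text only fixes the SLE at $1,000:

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV x EF: expected loss from a single occurrence."""
    return asset_value * exposure_factor

def annual_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO: expected loss per year."""
    return sle * aro

# An assumed AV/EF pair that yields the $1,000 SLE used in the text
sle = single_loss_expectancy(asset_value=10_000, exposure_factor=0.1)
print(annual_loss_expectancy(sle, aro=7))    # 7000.0 -> $7,000 per year
print(annual_loss_expectancy(sle, aro=0.1))  # 100.0  -> $100 per year
```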


The meaning of the word likelihood is usually self-explanatory; however, there are actual values that can be assigned to likelihood. The National Institute of Standards and Technology (NIST) recommends viewing likelihood as a score representing the possibility of threat initiation. In this way, it can be expressed either in qualitative or quantitative terms.

Assessment types

From the standpoint of measuring security and vulnerability in the network, you need to focus on three things:


What is the actual danger under consideration? This is the likelihood of an attack being successful.


What are the likely dangers associated with the risk? What are the means and source of the potential attack? This needs to be weighed against the likelihood of an attack, which the NIST defines as “a weighted risk factor based on an analysis of the probability that a given threat is capable of exploiting a given vulnerability.”


Where is the system weak? Identify the flaws, holes, areas of exposure, and perils.

Baseline Reporting

The term baseline reporting became popular with legislation such as Sarbanes–Oxley, which requires IT to provide internal controls that reduce the risk of unauthorized transactions. As the name implies, baseline reporting checks to make sure that things are operating at the status quo, and change detection is used to alert administrators when modifications are made. A changes-from-baseline report can be run to pinpoint security rule breaches quickly. This is often combined with gap analysis to measure the controls at a particular company against industry standards. One popular tool for baseline reporting is CA Policy and Configuration Manager.

Code Review

The purpose of code review is to look at all custom-written code for holes that may exist. The review also needs to examine changes that the code—most likely in the form of a finished application—may make: configuration files, libraries, and the like. During this examination, look for threats such as opportunities for injection (SQL, LDAP, code, and so on), cross-site request forgery, and authentication weaknesses.

Code review is often conducted as a part of gray box testing. Looking at source code can often be one of the easiest ways to find weaknesses within the application. Simply reading the code is known as manual assessment, whereas using tools to scan the code is known as automated assessment.
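A toy illustration of the automated side: a scanner that flags string-concatenated SQL, one of the injection opportunities mentioned above. The pattern list is our own and far from exhaustive; real static-analysis tools are vastly more sophisticated:

```python
import re

# Naive patterns that often indicate SQL built by string concatenation
RISKY_PATTERNS = [
    re.compile(r'''["']\s*(SELECT|INSERT|UPDATE|DELETE)\b.*["']\s*\+''', re.IGNORECASE),
    re.compile(r'''\+\s*\w+\s*\+?\s*["']'''),  # a variable spliced into a string
]

def scan_source(source: str) -> list:
    """Return (line_number, line) pairs that match a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in RISKY_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

sample = '''
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
safe = cursor.execute("SELECT * FROM users WHERE name = %s", (user_input,))
'''
for lineno, line in scan_source(sample):
    print(lineno, line)  # only the concatenated query is flagged
```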

Determine Attack Surface

The attack surface of an application is the area of that application that is available to users— those who are authenticated and, more importantly, those who are not. As such, it can include the services, protocols, interfaces, and code. The smaller the attack surface, the less visible the application is to attack; the larger the attack surface, the more likely it is to become a target. The goal of attack surface reduction (ASR) is to minimize the possibility of exploitation by reducing the amount of code and limiting potential damage. The potential damage can be limited by turning off unnecessary functions, reducing privileges, limiting entry points, and adding authentication requirements.