SY0-501 Section 1.1- Implement security configuration parameters on network devices and other technologies.

This is no different from our daily lives. We constantly make decisions about what risks we're willing to accept. When we get in a car and drive to work, there's a certain risk that we're taking. It's possible that something completely out of our control will cause us to become part of an accident on the highway. When we get on an airplane, we're accepting the level of risk involved as the price of convenience. However, most people have a mental picture of what an acceptable risk is, and won't go beyond that in most circumstances. If I happen to be upstairs at home and want to leave for work, I'm not going to jump out the window. Yes, it would be more convenient, but the risk of injury outweighs the advantage of convenience.

Every organization needs to decide for itself where between the two extremes of total security and total access it needs to be. A policy needs to articulate this choice and then define how it will be enforced through specific practices and procedures. Everything that is done in the name of security must then enforce that policy uniformly.

Firewall

A firewall can be hardware, software, or a combination whose purpose is to enforce a set of network security policies across network connections. It is much like a wall with a window: the wall serves to keep things out, except those permitted through the window. Network security policies act like the glass in the window; they permit some things to pass, such as light, while blocking others, such as air. The heart of a firewall is the set of security policies that it enforces. Management determines what is allowed in the form of network traffic between devices, and these policies are used to build rule sets for the firewall devices used to filter network traffic across the network.

Security policies are rules that define what traffic is permissible and what traffic is to be blocked or denied. These are not universal rules, and many different sets of rules are created for a single company with multiple connections. A web server connected to the Internet may be configured to allow traffic only on port 80 for HTTP and have all other ports blocked, for example. An e-mail server may have only the necessary ports for e-mail open, with others blocked. The network firewall can be programmed to block all traffic to the web server except for port 80 traffic, and to block all traffic bound for the mail server except for port 25. In this fashion, the firewall acts as a security filter, enabling control over network traffic by machine, by port, and in some cases based on application-level detail. A key to setting security policies for firewalls is the same as for other security policies: the principle of least access. Allow only the necessary access for a function; block or deny all unneeded functionality. How a firm deploys its firewalls determines what security policies are needed for each firewall.

How Do Firewalls Work?

Firewalls enforce the established security policies through a variety of mechanisms, including the following:

Network Address Translation (NAT)

Basic packet filtering

Stateful packet filtering

ACLs

Application layer proxies

One of the most basic security functions provided by a firewall is NAT, which allows you to mask significant amounts of information from outside of the network. This allows an outside entity to communicate with an entity inside the firewall without truly knowing its address. NAT is a technique used in IPv4 to link private IP addresses to public ones. Private IP addresses are sets of IP addresses that can be used by anyone and by definition are not routable across the Internet. NAT can assist in security by preventing direct access to devices from outside the firm without first having the address translated at a NAT device. The benefit is that fewer public IP addresses are needed, and from a security point of view the internal address structure is not known to the outside world. If a hacker attacks the source address, he is simply attacking the NAT device, not the actual sender of the packet.
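To make that mapping concrete, here is a minimal sketch of the translation table a NAT device maintains. The public address, port pool, and function name are illustrative assumptions, not any particular vendor's implementation.

```python
import itertools

PUBLIC_IP = "203.0.113.5"            # assumed public address of the NAT device
port_pool = itertools.count(40000)   # assumed pool of public-side ports
nat_table = {}                       # (private_ip, private_port) -> public_port

def translate_outbound(private_ip: str, private_port: int):
    """Rewrite an outbound packet's source to the NAT device's address."""
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next(port_pool)
    # Hosts outside only ever see (PUBLIC_IP, mapped_port); the internal
    # address structure stays hidden behind the NAT device.
    return PUBLIC_IP, nat_table[key]

print(translate_outbound("192.168.1.20", 51515))  # e.g. ('203.0.113.5', 40000)
```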

NAT was conceived to resolve an address shortage associated with IPv4 and is considered by many to be unnecessary for IPv6. The added security features of enforcing traffic translation and hiding internal network details from direct outside connections will give NAT life well into the IPv6 timeframe.

Basic packet filtering, the next most common firewall technique, involves looking at packets, their ports, protocols, and source and destination addresses, and checking that information against the rules configured on the firewall. Telnet and FTP connections may be prohibited from being established to a mail or database server, but allowed to the servers that actually provide those services. This is a fairly simple method of filtering based on information in each packet header, such as IP addresses and TCP/UDP ports. Packet filtering will not detect and catch all undesired packets, but it is fast and efficient.
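As a sketch of the rule matching just described, the following applies a least-access rule set for the web and mail servers mentioned above. The addresses, ports, and rule layout are assumptions for illustration only.

```python
# Hypothetical rule set: allow only port 80 to the web server and port 25 to
# the mail server; anything not explicitly allowed falls through to deny.
RULES = [
    {"proto": "tcp", "dst": "192.0.2.10", "dport": 80, "action": "allow"},  # web
    {"proto": "tcp", "dst": "192.0.2.20", "dport": 25, "action": "allow"},  # mail
]

def filter_packet(proto: str, dst: str, dport: int) -> str:
    for rule in RULES:
        if (rule["proto"], rule["dst"], rule["dport"]) == (proto, dst, dport):
            return rule["action"]
    return "deny"   # the principle of least access: default deny

print(filter_packet("tcp", "192.0.2.10", 80))   # allow
print(filter_packet("tcp", "192.0.2.10", 23))   # deny (Telnet to web server)
```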

Wireless

Wireless devices bring additional security concerns. There is, by definition, no physical connection to a wireless device; radio waves or infrared carry data, which allows anyone within range access to the data. This means that unless you take specific precautions, you have no control over who can see your data. Placing a wireless device behind a firewall does not do any good, because the firewall stops only physically connected traffic from reaching the device. Outside traffic can come literally from the parking lot directly to the wireless device.

The point of entry from a wireless device to a wired network is a device called a wireless access point. Wireless access points can support multiple concurrent devices accessing network resources through the network node they provide. Several mechanisms can be used to add wireless functionality to a machine. For PCs, this can be done via an expansion card.

Modems

Modems were once a slow method of remote connection used to connect client workstations to remote services over standard telephone lines. Modem is a shortened form of modulator/demodulator, covering the functions actually performed by the device as it converts analog signals to digital and vice versa. Connecting a digital computer to an analog telephone line required one of these devices. Today, the use of the term has expanded to cover devices connected to special digital telephone lines—DSL modems—and to cable television lines—cable modems. Although these devices are not actually modems in the true sense of the word, the term has stuck through marketing efforts directed at consumers. DSL and cable modems offer broadband high-speed connections and the opportunity for continuous connections to the Internet. Along with these desirable characteristics come some undesirable ones, however. Although they both provide the same type of service, cable and DSL modems have some differences. A DSL modem provides a direct connection between a subscriber's computer and an Internet connection at the local telephone company's switching station.

This private connection offers a degree of security, as it does not involve others sharing the circuit. Cable modems are set up in shared arrangements that theoretically could allow a neighbor to sniff a user’s cable modem traffic.

Both cable and DSL services are designed for a continuous connection, which brings up the question of IP address life for a client. Although some services originally used a static IP arrangement, virtually all have now adopted the Dynamic Host Configuration Protocol (DHCP) to manage their address space. A static IP has the advantage of staying the same, enabling convenient DNS connections for outside users. As cable and DSL services are primarily designed for client services as opposed to host services, this is not a relevant issue. A security issue of a static IP is that it is a stationary target for hackers. The move to DHCP has not significantly lessened this threat, however, for the typical DHCP lease on a cable modem lasts for days. This is still relatively stationary, and some form of firewall protection needs to be employed by the user.

Cable/DSL Security

The modem equipment provided by the subscription service converts the cable or DSL signal into a standard Ethernet signal that can then be connected to a NIC on the client device. This is still just a direct network connection, with no security device separating the two. The most common security device used in cable/DSL connections is a firewall. The firewall needs to be installed between the cable/DSL modem and client computers.

Telecom/PBX

Private branch exchanges (PBXs) are an extension of the public telephone network into a business. Although typically considered a separate entity from data systems, they are frequently interconnected and have security requirements as part of this interconnection as well as of their own. PBXs are computer-based switching equipment designed to connect telephones into the local phone system. Basically digital switching systems, they can be compromised from the outside and used by phone hackers (phreakers) to make phone calls at the business's expense. Although this type of hacking has decreased as long-distance rates have fallen, it has not gone away, and as several firms learn every year, voice mail boxes and PBXs can be compromised and the long-distance bills can get very high, very fast.

Another problem with PBXs arises when they are interconnected to the data systems, either by corporate connection or by rogue modems in the hands of users. In either case, a path exists for connection to outside data networks and the Internet. Just as a firewall is needed for security on data connections, one is needed for these connections as well. Telecommunications firewalls are a distinct type of firewall designed to protect both the PBX and the data connections. The functionality of a telecommunications firewall is the same as that of a data firewall: it is there to enforce security policies.

Telecommunications security policies can even cover hours of phone use, preventing unauthorized long-distance usage through the implementation of access codes and/or restricted service hours.

RAS

Remote Access Service (RAS) is a portion of the Windows OS that allows the connection between a client and a server via a dial-up telephone connection. Although slower than cable/DSL connections, this is still a common method for connecting to a remote network. When a user dials into the computer system, authentication and authorization are performed through a series of remote access protocols. For even greater security, a callback system can be employed, where the server calls back to the client at a set telephone number for the data exchange. RAS can also mean Remote Access Server, a term for a server designed to permit remote users access to a network and to regulate their access. A variety of protocols and methods exist to perform this function.

VPN

A virtual private network (VPN) is a construct used to provide a secure communication channel between users across public networks such as the Internet. A variety of techniques can be employed to instantiate a VPN connection.

The use of encryption technologies allows either the data in a packet to be encrypted or the entire packet to be encrypted. If the data is encrypted, the packet header can still be sniffed and observed between source and destination, but the encryption protects the contents of the packet from inspection. If the entire packet is encrypted, it is then placed into another packet and sent via tunnel across the public network. Tunneling can protect even the identity of the communicating parties.
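The difference between the two approaches can be sketched with a toy packet model. The header fields below are invented, and the third-party cryptography library's Fernet class stands in for whatever cipher suite a real VPN would negotiate.

```python
from cryptography.fernet import Fernet

f = Fernet(Fernet.generate_key())

header  = b"src=10.0.0.5;dst=203.0.113.9;"   # toy header, readable on the wire
payload = b"GET /payroll HTTP/1.1"

# Payload-only encryption: the header can still be sniffed between source
# and destination, but the contents are protected from inspection.
transport_style = header + f.encrypt(payload)

# Whole-packet encryption (tunneling): the original packet, header and all,
# is encrypted and wrapped in a new outer packet between VPN gateways, so
# even the identity of the communicating parties is hidden.
outer_header = b"src=gw-a;dst=gw-b;"
tunnel_style = outer_header + f.encrypt(header + payload)
```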

The most common implementation of VPN is via IPsec, a protocol for IP security. IPsec is mandated in IPv6 and is optionally back-fitted into IPv4. IPsec can be implemented in hardware, software, or a combination of both.

Intrusion Detection Systems

Intrusion detection systems (IDSs) are designed to detect, log, and respond to unauthorized network or host use, both in real time and after the fact. IDSs are available from a wide selection of vendors and are an essential part of network security. These systems are implemented in software, but in large systems, dedicated hardware is required as well. IDSs can be divided into two categories: network-based systems and host-based systems. Two primary methods of detection are used: signature-based and anomaly-based.

Network Access Control

Networks comprise connected workstations and servers. Managing security on a network involves managing a wide range of issues, from the various connected hardware devices to the software operating them. Assuming that the network is secure, each additional connection involves risk. Managing the endpoints on a case-by-case basis as they connect is a security methodology known as network access control. Two main competing methodologies exist: Network Access Protection (NAP) is a Microsoft technology for controlling network access of a computer host, and Network Admission Control (NAC) is Cisco's technology for controlling network admission.

Both the Cisco NAC and Microsoft NAP are in their early stages of implementation. The concept of automated admission checking based on client device characteristics is here to stay, as it provides timely control in the ever-changing network world of today’s enterprises.

Network Monitoring/Diagnostic

The computer network itself can be considered a large computer system, with performance and operating issues. Just as a computer needs management, monitoring, and fault resolution, so do networks. SNMP was developed to perform this function across networks. The idea is to enable a central monitoring and control center to maintain, configure, and repair network devices, such as switches and routers, as well as other network services such as firewalls, IDSs, and remote access servers. SNMP has some security limitations, and many vendors have developed software solutions that sit on top of SNMP to provide better security and better management tool suites.
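As an illustration of that central monitoring role, the sketch below polls a device's sysDescr object. It assumes the third-party pysnmp library, a reachable device at 192.0.2.1, and the deliberately weak, illustration-only "public" community string.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# One SNMPv2c GET for the standard system description object.
error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),           # v2c community (insecure)
    UdpTransportTarget(("192.0.2.1", 161)),
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0))))

if error_indication:
    print(error_indication)                       # e.g. a timeout
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```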

The concept of a network operations center (NOC) comes from the old phone company network days, when central monitoring centers monitored the health of the telephone network and provided interfaces for maintenance and management. This same concept works well with computer networks, and companies with midsize and larger networks employ the same philosophy. The NOC allows operators to observe and interact with the network, using the self-reporting and in some cases self-healing nature of network devices to ensure efficient network operation. Although generally a boring operation under normal conditions, when things start to go wrong, as in the case of a virus or worm attack, the center can become a busy and stressful place as operators attempt to return the system to full efficiency while not interrupting existing traffic.

As networks can be spread out literally around the world, it is not feasible to have a person visit each device for control functions. Software enables controllers at NOCs to measure the actual performance of network devices and make changes to the configuration and operation of devices remotely. The ability to make remote connections with this level of functionality is both a blessing and a security issue. Although this allows efficient network operations management, it also provides an opportunity for unauthorized entry into a network. For this reason, a variety of security controls are used, from secondary networks to VPNs and advanced authentication methods with respect to network control connections.

Routers

Routers are network traffic management devices used to connect different network segments together. Routers operate at the network layer of the OSI model, routing traffic using the network address (typically an IP address) and utilizing routing protocols to determine optimal routing paths across a network. Routers form the backbone of the Internet, moving traffic from network to network and inspecting packets from every communication as they route traffic along optimal paths.

Routers operate by examining each packet, looking at the destination address, and using algorithms and tables to determine where to send the packet next. This process of examining the header to determine the next hop can be done quickly. Routers use access control lists (ACLs) as a method of deciding whether a packet is allowed to enter the network. With ACLs, it is also possible to examine the source address and determine whether or not to allow a packet to pass. This allows routers equipped with ACLs to drop packets according to rules built into the ACLs. This can be a cumbersome process to set up and maintain, and as the ACL grows in size, routing efficiency can decrease. It is also possible to configure some routers to act as quasi-application gateways, performing stateful packet inspection and using contents as well as IP addresses to determine whether or not to permit a packet to pass. This can tremendously increase the time for a router to pass traffic and can significantly decrease router throughput.
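A first-match ACL can be sketched as an ordered rule list with an implicit deny at the bottom. The prefixes and rule layout here are illustrative assumptions, not any router's configuration syntax.

```python
import ipaddress

# Ordered rules: (source prefix, destination prefix, action). First match
# wins; anything matching no rule hits the implicit deny at the end.
ACL = [
    ("192.0.2.0/24", "10.0.0.10/32", "permit"),   # branch office to app server
    ("0.0.0.0/0",    "10.0.0.10/32", "deny"),     # everyone else kept out
]

def acl_lookup(src: str, dst: str) -> str:
    for src_net, dst_net, action in ACL:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_net)):
            return action
    return "deny"   # implicit deny

print(acl_lookup("192.0.2.77", "10.0.0.10"))    # permit
print(acl_lookup("198.51.100.9", "10.0.0.10"))  # deny
```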

Switches

Switches form the basis for connections in most Ethernet-based local area networks (LANs). Although hubs and bridges still exist, in today’s high-performance network environment switches have replaced both. A switch has separate collision domains for each port. This means that for each port, two collision domains exist: one from the port to the client on the downstream side and one from the switch to the network upstream.

When full duplex is employed, collisions are virtually eliminated between the two nodes, host and client. This also acts as a security factor in that a sniffer can see only limited traffic, as opposed to a hub-based system, where a single sniffer can see all of the traffic to and from connected devices.

Switches operate at the data link layer, while routers act at the network layer. For intranets, switches have become what routers are on the Internet—the device of choice for connecting machines. As switches have become the primary network connectivity device, additional functionality has been added to them. A switch is usually a layer 2 device, but layer 3 switches incorporate routing functionality.

Switches can also perform a variety of security functions. Switches work by moving packets from inbound connections to outbound connections. While moving the packets, it is possible to inspect the packet headers and enforce security policies. Port address security based on MAC addresses can determine whether a packet is allowed or blocked from a connection. This is the very function that a firewall uses for its determination, and this same functionality is what allows an 802.1x device to act as an “edge device.”
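Port security of this kind reduces to a lookup table binding each physical port to the MAC addresses allowed on it. The table below is a hypothetical sketch, not any vendor's configuration syntax.

```python
# Hypothetical port-security table: switch port -> set of permitted MACs.
ALLOWED_MACS = {
    1: {"aa:bb:cc:dd:ee:01"},
    2: {"aa:bb:cc:dd:ee:02", "aa:bb:cc:dd:ee:03"},
}

def frame_permitted(port: int, src_mac: str) -> bool:
    """Block any frame whose source MAC is not registered for its port."""
    return src_mac.lower() in ALLOWED_MACS.get(port, set())

print(frame_permitted(1, "AA:BB:CC:DD:EE:01"))  # True
print(frame_permitted(1, "de:ad:be:ef:00:00"))  # False: frame is blocked
```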

Load Balancers

Network Load Balancing, a clustering technology included in the Microsoft Windows 2000 Advanced Server and Datacenter Server operating systems, enhances the scalability and availability of mission-critical, TCP/IP-based services, such as Web, Terminal Services, virtual private networking, and streaming media servers. This component runs within cluster hosts as part of the Windows 2000 operating system and requires no dedicated hardware support. To scale performance, Network Load Balancing distributes IP traffic across multiple cluster hosts. It also ensures high availability by detecting host failures and automatically redistributing traffic to the surviving hosts. Network Load Balancing provides remote controllability and supports rolling upgrades from the Windows NT 4.0 operating system.

The unique and fully distributed architecture of Network Load Balancing enables it to deliver very high performance and failover protection, especially in comparison with dispatcher-based load balancers. The following describes the key features of this technology and explores its internal architecture and performance characteristics.

Internet server programs supporting mission-critical applications such as financial transactions, database access, corporate intranets, and other key functions must run 24 hours a day, seven days a week. And networks need the ability to scale performance to handle large volumes of client requests without creating unwanted delays. For these reasons, clustering is of wide interest to the enterprise. Clustering enables a group of independent servers to be managed as a single system for higher availability, easier manageability, and greater scalability.

The Microsoft Windows 2000 Advanced Server and Datacenter Server operating systems include two clustering technologies designed for this purpose: Cluster service, which is intended primarily to provide failover support for critical line-of-business applications such as databases, messaging systems, and file/print services; and Network Load Balancing, which serves to balance incoming IP traffic among multi-node clusters. We will treat this latter technology in detail here.

Network Load Balancing provides scalability and high availability to enterprise-wide TCP/IP services, such as Web, Terminal Services, proxy, Virtual Private Networking (VPN), and streaming media services. Network Load Balancing brings special value to enterprises deploying TCP/IP services, such as e-commerce applications, that link clients with transaction applications and back-end databases.

Network Load Balancing servers (also called hosts) in a cluster communicate among themselves to provide key benefits, including:

Scalability. Network Load Balancing scales the performance of a server-based program, such as a Web server, by distributing its client requests across multiple servers within the cluster. As traffic increases, additional servers can be added to the cluster, with up to 32 servers possible in any one cluster.

High availability. Network Load Balancing provides high availability by automatically detecting the failure of a server and repartitioning client traffic among the remaining servers within ten seconds, while providing users with continuous service.

Network Load Balancing distributes IP traffic to multiple copies (or instances) of a TCP/IP service, such as a Web server, each running on a host within the cluster. Network Load Balancing transparently partitions the client requests among the hosts and lets the clients access the cluster using one or more “virtual” IP addresses. From the client’s point of view, the cluster appears to be a single server that answers these client requests. As enterprise traffic increases, network administrators can simply plug another server into the cluster.
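Network Load Balancing's actual mapping algorithm is proprietary, but the fully distributed idea can be sketched as every host independently hashing each client endpoint to decide, without a central dispatcher, which one of them answers. The host names and hash choice below are assumptions for illustration.

```python
import hashlib

CLUSTER_HOSTS = ["web1", "web2", "web3", "web4"]   # assumed cluster members

def owning_host(client_ip: str, client_port: int) -> str:
    # Every host runs this same deterministic rule on each incoming packet,
    # so exactly one of them accepts the request -- no dispatcher needed.
    digest = hashlib.sha256(f"{client_ip}:{client_port}".encode()).digest()
    return CLUSTER_HOSTS[digest[0] % len(CLUSTER_HOSTS)]

print(owning_host("198.51.100.23", 51734))   # same answer on every host
```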

For example, the clustered hosts in the figure below work together to service network traffic from the Internet. Each server runs a copy of an IP-based service, such as Internet Information Services 5.0 (IIS), and Network Load Balancing distributes the networking workload among them. This speeds up normal processing so that Internet clients see faster turnaround on their requests. For added system availability, the back-end application (a database, for example) may operate on a two-node cluster running Cluster service.

A four-host cluster works as a single virtual server to handle network traffic. Each host runs its own copy of the server with Network Load Balancing distributing the work among the four hosts.

Advantages of Network Load Balancing

Network Load Balancing is superior to other software solutions such as round robin DNS (RRDNS), which distributes workload among multiple servers but does not provide a mechanism for server availability. If a server within the cluster fails, RRDNS, unlike Network Load Balancing, will continue to send it work until a network administrator detects the failure and removes the server from the DNS address list. This results in service disruption for clients. Network Load Balancing also has advantages over other load balancing solutions—both hardware- and software-based—that introduce single points of failure or performance bottlenecks by using a centralized dispatcher. Because Network Load Balancing has no proprietary hardware requirements, any industry-standard compatible computer can be used. This provides significant cost savings when compared to proprietary hardware load balancing solutions.

The unique and fully distributed software architecture of Network Load Balancing enables it to deliver the industry’s best load balancing performance and availability.

Proxy Servers

Though not strictly a security tool, a proxy server can be used to filter out undesirable traffic and prevent employees from accessing potentially hostile web sites. A proxy server takes requests from a client system and forwards them to the destination server on behalf of the client. Proxy servers can be completely transparent (these are usually called gateways or tunneling proxies), or a proxy server can modify the client request before sending it on, or even serve the client's request without needing to contact the destination server. Several major categories of proxy servers are in use:

Anonymizing proxy An anonymizing proxy is designed to hide information about the requesting system and make a user's web browsing experience "anonymous." Individuals concerned about the amount of personal information being transferred across the Internet, and about the use of tracking cookies and other mechanisms to track browsing activity, often use this type of proxy service.

Caching proxy This type of proxy keeps local copies of popular client requests and is often used in large organizations to reduce bandwidth usage and increase performance. When a request is made, the proxy server first checks to see whether it has a current copy of the requested content in the cache; if it does, it services the client request immediately without having to contact the destination server. If the content is old or the caching proxy does not have a copy of the requested content, the request is forwarded to the destination server (see the sketch after this list).

Content filtering proxy Content filtering proxies examine each client request and compare it to an established acceptable use policy. Requests can usually be filtered in a variety of ways, including by the requested URL, destination system, or domain name, or by keywords in the content itself. Content filtering proxies typically support user-level authentication, so access can be controlled and monitored and activity through the proxy can be logged and analyzed. This type of proxy is very popular in schools, corporate environments, and government networks.

Open proxy An open proxy is essentially a proxy that is available to any Internet user and often has some anonymizing capabilities as well. This type of proxy has been the subject of some controversy with advocates for Internet privacy and freedom on one side of the argument, and law enforcement, corporations, and government entities on the other side. As open proxies are often used to circumvent corporate proxies, many corporations attempt to block the use of open proxies by their employees.

Reverse proxy A reverse proxy is typically installed on the server side of a network connection, often in front of a group of web servers. The reverse proxy intercepts all incoming web requests and can perform a number of functions including traffic filtering, SSL decryption, serving of common static content such as graphics, and performing load balancing.

Web proxy A web proxy is solely designed to handle web traffic and is sometimes called a web cache. Most web proxies are essentially specialized caching proxies.
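Returning to the caching proxy described above, its core decision reduces to a cache-first lookup. The sketch below is a minimal illustration, assuming an in-memory dictionary and a fixed freshness window; a production proxy would honor HTTP cache-control headers rather than a hard-coded TTL.

```python
import time
from urllib.request import urlopen

CACHE_TTL = 300.0   # assumed freshness window, in seconds
cache = {}          # url -> (fetched_at, body)

def proxy_get(url: str) -> bytes:
    entry = cache.get(url)
    if entry and time.time() - entry[0] < CACHE_TTL:
        return entry[1]              # fresh copy: serve without contacting origin
    body = urlopen(url).read()       # stale or missing: forward to destination
    cache[url] = (time.time(), body)
    return body
```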

Web Security Gateways

If your organization is like most, Web security gateways weren't high on your list of antimalware measures until fairly recently. Your attention to incoming Web traffic has focused largely on policy control: HR concerns over employee access to Internet pornography, gambling, and the like, and productivity concerns, as users spend disproportionate time shopping online and checking up on their stocks and favorite teams.

Anti-malware largely meant anti-virus and was pretty well controlled by email screening and desktop antivirus. While Web security gateways are attracting increased attention, desktop antivirus vendors are scrambling to reinforce their products with improved heuristics, host-based IPS, and application controls. The antivirus vendors are responding to the rapidly shifting threats, from email-borne viruses to Web-based malware designed to steal confidential data and identities and take control of corporate computers.

The Web security gateway market is an interesting mix of appliance and software vendors, each expanding on their primary strengths–URL filtering vendors like Websense and Secure Computing; traditional AV vendors like McAfee, Trend Micro and Sophos; IM control specialists like FaceTime and email security vendors such as IronPort (recently purchased by Cisco) and MessageLabs–by development, acquisition or partnerships. Newer companies like Mi5 and Anchiva suggest room for growth. (Gartner identifies Blue Coat and Secure Computing as market leaders in a June Magic Quadrant report for this newly defined market.)

Managed Web security gateway services are another option. Although the market is still young, vendors are starting to offer their technology as a service. ScanSafe, the first company to offer antimalware, URL filtering, and IM control as pure-play services, actually scans all of its customers' Web traffic. It OEMs for companies like Postini and AT&T. MessageLabs, which initially sold ScanSafe-based services, now offers managed services based on its own technology.

VPN concentrators

With the Internet, we had the ability to create a VPN, providing a secure connection for users dialing in to their ISP from wherever. As time has passed, the need for greater security over these VPNs has increased. Unfortunately, small businesses usually have a limited amount of funds and/or IT expertise. But that doesn’t mean they should ignore the need to secure their VPNs properly. A VPN concentrator — ideal when you require a single device to handle a large number of incoming VPN tunnels — may be just what they need.

VPN concentrators typically arrive in one of two architectures: SSL VPNs and IPSec VPNs. Some concentrators only offer support of one protocol or the other, whereas Cisco and other vendors advertise the ability to utilize either with their concentrators.

The traditional tunnel for VPNs relies on IPsec, which resides at the network layer of the OSI model. At this level, a client is considered a virtual member of the connected network and can pretty much access the network as if locally connected. Therein lies a positive aspect of IPsec: apps run without any awareness that the client is coming from outside the network. The drawback is that additional security controls have to be configured to reduce risks.

For a client to access the IPsec VPN, it must have the client-side software configured. While this adds security, it adds cost to implement and consumes additional time and energy from tech support. This is what leads many toward an SSL solution.

SSL is already built in to the capabilities of pretty much all computers through Web browsers. Thus, there is no additional work to install and configure the client side. In addition, rather than residing at the network layer and allowing access to all aspects of a network, SSL lets admins grant access more precisely, limiting it to applications that are Web-enabled. In addition, admins can establish a finer level of control over users with SSL VPN connections.

On the negative side, however, because you can only utilize SSL VPNs through a Web browser, only Web-based applications will work. With a little bit of work, you can Web-enable additional applications, but this adds to the configuration time and may make SSL an unattractive solution for some.

In addition, SSL applications will not have centralized storage, shared access to resources (like printers), or files and other options that you can achieve through an IPSec connection. Some worry about Web caching with private information being left behind. Thus, you might want to choose a VPN concentrator that lists within its feature sets “automatic cache cleanup after session termination to ensure privacy of data,” as the NetGear SSL device does.

Network-based IDSs

Network-based IDSs (NIDS) came along a few years after host-based systems. After running host-based systems for a while, many organizations grew tired of the time, energy, and expense involved with managing the first generation of these systems. The desire for a “better way” grew along with the amount of interconnectivity between systems and consequently the amount of malicious activity coming across the networks themselves. This fueled development of a new breed of IDS designed to focus on the source for a great deal of the malicious traffic—the network itself.

The NIDS integrated very well into the concept of perimeter security. More and more companies began to operate their computer security like a castle or military base, with attention and effort focused on securing and controlling the ways in and out—the idea being that if you could restrict and control access at the perimeter, you didn't have to worry as much about activity inside the organization. Even though the idea of a security perimeter is somewhat flawed (many security incidents originate inside the perimeter), it caught on very quickly, as it was easy to understand and devices such as firewalls, bastion hosts, and routers were available to define and secure that perimeter. The best way to secure the perimeter from outside attack is to reject all traffic from external entities, but as this is impossible and impractical to do, security personnel needed a way to let traffic in but still be able to determine whether or not the traffic was malicious. This is the problem that NIDS developers were trying to solve.

Active vs. Passive NIDSs

Most NIDSs can be distinguished by how they examine the traffic and whether or not they interact with that traffic. On a passive system, the IDS simply watches the traffic, analyzes it, and generates alarms. It does not interact with the traffic itself in any way, and it does not modify the defensive posture of the system to react to the traffic. A passive IDS is very similar to a simple motion sensor—it generates an alarm when it matches a pattern, much as the motion sensor generates an alarm when it sees movement. An active IDS contains all the same components and capabilities of the passive IDS with one critical addition—the active IDS can react to the traffic it is analyzing.

These reactions can range from something simple, such as sending a TCP reset message to interrupt a potential attack and disconnect a session, to something complex, such as dynamically modifying firewall rules to reject all traffic from specific source IP addresses for the next 24 hours.
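A sketch of the simple end of that range, using the third-party scapy library (an assumption; any raw-packet tool would do): on a signature match, the active IDS forges a TCP reset toward the attacker to tear the session down. Addresses and sequence handling are illustrative.

```python
from scapy.all import IP, TCP, send

def reset_session(attacker_ip: str, attacker_port: int,
                  victim_ip: str, victim_port: int, seq: int) -> None:
    """Forge a RST as if sent by the victim, disconnecting the session."""
    rst = (IP(src=victim_ip, dst=attacker_ip)
           / TCP(sport=victim_port, dport=attacker_port, flags="R", seq=seq))
    send(rst, verbose=False)   # requires root privileges
```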

Signatures

Signatures can be very simple or remarkably complicated, depending on the activity they are trying to highlight. In general, signatures can be divided into two main groups, depending on what the signature is looking for: content-based and context-based. Content-based signatures are generally the simplest. They are designed to examine the content of such things as network packets or log entries. Content-based signatures are typically easy to build and look for simple things, such as a certain string of characters or a certain flag set in a TCP packet. Here are some example content-based signatures:

Matching the characters /etc/passwd in a Telnet session. On a UNIX system, the names of valid user accounts (and sometimes the passwords for those user accounts) are stored in a file called passwd located in the etc directory.

Matching a TCP packet with the synchronize, reset, and urgent flags all set within the same packet. This combination of flags is impossible to generate under normal conditions, and the presence of all of these flags in the same packet would indicate that the packet was likely created by a potential attacker for a specific purpose, such as to crash the targeted system.

Matching the characters to: decode in the header of an e-mail message. On certain older versions of sendmail, sending an e-mail message to “decode” would cause the system to execute the contents of the e-mail.
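Content-based matching of this kind is essentially pattern search over a payload. The sketch below encodes two of the examples above as regular expressions; the patterns themselves are simplified assumptions, far cruder than a production rule set.

```python
import re

CONTENT_SIGNATURES = [
    ("etc-passwd-grab", re.compile(rb"/etc/passwd")),
    ("sendmail-decode", re.compile(rb"to:\s*decode", re.IGNORECASE)),
]

def match_payload(payload: bytes) -> list:
    """Return the names of every content signature the payload triggers."""
    return [name for name, pat in CONTENT_SIGNATURES if pat.search(payload)]

print(match_payload(b"cat /etc/passwd"))   # ['etc-passwd-grab']
```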

Context-based signatures are generally more complicated, as they are designed to match large patterns of activity and examine how certain types of activity fit into the other activities going on around them. Context signatures generally address the question "How does this event compare to other events that have already happened or might happen in the near future?" Context-based signatures are more difficult to analyze and take more resources to match, as the IDS must be able to "remember" past events to match certain context signatures. Here are some examples of context-based signatures:

Match a potential intruder scanning for open web servers on a specific network. A potential intruder may use a port scanner to look for any systems accepting connections on port 80. To match this signature, the IDS must analyze all attempted connections to port 80 and then be able to determine which connection attempts are coming from the same source but are going to multiple, different destinations.

Identify a Nessus scan. Nessus is an open-source vulnerability scanner that allows security administrators (and potential attackers) to quickly examine systems for vulnerabilities. Depending on the tests chosen, Nessus will typically perform the tests in a certain order, one after the other. To be able to determine the presence of a Nessus scan, the IDS must know which tests Nessus runs as well as the typical order in which the tests are run.

Identify a ping flood attack. A single ICMP packet on its own is generally regarded as harmless, certainly not worthy of an IDS signature. Yet thousands of ICMP packets coming to a single system in a short period of time can have a devastating effect on the receiving system. By flooding a system with thousands of valid ICMP packets, an attacker can keep a target system so busy it doesn’t have time to do anything else—a very effective denial-of-service attack. To identify a ping flood, the IDS must recognize each ICMP packet and keep track of how many ICMP packets different systems have received in the recent past.
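The ping-flood example shows why context signatures need memory. A minimal sketch keeps a sliding window of recent ICMP arrivals per destination; the window length and threshold are assumed values a real IDS would tune.

```python
import time
from collections import defaultdict, deque

WINDOW = 5.0        # seconds of history to remember (assumed)
THRESHOLD = 1000    # ICMP packets per window that count as a flood (assumed)
recent = defaultdict(deque)   # dst_ip -> arrival timestamps

def icmp_packet_seen(dst_ip: str) -> bool:
    """Record one ICMP packet to dst_ip; True means 'looks like a flood'."""
    now = time.time()
    q = recent[dst_ip]
    q.append(now)
    while q and now - q[0] > WINDOW:   # forget packets outside the window
        q.popleft()
    return len(q) > THRESHOLD
```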

False Positives and Negatives

Viewed in its simplest form, an IDS is really just looking at activity (be it host-based or network-based) and matching it against a predefined set of patterns. When it matches an activity to a specific pattern, the IDS cannot know the true intent behind that activity—whether it is benign or hostile—and therefore it can react only as it has been programmed to do. In most cases, this means generating an alert that must then be analyzed by a human who tries to determine the intent of the traffic from whatever information is available.

When an IDS matches a pattern and generates an alarm for benign traffic, meaning the traffic was not hostile and not a threat, this is called a false positive. In other words, the IDS matched a pattern and raised an alarm when it didn't really need to do so. Keep in mind that the IDS can only match patterns and has no ability to determine intent behind the activity, so in some ways this is an unfair label. Technically, the IDS is functioning correctly by matching the pattern, but from a human standpoint this is not information the analyst needed to see, as it does not constitute a threat and does not require intervention.

NIPS (Network Intrusion Protection System)

The advent of Network-Based Intrusion Prevention heralds a new era of effective and efficient information security for corporations, educational institutions and government agencies. In effect, Network-Based Intrusion Prevention Systems (NBIPS) transform networks from a vulnerable and weak IT element to a tremendously powerful weapon against cyber-terrorism. The network becomes a potent and forceful instrument of protection – continuously defending every resource attached to it. Desktops, servers, operating systems, applications and Web services are aggressively protected from both external and internal attacks by Network-Based Intrusion Prevention Systems.

As well, the cost of securing your information assets declines dramatically with the deployment of Network-Based Intrusion Prevention. These efficient systems continuously filter attacks as they attempt to traverse the network and as a result, no damage occurs and no cleanup is required. Security administration is reduced and system downtime as a result of attack is eliminated.

An NBIPS installs in the network and is used to create physical security zones.

In essence, the network becomes intelligent and is able to quickly and precisely discern good traffic from bad traffic. The Intrusion Prevention System becomes a “jail” for hostile traffic such as Worms, Trojans, Viruses, Blended Attacks and Polymorphic Threats.

NBIPS are made possible through the deft blending of high-speed Application-Specific Integrated Circuits (ASICs) and newly available network processors. Network processors are very different from microprocessors in that they are specifically designed to process a high-speed flow of network traffic by executing tens of thousands of instructions and comparisons in parallel. A general-purpose microprocessor, such as the Pentium, was designed for workloads like graphics and spreadsheets and processes instructions largely sequentially by comparison.

Network-Based Intrusion Prevention Systems are an extension of today's firewall technologies. To some extent, you can think of an NBIPS as a seven-layer firewall. Today's firewalls inspect only the first four layers of any packet of information flow. NBIPS inspect all seven layers, making it impossible to hide anything in the last four layers of a packet.

Network-Based Intrusion Prevention Systems portend an immediate future where chaos, anxiety, cost, and sweat are replaced with certainty, productivity, and profitability. The nature of these systems creates a security posture never before seen and harmonizes the management of all security initiatives. We believe it is incumbent on all organizations, private and public, to deploy NBIPS for the following reasons:

NBIPS will improve corporate productivity and profitability

NBIPS will protect sensitive information from being stolen

NBIPS will protect key infrastructure from imminent global cyber-attacks thus preserving standards of living and ways of life.

NBIPS will limit copyright infringement liability

Protocol analyzers

A protocol analyzer (also known as a packet sniffer, network analyzer, or network sniffer) is a piece of software or an integrated software/hardware system that can capture and decode network traffic. Protocol analyzers have been popular with system administrators and security professionals for decades because they are such versatile and useful tools for a network environment. From a security perspective, protocol analyzers can be used for a number of activities, such as the following:

• Detecting intrusions or undesirable traffic (IDS/IPS must have some type of capture and decode ability to be able to look for suspicious traffic)

• Capturing traffic during incident response or incident handling

• Looking for evidence of botnets, Trojans, and infected systems

• Looking for unusual traffic or traffic exceeding certain thresholds

• Testing encryption between systems or applications

From a network administration perspective, protocol analyzers can be used for activities such as these:

• Analyzing network problems

• Detecting misconfigured or misbehaving applications

• Gathering and reporting network usage and traffic statistics

• Debugging client/server communications

Regardless of the intended use, a protocol analyzer must be able to see network traffic in order to capture and decode it. A software-based protocol analyzer must be able to place the NIC it is going to use to monitor network traffic in promiscuous mode (sometimes called promisc mode). Promiscuous mode tells the NIC to process every network packet it sees, regardless of the intended destination. Normally, a NIC will process only broadcast packets (that are going to everyone on that subnet) and packets with the NIC's Media Access Control (MAC) address as the destination address inside the packet. As a sniffer, the analyzer must process every packet crossing the wire, so the ability to place a NIC into promiscuous mode is critical.
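A minimal, Linux-only sketch of that capture path uses a raw AF_PACKET socket (requires root). The interface name and the kernel constants are spelled out here as assumptions rather than pulled from a capture library.

```python
import socket
import struct

ETH_P_ALL = 0x0003                                  # "every protocol" (Linux)
SOL_PACKET, PACKET_ADD_MEMBERSHIP, PACKET_MR_PROMISC = 263, 1, 1

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
sock.bind(("eth0", 0))                              # assumed interface name

# Ask the kernel to put the interface in promiscuous mode so frames addressed
# to other hosts are delivered too, not just broadcasts and our own MAC.
mreq = struct.pack("IHH8s", socket.if_nametoindex("eth0"),
                   PACKET_MR_PROMISC, 0, b"")
sock.setsockopt(SOL_PACKET, PACKET_ADD_MEMBERSHIP, mreq)

frame, _ = sock.recvfrom(65535)                     # one raw Ethernet frame
```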

Sniffers

The group of protocols that make up the TCP/IP suite was designed to work in a friendly environment where everybody who connected to the network used the protocols as they were designed. The abuse of this friendly assumption is illustrated by network traffic sniffing programs, sometimes referred to as sniffers.

A network sniffer is a software or hardware device that is used to observe traffic as it passes through a network on shared broadcast media. The device can be used to view all traffic, or it can target a specific protocol, service, or even string of characters (looking for logins, for example). Normally, the network device that connects a computer to a network is designed to ignore all traffic that is not destined for that computer. Network sniffers ignore this friendly agreement and observe all traffic on the network, whether destined for that computer or others. A network card that is listening to all network traffic and not just its own is said to be in "promiscuous mode." Some network sniffers are designed not just to observe all traffic but to modify traffic as well.

Network administrators can use network sniffers for monitoring network performance. Sniffers can be used to perform traffic analysis, for example, to determine what type of traffic is most commonly carried on the network and to determine which segments are most active. They can also be used for network bandwidth analysis and to troubleshoot certain problems (such as duplicate MAC addresses).

Spoofing

Spoofing is nothing more than making data look like it has come from a different source. This is possible in TCP/IP because of the friendly assumptions behind the protocols. When the protocols were developed, it was assumed that individuals who had access to the network layer would be privileged users who could be trusted. When a packet is sent from one system to another, it includes not only the destination IP address and port but the source IP address as well. You are supposed to fill in the source with your own address, but nothing stops you from filling in another system’s address. This is one of the several forms of spoofing.
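The forged-source idea can be sketched with the third-party scapy library (an assumption, shown only to make the mechanics concrete; all addresses are documentation-reserved).

```python
from scapy.all import IP, TCP, send

# Nothing stops the sender from writing someone else's address into the
# source field; any replies go to the spoofed host, not to the real sender.
spoofed = IP(src="198.51.100.7", dst="203.0.113.10") / TCP(dport=80, flags="S")
send(spoofed, verbose=False)   # requires root privileges
```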

Spoofing E-Mail

In e-mail spoofing, a message is sent with a From address that differs from that of the sending system. This can be easily accomplished in several different ways using several programs. To demonstrate how simple it is to spoof an e-mail address, you can Telnet to port 25 (the port associated with e-mail) on a mail server. From there, you can fill in any address for the From and To sections of the message, whether or not the addresses are yours and whether they actually exist or not.
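A hypothetical transcript of such a session follows, with server responses and addresses invented for illustration. Everything after DATA, including the From header, is whatever the sender chooses to type.

```
$ telnet mail.example.com 25
220 mail.example.com ESMTP
HELO anywhere.example.net
250 mail.example.com
MAIL FROM:<ceo@example.com>
250 2.1.0 Ok
RCPT TO:<victim@example.com>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
From: "The CEO" <ceo@example.com>
Subject: Urgent request

This message did not actually come from the CEO.
.
250 2.0.0 Ok: queued
QUIT
```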

Spam Filter

A spam filter is a program that is used to detect unsolicited and unwanted email and prevent those messages from getting to a user's inbox. Like other types of filtering programs, a spam filter looks for certain criteria on which it bases judgments. For example, the simplest and earliest versions (such as the one available with Microsoft's Hotmail) can be set to watch for particular words in the subject line of messages and to exclude these from the user's inbox. This method is not especially effective, too often omitting perfectly legitimate messages (these are called false positives) and letting actual spam through. More sophisticated programs, such as Bayesian filters or other heuristic filters, attempt to identify spam through suspicious word patterns or word frequency.

Bayesian Filtering

Bayesian spam filtering is the process of using a naive Bayes classifier to identify spam email. It is based on the principle that most events are dependent and that the probability of an event occurring in the future can be inferred from the previous occurrences of that event. This same technique can be used to classify spam: if some piece of text occurs often in spam but not in legitimate mail, then it is reasonable to assume that an email containing it is probably spam.

Bayesian spam filtering has become a popular mechanism to distinguish illegitimate spam email from legitimate email. Nowadays many mail clients implement Bayesian spam filtering.

Bayesian filters must be 'trained' to work effectively. Particular words have certain probabilities (also known as likelihood functions) of occurring in spam email but not in legitimate email. For instance, most email users will frequently encounter the word Viagra in spam email, but will seldom see it in other email. Before mail can be filtered using this method, the user needs to generate a database of words and tokens (such as the $ sign, IP addresses and domains, and so on), collected from a sample of spam mail and valid mail (referred to as 'ham'). For all words in each training email, the filter will adjust the probabilities that each word will appear in spam or legitimate email in its database.

After training, the word probabilities are used to compute the probability that an email with a particular set of words in it belongs to either category. If the total of word probabilities exceeds a certain threshold, the filter will mark the email as spam. Users can then decide whether to move email marked as spam to their spam folder or simply delete it.
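A toy version of the whole train-then-score loop, with add-one smoothing and an invented four-message corpus (real filters train on far larger samples):

```python
import math
from collections import Counter

spam = ["cheap viagra offer", "win money now"]        # training spam (toy)
ham  = ["meeting at noon", "project status report"]   # training ham (toy)

spam_counts = Counter(w for m in spam for w in m.split())
ham_counts  = Counter(w for m in ham  for w in m.split())
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def spam_log_odds(message: str) -> float:
    """Positive result leans spam; the cutoff itself is a policy choice."""
    score = math.log(len(spam) / len(ham))            # prior
    for w in message.split():
        p_spam = (spam_counts[w] + 1) / (spam_total + len(vocab))
        p_ham  = (ham_counts[w] + 1) / (ham_total + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

print(spam_log_odds("viagra offer"))    # > 0: marked as spam
print(spam_log_odds("status meeting"))  # < 0: delivered normally
```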

Web application firewall

A web application firewall (WAF) is an appliance, server plugin, or filter that applies a set of rules to an HTTP conversation. Generally, these rules cover common attacks such as Cross-site Scripting (XSS) and SQL Injection. By customizing the rules to your application, you can identify and block many attacks. The effort to perform this customization can be significant, and the rules need to be maintained as the application is modified.
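A toy version of such rule matching over request parameters is sketched below. The two patterns are simplified assumptions; a real WAF rule set (for example, one modeled on the OWASP Core Rule Set) is far more extensive.

```python
import re

WAF_RULES = [
    ("xss",  re.compile(r"<script\b", re.IGNORECASE)),
    ("sqli", re.compile(r"('|%27)\s*(or|union)\b", re.IGNORECASE)),
]

def inspect_request(params: dict) -> list:
    """Return the rules triggered by any parameter value in the request."""
    return [name for value in params.values()
            for name, pattern in WAF_RULES if pattern.search(value)]

print(inspect_request({"q": "1' OR 1=1"}))        # ['sqli']
print(inspect_request({"comment": "<script>x"}))  # ['xss']
```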

A network-based application layer firewall is a computer networking firewall operating at the application layer of a protocol stack, and is also known as a proxy-based or reverse-proxy firewall. Application firewalls specific to a particular kind of network traffic may be titled with the service name, such as a web application firewall. They may be implemented through software running on a host or a stand-alone piece of network hardware. Often, it is a host using various forms of proxy servers to proxy traffic before passing it on to the client or server. Because it acts on the application layer, it may inspect the contents of the traffic, blocking specified content, such as certain websites, viruses, and attempts to exploit known logical flaws in client software.

Network-based application-layer firewalls work on the application level of the network stack (for example, all web browser, telnet, or ftp traffic), and may intercept all packets traveling to or from an application. In principle, application firewalls can prevent all unwanted outside traffic from reaching protected machines.

Modern application firewalls may also offload encryption from servers, block application input/output from detected intrusions or malformed communication, manage or consolidate authentication, or block content which violates policies.
