SY0-501 Section 1.4 – Given a scenario, implement common protocols and services.
IPsec
Internet Protocol security (IPsec) uses cryptographic security services to protect communications over Internet Protocol (IP) networks. IPsec supports network-level peer authentication, data origin authentication, data integrity, data confidentiality (encryption), and replay protection. The Microsoft implementation of IPsec is based on Internet Engineering Task Force (IETF) standards.
In Windows 7, Windows Server 2008 R2, Windows Vista and Windows Server 2008, you can configure IPsec behavior by using the Windows Firewall with Advanced Security snapin. In earlier versions of Windows, IPsec was a stand-alone technology separate from Windows Firewall.
One of the first things that one notices when trying to set up IPsec is that there are so many knobs and settings: even a pair of entirely standards-conforming implementations sports a bewildering number of ways to impede a successful connection. It’s just an astonishingly complex suite of protocols.
One cause of the complexity is that IPsec provides mechanism, not policy: rather than define such-and-such encryption algorithm or a certain authentication function, it provides a framework that allows an implementation to provide nearly anything that both ends agree upon.
The IP Datagram
Since we’re looking at IPsec from the bottom up, we must first take a brief detour to revisit the IP Header itself, which carries all of the traffic we’ll be considering.
AH: Authentication Only
AH is used to authenticate — but not encrypt — IP traffic, and this serves the threefold purpose of ensuring that we’re really talking to who we think we are, detecting alteration of data while in transit, and (optionally) guarding against replay by attackers who capture data from the wire and attempt to re-inject it onto the wire at a later date.
Authentication is performed by computing a cryptographic hash-based message authentication code (HMAC) over nearly all the fields of the IP packet (excluding those which might be modified in transit, such as the TTL or the header checksum); the result is stored in a newly added AH header and sent to the other end.
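As a minimal sketch of this idea (the key, the dictionary packet representation, and the use of HMAC-SHA256 are illustrative assumptions; real AH computes the ICV over the actual packet bytes and uses whatever HMAC the Security Association negotiated):

```python
import hashlib
import hmac

def ah_icv(key: bytes, packet_fields: dict) -> bytes:
    """Compute an AH-style integrity check value over a packet,
    treating mutable fields as zero so that routers en route
    (which decrement TTL and recompute the checksum) do not
    invalidate the authentication."""
    immutable = dict(packet_fields, ttl=0, checksum=0)
    data = b"".join(str(immutable[k]).encode() for k in sorted(immutable))
    return hmac.new(key, data, hashlib.sha256).digest()

key = b"shared-sa-key"  # assumed pre-shared key from the SA
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2",
       "ttl": 64, "checksum": 0x1C46, "payload": "hello"}
icv = ah_icv(key, pkt)

# A router decrementing TTL in transit does not change the ICV...
in_transit = dict(pkt, ttl=63, checksum=0x1D46)
assert ah_icv(key, in_transit) == icv
# ...but tampering with the payload does.
assert ah_icv(key, dict(pkt, payload="hacked")) != icv
```

The receiver recomputes the same HMAC with the shared key and compares; any mismatch means the packet was altered or forged.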
This AH header contains just five interesting fields, and it’s injected between the original IP header and the payload. We’ll touch on each of the fields here, though their utility may not be fully apparent until we see how they’re used in the larger picture.
ESP — Encapsulating Security Payload
Adding encryption makes ESP a bit more complicated because the encapsulation surrounds the payload rather than precedes it as with AH: ESP includes header and trailer fields to support the encryption and optional authentication. It also provides Tunnel and Transport modes, which are used in by-now familiar ways. The IPsec RFCs don’t insist upon any particular encryption algorithms, but we find DES, triple-DES, AES, and Blowfish in common use to shield the payload from prying eyes. The Security Association specifies the algorithm used for a particular connection, and this SA includes not only the algorithm, but the key used.
Unlike AH, which provides a small header before the payload, ESP surrounds the payload it’s protecting. The Security Parameters Index and Sequence Number serve the same purpose as in AH, but we find padding, the next header, and the optional Authentication Data at the end, in the ESP Trailer.
It’s possible to use ESP without any actual encryption (to use a NULL algorithm), which nonetheless structures the packet the same way. This provides no confidentiality, and it only makes sense if combined with ESP authentication. It’s pointless to use ESP without either encryption or authentication (unless one is simply doing protocol testing).
Padding is provided to allow block-oriented encryption algorithms room for multiples of their blocksize, and the length of that padding is provided in the pad len field. The next hdr field gives the type (IP, TCP, UDP, etc.) of the payload in the usual way, though it can be thought of as pointing “backwards” into the packet rather than forward as we’ve seen in AH.
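The padding arithmetic can be sketched as follows (a 16-byte block size, as with AES, is an assumption; the monotonically increasing pad bytes follow ESP's default padding scheme):

```python
def esp_padding(payload_len: int, block_size: int = 16) -> bytes:
    """Return the pad bytes needed so that payload + padding +
    the 2 trailer bytes (pad len and next hdr) fill whole cipher blocks."""
    pad_len = (-(payload_len + 2)) % block_size
    # Default ESP padding: monotonically increasing bytes 1, 2, 3, ...
    return bytes(range(1, pad_len + 1))

assert len(esp_padding(14)) == 0               # 14 + 2 = 16: already aligned
assert esp_padding(10) == bytes([1, 2, 3, 4])  # 10 + 4 + 2 = 16
```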
In addition to encryption, ESP can also optionally provide authentication, with the same HMAC as found in AH. Unlike AH, however, this authentication is only for the ESP header and encrypted payload: it does not cover the full IP packet. Surprisingly, this does not substantially weaken the security of the authentication, but it does provide some important benefits.
When an outsider examines an IP packet containing ESP data, it’s essentially impossible to make any real guesses about what’s inside save for the usual data found in the IP header (particularly the source and destination IP addresses). The attacker will certainly know that it’s ESP data — that’s also in the header — but the type of the payload is encrypted with the payload.
Even the presence or absence of Authentication Data can’t be determined by looking at the packet itself (this determination is made by using the Security Parameters Index to reference the pre-shared set of parameters and algorithms for this connection).
However, it should be noted that sometimes the envelope provides hints that the payload does not. With more people sending VoIP inside ESP over the Internet, the QoS tags are in the outer header, and it is fairly obvious which traffic is VoIP signaling (IP precedence 3) and which is RTP traffic (IP precedence 5). It’s not a sure thing, but it might be enough of a clue to matter in some circumstances.
SNMP
Since its creation in 1988 as a short-term solution to manage elements in the growing Internet and other attached networks, SNMP has achieved widespread acceptance. SNMP was derived from its predecessor, SGMP (Simple Gateway Management Protocol), and was intended to be replaced by a solution based on the CMIS/CMIP (Common Management Information Service/Protocol) architecture. This long-term solution, however, never received the widespread acceptance of SNMP.
SNMP is based on the manager/agent model, consisting of an SNMP manager, an SNMP agent, a database of management information, managed SNMP devices, and the network protocol. The SNMP manager provides the interface between the human network manager and the management system. The SNMP agent provides the interface between the manager and the physical device(s) being managed.
The SNMP manager and agent use an SNMP Management Information Base (MIB) and a relatively small set of commands to exchange information. The SNMP MIB is organized in a tree structure with individual variables, such as point status or description, being represented as leaves on the branches. A long numeric tag or object identifier (OID) is used to distinguish each variable uniquely in the MIB and in SNMP messages.
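The OID addressing can be made concrete with a short sketch: the function below encodes a dotted OID into the compact byte form SNMP carries on the wire (standard BER rules: the first two arcs share one byte, later arcs use base-128 with a continuation bit). The sample OID 1.3.6.1.2.1.1.1.0 is sysDescr.0 from MIB-2, a variable a manager typically polls with GET.

```python
def encode_oid(oid: str) -> bytes:
    """BER-encode a dotted object identifier string."""
    arcs = [int(a) for a in oid.split(".")]
    # First two arcs are packed into a single byte: 40*X + Y.
    body = [40 * arcs[0] + arcs[1]]
    for arc in arcs[2:]:
        # Base-128 encoding, high bit set on all bytes but the last.
        chunk = [arc & 0x7F]
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        body.extend(reversed(chunk))
    return bytes(body)

# sysDescr.0 encodes to 8 bytes, starting with 0x2B (1.3 -> 40*1 + 3).
assert encode_oid("1.3.6.1.2.1.1.1.0") == bytes([0x2B, 6, 1, 2, 1, 1, 1, 0])
```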
SNMP uses five basic messages (GET, GET-NEXT, GET-RESPONSE, SET, and TRAP) to communicate between the SNMP manager and the SNMP agent. The GET and GET-NEXT messages allow the manager to request information for a specific variable.
The agent, upon receiving a GET or GET-NEXT message, will issue a GET-RESPONSE message to the SNMP manager with either the information requested or an error indication as to why the request cannot be processed. A SET message allows the SNMP manager to request a change to the value of a specific variable (for example, on an alarm remote, to operate a relay). The SNMP agent will then respond with a GET-RESPONSE message indicating that the change has been made, or with an error indication as to why the change cannot be made. The SNMP TRAP message allows the agent to spontaneously inform the SNMP manager of an “important” event.
As you can see, most of the messages (GET, GET-NEXT, and SET) are only issued by the SNMP manager. Because the TRAP message is the only message capable of being initiated by an SNMP agent, it is the message used by DPS Remote Telemetry Units (RTUs) to report alarms. This notifies the SNMP manager as soon as an alarm condition occurs, instead of waiting for the SNMP manager to ask.
The small number of commands used is only one of the reasons SNMP is “simple.” The other simplifying factor is the SNMP protocol’s reliance on an unsupervised or connectionless communication link. This simplicity has led directly to the widespread use of SNMP, specifically in the Internet Network Management Framework. Within this framework, it is considered “robust” because of the independence of the SNMP managers from the agents, e.g. if an SNMP agent fails, the SNMP manager will continue to function, or vice versa.
SSH
Secure Shell or SSH is a network protocol that allows data to be exchanged using a secure channel between two networked devices. The two major versions of the protocol are referred to as SSH1 or SSH-1 and SSH2 or SSH-2. Used primarily on Linux- and Unix-based systems to access shell accounts, SSH was designed as a replacement for Telnet and other insecure remote shells, which send information, notably passwords, in plaintext, rendering them susceptible to packet analysis. The encryption used by SSH is intended to provide confidentiality and integrity of data over an unsecured network, such as the Internet.
DNS
The DNS translates Internet domain and host names to IP addresses. DNS automatically converts the names we type in our Web browser address bar to the IP addresses of the Web servers hosting those sites. DNS implements a distributed database to store this name and address information for all public hosts on the Internet. DNS assumes IP addresses do not change (are statically assigned rather than dynamically assigned).
The DNS database resides on a hierarchy of special database servers. When clients like Web browsers issue requests involving Internet host names, a piece of software called the DNS resolver (usually built into the network operating system) first contacts a DNS server to determine the IP address of the named host. If the DNS server does not contain the needed mapping, it will in turn forward the request to a different DNS server at the next higher level in the hierarchy. After potentially several forwarding and delegation messages are sent within the DNS hierarchy, the IP address for the given host eventually arrives at the resolver, which in turn completes the request over Internet Protocol.
DNS additionally includes support for caching requests and for redundancy. Most network operating systems support configuration of primary, secondary, and tertiary DNS servers, each of which can service initial requests from clients. ISPs maintain their own DNS servers and use DHCP to automatically configure clients, relieving most home users of the burden of DNS configuration.
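The resolution walk and the caching described above can be sketched with a toy model (the zone data, server names, and the answer address are all invented for illustration; a real resolver speaks the DNS wire protocol over port 53):

```python
# Each "server" either knows the answer or refers us to a server
# lower in the hierarchy (root -> TLD -> authoritative).
servers = {
    "root": {"refer": {"com.": "tld"}},
    "tld":  {"refer": {"example.com.": "auth"}},
    "auth": {"answer": {"www.example.com.": "93.184.216.34"}},
}

cache = {}  # answers remembered so repeat lookups skip the hierarchy

def resolve(name: str, server: str = "root") -> str:
    if name in cache:
        return cache[name]
    s = servers[server]
    if name in s.get("answer", {}):
        cache[name] = s["answer"][name]   # cache for later requests
        return cache[name]
    for suffix, nxt in s.get("refer", {}).items():
        if name.endswith(suffix):
            return resolve(name, nxt)     # follow the delegation down
    raise LookupError(name)

assert resolve("www.example.com.") == "93.184.216.34"
assert "www.example.com." in cache        # second lookup hits the cache
```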
Encryption (SSL and TLS)
Secure Sockets Layer (SSL) is a general-purpose protocol developed by Netscape for managing the encryption of information being transmitted over the Internet. It began as a competitive feature to drive sales of Netscape’s web server product, which could then send information securely to end users. This early vision of securing the transmission channel between the web server and the browser became an Internet standard.
Today, SSL is almost ubiquitous with respect to e-commerce: all browsers support it, as do web servers, and virtually all sensitive financial traffic from e-commerce web sites uses this method to protect information in transit between web servers and browsers. The Internet Engineering Task Force (IETF) embraced SSL in 1996 through a series of RFCs and named the effort Transport Layer Security (TLS). Starting from SSL 3.0, in 1999 the IETF issued RFC 2246, “TLS Protocol Version 1.0,” followed by RFC 2712, which added Kerberos authentication, and then RFCs 2817 and 2818, which extended TLS to HTTP version 1.1 (HTTP/1.1).
Although SSL has been through several versions, TLS begins with an equivalency to SSL 3.0, so today SSL and TLS are essentially the same, although not interchangeable. SSL/TLS is a series of functions that exist in the OSI (Open System Interconnection) model between the application layer and the transport and network layers. The goal of TCP is to send an unauthenticated, error-free stream of information between two computers. SSL/TLS adds message integrity and authentication functionality to TCP through the use of cryptographic methods. Because cryptographic methods are an ever-evolving field, and because both parties must agree on an implementation method, SSL/TLS has embraced an open, extensible, and adaptable method to allow flexibility and strength.
When two programs initiate an SSL/TLS connection, one of their first tasks is to compare available protocols and agree on an appropriate common cryptographic protocol for use in this particular communication. As SSL/TLS can use separate algorithms and methods for encryption, authentication, and data integrity, each of these is negotiated and determined depending upon need at the beginning of a communication.
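That negotiation can be sketched as follows (the suite names are standard TLS cipher suite identifiers; the server-preference policy shown is an assumption, chosen because it is one common implementation choice):

```python
def negotiate(client_suites, server_suites):
    """Pick the first suite in the server's preference order that the
    client also offered; fail the handshake if there is no overlap."""
    for suite in server_suites:
        if suite in client_suites:
            return suite
    raise ValueError("handshake failure: no common cipher suite")

client = ["TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_3DES_EDE_CBC_SHA"]
server = ["TLS_RSA_WITH_AES_256_CBC_SHA", "TLS_RSA_WITH_AES_128_CBC_SHA"]
assert negotiate(client, server) == "TLS_RSA_WITH_AES_128_CBC_SHA"
```

The same pattern is applied separately for the key exchange, encryption, and integrity algorithms, which is why two conforming implementations can still fail to connect if their offered lists never intersect.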
How SSL/TLS Works
SSL/TLS uses a wide range of cryptographic protocols. To use these protocols effectively between a client and a server, an agreement must be reached on which protocol to use via the SSL handshake process. The process begins with a client request for a secure connection and a server’s response. The questions asked and answered are which protocol and which cryptographic algorithm will be used. For the client and server to communicate, both sides must agree on a commonly held protocol (SSL v1, v2, v3, or TLS v1). Commonly available cryptographic algorithms include Diffie-Hellman and RSA. The next step is to exchange certificates and keys as necessary to enable authentication. Authentication was a one-way process for SSL v1 and v2, with only the server providing authentication. In SSL v3/TLS, mutual authentication of both client and server is possible.
The certificate exchange is via X.509 certificates, and public key cryptography is used to establish authentication. Once authentication is established, the channel is secured with symmetric key cryptographic methods and hashes, typically RC4 or 3DES for symmetric key and MD5 or SHA-1 for the hash functions.
The Web (HTTP and HTTPS)
HTTP is used for the transfer of hyperlinked data over the Internet, from web servers to browsers. When a user types a URL such as http://www.example.com into a browser, the http:// portion indicates that the desired method of data transfer is HTTP. Although it was initially created just for HTML pages, today many protocols deliver content over this connection protocol. HTTP traffic takes place over TCP port 80 by default, and this port is typically left open on firewalls because of the extensive use of HTTP.
One of the primary drivers behind the development of SSL/TLS was the desire to hide the complexities of cryptography from end users. With an SSL/TLS-enabled browser, a user obtains this protection simply by requesting a secure connection from a web server instead of a non-secure one.
When a browser is SSL/TLS-aware, the entry of an SSL/TLS-based protocol will cause the browser to perform the necessary negotiations with the web server to establish the required level of security. Once these negotiations are complete and a session key secures the session, a closed padlock icon is displayed to indicate that the session is secure. If the protocol is https:, your connection is secure; if it is http:, then the connection is carried in plaintext for anyone to see. Because the tiny padlock placed in the lower-right corner of the screen could easily be missed, Microsoft moved it to an obvious position next to the URL in Internet Explorer 7.
Another security feature introduced with Internet Explorer 7 and Firefox 3 is high assurance SSL, a combination of an extended validation SSL certificate and a high security browser. If a high security browser (Internet Explorer 7 or Firefox 3 and beyond) establishes a connection with a vendor that has registered with a certificate authority for an extended validation SSL certificate, then the URL box is colored green, and the box next to it displays the registered entity and additional validation information when clicked. These improvements were a response to phishing sites and online fraud; although they require additional costs and registration on the part of the vendors, this is a modest up-front cost to help reduce fraud and provide confidence to customers.
One important note on SSL certificate-based security is the concept of single- versus dual-sided authentication. The vast majority of SSL connections are single-sided, meaning that only the identity of the server side is vouched for via a certificate. The client is typically not identified by certificate, mainly because of the number of clients and the corresponding PKI issues. A single-sided SSL-secured conversation can be attacked using a man-in-the-middle attack by capturing all the traffic and relaying responses. Dual-sided SSL would prevent this attack mechanism, yet the management burden of every client needing to obtain and maintain a certificate makes this practically infeasible with the PKI currently available to most end users.
The objective of enabling cryptographic methods in this fashion is to make it easy for end users to use these protocols. SSL/TLS is designed to be protocol agnostic. Although designed to run on top of TCP/IP, it can operate on top of other lower-level protocols, such as X.25. SSL/TLS requires a reliable lower-level protocol, so it is not designed for, and cannot properly function on top of, a non-reliable protocol such as the User Datagram Protocol (UDP). Even with this limitation, SSL/TLS has been used to secure many common TCP/IP-based services.
TCP/IP
As with all other communications protocols, TCP/IP is composed of layers:
IP – is responsible for moving packets of data from node to node. IP forwards each packet based on a four-byte destination address (the IP number). The Internet authorities assign ranges of numbers to different organizations. The organizations assign groups of their numbers to departments. IP operates on gateway machines that move data from department to organization to region and then around the world.
TCP – is responsible for verifying the correct delivery of data from client to server. Data can be lost in the intermediate network. TCP adds support to detect errors or lost data and to trigger retransmission until the data is correctly and completely received.
Sockets – the name given to the package of subroutines that provides access to TCP/IP on most systems.
The Internet Protocol was developed to create a Network of Networks (the “Internet”). Individual machines are first connected to a LAN (Ethernet or Token Ring). TCP/IP shares the LAN with other uses (a Novell file server, Windows for Workgroups peer systems). One device provides the TCP/IP connection between the LAN and the rest of the world.
To ensure that all types of systems from all vendors can communicate, TCP/IP is absolutely standardized on the LAN. However, larger networks based on long distances and phone lines are more volatile. In the US, many large corporations wished to reuse large internal networks based on IBM’s SNA. In Europe, the national phone companies traditionally standardized on X.25. However, the sudden explosion of high-speed microprocessors, fiber optics, and digital phone systems created a burst of new options: ISDN, frame relay, FDDI, and Asynchronous Transfer Mode (ATM). New technologies arise and become obsolete within a few years. With cable TV and phone companies competing to build the National Information Superhighway, no single standard can govern citywide, nationwide, or worldwide communications.
The original design of TCP/IP as a Network of Networks fits nicely within the current technological uncertainty. TCP/IP data can be sent across a LAN, or it can be carried within an internal corporate SNA network, or it can piggyback on the cable TV service. Furthermore, machines connected to any of these networks can communicate to any other network through gateways supplied by the network vendor.
Each technology has its own convention for transmitting messages between two machines within the same network. On a LAN, messages are sent between machines by supplying the six byte unique identifier (the “MAC” address). In an SNA network, every machine has Logical Units with their own network address. DECNET, Appletalk, and Novell IPX all have a scheme for assigning numbers to each local network and to each workstation attached to the network.
On top of these local or vendor-specific network addresses, TCP/IP assigns a unique number to every workstation in the world. This “IP number” is a four-byte value that, by convention, is expressed by converting each byte into a decimal number (0 to 255) and separating the bytes with a period. For example, the PC Lube and Tune server is 130.132.59.234.
It is still possible for almost anyone to get assignment of a number for a small “Class C” network in which the first three bytes identify the network and the last byte identifies the individual computer. The author followed this procedure and was assigned the numbers 192.35.91.* for a network of computers at his house. Larger organizations can get a “Class B” network where the first two bytes identify the network and the last two bytes identify each of up to 64 thousand individual workstations. Yale’s Class B network is 130.132, so all computers with IP address 130.132.*.* are connected through Yale.
The organization then connects to the Internet through one of a dozen regional or specialized network suppliers. The network vendor is given the subscriber network number and adds it to the routing configuration in its own machines and those of the other major network suppliers.
There is no mathematical formula that translates the numbers 192.35.91 or 130.132 into “Yale University” or “New Haven, CT.” The machines that manage large regional networks or the central Internet routers managed by the National Science Foundation can only locate these networks by looking each network number up in a table. There are potentially thousands of Class B networks, and millions of Class C networks, but computer memory costs are low, so the tables are reasonable.
Customers that connect to the Internet, even customers as large as IBM, do not need to maintain any information on other networks. They send all external data to the regional carrier to which they subscribe, and the regional carrier maintains the tables and does the appropriate routing.
New Haven is in a border state split 50-50 between the Yankees and the Red Sox. In this spirit, Yale recently switched its connection from the Middle Atlantic regional network to the New England carrier. When the switch occurred, tables in the other regional areas and in the national spine had to be updated, so that traffic for 130.132 was routed through Boston instead of New Jersey. The large network carriers handle the paperwork and can perform such a switch given sufficient notice. During a conversion period, the university was connected to both networks so that messages could arrive through either path.
Although the individual subscribers do not need to tabulate network numbers or provide explicit routing, it is convenient for most Class B networks to be internally managed as a much smaller and simpler version of the larger network organizations.
It is common to subdivide the two bytes available for internal assignment into a one byte department number and a one byte workstation ID.
The enterprise network is built using commercially available TCP/IP router boxes. Each router has small tables with 255 entries to translate the one-byte department number into the selection of a destination Ethernet connected to one of the routers. Messages to the PC Lube and Tune server (130.132.59.234) are sent through the national and New England regional networks based on the 130.132 part of the number. Arriving at Yale, the department ID 59 selects an Ethernet connector in the C&IS building. The station ID 234 selects a particular workstation on that LAN. The Yale network must be updated as new Ethernets and departments are added, but it is not affected by changes outside the university or the movement of machines within the department.
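The one-byte department lookup described above amounts to a tiny table, sketched here (the Ethernet names and the second department entry are invented for illustration):

```python
# Internal routing table: department number (third byte) -> destination LAN.
routes = {59: "ethernet-cis", 21: "ethernet-medical"}

def next_hop(ip: str) -> str:
    """Select the destination Ethernet from the department byte alone;
    the router never needs to know about networks outside 130.132."""
    dept = int(ip.split(".")[2])
    return routes[dept]

assert next_hop("130.132.59.234") == "ethernet-cis"
```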
Every time a message arrives at an IP router, it makes an individual decision about where to send it next. There is no concept of a session with a preselected path for all traffic. Consider a company with facilities in New York, Los Angeles, Chicago, and Atlanta. It could build a network from four phone lines forming a loop (NY to Chicago to LA to Atlanta to NY). A message arriving at the NY router could go to LA via either Chicago or Atlanta. The reply could come back the other way.
How does the router make a decision between routes? There is no correct answer. Traffic could be routed by the “clockwise” algorithm (go NY to Atlanta, LA to Chicago). The routers could alternate, sending one message to Atlanta and the next to Chicago. More sophisticated routing measures traffic patterns and sends data through the least busy link.
If one phone line in this network breaks down, traffic can still reach its destination through a roundabout path. After losing the NY to Chicago line, data can be sent NY to Atlanta to LA to Chicago. This provides continued service though with degraded performance. This kind of recovery is the primary design feature of IP. The routers in NY and Chicago immediately detect the loss of the line, but somehow this information must be sent to the other nodes.
Otherwise, LA could continue to send NY messages through Chicago, where they arrive at a “dead end.” Each network adopts some Router Protocol, which periodically updates the routing tables throughout the network with information about changes in route status.
If the size of the network grows, then the complexity of the routing updates will increase as will the cost of transmitting them. Building a single network that covers the entire US would be unreasonably complicated. Fortunately, the Internet is designed as a Network of Networks. This means that loops and redundancy are built into each regional carrier. The regional network handles its own problems and reroutes messages internally. Its Router Protocol updates the tables in its own routers, but no routing updates need to propagate from a regional carrier to the NSF spine or to the other regions (unless, of course, a subscriber switches permanently from one region to another).
IBM designs its SNA networks to be centrally managed. If any error occurs, it is reported to the network authorities. By design, any error is a problem that should be corrected or repaired. IP networks, however, were designed to be robust. In battlefield conditions, the loss of a node or line is a normal circumstance. Casualties can be sorted out later on, but the network must stay up. So IP networks are robust. They automatically (and silently) reconfigure themselves when something goes wrong. If there is enough redundancy built into the system, then communication is maintained.
In 1975 when SNA was designed, such redundancy would have been prohibitively expensive, or it might have been argued that only the Defense Department could afford it. Today, however, simple routers cost no more than a PC. However, the TCP/IP design principle that “errors are normal and can be largely ignored” produces problems of its own.
Data traffic is frequently organized around “hubs,” much like airline traffic. One could imagine an IP router in Atlanta routing messages for smaller cities throughout the Southeast. The problem is that data arrives without a reservation. Airline companies experience the problem around major events, like the Super Bowl. Just before the game, everyone wants to fly into the city. After the game, everyone wants to fly out. Imbalance occurs on the network when something new gets advertised. Adam Curry announced the server at “mtv.com” and his regional carrier was swamped with traffic the next day. The problem is that messages come in from the entire world over high-speed lines, but they go out to mtv.com over what was then a slow speed phone line.
Occasionally a snowstorm cancels flights and airports fill up with stranded passengers. Many go off to hotels in town. When data arrives at a congested router, there is no place to send the overflow. Excess packets are simply discarded. It becomes the responsibility of the sender to retry the data a few seconds later and to persist until it finally gets through. This recovery is provided by the TCP component of the Internet protocol.
TCP was designed to recover from node or line failures where the network propagates routing table changes to all router nodes. Since the update takes some time, TCP is slow to initiate recovery. The TCP algorithms are not tuned to optimally handle packet loss due to traffic congestion. Instead, the traditional Internet response to traffic problems has been to increase the speed of lines and equipment in order to stay ahead of growth in demand.
TCP treats the data as a stream of bytes. It logically assigns a sequence number to each byte. The TCP packet has a header that says, in effect, “This packet starts with byte 379642 and contains 200 bytes of data.” The receiver can detect missing or incorrectly sequenced packets. TCP acknowledges data that has been received and retransmits data that has been lost. The TCP design means that error recovery is done end-to-end between the Client and Server machine. There is no formal standard for tracking problems in the middle of the network, though each network has adopted some ad hoc tools.
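The receiver-side bookkeeping can be sketched like this (a simplification: real TCP uses 32-bit wrapping sequence numbers and acknowledges what it has, requesting retransmission, rather than raising an error on a gap):

```python
def reassemble(segments):
    """Rebuild the byte stream from (sequence_number, data) segments,
    which may arrive out of order or overlap due to retransmissions."""
    expected = min(seq for seq, _ in segments)
    stream = b""
    for seq, data in sorted(segments):
        if seq > expected:
            # A hole in the sequence space: a real receiver would hold
            # this segment and wait for the sender to retransmit.
            raise ValueError(f"missing bytes starting at {expected}")
        stream += data[expected - seq:]   # drop any already-received overlap
        expected = max(expected, seq + len(data))
    return stream

# Segments arriving out of order are put back in byte-sequence order.
assert reassemble([(105, b"world"), (100, b"hello")]) == b"helloworld"
```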
Need to Know
There are three levels of TCP/IP knowledge. Those who administer a regional or national network must design a system of long distance phone lines, dedicated routing devices, and very large configuration files. They must know the IP numbers and physical locations of thousands of subscriber networks. They must also have a formal network monitor strategy to detect problems and respond quickly.
Each large company or university that subscribes to the Internet must have an intermediate level of network organization and expertise. A half dozen routers might be configured to connect several dozen departmental LANs in several buildings. All traffic outside the organization would typically be routed to a single connection to a regional network provider.
However, the end user can install TCP/IP on a personal computer without any knowledge of either the corporate or regional network. Three pieces of information are required:
1. The IP address assigned to this personal computer
2. The part of the IP address (the subnet mask) that distinguishes other machines on the same LAN (messages can be sent to them directly) from machines in other departments or elsewhere in the world (which are sent to a router machine)
3. The IP address of the router machine that connects this LAN to the rest of the world.
In the case of the PCLT server, the IP address is 130.132.59.234. Since the first three bytes designate this department, a “subnet mask” is defined as 255.255.255.0 (255 is the largest byte value and represents the number with all bits turned on). It is a Yale convention (which we recommend to everyone) that the router for each department have station number 1 within the department network. Thus the PCLT router is 130.132.59.1. The PCLT server is therefore configured with the values:
My IP address: 130.132.59.234
Subnet mask: 255.255.255.0
Default router: 130.132.59.1
The subnet mask tells the server that any other machine with an IP address beginning 130.132.59.* is on the same department LAN, so messages are sent to it directly. Any IP address beginning with a different value is accessed indirectly by sending the message through the router at 130.132.59.1 (which is on the departmental LAN).
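The direct-versus-routed decision described above can be sketched in a few lines. The addresses are the example values from this section, and Python's ipaddress module does the mask arithmetic; the next_hop helper is invented for illustration:

```python
import ipaddress

# Example host configuration from the section above
subnet_mask = "255.255.255.0"
default_router = ipaddress.ip_address("130.132.59.1")

# Build the local network from the host address and mask
local_net = ipaddress.ip_network(f"130.132.59.234/{subnet_mask}", strict=False)

def next_hop(destination: str):
    """Return None for direct delivery on the LAN, else the router address."""
    dest = ipaddress.ip_address(destination)
    return None if dest in local_net else default_router

print(next_hop("130.132.59.17"))   # same LAN: delivered directly (None)
print(next_hop("8.8.8.8"))         # elsewhere: forwarded to 130.132.59.1
```

Applying the mask to both addresses and comparing the network portions is exactly what the configured host does for every outgoing packet.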
File Transfer (FTP and SFTP)
One of the original intended uses of the Internet was to transfer files from one machine to another in a simple, secure, and reliable fashion, which was needed by scientific researchers. Today, file transfers represent downloads of music content, reports, and other data sets from other computer systems to a PC-based client. Until 1995, the majority of Internet traffic was file transfers. With all of this need, a protocol was necessary so that two computers could agree on how to send and receive data. As such, FTP is one of the older protocols.
FTP is an application-level protocol that operates over a wide range of lower level protocols. FTP is embedded in most operating systems and provides a method of transferring files from a sender to a receiver. Most FTP implementations are designed to operate both ways, sending and receiving, and can enable remote file operations over a TCP/IP connection. FTP clients are used to initiate transactions and FTP servers are used to respond to transaction requests. The actual request can be either to upload (send data from client to server) or download (send data from server to client).
Clients for FTP on a PC can range from an application program to the command line ftp program in Windows/DOS to most browsers. To open an FTP data store in a browser, you can enter ftp://url in the browser’s address field to indicate that you want to see the data associated with the URL via an FTP session—the browser handles the details. File transfers via FTP can be either binary or in text mode, but in either case, they are in plaintext across the network.
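As a small illustration of how a browser picks apart such a URL before opening the FTP session, Python's urllib.parse can be used; the host and path here are made up:

```python
from urllib.parse import urlparse

# Hypothetical FTP URL of the kind a browser would accept
url = "ftp://ftp.example.com/pub/readme.txt"
parts = urlparse(url)

print(parts.scheme)    # "ftp": tells the browser to open an FTP session
print(parts.hostname)  # "ftp.example.com": the server to contact
print(parts.path)      # "/pub/readme.txt": the file to request
```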
Blind FTP (Anonymous FTP)
To access resources on a computer, an account must be used to allow the operating system level authorization function to work. In the case of an FTP server, you may not wish to control who gets the information, so a standard account called anonymous exists.
This allows unlimited public access to the files and is commonly used when you want to have unlimited distribution. On a server, access permissions can be established to allow only downloading or only uploading or both, depending on the system’s function. As FTP can be used to allow anyone access to upload files to a server, it is considered a security risk and is commonly implemented on specialized servers isolated from other critical functions. As FTP servers can present a security risk, they are typically not permitted on workstations and are disabled on servers without need for this functionality.
HTTPS
HTTPS (HTTP over SSL, or HTTP Secure) is the use of Secure Sockets Layer (SSL) or Transport Layer Security (TLS) as a sub-layer under regular HTTP application layering. HTTPS encrypts and decrypts user page requests as well as the pages that are returned by the Web server. The use of HTTPS protects against eavesdropping and man-in-the-middle attacks. Netscape developed HTTPS.
HTTPS and SSL support the use of X.509 digital certificates from the server so that, if necessary, a user can authenticate the sender. Unless a different port is specified, HTTPS uses port 443 instead of HTTP port 80 in its interactions with the lower layer, TCP/IP.
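The client-side behavior described here, checking the server's X.509 certificate before any encrypted traffic flows, shows up in the defaults of Python's ssl module. This is a sketch of the settings a browser-like client starts from, not a full HTTPS client:

```python
import ssl

# Default client-side TLS context, similar to what an HTTPS layer uses
ctx = ssl.create_default_context()

# The server must present a certificate that validates against trusted CAs...
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
# ...and the certificate must match the hostname the client asked for
print(ctx.check_hostname)                    # True
```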
Suppose you visit a Web site to view their online catalog. When you’re ready to order, you will be given a Web page order form with a Uniform Resource Locator (URL) that starts with https://. When you click “Send,” to send the page back to the catalog retailer, your browser’s HTTPS layer will encrypt it. The acknowledgement you receive from the server will also travel in encrypted form, arrive with an https:// URL, and be decrypted for you by your browser’s HTTPS sub-layer.
The effectiveness of HTTPS can be limited by poor implementation of browser or server software or a lack of support for some algorithms. Furthermore, although HTTPS secures data as it travels between the server and the client, once the data is decrypted at its destination, it is only as secure as the host computer. According to security expert Gene Spafford, that level of security is analogous to “using an armored truck to transport rolls of pennies between someone on a park bench and someone doing business from a cardboard box.”
HTTPS is not to be confused with S-HTTP, a security-enhanced version of HTTP developed and proposed as a standard by EIT.
SFTP
FTP operates in a plaintext mode, so an eavesdropper can observe the data being passed. If confidential transfer is required, Secure FTP (SFTP) utilizes both the Secure Shell (SSH) protocol and FTP to accomplish this task. SFTP is an application program that encrypts both the commands and the data being passed and requires SFTP to be on both the client and the server. SFTP is not interoperable with standard FTP; the encrypted commands cannot be read by the standard FTP server program. To establish SFTP data transfers, the server must be enabled with the SFTP program, and then clients can access the server provided they have the correct credentials. One of the first SFTP operations is the same as that of FTP: an identification function that uses a username and an authorization function that uses a password. There is no anonymous SFTP account by definition, so access is established and controlled from the server using standard access control lists (ACLs), IDs, and passwords.
SCP (Session Control Protocol)
Several heavily used Internet applications such as FTP, Gopher, and HTTP use a protocol model in which every transaction requires a separate TCP connection. Since clients normally issue multiple requests to the same server, this model is quite inefficient, as it incurs all of the connection start-up costs for every single request. SCP (here, the Session Control Protocol, not to be confused with Secure Copy) is a simple protocol that lets a server and client have multiple conversations over a single TCP connection. The protocol is designed to be simple to implement, and is modeled after TCP.
SCP’s main service is dialogue control. This service allows either end of the connection to establish a virtual session over a single transport connection. SCP also allows a sender to indicate message boundaries, and allows a receiver to reject an incoming session.
Session ID allocation
Each session is allocated a session identifier. Session Identifiers below 1024 are reserved. Session IDs allocated by clients are even; those allocated by servers, odd.
A session is established by setting the SYN bit in the first message sent on that channel.
Sending a message with the FIN bit set ends a session. Each end of a connection may be closed independently.
Sending a message with the RST bit set may terminate a session. All pending data for that session should be discarded.
Sending a message with the PUSH bit set marks a message boundary. The boundary is set at the final octet in this message, including that octet.
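The session ID allocation rules above (IDs below 1024 reserved, even IDs for clients, odd IDs for servers) could be sketched as follows; the class name and interface are invented for illustration:

```python
import itertools

class SessionAllocator:
    """Hands out SCP session IDs: >= 1024, even for clients, odd for servers."""

    def __init__(self, is_client: bool):
        # First usable ID at or above 1024 with the correct parity,
        # then count upward in steps of 2 to preserve that parity
        start = 1024 if is_client else 1025
        self._ids = itertools.count(start, 2)

    def allocate(self) -> int:
        return next(self._ids)

client = SessionAllocator(is_client=True)
server = SessionAllocator(is_client=False)
print(client.allocate(), client.allocate())  # 1024 1026 (even)
print(server.allocate())                     # 1025 (odd)
```

Because the two sides draw from disjoint (even/odd) ranges, they can allocate IDs independently without ever colliding on the shared connection.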
ICMP
Internet Control Message Protocol (ICMP) messages are used by routers, intermediary devices, and hosts to communicate updates or error information to other routers, intermediary devices, or hosts.
Each ICMP message contains three fields that define its purpose and provide a checksum. They are TYPE, CODE, and CHECKSUM fields. The TYPE field identifies the ICMP message, the CODE field provides further information about the associated TYPE field, and the CHECKSUM provides a method for determining the integrity of the message.
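The three-field layout can be illustrated by building and verifying a minimal ICMP message. The checksum is the standard Internet checksum (one's-complement sum of 16-bit words); real Echo messages also carry identifier and sequence fields, which this sketch omits for brevity:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """Standard Internet checksum over 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length data
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total >> 16) + (total & 0xFFFF)  # fold carries back in
    total += total >> 16
    return ~total & 0xFFFF

def build_icmp(icmp_type: int, code: int, payload: bytes = b"") -> bytes:
    # Compute the checksum with the CHECKSUM field zeroed, then fill it in
    header = struct.pack("!BBH", icmp_type, code, 0)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBH", icmp_type, code, csum) + payload

# Echo Request is TYPE 8, CODE 0
packet = build_icmp(8, 0, b"ping")
icmp_type, code, csum = struct.unpack("!BBH", packet[:4])
print(icmp_type, code)        # 8 0
# A receiver verifies integrity by summing the whole message,
# checksum included; a valid message sums to zero
print(inet_checksum(packet))  # 0
```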
The TYPES defined are:
0 Echo Reply
3 Destination Unreachable
4 Source Quench
5 Redirect Message
8 Echo Request
11 Time Exceeded
12 Parameter Problem
13 Timestamp Request
14 Timestamp Reply
15 Information Request (No Longer Used)
16 Information Reply (No Longer Used)
17 Address Mask Request
18 Address Mask Reply
Echo Request & Echo Reply
Echo Request and Echo Reply are the ICMP types most commonly used to test IP connectivity, a function commonly known as PING. An Echo Request ICMP has a Type field of 8 and a Code field of 0. Echo Replies have a Type field of 0 and a Code field of 0.
Destination Unreachable
When a packet is undeliverable, a Destination Unreachable (Type 3) ICMP is generated. Type 3 ICMPs can have a Code value of 0 to 15:
0 Network Unreachable
1 Host Unreachable
2 Protocol Unreachable
3 Port Unreachable
4 Fragmentation needed and DF (Don’t Fragment) set
5 Source route failed
6 Destination Network unknown
7 Destination Host unknown
8 Source Host isolated
9 Communication with Destination Network Administratively Prohibited
10 Communication with Destination Host Administratively Prohibited
11 Network Unreachable for Type Of Service
12 Host Unreachable for Type Of Service
13 Communication Administratively Prohibited by Filtering
14 Host Precedence Violation
15 Precedence Cutoff in Effect
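A decoder for these codes is just a lookup table keyed on the CODE field; this sketch covers the first few entries from the list above:

```python
# Lookup table for ICMP Type 3 (Destination Unreachable) codes,
# abbreviated to the first six entries for illustration
UNREACHABLE_CODES = {
    0: "Network Unreachable",
    1: "Host Unreachable",
    2: "Protocol Unreachable",
    3: "Port Unreachable",
    4: "Fragmentation needed and DF set",
    5: "Source route failed",
}

def describe(icmp_type: int, code: int) -> str:
    """Human-readable description of a Type 3 ICMP; falls back gracefully."""
    if icmp_type == 3:
        return UNREACHABLE_CODES.get(code, f"Unreachable (code {code})")
    return f"ICMP type {icmp_type}"

print(describe(3, 3))  # Port Unreachable
```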
Source Quench
An ICMP Source Quench message has a Type field of 4 and a Code of 0. Source Quench messages are sent when the destination is unable to process traffic as fast as the source is sending it. The Source Quench ICMP tells the source to cut back the rate at which it is sending data. The destination will continue to generate Source Quench ICMPs until the source is sending at an acceptable speed.
Redirect Message
An intermediary device will generate an ICMP Redirect Message when it determines that a requested route can be reached either locally or through a better path.
Time Exceeded
If a router or host discards a packet due to a time-out, it will generate a Time Exceeded (Type 11) ICMP. The Time Exceeded ICMP will have a Code value of either 0 or 1. A Code 0 is generated when the hop count of a datagram is exceeded and the packet is discarded. A Code 1 is generated when the reassembly of a fragmented packet exceeds the time-out value.
Parameter Problem
When an intermediary device or host discards a datagram because it is unable to process it, a Parameter Problem (Type 12) ICMP is generated. Common causes of this ICMP are corrupt header information or missing options. If the reason for the ICMP is a missing required option, the ICMP will have a Code value of 1. If the Code value is 0, the Pointer field will contain the octet of the discarded datagram’s header where the error was detected.
Timestamp Request & Timestamp Reply
Timestamp Request and Timestamp Reply provide a rudimentary method for synchronizing the time maintained on different devices. The Request has a Type field of 13 and the Reply is Type 14. This method of time synchronization is crude and unreliable, so it is not heavily used.
Information Request & Information Reply
These ICMP types were originally designed to allow a booting host to discover an IP address. This method is obsolete and no longer used. The most common methods for IP address discovery are BOOTP (Bootstrap Protocol) and DHCP (Dynamic Host Configuration Protocol). BOOTP is defined by RFC 1542, and DHCP is defined by RFC 1541.
Address Mask Request & Address Mask Reply
A booting computer uses the Address Mask Request (ICMP Type 17) to determine the subnet mask in use on the local network. An intermediary device, or a computer acting as an intermediary device, will reply with an Address Mask Reply (ICMP Type 18).
IPv4 vs. IPv6
The “IP” in “IPv” stands for “Internet Protocol,” which refers to the communication protocol, or packet-transfer procedure, of the Internet. Every device that connects to the Internet uses a unique address called an IP address, which works much like a home address. Pieces of data, called “packets,” are transferred via the Internet between machines, which in turn gives us the fully functioning workings of the online community. In order for two machines or devices to communicate via the Internet, they must transfer these packets of data back and forth, and the packets cannot be transferred if the devices do not each have their own unique address.
Think of it basically as a home address. You can’t send mail correctly if you don’t list a proper return address: if the mail doesn’t reach its destination, it must have a way of returning to you, and the receiver would have no way of responding, since they would have no idea what address to reply to.
While the Internet does not necessarily return data packets that don’t reach their destination, like undelivered mail, proper protocol requires two devices to have unique addresses to even begin communicating. The “v” and number (“4” or “6”) in “IPv4 vs IPv6” refer to the protocol version number. “IPv4” is of course “Internet Protocol version 4,” and “IPv6” is “Internet Protocol version 6.”
IPv4 is the older, more widely supported version of the Internet addressing scheme. But there are no longer any free IPv4 addresses: all of them have been allocated. What does this mean exactly?
In a general sense, no unassigned IPv4 addresses remain, which taken literally would mean new users could not venture into cyberspace. The realistic situation is not quite as dire.
Cue IPv6, the latest Internet Protocol, or addressing scheme. The older IPv4 supports only 32-bit Internet addresses, which translates to 2^32 IP addresses available for assignment (about 4.29 billion total). IPv6 uses 128-bit addresses, allowing a maximum of 2^128 available addresses:
340,282,366,920,938,463,463,374,607,431,768,211,456; which, if you couldn’t already tell, is a very big number.
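The address-space arithmetic behind those two figures is easy to check:

```python
# Address-space sizes implied by the bit widths above
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"{ipv4_total:,}")  # 4,294,967,296 (about 4.29 billion)
print(f"{ipv6_total:,}")  # 340,282,366,920,938,463,463,374,607,431,768,211,456
```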
So basically the IPv4 protocol has run out of available addresses, which is why most websites and Internet servers are adopting the newer IPv6 protocol. In most cases, the two versions are compatible. This contrast between the two protocol versions is exactly what’s being referred to when “IPv4 vs IPv6” is mentioned.
Ports
Ports identify how a communication process occurs. Ports are special addresses that allow communication between hosts. A port number is added by the originator, indicating which port to communicate with on a server. If a server has a port defined and available for use, it will send back a message accepting the request. If the port isn’t valid, the server will refuse the connection. The Internet Assigned Numbers Authority (IANA) has defined a list of ports called well-known ports.
A port address or number is nothing more than a bit of additional information added either to the TCP or UDP message. This information is added in the header of the packet. The layer below it encapsulates the message with its header.
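As a concrete illustration, a UDP header is just 8 bytes: source port, destination port, length, and checksum, each a 16-bit big-endian field. This sketch packs and unpacks a hypothetical header for a DNS query (the source port is chosen arbitrarily from the ephemeral range):

```python
import struct

# Hypothetical UDP header: source port 49152, destination port 53 (DNS),
# length 8 (header only, no payload), checksum 0 (unset)
header = struct.pack("!HHHH", 49152, 53, 8, 0)
print(len(header))  # 8 bytes

src_port, dst_port, length, checksum = struct.unpack("!HHHH", header)
print(src_port, dst_port)  # 49152 53
```

The transport layer reads only these fields to decide which application gets the datagram; everything after the header is opaque payload handed up to that port's listener.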
Many of the services you’ll use in the normal course of using the Internet use the TCP port numbers identified in the following table. The other table identifies some of the more common, well-known UDP ports. You will note that some services use both TCP and UDP ports, whereas many use only one or the other.
TCP Port Number | Service
20 | FTP (data channel)
21 | FTP (control channel)
22 | SSH and SCP
49 | TACACS authentication service
80 | HTTP (used for the World Wide Web)
The early documentation for these ports specified that ports below 1024 were restricted to administrative uses. However, enforcement of this restriction has been voluntary, and it is creating problems for computer security professionals. As you can see, each of these ports potentially requires different security considerations, depending on the application to which it’s assigned. All of the ports allow access to your network; even if you establish a firewall, you must have these ports open if you want to provide email or web services.
When discussing networking, most experts refer to the seven-layer OSI model—long considered the foundation for how networking protocols should operate. This model is the most common one used, and the division between layers is well defined.
TCP/IP precedes the creation of the OSI model. Although it carries out the same operations, it does so with four layers instead of seven.
The TCP/IP suite is broken into four architectural layers:
Application layer
Host-to-Host, or Transport layer
Internet layer
Network Access layer (also known as the Network Interface layer or the Link layer)
Computers using TCP/IP use the existing physical connection between the systems. TCP/IP doesn’t concern itself with the network topology, or physical connections. The network controller that resides in a computer or host deals with the physical protocol, or topology. TCP/IP communicates with that controller and lets the controller worry about the network topology and physical connection.
In TCP/IP parlance, a computer on the network is a host. A host is any device connected to the network that runs a TCP/IP protocol suite, or stack. Figure 3.1 shows the four layers in a TCP/IP protocol stack. Note that this drawing includes the physical, or network topology. Although it isn’t part of TCP/IP, the topology is essential to conveying information on a network.
The four layers of TCP/IP have unique functions and methods for accomplishing work. Each layer talks to the layers that reside above and below it. Each layer also has its own rules and capabilities.
The Application Layer
The Application layer is the highest layer of the suite. It allows applications to access services or protocols to exchange data. Most programs, such as web browsers, interface with TCP/IP at this level. The most commonly used Application layer protocols are as follows:
Hypertext Transfer Protocol
Hypertext Transfer Protocol (HTTP) is the protocol used for web pages and the World Wide Web. HTTP applications use a standard language called Hypertext Markup Language (HTML). HTML files are normal text files that contain special coding that allows graphics, special fonts, and characters to be displayed by a web browser or other web-enabled applications. The default port is 80, and the URL begins with http://.
HTTP Secure (HTTPS) is the protocol used for “secure” web pages that users should see when they must enter personal information such as credit card numbers, passwords, and other identifiers. It combines HTTP with SSL/TLS to provide encrypted communication. The default port is 443, and the URL begins with https:// instead of http://. Netscape originally created the protocol for use with their browser, and it became an accepted standard with RFC 2818.
File Transfer Protocol
File Transfer Protocol (FTP) is an application that allows connections to FTP servers for file uploads and downloads. FTP is a common application that uses ports 20 and 21 by default. It is used to transfer files between hosts on the Internet but is inherently insecure. A number of options have been released to try to create a more secure protocol, including FTP over SSL (FTPS), which adds support for SSL cryptography, and SSH File Transfer Protocol (SFTP), which is also known as Secure FTP.
An alternative utility for copying files is Secure Copy (SCP), which uses port 22 by default and combines an old remote copy program (RCP) from the first days of TCP/IP with SSH. On the opposite end of the spectrum from a security standpoint is the Trivial File Transfer Protocol (TFTP), which can be configured to transfer files between hosts without any user interaction (unattended mode). It should be avoided anywhere there are more secure alternatives.
Simple Mail Transfer Protocol Simple Mail Transfer Protocol (SMTP) is the standard protocol for email communications. SMTP allows email clients and servers to communicate with each other for message delivery. The default port is 25.
Telnet Telnet is an interactive terminal emulation protocol. It allows a remote user to conduct an interactive session with a Telnet server. This session can appear to the client as if it were a local session.
Domain Name System Domain Name System (DNS) allows hosts to resolve hostnames to an Internet Protocol (IP) address. The default port used by name queries for this service is 53.
Remote Desktop Protocol The Remote Desktop Protocol (RDP) is becoming more common in the workplace, and it allows Windows-based terminal servers to run on port 3389 by default.
Simple Network Management Protocol Simple Network Management Protocol (SNMP) is a management tool that allows communications between network devices and a management console. Most routers, bridges, and intelligent hubs can communicate using SNMP.
Post Office Protocol Post Office Protocol (POP) is a protocol used for receiving email. It enables the implementation of advanced features, and it is a standard interface in many email servers. The default port for version 3 (POP3) is 110. In its place, many systems now use the Internet Message Access Protocol (IMAP), which uses port 143 by default. The primary difference between the two is that POP was originally created to move email to your client machine and not keep it on the server, whereas IMAP was intended to store the email on the server and allow you to access it from there. Although those remain default options, today you can configure POP not to delete from the server automatically and IMAP to do so. For this reason, most email providers allow you to use either POP or IMAP and even change between them.
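Many of the default ports quoted above are recorded in the operating system's services database, which Python can query directly (this assumes a standard /etc/services or equivalent is present, as it is on most systems):

```python
import socket

# Look up well-known ports by service name in the OS services database
print(socket.getservbyname("http", "tcp"))    # 80
print(socket.getservbyname("smtp", "tcp"))    # 25
print(socket.getservbyname("domain", "udp"))  # 53 (DNS)
print(socket.getservbyname("pop3", "tcp"))    # 110
```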
The Host-to-Host or Transport Layer
The Host-to-Host layer, also called the Transport layer, provides the Application layer with session and datagram communications services. The Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) operate at this layer. These two protocols provide a huge part of the functionality of the TCP/IP network:
TCP TCP is responsible for providing a reliable, one-to-one, connection-oriented session. TCP establishes a connection and ensures that the other end receives any packets sent. Two hosts communicate packet results with each other. TCP also ensures that packets are decoded and sequenced properly. This connection is persistent during the session. When the session ends, the connection is torn down.
UDP UDP provides an unreliable connectionless communication method between hosts. UDP is considered a best-effort protocol, but it’s considerably faster than TCP. The sessions don’t establish a synchronized session like the kind used in TCP, and UDP doesn’t guarantee error-free communications. The primary purpose of UDP is to send small packets of information. The application is responsible for acknowledging the correct reception of the data.
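The connectionless, no-handshake behavior of UDP can be seen in a few lines of socket code. This loopback exchange is a minimal sketch, not a production pattern; note that nothing here guarantees delivery:

```python
import socket

# A UDP "best effort" exchange over the loopback interface.
# There is no connection setup; each datagram is addressed individually.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", addr)   # no handshake, no delivery guarantee

data, peer = server.recvfrom(1024)
print(data)  # b'hello'

client.close()
server.close()
```

Contrast this with TCP, where the client would have to connect() and complete a three-way handshake before any data could move.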
The Internet Layer
The Internet layer is responsible for routing, IP addressing, and packaging. The Internet layer protocols accomplish most of the behind-the-scenes work in establishing the ability to exchange information between hosts. The following is an explanation of the four standard protocols of the Internet layer:
Internet Protocol Internet Protocol (IP) is a routable protocol that is responsible for IP addressing. IP also fragments and reassembles message packets. IP only routes information; it doesn’t verify it for accuracy. Accuracy checking is the responsibility of TCP. IP determines if a destination is known and, if so, routes the information to that destination. If the destination is unknown, IP sends the packet to the router, which sends it on.
Address Resolution Protocol Address Resolution Protocol (ARP) is responsible for resolving IP addresses to Network Interface layer addresses, including hardware addresses. ARP can resolve an IP address to a Media Access Control (MAC) address. MAC addresses are used to identify hardware network devices, such as a network interface card (NIC).
Internet Control Message Protocol Internet Control Message Protocol (ICMP) provides maintenance and reporting functions. The Ping program uses it. When a user wants to test connectivity to another host, they can enter the PING command with the IP address, and the user’s system will test connectivity to the other host’s system. If connectivity is good, ICMP will return data to the originating host. ICMP will also report if a destination is unreachable. Routers and other network devices report path information between hosts with ICMP.
The Network Access Layer
The lowest level of the TCP/IP suite is the Network Access (or Interface) layer. This layer is responsible for placing and removing packets on the physical network through communications with the network adapters in the host. This process allows TCP/IP to work with virtually any type of network topology or technology with little modification. If a new physical network topology were installed—say, a 10 Gb fiber Ethernet connection—TCP/IP would only need to know how to communicate with the network controller in order to function properly. TCP/IP can also communicate with more than one network topology simultaneously. This allows the protocol to be used in virtually any environment.