SY0-501 Section 5.2 Given a scenario, select the appropriate authentication, authorization or access control.
Identification vs. authentication vs. authorization
Identification is the process whereby a network element recognizes a valid user’s identity. Authentication is the process of verifying the claimed identity of a user. A user may be a person, a process, or a system (e.g., an operations system or another network element) that accesses a network element to perform tasks or process a call. A user identification code is a non-confidential auditable representation of a user. Information used to verify the claimed identity of a user can be based on a password, Personal Identification Number (PIN), smart card, biometrics, token, exchange of keys, etc. Authentication information should be kept confidential.
If users are not properly identified, the network element is potentially vulnerable to access by unauthorized users, and the open nature of ONA greatly increases that potential. If strong identification and authentication mechanisms are used, the risk that unauthorized users will gain access to a system is significantly decreased. Exploitation of the following vulnerabilities, as well as other identification and authentication vulnerabilities, can result in the threat of impersonating a user:
– Weak authentication methods are used;
– The potential exists for users to bypass the authentication mechanism;
– The confidentiality and integrity of stored authentication information are not preserved; and
– Authentication information transmitted over the network is not encrypted.
Computer intruders have been known to compromise PSN assets by gaining unauthorized access to network elements. The severity of the threat of impersonating a user depends on the level of privilege that is granted to the unauthorized user.
The Least-Privileged User Account Approach
A defense-in-depth strategy, with overlapping layers of security, is the best way to counter these threats, and the least-privileged user account (LUA) approach is an important part of that defensive strategy. The LUA approach ensures that users follow the principle of least privilege and always log on with limited user accounts. This strategy also aims to limit the use of administrative credentials to administrators, and then only for administrative tasks. The LUA approach can significantly mitigate the risks from malicious software and accidental incorrect configuration. However, because the LUA approach requires organizations to plan, test, and support limited access configurations, it can generate significant costs and challenges, including redevelopment of custom programs, changes to operational procedures, and deployment of additional tools. Utilities and guidance for working with limited user accounts can be difficult to find, so organizations often rely on third-party tools and guidance from Web logs and other unofficial sources; any such tools or instructions should be tested before they are deployed. As with all security issues, there is no perfect answer, and this approach is no exception.
Separation of Duties
Another fundamental approach to security is separation of duties. This concept is applicable to physical environments as well as network and host security. Separation of duty ensures that for any given task, more than one individual needs to be involved. The task is broken into different duties, each of which is accomplished by a separate individual. By implementing a task in this manner, no single individual can abuse the system for his or her own gain. This principle has been implemented in the business world, especially financial institutions, for many years. A simple example is a system in which one individual is required to place an order and a separate person is needed to authorize the purchase. While separation of duties provides a certain level of checks and balances, it is not without its own drawbacks. Chief among these is the cost required to accomplish the task. This cost is manifested in both time and money. More than one individual is required when a single person could accomplish the task, thus potentially increasing the cost of the task. In addition, with more than one individual involved, a certain delay can be expected, as the task must proceed through its various steps.
Discretionary Access Control
Both discretionary access control and mandatory access control are terms originally used by the military to describe two different approaches to controlling an individual’s access to a system. As defined by the “Orange Book,” a Department of Defense document that at one time was the standard for describing what constituted a trusted computing system, DACs are “a means of restricting access to objects based on the identity of subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject.” While this might appear to be confusing “government-speak,” the principle is rather simple. In systems that employ DACs, the owner of an object can decide which other subjects can have access to the object and what specific access they can have. One common method to accomplish this is the permission bits used in UNIX-based systems. The owner of a file can specify what permissions (read/write/execute) members in the same group can have and also what permissions all others can have. ACLs are also a common mechanism used to implement DAC.
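As a minimal sketch of the UNIX permission bits just described, the owner of a file can set a mode that grants read/write to the owner, read-only to the group, and nothing to all others. This example uses Python on a POSIX-style filesystem; the temporary file and the chosen mode are purely illustrative.

```python
import os
import stat
import tempfile

# Discretionary access control sketch: the file's *owner* decides what
# the group and everyone else may do with the object, by setting the
# permission bits on the file.
fd, path = tempfile.mkstemp()
os.close(fd)

# Owner: read/write; group: read-only; others: no access (mode 0o640).
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o640

os.remove(path)
```

Because this decision rests with the owner, the owner can also pass the permission along, for example by widening the group bits, which is exactly the "discretionary" property the Orange Book definition describes.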
Mandatory Access Control
A less frequently employed system for restricting access is mandatory access control. This system, generally used only in environments in which different levels of security classifications exist, is much more restrictive regarding what a user is allowed to do. Referring to the “Orange Book,” a mandatory access control is “a means of restricting access to objects based on the sensitivity (as represented by a label) of the information contained in the objects and the formal authorization (i.e., clearance) of subjects to access information of such sensitivity.” In this case, the owner or subject can’t determine whether access is to be granted to another subject; it is the job of the operating system to decide.
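The label-versus-clearance check can be sketched in a few lines. This is an illustrative Python model, not any real operating system's implementation; the level names and their ordering are assumptions for the example.

```python
# Mandatory access control sketch: the *system*, not the object's owner,
# decides access by comparing the object's sensitivity label with the
# subject's clearance. Level names and ordering are illustrative.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def mac_read_allowed(subject_clearance: str, object_label: str) -> bool:
    """A subject may read an object only if its clearance is at least
    as high as the object's sensitivity label."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]

print(mac_read_allowed("secret", "confidential"))   # True
print(mac_read_allowed("confidential", "secret"))   # False
```

Note that neither the subject nor the object's owner appears in the decision function's logic beyond their labels: changing the outcome requires changing a clearance or a label, which only the system (its administrators) can do.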
Role-Based Access Control
ACLs can be cumbersome and can take time to administer properly. Another access control mechanism that has been attracting increased attention is the role-based access control (RBAC). In this scheme, instead of each user being assigned specific access permissions for the objects associated with the computer system or network, each user is assigned a set of roles that he or she may perform. The roles are in turn assigned the access permissions necessary to perform the tasks associated with the role. Users will thus be granted permissions to objects in terms of the specific duties they must perform—not according to a security classification associated with individual objects.
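The indirection from users through roles to permissions can be sketched as two small mappings. All of the user, role, and permission names below are hypothetical, chosen only to illustrate the structure.

```python
# Role-based access control sketch: permissions attach to roles, and
# users acquire permissions only by holding roles. Names are invented.
ROLE_PERMISSIONS = {
    "payroll_clerk": {"read_timesheets", "write_payroll"},
    "auditor": {"read_timesheets", "read_payroll"},
}

USER_ROLES = {
    "alice": {"payroll_clerk"},
    "bob": {"auditor"},
}

def has_permission(user: str, permission: str) -> bool:
    """A user holds a permission if any of their roles grants it."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(has_permission("alice", "write_payroll"))  # True
print(has_permission("bob", "write_payroll"))    # False
```

When a person changes duties, the administrator changes only their role assignments; none of the per-object permissions need to be touched, which is the administrative saving RBAC offers over raw ACLs.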
Rule-Based Access Control
The first thing that you might notice is the ambiguity introduced by this access control method also using the acronym RBAC. Rule-based access control again uses objects such as ACLs to help determine whether access should be granted. In this case, a series of rules is contained in the ACL, and the determination of whether to grant access is made based on these rules. An example of such a rule is one that states that no employee may have access to the payroll file after hours or on weekends. As with MAC, users are not allowed to change the access rules; administrators are relied on for this. Rule-based access control can actually be used in addition to, or as a method of implementing, other access control methods. For example, MAC methods can utilize a rule-based approach for implementation.
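The payroll rule from the paragraph above can be sketched as one entry in an ordered rule list, where the first rule that applies decides the outcome. The rule format, the user names, and the fall-through behavior are all assumptions made for this illustration.

```python
import datetime

# Rule-based access control sketch: evaluate an ordered list of rules
# for each request; the first rule that applies decides.
def payroll_rule(user, resource, when):
    """Deny payroll access after hours or on weekends (per the text)."""
    if resource == "payroll":
        after_hours = when.hour < 9 or when.hour >= 17
        weekend = when.weekday() >= 5  # Saturday=5, Sunday=6
        if after_hours or weekend:
            return "deny"
    return None  # rule does not apply; fall through to the next rule

def allow_all_rule(user, resource, when):
    return "allow"

RULES = [payroll_rule, allow_all_rule]

def check_access(user, resource, when):
    for rule in RULES:
        decision = rule(user, resource, when)
        if decision is not None:
            return decision
    return "deny"

# A Tuesday at 10:00 vs. a Saturday at 10:00 (hypothetical timestamps).
print(check_access("carol", "payroll", datetime.datetime(2024, 1, 2, 10)))  # allow
print(check_access("carol", "payroll", datetime.datetime(2024, 1, 6, 10)))  # deny
```

Only an administrator would edit `RULES`; the users being checked have no way to alter them, mirroring the MAC-like property the text notes.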
Time of day restrictions
Some systems allow for the specification of time of day restrictions in their access control policies. This means that a user’s access to the system or to specific resources can be restricted to certain times of the day and days of the week. If a user normally accesses certain resources during normal business hours, an attempt to access these resources outside this period (either at night or on the weekend) might indicate that an attacker has gained access to the account. Specifying time of day restrictions can also serve as a mechanism to enforce internal controls on critical or sensitive resources. An obvious drawback to enforcing time of day restrictions is that a user can’t work outside of normal hours to “catch up” on tasks. As with all security policies, usability and security must be balanced in this policy decision.
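A per-account logon window can be sketched as a schedule lookup performed at authentication time. The account name and the Monday-to-Friday, 08:00-18:00 schedule below are invented for the example.

```python
import datetime

# Time-of-day restriction sketch: each account lists the weekdays and
# the hour range during which logons are accepted.
LOGON_WINDOWS = {
    # user: (allowed weekdays, first allowed hour, first disallowed hour)
    "dave": ({0, 1, 2, 3, 4}, 8, 18),  # Mon-Fri, 08:00-18:00
}

def logon_permitted(user: str, when: datetime.datetime) -> bool:
    window = LOGON_WINDOWS.get(user)
    if window is None:
        return False  # no schedule on file: refuse the logon
    days, start, end = window
    return when.weekday() in days and start <= when.hour < end

print(logon_permitted("dave", datetime.datetime(2024, 1, 3, 9)))  # True (a Wednesday, 09:00)
print(logon_permitted("dave", datetime.datetime(2024, 1, 7, 9)))  # False (a Sunday)
```

A real deployment would also log the refused attempts, since an out-of-hours attempt is exactly the indicator of account compromise the text describes.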
A smart card is a plastic card about the size of a credit card, with an embedded microchip that can be loaded with data, used for telephone calling, electronic cash payments, and other applications, and then periodically refreshed for additional use. Currently or soon, you may be able to use a smart card to:
– Dial a connection on a mobile telephone and be charged on a per-call basis
– Establish your identity when logging on to an Internet access provider or to an online bank
– Pay for parking at parking meters or to get on subways, trains, or buses
– Give hospitals or doctors personal data without filling out a form
– Make small purchases at electronic stores on the Web (a kind of cybercash)
– Buy gasoline at a gasoline station
Over a billion smart cards are already in use. Currently, Europe is the region where they are most used. Ovum, a research firm, predicts that 2.7 billion smart cards will be shipped annually by 2003. Another study forecasts a $26.5 billion market for recharging smart cards by 2005. Compaq and Hewlett-Packard are reportedly working on keyboards that include smart card slots that can be read like bank credit cards. The hardware for making the cards and the devices that can read them is currently made principally by Bull, Gemplus and Schlumberger.
What has become the Internet was originally designed as a friendly environment where everybody agreed to abide by the rules implemented in the various protocols. Today, the Internet is no longer the friendly playground of researchers that it once was. This has resulted in different approaches that might at first seem less than friendly but that are required for security purposes. One of these approaches is implicit deny. Frequently in the network world, decisions concerning access must be made. Often a series of rules will be used to determine whether or not to allow access. If a particular situation is not covered by any of the other rules, the implicit deny approach states that access should not be granted. In other words, if no rule would allow access, then access should not be granted. Implicit deny applies to situations involving both authorization and access.
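The implicit deny principle can be sketched as the default branch of a rule-matching loop, in the style of a firewall rule list. The rule format (a source-address prefix and a destination port) and the addresses used are assumptions for the illustration.

```python
# Implicit deny sketch: rules may explicitly allow or deny a request,
# and anything no rule covers is denied by default. Rule format and
# addresses are invented for illustration.
RULES = [
    ("10.0.0.", 22, "allow"),   # admin subnet may reach SSH
    ("10.0.0.", 23, "deny"),    # but not telnet
]

def firewall_decision(source_ip: str, port: int) -> str:
    for prefix, rule_port, action in RULES:
        if source_ip.startswith(prefix) and port == rule_port:
            return action
    return "deny"  # implicit deny: no rule matched, so refuse access

print(firewall_decision("10.0.0.5", 22))     # allow
print(firewall_decision("192.168.1.9", 22))  # deny (no matching rule)
```

The important line is the final `return "deny"`: the policy never needs a rule for every possible request, because anything the rule list does not explicitly address falls through to denial.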
The term access control describes a variety of protection schemes. It sometimes refers to all security features used to prevent unauthorized access to a computer system or network. In this sense, it may be confused with authentication. More properly, access is the ability of a subject (such as an individual or a process running on a computer system) to interact with an object (such as a file or hardware device). Authentication, on the other hand, deals with verifying the identity of a subject.
To understand the difference, consider the example of an individual attempting to log in to a computer system or network. Authentication is the process used to verify to the computer system or network that the individual is who he claims to be. The most common method to do this is through the use of a user ID and password. Once the individual has verified his identity, access controls regulate what the individual can actually do on the system: just because a person is granted entry to the system does not mean that he should have access to all data the system contains.
Consider another example. When you go to your bank to make a withdrawal, the teller at the window will verify that you are indeed who you claim to be by asking you to provide some form of identification with your picture on it, such as your driver’s license. You might also have to provide your bank account number. Once the teller verifies your identity, you will have proved that you are a valid (authorized) customer of this bank. This does not, however, mean that you can view all information that the bank protects, such as your neighbor’s account balance. The teller will control what information and funds you can access, and will grant you access only to information you are authorized to see. In this example, your identification and bank account number serve as your method of authentication, and the teller serves as the access control mechanism.
In computer systems and networks, access controls can be implemented in several ways. An access control matrix provides the simplest framework for illustrating the process. In this matrix, the system keeps track of two processes, two files, and one hardware device. Process 1 can read both File 1 and File 2 but can write only to File 1. Process 1 cannot access Process 2, but Process 2 can execute Process 1. Both processes have the ability to write to the printer.
While simple to understand, the access control matrix is seldom used in computer systems because it is extremely costly in terms of storage space and processing. Imagine the size of an access control matrix for a large network with hundreds of users and thousands of files. The actual mechanics of how access controls are implemented in a system varies, though access control lists (ACLs) are common. An ACL is nothing more than a list that contains the subjects that have access rights to a particular object. The list identifies not only the subject but the specific access granted to the subject for the object. Typical types of access include read, write, and execute as indicated in the example access control matrix. No matter what specific mechanism is used to implement access controls in a computer system or network, the controls should be based on a specific model of access.
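The matrix described above, and the way an ACL corresponds to one of its columns, can be written out directly. This is an illustrative Python rendering of the example; a real system would not store access rights this way, for exactly the storage-cost reasons the text gives.

```python
# The access control matrix from the example, keyed by (subject, object).
MATRIX = {
    ("Process 1", "File 1"): {"read", "write"},
    ("Process 1", "File 2"): {"read"},
    ("Process 2", "Process 1"): {"execute"},
    ("Process 1", "Printer"): {"write"},
    ("Process 2", "Printer"): {"write"},
}

def acl_for(obj: str) -> dict:
    """An ACL is one object's column of the matrix: the subjects that
    may access the object and the specific rights each one holds."""
    return {
        subject: rights
        for (subject, o), rights in MATRIX.items()
        if o == obj
    }

print(acl_for("File 1"))   # Process 1 may read and write File 1
print(acl_for("Printer"))  # both processes may write to the printer
```

Storing one ACL per object keeps only the non-empty cells of the matrix, which is why ACLs are the common mechanism in practice while the full matrix is not.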
Single sign on
Single sign-on (SSO) is a mechanism whereby a single act of user authentication and authorization can permit a user to access all computers and systems on which he has access permission, without the need to enter multiple passwords. Single sign-on reduces human error, a major component of system failures, and is therefore highly desirable, but it is difficult to implement.
The SSO feature maintains a mapping between a user, or group of users, and the credentials (username and password) needed to access a particular data source. This mapping is referred to as an Application Definition (or “app def” for short). Only server administrators can create and modify app defs, using the browser-based Central Admin UI. When the DataFormWebPart needs to access a remote data source using SSO, it calls the SSO API to retrieve the necessary credentials for the given app def. If they happen to be Windows credentials, the web part temporarily “logs in” with those credentials, and then attempts to connect to the data source. This means the Windows credentials are only making one hop – from the web server to the database server – and not two.
SPD can create Data Views using both kinds of SSO app defs:
A Group app def is used to let everyone in a domain group access a database using a single account. For example, you might have a special account for database use that only has read-only permissions on a few tables; SSO lets you force everyone in your workgroup to connect to the database with the limited permission account. An Individual app def lets users provide their own account information (username and password). The first time a connection is attempted and the end user’s credentials are not already in SSO, the Data View will redirect to a web page to collect and store them. Subsequent attempts will reuse stored credentials without prompting. Either type of app def can be used to store Windows credentials (from a network domain account, in the format “domain\account”), or basic (non-Windows) credentials. In the case of SQL Server, you can use SSO to establish either Windows auth connections or SQL auth connections. You can also use SSO to access web services that require a specific username and password, or Windows authentication.
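The two kinds of app def can be sketched as two credential mappings consulted in turn. The app def names, accounts, and storage layout below are invented for illustration; a real SSO service keeps these credentials encrypted, not in plain dictionaries.

```python
# SSO app def sketch: a Group app def maps everyone to one shared
# account; an Individual app def maps each user to their own stored
# credentials. All names here are hypothetical.
GROUP_APPDEFS = {
    # app def -> shared credentials for every member of the group
    "hr_database": ("DOMAIN\\hr_readonly", "********"),
}

INDIVIDUAL_APPDEFS = {
    # (app def, user) -> that user's own stored credentials
    ("sales_database", "alice"): ("DOMAIN\\alice", "********"),
}

def credentials_for(app_def: str, user: str):
    if app_def in GROUP_APPDEFS:
        return GROUP_APPDEFS[app_def]
    creds = INDIVIDUAL_APPDEFS.get((app_def, user))
    if creds is None:
        # In the real feature, the user would be redirected to a page
        # that collects and stores their credentials for next time.
        raise KeyError("no stored credentials; prompt the user")
    return creds

print(credentials_for("hr_database", "bob"))       # shared read-only account
print(credentials_for("sales_database", "alice"))  # alice's own account
```

This mirrors the single-hop property the text describes: the web tier retrieves the mapped credentials and connects directly, rather than forwarding the end user's own logon another hop.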
When two or more access methods are included as part of the authentication process, you’re implementing a multifactor authentication system. A system that uses smart cards and passwords is referred to as a two-factor authentication system, because it requires both a smart card and a logon password. A multifactor system can be a two-factor system, a three-factor system, and so on; as long as more than one factor is involved in the authentication process, it is considered a multifactor system. For obvious reasons, the factors employed should not be from the same category: although requiring the user to enter two sets of username/password combinations does make gaining system access more difficult, it is much preferable to pair a single username/password combination with a biometric identifier or other security check. The most basic form of authentication is known as single-factor authentication (SFA), because only one type of authentication is checked. SFA is most often implemented as the traditional username/password combination, which serves as a unique identifier for a logon process. Here’s a synopsis of how SFA works: when users sit down in front of a computer system, the first thing the security system requires is that they establish who they are. Identification is typically confirmed through a logon process; most operating systems use a user ID (username) and password to accomplish this. These values can be sent across the connection as plain text, or they can be encrypted. The logon process identifies to the operating system, and possibly the network, that you are who you say you are. Figure 4.1 illustrates the logon and password process. Notice that the operating system compares this information to the stored information from the security processor, and it either accepts or denies the logon attempt. The operating system might then establish privileges or permissions based on stored data about that particular user ID.
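The rule that the factors must come from different categories can be sketched as a simple check: classify each presented factor and require at least two distinct categories. The factor names and category labels are illustrative.

```python
# Multifactor sketch: two passwords are still one factor, because both
# are "something you know". Factor names and categories are illustrative.
FACTOR_CATEGORY = {
    "password": "know",
    "pin": "know",
    "smart_card": "have",
    "token": "have",
    "fingerprint": "are",
}

def is_multifactor(presented: list) -> bool:
    """True only if the presented factors span >= 2 distinct categories."""
    categories = {FACTOR_CATEGORY[f] for f in presented}
    return len(categories) >= 2

print(is_multifactor(["password", "smart_card"]))  # True: know + have
print(is_multifactor(["password", "pin"]))         # False: both are "know"
```

This captures why a smart card plus a password is genuine two-factor authentication, while two username/password pairs merely repeat the same category.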
Whenever two or more parties authenticate each other, it is known as mutual authentication. A client may authenticate to a server, and the server may authenticate to the client, when there is a need to establish a secure session between the two and employ encryption. Mutual authentication ensures that the client is not unwittingly connecting to, and providing its credentials to, a rogue server, which could then use those credentials to access the real server and steal data from it.
Identification Understanding the difference between identification and authentication is critical to correctly answering access control questions on the Security+ exam. Identification means finding out who someone is. Authentication is a mechanism of verifying that identification. Put another way, identification is claiming an identity; authentication is proving it.
In the physical world, the best analogy would be that any person can claim to be anyone (identification). To prove it (authentication), however, that person needs to provide some evidence, such as a driver’s license, passport, and so forth.
Authentication systems or methods are based on one or more of these five factors:
– Something you know, such as a password or PIN
– Something you have, such as a smart card, token, or identification device
– Something you are, such as your fingerprints or retinal pattern (often called biometrics)
– Something you do, such as an action you must take to complete authentication
– Somewhere you are (this is based on geo-location)
A federation is a collection of computer networks that agree on standards of operation, such as security standards. Normally, these are networks that are related in some way. In some cases, it could be an industry association that establishes such standards. Another example of a federation would be an instant messaging (IM) federation. In this scenario, multiple IM providers form common communication standards, thus allowing users on different platforms with different clients to communicate freely. In other situations, a group of partners might elect to establish common security and communication standards, thus forming a federation. This would facilitate communication between employees in each of the various partners. A federated identity is a means of linking a user’s identity with their privileges in a manner that can be used across business boundaries (for example, Microsoft Passport or Google checkout). This allows a user to have a single identity that they can use across different business units and perhaps even entirely different businesses.
The word transitive means involving transition; keep this in mind as you learn how transitive access problems occur. With transitive access, one party (A) trusts another party (B). If the second party (B) trusts another party (C), then a relationship can exist whereby the first party (A) also trusts the third party (C). In early operating systems, this process was often exploited. In current operating systems, such as Windows Server 2012, the problems with transitive access are addressed by creating transitive trusts, a type of relationship that can exist between domains (the opposite is a non-transitive trust). When the trust relationship is transitive, the relationship between party (A) and party (B) flows through as described earlier (for instance, A now trusts C). In all versions of Active Directory, the default is that all domains in a forest trust each other with two-way, transitive trust relationships.
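The A-trusts-B, B-trusts-C chain above amounts to following trust edges until no new domains appear. The sketch below computes that reachable set for an invented one-way trust graph; real Active Directory trusts are two-way by default and are evaluated by the directory service, not by code like this.

```python
# Transitive trust sketch: starting from one domain's direct trusts,
# keep following trust edges until no new domains are added.
DIRECT_TRUSTS = {
    "A": {"B"},   # A directly trusts only B
    "B": {"C"},   # B directly trusts C
    "C": set(),
}

def trusted_domains(start: str) -> set:
    """All domains `start` ends up trusting when trust is transitive."""
    reached = set()
    frontier = [start]
    while frontier:
        domain = frontier.pop()
        for trusted in DIRECT_TRUSTS.get(domain, set()):
            if trusted not in reached:
                reached.add(trusted)
                frontier.append(trusted)
    return reached

print(sorted(trusted_domains("A")))  # ['B', 'C']: A trusts C through B
```

With a non-transitive trust, the loop would stop after the direct edge: A would trust B but never reach C.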