SY0-501 Section 4.4- Implement the appropriate controls to ensure data security.
Cloud Storage
The first couple of PCs that this author owned booted from media (tape with one and floppies with another) and did not include hard drives. After saving up for quite a while, I bought and installed my first hard drive, costing more than $600. It had a capacity of 20 MB, and I could not fathom what I would possibly do with all of that space. Today that number is so small, it's laughable. The trend for both individuals and enterprises has been to collect and store as much data as possible. This has led to large local hard drives, DAS (direct attached storage), NAS (network-attached storage), SANs (storage area networks), and now the cloud. Just as the cloud holds such promise for running applications, balancing loads, and a plethora of other options, it also offers the ability to store more and more data and to let a provider worry about scaling issues instead of local administrators.
A storage area network (SAN) is a separate network set up to appear as a server to the main organizational network. For example, multiple servers, network storage devices, and switches might be configured to store several terabytes of data. This mini-network has one purpose: to store data. It is then connected to the main organizational network, and users can access the data in the SAN without being concerned about the complexities involved in it. SANs usually have redundant servers, and they are connected via high-speed fiber-optic connections or iSCSI running on copper. Security for a SAN is similar to that for any server, with the exception of network isolation. There needs to be a firewall, perhaps an intrusion detection system (IDS), user access control, and all of the other security features that you would expect on many networks. SANs are primarily used when there is a large amount of data to store that must be accessible to users on the network.
Increasingly, organizations have to store extremely large amounts of data, often many terabytes. This is sometimes referred to simply as Big Data. This data normally cannot fit on a single server, and it is instead stored on a storage area network (SAN). One of the issues with Big Data is that it reaches a size where it becomes difficult to search, to store, to share, to back up, and to truly manage.
Data encryption refers to mathematical calculations and algorithmic schemes that transform plaintext into ciphertext, a form that is unreadable to unauthorized parties. The recipient of an encrypted message uses a key, which drives the algorithm that decrypts the data, transforming it back into the original plaintext. Before the Internet, the public seldom used data encryption, as it was more of a military security tool. With the prevalence of online shopping, banking, and other services, even basic home users are now aware of data encryption. Today's web browsers automatically encrypt text when making a connection to a secure server. This prevents intruders from listening in on private communications. Even if they are able to capture the message, encryption allows them to see only scrambled text, or what many call unreadable gibberish. Upon arrival, the data is decrypted, allowing the intended recipient to view the message in its original form.
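The idea that a key drives the transform in both directions can be illustrated with a toy XOR stream cipher. This is purely illustrative and not a vetted cipher; the hash-counter keystream construction below is my own sketch, not an exam algorithm:

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    # Derive an endless pseudo-random keystream by hashing key || counter.
    # Toy construction for illustration only -- NOT a real cipher design.
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

message = b"attack at dawn"
ciphertext = xor_cipher(b"shared secret", message)
recovered = xor_cipher(b"shared secret", ciphertext)
assert recovered == message
```

Without the key, the ciphertext is just the "unreadable gibberish" the text describes; with the key, the same operation restores the plaintext.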
Types of Data Encryption
There are many different types of data encryption, but not all are reliable. In the beginning, 64-bit encryption was thought to be strong, but it was proven inadequate once 128-bit solutions arrived. AES (Advanced Encryption Standard) is the current standard and supports key sizes of 128, 192, and 256 bits. In general, the more powerful the attacker's computer, the better chance it has of breaking a data encryption scheme.
Data encryption schemes generally fall into two categories: symmetric and asymmetric. AES, DES, and Blowfish use symmetric key algorithms. Each system uses a single key, which is shared between the sender and the recipient; this key both encrypts and decrypts the data. With asymmetric schemes such as Diffie-Hellman and RSA, a pair of keys is created and assigned: a private key and a public key. The public key can be known by anyone and used to encrypt data that will be sent to the owner. Once the message is encrypted, only the owner of the private key can decrypt it. Asymmetric encryption is said to be somewhat more secure than symmetric encryption because the private key is never shared. Protocols such as SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security) use strong encryption to keep data private, but they cannot always ensure security. Websites using this type of data encryption can be verified by checking the digital signature on their certificate, which should be validated by an approved CA (certificate authority).
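The Diffie-Hellman exchange mentioned above lets two parties agree on a shared secret without ever transmitting it. A sketch with textbook-sized numbers follows; real deployments use primes of 2048 bits or more, and the specific values here are the classic classroom parameters, not production ones:

```python
import secrets

p, g = 23, 5  # textbook-sized prime and generator; real groups are >= 2048 bits

a = secrets.randbelow(p - 2) + 1   # Alice's private key (kept secret)
b = secrets.randbelow(p - 2) + 1   # Bob's private key (kept secret)

A = pow(g, a, p)  # Alice's public value, sent in the clear
B = pow(g, b, p)  # Bob's public value, sent in the clear

# Each side combines its own private key with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # both arrive at the same secret
```

An eavesdropper who sees only `p`, `g`, `A`, and `B` cannot feasibly recover the shared secret at realistic key sizes; that secret is then typically fed into a symmetric cipher such as AES.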
Hardware-Based Encryption Devices
In addition to software-based encryption, hardware-based encryption can be applied. Within the advanced configuration settings on some BIOS configuration menus, for example, you can choose to enable or disable TPM. A Trusted Platform Module (TPM) is a chip that can store cryptographic keys, passwords, or certificates and can assist with key generation. TPM can be used to protect smart phones and devices other than PCs as well. It can also be used to hold the values used with whole disk encryption such as BitLocker. BitLocker can be used with or without TPM; it is much more secure when coupled with TPM (and that is preferable), but it does not require it.
The TPM chip may be installed on the motherboard; when it is, in many cases it is set to off in the BIOS by default. In addition to TPM, HSM (Hardware Security Module) is also a cryptoprocessor that can be used to enhance security. HSM is commonly used with PKI systems to augment security with CAs. As opposed to being mounted on the motherboard like TPMs, HSMs are traditionally PCI adapters.
Data in-transit/Data at-rest, Data in-use
Protecting Data at Rest:
To protect data at rest, you must either (a) encrypt the entire contents of the storage media, or (b) have complete knowledge of how every system and user organizes data when writing to the media so that you can encrypt just the data that needs to be protected. Option (a) is FDE (full disk encryption). Option (b) can be accomplished by any one of a number of other solutions, but it is very difficult: even if you know how the system stores everything, you don't know (or have to enforce through restriction) how the user may store something, so you must disable his or her ability to store anything "sensitive" on the media in a location that is not encrypted. Furthermore, (c) you must enforce strong keys/passwords, and (d) you must prevent the user from storing the password on the media. Finally, remember that (e) for detachable media, including laptop hard drives, the USER is considered the "node associated with the media," so your data really can't be considered secure: the user is the node, and the user has the key. (Unless, I suppose, you have the ability to revoke the key remotely, preventing Disgruntled Joe from taking a laptop out and then quitting with a copy of your code base already in his possession.)
By far, (c)/(d)/(e) are going to be the hardest. A suitably strong password that prevents a dictionary attack is going to be burdensome to the user to retain, so they’re either going to forget it, or write it down and stickynote it to the monitor, etc. The only way to mitigate this risk effectively is to *limit access to the data in the first place* – people look at FDE as a “silver bullet” to allow them to say, “We can now allow our vice president to take a copy of the financial database home on his laptop, because it is encrypted, so we don’t have to worry if the laptop is stolen”, but that assumes that (c)/(d)/(e) aren’t problems, which is screwy. Sensitive data shouldn’t leave the house, people. If the VP wants access to the data because it makes his life easier, say “No, you need to be in the office to get access to that,” or make sure ahead of time that everyone at the CEO/Board of Directors level knows that you have *no real data protection* – your data is only as secure as everyone is trustworthy. And while I may trust a particular worker to not read data to a corporate rival over the phone, I simply don’t trust any number of workers > 2 to *not put their password on a sticky note on the screen of their laptop*.
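On point (c), one standard way to turn a user-memorable password into a strong encryption key is a slow key-derivation function such as PBKDF2, which makes dictionary attacks expensive. A sketch using Python's standard library follows; the iteration count is my own assumption based on current guidance, not a value from the text:

```python
import hashlib
import os

def derive_key(password, salt=None, iterations=600_000):
    """Derive a 256-bit key from a password using PBKDF2-HMAC-SHA256.

    A random salt defeats precomputed (rainbow-table) dictionary attacks;
    the high iteration count slows down brute-force guessing.
    """
    if salt is None:
        salt = os.urandom(16)  # fresh random salt for each new password
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, key

salt, key = derive_key("correct horse battery staple")
_, same_key = derive_key("correct horse battery staple", salt)
assert key == same_key   # same password + same salt -> same key
```

Note that this only hardens the key itself; it does nothing about (d) and (e), the sticky-note and disgruntled-user problems, which remain policy issues rather than technical ones.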
Protecting Data in Use:
This is basically impossible in today's OS market; anyone who claims to have "secure data in use" is full of baloney. The best you can do here is mitigate the attack vectors. If you use FDE, you solve some of the problems because the swap space is encrypted, which closes one attack vector, or you can get rid of swap altogether (and make sure you're not using NVRAM). However, if you look at the various ways that data in use can be mishandled, virtually all of the major vulnerabilities are exploitable at the OS level, which is something that you've more or less outsourced to your OS vendor. Your only mitigation here is to lock down the OS as much as you possibly can (including using FDE to protect the OS files at rest!), and this is often way more trouble than it is worth, given that even if you could cover all of your bases, it doesn't protect you from Kevin Mitnick. From a cost/benefit analysis, aside from taking basic steps to secure an operating system, you're wasting money: locking down Windows to the point of near-unusability isn't going to protect you from a zero-day IE exploit.
The number one way to prevent OS-level exploits is to use a web proxy at your border and to disallow all attachments via email. Anybody who can successfully sell that second measure, please let me know how you did it. If you can't do those two things, though, spending more than a minimal effort locking down the host OS is largely a waste of time.
Protecting Data in Transit:
Here’s where S/MIME and SSL and IPSec and all that good stuff comes in. Actually, next to protecting Data at Rest, protecting Data in Transit is probably one of the easier tasks to accomplish at the present time, except for the fact that both hosts have to be able to protect the Data in Use, and we illustrated in the previous paragraph how hard that is. Yes, you can man-in-the-middle data in transit in many, many instances in today’s networked world, but we already have many of the technologies to mitigate this; we just don’t deploy them properly.
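As a small illustration of deploying these technologies properly, Python's standard `ssl` module builds a TLS context that already verifies server certificates and hostnames, which is exactly the mitigation against man-in-the-middle attacks described above. The module and calls are Python stdlib; pinning the minimum protocol version is my own hardening choice, not something the text prescribes:

```python
import ssl

# A properly configured client-side TLS context: certificates are checked
# against the system CA store, and the hostname must match the certificate.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED   # reject unverified certificates
assert ctx.check_hostname is True             # reject mismatched hostnames

# Refuse legacy protocol versions (hardening choice, assumed reasonable).
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

The common deployment mistake is the opposite of this: disabling `check_hostname` or setting `verify_mode` to `CERT_NONE` to "make the error go away," which silently reopens the man-in-the-middle vector.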
User permissions may be the most basic aspect of security. Remember the concept of least privilege, which means that any given user will be granted only the privileges necessary to perform their job function. Microsoft describes five file permissions and one additional folder permission:
Full Control This means the user can not only read, execute, and write, but can also assign permissions to other users.
Modify This is the same as read and write, with delete added.
Read and Execute Not all files are documents. For example, programs are files, and the Read and Execute permission is needed to run a program.
Read This permission allows the user to read the file but not to modify it.
Write This permission allows the user to modify the file.
Folders have the same permissions, with one added permission: list folder contents. This permission allows the user to see what is in a folder but not to read the files.
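The permission hierarchy above can be modeled as nested sets of rights. This is a hypothetical sketch of the relationships the text describes, not how Windows actually stores access control entries:

```python
# Each named permission level grants a set of concrete rights.
# "Modify" is read + write + execute with delete added; "Full Control"
# adds the ability to assign permissions to other users.
PERMISSIONS = {
    "Read":             {"read"},
    "Write":            {"write"},
    "Read and Execute": {"read", "execute"},
    "Modify":           {"read", "write", "execute", "delete"},
    "Full Control":     {"read", "write", "execute", "delete",
                         "assign_permissions"},
    # Folder-only permission: see names in a folder without reading files.
    "List Folder Contents": {"list"},
}

def allowed(level: str, action: str) -> bool:
    """Return True if the given permission level grants the action."""
    return action in PERMISSIONS.get(level, set())
```

So, for example, `allowed("Modify", "delete")` holds, but `allowed("Read", "write")` does not, matching the descriptions above.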
Access Control Lists
Related to permissions is the concept of the access control list (ACL). An ACL is literally a list of who can access what resource and at what level. It can be an internal part of an operating system or application. For example, a custom application might have an ACL that lists which users have what permissions (access levels) in that system.
An ACL can also be a physical list of who is allowed to enter a room or a building. This is a much less common definition for ACL, but it is relevant to physical security. Related to ACLs are white lists and black lists. In fact, you could consider these to be special types of access control lists. Essentially, a white list is a list of items that are allowed. It could be a list of websites that are okay to visit with company computers, or it could be a list of third-party software that is authorized to be installed on company computers. Black lists are the opposite. They are lists of things that are prohibited. It could be specific websites that employees should not visit or software that is forbidden to be installed on client computers.
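The white-list/black-list distinction comes down to the default decision: a white list denies anything not explicitly allowed, while a black list allows anything not explicitly prohibited. A toy sketch follows; all site and package names are hypothetical:

```python
# White list: only these sites may be visited from company computers.
ALLOWED_SITES = {"intranet.example.com", "docs.example.com"}

# Black list: these packages may never be installed on client computers.
BLOCKED_SOFTWARE = {"torrent-client", "keygen-tool"}

def site_permitted(host: str) -> bool:
    # White-list logic: default deny.
    return host in ALLOWED_SITES

def install_permitted(package: str) -> bool:
    # Black-list logic: default allow.
    return package not in BLOCKED_SOFTWARE
```

The security trade-off mirrors the text: a white list is stricter and safer but needs constant maintenance as legitimate needs grow, while a black list is more permissive and only blocks what you already know is bad.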