ISC CISSP Certified Information Systems Security Professional Exam Dumps and Practice Test Questions Set 9 Q161-180
Visit here for our full ISC CISSP exam dumps and practice test questions.
161. Question:
Under Domain 1, what is the main purpose of a security charter?
A) To assign daily operational duties to technical staff
B) To formally authorize the information security program and define its scope and authority
C) To establish encryption standards for all departments
D) To define system-specific security configurations
Answer: B) To formally authorize the information security program and define its scope and authority.
Explanation:
A security charter, approved by executive management, legitimizes the information security program, outlining its mission, goals, authority, and alignment with corporate governance structures.
162. Question:
Under Domain 2, which data retention principle supports legal defensibility and minimizes risk exposure?
A) Retain all data indefinitely
B) Retain only as long as necessary to meet legal and business needs
C) Delete all data after one year
D) Archive all unclassified data permanently
Answer: B) Retain only as long as necessary to meet legal and business needs.
Explanation:
Controlled retention ensures data is available for compliance or operational use but destroyed when obsolete, limiting storage costs and breach liability.
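To make a retention schedule enforceable in practice, records must be reviewed against their category's retention period and destroyed once it lapses. The Python sketch below illustrates the idea; the categories and retention periods are hypothetical, and real schedules are derived from legal and business requirements.

```python
from datetime import datetime, timedelta

# Hypothetical retention schedule, in days, per record category.
RETENTION_DAYS = {"invoices": 7 * 365, "web_logs": 90, "hr_records": 6 * 365}

def records_due_for_destruction(records, today=None):
    """Return records whose retention period has expired.

    Each record is a dict with a 'category' and a 'created' datetime.
    """
    today = today or datetime.now()
    expired = []
    for record in records:
        limit = timedelta(days=RETENTION_DAYS[record["category"]])
        if today - record["created"] > limit:
            expired.append(record)
    return expired

records = [
    {"id": 1, "category": "web_logs", "created": datetime(2023, 1, 10)},
    {"id": 2, "category": "invoices", "created": datetime(2023, 1, 10)},
]
print(records_due_for_destruction(records))  # only the 90-day web log is overdue
```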
163. Question:
Under Domain 3, which property of cryptographic algorithms ensures that plaintext cannot be feasibly derived from ciphertext without the key?
A) Diffusion
B) Non-repudiation
C) Irreversibility
D) Confusion
Answer: C) Irreversibility.
Explanation:
Irreversibility, the one-way property, ensures that deriving the plaintext or the key from ciphertext is computationally infeasible without the decryption key, maintaining confidentiality even under extensive cryptanalysis.
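As an illustration, the following Python sketch (using the third-party cryptography package) shows that modern authenticated encryption such as AES-GCM yields ciphertext that looks random and cannot be decrypted without the correct key; the key sizes and message are illustrative only.

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag
import os

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"transfer $100 to account 42", None)
print(ciphertext.hex())  # statistically indistinguishable from random bytes

# Without the correct key, recovery is computationally infeasible;
# authenticated modes such as GCM reject the attempt outright.
wrong_key = AESGCM.generate_key(bit_length=256)
try:
    AESGCM(wrong_key).decrypt(nonce, ciphertext, None)
except InvalidTag:
    print("decryption failed without the correct key")
```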
164. Question:
Under Domain 4, what is the purpose of a network access control (NAC) system?
A) To encrypt all internal data
B) To enforce endpoint compliance before allowing network access
C) To replace firewalls
D) To detect phishing attempts
Answer: B) To enforce endpoint compliance before allowing network access.
Explanation:
NAC systems authenticate devices and assess compliance (e.g., patch level, antivirus) before granting access, enforcing security baselines dynamically.
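A minimal sketch of the admission logic is shown below. The posture attributes, patch threshold, and segment names are hypothetical; real NAC deployments (for example, 802.1X with posture assessment) evaluate far richer endpoint state.

```python
REQUIRED_MIN_PATCH = 202406  # hypothetical minimum patch level, YYYYMM

def admission_decision(endpoint: dict) -> str:
    """Return the network segment an endpoint is admitted to."""
    compliant = (
        endpoint.get("antivirus_running", False)
        and endpoint.get("patch_level", 0) >= REQUIRED_MIN_PATCH
        and endpoint.get("disk_encrypted", False)
    )
    return "corporate_vlan" if compliant else "quarantine_vlan"

print(admission_decision(
    {"antivirus_running": True, "patch_level": 202407, "disk_encrypted": True}
))  # corporate_vlan
print(admission_decision(
    {"antivirus_running": False, "patch_level": 202301}
))  # quarantine_vlan: non-compliant endpoints are isolated for remediation
```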
165. Question:
Under Domain 5, what is the primary difference between identification and authentication?
A) Identification confirms the validity of credentials; authentication claims identity
B) Identification asserts who a user is; authentication verifies that claim
C) Authentication replaces authorization
D) Identification encrypts user data
Answer: B) Identification asserts who a user is; authentication verifies that claim.
Explanation:
Users first identify themselves (e.g., with a username), then authenticate through credentials (e.g., password, token) to prove legitimacy before access authorization.
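The two steps are easy to see in a minimal login sketch; the user, password, and credential store below are hypothetical.

```python
import hashlib
import hmac
import os

# Hypothetical credential store: username -> (salt, PBKDF2 hash).
def _hash(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
USERS = {"alice": (salt, _hash("correct horse", salt))}

def login(username: str, password: str) -> bool:
    # Step 1 - identification: the user claims an identity.
    record = USERS.get(username)
    if record is None:
        return False
    # Step 2 - authentication: the claim is verified against a credential.
    stored_salt, stored_hash = record
    return hmac.compare_digest(_hash(password, stored_salt), stored_hash)

print(login("alice", "correct horse"))  # True: claim verified
print(login("alice", "wrong"))          # False: identity claimed but not proven
```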
166. Question:
Under Domain 6, what is the most important outcome of a penetration test report?
A) Listing all open ports
B) Highlighting exploitable vulnerabilities and actionable remediation recommendations
C) Estimating bandwidth consumption
D) Testing backup recovery
Answer: B) Highlighting exploitable vulnerabilities and actionable remediation recommendations.
Explanation:
Penetration testing reports detail exploit paths, affected systems, and prioritized mitigation steps, guiding organizations toward effective risk reduction.
167. Question:
Under Domain 7, what is the main goal of a business impact analysis (BIA)?
A) To determine the root causes of previous incidents
B) To quantify operational and financial impacts of disruptions
C) To design network segmentation plans
D) To manage patch schedules
Answer: B) To quantify operational and financial impacts of disruptions.
Explanation:
A BIA identifies critical functions, interdependencies, and tolerable downtime to inform continuity strategies and recovery prioritization.
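As a simple illustration of the quantification step, the sketch below estimates disruption impact for a single business function using hypothetical cost figures and a hypothetical maximum tolerable downtime (MTD).

```python
# Hypothetical figures for one business function.
hourly_revenue_loss = 12_000       # direct financial impact per hour of outage
hourly_recovery_cost = 1_500       # staff and vendor costs during recovery
max_tolerable_downtime_hours = 8   # MTD identified for this function

def disruption_impact(outage_hours: float) -> dict:
    return {
        "financial_impact": outage_hours * (hourly_revenue_loss + hourly_recovery_cost),
        "exceeds_mtd": outage_hours > max_tolerable_downtime_hours,
    }

print(disruption_impact(4))   # tolerable, but already costs 54,000
print(disruption_impact(12))  # exceeds MTD: recovery strategy must prevent this
```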
168. Question:
Under Domain 8, what is the main security risk of using third-party software libraries in applications?
A) Reduced code size
B) Inherited vulnerabilities and lack of update control
C) Improved code portability
D) Increased performance
Answer: B) Inherited vulnerabilities and lack of update control.
Explanation:
Using third-party libraries, many of which are open source, provides numerous benefits, such as cost savings, community support, and flexibility. However, it also introduces certain security risks, the most significant of which is inherited vulnerabilities and a lack of update control, making option B the correct answer. Third-party components often rely on contributions from multiple developers and may themselves depend on further libraries. If any of these components contain vulnerabilities, they are inherited by every application that uses them. Organizations relying on external code may not always have visibility into all dependencies, increasing the risk that unpatched vulnerabilities remain in their environment.
A) Reduced code size
While modular open-source libraries can help streamline development, reduced code size is not inherently a security concern or risk. It is more of a potential advantage, as smaller codebases can be easier to manage or integrate, but it does not relate to the risks posed by open-source software.
C) Improved code portability
Open-source software is often designed to run across multiple platforms, which enhances portability. While this is a significant benefit, it is not a risk. Portability helps organizations deploy applications more broadly and efficiently, but does not address security issues.
D) Increased performance
Some open-source software may be optimized for performance, but this is also an advantage rather than a risk. Performance considerations are separate from the security concerns associated with open-source software.
In conclusion, while open-source software provides advantages like portability, community support, and potential performance improvements, it introduces security risks primarily due to inherited vulnerabilities and the challenge of managing updates. Without careful monitoring and patch management, organizations may unknowingly expose themselves to threats present in third-party code. This makes option B the correct choice.
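As an illustration of dependency auditing, the sketch below checks pinned package versions against a hypothetical advisory feed; in practice, tools such as OWASP Dependency-Check or pip-audit automate this against real vulnerability databases.

```python
# Hypothetical advisory feed mapping package names to vulnerable versions.
KNOWN_VULNERABLE = {
    "examplelib": {"1.0.0", "1.0.1"},
    "legacyparser": {"2.3.0"},
}

def audit_requirements(pins: dict[str, str]) -> list[str]:
    """Flag pinned dependencies that appear in the advisory feed."""
    return [
        f"{name}=={version}"
        for name, version in pins.items()
        if version in KNOWN_VULNERABLE.get(name, set())
    ]

pinned = {"examplelib": "1.0.1", "requests": "2.32.0"}
print(audit_requirements(pinned))  # ['examplelib==1.0.1'] - inherited risk surfaced
```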
169. Question:
Under Domain 1, which risk response strategy involves accepting residual risk while monitoring for changes?
A) Mitigation
B) Transfer
C) Avoidance
D) Acceptance
Answer: D) Acceptance.
Explanation:
Risk management involves identifying, assessing, and responding to potential threats to an organization’s assets, operations, or objectives. Once a risk is identified, organizations must decide how to handle it using one of several strategies, including mitigation, transfer, avoidance, or acceptance. The correct answer to the question is option D: acceptance. Risk acceptance is the process of consciously acknowledging a risk and choosing to proceed without taking additional measures to reduce or transfer it, typically because the cost or effort of other strategies outweighs the potential impact.
A) Mitigation
Mitigation involves taking proactive steps to reduce the likelihood or impact of a risk. This can include implementing security controls, backup procedures, training, or redundant systems. Mitigation is a common approach when the risk is significant and can be reasonably controlled or reduced. Unlike acceptance, mitigation requires resources, planning, and ongoing management. While mitigation lowers risk exposure, risk acceptance is chosen when an organization decides the potential impact is tolerable or unavoidable.
B) Transfer
Risk transfer involves shifting the financial or operational burden of a risk to a third party, most commonly through insurance, outsourcing, or contractual agreements. For example, a company may purchase cybersecurity insurance to cover potential data breach costs or outsource cloud hosting to a provider responsible for maintaining security. While transfer reduces the organization’s direct exposure, it does not eliminate the risk itself, and there is often a cost associated with transferring risk. Risk acceptance, by contrast, does not involve shifting responsibility or paying for coverage; the organization simply acknowledges the risk and chooses to live with it.
C) Avoidance
Risk avoidance seeks to eliminate the risk by changing plans, processes, or operations. For instance, a company might avoid operating in a high-risk country or discontinue the use of vulnerable software. While avoidance effectively removes the risk, it can be impractical, restrictive, or costly, and it may limit opportunities. Acceptance is a different approach where the organization deliberately accepts that the risk exists, rather than attempting to remove or avoid it, often because the probability or impact is low or manageable.
D) Acceptance
This is the correct answer. Risk acceptance is appropriate when the potential impact of a risk is considered minor or when the cost of mitigation, transfer, or avoidance exceeds the expected loss. Acceptance is a conscious decision documented in risk registers or management plans, often accompanied by monitoring to ensure that the risk does not escalate unexpectedly. For example, a company might accept minor software bugs in a non-critical application rather than spending excessive resources to eliminate every issue. Acceptance acknowledges that risk is inherent in business operations and focuses resources on higher-priority risks while consciously tolerating low-level or unavoidable threats.
In conclusion, risk acceptance is a deliberate choice to acknowledge and tolerate risk rather than actively mitigating, transferring, or avoiding it. While mitigation reduces the likelihood or impact, transfer shifts responsibility, and avoidance seeks to eliminate the risk, acceptance allows organizations to focus resources efficiently while understanding and managing their exposure. This makes option D the correct strategy for handling certain risks in a balanced and pragmatic manner.
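The acceptance decision often rests on simple quantitative arithmetic using the standard Domain 1 terms: single loss expectancy (SLE), annual rate of occurrence (ARO), and annualized loss expectancy (ALE). The figures in the sketch below are hypothetical.

```python
# SLE (single loss expectancy) = asset value x exposure factor
# ALE (annualized loss expectancy) = SLE x ARO (annual rate of occurrence)
sle = 50_000     # expected loss per occurrence, hypothetical
aro = 0.1        # expected occurrences per year, hypothetical
ale = sle * aro  # 5,000 per year

annual_mitigation_cost = 20_000  # hypothetical cost of the candidate control

# Acceptance is rational when the control costs more than the loss it prevents;
# the residual risk is documented in the risk register and monitored.
decision = "accept" if annual_mitigation_cost > ale else "mitigate"
print(f"ALE = {ale:,.0f}; decision: {decision}")  # ALE = 5,000; decision: accept
```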
170. Question:
Under Domain 2, what is the primary purpose of data masking?
A) To compress large datasets
B) To conceal sensitive data in test environments or analytics
C) To encrypt databases
D) To delete personal data
Answer: B) To conceal sensitive data in test environments or analytics.
Explanation:
Data masking is a security technique used to protect sensitive information by replacing real data with realistic but fictional values. The correct answer is option B: to conceal sensitive data in test environments or analytics. Data masking allows organizations to use realistic datasets for development, testing, or analysis without exposing actual personal or confidential information, thereby reducing the risk of data breaches or accidental exposure. Understanding why this is the correct choice requires examining each of the four options.
A) To compress large datasets
Data compression reduces the size of datasets to save storage space or improve transmission efficiency. While compression is a useful technique in computing, it does not inherently protect sensitive information or conceal it from unauthorized users. Compressed data can still contain real, unprotected sensitive information if accessed by an unauthorized party. Therefore, compression addresses storage and bandwidth efficiency rather than the security or privacy objectives that data masking fulfills.
B) To conceal sensitive data in test environments or analytics
This is the correct answer. Data masking replaces sensitive information, such as personally identifiable information (PII), financial records, or health data, with obfuscated but realistic values. For example, in a test database, real customer names and social security numbers can be replaced with fictitious values that retain the format and characteristics of the original data. This allows developers, testers, or data analysts to perform realistic operations, quality assurance, or analytics without accessing real sensitive information. Data masking supports regulatory compliance with frameworks such as GDPR, HIPAA, and PCI DSS, which require organizations to protect sensitive data while enabling business operations. It ensures that sensitive data cannot be exploited in non-production environments, reducing the risk of data leaks, misuse, or insider threats.
C) To encrypt databases
Encryption protects data by converting it into unreadable formats that can only be decrypted with the correct key. While encryption secures data in storage or transit, it is primarily used for protecting production databases from unauthorized access or theft. Encrypted data is not typically suitable for use in testing or analytics because it cannot be readily processed without decryption. Data masking, on the other hand, allows realistic use of data while concealing the sensitive content, making it more appropriate for non-production environments where developers or analysts need functional access without exposure to real information.
D) To delete personal data
Deleting personal data is a data disposal or data erasure method used to permanently remove information from storage. While deletion ensures that sensitive data cannot be recovered or misused, it eliminates the data entirely, making it unusable for testing, analytics, or reporting purposes. Data masking, by contrast, allows organizations to retain the structure and usability of the dataset while concealing sensitive elements, providing a balance between data protection and operational utility.
In conclusion, data masking is specifically designed to conceal sensitive information in non-production environments, enabling safe testing, development, and analytics without exposing real data. While compression addresses storage efficiency, encryption secures production data, and deletion removes data entirely, data masking uniquely protects sensitive information while maintaining usability for legitimate business processes. This makes option B the correct choice.
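A minimal sketch of format-preserving masking follows; the field names and masking rules are illustrative, and production masking tools typically add guarantees such as consistency (the same real value always masks to the same fictitious value).

```python
import random

def mask_ssn(ssn: str) -> str:
    """Replace an SSN with a fictitious value that preserves the format."""
    digits = [str(random.randint(0, 9)) for _ in range(9)]
    return f"{''.join(digits[:3])}-{''.join(digits[3:5])}-{''.join(digits[5:])}"

def mask_record(record: dict) -> dict:
    masked = dict(record)
    masked["name"] = f"Customer-{random.randint(10_000, 99_999)}"
    masked["ssn"] = mask_ssn(record["ssn"])
    return masked  # non-sensitive fields keep their real values for realism

prod_row = {"name": "Jane Doe", "ssn": "123-45-6789", "plan": "gold"}
print(mask_record(prod_row))
# e.g. {'name': 'Customer-48210', 'ssn': '914-07-3356', 'plan': 'gold'}
```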
171. Question:
Under Domain 3, what cryptographic function is used in digital signatures to verify message integrity?
A) Hashing
B) Symmetric encryption
C) Steganography
D) Key stretching
Answer: A) Hashing.
Explanation:
Digital signatures use hashing to generate a fixed-length digest of the message, which the sender then signs with a private key. Any modification to the message changes the digest, revealing tampering when the verifier recomputes and compares it.
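A short illustration using Python's standard library shows how even a one-character change to the message produces a completely different digest.

```python
import hashlib

message = b"Pay vendor 1,000 on 2025-01-15"
digest = hashlib.sha256(message).hexdigest()

# In a digital signature, this digest (not the full message) is what the
# sender signs with a private key. The verifier recomputes the digest and
# compares it with the one recovered from the signature.
tampered = b"Pay vendor 9,000 on 2025-01-15"
print(digest)
print(hashlib.sha256(tampered).hexdigest())           # completely different
print(digest == hashlib.sha256(message).hexdigest())  # True: unmodified
```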
172. Question:
Under Domain 4, which technology establishes secure, encrypted tunnels across untrusted networks?
A) Virtual Private Network (VPN)
B) Proxy server
C) Load balancer
D) Network sniffer
Answer: A) Virtual Private Network (VPN).
Explanation:
A Virtual Private Network (VPN) is a technology that provides secure communication over public networks, such as the internet, by encrypting traffic and creating a private tunnel between endpoints. The correct answer is option A: Virtual Private Network (VPN). VPNs are widely used to ensure confidentiality, integrity, and secure remote access to corporate resources. Understanding why VPN is the correct choice requires examining each of the four options and their functions.
A) Virtual Private Network (VPN)
A VPN allows remote users or branch offices to connect securely to a private network over an untrusted network, like the Internet. By encrypting data traffic and encapsulating it within a secure tunnel, VPNs protect sensitive information from interception or eavesdropping. VPNs can use protocols such as IPsec, SSL/TLS, or WireGuard to ensure data confidentiality and integrity. VPNs also provide authentication mechanisms to verify that only authorized users can establish a connection. In addition to securing data, VPNs enable organizations to extend their internal networks to remote locations, allowing employees to access files, applications, and services as if they were directly connected to the corporate network. This makes VPNs an essential tool for telecommuting, branch office connectivity, and secure communication across public infrastructure.
B) Proxy server
A proxy server acts as an intermediary between clients and external servers. It can filter traffic, cache content to improve performance, and provide anonymity by masking the client’s IP address. While proxies can offer some privacy benefits, they do not inherently provide end-to-end encryption or secure tunneling like a VPN. Traffic between the proxy and the destination server can still be exposed unless additional encryption is implemented. Therefore, a proxy server cannot be considered a substitute for the secure communication capabilities provided by a VPN.
C) Load balancer
A load balancer is a network device or software component that distributes incoming network traffic across multiple servers to improve performance, reliability, and availability. Load balancers optimize resource utilization and prevent individual servers from becoming overwhelmed, but they do not provide encryption, secure tunnels, or confidentiality of data. Their primary purpose is performance and fault tolerance, not secure remote communication, making them an inappropriate choice when the goal is to protect sensitive data across public networks.
D) Network sniffer
A network sniffer is a tool used to capture and analyze network traffic. While sniffers are useful for network troubleshooting, performance monitoring, and security auditing, they do not provide any security or privacy to network communications. In fact, sniffers can be used maliciously to intercept unencrypted traffic, highlighting the need for technologies like VPNs to secure data in transit. Network sniffers are passive monitoring tools and cannot encrypt or protect data.
In conclusion, a Virtual Private Network (VPN) is the correct choice for securely connecting remote users or networks over an untrusted medium. Unlike proxy servers, which offer limited privacy, load balancers, which improve performance, or network sniffers, which monitor traffic, VPNs provide encryption, authentication, and secure tunneling to protect data from interception and unauthorized access. VPNs are essential for ensuring confidentiality, integrity, and secure access in modern distributed networks.
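To illustrate the tunneling concept only, and not any real IPsec or WireGuard implementation, the sketch below encrypts an entire inner packet and wraps it behind a placeholder outer header, assuming a pre-negotiated key and the third-party cryptography package.

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

tunnel_key = AESGCM.generate_key(bit_length=256)  # negotiated out of band
OUTER_HEADER = b"OUTER-IP|ESP|"  # placeholder for real outer IP/ESP headers

def encapsulate(inner_packet: bytes) -> bytes:
    """Encrypt the original packet, headers and all, as opaque payload."""
    nonce = os.urandom(12)
    payload = AESGCM(tunnel_key).encrypt(nonce, inner_packet, None)
    return OUTER_HEADER + nonce + payload

def decapsulate(outer_packet: bytes) -> bytes:
    body = outer_packet[len(OUTER_HEADER):]
    nonce, payload = body[:12], body[12:]
    return AESGCM(tunnel_key).decrypt(nonce, payload, None)

inner = b"INNER-IP|TCP|GET /payroll HTTP/1.1"
# Endpoints see plaintext; observers on the untrusted path see only the tunnel.
assert decapsulate(encapsulate(inner)) == inner
```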
173. Question:
Under Domain 5, what access control model assigns permissions based on organizational role rather than identity?
A) DAC
B) MAC
C) RBAC
D) ABAC
Answer: C) RBAC.
Explanation:
Access control is a fundamental concept in information security, determining who can access resources and what actions they are allowed to perform. One widely used access control model is Role-Based Access Control (RBAC), which assigns permissions to users based on their roles within an organization. The correct answer is option C: RBAC. Understanding why RBAC is the most appropriate choice requires examining each option and its characteristics.
A) DAC
Discretionary Access Control (DAC) is an access control model where the owner of an object or resource determines who can access it and what permissions they have. DAC provides flexibility because resource owners can grant or revoke access as needed. However, DAC can become difficult to manage in large organizations because permissions are managed individually for each user, making it harder to enforce consistent security policies or the principle of least privilege. DAC also increases the risk of accidental permission misconfigurations, which could lead to unauthorized access.
B) MAC
Mandatory Access Control (MAC) is a strict access control model in which access decisions are determined by system-enforced policies based on classifications or labels assigned to both users and resources. MAC is often used in environments requiring high security, such as military or government systems. While MAC provides strong protection and policy enforcement, it is inflexible for commercial environments because users cannot modify access, and the system can be complex to manage when multiple classifications and policies are involved. MAC is not ideal for general business applications that require dynamic role assignments.
C) RBAC
This is the correct answer. Role-Based Access Control assigns permissions based on predefined roles within an organization, such as manager, accountant, or IT administrator. Each role has a set of privileges that align with the responsibilities of that job function. Users are then assigned to roles rather than individual permissions, which simplifies management and ensures consistent enforcement of the principle of least privilege. For example, an HR role may have access to employee records, while an IT role may have access to network configurations but not sensitive personnel data. RBAC is scalable, particularly in large organizations, because adding a new employee to a role automatically grants the appropriate access without individually configuring permissions. RBAC also facilitates compliance with regulatory requirements by ensuring that access is aligned with organizational policies and job functions, making audits easier and more accurate.
D) ABAC
Attribute-Based Access Control (ABAC) makes access decisions based on attributes associated with users, resources, and the environment, such as department, location, or time of day. While ABAC provides fine-grained control and is highly flexible, it can be more complex to implement and maintain than RBAC. ABAC is ideal for dynamic and context-aware access control scenarios, but may introduce unnecessary complexity for organizations where roles clearly define access needs.
In conclusion, RBAC is the most practical and widely adopted access control model for organizations that want to assign permissions efficiently while enforcing least privilege. DAC provides flexibility but can lead to inconsistent access management, MAC offers strong security but is inflexible, and ABAC delivers fine-grained control at the cost of complexity. By using RBAC, organizations can streamline authorization, simplify administration, and ensure that users have only the access necessary to perform their job functions, making option C the correct choice.
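A minimal sketch shows the core RBAC idea; the roles, permissions, and users below are hypothetical.

```python
# Permissions attach to roles, not to individual users.
ROLE_PERMISSIONS = {
    "hr": {"read_employee_records", "update_employee_records"},
    "it": {"read_network_config", "update_network_config"},
    "intern": {"read_public_docs"},
}

USER_ROLES = {"dana": {"hr"}, "lee": {"it", "intern"}}

def is_authorized(user: str, permission: str) -> bool:
    """A user holds a permission only through an assigned role."""
    return any(
        permission in ROLE_PERMISSIONS[role]
        for role in USER_ROLES.get(user, set())
    )

print(is_authorized("dana", "read_employee_records"))  # True
print(is_authorized("dana", "read_network_config"))    # False: wrong role
# Onboarding a new HR employee is one assignment, not per-user permission edits:
USER_ROLES["sam"] = {"hr"}
```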
174. Question:
Under Domain 6, what security assessment type simulates an insider threat scenario?
A) Black-box test
B) White-box test
C) Gray-box test
D) Fuzz test
Answer: C) Gray-box test.
Explanation:
Gray-box testing is a security testing methodology that provides testers with partial knowledge of a system’s internal workings while still evaluating it from an external perspective. This approach simulates threats from insiders or compromised users who possess limited but legitimate access to information. The correct answer is option C: gray-box test. Understanding why this is the correct choice requires analyzing each of the four options in detail.
A) Black-box test
Black-box testing is a methodology where the tester does not know the system’s internal structure, design, or implementation. The tester interacts with the system solely from an external perspective, focusing on input-output behavior and functionality. Black-box testing is effective for identifying vulnerabilities visible to an external attacker with no insider knowledge. While valuable for simulating real-world external attacks, it does not provide insight into security weaknesses that could be exploited by someone with partial access or insider knowledge, which is the focus of gray-box testing.
B) White-box test
White-box testing is the opposite of black-box testing. Here, the tester has full knowledge of the system’s internal design, source code, architecture, and logic. This allows a comprehensive examination of all paths, control flows, and potential vulnerabilities. White-box testing is highly thorough and useful for identifying deep security flaws, such as logic errors or insecure coding practices. However, it does not simulate the real-world scenario where an attacker has only limited, legitimate knowledge, which gray-box testing aims to replicate. White-box testing is more intrusive and assumes insider-level information that may not reflect typical threats faced by the organization.
C) Gray-box test
This is the correct answer. Gray-box testing combines elements of both black-box and white-box testing. The tester has limited knowledge of internal structures, configurations, or credentials—enough to simulate an insider threat or a compromised user—but not full access as in white-box testing. For example, a gray-box tester might know the system architecture, some user credentials, or access permissions, but not the full source code. This method allows organizations to identify vulnerabilities that could be exploited by partially informed insiders or attackers who have obtained limited credentials, such as through phishing or privilege escalation. Gray-box testing is especially effective for evaluating authentication mechanisms, access controls, and permission configurations, as well as for testing real-world attack scenarios where an adversary already has partial access.
D) Fuzz test
Fuzz testing is a technique where random or malformed input data is injected into a system to discover coding errors, crashes, or unexpected behavior. While fuzz testing is highly effective for identifying buffer overflows, input validation flaws, and application crashes, it is not focused on simulating partially informed insider threats. Fuzz testing is a valuable vulnerability discovery method, but does not provide the context or insight associated with limited knowledge scenarios, which are the focus of gray-box testing.
In conclusion, gray-box testing provides a realistic simulation of attacks from insiders or partially knowledgeable adversaries, combining the advantages of black-box and white-box approaches. Black-box tests focus on external threats with no system knowledge, white-box tests provide full access to internal information, and fuzz tests randomly explore input handling vulnerabilities. Gray-box testing uniquely addresses scenarios where attackers have limited but legitimate access, making option C the most appropriate choice for organizations seeking to understand potential risks from partially informed insiders or compromised users.
175. Question:
Under Domain 7, what does the Recovery Point Objective (RPO) define?
A) Maximum acceptable outage duration
B) Amount of data loss tolerable between backups
C) Duration required to restore operations
D) Number of alternate sites
Answer: B) Amount of data loss tolerable between backups.
Explanation:
RPO defines the maximum tolerable amount of data loss, measured as the time between the last backup and a disruption; it therefore dictates how frequently backups must occur to meet business continuity needs.
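A small worked example with hypothetical numbers shows how the RPO constrains the backup schedule.

```python
# If the RPO is 4 hours, the backup interval must not exceed 4 hours,
# because everything written since the last backup is lost in a disaster.
rpo_hours = 4
backup_interval_hours = 6  # hypothetical current schedule

worst_case_data_loss = backup_interval_hours  # data since the last backup
verdict = "meets" if worst_case_data_loss <= rpo_hours else "violates"
print(f"worst-case loss: {worst_case_data_loss}h; {verdict} RPO")
# -> worst-case loss: 6h; violates RPO (tighten the schedule to 4h or less)
```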
176. Question:
Under Domain 8, what is a common countermeasure against injection attacks?
A) Error suppression
B) Parameterized queries and input validation
C) Session replay
D) Output encoding only
Answer: B) Parameterized queries and input validation.
Explanation:
Preventing web application vulnerabilities, particularly SQL injection attacks, is a critical aspect of secure software development. SQL injection occurs when an attacker manipulates input fields or parameters to execute malicious SQL commands on a database. To mitigate this risk, developers must implement strategies that properly handle user input and database queries. The correct answer to the question is option B: parameterized queries and input validation. Understanding why this is correct requires examining each option in detail.
A) Error suppression
Error suppression involves hiding or disabling error messages that are returned by an application or database. While suppressing detailed error messages can prevent attackers from gaining insight into a system’s internal workings, error suppression alone does not prevent SQL injection attacks. Attackers can still exploit vulnerable input fields if queries are constructed improperly, even if error messages are hidden. Error suppression is a defensive measure to reduce information leakage, but it is not a primary mechanism for preventing injection vulnerabilities.
B) Parameterized queries and input validation
This is the correct answer. Parameterized queries, also known as prepared statements, separate SQL code from user input by using placeholders in the query and binding user-provided values at execution time. This approach ensures that the database interprets user input strictly as data, not as executable code, effectively preventing SQL injection. Input validation complements parameterized queries by verifying that input data meets expected formats, lengths, and types before it is processed. For example, a numeric input field can be validated to reject non-numeric characters, and an email field can be validated against a standard email format. Together, parameterized queries and input validation provide a robust defense-in-depth approach. These techniques are widely recommended in secure coding guidelines, such as the OWASP Top Ten, and are considered best practices for mitigating injection attacks and protecting sensitive data in web applications.
C) Session replay
Session replay refers to the process of capturing and replaying user interactions within a web application, often for monitoring or testing purposes. While session replay can help developers analyze application behavior, it does not inherently prevent SQL injection. Attackers could potentially exploit vulnerabilities regardless of whether session replay is enabled or disabled. Therefore, session replay is unrelated to the prevention of input-based attacks like SQL injection.
D) Output encoding only
Output encoding involves converting data into a safe format before displaying it to users, which is an effective technique for preventing cross-site scripting (XSS) attacks. For example, special HTML characters are encoded to prevent malicious scripts from executing in a browser. However, output encoding does not prevent SQL injection, because injection attacks target the database query execution layer rather than the output displayed to the user. Relying solely on output encoding leaves the system vulnerable to database-level attacks.
In conclusion, parameterized queries and input validation are the most effective methods for preventing SQL injection attacks, making option B the correct choice. While error suppression and output encoding serve important security functions—such as reducing information leakage and preventing XSS—neither addresses the core mechanism of query manipulation. Session replay is primarily a testing or monitoring tool and does not provide a security control against SQL injection. By implementing parameterized queries and robust input validation, developers can ensure that user-supplied data is treated safely, protecting the database and sensitive information from malicious exploitation.
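To make the difference concrete, here is a minimal sketch using Python's built-in sqlite3 module, with a hypothetical users table and a classic injection payload.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: user input is concatenated into the SQL text itself.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())  # returns every row - injection succeeds

# Safe: a placeholder keeps the input strictly as data, never as SQL code.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # [] - no row matches
```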
177. Question:
Under Domain 1, what type of control is a policy requiring annual security awareness training?
A) Technical
B) Physical
C) Administrative
D) Corrective
Answer: C) Administrative.
Explanation:
In the field of information security, controls are typically categorized into three primary types: administrative, technical, and physical. Each type of control serves a distinct role in protecting information assets, ensuring compliance, and mitigating risk. Administrative controls are policies, procedures, and management practices designed to govern how an organization manages its information security program. The correct answer to the question is option C: administrative.
Administrative controls form the foundation of an organization’s security posture. They include policies, standards, procedures, guidelines, and training programs that define how employees and systems should handle sensitive information. Examples include acceptable use policies, data classification guidelines, incident response procedures, employee background checks, and mandatory security awareness training. These controls ensure that personnel understand their roles and responsibilities in protecting organizational assets. Administrative controls also help establish accountability and enforce compliance with legal and regulatory requirements, such as GDPR, HIPAA, or PCI DSS. By clearly defining expectations and responsibilities, administrative controls guide decision-making and behavior within the organization, reducing the likelihood of accidental or intentional security breaches.
A) Technical
Technical controls, also referred to as logical controls, rely on technology to enforce security policies. These include firewalls, intrusion detection systems, encryption, access control lists, and multi-factor authentication. Technical controls protect systems and data by preventing, detecting, and responding to threats automatically. While technical controls are essential for implementing security, they are operational mechanisms that enforce policies defined by administrative controls. Without the policies, procedures, and management oversight provided by administrative controls, technical mechanisms may be inconsistently applied or misconfigured, reducing their effectiveness.
B) Physical
Physical controls safeguard the organization’s tangible assets, including buildings, data centers, and hardware. Examples include security guards, locked doors, surveillance cameras, access cards, and environmental controls such as fire suppression and temperature monitoring. Physical controls prevent unauthorized access to systems and infrastructure, but do not address procedural or organizational aspects of security. They complement administrative controls but cannot replace the policies and management practices necessary to enforce security effectively.
D) Corrective
Corrective controls are implemented after an incident to restore systems and data to a secure state. These include restoring backups, applying patches, or reconfiguring compromised systems. Corrective controls are reactive in nature and focus on minimizing the impact of security incidents. While important for recovery, they do not establish the proactive framework that guides employee behavior, sets organizational standards, or defines procedures—the primary focus of administrative controls.
In conclusion, administrative controls are the backbone of a comprehensive information security program. They establish policies, procedures, and management practices that guide the implementation of technical and physical controls while ensuring accountability and compliance. Technical controls enforce the rules, physical controls protect the infrastructure, and corrective controls restore systems after incidents, but administrative controls define the rules, responsibilities, and expectations that underpin all security efforts. This makes option C the correct choice for the category that governs organizational security strategy, policy, and management practices.
178. Question:
Under Domain 2, which law primarily governs the protection of personal data for EU citizens?
A) SOX
B) HIPAA
C) GDPR
D) FISMA
Answer: C) GDPR.
Explanation:
The General Data Protection Regulation (GDPR) is a comprehensive data privacy and protection regulation adopted by the European Union (EU) in 2016 and enforceable since May 2018. The correct answer to the question is option C: GDPR. GDPR is designed to protect the personal data of EU citizens, giving individuals greater control over how their information is collected, processed, stored, and shared. It has a significant impact on organizations worldwide that handle EU residents’ data, establishing strict obligations and penalties for non-compliance. Understanding why GDPR is the correct choice requires examining its purpose and comparing it to other regulatory frameworks.
A) SOX
The Sarbanes-Oxley Act (SOX) is a U.S. federal law enacted in 2002 to improve corporate governance and financial transparency in publicly traded companies. SOX focuses primarily on internal controls for financial reporting, requiring organizations to maintain accurate records and implement mechanisms to prevent fraud. While SOX mandates security and audit controls over financial data, it is not a data privacy regulation and does not specifically address the protection of personal information or the rights of individuals. Therefore, SOX is unrelated to GDPR’s goals of privacy protection and individual data rights.
B) HIPAA
The Health Insurance Portability and Accountability Act (HIPAA) is a U.S. federal law enacted in 1996 that establishes standards for protecting sensitive patient health information. HIPAA’s primary focus is on healthcare organizations, insurance providers, and related entities, requiring safeguards to maintain confidentiality, integrity, and availability of protected health information (PHI). While HIPAA addresses data security and privacy within the healthcare sector, it is limited in scope to health-related data and U.S. entities. GDPR, in contrast, applies broadly to all personal data of EU residents, across industries and globally, making it more comprehensive in terms of data protection.
C) GDPR
This is the correct answer. GDPR provides a unified legal framework for data protection across the European Union. It introduces principles such as data minimization, purpose limitation, accuracy, storage limitation, and accountability. It also grants individuals rights such as the right to access, rectify, erase, and port their personal data, as well as the right to object to processing. GDPR mandates organizations to implement technical and administrative measures to protect personal data, conduct data protection impact assessments (DPIAs), and notify authorities and affected individuals in the event of a breach. Organizations that fail to comply with GDPR can face substantial fines of up to 20 million euros or 4% of global annual turnover, whichever is higher. Its global reach means that any organization processing the data of EU residents must adhere to these requirements, regardless of where the organization is located.
D) FISMA
The Federal Information Security Management Act (FISMA) is a U.S. federal law focused on information security standards for federal agencies. It requires agencies to develop, document, and implement comprehensive security programs to protect federal information systems. While FISMA establishes strong security requirements, its scope is limited to U.S. federal government information systems and does not provide rights or protections for individuals’ personal data in the way GDPR does. FISMA is primarily concerned with safeguarding government IT infrastructure rather than regulating privacy for citizens.
In conclusion, GDPR is the regulatory framework that governs the protection of personal data for individuals within the EU, making it the correct answer. Unlike SOX, HIPAA, and FISMA, which focus on financial controls, healthcare data, or federal information systems, GDPR applies broadly to personal data across industries and borders. It emphasizes individual rights, organizational accountability, and stringent penalties for non-compliance, establishing one of the most robust and influential data protection standards in the world.
179. Question:
Under Domain 3, what is the primary purpose of key escrow?
A) To store private keys for users in encrypted databases for backup or legal access
B) To generate random keys
C) To provide temporary encryption keys for sessions
D) To hide keys in steganographic images
Answer: A) To store private keys for users in encrypted databases for backup or legal access.
Explanation:
Key escrow ensures authorized recovery of encryption keys during emergencies or legal investigations, balancing security and accountability.
180. Question:
Under Domain 4, which security mechanism limits broadcast traffic and enhances segmentation within a network switch?
A) Port mirroring
B) VLAN configuration
C) Spanning tree protocol
D) NAT
Answer: B) VLAN configuration.
Explanation:
Virtual Local Area Networks (VLANs) are a fundamental networking technique used to segment a physical network into multiple logical networks. The correct answer to the question is option B: VLAN configuration. VLAN configuration allows administrators to divide a single Layer 2 network into separate broadcast domains, improving both security and performance. Understanding why VLAN configuration is the correct answer requires examining each option individually.
A) Port mirroring
Port mirroring is a technique used to copy network traffic from one or more ports to another port, typically for monitoring and analysis purposes. While port mirroring is useful for troubleshooting, intrusion detection, and traffic analysis, it does not provide logical separation of network devices or users. It does not create isolated network segments, so it cannot prevent devices from communicating freely across the network. Therefore, port mirroring does not achieve the objectives of network segmentation or enhanced security in the same way that VLANs do.
B) VLAN configuration
This is the correct answer. VLANs allow network administrators to logically segment a network into different broadcast domains regardless of physical location. For example, employees in the finance department can be placed in one VLAN, while IT staff are placed in another. This segmentation isolates traffic between VLANs, reducing the risk of unauthorized access, limiting broadcast traffic within each segment, and improving overall network performance. VLANs also enhance security by restricting lateral movement: if an attacker compromises a device in one VLAN, they cannot easily access devices in another VLAN without passing through routing or firewall controls. VLANs are configured on switches using tagging protocols such as IEEE 802.1Q, which ensures that traffic is correctly identified and separated. In addition to security and traffic management, VLANs provide flexibility for network design, allowing administrators to reorganize users and devices without physically rewiring the network. By logically isolating sensitive systems and controlling communication paths, VLANs play a critical role in network security architecture.
C) Spanning tree protocol
The Spanning Tree Protocol (STP) is a network protocol designed to prevent loops in Layer 2 networks. It ensures a loop-free topology by selectively blocking redundant paths. While STP is important for network reliability and avoiding broadcast storms, it does not provide segmentation or isolation of devices into logical networks. STP ensures network stability but does not control access between users or departments, so it cannot serve the same security function as VLANs.
D) NAT
Network Address Translation (NAT) is a technique that modifies IP address information in packet headers, usually to allow multiple devices to share a single public IP address or to hide internal IP addresses from external networks. NAT is primarily a method of conserving IP addresses and providing basic network-level obfuscation. While NAT can prevent direct external access to internal systems, it does not create separate broadcast domains or isolate devices within an internal network. NAT does not provide the internal segmentation or traffic isolation capabilities of VLANs.
In conclusion, VLAN configuration is the most appropriate solution for logically segmenting a network, isolating sensitive systems, improving security, and controlling lateral movement between devices. Port mirroring, STP, and NAT provide other network functions—monitoring, loop prevention, and IP address management—but do not achieve the traffic isolation and security benefits that VLANs provide. By implementing VLANs, organizations can design flexible, secure, and efficient networks that align with both operational and security requirements.
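As a concrete illustration of 802.1Q tagging, the sketch below builds the 4-byte tag that a switch inserts into Ethernet frames on a trunk port; the VLAN assignments are hypothetical.

```python
import struct

def dot1q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte IEEE 802.1Q tag inserted into an Ethernet frame.

    TPID 0x8100 identifies the frame as tagged; the TCI packs a 3-bit
    priority (PCP), a 1-bit drop-eligible indicator (DEI), and the
    12-bit VLAN ID.
    """
    assert 0 <= vlan_id <= 0xFFF, "VLAN IDs are 12 bits (0-4095)"
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

# A switch carrying finance traffic on VLAN 20 tags frames on its trunk port:
print(dot1q_tag(20).hex())  # '81000014'
print(dot1q_tag(30).hex())  # '8100001e' - IT traffic stays logically separate
```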