ISC CISSP Certified Information Systems Security Professional Exam Dumps and Practice Test Questions Set 6 Q101-120
Question 101:
Under Domain 1, what is the primary purpose of a risk register?
A) To eliminate all identified risks
B) To document, track, and monitor identified risks and mitigation plans
C) To act as a financial ledger for security spending
D) To store all compliance audit findings
Answer: B) To document, track, and monitor identified risks and mitigation plans.
Explanation:
A risk register is one of the most important governance and risk-management tools used in Domain 1: Security and Risk Management. Its primary purpose is to document, track, monitor, and communicate information about identified risks and their associated mitigation strategies. Among the four answer choices, option B correctly reflects this purpose, while the other options misunderstand the true role of a risk register.
A) To eliminate all identified risks
This option is incorrect because eliminating all risks is neither realistic nor feasible for any organization. No business environment can ever be completely risk-free. Attempting to eliminate every risk would require excessive costs, impose operational restrictions, and potentially make the organization unable to function. The actual goal of risk management is to identify risks, analyze their potential impact and likelihood, determine appropriate treatment strategies such as mitigation, transfer, avoidance, or acceptance, and then monitor them. The risk register is a tool for tracking this process rather than eliminating risks entirely.
B) To document, track, and monitor identified risks and mitigation plans
This is the correct answer because it accurately describes the purpose and function of a risk register. The risk register acts as a centralized and continuously updated record of all identified risks within the organization. It includes key details such as risk descriptions, risk owners, impact and likelihood assessments, risk ratings, mitigation strategies, residual risk evaluations, and status updates. By maintaining this information in an organized and structured format, the risk register ensures transparency, accountability, and consistent monitoring. It also helps management prioritize resources, evaluate progress, and make informed decisions. In Domain 1, this aligns with the broader framework of governance, documentation, and continuous risk management.
C) To act as a financial ledger for security spending
This option is incorrect because a financial ledger is an accounting tool used to track expenses, budgets, and financial transactions. While some risk mitigation measures may involve costs, the risk register is not designed to function as a budgeting or financial management tool. Any cost-related details that may appear in the risk register are included only to support decision-making, not to replace financial systems.
D) To store all compliance audit findings
This option is also incorrect. Compliance audit findings are documented in audit reports and other compliance records. Although some audit findings may identify risks, the risk register is not the main storage location for all audit information. Its focus remains on risks and their mitigation plans.
Thus, option B is the most accurate description of the risk register’s purpose.
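As an illustration of how a register entry can be represented in practice, here is a minimal, hypothetical Python sketch; the field names and rating formula are illustrative, not mandated by any framework.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One row of a simple risk register."""
    risk_id: str
    description: str
    owner: str
    likelihood: int        # e.g., 1 (rare) to 5 (almost certain)
    impact: int            # e.g., 1 (negligible) to 5 (severe)
    mitigation: str
    status: str = "Open"   # Open, Mitigating, Accepted, Closed
    last_reviewed: date = field(default_factory=date.today)

    @property
    def rating(self) -> int:
        # Simple qualitative rating: likelihood x impact
        return self.likelihood * self.impact

# Example entry, tracked and re-prioritized over time
register = [
    RiskEntry("R-001", "Unpatched VPN gateway", "IT Ops",
              likelihood=4, impact=5,
              mitigation="Apply vendor patch; restrict management access"),
]
register.sort(key=lambda r: r.rating, reverse=True)  # highest-rated risks first
```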
Question 102:
Under Domain 2, which data handling control ensures that sensitive data remains protected when stored on removable media?
A) Data labeling
B) Encryption and access control
C) Retention scheduling
D) File compression
Answer: B) Encryption and access control.
Explanation:
In Domain 2: Asset Security, one of the primary objectives is ensuring that data remains protected throughout its entire lifecycle, including when it is stored on removable media such as USB drives, external hard drives, SD cards, and other portable storage devices. The correct answer, option B, highlights encryption and access control as the essential measures for maintaining the confidentiality and integrity of sensitive information on such media. Each of the four answer choices represents a different type of data handling control, but only one directly addresses the risks associated with removable media storage.
A) Data labeling
Data labeling is an important component of asset security because it ensures that data is accurately classified based on its sensitivity. Labels such as public, internal, confidential, or highly sensitive notify users of required handling procedures. However, labeling alone does not provide technical protection. Even if a removable device is clearly labeled as containing sensitive data, it does not prevent unauthorized individuals from accessing the information if the device is lost or stolen. Thus, while labeling improves awareness, it cannot ensure the protection of data stored on portable media.
B) Encryption and access control
This is the correct answer because encryption and access control directly address the vulnerabilities associated with removable media. Encryption protects sensitive information by converting it into an unreadable form that cannot be interpreted without the correct decryption key. If the removable device is lost or stolen, the encrypted data remains secure. Access control ensures that only authorized individuals can decrypt or use the stored data. This may involve password protection, multifactor authentication, or strict key-management procedures. Together, encryption and access control form a strong technical defense that protects sensitive information even when the physical security of the device cannot be guaranteed.
C) Retention scheduling
Retention scheduling defines how long data should be stored before being archived or destroyed. Although this is a vital part of data lifecycle management, it does not provide real-time protection for data on removable media. A device following proper retention rules could still be lost, stolen, or accessed by unauthorized individuals. Retention scheduling addresses longevity, not confidentiality or access protection.
D) File compression
File compression reduces file size to save storage space or speed up transfers, but it does not add any security. Compressed files remain readable unless combined with encryption. Compression alone cannot protect sensitive data on removable devices.
Therefore, only option B provides the necessary protection mechanisms for safeguarding sensitive data stored on removable media.
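As a hedged illustration of the control, the sketch below encrypts a file before it is written to removable media using the Python cryptography package (assumed to be installed); in practice organizations more often rely on device-level encryption such as BitLocker To Go or LUKS, with keys managed centrally so that possession of the drive alone grants no access.

```python
from cryptography.fernet import Fernet  # pip install cryptography (assumed available)

# Key management is the access-control step: the key must be stored separately
# from the removable device (e.g., in a key vault), never alongside it.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("customer_report.csv", "rb") as f:   # placeholder source file
    ciphertext = cipher.encrypt(f.read())

# Only the encrypted copy goes onto the USB drive (placeholder mount point).
with open("/media/usb/customer_report.csv.enc", "wb") as f:
    f.write(ciphertext)

# Later, an authorized user holding the key can recover the data.
plaintext = cipher.decrypt(ciphertext)
```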
Question 103:
Under Domain 3, what type of security model focuses on preventing conflicts of interest in organizations handling sensitive data for multiple clients?
A) Clark-Wilson Model
B) Bell-LaPadula Model
C) Brewer-Nash (Chinese Wall) Model
D) Biba Model
Answer: C) Brewer-Nash (Chinese Wall) Model.
Explanation:
Within security architecture, different access control models are designed to protect different aspects of information security, such as confidentiality, integrity, and conflict-of-interest separation. The question asks which model addresses conflict-of-interest scenarios, and the correct answer is the Brewer-Nash Model, also known as the Chinese Wall Model. Each of the four options represents a well-known access control model, but only one is specifically designed to prevent conflicts of interest by dynamically restricting access based on a user's previous actions.
A) Clark-Wilson Model
The Clark-Wilson Model is centered on maintaining data integrity. It enforces well-formed transactions, separation of duties, and uses transformation procedures to ensure that only authorized programs can modify data. It operates on the principle that users cannot directly access data; instead, they must go through controlled applications that enforce integrity constraints. This model is useful in financial systems, commercial applications, and environments where data accuracy and correctness are paramount. However, it does not address conflict-of-interest scenarios. Its primary focus is ensuring that data is modified in a controlled and verifiable manner. Therefore, while Clark-Wilson ensures integrity, it does not manage access decisions based on conflicting business relationships, making it unsuitable for controlling conflicts of interest.
B) Bell-LaPadula Model
The Bell-LaPadula Model is one of the earliest formal security models and is primarily concerned with maintaining confidentiality. It uses rules such as no read up and no write down, designed to prevent users from accessing sensitive information above their clearance or leaking information to lower classification levels. This model is widely used in military and government systems where confidentiality is the primary concern. Although effective for restricting unauthorized disclosure of classified information, Bell-LaPadula does not evaluate or address conflicts of interest. It does not dynamically change access rights based on user activity or their interactions with specific datasets. Its focus remains strictly on secrecy levels, not business competition or interest-based restrictions.
C) Brewer-Nash (Chinese Wall) Model
This is the correct answer because the Brewer-Nash Model is specifically designed to prevent conflicts of interest within commercial environments such as consulting, law, or financial firms. The model dynamically adjusts a user’s access based on the data they have already accessed. Once a user accesses data from one company within a conflict-of-interest class, the model prevents them from accessing data belonging to competitors within the same class. This prevents a consultant or analyst, for example, from inadvertently or intentionally using confidential information from one client to benefit another competing client. The Chinese Wall Model is unique among access control models because it does not rely on fixed security labels. Instead, it uses the user’s past actions to determine what they can access in the future. This dynamic approach directly addresses real-world scenarios where access needs to change based on relationships and business conflicts.
D) Biba Model
The Biba Model focuses on integrity, but in contrast to Bell-LaPadula, it prevents unauthorized modification rather than disclosure. It uses rules such as no write-up, no read down. These rules ensure that high-integrity data cannot be contaminated by lower-integrity sources. Biba is commonly applied in industrial systems or environments where data correctness is more important than confidentiality. However, like the other integrity-focused models, Biba does not address conflict-of-interest concerns. It does not analyze whether accessing information from two competing sources could create ethical or legal risks.
In conclusion, among the four models, only the Brewer-Nash Model directly addresses conflict-of-interest scenarios through dynamically changing access rules, making option C the correct choice.
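The dynamic nature of the Brewer-Nash rule can be sketched in a few lines of Python; the class name, companies, and conflict classes below are hypothetical and exist only to show how a user's access history blocks later access to competitors in the same class.

```python
# Hypothetical Brewer-Nash (Chinese Wall) check.
# Each conflict-of-interest class groups competing companies.
CONFLICT_CLASSES = [
    {"Bank A", "Bank B"},        # competing banks
    {"Oil Co X", "Oil Co Y"},    # competing oil companies
]

class ChineseWall:
    def __init__(self):
        self.history = {}  # user -> set of companies already accessed

    def can_access(self, user: str, company: str) -> bool:
        accessed = self.history.get(user, set())
        for coi_class in CONFLICT_CLASSES:
            if company in coi_class:
                # Deny if the user has already touched a different company in the same class.
                if any(prev in coi_class and prev != company for prev in accessed):
                    return False
        return True

    def access(self, user: str, company: str) -> bool:
        if self.can_access(user, company):
            self.history.setdefault(user, set()).add(company)
            return True
        return False

wall = ChineseWall()
print(wall.access("analyst1", "Bank A"))    # True: first access in the class
print(wall.access("analyst1", "Bank B"))    # False: conflict with Bank A
print(wall.access("analyst1", "Oil Co X"))  # True: different conflict class
```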
Question 104:
In Domain 4, what is the most effective way to ensure secure remote administrative access to critical systems?
A) Enable Telnet with strong passwords
B) Use encrypted channels such as SSH or VPN
C) Allow access through unsecured public Wi-Fi
D) Depend solely on firewall whitelisting
Answer: B) Use encrypted channels such as SSH or VPN.
Explanation:
Secure remote access is one of the most critical components of modern cybersecurity. As employees, administrators, and external partners increasingly need to access systems from distributed locations, it becomes essential to ensure that remote connections do not create opportunities for attackers to intercept data, compromise credentials, or gain unauthorized entry into the network. Among the four provided options, using encrypted channels such as SSH or VPN is the correct and most secure method for implementing remote access. Each answer choice reflects a different approach, but only one provides the required level of protection for sensitive communications over potentially insecure networks.
A) Enable Telnet with strong passwords
This option is incorrect because Telnet is an outdated protocol that transmits all data, including usernames and passwords, in plaintext. Even if strong passwords are used, the fundamental weakness remains. Any attacker monitoring the network with a simple packet-sniffing tool could easily capture login credentials, commands, and sessions. Strengthening the password does not address the core problem: Telnet itself provides no encryption and therefore cannot protect confidentiality or integrity. Telnet is considered insecure by modern standards and should be replaced entirely with secure remote protocols that use cryptographic protections.
B) Use encrypted channels such as SSH or VPN
This is the correct answer because encrypted channels protect remote communication against eavesdropping, interception, and manipulation. Secure Shell (SSH) is a cryptographic protocol that provides secure command-line access, file transfers, and tunneling. It ensures that passwords, keys, commands, and data remain encrypted throughout the session. Similarly, Virtual Private Networks (VPNs) create encrypted tunnels between remote users and internal networks, protecting all data packets that travel across public or untrusted networks. Using encryption in remote access significantly reduces the risk of attacks such as man-in-the-middle attacks, credential harvesting, and unauthorized access. It also supports strong authentication, including key-based access and multifactor authentication, which greatly enhances overall security. Encrypted channels are the industry standard for secure remote access because they combine confidentiality, integrity, and authentication in one solution.
C) Allow access through unsecured public Wi-Fi
This option is extremely unsafe and should never be used for remote access without additional protections. Public Wi-Fi networks are inherently insecure and are frequent targets for cybercriminals who set up rogue access points, conduct man-in-the-middle attacks, or intercept unencrypted traffic. Any user who logs into a sensitive system over open Wi-Fi without encryption places organizational data at severe risk. Attackers can capture credentials, session cookies, internal network addresses, or even inject malicious payloads. Even casual browsing is unsafe on unsecured public networks, and remote access to critical systems is far more dangerous. Secure, encrypted channels, such as VPN or SSH, must always be used if remote access occurs over untrusted networks.
D) Depend solely on firewall whitelisting
While firewall whitelisting can control which IP addresses are allowed to connect to certain systems, it is insufficient by itself for secure remote access. Firewalls do not encrypt communication, nor do they authenticate users at a granular level. Attackers can spoof IP addresses, exploit compromised whitelisted devices, or take advantage of allowed pathways. A firewall is an important part of a defense-in-depth strategy, but it cannot replace encryption or secure remote protocols. Relying only on whitelisting ignores the need for encrypted communication and strong authentication.
In conclusion, only encrypted channels such as SSH or VPN provide the necessary level of protection for secure remote access, making option B the correct and most effective choice.
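For illustration, a hedged SSH example using the third-party Paramiko library (assumed installed) with key-based authentication is shown below; the host name, account, key path, and command are placeholders.

```python
import paramiko  # pip install paramiko (assumed available)

client = paramiko.SSHClient()
# In production, pre-load and verify known host keys rather than accepting unknown ones.
client.load_system_host_keys()
client.connect(
    "admin-jumpbox.example.com",                 # placeholder host
    username="ops_admin",                        # placeholder admin account
    key_filename="/home/ops/.ssh/id_ed25519",    # key-based auth; no password on the wire
)

# All commands and output travel through the encrypted SSH channel.
stdin, stdout, stderr = client.exec_command("systemctl status sshd")
print(stdout.read().decode())
client.close()
```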
Question 105:
Under Domain 5, what type of attack can occur if session tokens are not invalidated upon logout?
A) Denial-of-service attack
B) Session fixation or hijacking
C) Password spraying
D) Man-in-the-middle with ARP spoofing
Answer: B) Session fixation or hijacking.
Explanation:
Failure to invalidate tokens allows attackers to reuse or predict session IDs, gaining unauthorized access to authenticated sessions. Proper session management and token regeneration prevent this.
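A minimal, framework-agnostic sketch of the two key session-management controls follows: the token is regenerated at login (defeating fixation) and invalidated server-side at logout (defeating replay of a captured token). The in-memory store and function names are illustrative.

```python
import secrets

SESSIONS = {}  # token -> user; stands in for a server-side session store

def login(user: str) -> str:
    # Regenerate the token at authentication time to defeat session fixation:
    # never keep a token that existed before the user proved their identity.
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = user
    return token

def logout(token: str) -> None:
    # Invalidate the token server-side so a stolen token cannot be replayed after logout.
    SESSIONS.pop(token, None)

def is_authenticated(token: str) -> bool:
    return token in SESSIONS
```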
Question 106:
Under Domain 6, what is the main objective of vulnerability scanning?
A) To exploit identified weaknesses
B) To identify potential weaknesses without exploiting them
C) To generate social engineering reports
D) To stress-test network hardware
Answer: B) To identify potential weaknesses without exploiting them.
Explanation:
In the field of cybersecurity assessment, understanding the purpose and scope of different testing methodologies is essential. One such methodology is vulnerability scanning, which plays a foundational role in identifying weaknesses before they are exploited by malicious actors. The correct answer, option B, accurately describes the primary purpose of vulnerability scanning: to identify potential weaknesses without exploiting them. By examining how each option aligns or misaligns with this purpose, we gain a clearer understanding of what vulnerability scanning is—and what it is not.
A) To exploit identified weaknesses
This option is incorrect because exploiting vulnerabilities is not part of vulnerability scanning. Exploitation is performed during penetration testing, which goes beyond mere identification of weaknesses and attempts to actively compromise systems to determine the real-world impact of vulnerabilities. Vulnerability scanners do not attempt to break into systems or attempt exploitation techniques such as buffer overflow attacks, privilege escalation attempts, SQL injection exploitation, or unauthorized access attempts. Their role is limited to detection, reporting, and prioritization—not exploitation. Attempting to exploit vulnerabilities without proper authorization could cause system instability, downtime, or data loss, which is why exploitation is reserved for controlled penetration tests performed under strict guidelines.
B) To identify potential weaknesses without exploiting them
This is the correct answer because vulnerability scanning focuses exclusively on detection rather than exploitation. A vulnerability scanner analyzes systems, applications, configurations, and network services to identify known vulnerabilities, misconfigurations, missing patches, weak protocols, or outdated software versions. It typically relies on a large database of known vulnerabilities (such as CVE entries) and compares system information against those definitions. The purpose is to give organizations visibility into their security posture, allowing them to prioritize patching and remediation efforts. Vulnerability scanning is safe, automated, repeatable, and non-intrusive, making it suitable for routine assessments. Since scanners do not attempt exploitation, they pose minimal risk to system stability and can be used more frequently than penetration testing.
C) To generate social engineering reports
This option is incorrect because social engineering testing involves evaluating the human element of security, not system vulnerabilities. Social engineering assessments may include phishing campaigns, vishing calls, impersonation attempts, or physical intrusion tests. These tests measure employee awareness, policy compliance, and organizational training effectiveness. Vulnerability scanners do none of this. They do not interact with employees, test human behavior, or simulate social engineering scenarios. While social engineering is a critical part of security testing, it falls under a different category entirely and is not related to vulnerability scanning’s goal of identifying technical weaknesses.
D) To stress-test network hardware
This option is incorrect because stress-testing, load-testing, and performance-testing are related to system capacity and resilience, not vulnerability identification. Stress-testing evaluates how systems behave under extreme loads, high traffic, or resource exhaustion, helping organizations understand system performance limits and stability. Vulnerability scanners do not put systems under heavy load or attempt to overwhelm hardware. Instead, they collect information through safe scanning processes, checking ports, services, banners, and software versions. Stress-testing requires specialized tools designed to simulate heavy traffic, denial-of-service conditions, or operational strain. These goals differ entirely from vulnerability scanning, which focuses solely on identifying potential security weaknesses without impacting system performance.
In summary, vulnerability scanning is a key security assessment technique designed to identify weaknesses safely and efficiently. Only option B accurately reflects its purpose, making it the correct answer.
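To illustrate the detection-only nature of scanning, the hedged sketch below performs a simple banner grab and compares the reported version against a small local list of flagged versions; commercial and open-source scanners such as Nessus or OpenVAS do this at scale against CVE data. The target address, port, and vulnerable-version list are hypothetical.

```python
import socket

KNOWN_VULNERABLE = {"OpenSSH_7.2"}  # hypothetical local list of flagged versions

def banner_check(host: str, port: int = 22) -> None:
    # Non-intrusive: connect, read the service banner, and report. No exploitation is attempted.
    with socket.create_connection((host, port), timeout=5) as sock:
        banner = sock.recv(256).decode(errors="replace").strip()
    flagged = any(version in banner for version in KNOWN_VULNERABLE)
    print(f"{host}:{port} banner={banner!r} vulnerable={flagged}")

banner_check("192.0.2.10")  # placeholder target (documentation address range)
```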
Question 107:
Under Domain 7, what phase of the incident response process includes lessons learned and root cause analysis?
A) Containment
B) Eradication
C) Recovery
D) Post-incident activity
Answer: D) Post-incident activity.
Explanation:
Incident response is a structured process that organizations follow to manage and resolve security events effectively. The goal is not only to stop the incident itself, but also to understand why it occurred, strengthen defenses, and prevent similar events in the future. The four options provided correspond to different phases of the incident response lifecycle. The correct answer, option D, refers to post-incident activity, which is the crucial final stage where lessons learned are captured and long-term improvements are implemented. Understanding how this phase differs from containment, eradication, and recovery helps clarify why it is essential.
A) Containment
Containment is one of the earlier stages of the incident response process, and its purpose is to limit the damage caused by a security event. During containment, security teams take immediate actions to prevent the situation from worsening, such as isolating infected systems, blocking malicious network traffic, disabling compromised accounts, or segmenting affected environments. The goal is to stop the spread of the attack without necessarily removing the threat completely. Containment can be short-term (quick isolation) or long-term (more strategic measures to prevent further compromise). However, containment focuses strictly on controlling the incident while it is happening. It does not involve reviewing what happened afterward or identifying improvements, which means it cannot be considered post-incident activity.
B) Eradication
Eradication follows containment and focuses on removing the root cause of the incident. This may involve deleting malware, closing exploited vulnerabilities, removing unauthorized users, patching systems, cleaning corrupted files, or strengthening configurations. The purpose of eradication is to ensure that the attack cannot reappear from the same source. While eradication is essential to restoring the security of the environment, it is still part of the active response phase. It deals with eliminating the threat, not learning from the event or documenting the organization’s response. Because it addresses immediate technical remediation, it does not fall under post-incident activities.
C) Recovery
Recovery is the phase where systems and services are restored to normal operation. This can include rebuilding systems, restoring data from backups, revalidating configurations, and bringing systems back online gradually to ensure no residual threats remain. Recovery also includes monitoring systems for any signs of recurring compromise. The recovery phase is still part of the operational response to an incident, even though it may occur after the threat has been eradicated. Recovery ensures business continuity, but it is not focused on analyzing the incident, documenting lessons learned, or improving future defenses. Therefore, it is not part of post-incident activity.
D) Post-incident activity
This is the correct answer because post-incident activity occurs after the incident has been fully contained, eradicated, and resolved. This phase is essential for continuous improvement. During post-incident activity, the organization conducts a thorough review of the event, often called a lessons-learned meeting, to understand what happened, why it happened, how effectively the response was executed, and what gaps need to be addressed. Documentation is completed, response procedures are updated, and policies or technical controls may be revised. Additional training, awareness activities, or upgrades may be implemented to strengthen resilience. Metrics and reports may be generated for leadership, auditors, or regulatory bodies. Post-incident activity ensures that each incident becomes a learning opportunity.
This final stage is critical for improving the organization’s security posture and preparedness, making option D the correct choice.
Question 108:
Under Domain 8, what is the benefit of implementing code repositories with version control (e.g., Git) in secure development practices?
A) It allows unrestricted changes to code
B) It provides rollback capability, traceability, and change accountability
C) It automatically encrypts all code files
D) It eliminates the need for developer authentication
Answer: B) It provides rollback capability, traceability, and change accountability.
Explanation:
Configuration management and version control systems play a crucial role in software development, DevSecOps, and overall security governance. Their purpose is to provide structure, organization, and accountability when managing changes to code, configurations, or system components. The correct answer, option B, accurately reflects the benefits of version control and configuration management: rollback capability, traceability, and change accountability. These functions ensure that development teams maintain the integrity, reliability, and security of the codebase. To fully understand why option B is correct, it is helpful to examine what each option implies and how it aligns—or fails to align—with the actual functions of version control.
A) It allows unrestricted changes to code
This option is incorrect because version control systems do not allow unrestricted changes. In fact, they do the opposite: they structure and restrict how changes are made. Version control provides mechanisms such as branching, merging, access controls, and permission-based workflows to ensure that code modifications are controlled, reviewed, and authenticated. Allowing unrestricted changes would create chaos, introduce security vulnerabilities, and compromise software integrity. Instead of encouraging unrestricted modifications, version control enforces guidelines that help ensure stability. Unrestricted changes would contradict every principle of secure software development. In professional environments, version control systems help developers follow defined processes such as peer reviews, approvals, and CI/CD checks before any code is merged. For these reasons, option A does not represent the purpose of configuration management or version control.
B) It provides rollback capability, traceability, and change accountability
This is the correct answer because it captures the core functions of version control systems like Git, Subversion, or Mercurial. Rollback capability allows developers to revert to a previous version of the code when issues arise, such as bugs or configuration errors. This ensures resilience and minimizes downtime or system disruption. Traceability is another essential function: version control systems track every change made, including who made it, when it was made, what was changed, and why. Developers can examine commit logs, diff outputs, and historical records to understand how the codebase evolved. Accountability is also a key benefit. Every commit is tied to a specific user, enabling organizations to enforce change control policies and maintain a clear audit trail. This is essential for compliance with standards such as PCI-DSS, ISO 27001, and SOC 2, which require organizations to document changes and maintain full visibility. Version control ensures that changes are intentional, reviewable, reversible, and attributable to specific developers or processes. These capabilities collectively support secure, reliable, and well-governed software development practices, making option B the correct choice.
C) It automatically encrypts all code files
This option is incorrect because version control systems do not automatically encrypt code files. While organizations may choose to store repositories on encrypted disks or use encrypted communication channels such as SSH, these protections exist at the system or transport layer—not within the version control system itself. Version control tools focus on managing changes, not encrypting data. Encryption may be applied externally, but it is not a built-in universal feature or a defining purpose of version control. Furthermore, automatic encryption of all code files would interfere with standard version control features such as diffs and merges. Therefore, encryption is not a native function, and this option does not reflect the purpose of version control.
D) It eliminates the need for developer authentication
This option is incorrect because authentication is essential in any secure development environment. Version control systems require developer identification to track changes, ensure accountability, and protect repositories from unauthorized modifications. Eliminating the need for authentication would create serious security risks, allowing anonymous or malicious users to alter critical code without traceability. Instead, version control systems rely on authentication methods such as SSH keys, tokens, passwords, or multi-factor authentication. Authentication is fundamental, not optional, and certainly not something that version control eliminates.
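A short, hedged example of rollback capability and traceability, driving the Git command line from Python with subprocess (the file name and commit hash are placeholders, and the script is assumed to run inside a Git repository):

```python
import subprocess

def git(*args: str) -> str:
    # Thin wrapper around the git CLI; assumes git is installed and a repo is present.
    return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout

# Traceability: every change is attributed to an author, a timestamp, and a message.
print(git("log", "--oneline", "-5"))
print(git("log", "-1", "--format=%an <%ae> %ad", "--", "payment_service.py"))

# Rollback: create a new commit that reverses a faulty change without rewriting history.
# git("revert", "--no-edit", "abc1234")  # placeholder commit hash
```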
Question 109:
Under Domain 1, what is a policy exception?
A) A situation where a policy is ignored permanently
B) A formally approved temporary deviation from policy with documented risk acceptance
C) An unapproved change to a policy
D) A routine operational workaround
Answer: B) A formally approved temporary deviation from policy with documented risk acceptance.
Explanation:
In information security and risk management, organizations establish policies to ensure consistent, secure, and compliant operations. However, there are situations in which strict adherence to a policy may not be feasible due to technical, operational, or business constraints. In such cases, organizations may allow a formal mechanism to temporarily deviate from the policy. This mechanism is known as a policy exception, and the correct answer is option B: a formally approved temporary deviation from policy with documented risk acceptance. Understanding why this is the correct answer requires examining each of the provided options in detail.
A) A situation where a policy is ignored permanently
This option is incorrect because permanently ignoring a policy is not an exception; it is a policy violation. Permanent disregard for established controls undermines governance, increases organizational risk, and can result in compliance failures, security breaches, or regulatory penalties. Exceptions are temporary and controlled deviations, not permanent neglect. Ignoring a policy without formal approval, documentation, or risk assessment introduces unpredictability and eliminates accountability, which defeats the purpose of risk management. Therefore, permanent noncompliance does not qualify as a policy exception.
B) A formally approved temporary deviation from policy with documented risk acceptance
This is the correct answer because it accurately describes the concept of a policy exception. A policy exception is a controlled, temporary allowance to bypass a specific policy requirement when compliance is impractical or impossible. Such deviations must be formally approved by the relevant authority—often a security officer, risk manager, or executive sponsor—and must include a documented risk assessment. This documentation explains why the exception is necessary, the potential risks involved, and the measures taken to mitigate those risks. By requiring formal approval and risk acknowledgment, policy exceptions maintain accountability, traceability, and risk awareness while allowing operational flexibility. Policy exceptions also typically include an expiration date or a condition under which normal policy adherence must resume, ensuring that the deviation remains temporary and monitored.
C) An unapproved change to a policy
This option is incorrect because an unapproved change to a policy is not a policy exception; it is an unauthorized modification. Policy exceptions do not alter the policy itself—they temporarily allow deviation while the policy remains in effect. Changing a policy without formal approval undermines governance and can introduce inconsistencies in enforcement. Exceptions are designed to work within the policy framework, not to replace or modify it. An unapproved policy change bypasses formal processes, does not involve documented risk acceptance, and lacks the controlled structure that defines a legitimate policy exception.
D) A routine operational workaround
This option is also incorrect because routine workarounds are informal practices that employees may adopt to bypass inconvenient policy requirements. Workarounds often develop organically without formal approval, risk assessment, or oversight. While they may temporarily solve operational problems, they introduce unmanaged risk and can become security vulnerabilities over time. Unlike a formal policy exception, a routine workaround does not provide documentation, accountability, or risk mitigation. Organizations generally seek to eliminate informal workarounds or convert them into formal exceptions or policy updates to maintain control.
In conclusion, policy exceptions are formal, temporary, and controlled deviations from established policies that include documented risk acceptance. Option B correctly captures all of these elements. Options A, C, and D describe situations that either lack formal approval, documentation, or accountability, making them inappropriate representations of a policy exception. By following the structured exception process, organizations can balance operational flexibility with security and compliance objectives, ensuring that deviations are safe, monitored, and temporary.
Question 110:
Under Domain 2, which process ensures data is usable only for a specified purpose and timeframe?
A) Data integrity management
B) Data retention and disposal policy
C) Data minimization
D) Data masking
Answer: C) Data minimization.
Explanation:
Data minimization ensures only necessary data is collected, stored, and processed for defined purposes and durations, supporting privacy regulations like GDPR.
Question 111:
Under Domain 3, which security principle ensures that a system continues functioning correctly even after some components fail?
A) Fail-secure design
B) Defense in depth
C) Fault tolerance
D) Obfuscation
Answer: C) Fault tolerance.
Explanation:
In the field of information security and system design, organizations implement various strategies to ensure that systems remain reliable, resilient, and secure under different operational conditions. One key aspect of system reliability is the ability to continue operating despite failures, whether these are hardware malfunctions, software errors, or network outages. The correct answer to the question, therefore, is option C: fault tolerance. Fault tolerance refers to a system’s ability to maintain its intended function even when one or more components fail. To understand why this is the correct choice, it is important to analyze each of the four options and their roles in system design.
A) Fail-secure design
Fail-secure design refers to a system’s ability to default to a secure state in the event of a failure. The primary goal is to protect confidentiality and integrity by ensuring that security mechanisms remain effective even during system errors or power failures. For example, a fail-secure door lock remains locked when power is lost, preventing unauthorized access. While fail-secure systems are essential for security, they do not inherently provide continuous availability or operational continuity. In fact, a fail-secure system may intentionally restrict access or disable functionality to maintain security, which could interrupt normal operations. Therefore, while fail-secure design enhances security, it is not the same as fault tolerance, which emphasizes system reliability and uninterrupted functionality.
B) Defense in depth
Defense in depth is a layered security strategy in which multiple controls are deployed to protect assets. These layers can include physical security, network segmentation, firewalls, intrusion detection systems, antivirus software, access controls, and policies. The idea is that if one layer fails, others still provide protection. Defense in depth is critical for mitigating risks and improving overall security posture, but it does not ensure operational continuity in the presence of system failures. It focuses on security redundancy rather than system reliability. In other words, defense in depth reduces the likelihood of compromise but does not guarantee that a system continues to function when a component fails. Therefore, defense in depth, although valuable, is not the correct answer in the context of fault-tolerant systems.
C) Fault tolerance
This is the correct answer because fault tolerance specifically addresses the ability of a system to continue operating properly even when components fail. Fault-tolerant systems achieve this by incorporating redundancy, error detection, and automatic recovery mechanisms. Examples include redundant power supplies, mirrored storage, RAID configurations, clustered servers, and failover systems. Fault tolerance ensures that a failure in one component does not lead to a total system outage. This is critical for mission-critical applications such as financial systems, healthcare systems, industrial control systems, and cloud services, where downtime can lead to severe operational, financial, or safety consequences. Fault tolerance is an essential principle in both high-availability design and disaster recovery planning. It ensures that the system maintains its intended function, regardless of individual component failures, aligning directly with the concept described in the question.
D) Obfuscation
Obfuscation refers to techniques used to make code, data, or communications difficult to understand or interpret by unauthorized individuals. Common uses include source code obfuscation to prevent reverse engineering, data obfuscation to protect sensitive information, or network obfuscation to hide communications. While obfuscation is a useful security measure to protect intellectual property and confidentiality, it does not provide operational continuity or resilience. Obfuscation does not address hardware failures, software crashes, or system outages. It is a security measure rather than a reliability measure, which is why it cannot be equated with fault tolerance.
In summary, fault tolerance is a foundational principle in system and software design aimed at maintaining operational continuity despite component failures. It relies on techniques such as redundancy, error detection, and failover mechanisms to ensure that systems remain functional under adverse conditions. Unlike a fail-secure design, which prioritizes security over operational continuity, or defense in depth, which focuses on layered security protections, fault tolerance directly addresses the system’s ability to sustain its operations. Obfuscation, while useful for security, does not contribute to system availability or reliability.
By implementing fault-tolerant architectures, organizations ensure that mission-critical applications and systems can withstand component failures without disrupting service. This is particularly important in environments where downtime can result in significant financial losses, safety hazards, or reputational damage. In essence, fault tolerance combines redundancy, resilience, and proactive failure management to create systems that are robust, reliable, and capable of maintaining functionality under a wide range of adverse conditions. This makes option C the definitive choice when considering the ability of a system to continue functioning despite failures.
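As a minimal illustration of fault tolerance through redundancy, the sketch below assumes two redundant service endpoints (the URLs are placeholders): if the primary fails, the same request is retried against the standby, so the caller never observes the component failure.

```python
import urllib.request

ENDPOINTS = [
    "https://primary.internal.example/api/health",   # placeholder primary node
    "https://standby.internal.example/api/health",   # placeholder redundant node
]

def fetch_with_failover(urls=ENDPOINTS, timeout=3) -> bytes:
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()          # first healthy node answers the request
        except OSError as exc:              # connection refused, timeout, DNS failure...
            last_error = exc                # tolerate the fault and try the next node
    raise RuntimeError("all redundant endpoints failed") from last_error
```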
Question 112:
Under Domain 4, what is the function of a demilitarized zone (DMZ) in a network architecture?
A) To store encrypted backups
B) To isolate internal systems from external access
C) To replace the VPN infrastructure
D) To block all external communications
Answer: B) To isolate internal systems from external access.
Explanation:
A DMZ acts as a buffer zone between the public internet and internal network, hosting systems that need limited external access, like web or mail servers, reducing risk exposure.
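The hedged sketch below models typical DMZ traffic rules as data: the internet may reach the DMZ only on published service ports, the DMZ may reach a narrow set of internal services, and direct internet-to-internal traffic is denied by default. Zone names and port numbers are illustrative.

```python
# Illustrative DMZ policy: (source zone, destination zone, destination port) tuples that are allowed.
ALLOWED_FLOWS = {
    ("internet", "dmz", 443),      # public HTTPS to the web server in the DMZ
    ("internet", "dmz", 25),       # inbound mail to the relay in the DMZ
    ("dmz", "internal", 1433),     # web tier to one specific internal database port
    ("internal", "dmz", 443),      # internal users reaching DMZ-hosted services
}

def is_allowed(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    # Anything not explicitly permitted is denied, including internet -> internal.
    return (src_zone, dst_zone, dst_port) in ALLOWED_FLOWS

print(is_allowed("internet", "dmz", 443))       # True
print(is_allowed("internet", "internal", 443))  # False: blocked by the DMZ design
```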
Question 113:
Under Domain 5, what controls can mitigate password-guessing attacks without user inconvenience?
A) Complex password rules only
B) Account lockout thresholds and CAPTCHA challenges
C) Mandatory 2-hour password expiry
D) Eliminating password reuse
Answer: B) Account lockout thresholds and CAPTCHA challenges.
Explanation:
Lockout mechanisms and human verification slow brute-force attempts while maintaining usability. Overly strict expiration policies increase user fatigue and poor password behavior.
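A minimal sketch of an account lockout threshold is shown below; the attempt limit and lockout window are illustrative, and production systems often add progressive delays or a CAPTCHA as the threshold is approached.

```python
import time

MAX_ATTEMPTS = 5                 # illustrative threshold
LOCKOUT_SECONDS = 15 * 60        # illustrative lockout window
failed = {}                      # username -> (consecutive failures, time of last failure)

def is_locked(user: str) -> bool:
    count, last = failed.get(user, (0, 0.0))
    return count >= MAX_ATTEMPTS and (time.time() - last) < LOCKOUT_SECONDS

def record_attempt(user: str, success: bool) -> None:
    if success:
        failed.pop(user, None)                    # reset the counter on a good login
    else:
        count, _ = failed.get(user, (0, 0.0))
        failed[user] = (count + 1, time.time())   # count the failure and stamp the time
```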
Question 114:
Under Domain 6, what is the key difference between a security audit and a penetration test?
A) Audits exploit systems while pen tests review paperwork
B) Audits assess compliance; penetration tests evaluate defenses through exploitation
C) Pen tests are purely theoretical
D) Both are identical in purpose
Answer: B) Audits assess compliance; penetration tests evaluate defenses through exploitation.
Explanation:
Audits compare operations against standards or policies, while penetration tests simulate attacks to uncover real exploitable vulnerabilities, emphasizing technical exposure.
Question 115:
Under Domain 7, what is the purpose of a tabletop exercise in disaster recovery planning?
A) To fully activate all systems
B) To simulate recovery scenarios without impacting operations
C) To execute data restoration live
D) To train new employees only
Answer: B) To simulate recovery scenarios without impacting operations.
Explanation:
Tabletop exercises allow teams to discuss and rehearse responses to hypothetical events, identifying procedural gaps before real disruptions occur.
Question 116:
Under Domain 8, what coding flaw leads to attackers executing commands via unvalidated input?
A) Race condition
B) Command injection
C) Heap overflow
D) Data truncation
Answer: B) Command injection.
Explanation:
Command injection occurs when unvalidated input is concatenated into system commands, allowing arbitrary code execution. Proper input validation and escaping prevent it.
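A short, hedged illustration of the flaw and its remediation in Python: building a shell string from user input allows injection, while validating the input and passing it as a discrete argument with no shell does not. The ping scenario and input value are hypothetical.

```python
import subprocess
import ipaddress

user_input = "8.8.8.8; rm -rf /tmp/important"   # attacker-controlled value

# VULNERABLE: concatenating the input into a shell command lets ';' start a second command.
# subprocess.run(f"ping -c 1 {user_input}", shell=True)

# SAFER: validate the input first, then pass it as a single argument with no shell involved.
try:
    target = str(ipaddress.ip_address(user_input))
except ValueError:
    raise SystemExit("rejected: input is not a valid IP address")

subprocess.run(["ping", "-c", "1", target], check=False)
```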
Question 117:
Under Domain 1, which document outlines management’s overall security direction and high-level goals?
A) Security standard
B) Security guideline
C) Security policy
D) Security procedure
Answer: C) Security policy.
Explanation:
Policies define management intent and expectations. Standards, guidelines, and procedures are derived from policies, which form the governance framework’s foundation.
Question 118:
Under Domain 2, what classification control ensures that only authorized individuals can view or modify confidential data?
A) Data labeling
B) Access control matrix
C) Data categorization
D) Encryption
Answer: A) Data labeling.
Explanation:
Labeling identifies data sensitivity levels (e.g., Confidential, Public) and drives handling, storage, and access controls aligned with classification and clearance requirements.
Question 119:
Under Domain 3, what is the purpose of electromagnetic shielding in secure environments?
A) To prevent signal eavesdropping and emanations
B) To protect against data corruption from static discharge
C) To control temperature in server rooms
D) To enhance radio communications
Answer: A) To prevent signal eavesdropping and emanations.
Explanation:
TEMPEST and shielding techniques protect against EM leaks from devices, preventing attackers from capturing sensitive data through side-channel emissions.
Question 120:
Under Domain 4, which security mechanism ensures integrity and authenticity in VPN connections?
A) MD5 checksum only
B) IPsec with AH and ESP protocols
C) NAT without encryption
D) SSL stripped connections
Answer: B) IPsec with AH and ESP protocols.
Explanation:
IPsec uses the Authentication Header (AH) for integrity and the Encapsulating Security Payload (ESP) for confidentiality and authentication, ensuring secure and trusted VPN communication.