ISC CISSP Certified Information Systems Security Professional Exam Dumps and Practice Test Questions Set 7 Q121-140
Question 121:
Under Domain 1, what is the primary purpose of security governance frameworks such as COBIT and ISO 27001?
A) To replace technical security controls
B) To provide structured, repeatable approaches to aligning security with business objectives
C) To manage HR policies for cybersecurity teams
D) To define hardware procurement standards
Answer: B) To provide structured, repeatable approaches to aligning security with business objectives.
Explanation:
In modern organizations, aligning information security with business objectives is critical to ensure that security investments, policies, and practices support the overall goals of the enterprise. Security frameworks provide the guidance and structure necessary to achieve this alignment. The correct answer, option B, emphasizes that security frameworks provide structured, repeatable approaches to integrating security with business objectives. Examining each option in detail clarifies why this is the most accurate description of a security framework and why the other options are incorrect.
A) To replace technical security controls
This option is incorrect because security frameworks do not replace technical security controls such as firewalls, intrusion detection systems, encryption, or access controls. Instead, frameworks serve as a guiding structure to implement, manage, and assess these controls consistently. Technical security controls are the tools and mechanisms that enforce security, while frameworks define policies, processes, and best practices that guide how those tools should be deployed, monitored, and evaluated. Replacing technical controls with a framework would leave an organization without the actual protective mechanisms, which is neither practical nor secure.
B) To provide structured, repeatable approaches to aligning security with business objectives
This is the correct answer because it accurately captures the purpose and value of security frameworks. Frameworks such as NIST Cybersecurity Framework, ISO/IEC 27001, COBIT, and CIS Controls provide a structured methodology to design, implement, measure, and improve security programs. They ensure that security initiatives are not ad hoc but are aligned with the organization’s risk tolerance, strategic goals, and regulatory requirements. A structured approach includes standardized processes for risk assessment, policy development, incident response, continuous monitoring, and governance. Repeatability ensures that these processes can be consistently applied across the organization, enabling ongoing compliance, audit readiness, and continuous improvement. By linking security efforts directly to business objectives, frameworks help organizations optimize resource allocation, prioritize security investments, and reduce exposure to risks while supporting operational goals.
C) To manage HR policies for cybersecurity teams
This option is incorrect because security frameworks are not designed to manage human resources policies, payroll, training schedules, or staffing decisions for cybersecurity teams. While frameworks may recommend training and awareness programs as part of their controls, they do not provide the mechanisms to manage employment policies or personnel administration. HR management is a separate function, and although aligned security personnel are critical for implementing frameworks, the frameworks themselves focus on security governance, processes, and best practices rather than HR administration.
D) To define hardware procurement standards
This option is also incorrect because security frameworks do not focus on specifying hardware procurement standards. While frameworks may include guidelines for securely configuring or managing hardware, they do not dictate which vendors to choose, which models to purchase, or hardware lifecycle management. Procurement decisions are typically governed by organizational IT policies, procurement regulations, and vendor evaluation criteria, not by the security framework itself. The framework ensures that any acquired hardware meets the organization’s security requirements, but does not define purchasing standards.
In summary, security frameworks are essential for organizations seeking to systematically align their cybersecurity strategies with overall business objectives. They provide structured, repeatable approaches to managing risk, implementing security controls, measuring performance, and continuously improving security posture. Option B correctly reflects the purpose of a security framework. Options A, C, and D either mischaracterize the role of a framework or focus on unrelated operational areas, such as replacing technical controls, managing HR, or defining procurement standards. By following a recognized framework, organizations can ensure that security efforts are consistent, auditable, and aligned with their strategic priorities.
Question 122:
Under Domain 2, what data state is most vulnerable to interception and requires strong encryption and integrity controls?
A) Data at rest
B) Data in use
C) Data in transit
D) Data archived
Answer: C) Data in transit.
Explanation:
In information security, protecting data throughout its lifecycle is a critical concern for organizations. Data exists in multiple states—at rest, in use, in transit, or archived—and each state requires different security measures. Understanding the distinctions between these states is essential to implementing effective controls. The correct answer in this context is option C: data in transit. Data in transit refers to information that is actively moving from one system, device, or location to another over a network, and it presents unique security challenges that must be addressed to maintain confidentiality, integrity, and availability. Examining each option helps clarify why data in transit is the correct focus and why the other options are not.
A) Data at rest
Data at rest refers to information stored on physical or digital media, such as databases, file servers, hard drives, or cloud storage. Security measures for data at rest typically include encryption, access controls, data classification, and physical security to prevent unauthorized access or theft. While protecting data at rest is important, the question specifically addresses the context of data that is moving between locations or systems. Data at rest is static and does not face the same risks as data in transit, such as interception, packet sniffing, or man-in-the-middle attacks. Therefore, data at rest is not the correct answer for scenarios involving active movement of data.
B) Data in use
Data in use refers to information that is actively being processed by applications, programs, or users. Examples include data being read, modified, or analyzed in memory or CPU registers. Security concerns for data in use include preventing unauthorized access, ensuring secure computation, and protecting sensitive information during processing. Techniques such as memory encryption, secure enclaves, and access controls are used to protect data in this state. Although data in use requires significant protection, it does not address the risks associated with transmitting information over networks. Therefore, it is not the focus when the primary concern is securing data as it moves between systems, which makes it an incorrect choice in this context.
C) Data in transit
This is the correct answer. Data in transit refers to information actively traveling over communication channels, whether within internal networks, across the internet, or between devices and servers. Examples include emails sent over SMTP, files transferred via FTP or SFTP, HTTP requests to web applications, and API calls between services. Data in transit is particularly vulnerable to interception, eavesdropping, and tampering, making encryption and integrity checks essential. Protocols such as TLS/SSL, IPsec, VPN tunnels, and SSH are widely used to protect data while it moves between endpoints. Encryption ensures that even if an attacker intercepts the data, it remains unreadable, maintaining confidentiality. Integrity checks and message authentication codes (MACs) verify that the data has not been altered in transit. Access controls and secure key management further strengthen protection. Because these risks are unique to data that is moving, data in transit requires dedicated security controls, which is why option C is correct.
D) Data archived
Data archived refers to information that has been moved to long-term storage for retention, compliance, or backup purposes. Archival data may no longer be actively used or frequently accessed, but it must still be protected from unauthorized access or tampering. Security measures typically include encryption, physical and logical access controls, and integrity checks. While protecting archived data is important, it does not address the challenges of data actively moving between systems. Like data at rest, archived data is static and does not face the interception or eavesdropping risks associated with transmission, making it an incorrect choice in this context.
In conclusion, while data at rest, data in use, and data archived each require specific security measures, data in transit is the unique state where information is actively moving across networks and is most vulnerable to interception and manipulation. Protecting data in transit through encryption, secure protocols, and integrity checks is essential for maintaining confidentiality, integrity, and trust in communications. Option C accurately identifies the data state that requires such protections.
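To make the transit protections concrete, the following minimal sketch, using only Python's standard library, shows the two ideas highlighted above: wrapping a connection in TLS for confidentiality and attaching an HMAC for integrity. The host name, pre-shared key, and payload are illustrative assumptions, not part of the exam material.
```python
import hashlib
import hmac
import socket
import ssl

# --- Confidentiality: wrap a TCP socket in TLS so traffic is encrypted in transit ---
host = "example.com"  # illustrative endpoint
context = ssl.create_default_context()        # verifies the server certificate by default
with socket.create_connection((host, 443)) as tcp_sock:
    with context.wrap_socket(tcp_sock, server_hostname=host) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())   # e.g. TLSv1.3
        print("Cipher suite:", tls_sock.cipher())

# --- Integrity: attach a message authentication code (MAC) to a payload ---
shared_key = b"pre-shared-demo-key"           # assumed to be exchanged securely out of band
payload = b"funds_transfer:account=42;amount=100"
tag = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares it in constant time.
received_tag = tag                            # pretend this arrived alongside the payload
assert hmac.compare_digest(tag, received_tag), "payload altered in transit"
```
Certificate verification during the TLS handshake is what defeats simple man-in-the-middle interception, while the MAC lets the receiver detect any alteration of the payload itself.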
Question 123:
Under Domain 3, which cryptographic property ensures that even a minor change in input produces a completely different output?
A) Diffusion
B) Confusion
C) Avalanche effect
D) Obfuscation
Answer: C) Avalanche effect.
Explanation:
In cryptography, ensuring that small changes in input produce significant and unpredictable changes in output is critical for the security of encryption algorithms. This property prevents attackers from inferring patterns or relationships between plaintext and ciphertext, making cryptographic systems resistant to various types of attacks, including differential and linear cryptanalysis. The correct answer in this context is option C: the avalanche effect. To understand why the avalanche effect is the correct answer, it is helpful to examine each of the four options and their roles in cryptography.
A) Diffusion
Diffusion is a principle in cryptography that ensures that the influence of a single plaintext bit spreads over many ciphertext bits. In other words, changing one bit of plaintext should affect multiple bits in the ciphertext, making it harder for an attacker to determine relationships between plaintext and ciphertext. Diffusion is a crucial property in block ciphers, often achieved through substitution-permutation networks or complex mixing operations. While diffusion is related to the concept of spreading the effect of input changes, it does not fully describe the situation where a single-bit change causes a dramatic and widespread effect throughout the ciphertext. It contributes to the avalanche effect but is not synonymous with it.
B) Confusion
Confusion is another fundamental principle in cryptography introduced by Claude Shannon. Confusion refers to making the relationship between the encryption key and ciphertext as complex and unintelligible as possible. The purpose of confusion is to prevent attackers from deducing key information through analysis of the ciphertext. Techniques such as substitution operations in block ciphers are commonly used to achieve confusion. While confusion increases the complexity of the ciphertext and enhances security, it does not directly describe the phenomenon where a small change in input leads to a significant and unpredictable change in output. Confusion focuses on key-to-ciphertext relationships, not the output sensitivity to input changes.
C) Avalanche effect
This is the correct answer because the avalanche effect specifically describes the desirable property in cryptographic algorithms where a single-bit change in either the plaintext or the key results in a significant and unpredictable change in the ciphertext. In practice, this means that flipping one bit in the input should, on average, change approximately half of the bits in the output. The avalanche effect ensures that small variations in input produce widely divergent outputs, which makes it extremely difficult for attackers to find patterns or exploit weaknesses in the algorithm. Well-designed block ciphers, such as AES (Advanced Encryption Standard) and, historically, DES (Data Encryption Standard), are built to exhibit a high degree of avalanche effect, combining both diffusion and confusion principles. The avalanche effect is a measurable property that demonstrates the effectiveness of diffusion and confusion working together. Without this effect, encryption algorithms would be predictable and vulnerable to cryptanalysis.
D) Obfuscation
Obfuscation is a technique primarily used to make code, data, or processes difficult to understand or reverse engineer. In software security, obfuscation hides program logic to prevent attackers from discovering vulnerabilities or intellectual property. While obfuscation adds a layer of security, it is unrelated to cryptographic transformations of plaintext to ciphertext. Obfuscation does not guarantee that a small change in input produces a significant change in output, nor does it directly enhance the unpredictability of a cryptographic algorithm. It is a separate concept from the avalanche effect, diffusion, and confusion, which are specifically related to cryptography.
In conclusion, while diffusion and confusion are essential principles that support the design of secure cryptographic algorithms, and obfuscation serves a role in software security, the avalanche effect uniquely describes the property where a single-bit change in the input results in a widespread, unpredictable change in the output. This effect is crucial for maintaining the strength and unpredictability of cryptographic systems. By ensuring that even minimal input changes cause drastic output differences, the avalanche effect prevents attackers from exploiting patterns, thereby enhancing overall encryption security. Therefore, option C is the correct choice.
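The avalanche effect is easy to observe empirically. The sketch below, a hedged illustration using Python's standard library, hashes two messages that differ by a single bit and counts how many output bits change; with SHA-256, roughly half of the 256 output bits typically flip.
```python
import hashlib

def bit_difference(a: bytes, b: bytes) -> int:
    """Count how many bits differ between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg1 = b"transfer $100 to account 1234"
msg2 = b"transfer $100 to account 1235"   # '4' (0x34) vs '5' (0x35): exactly one bit differs

h1 = hashlib.sha256(msg1).digest()
h2 = hashlib.sha256(msg2).digest()

changed = bit_difference(h1, h2)
total = len(h1) * 8
print(f"{changed} of {total} output bits changed ({changed / total:.0%})")
# A strong avalanche effect means roughly half of the output bits flip.
```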
Question 124:
Under Domain 4, what is the function of a bastion host?
A) To provide unrestricted access between networks
B) To serve as a hardened gateway exposed to untrusted networks
C) To manage patch deployment across systems
D) To host DNS services internally only
Answer: B) To serve as a hardened gateway exposed to untrusted networks.
Explanation:
A bastion host is a purpose-built, hardened system that is deliberately exposed to untrusted networks, most commonly by placing it in a demilitarized zone (DMZ) that buffers the organization's internal network from external networks such as the internet. Because it sits at the point of contact between untrusted and trusted zones, a bastion host is stripped down, closely monitored, and configured to offer only the services it must provide. The correct answer is option B: to serve as a hardened gateway exposed to untrusted networks. Each of the four options describes a different network function or practice, and analyzing them individually clarifies why option B accurately represents the purpose of a bastion host.
A) To provide unrestricted access between networks
This option is incorrect because a bastion host is not intended to allow unrestricted access. In fact, the opposite is true: the bastion host, and the DMZ that typically surrounds it, exists to control and limit traffic between untrusted external networks and the internal network. Unrestricted access would defeat its purpose and expose sensitive internal systems to significant risk. Firewalls, access control lists, and network segmentation carefully manage which traffic may reach the bastion host, and the host itself runs only the specific services that must be reachable from outside, such as a web server, email gateway, or public-facing application, without granting any path to internal systems.
B) To serve as a hardened gateway exposed to untrusted networks
This is the correct answer because a bastion host is a hardened system placed at the boundary between the internal network and external networks, usually inside a DMZ. It hosts services that must be accessible to outside users, such as web servers, mail servers, DNS servers, and proxy servers, while remaining isolated from the internal network. Hardening measures, including aggressive patching, minimal service installation, removal of unnecessary accounts, strict access controls, and extensive logging, are applied to reduce the likelihood and impact of compromise. Positioned between an external firewall facing the internet and an internal firewall protecting the corporate network, the bastion host limits the attack surface while still providing the required services. This layered approach ensures that even if the bastion host is compromised, attackers do not gain direct access to sensitive internal resources. In essence, the bastion host is both a hardened gateway and a protective buffer, which precisely matches the description in option B.
C) To manage patch deployment across systems
This option is incorrect because patch management is a separate operational process that involves updating software, firmware, and operating systems across all devices in an organization. While a bastion host must itself be kept rigorously patched to remain hardened, its primary function is not to deploy patches to other systems. Patch management addresses software vulnerability mitigation, whereas the bastion host is an architectural control designed to mediate and withstand exposure to untrusted networks. Confusing patch management with the bastion host's role misrepresents its core purpose.
D) To host DNS services internally only
This option is also incorrect because a bastion host exists precisely to face untrusted networks, not to host internal-only services. An organization may maintain internal DNS servers for internal name resolution, but those servers remain behind the internal firewall, fully protected from untrusted networks, and they are not bastion hosts. A bastion host may legitimately provide an external-facing DNS service from the DMZ, but hosting internal-only DNS on it would contradict the principle of controlled exposure to untrusted networks.
In conclusion, the primary purpose of a bastion host is to serve as a hardened gateway exposed to untrusted networks. Typically placed in a DMZ, it provides controlled access to public-facing services while internal systems remain protected through network segmentation, firewalls, and hardened configurations. Option B correctly captures this purpose, whereas options A, C, and D either misunderstand or misrepresent the bastion host's function. A properly hardened bastion host, deployed as part of a well-designed DMZ, allows organizations to balance the need to provide external access with the need to maintain strong internal security.
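As a hedged illustration of the controlled-exposure idea, the sketch below models a default-deny traffic policy around a bastion host; the zone names, ports, and rules are hypothetical and not drawn from any particular firewall product.
```python
# Hypothetical traffic-flow policy around a bastion host in a DMZ.
RULES = [
    # (source zone, destination zone, destination port, action)
    ("internet", "dmz",       443, "allow"),   # public HTTPS to the hardened bastion/web host
    ("internet", "dmz",        25, "allow"),   # inbound mail to the DMZ mail relay
    ("dmz",      "internal", 1433, "deny"),    # DMZ hosts may not reach internal databases
    ("internet", "internal", None, "deny"),    # no direct path from the internet to internal systems
]

def evaluate(src_zone: str, dst_zone: str, dst_port: int) -> str:
    """Return the first matching rule's action; anything unmatched is denied."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if rule_src == src_zone and rule_dst == dst_zone and rule_port in (None, dst_port):
            return action
    return "deny"  # default-deny posture

print(evaluate("internet", "dmz", 443))       # allow: reaches only the hardened host
print(evaluate("internet", "internal", 443))  # deny: internal network stays unreachable
```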
Question 125:
Under Domain 5, which access control concept ensures users only access data relevant to their job duties?
A) Need-to-know
B) Discretionary control
C) Trusted path
D) Ownership
Answer: A) Need-to-know.
Explanation:
In information security, controlling access to sensitive information is a fundamental principle to protect confidentiality and prevent unauthorized disclosure. One of the key methods of limiting access is the “need-to-know” principle, which ensures that users are granted access only to the information necessary to perform their specific job functions. The correct answer in this context is option A: need-to-know. To understand why this is the correct choice, it is important to examine each of the four options individually and consider their role in access control.
A) Need-to-know
The need-to-know principle is a security control that restricts access to information based on an individual’s responsibilities and duties. Even if someone has the proper clearance level or role within an organization, they are only permitted to access information that is directly relevant to their job tasks. For example, an employee in human resources may have access to employee records but not to sensitive financial data unrelated to their duties. This principle minimizes the risk of insider threats, data leaks, or accidental exposure by limiting unnecessary access. Need-to-know is widely used in military, government, and corporate environments where sensitive or classified information is handled. By enforcing this principle, organizations maintain confidentiality while still allowing authorized personnel to perform their functions efficiently. The need-to-know approach is both a technical and administrative control, often implemented through access control lists, role-based access control (RBAC), and policy enforcement.
B) Discretionary control
Discretionary access control (DAC) is a type of access control in which the owner of a resource or file determines who is allowed to access it. The owner can grant or revoke permissions at their discretion, often through access control lists or file permissions. While DAC provides flexibility and allows resource owners to manage access, it does not enforce strict restrictions based on job responsibilities. Users may inadvertently grant access to individuals who do not require it, increasing the risk of unauthorized disclosure. Unlike need-to-know, DAC relies on the discretion of users rather than organizational policy and risk assessment. Therefore, while DAC is important in managing permissions, it does not inherently enforce the need-to-know principle.
C) Trusted path
A trusted path is a secure communication channel that ensures sensitive information, such as authentication credentials, is transmitted directly between the user and the operating system or application without interception by untrusted components. Trusted paths are critical for protecting passwords, secure inputs, and system commands from malware or man-in-the-middle attacks. While trusted paths enhance security by ensuring safe communication, they do not govern access based on job responsibilities or limit exposure of information to authorized personnel. Trusted path technology supports confidentiality, but it is unrelated to the concept of restricting access based on need-to-know.
D) Ownership
Ownership refers to the designation of an individual or entity as the responsible party for a specific asset, system, or piece of information. Owners are accountable for the protection, maintenance, and proper use of the asset. While ownership is essential for accountability and governance, it does not in itself limit access. Ownership provides the authority to assign permissions and enforce policies, but the actual restriction of access based on job necessity is implemented through mechanisms such as need-to-know or role-based access controls. Ownership alone does not guarantee that sensitive information is accessed only by those who require it for their duties.
In conclusion, the need-to-know principle is the most precise method for controlling access to sensitive information based on an individual’s job responsibilities. Unlike discretionary control, which relies on user discretion, or trusted paths, which secure communications, or ownership, which establishes accountability, need-to-know enforces strict restrictions on information exposure to only those who require it for legitimate purposes. This makes option A the correct choice for maintaining confidentiality and minimizing risk in organizational information security.
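The following minimal sketch illustrates how need-to-know can be layered on top of role membership; the users, roles, and data categories are hypothetical and chosen only for illustration.
```python
# Hypothetical need-to-know check layered on top of role membership.
ROLE_MEMBERS = {
    "hr_specialist": {"alice"},
    "financial_analyst": {"bob"},
}

# Each role is granted only the data categories required for its duties.
NEED_TO_KNOW = {
    "hr_specialist": {"employee_records"},
    "financial_analyst": {"quarterly_financials"},
}

def can_access(user: str, role: str, data_category: str) -> bool:
    """Grant access only if the user holds the role AND the role needs that data."""
    return (user in ROLE_MEMBERS.get(role, set())
            and data_category in NEED_TO_KNOW.get(role, set()))

print(can_access("alice", "hr_specialist", "employee_records"))      # True
print(can_access("alice", "hr_specialist", "quarterly_financials"))  # False: no need-to-know
```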
Question 126:
Under Domain 6, which assessment type involves reviewing system documentation and configuration without system exploitation?
A) Penetration test
B) Security audit
C) Vulnerability assessment
D) Code review
Answer: C) Vulnerability assessment.
Explanation:
In the field of information security, organizations employ a variety of assessment methods to identify weaknesses and strengthen their overall security posture. One of these methods is a vulnerability assessment, which is designed to systematically identify, quantify, and prioritize security vulnerabilities in systems, networks, and applications. The correct answer to the question is option C: vulnerability assessment. Understanding why this is the correct choice requires examining the four options individually and considering their purposes, scope, and methodologies.
A) Penetration test
A penetration test, or “pen test,” is an active security evaluation where testers attempt to exploit vulnerabilities to gain unauthorized access or demonstrate the potential impact of a breach. Penetration tests are designed to simulate real-world attacks and test the effectiveness of security controls under adversarial conditions. While penetration tests do identify vulnerabilities, their primary focus is on exploitation rather than identification alone. They are often conducted after a vulnerability assessment to validate findings and measure potential risk. Because a penetration test goes beyond discovery and actively attempts to compromise systems, it is not the same as a vulnerability assessment, which identifies weaknesses without exploiting them.
B) Security audit
A security audit is a formal review of an organization’s adherence to policies, procedures, standards, and regulatory requirements. Audits often include evaluating access controls, reviewing logs, verifying compliance with security policies, and checking procedural enforcement. While audits may uncover security gaps, their focus is primarily on compliance and governance rather than technical vulnerabilities. A security audit evaluates whether the organization follows required controls, whereas a vulnerability assessment specifically identifies technical weaknesses in systems, applications, and networks. Therefore, a security audit is not the correct answer in this context.
C) Vulnerability assessment
This is the correct answer. A vulnerability assessment is a systematic process for identifying, classifying, and prioritizing security weaknesses in IT systems. Vulnerability assessments use automated tools and manual techniques to scan networks, servers, applications, and endpoints for known vulnerabilities, misconfigurations, missing patches, weak passwords, and other security issues. The primary goal is to provide organizations with a clear understanding of potential threats without exploiting them. Unlike penetration tests, vulnerability assessments are non-intrusive and safe to perform on production systems. Once vulnerabilities are identified, the results are typically reported with severity ratings, suggested mitigations, and recommendations for remediation. Vulnerability assessments are often the first step in a comprehensive security testing program because they provide the foundational knowledge required to prioritize security investments, patch critical issues, and reduce the organization’s attack surface.
D) Code review
A code review is a process in which developers and security specialists examine source code to identify errors, security flaws, and violations of coding standards. Code reviews are effective for detecting vulnerabilities such as buffer overflows, SQL injection, insecure API calls, or improper input validation within software applications. While code review is a valuable method for improving software security, it is limited to source code analysis and does not typically evaluate the broader system, network, or infrastructure vulnerabilities. Vulnerability assessment, by contrast, is more comprehensive, encompassing both software and hardware systems. Therefore, code review is not the correct answer in this context, as it addresses only one aspect of security evaluation.
In conclusion, vulnerability assessment is a structured, non-intrusive process designed to identify, classify, and prioritize security weaknesses across systems, networks, and applications. Unlike penetration testing, it does not attempt to exploit vulnerabilities, and unlike security audits, it focuses on technical weaknesses rather than policy compliance. While code reviews target application-level issues, vulnerability assessments provide a broader view of potential risks. This makes option C the correct choice for identifying and assessing security vulnerabilities in an organization’s IT environment.
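As a rough illustration of the non-intrusive nature of a vulnerability assessment, the sketch below compares a software inventory against a list of known-vulnerable versions and ranks the findings by severity. The package names, versions, and CVE placeholders are invented for the example.
```python
# Hypothetical, non-intrusive check of an inventory against known-vulnerable versions.
KNOWN_VULNERABLE = {
    ("openssl", "1.0.2"): ("CVE-XXXX-0001", "critical"),
    ("nginx",   "1.14"):  ("CVE-XXXX-0002", "medium"),
}

installed = {"openssl": "1.0.2", "nginx": "1.22", "postgresql": "14.9"}

findings = []
for package, version in installed.items():
    match = KNOWN_VULNERABLE.get((package, version))
    if match:
        cve_id, severity = match
        findings.append((severity, package, version, cve_id))

# Report the highest-severity findings first so remediation can be prioritized.
severity_order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
for severity, package, version, cve_id in sorted(findings, key=lambda f: severity_order[f[0]]):
    print(f"[{severity.upper()}] {package} {version}: {cve_id}")
```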
Question 127:
Under Domain 7, which backup strategy stores only data changed since the last full backup?
A) Incremental backup
B) Differential backup
C) Snapshot backup
D) Continuous data protection
Answer: B) Differential backup.
Explanation:
Backup strategies are a critical part of any organization’s data protection and disaster recovery plan. The objective of a backup strategy is to ensure that data can be restored in the event of accidental deletion, hardware failure, software corruption, or cyberattacks such as ransomware. Among the various backup methods, differential backup is a widely used approach that balances efficiency and recovery speed. The correct answer in this case is option B: differential backup. To understand why, it is helpful to examine each of the four options and their characteristics.
A) Incremental backup
An incremental backup involves copying only the data that has changed since the last backup, whether it was a full backup or a previous incremental backup. While incremental backups are highly efficient in terms of storage space and backup time, they can create longer recovery times because restoring data requires starting with the last full backup and then applying each incremental backup in sequence. If any incremental backup in the chain is missing or corrupted, data recovery becomes more complicated. Therefore, while incremental backups are useful for minimizing storage use, they are not the primary solution when rapid and straightforward restoration is a priority, making them less ideal than differential backups in certain scenarios.
B) Differential backup
This is the correct answer. Differential backups copy all data that has changed since the last full backup, rather than just changes since the previous backup as in incremental backups. This approach strikes a balance between storage efficiency and recovery speed. In the event of a system failure, restoring data from a differential backup only requires the last full backup plus the latest differential backup, making the recovery process faster and simpler than with incremental backups. Differential backups are particularly effective for organizations that require frequent backups without the need to process a long chain of incremental files during recovery. The differential method also provides a predictable backup schedule and reduces the risk of data loss associated with a broken incremental chain, making it a practical choice for routine enterprise backup strategies.
C) Snapshot backup
Snapshot backups create a point-in-time copy of a system or dataset, often at the storage or virtual machine level. Snapshots are useful for quickly capturing the current state of a system, enabling fast rollback to a previous state in case of corruption or misconfiguration. However, snapshots typically depend on the underlying storage system and may not be sufficient for long-term archival or disaster recovery, as they can consume significant storage over time and are not a substitute for full backups. While snapshots provide fast recovery for short-term operational purposes, they do not offer the same structured backup strategy that differential backups provide for comprehensive data recovery.
D) Continuous data protection
Continuous data protection (CDP) is a method of automatically capturing every change made to data in real-time or near-real-time. CDP ensures that virtually no data is lost, as every write is recorded and can be restored to any previous point in time. Although CDP provides the highest level of protection, it can be complex and expensive to implement due to the storage and processing requirements. CDP is ideal for organizations that require near-zero data loss, but it is often overkill for typical backup needs and may not be as straightforward as differential backups for standard enterprise environments.
In conclusion, differential backups provide a balanced approach to data protection. They efficiently capture changes since the last full backup while minimizing the complexity and recovery time associated with incremental backups. Unlike snapshot backups, which are temporary and storage-intensive, or continuous data protection, which can be costly and complex, differential backups offer a practical, predictable, and reliable method for routine data backup and recovery. This makes option B the most appropriate choice for organizations looking to maintain data integrity and ensure timely recovery in the event of data loss or system failure.
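A differential backup can be sketched in a few lines: copy everything whose modification time is newer than the last full backup. The paths and the way the full-backup timestamp is obtained below are illustrative assumptions.
```python
import os
import shutil
import time

# Minimal sketch of a differential backup: copy every file modified since the
# last FULL backup, regardless of any backups taken in between.
SOURCE_DIR = "/data"
DIFF_DIR = "/backups/differential"
LAST_FULL_BACKUP_TIME = time.time() - 7 * 24 * 3600   # assume the full backup ran a week ago

for root, _dirs, files in os.walk(SOURCE_DIR):
    for name in files:
        path = os.path.join(root, name)
        if os.path.getmtime(path) > LAST_FULL_BACKUP_TIME:
            relative = os.path.relpath(path, SOURCE_DIR)
            target = os.path.join(DIFF_DIR, relative)
            os.makedirs(os.path.dirname(target), exist_ok=True)
            shutil.copy2(path, target)   # restore = last full backup + this one differential set
```
Because each run copies everything changed since the full backup, recovery needs only two sets, whereas an incremental chain would need the full backup plus every incremental taken since.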
Question 128:
Under Domain 8, what secure coding practice helps defend against buffer overflow vulnerabilities?
A) Input truncation
B) Bounds checking and memory-safe languages
C) Encryption at rest
D) Tokenization
Answer: B) Bounds checking and memory-safe languages.
Explanation:
Proper input validation and bounds checking prevent writing beyond memory buffers. Using memory-safe languages like Java or Python further minimizes buffer overflow risk.
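As a minimal illustration of the bounds-checking idea (shown in Python for readability, although the vulnerability itself arises mainly in languages like C and C++), the sketch below validates input length before writing into a fixed-size buffer.
```python
# Minimal sketch of explicit bounds checking before writing into a fixed-size buffer.
# In memory-safe languages the runtime enforces such limits; in C/C++ the programmer
# must do so, for example by preferring strncpy/snprintf over strcpy/sprintf.
BUFFER_SIZE = 64
buffer = bytearray(BUFFER_SIZE)

def safe_write(data: bytes) -> None:
    if len(data) > BUFFER_SIZE:                  # bounds check: reject oversized input
        raise ValueError(f"input of {len(data)} bytes exceeds {BUFFER_SIZE}-byte buffer")
    buffer[:len(data)] = data                    # write stays within the allocated bounds

safe_write(b"well-formed input")                 # succeeds
try:
    safe_write(b"A" * 1024)                      # oversized input is rejected, never written
except ValueError as exc:
    print("rejected:", exc)
```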
Question 129:
Under Domain 1, what principle dictates that security controls should be as simple and minimal as possible to reduce errors?
A) Complete mediation
B) Least privilege
C) Economy of mechanism
D) Open design
Answer: C) Economy of mechanism.
Explanation:
In the field of information security, designing secure systems requires adhering to fundamental principles that help reduce vulnerabilities and improve maintainability. One such principle is the economy of mechanism, which emphasizes keeping security designs and implementations as simple and straightforward as possible. The correct answer is option C: economy of mechanism. Understanding why this is correct involves examining each of the four options and their roles in secure system design.
A) Complete mediation
Complete mediation is a security principle that requires every access attempt to a resource to be checked for authorization. The idea is that security checks should not be bypassed, cached, or assumed to have already been performed. This principle helps prevent unauthorized access and ensures that access control is consistently enforced. While complete mediation is critical for ensuring that security policies are applied at all times, it does not address the complexity of system design itself. Therefore, although important, complete mediation is not synonymous with the concept of economy of mechanism.
B) Least privilege
The principle of least privilege dictates that users, processes, and systems should operate with the minimum level of access necessary to perform their functions. By limiting permissions, the principle reduces the potential damage caused by accidental or malicious actions. Least privilege is a key security control for managing risk and restricting unauthorized actions. However, it focuses on access control and the scope of permissions rather than the design simplicity of the security mechanisms themselves. While least privilege supports secure system operation, it is not the principle that specifically advocates for simplicity in system design, which is the essence of economy of mechanism.
C) Economy of mechanism
This is the correct answer. Economy of mechanism states that security systems should be designed as simply as possible. Complexity in design often introduces vulnerabilities because complex systems are harder to understand, audit, and maintain. By keeping designs simple, developers reduce the likelihood of errors, misconfigurations, and unforeseen interactions that can be exploited by attackers. Simplicity also makes it easier to implement, test, and verify security controls, which improves overall reliability and reduces operational risk. For example, a straightforward authentication protocol with clear, well-documented steps is easier to secure and audit than a complicated protocol with multiple conditional branches and optional features. The economy of mechanism principle is widely applied in cryptography, operating system design, and application development to minimize the attack surface and ensure that security mechanisms are robust, maintainable, and comprehensible. By adhering to this principle, organizations can avoid the pitfalls of over-engineered systems that are prone to errors, difficult to manage, and more likely to contain hidden vulnerabilities.
D) Open design
Open design is a principle that suggests that the security of a system should not depend on the secrecy of its design or implementation. Instead, security should rely on well-tested algorithms, transparent mechanisms, and publicly scrutinized methods. Open design promotes trust, peer review, and the identification of flaws through community evaluation. While open design is important for ensuring that security is based on strong principles rather than obscurity, it does not specifically address simplicity in design. Open design can be implemented in systems that are either simple or highly complex, so it does not capture the essence of economy of mechanism.
In conclusion, economy of mechanism emphasizes simplicity as a key security design principle. Simple designs are easier to understand, test, maintain, and audit, reducing the likelihood of security vulnerabilities. While complete mediation, least privilege, and open design are all important principles that contribute to secure systems, they focus on access control, permission minimization, and reliance on transparent security mechanisms, respectively. Economy of mechanism uniquely addresses the importance of keeping security mechanisms straightforward and manageable, making option C the correct choice. By following this principle, organizations can build secure systems that are robust, efficient, and less prone to human or technical error.
Question 130:
Under Domain 2, what is the main goal of data classification?
A) To assign ownership and determine appropriate protection levels
B) To encrypt all organizational data equally
C) To control user access only
D) To meet hardware configuration requirements
Answer: A) To assign ownership and determine appropriate protection levels.
Explanation:
Classification categorizes information by sensitivity and business value, defining how it must be handled, labeled, stored, and destroyed according to policy.
Question 131:
Under Domain 3, which security concept ensures that data integrity is protected through mathematical validation?
A) Symmetric encryption
B) Digital signature
C) Obfuscation
D) Access control list
Answer: B) Digital signature.
Explanation:
Digital signatures authenticate the sender and verify data integrity via cryptographic hashing and public-key encryption, preventing tampering or impersonation.
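A hedged sketch of the sign-and-verify flow, using the third-party cryptography package (assumed to be installed) with Ed25519 keys chosen purely for brevity; RSA or ECDSA signatures follow the same pattern.
```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

message = b"purchase order #1001: 500 units"

private_key = Ed25519PrivateKey.generate()     # kept secret by the sender
public_key = private_key.public_key()          # distributed to anyone who must verify

signature = private_key.sign(message)          # hashes and signs the message

try:
    public_key.verify(signature, message)      # raises if message or signature was altered
    print("signature valid: integrity and origin confirmed")
except InvalidSignature:
    print("signature invalid")

# Changing even one byte of the message causes verification to fail:
try:
    public_key.verify(signature, message + b"!")
except InvalidSignature:
    print("tampered message rejected")
```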
Question 132:
Under Domain 4, what technology helps segregate different VLANs over the same physical infrastructure securely?
A) Multiplexing
B) VLAN trunking with tagging protocols (e.g., IEEE 802.1Q)
C) MAC address filtering
D) IP subnetting
Answer: B) VLAN trunking with tagging protocols (e.g., IEEE 802.1Q).
Explanation:
Trunking allows multiple VLANs to share network links while maintaining logical isolation through tagging, ensuring network segmentation and reducing lateral movement.
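To show what the 802.1Q tag looks like on the wire, the sketch below parses the tag fields out of a raw Ethernet frame using Python's standard library; the frame bytes and VLAN number are illustrative.
```python
import struct

# Minimal sketch: read the IEEE 802.1Q tag out of a raw Ethernet frame.
frame = bytes.fromhex(
    "ffffffffffff"        # destination MAC
    "00115a0000aa"        # source MAC
    "8100"                # TPID 0x8100 marks an 802.1Q-tagged frame
    "2064"                # TCI: priority/DEI bits plus the 12-bit VLAN ID
    "0800"                # encapsulated EtherType (IPv4)
)

tpid = struct.unpack("!H", frame[12:14])[0]
if tpid == 0x8100:
    tci = struct.unpack("!H", frame[14:16])[0]
    vlan_id = tci & 0x0FFF                      # low 12 bits carry the VLAN ID
    priority = tci >> 13                        # top 3 bits carry the 802.1p priority
    print(f"802.1Q tagged frame: VLAN {vlan_id}, priority {priority}")
else:
    print("untagged frame")
```
Switches on a trunk link use exactly this VLAN ID to keep traffic from different VLANs logically separated even though it shares the same physical cable.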
Question 133:
Under Domain 5, which authentication method provides the highest level of assurance?
A) Password only
B) Token and PIN
C) Multi-factor authentication (MFA) combining different categories
D) Biometric alone
Answer: C) Multi-factor authentication (MFA) combining different categories.
Explanation:
MFA combines factors like “something you know,” “something you have,” and “something you are,” providing strong resistance to credential compromise.
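The sketch below illustrates combining two factor categories: a stored password hash ("something you know") and an RFC 6238 time-based one-time code ("something you have"). It uses only Python's standard library; the secret, password, and the plain SHA-256 password hash are simplifications for illustration (real systems should use a slow password-hashing function such as bcrypt or Argon2).
```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password built on the RFC 4226 HOTP construction."""
    counter = int(time.time()) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify_login(password: str, otp_code: str, stored_hash: bytes, otp_secret: bytes) -> bool:
    knows = hmac.compare_digest(hashlib.sha256(password.encode()).digest(), stored_hash)
    has = hmac.compare_digest(totp(otp_secret), otp_code)
    return knows and has                                      # both categories must succeed

otp_secret = b"shared-enrollment-secret"                      # illustrative enrollment secret
stored_hash = hashlib.sha256(b"correct horse battery staple").digest()  # demo only
print(verify_login("correct horse battery staple", totp(otp_secret), stored_hash, otp_secret))
```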
Question 134:
Under Domain 6, what is the main purpose of static application security testing (SAST)?
A) To analyze source code for vulnerabilities without executing it
B) To execute applications in real time for dynamic flaws
C) To monitor production network traffic
D) To test endpoint configurations
Answer: A) To analyze source code for vulnerabilities without executing it.
Explanation:
SAST detects coding flaws early in the SDLC by examining source or bytecode, identifying issues like injection flaws or logic errors before deployment.
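A toy illustration of the SAST idea: inspect source text for risky patterns without executing it. The rules and the sample snippet below are invented for the example; real SAST tools build abstract syntax trees and data-flow graphs rather than relying on regular expressions.
```python
import re

# Illustrative pattern rules mapped to finding descriptions.
RULES = {
    r"\beval\(": "call to eval() on dynamic input",
    r"execute\(.*%": "SQL statement built with string formatting (injection risk)",
    r"password\s*=\s*['\"]": "hard-coded credential",
}

sample_source = '''
cursor.execute("SELECT * FROM users WHERE name = '%s'" % user_input)
password = "hunter2"
result = eval(user_supplied_expression)
'''

# Scan the source line by line without ever running it.
for lineno, line in enumerate(sample_source.splitlines(), start=1):
    for pattern, finding in RULES.items():
        if re.search(pattern, line):
            print(f"line {lineno}: {finding}")
```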
Question 135:
Under Domain 7, which metric measures how long an organization can tolerate service disruption before unacceptable damage occurs?
A) Recovery Point Objective (RPO)
B) Mean Time Between Failures (MTBF)
C) Maximum Tolerable Downtime (MTD)
D) Recovery Time Objective (RTO)
Answer: C) Maximum Tolerable Downtime (MTD).
Explanation:
MTD defines the threshold beyond which service unavailability causes irreparable harm to business operations, guiding continuity planning and RTO determination.
Question 136:
Under Domain 8, what secure SDLC phase focuses on defining security requirements before development begins?
A) Implementation phase
B) Design phase
C) Requirements phase
D) Testing phase
Answer: C) Requirements phase.
Explanation:
Security requirements are defined alongside functional requirements early in the SDLC to ensure proper control integration and compliance alignment from inception.
Question 137:
Under Domain 1, what control type is exemplified by a security guard physically inspecting identification badges?
A) Preventive
B) Detective
C) Corrective
D) Compensating
Answer: A) Preventive.
Explanation:
Physical inspections stop unauthorized access before an incident occurs, serving as preventive controls that deter and block violations proactively.
Question 138:
Under Domain 2, what process involves permanently deleting data from storage media to prevent recovery?
A) Sanitization
B) Obfuscation
C) Classification
D) Encryption
Answer: A) Sanitization.
Explanation:
Sanitization techniques like degaussing or cryptographic erasure ensure data cannot be recovered from media, protecting confidentiality after disposal or repurposing.
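One of the techniques named above, cryptographic erasure, can be sketched with the third-party cryptography package (assumed to be installed): data is stored only as ciphertext, so destroying the key makes whatever remains on the media unrecoverable.
```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                     # in practice, held in a key-management system
ciphertext = Fernet(key).encrypt(b"customer PII destined for retired media")

# Normal operation: the key decrypts the stored data.
print(Fernet(key).decrypt(ciphertext))

# Sanitization: destroy every copy of the key. The ciphertext left on the media
# can no longer be decrypted, which is equivalent to erasing the data itself.
key = None
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)   # any other key fails
except InvalidToken:
    print("data is unrecoverable without the destroyed key")
```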
Question 139:
Under Domain 3, which form of access control relies on system-enforced rules based on information labels and user clearances?
A) Discretionary Access Control (DAC)
B) Role-Based Access Control (RBAC)
C) Mandatory Access Control (MAC)
D) Attribute-Based Access Control (ABAC)
Answer: C) Mandatory Access Control (MAC).
Explanation:
MAC uses system-enforced security labels (e.g., Top Secret, Confidential) and clearance levels, common in government or military environments, to enforce non-discretionary policies.
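A minimal sketch of the system-enforced decision: compare the subject's clearance against the object's label, with no option for the user or the data owner to override the outcome. The level names and the simple read-down rule are illustrative.
```python
# Hypothetical classification lattice, ordered from least to most sensitive.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def mac_read_allowed(subject_clearance: str, object_label: str) -> bool:
    """Read-down rule: the clearance must dominate (be at least) the object's label."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]

print(mac_read_allowed("Secret", "Confidential"))      # True: Secret dominates Confidential
print(mac_read_allowed("Confidential", "Top Secret"))  # False: the system denies access
```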
Question 140:
Under Domain 4, what protocol secures email transmission by providing end-to-end encryption?
A) SMTP
B) S/MIME
C) POP3
D) IMAP
Answer: B) S/MIME.
Explanation:
S/MIME provides confidentiality, authentication, and integrity for email using asymmetric encryption and digital signatures, ensuring secure communication between endpoints.