CompTIA CAS-005 SecurityX Exam Dumps and Practice Test Questions Set 1, Q1-20
Visit here for our full CompTIA CAS-005 SecurityX exam dumps and practice test questions.
Question 1:
Which of the following security principles ensures that sensitive data is not exposed to unauthorized users while still allowing authorized users to access it?
A. Availability
B. Confidentiality
C. Integrity
D. Non-repudiation
Answer: B. Confidentiality
Explanation:
Confidentiality is one of the three foundational pillars of information security, often represented in the CIA Triad: Confidentiality, Integrity, and Availability. The principle of confidentiality focuses on ensuring that sensitive or classified information is accessed only by authorized individuals, entities, or systems. It prevents disclosure of data to unauthorized users, whether accidental or intentional. In essence, confidentiality is about keeping information private and secure from those who should not have access to it.
Organizations enforce confidentiality through a variety of administrative, technical, and physical security measures. Administrative controls include security policies, nondisclosure agreements, user training, and information classification schemes. Technical controls involve encryption, access control lists (ACLs), authentication mechanisms (such as passwords, biometrics, or multi-factor authentication), and secure communication protocols (such as SSL/TLS or IPSec). Physical controls such as locked server rooms, access badges, and surveillance systems also help protect sensitive data from unauthorized exposure.
Encryption is one of the most critical technologies that upholds confidentiality. When data is encrypted, even if it is intercepted or stolen, it remains unreadable without the correct decryption key. This is vital for protecting data in transit (e.g., over a network) and data at rest (e.g., stored on a server or backup drive).
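To make this concrete, below is a minimal, illustrative sketch in Python using the third-party cryptography package: the intercepted ciphertext is unreadable without the key, which is the essence of confidentiality through encryption. The key handling shown is simplified for demonstration only.

```python
# A minimal sketch of confidentiality through encryption, using the Python
# "cryptography" package (pip install cryptography). Illustrative only.
from cryptography.fernet import Fernet

# Generate a key; in practice this would live in a key management system,
# never stored alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"Employee SSN: 123-45-6789"
ciphertext = cipher.encrypt(plaintext)

# An unauthorized party who intercepts the ciphertext sees only opaque bytes.
print(ciphertext)

# Only a holder of the key can recover the original data.
print(cipher.decrypt(ciphertext))  # b'Employee SSN: 123-45-6789'
```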
Confidentiality is often enforced using the principle of least privilege, which ensures users are granted only the permissions necessary to perform their job duties. Failure to maintain confidentiality can lead to data breaches, reputational damage, legal consequences, and compliance violations. In short, confidentiality protects the secrecy and privacy of information while maintaining accessibility for those who legitimately need it, making it a cornerstone of cybersecurity defense and policy frameworks worldwide.
Question 2:
A company implements a policy requiring employees to use multi-factor authentication (MFA) when accessing corporate email remotely. Which of the following security goals does this primarily address?
A. Data integrity
B. Availability
C. Confidentiality
D. Non-repudiation
Answer: C. Confidentiality
Explanation:
Multi-Factor Authentication (MFA) is one of the most effective modern security mechanisms designed to strengthen user authentication and protect sensitive information from unauthorized access. The principle behind MFA is that users must provide two or more independent credentials—known as factors—to verify their identity. These factors fall into three categories: something you know (such as a password or PIN), something you have (such as a hardware token, smartphone, or smart card), and something you are (biometric identifiers like fingerprints, facial recognition, or iris scans).
By requiring multiple factors, MFA significantly enhances confidentiality, which is the assurance that sensitive information is accessible only to authorized users. Even if one authentication factor (such as a password) is compromised through phishing, brute-force attacks, or credential theft, the attacker would still need the additional factors to gain access. This layered approach reduces the likelihood of unauthorized intrusion into corporate email accounts or other sensitive systems, safeguarding company communications and intellectual property.
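As an illustration only, the following Python sketch combines a knowledge factor with a possession factor (a time-based one-time code) using the third-party pyotp library. The verify_password helper and the stored secret are hypothetical placeholders for a real credential store and MFA enrollment process.

```python
# A minimal sketch of a two-factor check: a knowledge factor (password) plus a
# possession factor (time-based one-time code), via pyotp (pip install pyotp).
import pyotp

def verify_password(username: str, password: str) -> bool:
    # Placeholder for a real check against a hashed password store.
    return password == "correct horse battery staple"

def verify_totp(user_secret: str, code: str) -> bool:
    # The code comes from an authenticator app that shares the enrolled secret.
    return pyotp.TOTP(user_secret).verify(code)

user_secret = pyotp.random_base32()   # stored at MFA enrollment time

def login(username: str, password: str, code: str) -> bool:
    # Access is granted only when BOTH independent factors succeed.
    return verify_password(username, password) and verify_totp(user_secret, code)
```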
In this scenario, the company’s policy is focused on protecting corporate email when accessed remotely—a common attack vector exploited by cybercriminals. Since remote access often occurs over the internet, MFA provides a crucial defense against stolen credentials or session hijacking.
Although MFA can indirectly contribute to integrity (by preventing unauthorized data modification) and non-repudiation (by ensuring that authenticated users cannot deny their actions), its primary role lies in maintaining confidentiality. MFA does not inherently improve availability, as it focuses on access control rather than system uptime.
Ultimately, implementing MFA demonstrates a proactive approach to identity and access management, aligning with best practices and regulatory frameworks such as NIST SP 800-63 and Zero Trust Architecture principles. It helps ensure that only legitimate users can access company resources, thereby protecting sensitive data from exposure or misuse.
Question 3:
Which type of attack involves an adversary intercepting communications between two parties without their knowledge, potentially altering or capturing data?
A. Phishing
B. Man-in-the-middle (MITM)
C. Denial-of-service (DoS)
D. SQL injection
Answer: B. Man-in-the-middle (MITM)
Explanation:
A Man-in-the-Middle (MITM) attack is a form of cyberattack in which an adversary secretly intercepts, monitors, and possibly alters the communication between two unsuspecting parties who believe they are directly communicating with each other. The attacker positions themselves between the sender and the receiver, enabling them to capture sensitive information such as usernames, passwords, credit card details, or session tokens. In some cases, attackers modify data packets in transit, which can alter messages, redirect traffic, or inject malicious content.
MITM attacks can occur in various contexts. One common example is on unsecured public Wi-Fi networks, where attackers set up rogue access points (APs) or use packet sniffing tools to intercept unencrypted communications. Another vector is DNS spoofing or ARP poisoning, which allows attackers to redirect network traffic to malicious servers. In web communications, exploiting vulnerabilities in HTTP or improperly configured SSL/TLS certificates can also facilitate MITM attacks.
The security principles most affected by MITM attacks are confidentiality, integrity, and authentication. Confidentiality is compromised when attackers eavesdrop on sensitive data; integrity is violated if data is modified in transit; and authentication is undermined when the attacker impersonates one or both parties.
To defend against MITM attacks, organizations should enforce end-to-end encryption through protocols such as HTTPS, TLS, and VPNs. Implementing certificate pinning, digital signatures, and mutual authentication can prevent impersonation. Network administrators should also use intrusion detection/prevention systems (IDS/IPS) and secure DNS configurations to detect and block suspicious activity.
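As a small illustration of the TLS defense described above, the following Python sketch (standard library only) enforces certificate and hostname validation, so a forged certificate presented by an on-path attacker causes the handshake to fail rather than silently succeed. The host name is a placeholder.

```python
# Enforcing server certificate validation so a MITM with a forged certificate
# is rejected. Standard library only; example.com is a placeholder host.
import ssl
import socket

context = ssl.create_default_context()   # trusted CA chain + hostname checking
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED

with socket.create_connection(("example.com", 443)) as sock:
    # wrap_socket raises ssl.SSLCertVerificationError if the certificate does
    # not chain to a trusted CA or does not match the hostname -- exactly the
    # failure you want when an attacker is interposed in the path.
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version(), tls.getpeercert()["subject"])
```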
MITM attacks differ from other threats such as phishing, which deceives users through fake communications, or DoS attacks, which disrupt availability rather than intercept data. Ultimately, the goal of a MITM attack is stealthy interception and data manipulation, making it one of the most dangerous and difficult-to-detect cybersecurity threats.
Question 4:
A penetration tester is hired to evaluate an organization’s security posture. During the engagement, the tester discovers an unpatched system that could allow privilege escalation. Which phase of the penetration testing methodology does this finding belong to?
A. Reconnaissance
B. Scanning and enumeration
C. Exploitation
D. Reporting
Answer: C. Exploitation
Explanation:
The exploitation phase of a penetration test is where the tester takes the vulnerabilities discovered during earlier phases—such as reconnaissance and scanning and enumeration—and actively attempts to use them to gain unauthorized access or demonstrate the real-world impact of those weaknesses. In this scenario, the tester identifies an unpatched system that could allow privilege escalation, meaning the tester might exploit the vulnerability to move from a lower level of system access (like a regular user account) to higher administrative or root-level privileges. This action belongs squarely within the exploitation phase because it involves actively validating and leveraging the discovered vulnerability.
During the reconnaissance phase, testers gather publicly available information about the target organization, such as domain names, IP ranges, or employee details, often through passive techniques. The scanning and enumeration phase then builds upon this by actively probing systems and networks to identify open ports, running services, and potential vulnerabilities. However, finding an unpatched system alone does not confirm its exploitability—testing whether it can be used to gain unauthorized access is the goal of the exploitation phase.
The exploitation phase is performed carefully and ethically, following predefined rules of engagement to avoid unnecessary system disruption or data loss. This phase can include testing for privilege escalation, password cracking, web application attacks, or network pivoting. The insights gained here demonstrate the potential damage an attacker could inflict if these vulnerabilities were discovered and exploited maliciously.
After exploitation, the reporting phase documents all findings, evidence of compromise, and remediation recommendations. The purpose is not to harm the organization but to help it strengthen its defenses. Properly conducted exploitation verifies risks, helps prioritize patches, and provides tangible proof of an organization’s security weaknesses, making it a critical step in ethical hacking and penetration testing engagements.
Question 5:
Which of the following access control models grants users permissions based on their roles within an organization, minimizing administrative overhead?
A. Discretionary Access Control (DAC)
B. Mandatory Access Control (MAC)
C. Role-Based Access Control (RBAC)
D. Rule-Based Access Control
Answer: C. Role-Based Access Control (RBAC)
Explanation:
Role-Based Access Control (RBAC) is one of the most widely used and effective access control models in enterprise environments. It simplifies access management by assigning permissions to specific roles within an organization rather than to individual users. A role represents a set of job-related responsibilities or functions, such as “System Administrator,” “HR Manager,” or “Finance Analyst.” Users are then assigned to these roles, automatically inheriting the permissions associated with that role. This approach minimizes administrative complexity and reduces the likelihood of human error when granting or revoking permissions.
RBAC is particularly beneficial in large organizations where managing permissions on a per-user basis would be time-consuming and error-prone. For example, instead of granting access to payroll files to every new HR employee individually, administrators can simply add the user to the “HR” role. If an employee changes departments or leaves the company, access can be modified or revoked instantly by changing their role assignment.
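The following minimal Python sketch illustrates the idea: permissions attach to roles, users inherit access only through role membership, and a department change is a single role reassignment. Role and permission names are hypothetical.

```python
# A minimal, illustrative RBAC sketch. Role and permission names are made up.
ROLE_PERMISSIONS = {
    "hr":      {"read_payroll", "edit_employee_records"},
    "finance": {"read_payroll", "approve_invoices"},
    "intern":  {"read_handbook"},
}

USER_ROLES = {
    "alice": {"hr"},
    "bob":   {"finance"},
}

def is_authorized(user: str, permission: str) -> bool:
    # A user is authorized only if some assigned role grants the permission.
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

# Moving Alice from HR to Finance is a single role change, not a re-grant of
# every individual permission:
USER_ROLES["alice"] = {"finance"}
print(is_authorized("alice", "edit_employee_records"))  # False after the move
```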
This model supports the principle of least privilege (PoLP) by ensuring users have only the access necessary to perform their duties. It also promotes segregation of duties (SoD), reducing the risk of fraud or accidental misuse by ensuring that no single individual has conflicting or excessive permissions.
In contrast, Discretionary Access Control (DAC) allows data owners to determine who can access their resources, which can lead to inconsistent permission settings. Mandatory Access Control (MAC) uses security labels and classifications—often found in military or government systems—and is rigid, leaving no room for user discretion. Rule-Based Access Control uses system-enforced policies, such as time-based or location-based access rules, to allow or deny actions automatically.
Overall, RBAC strikes an optimal balance between security, scalability, and administrative efficiency, making it a cornerstone of modern identity and access management systems and a key concept tested in the CAS-005 exam.
Question 6:
An organization wants to ensure that critical systems can remain operational during a ransomware attack. Which of the following strategies is most effective?
A. Implementing strong password policies
B. Frequent offline backups
C. Disabling unnecessary services
D. Enforcing multi-factor authentication
Answer: B. Frequent offline backups
Explanation:
Ransomware is one of the most destructive and financially damaging types of cyberattacks, designed to encrypt an organization’s critical files and systems, rendering them unusable until a ransom is paid to the attacker. In many cases, even if the ransom is paid, there is no guarantee that data will be decrypted successfully or that attackers will not leak sensitive information. Therefore, the most effective defense against ransomware is to ensure that the organization can restore operations independently, without relying on the attacker. This is achieved through frequent offline backups.
Offline, or air-gapped, backups are copies of critical data stored in a location that is not connected to the main network or the internet. Because they are isolated, ransomware cannot reach or encrypt these backups during an attack. By keeping regular, versioned offline backups—whether on external drives, magnetic tapes, or secure cloud environments with immutable storage—organizations can quickly restore systems and data to a pre-attack state, minimizing downtime and financial loss.
Implementing strong password policies and multi-factor authentication (MFA) can reduce the chance of initial compromise, but these controls primarily protect against unauthorized access rather than data loss. Similarly, disabling unnecessary services helps reduce the attack surface but cannot recover encrypted files once ransomware has executed.
Offline backups should also be tested regularly to verify integrity and ensure that recovery processes work effectively. These backups should be part of a comprehensive disaster recovery plan (DRP) and business continuity plan (BCP) that define recovery time objectives (RTOs) and recovery point objectives (RPOs).
In summary, frequent and secure offline backups are the cornerstone of resilience against ransomware attacks. They provide the assurance that, even in the event of total system compromise, critical data can be restored quickly and securely, ensuring operational continuity.
Question 7:
Which type of malware specifically disguises itself as legitimate software or files to trick users into executing it?
A. Rootkit
B. Trojan
C. Worm
D. Adware
Answer: B. Trojan
Explanation:
A Trojan horse, often simply called a Trojan, is a type of malicious software that disguises itself as a legitimate or desirable application in order to deceive users into installing or executing it. The term originates from the ancient Greek myth of the wooden horse used to infiltrate the city of Troy — similarly, Trojans appear harmless on the surface but conceal harmful code within. Unlike worms or viruses, Trojans do not self-replicate; instead, they rely on social engineering techniques to trick victims into voluntarily installing them.
Once executed, a Trojan can perform a variety of malicious actions depending on its design and purpose. Common types include backdoor Trojans, which allow remote control of the infected system; banking Trojans, designed to steal financial data; and downloaders, which install additional malware such as ransomware or spyware. Some Trojans can also disable antivirus software, modify system settings, or exfiltrate confidential information to an attacker-controlled server.
A key characteristic of Trojans is that they exploit user trust. Attackers often embed them in email attachments, software cracks, fake updates, or seemingly legitimate downloads from untrusted websites. Because users initiate the installation themselves, security systems may not immediately detect the malicious intent, making Trojans particularly dangerous in environments where employees are not trained in cybersecurity awareness.
In contrast, worms are self-replicating malware that spread autonomously across networks without user interaction, and rootkits are stealth tools designed to hide other malware or maintain persistent access to a system. Adware, while generally less harmful, displays unwanted advertisements and may sometimes include spyware components.
To prevent Trojan infections, organizations should enforce strict application whitelisting, use endpoint protection, and train users to identify suspicious software or phishing attempts. Regular software updates and digital signature verification also help ensure that only authentic, verified software is installed.
Question 8:
Which of the following describes the principle of least privilege?
A. Users have access only to the data and resources necessary to perform their job functions
B. Users are granted full access and must request limitations
C. Access rights are based on mandatory policies and cannot be modified
D. Permissions are granted temporarily and automatically revoked after a set period
Answer: A. Users have access only to the data and resources necessary to perform their job functions
Explanation:
The Principle of Least Privilege (PoLP) is a fundamental concept in cybersecurity and access management that dictates users, processes, and systems should be granted only the minimum level of access or permissions required to perform their specific tasks—nothing more, nothing less. This minimizes the potential damage that could result from accidents, human error, or malicious activity. By strictly limiting access, organizations reduce their attack surface and the risk of privilege misuse or compromise.
Implementing the principle of least privilege helps safeguard critical systems and sensitive data. For instance, a data analyst may need read-only access to a financial database for reporting purposes but should not have permission to modify or delete data. Likewise, an application service account should be able to access only the resources it needs to function, preventing it from being exploited to compromise other systems. This principle applies across operating systems, network resources, and even cloud environments.
The benefits of enforcing least privilege extend beyond reducing insider threats—it also helps contain the impact of malware or external breaches. If an attacker compromises a low-privilege account, their ability to move laterally or escalate privileges within the network is significantly restricted.
Enforcing PoLP typically involves using Role-Based Access Control (RBAC) or Attribute-Based Access Control (ABAC), implementing Just-In-Time (JIT) access, and conducting regular privilege reviews to ensure permissions remain appropriate. Logging and auditing privileged activity further enhance accountability and detect misuse.
In contrast, granting excessive privileges (Option B) or inflexible access (Option C) increases security risks. Option D, while referring to temporary access control, does not fully capture the ongoing practice of limiting all privileges by necessity. Properly applied, the principle of least privilege forms the backbone of secure system design and identity governance frameworks like Zero Trust Architecture (ZTA).
Question 9:
An administrator notices unusual outbound traffic from an internal host to multiple external IP addresses. Which type of compromise should the administrator suspect?
A. Denial-of-service attack
B. Botnet infection
C. SQL injection
D. Privilege escalation
Answer: B. Botnet infection
Explanation:
A botnet infection occurs when a computer or device becomes compromised by malware and is remotely controlled by an attacker, often without the user’s knowledge. Once infected, the system becomes part of a larger network of compromised machines—called bots or zombies—that operate under the direction of a central command-and-control (C2) server. Attackers use these botnets to perform coordinated malicious activities such as distributed denial-of-service (DDoS) attacks, mass email spam campaigns, data exfiltration, and cryptocurrency mining.
The key indicator in this scenario—unusual outbound traffic to multiple external IP addresses—strongly suggests botnet communication. In a botnet, each infected host communicates regularly with one or more external C2 servers to receive instructions or send stolen data. The traffic patterns often include connections to unfamiliar or suspicious domains, high volumes of encrypted traffic, or unusual activity outside of normal business hours. This outbound nature distinguishes botnets from attacks like Denial-of-Service (DoS), which usually involve large volumes of inbound traffic targeting a victim.
Detecting botnet activity requires network traffic analysis, intrusion detection/prevention systems (IDS/IPS), and endpoint security monitoring. Administrators often look for anomalies such as repeated failed connections, beaconing behavior (regularly timed communications with external hosts), or use of known botnet-related IP addresses. Threat intelligence feeds and security information and event management (SIEM) systems can also correlate logs across multiple systems to identify widespread infection patterns.
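As one illustration of the beaconing heuristic mentioned above, the following Python sketch flags internal hosts that contact the same external IP at suspiciously regular intervals. The log format, sample data, and thresholds are assumptions for demonstration.

```python
# A simple beaconing heuristic: near-constant intervals between outbound
# connections to the same external IP suggest command-and-control check-ins.
from statistics import pstdev
from collections import defaultdict

# (timestamp_seconds, internal_host, external_ip) tuples, e.g. parsed from
# firewall or NetFlow logs. Sample data shown for illustration.
events = [
    (0, "10.0.0.5", "203.0.113.7"), (300, "10.0.0.5", "203.0.113.7"),
    (600, "10.0.0.5", "203.0.113.7"), (900, "10.0.0.5", "203.0.113.7"),
]

by_pair = defaultdict(list)
for ts, src, dst in events:
    by_pair[(src, dst)].append(ts)

for (src, dst), times in by_pair.items():
    gaps = [b - a for a, b in zip(times, times[1:])]
    # Low jitter across several connections is the beaconing signature.
    if len(gaps) >= 3 and pstdev(gaps) < 5:
        print(f"Possible beaconing: {src} -> {dst} every ~{gaps[0]}s")
```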
To mitigate and prevent botnet infections, organizations should implement strong endpoint protection, regular patching, least privilege access, and user education to prevent malware delivery via phishing or malicious downloads. Network segmentation can also help contain an infection to a limited area, preventing lateral movement.
In summary, persistent and unusual outbound traffic from a host to numerous unknown IPs is a classic sign of botnet compromise, where the infected system is being remotely controlled to execute malicious tasks as part of a larger coordinated attack network.
Question 10:
Which encryption method uses two mathematically related keys, one public and one private, to secure communications?
A. Symmetric encryption
B. Asymmetric encryption
C. Hashing
D. Steganography
Answer: B. Asymmetric encryption
Explanation:
Asymmetric encryption, also known as public-key cryptography, is a cryptographic method that uses a pair of mathematically related keys — a public key and a private key — to encrypt and decrypt data. The public key is shared openly, while the private key is kept secret by its owner. Data encrypted with the public key can only be decrypted with the corresponding private key, and vice versa. This design enables secure communication over untrusted networks without the need for both parties to share a secret key in advance, solving one of the biggest challenges in symmetric encryption: key distribution.
In practical use, asymmetric encryption ensures confidentiality, integrity, authentication, and non-repudiation. For instance, when someone encrypts a message using a recipient’s public key, only the recipient can decrypt it with their private key, ensuring confidentiality. Conversely, when a sender digitally signs a message using their private key, anyone with the sender’s public key can verify the authenticity of the signature, ensuring integrity and non-repudiation.
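A minimal Python sketch of both uses, assuming the third-party cryptography package, is shown below: encrypting to a recipient's public key for confidentiality, and signing with a private key for integrity and non-repudiation.

```python
# Asymmetric encryption and signing with RSA, using the Python "cryptography"
# package (pip install cryptography). Key sizes and padding are illustrative.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Confidentiality: anyone can encrypt with the public key; only the private
# key holder can decrypt.
ciphertext = public_key.encrypt(b"meet at noon", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"meet at noon"

# Integrity / non-repudiation: sign with the private key, verify with the
# public key. verify() raises InvalidSignature if the message was altered.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(b"meet at noon", pss, hashes.SHA256())
public_key.verify(signature, b"meet at noon", pss, hashes.SHA256())
```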
Common asymmetric algorithms include RSA (Rivest–Shamir–Adleman), Elliptic Curve Cryptography (ECC), and Diffie-Hellman key exchange (which, while not encryption per se, is used to securely share symmetric keys). These algorithms are the foundation of many modern security protocols, including SSL/TLS for secure web communications, PGP (Pretty Good Privacy) for encrypted emails, and SSH for secure remote access.
In contrast, symmetric encryption relies on a single shared key for both encryption and decryption, which requires a secure channel for key exchange. Hashing, while often used in conjunction with encryption for data integrity verification, is a one-way process that cannot be reversed. Steganography, on the other hand, conceals data within images, audio, or other media but does not encrypt it.
Overall, asymmetric encryption revolutionized cybersecurity by enabling secure digital communication, authentication, and trust establishment across the internet without prior key sharing, forming the backbone of today’s public key infrastructure (PKI) systems.
Question 11:
Which type of firewall inspects traffic at the application layer and can enforce rules based on the content of specific protocols such as HTTP, FTP, or SMTP?
A. Packet-filtering firewall
B. Stateful firewall
C. Application-layer firewall
D. Circuit-level gateway
Answer: C. Application-layer firewall
Explanation:
An application-layer firewall, sometimes called a proxy firewall, operates at Layer 7 of the OSI model—the application layer. Unlike traditional firewalls that focus on IP addresses, ports, or connection states, application-layer firewalls analyze the actual contents of the traffic, including application-specific commands and protocol-level data. This capability allows them to enforce very granular security policies, detect malicious payloads, and block attacks that exploit specific application vulnerabilities. For example, an application-layer firewall can inspect HTTP traffic for malicious input used in SQL injection, cross-site scripting (XSS), or buffer overflow attacks, and block it before it reaches the target server.
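The following simplified Python sketch illustrates the kind of Layer 7 content rule such a firewall can apply; the patterns, field names, and function are illustrative only, not a production WAF ruleset.

```python
# A toy illustration of Layer 7 inspection: examining HTTP request content
# against content rules before it reaches the application.
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)union\s+select"),      # classic SQL injection probe
    re.compile(r"(?i)<script\b"),           # reflected XSS attempt
    re.compile(r"\.\./\.\./"),              # path traversal
]

def inspect_http_request(method: str, path: str, body: str) -> bool:
    """Return True if the request should be blocked."""
    payload = f"{path}\n{body}"
    return any(p.search(payload) for p in SUSPICIOUS_PATTERNS)

# A packet filter would pass this (it is valid traffic to TCP/80), but a
# Layer 7 inspection blocks it based on its content.
print(inspect_http_request("GET", "/search?q=1' UNION SELECT password FROM users--", ""))
```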
In contrast, a packet-filtering firewall works at Layer 3 (network layer) and filters traffic primarily based on source and destination IP addresses, ports, and protocol types. While fast and lightweight, packet-filtering firewalls cannot analyze the actual content of packets or detect application-specific threats. A stateful firewall tracks the state of active connections and can allow return traffic for legitimate sessions, but it also lacks deep inspection capabilities at the application layer. Circuit-level gateways operate at the session layer (Layer 5) and monitor TCP or UDP sessions for legitimacy, but they do not inspect the contents of the messages themselves.
Application-layer firewalls often function as intermediaries, acting as proxies between clients and servers. This allows them to log detailed activity, enforce authentication, and even cache content to improve performance. They are particularly important in web-facing applications, email servers, and other protocol-specific services where attackers may exploit vulnerabilities at the application level.
Deploying application-layer firewalls is a key strategy in a defense-in-depth architecture, complementing network-layer firewalls, intrusion detection/prevention systems (IDS/IPS), and endpoint protections. By inspecting traffic at the application layer, organizations gain the ability to detect and prevent sophisticated attacks that bypass traditional firewall protections, making them an essential component of modern cybersecurity.
Question 12:
An organization implements a system where users must authenticate with a password and then provide a fingerprint scan. This is an example of:
A. Single-factor authentication
B. Multi-factor authentication
C. Certificate-based authentication
D. Single sign-on
Answer: B. Multi-factor authentication
Explanation:
Multi-factor authentication (MFA) is a security mechanism that requires users to provide two or more independent forms of verification to prove their identity before gaining access to a system, application, or resource. MFA strengthens security by combining factors from different categories: something you know (e.g., passwords or PINs), something you have (e.g., security tokens, smart cards, or mobile devices), and something you are (e.g., biometric traits such as fingerprints, facial recognition, or iris scans). This layered approach significantly reduces the likelihood of unauthorized access even if one factor is compromised.
In the scenario described, the user first provides a password, which is the knowledge factor. Then, the user completes a fingerprint scan, which is the biometric factor. Since two distinct categories of authentication are required, this is a clear example of MFA. By requiring multiple factors, the organization protects against common attacks such as phishing, credential stuffing, brute-force attacks, and password reuse. Even if an attacker obtains the user’s password, they would still need the fingerprint to complete authentication, greatly enhancing security.
By contrast, single-factor authentication relies on only one factor—usually a password or PIN—which is more vulnerable to compromise. Certificate-based authentication uses digital certificates to verify identity, often in VPNs or secure communications, but typically relies on possession of the certificate plus a password or PIN, which may or may not constitute MFA. Single sign-on (SSO) allows a user to access multiple systems with one set of credentials, improving usability and reducing password fatigue, but it does not inherently enforce multiple authentication factors.
MFA is a critical component of modern cybersecurity strategies, especially in environments where sensitive data is accessed remotely, such as cloud services, corporate email, and financial systems. Implementation of MFA aligns with best practices and frameworks such as NIST SP 800-63B and Zero Trust Architecture, helping organizations ensure that only legitimate users gain access while significantly mitigating the risk of account compromise.
Question 13:
Which type of security control is a security awareness training program for employees?
A. Technical
B. Administrative
C. Physical
D. Detective
Answer: B. Administrative
Explanation:
Administrative controls are policies, procedures, and processes that an organization implements to guide human behavior and manage the overall security posture. Unlike technical or physical controls, which rely on technology or physical barriers to prevent incidents, administrative controls focus on the human element of security, ensuring that employees, contractors, and other personnel understand their responsibilities and follow standardized procedures to reduce risk.
A security awareness training program is a prime example of an administrative control. Such programs educate employees about cybersecurity best practices, organizational policies, and potential threats. Topics often include phishing and social engineering attacks, secure password creation, proper handling of sensitive data, safe use of email and internet resources, and understanding acceptable use policies. By providing ongoing training, organizations reduce the likelihood of human error, which is one of the leading causes of security breaches. Employees become the first line of defense, capable of recognizing suspicious emails, reporting incidents, and complying with secure operational procedures.
In contrast, technical controls are mechanisms implemented using hardware or software, such as firewalls, intrusion prevention systems, antivirus software, or encryption tools. Physical controls protect the organization’s tangible assets, including building access controls, locks, surveillance cameras, and environmental protections. Detective controls are designed to identify and alert security teams about anomalous or malicious activity, such as intrusion detection systems, log monitoring, or audit trails.
While technical and physical controls are critical, they can fail if users are unaware of security risks or fail to follow established procedures. Administrative controls, therefore, provide the framework for risk management, ensuring employees understand how to properly interact with systems, recognize threats, and follow policies that protect organizational assets. A comprehensive security program relies on all three types of controls, but administrative measures, such as training programs, are essential for shaping behavior, promoting security culture, and complementing technical defenses effectively.
Question 14:
Which term describes a vulnerability that is unknown to software vendors and for which no patch is currently available?
A. Zero-day
B. Logic bomb
C. Rootkit
D. Exploit kit
Answer: A. Zero-day
Explanation:
A zero-day vulnerability refers to a software flaw that is unknown to the software vendor and for which no patch or official fix exists at the time of discovery. The term “zero-day” emphasizes that developers have had zero days to address the vulnerability, leaving systems exposed and potentially highly susceptible to attacks. Because these vulnerabilities are undisclosed, attackers can exploit them before security teams or vendors have an opportunity to release a patch, making zero-day exploits particularly dangerous and valuable in cybercrime markets.
Attackers may use zero-day vulnerabilities to gain unauthorized access, execute arbitrary code, escalate privileges, or install malware without detection. These exploits are often incorporated into sophisticated malware campaigns, targeted attacks, or advanced persistent threats (APTs). Organizations are especially vulnerable because traditional defense mechanisms—such as signature-based antivirus or intrusion prevention systems—may not detect previously unknown exploits.
It is important to differentiate zero-day vulnerabilities from other types of malicious activity. A logic bomb is a piece of malicious code triggered by specific conditions, a rootkit is software designed to hide the presence of malware on a system, and an exploit kit is a prepackaged toolkit that automates the exploitation of known vulnerabilities. Zero-day exploits are distinct in that they target unknown, unpatched flaws, making proactive defense strategies critical.
Mitigating zero-day risk requires a combination of behavioral monitoring, network segmentation, and anomaly detection to identify unusual activity even if the specific vulnerability is unknown. Security teams should also implement application whitelisting, regular backups, and least privilege access to limit the potential impact of a successful zero-day attack. Threat intelligence feeds and endpoint detection and response (EDR) solutions can also help organizations detect exploit attempts and respond quickly.
Ultimately, zero-day vulnerabilities represent one of the most challenging security threats, highlighting the need for defense-in-depth strategies and continuous vigilance to protect systems before a patch becomes available.
Question 15:
Which type of attack relies on overwhelming a system’s resources to make services unavailable to legitimate users?
A. Phishing
B. Denial-of-service (DoS)
C. Man-in-the-middle
D. Cross-site scripting
Answer: B. Denial-of-service (DoS)
Explanation:
A Denial-of-Service (DoS) attack is a type of cyberattack aimed at making a network, server, application, or other system unavailable to legitimate users by overwhelming it with excessive traffic or resource demands. The objective is not necessarily to steal data or gain unauthorized access, but to disrupt normal operations, resulting in downtime, service degradation, or financial and reputational damage. DoS attacks exploit limitations in system resources such as CPU, memory, bandwidth, or disk space, causing systems to crash, slow down, or reject legitimate requests.
A common variant is the Distributed Denial-of-Service (DDoS) attack, where multiple compromised devices, often part of a botnet, are used to flood a target simultaneously. By leveraging a large volume of sources, attackers can amplify the impact and make mitigation more challenging. DDoS attacks may use different techniques, such as volumetric floods (saturating bandwidth), protocol attacks (exploiting weaknesses in network protocols, as in SYN floods), or application-layer attacks (overloading specific services, e.g., with HTTP requests).
DoS attacks primarily affect the availability component of the CIA triad (Confidentiality, Integrity, Availability). While confidentiality and integrity remain intact, users are unable to access the services they rely on, which can lead to lost revenue, decreased customer trust, and operational disruption.
Mitigation strategies include traffic filtering, rate limiting, load balancing, and DDoS protection services offered by cloud providers or specialized security vendors. Organizations may also implement redundant infrastructure and geographically distributed servers to absorb and distribute attack traffic. Additionally, intrusion detection and prevention systems (IDS/IPS) can identify attack patterns in real time and help block malicious traffic.
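As a small illustration of one mitigation named above, the following Python sketch applies per-client rate limiting with a token bucket; the capacity and refill rate are arbitrary illustrative values.

```python
# A per-client token-bucket rate limiter: clients exceeding their request
# budget are rejected, preserving capacity for legitimate traffic.
import time

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # request dropped: client is exceeding its budget

buckets: dict[str, TokenBucket] = {}

def handle_request(client_ip: str) -> str:
    bucket = buckets.setdefault(client_ip, TokenBucket(capacity=20, refill_per_sec=5))
    return "200 OK" if bucket.allow() else "429 Too Many Requests"
```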
Other attack types differ in purpose: phishing deceives users to reveal credentials, man-in-the-middle attacks intercept communications, and cross-site scripting (XSS) injects malicious scripts into web applications. DoS attacks are unique in targeting service availability directly, making them a critical concern for business continuity and incident response planning.
Question 16:
A company requires that employees change their passwords every 60 days and prevent reuse of the last 10 passwords. Which security principle is this policy enforcing?
A. Authentication
B. Account expiration
C. Password management
D. Access control
Answer: C. Password management
Explanation:
Password management refers to the set of policies, procedures, and best practices an organization implements to ensure that user passwords are created, stored, and used securely. Effective password management reduces the risk of unauthorized access due to compromised or weak credentials and is a critical component of an organization’s identity and access management (IAM) strategy.
In this scenario, the company requires employees to change their passwords every 60 days and to prevent reuse of the last 10 passwords. These rules enforce strong password hygiene by minimizing the window during which a stolen or guessed password can be exploited. Regular password changes ensure that even if an attacker gains access to a password, its usefulness is time-limited. Preventing password reuse mitigates the risk of users cycling back to previously compromised or weak passwords, which could otherwise allow repeated unauthorized access.
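A minimal Python sketch of the reuse rule, assuming previous passwords are kept as salted PBKDF2 hashes, is shown below; the storage layout and parameters are illustrative only.

```python
# Rejecting reuse of the last 10 passwords by comparing a candidate against
# stored salted hashes. Standard library only; parameters are illustrative.
import hashlib
import os

HISTORY_DEPTH = 10

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256 with a per-password salt.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

# Per-user history of (salt, hash) pairs, newest last.
password_history: list[tuple[bytes, bytes]] = []

def set_password(new_password: str) -> bool:
    # Reject the change if the candidate matches any of the recent hashes.
    for salt, digest in password_history[-HISTORY_DEPTH:]:
        if hash_password(new_password, salt) == digest:
            return False          # reuse of a recent password is not allowed
    salt = os.urandom(16)
    password_history.append((salt, hash_password(new_password, salt)))
    return True
```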
While authentication is the process of verifying a user’s identity, and account expiration involves disabling accounts after a set time, password management specifically focuses on the creation, rotation, complexity, and storage of passwords. Access control governs what resources a user can access once authenticated, but does not dictate how passwords are managed.
Strong password management policies are often paired with multi-factor authentication (MFA) to further enhance security. MFA adds additional layers of verification beyond just a password, protecting accounts even if passwords are stolen or guessed. Organizations may also enforce password complexity requirements, such as minimum length, the use of special characters, and restrictions on common words, to reduce susceptibility to brute-force attacks.
In summary, enforcing periodic password changes, preventing reuse of old passwords, and combining these measures with MFA strengthens an organization’s overall security posture, protects sensitive systems, and reduces the likelihood of unauthorized access resulting from compromised credentials. Password management is therefore a foundational component of proactive cybersecurity strategy.
Question 17:
Which of the following security devices monitors network traffic and can automatically block malicious activity in real-time?
A. Intrusion Detection System (IDS)
B. Intrusion Prevention System (IPS)
C. SIEM
D. Firewall
Answer: B. Intrusion Prevention System (IPS)
Explanation:
An Intrusion Prevention System (IPS) is an advanced network security device designed to actively monitor traffic and automatically block or prevent malicious activity in real-time. Unlike passive detection tools, an IPS not only identifies threats but also takes immediate action to mitigate them, thereby protecting network resources and maintaining operational continuity. IPS devices are typically deployed inline with network traffic so that all packets pass through the system, allowing it to inspect data flows thoroughly.
The IPS leverages multiple detection techniques to identify potential attacks. Signature-based detection compares network traffic against a database of known attack patterns, effectively catching threats that match previously identified exploits. Anomaly-based detection monitors baseline network behavior to flag unusual activity that could indicate an attack. Behavior-based detection focuses on identifying abnormal patterns in system or user behavior. By combining these techniques, IPS solutions can respond to both known and emerging threats proactively, reducing the window of exposure and minimizing damage.
In comparison, an Intrusion Detection System (IDS) also monitors network traffic for suspicious activity but does not take preventive action automatically; it only generates alerts for administrators to investigate. Similarly, Security Information and Event Management (SIEM) systems aggregate logs from multiple sources, correlate events, and provide situational awareness, but they are not inline devices and cannot block threats in real-time. Firewalls control traffic based on preconfigured rules, such as allowing or blocking certain IP addresses or ports, but traditional firewalls are not designed to inspect traffic at a detailed behavioral or signature level.
IPS solutions are crucial for defending against sophisticated threats such as malware propagation, brute-force attacks, and zero-day exploits. They are a key component of a defense-in-depth strategy, complementing firewalls, endpoint protection, and network segmentation. By automatically detecting and preventing attacks, IPS devices help organizations maintain network availability, integrity, and security, reducing the reliance on manual intervention and enhancing overall resilience against cyber threats.
Question 18:
What type of backup strategy copies all files every time the backup runs, consuming more storage but ensuring fast recovery?
A. Differential
B. Incremental
C. Full
D. Snapshots
Answer: C. Full
Explanation:
A full backup is a backup strategy in which all selected files and data are copied every time the backup process is executed. This means that each backup contains a complete snapshot of the system or dataset, regardless of whether files have changed since the last backup. The primary advantage of a full backup is that it simplifies the recovery process: if data loss occurs, restoring from a full backup requires only the latest copy, making it quick and straightforward to bring systems back online. This ensures maximum reliability in disaster recovery scenarios, as there is no need to piece together data from multiple backup sets.
However, full backups come with notable trade-offs. Since every file is copied during each backup, they consume significantly more storage space and require longer backup windows compared to incremental or differential strategies. For large datasets or enterprise environments, performing full backups frequently can strain storage resources and network bandwidth. Despite these drawbacks, full backups are foundational in any robust disaster recovery plan (DRP) because they provide a complete, standalone copy of data that can be restored independently of other backups.
In comparison, incremental backups only copy files that have changed since the last backup of any type, greatly reducing storage usage and backup time but requiring a chain of incremental backups plus the last full backup for a full restoration. Differential backups copy all changes since the last full backup, striking a balance between storage and recovery speed. Snapshots capture a system or volume’s state at a specific point in time, enabling fast rollback, but they are generally not suitable as a long-term archival solution.
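The selection rule that separates the three strategies can be illustrated with a short Python sketch based on file modification times; the paths and bookkeeping shown are assumptions for demonstration.

```python
# Which files each strategy copies, decided from modification times.
import os

def files_to_back_up(root: str, strategy: str,
                     last_full: float, last_any: float) -> list[str]:
    """Return file paths to copy for a 'full', 'differential', or
    'incremental' run."""
    selected = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            if strategy == "full":
                selected.append(path)                 # everything, every time
            elif strategy == "differential" and mtime > last_full:
                selected.append(path)                 # changed since last FULL
            elif strategy == "incremental" and mtime > last_any:
                selected.append(path)                 # changed since last backup
    return selected

# Restore cost: full = one backup set; differential = last full + latest diff;
# incremental = last full + every incremental since, applied in order.
```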
Organizations often combine full backups with incremental or differential backups in a hybrid strategy to balance storage efficiency with recovery speed. Despite higher storage requirements, full backups are essential for ensuring data integrity, fast restoration, and reliable disaster recovery, making them a cornerstone of enterprise backup planning.
Question 19:
Which of the following attacks involves injecting malicious SQL statements into an application’s database query?
A. Cross-site scripting
B. SQL injection
C. Buffer overflow
D. Phishing
Answer: B. SQL injection
Explanation:
SQL injection occurs when an attacker injects malicious SQL commands into an application’s input fields, which are then executed by the database server. This can allow unauthorized data access, modification, or deletion. By contrast, cross-site scripting targets web users by executing scripts in their browsers, buffer overflow exploits memory-handling vulnerabilities, and phishing targets human behavior to capture credentials. Preventing SQL injection involves input validation, the use of parameterized queries or prepared statements, and restrictive database permissions. SQL injection attacks remain highly prevalent because of improper input handling in web applications.
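A minimal Python sketch using the standard-library sqlite3 module contrasts the vulnerable string-concatenation pattern with a parameterized query; table and column names are illustrative.

```python
# Vulnerable concatenation vs. a parameterized query, using sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"   # attacker-controlled value

# VULNERABLE: string concatenation lets the input rewrite the query logic,
# so this returns every row in the table.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())

# SAFE: a parameterized query treats the input strictly as data, never as SQL,
# so the injection attempt matches no rows.
print(conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall())
```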
Question 20:
Which security framework is designed to provide a set of best practices and standards for managing IT security risks in U.S. federal agencies?
A. ISO/IEC 27001
B. NIST Cybersecurity Framework (CSF)
C. COBIT
D. HIPAA
Answer: B. NIST Cybersecurity Framework (CSF)
Explanation:
The NIST Cybersecurity Framework (CSF) provides a structured approach for organizations, particularly U.S. federal agencies, to manage and reduce cybersecurity risk. It consists of core functions: Identify, Protect, Detect, Respond, and Recover. ISO/IEC 27001 is an international standard for information security management systems (ISMS), COBIT focuses on IT governance and management, and HIPAA sets compliance requirements for healthcare data protection. NIST CSF emphasizes risk-based management, measurable outcomes, and alignment with other standards and industry best practices. Its widespread adoption demonstrates its effectiveness in creating a repeatable and scalable security program.