CompTIA CAS-005 SecurityX Exam Dumps and Practice Test Questions Set2 Q21-40
Question 21:
Which type of attack involves an attacker sending unsolicited emails that appear to come from a trusted source to steal credentials or deliver malware?
A. Phishing
B. SQL Injection
C. Man-in-the-Middle
D. Denial-of-Service
Answer: A. Phishing
Explanation:
Phishing is a type of social engineering attack where attackers send emails or messages that appear to come from a trusted entity, such as a bank, company executive, or service provider. The goal is usually to trick recipients into revealing sensitive information, including usernames, passwords, social security numbers, or financial data. Phishing attacks often employ urgent language, fake login pages, or malicious links and attachments to deceive users. A common variant is spear phishing, which targets specific individuals using personalized information to increase the likelihood of success. Another variant is whaling, aimed at high-level executives or “big fish” with access to critical systems.
Unlike technical attacks such as SQL injection, which exploit software vulnerabilities, or man-in-the-middle attacks, which intercept communications, phishing primarily targets human psychology and trust. Denial-of-Service (DoS) attacks, on the other hand, aim to overwhelm systems rather than steal data. Effective mitigation strategies against phishing include user awareness training, implementing multi-factor authentication (MFA), and deploying email filtering technologies that detect malicious content or suspicious senders. Organizations may also use DMARC, SPF, and DKIM email authentication protocols to reduce the likelihood of spoofed emails reaching end users.
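To make the DMARC mechanism mentioned above concrete, here is a minimal sketch of parsing a DMARC TXT record (normally fetched from DNS at `_dmarc.<domain>`) into its policy tags; the record string and `parse_dmarc` helper are illustrative, not a production validator:

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into tag/value pairs (e.g. p=reject)."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record a domain owner might publish:
policy = parse_dmarc("v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com")
print(policy["p"])  # "reject": receivers should discard failing messages
```

A `p=reject` policy tells receiving mail servers to drop messages that fail SPF/DKIM alignment, which is what stops many spoofed phishing emails before a user ever sees them.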
Phishing remains one of the most common and dangerous attack vectors because even well-secured systems can be bypassed if an employee inadvertently provides credentials to an attacker. Security awareness programs, simulated phishing campaigns, and clear reporting procedures are critical in creating a human firewall. Understanding phishing is central to CAS-005 objectives because the exam emphasizes human factors in security, social engineering mitigation, and the importance of layered defenses combining technology and user training.
Question 22:
Which security principle ensures that data has not been altered in an unauthorized manner during storage or transmission?
A. Confidentiality
B. Integrity
C. Availability
D. Non-repudiation
Answer: B. Integrity
Explanation:
Integrity is one of the three pillars of the CIA triad (Confidentiality, Integrity, Availability) and focuses on maintaining the accuracy and reliability of data. Integrity ensures that information remains unaltered, complete, and trustworthy, whether at rest, in transit, or during processing. Unauthorized modifications to data, whether accidental or malicious, violate integrity. Common threats include malware that modifies files, insider attacks, transmission errors, or database tampering. Techniques to maintain integrity include checksums, hash functions, digital signatures, and version control systems.
For example, using a SHA-256 hash to verify the contents of a downloaded file ensures the file has not been tampered with. If even a single bit changes, the hash value will differ, alerting the recipient to potential corruption or manipulation. Integrity also extends to network communications. Protocols like TLS use cryptographic methods to ensure that messages are not altered during transmission, preventing attackers from modifying sensitive information such as financial transactions or authentication tokens.
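The SHA-256 verification workflow described above can be sketched in a few lines of Python; the file contents here are stand-ins for a real download and its published digest:

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a downloaded artifact.
fd, path = tempfile.mkstemp()
os.write(fd, b"release artifact v1.0")
os.close(fd)

published = sha256_of(path)       # digest the vendor would publish
with open(path, "ab") as f:       # tamper: append a single byte
    f.write(b"!")
tampered = sha256_of(path) != published
print(tampered)                   # True: any change yields a different digest
os.remove(path)
```

Even a one-byte change flips roughly half the output bits, which is why comparing digests is a reliable integrity check.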
Other CIA triad principles complement integrity. Confidentiality ensures data is only accessible to authorized users, while availability guarantees timely access to systems and resources. Non-repudiation ensures that an action or transaction cannot later be denied, often implemented through digital signatures. Maintaining integrity is critical because compromised data can lead to incorrect decisions, financial loss, and reputational damage. Organizations implement integrity controls across databases, file systems, applications, and networks to protect against both insider and external threats.
Integrity violations can be detected through audit trails, monitoring, and automated integrity checking tools. Strong internal policies, user training, and security awareness also reinforce integrity by minimizing human error. In CAS-005, understanding integrity is essential for designing secure systems, protecting sensitive data, and complying with legal or regulatory requirements.
Question 23:
Which type of attack targets a web application by injecting malicious SQL statements into input fields?
A. Cross-site scripting (XSS)
B. SQL Injection
C. Command Injection
D. Directory Traversal
Answer: B. SQL Injection
Explanation:
SQL Injection (SQLi) is a type of web application attack where an attacker manipulates input fields or URLs to inject malicious Structured Query Language (SQL) commands into a backend database. The goal is typically to bypass authentication, extract sensitive data, modify or delete records, or gain administrative access. SQLi attacks exploit improper input validation and inadequate parameterization of queries in web applications.
For example, if a login form directly concatenates user input into a query without sanitization, an attacker could input ' OR 1=1 -- to bypass authentication, giving them unauthorized access. Another common SQLi variant is Union-based SQLi, which combines the results of a legitimate query with the attacker’s crafted query to extract information from other database tables. Attackers may also exploit blind SQL injection, where error messages or response timings reveal database structure without directly displaying data.
SQLi is different from cross-site scripting (XSS), which targets client-side scripts to execute malicious code in a user’s browser. Command injection exploits operating system commands, while directory traversal allows unauthorized access to restricted files. SQLi specifically manipulates database queries, making it one of the most serious threats to web applications.
Mitigation strategies include parameterized queries, stored procedures, input validation, and least privilege access for database accounts. Regular code reviews, penetration testing, and web application firewalls (WAFs) can further reduce risk. SQLi is highly relevant to the CAS-005 exam because web application security is a key domain, and understanding how to prevent injection attacks demonstrates both defensive knowledge and compliance with secure coding best practices. Organizations often combine technical controls with developer training and auditing to ensure robust SQLi mitigation.
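The difference between string concatenation and a parameterized query can be demonstrated with Python's built-in sqlite3 module (the table and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x1')")

payload = "' OR 1=1 --"   # classic authentication-bypass input

# Vulnerable: concatenation lets the payload rewrite the query itself.
vulnerable = f"SELECT * FROM users WHERE name = '{payload}'"
rows_vulnerable = conn.execute(vulnerable).fetchall()   # returns every row

# Safe: the ? placeholder binds the payload as literal data, not SQL.
rows_safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)
).fetchall()                                            # returns nothing

print(len(rows_vulnerable), len(rows_safe))  # 1 0
```

The parameterized version never interprets the attacker's quote or comment characters as SQL, which is exactly why it is the first-line SQLi defense.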
Question 24:
Which concept ensures that a user cannot deny performing a specific action, such as sending an email or making a transaction?
A. Authentication
B. Confidentiality
C. Non-repudiation
D. Integrity
Answer: C. Non-repudiation
Explanation:
Non-repudiation is a security principle that guarantees that a user or entity cannot deny performing a specific action, such as sending an email, initiating a financial transaction, or signing a digital document. It provides proof of origin, intent, and integrity, ensuring accountability and legal defensibility. Non-repudiation is typically achieved using cryptographic techniques, including digital signatures, asymmetric encryption, and public key infrastructure (PKI).
For instance, when a user digitally signs a document using their private key, anyone with access to their public key can verify the signature, proving that the user indeed performed the action and that the content has not been altered. This prevents disputes over the authenticity or origin of messages or transactions. Non-repudiation is critical in environments that require audit trails, such as banking, e-commerce, healthcare, and government systems, where accountability and legal compliance are essential.
Non-repudiation differs from authentication, which merely confirms identity, or integrity, which ensures that data has not been altered. It also complements confidentiality, but it focuses on proving that actions or messages originated from a specific source. Common implementations of non-repudiation include email signing with S/MIME or PGP, code signing, and secure transaction logging.
Without non-repudiation, users could deny sending malicious or unauthorized instructions, undermining security and trust. Organizations must implement strong controls, such as secure key management, audit logging, and timestamping, to support non-repudiation in both technical and legal contexts. CAS-005 emphasizes non-repudiation as part of understanding cryptography, authentication, and accountability frameworks. It ensures that systems are designed to log, verify, and enforce accountability, providing legal and operational assurance that actions can be traced to responsible entities reliably.
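The sign-with-private-key, verify-with-public-key flow above can be illustrated with textbook RSA on deliberately tiny primes; this is a teaching toy only (real non-repudiation uses PKI-managed keys and padded schemes such as RSA-PSS), and the message is hypothetical:

```python
import hashlib

# Toy "textbook RSA" keypair with tiny primes -- NOT secure, illustration only.
p, q = 61, 53
n = p * q                              # modulus, part of the public key
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (kept secret)

def sign(message: bytes) -> int:
    """Only the private-key holder can produce this value."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone with the public key (e, n) can check the signature."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = b"transfer $100 to account 42"
sig = sign(msg)
print(verify(msg, sig))               # True: origin and content verified
print(verify(msg, (sig + 1) % n))     # False: a forged signature is rejected
```

Because only the signer holds d, a valid signature is evidence the signer produced the message, which is the basis of non-repudiation.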
Question 25:
Which of the following is a primary benefit of implementing network segmentation?
A. Reduces the need for encryption
B. Limits the spread of malware or attacks
C. Eliminates the need for firewalls
D. Replaces intrusion detection systems
Answer: B. Limits the spread of malware or attacks
Explanation:
Network segmentation is the practice of dividing a network into multiple smaller segments or subnets to enhance security, manageability, and performance. By isolating different parts of a network, organizations can control access, monitor traffic, and limit the lateral movement of attackers if a compromise occurs. Each segment can enforce its own security policies, reducing exposure of sensitive systems to untrusted devices or users.
The primary security benefit of segmentation is that it limits the spread of malware, ransomware, or other attacks. For example, if one segment of a network is compromised, attackers cannot automatically access other critical areas, such as financial servers, human resources systems, or intellectual property repositories. This containment reduces overall risk and allows incident response teams to isolate affected segments while maintaining business continuity in other areas.
Segmentation does not eliminate the need for encryption, as sensitive data in transit still requires protection, nor does it replace firewalls or intrusion detection systems. In fact, segmentation often works in conjunction with firewalls, access control lists (ACLs), VLANs, and intrusion prevention/detection systems (IPS/IDS) to enforce policy at the boundaries of each segment. Organizations may implement physical segmentation, such as separate switches or routers, or logical segmentation, such as VLANs, software-defined networking, or microsegmentation within cloud environments.
Beyond security, segmentation can improve network performance, simplify troubleshooting, and support regulatory compliance by isolating sensitive systems like payment card environments or personally identifiable information (PII). Properly designed network segmentation aligns with the CAS-005 objectives by demonstrating defense-in-depth strategy, risk mitigation, and controlled access, which are foundational for enterprise network security.
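A simplified model of segment-boundary policy enforcement can be sketched with Python's ipaddress module; the subnet names, addresses, and allow-list are hypothetical:

```python
import ipaddress

# Hypothetical segments: each VLAN/subnet is its own policy boundary.
segments = {
    "user_lan": ipaddress.ip_network("10.0.10.0/24"),
    "finance":  ipaddress.ip_network("10.0.20.0/24"),
    "guest":    ipaddress.ip_network("10.0.30.0/24"),
}
# Which segments may initiate traffic into which (everything else is denied).
allowed = {("user_lan", "finance")}

def segment_of(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    return next(name for name, net in segments.items() if addr in net)

def permitted(src_ip: str, dst_ip: str) -> bool:
    src, dst = segment_of(src_ip), segment_of(dst_ip)
    return src == dst or (src, dst) in allowed

print(permitted("10.0.10.5", "10.0.20.9"))  # True: explicitly allowed
print(permitted("10.0.30.7", "10.0.20.9"))  # False: guest is contained
```

A compromised guest device cannot reach the finance segment at all, which is the lateral-movement containment the explanation describes; in practice this logic lives in firewalls and ACLs at segment boundaries.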
Question 26:
Which of the following protocols provides secure communication over an unsecured network by encrypting traffic at the transport layer?
A. HTTP
B. SSL/TLS
C. FTP
D. Telnet
Answer: B. SSL/TLS
Explanation:
SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security) are cryptographic protocols that provide secure communication over unsecured networks, primarily by encrypting data transmitted between clients and servers. Operating at the transport layer, SSL/TLS protects the confidentiality and integrity of data in transit, making it a fundamental mechanism for secure web browsing, email transmission, VPN connections, and other client-server communications.
TLS works by establishing a secure session through a process known as the TLS handshake, which authenticates the server, negotiates encryption algorithms, and optionally authenticates the client. Public key cryptography is often used during this handshake to exchange symmetric session keys, allowing high-speed encryption for the duration of the session. Once the session is established, all subsequent data is encrypted and cannot be intercepted or altered by unauthorized parties.
TLS ensures confidentiality (preventing eavesdroppers from reading transmitted data), integrity (protecting messages from tampering), and authentication (verifying the identity of the server and, optionally, the client). This makes SSL/TLS distinct from protocols like HTTP, FTP, or Telnet, which transmit data in plaintext and are vulnerable to interception, session hijacking, and credential theft. While secure alternatives like HTTPS (HTTP over TLS) and FTPS (FTP over TLS) exist, using standard HTTP or FTP without encryption exposes sensitive information.
SSL/TLS also supports features such as forward secrecy, certificate validation, and digital signatures to reinforce trust and prevent man-in-the-middle attacks. In the context of CompTIA CAS-005, understanding SSL/TLS is crucial for implementing secure communication channels, protecting sensitive data during transmission, and ensuring compliance with industry standards and regulations. Proper configuration, certificate management, and awareness of protocol versions (e.g., TLS 1.2 or 1.3) are essential for mitigating vulnerabilities associated with older SSL/TLS implementations.
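The configuration points above (certificate validation, hostname checking, minimum protocol version) map directly onto Python's standard ssl module; this sketch only builds and inspects a client context, no network connection is made:

```python
import ssl

# Default client context: certificate validation and hostname checking are on.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True: server cert is required
print(ctx.check_hostname)                     # True: cert must match the host

# Refuse legacy protocol versions to mitigate downgrade to weak SSL/TLS.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Such a context would then be passed to `ctx.wrap_socket(sock, server_hostname="example.com")` to perform the TLS handshake; weakening any of these defaults reintroduces exactly the man-in-the-middle risks the explanation warns about.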
Question 27:
Which type of malware is designed to gain persistent control over a system while hiding its presence from detection mechanisms?
A. Adware
B. Rootkit
C. Trojan
D. Worm
Answer: B. Rootkit
Explanation:
A rootkit is a type of malware specifically designed to provide persistent unauthorized access to a system while remaining hidden from detection mechanisms. Rootkits can operate at multiple levels, including user mode, kernel mode, bootloader, or firmware, allowing attackers to maintain control even after system reboots. The stealthy nature of rootkits makes them particularly dangerous, as traditional antivirus or endpoint detection systems may not easily identify them.
Rootkits are often used to conceal other malicious software, such as keyloggers, spyware, or backdoors, enabling attackers to exfiltrate data, modify system behavior, or manipulate processes without the user or security software noticing. They differ from Trojans, which disguise themselves as legitimate applications to trick users into execution, and worms, which self-replicate to spread across networks. Adware, while potentially intrusive, primarily serves to deliver unwanted advertisements and rarely provides deep system control or stealth.
Detection of rootkits requires specialized tools and techniques, such as behavioral analysis, integrity checking, memory inspection, and forensic tools. Removing rootkits can be complex; in many cases, complete reinstallation of the operating system is necessary to fully eradicate them. Preventive measures include keeping software up to date, minimizing administrative privileges, using secure boot mechanisms, and employing hardware-based security controls.
Rootkits pose a significant threat to confidentiality, integrity, and sometimes availability, aligning with the CIA triad objectives covered in CAS-005. Their use in targeted attacks, advanced persistent threats (APTs), and ransomware campaigns underscores the importance of proactive monitoring, layered security defenses, and strong endpoint protection. Understanding rootkits also highlights the critical need for incident response planning, forensic readiness, and continuous threat intelligence, as they can persist undetected for long periods and facilitate other malicious activities within a network.
Question 28:
Which access control model bases permissions on centrally defined rules and enforces policies that cannot be altered by end users?
A. Discretionary Access Control (DAC)
B. Mandatory Access Control (MAC)
C. Role-Based Access Control (RBAC)
D. Attribute-Based Access Control (ABAC)
Answer: B. Mandatory Access Control (MAC)
Explanation:
Mandatory Access Control (MAC) is an access control model where access decisions are determined by centrally defined policies rather than individual user discretion. In MAC, all subjects (users or processes) and objects (files, data, resources) are assigned security labels or classifications, such as Confidential, Secret, or Top Secret. Access is then granted based on these labels and the policies set by the organization’s security administrators. End users cannot override or modify access rights, making MAC a highly controlled and rigid access model suitable for environments with strict security requirements.
MAC is commonly used in government, military, and intelligence environments, where data classification and regulatory compliance are critical. For example, a user with a Secret clearance cannot access Top Secret documents unless explicitly allowed by policy. This model helps enforce information flow control and prevents unauthorized data disclosure, supporting confidentiality and integrity within sensitive systems.
In contrast, Discretionary Access Control (DAC) allows resource owners to determine access permissions, which can introduce risks if users misconfigure settings. Role-Based Access Control (RBAC) assigns permissions based on job roles rather than labels, providing flexibility while maintaining manageability. Attribute-Based Access Control (ABAC) evaluates multiple attributes such as user role, location, time, and device type to make access decisions dynamically.
Implementing MAC requires careful policy design, secure labeling of data, and thorough understanding of organizational workflows. Challenges include administrative overhead, potential bottlenecks for user access requests, and the need for comprehensive classification standards. Despite these challenges, MAC provides strong, enforceable security guarantees by preventing users from inadvertently or maliciously bypassing access controls. In CAS-005, understanding MAC is essential for demonstrating knowledge of secure access models, data classification, and compliance-driven security implementations.
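The label-comparison logic at the heart of MAC can be sketched as a Bell-LaPadula-style "no read up" check; the level ordering is standard, but the function is an illustrative simplification (real MAC systems also evaluate compartments and write rules):

```python
# Classification lattice; a higher number means more sensitive.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance: str, object_label: str) -> bool:
    """Bell-LaPadula 'no read up': read only at or below your clearance."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]

print(can_read("Secret", "Confidential"))  # True: reading down is permitted
print(can_read("Secret", "Top Secret"))    # False: policy, not the user, decides
```

The key MAC property is visible here: the decision comes entirely from centrally assigned labels, and there is no code path by which the resource owner or end user can grant an exception.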
Question 29:
Which type of backup captures only data that has changed since the last full backup, reducing storage requirements while simplifying restoration compared to incremental backups?
A. Full backup
B. Differential backup
C. Incremental backup
D. Snapshot
Answer: B. Differential backup
Explanation:
A differential backup is a backup strategy in which only the data that has changed since the last full backup is copied. This method strikes a balance between storage efficiency and ease of restoration. Unlike incremental backups, which copy data changed since the last backup of any type and require a chain of multiple backups to restore, differential backups allow restoration using only the latest full backup and the most recent differential backup, simplifying recovery while reducing the total number of backup sets needed.
Differential backups are commonly used in enterprise environments to maintain frequent backups without consuming the same amount of storage as full backups. For example, if a full backup is performed on Sunday and differential backups occur on Monday, Tuesday, and Wednesday, each differential backup captures changes since Sunday, making Wednesday’s differential backup sufficient to restore all changes in combination with the original full backup. This reduces restoration complexity compared to incremental backups, which would require restoring the full backup plus every incremental backup in sequence.
Full backups, by contrast, copy all data every time, consuming significant storage and time. Snapshots capture the system state at a particular point, allowing quick rollback but not necessarily long-term archival. Differential backups are especially useful for disaster recovery scenarios, providing a reliable recovery point without overwhelming storage or recovery procedures.
Organizations implementing differential backup strategies should consider backup frequency, retention policies, storage capacity, and automated scheduling to ensure continuity and data availability. Differential backups are also critical for compliance and regulatory requirements, as they maintain data integrity while optimizing performance and storage utilization. In CAS-005, understanding differential backups demonstrates knowledge of backup types, disaster recovery planning, and data protection strategies, essential for both operational security and business continuity.
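The Sunday-through-Wednesday scenario above can be simulated to show why differential restores need fewer backup sets than incremental ones; the file names and change sets are invented for illustration:

```python
# Files modified on each day after the Sunday full backup.
full_backup = {"a", "b", "c"}             # Sunday full backup
changes = {"mon": {"a"}, "tue": {"d"}, "wed": {"b", "e"}}

# Differential: each backup holds everything changed since the FULL backup,
# so Wednesday's differential is the union of all changes so far.
diff_wed = changes["mon"] | changes["tue"] | changes["wed"]

# Incremental: each backup holds only changes since the PREVIOUS backup.
incrementals = [changes["mon"], changes["tue"], changes["wed"]]

# Restoring on Thursday:
restore_differential = [full_backup, diff_wed]        # 2 backup sets
restore_incremental = [full_backup] + incrementals    # 4 backup sets

print(len(restore_differential), len(restore_incremental))  # 2 4
```

The trade-off is also visible: each differential grows as the week goes on (Wednesday's holds four files), while each incremental stays small but lengthens the restore chain.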
Question 30:
Which authentication factor relies on something a user possesses, such as a smart card, token, or mobile device?
A. Knowledge factor
B. Possession factor
C. Inherence factor
D. Location factor
Answer: B. Possession factor
Explanation:
The possession factor is an authentication category based on something the user physically possesses, such as a smart card, hardware token, USB security key, or mobile device that generates one-time passwords (OTPs). Possession-based factors are commonly used as part of multi-factor authentication (MFA) to enhance security by requiring both knowledge-based (passwords) and possession-based verification for access.
In MFA implementations, a possession factor ensures that even if an attacker obtains a password, they cannot log in without the physical device or token. Common examples include RSA SecurID tokens, YubiKeys, or push-based authentication apps that confirm login attempts. Possession factors may also include digital certificates stored on smart cards or USB devices, which cryptographically authenticate users to networks, applications, or encrypted resources.
This factor differs from a knowledge factor, which relies on something the user knows (e.g., a password or PIN), or an inherence factor, which uses biometrics such as fingerprints or facial recognition. Location factors are contextual and assess user access based on network location or geolocation. Combining factors from multiple categories significantly strengthens authentication by mitigating risks associated with stolen passwords, social engineering, or phishing attacks.
Possession factors are widely adopted in corporate and government environments to meet security and compliance requirements. For example, using smart cards for employee logins satisfies many NIST and ISO standards for secure authentication. Organizations must also implement policies for device issuance, secure storage, revocation, and replacement in case of loss or theft to maintain security.
In the context of CompTIA CAS-005, understanding the possession factor is critical for designing robust authentication systems, implementing MFA, and mitigating credential-based attacks. Proper use of possession-based authentication, combined with knowledge and/or inherence factors, provides a layered approach to securing access to sensitive systems and data.
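The one-time passwords generated by hardware tokens and authenticator apps are typically built on HOTP (RFC 4226) or its time-based variant TOTP (RFC 6238). A minimal HOTP implementation fits in a few lines of standard-library Python; the secret below is the RFC 4226 Appendix D test value:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0.
print(hotp(b"12345678901234567890", 0))  # 755224
```

The server stores the same secret and counter; possession of the token (the only place the secret lives) is what the matching code proves. TOTP replaces the counter with the current Unix time divided by a 30-second step.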
Question 31:
Which wireless security protocol provides encryption and authentication using a pre-shared key and is considered stronger than WEP?
A. WPA2
B. WEP
C. WPA
D. TKIP
Answer: A. WPA2
Explanation:
WPA2 (Wi-Fi Protected Access 2) is a wireless security protocol designed to provide strong encryption and authentication for Wi-Fi networks. WPA2 replaced the older WEP (Wired Equivalent Privacy) standard, which was found to be insecure due to vulnerabilities in its RC4 encryption and weak initialization vectors. WPA2 is considered stronger because it uses AES (Advanced Encryption Standard) for encryption, offering both confidentiality and integrity, making it highly resistant to attacks such as key recovery and packet sniffing.
WPA2 supports both pre-shared key (PSK) mode for personal networks and enterprise mode, which leverages 802.1X authentication with a RADIUS server for corporate networks. PSK mode uses a shared password among authorized devices, while enterprise mode provides unique credentials for each user, enhancing accountability and access control. AES-based encryption ensures that traffic is secure against modern attacks, unlike WEP, which can be compromised in minutes with readily available tools.
Earlier iterations like WPA used TKIP (Temporal Key Integrity Protocol) to address WEP vulnerabilities, but TKIP is less secure than AES and is largely deprecated. WPA2 provides support for CCMP (Counter Mode with Cipher Block Chaining Message Authentication Code Protocol), which ensures strong data confidentiality and integrity verification. WPA3, the successor, further enhances security with features like Simultaneous Authentication of Equals (SAE) and improved encryption for open networks.
In the context of CAS-005, understanding WPA2 is crucial for securing wireless networks against eavesdropping, unauthorized access, and man-in-the-middle attacks. Best practices include strong passphrases, disabling legacy WEP/WPA protocols, enabling enterprise authentication in corporate settings, and monitoring for rogue access points. Proper configuration of WPA2 aligns with security principles such as confidentiality, integrity, and authentication, helping organizations mitigate common wireless threats while maintaining secure connectivity.
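In WPA2-PSK mode, the passphrase is never used directly: IEEE 802.11i derives a 256-bit pairwise master key from it with PBKDF2-HMAC-SHA1, using the SSID as the salt and 4096 iterations. That derivation is a one-liner in Python (the passphrase and SSID here are invented):

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the WPA2-PSK pairwise master key per IEEE 802.11i:
    PBKDF2-HMAC-SHA1, SSID as salt, 4096 iterations, 32-byte output."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                               4096, dklen=32)

psk = wpa2_psk("correct horse battery staple", "OfficeWiFi")
print(len(psk))  # 32: the 256-bit key actually used, not the passphrase
```

Because 4096 iterations is cheap on modern hardware, an attacker who captures a handshake can brute-force short passphrases offline, which is why the best practice above calls for strong passphrases (or enterprise 802.1X authentication instead of PSK).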
Question 32:
Which incident response phase involves determining the cause of an incident, its impact, and gathering evidence for remediation?
A. Containment
B. Eradication
C. Identification
D. Lessons Learned
Answer: C. Identification
Explanation:
The identification phase in incident response is the initial step in recognizing and understanding a security incident. This phase involves detecting unusual behavior, confirming that an incident has occurred, and determining the scope, impact, and type of threat. Proper identification ensures that the organization can respond effectively while minimizing damage to systems, data, and operations.
During identification, security teams use tools such as intrusion detection systems (IDS), log analysis, network monitoring, SIEM platforms, and endpoint detection and response (EDR) solutions to collect evidence of suspicious activity. Analysts determine whether the activity represents a genuine security incident, such as malware infection, data breach, unauthorized access, or denial-of-service attack, and assess which systems and data are affected. Correctly identifying the incident type guides subsequent response actions, including containment, eradication, and recovery.
Identification also involves documenting events, preserving evidence, and establishing timelines to support later investigation and potential legal or regulatory compliance. Misidentifying an incident can lead to delayed responses, increased damage, or unnecessary disruption of normal operations. This phase is distinct from containment, which focuses on limiting the spread of an incident, and eradication, which removes the threat. The lessons learned phase occurs after the incident is resolved to improve future defenses and response strategies.
Effective identification requires trained personnel, robust monitoring systems, and incident detection policies that define what constitutes suspicious or anomalous behavior. By thoroughly identifying the root cause, impact, and affected systems, organizations can prioritize remediation, mitigate further risk, and ensure accurate reporting. In CAS-005, understanding identification is essential for incident response planning, demonstrating competency in detecting, analyzing, and responding to security incidents, and ensuring continuity of operations during and after an attack.
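The kind of detection rule that feeds the identification phase can be sketched as a tiny log analysis: count authentication failures per source and flag anything over a threshold. The log entries and threshold are invented for illustration; a SIEM applies the same idea at scale:

```python
from collections import Counter

# Hypothetical auth-log events: (timestamp, source_ip, result)
events = [
    ("09:00:01", "203.0.113.9", "FAIL"),
    ("09:00:02", "203.0.113.9", "FAIL"),
    ("09:00:02", "10.0.0.5",    "OK"),
    ("09:00:03", "203.0.113.9", "FAIL"),
    ("09:00:04", "203.0.113.9", "FAIL"),
    ("09:00:05", "203.0.113.9", "FAIL"),
]

THRESHOLD = 5   # tuned to the environment's normal failure rate
failures = Counter(ip for _, ip, result in events if result == "FAIL")
suspects = [ip for ip, count in failures.items() if count >= THRESHOLD]
print(suspects)  # candidate brute-force sources for an analyst to triage
```

Flagged sources are leads, not verdicts: the identification phase still requires an analyst to confirm a genuine incident, scope it, and preserve the supporting evidence.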
Question 33:
Which cryptographic function produces a fixed-length output from input data and is used to verify integrity but cannot be reversed to reveal the original data?
A. Symmetric encryption
B. Asymmetric encryption
C. Hashing
D. Digital signatures
Answer: C. Hashing
Explanation:
Hashing is a cryptographic function that transforms input data of arbitrary length into a fixed-length output, called a hash value or digest. Hashing is one-way, meaning it is computationally infeasible to reverse the output to obtain the original data. Hash functions are primarily used to ensure data integrity, as any modification to the input will produce a significantly different hash value, signaling that the data may have been altered.
Common hash algorithms include MD5, SHA-1, SHA-256, and SHA-3. For example, when downloading software or files, a provided hash value allows users to verify that the file has not been tampered with during transmission. Hashing is also used in password storage, where passwords are hashed before storing in databases, ensuring that even if the database is compromised, attackers cannot retrieve plain-text passwords.
Hashing differs from symmetric and asymmetric encryption, which are reversible with the proper key and are used for confidentiality, and from digital signatures, which combine hashing with encryption to provide authentication, non-repudiation, and integrity. While hashing alone cannot provide authenticity, it is often used as part of digital signatures, message authentication codes (MACs), and integrity verification systems.
Strong hash algorithms are essential because weak or compromised algorithms (e.g., MD5, SHA-1) are vulnerable to collision attacks, where two different inputs produce the same hash value. Security best practices involve using modern, collision-resistant algorithms and incorporating salting techniques for password hashing to increase security against precomputed attacks.
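The salting technique just described can be sketched with PBKDF2 from the standard library; the password and iteration count are illustrative (OWASP currently suggests around 600,000 iterations for PBKDF2-HMAC-SHA256):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Salted PBKDF2: slows brute force and defeats precomputed rainbow tables."""
    salt = salt or os.urandom(16)          # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))   # True
print(verify_password("hunter3", salt, digest))   # False
```

The per-password salt means two users with the same password store different digests, and the high iteration count makes each guess expensive for an attacker who steals the database.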
In the context of CompTIA CAS-005, hashing is a fundamental concept for ensuring data integrity, secure password management, and digital signature verification. It enables verification of transmitted or stored information without revealing the original content, forming the basis of trust, authentication, and data protection in modern information security architectures.
Question 34:
Which network attack involves an attacker secretly intercepting and potentially altering communications between two parties?
A. Phishing
B. Man-in-the-Middle (MITM)
C. Denial-of-Service (DoS)
D. Replay attack
Answer: B. Man-in-the-Middle (MITM)
Explanation:
A Man-in-the-Middle (MITM) attack occurs when an attacker intercepts, relays, or modifies communications between two parties without their knowledge. MITM attacks compromise confidentiality, integrity, and authentication, as attackers can eavesdrop on sensitive information, alter messages in transit, or impersonate one of the participants. Common MITM scenarios include unencrypted Wi-Fi networks, compromised routers, ARP poisoning, DNS spoofing, and SSL/TLS stripping.
During a MITM attack, the attacker may insert themselves transparently between the sender and recipient. For instance, an attacker on a public Wi-Fi network could intercept login credentials, financial transactions, or confidential emails by acting as a proxy. MITM attacks differ from phishing, which deceives users into voluntarily providing information, and Denial-of-Service (DoS) attacks, which aim to disrupt availability rather than intercept communications. A replay attack, meanwhile, involves resending previously captured data to gain unauthorized access or repeat a transaction, rather than modifying the data in real time.
Mitigation strategies against MITM include strong encryption protocols (e.g., HTTPS/TLS, VPNs), certificate validation, public key pinning, and user awareness of suspicious network activity. Organizations may also use network intrusion detection systems (NIDS) and secure DNS configurations to detect or prevent MITM attempts. Multi-factor authentication can further reduce the impact, as stolen credentials alone may be insufficient for the attacker to gain access.
Understanding MITM attacks is crucial for CAS-005 exam objectives because it illustrates vulnerabilities in communication channels, the importance of confidentiality and integrity, and the need for both technical and procedural defenses. MITM attacks highlight the intersection of network security, cryptography, and user awareness, emphasizing defense-in-depth strategies to prevent unauthorized interception and modification of critical communications.
Question 35:
Which type of attack attempts to overwhelm a system, network, or service with traffic to make it unavailable to legitimate users?
A. Phishing
B. Denial-of-Service (DoS)
C. SQL Injection
D. Cross-Site Scripting (XSS)
Answer: B. Denial-of-Service (DoS)
Explanation:
A Denial-of-Service (DoS) attack aims to render a system, network, or application unavailable to legitimate users by overwhelming resources such as CPU, memory, bandwidth, or application connections. In a Distributed Denial-of-Service (DDoS) attack, multiple compromised systems (often part of a botnet) coordinate to amplify the attack, making it more difficult to mitigate and increasing its impact.
DoS attacks can target web servers, DNS servers, email servers, or entire network infrastructures. Techniques include flooding with excessive TCP/UDP packets, HTTP request floods, SYN floods, and amplification attacks that exploit vulnerable protocols. Unlike phishing, which attempts to steal credentials, or SQL injection and XSS, which exploit application vulnerabilities, DoS primarily targets availability, one of the three CIA triad principles.
Detection involves network monitoring, traffic analysis, and intrusion detection systems, which can identify abnormal spikes in traffic or suspicious patterns. Mitigation strategies include traffic filtering, rate limiting, blackholing, load balancing, scrubbing services, and DDoS protection appliances. Organizations often combine preventative controls with incident response plans to minimize downtime and maintain business continuity.
DoS attacks may also be used as a distraction for other malicious activities, such as breaches, malware deployment, or data exfiltration, highlighting the importance of comprehensive monitoring and layered defenses. Properly designed network architecture, redundancy, and segmentation can further reduce the impact of DoS events.
In CAS-005, understanding DoS attacks demonstrates knowledge of threat types, network vulnerabilities, mitigation techniques, and business impact. Candidates are expected to recognize how availability can be compromised, evaluate risks, and recommend appropriate security controls to ensure continuous operation of critical systems.
Question 36:
Which type of malware replicates itself across networks without user interaction, often consuming bandwidth and system resources?
A. Worm
B. Trojan
C. Rootkit
D. Adware
Answer: A. Worm
Explanation:
A worm is a self-replicating malware program designed to spread across networks without user intervention. Unlike Trojans, which require users to execute malicious programs, worms exploit vulnerabilities in operating systems, applications, or network protocols to propagate automatically. Worms often consume network bandwidth, system memory, and processing resources, leading to degraded performance, system crashes, or service outages.
Worms may also deliver payloads, such as ransomware, backdoors, or spyware, enabling attackers to compromise additional systems after initial propagation. High-profile examples include ILOVEYOU, SQL Slammer, and WannaCry, which caused widespread disruption by leveraging vulnerabilities in operating systems and network services. Worms differ from rootkits, which hide and maintain persistent control; adware, which primarily delivers unwanted ads; and Trojans, which require user execution and do not self-replicate.
Detection and mitigation of worms involve patch management, as worms commonly exploit known software vulnerabilities. Firewalls, intrusion prevention systems (IPS), antivirus software, network segmentation, and monitoring tools help prevent propagation. Educating users about security best practices and limiting unnecessary network exposure also reduce worm attack surfaces.
From a CAS-005 perspective, worms are relevant to malware types, attack vectors, and mitigation strategies. Understanding worms emphasizes defense-in-depth, highlighting the importance of combining technical controls, user training, and network monitoring. Effective response requires isolating infected systems, restoring from clean backups, and applying patches to prevent reinfection. Worms illustrate the potential impact of automated malware propagation, underscoring the need for proactive vulnerability management and continuous security monitoring to maintain availability, integrity, and confidentiality in enterprise networks.
Question 37:
Which type of security control includes policies, procedures, and employee training to influence organizational behavior?
A. Technical control
B. Administrative control
C. Physical control
D. Detective control
Answer: B. Administrative control
Explanation:
Administrative controls are security measures that focus on governing organizational behavior through policies, procedures, and training. Unlike technical or physical controls, which rely on technology or barriers, administrative controls influence how personnel interact with systems, data, and processes. Examples include acceptable use policies, security awareness training, incident response procedures, background checks, and job rotation policies.
Administrative controls are critical because human error and insider threats are major contributors to security incidents. By educating employees, setting expectations, and formalizing procedures, organizations can reduce risk and improve compliance with laws, regulations, and internal standards. For instance, security awareness programs teach staff to recognize phishing emails, handle sensitive data securely, and report incidents promptly. Similarly, formal access request and approval procedures prevent unauthorized access and enforce accountability.
Administrative controls differ from technical controls, such as firewalls, antivirus software, and encryption, which enforce security through technology. They also differ from physical controls, such as locks, cameras, or security guards, which protect the physical environment. Detective controls, such as monitoring and auditing systems, detect incidents but do not proactively guide behavior.
In the CAS-005 exam, understanding administrative controls highlights the human factor in cybersecurity. Effective security requires a combination of administrative, technical, and physical controls to form a comprehensive defense-in-depth strategy. Administrative measures are often foundational for regulatory compliance frameworks, including HIPAA, PCI DSS, and ISO 27001, ensuring employees follow secure practices and organizations maintain governance standards. By implementing clear policies, procedures, and training programs, organizations can minimize errors, reduce insider threats, and create a culture of security awareness, complementing technical defenses and enhancing overall resilience against cyber threats.
Question 38:
Which type of access control assigns permissions based on a user’s job function, simplifying management and reducing administrative overhead?
A. Discretionary Access Control (DAC)
B. Mandatory Access Control (MAC)
C. Role-Based Access Control (RBAC)
D. Rule-Based Access Control
Answer: C. Role-Based Access Control (RBAC)
Explanation:
Role-Based Access Control (RBAC) is an access control model in which permissions are assigned to roles rather than individual users, and users acquire permissions by being assigned to these roles. This approach simplifies administrative management, especially in large organizations with numerous users and resources, by allowing administrators to manage access collectively for a role instead of configuring permissions individually.
RBAC reduces the likelihood of permission misconfiguration, enforces the principle of least privilege, and supports compliance requirements by controlling access based on organizational function. For example, an HR role may have access to employee records, while a finance role has access to payroll systems. Users changing roles within the organization only need to be reassigned to the new role, automatically updating permissions.
This differs from Discretionary Access Control (DAC), which allows owners to grant access to resources, increasing the risk of inconsistent permissions. Mandatory Access Control (MAC) relies on centrally defined policies and classifications, which cannot be altered by users, and is more rigid. Rule-Based Access Control enforces access based on predefined rules, often used for firewall policies or automated network systems, but is less tied to organizational structure.
RBAC also enables organizations to implement segregation of duties, ensuring that no single user has excessive privileges that could lead to fraud or errors. It aligns with CAS-005 objectives by demonstrating how access control models enforce security principles, reduce risk, and maintain compliance. Implementing RBAC involves careful role definition, periodic review, and audit trails to ensure permissions remain appropriate over time. Combining RBAC with technical controls like MFA further strengthens access security.
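The RBAC model described above can be sketched in a few lines: permissions attach to roles, users hold roles, and an access check walks from user to role to permission. The role and permission names here are hypothetical examples:

```python
# Permissions are assigned to roles, never directly to users.
ROLE_PERMISSIONS = {
    "hr":      {"read:employee_records", "update:employee_records"},
    "finance": {"read:payroll", "run:payroll"},
}

# Users acquire permissions only through role membership.
user_roles = {"alice": {"hr"}, "bob": {"finance"}}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles.get(user, set()))

assert is_allowed("alice", "read:employee_records")
assert not is_allowed("alice", "run:payroll")

# A job change is a single role reassignment, not dozens of
# per-permission edits; permissions update automatically.
user_roles["alice"] = {"finance"}
assert is_allowed("alice", "run:payroll")
```

The final reassignment shows why RBAC scales: administrators manage a handful of roles rather than per-user permission lists, which also makes periodic access reviews and segregation-of-duties checks tractable.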
Question 39:
Which security principle ensures that systems and data are available to authorized users when needed, even during attacks or failures?
A. Confidentiality
B. Integrity
C. Availability
D. Non-repudiation
Answer: C. Availability
Explanation:
Availability is a core principle of the CIA triad, alongside confidentiality and integrity. It ensures that systems, applications, and data are accessible to authorized users whenever needed, even during cyberattacks, hardware failures, or other disruptions. Availability supports business continuity and operational efficiency, making it essential for both technical and strategic security planning.
Availability is threatened by various factors, including Denial-of-Service (DoS) attacks, distributed DoS (DDoS) attacks, ransomware, hardware failures, network outages, and misconfigurations. Organizations implement measures to ensure availability through redundancy, fault-tolerant systems, load balancing, backup solutions, disaster recovery plans, and continuous monitoring. For example, redundant servers or network paths prevent single points of failure, while offline backups ensure data can be restored quickly in case of ransomware encryption.
Availability differs from confidentiality, which focuses on restricting unauthorized access, and integrity, which ensures data is not altered maliciously. While availability does not prevent data breaches, it ensures users can access systems during incidents. Non-repudiation ensures accountability of actions but does not guarantee system access.
High availability is critical for mission-critical systems, such as financial services, healthcare applications, and cloud-based services. Implementing service level agreements (SLAs), disaster recovery sites, and regular testing of backup and failover procedures reinforces availability. Proactive monitoring using network monitoring tools, intrusion detection, and automated failover mechanisms allows organizations to respond quickly to disruptions.
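The redundancy and failover ideas above reduce to a simple pattern: try the primary, and on failure fall back to a replica so the service stays available. A minimal sketch with made-up endpoint names and a fake fetch function standing in for a real network call:

```python
def fetch_with_failover(endpoints, fetch):
    """Try redundant endpoints in order; a single failed server
    does not make the service unavailable."""
    last_error = None
    for endpoint in endpoints:
        try:
            return fetch(endpoint)
        except ConnectionError as exc:
            last_error = exc  # record and try the next replica
    raise RuntimeError("all replicas down") from last_error

def fake_fetch(endpoint):
    # Simulate the primary being down while the replica still answers.
    if endpoint == "primary.example.com":
        raise ConnectionError("primary down")
    return f"data from {endpoint}"

result = fetch_with_failover(
    ["primary.example.com", "replica.example.com"], fake_fetch)
print(result)  # data from replica.example.com
```

Real deployments push this logic into load balancers and DNS failover with health checks, but the principle is the same: eliminate single points of failure so availability survives individual component loss.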
In CAS-005, understanding availability emphasizes the importance of resilient infrastructure, business continuity, and redundancy. Candidates must recognize threats to availability, implement controls, and plan for scenarios that could disrupt operations. Ensuring availability complements confidentiality and integrity, forming a balanced approach to protecting systems and data in real-world environments.
Question 40:
Which security device monitors network traffic, identifies threats, and can automatically block malicious activity in real time?
A. Intrusion Detection System (IDS)
B. Intrusion Prevention System (IPS)
C. Firewall
D. SIEM
Answer: B. Intrusion Prevention System (IPS)
Explanation:
An Intrusion Prevention System (IPS) is a proactive network security device that monitors traffic, detects suspicious activity, and automatically blocks identified threats in real time. Unlike Intrusion Detection Systems (IDS), which only alert administrators when anomalous activity occurs, IPS devices sit inline in the traffic path, allowing them to block attacks before they reach systems, applications, or data.
IPS systems use various detection methods, including signature-based detection, which identifies known attack patterns; anomaly-based detection, which flags deviations from normal network behavior; and behavioral detection, which observes unusual actions that may indicate malicious activity. Combining these techniques enables IPS devices to respond to both known and emerging threats, including malware propagation, denial-of-service attacks, brute-force attempts, and zero-day exploits.
While firewalls enforce traffic rules based on ports, protocols, or IP addresses, they lack deep inspection of content and context. SIEM (Security Information and Event Management) solutions aggregate logs, correlate events, and provide analytics but do not inherently block attacks. IPS complements firewalls and SIEM by providing active defense, often integrated into a defense-in-depth strategy with layered security controls.
Deployment of IPS devices involves inline placement on critical network segments, ongoing signature updates, and proper tuning to reduce false positives that could disrupt legitimate traffic. Organizations also use network segmentation, redundancy, and monitoring in conjunction with IPS to enhance resilience.
In CAS-005, understanding IPS is essential because it demonstrates the ability to actively protect networks, detect and prevent attacks, and integrate security solutions into a layered defense approach. Knowledge of IPS highlights real-time threat mitigation, traffic inspection, and incident response, emphasizing how organizations maintain confidentiality, integrity, and availability while minimizing operational disruption.