CompTIA SY0-701 Security+ Exam Dumps and Practice Test Questions Set 6 Q101-120


Q101. 

A security operations analyst notices that several internal hosts begin communicating with an unfamiliar external domain shortly after users report system sluggishness and browser freezes. The traffic analysis shows encrypted outbound connections over uncommon ports, periodic beaconing, and DNS requests containing long randomized subdomains. The analyst suspects that an attacker has established covert command-and-control communications. Which technique is the attacker most likely using?

A) Data loss prevention bypass
B) Domain generation algorithms
C) ARP cache poisoning
D) SSL stripping

Answer: B) Domain generation algorithms

Explanation:

A) Data loss prevention bypass refers to intentionally circumventing DLP mechanisms in order to transfer sensitive information outside the organization. Although attackers can attempt to evade DLP controls using obfuscation, encryption, compression, or tunneling, this does not fully explain the presence of recurring randomized subdomains or consistent beaconing signals. DLP bypass techniques tend to focus on hiding the content of data exfiltration rather than building dynamic, resilient command-and-control channels. The scenario emphasizes periodic beaconing, unusual domain formats, and continually changing subdomains, which are hallmarks of automated domain generation rather than manual DLP evasion. Additionally, DLP bypass does not inherently create an external domain with randomized strings. These characteristics point toward a malware-driven infrastructure adaptation mechanism rather than a defense-avoidance technique focused solely on content filtering.

B) Domain generation algorithms is correct. A domain generation algorithm (DGA) is a method used by malware to algorithmically produce large numbers of pseudo-random domain names that can be used for command-and-control communication. DGAs help attackers maintain resilience because defenders cannot easily take down all potential domains that malware might attempt to contact. In the scenario, the internal hosts send DNS queries that contain long randomized subdomains, which is a classic indicator of DGA-based activity. DGAs also support periodic beaconing, in which infected hosts regularly attempt to contact domains generated at specific time intervals until an attacker-controlled C2 server responds. The detected encrypted outbound traffic over unusual ports further suggests that the malware is hiding its communication within non-standard channels to avoid detection. In addition, DGAs help malware withstand sinkholing and domain takedowns by security vendors because thousands of potential domains are algorithmically produced every day. This perfectly matches the behavior described: dynamic domains, beaconing, and unusual outbound communication patterns. The presence of browser freezes and system slowdown strongly suggests a malware infection operating in the background, consistent with advanced C2 techniques often paired with DGAs. For these reasons, the attacker is most likely employing a domain generation algorithm.

C) ARP cache poisoning is a local network attack aimed at redirecting or intercepting traffic by sending forged ARP messages. It enables man-in-the-middle capabilities but does not produce random external domain queries, encrypted outbound C2 traffic, or beaconing patterns to unknown domains. ARP poisoning affects local network communication—not DNS requests containing randomized subdomains nor malware-controlled external communication channels. Nothing in the scenario suggests that local traffic is being redirected or intercepted at the LAN level. Therefore ARP cache poisoning cannot explain the pattern of suspicious DNS activity and outbound beaconing.

D) SSL stripping involves intercepting HTTPS traffic and downgrading it to unencrypted HTTP to capture sensitive data. Attackers usually perform SSL stripping in man-in-the-middle scenarios during user browsing activity. SSL stripping does not generate randomized domain names, unusual ports, or periodic beaconing. Nor does it account for persistent malware symptoms like system slowdowns or covert external communication. While SSL stripping compromises confidentiality, it is not a technique used to establish long-term command-and-control or evade detection using dynamically generated domains. Therefore SSL stripping is unrelated to the observed behaviors.

When combining all the observed indicators—encrypted outbound traffic on unusual ports, DNS queries with randomized subdomains, periodic beaconing intervals, and observable system performance issues—the most accurate interpretation is the use of domain generation algorithms for malware command-and-control. DGAs are widely employed in modern botnets and advanced malware families to maintain resilient infrastructure, making them the best explanation for this scenario.
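
As a rough illustration of the DGA concept discussed above, the following Python sketch derives a daily list of pseudo-random candidate domains from a date seed; the hashing scheme, label length, and TLD list are illustrative assumptions rather than the behavior of any specific malware family.

```python
import hashlib
from datetime import date

def generate_domains(seed_date, count=10):
    """Derive a daily list of pseudo-random candidate C2 domains (illustrative only)."""
    tlds = [".com", ".net", ".info"]
    domains = []
    for i in range(count):
        seed = f"{seed_date.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        # Long, random-looking labels resemble the DNS names seen in the scenario
        domains.append(digest[:20] + tlds[int(digest[-1], 16) % len(tlds)])
    return domains

# Malware and its operator compute the same list each day, so defenders cannot
# pre-register or block every candidate domain in advance.
print(generate_domains(date.today())[:3])
```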

Q102. 

A financial institution is developing a new online banking platform and wants to ensure that all customer-facing APIs remain secure against unauthorized access, replay attacks, and credential theft. The development team needs a mechanism that validates each request, confirms the legitimacy of the client, prevents tampering, and ensures that expired or intercepted requests cannot be reused. Which technology best satisfies these requirements?

A) OAuth 2.0 with short-lived tokens
B) Static API keys
C) TLS offloading
D) Network address translation

Answer: A) OAuth 2.0 with short-lived tokens

Explanation:

A) OAuth 2.0 with short-lived tokens is correct. OAuth 2.0 is widely used for securing APIs and provides a robust authentication and authorization framework. Short-lived access tokens make replay attacks significantly harder because even if an attacker intercepts a token, it quickly becomes invalid. Additionally, OAuth supports signed tokens (such as JWTs) that ensure integrity protection, preventing tampering or forgery. Features such as refresh tokens, scopes, client validation, and token expiration collectively provide strong protections against unauthorized access and credential misuse. OAuth 2.0 also supports multi-factor authentication, token revocation, and granular authorization policies appropriate for high-security environments like financial institutions. The need for request validation, replay prevention, and tamperproof communication strongly suggests a token-based approach with integrated expiration and signing, making OAuth 2.0 the appropriate choice.

B) Static API keys provide basic authentication but are insufficient for high-security environments. They do not expire automatically, cannot prevent replay attacks, and offer no built-in protection against credential interception. If a static API key is captured, an attacker can reuse it indefinitely unless manually rotated. Moreover, static keys do not inherently support signatures, scopes, session differentiation, or granular authorization. For an online banking platform where customer data and financial transactions are at risk, static API keys would represent a major security liability and do not satisfy the need for tamperproof, replay-resistant requests.

C) TLS offloading refers to terminating SSL/TLS connections at a load balancer instead of the application server. While TLS offloading improves performance and helps manage certificate operations, it does not authenticate incoming API requests beyond the basic secure channel. TLS alone does not prevent replay attacks or verify the legitimacy of API clients. It protects data in transit but does not ensure token expiration, signed requests, or authorization granularity. Although TLS is necessary for encrypting communications, it is not sufficient to satisfy the full set of security requirements needed by the banking platform.

D) Network address translation hides internal IP structures by translating private addresses into public ones. While NAT is useful for network design and adds a layer of obscurity, it does not provide authentication, authorization, anti-replay capabilities, or request-level validation. It is unrelated to API protection and cannot secure customer-facing transactional systems. NAT plays no role in preventing credential theft or unauthorized access at the application layer.

Because the institution requires tamperproof communication, strong authorization, replay protection, and secure client validation, OAuth 2.0 with short-lived signed tokens is the strongest, most appropriate, and industry-standard solution.
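
For illustration only, the following standard-library Python sketch shows why a signed, short-lived token resists tampering and replay; real deployments should rely on a vetted OAuth 2.0/JWT library, and the signing key, claim names, and lifetime here are assumptions.

```python
import base64, hashlib, hmac, json, time

SECRET = b"server-side-signing-key"   # assumption: HMAC key held only by the API provider

def issue_token(client_id, lifetime_seconds=300):
    claims = {"sub": client_id, "exp": int(time.time()) + lifetime_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + signature

def validate_token(token):
    payload, signature = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False                                   # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] > time.time()                 # expired (replayed) tokens are rejected

token = issue_token("mobile-banking-app")
print(validate_token(token))   # True while the token is fresh; False after expiry
```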

Q103. 

A global retail organization wants to strengthen its incident response capabilities. The security team plans to build a system that automatically collects logs from servers, cloud platforms, network devices, endpoints, and authentication systems in real time. They want the system to correlate these events to detect suspicious patterns, generate alerts, and assist in forensic investigations. Which solution best meets these requirements?

A) File integrity monitoring
B) Security information and event management
C) Patch management server
D) Vulnerability scanner

Answer: B) Security information and event management

Explanation:

A) File integrity monitoring is a tool that tracks modifications to critical system files. While useful for detecting unauthorized changes or compromises, its visibility is limited to file state changes and cannot correlate large sets of log data across distributed environments. File integrity monitoring lacks the analytical, correlation, and alerting features required for comprehensive incident response. It does not analyze authentication logs, network traffic, or cloud events and cannot provide real-time centralized detection across multiple systems.

B) Security information and event management is correct. A SIEM aggregates logs from nearly all security and system components, including authentication servers, firewalls, IDS/IPS tools, cloud providers, applications, and endpoints. It enables event correlation that identifies suspicious patterns across multiple sources, such as failed login attempts, anomaly detection, lateral movement indicators, or privilege escalation events. SIEM solutions also support real-time alerting, dashboards, compliance reporting, and long-term log storage, all essential for forensic investigations. SIEM platforms often integrate with threat intelligence feeds to improve detection accuracy. This aligns exactly with the organization’s requirement to collect logs, correlate events, detect anomalies, and support incident response at a global scale.

C) Patch management servers automate the deployment of updates to software and operating systems. While this improves security posture by reducing vulnerabilities, patch management does not aggregate logs, correlate events, or support real-time threat detection. Patch management contributes to prevention, not to detection or forensic evidence collection.

D) Vulnerability scanners identify weaknesses in systems by probing software versions, misconfigurations, and exposures. Although vulnerability scanners are essential for security maintenance, they are not designed for real-time monitoring or event correlation. They do not collect logs or detect active suspicious behavior and cannot assist with detailed incident investigations.
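
To make the correlation idea concrete, here is a small, hypothetical Python sketch of a SIEM-style rule that counts failed logins for one user across normalized events from several sources; the event fields, time window, and alert threshold are assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Normalized events, as a SIEM might store them after parsing different log sources
events = [
    {"time": datetime(2024, 5, 1, 9, 0, 5),  "source": "vpn",   "user": "jdoe", "action": "login_failed"},
    {"time": datetime(2024, 5, 1, 9, 0, 40), "source": "ad",    "user": "jdoe", "action": "login_failed"},
    {"time": datetime(2024, 5, 1, 9, 1, 10), "source": "cloud", "user": "jdoe", "action": "login_failed"},
    {"time": datetime(2024, 5, 1, 9, 2, 0),  "source": "ad",    "user": "jdoe", "action": "login_success"},
]

WINDOW = timedelta(minutes=5)
THRESHOLD = 3   # assumed number of failures that triggers an alert

failures = defaultdict(list)
for e in sorted(events, key=lambda e: e["time"]):
    if e["action"] == "login_failed":
        failures[e["user"]].append(e["time"])
        recent = [t for t in failures[e["user"]] if e["time"] - t <= WINDOW]
        if len(recent) >= THRESHOLD:
            print(f"ALERT: {len(recent)} failed logins for {e['user']} across multiple sources")
    elif e["action"] == "login_success" and failures[e["user"]]:
        print(f"REVIEW: {e['user']} succeeded after {len(failures[e['user']])} recent failures")
```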

Q104.

A manufacturing company’s OT network is isolated but contains many legacy PLCs and industrial controllers that cannot be patched. The security team wants to reduce the risk of intrusions while maintaining operational continuity. They plan to deploy a security solution capable of monitoring traffic, identifying protocol anomalies, blocking malicious activity, and preventing unauthorized commands to critical industrial devices. Which solution best fits this requirement?

A) Application allowlisting
B) Next-generation firewall with deep packet inspection
C) Endpoint antivirus
D) Network segmentation only

Answer: B) Next-generation firewall with deep packet inspection

Explanation:

A) Application allowlisting is a strong control for preventing unauthorized software execution on endpoints. However, it is not effective for securing industrial controllers or monitoring industrial network traffic. Allowlisting is generally host-based and cannot analyze network protocols, detect malicious commands, or protect legacy PLCs that cannot run allowlisting agents. Operational technology environments require protection at the network level rather than at the software execution layer.

B) Next-generation firewall with deep packet inspection is correct. NGFW devices can understand and analyze industrial control system (ICS) protocols, detect deviations from expected command patterns, identify malformed or malicious packets, and block unauthorized traffic. Deep packet inspection allows the firewall to inspect traffic beyond headers, enabling visibility into commands sent to PLCs and legacy controllers. NGFWs can enforce granular security policies, restrict cross-network communication, and detect threats such as unauthorized read/write commands, scanning, or lateral movement attempts. Because the legacy controllers cannot be patched or modified, network-based compensating controls such as DPI-enabled firewalls are essential. This solution allows monitoring without interfering with operational continuity and provides a robust protective layer.

C) Endpoint antivirus is not designed for ICS environments that lack modern operating systems or cannot support agent installation. PLCs and industrial devices typically cannot run antivirus software. Antivirus tools also cannot inspect ICS protocols or prevent unauthorized control commands. They offer little to no protection for OT environments dependent on legacy hardware.

D) Network segmentation is critical for OT security, but segmentation alone does not detect malicious commands or anomalies within allowed network zones. While segmentation reduces the attack surface, it does not provide granular inspection of protocol behavior or block malicious ICS traffic. Segmentation should be paired with DPI-capable firewalls to fully protect legacy industrial systems.
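
The following Python sketch illustrates, in a greatly simplified way, the kind of DPI policy an NGFW could enforce for an ICS protocol: it reads the Modbus/TCP function code and drops write commands from hosts that are not approved engineering workstations. The function-code values follow the public Modbus convention, while the allowlisted address and policy are assumptions.

```python
READ_ONLY_CODES = {0x01, 0x02, 0x03, 0x04}   # Modbus read functions
WRITE_CODES     = {0x05, 0x06, 0x0F, 0x10}   # Modbus write functions
AUTHORIZED_WRITERS = {"10.10.5.20"}          # assumed engineering workstation

def inspect(src_ip, payload):
    function_code = payload[7]   # Modbus/TCP: function code follows the 7-byte MBAP header
    if function_code in WRITE_CODES and src_ip not in AUTHORIZED_WRITERS:
        return "DROP: unauthorized write command to PLC"
    if function_code not in READ_ONLY_CODES | WRITE_CODES:
        return "ALERT: unexpected function code (possible protocol anomaly)"
    return "ALLOW"

# Example: a write-single-register request (0x06) arriving from an unexpected host
frame = bytes([0, 1, 0, 0, 0, 6, 1, 0x06, 0, 10, 0, 99])
print(inspect("10.10.9.99", frame))
```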

Q105.

A multinational enterprise experiences a breach where attackers gained access through a stolen VPN credential. Once inside, they moved laterally across the network, accessed administrative shares, and exfiltrated sensitive data. The company wants to implement a solution that continuously validates trust levels, checks device compliance, analyzes session risk, and restricts access based on dynamic risk scoring rather than static network perimeter rules. Which security model should they adopt?

A) Zero trust architecture
B) Air-gapped network design
C) Implicit trust model
D) Perimeter-based security

Answer: A) Zero trust architecture

Explanation:

A) Zero trust architecture is correct. Zero trust operates on the principle of "never trust, always verify." It eliminates implicit trust within the network perimeter, requiring continuous authentication, authorization, and context verification for every user, device, and session. Zero trust frameworks incorporate device posture checks, risk scoring, behavioral analytics, microsegmentation, least privilege access, and continuous monitoring. In this breach, attackers used valid credentials to gain initial access, highlighting the failure of perimeter-based trust assumptions. Zero trust mitigates this by validating context beyond credentials, such as device health, geolocation, session patterns, and user behavior. Access decisions become dynamic rather than static. This model prevents lateral movement, limits attacker privileges, and ensures that stolen credentials alone cannot grant unrestricted access.

B) Air-gapped network design physically isolates systems from external networks. While it provides strong protection for critical systems, it is impractical for a multinational enterprise with cloud, remote access, and mobile workforce requirements. Air-gapping does not address dynamic access control needs or session risk scoring.

C) Implicit trust model assumes that entities inside the network perimeter are trustworthy. This is the outdated traditional approach that led to the breach in the first place. It allows lateral movement once an attacker enters the network and does not validate device posture or session risk. Implicit trust models are unsuitable for modern threat environments.

D) Perimeter-based security focuses on preventing external intrusions but fails to protect against insider threats or breaches based on stolen credentials. Once attackers get inside the perimeter, they often face minimal additional controls. The breach described demonstrates the weaknesses of perimeter-centric strategies.
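
As a hypothetical sketch of the dynamic, risk-scored access decisions described above, the Python below combines a few contextual signals into a score and compares it to a per-resource threshold; the signals, weights, and thresholds are illustrative assumptions, not a reference zero trust policy engine.

```python
def risk_score(context):
    """Combine contextual signals into a simple additive risk score (illustrative weights)."""
    score = 0
    if not context["device_compliant"]:
        score += 40
    if not context["mfa_passed"]:
        score += 30
    if context["geo"] not in context["usual_geos"]:
        score += 20
    if context["new_device"]:
        score += 10
    return score

def access_decision(context, resource_sensitivity):
    limit = 10 if resource_sensitivity == "high" else 50   # stricter threshold for sensitive shares
    return "ALLOW" if risk_score(context) <= limit else "DENY_OR_STEP_UP"

session = {"device_compliant": True, "mfa_passed": True, "geo": "DE",
           "usual_geos": {"US"}, "new_device": False}
# Valid credentials alone are not enough: the unusual geolocation pushes the score
# above the threshold, so access to an administrative share is denied or forced
# through step-up verification instead of being implicitly trusted.
print(access_decision(session, "high"))
```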

Q106.

 A multinational research organization recently expanded its cloud operations and now stores extremely sensitive intellectual property across multiple cloud providers. During a recent internal assessment, auditors discovered inconsistent encryption configurations, misconfigured IAM policies, and a lack of centralized visibility across environments. The organization wants a solution that can automatically evaluate cloud configuration settings, enforce compliance frameworks, detect misconfigurations, and provide real-time visibility into resource security posture across all cloud platforms. Which solution would best address these needs?

A) Cloud workload protection platform
B) Cloud access security broker
C) Cloud security posture management
D) Data loss prevention

Answer: C) Cloud security posture management

Explanation:

A) Cloud workload protection platform focuses on protecting workloads such as VMs, containers, and serverless functions. CWPPs typically provide host-based protections including runtime security, malware defense, behavioral analytics, and vulnerability management. While CWPP is valuable in multi-cloud environments, it is not primarily designed to identify misconfigured IAM roles, insecure storage buckets, overly permissive firewall rules, or compliance drifts across cloud resources. The organization’s audit findings mention inconsistent encryption settings and misconfigured permissions, which fall more naturally under configuration-centric security controls rather than workload-centric controls. A CWPP would help protect workloads themselves but will not provide centralized, continuous posture evaluation across all cloud resources.

B) Cloud access security broker provides monitoring and enforcement between cloud service consumers and cloud service providers. CASBs help with controlling data egress, shadow IT detection, user activity monitoring, and access control policy enforcement. They are excellent for detecting risky data movement, enforcing DLP rules in cloud SaaS applications, or managing user behavior analytics. However, CASBs do not provide deep visibility into configuration baselines across multi-cloud infrastructure, nor do they automatically assess encryption status, IAM configurations, or alignment with compliance frameworks like CIS, NIST, or GDPR. CASBs operate more at the user–application interaction layer rather than infrastructure configuration posture.

C) Cloud security posture management is correct. CSPM solutions are specifically designed to identify misconfigurations, enforce best practices, and maintain continuous compliance across multi-cloud deployments. They provide automated assessments of security configurations, highlight violations of least-privilege principles, ensure encryption standards are met, and evaluate IAM policy correctness. CSPM tools also alert security teams when cloud resources drift from required configurations. Because the organization has multiple cloud providers and inconsistent security controls, a CSPM provides the centralized visibility they lack. It continuously monitors every cloud asset, flags misconfigurations, and helps the organization maintain a secure and compliant posture across all environments. CSPM is the only category aligned directly with the organization’s needs: real-time configuration evaluation, compliance enforcement, and multi-environment visibility.

D) Data loss prevention focuses on protecting sensitive data from being leaked, misused, or exfiltrated. While DLP is essential for protecting intellectual property, it does not analyze cloud resource configurations or evaluate IAM policies. DLP does not resolve encryption inconsistencies or detect misconfigured cloud platform settings. It monitors data movement, not infrastructure posture. Therefore, it cannot centralize cloud configuration visibility or enforce compliance frameworks.
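
A minimal, vendor-neutral sketch of the rule evaluation a CSPM performs is shown below; the normalized resource fields and the three example rules are assumptions made for illustration.

```python
# Normalized multi-cloud resource inventory, as a CSPM might collect via provider APIs
resources = [
    {"id": "bucket-1", "provider": "aws",   "type": "storage", "encrypted": False, "public": True},
    {"id": "vm-7",     "provider": "azure", "type": "vm",      "encrypted": True,  "public": False},
    {"id": "role-3",   "provider": "gcp",   "type": "iam",     "wildcard_permissions": True},
]

rules = [
    ("Storage must be encrypted at rest",
     lambda r: r["type"] == "storage" and not r.get("encrypted", True)),
    ("Storage must not be publicly accessible",
     lambda r: r["type"] == "storage" and r.get("public", False)),
    ("IAM roles must not grant wildcard permissions",
     lambda r: r["type"] == "iam" and r.get("wildcard_permissions", False)),
]

# Each violation becomes a posture finding that can be alerted on or auto-remediated
for resource in resources:
    for description, violates in rules:
        if violates(resource):
            print(f"FINDING [{resource['provider']}] {resource['id']}: {description}")
```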

Q107.

A cybersecurity incident response team is investigating a breach where an attacker compromised an employee’s laptop, escalated privileges, created a persistent scheduled task, and exfiltrated several gigabytes of proprietary data. Forensic investigators want to determine the exact sequence of attacker actions, identify all artifacts left behind, and understand how the attacker escalated access. They need a comprehensive technique that reconstructs system behavior using snapshots of system memory, logs, process execution history, registry modifications, and scheduled task creation. Which forensic technique is most appropriate for this scenario?

A) Chain of custody
B) Static malware analysis
C) Timeline analysis
D) File carving

Answer: C) Timeline analysis

Explanation:

A) Chain of custody is critical for ensuring evidence integrity during forensic investigations, documenting who handled evidence, when, and under what conditions. However, chain of custody is strictly procedural. It does not reconstruct events or provide forensic insights into attacker behavior. While it ensures evidence is admissible in court, it does not allow analysts to determine how the attacker escalated privileges or created persistent tasks. The investigators in the scenario need a technical method for reconstructing activity over time, not a process for handling evidence.

B) Static malware analysis involves examining malware samples without executing them. Analysts review binary file headers, embedded strings, imports, and code structure to understand functionality. Static analysis is useful when investigating malicious executables but does not reconstruct full system activity across logs, registry entries, or timeline artifacts. The scenario involves analyzing system behavior—scheduled tasks, privilege escalation, data exfiltration—not purely analyzing malware. Static malware analysis alone is insufficient to uncover the sequence of events, especially if the attacker used built-in tools (living off the land) instead of malware.

C) Timeline analysis is correct. Timeline analysis correlates data from multiple forensic sources—file system timestamps, registry modification times, scheduled task creation logs, process execution logs, memory artifacts, and event logs—to reconstruct an attacker’s actions chronologically. This technique allows investigators to piece together each event, determining when the attacker first gained access, escalated privileges, created persistence mechanisms, and exfiltrated data. Since the investigators want to understand the exact sequence of events and all attacker actions, timeline analysis provides the most comprehensive method. It integrates timestamps from numerous data sources, enabling detailed reconstruction of system behavior. This is essential when analyzing advanced intrusions involving privilege escalation and data theft.

D) File carving extracts files from unallocated disk space or raw data fragments. This is useful when recovering deleted files or reconstructing evidence that lacks metadata. However, file carving does not provide chronological event sequencing. It cannot determine the order of operations or analyze attacker actions such as privilege escalation or persistence creation. File carving may assist with recovering stolen data, but it does not reconstruct attacker behavior or sequence events.
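
To illustrate the core of timeline analysis, the following Python sketch merges artifacts from several invented sources into one chronologically sorted view; the records and source labels are fabricated for demonstration.

```python
from datetime import datetime

# Artifacts parsed from different sources (timestamps already normalized to UTC)
event_log = [("2024-06-02T01:14:09", "4624 logon, user svc_backup, type 3")]
registry  = [("2024-06-02T01:21:47", "Run key added: C:\\ProgramData\\upd.exe")]
tasks     = [("2024-06-02T01:22:03", "Scheduled task 'SysUpdate' created")]
netflow   = [("2024-06-02T02:05:30", "3.2 GB outbound to 203.0.113.55:443")]

timeline = []
for source, records in [("EVT", event_log), ("REG", registry), ("TASK", tasks), ("NET", netflow)]:
    for timestamp, detail in records:
        timeline.append((datetime.fromisoformat(timestamp), source, detail))

# Sorting by timestamp reconstructs the attacker's sequence of actions
for timestamp, source, detail in sorted(timeline):
    print(f"{timestamp.isoformat()}  [{source:4}] {detail}")
```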

Q108.

 A company is concerned about increasing phishing attacks targeting employees. Attackers are sending emails that appear legitimate, include malicious links, and attempt to harvest credentials. The organization wants a solution that inspects email content, identifies suspicious patterns, rewrites URL links to route them through a safe-click analysis service, and detonates attachments in a sandbox before delivery to employees. Which type of solution best addresses this requirement?

A) Secure email gateway
B) Multifactor authentication
C) DNS filtering
D) Endpoint detection and response

Answer: A) Secure email gateway

Explanation

A) Secure email gateway is correct. SEG solutions are designed specifically to filter inbound and outbound email traffic, detect phishing attempts, inspect attachments, rewrite URLs, and block malicious messages before reaching users. They often incorporate advanced threat detection, sandboxing, content filtering, impersonation detection, and machine learning–based analysis. The organization in the scenario wants URL rewriting, sandbox detonation for attachments, and phishing pattern detection—all core functionalities of a secure email gateway. SEGs protect the organization at the point where phishing threats first enter the environment. By rewriting URLs and analyzing attachments pre-delivery, SEGs prevent employees from unintentionally activating malicious content. SEG is therefore the most fitting solution.

B) Multifactor authentication helps protect accounts after attackers attempt credential theft. While MFA reduces the impact of a successful phishing attempt, it does not prevent phishing emails from being delivered, nor does it inspect email content or rewrite URLs. MFA is an excellent compensating control but does not stop malicious emails.

C) DNS filtering blocks access to known malicious domains when users attempt to visit them. While this helps prevent access to harmful sites, it does not perform email inspection or sandboxing. DNS filtering cannot rewrite URLs in emails or evaluate attachments. DNS filtering activates after a user clicks a link, not before delivery.

D) Endpoint detection and response monitors endpoint behavior, detects suspicious activity, and provides response capabilities. Although EDR can help with detecting malicious payloads executed on endpoints, it does not inspect emails at the gateway level or rewrite hyperlinks. EDR engages when the threat is already on the endpoint or executing, rather than preventing it from reaching the user’s inbox.
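
The URL-rewriting step can be sketched as below; the safe-click service URL and the simple regular expression are assumptions, and real gateways additionally handle HTML parts, encodings, and attachment detonation in a sandbox.

```python
import re
from urllib.parse import quote

SAFE_CLICK_SERVICE = "https://safelinks.example-gateway.com/scan?url="   # assumed service endpoint

def rewrite_urls(email_body):
    """Route every link through the safe-click analysis service before delivery."""
    def rewrite(match):
        original = match.group(0)
        return SAFE_CLICK_SERVICE + quote(original, safe="")
    return re.sub(r"https?://[^\s\"'>]+", rewrite, email_body)

body = "Please verify your account at http://login-update.example-phish.net/reset"
print(rewrite_urls(body))
# At click time the gateway fetches and analyzes the original URL before deciding
# whether to redirect the employee to it or block the page.
```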

Q109. 

A cyber defense team needs a solution that can automatically detect suspicious user activity, such as multiple failed login attempts, privilege escalation anomalies, lateral movement patterns, and deviations from normal behavior. They want a system that uses machine learning to determine baselines for normal activity, evaluates deviations, correlates user actions with contextual risk scores, and integrates with SIEM platforms to trigger alerts. Which solution best fits these requirements?

A) User and entity behavior analytics
B) Host-based intrusion prevention system
C) Security orchestration, automation, and response
D) Data classification tool

Answer: A) User and entity behavior analytics

Explanation:

A) User and entity behavior analytics is correct. UEBA solutions utilize machine learning algorithms to build behavioral baselines for users and devices. They identify anomalies such as unusual login locations, abnormal file access patterns, privilege misuse, and potential insider threat indicators. The scenario explicitly mentions behavior deviations, machine-learning-driven baselines, contextual risk scoring, and integration with SIEM systems—all core elements of UEBA. UEBA excels at detecting subtle threats that would go unnoticed by traditional signature-based systems, including compromised accounts and insider threats. This makes it the most appropriate solution for the described needs.

B) Host-based intrusion prevention systems focus on detecting exploit attempts, blocking malicious processes, preventing known attack signatures, and monitoring system behavior on individual hosts. While HIPS is valuable, it does not analyze user behavior across the environment or correlate events with machine-learning models. HIPS lacks the contextual and behavioral analysis capabilities required to detect abnormal user patterns across systems.

C) Security orchestration, automation, and response coordinates automated actions across tools in response to alerts. SOAR helps streamline incident response but does not itself detect suspicious behavior. SOAR depends on tools like SIEMs, EDR platforms, or UEBA systems for detection. It is a response automation tool, not a behavior analysis solution.

D) Data classification tools help categorize and label data based on sensitivity. They support DLP programs and compliance but have no relation to detecting privilege escalation anomalies, unusual user logins, or behavioral deviations. Data classification does not incorporate machine learning for behavioral baselines.

UEBA provides the precise functionalities needed: anomaly detection, risk scoring, behavioral analytics, and SIEM integration.
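
A toy version of the baseline-and-deviation logic behind UEBA is sketched below; real platforms use far richer models, and the access counts and z-score threshold here are assumptions.

```python
import statistics

# Daily count of sensitive-file accesses per user over the baseline period
baseline = {"jdoe": [4, 6, 5, 7, 5, 6, 4], "asmith": [20, 22, 19, 21, 23, 20, 22]}
today    = {"jdoe": 48, "asmith": 21}

THRESHOLD = 3.0   # assumed z-score above which an alert is raised

for user, history in baseline.items():
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0
    z = (today[user] - mean) / stdev
    if z > THRESHOLD:
        # A UEBA platform would enrich this with context (privilege changes, new
        # geolocation, odd hours) and forward a risk-scored alert to the SIEM.
        print(f"ALERT: {user} accessed {today[user]} files today (baseline ~{mean:.0f}, z={z:.1f})")
```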

Q110. 

A software development team is designing a secure authentication system for an internal application. They want to ensure that even if an attacker obtains the password database, stored credentials cannot be used to impersonate users. They require a method that protects passwords using a slow, computationally expensive hashing process that includes a unique salt value for each password. Which approach best meets these requirements?

A) SHA-1 hashing
B) MD5 hashing
C) PBKDF2 or bcrypt
D) Base64 encoding

Answer: C) PBKDF2 or bcrypt

Explanation:

A) SHA-1 hashing is a cryptographic hash function but is no longer secure for protecting passwords. SHA-1 is vulnerable to collision attacks, and it is extremely fast, making it susceptible to brute-force attacks using modern GPUs or ASIC hardware. Additionally, SHA-1 does not inherently incorporate salting or key stretching. Even if developers manually add salts, SHA-1’s speed makes it unsuitable for secure password storage today.

B) MD5 hashing suffers from significant security flaws. It is fast, easily brute-forced, and vulnerable to collisions. MD5 cannot withstand modern cracking techniques, rainbow tables, or GPU-driven brute-force attacks. Like SHA-1, MD5 does not include built-in slow-down mechanisms, salts, or key stretching. Storing passwords with MD5 is considered insecure and non-compliant with modern security standards.

C) PBKDF2 or bcrypt is correct. Both PBKDF2 and bcrypt are password hashing algorithms specifically designed for secure credential storage. They incorporate salting to ensure each password hash is unique, even if two users share the same password. They also implement key stretching, meaning they intentionally perform thousands of iterations to slow down hashing operations. This dramatically increases resistance against brute-force attacks. PBKDF2 is used widely in enterprise authentication systems and supports configurable iteration counts. Bcrypt automatically handles salting and uses a computationally expensive hashing routine designed to resist GPU cracking. Either algorithm satisfies the requirement for a slow, salted, secure password hashing mechanism.

D) Base64 encoding is not a security mechanism. It simply converts binary data into ASCII characters. Encoded data can be trivially decoded and does not provide encryption or hashing. Base64 provides no protection for stored passwords and is completely inappropriate for authentication systems.

PBKDF2 or bcrypt meets all requirements for secure password hashing, making option C the correct choice.
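
A minimal sketch using Python's standard-library PBKDF2 implementation is shown below; the iteration count is an assumed work factor, and production systems should follow current guidance or use a maintained password-hashing library.

```python
import hashlib, hmac, os

ITERATIONS = 600_000   # assumed work factor; tune to current guidance

def hash_password(password):
    salt = os.urandom(16)                      # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)   # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("password123", salt, digest))                   # False
```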

Q111.

 A company recently deployed an enterprise mobile device management (MDM) solution to enforce security controls on smartphones used by employees. After deployment, administrators noticed that many users attempt to disable security settings such as screen-lock timers, application whitelisting, and device encryption. The organization wants to ensure that users cannot remove required policies, access corporate data on unauthorized apps, or connect to the network unless their devices meet compliance requirements enforced by the MDM. Which feature would BEST support these goals?

A) Geofencing
B) Containerization
C) Compliance enforcement
D) Remote wiping

Answer: C) Compliance enforcement

Explanation:

A) Geofencing is a feature in many mobile device management platforms that allows system administrators to trigger specific security actions based on the physical location of a device. This may include restricting access when a device leaves a secure premises, enabling additional restrictions when entering high-risk regions, or preventing certain apps from functioning based on GPS boundaries. Although geofencing enhances location-based control and is valuable for lost device protection or sensitive facility restrictions, it does not enforce ongoing configuration requirements like ensuring device encryption, preventing tampering with screen-lock timers, or blocking unauthorized apps. The organization’s primary need is the ability to ensure configuration compliance and restrict device access if settings are altered, which geofencing does not provide.

B) Containerization provides separation between corporate data and personal data on a mobile device, creating isolated workspaces controlled by corporate IT. This improves security by preventing data leakage from business apps into personal apps and allows corporate data to be wiped without affecting personal information. While containerization does improve control over corporate information, it does not inherently prevent the user from disabling device-wide settings such as encryption, lock-screen timers, or overall system-level MDM configurations. Additionally, containerization alone does not restrict network access based on device compliance posture.

C) Compliance enforcement is correct. Compliance enforcement is a core feature of modern MDM and mobile access management platforms. With compliance enforcement, devices must meet mandatory security criteria before they are allowed to connect to corporate networks, access business applications, or synchronize sensitive data. Administrators can configure required policies such as encryption, automatic screen lock, app restrictions, version patch levels, and MDM enrollment. If a user attempts to disable these settings, the device becomes noncompliant. In a noncompliant state, MDM systems can automatically prevent the device from connecting to VPNs, enterprise Wi-Fi, internal applications, or email services. Compliance enforcement ensures that users cannot weaken security policies, providing continuous device posture verification. This directly satisfies the organization’s needs, making it the best solution.

D) Remote wiping allows administrators to erase corporate data or the entire device (in corporate-owned scenarios) when a device is lost, stolen, or compromised. While remote wiping is an important security capability, it does not enforce compliance with security settings nor prevent users from disabling required configurations. Remote wiping is reactive rather than proactive, and the organization needs preventive enforcement to ensure policies cannot be circumvented in the first place.

Compliance enforcement is the only option that ensures devices remain in a secure state, users cannot disable required policies, and corporate access is denied until the device meets all configured requirements.
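
As a rough sketch of how an MDM-style compliance check could gate network access, the Python below evaluates a device record against a required baseline; the policy keys, limits, and device attributes are assumptions.

```python
REQUIRED_POLICIES = {            # assumed organizational baseline
    "encryption_enabled": True,
    "screen_lock_seconds_max": 120,
    "mdm_enrolled": True,
}

def compliance_check(device):
    failures = []
    if not device.get("encryption_enabled"):
        failures.append("device encryption disabled")
    if device.get("screen_lock_seconds", 9999) > REQUIRED_POLICIES["screen_lock_seconds_max"]:
        failures.append("screen-lock timer exceeds policy")
    if not device.get("mdm_enrolled"):
        failures.append("MDM profile removed")
    return failures

device = {"encryption_enabled": True, "screen_lock_seconds": 600, "mdm_enrolled": True}
failures = compliance_check(device)
if failures:
    # A noncompliant device is quarantined: VPN, enterprise Wi-Fi, and email access are revoked
    print("DENY ACCESS:", "; ".join(failures))
else:
    print("ALLOW ACCESS")
```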

Q112.

 A company manages a large fleet of Linux servers used for high-performance computing tasks. Recently, several machines were found to be running unauthorized processes, and CPU usage spiked unexpectedly across multiple clusters. Administrators suspect that attackers deployed malicious code that leverages system resources for cryptocurrency mining. To strengthen defenses, the organization wants a solution capable of monitoring system calls, detecting abnormal process execution, tracking privilege escalations, and generating alerts for anomalous runtime behavior. Which technology BEST meets these requirements?

A) File integrity monitoring
B) Runtime application self-protection
C) Endpoint detection and response
D) Immutable infrastructure

Answer: C) Endpoint detection and response

Explanation

A) File integrity monitoring is valuable for detecting changes to critical system files, configurations, binaries, or important directories. It can detect unauthorized modifications, the creation of suspicious executables, or unauthorized changes to security configurations. While FIM plays an important role in overall system security and compliance, it does not provide ongoing monitoring of system calls, detection of abnormal runtime events, or behavioral analytics for privilege escalation and process anomalies. It alerts on static changes, not dynamic behaviors. Since cryptocurrency mining malware often runs entirely in memory or leverages legitimate system binaries without altering protected files, file integrity monitoring alone would miss much of this activity.

B) Runtime application self-protection is primarily designed for application security, especially in web applications and microservices environments. RASP tools integrate within applications to detect attacks such as SQL injection, command injection, and other common application-layer vulnerabilities. They monitor application-level behavior but do not observe system processes, Linux kernel events, privilege escalations, or cluster-wide anomalies. Cryptocurrency miners running at the OS level would fall completely outside RASP’s scope.

C) Endpoint detection and response is correct. EDR solutions provide deep visibility into runtime system behavior, including monitoring system calls, detecting abnormal process activity, identifying suspicious patterns such as cryptocurrency mining behavior, identifying persistence mechanisms, and alerting on privilege escalations. EDR tools specialize in detecting living-off-the-land attacks, unauthorized binaries, lateral movement, and advanced threats across Linux, Windows, and macOS environments. Most importantly, EDR can correlate unusual CPU usage, repeated unauthorized processes, and anomalous runtime behavior—exactly the symptoms described in the scenario. The organization needs detailed, real-time monitoring of Linux hosts, and EDR provides the most comprehensive detection capability for these kinds of compromises.

D) Immutable infrastructure prevents configuration drift by replacing entire servers rather than patching or modifying existing ones. This enhances security and resilience, but immutability alone does not detect active threats, anomalous processes, or runtime exploitation. While immutable infrastructure may limit the impact of certain attacks, it does not provide real-time detection or monitoring of abnormal system behavior. It cannot monitor privilege escalations or identify cryptocurrency mining malware.

EDR is the best choice because it provides the exact behavioral monitoring and anomaly detection capabilities required.

Q113. 

A security engineer is designing a new authentication architecture for a cloud-based application that serves thousands of global users. The organization wants to move away from passwords and toward a model that uses public key cryptography, eliminates shared secrets, resists phishing, and supports strong, hardware-backed authentication mechanisms. Which approach aligns BEST with these goals?

A) OAuth tokens
B) Federation via SAML
C) WebAuthn with FIDO2
D) Time-based one-time passwords

Answer: C) WebAuthn with FIDO2

Explanation

A) OAuth tokens are used for authorization, not authentication. OAuth enables delegated access (e.g., allowing an application to access resources on behalf of a user). It is not designed as a passwordless authentication method and relies on another system (such as OpenID Connect) to perform the actual authentication step. OAuth tokens are vulnerable if the authentication system behind them uses weak methods or if user credentials are phished. OAuth by itself does not eliminate shared secrets nor provide hardware-backed, cryptographic authentication for end-users.

B) Federation via SAML allows users to authenticate through an identity provider, enabling single sign-on across platforms. While SAML reduces password sprawl and centralizes authentication, SAML still often relies on traditional authentication factors such as passwords or MFA. It does not inherently provide hardware-backed cryptography nor eliminate shared secrets. Users may still be susceptible to phishing if the authentication method used at the identity provider is not passwordless.

C) WebAuthn with FIDO2 is correct. WebAuthn and FIDO2 were specifically designed for secure, passwordless authentication. They use public key cryptography and bind authentication to a physical device such as a hardware security key or a secure enclave on a smartphone or laptop. This prevents phishing because private keys never leave the device and authentication requires possession of the hardware authenticator. WebAuthn eliminates shared secrets entirely. Instead, each service receives a unique public key, and authentication occurs through cryptographic challenge-response mechanisms. Because FIDO2 is hardware-backed, attackers cannot steal credentials from servers or intercept them through phishing attacks. WebAuthn is the industry-leading approach for secure, passwordless authentication.

D) Time-based one-time passwords improve security by requiring users to enter codes that expire quickly. However, TOTP still depends on a shared secret between the server and the client device. TOTP codes can be phished, intercepted, or entered into malicious authentication forms. They do not provide a phishing-resistant mechanism or hardware-backed cryptography. Although more secure than passwords alone, TOTP does not achieve the organization’s goal of eliminating shared secrets and providing phishing-resistant authentication.

WebAuthn with FIDO2 is the only option that meets all requirements: no passwords, hardware-backed security, resistance to phishing, and public-key cryptography.
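
The sketch below is not the WebAuthn/CTAP protocol itself; it only illustrates the underlying challenge-response principle using the third-party cryptography package, with an Ed25519 key pair standing in for a real hardware authenticator.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Registration: the authenticator creates a key pair; only the public key is stored by the service.
authenticator_key = ed25519.Ed25519PrivateKey.generate()   # private key never leaves the device
registered_public_key = authenticator_key.public_key()

# Authentication: the service issues a fresh random challenge, which prevents replay.
challenge = os.urandom(32)

# The authenticator signs the challenge; no password or shared secret is transmitted.
signature = authenticator_key.sign(challenge)

# The service verifies the signature with the registered public key.
try:
    registered_public_key.verify(signature, challenge)
    print("Authentication succeeded")
except InvalidSignature:
    print("Authentication failed")
```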

Q114.

 A security team notices that attackers are increasingly bypassing perimeter defenses by exploiting misconfigured cloud systems, vulnerable APIs, and compromised credentials. The organization wants a framework that assumes no network location is trusted, enforces strict authentication and authorization at every access attempt, and continuously evaluates device posture, user identity, and context before granting access to resources. Which security model BEST fits these objectives?

A) Defense in depth
B) Zero trust architecture
C) Network segmentation
D) Perimeter-based security

Answer: B) Zero trust architecture

Explanation

A) Defense in depth is a layered security strategy that uses multiple controls to protect against attacks. While it strengthens overall security, it still often relies on traditional trust boundaries, such as internal networks being considered safer than external ones. Defense in depth does not require continuous authentication, contextual analysis, or distrust of internal traffic. Although part of modern security programs, it does not satisfy the organization’s need for an identity-first, context-aware access model.

B) Zero trust architecture is correct. Zero trust is built on the principle that no user, device, application, or network segment is inherently trusted. Trust must be continuously evaluated at every access request. The zero trust model involves strict authentication, real-time policy enforcement, and continuous monitoring. It addresses modern threats such as credential compromise, lateral movement, and cloud misconfigurations. Zero trust assumes that attackers may already be inside the environment and uses least-privilege principles to limit exposure. It directly meets the organization’s goals by requiring evaluation of user identity, device posture, and contextual risk for each access attempt—exactly what the scenario describes.

C) Network segmentation helps isolate systems to limit the spread of attacks, improving security by separating resources into smaller zones. While segmentation is valuable within zero trust environments, it does not provide identity-based access control, continuous authentication, or contextual risk evaluation. Segmentation alone cannot stop credential misuse or misconfigured cloud access.

D) Perimeter-based security assumes that the network edge is the main defense boundary. Systems inside the perimeter are often implicitly trusted. This older model is ineffective against modern cloud, mobile, and credential theft attacks. It does not meet the organization’s needs to authenticate every access attempt and treat all environments as untrusted.

Zero trust architecture is the only model that fully addresses modern threats by continuously verifying identity and context.

Q115.

A company recently experienced a data breach where attackers exploited a vulnerability in a third-party library integrated into their web application. The development team wants to reduce the risk of similar supply-chain attacks by implementing a process that automatically analyzes open-source components, identifies known vulnerabilities, flags outdated libraries, and enforces security requirements before the software is deployed. Which solution BEST meets these needs?

A) Web application firewall
B) Static application security testing
C) Software composition analysis
D) Secure code review

Answer: C) Software composition analysis

Explanation

A) Web application firewall protects against external threats by filtering and monitoring HTTP traffic. A WAF can block SQL injection, cross-site scripting, and other common attacks. However, WAFs do not examine internal source code, third-party components, or library vulnerabilities. They cannot detect outdated dependencies or identify known CVEs affecting integrated packages. Therefore, a WAF does not mitigate software supply chain vulnerabilities originating from within the development lifecycle.

B) Static application security testing analyzes source code for insecure coding practices and logic flaws. While SAST is valuable for detecting developer-introduced vulnerabilities, such as buffer overflows or insecure error handling, it typically does not cover third-party library versioning or vulnerability status. Although some SAST tools can flag dangerous functions within libraries, they generally do not perform deep dependency analysis or correlate libraries with known vulnerabilities. SAST alone cannot mitigate supply-chain risks from imported components.

C) Software composition analysis is correct. SCA tools focus specifically on managing risks associated with open-source and third-party components. They analyze dependency trees, identify outdated packages, and compare library versions against vulnerability databases such as NVD, OSS Index, or vendor-maintained advisories. SCA tools alert teams when a dependency is affected by a known CVE, enforce version policies, and can stop insecure builds through CI/CD integration. SCA is designed for exactly the type of supply-chain vulnerability described in the scenario. It enables continuous monitoring of third-party libraries and ensures that insecure dependencies cannot reach production environments.

D) Secure code review involves manual or automated review of internal source code. Like SAST, it helps catch coding errors but often overlooks vulnerabilities in external libraries unless the reviewer specifically examines package manifests. Manual reviews are time-consuming and not scalable for dependency monitoring. Secure code review is important, but it does not adequately address supply-chain dependency risks.

SCA is the only approach that systematically identifies known vulnerabilities in third-party libraries, enforces security policies, and integrates into the development pipeline.
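
A toy version of the dependency check at the heart of SCA is shown below; the manifest contents are invented, the advisory feed is reduced to a small dictionary, and real tools resolve full dependency trees against databases such as the NVD.

```python
# Declared dependencies, e.g. parsed from a manifest such as requirements.txt
dependencies = {"requests": "2.19.0", "flask": "2.3.3", "pyyaml": "5.3"}

# Illustrative advisory feed: package -> (first fixed version, advisory id)
advisories = {
    "requests": ("2.20.0", "CVE-2018-18074"),
    "pyyaml":   ("5.4",    "CVE-2020-14343"),
}

def parse(version):
    return tuple(int(part) for part in version.split("."))

build_ok = True
for package, installed in dependencies.items():
    if package in advisories:
        fixed_in, advisory = advisories[package]
        if parse(installed) < parse(fixed_in):
            build_ok = False
            print(f"BLOCK: {package} {installed} affected by {advisory}; upgrade to >= {fixed_in}")

# CI/CD integration: fail the pipeline so insecure dependencies never reach production
print("Pipeline result:", "pass" if build_ok else "fail (insecure dependency)")
```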

Q116.

 An organization’s SOC team observes that multiple endpoints across different departments are communicating with a suspicious external IP address at regular intervals. This communication occurs over an uncommon port and consists of small, encrypted data packets. Further investigation reveals that the endpoints are initiating the outbound connections, and each connection contains a nearly identical pattern in packet size and timing. Analysts suspect that malware is using this channel to receive commands. Which threat type most accurately describes this behavior?

A) Adware
B) Command-and-control beaconing
C) Lateral movement reconnaissance
D) Drive-by download

Answer: B) Command-and-control beaconing

Explanation:

A) Adware is designed to display intrusive advertisement content, redirect web traffic, or inject unwanted marketing material into browser sessions. While adware can exhibit persistence and may generate ongoing network traffic, its behavior typically focuses on HTTP or HTTPS connections to advertisement networks, not periodic beaconing to a suspicious command server. Adware also rarely uses encrypted, repetitive traffic patterns over uncommon ports. Its purpose is not to receive remote attacker instructions or maintain control over compromised systems. Therefore, adware cannot explain the regular beacon pattern, small encrypted packets, or the endpoints’ attempts to communicate with a command server.

B) Command-and-control beaconing is the correct answer because the described behavior matches it comprehensively. Beaconing refers to the periodic, automated communication that malware sends to a command-and-control (C2) server. Beaconing typically exhibits several characteristics highlighted in the scenario:

Regular intervals. Beaconing usually follows a timed pattern to check in with the attacker. These intervals can be constant or pseudo-randomized to reduce detection likelihood.

Outbound initiation. The infected endpoint contacts the external server, allowing the attacker to operate behind firewalls and NAT devices.

Small, encrypted packets. Beacons require minimal data transmission, often containing just enough information for the malware to request instructions. Encryption is used to conceal the contents and avoid detection by deep packet inspection tools.

Uncommon port usage. Attackers sometimes use obscure ports to avoid rule-based network filtering or signature detection.

Multiple infected hosts synchronizing their beacons. This can occur when malware spreads across an environment and each instance follows the same programmed communication routine.

The entire scenario aligns with the behavior of a C2 beacon. Analysts encountering this pattern typically use tools such as network flow analysis, anomaly detection systems, packet captures, and endpoint forensics to identify how the malware was installed and what capabilities it supports. Detecting beaconing early is critical because it often indicates the presence of an ongoing intrusion where attackers still maintain remote access.

C) Lateral movement reconnaissance occurs after initial compromise when attackers attempt to discover other assets within the network. This includes scanning internal systems, enumerating shares, looking for vulnerable hosts, or mapping topologies. Such activity often manifests as increased traffic between internal machines, repeated authentication attempts, unusual SMB or RDP connections, or port scans on internal subnets. The scenario, however, focuses on outbound communication to a single suspicious external server, not internal reconnaissance or east-west traffic. There is no mention of scanning activity, enumeration requests, or internal authentication anomalies. Therefore lateral movement reconnaissance does not explain the observed traffic pattern.

D) Drive-by download refers to malware installation that occurs when a user visits a compromised or malicious website containing exploits. While drive-by downloads are a common infection method, the scenario describes post-infection behavior, not how the infection occurred. Even if the malware had been introduced via drive-by download, this does not explain the periodic encrypted communication with an external server. The question specifically asks what type of threat the observed behavior most accurately describes. Since drive-by downloads concern infection vectors rather than ongoing communication methods, this option is not suitable.
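
One simple way analysts hunt for this pattern is to look for low-variance intervals between outbound connections per host and destination; the Python sketch below does this over invented flow timestamps, and the jitter threshold is an assumption.

```python
import statistics

# Outbound connection timestamps (seconds) per (source host, destination) pair
flows = {
    ("host-17", "198.51.100.23:8443"): [0, 298, 601, 899, 1202, 1499],
    ("host-04", "23.0.0.12:443"):      [5, 130, 900, 1000, 4000, 4100],
}

for (host, dest), times in flows.items():
    intervals = [later - earlier for earlier, later in zip(times, times[1:])]
    mean = statistics.mean(intervals)
    jitter = statistics.pstdev(intervals)
    # Near-constant intervals (low jitter relative to the mean) suggest automated check-ins
    if mean > 0 and jitter / mean < 0.1:
        print(f"Possible C2 beaconing: {host} -> {dest} every ~{mean:.0f}s (jitter {jitter:.1f}s)")
```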

Q117.

A financial services company detects suspicious behavior involving an employee’s workstation. The system begins accessing large volumes of customer data from internal databases at unusual hours, including late-night and early-morning periods when the employee is not on shift. The data access patterns show consistent queries requesting sensitive fields such as Social Security numbers, account numbers, and transaction histories. Additionally, the workstation has begun transmitting encrypted outbound traffic to an IP address located overseas, which the employee claims to have no knowledge of. The security team confirms that the employee’s credentials were used but cannot find evidence that the employee initiated these actions. Which type of security incident best describes this situation?

A) Insider misuse
B) Credential compromise
C) Whaling attack
D) Social engineering pretexting

Answer: B) Credential compromise

Explanation:

A) Insider misuse suggests that a legitimate employee intentionally abuses their access to perform unauthorized actions, such as accessing data they should not access or using company resources for malicious intent. Insider misuse typically involves clear malicious intent from the insider: stealing data, conducting fraud, sabotaging files, or bypassing company policies knowingly. In this scenario, however, investigators have found no indication that the employee intentionally performed the database queries or transmitted data externally. The employee’s claim of non-involvement is consistent with the analysis showing suspicious activities occurring outside working hours, which typically points toward an external attacker leveraging the employee’s credentials rather than the employee abusing their own access. Insider misuse can be ruled out when there is no evidence that the insider initiated or benefited from the unauthorized actions.

B) Credential compromise is the correct answer because the scenario strongly suggests that someone other than the employee used the employee’s credentials to access sensitive databases and transmit data externally. When credentials are compromised, attackers can impersonate legitimate users, bypass identity-based security controls, and gain access to systems without raising immediate alarms. The data access patterns in the scenario match what attackers typically do after gaining credential access: searching for valuable information in internal systems, conducting large-scale data sweeps, and exfiltrating sensitive data to external command-and-control servers or remote attacker infrastructure. The fact that the workstation is sending encrypted outbound traffic to a foreign IP is also a sign of compromise because attackers often use encryption to evade monitoring tools and maintain secrecy.

Additionally, the after-hours activity is a strong indication of credential misuse. Compromised credentials often get used at odd times that do not correlate with a legitimate employee’s schedule. The attacker may be in a different time zone or may simply prefer to conduct their activity when fewer internal employees are working, reducing the likelihood of detection. Since no evidence ties the malicious behavior to the employee, the most reasonable conclusion is that an external actor obtained and misused the employee’s credentials.

C) Whaling attack refers to a type of phishing attack that targets senior executives or high-profile individuals within an organization. These attacks are designed to trick executives into revealing sensitive information or performing unauthorized actions, such as approving fraudulent transactions. Although credential compromise can occur via whaling, the scenario does not indicate that an executive was targeted or that the attack involved deceptive emails aimed at high-level personnel. Instead, the scenario focuses on data access anomalies and suspicious outbound traffic from an employee’s workstation. This does not match the defining characteristics of a whaling attack.

D) Social engineering pretexting involves attackers creating a fabricated scenario or pretext to trick users into providing information, performing unauthorized tasks, or granting access. While pretexting can lead to credential compromise, the question specifically asks which type of incident best describes the behavior observed. The ongoing unauthorized database queries and encrypted outbound traffic are not examples of pretexting; instead, they represent post-compromise malicious activity. Pretexting is an attack technique, not an incident outcome. The security team is investigating the effect of the attack (misuse of credentials and data exfiltration), not the initial tactic used to compromise the system.

Q118. 

A large retail corporation experiences a sudden disruption in its inventory management system after a third-party cloud storage provider suffers a power outage. The outage affects the provider’s entire region, making data temporarily inaccessible. The business cannot process shipments, update stock levels, or synchronize data with stores nationwide. Although no data was lost and services eventually come back online, the impact causes several hours of downtime and financial loss. The corporation’s CIO argues that the event could have been avoided if the provider had implemented proper redundancy across multiple geographic areas. Which cloud design principle would have best reduced the impact of this outage?

A) Vertical scaling
B) Geographic distribution
C) Single-tenancy deployment
D) Cloud bursting

Answer: B) Geographic distribution

Explanation:

A) Vertical scaling refers to increasing the resources (CPU, memory, storage capacity) of a single system or instance. This approach improves performance but does not improve availability. If a power outage occurs in a region, vertical scaling does nothing to prevent service interruptions because the entire region may be unavailable regardless of how powerful the individual systems are. Vertical scaling is typically used to accommodate increased workload demands, not to mitigate geographic failures. Since the scenario involves downtime caused by a lack of cross-regional availability, vertical scaling does not address the problem.

B) Geographic distribution is the correct answer because it describes the strategy of placing workloads, data, and services across multiple geographic regions to improve fault tolerance and resiliency. Cloud providers often offer multi-region replication, failover configurations, and cross-zone load balancing. If the cloud storage provider had used geographically distributed resources, the retail corporation could have accessed its data from another functioning region during the outage. By spreading workloads across different regions, organizations avoid single points of geographic failure. Geographic distribution is particularly important for organizations with nationwide or global operations because it ensures that a local or regional issue does not cascade into a complete service disruption.
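As a rough illustration of how geographic distribution protects an application, the sketch below shows a client that fails over between replicated regional endpoints when one region is unreachable. The endpoint URLs and data layout are hypothetical assumptions, and the actual data replication would be configured on the provider side; this sketch only illustrates the consumer-facing failover behavior.

```python
import requests  # third-party HTTP library, used here for brevity

# Hypothetical endpoints for the same replicated storage bucket in two regions.
REGION_ENDPOINTS = [
    "https://storage.us-east.example.com/inventory/stock.json",
    "https://storage.eu-west.example.com/inventory/stock.json",
]

def fetch_with_regional_failover(endpoints, timeout=3.0):
    """Try each regional replica in turn and return the first successful payload."""
    last_error = None
    for url in endpoints:
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
            return response.json()  # any region can serve the replicated data
        except requests.RequestException as error:
            last_error = error  # region unreachable or failing; try the next one
    raise RuntimeError(f"All regions unavailable: {last_error}")

# Usage: the inventory application keeps running during a single-region outage.
# stock_data = fetch_with_regional_failover(REGION_ENDPOINTS)
```

The design choice being illustrated is simple: when data exists in more than one region, a regional power outage degrades performance at worst instead of halting shipments nationwide.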

C) Single-tenancy deployment refers to using dedicated isolated systems for one customer. While this can improve security or performance guarantees, it does not inherently increase resiliency. Single-tenancy does not solve the problem of regional outages because even a dedicated system located within a single region remains vulnerable to that region’s outages. The scenario explicitly mentions a region-wide outage affecting multiple customers, so isolation at the tenancy level would not have prevented the disruption.

D) Cloud bursting is a hybrid cloud technique where organizations automatically shift workloads from on-premises environments to cloud environments during peak demand. It relates to performance scaling and elasticity, not geographic availability or redundancy. Cloud bursting would not have prevented downtime due to a regional power outage at the cloud provider. The corporation is not dealing with excess demand; it is dealing with an outage that prevents access to critical data. Cloud bursting is unrelated to cross-regional resilience and therefore cannot solve the problem described.

Q119.

A multinational corporation implements a new secure remote access system for employees working from home or while traveling. The system requires multi-factor authentication and uses encrypted tunnels between user devices and corporate resources. After deployment, security analysts observe numerous login attempts from IP addresses located in countries where the organization has no employees. The attempts include repeated username-password combinations and appear to originate from automated bots. The authentication logs show no successful logins from these attempts, but the volume of activity is increasing each day. Which type of attack best describes this pattern?

A) Password spraying
B) Cross-site scripting
C) SQL injection
D) Clickjacking

Answer: A) Password spraying

Explanation:

A) Password spraying is the correct answer because it aligns with the scenario: a high volume of automated login attempts from foreign IP addresses, repeated failures, and no successful authentications. In a password spraying attack, the adversary tries a small set of commonly used passwords, such as "Password123" or "Welcome2025," against many different usernames, deliberately keeping the number of attempts per account low to avoid triggering lockouts. Password spraying typically targets externally facing authentication portals such as VPN systems, webmail interfaces, or cloud services. The goal is to find one weak account that grants initial access; once inside, attackers often escalate privileges, move laterally, or launch further internal attacks. This matches the bot-driven, steadily increasing login activity observed against the remote access system.

B) Cross-site scripting (XSS) occurs when attackers inject malicious scripts into websites, enabling them to steal cookies, hijack sessions, or impersonate users interacting with the affected site. XSS has nothing to do with remote login attempts or repeated username-password trials from foreign IP addresses. The scenario describes authentication intrusion attempts, not web content manipulation or browser-based exploitation. Therefore, XSS is not applicable.

C) SQL injection involves inserting malicious SQL queries into input fields of vulnerable web applications to manipulate backend databases. SQL injection can allow attackers to extract information, alter records, or gain administrative control over database systems. The scenario, however, describes failed authentication attempts through an encrypted remote access system, not exploitation of a database or manipulation of input fields. There is no evidence that the attacker is interacting with a database directly. Therefore SQL injection is not relevant.

D) Clickjacking involves tricking users into clicking hidden or disguised page elements, causing them to unintentionally perform actions such as enabling webcam access or authorizing transactions. Clickjacking requires victim interaction and is usually performed through malicious webpages. The scenario centers around automated bots attempting to authenticate against a remote access system, with no mention of user deception or malicious webpages. Consequently, clickjacking does not match the described behavior.

Given all these factors, password spraying is the clearest, most accurate match for the described activity. Organizations typically address password spraying by enforcing multi-factor authentication (which this organization already requires), implementing strong password policies, setting account lockout thresholds, monitoring for impossible-travel logins, restricting authentication attempts by region, and deploying behavioral analytics to detect abnormal login patterns. The organization should also consider blocking foreign IP ranges, applying conditional access rules, and adding CAPTCHA protections to reduce automated bot attempts.
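For illustration, the following sketch shows one way a detection rule could distinguish a spraying pattern (many accounts, few attempts each, from one source) from ordinary brute force against a single account. The log format and thresholds are assumptions chosen for the example, not values prescribed by the exam objective.

```python
from collections import defaultdict

# Example failed-login records as (source_ip, username); the format is hypothetical.
failed_logins = [
    ("203.0.113.50", "alice"), ("203.0.113.50", "bob"),
    ("203.0.113.50", "carol"), ("203.0.113.50", "dave"),
    ("198.51.100.7", "alice"), ("198.51.100.7", "alice"),
]

# Spraying signature: one source touching many accounts with few tries per
# account, unlike brute force, which hammers a single account repeatedly.
MIN_DISTINCT_ACCOUNTS = 3
MAX_TRIES_PER_ACCOUNT = 2

attempts_by_ip = defaultdict(lambda: defaultdict(int))
for source_ip, username in failed_logins:
    attempts_by_ip[source_ip][username] += 1

for source_ip, per_user in attempts_by_ip.items():
    wide = len(per_user) >= MIN_DISTINCT_ACCOUNTS
    shallow = max(per_user.values()) <= MAX_TRIES_PER_ACCOUNT
    if wide and shallow:
        print(f"Possible password spraying from {source_ip}: "
              f"{len(per_user)} accounts, at most {max(per_user.values())} tries each")
```

In practice this kind of rule feeds a SIEM alert rather than a standalone script, and the thresholds would be tuned to the organization's normal failed-login baseline.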

Q120. 

A cybersecurity team is reviewing an incident in which an attacker gained access to a server by exploiting an unpatched vulnerability in a software component. The attacker then escalated privileges, installed additional malicious tools, exfiltrated sensitive corporate data, and attempted to cover their tracks by modifying logs and removing temporary files. During the investigation, the team identifies that the software vendor issued a patch for the exploited vulnerability three months earlier but the organization failed to apply it. Which security process failure most directly contributed to the success of this attack?

A) Poor password rotation policy
B) Ineffective patch management
C) Lack of network segmentation
D) Weak multi-factor authentication enforcement

Answer: B) Ineffective patch management

Explanation:

A) Poor password rotation policy would involve long-standing or reused passwords that could facilitate unauthorized access if compromised. While password rotation issues can create security weaknesses, nothing in the scenario indicates that the attacker obtained a password or authenticated using stolen credentials. Instead, the attack originated from exploitation of a software vulnerability. Passwords played no role in the attacker gaining access. Therefore, poor password rotation is not relevant to this incident.

B) Ineffective patch management is the correct answer because the failure to apply a security patch directly enabled the attacker to exploit a known vulnerability. Patch management includes identifying available updates, testing them, scheduling deployment, and verifying installation. A breakdown in any of these steps can result in systems remaining vulnerable long after a fix is available. This is exactly what occurred in the scenario. Patch management is one of the most critical components of vulnerability management; organizations that neglect it expose themselves to preventable attacks. The failure here shows inadequate processes, insufficient prioritization, or lack of automation in the patching workflow.

C) Lack of network segmentation refers to insufficient separation of systems, allowing attackers to move laterally more easily once they gain access. While poor segmentation can increase the severity of an attack, the scenario does not indicate that segmentation was the primary issue. The attacker did escalate privileges and exfiltrate data, but those actions were possible only after initial access had already been gained. The question asks which failure most directly contributed, which makes the root cause, the unpatched vulnerability, the correct focus. Segmentation may have worsened the impact but did not enable the attacker to enter the system initially.

D) Weak multi-factor authentication enforcement typically increases the likelihood of credential-based attacks. MFA issues become relevant when attackers try to authenticate using stolen passwords or brute-force attempts, but the scenario does not mention authentication misuse. The attacker exploited a vulnerability, not an authentication weakness. MFA would not prevent exploit-based access because attackers bypass authentication altogether when exploiting software vulnerabilities. Therefore, weak MFA enforcement cannot be the cause of the attack.

Given this analysis, it is clear that ineffective patch management is the failure that most directly contributed to the attack. Had the organization implemented a robust patching process including automated updates, vulnerability scanning, prioritization based on severity, and timely deployment, the attacker would have been unable to exploit the vulnerability. Effective patch management is foundational to cybersecurity, reducing the attack surface and preventing adversaries from exploiting known weaknesses.
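As a simple illustration of the verification step in a patching workflow, the sketch below flags hosts that are still missing a vendor fix beyond a remediation deadline. The inventory structure, CVE identifier, dates, and 30-day SLA are assumptions made purely for the example.

```python
from datetime import date

# Hypothetical patch inventory: when the vendor released the fix and whether
# each host has applied it (data is assumed purely for illustration).
inventory = [
    {"host": "app-srv-01", "cve": "CVE-2025-0001",
     "patch_released": date(2025, 1, 10), "patched": False},
    {"host": "app-srv-02", "cve": "CVE-2025-0001",
     "patch_released": date(2025, 1, 10), "patched": True},
]

MAX_PATCH_AGE_DAYS = 30  # example remediation SLA

def is_overdue(entry, today):
    """A host is out of compliance if a released fix is still missing past the SLA."""
    age_days = (today - entry["patch_released"]).days
    return not entry["patched"] and age_days > MAX_PATCH_AGE_DAYS

today = date(2025, 4, 15)
for entry in inventory:
    if is_overdue(entry, today):
        age_days = (today - entry["patch_released"]).days
        print(f"{entry['host']} missing fix for {entry['cve']}: "
              f"released {age_days} days ago (SLA is {MAX_PATCH_AGE_DAYS} days)")
```

A report like this makes the three-month gap in the scenario visible long before an attacker can exploit it, which is exactly the breakdown that ineffective patch management allows.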

 
