CompTIA SY0-701 Security+ Exam Dumps and Practice Test Questions Set 8 Q141-160


Q141. A multinational financial institution is implementing a new centralized authentication platform for 40,000 global employees. The security team wants to incorporate adaptive authentication that evaluates user risk based on behavior, geolocation, device posture, and historical login patterns. The system must automatically require stronger authentication if the risk score increases, such as detecting unusual access times or suspicious travel patterns. Which technology best supports this goal?

A) Single sign-on
B) Context-aware authentication
C) Password vaulting
D) LDAP directory services

Answer: B) Context-aware authentication

Explanation:

 This scenario describes a high‑security environment requiring dynamic and intelligent authentication decisions. The company wants risk‑based, adaptive authentication where the system evaluates multiple factors in real time. This includes behavior analysis, geolocation checks, device posture validation, and login pattern profiling. This is significantly more advanced than traditional authentication and aligns directly with the concept of context‑aware authentication.

A, single sign‑on, provides convenience by enabling users to authenticate once and access multiple systems without repeated logins. While useful, SSO does not inherently evaluate risk levels, location, behavioral analytics, or device signals. It is not adaptive by itself. SSO can be integrated with adaptive technology but cannot meet these requirements alone.

C, password vaulting, refers to a system that stores and manages passwords securely, typically used in privileged access management solutions. Password vaults automate password rotations, prevent credential sharing, and secure high‑value administrative credentials. However, this has nothing to do with evaluating user risk or enforcing adaptive multi‑factor authentication based on contextual factors.

D, LDAP directory services, is a directory protocol for storing user accounts, devices, groups, and authentication objects. LDAP can support large enterprise identity infrastructures, but it lacks intelligence, behavior analysis, and risk scoring. It is static, not adaptive.

B, context‑aware authentication, is the correct answer because this technology evaluates environmental and behavioral factors during every authentication event. Its characteristics include real‑time analytics, geolocation intelligence, travel pattern detection, continuous risk scoring, and adaptive policy responses such as requiring MFA only when suspicious activity is detected. The concept is part of zero trust and identity-centric security models that financial institutions and large enterprises increasingly implement. This approach reduces friction for legitimate users while improving security against account compromise, credential theft, and insider threats.

Context‑aware authentication systems often incorporate machine learning to baseline normal user behavior and identify anomalies. For example, if an employee typically logs in from London during business hours but suddenly attempts to authenticate at 3:00 a.m. from Brazil on an unrecognized device, the system increases the risk score and demands additional verification. This makes account takeover far more difficult.
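The adaptive flow described above can be sketched in a few lines. This is a minimal illustration, not a real product's API: the signal names, point weights, and the 50-point step-up threshold are all illustrative assumptions.

```python
# Hypothetical context-aware risk scoring sketch. All baseline values,
# weights, and the threshold below are illustrative assumptions.

BASELINE = {
    "country": "GB",                  # employee normally works from London
    "typical_hours": range(7, 20),    # usual business-hours login window
    "known_devices": {"laptop-123"},  # previously registered device IDs
}

def risk_score(country: str, hour: int, device_id: str) -> int:
    """Accumulate risk points for each signal that deviates from baseline."""
    score = 0
    if country != BASELINE["country"]:
        score += 40   # unusual geolocation / suspicious travel
    if hour not in BASELINE["typical_hours"]:
        score += 20   # login outside normal working hours
    if device_id not in BASELINE["known_devices"]:
        score += 30   # unrecognized device posture
    return score

def requires_step_up_mfa(score: int, threshold: int = 50) -> bool:
    """Adaptive policy: demand stronger authentication above the threshold."""
    return score >= threshold

# Normal London login: low risk, no extra friction for the legitimate user.
assert not requires_step_up_mfa(risk_score("GB", 10, "laptop-123"))
# 3:00 a.m. login from Brazil on an unknown device: step-up MFA required.
assert requires_step_up_mfa(risk_score("BR", 3, "unknown-device"))
```

Real platforms replace the static weights with machine-learned behavioral baselines, but the decision structure is the same: score the context, then adapt the authentication requirement.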

The technology also integrates seamlessly with SSO platforms, identity providers, and enterprise MFA systems. While SSO simplifies user access and LDAP stores account data, context‑aware authentication actually enforces risk‑based controls, achieving the organization’s stated objectives. The financial sector, which handles sensitive data and must comply with stringent regulations such as PCI DSS, FFIEC guidelines, and global privacy rules, relies heavily on this technology to balance security and usability. It helps detect compromised credentials, unusual behavior, and suspicious environmental variables while maintaining a seamless authentication experience for legitimate users.

Q142. A cloud security engineer discovers that several virtual machines in a public cloud environment were launched without proper hardening. Ports such as SSH and RDP are exposed to the internet, default credentials are still enabled, and no logging or monitoring tools are installed. The organization wants to prevent this from happening again through automation. Which solution best enforces these security requirements before any virtual machine is allowed to deploy?

A) Continuous integration pipeline
B) Cloud workload protection platform
C) Infrastructure as code with policy enforcement
D) Virtual private cloud segmentation

Answer: C) Infrastructure as code with policy enforcement

Explanation:

The scenario involves a recurring issue: virtual machines are being deployed in an insecure manner. This includes exposed ports, default credentials, and missing critical controls such as logging agents. The organization wants to enforce policies automatically before a VM is even deployed. The correct approach is to use infrastructure as code combined with policy enforcement mechanisms.

A, continuous integration pipeline, relates primarily to application development, not infrastructure deployment governance. While CI can integrate security scanning tools, it does not directly prevent insecure cloud machines from being launched by operations teams or developers using cloud portals.

B, cloud workload protection platform (CWPP), offers runtime protection, vulnerability monitoring, and threat detection once workloads are running. It does not prevent insecure deployments at the creation stage. CWPP tools identify issues after resources already exist, meaning misconfigurations could still expose systems during the deployment window.

D, virtual private cloud segmentation, helps isolate workloads and limit network access. Although segmentation prevents lateral movement and reduces the attack surface, it does not enforce machine‑level hardening or stop insecure VMs from launching.

C, infrastructure as code with policy enforcement, is correct because it embeds security requirements directly into the provisioning process. With IaC tools like Terraform, CloudFormation, or ARM templates, administrators can define the exact configuration of each virtual machine, including port restrictions, credential management rules, and required agents. Combined with policy engines such as Open Policy Agent, AWS Config Rules, Azure Policy, or Terraform Sentinel, the system automatically blocks any deployment that violates predefined security controls.

This creates a secure‑by‑design environment where every VM launched conforms to the organization’s hardening standards. The policies ensure that RDP or SSH cannot be publicly exposed, default credentials cannot be used, and monitoring tools must be installed before deployment is successful. These mechanisms reduce human error and prevent misconfigurations that attackers frequently exploit in public cloud environments.
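The policy-as-code idea can be illustrated with a small pre-deployment check. This is a sketch, not Open Policy Agent or Azure Policy syntax: the configuration keys and rule messages are assumptions about what an IaC plan might expose.

```python
# Sketch of pre-deployment policy enforcement over a VM definition,
# assuming the IaC plan is available as a dict. Keys and rule text are
# illustrative, not a real policy engine's API.

FORBIDDEN_PUBLIC_PORTS = {22, 3389}   # SSH and RDP

def policy_violations(vm: dict) -> list[str]:
    """Return every hardening rule the proposed VM would violate."""
    violations = []
    if set(vm.get("public_ports", [])) & FORBIDDEN_PUBLIC_PORTS:
        violations.append("SSH/RDP must not be exposed to the internet")
    if vm.get("default_credentials", False):
        violations.append("default credentials must be disabled")
    if not vm.get("monitoring_agent", False):
        violations.append("logging/monitoring agent must be installed")
    return violations

def deploy_allowed(vm: dict) -> bool:
    """Block the deployment entirely if any policy rule is violated."""
    return not policy_violations(vm)

insecure_vm = {"public_ports": [22, 443], "default_credentials": True}
hardened_vm = {"public_ports": [443], "default_credentials": False,
               "monitoring_agent": True}
assert not deploy_allowed(insecure_vm)   # rejected before creation
assert deploy_allowed(hardened_vm)       # conforms to hardening standards
```

In practice a policy engine such as OPA or AWS Config Rules evaluates equivalent rules against the rendered template and fails the pipeline before any resource exists.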

Q143. A SOC analyst observes unusual outbound traffic from a server that should only communicate internally. Packet capture shows encrypted communication being sent to an unknown offshore IP address. The server also exhibits signs of process injection, and new hidden scheduled tasks appear. The analyst suspects an advanced persistent threat. What is the most likely category of malware responsible?

A) Botnet malware
B) Spyware
C) Remote access trojan
D) Ransomware

Answer: C) Remote access trojan

Explanation:

 The evidence described suggests an advanced and stealthy threat capable of maintaining persistent unauthorized access. The attacker is exfiltrating data, manipulating processes, hiding activity, and maintaining long‑term control. This behavior aligns directly with a remote access trojan, frequently used in advanced persistent threat operations.

A, botnet malware, typically focuses on building distributed compromised systems for DDoS attacks, spam campaigns, or credential stuffing. Botnets often maintain command‑and‑control communication, but they rarely exhibit deep persistence, process injection, and targeted data exfiltration patterns associated with APT campaigns. The objective of botnet malware is scale, not stealth.

B, spyware, focuses solely on collecting user or system information such as browsing activity, keystrokes, screenshots, or stored credentials. While spyware observes behavior, it does not usually create hidden scheduled tasks or inject processes to gain full system control. Spyware lacks the depth of functionality required for long‑term covert access and complex operational command chains.

D, ransomware, encrypts files for financial extortion. While modern ransomware groups sometimes deploy remote access tools in early stages, ransomware itself is not the category of malware described. Ransomware attacks are noisy and destructive, not stealthy and persistent.

C, remote access trojan (RAT), is the correct answer because RATs give attackers full remote control of a compromised system. Capabilities include process injection, credential harvesting, covert communication, file exfiltration, creation of persistence mechanisms like hidden scheduled tasks, and encrypted command‑and‑control channels. RATs are commonly used by APT groups to maintain stealthy access over long durations, monitor operations, perform lateral movement, and extract sensitive data. The abnormal outbound traffic to an unknown offshore IP aligns with C2 communication, a hallmark of RAT activity.

Q144. A company’s incident response team discovers that attackers gained access through a misconfigured API that lacked proper authentication. The attackers were able to query backend databases, enumerate account information, and extract sensitive data. The CIO wants to implement a solution that continuously evaluates API behavior, detects anomalies, and blocks suspicious requests without relying solely on signatures. Which solution best addresses this?

A) Web application firewall
B) API gateway with behavioral analysis
C) Reverse proxy
D) Database activity monitoring

Answer: B) API gateway with behavioral analysis

Explanation:

API‑related attacks are increasingly common due to misconfigurations, weak authentication, and exposed endpoints. The organization wants a solution capable of continuously analyzing API behavior, detecting anomalies, blocking suspicious activity, and functioning beyond simple signature‑based detection.

A, web application firewall, filters HTTP traffic and blocks known attack patterns like SQL injection or XSS. However, traditional WAFs often rely heavily on signatures and rule sets. They do not natively understand API schemas, user context, or behavioral patterns such as request frequency anomalies or improper API call sequences.

C, reverse proxy, provides load balancing, caching, and traffic routing but does not evaluate behavior or detect anomalies. Reverse proxies are structural components rather than security intelligence tools.

D, database activity monitoring, detects suspicious queries within the database but does not protect the API layer. It monitors database operations but cannot prevent malformed or unauthorized API calls from reaching the backend.

B, an API gateway with behavioral analysis, is correct because modern API gateways include advanced security features tailored for API traffic. These capabilities include API schema enforcement, behavioral baselining, anomaly detection, user context evaluation, token validation, and rate‑limiting. Behavioral analysis allows the system to detect deviations from normal API usage patterns, such as sudden spikes in database queries or enumeration attempts that do not match typical user behavior. This approach is highly effective for preventing abuse of misconfigured or exposed APIs.

API gateways also integrate authentication, encryption, and throttling mechanisms to prevent brute forcing, credential stuffing, and data scraping attempts. This makes them a foundational requirement for securing microservices and distributed cloud architectures.
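The behavioral-baselining idea above can be sketched as a simple per-client rate model. The window size and the three-standard-deviation threshold are illustrative assumptions; commercial gateways use far richer models.

```python
from collections import deque
import statistics

# Hypothetical behavioral baseline for one API client: flag request bursts
# far above the client's historical per-minute rate. Window size and the
# 3-sigma threshold are illustrative assumptions, not a gateway's real API.

class RequestRateBaseline:
    def __init__(self, window: int = 10):
        self.history = deque(maxlen=window)   # requests-per-minute samples

    def observe(self, requests_per_minute: int) -> None:
        """Record a normal-traffic sample into the rolling baseline."""
        self.history.append(requests_per_minute)

    def is_anomalous(self, requests_per_minute: int) -> bool:
        """Anomalous if the rate exceeds mean + 3 standard deviations."""
        if len(self.history) < 3:
            return False                      # not enough data to judge
        mean = statistics.mean(self.history)
        stdev = statistics.pstdev(self.history) or 1.0
        return requests_per_minute > mean + 3 * stdev

baseline = RequestRateBaseline()
for rpm in [20, 22, 19, 21, 20]:              # typical usage pattern
    baseline.observe(rpm)
assert not baseline.is_anomalous(23)          # within normal variation
assert baseline.is_anomalous(500)             # enumeration-style burst
```

A gateway that detects such a burst can throttle the token, demand re-authentication, or block the client outright, all without a signature for the specific attack.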

Q145. A penetration tester is assessing a corporate wireless environment and discovers that the company uses WPA3‑Personal with SAE, strong passwords, and proper access point isolation. However, the tester notices that the guest Wi‑Fi network uses open authentication with no encryption. Visitors frequently use this network for basic internet access. What is the primary security risk associated with the guest network?

A) Rogue access point attacks
B) Evil twin attacks
C) Credential interception
D) Packet sniffing

Answer: D) Packet sniffing

Explanation:

The guest Wi‑Fi network uses open authentication, meaning there is no encryption such as WPA2 or WPA3 protecting wireless traffic. In open networks, anyone within radio range can capture all transmitted data. This makes packet sniffing the most immediate and direct risk.

A, rogue access point attacks, occur when an attacker sets up an unauthorized AP to lure users into connecting. While possible in any wireless environment, the question specifically asks about the primary security risk associated with an open network. Rogue APs are broader threats and not tied specifically to open authentication.

B, evil twin attacks, involve an attacker cloning the legitimate SSID to trick users into connecting. Although open networks increase the feasibility of evil twin attacks, the root vulnerability of open Wi‑Fi is unencrypted traffic. Evil twins require additional attacker effort, whereas passive sniffing requires none.

C, credential interception, is possible if users log into websites without HTTPS. However, many modern applications enforce HTTPS, reducing but not eliminating the risk. Credential theft is a subset of unencrypted traffic interception, not the primary risk itself.

D, packet sniffing, is the correct answer because open Wi‑Fi lacks encryption, allowing attackers to capture traffic effortlessly. Tools like Wireshark, Kismet, and Aircrack can collect packets in real time. Even with HTTPS, metadata such as visited domains, unencrypted DNS requests, and traffic patterns can be captured. For traffic not protected by TLS, attackers can see usernames, search queries, session tokens, and other sensitive information.

Open wireless networks are fundamentally insecure. Even a simple attacker passively listening can gather information without interacting with victims. The best practice is to use WPA3‑Personal for any network, including guest Wi‑Fi. Alternatively, guest networks can use captive portals or rely on Wi‑Fi Enhanced Open, which provides Opportunistic Wireless Encryption (OWE) while still offering a public access experience.

Q146. A company wants to implement zero-trust network access (ZTNA) to protect its internal applications. Employees often work remotely and need access from various devices, but access must be strictly limited based on device health, user identity, and application sensitivity. Which solution best supports this requirement?

A) VPN concentrator
B) Software-defined perimeter
C) Traditional firewall
D) Network intrusion detection system

Answer: B) Software-defined perimeter

Explanation:

 The organization is moving towards a zero-trust model where trust is never implicit. Users and devices must continuously prove they are authorized before gaining access to applications or resources. Traditional network segmentation or perimeter-based security cannot dynamically enforce this level of granular control, especially in a remote workforce environment.

A, VPN concentrator, allows secure encrypted tunnels between users and corporate networks. While VPNs provide confidentiality and basic authentication, they inherently trust users once connected. VPNs do not assess device health, application context, or perform continuous verification, which is required in a zero-trust model. VPNs also expand the attack surface by granting broad access to the internal network, contrary to the zero-trust principle of least privilege.

C, traditional firewall, enforces access control based on IP addresses, ports, and protocols. While useful for basic perimeter defense, firewalls cannot dynamically adapt to user identity, device posture, or application context. They are static controls and cannot evaluate behavioral or risk factors, making them insufficient for zero-trust requirements.

D, network intrusion detection system, passively monitors traffic for anomalies and known threats. While it can alert on suspicious activity, it does not actively enforce access policies or restrict application access based on device and user context. IDS tools are reactive rather than proactive.

B, software-defined perimeter, is correct because it implements zero-trust principles by creating individualized, encrypted connections between authenticated users and authorized applications. SDPs evaluate user identity, device security posture, location, and application sensitivity before granting access. Connections are ephemeral and dynamic, ensuring that users cannot access resources unless policies are met. SDP solutions reduce the attack surface by hiding internal applications from unauthorized users and provide continuous verification. They integrate with identity providers, endpoint detection tools, and conditional access policies to enforce granular access control.

SDP is particularly suited for remote work and cloud adoption because it eliminates the need to expose internal resources via VPNs or public IP addresses. By evaluating real-time risk and applying adaptive policies, organizations can ensure users only access resources necessary for their role and under approved conditions. This aligns perfectly with the company’s goal of enforcing zero-trust principles while accommodating a flexible workforce.
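An SDP-style access decision can be sketched as a deny-by-default policy check over identity, device posture, and application sensitivity. The attribute names and the two-tier policy table are illustrative assumptions, not a vendor's actual policy model.

```python
# Illustrative SDP-style access decision: identity, device posture, and
# application sensitivity are all evaluated before a connection is brokered.
# The policy table and attribute names are assumptions for this sketch.

POLICY = {
    # application sensitivity -> requirements for access
    "low":  {"mfa": False, "device_compliant": False},
    "high": {"mfa": True,  "device_compliant": True},
}

def grant_access(user_authenticated: bool, mfa_passed: bool,
                 device_compliant: bool, app_sensitivity: str) -> bool:
    """Deny by default; every condition must hold to broker a tunnel."""
    if not user_authenticated:
        return False
    req = POLICY[app_sensitivity]
    if req["mfa"] and not mfa_passed:
        return False
    if req["device_compliant"] and not device_compliant:
        return False
    return True

# Authenticated user on a non-compliant laptop: low-sensitivity apps only.
assert grant_access(True, False, False, "low")
assert not grant_access(True, False, False, "high")
# Fully verified user and healthy device: sensitive application permitted.
assert grant_access(True, True, True, "high")
```

Real SDP deployments re-evaluate these conditions continuously, so a device that falls out of compliance mid-session loses its connection rather than retaining implicit trust.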

Q147. During a penetration test, a tester finds a web application that allows file uploads without validating file type or size. The application processes these files and executes them in a backend environment. Which type of vulnerability does this represent?

A) Cross-site scripting
B) Remote code execution
C) SQL injection
D) Directory traversal

Answer: B) Remote code execution

Explanation:

 The scenario describes a critical flaw where uploaded files are executed on the backend without validation. Attackers can leverage this vulnerability to run arbitrary code on the server, potentially taking full control of the system.

A, cross-site scripting, involves injecting malicious scripts into web pages viewed by other users. XSS targets the client side and does not allow execution of commands on the server itself, making it unrelated to the file upload vulnerability described.

C, SQL injection, targets the database layer by injecting malicious SQL queries. While dangerous, SQL injection exploits database logic and cannot execute arbitrary OS-level commands through file uploads.

D, directory traversal, allows attackers to access files outside of intended directories by manipulating path references. While directory traversal can lead to information disclosure, it does not execute arbitrary code in the server environment.

B, remote code execution, is correct because the uploaded files are processed and executed on the server. RCE vulnerabilities allow attackers to execute arbitrary scripts or commands with the privileges of the web application. Exploitation can lead to complete server compromise, lateral movement, data exfiltration, or deployment of persistent malware. Mitigation requires validating file types, enforcing size limits, scanning for malicious content, and executing files in isolated environments. Proper input validation, secure coding practices, and runtime monitoring are essential to prevent RCE in file upload scenarios.
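The mitigations listed above can be sketched as a pre-storage validation gate. The extension allowlist and size cap are illustrative assumptions; a real deployment would also scan content and store uploads outside any executable path.

```python
import os

# Defensive upload validation sketch: allowlisted extensions, a size cap,
# and a path-safety check so uploaded scripts are rejected before the
# backend ever touches them. Limits and extensions are assumptions.

ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}
MAX_SIZE_BYTES = 5 * 1024 * 1024   # 5 MiB cap

def validate_upload(filename: str, size_bytes: int) -> bool:
    """Accept only files that pass every check; reject everything else."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False                # rejects .php, .jsp, .exe, etc.
    if size_bytes > MAX_SIZE_BYTES:
        return False                # enforce the size limit
    if ".." in filename or "/" in filename:
        return False                # no path tricks in the stored name
    return True

assert validate_upload("report.pdf", 1024)
assert not validate_upload("shell.php", 1024)          # executable script
assert not validate_upload("big.png", 10 * 1024 * 1024)
assert not validate_upload("../../etc/cron.d/job.pdf", 1024)
```

Validation alone is not sufficient against RCE, but combined with isolated execution environments and content scanning it removes the easiest attack path.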

Q148. A SOC analyst notices repeated login attempts targeting multiple accounts using only a few common passwords. No single account is being targeted aggressively, and some logins occur from various geographic locations. Which type of attack is being observed?

A) Brute-force attack
B) Password spraying
C) Credential stuffing
D) Phishing

Answer: B) Password spraying

Explanation:

 The described behavior matches a password spraying attack. Password spraying is characterized by using a small set of commonly used passwords across many accounts to avoid triggering account lockouts and evade detection. It exploits weak password practices rather than attempting exhaustive guessing on a single account.

A, brute-force attack, tries every possible password for a specific account until successful. Brute-force is highly targeted and noisy, often triggering alerts and lockouts. In this case, no single account is under exhaustive attack, so brute-force is unlikely.

C, credential stuffing, involves using previously leaked username and password combinations to automate login attempts. Credential stuffing relies on stolen credentials rather than trying a few common passwords across multiple accounts.

D, phishing, uses social engineering to trick users into revealing credentials. The scenario does not indicate deceptive communication; it shows automated login attempts.

B, password spraying, is correct. Attackers avoid detection by attempting a limited set of passwords across numerous accounts, often from multiple IPs or geolocations. Detection and mitigation include monitoring failed login patterns, enforcing multi-factor authentication, implementing lockout thresholds, and educating users to use strong, unique passwords. Password spraying is commonly used in large organizations where users tend to reuse simple passwords, and effective defenses require a combination of monitoring, policy enforcement, and behavioral analytics.
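The detection pattern described, many distinct accounts each failing only a few times, can be sketched directly. The thresholds below are illustrative assumptions; real SOC tooling tunes them per environment.

```python
from collections import defaultdict

# Detection sketch: spraying shows many *distinct* accounts failing with a
# small set of passwords, rather than one account failing many times.
# Thresholds are illustrative assumptions, not production values.

def looks_like_spraying(failed_logins: list[tuple[str, str]],
                        min_accounts: int = 10,
                        max_per_account: int = 3) -> bool:
    """Each tuple is (account, source_ip) for one failed login attempt."""
    attempts = defaultdict(int)
    for account, _source_ip in failed_logins:
        attempts[account] += 1
    many_accounts = len(attempts) >= min_accounts        # wide spread
    low_and_slow = max(attempts.values(), default=0) <= max_per_account
    return many_accounts and low_and_slow

spray = [(f"user{i}", "203.0.113.5") for i in range(12)]  # 1 try each
brute = [("admin", "203.0.113.5")] * 50                   # 50 tries, 1 account
assert looks_like_spraying(spray)      # wide, low-and-slow pattern
assert not looks_like_spraying(brute)  # classic brute force instead
```

The same data distinguishes the answer choices: brute force concentrates failures on one account, while spraying spreads them thinly across many.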

Q149. A company plans to enforce that sensitive cloud data is encrypted in a way that even the cloud provider cannot decrypt it. The organization will generate and manage all encryption keys internally. Which encryption model achieves this requirement?

A) Provider-managed encryption
B) Customer-managed encryption with provider key storage
C) Customer-managed encryption with customer key storage
D) Provider-managed encryption with customer key storage

Answer: C) Customer-managed encryption with customer key storage

Explanation:

The organization wants full control over encryption keys so that only they can access the data. This approach is often referred to as client-side encryption or customer key ownership in cloud environments.

A, provider-managed encryption, means the cloud provider generates, stores, and manages the keys. While this protects data at rest, the provider technically has access to keys and could decrypt the data if required or compromised.

B, customer-managed encryption with provider key storage, allows customers to define policies and rotate keys but stores the keys in the provider’s environment. The provider retains access, which does not meet the requirement of preventing provider decryption.

D, provider-managed encryption with customer key storage, is conceptually inconsistent because providers cannot manage keys stored entirely outside their infrastructure.

C, customer-managed encryption with customer key storage, is correct. The organization generates and securely stores the keys, retaining exclusive control. Data encrypted in this manner cannot be decrypted by the cloud provider. This model ensures strong confidentiality and regulatory compliance, suitable for industries with strict privacy requirements. Implementation requires robust key lifecycle management, including generation, rotation, backup, and secure destruction. Loss of keys would result in permanent data inaccessibility, so secure storage practices are critical.

Q150. During a forensic investigation, an analyst wants to ensure that a disk image collected from a suspect system has not been altered. Which technique provides the highest assurance of integrity?

A) Disk partitioning
B) Hashing
C) Defragmentation
D) Sanitization

Answer: B) Hashing

Explanation:

 Preserving evidence integrity is a cornerstone of digital forensics. The goal is to confirm that the disk image remains identical to the original, with no modifications during collection, transfer, or analysis.

A, disk partitioning, changes the structure of the disk, which would alter data and compromise integrity verification. It is not used for integrity assurance.

C, defragmentation, reorganizes files to improve storage efficiency. This process modifies data placement and would invalidate forensic evidence.

D, sanitization, destroys or wipes data to prevent recovery. This is counterproductive in a forensic investigation.

B, hashing, is correct. Hashing uses cryptographic functions (SHA-256, SHA-3, or SHA-512) to produce a unique fingerprint of the disk image. Even a single-bit change results in a completely different hash value. Analysts typically compute hashes of both the original media and any duplicates. During the investigation, repeated hashing ensures the image remains unaltered. Hashes are also documented to support the chain of custody and legal admissibility. This technique allows investigators to demonstrate to courts or regulatory bodies that evidence has not been tampered with and maintains trust in the forensic process. Hashing is widely regarded as a best practice in evidence collection, analysis, and reporting.
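The verification step can be shown with the standard library's `hashlib`, hashing in chunks so a multi-terabyte image never needs to fit in memory. The in-memory "sectors" below stand in for real image data.

```python
import hashlib

# Forensic integrity sketch with SHA-256: hash the image in chunks, record
# the fingerprint, and re-hash later to prove nothing changed. The byte
# strings below are stand-ins for real disk-image data.

def sha256_of_stream(chunks) -> str:
    """Compute a SHA-256 digest incrementally over an iterable of bytes."""
    digest = hashlib.sha256()
    for chunk in chunks:
        digest.update(chunk)
    return digest.hexdigest()

original     = [b"sector-0", b"sector-1", b"sector-2"]
working_copy = [b"sector-0", b"sector-1", b"sector-2"]
tampered     = [b"sector-0", b"sector-X", b"sector-2"]

# Identical images produce identical fingerprints...
assert sha256_of_stream(original) == sha256_of_stream(working_copy)
# ...while a single changed byte yields a completely different hash.
assert sha256_of_stream(original) != sha256_of_stream(tampered)
```

Recording the digest at acquisition time and re-computing it at every hand-off is what ties hashing into the chain of custody.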

Q151. A security analyst is investigating unusual network traffic originating from an internal server. The traffic pattern shows the server connecting to multiple external IP addresses, sending small amounts of data frequently, and using uncommon ports. What type of compromise is most likely occurring?

A) Distributed denial-of-service attack
B) Botnet infection
C) Man-in-the-middle attack
D) SQL injection

Answer: B) Botnet infection

Explanation:

A, distributed denial-of-service attack, involves overwhelming a target system with traffic to render it unavailable. In this scenario, the internal server is the source of outbound connections, not the target of overwhelming traffic, making DDoS unlikely.

C, man-in-the-middle attack, occurs when an attacker intercepts communication between two parties to eavesdrop or alter data. The unusual outbound traffic does not indicate interception of communication, so MITM is not relevant.

D, SQL injection, targets databases via malicious SQL queries. This type of compromise would not generate the pattern of small, frequent outbound connections to multiple external IPs.

B, botnet infection, is correct. Botnets involve compromised systems controlled remotely by attackers, often referred to as bots or zombies. Characteristics include:

Communication with command-and-control (C2) servers, often over uncommon ports to avoid detection.

Small, frequent outbound traffic (beaconing) as the bot checks in with C2 infrastructure.

Participation in larger malicious campaigns such as spam distribution, DDoS, or crypto-mining.

Detection and mitigation involve network traffic monitoring, endpoint scans for malware, isolating affected systems, updating signatures, and applying behavioral analytics to identify command-and-control patterns. Botnet infections are highly stealthy and can persist if not detected early, highlighting the importance of continuous monitoring and endpoint protection.
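A simple beaconing heuristic over flow records captures the traffic pattern in the question: many small, repeated outbound connections to the same destination on uncommon ports. The flow-record fields and thresholds are illustrative assumptions.

```python
from collections import Counter

# Beaconing heuristic sketch: flag destinations receiving many small,
# repeated outbound flows on uncommon ports. Field names and thresholds
# are illustrative assumptions, not a specific NDR product's schema.

COMMON_PORTS = {80, 443, 53, 25}

def beaconing_destinations(flows: list[dict],
                           min_connections: int = 10,
                           max_bytes: int = 2048) -> set[str]:
    """Return external IPs showing a small-and-frequent check-in pattern."""
    counts = Counter()
    for flow in flows:
        small = flow["bytes_out"] <= max_bytes          # tiny payloads
        odd_port = flow["dst_port"] not in COMMON_PORTS  # evasive port choice
        if small and odd_port:
            counts[flow["dst_ip"]] += 1
    return {ip for ip, n in counts.items() if n >= min_connections}

# Twenty small check-ins to one C2-like host, plus one normal HTTPS upload.
flows = [{"dst_ip": "198.51.100.7", "dst_port": 4444, "bytes_out": 300}
         for _ in range(20)]
flows += [{"dst_ip": "93.184.216.34", "dst_port": 443, "bytes_out": 90000}]
assert beaconing_destinations(flows) == {"198.51.100.7"}
```

Production detection adds timing regularity (fixed check-in intervals with jitter) and destination reputation, but the core signal is the same.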

Q152. A company enforces that employees can only access resources necessary for their job functions and nothing more. Access rights are regularly reviewed and adjusted based on changing responsibilities. Which security principle is being implemented?

A) Separation of duties
B) Least privilege
C) Role rotation
D) Mandatory vacation

Answer: B) Least privilege

Explanation:

A, separation of duties, distributes responsibilities across multiple individuals to prevent fraud or errors. While it reduces risk, it does not focus on minimizing access rights for each user.

C, role rotation, involves periodically shifting responsibilities to prevent insider threats and fraud. This strategy does not inherently enforce access limitation.

D, mandatory vacation, is a policy requiring employees to take time off to detect anomalies in their tasks. It is unrelated to access rights or privilege limitation.

B, least privilege, is correct. Least privilege restricts users, processes, and applications to only the minimum privileges necessary to perform their duties. Key aspects include:

Access reviews: Regularly auditing permissions to ensure they align with current roles.

Role-based access control (RBAC): Assigning access based on clearly defined roles.

Privilege escalation monitoring: Detecting unauthorized attempts to gain higher access.

Enforcing least privilege reduces the attack surface, mitigates potential insider threats, limits the impact of compromised accounts, and aligns with regulatory compliance requirements. It is a foundational concept in cybersecurity frameworks such as NIST, ISO 27001, and CIS Controls.
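The RBAC aspect above reduces to a deny-by-default permission check. The role and permission names are illustrative assumptions for the sketch.

```python
# Minimal RBAC sketch of least privilege: each role maps to the smallest
# permission set its job needs, and checks deny anything not explicitly
# granted. Role and permission names are illustrative assumptions.

ROLE_PERMISSIONS = {
    "helpdesk": {"ticket:read", "ticket:update"},
    "dba":      {"db:read", "db:write", "db:backup"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles or unlisted permissions get nothing."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("helpdesk", "ticket:read")
assert not is_allowed("helpdesk", "db:write")       # outside job function
assert not is_allowed("contractor", "ticket:read")  # unknown role: denied
```

Periodic access reviews then amount to diffing each user's effective permissions against what their current role actually requires.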

Q153. An attacker modifies DNS records of a popular website to redirect users to a fraudulent website without their knowledge. Users enter credentials, which the attacker captures. What type of attack is this?

A) Pharming
B) Phishing
C) Man-in-the-middle
D) Domain hijacking

Answer: A) Pharming

 Explanation:

B, phishing, involves sending fraudulent emails or messages to trick users into revealing sensitive information. Phishing does not modify DNS records or redirect traffic at the domain level.

C, man-in-the-middle, intercepts communication between parties. While MITM can capture credentials, it usually requires real-time interception rather than permanent DNS changes.

D, domain hijacking, occurs when an attacker gains control of a domain’s registration information. Domain hijacking can facilitate pharming but is a distinct process from redirecting users through DNS manipulation.

A, pharming, is correct. Pharming redirects legitimate website traffic to malicious sites by exploiting DNS vulnerabilities or altering hosts files. Users are unaware they are on a fake site and may provide sensitive credentials. Mitigation strategies include:

Implementing DNSSEC to ensure DNS record integrity.

Monitoring DNS records for unauthorized changes.

Educating users to verify HTTPS certificates and site authenticity.

Pharming attacks are highly effective because they bypass user vigilance, often relying on the trust in widely used domains. Organizations must combine technical controls with user education to reduce exposure.
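The DNS-monitoring control above can be sketched as a comparison between currently observed answers and a known-good record set. The expected and observed addresses are illustrative test data; a real monitor would resolve the domain on a schedule.

```python
# Monitoring sketch: compare observed DNS answers against an approved
# record set and surface anything unexpected, which may indicate pharming.
# The addresses below are illustrative documentation-range test data.

def unexpected_dns_answers(observed: set[str], expected: set[str]) -> set[str]:
    """Any answer outside the approved set warrants an alert."""
    return observed - expected

expected_a_records = {"203.0.113.10", "203.0.113.11"}

# Legitimate resolution: every answer is in the approved set.
assert unexpected_dns_answers({"203.0.113.10"}, expected_a_records) == set()
# Pharming-style tampering introduces an attacker-controlled address.
assert unexpected_dns_answers({"198.51.100.99"},
                              expected_a_records) == {"198.51.100.99"}
```

DNSSEC addresses the same risk cryptographically by letting resolvers verify record signatures, so tampered answers fail validation rather than merely triggering an alert.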

Q154. A penetration tester discovers that a web application is vulnerable to SQL injection. The tester can manipulate input fields to retrieve unauthorized data from the backend database. What security control would directly mitigate this vulnerability?

A) Input validation
B) Strong password policies
C) Multi-factor authentication
D) Network segmentation

Answer: A) Input validation

Explanation:

B, strong password policies, strengthens authentication but does not prevent SQL injection attacks. Password policies do not affect data input handling in applications.

C, multi-factor authentication, adds an additional layer of authentication. While it protects account access, it does not prevent SQL injection in application input fields.

D, network segmentation, separates network segments to limit lateral movement. Segmentation may limit exposure but does not directly stop SQL injection.

A, input validation, is correct. Input validation ensures that user-supplied data conforms to expected formats and constraints before being processed. Key techniques include:

Parameterized queries or prepared statements to separate SQL code from input data.

Whitelisting acceptable characters and rejecting malicious input.

Escaping special characters to prevent execution of arbitrary SQL commands.

Additional measures include proper error handling to prevent information disclosure and conducting security testing to identify injection points. Input validation directly mitigates SQL injection risks and enhances overall application security.
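The parameterized-query technique can be demonstrated with the standard library's sqlite3 module; the placeholder keeps attacker input bound as data rather than interpreted as SQL. The table and payload are illustrative.

```python
import sqlite3

# Parameterized-query sketch with the stdlib sqlite3 module: the ?
# placeholder binds user input as data, so it can never alter the query's
# structure. Table contents and the payload are illustrative.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(name: str) -> list:
    # Bound safely: quotes inside `name` cannot break out of the literal.
    return conn.execute("SELECT name FROM users WHERE name = ?",
                        (name,)).fetchall()

assert find_user("alice") == [("alice",)]
# A classic injection payload matches nothing instead of dumping the table.
assert find_user("' OR '1'='1") == []
```

Contrast this with string concatenation (`"... WHERE name = '" + name + "'"`), where the same payload would rewrite the query and return every row.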

Q155. A security administrator notices multiple failed login attempts to critical systems from the same IP address. After several attempts, one account is compromised. Which type of attack occurred?

A) Brute-force attack
B) Password spraying
C) Credential stuffing
D) Keylogging

Answer: A) Brute-force attack

Explanation:

B, password spraying, involves trying a small set of commonly used passwords across many accounts. In this case, the attack focused on one account with repeated attempts, so password spraying is not applicable.

C, credential stuffing, relies on previously breached usernames and passwords, attempting login across many accounts. There is no indication that leaked credentials were used in this scenario.

D, keylogging, involves malware capturing keystrokes to steal credentials. The scenario does not indicate malware activity or key capture.

A, brute-force attack, is correct. Brute-force attacks involve systematically attempting all possible password combinations or trying a large list of passwords against a single account until successful. Characteristics include:

High volume of login attempts targeting one account.

Use of automated tools to try multiple passwords quickly.

Potential triggering of account lockouts or alerts if monitoring is in place.

Mitigation strategies include enforcing account lockout policies, implementing multi-factor authentication, using strong passwords, and monitoring for unusual login attempts. Brute-force attacks are a common initial compromise method and can lead to unauthorized access if controls are weak.
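The account-lockout mitigation above can be sketched as a small counter. The threshold of three failures and the in-memory state are illustrative assumptions; production systems add time windows, backoff, and alerting:

```python
class LockoutTracker:
    """Minimal account-lockout sketch: lock an account after a fixed
    number of consecutive failed logins (threshold is illustrative)."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = {}   # account -> consecutive failure count
        self.locked = set()  # accounts currently locked out

    def record_failure(self, account):
        self.failures[account] = self.failures.get(account, 0) + 1
        if self.failures[account] >= self.threshold:
            self.locked.add(account)

    def record_success(self, account):
        # A successful login resets the counter, unless already locked.
        if account not in self.locked:
            self.failures[account] = 0

    def is_locked(self, account):
        return account in self.locked
```

A lockout like this is precisely what turns a high-volume brute-force attempt into a visible, self-limiting event instead of a silent compromise.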

Q156. A company wants to prevent unauthorized devices from connecting to its wireless network. The security team implements a solution that allows only devices with registered MAC addresses to access the network. Which type of control is being used?

A) Network access control
B) MAC filtering
C) WPA3 encryption
D) Captive portal

Answer: B) MAC filtering

Explanation:

A, network access control (NAC), is a broader solution that enforces security policy compliance for devices attempting to connect to a network. NAC can include posture assessment, device health checks, and policy enforcement, but the scenario specifically describes allowing devices based on MAC addresses only.

C, WPA3 encryption, provides strong encryption for wireless networks to protect data in transit. WPA3 ensures confidentiality and integrity but does not control which devices are permitted to join the network.

D, captive portal, is a web page presented to users before granting network access, often used in guest Wi-Fi environments. While it enforces authentication, it does not rely on device-specific identifiers like MAC addresses.

B, MAC filtering, is correct. MAC filtering restricts network access based on the unique hardware addresses of network interfaces. Key characteristics include:

Device registration: Only pre-approved MAC addresses are allowed to connect.

Policy enforcement: Unauthorized devices are denied access.

Simple implementation: Can be configured on wireless access points or switches.

Limitations include MAC spoofing, where attackers change their device's MAC address to mimic an authorized one. Despite this, MAC filtering adds a layer of security and can be combined with stronger controls such as WPA3, NAC, and 802.1X authentication for comprehensive protection.
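A MAC allowlist check can be sketched as follows. The registered addresses are hypothetical; normalization matters because MAC addresses appear in several formats (colon, hyphen, and dotted notations):

```python
# Hypothetical device registry of pre-approved MAC addresses.
ALLOWED_MACS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}

def normalize_mac(mac):
    """Canonicalize common MAC formats (AA-BB-CC-DD-EE-01, aabb.ccdd.ee01)
    to lowercase colon-separated form before comparison."""
    digits = "".join(c for c in mac.lower() if c in "0123456789abcdef")
    if len(digits) != 12:
        raise ValueError("invalid MAC address: %r" % (mac,))
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

def is_permitted(mac):
    """Allow the device only if its normalized MAC is registered."""
    return normalize_mac(mac) in ALLOWED_MACS
```

Note that this check trusts the address the client presents, which is exactly why MAC spoofing defeats filtering used on its own.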

Q157. During a forensic examination, an investigator wants to ensure that logs from a compromised system cannot be altered or deleted by an attacker. Which security principle should be enforced?

A) Confidentiality
B) Integrity
C) Non-repudiation
D) Immutability

Answer: D) Immutability

Explanation:

A, confidentiality, focuses on protecting data from unauthorized access. While important, confidentiality does not prevent tampering or deletion of logs.

B, integrity, ensures that data remains accurate and unaltered. Integrity checks, such as hashing, can detect changes but may not inherently prevent them.

C, non-repudiation, guarantees that actions can be traced back to a specific user to prevent denial of responsibility. Non-repudiation supports accountability but does not enforce log preservation against modification.

D, immutability, is correct. Immutability ensures that data, such as logs, cannot be modified or deleted once written. Implementation strategies include:

Write-once-read-many (WORM) storage: Prevents modification after creation.

Append-only logging: New entries can be added but existing records cannot be altered.

Blockchain-based or cryptographically protected logs: Provides tamper-evident mechanisms.

Enforcing immutability is critical for forensic investigations, regulatory compliance (e.g., PCI DSS, HIPAA, ISO 27001), and internal auditing. It ensures that evidence remains reliable and provides a trustworthy record of system activity, even if an attacker gains access.
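The append-only, tamper-evident idea can be sketched with a hash chain, where each record commits to the one before it. This is a simplified illustration of the cryptographically protected logging mentioned above, not a substitute for WORM storage:

```python
import hashlib

class HashChainedLog:
    """Append-only log sketch: each entry stores a hash covering the
    previous entry's hash, so any later modification of an earlier
    record breaks verification of the whole chain."""

    def __init__(self):
        self.entries = []  # list of (message, chained_hash)

    def append(self, message):
        prev = self.entries[-1][1] if self.entries else "0" * 64
        h = hashlib.sha256((prev + message).encode()).hexdigest()
        self.entries.append((message, h))

    def verify(self):
        """Recompute the chain from the start; False means tampering."""
        prev = "0" * 64
        for message, h in self.entries:
            if hashlib.sha256((prev + message).encode()).hexdigest() != h:
                return False
            prev = h
        return True
```

An attacker who rewrites one log line without recomputing every subsequent hash (which true WORM storage would prevent anyway) leaves detectable evidence.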

Q158. A security analyst observes unusual outbound traffic on a server, which appears to be communicating with multiple external IP addresses. The traffic includes encrypted payloads sent at regular intervals. Which type of compromise is most likely present?

A) Botnet
B) Phishing
C) Ransomware
D) SQL injection

Answer: A) Botnet

Explanation:

B, phishing, involves tricking users into disclosing credentials or sensitive data. Phishing does not explain ongoing automated outbound traffic from a server.

C, ransomware, encrypts files and often demands payment for decryption. While ransomware may communicate with a command-and-control server for keys, the continuous, periodic beaconing described is more indicative of a persistent compromise than of immediate file encryption.

D, SQL injection, targets a database through unsanitized input. SQL injection affects data retrieval or modification, not outbound command-and-control communication.

A, botnet, is correct. Characteristics of botnet activity include:

Command-and-control communication: Bots connect to C2 servers to receive instructions.

Beaconing: Regular, small, and encrypted traffic to maintain covert communication.

Multiple target connections: Bots often communicate with multiple external IPs.

Potential participation in broader attacks: Bots may be used for spam, DDoS, or cryptocurrency mining.

Detection strategies include monitoring network traffic for unusual patterns, inspecting for encrypted outbound connections on uncommon ports, and employing intrusion detection systems. Remediation involves isolating affected systems, performing malware removal, and patching vulnerabilities that allowed compromise.
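The beaconing characteristic above can be approximated with a simple inter-arrival-time heuristic: traffic sent at regular intervals has low jitter between events. The 10% jitter threshold is an illustrative assumption; production intrusion detection uses far richer features:

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter_ratio=0.1):
    """Flag an outbound connection series whose gaps between events are
    suspiciously regular (low jitter), a common C2 beaconing signature.
    Timestamps are in seconds; the threshold is an assumption."""
    if len(timestamps) < 4:
        return False  # too few samples to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    return avg > 0 and pstdev(gaps) / avg < max_jitter_ratio
```

A connection seen every ~300 seconds scores as beaconing, while bursty human-driven traffic does not; real tooling would also weigh payload sizes, destinations, and ports.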

Q159. A company mandates that employees authenticate using a password and a one-time code generated by an authenticator app. Which authentication method is being employed?

A) Single-factor authentication
B) Two-factor authentication
C) Biometric authentication
D) Certificate-based authentication

Answer: B) Two-factor authentication

Explanation:

A, single-factor authentication, involves only one type of credential, such as a password. Single-factor authentication provides limited security and is insufficient for sensitive accounts.

C, biometric authentication, relies on physical characteristics such as fingerprints or facial recognition. The scenario describes a password and code, not biometrics.

D, certificate-based authentication, uses cryptographic certificates and public/private key pairs for validation. This is unrelated to the scenario described.

B, two-factor authentication, is correct. Two-factor authentication (2FA) combines:

Something you know: A password or PIN.

Something you have: A time-based one-time password (TOTP) from an authenticator app.

2FA significantly increases security by requiring both factors for access. Even if a password is compromised, access cannot be obtained without the second factor. Best practices include:

Implementing 2FA across all critical systems and applications.

Educating users on the importance of safeguarding authentication devices.

Monitoring for failed attempts and anomalies to detect possible credential theft.

Two-factor authentication aligns with zero-trust principles, reducing risks associated with password reuse, phishing, and credential compromise.
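The "something you have" factor can be illustrated with a minimal RFC 6238 TOTP implementation using only the standard library (SHA-1 and 30-second steps, the defaults most authenticator apps use):

```python
import base64, hmac, struct, time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Minimal RFC 6238 TOTP sketch: HMAC the time-step counter with the
    shared secret, then apply RFC 4226 dynamic truncation."""
    key = base64.b32decode(secret_b32.upper())
    now = for_time if for_time is not None else time.time()
    counter = int(now // step)
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC test secret: the base32 encoding of the ASCII string
# "12345678901234567890"; at T=59 this matches the published
# RFC 4226 test value for counter 1.
RFC_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
assert totp(RFC_SECRET, for_time=59) == "287082"
```

Server and app derive the same code independently from the shared secret and the current time, which is why a stolen password alone is insufficient.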

Q160. During a penetration test, a tester discovers that a Linux server has a cron job owned by root that executes a script every five minutes. The script is writable by all users. Which type of attack could be performed next?

A) Privilege escalation
B) Lateral movement
C) Credential harvesting
D) Pivoting

Answer: A) Privilege escalation

Explanation:

B, lateral movement, involves moving from one compromised system to others within a network. Lateral movement typically occurs after initial access and often relies on having elevated privileges to access other systems or sensitive resources. In this scenario, the focus is on a local vulnerability within a single Linux host, and there is no indication that other systems are being targeted yet. Therefore, lateral movement is a subsequent step that might follow exploitation but is not the immediate attack vector.

C, credential harvesting, refers to collecting stored credentials, such as passwords, API keys, or authentication tokens, to gain further access. While credential harvesting can be part of a larger attack campaign, the vulnerability described here involves improper file permissions on a cron job script rather than exposed credentials. There is no direct indication that credentials are stored insecurely in this script, making credential harvesting an unlikely immediate attack.

D, pivoting, involves using a compromised host as a staging point to attack other systems in a network. Pivoting is often part of advanced persistent threat (APT) tactics, where attackers gain initial access and then leverage a foothold to move laterally to other hosts. Similar to lateral movement, pivoting may occur after privilege escalation, but the immediate concern here is exploiting the local system vulnerability.

A, privilege escalation, is correct. Privilege escalation refers to techniques that allow an attacker to gain higher privileges than those originally granted. In the scenario, the cron job script is executed by root but is writable by all users, which is a classic example of a local privilege escalation vulnerability. Key factors include:

Misconfigured permissions: The script is owned by root, meaning it runs with root privileges. However, the script is writable by all users, which violates the principle of least privilege and creates an opportunity for exploitation. Any user on the system can edit the script and inject commands that will execute with root-level access when the cron job runs.

Exploitation method: A penetration tester or attacker could append malicious commands, such as creating a new root-level user, adding the current user to privileged groups, installing a backdoor, or changing system configurations. Since the cron job executes automatically every five minutes, the attack can be performed asynchronously without the attacker needing to maintain a live session, making this vulnerability particularly dangerous.

Outcome: Successful exploitation provides the attacker with full administrative control over the Linux server. This allows:

Installation of persistent malware or rootkits.

Modification or deletion of system files and logs, which can hinder detection.

Access to sensitive data stored locally or mounted from network shares.

Facilitation of further attacks, such as lateral movement or pivoting to other systems, once root privileges are obtained.

Mitigation strategies include:

Enforcing least privilege: All scripts, particularly those executed by root or other privileged accounts, should have minimal write permissions. Only trusted administrators should have the ability to modify these scripts. Permissions should generally be set to 700 (owner read/write/execute) or 755 if execution by others is necessary but without write access.

Regular auditing: Security teams should periodically review all cron jobs and associated file permissions to identify misconfigurations. Automated tools can assist in detecting writable root-owned scripts and alert administrators to potential vulnerabilities.

Monitoring and alerts: Implement file integrity monitoring to detect unauthorized modifications to critical scripts. Alerts can be configured to notify security teams immediately when a root-owned cron job script is changed.

System hardening: Apply standard Linux hardening guidelines, including removing unnecessary cron jobs, restricting user permissions, and limiting access to administrative accounts. Using tools like sudo policies, SELinux, or AppArmor can further restrict the capabilities of processes and reduce the risk of privilege escalation.

Education and best practices: Administrators and developers should follow secure coding and configuration practices. All scripts and automated tasks should be reviewed for security risks before deployment, and access should be granted only on a need-to-know basis.

Misconfigured cron jobs are a common and high-risk vector for local privilege escalation on Linux systems. According to security research and CVE reports, writable root-owned cron jobs have been exploited in multiple real-world attacks. Attackers often automate exploitation to gain persistent root access, making proactive detection and remediation critical. By combining permissions enforcement, regular auditing, monitoring, and least privilege principles, organizations can significantly reduce the likelihood of privilege escalation and maintain a stronger security posture.
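The auditing step described above, detecting root-owned scripts that other users can write, can be sketched with the standard library. The script paths passed in are assumptions; in practice they would be parsed from /etc/crontab and the /etc/cron.d directory:

```python
import os
import stat

def find_writable_cron_scripts(paths):
    """Audit sketch: flag scripts that are group- or world-writable,
    the misconfiguration that enables cron-based privilege escalation.
    Missing paths are skipped (a dangling cron entry is a separate issue)."""
    risky = []
    for path in paths:
        try:
            mode = os.stat(path).st_mode
        except FileNotFoundError:
            continue
        if mode & (stat.S_IWGRP | stat.S_IWOTH):
            risky.append(path)
    return risky
```

Running such a check on a schedule, and alerting when it returns anything, turns a silent misconfiguration into an actionable finding before an attacker does.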
