CompTIA SY0-701 Security+ Exam Dumps and Practice Test Questions Set1 Q1-20


Q1. A security analyst is reviewing authentication logs and notices that several user accounts have attempted logins from multiple countries within minutes. The analyst suspects an automated attack that leverages previously exposed credentials. Which security concept best describes this type of threat?

A. Password spraying
B. Credential stuffing
C. Brute-force attack
D. Rainbow table attack

Answer: B. Credential stuffing

Explanation:

Password spraying is an attack technique where an attacker attempts a small set of commonly used passwords across many user accounts. Unlike a traditional brute-force attack that targets a single account with numerous password attempts, password spraying aims to avoid triggering account lockouts. While password spraying can generate multiple login attempts, it does not rely on previously stolen credentials but rather exploits human tendencies to use weak, common passwords. In the scenario described, multiple logins from different countries within minutes suggest a coordinated, automated attempt leveraging known credentials, which does not align with password spraying behavior.

Credential stuffing is correct. This attack exploits the widespread habit of password reuse across platforms. Attackers obtain large datasets of exposed credentials from prior breaches and attempt to authenticate on unrelated platforms using automation tools or botnets. The rapid succession of login attempts from geographically diverse locations strongly indicates an automated system is in use, attempting to find accounts where users have reused credentials. Unlike brute-force attacks, which attempt every possible password combination, credential stuffing targets known username-password pairs, increasing efficiency and likelihood of success. Organizations mitigate these attacks with multi-factor authentication, adaptive login risk assessments, IP reputation filtering, and monitoring for impossible travel scenarios—exactly what the analyst noticed. Credential stuffing campaigns have grown in prevalence due to frequent data breaches, the human tendency to reuse passwords, and the scalability provided by automation.

Brute-force attacks systematically attempt all possible combinations of characters for a single account until the correct password is discovered. These attacks are usually slower and resource-intensive. Brute-force attacks do not rely on previous breaches or exposed credentials. In this case, multiple accounts are accessed rapidly across multiple countries, which is inconsistent with the brute-force methodology.

Rainbow table attacks involve precomputed tables that reverse cryptographic hashes to recover plaintext passwords from stolen hash values. They are offline attacks performed against hash databases rather than live login systems. This method does not produce distributed login attempts across multiple countries and is unrelated to authentication logs.

Credential stuffing represents a significant threat because it combines the use of previously compromised credentials, human password reuse, and distributed automation to compromise accounts at scale. Detecting it involves behavioral analytics, anomaly detection, and defensive measures such as CAPTCHA enforcement and account lockout policies. Multi-factor authentication is particularly effective because it ensures that possession of credentials alone is insufficient for unauthorized access.
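The "impossible travel" signal mentioned above can be sketched in a few lines of Python. The log records and the ten-minute window below are illustrative assumptions, not part of the scenario:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical authentication log: (username, country, timestamp)
LOGIN_LOGS = [
    ("alice", "US", datetime(2024, 1, 1, 12, 0)),
    ("alice", "RU", datetime(2024, 1, 1, 12, 3)),   # different country, 3 minutes later
    ("bob",   "US", datetime(2024, 1, 1, 12, 0)),
    ("bob",   "US", datetime(2024, 1, 1, 12, 30)),
]

def flag_impossible_travel(logs, window=timedelta(minutes=10)):
    """Flag accounts with logins from different countries inside a short window."""
    by_user = defaultdict(list)
    for user, country, ts in logs:
        by_user[user].append((ts, country))
    flagged = set()
    for user, events in by_user.items():
        events.sort()
        for (t1, c1), (t2, c2) in zip(events, events[1:]):
            if c1 != c2 and (t2 - t1) <= window:
                flagged.add(user)
    return flagged
```

A production system would use geo-IP distance and travel speed rather than country codes, but the detection logic is the same shape.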

Q2. A company wants to improve the confidentiality of sensitive documents stored in its cloud environment. They decide to encrypt data with keys that are generated, managed, and stored by the customer exclusively. Which cloud security model does this approach represent?

A. Provider-managed encryption
B. Customer-managed encryption with provider key storage
C. Customer-managed encryption with customer key storage
D. Provider-managed encryption with customer key storage

Answer: C. Customer-managed encryption with customer key storage

Explanation:

Provider-managed encryption occurs when the cloud provider generates, stores, and manages encryption keys on behalf of the customer. While this provides basic encryption, the cloud provider technically has access to the keys. If the provider’s internal systems are compromised, or a malicious insider attempts to access the data, they could decrypt it. This approach does not fulfill the requirement for exclusive control over key management by the customer.

Customer-managed encryption with provider key storage allows the organization to control policies like key rotation and encryption rules but still relies on the provider to store the keys. Because the provider retains access to the keys, confidentiality is reduced relative to fully customer-controlled storage. Although this model improves control over encryption policies, it does not achieve the maximum level of cryptographic independence.

Customer-managed encryption with customer key storage is correct. In this model, the organization generates, rotates, stores, and destroys cryptographic keys entirely under its control. Keys may be stored on customer-managed hardware security modules or secure cloud-integrated key management solutions where the cloud provider cannot access the keys. This ensures that even if the cloud storage infrastructure is compromised, the attacker cannot decrypt sensitive documents. This approach provides the highest level of confidentiality and aligns with zero-trust principles, regulatory compliance (such as financial and healthcare data protection), and defense-in-depth strategies. It also allows the customer to enforce strict audit policies and key rotation schedules.

Provider-managed encryption with customer key storage is unrealistic because the provider cannot manage encryption keys that are not in their control. Key management inherently requires access to keys; therefore, this model is not practical.

This approach mitigates risks associated with unauthorized provider access and strengthens overall data confidentiality, making it the preferred model for organizations with sensitive data and regulatory obligations.

Q3. A security engineer detects that an attacker has exploited a web application through unsanitized user input. The attacker was able to retrieve confidential data from the backend database by manipulating query structures. Which attack best matches this scenario?

A. XML external entity attack
B. LDAP injection
C. SQL injection
D. Directory traversal

Answer: C. SQL injection

Explanation:

XML external entity (XXE) attacks exploit vulnerabilities in XML parsers to read arbitrary files, interact with internal systems, or exfiltrate sensitive data. While serious, XXE attacks do not manipulate SQL queries and therefore are not consistent with the described scenario.

LDAP injection targets Lightweight Directory Access Protocol services by injecting commands into directory queries. While LDAP injection can expose sensitive information within directory services, it does not affect SQL databases or allow retrieval of confidential database records as described.

SQL injection is correct. This attack occurs when input from users is concatenated directly into SQL statements without validation or parameterization. By injecting specially crafted input, attackers manipulate SQL queries to bypass authentication, retrieve sensitive data, or modify the database. The scenario specifies that confidential data was retrieved through query manipulation, which aligns precisely with SQL injection behavior. SQL injection is one of the most common web application vulnerabilities and remains a leading cause of data breaches. Preventive measures include parameterized queries, prepared statements, stored procedures, strict input validation, and the use of web application firewalls to filter malicious payloads.

Directory traversal attacks aim to access files outside authorized directories using techniques such as “../” sequences. While this could expose files on a server, it does not manipulate database queries and is therefore inconsistent with the attack described.

SQL injection attacks can have severe consequences, including database takeover, privilege escalation, and remote code execution. Ensuring proper coding practices, secure frameworks, and continuous application scanning is critical for defense.

Q4. During a security audit, an organization discovers that many employees are plugging personal USB drives into company computers. The security team wants to enforce a policy that blocks unauthorized removable media while still allowing approved company-issued encrypted drives. Which control best supports this requirement?

A. Network segmentation
B. Data loss prevention on endpoints
C. Application allowlisting
D. Group policy for removable media restrictions

Answer: D. Group policy for removable media restrictions

Explanation:

Network segmentation involves separating network traffic into isolated zones to protect sensitive data, but it does not provide control over local hardware device usage. Personal USB drives could still be connected to endpoints regardless of network design.

Data loss prevention (DLP) solutions monitor or restrict the movement of sensitive files. While DLP can alert administrators when data is copied to removable media, it does not inherently block unauthorized devices or enforce encryption requirements.

Application allowlisting controls which software can execute on endpoints. While this enhances security against malware, it does not restrict hardware devices, so personal USB drives could still introduce risk.

Group policy for removable media restrictions is correct. Administrators can define policies that permit only devices meeting specific criteria—such as vendor IDs, serial numbers, or enforced encryption. Unapproved drives are automatically blocked. This approach prevents malware introduction, unauthorized data transfers, and accidental leakage while allowing employees to use company-approved secure devices. Group policies provide centralized, manageable enforcement suitable for enterprise environments.

Using group policies ensures consistent security across the organization, reduces risk from human behavior, and supports regulatory compliance by restricting device usage and enforcing encryption.

Q5. A threat actor sends an email that appears to come from a well-known online marketplace, notifying the user that their recent order was canceled. The email contains a link to sign in and confirm the cancellation, but the URL redirects to a fake login page designed to harvest credentials. Which type of social engineering attack is being used?

A. Spear phishing
B. Whaling
C. Pharming
D. Phishing

Answer: D. Phishing

Explanation:

Spear phishing is a targeted form of phishing that uses personalized information about the victim to increase effectiveness. The email in the scenario is generic and targets potentially any user, so it is not considered spear phishing.

Whaling specifically targets high-profile executives or decision-makers. The described attack targets general users, not corporate leadership.

Pharming manipulates DNS or host files to redirect users to fraudulent websites automatically, often without requiring a link click. In this case, the attack relies on a deceptive email link, making pharming inconsistent with the scenario.

Phishing is correct. The attacker impersonates a trusted entity, fabricates urgency (cancellation notice), and uses a link to a fake website to collect credentials. This is a classic social engineering tactic used to exploit human trust and urgency. Users may inadvertently disclose sensitive information, which can then be used for account compromise, identity theft, or financial fraud.

Mitigation strategies include user awareness training, email filtering solutions, strong authentication methods (such as MFA), and URL verification practices. Phishing remains a top vector for cybersecurity breaches because it targets human behavior rather than technical vulnerabilities, making awareness critical.

Q6. A security analyst reviewing network logs notices that a previously unknown IoT device is communicating with an external IP address known for command-and-control activity. The traffic consists of periodic check-ins and small outbound packets. What type of malware behavior is most consistent with these observations?

A. Ransomware
B. Botnet participation
C. Trojan downloader
D. Worm propagation

Answer: B. Botnet participation

Explanation:

Ransomware is malware that encrypts files and demands payment, typically generating high-volume local disk activity. It does not typically maintain periodic outbound connections or small status-check communications with a remote command-and-control server. Therefore, the observed traffic pattern does not align with ransomware behavior.

Botnet participation is correct. A botnet is a network of compromised devices controlled remotely by an attacker, often through a command-and-control (C2) server. The IoT device’s periodic check-ins to a known malicious IP and small outbound packets are consistent with reporting status and receiving instructions, hallmarks of botnet behavior. Attackers exploit IoT devices because they often have weak authentication, outdated firmware, and minimal security protections. Botnets can be leveraged for distributed denial-of-service attacks, spamming, credential stuffing, or cryptocurrency mining. Detecting this behavior early is critical to prevent the device from being exploited for large-scale attacks. Organizations mitigate IoT botnet risks by segmenting IoT networks, enforcing strong authentication, keeping firmware updated, and monitoring for unusual outbound traffic patterns.

Trojan downloaders primarily retrieve additional malware payloads onto a system. While they do communicate externally, the traffic is typically bursty and associated with downloads, not small, periodic check-ins. The pattern described in the scenario does not match a downloader’s activity.

Worm propagation involves scanning networks, finding vulnerabilities, and spreading to other devices. This activity generates aggressive, high-volume traffic with repeated connection attempts across multiple hosts, unlike the low-bandwidth, periodic communication observed.

Botnet infections are increasingly common in IoT devices, given their limited security configurations, and early detection through network monitoring and threat intelligence feeds is crucial to prevent their misuse.
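The periodic check-in pattern described above can be detected by looking for unusually regular intervals between outbound connections. This is a simplified sketch; real beacon detection also accounts for deliberate jitter added by attackers. The timestamps and thresholds below are assumptions:

```python
from statistics import pstdev

def looks_like_beaconing(timestamps, max_jitter=2.0, min_intervals=4):
    """Flag near-constant gaps between connections (seconds), a hallmark of C2 beacons."""
    if len(timestamps) < min_intervals + 1:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(intervals) <= max_jitter

# Roughly every 300 seconds: consistent with automated check-ins
# Bursty, irregular gaps: consistent with normal human-driven traffic
```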

Q7. A cyber incident responder discovers that an attacker used stolen admin credentials to access a cloud environment. The attacker created several new virtual machines that are being used for cryptocurrency mining. What is the attacker primarily exploiting?

A. Resource exhaustion
B. Elasticity
C. Shadow IT
D. Misconfigured firewall

Answer: B. Elasticity

Explanation:

Resource exhaustion refers to using up preexisting system resources until performance degrades. While mining can consume resources, in this case, the attacker is leveraging the ability to create new resources, not deplete existing ones.

Elasticity is correct. Elasticity is a core feature of cloud computing, allowing on-demand provisioning of computing resources. By exploiting stolen credentials, the attacker can rapidly spin up multiple virtual machines to perform cryptocurrency mining. This takes advantage of the cloud’s flexible scalability to generate computing power for financial gain without initially owning the hardware. Elasticity can be misused if identity and access management controls, resource monitoring, and cost alerts are not properly implemented.

Shadow IT refers to unauthorized systems deployed by employees without IT approval. This is an internal operational issue, not an external attack exploiting cloud features.

Misconfigured firewalls could allow unwanted inbound or outbound traffic, but the creation of new virtual machines itself does not depend on firewall misconfigurations.

The misuse of cloud elasticity demonstrates why continuous monitoring, resource usage alerts, and strong identity access management are essential in cloud environments to prevent cryptojacking and other attacks.

Q8. An organization wants to ensure that its database backup files stored in the cloud remain unchanged and cannot be tampered with by any internal or external party. Which security concept should be prioritized?

A. Confidentiality
B. Availability
C. Integrity
D. Resilience

Answer: C. Integrity

Explanation:

Confidentiality ensures that unauthorized parties cannot access sensitive information. While important, confidentiality does not guarantee that data remains unmodified. Even encrypted data could be deleted, altered, or corrupted without violating confidentiality.

Availability ensures that data and systems remain accessible when needed, but it does not protect against unauthorized modifications or tampering.

Integrity is correct. Ensuring the integrity of database backups means protecting them from unauthorized modification or deletion. Cryptographic techniques such as hashing, digital signatures, and immutable storage systems can guarantee integrity. For example, generating a SHA-256 hash of each backup allows verification after retrieval to ensure the backup has not been altered. Integrity is especially critical for disaster recovery, forensic analysis, and regulatory compliance. Cloud providers often offer features like object lock or write-once-read-many (WORM) storage to enforce immutability, ensuring backups remain trustworthy over time.

Resilience refers to the system’s ability to maintain functionality in the face of disruptions. While this ensures service continuity, it does not protect the data from tampering.

Prioritizing integrity ensures that restored data is accurate and trustworthy, preventing data corruption, insider tampering, or accidental deletion from compromising business operations.

Q9. A penetration tester gains access to a Linux server with limited user privileges. They run a command to view scheduled tasks and discover a root-owned script that executes every five minutes, but the script has write permissions for all users. Which attack technique could the tester perform next?

A. Privilege escalation
B. Lateral movement
C. Credential harvesting
D. Pivoting

Answer: A. Privilege escalation

Explanation:

Privilege escalation is correct. When a root-owned script has insecure permissions (world-writable), a low-privileged user can modify it. When the cron job executes, it runs with root privileges, executing the attacker’s injected commands. This is a classic Linux privilege escalation vulnerability caused by improper file permissions. Mitigation involves enforcing strict ownership and permission policies for scripts and critical system files.

Lateral movement involves spreading to other systems in a network after initial compromise. While privilege escalation can facilitate lateral movement, the immediate opportunity here is local elevation of privileges.

Credential harvesting involves stealing passwords, tokens, or other secrets. In this scenario, the attacker’s objective is to gain higher privileges through file manipulation, not access credentials.

Pivoting is using a compromised system as a launch point for attacking other networked devices. While privilege escalation may enable pivoting later, it is not the direct technique for exploiting insecure script permissions.

Misconfigured cron jobs, world-writable scripts, and improper permission settings are frequent attack vectors for local privilege escalation. Security controls like file integrity monitoring, auditing, and strict permission enforcement help mitigate these risks.
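The world-writable condition the tester found can be checked programmatically. This sketch creates a throwaway file to demonstrate; in the scenario the file would be a root-owned script referenced from the crontab:

```python
import os
import stat
import tempfile

def is_world_writable(path: str) -> bool:
    """True if any local user can modify the file -- the flaw in the cron script."""
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

# Demonstration on a temporary file
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o777)   # insecure: world-writable, like the vulnerable script
```

A defender can sweep cron directories with a check like this; an attacker exploiting the flaw simply appends commands to the script and waits for the next five-minute run as root.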

Q10. A company notices that its website experiences intermittent outages whenever a particular competitor launches promotional events. Analysis reveals an unusually high amount of traffic from diverse global IP addresses, consuming bandwidth and overwhelming servers. Which attack is most likely occurring?

A. ARP poisoning
B. Distributed denial-of-service attack
C. MAC flooding
D. Smurf attack

Answer: B. Distributed denial-of-service attack

Explanation:

ARP poisoning occurs on local networks by sending fake ARP messages to map IP addresses to the wrong MAC addresses. It is limited to LANs and cannot explain internet-wide traffic spikes causing website outages.

Distributed denial-of-service (DDoS) attack is correct. DDoS attacks overwhelm targets using multiple compromised systems, often a botnet, from diverse locations. The global IP diversity, high traffic volume, and correlation with competitor events strongly suggest a coordinated attack to exhaust resources and disrupt availability. DDoS attacks can be volumetric, protocol-based, or application-layer and may involve HTTP floods, SYN floods, or amplification techniques. Mitigation includes traffic scrubbing, rate limiting, content delivery networks, and resilient cloud architectures.

MAC flooding targets switch tables in LANs, causing local network disruption. This attack cannot explain website outages originating from global traffic.

Smurf attacks are ICMP broadcast amplification attacks, historically less common, and typically involve fewer sources. They are not consistent with high-volume, globally distributed traffic patterns.

DDoS attacks are among the most common methods of disrupting web services, emphasizing the importance of mitigation strategies and infrastructure monitoring to ensure business continuity.
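Rate limiting, one of the mitigations listed above, is commonly implemented as a token bucket. This is a minimal per-client sketch; production systems apply it per source IP at the edge, often combined with scrubbing and CDNs:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`, then throttle to `rate` requests per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate                  # tokens replenished per second
        self.capacity = capacity
        self.tokens = float(capacity)     # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Legitimate users rarely exceed the bucket, while flood traffic exhausts it immediately and gets dropped.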

Q11. A new policy requires that employees working remotely must verify their identity using a fingerprint scan in addition to entering their password. Which multifactor authentication principle is being applied?

A. Something you have
B. Something you know
C. Something you are
D. Somewhere you are

Answer: C. Something you are

Explanation:

Something you have refers to physical items like smart cards, tokens, or mobile devices used for authentication. This does not include biometric traits.

Something you know refers to secrets like passwords or PINs, which is already in use in this scenario.

Something you are is correct. Fingerprint authentication is a biometric factor based on unique physical characteristics. By combining it with a password, the organization implements multifactor authentication (MFA), which significantly enhances security. MFA ensures that even if the password is compromised, the biometric factor prevents unauthorized access. Other examples include facial recognition, iris scans, or voice recognition.

Somewhere you are involves verifying location, such as IP geolocation or GPS coordinates, which is unrelated to fingerprint verification.

Biometric MFA is effective in protecting remote access because it requires an attacker to bypass both the knowledge-based factor (password) and the inherent physical trait, providing robust defense against account compromise.

Q12. A security operations center detects repeated attempts from an external IP address trying to identify open ports and running services on a public-facing server. What type of activity does this represent?

A. Exploitation
B. Enumeration
C. Vulnerability scanning
D. Port scanning

Answer: D. Port scanning

Explanation:

Exploitation involves actively taking advantage of discovered vulnerabilities to compromise a system. It occurs after reconnaissance and discovery, which is not indicated here.

Enumeration involves detailed information gathering such as usernames, directories, and shared resources, often after initial discovery of open services. It does not describe the repeated port probing activity.

Vulnerability scanning involves testing systems for weaknesses and misconfigurations, typically using automated tools. It generally follows port discovery and enumeration.

Port scanning is correct. The external IP is probing multiple ports to identify which services are running and potentially vulnerable. This reconnaissance technique helps attackers map the attack surface. Tools like Nmap or Masscan are commonly used for this activity. Detection is possible using intrusion detection/prevention systems (IDS/IPS) configured to alert on unusual scanning patterns.

Port scanning is a preliminary step in most cyberattacks, helping threat actors identify entry points for exploitation or further attacks.
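An IDS-style scan detector boils down to counting distinct destination ports per source IP within a window. The log events and the threshold below are illustrative:

```python
from collections import defaultdict

# Hypothetical firewall log: (source_ip, destination_port) connection attempts
EVENTS = [("203.0.113.9", p) for p in range(20, 40)] + [("198.51.100.7", 443)]

def detect_scanners(events, threshold=10):
    """Flag sources probing an unusually wide range of ports."""
    ports_by_ip = defaultdict(set)
    for ip, port in events:
        ports_by_ip[ip].add(port)
    return {ip for ip, ports in ports_by_ip.items() if len(ports) >= threshold}
```

A single client hitting one service port stays below the threshold, while a scanner sweeping twenty ports is flagged.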

Q13. An organization wants to prevent attackers from learning internal IP addresses by analyzing traffic sent from internal systems to external websites. Which security technique would best obscure this information?

A. Network address translation
B. VLAN segmentation
C. Access control lists
D. SSL/TLS encryption

Answer: A. Network address translation

Explanation:

Network address translation (NAT) is correct. NAT translates private internal IP addresses to a public-facing IP before sending traffic to external networks. By performing this translation, external entities only see the NAT device’s public IP and cannot discern the organization’s internal addressing scheme, helping to protect the network from reconnaissance and mapping attacks. NAT is widely used to conserve IPv4 addresses while adding a layer of security by hiding internal network structures. NAT also prevents direct access to internal hosts unless explicitly configured, reducing the attack surface for external threats.

VLAN segmentation is primarily used to separate internal network traffic into isolated logical segments, such as separating guest and corporate traffic. While VLANs provide internal network isolation, they do not obscure internal IP addresses from external observers. VLANs are effective for controlling broadcast domains, limiting lateral movement, and implementing internal firewall rules, but NAT is the specific technique that hides internal IP addresses externally.

Access control lists (ACLs) define which IP addresses or devices are permitted to access network resources. ACLs control traffic flow but do not modify or mask IP addresses; they simply allow or deny traffic. They cannot prevent attackers from discovering internal network information if the network is not translated or otherwise hidden.

SSL/TLS encryption secures the content of traffic but does not encrypt network headers, including source and destination IP addresses. Attackers can still see internal IP addresses at the network layer even if the content is encrypted. SSL/TLS is critical for confidentiality and integrity but does not provide anonymity or IP address obfuscation.

By implementing NAT, organizations can ensure that internal network topology is not easily inferred by attackers during reconnaissance, reducing exposure to targeted attacks, scanning, or exploitation attempts. NAT combined with firewalls and monitoring creates a layered defense strategy, providing both operational flexibility and enhanced security.
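The translation NAT performs can be sketched as a table mapping private source addresses to ports on a single public IP (port address translation). The addresses below are documentation-range examples, not real configuration:

```python
PUBLIC_IP = "198.51.100.1"   # the only address external observers ever see

class Nat:
    """Minimal source-NAT sketch: many private hosts share one public IP."""

    def __init__(self):
        self.table = {}          # (private_ip, private_port) -> public_port
        self.next_port = 40000

    def translate(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return (PUBLIC_IP, self.table[key])
```

Two different internal hosts using the same source port still emerge as the same public IP with distinct public ports, so the internal 10.x addressing is never exposed.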

Q14. A threat intelligence report states that a cybercriminal group is targeting financial institutions with malware that captures screenshots, keystrokes, and clipboard contents. What type of malware capability does this represent?

A. Ransomware
B. Spyware
C. Logic bomb
D. Worm

Answer: B. Spyware

Explanation:

Ransomware encrypts user files or system data and demands a ransom for decryption. While it is highly disruptive, ransomware does not passively monitor user activity or collect data silently.

Spyware is correct. Spyware is designed to secretly observe and collect information from a victim’s system without detection. In the scenario, the malware captures screenshots, keystrokes, and clipboard data, which is consistent with information-gathering behavior. Spyware often targets sensitive environments such as financial institutions to steal credentials, banking data, and confidential documents. It may operate silently, sending collected data back to a command-and-control server, and can be difficult to detect without endpoint protection or behavior monitoring tools. Advanced spyware may include features like keylogging, screen capture, clipboard monitoring, and even audio/video surveillance. Detection and mitigation require endpoint detection and response (EDR) solutions, strict application whitelisting, and user awareness.

Logic bombs are pieces of malicious code that trigger under specific conditions, such as a date or user action. While they can be destructive, they are not designed for continuous surveillance or data collection.

Worms are self-propagating malware that spread across networks, exploiting vulnerabilities to infect other hosts. Worms primarily focus on distribution rather than passive monitoring or data collection.

Spyware is particularly insidious because it compromises confidentiality without immediate obvious disruption, making it a high-risk threat for organizations that rely on sensitive financial and client data. Comprehensive security strategies against spyware include layered defenses, continuous monitoring, endpoint protection, and strict access controls.

Q15. A system administrator wants to ensure that logs stored on a remote syslog server cannot be modified by attackers even if the primary network is compromised. Which property should be prioritized?

A. Redundancy
B. Nonrepudiation
C. Immutability
D. Elasticity

Answer: C. Immutability

Explanation:

Redundancy improves availability by duplicating data across multiple systems or locations. While it ensures that logs remain accessible if one server fails, redundancy does not inherently prevent tampering. An attacker with access to the network or storage system could still alter redundant copies of logs if immutability mechanisms are not in place. Redundancy is primarily a fault tolerance and reliability measure, ensuring continuity of service rather than integrity of data. For instance, RAID configurations or backup copies are forms of redundancy, but without cryptographic protections or write-once mechanisms, they do not guarantee that logs remain untampered.

Nonrepudiation ensures accountability in the origin of data. It allows organizations to prove that a specific user or system performed an action. Digital signatures and cryptographic techniques support nonrepudiation. However, nonrepudiation does not prevent an attacker who gains administrative access from modifying or deleting log entries. While it can identify who performed an action, it cannot guarantee the immutability of data after creation.

Immutability is correct. Immutability refers to data being unchangeable once written. In the context of syslog servers, implementing immutable logs ensures that entries cannot be deleted or modified, even if an attacker gains access to the network or server. Techniques to enforce immutability include write-once-read-many (WORM) storage, append-only file systems, and cryptographic chaining of log entries (where each log record contains a hash of the previous record).
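The cryptographic chaining just described can be sketched directly: each record stores the hash of the record before it, so altering any entry breaks every hash downstream. The record format here is an illustrative simplification:

```python
import hashlib
import json

def append_log(chain: list, message: str) -> None:
    """Append a record whose hash covers the previous record (tamper-evident)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"msg": message, "prev": prev_hash}
    record = dict(body)
    record["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any edited or reordered record fails the check."""
    prev = "0" * 64
    for rec in chain:
        expected = hashlib.sha256(
            json.dumps({"msg": rec["msg"], "prev": prev}, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```

Note that chaining alone makes tampering detectable, not impossible; pairing it with WORM or append-only storage is what prevents modification outright.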

The importance of immutable logs becomes particularly evident during forensic investigations. For example, if a system is breached, security analysts must trust that collected logs accurately reflect events. Any tampering can obscure attacker activity, making it difficult to identify intrusion methods, timeline of events, or compromised assets. Immutable logging is also critical for compliance with regulations like PCI DSS, HIPAA, GDPR, and ISO 27001. These standards often mandate secure, tamper-proof logging mechanisms to ensure auditability and legal defensibility.

Elasticity refers to the ability to scale system resources up or down to handle fluctuating workloads. While critical for cloud infrastructure and operational efficiency, elasticity does not ensure data integrity. Scaling log storage or processing capacity is unrelated to preventing tampering.

In practice, implementing immutable logs can involve using cloud storage solutions that support object locking, enabling append-only policies, and regularly hashing logs to detect any tampering. Organizations may also configure alerts for any attempted modifications to log storage to enhance security monitoring. By prioritizing immutability, administrators can guarantee the trustworthiness of audit trails, support forensic investigations, and meet stringent compliance requirements.

Q16. A cybersecurity auditor discovers that a web server is running outdated software versions. The organization has no documented process to apply patches, and updates are performed irregularly. Which security weakness does this represent?

A. Poor vendor management
B. Lack of configuration baseline
C. Weak access control
D. Absence of a patch management program

Answer: D. Absence of a patch management program

Explanation:

Poor vendor management focuses on evaluating and monitoring third-party suppliers to ensure that their products and services meet security and operational requirements. While weak vendor oversight could indirectly affect system security, it does not explain why internal servers are running outdated software or lacking updates. Vendor management ensures software quality and contract compliance but is not directly concerned with patching processes.

Lack of a configuration baseline refers to the absence of standardized system settings, including security configurations. While it may result in inconsistent configurations across servers, it does not directly address whether patches are applied in a timely and systematic manner. A server could follow a baseline configuration but remain vulnerable if updates are irregular or ignored.

Weak access control is a vulnerability related to poorly managed permissions or authentication processes. While weak access control is a critical security weakness, it does not account for unpatched software vulnerabilities.

Absence of a patch management program is correct. Patch management is a structured process for identifying, acquiring, testing, and deploying software updates to systems. Organizations without a defined patch management process expose themselves to significant security risks because unpatched software can contain known vulnerabilities that attackers can exploit. For instance, vulnerabilities in web server software such as Apache, Nginx, or Microsoft IIS can be exploited for remote code execution, privilege escalation, or denial-of-service attacks.

Effective patch management programs include several components:

Inventory tracking: Maintaining a list of all hardware and software assets to identify which systems require updates.

Vulnerability assessment: Continuously monitoring for security advisories and known vulnerabilities in installed software.

Prioritization: Categorizing updates based on criticality and potential impact, often focusing first on high-severity vulnerabilities.

Testing: Deploying patches in controlled environments to detect potential compatibility issues.

Deployment: Applying updates systematically across all affected systems.

Verification: Ensuring that patches are successfully installed and that systems function correctly post-update.

Without patch management, web servers may remain susceptible to malware, ransomware, and exploitation attempts. Attackers commonly scan for outdated systems with known vulnerabilities to gain unauthorized access or disrupt operations. Organizations that ignore patching risk regulatory non-compliance, data breaches, and operational downtime.

Q17. A company implements controls so that users can access only the minimum level of privileges necessary to perform their duties. What principle is being enforced?

A. Separation of duties
B. Least privilege
C. Mandatory vacation
D. Role rotation

Answer: B. Least privilege

Explanation:

Separation of duties involves dividing responsibilities among multiple individuals to prevent errors or fraud. For example, the person approving a purchase order should not also be responsible for payment processing. While separation of duties improves accountability and risk management, it does not specifically minimize the privileges each user has on systems.

Least privilege is correct. The principle of least privilege ensures that users and processes operate with only the permissions necessary for their specific tasks. Limiting access helps reduce the potential damage caused by errors, malicious activity, or compromised accounts. For example, an employee in accounting may need access to payroll records but does not require administrative access to server configurations. Similarly, a service account running a web application should have only the permissions required for that application and nothing more.

Implementing least privilege involves:

Role-based access control (RBAC): Assigning permissions based on job roles.

Access reviews: Regular audits to ensure permissions remain aligned with responsibilities.

Time-bound access: Providing elevated privileges temporarily when needed and revoking them afterward.

Separation of administrative duties: Ensuring system administrators have role-specific permissions rather than blanket access.
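The role-based access control approach above can be sketched with a minimal default-deny permission check. The role names and permission strings are hypothetical:

```python
# Map each role to the minimum set of permissions it needs; nothing more.
ROLE_PERMISSIONS = {
    "accounting":  {"payroll:read", "payroll:write"},
    "web_service": {"app:read"},               # service account limited to its app
    "db_admin":    {"db:admin"},               # role-specific rights, not blanket access
}

def is_allowed(role, permission):
    """Default-deny: grant access only if the permission is in the role's minimal set."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("accounting", "payroll:read"))  # True: within the role's duties
print(is_allowed("accounting", "db:admin"))      # False: outside the minimal set
```

The default-deny behavior (an unknown role or permission returns False) is what keeps a compromised low-privilege account from reaching resources outside its job function.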

Least privilege is foundational in reducing attack surfaces. If an attacker compromises a low-privilege account, the potential damage is limited. It also helps prevent lateral movement within networks, a common tactic used in advanced persistent threats (APTs). By restricting unnecessary access, organizations can mitigate insider threats, accidental misuse, and escalation risks.

Mandatory vacation requires employees to take time off, often to detect anomalies in operations. While helpful for fraud detection, it is unrelated to limiting access rights.

Role rotation periodically changes responsibilities to prevent insider abuse. While useful for security, it does not enforce minimal access permissions.

Q18. A network administrator wants to isolate guest Wi-Fi traffic from internal corporate systems to prevent unauthorized access. Which technique should be used?

A. Network tap deployment
B. VLAN segmentation
C. Port security
D. Load balancing

Answer: B. VLAN segmentation

Explanation:

Network taps are passive monitoring devices used to observe network traffic without interrupting it. While they provide visibility into traffic, they do not control access or separate network segments.

VLAN segmentation is correct. Virtual Local Area Networks (VLANs) allow administrators to logically separate network traffic into isolated broadcast domains. By placing guest Wi-Fi traffic on a dedicated VLAN, network administrators can enforce access control policies that prevent guests from reaching internal corporate systems while still providing internet connectivity. This segmentation enhances security by limiting potential attack vectors and reducing the exposure of sensitive internal resources. VLANs are often paired with firewalls, access control lists (ACLs), and intrusion detection systems (IDS) to enforce policies effectively.
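The isolation policy described above can be modeled as a simple allow-list of permitted inter-VLAN flows, with everything else denied. The VLAN IDs and flow rules below are invented for illustration; on real hardware this policy would be expressed as ACLs or firewall rules between VLAN interfaces:

```python
# Allow-list of permitted flows: (source VLAN, destination). Default is deny.
ALLOWED_FLOWS = {
    (10, "internet"),   # guest VLAN 10 may reach the internet
    (20, "internet"),   # corporate VLAN 20 may reach the internet
    (20, 10),           # corporate hosts may manage guest infrastructure
}

def flow_permitted(src_vlan, dst):
    """Default-deny: traffic passes only if the flow is explicitly allowed."""
    return (src_vlan, dst) in ALLOWED_FLOWS

print(flow_permitted(10, "internet"))  # True: guests get internet access
print(flow_permitted(10, 20))          # False: guests cannot reach the corporate VLAN
```

The key property is the default-deny stance: guest traffic reaches the internet but never the internal VLAN, because no rule permits that flow.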

Port security restricts device access based on MAC addresses on individual switch ports. While helpful for controlling which devices connect to the network, it does not provide logical separation or isolation of traffic once connected.

Load balancing distributes network or server traffic to improve performance and redundancy. It does not isolate traffic or enhance security between guest and internal networks.

VLAN segmentation is a foundational network security technique. By isolating different types of traffic—such as corporate, guest, and IoT devices—organizations can minimize lateral movement, protect sensitive resources, and implement granular security policies.

Q19. A forensic investigator needs to ensure that a disk image collected from a suspect’s computer has not been altered during the investigation. Which technique is most appropriate?

A. Disk partitioning
B. Hashing
C. Defragmentation
D. Sanitization

Answer: B. Hashing

Explanation:

Disk partitioning divides a disk into separate logical segments. It is unrelated to verifying integrity, and repartitioning a disk containing evidence would alter its data.

Hashing is correct. A cryptographic hash function (e.g., SHA-256) generates a unique fingerprint for a dataset. By calculating the hash before and after imaging or analysis, forensic investigators can verify that the disk image remains unchanged. Even a single-bit modification produces a different hash value, making hash functions a reliable mechanism to ensure data integrity. Hashing is crucial for maintaining chain-of-custody, supporting the admissibility of evidence in legal proceedings, and detecting accidental or malicious alterations.
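Computing a SHA-256 fingerprint of an evidence file can be sketched as follows. The filename is hypothetical; reading in chunks means even multi-terabyte disk images never need to fit in memory:

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hash at acquisition and again after analysis; matching digests demonstrate
# the image was not altered in between (hypothetical filename).
# acquisition_hash = sha256_file("suspect_disk.img")
# post_analysis_hash = sha256_file("suspect_disk.img")
# assert acquisition_hash == post_analysis_hash
```

In practice the acquisition hash is recorded in the chain-of-custody documentation, and any later recomputation that fails to match it indicates the evidence can no longer be trusted.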

Defragmentation reorganizes file storage for improved performance, which modifies the disk and invalidates the forensic integrity of evidence.

Sanitization is the secure removal or destruction of data, directly conflicting with evidence preservation.

Forensic best practices also involve maintaining multiple copies of the disk image, storing them in secure, write-protected media, and documenting all handling steps. Hashing is often combined with digital signatures and secure storage to ensure that evidence remains verifiable and legally defensible.

Q20. A financial services company mandates continuous monitoring of its network to detect suspicious behavior such as unusual login times, unexpected data transfers, or abnormal application usage patterns. Which type of security solution best supports this requirement?

A. Static firewall
B. Intrusion prevention system
C. Behavior-based analytics
D. Packet filtering

Answer: C. Behavior-based analytics

Explanation:

Static firewalls operate on predefined rules, permitting or blocking traffic based on port numbers, IP addresses, or protocols. While they provide basic perimeter security, they cannot analyze user behavior or detect anomalies that deviate from normal patterns.

Intrusion prevention systems (IPS) rely primarily on signatures of known threats. Although they can prevent recognized attacks, IPS solutions may fail to detect new or sophisticated threats that do not match predefined patterns.

Behavior-based analytics is correct. This approach continuously monitors network and user activity, establishing a baseline of normal behavior. Deviations from this baseline—such as unusual login times, abnormal data transfers, or irregular application use—are flagged for investigation. These analytics often leverage machine learning and statistical models to adapt to evolving patterns while minimizing false positives. Behavior-based analytics is essential for detecting insider threats, compromised credentials, or subtle attacks that bypass traditional signature-based defenses. For financial institutions, early detection of anomalous behavior can prevent fraud, data exfiltration, and regulatory violations. By integrating alerts, dashboards, and automated response mechanisms, behavior-based analytics enables proactive security measures rather than reactive defense.
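The baseline-and-deviation idea can be sketched with simple statistics. The login hours and threshold below are invented for illustration; production systems use far richer models, but the principle of flagging departures from an established baseline is the same:

```python
import statistics

# Historical login hours for a user establish the behavioral baseline.
baseline_login_hours = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]

def is_anomalous(hour, history, threshold=3.0):
    """Flag a login whose hour deviates more than `threshold` standard
    deviations from the baseline mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero
    return abs(hour - mean) / stdev > threshold

print(is_anomalous(9, baseline_login_hours))   # False: within the normal pattern
print(is_anomalous(3, baseline_login_hours))   # True: a 3 a.m. login is flagged
```

Unlike a signature-based IPS, this check needs no prior knowledge of a specific attack; any sufficiently unusual behavior, whatever its cause, surfaces for investigation.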

Packet filtering inspects traffic headers to enforce access rules but does not assess deviations in user or system behavior, making it unsuitable for advanced threat detection.

Behavior-based analytics represents a shift from static, rule-based defenses to dynamic, intelligence-driven security. In financial services and other high-risk industries, these systems allow organizations to respond to threats in near real-time, significantly reducing the potential impact of security incidents.

 
