ECCouncil 312-50v13 Certified Ethical Hacker v13 Exam Dumps and Practice Test Questions Set 1 Q1-20
Question 1
Which protocol is primarily used for secure remote login and secure file transfers over an unsecured network?
A) FTP
B) SSH
C) Telnet
D) HTTP
Answer: B) SSH
Explanation:
FTP is a long-standing protocol created strictly for transferring files between computer systems, yet it lacks the foundational protections necessary for secure communication. When FTP transmits data, it sends usernames, passwords, and file contents in plaintext. Anyone positioned on the same network segment or using packet-sniffing tools can intercept this information, making FTP unsuitable for environments where confidentiality is required. While FTPS and SFTP were later introduced to add secure alternatives, the basic FTP protocol itself does not inherently offer encryption or protection from eavesdropping.
SSH, on the other hand, was intentionally engineered for secure remote access across untrusted networks. It uses strong cryptographic algorithms to encrypt all communication between client and server, protecting against eavesdropping, man-in-the-middle attacks, and session hijacking. In addition to secure terminal access, SSH also enables secure file transfers through SFTP and SCP. These features provide both confidentiality and integrity for administrative sessions and file movements. SSH also includes robust authentication methods such as passwords, public key authentication, certificates, and multifactor methods, which significantly strengthen access security. This combination of encryption and versatile secure functionality makes SSH the dominant choice for remote login across enterprise networks.
Telnet, though historically used for remote sessions, sends all credentials and commands in plaintext without any form of encryption. Modern security standards classify Telnet as unsafe for any sensitive use because attackers can easily intercept traffic and capture credentials. Even casual monitoring of network traffic is sufficient to reveal Telnet sessions, making it an outdated and insecure tool in modern infrastructures.
HTTP, commonly used for web browsing, is also insecure because it lacks built-in encryption. Any information transmitted using HTTP—such as form submissions, cookies, and session identifiers—can be intercepted and manipulated. HTTPS improves upon HTTP by adding encryption via SSL/TLS, but HTTPS still does not serve the primary purpose of remote login or secure system administration. Its function remains confined to secure web communication rather than administrative file transfer or shell access.
SSH stands out as the correct answer because it integrates secure remote login, encrypted file transfers, strong authentication, and resistance to network-based attacks—all essential requirements when performing administrative tasks across untrusted networks. Unlike FTP and Telnet, which lack encryption, or HTTP, which is not designed for remote shell access, SSH provides a comprehensive and secure solution for both remote management and safe file transfer operations.
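One practical piece of SSH's defense against man-in-the-middle attacks is host-key verification: clients compare the server's public key fingerprint against a known value before trusting the connection. The sketch below computes an OpenSSH-style SHA256 fingerprint (SHA256 of the raw key blob, base64-encoded with padding stripped) from the base64 field of a `known_hosts`-style line. The sample key material is hypothetical, used only to exercise the function.

```python
import base64
import hashlib

def openssh_fingerprint(key_blob_b64: str) -> str:
    """Compute an OpenSSH-style SHA256 fingerprint from the base64 key blob
    (the middle field of an authorized_keys or known_hosts entry)."""
    blob = base64.b64decode(key_blob_b64)
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints the base64 digest without '=' padding.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Hypothetical key material, just to demonstrate the computation:
sample_blob = base64.b64encode(b"ssh-ed25519 example key material").decode()
print(openssh_fingerprint(sample_blob))
```

Comparing this value against the fingerprint published out-of-band by the server's administrator is what lets SSH detect an impostor host before any credentials are sent.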
Question 2
What is the main purpose of a penetration test?
A) To patch vulnerabilities in a system
B) To assess security vulnerabilities by simulating an attack
C) To configure firewalls for maximum protection
D) To monitor network traffic continuously
Answer: B) To assess security vulnerabilities by simulating an attack
Explanation:
Patching vulnerabilities is an important defensive measure, but it takes place after weaknesses have already been identified. Applying patches is part of the remediation phase where administrators fix known issues, reduce attack surfaces, and strengthen system resilience. However, patching does not identify new vulnerabilities, nor does it simulate an attacker’s mindset or test the effectiveness of existing defenses. Thus, while valuable, patching is not the central purpose of a penetration test.
A penetration test aims to actively and ethically evaluate the security of systems, networks, or applications by imitating the strategies, tools, and methods used by malicious attackers. Testers attempt to exploit vulnerabilities in authentication, authorization, application logic, configuration, or network design. The objective is to discover and demonstrate real-world security weaknesses before adversaries can take advantage of them. Penetration testing provides organizations with a realistic assessment of how an attacker might breach systems, escalate privileges, access sensitive information, or disrupt services. This includes identifying not only technical vulnerabilities but also weaknesses in policies, procedures, or human behavior. Penetration testing is therefore a proactive security measure that strengthens an organization’s overall defensive posture.
Firewall configuration is a vital component of network defense, but it focuses on controlling traffic based on predefined rules rather than evaluating how an attacker may attempt to circumvent existing protections. Configuring firewalls requires knowledge of network flows, access requirements, and security policies but does not include intentionally exploiting vulnerabilities. Therefore, while firewalls reduce exposure to attacks, configuring them does not accomplish the investigative and evaluative goals of penetration testing.
Continuous network monitoring helps detect anomalies such as suspicious traffic patterns, unauthorized access attempts, and system irregularities. Monitoring tools identify signs of ongoing attacks or performance issues. Although this is essential for maintaining operational security, monitoring is reactive in nature. It observes activity but does not intentionally attempt to penetrate defenses or uncover unknown vulnerabilities.
Penetration testing stands apart because it provides a controlled and structured method to assess how well existing security controls can withstand an adversary. It reveals flaws that may go unnoticed in routine monitoring or configuration reviews. Ultimately, the primary purpose of a penetration test is to illuminate system weaknesses through active simulation of real-world attacks, making answer B the correct and most accurate description of its role.
Question 3
Which of the following is a passive reconnaissance technique?
A) Port scanning
B) Social engineering
C) WHOIS lookup
D) Vulnerability scanning
Answer: C) WHOIS lookup
Explanation:
Port scanning involves sending packets to various ports on a target system to discover which ports are open, closed, or filtered. Because this method requires direct interaction with the target device, the activity can generate detectable network traffic. Security devices such as intrusion detection systems often log or alert on port scanning attempts. As this approach actively probes the target environment, it is classified as active reconnaissance rather than passive information gathering.
Social engineering is another form of active interaction. It relies on direct communication with individuals to extract confidential information, often through deception. Whether conducted through phishing emails, voice calls, or face-to-face interaction, social engineering requires deliberate engagement with the target. The attacker attempts to manipulate human behavior to gain insights or access. Because these actions directly involve the target, they are considered active reconnaissance techniques.
WHOIS lookup, by contrast, is a passive method of gathering information. WHOIS records contain publicly available data regarding domain ownership, contact details, registration dates, DNS information, and administrative contacts. Querying WHOIS does not involve any communication with the target’s systems or infrastructure. Instead, the attacker retrieves information from public databases maintained by domain registrars. The target organization remains unaware of the lookup because no packets are sent to their network. WHOIS lookup therefore fits the definition of passive reconnaissance because it gathers useful intelligence without alerting or interacting with the target environment.
Vulnerability scanning involves automated tools that assess systems for known weaknesses, outdated software versions, misconfigurations, and missing patches. These tools send probes, requests, and traffic directly to the target’s infrastructure, attempting to identify issues. Because this process interacts heavily with the target and produces identifiable traffic patterns, vulnerability scanning is classified as active reconnaissance. Even though it is often used defensively by organizations, in the context of reconnaissance it is still considered active due to the level of interaction required.
WHOIS lookup is the correct choice because passive reconnaissance refers to gathering information without touching or interacting with the target’s systems. WHOIS queries rely on public data sources and do not create traffic that would be visible to the target organization. This differentiates it clearly from port scanning, social engineering, and vulnerability scanning, all of which involve active engagement. The subtlety and stealth of WHOIS lookup make it a classic example of passive reconnaissance used in both security assessments and adversarial intelligence gathering.
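The WHOIS protocol itself (RFC 3912) is extremely simple: the client opens a TCP connection to port 43 of a registry server, sends the query as one CRLF-terminated line, and reads until the server closes the connection. The sketch below separates query construction (testable offline) from the network call; the server name is IANA's public WHOIS service, and note that the query goes to the registry, not to the target organization, which is what makes the technique passive from the target's perspective.

```python
import socket

WHOIS_SERVER = "whois.iana.org"  # registry server, not the target's network
WHOIS_PORT = 43

def build_whois_query(domain: str) -> bytes:
    # RFC 3912: the query is a single line terminated by CRLF.
    return domain.strip().encode("idna") + b"\r\n"

def whois_lookup(domain: str, server: str = WHOIS_SERVER) -> str:
    """Send one query and read until the server closes the connection."""
    with socket.create_connection((server, WHOIS_PORT), timeout=10) as sock:
        sock.sendall(build_whois_query(domain))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")
```

Calling `whois_lookup("example.com")` would return the registry's record; no packet in that exchange ever reaches example.com's infrastructure.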
Question 4
Which type of malware is designed to replicate itself and spread without user intervention?
A) Trojan
B) Worm
C) Spyware
D) Rootkit
Answer: B) Worm
Explanation:
A Trojan is a type of malicious software disguised as a legitimate or desirable program. It relies on user interaction for installation or activation because it must be executed manually. Trojans do not possess autonomous replication capabilities. Instead, attackers use them to gain unauthorized access, install backdoors, steal data, or enable remote control. Although dangerous, Trojans cannot independently spread from one system to another without user assistance.
A worm, by contrast, is designed specifically for self-replication and independent distribution across networks. Worms exploit vulnerabilities, weak passwords, or configuration flaws to spread from host to host without any action from the user. They are capable of scanning networks, identifying vulnerable targets, and transmitting copies of themselves automatically. Worms can consume bandwidth, overload systems, and propagate rapidly across enterprise or global networks. Historical examples such as the SQL Slammer and WannaCry outbreaks highlight how quickly worms can spread and how disruptive they can be. The defining characteristic of a worm is its autonomous replication and movement across networked environments.
Spyware functions differently from both Trojans and worms. Its purpose is to covertly monitor user activity, gather sensitive information such as keystrokes or browsing habits, and transmit the data to attackers. Spyware often arrives bundled with malicious downloads or deceptive applications. While harmful to user privacy and security, spyware does not replicate itself or spread autonomously. Its primary goal is surveillance rather than propagation.
A rootkit is another distinct type of malware focused on concealment. It modifies system components so attackers can maintain hidden, persistent access. Rootkits can hide processes, files, registry entries, and network connections, making detection extremely difficult. However, a rootkit’s function is not to replicate or spread to other machines. It is used after gaining system access, not as a mechanism for spreading malware.
Worms are therefore the correct answer because they uniquely possess the ability to self-replicate and propagate across systems without human intervention. Their ability to spread rapidly and autonomously differentiates them from other forms of malware, which may cause damage, steal information, or maintain stealth but require some level of user interaction. This makes worms exceptionally dangerous in networked environments, as they can compromise large numbers of systems in a short period, creating widespread disruption and consuming significant resources.
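The scan-and-infect cycle that defines a worm can be modeled as a simple round-based simulation: each round, every infected host probes its reachable neighbors and compromises any that are vulnerable and not yet infected. The network layout and vulnerable set below are hypothetical, chosen only to show how quickly the infected count grows with no user action at any step.

```python
def simulate_worm(adjacency, vulnerable, patient_zero):
    """Round-based worm spread: each round, every infected host scans its
    neighbours and infects any vulnerable, not-yet-infected one. Returns
    the infected-host count after each round until the spread stops."""
    infected = {patient_zero}
    history = [len(infected)]
    while True:
        newly = {n for host in infected for n in adjacency.get(host, ())
                 if n in vulnerable and n not in infected}
        if not newly:
            break
        infected |= newly
        history.append(len(infected))
    return history

# Hypothetical 6-host network: host 0 can reach hosts 1 and 2, and so on.
net = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [], 5: []}
print(simulate_worm(net, vulnerable={0, 1, 2, 3, 5}, patient_zero=0))  # → [1, 3, 4, 5]
```

Host 4 is reachable but not vulnerable, so the worm passes it by; everything else it can reach and exploit falls within three rounds, with no human in the loop.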
Question 5
Which scanning technique attempts to identify the operating system of a target without sending intrusive packets?
A) SYN scan
B) Ping sweep
C) TCP/IP stack fingerprinting
D) UDP scan
Answer: C) TCP/IP stack fingerprinting
Explanation:
A SYN scan is a type of TCP scan that sends SYN packets to determine whether ports are open, closed, or filtered. Although it is stealthier than a full connect scan, it still sends packets directly to the target and generates active traffic. Security systems may detect such scans, especially if they occur rapidly or from suspicious hosts. Because SYN scans require direct interaction with the target, they are categorized as active reconnaissance techniques and are not suitable for environments requiring minimal visibility.
A ping sweep sends ICMP Echo Request packets to multiple hosts to determine which devices are online. This technique is also active because it requires transmission of packets toward the target network. Firewalls often block ICMP traffic, and intrusion detection systems can log ping sweeps. While helpful for discovering live hosts, ping sweeps are easily detectable and do not provide insights into operating system characteristics beyond host availability.
TCP/IP stack fingerprinting is the technique used to infer the operating system of a target by examining subtle variances in how different systems implement the TCP/IP protocol stack. Every operating system handles certain packet fields, flags, timing behaviors, and error responses in slightly different ways. These differences make it possible to identify the OS based on how it responds to crafted probes or sometimes by observing responses to normal network traffic. Some fingerprinting techniques are fully passive, meaning analysts simply observe traffic without sending packets themselves. This makes them much harder to detect and much less intrusive. Even active fingerprinting uses carefully crafted, low-impact packets, making the technique more subtle than typical scanning methods. Because it can operate with minimal interaction and often passively, TCP/IP fingerprinting is considered less intrusive and more suited to stealthy reconnaissance efforts.
A UDP scan attempts to identify open UDP ports by sending UDP packets to the target and analyzing responses or lack thereof. This method generates considerable traffic and can be resource-intensive. Security devices often flag UDP scans due to their anomalous patterns. As such, UDP scanning is classified as an active and detectable activity that does not emphasize stealth.
TCP/IP stack fingerprinting is the correct answer because it identifies an operating system by analyzing how it handles network communication without requiring intrusive packet transmission. Its passive or semi-passive nature reduces the likelihood of detection and allows analysts to gather valuable system information without triggering alerts.
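A flavor of the passive technique can be shown with the classic initial-TTL heuristic: operating systems start the IP TTL at a characteristic default (commonly 64 for Linux/Unix, 128 for Windows, 255 for many network devices), and each router hop decrements it by one, so an observed TTL can be mapped back to the nearest default band. Real fingerprinting tools such as p0f and Nmap combine many more fields (TCP window size, options ordering, DF bit); this single-field version is only an illustration.

```python
def guess_os_from_ttl(observed_ttl: int) -> str:
    """Rough passive OS guess from the TTL seen in captured packets.
    Maps the observed value up to the nearest common initial-TTL band,
    absorbing the decrements added by in-path routers."""
    if observed_ttl <= 64:
        return "Linux/Unix-like (initial TTL 64)"
    if observed_ttl <= 128:
        return "Windows (initial TTL 128)"
    return "Router/legacy Unix (initial TTL 255)"

print(guess_os_from_ttl(57))    # e.g. a Linux host 7 hops away
print(guess_os_from_ttl(116))   # e.g. a Windows host 12 hops away
```

Because the analyst only reads TTLs from traffic already on the wire, the target never sees a probe, which is exactly the stealth property the question highlights.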
Question 6
Which social engineering method relies on creating fear or urgency to manipulate a victim into taking action?
A) Phishing
B) Pretexting
C) Baiting
D) Scareware
Answer: D) Scareware
Explanation:
Phishing is a broad social engineering method that uses deceptive communication, often through email or messaging, to trick victims into revealing sensitive information such as credentials or financial data. While phishing campaigns may include urgency or alarming statements, not all phishing messages rely on fear tactics. Many are crafted to appear legitimate, focusing on trust and familiarity rather than explicit fear. Phishing primarily exploits the user’s belief that the message originates from a trusted source rather than creating panic.
Pretexting involves fabricating a convincing scenario to obtain information from a target. The attacker creates a false identity or storyline, such as posing as a bank representative, technical support agent, or authority figure. Pretexting relies heavily on building rapport, credibility, and persuasion rather than inducing fear. While some pretexts may involve subtle pressure, the technique does not fundamentally depend on fear or urgency as its main mechanism.
Baiting leverages curiosity or desire by offering a reward or enticing object, such as free downloads, gifts, or infected USB drives. The victim engages because the offer seems beneficial. Unlike scare tactics, baiting plays on greed, interest, or curiosity, making it distinct from methods rooted in psychological intimidation or fear-based manipulation.
Scareware is specifically designed to manipulate users by invoking fear, panic, or urgent emotional responses. It typically appears as alarming pop-ups or warnings claiming the system is infected with malware, that personal data is compromised, or that immediate action is required. These fraudulent alerts pressure victims to download malicious software, pay for fake antivirus tools, or grant remote access. Scareware thrives on exaggeration, urgency, and psychological coercion. Its entire strategy revolves around frightening the victim into making rash decisions.
Scareware stands apart from other social engineering techniques because fear is its primary tool. Whereas phishing depends on deception, pretexting on relationship-building, and baiting on temptation, scareware uses psychological intimidation to bypass rational decision-making. The attacker creates a false emergency to compel immediate, unthinking action.
Therefore, scareware is the correct answer, as it uniquely relies on generating fear and urgency to manipulate victims. It leverages emotional distress to push individuals toward actions they would otherwise avoid, making it one of the most psychologically coercive forms of social engineering.
Question 7
Which type of attack involves intercepting and potentially altering communication between two parties without their knowledge?
A) Brute force
B) Man-in-the-middle
C) Dictionary
D) SQL injection
Answer: B) Man-in-the-middle
Explanation:
A brute force attack is a method used to gain unauthorized access by systematically attempting every possible password or key combination until the correct one is found. This technique focuses exclusively on password cracking and does not involve intercepting communication between systems or altering data transmissions. While brute force attacks can compromise accounts, they operate independently of communication channels and do not place an attacker between two parties.
A man-in-the-middle attack (MITM) uniquely involves positioning the attacker between two communicating parties without their knowledge. The attacker intercepts, monitors, and may modify data exchanged between the parties. This can occur through various techniques, such as ARP spoofing, DNS poisoning, rogue Wi-Fi hotspots, or SSL stripping. In a MITM scenario, both parties believe they are communicating directly with each other, but the attacker secretly relays and manipulates the conversation. This allows the attacker to steal credentials, inject malicious content, alter messages, or observe sensitive information. The defining characteristic of a MITM attack is the interception and potential alteration of communication flows.
A dictionary attack, like brute force, targets password authentication by attempting a list of common words or frequently used passwords. This technique reduces the number of attempts required compared to brute force but still does not involve communication interception. It is strictly an authentication-focused attack, not a communication-based attack.
SQL injection involves manipulating input fields in an application to alter backend database queries. Attackers exploit insufficient input validation to retrieve, modify, or delete database content. SQL injection compromises data integrity and confidentiality but does not involve inserting oneself into communication between two parties.
Given these differences, the man-in-the-middle attack is the correct answer because it specifically targets communication channels. Its purpose is to covertly intercept and potentially manipulate data being transmitted. Unlike brute force and dictionary attacks, which focus on password guessing, or SQL injection, which targets databases, MITM attacks exploit communication pathways. They create an invisible presence between sender and receiver, allowing attackers to observe or alter the data being exchanged. This makes MITM attacks particularly dangerous, especially in environments relying on unsecured or poorly configured networks.
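The core of a MITM attack, abstracted away from any specific technique like ARP spoofing, is that both parties talk through a channel the attacker controls. The toy model below makes that concrete: the sender and receiver are unchanged, only the channel differs, and the attacker both records the original message and rewrites it in transit. All names and message contents are hypothetical.

```python
captured = []  # the attacker's copy of intercepted traffic

def send_via_channel(message: str, channel) -> str:
    """Alice sends through a channel; Bob receives whatever comes out.
    The channel is where a man-in-the-middle can sit."""
    return channel(message)

def honest_channel(msg: str) -> str:
    return msg  # direct delivery, nothing observed or changed

def mitm_channel(msg: str) -> str:
    captured.append(msg)  # attacker silently observes...
    # ...and rewrites the payment details before relaying.
    return msg.replace("account 1111", "account 9999")

received = send_via_channel("pay $500 to account 1111", mitm_channel)
print(received)   # → "pay $500 to account 9999"
print(captured)   # the attacker's record of the original message
```

Neither endpoint notices anything: Alice sent a well-formed message and Bob received one, which is why MITM defenses rely on cryptographic integrity checks (as in SSH or TLS) rather than on either party spotting the tampering.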
Question 8
What is the primary purpose of a honeypot in cybersecurity?
A) To serve as a decoy system to attract attackers
B) To block incoming malicious traffic
C) To patch vulnerabilities automatically
D) To encrypt sensitive data
Answer: A) To serve as a decoy system to attract attackers
Explanation:
A honeypot functions as a deliberately vulnerable or enticing system designed to attract attackers. Its primary purpose is not to directly protect operational systems but to serve as a controlled environment where malicious activities can be observed, monitored, and analyzed. Honeypots help security teams learn about attack methods, gather intelligence, and detect threats that may otherwise go unnoticed. They may simulate servers, networks, or data repositories, providing attackers with a believable target. By studying interactions within a honeypot, organizations can identify intrusion attempts, malware behavior, and emerging threat trends without risking production systems.
Blocking incoming malicious traffic is typically handled by firewalls or intrusion prevention systems. These security devices evaluate incoming packets, enforce rules, and prevent unauthorized access. They focus on prevention, whereas a honeypot focuses on observation and deception. Honeypots do not filter or block traffic across the network; instead, they invite certain types of traffic to study attacker behavior.
Automatic patching is part of system maintenance and vulnerability management. Tools that perform patching aim to reduce security risks by keeping systems up to date. This function is unrelated to deception or attacker research. Although patching helps secure systems, it does not involve engaging with attackers or monitoring their activities.
Encrypting sensitive data is a core security practice ensuring confidentiality during storage and transmission. Encryption prevents unauthorized parties from reading information even if intercepted. However, its purpose is protective rather than investigative. Encryption does not provide insight into attacker behavior, nor does it function as a decoy or monitoring tool.
Honeypots differ fundamentally from these methods because they focus on engagement rather than prevention or protection. They intentionally appear vulnerable, encouraging attackers to reveal their tactics. This allows security researchers to detect intrusion attempts early, analyze attack vectors, understand adversary behavior, and improve defensive strategies. Some honeypots also serve as early warning systems, alerting defenders when attackers begin probing internal or external systems. By wasting attacker time and resources, honeypots can also reduce the likelihood that real systems will be compromised.
Therefore, a honeypot’s primary purpose is deception—acting as a decoy system to lure attackers. It plays a strategic role in threat intelligence gathering and security research rather than performing traditional defensive tasks such as blocking traffic, patching systems, or encrypting data.
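The deception-plus-logging idea can be sketched in a few lines: listen on a decoy port, record every connection attempt, and present a believable service banner. Production honeypots (Cowrie, for example) emulate entire services and capture full attacker sessions; this minimal version, with a made-up FTP banner, only records who knocked.

```python
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Listen on a decoy port, log each connection attempt, send a fake
    banner, and close. Returns (actual_port, log); the log is filled in
    by a background thread as attackers connect."""
    log = []
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))
    server.listen()
    actual_port = server.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = server.accept()
            log.append(addr)                           # record who probed us
            conn.sendall(b"220 FTP server ready\r\n")  # believable decoy banner
            conn.close()
        server.close()

    threading.Thread(target=serve, daemon=True).start()
    return actual_port, log
```

Since no legitimate user has any reason to touch the decoy port, every entry in `log` is by definition suspicious, which is what makes honeypots effective early-warning sensors.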
Question 9
Which tool is commonly used for network packet capturing and analysis?
A) Wireshark
B) Nmap
C) Metasploit
D) John the Ripper
Answer: A) Wireshark
Explanation:
Wireshark is widely recognized as a powerful tool for capturing and analyzing network traffic. It operates by monitoring data packets flowing across a network interface in real time or analyzing packet capture files saved earlier. Wireshark allows security analysts to inspect packet headers, payloads, protocol behavior, timing, and communication flows. Its rich interface and protocol decoding capabilities make it indispensable for troubleshooting, forensic investigations, intrusion analysis, and network research. It supports hundreds of protocols and provides deep visibility into network communication patterns.
Nmap is a network scanning and discovery tool used to identify open ports, determine running services, and perform network mapping. Although Nmap provides valuable reconnaissance information, it does not capture or analyze full packet contents. Its function focuses on active probing rather than passive traffic monitoring. While Nmap can detect hosts, services, vulnerabilities, and network configurations, it cannot decode packet structures or provide diagnostic details at the packet level as Wireshark does.
Metasploit is an exploitation framework used by ethical hackers and attackers to launch, test, and manage exploits. It includes modules for payload delivery, privilege escalation, and vulnerability exploitation. Although Metasploit can interact with networks and generate traffic, its purpose is to compromise systems, not capture or analyze packets. It lacks the functionality necessary for deep packet inspection or protocol analysis.
John the Ripper is a password-cracking tool used to test password strength. It operates by attempting to recover plaintext passwords from hashes using dictionary attacks, brute force, and hybrid techniques. This function is entirely unrelated to network traffic analysis. John the Ripper does not interact with or monitor network communication, nor does it capture packets.
Wireshark stands apart because it focuses on packet-level visibility. It allows investigators to view each step of a communication exchange, identifying anomalies such as malformed packets, suspicious traffic flows, unauthorized protocols, or evidence of attacks. Analysts can reconstruct sessions, follow TCP streams, and examine encryption negotiation processes. These capabilities make Wireshark critical for understanding network performance issues, investigating breaches, validating security controls, and ensuring proper protocol behavior.
Thus, Wireshark is the correct answer because it is specifically designed for network packet capturing and detailed analysis. It provides the deep insight necessary to understand how data moves across a network, something tools like Nmap, Metasploit, and John the Ripper are not built to accomplish.
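Capture files like those Wireshark and tcpdump produce are just structured binary data, and a feel for that structure helps when analyzing them. The sketch below parses the 24-byte global header of a classic `.pcap` file (the newer pcapng format differs); the header we feed it is synthetic, built the way a little-endian capture tool would write it.

```python
import struct

def parse_pcap_header(data: bytes) -> dict:
    """Parse the 24-byte global header of a classic .pcap file."""
    magic = struct.unpack("<I", data[:4])[0]
    if magic == 0xA1B2C3D4:
        endian = "<"            # written on a little-endian machine
    elif magic == 0xD4C3B2A1:
        endian = ">"            # byte-swapped capture
    else:
        raise ValueError("not a classic pcap file")
    major, minor, tz, sigfigs, snaplen, linktype = struct.unpack(
        endian + "HHiIII", data[4:24])
    return {"version": (major, minor), "snaplen": snaplen,
            "linktype": linktype}   # linktype 1 = Ethernet

# Build a synthetic header (little-endian, version 2.4, Ethernet):
hdr = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(parse_pcap_header(hdr))  # → {'version': (2, 4), 'snaplen': 65535, 'linktype': 1}
```

After this header come the per-packet records (timestamp, captured length, raw bytes), which is the level of detail Wireshark decodes into its protocol tree.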
Question 10
Which type of firewall filters traffic based on application-layer data rather than just IP addresses and ports?
A) Packet-filtering firewall
B) Stateful firewall
C) Proxy firewall
D) Circuit-level gateway
Answer: C) Proxy firewall
Explanation:
A packet-filtering firewall evaluates traffic based on basic criteria such as source and destination IP addresses, ports, and protocols. Operating primarily at layers 3 and 4 of the OSI model, these firewalls make forwarding decisions without inspecting the content of the packets. As a result, they cannot analyze or enforce rules based on application-layer information such as commands, payloads, or user behaviors. While effective for simple network control, packet-filtering firewalls lack the sophistication necessary to understand higher-level interactions.
A stateful firewall maintains awareness of active connections by tracking session states such as SYN, SYN-ACK, and ACK exchanges. These firewalls make more informed decisions by monitoring the context of a connection in addition to IP and port information. While this improves security compared to stateless filtering, stateful firewalls still primarily operate at lower layers and do not deeply inspect application-layer content. They can identify whether traffic belongs to an existing session but not whether the application data is malicious or inappropriate.
A proxy firewall, however, operates at the application layer. It acts as an intermediary between clients and servers, processing requests on behalf of the client. Because it terminates client connections and re-establishes separate connections with servers, a proxy firewall can inspect and filter application-level data such as HTTP requests, FTP commands, or SMTP messages. This allows administrators to enforce policies based on specific application behaviors, block malicious content, inspect payloads, prevent data leaks, and provide granular control. Proxy firewalls can also conceal internal network details by masking client identities, offering an additional layer of security.
A circuit-level gateway focuses on validating the legitimacy of session establishment processes, usually by analyzing TCP handshakes. Although it ensures that communication follows proper session protocols, it does not inspect application-layer data or enforce content-based rules. Circuit-level gateways operate mainly at layer 5 of the OSI model, making them limited in their ability to detect application-specific malicious behavior.
Proxy firewalls are therefore the correct answer because they provide the highest level of inspection among the listed options. They analyze not only network and transport-layer information but also the actual content of requests and responses. This gives organizations the ability to enforce fine-grained security policies and detect threats embedded within application data. By functioning at the application layer, proxy firewalls offer enhanced visibility and greater control compared to packet-filtering, stateful, or circuit-level firewalls.
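The kind of decision a proxy firewall makes can be illustrated with a rule check over an HTTP request: the method and path it examines exist only at the application layer, so a layer-3/4 packet filter could never enforce these rules. The blocked paths and methods below are a hypothetical policy, not defaults of any real product.

```python
def inspect_http_request(raw: str,
                         blocked_paths=("/admin",),
                         blocked_methods=("TRACE",)) -> bool:
    """Decide, as an application-layer proxy could, whether to forward an
    HTTP request. The rules read the method and path from the request
    line, which is data invisible to IP/port-based packet filters."""
    request_line = raw.splitlines()[0]
    method, path, _version = request_line.split(" ", 2)
    if method.upper() in blocked_methods:
        return False                      # forbidden HTTP method
    if any(path.startswith(p) for p in blocked_paths):
        return False                      # restricted application path
    return True

print(inspect_http_request("GET /index.html HTTP/1.1\r\nHost: example.com\r\n"))  # → True
print(inspect_http_request("GET /admin/users HTTP/1.1\r\nHost: example.com\r\n")) # → False
```

A real proxy would apply checks like this after terminating the client's connection, then open its own connection to the server only for requests that pass, which is also what lets it mask internal client identities.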
Question 11
Which attack exploits vulnerabilities in web application input validation to execute arbitrary commands?
A) Cross-site scripting
B) SQL injection
C) ARP spoofing
D) Directory traversal
Answer: B) SQL injection
Explanation:
Cross-site scripting involves injecting malicious scripts into web pages viewed by users, and while it also abuses weak input handling, its impact targets the client side rather than executing commands on the server or database. It enables attackers to run JavaScript in a victim’s browser but does not directly interact with backend database logic.
SQL injection, however, specifically abuses improper input validation on server-side database queries. By injecting crafted SQL statements into input fields, an attacker can manipulate how the application interacts with its database, potentially retrieving unauthorized information, deleting data, modifying records, or even executing system-level commands depending on configuration.
ARP spoofing is completely unrelated to web application logic; it is a local network attack where an attacker forges ARP messages to intercept traffic. Directory traversal attempts to access restricted files by navigating the file system using sequences like “../”, but it does not inherently allow execution of arbitrary SQL commands. SQL injection is correct because it directly targets the database layer by exploiting insufficient sanitization of user input, giving attackers the ability to alter backend queries and control the database beyond intended boundaries.
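The mechanics of SQL injection, and the parameterized-query fix, can be shown end to end with an in-memory SQLite database. The table and the injection payload are illustrative; the key contrast is that string concatenation lets input rewrite the query, while a parameterized query treats the same input strictly as data.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def vulnerable_lookup(name: str):
    # String concatenation: the input becomes part of the SQL itself.
    return db.execute(
        "SELECT name, secret FROM users WHERE name = '" + name + "'").fetchall()

def safe_lookup(name: str):
    # Parameterized query: the driver binds the input as a value only.
    return db.execute(
        "SELECT name, secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(vulnerable_lookup(payload))  # the OR clause dumps every row
print(safe_lookup(payload))        # → [] — no user literally named that
```

The payload turns the vulnerable query's WHERE clause into `name = 'x' OR '1'='1'`, which is always true, so the entire table leaks; the safe version simply searches for a user with that odd literal name and finds none.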
Question 12
Which of the following best describes a zero-day vulnerability?
A) A known vulnerability with available patches
B) A vulnerability disclosed after exploitation
C) An unknown vulnerability exploited before vendor patching
D) A vulnerability in outdated software only
Answer: C) An unknown vulnerability exploited before vendor patching
Explanation:
A known vulnerability with available patches is the opposite of a zero-day because it has already been discovered, documented, and addressed by the vendor. Even if organizations fail to apply the patch, the flaw is no longer considered zero-day because mitigation exists.
A vulnerability disclosed after exploitation may describe certain incidents involving zero-days, but disclosure timing alone is not the defining characteristic; the critical element is whether the vendor had zero days to prepare a fix before exploitation occurred.
The claim that zero-day flaws exist only in outdated software is also inaccurate, because they can occur in fully updated, modern systems just as easily as in older ones. Zero-day vulnerabilities are especially dangerous because they are unknown to both the vendor and the security community when attackers first exploit them.
Since defensive tools cannot yet detect or block the threat effectively, attackers can cause significant damage before detection or patch development. Therefore, the correct description is an unknown vulnerability exploited before a vendor can release a fix.
Question 13
Which type of attack is aimed at overwhelming a system’s resources to make it unavailable to legitimate users?
A) Phishing
B) DDoS
C) MITM
D) SQL injection
Answer: B) DDoS
Explanation:
Phishing is a social engineering method that tries to trick users into revealing sensitive information such as passwords, financial data, or other credentials. It does not involve overloading a system or targeting system resources; instead, it focuses on deceiving individuals. A Distributed Denial of Service attack, on the other hand, is specifically designed to flood a target system, service, or network with excessive traffic from multiple compromised devices, overwhelming resources like bandwidth, CPU, memory, or connection limits.
This overload causes legitimate users to experience slow responses or complete unavailability of the targeted service. A man-in-the-middle attack intercepts communication between two parties to manipulate or eavesdrop on transmitted data, but it does not necessarily degrade system resources. SQL injection involves inserting malicious SQL queries into input fields to manipulate databases, retrieve unauthorized information, or corrupt data; it is not intended to consume system resources as its primary purpose.
DDoS is correct because its core objective is to compromise service availability by generating an enormous load from numerous distributed sources, making it impossible for legitimate users to access the service.
Question 14
What is the primary difference between symmetric and asymmetric encryption?
A) Symmetric uses two keys, asymmetric uses one key
B) Symmetric uses one key, asymmetric uses two keys
C) Symmetric encrypts faster, asymmetric encrypts slower
D) Symmetric is only for data at rest, asymmetric only for data in transit
Answer: B) Symmetric uses one key, asymmetric uses two keys
Explanation:
The statement that symmetric encryption uses two keys is incorrect; symmetric systems rely on a single shared key for both encryption and decryption. This means that both sender and receiver must securely exchange and store the same key.
Symmetric encryption is indeed typically faster, but speed is not the primary defining feature. Asymmetric encryption uses a mathematically linked key pair composed of a public key for encryption and a private key for decryption, providing stronger support for secure key exchange and authentication. Claiming that symmetric is only for data at rest and asymmetric only for transit is misleading because both types can protect data in various contexts depending on system design.
Symmetric algorithms often secure bulk data because of efficiency, while asymmetric methods secure key exchange or authentication. The correct answer is that symmetric encryption uses one shared key while asymmetric encryption relies on a two-key system, which is the fundamental structural difference.
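To make the one-key versus two-key distinction concrete, here is a deliberately toy Python sketch: a trivial XOR cipher stands in for a symmetric algorithm, and the classic small-number textbook RSA example (p=61, q=53) stands in for an asymmetric one. Neither is remotely secure; they only illustrate the key structure.

```python
# Toy illustration only: NOT real cryptography.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Symmetric: the SAME shared key both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"k3y"
ciphertext = xor_cipher(b"hello", shared_key)
assert xor_cipher(ciphertext, shared_key) == b"hello"  # one key reverses it

# Textbook RSA with tiny numbers: two mathematically linked keys.
# (n, e) is the public key, d is the private exponent.
n, e, d = 3233, 17, 2753       # n = 61 * 53
message = 65                   # must be < n in this toy scheme
rsa_ct = pow(message, e, n)    # anyone can encrypt with the public key
assert pow(rsa_ct, d, n) == message  # only the private key decrypts
```

In practice the two are combined exactly as the paragraph above describes: an asymmetric exchange protects a randomly generated symmetric session key, and the fast symmetric cipher then protects the bulk data.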
Question 15
Which type of wireless attack involves an attacker setting up a rogue access point to intercept client traffic?
A) War driving
B) Evil twin
C) Jamming
D) Bluejacking
Answer: B) Evil twin
Explanation:
War driving refers to scanning or mapping wireless networks while moving around with a device, often to discover unsecured or poorly configured networks. It does not involve intercepting user traffic directly. An evil twin attack is when an attacker creates a rogue access point that mimics a legitimate wireless network by broadcasting the same SSID and sometimes similar signal characteristics.
Unsuspecting users connect to this fake network, believing it to be legitimate, allowing the attacker to intercept communications, capture credentials, or monitor data. Jamming involves transmitting noise or interference to disrupt wireless signals, preventing devices from communicating, but it does not capture or read traffic. Bluejacking allows sending unsolicited messages over Bluetooth but does not create a rogue network or intercept client traffic.
Evil twin is correct because it specifically involves deploying a deceptive access point to lure users so their communications can be intercepted.
Question 16
What is the main purpose of using a VPN in a network environment?
A) To hide malware activity
B) To encrypt traffic and provide secure remote access
C) To scan for vulnerabilities
D) To increase network speed
Answer: B) To encrypt traffic and provide secure remote access
Explanation:
VPNs are not designed for hiding malware. Although malware authors might abuse VPNs, that is not their intended function. A VPN’s primary purpose is to secure communications through encryption, ensuring data confidentiality and integrity while traveling between a user and a remote network. It allows employees or remote users to access internal network resources as if they were directly connected on-site.
Vulnerability scanning involves identifying security weaknesses in systems or networks, which is an entirely different security function from VPN operation. VPNs do not increase network speed; in fact, encryption overhead may slightly slow performance because additional processing is required. The correct answer is that VPNs secure network traffic and enable safe remote connectivity.
Question 17
Which method can be used to prevent ARP spoofing attacks?
A) Using a VPN
B) Static ARP entries
C) Port scanning
D) SQL parameterization
Answer: B) Static ARP entries
Explanation:
Using a VPN encrypts traffic between endpoints but does not address the underlying ARP protocol vulnerabilities on local networks. ARP spoofing occurs when an attacker sends forged ARP messages to associate their MAC address with another device’s IP address, enabling traffic interception.
Static ARP entries prevent this by manually binding specific IP addresses to their correct MAC addresses, making it impossible for malicious ARP replies to alter these mappings. Port scanning identifies open ports on systems but has no role in preventing ARP poisoning.
SQL parameterization defends against SQL injection, a database vulnerability, not a network-layer attack. Thus, static ARP entries are correct because they protect against fraudulent ARP updates.
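On Linux, a static entry can be pinned with standard tooling. The IP address, MAC address, and interface name below are placeholder assumptions, and the commands require root privileges:

```shell
# Pin the gateway's IP-to-MAC mapping so forged ARP replies cannot change it.

# Modern Linux (iproute2):
ip neigh replace 192.168.1.1 lladdr 00:11:22:33:44:55 nud permanent dev eth0

# Legacy net-tools syntax:
arp -s 192.168.1.1 00:11:22:33:44:55

# Verify the entry is marked PERMANENT:
ip neigh show 192.168.1.1
```

Note that static entries must be maintained by hand on every host, which is why they are usually reserved for critical mappings such as the default gateway.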
Question 18
What is the key difference between a black-box and white-box penetration test?
A) Black-box has full knowledge, white-box has no knowledge
B) Black-box simulates an external attacker, white-box simulates an insider
C) Black-box is easier to perform than white-box
D) Black-box only scans networks, white-box only scans applications
Answer: B) Black-box simulates an external attacker, white-box simulates an insider
Explanation:
Black-box testing is often misunderstood, particularly when it comes to the level of information available to the tester. A common misconception is that black-box testers possess full or substantial system knowledge; in reality, the opposite is true. Black-box testing is intentionally designed so that the tester begins with little to no internal information about the target environment. This approach simulates the perspective of an external attacker who must discover everything from scratch, including system architecture, exposed services, application behavior, and potential entry points. Working under these constraints requires the tester to rely heavily on reconnaissance, enumeration, and dynamic analysis techniques to uncover vulnerabilities without any privileged insight.
White-box testing, by contrast, represents a scenario where the tester has unrestricted access to internal details. This includes system architecture documents, source code, configuration files, network diagrams, and often even administrative credentials. Because of this visibility, white-box testers can evaluate the security posture at a much deeper level, identifying subtle logic flaws, insecure coding practices, design weaknesses, and misconfigurations that may not be detectable through external probing alone. This insider viewpoint reflects what a knowledgeable internal threat actor or a highly trusted assessment team could achieve when granted full transparency.
Whether black-box testing is easier or harder largely depends on the scope and objectives of the assessment. While it may seem simpler on the surface due to the absence of complex internal documentation, the lack of information often makes the process more challenging. Testers must spend more time discovering the environment and may miss vulnerabilities hidden behind authentication layers or internal components inaccessible from the outside. Both black-box and white-box approaches can cover network infrastructure, applications, and various system layers—what differentiates them is not the technical scope but the visibility and knowledge granted to the tester. Ultimately, black-box mimics an external attacker’s viewpoint, while white-box corresponds to an insider with full system understanding.
Question 19
Which scanning technique is least likely to trigger an intrusion detection system?
A) SYN scan
B) UDP scan
C) Stealth scan
D) Full connect scan
Answer: C) Stealth scan
Explanation:
SYN scans are "half-open": the scanner sends a SYN, and when a SYN-ACK comes back it tears the session down with a RST instead of completing the handshake. While less noisy than full-connect scans, they still often trigger IDS alerts.
UDP scans send packets to many ports and can generate significant detectable traffic, making them more noticeable.
Full-connect scans establish complete TCP connections and are the most easily detected because they generate normal but numerous connection logs.
Stealth scans, however, attempt to avoid detection by using unusual flag combinations (such as FIN, NULL, or Xmas scans), fragmented packets, or minimized traffic patterns to bypass standard IDS signatures. This makes stealth scans least likely to trigger an alert.
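For contrast, the noisiest of these techniques, the full connect scan, can be sketched with nothing but Python's standard socket module, because it simply completes the normal handshake the operating system provides; SYN and stealth scans require raw packet crafting (for example via nmap or scapy) instead. The host and ports below are placeholders.

```python
import socket

def tcp_connect_scan(host: str, port: int, timeout: float = 1.0) -> bool:
    """Full TCP connect scan: completes the three-way handshake, so the
    connection appears in the target's logs (the most detectable method)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means the port accepted

if __name__ == "__main__":
    for port in (22, 80, 443):
        state = "open" if tcp_connect_scan("127.0.0.1", port) else "closed/filtered"
        print(f"127.0.0.1:{port} {state}")
```

Because every probe is a fully established connection, each one generates an accept event the target can log, which is exactly why the explanation above ranks full-connect scans as the easiest to detect.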
Question 20
Which cryptographic technique ensures both data confidentiality and authentication?
A) Hashing
B) Symmetric encryption only
C) Digital signatures with encryption
D) Base64 encoding
Answer: C) Digital signatures with encryption
Explanation:
Hashing verifies data integrity but provides neither confidentiality nor sender authentication: a hash does not encrypt the data, and anyone can compute a hash, so it proves nothing about who produced the message.
Symmetric encryption provides confidentiality but does not inherently prove the identity of the sender since anyone with the shared key could have encrypted the data.
Base64 encoding is not a security method at all; it simply transforms data into a different textual format for transmission compatibility.
Digital signatures authenticate the sender by validating ownership of a private key, while combining digital signatures with encryption ensures confidentiality by preventing unauthorized access.
Therefore, using digital signatures along with encryption ensures both confidentiality and authentication.
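A deliberately toy sketch of this sign-plus-encrypt idea follows, reusing the small textbook RSA numbers (p=61, q=53) for the signature and a trivial XOR stand-in for the encryption step. None of this is secure; it only shows how the two mechanisms divide the work: encryption hides the message, the signature proves its origin.

```python
import hashlib

# Toy sketch only: NOT real cryptography.
n, e, d = 3233, 17, 2753   # signer's key pair: d private, (n, e) public

def sign(message: bytes) -> int:
    # Hash the message, then apply the PRIVATE exponent: only the
    # key holder can produce this value.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone can check the signature with the PUBLIC key (n, e).
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Confidentiality stand-in: same shared key encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = b"transfer approved"
sig = sign(msg)                        # authentication
ct = xor_encrypt(msg, b"sessionkey")   # confidentiality
recovered = xor_encrypt(ct, b"sessionkey")
assert recovered == msg and verify(recovered, sig)
```

A real system would use a vetted scheme (for example RSA-PSS or Ed25519 signatures alongside AES-GCM), but the division of labor is the same: one operation conceals the data, the other binds it to the sender's private key.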