
350-201 Cisco Practice Test Questions and Exam Dumps
Question 1
A NetFlow collector is receiving records from multiple branch routers. Which two fields in a NetFlow v9/v10/IPFIX record are the most helpful in quickly spotting beaconing command-and-control (C2) traffic? (Choose 2.)
A. Destination port
B. Flow end timestamp
C. Flow duration (delta time)
D. TCP flags
E. Source autonomous-system number
Correct Answers: B and C
Explanation:
Beaconing C2 (command-and-control) traffic refers to regular, repetitive communication between an infected host and a remote malicious server. It often occurs at scheduled intervals and can use any protocol—though it commonly uses HTTP, HTTPS, or DNS. Identifying this kind of traffic using NetFlow data relies on observing patterns in timing and behavior rather than just protocol or port usage.
The flow end timestamp (B) and flow duration (delta time) (C) are two of the most important fields for detecting beaconing activity because they help reveal repetitive communication patterns, which are a hallmark of beaconing behavior.
Flow end timestamp helps identify when communications are occurring. By analyzing multiple flow records, a security analyst can look for connections from a single host that end at consistent intervals (e.g., every 10 minutes). This regular timing strongly suggests automated or scripted communication like beaconing.
Flow duration (delta time) measures how long the connection lasted. If you observe flows that have similar durations over time, it may suggest that a bot is connecting to its C2 server in a scripted, predictable way. These durations are typically short and very consistent, which distinguishes them from more random or interactive user-generated traffic.
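As a minimal sketch of this idea, the following Python snippet scores a set of flow records for timing regularity. The record layout and thresholds are invented for illustration; real collectors export IPFIX elements such as flowEndMilliseconds rather than the simplified tuples used here.

```python
from statistics import mean, pstdev

# Hypothetical flow records: (src, dst, flow_end_epoch_seconds, duration_seconds)
flows = [
    ("10.0.0.5", "203.0.113.9", 1000, 1.2),
    ("10.0.0.5", "203.0.113.9", 1600, 1.1),
    ("10.0.0.5", "203.0.113.9", 2200, 1.3),
    ("10.0.0.5", "203.0.113.9", 2800, 1.2),
]

def beacon_score(records):
    """Low jitter in inter-flow gaps and durations suggests beaconing."""
    ends = sorted(r[2] for r in records)
    gaps = [b - a for a, b in zip(ends, ends[1:])]
    durations = [r[3] for r in records]
    if len(gaps) < 2:
        return 1.0, 1.0  # too few flows to judge regularity
    # Coefficient of variation: values near zero mean highly regular behavior.
    gap_cv = pstdev(gaps) / mean(gaps)
    dur_cv = pstdev(durations) / mean(durations)
    return gap_cv, dur_cv

gap_cv, dur_cv = beacon_score(flows)
if gap_cv < 0.1 and dur_cv < 0.2:  # thresholds are illustrative, not canonical
    print("Possible beaconing: regular intervals and uniform durations")
```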
Let’s consider the other options:
A. Destination port: While it might be useful in some investigations (especially if the traffic is on a known malicious port), modern C2 traffic often uses common ports like 443 or 80 to blend in. Therefore, it’s less helpful for uniquely identifying beaconing traffic.
D. TCP flags: These provide insight into the state and control of TCP sessions (e.g., SYN, ACK, FIN), which is useful in understanding how the connections are being handled. However, they do not help much with detecting regular timing patterns typical of beaconing traffic.
E. Source autonomous-system number (ASN): This identifies the origin of the traffic from a routing standpoint, but it doesn't assist with spotting repetitive behavior or timing patterns associated with beaconing.
Question 2
An analyst is developing a script that communicates with Cisco SecureX Orchestration through its REST API.
Which two HTTP methods should be used to either replace an entire resource or partially update an existing one? (Choose 2.)
A. GET
B. PUT
C. POST
D. DELETE
E. PATCH
Correct Answers: B and E
Explanation:
REST (Representational State Transfer) APIs rely on specific HTTP methods to perform actions on resources. When interacting with APIs such as Cisco SecureX Orchestration, it's essential to understand which method performs what type of operation—especially when the goal is to replace or partially update a resource.
PUT (B) is used to replace an entire resource with a new version. When an analyst sends a PUT request to a specific URI, they are typically sending the full representation of the resource that will completely overwrite the existing one at that location. If the resource doesn’t exist, some APIs may create it; others might return an error, depending on implementation.
PATCH (E) is specifically designed to partially modify an existing resource. Unlike PUT, which expects the full updated content, PATCH can be used to send only the changes—making it more efficient when only a small update is needed. For example, if you just want to update the status field in a JSON object, you would use PATCH.
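For instance, the difference looks like this with Python's requests library. The endpoint, resource fields, and token below are placeholders for illustration, not actual SecureX Orchestration paths:

```python
import requests

URL = "https://example.com/api/v1/workflows/42"  # hypothetical resource URI
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

# PUT replaces the entire resource: every field must be supplied,
# because the stored representation is completely overwritten.
full_resource = {"name": "Block IP", "status": "enabled", "schedule": "hourly"}
requests.put(URL, json=full_resource, headers=HEADERS)

# PATCH sends only the fields that change; everything else is left intact.
requests.patch(URL, json={"status": "disabled"}, headers=HEADERS)
```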
Now let’s briefly look at why the other options are incorrect in this context:
GET (A) is used to retrieve information. It is a read-only method and does not change any data. It is completely inappropriate for making modifications to resources.
POST (C) is typically used to create a new resource, especially when the resource URI is determined by the server rather than the client. While POST can sometimes be used for update operations in non-standard APIs, it is not the conventional or correct method for updating existing resources.
DELETE (D) is used to remove a resource entirely. It is destructive and has nothing to do with updating or replacing existing content.
Question 3
In Cisco Secure Endpoint (formerly known as AMP for Endpoints), which two dispositions will stop a newly observed file from running on the endpoint until a cloud-based decision is made? (Choose 2.)
A. Quarantined
B. Blocked
C. Malicious
D. Soft blocked (unknown prevalence)
E. Triaged
Correct Answers: B and D
Explanation:
Cisco Secure Endpoint provides multiple disposition categories to control file execution behavior based on reputation and real-time threat intelligence. The two dispositions that specifically control how a file behaves while awaiting a verdict from the cloud are Blocked and Soft blocked (unknown prevalence).
Blocked (B):
This is a prevention-based disposition. If a file is pre-classified as suspicious or if policy dictates that files with unknown or questionable reputations must be halted, it is blocked from executing. Even if there’s no immediate cloud verdict, the file is held back to prevent any possible malicious activity until a verdict is received.
Soft blocked (unknown prevalence) (D):
This is a special interim status that applies to files with unknown reputations. When a file is first seen and doesn't have a prior reputation score or prevalence data, Cisco Secure Endpoint can "soft block" it, meaning the file is temporarily blocked until a reputation verdict is obtained from the cloud. This ensures that zero-day or new files are not allowed to execute without a basic level of cloud analysis first. This process is key in early threat detection and reducing the endpoint’s exposure.
Now, let’s address the incorrect options:
Quarantined (A):
This disposition is assigned after a file has been identified as malicious or suspicious and usually after it has already been executed or detected. Quarantining is a response action, not a preemptive block based on cloud reputation.
Malicious (C):
This disposition is used for files already identified as bad either by the local engine or from the cloud. A file marked as Malicious will be blocked or removed automatically. However, this only applies after the verdict has been made, not while waiting for one.
Triaged (E):
This is more of an incident management or alert processing classification, used internally for organizing threat response workflows. It does not prevent file execution nor is it tied to real-time blocking behavior at the endpoint level.
Question 4
A SOC manager is selecting metrics for a dashboard that measures analyst efficiency.
Which two metrics directly reflect analyst performance rather than tool or process latency? (Choose 2.)
A. Mean time to detect (MTTD)
B. Mean time to respond (MTTR)
C. Number of escalations rejected by tier 2
D. Percentage of alerts auto-closed by correlation rules
E. Average EDR sensor dwell time
Correct Answers: B and C
Explanation:
When measuring the efficiency of security analysts specifically, the focus must be on metrics that reflect human performance, decision-making quality, and response speed, rather than automated system actions or tool-based delays. Among the options provided, Mean Time to Respond (MTTR) and Number of escalations rejected by tier 2 are the two metrics that directly reflect analyst behavior and performance.
Mean Time to Respond (MTTR) (B):
This metric captures the average amount of time analysts take to respond to an incident after detection. It measures how quickly human analysts are triaging, investigating, and acting on alerts. Lower MTTR typically indicates better analyst performance and faster response times. While MTTR can sometimes be affected by tool usability or automation, it is primarily a human-driven metric in the context of incident response.
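As a simple worked example, MTTR can be computed directly from incident ticket timestamps. The field layout below is a placeholder for whatever a case-management system actually exports:

```python
from datetime import datetime

# Hypothetical ticket export: (detected_at, responded_at) in ISO 8601
incidents = [
    ("2024-05-01T09:00", "2024-05-01T10:30"),
    ("2024-05-02T14:00", "2024-05-02T14:45"),
]

response_hours = [
    (datetime.fromisoformat(done) - datetime.fromisoformat(seen)).total_seconds() / 3600
    for seen, done in incidents
]
mttr_hours = sum(response_hours) / len(response_hours)
print(f"MTTR: {mttr_hours:.2f} hours")  # 1.12 hours for this sample
```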
Number of escalations rejected by tier 2 (C):
This metric assesses the quality of work performed by lower-tier analysts (usually Tier 1). If escalations are frequently rejected by Tier 2 analysts, it implies poor triage, incorrect prioritization, or a lack of understanding by Tier 1. Conversely, a low rejection rate suggests that analysts are effectively identifying and escalating genuine threats. This is a clear measure of individual analyst decision-making and skill.
Let’s now break down the incorrect choices:
Mean Time to Detect (MTTD) (A):
MTTD measures how long it takes to identify a threat from the time it first enters the environment. While partially influenced by analyst vigilance, MTTD is heavily dependent on detection tools, alert configurations, and logging availability. It does not solely reflect analyst efficiency.
Percentage of alerts auto-closed by correlation rules (D):
This metric reflects the effectiveness of automation and correlation logic, not analyst behavior. Auto-closed alerts are handled without human intervention, so this is more a measure of system tuning and rule efficiency.
Average EDR sensor dwell time (E):
Dwell time refers to the amount of time an attacker or malware is present on a system before being detected or removed. This metric is influenced by sensor deployment, tool efficacy, and alerting mechanisms, not directly by the actions or speed of a human analyst.
Question 5
A playbook for investigating a suspected phishing email lists several tasks. Which three tasks should occur before detonating any attachment in a sandbox? (Choose 3.)
A. Pull full SMTP headers and analyze received hops
B. Extract URLs and run them through threat-intelligence reputation checks
C. Hash the attachment and query VT/Threat Grid for existing verdicts
D. Execute the attachment in a disposable VM to watch network behavior
E. Search the organization’s secure email gateway logs for matching message-IDs
Correct Answers: A, B, and C
Explanation:
In a phishing investigation playbook, analysts must take a cautious and structured approach to avoid unnecessary risk and optimize response efforts. Before detonating any attachment in a sandbox, several steps should be completed to gather intelligence and potentially reach a verdict without needing to run the file, which is riskier and more resource-intensive.
Pull full SMTP headers and analyze received hops (A):
This is a standard early step in email investigations. By pulling the full SMTP headers, analysts can examine the origin of the email, the path it took through mail servers, and potential spoofing indicators. Analyzing received headers helps determine if the email came from a legitimate or suspicious source and can assist in identifying infrastructure used in phishing campaigns. This information often leads to faster remediation and helps verify the legitimacy of the sender.
Extract URLs and run them through threat-intelligence reputation checks (B):
This step provides low-risk intelligence gathering. By extracting any URLs embedded in the email body or attachments and running them against threat intelligence feeds, analysts can often determine whether the email is linked to known phishing infrastructure or malware hosting. If a URL is found to be malicious, this might negate the need to analyze the attachment further, or it may help prioritize the investigation.
Hash the attachment and query VT/Threat Grid for existing verdicts (C):
This is a critical step in identifying known threats without executing anything. Hashing the attachment (e.g., with SHA256) and checking against platforms like VirusTotal or Cisco Threat Grid can reveal whether the file has already been analyzed and flagged as malicious. If the file is known and classified, sandboxing may be unnecessary or can be deprioritized.
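A minimal sketch of this hash-then-lookup step is shown below, using the public VirusTotal v3 files endpoint. The filename and API key are placeholders, and the exact response fields should be verified against VirusTotal's current documentation:

```python
import hashlib
import requests

def sha256_of(path):
    """Hash a file in chunks so large attachments don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of("suspicious_invoice.pdf")  # hypothetical attachment

# VirusTotal v3 lookup by hash; a 404 means the file has never been seen.
resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{digest}",
    headers={"x-apikey": "<YOUR_VT_API_KEY>"},  # placeholder credential
)
if resp.status_code == 200:
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"Known file - malicious verdicts: {stats['malicious']}")
else:
    print("No existing verdict; sandbox detonation may be warranted")
```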
Let’s now address the incorrect options:
Execute the attachment in a disposable VM to watch network behavior (D):
This is detonation, the very action that the question is asking to delay. Sandboxing or detonating an attachment is a later-stage action, typically performed after initial triage and reputation checks. It’s more resource-intensive and risk-prone, so it’s only done when necessary.
Search the organization’s secure email gateway logs for matching message-IDs (E):
While this is a valid task in a broader investigation to determine the message’s distribution, it is not a prerequisite for sandboxing the attachment itself. This step is more about understanding email campaign scope than deciding whether to detonate an attachment.
In summary, a well-designed investigation playbook emphasizes safe and informative steps first, especially passive analysis such as header inspection and URL and hash reputation checks. Only when these steps fail to yield conclusive results should analysts proceed with riskier tasks like sandboxing, ensuring operational safety and efficiency.
Question 6
Which two tasks are commonly performed during an initial triage phase of an incident response? (Choose 2.)
A. Identifying affected systems
B. Collecting and preserving logs
C. Correlating alerts with threat intelligence
D. Closing false positive incidents
E. Containment and eradication of malicious actors
Correct Answers: A and D
Explanation:
The initial triage phase of incident response focuses on quickly assessing and classifying an event to determine whether it qualifies as a true security incident. This phase is crucial in helping security analysts prioritize response actions and avoid wasting resources on false positives. It typically includes reviewing alerts, identifying affected systems, and ruling out benign or irrelevant activities.
Identifying affected systems (A):
One of the primary goals during triage is to determine which systems, users, or network segments may be involved or affected by the suspected incident. This information helps gauge the scope and severity of the potential threat. Understanding which assets are involved is necessary for planning containment and further investigation steps, making this a standard triage activity.
Closing false positive incidents (D):
Initial triage also involves evaluating the validity of alerts. Many security alerts are false positives due to overly sensitive detection rules or benign behavior being misclassified. Efficient triage includes the elimination of these false positives, allowing responders to focus on real threats. Dismissing false positives early improves operational efficiency and reduces alert fatigue.
Now, let’s analyze the incorrect options:
Collecting and preserving logs (B):
While collecting logs is critical to the investigation process, it typically occurs after the initial triage, once an alert has been deemed a valid incident. Log preservation ensures that forensic evidence is not lost, but it is part of evidence collection and investigation, not the triage phase.
Correlating alerts with threat intelligence (C):
Correlating alerts with external or internal threat intelligence is a valuable investigative step, often performed during incident analysis, not in the triage stage. It helps enrich the alert context, identify threat actors or campaigns, and understand tactics or indicators. While sometimes used during triage for prioritization, this task is generally secondary to identifying and verifying the alert.
Containment and eradication of malicious actors (E):
These actions are part of the incident response process after triage, once an incident has been confirmed. Containment aims to isolate the threat, and eradication removes it from affected systems. These are post-verification tasks, meaning they occur only after triage confirms a real incident is underway.
Question 7
Which two configuration changes can be made to improve the effectiveness of Cisco Umbrella's DNS-layer security? (Choose 2.)
A. Configure DNS policies to block domains based on threat category
B. Enable URL filtering to block specific domains by category
C. Adjust the DNS caching TTL to improve performance
D. Modify IP addresses for DNS resolvers in the Umbrella dashboard
E. Set up geolocation-based filtering to block regions with higher risk
Correct Answers: A and E
Explanation:
Cisco Umbrella's DNS-layer security works by analyzing and enforcing security on DNS requests before a connection to a potentially malicious site is even made. To increase the effectiveness of this protection, administrators can implement configurations that directly reduce exposure to threats and tailor protection based on known risk indicators. The most effective strategies at the DNS layer involve blocking known risky domains and applying geo-based risk filtering.
Configure DNS policies to block domains based on threat category (A):
This is one of the core features of Umbrella and a critical step in strengthening DNS-layer defense. Administrators can configure policies to automatically block domains associated with threat categories such as malware, phishing, command-and-control (C2), newly seen domains, and more. Since Umbrella continuously updates its threat intelligence, this automated categorization ensures that the system reacts quickly to emerging threats.
Set up geolocation-based filtering to block regions with higher risk (E):
Some organizations may choose to restrict or block DNS requests originating from or targeting certain geographical regions that are known sources of cyberattacks or that the organization has no business relationship with. Umbrella allows the creation of DNS policies that filter based on geolocation, helping to reduce exposure to regionally prevalent threats or suspicious infrastructure.
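As a quick illustrative check of DNS-layer enforcement, you can point a query at Umbrella's public resolvers and observe how a domain resolves once a blocking policy applies. This sketch uses dnspython and Umbrella's well-known test domain; blocked domains typically resolve to an Umbrella block-page address rather than their real one:

```python
import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver()
resolver.nameservers = ["208.67.222.222", "208.67.220.220"]  # Umbrella resolvers

# internetbadguys.com is Umbrella's harmless domain for testing policy blocks.
answer = resolver.resolve("internetbadguys.com", "A")
for rr in answer:
    print(rr.address)
# If the identity's policy blocks the category, the answer points at an
# Umbrella block page rather than the domain's actual address.
```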
Now, examining the incorrect options:
Enable URL filtering to block specific domains by category (B):
While this feature is important, URL filtering operates at the HTTP/HTTPS layer, not the DNS layer. Cisco Umbrella can integrate both layers, but the question specifically asks about improving DNS-layer security. Therefore, this option is outside the scope of DNS-specific improvements.
Adjust the DNS caching TTL to improve performance (C):
TTL settings relate to performance tuning, not directly to security. Although shorter TTLs might allow more frequent security updates from the DNS provider, this change does not actively enhance DNS-layer protection or detection.
Modify IP addresses for DNS resolvers in the Umbrella dashboard (D):
In the Umbrella dashboard, you configure identities and policies, but changing resolver IPs is not a security-enhancing setting. The DNS resolvers (e.g., 208.67.222.222) provided by Cisco Umbrella are pre-set and optimized. Modifying these may actually lead to less protection or configuration errors, so it’s not a recommended or effective way to boost DNS-layer security.
Question 8
What are two benefits of using Cisco Secure Network Analytics (formerly Stealthwatch) to detect threats within the network? (Choose 2.)
A. Ability to analyze encrypted traffic without decryption
B. Provides behavioral analysis for detecting anomalies in network traffic
C. Allows for fully automated incident response actions
D. Automatically updates threat intelligence feeds from Cisco Talos
E. Detects lateral movement based on machine learning algorithms
Correct Answers: A and B
Explanation:
Cisco Secure Network Analytics (formerly known as Cisco Stealthwatch) is a network traffic analysis tool designed to monitor and detect threats within a network environment by observing patterns in traffic behavior. It provides deep visibility across the entire network infrastructure, including data centers, branch offices, cloud environments, and endpoints, without relying solely on traditional endpoint-based detection.
Ability to analyze encrypted traffic without decryption (A):
This is a unique capability of Secure Network Analytics made possible by Cisco's Encrypted Traffic Analytics (ETA). Rather than decrypting traffic—which can be resource-intensive and may violate privacy—ETA uses packet metadata, flow patterns, and machine learning to identify threats in encrypted traffic without needing to see the content. This helps organizations maintain privacy and regulatory compliance while still detecting malware and command-and-control activity hidden within encrypted sessions.
Provides behavioral analysis for detecting anomalies in network traffic (B):
Behavioral analytics is at the core of Secure Network Analytics. The system continuously establishes baselines of normal behavior for users, hosts, applications, and devices across the network. It then applies anomaly detection techniques to identify deviations from those baselines—such as sudden data exfiltration, unusual port usage, or strange communication patterns—that might indicate malware infections, insider threats, or compromised accounts. This approach is especially useful for identifying zero-day attacks and advanced persistent threats (APTs) that don’t rely on known signatures.
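Conceptually, the baseline-and-deviation approach reduces to logic like the toy sketch below. The numbers and threshold are invented, and the real product builds far richer per-entity models from flow telemetry; this only illustrates the principle:

```python
from statistics import mean, pstdev

# Hypothetical baseline: daily megabytes uploaded by one host over two weeks
baseline_mb = [120, 135, 110, 128, 140, 117, 125, 131, 122, 138, 119, 127, 133, 124]

def is_anomalous(observed_mb, history, z_threshold=3.0):
    """Flag values far outside the host's own historical norm."""
    mu, sigma = mean(history), pstdev(history)
    return abs(observed_mb - mu) > z_threshold * sigma

print(is_anomalous(900, baseline_mb))  # True: possible data exfiltration
print(is_anomalous(130, baseline_mb))  # False: within normal variation
```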
Now, let’s examine the incorrect options:
Allows for fully automated incident response actions (C):
While Secure Network Analytics can integrate with other Cisco security tools to assist in incident response workflows, it is not inherently responsible for fully automated incident response. Tools such as Cisco SecureX or Cisco XDR are more aligned with automated response capabilities. Secure Network Analytics is primarily focused on detection and visibility, not direct action.
Automatically updates threat intelligence feeds from Cisco Talos (D):
Although Cisco Talos does provide intelligence across many Cisco platforms, Secure Network Analytics is not directly reliant on Talos threat feeds for its behavioral analytics or traffic anomaly detection. Unlike tools like Cisco Secure Firewall or Cisco Umbrella, which use Talos feeds for reputation-based blocking, Secure Network Analytics depends more on traffic telemetry and flow data analysis.
Detects lateral movement based on machine learning algorithms (E):
This option is partially true but slightly misleading. Secure Network Analytics does detect lateral movement and uses analytics (including some machine learning), but this is not a primary distinguishing benefit compared to its core strengths of encrypted traffic analysis and behavioral anomaly detection. Additionally, lateral movement detection is not solely based on machine learning—it’s a broader feature of how Secure Network Analytics interprets network behavior.
Question 9
An organization wants to deploy a solution for continuous monitoring of their cloud infrastructure.
Which two tools are best suited for monitoring cloud environments such as AWS and Azure? (Choose 2.)
A. Cisco Secure Network Analytics
B. Cisco Cloudlock
C. Cisco Stealthwatch Cloud
D. Cisco Umbrella
E. Cisco Identity Services Engine (ISE)
Correct Answers: B and C
Explanation:
When considering tools for continuous monitoring of cloud environments like AWS and Azure, it's important to focus on solutions that offer cloud-native visibility, access monitoring, threat detection, and policy enforcement. Cisco offers several cloud security tools, but each one serves a different function within the security ecosystem.
Cisco Cloudlock (B):
Cloudlock is a Cloud Access Security Broker (CASB) that helps secure cloud-based applications such as Microsoft 365, Google Workspace, Box, and others. It enables organizations to monitor user behavior, data usage, and potential insider threats in cloud environments. While it does not monitor infrastructure metrics like CPU or network traffic, it is extremely effective in monitoring SaaS environments and securing identity and data access across cloud services. It is particularly valuable for policy enforcement, anomaly detection, and compliance in multi-cloud setups.
Cisco Stealthwatch Cloud (C):
Stealthwatch Cloud (now part of Secure Cloud Analytics) is explicitly designed for cloud infrastructure monitoring, making it highly relevant for environments like AWS and Azure. It uses network telemetry and flow logs (e.g., AWS VPC flow logs) to deliver real-time visibility, threat detection, and anomaly identification. It enables organizations to detect things like misconfigurations, insider threats, malware activity, and lateral movement in both cloud-native and hybrid environments. This makes it one of the most effective tools for continuous cloud monitoring across IaaS environments.
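Since the product consumes sources such as AWS VPC flow logs, a first deployment step is often enabling those logs. A hedged boto3 sketch is shown below; the VPC ID, bucket ARN, and region are placeholders, and delivery to a Secure Cloud Analytics collector would still need to be configured separately:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder

# Enable VPC flow logs so a collector (e.g., Secure Cloud Analytics)
# can consume the telemetry from the S3 destination.
response = ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],                 # hypothetical VPC ID
    ResourceType="VPC",
    TrafficType="ALL",                                     # accepted and rejected traffic
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-flowlog-bucket",  # hypothetical bucket
)
print(response["FlowLogIds"])
```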
Now let’s look at the incorrect options:
Cisco Secure Network Analytics (A):
This is the on-premises version of Stealthwatch. While it's excellent for monitoring internal enterprise networks through NetFlow and telemetry, it is not optimized for native cloud environments like AWS or Azure unless integrated with specific collectors and agents. It's more commonly used for data centers, campus networks, and large internal infrastructures.
Cisco Umbrella (D):
Cisco Umbrella is a cloud-delivered DNS-layer security solution that prevents users from connecting to malicious sites by enforcing security at the DNS level. While it plays a critical role in blocking threats before they reach the network, it does not provide deep infrastructure monitoring of AWS or Azure. It is more effective for internet-bound traffic filtering and domain-based threat intelligence, not infrastructure visibility.
Cisco Identity Services Engine (ISE) (E):
ISE is used primarily for network access control, device profiling, and policy enforcement in on-premises networks. It supports segmentation and compliance for endpoints but has limited applicability to monitoring cloud infrastructure directly. ISE does not provide visibility into cloud services, resources, or flow-level data from AWS or Azure environments.
Question 10
Which two actions can a security administrator take to reduce the risk of DNS-based attacks while ensuring uninterrupted service? (Choose 2.)
A. Implement DNSSEC to digitally sign DNS data
B. Set up a local DNS cache to resolve queries faster
C. Configure split-horizon DNS to separate internal and external DNS queries
D. Use DNS filtering to block access to known malicious domains
E. Disable DNS query logging to prevent data exfiltration
Correct Answers: A and D
Explanation:
DNS-based attacks can take many forms, such as DNS spoofing, cache poisoning, domain hijacking, and tunneling for data exfiltration. Therefore, a layered defense strategy is necessary to minimize the risk while maintaining the integrity and availability of DNS services. Among the options provided, implementing DNSSEC and using DNS filtering are two of the most effective techniques to address security concerns in DNS without interrupting service.
Implement DNSSEC (A):
DNSSEC (Domain Name System Security Extensions) enhances DNS security by digitally signing DNS data, which helps ensure the authenticity and integrity of DNS responses. It prevents attackers from injecting forged or spoofed DNS records into a cache (a common tactic in cache poisoning or man-in-the-middle attacks). With DNSSEC, a resolver can verify that the DNS response is genuine and untampered. While it doesn't encrypt DNS traffic, it provides cryptographic assurance that the DNS data is from the authoritative source. By ensuring only valid responses are accepted, DNSSEC reduces the risk of many DNS-based attacks without disrupting service for legitimate queries.
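A quick way to see DNSSEC validation in action is to check the AD (Authenticated Data) flag on a response from a validating resolver. This is a sketch using dnspython; the resolver and domain are just examples of a validating service and a signed zone:

```python
import dns.flags
import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]        # a validating public resolver
resolver.use_edns(0, dns.flags.DO, 1232)  # request DNSSEC records

answer = resolver.resolve("cloudflare.com", "A")
# The AD flag is set when the resolver cryptographically
# validated the response chain via DNSSEC.
validated = bool(answer.response.flags & dns.flags.AD)
print("DNSSEC-validated:", validated)
```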
Use DNS Filtering (D):
DNS filtering helps secure networks by blocking access to known malicious domains before a connection is established. It works by checking DNS queries against a threat intelligence database, and if a domain is flagged (e.g., associated with phishing, malware, or command-and-control servers), the request is denied or redirected. DNS filtering is widely used in content control, malware defense, and phishing prevention, and it operates without interrupting normal DNS resolution for trusted domains. Since it's a lightweight, upstream solution, it doesn't burden the network and is easy to implement for both small and large organizations.
Now, let’s consider the incorrect choices:
Set up a local DNS cache (B):
While setting up a local DNS cache improves performance and reduces latency, it doesn't directly contribute to security unless additional protections are added (like DNSSEC or validation mechanisms). Caches can also be a target for poisoning attacks if not properly secured. Thus, while useful for availability and efficiency, it's not primarily a security control.
Configure split-horizon DNS (C):
Split-horizon DNS (also called split-brain DNS) allows organizations to serve different DNS responses based on whether the request is coming from the internal network or the internet. This can help prevent information leakage but is not designed to stop DNS-based attacks like spoofing or tunneling. It enhances segmentation and internal network hygiene, but does not inherently reduce the risk of external DNS-based attacks.
Disable DNS query logging (E):
This is a counterproductive action. Logging DNS queries helps detect abnormal patterns (e.g., data exfiltration via DNS tunneling) and supports incident response and forensics. Disabling logging reduces visibility and weakens security monitoring. While attackers might use DNS to exfiltrate data, the solution is not to stop logging, but rather to detect and analyze the traffic.