300-635 Cisco Practice Test Questions and Exam Dumps


Question No 1:

Which two benefits of using network configuration tools such as Ansible and Puppet to automate data center platforms are valid? (Choose two.)

A. consistency of systems configuration
B. automation of repetitive tasks
C. ability to create device and interface groups
D. ability to add VLANs and routes per device
E. removal of network protocols such as Spanning Tree

Answer: A, B

Explanation:

Network configuration tools like Ansible and Puppet are designed to automate and manage network and system configurations in data center platforms. These tools are highly beneficial for large-scale environments due to their ability to streamline processes and reduce the potential for human error. Here's a detailed look at the benefits:

  • A. Consistency of systems configuration: One of the key benefits of using network configuration tools is the consistency they bring to system configurations. By automating the configuration management process, these tools ensure that systems across the network are configured in the same way, without discrepancies that might occur due to manual configuration. Automation allows for templates and predefined configurations to be applied consistently across all devices, which is especially important in large-scale environments where human error could lead to configuration drift.

  • B. Automation of repetitive tasks: Another significant benefit of using tools like Ansible and Puppet is the ability to automate repetitive tasks. Network administrators often need to perform similar configuration tasks across multiple devices, such as applying security patches, updating software, or modifying network settings. By automating these tasks, these tools free up time for network engineers and reduce the possibility of human error, allowing for a more efficient and reliable network operation.

While the other options are also related to network management, they are not as directly tied to the core benefits of configuration management tools like Ansible and Puppet:

  • C. Ability to create device and interface groups: While these tools do allow for creating groups of devices for easier management, this is more of an organizational feature rather than a primary benefit of configuration automation.

  • D. Ability to add VLANs and routes per device: This is a feature that could be performed with these tools, but the main advantage lies in automation and consistency rather than manually configuring VLANs or routes on individual devices.

  • E. Removal of network protocols such as Spanning Tree: Removing or disabling protocols like Spanning Tree is typically done through network design and configuration, but this is not a direct benefit of tools like Ansible or Puppet, which focus on automation, consistency, and task management.

Therefore, the two most valid benefits are A (Consistency of systems configuration) and B (Automation of repetitive tasks).
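The consistency benefit can be sketched in a few lines of Python: a single template plus one shared settings dictionary produces identical configuration for every device, which is essentially what tools like Ansible and Puppet do at scale. The device names, template fields, and values below are hypothetical illustrations, not real device data.

```python
# Sketch: applying one configuration template across many devices.
# A single source of truth for shared settings prevents configuration drift.

TEMPLATE = """hostname {hostname}
ntp server {ntp_server}
snmp-server community {snmp_community} ro"""

# Shared settings defined once, applied everywhere (hypothetical values).
SHARED = {"ntp_server": "10.0.0.1", "snmp_community": "monitoring"}

def render_config(hostname: str) -> str:
    """Render the same template for every device, varying only the hostname."""
    return TEMPLATE.format(hostname=hostname, **SHARED)

if __name__ == "__main__":
    for device in ["leaf-101", "leaf-102", "spine-201"]:
        print(render_config(device))
        print("---")
```

Because every device's configuration is generated from the same template, discrepancies from manual per-device edits simply cannot occur, which is the "consistency" benefit in option A; looping over the device list is the "automation of repetitive tasks" in option B.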

Question No 2:

A set of automation scripts runs without issue from a local machine, but an experiment needs to take place with a new package found online. 

How is this new package isolated from the main code base?

A. Add the new package to your requirements.txt file.
B. Create a new virtual machine and perform a pip install of the new package.
C. Perform a pip install of the new package when logged into your local machine as root.
D. Create a new virtual environment and perform a pip install of the new package.

Answer: D

Explanation:

When experimenting with a new package that should be isolated from the main code base, it is important to use a method that prevents interference with the existing environment and dependencies. Here's an analysis of the options:

  • A. Add the new package to your requirements.txt file:
    The requirements.txt file is typically used to list all the packages needed for the project. While it's important for managing dependencies, simply adding a new package to the requirements.txt file would not isolate it from the main code base. It would install the new package for the entire project, which is not ideal if you're just experimenting with it. This option doesn't provide isolation.

  • B. Create a new virtual machine and perform a pip install of the new package:
    Creating a new virtual machine (VM) can provide isolation, but it's a heavy-handed solution. Virtual machines are resource-intensive, and you would not need to go to such an extent unless you have other requirements (like needing to isolate the entire environment including the operating system). For just testing a package in isolation, a virtual machine is more than what's needed.

  • C. Perform a pip install of the new package when logged into your local machine as root:
    Installing the package globally as the root user would impact the entire system environment, which is the opposite of isolation. This would also risk conflicting with existing packages and potentially destabilize the environment. This option is not appropriate for isolating the package.

  • D. Create a new virtual environment and perform a pip install of the new package:
    This is the correct and most efficient option. A virtual environment allows you to create an isolated environment in which you can install and test the new package without affecting the main code base or other dependencies. By using virtualenv or Python's built-in venv module, you can isolate the experimental package from the global Python environment and avoid conflicts. This solution ensures that the experiment with the new package remains separate from the main code base.

The best way to isolate a new package from the main code base for testing is to create a new virtual environment and install the new package within that environment. This ensures that your experiment does not interfere with the existing setup, providing a safe and controlled testing environment. Therefore, the correct answer is D.
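A minimal sketch of this workflow using Python's built-in venv module; the package name "somepackage" is a placeholder for whatever package is being evaluated.

```python
# Sketch: isolating an experimental package in a virtual environment.
# The install only affects the environment's own site-packages, not the
# system Python or the main code base.

import subprocess
import sys
import venv
from pathlib import Path

def make_isolated_env(env_dir: Path) -> Path:
    """Create a virtual environment and return the path to its interpreter."""
    venv.create(env_dir, with_pip=True)
    bindir = "Scripts" if sys.platform == "win32" else "bin"
    return env_dir / bindir / "python"

def install_into_env(python_exe: Path, package: str) -> None:
    """Install the experimental package into the environment only."""
    subprocess.run([str(python_exe), "-m", "pip", "install", package], check=True)

if __name__ == "__main__":
    env_python = make_isolated_env(Path("exp-env"))
    install_into_env(env_python, "somepackage")  # hypothetical package name
```

Deleting the environment directory afterward removes the experiment entirely, leaving the main code base untouched.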

Question No 3:

Which two statements about gRPC are true? (Choose two.)

A. It is an IETF draft.
B. It is an IETF standard.
C. It runs over SSH.
D. It is an open source initiative.
E. It runs over HTTPS.

Answer: D, E

Explanation:

gRPC is an open-source framework for building remote procedure call (RPC) systems. It is designed to provide a highly efficient and flexible way for services to communicate with each other over a network.

Let's go through each option:

  • A. It is an IETF draft.
    This statement is false. gRPC is not an IETF (Internet Engineering Task Force) draft. It was developed by Google and is built on HTTP/2 and Protocol Buffers (protobufs); despite its wide adoption, it has not been submitted as an IETF draft.

  • B. It is an IETF standard.
    This statement is false. As mentioned above, gRPC is not an IETF standard. It is an open-source project created by Google but has not yet become an official standard from the IETF.

  • C. It runs over SSH.
    This statement is false. gRPC does not specifically run over SSH (Secure Shell). It typically runs over HTTP/2, which is designed to be more efficient and modern than traditional HTTP/1.1. SSH is not a part of the gRPC communication stack.

  • D. It is an open source initiative.
    This statement is true. gRPC is indeed an open-source project that was developed by Google and is available for public use and contribution. It is hosted on GitHub and has an active community of contributors.

  • E. It runs over HTTPS.
    This statement is true. gRPC generally runs over HTTP/2, which can be secured with SSL/TLS (the underlying protocol for HTTPS). Therefore, gRPC can run over HTTPS, providing secure communication between services.

In summary, the correct statements are D and E. gRPC is an open-source initiative (D), and because it runs over HTTP/2, which can be secured with TLS, it can run over HTTPS (E).
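As a concrete illustration, gRPC services are defined in Protocol Buffers, the interface definition language gRPC uses by default. The service and message names below are illustrative only:

```proto
// Sketch of a gRPC service definition in proto3 syntax.
// The generated client/server code communicates over HTTP/2.
syntax = "proto3";

service DeviceInventory {
  // Unary RPC: one request, one response.
  rpc GetDevice (DeviceRequest) returns (DeviceReply);
}

message DeviceRequest {
  string hostname = 1;
}

message DeviceReply {
  string serial = 1;
  string os_version = 2;
}
```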

Question No 4:

Which statement about synchronous and asynchronous API calls is true?

A. Synchronous API calls wait to return until a response has been received.
B. Synchronous communication is harder to follow and troubleshoot.
C. Synchronous API calls must always use a proxy server.
D. Asynchronous communication uses more overhead for client authentication.

Answer: A

Explanation:

When dealing with API calls, understanding the differences between synchronous and asynchronous communication is essential for both performance and application behavior.

Option A: Synchronous API calls wait to return until a response has been received.
This option is correct. In a synchronous API call, the client sends a request to the server and waits for the server to process the request and return a response. During this waiting period, the client is blocked and cannot proceed with any other operations until the response is received. This type of communication ensures that the client receives the full response before continuing.

Option B: Synchronous communication is harder to follow and troubleshoot.
This option is incorrect. Synchronous communication is generally easier to follow and troubleshoot because the request and response cycle happens in a linear fashion. The client sends a request, waits for a response, and then proceeds. This predictable sequence makes it easier to track the flow of data and identify issues.

Option C: Synchronous API calls must always use a proxy server.
This option is incorrect. Synchronous API calls do not require the use of a proxy server. A proxy server may be used for various purposes, such as load balancing, caching, or filtering, but it is not a mandatory component for synchronous communication. The nature of synchronous calls does not depend on whether a proxy is used.

Option D: Asynchronous communication uses more overhead for client authentication.
This option is incorrect. Asynchronous communication does not inherently add overhead for client authentication. Authentication is handled by the underlying transport and security mechanisms regardless of whether a call is synchronous or asynchronous; the difference between the two models lies in whether the client blocks while waiting for a response, not in how clients authenticate. If anything, asynchronous operation can lower overall resource consumption for applications that handle many tasks concurrently.

In summary, synchronous API calls wait for a response before continuing, making A the correct answer.
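The difference can be demonstrated with a short Python sketch, using simulated delays in place of real API calls; the function names and delays are illustrative only.

```python
# Sketch: synchronous (blocking) vs. asynchronous (non-blocking) calls.

import asyncio
import time

def sync_fetch(name: str, delay: float) -> str:
    """Synchronous call: the caller blocks until the response arrives."""
    time.sleep(delay)
    return f"{name}: done"

async def async_fetch(name: str, delay: float) -> str:
    """Asynchronous call: the caller can do other work while waiting."""
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main() -> None:
    # Two async calls overlap, so the total wait is ~0.1s instead of ~0.2s.
    results = await asyncio.gather(
        async_fetch("a", 0.1), async_fetch("b", 0.1)
    )
    print(results)

if __name__ == "__main__":
    start = time.perf_counter()
    sync_fetch("a", 0.1)  # blocks for the full delay
    sync_fetch("b", 0.1)  # runs only after the first call returns
    print(f"sync elapsed: {time.perf_counter() - start:.2f}s")
    asyncio.run(main())
```

The synchronous version's linear request-then-response flow is exactly what makes it easier to follow and troubleshoot, as noted under option B.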

Question No 5:

What are two main guiding principles of REST? (Choose two.)

A. cacheable
B. trackable
C. stateless
D. single-layer system
E. stateful

Answer: A, C

Explanation:

REST (Representational State Transfer) is an architectural style used for designing networked applications. It relies on a set of principles and constraints that ensure simplicity, scalability, and performance. Two key guiding principles of REST are cacheable and stateless.

A. cacheable
One of the key principles of REST is that responses from the server should be explicitly marked as cacheable or non-cacheable. Responses marked cacheable can be stored and reused by clients or intermediary proxies, which reduces redundant server communication and improves performance, especially in high-latency environments.

C. stateless
Another fundamental principle of REST is that it is stateless. This means that every request from a client to the server must contain all the information the server needs to understand and process the request. The server does not store any information about the client session between requests. Each request is independent, and there is no reliance on the server's previous knowledge of the client. This statelessness allows for easier scaling and simpler server-side management, as the server does not have to store or manage any client state.

Now, let's examine why the other options are incorrect:

  • B. trackable
    While REST APIs can be monitored and tracked, trackable is not a core guiding principle of REST. Tracking is a separate concern related to monitoring API performance or usage, but it is not part of the architectural principles that define REST.

  • D. single-layer system
    A single-layer system is not a guiding principle of REST. REST can be implemented in multi-layered systems, which is often the case when using intermediate servers, proxies, and gateways. The REST constraints focus more on principles like statelessness, cacheability, and uniform interfaces rather than enforcing a single-layer architecture.

  • E. stateful
    Stateful is the opposite of the stateless principle of REST. A stateful system maintains session state between requests, meaning that the server remembers previous interactions with the client. REST's stateless nature helps achieve better scalability and flexibility in distributed systems.

In conclusion, the two guiding principles of REST are cacheable and stateless, which are critical for maintaining efficiency, performance, and scalability. Therefore, A and C are the correct answers.
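A minimal Python sketch of these two constraints, with hypothetical resource paths and token values: the handler keeps no session state between calls, and every response carries an explicit Cache-Control header.

```python
# Sketch: a stateless, cacheability-aware request handler.

RESOURCES = {"/devices/leaf-101": {"status": "up"}}  # illustrative data

def handle_request(method: str, path: str, auth_token: str) -> dict:
    """Stateless handler: everything the server needs (method, path,
    credentials) arrives in the request itself; nothing is remembered
    between calls."""
    if auth_token != "secret-token":          # credentials sent every time
        return {"status": 401, "headers": {"Cache-Control": "no-store"}}
    body = RESOURCES.get(path)
    if body is None:
        return {"status": 404, "headers": {"Cache-Control": "no-store"}}
    # Mark the successful response explicitly cacheable for 60 seconds.
    return {"status": 200,
            "headers": {"Cache-Control": "max-age=60"},
            "body": body}
```

Because no client state is stored server-side, any number of identical server instances could handle the next request, which is why statelessness aids scaling.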

Question No 6:

Which action does the execution of this ACI Cobra Python code perform?

A. It prints all LLDP neighbor MAC and IP addresses.
B. It prints all Cisco Discovery Protocol neighbor MAC and IP addresses.
C. It prints all endpoint MAC and IP addresses.
D. It prints all APIC MAC and IP addresses.

Answer: A

Explanation:

To determine which action the ACI Cobra Python code performs, we need to understand the context of the code and the terms involved:

  1. LLDP (Link Layer Discovery Protocol) is a standard protocol used for network discovery. It allows network devices to advertise their identity and capabilities on the local network. This would involve discovering neighbor devices, including their MAC and IP addresses.

  2. Cisco Discovery Protocol (CDP) is Cisco’s proprietary protocol, also used for network device discovery, but it’s not the focus here since we are dealing with LLDP.

  3. Endpoints refer to devices that are connected to the network, and their MAC and IP addresses can be discovered in a different context, specifically when querying endpoint information in Cisco ACI (Application Centric Infrastructure).

  4. APIC (Application Policy Infrastructure Controller) is the central controller in Cisco ACI, and while it does have MAC and IP addresses, they are not typically discovered using LLDP, as LLDP typically focuses on discovering neighboring devices.

Given that the question refers to LLDP, which is commonly used for neighbor discovery, and that neighbor MAC and IP addresses are precisely the information exchanged through this protocol, the Cobra code is most likely querying LLDP adjacency objects and printing the neighbor MAC and IP addresses. Therefore, A is the correct answer.

Question No 7:

Assuming a new ACI instance, what is the result when this script is run?

A. Ten objects are created and subsequently deleted.
B. Nine objects are created.
C. An exception is thrown.
D. Ten objects are created.

Answer: C

Explanation:

To understand the result of the script when it runs in an ACI (Application Centric Infrastructure) environment, we must consider how ACI handles object creation and deletion, as well as what could lead to an exception being thrown.

When running scripts in an ACI instance, several factors come into play:

  1. Object Creation: ACI typically involves the creation of objects that represent various network components, such as tenants, application profiles, or endpoints. Scripts in this context often perform actions like creating or modifying these objects.

  2. Exceptions: If there is a flaw in the script or if an invalid operation is attempted (e.g., an unsupported API call, trying to create an object with invalid parameters, or attempting to delete an object that cannot be deleted), an exception may be triggered. This is particularly true if the script attempts to create an object that already exists, or there is an issue with the parameters used for the object creation.

Based on the question, if the script runs on a new ACI instance, we can infer that the environment starts with no pre-existing objects. Assuming the script creates objects and attempts to perform certain operations (like deletion), an exception could occur if one of these operations is invalid or conflicts with the existing environment.

  • A. Ten objects are created and subsequently deleted: This might occur if the script is designed to create objects and then delete them, but this would depend on the script itself being properly structured to allow for successful deletion.

  • B. Nine objects are created: This outcome would suggest that one of the object creation operations failed, but it's unlikely without more context.

  • C. An exception is thrown: This is the most likely scenario. If the script encounters an error—such as an invalid parameter, an unsupported command, or a conflict with existing settings—an exception would be thrown.

  • D. Ten objects are created: If the script was designed to create exactly ten objects, this would be the expected result if everything is functioning as expected without any errors.

Since we are dealing with a new ACI instance and the typical issues that might arise when scripting with new configurations, C. An exception is thrown is the most likely outcome, particularly if the script encounters an error during object creation or deletion.

Question No 8:

What is the purpose of Cisco ACI (Application Centric Infrastructure) in a data center?

A) To automate server virtualization
B) To provide network programmability and policy-driven automation
C) To manage database traffic
D) To enable IPv6 routing

Answer: B) To provide network programmability and policy-driven automation

Explanation:

Cisco ACI (Application Centric Infrastructure) is an innovative software-defined networking (SDN) solution that plays a central role in automating and managing the network in data centers. ACI is designed to provide network programmability by abstracting the underlying hardware and treating the network as a unified resource. Through policies, ACI enables the dynamic configuration of the network based on the needs of the applications running on top of it, rather than just the static network infrastructure.

In a traditional network setup, configuring each device individually can lead to complexity, human errors, and increased administrative overhead. Cisco ACI changes this by enabling the network to adapt dynamically to application requirements, where the network automatically adjusts to the application’s demands such as bandwidth, security, and performance.

A key benefit of ACI is its ability to automate tasks like provisioning, policy enforcement, and resource optimization, making the network more agile, scalable, and secure. This level of automation is essential in data center environments where the demand for faster deployment, higher availability, and easier management is continuously growing. By defining policies that align with the application's needs, ACI provides network operators with centralized control over the network, reducing manual intervention and minimizing configuration errors.

Additionally, ACI’s policy-driven approach facilitates better security posture by ensuring that the right security measures are applied automatically as devices and applications are deployed across the data center, maintaining compliance and reducing the risks associated with misconfigurations.
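As a rough illustration of the policy-driven model, intent is expressed as a structured object and pushed to the APIC REST API rather than configured device by device. The APIC URL and tenant details below are hypothetical, and no request is actually sent in this sketch.

```python
# Sketch: building the JSON body for an ACI policy object (a tenant).
# Only payload construction is shown; posting it would require an
# authenticated session with a real APIC.

import json

APIC_URL = "https://apic.example.com/api/mo/uni.json"  # hypothetical APIC

def tenant_policy(name: str, description: str) -> str:
    """Build the JSON body for creating a tenant (fvTenant) object."""
    payload = {"fvTenant": {"attributes": {"name": name,
                                           "descr": description}}}
    return json.dumps(payload)

if __name__ == "__main__":
    body = tenant_policy("prod", "Production applications")
    print("POST", APIC_URL)
    print(body)
```

The point of the sketch is that the operator declares *what* the network should provide (a tenant, its policies), and the controller translates that intent into device-level configuration.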

Question No 9:

In the context of Python scripting for data center automation, which of the following libraries is commonly used to interact with Cisco devices?

A) Ansible
B) Netmiko
C) Pytest
D) Flask

Answer: B) Netmiko

Expanded Explanation:

Python scripting has become one of the primary tools used in network automation, especially in the context of interacting with Cisco devices. Among the various libraries available for Python, Netmiko is one of the most popular choices for automating network tasks and interacting with network devices via SSH. Netmiko abstracts the complexities of device interaction and provides a simplified interface to send commands to network devices, collect outputs, and execute various network management tasks.

Netmiko is designed specifically for network engineers and allows them to manage devices from multiple vendors (including Cisco, Juniper, and Arista) without having to write complicated code. Using Netmiko, users can create Python scripts that automate repetitive tasks like configuring network devices, retrieving status information, and even running diagnostics on multiple devices simultaneously.

One of the main advantages of Netmiko is that it supports a wide variety of Cisco devices, including routers, switches, and firewalls. It also works well with Python's built-in libraries, making it easy to extend and integrate with other automation tools. It’s especially useful for automating the deployment of network configurations and integrating with other data center management systems.

Unlike some other libraries, Netmiko simplifies the process of working with SSH, reducing the need for device-specific error handling and low-level communication details. This means network engineers can focus on the automation logic itself rather than the underlying complexities of interacting with networking hardware.

Other options such as Ansible, while also very effective for automation, operate on a slightly higher level, using configuration management approaches, and are not as lightweight or simple as Netmiko for direct device interaction via Python. Pytest and Flask are not typically used for network automation—Pytest is a testing framework, and Flask is a web framework.
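A minimal sketch of how a Netmiko-based script might look, assuming Netmiko is installed (`pip install netmiko`); the device address and credentials are placeholders, and the connection is only attempted when the script is run directly.

```python
# Sketch: connecting to a Cisco switch with Netmiko and running a command.
# Host and credentials below are placeholders, not a real device.

def device_params(host: str, username: str, password: str) -> dict:
    """Connection parameters in the shape Netmiko's ConnectHandler expects."""
    return {
        "device_type": "cisco_nxos",  # Netmiko platform string for NX-OS
        "host": host,
        "username": username,
        "password": password,
    }

if __name__ == "__main__":
    from netmiko import ConnectHandler  # requires the netmiko package

    params = device_params("192.0.2.10", "admin", "password")  # placeholders
    with ConnectHandler(**params) as conn:
        print(conn.send_command("show version"))
```

Changing `device_type` (for example to `cisco_ios`) is all that is needed to target a different platform, which is how Netmiko supports multiple vendors behind one interface.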

Question No 10:

Which of the following Ansible modules would you use to automate the configuration of Cisco Nexus switches?

A) ios_config
B) nxos_config
C) cisco_ise
D) ios_facts

Answer: B) nxos_config

Explanation:

Ansible is a powerful open-source automation tool that is commonly used in network automation. One of the reasons for Ansible’s popularity is the wide range of modules available that are designed specifically for interacting with networking equipment from vendors like Cisco.

When working with Cisco Nexus switches, the appropriate module to use is nxos_config. This module allows administrators to push configuration changes to Cisco Nexus devices, retrieve configurations, and apply updates to the devices in a consistent and repeatable manner. Ansible’s declarative nature allows users to specify what configuration changes are needed, and the tool automatically handles the steps necessary to make those changes.

The nxos_config module is designed to work with Cisco’s Nexus switches running the NX-OS operating system. These switches are widely used in data centers and have their own set of unique configurations and management tools. The nxos_config module ensures that network engineers can automate the management of these devices, reducing manual efforts, improving consistency, and ensuring compliance with configuration standards.

For comparison:

  • ios_config is used for Cisco devices running IOS, not NX-OS.

  • cisco_ise is a module for interacting with Cisco Identity Services Engine, which is used for network security and user authentication.

  • ios_facts is another module for gathering facts about IOS-based devices but doesn't help with configuration.

By leveraging nxos_config, network automation can be extended to large-scale Cisco Nexus environments, improving both speed and accuracy while managing network configurations.
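As a rough sketch, a play using this module might look like the following; the host group name and VLAN details are illustrative placeholders, not a recommended baseline.

```yaml
# Sketch of an Ansible play using the nxos_config module
# (fully qualified name cisco.nxos.nxos_config in modern Ansible).
- name: Apply baseline configuration to Nexus switches
  hosts: nexus_switches        # hypothetical inventory group
  gather_facts: false
  connection: network_cli
  tasks:
    - name: Ensure management VLAN exists
      cisco.nxos.nxos_config:
        lines:
          - vlan 100
          - name MANAGEMENT
```

Because the play declares the desired lines rather than scripting keystrokes, rerunning it against an already-configured switch makes no further changes, which is the consistency and repeatability the explanation above describes.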

