
350-901 Cisco Practice Test Questions and Exam Dumps
A developer has created an application based on specific customer requirements. The customer needs the application to be highly available and run with minimum downtime to ensure continuity of service. The customer is concerned with both the Recovery Time Objective (RTO), which is the time it takes to recover from a failure, and the Recovery Point Objective (RPO), which determines how much data loss is acceptable.
Considering the need for high availability and minimal downtime, which design approach should be taken regarding high-availability applications, RTO, and RPO?
A. Active/passive results in lower RTO and RPO. For RPO, data synchronization between the two data centers must be timely to allow seamless request flow.
B. Active/passive results in lower RTO and RPO. For RPO, data synchronization between the two data centers does not need to be timely to allow seamless request flow.
C. Active/active results in lower RTO and RPO. For RPO, data synchronization between the two data centers does not need to be timely to allow seamless request flow.
D. Active/active results in lower RTO and RPO. For RPO, data synchronization between the two data centers must be timely to allow seamless request flow.
D. Active/active results in lower RTO and RPO. For RPO, data synchronization between the two data centers must be timely to allow seamless request flow.
In high-availability (HA) applications, Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are crucial metrics for defining how quickly an application can recover after an outage and how much data loss is acceptable.
Recovery Time Objective (RTO) refers to the time it takes to recover the application and restore services after a failure.
Recovery Point Objective (RPO) refers to the maximum amount of data loss that is acceptable, i.e., how much recent data may be lost when a failure occurs.
Active/Passive Configuration (Option A and B):
In an active/passive setup, one data center (the active one) handles the requests, while the other (the passive one) is on standby, only activated during a failure.
RTO and RPO in Active/Passive: Generally, active/passive configurations result in higher RTO and RPO due to the downtime that occurs during the failover process and the lack of real-time synchronization between data centers.
Options A and B both claim that active/passive yields the lower RTO and RPO, which is the wrong starting point: failover to the standby site always takes time, so RTO suffers. Option A at least recognizes that timely data synchronization is needed to keep the passive site current and limit data loss; Option B's claim that synchronization does not need to be timely would make the RPO even worse.
Active/Active Configuration (Option C and D):
In an active/active setup, both data centers are active and handle traffic simultaneously. This configuration provides continuous availability and load balancing between sites.
RTO and RPO in Active/Active: An active/active configuration generally results in lower RTO and lower RPO because both data centers are always in sync, and failure at one data center does not result in service disruption.
Option D suggests that timely data synchronization between the data centers is essential to minimize data loss and ensure seamless request flow. In an active/active setup, continuous data synchronization is crucial for ensuring that users experience minimal downtime and there is no significant data loss.
Lower RTO and RPO: Active/active configurations provide immediate failover capabilities, reducing the time to restore service (lower RTO) and ensuring that data synchronization between active data centers minimizes data loss (lower RPO).
Timely Data Synchronization: In an active/active setup, for seamless request flow, the data centers must stay in sync in real time or near real time to prevent discrepancies in the data and maintain continuity of service.
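To illustrate why timely synchronization drives RPO toward zero, the sketch below only acknowledges a write once both active sites have stored the record, so a failure at either site loses no committed data. The endpoint URLs and payload are hypothetical; this is a minimal sketch, not a production replication scheme (real deployments typically rely on database-level replication).

```python
import requests

# Hypothetical write endpoints for the two active data centers.
DC_ENDPOINTS = [
    "https://dc1.example.com/api/orders",
    "https://dc2.example.com/api/orders",
]

def replicated_write(record: dict, timeout: float = 2.0) -> bool:
    """Acknowledge the client only after every data center has stored the record.

    Keeping both sites in (near) real-time sync is what keeps RPO low: if one
    site fails, the other already holds the latest committed data.
    """
    for url in DC_ENDPOINTS:
        resp = requests.post(url, json=record, timeout=timeout)
        if resp.status_code >= 300:
            # A failed replica write would let the sites diverge, so the
            # overall write is rejected (or retried) rather than acknowledged.
            return False
    return True

if __name__ == "__main__":
    ok = replicated_write({"order_id": 42, "status": "created"})
    print("write acknowledged" if ok else "write rejected")
```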
Thus, the correct approach to achieve minimum downtime and high availability is to implement an active/active configuration with timely data synchronization between the two data centers.
The correct design approach to minimize downtime and ensure seamless service with minimal data loss is:
D. Active/active results in lower RTO and RPO. For RPO, data synchronization between the two data centers must be timely to allow seamless request flow.
You are working on a cloud-native project where the source code and dependencies are written in Python, Ruby, and JavaScript. Whenever there is a change in the code, a notification is sent to the CI/CD (Continuous Integration/Continuous Deployment) tool to trigger the pipeline. The pipeline automates the process of building, testing, and deploying the application.
Which step should be omitted from the CI/CD pipeline for this cloud-native project?
A. Deploy the code to one or more environments, such as staging and/or production.
B. Build one or more containers that package up code and all its dependencies.
C. Compile code.
D. Run automated tests to validate the code.
C. Compile code.
In a cloud-native project, the focus is on applications that are designed to run in dynamic cloud environments, typically leveraging containerization and microservices. This often means interpreted languages (like Python, Ruby, and JavaScript) are used rather than compiled languages.
A CI/CD pipeline generally follows a set pattern of activities aimed at ensuring code is built, tested, and deployed correctly. Let's break down each option:
A. Deploy the code to one or more environments, such as staging and/or production.
Required step. After the code has been tested, it must be deployed to staging or production environments for end users to access. Deployment is a necessary part of the CI/CD pipeline that pushes the changes live.
B. Build one or more containers that package up code and all its dependencies.
Required step. In modern cloud-native applications, it’s common to package the application code and its dependencies into containers (e.g., using Docker). This ensures that the application is portable and can run consistently across different environments. Building containers is an essential step in the pipeline.
C. Compile code.
Step to omit. Python, Ruby, and JavaScript are interpreted languages, so they are not compiled ahead of time the way Java or C++ are. Their code is executed directly by the respective interpreters and runtimes, without a separate compilation step. Therefore, compiling code is unnecessary for these languages and can be omitted from the CI/CD pipeline.
D. Run automated tests to validate the code.
Required step. Running automated tests is a crucial part of the CI/CD process. It ensures that the changes made to the code do not introduce new bugs or regressions. Automated testing helps maintain the quality and stability of the application as it evolves.
The step that should be omitted from the CI/CD pipeline for this cloud-native project, which uses interpreted languages like Python, Ruby, and JavaScript, is compiling code. These languages do not require a compilation step as they are interpreted, so this step is unnecessary in the pipeline.
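To make this concrete, a minimal pipeline driver for such a project might look like the sketch below: build a container image, run the test suite, and push the image for deployment, with no compile stage anywhere. The image name, registry, and test command are assumptions for illustration, and it presumes Docker and pytest are available.

```python
import subprocess
import sys

IMAGE = "registry.example.com/myapp:latest"  # hypothetical image name

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def pipeline() -> None:
    # 1. Build: package the interpreted code and its dependencies into a container.
    run(["docker", "build", "-t", IMAGE, "."])
    # 2. Test: run the automated test suite inside the freshly built image.
    run(["docker", "run", "--rm", IMAGE, "pytest", "-q"])
    # 3. Deploy: publish the image so the target environment can pull it.
    run(["docker", "push", IMAGE])
    # Note: there is no compile stage -- Python, Ruby, and JavaScript are
    # executed directly by their interpreters/runtimes.

if __name__ == "__main__":
    try:
        pipeline()
    except subprocess.CalledProcessError as exc:
        sys.exit(exc.returncode)
```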
Thus, the correct answer is:
C. Compile code.
You are designing an application following the 12-factor app methodology, which is a set of best practices for building modern, scalable applications. These practices focus on optimizing aspects such as application configuration, environment management, and logging, ensuring portability and scalability.
Which two of the following statements align with the best practices according to the 12-factor app methodology for application design?
A. Application code writes its event stream to stdout.
B. Application log streams are archived in multiple replicated databases.
C. Application log streams are sent to log indexing and analysis systems.
D. Application code writes its event stream to specific log files.
E. Log files are aggregated into a single file on individual nodes.
A. Application code writes its event stream to stdout.
C. Application log streams are sent to log indexing and analysis systems.
The 12-factor app methodology is a set of guidelines for building software-as-a-service (SaaS) applications that are scalable, portable, and easy to maintain. Among these practices, logging is a crucial component. Logs are essential for debugging, monitoring, and tracking the health of applications, particularly in cloud environments. Here's a breakdown of each option in the context of the 12-factor app methodology:
A. Application code writes its event stream to stdout.
Best practice. According to the 12-factor app principles, applications should write logs to stdout (standard output) and stderr (standard error) instead of writing them to files. This ensures that logs are handled in a way that’s consistent and portable across different environments. It allows logs to be captured and processed by external logging services without needing to manage files directly within the application.
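In Python, for example, this can be as simple as pointing the standard logging module at stdout and letting the platform's log router capture the stream. A minimal sketch (logger name and messages are illustrative):

```python
import logging
import sys

# 12-factor style: treat logs as an event stream and write them to stdout.
# The execution environment (container runtime, platform log router) is
# responsible for capturing, shipping, and storing the stream.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)

log = logging.getLogger("orders")
log.info("service started")
log.warning("upstream call retried attempt=%d", 2)
```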
B. Application log streams are archived in multiple replicated databases.
Not a best practice. The 12-factor app does not recommend logging to replicated databases. The focus is on using external log aggregators or indexing systems, not storing logs in databases. Managing logs in databases can introduce additional complexity and isn't ideal for scalability and portability.
C. Application log streams are sent to log indexing and analysis systems.
Best practice. The 12-factor app encourages applications to send logs to centralized log indexing and analysis systems (e.g., ELK stack, Splunk). These systems allow for efficient log aggregation, search, and analysis. Centralizing logs makes it easier to monitor and troubleshoot applications across different environments.
D. Application code writes its event stream to specific log files.
Not a best practice. Writing logs to specific log files contradicts the 12-factor app’s advice to output logs to stdout and stderr. Managing log files internally within the application can be problematic in cloud-native environments, especially when the application scales across multiple instances.
E. Log files are aggregated into a single file on individual nodes.
Not a best practice. Similar to option D, aggregating log files into a single file on individual nodes is not recommended. It creates challenges in accessing and managing logs, especially in distributed environments, where the logs are better handled by centralized logging systems.
In the context of the 12-factor app methodology, the best practices for handling logs involve writing to stdout and sending logs to centralized logging systems. Therefore, the correct answers are:
A. Application code writes its event stream to stdout.
C. Application log streams are sent to log indexing and analysis systems.
An organization is managing a large cloud-deployed application that follows a microservices architecture. The application is redundantly deployed across three or more data center regions, ensuring that downtime is minimal. However, the organization frequently receives reports about application slowness. Upon reviewing the container orchestration logs, it is evident that various containers are encountering faults, which result in them failing and spinning up new instances.
What action should be taken to improve the resiliency of the application while maintaining the current scale?
A. Update the base image of the containers.
B. Test the execution of the application on another cloud services platform.
C. Increase the number of containers running per service.
D. Add consistent “try/catch (exception)” clauses to the code.
C. Increase the number of containers running per service.
The scenario describes an application with microservices running in containers across multiple data center regions, but the application is experiencing slowness due to faults in the containers, causing them to fail and restart. To address this issue, we need to focus on improving the resiliency of the application while maintaining the existing scale.
Here’s an analysis of each option:
A. Update the base image of the containers.
Updating the base image might resolve security vulnerabilities or outdated libraries, but it does not directly address the issue of containers failing frequently. If the application is experiencing faults due to resource issues, container failures, or orchestration problems, simply updating the base image will not solve these underlying problems. The issue seems to be more related to resiliency under load rather than outdated code or images.
B. Test the execution of the application on another cloud services platform.
While testing on a different cloud platform might uncover platform-specific issues, it’s not the most direct or efficient action for addressing performance problems that arise from the application’s design. The root cause of the slowness and container failures is likely tied to the application architecture or container orchestration, which may still be present on another platform. Switching platforms would also incur significant overhead without necessarily resolving the fundamental issue.
C. Increase the number of containers running per service.
Correct answer. To improve resiliency, increasing the number of containers running per service can help distribute the load more evenly and ensure that the application is highly available even when some containers fail. This approach is especially relevant in microservices architectures, where services should be designed to scale horizontally. Running more instances of each service means that if one container fails, others are readily available to take over the load, reducing downtime and improving the system's overall reliability.
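If the orchestrator is Kubernetes, for example, raising the replica count per service can be done declaratively or through the API. The sketch below uses the official kubernetes Python client to patch a Deployment's scale; the deployment name, namespace, and replica count are assumptions for illustration.

```python
from kubernetes import client, config

def scale_service(deployment: str, namespace: str, replicas: int) -> None:
    """Raise the number of container replicas backing a service.

    More replicas per service means a single failing container has less
    impact: traffic is spread across the survivors while the orchestrator
    replaces the faulted instance.
    """
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    # Hypothetical service: scale the "orders" deployment up to 5 replicas.
    scale_service("orders", "production", replicas=5)
```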
D. Add consistent “try/catch (exception)” clauses to the code.
Adding exception handling is generally a good practice to handle errors and prevent crashes. However, it won’t directly improve resiliency in terms of the application’s ability to handle failures at the container level or across the infrastructure. Exception handling is a programming best practice but is insufficient in addressing infrastructure or orchestration-related faults that are causing the containers to fail in the first place.
To improve the resiliency of the application and ensure that it can handle failures more effectively, the best course of action is to increase the number of containers running per service. This approach ensures that the application can continue to operate smoothly even if some containers experience faults, improving overall availability and fault tolerance.
Therefore, the correct answer is:
C. Increase the number of containers running per service.
You are tasked with designing a web application that can efficiently handle up to 1000 requests per second. The application must ensure that it remains responsive and can serve users without degradation in performance, even when the load reaches its peak.
What design strategy should be implemented to ensure that the application can handle the traffic effectively while maintaining a high level of service?
A. Use algorithms like random early detection to deny excessive requests.
B. Set a per-user request limit (for example, 5 requests/minute/user) and deny requests from users who have reached the limit.
C. Allow only 1000 user connections, and deny further connections to ensure that the current users are served.
D. Queue all requests and process them one by one to ensure that all users are eventually served.
A. Use algorithms like random early detection to deny excessive requests.
The web application must be designed to efficiently handle high traffic, ensuring both performance and availability. Here’s a breakdown of each option and how it applies to the problem:
A. Use algorithms like random early detection to deny excessive requests.
Correct answer. This strategy applies active queue management: Random Early Detection (RED) or similar algorithms drop requests probabilistically before a queue is completely full, preventing the system from being overwhelmed. By intelligently denying excess requests before the system reaches full load, the application avoids resource saturation and stays responsive. This is a scalable approach because it sheds only the excess load rather than rejecting connections outright.
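A minimal sketch of RED-style admission control is shown below: once the request queue fills beyond a minimum threshold, requests are dropped with a probability that grows toward a maximum threshold. The thresholds and drop probability are illustrative values, not tuned figures.

```python
import random
from collections import deque

MIN_THRESHOLD = 200    # start probabilistic drops above this queue depth
MAX_THRESHOLD = 800    # drop everything above this queue depth
MAX_DROP_PROB = 0.5    # drop probability as depth approaches MAX_THRESHOLD

queue: deque = deque()

def admit(request) -> bool:
    """Random Early Detection style check: drop some requests early,
    before the queue (and the servers behind it) saturate."""
    depth = len(queue)
    if depth <= MIN_THRESHOLD:
        queue.append(request)
        return True
    if depth >= MAX_THRESHOLD:
        return False  # hard drop: the system is already overloaded
    # Drop probability rises linearly between the two thresholds.
    drop_prob = MAX_DROP_PROB * (depth - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
    if random.random() < drop_prob:
        return False
    queue.append(request)
    return True
```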
B. Set a per-user request limit (for example, 5 requests/minute/user) and deny requests from users who have reached the limit.
Setting a per-user limit is a form of rate limiting, which can help prevent abuse or overloading by individual users. However, this method does not handle overall traffic spikes effectively, especially when the system is near its capacity (1000 requests/second). While it can be a part of the solution, relying solely on this strategy could lead to poor user experience, as legitimate users might be throttled unnecessarily during peak times.
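For comparison, per-user rate limiting is typically a small fixed-window or token-bucket counter like the sketch below. It is useful against abusive clients, but it does not by itself protect the system from an aggregate spike of legitimate traffic. The limit and window values are illustrative.

```python
import time
from collections import defaultdict

LIMIT = 5          # requests allowed per user...
WINDOW = 60.0      # ...per 60-second window (illustrative values)

_windows: dict = defaultdict(lambda: [0.0, 0])  # user_id -> [window_start, count]

def allow(user_id: str) -> bool:
    """Fixed-window per-user rate limiter."""
    now = time.monotonic()
    window_start, count = _windows[user_id]
    if now - window_start >= WINDOW:
        _windows[user_id] = [now, 1]
        return True
    if count < LIMIT:
        _windows[user_id][1] = count + 1
        return True
    return False
```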
C. Allow only 1000 user connections, and deny further connections to ensure that the current users are served.
This approach limits concurrency by restricting the number of active connections. Although this helps ensure the system isn't overwhelmed, it could lead to a poor user experience, especially if legitimate users are consistently denied service. This approach also doesn't scale well with increasing traffic or changes in traffic patterns, as it simply limits access rather than efficiently managing the load.
D. Queue all requests and process them one by one to ensure that all users are eventually served.
Queuing requests for sequential processing might seem like a way to ensure all requests are handled eventually, but it introduces high latency and could lead to significant delays. Processing requests one by one does not scale well under high traffic volumes and will lead to a poor user experience due to long waiting times. It also creates a bottleneck, which undermines the goal of serving 1000 requests per second efficiently.
To handle 1000 requests per second effectively, a scalable solution like Random Early Detection (RED) is preferred, as it proactively manages the traffic load and helps to maintain high availability without overloading the system. This method denies excessive requests early in the queue to avoid resource exhaustion, ensuring that the system remains responsive for all users.
Thus, the correct answer is:
A. Use algorithms like random early detection to deny excessive requests.
An organization manages a large cloud-deployed application utilizing a microservices architecture across multiple data centers. Reports indicate slowness in the application, and logs from the container orchestration system show that various containers have faults, causing them to fail and automatically spin up new instances.
Which two actions should be implemented to enhance the design of the application and help identify the causes of these faults more effectively? (Choose two.)
A. Automatically remove the container that fails the most over a specified time period.
B. Implement a tagging methodology that traces the application execution from service to service.
C. Add logging on exception and provide immediate notification.
D. Write to the datastore every time an application failure occurs.
E. Implement an SNMP logging system with alerts in case a network link is slow.
B. Implement a tagging methodology that traces the application execution from service to service.
C. Add logging on exception and provide immediate notification.
In a microservices architecture, containers are distributed and may run across various data centers. When faults occur, particularly at the container level, it is crucial to identify the root cause to enhance application performance and reliability. Here’s a breakdown of the actions:
B. Implement a tagging methodology that traces the application execution from service to service.
Correct Answer. A tagging methodology helps track the flow of requests across multiple services in a microservices-based application. This is part of distributed tracing, where each request or transaction is tagged as it moves from one service to another, allowing the development and operations teams to pinpoint the exact microservice or service boundary where the failure or performance degradation occurs. By having clear insights into the service execution flow, it is easier to identify bottlenecks or faulty services that are causing slowness or failures.
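A minimal way to implement such tagging is to generate a correlation (trace) ID at the edge and propagate it on every downstream call, so log lines from different services can be stitched together. The header name and downstream URL below are assumptions; production systems usually adopt a standard such as W3C Trace Context or a tracing library like OpenTelemetry.

```python
import logging
import uuid

import requests

TRACE_HEADER = "X-Correlation-ID"  # hypothetical header name

log = logging.getLogger("orders")

def handle_request(incoming_headers: dict) -> None:
    # Reuse the caller's correlation ID if present, otherwise start a new trace.
    trace_id = incoming_headers.get(TRACE_HEADER, str(uuid.uuid4()))
    log.info("processing order trace_id=%s", trace_id)

    # Propagate the same ID to the next service so its logs carry the tag too.
    requests.post(
        "https://inventory.example.com/reserve",   # hypothetical downstream service
        json={"sku": "ABC-123", "qty": 1},
        headers={TRACE_HEADER: trace_id},
        timeout=2.0,
    )
```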
C. Add logging on exception and provide immediate notification.
Correct Answer. Adding detailed logging for exceptions allows the application to capture and record critical error information whenever a failure happens. Immediate notifications for failures enable the operations team to respond in real time, minimizing downtime or performance degradation. These logs can include stack traces, service names, and error messages, helping teams quickly identify and resolve the underlying issue. This proactive monitoring and alerting mechanism is key to maintaining performance and minimizing the impact on the user experience.
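A minimal sketch of "log on exception and notify immediately" is shown below; the webhook URL is a placeholder for whatever alerting channel (pager, chat, incident tool) the team actually uses.

```python
import logging

import requests

ALERT_WEBHOOK = "https://alerts.example.com/hook"   # hypothetical alert endpoint
log = logging.getLogger("payments")

def notify(message: str) -> None:
    """Push an immediate notification to the on-call channel (best effort)."""
    try:
        requests.post(ALERT_WEBHOOK, json={"text": message}, timeout=2.0)
    except requests.RequestException:
        log.warning("alert delivery failed for: %s", message)

def charge(order_id: int) -> None:
    try:
        ...  # placeholder: call the payment provider here
    except Exception:
        # logging.exception records the full stack trace along with the message.
        log.exception("charge failed order_id=%s", order_id)
        notify(f"payments: charge failed for order {order_id}")
        raise
```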
A. Automatically remove the container that fails the most over a specified time period.
While automatically removing the most frequently failing container may temporarily reduce the number of failures, it does not address the root cause of the failures. This could mask an underlying issue, and it doesn’t help identify why the failures are happening in the first place.
D. Write to the datastore every time an application failure occurs.
Writing to a datastore every time a failure occurs could add unnecessary overhead, as the application would constantly be writing failure logs to the database. This may worsen the performance problem and could lead to storage issues. It's better to have centralized logging and monitoring systems for failure tracking rather than continually writing to the datastore.
E. Implement an SNMP logging system with alerts in case a network link is slow.
SNMP logging for network issues might help in diagnosing network-level problems, but it doesn't address application-level faults within the containers or microservices. It's more useful for diagnosing network link failures rather than issues specific to microservices performance.
To improve the ability to identify faults and improve the design of the application, a tagging methodology for tracing service execution and logging on exceptions with immediate notifications should be implemented. These actions enable better fault detection and faster responses to issues, ensuring smoother performance in a complex microservices environment.
Thus, the correct answers are:
B. Implement a tagging methodology that traces the application execution from service to service.
C. Add logging on exception and provide immediate notification.
In a continuous integration (CI) environment, software tools such as OWASP are used to check dependencies for vulnerabilities and other issues. These tools help ensure that the dependencies used in the application are secure and compatible.
Which two situations are flagged by these dependency checking tools in CI environments like OWASP? (Choose two.)
A. Publicly disclosed vulnerabilities related to the included dependencies.
B. Mismatches in coding styles and conventions in the included dependencies.
C. Incompatible licenses in the included dependencies.
D. Test case failures introduced by bugs in the included dependencies.
E. Buffer overflows to occur as the result of a combination of the included dependencies.
A. Publicly disclosed vulnerabilities related to the included dependencies.
C. Incompatible licenses in the included dependencies.
In modern software development, particularly in continuous integration (CI) environments, the use of third-party dependencies is common. However, these dependencies can introduce various risks, such as security vulnerabilities or license conflicts, if not properly managed. Tools like OWASP Dependency-Check help identify these risks by scanning the project dependencies. Here's why the selected answers are correct:
A. Publicly disclosed vulnerabilities related to the included dependencies.
Correct Answer. Tools designed for dependency checking like OWASP Dependency-Check focus on identifying vulnerabilities in the dependencies used by the application. These vulnerabilities are often publicly disclosed by the community or vendors, and the tools compare the version of the dependency used in the project with known databases of security vulnerabilities (e.g., CVE databases). If a publicly disclosed vulnerability is found in a dependency, the tool flags it, helping the development team address the issue before it can be exploited.
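As an illustration, once OWASP Dependency-Check has produced a JSON report, a pipeline step can fail the build when flagged CVEs exceed a severity threshold. The field names below reflect the typical report layout but should be treated as an assumption and verified against the version of the tool in use.

```python
import json
import sys

FAIL_SEVERITIES = {"CRITICAL", "HIGH"}   # illustrative gating policy

def gate(report_path: str) -> int:
    with open(report_path) as fh:
        report = json.load(fh)

    findings = []
    # Assumed report layout: a "dependencies" list, each entry optionally
    # carrying a "vulnerabilities" list of CVE records.
    for dep in report.get("dependencies", []):
        for vuln in dep.get("vulnerabilities", []) or []:
            if vuln.get("severity", "").upper() in FAIL_SEVERITIES:
                findings.append((dep.get("fileName"), vuln.get("name")))

    for file_name, cve in findings:
        print(f"FLAGGED: {cve} in {file_name}")
    return 1 if findings else 0

if __name__ == "__main__":
    sys.exit(gate("dependency-check-report.json"))
```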
C. Incompatible licenses in the included dependencies.
Correct Answer. Another important aspect of dependency management is ensuring that the licenses of the included dependencies are compatible with the project's license. Tools like OWASP Dependency-Check can identify license incompatibilities, which may cause legal or distribution issues down the line. For example, some dependencies may be licensed under more restrictive licenses that conflict with the project's chosen license, leading to potential legal complications or restrictions on how the software can be distributed or used.
B. Mismatches in coding styles and conventions in the included dependencies.
This is not typically flagged by dependency-checking tools. Tools like OWASP Dependency-Check focus on identifying security vulnerabilities and license conflicts, not issues related to coding styles or conventions. Coding style mismatches are generally addressed by static analysis tools or linters, not dependency checkers.
D. Test case failures introduced by bugs in the included dependencies.
Test failures resulting from bugs in dependencies would typically be identified during unit testing or integration testing, not by dependency-checking tools. These tools focus on known vulnerabilities and license issues, not runtime behavior or test failures.
E. Buffer overflows to occur as the result of a combination of the included dependencies.
Buffer overflows are a type of security vulnerability but are generally identified through security testing or manual code analysis rather than by dependency-checking tools. These tools are more focused on known vulnerabilities in dependencies (e.g., a CVE entry for a specific buffer overflow) rather than identifying new vulnerabilities like those caused by combinations of dependencies.
In CI environments, the use of dependency-checking tools such as OWASP Dependency-Check is essential for identifying issues related to vulnerabilities and license conflicts. These tools help maintain the security and legal compliance of the project by flagging dependencies with known vulnerabilities or incompatible licenses. Therefore, the correct answers are:
A. Publicly disclosed vulnerabilities related to the included dependencies.
C. Incompatible licenses in the included dependencies.
A network operations team is leveraging the cloud to automate the management of customer and branch locations. They require that all their tooling is ephemeral by design, meaning the entire automation environment can be recreated from scratch without manual intervention. The automation code and configuration state will be stored in Git for change control and versioning. The team plans to use VMs within a cloud provider environment, then configure open-source tools on these VMs to poll, test, and configure remote devices, as well as to deploy the tooling itself.
Which configuration management and/or automation tooling should be used to achieve this solution?
A. Ansible
B. Ansible and Terraform
C. NSO
D. Terraform
E. Ansible and NSO
B. Ansible and Terraform
The goal of this solution is to create an ephemeral environment in the cloud that can be easily recreated and managed through automation. Here's why Ansible and Terraform are the right choices:
Terraform:
Terraform is an infrastructure-as-code (IaC) tool that allows you to define your cloud infrastructure in code, which can be stored in version control systems like Git for change control. It enables you to automate the provisioning of cloud resources, such as VMs and networks, and makes the infrastructure easily reproducible.
In the described scenario, Terraform can be used to define and manage the VMs within the cloud provider environment, ensuring that the entire infrastructure can be recreated without manual intervention. It supports ephemeral infrastructure because it allows for clean creation and destruction of resources.
Ansible:
Ansible is a widely used configuration management and automation tool that can be used to configure the VMs once they are provisioned by Terraform. Ansible is ideal for this scenario because it enables configuration management of remote systems (e.g., installing tooling on VMs, configuring devices, and running tests).
It is agentless, meaning it can configure remote systems without requiring an agent to be installed on the remote devices. Ansible uses playbooks (written in YAML) to define the configurations, and it integrates well with Git for versioning.
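As a rough sketch of the "recreate from scratch" workflow, the driver below pulls the versioned code from Git, provisions the VMs with Terraform, and then configures the tooling on them with Ansible. The repository URL, directory layout, inventory path, and playbook name are all hypothetical; in practice these same CLI commands would normally be run from a CI job.

```python
import subprocess

REPO = "https://git.example.com/netops/automation.git"   # hypothetical repo

def sh(cmd: list[str], cwd: str = ".") -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True, cwd=cwd)

def rebuild_environment() -> None:
    # 1. Everything (automation code + configuration state) comes from version control.
    sh(["git", "clone", REPO, "automation"])
    # 2. Terraform provisions the ephemeral VMs in the cloud provider.
    sh(["terraform", "init"], cwd="automation/infra")
    sh(["terraform", "apply", "-auto-approve"], cwd="automation/infra")
    # 3. Ansible configures the open-source tooling on the new VMs and
    #    pushes configuration to the remote devices.
    sh(["ansible-playbook", "-i", "inventory", "site.yml"], cwd="automation/config")

if __name__ == "__main__":
    rebuild_environment()
```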
A. Ansible:
While Ansible is ideal for configuration management, Terraform is still required to provision the cloud infrastructure. Without Terraform, you would lack the ability to easily provision and manage cloud resources in a repeatable and versioned way.
C. NSO (Network Services Orchestrator):
NSO is a powerful tool for service orchestration in network environments, but it is more suited for network service automation and device configuration in complex network setups. In this case, the focus is on cloud infrastructure and ephemeral tooling, making Terraform and Ansible a more fitting choice.
D. Terraform:
While Terraform is excellent for provisioning infrastructure, Ansible is needed to configure the machines once they are up and running. Terraform alone does not handle the configuration of devices or installation of tools on VMs.
E. Ansible and NSO:
NSO is more tailored for network service automation, which might be overkill for this use case. The solution requires ephemeral infrastructure provisioning and automated configuration, which can be efficiently handled by Ansible and Terraform.
The combination of Ansible for configuration management and Terraform for provisioning cloud infrastructure offers the most efficient and scalable solution. These tools enable ephemeral environments, automation, and version control, aligning perfectly with the requirements of the network operations team.
Therefore, the correct answer is B. Ansible and Terraform.