Microsoft AZ-400 Designing and Implementing Microsoft DevOps Solutions Exam Dumps and Practice Test Questions Set 5 Q81-100
Question 81
You need to gradually release a new API version to a small percentage of users and automatically roll back if latency increases. Which strategy should you choose?
A. Blue-green deployment
B. Rolling deployment
C. Canary deployment
D. Manual approval gate
Answer: C
Explanation:
A canary deployment introduces a new version of an application to a small subset of users first, enabling teams to observe performance, validate stability, and check telemetry before expanding the release to all users. It supports automatic rollback when combined with metric-based alerts in Azure Monitor, ensuring service reliability. Blue-green is an all-or-nothing switch, rolling deployments gradually replace instances without user-targeted exposure, and manual approvals do not provide automated rollback. Canary aligns with AZ-400's progressive exposure and safe deployment patterns.
A canary deployment is a controlled release strategy where a new version of an application is initially exposed to only a small percentage of users. This approach allows teams to observe how the new version behaves under real production conditions before fully rolling it out. By targeting only a limited group, issues such as increased latency, errors, or unexpected behavior can be detected early without impacting the majority of users. This provides a safer and more reliable deployment approach compared to strategies that immediately push changes to all users.
Canary deployments work best when combined with strong monitoring and telemetry. Azure Monitor, Application Insights, and Log Analytics can be used to track performance metrics, error rates, and user experience. If the system detects abnormal behavior, automated rollback mechanisms can revert traffic to the stable version. This ensures minimal downtime and reduces the risk of widespread service failures.
Compared to blue-green deployments, which involve switching all traffic at once between two environments, canary deployments offer more granular control. Rolling deployments gradually replace instances but do not selectively expose features to specific user subsets. Manual approvals add delays and lack automated rollback capabilities. For these reasons, canary deployments align closely with AZ-400 principles that emphasize progressive exposure, automated validation, resilience, and safe release practices in modern DevOps workflows.
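As a sketch of how this looks in practice, Azure Pipelines supports a canary strategy directly in deployment jobs. The environment name, traffic increments, and step contents below are illustrative placeholders, not a complete pipeline:

```yaml
stages:
- stage: Deploy
  jobs:
  - deployment: DeployApi
    environment: production          # assumed environment name
    strategy:
      canary:
        increments: [10, 25]         # expose 10%, then 25%, before full rollout
        deploy:
          steps:
          - script: echo "Deploy new API version to canary instances"
        routeTraffic:
          steps:
          - script: echo "Shift the next traffic increment to the canary"
        postRouteTraffic:
          steps:
          # A check here could query Azure Monitor and fail the stage
          # (triggering the failure handler) if latency regresses.
          - script: echo "Validate latency against Azure Monitor metrics"
        on:
          failure:
            steps:
            - script: echo "Route all traffic back to the stable version"
```

In a real pipeline, an Azure Monitor alert check attached to the environment can reject the stage automatically when latency thresholds are breached, which is what makes the rollback hands-free.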
Question 82
Which Azure Boards tool helps identify bottlenecks by showing how work items accumulate at each stage over time?
A. Burndown chart
B. Cumulative Flow Diagram
C. Lead time widget
D. Assigned-to-me tile
Answer: B
Explanation:
A Cumulative Flow Diagram (CFD) shows the number of work items in each workflow state across time. When a band widens, it indicates a bottleneck. Burndown charts show remaining work, lead time widgets measure duration, and personal tiles do not show process flow. CFDs are explicitly recommended for process optimization in AZ-400.
A Cumulative Flow Diagram, or CFD, is a powerful visualization tool used in Azure DevOps to track the number of work items in each state over time. It provides a clear view of the workflow, showing how tasks move from backlog to active development, testing, and completion. By analyzing the width of the bands in the diagram, teams can identify bottlenecks or areas where work is accumulating. For example, if the “In Progress” band widens over time, it suggests that tasks are not moving to completion as expected, highlighting inefficiencies or resource constraints. CFDs support data-driven decisions for process improvements, enabling teams to optimize throughput and cycle time. This makes them an essential metric in DevOps environments and directly aligns with AZ-400 guidance on monitoring and improving software delivery processes.
Option A, burndown charts, visualize remaining work over time, showing how much work is left to complete in a sprint or iteration. While useful for tracking progress, burndown charts do not provide detailed information about bottlenecks or state transitions. Option C, lead time widgets, focus on measuring the elapsed time from work item creation to completion. Although important for process efficiency, they do not display state-based accumulation of work items over time. Option D, assigned-to-me tiles, are personalized dashboard elements that show the tasks assigned to an individual user. These tiles are useful for personal task management but provide no insight into overall workflow or process efficiency.
By using CFDs, teams gain visibility into workflow health, detect process bottlenecks early, and drive continuous improvement. This aligns with AZ-400 best practices for leveraging metrics and dashboards to optimize delivery and operational processes in DevOps environments.
Question 83
Which Terraform state storage option is recommended for security and team collaboration?
A. Local disk
B. Azure Storage with remote backend and state locking
C. Git repository
D. Azure DevOps Wiki
Answer: B
Explanation:
Terraform state should be stored in a secure remote backend such as Azure Storage with state locking support. This prevents corruption, enables team collaboration, and supports encryption and RBAC. Local disks are unsafe, Git repositories should never store state files, and Wikis are not storage backends.
Terraform state should always be stored in a secure and reliable remote backend, and Azure Storage is one of the recommended options when working in Azure environments. Storing the state remotely prevents corruption, supports team collaboration, and ensures that only one person or pipeline can modify the state at a time through state locking. Azure Storage accounts support this locking capability when combined with features such as blob leases, which help avoid race conditions that can break infrastructure configurations. Remote storage also provides built-in encryption at rest, network access controls, and integration with Azure RBAC, ensuring that only authorized users and service principals can access or modify the state.
Keeping the state file on a local disk is risky because it can easily be lost, corrupted, or overwritten, and it does not support collaboration across teams. Storing Terraform state in a Git repository is strongly discouraged because the state file may contain sensitive information, such as resource IDs, secrets, or connection strings, which should never be pushed to version control. Additionally, Git does not support locking, making concurrency issues more likely. Using a Wiki or documentation platform is not valid because these systems are not designed to store Terraform state or support required backend functionality. Using a secure remote backend aligns with DevOps best practices for reliability, consistency, and security.
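As an illustrative sketch (the resource group, storage account, and container names are placeholders), a pipeline step can point Terraform at an Azure Storage remote backend at init time. The Terraform configuration itself must declare an empty `backend "azurerm" {}` block for these settings to apply:

```yaml
steps:
- script: |
    terraform init \
      -backend-config="resource_group_name=rg-tfstate" \
      -backend-config="storage_account_name=sttfstatedemo" \
      -backend-config="container_name=tfstate" \
      -backend-config="key=prod.terraform.tfstate"
  displayName: Initialize Terraform with an Azure Storage backend
```

The azurerm backend acquires a blob lease on the state file during operations, so state locking works without any extra configuration.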
Question 84
Which method allows enabling or disabling a feature at runtime without redeploying?
A. Release approvals
B. Feature flags
C. Immutable infrastructure
D. Manual testing
Answer: B
Explanation:
Feature flags allow toggling features dynamically without deployment cycles, enabling progressive exposure and fast rollback. Other options do not offer runtime control. Feature flags are highlighted in AZ-400 under deployment strategies and progressive delivery.
Feature flags provide a powerful mechanism for controlling application behavior at runtime without requiring new deployments or infrastructure changes. By wrapping new functionality inside conditional checks tied to feature flags, teams can safely introduce changes, expose them to a limited number of users, and gradually increase availability as confidence grows. This progressive exposure significantly reduces risk during releases because issues can be mitigated instantly by disabling the flag rather than triggering a full rollback or redeployment. Feature flags are also highly valuable in A/B testing, experimentation, and phased rollouts, enabling data-driven decisions based on user behavior and performance metrics.
In contrast, the other options offer no runtime control. Release approvals gate a deployment before it proceeds, but once a release is out they provide no way to change behavior without running another deployment. Immutable infrastructure changes behavior by replacing instances wholesale, so adjusting a feature still requires provisioning and deploying new infrastructure. Manual testing validates functionality but does not control feature exposure at all. Feature flags, on the other hand, integrate directly into the application logic and can be controlled via configuration or a centralized feature management service, such as Azure App Configuration with Feature Management.
AZ-400 explicitly includes feature flags under deployment and release management strategies, highlighting their role in enabling progressive delivery, reducing deployment-related risk, accelerating feedback, and supporting continuous deployment practices.
Question 85
You want to deploy an application to AKS using Helm charts in Azure Pipelines. Which task should you use?
A. Docker Compose task
B. KubernetesManifest task
C. Helm deploy task
D. ARM template deployment task
Answer: C
Explanation:
Helm is the package manager for Kubernetes, and Azure Pipelines includes a Helm task to run helm install or helm upgrade commands. Docker Compose is not for Kubernetes, KubernetesManifest lacks Helm templating, and ARM templates deploy Azure resources, not Helm charts.
When deploying containerized applications as part of an Azure DevOps pipeline, each task option supports a different deployment scenario, and understanding their differences is essential for choosing the correct approach. Option A, the Docker Compose task, is typically used for local development and multi-container orchestration on a single host. It is not well-suited for production-grade orchestration environments such as Azure Kubernetes Service (AKS), because Docker Compose lacks the advanced scheduling, scaling, and self-healing features of Kubernetes.
Option B, the KubernetesManifest task, is designed specifically for Kubernetes deployments and works directly with manifest files such as deployments, services, and ingress resources. This task provides native integration with Kubernetes clusters, supports image substitution, and fits well when teams manage raw YAML files instead of higher-level templating tools.
Option C, the Helm deploy task, is ideal when using Helm charts to package and templatize Kubernetes resources. Helm provides versioning, reusable templates, and parameterized deployments, making it very effective for complex workloads or when managing multiple environments with consistent configuration patterns.
Option D, the ARM template deployment task, is focused on provisioning Azure infrastructure rather than deploying application workloads. ARM templates define Azure resources such as AKS clusters, networks, and storage, but they are not used to deploy containerized applications into the cluster itself. Therefore, while ARM is essential for infrastructure automation, KubernetesManifest and Helm tasks are typically the correct choices for application deployment in Kubernetes environments.
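As a hedged example, the built-in HelmDeploy task can run `helm upgrade --install` against an AKS cluster. The service connection, resource group, cluster, and chart names below are assumptions for illustration:

```yaml
steps:
- task: HelmDeploy@0
  displayName: Deploy Helm chart to AKS
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscription: 'my-service-connection'   # assumed service connection
    azureResourceGroup: 'rg-aks'
    kubernetesCluster: 'aks-cluster'
    command: 'upgrade'
    chartType: 'FilePath'
    chartPath: 'charts/myapi'                    # path to the chart in the repo
    releaseName: 'myapi'
    install: true    # behaves like helm upgrade --install
```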
Question 86
Which Azure Artifacts feature helps prevent deletion of important packages?
A. Making the feed public
B. Immutable retention
C. Using check-in notes
D. Storing packages in repos
Answer: B
Explanation:
Immutable retention ensures packages cannot be removed or altered during the retention period, important for compliance or audit scenarios. Public feeds expose packages unnecessarily, check-in notes don’t protect artifacts, and Git repos aren’t intended for package storage.
Immutable retention in Azure Artifacts ensures that once a package version is published, it cannot be modified or deleted for the duration of the defined retention period. This is especially important in organizations that must comply with regulatory, audit, or security requirements where maintaining an unaltered history of package versions is essential. By enabling immutable retention, teams guarantee that dependencies used in past builds remain accessible, reproducible, and trustworthy. This makes option B the correct choice because it directly addresses the need for long-term integrity and traceability of artifacts.
Option A, making the feed public, would unnecessarily expose internal or proprietary packages to external users, creating significant security and compliance risks. Public feeds are typically used for open-source or community scenarios, not for internal enterprise use. Option C, using check-in notes, has no impact on artifact protection. Check-in notes apply to version control workflows and cannot prevent deletion or modification of packages stored in Azure Artifacts. Option D, storing packages in Git repositories, is an anti-pattern because Git is not designed for binary package storage. Large binaries can cause repository bloat, slow cloning, and complicate version history. Azure Artifacts provides proper package versioning, retention policies, and access control, whereas Git repositories do not include these package-management capabilities. For these reasons, immutable retention remains the best solution for ensuring reliable, compliant, and protected long-term package storage.
Question 87
What should you configure to prevent secrets from being accidentally committed?
A. Manual peer review
B. Credential scanning in pipelines
C. SonarQube quality gates
D. Azure Storage access alerts
Answer: B
Explanation:
Credential scanning tools automatically detect exposed secrets such as API keys or passwords in PR validation. Manual review is unreliable, SonarQube checks code quality not secrets, and Azure Storage alerts do not prevent secret leaks.
Credential scanning in pipelines is the most effective and reliable method for detecting leaked secrets during pull request validation. These tools automatically scan code, configuration files, and commit history for sensitive information such as API keys, passwords, connection strings, tokens, and certificates. By running as part of the PR process, credential scanning ensures the issue is caught early, before the code is merged into the main branch or deployed to any environment. This aligns with best DevSecOps practices and AZ-400 principles that emphasize shifting security checks left, making automated scanning the correct choice.
Option A, manual peer review, is helpful for general code quality but cannot reliably detect secrets. Human reviewers often miss subtle patterns, especially when under time pressure. Relying solely on manual review introduces risk because a single overlooked secret can lead to security breaches or compromised infrastructure.
Option C, SonarQube quality gates, focuses on code quality, maintainability, and static analysis issues. Although SonarQube can detect some security vulnerabilities, it is not designed to detect leaked credentials, meaning it cannot replace specialized secret-scanning tools.
Option D, Azure Storage access alerts, has no connection to detecting secrets in source code. These alerts monitor unusual access patterns in storage accounts, which typically occur after a compromise, not before. They cannot prevent a secret from being committed in the first place.
Therefore, credential scanning in pipelines is the strongest and most proactive solution for preventing secret leaks during PR validation.
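One way to wire this into PR validation is a scan step that fails the build when a secret is found. This sketch uses the open-source gitleaks scanner run from its public container image; the image tag and paths are assumptions:

```yaml
steps:
- script: |
    docker run --rm \
      -v "$(Build.SourcesDirectory):/repo" \
      zricethezav/gitleaks:latest detect --source=/repo --no-git --verbose
  displayName: Scan sources for committed secrets
```

Because the step exits non-zero when findings exist, a branch policy that requires this build to pass blocks the merge automatically.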
Question 88
You need to trace requests across microservices running in AKS. What should you use?
A. Log Analytics queries only
B. Azure Alerts
C. Application Insights with distributed tracing
D. Container Registry logs
Answer: C
Explanation:
Application Insights enables distributed tracing, allowing developers to track requests flowing across microservices. Log Analytics stores logs but doesn’t provide tracing visualization. Alerts notify issues but don’t trace execution, and ACR logs are irrelevant.
Application Insights with distributed tracing provides comprehensive visibility into applications running in complex environments such as microservices on Azure Kubernetes Service. Distributed tracing allows developers and operations teams to track requests as they flow across multiple services, identify latency bottlenecks, and detect where errors occur in the call chain. By instrumenting applications with Application Insights SDKs, telemetry data including request duration, dependencies, exceptions, and custom events can be collected automatically. This provides actionable insights that help teams diagnose issues faster and optimize performance. Using distributed tracing, teams can correlate events across services, visualize the path of individual requests, and understand the impact of a specific service failure on the overall application.
Option A, Log Analytics queries alone, captures raw log data and metrics but does not provide an integrated visualization of request flow or distributed tracing, making it difficult to analyze multi-service interactions. Option B, Azure Alerts, can notify teams when certain thresholds are crossed, such as high latency or error counts, but alerts do not show the detailed path of requests across services. Option D, Container Registry logs, provide information about container image pushes and pulls but do not include any application-level telemetry or trace data.
Therefore, Application Insights with distributed tracing is the appropriate solution for monitoring request flows, diagnosing performance issues, and enabling observability in microservices environments. This aligns with AZ-400 objectives for collecting telemetry and implementing monitoring for containerized workloads.
Question 89
How can you block vulnerable packages from being used in Azure Artifacts?
A. Branch policies
B. Feed policies
C. Conditional pipeline tasks
D. Email notifications
Answer: B
Explanation:
Feed policies in Azure Artifacts allow enforcing rules such as blocking specific versions or known vulnerable packages. Branch policies apply to code, conditional tasks don’t restrict package usage, and emails don’t enforce security.
Feed policies in Azure Artifacts provide a way to enforce governance and security controls over package usage. They allow teams to define rules that block specific versions of packages, prevent the use of packages with known vulnerabilities, and restrict publishing based on compliance requirements. By implementing feed policies, organizations ensure that only trusted and approved packages are used in builds and deployments, reducing the risk of introducing security vulnerabilities or unstable dependencies into production. Feed policies are particularly important in large teams or enterprises where multiple developers or projects consume shared packages.
Option A, branch policies, control the behavior of source code repositories by enforcing rules on pull requests, required reviewers, build validations, or merge criteria. While branch policies are essential for maintaining code quality and preventing unauthorized merges, they do not directly govern package usage or prevent vulnerable dependencies from being included. Option C, conditional pipeline tasks, allow executing certain steps only when specific conditions are met, such as running tests or deployment steps, but they do not inherently block the usage of insecure or prohibited packages. Option D, email notifications, can inform teams about changes or violations but cannot enforce any package usage rules automatically.
By leveraging feed policies, organizations can create a controlled and secure package management process that aligns with DevOps best practices. This ensures that builds are reproducible, dependencies are safe, and compliance requirements are met, directly supporting AZ-400 skills for implementing secure and reliable package management strategies.
Question 90
Which approach allows CI, security scanning, and CD in one unified definition?
A. Classic release pipeline
B. YAML pipeline with a single stage
C. Multi-stage YAML pipeline
D. Manual deployment
Answer: C
Explanation:
Multi-stage YAML pipelines contain CI, test, scan, and deployment stages in a single versioned file, enabling automation and consistency. Classic pipelines lack full YAML flexibility, single-stage YAML is insufficient, and manual deployments break automation.
Multi-stage YAML pipelines in Azure DevOps provide a modern, integrated approach to continuous integration and continuous deployment. They allow teams to define the entire workflow, including build, test, security scanning, and deployment stages, within a single version-controlled YAML file. This ensures that the entire pipeline is consistent, reproducible, and traceable alongside the application code. By using multi-stage YAML pipelines, teams can enforce dependencies between stages, define conditional execution, and apply approvals or checks at specific points in the process. Each stage can target different environments, such as development, staging, and production, enabling safe, automated promotion of artifacts through the pipeline while minimizing manual intervention. Multi-stage pipelines also support template reuse, variable groups, and artifact sharing, which enhances maintainability and scalability across multiple projects or repositories. This approach aligns directly with AZ-400 best practices for implementing automated DevOps workflows, integrating testing and security, and ensuring deployment reliability.
Option A, classic release pipelines, are the older graphical approach to defining release workflows in Azure DevOps. While still supported, classic pipelines lack the flexibility, versioning, and full code-as-infrastructure benefits that YAML pipelines provide. They require separate maintenance, do not integrate as naturally with source control, and are less suitable for managing complex, multi-environment deployment workflows.
Option B, a single-stage YAML pipeline, is suitable for simple build-and-test scenarios but does not provide the ability to define multiple dependent stages, manage environment-specific deployments, or include complex checks and approvals. This makes it insufficient for teams looking to implement end-to-end CI/CD pipelines that incorporate security scanning, automated testing, and deployment promotion.
Option D, manual deployment, breaks the principle of continuous delivery. It introduces risk, human error, and inconsistency, and it cannot provide repeatable, automated feedback for integration or security issues. Manual processes are slow, error-prone, and do not scale well for modern DevOps practices.
Overall, multi-stage YAML pipelines are the most robust solution for implementing fully automated, end-to-end DevOps workflows that integrate build, test, scanning, and deployment in a secure, repeatable, and maintainable way, directly supporting AZ-400 objectives.
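A minimal skeleton of such a pipeline might look like the following; the stage names and step contents are illustrative placeholders:

```yaml
trigger:
- main

stages:
- stage: Build
  jobs:
  - job: BuildAndTest
    steps:
    - script: echo "Restore, compile, and run unit tests"
- stage: SecurityScan
  dependsOn: Build
  jobs:
  - job: Scan
    steps:
    - script: echo "Run dependency and credential scanning"
- stage: Deploy
  dependsOn: SecurityScan
  jobs:
  - deployment: DeployApp
    environment: production    # approvals and checks attach to this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Deploy the validated artifact"
```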
Question 91
What enables sharing standardized steps across multiple pipelines?
A. Variable groups
B. Pipeline templates
C. Work item queries
D. Check-in notes
Answer: B
Explanation:
Templates allow YAML reuse, enabling consistent build/test/deploy logic across repositories. Variables only store values, not steps. Queries and notes are irrelevant.
Pipeline templates in Azure DevOps provide a mechanism to define reusable YAML code for common build, test, or deployment steps. By creating templates, teams can centralize the logic for tasks, jobs, or stages that are shared across multiple pipelines or projects. This ensures consistency, reduces duplication, and makes maintaining pipelines much easier, because updates to the template automatically propagate to all pipelines that reference it. Templates can include parameters, enabling flexibility while still enforcing standardized workflows. For example, a template might define a standard build process with linting, unit tests, security scanning, and artifact publishing, which can then be used by any application repository without rewriting the steps.
Option A, variable groups, are used to store values such as connection strings, API keys, or configuration parameters that can be referenced across multiple pipelines. While variable groups help manage configuration centrally, they do not provide a way to reuse actual build or deployment steps. They control data, not workflow logic.
Option C, work item queries, allow teams to filter and view sets of Azure Boards work items based on criteria such as state, assigned user, or tags. They are useful for reporting and tracking work but do not influence pipeline execution or standardize steps.
Option D, check-in notes, provide metadata attached to version control commits, such as comments or reviewer information. These are purely informational and do not define reusable pipeline behavior.
Using pipeline templates ensures that pipelines follow consistent, repeatable processes, reduces human error, and aligns with AZ-400 best practices for scalable, maintainable DevOps workflows.
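As a sketch, a reusable steps template (here a hypothetical `build-steps.yml` in the same repository) might look like this:

```yaml
# build-steps.yml
parameters:
- name: buildConfiguration
  type: string
  default: Release

steps:
- script: echo "Build in ${{ parameters.buildConfiguration }} configuration"
- script: echo "Run linting, unit tests, and publish artifacts"
```

A consuming pipeline then references the template and overrides parameters as needed:

```yaml
# azure-pipelines.yml
steps:
- template: build-steps.yml
  parameters:
    buildConfiguration: Debug
```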
Question 92
What should be used to provision infrastructure before deploying an application?
A. Azure portal manual setup
B. Branch policy
C. Terraform or Bicep template stage
D. Manual approval
Answer: C
Explanation:
Infrastructure-as-code workflows use Terraform or Bicep in pipelines to create consistent environments. Manual portal changes risk drift, branch policies are unrelated, and approvals do not provision infrastructure.
Using Terraform or Bicep template stages in Azure DevOps pipelines is a best practice for provisioning infrastructure consistently and reliably. These infrastructure-as-code tools allow teams to define their cloud resources declaratively, including virtual networks, storage accounts, Kubernetes clusters, and other Azure services. By integrating Terraform or Bicep into a pipeline stage, the creation, update, and deletion of infrastructure becomes automated, repeatable, and version-controlled. This eliminates the risks associated with manual changes, ensures compliance with organizational standards, and reduces configuration drift between environments. Additionally, using IaC in pipelines enables the same definitions to be reused across multiple environments such as development, testing, and production, supporting repeatability and consistency. Automation also allows integration with approvals, policy checks, and testing to validate infrastructure changes before they are applied in critical environments.
Option A, manual Azure portal setup, introduces risk because changes are prone to human error, inconsistencies, and lack versioning. Manual setup makes reproducing or scaling environments difficult and prevents automation, which conflicts with DevOps best practices. Option B, branch policies, control how code is merged into branches and enforce build or test validation, but they do not provision or configure infrastructure. They are unrelated to the actual deployment of resources. Option D, manual approval, may be used to gate deployments or infrastructure changes but cannot create or configure resources on its own. Manual approval alone cannot replace automated provisioning.
Therefore, using Terraform or Bicep template stages in pipelines ensures secure, automated, and repeatable infrastructure deployment, aligning with AZ-400 objectives for integrating infrastructure-as-code into DevOps workflows.
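For instance (a sketch only; the service connection, resource group, and template path are assumptions), a Bicep-based infrastructure stage can run ahead of the application deployment stage:

```yaml
stages:
- stage: Infrastructure
  jobs:
  - job: Provision
    steps:
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'my-service-connection'   # assumed service connection
        scriptType: bash
        scriptLocation: inlineScript
        inlineScript: |
          az deployment group create \
            --resource-group rg-app \
            --template-file infra/main.bicep \
            --parameters environment=prod
- stage: DeployApp
  dependsOn: Infrastructure
  jobs:
  - job: Deploy
    steps:
    - script: echo "Deploy the application onto the provisioned resources"
```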
Question 93
How can you reduce build times by reusing dependencies?
A. Auto-scale build agents
B. Pipeline caching
C. Manual copying
D. Shallow cloning
Answer: B
Explanation:
Pipeline caching stores dependencies like npm or NuGet packages so subsequent builds are faster. Scaling agents doesn’t reuse dependencies, manual copying is error-prone, and shallow clones reduce commit history only.
Pipeline caching in Azure DevOps is a powerful technique to reduce build times by reusing dependencies across multiple runs. When a pipeline is configured with caching, common dependencies such as npm modules, NuGet packages, Maven artifacts, or other frequently used files are stored in a cache after the first build. Subsequent builds then restore these dependencies from the cache instead of downloading them from external repositories each time, which significantly reduces network calls and improves overall build speed. Cache keys can be defined based on dependency versions or lock files, ensuring that updates trigger cache refreshes while unchanged dependencies are reused efficiently. This approach not only saves time but also reduces load on external package repositories and increases build reliability.
Option A, auto-scaling build agents, helps handle higher build workloads by provisioning more agents, but it does not inherently reduce dependency download times because each agent may start with an empty environment. Option C, manual copying of dependencies, is prone to errors, difficult to maintain, and does not scale well for automated CI/CD pipelines. Option D, shallow cloning of repositories, only reduces the amount of Git history cloned and does not affect dependency resolution or caching of external packages, so its impact on build time is limited.
By leveraging pipeline caching, teams can achieve faster, more reliable, and reproducible builds, aligning with AZ-400 best practices for optimizing pipelines and improving CI/CD efficiency.
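For npm, the documented pattern uses the Cache task keyed on the lock file, so the cache refreshes whenever dependencies change (the cache path variable below follows Microsoft's published example):

```yaml
variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm

steps:
- task: Cache@2
  displayName: Cache npm packages
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'   # changes when the lock file changes
    restoreKeys: |
      npm | "$(Agent.OS)"
    path: $(npm_config_cache)
- script: npm ci
  displayName: Install dependencies (fast when the cache is restored)
```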
Question 94
Which feature enforces PR validation builds before merging?
A. Dashboard widget
B. Branch policies
C. Release gates
D. Wiki page rules
Answer: B
Explanation:
Branch policies enforce build success, tests, and code coverage before allowing merges into protected branches. Dashboards display data, gates apply to releases, and Wikis don’t enforce code rules.
Branch policies in Azure DevOps are a key mechanism for enforcing quality and consistency in code repositories. They allow teams to require that certain conditions are met before a pull request can be merged into a protected branch, such as the main or master branch. These conditions often include successful completion of build pipelines, passing unit or integration tests, and meeting code coverage thresholds. By enforcing these rules automatically, branch policies help prevent broken code, regressions, or untested features from being merged into critical branches. They also ensure that code reviews are conducted appropriately and that organizational standards for code quality are consistently applied across all teams and projects.
Option A, dashboard widgets, are useful for visualizing metrics such as build success, test results, and deployment status, but they are purely informational. They do not enforce rules or prevent merges, so they cannot replace branch policies. Option C, release gates, are applied to deployment pipelines and allow organizations to pause deployments until certain conditions are met, such as monitoring alerts or approval workflows. While release gates enforce conditions for deployment, they do not impact source code merges. Option D, wiki page rules, provide documentation and guidelines but cannot enforce any automated checks or validations in the code repository.
By using branch policies, teams can ensure that all code changes meet required quality standards before entering the main codebase. This reduces errors, increases code reliability, and aligns with AZ-400 best practices for integrating automated checks and quality controls into DevOps workflows.
Question 95
Which platform allows analyzing deployment failures using pipeline logs and telemetry together?
A. Work item queries
B. Excel sheets
C. Azure Pipelines + Azure Monitor integration
D. Monitor workbooks only
Answer: C
Explanation:
Integrating pipeline logs with Azure Monitor centralizes telemetry, errors, and performance data for faster root-cause analysis. Queries and Excel lack telemetry, and workbooks alone do not ingest pipeline logs.
Integrating Azure Pipelines with Azure Monitor provides a centralized platform for analyzing deployment failures by combining pipeline logs with telemetry data. This integration allows teams to correlate build and release steps with application performance, infrastructure metrics, and error events. When a deployment fails, the combined data helps identify the root cause quickly, whether it is due to a failed task, misconfiguration, or runtime issue in the target environment. By using this approach, teams gain real-time insights into failures and can implement proactive measures, such as automated alerts or corrective tasks in pipelines, to prevent recurrence. Centralized monitoring also supports long-term trend analysis, helping teams optimize deployment processes and improve reliability over time.
Option A, work item queries, are used for tracking and filtering tasks, bugs, or incidents within Azure Boards. While useful for project management and reporting, queries do not provide detailed telemetry or logs necessary to diagnose deployment failures. Option B, Excel sheets, can store data exported from pipelines or monitoring tools, but they are static and lack real-time integration, making them ineffective for correlating logs and telemetry for failure analysis. Option D, Monitor workbooks, allow visualization of telemetry data from Azure Monitor but do not automatically ingest or correlate pipeline execution logs unless additional integration is configured, limiting their effectiveness for end-to-end troubleshooting.
Using Azure Pipelines in combination with Azure Monitor ensures that pipeline failures, performance metrics, and errors are centralized, correlated, and actionable, enabling faster diagnosis and resolution. This aligns with AZ-400 objectives for monitoring, telemetry, and operational insights in DevOps workflows.
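As a sketch of the telemetry side of this correlation, assuming the application reports to Application Insights, a Kusto query like the following could surface failure and latency spikes in the window around a deployment (the `requests` table and its `success` and `duration` columns are part of the standard Application Insights schema; the time ranges are illustrative):

```kusto
// Failure count and 95th-percentile latency in 5-minute buckets over the
// last hour — compare these buckets against pipeline deployment timestamps
// to tie a failed or degraded release to its runtime symptoms.
requests
| where timestamp > ago(1h)
| summarize
    failedRequests = countif(success == false),
    p95DurationMs  = percentile(duration, 95)
  by bin(timestamp, 5m)
| order by timestamp asc
```

A query like this can be pinned to a dashboard or used in an alert rule, so a deployment that degrades latency is flagged within minutes of the pipeline run completing.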
Question 96
Which metric measures time from incident start to service restoration?
A. Cycle time
B. Deployment frequency
C. MTTR
D. Lead time
Answer: C
Explanation:
MTTR (Mean Time To Restore/Recover) measures operational recovery speed. Cycle time and lead time measure workflow efficiency, and deployment frequency measures how often releases occur.
MTTR, or Mean Time To Restore (sometimes referred to as Mean Time To Recovery), is a key DevOps metric that measures the average time it takes to recover from an incident or failure in a production environment. Tracking MTTR helps organizations understand the effectiveness of their operational processes, incident response, and monitoring systems. A lower MTTR indicates that the team can quickly detect, diagnose, and resolve issues, minimizing downtime and the impact on users. MTTR is often visualized on dashboards or reports and can be integrated with incident tracking systems, Azure Boards work items, or alerting platforms to provide actionable insights and continuous improvement opportunities.
Option A, cycle time, measures the time it takes for a work item to move from active development to completion. While it is useful for assessing development process efficiency and identifying bottlenecks, it does not capture production recovery performance. Option B, deployment frequency, tracks how often code changes are deployed to production. Although this reflects the speed and agility of delivery, it does not measure incident resolution. Option D, lead time, measures the duration from the creation of a work item or feature request until it reaches production. Like cycle time, lead time is a process efficiency metric rather than an operational reliability metric.
Focusing on MTTR allows teams to prioritize operational resilience, incident response automation, and monitoring effectiveness. This aligns with AZ-400 objectives for implementing metrics and monitoring strategies to improve operational performance, reduce downtime, and enhance service reliability in DevOps environments.
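The calculation behind MTTR is simple: average the elapsed time from incident start to restoration across incidents. A minimal sketch in Python, using hypothetical incident timestamps:

```python
from datetime import datetime, timedelta

def mttr(incidents):
    """Mean Time To Restore: average of (restored - started) across incidents."""
    durations = [restored - started for started, restored in incidents]
    return sum(durations, timedelta()) / len(durations)

# Hypothetical incident records: (incident start, service restored)
incidents = [
    (datetime(2024, 1, 3, 9, 0),  datetime(2024, 1, 3, 9, 45)),   # 45 minutes
    (datetime(2024, 1, 9, 14, 0), datetime(2024, 1, 9, 15, 15)),  # 75 minutes
]
print(mttr(incidents))  # → 1:00:00 (mean of 45 and 75 minutes)
```

In practice the timestamps would come from an incident tracking system or Azure Boards work items rather than being hard-coded, but the metric itself is just this average.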
Question 97
How can you automatically rotate secrets without storing credentials in code?
A. Store secrets in repo
B. Key Vault with Managed Identity
C. Environment variables
D. Manual refresh
Answer: B
Explanation:
Managed Identity allows apps to authenticate to Key Vault without storing credentials. Key Vault supports automatic rotation. Environment variables and repos are insecure, and manual refresh is inefficient.
Using Azure Key Vault with Managed Identity is the most secure and efficient way to manage secrets in DevOps pipelines and applications. Managed Identity provides a way for applications or pipelines to authenticate to Azure resources, including Key Vault, without the need for hard-coded credentials. This eliminates the risk of secrets being exposed in source code, configuration files, or environment variables. Key Vault also supports features such as automatic key and secret rotation, access policies, and audit logging, ensuring that sensitive information is securely stored and properly managed. By integrating Key Vault with pipelines or applications, teams can retrieve secrets at runtime, enforce least-privilege access, and maintain compliance with security standards.
Option A, storing secrets in repositories, is highly insecure because anyone with repository access can view or clone the secrets. This practice can lead to accidental exposure or leaks of sensitive information. Option C, using environment variables, provides temporary access to secrets but does not include strong access control, auditing, or automatic rotation, making it less secure for production environments. Option D, manual refresh of secrets, is inefficient and error-prone, requiring human intervention to update secrets in pipelines or applications, which increases operational overhead and the potential for mistakes.
By leveraging Key Vault with Managed Identity, organizations achieve a secure, automated, and auditable secret management process. This aligns with AZ-400 objectives for implementing secure pipelines, protecting sensitive information, and integrating automated secret management into DevOps workflows.
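As a minimal sketch of this pattern in an Azure Pipelines YAML job, using the built-in AzureKeyVault@2 task (the service connection and vault names below are placeholders, and the service connection is assumed to authenticate via a managed identity rather than a stored credential):

```yaml
# Fetch secrets from Key Vault at runtime — nothing sensitive lives in the repo.
steps:
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: 'my-service-connection'  # placeholder name
      KeyVaultName: 'my-keyvault'                 # placeholder name
      SecretsFilter: 'DbPassword'                 # fetch only what this job needs
      RunAsPreJob: true                           # expose secrets to all later steps
  - script: ./deploy.sh
    env:
      DB_PASSWORD: $(DbPassword)  # secrets must be mapped explicitly; values are masked in logs
```

Because the secret is resolved at runtime from the vault, rotating it in Key Vault takes effect on the next pipeline run with no pipeline or code change.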
Question 98
Which tool allows developers to run CI-like tasks locally to match pipeline environments?
A. Self-hosted agent
B. Git submodules
C. GitHub Codespaces or Dev Containers
D. Classic Editor
Answer: C
Explanation:
Dev Containers/Codespaces replicate pipeline-like environments locally, ensuring consistency. Self-hosted agents run pipelines, not local dev. Submodules manage repos, not environments, and Classic Editor is outdated.
GitHub Codespaces or Dev Containers provide developers with isolated, reproducible development environments that closely mirror the configuration of CI/CD pipelines. These environments are defined using Docker or container configurations, ensuring that every developer works with the same versions of tools, dependencies, and system libraries. By using Dev Containers or Codespaces, teams can avoid the “it works on my machine” problem, because local development matches the production or pipeline environment closely. This consistency reduces integration issues, simplifies onboarding for new developers, and enables faster iteration cycles. These containerized environments can also include pre-installed extensions, runtime configurations, and access to environment variables or secrets, further streamlining development.
Option A, self-hosted agents, are designed to run pipeline jobs and execute CI/CD tasks on a dedicated machine. They are not meant for local development or replicating pipeline-like environments on a developer’s workstation. Option B, Git submodules, allow managing dependencies between multiple repositories but do not provide an isolated or consistent environment for running code. Submodules help organize code, not manage development environments. Option D, the Classic Editor in Azure DevOps, is a graphical interface for defining pipelines. While still functional, it is outdated and does not provide the portability or containerization benefits of Dev Containers or Codespaces.
Using Dev Containers or Codespaces ensures reproducibility, reduces environment-related errors, and aligns local development with CI/CD pipelines. This approach supports AZ-400 objectives of creating reliable, consistent, and efficient development and deployment workflows.
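A Dev Container is defined by a `.devcontainer/devcontainer.json` file checked into the repository. A minimal illustrative sketch (the image, extension, and command below are assumptions for a .NET project, not part of the exam scenario):

```jsonc
// .devcontainer/devcontainer.json — every developer (and Codespaces) gets
// the same toolchain the pipeline uses.
{
  "name": "ci-parity",
  "image": "mcr.microsoft.com/devcontainers/dotnet:8.0",
  "customizations": {
    "vscode": {
      "extensions": ["ms-azure-devops.azure-pipelines"]
    }
  },
  "postCreateCommand": "dotnet restore"
}
```

Pointing this definition at the same base image the pipeline agents use is what closes the gap between local runs and CI runs.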
Question 99
How can you reduce code duplication across YAML pipelines?
A. Service hooks
B. Templates
C. Git branches
D. Test plans
Answer: B
Explanation:
Templates allow reusing stages, jobs, and steps across multiple pipelines. Hooks trigger events, branches manage code, and test plans manage manual/exploratory testing.
Templates in Azure DevOps pipelines allow teams to define reusable stages, jobs, and steps in YAML files that can be referenced across multiple pipelines. By using templates, organizations ensure that best practices, standardized processes, and critical tasks such as build steps, testing, security scanning, and deployment logic are consistently applied across different projects or repositories. Templates reduce duplication, improve maintainability, and make updates easier, because a change in the template propagates automatically to all pipelines that reference it. Parameters can be passed to templates to allow flexibility while keeping the core workflow consistent. This approach supports efficient DevOps practices, simplifies scaling pipelines across teams, and aligns with AZ-400 objectives of designing and implementing reliable, maintainable pipelines.
Option A, service hooks, are used to trigger actions in external systems based on events in Azure DevOps, such as work item changes or build completions. They are useful for integrations but do not provide reusable pipeline steps or stages. Option C, Git branches, are used to manage code versions and workflows, enforce policies, and isolate features, but they do not facilitate reusable pipeline logic. Option D, test plans, organize manual or exploratory testing activities and track test results, but they do not define or automate CI/CD processes.
Using templates ensures consistent, repeatable, and maintainable pipelines that reduce errors, improve efficiency, and enable teams to implement scalable DevOps workflows, fulfilling the requirements outlined in AZ-400 skills for pipeline design and automation.
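A minimal sketch of the pattern, with illustrative file and parameter names: a shared template file defines the steps once, and each pipeline references it, optionally overriding parameters.

```yaml
# build-template.yml — reusable steps with a typed parameter
parameters:
  - name: buildConfiguration
    type: string
    default: 'Release'
steps:
  - script: dotnet build --configuration ${{ parameters.buildConfiguration }}
    displayName: 'Build (${{ parameters.buildConfiguration }})'
```

```yaml
# azure-pipelines.yml — a consuming pipeline references the template
steps:
  - template: build-template.yml
    parameters:
      buildConfiguration: 'Debug'
```

Updating `build-template.yml` changes every pipeline that references it, which is exactly how templates eliminate duplicated YAML.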
Question 100
Which option enables canary-style traffic splitting for an App Service?
A. Traffic Manager only
B. Alerts
C. Front Door only
D. Azure Deployment Slots with traffic percentage
Answer: D
Explanation:
Deployment Slots allow assigning traffic percentages, enabling canary rollouts. Traffic Manager and Front Door support routing but not app-level canary rules, and alerts don’t control deployments.
Azure Deployment Slots provide a robust mechanism to implement progressive delivery, such as canary deployments, by allowing traffic to be split between the production slot and one or more staging slots. Using deployment slots, teams can deploy a new version of an application to a staging slot, validate it with a small percentage of live traffic, and then gradually increase traffic as confidence grows. This approach enables testing in a real environment, reduces the risk of impacting all users, and supports fast rollback if issues are detected. Traffic allocation can be automated and adjusted dynamically, making it an essential tool for safe, controlled deployments in cloud applications. Deployment slots also integrate with monitoring and alerts, so any performance or error signals can trigger rollback or additional checks, aligning with AZ-400 guidance on safe deployment strategies.
Option A, Traffic Manager, provides DNS-based load balancing and global routing but does not offer application-level traffic splitting for canary rollouts. It cannot gradually expose new application versions in a single environment, so it is not suitable for this scenario. Option B, Alerts, notify teams of issues such as failures or performance thresholds but do not control traffic or deployment strategies. Alerts are reactive rather than proactive deployment mechanisms. Option C, Azure Front Door, provides global routing, load balancing, and security features, but while it can route traffic geographically or based on latency, it cannot manage slot-specific traffic percentages or orchestrate a canary rollout within a single app service.
Therefore, Azure Deployment Slots with traffic percentage allocation are the most effective method for implementing canary deployments, ensuring controlled exposure, fast rollback, and alignment with DevOps best practices as outlined in AZ-400.
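A canary rollout with deployment slots can be driven from the Azure CLI. A hedged sketch, assuming hypothetical resource names and an authenticated CLI session:

```shell
# Deploy the new version to a "staging" slot, then route 10% of live traffic to it.
az webapp deployment slot create --resource-group my-rg --name my-app --slot staging

az webapp traffic-routing set --resource-group my-rg --name my-app \
    --distribution staging=10

# If telemetry stays healthy, re-run "set" with a higher percentage.
# To roll back instantly, clear the routing rules so 100% returns to production:
az webapp traffic-routing clear --resource-group my-rg --name my-app
```

The same commands can be scripted inside a release pipeline, with an Azure Monitor alert gating the step that raises the percentage.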