Microsoft AZ-204 Developing Solutions for Microsoft Azure Exam Dumps and Practice Test Questions Set 9 Q161-180

Visit here for our full Microsoft AZ-204 exam dumps and practice test questions.

Question 161

You need to trigger an Azure Function whenever a message is published to an IoT Hub. Which trigger should you use?

A) Event Grid trigger
B) Service Bus trigger
C) IoT Hub trigger
D) Blob Trigger

Answer: C) IoT Hub trigger

Explanation:

The IoT Hub trigger is built specifically for processing device-to-cloud messages from IoT devices. It handles partitions, device routing, and large, high-throughput message ingestion. When a device sends telemetry to IoT Hub, the trigger activates immediately, ensuring real-time processing. This makes it ideal for IoT workloads like telemetry pipelines, monitoring dashboards, and device alerts.
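
As a concrete illustration, here is a minimal in-process C# sketch of an IoT Hub-triggered function. It assumes the Microsoft.Azure.WebJobs.Extensions.EventHubs v4.x extension and an app setting named IoTHubConnection holding the hub's Event Hub-compatible connection string.

```csharp
using System.Text;
using Microsoft.Azure.EventHubs;   // EventData (Event Hubs extension v4.x)
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TelemetryFunction
{
    // Fires for every device-to-cloud message read from the IoT Hub's
    // built-in Event Hub-compatible endpoint ("messages/events").
    [FunctionName("ProcessTelemetry")]
    public static void Run(
        [IoTHubTrigger("messages/events", Connection = "IoTHubConnection")] EventData message,
        ILogger log)
    {
        string body = Encoding.UTF8.GetString(message.Body.Array);
        log.LogInformation($"Telemetry received: {body}");
    }
}
```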

When designing an event-driven workflow in Azure Functions, choosing the correct trigger is essential to ensure the system behaves reliably and efficiently. Each trigger type—Event Grid, Service Bus, IoT Hub, and Blob—addresses a different integration pattern and is optimized for specific workloads. Understanding their purposes and limitations helps clarify why the IoT Hub trigger is the correct choice when the goal is to process device-to-cloud messages as they are published.

A) Event Grid Trigger

Event Grid triggers allow Azure Functions to respond to lightweight events raised across Azure services. This system follows a push-based model, meaning Azure services emit events (such as “Blob Created,” “Subscription Updated,” “Resource Deleted”), and Event Grid routes those events to subscribers. Event Grid is highly scalable and very fast, making it excellent for event notification and fan-out patterns.

However, for this scenario Event Grid is not the ingestion path. With IoT Hub, Event Grid carries lifecycle and system events such as device created, device deleted, device connected, and device disconnected; it does not deliver the high-throughput telemetry stream itself. Event Grid is best for metadata-level events and for orchestrating event distribution, not for primary telemetry processing. Its role is awareness, not message handling.

Thus, Event Grid is helpful but not ideal when the requirement is to process the messages devices publish to IoT Hub.

B) Service Bus Trigger

A Service Bus trigger fires when new messages arrive in a Service Bus queue or topic. This trigger is designed for enterprise-integrated systems where guaranteed message delivery, dead-lettering, ordered processing, and duplicate detection are crucial. Service Bus provides extremely reliable messaging pipelines and supports large-scale distributed architectures.

Although this is powerful for asynchronous workflows, it has no built-in relationship with IoT Hub. If a device publishes telemetry, nothing lands in a Service Bus queue or topic unless IoT Hub message routing or some other component explicitly forwards the message there. That means it cannot trigger directly on device-to-cloud messages.

Service Bus is ideal for microservices communication, order processing, financial transactions, and large distributed systems, not for first-line ingestion of IoT telemetry.

C) IoT Hub Trigger — Correct Answer

IoT Hub triggers respond whenever IoT devices send data to the cloud. This is perfect for device telemetry, sensor readings, edge computing, and real-time monitoring. IoT Hub guarantees secure device-to-cloud messaging and supports millions of connected devices.

The trigger reads from the hub's built-in Event Hub-compatible endpoint (messages/events), consuming telemetry across partitions with checkpointing, so processing scales with message volume and resumes cleanly after restarts. Because it binds directly to the ingestion endpoint, no routing rules, queues, or intermediate storage are required: the function fires as soon as a device message arrives.

D) Blob Trigger

A Blob Trigger is built for scenarios where a function needs to run automatically when a blob is added or updated in a container. It integrates tightly with Azure Blob Storage and passes the blob's path, metadata, and content stream into the function.

This means that as soon as a file lands in the container, whether it's a log file, image, CSV, JSON artifact, telemetry batch, or backup dump, the function picks it up and processes it without manual polling or extra messaging systems.

Blob Triggers are ideal for:

ETL/ELT processing

Image resizing or OCR

Log ingestion pipelines

Data transformation workflows

Automated content validation

Batch processing and archival

Backup or export automation

The trigger is reliable, built in, and purpose-designed for file workflows. The requirement here, however, is to react to device messages published to IoT Hub, not to file uploads, so the Blob Trigger is not the correct answer for this question.

Event Grid triggers can also receive events from IoT Hub, but their purpose is different. Event Grid responds to system events such as device creation or connection changes, not raw telemetry messages. If you need metadata or lifecycle events rather than actual telemetry, Event Grid would be appropriate. However, for direct device message ingestion, it’s not suitable.

Service Bus triggers are used for enterprise messaging patterns such as commands, workflows, or business events. IoT workloads typically generate high-frequency telemetry, and Service Bus does not integrate natively with IoT Hub’s message routing without additional configuration. It creates extra overhead and adds unnecessary complexity.

Blob triggers activate when new blobs are created, which is not relevant to IoT telemetry unless you store messages in blobs first. This introduces delays and reduces real-time processing capability. The IoT Hub trigger is the correct and direct integration.

Question 162

You need to create a workflow that starts automatically when a new message arrives in a Storage Queue. Which service should you use?

A) Azure Automation
B) Azure Logic Apps
C) Azure Virtual Machines
D) Azure Synapse

Answer: B) Azure Logic Apps

Explanation:

Logic Apps provide built-in connectors for Storage Queues, allowing you to automatically trigger workflows whenever a new message arrives. This is ideal for processing orders, notifications, scheduled tasks, or integrating multiple systems without code. Logic Apps are serverless, easy to manage, and offer retries, monitoring, and reliability.

Azure Automation is designed for managing infrastructure or running scripted tasks like patching servers. It does not integrate natively with message-based triggers and cannot respond instantly to queue events.

Virtual Machines can run custom code to poll queues, but this is inefficient, costly, and requires you to maintain servers, scaling, and monitoring. It defeats the purpose of serverless workflow automation.

Azure Synapse is focused on analytics and data warehousing rather than event-driven triggers. It is not designed for queue-based business workflows. Logic Apps directly satisfy the requirement for queue-triggered automation.

Question 163

You must secure your Web App so only requests from an Application Gateway are allowed. Which solution should you implement?

A) Disable HTTPS
B) Use Private Endpoint with VNet integration
C) Allow all public traffic
D) Use custom domains only

Answer: B) Use Private Endpoint with VNet integration

Explanation:

A private endpoint ensures your Web App can only be accessed through a private IP within the virtual network. You then configure Application Gateway inside (or peered with) that network so its backend pool forwards requests to the private endpoint. This method isolates your app from public exposure and ensures only Application Gateway communicates with it, providing improved security and traffic inspection options.

When securing an Azure-based application, especially one that handles sensitive data, internal APIs, or private workloads, it is essential to restrict public exposure and provide network-level isolation. Azure offers several methods for controlling access to App Services, Storage Accounts, Functions, SQL Databases, and other resources. Among the available options, Private Endpoints with VNet integration provide the strongest and most secure configuration by ensuring all traffic flows privately within your Azure Virtual Network rather than over the public internet.

Understanding why this is the correct approach becomes much clearer when evaluating each option in detail.

A) Disable HTTPS

Disabling HTTPS is extremely insecure and never recommended in any environment—production, development, or testing. HTTPS encrypts traffic between clients and the service, ensuring confidentiality, integrity, and protection from man-in-the-middle attacks. Without HTTPS, sensitive information such as credentials, tokens, API keys, or application data travels in plain text over the network.

Disabling HTTPS creates numerous vulnerabilities:

Attackers can intercept traffic.

Request headers and payloads become readable.

Authentication tokens can be stolen.

Regulatory compliance (HIPAA, PCI-DSS, SOC2, GDPR) is violated.

Beyond security concerns, Azure actively encourages enforcing HTTPS and may warn against insecure configurations. Because disabling HTTPS weakens security drastically, it can never be the correct method for protecting an application.

C) Allow all public traffic

Allowing all public traffic is the opposite of secure network architecture. This option leaves your application exposed to the entire internet, making it vulnerable to:

Port scanning

Botnet probing

Unauthorized access

Brute-force attacks

DDoS attempts

Exploitation of known vulnerabilities

While Azure App Services and PaaS platforms are built with strong defenses, leaving everything publicly open significantly increases risk. It also conflicts with Zero-Trust security principles and modern cloud security models, which require limiting exposure and verifying identity and network trust continuously.

Public exposure may be acceptable for basic websites, but not for internal APIs, business applications, or anything handling confidential data. Thus, this is not the correct option.

D) Use custom domains only

Custom domains allow you to map friendly URLs (like app.company.com) to your Azure App Service. While custom domains improve branding, user experience, and sometimes routing, they do not enhance security by themselves.

A resource using a custom domain can still be publicly exposed and reachable over the internet. Custom domains do not:

Restrict access

Provide private networking

Encrypt traffic beyond what HTTPS already provides

Prevent external threats

Offer network isolation

In essence, custom domains are a cosmetic and functional improvement but do nothing to secure or privatize your application. They cannot ensure private-only access, so this option is not the correct choice for secure connectivity.

B) Use Private Endpoint with VNet integration — Correct Answer

Private Endpoints allow your service (App Service, Storage Account, SQL Database, Function, etc.) to be accessed through a private IP inside your Azure Virtual Network (VNet). Instead of using a public endpoint, your application becomes reachable only through your private network.

Key security advantages include:

No public internet exposure

Traffic stays within the Azure backbone network, never leaving to the public internet

Access controlled through Network Security Groups (NSGs)

Compatible with VPN and ExpressRoute for hybrid environments

Zero-Trust-aligned private connectivity

Eliminates risks from public-facing attack vectors

Paired with VNet integration, your App Service or Function App can also securely reach other private resources (like SQL DB, Storage, Key Vault) without exposing anything publicly.

This combination ensures sensitive workloads operate entirely within isolated, controlled, and encrypted private network boundaries.

For scenarios requiring maximum security, compliance, or internal-only access, Private Endpoints with VNet integration are the industry-standard best practice.

Disabling HTTPS is extremely unsafe and does nothing to control access. It compromises security and exposes traffic to potential interception.

Allowing all public traffic removes any restrictions and makes the application vulnerable to attacks. This contradicts the requirement to restrict access.

Using custom domains provides branding and friendly URLs but does not provide security controls, access restrictions, or network isolation. The Private Endpoint approach delivers the required isolation and security level.

Question 164

You need to run multiple containers in a single environment with a shared network and scaling rules. What should you use?

A) Azure Container Instances (single container mode)
B) Azure Kubernetes Service
C) Azure Functions
D) Logic Apps

Answer: B) Azure Kubernetes Service

Explanation:

AKS is built for orchestrating multiple containers, managing scaling, networking, high availability, service discovery, and deployments. It supports multi-container environments, shared volumes, and advanced routing. For microservices, distributed workloads, and containerized apps that must grow based on demand, AKS is the best solution.

ACI single container mode cannot coordinate multi-container microservices or handle advanced networking. It’s meant for simple container runs, not orchestrated workloads.

Azure Functions are serverless event-driven compute and are not designed to run groups of containers as a service mesh or microservice architecture.

Logic Apps orchestrate workflows but cannot host containers or manage containerized environments. AKS meets all the requirements for multi-container scaling and orchestration.

Question 165

You need to perform sentiment analysis on customer chat messages. Which Azure service should you choose?

A) Azure Machine Learning
B) Azure Cognitive Services Text Analytics
C) Azure Synapse SQL Pools
D) Azure Data Factory

Answer: B) Azure Cognitive Services Text Analytics

Explanation:

Text Analytics provides ready-to-use sentiment analysis with no machine learning experience required. You simply send text, and the service returns sentiment scores and classification. It is fast, scalable, and ideal for chatbots, customer support, and feedback analysis.

Azure Machine Learning can also perform sentiment analysis, but it requires model training, data preparation, and infrastructure management. This is unnecessary when a prebuilt model already exists.

Azure Synapse SQL is not designed for real-time natural language analysis. It handles structured data, not sentiment interpretation.

A) Azure Machine Learning

Azure Machine Learning (AML) is a comprehensive platform for building, training, and deploying custom machine learning models. It is ideal for organizations that require highly specialized models or predictive analytics beyond out-of-the-box solutions. While AML can certainly process text data through natural language processing (NLP) models you develop, it requires significant setup, training data, and expertise to create, deploy, and maintain models. It is not a turnkey solution for quickly analyzing text to extract sentiment, key phrases, or entities.

B) Azure Cognitive Services Text Analytics — Correct Answer

Azure Cognitive Services Text Analytics provides prebuilt, AI-powered natural language processing capabilities that require no model training or infrastructure management. It is specifically designed for analyzing textual data and can perform tasks such as:

Sentiment analysis: Determine positive, negative, or neutral sentiment in text.

Key phrase extraction: Identify important words or phrases in a document.

Entity recognition: Detect names of people, organizations, locations, dates, and more.

Language detection: Automatically identify the language of a text snippet.

Text Analytics is ideal for scenarios such as processing customer feedback, social media posts, surveys, chat logs, or emails. Its simplicity, scalability, and prebuilt AI models allow developers to quickly integrate NLP functionality into applications without the overhead of building and maintaining custom models.

Because the question focuses on analyzing text to extract meaning or insights, Azure Cognitive Services Text Analytics is the correct choice.
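
To show how little code a prebuilt model requires, here is a minimal C# sketch using the Azure.AI.TextAnalytics SDK; the endpoint and key values are placeholders for your own resource.

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

class Program
{
    static void Main()
    {
        // Placeholder endpoint and key for a Language/Text Analytics resource.
        var client = new TextAnalyticsClient(
            new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<your-key>"));

        // One call returns overall sentiment plus per-class confidence scores.
        DocumentSentiment result = client.AnalyzeSentiment(
            "The support agent resolved my issue quickly. Great service!");

        Console.WriteLine($"Sentiment: {result.Sentiment}");
        Console.WriteLine($"Positive:  {result.ConfidenceScores.Positive:0.00}");
        Console.WriteLine($"Negative:  {result.ConfidenceScores.Negative:0.00}");
    }
}
```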

C) Azure Synapse SQL Pools

Azure Synapse SQL Pools are designed for data warehousing and analytics. They provide high-performance, scalable query capabilities over structured and semi-structured datasets. While Synapse can store and query textual data, it does not provide built-in natural language processing or text analysis. Any text analytics with Synapse would require integrating external services or custom functions. It is optimized for aggregations, reporting, and large-scale structured queries—not for extracting sentiment, entities, or other NLP features from unstructured text.

D) Azure Data Factory

Azure Data Factory (ADF) is a cloud-based ETL (extract, transform, load) and data integration service. It orchestrates data movement, transformation, and pipelines between various data sources. While ADF can move textual data between systems and trigger analytics processes, it does not itself provide text analysis or AI capabilities. ADF is primarily used for workflow automation, scheduling, and integration, not for extracting meaning from text.

Azure Machine Learning: Custom ML models, requires training and setup — overkill for basic text analysis.

Azure Synapse SQL Pools: Optimized for structured data analytics, no NLP capabilities.

Azure Data Factory: Data integration and orchestration, not analysis.

Azure Cognitive Services Text Analytics: Prebuilt AI for text analysis, NLP, sentiment, key phrases, and entities — turnkey solution.

Data Factory is an ETL service, not an AI service. It cannot detect sentiment or interpret text. Cognitive Services is the most appropriate and efficient option.

Question 166

You need to run a container on-demand for short tasks without managing Kubernetes or VMs. What should you choose?

A) Azure Web Apps
B) Azure Container Instances
C) Azure Kubernetes Service
D) Azure Functions

Answer: B) Azure Container Instances

Explanation:

ACI lets you run containers without managing infrastructure. It is ideal for burst workloads, isolated container tasks, CI/CD operations, and quick deployments. Containers start quickly and stop when done, offering cost-efficient execution without orchestrator overhead.

Web Apps can host containers but are long-running environments—not ideal for short on-demand processes.

AKS is for complex container orchestration requiring nodes, scaling, and cluster management. It is far too heavy for simple run-and-stop container tasks.

Azure Functions run code rather than arbitrary containers (container deployment is possible only on specific hosting plans). They are not designed to run containerized workloads on demand the way ACI does. ACI is the perfect match.

Question 167

You want to capture logs and metrics from your AKS cluster for monitoring. What should you enable?

A) Azure Security Center only
B) Container Insights
C) Azure Backup
D) Event Grid

Answer: B) Container Insights

Explanation:

Container Insights collects node metrics, pod status, CPU usage, memory consumption, and logs. It’s the dedicated observability tool for AKS. It also enables dashboards and alerts, making it ideal for proactive health monitoring and debugging.

Security Center enhances cluster security but does not provide detailed metrics or logs like Container Insights.

Azure Backup is for data and VM backups, not cluster telemetry.

Event Grid handles event routing and notifications but does not collect metrics or logs. Container Insights is the correct solution.

A) Azure Security Center only

Azure Security Center (now part of Microsoft Defender for Cloud) primarily provides security management and threat protection. It helps identify vulnerabilities, monitor compliance, and detect security anomalies. While Security Center can report on container security risks and vulnerabilities, it does not provide detailed performance monitoring or telemetry for running containers. Its focus is on security, not operational insights, so it cannot give metrics like CPU usage, memory, or container-level health.

B) Container Insights — Correct Answer

Container Insights is an Azure Monitor solution designed specifically for monitoring the performance and health of container workloads. It collects telemetry such as:

CPU and memory usage per container, pod, or node

Container restart events and failure trends

Node and cluster health in AKS (Azure Kubernetes Service)

Logs and events for troubleshooting

It enables real-time monitoring, alerting, and performance analysis of containers, helping operators maintain reliable and scalable containerized applications. Container Insights integrates with Azure Monitor Workbooks and dashboards, providing visualizations for resource utilization and health metrics at the cluster, node, or container level.

Because the question focuses on monitoring and observing containers, Container Insights is the appropriate and purpose-built tool.

C) Azure Backup

Azure Backup is designed to protect data by taking backups of VMs, SQL databases, and file shares. While it ensures data recovery in case of accidental deletion or corruption, it does not provide live monitoring or performance metrics for running containers. Backup focuses purely on data protection, not operational visibility.

D) Event Grid

Event Grid is a fully managed event routing service that delivers events from Azure resources to subscribers for reactive workflows. While it can notify you of events such as container creation or deletion, it is not designed for continuous monitoring of container performance, resource utilization, or health metrics. Event Grid provides event notifications, not operational insights or analytics.

Azure Security Center: Security-focused, not performance monitoring

Azure Backup: Data protection, not monitoring

Event Grid: Event-driven notifications, not performance visibility

Container Insights: Purpose-built for container performance, health, and telemetry

Question 168

You need to host an API in a fully managed, serverless environment with automatic scaling. What should you use?

A) Azure API Apps (Classic)
B) Azure Functions HTTP Trigger
C) Virtual Machines
D) AKS

Answer: B) Azure Functions HTTP Trigger

Explanation:

HTTP-triggered Functions let you create APIs with serverless compute. They scale instantly, require no infrastructure, and integrate with API Management easily. This is ideal for lightweight APIs, prototypes, mobile backends, and event-driven workflows.
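
For illustration, here is a minimal HTTP-triggered function in C# (in-process model); the route and function names are arbitrary examples.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HelloApi
{
    // Exposes GET /api/hello/{name}; the platform scales instances
    // automatically based on incoming request volume.
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "hello/{name}")]
        HttpRequest req,
        string name)
    {
        return new OkObjectResult(new { message = $"Hello, {name}" });
    }
}
```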

API Apps are older and not fully serverless; they follow the App Service model with dedicated compute.

Virtual Machines require full management, patching, scaling, and networking configuration. They are not serverless and incur continuous cost.

AKS is powerful but designed for container microservices, not simple serverless API endpoints. Functions meet all serverless requirements perfectly.

Question 169

You need to ensure that data stored in a Storage Account cannot be deleted accidentally. Which feature should you enable?

A) Snapshot
B) Access Tier Cool
C) Soft Delete
D) Private Endpoint

Answer: C) Soft Delete

Explanation:

Soft Delete keeps deleted blobs for a configurable retention period, allowing you to restore them if accidentally removed. This protects against accidental or malicious deletion and is essential for compliance and data recovery.

Snapshots create a restore point but do not prevent deletion of the parent blob. They require management and incur extra cost.

Cool access tier reduces storage cost but does not protect against deletions.

Private Endpoints secure access paths but do not address accidental data removal. Soft Delete is the correct choice for deletion protection.

When managing data in Azure Storage, protecting against accidental deletion is a critical requirement. Azure provides multiple mechanisms to secure and manage data, including snapshots, access tiers, soft delete, and private endpoints. Each option serves a distinct purpose, and understanding the differences helps determine the best solution for preventing permanent data loss due to accidental deletion or overwriting.

A) Snapshot

A snapshot is a point-in-time, read-only copy of a blob. Snapshots allow users to restore a previous version of a blob if the current version is lost or corrupted. While snapshots provide a form of versioning, they do not prevent the original blob from being deleted. If a user accidentally deletes the base blob, snapshots alone do not automatically protect against this, because snapshots themselves are tied to the base blob’s lifecycle. Snapshots are excellent for backup and recovery in planned scenarios, but they do not provide automated protection against unintentional deletions.

B) Access Tier Cool

Azure Storage provides different access tiers (Hot, Cool, and Archive) to optimize cost based on the frequency of data access. The Cool tier is intended for infrequently accessed data and offers lower storage costs but higher access costs. While selecting an access tier is important for cost management, it does not provide any deletion protection or recovery mechanism. Choosing Cool versus Hot only affects pricing and retrieval latency—it does nothing to prevent accidental deletion of blobs or files.

C) Soft Delete — Correct Answer

Soft Delete is a built-in data protection feature in Azure Blob Storage that protects data from accidental or malicious deletion. When soft delete is enabled, deleted blobs or blob versions are retained for a configurable retention period, typically ranging from 1 to 365 days. During this period, deleted data can be restored without any extra configuration or third-party tools, making it an extremely effective safeguard.

Soft Delete works transparently: if a blob is accidentally deleted, users or administrators can simply restore it to its original state within the retention period. It also supports versioned blobs, meaning that if a new version overwrites an existing blob, previous versions can also be restored. This makes Soft Delete an ideal solution for environments where accidental deletions are possible, such as collaborative data pipelines, user-generated content, or automated workflows that may overwrite critical data.

Key benefits of Soft Delete include:

Accidental deletion protection: Users can recover mistakenly deleted blobs or versions without downtime.

Retention control: Administrators can define the retention period to balance storage costs with data recovery needs.

Seamless integration: Soft Delete works with standard Azure Blob Storage APIs and SDKs, making it easy to implement across existing applications.

Compliance support: By retaining deleted data, Soft Delete can help meet regulatory or auditing requirements.

Unlike snapshots, which are read-only copies that must be managed manually, Soft Delete automates the retention and recovery of deleted data, reducing operational risk and simplifying management.
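
As a sketch of the recovery path, the following C# snippet (Azure.Storage.Blobs SDK) lists soft-deleted blobs in a container and restores them; the connection string and container name are placeholders.

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var container = new BlobContainerClient("<connection-string>", "data");

// Include soft-deleted blobs in the listing, then undelete each one.
// Restore succeeds only while the soft-delete retention period is active.
await foreach (BlobItem blob in container.GetBlobsAsync(states: BlobStates.Deleted))
{
    if (blob.Deleted)
    {
        await container.GetBlobClient(blob.Name).UndeleteAsync();
    }
}
```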

D) Private Endpoint

Private Endpoints provide secure network access to Azure Storage by allowing resources to be accessed over a private IP in a Virtual Network (VNet). This mechanism enhances security by preventing public internet access and mitigating exposure to attacks. While a Private Endpoint is critical for security and controlling who can access the storage account, it does not prevent data from being deleted. Users with legitimate access through the private network can still delete blobs if no other protection mechanism is in place. Therefore, Private Endpoints enhance security but are unrelated to accidental deletion protection.

Question 170

Your Azure Function needs to connect to a SQL Database securely without passwords. What should you configure?

A) SQL user authentication
B) Connection string in app settings
C) Managed Identity
D) Hidden environment variables

Answer: C) Managed Identity

Explanation:

A Managed Identity enables your Function App to authenticate to SQL using Azure AD tokens instead of passwords. This eliminates secret management and improves security. SQL Database supports Azure AD authentication natively, making this approach ideal for passwordless connectivity.

SQL user authentication requires passwords and storage of credentials.

Connection strings stored in app settings still contain secrets and require manual rotation.

Hidden environment variables only hide values but do not eliminate the underlying secret. Managed Identity directly solves the requirement.
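
A minimal sketch, assuming Microsoft.Data.SqlClient 3.0+ and that the Function App's managed identity has already been created as a user in the target database; the server and database names are placeholders.

```csharp
using System;
using Microsoft.Data.SqlClient;

// No password anywhere: the driver acquires an Azure AD access token for the
// Function App's managed identity ("Active Directory Default" mode).
var connectionString =
    "Server=tcp:myserver.database.windows.net,1433;" +
    "Database=mydb;" +
    "Authentication=Active Directory Default;" +
    "Encrypt=True;";

using var connection = new SqlConnection(connectionString);
connection.Open();

using var command = new SqlCommand("SELECT SYSTEM_USER;", connection);
Console.WriteLine(command.ExecuteScalar()); // prints the identity's name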

Question 171

You want to run background tasks in a .NET Web App without blocking HTTP requests. What should you implement?

A) Long-running controller actions
B) Azure WebJobs
C) Static constructors
D) Azure Policies

Answer: B) Azure WebJobs

Explanation:

WebJobs run background processes within App Service, perfect for scheduled tasks, queue processing, or long-running jobs. They do not block the web app and operate independently with continuous or triggered modes.

Long-running controller actions block thread pools and degrade performance, causing timeouts and poor scalability.

Static constructors only run once per application lifetime and cannot host background jobs.

Azure Policies enforce governance but have no relation to background job execution. WebJobs are built specifically for this purpose.

A) Long-running controller actions

In web applications, particularly ASP.NET or ASP.NET Core, controller actions are typically designed to handle HTTP requests and return responses quickly. Long-running controller actions are discouraged because they can tie up server resources, cause request timeouts, and degrade the responsiveness of the application. While technically possible, using controllers for background processing is inefficient and does not scale well. There is no built-in retry, scheduling, or monitoring for these long-running tasks, making them unsuitable for reliable background processing in production environments.

B) Azure WebJobs — Correct Answer

Azure WebJobs provide a lightweight, scalable mechanism for running background tasks within the context of an Azure App Service. They can be used to process files, handle queues, perform scheduled tasks, or execute any long-running background job without blocking the main web application.

WebJobs support multiple execution patterns:

Continuous WebJobs: Run constantly in the background, ideal for tasks like real-time queue processing.

Triggered WebJobs: Run on-demand or on a schedule, suitable for batch jobs, ETL processes, or periodic maintenance tasks.

Key advantages of Azure WebJobs include:

Integration with Azure services: Automatically connect to Storage Queues, Service Bus, or Blob Storage to react to events.

Scalability: Can scale along with the App Service plan.

Simplified deployment: Packaged with the web app, eliminating the need for separate hosting infrastructure.

Reliability: Built-in retry logic for triggers and monitoring capabilities through Azure Application Insights or logs.

Because WebJobs run independently of user requests, they are the ideal choice for executing background processing reliably and efficiently, especially for tasks that are long-running or triggered by external events.
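
A minimal sketch of a continuous WebJob host built with the WebJobs SDK (version 3.x with the Storage Queues extension is assumed); the queue name "orders" is a placeholder.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

class Program
{
    static async Task Main()
    {
        // Host the WebJob; it runs alongside the web app without
        // consuming request threads.
        var builder = new HostBuilder().ConfigureWebJobs(b =>
        {
            b.AddAzureStorageCoreServices();
            b.AddAzureStorageQueues(); // enables [QueueTrigger] bindings
        });

        using var host = builder.Build();
        await host.RunAsync();
    }
}

public class Functions
{
    // Invoked whenever a message lands on the "orders" queue,
    // with built-in retries on failure.
    public static void ProcessOrder(
        [QueueTrigger("orders")] string message, ILogger logger)
    {
        logger.LogInformation($"Processing order: {message}");
    }
}
```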

C) Static constructors

Static constructors in C# are used to initialize static members of a class. They run once per type and are executed before the type is first used. While they are useful for initializing configuration, constants, or shared resources, they are not designed for background processing. A static constructor cannot run on a schedule, respond to events, or handle long-running operations reliably. Using static constructors for background tasks would block type initialization and potentially delay application startup, making them an improper solution for recurring or asynchronous jobs.

D) Azure Policies

Azure Policies are used to enforce organizational standards and compliance across Azure resources. They allow administrators to audit, enforce, or deny configurations based on rules (e.g., requiring tags, restricting VM SKUs, or enforcing network security rules). While essential for governance and compliance, Azure Policies have no mechanism for executing background processing, running jobs, or handling long-running tasks. They operate at the management plane, not the application or runtime level.

Long-running controller actions: Inefficient and not scalable; can block HTTP requests.

Static constructors: Used for initializing static members; cannot run background jobs or respond to triggers.

Azure Policies: Governance tool; cannot execute background processing.

Azure WebJobs: Purpose-built for background tasks; supports continuous, triggered, or scheduled execution; integrates with queues, storage, and events; scalable and reliable.

Question 172

You need to build an event-driven workflow that responds to resource changes in Azure (like VM creation). What should you use?

A) Event Grid
B) Azure Batch
C) App Service
D) Virtual Machines

Answer: A) Event Grid

Explanation:

Event Grid delivers real-time notifications when Azure resources change. It can trigger Functions, Logic Apps, or WebHooks automatically. It is the recommended service for reactive automation based on Azure events.

Azure Batch is unrelated to resource change events and handles compute workloads.

A) Event Grid — Correct Answer

Azure Event Grid is a fully managed event routing service designed for building reactive, event-driven architectures. It allows you to easily subscribe to events from Azure services, custom applications, or third-party sources, and route them to event handlers such as Azure Functions, Logic Apps, or WebHooks.

Key features of Event Grid include:

Push-based event delivery: Events are pushed to subscribers in near real-time, enabling low-latency reactive workflows.

High scalability: Can handle millions of events per second with automatic scaling.

Integration with multiple services: Works seamlessly with Blob Storage, Resource Groups, Event Hubs, IoT Hub, and custom sources.

Simplified decoupling: Producers and consumers are loosely coupled; the event source does not need to know about the subscriber’s implementation.

Reliable delivery: Built-in retry mechanisms ensure events reach subscribers even in case of transient failures.

Event Grid is ideal for scenarios where you need to react immediately to changes or events, such as:

Triggering a function when a new blob is uploaded.

Automating workflows when a VM is created or deleted.

Notifying systems about IoT device events.

Because of its event-driven design, Event Grid is the best option for scenarios requiring automatic, real-time responses to events.
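
As an illustration, here is a C# function with an Event Grid trigger (in-process model, Microsoft.Azure.WebJobs.Extensions.EventGrid assumed); the event subscription that routes resource events to it is configured separately.

```csharp
using Azure.Messaging.EventGrid;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class ResourceEventFunction
{
    // Receives events pushed by Event Grid, e.g. a
    // Microsoft.Resources.ResourceWriteSuccess event raised
    // when a VM is created in a subscribed resource group.
    [FunctionName("OnResourceChange")]
    public static void Run(
        [EventGridTrigger] EventGridEvent gridEvent, ILogger log)
    {
        log.LogInformation($"{gridEvent.EventType}: {gridEvent.Subject}");
    }
}
```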

B) Azure Batch

Azure Batch is a cloud-scale job scheduling service for running large-scale parallel and high-performance computing (HPC) workloads. It is designed for compute-intensive tasks like simulations, rendering, or batch processing of large datasets. While Batch can process jobs triggered by other systems, it is not an event-routing or notification service. Batch is compute-focused, not event-driven, and does not provide automatic subscriptions to resource changes or event propagation.

C) App Service

Azure App Service is a platform-as-a-service (PaaS) for hosting web applications, APIs, and mobile backends. While App Service can process requests, host web APIs, and run background jobs via WebJobs, it does not provide native event-routing functionality. An App Service app can consume events if integrated with Event Grid, but by itself, it cannot automatically trigger actions based on events from other Azure resources.

D) Virtual Machines

Azure Virtual Machines provide IaaS compute resources for running operating systems, applications, and workloads. VMs offer full control over the environment but are not inherently event-driven. To respond to events, you would need to implement custom scripts or monitoring agents, which adds complexity. VMs are better suited for persistent workloads requiring dedicated compute rather than reactive, event-based architectures.

App Service hosts web applications, not event routing.

Virtual Machines do not provide event-based workflows. Event Grid is the correct solution.

Question 173

You need to store configuration settings in a centralized, version-controlled location. What should you use?

A) App Settings
B) Key Vault
C) Azure App Configuration
D) Environment variables

Answer: C) Azure App Configuration

Explanation:

App Configuration provides centralized config storage, dynamic reloading, feature flags, and versioning. It’s ideal for multi-environment distributed applications where consistent configuration is essential.

App Settings are per-app and not centralized.

Key Vault is meant for secrets, not general configuration.

Environment variables are local and not versioned or managed centrally. App Configuration meets all the requirements.
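
A minimal C# sketch using the Microsoft.Extensions.Configuration.AzureAppConfiguration provider; the environment variable and key names are placeholders.

```csharp
using System;
using Microsoft.Extensions.Configuration;

class Program
{
    static void Main()
    {
        // Pull settings from the central App Configuration store at startup.
        IConfiguration config = new ConfigurationBuilder()
            .AddAzureAppConfiguration(
                Environment.GetEnvironmentVariable("APP_CONFIG_CONNECTION"))
            .Build();

        // The same keys can be shared by every service and environment.
        Console.WriteLine(config["App:Greeting"]);
    }
}
```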

Question 174

You must encrypt data in a Storage Account using a customer-managed key. What should you enable?

A) Service-managed encryption
B) CMK encryption with Key Vault
C) Access tiers
D) Private Link

Answer: B) CMK encryption with Key Vault

Explanation:

Customer-managed keys stored in Key Vault let you control key rotation, deletion, and lifecycle. Storage encrypts all data using your key, ensuring maximum security, compliance, and auditability.

Service-managed encryption uses Microsoft-managed keys, not customer-provided ones.

Access tiers manage cost, not encryption.

Private Link secures network access but does not manage encryption keys. CMK is the correct option.

Question 175

Your application needs to perform workflow-style tasks with conditions, loops, and connectors. Which service should you choose?

A) Azure Logic Apps
B) App Service
C) Container Instances
D) Virtual Machines

Answer: A) Azure Logic Apps

Explanation:

Logic Apps allow visual workflow orchestration with hundreds of connectors, conditional logic, loops, and triggers. They are perfect for business process automation and integration scenarios.

App Service hosts APIs and websites and cannot orchestrate workflows visually.

Container Instances run containers but cannot orchestrate workflow logic by themselves.

Virtual Machines require custom code and maintenance, making them unsuitable for workflow automation. Logic Apps fit perfectly.

Question 176

Your team wants to monitor API response time and failure rates. Which Azure service should you use?

A) Azure Monitor Application Insights
B) Azure Files
C) Azure CDN
D) MySQL in Azure

Answer: A) Azure Monitor Application Insights

Explanation:

Application Insights monitors API performance, response times, dependencies, availability, and exceptions. It provides dashboards, alerts, and distributed tracing, making it ideal for performance monitoring.
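
In ASP.NET Core apps the SDK auto-collects requests and dependencies; as a minimal sketch, the TelemetryClient can also record a request manually (the connection string is a placeholder).

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

var config = TelemetryConfiguration.CreateDefault();
config.ConnectionString = "InstrumentationKey=<placeholder>";
var telemetry = new TelemetryClient(config);

// Record one API call: name, start time, duration, response code, success.
telemetry.TrackRequest("GET /orders", DateTimeOffset.UtcNow,
    TimeSpan.FromMilliseconds(120), "200", success: true);

telemetry.Flush(); // ensure the item is sent before the process exits
```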

Azure Files provides file storage and no monitoring.

Azure CDN accelerates content but does not monitor APIs.

MySQL is a database service and not a monitoring tool. Application Insights is the proper choice.

Question 177

You need to publish a .NET API with strict versioning policies. Which service should you use?

A) Azure Repos
B) Azure API Management
C) Logic Apps
D) App Configuration

Answer: B) Azure API Management

Explanation:

API Management supports versioning, revision control, policies, rate limiting, security, and developer portals. It’s ideal for publishing APIs in a managed, structured way.

Azure Repos stores code but does not handle versioned API publishing.

Logic Apps orchestrate workflows, not APIs.

App Configuration stores settings, not API versions. APIM is the correct option.

Question 178

You must store large amounts of semi-structured JSON data with low latency. Which database should you choose?

A) Azure SQL Database
B) MySQL
C) Cosmos DB
D) PostgreSQL

Answer: C) Cosmos DB

Explanation:

Cosmos DB is optimized for JSON storage, low latency, global distribution, and massive scalability. It supports flexible schemas, making it ideal for semi-structured data.

SQL, MySQL, and Postgres are relational databases and less efficient for huge volumes of dynamic JSON data. Cosmos DB directly fits the requirement.
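
A minimal sketch with the Microsoft.Azure.Cosmos SDK; the account endpoint, key, database, container, and partition key value are placeholders (the container is assumed to be partitioned on /deviceId).

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

class Program
{
    static async Task Main()
    {
        var client = new CosmosClient(
            "https://<account>.documents.azure.com:443/", "<key>");
        Container container = client.GetContainer("appdb", "telemetry");

        // Documents are schema-free JSON: each item can carry its own shape.
        var item = new
        {
            id = Guid.NewGuid().ToString(),
            deviceId = "sensor-1",
            temperature = 21.7,
            tags = new[] { "building-4", "floor-2" }
        };

        await container.CreateItemAsync(item, new PartitionKey("sensor-1"));
    }
}
```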

Question 179

You want to protect your web app from DDoS attacks at the network layer. What should you enable?

A) Azure Firewall
B) DDoS Protection
C) Traffic Manager
D) VNet Peering

Answer: B) DDoS Protection

Explanation:

Azure DDoS Protection provides automated detection and mitigation against network-layer DDoS attacks, protecting public applications from traffic floods.

Azure Firewall filters traffic but does not mitigate DDoS floods.

Traffic Manager distributes traffic but does not protect against attacks.

VNet peering connects networks but offers no attack mitigation. DDoS Protection is the right tool.

Question 180

You need to ensure that your app can authenticate users using organizational accounts. Which service should you integrate with?

A) Azure AD
B) Azure AD B2C
C) OAuth provider custom code
D) Active Directory Domain Services

Answer: A) Azure AD

Explanation:

Azure AD provides identity management for corporate users, enabling sign-in with organizational accounts (Work or School accounts). It supports OAuth, OpenID Connect, and enterprise SSO.
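
For example, an ASP.NET Core app can delegate organizational sign-in to Azure AD with the Microsoft.Identity.Web library; this sketch assumes an "AzureAd" section (tenant and client IDs) exists in appsettings.json.

```csharp
// Program.cs of an ASP.NET Core web app (implicit usings enabled).
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Identity.Web;

var builder = WebApplication.CreateBuilder(args);

// Sign users in with their work or school accounts via OpenID Connect.
builder.Services
    .AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"));
builder.Services.AddAuthorization();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();

// Every request must carry a signed-in organizational identity.
app.MapGet("/", (System.Security.Claims.ClaimsPrincipal user) =>
        $"Hello, {user.Identity?.Name}")
    .RequireAuthorization();

app.Run();
```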

AD B2C is for consumer identities, not corporate accounts.

Custom OAuth providers require unnecessary custom coding and are less secure.

AD Domain Services supports legacy protocols but not modern cloud application authentication. Azure AD is the correct option.
