Microsoft AZ-204 Developing Solutions for Microsoft Azure Exam Dumps and Practice Test Questions Set 4 Q61-80
Visit here for our full Microsoft AZ-204 exam dumps and practice test questions.
Question 61:
You are developing an Azure Function that processes messages from Azure Service Bus Queues. You need reliable processing, fault tolerance, and duplicate prevention. Which approach should you use?
A) ReceiveAndDelete mode with a single consumer
B) Peek-lock mode with duplicate detection enabled
C) Ignore message locks and retry manually
D) Multiple consumers with ReceiveAndDelete mode
Answer: B) Peek-lock mode with duplicate detection enabled
Explanation:
Peek-lock mode is the standard approach for reliable message processing in Azure Service Bus. When a message is read, it is temporarily locked for processing, preventing other consumers from reading it, and it is removed only after the function completes successfully. If processing fails or the lock expires, the message becomes available again for reprocessing. This guarantees at-least-once delivery, which is critical for workloads where missing messages could cause data loss or inconsistent state. Duplicate detection lets Service Bus recognize messages with the same MessageId within a defined time window and discard the repeats, keeping the system consistent without additional application logic. The combination is particularly effective in high-throughput scenarios where multiple function instances scale dynamically to process messages concurrently while maintaining reliability. Azure Functions integrates seamlessly with Service Bus triggers, providing automatic scaling, lock management, and built-in retries, which reduces operational complexity and aligns with best practices for serverless architecture. Lock-duration renewal and dead-letter queues further improve fault tolerance by handling long-running work and poison messages.
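As a rough sketch of this pattern outside the Functions binding, the example below uses the azure-servicebus Python SDK: duplicate detection is a queue-level property enabled at creation time, and the peek-lock receiver completes a message only after processing succeeds. The connection-string setting, queue name, and process() helper are placeholders, not part of the question.

```python
import os
from datetime import timedelta

from azure.servicebus import ServiceBusClient, ServiceBusReceiveMode
from azure.servicebus.management import ServiceBusAdministrationClient

CONN_STR = os.environ["SERVICEBUS_CONNECTION_STRING"]  # placeholder setting name
QUEUE = "orders"                                       # placeholder queue name

# Duplicate detection is set when the queue is created: Service Bus drops
# repeated MessageIds seen within the detection window. (One-time setup;
# this call fails if the queue already exists.)
admin = ServiceBusAdministrationClient.from_connection_string(CONN_STR)
admin.create_queue(
    QUEUE,
    requires_duplicate_detection=True,
    duplicate_detection_history_time_window=timedelta(minutes=10),
)

# Peek-lock receive: the message stays locked while we work on it and is
# removed only after complete_message(); failures abandon it for redelivery.
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    receiver = client.get_queue_receiver(
        QUEUE, receive_mode=ServiceBusReceiveMode.PEEK_LOCK
    )
    with receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            try:
                process(msg)                      # your business logic (placeholder)
                receiver.complete_message(msg)    # remove from queue on success
            except Exception:
                receiver.abandon_message(msg)     # release the lock so it is retried
```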
ReceiveAndDelete mode removes messages from the queue immediately upon reading. While simpler to implement, it is vulnerable to message loss if the function crashes or fails while processing a message. In high-volume production environments, this can result in data inconsistencies, missed events, and higher operational risk. ReceiveAndDelete is sometimes acceptable for non-critical or idempotent workloads, but it does not meet the high-reliability requirements expected in most AZ-204 scenarios.
Ignoring message locks and relying on manual retries is a poor strategy because it introduces complexity and potential errors. If a function crashes, a message may be processed multiple times or lost entirely. Additionally, implementing manual retry logic requires careful coordination to avoid race conditions and concurrency issues, especially when multiple function instances are processing messages. This approach shifts the responsibility for reliability from the platform to the developer, which is not recommended for production workloads.
Using multiple consumers with ReceiveAndDelete mode allows parallel processing but compounds the risk of lost messages and duplicates, as messages are immediately removed from the queue. Without peek-lock or duplicate detection, scaling the system increases the likelihood of inconsistencies. Although this method may improve throughput, it sacrifices reliability and correctness, which are core requirements for enterprise-grade applications, especially when processing financial transactions, telemetry, or user-critical operations.
Question 62:
You are designing an Azure Logic App to automate order processing. Some connectors may fail intermittently. You want automatic retries and error notifications when failures persist. Which configuration should you use?
A) Recurrence trigger only
B) HTTP trigger without retry
C) Built-in retry policies and run-after configuration
D) Manual retry logic in each step
Answer: C) Built-in retry policies and run-after configuration
Explanation:
Built-in retry policies in Logic Apps are designed for transient fault handling. These policies allow automatic retries for actions that fail due to temporary network issues, throttling, or service downtime. You can configure the number of retries, retry interval, and retry type (fixed or exponential), providing flexibility to optimize workflow reliability without overloading resources. The run-after configuration allows subsequent steps to execute depending on the outcome of the previous action. For example, if an action fails after retries, you can trigger a notification step or compensation logic to alert administrators or revert partial changes. This combination ensures robust, maintainable, and fault-tolerant workflows that align with enterprise-grade automation practices and AZ-204 exam expectations.
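For orientation, the relevant pieces of the workflow definition look roughly like the fragment below, shown as a Python dictionary mirroring the workflow JSON; the action names, endpoint, and connector details are made up for illustration.

```python
# Fragment of a Logic App workflow definition, expressed as a Python dict that
# mirrors the JSON. "Submit_order" and "Notify_admin" are made-up action names.
actions = {
    "Submit_order": {
        "type": "Http",
        "inputs": {
            "method": "POST",
            "uri": "https://example.com/orders",  # placeholder endpoint
            "retryPolicy": {                      # built-in retry policy
                "type": "exponential",            # or "fixed"
                "count": 4,                       # number of retry attempts
                "interval": "PT15S",              # base interval, ISO 8601 duration
            },
        },
        "runAfter": {},
    },
    "Notify_admin": {
        "type": "ApiConnection",                  # e.g. an email or Teams connector
        "inputs": {"host": {}, "method": "post", "path": "/sendEmail"},  # illustrative only
        # run-after: execute only when the previous action ultimately failed or
        # timed out, so persistent failures surface as a notification.
        "runAfter": {"Submit_order": ["Failed", "TimedOut"]},
    },
}
```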
Recurrence triggers execute workflows on a fixed schedule, polling for changes at set intervals. While useful for batch processing, they are not ideal for event-driven scenarios like order processing. They introduce latency because workflows only start at the next scheduled interval and cannot respond immediately to new orders. They also do not provide built-in handling for transient connector failures, so additional logic would be required, complicating the workflow.
HTTP triggers allow workflows to start in response to external calls. While suitable for event-driven integration, they do not inherently handle downstream transient errors. Without built-in retries, any connector failure can halt the workflow, requiring developers to implement custom retry mechanisms, error handling, and notification logic. This increases complexity and the potential for mistakes.
Manual retry logic in each step is an alternative but is time-consuming and error-prone. Each action must include custom retry logic, which increases maintenance overhead and reduces readability. Workflows can become inconsistent if retry policies are misconfigured, leading to missed events or duplicate processing. Using built-in capabilities provides a centralized, configurable, and reliable mechanism for retries and error handling.
Question 63:
You are building an Azure App Service API that reads frequently accessed data from Cosmos DB. You want low-latency reads and reduced RU consumption. Which feature should you enable?
A) Automatic indexing
B) Multi-region writes
C) TTL (Time-to-Live)
D) Integrated Cosmos DB caching
Answer: D) Integrated Cosmos DB caching
Explanation:
Integrated Cosmos DB caching, provided by the account's dedicated gateway, stores frequently accessed items and query results in memory, allowing millisecond-level response times for read-heavy workloads. Reads served from the cache do not reach the backend, which minimizes RU consumption and improves cost-efficiency. A configurable maximum cache staleness controls how old cached data may be before it is refreshed from the database. This approach is particularly effective for scenarios with high read-to-write ratios, such as dashboards, product catalogs, or telemetry aggregation. The cache is used transparently through the Cosmos DB SDKs whenever requests are routed through the dedicated gateway with session or eventual consistency, so it works well with Azure Functions, serverless architecture patterns, and high concurrency.
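A minimal sketch with the azure-cosmos Python SDK, assuming the account has a dedicated gateway provisioned and a recent SDK version that exposes the max_integrated_cache_staleness_in_ms option; the endpoint, key, database, container, and item values are placeholders.

```python
import os
from azure.cosmos import CosmosClient

# The integrated cache lives in the Cosmos DB dedicated gateway, so the client
# must point at the dedicated-gateway endpoint rather than the regular one.
client = CosmosClient(
    url=os.environ["COSMOS_DEDICATED_GATEWAY_ENDPOINT"],  # placeholder, e.g. https://<account>.sqlx.cosmos.azure.com
    credential=os.environ["COSMOS_KEY"],
    consistency_level="Eventual",  # integrated cache requires session or eventual consistency
)

container = client.get_database_client("catalog").get_container_client("products")

# Point read served from the integrated cache when a fresh copy is available;
# max_integrated_cache_staleness_in_ms bounds how stale the cached item may be.
item = container.read_item(
    item="product-42",
    partition_key="product-42",
    max_integrated_cache_staleness_in_ms=60_000,
)

# Queries routed through the dedicated gateway can use the cache the same way.
results = list(container.query_items(
    query="SELECT * FROM c WHERE c.category = @cat",
    parameters=[{"name": "@cat", "value": "books"}],
    partition_key="books",
    max_integrated_cache_staleness_in_ms=60_000,
))
```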
Automatic indexing in Cosmos DB improves query performance by creating indexes on documents automatically. While beneficial for complex queries, it does not reduce the number of read requests or the RU consumption for frequently accessed items. It optimizes query execution but does not address caching or latency issues.
Multi-region writes enhance global write availability and reduce latency for users worldwide. However, they do not directly improve read latency in a single region and increase operational costs due to cross-region replication. Multi-region writes are not a substitute for caching when the goal is to reduce read load on hot data.
TTL (Time-to-Live) automatically deletes documents after a specified period. This feature is useful for ephemeral or temporary data but does not improve read performance for frequently accessed items. TTL may even increase read operations if clients must repeatedly access and regenerate data that has expired. It is primarily a data lifecycle management feature rather than a performance optimization tool.
Question 64:
You are developing an Azure Function that ingests telemetry from Event Hubs. You need high throughput while maintaining message order per device. Which approach should you implement?
A) Single partition with one consumer
B) Multiple partitions without mapping consumers
C) Multiple partitions with one consumer per partition
D) Batch processing ignoring partitions
Answer: C) Multiple partitions with one consumer per partition
Explanation:
Using multiple partitions with a dedicated consumer per partition ensures parallel processing while preserving message order within each partition. Each consumer handles its assigned partition independently. Checkpointing provides fault tolerance, ensuring at-least-once delivery even if a consumer fails. Retry policies for transient errors further enhance reliability. This design scales horizontally to handle high volumes of telemetry data and aligns with Azure serverless best practices for high-throughput IoT scenarios. By maintaining partitioned order, you prevent inconsistencies in downstream processing and analytics, which is critical for scenarios like device telemetry, sensor data, or financial transactions.
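A minimal sketch of the consumer side using the azure-eventhub SDK with a blob checkpoint store; the event processor assigns each partition to a single active owner, which is what preserves per-partition order. The connection settings, names, and handle_telemetry() helper are placeholders.

```python
import os
from azure.eventhub import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblob import BlobCheckpointStore

# Blob-backed checkpoint store: records per-partition progress so a restarted
# or rebalanced consumer resumes where the previous owner left off.
checkpoint_store = BlobCheckpointStore.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"],   # placeholder setting names
    container_name="eventhub-checkpoints",
)

consumer = EventHubConsumerClient.from_connection_string(
    os.environ["EVENTHUB_CONNECTION_STRING"],
    consumer_group="$Default",
    eventhub_name="telemetry",
    checkpoint_store=checkpoint_store,
)

def on_event(partition_context, event):
    # Events arrive in order within partition_context.partition_id; the
    # processor keeps one active owner per partition, so per-device order
    # holds as long as each device maps to a single partition.
    handle_telemetry(event.body_as_json())       # your processing logic (placeholder)
    partition_context.update_checkpoint(event)   # at-least-once: checkpoint after success

with consumer:
    # Partitions are load-balanced across all running instances in this consumer group.
    consumer.receive(on_event=on_event, starting_position="-1")
```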
A single partition with one consumer guarantees order but limits throughput. All messages are processed sequentially, creating a bottleneck that slows event ingestion under high-volume conditions.
Multiple partitions without mapping consumers may result in unordered processing, as multiple consumers could compete for messages across partitions, violating device-specific ordering guarantees.
Batch processing ignoring partitions can increase throughput but sacrifices message order. Events from the same device may be processed out of sequence, potentially leading to incorrect analytics, alerting, or state updates.
Question 65:
You are configuring an Azure API Management (APIM) instance for internal APIs. You need to enforce authentication and capture request logs for auditing. Which configuration should you implement?
A) Anonymous access with logging
B) Basic authentication with local logs
C) IP restrictions only
D) OAuth 2.0 with Azure AD and diagnostic logging
Answer: D) OAuth 2.0 with Azure AD and diagnostic logging
Explanation:
OAuth 2.0 with Azure AD enforces identity-based access control, ensuring only authorized internal users can access APIs. Diagnostic logging captures request and response headers, metadata, and other audit information. Logs can be sent to Log Analytics, Event Hubs, or Storage for auditing, monitoring, and compliance purposes. This setup provides a secure, auditable, and enterprise-grade API management environment, meeting both internal security and compliance requirements. OAuth integration also enables role-based access control and token-based authorization, allowing granular permission management.
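From the caller's side, the flow looks roughly like the sketch below: an internal client acquires an Azure AD token with MSAL (client credentials flow) and presents it to the APIM gateway, which is assumed to validate the token (for example with a validate-jwt policy) and emit diagnostic logs. The tenant, client, scope, and endpoint values are placeholders.

```python
import os
import msal
import requests

# Client-credentials flow: the calling service authenticates to Azure AD and
# receives a token for the API's app registration (scope is a placeholder).
app = msal.ConfidentialClientApplication(
    client_id=os.environ["CLIENT_ID"],
    client_credential=os.environ["CLIENT_SECRET"],
    authority=f"https://login.microsoftonline.com/{os.environ['TENANT_ID']}",
)
token = app.acquire_token_for_client(scopes=["api://internal-orders-api/.default"])
if "access_token" not in token:
    raise RuntimeError(token.get("error_description", "token acquisition failed"))

# APIM validates the bearer token before forwarding the request to the backend;
# the call is also captured by diagnostic logging for auditing.
resp = requests.get(
    "https://contoso-apim.azure-api.net/orders",     # placeholder APIM endpoint
    headers={"Authorization": f"Bearer {token['access_token']}"},
    timeout=10,
)
resp.raise_for_status()
```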
Anonymous access with logging captures requests but does not prevent unauthorized access. APIs remain exposed, and logging alone does not enforce security.
Basic authentication with local logs provides credentials-based access but lacks centralized identity management and token-based authorization. Credential rotation and auditing are more complex, reducing maintainability for enterprise workloads.
IP restrictions limit access to certain networks but do not verify individual user identities. Users within allowed networks could access APIs without authentication, making this approach insufficient for internal-only secure environments.
Question 66:
You are developing an Azure Function that processes messages from Azure Storage Queues. You need automatic retries, fault tolerance, and at-least-once delivery. Which approach should you implement?
A) Retry policies with exponential backoff
B) Ignore failures and let the function fail
C) Static sleep loops for retries
D) Rely only on queue visibility timeouts
Answer: A) Retry policies with exponential backoff
Explanation:
Retry policies with exponential backoff are essential for handling transient failures in Azure Functions. When a queue-triggered function encounters a temporary error, such as a network glitch or throttling, the function will retry the operation after a delay that grows exponentially with each attempt. This prevents resource exhaustion and reduces the likelihood of overwhelming the queue or downstream services. Exponential backoff is more efficient than fixed-interval retries because it adapts to the severity and persistence of the issue. Azure Functions integrates these policies natively, allowing developers to configure maximum retries, intervals, and error types, while the platform ensures at-least-once delivery and fault tolerance. Using this approach aligns with serverless best practices, reduces operational overhead, and guarantees reliability for production workloads.
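The backoff schedule itself is simple; the sketch below is an illustrative hand-rolled version with jitter, not the Functions runtime's own implementation, which you would normally configure on the trigger or host instead.

```python
import random
import time

def retry_with_exponential_backoff(operation, max_attempts=5,
                                   base_delay=1.0, max_delay=30.0):
    """Retry `operation` on failure, doubling the delay after each attempt."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise                               # surface the failure after the last attempt
            delay = min(max_delay, base_delay * (2 ** attempt))
            delay *= random.uniform(0.5, 1.5)       # jitter avoids synchronized retries
            time.sleep(delay)

# Example schedule: roughly 1s, 2s, 4s, 8s (capped at 30s), each randomized by the jitter factor.
```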
Ignoring failures and letting the function fail leaves the system vulnerable to lost messages. In transient failure scenarios, this could result in missing critical data or events. Without retry logic, developers must manually handle errors, which increases complexity and reduces reliability.
Static sleep loops retry at fixed intervals but do not adapt to repeated failures. They can create bottlenecks and inefficient use of compute resources, as the function may retry too frequently during persistent outages or too slowly during transient errors. This approach lacks the intelligence and flexibility provided by exponential backoff.
Relying solely on queue visibility timeouts provides minimal fault tolerance. When a message becomes visible again after a timeout, it can be reprocessed, but there is no control over retry intervals, exponential growth, or logging. This can result in inconsistent processing, duplicate execution, or delayed retries, making it less reliable than structured retry policies.
Question 67:
You are building a Logic App to process invoice approvals. You want the workflow to trigger automatically when a file is uploaded to Azure Blob Storage and ensure each file is processed only once. Which trigger should you use?
A) Recurrence trigger with polling
B) HTTP trigger called externally
C) Blob Storage trigger “When a blob is created”
D) Service Bus queue trigger
Answer: C) Blob Storage trigger “When a blob is created”
Explanation:
The Blob Storage trigger “When a blob is created” is a push-based trigger that reacts immediately when a new blob is uploaded. This ensures low-latency processing and avoids the inefficiencies of polling. The trigger supports concurrency controls to prevent multiple executions for the same file, ensuring each blob is processed exactly once. It integrates seamlessly with Logic Apps actions, enabling event-driven automation for order processing, approvals, or document workflows. Additionally, it supports error handling, retries, and run-after configurations, which ensures robust operation in the presence of transient failures. This approach aligns with serverless and event-driven architecture best practices recommended in AZ-204.
Recurrence triggers operate on a fixed schedule and poll storage at intervals. While simple, they introduce latency and unnecessary execution when no new files exist. They do not respond immediately to file creation events, making them less suitable for real-time processing.
HTTP triggers rely on external systems to invoke the workflow. This approach requires additional orchestration to detect file uploads and call the workflow, increasing complexity and introducing potential points of failure.
Service Bus queue triggers are designed to react to queued messages rather than blob uploads. Using this trigger would require an extra step to send a message to the queue whenever a blob is uploaded, adding overhead and increasing latency compared to the direct blob trigger.
Question 68:
You are developing an Azure App Service API that reads frequently accessed documents from Cosmos DB. You want low-latency reads and reduced RU consumption. Which feature should you enable?
A) Automatic indexing
B) Multi-region writes
C) TTL (Time-to-Live)
D) Integrated Cosmos DB caching
Answer: D) Integrated Cosmos DB caching
Explanation:
Integrated Cosmos DB caching stores frequently accessed documents in memory, providing millisecond-level response times and reducing RU consumption. It is particularly effective for read-heavy workloads where repeated queries for the same items can lead to high cost and latency. Cached data is refreshed according to a configurable maximum staleness window, keeping it acceptably current. This approach reduces the load on Cosmos DB, lowers operational costs, and aligns with best practices for high-throughput serverless applications. Integrated caching is served through the account's dedicated gateway and works with the Cosmos DB SDK, Azure Functions, and Logic Apps, providing consistent, reliable, and scalable read performance.
Automatic indexing improves query execution speed by creating indexes on documents. While beneficial for complex queries, it does not reduce the number of reads or RU consumption for frequently accessed data. Indexing optimizes query performance but does not provide caching or low-latency reads for hot data.
Multi-region writes improve write availability and latency across regions but do not enhance read performance in a single region. This feature also increases operational costs due to replication and is primarily intended for high-availability write scenarios rather than read optimization.
TTL automatically deletes documents after a defined duration, useful for temporary or ephemeral data. While it helps with storage management, TTL does not cache data or reduce RU consumption for frequently read items. It is primarily a data lifecycle management feature rather than a read optimization mechanism.
Question 69:
You are building an Azure Function that ingests telemetry events from Event Hubs. You need high throughput while preserving message order per device. Which approach should you implement?
A) Single partition with one consumer
B) Multiple partitions without mapping consumers
C) Multiple partitions with one consumer per partition
D) Batch processing ignoring partitions
Answer: C) Multiple partitions with one consumer per partition
Explanation:
Multiple partitions with one consumer per partition allow parallel processing across partitions while maintaining message order within each partition. Each consumer is responsible for a single partition, and checkpointing ensures at-least-once delivery. Retry policies manage transient failures without affecting order guarantees. This design scales horizontally, enabling high throughput for telemetry or IoT scenarios while maintaining message sequence integrity, which is critical for analytics, monitoring, or financial applications.
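Per-device ordering also depends on the sender routing each device's events to a single partition. A minimal sketch of that with the azure-eventhub producer, using the device ID as the partition key; the connection setting, hub name, and payloads are placeholders.

```python
import os
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    os.environ["EVENTHUB_CONNECTION_STRING"],   # placeholder setting name
    eventhub_name="telemetry",
)

def send_device_events(device_id, payloads):
    # Using the device ID as the partition key hashes all of this device's
    # events to the same partition, so the per-partition consumer sees them
    # in the order they were sent.
    batch = producer.create_batch(partition_key=device_id)
    for payload in payloads:
        batch.add(EventData(payload))
    producer.send_batch(batch)

with producer:
    send_device_events("device-001", ['{"temp": 21.5}', '{"temp": 21.7}'])
```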
A single partition with one consumer guarantees order but severely limits throughput. All messages are processed sequentially, creating a bottleneck that can delay event ingestion under high-volume conditions.
Multiple partitions without mapping consumers can result in unordered processing, as multiple consumers might handle messages from different partitions unpredictably. This compromises device-specific ordering, which may affect downstream systems relying on sequence integrity.
Batch processing ignoring partitions can increase throughput but sacrifices ordering. Events from the same device could be processed out of sequence, leading to inconsistencies in telemetry aggregation, alerting, or business logic.
Question 70:
You are configuring an Azure API Management (APIM) instance for internal APIs. You want to enforce authentication and capture request logs for auditing. Which configuration should you implement?
A) Anonymous access with logging
B) Basic authentication with local logs
C) IP restrictions only
D) OAuth 2.0 with Azure AD and diagnostic logging
Answer: D) OAuth 2.0 with Azure AD and diagnostic logging
Explanation:
OAuth 2.0 with Azure AD enforces identity-based access control, ensuring that only authenticated and authorized internal users can access APIs. Diagnostic logging captures request and response details, headers, and metadata, which can be routed to Log Analytics, Event Hubs, or Storage for auditing, monitoring, and compliance purposes. This approach provides a secure, auditable, and enterprise-grade API management solution. It also allows granular role-based access control, token-based authentication, and centralized identity management, reducing administrative overhead. OAuth integration ensures that API consumers cannot bypass authentication and that all access can be audited and traced, satisfying both operational and compliance requirements.
Anonymous access with logging captures request activity but does not enforce security. APIs remain exposed, and logging alone cannot prevent unauthorized access, making it unsuitable for internal-only APIs.
Basic authentication with local logs provides credential-based access but lacks centralized management, token-based authorization, and scalable auditing. Credential rotation and auditing are complex, making this approach less maintainable for enterprise environments.
IP restrictions limit access based on network location but do not verify individual user identity. Users within allowed networks could still access APIs without authentication, making this approach insufficient for enforcing secure, internal-only access.
Question 71:
You are developing an Azure Function that processes high-volume telemetry data from Event Hubs. You need parallel processing while preserving message order per device. Which architecture should you implement?
A) Single partition with one consumer
B) Multiple partitions without mapping consumers
C) Multiple partitions with one consumer per partition
D) Batch processing ignoring partitions
Answer: C) Multiple partitions with one consumer per partition
Explanation:
Using multiple partitions with a dedicated consumer per partition allows parallel processing across partitions while ensuring message order is maintained within each partition. Each consumer processes only its assigned partition, which prevents the out-of-order processing that could occur if multiple consumers competed for the same partition. Checkpointing provides fault tolerance, ensuring messages are processed at least once. Azure Functions also scales out automatically up to the number of partitions, improving throughput without sacrificing order. Retry policies for transient failures, together with routing unprocessable events to a separate store for later inspection, further enhance reliability and operational safety, which is critical in IoT, telemetry, or financial systems where event order affects downstream analytics or operations.
A single partition with one consumer guarantees order but severely limits throughput. All messages are processed sequentially, creating a bottleneck that can prevent real-time processing in high-volume scenarios.
Multiple partitions without mapping consumers may result in unordered message processing, as different consumers can process messages from multiple partitions unpredictably. This compromises device-specific order and can lead to data inconsistencies in downstream applications.
Batch processing ignoring partitions may increase throughput but sacrifices message order. Events from the same device could be processed out of sequence, which could cause incorrect analytics, state updates, or business logic execution. This is unsuitable for applications where event sequence matters.
Question 72:
You are designing an Azure App Service API that reads frequently accessed data from Cosmos DB. You need low-latency reads and reduced RU consumption. Which feature should you enable?
A) Automatic indexing
B) Multi-region writes
C) TTL (Time-to-Live)
D) Integrated Cosmos DB caching
Answer: D) Integrated Cosmos DB caching
Explanation:
Integrated Cosmos DB caching stores frequently accessed documents in memory, providing millisecond-level response times for read-heavy workloads. Caching reduces repeated database queries, lowering RU consumption and operational costs. A configurable maximum staleness window bounds how stale cached data can be, balancing consistency with performance. This approach is ideal for high-concurrency, read-intensive scenarios, such as dashboards, reporting, telemetry, or product catalogs. It integrates seamlessly with Azure Functions and App Services, supporting serverless patterns and scaling efficiently with minimal developer overhead.
Automatic indexing improves query performance by creating indexes on documents. While this helps queries execute faster, it does not reduce the number of read requests or RU consumption for frequently accessed items. Indexing optimizes the query engine but does not act as a caching mechanism.
Multi-region writes improve write availability globally but do not enhance read performance in a single region. This feature also increases costs due to replication and is intended for scenarios that require high write availability rather than read optimization.
TTL automatically deletes documents after a configured duration. While it helps manage storage, it does not cache frequently accessed items or reduce RU consumption. TTL is useful for ephemeral data but is not a solution for low-latency, high-throughput read workloads.
Question 73:
You are configuring an Azure Function that processes messages from Service Bus Queues. You want reliable processing and avoid duplicate message handling. Which approach should you implement?
A) ReceiveAndDelete mode with a single consumer
B) Peek-lock mode with duplicate detection enabled
C) Ignore message locks and retry manually
D) Multiple consumers with ReceiveAndDelete mode
Answer: B) Peek-lock mode with duplicate detection enabled
Explanation:
Peek-lock mode locks the message during processing and removes it only after the function completes successfully. If processing fails, the message becomes available again for reprocessing. Duplicate detection ensures that the same message is not enqueued and processed multiple times within the detection window, providing at-least-once delivery without duplication. This combination is ideal for high-reliability, high-throughput scenarios, such as financial transactions or telemetry ingestion. Azure Functions integrates with Service Bus triggers to handle message completion, retries, and scaling automatically, reducing developer effort and operational risk. Dead-letter queues further enhance reliability by handling poison messages.
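In a Python Azure Function, the Service Bus trigger manages the peek-lock lifecycle for you. A minimal sketch, assuming the Python v2 programming model; the queue name and connection app-setting are placeholders.

```python
import logging
import azure.functions as func

app = func.FunctionApp()

@app.service_bus_queue_trigger(
    arg_name="msg",
    queue_name="orders",                    # placeholder queue name
    connection="ServiceBusConnection",      # placeholder app-setting name
)
def process_order(msg: func.ServiceBusMessage):
    # The trigger receives in peek-lock mode: if this function returns
    # normally the message is completed; if it raises, the lock is released
    # and the message is retried, eventually dead-lettering after the
    # queue's max delivery count is exceeded.
    logging.info("Processing message id=%s", msg.message_id)
    order = msg.get_body().decode("utf-8")
    # ... business logic on `order` goes here ...
```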
ReceiveAndDelete mode removes messages immediately upon receipt. If processing fails, the message is lost, which reduces reliability. It is suitable only for non-critical or idempotent workloads where occasional message loss is acceptable.
Ignoring message locks and retrying manually is risky because it can lead to duplicate processing or lost messages if the function fails or multiple instances process the same message. It increases operational complexity and shifts reliability responsibility to the developer.
Multiple consumers with ReceiveAndDelete mode allow parallel processing but do not prevent lost messages or duplicates, as messages are immediately removed. This approach sacrifices reliability for throughput and is unsuitable for enterprise-grade applications that require guaranteed message processing.
Question 74:
You are building a Logic App that triggers on new files in Azure Blob Storage. You want the workflow to trigger immediately and process each file exactly once. Which trigger should you use?
A) Recurrence trigger with polling
B) HTTP trigger called externally
C) Blob Storage trigger “When a blob is created”
D) Service Bus queue trigger
Answer: C) Blob Storage trigger “When a blob is created”
Explanation:
The Blob Storage trigger “When a blob is created” is a push-based, event-driven trigger. It executes immediately when a new blob is added, ensuring low latency and precise execution. Concurrency controls prevent multiple executions for the same blob, guaranteeing exactly-once processing. This approach is highly efficient and reduces operational cost because workflows only run when new files are created, avoiding unnecessary executions. It integrates with Logic App actions, enabling error handling, retry policies, and run-after configurations, providing a robust and maintainable workflow for automation scenarios such as order processing or document approvals.
Recurrence triggers poll storage at fixed intervals. This introduces latency because the workflow is not triggered immediately. Workflows may execute unnecessarily when no new blobs are present, increasing cost and processing overhead.
HTTP triggers rely on external systems to invoke the workflow. While flexible, this approach adds complexity because another process must monitor blob creation and trigger the workflow. It also introduces additional points of failure and latency.
Service Bus triggers react to messages in a queue, not blob uploads. To use this trigger, an extra step must send a message to the queue when a blob is created. This adds operational overhead and latency compared to the direct Blob Storage trigger.
Question 75:
You are configuring an Azure API Management (APIM) instance for internal APIs. You need to enforce authentication and capture request logs for auditing. Which configuration should you implement?
A) Anonymous access with logging
B) Basic authentication with local logs
C) IP restrictions only
D) OAuth 2.0 with Azure AD and diagnostic logging
Answer: D) OAuth 2.0 with Azure AD and diagnostic logging
Explanation:
OAuth 2.0 with Azure AD enforces identity-based access control, ensuring only authenticated and authorized internal users can access APIs. Diagnostic logging captures detailed request and response data, headers, and metadata. Logs can be sent to Log Analytics, Event Hubs, or Storage for auditing, monitoring, and compliance. This setup provides a secure, enterprise-grade API management solution, supporting token-based authentication, role-based access control, and centralized identity management. OAuth integration ensures that all API access can be audited and traced, fulfilling security and compliance requirements. This configuration is aligned with AZ-204 best practices for secure API management in enterprise environments.
Anonymous access with logging captures request activity but does not enforce security. APIs remain exposed, and logging alone cannot prevent unauthorized access.
Basic authentication with local logs provides credential-based access but lacks centralized management, token-based authentication, and robust auditing. Credential rotation and access tracking are complex, reducing maintainability.
IP restrictions limit access to specific networks but do not verify user identity. Users within allowed networks could still access APIs without authentication, making it unsuitable for enforcing secure internal access.
Question 76:
You are developing an Azure Function that processes messages from Azure Service Bus Topics. You want reliable message processing, avoid duplicates, and handle poison messages gracefully. Which approach should you implement?
A) ReceiveAndDelete mode with a single consumer
B) Peek-lock mode with duplicate detection and dead-lettering
C) Ignore message locks and retry manually
D) Multiple consumers with ReceiveAndDelete mode
Answer: B) Peek-lock mode with duplicate detection and dead-lettering
Explanation:
Peek-lock mode ensures that messages are temporarily locked when read, preventing other consumers from processing them until the function completes successfully. Combined with duplicate detection, it prevents the same message from being accepted and processed multiple times within a specified window, ensuring at-least-once delivery without duplication. Dead-lettering handles poison messages that repeatedly fail processing, moving them to a dedicated sub-queue for later inspection or manual intervention. This approach is highly reliable, reduces operational complexity, and is recommended for enterprise-grade applications such as financial processing, telemetry ingestion, and IoT workloads. It also integrates with Azure Functions' automatic scaling, message-lock management, and retry policies, aligning with AZ-204 best practices for serverless messaging.
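A sketch of explicit dead-lettering with the azure-servicebus Python SDK when a message from a topic subscription fails validation; automatic dead-lettering also occurs once the max delivery count is exceeded. The topic, subscription, and process() helper are placeholders.

```python
import os
from azure.servicebus import ServiceBusClient, ServiceBusReceiveMode

CONN_STR = os.environ["SERVICEBUS_CONNECTION_STRING"]   # placeholder setting name

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    receiver = client.get_subscription_receiver(
        topic_name="orders",                 # placeholder topic
        subscription_name="billing",         # placeholder subscription
        receive_mode=ServiceBusReceiveMode.PEEK_LOCK,
    )
    with receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            try:
                process(msg)                         # your business logic (placeholder)
                receiver.complete_message(msg)
            except ValueError as err:
                # Poison message: park it in the dead-letter sub-queue for
                # inspection instead of retrying it forever.
                receiver.dead_letter_message(
                    msg, reason="ValidationFailed", error_description=str(err)
                )
            except Exception:
                receiver.abandon_message(msg)        # transient failure: retry later
```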
ReceiveAndDelete mode immediately removes messages from the queue upon retrieval. While simpler, it is unsafe for critical workloads, as failed processing results in message loss. It is only suitable for idempotent or non-critical operations where occasional message loss is acceptable.
Ignoring message locks and retrying manually introduces significant complexity and risk. Multiple consumers may process the same message concurrently, leading to duplicates. The function must implement custom error handling, checkpointing, and retry logic, increasing maintenance overhead and potential for mistakes.
Multiple consumers with ReceiveAndDelete mode allow parallel processing but sacrifice reliability. Messages are removed immediately, so failures during processing result in lost messages, and duplicate detection is not available. This setup favors throughput over correctness and is unsuitable for critical workloads.
Question 77:
You are building a Logic App that triggers when invoices are uploaded to Azure Blob Storage. You need real-time processing and to ensure each file is processed exactly once. Which trigger should you use?
A) Recurrence trigger with polling
B) HTTP trigger called externally
C) Blob Storage trigger “When a blob is created”
D) Service Bus queue trigger
Answer: C) Blob Storage trigger “When a blob is created”
Explanation:
The Blob Storage trigger “When a blob is created” is an event-driven, push-based trigger. It ensures that workflows start immediately when a new blob is uploaded, providing low-latency processing. Concurrency controls and built-in deduplication mechanisms prevent multiple executions for the same blob, ensuring exactly-once processing. The trigger also integrates seamlessly with Logic App actions, allowing error handling, retries, and run-after configuration, which makes the workflow reliable and maintainable. This approach is ideal for document processing, invoice approvals, and automation scenarios, aligning with AZ-204 exam objectives for serverless, event-driven architecture.
Recurrence triggers poll storage at scheduled intervals, which introduces latency and increases operational costs because workflows run even if no new blobs exist. They are not suitable for real-time processing.
HTTP triggers require an external service to call the workflow. While flexible, this adds complexity and latency because the external system must monitor blob creation and initiate the workflow. It also increases the potential for failure points.
Service Bus queue triggers react to queued messages rather than blob uploads. Using this approach requires additional logic to push a message to the queue for each blob upload. This adds complexity and latency compared to using a direct Blob Storage trigger.
Question 78:
You are configuring an Azure Function that processes high-throughput telemetry data from Event Hubs. You need parallel processing and message order preservation per partition. Which architecture should you choose?
A) Single partition with one consumer
B) Multiple partitions without mapping consumers
C) Multiple partitions with one consumer per partition
D) Batch processing ignoring partitions
Answer: C) Multiple partitions with one consumer per partition
Explanation:
Using multiple partitions with a dedicated consumer per partition ensures parallel processing while maintaining message order within each partition. Each consumer processes its assigned partition independently, and checkpointing provides at-least-once delivery. Retry policies handle transient errors gracefully without violating order guarantees. This architecture scales horizontally, allowing high throughput while preserving sequence integrity, which is essential for IoT telemetry, financial transactions, and real-time analytics. Azure Functions automatically manages scaling and load balancing across partitions, aligning with serverless best practices.
A single partition with one consumer guarantees message order but limits throughput because all messages are processed sequentially. This can become a bottleneck for high-volume telemetry ingestion.
Multiple partitions without mapping consumers risk unordered processing, as multiple consumers may consume messages from different partitions unpredictably. This can lead to inconsistent downstream analytics and violates device-specific order constraints.
Batch processing ignoring partitions improves throughput but sacrifices order. Events from the same device may be processed out of sequence, affecting downstream processing, analytics, and operational decisions that depend on the correct event sequence.
Question 79:
You are developing an Azure App Service API that reads frequently accessed data from Cosmos DB. You want low-latency reads and minimal RU consumption. Which feature should you enable?
A) Automatic indexing
B) Multi-region writes
C) TTL (Time-to-Live)
D) Integrated Cosmos DB caching
Answer: D) Integrated Cosmos DB caching
Explanation:
Integrated Cosmos DB caching stores frequently accessed data in memory, reducing repeated database queries and improving read performance. This lowers RU consumption, decreases operational cost, and ensures millisecond-level latency for high-throughput workloads like dashboards, telemetry processing, and product catalogs. A configurable maximum staleness window controls when cached data is refreshed, providing both performance and predictable freshness. It integrates seamlessly with Azure App Services and Functions, supporting serverless and high-concurrency scenarios while minimizing developer overhead.
Automatic indexing optimizes query performance by creating indexes on documents. While beneficial for query efficiency, it does not reduce repeated reads or RU consumption, nor does it provide caching benefits.
Multi-region writes improve write availability and latency across regions but do not improve read performance in a single region. This option increases costs due to replication and is primarily intended for write-heavy, globally distributed applications.
TTL automatically deletes documents after a set period. While useful for ephemeral data, TTL does not improve read performance or caching for frequently accessed items. It is a data lifecycle management feature rather than a read optimization tool.
Question 80:
You are configuring an Azure API Management (APIM) instance for internal APIs. You need to enforce authentication and capture detailed request logs for auditing. Which configuration should you implement?
A) Anonymous access with logging
B) Basic authentication with local logs
C) IP restrictions only
D) OAuth 2.0 with Azure AD and diagnostic logging
Answer: D) OAuth 2.0 with Azure AD and diagnostic logging
Explanation:
OAuth 2.0 with Azure AD provides identity-based access control, ensuring that only authenticated and authorized users can access internal APIs. Diagnostic logging captures detailed request and response data, including headers, payloads, and metadata. These logs can be routed to Log Analytics, Event Hubs, or Storage for auditing, monitoring, and compliance. This configuration supports enterprise-grade security, token-based authentication, role-based access control, and centralized identity management. OAuth integration ensures all access is traceable and auditable, fulfilling both operational and compliance requirements. This aligns fully with AZ-204 exam objectives for secure API management.
Anonymous access with logging captures request activity but does not enforce security. APIs remain exposed, and logging alone cannot prevent unauthorized access.
Basic authentication with local logs provides credential-based access but lacks centralized management and robust auditing capabilities. Credential rotation, access tracking, and token-based authentication are more complex, reducing maintainability.
IP restrictions limit access based on network location but do not verify individual users. Users within allowed networks could access APIs without authentication, making this insufficient for internal secure APIs.