Microsoft AZ-801 Configuring Windows Server Hybrid Advanced Services Exam Dumps and Practice Test Questions Set 5 Q81-100


Question 81 

You manage Windows Server 2022 Hyper-V hosts with Shielded VMs. You need to migrate encrypted VMs between hosts without exposing virtual TPM keys. What should you implement?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

When migrating encrypted virtual machines between Windows Server 2022 Hyper-V hosts, maintaining the integrity and confidentiality of virtual TPM keys is the most critical design factor. Each encrypted VM relies on its virtual TPM for storing BitLocker keys and other sensitive cryptographic material. If these keys are exposed or mishandled during migration, the security guarantees of Shielded VMs collapse. Therefore, any solution must include a mechanism that validates host trustworthiness, protects key distribution, and enforces strict attestation procedures before a VM can power on or migrate.

Node-level TPM passthrough binds the vTPM directly to the physical TPM of a single host. While this offers strong protection for a static deployment, it prevents migration entirely. The virtual TPM becomes tied to a specific hardware component, which means workloads cannot move to another server without recreating or exposing key material. This directly conflicts with the requirement for secure migration across hosts. Therefore, this approach is unsuitable whenever mobility or elasticity is needed.

Cluster Shared Volume redirected mode is a storage continuity mechanism used during path failures or maintenance operations. It allows a node to route storage traffic through another node, ensuring continuous disk access. Although critical for storage resilience, it has no relation to key protection, encryption workflows, or trust-based host validation. It does not help with vTPM security or encrypted VM mobility.

Migrating virtual machines without encryption completely violates the safety model for handling Shielded VMs. When encryption is bypassed, the VM’s sensitive contents—including vTPM secrets—could be exposed in transit. This approach violates compliance frameworks, undermines confidentiality, and disregards the need for host attestation. It is never an acceptable solution for handling protected workloads.

The only mechanism that fulfills all security and mobility requirements is implementing Shielded VMs backed by a Host Guardian Service. HGS provides key protection, host attestation, and guarded fabric capabilities that ensure only trusted, authorized hosts can run or migrate Shielded VMs. HGS issues decryption keys to legitimate hosts, prevents unauthorized operations, and keeps vTPM material encrypted at all times. This ensures secure live migration while maintaining compliance, confidentiality, and operational flexibility. Because it is specifically designed to protect encrypted workloads during moves between guarded hosts, it is the correct solution.
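
In practice, each guarded Hyper-V host is pointed at the HGS attestation and key-protection endpoints. A minimal sketch, assuming the Host Guardian Hyper-V Support feature is installed and `hgs.contoso.com` (a placeholder FQDN) is a reachable HGS cluster:

```powershell
# Point this Hyper-V host at the HGS attestation and key-protection services.
# 'hgs.contoso.com' is a placeholder; substitute your HGS cluster's FQDN.
Set-HgsClientConfiguration `
    -AttestationServerUrl   'http://hgs.contoso.com/Attestation' `
    -KeyProtectionServerUrl 'http://hgs.contoso.com/KeyProtection'

# Confirm the host passes attestation before it runs or receives Shielded VMs.
Get-HgsClientConfiguration   # AttestationStatus should report 'Passed'
```

Only hosts whose attestation status is healthy receive the keys needed to start or accept a migrated Shielded VM, which is what keeps vTPM material protected end to end.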

Question 82

You manage a Windows Server 2022 failover cluster hosting critical SQL Server VMs. You need to prevent multiple critical VMs from running on the same node and allow automatic rebalancing. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

Ensuring that multiple critical SQL Server virtual machines never run on the same node is essential for maintaining high availability and minimizing the blast radius of a single node failure. When several mission-critical VMs end up co-located on a single host, the loss of that node creates a multi-service outage. Therefore, an intelligent mechanism is needed that distributes workloads, enforces separation rules, and automatically rebalances placement as conditions change.

VM start order is useful when orchestrating a deliberate server startup sequence after maintenance, but it has no influence on runtime placement. It ensures dependent services start in the correct order but does not stop critical workloads from ending up on the same node during failover or spontaneous balancing events. It provides operational control but not placement enforcement.

Preferred owners define which hosts are ideal for running a VM. While these preferences guide placement during initial scheduling, they are not binding. If a failover occurs, the cluster may run multiple preferred workloads on a single node. Preferred owners do not provide a method to enforce workload separation or automatically redistribute VMs as resource demand evolves.

Cluster quorum settings determine how many cluster members must be online for the cluster to function. While quorum directly affects fault tolerance and cluster survivability, it has no relationship to VM placement, workload distribution, or load-balancing decisions. It ensures the cluster remains operational, not how VMs are placed.

The only configuration that enforces host separation while continuously managing placement is establishing anti-affinity rules combined with dynamic optimization. Anti-affinity rules define that certain VMs must not run on the same node. The cluster respects these policies during scheduling and failovers. Dynamic optimization adds continuous monitoring and automatic redistribution of virtual machines when load conditions or maintenance events cause unintended placement. This ensures that workloads remain evenly and appropriately distributed across the cluster even after failovers, peak demand periods, or routine operational shifts. Because the environment includes critical SQL Server workloads, ensuring they remain isolated protects against multi-VM outages and aligns with high availability best practices. Combining placement rules with automated balancing ensures long-term compliance with separation requirements, making this the correct configuration.
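
Anti-affinity for cluster groups is expressed through the `AntiAffinityClassNames` property: groups sharing the same class name are kept apart where possible. A minimal sketch, with placeholder VM names:

```powershell
# Give both SQL VMs' cluster groups the same anti-affinity class so the
# cluster avoids co-locating them on one node. VM names are placeholders.
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add('SQLCritical') | Out-Null

(Get-ClusterGroup -Name 'SQL-VM1').AntiAffinityClassNames = $class
(Get-ClusterGroup -Name 'SQL-VM2').AntiAffinityClassNames = $class

# Verify the assignment.
Get-ClusterGroup | Select-Object Name, AntiAffinityClassNames
```

Any number of groups can share a class name, and a group can carry several class names if it participates in more than one separation policy.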

Question 83 

You manage Windows Server 2022 with Azure File Sync. Branch servers frequently recall large files, causing network congestion. You must prioritize critical files and reduce bandwidth usage. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Managing Azure File Sync environments efficiently requires balancing file recall behavior and network performance. Branch servers frequently retrieving large files can overwhelm available bandwidth, causing slowdowns across the network. This becomes especially problematic when critical workloads compete with less important recall operations. Therefore, the solution must focus on implementing a mechanism that prioritizes essential file recalls while reducing overall congestion.

Cloud tiering minimum file age controls how soon newly created or modified files can be tiered to the cloud. While useful for delaying tiering of active files, it has no effect on recall prioritization. It does not differentiate between important and non-important files once they are tiered, nor does it help manage network saturation during large recall events. It serves as a tiering management tool rather than a bandwidth prioritization mechanism.

Offline files mode ensures file availability when connectivity is unreliable. Activating offline file features improves end-user access when disconnected but does not shape recall behavior, prioritize retrieval order, or regulate bandwidth usage during sustained recall operations. It improves local accessibility rather than optimizing cloud recall traffic.

Background tiering deferral delays when files are tiered back to the cloud but does not address file recall operations. Tiering deferral only reduces upload activity, not download intensity, during heavy recall periods. Therefore, it does not solve the requirement to prioritize critical files or mitigate bandwidth spikes during simultaneous recall events.

The correct mechanism is recall priority policies. These policies permit administrators to designate critical paths, directories, or file types as higher priority. When multiple recalls occur, Azure File Sync retrieves high-priority items first, delaying or slowing the recall of less important files. This ensures critical workloads access what they need immediately, reducing the performance impact on branch offices. By shaping the recall queue, these policies manage bandwidth consumption intelligently and prevent excessive congestion caused by bulk or low-priority recall traffic. They improve responsiveness, support optimized network usage, and directly address the issue of repeated large-file recalls overwhelming available capacity. Because they align with both prioritization and bandwidth optimization objectives, recall priority policies are the correct configuration.
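
The exact policy surface varies by Azure File Sync agent version. As one concrete, hedged companion technique, critical paths can be pre-recalled with the agent's server cmdlets so they are already local and never compete with ad-hoc recall traffic over the WAN (the module path is the default agent install location; the share path is a placeholder):

```powershell
# Load the Azure File Sync server cmdlets (default agent install path).
Import-Module 'C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll'

# Proactively recall the business-critical folder so user access to it never
# triggers on-demand recalls across the WAN. Path is a placeholder.
Invoke-StorageSyncFileRecall -Path 'D:\Shares\Finance'
```

Running this during off-peak hours front-loads the bandwidth cost for critical data while leaving low-priority content tiered in the cloud.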

Question 84 

You manage a Windows Server 2022 RDS deployment integrated with Azure MFA. Users report login failures due to delayed MFA responses. You must reduce authentication failures without compromising MFA security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

In Remote Desktop Services deployments integrated with Azure MFA, authentication delays can disrupt user access and cause repeated login failures. This commonly occurs when the NPS extension requires more time to receive or validate a user’s MFA response. The goal in this scenario is to reduce authentication failures without weakening security or disabling MFA. Therefore, any solution must enhance reliability while preserving strict security enforcement.

Reducing session persistence affects how long an RDS session remains active before requiring reauthentication. Although it can limit stale sessions, it does not influence the timing of MFA response processing. Shorter persistence may even increase MFA prompts, worsening reliability when MFA delays are already present. It affects session lifecycle rather than authentication timing, making it irrelevant to the problem.

Disabling conditional access removes policy-driven MFA enforcement altogether. While it would effectively stop MFA delays, it directly contradicts the requirement to maintain MFA security. Eliminating conditional access invalidates compliance safeguards and leaves RDS vulnerable to unauthorized access, making it an unacceptable solution.

Persistent cookies maintain an authenticated state for users, reducing the frequency of MFA prompts after an initial login. However, they do not improve the processing time of the first MFA interaction during login. When authentication delays stem from timeout issues between RDS, NPS, and Azure MFA, persistent cookies do not resolve the underlying latency. They improve user convenience but not the reliability of the handshake between systems.

Increasing the NPS extension timeout directly addresses the root cause of delayed MFA responses. When the default timeout expires prematurely, even valid MFA approvals fail to arrive in time. Extending the timeout gives the authentication workflow additional time to complete verification while fully maintaining security controls. This prevents failures caused by temporary network delays, Azure service responsiveness, or slow push-notification responses. The adjustment improves reliability without reducing security, weakening policy controls, or bypassing MFA. It directly reinforces the authentication pipeline, stabilizing login experiences for RDS users in hybrid environments.

Because it resolves MFA-related authentication failures without compromising security, increasing the NPS extension timeout is the appropriate action.
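
The timeout itself is raised in the NPS console on the RADIUS client side (Remote RADIUS Server Groups, server properties, Load Balancing tab — for example 60 seconds for "number of seconds without response"). As a hedged, read-only companion check, the NPS extension's settings written by its installer can be inspected from PowerShell:

```powershell
# Inspect the Azure MFA NPS extension configuration written by its installer.
# (Read-only check; the RADIUS timeout itself is raised in the NPS console
# under Remote RADIUS Server Groups > Properties > Load Balancing.)
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\AzureMfa'
```

Remember to raise the timeout on every hop in the RADIUS chain (for example both the RD Gateway's remote RADIUS server group and any intermediate NPS proxy), or the shortest timeout still wins.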

Question 85 

You manage a Windows Server 2022 Hyper-V cluster. Certain VMs require encryption and secure mobility between hosts. You need to ensure virtual TPM keys remain protected during migration. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Ensuring that virtual TPM keys remain protected during Hyper-V VM migration requires a secure, trusted environment where only authorized hosts can run encrypted workloads. During live migration, sensitive key material must never be exposed in transit or accessible to unauthorized systems. Therefore, key management must be centrally controlled, protected, and distributed only to verified hosts.

Node-level TPM passthrough binds the virtual machine’s TPM functionality to the host’s physical TPM hardware. While this protects the VM while it runs on that specific server, it prevents migration because the vTPM cannot travel with the VM. Attempting to migrate such a configuration breaks the security model and risks exposing protected keys. As a result, this approach is incompatible with any scenario requiring encrypted VM mobility.

Cluster Shared Volume redirected mode handles storage path issues and ensures that VMs retain access to shared storage during outages or maintenance. Although critical for cluster resiliency, it has no involvement in encryption, vTPM protection, or key distribution. It cannot enforce trust boundaries, nor can it protect TPM keys during migration.

Live migration without encryption explicitly contradicts the requirement for secure movement. Sending VM memory and state without encryption exposes secrets, credentials, and sensitive data. This method provides no protection for vTPM key material and would violate compliance and best practices. Therefore, it cannot be considered an acceptable option.

Shielded VMs with Host Guardian Service form the dedicated security framework designed precisely for this scenario. HGS performs host attestation, ensuring that only trusted and compliant Hyper-V hosts receive the keys needed to run or migrate a Shielded VM. It protects vTPM keys by encrypting them and distributing them solely to attested hosts using a guarded fabric. HGS enforces trust integrity across the cluster, prevents unauthorized hosts from accessing encrypted workloads, and ensures that migration workflows maintain full key confidentiality. This model preserves security while still allowing flexible workload mobility, meeting both encryption and operational requirements.

Because it uniquely provides secure vTPM protection, trusted host verification, and safe encrypted migration, implementing Shielded VMs with Host Guardian Service is the correct solution.
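
For illustration, a lab-mode sketch of attaching a key protector and virtual TPM to an existing VM. The VM and guardian names are placeholders, and `-AllowUntrustedRoot` is a test-only shortcut for environments without a production HGS certificate chain:

```powershell
# Lab-mode sketch: wrap the VM's vTPM state in a key protector.
# 'VM01' and 'LabGuardian' are placeholders; -AllowUntrustedRoot is for
# test labs only -- production keys come from an attested HGS fabric.
$guardian = New-HgsGuardian -Name 'LabGuardian' -GenerateCertificates
$kp = New-HgsKeyProtector -Owner $guardian -AllowUntrustedRoot

Set-VMKeyProtector -VMName 'VM01' -KeyProtector $kp.RawData
Enable-VMTPM -VMName 'VM01'
```

In a guarded fabric the owner and guardian certificates come from HGS rather than a locally generated guardian, which is what allows any attested host — and only an attested host — to unwrap the vTPM state after migration.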

Question 86 

You manage Windows Server 2022 Hyper-V hosts running Shielded VMs. You need to allow encrypted VMs to migrate between hosts securely. What should you implement?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Node-level TPM passthrough binds virtual TPM keys to a specific host. It allows VMs to run securely on that host, but migrating the VM to another host would expose sensitive keys, which violates security requirements.

Cluster Shared Volume redirected mode provides storage resiliency, ensuring that VMs can continue to access data during I/O failures. However, it does not manage encryption keys or enable secure migration of Shielded VMs. Its purpose is storage availability rather than VM encryption security.

Migrating VMs without encryption removes protections provided by BitLocker and virtual TPM. This exposes sensitive workloads and violates compliance policies. This method does not satisfy security requirements for encrypted VM mobility.

Shielded VMs with Host Guardian Service allow encrypted VMs to migrate between authorized hosts securely. Host Guardian Service manages key distribution and attestation, ensuring that virtual TPM keys are protected at all times. This approach maintains both security and operational flexibility, allowing encrypted workloads to move safely across hosts.

To enable encrypted VM migration while protecting virtual TPM keys, Shielded VMs with Host Guardian Service is required. Therefore, the correct answer is B.

Question 87 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. You need to prevent critical VMs from running on the same host and support automatic rebalancing. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

VM start order controls the boot sequence of virtual machines during cluster startup. While this ensures dependencies are maintained, it does not prevent multiple critical VMs from being co-located on a single node. Start order addresses startup sequencing, not isolation of workloads.

Preferred owners guide VM placement by indicating preferred nodes. Although this can influence initial placement, it does not strictly enforce separation. Critical VMs may still reside on the same host during dynamic balancing or maintenance.

Anti-affinity rules with dynamic optimization enforce separation policies across nodes. The cluster continuously monitors placement and resource usage, automatically rebalancing VMs during maintenance or load spikes. This reduces the risk of multiple critical workloads failing if a single node goes down and ensures high availability.

Cluster quorum settings determine the number of nodes required for cluster operation during failures. While critical for resiliency, quorum does not affect VM placement or enforce anti-co-location rules.

For maintaining VM separation and automatic rebalancing, anti-affinity rules with dynamic optimization are required. Therefore, the correct answer is C.
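
"Dynamic optimization" is System Center VMM terminology; a standalone failover cluster ships the analogous built-in VM load balancing (node fairness), controlled by two cluster common properties. A minimal sketch:

```powershell
# Enable the cluster's built-in VM load balancing (node fairness).
# AutoBalancerMode:  0 = disabled, 1 = balance on node join,
#                    2 = balance on node join and periodically.
# AutoBalancerLevel: 1 = low ... 3 = aggressive balancing threshold.
$cluster = Get-Cluster
$cluster.AutoBalancerMode  = 2
$cluster.AutoBalancerLevel = 3
```

Automatic moves made by the balancer still honor `AntiAffinityClassNames`, so separation policy and rebalancing work together rather than against each other.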

Question 88 

You manage Windows Server 2022 with Azure File Sync. Branch servers frequently recall large files, causing network congestion. You need to prioritize critical files and reduce bandwidth usage. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Cloud tiering minimum file age delays how soon newly created or modified files become eligible for tiering to the cloud. Because recently used files stay local longer, it can indirectly reduce unnecessary recalls, but it does not allow prioritization of critical files over others.

Offline files mode keeps files available locally when disconnected. While it improves availability, it does not manage which files are recalled first or optimize bandwidth consumption.

Background tiering deferral schedules offloading of files to the cloud at later times, reducing overall network activity. However, it does not ensure that high-priority files are retrieved first. Critical files could still be delayed, which does not meet the requirement.

Recall priority policies allow administrators to assign importance levels to specific files or directories. High-priority files are recalled first, ensuring essential workloads remain responsive while reducing network usage for less important files. This approach provides predictable behavior and bandwidth optimization in hybrid environments.

For prioritizing critical files and optimizing network bandwidth, recall priority policies must be configured. Therefore, the correct answer is B.
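
Alongside recall ordering, the agent's server cmdlets support scheduled bandwidth caps, which blunt congestion during business hours. Whether recall traffic honors these limits has varied across agent versions, so treat this as a sketch to validate in your environment:

```powershell
Import-Module 'C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll'

# Cap Azure File Sync traffic to ~20 Mbps during weekday business hours.
New-StorageSyncNetworkLimit -Day Monday,Tuesday,Wednesday,Thursday,Friday `
    -StartHour 8 -EndHour 18 -LimitKbps 20000

Get-StorageSyncNetworkLimit   # review the configured windows
```

Outside the defined windows the agent uses whatever bandwidth is available, so overnight bulk recalls remain fast.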

Question 89 

You manage a Windows Server 2022 RDS deployment integrated with Azure MFA. Users report login failures due to delayed MFA responses. You must reduce authentication failures without compromising MFA security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

Reducing session persistence shortens active session duration. While it affects session expiration, it does not address delays caused by slow MFA responses. Shorter persistence may increase re-authentication attempts, worsening login failures.

Disabling conditional access bypasses MFA verification, compromising security. This does not meet the requirement to maintain enforced multi-factor authentication.

Persistent cookies improve convenience by reducing repeated MFA prompts after authentication, but they do not resolve initial handshake delays causing login failures. Cookies only affect post-authentication behavior, not the authentication process itself.

Increasing the NPS extension timeout allows additional time for MFA verification to complete. This accommodates network latency or temporary service delays without reducing security. Users authenticate successfully while MFA enforcement remains intact.

For hybrid RDS environments with delayed MFA responses, extending the NPS extension timeout ensures reliable authentication. Therefore, the correct answer is B.

Question 90 

You manage Windows Server 2022 Hyper-V hosts. Certain VMs require encryption and secure mobility between hosts. You need to ensure virtual TPM keys remain protected during migrations. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation:

Node-level TPM passthrough binds virtual TPM keys to a single host. It secures the VM on that host but prevents migration without exposing keys, which violates security requirements.

Cluster Shared Volume redirected mode ensures continued storage access during path failures but does not protect virtual TPM keys or manage secure migration of Shielded VMs. It focuses on storage continuity rather than encryption.

Migrating VMs without encryption removes all protections, exposing sensitive workloads. This approach does not meet compliance or security requirements.

Shielded VMs with Host Guardian Service provide secure key management and attestation. Only authorized hosts can decrypt and run the VMs, and virtual TPM keys are never exposed during migration. This ensures encrypted workloads move safely across hosts while maintaining compliance and confidentiality.

To enable secure VM migration with protected virtual TPM keys, Shielded VMs with Host Guardian Service is required. Therefore, the correct answer is B.

Question 91

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. You need to ensure critical VMs are not co-located on the same node while enabling automatic load balancing. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation:

Ensuring that critical VMs never run together on the same Hyper-V node while still allowing the cluster to automatically balance workloads requires a mechanism that both enforces placement rules and works dynamically during ongoing cluster operations.

VM start order is useful for defining boot sequences during cluster startup, especially when dependencies exist across application tiers. However, it has no influence on where VMs run once the cluster is operational, nor does it prevent sensitive or mission-critical VMs from being placed on the same host.

Preferred owners allow administrators to define which nodes should host a particular virtual machine under normal circumstances, but this guidance is soft, not absolute. If the cluster experiences failover, rebalancing events, or maintenance situations, preferred owners cannot guarantee that two critical VMs remain separated across nodes. They provide initial placement preference only but do not enforce anti-collocation.

Cluster quorum settings determine how the cluster maintains availability when nodes go offline. While quorum is crucial for cluster resiliency and ensuring that the cluster remains operational during failures, it provides absolutely no influence on workload placement or VM-to-host distribution. Quorum configurations protect cluster behavior, not VM separation or dynamic rebalancing.

Anti-affinity rules combined with dynamic optimization directly address the requirement by creating policies that tell the cluster to avoid placing designated VMs on the same host. Anti-affinity alone defines separation intent, while dynamic optimization adds active monitoring and automatic balancing capabilities, continuously ensuring compliance even after failovers, host maintenance, or resource fluctuations. As workloads shift or hosts experience load pressure, dynamic optimization automatically migrates VMs to appropriate nodes while honoring anti-affinity settings. This maintains both high availability and placement isolation, ensuring that critical VMs remain distributed across the cluster to minimize impact from a node failure. Therefore, the correct answer is C.

Question 92

You manage Windows Server 2022 with Azure File Sync. Large files are frequently recalled on branch servers, causing high bandwidth usage. You must reduce network impact and ensure important files are prioritized. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Azure File Sync environments often experience bandwidth pressure when branch servers repeatedly recall large cloud-tiered files, especially when multiple users or applications request substantial datasets.

Cloud tiering minimum file age delays how soon recently created or modified files become eligible for tiering to the cloud. While this may keep actively used files local and reduce some recall activity, it does not offer any capability to prioritize certain files ahead of others. Its function is based on file age, not business importance or recall urgency.

Offline files mode focuses on allowing cached files to be available offline, typically for roaming users or intermittent connectivity scenarios. This feature does not control how or when recall operations are performed, nor does it help differentiate critical files from less important ones during recall.

Background tiering deferral helps reduce network congestion that occurs when Azure File Sync offloads local files back to the cloud. This process runs independently from recall operations and deals with outbound data movement rather than inbound recall activity. While deferral can optimize times when tiering occurs, it does not provide any prioritization for files being pulled from the cloud.

Recall priority policies are specifically designed for these scenarios. They allow administrators to classify files or directory paths with priority levels that influence recall order. When large volumes of data need to be restored or accessed, the system retrieves high-priority files first, ensuring that important workloads remain highly responsive. This reduces wait times for critical business processes while preventing less essential files from consuming network bandwidth prematurely. By applying recall priority policies, organizations can guarantee that mission-critical datasets are available before lower-value content is fetched, providing predictable performance and helping preserve bandwidth during peak usage periods. Therefore, the correct answer is B.

Question 93 

You manage a Windows Server 2022 RDS deployment integrated with Azure MFA. Users report failed logins due to delayed MFA responses. You must reduce authentication failures without compromising security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

When integrating Windows Server 2022 Remote Desktop Services with Azure MFA using the NPS extension, authentication relies on synchronous MFA validation. If the MFA response from Azure is delayed due to network conditions, temporary service latency, or device-side delays, users may experience failed logins even if their identity and credentials are valid. Reducing session persistence shortens the duration of user sessions and causes more frequent re-authentication, which can actually worsen the problem by creating more MFA challenges. It does not address the root cause, which is insufficient time for the NPS extension to complete MFA validation.

Disabling conditional access might reduce friction by bypassing MFA requirements, but this directly compromises security and breaks compliance. It violates organizational policies that require strong authentication and should never be used to mitigate latency issues. Persistent cookies help minimize repeated MFA prompts for browser-based authentication workloads, but do not apply to NPS-driven RDS authentication flows. Even where they are supported, persistent cookies do not influence whether an MFA request completes successfully during the first authentication attempt.

Increasing the NPS extension timeout increases the amount of time the server waits for MFA verification before declaring the attempt failed. This setting is specifically designed for situations where temporary delays occur between the NPS extension and Azure MFA services. By extending this timeout, users get additional time to approve the MFA challenge on their devices, significantly reducing authentication failures without relaxing security requirements. The login process continues to enforce MFA, but with improved resilience against minor delays. Therefore, the correct answer is B.

Question 94 

You manage Windows Server 2022 Hyper-V hosts. Certain VMs require encryption and secure mobility. You need to ensure virtual TPM keys are protected during migration. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation:

Securing virtual TPM keys during migration involves ensuring that sensitive key material remains protected even when a VM moves between Hyper-V hosts. Node-level TPM passthrough binds TPM functions to a single physical host, which prevents virtual TPM keys from traveling with the VM during migration. As a result, any attempt to move the VM to another host breaks the security model and makes the VM non-functional. This does not meet the requirement for secure mobility.

Cluster Shared Volume redirected mode ensures storage continuity when direct paths fail, but this mechanism does not provide any protection for encryption keys, virtual TPM data, or migration security. It strictly deals with storage path resiliency and has no involvement in key management or VM encryption.

Live migration without encryption leaves the migration traffic exposed during the move. Sensitive data, including memory content and TPM-related material, could potentially be intercepted. This option is inappropriate for secure workloads.

Shielded VMs combined with Host Guardian Service deliver authenticated and authorized host-based protections for virtual TPM keys. The Host Guardian Service stores and releases keys only to hosts that meet attestation and health requirements. During migration, the destination host must prove its trustworthiness before it receives the keys needed to run the VM. This ensures that even if a VM is moved, its encrypted components—including virtual TPM material—remain protected. Therefore, the correct answer is B.

Question 95 

You manage a Windows Server 2022 failover cluster with SQL Server VMs. Critical workloads must not reside on the same host, and automatic balancing is required. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

VM start order is used to dictate the order in which virtual machines boot when a cluster starts or recovers from a shutdown. While useful for hierarchical applications, it does not influence whether two critical VMs end up running on the same host. It simply ensures a consistent boot order and offers no enforcement of separation or load balancing.

Preferred owners allow administrators to define which hosts should handle which VMs when the cluster is functioning normally. However, these settings are treated as guidance rather than strict rules. During failovers, rebalancing, or host maintenance, preferred owners do not guarantee separation. Two critical VMs may still end up on the same node if the cluster determines doing so is acceptable for resource availability.

Cluster quorum settings determine the number of nodes required for the cluster to remain operational. They ensure the cluster does not split or behave unpredictably during partial failures. Quorum has no relation to workload layout, VM mobility, or any form of placement enforcement.

Anti-affinity rules define a policy that explicitly prevents selected VMs from running together on the same node. However, without dynamic optimization, enforcement only occurs during failover or initial placement. Dynamic optimization expands this by constantly analyzing cluster workloads and automatically rebalancing them as resource demands change. When both are combined, the cluster continuously honors separation requirements and maintains balanced distribution even during maintenance or dynamic load shifts. Therefore, the correct answer is C.
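As an illustrative sketch (rule and VM role names are placeholders), Windows Server 2022 failover clustering exposes affinity rules directly through PowerShell; a DifferentNode rule keeps the listed roles apart:

```powershell
# Create a rule that forbids the listed cluster groups from sharing a node.
New-ClusterAffinityRule -Name 'SqlAntiAffinity' -RuleType DifferentNode

# Add the critical SQL Server VM roles to the rule (names are placeholders).
Add-ClusterGroupToAffinityRule -Name 'SqlAntiAffinity' -Groups 'SQL-VM1','SQL-VM2'

# Review configured rules and their member groups.
Get-ClusterAffinityRule
```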

Question 96 

You manage Windows Server 2022 with Azure File Sync. Branch servers frequently recall large files, impacting bandwidth. You need to prioritize essential files and reduce unnecessary network usage. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation:

Cloud tiering minimum file age is a mechanism used to control when recently modified or created files become eligible for tiering. It delays the offloading of new files to the cloud and may help reduce unnecessary recalls. However, it cannot prioritize certain files or ensure that essential files are brought down before others. It operates on time-based criteria rather than importance-based rules.

Offline files mode ensures users can continue accessing cached files while disconnected from the network. This feature is typically used in laptop scenarios and does not influence recall logic, prioritization, or bandwidth efficiency. It cannot control which files Azure File Sync retrieves first.

Background tiering deferral allows administrators to postpone the process of offloading files from local storage to the cloud. This can help with bandwidth optimization, but again, deferral affects outbound tiering operations, not inbound recalls. It offers no ability to prioritize files.

Recall priority policies provide the exact functionality needed. Administrators can mark specific directories or datasets as high priority. When a recall occurs—whether triggered by user access, application need, or server processing—Azure File Sync retrieves high-priority files first. This ensures bandwidth is used intelligently, essential workflows remain responsive, and unnecessary recalls are deprioritized. Recall priority policies give organizations fine-grained control that directly addresses challenges of large-scale recalls in bandwidth-constrained branch environments. Therefore, the correct answer is B.
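A related, documented server-side capability is proactive recall: the cmdlets installed with the Azure File Sync agent can pull an essential directory back to local disk ahead of demand, so high-priority data is warm before branch users request it (the share path below is a placeholder, and the module path is the agent's typical install location):

```powershell
# Import the cmdlets installed with the Azure File Sync agent
# (typical install path shown; adjust if the agent is installed elsewhere).
Import-Module 'C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll'

# Recall a high-priority folder so its files are local before users need them,
# instead of triggering on-demand recalls during business hours.
Invoke-StorageSyncFileRecall -Path 'E:\Shares\Critical'
```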

Question 97 

You manage a Windows Server 2022 RDS deployment with Azure MFA. Users report failed logins due to delayed MFA responses. You must reduce login failures while maintaining security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

Reducing session persistence affects how often users must reauthenticate but does not resolve problems with delayed MFA responses. If anything, shorter session persistence increases authentication frequency, resulting in more MFA prompts and amplifying the user impact of delays.

Disabling conditional access might eliminate MFA failures, but doing so disables required security controls, violates standards, and introduces compliance risks. No organization that relies on MFA for secure access would consider disabling conditional access a viable solution.

Persistent cookies help certain browser-based authentication flows by remembering recent MFA verification and reducing repeated prompts. However, RDS logons authenticated through the NPS extension do not use persistent cookie mechanisms. Even if they did, persistent cookies do not fix delayed MFA responses caused by external latency.

Increasing the NPS extension timeout extends the window during which the RADIUS infrastructure waits for Azure MFA validation before treating the request as failed. MFA validation can take longer due to mobile network delays, push-notification lags, or temporary Azure service latency. When the timeout is too short, users who approve the MFA prompt slightly late are rejected even though they responded correctly. Extending the timeout keeps the authentication process stable despite minor delays in push notifications, maintaining strong authentication while significantly improving success rates. Therefore, the correct answer is B.

Question 98 

You manage Windows Server 2022 Hyper-V hosts. Certain VMs require encryption and secure mobility between hosts. You need to protect virtual TPM keys during migration. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation:

Node-level TPM passthrough restricts the virtual TPM to a specific physical server. If the VM moves, the TPM information does not move with it. This makes migration impossible for secure workloads and fails to meet the requirement.

Cluster Shared Volume redirected mode ensures storage reliability when a network path fails but does not protect encryption keys or virtual TPM data. It is purely a storage failover mechanism.

Live migration without encryption leaves VM state data unprotected during transfer. Sensitive TPM data is included in memory state and can be exposed, directly violating the security requirement.

Shielded VMs with Host Guardian Service provide a framework where only trusted hosts can run protected VMs. During migration, the Host Guardian Service verifies the destination host’s health and identity before releasing keys. This ensures virtual TPM keys remain secure during and after migration. The entire migration path is protected, maintaining compliance and preventing key exposure. Therefore, the correct answer is B.
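For illustration, the way a virtual TPM is bound to a key protector can be sketched in local (lab) mode; the guardian and VM names are placeholders, and in production the protector would come from HGS rather than a locally generated guardian:

```powershell
# Create a local owner guardian (lab/testing only; production deployments
# obtain guardians from the Host Guardian Service).
$guardian = New-HgsGuardian -Name 'LabGuardian' -GenerateCertificates

# Build a key protector from the guardian, attach it to the VM,
# then enable the VM's virtual TPM.
$kp = New-HgsKeyProtector -Owner $guardian -AllowUntrustedRoot
Set-VMKeyProtector -VMName 'SecureVM01' -KeyProtector $kp.RawData
Enable-VMTPM -VMName 'SecureVM01'
```

In an HGS-backed deployment, the same key protector structure is what the Host Guardian Service releases only to attested hosts, which is what keeps the vTPM material protected during migration.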

Question 99 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. Critical VMs must not reside on the same node, and automatic load balancing is required. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

VM start order deals only with boot sequencing. It does not prevent workloads from collocating or dynamically shifting during load balancing or recovery operations.

Preferred owners suggest where a VM should run, but the cluster may choose other nodes if necessary. They do not enforce separation or guarantee distribution of critical workloads.

Cluster quorum settings ensure the cluster maintains correct operational status but have no effect on VM placement or balancing.

Anti-affinity rules with dynamic optimization ensure critical VMs remain separated at all times and rebalance if required. Dynamic optimization keeps monitoring workload distribution and moves VMs as needed. This ensures separation is continuously honored even during failovers. Therefore, the correct answer is C.
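The classic property-based form of anti-affinity, combined with the cluster's built-in automatic VM load balancing (node fairness), can be sketched as follows (group names are placeholders; the exact utilization thresholds behind each balancing level vary):

```powershell
# Tag both critical VMs with the same anti-affinity class so the cluster
# avoids placing them on the same node.
(Get-ClusterGroup 'SQL-VM1').AntiAffinityClassNames = 'SQLServers'
(Get-ClusterGroup 'SQL-VM2').AntiAffinityClassNames = 'SQLServers'

# Enable automatic VM load balancing: mode 2 balances both when a node
# joins and periodically; level 3 rebalances most aggressively.
(Get-Cluster).AutoBalancerMode  = 2
(Get-Cluster).AutoBalancerLevel = 3
```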

Question 100 

You manage Windows Server 2022 with Azure File Sync. Branch servers recall large files frequently, causing network congestion. You must prioritize critical files and optimize bandwidth usage. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Cloud tiering minimum file age helps control how quickly newly created or modified files become eligible for tiering, which can reduce unnecessary cloud interactions. Its primary function is to prevent very recent files from being offloaded too soon, preserving them locally for a period in case users need to access them again. While this can reduce certain types of network traffic, it does not offer any mechanism for determining which files are more important to the business. Because it operates strictly based on modification time rather than business value, it cannot ensure that high-priority files are recalled sooner than others. As a result, it does not address situations in which essential datasets must be readily available before less important content.
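The time-based mechanism described above is configured on the server endpoint. As a hedged sketch (all resource names are placeholders), the Az.StorageSync module exposes it roughly like this; note that it filters purely by file age, not by business priority:

```powershell
# Keep files local for 60 days after last modification before they become
# eligible for tiering; age-based only, with no notion of file importance.
Set-AzStorageSyncServerEndpoint `
    -ResourceGroupName 'rg-afs' `
    -StorageSyncServiceName 'afs-svc' `
    -SyncGroupName 'branch-sg' `
    -Name 'branch01' `
    -CloudTiering `
    -TierFilesOlderThanDays 60
```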

Offline files mode is designed to keep locally cached files accessible when network connectivity is poor or unavailable. The purpose is continuity of access rather than bandwidth optimization. While this feature ensures that users can continue working with files stored in their offline cache, it does not influence Azure File Sync’s recall behavior. It cannot control which files are retrieved first from cloud storage during periods of heavy recall activity. Offline files mode also does nothing to minimize the bandwidth impact caused by simultaneous recall operations, since its goal is user availability, not cloud synchronization efficiency.

Background tiering deferral helps postpone the offloading of files from servers to the cloud, usually to avoid adding unnecessary load during peak hours. Although useful for scheduling when tiering occurs, it only affects the transfer of files being pushed upward to Azure, not those being recalled downward. Because it handles outbound operations and not inbound recall sequences, it cannot influence bandwidth consumption caused by frequent or large recalls. It also cannot differentiate between critical and non-critical data.

Recall priority policies, however, directly address the problem of bandwidth optimization during recall operations. By allowing administrators to designate certain files, folders, or workloads as higher priority, Azure File Sync retrieves those items before anything else. This ensures that essential business processes receive the data they need immediately, even when the network is under stress. Lower-priority files are retrieved only after high-value content has been restored, preventing bottlenecks and delays. With this approach, branch locations can function efficiently despite limited bandwidth, and organizations retain predictable performance for critical workloads. To ensure that the most important files are recalled first and bandwidth is used intelligently, recall priority policies must be configured. Therefore, the correct answer is B.

 
