Microsoft AZ-801 Configuring Windows Server Hybrid Advanced Services Exam Dumps and Practice Test Questions Set 10 Q181-200


Question 181 

You manage Windows Server 2022 Hyper-V hosts running Shielded VMs. You need to migrate encrypted VMs between hosts while keeping virtual TPM keys secure. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Shielded VMs with Host Guardian Service (HGS) are designed to ensure the confidentiality, integrity, and secure migration of sensitive virtual machines in a Hyper-V environment. When you deploy Shielded VMs, the VM’s virtual TPM (vTPM) keys are encrypted and only released to trusted hosts that are attested by the HGS. This attestation process validates that a host meets security policies, including operating system version, Hyper-V configuration, and secure boot integrity, before allowing the VM to run. This approach ensures that the sensitive encryption keys never leave a trusted environment and remain protected even during live migration.

Node-level TPM passthrough, on the other hand, allows a VM to directly access a physical TPM on the host. While this secures the VM locally, it creates a significant limitation for migration because the TPM keys are bound to the originating host. Migrating a VM to a different host would require exposing these keys or disabling TPM security, both of which violate security compliance requirements.

Cluster Shared Volume redirected mode improves storage resiliency by redirecting I/O when storage paths fail, ensuring VM uptime during hardware or path failures. However, CSV redirection does not manage encryption or virtual TPM keys, and therefore does not solve the challenge of secure migration of encrypted VMs.

Finally, VM live migration without encryption allows the VM state to transfer across hosts without protection. This exposes all sensitive data and encryption keys in transit, violating security policies and compliance requirements.

By implementing Shielded VMs with HGS, administrators gain a secure framework that integrates attestation, encryption key management, and host authorization. This ensures virtual TPM keys are never exposed, VMs can migrate safely across authorized hosts, and compliance requirements are met. Additionally, this method supports operational flexibility, allowing workloads to move between hosts without downtime while maintaining a high security posture. This makes Shielded VMs with Host Guardian Service the most secure and compliant solution for environments requiring secure migration of encrypted VMs.
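The attestation gate described above, where keys are released only to hosts that satisfy the security baseline, can be modeled as a small simulation. This is an illustrative Python sketch only; the host fields, baseline values, and function names are hypothetical and are not the actual HGS policy format or API.

```python
# Illustrative simulation of HGS-style attestation-gated key release.
# Host fields and baseline values are hypothetical, not the HGS API.

TRUSTED_BASELINE = {"secure_boot": True, "os_version_min": (10, 0, 20348)}

def attest_host(host: dict) -> bool:
    """A host passes attestation only if it meets every baseline requirement."""
    return (host["secure_boot"] == TRUSTED_BASELINE["secure_boot"]
            and tuple(host["os_version"]) >= TRUSTED_BASELINE["os_version_min"])

def release_vtpm_key(host: dict, wrapped_key: bytes):
    """Release the vTPM key only to hosts that pass attestation."""
    return wrapped_key if attest_host(host) else None

trusted = {"name": "HV01", "secure_boot": True, "os_version": (10, 0, 20348)}
rogue = {"name": "HV99", "secure_boot": False, "os_version": (10, 0, 20348)}

assert release_vtpm_key(trusted, b"key") == b"key"  # attested host gets the key
assert release_vtpm_key(rogue, b"key") is None      # untrusted host is refused
```

The point of the sketch is that the decision lives with the guardian service, not the host: a host that fails any baseline check never sees the key material at all.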

Question 182 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. Critical workloads must not co-locate, and automatic rebalancing is required. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

In a Windows Server failover cluster, workload placement is critical for both high availability and performance. Anti-affinity rules combined with dynamic optimization provide the most effective method to ensure that critical VMs do not co-locate on the same host while allowing the cluster to automatically rebalance workloads. Anti-affinity rules specify that certain VMs should not reside on the same node, reducing the risk that a single node failure impacts multiple critical workloads.

Dynamic optimization continuously evaluates VM placement within the cluster. If the cluster detects that critical workloads are co-located or that resource utilization is uneven, it automatically migrates VMs to maintain separation and optimize performance. This ensures high availability and load balancing without manual intervention. The system continuously monitors cluster nodes, resource usage, and compliance with anti-affinity policies, providing proactive protection against simultaneous node failures affecting multiple critical workloads.

VM start order only specifies the sequence in which VMs boot. While useful for dependent workloads, it does not enforce separation during runtime or dynamic balancing. Multiple critical VMs could still end up on the same node when the cluster rebalances them based on resource utilization alone.

Preferred owners guide the initial placement of VMs during startup or after failover. They offer placement guidance but do not enforce rules during cluster-driven optimization or live migrations, leaving a possibility of co-location under load balancing or maintenance events.

Cluster quorum settings ensure cluster resiliency and determine the number of node failures the cluster can tolerate without service disruption. However, quorum configuration has no impact on VM placement or separation.

Using anti-affinity rules with dynamic optimization combines both preventive and reactive strategies. It prevents co-location, automatically rebalances workloads, and reduces the risk of downtime or service degradation for critical SQL Server VMs. This configuration ensures high availability, operational efficiency, and adherence to best practices for separating critical workloads. Therefore, anti-affinity rules with dynamic optimization are the only solution that meets both separation and automatic balancing requirements.
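The core check an anti-affinity rule performs, flagging two VMs of the same class on the same node, can be sketched in a few lines. This is an illustrative Python model, not the failover cluster's implementation; the VM names, node names, and class labels are invented.

```python
# Illustrative anti-affinity violation check (not the cluster API).
# An anti-affinity class groups VMs that must not share a node.

def violations(placement: dict, anti_affinity: dict) -> list:
    """Return pairs of same-class VMs placed on the same node."""
    seen = {}   # (class, node) -> first VM observed there
    bad = []
    for vm, node in placement.items():
        cls = anti_affinity.get(vm)
        if cls is None:
            continue  # VM carries no anti-affinity constraint
        key = (cls, node)
        if key in seen:
            bad.append((seen[key], vm))
        else:
            seen[key] = vm
    return bad

placement = {"SQL1": "NodeA", "SQL2": "NodeA", "Web1": "NodeB"}
rules = {"SQL1": "sql-tier", "SQL2": "sql-tier"}
assert violations(placement, rules) == [("SQL1", "SQL2")]  # co-located pair flagged
```

Dynamic optimization is then the loop that runs a check like this continuously and triggers live migrations whenever the list is non-empty.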

Question 183 

You manage Windows Server 2022 with Azure File Sync. Branch servers frequently recall large files, resulting in network congestion. You need to prioritize essential files. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Azure File Sync allows files to be cached on local servers while storing the full dataset in Azure Files. In high-demand branch scenarios, simultaneous recall of large files can overwhelm network bandwidth, reduce performance, and delay access to essential data. Recall priority policies provide administrators with a mechanism to control which files or directories are recalled first, ensuring that critical workloads are served before less important data. These policies help optimize network performance, minimize latency for high-priority workloads, and reduce congestion caused by mass file retrievals.

Cloud tiering minimum file age specifies how long a file must remain local before it is eligible to be tiered to the cloud. While this can prevent unnecessary recalls of newly created files, it does not allow prioritization of essential files already in the cloud. Consequently, it cannot solve the problem of high-priority file retrieval during peak network usage.

Offline files mode enables local availability of cached files even when the server is disconnected from the network. While useful for continuity, offline files do not influence the order in which files are recalled from Azure or the bandwidth allocation for critical workloads.

Background tiering deferral controls the scheduling of cloud tiering operations to reduce network usage during peak hours. Although it helps optimize bandwidth, it does not guarantee that high-priority files are recalled first, so mission-critical file access could still be delayed.

By implementing recall priority policies, administrators can explicitly define which files are most important, ensuring that they are recalled promptly during simultaneous requests. High-priority files are retrieved first, preventing critical workflows from being delayed by less important operations. This configuration enhances both user productivity and network efficiency. Moreover, combining recall priority policies with monitoring tools allows proactive management of branch server workloads, reducing congestion and improving overall system responsiveness. This makes recall priority policies the most effective solution for prioritizing essential files in Azure File Sync deployments.
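The priority-first ordering such a policy aims for can be modeled with a simple priority queue. This is an illustrative Python sketch; the priority values and file names are invented, and this is not the Azure File Sync API.

```python
import heapq

# Illustrative sketch: serving recall requests in priority order.
# Lower priority number = more essential; served first.

def recall_order(requests: list) -> list:
    """Drain a (priority, path) heap and return paths in service order."""
    heap = list(requests)
    heapq.heapify(heap)
    order = []
    while heap:
        _, path = heapq.heappop(heap)
        order.append(path)
    return order

reqs = [(2, "archive.iso"), (0, "payroll.xlsx"), (1, "report.docx")]
assert recall_order(reqs) == ["payroll.xlsx", "report.docx", "archive.iso"]
```

Under contention, the essential file jumps ahead of bulk requests instead of waiting its turn in arrival order.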

Question 184 

You manage a Windows Server 2022 RDS deployment integrated with Azure MFA. Users report login failures due to delayed MFA responses. You must reduce authentication failures without compromising security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

In Remote Desktop Services (RDS) integrated with Azure Multi-Factor Authentication (MFA), user authentication involves two stages: primary authentication via credentials and secondary verification via MFA. Delays in the MFA response—caused by network latency, temporary service interruptions, or delayed push notifications—can result in failed login attempts even though credentials are correct.

Reducing session persistence shortens session lifetimes, forcing users to re-authenticate more frequently. While this increases security in some scenarios, it does not address the core problem of delayed MFA responses. In fact, shorter sessions can exacerbate login failures, as users may be logged out mid-authentication if the MFA verification takes too long.

Disabling conditional access policies would bypass MFA entirely. While this eliminates login failures, it also removes a critical security layer, exposing the environment to risk. Organizations that require MFA for compliance purposes cannot safely disable conditional access without violating regulatory or internal security requirements.

Persistent cookies allow the session to remember previous authentication attempts or device trust settings, reducing repeated prompts. However, they do not address handshake or response delays from MFA providers. Users could still encounter failures if the MFA request is slow or times out.

The NPS extension for Azure MFA introduces a configurable timeout that determines how long the system waits for MFA verification to complete. By increasing this timeout, administrators provide additional time for the secondary authentication process to succeed, even under network delays or temporary service slowness. This ensures that valid authentication attempts are not rejected prematurely, while maintaining security protocols and MFA enforcement.

In practice, configuring the NPS extension timeout to accommodate typical network or service delays ensures smoother user experience, fewer login failures, and adherence to organizational security policies. It balances security with usability, providing reliable authentication without compromising the integrity of MFA. Therefore, increasing the NPS extension timeout is the correct solution for environments experiencing delayed MFA responses.
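The effect of lengthening the wait window can be shown with simple arithmetic over response latencies. The latency values below are invented sample data, used only to illustrate why attempts that exceed the window fail even though the user approved the prompt.

```python
# Illustrative arithmetic: share of MFA verifications that complete within a
# given wait window. Latencies (seconds) are hypothetical sample data.

def success_rate(latencies: list, timeout: float) -> float:
    """Fraction of verification attempts finishing before the timeout."""
    return sum(1 for t in latencies if t <= timeout) / len(latencies)

latencies = [4, 6, 9, 14, 22, 28, 35, 41]  # hypothetical MFA response times

assert success_rate(latencies, 30) == 0.75  # short window: 2 of 8 attempts fail
assert success_rate(latencies, 60) == 1.0   # extended window: all succeed
```

The two slowest attempts in the sample are valid logins; only the window length decides whether they are accepted or rejected.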

Question 185 

You manage Windows Server 2022 Hyper-V hosts. Certain VMs require encryption and secure migration. You need to ensure virtual TPM keys are protected. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Virtual Trusted Platform Module (vTPM) keys provide encryption for sensitive workloads in Hyper-V environments. Protecting these keys is critical for compliance and operational security, especially during VM migration. Shielded VMs combined with Host Guardian Service (HGS) offer a secure framework that manages encryption keys, ensures host attestation, and allows migration across authorized hosts without exposing sensitive keys.

Node-level TPM passthrough allows VMs to access a host TPM directly. While this secures data locally, it ties the VM to a single host. Migrating a VM protected with node-level TPM would require either exposing the keys or reconfiguring TPM, both of which compromise security and violate best practices.

Cluster Shared Volume redirected mode offers storage resiliency during path failures but does not provide encryption management or protect vTPM keys during migration. While CSV redirection ensures uptime and operational continuity, it cannot secure VM encryption during host transfers.

Migrating VMs without encryption removes protections entirely, exposing sensitive workloads and encryption keys to interception or compromise. This approach is unacceptable for workloads that require confidentiality and compliance adherence.

Shielded VMs with HGS ensure that only authorized hosts can run a VM. HGS attests host integrity, validates security compliance, and releases encryption keys securely. This ensures that even during migration, vTPM keys remain protected, preventing unauthorized access. Additionally, Shielded VMs provide operational flexibility, allowing administrators to migrate workloads across hosts for maintenance or load balancing without compromising security.

Implementing Shielded VMs with HGS combines encryption, attestation, and secure key management. It supports both operational needs and compliance requirements, making it the definitive solution for protecting vTPM keys while enabling migration of sensitive workloads.
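The migration limitation of node-level TPM passthrough can be illustrated by simulating a key sealed to a single host identity. This is purely a model (the hashing scheme below is invented for illustration, not how a TPM seals keys), but it captures why a host-bound key cannot follow the VM to a new host.

```python
import hashlib

# Illustrative contrast: a key sealed to one host identity (TPM passthrough
# model) is unusable on any other host. The scheme is invented, not real TPM.

def host_bound_wrap(key: bytes, host_id: str) -> bytes:
    """Seal a key to a specific host identity."""
    return hashlib.sha256(host_id.encode() + key).digest()

def can_unwrap_on(wrapped: bytes, key: bytes, host_id: str) -> bool:
    """A host can use the key only if its identity matches the seal."""
    return host_bound_wrap(key, host_id) == wrapped

sealed = host_bound_wrap(b"vtpm-key", "HV01")
assert can_unwrap_on(sealed, b"vtpm-key", "HV01")      # original host: OK
assert not can_unwrap_on(sealed, b"vtpm-key", "HV02")  # migration target: fails
```

The Shielded VM model replaces the host identity in this picture with the guardian service, so any attested host can receive the key without the key ever being re-exposed.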

Question 186 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. Critical workloads must not co-locate, and automated load balancing is required. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

Failover clusters require careful workload placement to maximize availability and minimize risk. Anti-affinity rules combined with dynamic optimization are the optimal solution to enforce separation of critical VMs while allowing automated load balancing. Anti-affinity rules explicitly specify which VMs should not reside on the same node, reducing the impact of a single node failure.

Dynamic optimization continuously monitors cluster resources and VM placement. When workloads are unevenly distributed or critical VMs are co-located, the system automatically migrates VMs to comply with anti-affinity policies and balance resources. This proactive approach ensures high availability, minimizes downtime, and reduces the risk of simultaneous failure of multiple critical workloads.

VM start order only defines boot sequencing and has no effect on workload placement or separation after boot. Multiple critical VMs could still co-locate under dynamic cluster operations.

Preferred owners influence initial placement and recovery events but cannot enforce separation during automated balancing or resource optimization. During high utilization or maintenance, VMs may still be co-located against critical policy goals.

Cluster quorum settings determine the number of failures a cluster can tolerate without losing services, but they do not affect VM placement. Quorum configuration ensures cluster resiliency but cannot prevent co-location of critical workloads.

By configuring anti-affinity rules with dynamic optimization, administrators gain continuous monitoring, automatic rebalancing, and enforcement of separation policies. This ensures critical SQL Server VMs never reside on the same node, reduces the risk of simultaneous failures, and provides a fully automated mechanism for maintaining optimal workload distribution. It addresses both high availability and operational efficiency in a single configuration, making it essential for production environments with critical workloads.
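The rebalancing step that dynamic optimization performs when it finds a violation can be sketched as a single pass: detect a co-located same-class pair and move one VM to a node that hosts no VM of that class. This is a simplified illustrative model, not the cluster's actual placement algorithm; names and classes are invented.

```python
# Illustrative rebalancing step: move one VM off a node that violates an
# anti-affinity class. A simplified model, not the cluster's algorithm.

def rebalance(placement: dict, anti_affinity: dict, nodes: list) -> dict:
    placement = dict(placement)
    for vm, node in list(placement.items()):
        cls = anti_affinity.get(vm)
        if cls is None:
            continue
        peers_here = [v for v, n in placement.items()
                      if v != vm and n == node and anti_affinity.get(v) == cls]
        if peers_here:
            # pick any node currently hosting no VM of the same class
            for target in nodes:
                occupied = any(anti_affinity.get(v) == cls and n == target
                               for v, n in placement.items())
                if not occupied:
                    placement[vm] = target  # simulated live migration
                    break
    return placement

before = {"SQL1": "NodeA", "SQL2": "NodeA"}
after = rebalance(before, {"SQL1": "sql", "SQL2": "sql"},
                  ["NodeA", "NodeB", "NodeC"])
assert after["SQL1"] != after["SQL2"]  # separation restored automatically
```

Running such a pass on every monitoring interval is what turns a static rule into continuous, automated enforcement.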

Question 187 

You manage Windows Server 2022 with Azure File Sync. Branch servers frequently recall large files, causing network congestion. You need to prioritize essential files. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Azure File Sync enables branch servers to cache frequently used files locally while storing the full dataset in Azure Files. In high-demand environments, multiple simultaneous file recall requests can saturate network bandwidth, degrade performance, and delay access to critical files. Recall priority policies provide a targeted solution by allowing administrators to designate high-priority files or folders that should be retrieved first when network contention occurs.

Cloud tiering minimum file age specifies how long a file must remain cached before it can be tiered to the cloud. While this prevents immediate recalls for newly created files, it does not prioritize essential files already in the cloud. As a result, high-priority workflows may still experience delays during peak activity.

Offline files mode ensures local availability of files when disconnected from the network. Although beneficial for continuity, it does not influence the order of file recall requests from Azure, nor does it optimize network usage for simultaneous high-priority requests.

Background tiering deferral schedules file offloading to reduce network load during peak periods. While it helps optimize bandwidth utilization, it does not enforce priority retrieval for critical files during recall operations, which is the key requirement.

By configuring recall priority policies, administrators can ensure that critical files are recalled first, reducing delays for essential workflows while preventing network congestion from non-critical requests. This allows the branch server to handle mission-critical workloads efficiently and enhances overall network performance. Additionally, administrators can combine recall priority policies with monitoring and reporting tools to track performance, detect bottlenecks, and adjust policies dynamically. Implementing recall priority policies ensures both responsiveness and bandwidth optimization, making it the most effective solution for environments where essential files must be prioritized during recalls.

Question 188 

You manage a Windows Server 2022 RDS deployment integrated with Azure MFA. Users report login failures caused by delayed MFA responses. You must reduce authentication failures without compromising security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

In RDS environments with Azure Multi-Factor Authentication, login failures often occur because the MFA process requires time for verification, such as receiving push notifications or responding to SMS codes. Network latency, MFA service delays, or temporary interruptions can cause authentication attempts to fail prematurely. Increasing the NPS extension timeout allows the system to wait longer for MFA verification to complete, ensuring legitimate users are authenticated even when delays occur.

Reducing session persistence shortens session durations, potentially increasing login prompts. This does not solve delays in MFA responses and may exacerbate login failures.

Disabling conditional access bypasses MFA entirely, removing a critical layer of security. This compromises compliance requirements and exposes the environment to risk.

Persistent cookies reduce repeated prompts for returning users but do not extend the time allowed for MFA verification. Delayed responses could still result in authentication failures even if cookies are enabled.

By increasing the NPS extension timeout, administrators provide sufficient time for the secondary authentication to complete while maintaining security policies. This approach balances usability and compliance, ensuring that delayed MFA responses do not block legitimate users while retaining robust authentication. Monitoring network performance and MFA service health alongside timeout adjustments ensures optimal configuration. Therefore, increasing the NPS extension timeout is the correct solution to reduce login failures due to delayed MFA responses.
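One practical way to choose a concrete timeout value is to size it from observed latency percentiles plus headroom, rather than guessing. The sketch below uses made-up sample data and a hypothetical 10-second headroom; it is a sizing heuristic, not a Microsoft recommendation.

```python
import math

# Illustrative sizing: cover the p99 of observed MFA response latencies,
# plus headroom. Sample values and the 10 s headroom are hypothetical.

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile of a sample list."""
    s = sorted(samples)
    idx = min(len(s) - 1, math.ceil(p * len(s)) - 1)
    return s[idx]

def suggested_timeout(samples: list, headroom: float = 10.0) -> float:
    return percentile(samples, 0.99) + headroom

observed = [3, 4, 5, 6, 8, 9, 12, 15, 20, 45]  # seconds, hypothetical
assert suggested_timeout(observed) == 55.0      # 45 s p99 + 10 s headroom
```

Re-measuring after network or service changes keeps the window tight enough that genuinely abandoned prompts still fail promptly.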

Question 189 

You manage Windows Server 2022 Hyper-V hosts. Certain VMs require encryption and secure migration. You need to ensure virtual TPM keys remain protected. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Virtual TPM keys are crucial for securing encrypted workloads in Hyper-V. Protecting these keys during migration is essential to ensure data confidentiality and compliance. Shielded VMs with Host Guardian Service (HGS) are specifically designed for this purpose. HGS attests host integrity and releases encryption keys only to trusted hosts, enabling secure migration of VMs without exposing sensitive keys.

Node-level TPM passthrough ties the vTPM to a single host. While secure locally, this approach prevents migration without exposing keys, which violates security best practices.

Cluster Shared Volume redirected mode provides storage resiliency, allowing VMs to remain operational during path failures. However, it does not secure encryption keys or protect vTPMs during migration, leaving sensitive data vulnerable.

Migrating VMs without encryption removes protections entirely, exposing workloads to compromise. This approach is not acceptable for sensitive workloads requiring compliance.

Implementing Shielded VMs with HGS ensures that vTPM keys remain encrypted and only accessible to attested hosts. Administrators can migrate VMs securely, maintain operational flexibility, and comply with security policies. This solution combines encryption, host attestation, and key protection, providing a robust framework for managing sensitive workloads. Therefore, Shielded VMs with HGS are required to secure virtual TPM keys while enabling safe migration across authorized hosts.

Question 190 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. Critical workloads must not co-locate, and automated load balancing is required. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

Failover clusters require proper VM placement to prevent simultaneous failures of critical workloads. Anti-affinity rules with dynamic optimization provide the best solution to enforce separation while automating load balancing. Anti-affinity rules define which VMs cannot reside on the same node, minimizing the risk that multiple critical workloads are affected by a single node failure.

Dynamic optimization continuously monitors cluster resources and VM placement. If VMs violate anti-affinity rules or resources become unbalanced, the cluster automatically migrates VMs to restore compliance and balance. This proactive mechanism ensures operational continuity and optimal resource utilization without manual intervention.

VM start order only determines the sequence of booting VMs and does not influence placement or separation after boot. Multiple critical VMs could still reside on the same node.

Preferred owners provide guidance for initial VM placement or failover but cannot enforce anti-affinity during dynamic balancing or automated migrations.

Cluster quorum ensures resiliency by maintaining cluster availability but does not influence workload separation or automated balancing.

By combining anti-affinity rules with dynamic optimization, administrators achieve continuous monitoring, enforced separation, and automatic workload rebalancing. This configuration protects critical SQL Server VMs from co-location risk, improves availability, and supports high operational efficiency. Therefore, anti-affinity rules with dynamic optimization are essential for automated balancing and separation of critical workloads.

Question 191 

You manage Windows Server 2022 Hyper-V hosts running Shielded VMs. You need to migrate encrypted VMs between hosts while keeping virtual TPM keys secure. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Node-level TPM passthrough allows a virtual machine (VM) to use the physical TPM (Trusted Platform Module) of a single host. While this provides a high level of security for protecting encryption keys on that host, it significantly limits operational flexibility because the VM cannot be migrated to another host without exposing sensitive keys. This is especially critical in environments with Shielded VMs, where the integrity and confidentiality of the virtual TPM (vTPM) keys must be preserved at all times. If a VM were moved between hosts without proper key management, administrators would risk key compromise, violating both organizational security policies and regulatory compliance requirements.

Cluster Shared Volume (CSV) redirected mode primarily provides resiliency for storage paths in a failover cluster. When a storage path fails, CSV redirected mode allows the cluster to continue providing access by redirecting I/O through another node. However, this feature does not offer mechanisms to secure VM encryption keys during migration. It only addresses storage availability, leaving vTPM keys vulnerable if the VM moves to an unauthorized or non-attested host.

Live migration without encryption disables protections during VM transfer. Although the VM may continue to operate, sensitive data and vTPM keys are transmitted in plaintext over the network, exposing the VM to interception, tampering, or unauthorized access. This approach is incompatible with compliance requirements for sensitive workloads.

Shielded VMs combined with Host Guardian Service (HGS) solve these issues. HGS provides attestation and key management services that ensure only authorized hosts can start or run a Shielded VM. During live migration, HGS securely manages the vTPM keys, allowing encrypted VMs to move between hosts without exposing sensitive cryptographic material. This mechanism guarantees operational flexibility, maintains data confidentiality, and ensures regulatory compliance. In addition, HGS enforces a strict policy where only hosts meeting the defined attestation requirements can decrypt and run the VM, reducing the risk of lateral attacks or compromise. Therefore, implementing Shielded VMs with HGS is the only solution that ensures both secure migration and protection of vTPM keys, making B the correct answer.

Question 192 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. Critical workloads must not co-locate, and automatic rebalancing is required. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

VM start order in a failover cluster specifies the sequence in which virtual machines boot during node startup or recovery. While important for dependencies—such as ensuring a database VM starts before an application VM—it does not enforce co-location or separation policies. Multiple critical workloads could inadvertently run on the same node, increasing the risk of simultaneous failure if that node encounters hardware or software issues.

Preferred owners allow administrators to guide initial placement of VMs across cluster nodes. While this can help distribute workloads according to capacity or policy, it does not enforce strict separation. During cluster maintenance, failovers, or dynamic load balancing, VMs may still end up co-located on a single node if the preferred node is unavailable. Thus, preferred owners provide guidance, not guarantees.

Cluster quorum settings maintain overall cluster health and availability. Quorum determines how many votes are required to keep a cluster operational and prevent split-brain scenarios. While critical for resiliency, quorum settings do not influence VM placement or workload separation, and therefore cannot prevent multiple high-risk VMs from residing on the same node.

Anti-affinity rules with dynamic optimization are designed specifically for this scenario. Anti-affinity policies prevent certain VMs from running on the same physical node simultaneously. When combined with dynamic optimization, the cluster continuously monitors VM placement and automatically moves VMs if the policy is violated. This ensures that critical workloads remain isolated, minimizing the risk of simultaneous node failure affecting multiple key services. Dynamic optimization works alongside live migration to balance loads, reduce contention, and maintain availability while respecting anti-affinity policies.

In large-scale deployments, failing to enforce workload separation can lead to significant operational and business impact. By configuring anti-affinity rules with dynamic optimization, administrators can achieve automated, policy-driven workload isolation that is continuously enforced across the cluster. Therefore, C is the correct solution for ensuring critical SQL Server VMs remain separated while allowing automated balancing.


Question 193 

You manage Windows Server 2022 with Azure File Sync. Branch servers frequently recall large files, causing network congestion. You need to prioritize essential files. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Azure File Sync allows organizations to cache frequently used files on local branch servers while storing the authoritative copy in Azure. Cloud tiering is a key feature that enables this by automatically maintaining local copies of “hot” files, while offloading rarely accessed data to the cloud to optimize local storage. However, in branch environments with multiple users and high volumes of file requests, simultaneous recall of large files can saturate network bandwidth and degrade performance. Simply using cloud tiering minimum file age does not prioritize which files are recalled first. This setting only controls how long a file must remain local before it becomes eligible for tiering to the cloud, and does not influence bandwidth allocation during multiple concurrent recall requests.

Offline files mode allows users to access files even when disconnected from the network, improving local availability. However, this mode does not control which files are recalled first when network bandwidth is limited. Background tiering deferral reduces network impact by scheduling offloading during low-usage periods, but again, it does not provide a mechanism to prioritize specific essential files for immediate recall.

Recall priority policies are designed to address this precise problem. These policies allow administrators to assign importance levels to files or directories. During simultaneous recall requests, the system honors these priorities, retrieving high-priority files first. This ensures critical workloads, such as business-sensitive documents or applications, remain responsive even under network strain. Additionally, prioritizing file recalls optimizes network utilization by preventing non-essential data from consuming bandwidth unnecessarily.

Implementing recall priority policies allows IT teams to balance performance and resource utilization effectively. This is especially important in branch offices with limited WAN capacity or environments where real-time access to certain files is essential for operations. By prioritizing critical files, administrators can reduce latency, improve user experience, and maintain overall network efficiency. Therefore, configuring recall priority policies is the correct choice to address network congestion while ensuring essential files remain available, making B the correct answer.
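The bandwidth argument above can be made concrete with simple arithmetic: on a fixed WAN link, the order in which recalls are served determines when the essential file arrives. The file sizes and link speed below are invented for illustration.

```python
# Illustrative arithmetic: with a fixed link, serving recalls in priority
# order changes when each file finishes. Sizes in MB, bandwidth in MB/s.

def completion_times(files: list, bandwidth: float) -> dict:
    """Sequential transfer: each file finishes after everything queued before it."""
    t, done = 0.0, {}
    for name, size in files:
        t += size / bandwidth
        done[name] = t
    return done

fifo = [("backup.vhdx", 900), ("invoice.pdf", 9)]         # arrival order
prioritized = [("invoice.pdf", 9), ("backup.vhdx", 900)]  # essential file first

assert completion_times(fifo, 9)["invoice.pdf"] == 101.0       # stuck behind bulk data
assert completion_times(prioritized, 9)["invoice.pdf"] == 1.0  # served immediately
```

The bulk recall finishes at essentially the same time either way; only the essential file's wait changes, which is exactly the trade-off prioritization exploits.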

Question 194 

You manage a Windows Server 2022 RDS deployment integrated with Azure MFA. Users report login failures due to delayed MFA responses. You must reduce authentication failures without compromising security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

In a Remote Desktop Services (RDS) environment integrated with Azure Multi-Factor Authentication (MFA), users may experience login failures if the MFA verification process is delayed due to network latency, service throttling, or other transient issues. Reducing session persistence, which shortens the duration of active RDS sessions, does not mitigate this problem. While it may force more frequent logins, it does not address delays in completing the MFA handshake, which is the underlying cause of authentication failures.

Disabling conditional access might reduce login failures, but it bypasses MFA entirely, undermining security and potentially violating regulatory compliance requirements. Organizations cannot compromise MFA enforcement simply to address transient authentication issues. Similarly, enabling persistent cookies allows users to bypass repeated MFA prompts for the same device or session, improving convenience. However, it does not solve delays in the initial verification process, particularly for high-latency networks or when Azure MFA service response is slow.

Increasing the NPS (Network Policy Server) extension timeout is a targeted solution. The NPS extension integrates with Azure MFA to enforce multi-factor verification during RADIUS authentication requests. By default, the timeout period may be insufficient for users in geographically distributed locations or in cases where network congestion delays communication with Azure MFA. Extending the timeout allows the RDS infrastructure to wait longer for a valid MFA response before denying access, reducing login failures while maintaining strict security enforcement.

This approach addresses the root cause without weakening authentication policies. It ensures users can successfully log in even when MFA responses are delayed and allows administrators to maintain compliance with security policies. Extending the NPS extension timeout is both a secure and operationally effective solution to reduce authentication failures caused by network or service-induced delays. Therefore, B is the correct answer.

Question 195 

You manage Windows Server 2022 Hyper-V hosts. Certain VMs require encryption and secure migration. You need to ensure virtual TPM keys are protected. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Securing virtual TPM (vTPM) keys for encrypted VMs is critical in Hyper-V environments where sensitive workloads are hosted. Node-level TPM passthrough allows a VM to access the physical TPM on a single host. While this protects vTPM keys locally, it prevents secure migration because the keys cannot leave the host safely. Exposing them for migration would compromise encryption, making this method unsuitable for environments requiring mobility.

Cluster Shared Volume redirected mode ensures that storage remains accessible during path failures, improving availability. However, this feature does not manage encryption keys or provide mechanisms to protect vTPM keys during migration. It addresses storage resiliency but not cryptographic security.

Live migration without encryption is not an option for sensitive workloads because it transfers the VM in plaintext over the network, exposing data and keys to potential interception or tampering. This violates both security policies and compliance requirements.

Shielded VMs combined with Host Guardian Service (HGS) solve these challenges. HGS provides attestation and key management, ensuring only authorized hosts can run Shielded VMs. During live migration, HGS securely provisions the vTPM keys on the destination host without exposing them. This maintains the confidentiality and integrity of the keys while enabling migration, ensuring operational flexibility without compromising security.

Implementing Shielded VMs with HGS enables administrators to maintain compliance, reduce operational risk, and support dynamic workloads across multiple hosts. This solution is designed for modern Hyper-V deployments that require both encryption and secure mobility, making B the correct choice.
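On a guarded Hyper-V host, pointing the HGS client at the attestation and key protection endpoints is a short configuration step. A sketch using the HgsClient module, with a placeholder HGS FQDN:

```powershell
# Register this Hyper-V host with the Host Guardian Service
# (the hgs.contoso.com FQDN below is a placeholder for your environment).
Set-HgsClientConfiguration `
    -AttestationServerUrl "https://hgs.contoso.com/Attestation" `
    -KeyProtectionServerUrl "https://hgs.contoso.com/KeyProtection"

# Confirm the host's attestation status before placing Shielded VMs on it.
Get-HgsClientConfiguration
```

`Get-HgsClientConfiguration` reports whether attestation passed; only hosts that attest successfully receive the keys needed to start or receive a Shielded VM.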

Question 196 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. Critical workloads must not co-locate, and automated load balancing is required. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

VM start order defines the boot sequence of virtual machines within a failover cluster. While necessary for managing service dependencies, it does not enforce workload separation. Critical workloads could still reside on the same node, risking simultaneous failure during outages.

Preferred owners guide initial placement but cannot guarantee ongoing separation during dynamic balancing. When nodes become unavailable or maintenance occurs, workloads may co-locate despite preferred owner settings.

Cluster quorum settings maintain cluster availability by ensuring a minimum number of votes are present. While essential for resiliency, quorum does not affect VM placement or separation policies.

Anti-affinity rules combined with dynamic optimization prevent specific VMs from running on the same node. Dynamic optimization continuously monitors placement and automatically migrates VMs if anti-affinity rules are violated. This ensures critical workloads remain isolated, minimizing risk of simultaneous failures while balancing resource utilization. It provides automated, policy-driven separation that adjusts in real time.

For automated balancing and separation of critical SQL Server VMs, anti-affinity rules with dynamic optimization are required, making C the correct answer.
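"Dynamic optimization" is System Center Virtual Machine Manager terminology; in a plain failover cluster the equivalent building blocks are the `AntiAffinityClassNames` property on each VM's cluster group and the built-in node-fairness auto-balancer. A sketch with placeholder VM group names:

```powershell
# Tag both SQL Server VM cluster groups with the same anti-affinity class
# so the cluster avoids co-locating them (group names are placeholders).
(Get-ClusterGroup -Name "SQL-VM1").AntiAffinityClassNames = "SQLCritical"
(Get-ClusterGroup -Name "SQL-VM2").AntiAffinityClassNames = "SQLCritical"

# Enable continuous node-fairness balancing (Windows Server 2016 and later):
# AutoBalancerMode 2 = always balance; AutoBalancerLevel 3 = most aggressive.
(Get-Cluster).AutoBalancerMode  = 2
(Get-Cluster).AutoBalancerLevel = 3
```

With SCVMM in the environment, dynamic optimization performs the ongoing live migrations; without it, node fairness provides a lighter-weight form of automated rebalancing.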

Question 197 

You manage Windows Server 2022 with Azure File Sync. Branch servers frequently recall large files, causing network congestion. You need to prioritize essential files. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Cloud tiering in modern storage solutions provides a mechanism to move less frequently accessed files to the cloud, freeing up local storage and optimizing cost. One of the controls within cloud tiering is the minimum file age setting, which defines how long a newly created or modified file must exist on the local server before it becomes eligible for tiering to the cloud. While this delay prevents immediate tiering of new files, it does not influence the order in which files are recalled when multiple requests occur simultaneously. Therefore, relying solely on minimum file age cannot address situations where network congestion arises due to frequent access to high-demand files.

Similarly, offline files mode enables users to access files locally even when disconnected from the network. While this improves availability, it does not provide any mechanism to prioritize which files are retrieved first once connectivity is restored or when multiple files are requested from the cloud. Users may still experience delays in accessing critical data if high-demand files are buried behind less important requests.

Background tiering deferral is another feature designed to offload data to the cloud at predefined schedules. Although it can help manage storage usage and ensure tiering operations occur during off-peak hours, it does not control the recall sequence for files, nor does it ensure that critical files are retrieved first when they are needed.

To effectively manage network congestion while keeping essential files immediately accessible, organizations must configure recall priority policies. These policies allow administrators to assign specific priorities to files or directories. When multiple files are requested simultaneously, high-priority files are recalled first, reducing latency for critical workloads and optimizing bandwidth usage. By leveraging recall priority policies, administrators can maintain efficient network utilization and ensure that important data remains available promptly, even in high-demand environments.

For scenarios where minimizing access delays for essential files is crucial while controlling overall network traffic, recall priority policies provide the most effective solution, making them the correct configuration choice.

Question 198

You manage a Windows Server 2022 RDS deployment integrated with Azure MFA. Users report login failures caused by delayed MFA responses. You must reduce authentication failures without compromising security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

Reducing session persistence shortens session duration but does not address MFA verification delays.

Disabling conditional access bypasses MFA, reducing security and violating compliance requirements.

Persistent cookies reduce repeated prompts but do not address handshake delays causing login failures.

Increasing the NPS extension timeout provides extra time for MFA verification, accommodating temporary network or service delays. Authentication succeeds reliably while maintaining MFA enforcement.

To reduce login failures caused by delayed MFA responses, increasing the NPS extension timeout is the correct approach. Therefore, the correct answer is B.

Question 199 

You manage Windows Server 2022 Hyper-V hosts. Certain VMs require encryption and secure migration. You need to ensure virtual TPM keys remain protected. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Node-level TPM passthrough secures keys locally but prevents migration without exposing them.

Cluster Shared Volume redirected mode provides storage resiliency but does not protect virtual TPM keys.

Migrating VMs without encryption removes protections, exposing sensitive workloads.

Shielded VMs with Host Guardian Service securely manage encryption keys and allow migration across authorized hosts while keeping virtual TPM keys protected. This ensures compliance and operational flexibility.

To secure virtual TPM keys while enabling migration, Shielded VMs with Host Guardian Service must be implemented. Therefore, the correct answer is B.
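Every vTPM is provisioned from a key protector. The lab-only sketch below uses a locally generated guardian to show the moving parts; production Shielded VMs would instead bind a key protector issued by HGS. The VM and guardian names are placeholders:

```powershell
# Lab-only sketch: create a local guardian and key protector for a vTPM.
# Production Shielded VMs obtain their key protector from HGS instead,
# so the keys are only released to attested hosts.
$guardian = New-HgsGuardian -Name "LabGuardian" -GenerateCertificates
$kp = New-HgsKeyProtector -Owner $guardian -AllowUntrustedRoot

# Bind the key protector to the VM and enable its virtual TPM.
Set-VMKeyProtector -VMName "Secure-VM01" -KeyProtector $kp.RawData
Enable-VMTPM -VMName "Secure-VM01"
```

The difference between this lab pattern and production is exactly the point of the question: with HGS as the key protector owner, migration targets must attest before the vTPM keys are provisioned to them.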

Question 200 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. Critical workloads must not co-locate, and automated load balancing is required. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

VM start order in a failover cluster determines the sequence in which virtual machines boot during node startup or recovery. While this ensures that dependent services start in the correct order, it does not provide any mechanism to enforce separation of critical workloads. Multiple high-priority VMs could still reside on the same physical node, creating a risk of simultaneous failure if that node experiences hardware or software issues.

Preferred owners allow administrators to guide the initial placement of VMs across cluster nodes. This feature can help distribute workloads according to resource availability or organizational policy, but it cannot enforce strict separation. During cluster maintenance, failover, or dynamic load balancing, critical VMs may still end up co-located if the preferred node is unavailable.

Anti-affinity rules combined with dynamic optimization address these limitations by enforcing workload separation policies automatically. The cluster continuously monitors VM placement and migrates VMs when anti-affinity rules are violated, ensuring that critical workloads do not share the same node. This automated rebalancing reduces the risk of simultaneous failures and maintains high availability for essential services.

Cluster quorum settings maintain overall cluster resiliency and prevent split-brain scenarios but do not influence VM placement or workload separation. For automated balancing and enforced isolation of critical workloads, anti-affinity rules with dynamic optimization are required. Therefore, the correct answer is C.
