Microsoft AZ-801 Configuring Windows Server Hybrid Advanced Services Exam Dumps and Practice Test Questions Set 6 Q101-120


Question 101 

You manage Windows Server 2022 Hyper-V hosts running Shielded VMs. You need to allow encrypted VMs to migrate between hosts securely. What should you implement?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Node-level TPM passthrough is a configuration where the virtual TPM (vTPM) key of a VM is directly tied to a specific physical host’s TPM. While this approach secures the VM’s keys on that host and ensures that the VM cannot be cloned or moved without authorization, it introduces a critical limitation: the VM cannot be migrated securely to another host without exposing or transferring the TPM key. This creates a significant security risk and violates the requirement for secure VM mobility. Any method that compromises the integrity of encryption keys during migration is not suitable for environments handling sensitive workloads.

Cluster Shared Volume (CSV) redirected mode is designed primarily for storage resiliency. When a node loses its direct path to CSV storage, redirected mode routes that node's I/O over the cluster network through the coordinator node so that VMs can continue accessing data. While this enhances availability and ensures that storage failures do not interrupt VM operations, it does not provide protection for virtual TPM keys or manage secure VM migration. Its focus is on storage path availability rather than encryption or attestation of VM integrity.

VM live migration without encryption removes all BitLocker and vTPM protections during the migration process. Although it allows mobility between hosts, it exposes sensitive data and encryption keys in transit. This approach is incompatible with compliance-driven environments and security best practices. Unencrypted migrations of shielded VMs are considered high-risk because any interception could compromise the integrity of confidential workloads.

Shielded VMs paired with Host Guardian Service (HGS) solve these problems by providing attestation and key management. HGS verifies that the target host is authorized and healthy before releasing the VM’s encryption keys. It ensures that vTPM keys are never exposed in transit and allows encrypted VMs to migrate securely between hosts. This setup provides both operational flexibility and strict security compliance. Using HGS, administrators can confidently move sensitive workloads across data centers while maintaining BitLocker protection and virtual TPM integrity.
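As a rough sketch, a Hyper-V host is registered against an HGS farm with the HgsClient cmdlets; the `hgs.contoso.com` URLs below are hypothetical placeholders for your own attestation and key protection endpoints:

```powershell
# Point this Hyper-V host at the Host Guardian Service farm.
# The hgs.contoso.com URLs are placeholders for your HGS endpoints.
Set-HgsClientConfiguration `
    -AttestationServerUrl   'http://hgs.contoso.com/Attestation' `
    -KeyProtectionServerUrl 'http://hgs.contoso.com/KeyProtection'

# Confirm the host passes attestation before migrating shielded VMs to it.
Get-HgsClientConfiguration
```

Only after the host reports a passing attestation status will HGS release vTPM keys to it, which is what makes shielded VM live migration safe between guarded hosts.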

Question 102 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. You must prevent critical VMs from running on the same node and support automatic rebalancing. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation:

VM start order is a feature that determines the sequence in which VMs are powered on during cluster startup or after a node reboot. While useful for dependency management (for instance, ensuring that database VMs start before application VMs), it does not enforce isolation of critical workloads across different nodes. It ensures a proper boot sequence but has no impact on runtime placement, resource contention, or balancing across the cluster.

Preferred owners allow administrators to designate which nodes a VM should run on, providing guidance for initial placement. However, preferred owners are not enforcement mechanisms. During dynamic load balancing, maintenance, or failover, critical VMs could still end up co-located on the same host, which increases risk and does not satisfy strict separation requirements.

Cluster quorum settings define how many nodes must be operational to maintain cluster functionality. This is crucial for availability and resiliency but unrelated to VM placement or distribution policies. Quorum ensures that the cluster continues to function in degraded states but cannot prevent VMs from running on the same node.

Anti-affinity rules with dynamic optimization are specifically designed to address the co-location problem. Anti-affinity rules prevent critical workloads from residing on the same host by enforcing separation policies. Dynamic optimization continuously monitors cluster load and VM placement, automatically rebalancing workloads when nodes become overloaded or during maintenance activities. This ensures high availability and minimizes the risk that multiple critical VMs are simultaneously affected by a single node failure. Additionally, dynamic optimization allows administrators to define separation at a granular level, including VM groups, workload types, and priority levels, ensuring a robust and resilient environment.
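For example, separation can be enforced from PowerShell by giving the critical VMs' cluster groups the same anti-affinity class name (the VM and class names below are illustrative, not from the scenario):

```powershell
# Cluster groups that share an AntiAffinityClassNames value are kept
# on different nodes whenever possible.
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add('SQLCritical') | Out-Null

(Get-ClusterGroup -Name 'SQL-VM1').AntiAffinityClassNames = $class
(Get-ClusterGroup -Name 'SQL-VM2').AntiAffinityClassNames = $class
```

The automated rebalancing piece (dynamic optimization) is delivered by the management layer, such as System Center Virtual Machine Manager, which honors these class names when it moves VMs.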

Question 103 

You manage Windows Server 2022 with Azure File Sync. Branch servers frequently recall large files, causing high bandwidth usage. You need to ensure critical files are prioritized and network congestion is reduced. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Cloud tiering minimum file age is a setting in Azure File Sync that requires newly created or modified files to reach a minimum age before they become candidates for tiering to the cloud. This helps avoid tiering (and later re-recalling) files that may soon change or be deleted, but it does not provide any mechanism to prioritize critical files during network congestion. While helpful for reducing some network traffic, it cannot ensure that high-priority files are delivered first during recall.

Offline files mode is a Windows feature that allows users to continue accessing files locally when disconnected from the network. Although it improves availability for end users, it does not control the order in which files are recalled from the cloud, nor does it optimize bandwidth usage for critical workloads. Offline files are intended for disconnected scenarios rather than prioritization of network resources.

Background tiering deferral allows administrators to schedule when cloud tiering operations occur, reducing network congestion during peak hours. While this can lower overall bandwidth usage, it cannot prioritize specific files for recall. Critical files may still be delayed if the deferral schedule is in effect, which could negatively impact user experience in high-priority scenarios.

Recall priority policies are specifically designed to assign importance levels to files or directories in Azure File Sync. By designating certain files or folders as high priority, administrators ensure that these files are recalled first, even during periods of heavy network usage. This improves responsiveness for critical workloads while optimizing network utilization, preventing large low-priority files from consuming bandwidth unnecessarily. The system respects these policies dynamically, adjusting retrieval order to maintain performance and meet business requirements.
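In practice, an administrator can also pull business-critical paths down ahead of other traffic using the on-demand recall cmdlet that ships with the Azure File Sync agent. This is a hedged sketch: the module path assumes the default agent install location, and the folder path is purely illustrative:

```powershell
# Load the Azure File Sync server cmdlets (default agent install path).
Import-Module 'C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll'

# Proactively recall a business-critical folder so its files are cached
# locally before users request them. The path is an illustrative example.
Invoke-StorageSyncFileRecall -Path 'D:\Shares\Finance\Critical'
```

Recalling high-priority content during off-peak hours complements priority policies by keeping critical files local before congestion occurs.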

To reduce bandwidth usage while prioritizing critical files, recall priority policies must be implemented. This ensures predictable network behavior, improves end-user experience, and aligns with operational best practices in hybrid storage deployments. Therefore, the correct answer is B.

Question 104 

You manage a Windows Server 2022 RDS deployment integrated with Azure MFA. Users report login failures due to delayed MFA responses. You must reduce authentication failures without compromising security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

Reducing session persistence shortens the duration of a user’s active RDS session. While this may reduce resource usage on the server, it does not address the root cause of delayed MFA responses. In fact, shorter sessions could exacerbate authentication issues by increasing the frequency of login prompts, leading to a higher likelihood of failed MFA attempts and user frustration.

Disabling conditional access bypasses MFA entirely, which may temporarily resolve authentication failures but compromises security. Conditional access policies enforce organizational security requirements, including MFA verification. Disabling them would leave the environment exposed to potential unauthorized access, violating compliance and security standards.

Persistent cookies allow users to avoid repeated MFA prompts on the same device. While improving convenience, they do not resolve handshake delays or server-side timeouts during MFA verification. Therefore, enabling persistent cookies may reduce prompts but does not address the underlying network or service latency causing login failures.

Increasing the NPS (Network Policy Server) extension timeout provides the authentication system with more time to complete the MFA handshake. Azure MFA and on-premises NPS may experience transient network latency or temporary delays in verification, and extending the timeout ensures that these delays do not result in authentication failures. This adjustment does not weaken security because the MFA process is still completed; it simply accommodates real-world conditions in hybrid deployments where latency and transient failures can occur. Extending the NPS extension timeout is a best practice for RDS environments integrating Azure MFA, as it preserves both usability and security without compromising compliance.

Question 105 

You manage Windows Server 2022 Hyper-V hosts. Certain VMs require encryption and secure mobility between hosts. You need to ensure virtual TPM keys remain protected during migration. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Node-level TPM passthrough binds the VM’s virtual TPM to a single physical host. While this ensures that keys are secure on that host, it prevents the VM from being migrated securely to another host without transferring or exposing the keys. In environments requiring mobility of encrypted workloads, exposing keys is a serious security violation, and thus node-level TPM passthrough alone is insufficient.

Cluster Shared Volume (CSV) redirected mode is designed for storage path resiliency, ensuring that VMs maintain access to their disks during failures. Although CSV ensures data availability, it does not provide security for virtual TPM keys or manage encryption for VM migration. Its primary role is storage redundancy, not cryptographic key protection.

VM live migration without encryption allows mobility but removes BitLocker and vTPM protections during transfer. This exposes sensitive data and encryption keys, creating a high-security risk. Such an approach is not acceptable for compliance-sensitive or security-conscious organizations, as the VM’s confidentiality and integrity could be compromised during migration.

Shielded VMs combined with Host Guardian Service (HGS) provide both encryption and secure migration. HGS verifies that the target host is healthy and authorized before releasing encryption keys for the VM. This ensures that vTPM keys are never exposed in transit, enabling secure migration between hosts while maintaining compliance and confidentiality. HGS also manages attestation and protects against unauthorized host access, providing a holistic security framework for sensitive workloads.
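At the VM level, the vTPM is protected by a key protector. The sketch below shows the general shape using a locally generated guardian for a lab; in a guarded fabric the guardian and key protector come from the HGS farm instead, and the VM name is a placeholder:

```powershell
# Lab-only sketch: create a local guardian and build a key protector.
# -AllowUntrustedRoot is acceptable only outside production; guarded
# fabrics obtain the key protector from HGS.
$guardian = New-HgsGuardian -Name 'LabGuardian' -GenerateCertificates
$kp = New-HgsKeyProtector -Owner $guardian -AllowUntrustedRoot

# Attach the key protector to the VM, then enable its virtual TPM.
Set-VMKeyProtector -VMName 'SecureVM01' -KeyProtector $kp.RawData
Enable-VMTPM -VMName 'SecureVM01'
```

Because the key protector, not the host's physical TPM, guards the vTPM state, the VM can move between hosts that are authorized to decrypt it.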

Question 106 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. You need to prevent co-location of critical VMs and support automated load balancing. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

VM start order controls the boot sequence of virtual machines in a cluster. While it is useful for managing dependency relationships—such as ensuring that databases start before applications—it does not prevent multiple critical VMs from running on the same node. Start order focuses on sequencing rather than placement or separation.

Preferred owners allow administrators to suggest nodes where VMs should run. However, during maintenance, failover, or dynamic balancing, VMs may still reside on the same host. Preferred owners provide guidance, not enforcement, so co-location risks remain for critical workloads.

Cluster quorum settings ensure cluster resiliency by defining the minimum number of nodes required for the cluster to remain operational. Quorum affects high availability but does not influence VM placement or enforce separation policies.

Anti-affinity rules with dynamic optimization prevent co-location of critical VMs across nodes. Dynamic optimization continuously monitors VM placement and resource utilization, automatically rebalancing workloads to reduce the likelihood that multiple critical VMs fail if a single node experiences downtime. This ensures high availability and performance for critical workloads. The combination of anti-affinity rules and dynamic optimization enforces separation while maintaining cluster efficiency, making it the optimal solution for workload isolation and automated load balancing.
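Once the rules are in place, current placement can be spot-checked from PowerShell, for example:

```powershell
# List cluster groups that carry an anti-affinity class, with their
# current owner node, to confirm critical VMs are spread across nodes.
Get-ClusterGroup |
    Where-Object { $_.AntiAffinityClassNames } |
    Select-Object Name, OwnerNode, AntiAffinityClassNames
```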

Question 107 

You manage Windows Server 2022 with Azure File Sync. Branch servers frequently recall large files, causing network congestion. You must prioritize essential files and reduce unnecessary bandwidth usage. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Cloud tiering minimum file age prevents new or recently modified files from being tiered to the cloud until they reach a specified age. While this reduces churn from tiering and then re-recalling recently created files, it does not allow administrators to prioritize certain files over others. All files are treated equally for recall purposes, so critical files may still experience delays.

Offline files mode provides local access for disconnected users, ensuring availability during network outages. However, it does not influence the order of file recall or optimize bandwidth usage, so high-priority files may not be recalled first under network congestion.

Background tiering deferral schedules when cloud tiering and recall operations occur. This can help reduce overall network utilization but does not prioritize specific files. During periods of high activity, non-critical files could still consume bandwidth, delaying the recall of essential files.

Recall priority policies allow administrators to assign importance levels to files or directories in Azure File Sync. High-priority files are recalled first, even under heavy network usage. This ensures critical workloads remain responsive while network congestion is minimized. The system dynamically manages bandwidth and recall order based on priority, aligning network utilization with business requirements. Implementing recall priority policies provides predictable behavior, efficient bandwidth usage, and ensures that essential files are always retrieved promptly, enhancing both performance and user experience.

Question 108 

You manage a Windows Server 2022 RDS deployment with Azure MFA. Users report failed logins due to delayed MFA responses. You must reduce authentication failures while maintaining MFA security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

Reducing session persistence shortens the active duration of RDS sessions. While this might reduce server resource usage, it does not resolve delayed MFA responses. Shorter sessions can lead to more frequent authentication prompts, increasing the likelihood of failures due to delayed verification.

Disabling conditional access bypasses MFA entirely, allowing users to authenticate without completing the verification process. While this may temporarily eliminate login failures, it exposes the environment to security risks and violates compliance standards. Bypassing conditional access is not a viable solution for organizations that require secure authentication.

Persistent cookies reduce the need for repeated MFA prompts for users on trusted devices. This improves convenience but does not address network latency or delays in the MFA handshake, which are the root causes of failed login attempts.

Increasing the NPS extension timeout provides additional time for the MFA process to complete. Azure MFA, combined with NPS, may experience transient network delays or temporary service interruptions. Extending the timeout ensures that the MFA handshake completes successfully without failing due to timing constraints. This approach maintains security while accommodating real-world network conditions. Users can authenticate reliably, and MFA enforcement remains intact, striking a balance between usability and security.

Question 109 

You manage Windows Server 2022 Hyper-V hosts. Certain VMs require encryption and mobility. You need to ensure virtual TPM keys are protected during migration. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Node-level TPM passthrough secures virtual TPM keys on a specific host but prevents migration without exposing the keys. This approach does not meet the requirement for secure mobility of encrypted workloads. Any attempt to move the VM to another host would require either key export or bypassing security, which violates best practices.

Cluster Shared Volume redirected mode ensures storage resiliency during path failures but does not manage encryption keys or secure migration of shielded VMs. Its primary purpose is high availability and continuous storage access, not cryptographic protection.

Migrating VMs without encryption disables protections, exposing sensitive data and encryption keys. This is not acceptable for compliance-driven or security-sensitive workloads and increases the risk of data breaches during migration.

Shielded VMs with Host Guardian Service manage keys securely and allow encrypted VMs to migrate only to authorized hosts. HGS handles attestation and key distribution, ensuring virtual TPM keys remain protected at all times. This ensures compliance and maintains confidentiality during VM mobility. For environments requiring both encryption and secure migration, this configuration is essential.

Question 110 

You manage a Windows Server 2022 failover cluster with SQL Server VMs. Critical workloads must not reside on the same node. Automatic balancing is required. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation:

VM start order ensures that VMs boot in a specific sequence during cluster startup. While useful for dependency management, it does not enforce separation of critical workloads across nodes. Multiple critical VMs could still end up running on the same node during dynamic balancing.

Preferred owners provide guidance for initial VM placement but cannot enforce strict isolation. During cluster operations, maintenance, or dynamic optimization, critical VMs may still co-locate on the same host, which does not satisfy high-availability requirements.

Cluster quorum settings maintain cluster resiliency by specifying the minimum number of nodes required for cluster operation. Quorum does not influence VM placement or enforce workload separation policies.

Anti-affinity rules with dynamic optimization are designed to prevent critical VMs from residing on the same node. Dynamic optimization monitors VM placement and resource utilization in real time, automatically rebalancing workloads when nodes become overloaded or during maintenance. This reduces the risk that multiple critical VMs fail simultaneously due to a single node failure and ensures high availability. Anti-affinity rules allow granular configuration of workload separation policies, providing both operational flexibility and high reliability for critical workloads.

Question 111 

You manage Windows Server 2022 Hyper-V hosts running Shielded VMs. You need to migrate encrypted VMs between hosts while ensuring virtual TPM keys remain protected. What should you implement?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Node-level TPM passthrough binds a virtual TPM (vTPM) directly to the physical TPM of a single Hyper-V host. This approach ensures that the vTPM keys are securely stored and protected on that host, making it highly resistant to tampering. However, this security comes with a limitation: because the keys are tied to one host, migrating the VM to another Hyper-V host would require either exporting the keys or disabling encryption temporarily. Both scenarios expose sensitive keys during migration, creating a security risk and violating compliance requirements for sensitive workloads. Therefore, node-level TPM passthrough is unsuitable for scenarios requiring secure VM migration across multiple hosts.

Cluster Shared Volume (CSV) redirected mode is designed to provide storage resiliency within failover clusters. It redirects I/O traffic from a failing node to a healthy node, ensuring continuity of access to shared storage. While this feature is useful for maintaining storage availability during failures, it does not manage encryption or protect vTPM keys. It focuses on storage fault tolerance rather than VM security during live migration.

Migrating VMs without encryption would remove the protection provided by vTPMs and shielded VM features. This exposes all VM data to potential attacks during transit or on the destination host, violating both security best practices and organizational compliance requirements. Unencrypted migration should never be used for sensitive workloads that rely on vTPM protections.

Shielded VMs combined with Host Guardian Service (HGS) are specifically designed to secure VMs against unauthorized access while allowing secure migration between trusted hosts. HGS manages encryption keys and performs host attestation, verifying that the destination host is authorized and meets security requirements before releasing vTPM keys. This ensures that keys are never exposed in plaintext during migration. Additionally, HGS supports centralized key management, auditing, and compliance, allowing IT administrators to maintain strong security controls without hindering operational flexibility.

In scenarios requiring the migration of encrypted VMs while protecting virtual TPM keys, Shielded VMs with Host Guardian Service provide the necessary security, manageability, and compliance features. This solution ensures both operational flexibility and protection of sensitive workloads, making it the correct choice for this scenario.

Question 112 

You manage a Windows Server 2022 failover cluster hosting critical SQL Server VMs. You must prevent critical VMs from being placed on the same host and support automated load balancing. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation:

VM start order is a configuration in failover clusters that determines the sequence in which VMs power on during cluster startup. While this ensures that dependencies, such as SQL Server services, start in the correct sequence, it does not influence which hosts the VMs are placed on. Multiple critical VMs could still end up on the same host during initial placement or after dynamic load balancing, leaving workloads vulnerable to a single node failure. Therefore, VM start order alone cannot meet the requirement to prevent co-location.

Preferred owners allow administrators to define a list of hosts that a VM should prefer for placement. During failover or initial placement, the cluster will attempt to start the VM on one of the preferred hosts. However, preferred owners do not enforce strict separation. During automatic rebalancing or maintenance events, critical VMs may still end up co-located on the same node because the cluster prioritizes availability over separation.

Cluster quorum settings determine how many nodes must be online for the cluster to remain operational. Quorum ensures cluster resiliency and avoids split-brain scenarios. While vital for overall cluster health, quorum does not control VM placement, load balancing, or enforcement of separation policies. It is unrelated to preventing critical workloads from co-locating.

Anti-affinity rules with dynamic optimization directly address the requirement. Anti-affinity rules allow administrators to specify that certain VMs should never run on the same host simultaneously. Dynamic optimization continuously monitors the cluster, automatically moving VMs to enforce these rules while balancing workloads across all nodes. This ensures that critical VMs are not placed together, reducing the risk of downtime due to hardware failures. Dynamic optimization also maintains performance by balancing resources efficiently.

By implementing anti-affinity rules with dynamic optimization, administrators achieve both high availability and operational efficiency. Critical workloads are separated across nodes, risk is minimized, and automated load balancing ensures optimal resource utilization. This makes option C the correct solution.

Question 113

You manage Windows Server 2022 with Azure File Sync. Branch servers frequently recall large files, leading to network congestion. You need to ensure critical files are prioritized. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Cloud tiering minimum file age is a setting that determines how long newly created or modified files must exist locally before they are eligible for cloud tiering. While this reduces unnecessary recalls for very new files and can help prevent excessive network traffic for recently accessed data, it does not provide a mechanism for prioritizing specific files over others. All files are still recalled in a general FIFO or request order, so critical files may still be delayed if other files are being recalled first.

Offline files mode ensures that copies of files are available locally even when the network or server is unavailable. This feature improves resiliency and user experience during disconnections but does not influence the order in which files are recalled from the cloud. Files are still recalled based on standard access patterns, meaning critical files may not always be retrieved first when multiple files are requested simultaneously.

Background tiering deferral schedules the offloading of files to the cloud during periods of lower activity. While this can reduce network congestion and optimize bandwidth usage, it operates independently of file priority. This approach helps manage overall network load but does not provide a mechanism for ensuring that important files are recalled before less critical ones.

Recall priority policies are specifically designed to address this challenge. They allow administrators to assign priority levels to files or directories. When multiple files are requested, high-priority files are recalled first, ensuring that critical workloads remain responsive even under network constraints. This mechanism optimizes bandwidth usage, prevents bottlenecks, and ensures business-critical operations are not delayed due to network congestion.

In environments with frequent file recalls, particularly in branch offices with limited network capacity, implementing recall priority policies ensures that essential data is retrieved first. This reduces latency for critical workloads, improves user productivity, and minimizes network congestion for less important file transfers. Therefore, recall priority policies are the correct configuration for this scenario.

Question 114 

You manage a Windows Server 2022 RDS deployment integrated with Azure MFA. Users report login failures due to delayed MFA responses. You must reduce authentication failures while maintaining MFA security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

Reducing session persistence shortens the duration for which user sessions remain active. While this might marginally impact performance, it does not address the root cause of MFA login failures, which is delayed response from the MFA provider. Shortening session persistence can inadvertently increase login failures because sessions may expire before users complete authentication, exacerbating the problem rather than solving it.

Disabling conditional access effectively bypasses MFA policies. This approach would remove delays caused by MFA, but it completely compromises security and organizational compliance. Disabling conditional access is not an acceptable solution in environments where MFA is required for sensitive applications, such as RDS deployments.

Persistent cookies improve the user experience by reducing the number of times users are prompted for MFA on subsequent logins. However, they do not address delays in the initial authentication process. If the MFA handshake itself takes longer than the system’s timeout, persistent cookies will not prevent authentication failures during the first login attempt.

Increasing the NPS extension timeout is the proper solution. The Network Policy Server (NPS) extension for Azure MFA controls how long the system waits for an MFA response from the user or device. By increasing this timeout, administrators provide additional time for the MFA handshake to complete, accommodating network latency, service delays, or user interaction delays. This ensures that users are not prematurely denied access, maintaining both security and usability.

Extending the NPS extension timeout maintains strong MFA enforcement while reducing failed login attempts. It allows users to complete the second-factor authentication reliably without compromising organizational security policies. This approach balances security requirements with operational efficiency, making it the most suitable configuration in this scenario.

Question 115 

You manage Windows Server 2022 Hyper-V hosts. Certain VMs require encryption and secure migration. You need to ensure virtual TPM keys are protected. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Node-level TPM passthrough binds a virtual TPM directly to a single host’s physical TPM. While this protects the vTPM keys on that host, it severely limits migration flexibility. If the VM needs to be moved to another host, the keys must either be exposed temporarily or reconfigured, which creates a security risk and is incompatible with compliance requirements for sensitive workloads.

Cluster Shared Volume redirected mode provides storage resiliency by redirecting I/O to available nodes during storage failures. It ensures continuity of access to shared storage but does not manage encryption or protect vTPM keys. This mode focuses on maintaining uptime rather than securing encrypted workloads during live migration.

Migrating VMs without encryption removes all protections associated with vTPMs. Any sensitive data within the VM is exposed during migration, which is unacceptable for workloads requiring encryption and compliance. This option compromises both security and operational integrity.

Shielded VMs with Host Guardian Service allow VMs to remain encrypted while migrating between authorized hosts. HGS ensures that vTPM keys are managed securely and released only to hosts that have passed attestation. This means that keys are never exposed during migration. Additionally, HGS provides centralized management, auditing, and compliance features, making it suitable for enterprise environments where security and operational flexibility must coexist.

For workloads that require both encryption and secure migration, Shielded VMs with Host Guardian Service are the correct solution. They ensure that vTPM keys remain protected, workloads stay compliant, and VMs can migrate seamlessly across trusted hosts.
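The HGS client configuration described above is applied on each guarded Hyper-V host. As a hedged sketch (the `hgs.contoso.com` name is a placeholder for your HGS cluster's DNS name, and HTTPS should be used in production), the host can be pointed at the attestation and key protection endpoints like this:

```powershell
# Point the HGS client on a guarded Hyper-V host at the Host Guardian Service
# so the host can attest and receive vTPM key protectors.
# 'hgs.contoso.com' is a placeholder for your HGS cluster's DNS name.
Set-HgsClientConfiguration `
    -AttestationServerUrl   'http://hgs.contoso.com/Attestation' `
    -KeyProtectionServerUrl 'http://hgs.contoso.com/KeyProtection'

# Verify the host's attestation state; it must pass attestation before
# shielded VMs can run on, or migrate to, this host.
Get-HgsClientConfiguration
```

Only hosts whose `Get-HgsClientConfiguration` output shows a passed attestation will be released vTPM keys, which is what makes migration between authorized hosts safe.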

Question 116 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. Critical VMs must not co-locate and automatic balancing is required. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

VM start order is designed to sequence the booting of VMs in a cluster to ensure dependencies are respected. For example, a SQL Server VM may need to start after its underlying database storage is ready. While useful, this setting does not control which hosts the VMs run on and therefore cannot prevent multiple critical workloads from being co-located on the same host, leaving them vulnerable to a single-node failure.

Preferred owners allow administrators to guide the initial placement of VMs, suggesting which hosts should run specific VMs. However, these preferences are not strictly enforced. Dynamic optimization, maintenance, or unexpected failover events can still result in multiple critical VMs being placed on the same node, which does not satisfy the requirement for strict separation.

Cluster quorum settings ensure that a sufficient number of nodes are available for cluster operation. While critical for cluster resiliency and avoiding split-brain scenarios, quorum does not control VM placement or workload separation. It cannot enforce anti-affinity policies.

Anti-affinity rules with dynamic optimization enforce strict separation of critical VMs. Anti-affinity rules prevent specified VMs from running on the same host, reducing the risk of multiple critical workloads being affected by a single host failure. Dynamic optimization continuously monitors the cluster and automatically rebalances workloads to maintain adherence to anti-affinity policies while optimizing resource usage. This ensures both high availability and operational efficiency.

For environments hosting critical workloads, implementing anti-affinity rules with dynamic optimization ensures that no single host failure can impact multiple important VMs. It is the only option that satisfies the requirements for automated load balancing combined with enforced workload separation.
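The separation described above is configured per cluster group (VM role) through the `AntiAffinityClassNames` property: VMs that share the same class name are kept on different nodes where possible. A minimal sketch, assuming two SQL Server VM roles named `SQL-VM1` and `SQL-VM2` (placeholder names):

```powershell
# AntiAffinityClassNames is a string collection on each cluster group.
# VMs sharing the same class name are placed on different nodes when possible.
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add('SQLCritical') | Out-Null

(Get-ClusterGroup -Name 'SQL-VM1').AntiAffinityClassNames = $class
(Get-ClusterGroup -Name 'SQL-VM2').AntiAffinityClassNames = $class
```

By default this is a soft preference; on recent Windows Server versions the cluster common property `ClusterEnforcedAntiAffinity` can be set to make the separation a hard constraint.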

Question 117 

You manage Windows Server 2022 with Azure File Sync. Branch servers recall large files frequently, impacting bandwidth. You must prioritize essential files. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Cloud tiering's minimum file age (the date policy) determines how long a file must remain unmodified before it becomes eligible for tiering to the cloud. While it can reduce unnecessary traffic for recently created or modified files, it does not prioritize files based on their criticality. Files of different importance levels are treated the same, meaning that critical files may be recalled after less important ones.

Offline files mode ensures that files are available locally even when the network or server is inaccessible. This is useful for disconnected scenarios but does not influence recall prioritization or bandwidth management.

Background tiering deferral schedules the offloading of files to the cloud during less busy times. While it helps manage overall network load, it does not ensure that high-priority files are recalled first when multiple requests occur simultaneously.

Recall priority policies explicitly allow administrators to assign priority levels to files or directories. During recall, high-priority files are retrieved first, ensuring essential workloads remain responsive. This mechanism is critical in branch offices or locations with limited bandwidth, where indiscriminate recall could degrade performance and slow access to important data.

By implementing recall priority policies, administrators optimize both network usage and user experience. Essential files are always prioritized, minimizing delays and ensuring business-critical operations can continue uninterrupted.
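Exact policy names vary by Azure File Sync agent version, but one practical way to prioritize essential files is to recall them proactively during off-peak hours so they are fully present on disk before users need them. A hedged sketch using the `Invoke-StorageSyncFileRecall` cmdlet that ships with the Azure File Sync agent (the module path is the agent's documented install location; the share path is illustrative):

```powershell
# Load the server cmdlets installed with the Azure File Sync agent.
Import-Module 'C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll'

# Proactively recall a business-critical folder on the branch server so its
# tiered files become full files locally (path is a placeholder).
Invoke-StorageSyncFileRecall -Path 'D:\Shares\Finance\Critical'
```

Scheduling a recall like this for essential directories keeps peak-hour bandwidth free for on-demand recalls of lower-priority content.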

Question 118 

You manage a Windows Server 2022 RDS deployment with Azure MFA. Users report login failures caused by delayed MFA responses. You must reduce authentication failures without compromising security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

Reducing session persistence shortens the time users remain logged in, which does not resolve delays caused by MFA verification. In fact, shorter sessions may exacerbate login failures if MFA responses are slow, as users may be logged out before completing authentication.

Disabling conditional access bypasses MFA enforcement entirely. This compromises security and is not acceptable in scenarios requiring multi-factor authentication for compliance or risk mitigation.

Persistent cookies reduce repeated MFA prompts after successful authentication, enhancing convenience. However, they do not solve the root cause of login failures caused by delayed MFA response times during initial authentication.

Increasing the NPS extension timeout provides additional time for the MFA handshake between the RDS environment and Azure MFA service. Network latency, service delays, or temporary high load can cause delayed responses. By extending the timeout, administrators ensure users have sufficient time to complete authentication successfully without compromising security policies. This improves login reliability while maintaining strong MFA enforcement.

Extending the NPS extension timeout addresses the root cause of authentication failures while preserving security, making it the correct solution.

Question 119 

You manage Windows Server 2022 Hyper-V hosts. Certain VMs require encryption and secure migration. You need to protect virtual TPM keys during migration. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Node-level TPM passthrough is a method that binds a virtual TPM (vTPM) directly to the physical TPM of a specific Hyper-V host. This approach ensures that the vTPM keys are highly secure while the VM resides on that host, as the keys are never exposed outside the physical TPM. However, this strong security comes with a significant limitation: because the vTPM keys are tightly tied to a single host, migrating the VM to another Hyper-V host is not feasible without exposing the keys or decrypting the VM temporarily. Such exposure introduces a major security risk, making node-level TPM passthrough incompatible with scenarios that require mobility of encrypted VMs. Therefore, while it provides excellent local protection, it does not support secure live migration between hosts.

Cluster Shared Volume (CSV) redirected mode is designed to maintain storage availability in failover cluster environments. When a node or storage path fails, CSV redirected mode automatically redirects I/O traffic to a healthy node, ensuring uninterrupted access to shared storage. While this capability improves storage resilience and continuity of operations, it does not address VM encryption or protection of vTPM keys during migration. CSV redirected mode focuses solely on storage-level fault tolerance and does not provide mechanisms for securing encryption keys or managing authorized hosts for sensitive workloads. Therefore, it cannot meet the requirement for securely migrating encrypted VMs.

Migrating virtual machines without encryption is the least secure option. Without encryption, all VM data—including sensitive applications, configurations, and credentials—is exposed during the migration process. This exposes workloads to potential interception, tampering, or compliance violations, and is not acceptable for critical workloads that rely on encryption or vTPM protection. While this approach may simplify migration from a technical standpoint, it completely undermines the security of the VM and fails to meet enterprise compliance standards.

Shielded VMs combined with Host Guardian Service (HGS) provide the proper solution for securely migrating encrypted workloads. HGS centrally manages encryption keys and performs host attestation, ensuring that only authorized Hyper-V hosts can access the vTPM keys needed to run the VM. During migration, HGS validates the destination host’s security posture and releases keys securely, preventing exposure at any point. This solution maintains full encryption protections, preserves compliance requirements, and allows operational flexibility by enabling secure migration across authorized hosts.

For workloads that require both encryption and secure mobility, Shielded VMs with Host Guardian Service are the only solution that ensures vTPM keys remain protected, VMs stay encrypted during transit, and organizational security policies are maintained.
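On the VM side, the vTPM is protected by a key protector. The following is a lab-style sketch using a locally generated guardian (in a production guarded fabric, the guardian metadata comes from HGS rather than `-AllowUntrustedRoot`, and `SQL-VM1` is a placeholder VM name):

```powershell
# Create a local owner guardian (lab scenario only; production guardians
# are issued by the Host Guardian Service).
$owner = New-HgsGuardian -Name 'Owner' -GenerateCertificates

# Build a key protector from the guardian, attach it to the VM,
# then enable the VM's virtual TPM.
$kp = New-HgsKeyProtector -Owner $owner -AllowUntrustedRoot
Set-VMKeyProtector -VMName 'SQL-VM1' -KeyProtector $kp.RawData
Enable-VMTPM -VMName 'SQL-VM1'
```

With an HGS-issued key protector in place of the local guardian, the keys wrapped here are exactly what HGS releases only to attested hosts during migration.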

Question 120 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. Critical workloads must not co-locate, and automatic load balancing is required. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation: 

VM start order is a configuration setting within a Windows Server failover cluster that determines the sequence in which virtual machines (VMs) are powered on when the cluster starts or recovers from a failure. This feature is particularly useful for managing dependencies between services—for example, ensuring that a storage service or a database server starts before dependent application servers. However, while VM start order ensures proper sequencing, it does not influence the placement of VMs across cluster nodes. This means that multiple critical VMs could still be assigned to the same physical host, leaving them vulnerable to outages if that host fails. Therefore, while start order is valuable for boot sequencing and dependency management, it does not provide any mechanism to prevent co-location of critical workloads or to enforce distribution policies within the cluster.

The preferred owners setting is another cluster configuration that allows administrators to designate a list of nodes that a VM should preferably run on. During failover or initial placement, the cluster will attempt to start the VM on one of the preferred nodes. While this can guide VM placement and provide some level of control, it is not a strict enforcement mechanism. In situations involving dynamic optimization, cluster rebalancing, or unexpected failovers, critical VMs may still end up co-located on the same host because preferred owner settings are treated as suggestions rather than hard rules. Consequently, preferred owners alone cannot guarantee that critical workloads remain isolated from each other.

Cluster quorum settings are essential for maintaining the overall resiliency and health of a failover cluster. The quorum determines the minimum number of nodes that must be operational for the cluster to remain online and make decisions about failover. While quorum is critical for avoiding split-brain scenarios and ensuring cluster stability, it does not influence VM placement or enforce workload separation. Quorum helps maintain cluster functionality, but it does not prevent multiple critical VMs from being assigned to a single host.

Anti-affinity rules with dynamic optimization directly address the requirement to prevent co-location of critical workloads. Anti-affinity rules allow administrators to specify that certain VMs must not run on the same host simultaneously. Dynamic optimization continuously monitors resource utilization and VM placement, automatically moving VMs as needed to enforce these separation policies. This ensures that critical workloads are evenly distributed across nodes, minimizing risk from hardware failures and maintaining high availability. By combining anti-affinity rules with dynamic optimization, administrators can achieve both automated load balancing and strict enforcement of VM separation, ensuring that critical workloads remain protected while the cluster operates efficiently.
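After anti-affinity classes are assigned, placement can be audited from any cluster node. A short sketch that lists which node currently owns each VM role carrying an anti-affinity class:

```powershell
# List every cluster group that carries an anti-affinity class, along with
# its current owner node, to confirm critical VMs are not co-located.
Get-ClusterGroup |
    Where-Object { $_.AntiAffinityClassNames.Count -gt 0 } |
    Select-Object Name, OwnerNode, State, AntiAffinityClassNames
```

If two groups in the same class report the same `OwnerNode`, the cluster could not honor the separation (for example, too few healthy nodes), which is worth alerting on.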
