Microsoft AZ-801 Configuring Windows Server Hybrid Advanced Services Exam Dumps and Practice Test Questions Set 4 Q61-80

Visit here for our full Microsoft AZ-801 exam dumps and practice test questions.

Question 61

You manage Windows Server 2022 Hyper-V hosts with Shielded VMs. Some VMs need to be moved between clusters while maintaining encryption and compliance. What should you implement?

A) Node-level TPM passthrough
B) Shielded VM with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) Live migration without encryption

Answer: B

Explanation: 

Protecting Shielded VMs during migration requires a mechanism that maintains encryption keys securely. Passing TPM devices to nodes exposes keys locally but does not enable secure transfer between clusters. Nodes without access to the correct TPM cannot host the VMs, which prevents seamless movement. This approach is not suitable for secure migrations.

Clustered storage redirect mode provides continuity when accessing shared volumes during failures, but it does not manage encryption keys or VM TPM secrets. It is a storage-level feature, not an encryption or migration solution.

Migrating VMs without encryption removes all TPM protections, which violates compliance and security policies. This approach does not meet the requirement to maintain encryption and secure key management.

Using Shielded VMs with the Host Guardian Service allows encrypted VMs to move between authorized clusters securely. The service manages attestation and key distribution, ensuring VMs remain encrypted and compliant during migration. Authorized hosts can decrypt and run VMs without exposing TPM keys.
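
As a minimal illustration of the host-side setup (the HGS URLs are placeholders), the PowerShell sketch below points a Hyper-V host at a Host Guardian Service instance and then checks that the host passes attestation; only hosts in this state receive the keys needed to run or receive shielded VMs.

```powershell
# Illustrative sketch: register this Hyper-V host with the Host Guardian Service.
# The HGS URLs are placeholder values for this example.
Set-HgsClientConfiguration `
    -AttestationServerUrl   "http://hgs.contoso.com/Attestation" `
    -KeyProtectionServerUrl "http://hgs.contoso.com/KeyProtection"

# Verify the host is a healthy guarded host; AttestationStatus should report Passed
# before shielded VMs are placed on or migrated to this node.
Get-HgsClientConfiguration
```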

For secure migration of Shielded VMs between clusters, the correct solution is B.

Question 62 

You manage Windows Server 2022 RDS servers integrated with Azure MFA. Users report slow logons due to MFA delays. You need to reduce login failures while maintaining MFA security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

Authentication failures due to MFA delays often result from the server timing out before receiving verification. Reducing session persistence decreases the duration of active sessions but does not prevent MFA handshake timeouts. This may actually increase login failures as users are forced to re-authenticate more frequently.

Disabling conditional access may bypass MFA prompts but reduces security, which does not satisfy the requirement of maintaining MFA enforcement.

Persistent cookies improve user convenience by allowing reauthentication without repeated prompts, but they do not affect initial MFA handshake delays. They do not mitigate failures caused by latency in MFA responses.

Increasing the timeout in the NPS extension allows additional time for the server to complete the MFA verification process. This ensures that temporary network or service latency does not prevent logons, while security requirements remain intact. All authentication steps, including MFA, complete successfully, reducing login failures without compromising security.
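
As a rough sketch of where this timeout lives (run on the RD Gateway/NPS server), `netsh nps` is the in-box tool for inspecting, exporting, and importing the NPS configuration. The timeout values themselves sit in the Remote RADIUS Server Group's load-balancing settings and are typically raised to roughly 30-60 seconds; the export/edit/import workflow below is an assumed approach, not an exact recipe, and the element names should be verified against your own exported file.

```powershell
# Illustrative only: review the current NPS configuration on the RD Gateway server.
netsh nps show config

# Assumed scripted workflow: export the configuration, raise the Remote RADIUS Server
# Group timeout values (e.g. to 60 seconds) in the XML, then re-import it. Verify the
# exact element names against your exported file before importing.
netsh nps export filename="C:\Temp\nps-config.xml" exportPSK=YES
# ...edit the timeout values in C:\Temp\nps-config.xml...
netsh nps import filename="C:\Temp\nps-config.xml"
```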

To reliably complete hybrid MFA authentication, extending server-side timeout is required. Therefore, the correct answer is B.

Question 63 

You manage a Windows Server 2022 cluster running SQL VMs. Critical VMs must not co-host on the same node, and workloads should rebalance automatically during maintenance. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster storage quorum

Answer: C

Explanation: 

VM start order ensures the sequence of VM startup during cluster boot, but it does not prevent critical VMs from residing on the same host simultaneously. Start order helps with dependencies but does not enforce separation.

Preferred owners define preferred nodes for VM placement, but they do not strictly enforce anti-co-location. VMs may still end up on the same node if the scheduler deems it necessary.

Dynamic optimization combined with anti-affinity rules ensures that critical VMs remain separated across nodes. The cluster automatically rebalances workloads during maintenance or resource changes while adhering to separation constraints. This approach maximizes availability and reduces risk from single-node failures.
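
For reference, a minimal sketch of the in-box equivalents (VM and class names are assumptions): the AntiAffinityClassNames property keeps the tagged VMs apart, and the cluster auto-balancer properties provide automatic rebalancing; SCVMM dynamic optimization layers the same behavior across managed clusters.

```powershell
# Tag both SQL VMs with the same anti-affinity class so the cluster avoids
# co-locating them (names are illustrative).
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("SQL-Critical") | Out-Null
(Get-ClusterGroup -Name "SQL-VM1").AntiAffinityClassNames = $class
(Get-ClusterGroup -Name "SQL-VM2").AntiAffinityClassNames = $class

# Enable automatic node balancing: 2 = balance on node join and periodically.
(Get-Cluster).AutoBalancerMode  = 2
(Get-Cluster).AutoBalancerLevel = 2   # 1 = low, 2 = medium, 3 = high aggressiveness
```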

Cluster quorum settings determine the number of nodes required for cluster operations and resilience but do not influence VM placement or anti-affinity. Quorum management only affects cluster availability during node failures.

For automatic separation and rebalancing of critical VMs, anti-affinity rules with dynamic optimization are required. Therefore, the correct answer is C.

Question 64 

You manage Windows Server 2022 file servers using Azure File Sync. Branch servers experience high network usage during large recalls. You must prioritize important files and reduce bandwidth impact. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation: 

Adjusting minimum file age delays the retrieval of new files but does not allow prioritization of critical files over others. While it reduces unnecessary network load, it does not guarantee priority access for essential files.

Offline files mode ensures local availability during disconnection but does not influence bandwidth prioritization for recalls.

Deferring background tiering schedules offloading activity to later times, which reduces general network load but does not control which files are retrieved first during recall.

Recall priority policies allow administrators to assign higher priority to critical files during recall operations. When bandwidth is limited, these files are retrieved first, ensuring important workloads remain responsive. This mechanism directly addresses both performance and network efficiency in hybrid deployments. Therefore, the correct answer is B.
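
As an illustration of steering recall traffic toward what matters (the share path and thread count are assumptions), the Azure File Sync agent's Invoke-StorageSyncFileRecall cmdlet can pre-recall a critical folder during a quiet window so those files are already local when the branch needs them.

```powershell
# Illustrative: pre-recall a business-critical folder on the branch server during
# off-peak hours so on-demand recalls (and their bandwidth spikes) hit less often.
# The module path is the default Azure File Sync agent install location.
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
Invoke-StorageSyncFileRecall -Path "D:\Shares\Finance\Critical" -ThreadCount 4
```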

Question 65 

You manage Windows Server 2022 Hyper-V hosts. You must enforce high availability for VMs while preventing certain VMs from running on the same node. Which configuration ensures separation?

A) VM start order
B) Preferred owners
C) Anti-affinity rules
D) Live migration settings

Answer: C

Explanation: 

VM start order defines boot sequence but does not prevent co-hosting of critical workloads. It addresses dependencies, not placement separation.

Preferred owners specify preferred nodes for VMs but cannot guarantee strict separation between multiple VMs. The cluster scheduler may still place multiple VMs on the same host if resources allow.

Anti-affinity rules enforce that specific VMs cannot run on the same node. This ensures workloads are distributed across nodes, reducing the impact of a single host failure. It is the standard method for achieving high availability with isolation requirements.
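
On Windows Server 2022, the same separation can also be expressed as a named affinity rule; a short sketch with assumed rule and VM names:

```powershell
# Create a DifferentNode rule and add both VMs to it (names are illustrative).
New-ClusterAffinityRule -Name "SqlSeparation" -RuleType DifferentNode
Add-ClusterGroupToAffinityRule -Name "SqlSeparation" -Groups "SQL-VM1","SQL-VM2"

# Confirm the rule and its members.
Get-ClusterAffinityRule -Name "SqlSeparation"
```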

Live migration settings control how VMs move between hosts for maintenance or balancing but do not enforce separation rules. Migration policies support mobility but do not dictate placement constraints for high availability. Because only anti-affinity rules guarantee that the specified VMs never share a node, the correct answer is C.

Question 66 

You manage a Windows Server 2022 Hyper-V cluster. Certain VMs require encryption at rest using BitLocker, and you need to migrate them between hosts without decryption. What should you configure?

A) Node-level TPM passthrough
B) Shielded VM support
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Securing virtual machines while maintaining operational flexibility requires a system that protects encryption keys during migration. Node-level TPM passthrough binds encryption to a specific host. While VMs can run on that node, migration to other hosts is not possible without exposing encryption keys, which violates security requirements. This solution does not support encrypted workload mobility.

Cluster Shared Volume redirected mode provides alternate paths to access shared storage in case of I/O issues but does not handle VM encryption or key management. It ensures continuity but does not address encrypted migration requirements.

Migrating virtual machines without encryption removes BitLocker protection entirely. This violates the requirement to maintain encryption and exposes sensitive workloads, making it unsuitable for production environments with compliance requirements.

Shielded VM support integrates with Host Guardian Service to provide secure key management. Authorized hosts can decrypt and run VMs without exposing virtual TPM secrets. This allows encrypted workloads to be migrated seamlessly between hosts while maintaining encryption at rest. It ensures both security and operational flexibility in clustered Hyper-V environments.
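
A condensed sketch of the per-VM side (VM and guardian names are assumptions): the key protector ties the virtual TPM to HGS rather than to any one host, which is what allows BitLocker-protected VMs to move without decryption.

```powershell
# Build an HGS-backed key protector and attach it to the VM along with a virtual TPM.
# Names are illustrative; -AllowUntrustedRoot is typical only for lab/self-signed setups.
$guardian = Get-HgsGuardian -Name "ContosoGuardian"
$kp = New-HgsKeyProtector -Owner $guardian -AllowUntrustedRoot

Set-VMKeyProtector -VMName "SQL-VM1" -KeyProtector $kp.RawData
Enable-VMTPM       -VMName "SQL-VM1"
```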

Question 67 

You manage a Windows Server 2022 failover cluster hosting SQL Server VMs. You need to prevent critical VMs from running on the same node and ensure automatic rebalancing during maintenance. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster storage quorum

Answer: C

Explanation:

VM start order handles only the boot sequencing of virtual machines when a cluster comes online. It ensures that dependent services start in the correct order but provides no mechanism to prevent critical workloads from landing on the same host. Because it does not enforce placement rules or separation logic, it cannot guarantee distribution of important VMs across cluster nodes.

Preferred owners allow administrators to specify nodes that a virtual machine would ideally run on. While this influences initial placement, it does not impose a strict separation of sensitive workloads. During failover, rebalancing, or resource pressure, the cluster may still place multiple critical VMs on the same host. It lacks continuous enforcement and does not deliver automated distribution during ongoing operations.

Anti-affinity rules with dynamic optimization ensure that specific VMs remain separated across nodes and are continuously rebalanced as the cluster state changes. Anti-affinity rules define that designated VMs should not reside together, while dynamic optimization automatically evaluates resource usage and VM placement to maintain compliance. During maintenance or failover, the cluster will redistribute workloads to preserve the separation of critical VMs. This combination provides both enforcement and automatic correction, delivering the required availability and balancing behavior.

Cluster storage quorum determines how the cluster maintains availability in the event of node failures by requiring a majority of votes to continue operation. While quorum configuration is essential for cluster resilience, it plays no role in virtual machine placement or separation of workloads. It cannot address the need to keep critical VMs on separate nodes or automate rebalancing.

To effectively separate critical workloads and provide automated rebalancing during maintenance windows, configuring anti-affinity rules together with dynamic optimization is the correct approach.

Question 68 

You manage Windows Server 2022 with Azure File Sync. Branch servers frequently recall large files unnecessarily, causing network congestion. You must ensure critical files are retrieved first and reduce redundant downloads. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation:

Adjusting the cloud tiering minimum file age affects when newly created or modified files become eligible to be moved from local storage to the cloud. This setting helps ensure that frequently accessed or recently updated files remain available locally for a certain period before being tiered out. Although this can reduce some unnecessary downloads, it does not help distinguish between critical and non-critical files during recall operations. It simply controls eligibility for tiering and offers no mechanism for prioritizing recall traffic.

Offline files mode ensures that files are cached locally so users can continue working even when the server is unreachable. However, Azure File Sync operates independently of traditional offline file functionality. Configuring offline files mode does not change how Azure File Sync retrieves tiered files from the cloud, nor does it influence recall order, bandwidth handling, or prioritization behavior. Its scope is limited to providing file availability during disconnection events.

Background tiering deferral allows administrators to delay when the system uploads files to the cloud, helping reduce unnecessary outbound network usage. While this can optimize general bandwidth consumption, it does not impact the inbound file recall process. It cannot prioritize which files get retrieved during simultaneous requests, nor can it reduce redundant downloads when users open large files that are not workload-critical.

Recall priority policies are specifically designed to address both the need for recall efficiency and the need to differentiate file importance. Administrators can classify files or directories so that higher-priority files are retrieved first whenever multiple recalls occur. This is especially important for branch offices with limited bandwidth, where simultaneous recalls can overwhelm the network. By ensuring critical files take precedence, the overall user experience improves, and unnecessary congestion is avoided. These policies also reduce redundant downloads by allowing controlled recall behavior tailored to business importance. For environments needing predictable performance and minimized network saturation, implementing recall priority policies is the correct approach. 

Question 69 

You manage a Windows Server 2022 RDS deployment integrated with Azure MFA. Users experience failed logons due to delayed MFA responses. You need to reduce login failures without compromising MFA security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation:

Reducing session persistence shortens the length of time a Remote Desktop session can remain connected or reconnected without requiring reauthentication. Although this sometimes enhances security, it tends to increase the number of authentication attempts users must complete. Because the problem involves delayed MFA responses, forcing users to authenticate more frequently actually increases the likelihood of additional failures. This setting does nothing to address latency or communication delays with the MFA provider.

Disabling conditional access removes or weakens policy enforcement that controls MFA requirements. While this would effectively reduce MFA-related failures, it dramatically undermines security. Since the requirement is to maintain MFA protection, removing conditional access does not meet the operational or compliance objectives. It solves the symptom by eliminating authentication requirements rather than solving the underlying cause.

Enabling persistent cookies helps users avoid repeated MFA prompts by caching authentication information for a defined period. This improves convenience during routine access but does not affect the initial MFA handshake. When slow responses from the MFA provider delay verification, the authentication request may still time out. Persistent cookies do not extend or modify server-side timeouts, so they cannot prevent the failures described.

Increasing the NPS extension timeout directly addresses the root issue. When an MFA verification takes too long—due to network latency, temporary slowdown of the MFA service, or delays on the client side—the authentication request may time out before the MFA system responds. Increasing the timeout allows the NPS extension to wait longer for MFA approval, reducing instances where valid MFA attempts fail simply because the response arrives slightly late. Importantly, this does not weaken MFA security; it merely accommodates slower responses without bypassing verification. Because it resolves the problem while maintaining MFA enforcement, increasing the NPS extension timeout is the correct solution.

Question 70 

You manage Windows Server 2022 Hyper-V hosts. Certain VMs require encryption and secure mobility between hosts. What is the correct solution to protect virtual TPM keys during migration?

A) Node-level TPM passthrough
B) Shielded VM with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation:

Node-level TPM passthrough binds the virtual TPM to the physical TPM chip of the host machine. Although this provides strong, hardware-based trust, it creates a major limitation: the VM becomes tied to a single host. Attempting to migrate the VM to another node either fails outright or exposes sensitive TPM key material, which violates security requirements. This mechanism is therefore unsuitable for environments needing encrypted VM mobility across multiple hosts.

Cluster Shared Volume redirected mode ensures VMs retain access to storage even if the preferred data path fails. It works by redirecting I/O through another cluster node temporarily. While valuable for storage resilience, this function has no involvement in managing virtual TPM keys, protecting encrypted state, or governing secure migration processes. It improves availability but does not address encryption or secure host-to-host mobility.

Live migration without encryption exposes the VM’s state and memory over the network in unprotected form. Sensitive data, including security keys, could be intercepted, making this practice incompatible with compliance-driven or security-sensitive workloads. This method cannot be used when encrypted VMs or virtual TPM protection are required.

Shielded VMs with Host Guardian Service offer a complete solution for secure VM operation and migration in Windows Server 2022. Shielded VMs encrypt their disks, boot data, and state information while using virtual TPMs managed through HGS. Host Guardian Service ensures that only trusted hosts can run the VM by performing attestation checks before releasing sensitive key material. During migration, the virtual TPM keys remain protected and are only accessible to properly attested hosts. This enables secure, compliant mobility while maintaining confidentiality and key integrity. Because it satisfies the requirement to protect virtual TPM keys during migration while still allowing full VM mobility, Shielded VMs with Host Guardian Service is the correct solution.
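
As a quick operational check before such a migration, the guarded-fabric diagnostics can be run on the destination host to confirm it will pass attestation; this is a sketch, and the output still needs manual review.

```powershell
# Run the built-in guarded fabric diagnostics on the destination Hyper-V host;
# failures here mean HGS will not release key material to this node.
Get-HgsTrace -RunDiagnostics -Detailed
```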

Question 71 

You manage a Windows Server 2022 environment using Azure Monitor. You need to trigger an automated remediation script whenever CPU usage exceeds 85% for 10 minutes on any server. What should you configure?

A) Azure Log Analytics saved queries
B) Azure Monitor metric alerts with action groups
C) Performance Monitor counters with local scripts
D) Event Viewer custom views

Answer: B

Explanation:

Saved queries in Azure Log Analytics are powerful tools for reviewing collected data and conducting historical analysis across your environment. They allow administrators to build queries that identify trends, detect anomalies, and examine system behavior over time. However, they operate retrospectively, meaning they do not continuously monitor real-time conditions or take action when thresholds are exceeded. Because they lack the capability to automatically initiate tasks such as running remediation scripts, they are not designed for immediate response scenarios that require automation based on metric thresholds.

Performance Monitor counters paired with local scripts offer another way to track CPU utilization directly on each Windows Server 2022 machine. Although this method can trigger local scripts when limits are exceeded, it lacks centralization. In a distributed hybrid environment, managing and maintaining this configuration on every server becomes complicated and error-prone. There is no unified mechanism to trigger automated actions across all servers, making this approach unsuitable for scalable operational oversight and automated remediation.

Event Viewer custom views help filter specific logs for troubleshooting or administrative investigation. While useful for manually reviewing issues, they do not monitor CPU utilization metrics or run automated tasks in response to performance thresholds. They operate only as visual filters and do not provide alerting, centralized evaluation, or action triggers.

Azure Monitor metric alerts provide continuous evaluation of performance metrics such as CPU usage directly from monitored servers. These alerts can be configured with specific thresholds and evaluation periods—for example, CPU usage exceeding 85% for a duration of 10 minutes. When such a condition is met, metric alerts trigger an action group. Action groups can run automation scripts, invoke Azure Automation Runbooks, send notifications, or execute Logic Apps. This enables immediate, centralized, uniform response across all monitored servers. Because metric alerts and action groups offer scalable monitoring, consistent policy enforcement, and automated remediation, they fully meet the requirement for triggering scripts when CPU usage remains elevated. Therefore, Azure Monitor metric alerts with action groups is the correct solution.
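
A minimal Az PowerShell sketch of such an alert (resource IDs, resource group, and action-group names are placeholders; the "Percentage CPU" metric applies to Azure VMs, so Arc-enabled servers would use the corresponding guest metric instead):

```powershell
# Fire when average CPU stays above 85% over a 10-minute window, evaluated every
# 5 minutes, and invoke an existing action group that runs the remediation runbook.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 85

Add-AzMetricAlertRuleV2 -Name "HighCpu-AutoRemediate" `
    -ResourceGroupName "rg-hybrid" `
    -TargetResourceId "/subscriptions/<sub-id>/resourceGroups/rg-hybrid/providers/Microsoft.Compute/virtualMachines/vm01" `
    -WindowSize (New-TimeSpan -Minutes 10) `
    -Frequency  (New-TimeSpan -Minutes 5) `
    -Condition $criteria `
    -Severity 2 `
    -ActionGroupId (Get-AzActionGroup -ResourceGroupName "rg-hybrid" -Name "ag-cpu-remediation").Id
```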

Question 72 

You manage a Windows Server 2022 Hyper-V cluster. Certain VMs require encryption and must be movable between hosts without exposing virtual TPM keys. What should you implement?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation:

Node-level TPM passthrough directly binds virtual TPM keys to the physical TPM chip of the host server. While this offers hardware-based trust and encryption benefits, it limits VM mobility because the encryption keys are anchored to a specific host. Any attempt to migrate the VM to another node compromises security or becomes blocked entirely. This violates the requirement for secure, seamless movement of encrypted VMs across multiple hosts in a Windows Server 2022 Hyper-V cluster.

Cluster Shared Volume redirected mode helps ensure continued storage access during storage path failures or cluster node connectivity issues. Its primary role is sustaining I/O availability by redirecting traffic through alternative channels. However, it provides no mechanisms for managing virtual TPM keys, securing sensitive encryption material, or governing VM mobility. It is entirely unrelated to the security concerns described.

VM live migration without encryption exposes sensitive virtual machine state information and potentially confidential memory contents. Because these operations occur over the network unprotected, this method severely weakens security and fails compliance standards. It cannot be adopted for handling encrypted workloads requiring confidentiality during migration.

Shielded VMs combined with Host Guardian Service provide the required security model. Shielded VMs encrypt virtual disks, protect state data, and rely on virtual TPMs whose keys are centrally managed by HGS. Host Guardian Service ensures only trusted, attested hosts can run these VMs. During live migration, encryption keys remain protected and are released only to authorized hosts after attestation. This setup ensures both mobility and strong security for sensitive workloads. Because it supports encrypted VM migration without exposing virtual TPM keys, Shielded VMs with Host Guardian Service is the correct and only suitable solution.

Question 73 

You manage a Windows Server 2022 failover cluster running critical SQL workloads. You need to prevent multiple critical VMs from co-hosting on the same node and enable automatic rebalancing during maintenance. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation:

VM start order allows administrators to control the sequence in which virtual machines power on following a cluster reboot or failover. This is important for ensuring that dependent services start in the correct order. However, start order does not influence VM placement, resource distribution, or the separation of critical workloads across nodes. It cannot enforce rules preventing key workloads from running on the same host.

Preferred owners let administrators specify which nodes should ideally run a particular VM. While this affects initial placement, it does not strictly enforce separation. During failovers, rebalancing, or maintenance operations, multiple critical VMs may still end up sharing the same host. The preference is advisory rather than mandatory, making it insufficient for ensuring isolation of critical workloads.

Cluster quorum settings determine the number of votes required for the cluster to remain operational. Quorum ensures cluster stability and prevents split-brain scenarios but does not influence VM distribution, workload separation, or automated rebalancing processes. It is focused on cluster survivability, not workload placement policies.

Anti-affinity rules with dynamic optimization directly address both requirements. Anti-affinity rules ensure that specific VMs never run together on the same cluster node. Dynamic optimization continuously evaluates host resource usage and automatically moves virtual machines to maintain performance balance while honoring anti-affinity constraints. During maintenance or resource pressure scenarios, the system will redistribute workloads automatically while still preventing critical VMs from being co-located. This combination provides automated rebalancing and strict workload separation to minimize the impact of node failures. Therefore, configuring anti-affinity rules with dynamic optimization is the correct solution.

Question 74 

You manage Windows Server 2022 using Azure File Sync. Branch servers experience high network usage during recall of large files. You must prioritize critical files and reduce unnecessary bandwidth usage. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation:

Cloud tiering minimum file age determines how long a newly created or modified file must remain on the branch server before it becomes a candidate for tiering to the cloud. While this setting can influence when data is offloaded and reduce unnecessary recall of recently accessed files, it does not control the order or priority of file recall operations. It cannot differentiate between critical and non-critical files during bandwidth-heavy recall events.

Offline files mode allows users to access cached data locally during network interruptions. However, Azure File Sync operates independently of traditional offline file caching behaviors. Enabling offline files mode does not affect Azure File Sync’s recall logic, prioritization behaviors, or network optimization. It merely ensures availability during disconnection, not efficient bandwidth usage.

Background tiering deferral shifts when files are offloaded to the cloud to reduce overall network activity. While helpful for scheduling cloud uploads, it does not control which files get recalled from the cloud during heavy user activity. Tiering deferral affects outbound operations, not inbound recall behavior.

Recall priority policies allow administrators to define which files or directories should receive priority during recall operations. When multiple recall requests occur or bandwidth is constrained, high-priority files are recalled before others. This ensures critical workloads receive prompt access while minimizing unnecessary network consumption by deprioritizing less important files. This mechanism provides predictable performance, reduces redundant traffic, and ensures that essential files arrive first during peak demand. Because the requirement is to prioritize important files and control bandwidth usage during recall, recall priority policies are the correct configuration.

Question 75 

You manage a Windows Server 2022 RDS deployment with Azure MFA. Users report failed logons due to delayed MFA responses. You must reduce login failures while maintaining MFA security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation:

Reducing session persistence shortens how long a Remote Desktop session can remain active or reconnected without requiring reauthentication. While this may improve security in certain environments, it increases the frequency of MFA prompts. More prompts mean more opportunities for delayed MFA responses to cause login failures. This does not address the underlying cause of the issue, which is response delay from the MFA verification service.

Disabling conditional access weakens authentication controls by removing MFA enforcement under certain conditions. Although this would eliminate MFA-related failures, it undermines the security posture and violates the requirement to maintain MFA protections. It is not a viable solution for environments that rely on strong identity verification.

Persistent cookies allow users to avoid repeated MFA prompts for a defined period after a successful verification. However, this only affects subsequent authentications. It does not improve the reliability of the initial MFA handshake, where delays commonly occur. If the MFA response arrives late, the initial attempt still fails regardless of persistent cookies.

Increasing the NPS extension timeout directly resolves the issue by giving the system more time to complete MFA verification. When the default timeout is too short, even a valid MFA approval may arrive after the authentication process concludes, causing the login to fail. By extending the timeout, the server accommodates slower network conditions or temporary delays in the MFA service. This preserves full MFA enforcement while significantly reducing failed login attempts. Because it improves reliability without compromising security, increasing the NPS extension timeout is the correct configuration.

Question 76 

You manage a Windows Server 2022 Hyper-V cluster. Certain VMs require encryption and must migrate between hosts without exposing virtual TPM keys. What should you implement?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation: 

Node-level TPM passthrough provides a method for binding a virtual machine’s encryption keys directly to the hardware-level TPM of a specific host. While this ensures that the VM benefits from trusted hardware protections, it introduces an immediate limitation: the virtual TPM keys cannot move securely with the VM during migration. This creates a significant concern for environments that require seamless mobility, since any migration would require exposing or re-generating sensitive key material. Because of this restrictive binding, node-level passthrough is not a viable solution for workloads that must migrate safely across cluster nodes.

Cluster Shared Volume redirected mode is engineered to maintain storage continuity when the primary path fails or when nodes experience network disruptions. Its purpose is to ensure uninterrupted I/O operations during storage-related failovers. However, this mode does not offer any built-in mechanism for key protection or encryption-related controls. It is entirely unrelated to the secure handling of virtual TPM keys and therefore cannot satisfy the requirements of encrypted VM mobility.

Unencrypted live migration allows virtual machines to move between hosts without tunneling protections or security controls. Although it simplifies performance characteristics, it exposes sensitive workloads to interception risks and compliance violations. Without encryption protections, any environment handling confidential or regulated data cannot use this method, as it contradicts proper security architecture.

Shielded VMs with Host Guardian Service provide a comprehensive framework for protecting virtual TPM keys, enforcing attestation, and ensuring that only trusted hosts are able to run sensitive virtual machines. Virtual TPM keys are securely guarded and never exposed externally during migration. HGS validates host integrity before approval, allowing encrypted VMs to move safely across approved nodes. This framework is specifically designed for environments where confidentiality, compliance, and secure mobility are mandatory. For secure migration of encrypted VMs while keeping virtual TPM keys protected, Shielded VMs with Host Guardian Service is the only appropriate approach.

Question 77 

You manage a Windows Server 2022 failover cluster running SQL Server VMs. You need to prevent critical VMs from being placed on the same host and enable automatic rebalancing during maintenance. What should you configure?

A) VM start order
B) Preferred owners
C) Anti-affinity rules with dynamic optimization
D) Cluster quorum settings

Answer: C

Explanation:

VM start order is used when a cluster comes online after reboot or failure. It allows administrators to specify which virtual machines should power on first, particularly when certain services depend on others. While important for controlled service startup, it does not influence proactive host placement, separation of critical workloads, or dynamic redistribution during maintenance cycles. Therefore, it cannot enforce the requirement of preventing important VMs from running on the same host.

Preferred owners assign a list of nodes that a particular VM should run on whenever possible. While this can influence placement priorities, it does not guarantee separation between specific workloads. Both critical VMs could still be placed on the same host if the cluster considers that host available. This behavior is especially likely during load balancing events or recovery scenarios where the cluster prioritizes availability over placement preferences.

Cluster quorum settings ensure that the cluster remains functional as long as a majority of nodes or votes are available. Quorum focuses strictly on fault tolerance and continuity, not resource placement or VM distribution logic. Because quorum settings do not influence where virtual machines run or how they are separated, they cannot solve the requirement of ensuring workload separation.

Anti-affinity rules enforce a logical separation between specified virtual machines by preventing them from being hosted on the same cluster node. When combined with dynamic optimization, the cluster continuously evaluates host resource usage, placement restrictions, and performance metrics. During maintenance events or imbalanced resource conditions, the cluster automatically redistributes virtual machines while preserving anti-affinity constraints. This combination ensures that critical workloads remain isolated to minimize simultaneous failure risk, while still benefiting from automated balancing. Because anti-affinity rules with dynamic optimization directly address separation and controlled redistribution, this configuration is the correct solution.

Question 78 

You manage Windows Server 2022 with Azure File Sync. Branch servers frequently download large files unnecessarily during recall. You must prioritize important files and reduce redundant network usage. What should you configure?

A) Cloud tiering minimum file age
B) Recall priority policies
C) Offline files mode
D) Background tiering deferral

Answer: B

Explanation:

Cloud tiering minimum file age determines how long newly created files must remain local before they become eligible for tiering to the cloud. While this setting helps manage which files remain cached locally, its primary function is to prevent immediate offloading of recently accessed files. It cannot ensure that important files are prioritized for recall, nor can it prevent unnecessary recall of less important data. Therefore, it is insufficient for environments with recurring bandwidth consumption issues during recall events.

Offline files mode provides local caching that allows users to access files during network outages. However, because Azure File Sync manages cloud tiering differently than traditional offline file caching, enabling offline mode does not influence the order in which files are recalled or which workloads receive higher priority. It also does not reduce redundant downloads when multiple clients request the same content.

Background tiering deferral delays the automated process of moving files to the cloud. Although this can reduce network activity at certain times, it does not solve the issue of files being recalled unnecessarily or repeatedly. It simply schedules outbound traffic differently and offers no mechanism to prioritize files that must be recalled before others.

Recall priority policies directly address the challenge described. They allow administrators to categorize important files or directories so that they are retrieved before low-priority data whenever bandwidth is limited or when branch servers initiate multiple recall operations simultaneously. By marking crucial datasets as higher priority, the system ensures that essential files are retrieved quickly without consuming unnecessary network resources for less important items. This targeted control reduces bandwidth strain, minimizes redundant downloads, and ensures that vital workloads receive timely access. Because recall priority policies specifically provide prioritization and optimization for file retrieval, they are the correct configuration for this scenario.

Question 79

You manage a Windows Server 2022 RDS deployment with Azure MFA. Users occasionally experience failed logons due to delayed MFA responses. You must reduce login failures without weakening MFA security. What should you configure?

A) Reduce session persistence
B) Increase NPS extension timeout
C) Disable conditional access
D) Enable persistent cookies

Answer: B

Explanation: 

Reducing session persistence shortens the amount of time that an authenticated Remote Desktop session remains active without requiring reauthentication. Although this may increase the frequency of MFA prompts, it does not address delays in receiving MFA approvals. Instead, it may force users to authenticate more often, causing more failures whenever response times are slow. This results in poorer user experience without improving reliability.

Disabling conditional access undermines the security posture of the environment by effectively reducing or removing MFA enforcement for certain scenarios or users. While this would eliminate MFA-related failures, it weakens the core security requirement and contradicts the necessity of maintaining strong authentication controls. Therefore, it is not a viable option for maintaining compliance or organizational security standards.

Persistent cookies allow MFA systems to remember successful authentication events for a period of time, reducing repeated prompts. While beneficial for user convenience, persistent cookies influence only repeated authentication attempts. They do not improve success rates for the initial MFA challenge, where delayed responses cause logon failures. As a result, persistent cookies do not solve the underlying performance issue affecting MFA verification timing.

Increasing the NPS extension timeout directly addresses the root cause of delayed MFA responses. When the default timeout is too short, any latency in processing the MFA challenge results in authentication failures even when the user approves the prompt. By lengthening the timeout window, the system is given additional time for the MFA response to return successfully. This maintains full security while ensuring that users experience fewer failed logons. The extended timeout is a safe, recommended approach for hybrid RDS deployments involving Azure MFA, especially where intermittent delays occur. Because it resolves failures without weakening security, increasing the NPS extension timeout is the correct solution.

Question 80 

You manage a Windows Server 2022 Hyper-V cluster. Certain VMs require encryption and secure mobility between hosts. You need to ensure virtual TPM keys remain protected during migration. What should you configure?

A) Node-level TPM passthrough
B) Shielded VMs with Host Guardian Service
C) Cluster Shared Volume redirected mode
D) VM live migration without encryption

Answer: B

Explanation:

Node-level TPM passthrough maps a virtual TPM directly to the underlying hardware TPM of a specific host. Although this approach strengthens security for workloads that remain stationary, it prevents encrypted virtual machines from migrating safely because the encryption keys are tied to one host. Any attempt to move the VM requires exposing or transferring the key material, which is not acceptable for secure or compliant environments. This limitation makes node-level passthrough unsuitable for scenarios requiring secure VM mobility.

Cluster Shared Volume redirected mode is designed for storage resiliency rather than encryption security. It enables continuous I/O operations during network disruptions or cluster node failures by rerouting storage traffic. While valuable for maintaining availability, it does not manage virtual TPM keys or enforce protections during live migration. Therefore, it cannot satisfy the requirement for encrypted VM movement between hosts.

Live migration without encryption disables important security protections, allowing the movement of virtual machines in plain form across the network. This exposure risk is incompatible with security-sensitive workloads and would violate compliance rules and industry standards. Environments handling regulated or confidential data cannot use unencrypted migration under any circumstances.

Shielded VMs with Host Guardian Service provide the full security framework required for protecting virtual TPM keys, enforcing host attestation, and enabling secure live migration. HGS ensures that only verified and trusted hosts can run shielded workloads while protecting key material throughout the migration process. Virtual TPM keys are never exposed, mirrored, or transferred in insecure ways. Because this design enables encrypted workloads to move freely and safely across approved cluster hosts while maintaining compliance, it is the correct and only appropriate configuration for this scenario.
