Microsoft DP-300 Administering Microsoft Azure SQL Solutions Exam Dumps and Practice Test Questions Set 5 Q81-100

Visit here for our full Microsoft DP-300 exam dumps and practice test questions.

Question 81 

You need to ensure sensitive columns can be queried safely without exposing plaintext to the database engine. Which feature should you implement?

A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security

Answer:  A) Always Encrypted

Explanation:

Always Encrypted is a feature in Azure SQL Database and SQL Server designed to protect sensitive data such as credit card numbers, social security numbers, or other personally identifiable information. What makes it unique is that encryption and decryption occur entirely on the client side. The database engine never receives the plaintext values, meaning even database administrators cannot see the actual sensitive data. This design ensures strong privacy protection and helps organizations meet strict regulatory requirements such as GDPR, HIPAA, and PCI-DSS, while still allowing equality queries against columns that use deterministic encryption (randomized encryption offers stronger protection but does not support server-side comparisons).
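
As a minimal sketch of how this looks in practice (the dbo.Customers table and the CEK_Auto1 column encryption key are hypothetical, and the column master key and column encryption key must already exist), a protected column is declared directly in the table definition; the client then connects with Column Encryption Setting=Enabled so that its driver encrypts parameters and decrypts results:

-- Hypothetical table; deterministic encryption permits equality lookups, randomized does not.
CREATE TABLE dbo.Customers
(
    CustomerId INT IDENTITY PRIMARY KEY,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH
        (
            COLUMN_ENCRYPTION_KEY = CEK_Auto1,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);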

Transparent Data Encryption encrypts the entire database at rest, including data files, backups, and logs, preventing unauthorized access to physical storage. However, when queries are executed, TDE decrypts the data transparently on the server, which means the plaintext is visible to the database engine and to anyone with sufficient server privileges. While TDE is essential for protecting against physical theft or unauthorized access to storage, it does not protect sensitive column data during query execution, which is a key requirement in this scenario.

Dynamic Data Masking obscures the data in query results by replacing sensitive values with masked representations based on defined rules. For example, a social security number might be displayed as XXX-XX-1234. However, masking only affects the output of queries and does not encrypt or protect the underlying data. Anyone with direct access to the table or administrative privileges can still see the unmasked data. Therefore, it does not provide true encryption or protection for sensitive columns at rest or in motion.
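
For contrast, a masking rule is only metadata on the column; the sketch below (hypothetical dbo.Employees table and ReportingUser principal) applies a partial mask matching the XXX-XX-1234 pattern and shows that the UNMASK permission bypasses it entirely:

-- Mask the column for non-privileged readers; the stored value stays unchanged and unencrypted.
ALTER TABLE dbo.Employees
    ALTER COLUMN SSN ADD MASKED WITH (FUNCTION = 'partial(0,"XXX-XX-",4)');

-- Anyone granted UNMASK (and administrators) still sees the real values.
GRANT UNMASK TO ReportingUser;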

Row-Level Security restricts access to rows within a table based on user attributes or roles. This can be useful for enforcing access policies where different users should see only specific subsets of data. While RLS provides fine-grained access control, it does not encrypt the data. Users who bypass the security policies or access the table directly will still be able to view plaintext data. Considering the requirement to query sensitive data without exposing plaintext to the database engine, Always Encrypted is the only solution that provides client-side encryption while still enabling queries on encrypted columns, ensuring both operational functionality and regulatory compliance.

Question 82 

You want to offload read-only reporting workloads from a primary database in the Business Critical tier. Which feature should you enable?

A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Hyperscale replicas

Answer:  A) Read Scale-Out

Explanation:

Read Scale-Out is a feature available in the Business Critical tier of Azure SQL Database that provides one or more read-only secondary replicas. These replicas allow read-only queries, such as reporting, analytics, or other data-intensive operations, to be offloaded from the primary database. By routing read workloads to the secondary replicas, the primary database can focus on write operations, improving overall performance and reducing latency for both transactional and reporting operations. This separation of read and write workloads is critical for organizations with high read-intensive reporting requirements.
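
Read routing is controlled from the client side rather than in T-SQL; as a sketch (server and database names are placeholders), a connection that declares read-only intent is sent to a secondary replica, which the session can confirm:

-- Client connection string (placeholders): Server=tcp:myserver.database.windows.net;Database=salesdb;ApplicationIntent=ReadOnly;
-- Returns READ_ONLY when the session landed on a secondary replica, READ_WRITE on the primary.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability');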

Auto-Failover Groups provide automatic failover and high availability for databases across different regions. They ensure continuity in case of a primary database outage but do not provide offloading of read-only workloads. Their primary focus is resilience and disaster recovery rather than performance scaling for reporting or analytics workloads. While valuable for high availability, they are not designed to reduce the load on the primary database for read queries.

Elastic Pool allows multiple databases to share compute and storage resources, optimizing cost for databases with varying and unpredictable workloads. While it is useful for managing resource consumption and cost efficiency across multiple databases, it does not create secondary replicas or specifically offload read-only operations from the primary database. Therefore, it does not address the requirement for offloading reporting workloads.

Hyperscale replicas are a feature specific to the Hyperscale service tier, which is designed for very large databases with dynamic scaling of compute and storage. Hyperscale replicas can offload reads in the Hyperscale environment, but they are not available for databases in the Business Critical tier. Since the requirement is specifically for Business Critical databases, Read Scale-Out is the correct feature because it enables secondary read-only replicas to handle reporting queries without impacting the performance of the primary transactional database.

Question 83 

You want to encrypt database backups to comply with regulatory standards. Which feature should you enable?

A) Transparent Data Encryption
B) Always Encrypted
C) Dynamic Data Masking
D) Column-Level Permissions

Answer:  A) Transparent Data Encryption

Explanation:

Transparent Data Encryption (TDE) encrypts the entire database, including data files, transaction logs, and backups. This ensures that any data stored at rest, whether in the primary database or in backup files, is fully encrypted and protected against unauthorized access. TDE operates transparently, meaning applications and queries do not need to change to benefit from encryption. It is widely used to meet regulatory compliance requirements, including HIPAA, PCI DSS, and GDPR, where encryption of data at rest is mandatory.
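
As a brief sketch (SalesDb is a placeholder name, and TDE is already enabled by default on newly created Azure SQL databases), encryption can be toggled per database and its state checked from a dynamic management view:

ALTER DATABASE [SalesDb] SET ENCRYPTION ON;

-- encryption_state = 3 means the database, its log, and its backups are encrypted.
SELECT DB_NAME(database_id) AS database_name, encryption_state, encryptor_type
FROM sys.dm_database_encryption_keys;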

Always Encrypted is designed to protect sensitive data within specific columns, ensuring that data remains encrypted during query execution and never exposed as plaintext to the database engine. While Always Encrypted is excellent for protecting sensitive column data, it does not automatically encrypt full database backups. The underlying database files and backups remain encrypted only if TDE is also enabled. Therefore, Always Encrypted alone is insufficient for regulatory compliance that specifically requires encryption of backups.

Dynamic Data Masking obscures sensitive values in query results by replacing them with masked versions. While this is useful for preventing accidental exposure of data to certain users, it does not encrypt the underlying data or backups. Masking only affects query results and does not meet regulatory requirements for encrypting data at rest or in backups.

Column-Level Permissions restrict access to specific columns in a table by user role or permission. While this can limit who can see sensitive data, it does not provide encryption or secure storage of backups. Users with sufficient privileges or administrators could still access the data directly. TDE is the only feature that provides transparent encryption for the entire database and its backups, making it the appropriate choice for ensuring regulatory compliance in scenarios where encryption at rest, including backups, is required.

Question 84 

You want to store audit logs in a centralized, durable, and secure location for compliance purposes. Which destination should you use?

A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI

Answer:  A) Azure Storage account

Explanation:

Azure Storage accounts provide a reliable, secure, and durable repository for storing audit logs. They support long-term retention and can be configured with redundancy options like locally redundant storage (LRS) or geo-redundant storage (GRS), ensuring data durability and availability even in case of hardware failures or regional outages. Storage accounts also support encryption at rest, access controls, and integration with Azure policies, making them a highly suitable choice for compliance-focused auditing scenarios.

Log Analytics workspace is primarily designed for monitoring and querying log data. While it is excellent for real-time analysis, dashboards, and alerting, it is not optimized for long-term retention or archival storage. Retaining large volumes of historical audit logs in Log Analytics can become expensive and does not offer the same durability guarantees as a storage account.

Event Hubs is a messaging platform that enables streaming of large volumes of event data to other services for processing or analytics. While it can temporarily capture audit events, it is not intended for long-term storage or compliance retention. Data stored in Event Hubs is transient, and it is better used as a pipeline for delivering events to storage or analytics platforms rather than serving as a centralized storage destination.

Power BI is a visualization and reporting tool. It is designed to present and analyze data rather than provide secure, durable storage. While Power BI can visualize audit logs, it cannot meet the requirements for secure, long-term retention or regulatory compliance. Therefore, an Azure Storage account is the correct choice, providing secure, centralized, and durable storage for audit logs while allowing integration with other services for monitoring and compliance reporting.

Question 85 

You want to scale an Azure SQL Database automatically based on workload demand and pause it when idle to reduce costs. Which tier should you use?

A) Serverless compute tier
B) Business Critical tier
C) Hyperscale tier
D) Elastic Pool

Answer: A) Serverless compute tier

Explanation:

The Serverless compute tier is designed for databases with intermittent or unpredictable workloads. It automatically scales compute resources based on current demand, allowing applications to handle spikes without manual intervention. When the database is idle, it automatically pauses, reducing compute costs to near zero while maintaining storage and state. This dynamic scaling and auto-pausing make it highly cost-efficient for development, testing, or applications with variable usage patterns.

Business Critical tier provides high-performance compute resources and low-latency storage optimized for transactional workloads. While it offers high availability and excellent performance, it uses fixed compute resources and does not automatically pause when idle. This means costs remain constant regardless of workload activity, making it less suitable for scenarios requiring automatic scaling and cost savings during idle periods.

Hyperscale tier is designed to support very large databases and decouples compute and storage, allowing independent scaling of each. It is ideal for massive datasets but does not provide automatic pause functionality. Its primary focus is performance and scalability rather than cost optimization for variable workloads, so it does not meet the requirement to reduce costs during periods of inactivity.

Elastic Pool allows multiple databases to share a set of compute and storage resources. While it is beneficial for cost optimization across multiple databases with varying workloads, it does not automatically scale individual databases or pause them when idle. The Serverless tier remains the only option that fully meets the requirement to automatically scale based on workload demand and pause when idle, providing both cost savings and operational flexibility.

Question 86 

You want to automatically detect and correct query performance regressions in Azure SQL Database. Which feature should you enable?

A) Automatic Plan Correction
B) Query Store
C) Intelligent Insights
D) Extended Events

Answer:  A) Automatic Plan Correction

Explanation:

Automatic Plan Correction is designed specifically to address query performance regressions in Azure SQL Database. It continuously monitors the execution plans of queries and automatically detects when a previously efficient query plan has regressed, meaning it is performing worse than expected. Once a regression is detected, Automatic Plan Correction forces the query to use the last known good execution plan, ensuring stable and predictable performance. This process is fully automated, which minimizes the need for database administrators to manually intervene when performance issues occur. It is particularly valuable in production environments where consistent query performance is critical.
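
A minimal sketch of enabling the feature on the current database (Query Store must be active, which it is by default in Azure SQL Database) and confirming the resulting state:

ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Compare desired versus actual state of the automatic tuning options.
SELECT name, desired_state_desc, actual_state_desc
FROM sys.database_automatic_tuning_options;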

Query Store is often mentioned in the context of performance monitoring because it stores historical data on query execution plans and performance metrics. While Query Store enables administrators to identify regressions, analyze trends, and review execution statistics, it does not automatically enforce any corrective actions. The tool is primarily diagnostic; an administrator would need to manually identify a bad plan and force a good plan if needed. This makes Query Store more of a supportive component rather than a fully automated solution for resolving regressions.

Intelligent Insights provides detailed performance diagnostics and recommendations for optimizing workloads. It analyzes the database’s workload and identifies potential performance issues, suggesting indexes, query modifications, or other tuning strategies. However, Intelligent Insights does not automatically correct any performance regressions; it relies on a human operator to review the recommendations and implement changes. Therefore, while useful for proactive optimization, it is not sufficient for automatic correction of regressions in real time.

Extended Events is a general-purpose event monitoring system in SQL Server and Azure SQL Database. It can capture extensive diagnostic information about queries, transactions, errors, and other database operations. Extended Events is highly flexible and valuable for deep troubleshooting or auditing scenarios. However, it does not include any mechanism to automatically resolve performance regressions or adjust query plans. It functions solely as an observability tool, providing the data administrators need to make decisions, but leaving the actual remediation to humans or other automated features.

The correct solution is Automatic Plan Correction because it uniquely combines monitoring and automated remediation of query performance regressions. Unlike Query Store, Intelligent Insights, or Extended Events, it actively intervenes when a regression is detected, restoring query performance without administrative effort. This ensures that applications experience minimal disruption, making it the ideal feature for environments where continuous performance stability is a priority.

Question 87 

You want to restrict access to rows in a table based on the user’s department. Which feature should you enable?

A) Row-Level Security
B) Dynamic Data Masking
C) Always Encrypted
D) Transparent Data Encryption

Answer:  A) Row-Level Security

Explanation:

Row-Level Security (RLS) enables fine-grained access control within a table by filtering rows based on user attributes. For example, using RLS, you can configure policies that allow employees in the Sales department to access only rows related to sales, while employees in HR see only HR-related data. This filtering happens at the database level and is transparent to the application, ensuring that users can never bypass the restriction by querying directly.
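
A minimal sketch of such a policy follows, assuming a hypothetical dbo.Orders table with a Department column and an application that places the caller's department into SESSION_CONTEXT when it opens a connection:

CREATE SCHEMA Security;
GO
-- Inline table-valued predicate: returns a row only when the row's department matches the session's department.
CREATE FUNCTION Security.fn_DepartmentPredicate (@Department AS NVARCHAR(50))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
       WHERE @Department = CAST(SESSION_CONTEXT(N'Department') AS NVARCHAR(50));
GO
-- Bind the predicate so every query against dbo.Orders is filtered transparently.
CREATE SECURITY POLICY Security.DepartmentFilter
    ADD FILTER PREDICATE Security.fn_DepartmentPredicate(Department) ON dbo.Orders
    WITH (STATE = ON);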

Dynamic Data Masking (DDM) is often confused with RLS but serves a different purpose. DDM hides sensitive column values by masking them when they are retrieved in queries. The underlying data remains unchanged, and users with sufficient privileges can still see the original values. DDM does not prevent access to rows; it only obscures data in query results. Thus, it cannot implement department-specific access policies.

Always Encrypted protects sensitive column data by encrypting it on the client side. This ensures that the database never sees plaintext data, providing strong protection for sensitive information such as social security numbers or credit card numbers. However, Always Encrypted does not control who can access specific rows; it is focused solely on protecting data confidentiality rather than enforcing access rules.

Transparent Data Encryption (TDE) is used to secure data at rest by encrypting database files on disk. TDE protects against unauthorized access to the underlying storage but does not impact query-level or row-level access. It cannot implement department-based filtering or any other row-level access restriction.

Row-Level Security is the correct choice because it directly implements dynamic access control based on user attributes. By creating security predicates or functions that filter rows according to the department or other attributes, RLS enforces fine-grained access policies that meet the requirement of restricting access at the row level.

Question 88 

You want to offload reporting queries to secondary replicas to avoid affecting primary database performance. Which feature should you use?

A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Hyperscale tier

Answer:  A) Read Scale-Out

Explanation:

Read Scale-Out is a feature available in the Business Critical tier of Azure SQL Database that allows read-only queries to be directed to secondary replicas. These replicas are synchronized copies of the primary database. By routing analytics or reporting queries to the secondary replicas, the primary database remains dedicated to transactional workloads, ensuring low latency and high performance for critical operations. This makes Read Scale-Out ideal for scenarios where reporting queries could otherwise impact the responsiveness of the main database.

Auto-Failover Groups provide automatic failover between primary and secondary databases in the event of an outage, ensuring high availability and disaster recovery. While they involve secondary replicas, these replicas are primarily intended for failover purposes, not for load distribution or offloading read queries. They do not inherently provide the capability to route read-only workloads away from the primary database, making them unsuitable for reporting offload scenarios.

Elastic Pool is designed to share compute and storage resources among multiple databases. It optimizes costs and manages unpredictable workloads across a group of databases. However, Elastic Pool does not provide secondary replicas or mechanisms to offload queries for reporting purposes. Its purpose is resource sharing rather than distributing read workloads.

The Hyperscale tier allows databases to scale out storage and compute independently, supporting very large datasets and rapid growth. Although it provides high scalability, it does not inherently provide secondary replicas for read-only workloads. Read Scale-Out is the feature specifically designed for offloading queries from the primary database to secondary replicas in Business Critical environments.

Read Scale-Out is the correct solution because it directly addresses the need to offload reporting queries without affecting primary database performance. It leverages secondary replicas for read workloads, ensuring transactional efficiency and system stability.

Question 89 

You need to reduce contention in tempdb for a high-concurrency Azure SQL Managed Instance. Which configuration should you modify?

A) Tempdb file count
B) Availability Zone
C) Service Endpoint
D) Geo-Restore settings

Answer:  A) Tempdb file count

Explanation:

Increasing the number of tempdb data files is the recommended approach to reduce contention in high-concurrency workloads. tempdb is a shared resource used for temporary tables, sorting, version stores, and other internal operations. When multiple sessions attempt to access tempdb simultaneously, contention can occur on allocation structures, leading to bottlenecks. When multiple tempdb files are configured, SQL Server distributes allocation requests across them, reducing contention and improving performance for concurrent workloads.
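
A sketch of checking and adjusting the file count is shown below; the count query is standard, while the ADD FILE statement is illustrative (file sizes are placeholders, and on Managed Instance physical file placement is handled by the service, so verify the exact options against current documentation):

-- How many tempdb data files exist today?
SELECT COUNT(*) AS data_file_count
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';

-- Add another data file; repeat until the file count matches the guidance for the core count.
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev5', SIZE = 16MB, FILEGROWTH = 256MB);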

Availability Zones enhance high availability and disaster recovery by distributing resources across physically isolated data centers. While critical for fault tolerance, Availability Zones do not influence tempdb contention or query performance. They provide resilience rather than workload optimization.

Service Endpoints control network access to Azure resources. They are unrelated to database internal operations or tempdb configuration. Adjusting Service Endpoints has no effect on tempdb allocation or concurrency behavior, so they cannot mitigate contention.

Geo-Restore settings allow restoring databases to different regions in case of disaster. While useful for recovery, Geo-Restore does not address performance or contention within tempdb. It is solely a backup and recovery feature, not a tuning mechanism for high-concurrency workloads.

Modifying the tempdb file count is the correct approach because it directly targets the internal allocation bottlenecks that arise under high concurrency. Properly configuring multiple data files ensures that SQL Server can handle simultaneous operations efficiently, mitigating tempdb contention and improving overall database performance.

Question 90 

You want to monitor anomalous access patterns and receive alerts for potential security threats in Azure SQL Database. Which feature should you enable?

A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing

Answer:  A) Threat Detection

Explanation:

Threat Detection in Azure SQL Database (now surfaced as Advanced Threat Protection within Microsoft Defender for SQL) continuously monitors for suspicious activities such as SQL injection attacks, anomalous login attempts, or unauthorized access from unusual locations. When such patterns are detected, it generates alerts and provides recommendations for remediation. This proactive monitoring helps administrators respond quickly to potential security incidents and strengthens the overall security posture of the database environment.

Query Store, while powerful for performance tracking, only captures query performance history and execution plans. It provides insight into workload trends but has no mechanism to detect or alert on security anomalies. Its purpose is performance diagnostics, not threat monitoring.

Automatic Plan Correction addresses query performance regressions by automatically reverting queries to previously good execution plans. While important for performance stability, it does not monitor security events or detect suspicious access patterns. It is unrelated to the detection of security threats.

SQL Auditing logs database activities and can be used for compliance purposes. It records who accessed which data and what operations were performed, creating a detailed audit trail. However, SQL Auditing does not proactively analyze behavior or generate alerts for anomalous patterns; it requires manual review or integration with monitoring systems for alerting.

Threat Detection is the correct solution because it actively monitors for unusual behaviors and provides immediate alerts to administrators. Unlike Query Store, Automatic Plan Correction, or SQL Auditing, Threat Detection is purpose-built to identify security threats and notify stakeholders, ensuring timely response to potential compromises.

Question 91 

You want to reduce contention in tempdb for a high-concurrency Azure SQL Managed Instance. Which configuration should you modify?

A) Tempdb file count
B) Availability Zone
C) Service Endpoint
D) Geo-Restore settings

Answer:  A) Tempdb file count

Explanation:

Tempdb file count directly affects how SQL Server manages temporary objects and internal structures in a high-concurrency environment. Each tempdb file can handle a portion of allocation requests, so having multiple files reduces contention on allocation pages such as PFS, GAM, and SGAM. In scenarios with many concurrent sessions, increasing the number of tempdb files ensures that temporary table creation, table variable allocations, sorting operations, and intermediate result sets are distributed evenly across files. This approach prevents bottlenecks that could lead to slower query execution and CPU spikes. The common guidance is to match the number of tempdb data files to the number of logical processor cores, up to eight, and to add more files only if allocation contention persists.

Availability Zones are designed to improve the high availability of Azure SQL Managed Instances by physically separating instances across multiple datacenters. While they protect against data center failures, they do not influence tempdb performance or contention. Using Availability Zones ensures business continuity during regional disruptions, but it cannot address the bottlenecks caused by concurrent access to tempdb.

Service Endpoints provide a way to secure network traffic between Azure resources, such as limiting access from specific subnets to the managed instance. They focus on network security and routing rather than internal database performance. Configuring service endpoints does not impact how tempdb handles concurrent operations or allocation contention, so they are irrelevant for this specific scenario.

Geo-Restore settings enable point-in-time restore of Azure SQL databases in different regions for disaster recovery purposes. While they are critical for business continuity planning and safeguarding against catastrophic failures, they do not affect tempdb file allocation or concurrency performance. Geo-Restore addresses recovery strategies rather than operational throughput or contention issues.

The correct approach is to adjust the tempdb file count. By increasing the number of tempdb data files, the workload is distributed more evenly, reducing allocation contention during high-concurrency operations. This configuration ensures faster response times for temporary object creation, sorting, joins, and other memory-intensive operations. It is a best practice in SQL Server and Azure SQL Managed Instance deployments with heavy concurrent usage, ensuring consistent performance under load.

Question 92 

You want to monitor Azure SQL Database for suspicious activity and generate alerts for potential security threats. Which feature should you enable?

A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing

Answer:  A) Threat Detection

Explanation:

Threat Detection is a security-focused feature in Azure SQL Database that continuously monitors database activity to identify anomalous behavior that may indicate potential attacks. It detects patterns such as SQL injection attempts, unexpected login attempts, and privilege escalation activities. Once suspicious activity is detected, alerts are generated in real time and can be routed through email notifications, Azure Security Center, or other monitoring solutions. This proactive alerting helps administrators respond quickly before threats escalate into serious security breaches.

Query Store is primarily a performance monitoring tool that tracks query execution plans and runtime statistics over time. It is invaluable for identifying query regressions and tuning performance, but it does not analyze database activity for security anomalies. Query Store can help optimize performance but cannot generate alerts for malicious behavior.

Automatic Plan Correction focuses on maintaining consistent query performance by detecting regressions in execution plans and automatically forcing previously known good plans. While this improves performance stability, it is unrelated to security monitoring and cannot detect or alert administrators about attacks or suspicious activity.

SQL Auditing tracks database events, such as logins, schema changes, and data modifications, storing these records for compliance and forensic purposes. While auditing provides a detailed log of activity, it does not proactively detect anomalies or trigger alerts for unusual behavior. Analysts must manually review logs to identify potential threats.

The correct choice is Threat Detection. It actively monitors for suspicious activities and provides real-time alerts, which is crucial for proactive database security. Unlike auditing or performance tools, it focuses on identifying abnormal patterns that may signal an attack, enabling administrators to take immediate corrective action and prevent compromise.

Question 93 

You need to provide multiple readable secondary replicas for offloading reporting workloads from the primary database. Which feature should you enable?

A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect

Answer:  A) Read Scale-Out

Explanation:

Read Scale-Out allows secondary replicas in Azure SQL Database to handle read-only workloads, such as reporting or analytical queries. By offloading read-heavy queries to these replicas, the primary database can focus on transactional workloads, ensuring optimal performance and responsiveness. This feature is especially beneficial in environments where analytics and reporting would otherwise compete with operational transactions for resources.

Auto-Failover Groups provide high availability and disaster recovery by replicating databases to secondary regions. They allow seamless failover if the primary database becomes unavailable, but they do not inherently offload read workloads. The secondary replicas in a failover group are primarily for high availability and not optimized for continuous read reporting.

Elastic Pool allows multiple databases to share compute and storage resources to optimize cost and efficiency for variable workloads. While it ensures resource management across databases, it does not create secondary replicas or specifically enable offloading of read operations from the primary database.

Transparent Network Redirect manages client reconnections after failover events, ensuring applications reconnect seamlessly. It does not create secondary replicas or enable read-only workload distribution, so it cannot achieve the performance benefits that Read Scale-Out provides.

Read Scale-Out is the correct choice because it specifically enables multiple readable secondary replicas, distributing read-only workloads away from the primary database. This improves reporting performance and ensures the primary database remains responsive for transactional operations.

Question 94 

You want to store auditing logs in a secure, centralized location for compliance purposes with long-term retention. Which destination should you use?

A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI

Answer:  A) Azure Storage account

Explanation:

Azure Storage accounts provide durable, secure, and cost-effective storage for audit logs. They support long-term retention policies, encryption at rest, and access control mechanisms necessary for regulatory compliance. Centralizing logs in storage ensures they remain accessible for auditing, investigation, and compliance verification, meeting organizational and legal requirements.

Log Analytics workspace allows collection, analysis, and querying of telemetry data from multiple sources. While useful for monitoring and diagnostic purposes, it may not support extended retention durations required for compliance or provide a cost-efficient mechanism for long-term log storage.

Event Hubs is designed for streaming telemetry and event data to downstream services for processing or analytics. It is ideal for high-volume, real-time data ingestion but does not provide long-term storage with retention guarantees suitable for compliance audit logs.

Power BI is a visualization tool for business intelligence and reporting. While it can display summarized audit data, it is not a storage solution and does not meet regulatory or security requirements for maintaining raw audit logs.

The correct choice is Azure Storage account, as it ensures secure, centralized, and durable log storage with configurable retention policies. This approach meets compliance needs and allows organizations to maintain a historical record of database activity for auditing purposes.

Question 95 

You need to monitor query performance and preserve historical execution plans to detect regressions over time. Which feature should you enable?

A) Query Store
B) Extended Events
C) SQL Auditing
D) Intelligent Insights

Answer:  A) Query Store

Explanation:

Query Store continuously collects and stores query execution statistics and execution plans, preserving a historical record that administrators can analyze over time. This enables identification of performance regressions, trends, and changes in execution behavior, which is critical for maintaining performance stability in production environments. Query Store also provides tools to force previous plans to prevent regressions.
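
A minimal sketch, using hypothetical query and plan IDs: enable Query Store (it is on by default in Azure SQL Database), review captured plans and runtime statistics, and pin a known-good plan if a regression is found:

ALTER DATABASE CURRENT
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE, QUERY_CAPTURE_MODE = AUTO);

-- Inspect per-plan runtime statistics to spot a plan that suddenly performs worse.
SELECT q.query_id, p.plan_id, rs.avg_duration, rs.last_execution_time
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id;

-- Force the previously good plan for that query (IDs below are hypothetical).
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;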

Extended Events provide flexible and powerful event collection for diagnostic purposes. While capable of capturing detailed runtime and performance data, it requires manual configuration, querying, and analysis to identify regressions. It does not automatically preserve historical query plans in a structured way for ongoing regression detection.

SQL Auditing records database activity for compliance and security purposes. It tracks actions such as login attempts, schema changes, and data modifications, but it does not monitor query performance or maintain execution plan history. It is unrelated to performance analysis.

Intelligent Insights analyzes workload patterns and provides recommendations for performance optimization. While useful for proactive tuning and guidance, it does not retain historical execution plan data or directly allow regression detection. It is advisory rather than diagnostic.

Query Store is the correct choice because it systematically captures and maintains query execution history, enabling administrators to monitor trends and identify regressions. By preserving both runtime statistics and execution plans, it ensures the ability to detect performance deterioration over time and take corrective action efficiently.

Question 96 

You need to detect and automatically remediate query plan regressions in Azure SQL Database. Which feature should you use?

A) Automatic Plan Correction
B) Query Store
C) Intelligent Insights
D) Extended Events

Answer:  A) Automatic Plan Correction

Explanation:

Automatic Plan Correction is a feature in Azure SQL Database designed to detect when query performance regresses due to changes in execution plans. It continuously monitors query workloads and identifies those that have deviated from their historically optimal performance. When a regression is detected, it can automatically force the previously known good execution plan to ensure consistent query performance. This proactive approach eliminates the need for administrators to manually identify and remediate performance regressions, making it highly suitable for environments where consistent performance is critical.
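
When the feature is active, the regressions it has detected and the corrective action it generates can be inspected from a tuning-recommendations DMV; the sketch below assumes the JSON property paths documented for sys.dm_db_tuning_recommendations:

-- Each row describes a detected regression and the script (typically sp_query_store_force_plan) used to fix it.
SELECT name,
       reason,
       JSON_VALUE(state, '$.currentValue') AS current_state,
       JSON_VALUE(details, '$.implementationDetails.script') AS corrective_script
FROM sys.dm_db_tuning_recommendations;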

Query Store is often associated with Automatic Plan Correction because it stores historical query performance data, including execution plans, runtime statistics, and plan choices. It enables administrators to analyze performance trends and investigate regressions. However, Query Store itself does not automatically correct a plan; it is purely a diagnostic and tracking tool. Administrators must manually select a previous execution plan to force if performance has regressed. While crucial for monitoring, Query Store alone cannot meet the requirement of automatic remediation.

Intelligent Insights is another monitoring feature that focuses on diagnostics. It analyzes the database workload, identifies potential performance bottlenecks, and generates recommendations for improving performance. Although it provides actionable guidance, it does not automatically apply these changes. Intelligent Insights is a valuable tool for administrators to understand the reasons behind performance issues, but its lack of automation makes it unsuitable when the goal is automatic detection and correction of query plan regressions.

Extended Events is a lightweight performance monitoring framework that allows capturing detailed events, such as query execution, deadlocks, and resource usage. It is highly flexible for auditing and diagnostics but does not provide any automatic corrective capabilities. While Extended Events can help identify the occurrence of performance regressions, it cannot fix or enforce execution plans, so it does not satisfy the automatic remediation requirement.

Automatic Plan Correction is the only option that combines detection, analysis, and automatic remediation in a single feature. It leverages Query Store to identify regressions and then applies corrective measures without manual intervention, ensuring stable and predictable query performance over time.

Question 97 

You want to encrypt sensitive columns and allow applications to perform queries without exposing plaintext to administrators. Which feature should you enable?

A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security

Answer:  A) Always Encrypted

Explanation:

Always Encrypted is designed to protect sensitive data at the column level in Azure SQL Database and SQL Server. With Always Encrypted, data is encrypted on the client side and stored in its encrypted form in the database. The SQL Server engine and administrators never see the plaintext values. Applications can perform equality comparisons against deterministically encrypted columns without the data ever being decrypted on the server; richer operations such as range queries and computations require Always Encrypted with secure enclaves. This ensures strong protection against unauthorized access while preserving the query functionality applications need.

Transparent Data Encryption (TDE) encrypts the entire database at rest, including backups and storage files, ensuring that unauthorized users cannot read the data directly from disk. However, during query execution, TDE decrypts the data on the server side. As a result, administrators or anyone with access to the database engine can see plaintext values. TDE protects against disk-level threats but does not provide column-level protection during query execution.

Dynamic Data Masking (DDM) is used to obfuscate sensitive data in query results. It restricts what a user can see by showing masked values instead of actual data based on their role or permission. DDM is useful for limiting exposure in reporting scenarios but does not encrypt the data stored in the database. It is a masking technique, not a true encryption mechanism, and it does not prevent administrators from accessing the plaintext data.

Row-Level Security (RLS) enforces access control at the row level based on user attributes. It allows fine-grained access management so that users see only the rows they are authorized to access. RLS does not encrypt or protect data; it merely filters rows based on a predicate function. Sensitive columns can still be read in plaintext if users have access to the row.

Always Encrypted is the only solution that encrypts sensitive data end-to-end while allowing applications to query it safely. It ensures compliance and data protection without exposing plaintext to administrators, making it the correct choice.

Question 98 

You want to maintain backup copies for several years to comply with regulatory retention policies. Which feature should you enable?

A) Long-Term Backup Retention
B) Geo-Redundant Backup Storage
C) Auto-Failover Groups
D) Transparent Data Encryption

Answer:  A) Long-Term Backup Retention

Explanation:

Long-Term Backup Retention (LTR) allows Azure SQL Database to store full database backups for extended periods, ranging from months to several years. It is designed to support regulatory and compliance requirements for long-term data retention. LTR keeps automatically created full backups on weekly, monthly, and yearly schedules, stores them securely in Azure Blob Storage, and lets administrators restore any retained backup as a new database at any point within the retention period. Administrators define and manage the retention policy to ensure compliance with specific regulatory standards.

Geo-Redundant Backup Storage (GRS) ensures that backups are replicated to a secondary region, providing disaster recovery in case of regional outages. While GRS increases resilience, it does not extend the retention period beyond the default retention policies. It is primarily focused on availability rather than compliance with multi-year retention mandates.

Auto-Failover Groups provide high availability by enabling automatic failover of databases in case of an outage. They allow read and write operations to continue with minimal disruption during failover events. While critical for business continuity, failover groups do not address long-term backup retention or compliance requirements.

Transparent Data Encryption protects data at rest in the database and backups but does not provide the ability to store backups for extended periods beyond standard retention policies. TDE secures data but does not address the regulatory requirement of multi-year backup retention.

Long-Term Backup Retention is the only feature that directly addresses the need for storing database backups for several years, making it the appropriate choice for compliance with retention policies.

Question 99 

You want to reduce compute costs for a database that is idle most of the day while ensuring automatic scaling when needed. Which deployment model should you select?

A) Serverless compute tier
B) Hyperscale tier
C) Business Critical tier
D) Elastic Pool

Answer:  A) Serverless compute tier

Explanation:

The Serverless compute tier is designed for workloads with intermittent or unpredictable usage. It automatically scales compute resources up or down based on workload demand. When the database is idle, Serverless can pause compute resources, significantly reducing costs. This tier is ideal for development, testing, or lightly used applications where the database is not consistently active but needs to respond quickly when queries occur.

Hyperscale tier supports massive storage capacity and allows independent scaling of compute and storage, making it suitable for very large databases. However, it does not pause idle compute, so costs are incurred even when workloads are minimal. Hyperscale is optimized for performance and scalability rather than cost savings for idle databases.

Business Critical tier provides fixed compute and high availability, with low-latency storage and redundancy. It ensures consistent high performance for transactional workloads but does not automatically scale or pause during periods of inactivity. Consequently, it does not offer the cost-saving benefits of Serverless compute for low-usage scenarios.

Elastic Pool allows multiple databases to share resources, which can optimize costs when workloads vary across databases. However, it does not automatically scale individual databases or pause idle compute. While useful for managing resource utilization across multiple databases, it is not suitable for a single database with sporadic workloads.

Serverless compute tier is the only option that combines automatic scaling with the ability to pause during idle periods, ensuring both responsiveness and cost efficiency.

Question 100 

You want to offload reporting queries from a Business Critical Azure SQL Database without impacting write performance on the primary. Which feature should you enable?

A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect

Answer:  A) Read Scale-Out

Explanation:

Read Scale-Out allows the Business Critical tier to provide one or more read-only secondary replicas. These replicas can handle read-heavy workloads, such as reporting or analytics queries, without affecting the primary database’s write performance. This ensures that reporting does not degrade transactional workloads, providing both high availability and optimal performance for mixed-use environments.

Auto-Failover Groups provide high availability and disaster recovery by allowing automatic failover between primary and secondary databases. They ensure minimal downtime during outages but do not distribute read workloads across replicas. Failover groups are about reliability and uptime rather than offloading reporting queries.

Elastic Pool allows multiple databases to share compute resources, which helps manage fluctuating workloads and reduce costs. It does not provide additional read replicas for offloading reporting queries and does not improve read query performance for a single database.

Transparent Network Redirect is a mechanism to automatically reconnect clients after a failover, ensuring minimal disruption. While useful in maintaining connectivity, it does not offload read queries or improve performance for reporting workloads.

Read Scale-Out is the only feature specifically designed to handle read-only workloads on secondary replicas, making it the correct choice for offloading reporting queries without impacting the primary database’s write operations.
