Microsoft DP-300 Administering Microsoft Azure SQL Solutions Exam Dumps and Practice Test Questions Set 10 Q181-200


Question 181 

You want to automatically scale compute resources based on workload and pause the database when idle to reduce costs. Which deployment model should you select?

A) Serverless compute tier
B) Hyperscale tier
C) Business Critical tier
D) Elastic Pool

Answer:  A) Serverless compute tier

Explanation:

The serverless compute tier is designed specifically for scenarios where workloads are variable or intermittent. It automatically scales compute resources based on current workload demand, which ensures that your database can handle sudden spikes in activity without over-provisioning resources during quieter periods. Additionally, serverless compute can pause the database when it is idle, eliminating compute charges during periods of inactivity. This feature is particularly beneficial for development or reporting databases that are not constantly accessed, allowing significant cost savings while still ensuring performance when needed.
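The cost model behind this can be sketched in a few lines. The function below is an illustrative model, not official pricing: serverless bills compute per vCore-second actually consumed, and paused intervals incur no compute charge at all (the rate used is a made-up placeholder).

```python
# Illustrative model of serverless billing: compute is charged per
# vCore-second used; paused periods simply contribute nothing.
# The rate below is a placeholder, not a real Azure price.

def serverless_compute_cost(usage_intervals, rate_per_vcore_second):
    """usage_intervals: list of (vcores_used, seconds) tuples.
    Idle/paused time does not appear in the list, so it costs nothing."""
    return sum(vcores * seconds * rate_per_vcore_second
               for vcores, seconds in usage_intervals)

# A day with two busy hours at 4 vCores and the rest paused:
busy = [(4, 2 * 3600)]
print(serverless_compute_cost(busy, 0.000145))
```

With a provisioned tier the same day would be billed for all 24 hours regardless of activity, which is the cost gap serverless closes for intermittent workloads.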

The Hyperscale tier, by contrast, is optimized for applications that require very large storage and the ability to independently scale compute and storage resources. While it can handle massive databases and provide high throughput, it does not include the ability to pause compute resources during idle periods. Hyperscale is better suited for workloads that require continuous availability and extremely high storage capacity rather than for cost optimization during periods of low activity.

The Business Critical tier focuses on providing high performance and low-latency transaction processing. It offers fixed compute resources that are not dynamically scalable and does not support automatic pausing of the database. This tier is ideal for mission-critical applications that require high availability, high transaction throughput, and premium storage redundancy, but it is not designed to minimize costs for intermittent workloads because it maintains allocated compute resources at all times.

Elastic Pools allow multiple databases to share a pool of compute resources, enabling efficient management of aggregate resource utilization across many databases. While Elastic Pools are useful for balancing workloads across multiple databases, they do not provide the ability to pause individual databases when idle, nor do they automatically scale compute resources based on individual database activity. As a result, although Elastic Pools provide some cost efficiency through resource sharing, they do not meet the requirement of automatically scaling and pausing the database for intermittent workloads.

The serverless compute tier is the ideal choice because it directly addresses both cost efficiency and workload variability. It combines dynamic scaling with the ability to pause idle databases, making it the most appropriate deployment model for scenarios where the database is idle most of the time but still needs to handle occasional spikes efficiently. It balances performance and cost optimally, ensuring that you only pay for compute resources when they are actively used.

Question 182 

You want to offload read-only reporting queries from a primary Business Critical database without affecting write operations. Which feature should you enable?

A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect

Answer:  A) Read Scale-Out

Explanation:

Read Scale-Out enables offloading of read-only workloads to secondary replicas within a Business Critical Azure SQL Database. By redirecting read-heavy queries, such as reporting and analytics tasks, to these replicas, the primary database is freed from the additional load, allowing write operations to proceed without performance degradation. This feature is particularly useful for applications where reporting and transactional processing occur simultaneously, ensuring that user-facing applications remain responsive even during heavy reporting periods.
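In practice, routing to a readable secondary is driven by the client connection string: sessions that set `ApplicationIntent=ReadOnly` are directed to a secondary replica. The sketch below builds such a string (server and database names are hypothetical placeholders).

```python
# Sketch: Read Scale-Out routing is opted into per connection via the
# ApplicationIntent keyword. Server/database names are placeholders.

def build_connection_string(server, database, read_only=False):
    parts = [
        f"Server=tcp:{server},1433",
        f"Database={database}",
        "Encrypt=yes",
    ]
    # ApplicationIntent=ReadOnly asks the gateway to route this session
    # to a readable secondary replica instead of the primary.
    if read_only:
        parts.append("ApplicationIntent=ReadOnly")
    return ";".join(parts)

print(build_connection_string("myserver.database.windows.net", "SalesDb",
                              read_only=True))
```

Reporting tools use the read-only string while the transactional application keeps the default, so the two workload types land on different replicas.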

Auto-Failover Groups provide high availability and disaster recovery by enabling automatic failover across regions. While this ensures business continuity in the event of a regional outage, it does not specifically provide the ability to offload read queries from the primary database. Its focus is on availability rather than performance optimization for read-intensive workloads, so it would not meet the specific requirement of reducing the primary database’s read load.

Elastic Pools are designed to allow multiple databases to share a pool of compute resources, which can help with cost management and resource efficiency across multiple databases. However, Elastic Pools do not create secondary replicas to handle read queries, nor do they improve read performance for an individual Business Critical database. They are useful for managing multiple small to medium databases but do not address the need for offloading read-only queries from a single high-performance database.

Transparent Network Redirect is a feature that helps client applications reconnect to the correct database after a failover event. It does not provide functionality for query offloading or secondary read replicas. Its primary use is to maintain connection consistency during failover scenarios, not to optimize query performance or reduce load on the primary database.

Read Scale-Out is the most appropriate feature for offloading read workloads because it allows reporting and read-only queries to be executed on secondary replicas, preserving the performance of the primary database for write operations. By leveraging this feature, organizations can ensure efficient resource utilization and maintain consistent performance for both transactional and reporting workloads, which makes it the correct choice for this scenario.

Question 183 

You want to encrypt sensitive columns so that client applications can query them without exposing plaintext to administrators. Which feature should you implement?

A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security

Answer:  A) Always Encrypted

Explanation: 

Always Encrypted is designed to protect sensitive data at the column level while allowing applications to query the data without revealing it in plaintext to administrators or the database engine. Encryption and decryption are performed on the client side, which ensures that sensitive data, such as social security numbers or credit card information, remains encrypted in transit and at rest. This allows applications to perform computations, comparisons, and searches on encrypted data while keeping it hidden from unauthorized parties.
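Because the driver does the cryptography, the application's only change is often a connection-string opt-in. The sketch below uses the ADO.NET keyword `Column Encryption Setting=Enabled`; other drivers use their own spelling, and the server/database names are placeholders.

```python
# Sketch of the client-side opt-in: with Always Encrypted the *driver*
# encrypts parameters and decrypts results, so the app enables it in
# the connection string. Keyword shown is the ADO.NET form.

def always_encrypted_conn_str(server, database):
    return ";".join([
        f"Server=tcp:{server},1433",
        f"Database={database}",
        # Tells the driver to fetch column encryption keys and handle
        # encryption/decryption transparently on the client.
        "Column Encryption Setting=Enabled",
    ])

print(always_encrypted_conn_str("myserver.database.windows.net", "HRDb"))
```

The server only ever receives and stores ciphertext for protected columns; the column master key stays in a client-accessible store such as a certificate store or key vault.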

Transparent Data Encryption (TDE) encrypts the database at rest, securing the physical files on disk. While it is effective at protecting data from unauthorized access at the storage level, TDE does not prevent administrators or the database engine from accessing plaintext during query execution. As such, TDE alone does not provide the level of column-specific security required when administrators must be restricted from seeing sensitive values.

Dynamic Data Masking (DDM) obfuscates data in query results based on user roles. For example, a user might see only the last four digits of a credit card number. However, the underlying data in the database remains in plaintext, and privileged users can still access the full values. DDM is primarily a presentation-layer control, not a true encryption solution.
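The partial-mask behavior is easy to illustrate. This toy function mimics what an unprivileged session would see, while the stored value remains plaintext on the server, which is exactly the limitation noted above.

```python
# Illustration of DDM's presentation-layer masking: unprivileged users
# see a masked value; the underlying stored value is unchanged.

def partial_mask(value, visible_suffix=4, pad_char="X"):
    masked_len = max(len(value) - visible_suffix, 0)
    return pad_char * masked_len + value[-visible_suffix:]

print(partial_mask("4111111111111111"))  # XXXXXXXXXXXX1111
```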

Row-Level Security (RLS) controls access to rows in a table based on user attributes, such as department or role. It is effective for enforcing access restrictions but does not encrypt data or protect it from being viewed in plaintext. RLS ensures proper access policies but cannot secure sensitive data at the column level.

Always Encrypted is the correct solution because it combines strong encryption with client-side key management, ensuring that sensitive columns remain protected from administrators, auditors, or anyone without access to the keys. It allows applications to operate normally on encrypted data without ever exposing it in plaintext, making it the ideal feature for highly sensitive data protection.

Question 184 

You need to store audit logs securely in Azure with long-term retention for regulatory compliance. Which destination should you select?

A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI

Answer:  A) Azure Storage account

Explanation:

Azure Storage accounts provide a secure, durable, and scalable repository for audit logs. They allow long-term retention of data with configurable retention policies, supporting compliance with regulatory requirements such as financial or healthcare regulations. Storage accounts also provide features like encryption at rest, access control, and replication options to ensure that audit logs remain secure and durable over extended periods, often spanning several years.
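The retention side of this can be modeled as a simple date check, similar in spirit to what a storage lifecycle policy enforces. The seven-year horizon below is an example figure, not a regulatory constant.

```python
# Sketch of a retention check like one a storage lifecycle policy would
# enforce. The seven-year horizon is an illustrative example.
from datetime import date, timedelta

RETENTION_DAYS = 7 * 365  # e.g. a seven-year compliance horizon

def eligible_for_deletion(created, today):
    """A log blob may only be deleted once its retention period expires."""
    return (today - created) > timedelta(days=RETENTION_DAYS)

print(eligible_for_deletion(date(2015, 1, 1), date(2025, 1, 1)))  # True
```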

Log Analytics workspaces are optimized for real-time analysis, monitoring, and querying of logs. While they provide powerful search and analytics capabilities, they are not intended for multi-year storage and may not meet long-term regulatory retention requirements. Log Analytics is better suited for operational monitoring rather than archival storage.

Event Hubs is a streaming data platform used to ingest and process large volumes of events in real time. While it is ideal for telemetry and logging pipelines, Event Hubs is not a persistent storage solution and does not meet requirements for long-term archival of audit logs.

Power BI is a visualization and reporting platform. Although it can connect to log sources and display data, it cannot serve as a storage solution for compliance purposes. Power BI is designed for analytics rather than secure retention of sensitive audit data.

Azure Storage is the correct choice because it offers secure, long-term storage with retention policies and encryption. It fulfills regulatory requirements while providing a central repository for audit logs that can be accessed and maintained over years, ensuring compliance and data durability.

Question 185 

You want to monitor query performance and preserve historical execution plans to detect regressions over time. Which feature should you enable?

A) Query Store
B) Extended Events
C) SQL Auditing
D) Intelligent Insights

Answer:  A) Query Store

Explanation:

Query Store is specifically designed to capture detailed execution statistics and execution plans over time. It stores historical query performance data, allowing administrators to analyze trends, compare plan performance, and detect regressions. By retaining historical plans, Query Store makes it possible to identify changes in query performance, pinpoint problematic queries, and understand the impact of database modifications, such as indexing changes or schema updates. This capability ensures that queries remain optimized over time and provides a foundation for proactive performance management.
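The kind of analysis Query Store enables can be modeled in miniature: keep per-plan runtime history for each query, then compare average durations across plans to spot a regression. The data values below are invented.

```python
# Toy model of Query Store-style analysis: record per-plan durations,
# then flag a plan whose average runtime regressed. Data is invented.
from collections import defaultdict
from statistics import mean

history = defaultdict(list)  # (query_id, plan_id) -> durations in ms

def record(query_id, plan_id, duration_ms):
    history[(query_id, plan_id)].append(duration_ms)

def regressed(query_id, old_plan, new_plan, factor=2.0):
    """True if the new plan averages `factor`x slower than the old one."""
    return mean(history[(query_id, new_plan)]) > \
           factor * mean(history[(query_id, old_plan)])

for d in (12, 15, 11):
    record(42, "plan_A", d)
for d in (40, 55, 47):
    record(42, "plan_B", d)
print(regressed(42, "plan_A", "plan_B"))  # True
```

In the real feature this history lives in the database itself (the Query Store catalog views), surviving restarts and available to both administrators and automatic tuning.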

Extended Events is a versatile event-handling system that allows capturing diagnostic data about server and database activities. While it can collect information about query execution, it does not automatically maintain historical execution plans for performance analysis. Extended Events is more appropriate for ad hoc troubleshooting and detailed monitoring of specific events rather than ongoing performance regression tracking, which makes it less suitable for continuous performance trend analysis.

SQL Auditing provides a mechanism for tracking database actions, such as changes to schema or data access, to satisfy compliance requirements. Auditing focuses on capturing who performed which operation and when but does not store query execution statistics or preserve historical execution plans. Its purpose is security and compliance rather than query performance monitoring, so it cannot provide the insights needed to detect regressions over time.

Intelligent Insights analyzes performance data and offers recommendations for optimizing workloads. While it can identify bottlenecks or performance anomalies and provide guidance for corrective action, it does not retain execution plan history in the same way that Query Store does. It is more advisory than historical, meaning it supports optimization but cannot fully replace the ability to track plan changes over time.

Query Store is the correct solution because it directly addresses the need to monitor query performance and preserve execution plans for historical analysis. By capturing detailed statistics and plan information, it allows administrators to detect regressions, understand root causes, and take corrective actions. Its ability to maintain a complete historical record makes it an indispensable tool for proactive performance management in Azure SQL Database.

Question 186 

You want to detect and automatically remediate query plan regressions in Azure SQL Database. Which feature should you use?

A) Automatic Plan Correction
B) Query Store
C) Intelligent Insights
D) Extended Events

Answer:  A) Automatic Plan Correction

Explanation:

Automatic Plan Correction identifies queries experiencing performance regressions caused by changes in execution plans. When a regression is detected, it automatically enforces a previously known good plan to restore performance, ensuring consistency without manual intervention. This automation is especially valuable in dynamic workloads or continuously evolving databases, as it reduces the administrative overhead required to monitor query performance and manually fix problematic plans.
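The self-healing decision itself is simple to sketch: when the current plan's average duration regresses past a threshold relative to the last plan known to perform well, fall back to that plan. The thresholds and plan names below are illustrative, not the engine's actual heuristics.

```python
# Sketch of the remediation decision Automatic Plan Correction makes:
# if the current plan regresses past a threshold, force the last-known-
# good plan. Threshold and data are illustrative only.

def plan_to_use(current_plan, last_good_plan, avg_ms_by_plan, factor=2.0):
    if avg_ms_by_plan[current_plan] > factor * avg_ms_by_plan[last_good_plan]:
        return last_good_plan  # force the known-good plan
    return current_plan

averages = {"plan_B": 47.0, "plan_A": 13.0}
print(plan_to_use("plan_B", "plan_A", averages))  # plan_A
```

In Azure SQL Database this behavior is enabled through the FORCE_LAST_GOOD_PLAN automatic tuning option, and the engine continues to verify that the forced plan actually performs better.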

Query Store captures query execution statistics and plans over time, providing historical insight into performance trends. While it helps identify plan regressions and provides the data necessary to manually enforce a good plan, Query Store alone does not automatically correct plan regressions. Its primary role is historical monitoring and analysis, not automated remediation, which limits its utility in scenarios requiring self-healing behavior.

Intelligent Insights monitors database performance and provides recommendations, such as suggesting index improvements or query optimizations. However, it does not automatically enforce corrections for regressions. It relies on administrators to review insights and take action, which introduces manual intervention and potential delays in addressing plan regressions.

Extended Events is a low-level monitoring tool used to collect diagnostic information about server and database activities. While it can provide detailed visibility into query execution events, it does not automatically remediate performance regressions. Its focus is on capturing data rather than enforcing corrective actions, which makes it unsuitable for automated plan correction.

Automatic Plan Correction is the correct choice because it combines monitoring and self-healing capabilities. By automatically applying known good plans when regressions occur, it ensures stable performance without requiring manual intervention. This makes it ideal for environments where maintaining query performance is critical, and administrative resources are limited.

Question 187 

You want to enforce row-level access restrictions on a table based on department or user roles. Which feature should you enable?

A) Row-Level Security
B) Dynamic Data Masking
C) Always Encrypted
D) Transparent Data Encryption

Answer:  A) Row-Level Security

Explanation:

Row-Level Security (RLS) allows fine-grained access control by applying predicates to filter rows dynamically based on user attributes or roles. This ensures that users can only access rows they are authorized to see, supporting security and compliance policies effectively. RLS policies are defined at the table level and enforced automatically during query execution, making it seamless for applications while restricting unauthorized access.
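Conceptually, an RLS filter predicate behaves like a row filter applied automatically to every query. The toy version below shows the effect; the table and departments are invented.

```python
# Toy version of an RLS filter predicate: a row is visible only when its
# department matches the querying user's department. Data is invented.

rows = [
    {"id": 1, "department": "Sales", "amount": 100},
    {"id": 2, "department": "HR", "amount": 250},
    {"id": 3, "department": "Sales", "amount": 75},
]

def visible_rows(user_department):
    # Mirrors a filter predicate the engine applies to every query
    # against the protected table.
    return [r for r in rows if r["department"] == user_department]

print([r["id"] for r in visible_rows("Sales")])  # [1, 3]
```

In SQL Server and Azure SQL the predicate is an inline table-valued function bound to the table by a security policy, so applications need no query changes.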

Dynamic Data Masking hides sensitive data in query results by providing obfuscated views to certain users. While it prevents unauthorized viewing of column values, it does not restrict access to specific rows. Users could still potentially access restricted rows if they have query privileges, so DDM cannot enforce true row-level access control.

Always Encrypted protects sensitive column data by encrypting it on the client side, ensuring that the server never sees plaintext values. It focuses on protecting sensitive information but does not manage access at the row level. It cannot restrict which rows a user can query, making it unsuitable for enforcing access based on roles or departments.

Transparent Data Encryption secures the entire database at rest, protecting data files from unauthorized access. While it ensures encryption of data on disk, it does not manage row-level access or query filtering. TDE is a storage-level security measure, not an access-control feature.

Row-Level Security is the correct choice because it directly enforces access restrictions on a per-row basis according to user roles or attributes. This allows organizations to control access to sensitive data at a granular level while maintaining seamless application performance and security compliance.

Question 188 

You want to monitor anomalous access patterns and receive proactive alerts for potential security threats. Which feature should you enable?

A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing

Answer:  A) Threat Detection

Explanation:

Threat Detection continuously monitors database activity for unusual patterns such as SQL injection attempts, abnormal login behavior, or unauthorized access. It generates alerts for suspicious activity, enabling administrators to respond proactively to potential security incidents. By analyzing behavior over time and detecting deviations from normal activity, Threat Detection helps prevent security breaches before they cause significant damage.
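A toy anomaly check conveys the idea: alert when a login arrives from an address never before seen for that account, or when failed attempts spike. The thresholds, accounts, and addresses below are invented, and the real service uses far richer behavioral models.

```python
# Toy anomaly check in the spirit of Threat Detection: flag unfamiliar
# source addresses and failed-login spikes. Data/thresholds invented.

known_ips = {"alice": {"10.0.0.5"}}

def login_alerts(user, ip, failed_attempts, fail_threshold=5):
    alerts = []
    if ip not in known_ips.get(user, set()):
        alerts.append("unfamiliar source address")
    if failed_attempts >= fail_threshold:
        alerts.append("possible brute-force attempt")
    return alerts

print(login_alerts("alice", "203.0.113.9", failed_attempts=8))
```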

Query Store captures historical query execution data for performance analysis, but it does not provide monitoring of anomalous access or generate security alerts. Its focus is on query performance, not security monitoring, making it inadequate for detecting potential threats.

Automatic Plan Correction addresses query performance regressions by applying known good plans automatically. While it ensures stability and consistency in execution plans, it does not monitor access patterns or alert administrators about suspicious behavior. Its domain is performance management, not security.

SQL Auditing records database activity for compliance purposes, tracking who did what and when. While auditing provides a historical log of activities, it does not actively detect anomalies or provide real-time alerts. It is reactive rather than proactive and relies on manual review to identify suspicious activity.

Threat Detection is the correct choice because it provides proactive monitoring and alerting for security threats. By analyzing database behavior, it enables organizations to respond quickly to anomalies and potential attacks, helping maintain a secure environment.

Question 189 

You need to maintain database backups for multiple years to comply with regulatory retention policies. Which feature should you enable?

A) Long-Term Backup Retention
B) Geo-Redundant Backup Storage
C) Auto-Failover Groups
D) Transparent Data Encryption

Answer:  A) Long-Term Backup Retention

Explanation:

Long-Term Backup Retention (LTR) allows storing full database backups in Azure Storage for multiple years, ensuring compliance with regulatory requirements. LTR supports configurable retention policies and enables organizations to retain backups for extended periods while still being able to restore them when needed. This feature is essential for industries such as finance, healthcare, and government, where regulatory compliance mandates the long-term retention of critical data.
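LTR policies are expressed with weekly/monthly/yearly knobs; the sketch below models a simplified version of that selection logic (Azure's actual W/M/Y semantics differ in detail, and the window lengths here are examples).

```python
# Simplified sketch of an LTR-style policy: keep all backups for W
# weeks, early-month backups for M months, and January backups for Y
# years. Real Azure LTR W/M/Y semantics are more precise than this.
from datetime import date, timedelta

def keep_backup(backup_date, today, weeks=8, months=12, years=7):
    age = today - backup_date
    if age <= timedelta(weeks=weeks):
        return True                                   # weekly window
    if backup_date.day <= 7 and age <= timedelta(days=31 * months):
        return True                                   # monthly window (approx.)
    return (backup_date.month == 1 and backup_date.day <= 7
            and age <= timedelta(days=366 * years))   # yearly window (approx.)

print(keep_backup(date(2020, 1, 5), date(2025, 6, 1)))  # True
```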

Geo-Redundant Backup Storage ensures that backups are replicated across regions, providing disaster recovery and resiliency. While this improves availability and protects against regional outages, it does not address regulatory retention requirements. Geo-redundant backup storage focuses on redundancy rather than long-term storage duration.

Auto-Failover Groups provide high availability and enable automatic failover across regions for disaster recovery purposes. They do not manage backup retention, nor do they ensure that backups are preserved for extended periods to meet regulatory requirements. Their purpose is continuity, not archival compliance.

Transparent Data Encryption protects data at rest by encrypting the database files. While TDE is important for securing backup data, it does not manage backup storage or retention policies. TDE addresses security, not regulatory compliance in terms of duration.

Long-Term Backup Retention is the correct solution because it specifically enables the storage of database backups for extended periods with configurable retention policies, fulfilling compliance requirements. It ensures durability, security, and recoverability for years, providing peace of mind for organizations under strict regulatory frameworks.

Question 190 

You want to reduce compute costs for a database that is idle most of the day while ensuring automatic scaling during peak workloads. Which deployment model should you select?

A) Serverless compute tier
B) Hyperscale tier
C) Business Critical tier
D) Elastic Pool

Answer:  A) Serverless compute tier

Explanation:

Serverless compute is designed to dynamically adjust compute resources according to workload demand. During periods of inactivity, the database can automatically pause, eliminating compute charges when the database is idle. This makes serverless compute highly cost-efficient for databases that are used intermittently or have variable workloads, as you only pay for the resources when they are actively consumed. Automatic scaling ensures the database can handle sudden spikes in activity without performance degradation.

Hyperscale tier allows independent scaling of storage and compute for extremely large databases. While it can handle high workloads and provide significant performance, it does not support pausing during idle periods. As a result, Hyperscale may be more costly for intermittent workloads that do not require continuous high compute.

Business Critical tier provides fixed compute resources with high performance, low-latency transaction processing, and premium storage redundancy. While this tier ensures reliability and performance for mission-critical workloads, it cannot dynamically scale or pause during idle periods. This can result in unnecessary compute costs for workloads that are not constantly active.

Elastic Pool allows multiple databases to share resources efficiently, distributing compute capacity across several databases. Although this can optimize costs in a multi-database environment, it does not pause idle databases or scale individual database compute automatically. Therefore, it cannot provide the same level of cost savings and workload flexibility as serverless compute.

Serverless compute tier is the correct choice because it directly addresses the requirements for cost efficiency and automatic scaling. By pausing during idle periods and scaling dynamically during peak usage, it provides a balanced solution for databases with intermittent or variable workloads, ensuring both performance and cost optimization.

Question 191

You want to offload read-only reporting queries from a primary Business Critical database without impacting write operations. Which feature should you enable?

A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect

Answer:  A) Read Scale-Out

Explanation:

Read Scale-Out is a feature in Azure SQL Database that allows read-only workloads, such as reporting, analytics, or ad hoc queries, to be directed to secondary replicas of the database. These secondary replicas are synchronized copies of the primary database, ensuring that read operations can proceed without affecting the performance of write operations on the primary. This architecture helps organizations optimize database performance by distributing workload types according to their nature, which is particularly beneficial for scenarios where heavy reporting is needed alongside frequent transactional writes.

Auto-Failover Groups, while essential for high availability and disaster recovery, focus on providing automatic failover between primary and secondary databases in geographically distributed regions. They ensure continuity of service if the primary region goes down but do not offload read-only workloads. All queries, whether read or write, would still interact primarily with the main database unless explicitly routed to the failover secondary during an outage. Therefore, while Auto-Failover Groups improve resilience, they are not designed for performance optimization of read workloads.

Elastic Pool is a resource management solution that allows multiple databases to share a pool of resources, such as compute and storage. It helps manage costs and optimize resource allocation for multiple databases with varying utilization patterns. However, Elastic Pool does not create secondary replicas specifically for offloading read queries. Its focus is on cost efficiency and balanced resource sharing rather than performance optimization through workload separation.

Transparent Network Redirect is a feature that supports client reconnections following failover events. It ensures that client applications can automatically redirect to the current primary database without changing connection strings. While critical for seamless failover, it does not handle the offloading of read-only queries and does not contribute to reducing load on the primary database for reporting purposes.

The correct choice, Read Scale-Out, is specifically designed for scenarios where read-only workloads need to be separated from write-heavy operations. By routing read queries to secondary replicas, it ensures that the primary database remains optimized for transactional performance. This approach preserves the responsiveness of the primary database while supporting analytical and reporting workloads, making it the ideal solution for offloading read-intensive operations without affecting write performance.

Question 192 

You want to encrypt sensitive columns so that client applications can query them without exposing plaintext to administrators. Which feature should you implement?

A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security

Answer:  A) Always Encrypted

Explanation:

Always Encrypted is an advanced encryption feature that protects sensitive data at the column level in Azure SQL Database. Unlike traditional encryption methods, it ensures that sensitive information, such as social security numbers or credit card details, remains encrypted not only at rest but also during query execution. The encryption keys are stored client-side, meaning that database administrators or anyone with server-level access cannot view the plaintext data. This allows client applications to perform computations, searches, and queries on encrypted data securely.

Transparent Data Encryption (TDE) encrypts the entire database at rest, ensuring that backup files and stored data are protected from unauthorized access. While TDE prevents data exposure if storage media are stolen or compromised, it decrypts data during query execution on the server. This means administrators or those with sufficient privileges can still view the plaintext, which does not satisfy the requirement of protecting sensitive columns from server-side exposure.

Dynamic Data Masking (DDM) is a feature that hides sensitive information in query results by masking it based on user roles. For example, a credit card number might appear partially masked to unauthorized users. However, DDM does not encrypt the stored values. The underlying data remains in plaintext within the database, making it unsuitable for scenarios where plaintext must remain hidden from administrators.

Row-Level Security (RLS) enforces access policies that restrict which rows a user can see based on their identity or role. While RLS provides fine-grained access control, it does not encrypt data or prevent exposure of sensitive columns. It addresses access restrictions, not data encryption.

Always Encrypted is the correct choice because it uniquely ensures that sensitive data is encrypted end-to-end. It allows authorized applications to perform queries and computations without exposing plaintext to administrators or unauthorized personnel. By using client-side encryption keys, it maintains operational usability while safeguarding sensitive information from server-side access, meeting both security and compliance requirements.

Question 193 

You want to store audit logs securely and durably for compliance with long-term retention requirements. Which destination should you select?

A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI

Answer:  A) Azure Storage account

Explanation:

Azure Storage accounts provide secure, durable, and centralized storage for data, including audit logs. They support long-term retention, replication, and encryption, making them an ideal choice for meeting regulatory compliance requirements. Storage accounts can hold data for multiple years while providing redundancy options such as locally redundant storage (LRS), geo-redundant storage (GRS), or read-access geo-redundant storage (RA-GRS), ensuring durability and availability of audit records.

Log Analytics workspaces are designed for analyzing and querying log data. They allow advanced monitoring and alerting but are optimized for active querying rather than long-term storage. Retention in Log Analytics is limited, typically measured in months rather than years, which might not satisfy regulatory mandates for multi-year archival. While they provide insights into log patterns, they are not a primary storage solution for secure, long-term retention.

Event Hubs is a scalable event ingestion service that allows streaming of telemetry, logs, or event data for real-time processing. While excellent for ingesting large volumes of data, Event Hubs does not provide long-term storage or secure archival capabilities. Data must be processed and moved to durable storage for compliance purposes. It is a transport mechanism, not a repository for audit logs.

Power BI is a data visualization and reporting service. While it can create dashboards and reports based on log data, it cannot serve as a durable storage solution. Reports in Power BI are temporary views of data rather than an archival system. It does not meet compliance requirements for retaining raw audit logs for years.

The correct choice is Azure Storage account because it ensures secure, durable storage for audit logs with configurable retention policies, encryption, and replication. It satisfies compliance requirements by providing long-term accessibility to logs without risk of data loss or unauthorized access.

Question 194

You want to monitor anomalous access patterns in Azure SQL Database and receive proactive alerts for potential security threats. Which feature should you enable?

A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing

Answer:  A) Threat Detection

Explanation:

Threat Detection is a proactive security feature in Azure SQL Database that monitors database activity for anomalies, such as unusual login attempts, SQL injection attacks, and suspicious access patterns. It automatically generates alerts when potential security threats are detected, enabling administrators to take timely action to protect the database. This feature is particularly valuable for identifying malicious activity or policy violations before they escalate into serious breaches.

Query Store is designed to track query performance over time, recording execution plans and query statistics to help diagnose performance issues. While useful for performance tuning, it does not provide security monitoring or alerting capabilities. It focuses solely on query optimization and regression detection rather than proactive threat identification.

Automatic Plan Correction identifies queries whose execution plans degrade over time and automatically reverts to a previously known good plan to maintain performance stability. It addresses performance regressions but does not monitor for security incidents or anomalous access patterns. Its function is entirely performance-oriented.

SQL Auditing captures database activity and writes it to audit logs for later analysis. While auditing provides visibility into actions taken against the database, it does not proactively detect anomalies or generate alerts in real-time. Administrators must manually review logs to identify suspicious activity, which may delay response times to potential threats.

Threat Detection is the correct feature because it actively monitors for suspicious behavior and provides immediate alerts, helping administrators protect the database against security risks in real-time. Unlike auditing or query-focused features, it is specifically built for proactive threat identification and mitigation.
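The service's detection logic is proprietary, but the core idea behind alerts such as "access from an unusual location" can be sketched as a toy baseline-and-deviation check. The sketch below is purely illustrative; `check_login` and its history structure are invented for this example and are not part of any Azure API.

```python
# Toy illustration of anomalous-access flagging, in the spirit of
# Threat Detection's "access from unusual location" alerts.
# This is NOT the Azure service, only a conceptual sketch.

def check_login(history: dict, user: str, ip: str) -> bool:
    """Flag a login as anomalous when a known user appears from an unseen IP."""
    seen = history.setdefault(user, set())
    anomalous = bool(seen) and ip not in seen
    seen.add(ip)  # fold the new source into the baseline
    return anomalous

logins = {}
check_login(logins, "app_user", "10.0.0.5")                    # first login: baseline
assert check_login(logins, "app_user", "10.0.0.5") is False    # known IP: quiet
assert check_login(logins, "app_user", "203.0.113.9") is True  # unseen IP: alert
```

The real service builds far richer baselines (geography, time of day, query shape), but the principle is the same: alert on deviation from observed history rather than on a fixed rule.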

Question 195 

You need to maintain database backups for several years to satisfy regulatory retention policies. Which feature should you enable?

A) Long-Term Backup Retention
B) Geo-Redundant Backup Storage
C) Auto-Failover Groups
D) Transparent Data Encryption

Answer:  A) Long-Term Backup Retention

Explanation:

Long-Term Backup Retention (LTR) is specifically designed to meet regulatory and compliance requirements by keeping database backups for extended periods of up to 10 years. LTR works by copying the automatically created full backups to separate long-term storage on weekly, monthly, and yearly schedules that administrators define, providing predictable and auditable backup retention for compliance purposes without the need to manually manage storage or rotation. It is ideal for industries with strict regulatory requirements, such as finance, healthcare, or government.

Geo-Redundant Backup Storage (GRS) focuses on disaster recovery rather than long-term retention. It replicates backups to a secondary geographic location to protect against regional outages or data center failures. While this enhances availability and ensures backups survive localized incidents, it does not inherently extend the retention period beyond standard backup cycles. Organizations requiring multi-year retention still need to configure LTR on top of GRS.

Auto-Failover Groups are a high availability and disaster recovery solution for Azure SQL Databases. They allow for automatic failover between primary and secondary databases to maintain uptime during planned or unplanned outages. However, Auto-Failover Groups do not manage backups or retention schedules; their focus is on service continuity rather than compliance-driven archival of backup data.

Transparent Data Encryption (TDE) encrypts the database at rest to protect data from unauthorized access. While TDE secures the stored data, it does not control retention periods or manage backup schedules. TDE is a security feature rather than a compliance or archival feature.

The correct solution is Long-Term Backup Retention because it provides a structured, policy-driven method to store backups for several years. It meets regulatory requirements and ensures that organizations have a reliable archive of database snapshots that can be restored if needed. By combining LTR with secure storage and optional geo-redundancy, organizations can satisfy both compliance and disaster recovery requirements.
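How a retention policy of this kind decides which backups survive can be sketched in a few lines. This is a simplified model covering only the weekly and yearly buckets; real LTR policies are expressed as ISO 8601 durations and also include a monthly bucket alongside the designated week of year for yearly backups.

```python
from datetime import date, timedelta

# Simplified sketch of an LTR-style retention decision for weekly full
# backups: keep everything for `weekly_weeks`, and keep backups taken in
# the designated ISO week of the year for `yearly_years`.

def is_retained(backup: date, today: date,
                weekly_weeks: int, yearly_years: int, week_of_year: int) -> bool:
    age = today - backup
    if age <= timedelta(weeks=weekly_weeks):          # weekly bucket
        return True
    # yearly bucket: only backups taken in the designated ISO week qualify
    if backup.isocalendar()[1] == week_of_year:
        return age <= timedelta(days=365 * yearly_years)
    return False

today = date(2025, 6, 1)
assert is_retained(date(2025, 5, 1), today, 12, 5, 1)       # within 12 weeks
assert is_retained(date(2023, 1, 2), today, 12, 5, 1)       # ISO week 1, under 5 years
assert not is_retained(date(2024, 6, 1), today, 12, 5, 1)   # too old, wrong week
```

The value of the policy-driven approach is visible here: retention is a pure function of the backup date and the policy, so it is auditable and requires no per-backup bookkeeping.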

Question 196 

You want to reduce compute costs for a database that is idle most of the day while supporting automatic scaling during high workloads. Which deployment model should you select?

A) Serverless compute tier
B) Hyperscale tier
C) Business Critical tier
D) Elastic Pool

Answer:  A) Serverless compute tier

Explanation:

The Serverless compute tier in Azure SQL Database is designed for databases with intermittent or unpredictable workloads. It automatically scales compute resources up or down based on demand, ensuring that high workloads are supported when needed while idle databases can be paused to reduce costs. Billing is based on the amount of compute consumed per second rather than fixed allocation, making it cost-effective for scenarios where databases are idle for significant portions of the day.

Hyperscale tier provides massive storage and allows independent scaling of storage and compute resources. While it is ideal for very large databases with high throughput requirements, it does not automatically pause idle databases or dynamically reduce compute costs in the same way as the Serverless tier. Hyperscale is optimized for scale rather than cost efficiency for idle workloads.

Business Critical tier provides high-performance resources and includes features such as multiple replicas and low-latency storage. However, the compute resources are fixed and do not scale automatically. This means that even during periods of inactivity, costs remain consistent, which is not suitable when the goal is to minimize spending during idle periods.

Elastic Pool allows multiple databases to share a set of allocated compute and storage resources. While it helps optimize resource utilization across multiple databases, it does not pause individual databases or scale resources automatically based on demand. It is more suitable for managing varying workloads across multiple databases collectively rather than cost optimization for a single intermittent workload.

Serverless compute tier is the best choice because it provides the perfect balance of cost efficiency and performance flexibility. It dynamically allocates resources during high workloads and automatically pauses during idle periods, minimizing unnecessary spending while maintaining the ability to handle peak demand when needed.
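The per-second billing model can be made concrete with a small estimator. This is a sketch of the documented model, under the assumption that compute bills for vCore-seconds actually consumed, clamped to the configured min/max vCores, and not at all while the database is auto-paused; the workload numbers are made up.

```python
# Rough serverless compute estimator: bill per second for vCores used,
# clamped to the configured [min_vcores, max_vcores] range, and bill
# nothing while auto-paused (storage still bills, but is ignored here).

def billed_vcore_seconds(samples, min_vcores: float, max_vcores: float) -> float:
    """samples: (duration_seconds, vcores_used) tuples; vcores_used=None means paused."""
    total = 0.0
    for seconds, vcores in samples:
        if vcores is None:                 # auto-paused: no compute billed
            continue
        total += seconds * min(max(vcores, min_vcores), max_vcores)
    return total

day = [
    (8 * 3600, None),   # idle overnight, auto-paused
    (2 * 3600, 0.25),   # light load below the floor: billed at min vCores
    (6 * 3600, 2.0),    # business-hours load
    (8 * 3600, None),   # paused again
]
assert billed_vcore_seconds(day, min_vcores=0.5, max_vcores=4.0) == 46_800.0
```

In this made-up day, 16 of 24 hours incur no compute charge at all, which is exactly the intermittent-workload profile the serverless tier targets.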

Question 197 

You want to offload read-only queries from a primary Business Critical database without affecting write operations. Which feature should you enable?

A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect

Answer:  A) Read Scale-Out

Explanation:

Read Scale-Out is designed to improve performance by routing read-only workloads, such as reporting or analytics queries, to secondary replicas of a primary Business Critical database. This separation ensures that write-intensive operations on the primary database are not impacted by heavy read traffic. Organizations benefit from better performance for both transactional and reporting workloads without having to maintain additional reporting databases.

Auto-Failover Groups provide automatic failover and high availability across primary and secondary databases, ensuring service continuity during outages. They are a disaster recovery feature rather than a read load-balancing one: although a failover group exposes a read-only listener, it operates across servers rather than across the built-in replicas of a single Business Critical database, and read queries reach the primary unless the client explicitly targets that listener.

Elastic Pool allows multiple databases to share a set of compute resources efficiently. While it optimizes resource allocation across databases, it does not offer secondary replicas to offload read queries. Therefore, heavy reporting queries would still impact primary write performance if Elastic Pool alone were used.

Transparent Network Redirect ensures seamless client reconnections following a failover event by automatically redirecting connections to the new primary database. It is focused on maintaining connectivity rather than offloading read workloads or improving database performance.

Read Scale-Out is the correct feature because it allows organizations to separate read-heavy reporting workloads from write-heavy transactional workloads. By using secondary replicas for read operations, it maintains optimal performance on the primary database while efficiently supporting analytics and reporting activities.
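From the client's side, Read Scale-Out is consumed through the connection string: `ApplicationIntent=ReadOnly` routes the session to a readable secondary, while `ReadWrite` (the default) targets the primary. The sketch below builds a pyodbc-style connection string; the server and database names are placeholders, and actually opening a connection would require a live server and the ODBC driver.

```python
# Build an ODBC-style connection string for Azure SQL Database.
# ApplicationIntent=ReadOnly is the documented keyword that enables
# Read Scale-Out routing to a readable secondary replica.

def build_conn_str(server: str, database: str, read_only: bool) -> str:
    intent = "ReadOnly" if read_only else "ReadWrite"
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{server},1433;Database={database};"
        f"ApplicationIntent={intent};Encrypt=yes;"
    )

reporting = build_conn_str("myserver.database.windows.net", "salesdb", read_only=True)
oltp = build_conn_str("myserver.database.windows.net", "salesdb", read_only=False)
assert "ApplicationIntent=ReadOnly" in reporting
assert "ApplicationIntent=ReadWrite" in oltp
```

A reporting application would use the first string and a transactional application the second, separating the two workloads with no change to the database itself.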

Question 198 

You want to encrypt sensitive columns and allow client applications to query them without exposing plaintext to administrators. Which feature should you implement?

A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security

Answer:  A) Always Encrypted

Explanation:

Always Encrypted is designed to protect sensitive column data in Azure SQL Database. Encryption and decryption occur on the client side, inside an enabled driver, and the column encryption keys never leave the client environment in usable form. This ensures that administrators and other database users cannot access plaintext data. Applications can still query the encrypted columns: deterministic encryption supports equality comparisons and joins, and secure enclaves extend support to richer operations, providing both security and operational usability.

Transparent Data Encryption (TDE) encrypts data at rest but decrypts it during query execution. This protects data from unauthorized access on disk but does not prevent administrators from viewing sensitive information during query operations. It is insufficient when column-level confidentiality is required.

Dynamic Data Masking (DDM) hides sensitive data in query results based on user roles. While it prevents users from seeing certain values in results, it does not encrypt the underlying data. The plaintext remains stored in the database, making it unsuitable for scenarios where administrators should not see sensitive data.

Row-Level Security (RLS) controls access to rows based on user identity or roles. It restricts access but does not encrypt data or protect sensitive columns. Its purpose is access control, not encryption.

Always Encrypted is the correct solution because it allows secure querying of sensitive columns without exposing plaintext to administrators. It combines security, compliance, and usability, ensuring sensitive information remains confidential while applications continue to operate normally.
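Why deterministic encryption permits equality lookups while randomized encryption does not can be illustrated with a small conceptual sketch. This is emphatically not the real Always Encrypted algorithm (which is AEAD_AES_256_CBC_HMAC_SHA_256); a keyed hash stands in for reversible encryption purely to show the determinism property.

```python
import hashlib
import hmac
import os

# Conceptual stand-in for Always Encrypted's two modes (NOT the real
# algorithm): deterministic encryption maps equal plaintexts to equal
# ciphertexts, so the server can evaluate equality predicates without
# ever seeing plaintext; randomized encryption leaks no such equality.

key = b"client-side-column-encryption-key"   # never leaves the client

def deterministic(value: bytes) -> bytes:
    return hmac.new(key, value, hashlib.sha256).digest()

def randomized(value: bytes) -> bytes:
    nonce = os.urandom(16)                   # fresh randomness per encryption
    return nonce + hmac.new(key, nonce + value, hashlib.sha256).digest()

# Equality search works only with the deterministic form:
assert deterministic(b"555-12-3456") == deterministic(b"555-12-3456")
assert randomized(b"555-12-3456") != randomized(b"555-12-3456")
```

This is also the trade-off administrators choose per column: deterministic columns are searchable but reveal equality patterns, while randomized columns reveal nothing at the cost of server-side querying.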

Question 199 

You want to store audit logs securely and durably for compliance with long-term retention requirements. Which destination should you select?

A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI

Answer:  A) Azure Storage account

Explanation:

Azure Storage accounts provide a secure, durable, and centralized location to store audit logs. They support long-term retention policies, replication options, and encryption, ensuring that logs remain intact and tamper-proof for regulatory compliance. Organizations can configure storage accounts for multiple years of retention, making them ideal for compliance scenarios where audit logs must be retained and available for review.

Log Analytics workspaces are primarily for querying and analyzing log data. While they provide monitoring, visualization, and alerting capabilities, their retention is limited and may not satisfy multi-year regulatory requirements. They are better suited for operational analytics than long-term archival.

Event Hubs is a real-time event ingestion service for streaming telemetry and logs. It is designed for high-throughput scenarios but does not provide long-term storage for compliance purposes. Event Hubs requires additional storage or processing layers to retain data over extended periods.

Power BI is a reporting tool that visualizes data but cannot serve as a durable storage solution. It cannot satisfy retention or archival requirements because it is designed for interactive dashboards and reports rather than secure, long-term storage.

Azure Storage account is the correct choice because it provides secure, durable, and policy-driven retention of audit logs, fulfilling compliance requirements and ensuring logs are preserved safely for extended periods.
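A retention job over archived audit logs can key off the blob paths alone. The `sqldbauditlogs/<server>/<database>/.../<YYYY-MM-DD>/` folder layout below is assumed for illustration, as is the `expired_blobs` helper; the point is that each log's age is derivable from its date-stamped folder name without downloading anything.

```python
from datetime import date, timedelta

# Sketch of a retention sweep over audit-log blob paths: parse the
# date-stamped folder out of each path and flag blobs older than the
# retention window. Path layout and helper names are illustrative.

def expired_blobs(blob_paths, today: date, retention_days: int):
    cutoff = today - timedelta(days=retention_days)
    expired = []
    for path in blob_paths:
        # find the YYYY-MM-DD segment of the path
        day = next(p for p in path.split("/") if len(p) == 10 and p[4] == "-")
        y, m, d = map(int, day.split("-"))
        if date(y, m, d) < cutoff:
            expired.append(path)
    return expired

blobs = [
    "sqldbauditlogs/srv1/db1/SqlDbAuditing_Audit/2018-01-15/log.xel",
    "sqldbauditlogs/srv1/db1/SqlDbAuditing_Audit/2025-05-01/log.xel",
]
# With a roughly 7-year retention window, only the 2018 blob has expired:
assert expired_blobs(blobs, date(2025, 6, 1), 7 * 365) == [blobs[0]]
```

In practice the same effect is usually achieved declaratively with storage lifecycle management policies rather than a hand-rolled sweep, which keeps retention auditable alongside the logs themselves.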

Question 200 

You want to monitor anomalous access patterns in Azure SQL Database and receive proactive alerts for potential security threats. Which feature should you enable?

A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing

Answer:  A) Threat Detection

Explanation:

Threat Detection is a proactive security feature that monitors activity in Azure SQL Database for unusual or suspicious behavior. It can detect anomalies such as failed login attempts, SQL injection attempts, and abnormal access patterns. When a potential threat is identified, administrators receive alerts so they can investigate and mitigate risks in real time. This allows organizations to respond quickly to potential breaches and maintain strong security posture.

Query Store captures query execution history, plans, and performance metrics over time. While useful for troubleshooting performance issues or plan regressions, it does not analyze activity for security threats or generate alerts for anomalous access. Its focus is performance, not security.

Automatic Plan Correction identifies queries whose execution plans degrade over time and restores known good plans to maintain performance. It helps with stability and performance optimization but does not monitor security incidents or access anomalies.

SQL Auditing logs database activity for later analysis. While it records valuable information about operations and access, auditing alone does not provide real-time detection or proactive alerts. Administrators must manually analyze audit logs to identify potential threats.

Threat Detection is the correct choice because it actively monitors for security anomalies and immediately alerts administrators. This feature enables organizations to detect and respond to threats in real time, which is essential for maintaining the integrity and confidentiality of their databases.
