Microsoft DP-300 Administering Microsoft Azure SQL Solutions Exam Dumps and Practice Test Questions Set 8 Q141-160
Question 141
You want to automatically scale compute resources for a database based on workload and pause it when idle to reduce costs. Which deployment model should you use?
A) Serverless compute tier
B) Business Critical tier
C) Hyperscale tier
D) Elastic Pool
Answer: A) Serverless compute tier
Explanation:
The serverless compute tier is designed to dynamically adjust compute resources based on workload demands. During periods of high activity, the database automatically scales up compute to maintain performance; when the workload decreases, it scales down to reduce costs. Additionally, serverless databases can pause automatically during extended periods of inactivity, so no compute resources are billed while the database is idle. This makes the tier highly cost-effective for workloads that are intermittent or unpredictable. The database resumes automatically on the next connection attempt (after a short warm-up delay), giving users and applications a near-seamless experience.
The Business Critical tier provides a fixed allocation of compute and storage resources with high availability and low-latency access due to its use of local SSD storage and multiple replicas. While it ensures excellent performance for mission-critical applications and supports features like high-availability replicas, it does not automatically scale compute resources based on demand. Furthermore, the Business Critical tier cannot pause the database to save costs. Its main strength lies in providing consistent, high-performance computing rather than dynamic cost optimization.
The Hyperscale tier supports large databases and allows independent scaling of compute and storage. This makes it ideal for very large workloads, especially those that require rapid storage growth or highly scalable compute. While Hyperscale provides flexibility for compute and storage separation, it does not offer automatic pausing during idle periods, and the scaling is more manual compared to serverless. This tier is excellent for scenarios with massive data volumes and steady workloads but not optimized for variable workloads where cost efficiency during idle periods is a priority.
Elastic Pools allow multiple databases to share a pool of compute and storage resources. This approach optimizes resource usage across multiple databases and is particularly beneficial when managing several small or medium-sized databases with variable workloads. However, Elastic Pools do not automatically scale compute resources for individual databases in response to their specific workload, nor do they pause databases when idle. Therefore, for a single database with intermittent activity and the need to reduce costs dynamically, the serverless compute tier remains the optimal choice. It combines automatic scaling and pausing capabilities, directly addressing both performance and cost-efficiency requirements.
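As a sketch of how an existing database might be moved to the serverless tier with T-SQL (the database name and vCore sizing below are illustrative; the auto-pause delay and minimum vCores are configured through the Azure portal, PowerShell, or the Azure CLI rather than T-SQL):

```sql
-- Hypothetical database name; typically run while connected to master.
-- 'GP_S_Gen5_2' = General Purpose, Serverless, Gen5 hardware, 2 max vCores.
ALTER DATABASE SalesDb
    MODIFY (SERVICE_OBJECTIVE = 'GP_S_Gen5_2');
```

The `_S_` segment in the service objective name is what distinguishes a serverless objective from a provisioned one such as `GP_Gen5_2`.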
Question 142
You need to offload read-only queries from the primary Business Critical database without impacting writes. Which feature should you enable?
A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect
Answer: A) Read Scale-Out
Explanation:
Read Scale-Out is a feature designed to enable secondary replicas of a Business Critical database to handle read-only queries. This allows the primary database to focus on write operations, maintaining high transaction throughput while read workloads are offloaded. By leveraging the secondary replicas, applications can query large amounts of data without negatively impacting write performance, which is particularly useful for reporting, analytics, and other read-heavy workloads. This ensures that read operations do not compete with write transactions, optimizing overall database performance.
Auto-Failover Groups provide high availability and disaster recovery by replicating databases across regions and automatically redirecting client connections in the event of a failure. While this ensures business continuity, it does not provide a mechanism to offload read-only queries under normal operations. Its primary function is failover management and not performance optimization for read-heavy workloads.
Elastic Pools, as mentioned previously, allow multiple databases to share resources. While this optimizes resource allocation across multiple databases, it does not provide secondary replicas to specifically handle read-only queries. Elastic Pools are useful when managing multiple databases with varying workloads, but they do not directly address the need to reduce load on the primary database during read operations.
Transparent Network Redirect ensures that client connections are automatically redirected after a failover occurs. This is critical for maintaining seamless application connectivity but does not provide any mechanism to offload read workloads or improve performance for queries. Read Scale-Out is specifically designed to allow secondary replicas to handle read-only workloads, making it the correct choice when the goal is to relieve the primary database of read-heavy operations while maintaining write performance.
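In practice, clients opt in to the read-only replica through the connection string, and a session can confirm which replica it landed on. A minimal sketch (server and database names are placeholders):

```sql
-- Clients request the read-only replica via the connection string, e.g.:
--   Server=myserver.database.windows.net;Database=SalesDb;ApplicationIntent=ReadOnly;
-- Once connected, verify which replica is serving the session:
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability');
-- READ_ONLY indicates a secondary replica; READ_WRITE indicates the primary.
```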
Question 143
You want to encrypt sensitive columns so that client applications can query data without exposing plaintext to administrators. Which feature should you use?
A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security
Answer: A) Always Encrypted
Explanation:
Always Encrypted is a feature that encrypts sensitive data at the column level and ensures that decryption occurs only on the client side. This means that even database administrators and service operators cannot view sensitive information in plaintext. Applications can still query and filter data because the client driver handles encryption and decryption transparently; equality comparisons are supported on columns that use deterministic encryption, allowing secure lookups without exposing the underlying data. This is ideal for scenarios where strict compliance and privacy are required, such as financial or personal data.
Transparent Data Encryption encrypts data at rest on the server, protecting files and backups from unauthorized access. However, when a query is executed, the data is decrypted for processing, which means that administrators with access to the database can still potentially see plaintext values. TDE ensures protection against storage-level threats but does not protect data from users or administrators with database access during query execution.
Dynamic Data Masking obscures data in query results based on user permissions, showing masked or partial values to certain users. While it improves privacy in reporting scenarios, it does not encrypt the underlying data stored in the database. Users with sufficient privileges can still access the original data, meaning it does not provide strong protection against unauthorized access.
Row-Level Security restricts access to specific rows based on user roles or predicates. It is an access control mechanism rather than an encryption mechanism. It ensures that users can only see the rows they are authorized to access but does not encrypt or protect sensitive column data from administrators or other high-privilege roles. Always Encrypted directly addresses the requirement of keeping data encrypted while still queryable on the client side, making it the correct choice.
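A sketch of what an Always Encrypted column definition looks like in T-SQL. It assumes a column master key and a column encryption key (here named `CEK_Auto1`) have already been provisioned, for example through SSMS or PowerShell; table and column names are illustrative:

```sql
CREATE TABLE dbo.Customers (
    CustomerId INT IDENTITY PRIMARY KEY,
    -- Deterministic encryption permits equality lookups and joins;
    -- character columns must use a BIN2 collation.
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Auto1,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ),
    -- Randomized encryption is stronger but supports no server-side operations.
    Salary MONEY
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Auto1,
            ENCRYPTION_TYPE = RANDOMIZED,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        )
);
```

Client applications must also enable column encryption in the connection string (`Column Encryption Setting=Enabled`) so the driver can encrypt parameters and decrypt results.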
Question 144
You want to store audit logs securely in Azure for compliance with long-term retention requirements. Which destination should you choose?
A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI
Answer: A) Azure Storage account
Explanation:
Azure Storage accounts provide durable, secure, and scalable storage for data, making them ideal for long-term retention of audit logs. Storage accounts support configurable retention policies and encryption, ensuring that audit logs remain protected and compliant with regulatory requirements. They are cost-effective for storing large volumes of logs over extended periods and can integrate with various monitoring and auditing solutions.
Log Analytics workspaces are optimized for querying, analysis, and monitoring of operational data in near real-time. While they are excellent for analyzing logs and generating insights, they may not be the most suitable for long-term storage of audit data due to retention limits and associated costs for prolonged storage.
Event Hubs is a data streaming service designed to ingest large volumes of event data for processing and analysis. It is ideal for real-time event processing but does not provide persistent storage suitable for long-term compliance retention. Data in Event Hubs is typically transient unless persisted elsewhere.
Power BI is a business intelligence and reporting tool used for visualizing and analyzing data. It is not a storage platform and does not provide long-term retention capabilities. Using it as a log repository would not satisfy compliance or regulatory requirements. Azure Storage accounts offer the necessary durability, security, and policy enforcement for storing audit logs, making them the correct choice for long-term retention.
Question 145
You want to monitor query performance and preserve historical execution plans to detect regressions over time. Which feature should you enable?
A) Query Store
B) Extended Events
C) SQL Auditing
D) Intelligent Insights
Answer: A) Query Store
Explanation:
Query Store continuously captures query execution statistics, runtime metrics, and execution plans over time. By storing this historical information, it enables database administrators to detect performance regressions and compare current execution plans with prior ones. This makes it easier to identify queries that have degraded in performance and take corrective actions such as plan forcing or tuning. Query Store is particularly useful in environments where workloads evolve over time and query performance needs proactive monitoring.
Extended Events provides a lightweight framework for collecting and analyzing diagnostic events in SQL Server and Azure SQL Database. While it is powerful for debugging and troubleshooting, it does not automatically preserve historical execution plans over long periods. Administrators must manually configure sessions and store event data, making it less convenient for continuous regression detection compared to Query Store.
SQL Auditing records database actions such as logins, schema changes, and query executions for compliance purposes. While valuable for security and auditing, it is not intended for monitoring query performance trends or storing execution plan history. SQL Auditing captures “who did what” but not detailed execution statistics needed for regression analysis.
Intelligent Insights analyzes database performance and provides recommendations for tuning, identifying potential issues based on patterns and telemetry. However, it does not maintain historical execution plan information for detailed comparisons. It provides high-level guidance but cannot replace the detailed plan history offered by Query Store. Query Store’s focus on capturing both query performance metrics and execution plans over time makes it the correct solution for detecting regressions and ensuring optimal performance.
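Query Store is enabled by default in Azure SQL Database, but it can be turned on explicitly and its history queried through the catalog views. A minimal sketch (the regression query is illustrative, not a fixed recipe):

```sql
-- Enable Query Store (on by default in Azure SQL Database; shown for clarity).
ALTER DATABASE CURRENT
    SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

-- Example: surface the plans with the highest average duration,
-- a starting point for spotting regressed queries.
SELECT TOP (10)
    q.query_id,
    p.plan_id,
    rs.avg_duration,
    rs.last_execution_time
FROM sys.query_store_query         AS q
JOIN sys.query_store_plan          AS p  ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;
```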
Question 146
You want to detect and automatically remediate query plan regressions. Which feature should you enable?
A) Automatic Plan Correction
B) Query Store
C) Intelligent Insights
D) Extended Events
Answer: A) Automatic Plan Correction
Explanation:
Automatic Plan Correction is a feature designed to ensure consistent query performance by automatically identifying queries that are experiencing execution plan regressions and enforcing previously known good plans. It works by continuously monitoring query performance and comparing execution statistics over time. When a regression is detected, the system can automatically apply a plan that has historically delivered optimal performance, effectively correcting the issue without requiring manual intervention from a database administrator. This feature is particularly useful in production environments where query performance fluctuations can directly impact application responsiveness and user experience.
Query Store, while closely related, primarily functions as a repository for query performance data. It captures historical execution plans, runtime statistics, and execution metrics for individual queries. By storing this information, Query Store allows administrators and developers to analyze performance trends, identify regressions, and investigate query behavior over time. However, Query Store itself does not automatically correct regressions. While it is an essential component for diagnosing issues and enabling Automatic Plan Correction, it requires manual intervention to apply fixes or revert to previous plans, which can be time-consuming and prone to human error.
Intelligent Insights is another monitoring tool that provides recommendations and guidance on potential performance issues. It analyzes telemetry data and identifies problematic queries or configuration issues, offering insights into optimization opportunities. While Intelligent Insights helps identify the root causes of regressions, it does not actively remediate the problem. The recommendations generated require administrators to review, validate, and manually apply corrective actions. Therefore, Intelligent Insights enhances situational awareness and diagnostic capability but does not achieve the goal of automatic remediation.
Extended Events is a low-overhead monitoring framework that captures detailed diagnostic and performance data across SQL Server and Azure SQL databases. It allows administrators to track events, log performance metrics, and correlate system behavior for deep analysis. While highly flexible for custom monitoring and troubleshooting, Extended Events does not include any built-in capability for automatically applying corrections to queries. Its primary role is data collection and analysis, not automated remediation.
Automatic Plan Correction is the only option that directly addresses both detection and automatic remediation of query plan regressions. By leveraging Query Store data and system intelligence, it continuously monitors execution plans and enforces previously known good plans whenever performance degrades. This ensures consistent, predictable query performance without requiring human intervention. Therefore, for scenarios where automatic correction of regressions is required to maintain operational stability and performance, Automatic Plan Correction is the optimal choice.
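The feature can be switched on per database with a single T-SQL statement, and the engine's findings are exposed through a DMV. A minimal sketch:

```sql
-- Enable automatic plan correction (forcing the last known good plan).
ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Inspect detected regressions and the actions the engine has taken or proposes.
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;
```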
Question 147
You want to enforce row-level access restrictions for a table based on department. Which feature should you enable?
A) Row-Level Security
B) Dynamic Data Masking
C) Always Encrypted
D) Transparent Data Encryption
Answer: A) Row-Level Security
Explanation:
Row-Level Security (RLS) is a feature designed specifically to enforce fine-grained access control at the row level within a database table. It dynamically filters rows returned in query results based on user attributes, such as department membership, roles, or other security policies. This allows a single table to store data for multiple departments while ensuring that users only access the rows they are authorized to see. The filtering is applied transparently to queries, so applications do not need to include custom filtering logic, reducing the risk of accidental data exposure and simplifying development.
Dynamic Data Masking, in contrast, does not restrict access to rows. Instead, it masks the values of sensitive columns to prevent unauthorized users from viewing sensitive information. While useful for protecting personally identifiable information (PII) or other confidential data, Dynamic Data Masking only obfuscates data values without restricting access to specific rows. Users can still see the existence of data, and queries may return the full set of rows, which does not satisfy scenarios requiring strict row-level access control.
Always Encrypted protects sensitive data by encrypting it at the column level so that it remains encrypted both in transit and at rest. Only authorized clients with the proper keys can decrypt and access the data. However, Always Encrypted does not implement access policies or filtering logic. It ensures data confidentiality but does not control which rows a user may access based on their identity or attributes.
Transparent Data Encryption (TDE) is designed to secure data at rest by encrypting the entire database. While it protects the database files and backups from unauthorized access at the storage level, it does not affect row-level access within the database. TDE ensures compliance with storage encryption requirements but cannot implement user-based row filtering.
Row-Level Security is uniquely suited for scenarios requiring granular access restrictions. By combining policies with predicates based on user attributes, it allows seamless enforcement of departmental or role-based access controls at the row level. Unlike masking or encryption, which protect data values or storage, RLS controls the visibility of entire rows dynamically, making it the correct solution for enforcing departmental access policies.
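A minimal sketch of a department-based RLS setup. It assumes a `Security` schema exists and that a mapping table `dbo.UserDepartments(UserName, Department)` links database users to departments; all names are illustrative:

```sql
-- Inline table-valued function used as the security predicate.
CREATE FUNCTION Security.fn_DeptPredicate (@Department AS NVARCHAR(50))
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_result
           FROM dbo.UserDepartments ud
           WHERE ud.UserName = USER_NAME()
             AND ud.Department = @Department;
GO

-- Filter rows on reads/updates/deletes and block out-of-department inserts.
CREATE SECURITY POLICY Security.DeptFilter
    ADD FILTER PREDICATE Security.fn_DeptPredicate(Department) ON dbo.SalesData,
    ADD BLOCK  PREDICATE Security.fn_DeptPredicate(Department) ON dbo.SalesData AFTER INSERT
    WITH (STATE = ON);
```

Because the policy is enforced inside the engine, every query against `dbo.SalesData` is filtered automatically, with no application-side WHERE clauses required.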
Question 148
You want to monitor anomalous access patterns and receive proactive alerts for potential security threats. Which feature should you enable?
A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing
Answer: A) Threat Detection
Explanation:
Threat Detection (now surfaced through Microsoft Defender for SQL) is a security feature in Azure SQL Database that continuously monitors database activity to identify potentially harmful behaviors. It detects anomalies such as SQL injection attacks, abnormal login attempts, privilege escalations, or unauthorized access attempts. When suspicious activity is identified, it automatically generates alerts that can be sent to administrators via email or integrated with monitoring systems, enabling rapid response. This proactive approach helps prevent breaches before they escalate into significant security incidents.
Query Store captures query performance metrics and execution plans over time, allowing administrators to analyze query behavior and detect performance regressions. While it is valuable for performance monitoring and troubleshooting, it does not track security-related events or generate alerts for suspicious access patterns. Its functionality is focused entirely on query optimization rather than threat monitoring.
Automatic Plan Correction focuses on detecting and remediating query execution plan regressions. It monitors query performance and applies previously successful plans when regressions are detected. Although it provides automated remediation for performance issues, it does not address security threats or anomalous access patterns, making it irrelevant for proactive threat detection.
SQL Auditing records database activity by capturing detailed logs of queries, login events, and data access. These logs are valuable for compliance and post-incident investigation, but auditing is reactive rather than proactive. It does not automatically analyze patterns or issue alerts for anomalous behavior; administrators must review the logs manually or configure additional monitoring to detect threats.
Threat Detection is purpose-built for real-time monitoring, analysis, and alerting of suspicious database activity. Unlike Query Store, Automatic Plan Correction, or SQL Auditing, it focuses on security rather than performance or data encryption. By providing immediate alerts and actionable insights into potential threats, it enables proactive mitigation, making it the appropriate choice for monitoring anomalous access patterns.
Question 149
You need to maintain database backups for multiple years to comply with regulatory requirements. Which feature should you enable?
A) Long-Term Backup Retention
B) Geo-Redundant Backup Storage
C) Auto-Failover Groups
D) Transparent Data Encryption
Answer: A) Long-Term Backup Retention
Explanation:
Long-Term Backup Retention (LTR) in Azure SQL Database allows organizations to retain full database backups for multiple years (up to 10), directly supporting regulatory and compliance requirements. LTR enables administrators to define policies specifying how long weekly, monthly, and yearly backups should be kept, ensuring that historical data is preserved for audits, legal compliance, or disaster recovery. The feature stores backups in Azure Storage and provides mechanisms to restore databases from any point within the retention window, making it essential for long-term data governance.
Geo-Redundant Backup Storage provides replication of backups across geographically separated Azure regions. This ensures that data can be recovered even in the event of a regional disaster. While Geo-Redundant Backup Storage improves reliability and disaster recovery capabilities, it does not control how long backups are retained. Without specifying retention policies, backups may still expire according to standard retention periods, which may not meet multi-year regulatory requirements.
Auto-Failover Groups are designed to provide high availability and disaster recovery for databases by replicating them across regions and automatically redirecting clients in the event of a failure. While they enhance availability and resilience, they do not store long-term backups or satisfy regulatory retention requirements. Their purpose is to minimize downtime and ensure continuous application access rather than to preserve historical data for extended periods.
Transparent Data Encryption (TDE) protects data at rest by encrypting database files and backups. While it addresses security requirements, encryption alone does not extend the backup retention period. TDE ensures that stored backups remain secure, but it does not define or manage how long backups are kept. Only Long-Term Backup Retention directly addresses the requirement to maintain backups for multiple years, making it the correct solution for compliance-driven data retention.
Question 150
You want to reduce compute costs for a database that is idle most of the day while allowing automatic scaling during high workloads. Which deployment model should you use?
A) Serverless compute tier
B) Hyperscale tier
C) Business Critical tier
D) Elastic Pool
Answer: A) Serverless compute tier
Explanation:
The Serverless compute tier in Azure SQL Database is designed to automatically scale compute resources based on workload demand. It monitors CPU and memory usage and adjusts the allocation dynamically, ensuring that resources are available during peak times but reduced when the database is idle. Serverless can also pause the database entirely during periods of inactivity, which significantly reduces compute costs because charges are applied only for actual usage. This makes it particularly cost-effective for workloads that experience intermittent or unpredictable traffic patterns.
Hyperscale tier provides virtually unlimited storage and rapid scaling of compute and storage independently, making it ideal for very large databases or workloads with high growth rates. While it offers powerful scaling capabilities, Hyperscale does not automatically pause databases or reduce costs when the system is idle. It is optimized for large, high-performance workloads rather than cost-sensitive intermittent workloads.
Business Critical tier provides consistently high performance with high availability, replication, and low latency. It is ideal for workloads that require strong transactional consistency and guaranteed performance. However, compute resources in this tier are provisioned upfront and do not automatically scale down when idle, meaning idle periods still incur full compute costs. It is better suited for continuously active workloads.
Elastic Pool allows multiple databases to share a set of allocated resources, which helps optimize cost for many small to medium databases with variable workloads. However, individual databases cannot pause, and resources are shared rather than automatically scaled per database. It does not provide the granular scaling or pause capabilities of the Serverless tier.
Serverless compute is uniquely suited to situations where workloads fluctuate, and cost efficiency is a priority. Its ability to automatically scale up during demand and pause during idle periods ensures optimal performance without unnecessary cost, making it the best choice for databases with intermittent workloads.
Question 151
You want to offload read-only reporting queries from a primary Business Critical database without affecting write operations. Which feature should you enable?
A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect
Answer: A) Read Scale-Out
Explanation:
Read Scale-Out is a feature designed specifically for Business Critical tier databases in Azure SQL. It allows read-only queries to be executed on secondary replicas, which are typically used for high availability purposes. By routing reporting or analytic workloads to these secondary replicas, the primary database’s resources remain dedicated to handling write operations and transactional workloads. This separation ensures that heavy reporting queries do not interfere with critical online transactions, maintaining optimal performance for the primary database. Read Scale-Out is particularly beneficial for workloads where reporting or analytics are frequent but should not impact operational responsiveness.
Auto-Failover Groups are primarily intended to provide high availability and disaster recovery for Azure SQL databases. They allow automatic failover between a primary and secondary database in the event of an outage, ensuring continuity of service. While Auto-Failover Groups support read-only routing for replicas, their main purpose is failover management, not consistent offloading of read workloads for performance optimization. Using Auto-Failover Groups alone does not achieve the targeted performance benefit of separating read-only queries from write operations under normal circumstances.
Elastic Pools are designed to allow multiple databases to share a pool of resources, such as CPU and memory, providing cost-effective resource management for variable workloads. While Elastic Pools help balance resource usage among databases, they do not provide the ability to offload read-only queries from a specific primary database to secondary replicas. The feature focuses on resource allocation rather than workload separation. For scenarios where high transaction throughput must coexist with heavy reporting, Elastic Pools alone are insufficient.
Transparent Network Redirect is a mechanism used to automatically redirect client connections to the appropriate server endpoint, often after failover events. Its purpose is to simplify connection management and maintain application continuity during server or regional failovers. However, it does not handle query routing or offloading. It cannot separate read-only workloads from write operations in real-time, which is critical for minimizing performance impact on the primary database.
The correct solution is Read Scale-Out because it directly addresses the need to route read-only queries to secondary replicas. By doing so, it preserves the primary database’s capacity for writes while still enabling analytics and reporting on up-to-date data. This targeted offloading is not the primary function of Auto-Failover Groups, Elastic Pools, or Transparent Network Redirect, making Read Scale-Out the optimal choice for this scenario.
Question 152
You want to encrypt sensitive columns so that client applications can query them without exposing plaintext to administrators. Which feature should you implement?
A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security
Answer: A) Always Encrypted
Explanation:
Always Encrypted is an advanced security feature in Azure SQL Database that ensures sensitive data remains encrypted not only at rest and in transit but also during query execution. Encryption keys are maintained on the client side, which means SQL Server never sees plaintext data. This allows applications to query encrypted data securely, performing equality searches and comparisons on deterministically encrypted columns without exposing the underlying sensitive information. This feature is particularly suitable for scenarios where compliance regulations demand that administrators or DBAs cannot view confidential data, such as personally identifiable information or financial records.
Transparent Data Encryption (TDE) encrypts the entire database at rest, including backups, ensuring that data cannot be read directly from storage. However, TDE decrypts data when it is queried, so administrators and SQL Server processes can see the plaintext. While TDE is crucial for protecting data on disk, it does not prevent server-side access during query execution, making it insufficient for scenarios requiring client-side encryption.
Dynamic Data Masking (DDM) is designed to obscure sensitive data in query results by replacing values with masked representations based on user roles. While it helps prevent unauthorized users from viewing sensitive data, it does not encrypt the data itself. The original values are still stored in plaintext in the database and accessible to administrators, which fails to meet the requirement for encrypting data against server-side exposure.
Row-Level Security (RLS) allows the database to restrict which rows are visible to which users. RLS enforces access policies but does not encrypt any data or protect it from administrators. It is focused on controlling visibility rather than ensuring encryption or preventing exposure during queries.
Always Encrypted is the correct choice because it fully encrypts sensitive columns and allows applications to perform operations on encrypted data without exposing plaintext to SQL Server or administrators. Unlike TDE, DDM, or RLS, it provides true client-side encryption and is specifically designed for secure querying of confidential information.
Question 153
You want to store audit logs securely and durably for regulatory compliance with long-term retention. Which destination should you select?
A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI
Answer: A) Azure Storage account
Explanation:
Azure Storage accounts provide highly durable, secure, and cost-effective storage for data, including audit logs. With support for redundancy options such as locally redundant storage (LRS) or geo-redundant storage (GRS), audit logs are protected against hardware failures or regional outages. Azure Storage also allows organizations to implement retention policies that satisfy regulatory requirements, often spanning multiple years. Its capability to store raw log files in an immutable, secure manner ensures that logs remain tamper-resistant and accessible for auditing or compliance reporting.
Log Analytics workspace is designed primarily for querying, monitoring, and visualizing telemetry data. While it can ingest audit logs and provide advanced analytics, its retention is typically limited compared to regulatory long-term storage requirements, and costs can increase significantly for multi-year retention. It is optimized for operational insight rather than archival storage, making it less suitable for scenarios requiring strict compliance with long-term retention policies.
Event Hubs is a high-throughput data streaming platform intended for ingesting and processing large volumes of events in near real-time. It excels at routing telemetry to analytics pipelines but does not provide durable long-term storage. Logs sent to Event Hubs must be consumed and stored elsewhere, so it cannot serve as the primary repository for regulatory audit logs.
Power BI is a visualization tool for reporting and dashboarding. While it can be used to display audit trends or summarize activity, it does not store raw logs or provide durable retention suitable for compliance. It cannot act as an archival repository or fulfill regulatory retention mandates.
The correct choice is an Azure Storage account because it ensures that audit logs are securely stored, durable, and compliant with long-term retention requirements. Unlike Log Analytics, Event Hubs, or Power BI, it provides the durability, security, and policy controls needed for compliance-driven log retention.
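When Azure SQL auditing targets a storage account, it writes `.xel` files under a per-server, per-database, per-day blob hierarchy. The sketch below, in Python, builds that layout so you can enumerate the prefixes a compliance export would need to pull; treat the exact container and folder names as illustrative rather than authoritative.

```python
from datetime import date, timedelta

def audit_blob_prefix(server: str, database: str, day: date) -> str:
    """Build the blob prefix under which Azure SQL auditing writes .xel
    files. The 'sqldbauditlogs/<server>/<database>/<yyyy-mm-dd>/' layout
    mirrors the service's convention but should be verified per account."""
    return f"sqldbauditlogs/{server}/{database}/{day.isoformat()}/"

def prefixes_for_range(server: str, database: str,
                       start: date, end: date) -> list[str]:
    """List one prefix per day in [start, end] for a compliance export."""
    days = (end - start).days + 1
    return [audit_blob_prefix(server, database, start + timedelta(d))
            for d in range(days)]
```

A retention review would iterate these prefixes and verify that every day in the mandated window still has its log blobs present.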
Question 154
You want to monitor anomalous access patterns in Azure SQL Database and receive proactive alerts for potential threats. Which feature should you enable?
A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing
Answer: A) Threat Detection
Explanation:
Threat Detection in Azure SQL Database proactively identifies suspicious activity, such as potential SQL injection attacks, anomalous logins, or unusual data access patterns. When such anomalies are detected, the system sends alerts to administrators, enabling timely intervention. This feature leverages advanced algorithms and machine learning to recognize deviations from normal user behavior, making it highly effective for early detection of security threats before they escalate. Threat Detection also integrates with Microsoft Defender for Cloud (formerly Azure Security Center), providing centralized monitoring and management of security alerts.
Query Store is a performance monitoring feature that tracks query execution plans and runtime statistics over time. Its purpose is to detect query performance regressions and optimize execution plans. While it is invaluable for performance troubleshooting and tuning, it does not monitor security-related events or generate alerts for anomalous access patterns.
Automatic Plan Correction is focused on performance optimization by detecting query plan regressions and automatically applying better execution plans. It does not provide any security monitoring or alerting capabilities. Its scope is strictly limited to ensuring consistent query performance rather than identifying potential threats or suspicious activity.
SQL Auditing logs database events, including data access and changes, to external storage or monitoring systems. While auditing captures valuable information about database activity, it does not provide proactive alerts or anomaly detection. Administrators must manually analyze audit logs or set up separate mechanisms to detect threats, which is less immediate and actionable than Threat Detection.
The correct solution is Threat Detection because it directly addresses the requirement for proactive monitoring and alerting of potential threats. Unlike Query Store, Automatic Plan Correction, or SQL Auditing, it is purpose-built to detect anomalous access patterns and notify administrators in real time.
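The core idea behind anomaly detection is comparing new activity against a learned baseline of normal behavior. The toy Python sketch below flags logins from locations a user has never been seen in before; it is a stand-in for the machine-learning models Threat Detection actually uses, and the data shapes are assumptions for illustration.

```python
def detect_anomalous_logins(history: dict[str, set[str]],
                            events: list[tuple[str, str]]) -> list[str]:
    """Flag logins from locations a user has never been observed in.

    history maps user -> set of previously seen countries; events is a
    list of (user, country) login events. A simple baseline-deviation
    check, not the real Threat Detection algorithm.
    """
    alerts = []
    for user, country in events:
        seen = history.setdefault(user, set())
        if seen and country not in seen:
            alerts.append(f"ALERT: {user} logged in from unfamiliar "
                          f"location {country}")
        seen.add(country)  # fold the new observation into the baseline
    return alerts
```

Each flagged event would map to a proactive alert delivered to administrators, which is exactly the behavior that distinguishes Threat Detection from passive auditing.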
Question 155
You want to maintain backups for multiple years to comply with regulatory retention policies. Which feature should you enable?
A) Long-Term Backup Retention
B) Geo-Redundant Backup Storage
C) Auto-Failover Groups
D) Transparent Data Encryption
Answer: A) Long-Term Backup Retention
Explanation:
Long-Term Backup Retention (LTR) in Azure SQL Database is designed specifically to store backups for extended periods, often spanning several years. This feature supports compliance requirements for financial, healthcare, or other regulated industries that mandate long-term retention of data. LTR allows administrators to define backup schedules and retention policies to meet organizational or legal requirements, ensuring that historical data remains accessible and secure for auditing or restoration purposes.
Geo-Redundant Backup Storage (GRS) replicates database backups to a secondary region, protecting against regional failures. While GRS improves disaster recovery and availability, it does not extend the duration of backup retention. Its primary focus is resiliency rather than regulatory compliance for long-term storage.
Auto-Failover Groups provide high availability by allowing automatic failover of databases between regions. This ensures minimal downtime during regional outages but does not address the storage or retention period of backups. Failover capabilities are unrelated to regulatory compliance in terms of multi-year data retention.
Transparent Data Encryption (TDE) secures data at rest by encrypting the database and its backups. While critical for protecting sensitive data, TDE does not determine the duration for which backups are retained. Encryption complements retention policies but does not manage them.
The correct feature is Long-Term Backup Retention because it directly meets the requirement of storing backups for multiple years. Unlike GRS, Auto-Failover Groups, or TDE, LTR explicitly enables compliance with extended retention regulations while ensuring backups remain secure and accessible.
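LTR policies are expressed as weekly, monthly, and yearly retention windows (the WeeklyRetention, MonthlyRetention, and YearlyRetention settings, plus a WeekOfYear for the yearly copy). The Python sketch below approximates that logic to decide whether a given backup is still retained; the month/week selection rules are simplified assumptions, since the real service operates on weekly full backups.

```python
from datetime import date

def ltr_backup_retained(backup: date, today: date,
                        weekly_weeks: int = 12, monthly_months: int = 12,
                        yearly_years: int = 5, week_of_year: int = 1) -> bool:
    """Approximate whether a full backup taken on `backup` is still kept
    under an LTR policy of W weekly, M monthly, and Y yearly copies."""
    age_days = (today - backup).days
    if age_days < 0:
        return False
    # Weekly copies: any full backup inside the weekly window.
    if age_days <= weekly_weeks * 7:
        return True
    # Monthly copies: the first full of each month, inside the monthly window.
    if backup.day <= 7 and age_days <= monthly_months * 31:
        return True
    # Yearly copies: the full taken in the configured ISO week of the year.
    if backup.isocalendar()[1] == week_of_year and age_days <= yearly_years * 366:
        return True
    return False
```

A five-year yearly policy, for example, keeps the week-1 backup from four years ago while letting an ordinary mid-year backup of the same age expire.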
Question 156
You want to reduce compute costs for a database that is idle most of the day while supporting automatic scaling during high workloads. Which deployment model should you select?
A) Serverless compute tier
B) Hyperscale tier
C) Business Critical tier
D) Elastic Pool
Answer: A) Serverless compute tier
Explanation:
The serverless compute tier is designed to optimize cost efficiency for databases with unpredictable or intermittent workloads. This model automatically scales compute resources up or down based on the actual workload demand. When the database is idle, serverless can pause compute entirely, which stops billing for compute resources while maintaining storage persistence. This makes it ideal for workloads that experience periods of inactivity interspersed with bursts of high demand, such as development environments, lightly used applications, or seasonal workloads.
The hyperscale tier provides independent scaling of storage and compute, enabling very large databases with high throughput requirements. It allows rapid growth in storage capacity and can scale compute to handle heavy workloads. However, hyperscale does not pause compute when the database is idle. Because compute is always provisioned in hyperscale, the cost-saving benefit of pausing during low activity periods is not available. Hyperscale is better suited for applications requiring high availability and large-scale storage rather than cost optimization for intermittent workloads.
The Business Critical tier offers fixed, high-performance compute resources with built-in high availability and redundancy. It is designed for mission-critical workloads requiring fast transaction processing and minimal latency. However, it does not automatically scale compute up or down, nor does it pause during idle periods. This tier ensures predictable performance and reliability, but for workloads with significant idle time, it can result in higher costs since the provisioned compute remains active regardless of utilization.
Elastic Pools are useful for managing multiple databases that share a set of resources. They provide a way to allocate compute and storage across multiple databases to optimize overall resource usage. While they improve efficiency for multiple databases with varying utilization patterns, individual databases in an Elastic Pool do not automatically pause when idle, nor do they scale independently in response to workload. Elastic Pools are more appropriate for organizations managing many small or medium databases rather than minimizing costs for a single intermittently active database.
Serverless compute tier is the best solution for the scenario described because it provides both automatic scaling during high workloads and cost savings during idle periods. By adjusting compute dynamically and pausing when not in use, it combines operational efficiency with cost reduction. None of the other options offer the same combination of automated scaling and idle compute pausing, making serverless the clear choice for variable workload environments.
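The cost difference comes down to billed hours: a provisioned tier bills its vCores around the clock, while serverless bills only while the database is running. The Python sketch below makes that comparison concrete; the rate and hour figures are illustrative placeholders, not real Azure pricing.

```python
def monthly_compute_cost(active_hours_per_day: float, vcores: float,
                         vcore_hour_rate: float, days: int = 30,
                         serverless: bool = True) -> float:
    """Compare compute billing models: provisioned tiers bill 24h/day,
    serverless bills only while the database is active (paused time is
    storage-only). All rates here are illustrative, not Azure prices."""
    billed_hours = active_hours_per_day * days if serverless else 24 * days
    return billed_hours * vcores * vcore_hour_rate
```

For a database active four hours a day, serverless billing covers 120 hours a month instead of 720, which is the whole cost argument for intermittent workloads.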
Question 157
You want to offload read-only reporting queries from a primary Business Critical database without impacting write operations. Which feature should you enable?
A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect
Answer: A) Read Scale-Out
Explanation:
Read Scale-Out allows secondary replicas of a Business Critical database to handle read-only workloads. This capability is particularly useful for offloading reporting queries, analytics, and dashboards, which often consume significant read resources. By redirecting these operations to secondary replicas, the primary database remains focused on write operations, ensuring high performance for transactional workloads. This separation of read and write workloads helps maintain application responsiveness and prevents resource contention on the primary instance.
Auto-Failover Groups provide high availability by replicating databases across regions and enabling automatic failover during outages. While this feature ensures business continuity and seamless client redirection during failures, it does not provide a mechanism to offload read-only queries. Its focus is on resilience and disaster recovery rather than load distribution between primary and secondary replicas.
Elastic Pools allow multiple databases to share a pool of compute and storage resources. This approach is effective for cost optimization and resource efficiency across many databases with fluctuating workloads. However, Elastic Pools do not provide secondary replicas or support offloading read workloads from a primary database. They manage resources collectively but cannot separate read and write workloads on a single database.
Transparent Network Redirect helps client applications reconnect to a database automatically after failover events. While this improves the client experience and minimizes downtime during failover, it does not redistribute workloads or handle read-only query offloading. It is purely a network-level convenience for continuity rather than performance optimization.
Read Scale-Out is the correct solution for offloading read-heavy workloads from a primary Business Critical database. It allows secondary replicas to serve reporting and analytical queries without affecting transactional performance. Other options focus on disaster recovery, resource sharing, or client reconnections, none of which address the core requirement of offloading read queries to reduce load on the primary database.
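Routing to a readable secondary is controlled entirely by the client connection string: `ApplicationIntent=ReadOnly` is the documented switch that sends a session to a secondary replica when Read Scale-Out is enabled. The Python sketch below assembles such a connection string; the server and database names are placeholders.

```python
def build_connection_string(server: str, database: str,
                            read_only: bool = False) -> str:
    """Build an ADO.NET-style connection string for Azure SQL.

    ApplicationIntent=ReadOnly routes the session to a readable
    secondary replica when Read Scale-Out is enabled; the rest is a
    plain connection-string sketch with placeholder values."""
    parts = [
        f"Server=tcp:{server}.database.windows.net,1433",
        f"Database={database}",
        "Encrypt=True",
    ]
    if read_only:
        parts.append("ApplicationIntent=ReadOnly")
    return ";".join(parts)
```

A reporting application would use the read-only string while the transactional application keeps the default, so the two workloads land on different replicas.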
Question 158
You want to encrypt sensitive columns so that client applications can query them without exposing plaintext to administrators. Which feature should you implement?
A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security
Answer: A) Always Encrypted
Explanation:
Always Encrypted is specifically designed to protect sensitive column data by keeping it encrypted both at rest and in transit. Client applications perform encryption and decryption locally, ensuring that SQL Server or database administrators never have access to plaintext values. This allows applications to query encrypted data safely while complying with strict data privacy regulations. Sensitive information, such as credit card numbers or personally identifiable information, remains protected during database operations, preventing unauthorized access or insider threats.
Transparent Data Encryption encrypts the entire database at rest, protecting data from unauthorized access to the storage layer. However, TDE decrypts data when it is read by authorized queries. While TDE protects against disk-level theft or tampering, it does not prevent administrators from viewing sensitive data or performing queries that expose plaintext. It is an important security feature, but it does not satisfy the requirement of restricting visibility at the application level.
Dynamic Data Masking conceals sensitive information in query results by displaying masked values to unauthorized users. This is useful for reducing accidental exposure of data during application queries or testing. However, it does not encrypt the stored values themselves. Administrators or users with sufficient privileges can still access the unmasked data directly, so it does not provide the same level of protection as Always Encrypted.
Row-Level Security restricts access to specific rows in a table based on user context, such as department or role. It is useful for enforcing data access policies within a database but does not encrypt column data. Sensitive information may still be visible to users who have access to the table, and administrators can still see all data. Row-Level Security addresses access control rather than cryptographic protection.
Always Encrypted is the appropriate choice because it encrypts sensitive columns at all times while allowing client applications to operate normally. None of the other options prevent administrators from accessing plaintext data while allowing operational query usage. Always Encrypted uniquely balances usability and security, making it the correct feature for secure column-level encryption.
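Always Encrypted's deterministic mode works because equal plaintexts encrypt to equal ciphertexts, so the server can satisfy equality predicates without ever holding the key. The Python sketch below illustrates that pattern with an HMAC as a toy stand-in; real Always Encrypted uses AEAD_AES_256_CBC_HMAC_SHA256, and the key, column, and values here are all hypothetical.

```python
import hmac
import hashlib

# Client-side key: never leaves the application. In Always Encrypted this
# role is played by the column encryption key; HMAC is only a toy
# stand-in for real deterministic encryption.
CLIENT_KEY = b"application-held-secret"

def encrypt_deterministic(value: str) -> str:
    """Toy deterministic transform: equal plaintexts yield equal
    ciphertexts, which is what lets the server match on equality
    without ever seeing plaintext or holding the key."""
    return hmac.new(CLIENT_KEY, value.encode(), hashlib.sha256).hexdigest()

# "Server side": stores and compares ciphertext only.
stored = {encrypt_deterministic("4111-1111-1111-1111"): "order-42"}

def lookup(card_number: str):
    # The client encrypts the query parameter locally, so the database
    # engine and its administrators only ever see ciphertext.
    return stored.get(encrypt_deterministic(card_number))
```

This is why a DBA querying the table directly sees only ciphertext, while the key-holding application can still run equality lookups.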
Question 159
You want to store audit logs securely and durably in Azure for compliance with long-term retention requirements. Which destination should you select?
A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI
Answer: A) Azure Storage account
Explanation:
Azure Storage accounts provide highly durable, secure, and cost-effective storage for audit logs. They support retention for many years, making them suitable for compliance requirements such as regulatory audits and internal governance. Logs stored in Azure Storage can be encrypted at rest and managed with access control policies. The platform also supports features like versioning, replication, and lifecycle management, ensuring that audit logs remain intact and recoverable over time.
Log Analytics workspaces are optimized for querying and analyzing log data. They provide rich analytics and visualization capabilities, enabling administrators to monitor trends and detect anomalies. While excellent for operational insights, Log Analytics may not meet strict long-term retention requirements due to cost and storage limits. It is better suited for interactive analysis rather than persistent archival storage.
Event Hubs is a streaming platform for ingesting high-volume telemetry and event data. While it can temporarily store incoming events, it is not designed for long-term retention or compliance-level durability. Event Hubs serves as a conduit to other storage or analytics platforms, but it is not a final destination for archival data.
Power BI is a business intelligence and reporting tool. It is used for visualizing data and creating dashboards but does not provide storage for raw audit logs. It cannot ensure long-term retention or regulatory compliance for security logs.
Azure Storage accounts are the correct choice because they provide secure, durable, and cost-effective storage specifically designed for long-term retention of audit logs. Other options focus on analysis, visualization, or streaming, none of which satisfy compliance requirements for archival storage.
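The lifecycle management mentioned above works by moving blobs through access tiers as they age. The Python sketch below maps a log blob's age to a tier the way a lifecycle policy's `tierToCool`/`tierToArchive`/`delete` actions would; the day thresholds are illustrative assumptions, not defaults.

```python
def storage_tier(age_days: int, cool_after: int = 30,
                 archive_after: int = 180, delete_after: int = 2555) -> str:
    """Map a log blob's age to the tier an Azure Storage lifecycle
    policy would place it in. Thresholds are illustrative; 2555 days
    is roughly seven years, a common regulatory retention span."""
    if age_days >= delete_after:
        return "deleted"
    if age_days >= archive_after:
        return "archive"
    if age_days >= cool_after:
        return "cool"
    return "hot"
```

Tiering old logs to archive keeps multi-year retention affordable, which is a large part of why a storage account beats Log Analytics for pure archival.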
Question 160
You want to monitor anomalous access patterns and receive proactive alerts for potential security threats. Which feature should you enable?
A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing
Answer: A) Threat Detection
Explanation:
Threat Detection continuously monitors database activity for unusual patterns that may indicate security threats, such as SQL injection attempts, unexpected logins, or access from unusual locations. It generates proactive alerts and recommendations, allowing administrators to respond quickly to potential breaches or suspicious activity. This capability helps organizations maintain a secure posture and ensures compliance with security policies.
Query Store is primarily used to capture query execution statistics and store historical execution plans to identify performance regressions. While it provides detailed performance insights, it does not monitor for security threats or generate alerts for anomalous behavior. Its focus is strictly on query performance management rather than security monitoring.
Automatic Plan Correction automatically detects query plan regressions and attempts to correct them. This feature improves query performance reliability but does not include monitoring for suspicious activity, security anomalies, or unauthorized access attempts. Its scope is limited to performance rather than security.
SQL Auditing records database events and stores logs for compliance and forensic purposes. While auditing provides visibility into database activity, it does not automatically generate proactive alerts for anomalies. Administrators must manually review logs or integrate them with monitoring systems for real-time detection.
Threat Detection is the appropriate feature because it actively monitors for suspicious activity and generates alerts in real time, addressing the requirement to detect and respond to security threats. Other features focus on performance or auditing without providing proactive security alerting, making Threat Detection the correct choice.
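The contrast between passive auditing and proactive detection can be shown in a few lines: turning raw audit rows into alerts requires exactly the post-processing that Threat Detection performs automatically. The Python sketch below does this manually for failed-login bursts; the row format and threshold are assumptions for illustration.

```python
from collections import Counter

def failed_login_alerts(audit_rows: list[dict], threshold: int = 5) -> list[str]:
    """Turn passive audit rows into proactive alerts: flag any client IP
    with at least `threshold` failed logins. This manual post-processing
    is what Threat Detection provides out of the box."""
    failures = Counter(r["client_ip"] for r in audit_rows
                       if r["action"] == "LOGIN_FAILED")
    return [f"ALERT: {ip} had {n} failed logins"
            for ip, n in failures.items() if n >= threshold]
```

Without a step like this (or a SIEM doing it), audit logs sit unread until someone reviews them, which is why auditing alone does not satisfy the proactive-alerting requirement.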