Microsoft DP-300 Administering Microsoft Azure SQL Solutions Exam Dumps and Practice Test Questions Set 9 Q161-180
Question 161
You want to automatically scale compute resources based on workload and pause the database when idle to reduce costs. Which deployment model should you select?
A) Serverless compute tier
B) Hyperscale tier
C) Business Critical tier
D) Elastic Pool
Answer: A) Serverless compute tier
Explanation:
The Serverless compute tier is designed specifically for scenarios where database workloads are intermittent or unpredictable. It automatically scales compute resources up or down based on the current workload, allowing you to pay only for the compute you use rather than a fixed allocation. Additionally, when the database remains idle for a configurable period, it can automatically pause, which further reduces costs by stopping compute charges while maintaining the data in storage. This makes it particularly suitable for development, testing, or infrequently accessed production databases where cost optimization is a priority.
The Hyperscale tier offers massive storage capacity and the ability to scale compute resources independently from storage. It is optimized for very large databases and workloads that require rapid scale-out. However, while it provides flexible scaling, it does not support automatic pausing of idle databases, meaning compute costs are still incurred even during periods of inactivity. This makes Hyperscale more suitable for applications that require high throughput and large-scale storage rather than cost-sensitive intermittent workloads.
The Business Critical tier is intended for workloads that demand high performance, low latency, and high availability. It offers fixed compute resources with dedicated replicas for high availability and fault tolerance. While it provides excellent performance for critical transactional applications, it does not include automatic scaling or pausing capabilities. This means that costs are generally higher, and the resources remain allocated even when workload demand is low, making it less suitable for scenarios where minimizing compute costs during idle periods is essential.
Elastic Pools are designed to allow multiple databases to share a pool of resources, optimizing overall resource utilization across many databases. They help manage fluctuating workloads across multiple databases but do not provide the ability to pause individual databases. Compute resources are allocated based on pool capacity, and each database cannot scale or pause independently. While Elastic Pools are valuable for multi-database management, they do not address the requirement of pausing an idle database to reduce costs.
The Serverless compute tier is the correct choice for this scenario because it uniquely combines automatic scaling with the ability to pause during periods of inactivity. It ensures that workloads are handled efficiently while minimizing compute costs during idle periods. This tier is particularly useful for databases with unpredictable or intermittent usage patterns, offering both operational efficiency and cost savings.
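As a sketch of how the switch looks in practice, the T-SQL below moves a hypothetical database named HumanResources to a serverless General Purpose service objective (the GP_S_ prefix denotes serverless). Note that the auto-pause delay itself is configured through the Azure portal, PowerShell, or the Azure CLI rather than T-SQL.

```sql
-- Run against the master database of the logical server.
-- Moves the (hypothetical) database to serverless General Purpose on
-- Gen5 hardware with a 2-vCore maximum; GP_S_ indicates serverless.
ALTER DATABASE [HumanResources]
MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_2');
```

The auto-pause delay (for example, one hour of inactivity before pausing) is then set on the database resource itself, outside T-SQL.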
Question 162
You want to offload read-only reporting queries from a primary Business Critical database without impacting write performance. Which feature should you enable?
A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect
Answer: A) Read Scale-Out
Explanation:
Read Scale-Out is a feature that allows secondary replicas in a Business Critical deployment to handle read-only queries. By directing reporting or analytical workloads to these secondary replicas, the primary database can continue processing write operations without degradation in performance. This ensures that transactional workloads are not affected while enabling the offloading of resource-intensive queries. Read Scale-Out provides an immediate performance benefit for applications that require frequent reporting or analytics alongside active transactional processing.
Auto-Failover Groups are primarily used for high availability and disaster recovery. They replicate data across primary and secondary servers to enable automatic failover in case of outages. While Auto-Failover Groups can improve resiliency and continuity, they do not inherently offload read-only queries from the primary database, as the secondary replicas are mainly intended for failover purposes rather than continuous read operations.
Elastic Pools allow multiple databases to share a set of allocated resources. They are useful for managing resource utilization across a collection of databases, particularly when workloads fluctuate at different times for each database. However, Elastic Pools do not provide the capability to offload read-only queries to secondary replicas, meaning the primary database still handles all write and read workloads directly, which does not meet the requirement in this scenario.
Transparent Network Redirect is designed to ensure that client applications can reconnect seamlessly after a failover event. It simplifies client reconnections but does not provide any mechanism to offload read workloads or improve reporting query performance. It is a network-level feature and does not influence how queries are distributed or executed.
The correct solution is Read Scale-Out because it is specifically engineered to offload read-only workloads to secondary replicas without affecting the primary database’s write performance. It enables improved reporting and analytics without compromising transactional throughput, which directly addresses the stated requirement.
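In practice, Read Scale-Out is enabled on the database and the client opts in per connection by adding ApplicationIntent=ReadOnly to its connection string. The T-SQL below is a minimal check, under those assumptions, that a session actually landed on a read-only replica rather than the primary.

```sql
-- A connection carrying ApplicationIntent=ReadOnly in its connection
-- string is routed to a readable secondary replica. On that replica,
-- the database reports READ_ONLY updateability; on the primary, READ_WRITE.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS Updateability;
```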
Question 163
You want to encrypt sensitive columns and allow client applications to query them without exposing plaintext to administrators. Which feature should you implement?
A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security
Answer: A) Always Encrypted
Explanation:
Always Encrypted is a feature that ensures sensitive data remains encrypted both at rest and during query execution. Encryption and decryption occur on the client side, meaning the database engine and administrators never see plaintext values. This is particularly useful for scenarios involving highly sensitive data, such as personally identifiable information, credit card numbers, or health records. Applications can perform queries on encrypted data without exposing the underlying values, preserving both security and privacy.
Transparent Data Encryption (TDE) protects data at rest by encrypting the database files, backups, and logs. While TDE ensures that unauthorized users cannot access database files directly, it does not protect data during query execution. Database administrators and applications with sufficient privileges can still access plaintext data. Therefore, TDE addresses storage-level security but does not meet the requirement of preventing exposure of sensitive columns during queries.
Dynamic Data Masking (DDM) restricts the visibility of sensitive information in query results by presenting masked values to non-privileged users. While it helps reduce accidental exposure of sensitive data in outputs, it does not encrypt the underlying data. The original data remains stored in plaintext, and privileged users, including administrators, can still access it. DDM provides obfuscation at the presentation layer rather than actual encryption.
Row-Level Security (RLS) enforces access control at the row level, restricting which records a user can query based on their identity or role. It is useful for multi-tenant databases or selective access, but it does not provide encryption. Data remains unencrypted and fully readable by authorized users and administrators, which does not satisfy the requirement to prevent plaintext exposure.
Always Encrypted is the correct choice because it enables secure client-side querying while ensuring that sensitive columns remain encrypted throughout the database and during transmission. It provides strong security guarantees that cannot be achieved through TDE, DDM, or RLS alone.
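The T-SQL below sketches the server-side setup. All names are hypothetical: it assumes a column master key held in Azure Key Vault and a column encryption key wrapped by it; the ENCRYPTED_VALUE blob is normally generated by tooling such as SQL Server Management Studio, so only a truncated placeholder is shown here.

```sql
-- Column master key stored in Azure Key Vault (hypothetical vault/key path).
CREATE COLUMN MASTER KEY CMK_Main WITH (
    KEY_STORE_PROVIDER_NAME = 'AZURE_KEY_VAULT',
    KEY_PATH = 'https://contoso-vault.vault.azure.net/keys/AE-CMK/0123456789abcdef'
);

-- Column encryption key, wrapped by the master key. The ENCRYPTED_VALUE
-- is produced by client tooling; a truncated placeholder is shown.
CREATE COLUMN ENCRYPTION KEY CEK_Main WITH VALUES (
    COLUMN_MASTER_KEY = CMK_Main,
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E000001630075 -- placeholder, not a real key blob
);

-- Deterministic encryption allows equality lookups from the client driver;
-- a BIN2 collation is required on encrypted character columns.
CREATE TABLE dbo.Customers (
    CustomerId int IDENTITY PRIMARY KEY,
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Main,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);
```

Queries against the SSN column then flow through a client driver configured with Column Encryption Setting=Enabled, which handles encryption and decryption transparently.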
Question 164
You need to store audit logs securely in Azure with long-term retention for regulatory compliance. Which destination should you use?
A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI
Answer: A) Azure Storage account
Explanation:
Azure Storage accounts offer durable, secure, and cost-effective storage for audit logs with the ability to configure retention policies suitable for regulatory compliance. They support long-term storage and can be configured with immutable storage policies to prevent tampering, making them ideal for maintaining auditable records for multiple years as required by regulations. Azure Storage also allows fine-grained access control and encryption at rest to ensure data security.
Log Analytics workspace is primarily designed for log aggregation, monitoring, and querying. It provides excellent real-time analytics capabilities and integrates well with Azure Monitor. However, retention is generally more limited, and while logs can be exported, Log Analytics is not optimized for long-term archival storage for compliance purposes. It is better suited for operational monitoring rather than audit retention.
Event Hubs is an event streaming platform used to ingest large volumes of events in near real-time. While it can temporarily store events for processing or analytics pipelines, it is not intended for long-term retention or regulatory compliance. Event Hubs is better suited for transient data streams rather than long-term audit logs.
Power BI is a reporting and visualization service that allows users to analyze and display data. While it can connect to audit logs to create dashboards, it is not a storage service and cannot serve as a compliant repository for long-term retention.
Azure Storage accounts are the correct solution because they combine durability, security, and compliance capabilities. They provide an ideal environment for storing audit logs over long periods while meeting regulatory and security requirements.
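The audit destination itself is configured on the server or database through the Azure portal, PowerShell, or the CLI. Once the .xel audit files land in blob storage, they can be read back with T-SQL, as in this sketch (the storage account and container name are hypothetical):

```sql
-- Read audit records written to a (hypothetical) storage account,
-- newest first. The function accepts a blob container URL in Azure SQL.
SELECT event_time, server_principal_name, action_id, succeeded, statement
FROM sys.fn_get_audit_file(
    'https://contosoauditstore.blob.core.windows.net/sqldbauditlogs/',
    DEFAULT, DEFAULT)
ORDER BY event_time DESC;
```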
Question 165
You want to monitor query performance and preserve historical execution plans to detect regressions over time. Which feature should you enable?
A) Query Store
B) Extended Events
C) SQL Auditing
D) Intelligent Insights
Answer: A) Query Store
Explanation:
Query Store is a feature designed to capture query execution statistics and execution plans over time. It preserves historical information, allowing administrators to track performance trends, identify regressions, and analyze query behavior in a structured manner. Query Store simplifies performance tuning and troubleshooting by maintaining consistent historical context for workloads, enabling comparison between different execution plans and helping to prevent regressions after updates or schema changes.
Extended Events provide a powerful mechanism for capturing diagnostic data, such as query execution, wait times, and system events. However, they are more of a low-level monitoring tool and require manual setup, storage, and analysis. Extended Events do not automatically retain historical execution plans in a structured way, making them less suitable for identifying performance regressions over time without additional configuration and effort.
SQL Auditing logs database activity, such as changes to data, schema modifications, or permission updates. While auditing is important for compliance and security, it is not designed to capture execution plans or monitor query performance. It focuses on tracking user actions rather than providing performance insights.
Intelligent Insights offers proactive performance recommendations and alerts by analyzing database telemetry. While it helps identify potential issues and suggests optimizations, it does not maintain a comprehensive history of execution plans or allow detailed regression analysis. Its focus is on automated insights rather than historical tracking.
Query Store is the correct solution because it combines query performance monitoring with historical execution plan retention. This makes it an essential tool for detecting regressions, analyzing trends, and maintaining overall database performance over time.
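Query Store is on by default in Azure SQL Database, but it can be tuned explicitly, and its catalog views expose the retained history. The sketch below sets a 30-day cleanup window and then lists per-plan runtime statistics, which is the raw material for spotting a regression (the same query_id suddenly running under a slower plan_id):

```sql
-- Make Query Store settings explicit and keep 30 days of history.
ALTER DATABASE CURRENT SET QUERY_STORE = ON (
    OPERATION_MODE = READ_WRITE,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)
);

-- Average duration per captured plan; a regressed query shows the same
-- query_id with a newer plan_id and a higher avg_duration.
SELECT q.query_id, p.plan_id,
       rs.avg_duration, rs.last_execution_time
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p  ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;
```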
Question 166
You want to detect and automatically remediate query plan regressions in Azure SQL Database. Which feature should you use?
A) Automatic Plan Correction
B) Query Store
C) Intelligent Insights
D) Extended Events
Answer: A) Automatic Plan Correction
Explanation:
Automatic Plan Correction is designed to help maintain consistent query performance by identifying execution plans that result in regressions and automatically reverting to previously known good plans. It continuously monitors query execution and uses historical data to determine when a plan change negatively impacts performance. Once a regression is detected, it intervenes automatically without requiring manual intervention, ensuring that workloads remain stable and predictable. This feature is particularly useful in environments with frequent query plan changes due to updates, index changes, or schema modifications.
Query Store is closely related because it stores historical information about query execution plans and performance metrics. It enables administrators to analyze trends, detect regressions, and even force certain plans. However, Query Store itself does not actively correct regressions unless combined with manual intervention. It acts as a monitoring and diagnostic tool, making it highly valuable for analysis but not sufficient for fully automated remediation of regressions.
Intelligent Insights provides performance monitoring and recommendations for tuning queries, identifying long-running queries, and suggesting optimization steps. While it can flag potential issues and provide guidance, it does not automatically enforce any corrective actions. Administrators still need to review and implement the recommendations manually. Therefore, Intelligent Insights is more of a reactive and advisory tool rather than an automated corrective mechanism.
Extended Events is a low-level diagnostic framework that captures detailed event data from SQL Server and Azure SQL Database. It allows deep analysis of database activities, including query execution, wait times, and errors. While extremely flexible for troubleshooting and advanced monitoring, Extended Events does not have built-in capabilities to automatically detect or correct query plan regressions. Its primary role is to provide visibility into system behavior rather than enforce stability.
Automatic Plan Correction is the only option that integrates detection and automated remediation. By leveraging Query Store data behind the scenes, it can automatically enforce good plans and prevent performance degradation without requiring administrator intervention. This combination of monitoring, analysis, and automatic correction makes it uniquely suited for ensuring stable query performance in production environments.
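Enabling the feature and inspecting what it has done are both one-liners, sketched below. FORCE_LAST_GOOD_PLAN tells the engine to revert automatically to the last known good plan when a regression is detected, and the tuning-recommendations DMV records each detection and action taken:

```sql
-- Turn on automatic plan correction for the current database.
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Review detected regressions and any plans the engine has forced.
SELECT reason, score, state, details
FROM sys.dm_db_tuning_recommendations;
```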
Question 167
You want to enforce row-level access restrictions on a table based on user attributes such as department. Which feature should you enable?
A) Row-Level Security
B) Dynamic Data Masking
C) Always Encrypted
D) Transparent Data Encryption
Answer: A) Row-Level Security
Explanation:
Row-Level Security (RLS) allows organizations to restrict access to specific rows in a database table based on user context, such as role membership, department, or other attributes. Policies and predicates can be defined to enforce dynamic filters at query time, ensuring that users only see data they are authorized to access. This provides fine-grained control without requiring application-side changes or additional security layers. RLS is particularly useful in multi-tenant or department-specific scenarios where sensitive data must be restricted to subsets of users.
Dynamic Data Masking (DDM) obscures sensitive data by masking column values in query results. It allows users to see obfuscated versions of sensitive information while preventing exposure of the original data. However, DDM only masks values and does not enforce access controls at the row level. Users can still query all rows; they just see masked values. Therefore, it does not meet requirements for restricting row-level access.
Always Encrypted is a security feature that ensures sensitive data remains encrypted both at rest and in transit, with encryption and decryption happening client-side. It is designed to protect sensitive data such as credit card numbers or social security numbers from unauthorized access. While it prevents unauthorized reading of sensitive column data, it does not control which rows are returned to a user, so it cannot enforce access policies based on attributes like department.
Transparent Data Encryption (TDE) encrypts the database at rest to protect against unauthorized access to storage media. TDE works at the storage level rather than at the query or row level. It does not impact query results or user access policies and therefore cannot provide row-level access restrictions. Its purpose is to safeguard against physical data theft rather than enforce logical access controls.
Row-Level Security is the correct solution because it is specifically designed to dynamically filter rows based on user attributes. Unlike other options, it directly addresses the requirement to enforce access at the row level, ensuring only authorized users see the appropriate data. Its seamless integration with SQL queries and policies allows for a secure and maintainable implementation of fine-grained access control.
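A minimal RLS sketch follows. It assumes a Security schema already exists and that a hypothetical dbo.Employees table has a Department column; the application is assumed to stamp each session with the user's department via sp_set_session_context after connecting.

```sql
-- Predicate function: a row is visible when its Department matches the
-- value the application placed in SESSION_CONTEXT at connection time.
CREATE FUNCTION Security.fn_DeptPredicate (@Department sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN
    SELECT 1 AS fn_result
    WHERE @Department = CAST(SESSION_CONTEXT(N'Department') AS sysname);
GO

-- Bind the predicate as a filter (reads) and a block predicate (writes),
-- so users can neither see nor insert rows outside their department.
CREATE SECURITY POLICY Security.DeptFilter
    ADD FILTER PREDICATE Security.fn_DeptPredicate(Department) ON dbo.Employees,
    ADD BLOCK PREDICATE  Security.fn_DeptPredicate(Department) ON dbo.Employees AFTER INSERT
WITH (STATE = ON);
```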
Question 168
You want to monitor anomalous access patterns and receive proactive alerts for potential security threats. Which feature should you enable?
A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing
Answer: A) Threat Detection
Explanation:
Threat Detection is a proactive security feature that continuously monitors database activity for suspicious patterns, such as anomalous logins, unusual queries, potential SQL injection attempts, and other unauthorized access attempts. When such activity is detected, it can trigger alerts, allowing administrators to respond promptly. It provides actionable insights into potential threats in near real-time, which helps organizations maintain a robust security posture and mitigate risks before they escalate.
Query Store focuses on capturing query performance metrics and execution plans over time. It enables administrators to monitor performance trends, detect regressions, and analyze workloads. While valuable for performance troubleshooting and optimization, Query Store does not provide security monitoring or anomaly detection capabilities. It is strictly performance-oriented rather than security-focused.
Automatic Plan Correction ensures stable query performance by detecting plan regressions and enforcing previously known good plans. Its focus is on query performance management, not security. While it contributes to operational stability, it does not monitor access patterns or provide alerts for anomalous activity.
SQL Auditing records database activity, including logins, role changes, and executed statements. It creates a detailed log for compliance and forensic analysis. While it provides visibility into past activity, it does not proactively analyze patterns or send real-time alerts for suspicious behavior. Auditing is reactive, and administrators need to manually review logs to identify anomalies.
Threat Detection is the correct solution because it combines continuous monitoring with automated alerting for suspicious activity. Unlike Query Store, Automatic Plan Correction, or SQL Auditing, it specifically targets security threats, enabling timely responses to anomalous behavior and strengthening the overall security posture.
Question 169
You need to maintain database backups for several years to comply with regulatory retention policies. Which feature should you enable?
A) Long-Term Backup Retention
B) Geo-Redundant Backup Storage
C) Auto-Failover Groups
D) Transparent Data Encryption
Answer: A) Long-Term Backup Retention
Explanation:
Long-Term Backup Retention (LTR) is designed to store database backups for extended periods, typically multiple years, to meet regulatory or compliance requirements. LTR enables organizations to archive backups in Azure Storage and retrieve them when needed, supporting retention policies that span months or years. It ensures that historical data is preserved securely and can be restored in case of audits, investigations, or data recovery needs.
Geo-Redundant Backup Storage (GRS) replicates backups to a secondary geographic region to ensure disaster recovery and business continuity. While it enhances resilience against regional outages, it does not inherently extend the retention period of backups. Its focus is on availability and redundancy rather than long-term compliance.
Auto-Failover Groups are designed to provide high availability and automatic failover across regions. They ensure that databases remain accessible in the event of an outage but do not manage backup retention. Their purpose is continuity and uptime, not compliance with long-term storage policies.
Transparent Data Encryption (TDE) protects data at rest by encrypting the database files. While critical for securing backups and stored data, TDE does not determine how long backups are kept. It is complementary to LTR but does not address regulatory retention requirements directly.
Long-Term Backup Retention is the correct choice because it explicitly supports extended backup storage, meeting compliance and regulatory obligations. Unlike redundancy, failover, or encryption solutions, LTR directly addresses the need for multi-year retention and retrieval of backups.
Question 170
You want to reduce compute costs for a database that is idle most of the day while allowing automatic scaling during peak workloads. Which deployment model should you select?
A) Serverless compute tier
B) Hyperscale tier
C) Business Critical tier
D) Elastic Pool
Answer: A) Serverless compute tier
Explanation:
The Serverless compute tier is designed for databases with intermittent workloads. It automatically scales compute resources up or down based on demand and pauses the database during periods of inactivity. This dynamic behavior helps reduce operational costs by only consuming resources when needed. Additionally, when the database is paused, storage continues to be billed, but compute charges stop entirely, making serverless ideal for workloads that do not require constant compute availability.
Hyperscale tier allows massive storage and independent scaling of compute and storage. It is optimized for large databases and high throughput workloads, with rapid scaling capabilities. However, it does not provide automatic pausing for idle databases, so while it scales efficiently, it does not inherently reduce costs during periods of inactivity.
Business Critical tier is designed for workloads that require high availability, low latency, and strong transactional consistency. It provides fixed compute resources and high-performance storage but does not automatically scale or pause. As such, it is not suitable for cost reduction in environments where the database is idle most of the time.
Elastic Pool allows multiple databases to share compute resources within a pool. While it can optimize resource usage across many databases, it does not automatically pause individual databases or scale them independently. Its main benefit is resource sharing rather than cost reduction for sporadically active single databases.
The Serverless compute tier is the correct option because it uniquely combines automatic scaling during peak workloads with the ability to pause during idle periods, directly addressing the requirement to reduce compute costs while maintaining performance flexibility.
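Once a database is running serverless, its tier and recent utilization can be verified from T-SQL, as in this sketch. Sustained near-zero utilization is what precedes an auto-pause, so these views are a quick sanity check that the cost savings are actually materializing:

```sql
-- Confirm the current service objective; serverless objectives carry
-- an _S_ marker (for example, GP_S_Gen5_2).
SELECT DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS ServiceObjective;

-- Recent resource utilization, sampled roughly every 15 seconds;
-- sustained near-zero usage precedes an automatic pause.
SELECT end_time, avg_cpu_percent, avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```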
Question 171
You want to offload read-only reporting queries from a primary Business Critical database without affecting write operations. Which feature should you enable?
A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect
Answer: A) Read Scale-Out
Explanation:
Read Scale-Out is a feature in Azure SQL Database designed specifically to offload read-only workloads from the primary database. This allows reporting queries, analytical operations, and other read-heavy tasks to be executed on one of the secondary replicas. By diverting read operations, the primary database is preserved for write operations, which maintains transactional performance and reduces latency for end-users. Read Scale-Out is particularly useful in environments with a high volume of reporting queries that could otherwise degrade write performance on the primary database.
Auto-Failover Groups provide high availability by creating a secondary database in a different region that can automatically take over if the primary fails. While they do include a secondary replica, that replica is intended primarily for failover and disaster recovery rather than routine read operations. Consequently, Auto-Failover Groups alone are not the mechanism for offloading regular read queries; directing reporting traffic at the cross-region failover secondary would add replication distance and require extra routing configuration.
Elastic Pools are designed to manage and share compute and storage resources among multiple databases in a cost-efficient manner. They are effective for scenarios with fluctuating resource demands across many databases, but they do not create secondary replicas for read-only workloads. Therefore, Elastic Pools cannot be used to offload reporting queries from a Business Critical database while maintaining write performance on the primary.
Transparent Network Redirect ensures client applications automatically reconnect to the appropriate database after a failover event. This feature helps maintain connectivity in failover scenarios, but it does not address query offloading or reduce load on the primary database. Its purpose is purely related to network redirection post-failover and does not provide additional read replicas for scaling read operations.
Read Scale-Out is the optimal choice because it is explicitly designed to handle read-heavy workloads on secondary replicas, leaving the primary database free for writes. This separation of read and write operations helps maintain high transactional throughput and allows reporting and analytical queries to run without negatively affecting the performance of critical operations on the primary database. By enabling Read Scale-Out, organizations can ensure their Business Critical database continues to perform efficiently even under heavy reporting loads.
Question 172
You want to encrypt sensitive columns and allow client applications to query them without exposing plaintext to administrators. Which feature should you implement?
A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security
Answer: A) Always Encrypted
Explanation:
Always Encrypted is a feature of Azure SQL Database that allows sensitive data to remain encrypted both at rest and during query execution. The encryption and decryption occur on the client side, meaning administrators and database operators never see the plaintext data. This approach ensures that sensitive information, such as credit card numbers or personal identifiers, is protected against unauthorized access while still being usable in application queries. The ability to run queries over encrypted columns without exposing the data makes Always Encrypted a robust solution for protecting sensitive information.
Transparent Data Encryption secures data at rest by encrypting the entire database storage, including backups. However, when queries are executed, the data is decrypted automatically within the database engine, which means that database administrators and others with sufficient privileges can still access plaintext data. While TDE protects against physical theft or unauthorized access to storage media, it does not prevent exposure of sensitive data to administrators.
Dynamic Data Masking provides a way to obfuscate sensitive data in query results so that users see only masked values. While this is useful for reducing accidental exposure of sensitive information in applications, it does not encrypt the data at rest or in transit, and the underlying data remains accessible to administrators. Therefore, Dynamic Data Masking alone does not meet the requirement for encryption with restricted administrative access.
Row-Level Security restricts access to specific rows in a table based on user identity or context. This feature is valuable for enforcing access policies and preventing unauthorized users from viewing certain data, but it does not encrypt the data itself. It is therefore not suitable for scenarios where sensitive data must be protected from administrators and maintained in an encrypted state.
Always Encrypted is the correct solution because it ensures sensitive columns remain encrypted at all times, with decryption occurring only on the client side. This enables applications to query encrypted data safely while preventing administrators or other unauthorized users from viewing the plaintext values. The combination of encryption and client-side accessibility makes Always Encrypted the ideal choice for secure, operationally usable column-level protection.
Question 173
You want to store audit logs securely and durably for compliance with long-term retention requirements. Which destination should you select?
A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI
Answer: A) Azure Storage account
Explanation:
Azure Storage accounts provide a secure, durable, and centralized location for storing audit logs over long periods. They support features such as encryption at rest, access control, and replication options to ensure durability and compliance with regulatory requirements. Azure Storage is optimized for retention of large amounts of data, making it suitable for organizations that need to store logs for multiple years to meet compliance standards, including financial and healthcare regulations.
Log Analytics workspaces are primarily designed for querying, analyzing, and visualizing log data rather than long-term archival. While Log Analytics is excellent for real-time monitoring, alerting, and operational insights, storing logs for extended periods can become costly, and retention may not meet regulatory requirements without additional configuration. It is more suited for short- to medium-term operational use rather than long-term compliance.
Event Hubs is a highly scalable data streaming platform for ingesting large volumes of event and telemetry data. It is designed for real-time processing and analytics pipelines rather than durable storage. Event Hubs is excellent for temporary ingestion and downstream processing, but it is not intended for long-term archival of audit logs.
Power BI is a visualization and reporting tool, allowing users to create dashboards and reports from existing data sources. It cannot store raw logs or act as a secure long-term repository, and it is not suitable for regulatory compliance. Power BI is for analysis and presentation rather than persistent storage.
Azure Storage is the optimal choice for retaining audit logs because it combines durability, security, and scalability, ensuring logs can be preserved safely for years. It meets compliance requirements while providing mechanisms such as lifecycle policies and geo-redundancy, allowing organizations to enforce retention schedules and protect against accidental deletion or data loss. For secure, long-term archival of logs, Azure Storage is the recommended solution.
Question 174
You want to monitor anomalous access patterns in Azure SQL Database and receive proactive alerts for potential security threats. Which feature should you enable?
A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing
Answer: A) Threat Detection
Explanation:
Threat Detection in Azure SQL Database (now surfaced as Advanced Threat Protection within Microsoft Defender for SQL) is a proactive security feature that continuously monitors database activity for anomalous behavior. It identifies suspicious activities such as SQL injection attempts, unusual login patterns, and potential unauthorized access. When such events are detected, alerts are generated for administrators, allowing them to respond promptly to security threats. Threat Detection enhances database security by providing real-time monitoring and actionable insights without requiring manual log analysis.
Query Store is designed to monitor query performance over time by storing historical execution plans and statistics. While Query Store is valuable for identifying performance regressions and optimizing queries, it does not provide alerts for security-related anomalies. Its focus is purely on workload and query performance rather than security threats.
Automatic Plan Correction automatically detects and fixes query plan regressions by reverting to previously known good execution plans. This feature helps maintain consistent performance but is not intended to monitor access patterns or detect malicious activity. It is focused exclusively on query optimization rather than security monitoring.
SQL Auditing tracks database activity, including login events, query executions, and data modifications, and stores audit logs for review. While auditing provides a historical record of database events, it does not proactively analyze patterns for anomalies or generate immediate alerts. Auditing is reactive in nature, requiring manual review to identify potential threats.
Threat Detection is the correct choice because it combines monitoring, analysis, and alerting into a proactive security mechanism. By automatically identifying suspicious activity and notifying administrators, it allows organizations to respond quickly to potential breaches. Unlike Query Store, Automatic Plan Correction, or SQL Auditing, Threat Detection actively protects the database by focusing on anomalous access patterns and security threats.
Question 175
You want to maintain database backups for multiple years to satisfy regulatory retention policies. Which feature should you enable?
A) Long-Term Backup Retention
B) Geo-Redundant Backup Storage
C) Auto-Failover Groups
D) Transparent Data Encryption
Answer: A) Long-Term Backup Retention
Explanation:
Long-Term Backup Retention (LTBR) allows Azure SQL databases to retain full database backups for extended periods, up to 10 years, to meet regulatory and compliance requirements. This feature ensures that organizations can access historical backups for auditing, reporting, or legal purposes, providing a robust mechanism for maintaining data over long durations. LTBR allows administrators to define retention periods that comply with specific regulations and ensures backups are preserved in a secure, durable storage solution.
Geo-Redundant Backup Storage (GRS) provides replication of backups across multiple regions for disaster recovery purposes. While GRS ensures high availability and protection against regional outages, it does not inherently manage long-term retention. Its primary focus is on resilience rather than compliance with multi-year regulatory requirements.
Auto-Failover Groups are designed to provide high availability and enable automatic failover of databases to secondary regions. This feature ensures business continuity but does not extend the retention period of backups. It addresses uptime and recovery objectives rather than regulatory retention policies.
Transparent Data Encryption (TDE) protects data at rest by encrypting the database and its backups. While TDE secures the data from unauthorized access, it does not manage backup retention schedules or durations. Encryption and retention serve different purposes, and TDE alone cannot fulfill regulatory requirements for multi-year storage of backups.
Long-Term Backup Retention is the correct solution because it explicitly addresses the need to retain backups for several years. It ensures compliance with regulatory requirements by providing secure, durable, and manageable long-term storage. Organizations can define retention policies aligned with legal obligations, maintain audit-ready backups, and protect critical data over extended periods while complementing disaster recovery and security features like GRS and TDE.
Question 176
You want to reduce compute costs for a database that is idle most of the day while supporting automatic scaling during high workloads. Which deployment model should you select?
A) Serverless compute tier
B) Hyperscale tier
C) Business Critical tier
D) Elastic Pool
Answer: A) Serverless compute tier
Explanation:
The Serverless compute tier in Azure SQL Database is specifically designed for scenarios where workloads are intermittent or unpredictable. This tier allows the database to automatically scale compute resources up or down depending on demand. When the database is idle, it can pause entirely, which dramatically reduces compute costs since you are not billed for inactive compute resources. Additionally, when a workload is detected, the serverless tier quickly resumes and scales resources dynamically to accommodate the demands of queries or transactions. This makes it an ideal choice for applications that experience periods of low activity followed by spikes in usage.
The Hyperscale tier, on the other hand, is designed for very large databases and workloads requiring rapid scaling of storage and compute independently. While it allows for virtually unlimited database size and fast scaling, it does not support pausing the database when idle. Compute resources are always allocated, which means costs remain constant even during periods of inactivity. Hyperscale is excellent for high-volume, consistently active workloads but is not optimized for cost savings when activity is low.
The Business Critical tier provides high availability and dedicated compute resources with robust performance guarantees. This tier is ideal for workloads that require low latency and high transactional throughput, but it does not provide automatic scaling or the ability to pause during idle periods. Because compute resources are reserved and continuously allocated, Business Critical can be costlier for workloads that are sporadic or primarily idle. Its strengths lie in performance and high availability rather than cost optimization for intermittent workloads.
Elastic Pool allows multiple databases to share a set of resources, which can improve utilization and provide cost efficiency across several databases. However, it does not offer the ability to pause individual databases or automatically scale compute for a single workload. Resources are allocated based on the collective pool, which may reduce individual costs but does not dynamically adjust for fluctuating workloads on a single database.
Given the scenario, where the database is mostly idle but must support bursts of high activity, the Serverless compute tier is the optimal choice. It balances cost savings during idle times with automated scaling during high demand, providing flexibility, efficiency, and financial optimization that the other tiers do not offer.
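As a sketch of how an existing database is moved onto the serverless model, the service objective can be changed with T-SQL; serverless objectives follow the `_S_` naming pattern (for example `GP_S_Gen5_1` for General Purpose, Gen5 hardware, 1 max vCore). The database name here is hypothetical, and the auto-pause delay itself is configured through the portal, Azure CLI, or REST API rather than T-SQL.

```sql
-- Move a hypothetical database 'SalesDb' to the serverless compute
-- model (General Purpose tier, Gen5, max 1 vCore).
-- Run this from the master database of the logical server.
ALTER DATABASE SalesDb
    MODIFY (EDITION = 'GeneralPurpose',
            SERVICE_OBJECTIVE = 'GP_S_Gen5_1');
```

After the change, compute is billed per second based on vCores actually used, and the database becomes eligible to auto-pause once the configured idle delay elapses.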
Question 177
You want to offload read-only queries from a primary Business Critical database without affecting write operations. Which feature should you enable?
A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect
Answer: A) Read Scale-Out
Explanation:
Read Scale-Out enables secondary replicas of a Business Critical database to handle read-only workloads, such as reporting, analytics, or business intelligence queries. By directing read-only queries to these replicas, the primary database’s write operations are unaffected, ensuring transactional consistency and maintaining performance for critical workloads. This feature is particularly useful in high-traffic environments where reporting queries could otherwise slow down transaction processing on the primary database.
Auto-Failover Groups are designed to provide automatic failover between primary and secondary databases for high availability and disaster recovery. While they maintain data replication and enable seamless failover during outages, they do not provide a mechanism to offload read-only queries under normal operation. Their primary purpose is continuity, not workload distribution, which means they are not suitable for reducing the load on a primary database during routine operations.
Elastic Pool allows multiple databases to share compute and storage resources. While this can optimize resource utilization across databases and reduce costs, it does not provide secondary replicas for read-only query offloading. Queries directed to a database in an elastic pool will still consume compute from the shared pool, meaning read-heavy workloads can still impact primary database performance.
Transparent Network Redirect is a mechanism that redirects client connections to the appropriate primary or secondary database after a failover. While it ensures minimal downtime and correct routing of client requests, it does not provide read scaling or offloading capabilities during normal operation. Its function is limited to connection management post-failover.
Considering the need to offload read-only workloads without impacting write performance, Read Scale-Out is the correct feature. It ensures high availability for read operations while keeping transactional performance intact, which none of the other options provide in the context of read-only workload distribution.
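Clients opt into the read-only replica through the `ApplicationIntent=ReadOnly` keyword in the connection string; Read Scale-Out is enabled by default on Business Critical databases. A quick T-SQL check confirms which kind of replica a session was routed to:

```sql
-- Connect with ApplicationIntent=ReadOnly in the connection string,
-- then verify the session landed on a read-only replica:
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS Updateability;
-- Returns READ_ONLY on a secondary replica, READ_WRITE on the primary.
```

This is useful when validating that reporting tools are genuinely offloaded and not silently connecting to the primary.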
Question 178
You want to encrypt sensitive columns and allow client applications to query them without exposing plaintext to administrators. Which feature should you implement?
A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security
Answer: A) Always Encrypted
Explanation:
Always Encrypted is a column-level encryption feature designed to protect sensitive data, such as credit card numbers or social security numbers, throughout its lifecycle. Data is encrypted both at rest and in transit, and crucially, the encryption keys are never exposed to the database engine itself. This allows client applications to perform queries on encrypted data without administrators ever seeing the plaintext. It provides strong security assurances for scenarios where compliance and privacy are critical.
Transparent Data Encryption (TDE) encrypts the entire database at rest, including backups and data files, protecting against physical data theft. However, when the data is queried, it is decrypted automatically for the database engine, meaning administrators can access plaintext data. TDE secures data storage but does not protect sensitive columns from being exposed to privileged users during normal operations.
Dynamic Data Masking (DDM) hides sensitive data in query results, replacing the original values with masked representations for non-privileged users. While useful for reducing accidental exposure in query results, DDM does not encrypt the underlying data. Data remains in plaintext at rest and in transit and is accessible by administrators or users with sufficient privileges.
Row-Level Security (RLS) restricts access to specific rows in a table based on user permissions. While it is a valuable tool for enforcing access policies, it does not provide encryption. RLS focuses on controlling access to subsets of data rather than securing sensitive values from unauthorized viewing or querying.
Given the requirement to encrypt sensitive columns while allowing client-side queries without exposing plaintext to administrators, Always Encrypted is the only feature that fulfills both criteria. It combines strong encryption with client-side query capabilities, making it ideal for highly sensitive data protection.
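At the schema level, Always Encrypted is declared per column. A minimal sketch follows; the table is hypothetical, and the column encryption key (`CEK_Auto1` here) is assumed to have already been provisioned along with its column master key, typically via SSMS or PowerShell.

```sql
-- Hypothetical table with an Always Encrypted column. Deterministic
-- encryption permits equality lookups from the client driver; it
-- requires a BIN2 collation on character columns.
CREATE TABLE dbo.Customers (
    CustomerId INT IDENTITY(1,1) PRIMARY KEY,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Auto1,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);
```

Client applications then enable `Column Encryption Setting=Enabled` in their connection string so the driver encrypts parameters and decrypts results transparently, while the database engine and its administrators only ever see ciphertext.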
Question 179
You want to store audit logs securely and durably for compliance with long-term retention requirements. Which destination should you select?
A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI
Answer: A) Azure Storage account
Explanation:
Azure Storage accounts provide secure, durable, and cost-effective storage options suitable for long-term retention of audit logs. Storage accounts can be configured with redundancy options like geo-replication to ensure durability, and access controls can be applied to meet compliance requirements. They are designed for retention periods spanning multiple years, making them ideal for regulatory or auditing purposes.
Log Analytics workspaces are optimized for collecting, querying, and analyzing log and telemetry data. While they allow insights into performance and usage patterns and support shorter-term retention for operational analytics, they are not primarily intended for long-term compliance storage. Keeping logs for multiple years in Log Analytics can become cost-prohibitive and does not meet all archival requirements.
Event Hubs is a highly scalable event ingestion service, suitable for streaming telemetry or log data to downstream processing systems. However, Event Hubs is not designed for permanent storage. Its primary function is real-time ingestion and processing, and logs stored in Event Hubs would need to be moved to another storage service for long-term retention.
Power BI is a reporting and visualization tool and cannot serve as a secure, durable repository for raw audit logs. While it can display insights derived from logs, it does not provide the underlying storage or retention capabilities needed for compliance.
Given the need for long-term, secure storage of audit logs for compliance purposes, an Azure Storage account is the appropriate choice. It meets durability, security, and retention requirements that the other options cannot fully satisfy.
Question 180
You want to monitor anomalous access patterns in Azure SQL Database and receive proactive alerts for potential security threats. Which feature should you enable?
A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing
Answer: A) Threat Detection
Explanation:
Threat Detection is a security feature in Azure SQL Database designed to monitor database activity continuously for anomalous or suspicious behavior. It can identify potential threats such as SQL injection attempts, unusual login patterns, or unauthorized access attempts. When such behavior is detected, the system generates alerts that can be sent to administrators, allowing rapid response to potential security incidents. This proactive monitoring helps organizations prevent breaches and maintain compliance with security policies.
Query Store tracks query performance over time, capturing execution plans and query statistics. While it is a valuable tool for performance monitoring and troubleshooting query regressions, it does not provide real-time security alerts or monitor for anomalous access patterns. Its focus is strictly on performance rather than security.
Automatic Plan Correction is a feature aimed at identifying and remediating query plan regressions to maintain database performance. It automatically enforces previously known good execution plans to avoid slowdowns. This functionality does not involve monitoring for security threats or access anomalies, so it does not meet the requirements of the scenario.
SQL Auditing logs database activity for compliance and forensic purposes, including login attempts, data changes, and other database actions. While auditing provides a detailed historical record, it does not actively detect anomalies or send real-time alerts. Administrators must manually review logs to identify potential security threats.
Threat Detection is the correct feature for proactively monitoring and alerting administrators to anomalous access patterns. It provides real-time detection of suspicious activity, which neither Query Store, Automatic Plan Correction, nor SQL Auditing fully offer in the context of active threat monitoring.