Microsoft DP-300 Administering Microsoft Azure SQL Solutions Exam Dumps and Practice Test Questions Set 7 Q121-140
Question 121
You want to provide high availability across regions for your Azure SQL Database and ensure automatic client redirection after failover. Which feature should you enable?
A) Auto-Failover Groups
B) Read Scale-Out
C) Transparent Data Encryption
D) Elastic Pool
Answer: A) Auto-Failover Groups
Explanation:
Auto-Failover Groups allow Azure SQL Databases to be replicated across multiple regions to provide high availability and disaster recovery. This feature ensures that if the primary database becomes unavailable due to regional outages or failures, client applications can automatically redirect their connections to the secondary database in another region with minimal downtime. Auto-Failover Groups support automatic failover of multiple databases on the same logical server and expose stable read-write and read-only listener endpoints, so client connection strings remain valid after failover, which is critical for seamless application continuity.
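Failover groups themselves are created through the portal, PowerShell, or the Azure CLI rather than T-SQL, but the health of the underlying geo-replication link can be checked from the database. A minimal sketch, assuming a hypothetical failover group named myfg:

```sql
-- Run in the user database on the primary to inspect geo-replication health.
SELECT partner_server,
       replication_state_desc,   -- e.g. CATCH_UP when the secondary is in sync
       last_replication,
       replication_lag_sec
FROM sys.dm_geo_replication_link_status;

-- Clients connect through the failover group listeners, e.g.:
--   read-write:  myfg.database.windows.net
--   read-only:   myfg.secondary.database.windows.net
-- These DNS names are re-pointed automatically after failover, so
-- application connection strings never need to change.
```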
Read Scale-Out allows read-only workloads to be directed to a readable secondary replica, helping improve performance for reporting or analytical workloads. However, it does not provide cross-region replication, nor does it handle automatic client redirection during failover scenarios. This means that while read scalability is improved, high availability across regions cannot be achieved using Read Scale-Out alone.
Transparent Data Encryption focuses on securing data at rest by encrypting the database, log files, and backups. While TDE protects sensitive data against unauthorized access, it does not handle replication, failover, or high availability concerns. Enabling TDE would not allow applications to automatically redirect to a secondary database in the event of an outage, so it does not satisfy the requirements for cross-region high availability.
Elastic Pool is designed to optimize resource utilization across multiple databases within a single server by sharing compute and storage resources. While it provides efficiency and cost savings, it does not offer cross-region replication or automatic failover. Elastic Pool does not help with disaster recovery or seamless client connection handling after a regional failure. Therefore, Auto-Failover Groups are the correct choice because they combine cross-region replication with automatic client redirection to ensure minimal disruption.
Question 122
You need to encrypt database backups for compliance while ensuring ongoing queries can still execute normally. Which feature should you enable?
A) Transparent Data Encryption
B) Always Encrypted
C) Dynamic Data Masking
D) Row-Level Security
Answer: A) Transparent Data Encryption
Explanation:
Transparent Data Encryption encrypts the entire database, including its data files, transaction logs, and backups, providing a strong layer of security for data at rest. One of the key advantages of TDE is that it operates transparently to applications, meaning existing queries and transactions continue to function normally without modification. This is particularly important for compliance requirements where sensitive information must be encrypted at all times, including backups stored in Azure or on disk.
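TDE is enabled by default on new Azure SQL databases; the following sketch shows how to enable it explicitly and verify the encryption state (the database name is hypothetical):

```sql
-- Enable TDE for a specific database.
ALTER DATABASE [SalesDb] SET ENCRYPTION ON;

-- Verify: encryption_state = 3 means the database is fully encrypted.
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       percent_complete
FROM sys.dm_database_encryption_keys;
```

Because encryption and decryption happen transparently in the engine's I/O path, no application or query changes are needed.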
Always Encrypted focuses on protecting sensitive columns by encrypting data both in transit and at rest, ensuring that only client applications with the correct keys can decrypt it. While this enhances security for specific data columns, it does not automatically encrypt database backups, and configuring it for existing applications can require schema and application changes. Therefore, Always Encrypted does not fully address the backup encryption requirement.
Dynamic Data Masking is a feature that obscures sensitive data in query results to prevent unauthorized viewing. It only affects how data is presented to users and does not encrypt data at rest or in backups. Consequently, while DDM is useful for limiting exposure, it does not satisfy compliance requirements for backup encryption.
Row-Level Security restricts access to specific rows based on user characteristics, such as department or role. While it enhances data access control, it does not provide encryption for storage or backups. It protects sensitive data from being read by unauthorized users but cannot ensure that backups are encrypted or compliant. Transparent Data Encryption is the correct solution because it guarantees that all stored data, including backups, is encrypted while maintaining normal database operations.
Question 123
You want to store audit logs securely in Azure for long-term regulatory compliance. Which destination should you select?
A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI
Answer: A) Azure Storage account
Explanation:
Azure Storage accounts provide highly durable and secure storage for long-term retention of audit logs. They support features such as redundancy, access control, encryption at rest, and lifecycle management, all of which are critical for meeting regulatory compliance standards. Using storage accounts allows organizations to retain audit logs for extended periods while ensuring the integrity and confidentiality of the data, which is essential for audits or investigations.
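Once auditing is configured to write .xel files to a storage account, the retained logs can be queried directly from T-SQL with sys.fn_get_audit_file. The storage URL below is hypothetical:

```sql
-- Read archived audit records straight from blob storage.
SELECT event_time,
       action_id,
       succeeded,
       server_principal_name,
       database_name,
       statement
FROM sys.fn_get_audit_file(
         'https://myauditlogs.blob.core.windows.net/sqldbauditlogs/myserver/mydb/',
         DEFAULT, DEFAULT);
```

This makes the storage account useful not only for retention but also for ad hoc forensic queries during an audit.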
Log Analytics workspaces are designed for querying, analyzing, and visualizing log data in near real time. While they offer powerful insights and alerting capabilities, they are not optimized for long-term storage of audit logs because retention periods can be limited and may not meet regulatory requirements. They are better suited for monitoring and operational analytics rather than compliance-focused storage.
Event Hubs is a streaming platform used to ingest and process large volumes of telemetry or event data in real time. Although Event Hubs retains events briefly for consumers to read, it is not designed for long-term, durable storage and lacks multi-year retention policies and compliance-grade access controls for historical logs.
Power BI is a business intelligence tool for visualizing and analyzing data. It does not provide storage capabilities or compliance-grade retention for audit logs. While Power BI can visualize audit data, it cannot securely store it for long-term regulatory requirements. Azure Storage accounts are the best option because they combine secure, durable, and centralized storage with compliance-friendly features for audit log retention.
Question 124
You want to reduce contention in tempdb for a high-concurrency Azure SQL Managed Instance. Which configuration should you modify?
A) Tempdb file count
B) Availability Zone
C) Service Endpoint
D) Geo-Restore settings
Answer: A) Tempdb file count
Explanation:
Increasing the number of tempdb data files in a SQL Managed Instance reduces allocation contention when multiple threads concurrently perform temporary operations such as sorts, joins, and temporary table usage. Each tempdb file can handle a portion of the workload, distributing allocation requests and minimizing bottlenecks that degrade performance under high concurrency. Configuring multiple tempdb files is a well-established best practice for SQL Server workloads that involve heavy use of temporary objects.
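In SQL Managed Instance the tempdb file layout can be inspected and changed with ordinary T-SQL; the instance manages physical file placement itself. A sketch (file name and size are illustrative):

```sql
-- Check the current tempdb data file layout.
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;

-- Add a data file. Keep all tempdb data files equally sized so the
-- proportional-fill algorithm distributes allocations evenly.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev_extra, SIZE = 1024MB);
```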
Availability Zones provide high availability and resiliency against datacenter-level failures, ensuring that the managed instance continues to operate even if one zone goes down. However, Availability Zones do not address internal database contention issues within tempdb and will not improve performance for concurrent workloads. They focus on redundancy rather than operational efficiency.
Service Endpoints allow secure network access to Azure services over the Azure backbone network, improving connectivity and security. While important for network design, Service Endpoints have no impact on tempdb contention or database performance. They cannot help with allocation bottlenecks or concurrent processing in tempdb.
Geo-Restore settings are related to disaster recovery, allowing a database to be restored from a geo-redundant backup in another region. This feature provides recovery options in case of a catastrophic failure but does not improve tempdb performance or reduce contention. Modifying the tempdb file count is the correct approach because it directly addresses the performance bottleneck caused by high-concurrency workloads in SQL Managed Instances.
Question 125
You need to enforce row-level access restrictions based on department. Which feature should you enable?
A) Row-Level Security
B) Dynamic Data Masking
C) Always Encrypted
D) Transparent Data Encryption
Answer: A) Row-Level Security
Explanation:
Row-Level Security enables fine-grained access control by applying predicates to filter rows dynamically based on user attributes, such as department membership. This ensures that users can only access the rows they are authorized to see, which is essential for regulatory compliance and protecting sensitive information. RLS policies are applied at the database level and work transparently to applications without requiring changes to queries.
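A minimal RLS sketch for the department scenario, assuming a hypothetical dbo.Sales table with a Department column and a session-context key set by the application at login:

```sql
CREATE SCHEMA Security;
GO
-- Inline predicate function: a row qualifies when its Department matches
-- the department stored in the session context.
CREATE FUNCTION Security.fn_department_filter(@Department AS nvarchar(50))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS allowed
       WHERE @Department = CAST(SESSION_CONTEXT(N'department') AS nvarchar(50));
GO
-- Bind the predicate to the table; filtering is now automatic for every query.
CREATE SECURITY POLICY Security.DepartmentPolicy
ADD FILTER PREDICATE Security.fn_department_filter(Department)
ON dbo.Sales
WITH (STATE = ON);
```

Because the policy is evaluated inside the engine, existing application queries need no changes.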
Dynamic Data Masking obscures sensitive column values in query results to prevent unauthorized exposure. While this protects data presentation, it does not prevent access to the underlying rows: determined users can still infer masked values through carefully crafted predicates, and masking does not restrict which rows a query returns, so DDM alone does not enforce strict row-level access restrictions.
Always Encrypted protects sensitive data in specific columns by encrypting it on the client side. While it secures data from exposure in transit and at rest, it does not implement access control based on row-level attributes. Users with access to the database may still query rows they should not see if no filtering is applied.
Transparent Data Encryption encrypts the entire database at rest, safeguarding data and backups. However, it does not control which rows a user can access. TDE ensures storage security but cannot restrict access to specific rows based on department or other criteria. Row-Level Security is the correct solution because it enforces dynamic access restrictions at the row level while maintaining normal database operations and compliance with access policies.
Question 126
You want to monitor anomalous access patterns in Azure SQL Database and receive proactive alerts. Which feature should you enable?
A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing
Answer: A) Threat Detection
Explanation:
Threat Detection (Advanced Threat Protection, now part of Microsoft Defender for SQL) is designed to identify unusual or suspicious activities within an Azure SQL Database. It continuously monitors the database for potential security threats, including SQL injection attempts, abnormal login attempts, and unexpected changes in database access patterns. When it detects anomalies, it generates alerts and notifies administrators, enabling proactive security responses. This real-time monitoring is critical for organizations that need to maintain high levels of security and quickly mitigate risks that could lead to breaches or data compromise. Threat Detection combines behavioral analysis and policy-based checks to provide comprehensive protection, making it a key security tool for sensitive workloads.
Query Store, in contrast, focuses on performance management. It tracks query execution plans and their runtime statistics to help identify performance regressions and plan changes. While it provides insight into workload patterns, it does not monitor security threats, unusual login attempts, or suspicious access, which makes it insufficient for proactive security alerting. Organizations that require security monitoring would not benefit from Query Store in this context.
Automatic Plan Correction is another performance-oriented feature. It detects query plan regressions and can automatically revert to a previous optimal plan to maintain consistent performance. Although valuable for performance stability, it does not analyze access patterns, detect potential threats, or send security alerts. Using Automatic Plan Correction would not address the need for monitoring anomalous access or notifying administrators about security events.
SQL Auditing captures detailed logs of database activity, including successful and failed logins, data access, and administrative actions. These logs are useful for compliance and forensic investigations, as they provide a historical record of events. However, SQL Auditing does not provide real-time anomaly detection or proactive alerting. Administrators must manually analyze logs or create additional monitoring solutions to detect suspicious behavior.
The reasoning for selecting Threat Detection over the other options is straightforward. It is the only feature that combines continuous monitoring, anomaly detection, and automated alerting. While Query Store, Automatic Plan Correction, and SQL Auditing serve essential roles in performance and compliance, they do not meet the requirement for proactive identification of security threats. Threat Detection ensures administrators are immediately informed about potentially malicious activity, allowing timely intervention, which is essential for maintaining database security and compliance in dynamic environments.
Question 127
You need to maintain database backups for multiple years to comply with regulatory retention policies. Which feature should you enable?
A) Long-Term Backup Retention
B) Geo-Redundant Backup Storage
C) Auto-Failover Groups
D) Transparent Data Encryption
Answer: A) Long-Term Backup Retention
Explanation:
Long-Term Backup Retention (LTR) allows organizations to store Azure SQL Database backups for extended periods, typically ranging from several months to multiple years. This feature is essential for compliance with regulatory requirements that mandate long-term archival of critical data. Backups are stored in Azure Storage and can be restored to a specific point in time within the retention period. LTR provides both flexibility and security for long-term data storage, ensuring that databases can be recovered in accordance with organizational and legal policies.
Geo-Redundant Backup Storage (GRS) is primarily focused on disaster recovery rather than extended retention. It replicates backups to a secondary geographic region to protect against regional outages, ensuring high availability and resiliency. While GRS enhances disaster recovery capabilities, it does not extend the retention period beyond the short-term point-in-time-restore window (a maximum of 35 days). Organizations requiring multi-year storage for compliance cannot rely on GRS alone.
Auto-Failover Groups improve database availability by enabling automatic failover between primary and secondary databases in different regions. This feature ensures business continuity during outages but is not designed to store historical backups or maintain data for compliance purposes. It addresses operational availability rather than regulatory retention.
Transparent Data Encryption (TDE) encrypts data at rest to protect against unauthorized access. While critical for data security, TDE does not manage backup retention or long-term archival. It ensures stored data is secure, but it does not satisfy regulatory requirements that mandate keeping historical backups for years.
Long-Term Backup Retention is the only feature specifically designed to meet regulatory and compliance requirements for storing database backups for extended periods. By leveraging LTR, organizations can satisfy legal obligations, maintain historical versions for audits, and ensure reliable recovery options for critical data. This makes LTR the correct choice when long-term archival and compliance are required.
Question 128
You want to reduce compute costs for a database that is idle most of the day while ensuring automatic scaling during peak workload periods. Which deployment model should you select?
A) Serverless compute tier
B) Hyperscale tier
C) Business Critical tier
D) Elastic Pool
Answer: A) Serverless compute tier
Explanation:
The Serverless compute tier dynamically adjusts compute resources based on workload demand. It automatically scales up during periods of high activity and can pause the database when idle, reducing costs significantly. This flexibility makes it ideal for workloads with unpredictable or intermittent usage patterns. Additionally, serverless ensures that resources are used efficiently without requiring constant manual intervention, helping organizations optimize cloud expenditure while maintaining performance during peak periods.
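The serverless compute model is chosen at deployment time through the portal, CLI, or PowerShell, but it can be confirmed from T-SQL: serverless service objectives carry an _S_ marker (for example GP_S_Gen5_2). A verification sketch:

```sql
-- List each database's edition and service objective; a serverless
-- database shows an objective such as GP_S_Gen5_2.
SELECT d.name,
       dso.edition,
       dso.service_objective
FROM sys.databases AS d
JOIN sys.database_service_objectives AS dso
  ON d.database_id = dso.database_id;
```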
Hyperscale tier is designed for massive databases with independent scaling of storage and compute. While it provides rapid growth capacity and high performance for very large datasets, it does not pause during inactivity. Organizations using Hyperscale will continue to incur compute costs even when the database is idle, which does not meet the cost-reduction requirement in this scenario.
Business Critical tier offers fixed compute resources optimized for low latency and high I/O workloads. It provides high availability and excellent transactional performance but cannot scale automatically or pause during idle periods. This makes it more suitable for consistently busy databases rather than workloads with sporadic usage patterns.
Elastic Pool allows multiple databases to share a set of resources. It provides flexibility in resource allocation among several databases but does not enable pausing individual databases or dynamically scaling compute based on individual workload peaks. While it reduces cost across multiple databases, it is not optimized for single database cost efficiency with variable workloads.
Serverless compute tier is the correct choice because it combines automatic scaling with the ability to pause during idle periods, delivering significant cost savings while ensuring performance during high-demand periods. Its dynamic nature makes it particularly suitable for workloads that are not consistently active but require responsiveness when needed.
Question 129
You want to offload reporting queries from a primary Business Critical database without affecting write operations. Which feature should you enable?
A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect
Answer: A) Read Scale-Out
Explanation:
Read Scale-Out enables offloading of read-only workloads to secondary replicas in a Business Critical database. This improves performance for reporting and analytical queries without impacting the primary replica that handles transactional writes. By separating read and write operations, organizations can maintain high transaction throughput while providing real-time access to reporting data. This feature is particularly valuable in environments with heavy reporting requirements that could otherwise degrade primary database performance.
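Routing to a readable secondary requires no code changes beyond the connection string, and a session can confirm which replica it landed on. A sketch with hypothetical server and database names:

```sql
-- Reporting clients add ApplicationIntent to the connection string:
--   Server=myserver.database.windows.net;Database=mydb;
--   ApplicationIntent=ReadOnly;...
--
-- Inside the session, verify which replica is serving it:
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability');
-- Returns READ_ONLY on a secondary replica, READ_WRITE on the primary.
```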
Auto-Failover Groups are designed to provide high availability and disaster recovery by automatically failing over databases to a secondary server in another region. While they improve resilience, they do not offload read workloads. All read and write operations continue to affect the primary database until failover occurs.
Elastic Pool provides a shared resource model across multiple databases. It helps optimize costs and manage workloads across databases but does not create secondary replicas for offloading read queries. Reporting queries executed on the primary database could still impact transactional performance.
Transparent Network Redirect ensures seamless client reconnection after failover in Auto-Failover Groups. It addresses client routing and connection continuity but does not distribute query workloads or offload read operations from the primary database.
Read Scale-Out is the correct option because it allows secondary replicas to handle read-only operations independently of the primary database. This separation ensures write operations remain performant while reporting workloads run efficiently, meeting the specific requirement of offloading reads without affecting transactional writes.
Question 130
You want to encrypt sensitive columns and allow applications to query them without exposing plaintext to administrators. Which feature should you implement?
A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security
Answer: A) Always Encrypted
Explanation:
Always Encrypted ensures that sensitive column data remains encrypted both at rest and in transit. The encryption keys are managed client-side, so the database engine never sees plaintext data. Applications can query encrypted columns using parameterized queries without exposing sensitive information to database administrators. This approach provides a high level of security for regulated or sensitive data while maintaining full operational functionality for applications.
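A column-definition sketch for Always Encrypted; the table and the column encryption key (CEK_Auto1) are hypothetical, and the key must already exist in the database:

```sql
CREATE TABLE dbo.Patients (
    PatientId int IDENTITY PRIMARY KEY,
    SSN char(11)
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Auto1,
            ENCRYPTION_TYPE = DETERMINISTIC,  -- deterministic allows equality lookups
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) COLLATE Latin1_General_BIN2 NOT NULL  -- BIN2 collation required for deterministic
);
-- Clients connect with "Column Encryption Setting=Enabled" and use
-- parameterized queries, so the driver encrypts parameter values before
-- they ever reach the server.
```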
Transparent Data Encryption (TDE) encrypts data at rest, ensuring storage-level security. However, TDE decrypts the data during query execution, meaning administrators and other server-level roles can access plaintext. While TDE protects against unauthorized access to physical files, it does not prevent exposure during normal operations or queries.
Dynamic Data Masking obfuscates data in query results by showing masked values instead of actual data. It does not encrypt data at rest or during transmission, so the underlying sensitive information remains accessible to administrators. It is primarily a presentation-level security feature rather than a true encryption solution.
Row-Level Security restricts access to rows based on user attributes or roles. While it controls who can see which data, it does not encrypt sensitive information. Users with elevated privileges could still access unencrypted data if allowed by policy.
Always Encrypted is the correct choice because it combines strong encryption with application-level usability, allowing secure querying of sensitive data without exposing plaintext to administrators. It uniquely satisfies both security and operational requirements in scenarios where data confidentiality is paramount.
Question 131
You want to monitor query performance and preserve historical execution plans to identify regressions. Which feature should you enable?
A) Query Store
B) Extended Events
C) SQL Auditing
D) Intelligent Insights
Answer: A) Query Store
Explanation:
Query Store is a specialized feature in Azure SQL Database and SQL Server designed to capture detailed information about query execution over time. It records execution plans, runtime statistics, and query performance metrics, storing them for historical analysis. This makes it possible to track changes in performance and detect regressions when a query suddenly executes less efficiently. By preserving previous query plans, Query Store provides a mechanism to compare current and past performance and identify trends that might indicate a regression, allowing database administrators to take proactive measures.
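Query Store is on by default in Azure SQL Database; the sketch below enables it explicitly and pulls plan history to spot a regression (the ordering is illustrative):

```sql
-- Ensure Query Store is capturing data.
ALTER DATABASE CURRENT SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

-- Compare plans per query: multiple plan_ids for one query_id with very
-- different avg_duration values are a classic regression signature.
SELECT q.query_id,
       p.plan_id,
       rs.avg_duration,
       rs.count_executions,
       rs.last_execution_time
FROM sys.query_store_query AS q
JOIN sys.query_store_plan  AS p  ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;
```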
Extended Events is a highly flexible event-handling system that allows the collection of detailed diagnostics for performance monitoring, troubleshooting, and auditing purposes. While it can capture query execution information and various server events, its primary role is diagnostic. The data collected via Extended Events often requires manual setup and subsequent analysis. Unlike Query Store, it does not automatically maintain a historical record of execution plans specifically designed for regression tracking, which makes it less suitable for automated performance regression monitoring.
SQL Auditing, on the other hand, focuses on tracking database activity for security and compliance purposes. It logs events such as login attempts, schema changes, and data modifications. While auditing is crucial for security and regulatory compliance, it does not provide performance metrics or store historical query execution plans. Therefore, it cannot be used to monitor query performance trends or identify regressions over time.
Intelligent Insights is a feature that uses built-in intelligence to provide performance recommendations, detect potential issues, and offer guidance for query and index optimization. While it can identify potential problem areas and suggest solutions, it does not preserve historical execution plans. Consequently, it is more advisory in nature and does not enable detailed regression analysis over time. Given the need to capture and preserve historical execution plans to monitor query performance trends and identify regressions, Query Store is the most appropriate choice because it combines automatic collection, historical retention, and analytical capabilities in a single, integrated solution.
Question 132
You want to detect and remediate query plan regressions automatically in Azure SQL Database. Which feature should you use?
A) Automatic Plan Correction
B) Query Store
C) Intelligent Insights
D) Extended Events
Answer: A) Automatic Plan Correction
Explanation:
Automatic Plan Correction is a feature designed to proactively address query performance issues caused by execution plan regressions. It leverages historical query plan data to identify when a query’s current execution plan results in slower performance compared to a previously successful plan. Once a regression is detected, Automatic Plan Correction can enforce the last known good plan automatically, ensuring consistent query performance without requiring manual intervention. This process significantly reduces the risk of performance degradation in production environments.
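Enabling the feature and inspecting what the engine has detected or corrected takes two statements; the JSON path used to extract the forcing script follows the documented shape of the details column:

```sql
-- Enable automatic plan correction for the current database.
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Review detected regressions and any plan-forcing actions taken.
SELECT reason,
       score,
       JSON_VALUE(details, '$.implementationDetails.script') AS force_script
FROM sys.dm_db_tuning_recommendations;
```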
Query Store is closely related to this process as it maintains the historical execution plans that Automatic Plan Correction uses. However, Query Store alone does not remediate regressions. It provides the data needed to identify regressions, but database administrators must analyze the information manually and decide how to respond. Automatic Plan Correction builds on Query Store’s data to automate the remediation process, which is why it is the more appropriate choice for automatic correction.
Intelligent Insights provides recommendations and guidance to improve query and database performance. It can alert administrators to potential performance issues and suggest optimizations. However, like Query Store, it does not automatically correct regressions. It serves as an advisory tool rather than an active corrective mechanism. Extended Events is a diagnostic framework that allows the collection of detailed event data for troubleshooting and monitoring, but it does not provide automatic regression detection or plan enforcement.
Automatic Plan Correction is the correct solution because it integrates detection and remediation. It uses Query Store data to identify performance regressions and automatically applies a known good plan. This ensures reliable query performance, reduces manual workload, and minimizes downtime, providing a robust solution for environments where consistent performance is critical.
Question 133
You want to offload read-only analytics queries from a primary Business Critical database without affecting writes. Which feature should you enable?
A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Hyperscale replicas
Answer: A) Read Scale-Out
Explanation:
Read Scale-Out is designed to offload read-only queries to secondary replicas in Business Critical Azure SQL databases. This allows analytics, reporting, and other read-heavy workloads to execute without impacting the primary database’s write operations. By routing read workloads to replicas, Read Scale-Out improves overall performance and reduces contention for resources on the primary instance, making it ideal for scenarios where read and write workloads must coexist efficiently.
Auto-Failover Groups are intended to provide high availability and disaster recovery across regions. They automatically redirect client connections to a secondary database in case of failure. While they improve reliability and availability, they do not facilitate offloading of read workloads from the primary database. Elastic Pool allows multiple databases to share compute and storage resources efficiently, but it does not provide the ability to offload reads to secondary replicas. Hyperscale replicas exist only within the Hyperscale tier and are not part of the Business Critical tier’s architecture, which makes them irrelevant in this context.
Read Scale-Out is the correct choice because it specifically addresses the requirement of offloading read queries while maintaining write performance on the primary database. It enables workload separation, reduces bottlenecks, and allows the primary database to focus on transactional workloads without interference from reporting or analytics operations.
Question 134
You want to ensure sensitive data remains encrypted during client queries and prevent administrative access to plaintext. Which feature should you implement?
A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security
Answer: A) Always Encrypted
Explanation:
Always Encrypted is a feature designed to keep sensitive data encrypted both at rest and in transit, including during query execution. The encryption and decryption occur on the client side, ensuring that sensitive values are never exposed in plaintext to the database server or administrators. This is particularly important for compliance scenarios where even trusted administrators should not have access to raw data. Queries can still operate on encrypted data using deterministic or randomized encryption, allowing functionality while preserving security.
Transparent Data Encryption encrypts data at rest within the database and protects backups, but it decrypts data automatically during query execution. As a result, server administrators or attackers with access to the database engine could potentially view plaintext data. Dynamic Data Masking modifies the way sensitive data appears in query results but does not encrypt the underlying stored data. It is intended for reducing accidental exposure rather than securing data cryptographically. Row-Level Security controls access to rows based on user identity or attributes but does not encrypt data and cannot prevent administrative access.
Always Encrypted is the correct choice because it guarantees that sensitive information remains encrypted end-to-end, including during queries. By performing encryption and decryption on the client side, it ensures that only authorized users with access to the encryption keys can view plaintext values, providing robust protection against unauthorized access and meeting strict compliance requirements.
Question 135
You need to store audit logs securely and durably for compliance with multi-year retention requirements. Which destination should you select?
A) Azure Storage account
B) Log Analytics workspace
C) Event Hubs
D) Power BI
Answer: A) Azure Storage account
Explanation:
Azure Storage accounts provide highly durable, secure, and scalable storage options for audit logs. They support long-term retention policies and various redundancy options, such as geo-redundant storage, ensuring that logs are preserved even in the event of regional failures. This makes them ideal for compliance scenarios that require multi-year retention of audit data. Storage accounts also offer encryption at rest and access control features, further enhancing security.
Log Analytics workspace is primarily designed for real-time log analysis, monitoring, and visualization. While it allows querying and analytics on stored logs, it may not be suitable for long-term archival storage, particularly for compliance-driven multi-year retention. Event Hubs is intended for ingesting and streaming large volumes of event data into analytics pipelines; it is not designed for persistent archival storage. Power BI is a visualization and reporting tool and cannot be used as a long-term storage solution for raw audit logs.
Azure Storage accounts are the correct solution because they offer secure, durable, and cost-effective storage for audit logs, meeting both regulatory and operational requirements. By using storage accounts, organizations can ensure that audit logs remain accessible for many years, support compliance audits, and provide a reliable mechanism for protecting sensitive historical data.
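For reference, server-level auditing to a storage account can be configured from the Azure CLI. This is a sketch only: the resource group `rg-sql`, server `sqlsrv01`, and storage account `auditlogs01` are hypothetical names, and flag availability may vary by CLI version.

```shell
# Enable server-level auditing to blob storage with a ~7-year retention
# (2555 days) to satisfy a multi-year compliance requirement.
az sql server audit-policy update \
    --resource-group rg-sql \
    --name sqlsrv01 \
    --state Enabled \
    --blob-storage-target-state Enabled \
    --storage-account auditlogs01 \
    --retention-days 2555
```

Pairing this with geo-redundant storage (GRS) and immutability policies on the storage account is a common way to harden the logs against regional failure and tampering.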
Question 136
You want to detect anomalous access patterns in Azure SQL Database and receive proactive alerts. Which feature should you enable?
A) Threat Detection
B) Query Store
C) Automatic Plan Correction
D) SQL Auditing
Answer: A) Threat Detection
Explanation:
Threat Detection in Azure SQL Database is a proactive security feature designed to identify unusual or suspicious activity that could indicate a potential security threat. This includes SQL injection attempts, anomalous login patterns, and unusual access behavior that deviates from established baselines. By continuously monitoring database activity, it can send real-time alerts to administrators, enabling rapid investigation and remediation. The alerts can be integrated with email, Azure Security Center, or other monitoring tools, which makes it highly effective for maintaining security and compliance without manual oversight. Threat Detection combines pattern recognition, anomaly detection, and behavior analytics, which are critical for identifying threats that might not be apparent through standard auditing or logging alone.
Query Store, on the other hand, is primarily focused on performance monitoring. It captures query execution plans, runtime statistics, and historical performance data. While it is invaluable for identifying and troubleshooting performance regressions, it does not monitor for security threats or anomalous user behavior. Its primary function is operational efficiency rather than security, so while it provides insight into queries and workloads, it cannot detect unusual access patterns or generate proactive alerts regarding potential attacks.
Automatic Plan Correction is another performance-oriented feature. It detects regressions in query execution plans and automatically forces the database to revert to a previously known optimal plan. While this helps maintain consistent performance, it is unrelated to security. It does not monitor access patterns, detect anomalies, or provide alerts about suspicious activity. Its focus is entirely on query optimization and performance consistency rather than threat identification.
SQL Auditing tracks database activities by logging actions such as data access, schema changes, and permission changes. These logs are useful for compliance and post-event forensic analysis. However, auditing alone does not proactively analyze patterns for anomalies or generate alerts for suspicious activity. Auditing captures the “what happened” aspect but does not inherently detect or prevent malicious behavior.
Considering the options, Threat Detection is the only feature that actively monitors access patterns for irregularities and generates alerts in real time. Its combination of behavior analysis and alerting capabilities makes it the most suitable choice for detecting anomalous activity, whereas the other options focus on performance monitoring, plan optimization, or activity logging without proactive anomaly detection.
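As a rough sketch, Threat Detection can be enabled per database from the Azure CLI with the older `threat-policy` command group (newer CLI versions surface this functionality under Microsoft Defender for SQL). The resource names and email address below are placeholders.

```shell
# Enable Threat Detection on a database, store findings in a storage
# account, and send alerts to a security distribution list.
az sql db threat-policy update \
    --resource-group rg-sql \
    --server sqlsrv01 \
    --name mydb \
    --state Enabled \
    --storage-account auditlogs01 \
    --email-addresses secops@contoso.com
```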
Question 137
You want to maintain database backups for multiple years to comply with regulatory retention policies. Which feature should you enable?
A) Long-Term Backup Retention
B) Geo-Redundant Backup Storage
C) Auto-Failover Groups
D) Transparent Data Encryption
Answer: A) Long-Term Backup Retention
Explanation:
Long-Term Backup Retention (LTR) is designed to meet regulatory and compliance requirements that demand backups be retained for multiple years. This feature allows organizations to store full database backups in Azure Blob Storage for extended periods, such as seven or ten years, depending on legal or business needs. Unlike the default short-term retention used for point-in-time restore, LTR preserves scheduled full backups that can be restored as new databases long after the standard backup window expires, supporting compliance audits, legal investigations, and disaster recovery strategies. By automating long-term retention policies, it removes the need for manual management of old backups, reducing administrative overhead and minimizing the risk of non-compliance.
Geo-Redundant Backup Storage ensures that backups are replicated across regions to protect against regional failures. While this improves disaster recovery capabilities, it does not inherently extend the retention period of backups beyond the default or specified backup window. Its primary goal is redundancy and availability rather than long-term retention for compliance purposes.
Auto-Failover Groups are a high availability feature. They replicate databases across regions and allow automatic failover in case of regional outages. Although this ensures database continuity, it does not manage backup retention. Its focus is on operational availability rather than storing historical backup data for regulatory compliance.
Transparent Data Encryption (TDE) encrypts database files and backups to protect data at rest. While it enhances security and compliance in terms of protecting sensitive data, it does not address backup duration or retention policies. TDE ensures that backups are secure but does not extend their lifecycle or manage long-term storage.
Among these options, Long-Term Backup Retention is the only feature specifically designed to maintain backups for extended periods to meet regulatory compliance requirements. It addresses both the storage and retention policies needed for audit and legal purposes, making it the correct choice.
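An LTR policy can be set per database with the Azure CLI using ISO 8601 durations. The sketch below (hypothetical resource names) keeps weekly backups for 4 weeks, monthly backups for 12 months, and one yearly backup, taken in week 1, for 7 years.

```shell
# Configure a long-term retention policy on a single database.
az sql db ltr-policy set \
    --resource-group rg-sql \
    --server sqlsrv01 \
    --database mydb \
    --weekly-retention P4W \
    --monthly-retention P12M \
    --yearly-retention P7Y \
    --week-of-year 1
```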
Question 138
You want to reduce compute costs for a database that is idle most of the day while ensuring automatic scaling when needed. Which deployment model should you select?
A) Serverless compute tier
B) Hyperscale tier
C) Business Critical tier
D) Elastic Pool
Answer: A) Serverless compute tier
Explanation:
The Serverless compute tier in Azure SQL Database is optimized for variable workloads. It automatically scales compute resources up or down based on the current workload, allowing you to pay only for the resources you use. If the database is idle, the serverless tier can pause compute entirely, drastically reducing costs while keeping storage available. When a workload resumes, compute is automatically resumed, ensuring seamless scaling without manual intervention. This combination of auto-scaling and pause capability makes it ideal for workloads that have unpredictable or intermittent activity, such as development or testing environments.
The Hyperscale tier is designed for massive databases requiring independent scaling of storage and compute. While it provides excellent performance and capacity, it does not pause idle databases or reduce costs during periods of inactivity. Its primary strength is horizontal scaling rather than cost optimization for intermittent workloads.
The Business Critical tier focuses on high availability and low-latency performance for mission-critical workloads. It uses a fixed allocation of compute resources with redundancy through Always On availability groups. While reliable and fast, it does not offer automatic scaling or cost reduction during idle periods, making it less suitable for workloads with fluctuating usage patterns.
Elastic Pools allow multiple databases to share a set of resources, which can reduce costs if the workloads vary across databases. However, Elastic Pools do not pause individual databases or automatically scale each database based on its activity. Resource allocation is shared rather than dynamically managed for each database independently.
Considering the cost-saving and automatic scaling requirements, the Serverless compute tier is the only deployment model that pauses idle databases while dynamically scaling during activity. This makes it the ideal choice for reducing compute costs without sacrificing performance.
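A serverless database with auto-pause can be created from the Azure CLI roughly as follows. Resource names are placeholders; the `--auto-pause-delay` is in minutes, and `--capacity`/`--min-capacity` define the vCore range the database scales between.

```shell
# Create a General Purpose serverless database that pauses after
# 60 minutes of inactivity and scales between 0.5 and 2 vCores.
az sql db create \
    --resource-group rg-sql \
    --server sqlsrv01 \
    --name devdb \
    --edition GeneralPurpose \
    --compute-model Serverless \
    --family Gen5 \
    --capacity 2 \
    --min-capacity 0.5 \
    --auto-pause-delay 60
```

Note that the first connection after a pause incurs a short resume delay, which is usually acceptable for development and test workloads.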
Question 139
You want to offload reporting queries from a primary Business Critical database without affecting writes. Which feature should you enable?
A) Read Scale-Out
B) Auto-Failover Groups
C) Elastic Pool
D) Transparent Network Redirect
Answer: A) Read Scale-Out
Explanation:
Read Scale-Out leverages secondary replicas in a Business Critical database to handle read-only workloads, such as reporting or analytics queries. By directing read queries to replicas, the primary database is free to focus on write operations, maintaining transactional performance and reducing contention. This feature ensures that reporting does not interfere with core application performance, which is particularly important for workloads that combine heavy reporting and transactional activity.
Auto-Failover Groups provide automatic failover between primary and secondary databases to ensure high availability during regional outages. While they maintain uptime, they do not offload read queries or improve reporting performance under normal operations. Their purpose is reliability and failover, not read workload optimization.
Elastic Pools allow multiple databases to share a set of resources to optimize cost efficiency. However, they do not provide secondary replicas for read operations, nor do they separate reporting queries from transactional workloads. Elastic Pools primarily manage resource allocation across multiple databases, not offload specific query types.
Transparent Network Redirect is a mechanism that automatically reconnects clients to a new primary database after failover events. While it is useful for client connectivity and failover scenarios, it does not provide a mechanism to offload read queries or separate workloads. Its function is entirely connectivity-focused.
Among these options, Read Scale-Out is uniquely designed to offload read workloads to secondary replicas without impacting write performance, making it the correct solution for high-performance reporting in a Business Critical environment.
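In practice, routing a session to a readable secondary is done by setting the application intent to read-only in the connection, for example with sqlcmd's `-K` switch. The server, database, and login below are hypothetical.

```shell
# Read Scale-Out is on by default for Business Critical; the az flag
# can be used to toggle it explicitly.
az sql db update \
    --resource-group rg-sql \
    --server sqlsrv01 \
    --name mydb \
    --read-scale Enabled

# Connect with ReadOnly application intent so the gateway routes the
# session to a readable secondary replica instead of the primary.
sqlcmd -S tcp:sqlsrv01.database.windows.net,1433 -d mydb \
    -U reportuser -P '<password>' -K ReadOnly \
    -Q "SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability');"
```

On a secondary replica the query above reports `READ_ONLY`, which is a quick way to confirm the session was routed away from the primary.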
Question 140
You want to encrypt sensitive columns and allow applications to query them without exposing plaintext to administrators. Which feature should you implement?
A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security
Answer: A) Always Encrypted
Explanation:
Always Encrypted is a security feature that ensures sensitive data is encrypted both at rest and during query execution. The encryption keys are managed by the client application, meaning the database engine and administrators cannot see plaintext data. Applications can query encrypted columns as if they were normal data, with decryption happening on the client side. This feature is essential for compliance scenarios where protecting sensitive information from privileged users is required, such as credit card numbers or personally identifiable information (PII).
Transparent Data Encryption (TDE) encrypts data at rest, including backups, to protect against unauthorized access to physical storage. However, when data is queried, TDE decrypts it before sending it to applications, meaning administrators with database access can still view plaintext data. It does not protect data during query execution, so it cannot meet requirements where even DBAs should not see sensitive information.
Dynamic Data Masking obscures sensitive data in query results by replacing it with masks or redacted values. While it prevents casual users from seeing sensitive information, it does not encrypt the underlying data. Masked data can still be accessed by users with sufficient privileges, and the actual column values remain stored in plaintext.
Row-Level Security controls access to specific rows in a table based on user attributes, restricting which users can query certain data. While it limits data exposure, it does not encrypt data and does not prevent administrators from seeing sensitive column values. Its purpose is access control, not encryption.
Always Encrypted is the only feature that both encrypts sensitive data and allows applications to operate on it without exposing plaintext to administrators, making it the correct solution for high-security column-level encryption requirements.
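Client-side decryption is opted into per connection, for example via `Column Encryption Setting=Enabled` in an ADO.NET connection string or sqlcmd's `-g` switch. This sketch assumes a hypothetical `dbo.Customers` table with an encrypted `SSN` column, and requires the client to have access to the column master key (for example, a certificate in the local store or a key in Azure Key Vault).

```shell
# -g sets Column Encryption Setting=Enabled so the client driver
# decrypts Always Encrypted columns; the server never sees plaintext.
sqlcmd -S tcp:sqlsrv01.database.windows.net,1433 -d mydb \
    -U appuser -P '<password>' -g \
    -Q "SELECT TOP 5 CustomerId, SSN FROM dbo.Customers;"
```

Without `-g` (or with a client that lacks the key), the same query returns ciphertext or fails, which is exactly the administrator-exclusion property the question asks for.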