Snowflake SnowPro Core Exam Dumps and Practice Test Questions Set 2 Q21-40

Visit here for our full Snowflake SnowPro Core exam dumps and practice test questions.

Question 21 

Which Snowflake feature enables controlled sharing of data with external consumers without copying physical data?

A) Secure Data Sharing
B) Snowpipe
C) Data Marketplace
D) Materialized Views

Answer: A

Explanation

Secure Data Sharing provides a seamless method for granting access to data without requiring replication or movement of physical files. It enables providers to maintain a single copy of their data while granting read-only access to consumers. This approach ensures governed access and simplifies the process of distributing datasets outside the organization. Because data is referenced, not duplicated, storage costs remain minimal. Secure Data Sharing is widely used for collaboration, cross-business reporting, and third-party data delivery, offering a secure and efficient mechanism for controlled external access.
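As a rough illustration, here is a minimal provider-side sketch of Secure Data Sharing (all object and account names are hypothetical):

```sql
-- Create a share and grant it access to selected objects (provider side)
CREATE SHARE sales_share;

GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;

-- Make the share visible to a specific consumer account
ALTER SHARE sales_share ADD ACCOUNTS = partner_org.partner_account;
```

The consumer then mounts the share as a database and queries the provider's live data; no files are copied at any point.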

Snowpipe is responsible for continuous data loading. It automatically ingests files from a stage into a table, ensuring near-real-time availability of new data. While powerful for ingestion workflows, it does not provide access or sharing capabilities. Its function is operational, not collaborative, and it is not intended as a means of data distribution.

Data Marketplace is a platform where organizations publish and consume datasets. Although it offers access to external data, it relies on Secure Data Sharing under the hood. It does not itself provide the mechanism for controlled access; rather, it serves as a catalog. Therefore, it is not the foundational sharing capability.

Materialized Views store precomputed query results to improve performance for repetitive workloads. They accelerate analytical queries but do not handle access control or sharing. Their role is entirely centered on optimization.

Secure Data Sharing is the correct answer because it is the only feature designed explicitly for granting external access without data movement. It provides governance, efficiency, and security, allowing consumers to query live provider data while ensuring providers retain control. None of the other choices deliver a foundational sharing mechanism or allow external consumers to access datasets without physical copies being created.

Question 22 

Which virtual warehouse size change results in losing all current query execution state?

A) Scaling Down
B) Scaling Up
C) Resuming a Suspended Warehouse
D) Auto-Suspend

Answer: A

Explanation

Scaling down a virtual warehouse involves reducing its size by decreasing the number of compute resources allocated to it. When this occurs, Snowflake must terminate the existing warehouse instance and restart it with fewer resources. Because this restart resets the execution context, any in-flight queries are terminated. This makes scaling down disruptive when queries are running, and Snowflake documentation clarifies that it always results in loss of execution state.

Scaling up, on the other hand, increases the warehouse size. When Snowflake scales upward, it replaces the warehouse with a larger compute cluster. However, in this case, Snowflake allows active queries to continue by letting them finish on the old warehouse instance while new queries start on the upgraded cluster. This makes scaling up a nondisruptive event, as ongoing workloads are not interrupted.
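For reference, resizing is a single command in either direction; a minimal sketch (the warehouse name is hypothetical):

```sql
-- The same statement scales a warehouse up or down
ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'XLARGE';  -- scale up
ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'SMALL';   -- scale down
```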

Resuming a suspended warehouse restores compute capability using the warehouse's existing configuration. Because no queries were running while it was suspended, there is nothing to interrupt, so resuming cannot cause any loss of execution state.

Auto-Suspend automatically pauses a warehouse after a specified period of inactivity. Since no queries are active at the moment of suspension, it does not interfere with running workloads. This means no execution state is lost because all activity has already finished before suspension occurs.

The correct answer is scaling down because it is the only operation that terminates the active warehouse instance while queries are still executing. This behavior ensures that any workload in progress must be restarted manually after the scale-down action takes effect.

Question 23 

Which Snowflake construct allows masked columns to reveal partial data based on user roles?

A) Dynamic Data Masking
B) Fail-safe
C) External Tables
D) Streams

Answer: A

Explanation

Dynamic Data Masking enables conditional exposure of column values depending on the querying user’s role and entitlements. It allows administrators to design policies that return either full, partial, or obfuscated values. For instance, privileged users may see full details, while restricted users only receive masked versions. This preserves confidentiality while enabling useful access for authorized individuals. Dynamic Data Masking provides flexibility and central governance, making it a powerful security control for sensitive data fields.
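A minimal sketch of such a policy, assuming a hypothetical PII_ADMIN role and customers table:

```sql
-- Return the full value to privileged roles, a partially masked value to everyone else
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val
    ELSE REGEXP_REPLACE(val, '.+@', '*****@')
  END;

ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;
```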

Fail-safe is a last-resort mechanism for data recovery. It stores data for seven days after the time travel retention period ends. Its purpose is restoring data after catastrophic errors, not controlling visibility or masking content. It does not apply any role-based transformations, making it irrelevant to selective data exposure.

External Tables allow querying of data stored outside Snowflake in cloud storage. They enable federated data access but offer no conditional masking capabilities. Visibility control for external data must be handled through other mechanisms, not through External Tables themselves.

Streams track data changes to support incremental ingestion patterns. They monitor DELETE, INSERT, and UPDATE activity and are commonly used for ETL and change data capture. They do not alter what data users are allowed to see and cannot provide different values depending on a user’s role.

Dynamic Data Masking is correct because it directly supports conditional visibility of sensitive values. It is designed for scenarios where different personas require varying degrees of access to the same data column while remaining compliant and secure.

Question 24 

What Snowflake feature ensures that only changed micro-partitions are reprocessed when refreshing a materialized view?

A) Automatic Maintenance
B) Time Travel
C) Clustering
D) Micro-Partition Metadata

Answer: A

Explanation

Automatic Maintenance is responsible for managing the refresh process of materialized views. It ensures that only micro-partitions affected by underlying data changes undergo recomputation. This reduces both compute overhead and latency. By targeting only incremental modifications, Snowflake avoids rebuilding the entire materialized view. Automatic Maintenance applies intelligent logic to maintain consistency and freshness while offering high performance and lower operational effort.
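For context, a materialized view is defined like any other view; the incremental maintenance itself is performed by Snowflake's background service rather than by user code. A minimal sketch (table and view names are hypothetical; materialized views generally require Enterprise Edition):

```sql
-- Snowflake keeps this view current as SALES changes, recomputing only the
-- micro-partitions affected by new or modified data
CREATE MATERIALIZED VIEW daily_sales_mv AS
SELECT sale_date, region, SUM(amount) AS total_amount
FROM sales
GROUP BY sale_date, region;
```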

Time Travel stores historical data for querying past states and recovering dropped objects. While it is helpful for data auditing and recovery tasks, it is not involved in incremental refresh logic. Time Travel does not determine which partitions must be updated in a materialized view.

Clustering organizes table data based on specified columns so that Snowflake can efficiently prune micro-partitions during queries. Although it enhances read performance, clustering does not participate in the incremental refresh process of materialized views. It does not identify or track changes between partitions.

Micro-Partition Metadata stores information such as minimum and maximum values for each partition and is used primarily for pruning during query execution. While it helps Snowflake identify relevant partitions for reading, it does not execute or manage maintenance operations for materialized views. The refresh process still relies on Snowflake’s background services.

Automatic Maintenance is correct because Snowflake implements a background service that continuously monitors changed partitions and updates the materialized view accordingly. Its incremental approach ensures high efficiency by reprocessing only necessary data, minimizing compute usage while preserving accuracy.

Question 25 

Which Snowflake feature enables automatic triggering of SQL actions based on scheduled intervals?

A) Tasks
B) Snowpipe
C) Pipes
D) Streams

Answer: A

Explanation

Tasks allow users to schedule SQL execution at fixed intervals. They support both standalone schedules and dependency chains. With tasks, teams automate recurring transformations, incremental loads, or maintenance routines. Snowflake’s task engine executes defined statements using a dedicated warehouse or a serverless compute model depending on the configuration. This enables reliable automation without manual intervention.
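A minimal sketch of a scheduled task (warehouse, table, and task names are hypothetical):

```sql
-- Run a transformation every hour on a dedicated warehouse
CREATE TASK hourly_refresh
  WAREHOUSE = transform_wh
  SCHEDULE = '60 MINUTE'
AS
  INSERT INTO event_counts
  SELECT CURRENT_TIMESTAMP(), COUNT(*) FROM raw_events;

-- Tasks are created suspended and must be resumed before they start executing
ALTER TASK hourly_refresh RESUME;
```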

Snowpipe continuously loads data from files appearing in a stage. It automatically triggers ingestion based on event notifications or file availability, not on time-based schedules. Although both tasks and Snowpipe may automate workflows, Snowpipe’s purpose is ingestion, not scheduled SQL execution.

Pipes represent Snowpipe configurations and define how staged files are loaded. They do not provide a mechanism for time scheduling and only participate in continuous ingestion pipelines triggered by file arrivals.

Streams track data changes in tables. They store delta information for INSERT, UPDATE, and DELETE operations, allowing incremental processing. Streams themselves do not run on a schedule; they feed data to other processes but do not trigger or manage executions.

Tasks are correct because they are the only feature designed to run SQL statements on a recurring schedule. They provide automation and orchestration capabilities essential for ETL, maintenance, and analytical workflows.

Question 26 

Which Snowflake feature allows external applications to access Snowflake objects using role-based policies while keeping data computation inside Snowflake?

A) External Functions
B) Snowpark
C) Secure Views
D) Access History

Answer: B

Explanation

Snowpark enables developers to execute complex transformations inside Snowflake using languages such as Java, Python, and Scala. With Snowpark, the processing takes place inside the Snowflake engine rather than on external systems. This ensures that data remains inside the secure Snowflake environment and is protected by role-based policies. External applications can call Snowpark code, but the actual computation stays within Snowflake. This architecture supports security, governance, and controlled access. Snowpark enhances data engineering, machine learning feature preparation, and advanced transformations while maintaining the principle of bringing computation to the data instead of exporting data outward.
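One way to picture this is a Snowpark stored procedure defined through SQL: the Python handler runs inside Snowflake's engine, so the data never leaves the platform. The sketch below is illustrative only; the object names and runtime version are assumptions:

```sql
CREATE OR REPLACE PROCEDURE clean_orders()
  RETURNS STRING
  LANGUAGE PYTHON
  RUNTIME_VERSION = '3.10'
  PACKAGES = ('snowflake-snowpark-python')
  HANDLER = 'run'
AS
$$
def run(session):
    # Transformation executes on Snowflake compute, governed by the caller's role
    df = session.table("raw_orders").filter("amount > 0")
    df.write.save_as_table("clean_orders", mode="overwrite")
    return "clean_orders refreshed"
$$;

-- An external application only needs permission to call the procedure
CALL clean_orders();
```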

External Functions allow Snowflake to call external services such as AWS Lambda or Azure Functions, but the computation occurs outside Snowflake. Although they support secure integration with third-party logic, they do not keep computation inside Snowflake, making them unsuitable when data residency requirements demand internal execution.

Secure Views provide a method to control and restrict visibility of underlying data fields using role-based policies. They help enforce fine-grained access control but do not enable application logic or computation to run inside Snowflake. They serve governance purposes rather than enhancing external application compute pathways.

Access History provides an audit trail of data usage across Snowflake. It records who accessed which objects and when. While useful for compliance, tracking, and security analysis, it does not offer any capability for external applications to run compute logic inside the Snowflake engine.

Snowpark is the correct answer because it is the only feature that allows external applications to initiate logic execution while ensuring that the processing remains completely inside the Snowflake environment. This maintains governance, reduces data movement risks, and supports advanced analytics.

Question 27 

What Snowflake feature provides versioned snapshots useful for reproducible data science experiments?

A) Time Travel
B) File Formats
C) Internal Stages
D) Clustering

Answer: A

Explanation

Time Travel enables querying historical snapshots of data, making it essential for reproducible analysis and experiments. Data scientists often need consistent, fixed versions of datasets to ensure that training, validation, or testing processes produce stable results. Time Travel guarantees access to previous states of tables, allowing analysis to run on precisely the same data as before, even after new changes occur. It supports debugging, verifying model behavior, and recreating conditions that existed at specific times. This makes it ideal for scenarios requiring strict reproducibility.
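A minimal sketch of pinning an experiment to a fixed historical state (table name and timestamp are hypothetical):

```sql
-- Query the table exactly as it existed at a given point in time
SELECT *
FROM training_data AT (TIMESTAMP => '2024-01-15 08:00:00'::TIMESTAMP_LTZ);

-- Or freeze that state as a zero-copy clone for repeated, reproducible runs
CREATE TABLE training_data_v1 CLONE training_data
  AT (TIMESTAMP => '2024-01-15 08:00:00'::TIMESTAMP_LTZ);
```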

File Formats define how Snowflake interprets staged files during loading and unloading. They deal with file structure, not dataset versioning. They cannot recreate historical states of a table or support controlled reproducibility scenarios directly.

Internal Stages serve as temporary or permanent storage for data prior to loading. They do not store table versions and cannot provide historical snapshots of data. They are part of ingestion and unloading operations, not version control.

Clustering improves query pruning by organizing micro-partitions, but it does not track or preserve past states of data. Its purpose is to optimize selective queries, not to maintain historical versions. It cannot restore or reproduce specific dataset states.

Time Travel is correct because it allows reconstructing past data states within a defined retention period, making it indispensable for experiments requiring consistent, versioned data.

Question 28 

Which Snowflake object is required to support continuous ingestion using event notifications?

A) Pipe
B) Resource Monitor
C) Data Masking Policy
D) User-Defined Function

Answer: A

Explanation

A pipe is the central element of Snowpipe and is responsible for continuous data ingestion triggered by event notifications. It defines the COPY statement used for loading and is linked to a stage containing incoming files. When configured with cloud event messages such as AWS SNS or Azure Event Grid, the pipe automatically processes new files as soon as they arrive. This enables near-real-time ingestion without manual intervention. Pipes form the operational foundation for automated ingestion workflows.
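A minimal sketch of an auto-ingest pipe (stage, table, and pipe names are hypothetical):

```sql
-- New files landing in the stage are loaded as soon as the cloud event arrives
CREATE PIPE events_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO raw_events
  FROM @events_stage
  FILE_FORMAT = (TYPE = 'JSON');
```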

Resource Monitors track and limit credit consumption. They are purely cost-governance tools and unrelated to ingestion. They provide alerts and controls but do not enable event-triggered data processing.

A Data Masking Policy enforces conditional visibility of sensitive columns. Although important for security, it plays no role in ingestion or event handling.

A User-Defined Function allows custom logic to run within Snowflake but does not control data ingestion or respond to event notifications. UDFs are invoked manually within SQL statements and do not integrate with cloud storage event triggers.

A pipe is therefore correct because it is the only construct built for continuous ingestion that reacts to new file arrival notifications.

Question 29 

Which Snowflake feature allows separation of storage and compute for independently scaling workloads?

A) Virtual Warehouses
B) Query Profile
C) Schema Privileges
D) File Format Options

Answer: A

Explanation

Virtual Warehouses provide compute resources for executing queries and other operations. Because they are completely separate from Snowflake’s storage layer, each warehouse can scale independently without affecting stored data. They can be resized, suspended, resumed, and multiplied to satisfy concurrency needs. This design supports workload isolation, allowing analytics, ETL, and data science teams to operate independently. Virtual Warehouses are fundamental to Snowflake’s elasticity and its pay-as-you-use compute model.
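A minimal sketch showing that compute is provisioned and adjusted independently of storage (warehouse name and sizes are hypothetical):

```sql
CREATE WAREHOUSE analytics_wh
  WITH WAREHOUSE_SIZE = 'MEDIUM'
       AUTO_SUSPEND   = 300
       AUTO_RESUME    = TRUE;

-- Resize or suspend compute at any time; the stored data is untouched
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'LARGE';
ALTER WAREHOUSE analytics_wh SUSPEND;
```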

Query Profile visualizes performance characteristics such as scan volume, execution paths, and query stages. It is a diagnostic tool and cannot affect compute or storage scaling. It offers insights but no operational control.

Schema Privileges regulate access rights over database objects. They handle security and governance but do not provide compute resources or control scaling.

File Format Options govern the method by which staged files are interpreted, including delimiter settings and compression. They relate to ingestion and data formatting rather than compute scaling.

Virtual Warehouses are correct because they exclusively provide isolated compute power that can scale independently while sharing the same underlying storage layer for all workloads.

Question 30 

Which feature allows tracking data changes for incremental transformations?

A) Streams
B) Tasks
C) Privileges
D) Roles

Answer: A

Explanation

Streams record data modifications such as inserts, updates, and deletes. They maintain a change log that enables incremental processing. When used in pipelines, streams allow developers to process only newly changed data rather than scanning full tables. This supports efficient ETL jobs, as transformations operate on deltas. Streams are essential for building modern data ingestion architectures where incremental updates reduce compute workloads.
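A minimal sketch of delta-based processing with a stream (table and stream names are hypothetical):

```sql
CREATE STREAM orders_changes ON TABLE orders;

-- Process only the rows that changed since the last read; consuming the stream
-- inside a DML statement advances its offset
INSERT INTO orders_summary
SELECT order_id, amount
FROM orders_changes
WHERE METADATA$ACTION = 'INSERT';
```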

Tasks execute SQL on schedules but do not store data changes. They orchestrate workflows built on streams but are not themselves responsible for tracking changes.

Privileges define what users can access but do not participate in data processing. They are governance controls rather than pipeline building blocks.

Roles group privileges and manage entitlement structures but offer no functionality for tracking data changes or supporting incremental logic.

Streams are correct because they provide the delta-tracking mechanism necessary for incremental transformations.

Question 31 

Which Snowflake feature allows organizations to securely publish curated datasets for external consumption?

A) Data Marketplace
B) Warehouse Monitoring
C) External Functions
D) Tag-Based Masking

Answer: A

Explanation

Data Marketplace enables organizations to make curated datasets available for other Snowflake consumers in a controlled, secure, and governed manner. Providers can list their datasets publicly or privately, and consumers can access them without having to copy or move underlying data. Snowflake ensures security by delivering these datasets through its sharing architecture. The marketplace simplifies collaboration between organizations, supports monetization strategies, and promotes interoperability across enterprises. Because providers maintain full control over access, governance, and updates, Data Marketplace becomes an efficient channel for distributing high-quality, ready-to-use information.

Warehouse Monitoring is not related to dataset publication. It provides insights into compute usage, such as performance, query behavior, and warehouse efficiency. Although useful for operational oversight, it does not support external dataset access or sharing.

External Functions allow Snowflake to interact with services running outside its environment. They provide extended processing capabilities such as calling APIs or machine learning systems but do not facilitate dataset publishing, distribution, or shared access.

Tag-Based Masking applies dynamic visibility controls to sensitive columns using tags. It assists in compliance and governance by restricting data exposure, but it does not provide mechanisms for dataset publication or external distribution.

Data Marketplace is correct because it is specifically designed to publish, distribute, and govern shared datasets, enabling seamless collaboration and consumption by external Snowflake accounts without duplicating data.

Question 32 

Which Snowflake capability ensures consistent performance when many concurrent users query the same dataset?

A) Multi-Cluster Warehouses
B) Fail-safe
C) External Tables
D) Materialized Views

Answer: A

Explanation

Multi-Cluster Warehouses provide automatic or manual scaling across multiple compute clusters. When workloads experience high concurrency, Snowflake adds additional clusters to handle increased demand. Each cluster serves different user groups, ensuring that query performance remains consistent even as more queries compete for resources. Once concurrency load decreases, Snowflake can scale down to fewer clusters, optimizing cost. This elasticity ensures smooth performance during peak usage periods and eliminates queueing issues.
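A minimal sketch of a multi-cluster configuration (an Enterprise Edition feature; names and limits are hypothetical):

```sql
-- Snowflake adds and removes clusters between the configured minimum and maximum
-- as concurrency rises and falls
CREATE WAREHOUSE bi_wh
  WITH WAREHOUSE_SIZE    = 'MEDIUM'
       MIN_CLUSTER_COUNT = 1
       MAX_CLUSTER_COUNT = 4
       SCALING_POLICY    = 'STANDARD';
```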

Fail-safe is a long-term recovery mechanism that allows Snowflake to restore data after the time travel window has expired. While important for resilience and compliance, it does not influence performance or concurrency.

External Tables facilitate querying data stored in external cloud storage. They provide federated access but do not address user concurrency or performance consistency. Their functionality centers on accessibility rather than workload distribution.

Materialized Views accelerate repeated queries by storing precomputed results. They enhance performance but do not handle concurrency challenges. They serve optimization needs for repetitive query patterns, not workload distribution among users.

Multi-Cluster Warehouses are correct because they directly address concurrency pressures by expanding compute capacity dynamically, ensuring consistent performance regardless of workload spikes.

Question 33 

Which Snowflake feature enables secure integration of encryption key management with external key management systems?

A) Tri-Secret Secure
B) Network Policies
C) Reader Accounts
D) Auto-Clustering

Answer: A

Explanation

Tri-Secret Secure enhances Snowflake’s overall encryption model by introducing a dual-key framework that requires cooperation between Snowflake’s internal encryption mechanisms and a customer-managed master key stored in an external key management system. This layered approach strengthens cryptographic protection by ensuring that no single party holds full decryption capability. Snowflake continues to manage the hierarchical key infrastructure that protects data at rest, but the customer-provided external key becomes an essential component of the decryption process. If the customer disables or rotates that external key, Snowflake can no longer decrypt the data, effectively giving organizations direct and immediate control over access. This capability aligns closely with compliance requirements in tightly regulated sectors where customer ownership and revocation authority over encryption keys are mandatory. By combining operational simplicity with rigorous cryptographic governance, Tri-Secret Secure supports both security and regulatory expectations without sacrificing performance or manageability.

Network Policies operate at the network perimeter and control which IP addresses or IP ranges are allowed to connect to a Snowflake account. They help administrators enforce geographic restrictions, limit unauthorized connection attempts, and reduce attack exposure by filtering incoming connections. However, they function entirely outside the encryption and key-handling workflows. Network Policies neither influence how data is encrypted nor participate in key management. Their scope is connection access control, not cryptographic governance.

Reader Accounts enable data providers to share datasets with consumers who do not maintain their own Snowflake accounts. These accounts allow the consumer to query shared data with controlled privileges, enabling seamless collaboration without requiring a full enterprise Snowflake deployment on the consumer’s side. However, Reader Accounts have no interaction with encryption models or dual-key architectures. They do not manage keys, govern decryption behaviors, or affect how sensitive data is cryptographically protected.

Auto-Clustering focuses on performance optimization by automatically reorganizing micro-partitions based on clustering keys. This feature improves pruning efficiency and helps maintain predictable performance on large or rapidly changing datasets. Auto-Clustering works exclusively within the performance domain and does not involve encryption, key security, or decryption workflows. Its purpose is computational efficiency, not cryptographic protection.

Tri-Secret Secure is the correct choice because it uniquely integrates Snowflake’s native encryption framework with externally managed customer keys, ensuring that data decryption requires joint control. It provides strong governance, regulatory alignment, and the ability for customers to revoke access instantly by disabling their master key, thereby delivering a more robust and controllable encryption model.

Question 34 

Which feature allows Snowflake to provide a fully consistent view of data even when multiple transactions occur simultaneously?

A) ACID Transactions
B) Result Caching
C) Snowpipe
D) Materialized Views

Answer: A

Explanation

ACID transactions provide the foundational guarantees that allow Snowflake to manage concurrent operations while preserving correctness, reliability, and predictable outcomes. Atomicity ensures that every transactional operation either completes fully or does not occur at all, preventing partial updates that could leave data in an inconsistent or unusable state. Consistency guarantees that each transaction moves the database from one stable and valid state to another, ensuring that all integrity rules and constraints that Snowflake supports are respected throughout the process. Isolation ensures that simultaneous operations do not interfere with one another, meaning each transaction behaves as though it is the only one running, even when hundreds or thousands of users are working in parallel. Durability ensures that once a transaction is committed, the data remains persistent and protected through system faults or failures, preserving correctness even during unexpected interruptions. Taken together, ACID properties enable Snowflake to support multi-user workloads while maintaining reliable and predictable data behavior under heavy concurrency.
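A minimal sketch of these guarantees in practice (table and values are hypothetical):

```sql
-- Either both updates commit together or neither takes effect
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;
COMMIT;   -- ROLLBACK instead would discard both changes atomically
```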

Result Caching improves performance by storing results of previously executed queries, allowing Snowflake to return identical results instantly without reprocessing data. However, this capability does not participate in any mechanism that ensures correctness during concurrent writes or protects state transitions. It speeds up workloads but does not govern how data is managed, validated, or protected within transactional cycles. Result Caching is purely an optimization layer, not a consistency or concurrency control system.

Snowpipe provides automated and continuous data ingestion from cloud storage locations. Its function is operational, ensuring data arrives quickly and is incorporated into tables with minimal delay. However, Snowpipe does not validate multi-step operations, protect against partial writes, or coordinate transactional correctness. Its purpose is to streamline ingestion workflows, not guarantee consistency under simultaneous or complex update operations.

Materialized Views accelerate performance by storing precomputed results for queries that require recurring aggregations or summaries. They reduce compute time and optimize workloads but remain independent of transactional mechanisms. They do not enforce write consistency, do not protect against interference among concurrent users, and do not manage the correctness of multi-step operations.

ACID transactions are the correct choice because they provide the direct guarantees Snowflake relies on to maintain data correctness, integrity, and reliability under concurrent access. They enable the platform to support simultaneous reads and writes while ensuring predictable, consistent, and durable outcomes even during complex or high-volume operations.

Question 35

Which Snowflake capability ensures that each role sees only the data it is permitted to access, even within shared datasets?

A) Secure Views
B) Network Policies
C) Warehouses
D) Clustering Keys

Answer: A

Explanation

Secure Views function as one of Snowflake’s primary mechanisms for providing selective and protected data exposure, especially in environments where multiple roles or external consumers need controlled access. A Secure View does not simply mask columns but ensures that underlying base tables, query logic, and metadata remain hidden from any user who is not explicitly granted access. This means even users with broader privileges cannot bypass the view definition to inspect restricted fields or understand how results are generated. Because Secure Views maintain a strict boundary between the data model and consumer access, they are effective for enforcing row-level, column-level, and logic-level protections. When combined with Snowflake roles and grants, they ensure that each consumer receives only the intended subset of information, enabling safe data sharing and robust governance.
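A minimal sketch of a secure view that filters rows by the querying role, assuming a hypothetical role-to-region mapping table:

```sql
CREATE SECURE VIEW regional_sales AS
SELECT order_id, region, amount
FROM sales
WHERE region = (SELECT region
                FROM role_region_map
                WHERE role_name = CURRENT_ROLE());

-- Consumers query the view; the base table and the view's logic stay hidden
GRANT SELECT ON VIEW regional_sales TO ROLE emea_analyst;
```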

Network Policies, while extremely important for securing Snowflake accounts at the perimeter, operate on an entirely different layer and do not provide selective visibility into datasets. They allow administrators to define allowed and blocked IP ranges, ensuring only approved network locations can initiate connections. However, once a connection is successfully established, Network Policies do not influence permissions, table visibility, field access, or any aspect of result filtering, meaning they cannot be used to deliver differentiated subsets of data to multiple roles.

Warehouses, which serve as compute engines for executing SQL queries, are not connected to data access governance and cannot modify which data is visible to a user. They determine the performance characteristics of a query, such as speed and resource availability, but they do not evaluate permissions or apply filtering logic. Even if different roles use different warehouses, data visibility remains entirely dependent on Snowflake’s access control system, not on the warehouse configuration itself.

Clustering Keys improve the physical organization of micro-partitions to enhance performance for large or frequently filtered tables. Their purpose is purely operational, enabling more efficient pruning and reducing query costs. They have no capability to restrict which rows or columns are visible to particular users and cannot replace security constructs like Secure Views.

Secure Views are the correct choice because they directly control which portions of a dataset are exposed, ensure that underlying tables and logic remain hidden, and integrate seamlessly with Snowflake’s role-based access control model. This combination supports secure, consistent, and well-governed data delivery across diverse user groups.

Question 36 

Which Snowflake feature enables customers to securely provide access to selected objects for external business partners without granting direct database-level privileges?

A) Secure Views
B) Reader Accounts
C) External Browser Authentication
D) Cloned Databases

Answer: B

Explanation 

Reader accounts are specifically designed to provide outside organizations with isolated access to selected datasets without requiring them to manage their own Snowflake subscription or receive broad privileges within the provider’s main account. To understand why this mechanism meets the requirement while others do not, it helps to explore how each of the alternative features functions in relation to access control, isolation, and data governance.

Secure views allow organizations to expose restricted query outputs while concealing sensitive columns or underlying structures. They are excellent tools for data masking and controlled internal exposure, but they operate entirely inside an already authorized Snowflake environment. They do not establish isolated consumption environments and cannot be used to give external entities controlled access without granting privileges inside the main account.

External browser authentication is a method for logging into Snowflake through a supported identity provider. It manages sign-in flows, session validation, and user identity confirmation, but it has nothing to do with creating secured, segregated access spaces or provisioning isolated accounts for partners. It does not determine how data is exposed or consumed by outside parties.

Cloned databases provide copy-on-write replicas of existing data structures. These copies are efficient for development, testing, and backup environments, but they still require that any user accessing the clone exist within the same Snowflake account or be granted privileges through separate means. A clone cannot independently isolate external users, nor does it provide a dedicated consumption environment.

Reader accounts solve this exact requirement by creating a stand-alone Snowflake account owned and controlled by the data provider. The provider handles compute provisioning, cost management, monitoring, and governance. The external partner receives a separate, restricted account in which they can query only the objects intentionally shared with them. This preserves strong boundary control, maintains strict separation between internal operations and external consumption, and allows monitoring at the partner-specific level. Because reader accounts enable external access without granting internal privileges and provide a fully isolated environment, they represent the precise feature designed for this use case.
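A minimal provider-side sketch (account, share, and credential values are hypothetical):

```sql
-- Provision an isolated, provider-managed reader account for the partner
CREATE MANAGED ACCOUNT partner_reader
  ADMIN_NAME = 'partner_admin',
  ADMIN_PASSWORD = 'Str0ng#Passw0rd1',
  TYPE = READER;

-- Add the reader account to an existing share so it can query the shared objects
ALTER SHARE sales_share ADD ACCOUNTS = myorg.partner_reader;
```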

Question 37

What does Snowflake use to ensure continuous protection of data stored in tables by allowing quick time-based restoration?

A) Fail-safe
B) Virtual Warehouses
C) Time Travel
D) Secure Materialized Views

Answer: C

Explanation 

Time travel is the only mechanism that enables flexible, user-initiated restoration of data to earlier states by allowing access to historical versions of tables and schemas. To understand why it is uniquely suited to continuous protection and point-in-time recovery, it is necessary to compare it to other Snowflake features and evaluate how each interacts with data retention and restoration.

Fail-safe is designed exclusively for catastrophic internal recovery situations where Snowflake intervenes to restore lost data after all other mechanisms fail. It is not a user-controlled feature and does not allow point-in-time querying or cloning. Fail-safe is intentionally limited, intended for severe emergencies rather than routine operational recovery. It cannot restore objects to specific historical timestamps on demand.

Virtual warehouses provide compute power for executing queries. They do not store data, manage retention, or influence historical recovery. Their function is related to performance and processing, not versioning or time-based access. No matter how a warehouse is sized or configured, it plays no part in restoring older data states.

Secure materialized views help accelerate performance by maintaining precomputed results, while also protecting underlying data definitions. They are valuable for optimization and controlled exposure but are not equipped to support historical querying. They do not store past versions of data and therefore cannot be used for restoration.

Time travel is built to allow querying, cloning, and restoring tables or schemas as they existed at an earlier time. This includes recovering objects that were accidentally dropped, enabling users to view snapshots of past states, and supporting rollback operations for operational mistakes. The retention window can vary based on object type and Snowflake edition, but its purpose remains consistent: providing flexible, precise, user-driven access to historical data. Because time travel enables continuous protection and immediate restoration, it is the correct feature.
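A minimal sketch of point-in-time restoration (table name and offset are hypothetical):

```sql
-- Recreate the table as it existed one hour ago using a zero-copy clone
CREATE OR REPLACE TABLE orders_restored CLONE orders
  AT (OFFSET => -3600);
```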

Question 38 

Which Snowflake mechanism enables sharing of live datasets across accounts without copying data?

A) File Format Definitions
B) Snowpipe
C) Data Sharing
D) Row Access Policies

Answer: C

Explanation 

Snowflake’s data sharing capability provides a mechanism for granting live, secure, zero-copy access to datasets across accounts, making it the only option suited for distributing data without replication. To explain why this is the correct choice, it is essential to compare it with other Snowflake components and understand their roles in the data workflow.

File format definitions specify how staged files should be interpreted during loading or unloading. They govern parsing rules such as delimiters, compression types, or null handling. These configurations are important for ingestion and export processes but have no relationship to sharing datasets between accounts. Their purpose is simply to ensure correct interpretation of files.

Snowpipe enables continuous loading by automatically ingesting new files as they appear in cloud storage. It accelerates data availability and simplifies streaming ingestion patterns, but it does not provide any mechanism for sharing already loaded data. Its focus is purely ingestion, not collaboration or distribution.

Row access policies enforce fine-grained, conditional row-level filtering based on user attributes. They protect sensitive information once access is granted but cannot transfer data across accounts or deliver shared datasets to external organizations. They operate inside an established access boundary.

Data sharing is unique in that it exposes live, queryable datasets to external accounts without physically copying data. Consumers access the provider’s data directly while the provider retains full ownership and governance. Updates made by the provider become instantly visible to all consumers, ensuring consistency and eliminating synchronization overhead. Because this capability enables direct, secure, copy-free access to shared datasets, it is the mechanism designed for this purpose.
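A minimal consumer-side sketch (the provider account identifier and share name are hypothetical):

```sql
-- Mount the provider's share as a local, read-only database
CREATE DATABASE supplier_data FROM SHARE provider_acct.sales_share;

-- Queries run against the provider's live data; nothing is copied
SELECT COUNT(*) FROM supplier_data.public.orders;
```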

Question 39 

Which Snowflake capability allows a table to be restored even after it has been dropped?

A) Primary Key Enforcement
B) Zero-Copy Cloning
C) Time Travel
D) Account Failover

Answer: C

Explanation 

Primary key definitions in Snowflake do not function the same way they do in traditional relational database systems, where such constraints are actively enforced to guarantee uniqueness and referential integrity. In Snowflake, these keys are informational only. They do not prevent duplicate values, do not control data quality rules, and do not provide any mechanism for recovering deleted or overwritten data. Because they do not interact with retention features or historical metadata, they have no role in restoring an object that has been dropped or reverted to an earlier state. Their purpose is mainly descriptive, helping downstream tools or processes understand the intended structure of the dataset rather than influencing recovery capabilities.

Zero-copy cloning offers a fast and storage-efficient method for creating independent copies of tables, schemas, or entire databases without physically replicating underlying data files. This makes it extremely valuable for testing, development, analytics isolation, or creating independent environments. However, cloning is not retroactive and cannot restore a table that has already been dropped unless a clone existed beforehand. If no clone was created prior to deletion, cloning provides no mechanism for retrieval. It is best viewed as a proactive tool for environment duplication rather than a reactive tool for error correction.

Account failover provides resilience at the account level by replicating metadata and enabling operational continuity in a secondary region or cloud. This feature is designed for large-scale disaster recovery scenarios, such as regionwide outages, and ensures that organizations can continue operations when a primary region becomes unavailable. Although failover is critical for business continuity, it does not provide time-based data recovery for specific objects. It cannot reverse an accidental table drop or revert a dataset to a previous version within the primary region. Its focus is on high availability and continuity, not fine-grained object restoration.

Time travel is specifically engineered to preserve historical versions of tables, schemas, and databases for a defined retention period. This retention window enables users to query past states, clone objects from earlier moments, or recover objects that were accidentally dropped. When a table is removed, its metadata and underlying data remain accessible through time travel until the retention window expires. This allows the undrop operation, which restores a dropped table exactly as it existed at the moment it was removed. Because time travel is purpose-built for point-in-time recovery and protects against common human errors such as unintended deletions, it is the only mechanism that directly supports restoring dropped tables.
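A minimal sketch of recovering from an accidental drop within the retention window (table name is hypothetical):

```sql
DROP TABLE orders;        -- accidental drop

-- Restore the table exactly as it existed at the moment it was dropped
UNDROP TABLE orders;
```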

Question 40 

Which Snowflake feature supports assigning compute resources to specific workloads using independent clusters?

A) Secure UDFs
B) Virtual Warehouses
C) Result Caching
D) Sequence Generators

Answer: B

Explanation 

Secure UDFs serve as a method for executing custom business logic within the Snowflake environment while keeping the underlying implementation protected from users who do not have direct access to the code. Their primary focus is on enabling secure execution of logic, such as transformations, calculations, or validation routines. Although they enhance functional capabilities and ensure that proprietary algorithms remain confidential, they do not influence how compute resources are allocated, isolated, or scaled. Secure UDFs are executed within the existing compute allocations provided by a warehouse and do not create separation between workloads. Their purpose is strictly functional rather than operational, and they do not contribute to workload isolation or cluster-level performance tuning.

Result caching is designed to improve performance when identical queries are executed more than once. When a query result is stored in the result cache, Snowflake can return this cached output instantly without reprocessing the underlying data. This enhances efficiency for repetitive workloads but does not alter how compute resources are assigned or how different workloads are separated from each other. Result caching simply reduces redundant computation, but it depends entirely on previously executed results. It does not provide any mechanism to manage compute resources, isolate tasks, or ensure consistent performance across competing workloads.

Sequence generators create incremental numeric values that support use cases requiring unique identifiers or ordered numbering. They are helpful for creating surrogate keys, generating sequences for processing tasks, or supporting application requirements that depend on predictable incremental values. Despite their utility, sequence generators do not interact with Snowflake’s compute layer in any meaningful way. They do not control performance, do not influence workload distribution, and do not separate processing tasks. Their functionality is limited to producing numbers when requested.

Virtual warehouses operate as the core compute layer for Snowflake and are specifically designed to enable workload isolation. Each warehouse acts as its own independent compute cluster, meaning workloads assigned to one warehouse do not interfere with workloads running on another. Warehouses can be scaled up for more compute power or scaled out to increase concurrency. They can also be suspended to minimize costs when not in use and resumed instantly when processing is needed. This flexibility allows organizations to assign dedicated warehouses to tasks such as data loading, analytics, reporting, or machine learning. Because they allow complete control over compute capacity and workload separation, virtual warehouses are the correct choice for assigning compute resources independently.
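A minimal sketch of workload isolation with dedicated warehouses (names and sizes are hypothetical):

```sql
-- Separate clusters for loading and reporting so the workloads never compete
CREATE WAREHOUSE load_wh   WITH WAREHOUSE_SIZE = 'LARGE'  AUTO_SUSPEND = 60  AUTO_RESUME = TRUE;
CREATE WAREHOUSE report_wh WITH WAREHOUSE_SIZE = 'MEDIUM' AUTO_SUSPEND = 300 AUTO_RESUME = TRUE;

-- Each session or tool connection selects the warehouse assigned to its workload
USE WAREHOUSE report_wh;
```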
