Snowflake SnowPro Core Exam Dumps and Practice Test Questions Set 4 Q61-80

Visit here for our full Snowflake SnowPro Core exam dumps and practice test questions.

Question 61 

Which Snowflake object primarily determines the compute cost when executing a SQL query?

A) Virtual Warehouse
B) Database
C) Role
D) Schema

Answer: A

Explanation: 

A compute engine within Snowflake directly influences credit usage because it performs the actual data processing. When a query executes, this compute layer determines the amount of resources consumed, the level of parallelism, and the speed at which operations complete. Since compute is billed in credits, the engine itself is the primary factor impacting cost. Its size, scaling behavior, and runtime directly shape how many credits a task will spend, making it central to understanding performance efficiency.

A data container within Snowflake has no role in executing computation. It stores logical groupings of tables, schemas, and other resources but does not process any workload. It has no compute cost associated with it. Storage charges remain separate and operate independently of query cost. Therefore, this data container cannot influence compute charges.

An access identity that represents a security boundary plays no part in determining workload cost. It only governs who can run which queries or manage which objects. Although permissions influence functional behavior, they do not affect credits consumed during query execution.

A logical grouping inside a database behaves similarly to a data container and does not perform processing. It organizes objects but plays no role in compute operations. It has no ability to change performance or credit usage when queries run.

Compute charges arise solely from the resource that executes queries. Snowflake isolates compute from storage, so all activity that requires CPU cycles originates in the compute layer. Therefore, the object responsible for credit consumption is the virtual processing environment assigned to the session. Every query depends on its performance characteristics, making this object the central determinant of compute spend.
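As a rough illustration, the cost levers live on the warehouse definition itself. The warehouse name and settings below are hypothetical; they only sketch how size and auto-suspend shape credit spend.

CREATE WAREHOUSE IF NOT EXISTS analytics_wh
  WAREHOUSE_SIZE = 'SMALL'   -- larger sizes consume credits faster per second of runtime
  AUTO_SUSPEND   = 60        -- suspend after 60 idle seconds so credits stop accruing
  AUTO_RESUME    = TRUE;

USE WAREHOUSE analytics_wh;  -- queries in this session are now billed against this warehouse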

Question 62 

Which Snowflake feature allows the capture of row-level changes in near real time?

A) External Table
B) Snowflake Stream
C) Snowflake Stage
D) File Format

Answer: B

Explanation: 

A table representing external cloud files does not maintain change tracking. It simply exposes metadata referencing files stored outside Snowflake. It cannot detect inserts, updates, or deletes because it has no internal change-log mechanism. Its purpose is to provide a read-only representation of external data, not to capture ongoing transactional activity.

A mechanism used for change-data-capture is specifically designed to track row-level insertions, updates, and deletions on supported tables. It retains metadata necessary for downstream consumption and enables near real-time event-driven workflows. It provides delta information rather than full copies of data, making it ideal for pipelines that require continuous ingestion of modifications.

A storage reference for loading and unloading data cannot monitor changes made to tables. It acts as a holding area for files and does not track row modifications. Since stages are independent from table activity, they cannot function as a CDC instrument.

A configuration describing file structure has no connection to transactional changes within Snowflake. Its function relates to ingestion but has no involvement in monitoring rows in tables or capturing deltas.

The correct mechanism for near real-time change capture is the feature designed specifically to track and expose modifications as immutable change records. It supports efficient CDC workflows, ensures retention of tracked changes for defined periods, and allows consumers to process only recent modifications.
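A minimal sketch of this mechanism, assuming a source table named orders: create a stream, then read the delta it has accumulated.

CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

-- The stream exposes only rows changed since it was last consumed, together with
-- METADATA$ACTION and METADATA$ISUPDATE columns describing each change.
SELECT * FROM orders_stream;

Reading the stream inside a committed DML statement (for example, an INSERT ... SELECT) advances its offset, so the next read sees only newer changes.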

Question 63 

What happens when a Snowflake task uses a warehouse that is currently suspended?

A) The task fails immediately
B) The task waits until a user resumes the warehouse
C) Snowflake automatically resumes the warehouse and runs the task
D) The task is skipped without error

Answer: C

Explanation: 

A statement saying the task fails instantly is inconsistent with Snowflake automation capabilities. Tasks are intentionally designed to run scheduled workloads without requiring constant monitoring. Immediate failure would defeat that purpose and create unnecessary interruption in orchestrated pipelines.

A description in which the task waits indefinitely for manual intervention does not match the automated behavior of Snowflake. Tasks depend on autonomous scheduling and compute management. They cannot remain paused waiting for a user; this would halt automation and undermine orchestration continuity.

A claim that the scheduled workload is skipped silently does not align with Snowflake’s reliability guarantees. Skipping would cause data gaps and unreliable pipelines, which would be problematic for production-grade automation.

The correct behavior is that Snowflake automatically resumes the compute resource when needed. When a scheduled workload triggers, the system checks the compute state and resumes it if it is suspended. Once active, the workload executes normally. After the task finishes, compute can enter suspension again based on auto-suspend settings. This design ensures consistent automation and reliable scheduled execution even when compute is not actively running.
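A minimal sketch of this behavior; the task, warehouse, and target table names are hypothetical. The warehouse named in the task is resumed automatically when the schedule fires, even if it was suspended.

CREATE OR REPLACE TASK nightly_refresh
  WAREHOUSE = analytics_wh                    -- resumed automatically if suspended
  SCHEDULE  = 'USING CRON 0 2 * * * UTC'
AS
  INSERT INTO daily_summary
  SELECT CURRENT_DATE, COUNT(*) FROM orders;

ALTER TASK nightly_refresh RESUME;            -- tasks are created suspended and must be resumed once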

Question 64 

What is the primary purpose of a Snowflake resource monitor?

A) To optimize micro-partitions for pruning
B) To track and control credit usage
C) To enforce role-based access policies
D) To automate schema evolution

Answer: B

Explanation: 

A description suggesting that micro-partition optimization is the function of this feature is inaccurate. Pruning efficiency depends on clustering, statistics, and the optimizer. Monitoring tools do not reorganize data or restructure storage.

A claim connecting this feature to access control misunderstands Snowflake’s security boundaries. Access policies are governed by roles, grants, and privileges, not by monitoring utilities. Monitoring tools cannot permit or deny actions based on user identity.

An assertion that it automates schema changes misrepresents the objective of the tool. Schema evolution is handled manually or through automated SQL, but it is not controlled by any consumption tracking mechanism.

The correct purpose is to monitor and manage credit consumption by defining limits and thresholds. Administrators can configure alerts, warnings, and even automated suspension of compute resources to prevent unexpected spend. This functionality is central to cost governance within Snowflake, ensuring visibility and control over credit usage across accounts or specific workloads.
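A minimal sketch of such a configuration; the quota, thresholds, and names are illustrative, and creating monitors typically requires the ACCOUNTADMIN role.

CREATE RESOURCE MONITOR monthly_cap
  WITH CREDIT_QUOTA = 100
  FREQUENCY = MONTHLY
  START_TIMESTAMP = IMMEDIATELY
  TRIGGERS
    ON 80  PERCENT DO NOTIFY     -- warn as spend approaches the quota
    ON 100 PERCENT DO SUSPEND;   -- suspend assigned warehouses once the quota is reached

ALTER WAREHOUSE analytics_wh SET RESOURCE_MONITOR = monthly_cap;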

Question 65 

Which type of Snowflake table is designed for fast, transient data with no long-term retention?

A) Permanent Table
B) Temporary Table
C) Transient Table
D) External Table

Answer: C

Explanation: 

A statement indicating that a fully durable table is intended for short-term usage does not reflect Snowflake’s storage model. This type of table includes full fail-safe protection and long-term persistence, making it unsuitable for ephemeral data.

A description suggesting that a session-scoped table is the ideal object for ongoing transient data overlooks the fact that it disappears when the session ends. It is useful for session-bound processing but not for broader transient workloads requiring durability beyond a single session.

A claim that externally referenced files are designed for fast, short-lived storage is incorrect because external tables do not store any data inside Snowflake at all. They rely entirely on cloud object storage and are not built to handle temporary internal workloads.

The correct answer is the storage category that eliminates fail-safe while retaining some level of persistence. It is optimized for low-cost, short-lived usage such as staging, intermediate calculations, or temporary pipelines. It minimizes storage overhead and avoids long-term retention mechanisms, making it ideal for brief but durable operations within Snowflake.
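A minimal sketch, assuming a staging table used by a pipeline; transient tables skip fail-safe, and the retention setting shown is optional.

CREATE TRANSIENT TABLE staging_orders (
  order_id NUMBER,
  payload  VARIANT
)
DATA_RETENTION_TIME_IN_DAYS = 0;  -- optionally drop Time Travel history as well to minimize storage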

Question 66 

Which Snowflake mechanism ensures that queries can access the same version of data even while concurrent updates occur?

A) Zero-Copy Cloning
B) Time Travel
C) Multi-Version Concurrency Control (MVCC)
D) Search Optimization Service

Answer: C

Explanation: 

A technique enabling instant duplication of objects without physically copying data does not manage concurrent reads and writes. It creates independent metadata snapshots but does not govern transactional consistency during simultaneous operations. Its value is in cloning efficiency, not concurrency handling.

A feature that allows users to query historical versions of data is not responsible for real-time consistency during concurrent updates. It provides temporal querying capabilities, not live transactional coordination. While related to historical snapshots, it does not manage how reads occur while writes are ongoing.

A function designed to enhance selective lookup performance has no relation to concurrent update management. It improves query efficiency for specific patterns by optimizing search paths but provides no mechanism to isolate readers from writers.

The correct mechanism is the concurrency control system that ensures every query sees a stable snapshot of data, regardless of ongoing modifications. This prevents readers from being blocked and allows writers to update without interfering with running queries. It enables consistency and isolation by generating new versions of micro-partitions when updates occur. Readers continue accessing older versions until their queries complete, ensuring consistent results.

Question 67 

Which Snowflake feature automatically generates file-level metadata for structured logs in cloud storage?

A) External Tables with Auto-Refresh
B) Manual Copy into Table
C) Materialized Views
D) Dynamic Data Masking

Answer: A

Explanation: 

A method requiring manual ingestion does not automatically detect or register new files. It requires a user-executed command each time new data appears. There is no autonomous metadata capture or continuous discovery involved.

A structure that precomputes results for faster querying has no integration with cloud storage file updates. Its purpose is performance optimization within Snowflake and does not extend to ingestion or metadata creation for external storage.

A rule-based masking system for sensitive data is unrelated to file detection, ingestion, or metadata management. It applies transformations during query execution but plays no role in discovering new files or building catalog metadata.

The correct feature is the one that identifies new files in cloud storage and refreshes associated metadata automatically. This allows external tables to stay aligned with incoming logs or structured files without requiring user-driven refresh operations. It ensures that metadata about external objects remains current, enabling near real-time reporting or processing of cloud-stored log data.
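A minimal sketch, assuming an external stage named log_stage already points at the cloud location and event notifications are configured for it.

CREATE OR REPLACE EXTERNAL TABLE app_logs
  LOCATION = @log_stage/app/
  FILE_FORMAT = (TYPE = JSON)
  AUTO_REFRESH = TRUE;            -- cloud storage notifications keep the file-level metadata current

ALTER EXTERNAL TABLE app_logs REFRESH;  -- a manual refresh remains available if notifications are delayed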

Question 68 

Which Snowflake command is commonly used to convert semi-structured data into relational columns?

A) INSERT OVERWRITE
B) FLATTEN
C) GRANT USAGE
D) ANALYZE HISTORY

Answer: B

Explanation: 

A command such as INSERT OVERWRITE is designed for replacing the contents of a target table with a new set of rows supplied by a query. Although this operation is powerful for refreshing entire datasets or restructuring table content at the row level, it does not carry any functionality for parsing or interpreting hierarchical structures such as JSON, XML, or AVRO. INSERT OVERWRITE simply exchanges one dataset for another and does not have built-in awareness of nested keys, repeated elements, or semi-structured patterns that require expansion into relational columns. It provides no mechanisms for traversing arrays, unrolling objects, or extracting repeated elements into separate rows. Because it focuses entirely on replacement rather than transformation, it cannot be used to convert semi-structured content into a normalized relational form.

A command like GRANT USAGE operates in the domain of security and permissions. It is exclusively responsible for granting users or roles the ability to access certain objects, such as warehouses, schemas, or stages. While it is essential for enabling appropriate access rights within Snowflake, it contributes nothing to the process of reshaping, expanding, or breaking down semi-structured data. Security statements do not interpret content, do not parse nested values, and do not create relational views of complex data. Their sole function is controlling access, not restructuring data.

A history-oriented command such as ANALYZE HISTORY retrieves metadata about actions previously performed on an object. This may include change events or operational metadata, but it has no role in interpreting or flattening hierarchical data structures. It does not read arrays, does not expose nested attributes, and does not provide tools for converting complex formats into relational columns. It is purely an auditing mechanism without any transformation capabilities.

The correct construct is FLATTEN, which is specifically designed to work with semi-structured formats such as JSON. FLATTEN is typically used with a lateral join to unpack arrays or objects, producing a row-per-element output that enables SQL engines to treat nested values as relational rows. This makes it possible to normalize layers of deeply nested structures, capture repeated keys, and expose hierarchical content as columns suitable for analytical querying. By expanding arrays into rows and exposing object keys as fields, FLATTEN enables the relational modeling of complex semi-structured data, making it a fundamental tool when working with nested JSON, XML, or other hierarchical formats in Snowflake.

Question 69 

Which Snowflake object stores metadata about the locations of files referenced in external stages?

A) File Format
B) External Table
C) Role
D) Sequence

Answer: B

Explanation: 

A file format object in Snowflake defines how files should be parsed when ingested or queried. It includes parameters such as delimiters, compression type, field mappings, binary formats, and file structure rules. However, it does not store any metadata about the actual file paths or storage locations where the files reside. A file format is applied at read or load time to instruct Snowflake on how to interpret raw content, not to catalog what files exist or track external storage objects. Thus, while essential for correct parsing, it contains no structural metadata about external file listings. 

A role is a security construct that acts as a container for privileges. Its purpose is to control access to Snowflake resources, allowing administrators to assign permissions such as USAGE, SELECT, or OPERATE to appropriate users. Roles do not maintain metadata about files, external references, or storage systems. They do not read from cloud storage, nor do they track which files exist in an external stage. Their scope is limited strictly to authorization, not data inspection or metadata tracking. 

A sequence object generates incrementing numeric values for use in surrogate keys or procedural logic. It is entirely independent of external storage layers, file catalogs, or queryable metadata structures. Sequences do not interact with cloud storage, do not reference stages, and do not maintain records of external data sources. Their purpose is autonomous value generation, unrelated to file management.

The correct object is an external table, which is specifically designed to store metadata about files located in an external stage. External tables maintain information about which files are present, their modification timestamps, partition values, and ingestion status. They provide a metadata layer that makes it possible to query external cloud files through SQL without loading them into Snowflake. By mapping the external file system into a table-like structure, external tables allow the SQL engine to understand which files are available, determine their readiness, and expose them for querying, transformation, or pipeline processing.
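As an illustration of that metadata layer, assuming the hypothetical external table app_logs from the earlier example, the pseudo-columns expose the underlying file information directly.

SELECT METADATA$FILENAME,          -- path of the staged file each row came from
       METADATA$FILE_ROW_NUMBER,   -- row position within that file
       value                       -- the row content itself as a VARIANT
FROM app_logs
LIMIT 10;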

Question 70 

What is the main advantage of using Snowflake’s dynamic tables for data pipelines?

A) They automatically encrypt data at rest
B) They continuously process upstream changes into downstream tables
C) They eliminate the need for warehouses
D) They prevent time travel from being used

Answer: B

Explanation: 

An assertion that dynamic tables automatically handle encryption misunderstands the built-in security model. Snowflake encrypts all data at rest by default, regardless of table type. Dynamic tables do not modify or influence encryption behavior.

A claim that they remove the requirement for compute resources is inaccurate. Dynamic tables still rely on compute services behind the scenes. Snowflake manages the compute automatically, but warehouses are still required conceptually, even if not explicitly visible to the user.

A suggestion that they disable temporal querying is not correct. Time travel operates independently of dynamic tables and remains available unless intentionally reduced or disabled based on retention settings.

The correct benefit is that dynamic tables keep downstream tables synchronized with upstream changes. They continuously evaluate the defined query and automatically maintain derived data. This allows pipelines to update incrementally without user intervention, reducing operational complexity and improving freshness.
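A minimal sketch with hypothetical names; the TARGET_LAG setting tells Snowflake how fresh the derived table must be kept relative to its upstream sources.

CREATE OR REPLACE DYNAMIC TABLE daily_revenue
  TARGET_LAG = '15 minutes'
  WAREHOUSE  = analytics_wh
AS
  SELECT order_date, SUM(amount) AS revenue
  FROM orders
  GROUP BY order_date;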

Question 71 

Which Snowflake feature enables fast querying of repeated patterns in semi-structured data by optimizing lookup paths?

A) Clustering Keys
B) Search Optimization Service
C) Time Travel
D) Row Access Policies

Answer: B

Explanation: 

A mechanism such as clustering keys focuses on improving micro-partition pruning based on the physical arrangement of data but does not provide any specialized optimization for selective lookups through deeply nested paths inside semi-structured structures like JSON, XML, or Avro. Clustering keys work by grouping related rows together on disk according to specified columns, allowing Snowflake to prune partitions more efficiently when those columns are heavily filtered in the query. However, semi-structured data often contains nested elements, repeated objects, and hierarchical patterns, and clustering keys do not create the type of supplemental path-level metadata needed to speed up repeated filtered access inside those complex structures. Thus, while valuable for relational workloads, clustering keys offer no targeted acceleration for nested path queries.

A temporal capability such as Time Travel exists for restoring earlier table states and analyzing past data but does not contribute to optimizing lookup operations. Time Travel keeps older versions of micro-partitions to support historical queries, recovery from accidental modifications, and auditing scenarios. This mechanism operates at the level of version retention and not at the level of enhancing data access patterns, especially those that occur within repeated semi-structured paths. Because it does not generate lookup indexes or path maps, it does not help performance in searches involving highly selective nested filters.

A row-level security construct like a row access policy ensures that users only see rows they are entitled to view. These policies enforce access logic dynamically based on roles, attributes, or context but do not improve query speed or modify underlying lookup patterns. Their purpose is entirely security-driven rather than performance-driven, and thus they do not have any ability to accelerate selective filtering inside hierarchical semi-structured content.

The correct feature, the Search Optimization Service, is designed to handle exactly this type of workload. When enabled, it builds additional metadata structures that operate similarly to acceleration paths, allowing Snowflake to read far fewer micro-partitions when executing highly selective conditions, especially within repeated or deeply nested fields of semi-structured data. This supplemental metadata drastically reduces the need for full scans, which is particularly beneficial when users repeatedly query the same nested keys. The service allows Snowflake to efficiently pinpoint relevant partitions, decreasing both latency and compute consumption. It is purpose-built for scenarios where users frequently search semi-structured data on the same paths, making it ideal for applications involving complex, nested analytical workloads.
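A minimal sketch, assuming a table events with a VARIANT column payload that is repeatedly filtered on the same nested path.

ALTER TABLE events ADD SEARCH OPTIMIZATION ON EQUALITY(payload:device.id);

-- Highly selective lookups on that path can now skip most micro-partitions.
SELECT *
FROM events
WHERE payload:device.id::STRING = 'D-10492';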

Question 72 

Which type of Snowflake caching stores result sets that can be returned instantly when identical queries run?

A) Data Cache
B) Metadata Cache
C) Query Result Cache
D) Remote Disk Cache

Answer: C

Explanation: 

A cache layer such as the data cache exists primarily within Snowflake’s virtual warehouse memory and is designed to store micro-partition data retrieved from the remote storage layer. It accelerates subsequent queries that need to access the same data blocks because the warehouse does not need to re-download the partition files again. However, this type of cache still requires Snowflake to execute the query, perform the necessary filtering, aggregation, or joins, and generate the final result. Since it does not store an entire query’s output, it does not offer the immediate return of results for identical queries. Instead, it simply reduces some of the underlying read operations.

A metadata cache contains information like table structures, column definitions, micro-partition statistics, and pointer locations. It helps Snowflake plan queries faster because the system does not need to repeatedly retrieve metadata from persistent storage. Metadata caching improves performance during the query compilation phase but has no capability to reproduce the full results of a previously executed statement. It assists in planning, not in returning final outputs.

A remote disk cache is not a performance-oriented caching mechanism at all. Instead, it refers to the persistent storage foundation on which Snowflake keeps all data. This storage tier provides durability and scalability, but it does not operate as a cache, nor does it provide any acceleration for repeated queries. Every query executed always requires compute processing unless a separate mechanism is used to bypass re-execution entirely. Thus, a remote storage layer cannot deliver instant retrieval of full result sets.

The correct caching layer is the query result cache, which stores the actual results generated by previously executed queries. When an identical query is run again, the underlying data has not changed in a way that would invalidate the cache, and the executing role has the privileges needed to access the result, Snowflake can immediately return the stored result set, regardless of which warehouse, if any, is in use. No compute operations, no scanning, and no reprocessing are necessary. This mechanism provides rapid responses and reduces compute consumption dramatically. It is particularly effective in dashboards, repeated analytical workflows, and any scenario involving recurring identical queries. By storing entire result sets instead of raw data or metadata, it allows Snowflake to deliver instantaneous outputs whenever valid cached results are available.
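A small illustration; USE_CACHED_RESULT is the session parameter governing reuse, and the query text is hypothetical.

ALTER SESSION SET USE_CACHED_RESULT = TRUE;  -- reuse is on by default; shown here for clarity

SELECT region, SUM(amount) FROM orders GROUP BY region;
-- Re-running the identical statement while the underlying data is unchanged returns
-- the stored result set without any warehouse compute.
SELECT region, SUM(amount) FROM orders GROUP BY region;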

Question 73 

Which Snowflake security feature allows masking of sensitive fields based on user identity?

A) Network Policies
B) Access History
C) Dynamic Data Masking
D) Data Retention Policy

Answer: C

Explanation: 

A network policy is designed to control the IP address ranges or network locations from which users can connect to Snowflake. Its purpose is to create a perimeter security control that blocks unauthorized network sources and protects the environment from external threats. While important for access control, it does not apply any form of transformation, masking, or selective visibility on specific data fields within tables. Network policies act only at the connection level and have no influence on how data is displayed to different users once they are authenticated.

An auditing capability like Access History is designed to help administrators and auditors track the actions performed within Snowflake, such as which users executed which statements, what objects were accessed, and what privileges were used. Access History captures lineage and activity trails but does not take any action during query execution to obscure, redact, or transform data values. Its role is to provide visibility into past operations, not to enforce dynamic masking rules.

A mechanism such as a data retention policy determines how long historical versions of data remain available for recovery, cloning, and time travel. These policies specify the retention period but perform no role in altering data visibility. They do not inspect user identity, nor do they execute logic that hides or modifies sensitive content. Their functionality is tied strictly to historical data preservation rather than real-time access control.

The correct security feature is Dynamic Data Masking, which enables conditional masking at query time based on user identity, role, or other contextual factors. With dynamic masking, Snowflake can apply expressions that return masked or obfuscated values for unauthorized users while presenting the original values to privileged users. This ensures that sensitive data such as personally identifiable information or financial records is protected dynamically without changing the underlying stored data. The masking logic is embedded in policy definitions that evaluate at runtime, ensuring consistent enforcement without duplicating or altering stored values. By enabling differentiated access based on user identity, dynamic masking supports compliance requirements, reduces exposure risk, and simplifies governance by centralizing sensitive-data protection within Snowflake’s policy framework.
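A minimal sketch of a masking policy and its attachment; the role, table, and column names are hypothetical.

CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val  -- privileged roles see the real value
    ELSE '*** MASKED ***'                          -- everyone else sees an obfuscated value
  END;

ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;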

Question 74 

Which Snowflake function allows users to view changes made to objects such as queries, roles, or privileges over time?

A) INFORMATION_SCHEMA.TABLES
B) DESCRIBE HISTORY
C) SYSTEM$CLUSTERING_DEPTH
D) VALIDATE

Answer: B

Explanation: 

A system view such as INFORMATION_SCHEMA.TABLES provides current metadata describing tables, their structures, and their associated properties. While useful for understanding the state of objects at a given moment, it does not contain any time-based information. It cannot show previous configurations, historical changes, modifications to privileges, or adjustments to object properties over time. It is purely a static metadata source rather than a historical tracking mechanism.

A function such as SYSTEM$CLUSTERING_DEPTH focuses exclusively on evaluating how well a table’s micro-partitions are clustered. It provides metrics on the distribution of values relative to declared clustering keys but has no ability to report on historical modifications to Snowflake objects. It is entirely performance-oriented, measuring clustering quality, and does not track or display changes to roles, privileges, queries, or other objects.

A command like VALIDATE performs checks on data quality, constraints, or conditions. It evaluates current values and determines whether they meet specific requirements. However, it does not provide any form of historical insight into previous changes, object evolution, or administrative modifications. Its scope is limited to immediate validation and does not store or present historical context.

The correct capability is DESCRIBE HISTORY, a function that allows users to examine the historical evolution of objects such as tables, stages, and other structures. It displays changes including column additions, privilege modifications, query alterations, or other administrative adjustments. This function offers insight into how an object has evolved over time, enabling users to audit alterations, troubleshoot unexpected behavior, review administrative actions, or analyze the sequence of operations that have modified an object. By surfacing historical metadata, DESCRIBE HISTORY helps administrators maintain transparency, compliance, and accountability within Snowflake’s environment. It is the primary tool for viewing time-based changes to object definitions and configurations.

Question 75 

Which Snowflake feature enables multiple independent compute clusters to process the same database without conflict?

A) Multi-Cluster Warehouses
B) Secondary Indexes
C) Fail-Safe
D) Secure Views

Answer: A

Explanation: 

A structure intended for performance enhancement through selective lookups, such as a traditional index, is not a native Snowflake component and therefore cannot provide any form of compute isolation. In many legacy database systems, indexes are relied upon to accelerate point lookups or improve search efficiency. However, Snowflake is architected differently and does not use traditional indexing mechanisms because its micro-partitioning, metadata optimization, and columnar storage already provide the foundation for efficient query performance. More importantly, indexing—whether hypothetical or conceptual—does nothing to address concurrency challenges. It cannot balance independent workloads, scale compute dynamically, or prevent large numbers of simultaneous users from contending for the same computational resources. Since indexing does not expand compute capacity and offers no mechanism for isolating workload processing, it cannot be the correct answer for a question centered around independent compute clusters.

A long-term data recovery mechanism, such as Fail-safe, is designed strictly for extreme data protection scenarios and not for managing compute performance. Fail-safe exists as a final protective window that allows Snowflake to recover historical versions of micro-partitions if catastrophic operational failures occur. Its entire function lies in safeguarding stored data, not dividing workloads or increasing concurrency. Fail-safe cannot spin up multiple clusters, distribute tasks, or mitigate performance issues caused by high user activity. Because it operates at the data durability layer, completely separate from compute operations, it has no ability to affect or influence compute-level isolation.

A view enabling secure data sharing, such as a secure view used inside a data share, also does not contribute to compute isolation. Secure views are created for governance—allowing providers to expose data in a controlled, masked, or restricted manner to consumers. They manage what data is visible, not how it is processed or how many compute engines are available for execution. Even though secure views are useful for masking sensitive information or limiting exposure of specific columns, they do not distribute workloads, scale compute power, or create parallel processing environments. Their purpose is access control, not compute segmentation.

The correct component, Multi-Cluster Warehouses, enables Snowflake to run several independent compute clusters simultaneously under the same warehouse definition. Each cluster operates separately, handling its own set of queries without interfering with other clusters. This architecture provides true concurrency scaling, allowing Snowflake to support large volumes of simultaneous workloads while maintaining consistent performance. By automatically adding or removing clusters based on demand, Multi-Cluster Warehouses prevent resource contention and eliminate queuing, ensuring that users and processes can access the same database without conflict.
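A minimal sketch; multi-cluster warehouses require Enterprise Edition or higher, and the size and cluster counts below are illustrative.

CREATE OR REPLACE WAREHOUSE bi_wh
  WAREHOUSE_SIZE    = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4          -- additional clusters start automatically under heavy concurrency
  SCALING_POLICY    = 'STANDARD';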

Question 76 

Which Snowflake feature provides an automated way to optimize micro-partition layouts for improved pruning?

A) Clustering Keys
B) Materialized Views
C) Snowflake Tasks
D) Resource Monitors

Answer: A

Explanation: 

A structure that stores precomputed results, such as a materialized view, certainly improves performance for repeated queries but does not influence how data is physically arranged in micro-partitions. Materialized views operate at the logical query layer; they do not reorganize storage or attempt to improve pruning efficiency. Their goal is to provide ready-made answers to specific query patterns rather than enhance data layout. Because of this, they cannot optimize pruning or change micro-partition boundaries.

A scheduling mechanism like Snowflake Tasks focuses entirely on automation and workflow execution. Tasks trigger SQL statements on intervals or dependency graphs, but they have no role in governing the physical structure of stored data. They do not observe micro-partition metadata nor instruct Snowflake to modify storage. Their function is execution timing, not storage optimization, meaning they cannot enhance query pruning or micro-partition efficiency.

A monitoring construct such as a Resource Monitor provides financial governance rather than performance tuning. Its purpose is to track and potentially suspend compute usage when credit thresholds are exceeded. Resource Monitors do not interact with table storage, clustering logic, or pruning mechanisms. They provide operational oversight but have no influence over how Snowflake arranges data on disk or how efficiently queries prune partitions.

The correct feature, clustering keys, directly affects micro-partition organization. When a user defines clustering keys, Snowflake uses those column values to guide the ordering of data within micro-partitions. Over time, with automatic clustering enabled, Snowflake reorganizes partitions to improve the alignment between data values and partition boundaries. This results in predictable ranges of values inside micro-partitions, increasing the effectiveness of pruning. Better pruning means Snowflake scans fewer micro-partitions during query execution, improving efficiency and reducing compute consumption. Clustering keys are therefore a storage-level optimization mechanism that enhances performance by influencing data layout, making them the correct answer.
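A minimal sketch, assuming a large table events that is filtered mostly by date and customer.

ALTER TABLE events CLUSTER BY (event_date, customer_id);

-- Automatic Clustering then reorders micro-partitions in the background; it can be
-- paused and resumed per table if needed.
ALTER TABLE events SUSPEND RECLUSTER;
ALTER TABLE events RESUME RECLUSTER;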

Question 77 

Which Snowflake functionality enables secure, controlled sharing of selected datasets with external consumers without copying data?

A) Replication Group
B) Secure Share
C) Database Failover
D) Snowpipe

Answer: B

Explanation: 

A replication group involves copying objects, metadata, and sometimes entire databases between Snowflake accounts or regions. While replication enables disaster recovery, failover, and read-scale scenarios, it is fundamentally based on duplication. This means the consumer receives a full copy of the replicated objects rather than a shared, pointer-based view of the provider’s data. Replication does not support selective exposure of specific tables nor provide a mechanism to permit external accounts to query data in place. Because of this, it does not satisfy the requirement of controlled, non-copy data sharing with external parties.

A failover construct such as a database failover group is designed for business continuity. It allows organizations to promote secondary replicas when a primary region becomes unavailable. Although this is essential for high availability and disaster preparedness, it does not involve granting access to external consumers. It focuses on operational resilience within an organization’s own Snowflake footprint and cannot be used for governed sharing of datasets across account boundaries.

A continuous ingestion mechanism like Snowpipe exists solely to automate file loading into Snowflake. It monitors cloud storage locations and triggers ingestion when new files arrive. Snowpipe does not handle data governance, sharing, or access controls. It neither exposes data to external consumers nor enables controlled access across accounts. Its role is ingestion, not sharing.

The correct feature, Secure Share, allows Snowflake providers to expose selected objects to consumers without moving or duplicating physical data. Secure Shares use metadata-level pointers so consumers query the provider’s data in place. This results in strong governance because the provider retains full control over which tables, views, and schemas are exposed, and no physical copies leave the account. Secure Shares ensure that external consumers always access current data and that permissions can be revoked instantly. This makes Secure Share the only Snowflake feature that supports secure, controlled data sharing without copying.
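A minimal sketch of the provider-side steps, using hypothetical database, secure view, and consumer account names.

CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON VIEW sales_db.public.v_orders_secure TO SHARE sales_share;  -- a secure view
ALTER SHARE sales_share ADD ACCOUNTS = partner_org.partner_account;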

Question 78 

Which Snowflake table type automatically drops itself when the session that created it ends?

A) Permanent Table
B) Transient Table
C) Temporary Table
D) External Table

Answer: C

Explanation: 

A permanent table is the default table type in Snowflake and includes full long-term durability. It benefits from both time travel and fail-safe retention, ensuring historical recoverability. Because permanent tables are designed for long-term storage and continuity, they do not disappear automatically. They must be explicitly dropped by a user. Their characteristics make them unsuitable for temporary or session-based processing and therefore not aligned with the requirement of automatic removal.

A transient table is intended for scenarios where data does not require fail-safe but may still need to persist beyond a single session. While transient tables reduce storage costs by eliminating fail-safe storage, they remain in Snowflake until a user manually drops them or until retention policies purge history. They do not tie their lifecycle to a session. Thus, they do not fulfill the requirement of automatic removal when the session ends.

An external table references data stored outside Snowflake, typically in cloud storage such as S3, Azure Blob, or GCS. External tables serve as metadata pointers to external files and do not manage lifecycle rules related to session termination. They are persistent metadata constructs that remain active until explicitly deleted. Because they are not tied to session duration, they cannot automatically expire upon session closure.

The correct answer is the temporary table, which is explicitly designed to support short-term processing tasks that do not require long-term retention. When a session creates a temporary table, its lifecycle becomes tied to that session. Once the session closes, for any reason, the table and all its data are automatically dropped by Snowflake. Temporary tables offer a secure, isolated workspace for intermediate calculations, staging, or transformations that are only relevant during active work. They have no fail-safe, no cross-session visibility, and no need for manual cleanup, making them ideal for transient computations. Their session-scoped lifecycle makes them the correct answer.
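A minimal sketch; the table below exists only for the creating session and vanishes when the session ends.

CREATE TEMPORARY TABLE session_scratch AS
SELECT order_id, amount
FROM orders
WHERE order_date = CURRENT_DATE;   -- intermediate working set, dropped automatically at session end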

Question 79 

Which Snowflake service supports measuring the depth and quality of micro-partition clustering?

A) SYSTEM$CLUSTERING_INFORMATION
B) SYSTEM$PIPE_STATUS
C) SYSTEM$TASK_DEPENDENTS
D) SYSTEM$CANCEL_QUERY

Answer: A

Explanation: 

A function such as SYSTEM$PIPE_STATUS serves a completely different operational purpose. It is designed to report details about Snowpipe ingestion processes—file ingestion metrics, execution status, and error information. While essential for monitoring continuous loading pipelines, it has no relationship to micro-partition organization or clustering efficiency. It cannot evaluate how well data aligns to clustering keys, making it irrelevant to measuring clustering quality.

A function like SYSTEM$TASK_DEPENDENTS provides dependency-graph visibility for tasks. It helps administrators understand which tasks rely on others and how workflow chains are structured. Although this is valuable for orchestration, it does not analyze table storage or micro-partition statistics. It cannot comment on clustering depth, clustering quality, or pruning performance.

A command such as SYSTEM$CANCEL_QUERY terminates actively running queries. Its role is operational control, specifically for stopping runaway or misbehaving queries. This function does not interpret table structures, evaluate partition alignment, or calculate clustering metrics. Cancelling a query does not provide insight into how efficiently data is arranged within micro-partitions.

The correct function, SYSTEM$CLUSTERING_INFORMATION, is specifically designed to provide detailed insights into clustering quality. It analyzes the distribution of clustering key values within micro-partitions and computes clustering depth, which indicates how well data conforms to the expected ordering. It reveals whether micro-partitions contain overlapping or scattered ranges that lead to inefficient pruning. By examining these metrics, users can determine whether reclustering is needed to restore optimal performance. SYSTEM$CLUSTERING_INFORMATION therefore plays a critical role in storage optimization and is the correct answer.
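A small illustration of the call, assuming the hypothetical clustered events table from earlier examples.

SELECT SYSTEM$CLUSTERING_INFORMATION('events', '(event_date, customer_id)');
-- Returns a JSON document including average_depth, average_overlaps, and a
-- partition_depth_histogram that indicate how well partitions align with the keys.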

Question 80 

Which Snowflake feature allows recreating a table as it existed at a previous point in time?

A) Zero-Copy Cloning
B) Time Travel
C) Task Graphs
D) Data Sharing

Answer: B

Explanation: 

A mechanism such as zero-copy cloning creates instantaneous logical copies of Snowflake objects by referencing existing micro-partitions. Although cloning is an efficient way to duplicate objects without copying data, it only clones the current state of a table, not a historical version. Zero-copy cloning does not interact with old micro-partition states and cannot reconstruct a table at a previous timestamp. Therefore, it does not qualify as a feature that restores historical versions.

A workflow construct such as task graphs coordinates the execution order of tasks. They help maintain structured pipelines and dependencies but do not preserve historical versions of data. Task graphs are entirely unrelated to historical reconstruction, rollback capability, or time-based recovery. They operate in the orchestration domain, not the historical data domain.

A sharing mechanism such as data sharing allows read-only access to datasets from other accounts but does not restore older snapshots. It simply exposes existing objects to consumers as they exist now. Data sharing cannot look back in time or recreate earlier states.

The correct feature, time travel, allows Snowflake users to query or restore data as it existed at prior points in time. Snowflake retains historical versions of micro-partitions for a defined retention period, enabling reconstruction of earlier states. With time travel, users can query older snapshots, clone historical versions, or even restore tables that were accidentally modified or dropped. This makes time travel a powerful mechanism for recovery, auditing, debugging, and historical analysis. Because it directly enables recreation of tables as they existed previously, it is the correct answer.
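A minimal sketch of the typical recovery patterns, using hypothetical names and an offset of one hour.

SELECT * FROM orders AT (OFFSET => -3600);                       -- query the table as it was an hour ago

CREATE TABLE orders_restored CLONE orders AT (OFFSET => -3600);  -- recreate that earlier state as a new table

UNDROP TABLE orders;                                             -- recover a recently dropped table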
