Snowflake SnowPro Core Exam Dumps and Practice Test Questions Set 1 Q1-20

Visit here for our full Snowflake SnowPro Core exam dumps and practice test questions.

Question 1: 

Which Snowflake feature enables automatic separation of compute and storage for elastic scalability?

A) Virtual Warehouses
B) External Tables
C) Serverless Tasks
D) Stream Objects

Answer: A

Explanation: 

Virtual warehouses in Snowflake serve as the compute layer that operates independently from the storage layer. They allow for elastic scaling by enabling compute clusters to be resized, suspended, or resumed at any time without affecting stored data. This independence is fundamental to Snowflake’s architecture and enables efficient cost management and workload isolation. External tables provide access to data stored outside Snowflake in external stages such as cloud storage. While they offer flexibility in connecting external datasets, they are not responsible for compute or storage separation. Their purpose is centered around referencing external files rather than enabling elasticity. 

Serverless tasks are designed to automate SQL execution without requiring users to manage compute resources directly. Although they utilize Snowflake-managed compute resources, they do not define the architecture that separates compute from storage. Their role is in orchestration, and they rely on the underlying compute model rather than shaping it. Stream objects track data changes in tables for incremental processing. They provide change data capture functionality, which is important for pipelines, but they have no influence on how compute and storage resources are deployed or scaled.

The correct choice is virtual warehouses because they define the compute engine Snowflake uses for all query processing and warehouse-based computation. Their ability to scale independently ensures that high query volume or large analytical workloads do not force storage adjustments. Snowflake stores data in cloud object storage, while computation happens dynamically through virtual warehouses that can operate concurrently without interference. This leads to advantages such as concurrent workloads, isolated resource consumption, and predictable performance. The other features have specific operational roles but do not shape Snowflake’s core architecture. The separation of compute and storage is rooted directly in the virtual warehouse mechanism, making it the essential component for scalable compute execution in Snowflake.
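
To make this concrete, here is a minimal Snowflake SQL sketch; the warehouse name and settings are illustrative, not part of the exam material:

CREATE WAREHOUSE analytics_wh            -- compute only; no storage is allocated here
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 300                     -- suspend after 5 minutes of inactivity
  AUTO_RESUME = TRUE;

ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'LARGE';   -- resize compute without touching stored data
ALTER WAREHOUSE analytics_wh SUSPEND;                        -- stop compute billing; data stays in cloud storage

Resizing or suspending the warehouse changes only the compute layer, which is exactly the separation the question describes.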

Question 2: 

What component of Snowflake is responsible for managing metadata and coordinating query execution?

A) Cloud Services Layer
B) Micro-partitions
C) Fail-safe Storage
D) Internal Stages

Answer: A

Explanation: 

The cloud services layer in Snowflake performs critical functions such as authentication, access control, metadata management, and query optimization. It acts as the intelligence layer that coordinates how compute resources interact with storage and ensures that user requests are processed correctly. This layer also manages infrastructure operations like security, replication, and transaction control. Micro-partitions are small data segments Snowflake automatically creates in storage. They hold the physical data and include metadata such as value ranges and compression details. While instrumental for performance and pruning, they do not coordinate queries or manage global metadata for the platform. Their purpose is operational efficiency within storage rather than system-level coordination. 

Fail-safe storage is intended for disaster recovery and offers a seven-day window after time travel retention ends. It allows Snowflake engineers to recover historical data under specific conditions but plays no role in query execution. It is a safety mechanism rather than an orchestrator of compute or metadata operations. Internal stages provide temporary storage for loading and unloading data. They facilitate movement between Snowflake and external locations but have no influence over query planning or processing. Their focus is on file staging rather than system logic. 

The correct answer is the cloud services layer because it is the central component responsible for managing how Snowflake operates. It oversees metadata, which is vital for determining where data resides, how queries should be optimized, and how compute resources should be allocated. Without this layer, Snowflake would not be able to deliver automatic optimization, elasticity, or secure access control. The other components contribute to storage, resiliency, or data movement and do not handle coordination or system intelligence. By managing these foundational activities, the cloud services layer ensures Snowflake’s architectural consistency across different cloud providers and supports the platform’s ability to provide near-infinite scalability and ease of management.

Question 3: 

Which Snowflake caching layer stores query results for rapid reuse without recomputation?

A) Result Cache
B) Metadata Cache
C) Remote Disk Cache
D) Stage Cache

Answer: A

Explanation: 

The result cache contains the final output of previously executed queries. If a user repeats the same query with identical conditions and no underlying data changes, Snowflake can return the result instantly from this cache without reprocessing. This leads to significant performance benefits and reduced compute usage, especially for analytical queries that are accessed frequently. The metadata cache stores structural and statistical information about objects such as tables and partitions. Although it accelerates query planning and pruning decisions, it does not contain actual query results. Its purpose is focused on assisting the optimizer rather than providing reusable results. 

Remote disk cache is not a named caching layer in Snowflake. Virtual warehouses do cache recently accessed table data on local storage while they run, but that warehouse cache is managed transparently and is not the layer that returns completed query results. Any terminology referring to a separate "remote disk cache" does not represent a Snowflake caching layer. Stage cache refers to storage areas used for loading and unloading data. While internal stages may temporarily hold files, they do not function as a cache for query outcomes. Their usage is restricted to data ingestion and extraction workflows, not accelerating analytical queries.

The correct answer is the result cache because it directly stores the output of executed queries for potential reuse. When the underlying data has not been modified, Snowflake can immediately return results without resuming or using a virtual warehouse, which reduces both cost and compute load. The other caching layers serve complementary roles but do not handle the return of previously computed results. Snowflake’s caching system is designed to balance compute efficiency with responsiveness, and the result cache is the most impactful layer for query repetition scenarios. It plays a crucial role in delivering performance while supporting Snowflake’s pay-per-compute billing model, making query execution more cost-effective.
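
As a simple illustration (the SALES table is hypothetical, not part of the question), repeating an identical query against unchanged data can be answered from the result cache, and cache reuse can be toggled for benchmarking:

SELECT region, SUM(amount) FROM sales GROUP BY region;   -- first run executes on a warehouse
SELECT region, SUM(amount) FROM sales GROUP BY region;   -- identical rerun can be served from the result cache

ALTER SESSION SET USE_CACHED_RESULT = FALSE;             -- disable result-cache reuse for testing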

Question 4:

Which Snowflake construct enables ingesting data incrementally by tracking row-level changes?

A) Streams
B) Views
C) File Formats
D) Stages

Answer: A

Explanation: 

Streams record row-level insertions, updates, and deletions in a table, enabling incremental processing in pipelines. They provide a transactionally consistent change set representing what has been modified since the last point of consumption. This makes them essential for near-real-time ingestion and ETL or ELT workflows that depend on capturing changes efficiently. Views are virtual objects that present results of stored queries. While helpful for abstraction and data modeling, they do not record underlying changes. They reflect current table states rather than tracking modifications. They serve a different purpose centered around logical simplification instead of change capture. 

File formats define how Snowflake interprets data files such as CSV, JSON, or Parquet. They are necessary for loading and unloading but have no awareness of changes to table rows. Their use is limited to file structure interpretation and does not involve tracking data evolution. Stages hold files for loading and unloading between Snowflake and external storage locations. They act as temporary or external repositories but do not store information about how table data changes over time. Their responsibility is data movement rather than incremental tracking. 

The correct answer is streams because they provide the mechanism Snowflake uses to support change data capture. They enable incremental ingestion by exposing only what has changed, avoiding full-table scans or manual tracking processes. The other constructs contribute to data presentation or loading workflows but do not maintain change history. Streams integrate cleanly with tasks and pipes, supporting automated ingestion and transformation pipelines. Their design ensures efficient processing, reduces compute consumption, and supports scalable data engineering workflows.
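
A minimal sketch of the pattern, assuming hypothetical ORDERS and ORDERS_HISTORY tables with illustrative columns:

CREATE STREAM orders_stream ON TABLE orders;     -- begins tracking inserts, updates, and deletes

-- Consuming the stream in a DML statement advances its offset, so only new changes appear next time
INSERT INTO orders_history (order_id, amount)
SELECT order_id, amount
FROM orders_stream
WHERE METADATA$ACTION = 'INSERT';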

Question 5:

Which Snowflake feature ensures that historical data can be queried or restored for a retention period?

A) Time Travel
B) External Functions
C) Masking Policies
D) Clustering Keys

Answer: A

Explanation: 

Time travel allows users to query or restore data as it existed in the past. It supports operations such as recovering dropped objects, restoring previous table versions, and executing queries against historical states. This makes it invaluable for auditing, troubleshooting, and recovering accidental modifications within the retention window. External functions enable Snowflake to call external services or APIs. While powerful for extending capabilities, they do not support historical data access or recovery. Their purpose revolves around integrating outside logic into Snowflake rather than managing previous data states.

Masking policies dynamically protect sensitive information by transforming values at query time based on governance rules. While important for data privacy, they do not provide access to previous versions of data or historical querying functionality. Clustering keys influence how micro-partitions are organized for improved performance. They enhance pruning efficiency for large tables but do not manage older versions of data or enable restoration. The correct answer is time travel because it provides controlled access to historical data and supports recovery operations within the defined retention period. It is central to Snowflake’s resilience and auditing capabilities, offering a reliable safety net for accidental changes. The other features address compute extension, security, or performance improvements and are not designed for historical preservation. Time travel leverages Snowflake’s immutable storage design, ensuring users can confidently explore past states or revert changes without complex versioning systems.
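
For example, with a hypothetical ORDERS table still inside its retention window, historical states can be queried directly:

SELECT * FROM orders AT(OFFSET => -3600);                 -- the table as it existed one hour ago
SELECT * FROM orders BEFORE(STATEMENT => '<query_id>');   -- the table just before a given statement (placeholder ID)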

Question 6:

Which Snowflake feature enables continuous data ingestion from cloud storage with minimal management overhead?

A) Snowpipe
B) Materialized Views
C) Replication
D) Multi-Cluster Warehouses

Answer: A

Explanation: 

Snowpipe provides continuous ingestion capabilities by automatically loading new files as they arrive in cloud storage. It reduces the need for manual batch loading by detecting new files in stages through cloud event notifications or REST API calls. Its serverless nature eliminates the need to manage compute, making it highly efficient for real-time or near-real-time ingestion workflows. Materialized views store precomputed query results and refresh when underlying data changes. Although designed to improve query performance, their purpose does not involve ingesting new files or automating data arrival processes. They contribute to analytical acceleration rather than pipeline automation.

Replication facilitates copying databases or accounts across regions or cloud platforms. It supports disaster recovery, business continuity, and cross-region access, but it has no involvement in loading new external files. Its main focus is on redundancy and availability. Multi-cluster warehouses allow compute clusters to scale automatically based on concurrency. They handle spikes in workload demand, providing elasticity for many simultaneous users. However, while they benefit query performance, they do not handle ingesting files or automating data arrival detection.

Snowpipe stands out because it offers a lightweight, scalable approach for ongoing ingestion without requiring warehouse management. It detects file arrivals through cloud event notifications or REST API calls and leverages serverless compute so ingestion happens immediately and consistently. This makes it particularly effective for pipelines that must react quickly to new data. The other features—materialized views, replication, and multi-cluster warehouses—serve distinct purposes in analytics acceleration, resiliency, and concurrency management. None of them automate ingestion from cloud storage or provide the same ease of operation as Snowpipe. The event-driven architecture of Snowpipe reduces latency, simplifies operational workloads, and ensures data flows continuously into Snowflake with minimal human intervention.
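
A minimal sketch of an auto-ingest pipe, assuming a hypothetical external stage and target table:

CREATE PIPE sales_pipe
  AUTO_INGEST = TRUE                       -- load automatically when the stage emits an event notification
AS
  COPY INTO raw_sales
  FROM @sales_stage
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);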

Question 7:

Which factor determines how quickly a resumed Snowflake virtual warehouse becomes fully operational?

A) Warm cache availability
B) Warehouse scaling policy
C) Auto-resume capability
D) Provisioning of compute resources

Answer: D

Explanation:

Provisioning compute resources involves allocating the necessary hardware and infrastructure when a warehouse is resumed. Snowflake must acquire cloud compute instances and initialize them before queries can run. The time required for these resources to become available directly affects how quickly the warehouse becomes ready. This step is foundational to performance when resuming suspended compute. Warm cache availability helps accelerate queries but does not influence how quickly the warehouse starts. Cache contents disappear when the warehouse fully suspends, so it cannot determine resume speed. Cache effects are visible only after the warehouse becomes operational.

Warehouse scaling policy affects the behavior of multi-cluster warehouses under concurrency pressure. It determines whether new clusters start proactively or reactively, but does not influence the initial resume speed of a single warehouse. The policy contributes to load management rather than startup performance. Auto-resume capability ensures that a suspended warehouse can start automatically when a query arrives. While it initiates the process, it does not determine how long the provisioning step takes. Auto-resume is simply a trigger, not the mechanism that controls the delay.

The correct answer centers on compute provisioning because Snowflake must dynamically allocate cloud compute resources every time a suspended warehouse resumes. This allocation, including instance spin-up and initialization, directly impacts readiness. Other factors influence performance after the warehouse is active, but do not govern how fast it transitions from suspended to running. Snowflake uses cloud-based elasticity to scale resources fluidly, but the initial provisioning time varies based on cloud-provider behavior and instance availability. Therefore, the determining factor is the underlying infrastructure setup required during each resume event.
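
For reference, auto-resume is only a trigger configured on the warehouse; the provisioning delay occurs when the resume actually happens (the warehouse name below is illustrative):

ALTER WAREHOUSE analytics_wh SET AUTO_SUSPEND = 60;   -- suspend quickly when idle
ALTER WAREHOUSE analytics_wh SET AUTO_RESUME = TRUE;  -- the next query triggers a resume

ALTER WAREHOUSE analytics_wh RESUME;                  -- explicit resume; readiness still depends on compute provisioning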

Question 8:

Which Snowflake security feature ensures all customer data is encrypted before it is stored?

A) Always-on Encryption
B) OAuth Authentication
C) Federated Identity
D) Network Policies

Answer: A

Explanation: 

Always-on encryption guarantees that Snowflake automatically encrypts all data before it is written to storage. This mechanism applies to persistent storage, backups, metadata, and internal operations. Snowflake uses hierarchical key management and rotates keys automatically, ensuring that data remains secure without manual intervention. OAuth authentication enables third-party identity providers to issue access tokens for users. It handles authentication rather than encryption. While it secures login workflows, it does not encrypt stored data. OAuth ensures identity verification but does not affect storage-level protection. 

Federated identity allows users to sign in using external identity providers through SAML or OAuth. It simplifies authentication and centralizes identity management but does not protect data at rest. Its purpose is access governance rather than encryption enforcement. Network policies restrict access to Snowflake based on IP address ranges. These rules defend against unauthorized connections but do not influence data encryption. They provide perimeter-level protection, not cryptographic safeguards for stored content.

The correct answer is always-on encryption because Snowflake ensures that all data written to storage is encrypted using strong algorithms and multi-layered key structures. This includes micro-partitions, internal metadata, time travel storage, and fail-safe layers. The other features focus on authenticating users or managing network risks, but none provide encryption services. Always-on encryption is fundamental to Snowflake’s security architecture, ensuring that even if storage were compromised, the data would remain unreadable.

Question 9: 

Which Snowflake operation benefits most from micro-partition pruning?

A) Selective analytical queries
B) File unloading
C) User authentication
D) Virtual warehouse resizing

Answer: A

Explanation:

Selective analytical queries depend heavily on the ability to filter data based on defined conditions, making efficiency a top priority when working with large datasets. Snowflake’s micro-partition pruning plays a critical role in optimizing these types of queries. Each micro-partition stores metadata such as minimum and maximum values for every column. When a query applies filters, the engine evaluates this metadata to determine which partitions could possibly contain matching rows. Any micro-partition whose metadata falls completely outside the filter conditions is skipped automatically. This process dramatically reduces the amount of data that must be scanned, resulting in faster performance, lower compute consumption, and improved responsiveness for analytical workloads. Because pruning prevents unnecessary reads, it is most beneficial for highly selective queries that target narrow slices of data.

File unloading, on the other hand, serves a different operational purpose. Unloading data involves exporting results from Snowflake into external storage systems, often in formats such as CSV or Parquet. This process generally requires scanning either the full dataset or a substantial portion of it to produce the output files. Since unloading is oriented toward data extraction rather than conditional filtering, micro-partition pruning typically has minimal impact. Even if pruning is technically possible, unloading operations frequently involve large-scale data retrieval, so the performance benefits are not as significant as they are with selective analytical queries.

User authentication also stands apart from micro-partition behavior. Authentication is responsible for validating user identities, login credentials, and assigned roles before allowing access to Snowflake resources. This process takes place entirely within the cloud services layer and does not touch the query execution layer where micro-partition pruning occurs. As a result, authentication has no relationship to filtering logic, metadata evaluation, or the way Snowflake determines which partitions to scan.

Similarly, virtual warehouse resizing affects compute power rather than data access patterns. Increasing or decreasing warehouse size adjusts the amount of processing capacity available for workloads, but it does not influence how Snowflake evaluates micro-partition metadata. Resizing helps manage performance and concurrency but does not change the pruning mechanisms that determine which data segments are read during a query.

Micro-partition pruning directly enhances selective analytical queries by reducing data scanned and improving overall efficiency. The other activities—file unloading, user authentication, and warehouse resizing—do not interact with or influence the pruning logic in any meaningful way.
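
As an illustration (EVENTS and its columns are hypothetical), a narrow filter like the one below lets Snowflake skip every micro-partition whose min/max metadata falls outside the predicate:

SELECT user_id, event_type
FROM events
WHERE event_date = '2024-06-01'      -- pruning compares this value against per-partition min/max metadata
  AND region = 'EMEA';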

Question 10: 

Which Snowflake feature allows scheduling SQL-based automation without managing compute resources?

A) Serverless Tasks
B) Materialized Views
C) Resource Monitors
D) Warehouses with Auto-suspend

Answer: A

Explanation: 

Serverless tasks enable users to automate SQL execution on schedules or based on dependencies without requiring a dedicated warehouse. Snowflake automatically provisions the compute and charges only for execution time. This eliminates management overhead and simplifies orchestration for ETL, monitoring, or maintenance processes.

Materialized views maintain precomputed results and refresh automatically as data changes. While they automate part of the analytical workload, they do not support scheduling or workflow execution. Their automation is tied to query patterns rather than time- or dependency-based execution.

Resource monitors track credit consumption and enforce thresholds. They help control spending by suspending warehouses or sending alerts, but they do not execute SQL or manage automation. Their purpose is usage governance rather than workflow coordination.

Warehouses with auto-suspend reduce compute costs by stopping activity after periods of inactivity. Although useful for efficiency, they do not eliminate the need to manage compute resources, and they cannot schedule or run tasks automatically.

The correct answer is serverless tasks because they provide a fully managed scheduling and execution environment. Snowflake handles all compute provisioning and cleanup, enabling users to build automated pipelines without worrying about warehouse lifecycle management. The other features address analytics acceleration, monitoring, or cost efficiency but do not automate SQL execution in a serverless manner.
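
A minimal sketch of a serverless task, assuming a hypothetical STAGING_EVENTS table; omitting the WAREHOUSE clause is what makes the task use Snowflake-managed compute:

CREATE TASK nightly_cleanup
  SCHEDULE = 'USING CRON 0 2 * * * UTC'
  USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = 'XSMALL'   -- serverless: no user-managed warehouse
AS
  DELETE FROM staging_events WHERE load_date < DATEADD(day, -7, CURRENT_DATE());

ALTER TASK nightly_cleanup RESUME;   -- tasks are created suspended and must be resumed to run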

Question 11: 

Which Snowflake feature enables querying external data without loading it into internal tables?

A) External Tables
B) Temporary Tables
C) Sequences
D) Zero-Copy Cloning

Answer: A

Explanation: 

External tables provide the capability to query data stored outside of Snowflake, such as in cloud storage locations, without the need to load that data into Snowflake-managed tables. They reference external files and maintain metadata that enables SQL queries against them, making them valuable for exploratory analysis or hybrid architectures that combine raw external files with internal Snowflake data. Temporary tables exist only for the duration of a session and store transient data used in intermediate computations. While helpful for staging or transformation steps, they do not extend querying to external storage. Their lifecycle is short-lived, and they do not reference external file systems.

Sequences generate unique numeric values, usually for keys in tables. They are important for designing processes that require incremental identifiers but have no relationship to querying external data. They operate entirely inside Snowflake’s metadata layer. Zero-copy cloning creates a new object referencing existing data without duplicating storage. This is ideal for testing, development, and experimentation but does not enable reading from external storage locations. The correct answer is external tables because they bridge the gap between cloud storage and Snowflake SQL query capabilities. They allow users to analyze external files efficiently without ingesting them first. While the other features support lifecycle management, unique value generation, or test environments, none allow querying remote storage. External tables help reduce storage costs and processing time by avoiding unnecessary ingestion, while still enabling sophisticated joins and transformations alongside internal data sets.
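
A brief sketch, assuming a hypothetical external stage containing Parquet files:

CREATE EXTERNAL TABLE ext_sales (
  sale_date DATE   AS (value:sale_date::DATE),    -- VALUE is the VARIANT row exposed for each file record
  amount    NUMBER AS (value:amount::NUMBER)
)
LOCATION = @sales_stage/2024/
FILE_FORMAT = (TYPE = 'PARQUET');

SELECT sale_date, SUM(amount) FROM ext_sales GROUP BY sale_date;   -- queried in place; nothing is loaded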

Question 12: 

Which Snowflake mechanism ensures that only filtered and relevant micro-partitions are scanned during query execution?

A) Pruning
B) Re-clustering
C) Replication
D) Time Travel

Answer: A

Explanation: 

Pruning uses metadata from micro-partitions—such as minimum and maximum column values—to skip reading partitions that cannot satisfy a query’s filters. This improves execution speed significantly because fewer partitions need to be scanned, reducing compute demand and optimizing performance for analytical workloads. Re-clustering reorganizes micro-partitions based on a defined key to enhance data locality and improve pruning effectiveness. While it influences pruning performance, the act of pruning itself is what directly filters out irrelevant partitions. Re-clustering is preparatory rather than the mechanism that operates during queries. 

Replication copies data across cloud regions or accounts to support resilience and disaster recovery. While essential for availability strategies, it does not control how many micro-partitions are scanned during query execution. Its responsibilities are architectural and protective rather than analytical. Time travel retains historical versions of data for querying or restoring previous states. Although valuable for recovery and auditing, it does not determine which micro-partitions are scanned for active queries. It interacts with historical storage, not runtime partition selection. Pruning is the correct answer because it directly governs which micro-partitions are evaluated during a query. By leveraging metadata, pruning minimizes storage reads and enhances efficiency. The other features influence historical access, organization, or availability, but do not determine which partitions are included during query filtering.

Question 13: 

Which type of table in Snowflake is designed to automatically expire and remove all data after a session ends?

A) Temporary Table
B) Permanent Table
C) External Table
D) Transient Table

Answer: A

Explanation: 

A temporary table in Snowflake is designed specifically for short-lived, session-bound workloads. It exists only for the duration of the session in which it is created, making it ideal for situations where intermediate or temporary data needs to be stored without the overhead of long-term management. As soon as the user session ends, Snowflake automatically drops the temporary table and removes all related data. This behavior ensures that temporary tables require no manual cleanup, reduce clutter in the database, and keep storage usage efficient. Because of their session-scoped nature, they are commonly used for staging transformations, holding intermediate query results, experimenting with data models, or performing quick calculations that do not need to persist beyond the active session.

A permanent table serves an entirely different purpose. It is built for long-term durability and is used in production environments where data must persist reliably unless explicitly deleted. Permanent tables support Snowflake features such as time travel and fail-safe, which provide extended recovery options in the event of accidental deletions or modifications. These tables do not expire automatically, and their data remains intact until a user issues a DROP command. Their design focuses on resilience, data governance, and long-term storage rather than short-lived computation.

External tables further differ in structure and purpose. Instead of storing data directly in Snowflake-managed storage, they reference files stored in external cloud locations such as Amazon S3, Google Cloud Storage, or Azure Blob Storage. External tables are often used for querying large datasets that organizations maintain outside Snowflake to reduce storage costs or maintain architecture flexibility. They are not tied to session duration and certainly do not drop themselves automatically. Their lifecycle is controlled manually, and they remain accessible as long as the underlying external files exist.

A transient table occupies a middle ground between temporary and permanent tables. It persists beyond the user session and must be manually dropped, but it lacks the fail-safe protection that permanent tables provide. Transient tables do support time travel for a limited duration, but because they are not session-bound, they do not automatically delete themselves. They are useful for staging and semi-permanent operational data but not for truly temporary workloads.
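
The difference is visible in the DDL itself (table names and columns below are illustrative):

CREATE TEMPORARY TABLE session_results AS          -- dropped automatically when the session ends
SELECT * FROM raw_events WHERE load_date = CURRENT_DATE();

CREATE TRANSIENT TABLE staging_events (            -- persists across sessions but has no Fail-safe period
  id NUMBER,
  payload VARIANT
);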

Question 14: 

Which Snowflake feature allows creating a fully functional copy of a table or database without duplicating storage?

A) Zero-Copy Cloning
B) Warehouses
C) Secondary Indexes
D) Materialized Views

Answer: A

Explanation:

Zero-copy cloning creates a new logical copy of an existing object—such as a table, schema, or database—while referencing the same underlying micro-partitions. This method avoids storage duplication and allows rapid environment creation for development, testing, or analysis. The clone behaves independently, and changes made to it do not affect the original object. Warehouses provide compute resources for running queries but do not clone or duplicate storage. They power the execution of queries rather than manage how data is replicated or referenced.

Secondary indexes do not exist in Snowflake’s architecture. Snowflake relies on micro-partition metadata and clustering instead of traditional index structures. Therefore, this choice does not support cloning functionality. Materialized views store precomputed results for faster access but do not create independent copies of the underlying data. They maintain derived data and refresh automatically but are not intended for duplicating objects. 

Zero-copy cloning is the correct answer because it enables fast provisioning of isolated environments without extra storage costs. It leverages Snowflake’s metadata-driven architecture, making copies efficient and instantaneous. The other features either do not handle duplication or do not exist in Snowflake. Cloning is essential for agile development workflows and analytical sandboxing.
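
A quick sketch with placeholder object names; cloning is a single statement and completes without copying micro-partitions:

CREATE TABLE sales_dev CLONE sales;              -- the new table shares the original's micro-partitions
CREATE DATABASE analytics_dev CLONE analytics;   -- schemas and databases can be cloned the same way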

Question 15:

Which Snowflake capability allows recovering a dropped table within the data retention period?

A) Time Travel
B) Row Access Policy
C) Data Sharing
D) File Format Configuration

Answer: A

Explanation: 

Time travel in Snowflake is a powerful capability that allows users to access historical versions of data, restore previous states of objects, and recover tables that were accidentally dropped, all within the configured retention period. This feature plays a crucial role in maintaining data reliability and operational resilience. When a table is dropped—whether intentionally or by mistake—Snowflake does not immediately and permanently remove it. Instead, the platform retains its metadata and historical versions for the duration of the time-travel retention window. This enables administrators and users to effortlessly restore the table to its last valid state without losing any previously stored information. Such functionality is especially valuable in environments where large teams work with complex datasets, increasing the likelihood of unintentional changes or deletions. Time travel acts as a safety net, significantly reducing the risk of irreversible errors.

A row access policy, by contrast, serves a completely different purpose. It is designed for governance, compliance, and security by restricting which rows a user can view based on attributes such as role, department, or user identity. While this mechanism is essential for protecting sensitive information and implementing fine-grained access controls, it does not offer any ability to recover dropped tables, access prior versions of the data, or restore historical states. Its focus is on ensuring that data visibility aligns with organizational policies, not on data protection or recovery mechanisms.

Data sharing is another core Snowflake capability that allows providers to share live, ready-to-query datasets with consumers without copying or physically moving the data. It enables real-time collaboration, reduces storage redundancy, and simplifies access management across organizations. However, data sharing does not contribute to recovering deleted data or managing historical versions. Its primary objective is to facilitate secure and efficient sharing, not to preserve or restore lost objects.

Finally, file format configuration determines how Snowflake processes external files during loading and unloading, specifying details such as delimiters, compression types, or field handling rules. This configuration helps ensure that external data is interpreted correctly before ingestion into Snowflake tables. While essential for ETL/ELT operations, it provides no capability for object recovery or historical data access.
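
For illustration, recovering a hypothetical ORDERS table dropped inside its retention window takes a single statement, and the window itself is configurable per table:

UNDROP TABLE orders;                                       -- restores the most recently dropped version
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 7;    -- adjust Time Travel retention (edition limits apply)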

Question 16:

Which Snowflake feature helps prevent sudden cost increases by automatically limiting compute usage?

A) Resource Monitors
B) Virtual Warehouse Scaling Policy
C) Result Caching
D) Zero-Copy Cloning

Answer: A

Explanation

Resource Monitors provide administrators with a mechanism to control compute spending by setting credit consumption thresholds and actions. This includes sending notifications, suspending warehouses, or both. They act as a safeguard when workloads exceed expected usage so that environments stay within budget. They also offer monitoring capabilities that help teams proactively track compute consumption before budget overruns occur. Their design allows organizations to maintain cost discipline across multiple teams or workloads.

Virtual Warehouse Scaling Policy influences how warehouses scale in response to concurrency pressure but does not establish any form of cost or credit control. It lets warehouses scale up or out to meet user demands but cannot prevent extra usage. It simply responds to workload pressure and ensures performance. Because it only modifies performance behaviour instead of controlling finance-related thresholds, it does not help contain unexpected charges.

Result Caching allows Snowflake to serve repeated queries from cache, which reduces computation but does not enforce limits on compute credits. Whenever a cached result exists, Snowflake avoids reprocessing, but if a cache miss occurs or data has changed, credits are consumed normally. Result caching therefore provides performance enhancements but does not restrict credit usage or issue alerts.

Zero-Copy Cloning enables instant creation of database objects without additional storage costs, promoting flexible development, testing, and analytics setups. However, cloning does not involve any cost enforcement mechanisms. It reduces storage costs but does not consume, track, or limit compute credits.

Resource Monitors remain the correct answer because they directly manage and limit credit consumption. They let administrators set quotas at different organizational levels and enforce automatic actions so that workloads do not exceed their allocated budgets. No other feature in the list can automatically halt or restrict usage purely based on spending thresholds. This is why Resource Monitors provide the strongest safeguard for cost governance in Snowflake environments.
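
A minimal sketch of the mechanism, with illustrative names and thresholds:

CREATE RESOURCE MONITOR monthly_cap
  WITH CREDIT_QUOTA = 100
  FREQUENCY = MONTHLY
  START_TIMESTAMP = IMMEDIATELY
  TRIGGERS
    ON 80 PERCENT DO NOTIFY        -- warn before the quota is reached
    ON 100 PERCENT DO SUSPEND;     -- stop further credit consumption

ALTER WAREHOUSE analytics_wh SET RESOURCE_MONITOR = monthly_cap;   -- attach the monitor so the thresholds are enforced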

Question 17:

Which Snowflake object stores historical data changes for querying past states of a table?

A) Time Travel
B) Stages
C) Pipes
D) Tasks

Answer: A

Explanation

Time Travel in Snowflake preserves data history, allowing users to query previous states of tables, schemas, or databases. It makes it possible to retrieve records that were modified or deleted, restore accidentally dropped objects, and analyze historical trends. With a configurable retention period, Time Travel supports both auditing needs and recovery operations, and organizations often use it during debugging and verification scenarios. Because it enables direct querying of earlier snapshots, it becomes essential for understanding how data evolved over time.

Stages act as locations for data files used in loading and unloading operations. They provide interfaces to cloud storage or internal storage, but they do not store historical states of tables. Their role is strictly related to data movement, not historical retention. While they may hold file versions, they do not preserve snapshots of tables or allow querying previous versions of records.

Pipes facilitate continuous ingestion via Snowpipe by monitoring staged files and loading them into Snowflake tables as soon as they appear. They act as connectors for automated data loading but cannot maintain table history. Their functionality is purely operational and ingestion related.

Tasks allow the scheduling of SQL statements, including transformations or maintenance jobs. They support automation but offer no way to access past data states. They execute repeated operations and workflows, but they do not preserve historical records or snapshots internally.

Time Travel provides the historical snapshot mechanism necessary for data recovery, forensic analysis, and past-state querying. The ability to query using temporal syntax such as AT or BEFORE is unique to this feature. Other objects listed focus on ingestion, storage interfaces, or scheduling but lack the ability to reconstruct older table states. Therefore, Time Travel is the correct answer because it is exclusively designed to store and retrieve historical versions of database objects within the retention window.

Question 18: 

Which command is used to refresh metadata and make newly added column statistics available for optimization?

A) ANALYZE
B) CLUSTER BY
C) COPY INTO
D) CREATE WAREHOUSE

Answer: A

Explanation

Analyze is responsible for collecting statistics on table columns in Snowflake. These statistics enhance query optimization by providing the optimizer with accurate information about data distribution. When executed, analyze helps Snowflake determine how to generate the most efficient query plan. It ensures that the optimizer has visibility into updated column behaviour, especially after significant data changes such as large inserts or reorganizations. Because it refreshes metadata used in optimization, it remains the necessary command for improving performance when underlying data characteristics evolve.

Cluster By defines logical clustering for micro-partition organization. It helps Snowflake prune micro-partitions more effectively for selective queries. While beneficial for performance, clustering does not refresh metadata such as column statistics. It influences how Snowflake stores data but does not act on metadata associated with column-level distributions.

Copy Into loads data from stages into tables. It performs ingestion but has no role in metadata analysis or statistics refresh. Although loading operations may change the structure of data, they do not automatically update statistical metadata. Therefore, further steps are required when optimization is needed.

Create Warehouse establishes compute resources for query execution. Warehouses do not affect metadata or statistics, and creating one does not influence optimizer visibility into underlying data characteristics. Their function is strictly compute provisioning.

Analyze remains the correct answer because it explicitly refreshes metadata and updates column statistics. Snowflake relies heavily on these statistics to make intelligent decisions regarding pruning, join methods, and aggregation strategies. Without performing this command after significant data mutations, the optimizer may base decisions on outdated information, leading to suboptimal performance. None of the other choices provide the same capability or impact metadata in the same manner.

Question 19:

What does Snowflake use to optimize the pruning of micro-partitions during query execution?

A) Metadata stored in the micro-partition
B) Query acceleration service
C) Materialized views
D) Secure views

Answer: A

Explanation

The metadata stored in each micro-partition contains valuable information such as minimum and maximum values, distinct values, and row counts for columns stored in that partition. During query execution, Snowflake consults this metadata to determine whether a micro-partition might contain rows relevant to the query. This allows Snowflake to prune unnecessary partitions, minimizing I/O and dramatically speeding up execution. Because pruning relies entirely on evaluating stored metadata, it becomes the core mechanism behind Snowflake’s performance efficiency.

Query Acceleration Service improves performance by adding additional compute resources for specific large or complex queries but does not prune micro-partitions directly. It enhances performance by accelerating I/O-bound workloads but does not influence partition filtering logic.

Materialized Views store precomputed results to accelerate recurring queries. Although they improve performance for repetitive workloads, they do not modify or control micro-partition pruning. Their benefits come from storing results, not filtering partitions.

Secure Views enforce data masking and security controls. They ensure access policies are enforced but do not contribute to pruning behaviour. Their function is centered on governance rather than performance optimization.

Metadata inside the micro-partition remains essential for efficient query execution. Snowflake leverages these statistics to skip partitions that do not match filtering predicates. This reduces scanning overhead and increases speed without requiring manual indexing. Other choices focus on features that improve performance or security in different ways but not through micro-partition pruning. Therefore, the metadata within micro-partitions is the direct and correct answer.

Question 20:

Which Snowflake feature allows teams to test changes without affecting production while keeping storage costs minimal?

A) Zero-Copy Cloning
B) Query History
C) Stored Procedures
D) Fail-safe

Answer: A

Explanation

Zero-Copy Cloning enables instantaneous cloning of databases, schemas, or tables without duplicating physical data. Rather than creating separate copies, the clone references existing micro-partitions until changes occur. This dramatically reduces storage costs and allows development teams to create multiple isolated environments for testing, prototyping, and experimentation. Because it avoids creating full storage duplicates, it becomes efficient for agile workflows where teams frequently spin up temporary environments.

Query History provides logs of previously executed statements and their performance characteristics. While useful for auditing or debugging, it does not provide isolated testing environments. It merely tracks past queries and does not generate a separate copy of data assets.

Stored Procedures allow the execution of procedural logic within Snowflake. They support complex automation, branching logic, and transformations, but they have no role in environment cloning or cost-efficient test environments. Their purpose is workflow automation, not environment separation.

Fail-safe restores data after catastrophic failures, providing a final layer of protection beyond Time Travel. However, it does not support development or testing scenarios and cannot create isolated copies of data. Its intent is disaster recovery.

Zero-Copy Cloning is the only feature that enables isolated test environments without large storage footprints. Teams can conduct experiments safely and revert or drop clones when finished. The underlying micro-partition mechanism ensures efficiency, making this feature essential for modern development practices.
