SnowPro Core Recertification Snowflake Practice Test Questions and Exam Dumps

Question 1

What is the default length of time that Snowflake allows users to access historical data using the Time Travel feature across all accounts?

A. 0 days
B. 1 day
C. 7 days
D. 14 days

Correct Answer: B

Explanation:

Snowflake’s Time Travel feature gives users the ability to access historical versions of data, which is especially useful when recovering accidentally deleted data, auditing changes, or comparing current data to prior states. It enables operations like querying past data, cloning data as it existed in the past, and restoring dropped tables or schemas.

By default, the standard retention period for Time Travel is 1 day (24 hours) for all Snowflake accounts. This means that users can view or restore data changes that occurred within the past 24 hours without needing any additional configuration. After this window, the historical data is no longer directly accessible through Time Travel.
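
For illustration, here is a minimal sketch of Time Travel in SQL; the table name and statement ID are placeholders, not objects from a real account:

    -- Query the table as it existed one hour ago (the offset is in seconds)
    SELECT * FROM orders AT(OFFSET => -3600);

    -- Query the table as it existed immediately before a specific statement ran
    SELECT * FROM orders BEFORE(STATEMENT => '8e5d0ca9-005e-44e6-b858-a8f5b37c5726');

    -- Restore a table that was dropped within the retention window
    UNDROP TABLE orders;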

This default applies to most object types in Snowflake, such as tables, schemas, and databases. It is automatically enabled upon account creation and requires no user intervention to be active. However, if organizations want a longer retention period, they must upgrade to a higher Snowflake edition.

Specifically:

  • Standard Edition users can set the retention period to either 0 days (which disables Time Travel) or 1 day.

  • Enterprise Edition and higher can configure a Time Travel retention period up to 90 days for permanent objects.

It’s worth noting that longer retention periods provide extended recovery capabilities but also increase storage usage and associated costs, because Snowflake retains more historical data behind the scenes.

Also, once the Time Travel retention period expires, the data is not immediately purged. Instead, for permanent tables it enters a 7-day Fail-safe period, during which it can still be recovered, but only by Snowflake support, not directly by users. Fail-safe is meant strictly for system-level recovery and compliance, not for day-to-day operational use.

In summary, the default configuration provides a 1-day retention window for accessing historical data through Time Travel, offering a balance between usability and storage efficiency.

Correct answer: B

Question 2

If a user is already logged into Snowflake and a user-level network policy is assigned to them, what action does Snowflake take if their IP address does not comply with the new policy?

A. Log the user out.
B. Deactivate the network policy.
C. Prevent the user from executing additional queries.
D. Allow the user to continue until the session or login token expires.

Correct answer: D

Explanation:

In Snowflake, network policies are used to restrict access based on IP addresses. These policies can be applied at the account level or at the user level, offering flexibility in managing access across different users or groups.

When a user-level network policy is assigned to an individual who is already logged in, and their IP address does not match the policy's allowed IP ranges, the system does not immediately terminate the session or prevent them from executing actions. Instead, Snowflake allows the session to continue until it expires naturally or the user logs out. Once the session ends, the user will be required to meet the new policy constraints on their next login attempt.
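
For context, a user-level network policy is typically created and assigned with statements like the following sketch; the policy name, user name, and IP range are illustrative:

    -- Create a network policy that only allows connections from a specific IP range
    CREATE NETWORK POLICY corp_policy ALLOWED_IP_LIST = ('192.168.1.0/24');

    -- Assign the policy to an individual user; it is checked at the user's next login
    ALTER USER jsmith SET NETWORK_POLICY = 'CORP_POLICY';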

Let’s break down why this is the case and examine each of the provided options:

  • Option A (Log the user out): This is incorrect. Snowflake does not forcibly disconnect a user who is already logged in when a network policy is assigned post-login. Forced disconnection does not align with Snowflake’s session management approach.

  • Option B (Deactivate the network policy): This is also incorrect. Snowflake does not automatically disable a network policy just because a user’s current session doesn’t comply. The policy remains in effect, and it will be enforced at the next login attempt.

  • Option C (Prevent the user from executing additional queries): This is misleading and incorrect. Even though the user’s IP doesn’t align with the newly assigned policy, Snowflake doesn’t restrict their query capabilities mid-session. Their permissions and access remain the same until the session ends.

  • Option D (Allow the user to continue until the session or login token expires): This is the correct answer. Snowflake applies network policy checks only during the login process. If a user is already authenticated and working within an active session, they are allowed to continue using the system. When their session or login token expires and they attempt to re-authenticate, the network policy is enforced and the IP address will be checked at that time.

This behavior ensures that users are not unexpectedly disrupted, which is especially important in mission-critical environments where sudden session termination could cause problems. At the same time, Snowflake maintains security integrity by ensuring that future logins must comply with the new network restrictions.

In summary, Snowflake handles user-level network policy enforcement gracefully by delaying enforcement until the next login rather than disrupting ongoing sessions. Therefore, the best answer is D.

Question 3

When Snowflake assigns privileges to system-defined roles, what is the policy regarding the ability to revoke those privileges?

A. The privileges cannot be revoked.
B. The privileges can be revoked by an ACCOUNTADMIN.
C. The privileges can be revoked by an ORGADMIN.
D. The privileges can be revoked by any user-defined role with appropriate privileges.

Correct Answer: A

Explanation:

In Snowflake, system-defined roles such as ACCOUNTADMIN, SECURITYADMIN, and SYSADMIN come with predefined privileges that are automatically granted by the system to ensure that these roles function as intended. These privileges are considered non-revocable because they are integral to the role's purpose and the security and administrative structure of the Snowflake environment.

The correct answer is A, because privileges granted to system-defined roles cannot be revoked. This design ensures security, consistency, and administrative integrity in the Snowflake account. Let’s delve deeper into why this is the case and why the other options are incorrect.

System-defined roles in Snowflake are built-in roles with specific purposes and a hierarchical structure. For example:

  • The ACCOUNTADMIN role has the most extensive privileges and full control over all objects in the account.

  • The SECURITYADMIN role is responsible for managing roles and users.

  • The SYSADMIN role manages objects within the database and virtual warehouses.

Because these roles are fundamental to Snowflake's RBAC (Role-Based Access Control) model, their core privileges are locked in by the system. Allowing users, even those with high-level privileges, to revoke these could compromise system stability or create unresolvable access issues. For instance, if someone were able to revoke a critical privilege from the ACCOUNTADMIN role, it could prevent administrative recovery or security oversight, essentially breaking the chain of control.
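
As a rough sketch of how this plays out in practice (the custom role name is hypothetical), an administrator can inspect a system-defined role and extend the hierarchy beneath it, but the role's built-in grants stay in place:

    -- Inspect the privileges and roles currently granted to a system-defined role
    SHOW GRANTS TO ROLE SECURITYADMIN;

    -- Custom roles can be created and slotted under the system hierarchy...
    CREATE ROLE analyst;
    GRANT ROLE analyst TO ROLE SYSADMIN;

    -- ...but the privileges Snowflake itself grants to system-defined roles cannot be revoked.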

Let’s examine why the other options are incorrect:

B. The privileges can be revoked by an ACCOUNTADMIN.
This is false because even the ACCOUNTADMIN role, while powerful, cannot revoke system-defined privileges from itself or other system-defined roles. The privileges are automatically tied to the role as part of Snowflake's internal architecture.

C. The privileges can be revoked by an ORGADMIN.
The ORGADMIN role in Snowflake operates at the organization level, overseeing multiple accounts under a single organization. While it has powers like creating or managing accounts and viewing usage, it does not control or modify system role privileges at the individual account level. Therefore, it cannot revoke privileges granted to system-defined roles.

D. The privileges can be revoked by any user-defined role with appropriate privileges.
This is also incorrect. User-defined roles, no matter how privileged, do not have the authority to alter or revoke system-assigned privileges from built-in roles. They can manage grants within their scope but not override or reduce privileges inherent to system-defined roles.

In summary, Snowflake's architecture ensures that system-defined roles retain their essential privileges to preserve administrative and security integrity. These privileges are non-revocable to prevent misuse, accidental lockouts, or security breaches, thereby maintaining a secure and stable access control framework.

Therefore, the correct answer is A.

Question 4

What is the default access level for a securable object before any permissions are granted?

A. No access
B. Read access
C. Write access
D. Full access

Correct Answer: A

Explanation:

In Snowflake's access control framework, a securable object refers to any entity within the system—such as databases, schemas, tables, or views—that can have access permissions assigned to it. By default, these objects are protected by a deny-all policy, meaning that no user or role has any access to them unless explicitly granted.

This approach ensures a secure-by-default posture. When a new securable object is created, it is automatically inaccessible to all users and roles, including the creator, unless specific privileges are assigned. This design minimizes the risk of unauthorized access and enforces strict control over data and resources.

Access to securable objects is managed through roles in Snowflake. Roles are granted specific privileges (such as SELECT, INSERT, or USAGE) on securable objects. Users are then assigned roles, which determine their permissions within the system. This role-based access control model allows for granular and flexible permission management.

For example, if a new table is created, no user can query or modify it until the appropriate privileges are granted to a role, and that role is assigned to the user. This ensures that only authorized users can access or manipulate the data.
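
A typical grant chain, using illustrative database, role, and user names, might look like this sketch:

    -- Nothing is accessible until privileges are granted to a role...
    GRANT USAGE ON DATABASE sales_db TO ROLE analyst;
    GRANT USAGE ON SCHEMA sales_db.public TO ROLE analyst;
    GRANT SELECT ON TABLE sales_db.public.orders TO ROLE analyst;

    -- ...and that role is granted to a user
    GRANT ROLE analyst TO USER jdoe;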

In summary, the default access level for any securable object in Snowflake is no access. Permissions must be explicitly granted to roles, which are then assigned to users, to enable access to these objects.

Correct answer: A

Question 5

In Snowflake, streams are used to track change data capture (CDC). On which two types of Snowflake objects can streams be configured to monitor changes? (Select two options.)

A. Pipe
B. Stage
C. Secure view
D. Materialized view
E. Shared table

Correct answers: C and E

Explanation:

In Snowflake, streams are objects that allow users to track Change Data Capture (CDC) by querying the change history (inserts, updates, deletes) on certain base objects. This functionality enables developers and data engineers to build incremental pipelines, as it provides a mechanism to query only new or changed rows since the last time the stream was read.
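
As a brief sketch (the table, view, and stream names are hypothetical), a stream is created on a supported source object and then queried to retrieve only the rows that changed since it was last read:

    -- Create a stream on a table (the table must have change tracking;
    -- creating the stream can enable it if your role owns the table)
    CREATE STREAM orders_stream ON TABLE orders;

    -- Return only rows inserted, updated, or deleted since the stream was last consumed
    SELECT * FROM orders_stream;

    -- Streams are also supported on views, including secure views,
    -- provided change tracking is enabled on the underlying tables
    CREATE STREAM orders_view_stream ON VIEW orders_secure_view;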

To understand which objects can be used with streams, we must examine how streams interact with Snowflake’s various object types.

Let’s evaluate each option:

  • A (Pipe): A pipe in Snowflake is used in conjunction with Snowpipe, which is a continuous data ingestion service. Pipes define how data files are automatically loaded from stages into tables. Pipes themselves do not store data, nor do they represent a structure where row-level changes can be tracked. Streams cannot be configured on pipes, so this option is incorrect.

  • B (Stage): A stage is a storage location for files to be loaded into or unloaded from Snowflake. It can be internal or external (like Amazon S3, Azure Blob, etc.). Since stages do not contain structured, queryable data, and do not track row-level changes, streams cannot be configured on stages. Thus, this option is also incorrect.

  • C (Secure view): A secure view restricts visibility into the view definition and underlying data, but it is still a queryable view over one or more base tables. Snowflake supports streams on views, including secure views, provided change tracking is enabled on the underlying tables. This makes it possible to track changes through the view without exposing the base tables directly. Therefore, this is a correct option.

  • D (Materialized view): A materialized view stores precomputed query results that Snowflake maintains automatically as the base table changes. However, Snowflake does not support creating streams on materialized views, so they cannot serve as a source object for CDC. Therefore, this is not a valid answer.

  • E (Shared table): A shared table refers to a table made accessible through Snowflake Secure Data Sharing. Streams can be created on shared tables, just like on regular tables, provided the data provider has enabled change tracking on them. This allows the recipient of the shared data to implement CDC-based transformations or analysis. Hence, this is also a correct option.

In summary, among the listed objects, streams can only be created on sources that support change tracking, which include:

  • Shared tables (treated like regular tables, as long as the provider has enabled change tracking)

  • Views, including secure views (with change tracking enabled on the underlying tables)

Objects like pipes, stages, and materialized views do not support streams because they either do not hold change-tracked row data or are not valid stream sources.

Therefore, the correct answers are C and E.

Question 6

Which type of chart can Snowflake users create and use in Snowsight dashboards to visualize their data?

A. Area chart
B. Box plot
C. Heat grid
D. Pie chart

Correct Answer: A

Explanation:

Snowsight is Snowflake’s modern, web-based interface designed to improve the user experience for exploring data, running queries, and visualizing results. One of its most powerful features is its ability to build and share dashboards that display query results using a variety of chart types. Among the chart options that Snowsight supports, the area chart is one of the standard visualizations available.

The correct answer is A, because the area chart is fully supported in Snowsight and is commonly used to visualize quantitative data over time, making it ideal for observing trends, cumulative values, and changes across a sequence (e.g., sales over months or storage usage over days). Area charts in Snowsight work similarly to line charts but shade the area beneath the line, enhancing visual emphasis on the magnitude of values.

Let’s look at each of the other chart types mentioned and clarify why they are not currently supported in Snowsight dashboards:

B. Box plot
A box plot, also known as a box-and-whisker plot, is used to show the distribution, median, and outliers of a dataset. However, Snowsight does not currently support box plots as a built-in visualization option. Users who require this type of statistical analysis typically export the data to a tool like Tableau, Power BI, or a Python-based notebook.

C. Heat grid
The heat grid or heatmap is another advanced visualization type useful for showing concentration or intensity of values across two dimensions. As of current Snowsight capabilities, heatmaps are not supported natively in the charting options. Similar to box plots, this kind of visualization would require external tools or custom rendering in environments outside Snowsight.

D. Pie chart
Surprisingly, while pie charts are one of the most recognizable chart types, Snowsight does not support pie charts as part of its built-in visualization options. This decision is likely based on best practices in data visualization, as pie charts are often discouraged for displaying complex data due to their limitations in conveying proportional differences accurately compared to bar or area charts.

In contrast, area charts offer both aesthetic and functional advantages in dashboards. They allow users to understand cumulative trends, compare data series, and create visually engaging representations of business metrics—all without leaving the Snowflake platform. This tight integration with SQL queries and results in Snowsight makes the area chart a natural and supported choice for many Snowflake users building dashboards.

Therefore, based on current capabilities and official documentation, the only correct answer is A.

Question 7

What is the result of selecting the "Notify & Suspend" action on a Snowflake resource monitor when a credit usage threshold is reached?

A. Send an alert notification to all account users who have notifications enabled.
B. Send an alert notification to all virtual warehouse users when thresholds over 100% have been met.
C. Send a notification to all account administrators who have notifications enabled, and suspend all assigned warehouses after all statements being executed by the warehouses have completed.
D. Send a notification to all account administrators who have notifications enabled, and suspend all assigned warehouses immediately, canceling any statements being executed by the warehouses.

Correct answer: C

Explanation:

In Snowflake, resource monitors are essential tools used to track and manage the consumption of compute credits across virtual warehouses in an account. These monitors help prevent runaway credit usage by providing administrators with automatic actions once specific thresholds are reached. Each monitor can be configured with actions such as "Notify," "Notify & Suspend," or "Notify & Suspend Immediately."
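
For reference, these actions correspond to the NOTIFY, SUSPEND, and SUSPEND_IMMEDIATE triggers in SQL; the following sketch uses illustrative names, quotas, and thresholds:

    -- Create a monitor with a credit quota and threshold actions
    CREATE RESOURCE MONITOR monthly_limit WITH
      CREDIT_QUOTA = 100
      TRIGGERS
        ON 75 PERCENT DO NOTIFY               -- Notify only
        ON 100 PERCENT DO SUSPEND             -- Notify & Suspend: running statements finish first
        ON 110 PERCENT DO SUSPEND_IMMEDIATE;  -- Notify & Suspend Immediately: running statements are canceled

    -- Attach the monitor to a warehouse
    ALTER WAREHOUSE analytics_wh SET RESOURCE_MONITOR = monthly_limit;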

The "Notify & Suspend" action performs two specific functions once a specified credit threshold is met:

  1. Notification: It sends an alert only to account administrators (not to all users or virtual warehouse users). However, the notification is sent only to administrators who have enabled notifications in their user settings. This keeps the alert limited to key personnel responsible for monitoring system performance and budget adherence.

  2. Suspension: The suspension of warehouses does not occur abruptly. Instead, Snowflake allows all currently executing SQL statements to complete before suspending the associated warehouses. This is an important distinction because it avoids abruptly canceling critical jobs or transactional operations. The goal is to halt further credit usage without disrupting currently running tasks. The warehouses are suspended gracefully, ensuring no data corruption or unfinished queries.

Let’s examine the answer choices:

  • A (Send an alert notification to all account users who have notifications enabled): This is incorrect. The alert is sent only to account administrators, not to all users.

  • B (Send an alert notification to all virtual warehouse users when thresholds over 100% have been met): This is incorrect in two ways. First, notifications are not sent to virtual warehouse users. Second, thresholds can be set at any level, not just over 100%.

  • C (Send a notification to all account administrators who have notifications enabled, and suspend all assigned warehouses after all statements being executed by the warehouses have completed): This is the correct answer. It accurately reflects the behavior of the "Notify & Suspend" action—alerting admins and suspending warehouses after in-progress statements finish executing.

  • D (Send a notification to all account administrators who have notifications enabled, and suspend all assigned warehouses immediately, canceling any statements being executed by the warehouses): This describes the behavior of the "Suspend Immediately" action, not "Notify & Suspend." Hence, this option is incorrect.

To summarize: "Notify & Suspend" is a balanced approach, allowing the organization to manage credit usage responsibly while maintaining data integrity by not interrupting running queries. This action is particularly useful in environments where cost control is important, but ongoing operations cannot be halted abruptly.

Therefore, the correct answer is C.

Question 8

Which statements describe benefits of Snowflake's separation of compute and storage? (Choose two.)

A. The separation allows independent scaling of computing resources.
B. The separation ensures consistent data encryption across all virtual data warehouses.
C. The separation supports automatic conversion of semi-structured data into structured data for advanced data analysis.
D. Storage volume growth and compute usage growth can be tightly coupled.
E. Compute can be scaled up or down without the requirement to add more storage.

Correct Answers: A and E

Explanation:

Snowflake's architecture is fundamentally designed around the separation of compute and storage, a principle that offers significant advantages in terms of scalability, flexibility, and cost-efficiency.

Independent Scaling of Compute Resources (Option A):
One of the primary benefits of this architectural choice is the ability to scale compute resources independently of storage. In traditional data warehousing solutions, compute and storage are often tightly coupled, meaning that scaling one necessitates scaling the other, leading to potential inefficiencies and increased costs. Snowflake's decoupled architecture allows organizations to adjust their compute resources based on workload demands without impacting storage. This means that during periods of high query activity, compute resources can be scaled up to maintain performance, and scaled down during periods of low activity to save costs. This flexibility ensures that organizations only pay for the compute resources they actually use, optimizing both performance and expenditure.

Scaling Compute Without Additional Storage (Option E):
Similarly, Snowflake enables users to scale compute resources up or down without the need to add more storage. This is particularly beneficial for organizations that experience variable workloads. For instance, during a data processing-intensive task, compute resources can be increased to handle the load efficiently. Once the task is completed, these resources can be scaled down, all without any changes to the storage configuration. This dynamic scaling ensures that compute resources are utilized effectively, and costs are kept in check, as there's no need to provision for peak capacity at all times.
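
In practice, compute is resized with a single warehouse command, with no corresponding change to storage; the warehouse name below is illustrative:

    -- Scale a warehouse up for a heavy workload...
    ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'XLARGE';

    -- ...then scale it back down or suspend it when the workload finishes; stored data is untouched
    ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'XSMALL';
    ALTER WAREHOUSE etl_wh SUSPEND;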

Analysis of Other Options:

  • Option B: While Snowflake does provide robust data encryption features, ensuring data security across its platform, this benefit is not a direct result of the separation of compute and storage. Encryption is a standard feature that operates independently of this architectural design.

  • Option C: Snowflake's ability to handle semi-structured data, such as JSON or Avro, and convert it for analysis is a feature of its data processing capabilities, not a direct consequence of separating compute and storage.

  • Option D: This statement is contrary to the benefits of Snowflake's architecture. In Snowflake, storage and compute are decoupled, allowing for independent scaling. Therefore, storage volume growth and compute usage growth are not tightly coupled, providing flexibility and cost savings.

In summary, the separation of compute and storage in Snowflake's architecture allows for independent scaling of compute resources and the ability to adjust compute power without altering storage configurations, leading to enhanced flexibility and cost-efficiency.

Correct answers: A and E

Question 9

Which Snowflake parameter allows an account administrator to define the number of days that historical data is retained for Time Travel at the account level?

A. DATA_RETENTION_TIME_IN_DAYS
B. MAX_DATA_EXTENSION_TIME_IN_DAYS
C. MIN_DATA_RETENTION_TIME_IN_DAYS
D. MAX_CONCURRENCY_LEVEL

Correct Answer: A

Explanation:

In Snowflake, Time Travel is a powerful feature that allows users to access historical data—such as tables, schemas, and databases—for a defined retention period after changes have occurred or objects have been dropped. This enables recovery from accidental data loss or changes, auditing, and consistency checking. The duration for which Snowflake retains this historical data is controlled by the parameter DATA_RETENTION_TIME_IN_DAYS.

Thus, the correct answer is A, because DATA_RETENTION_TIME_IN_DAYS is the parameter that sets the number of days Snowflake retains historical data for Time Travel. It can be configured at the account, database, schema, or table level; when set at the account level, it establishes a default retention period that is inherited by objects unless overridden at a more granular level.
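
For example, the parameter can be set at different levels with statements like these (object names are illustrative; values above 1 day require Enterprise Edition or higher):

    -- Set the account-level default retention period
    ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 7;

    -- Override the default for a specific table
    ALTER TABLE sales_db.public.orders SET DATA_RETENTION_TIME_IN_DAYS = 30;

    -- Check the current account-level setting
    SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN ACCOUNT;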

Let’s analyze the other options to understand why they are incorrect:

B. MAX_DATA_EXTENSION_TIME_IN_DAYS
This is a real Snowflake parameter, but it does not control the Time Travel retention period. It defines the maximum number of days Snowflake may extend a table's data retention period to prevent streams on the table from becoming stale. It is therefore unrelated to configuring how long historical data is retained for Time Travel at the account level.

C. MIN_DATA_RETENTION_TIME_IN_DAYS
This parameter also exists, but it serves a different purpose. It is an account-level setting that enforces a minimum retention floor: the effective retention period for an object is the higher of this value and the object's DATA_RETENTION_TIME_IN_DAYS. It does not itself define the retention period used for Time Travel, so it is not the parameter this question asks for.

D. MAX_CONCURRENCY_LEVEL
This parameter is related to warehouse performance and defines the maximum number of SQL statements that can run concurrently on a virtual warehouse. It has nothing to do with Time Travel or data retention, making it irrelevant to this question.

Now, a few more technical details about DATA_RETENTION_TIME_IN_DAYS:

  • The default value for this parameter is 1 day for all editions; Enterprise Edition and higher can increase it to as many as 90 days for permanent objects.

  • This setting affects how long data is kept after a DML operation (such as DELETE, UPDATE) or after an object is dropped.

  • Increasing the retention period can incur additional storage costs, since Snowflake must preserve the historical state of the data.

  • Only roles with the appropriate privileges (such as ACCOUNTADMIN) can change this setting at the account level.

In summary, when configuring Time Travel behavior at the account level in Snowflake, the administrator must use the DATA_RETENTION_TIME_IN_DAYS parameter. It is the official and only parameter for setting how many days historical data is available for recovery and querying, and it can be applied at multiple hierarchical levels within the system.

Therefore, the correct answer is A.

Question 10

In Snowflake, at which system level is the minimum data retention time for Time Travel—controlled by the MIN_DATA_RETENTION_TIME_IN_DAYS parameter—officially configured?

A. Account
B. Database
C. Schema
D. Table

Correct answer: A

Explanation:

The MIN_DATA_RETENTION_TIME_IN_DAYS parameter in Snowflake is used to enforce a minimum duration for how long historical data is kept available via the Time Travel feature. Time Travel allows users to view and recover data as it existed at earlier points within a specified retention window.

The key purpose of this parameter is to ensure that no object within the account can have a data retention period shorter than this minimum. This provides an additional layer of data governance and compliance enforcement, particularly valuable for organizations with strict recordkeeping and audit requirements.

This specific parameter is set at the account level—meaning it applies to the entire Snowflake environment under that account umbrella. No other object levels (like databases, schemas, or tables) can override it to set a shorter retention duration. Instead, if a developer or administrator tries to configure a shorter DATA_RETENTION_TIME_IN_DAYS at the object level (e.g., at a table), Snowflake will automatically enforce the higher of the two values, so the effective retention period is MAX(DATA_RETENTION_TIME_IN_DAYS, MIN_DATA_RETENTION_TIME_IN_DAYS).

This mechanism ensures that all retention settings remain compliant with the account-wide standard, offering consistency across the platform.
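
A short sketch of how the floor interacts with an object-level setting (the names and values are illustrative):

    -- An account administrator sets a minimum retention floor for the whole account
    ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 7;

    -- A table configured with a shorter retention...
    ALTER TABLE sales_db.public.orders SET DATA_RETENTION_TIME_IN_DAYS = 1;

    -- ...still ends up with an effective retention of MAX(1, 7) = 7 days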

To clarify how this differs from related parameters:

  • DATA_RETENTION_TIME_IN_DAYS can be set at the object level (table, schema, database), allowing you to fine-tune how long specific objects retain historical data.

  • MIN_DATA_RETENTION_TIME_IN_DAYS, however, is a top-down constraint—it acts as a floor value that all object-level settings must meet or exceed.

Let’s also quickly review the other options:

  • B (Database): While individual databases can have their own retention settings, this parameter is not set at this level.

  • C (Schema): Schemas, like databases and tables, may have individual retention settings, but cannot define the account-wide minimum.

  • D (Table): Tables can have their own retention periods, but those periods are always checked against the account-level minimum.

By centralizing this control at the account level, Snowflake helps ensure regulatory alignment, data recoverability, and uniform data retention practices.

Correct answer: A

