
SnowPro Core Snowflake Practice Test Questions and Exam Dumps
Question No 1:
Snowflake provides a mechanism for its customers to override its natural clustering algorithms. This method is:
A. Micro-partitions
B. Clustering keys
C. Key partitions
D. Clustered partitions
Correct Answer: B
Explanation:
Snowflake automatically manages the clustering of data within tables by dividing it into micro-partitions (A), but this is not the method that allows customers to override the clustering.
To explicitly control how data is organized, Snowflake provides the clustering keys (B) feature. Clustering keys allow customers to specify one or more columns to guide the physical storage and organization of data in micro-partitions. When you define clustering keys, Snowflake uses them to organize data more efficiently for query performance, particularly for large tables. This mechanism allows customers to override Snowflake's default automatic clustering and optimize storage and performance based on specific query patterns.
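As an illustration, here is a minimal sketch of defining a clustering key; the table and column names are hypothetical:

    -- Define a clustering key at table creation
    CREATE TABLE sales (sale_date DATE, region VARCHAR, amount NUMBER)
      CLUSTER BY (sale_date, region);

    -- Or add/change the clustering key on an existing table
    ALTER TABLE sales CLUSTER BY (sale_date);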
Micro-partitions (A) are the underlying structure Snowflake uses to store data in a table. It automatically manages micro-partitioning based on the data inserted, but it does not provide control over clustering.
Key partitions (C) and clustered partitions (D) are not terms used in the context of Snowflake's architecture for overriding natural clustering algorithms.
Therefore, the correct answer is B. Clustering keys.
Question No 2:
Which of the following are valid Snowflake Virtual Warehouse Scaling Policies? (Choose two.)
A. Custom
B. Economy
C. Optimized
D. Standard
Correct answer: B, D
Explanation:
In Snowflake, virtual warehouses are used to execute queries and manage workloads. For multi-cluster warehouses, Snowflake provides scaling policies that control how additional clusters are started and shut down in response to demand, allowing you to balance cost against performance. Here's an analysis of each option:
Custom is not a valid scaling policy. Snowflake multi-cluster warehouses support exactly two predefined policies, Standard and Economy; there is no facility for defining a user-named policy such as Custom. Scaling behavior is instead tuned through the warehouse's minimum and maximum cluster counts in combination with one of the two built-in policies. Therefore, A is incorrect.
Economy is a valid scaling policy. It is designed to conserve credits by favoring fully loaded clusters: additional clusters are started only when the system judges there is enough query load to keep them busy, and running clusters are shut down more readily as load drops. Queries may queue longer than under the Standard policy, making Economy an appropriate choice for organizations that prioritize cost savings over minimal wait times.
While Optimized sounds reasonable in terms of resource management, it is not a valid scaling policy in Snowflake. The platform uses other predefined policies such as Economy and Standard for scaling, but there is no scaling policy specifically called Optimized.
Standard is the other valid scaling policy, and it is the default. It prioritizes performance over credit conservation: additional clusters are started as soon as queries begin to queue, minimizing wait times, and clusters are shut down only after successive checks confirm that the remaining load can be redistributed. This makes it the appropriate choice when responsiveness matters more than cost.
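A minimal sketch of where the policy is set follows; the warehouse name and cluster counts are illustrative:

    CREATE WAREHOUSE etl_wh
      WAREHOUSE_SIZE = 'MEDIUM'
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 4
      SCALING_POLICY = 'ECONOMY';   -- the only alternative is 'STANDARD' (the default)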
Thus, the correct answers are B and D: Economy and Standard are the only two scaling policies recognized for Snowflake multi-cluster virtual warehouses.
Question No 3:
True or False: A single database can exist in more than one Snowflake account.
A. True
B. False
Correct answer: B
Explanation:
In Snowflake, a database is a logical container that organizes schemas and their objects, and every database is contained within exactly one Snowflake account. Snowflake's Secure Data Sharing feature can make a database's objects accessible to other accounts, but it does not place the same database in multiple accounts: the provider grants privileges on selected objects to a share, and each consumer account then creates its own read-only database from that share. That consumer-side database is a separate database object; the underlying data is never copied, yet the original database itself still exists only in the provider's account.
Database replication behaves the same way: it maintains a secondary database in another account, and that secondary is again a distinct database object with its own identity.
Therefore, the statement is False. A single database cannot exist in more than one Snowflake account, even though its data can be shared with, or replicated to, other accounts.
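As a rough illustration of why sharing does not place one database in two accounts, here is a minimal sketch of the provider/consumer flow; all object, account, and share names are hypothetical:

    -- Provider account: expose objects through a share
    CREATE SHARE sales_share;
    GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
    GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
    GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;
    ALTER SHARE sales_share ADD ACCOUNTS = xy12345;

    -- Consumer account: create a separate, read-only database from the share
    CREATE DATABASE sales_shared FROM SHARE provider_acct.sales_share;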
Question No 4:
Which role is most appropriate for creating and managing users and roles in a Snowflake account?
A. SYSADMIN
B. SECURITYADMIN
C. PUBLIC
D. ACCOUNTADMIN
Correct answer: B
Explanation:
In Snowflake, role-based access control (RBAC) is used to assign permissions and responsibilities to various roles. Each role is designed to handle specific administrative and operational tasks in the system. When it comes to managing users and roles, the most appropriate role is SECURITYADMIN. Let's evaluate each option to understand why B is correct and the others are not ideal.
Option A: SYSADMIN
The SYSADMIN role is primarily responsible for managing objects like databases, schemas, warehouses, and other data-related objects. It is one of the most powerful roles when it comes to managing infrastructure within a Snowflake account. However, it does not have the privileges necessary to create or manage users and roles unless explicitly granted by another higher-level role. Therefore, while SYSADMIN can control many aspects of the Snowflake environment, it is not recommended for user and role management.
Option B: SECURITYADMIN
The SECURITYADMIN role is designed specifically for managing security-related objects, including users, roles, and grants. It has the required privileges to:
Create and drop users.
Assign roles to users.
Create and manage other roles.
Manage role hierarchies.
This role is also positioned below the ACCOUNTADMIN in the RBAC hierarchy but is specifically intended for delegating and controlling access through roles. As such, it is the recommended role for user and role administration because it allows delegation of security management without giving full administrative control of the account.
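For illustration, a minimal sketch of typical SECURITYADMIN tasks follows; the user, role, and password values are hypothetical:

    USE ROLE SECURITYADMIN;

    -- Create a custom role and a user, then wire them together
    CREATE ROLE analyst;
    CREATE USER jsmith PASSWORD = 'TempPassw0rd!' DEFAULT_ROLE = analyst MUST_CHANGE_PASSWORD = TRUE;
    GRANT ROLE analyst TO USER jsmith;

    -- Attach the custom role to the system role hierarchy, per Snowflake best practice
    GRANT ROLE analyst TO ROLE SYSADMIN;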
Option C: PUBLIC
The PUBLIC role is a default role that every user in Snowflake has access to. It is meant for granting minimal privileges and ensuring baseline accessibility. It does not have any administrative capabilities and certainly cannot manage users or roles. Using PUBLIC for administrative functions is not only technically incorrect but also a major security risk.
Option D: ACCOUNTADMIN
The ACCOUNTADMIN role is the most powerful role in Snowflake and has access to all objects and all administrative operations. While this role can indeed manage users and roles, it is not recommended for daily user and role management due to the broad scope of its privileges. Best practices dictate limiting the use of this role to critical account-level administrative tasks to avoid accidental changes that could impact the entire system. Delegating user and role management to SECURITYADMIN helps in following the principle of least privilege, where users have only the access necessary to perform their job functions.
In summary, the SECURITYADMIN role is specifically tailored for managing users and roles. It provides the right balance between capability and control without exposing the environment to excessive risk. This is why B is the correct and recommended answer.
Question No 5:
True or False: Bulk unloading of data from Snowflake supports the use of a SELECT statement.
A. True
B. False
Answer: A
Explanation:
In Snowflake, the COPY INTO <location> command is used to unload data from a table, or from the result of a query, into files in a stage. The command accepts a SELECT statement as the source of the data to be unloaded, rather than requiring a direct table reference. This capability enables extraction based on complex queries, including JOINs across multiple tables. The results of the SELECT query are written to one or more files in the specified location, such as a Snowflake internal stage or an external storage location like Amazon S3, Google Cloud Storage, or Microsoft Azure.
The SELECT statement used in the COPY INTO command can include the full syntax and semantics of Snowflake SQL queries. This means that users can perform complex data transformations, aggregations, and filtering within the query before the data is unloaded. For instance, one can use functions like OBJECT_CONSTRUCT to convert relational table rows into JSON format, or apply CAST functions to convert numeric columns to specific data types when unloading data to formats like Parquet. Additionally, the SELECT statement can be partitioned using the PARTITION BY clause to organize the unloaded data into a directory structure, which can enhance the efficiency of downstream data processing tasks.
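For example, a COPY INTO unload driven by a SELECT with a JOIN might look like the following sketch; the stage, tables, and columns are hypothetical:

    COPY INTO @my_stage/unload/orders_
      FROM (
        SELECT o.order_id, c.customer_name, o.amount
        FROM orders o
        JOIN customers c ON o.customer_id = c.customer_id
        WHERE o.amount > 100
      )
      FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP')
      OVERWRITE = TRUE;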
Therefore, the statement that bulk unloading of data from Snowflake supports the use of a SELECT statement is True.
Question No 6:
Choose the correct three types of internal stages used in Snowflake.
A. Named Stage
B. User Stage
C. Table Stage
D. Schema Stage
Correct Answers: A, B, C
Explanation:
In Snowflake, internal stages are storage locations managed by Snowflake that allow users to stage (upload) files before loading them into tables or unloading them from tables for further use. These internal stages eliminate the need to use an external cloud storage provider, offering convenience and tighter integration with Snowflake’s native services. There are three primary types of internal stages, each serving a different purpose in managing data workflows: Named Stages, User Stages, and Table Stages.
A. Named Stage
Named stages are explicitly created by users within a Snowflake account. These stages are defined using the CREATE STAGE SQL command and can be customized to include storage integration details such as file format and encryption settings. They are ideal for scenarios where data needs to be shared or reused across multiple processes or users. Named stages are useful because they provide centralized control over file management and access control.
B. User Stage
Every user in Snowflake is automatically assigned a user stage, which is a personal internal staging area unique to that user account. A user can upload files to this stage using the PUT command and then load them into tables as needed. The user stage is implicitly available and does not require explicit creation. This type of internal stage is suitable for individual tasks, testing, or temporary file storage. Because it is associated with the user, the files in the user stage are private to that user unless explicitly shared.
C. Table Stage
Each table in Snowflake has its own table stage, which is used primarily for unloading data from that table or temporarily storing files that will be loaded into the table. Like the user stage, the table stage is automatically created and managed by Snowflake. It allows users to unload data from the table into files, which can then be downloaded or moved elsewhere. This type of stage is very useful for managing export operations or for quick staging when dealing directly with a specific table.
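The three stage types are referenced with different prefixes. The following sketch, run from a client such as SnowSQL with hypothetical names and file paths, shows each:

    -- Named stage: created explicitly, referenced as @stage_name
    CREATE STAGE my_stage FILE_FORMAT = (TYPE = 'CSV');
    PUT file:///tmp/data.csv @my_stage;

    -- User stage: implicit per user, referenced as @~
    PUT file:///tmp/data.csv @~/staged/;

    -- Table stage: implicit per table, referenced as @%table_name
    PUT file:///tmp/data.csv @%my_table;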
D. Schema Stage
This option is incorrect because Schema Stage is not a valid or recognized type of internal stage in Snowflake. While schemas are logical structures within databases that organize tables and other database objects, they are not used for staging data in the way internal stages are. Snowflake does not define or support any internal stage type specifically referred to as a "Schema Stage." Therefore, D is not one of the correct answers.
To summarize, the three correct types of internal stages in Snowflake are Named Stage, User Stage, and Table Stage. These internal stages provide flexibility in how files are managed for loading and unloading operations, allowing for efficient data pipelines and controlled access to staged files. Understanding these distinctions is key for optimizing data ingestion and export processes in Snowflake.
Question No 7:
True or False: A customer using SnowSQL / native connectors will be unable to also use the Snowflake Web Interface (UI) unless access to the UI is explicitly granted by support.
A. True
B. False
Correct answer: B
Explanation:
Snowflake provides multiple interfaces for users to interact with their data, including the Snowflake Web Interface (UI), SnowSQL, and native connectors. Importantly, using one interface does not restrict access to others. Specifically, a customer utilizing SnowSQL or native connectors can still access and use the Snowflake Web Interface without needing explicit permission from Snowflake support.
The Snowflake Web Interface, often referred to as Snowsight in newer versions, is the browser-based UI that allows users to perform various operations such as running SQL queries, managing databases, and monitoring system performance. This interface is accessible to all users with the appropriate credentials and does not require special permissions to use, regardless of whether they are also using SnowSQL or native connectors.
SnowSQL is the command-line client provided by Snowflake for executing SQL queries and performing administrative tasks. It operates independently of the Web Interface, and its usage does not impact a user's ability to access the Web Interface. Similarly, native connectors are used to integrate Snowflake with external applications and services, and their usage also does not affect access to the Web Interface.
Therefore, the statement that a customer using SnowSQL or native connectors will be unable to use the Snowflake Web Interface unless access is explicitly granted by support is false. Users can freely utilize both the Web Interface and other tools like SnowSQL or native connectors concurrently, as these interfaces are designed to work independently and complement each other within the Snowflake ecosystem.
Question No 8:
Where can account-level storage usage in Snowflake be monitored?
A. The Snowflake Web Interface (UI) in the Databases section
B. The Snowflake Web Interface (UI) in the Account -> Billing & Usage section
C. The Information Schema -> ACCOUNT_USAGE_HISTORY View
D. The Account Usage Schema -> ACCOUNT_USAGE_METRICS View
Correct Answer: B
Explanation:
Monitoring account-level storage usage is essential for organizations using Snowflake to effectively manage costs and ensure resource optimization. Snowflake provides several tools and interfaces that allow users to monitor and analyze their data usage, but not all of them provide account-wide storage metrics. Let's evaluate each option based on its actual functionality.
Option A: The Snowflake Web Interface (UI) in the Databases section
This option allows users to explore and manage individual databases. You can see the size of each database and the amount of storage it consumes. However, this view is not aggregated at the account level. It provides granular details for each database but not a consolidated view of the entire account's usage. Therefore, A is incorrect for account-level storage monitoring.
Option B: The Snowflake Web Interface (UI) in the Account -> Billing & Usage section
This is the correct option. This section of the Snowflake UI offers a comprehensive overview of account-wide storage, including data storage, staged files, and Fail-safe storage. It provides visualizations, historical trends, and breakdowns that are useful for tracking overall usage and controlling costs. The data here is derived from Snowflake's internal ACCOUNT_USAGE schema and is intended to help account administrators monitor usage and spending. Thus, B is the best answer for monitoring storage at the account level.
Option C: The Information Schema -> ACCOUNT_USAGE_HISTORY View
While it seems plausible, this view does not actually exist. Snowflake’s Information Schema provides metadata about objects in a specific database or schema but does not contain a view called ACCOUNT_USAGE_HISTORY. Snowflake’s ACCOUNT_USAGE schema (not Information Schema) contains views such as STORAGE_USAGE and DATABASE_STORAGE_USAGE_HISTORY, which provide more relevant information. Hence, C is incorrect both because of the incorrect schema and because the view does not exist.
Option D: The Account Usage Schema -> ACCOUNT_USAGE_METRICS View
Again, this option is based on a non-existent view. The Account Usage schema does contain relevant views like STORAGE_USAGE, DATABASE_STORAGE_USAGE_HISTORY, and WAREHOUSE_LOAD_HISTORY, but ACCOUNT_USAGE_METRICS is not a standard or documented view provided by Snowflake. As such, D is not a valid answer.
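For completeness, account-level storage can also be queried directly from the shared SNOWFLAKE database; here is a minimal sketch using the documented STORAGE_USAGE view:

    -- Daily account-wide storage, staged-file, and Fail-safe bytes
    SELECT usage_date,
           storage_bytes,
           stage_bytes,
           failsafe_bytes
    FROM snowflake.account_usage.storage_usage
    ORDER BY usage_date DESC
    LIMIT 30;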
In summary, while multiple interfaces and schemas can provide usage insights, only the Account -> Billing & Usage section in the Snowflake Web Interface offers a centralized, account-level view of storage consumption. It is intended specifically for billing administrators and decision-makers responsible for resource and cost management.
Question No 9:
Which two factors influence credit consumption by the Compute Layer (Virtual Warehouses) in Snowflake?
A. Number of users
B. Warehouse size
C. Amount of data processed
D. # of clusters for the Warehouse
Correct Answers: B, D
Explanation:
Credit consumption in Snowflake's Compute Layer, which is handled through Virtual Warehouses, is primarily influenced by how resources are provisioned and utilized. Two major factors that directly affect how many credits are consumed are the size of the warehouse and the number of clusters used for scalability. Let's examine each option and why B and D are the correct answers.
Option A: Number of users
While the number of users may indirectly lead to increased compute resource usage (e.g., more users triggering more queries), this is not a direct factor in how Snowflake calculates credit usage. Virtual Warehouses are billed based on the time they run and their configuration, not the number of concurrent users. So, a single user running heavy queries on a large warehouse could consume more credits than many users on a small, efficient one. Therefore, A is incorrect.
Option B: Warehouse size
This is one of the most significant and direct factors in credit consumption. Snowflake offers warehouses in sizes ranging from X-Small to 6X-Large, and the hourly credit rate roughly doubles at each step: an X-Small warehouse consumes 1 credit per hour, while a Large consumes 8 credits per hour. The larger the warehouse, the greater the credit consumption for every hour it runs, whether it is actively executing queries or sitting idle; credits stop accruing only when the warehouse is suspended. Therefore, B is correct.
Option C: Amount of data processed
Although this might seem relevant, Snowflake does not charge compute credits based on the volume of data processed by the warehouse. Data processing volume could influence query duration, which in turn might affect how long the warehouse stays on, but the charge is still based on time and warehouse configuration, not data size. So, while indirectly related, this is not one of the primary factors in determining compute credit consumption. Therefore, C is incorrect.
Option D: # of clusters for the Warehouse
Snowflake offers multi-cluster warehouses for handling concurrency and auto-scaling. If a multi-cluster warehouse is configured (e.g., Min 1, Max 10 clusters), and workload spikes trigger the system to spin up multiple clusters, then each active cluster consumes credits at the same rate as the base warehouse size. So if you have a Large warehouse with 3 active clusters, you are billed for 3x the Large rate while all clusters are running. Therefore, the number of clusters directly impacts credit usage. Thus, D is correct.
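As a sketch, the two billable factors appear directly in the warehouse definition; the name and sizing are illustrative:

    CREATE WAREHOUSE reporting_wh
      WAREHOUSE_SIZE = 'LARGE'      -- 8 credits per hour per running cluster
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 3         -- up to 3 clusters under load
      SCALING_POLICY = 'STANDARD';
    -- With all 3 clusters running, consumption is 3 x 8 = 24 credits per hour.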
Summary:
Credit usage in Snowflake’s Compute Layer is not based on the number of users or the amount of data processed. Instead, it is determined by how long the virtual warehouse runs, its size, and how many clusters are used (especially in multi-cluster setups). This ensures that resource-intensive workloads or high concurrency are supported efficiently, albeit at a higher credit cost.
So, the two correct answers are: B and D.
Question No 10:
Which statement most accurately explains the concept of clustering in Snowflake?
A. Clustering represents the way data is grouped together and stored within Snowflake’s micro-partitions
B. The database administrator must define the clustering methodology for each Snowflake table
C. The clustering key must be included on the COPY command when loading data into Snowflake
D. Clustering can be disabled within a Snowflake account
Answer: A
Explanation:
Clustering in Snowflake refers to the way data is organized within the platform’s micro-partitions, which are an internal mechanism Snowflake uses to manage and store table data efficiently. Each table in Snowflake is automatically divided into these micro-partitions as data is loaded, and Snowflake keeps track of the range of values for each column in each partition. This metadata enables Snowflake’s optimizer to skip scanning unnecessary partitions during query execution, improving performance.
Option A correctly identifies this behavior by stating that clustering is the way data is grouped and stored in micro-partitions. This process allows Snowflake to implement automatic pruning, whereby only the necessary partitions are scanned when executing queries. By default, data is clustered naturally, in the order it was loaded. When a clustering key is specified, Snowflake monitors how well the data adheres to that key, and its Automatic Clustering service reclusters the table in the background as needed to maintain performance.
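To gauge how well a table adheres to a clustering key, Snowflake exposes a system function; a minimal sketch with a hypothetical table and column:

    -- Returns clustering depth and overlap statistics as JSON
    SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(sale_date)');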
Option B is incorrect because defining a clustering methodology is not mandatory. Snowflake tables can be created and used without specifying a clustering key. Clustering is typically used for large tables with specific query patterns that can benefit from better partition pruning. It’s an optional performance optimization rather than a required configuration.
Option C is incorrect because the COPY command, used for loading data into Snowflake, does not require the specification of a clustering key. Clustering keys are defined during table creation or altered afterward using SQL DDL commands. The COPY command simply loads data into the table, and Snowflake determines how to store it in micro-partitions, following any existing clustering definitions on the table.
Option D is also incorrect. Clustering is a feature of Snowflake’s table architecture and cannot be "disabled" in a Snowflake account. Instead, if you don’t specify a clustering key, Snowflake defaults to natural clustering. If you do define a clustering key, Snowflake may automatically recluster the data as needed. There’s no setting to universally disable this behavior, although you can choose not to define clustering keys or disable auto-reclustering at the table level.
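That table-level control looks like the following sketch; the table name is hypothetical:

    -- Pause and later resume background Automatic Clustering for one table
    ALTER TABLE sales SUSPEND RECLUSTER;
    ALTER TABLE sales RESUME RECLUSTER;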
In summary, clustering is an internal optimization mechanism that organizes data in micro-partitions to enhance query performance through efficient data pruning. Snowflake’s ability to handle this automatically makes it a powerful feature, but it also allows users to manually fine-tune it when needed.