Professional Cloud Developer Google Practice Test Questions and Exam Dumps




Question No 1:

You are in the process of migrating data from an on-premises virtual machine to Google Cloud Storage for use by a Cloud Dataproc Hadoop cluster within a Google Cloud Platform (GCP) environment. Which of the following commands should you use to upload the files to Google Cloud Storage?

A. gsutil cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/
B. gcloud cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/
C. hadoop fs cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/
D. gcloud dataproc cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/

Correct Answer:

A. gsutil cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/

Explanation:

When uploading files to Google Cloud Storage (GCS) for use in a Cloud Dataproc Hadoop cluster, the correct tool to use is gsutil, which is a command-line tool designed for managing GCS resources. Here's a breakdown of why option A is correct and the others are not:

Option A: gsutil cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/

gsutil is the recommended command-line tool for interacting with Google Cloud Storage, and it is commonly used for file transfers. The cp command in gsutil allows you to copy files from a local machine to a GCS bucket. In this case, [LOCAL_OBJECT] represents the file you want to upload, and gs://[DESTINATION_BUCKET_NAME]/ is the destination Google Cloud Storage bucket.

  • Why it's correct: gsutil cp is the appropriate command for transferring data from your on-premises machine to Google Cloud Storage, which is the required step before the files can be accessed by the Cloud Dataproc Hadoop cluster.
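
For illustration, here is a minimal sketch using hypothetical file and bucket names:

  # Copy a single local file into a Cloud Storage bucket
  gsutil cp ~/data/sales.csv gs://my-dataproc-staging/

  # Copy an entire local directory recursively
  gsutil cp -r ~/data/ gs://my-dataproc-staging/data/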

Option B: gcloud cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/

While gcloud is the general-purpose command-line tool for managing Google Cloud resources, it has no top-level cp command. File transfers to GCS are handled by gsutil (or, in newer SDK releases, by the gcloud storage cp subcommand, which is not what this option shows).

Option C: hadoop fs cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/

This command (whose actual syntax is hadoop fs -cp) operates on file systems that Hadoop can reach, such as HDFS. It is not meant for copying files from an on-premises machine to Google Cloud Storage; it is for managing files within a Hadoop environment.

Option D: gcloud dataproc cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/

There is no gcloud dataproc cp command. The gcloud dataproc command group is used for managing Dataproc clusters and related services, not for uploading files to GCS.

The correct command to use for uploading files from an on-premises VM to Google Cloud Storage is gsutil cp, as it is designed specifically for interacting with GCS.



Question No 2:

After migrating your applications to Google Cloud Platform (GCP), you continued using your existing monitoring platform. However, you have noticed that the notification system is too slow for responding to time-critical issues. What should you do to address this issue?

A. Replace your entire monitoring platform with Stackdriver.
B. Install the Stackdriver agents on your Compute Engine instances.
C. Use Stackdriver to capture and alert on logs, then ship them to your existing platform.
D. Migrate some traffic back to your old platform and perform A/B testing on both platforms concurrently.

Correct Answer:

C. Use Stackdriver to capture and alert on logs, then ship them to your existing platform.

Explanation:

In this scenario, the primary issue is that the existing notification system is too slow for handling time-critical problems after migrating applications to Google Cloud Platform (GCP). To address this, you need to enhance your monitoring and alerting system, leveraging the speed and flexibility of Google Cloud's native tools.

Option A: Replace your entire monitoring platform with Stackdriver.

While Stackdriver (now part of Google Cloud Operations Suite) offers a comprehensive set of monitoring and logging tools, replacing your entire monitoring platform is a significant and often unnecessary change. This might also introduce disruptions and require extensive reconfiguration. Since your goal is to address the issue of slow notifications, a more targeted solution is preferable.

Option B: Install the Stackdriver agents on your Compute Engine instances.

Installing Stackdriver agents on your Compute Engine instances allows you to monitor system-level metrics like CPU usage, memory, and disk activity. However, this action alone does not solve the problem of slow notifications for time-critical issues. You need to improve the way alerts and notifications are processed, not just gather more metrics.

Option C: Use Stackdriver to capture and alert on logs, then ship them to your existing platform.

This option is the most suitable solution. By integrating Stackdriver for log capture and alerting, you can take advantage of its near real-time notification capabilities. Stackdriver can generate alerts based on log data and then ship those alerts to your existing platform for processing and notifications. This method addresses the issue of slow notifications by using Stackdriver’s fast and efficient alerting system while maintaining compatibility with your existing monitoring setup.
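
As a hedged sketch of one way to wire this up, you could define a logs-based metric that an alerting policy can then fire on; the metric name and filter below are hypothetical:

  # Create a logs-based metric counting ERROR entries from Compute Engine
  gcloud logging metrics create app-error-count \
      --description="Count of ERROR entries from the app" \
      --log-filter='severity>=ERROR AND resource.type="gce_instance"'

An alerting policy built on this metric can then forward notifications to your existing platform (for example, via a webhook notification channel).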

Option D: Migrate some traffic back to your old platform and perform A/B testing on both platforms concurrently.

This option is unnecessary and introduces additional complexity without directly solving the notification delay problem. A/B testing can be useful for other purposes but will not solve the issue of slow notifications.

The best approach is to use Stackdriver to capture and alert on logs, then ship them to your existing platform (Option C). This integrates the fast alerting capabilities of Stackdriver with your existing platform, enhancing notification speed without the need for a complete overhaul of your monitoring setup.




Question No 3:

You are planning to migrate a MySQL database to the managed Cloud SQL database in Google Cloud. Your Compute Engine virtual machine instances will need to connect to this Cloud SQL instance. However, you do not want to whitelist IP addresses for the Compute Engine instances to access Cloud SQL. What should you do to allow these Compute Engine instances to connect to the Cloud SQL instance without the need for IP whitelisting?

A. Enable private IP for the Cloud SQL instance.
B. Whitelist a project to access Cloud SQL, and add Compute Engine instances in the whitelisted project.
C. Create a role in Cloud SQL that allows access to the database from external instances, and assign the Compute Engine instances to that role.
D. Create a Cloud SQL instance on one project. Create Compute Engine instances in a different project. Create a VPN between these two projects to allow internal access to Cloud SQL.

Correct Answer:

A. Enable private IP for the Cloud SQL instance.

Explanation:

When migrating a MySQL database to Google Cloud's managed Cloud SQL service and connecting it to Compute Engine instances, the goal is to avoid the overhead of managing IP whitelisting. Keeping the connection on a private network both reduces operational overhead and keeps database traffic off the public internet.

Option A: Enable private IP for the Cloud SQL instance.

This is the best solution. By enabling private IP for the Cloud SQL instance, you allow your Compute Engine instances to communicate with Cloud SQL over a private network within your Google Cloud VPC (Virtual Private Cloud). This method eliminates the need to whitelist external IP addresses or manage IP access lists. The communication occurs securely through Google Cloud's internal network, ensuring lower latency and enhanced security since the connection is isolated from the public internet.
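
A minimal sketch of enabling private IP (instance and network names are hypothetical, and depending on your SDK version this may require the beta track):

  # Attach the Cloud SQL instance to a VPC network and remove its public IP
  gcloud sql instances patch my-sql-instance \
      --network=projects/my-gcp-project/global/networks/default \
      --no-assign-ip

Note that private services access must already be configured on the VPC for this to succeed.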

Option B: Whitelist a project to access Cloud SQL, and add Compute Engine instances in the whitelisted project.

Cloud SQL does not offer project-level whitelisting; authorized networks are defined as individual IP ranges. Even if access were arranged this way, you would still be managing public IP access lists, so it is neither the most efficient nor the most secure approach compared to using private IPs.

Option C: Create a role in Cloud SQL that allows access to the database from external instances, and assign the Compute Engine instances to that role.

Roles in Cloud SQL control user access permissions at the database level (such as read/write permissions), but they do not manage network access. This method will not solve the issue of avoiding IP whitelisting for network connections.

Option D: Create a Cloud SQL instance on one project. Create Compute Engine instances in a different project. Create a VPN between these two projects to allow internal access to Cloud SQL.

While a VPN can provide a private connection between two projects, it introduces unnecessary complexity. Enabling private IP for the Cloud SQL instance is a simpler and more straightforward solution within the same project.

The most efficient and secure approach is to enable private IP for the Cloud SQL instance (Option A). This allows your Compute Engine instances to connect to Cloud SQL over a private network, avoiding the need for IP whitelisting and ensuring a more secure, low-latency connection.




Question No 4:

You have deployed your website on Google Cloud's Compute Engine. Your marketing team wants to test conversion rates between three different website designs. Which approach should you use to effectively conduct this A/B testing?

A. Deploy the website on App Engine and use traffic splitting.
B. Deploy the website on App Engine as three separate services.
C. Deploy the website on Cloud Functions and use traffic splitting.
D. Deploy the website on Cloud Functions as three separate functions.

Correct Answer:

A. Deploy the website on App Engine and use traffic splitting.

Explanation:

When you are tasked with testing multiple versions of a website to measure conversion rates, one effective way to do this is by leveraging traffic splitting. This enables you to direct a certain percentage of user traffic to each version of the website, track performance metrics, and gather data for comparison. Let's look at each option in detail:

Option A: Deploy the website on App Engine and use traffic splitting.

This is the best option. App Engine supports traffic splitting, a feature that allows you to split the incoming traffic across multiple versions of your app. You can easily deploy different versions (e.g., different website designs) on App Engine and configure traffic splitting rules to direct traffic to each version. This approach is straightforward for A/B testing as it automatically handles load balancing, scaling, and monitoring. By using traffic splitting, you can test conversion rates for each version of your website in real-time without needing complex infrastructure setup.
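
A minimal sketch, assuming three deployed versions named v1, v2, and v3:

  # Split traffic roughly evenly across three versions of the default service
  gcloud app services set-traffic default \
      --splits=v1=0.34,v2=0.33,v3=0.33 \
      --split-by=cookie

Splitting by cookie keeps each visitor pinned to a single design, which matters when comparing conversion rates.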

Option B: Deploy the website on App Engine as three separate services.

While you could deploy the different website designs as separate services on App Engine, this approach would require additional configuration to manage traffic and ensure a balanced distribution of visitors. It adds complexity compared to using the built-in traffic splitting feature, making it less efficient for A/B testing.

Option C: Deploy the website on Cloud Functions and use traffic splitting.

Cloud Functions is designed for running individual functions in response to events, not for serving full web applications. It also has no built-in traffic-splitting feature comparable to App Engine's, so this setup is poorly suited to managing website traffic. Cloud Functions is better for event-driven tasks and microservices, not large-scale web traffic management.

Option D: Deploy the website on Cloud Functions as three separate functions.

This approach would require setting up three separate Cloud Functions for each version of the website. However, Cloud Functions are not ideal for serving full websites, and this method would require more management and complexity than using App Engine with traffic splitting. It would also not be as scalable or efficient for A/B testing large volumes of web traffic.

The best solution is Option A: deploying the website on App Engine and using traffic splitting. This provides a clean, simple, and efficient way to perform A/B testing with minimal setup and automatic scaling, making it the most appropriate choice for this use case.




Question No 5:

You need to copy a directory named local-scripts along with all of its contents from your local workstation to a Google Cloud Compute Engine virtual machine instance. Which command should you use to achieve this?

A. gsutil cp --project "my-gcp-project" -r ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"
B. gsutil cp --project "my-gcp-project" -R ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"
C. gcloud compute scp --project "my-gcp-project" --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"
D. gcloud compute mv --project "my-gcp-project" --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

Correct Answer:

C. gcloud compute scp --project "my-gcp-project" --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

Explanation:

When you need to copy files or directories from your local workstation to a Google Cloud Compute Engine instance, the Google Cloud SDK provides two file-transfer tools: gsutil (for Cloud Storage objects) and gcloud compute scp (for VM instances). Here's how the commands work and why option C is correct:

Option A:

gsutil cp --project "my-gcp-project" -r ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

  • Incorrect. The gsutil command is used for managing Google Cloud Storage (GCS) objects, not for copying files directly to a Compute Engine instance. The use of gsutil cp is intended for cloud storage operations, and it does not support direct file transfer to a Compute Engine instance.

Option B:

gsutil cp --project "my-gcp-project" -R ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

  • Incorrect. This is essentially the same command as Option A, with -R in place of -r (in gsutil the two flags are equivalent). Like Option A, it targets Cloud Storage, not Compute Engine instances, so it is the wrong tool for this task.

Option C:

gcloud compute scp --project "my-gcp-project" --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

  • Correct. The gcloud compute scp command is the appropriate tool for copying files between your local workstation and a Compute Engine instance. It uses the SCP (Secure Copy Protocol) and allows you to transfer files recursively using the --recurse flag. It is designed specifically for managing files on virtual machine instances.
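
The same command also works in the other direction; for example, to pull the directory back from the instance (paths are illustrative):

  # Copy the directory from the instance back to the local workstation
  gcloud compute scp --recurse \
      gcp-instance-name:~/server-scripts/ ~/local-scripts-backup/ \
      --zone "us-east1-b" --project "my-gcp-project"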

Option D:

gcloud compute mv --project "my-gcp-project" --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

  • Incorrect. gcloud compute does not provide an mv command. Even if it did, a move would remove the original directory from your workstation, whereas the requirement is to copy it so that it exists both locally and on the Compute Engine instance.

The correct command to copy a directory to a Compute Engine instance is Option C (gcloud compute scp). This command uses SCP for secure file transfer and allows you to recursively copy directories to remote instances. This is the most appropriate method for copying files directly between your local machine and Google Cloud virtual machine instances.




Question No 6:

You are deploying an application to a Compute Engine virtual machine instance with the Stackdriver Monitoring Agent installed. The application is a Unix process running on the instance. You want to receive an alert if the Unix process has not run for at least 5 minutes. However, you cannot modify the application to generate metrics or logs. Which type of alert condition should you configure?

A. Uptime check
B. Process health
C. Metric absence
D. Metric threshold

Correct Answer:

C. Metric absence

Explanation:

When configuring alerts in Google Cloud using Stackdriver Monitoring (now part of Google Cloud Monitoring), you can set up different alert conditions based on various types of data. In this scenario, the goal is to be alerted if a Unix process on a Compute Engine virtual machine has not been running for at least 5 minutes, and you cannot modify the application to produce logs or metrics directly. Let's review each option and why Metric Absence is the correct choice.

Option A: Uptime check

  • Incorrect. An uptime check is used to verify whether an external resource, such as a web server or a URL, is available. It checks the availability of services like HTTP/S endpoints by pinging or sending requests. However, since you want to monitor the status of a Unix process running locally on the VM, an uptime check is not appropriate.

Option B: Process health

  • Incorrect. Process health conditions monitor metrics about specific processes, such as CPU usage, memory consumption, or disk I/O. Here, however, you want to know whether the process has run at all in the last 5 minutes, and since you cannot modify the application to emit such metrics directly, process health is not the best fit.

Option C: Metric absence

  • Correct. Metric absence conditions allow you to trigger an alert when a specific metric is absent or missing for a defined period of time. In this case, Stackdriver Monitoring can track a metric that represents the Unix process (e.g., CPU usage, system processes), and you can configure an alert to trigger if no such metric is received for 5 minutes. This condition would notify you if the Unix process is not running or producing metrics, satisfying the requirement.
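
A hedged sketch of such a policy follows; the metric type shown is the monitoring agent's per-process count and should be verified for your agent version, and the gcloud monitoring command group may be under the alpha or beta track depending on your SDK release:

  # Define a metric-absence alerting policy and create it
  cat > absence-policy.yaml <<'EOF'
  displayName: Unix process metric absent
  combiner: OR
  conditions:
  - displayName: No process metric for 5 minutes
    conditionAbsent:
      filter: 'metric.type="agent.googleapis.com/processes/count_by_state" AND resource.type="gce_instance"'
      duration: 300s
  EOF
  gcloud alpha monitoring policies create --policy-from-file=absence-policy.yaml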

Option D: Metric threshold

  • Incorrect. Metric threshold conditions are used to trigger an alert when a metric exceeds or falls below a specific threshold. While this might be useful for monitoring resource utilization (e.g., CPU usage or memory consumption), it’s not suitable for monitoring the presence or absence of a process itself, which is what you are trying to achieve.

In this case, Metric Absence is the best alert condition because it allows you to monitor when a specific metric is missing for a specified time period. By leveraging this condition, you can be alerted if the Unix process stops running for 5 minutes without needing to modify the application or its logging/metrics generation.




Question No 7:

You have two tables in an ANSI-SQL compliant database that have identical columns. You need to combine these tables into a single result set, ensuring that duplicate rows are removed. What is the best approach to accomplish this?

A. Use the JOIN operator in SQL to combine the tables.
B. Use nested WITH statements to combine the tables.
C. Use the UNION operator in SQL to combine the tables.
D. Use the UNION ALL operator in SQL to combine the tables.

Correct Answer:

C. Use the UNION operator in SQL to combine the tables.

Explanation:

When working with databases, combining data from multiple tables is a common requirement. In this case, you have two tables that have identical columns, and you need to combine them into a single result set while removing any duplicate rows. Let's look at the different options and why the UNION operator is the best choice.

Option A: Use the JOIN operator in SQL to combine the tables

  • Incorrect. The JOIN operator is used to combine rows from two or more tables based on a related column between them. However, in this scenario, you don't need to join the tables based on specific keys. You simply need to combine their contents and remove duplicates. Therefore, JOIN is not the appropriate choice.

Option B: Use nested WITH statements to combine the tables

  • Incorrect. The WITH clause (also known as Common Table Expressions, or CTEs) is used to simplify complex queries by defining temporary result sets that can be referenced within the main query. While WITH statements can be useful for organizing queries, they do not inherently remove duplicates when combining tables. Thus, they aren't the best choice for this scenario.

Option C: Use the UNION operator in SQL to combine the tables

  • Correct. The UNION operator in SQL combines the result sets of two or more queries and automatically removes duplicate rows from the final result set. This is the most efficient way to combine two tables and ensure that duplicates are eliminated. Since you want to remove duplicates, UNION is the correct choice.
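
A minimal sketch, assuming two hypothetical tables with identical columns:

  -- Rows appearing in both tables are collapsed to a single row
  SELECT id, name, created_at FROM table_a
  UNION
  SELECT id, name, created_at FROM table_b;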

Option D: Use the UNION ALL operator in SQL to combine the tables

  • Incorrect. The UNION ALL operator is similar to UNION, but it does not remove duplicate rows. It simply combines the result sets from both tables, including any duplicates. Since the goal in this case is to eliminate duplicates, UNION ALL would not fulfill that requirement.

The best choice for removing duplicate rows when combining two tables with identical columns is the UNION operator. This operator ensures that duplicates are removed, leaving only distinct rows in the result set.




Question No 8:

You have a production application, and when deploying a new version, some issues only become apparent after the application starts receiving traffic from users. To minimize the impact and reduce the number of users affected, which deployment strategy should you use?

A. Blue/green deployment
B. Canary deployment
C. Rolling deployment
D. Recreate deployment

Correct Answer:

B. Canary deployment

Explanation:

When deploying a new version of an application, it is crucial to minimize the potential impact of any issues that may arise. A few deployment strategies can help achieve this goal, but the most suitable choice for reducing the number of users affected is Canary deployment.

Option A: Blue/Green deployment

  • Not the best fit. In a blue/green deployment, two identical environments (blue and green) are maintained. The application runs in the blue environment, and once the green environment is fully ready, traffic is switched over to the green environment. While this approach ensures minimal downtime and can quickly roll back if issues occur, it doesn't allow for a gradual introduction of traffic. Blue/green deployment can still cause significant issues if the new version is faulty, as the entire environment switch happens at once.

Option B: Canary deployment

  • Correct answer. In a canary deployment, the new version of the application is deployed to a small subset of users (the "canaries") before the full rollout. This strategy allows you to monitor performance and behavior with a smaller group, identifying potential issues early on. If the canary users experience problems, the deployment can be halted or rolled back with minimal impact. This approach significantly reduces the number of users affected by issues, making it ideal for situations where issues only arise once real traffic hits production.
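
On App Engine, for example, a canary can be expressed with the same traffic-splitting mechanism discussed in Question No 4 (version names are hypothetical):

  # Route 5% of traffic to the new version as a canary
  gcloud app services set-traffic default \
      --splits=stable=0.95,canary=0.05 \
      --split-by=cookie

If the canary version behaves well, the split can be shifted progressively toward 100%.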

Option C: Rolling deployment

  • Not the best fit. A rolling deployment gradually replaces instances with the new version in batches. While this does stage the change, it does not target a small, controlled subset of users the way a canary deployment does, so more users may be exposed before a problem is detected. It still mitigates impact better than replacing everything at once.

Option D: Recreate deployment

  • Not the best choice. A recreate deployment involves shutting down the existing application version and then deploying the new version in its place. This is a "high-risk" approach since it can cause downtime and user impact if issues arise, and it doesn't allow for any user-specific gradual rollout.

The canary deployment strategy is the most appropriate in this case. It allows you to deploy the new version to a small group of users first, observe its behavior, and minimize the potential impact and number of affected users in production. This method helps identify problems early, reducing the risk of widespread issues and ensuring a smoother transition to the new version.




Question No 9:

Your company is planning to expand its user base outside the United States for a popular application. The company needs to ensure 99.999% database availability for the application and minimize read latency for global users. Which two actions should the company take to meet these requirements? (Choose two.)

A. Create a multi-regional Cloud Spanner instance with the "nam-asia-eur1" configuration.
B. Create a multi-regional Cloud Spanner instance with the "nam3" configuration.
C. Create a cluster with at least 3 Spanner nodes.
D. Create a cluster with at least 1 Spanner node.
E. Create a minimum of two Cloud Spanner instances in separate regions with at least one node.
F. Create a Cloud Dataflow pipeline to replicate data across different databases.

Correct Answer:

A. Create a multi-regional Cloud Spanner instance with the "nam-asia-eur1" configuration.

E. Create a minimum of two Cloud Spanner instances in separate regions with at least one node.

Explanation:

In order to meet the requirements of 99.999% availability and minimize read latency for global users, the company needs to configure their Cloud Spanner instance in a way that maximizes performance and redundancy across different regions. Below is a detailed explanation of the correct and incorrect options:

Option A: Create a multi-regional Cloud Spanner instance with the "nam-asia-eur1" configuration.

  • Correct Answer. A multi-regional Cloud Spanner instance with a configuration like "nam-asia-eur1" provides redundancy and high availability across three regions: North America, Asia, and Europe. This setup ensures that if one region goes down, the other regions can handle traffic, ensuring the desired 99.999% availability. Additionally, distributing the data across regions helps minimize read latency for users worldwide, since the closest regional replica can serve read requests, improving performance.
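
A hedged sketch of creating such an instance follows. The question names the configuration "nam-asia-eur1"; the closest published configuration is "nam-eur-asia1", so verify the exact name available to your project with gcloud spanner instance-configs list:

  # Create a multi-regional Spanner instance with three nodes
  gcloud spanner instances create global-app-db \
      --config=nam-eur-asia1 \
      --description="Global application database" \
      --nodes=3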

Option B: Create a multi-regional Cloud Spanner instance with the "nam3" configuration.

  • Incorrect. The "nam3" configuration only spans three regions in North America, which does not help with minimizing latency for users outside the United States. While it provides high availability, it does not address the need for global distribution of data, which is crucial for reducing read latency for users outside of North America.

Option C: Create a cluster with at least 3 Spanner nodes.

  • Incorrect. While having more nodes can improve performance and availability within a single region, this option does not address the need for geographical redundancy and latency minimization for a global user base. The number of nodes alone does not guarantee high availability across multiple regions or optimize global read latency.

Option D: Create a cluster with at least 1 Spanner node.

  • Incorrect. A single node cluster will not meet the required availability or performance needs. Spanner's architecture relies on multiple nodes for fault tolerance and high availability. This option would not be sufficient for the desired 99.999% availability or optimal global performance.

Option E: Create a minimum of two Cloud Spanner instances in separate regions with at least one node.

  • Correct Answer. This option involves deploying Cloud Spanner instances across multiple regions. By creating instances in separate regions (for example, one in North America and one in Europe or Asia), the company can ensure that their database remains highly available, even if one region experiences an issue. Having instances in different regions also reduces read latency by directing traffic to the nearest region.

Option F: Create a Cloud Dataflow pipeline to replicate data across different databases.

  • Incorrect. While Cloud Dataflow can be used for data processing, it's not necessary or optimal for ensuring the global availability and low-latency access that is required here. Cloud Spanner’s built-in multi-region capabilities handle replication automatically, and Dataflow is not designed for this use case.

To meet the 99.999% availability and minimize read latency for users across the globe, the company should create a multi-regional Cloud Spanner instance with the right configuration (Option A) and deploy instances across multiple regions with proper redundancy (Option E). These configurations ensure both high availability and low-latency performance for users worldwide.

