Professional Cloud Security Engineer Google Practice Test Questions and Exam Dumps
Question No 1:
Your team needs to make sure that a Compute Engine instance does not have access to the internet or to any Google APIs or services. Which two settings must remain disabled to meet these requirements? (Choose two.)
A. Public IP
B. IP Forwarding
C. Private Google Access
D. Static routes
E. IAM Network User Role
Correct Answers: A, C
Explanation:
To meet the requirement that the Compute Engine instance should not have access to the internet or to any Google APIs or services, the following settings need to be disabled:
Public IP (A): The public IP address allows a Compute Engine instance to communicate directly with the internet. If the instance has a public IP, it can connect to external resources via the internet. Disabling the public IP ensures that the instance cannot directly communicate with external systems over the internet.
Private Google Access (C): This setting allows instances without a public IP address to access Google APIs and services (such as Google Cloud Storage or BigQuery) through Google's internal network. To ensure that the instance does not have access to Google services, Private Google Access must be disabled.
IP Forwarding (B): IP forwarding lets a virtual machine send and receive packets whose source or destination IP address does not match its own, which is needed when the VM acts as a router or NAT gateway. Enabling or disabling it does not by itself determine whether the instance can reach the internet or Google APIs, so it is not one of the required settings to disable.
Static Routes (D): Static routes are used to manually define network paths for routing traffic. While static routes can control traffic flow, they are not directly related to internet or Google service access. Disabling static routes won't prevent the instance from accessing external resources unless specific routes to the internet are in place.
IAM Network User Role (E): This role controls access to network resources but does not influence whether a Compute Engine instance can access the internet or Google services. Therefore, this role doesn't need to be disabled to meet the given requirements.
Therefore, the correct settings to disable are A (Public IP) and C (Private Google Access) to prevent the instance from accessing the internet and Google APIs or services.
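As an illustration (not part of the exam answer), the sketch below uses the Compute API via google-api-python-client to check both settings on an existing instance and its subnet; the project, zone, and resource names are hypothetical placeholders.

```python
from googleapiclient import discovery

# Hypothetical identifiers -- replace with real values.
PROJECT, ZONE, REGION = "my-project", "us-central1-a", "us-central1"
INSTANCE, SUBNET = "locked-down-vm", "restricted-subnet"

compute = discovery.build("compute", "v1")

# An instance can reach the internet directly only if a NIC carries an
# accessConfig (i.e., an external IP address).
instance = compute.instances().get(
    project=PROJECT, zone=ZONE, instance=INSTANCE).execute()
has_public_ip = any(nic.get("accessConfigs") for nic in instance["networkInterfaces"])

# Private Google Access is a subnet-level setting.
subnet = compute.subnetworks().get(
    project=PROJECT, region=REGION, subnetwork=SUBNET).execute()
pga_enabled = subnet.get("privateIpGoogleAccess", False)

print(f"Public IP attached: {has_public_ip}")
print(f"Private Google Access enabled: {pga_enabled}")
# Both should be False for an instance that must reach neither the internet
# nor Google APIs and services.
```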
Question No 2:
Which two implied firewall rules are defined on a VPC network? (Choose two.)
A. A rule that allows all outbound connections
B. A rule that denies all inbound connections
C. A rule that blocks all inbound port 25 connections
D. A rule that blocks all outbound connections
E. A rule that allows all inbound port 80 connections
Correct answers: A, B
Explanation:
In a Google Cloud VPC network, every network has two implied firewall rules: one that allows all egress traffic and one that denies all ingress traffic. Both have the lowest possible priority (65535), cannot be modified or deleted, and can only be overridden by higher-priority custom rules. Here's a breakdown of the options relevant to this question:
A. A rule that allows all outbound connections: This is one of the two implied rules on every VPC network. Unless a higher-priority egress rule overrides it, instances in the VPC can initiate outbound connections, such as accessing external websites or services, without being blocked. This is why A is correct.
B. A rule that denies all inbound connections: By default, inbound traffic is denied unless explicitly allowed. This implies a default deny rule for incoming connections, meaning that unless a firewall rule is specifically created to allow inbound traffic (such as SSH, HTTP, or custom ports), all inbound traffic is blocked. This default behavior ensures a secure network posture, making B the correct answer.
C. A rule that blocks all inbound port 25 connections: While blocking port 25 (typically used for SMTP) is a good practice for preventing spam or unwanted email traffic, this is not an implied rule in a VPC network. It could be a custom rule, but it's not part of the default set of firewall rules, making C incorrect.
D. A rule that blocks all outbound connections: This is not a default rule. As mentioned earlier, the default behavior is to allow all outbound connections unless explicitly denied. This means that D is incorrect.
E. A rule that allows all inbound port 80 connections: While it's common to allow HTTP traffic (port 80), this is not part of the default set of implied rules. The default behavior blocks all inbound traffic unless specified. Allowing inbound port 80 connections would require the creation of a custom rule, making E incorrect.
Thus, the correct answers are A and B, reflecting the typical implied rules for VPC networks.
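The implied rules cannot be edited directly, but they can be overridden. As a hedged illustration, the following Python sketch (google-api-python-client; the project and network names are hypothetical) creates a higher-priority egress deny rule that effectively replaces the implied allow-egress behavior:

```python
from googleapiclient import discovery

# Hypothetical project and network names.
PROJECT, NETWORK = "my-project", "restricted-vpc"

compute = discovery.build("compute", "v1")

# Override the implied allow-egress rule (priority 65535) with an explicit deny.
rule = {
    "name": "deny-all-egress",
    "network": f"projects/{PROJECT}/global/networks/{NETWORK}",
    "direction": "EGRESS",
    "priority": 1000,  # lower number = higher priority than the implied rule
    "denied": [{"IPProtocol": "all"}],
    "destinationRanges": ["0.0.0.0/0"],
}
compute.firewalls().insert(project=PROJECT, body=rule).execute()
```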
Question No 3:
How should the customer store their plain text secrets securely in Google Cloud Platform?
A. Use Cloud Source Repositories, and store secrets in Cloud SQL.
B. Encrypt the secrets with a Customer-Managed Encryption Key (CMEK), and store them in Cloud Storage.
C. Run the Cloud Data Loss Prevention API to scan the secrets, and store them in Cloud SQL.
D. Deploy the SCM to a Compute Engine VM with local SSDs, and enable preemptible VMs.
Correct Answer: B
Explanation:
The customer needs to find a secure method for storing secrets without keeping them in their source-code management system. Option B is the most appropriate solution as it suggests encrypting the secrets using a Customer-Managed Encryption Key (CMEK) before storing them in Cloud Storage. This ensures that the secrets are not stored in plain text and are secured by encryption that the customer controls.
Let's break down the options:
A. Use Cloud Source Repositories, and store secrets in Cloud SQL: This option still involves storing secrets in a managed repository (Cloud SQL), which is not a secure practice because it doesn't encrypt the secrets or add a layer of protection. Cloud SQL is not designed for secret storage, and the use of Cloud Source Repositories may expose the secrets during version control.
B. Encrypt the secrets with a Customer-Managed Encryption Key (CMEK), and store them in Cloud Storage: This option uses CMEK to encrypt the secrets before they are stored in Cloud Storage. CMEK gives the customer full control over the encryption keys, ensuring that even if unauthorized users gain access to the secrets, they would not be readable without the proper key. This is a secure and scalable approach for managing secrets.
C. Run the Cloud Data Loss Prevention API to scan the secrets, and store them in Cloud SQL: The Cloud Data Loss Prevention (DLP) API is useful for scanning and identifying sensitive information in various data sources, but it doesn't provide a solution for securely storing secrets. Storing secrets in Cloud SQL without encryption is not a recommended practice for secret management.
D. Deploy the SCM to a Compute Engine VM with local SSDs, and enable preemptible VMs: This option focuses on the deployment of an SCM system to a VM with local SSDs, but it does not address the issue of securing secrets. Additionally, preemptible VMs are not ideal for storing sensitive information because they can be terminated at any time, leading to potential data loss or exposure.
In conclusion, B is the best option because it ensures the secrets are encrypted before storage and leverages the flexibility of CMEK for enhanced security control.
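For illustration, here is a minimal Python sketch of this pattern using the google-cloud-storage client. The bucket, object, and Cloud KMS key names are hypothetical, and the bucket and key are assumed to already exist with the service agent granted permission to use the key.

```python
from google.cloud import storage

# Hypothetical bucket, object, and CMEK key names.
BUCKET = "my-secrets-bucket"
KMS_KEY = "projects/my-project/locations/us/keyRings/app-keys/cryptoKeys/secrets-key"

client = storage.Client()
bucket = client.bucket(BUCKET)

# Objects written with kms_key_name are encrypted with the customer-managed key
# instead of the default Google-managed key.
blob = bucket.blob("prod/db-password", kms_key_name=KMS_KEY)
blob.upload_from_string("s3cr3t-value")

# Reading the object back decrypts transparently, provided the caller has
# access to both the object and the KMS key.
print(bucket.blob("prod/db-password").download_as_text())
```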
Question No 4:
What should your team do to centrally manage GCP IAM permissions from their on-premises Active Directory Service and manage permissions by AD group membership?
A. Set up Cloud Directory Sync to sync groups, and set IAM permissions on the groups.
B. Set up SAML 2.0 Single Sign-On (SSO), and assign IAM permissions to the groups.
C. Use the Cloud Identity and Access Management API to create groups and IAM permissions from Active Directory.
D. Use the Admin SDK to create groups and assign IAM permissions from Active Directory.
Correct Answer: A
Explanation:
To meet the requirement of centrally managing Google Cloud Platform (GCP) Identity and Access Management (IAM) permissions based on Active Directory (AD) group membership, let's examine each option:
A. Set up Cloud Directory Sync to sync groups, and set IAM permissions on the groups:
This option is the most suitable. Google Cloud Directory Sync (GCDS) synchronizes on-premises AD groups with Google Cloud, enabling central management of IAM roles based on those synchronized groups. Once the groups are synced, IAM permissions can be assigned to them in Google Cloud, so the team manages access purely through AD group membership, which directly addresses the requirement to centrally manage permissions.
B. Set up SAML 2.0 Single Sign-On (SSO), and assign IAM permissions to the groups:
SAML 2.0 SSO is primarily used for authentication and single sign-on, allowing users to access applications with a single set of credentials. While SSO can help with authentication, it does not directly manage group membership for IAM permissions in GCP. The SSO configuration would handle user access to GCP but would not fulfill the specific requirement to manage IAM permissions by AD group membership. Therefore, this is not the ideal solution.
C. Use the Cloud Identity and Access Management API to create groups and IAM permissions from Active Directory:
The Cloud IAM API is designed for managing permissions and roles in GCP, but it does not directly integrate with on-premises Active Directory. While the API could help manage IAM permissions, it does not automatically sync AD groups to GCP for the required setup. This would require a more manual and complex solution compared to Cloud Directory Sync.
D. Use the Admin SDK to create groups and assign IAM permissions from Active Directory:
The Admin SDK allows for managing Google Cloud resources, such as user accounts and groups, but it is not specifically designed to sync with on-premises AD. While the Admin SDK can be used to manage groups within GCP, it would not handle syncing AD groups directly into GCP. Additionally, it would require more manual configuration than using Cloud Directory Sync.
Thus, the correct answer is A. Cloud Directory Sync is the most effective and streamlined solution for syncing AD groups with GCP, allowing for centralized IAM permissions management based on AD group membership with minimal administrative overhead.
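Once GCDS has synced a group, binding a role to it is an ordinary IAM policy update. The sketch below shows the idea using the Cloud Resource Manager API via google-api-python-client; the project ID, group address, and role are hypothetical placeholders.

```python
from googleapiclient import discovery

# Hypothetical project and synced group address.
PROJECT = "my-project"
GROUP = "group:gcp-data-engineers@example.com"  # group synced from AD by GCDS

crm = discovery.build("cloudresourcemanager", "v1")

# Read-modify-write the project IAM policy: membership is managed in AD,
# while the role is bound to the group in GCP.
policy = crm.projects().getIamPolicy(resource=PROJECT, body={}).execute()
policy.setdefault("bindings", []).append(
    {"role": "roles/bigquery.dataViewer", "members": [GROUP]}
)
crm.projects().setIamPolicy(
    resource=PROJECT, body={"policy": policy}).execute()
```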
Question No 5:
When creating a secure container image, which two items should you incorporate into the build if possible? (Choose two.)
A) Ensure that the app does not run as PID 1.
B) Package a single app as a container.
C) Remove any unnecessary tools not needed by the app.
D) Use public container images as a base image for the app.
E) Use many container image layers to hide sensitive information.
Correct Answer: B, C
Explanation:
When building a secure container image, there are specific best practices to follow in order to ensure both functionality and security. These best practices aim to minimize the attack surface and ensure that the container operates efficiently.
B) Package a single app as a container: A secure container image should ideally focus on packaging a single app. This principle follows the "one process per container" guideline, which reduces complexity and ensures that each container is dedicated to running only the required service. By limiting the scope of the container, you reduce potential vulnerabilities and make the container easier to manage and secure.
C) Remove any unnecessary tools not needed by the app: A secure container should only include the tools and dependencies required for the application to run. By removing unnecessary tools, you reduce the potential attack surface, as unused software packages can present vulnerabilities. This also helps to keep the container image smaller, which in turn improves performance and reduces the risk of security flaws due to unneeded or outdated components.
Now, let’s consider the other options:
A) Ensure that the app does not run as PID 1: The PID 1 concern is real but operational rather than image-security related. PID 1 in a container does not receive default signal handlers and is responsible for reaping child processes, so ignoring it mainly causes problems with graceful shutdown and zombie processes. Compared with packaging a single app and minimizing dependencies, it is not one of the key practices for building a secure image.
D) Use public container images as a base image for the app: Using public images as a base image can sometimes introduce security risks, as these images may not be as well maintained or may contain vulnerabilities. It's better to use official or trusted base images, or even better, build your own minimal base image to have full control over its contents.
E) Use many container image layers to hide sensitive information: Layers cannot hide anything. Every layer remains part of the image, so a file added in one layer is still retrievable even if a later layer deletes it. The recommended approach is the opposite: use multi-stage builds to keep the final image small, and keep secrets out of the image entirely (for example, by injecting them at runtime through environment variables or a secret management tool).
In conclusion, B and C are the most important practices for creating a secure container image, focusing on simplicity, minimalism, and security.
Question No 6:
A customer needs to launch a 3-tier internal web application on Google Cloud Platform (GCP). The customer's internal compliance requirements dictate that end-user access may only be allowed if the traffic seems to originate from a specific known good CIDR. The customer accepts the risk that their application will only have SYN flood DDoS protection.
They want to use GCP's native SYN flood protection. Which product should be used to meet these requirements?
A. Cloud Armor
B. VPC Firewall Rules
C. Cloud Identity and Access Management
D. Cloud CDN
Correct Answer: B
Explanation:
To meet the requirements of allowing access only from specific known good CIDR blocks, the appropriate product to use would be VPC Firewall Rules. VPC Firewall Rules allow you to control the traffic that reaches your instances based on specified criteria such as IP address ranges, which directly aligns with the need to restrict access based on the CIDR block.
Here's how the options break down:
A. Cloud Armor: Cloud Armor provides DDoS protection and security for applications running on Google Cloud, specifically for external-facing services. While it offers some level of protection against SYN floods, it is primarily focused on protecting external services. It is more suited for public-facing applications rather than internal ones. Therefore, it doesn't meet the specific CIDR-based access restriction requirement for an internal application.
B. VPC Firewall Rules: VPC Firewall Rules allow you to define rules that restrict traffic based on source IP ranges, which is exactly what the customer needs. These rules work within the Google Cloud Virtual Private Cloud (VPC) to control the flow of traffic to and from your resources, making them ideal for this scenario. The ability to limit access to known good CIDR blocks is exactly what VPC Firewall Rules are designed to do.
C. Cloud Identity and Access Management: Cloud Identity and Access Management (IAM) is used for managing access to Google Cloud resources based on roles and permissions, but it is not designed to control traffic based on IP address ranges or CIDR blocks. It focuses more on authentication and authorization for Google Cloud services, not on network-level access control.
D. Cloud CDN: Cloud CDN is a content delivery network designed to cache and distribute content across global locations to improve performance for end users. While it helps with the distribution of content, it does not provide network-level access controls based on CIDR blocks or SYN flood protection. Therefore, it does not meet the specific needs of restricting access to a 3-tier internal application.
In conclusion, VPC Firewall Rules are the best choice for restricting access based on a specific known good CIDR.
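As a rough illustration of the approach (the project, network, tag, and CIDR values are hypothetical), a VPC firewall rule allowing only the known-good range could be created with the Compute API; all other ingress is already dropped by the implied deny-ingress rule.

```python
from googleapiclient import discovery

# Hypothetical project, network, and corporate CIDR.
PROJECT, NETWORK = "my-project", "internal-app-vpc"
KNOWN_GOOD_CIDR = "10.20.0.0/16"

compute = discovery.build("compute", "v1")

# Only traffic originating from the known-good CIDR reaches the web tier.
rule = {
    "name": "allow-web-from-corp",
    "network": f"projects/{PROJECT}/global/networks/{NETWORK}",
    "direction": "INGRESS",
    "priority": 1000,
    "sourceRanges": [KNOWN_GOOD_CIDR],
    "allowed": [{"IPProtocol": "tcp", "ports": ["80", "443"]}],
    "targetTags": ["web-tier"],  # applied only to the front-end instances
}
compute.firewalls().insert(project=PROJECT, body=rule).execute()
```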
Question No 7:
A company is running workloads in a dedicated server room. They must only be accessed from within the private company network. You need to connect to these workloads from Compute Engine instances within a Google Cloud Platform project.
Which two approaches can you take to meet the requirements? (Choose two.)
A. Configure the project with Cloud VPN.
B. Configure the project with Shared VPC.
C. Configure the project with Cloud Interconnect.
D. Configure the project with VPC peering.
E. Configure all Compute Engine instances with Private Access.
Correct Answer: A, C
Explanation:
To connect to workloads in a dedicated server room from Compute Engine instances in a Google Cloud Platform (GCP) project, the two approaches that best meet the requirements are Cloud VPN and Cloud Interconnect.
Cloud VPN (A) allows you to securely connect your on-premises network (in the dedicated server room) with your Google Cloud Virtual Private Cloud (VPC) network over the public internet using an encrypted VPN tunnel. This ensures that only devices within the private company network can access the resources. This method creates a secure, private connection between on-premises systems and cloud-based instances, which is a common solution for private access to workloads hosted within GCP.
Cloud Interconnect (C) offers a dedicated, high-performance connection between your on-premises infrastructure and Google Cloud, typically through a direct physical connection or partner locations. Unlike Cloud VPN, Cloud Interconnect does not use the public internet, offering more reliability and bandwidth for enterprise applications. This method also ensures that the workloads are only accessible from within the private company network. For organizations with high availability or performance requirements, Cloud Interconnect is an excellent choice.
On the other hand:
Shared VPC (B) allows multiple projects within the same organization to share a common VPC network, but it does not directly address the need for private connectivity between Google Cloud and on-premises workloads. It’s useful for managing network resources across different projects but not for ensuring private access from on-premises to cloud resources.
VPC peering (D) enables connectivity between two VPC networks in Google Cloud. While VPC peering works for internal Google Cloud networking, it does not address the specific need to connect on-premises infrastructure with Google Cloud. It is used to connect different VPCs within Google Cloud, not external networks.
Private Access (E) allows Compute Engine instances to access Google Cloud services over the internal network rather than the public internet. However, it doesn't address connecting to on-premises resources or ensuring that workloads in the server room are only accessed from within the private company network.
Thus, Cloud VPN and Cloud Interconnect are the most appropriate solutions for securely connecting the dedicated server room to the GCP instances while ensuring that the workloads are only accessible from within the private network.
Question No 8:
A customer implements Cloud Identity-Aware Proxy for their ERP system hosted on Compute Engine. Their security team wants to add a security layer so that the ERP systems only accept traffic from Cloud Identity-Aware Proxy.
What should the customer do to meet these requirements?
A. Make sure that the ERP system can validate the JWT assertion in the HTTP requests.
B. Make sure that the ERP system can validate the identity headers in the HTTP requests.
C. Make sure that the ERP system can validate the x-forwarded-for headers in the HTTP requests.
D. Make sure that the ERP system can validate the user’s unique identifier headers in the HTTP requests.
Correct Answer: A
Explanation:
To ensure that the ERP system only accepts traffic from Cloud Identity-Aware Proxy (IAP) and is protected from unauthorized access, the ERP system must be able to verify that incoming requests are indeed coming from Cloud IAP and not from other sources. This can be achieved by validating the JWT (JSON Web Token) assertion in the HTTP requests.
Here’s why A is the correct answer and the others are not:
JWT assertion validation (A): Cloud IAP adds a JWT token in the HTTP request headers when a user successfully authenticates. The ERP system can then validate this token to ensure that the request is coming from IAP and is authorized. This validation ensures that only legitimate traffic, authenticated and authorized through IAP, is accepted. By checking the integrity and validity of the JWT assertion, the ERP system can confirm that the traffic is coming from a trusted source (IAP).
Identity headers validation (B): While identity-related information may be passed as headers (e.g., user identity), simply validating these headers alone would not provide assurance that the request is coming from IAP. IAP uses JWT tokens, not just identity headers, for authentication and authorization. Thus, this approach doesn't provide full security.
x-forwarded-for headers validation (C): The x-forwarded-for header typically contains the IP addresses of the client that sent the request and the proxies it passed through. This header is not specifically related to Cloud IAP. It can be manipulated by clients or intermediate proxies, so relying solely on this header to authenticate requests is not secure.
User’s unique identifier headers validation (D): The user’s unique identifier might be included in the headers, but this alone does not ensure that the request is authenticated and authorized by IAP. Validating a user ID in the headers does not verify the security context provided by Cloud IAP.
In conclusion, to ensure that the ERP system only accepts traffic from Cloud IAP, it is essential to validate the JWT assertion in the HTTP requests, as this guarantees that the request has been authenticated and authorized through Cloud IAP. Therefore, the correct answer is A.
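For reference, Google documents a verification flow along these lines using the google-auth library; IAP passes the signed assertion in the x-goog-iap-jwt-assertion header. The audience value below is a hypothetical placeholder and would be the ERP backend service's own identifier.

```python
from google.auth.transport import requests
from google.oauth2 import id_token

# Audience format for a backend service behind IAP (values are hypothetical):
# /projects/PROJECT_NUMBER/global/backendServices/BACKEND_SERVICE_ID
EXPECTED_AUDIENCE = "/projects/123456789/global/backendServices/987654321"

def validate_iap_jwt(iap_jwt: str) -> dict:
    """Verify the signed assertion that IAP adds to every proxied request."""
    return id_token.verify_token(
        iap_jwt,
        requests.Request(),
        audience=EXPECTED_AUDIENCE,
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )

# In the ERP request handler (framework-agnostic sketch):
# assertion = request.headers.get("x-goog-iap-jwt-assertion")
# claims = validate_iap_jwt(assertion)   # raises if the token is missing or invalid
# user_email = claims["email"]
```

If the assertion is absent or fails verification, the request did not come through IAP and should be rejected.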
Question No 9:
What should you do to get notified if a malicious user attempts to execute the script again and cause your Compute Engine instance to crash?
A. Create an Alerting Policy in Stackdriver using a Process Health condition, checking that the number of executions of the script remains below the desired threshold. Enable notifications.
B. Create an Alerting Policy in Stackdriver using the CPU usage metric. Set the threshold to 80% to be notified when the CPU usage goes above this 80%.
C. Log every execution of the script to Stackdriver Logging. Create a User-defined metric in Stackdriver Logging on the logs, and create a Stackdriver Dashboard displaying the metric.
D. Log every execution of the script to Stackdriver Logging. Configure BigQuery as a log sink, and create a BigQuery scheduled query to count the number of executions in a specific timeframe.
Correct answer: C
Explanation:
To ensure that you get notified in case the malicious script is executed again, the best approach is to create a mechanism to track every execution of the script and to alert you based on those events. Here's an analysis of each option:
A. Create an Alerting Policy in Stackdriver using a Process Health condition, checking that the number of executions of the script remains below the desired threshold. Enable notifications.
This option sounds plausible, but a Process Health condition counts running processes on a VM that match a name pattern; it tracks whether a process is currently running, not how many times a script has been executed. It is therefore not suited to alerting on individual execution events, so this option is not the best choice.
B. Create an Alerting Policy in Stackdriver using the CPU usage metric. Set the threshold to 80% to be notified when the CPU usage goes above this 80%.
Although CPU usage spikes could be related to malicious activity (like a script running repeatedly), CPU usage is not a precise or reliable indicator for detecting specific malicious actions such as script execution. This approach may result in false positives, as legitimate workloads could also cause CPU spikes. Therefore, this is not the most targeted solution for monitoring the specific issue.
C. Log every execution of the script to Stackdriver Logging. Create a User-defined metric in Stackdriver Logging on the logs, and create a Stackdriver Dashboard displaying the metric.
This is the most effective solution. By logging each execution of the script to Stackdriver Logging, you can precisely track the occurrences of the script. You can then create a User-defined metric based on the logs, which allows you to set specific thresholds and create alerts. The Stackdriver Dashboard will give you real-time visibility into the metric, and you can configure notifications to be alerted whenever the script is executed beyond a certain threshold. This option is tailored for monitoring specific actions within your application, making it the most accurate method for this use case.
D. Log every execution of the script to Stackdriver Logging. Configure BigQuery as a log sink, and create a BigQuery scheduled query to count the number of executions in a specific timeframe.
While this option provides detailed tracking of script executions by logging to Stackdriver Logging and using BigQuery for querying, it introduces more complexity than needed for this scenario. BigQuery is useful for deeper analysis or historical querying, but it requires more setup and may not provide real-time alerts as efficiently as Stackdriver's built-in alerting system. This makes D a more complicated approach without the immediate alerting capabilities of Stackdriver.
Therefore, the correct answer is C, as it offers a simple and effective solution to track and get notified about the execution of the script using Stackdriver Logging and user-defined metrics.
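A minimal sketch of the logging half of this approach uses the google-cloud-logging client (the log name and fields are hypothetical; the user-defined logs-based metric and any notifications would then be configured on top of these entries):

```python
import google.cloud.logging

# Hypothetical log name; the logs-based metric and dashboard are configured
# separately against entries written to this log.
client = google.cloud.logging.Client()
logger = client.logger("suspicious-script-executions")

def record_script_execution(script_path: str, user: str) -> None:
    """Write one structured entry per execution; the user-defined metric counts them."""
    logger.log_struct(
        {"event": "script_execution", "script": script_path, "user": user},
        severity="WARNING",
    )

record_script_execution("/tmp/crash_me.sh", "unknown")
```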
Question No 10:
Which logging export strategy should be used to obtain a unified log view of all development cloud projects in the SIEM?
A. 1. Export logs to a Cloud Pub/Sub topic with folders/NONPROD parent and includeChildren property set to True in a dedicated SIEM project. 2. Subscribe SIEM to the topic.
B. 1. Create a Cloud Storage sink with billingAccounts/ABC-BILLING parent and includeChildren property set to False in a dedicated SIEM project. 2. Process Cloud Storage objects in SIEM.
C. 1. Export logs in each dev project to a Cloud Pub/Sub topic in a dedicated SIEM project. 2. Subscribe SIEM to the topic.
D. 1. Create a Cloud Storage sink with a publicly shared Cloud Storage bucket in each project. 2. Process Cloud Storage objects in SIEM.
Correct Answer: A
Explanation:
The objective is to obtain a unified log view for all development cloud projects in the SIEM, and the development projects reside under the NONPROD organization folder with the ABC-BILLING billing account. The most effective solution involves using Cloud Pub/Sub to export logs in a structured manner that allows unified access and integration with the SIEM system.
Let's break down the options:
A. Export logs to a Cloud Pub/Sub topic with folders/NONPROD parent and includeChildren property set to True in a dedicated SIEM project. Subscribe SIEM to the topic: This is the correct approach. By setting the includeChildren property to True, you ensure that logs from all projects under the NONPROD folder (including test and pre-production projects) are exported to a Cloud Pub/Sub topic. The SIEM can then subscribe to this topic to obtain a unified log view for all development projects. This method ensures the SIEM receives logs in real-time and maintains a centralized logging structure for easy processing.
B. Create a Cloud Storage sink with billingAccounts/ABC-BILLING parent and includeChildren property set to False in a dedicated SIEM project. Process Cloud Storage objects in SIEM: This option is not ideal because setting includeChildren to False would limit the logs to the billing account level only and exclude logs from subfolders or specific projects under the NONPROD folder. Additionally, using Cloud Storage may not provide the same level of real-time log access and flexibility as Cloud Pub/Sub.
C. Export logs in each dev project to a Cloud Pub/Sub topic in a dedicated SIEM project. Subscribe SIEM to the topic: While this approach involves Cloud Pub/Sub, exporting logs from each individual development project can become cumbersome and inefficient, especially if the number of projects grows. A more centralized approach, like in Option A, is preferred for managing multiple projects under a single organizational folder.
D. Create a Cloud Storage sink with a publicly shared Cloud Storage bucket in each project. Process Cloud Storage objects in SIEM: Using publicly shared Cloud Storage buckets is not secure, and it's not a recommended practice to store sensitive logs in publicly accessible storage. Moreover, processing Cloud Storage objects in SIEM can introduce latency and challenges with managing real-time logs.
In conclusion, Option A is the best choice because it provides a centralized and efficient method for exporting logs from all development projects under the NONPROD folder to a single Cloud Pub/Sub topic, allowing SIEM to subscribe and aggregate logs in real-time.
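As a sketch of what such an aggregated sink might look like (the folder ID, SIEM project, and topic names are hypothetical), the Logging API's folders.sinks.create method accepts the includeChildren flag described above:

```python
from googleapiclient import discovery

# Hypothetical folder, SIEM project, and topic names.
NONPROD_FOLDER = "folders/123456789012"  # numeric ID of the NONPROD folder
DESTINATION = "pubsub.googleapis.com/projects/siem-project/topics/all-dev-logs"

logging_api = discovery.build("logging", "v2")

sink_body = {
    "name": "nonprod-to-siem",
    "destination": DESTINATION,
    "includeChildren": True,  # pull in every project under the folder
}

# uniqueWriterIdentity returns a service account that must be granted
# the Pub/Sub Publisher role on the destination topic.
sink = logging_api.folders().sinks().create(
    parent=NONPROD_FOLDER, body=sink_body, uniqueWriterIdentity=True
).execute()
print("Grant roles/pubsub.publisher to:", sink.get("writerIdentity"))
```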