Google Professional Cloud Security Engineer Exam Dumps and Practice Test Questions Set1 Q1-20

Visit here for our full Google Professional Cloud Security Engineer exam dumps and practice test questions.

Question 1

An enterprise has multiple projects under a single Google Cloud organization. You want to enforce that all service accounts across the organization must have the principle of least privilege and cannot be granted the “owner” role on any project. Which of the following is the most appropriate mechanism to enforce this requirement?

A) Create a custom IAM role without the “owner” permissions and assign it to service accounts.
B) Apply an Organization Policy constraint to prevent assignment of the “roles/owner” role.
C) Manually audit all projects each month and remove service accounts with “owner”.
D) Use VPC Service Controls to restrict service account permissions.

Answer: B) Apply an Organization Policy constraint to prevent assignment of the “roles/owner” role.
Explanation:

Google Cloud’s Organization Policy service lets you define constraints across the organization that act as guardrails, for example preventing certain roles from being granted. Applying a constraint that blocks granting roles/owner (a custom organization policy constraint on IAM policies) ensures that no service account can receive that role; note that built-in constraints such as constraints/iam.disableServiceAccountKeyCreation address service account key risks, not role assignment. Manual monthly auditing (C) is reactive and less reliable. A custom role (A) does not prevent the owner role from being assigned; it only provides an alternative. VPC Service Controls (D) define service perimeters to guard against data exfiltration rather than controlling role assignment.

Question 2

Your company runs a VPC in which a set of Google Kubernetes Engine (GKE) nodes host workloads that must communicate with on‑premises systems via a VPN. You need to ensure that all traffic between on‑premises and GKE clusters is encrypted and that only specified workloads in GKE can initiate the VPN connection. Which architecture pattern best meets these requirements?

A) Use Cloud VPN between on‑premises and the VPC, and rely on GKE network policies to restrict pod traffic.
B) Use Dedicated Interconnect with encryption at application layer only, and rely on Kubernetes RBAC for workload restrictions.
C) Use Cloud VPN (IPsec) with a tunnel from GCP to on‑prem, place GKE nodes in a private subnet without external IPs, and implement firewall rules restricting nodes to VPN‑gateway traffic.
D) Use VPC peering between on‑prem and GCP network and enforce encryption with TLS in each workload.

Answer: C) Use Cloud VPN (IPsec) with a tunnel from GCP to on‑prem, place GKE nodes in a private subnet without external IPs, and implement firewall rules restricting nodes to VPN‑gateway traffic.

Explanation:

Cloud VPN in Google Cloud establishes a secure IPsec-encrypted tunnel between on-premises networks and GCP VPCs, ensuring that data in transit is protected against interception or tampering. This encrypted tunnel provides a straightforward and reliable way to extend on-premises infrastructure to the cloud while maintaining confidentiality and integrity of traffic. To further reduce attack surfaces, GKE nodes should be deployed in private subnets without external IP addresses, ensuring that workloads cannot be accessed directly from the internet. This approach minimizes exposure and limits potential attack vectors.

In addition, firewall rules should be configured to tightly control which workloads are permitted to send traffic through the VPN tunnel. By restricting access to only the intended source workloads (for example, by node tags or source ranges), organizations ensure that only authorized systems initiate traffic, preventing unauthorized access and reducing risk. This provides both network-level isolation and policy-based control, which is critical for compliance-sensitive workloads.

Alternative architectures are less secure or add unnecessary complexity. Option A lacks strong enforcement over which workloads can initiate the VPN, leaving a potential gap in access control. Option B relies on Dedicated Interconnect, which may not provide encryption unless additional configuration is applied, and enforcement is delegated solely to Kubernetes, which is insufficient for network-level security guarantees. Option D uses VPC peering, which is unencrypted by default and would require each application to implement encryption at the application layer, increasing complexity and the likelihood of misconfiguration.

By combining Cloud VPN, private GKE subnets, and strict firewall policies, organizations achieve a robust, end-to-end encrypted, and tightly controlled connectivity architecture. This ensures confidentiality, integrity, and controlled access, aligning with security best practices and regulatory requirements for hybrid cloud deployments.
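
As an illustration of the firewall piece of this pattern, the sketch below (using the google-cloud-compute Python client) creates an egress rule that lets only tagged GKE nodes reach the on‑premises range over the VPN tunnel. The project, network, tag, and CIDR values are hypothetical, and a lower‑priority deny‑all egress rule would normally accompany it.

```python
from google.cloud import compute_v1

# Hypothetical names: project "example-project", VPC "prod-vpc",
# on-premises CIDR 10.20.0.0/16 reached through the Cloud VPN tunnel.
firewall = compute_v1.Firewall(
    name="allow-gke-nodes-to-onprem",
    network="projects/example-project/global/networks/prod-vpc",
    direction="EGRESS",
    priority=900,
    destination_ranges=["10.20.0.0/16"],
    target_tags=["gke-prod-node"],  # only instances carrying this tag match
    allowed=[
        compute_v1.Allowed(I_p_protocol="tcp"),
        compute_v1.Allowed(I_p_protocol="udp"),
    ],
)

client = compute_v1.FirewallsClient()
operation = client.insert(project="example-project", firewall_resource=firewall)
operation.result()  # wait for the rule to be provisioned
```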

Question 3

A data‑processing workload stores sensitive customer data in a Cloud Storage bucket. You must ensure that even if an attacker gains access to the storage bucket, the data remains protected. Which combination of controls provides the highest level of protection? (Select two.)

A) Enable uniform bucket‑level access and restrict to specific service accounts.
B) Enable object versioning on the storage bucket.
C) Use customer‑managed encryption keys (CMEK) in Cloud Key Management Service (KMS) to encrypt objects.
D) Use bucket fine‑grained ACLs so each user has distinct permissions.
E) Enable locking the bucket to prevent deletion of objects for a retention period.

Answer: A) Enable uniform bucket‑level access and restrict to specific service accounts, and C) Use customer‑managed encryption keys (CMEK) in Cloud KMS to encrypt objects.

Explanation:

Uniform bucket-level access is a best practice in Google Cloud Storage because it simplifies and strengthens IAM-based access control. By disabling legacy object-level ACLs and managing permissions solely through IAM, organizations reduce complexity, avoid accidental misconfigurations, and ensure that access policies are consistently applied across all objects within a bucket. This centralized approach not only enforces the principle of least privilege but also makes auditing and monitoring easier, as all permissions are visible and managed in one place, reducing the risk of inadvertent data exposure (A).

Customer-Managed Encryption Keys (CMEK) add an additional layer of security by ensuring that your organization retains full control over the encryption keys used to protect the data. Even if someone has IAM permissions to access the bucket, the data remains encrypted under your CMEK, preventing unauthorized plaintext access and satisfying regulatory requirements for key ownership and encryption accountability (C). This separation of access control and encryption ensures defense-in-depth, as compromising a user account alone does not grant access to unencrypted data.

While versioning (B) and retention locks (E) provide strong capabilities for data immutability and recovery, they primarily enhance availability and durability, rather than directly preventing unauthorized access. Similarly, using fine-grained ACLs (D) introduces additional complexity, increases the likelihood of misconfigurations, and makes auditing more challenging compared to uniform bucket-level access.

By combining uniform bucket-level access with CMEK, organizations achieve a robust security posture, ensuring data is protected, access is consistently managed, and encryption keys remain under organizational control, effectively mitigating risks of unauthorized plaintext exposure while simplifying compliance and governance.
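
A minimal sketch of these two controls with the google-cloud-storage client is shown below; the bucket name, project, and KMS key path are placeholders.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-sensitive-data")  # hypothetical bucket

# Enforce IAM-only access control (disables legacy object ACLs).
bucket.iam_configuration.uniform_bucket_level_access_enabled = True

# Encrypt all newly written objects with a customer-managed key in Cloud KMS.
bucket.default_kms_key_name = (
    "projects/example-project/locations/us/keyRings/data-keys/cryptoKeys/bucket-key"
)

bucket.patch()
```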

Question 4

Your security team needs to monitor all IAM role changes and high‑privilege service account key creations across the entire organization for audit and alerting. Which combination of services should you enable and configure?

A) Enable Cloud Audit Logs (Admin Activity, Data Access) at the organization level, send logs to Cloud Logging, and create alerts in Cloud Monitoring for specific log entries.
B) Enable VPC Flow Logs on all subnets, send to Cloud Logging, and filter for IAM changes.
C) Enable Security Command Center’s Event Threat Detection and rely on its rule set for IAM changes.
D) Enable Cloud Asset Inventory, export IAM policy snapshots daily, and run external pipelines to detect changes.

Answer: A) Enable Cloud Audit Logs (Admin Activity, Data Access) at the organization level, send logs to Cloud Logging, and create alerts in Cloud Monitoring for specific log entries.

Explanation:

Cloud Audit Logs capture IAM role changes and service account key creations (these are Admin Activity events). By enabling these logs across the organization and streaming them to Cloud Logging, you can filter and alert on them through Cloud Monitoring or log-based alerting. Option B (VPC Flow Logs) deals with network traffic, not IAM changes. Option C (Security Command Center) provides threat detection but may not offer the granularity or real‑time alerting needed for custom IAM change tracking. Option D (Cloud Asset Inventory) can snapshot IAM policies, but daily snapshots will not surface high‑privilege key creation in real time, so A is the best fit.
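
For example, a hedged sketch using the google-cloud-logging client to pull the relevant Admin Activity entries might look like the following; the project ID is a placeholder, and in practice the same filter would back a log-based alerting policy.

```python
from google.cloud import logging

client = logging.Client(project="example-monitoring-project")

# IAM policy changes and service account key creations appear in the
# Admin Activity audit log under these method names.
audit_filter = (
    'logName:"cloudaudit.googleapis.com%2Factivity" AND '
    '(protoPayload.methodName="SetIamPolicy" OR '
    'protoPayload.methodName="google.iam.admin.v1.CreateServiceAccountKey")'
)

for entry in client.list_entries(filter_=audit_filter, max_results=20):
    # Each entry is an audit record; the payload carries the audit proto fields.
    print(entry.timestamp, entry.payload)
```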

Question 5

An application deployed in a GKE cluster uses Customer Supplied Encryption Keys (CSEK) for its persistent disks. The security team is concerned that the encryption keys used might be accessible via the node’s metadata server if the VM is compromised. What is the most secure way to manage the encryption keys so that even if a node is compromised, the keys cannot be easily retrieved?

A) Use local‑SSD disks instead of persistent disks, so encryption is handled at hardware level.
B) Migrate to using CMEK (customer‑managed encryption key) in Cloud KMS and restrict the node’s service account so it cannot access key‑management permissions.
C) Store the CSEK keys in a secret manager (e.g., Secret Manager) and mount into the pods.
D) Rotate the CSEK keys frequently (every hour) so the compromise window is small.

Answer: B) Migrate to using CMEK (customer‑managed encryption key) in Cloud KMS and restrict the node’s service account so it cannot access key‑management permissions.

Explanation:

Using Cloud KMS with customer‑managed encryption keys (CMEK) centralizes key management. If you restrict the node’s service account so it has only required permissions, an attacker on the node cannot retrieve the key. Local‑SSD (A) doesn’t solve the root cause of key retrieval via metadata. Mounting secrets via Secret Manager (C) is a good practice, but the key for disk encryption still needs to be managed and that doesn’t fully remove key access from the node. Frequent rotation (D) reduces exposure but doesn’t eliminate key access — the root risk remains that the compromised node could access the key momentarily. Thus B is the strongest control.
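
A minimal sketch of provisioning a CMEK-protected persistent disk with the google-cloud-compute client is shown below; project, zone, and key names are placeholders, and the node's service account would separately be denied Cloud KMS permissions.

```python
from google.cloud import compute_v1

disks = compute_v1.DisksClient()

disk = compute_v1.Disk(
    name="gke-data-disk",
    size_gb=100,
    # The disk is encrypted with a key held in Cloud KMS; the key material
    # never needs to be supplied by (or exposed to) the node.
    disk_encryption_key=compute_v1.CustomerEncryptionKey(
        kms_key_name=(
            "projects/example-project/locations/us-central1/"
            "keyRings/gke-keys/cryptoKeys/disk-key"
        )
    ),
)

operation = disks.insert(
    project="example-project", zone="us-central1-a", disk_resource=disk
)
operation.result()  # wait for the disk to be created
```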

Question 6

Your organization needs to implement a secure software supply chain for container images that includes vulnerability scanning and signature enforcement, and that allows only images from trusted registries to be deployed to production GKE clusters. Which sequence of services/configurations in Google Cloud Build/GKE meets this requirement most comprehensively?

A) Use Cloud Build triggers → build image → push to Container Registry → enable Container Analysis vulnerability scanning → enable Binary Authorization in GKE clusters to enforce signed images only from your registry.
B) Use Cloud Build → push image to Container Registry → use VPC firewall rules to prevent other registries → rely on GKE node image scanning.
C) Build locally, push to third‑party registry, use GKE admission controllers to check signatures at runtime.
D) Use Cloud Build → push image → use Container Registry logs to manually review → allow deployment.

Answer: A) Use Cloud Build triggers → build image → push to Container Registry → enable Container Analysis vulnerability scanning → enable Binary Authorization in GKE clusters to enforce signed images only from your registry.

Explanation:

This sequence covers building, scanning, and enforcing deployment of only signed/trusted images through Binary Authorization (which enforces policies such as only images that meet certain criteria can be deployed). That aligns with best practices for a secure software supply chain. Option B lacks signature enforcement and relies on firewall rules which are weaker. Option C uses third‑party registry and manual runtime checks, increasing risk. Option D is manual and reactive, not automated enforcement. Hence A is the most comprehensive.

Question 7

A manufacturing company must comply with an industry standard that requires all access to certain cloud data be logged, immutable, and retained for seven years. Which Google Cloud service(s) should you configure, and how, to best meet these requirements?

A) Enable Cloud Audit Logs (Admin Activity + Data Access) at the organization level, route logs to a log‑bucket in Cloud Storage with retention of 7 years and enable object versioning and write‑once mode (via bucket retention/lock).
B) Use Cloud Logging and send logs to BigQuery, set table partition expiration of 7 years.
C) Use Cloud Monitoring alert logs and archive alerts for 7 years.
D) Enable VPC Flow Logs on all networks and store in Cloud Storage for 7 years.

Answer: A) Enable Cloud Audit Logs (Admin Activity + Data Access) at the organization level, route logs to a log‑bucket in Cloud Storage with retention of 7 years and enable object versioning and write‑once mode (via bucket retention/lock).

Explanation:

Audit logs in Google Cloud are critical for maintaining visibility into all access and administrative events, including read and write operations, IAM changes, and metadata access. By capturing these events, organizations can establish a verifiable trail of activity that is essential for compliance, forensics, and operational monitoring. Storing audit logs in Cloud Storage with both versioning and retention lock enabled ensures immutability, meaning that logs cannot be deleted or altered until the retention period expires. This setup satisfies regulatory requirements for data retention and tamper-proof auditing, making it suitable for frameworks such as HIPAA, PCI DSS, SOC 2, and ISO 27001.

While tools like BigQuery (B) are excellent for analyzing and querying audit data, they do not inherently enforce immutability or retention policies. Data stored in BigQuery can be modified or deleted unless additional controls are applied, which means relying solely on BigQuery for audit logs could create compliance gaps. Similarly, monitoring alerts (C) are useful for real-time detection of suspicious activity, but they do not store historical log data and cannot satisfy long-term retention or immutability requirements. VPC Flow Logs (D) provide insights into network traffic patterns and egress flows but do not capture detailed access events for specific data sets, limiting their use for auditing and compliance purposes.

Therefore, the most robust and compliant architecture is to use Cloud Storage (A) with audit logs, versioning, and retention lock. This approach ensures both durability and immutability of audit data, supports regulatory requirements, and provides a foundation for secure, auditable operations in Google Cloud. It balances security, compliance, and operational monitoring, creating a reliable and trustworthy audit framework.
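
A minimal sketch of the retention configuration with the google-cloud-storage client follows; the bucket name is a placeholder, and locking is irreversible until the retention period expires.

```python
from google.cloud import storage

SEVEN_YEARS_SECONDS = 7 * 365 * 24 * 60 * 60

client = storage.Client()
bucket = client.get_bucket("example-audit-log-archive")  # hypothetical log bucket

bucket.versioning_enabled = True
bucket.retention_period = SEVEN_YEARS_SECONDS
bucket.patch()

# Locking makes the retention policy immutable (write-once behavior) until
# every object has aged past the seven-year retention period.
bucket.lock_retention_policy()
```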

Question 8

Your organization has several projects and you want to implement centralized incident response for security events across all projects in your organization. Which architecture supports centralized alerting, investigation and response?

A) Deploy a separate security monitoring project where you enable Cloud Logging sinks from all other projects, route logs into that project, enable Security Command Center in that project, and set up incident response playbooks.
B) Enable Cloud Logging and Cloud Monitoring in each project individually, and each project team handles incidents.
C) Use Pub/Sub topics in each project and each team subscribes individually for alerts.
D) Enable Security Command Center only in one project and assume it picks up events from all projects automatically without routing.

Answer: A) Deploy a separate security monitoring project where you enable Cloud Logging sinks from all other projects, route logs into that project, enable Security Command Center in that project, and set up incident response playbooks.

Explanation:

A centralized monitoring project allows aggregation of logs and alerts from across the organization, enabling a unified incident response function (alerting, case management, forensics). Option B is decentralized and may lead to inconsistent responses. Option C is possible but lacks centralized orchestration and investigative tooling. Option D is incorrect because Security Command Center does not automatically aggregate events into an arbitrary single project; you need to route data via sinks or enable it at the organization level. So A provides the best centralized architecture.
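
As a hedged sketch, an organization-level aggregated sink (using the google-cloud-logging GAPIC layer) could route audit logs from every project into the central monitoring project; the organization ID, destination log bucket, and filter below are placeholders.

```python
from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
from google.cloud.logging_v2.types import LogSink

client = ConfigServiceV2Client()

sink = LogSink(
    name="org-security-audit-sink",
    destination=(
        "logging.googleapis.com/projects/sec-monitoring/locations/global/"
        "buckets/central-audit"
    ),
    filter='logName:"cloudaudit.googleapis.com"',
    include_children=True,  # aggregate logs from every child folder/project
)

client.create_sink(parent="organizations/123456789012", sink=sink)
```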

Question 9

You are designing a network architecture for a new GCP environment and must enforce the following: isolate production workloads from development, restrict internet‑outbound access from production except via a proxy, allow management traffic from your on‑premises corporate network, and minimize blast radius. Which of these design patterns is most appropriate?

A) Single VPC with two subnets (prod/dev) and firewall rules.
B) Two separate VPCs (prod/dev), use Shared VPC for prod to centralize, deploy Cloud NAT / proxy for internet outbound, use VPC peering or interconnect to on‑prem corporate network for management traffic.
C) One VPC for all workloads, but put production into the GKE namespace with network policies, and development into a different namespace.
D) Use Shared VPC for both prod/dev and rely on labels to differentiate workloads; use Cloud Armor for filtering outbound traffic.

Answer: B) Two separate VPCs (prod/dev), use Shared VPC for prod to centralize, deploy Cloud NAT / proxy for internet outbound, use VPC peering or interconnect to on‑prem corporate network for management traffic.

Explanation:

Using separate VPCs for production, development, and other environments significantly reduces the blast radius in the event of a security incident. If one environment is compromised, the isolation provided by distinct VPCs ensures that other environments remain unaffected, containing potential damage and preventing lateral movement. A Shared VPC enables central management of network policies, firewall rules, routing, and IAM across multiple projects, ensuring that production traffic is tightly controlled and consistent with organizational security standards.

Deploying a proxy or NAT gateway for outbound internet access adds another layer of protection by controlling and monitoring egress traffic. This prevents production workloads from directly accessing the public internet, reducing exposure to malware, exfiltration risks, or unvetted external services. Management traffic from on-premises environments can still securely traverse via VPC peering or Dedicated Interconnect, ensuring operational continuity without compromising isolation.

Alternative architectures, such as using namespace-level isolation (Option C) or relying solely on labels and Cloud Armor (Option D), do not provide the same network-level isolation and strong policy enforcement. Therefore, Option B represents the strongest, most secure pattern, combining isolation, central governance, and controlled connectivity for production workloads.

Question 10

Your organization uses Cloud Key Management Service (KMS) for key management. You want to implement key rotation and auditing so that keys are never reused and you have a clear historical change record. Which best practices should you follow? Choose three.

A) Use key rings and separate cryptographic keys by purpose (e.g. data‑at‑rest vs data‑in‑transit).
B) Enable automatic rotation on keys where supported and set rotation period appropriate to sensitivity.
C) Never delete old keys—disable them only, and keep them for the duration of data encrypted with them.
D) Assign broad permissions (e.g., Owner) to many operators so key operations can happen quickly.
E) Use Cloud Audit Logs for KMS (key usage, rotation), and export to a long‑term archive for compliance.

Answer: A), B), and E) are the correct best practices.

Explanation:

A) Separating keys by purpose keeps key management organized and limits the blast radius if a key is compromised. B) Automatic rotation (for example annually, or as required by regulation) ensures keys are not used for too long. E) Audit logging of key usage and rotation, exported to a long‑term archive, provides the historical record of key operations needed for compliance. C) While you should not delete keys that still protect existing data, the best practice is to retire and eventually destroy them when it is safe to do so, not merely disable them indefinitely; the key lifecycle must still be managed. D) Assigning broad permissions violates least privilege and increases risk. So C and D are not recommended as best practices.
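
A hedged sketch of practices A and B with the Cloud KMS client is shown below: a purpose-specific symmetric key created with automatic rotation. The project, location, key ring, key ID, and 90-day period are illustrative placeholders.

```python
import time

from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_ring = client.key_ring_path("example-project", "us-central1", "data-at-rest")

rotation_seconds = 60 * 60 * 24 * 90  # rotate every 90 days

key = client.create_crypto_key(
    request={
        "parent": key_ring,
        "crypto_key_id": "storage-objects",
        "crypto_key": {
            "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
            "rotation_period": {"seconds": rotation_seconds},
            "next_rotation_time": {"seconds": int(time.time()) + rotation_seconds},
        },
    }
)
print("Created key with automatic rotation:", key.name)
```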

Question 11

Your security team wants to ensure that any newly created VMs in any project automatically have OS login enabled, full disk encryption by default, and have the Cloud Operations agent installed for security monitoring. How would you automate this across the organization?

A) Use an Organization Policy to enforce OS login and disk encryption, and use a Deployment Manager/Infrastructure as Code template to standardize VM creation with the agent installed.
B) Rely on the compute administrators in each project to manually apply the settings when creating VMs.
C) Use a Compute Engine machine image that has the agent installed; don’t enforce OS login or encryption.
D) Use a policy in Cloud Identity to enforce installation of the agent at user login.

Answer: A) Use an Organization Policy to enforce OS login and disk encryption, and use a Deployment Manager/Infrastructure as Code template to standardize VM creation with the agent installed.

Explanation:

Automating security guardrails in Google Cloud using Organization Policy is a best practice for ensuring that critical security controls are consistently applied across all projects and resources. For example, enforcing OS Login ensures that all administrative access to virtual machines (VMs) is performed through identity-aware, auditable accounts rather than relying on local accounts, which can be difficult to track and secure. Similarly, enforcing full-disk encryption guarantees that data at rest on every VM is encrypted using Google-managed keys or Customer-Managed Encryption Keys (CMEK), providing strong protection against unauthorized access and data leakage. By applying these policies at the organization or folder level, administrators prevent bypass of these security controls and ensure that all projects inherit the guardrails automatically.

In addition to Organization Policy, automating monitoring agent installation via Infrastructure-as-Code (IaC) tools or Deployment Manager templates ensures that every VM is deployed with monitoring capabilities enabled. This automation reduces the risk of human error, eliminates gaps in operational visibility, and ensures that security and operational teams have consistent telemetry from all resources.

In contrast, manual processes (Option B) are error-prone, slow, and difficult to audit, making them unsuitable for large-scale environments. Option C partially addresses monitoring agent installation but fails to enforce essential security controls like OS Login and full-disk encryption, leaving significant gaps. Option D misapplies Cloud Identity policies, which govern user authentication rather than VM configuration, making it ineffective for ensuring automated agent deployment.

By combining Organization Policy enforcement with automated deployment templates, organizations achieve a comprehensive and scalable security posture, ensuring that VMs are consistently configured according to corporate security standards, fully auditable, and continuously monitored. This strategy minimizes risk, simplifies governance, and supports compliance requirements across the cloud environment.
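
For the OS Login guardrail specifically, a hedged sketch using the google-cloud-org-policy client library is shown below; it assumes the built-in boolean constraint constraints/compute.requireOsLogin and a placeholder organization ID.

```python
from google.cloud import orgpolicy_v2

client = orgpolicy_v2.OrgPolicyClient()
org = "organizations/123456789012"  # placeholder organization ID

policy = orgpolicy_v2.Policy(
    name=f"{org}/policies/compute.requireOsLogin",
    spec=orgpolicy_v2.PolicySpec(
        rules=[orgpolicy_v2.PolicySpec.PolicyRule(enforce=True)]
    ),
)

# Every project in the organization now inherits the OS Login requirement.
client.create_policy(parent=org, policy=policy)
```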

Question 12

You are required to design a compliance‑dashboard that shows the encryption status of all Google Cloud Storage buckets across the organization, including which use Google‑managed keys, which use CMEK, and which have public access. Which services and approach would you use?

A) Use Cloud Asset Inventory to periodically snapshot storage bucket metadata (including encryption and IAM settings), export to BigQuery, and build a dashboard (e.g., Looker Studio) for compliance reviewers.
B) Use Cloud Logging to capture each time a bucket is created or modified, then run queries live in Logging Viewer.
C) Use Storage Transfer Service to copy buckets and inspect encryption status locally.
D) Use a manual spreadsheet where each project owner enters encryption status monthly.

Answer: A) Use Cloud Asset Inventory to periodically snapshot storage bucket metadata (including encryption and IAM settings), export to BigQuery, and build a dashboard (e.g., Looker Studio) for compliance reviewers.

Explanation:

Cloud Asset Inventory (CAI) in Google Cloud is a powerful tool that provides a centralized view of all cloud resources and their configurations, including encryption settings, IAM policies, resource hierarchies, and metadata. By continuously tracking the state of resources, CAI enables organizations to maintain an accurate and up-to-date snapshot of their cloud environment, which is essential for compliance auditing, security assessments, and operational governance. Exporting this asset data to BigQuery allows teams to perform ad hoc queries, historical comparisons, and trend analysis, providing deep insights into configurations that may violate internal policies or regulatory requirements.

Integrating CAI with visualization tools or dashboards further enhances centralized compliance monitoring. Teams can create custom dashboards showing resources that are non-compliant with encryption policies, IAM permissions that exceed least-privilege requirements, or resources deployed in non-approved regions. This centralized approach enables security and compliance teams to quickly identify and remediate risks across the organization, providing a single source of truth for audits and regulatory reporting.

Alternative options are less effective or scalable. For example, Cloud Logging (Option B) captures events and operational activity but does not provide a comprehensive view of the current state of resources, making it insufficient for configuration-based compliance auditing. Option C is irrelevant as it does not capture or analyze metadata at all. Option D, relying on manual tracking or spreadsheets, is prone to human error, difficult to scale, and cannot provide near real-time insights, making it unsuitable for large or dynamic cloud environments.

By combining Cloud Asset Inventory with BigQuery and dashboards, organizations achieve a robust, automated, and scalable compliance architecture. This approach not only improves visibility and governance but also supports continuous compliance, enabling teams to detect misconfigurations, enforce security policies, and maintain audit-ready reporting without the operational overhead of manual processes. CAI ensures that metadata auditing is both reliable and actionable, forming the foundation of an effective cloud security and compliance strategy.
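
A minimal sketch of the snapshot step with the google-cloud-asset client follows; the organization ID and the BigQuery dataset/table names are placeholders, and a scheduled job would rerun the export periodically.

```python
from google.cloud import asset_v1

client = asset_v1.AssetServiceClient()

request = asset_v1.ExportAssetsRequest(
    parent="organizations/123456789012",
    content_type=asset_v1.ContentType.RESOURCE,
    asset_types=["storage.googleapis.com/Bucket"],  # bucket metadata incl. encryption
    output_config=asset_v1.OutputConfig(
        bigquery_destination=asset_v1.BigQueryDestination(
            dataset="projects/sec-monitoring/datasets/asset_inventory",
            table="storage_buckets",
            force=True,  # overwrite the table with each snapshot
        )
    ),
)

operation = client.export_assets(request=request)
operation.result()  # blocks until the export completes
```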

Question 13

Your organization’s DevOps team is building an automated CI/CD pipeline. They require that before any code reaches production, it is scanned for vulnerabilities, its dependencies are checked for known issues, code review is enforced, and every build artifact is signed. Which of the following practices align with the role of a cloud security engineer? (Select two.)

A) Ensure that the CI/CD pipeline uses a hardened build host with minimal privileges, logs all build actions, and rotates credentials frequently.
B) Rely solely on runtime monitoring in production to catch vulnerabilities rather than scanning in build time, to avoid slowing down delivery.
C) Use artifact signing (e.g., via KMS) and enforce signature verification at the deployment stage through Binary Authorization.
D) Let the developers manage the pipeline entirely—no need for security engineer involvement.

Answer: A) and C) are correct.

Explanation:

A) builds a strong foundation by ensuring the pipeline itself is secure (hardened, minimal privileges, well‑logged) which is a key responsibility of a cloud security engineer. C) enforces artifact integrity via signing and deploying only signed artifacts via Binary Authorization, which is directly within secure software supply chain best practices. B) is incorrect (deferring scanning to runtime increases risk). D) is incorrect (the security engineer must define controls and guardrails). So A and C align best.
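
To illustrate the signing piece of C, the hedged sketch below signs a build artifact digest with an asymmetric Cloud KMS key; this is the kind of signature an attestor records and a Binary Authorization policy later verifies at deploy time. The key path and image digest are placeholders.

```python
import hashlib

from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_version = client.crypto_key_version_path(
    "example-project", "global", "build-signing", "artifact-signer", "1"
)

image_digest = "sha256:0123456789abcdef"  # hypothetical container image digest
digest_bytes = hashlib.sha256(image_digest.encode("utf-8")).digest()

response = client.asymmetric_sign(
    request={"name": key_version, "digest": {"sha256": digest_bytes}}
)
print("Signature produced, length:", len(response.signature))
```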

Question 14

A global company uses multiple compliance standards (HIPAA, GDPR, PCI‑DSS) for its cloud workloads. You must design a solution for classifying sensitive data stored in BigQuery and Cloud Storage, and for using that classification to automatically restrict access (for example via VPC Service Controls or IAM policies) and trigger additional logging. Which features or services help you most?

A) Use Data Loss Prevention API (DLP) to scan data and tag assets, use Cloud Asset Inventory to store classification, then enforce service perimeter via VPC Service Controls and custom IAM conditions.
B) Use BigQuery SQL views only, restricting columns manually for each compliance standard.
C) Use Cloud Logging to track who accessed data and manually audit monthly.
D) Use Dataflow jobs to periodically move data to a safe zone manually and monitor.

Answer: A) Use Data Loss Prevention API to scan data and tag assets, use Cloud Asset Inventory to store classification, then enforce service perimeter via VPC Service Controls and custom IAM conditions.

Explanation:

The Data Loss Prevention (DLP) API in Google Cloud provides a robust framework for discovering, classifying, and tagging sensitive data across storage solutions such as Cloud Storage and BigQuery. By leveraging automated scans, organizations can identify sensitive information—such as personally identifiable information (PII), financial data, or health records—and assign metadata tags or classifications to the assets. This tagging enables policy-driven enforcement, allowing subsequent access controls to adapt based on the sensitivity of the data. For example, sensitive datasets can be restricted to only authorized users, applications, or service accounts, aligning with regulatory requirements and internal governance policies.

To enforce these protections dynamically, organizations can combine DLP classifications with VPC Service Controls to create service perimeters around sensitive data. This prevents exfiltration and ensures that only requests originating from authorized networks or workloads can access the data. In addition, IAM conditions can leverage the classification metadata to enforce contextual access controls, such as limiting access based on resource tags, user attributes, or device posture. This creates a dynamic, automated, and granular access control model, reducing reliance on manual approvals or static policies.

Alternative approaches are less effective. Option B is manual and limited in scope, failing to scale across large datasets. Option C is reactive, responding to incidents rather than proactively enforcing protection. Option D involves cumbersome manual processes that are error-prone and inconsistent.

By combining DLP API, asset tagging, VPC Service Controls, and conditional IAM policies, organizations implement a proactive, scalable, and automated architecture for sensitive data protection. This approach ensures discovery, classification, and dynamic enforcement while minimizing risk and maintaining compliance across Google Cloud environments.
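
A minimal sketch of a DLP inspection call is shown below; in practice you would run DLP inspection jobs directly over Cloud Storage or BigQuery, but the same infoType-based classification applies. The project ID and infoTypes are placeholders.

```python
from google.cloud import dlp_v2

dlp = dlp_v2.DlpServiceClient()
parent = "projects/example-project/locations/global"

response = dlp.inspect_content(
    request={
        "parent": parent,
        "inspect_config": {
            "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "CREDIT_CARD_NUMBER"}],
            "include_quote": True,
        },
        "item": {"value": "Contact jane.doe@example.com, card 4111-1111-1111-1111"},
    }
)

for finding in response.result.findings:
    # Findings drive the tags/classifications used by downstream IAM conditions.
    print(finding.info_type.name, finding.likelihood)
```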

Question 15

You must secure API‑based services in your environment, ensuring only authorized clients (internal microservices and selected external partners) can call your backend services, and that all traffic is logged, encrypted and subject to rate limiting and anomaly detection. Which combination of controls in Google Cloud is appropriate?

A) Use API Gateway (or Cloud Endpoints) for traffic routing, enable mutual TLS (mTLS) for authentication, use Cloud Armor for rate limiting, and send all logs to Cloud Logging.
B) Use direct HTTP endpoints on GKE with no gateway, rely on service account token authentication, and inspect logs manually.
C) Use VPC firewall rules only to restrict which IPs can connect to backend services.
D) Use Cloud NAT and no gateway, relying on NAT to restrict partners.

Answer: A) Use API Gateway (or Cloud Endpoints) for traffic routing, enable mutual TLS (mTLS) for authentication, use Cloud Armor for rate limiting, and send all logs to Cloud Logging.

Explanation:

API Gateway or Cloud Endpoints provides entry-point enforcement for APIs (routing, authentication, authorization). mTLS ensures strong mutual authentication between clients and services. Cloud Armor supports rate limiting and anomaly detection at the edge (through security policies and Adaptive Protection). Sending all traffic logs to Cloud Logging enables auditing and forensics. Option B lacks a gateway, rate limiting, and centralized controls. Option C relies only on firewalls at the network layer, missing application‑level controls (authorization, logging). Option D misuses NAT; Cloud NAT is for outbound connectivity, not inbound control or partner authentication. Thus A is the correct set of controls.

Question 16

Your organization wants to adopt a Zero Trust networking model in Google Cloud. Which of the following design principles should you apply? (Select two.)

A) Trust everything inside the VPC once inside the perimeter.
B) Assume no implicit trust; always authenticate and authorize every request, even within the internal network.
C) Use a flat network and rely on perimeter firewalls only.
D) Segment workloads, use identity-aware proxies, enforce least‑privilege access, and log all communication.

Answer: B) and D) are correct.

Explanation:

B) aligns with Zero Trust principles: there is no implicit trust, even within the network, so every request is authenticated and authorized. D) describes the supporting best practices: segmentation (reducing lateral movement), identity-aware proxies, least-privilege access, and logging of all communication. A) is the opposite of Zero Trust. C) is likewise the opposite (a flat network with perimeter-only controls). So B and D apply.

Question 17

An on‑premises data centre is being migrated to Google Cloud. The security team requires that all data at rest in Google Cloud must not be accessible by any other Google Cloud region than the one specified (for data‑residency/regulatory compliance reasons). Which features should you apply?

A) Use Cloud Storage bucket with a dual‑region in the allowed region only, enable CMEK and restrict key usage to that region.
B) Use multi‑region storage and rely on IAM to restrict region access.
C) Use regional persistent disks and prevent snapshots to other regions by using organization policy.
D) Use global services only and assume Google handles data‑residency.

Answer: A) Use Cloud Storage bucket with a dual‑region in the allowed region only, enable CMEK and restrict key usage to that region.

Explanation:

Ensuring data residency compliance in Google Cloud requires more than simply storing data in a chosen region. By selecting a region or dual-region that aligns with regulatory or organizational requirements, you ensure that all resources, including Cloud Storage buckets, databases, and other managed services, physically reside within the approved geographic boundaries. To strengthen compliance, Cloud KMS keys should be provisioned in the same region or dual-region. This guarantees that encryption and decryption operations occur only within the permitted jurisdiction, preventing accidental exposure of data outside allowed regions and maintaining control over cryptographic operations.

Alternative approaches are insufficient for strict residency requirements. Option B relies on multi-region storage, which can span multiple geographic locations and may violate compliance mandates when data must remain within a specific country or region. Option C partially addresses residency using regional persistent disks, but snapshots and backups may still be replicated outside the region unless additional controls, such as Organization Policy constraints, are applied. Option D does not provide control over where data resides or how keys are managed, making it non-compliant.

Therefore, Option A is the most effective solution as it enforces both storage location and key operations, ensuring full compliance with residency regulations and giving organizations control over where sensitive data resides and how it is encrypted. This approach provides end-to-end regional governance for data at rest and in transit.
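
A hedged sketch with the google-cloud-storage client is shown below: the bucket is created in a single approved region (a dual-region such as EUR4 could be used instead) and its default CMEK key lives in a key ring in that same location. All names are placeholders.

```python
from google.cloud import storage

client = storage.Client()

# Object data is stored only in the approved region.
bucket = client.create_bucket("example-residency-bucket", location="EUROPE-WEST3")

# The key ring and key exist only in europe-west3, so encrypt/decrypt
# operations also stay inside the approved region.
bucket.default_kms_key_name = (
    "projects/example-project/locations/europe-west3/"
    "keyRings/residency-keys/cryptoKeys/bucket-key"
)
bucket.patch()
```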

Question 18

Your application uses BigQuery to store analytics data. Some of the columns contain PII (personally‑identifiable information). The security team wants to make sure that only masked or tokenized values are visible to most analysts, whilst a small subset of users with elevated privileges can view the clear values. Which approach should you implement?

A) Use BigQuery column‑level access control: create separate views where PII columns are masked or tokenized for most users, and grant elevated users direct table access.
B) Encrypt the PII columns in BigQuery with CMEK and give analysts decryption keys.
C) Export the PII to Cloud Storage, manually tokenize it outside BigQuery, then load back.
D) Use row‑level security only, with no column masking.

Answer: A) Use BigQuery column‑level access control: create separate views where PII columns are masked or tokenized for most users, and grant elevated users direct table access.

Explanation:

BigQuery supports authorized views (and column‑level security with policy tags) to limit access and mask sensitive columns. This lets you control which users see the raw PII and which see only masked or tokenized values. Encryption with CMEK (B) protects data at rest but does not differentiate visibility among users inside BigQuery. Option C adds complexity and manual processing. Option D (row‑level only) does not address column‑level sensitivity. So A is the best fit for the requirement of differential visibility.
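
A hedged sketch of the view-based approach with the google-cloud-bigquery client follows; dataset and table names are placeholders, and elevated users would be granted access on the underlying table rather than the view.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Most analysts query this view; the PII column is tokenized with a hash.
view = bigquery.Table("example-project.analytics_views.orders_masked")
view.view_query = """
    SELECT
      order_id,
      order_total,
      TO_HEX(SHA256(customer_email)) AS customer_email_token
    FROM `example-project.analytics.orders`
"""
client.create_table(view)
```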

Question 19

A regulatory audit requires that every change to firewall rules in the production VPC be approved, logged, and reversible (i.e., you can roll back to the previous state). You must design a system to meet this requirement. Which combination of controls should you implement?

A) Use Infrastructure as Code (e.g., Terraform) with version control (Git) for firewall rules, enforce pull‑request based change approvals, and enable audit logs of IAM and firewall change events.
B) Let the network team make firewall rule changes directly in the console; ask them to save screenshots.
C) Use Cloud Logging to capture firewall rule change events only, rely on manual documentation for approvals.
D) Use basic network tags to restrict changes automatically without review.

Answer: A) Use Infrastructure as Code (e.g., Terraform) with version control (Git) for firewall rules, enforce pull‑request based change approvals, and enable audit logs of IAM and firewall change events.

Explanation:

By treating firewall rules as code, storing them in version control, requiring pull‑request reviews for approval, and using audit logs for change tracking, you satisfy the audit requirement for approved changes, log history, and ability to revert (roll back via version control). Option B is weak (screenshots are not reliable). Option C captures logs but lacks automated approval and rollback. Option D automates but removes human approval and documentation—thus fails audit requirements. So A is the correct choice.

Question 20

Your organization must implement a segmentation strategy for network traffic and workloads in Google Cloud such that workloads handling sensitive data are isolated, monitored and do not share the same VPC with non‑sensitive workloads. What design pattern aligns with this requirement in a multi‑project organization structure?

A) Use separate projects (e.g., “prod‑sensitive”, “prod‑standard”) each with its own VPC; use Shared VPC with host in “network” project to centralize network management; apply organization policies and firewall segmentation; enable monitoring per project.
B) Use a single shared VPC across all projects, segregate via subnets only, rely on network tags to isolate sensitive workloads.
C) Use multiple subnets in one project and rely solely on firewalls to segment sensitive/non‑sensitive workloads.
D) Use one project for everything and rely on IAM to prevent access to data, ignoring network segmentation.

Answer: A) Use separate projects (e.g., “prod‑sensitive”, “prod‑standard”) each with its own VPC; use Shared VPC with host in “network” project to centralize network management; apply organization policies and firewall segmentation; enable monitoring per project.

Explanation:

This architecture provides clear separation of sensitive workloads by project and VPC, while using Shared VPC for centralized network policy and firewall enforcement. This limits blast radius, gives clear billing separation, and allows dedicated monitoring. Option B uses a single Shared VPC segmented only by subnets and network tags, which does not provide isolation as strong as separate projects and VPCs. Option C relies on subnets within one project, which is weaker still. Option D ignores network segmentation altogether and does not meet the requirement. Therefore A is the best pattern.

 
