
Professional Cloud Network Engineer Google Practice Test Questions and Exam Dumps
Question No 1:
You are tasked with restricting access to your application hosted behind a Google Cloud HTTP(S) Load Balancer so that only specific client IP addresses can reach it. What is the most appropriate way to implement this restriction?
A. Create a secure perimeter using the Access Context Manager feature of VPC Service Controls and restrict access to the source IP range of the allowed clients and Google health check IP ranges.
B. Create a secure perimeter using VPC Service Controls, and mark the load balancer as a service restricted to the source IP range of the allowed clients and Google health check IP ranges.
C. Tag the backend instances "application," and create a firewall rule with target tag "application" and the source IP range of the allowed clients and Google health check IP ranges.
D. Label the backend instances "application," and create a firewall rule with the target label "application" and the source IP range of the allowed clients and Google health check IP ranges.
Correct Answer: C
Explanation:
In Google Cloud, when you use an external HTTP(S) Load Balancer, traffic reaches the backend services from Google's global front-end infrastructure rather than directly from client IP addresses. You can still apply access restrictions with firewall rules, particularly when managed or unmanaged instance groups serve as backends: the backend instances receive traffic originating from the load balancer, and that traffic can be conditionally permitted based on source IP ranges, such as Google's health check ranges and trusted client IPs.
Option A refers to Access Context Manager and VPC Service Controls. While Access Context Manager allows creating access levels based on attributes like IP addresses and identity, this applies more to user-level access to APIs and not infrastructure-level access to load-balanced HTTP(S) applications. VPC Service Controls are designed to restrict access to Google-managed services (e.g., Cloud Storage, BigQuery) from within a secure perimeter, but not for controlling access to HTTP(S) Load Balancers or controlling ingress from specific external IP addresses. Therefore, A is not suitable for this case.
Option B is similar in that it misuses VPC Service Controls. These are not applicable to compute instances behind a load balancer; they are primarily useful in controlling access to services via identity-based mechanisms. HTTP(S) Load Balancers are not "services" in the context of VPC Service Controls, so they cannot be directly restricted in this way. This makes B incorrect.
Option C is correct because it takes advantage of Google Cloud firewall rules, which can be applied based on network tags. By tagging the backend instances (e.g., with the tag "application"), you can then create an ingress firewall rule that allows traffic only from the specified IP ranges — in this case, the allowed clients and Google’s health check IPs. This ensures that only the intended sources can connect to the backend instances. The firewall rule would target ingress traffic with a specific source IP range and only apply to those instances with the matching tag. This is a common and supported method to restrict access to services behind a Google Cloud Load Balancer.
Option D refers to using instance labels in firewall rules. While firewall rules can target instances by network tag (or by service account), they cannot target instance labels, so the rule described cannot be created. Therefore, D is incorrect in this context.
In summary, tagging backend instances and applying a firewall rule using that tag — with the appropriate source IP ranges — is the best-practice approach for restricting access to a load-balanced application. This ensures tight control while also permitting necessary services like Google health checks to function uninterrupted.
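As an illustration (not part of the question itself), a minimal gcloud sketch of this approach might look like the following, assuming a hypothetical network named my-vpc, a hypothetical allowed client range 203.0.113.0/24, and Google's published health check ranges 130.211.0.0/22 and 35.191.0.0/16:

    # Allow HTTP(S) ingress only from trusted clients and Google health checkers,
    # and apply the rule only to instances tagged "application"
    gcloud compute firewall-rules create allow-app-clients \
        --network=my-vpc \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:80,tcp:443 \
        --source-ranges=203.0.113.0/24,130.211.0.0/22,35.191.0.0/16 \
        --target-tags=application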
Question No 2:
Your end users are located in close proximity to us-east1 and europe-west1. Their workloads need to communicate with each other. You want to minimize cost and increase network efficiency. How should you design this topology?
A. Create 2 VPCs, each with their own regions and individual subnets. Create 2 VPN gateways to establish connectivity between these regions.
B. Create 2 VPCs, each with their own region and individual subnets. Use external IP addresses on the instances to establish connectivity between these regions.
C. Create 1 VPC with 2 regional subnets. Create a global load balancer to establish connectivity between the regions.
D. Create 1 VPC with 2 regional subnets. Deploy workloads in these subnets and have them communicate using private RFC1918 IP addresses.
Correct Answer: D
Explanation:
To design a network topology that both minimizes cost and maximizes efficiency for workloads needing to communicate between us-east1 and europe-west1, the most optimal solution is to use a single Virtual Private Cloud (VPC) that spans both regions and allows the workloads to use private IP addresses for direct communication. In this scenario, option D is the correct choice because it uses one VPC with regional subnets and allows communication over internal IPs (RFC1918), which are private, cost-effective, and highly efficient.
Let’s examine why option D is the best choice by evaluating all options:
Option A suggests creating two separate VPCs, each in a different region, and using VPN gateways to interconnect them. While this setup can work, it introduces significant complexity in terms of network management and security. It also introduces higher latency and additional cost because VPN traffic often travels over the public internet or uses Google’s Cloud VPN infrastructure, which may not provide the same low-latency and high-throughput connectivity as internal IP communication within a shared VPC.
Option B involves setting up two separate VPCs and relying on external IP addresses for instance communication. This approach is both insecure and costly. Sending traffic over the public internet—even if encrypted—introduces unnecessary latency, exposes workloads to internet-based threats, and results in egress charges for public IP traffic. This approach is the least efficient and most expensive option.
Option C proposes a single VPC with two regional subnets but uses a global load balancer to facilitate communication. Although Google Cloud’s global load balancers are powerful and support multi-region deployment, they are primarily intended for distributing external traffic across backend services, not for internal workload-to-workload communication. Using a load balancer in this context introduces unnecessary overhead and additional costs because it forces traffic to go through an intermediary when it could simply go point-to-point.
Option D is superior because it uses a single VPC that spans multiple regions, a feature native to Google Cloud. In this configuration, each region (us-east1 and europe-west1) has its own subnet, but both are part of the same VPC. This means that workloads in these subnets can directly communicate using private IP addresses (RFC1918). Google's VPCs are global, so cross-region routing is configured automatically, and traffic between regions over internal IP addresses travels Google's high-speed backbone network, avoiding public internet exposure. It is also the cheapest of the four designs: cross-region traffic over internal IPs is billed at Google's inter-region rates, which are lower than the internet egress charges incurred when instances communicate over external IP addresses.
This design also scales better. If you later decide to deploy more regions or add more subnets, a single-VPC design will simplify routing, firewall rules, IAM policies, and service access. You’ll also benefit from centralized management and security policies. It also eliminates the need for peering or VPNs, simplifying the overall architecture.
Therefore, based on cost efficiency, network performance, security, and scalability, option D is the most effective solution.
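For illustration, a minimal sketch of this topology with hypothetical names (prod-vpc and example RFC1918 subnet ranges) might look like:

    # One custom-mode VPC spanning both regions
    gcloud compute networks create prod-vpc --subnet-mode=custom

    # One subnet per region; workloads communicate over these private ranges
    gcloud compute networks subnets create east-subnet \
        --network=prod-vpc --region=us-east1 --range=10.0.0.0/20
    gcloud compute networks subnets create europe-subnet \
        --network=prod-vpc --region=europe-west1 --range=10.0.16.0/20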
Question No 3:
Your organization is deploying a single project for 3 separate departments. Two of these departments need to communicate over the network, while the third must remain isolated. The design should ensure distinct network administrative domains for each department while keeping operational complexity low.
What topology should you implement?
A. Create a Shared VPC Host Project and the respective Service Projects for each of the 3 separate departments.
B. Create 3 separate VPCs, and use Cloud VPN to establish connectivity between the two appropriate VPCs.
C. Create 3 separate VPCs, and use VPC peering to establish connectivity between the two appropriate VPCs.
D. Create a single project, and deploy specific firewall rules. Use network tags to isolate access between the departments.
Correct answer: C
Explanation:
The goal in this scenario is to deploy infrastructure for three departments within a single project while maintaining isolation for one of the departments and allowing communication between the other two. In addition, it is important to establish distinct network administrative domains and minimize operational overhead. Given these constraints, option C, which proposes creating three separate VPCs and using VPC peering to connect only the two departments that need to communicate, offers the best balance of security, flexibility, and simplicity.
VPC peering is a native Google Cloud feature that enables private connectivity between VPC networks. It’s designed to facilitate low-latency, high-throughput communication between networks without traversing the public internet. It also allows for granular control by enabling network connectivity only between the desired VPCs—in this case, the two departments that need to communicate—while keeping the third one completely isolated by not establishing peering for it. Furthermore, separate VPCs inherently represent distinct administrative boundaries, allowing each department to manage its resources independently, aligning perfectly with the requirement of maintaining distinct network administrative domains.
Option A, the Shared VPC model, is well-suited for centralizing network administration while delegating resource management to service projects. However, in this case, the requirement is for distinct network administrative domains, which Shared VPC inherently does not offer because all networking resources (like subnets and routes) reside in the Host Project and are shared. This approach does not provide the degree of isolation needed for the third department and may not meet the governance requirement of fully distinct network domains.
Option B suggests using Cloud VPN between separate VPCs. While this does provide connectivity and isolation, it introduces unnecessary operational complexity. VPNs require managing gateway configurations, tunnels, and IPsec policies, which adds overhead. VPC peering, on the other hand, is simpler to configure and maintain within Google Cloud, particularly for internal communication between GCP-native networks.
Option D, which involves using a single project and relying on firewall rules with network tags for isolation, fails to meet the key requirement of creating distinct network administrative domains. Tags and firewall rules can help with segmentation, but they don’t provide true isolation or separate administrative control. All resources would still exist within a single VPC, making it more difficult to enforce strict governance and potentially increasing the risk of misconfigurations leading to security breaches.
In summary, option C strikes the optimal balance by using separate VPCs for strong isolation and VPC peering for controlled connectivity between the departments that require it. It leverages built-in cloud features with low operational overhead, satisfies the requirement for administrative separation, and avoids unnecessary complexity introduced by VPNs or over-reliance on firewall configurations within a shared network.
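As a sketch, with hypothetical VPC names dept-a-vpc and dept-b-vpc for the two departments that must communicate (peering must be created from both sides to become active):

    gcloud compute networks peerings create dept-a-to-dept-b \
        --network=dept-a-vpc --peer-network=dept-b-vpc
    gcloud compute networks peerings create dept-b-to-dept-a \
        --network=dept-b-vpc --peer-network=dept-a-vpc

The third department's VPC simply has no peering configured, which keeps it isolated.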
Question No 4:
You are migrating to Cloud DNS and want to import your BIND zone file. Which command should you use?
A. gcloud dns record-sets import ZONE_FILE --zone MANAGED_ZONE
B. gcloud dns record-sets import ZONE_FILE --replace-origin-ns --zone MANAGED_ZONE
C. gcloud dns record-sets import ZONE_FILE --zone-file-format --zone MANAGED_ZONE
D. gcloud dns record-sets import ZONE_FILE --delete-all-existing --zone MANAGED_ZONE
Correct answer: C
Explanation:
When migrating your DNS records to Google Cloud DNS from another provider, it's common to export your existing DNS records into a BIND-formatted zone file, which is a standardized plain-text format that describes DNS zone contents. Google Cloud offers a CLI tool—gcloud—that supports importing DNS record sets directly from such a file. However, to ensure proper interpretation of the file format during import, you must include a specific flag that tells the tool you are using a BIND-style zone file. That is where the --zone-file-format flag becomes crucial.
Option C correctly includes this flag: --zone-file-format. This flag is essential when importing a BIND-format zone file because, without it, Cloud DNS assumes the file is in the default YAML record-set format, which would likely result in errors or misinterpretation of the DNS records. This option also specifies the managed zone with the --zone flag, allowing Cloud DNS to associate the imported records with the correct managed zone.
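Concretely, assuming a zone file exported as example.zone and a managed zone named my-managed-zone (both hypothetical names):

    gcloud dns record-sets import example.zone \
        --zone-file-format \
        --zone=my-managed-zone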
Let’s examine why the other options are incorrect:
Option A is missing the --zone-file-format flag. Although this command includes the zone file and the destination zone, it does not inform the system that the file is in BIND format. Therefore, if the BIND formatting is not recognized, the import process could fail or lead to incorrect entries.
Option B includes the flag --replace-origin-ns, which is only relevant if you want to overwrite the original nameserver (NS) records in the imported zone file. This flag is not required unless you have a specific reason to update the NS records. More importantly, it still lacks the necessary --zone-file-format flag, making it unsuitable for a BIND file import.
Option D contains the --delete-all-existing flag, which instructs Cloud DNS to remove all preexisting records in the specified managed zone before importing the new ones. While this might be useful in certain contexts, it is a potentially risky operation if not used cautiously, and more importantly, like A and B, it fails to include the essential --zone-file-format flag required for BIND file imports.
Thus, C is the only option that both safely and correctly handles the import of a BIND-formatted zone file into Google Cloud DNS by explicitly specifying the expected file format, ensuring compatibility and accurate record creation. This makes it the most suitable and effective command for the given migration task.
Question No 5:
You have an auto mode VPC network called Retail and need to create another VPC named Distribution that can be peered with Retail. How should you set up the Distribution VPC to ensure proper peering configuration?
A. Create the Distribution VPC in auto mode. Peer both the VPCs via network peering.
B. Create the Distribution VPC in custom mode. Use the CIDR range 10.0.0.0/9. Create the necessary subnets, and then peer them via network peering.
C. Create the Distribution VPC in custom mode. Use the CIDR range 10.128.0.0/9. Create the necessary subnets, and then peer them via network peering.
D. Rename the default VPC as "Distribution" and peer it via network peering.
Correct Answer: B
Explanation:
In Google Cloud, when configuring VPC networks for peering, it’s essential to avoid overlapping IP address ranges. The Retail VPC, created in auto mode, automatically assigns subnets using the 10.128.0.0/9 IP address space, which spans from 10.128.0.0 to 10.255.255.255. This makes that range unavailable for any other VPC that you plan to peer with Retail.
To set up a new VPC named Distribution for peering, you must select an IP address range that does not overlap with 10.128.0.0/9. The 10.0.0.0/8 block is a common private IP range, and from this, 10.0.0.0/9 (which spans from 10.0.0.0 to 10.127.255.255) does not overlap with Retail’s existing space. Therefore, using 10.0.0.0/9 for Distribution ensures compatibility.
The VPC should be created in custom mode, which gives you full control over subnet creation and IP assignment. Auto mode is inflexible in this case, as it would again try to assign the same IP ranges used by Retail. Peering two auto mode VPCs is not feasible without manually altering one or both to remove the overlapping ranges. That's why Option A is invalid.
Option C is wrong because assigning 10.128.0.0/9 to Distribution would overlap with Retail, which already uses that range by default due to its auto mode configuration. This overlap would cause the peering attempt to fail.
Option D is also incorrect. A VPC network cannot simply be renamed, and the default network is itself an auto mode VPC using the same 10.128.0.0/9 space, so it would overlap with Retail just like any other auto mode network. Peering requires explicitly non-overlapping IP address allocations, not just a different name.
Therefore, the most suitable approach is to create the Distribution VPC in custom mode, assign it the 10.0.0.0/9 CIDR block, and then configure peering with Retail. This ensures no IP conflicts and enables successful peering.
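A minimal sketch of that setup, assuming a hypothetical region and subnet range within 10.0.0.0/9, and that the existing auto mode network is literally named retail (peering must be created from both sides):

    gcloud compute networks create distribution --subnet-mode=custom
    gcloud compute networks subnets create distribution-subnet \
        --network=distribution --region=us-east1 --range=10.0.0.0/20

    gcloud compute networks peerings create distribution-to-retail \
        --network=distribution --peer-network=retail
    gcloud compute networks peerings create retail-to-distribution \
        --network=retail --peer-network=distribution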
Question No 6:
You are using a third-party next-generation firewall to inspect traffic. You created a custom route of 0.0.0.0/0 to route egress traffic to the firewall. You want to allow your VPC instances without public IP addresses to access the BigQuery and Cloud Pub/Sub APIs, without sending the traffic through the firewall.
Which two actions should you take? (Choose two.)
A. Turn on Private Google Access at the subnet level.
B. Turn on Private Google Access at the VPC level.
C. Turn on Private Services Access at the VPC level.
D. Create a set of custom static routes to send traffic to the external IP addresses of Google APIs and services via the default internet gateway.
E. Create a set of custom static routes to send traffic to the internal IP addresses of Google APIs and services via the default internet gateway.
Correct Answers: A and D
Explanation:
To allow VPC instances without public IP addresses to access Google APIs like BigQuery and Cloud Pub/Sub without routing through the third-party firewall, two key objectives must be met:
Enable those instances to reach Google APIs privately.
Ensure that traffic to these APIs bypasses the default route that sends traffic through the firewall.
Let’s examine each option in detail to determine which two actions fulfill these requirements.
A. Turn on Private Google Access at the subnet level.
This is essential. Private Google Access allows VM instances that do not have external IP addresses to reach Google APIs and services using the internal IP address of the VM. It is enabled at the subnet level, not the VPC level. This setting is mandatory when private VMs (no public IPs) need to access services like BigQuery or Cloud Pub/Sub. So, this is a correct action.
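For example, assuming a hypothetical subnet named my-subnet in us-east1, Private Google Access is enabled with:

    gcloud compute networks subnets update my-subnet \
        --region=us-east1 \
        --enable-private-ip-google-access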
B. Turn on Private Google Access at the VPC level.
This is a distractor. Private Google Access is configured per subnet, not globally at the VPC level. Hence, there is no such setting at the VPC level. Therefore, this option is incorrect.
C. Turn on Private Services Access at the VPC level.
Private Services Access is used for connecting to Google-managed services like Cloud SQL or AI Platform notebooks over internal IPs, but not for accessing public Google APIs like BigQuery or Pub/Sub. It's unrelated to accessing Google APIs over Private Google Access, so this option is also incorrect.
D. Create a set of custom static routes to send traffic to the external IP addresses of Google APIs and services via the default internet gateway.
This is a correct action. Since your 0.0.0.0/0 route directs all traffic to the firewall, Google API traffic would also go through the firewall by default. However, you want to bypass the firewall for Google APIs. Google publishes a list of external IP ranges for their APIs and services. By creating custom static routes for those IP ranges with the next hop as the default internet gateway, you allow egress traffic to reach the APIs without going through the third-party firewall.
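As one illustration, Google documents 199.36.153.8/30 (the private.googleapis.com VIP range) as reachable via the default internet gateway; a route like the sketch below (hypothetical route name, assuming a network named my-vpc) sends API-bound traffic directly out the default internet gateway, and because its destination range is more specific than 0.0.0.0/0, it takes precedence over the route to the firewall. In practice you would create such routes for each published range you need:

    gcloud compute routes create google-apis-bypass \
        --network=my-vpc \
        --destination-range=199.36.153.8/30 \
        --next-hop-gateway=default-internet-gateway \
        --priority=100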
E. Create a set of custom static routes to send traffic to the internal IP addresses of Google APIs and services via the default internet gateway.
This is not correct. Google APIs are accessed over public IPs, not internal ones. There is no routing need for "internal IP addresses" of these APIs. Also, the default internet gateway is for public destinations. This choice doesn't make technical sense and is incorrect.
In conclusion, the correct steps are to enable Private Google Access at the subnet level (A) and to create custom static routes for Google's API IP ranges that bypass the firewall (D). This setup ensures that VMs without external IPs can access Google APIs like BigQuery and Cloud Pub/Sub securely and efficiently, without unnecessary inspection through a third-party firewall.
Question No 7:
All the instances in your project are configured with the custom metadata enable-oslogin value set to FALSE and to block project-wide SSH keys. None of the instances are set with any SSH key, and no project-wide SSH keys have been configured.
Firewall rules are set up to allow SSH sessions from any IP address range. You want to SSH into one instance. What should you do?
A. Open the Cloud Shell SSH into the instance using gcloud compute ssh.
B. Set the custom metadata enable-oslogin to TRUE, and SSH into the instance using a third-party tool like putty or ssh.
C. Generate a new SSH key pair. Verify the format of the private key and add it to the instance. SSH into the instance using a third-party tool like putty or ssh.
D. Generate a new SSH key pair. Verify the format of the public key and add it to the project. SSH into the instance using a third-party tool like putty or ssh.
Correct Answer: C
Explanation:
To SSH into a Compute Engine instance on Google Cloud Platform (GCP), there must be a valid SSH key associated with either the project or the instance itself. In this scenario, the project has been explicitly configured to block project-wide SSH keys, and enable-oslogin is set to FALSE, which means that OS Login (the IAM-based method of managing SSH access) is disabled. This configuration leaves no existing method to access the instance unless an SSH key is added directly to the instance-level metadata.
Let’s break this down further:
The project is blocking project-wide SSH keys, so even if you add a public SSH key to the project metadata, the instance won't accept it due to that configuration. This immediately rules out option D.
The enable-oslogin setting is FALSE, so even if you set it to TRUE now as per option B, it won’t help unless you also configure IAM roles correctly and ensure the OS supports it, which is more involved than the question allows for. Additionally, it still won’t allow access unless SSH keys are added and associated properly through IAM.
Option A, using Cloud Shell and running gcloud compute ssh, won’t work either. Cloud Shell uses your user identity and adds SSH keys to project metadata automatically when using OS Login or project-wide keys. But in this setup:
OS Login is disabled.
Project-wide SSH keys are blocked. So gcloud compute ssh cannot insert your SSH key, nor is there an existing key on the instance to authenticate with.
Option C is the most viable solution. By generating a new SSH key pair and then adding the public key manually to the instance metadata, you assign a key that the instance will recognize and allow. The private key (usually in OpenSSH/PEM format for use with ssh, or .ppk for PuTTY) can then be used to authenticate from your local terminal or SSH client.
This approach bypasses both the blocked project-wide SSH keys and the disabled OS Login by applying the key directly to the instance's metadata — which the instance will honor regardless of broader project settings. Once the key is in place, you can use any standard SSH client such as PuTTY, ssh, or Google's SDK tools to access the instance, assuming the firewall allows incoming traffic on port 22, which it does in this case.
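A hedged sketch of option C, assuming a hypothetical instance my-instance in us-central1-a and a hypothetical user name jane (in this scenario no ssh-keys metadata exists yet, so setting it directly is safe):

    # Generate a new key pair
    ssh-keygen -t rsa -f ~/.ssh/gcp-key -C jane

    # Attach the public key to the instance metadata in USERNAME:KEY format
    gcloud compute instances add-metadata my-instance \
        --zone=us-central1-a \
        --metadata=ssh-keys="jane:$(cat ~/.ssh/gcp-key.pub)"

    # Connect with any standard SSH client
    ssh -i ~/.ssh/gcp-key jane@INSTANCE_EXTERNAL_IP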
Thus, the correct and functional step to access the instance in this constrained configuration is to use option C.
Question No 8:
What is the most cost-effective solution for providing on-premises connectivity to multiple university departments on Google Cloud while maintaining centralized network control and meeting requirements such as 10 Gbps throughput and low latency?
A. Use Shared VPC, and deploy the VLAN attachments and Interconnect in the host project.
B. Use Shared VPC, and deploy the VLAN attachments in the service projects. Connect the VLAN attachment to the Shared VPC's host project.
C. Use standalone projects, and deploy the VLAN attachments in the individual projects. Connect the VLAN attachment to the standalone projects' Interconnects.
D. Use standalone projects and deploy the VLAN attachments and Interconnects in each of the individual projects.
Correct Answer: A
Explanation:
The scenario involves a university migrating to Google Cloud that needs centralized network administration, on-premises connectivity at 10 Gbps, and the lowest possible latency. Additionally, the solution must be cost-efficient as new departments (presumably with individual GCP projects) request connectivity to on-premises infrastructure.
The best solution in this case is Shared VPC with VLAN attachments and Dedicated Interconnect deployed in the host project—which is option A.
Using a Shared VPC allows the central networking team to maintain control over networking resources (such as subnets, routes, firewall rules, and connectivity), while departments can create and manage their own workloads in service projects. This setup aligns perfectly with the requirement for centralized network administration. Departments get access to cloud resources without managing the underlying connectivity infrastructure themselves.
Dedicated Interconnect is preferred over Partner Interconnect when very high throughput (like 10 Gbps or more) and low latency are needed. It provides direct physical connections between your on-premises environment and Google Cloud at speeds up to 100 Gbps. To use it efficiently and cost-effectively across multiple departments, deploying it centrally in the Shared VPC host project avoids duplication of resources and maximizes use of the bandwidth.
VLAN attachments are part of Dedicated Interconnect, and deploying them in the host project ensures that the central network team can manage and monitor all interconnectivity resources without needing to duplicate Interconnects for every department.
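A rough sketch of the host-project layout, with hypothetical project, router, and Interconnect names (the physical Interconnect itself is provisioned through Google's ordering process, so only the attachment and Shared VPC wiring are shown):

    # Make the networking project a Shared VPC host and attach a department's service project
    gcloud compute shared-vpc enable host-project-id
    gcloud compute shared-vpc associated-projects add dept-a-project \
        --host-project=host-project-id

    # Create the VLAN attachment in the host project, on the host project's Cloud Router
    gcloud compute interconnects attachments dedicated create dept-attachment \
        --project=host-project-id \
        --region=us-east1 \
        --router=host-router \
        --interconnect=campus-interconnect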
Let’s evaluate why the other options fall short:
B. Use Shared VPC, and deploy the VLAN attachments in the service projects. Connect the VLAN attachment to the Shared VPC's host project.
This is not a valid architecture. VLAN attachments must be deployed in the same project as the Interconnect. You cannot create VLAN attachments in a service project and connect them to an Interconnect in the host project. This setup breaks the management model and introduces technical inconsistencies.
C. Use standalone projects, and deploy the VLAN attachments in the individual projects. Connect the VLAN attachment to the standalone projects' Interconnects.
This is inefficient and expensive. Each department would need to provision its own Interconnect and VLAN attachments, duplicating physical connections, increasing operational overhead, and violating the requirement for centralized management. It also results in significant additional cost for each Interconnect and attachment.
D. Use standalone projects and deploy the VLAN attachments and Interconnects in each of the individual projects.
This is the most decentralized and costliest approach. Every project/dept would manage its own Interconnect, duplicating resources and reducing control. It also fails to meet the requirement for centralized network administration and would dramatically increase both complexity and expenses.
Therefore, option A is the most scalable, manageable, and cost-effective approach, ensuring that all departments benefit from a high-speed, low-latency connection to Google Cloud while maintaining centralized control through Shared VPC and consolidated networking resources.
Question No 9:
You have deployed a new internal application that provides HTTP and TFTP services to on-premises hosts. You want to be able to distribute traffic across multiple Compute Engine instances, but need to ensure that clients are sticky to a particular instance across both services.
Which session affinity should you choose?
A. None
B. Client IP
C. Client IP and protocol
D. Client IP, port and protocol
Correct Answer: B
Explanation:
When dealing with load balancing in Google Cloud, session affinity refers to the ability to direct client requests to the same backend (Compute Engine instance) across multiple sessions. This is also referred to as "sticky sessions." In this specific case, your internal application provides both HTTP and TFTP services, and the goal is for on-premises clients to consistently connect to the same instance for both services. This implies that the stickiness should not be limited to a specific protocol or port, but rather should treat the client’s IP address as the primary identifier for session consistency.
Option A, None, means there would be no session affinity at all. Requests from the same client could be routed to different backend instances for each request or protocol. This would be unsuitable because you require session stickiness across services.
Option C, Client IP and protocol, ties the session affinity to both the client’s IP address and the specific protocol being used (for instance, one affinity rule for HTTP, another for TFTP). However, this would not satisfy your use case since your goal is to ensure clients are routed to the same instance across both HTTP and TFTP—two different protocols. If session stickiness is tied to the protocol, then the load balancer may send HTTP traffic from a client to one instance and TFTP traffic from the same client to another instance. That breaks the required consistency across services.
Option D, Client IP, port and protocol, is even more specific. It requires the client’s IP address, the protocol, and the source port to remain the same for session affinity. This is too restrictive and unnecessary for your use case. It’s also impractical because clients typically use ephemeral source ports that can change with each new connection.
Option B, Client IP, is the correct and most effective choice. This affinity mode ensures that any traffic from a specific client IP address—regardless of protocol or port—gets directed to the same backend instance. That means your HTTP and TFTP traffic originating from the same on-premises client will consistently go to the same Compute Engine instance, satisfying the requirement for cross-service stickiness. It is the only option among the ones listed that provides session affinity at the client IP level alone, without protocol or port dependency, which is exactly what the question demands.
Therefore, Client IP affinity is the most suitable for your scenario, as it enables consistent routing for each client across different types of traffic (in this case, HTTP and TFTP), ensuring stable interactions with the internal application across multiple Compute Engine instances.
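For an internal TCP/UDP load balancer, session affinity is configured on the backend service; a minimal sketch with a hypothetical backend service name and region:

    gcloud compute backend-services update my-backend-service \
        --region=us-east1 \
        --session-affinity=CLIENT_IP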
Question No 10:
You created a new VPC network named Dev with one subnet and a firewall rule that allows only HTTP traffic. Logging is enabled. When you attempt to connect to a VM using Remote Desktop Protocol (RDP), it fails.
You check the firewall rule logs in Stackdriver Logging but find no entries for blocked traffic. How can you see logs for blocked traffic?
A. Check the VPC flow logs for the instance.
B. Try connecting to the instance via SSH, and check the logs.
C. Create a new firewall rule to allow traffic from port 22, and enable logs.
D. Create a new firewall rule with priority 65500 to deny all traffic, and enable logs.
Correct Answer: D
Explanation:
When managing firewall rules in Google Cloud Platform (GCP), only the traffic that matches an explicit rule with logging enabled is logged in the Firewall Rules Logging within Cloud Logging (formerly Stackdriver). If there is no firewall rule that explicitly denies traffic and has logging enabled, GCP does not log the default implied deny rule (which automatically blocks traffic not explicitly allowed).
In this scenario, the firewall rule allows only HTTP traffic (typically port 80). Since Remote Desktop Protocol (RDP) uses port 3389, and this port is not allowed by any firewall rule, the connection attempt is being implicitly denied by GCP's default behavior. However, because this default deny action is not explicitly defined as a rule, no logs are generated for this blocked traffic.
To see logs of denied traffic, you need to create an explicit deny firewall rule and enable logging on that rule. This way, when traffic is blocked because it matches that rule, the logs will be visible in Cloud Logging. Setting the rule's priority to 65500 ensures that it does not override more specific allow rules (which default to priority 1000) but still takes effect before the implied default deny rule, which sits at the lowest possible precedence, priority 65535.
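A sketch of such a rule for the Dev network described in the question (the rule name is hypothetical):

    gcloud compute firewall-rules create dev-deny-all-log \
        --network=dev \
        --direction=INGRESS \
        --action=DENY \
        --rules=all \
        --priority=65500 \
        --enable-logging

With this rule in place, the blocked RDP attempt on port 3389 will appear in the firewall rule logs.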
Let’s examine the other options:
A. VPC flow logs do provide network traffic metadata, but they are not the same as firewall logs. Flow logs show information about network connections (e.g., source/destination IPs, ports, bytes transferred), but they don't clearly indicate whether the traffic was allowed or blocked by a firewall rule. They are useful for high-level network analysis but not for understanding firewall rule behavior directly.
B. Trying to connect via SSH instead of RDP doesn’t solve the problem. The question is about seeing logs for blocked traffic, and switching to a different protocol (SSH uses port 22) would only work if port 22 is allowed, which it is not. Even then, the problem is about logging blocked traffic, not successfully connecting.
C. Creating a rule to allow traffic on port 22 (SSH) and enabling logs would show allowed traffic to SSH, but not denied RDP traffic. This option would not help solve the original issue of understanding why RDP traffic is being blocked and where it is logged.
Therefore, the only way to explicitly log denied traffic, which is currently not being logged because it is being denied by the implied rule, is to create a firewall rule that explicitly denies all traffic and turn on logging for it. This will capture and log all blocked attempts, including the RDP one, and make them visible in Cloud Logging.