
156-215.80 Check Point Practice Test Questions and Exam Dumps
Question 1
Which of the following is NOT an integral part of VPN communication within a network?
A. VPN key
B. VPN community
C. VPN trust entities
D. VPN domain
Correct Answer: C
Explanation:
Virtual Private Network (VPN) communication relies on several components to ensure secure, encrypted data transmission between remote locations or devices over public or shared networks. Understanding which elements are standard and which are not is key to identifying the correct answer.
VPN Key
This refers to the cryptographic key used in VPN tunnels for encryption and decryption of data.
It is fundamental for securing the confidentiality and integrity of the transmitted data.
VPN keys are usually exchanged via key exchange protocols like IKE (Internet Key Exchange).
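To make the key-exchange idea concrete, here is a minimal Python sketch of an ephemeral Diffie-Hellman style exchange (X25519) followed by key derivation, using the third-party cryptography package. This only illustrates the concept that IKE builds on; it is not Check Point's IKE implementation, and the labels are hypothetical.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each gateway generates an ephemeral key pair (illustrative only).
gw_a = X25519PrivateKey.generate()
gw_b = X25519PrivateKey.generate()

# Both sides derive the same shared secret from the peer's public key.
shared_a = gw_a.exchange(gw_b.public_key())
shared_b = gw_b.exchange(gw_a.public_key())
assert shared_a == shared_b

# Derive a symmetric tunnel key from the shared secret (hypothetical label).
tunnel_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                  info=b"illustrative vpn tunnel key").derive(shared_a)
print(tunnel_key.hex())
```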
VPN Community
This defines a logical grouping of VPN participants (gateways and clients) that can communicate with each other.
It helps in managing and enforcing VPN policies, such as encryption domains and tunnel settings.
Widely used in systems like Check Point VPN configurations.
VPN Domain
A VPN domain typically refers to the set of networks or IP addresses behind a VPN gateway that can participate in the VPN.
It is a critical configuration component to define which resources should be accessible over the tunnel.
It determines the scope of traffic that should be encrypted and passed through the VPN tunnel.
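As an illustration of how an encryption domain scopes traffic, the following Python sketch checks whether a connection's endpoints fall inside configured VPN domains. The gateway names and subnets are hypothetical, not taken from any real configuration.

```python
import ipaddress

# Hypothetical encryption domains behind two gateways (illustrative values).
VPN_DOMAINS = {
    "gw-hq":     [ipaddress.ip_network("10.1.0.0/16")],
    "gw-branch": [ipaddress.ip_network("10.2.0.0/16")],
}

def in_domain(ip, networks):
    return any(ipaddress.ip_address(ip) in net for net in networks)

def should_encrypt(src_ip, dst_ip):
    """Encrypt only if both endpoints belong to some peer's VPN domain."""
    src_ok = any(in_domain(src_ip, nets) for nets in VPN_DOMAINS.values())
    dst_ok = any(in_domain(dst_ip, nets) for nets in VPN_DOMAINS.values())
    return src_ok and dst_ok

print(should_encrypt("10.1.5.9", "10.2.7.3"))  # True  -> sent through the tunnel
print(should_encrypt("10.1.5.9", "8.8.8.8"))   # False -> not part of the VPN domain
```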
VPN Trust Entities (NOT a standard term)
This option is not a standard or formal element in VPN configurations.
While the concept of trust exists in cryptographic systems (such as trust anchors in PKI), "VPN trust entities" is not an established term or functional element in VPN communication protocols or implementations.
It may be a distractor term designed to resemble legitimate concepts like "trusted certificate authorities," but is not directly applicable to standard VPN setups.
VPN communication is structured around clear components: keys for encryption, communities for grouping, and domains for network definition. "VPN trust entities" is not a formal or integral part of VPN communication architecture and terminology, making it the correct choice as the non-integral component in the list.
The correct answer is C.
Question 2
Two administrators, Dave and Jon, both manage the R80 Management Server for ABC Corp. Jon logged into the R80 Management Server and, shortly after, Dave logged in to the same server. They are both in the Security Policies view. Refer to the screenshots below.
Why does Dave not have rule no.6 in his SmartConsole view even though Jon has it in his SmartConsole view?
A. Jon is currently editing rule no.6 but has Published part of his changes.
B. Dave is currently editing rule no.6 and has marked this rule for deletion.
C. Dave is currently editing rule no.6 and has deleted it from his Rule Base.
D. Jon is currently editing rule no.6 but has not yet Published his changes.
Correct Answer: D
Explanation:
In Check Point’s R80 SmartConsole environment, administrators can work simultaneously with multiple sessions. However, the visibility of changes between admins is dependent on whether changes have been Published. The concept of publishing is essential: until a change is published, it remains local to the administrator's session and is not visible to others.
Let’s analyze the situation:
In Jon's session, rule no.6 "Cleanup rule" is visible in the rule base.
In Dave's session, rule no.6 is not present. He sees only up to rule no.5.
Given this behavior, the discrepancy must be due to changes that only exist in Jon’s session and have not been shared (published) to the global rule base, which Dave would then see.
Option A: Jon is currently editing rule no.6 but has Published part of his changes.
This is incorrect. In Check Point R80, publishing is all or nothing for a session—either all changes are published, or none are. There is no concept of “partial publish.” Therefore, Jon cannot have published part of the changes and kept others local.
Option B: Dave is currently editing rule no.6 and has marked this rule for deletion.
If Dave had deleted the rule, it would appear as struck-through or greyed out in his view, depending on the SmartConsole version—but it would still exist until published. Also, this does not explain why Jon can see the rule but Dave cannot.
Option C: Dave is currently editing rule no.6 and has deleted it from his Rule Base.
Again, even if Dave had deleted the rule in his own session, Jon would not see this deletion unless Dave had published his changes. Furthermore, the scenario is explicitly about why Dave does not see a rule that Jon sees, not the other way around.
Option D: Jon is currently editing rule no.6 but has not yet Published his changes.
This is the correct explanation. In R80 SmartConsole, until an administrator publishes changes, those changes are visible only in their session. Since Jon has not published his session, Dave cannot see the new "Cleanup rule" (rule no.6) that Jon added.
The R80 SmartConsole interface isolates changes per admin session until those changes are published. Because Jon has made a change (added rule no.6) but has not published it, Dave—who logged in after Jon and views the centralized policy—cannot see this rule. This fully supports the scenario described in the question.
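The session-isolation behavior can be modeled with a small Python sketch. This is purely illustrative and not Check Point code; it simply shows how unpublished changes stay local to one administrator's session until Publish.

```python
# Illustrative model of per-session change isolation (not Check Point code).
published_rulebase = ["rule1", "rule2", "rule3", "rule4", "rule5"]

class AdminSession:
    def __init__(self, name):
        self.name = name
        self.pending = []            # changes visible only to this session

    def add_rule(self, rule):
        self.pending.append(rule)

    def view(self):
        # Each admin sees the published rule base plus their own pending changes.
        return published_rulebase + self.pending

    def publish(self):
        # Publishing is all-or-nothing: every pending change becomes global.
        published_rulebase.extend(self.pending)
        self.pending.clear()

jon, dave = AdminSession("Jon"), AdminSession("Dave")
jon.add_rule("rule6 - Cleanup rule")
print(len(jon.view()), len(dave.view()))   # 6 5 -> Dave does not see rule no.6
jon.publish()
print(len(dave.view()))                    # 6   -> visible to Dave after Publish
```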
The correct answer is D.
Question 3
Vanessa is a firewall administrator at her company. The company uses Check Point firewalls at central and remote locations, which are managed centrally by an R80 Security Management Server. The central location has an R77.30 Gateway installed on an open server, and the remote location uses a Check Point UTM-1 570 series appliance running R71.
Which encryption is used in Secure Internal Communication (SIC) between central management and firewall on each location?
A. On central firewall AES128 encryption is used for SIC, on Remote firewall 3DES encryption is used for SIC.
B. On both firewalls, the same encryption is used for SIC. This is AES-GCM-256.
C. The Firewall Administrator can choose which encryption suite will be used by SIC.
D. On central firewall AES256 encryption is used for SIC, on Remote firewall AES128 encryption is used for SIC.
Correct Answer: A
Explanation:
Secure Internal Communication (SIC) is a fundamental security feature in Check Point environments, ensuring trusted and encrypted communication between Check Point components—such as between Security Gateways and the Security Management Server. SIC uses certificates for authentication and encryption and is initialized with a one-time password to set up trust.
The type of encryption used by SIC depends significantly on the version of the Check Point software running on the firewall, not just the management server. Each version supports a specific encryption algorithm based on its cryptographic capabilities at the time it was released.
In this scenario:
Management Server: R80
Central Gateway: Running R77.30 on an Open Server
Remote Gateway: UTM-1 570 appliance running R71
R71 (Remote Location): This version uses older encryption standards. Specifically, it uses 3DES for SIC, as AES support was limited or not the default at this version level.
R77.30 (Central Location): By this version, AES128 is the default for SIC communications. Check Point improved encryption handling starting around R75, adopting AES over 3DES where supported.
Even though the Security Management Server is on R80, the encryption used in SIC is determined by the gateway’s version, since SIC is established per device based on the capabilities of the gateway's own software.
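The version-to-cipher relationship described above can be summarized in a tiny lookup sketch (illustrative only; the mapping comes from this explanation, not from Check Point source code):

```python
# Illustrative lookup only; the version-to-cipher mapping reflects the
# explanation above, not Check Point internals.
SIC_CIPHER_BY_VERSION = {
    "R71":    "3DES",
    "R77.30": "AES128",
}

def sic_cipher(gateway_version):
    """The gateway's own version, not the management server's, decides the cipher."""
    return SIC_CIPHER_BY_VERSION.get(gateway_version, "unknown")

print(sic_cipher("R77.30"))  # AES128 (central gateway on R77.30)
print(sic_cipher("R71"))     # 3DES   (remote UTM-1 570 on R71)
```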
A. On central firewall AES128 encryption is used for SIC, on Remote firewall 3DES encryption is used for SIC.
This is correct. The central firewall supports and defaults to AES128 due to R77.30. The remote firewall is on R71 and thus uses 3DES. This option accurately reflects version-based encryption use.
B. On both firewalls, the same encryption is used for SIC. This is AES-GCM-256.
This is incorrect. AES-GCM-256 is not supported in older versions like R71 or even R77.30 for SIC. Also, AES-GCM is a much newer standard.
C. The Firewall Administrator can choose which encryption suite will be used by SIC.
This is incorrect. SIC encryption is not user-configurable per device; it is determined by the software version.
D. On central firewall AES256 encryption is used for SIC, on Remote firewall AES128 encryption is used for SIC.
This is incorrect. R71 does not support AES128 for SIC; it uses 3DES. Also, R77.30 by default uses AES128, not AES256.
SIC encryption is version-dependent, not configurable, and the gateway’s own software dictates the cryptographic suite used. In this case, the central firewall uses AES128 (supported by R77.30), and the older remote UTM-1 570 with R71 uses 3DES.
The correct answer is A.
Question 4
Review the following screenshot and select the BEST answer.
A. Data Center Layer is an inline layer in the Access Control Policy.
B. By default all layers are shared with all policies.
C. If a connection is dropped in Network Layer, it will not be matched against the rules in Data Center Layer.
D. If a connection is accepted in Network-layer, it will not be matched against the rules in Data Center Layer.
Correct Answer: C
Explanation:
This question is based on understanding how layered security policies work in Check Point R80+ Security Management, particularly how traffic is processed across different policy layers. The screenshot shows multiple layers under the Access Control section: "Network" and "Data Center Layer". These are examples of ordered layers, which follow a specific sequence during rule evaluation.
Check Point R80 introduced the concept of ordered layers and inline layers:
Ordered layers are evaluated one after the other. The decision to pass traffic to the next layer depends on whether it was accepted in the current layer.
Inline layers are embedded within a rule in another layer. They are evaluated only if that specific rule is matched.
In the screenshot, "Data Center Layer" appears separately listed under the Access Control policy. This means it is an ordered layer, not an inline one. So Option A is incorrect.
When traffic hits a Check Point gateway, it is evaluated layer by layer:
If traffic is dropped in an earlier layer (e.g., the "Network" layer), it will not proceed to the next layer (e.g., the "Data Center Layer").
If traffic is accepted, it is then evaluated in the next layer.
This mechanism enforces a fail-early approach: no point in checking other layers if traffic is already blocked. Thus, Option C is correct: if traffic is dropped at the "Network Layer", it won’t be matched against rules in the "Data Center Layer".
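The fail-early behavior of ordered layers can be sketched in a few lines of Python. The layer names mirror the screenshot, but the rules and match conditions are invented for illustration:

```python
# Minimal sketch of ordered-layer evaluation; rule contents are hypothetical.
def match_layer(layer_rules, conn):
    """Return the action of the first rule that matches, else the implicit drop."""
    for rule in layer_rules:
        if rule["match"](conn):
            return rule["action"]
    return "drop"

def evaluate(ordered_layers, conn):
    for name, rules in ordered_layers:
        action = match_layer(rules, conn)
        if action == "drop":
            return f"dropped in {name}"   # later layers are never consulted
        # accepted here -> continue to the next ordered layer
    return "accepted by all layers"

layers = [
    ("Network",           [{"match": lambda c: c["port"] == 443, "action": "accept"}]),
    ("Data Center Layer", [{"match": lambda c: c["dst"].startswith("10.10."), "action": "accept"}]),
]
print(evaluate(layers, {"port": 443, "dst": "10.10.1.5"}))  # accepted by all layers
print(evaluate(layers, {"port": 23,  "dst": "10.10.1.5"}))  # dropped in Network
```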
A. Incorrect. The Data Center Layer is shown as a separate ordered layer, not an inline layer.
B. Incorrect. Layers are not shared by default; sharing requires explicit configuration.
C. Correct. In an ordered policy, if traffic is dropped in an earlier layer, it is not processed in subsequent layers.
D. Incorrect. If traffic is accepted in the Network Layer, it will be evaluated in the Data Center Layer, assuming that is the next ordered layer.
The question tests understanding of ordered layer processing. Since traffic dropped in the Network Layer does not proceed to the Data Center Layer, the best and correct answer is C.
Question 5
Which of the following is NOT a SecureXL traffic flow?
A. Medium Path
B. Accelerated Path
C. High Priority Path
D. Slow Path
Correct Answer: C
Explanation:
SecureXL is a performance-enhancing technology used in Check Point firewalls. It accelerates traffic flows by offloading tasks from the CPU to specialized kernel-level modules, reducing the overhead and increasing throughput. SecureXL defines different traffic flow paths based on how traffic is processed by the system. Each path represents a different level of inspection and processing depending on security features enabled and the type of traffic.
The primary SecureXL paths include:
Accelerated Path
This is the most efficient path. It handles traffic that can be processed entirely in the SecureXL kernel space, without needing to involve the Firewall kernel. Packets handled here are not inspected by deeper engines (e.g., Application Control, IPS) and include sessions that meet specific acceleration conditions (e.g., known, simple connections like ICMP or DNS). The Accelerated Path is designed for performance, where security risk is low.
Medium Path
This path processes traffic that cannot be fully handled in the Accelerated Path because it requires additional inspection, such as by the IPS or Application Control. Traffic is offloaded partially: part of the processing is done by SecureXL, and the rest is handled by the firewall kernel. This balance optimizes performance while preserving inspection fidelity.
Slow Path
Traffic that requires full inspection by multiple software blades (such as Threat Prevention, Content Inspection, or VPN decryption) is sent down the Slow Path, also known as the Firewall Path (F2F, "Forwarded to Firewall"). This means all packets are processed by the Firewall kernel and user-mode processes. These are typically complex or suspicious connections needing deep inspection.
High Priority Path
This option is not a legitimate classification within SecureXL. While Check Point does implement prioritization techniques such as QoS (Quality of Service), there is no path officially designated as the "High Priority Path" within the context of SecureXL. Therefore, this term is not recognized in any SecureXL documentation or flow classification scheme.
SecureXL officially defines three traffic flow paths: Accelerated Path, Medium Path, and Slow Path. High Priority Path is not one of them.
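To make the three-path split concrete, here is a rough, non-authoritative Python sketch that classifies a connection by the inspection features it needs. The blade names and grouping are simplified assumptions, not SecureXL internals:

```python
# Rough classification sketch based on the path descriptions above.
def securexl_path(enabled_blades):
    deep_blades = {"Threat Prevention", "Content Inspection", "VPN decryption"}
    medium_blades = {"IPS", "Application Control"}
    if enabled_blades & deep_blades:
        return "Slow Path"          # full inspection in the firewall kernel
    if enabled_blades & medium_blades:
        return "Medium Path"        # partial offload, rest handled by the firewall
    return "Accelerated Path"       # handled entirely in SecureXL

print(securexl_path(set()))                      # Accelerated Path
print(securexl_path({"Application Control"}))    # Medium Path
print(securexl_path({"Threat Prevention"}))      # Slow Path
```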
Question 6
Which of the following Automatically Generated NAT rules have the lowest implementation priority?
A. Machine Hide NAT
B. Address Range Hide NAT
C. Network Hide NAT
D. Machine Static NAT
Correct Answer: B
Explanation:
In Check Point firewalls, Network Address Translation (NAT) rules can be manually defined by the administrator or automatically generated based on object settings. When multiple NAT rules are applied to an object, Check Point uses a predefined priority hierarchy to determine which NAT rule takes precedence.
Automatic NAT rules are derived from the NAT settings within the object properties, such as for a host, network, address range, or group. When conflicting NAT types or overlapping conditions exist, Check Point must choose which rule to apply first based on implementation priority.
The general NAT priority order, from highest to lowest, is as follows:
Manual NAT rules – These always take precedence over automatic rules.
Automatic Static NAT rules – For hosts with a one-to-one mapping; e.g., internal IP statically mapped to a public IP.
Automatic Hide NAT (Host) – When a single host uses a single external IP to hide behind.
Automatic Hide NAT (Network) – A group of hosts hides behind a single IP address.
Automatic Hide NAT (Address Range) – A range of IP addresses hides behind a single IP; has the lowest priority among automatically generated rules.
Now let's analyze each option:
A. Machine Hide NAT
This refers to host-based Hide NAT. It has higher priority than address range or network NAT rules because it is specific and precise.
B. Address Range Hide NAT
This refers to NAT configured on a range of IPs, not individual machines or networks. It is less specific and is given the lowest priority among automatically generated rules.
C. Network Hide NAT
This refers to NAT applied to a subnet or network object. It has higher priority than address range NAT but lower than host NAT.
D. Machine Static NAT
Static NAT on a host has higher priority than any Hide NAT, as it involves a direct one-to-one mapping and is deterministic.
Among automatically generated rules, Address Range Hide NAT has the lowest priority. Because it applies to a broad and less specific range of IPs, Check Point NAT policy applies this only after more precise object types like host or network.
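The priority ordering above can be illustrated with a short sketch that sorts rule types by precedence. The numeric values are illustrative and simply reflect the list in this explanation:

```python
# Illustrative precedence values reflecting the priority list described above.
NAT_PRIORITY = {
    "manual":             0,   # highest
    "machine static":     1,
    "machine hide":       2,
    "network hide":       3,
    "address range hide": 4,   # lowest among automatically generated rules
}

rules = ["address range hide", "machine static", "network hide", "machine hide"]
for rule in sorted(rules, key=NAT_PRIORITY.get):
    print(rule)
# machine static, machine hide, network hide, address range hide
```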
Question 7
VPN gateways authenticate using ___________ and ___________ .
A. Passwords; tokens
B. Certificates; pre-shared secrets
C. Certificates; passwords
D. Tokens; pre-shared secrets
Correct Answer: B
Explanation:
VPN gateways in Check Point (and in most IPSec-based VPN architectures) use two primary methods for authentication: certificates and pre-shared secrets (also known as pre-shared keys or PSKs). These authentication methods are defined during VPN community configuration and determine how two gateways verify each other’s identity before establishing a secure connection.
Let’s explore these two authentication mechanisms in detail:
Certificates rely on public key infrastructure (PKI) to authenticate VPN peers. In this method:
Each gateway has a digital certificate issued by a Certificate Authority (CA), either internal (Check Point Internal CA) or external.
During VPN negotiations, the gateways exchange certificates.
The identity is validated based on the certificate’s authenticity, expiration, and trust chain.
This is considered the most secure and scalable method, especially in large environments where managing PSKs would be inefficient.
Pre-shared secrets are a shared password or key configured on both VPN gateways:
It is simpler to set up but less secure compared to certificates, especially in large deployments.
Vulnerable to brute-force or dictionary attacks if weak passwords are used.
Commonly used in smaller environments or where certificate infrastructure is unavailable.
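As a rough illustration of the pre-shared secret idea, the sketch below has a peer prove knowledge of the PSK by computing an HMAC over a nonce and its identity, loosely modeled on the PRF-over-PSK approach used in IKE. It is not the actual IKE computation, and the secret and identities are hypothetical:

```python
import hashlib
import hmac
import secrets

# Illustrative PSK-style peer proof; not the real IKE authentication payload.
PSK = b"example-pre-shared-secret"   # configured identically on both gateways

def auth_payload(psk, nonce, peer_id):
    return hmac.new(psk, nonce + peer_id, hashlib.sha256).digest()

nonce = secrets.token_bytes(32)
received = auth_payload(PSK, nonce, b"gw-branch")   # computed by the remote peer
ok = hmac.compare_digest(received, auth_payload(PSK, nonce, b"gw-branch"))
print(ok)   # True only if both sides hold the same pre-shared secret
```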
A. Passwords; tokens: Tokens are typically used in user-level authentication (e.g., remote access VPNs), not in gateway-to-gateway authentication. Passwords are not typically used in VPN gateway authentication alone.
C. Certificates; passwords: Passwords might be used for administrator access, but they are not used in VPN peer authentication between gateways.
D. Tokens; pre-shared secrets: Again, tokens are not used in gateway-level VPN authentication. This is more relevant to user VPN scenarios like two-factor authentication.
VPN gateways authenticate each other using either certificates (preferred for stronger security and manageability) or pre-shared secrets (simpler but less secure). These methods ensure that only authorized peers can establish encrypted VPN tunnels.
Question 8
In R80 spoofing is defined as a method of:
A. Disguising an illegal IP address behind an authorized IP address through Port Address Translation.
B. Hiding your firewall from unauthorized users.
C. Detecting people using false or wrong authentication logins
D. Making packets appear as if they come from an authorized IP address.
Correct Answer: D
Explanation:
Spoofing in Check Point R80 and in general network security terminology refers to the act of forging the source IP address of a packet to make it appear as though it is coming from a trusted or authorized source. This technique is often used in malicious attacks, such as unauthorized access attempts or Denial-of-Service (DoS) attacks.
In Check Point R80, Anti-Spoofing is a security mechanism configured on each interface of a gateway object. The purpose of Anti-Spoofing is to:
Verify that packets arriving at a particular interface have a valid and expected source IP address range.
If a packet's source IP address does not match the predefined network or group of addresses associated with that interface, it is flagged as spoofed and dropped.
This is critical to prevent attackers from sending packets from an external or untrusted network while pretending to originate from a trusted network.
Consider a gateway with three interfaces:
Internal (e.g., 192.168.1.0/24)
External (Internet)
DMZ (e.g., 172.16.0.0/24)
If a packet arrives at the external interface with a source IP address of 192.168.1.100, it would trigger the Anti-Spoofing protection. Since 192.168.1.0/24 is expected only on the internal interface, receiving such a packet externally suggests that the packet is spoofed.
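The anti-spoofing check in this example can be sketched as follows. The topology table mirrors the example interfaces above; the logic is a simplified illustration, not the gateway's actual implementation:

```python
import ipaddress

# Per-interface expected source networks (mirrors the example topology above).
TOPOLOGY = {
    "internal": [ipaddress.ip_network("192.168.1.0/24")],
    "dmz":      [ipaddress.ip_network("172.16.0.0/24")],
    "external": [],   # anything that does not belong to the protected ranges
}

def is_spoofed(interface, src_ip):
    """Flag packets whose source address is not expected on the arriving interface."""
    ip = ipaddress.ip_address(src_ip)
    if interface == "external":
        # A protected internal/DMZ source arriving externally is considered spoofed.
        protected = TOPOLOGY["internal"] + TOPOLOGY["dmz"]
        return any(ip in net for net in protected)
    return not any(ip in net for net in TOPOLOGY[interface])

print(is_spoofed("external", "192.168.1.100"))  # True  -> dropped as spoofed
print(is_spoofed("internal", "192.168.1.100"))  # False -> allowed
```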
A. Disguising an illegal IP address behind an authorized IP address through Port Address Translation: This describes a type of NAT, not spoofing. NAT (Network Address Translation) manipulates IP addresses for routing or obfuscation but is a legitimate and configured process, unlike spoofing.
B. Hiding your firewall from unauthorized users: This relates more to stealth features or security through obscurity. It’s not the definition of spoofing.
C. Detecting people using false or wrong authentication logins: This relates to authentication mechanisms and login security (e.g., brute force detection), not spoofing.
Spoofing is about making a packet appear to come from a legitimate or trusted source by faking its source IP address. In Check Point R80, Anti-Spoofing is implemented as a protection feature on firewall interfaces to detect and drop such malicious traffic.
Question 9
The __________ is used to obtain identification and security information about network users.
A. User Directory
B. User server
C. UserCheck
D. User index
Correct Answer: A
Explanation:
In Check Point security architecture, the User Directory serves as the mechanism through which the system retrieves identification and security-related information about users. This is vital for enabling identity-based security policies, which control network access based not just on IP addresses or services, but also on who the user is.
The User Directory is typically an LDAP (Lightweight Directory Access Protocol) server such as Microsoft Active Directory, Novell eDirectory, or similar systems. Check Point integrates with these directory services to:
Obtain user credentials and group membership.
Retrieve information for identity awareness features.
Facilitate identity-based access control in security policies.
Perform authentication and authorization checks.
The User Directory is defined in the SmartConsole and linked to Identity Awareness configurations. Once properly integrated, the firewall can enforce rules based on usernames and groups rather than relying solely on IP addresses.
Typical use cases include:
Applying rules that allow only members of the "IT Department" group to access certain network segments.
Enabling Single Sign-On (SSO) so users can authenticate without repeated logins.
Creating audit logs based on user identity rather than IP address, which may change due to DHCP.
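For illustration, the following sketch queries an LDAP directory for a user's group membership using the third-party ldap3 package. The server address, service account, base DN, and attribute names are hypothetical placeholders:

```python
# Minimal LDAP group lookup sketch with the third-party ldap3 package.
# Server, credentials, base DN, and user name are hypothetical.
from ldap3 import ALL, Connection, Server

server = Server("ldap://dc.example.local", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc-fw", password="secret", auto_bind=True)

conn.search(
    search_base="dc=example,dc=local",
    search_filter="(sAMAccountName=jdoe)",
    attributes=["memberOf"],
)
groups = conn.entries[0].memberOf if conn.entries else []
print(groups)   # group DNs that identity-based rules could be mapped to
```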
B. User server: This is not a recognized component or terminology within Check Point’s architecture. It’s a generic term that does not refer to any specific service or function.
C. UserCheck: This is a separate Check Point feature used to interact with users through browser pop-ups for security awareness (e.g., warning them when accessing questionable websites). It does not gather user identification data.
D. User index: There is no standard Check Point component by this name. It might sound like a database or listing but it is not used to obtain real-time user identity or security data.
To support identity-based policies and user-level auditing, Check Point relies on external identity sources such as LDAP directories. This is managed through the User Directory, which is the correct and official component for retrieving user and group information.
Question 10
Which Check Point Application Control feature enables application scanning and detection?
A. Application Dictionary
B. AppWiki
C. Application Library
D. CPApp
Correct Answer: B
Explanation:
In Check Point's Application Control solution, AppWiki is the core feature that enables application scanning and detection. It is an extensive and dynamic database maintained by Check Point that includes signatures, categories, and behavioral definitions for thousands of applications.
AppWiki is a real-time application categorization engine used by Check Point's Application Control and URL Filtering software blades. It allows administrators to:
Search and browse for applications by name, category, or risk level.
Understand what an application does and what type of content it may expose the network to.
Build security policies that control the use of specific applications or application categories.
AppWiki supports deep packet inspection (DPI) and uses Layer 7 inspection techniques to identify applications even when traffic uses non-standard ports or is encrypted (when combined with HTTPS inspection). This allows administrators to accurately detect and control modern application usage beyond traditional port/protocol methods.
A. Application Dictionary: This is not an official Check Point feature. It may sound plausible but is not a term or tool used in the Check Point ecosystem.
C. Application Library: Again, while this term may imply a collection of application definitions, it is not the specific engine or feature used for scanning and detection. AppWiki is the official database and categorization tool used in practice.
D. CPApp: This is not a recognized feature or component within Check Point’s Application Control framework.
When defining security rules in SmartConsole using Application Control, admins use AppWiki to:
Search for applications or websites.
Add them to policy rules.
Apply user/group-based access restrictions.
Enforce risk-based controls based on AppWiki’s metadata (e.g., "block high-risk apps").
Because AppWiki is constantly updated by Check Point, it ensures that new and emerging applications can be scanned and categorized accurately, enabling ongoing detection and control.
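As a toy illustration of the risk-based control idea, the sketch below filters a hypothetical application catalog by risk level; real metadata would come from AppWiki, and the application names here are made up:

```python
# Hypothetical catalog entries; real metadata comes from AppWiki.
APP_CATALOG = [
    {"name": "ExampleFileShare", "category": "File Sharing", "risk": 5},
    {"name": "ExampleCRM",       "category": "Business",     "risk": 1},
    {"name": "ExampleProxyTool", "category": "Anonymizer",   "risk": 5},
]

def high_risk_apps(catalog, threshold=4):
    """Select applications a 'block high-risk apps' rule would match."""
    return [app["name"] for app in catalog if app["risk"] >= threshold]

print(high_risk_apps(APP_CATALOG))   # ['ExampleFileShare', 'ExampleProxyTool']
```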
AppWiki is the authoritative application classification and detection engine that empowers Check Point Application Control. It provides both the scanning mechanism and a rich UI-backed catalog for policy design.