LFCA Linux Foundation Practice Test Questions and Exam Dumps



Question 1
Which of the following commands can be used to lock a user’s account so that they cannot log in to a Linux server, without removing any files, folders, or data?

A. lock
B. usermod
C. userdel
D. chmod

Answer:  B

Explanation:

In Linux systems, managing user accounts often involves temporarily or permanently restricting access without deleting a user's data. This is especially important in enterprise environments or servers, where administrative control over user access is critical. One such method is locking a user account, which ensures that a user cannot log in but their files and account information remain intact.

Let’s explore what each option does to better understand why B is correct:

A. The lock command might seem intuitively correct because of its name, but it is not a standard command on most Linux distributions. In fact, there is no default lock command used to manage user accounts. While some desktop environments might have a lock utility for graphical session locking, it is unrelated to user account management. Therefore, this is not the correct choice.

B. The usermod command is used to modify user account properties. One of the key options it offers is -L, which locks a user's password. When a user’s account is locked using usermod -L username, the system prepends an exclamation mark (!) to the user’s encrypted password in /etc/shadow. This effectively disables password-based login for that user. Importantly, this does not delete any of the user's files or data, and the account can be unlocked later using usermod -U username. This makes usermod the correct and safest way to lock a user account without deleting any data.
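
For illustration, here is a minimal sketch of locking and unlocking an account, assuming a hypothetical user named alice:

  # Lock alice's password (prepends ! to the hash in /etc/shadow)
  sudo usermod -L alice

  # Verify the lock status ("L" in the second field means locked)
  sudo passwd -S alice

  # Unlock the account later without touching any files
  sudo usermod -U alice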

C. The userdel command is used to delete a user account. Depending on the flags used (like -r), it may also delete the user’s home directory and files. Even without that option, it still removes the user account itself, which is not what the question is asking. The requirement is to prevent login without deleting any data, so userdel is not appropriate here.

D. The chmod command is used to change file permissions. While it can alter who can read, write, or execute files, it does not manage user account access. Using chmod on user files or directories could unintentionally restrict access, but it does not prevent a user from logging in. Additionally, it risks breaking the user’s file environment rather than disabling access at the account level.

In summary, usermod is the appropriate tool for locking a user account without deleting any of their data. This command provides a reversible and non-destructive method to temporarily suspend user access — a critical capability in system administration.

Thus, the correct answer is B.



Question 2

Which of the following technologies is supported by the majority of cloud providers for orchestrating containerized applications?

A. Kubernetes
B. Vagrant
C. Ansible
D. Terraform

Answer:  A

Explanation:

When deploying and managing containerized applications—which are lightweight, portable, and efficient ways to run software in isolated environments—organizations need orchestration tools that automate tasks like deployment, scaling, networking, and availability. Among the many technologies available, cloud providers have largely coalesced around one dominant platform: Kubernetes.

Let’s examine the provided options to understand why Kubernetes is the correct answer:

A. Kubernetes is an open-source container orchestration system originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It is specifically designed to automate the deployment, scaling, and management of containerized applications. Kubernetes offers features such as service discovery, load balancing, self-healing, rolling updates, and persistent storage integration. Most major cloud platforms, including Amazon Web Services (EKS), Google Cloud Platform (GKE), Microsoft Azure (AKS), and IBM Cloud Kubernetes Service, offer full support for Kubernetes. Because of this widespread adoption and native support, Kubernetes has become the de facto standard for container orchestration, making it the best and most accurate answer.
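
As a rough illustration of what this orchestration looks like in practice (a sketch only, assuming a cluster is already configured and kubectl is installed; the deployment name is hypothetical):

  # Create a deployment running three nginx containers
  kubectl create deployment web --image=nginx --replicas=3

  # Expose it inside the cluster as a service on port 80
  kubectl expose deployment web --port=80

  # Scale it up; Kubernetes schedules the extra pods automatically
  kubectl scale deployment web --replicas=5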

B. Vagrant is a tool for building and managing virtual machine environments in a single workflow. While it is useful for development environments and can be used to configure virtualized infrastructure using tools like VirtualBox or VMware, Vagrant is not used for orchestrating containers. It doesn't provide capabilities for managing container lifecycle, clustering, or distributed applications. As such, it is not the technology cloud providers rely on for container orchestration.

C. Ansible is a configuration management tool developed by Red Hat. It allows for the automation of software provisioning, configuration management, and application deployment. While Ansible can be used to install and configure Kubernetes clusters, it is not itself a container orchestration tool. Its role is in automation and system setup, not in managing container workloads at runtime.

D. Terraform is an infrastructure-as-code (IaC) tool developed by HashiCorp. It enables users to define and provision infrastructure using declarative configuration files. Terraform is often used to deploy infrastructure, including Kubernetes clusters, but it does not manage containers directly. Once the infrastructure is in place, a tool like Kubernetes is still required to orchestrate containers.

To summarize, while tools like Ansible and Terraform play supporting roles in automation and infrastructure provisioning, and Vagrant is primarily used for local virtualized environments, only Kubernetes is explicitly designed for and widely adopted as the platform for orchestrating containerized applications. Its broad support across all major cloud providers solidifies it as the correct answer.

Thus, the correct answer is A.



Question 3

An IT team is currently implementing a custom software platform to address some key needs of the company. Which of the following is considered a functional requirement?

A. Identifying the purpose of the proposed system
B. Identifying the users of the proposed system
C. Identifying the development methodology
D. Identifying the technology stack of the proposed system

Answer: A

Explanation:

In software engineering and systems development, requirements are typically divided into two broad categories: functional requirements and non-functional requirements. Understanding the difference between these is crucial in the planning and execution of a software project.

Functional requirements describe what the system should do — that is, the specific behavior or functions of the system, such as the operations, inputs, outputs, and interactions that the system must support. These are the tasks the software must accomplish in order to satisfy the business goals or user needs.

Let’s explore each option to determine which one fits the definition of a functional requirement:

A. Identifying the purpose of the proposed system fits as a functional requirement. When we define a system's purpose, we are typically referring to what the system is supposed to do — for example, “the system must allow users to submit and track customer service requests,” or “the system must process payroll for employees every two weeks.” This aligns directly with the definition of a functional requirement because it outlines the system’s intended functions or operations. Therefore, this is the correct answer.

B. Identifying the users of the proposed system is important for user modeling and for defining user roles, but it is not a functional requirement by itself. This is more of a stakeholder analysis or part of the user requirement specification. While it informs functional and non-functional requirements, it does not directly describe a function of the system. It’s more accurately categorized under contextual or stakeholder requirements, not functional.

C. Identifying the development methodology (e.g., Agile, Waterfall, DevOps) is a decision about how the software will be developed, not about what the software will do. This is considered a process-oriented concern, not a system requirement. It’s not part of the software’s functional or non-functional requirements at all but is instead a project management decision.

D. Identifying the technology stack (e.g., using JavaScript, Python, PostgreSQL) is a technical decision related to system architecture and implementation. While it’s essential for developers and architects, it does not describe what the system is supposed to do. This is therefore a technical constraint or implementation detail, not a functional requirement.

To clarify further, an example of a functional requirement would be: “The system shall allow users to reset their password via email verification.” This describes a specific behavior the software must implement. An example of a non-functional requirement might be: “The system must respond to all user requests within two seconds.” This describes a quality attribute of the system’s performance, not a direct function.

In summary, of the four options provided, only identifying the purpose of the system directly relates to the expected behavior or functionality of the system — making it a functional requirement. The other options relate to users, development methodology, or technical tools, which are all important but fall outside the scope of functional requirements.

Thus, the correct answer is A.



Question 4

A server on the network is unreachable. What is the best method to verify connectivity between your computer and the remote server?

A. lookup
B. find
C. ping
D. netstat

Answer:  C

Explanation:

When troubleshooting network issues—such as when a server is unreachable—the first step is often to check basic connectivity between your local machine and the remote host. This helps determine whether the issue is due to network configuration, the server being down, or other factors. The most direct and widely used tool for this purpose is the ping command.

Let’s examine each option and explain why ping is the most appropriate method in this scenario:

A. The term lookup by itself does not refer to a standard networking command. You might be thinking of nslookup, which is used to query DNS servers to obtain the IP address of a domain name. While useful for resolving domain names, nslookup does not check if the destination is reachable—only that a domain name resolves to an IP address. Therefore, this tool doesn’t confirm whether the server is up or reachable over the network.

B. The find command is typically used to search for files and directories on a filesystem. It has no network diagnostic capabilities. It is unrelated to checking connectivity or server availability. This makes it completely unsuitable for this scenario.

C. The ping command is the standard and most straightforward tool used to verify network connectivity between two devices. It works by sending ICMP (Internet Control Message Protocol) Echo Request packets to the specified host and listening for Echo Reply packets. If the destination host replies, it indicates that the server is online and reachable over the network. If the server does not respond, the user receives a timeout or error, which suggests that the server may be down, a firewall may be blocking ICMP traffic, or there's a network issue. Because of its simplicity, speed, and ability to instantly indicate connectivity status, ping is the go-to command in this situation.
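
A quick sketch of how this looks in practice (the host name is hypothetical):

  # Send four ICMP echo requests and stop (-c limits the count on Linux)
  ping -c 4 server.example.com

  # Typical success output shows replies and round-trip times, e.g.:
  #   64 bytes from 192.0.2.10: icmp_seq=1 ttl=64 time=0.42 ms
  # A failure shows "Destination Host Unreachable" or 100% packet loss.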

D. The netstat (network statistics) command is used to display active network connections, routing tables, and interface statistics. While it’s useful for examining what connections your computer currently has and what ports are being used, it does not test connectivity to a remote host. Netstat can be part of broader diagnostics but won’t tell you whether a remote server is reachable.

To summarize, ping is the most effective and appropriate command to use when you need to verify whether your computer can communicate with a remote server. It provides immediate, clear feedback about connectivity and latency and is universally available on almost all operating systems. It’s often the first tool used in any network troubleshooting process for good reason.

Thus, the correct answer is C.



Question 5

A company’s IT associate lists the contents of a directory and sees this line:
-rwsr-x--x 2 bob sales 2047 Oct 10 09:44 sales-report

What happens when Alice from the accounting team tries to execute this file?

A. The script executes using Bob’s account.
B. The script executes, but Alice cannot see the results.
C. The script executes and Bob is notified.
D. The script fails to execute; Alice is not on the sales team.

Answer:  A

Explanation:

This question revolves around understanding Linux file permissions, specifically the setuid bit and how it affects file execution. The command output shown:

-rwsr-x--x 2 bob sales 2047 Oct 10 09:44 sales-report

contains several elements worth breaking down, especially the permissions field: -rwsr-x--x.

This permission string can be dissected as follows:

  • The first character (-) indicates it is a regular file.

  • The next three characters (rws) are the owner's permissions: read, write, and setuid (represented by the s in place of the execute bit).

  • The next three (r-x) are the group's permissions: read and execute.

  • The last three (--x) are the others' permissions: execute only.

Key details:

  • The file owner is bob.

  • The group is sales.

  • The file has a setuid bit (s) on the owner's execute bit.

What is the setuid bit?

The setuid (Set User ID) permission causes a program to run with the privileges of the file’s owner, not the user who is running it. In this case, the file is owned by bob, so anyone executing the file does so as if they were bob, regardless of their own user identity — as long as they have execute permission.
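
For reference, this is roughly how such a file would be inspected and how the setuid bit is set (a sketch; the file name matches the question):

  # Show the permission string; the 's' in the owner triplet is setuid
  ls -l sales-report
  # -rwsr-x--x 2 bob sales 2047 Oct 10 09:44 sales-report

  # The owner (or root) could have set the bit like this:
  chmod u+s sales-report      # symbolic form
  chmod 4751 sales-report     # octal form: 4 = setuid, 751 = rwxr-x--x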

Alice’s execution scenario:

  • Alice is from the accounting team and is not a member of the sales group. However, the “others” permissions section (--x) grants execute permission to all other users who are not the owner and not in the group.

  • This means Alice is allowed to execute the file due to the --x permission for others.

  • Because the setuid bit is set (s in the owner’s execute bit), the file runs with bob’s privileges, not Alice’s.

Evaluating each option:

A. The script executes using Bob’s account.
This is accurate. Due to the setuid bit, the script runs with the effective UID of bob, not Alice. This is precisely how setuid is designed to function.

B. The script executes, but Alice cannot see the results.
This is incorrect. Nothing in the question indicates that Alice would be unable to see the script’s output. Whether results are visible depends on the script’s internal logic, which is not described.

C. The script executes and Bob is notified.
Also incorrect. Linux does not automatically notify file owners when their files are executed. Such behavior would require custom logging or notification scripts, which are not mentioned here.

D. The script fails to execute; Alice is not on the sales team.
This is false. Although Alice is not in the sales group, the “others” permissions allow her to execute the file. Being in the sales group is not a requirement in this context.

In summary, because the setuid bit is present and Alice has execute permission via the “others” section, the script will run with the privileges of bob’s account when executed by Alice.

Thus, the correct answer is A.



Question 6

A software development team uses a single physical server for testing the latest code in multiple environments: development, pre-production, and production.

What is the recommended approach to maintain the basic security of these environments?

A. Assign different developers on the team to work on test, pre-prod, and prod code.
B. Implement peer review for all the changes deployed into any of the environments.
C. Develop and deploy each environment with its own set of software tools.
D. Use different user/group IDs for deploying and running workload in each environment.

Answer: D

Explanation:

When multiple environments—such as development, pre-production, and production—share the same physical server, the risk of accidental cross-contamination, privilege escalation, or unauthorized access increases significantly. Therefore, a secure and isolated approach is necessary, even in a resource-constrained setup.

In such cases, one effective and basic security control is to ensure that each environment operates under separate user and group IDs. This prevents one environment’s processes or users from interfering with or accessing the resources of another.

Let’s analyze each of the options to determine why D is the best choice:

A. Assign different developers on the team to work on test, pre-prod, and prod code.
While this may help enforce segregation of duties and potentially reduce human error or unauthorized changes, it is more of a personnel management strategy than a technical security control. It does not address system-level isolation or access control between environments. Additionally, team members often need to work across environments for integration and debugging, so this is not a scalable or robust security measure by itself.

B. Implement peer review for all the changes deployed into any of the environments.
Peer review is a software quality assurance practice, not a direct security mechanism. Although peer reviews can help catch bugs or security vulnerabilities in code, they do not isolate or protect environments at the system level. They also do not prevent accidental or malicious interactions between processes from different environments.

C. Develop and deploy each environment with its own set of software tools.
Using different tools for each environment may help avoid configuration conflicts, but it does not provide access control. All processes could still be running under the same user or with shared permissions, meaning one environment’s process could still access files or data from another if no user/group separation exists. This approach might lead to unnecessary complexity without solving the core security concern.

D. Use different user/group IDs for deploying and running workload in each environment.
This is the correct answer and the recommended basic security practice. By assigning unique user and group IDs to each environment:

  • You enforce access control boundaries at the OS level.

  • Each environment’s files and processes can be isolated using file permissions.

  • It is easier to apply resource limits, log activity, and perform auditing per environment.

  • If one environment is compromised or malfunctioning, it is less likely to affect the others.

Even on a single physical server, this approach allows you to simulate isolated environments securely without full virtualization or containers. It's a commonly used security practice in shared systems or when resources are limited.
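
A minimal sketch of what this separation could look like on one server (all account, group, and path names are hypothetical):

  # Create a dedicated account and group per environment
  sudo groupadd dev  && sudo useradd -m -g dev  app-dev
  sudo groupadd prod && sudo useradd -m -g prod app-prod

  # Give each environment its own directory, owned and readable only by it
  sudo mkdir -p /srv/app-dev /srv/app-prod
  sudo chown app-dev:dev   /srv/app-dev
  sudo chown app-prod:prod /srv/app-prod
  sudo chmod 750 /srv/app-dev /srv/app-prod

  # Workloads are then started as the matching user, for example:
  sudo -u app-prod /srv/app-prod/start.sh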

In more advanced scenarios, containerization (e.g., using Docker) or virtualization would offer even greater isolation, but at the basic level, differentiating users and groups for each environment is essential for protecting integrity and access boundaries.

Thus, the correct answer is D.



Question 7

Which utility is used to create public and private key pairs for SSH authentication?

A. adduser
B. ssh-keygen
C. keygen
D. ssh

Answer:  B

Explanation:

In the context of SSH (Secure Shell) authentication, the most secure and commonly used method involves public-key cryptography. This technique requires the generation of a key pair: one private key that remains on the user’s machine, and one public key that is placed on the remote server in a special file (~/.ssh/authorized_keys). The correct utility for generating this pair is crucial to enabling secure, passwordless authentication over SSH.

Let’s evaluate the given options in detail to determine why B is the correct answer.

A. adduser is a command used to create new user accounts on a Linux or Unix-like system. It allows administrators to specify user details like username, password, home directory, and shell configuration. However, it does not generate SSH keys and is unrelated to authentication mechanisms based on cryptographic key pairs. Its role is strictly in account management, not secure access provisioning.

B. ssh-keygen is the correct and standard utility used to generate SSH key pairs. When this command is executed (e.g., ssh-keygen -t rsa), it:

  • Prompts the user to choose a location for saving the private key (usually ~/.ssh/id_rsa) and automatically creates the corresponding public key (e.g., ~/.ssh/id_rsa.pub).

  • Optionally allows the user to set a passphrase for additional security.

  • Supports various algorithms including RSA, ECDSA, and ED25519.

This utility is part of the OpenSSH suite and is widely supported across Unix, Linux, and even Windows (via Windows Subsystem for Linux or native OpenSSH support). Because it is specifically designed to create, manage, and inspect SSH key pairs, it is the most appropriate tool for this task.
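
A typical key-generation workflow looks like this (a sketch; the user and host names are hypothetical):

  # Generate an ED25519 key pair; accept the default path ~/.ssh/id_ed25519
  ssh-keygen -t ed25519 -C "deploy key for example.com"

  # Copy the public key into the server's ~/.ssh/authorized_keys
  ssh-copy-id user@server.example.com

  # Log in; the key is used instead of a password
  ssh user@server.example.com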

C. keygen might sound like a reasonable option, but it is not a valid or standard command on Linux systems. There is no keygen utility in the default set of Unix or Linux tools for managing SSH keys. In other contexts (like cryptographic libraries or proprietary tools), a generic term like "keygen" might be used, but for SSH authentication, the only proper utility is ssh-keygen.

D. ssh is the client utility used to establish secure connections to remote servers. For example, the command ssh user@host initiates an SSH session. While it utilizes the keys that have been generated, it does not create or manage key pairs. Its function is in connection handling, not key creation.

In conclusion, when a user needs to set up secure SSH access using public-key authentication, the only correct utility from the options listed is ssh-keygen. It generates a private and public key pair that enables encrypted communication and can eliminate the need for password-based authentication. Proper use of ssh-keygen improves security, simplifies automated tasks (like deployments or backups), and is a foundational part of modern system administration.

Thus, the correct answer is B.



Question 8

What does LVM stand for?

A. Logical Virtualization Manager
B. Linux Volume Manager
C. Logical Volume Manager
D. Linux Virtualization Manager

Answer: C

Explanation:

LVM stands for Logical Volume Manager, and it is a powerful disk management system used in Linux and other Unix-like operating systems. It provides an abstraction layer over physical storage devices, enabling flexible management of disk space, which is especially useful for servers and enterprise systems that require scalable and dynamic storage solutions.

Let’s explore what LVM does and why Logical Volume Manager is the correct definition:

What is LVM?

LVM allows system administrators to create, resize, and manage logical volumes (LVs) instead of working directly with physical disk partitions. It sits between the physical storage devices (like hard drives or SSDs) and the file systems, offering a logical representation of available storage.

Core Components of LVM:

  1. Physical Volumes (PVs): These are actual storage devices (like /dev/sda1, /dev/sdb) that LVM manages. These devices are initialized for LVM use with the pvcreate command.

  2. Volume Groups (VGs): These act like a storage pool. Multiple physical volumes can be grouped into a single volume group using vgcreate, allowing all available space to be treated as one large storage pool.

  3. Logical Volumes (LVs): These are the partitions or "volumes" created from a volume group. Think of them as flexible partitions that can grow or shrink as needed. File systems like ext4 or xfs are typically created on logical volumes.

This layered approach gives LVM several advantages over traditional partitioning:

  • Dynamic resizing: Logical volumes can be resized (grown or shrunk) without rebooting, making storage management much more flexible.

  • Snapshots: LVM supports the creation of snapshots, which are consistent point-in-time copies of volumes. This is extremely useful for backups or testing changes without affecting live data.

  • Storage aggregation: You can combine multiple disks into a single volume group, effectively allowing you to treat multiple physical devices as one logical storage unit.
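
As an illustration of the typical workflow using the commands mentioned above (a sketch, assuming /dev/sdb and /dev/sdc are spare disks and the volume names are hypothetical):

  # Initialize the physical volumes
  sudo pvcreate /dev/sdb /dev/sdc

  # Pool them into one volume group
  sudo vgcreate vg_data /dev/sdb /dev/sdc

  # Carve out a 50 GB logical volume and put a filesystem on it
  sudo lvcreate -L 50G -n lv_app vg_data
  sudo mkfs.ext4 /dev/vg_data/lv_app

  # Later, grow the volume and resize the filesystem online
  sudo lvextend -L +20G -r /dev/vg_data/lv_app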

Evaluating the Options:

A. Logical Virtualization Manager — This sounds plausible but is incorrect. LVM deals with volume management, not virtualization.

B. Linux Volume Manager — Although LVM is used on Linux systems, the “L” stands for “Logical,” not “Linux.” The name describes what it manages — logical volumes — rather than the operating system it runs on.

C. Logical Volume Manager — This is the correct and widely accepted name. It accurately reflects the core functionality: managing logical volumes.

D. Linux Virtualization Manager — Again, this is incorrect. LVM is not a virtualization tool; it doesn't deal with virtual machines or hypervisors.

Why LVM Matters:

In environments where storage requirements change over time—such as growing application data or allocating more space to users—LVM provides the flexibility to adapt without major system overhauls. For system administrators, LVM is a foundational tool for disk management, resilience, and scalability.

In summary, LVM stands for Logical Volume Manager, a toolset that makes managing disk storage in Linux systems more dynamic and versatile than traditional partitioning schemes.

Thus, the correct answer is C.



Question 9

Encryption that uses both a private key and public key is known as what?

A. Key Pair Encryption (symmetric cryptography)
B. HMAC Cryptography (hash based message authentication)
C. Public Key Cryptography (asymmetric cryptography)
D. DPE (dual-phased hybrid encryption)

Answer: C

Explanation:

Encryption that uses both a private key and a public key is a type of asymmetric cryptography, often referred to as public key cryptography. In this model, there are two separate keys involved—one for encryption (the public key) and one for decryption (the private key). This approach forms the foundation of many cryptographic protocols and systems, including secure communications over the internet, digital signatures, and certificate-based authentication.

Let’s break down the concepts and options to see why C is the correct answer:

Asymmetric Cryptography (Public Key Cryptography)

  • Asymmetric cryptography, also known as public key cryptography, uses a pair of keys: a public key and a private key.

  • The public key is shared openly and can be used by anyone to encrypt messages. However, only the private key, which is kept secret, can decrypt those messages.

  • This system allows for secure communication without the need to share a secret key in advance. In addition, it enables digital signatures, where a private key is used to sign data, and anyone with the corresponding public key can verify the authenticity of the signature.

How does it work in practice?

  • Encryption: A sender uses the recipient’s public key to encrypt the data.

  • Decryption: The recipient uses their private key to decrypt the data.

  • This ensures confidentiality and security, as only the owner of the private key can decrypt the data.

Example:

  • SSL/TLS (used in HTTPS): When you visit a website using HTTPS, your browser uses the server’s public key to encrypt data (like your login credentials or credit card information). The server then uses its private key to decrypt that data, ensuring it remains secure in transit.
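
A small sketch using the openssl command line shows the two keys in action (file names are hypothetical):

  # Generate a 2048-bit RSA private key and derive its public key
  openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem
  openssl rsa -in private.pem -pubout -out public.pem

  # Anyone can encrypt with the public key...
  openssl pkeyutl -encrypt -pubin -inkey public.pem -in message.txt -out message.enc

  # ...but only the private key holder can decrypt
  openssl pkeyutl -decrypt -inkey private.pem -in message.enc -out message.dec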

Evaluating the Options:

A. Key Pair Encryption (symmetric cryptography)
This is incorrect. Key Pair Encryption implies using two keys, but the term symmetric cryptography refers to a type of encryption where the same key is used for both encryption and decryption. Symmetric cryptography does not use separate private and public keys. Examples of symmetric encryption algorithms include AES (Advanced Encryption Standard) and DES (Data Encryption Standard).

B. HMAC Cryptography (hash-based message authentication)
HMAC (Hash-based Message Authentication Code) is a mechanism for message authentication using a cryptographic hash function and a secret key. While HMAC provides data integrity and authenticity, it does not involve the use of both a public and private key for encryption and decryption. Instead, it uses a shared secret key and a hashing algorithm, which makes it different from asymmetric (public-key) encryption.

C. Public Key Cryptography (asymmetric cryptography)
This is the correct answer. Public Key Cryptography is another name for asymmetric cryptography, where encryption and decryption are done using a pair of keys: a public key and a private key. The public key is used to encrypt the message, and the private key is used to decrypt it. This method is widely used for secure communication, digital signatures, and key exchange protocols.

D. DPE (dual-phased hybrid encryption)
This is not a standard term in cryptography. While hybrid encryption systems do exist (combining symmetric and asymmetric cryptography), DPE as a term is not commonly used or recognized in the context of encryption involving both public and private keys. Hybrid encryption typically uses asymmetric encryption to securely exchange a symmetric key, which is then used for the encryption of larger data sets.

In summary, the encryption method that uses both a public key for encryption and a private key for decryption is known as public key cryptography or asymmetric cryptography. This method is fundamental to many secure communications protocols, including email encryption, web security (HTTPS), and digital signatures.

Thus, the correct answer is C.



Question 10

An IT associate would find the log files for syslog in which of the following directories?

A. /var/log
B. /usr/local/logs
C. /home/logs
D. /etc/logs

Answer:  A

Explanation:

Syslog is a standard for logging system messages in Unix-like operating systems, including Linux. It collects and stores logs generated by the operating system and applications for system administrators to monitor, troubleshoot, and maintain the system's health. These log files provide crucial information about the activities and errors occurring within the system, such as authentication events, system startups and shutdowns, network issues, or application crashes.

By default, syslog log files are stored in specific directories on the system. The most common location for syslog log files is /var/log. Let’s analyze each option to understand why /var/log is the correct directory.

/var/log

  • /var/log is the standard directory for storing log files in Unix-like operating systems. The syslog files, along with logs for other system services (like kernel logs, application logs, and security logs), are stored here.

  • Common log files found in /var/log include:

    • /var/log/syslog (main system log)

    • /var/log/messages (general system messages)

    • /var/log/auth.log (authentication logs)

    • /var/log/daemon.log (logs from system daemons)

    • /var/log/kern.log (kernel messages)

  • Syslog and other system logging utilities typically direct their log outputs to this directory by default.

Evaluating the Other Options:

B. /usr/local/logs

  • /usr/local is a directory intended for user-installed software and applications, not for system logs. While some applications may create their own log directories under /usr/local, this is not the default location for syslog or other system logs.

C. /home/logs

  • /home is where user-specific directories are typically located. Each user on a system has their own home directory (e.g., /home/user1). It’s not a location for system-wide logs, and you would not find syslog files here. This directory is generally used for storing personal files and configurations.

D. /etc/logs

  • /etc is the configuration directory on Unix-like systems, containing configuration files for system services and applications. However, log files are typically not stored in /etc. It’s important to distinguish between configuration files (like /etc/syslog.conf or /etc/rsyslog.conf) and actual log files, which are stored in /var/log.

Why /var/log is the Correct Choice:

The /var/log directory is specifically designed for storing log files generated by system processes, applications, and services. The naming convention used by syslog and other logging systems ensures that logs are stored in this centralized location, making them easy to access and manage. System administrators routinely check the contents of this directory to troubleshoot issues or monitor system activity.
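
In practice, an IT associate would typically inspect these logs with commands like the following (a sketch; exact file names vary by distribution, and on systemd-based systems journalctl supplements or replaces some files):

  # Follow the main system log as new entries arrive
  tail -f /var/log/syslog          # Debian/Ubuntu
  tail -f /var/log/messages        # RHEL/CentOS

  # Search authentication events
  grep sshd /var/log/auth.log

  # On systemd-based systems, query the journal instead
  journalctl -u ssh --since "1 hour ago"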

Thus, the correct answer is A.

