LPI 701-100 Practice Test Questions and Exam Dumps

Question 1

Which of the following statements are true about Jenkins? (Choose two correct answers.)

A. Jenkins is specific to Java-based applications.
B. Jenkins can delegate tasks to slave nodes.
C. Jenkins only works on local files and cannot use SCM repositories.
D. Jenkins' functionality is determined by plugins.
E. Jenkins includes a series of integrated testing suites.

Correct Answer: B, D

Explanation:
Jenkins is a widely used open-source automation server that facilitates continuous integration (CI) and continuous delivery (CD) in software development. It is not tied to a specific programming language or development environment, and its core strength lies in its extensibility and distributed build capabilities. To understand the correct answers, let’s review each option.

  • A. Jenkins is specific to Java-based applications:
    This is incorrect. While Jenkins itself is written in Java and runs on a Java Virtual Machine (JVM), it is not limited to Java-based applications. Jenkins is language-agnostic and supports building and testing applications written in a wide variety of programming languages, including Python, Ruby, .NET, Node.js, Go, and more. It achieves this flexibility through the use of plugins and integration with different build tools (e.g., Maven, Gradle, Ant, npm).

  • B. Jenkins can delegate tasks to slave nodes:
    This is correct. Jenkins supports a master-agent architecture (formerly known as master-slave), where the Jenkins master can delegate tasks to one or more agents (nodes). These agents can run builds and other tasks on separate machines, helping to distribute workload and scale operations. This is a key feature of Jenkins that allows for parallel execution, load balancing, and isolation of different build environments.

  • C. Jenkins only works on local files and cannot use SCM repositories:
    This is incorrect. One of Jenkins' core capabilities is its integration with source control management (SCM) systems like Git, Subversion (SVN), Mercurial, and others. Jenkins can be configured to automatically pull code from SCM repositories, trigger builds based on code changes, and even poll repositories on a schedule. Limiting Jenkins to local files would contradict its purpose as a CI/CD tool.

  • D. Jenkins' functionality is determined by plugins:
    This is correct. Plugins are at the heart of Jenkins' flexibility and power. Jenkins has a vast ecosystem of plugins that enable it to integrate with virtually every tool and system used in software development and delivery: SCM systems, build tools, testing frameworks, deployment systems, notification services, and more. By default, Jenkins has a basic set of features, but its functionality can be extended extensively through these plugins.

  • E. Jenkins includes a series of integrated testing suites:
    This is incorrect. Jenkins does not come with built-in testing suites. Instead, it relies on external tools and frameworks (like JUnit, TestNG, Selenium, etc.) for test execution. Jenkins can integrate with these tools via plugins and provide reports and feedback based on their outputs, but it does not include native test suites as part of its core.

Jenkins is a powerful automation server designed to support a wide range of application types and development workflows. Its ability to distribute tasks across agents and its plugin-based architecture make it highly flexible and scalable. While it integrates with external testing tools and version control systems, it is not limited to Java, local files, or built-in test frameworks.
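
As a small, hypothetical illustration of point B, a declarative pipeline can delegate an individual stage to an agent selected by label; the label linux-agent below is only a placeholder for whatever node labels a given Jenkins installation defines:

pipeline {
    agent none                            // no global agent; each stage picks its own node
    stages {
        stage('Build') {
            agent { label 'linux-agent' } // delegate this stage to a matching agent
            steps {
                echo "Running on ${env.NODE_NAME}"
            }
        }
    }
}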

The correct answers are B and D.

Question 2

Which of the following statements about microservices are true? (Select three correct answers.)

A. Microservices facilitate the replacement of the implementation of a specific functionality.
B. Microservices applications are hard to scale because microservice architecture allows only one instance of each microservice.
C. Integration tests for microservices are not possible until all microservices forming a specific application are completely developed.
D. Interaction between microservices can be slower than the interaction of similar components within a monolithic application.
E. Within one application, individual microservices can be updated and redeployed independent of the remaining microservices.

Answer: A, D, E

Explanation:

Microservices architecture is an approach to software development where an application is built as a collection of loosely coupled, independently deployable services. Each microservice is focused on a specific business capability and communicates with others typically over a network using lightweight protocols, such as HTTP or messaging queues. Let's evaluate each option in detail to determine which statements are true.

Option A: Microservices facilitate the replacement of the implementation of a specific functionality.

This is true. One of the key advantages of microservices is modularity and loose coupling. Each service is self-contained and focuses on a single responsibility. This means the implementation of any given functionality can be replaced or refactored without affecting other services, as long as the contract or API remains the same. This supports innovation, faster updates, and technology flexibility across teams.

Option B: Microservices applications are hard to scale because microservice architecture allows only one instance of each microservice.

This is false. In fact, microservices are highly scalable. One of the major benefits of this architecture is that each service can be independently scaled based on demand. For example, if a service that handles image processing becomes a bottleneck, it can be deployed in multiple instances independently of other services. This flexibility improves resource utilization and system responsiveness.
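
As a hypothetical illustration (assuming the image-processing service runs as a Kubernetes Deployment named image-processor), only that one service needs to be scaled out, leaving the rest of the application untouched:

# add more instances of just the bottlenecked service
kubectl scale deployment image-processor --replicas=5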

Option C: Integration tests for microservices are not possible until all microservices forming a specific application are completely developed.

This is false. While integration testing in microservices can be more complex, it is certainly possible to test services individually and in groups before the entire system is completed. Techniques like mocking, contract testing, and service virtualization allow developers to test how a microservice would interact with others, even if those other services aren’t fully implemented yet.

Option D: Interaction between microservices can be slower than the interaction of similar components within a monolithic application.

This is true. In a monolithic application, components typically interact through in-memory method calls, which are fast. In contrast, microservices communicate over a network, often using HTTP or other protocols, which introduces latency. Additionally, serialization, deserialization, and network overhead can further slow interactions compared to direct in-process calls in monolithic systems.

Option E: Within one application, individual microservices can be updated and redeployed independent of the remaining microservices.

This is true. One of the major advantages of microservices is independent deployment. Because each microservice is isolated and has its own lifecycle, it can be updated, tested, and redeployed without needing to redeploy the entire application. This enables faster release cycles and reduces the risk of system-wide failure during updates.

The correct and true statements about microservices are:

  • A: They support replacing specific functionality.

  • D: Interactions can be slower due to network overhead.

  • E: They allow independent updates and deployments.

Hence, the correct answers are A, D, and E.

Question 3

When deploying a new version of a service using the canary deployment strategy, which of the following statements about database behavior are correct? (Choose two correct answers.)

A. Changes to the database schema can take a long time and reduce database performance.
B. Traffic to the database will significantly increase because of the additional service instance.
C. The database schema must be compatible with all running versions of a service.
D. The database is locked while its content is copied to the canary database.
E. Canary deployments require two synchronized instances of each database.

Answer: A, C

Explanation:

Canary deployments are a strategy used to roll out new versions of a service in a controlled and incremental manner. A small portion of traffic is first routed to the new version, and then gradually increased once it’s confirmed to be stable. While this approach minimizes risk and allows for quick rollback, it introduces several considerations—particularly when it comes to database schema compatibility and performance.

Let’s analyze the correct answers:

A. Changes to the database schema can take a long time and reduce database performance:
This is true. When deploying a new version of a service, it may include changes to the database schema such as adding or altering tables, creating indexes, or modifying constraints. These operations can be resource-intensive, especially on large production databases, and may lead to performance degradation, longer deployment times, or even temporary locking of resources. During a canary deployment, these issues are critical because the legacy version and new version of the service may be accessing the database simultaneously, which adds complexity.

C. The database schema must be compatible with all running versions of a service:
This is another correct statement. In canary deployments, both the current (old) and the canary (new) versions of the service often run in parallel. They usually share the same database, so any schema changes must be backward compatible with the existing version and forward compatible with the new version. This ensures that both versions of the service can interact with the database without conflicts or runtime failures. For example, you may need to add columns rather than remove or rename them, and delay deletion of deprecated fields until all instances use the new schema.

Now, let's evaluate the incorrect options:

B. Traffic to the database will significantly increase because of the additional service instance:
This is misleading. While an additional instance (the canary) is introduced, it typically serves a small percentage of total traffic (often just 5–10%). This does not significantly increase database load, especially compared to regular operational fluctuations. Any increase is marginal and typically accounted for in the capacity planning of a well-architected system.

D. The database is locked while its content is copied to the canary database:
This is incorrect. Canary deployments do not involve duplicating the database. In most cases, both the old and new versions of the service continue to interact with the same database. Locking the database to copy data is unnecessary and would contradict the purpose of a smooth, low-risk deployment strategy. Schema migration tools like Liquibase or Flyway often support non-locking, rolling updates to prevent service disruptions.

E. Canary deployments require two synchronized instances of each database:
This is also false. Unlike blue-green deployments, which may use duplicated databases, canary deployments almost always use a shared database. Maintaining two synchronized instances of a production database is highly complex and rarely done, especially in canary scenarios. Ensuring real-time synchronization and consistency between two live databases would be overly complicated and counter to the purpose of canary releases.

In conclusion, the most accurate statements regarding databases during a canary deployment are that schema changes can impact performance and that the schema must remain compatible with all running versions. Therefore, the correct answers are A and C.

Question 4

In a declarative Jenkins pipeline, given the following parameter declaration:

parameters {
    string(name: "TargetEnvironment", defaultValue: "staging", description: "Target environment")
}

How can a task in the pipeline use the value provided for TargetEnvironment?

A. {{TargetEnvironment}}
B. $TargetEnvironment
C. %TargetEnvironment%
D. ${params.TargetEnvironment}
E. $ENV{TargetEnvironment}

Correct Answer: D

Explanation:
When working with declarative Jenkins pipelines, parameters defined using the parameters {} block are exposed to the pipeline as entries in the params map. This map allows you to reference the values entered by users or default values directly within your pipeline code.

In the provided example, a string parameter named TargetEnvironment is defined. The goal is to access the value of this parameter correctly within a pipeline script or step.

Let’s break down each option to see which is correct:

  • A. {{TargetEnvironment}}:
    This is incorrect. This syntax is not used in Jenkins pipelines. Double curly braces ({{ }}) are common in templating languages like Jinja2, but they are not recognized by the Jenkins Pipeline DSL (Domain Specific Language).

  • B. $TargetEnvironment:
    This is partially correct but not reliable. In Groovy, which Jenkins pipelines are based on, using $VariableName is sometimes valid when referencing environment variables or shell variables within a sh step. However, pipeline parameters are not automatically available as shell environment variables unless explicitly set. Therefore, $TargetEnvironment will not reliably retrieve the pipeline parameter unless you first inject it into the shell environment.

  • C. %TargetEnvironment%:
    This is incorrect. This syntax is used in Windows batch scripting, not in Jenkins pipeline syntax or Groovy. Jenkins does support running batch scripts on Windows agents, but this format would only work inside a bat step on a Windows node, and even then, not for referencing Jenkins parameters.

  • D. ${params.TargetEnvironment}:
    This is correct. Jenkins exposes user-defined parameters through the params object, which behaves like a map. Accessing params.TargetEnvironment retrieves the value provided by the user (or the default value if no input was given). The ${} syntax ensures that Groovy interpolation happens correctly within strings or script blocks. This is the standard and recommended way to retrieve parameter values in declarative pipelines.

For example, in a pipeline step:

echo "Deploying to ${params.TargetEnvironment}"

This will echo: Deploying to staging (or whatever value the user provides).

  • E. $ENV{TargetEnvironment}:
    This is incorrect. This syntax looks like a hybrid between Bash-style and Perl-style environment variable access, but it is not valid in Jenkins pipelines. If you want to set an environment variable using a parameter in Jenkins, you would need to do that manually using the environment directive or inside a shell script block.

To use a string parameter like TargetEnvironment defined in a declarative Jenkins pipeline, the correct and reliable method is to reference it using the params object with standard Groovy map syntax. This ensures compatibility with the Jenkins Pipeline DSL and makes the value accessible in all stages and steps of your pipeline.
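
Putting this together, here is a minimal sketch of a declarative pipeline (the stage name and shell command are illustrative only) that reads the parameter through params and, for shell steps, injects it explicitly via the environment directive, as discussed under options B and E:

pipeline {
    agent any
    parameters {
        string(name: "TargetEnvironment", defaultValue: "staging", description: "Target environment")
    }
    environment {
        // make the parameter available to shell steps as an environment variable
        TARGET_ENV = "${params.TargetEnvironment}"
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying to ${params.TargetEnvironment}"
                sh 'echo "Shell step sees: $TARGET_ENV"'
            }
        }
    }
}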

The correct answer is D.

Question 5

Which of the following HTTP headers is a CORS (Cross-Origin Resource Sharing) header?

A. X-CORS-Access-Token:
B. Location:
C. Referer:
D. Authorization:
E. Access-Control-Allow-Origin

Answer: E

Explanation:

Cross-Origin Resource Sharing (CORS) is a security feature implemented by web browsers that allows or restricts web applications running at one origin (domain) from interacting with resources from a different origin. Without CORS, browsers block such cross-origin requests due to the Same-Origin Policy. To manage this, CORS uses a specific set of HTTP headers that govern whether the browser should allow a web page to make requests to a different domain.

Let’s evaluate each of the headers provided in the options:

Option A: X-CORS-Access-Token:

This is neither a standard HTTP header nor a recognized CORS header. It appears to be a custom header (not part of the official CORS specification). While applications can define custom headers starting with X-, such headers are not interpreted by the browser for CORS policy enforcement. Therefore, this is not a valid CORS header.

Option B: Location:

This header is used in HTTP redirection responses to indicate the new location of the requested resource. For example, in a 301 or 302 response, the Location: header tells the browser where to go next. It has no relation to CORS.

Option C: Referer:

This header is automatically included in HTTP requests and indicates the address of the previous web page from which a request was made. While it's useful for analytics and debugging, it is not used to control CORS behavior or browser access permissions for cross-origin requests.

Option D: Authorization:

This header is used to pass authentication credentials, such as bearer tokens, to the server. It is often involved in securing APIs. However, it is not a CORS header. That said, if a client sends a request using this header across origins, the server must explicitly allow it by including it in the Access-Control-Allow-Headers CORS response header, but that does not make Authorization itself a CORS header.

Option E: Access-Control-Allow-Origin

This is the correct answer. It is one of the main headers used in CORS. When a browser makes a cross-origin request, it checks this header in the server’s response to determine whether the resource is allowed to be shared with the origin of the requesting page. For example:

Access-Control-Allow-Origin: https://example.com

Access-Control-Allow-Origin: *

The above tells the browser that the server permits access from the specified origin or from all origins, respectively.

This header is essential for enabling controlled access to resources and is part of the standard CORS specification outlined by the W3C.
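
One way to observe this header in practice is to send a request with an Origin header using curl and inspect the response; the URLs below are placeholders:

curl -i -H "Origin: https://example.com" https://api.example.org/data
# a CORS-enabled server replies with a header such as:
# Access-Control-Allow-Origin: https://example.com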

Among the options, only Access-Control-Allow-Origin is a standard HTTP response header used specifically to implement CORS policies. It determines whether the requesting origin is allowed to access the resource, and therefore it is the correct answer. All other headers either serve unrelated purposes or are not standard.

Thus, the correct answer is E.

Question 6

Which of the following Git commands are used to manage files within a repository? (Choose two correct answers.)

A. git rm
B. git cp
C. git mv
D. git move
E. git copy

Answer: A, C

Explanation:

Git is a distributed version control system used by developers to manage and track changes in code. While most people associate Git with commits, branches, and merges, it also includes commands for managing files within a repository—specifically for tasks like deleting or moving files. Understanding which commands are officially supported is essential for efficient use of Git.

Let’s break down the two correct answers first:

A. git rm:
This command is used to remove (delete) files from the working directory and the staging area. It is an essential tool for file management in Git. When you use git rm filename, the file is deleted from your local filesystem, and that deletion is staged for the next commit. This is particularly useful when cleaning up unused files or refactoring a project structure.
Example:

git rm old_script.py

git commit -m "Remove deprecated script"

C. git mv:
This command is used to move or rename files within a Git repository. It functions like a combination of mv in Unix and git add + git rm. It ensures the file move is tracked correctly by Git, which can help with preserving file history through renames.
Example:

git mv old_name.py new_name.py

git commit -m "Rename script for clarity"

Both git rm and git mv directly help manage the content and structure of the repository in a way that Git tracks and understands.

Now, let’s analyze the incorrect answers:

B. git cp:
There is no such command in Git. While cp is a standard Unix command to copy files, Git does not provide a dedicated git cp command. Copying a file within a repository can be done using your shell or file manager (cp or manual copy), followed by git add to stage the new copy. Git does not automatically infer or track copies unless they are explicitly committed and detected later via similarity index analysis.
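
For example, copying is done with the ordinary shell cp command and the result is then staged with git add (file names here are purely illustrative):

cp config.sample.yml config.yml
git add config.yml
git commit -m "Add copy of sample configuration"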

D. git move:
This is not a valid Git command. People sometimes confuse git move with git mv, but only the latter exists. Typing git move in the terminal will return an error indicating that the command is unrecognized.

E. git copy:
Again, this is not a real Git command. Like git cp, if you want to copy files, you use the system-level cp command and then use git add to stage the copied file(s). Git does not offer built-in support for a command named git copy.

In summary, when managing files in a Git repository, you primarily use git rm to remove files and git mv to rename or move them. These are official commands provided by Git and are essential for repository management. The other options are either not valid Git commands or are confused with similar system-level commands.

Therefore, the correct answers are A and C.

Question 7

What implications does container virtualization have for DevOps? (Choose two answers.)

A. Containers decouple the packaging of an application from its infrastructure.
B. Containers require developers to have detailed knowledge of their IT infrastructure.
C. Containers let developers test their software under production conditions.
D. Containers complicate the deployment of software and require early deployment tests.
E. Containers require application-specific adjustment to the container platform.

Correct Answer: A, C

Explanation:
Container virtualization has significantly transformed the DevOps landscape by making software development and deployment faster, more portable, and more scalable. Containers provide a lightweight, consistent environment that encapsulates applications and all their dependencies, ensuring that they can run reliably across different computing environments.

Let’s analyze each option to determine the correct implications:

  • A. Containers decouple the packaging of an application from its infrastructure:
    This is correct. One of the core benefits of container technology is the decoupling of an application from the underlying infrastructure. This means developers can package their applications and all dependencies into containers without worrying about the environment in which the containers will be deployed. This abstraction improves portability, simplifies deployment processes, and supports consistent behavior across development, test, and production environments—key goals in DevOps.

  • B. Containers require developers to have detailed knowledge of their IT infrastructure:
    This is incorrect. Containers are designed to shield developers from the complexities of the underlying infrastructure. Developers don’t need deep knowledge of the host system or the exact deployment environment. Tools like Docker and orchestration platforms like Kubernetes handle much of the infrastructure abstraction, letting developers focus on writing and testing code.

  • C. Containers let developers test their software under production conditions:
    This is correct. Containers allow for the creation of environments that closely mirror production, even on a developer's local machine. This reduces the "works on my machine" problem by enabling developers to simulate production conditions during development and testing. Since the same container can be used throughout the development pipeline, it ensures consistency between stages of deployment, which is a major benefit in a DevOps workflow.

  • D. Containers complicate the deployment of software and require early deployment tests:
    This is incorrect. In fact, containers are often adopted precisely because they simplify the deployment process. They allow for automated builds, versioned releases, and repeatable deployments, reducing complexity. Rather than requiring early deployment tests to mitigate complications, containers support early testing as part of continuous integration/continuous delivery (CI/CD) workflows, which enhances software quality and speeds up release cycles.

  • E. Containers require application-specific adjustment to the container platform:
    This is incorrect. While some minimal configuration (like writing a Dockerfile) is required to containerize an application, containers are designed to be platform-independent. The container engine (e.g., Docker or containerd) ensures that containers behave consistently across different environments. There's no need for each application to be deeply adjusted to the platform, making containers versatile and easy to use across various applications.

In the DevOps world, containers are a powerful enabler. They allow for infrastructure abstraction (as seen in A) and the ability to test under conditions that closely resemble production (as seen in C). These benefits improve consistency, reduce errors, and speed up the delivery process. Conversely, options B, D, and E incorrectly describe container limitations that don’t align with DevOps practices or the advantages containers bring.
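
As a minimal sketch of how this looks in practice (image name and registry are placeholders), the very same container image is built once and then run unchanged on a developer machine, in CI, and in production, which is what enables points A and C:

docker build -t registry.example.com/myapp:1.0 .              # package the app and its dependencies
docker run --rm -p 8080:8080 registry.example.com/myapp:1.0   # test locally under production-like conditions
docker push registry.example.com/myapp:1.0                    # publish the identical image for deployment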

The correct answers are A and C.

Question 8

Which of the following HTTP methods are used by REST? (Choose three correct answers.)

A. CREATE
B. REPLACE
C. PUT
D. DELETE
E. GET

Answer: C, D, E

Explanation:

REST (Representational State Transfer) is an architectural style for building web services that rely on standard HTTP methods to perform operations on resources. RESTful APIs use these HTTP methods to map CRUD (Create, Read, Update, Delete) operations to web service actions.

Let’s evaluate each of the HTTP methods listed in the options and how they relate to REST:

Option A: CREATE

This is not a valid HTTP method. While RESTful APIs support creating resources, the HTTP method used for this operation is POST, not CREATE. Therefore, CREATE is not an actual HTTP method recognized by REST or defined in the HTTP specification.

Option B: REPLACE

Like CREATE, this is not a standard HTTP method. Although RESTful APIs can replace existing resources (typically using the PUT method), REPLACE itself is not an HTTP method. It might conceptually describe an action, but it’s not part of the HTTP standard.

Option C: PUT

This is a correct answer. PUT is a standard HTTP method used in REST to update or replace an existing resource or create a resource at a known URI. It is idempotent, meaning multiple identical PUT requests have the same effect as a single one. REST uses PUT to update a resource completely (whereas PATCH is often used for partial updates).

Example:

PUT /users/123 HTTP/1.1

Option D: DELETE

This is another correct answer. DELETE is used in REST to remove a resource identified by a specific URI. Like PUT, DELETE is also idempotent.

Example:

DELETE /users/123 HTTP/1.1

This tells the server to delete the resource associated with the user ID 123.

Option E: GET

This is also a correct answer. GET is used in REST to retrieve a resource or collection of resources from the server. It is a safe, read-only operation, meaning it does not change the state of the server.

Example:

GET /users/123 HTTP/1.1

This would retrieve the information for the user with ID 123.
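
As a quick, hypothetical illustration with curl (host, path, and payload are made up), the three methods map onto read, replace, and delete operations:

curl -X GET https://api.example.org/users/123
curl -X PUT -H "Content-Type: application/json" -d '{"name": "Alice"}' https://api.example.org/users/123
curl -X DELETE https://api.example.org/users/123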

Summary of Standard HTTP Methods Used in REST:

  • GET → Retrieve a resource

  • POST → Create a new resource

  • PUT → Update or replace a resource

  • DELETE → Delete a resource

  • PATCH → Partially update a resource (not always used)

Only C (PUT), D (DELETE), and E (GET) are actual HTTP methods recognized by the REST architecture. A (CREATE) and B (REPLACE) are not standard HTTP methods and therefore do not apply.

Final answer: C, D, E.

Question 9

The file index.php in a Git repository has been modified locally and now contains an error. The changes have not been committed yet. 

Which Git command will restore index.php to its most recent committed version on the current branch?

A. git lastver -- index.php
B. git revert -- index.php
C. git checkout -- index.php
D. git clean -- index.php
E. git repair -- index.php

Answer: C

Explanation:

When working with Git, it's common to make local changes to files and later decide to discard those changes—especially if the changes are incorrect or unwanted. In this scenario, the file index.php has been modified, but those modifications have not yet been committed. Therefore, the goal is to discard the changes and revert index.php back to the latest committed version on the current branch.

Let’s examine why C. git checkout -- index.php is the correct answer and why the others are not.

C. git checkout -- index.php:
This is the correct command to use when you want to discard uncommitted changes to a specific file. This command reverts the file to the latest committed version from the current branch in your Git working directory.

The correct syntax is:

git checkout -- index.php

The double dash -- separates command options from file names and ensures Git interprets index.php as a filename rather than a branch or tag.

What this command does:

  • It overwrites the local, uncommitted changes to index.php.

  • It restores the file to the version from the most recent commit (HEAD); strictly speaking, the content is taken from the index, which matches HEAD as long as nothing has been staged.

  • It does not affect the commit history, as this is not a commit or revert operation.
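
A short, hypothetical session shows the effect; git status is used only to confirm the state of the working tree before and after:

git status                  # index.php is listed as modified
git checkout -- index.php   # discard the uncommitted changes
git status                  # working tree clean; index.php matches HEAD again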

Now let's look at why the other options are incorrect:

A. git lastver -- index.php:
This is not a valid Git command. git lastver doesn’t exist in Git. It may be confused with a conceptual idea of “last version,” but there is no built-in Git command by this name.

B. git revert -- index.php:
The git revert command is used to undo a previous commit by creating a new commit that reverses the changes introduced by a specific commit. It is not used for uncommitted changes and also requires a commit hash, not a filename.
Example of correct usage:

git revert <commit-hash>

So using git revert in this context would not work for the goal, which is to discard uncommitted local changes to a single file.

D. git clean -- index.php:
The git clean command is used to remove untracked files from the working directory. It does not affect tracked files (like index.php in this case) and cannot be used to restore them to a previous version. Also, git clean refuses to run without a flag such as -f (force) or -n (dry run), so this invocation would not even execute.
Example:

git clean -f

E. git repair -- index.php:
There is no git repair command in the Git command set. While Git can often recover corrupted repos through lower-level commands or by re-cloning, this is not a valid or recognized Git command.

To discard local changes to a file and restore it to its last committed version in Git, you should use:

git checkout -- index.php

This makes C the correct choice. All other options are either invalid Git commands or are used for entirely different purposes.

Therefore, the correct answer is C.

