DCA Mirantis Practice Test Questions and Exam Dumps

Question 1 

When building container images, you need to place files inside the image. There are two main instructions you can use to do this: one is a basic file copy, and the other offers some extra features. 

In what ways do these two instructions differ? Choose the two correct answers.

A. One of them allows using file-matching patterns to select files, and the other does not.
B. One of them can automatically unpack compressed archive files (like tar.gz), while the other simply copies them as they are.
C. One of them is capable of bringing in files directly from a web link or internet location, while the other only uses files that are already on your computer.
D. One of them can match files using patterns like wildcards, while the other cannot.
E. One of them can handle compressed files, and the other cannot use them at all.

Correct Answers: B and C

Explanation:

When working with container technology such as Docker, one of the tasks you’ll often do is build what’s called a container image. This image contains everything your application needs to run: files, folders, tools, configurations, and more. To create these images, you use a file called a Dockerfile. In this file, you write instructions that describe how to build the image.

Two of the most commonly used instructions for adding files to an image are ADD and COPY. The two are often confused because they seem to do the same thing—move files from your computer into the container image. However, they behave differently in important ways.

First, let’s talk about the simpler of the two: COPY. The COPY instruction is designed to do just one thing. It takes a file or folder from your local computer and places it in a specific location inside the image. It doesn’t change the file, unzip it, or download anything. It’s fast, predictable, and easy to understand. This is why COPY is generally preferred when all you need to do is move files.

On the other hand, the ADD instruction can do more than COPY. It can still copy files just like COPY, but it has two extra capabilities that COPY does not have. First, ADD can automatically unpack local tar archives, including compressed ones. For example, if you give it a .tar.gz file from your build context, it will extract its contents inside the image. This can be helpful if you’re working with packages or archives and want them unpacked automatically.

Second, ADD can pull files from the internet. If you provide a web address (a URL to a file), ADD will download that file and place it into your image. Note that files fetched from a URL are not auto-extracted, even if they are archives. COPY does not support URLs at all—it only works with files already in your build context.

Because ADD has these extra powers, it might seem like the better choice. But that’s not always the case. These extra features can make the build process more complex and sometimes unpredictable. For example, if the URL is no longer available or the file changes, your image will build differently the next time. That’s why most experts recommend using COPY whenever possible, and only using ADD when you specifically need to unpack a file or download from the web.

So, to sum up:

  • The two key differences are:

    1. ADD can unzip compressed archive files (COPY cannot).

    2. ADD can download files from the internet (COPY cannot).

  • Both COPY and ADD support the same simple wildcard patterns (Go’s filepath.Match rules), so pattern matching is not a difference between them. That makes options A and D incorrect.

  • Option E is misleading because COPY can still include compressed files—it just doesn’t unpack them. So COPY can handle them, but only in a limited way.
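The two real differences can be seen in a short Dockerfile sketch (the file names and the URL below are placeholders, not part of the question):

```dockerfile
# COPY: a plain copy from the build context into the image
COPY app/ /opt/app/

# ADD with a local archive: the tarball is unpacked into the target directory
ADD vendor-libs.tar.gz /opt/libs/

# ADD with a URL: the file is downloaded into the image (but not unpacked)
ADD https://example.com/tools/helper.sh /usr/local/bin/helper.sh
```

Unless you specifically need one of the two ADD behaviors shown here, COPY is the safer, more predictable choice.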

Question 2 

Imagine you're using Docker Universal Control Plane (UCP) to manage containers in your organization. One day, a problem occurs: an application crashes or behaves unexpectedly. You want to find out what happened by checking which users or systems interacted with UCP right before the failure. 

What must already be set up in your system before the failure happens, so you can review this activity?

A. You must enable UCP audit logging and set it to a level that captures either metadata or full request details.
B. All the nodes (servers) in your system must have their logging settings adjusted to metadata or request mode.
C. You need to set the general UCP logging mode to informational or debugging level.
D. You need to increase the logging level of the UCP Kubernetes API server to “warning” or higher.

Correct Answer: A

Explanation:

When managing containers at scale, it's important to keep track of what's happening across your systems—not just for troubleshooting when something goes wrong, but also for security and accountability. Docker Universal Control Plane (UCP) is a tool that helps manage containerized applications in a centralized, secure way. Like many management tools, UCP allows multiple users to access and interact with the system, each possibly making changes or sending commands.

Now, imagine a situation where your application suddenly fails, slows down, or produces errors. It’s not immediately clear what caused the issue. Did someone delete a resource by accident? Was there an unauthorized API call? Did a script make changes to the environment? To answer these kinds of questions, you need to know exactly what happened leading up to the failure—and that means reviewing logs.

But not just any logs. While UCP (and the Docker engines underneath it) can collect a lot of logs, not all of them are useful for tracking user activity. This is where UCP audit logging comes in. Audit logs are special logs that record every action taken through the UCP API—who did what, when they did it, and from where. They can show you:

  • Which user made a request.

  • What kind of request it was (create, delete, update).

  • Which resources were affected.

  • Whether the request was successful or denied.

These logs are especially useful during incident investigations, compliance checks, or security reviews.

However, these audit logs must be turned on ahead of time. If they were not enabled before the failure happened, UCP will not have stored the details you need. You cannot go back in time and enable them retroactively.

There are different levels of audit logging:

  • The metadata level captures basic information about the request (user, action, resource).

  • The request level includes full details of the actual API request content, providing more context.

Both levels are useful, but the more detailed “request” level can give you a clearer picture during an investigation.
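Concretely, audit logging is enabled ahead of time in the UCP configuration file. A minimal sketch of the relevant TOML section (the values shown are illustrative; "metadata" and "request" are the two useful levels) might look like this:

```toml
[audit_log_configuration]
  # "" disables auditing; "metadata" or "request" enables it
  level = "metadata"
  # whether audit logs are bundled into support dumps (optional)
  support_dump_include_audit_logs = false
```

This configuration is typically retrieved and applied through UCP’s config-toml API endpoint; once it is in place, subsequent API actions are recorded and can be reviewed after an incident.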

Let’s now consider the incorrect options:

  • Option B says that all the nodes (servers) in the system must have their logging set to metadata or request. This is incorrect because per-node engine logging levels relate to runtime behavior, not UCP API activity. Engine logs won’t help you track UCP user activity.

  • Option C refers to general logging levels like “info” or “debug.” While these can be useful for diagnosing system performance or errors, they don’t include API audit data.

  • Option D refers to logging in the Kubernetes part of UCP. This might help with Kubernetes-specific issues, but it doesn’t capture general UCP API usage or user actions.

In summary:

If you want to understand what happened in UCP before something went wrong, you need to have audit logs already enabled, and you should choose a logging level that captures enough detail (like metadata or full requests). Without this in place, you won’t be able to see the history of user actions, and finding the root cause of a failure will be much harder.

Question 3 

A user is working inside a container and tries to change the system clock (for example, to update the current time or date). But it doesn’t work—the action fails. 

Could this failure be caused by SELinux (Security-Enhanced Linux)?

Options:

A. Yes
B. No

Correct Answer: A

Explanation:

To fully understand this question, let's break it down into two parts: what the user is trying to do, and how SELinux might be involved.

First, let’s talk about the action itself. The user is trying to change the system time from inside a container. This might seem like a harmless or simple action, but it’s actually very sensitive. The system clock is a core part of the operating system. Changing the clock affects time stamps on files, logging systems, encryption mechanisms, and much more. If a process could freely change the time, it could potentially mess with logs, cover up activity, or break software that relies on accurate timing.

This is why changing the system time requires high-level privileges. Normally, only the main system administrator (root user on the host system) is allowed to do this. Inside a container, even if the user has “root” access within that container, it doesn’t automatically mean they can change the system time. That’s because containers are designed to be isolated from the host system to improve security and stability.

Now, let’s bring in SELinux.

SELinux stands for Security-Enhanced Linux. It’s a security tool that controls what programs and users can do on a Linux system. It adds an extra layer of protection by defining policies for every process, file, and system action. These policies say who can do what—very specifically.

When SELinux is enabled, it restricts what containers can do, even more strictly than the default Docker or container settings. One of the things it blocks is access to certain system-level operations, like changing the system clock. This is considered a privileged operation, and by default, containers do not have permission to do it—especially on systems where SELinux is in enforcing mode.

So, to answer the question: yes, SELinux can absolutely be the reason why the user is unable to change the system clock from inside the container. Even if the user inside the container has administrative rights, SELinux may block this action according to its security policy.

You might be wondering: “Can this be allowed somehow?”

Yes, but only with special configuration. To allow a container to do something like change the time:

  • The container would need to be run in privileged mode, which gives it more control over the host system.

  • SELinux would need to be configured to allow this specific action, or set to a less strict mode (like permissive or disabled).

  • Special options like capabilities would need to be granted to the container. One of these capabilities is called CAP_SYS_TIME, which allows changing the system clock.

But giving a container this level of access is considered risky. It breaks the isolation that containers are supposed to provide, and could allow the container to affect other parts of the system.
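As a sketch, assuming a Docker host with an Alpine-based image (the time value is arbitrary):

```shell
# Without extra privileges, setting the clock from inside the container
# typically fails with an "Operation not permitted" error:
docker run --rm alpine date -s "12:00:00"

# Granting the CAP_SYS_TIME capability removes the kernel-capability barrier,
# though an enforcing SELinux policy on the host can still deny the call:
docker run --rm --cap-add SYS_TIME alpine date -s "12:00:00"
```

Note that because the clock is shared with the host kernel, allowing this changes the time for the whole machine, which is exactly why it is locked down by default.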

In summary:

  • Changing the system clock is a sensitive operation that affects the entire host machine.

  • Containers are designed to be isolated and usually don’t have permission to do this.

  • SELinux enforces strict security policies and will block this type of action unless specifically allowed.

  • So yes, SELinux could be the reason why the user is not able to change the system time from inside the container.

Question 4

In a Kubernetes environment, a container inside a pod has been marked as unhealthy because it has repeatedly failed its health check (also known as the livenessProbe). 

What happens next to fix the unhealthy container? Does the orchestrator automatically restart the container to try to resolve the issue?


A. Yes, the container will be restarted automatically by the orchestrator.
B. No, the container will not be restarted. The issue must be fixed manually.

Correct Answer: A

Explanation:

In container orchestration systems like Kubernetes, containers are constantly being monitored to ensure they are running correctly. One of the most important checks Kubernetes performs on containers is called the livenessProbe.

A livenessProbe is a health check mechanism used to determine whether a container is still functioning properly. The Kubernetes orchestrator (the system that manages your containers) will perform this probe on a regular basis, using specific criteria such as:

  • HTTP requests to a defined URL within the container.

  • TCP socket checks to see if a specific port is open and responding.

  • Command checks where the system runs a command inside the container to see if it returns a success or failure status.

If a container fails this liveness check repeatedly, it means the container is not responding or has become "unhealthy." In this case, the orchestrator takes action to try to fix the situation automatically.
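A minimal example of such a probe in a pod spec (the image name, path, port, and thresholds here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: app
    image: my-app:latest        # placeholder image
    livenessProbe:
      httpGet:                  # HTTP-based health check
        path: /healthz
        port: 8080
      initialDelaySeconds: 5    # wait before the first probe
      periodSeconds: 10         # probe every 10 seconds
      failureThreshold: 3       # restart after 3 consecutive failures
```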

Here’s what happens when a container is marked as unhealthy due to failed liveness probes:

  1. Automatic Restart: When Kubernetes detects that a container has failed its liveness probe several times in a row, the kubelet automatically restarts the container. The idea is to give the container a fresh chance to recover, since the issue could be temporary (a deadlock, a memory spike, or a transient resource problem).

  2. Pod Management: If you have multiple containers in a pod and one of them becomes unhealthy, only the unhealthy container is restarted. This ensures that the other containers in the pod continue to function normally without being affected by the failure of just one container.

  3. Health Check Frequency: You can configure how often Kubernetes runs the liveness probe (periodSeconds) and how many consecutive failures are tolerated (failureThreshold) before the container is considered unhealthy and restarted.

  4. Rolling Restart Mechanism: A liveness restart affects only the failing container, in place on its node. For planned restarts across many replicas, Kubernetes (for example, through a Deployment) replaces pods one at a time in a rolling fashion so that the service remains available during the process.

  5. Handling Persistent Failures: If the container keeps failing after repeated restarts, Kubernetes does not keep restarting it at full speed; it backs off, waiting progressively longer between attempts (the familiar CrashLoopBackOff state). At that point you will usually need to intervene manually and address the root cause, for example by examining the container's logs, fixing bugs, or adjusting resources.

  6. Why Restarting Helps: The orchestrator restarts unhealthy containers because it’s often faster and easier than waiting for the issue to fix itself. In many cases, container failures are due to temporary issues like connectivity problems, memory shortages, or software glitches that can be resolved by restarting the application.

In summary, when a container in a Kubernetes pod is marked as unhealthy after failing its livenessProbe several times, the orchestrator will indeed automatically restart the container (Option A). This helps to ensure that the service provided by the container continues without needing manual intervention, at least in the initial stages of the issue.

Question 5 

You want to configure Docker to connect to a registry that doesn’t have a trusted TLS certificate (for example, a registry running on an internal network with a self-signed certificate). 

Is it possible to set this up by editing the /etc/docker/default configuration file and adding the setting INSECURE_REGISTRY?

A. Yes, you can set INSECURE_REGISTRY in the configuration file to allow Docker to use a registry without a trusted TLS certificate.
B. No, you cannot use this method to allow Docker to connect to an insecure registry. The configuration file does not support this setting.

Correct Answer: A

Explanation:

In Docker, the process of pulling or pushing images to and from a registry (a centralized storage location for container images) usually involves secure communication using TLS (Transport Layer Security). TLS ensures that the connection between your Docker engine and the registry is encrypted and that the identity of the registry can be verified. This prevents man-in-the-middle attacks and ensures that the image data being transferred is secure.

However, there are situations where you might want to connect to a Docker registry that does not use a trusted TLS certificate. This can happen if:

  • The registry is using a self-signed certificate, which is not trusted by default.

  • The registry is part of an internal network or development environment, and you don’t have a certificate authority to sign the certificates.

  • The registry is using an insecure HTTP connection (without encryption).

In these cases, Docker might refuse to connect to the registry, considering the connection insecure. But what if you still want Docker to connect to these "insecure" registries?

Configuring Docker to Use an Insecure Registry

Docker allows you to bypass this security feature and explicitly tell it to connect to a registry without verifying the TLS certificate. This can be done by editing the Docker configuration file on the system running Docker.

To configure Docker to connect to a registry without a trusted TLS certificate:

  1. Editing the Docker Configuration:

    • You can add a setting called INSECURE_REGISTRY to Docker’s startup configuration file (/etc/docker/default in this question; the exact path varies by distribution, for example /etc/sysconfig/docker on older RHEL-based systems).

The entry should look something like this:
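Here is a sketch with a placeholder registry address (the variable wraps the daemon’s --insecure-registry flag):

```shell
INSECURE_REGISTRY="--insecure-registry registry.internal.example.com:5000"
```

After editing, the Docker daemon must be restarted for the change to take effect. On current Docker releases, the equivalent setting lives in /etc/docker/daemon.json under the "insecure-registries" key.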

  2. What This Does:

    • By specifying the registry’s address in the INSECURE_REGISTRY field, you’re telling Docker to accept the registry even if it doesn’t use a trusted TLS certificate. Docker will not attempt to verify the registry's certificate, effectively treating it as an "insecure" registry.

    • This is useful for private or internal Docker registries that you control but don’t have access to a trusted certificate authority to issue a valid certificate.

  3. Why Use INSECURE_REGISTRY?

    • If you are working with internal registries, especially in a development or test environment, it might not be practical to set up a certificate authority or buy a trusted certificate. In such cases, using an insecure registry is a practical workaround.

    • By allowing Docker to communicate with an insecure registry, you can still perform container image pulls and pushes without needing a trusted certificate.

  4. Potential Risks:

    • Using an insecure registry can expose your system to risks, especially if the registry is hosted on an external network. Unencrypted communications are vulnerable to interception, and you won’t be able to verify the identity of the registry.

    • It’s important to restrict the use of insecure registries to environments where security is less of a concern, such as internal, isolated networks or during development.

Why Option A is Correct:

  • Docker provides an easy way to configure registries as insecure by adding the INSECURE_REGISTRY setting to its configuration file. Once this is configured, Docker will allow connections to the registry without checking the TLS certificate.

Why Option B is Incorrect:

  • The /etc/docker/default file does indeed support the INSECURE_REGISTRY setting. This setting is designed specifically to tell Docker to skip certificate verification for certain registries. Therefore, saying that this method won’t work is incorrect.

Conclusion: In situations where you need to connect Docker to a registry that doesn’t have a trusted TLS certificate, setting INSECURE_REGISTRY in the Docker configuration file is the correct method (Option A). This allows Docker to bypass the certificate check and connect to the registry, which can be useful for internal or development purposes, although it should be avoided in production environments.

