CompTIA SK0-005 Practice Test Questions and Exam Dumps



Question No 1:

Which software licensing model is most commonly associated with cloud-based services, and why is it favored over traditional licensing models such as per socket, perpetual, or site-based licensing?

A. Per socket
B. Perpetual
C. Subscription-based
D. Site-based

Correct Answer: C. Subscription-based

Explanation:

Cloud computing has transformed the way software is delivered, consumed, and licensed. Unlike traditional on-premises software models that often rely on one-time purchases and hardware-specific licensing, cloud services typically follow a subscription-based licensing model.

Subscription-based licensing means users pay a recurring fee—monthly, annually, or based on usage—for access to the software. This approach aligns well with the core principles of cloud computing: scalability, flexibility, and cost efficiency. It allows users to start small, scale usage as needed, and pay only for what they use. Additionally, it enables service providers to continuously update and maintain the software without requiring user intervention.

In contrast:

  • Per socket licensing (A) is an older model mainly used for on-premises server software, where pricing depends on the number of CPU sockets. It doesn’t align well with the abstracted nature of cloud infrastructure.

  • Perpetual licensing (B) grants lifetime usage of software for a one-time fee. While once popular, it lacks the flexibility and scalability of cloud models and often comes with separate support and upgrade costs.

  • Site-based licensing (D) is common in educational or large enterprise environments where software is licensed for an entire physical location. This model doesn’t support the decentralized, remote-access nature of cloud environments.

Cloud software licensing trends toward the subscription-based model because it supports the dynamic and service-oriented architecture of cloud platforms. It simplifies budgeting, improves accessibility, and keeps systems up to date, making it the preferred choice for both providers and consumers in the cloud era.




Question No 2:

A server administrator is tasked with optimizing system performance by running a performance monitor. Which two of the following system metrics are most commonly used to track and analyze resource utilization in order to ensure optimal performance? (Select two.)

A. Memory
B. Page file
C. Services
D. Application
E. CPU
F. Heartbeat

Correct Answers: A. Memory and E. CPU

Explanation:

Monitoring system performance is essential for identifying bottlenecks, ensuring efficient resource utilization, and maintaining overall system health. A server administrator often uses tools like Performance Monitor (PerfMon) on Windows or similar utilities in other operating systems to track key metrics.

Two of the most critical performance metrics are:

  1. Memory (A):
    Monitoring memory usage helps identify whether a system has enough physical RAM to support its workload. High memory usage or memory leaks can lead to increased paging, slower response times, and system instability. Memory counters like “Available MBytes,” “Pages/sec,” and “Committed Bytes” give insight into how memory is being allocated and whether the system is under memory pressure.

  2. CPU (E):
    The CPU is the brain of the system. Monitoring CPU usage provides insight into how much processing power is being consumed by applications and services. A consistently high CPU utilization could indicate an overworked system, poorly optimized software, or the need for additional processing resources. Important CPU counters include “% Processor Time,” “Processor Queue Length,” and “Interrupts/sec.”

Other options are less directly related to performance monitoring:

  • Page file (B): While related to memory, it is a secondary metric. It becomes relevant when physical memory is low.

  • Services (C) and Application (D): These are categories or components that might be reviewed during troubleshooting but are not performance metrics.

  • Heartbeat (F): Generally used in high-availability or failover systems to check node liveness; it is not a standard performance metric.

For optimal system utilization, Memory and CPU metrics are fundamental. They provide immediate, actionable insight into system load, helping administrators detect issues early and maintain system health.
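
To make this concrete, the same two metrics can be sampled from the command line on a Linux server using standard utilities. The following is a minimal sketch (the interval and sample counts are arbitrary); on Windows, the equivalent data comes from PerfMon counters such as "% Processor Time" and "Available MBytes":

  # Report CPU and memory activity every 5 seconds, 12 samples
  vmstat 5 12

  # Point-in-time view of physical RAM and swap, in megabytes
  free -m

  # Top ten processes by CPU consumption
  ps aux --sort=-%cpu | head -n 10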




Question No 3:

A user is unable to save large files to a specific directory on a Linux server, even though smaller files were being saved successfully just minutes earlier. As a technician investigating potential storage issues, which command should be used to quickly check disk space usage and identify if the issue is related to a full partition?

A. pvdisplay
B. mount
C. df -h
D. fdisk -l

Correct Answer: C. df -h

Explanation:

In Linux, when a user suddenly cannot save large files to a directory, a common cause is a lack of available disk space on the partition associated with that directory. This can happen even if smaller files were successfully written earlier, as large files require significantly more space. To diagnose this, the df -h command is the most appropriate first step.

What df -h Does:

  • The df command displays the amount of disk space used and available on all mounted filesystems.

  • The -h (human-readable) flag shows sizes in a format that's easy to understand (e.g., MB, GB), which helps quickly identify full or nearly full partitions.

  • It helps confirm whether the partition is out of space. If free space looks adequate, a follow-up check with df -i can reveal inode exhaustion, which also prevents new file writes.

Why the Other Options Are Less Relevant:

  • A. pvdisplay: This shows details about physical volumes used in LVM (Logical Volume Manager). It's useful for volume configuration, not immediate disk space usage.

  • B. mount: Lists currently mounted filesystems and their mount points but doesn’t provide usage statistics like free space.

  • D. fdisk -l: Displays partition tables of all disks but is more useful for disk setup or troubleshooting unmounted devices, not current space usage.

The technician should use df -h to quickly determine if the file system where the directory resides is full. This allows for a fast, targeted response such as clearing space, resizing the partition, or moving files.
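
A minimal illustration of the diagnostic flow (the device name and sizes shown are hypothetical):

  # Show space usage for the filesystem that holds the directory
  df -h /home
  # Filesystem      Size  Used Avail Use% Mounted on
  # /dev/sda2        50G   50G     0 100% /home

  # If free space looks adequate, check for inode exhaustion instead
  df -i /home

A Use% of 100% confirms a full partition; a 100% IUse% from df -i indicates the filesystem has run out of inodes even though raw space remains.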




Question No 4:

After a recent power outage, a specific server in the data center is repeatedly going offline and losing its configuration, causing application access issues for users. The technician observes that the server displays an incorrect date and time when it powers on, while all other servers function normally. Which of the following are the MOST LIKELY causes of this issue? (Select two.)

A. The server has a faulty power supply
B. The server has a CMOS battery failure
C. The server requires OS updates
D. The server has a malfunctioning LED panel
E. The servers do not have NTP configured
F. The time synchronization service is disabled on the servers

Correct Answers: B. The server has a CMOS battery failure and F. The time synchronization service is disabled on the servers

Explanation:

The symptoms described (the server repeatedly going offline, an incorrect date and time on boot, and loss of configuration) strongly point to problems with the persistent storage of BIOS settings and system time, most likely caused by a CMOS battery failure combined with a lack of proper time synchronization.

B. CMOS Battery Failure

The CMOS battery powers the system’s real-time clock (RTC) and maintains BIOS/UEFI settings when the server is powered off. A failing or dead CMOS battery results in:

  • Incorrect date and time upon boot,

  • Loss of BIOS configurations, which can include boot priorities or hardware settings,

  • Unexpected reboots or instability, especially after a power outage, as the system struggles to maintain critical hardware-level settings.

This explains why only one server (with the failed battery) is affected while others function normally.

F. Time Synchronization Service is Disabled

Even if the hardware clock (RTC) is off, Linux and Windows systems can maintain correct time using NTP (Network Time Protocol) or internal synchronization services. If the time sync service is disabled, the OS cannot correct the RTC upon boot, leading to continued time-related issues. This can cause:

  • Authentication failures (due to time drift),

  • Application crashes, particularly with systems dependent on accurate time (e.g., SSL, logging, file timestamps),

  • Access issues for users, as mentioned in the scenario.

Why the Other Options Are Less Likely:

  • A. Faulty Power Supply: Would likely cause random shutdowns or no power at all, but not necessarily affect time/configuration.

  • C. OS Updates: Not typically linked to hardware clock or BIOS config issues.

  • D. LED Panel: Cosmetic and unrelated to system function.

  • E. No NTP on Other Servers: All other servers are working fine, so this does not apply.

The most likely root causes are a failed CMOS battery and a disabled time synchronization service, both of which can be resolved with minimal cost and downtime.
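
On a systemd-based Linux server, a hedged sketch of how the time-sync half of this diagnosis could be confirmed and corrected (assumes the NTP client is managed through timedatectl):

  # Check whether the system clock is synchronized and the NTP service is active
  timedatectl status

  # Re-enable NTP synchronization if it has been disabled
  sudo timedatectl set-ntp true

  # Compare the hardware (CMOS/RTC) clock with the OS clock; repeated large
  # drift after power cycles points to a failing CMOS battery
  sudo hwclock --show
  date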




Question No 5:

A company has recently enforced full disk encryption on all server hard drives to prevent unauthorized access to data in the event of physical theft or loss. As part of a broader data loss prevention (DLP) strategy, which of the following additional measures should the company implement to ensure the confidentiality and security of encrypted data while it's in use?

A. Encrypt all network traffic
B. Implement Multi-Factor Authentication (MFA) on all the servers with encrypted data
C. Block the servers from using an encrypted USB
D. Implement port security on the switches

Correct Answer: B. Implement Multi-Factor Authentication (MFA) on all the servers with encrypted data

Explanation:

Full disk encryption (FDE) is an essential component of a robust data loss prevention (DLP) strategy, especially for protecting data at rest—i.e., when the server is powered off or stolen. However, once the server is booted and the drive is decrypted (typically during the operating system boot process), the data becomes accessible to anyone who can log into the system. Therefore, additional access control measures must be enforced to ensure the data remains protected while in use.

Why B is Correct: Implementing MFA

Multi-Factor Authentication (MFA) adds a second layer of security beyond just a password. Even if an attacker gains access to login credentials or physical access to the server, they would still need the second authentication factor (e.g., a hardware token or authentication app). This dramatically reduces the risk of unauthorized access to decrypted data.

MFA is especially important on servers that store or process sensitive or encrypted data because it protects data in use, closing a key gap in many DLP strategies. It also supports compliance with data protection standards like HIPAA, GDPR, and PCI-DSS.

Why the Other Options Are Less Suitable:

  • A. Encrypt all network traffic: This protects data in transit, not necessarily data in use on servers. While important, it doesn’t address unauthorized access to encrypted data once the drive is mounted.

  • C. Block encrypted USBs: While this helps prevent external data exfiltration, it’s a less direct control for protecting internal server data.

  • D. Implement port security on switches: This is a network-layer protection to prevent rogue devices but does not secure the access to already-decrypted data on the server.

Encrypting hard drives is only one layer of a comprehensive DLP strategy. To effectively secure decrypted data on live systems, implementing MFA on servers ensures that only authorized users can access sensitive information, even if the system is physically or remotely compromised.
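
As an illustrative sketch only, one common way to add MFA to administrative logins on a Linux server is TOTP via PAM. This assumes the libpam-google-authenticator package is installed and is just one of several valid approaches:

  # Each administrator enrolls a TOTP secret for their own account
  google-authenticator

  # Require the TOTP code in the SSH PAM stack
  echo 'auth required pam_google_authenticator.so' | sudo tee -a /etc/pam.d/sshd

  # In /etc/ssh/sshd_config, require both a key and the interactive TOTP prompt:
  #   ChallengeResponseAuthentication yes
  #   AuthenticationMethods publickey,keyboard-interactive

  # Apply the configuration change
  sudo systemctl restart sshd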




Question No 6:

A systems administrator is configuring a new server to operate within a private Local Area Network (LAN). The network design must comply with the IP address ranges defined by RFC 1918 for private, non-routable use. Which of the following IP addresses is valid under the RFC 1918 private address space standard and should be used for the server configuration?

A. 11.251.196.241
B. 171.245.198.241
C. 172.16.19.241
D. 193.168.145.241

Correct Answer: C. 172.16.19.241

Explanation:

RFC 1918 is a widely adopted standard that defines private IP address ranges for use within internal networks such as LANs. These IP addresses are not routable on the public internet, which makes them ideal for internal communication within organizations.

RFC 1918 Private IP Ranges:

RFC 1918 defines three blocks of IP addresses reserved for private use:

  1. 10.0.0.0 to 10.255.255.255 (Class A)

  2. 172.16.0.0 to 172.31.255.255 (Class B)

  3. 192.168.0.0 to 192.168.255.255 (Class C)

These ranges are commonly used in homes, businesses, and data centers to prevent IP address conflicts and reduce reliance on public IP address allocations.

Why Option C is Correct:

  • C. 172.16.19.241 falls within the private Class B range (172.16.0.0 – 172.31.255.255) and is therefore valid under RFC 1918. This address is suitable for use on an internal LAN.

Why the Other Options Are Incorrect:

  • A. 11.251.196.241: This address is from the 11.0.0.0/8 block, which is public, not private. (Note: 11.0.0.0/8 is assigned to the U.S. Department of Defense.)

  • B. 171.245.198.241: This address falls outside any private range and is considered a public IP address.

  • D. 193.168.145.241: Although it resembles the private Class C range, 193.x.x.x is part of the public address space, not included in RFC 1918.

When configuring a server on a LAN that must adhere to RFC 1918, you must choose an IP from one of the defined private blocks. 172.16.19.241 is the only valid option from the list provided, as it belongs to one of the authorized private ranges for internal networking.
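
In CIDR notation, the three RFC 1918 blocks are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The following is a minimal shell sketch of a membership check against those blocks (it assumes a well-formed dotted-quad IPv4 address as input):

  # Report whether an IPv4 address falls inside an RFC 1918 private block
  is_rfc1918() {
    case "$1" in
      10.*)                                    echo "$1: private (10.0.0.0/8)" ;;
      172.1[6-9].*|172.2[0-9].*|172.3[0-1].*)  echo "$1: private (172.16.0.0/12)" ;;
      192.168.*)                               echo "$1: private (192.168.0.0/16)" ;;
      *)                                       echo "$1: public (not RFC 1918)" ;;
    esac
  }

  is_rfc1918 172.16.19.241    # private (172.16.0.0/12)
  is_rfc1918 193.168.145.241  # public (not RFC 1918)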





Question No 7:

An IT administrator must perform low-level, bare-metal maintenance on a server located in a remote data center. This includes tasks such as accessing the system BIOS/UEFI, reinstalling the operating system, or troubleshooting boot failures—activities that require direct console access before the operating system is even loaded. Which of the following tools or methods should the administrator use to achieve this functionality remotely?

A. IP KVM
B. VNC
C. A crash cart
D. RDP
E. SSH

Correct Answer: A. IP KVM

Explanation:

Bare-metal maintenance refers to tasks that occur below the operating system level, such as configuring BIOS/UEFI settings, troubleshooting boot issues, or reinstalling the OS from scratch. These types of operations require direct console-level access, often before the system is fully operational. For administrators working remotely, the tool used must simulate being physically present at the machine.

A. IP KVM – Correct Answer

An IP-based KVM (Keyboard, Video, Mouse) switch allows administrators to remotely control servers at the hardware level, as if they were physically in front of them. With IP KVM:

  • You can interact with the BIOS or boot loader.

  • You can mount virtual media to install an operating system.

  • It works regardless of whether the OS is running.

IP KVM is specifically designed for remote server management in data centers, making it ideal for bare-metal maintenance tasks.

Why the Other Options Are Incorrect:

  • B. VNC (Virtual Network Computing):
    VNC allows GUI-based remote desktop control but requires the operating system to be running, so it's unsuitable for bare-metal tasks.

  • C. A crash cart:
    A crash cart (a portable console with monitor, keyboard, and mouse) is great for on-site maintenance but cannot be used remotely. It physically connects to the server.

  • D. RDP (Remote Desktop Protocol):
    Like VNC, RDP relies on the OS being operational. It doesn’t provide BIOS or pre-boot access.

  • E. SSH (Secure Shell):
    SSH is a command-line tool used to remotely manage systems, but again, it requires the OS and SSH service to be up and running.

For remote bare-metal server maintenance, IP KVM is the correct solution because it provides full out-of-band console access, allowing administrators to manage the server even when the operating system is down.
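
In practice, IP KVM functionality is often delivered through a server's baseboard management controller (for example iLO, iDRAC, or a generic IPMI BMC). As a hedged illustration of out-of-band access, the standard ipmitool utility can reach such a controller even when the OS is down; the BMC address and credentials below are placeholders:

  # Check chassis power state via the BMC (out-of-band, no running OS required)
  ipmitool -I lanplus -H 192.0.2.10 -U admin -P 'bmc-password' chassis power status

  # Open a Serial-over-LAN console to watch POST/BIOS and boot messages
  ipmitool -I lanplus -H 192.0.2.10 -U admin -P 'bmc-password' sol activate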




Question No 8:

A technician has been assigned the task of ensuring that a virtual machine (VM) maintains high availability in the event of host failure or other disruptions. The goal is to minimize downtime and ensure continuity of services. Which of the following actions should the technician take to provide high availability for the VM in the most efficient and effective manner?

A. Take a snapshot of the original VM
B. Clone the original VM
C. Convert the original VM to use dynamic disks
D. Perform a P2V (Physical-to-Virtual) conversion of the original VM

Correct Answer: B. Clone the original VM

Explanation:

High availability (HA) in virtualization refers to the ability of a system to maintain continuous operation or quickly recover from failure, ensuring that the virtual machine (VM) can be restored or failover occurs with minimal disruption. In a production environment, this is essential for critical applications and services.

B. Clone the original VM – Correct Answer

Cloning a VM creates a complete, functional copy of the original VM, including its configuration, virtual disks, and data. By deploying the cloned VM on a different host within a virtualization cluster, the technician can set up load balancing or failover configurations, depending on the hypervisor platform (e.g., VMware vSphere HA, Hyper-V Failover Clustering). This is an efficient method because:

  • It requires minimal downtime.

  • It creates a redundant, ready-to-launch copy of the VM.

  • It integrates well into HA clusters for automated recovery.

Why the Other Options Are Incorrect:

  • A. Take a snapshot of the original VM:
    Snapshots are used for state preservation and rollback, not for high availability. They are not standalone copies and do not protect against host or hardware failure.

  • C. Convert the original VM to use dynamic disks:
    This change affects disk allocation behavior (dynamic vs. fixed size), not availability or redundancy. It has no impact on failover capabilities.

  • D. Perform a P2V conversion of the original VM:
    This process is used to convert a physical machine into a virtual one. Since the VM already exists, a P2V operation is irrelevant in this context.

To provide high availability for a VM efficiently, cloning the original VM and configuring it within an HA cluster allows quick recovery in case of failure. It's a strategic method to reduce downtime and maintain business continuity.
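
As one hedged example, on a libvirt/KVM host a clone can be produced with the virt-clone utility from the virtinst package (the VM names below are hypothetical, and the source VM should be shut down before cloning):

  # Create a full copy of the VM, letting virt-clone name and place the new disks
  virt-clone --original web01 --name web01-clone --auto-clone

  # Start the clone, typically after registering it on a different cluster host
  virsh start web01-clone

Enterprise hypervisors expose the same idea through their own tooling, such as VMware vSphere cloning combined with vSphere HA, or Hyper-V export/import with Failover Clustering.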




Question No 9:

A server administrator receives a support request regarding Ann, a newly created user, who is unable to save files to her home directory on a Linux server. Upon inspecting the directory, the administrator sees the following permissions: dr-xr-xr-- /home/Ann. This indicates that Ann does not have write permissions to her own home directory. To resolve the issue efficiently without granting unnecessary access to others, which of the following chmod commands should the administrator use to correct the permissions and allow Ann to manage files in her directory?

A. chmod 777 /home/Ann
B. chmod 666 /home/Ann
C. chmod 711 /home/Ann
D. chmod 754 /home/Ann

Correct Answer: D. chmod 754 /home/Ann

Explanation:

In Linux, file and directory permissions are represented using a combination of letters (r, w, x) or numeric values (e.g., 754). The permission string shown — dr-xr-xr-- — breaks down as follows:

  • d: indicates a directory

  • r-x (5): the owner (Ann) can read and execute, but not write

  • r-x (5): the group can read and execute

  • r-- (4): others can only read

Since Ann is the owner, she should have write (w) permission to create, modify, or delete files in her own home directory. Without w, she can read files but cannot create or modify them, which is the issue reported.

Why D is Correct (chmod 754)

  • 7 (owner): read, write, and execute — gives Ann full control

  • 5 (group): read and execute only — avoids giving group members write access

  • 4 (others): read only — allows viewing, not modifying, for others

This is a balanced permission set that provides Ann what she needs without granting excessive access to group or public users.

Why the Other Options Are Incorrect:

  • A. 777: Grants full access to everyone, which is a major security risk.

  • B. 666: Gives read and write permissions to all but removes execute, which is essential for accessing directories.

  • C. 711: Grants Ann (the owner) full read, write, and execute access, but reduces group and others to execute-only, so they can no longer list the directory’s contents. It is more restrictive toward other users than necessary and does not preserve their existing read access.

Using chmod 754 /home/Ann correctly resolves the issue by allowing Ann full control over her home directory while limiting access for others, in line with security best practices.
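
A short sketch of the fix and its verification (the surrounding ls output is illustrative):

  # Inspect the current permissions on Ann's home directory
  ls -ld /home/Ann
  # dr-xr-xr-- 2 Ann Ann 4096 Jan 01 12:00 /home/Ann

  # Give the owner full control; keep group read/execute and others read-only
  chmod 754 /home/Ann

  # Equivalent symbolic form that only adds write for the owner:
  # chmod u+w /home/Ann

  # Verify the change
  ls -ld /home/Ann
  # drwxr-xr-- 2 Ann Ann 4096 Jan 01 12:00 /home/Ann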

