220-1201  CompTIA A+ Certification Exam: Core 1 Exam Dumps and Practice Test Questions Set 10 Q181-200

Visit here for our full CompTIA 220-1201 exam dumps and practice test questions.

Question 181:

Which device allows a laptop to expand storage without opening the chassis?

A) Replace the internal SSD
B) Add an SD or microSD card
C) Install a 3.5-inch HDD
D) Add a PCIe expansion card

Answer: B) Add an SD or microSD card

Explanation:

Laptops are designed to be compact, portable systems, and many models use internal components that are difficult or impossible for the average user to upgrade. Because of this compact design, users often rely on external or semi-external methods to expand storage capacity. Among the options listed, adding an SD or microSD card is the most appropriate way to increase storage without physically opening or modifying the laptop, because many laptops include an SD or microSD card slot on the side or front edge. Adding a memory card is extremely simple: the card is inserted into the slot, recognized by the operating system, and immediately available for file storage. It requires no screws, no technical skill, and carries no risk of damaging delicate internal components.

Replacing an internal SSD requires opening the laptop’s chassis. In some ultrabooks or slim models, the SSD may even be soldered directly to the motherboard. If not soldered, replacing it may still involve removing the bottom shell, disconnecting battery cables, and handling sensitive hardware. This process may void the warranty or cause accidental damage if the user is inexperienced. Therefore, it is not a practical method when the requirement is clearly to avoid opening the device.

Installing a 3.5-inch hard drive is not possible in a laptop because 3.5-inch drives are full-size desktop components. Laptops use 2.5-inch drives or M.2 SSDs due to size limitations, and no laptop chassis is physically designed to house a 3.5-inch drive. Even external enclosures for 3.5-inch drives require additional power, making this option irrelevant for internal expansion.

A PCIe expansion card is also invalid because laptops do not have accessible PCIe slots like desktops. Some specialized workstation laptops may support docking stations or ExpressCard slots, but these are limited, rare, and not applicable to standard consumer laptops. A PCIe expansion card cannot be inserted directly into a laptop.

The SD or microSD card option solves these limitations. SD cards come in large capacities such as 128 GB, 256 GB, 512 GB, and even 1 TB, making them practical for general file storage. Although they are not as fast as internal NVMe SSDs, SD cards offer enough performance for storing photos, documents, videos, and backups. They are also highly portable, allowing users to transfer data between devices easily. Many operating systems can mount SD cards as permanent storage, offering convenience for extended use.

Overall, the SD or microSD card is the only option that meets the requirement of expanding storage without opening the laptop or altering internal components. It provides a safe, low-cost, convenient, and widely supported solution for storage expansion.

Question 182:

Which connector is used exclusively to provide dedicated CPU power to the motherboard?

A) 24-pin ATX
B) 4/8-pin EPS
C) PCIe 6-pin
D) Molex

Answer: B) 4/8-pin EPS

Explanation:

Modern motherboards require dedicated power delivery to ensure stable and reliable CPU operation. For this purpose, manufacturers use a separate connector known as the EPS (Entry-Level Power Supply) connector. This connector usually comes as a 4-pin or 8-pin connection and plugs directly into the motherboard near the CPU socket. Its main function is to supply clean, dedicated power to the processor. CPUs consume significant power, especially under heavy workloads such as gaming, video rendering, or multitasking. Therefore, providing them with a reliable, isolated power source is crucial for maintaining system stability and preventing shutdowns or voltage fluctuations.

The 24-pin ATX connector is essential for providing power to the entire motherboard, but it does not focus specifically on the CPU. It distributes electrical power to various components like RAM slots, chipset, PCIe slots, and other onboard components. However, the energy needs of modern CPUs exceed what the 24-pin connector can safely deliver; thus, the additional EPS connector is required.

The PCIe 6-pin connector is primarily designed for graphics cards and other high-performance PCIe devices. While it delivers significant power, it is not compatible with the CPU power socket on the motherboard. Physically, it cannot fit into the EPS connector, and using an adapter is not recommended due to potential power instability.

The Molex connector is an older type of peripheral connector used primarily for powering older drives, case fans, and certain accessories. It was never intended to deliver power to the CPU or modern motherboard components. It also lacks the stability needed for critical components like the CPU.

The EPS 4/8-pin connector is built to meet the voltage and amperage demands of modern processors. Many high-end motherboards even include dual EPS connectors to support extreme overclocking or high-core-count CPUs. Without proper EPS power, the system may fail to boot, randomly shut down, or throttle performance.

For these reasons, the EPS connector is the only correct answer.

Question 183:

Which network type connects devices in a small personal area, such as around a user’s body?

A) LAN
B) WAN
C) PAN
D) MAN

Answer: C) PAN

Explanation:

A Personal Area Network (PAN) is a type of network designed specifically for very short-range communication around a person. These networks support devices such as smartphones, smartwatches, fitness trackers, wireless earbuds, and Bluetooth keyboards or mice. The range of a PAN is typically within a few meters, intended to operate within the immediate personal space of the user. Technologies such as Bluetooth and NFC are commonly used to establish PAN connections.

LAN (Local Area Network) is larger, covering an entire home, office, or building. It supports multiple computers, printers, and servers, far exceeding personal space. WAN (Wide Area Network) spans large geographic regions, connecting multiple LANs over long distances, such as cities or countries. MAN (Metropolitan Area Network) covers medium-scale urban areas.

Since PANs are designed specifically for personal devices around one individual, they are the correct answer.

Question 184:

To allow multiple VLANs to pass traffic between switches, which networking feature is required?

A) Port forwarding
B) Trunking (802.1Q)
C) Static routing
D) PoE

Answer: B) Trunking (802.1Q)

Explanation:

Virtual Local Area Networks (VLANs) allow network administrators to divide a physical network into multiple logical networks. For VLAN traffic to travel between switches while maintaining VLAN separation, a trunk link must be established. Trunking uses the IEEE 802.1Q standard to tag Ethernet frames so that receiving switches know which VLAN each frame belongs to.

Port forwarding and static routing do not handle VLAN tagging. PoE supplies power but has no role in VLAN communication.

Trunking is essential for allowing multiple VLANs to operate across switch-to-switch links, making it the correct choice.
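
To make the tagging mechanism concrete, the following minimal Python sketch builds the 4-byte 802.1Q tag that a trunk port inserts into each Ethernet frame. The VLAN and priority values shown are illustrative; this is a teaching sketch, not a full frame builder.

```python
import struct

def dot1q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte IEEE 802.1Q tag inserted into an Ethernet frame.

    The tag is the Tag Protocol Identifier (TPID, 0x8100) followed by the
    Tag Control Information (TCI): 3-bit priority, 1-bit DEI, and a
    12-bit VLAN identifier.
    """
    if not 1 <= vlan_id <= 4094:
        raise ValueError("usable VLAN IDs are 1-4094; 0 and 4095 are reserved")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack(">HH", 0x8100, tci)

# Frames for VLAN 10 crossing the trunk carry this tag after the source MAC:
print(dot1q_tag(10).hex())  # 8100000a
```

The receiving switch reads the 12-bit VLAN ID from this tag, forwards the frame only to ports in that VLAN, and strips the tag before handing the frame to an untagged access port.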

Question 185:

Which cable type is most susceptible to electromagnetic interference (EMI)?

A) UTP
B) STP
C) Fiber
D) Coax

Answer: A) UTP

Explanation:

Unshielded Twisted Pair (UTP) cables consist of twisted copper wires without any additional shielding. Because of this lack of shielding, they are highly susceptible to electromagnetic interference from nearby electrical sources such as motors, fluorescent lighting, wireless signals, and power lines. STP cables have extra shielding that reduces interference, fiber optic cable uses light and is immune to EMI, and coaxial cables contain shielding layers. Because UTP is the only option with no shielding at all, it is the cable type most susceptible to EMI, making it the correct answer.

Question 186:

Which Wi-Fi standard operates on both 2.4 GHz and 5 GHz and supports MIMO?

A) 802.11g
B) 802.11a
C) 802.11n
D) 802.11b

Answer: C) 802.11n

Explanation:

The 802.11n Wi-Fi standard is significant because it introduced major improvements over its predecessors, including support for both 2.4 GHz and 5 GHz frequency bands, and MIMO (Multiple-Input, Multiple-Output) technology. To fully understand why 802.11n is the correct answer, you must examine the design, capabilities, and historical context of Wi-Fi standards, as well as what MIMO means and how frequency bands affect wireless performance.

Before 802.11n, many devices used earlier standards such as 802.11b, 802.11g, or 802.11a. The 802.11b standard operates in the 2.4 GHz band and delivers relatively low data rates (up to 11 Mbps). The 802.11g standard is also limited to 2.4 GHz but improves data rates significantly (up to 54 Mbps under ideal conditions). On the other hand, 802.11a operates in the 5 GHz band and offers similar data rates to 802.11g, but its single-band operation limits flexibility. When deploying Wi-Fi networks, administrators face a trade-off: 2.4 GHz offers greater range and better penetration through walls, but it is more crowded with interference; 5 GHz offers less interference and more channels, but higher propagation loss.

The 802.11n standard was developed to combine the advantages of both frequency bands. By supporting dual-band operation, 802.11n devices can use 2.4 GHz for better coverage and fallback, or switch to 5 GHz when higher throughput is needed and the environment allows it. This flexibility is a strong reason why many modern routers and clients support 802.11n as a transitional standard before newer ones such as 802.11ac or 802.11ax.

Moreover, 802.11n introduced MIMO technology. MIMO uses multiple antennas at both the transmitter and receiver ends. In a MIMO system, a router or wireless access point could send multiple spatial streams simultaneously, allowing for significantly higher aggregate throughput. For example, a 2×2 MIMO setup (two transmit and two receive antennas) can send two separate data streams concurrently, effectively doubling potential bandwidth under ideal conditions. This innovation dramatically improved performance and reliability, especially in environments where there is multipath reflection (signals bouncing off walls and objects), because MIMO can use that multipath to its advantage instead of being hampered by it.

In addition to MIMO, 802.11n supports channel bonding, where two adjacent 20 MHz channels can be used together to form a 40 MHz-wide channel, effectively doubling the bandwidth if the wireless environment permits it. This, combined with MIMO, allows 802.11n to achieve theoretical data rates up to 600 Mbps (with four spatial streams), although typical consumer devices more commonly support 2×2 MIMO, giving lower but still significant throughput.
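
Those headline figures follow from simple multiplication. The short Python sketch below works through the arithmetic, assuming the per-stream maximums defined by the standard: roughly 72.2 Mbps per stream on a 20 MHz channel and 150 Mbps per stream on a bonded 40 MHz channel, both at the highest MCS with the short guard interval.

```python
# 802.11n theoretical PHY rate = per-stream maximum x number of spatial streams.
# Per-stream maximums assume the highest MCS with the short (400 ns) guard
# interval, as defined by the 802.11n standard.
PER_STREAM_MBPS = {20: 72.2, 40: 150.0}

def max_rate_mbps(channel_width_mhz: int, spatial_streams: int) -> float:
    return PER_STREAM_MBPS[channel_width_mhz] * spatial_streams

print(max_rate_mbps(40, 2))  # 300.0 Mbps - a typical 2x2 consumer device
print(max_rate_mbps(40, 4))  # 600.0 Mbps - the 802.11n theoretical maximum
```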

It is also important to understand why the other options (A, B, D) are incorrect in this context. Option A, 802.11g, is limited to 2.4 GHz and does not support MIMO in the way that 802.11n does. Option B, 802.11a, operates only on 5 GHz and does not support dual-band operation — moreover, it predates widespread MIMO usage. Option D, 802.11b, is an older 2.4 GHz-only standard, with very limited bandwidth and no MIMO support. Thus, none of A, B, or D offer both dual-band and MIMO capabilities in the way that 802.11n does.

From a practical perspective, 802.11n was widely adopted precisely because of this combination. It struck a strong balance among range, speed, and compatibility. Many home routers that were released in the late 2000s and early 2010s supported 802.11n as their highest or one of their highest standards. Devices like laptops, smartphones, and tablets could connect using either frequency band, depending on signal strength, congestion, or throughput requirements. In environments where there was interference, older 2.4 GHz connections could fall back to slower speeds, while 5 GHz enabled faster communication when conditions were favorable. MIMO also improved robustness and throughput in real-world conditions, compensating for signal reflections, obstructions, and mobility.

In summary, the 802.11n standard is the only one among the listed options that meets both criteria: it operates on both major Wi-Fi frequency bands (2.4 GHz and 5 GHz) and supports MIMO technology. That is why the correct answer is C) 802.11n.

Question 187:

When replacing a PSU, which specification must match the motherboard?

A) Wattage
B) Efficiency rating
C) Form factor
D) Number of SATA connectors

Answer: C) Form factor

Explanation:

When you replace a power supply unit (PSU) in a desktop computer, the most critical specification to check for compatibility with the motherboard is the PSU’s form factor. The form factor defines the physical dimensions, mounting hole configuration, connector placement, size, shape, and airflow orientation of the power supply. If the form factor does not match the case and motherboard, the PSU might physically not fit, may not align properly with mounting points, and could lead to severe installation issues — regardless of its wattage or other technical capabilities.

The dominant PSU form factor for most consumer desktops is ATX. An ATX PSU is designed to fit standard ATX and micro-ATX cases, lining up with mounting holes in the power supply bay and aligning with screw positions in the case. If you choose an SFX PSU instead, its dimensions are considerably smaller; while it might fit into a larger ATX case using an adapter bracket, such a configuration is not guaranteed to be stable, and airflow patterns could be compromised. Conversely, putting a full-size ATX PSU in a small-form-factor case made for SFX may be physically impossible: the PSU could be too large to sit correctly, its screws may not line up, and the case might not accommodate its depth or fan orientation.

Moreover, some OEM systems (pre-built branded desktops) use proprietary PSUs with non-standard physical layouts or a custom connector arrangement, meaning that even if you find a PSU that has the correct wattage, it might not mount correctly or align with the case’s venting or mounting structure. Without the correct form factor, you risk blocking airflow, interfering with other internal components, or making it impossible to secure the PSU properly to the chassis. Even if the motherboard has the necessary 24-pin ATX power connector and CPU EPS connector, a mismatched form factor may result in awkward or tight cable routes, leading to potential strain on connectors or airflow obstruction.

It is important to note that other specifications listed in the question are also important in their own right, but they do not guarantee physical or mechanical compatibility with the motherboard. For example, wattage (option A) is essential to ensure the PSU can supply enough power for all system components, but a PSU with very high wattage is useless if it does not physically fit into the case. Similarly, the efficiency rating (option B, such as 80 PLUS Bronze, Gold, etc.) affects power consumption, heat generation, and electricity usage, but not the physical fit. A PSU could be highly efficient, but in the wrong size to fit. Regarding option D, the number of SATA connectors is relevant for connecting storage devices, but this affects only the drives, not how the PSU mounts to the case or its compatibility with the motherboard’s power connectors.

In addition, the form factor influences cable length and placement. PSU cables (such as the 24-pin ATX cable, CPU power cable, PCIe power cables, and SATA power cables) must reach the corresponding connectors on the motherboard, graphics cards, and drives. A PSU designed for a different form factor may place cables in positions that make routing difficult or impossible, or shorter cables might not reach the components, causing stress or requiring extensions.

In short, the PSU form factor is the non-negotiable specification that ensures that the power supply physically fits into the computer case, aligns with mounting points, provides adequate cable routing, and practically matches the motherboard layout. Without matching the form factor, other qualities like wattage or efficiency do not matter because the PSU may simply not be installable or usable in the intended system. Therefore, answer C) Form factor is the correct choice for compatibility.

Question 188:

Which interface is commonly used by NVMe SSDs for the fastest performance?

A) SATA
B) PCIe
C) IDE
D) USB-C

Answer: B) PCIe

Explanation:

To understand why PCIe is the correct interface for NVMe SSDs, it’s helpful to examine how storage interfaces evolved, what limitations earlier interfaces have, and why NVMe was developed to take full advantage of modern high-speed, low-latency storage access.

SATA (Serial ATA) has long been the standard interface for connecting hard drives (HDDs) and traditional SATA SSDs. SATA III (the most common version) supports a maximum theoretical bandwidth of around 6 Gbps (roughly 600 MB/s in practical real-world transfer rates). While that is sufficient for many workloads, it becomes a bottleneck for high-performance solid-state storage, particularly as NAND flash memory gets faster and denser.

IDE is a very old parallel interface that was common in early PCs for connecting hard drives, but it has long been superseded by SATA and other more modern standards. IDE’s speed and signaling characteristics are nowhere near the performance required by modern SSDs, especially NVMe drives.

USB-C is a connector type and can carry various protocols (such as USB 3.x, Thunderbolt, or even DisplayPort), but it isn’t an interface standard by itself. NVMe SSDs are typically internal drives, not external ones, and the interface that gives them their performance is not USB but a protocol designed for very low latency and high throughput.

NVMe (Non-Volatile Memory Express) is a protocol built to make the best use of the low latency and internal parallelism of modern flash-based storage. NVMe was explicitly created to work over PCIe (Peripheral Component Interconnect Express), which provides very high-speed, low-latency links to the CPU. PCIe lanes are point-to-point high-bandwidth connections, and an NVMe SSD can use multiple PCIe lanes (often x4) to achieve extremely high read and write speeds — far beyond what SATA can deliver.

For example, a PCIe 3.0 NVMe SSD using four lanes (x4) has roughly 32 Gbps of raw link bandwidth (close to 4 GB/s) under ideal conditions, and PCIe 4.0 and 5.0 roughly double that figure with each generation. The protocol overhead is minimal and optimized for flash memory, so NVMe SSDs deliver not only high throughput but also very low latency for both random and sequential reads/writes, which is exactly what modern computing demands (fast boot times, quick application loading, rapid file transfers).
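
The speed gap can be sketched with a quick calculation. The Python snippet below compares usable link bandwidth, assuming the published transfer rates and line-encoding overheads for each standard (8b/10b for SATA III, 128b/130b for PCIe 3.0 and later); real drives fall somewhat short of these ceilings because of controller and protocol overhead.

```python
# Back-of-the-envelope link bandwidth comparison (GB/s = usable bytes per second).
def effective_gbytes_per_s(gtransfers_per_s: float, encoding_efficiency: float,
                           lanes: int = 1) -> float:
    # transfer rate x encoding efficiency x lane count, converted from bits to bytes
    return gtransfers_per_s * encoding_efficiency * lanes / 8

sata3   = effective_gbytes_per_s(6, 8 / 10)           # SATA III, 8b/10b encoding
pcie3x4 = effective_gbytes_per_s(8, 128 / 130, 4)     # PCIe 3.0 x4, 128b/130b
pcie4x4 = effective_gbytes_per_s(16, 128 / 130, 4)    # PCIe 4.0 x4, 128b/130b

print(f"SATA III:    ~{sata3:.2f} GB/s")    # ~0.60 GB/s
print(f"PCIe 3.0 x4: ~{pcie3x4:.2f} GB/s")  # ~3.94 GB/s
print(f"PCIe 4.0 x4: ~{pcie4x4:.2f} GB/s")  # ~7.88 GB/s
```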

Moreover, using PCIe as an interface allows SSD manufacturers to design compact M.2 or U.2 form-factor drives that plug directly into the motherboard or into PCIe expansion slots. These drives do not require the same cabling as SATA drives, because they communicate directly with the CPU or chipset via PCIe lanes. This direct connection reduces overhead, increases efficiency, and improves performance.

In contrast, SATA SSDs, while still fast compared to hard drives, are limited by the SATA bus speed, and they require the typical SATA power and data cables, which also add latency and occupy physical space. USB-C external SSDs that advertise NVMe-like speeds often use Thunderbolt protocols or SATA-over-USB adapters, and they still cannot beat the performance of an internal NVMe SSD using PCIe in many cases, especially in terms of latency.

Therefore, the interface that NVMe SSDs typically use to achieve their maximum performance is PCIe. This is why option B is correct.

Question 189:

A laser printer produces vertical streaks on every page. Which component should be checked first?

A) Transfer belt
B) Fuser unit
C) Toner cartridge
D) Paper tray alignment

Answer: C) Toner cartridge

Explanation:

When a laser printer produces vertical streaks on every page, the most common culprit is the toner cartridge. Vertical streaks are typically caused by low toner, contamination, or damage inside the cartridge. The toner cartridge contains powdered toner and a mechanism to distribute it evenly onto the drum. If the toner is unevenly distributed, if an internal roller of the cartridge is scratched, or if dried toner or foreign material has built up on internal parts, the transfer of toner to the drum and ultimately to the paper becomes uneven, which shows up as streaks when the image or text is developed.

First, a technician should remove the toner cartridge and gently shake it side to side (with the printer off) to redistribute any remaining toner powder inside. This sometimes solves streak problems by evening out the toner. If that does not work, the cartridge should be inspected for signs of damage, such as a cracked housing, worn or scratched rollers, or toner leakage. Replacing the toner cartridge with a known-good one is often the fastest way to rule out a bad cartridge as the cause.

While the transfer belt (option A) could cause image defects, it usually contributes to issues like banding, ghosting, or misaligned color regions rather than simple vertical streaking. The transfer belt is involved in the process of transferring toner from the drum to the paper, particularly in color printers, but it is not the first place to look for streaks because its failure usually yields different symptoms.

The fuser unit (option B) fuses the toner onto the paper using heat and pressure. Problems with the fuser often lead to issues like smudging, toner that rubs off, or even paper curling, rather than clean, vertical toner streaks. While a faulty fuser can degrade print quality significantly, it is less likely to produce consistent streaks running the full length of the page.

Paper tray alignment (option D) can cause paper feeding issues, skewing, or misfeeds, but it doesn’t typically create vertical streaks in the toner pattern. Misaligned paper might lead to off-center printing or shifting images, but vertical streaks are usually tied to toner or drum problems.

Therefore, because the toner cartridge is the component most directly associated with applying toner to the drum, and because streaking is most often caused by toner distribution or cartridge damage, the first thing to check is the toner cartridge. That is why answer C is correct.

Question 190:

In a home network, which device is responsible for assigning IP addresses?

A) Modem
B) Router
C) Switch
D) Firewall

Answer: B) Router

Explanation:

In most home networking environments, the router is the device that assigns IP addresses to devices on the local network. This is accomplished via DHCP (Dynamic Host Configuration Protocol), a network management protocol used to automate the process of assigning IP addresses and other configuration information, such as subnet mask, default gateway, and DNS servers. When a device connects to the network, it requests an IP address from the DHCP server; the router typically performs this role, responding with an available IP lease.

To understand why the router is the correct device, it’s useful to examine the roles of each device listed as options. The modem (option A) connects your home network to your Internet Service Provider (ISP). Its primary job is to modulate and demodulate signals between your ISP’s infrastructure (e.g., cable, DSL, fiber) and your local network. It does not usually run a DHCP server for local devices (some modem-router combo units do, but only because they integrate a router into the same box). On its own, a pure modem simply passes data between the ISP and your router; it doesn’t assign IP addresses for your internal network.

A switch (option C) operates at Layer 2 of the OSI model. It forwards Ethernet frames based on MAC addresses and does not perform IP address assignment or route traffic between subnets by default (unless it is a Layer 3 switch, which is uncommon in basic home networks). A standard unmanaged switch has no DHCP, no gateway responsibilities, and no routing logic—it simply connects devices that are already on the same network segment.

A firewall (option D) may filter network traffic, enforce security policies, and block or allow connections, but it generally does not perform the role of DHCP in a typical home network. In many home setups, a firewall is integrated into the router, but that does not change the basic fact that the router is doing the DHCP assignment. In a more advanced enterprise network, a firewall appliance might provide DHCP, but that is not relevant to the typical home environment being tested in a CompTIA A+ exam context.

The router’s ability to assign IP via DHCP is fundamental to how most consumer networks operate. When you power up a new device like a laptop, phone, or smart TV and connect it to your Wi-Fi or wired network, it sends a DHCP Discover packet. The router receives this and, assuming there are free addresses in its pool, responds with a DHCP Offer, giving the device a proposed IP address. The device then sends a DHCP Request, and the router finalizes the assignment with a DHCP Acknowledgment. After that, the device has a working IP, subnet mask, gateway, and DNS settings to communicate on the local network and to access the wider internet.
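
The exchange above (often remembered as DORA: Discover, Offer, Request, Acknowledgment) can be illustrated with a tiny Python sketch of a router's DHCP role. The class name, address pool, and MAC address are illustrative only, and the real protocol carries much more state (leases expire, offers can be declined); this is a teaching model, not a DHCP implementation.

```python
import ipaddress

# Toy model of the DHCP DORA exchange handled by a home router.
class TinyDhcpServer:
    def __init__(self, first: str, last: str):
        # Build a pool of free addresses, e.g. 192.168.1.100-192.168.1.150.
        self.free = [str(ipaddress.ip_address(ip))
                     for ip in range(int(ipaddress.ip_address(first)),
                                     int(ipaddress.ip_address(last)) + 1)]
        self.leases = {}  # MAC address -> leased IP address

    def discover(self, mac: str) -> str:
        # Client broadcasts DHCP Discover; server answers with a DHCP Offer.
        return self.free[0]

    def request(self, mac: str, offered_ip: str) -> str:
        # Client echoes the offer in a DHCP Request; server confirms with an Ack.
        self.free.remove(offered_ip)
        self.leases[mac] = offered_ip
        return offered_ip

router = TinyDhcpServer("192.168.1.100", "192.168.1.150")
offer = router.discover("aa:bb:cc:dd:ee:ff")
lease = router.request("aa:bb:cc:dd:ee:ff", offer)
print(lease)  # 192.168.1.100
```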

Because the router is the device that typically handles this address assignment in a home setup, option B is the correct answer.

Question 191:

Which connector is used to provide dedicated power to a high-end graphics card?

A) SATA power
B) Molex power
C) PCIe 6-pin or 8-pin power
D) USB power

Answer: C) PCIe 6-pin or 8-pin power

Explanation:

High-end graphics cards require more power than what a standard PCIe slot on a motherboard can provide. A PCIe slot typically delivers up to 75 watts, but modern GPUs often need significantly more, especially gaming and workstation-class cards that handle large amounts of data processing, 3D rendering, and high-performance tasks. To meet these power demands safely and consistently, dedicated PCIe power connectors are used. These connectors come in 6-pin, 8-pin, and sometimes even combinations such as 6+2-pin configurations. They can supply additional wattage, ensuring the card operates efficiently without risk of instability or power shortages.
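
A quick power-budget calculation shows why the extra connectors matter. The sketch below uses the nominal ratings commonly cited for each source (75 W from the x16 slot, 75 W per 6-pin connector, 150 W per 8-pin connector); actual limits vary by PSU and card design, so treat the numbers as illustrative.

```python
# Nominal power available to a graphics card from the slot plus its connectors.
CONNECTOR_WATTS = {"slot": 75, "6-pin": 75, "8-pin": 150}

def gpu_power_budget(connectors: list[str]) -> int:
    # Every card draws from the PCIe slot; auxiliary connectors add to that.
    return CONNECTOR_WATTS["slot"] + sum(CONNECTOR_WATTS[c] for c in connectors)

print(gpu_power_budget(["8-pin"]))           # 225 W
print(gpu_power_budget(["6-pin", "8-pin"]))  # 300 W
print(gpu_power_budget(["8-pin", "8-pin"]))  # 375 W
```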

Other power connectors listed in the options are not appropriate for GPU power requirements. SATA power connectors are designed for hard drives, SSDs, and optical drives; they do not carry the required amperage or voltage structure needed for GPU loads. Molex connectors are older legacy power connectors found in systems from earlier generations, and while adapters exist, they are not recommended because they cannot reliably support the high power draw of modern GPUs and can lead to overheating or electrical issues. USB power, on the other hand, provides extremely limited current compared to GPU needs and is meant only for peripherals like keyboards, mice, and small external devices.

PCIe power connectors are specifically engineered for graphics cards and include secure locking mechanisms and proper pin arrangements to deliver clean, stable power. Graphics card manufacturers design their products with dedicated PCIe connectors to ensure the card receives the correct wattage. Some GPUs require multiple connectors, such as dual 8-pin or one 6-pin combined with an 8-pin, depending on power requirements.

For these reasons, PCIe connectors are the correct choice for delivering dedicated power to a high-end graphics card.

Question 192:

Which component stores the system’s firmware and is responsible for hardware initialization?

A) RAM
B) BIOS/UEFI chip
C) Northbridge
D) GPU

Answer: B) BIOS/UEFI chip

Explanation:

The BIOS or UEFI chip is one of the essential components on a motherboard responsible for storing firmware that initializes hardware when the computer starts. This chip contains the instructions required to perform the initial hardware checks and boot processes. When the computer powers on, the CPU immediately looks to this chip to begin the Power-On Self-Test (POST). This process checks whether components like memory, storage, keyboard, graphics card, and essential motherboard circuits are functioning correctly. If everything is working, the firmware passes control to the operating system loader on the primary storage device.

RAM does not store firmware because it is volatile memory, which means its data disappears when the system powers off. The Northbridge, historically used in older motherboard designs, managed communication between the CPU, RAM, and GPU but did not store firmware. Modern systems integrate these functions directly into the CPU. The GPU’s purpose is to render graphics and display imagery; it has no role in system firmware or hardware initialization.

UEFI is the successor to traditional BIOS, offering more advanced features such as secure boot, network boot, support for large storage drives, and a more user-friendly interface. Despite these differences, both BIOS and UEFI chips share the same essential purpose: they store the firmware that launches the system and prepares hardware for operation. Because they provide the foundational instructions required to turn hardware into a functioning computer, these chips are vital to system operation.

Question 193:

What is the primary purpose of thermal paste between a CPU and its heat sink?

A) Reduce noise levels
B) Prevent electrical interference
C) Improve heat transfer
D) Secure the heat sink in place

Answer: C) Improve heat transfer

Explanation:

Thermal paste plays a crucial role in CPU cooling systems. Although the surfaces of both the CPU’s integrated heat spreader and the heat sink appear smooth to the naked eye, they contain microscopic imperfections such as tiny grooves and air gaps. Air is a poor conductor of heat, so these gaps reduce thermal efficiency and cause the CPU to run hotter. Thermal paste fills these microscopic imperfections, ensuring that heat is transferred more efficiently from the CPU to the heat sink.

The heat sink’s job is to dissipate heat into the surrounding air, often assisted by one or more fans. The effectiveness of a heat sink depends on how well it receives heat from the CPU. Without proper thermal paste, the CPU can overheat, throttle performance, or, in extreme cases, shut down to prevent damage. This is why thermal paste is required whenever a heat sink is installed or replaced.

None of the other answer choices correctly describes the purpose of thermal paste. It does not reduce noise levels because noise is typically caused by fans or moving components, not thermal compounds. It does not prevent electrical interference because thermal paste is not used for electrical shielding; in fact, many types of thermal paste are slightly conductive or capacitive, so care must be taken when applying it. Thermal paste also does not secure the heat sink in place; mechanical brackets, screws, or latches serve that purpose, not the paste itself.

Because thermal conductivity between the CPU and heat sink is critical to maintaining safe operating temperatures, improving heat transfer is the correct and primary function of thermal paste.

Question 194:

A technician upgrades a PC and notices that the system clock resets every time it is powered off. Which component is likely failing?

A) Power supply
B) CMOS battery
C) CPU
D) GPU

Answer: B) CMOS battery

Explanation:

The CMOS battery provides continuous power to the motherboard’s real-time clock (RTC) and stores BIOS/UEFI configuration settings when the computer is turned off. When this battery begins to fail, the system cannot retain settings such as the date, time, boot sequence, and other firmware configurations. As a result, the system clock resets to a default date and time every time the machine loses power.

The power supply is not responsible for storing BIOS settings. While it powers the system during operation, it does not supply power to the CMOS circuitry when the system is off. The CPU executes instructions and performs calculations, but does not maintain firmware settings. The GPU renders graphics and has no connection to system timekeeping or BIOS memory retention.

Replacing the CMOS battery, typically a CR2032 lithium coin cell, resolves this problem in most cases. These batteries are inexpensive and commonly found in desktop and laptop motherboards. A failing CMOS battery is a common issue in older systems. When replaced, the motherboard can once again retain BIOS settings even when the computer is unplugged.

Question 195:

Which type of cable is most commonly used for connecting an internal hard drive to a motherboard in modern PCs?

A) IDE ribbon cable
B) SATA cable
C) Coaxial cable
D) Thunderbolt cable

Answer: B) SATA cable

Explanation:

Most modern internal hard drives and 2.5-inch SSDs use SATA (Serial ATA) cables to connect to the motherboard. SATA replaced older IDE ribbon cables because it offered higher data transfer speeds, improved airflow inside the case due to its slimmer design, and hot-swapping capabilities in some configurations. The SATA interface delivers sufficient bandwidth for traditional spinning hard drives and many consumer-level SSDs, making it the standard choice for internal storage connections.

IDE ribbon cables were widely used in older systems but are now obsolete because they are bulky, slow, and incompatible with modern drives. Coaxial cables are used primarily for cable TV and internet connections, not for internal storage devices. Thunderbolt cables support extremely fast data transfer but are typically used for external peripherals, not internal motherboard connections.

SATA’s reliability, speed, and design make it the most common and practical connection for internal drives in contemporary PCs.

Question 196:

Which type of device is primarily used to protect a computer from power surges?

A) UPS
B) Surge protector
C) Line conditioner
D) Power strip

Answer: B) Surge protector

Explanation:

A surge protector is specifically designed to protect electronic devices from sudden spikes in electrical voltage. Power surges can occur for many reasons, including lightning strikes, power grid fluctuations, electrical faults, or even large appliances switching on and off within a home or office environment. These sudden increases in voltage can damage sensitive computer components such as the motherboard, power supply, RAM, hard drives, and monitors. The purpose of a surge protector is to absorb or divert excess voltage so that it does not reach connected devices. Surge protectors typically contain metal oxide varistors (MOVs), which act as sacrificial components that redirect excessive electrical energy into the ground.

Comparing the surge protector to the other listed options helps clarify why it is the correct choice. A UPS, or uninterruptible power supply, is certainly a valuable device because it not only provides short-term battery backup during power outages but often includes built-in surge protection as well. However, its primary function is battery backup, not surge protection, so it is not the answer to the question being asked. Line conditioners regulate voltage levels and provide clean power, but are not designed specifically to handle high-voltage spikes in the way a surge protector is. A basic power strip, although it may resemble a surge protector, usually lacks the internal MOV components needed to block electrical surges. A simple power strip only provides additional outlets; it offers no real protection unless explicitly labeled as a surge protector.

Surge protectors help extend the life of computer hardware by ensuring that unexpected voltage spikes do not damage internal circuits. They are particularly important in regions with unstable electrical grids or where storms occur frequently. In addition to voltage spikes, many surge protectors also filter electrical noise, further protecting sensitive electronics. Some models provide indicator lights to show whether surge protection is still active, as MOVs can degrade over time. Once the surge protection capability is depleted, the device must be replaced to maintain proper protection.

Because surge protectors are affordable, easy to use, and effective at preventing damage, they are an essential accessory for any computer setup. For these reasons, the correct answer is a surge protector.

Question 197:

Which display technology uses a backlight to illuminate the screen?

A) OLED
B) Plasma
C) LCD
D) MicroLED

Answer: C) LCD

Explanation:

Liquid Crystal Display (LCD) technology uses a backlight to illuminate the screen. LCD panels do not produce their own light; instead, they rely on a light source behind the screen. This backlight is typically composed of LEDs (light-emitting diodes) in modern displays, although older systems used fluorescent tubes such as CCFLs. The liquid crystal layer itself does not emit light. Instead, it manipulates the light passing through it by twisting or aligning the crystals to block or allow different amounts of illumination. This process forms the images you see on the screen.

The key distinction between LCD panels and other technologies lies in how the light is produced and controlled. OLED panels, for example, do not use a backlight because each pixel produces its own light. This allows OLEDs to display deep blacks and high contrast since any pixel can be turned off completely. Plasma displays also generate their own light using electrically charged gas cells, making a backlight unnecessary. MicroLED technology operates similarly to OLED in that each pixel emits its own light without requiring a backlight.

LCD’s reliance on a backlight means it cannot achieve perfect blacks, because some amount of light always leaks through, even in dark scenes. However, LCDs offer advantages such as energy efficiency, bright images, long lifespan, and affordability. LED-backlit LCD screens have largely replaced older CCFL-backlit displays because LEDs are more efficient, brighter, thinner, and more durable. Edge-lit and full-array local dimming technologies have improved contrast over the years, though they still cannot match the deep blacks of OLED.

The defining characteristic of LCD is that it requires a backlight for image visibility, making it the correct answer.

Question 198:

A user reports that their laptop touchpad is not responding. What should be checked first?

A) Touchpad driver reinstall
B) BIOS firmware update
C) Touchpad disables the function key
D) Operating system reinstall

Answer: C) Touchpad disable function key

Explanation:

When a laptop touchpad becomes unresponsive, the most common and simplest cause is that it has been accidentally disabled using the function key shortcut. Many laptops include a dedicated function key combination—often Fn plus one of the numbered F-keys—that toggles the touchpad on or off. Users may accidentally trigger this shortcut while typing or adjusting volume or brightness. Before performing deeper troubleshooting, checking this function key is the most efficient starting point.

This troubleshooting step is quick, non-invasive, and does not require modifying system settings or installing software. If the touchpad is indeed disabled at the firmware or hardware toggle level, reinstalling drivers or updating BIOS will not resolve the issue. Furthermore, reinstalling the operating system is unnecessarily drastic, time-consuming, and typically unrelated to touchpad functionality.

Drivers and firmware updates are valid troubleshooting steps when hardware toggles fail to address the issue, such as in cases where the driver becomes corrupted or incompatible after an update. However, the majority of cases involving unresponsive touchpads stem from accidental disabling.

Because checking the function key is the simplest and most commonly effective first step, it is the correct answer.

Question 199:

Which type of network cable is most resistant to electromagnetic interference?

A) UTP
B) STP
C) Coaxial
D) Fiber optic

Answer: D) Fiber optic

Explanation:

Fiber optic cables transmit data using pulses of light. Because they use light instead of electrical signals, they are completely immune to electromagnetic interference (EMI). This makes fiber optic cabling ideal for environments with heavy electrical equipment or radio frequency activity. Fiber optic cables also support extremely high bandwidth and long-distance communication without signal degradation.

UTP (unshielded twisted pair) cables are common in networking but offer limited protection against interference. STP (shielded twisted pair) cables provide better shielding than UTP but still conduct electrical signals, making them susceptible to strong EMI. Coaxial cables offer better resistance than UTP or STP, due to their shielding, but they are not immune.

Fiber optic stands apart because it does not use electricity at all, making it the best choice for environments with high EMI.

Question 200:

Which type of computer port is rectangular and commonly used for connecting printers?

A) HDMI
B) USB Type-A
C) Serial
D) Parallel

Answer: D) Parallel

Explanation:

The parallel port, also known as the Centronics port or LPT port, is a wide rectangular connector historically used for connecting printers. Its shape is distinctive: long, flat, and lined with multiple pins. The parallel port was once the standard printer interface before USB became dominant. It transmitted multiple bits of data simultaneously, which is why it contains so many pins. The protocol supported early printers and other peripherals such as scanners, though printers were its most common use.

USB Type-A is rectangular, but is not historically associated with printers in the same way as parallel ports are. Serial ports use a different connector shape entirely, and HDMI is used for video. The parallel port’s legacy association with printers makes it the correct answer.
