VCS-285 Veritas Practice Test Questions and Exam Dumps
Question No 1:
An administrator has an MS-Windows policy type that has failed for a Microsoft Windows NetBackup client. The policy uses the ALL_LOCAL_DRIVES backup selection. See the extract from the end of the job details below:
Which two logs will provide the most relevant information for troubleshooting and resolving the error? (Choose two.)
A. Operating system logs on the client
B. NetBackup bpbrm logs on the media server
C. NetBackup bptm logs on the media server
D. NetBackup bpfis logs on the media server
E. NetBackup bpfis logs on the client
Answer: A, E
Explanation:
When a NetBackup policy that uses a backup selection such as ALL_LOCAL_DRIVES fails, the most relevant logs are those that capture what is happening on the client (it is the client that enumerates and reads the local drives) and any process on the media server that might be influencing the backup operation.
Operating system logs on the client (A):
The operating system logs on the client contain valuable information about the state of the client machine, such as disk errors, permission issues, and other system-level errors that may prevent NetBackup from accessing local drives. If the client has specific errors related to access or hardware issues, these logs will provide insights into whether the problem is local to the client machine.
NetBackup bpfis logs on the client (E):
The bpfis log is specifically associated with the NetBackup client-side file system backup processes. This log records detailed information on how the backup process interacts with the local file system on the client. If the policy fails due to permission errors or issues with accessing specific drives, the bpfis logs on the client would contain critical information.
While other logs such as bpbrm and bptm on the media server are important for monitoring the media server and the transport of data during backup, they are generally less relevant when troubleshooting client-side issues such as backup selection failures based on local drives. The media server logs would typically help identify if the issue lies with data transfer, rather than the initial backup interaction with the client.
Thus, the most relevant logs for troubleshooting this specific issue are the Operating system logs on the client (A) and NetBackup bpfis logs on the client (E), as they will provide insight into both the system-level and file-system-level issues on the client that may be preventing the backup from running correctly.
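As a practical next step, the bpfis legacy log on the client only exists if its log directory has been created. A minimal sketch, assuming a default UNIX/Linux install path (on Windows the equivalent directory is <install_path>\NetBackup\logs\bpfis):

```shell
# Enable bpfis legacy logging on the client by creating its log directory;
# NetBackup legacy processes only write logs when the directory exists.
mkdir -p /usr/openv/netbackup/logs/bpfis

# Optionally raise verbosity before re-running the backup (level 5 is an
# example value; tune it for your environment):
# echo "VERBOSE = 5" >> /usr/openv/netbackup/bp.conf

# Re-run the failing backup, then review the newest bpfis log file and the
# client's operating system event/system logs side by side.
```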
Question No 2:
An administrator has a job that has failed. Upon reviewing the Detailed Status tab for the job, the administrator sees the following information:
Which setting value prevented the job from running during the backup window?
A. the “Maximum concurrent jobs” storage unit setting
B. the Global Attributes > “Maximum jobs per client” primary server host property
C. the Timeouts > “Client read timeout” primary server host property
D. the “Limit jobs per policy” policy setting
Answer: A
Explanation:
In this scenario, the administrator is facing an issue where a job has failed during the backup window. To determine the cause of the failure, we need to understand the specific configuration setting that prevented the job from running.
Let’s analyze each option to identify the root cause:
A. "The 'Maximum concurrent jobs' storage unit setting": This setting defines the maximum number of concurrent jobs that can run using a specific storage unit at any given time. If this limit is exceeded, additional jobs cannot be initiated until a running job completes. If the backup window is configured with more jobs than the storage unit's maximum concurrent job setting can handle, the system will prevent new jobs from starting, causing a failure to run the job during the backup window. This is a likely reason for the failure if multiple jobs are competing for the same resources during the backup window.
B. "The Global Attributes > 'Maximum jobs per client' primary server host property": This property controls the maximum number of jobs that can be assigned to a single client during a backup window. However, it would only prevent the job from running if the number of jobs per client exceeds the defined limit. The problem described in the question seems to involve a resource limitation rather than exceeding client-specific job limits.
C. "The Timeouts > 'Client read timeout' primary server host property": This setting is related to the time allowed for the client to respond before the backup operation is terminated. While this could cause a failure if the client is not responding within the allotted time, it does not specifically address the problem of too many concurrent jobs during the backup window.
D. "The 'Limit jobs per policy' policy setting": This setting limits the number of jobs that can be initiated based on the backup policy configuration. While this could theoretically limit the number of jobs, the issue described seems to be tied to resource contention at the storage unit level, making this option less likely.
Conclusion: The most plausible cause of the job failure during the backup window is the maximum concurrent jobs setting of the storage unit. If the system reaches its concurrent job limit, it will prevent new jobs from starting until resources become available. Therefore, the correct answer is A.
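To confirm this diagnosis from the command line, the storage unit's concurrency limit can be inspected directly. A sketch, assuming a default install path and a placeholder storage unit name ("stu_disk_01" is not from the question):

```shell
# List the storage unit's attributes in user-readable format; the output
# includes the "Max Concurrent Jobs" value configured for the storage unit.
/usr/openv/netbackup/bin/admincmd/bpstulist -label stu_disk_01 -U

# Compare that value against the number of jobs queued against this storage
# unit in the Activity Monitor during the backup window.
```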
Question No 3:
An administrator views the following errors from the Detailed Status tab of a failed job in the Activity Monitor.
Which command should the administrator perform to test the communication between the primary server and the client based on the output?
A. nbdb_ping
B. bpclntcmd -self
C. bptestbpcd
D. bptestnetconn -s
Answer: C
Explanation:
When troubleshooting NetBackup communication issues between the primary server and the client, it is important to test connectivity between the two to determine where the issue might lie. Based on the context provided, we are looking for a command that directly tests the communication between the primary server and the client, especially considering errors in the Activity Monitor.
Let's break down the options:
A. nbdb_ping
This command is used to test communication with the NetBackup Database (NBDB), not directly between the primary server and the client. It checks the connection to the NetBackup database, which is not the primary focus in this scenario. This command would be more useful if there were database-related issues, not client-server communication issues.
B. bpclntcmd -self
The bpclntcmd -self command is typically used to gather the client configuration or status on the client itself. It is useful for troubleshooting client-side configuration or errors, but it does not specifically test communication with the primary server. Therefore, this is not the appropriate command for testing server-client communication.
C. bptestbpcd
This is the correct command for testing the communication between the NetBackup primary server and the client. The bptestbpcd command checks the connectivity to the bpcd daemon, which is responsible for communication between the primary server and the client. It performs a test to verify if the client is reachable from the primary server, making it the right choice in this case.
D. bptestnetconn -s
The bptestnetconn -s command is used to test basic network connectivity between the client and the server by checking for open ports and basic network reachability. While it can help diagnose some network issues, it does not specifically test the communication between the primary server and the client at the application level, as bptestbpcd does.
In summary, based on the context and the requirement to test communication between the primary server and the client, the administrator should use the bptestbpcd command. This will specifically check if the client can communicate with the primary server's bpcd daemon, which is the core component responsible for initiating and managing backup operations. Therefore, the correct answer is C.
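A hedged usage sketch, run from the primary server (the client name "winclient01" is a placeholder):

```shell
# Exercise the full connection path to the client's bpcd daemon, including
# host name resolution, certificate exchange, and the bpcd handshake.
/usr/openv/netbackup/bin/admincmd/bptestbpcd -client winclient01 -verbose -debug

# A successful run prints the connection endpoints; a failure reports the
# NetBackup status code at the point where the connection breaks down.
```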
Question No 4:
Refer to the exhibit. A backup job configured with a retention level of 2 and “Policy volume pool” set to the “server_tapes” is failing due to status code 96: Unable to allocate new media for backup. See the output of the available_media command below:
Which task in the NetBackup Administration Console should the administrator perform to resolve the status code 96 error?
A. change volume C11201 to the “scratch_pool” volume pool
B. change volume E02002 to the “scratch_pool” volume pool
C. change volume E02003 to the “scratch_pool” volume pool
D. unfreeze volume E02004 in the “server_tapes” volume pool
Answer: B
Explanation:
In NetBackup, status code 96 typically indicates that the system is unable to allocate new media for the backup job. This can happen when there are no available tapes in the specified volume pool (in this case, "server_tapes") or when all available media in the pool are either frozen or unavailable.
The output of the available_media command typically shows the status of the media in the pool, such as whether the media is frozen, available, or expired. Media that is frozen or full cannot be used for new backups, which is likely why the backup job is failing.
Let’s break down each option and assess how it would affect the issue:
Option A: Change volume C11201 to the “scratch_pool” volume pool
If volume C11201 is already part of the "server_tapes" volume pool, moving it to the "scratch_pool" would not help resolve the issue of unavailable media in the "server_tapes" pool. Since C11201 is not specified in the available media output, it’s unlikely that it is contributing to the problem. This action would not resolve the issue.
Option B: Change volume E02002 to the “scratch_pool” volume pool
This is the correct solution. Looking at the output of the available_media command, E02002 is in the "server_tapes" pool but is frozen. A frozen volume cannot be used for new backups, so moving this volume to the "scratch_pool" will make it available for re-use as scratch media. This action would free up media in the "server_tapes" pool and allow the backup job to allocate new media.
Option C: Change volume E02003 to the “scratch_pool” volume pool
Similar to Option B, E02003 is also in the "server_tapes" pool, but it is in a "full" state, meaning it is not available for new backups. However, since this volume is full and not frozen, changing it to the "scratch_pool" would not resolve the immediate issue of frozen media in the "server_tapes" pool. The volume E02002 is a better candidate because it is frozen, which is preventing its use.
Option D: Unfreeze volume E02004 in the “server_tapes” volume pool
This option suggests unfreezing E02004, but the available_media output does not show that E02004 is frozen. If E02004 is not frozen, unfreezing it would have no impact. The real issue appears to be with the frozen volume E02002, so unfreezing that volume or moving it to the "scratch_pool" would be a more appropriate solution.
In conclusion, moving E02002 to the "scratch_pool" will resolve the status code 96 error by making the media available for reuse in the backup process. Therefore, the correct answer is B: change volume E02002 to the "scratch_pool" volume pool.
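The same task can be performed from the command line instead of the Administration Console. A sketch, assuming default install paths; the scratch pool number (4 below) is a placeholder that must be confirmed first:

```shell
# List all volume pools with their pool numbers to find "scratch_pool":
/usr/openv/volmgr/bin/vmpool -listall -b

# Move volume E02002 into the scratch pool (replace 4 with the actual
# pool number reported above):
/usr/openv/volmgr/bin/vmchange -p 4 -m E02002

# For comparison, a frozen tape would instead be released with:
# /usr/openv/netbackup/bin/admincmd/bpmedia -unfreeze -m <media_id>
```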
Question No 5:
A restore has failed with the following job details: Which two resources should the administrator use to troubleshoot this issue? (Choose two.)
A. the logs from the NetBackup bpbkar process
B. the nbdevcontig command
C. the operating system logs
D. the bpmedia command
E. the robtest command
Answer: A, C
Explanation:
When troubleshooting a restore failure in a backup and recovery environment like Veritas NetBackup, administrators need to focus on logs and diagnostic tools that help identify the root cause of the failure. The two resources that are most relevant for this scenario are the bpbkar process logs and the operating system logs.
A. the logs from the NetBackup bpbkar process
This is one of the most important resources to check when troubleshooting a restore failure. The bpbkar process is responsible for data backup and restore operations in NetBackup. The logs generated by this process can provide detailed information on why the restore failed, including issues such as network problems, missing files, or media errors. If the failure is related to a problem during data transfer, communication between the client and server, or file restoration, the bpbkar logs are often the first place to look.
B. the nbdevcontig command
The nbdevcontig command is used to verify and defragment disk storage on NetBackup media servers. It helps optimize storage but does not provide much useful information for troubleshooting restore issues. This command is typically used to improve the performance of backup or restore operations, but it is not a troubleshooting tool for resolving failures.
C. the operating system logs
The operating system logs are another key resource for troubleshooting restore failures. Sometimes, the issue could be related to system resources, file system errors, disk space, or other OS-level issues that are outside of NetBackup's control. Checking these logs can help identify problems such as disk failures, permission issues, or network connectivity problems, which may be causing the restore to fail.
D. the bpmedia command
The bpmedia command is used to manage media (tapes, disks, etc.) in NetBackup, including operations like moving, labeling, or checking media status. While useful for media-related issues, it does not directly address restore failures unless the issue is specifically related to media problems (e.g., media errors or missing media). It is not the first tool to use when troubleshooting a restore failure.
E. the robtest command
The robtest command is used to test robotic library functionality, such as verifying the communication between NetBackup and a robotic tape library. While it can be helpful for troubleshooting hardware-related issues with the tape library, it is less relevant when troubleshooting restore failures unless the problem is specifically related to robotic library functionality (e.g., the library is not accessible or media is not being correctly loaded). It is not typically used for more general restore failures.
In summary, the most relevant resources for troubleshooting a restore failure are the logs from the NetBackup bpbkar process and the operating system logs, which can provide detailed error messages and clues to the underlying problem. Therefore, A and C are the correct answers.
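If the bpbkar log directory does not yet exist on the client, no log is written. A sketch for enabling the legacy logs and checking the OS side, assuming a default UNIX/Linux install:

```shell
# Create all legacy log directories (bpbkar, tar, bpcd, etc.) in one step
# using the mklogdir script shipped with NetBackup:
/usr/openv/netbackup/logs/mklogdir
# On Windows the equivalent is <install_path>\NetBackup\logs\mklogdir.bat

# Reproduce the failing restore, then review the operating system logs for
# disk, permission, or network errors around the same timestamp, e.g.:
# journalctl -p err --since "1 hour ago"     (Linux systems with systemd)
```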
Question No 6:
Which diagram supports backups spanning BasicDisk storage?
A. A
B. B
C. C
D. D
Answer: A
Explanation:
To determine which diagram supports backups spanning BasicDisk storage, it's important to understand the concept of "spanning" backups. This refers to a backup operation that involves multiple storage locations or devices, which could be crucial when dealing with large volumes of data or specific storage configurations.
In the case of BasicDisk storage, this typically means the backup system is using a basic disk setup without advanced configurations like mirrored volumes or RAID arrays. The correct diagram would need to illustrate a method where data is effectively distributed or supported across multiple BasicDisk storage volumes.
Without the actual diagrams, the answer A is the most likely candidate, assuming it shows a configuration or method that aligns with the spanning concept on BasicDisk.
Question No 7:
An administrator has two offices in different cities with Auto Image Replication (AIR) implemented to replicate the backups from one office location to the other using a Storage Lifecycle Policy (SLP). Multiple small servers are backed up daily, which results in many replication jobs in the environment. The SLP parameters are shown in the exhibit.
After modifying the SLP parameters, three backup jobs are completed and the size of each backup job is 5 GB.
A backup window has closed, and no new backup jobs will occur. When does the next replication job start?
A. 75 minutes after the backup jobs have finished
B. when the force interval for small replication jobs time is reached
C. immediately after the backup jobs have finished
D. when the replication job batch size is 20 GB
Answer: B
Explanation:
Auto Image Replication (AIR) is a feature used to replicate backup images between different sites, ensuring disaster recovery and redundancy. The replication process in this context is managed by Storage Lifecycle Policies (SLP), which dictate when and how replication jobs occur based on specific parameters like job size and time intervals.
The question outlines a scenario where three backup jobs of 5 GB each have been completed, and no new backup jobs will occur until the backup window is reset. The SLP parameters have been modified, and now the system must decide when the next replication job should start.
Let's break down each of the provided options:
A. 75 minutes after the backup jobs have finished: This option suggests that the replication would be delayed by a fixed amount of time (75 minutes) after the backup jobs complete. While some systems use fixed delays for certain replication events, in this case, the timing of replication is based on the force interval parameter for small jobs, not a fixed time after job completion.
B. when the force interval for small replication jobs time is reached: This option refers to the force interval for small replication jobs. In many backup environments, especially those handling smaller backup images, there is a parameter set within the SLP configuration that dictates how often small replication jobs are forced to start. If the total size of the backup images in this instance (15 GB, or 3 jobs of 5 GB) doesn’t meet a specified threshold for batch size, the next replication job will be initiated once the force interval for small jobs is reached. This is the correct behavior for ensuring that replication jobs do not occur too frequently or inefficiently when the data size is small.
C. immediately after the backup jobs have finished: While this might seem like an immediate solution, replication does not typically begin immediately after each backup unless the batch size or other conditions are met. In most systems, replication jobs are organized in batches or based on time-based intervals, so this would not be the case unless configured differently.
D. when the replication job batch size is 20 GB: This option refers to waiting until the total backup data reaches 20 GB before starting a replication job. However, this would not be the trigger for the next replication job in this specific scenario, as the replication process is governed more by the force interval for smaller jobs than a strict batch size.
Therefore, the correct answer is B, as the next replication job starts when the force interval for small replication jobs is met. This ensures that smaller backup jobs are efficiently grouped together for replication based on configured time or data thresholds.
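While the force interval timer is running, the images waiting to be replicated can be listed with nbstlutil. A sketch, assuming a default install path; the lifecycle name "slp_air" is a placeholder:

```shell
# List images whose SLP copies (such as the replication copy) are not yet
# complete, in user-readable format:
/usr/openv/netbackup/bin/admincmd/nbstlutil stlilist -lifecycle slp_air -image_incomplete -U

# Once the force interval for small jobs elapses, these queued images are
# batched into the next replication job even though the batch size
# threshold has not been reached.
```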
Question No 8:
The primary server catalog is being recovered to a new server. The administrator receives the message displayed below:
In the NetBackup Administration Console, barcode GBP847S1 is associated with media ID P847S1
How should the administrator proceed?
A. delete the tape > set the barcode rule accordingly > re-inventory the library > run recovery
B. use bplabel to change the media ID > re-inventory the library > perform recovery
C. use nbdelete to remove the media ID > use vmadd to re-add the media ID > re-inventory the library > perform recovery
D. delete the tape > set the media ID generation rule accordingly > re-inventory the library > run recovery
Answer: B
Explanation:
When recovering the primary server catalog to a new server, it’s important to align the barcode and media ID assignments correctly so that NetBackup can properly access the correct tape and associated data. The situation described involves a mismatch between the barcode and media ID in the NetBackup catalog. Let's break down each option:
Option A: delete the tape > set the barcode rule accordingly > re-inventory the library > run recovery
Deleting the tape in this scenario is unnecessary unless there are further issues with the tape itself. Adjusting the barcode rule could indeed ensure that future tapes are correctly labeled and cataloged, but it is not the most efficient way to resolve the immediate issue. The recovery process is not directly related to deleting the tape; rather, it requires correcting the media ID and ensuring the correct matching of barcode and media ID.
Option B: use bplabel to change the media ID > re-inventory the library > perform recovery
Using the bplabel command is the correct method to change the media ID associated with the tape. The bplabel command allows you to reassign the correct media ID to the tape, which is the root issue in this case (the barcode GBP847S1 is mismatched with media ID P847S1). After changing the media ID, re-inventorying the library ensures that NetBackup detects the updated media information. This approach resolves the mismatch directly and enables recovery to proceed smoothly.
Option C: use nbdelete to remove the media id > use vmadd to re-add the media id > re-inventory the library > perform recovery
The nbdelete command would remove the media ID from the catalog, but this could lead to unnecessary complications, especially if the tape data is still required for recovery. The vmadd command would be used to re-add the media, but it is not the most straightforward or correct approach for fixing a media ID mismatch. This method would involve unnecessary deletion and re-adding of the media, which could disrupt the recovery process.
Option D: delete the tape > set the media ID generation rule accordingly > re-inventory the library > run recovery
Deleting the tape in this context, like in Option A, is unnecessary and could potentially disrupt the catalog and recovery process. Setting the media ID generation rule would help ensure future tapes are labeled correctly, but it does not address the current mismatch issue effectively.
In conclusion, the correct approach to resolve the barcode and media ID mismatch and proceed with the recovery is to use bplabel to change the media ID, then re-inventory the library, and perform the recovery. This ensures that NetBackup correctly identifies and associates the tape with the correct media ID.
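A command-line sketch of this procedure, assuming default install paths; the density "hcart" and the robot type/number in the re-inventory step are placeholders for this environment:

```shell
# Write the media ID label P847S1 onto the tape; -o overwrites an existing
# label (verify the tape holds no needed data first), and -p assigns the
# volume pool:
/usr/openv/netbackup/bin/admincmd/bplabel -m P847S1 -d hcart -o -p server_tapes

# Re-inventory the robotic library so the catalog picks up the change,
# e.g. for TLD robot number 0:
# /usr/openv/volmgr/bin/vmupdate -rt tld -rn 0
```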
Question No 9:
An administrator is adding a new media server appliance to an existing NetBackup appliance primary server.
What is the correct sequence to configure the new appliance in the NetBackup domain?
A. configure the new media server, the configuration process automatically adds the new server to the primary server
B. configure the new media server, then manually add the new media server to the primary server's bp.conf
C. add the new media server to the primary server in the web console under Host properties > Media Servers > Configure Media Server, then configure the media server with a reissue token
D. add the new media server to the primary server in the Web console under Manage > Additional Servers, then configure the new media server
Answer: C
Explanation:
When adding a new media server appliance to an existing NetBackup appliance primary server, the process involves both adding the new server to the primary server's configuration and then properly configuring the new media server.
A. "Configure the new media server, the configuration process automatically adds the new server to the primary server" – This is not the correct answer because, although the media server configuration may include many automatic steps, it still requires specific manual steps to ensure the media server is added to the NetBackup domain in the web console and associated with the primary server. Automatic addition without configuring host properties is not the standard approach.
B. "Configure the new media server, then manually add the new media server to the primary server's bp.conf" – This option is incorrect because the bp.conf file is part of the configuration for individual systems in the NetBackup environment. It's not the primary method for configuring new media servers. Instead, the web console is used to manage and configure these systems.
C. "Add the new media server to the primary server in the web console under Host properties > Media Servers > Configure Media Server, then configure the media server with a reissue token" – This is the correct answer. In this process, the administrator adds the new media server through the Web Console under the Host properties > Media Servers section. A reissue token is often required to properly configure the media server with the primary server, ensuring the security and consistency of the setup. This ensures the new media server is fully integrated into the NetBackup domain.
D. "Add the new media server to the primary server in the Web console under Manage > Additional Servers, then configure the new media server" – While this step involves adding the new media server in the web console, this method is typically more relevant for other types of server additions, such as additional master servers or clusters. The more specific process for media servers is through Host properties > Media Servers rather than Manage > Additional Servers.
In conclusion, the correct sequence is C because it specifies the proper steps for adding a new media server to the existing NetBackup appliance system and configuring it with the required reissue token.
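On the media server side, the reissue token is consumed when fetching the host ID certificate. A hedged sketch (hostnames are placeholders, and exact nbcertcmd options can vary between NetBackup versions):

```shell
# Fetch and trust the primary server's CA certificate, then request this
# host's certificate using the reissue token generated on the primary;
# -token prompts for the token value interactively.
/usr/openv/netbackup/bin/nbcertcmd -getCACertificate -server primary01
/usr/openv/netbackup/bin/nbcertcmd -getCertificate -server primary01 -token
```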
Question No 10:
When the "Maximum data streams" property is enabled in the General tab of Client Attributes for a particular client, what is the resulting behavior?
A. The "Limit jobs per policy" parameter is ignored and either the "Maximum data streams" or "Maximum jobs per client" parameter is used, whichever is lowest.
B. The "Maximum jobs per client" parameter is ignored and either the "Maximum data streams" or "Limit jobs per policy" parameters are used, whichever is lowest.
C. The "Maximum concurrent jobs" parameter per storage unit is ignored and either the "Limit jobs per policy" or the "Maximum data streams" parameters are used, whichever is lowest.
D. The "Maximum concurrent jobs" parameter per storage unit is ignored and either the "Maximum jobs per client" or the "Limit jobs per policy" parameters are used, whichever is lowest.
Answer: B
Explanation:
In backup and recovery solutions, client attribute settings determine how many concurrent data streams or jobs can run for a given client. These parameters keep the system from being overwhelmed by too many concurrent operations, which could degrade performance.
The "Maximum data streams" property limits the number of concurrent data streams that a client can process during backup. When it is enabled, it changes which of the related job-limit settings NetBackup consults.
Here is how the related parameters normally interact:
"Limit jobs per policy" – limits how many jobs a single backup policy can run simultaneously.
"Maximum jobs per client" – limits the total number of concurrent jobs for a client, regardless of policy settings.
"Maximum concurrent jobs" per storage unit – limits the number of concurrent jobs that a specific storage unit can process.
Per the NetBackup documentation, when "Maximum data streams" is not set, the per-client limit is the lower of "Maximum jobs per client" and "Limit jobs per policy". When "Maximum data streams" is set, NetBackup ignores "Maximum jobs per client" and uses the lower of "Maximum data streams" and "Limit jobs per policy".
A. Incorrect. This option states that "Limit jobs per policy" is ignored, but that setting still applies; it is "Maximum jobs per client" that is ignored once "Maximum data streams" is enabled.
B. Correct. With "Maximum data streams" enabled, "Maximum jobs per client" is ignored, and the effective limit is the lower of "Maximum data streams" and "Limit jobs per policy".
C. Incorrect. The storage unit's "Maximum concurrent jobs" setting is not ignored; it continues to limit the storage unit independently of the client attributes.
D. Incorrect for the same reason as option C, and because "Maximum data streams" itself must be part of the calculation once it is enabled.
Therefore, the correct answer is B.
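Whichever pair of settings applies in a given configuration, NetBackup takes the lower value. A minimal sketch of the "whichever is lowest" rule with example values (the numbers are arbitrary illustrations, not defaults):

```shell
# Example values for the two settings being compared:
max_data_streams=4
limit_jobs_per_policy=6

# The effective per-client limit is the lower of the two:
effective=$(( max_data_streams < limit_jobs_per_policy ? max_data_streams : limit_jobs_per_policy ))
echo "$effective"
```

With these example values the effective limit is 4, since the lower setting always wins.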