Certified Platform Developer II Salesforce Practice Test Questions and Exam Dumps

Question 1

A Visualforce page is experiencing slow load times because it displays a large amount of data. What approach can a developer take to enhance its performance?

A. Use JavaScript to move data processing to the browser instead of the controller.
B. Use the transient keyword for the List variables used in the custom controller.
C. Use lazy loading to load the data on demand, instead of the controller’s constructor.
D. Use an apex:actionPoller in the page to load all of the data asynchronously.

Correct Answer : C

Explanation:

When a Visualforce page loads slowly due to the sheer volume of data being retrieved and displayed, a developer must consider strategies that reduce the amount of data processed and rendered at once. This is a common performance bottleneck in Salesforce Visualforce development, especially when large datasets are loaded directly through the controller’s constructor. Among the listed options, the most effective and scalable solution is lazy loading.

Lazy loading refers to the design pattern where data is loaded only when needed, rather than at the initial page load. This means the Visualforce page starts by loading only essential data or a subset of the data (for instance, the first page in a paginated list), and then additional data is retrieved through user interaction, such as clicking a “Load More” button or navigating to the next page. Implementing this pattern often involves using Apex methods called through <apex:actionFunction>, <apex:commandButton>, or <apex:actionSupport>, which allows Salesforce to load new data without refreshing the whole page.

Option C correctly identifies this strategy by suggesting lazy loading instead of fetching all data in the controller's constructor. By not retrieving all records during the page initialization, the page load time decreases significantly, especially in scenarios involving thousands of records.
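As a hedged illustration of the pattern (the class name, page size, and query are assumptions, not taken from the question), a lazy-loading custom controller might look like this:

```apex
// Minimal sketch of lazy loading in a Visualforce custom controller.
// Only one page of records is queried at a time.
public with sharing class LazyAccountController {
    public List<Account> accounts { get; private set; }
    private Integer pageSize = 50;
    private Integer rowsLoaded = 0;

    public LazyAccountController() {
        accounts = new List<Account>();
        loadMore(); // load only the first page, not the full data set
    }

    // Bound to a "Load More" button on the page, e.g.
    // <apex:commandButton action="{!loadMore}" reRender="resultsTable"/>
    public void loadMore() {
        accounts.addAll([
            SELECT Id, Name
            FROM Account
            ORDER BY Name
            LIMIT :pageSize
            OFFSET :rowsLoaded
        ]);
        rowsLoaded += pageSize;
    }
}
```

Note that SOQL OFFSET is capped at 2,000 rows, so for very large data sets a StandardSetController or keyset-based pagination (filtering on the last loaded value) is often the more scalable variant of the same idea.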

Now let’s examine why the other options are less suitable:

  • Option A, moving data processing to JavaScript, may help reduce server load in certain cases but does not solve the issue of initial page load performance caused by retrieving and rendering too much data. Moreover, the data must already be loaded before it can be processed in the browser, so it doesn’t help with the root problem of retrieving a large data set during the initial load.

  • Option B suggests using the transient keyword on list variables. The transient keyword in Apex is used to prevent variables from being serialized during the view state write process. While it can reduce view state size, it does not affect the initial data retrieval or processing, meaning it won't significantly help with initial page load time.

  • Option D, using <apex:actionPoller>, is a tag designed to periodically send AJAX requests to the server. This is more appropriate for pages that need to refresh or poll for updates in the background (e.g., real-time dashboards), not for handling bulk data loading efficiently. Using it to load all data asynchronously can be inefficient and may even degrade performance due to constant server calls.

In conclusion, the best approach to improve performance in a Visualforce page overloaded with data is to implement lazy loading, which strategically retrieves data only when necessary rather than loading it all up front. This minimizes page load times and improves the overall user experience.

Thus, the correct answer is C.

Question 2

Universal Containers wants to implement a Customer Community using Customer Community Plus licenses, enabling customers to track their rented containers and return dates. These customers are large global organizations with intricate Account hierarchies, often representing multiple departments. The goal is to allow specific community users within the same Account hierarchy to view containers across multiple departments, based on a junction object that connects Contacts to different Account records. 

Which solution best addresses these requirements?

A. A Visualforce page that uses a Custom Controller that specifies without sharing to expose the records
B. A Custom List View on the junction object with filters that will show the proper records based on owner
C. A Custom Report Type and a report Lightning Component on the Community Home Page
D. An Apex Trigger that creates Apex Managed Sharing records based on the junction object’s relationships

Correct Answer : D

Explanation:

This scenario involves a few key technical challenges:

  • Complex Account Hierarchies: Multiple departments under the same corporate umbrella, each represented as separate Account records.

  • Junction Object Relationship: Contacts (community users) are connected to multiple Accounts (departments) using a custom junction object.

  • Sharing Requirements: Some users need access to container records associated with several departments, even if those departments are separate Accounts.

The correct solution needs to go beyond native sharing rules because of the complex, many-to-many relationship facilitated by the junction object. Let’s review each option:

A. A Visualforce page with a custom controller marked "without sharing" could technically bypass sharing settings and expose records. However, this approach is not secure or scalable in a Community setting. Community users operate under strict sharing rules, and overriding these rules with "without sharing" can create data exposure risks. Furthermore, Visualforce is not ideal for Communities that use Lightning Experience, and this option does not offer a maintainable or user-friendly solution.

B. Custom List Views are useful but limited to objects that the user already has access to. They do not override sharing rules, meaning a community user would only see records they already have permission to access. In this case, the junction object defines custom access logic, which a list view cannot enforce. Thus, this approach would not expose the necessary container records unless the user already has visibility via other means.

C. A Custom Report Type and a report on the Community Home Page might allow users to view data in aggregate, but reports respect existing sharing rules. They do not grant additional visibility. Therefore, if users don’t already have access to the container records through standard sharing, the report will simply show no data. This makes it an insufficient solution to the visibility challenge described.

D. Apex Managed Sharing is the most appropriate solution. This technique allows developers to programmatically grant access to specific records for specific users based on custom logic—like the relationships in the junction object. An Apex Trigger on the junction object can be used to evaluate relationships and create Apex Sharing records, enabling the right Contacts (Community users) to view the right container records. Apex Managed Sharing is a robust and secure method that works within the Salesforce sharing model, respects the Customer Community Plus license capabilities, and offers fine-grained control. It's also scalable across the complex Account hierarchies described.
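A minimal sketch of this pattern follows, assuming a junction object Department_Access__c that links a community user (User__c) to a container record (Container__c); all API names here are illustrative assumptions, not taken from the question:

```apex
// Hedged sketch: create share records for containers when junction
// records are inserted. In a real org, the Contact's community User
// would typically be resolved from the junction relationships.
trigger ShareContainers on Department_Access__c (after insert) {
    List<Container__Share> shares = new List<Container__Share>();
    for (Department_Access__c link : Trigger.new) {
        shares.add(new Container__Share(
            ParentId      = link.Container__c, // record being shared
            UserOrGroupId = link.User__c,      // user gaining access
            AccessLevel   = 'Read',
            // In practice, define a custom Apex sharing reason on
            // Container__c and use it here instead of 'Manual', so the
            // shares survive owner changes and are clearly attributable.
            RowCause      = 'Manual'
        ));
    }
    insert shares;
}
```

A companion delete handler would normally remove the corresponding share records when junction records are deleted, keeping access in sync with the relationships.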

Therefore, D is the only solution that meets all the requirements: respecting the Salesforce sharing model, securely granting access based on a custom junction object, and supporting complex multi-account visibility for community users.

Question 3

Universal Containers wants to use a third-party web service to verify shipping and billing addresses. The current provider uses basic password authentication, but there's a possibility they will change to another provider using OAuth. 

Which solution would enable Universal Containers to switch between these vendors without having to modify their code to manage authentication?

A. Custom Metadata
B. Custom Setting (List)
C. Dynamic Endpoint
D. Named Credential

Correct Answer : D

Explanation:

In this scenario, Universal Containers is integrating with an external web service, and the primary concern is managing authentication in a flexible way so that switching vendors doesn’t require changes to the codebase. To solve this, the company needs a mechanism that can externalize authentication configuration and support different authentication methods such as basic authentication and OAuth.

Let’s explore each of the options to understand why Named Credential is the correct answer.

Option A: Custom Metadata
Custom metadata types are often used to store configuration or metadata that doesn’t change often, such as reference values, environment settings, or mapping logic. While it’s possible to use custom metadata to store authentication endpoints, client IDs, secrets, and other credentials, it does not inherently provide a secure or managed way to handle authentication flows. It would require the developer to write logic in Apex or integration code to manually fetch the metadata and then manage the actual authentication handshake, whether it's basic auth or OAuth. This introduces complexity and doesn't satisfy the requirement to avoid code changes when switching vendors.

Option B: Custom Setting (List)
List Custom Settings are similar to custom metadata in that they store reusable sets of static configuration data available across the organization (it is hierarchy custom settings, not list custom settings, that support user- or profile-specific values). Like custom metadata, these settings can store values such as API endpoints or authentication tokens. However, they do not natively support secure handling of authentication protocols, nor do they integrate with Salesforce’s underlying HTTP callout mechanism for secure and declarative credential management. This also leads to hardcoding logic in Apex, which violates the requirement to avoid code changes when switching vendors.

Option C: Dynamic Endpoint
Dynamic endpoints allow Apex code to determine the target URL at runtime. This feature is useful for flexibility in pointing to different servers or tenants, but it only addresses endpoint management, not authentication. While you can dynamically change the endpoint, you still have to manage and embed authentication logic in your code—whether it’s inserting headers for basic auth or implementing OAuth flows. Therefore, using a dynamic endpoint alone does not satisfy the requirement to separate authentication handling from the codebase.

Option D: Named Credential
Named Credentials provide a powerful way in Salesforce to manage both the endpoint URL and authentication credentials in a secure and declarative manner. With Named Credentials, developers can write callouts without embedding any authentication logic in Apex; the platform handles authentication for them. Furthermore, Salesforce supports different authentication protocols with Named Credentials, including basic authentication and OAuth 2.0, which directly aligns with the current and future requirements of Universal Containers.

Most importantly, switching vendors or changing the authentication method can be done without touching the code: you simply update or replace the Named Credential configuration in Salesforce Setup. Named Credentials abstract both the endpoint URL and the authentication mechanism, supporting basic authentication and OAuth alike, so a vendor switch becomes a configuration-level change rather than a code change. The other options either do not handle authentication or require manual logic in code, which goes against the goal.
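To make the abstraction concrete, a callout through a Named Credential might look like the following sketch ('Address_Service' is an assumed Named Credential name; the path and request body are illustrative):

```apex
// Sketch of a callout through a Named Credential. No credentials or
// authentication logic appear in the code: the platform resolves the
// URL and injects the authentication headers, whether the credential
// is configured for basic auth or OAuth.
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:Address_Service/verify');
req.setMethod('POST');
req.setHeader('Content-Type', 'application/json');
req.setBody('{"street":"1 Market St","city":"San Francisco"}');
HttpResponse res = new Http().send(req);
```

Swapping providers then means editing the Named Credential record in Setup (new URL, new protocol, new secrets); the Apex above is untouched.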

Therefore, the correct answer is: D.

Question 4

A company’s Lightning Page contains multiple Lightning Components, some of which cache reference data. Users report that the page occasionally fails to display the most up-to-date reference data. 

What tool should a developer use to investigate and understand this issue within the Lightning Page?

A. Salesforce Lightning Inspector Actions Tab
B. Salesforce Lightning Inspector Event Log Tab
C. Salesforce Lightning Inspector Transactions Tab
D. Salesforce Lightning Inspector Storage Tab

Correct Answer : D

Explanation:

When diagnosing issues related to outdated or stale cached data in a Lightning Page, the key concern revolves around how and where data is being stored locally in the browser. Lightning Web Components (LWC) and Aura Components can use local storage, session storage, or custom caching mechanisms to improve performance and reduce server calls. However, when data changes frequently or is expected to reflect real-time updates (such as reference data that might be updated by other users or systems), stale data in these local caches can cause inconsistencies.

To investigate such caching-related issues, the Salesforce Lightning Inspector extension for Chrome provides several useful tabs, and each serves a specific diagnostic purpose:

  • Option A: Actions Tab
    This tab is mainly used for monitoring Lightning actions—essentially the communication between the component and the server, particularly when Apex methods are invoked. While this is useful for analyzing whether a method is being called or not, it does not expose local caching behavior or client-side storage directly.

  • Option B: Event Log Tab
    The Event Log tab shows application events and component events—useful for understanding the flow of interactions and event-driven updates. While this helps understand whether components are communicating or updating in response to events, it still does not help uncover data staleness due to client-side storage.

  • Option C: Transactions Tab
    This tab is excellent for understanding performance-related metrics, such as component creation times, rendering durations, and the time taken for server interactions. While this can help in detecting delays or bottlenecks, it does not provide insight into where or how data is stored and cached locally.

  • Option D: Storage Tab
    This is the correct answer. The Storage tab in the Salesforce Lightning Inspector lets developers inspect the client-side storage managed by the framework, most notably the cache used by storable server actions, which is a common mechanism Lightning Components use to cache reference data. By using this tab, a developer can view what data has been stored, its size, when it was last updated, and whether it aligns with what the server should be returning.

This information is vital for diagnosing issues where data is no longer current because it has been pulled from a local cache rather than the server. By analyzing entries in the Storage tab, developers can determine whether:

  • Cached data is persisting too long without being refreshed.

  • The logic to refresh or invalidate cache is missing or faulty.

  • There’s a need to implement a time-to-live (TTL) or expiration logic for reference data.

  • Components are unintentionally relying on stale data due to incorrect cache checks.

In conclusion, when investigating issues related to outdated cached data in Lightning Components, especially reference data that fails to update reliably, the Storage Tab in the Salesforce Lightning Inspector provides the tools necessary to analyze the problem at its source. It enables developers to inspect what is stored locally in the browser and make informed decisions about whether to adjust caching strategies or implement proper cache invalidation logic.

Therefore, the correct answer is D.

Question 5

A developer built and tested a Visualforce page in a developer sandbox, but after deployment to Production, users are experiencing ViewState errors when interacting with the page. 

What step should the developer take to resolve these ViewState errors?

A. Ensure queries do not exceed governor limits.
B. Ensure properties are marked as Transient.
C. Ensure properties are marked as private.
D. Ensure profiles have access to the Visualforce page.

Correct Answer : B

Explanation:

ViewState errors in Visualforce are a common issue when large amounts of data or unnecessarily persistent stateful objects are maintained across HTTP requests. The Visualforce view state is the mechanism that preserves the state of a page between postbacks. However, it has a size limit of 170 KB, and exceeding this limit causes a ViewState error.

Let’s evaluate each of the provided options in the context of ViewState issues:

A. Ensure queries do not exceed governor limits.
While exceeding governor limits is a common source of Apex errors, such as Too many SOQL queries or Too many DML statements, it is not related to ViewState errors. ViewState errors are due to the size of the page’s stateful data—not query limits or CPU usage. This answer is irrelevant to solving the ViewState issue.

B. Ensure properties are marked as Transient.
This is the correct answer. When a property is marked as transient, it tells the Visualforce framework not to store the value in the ViewState. This is especially useful for data that does not need to persist across postbacks—such as temporary results, helper variables, or any large object that’s only needed for the current request.

// Excluded from view state serialization because it is transient:
public transient List<Account> tempAccountList;

By using the transient keyword, the developer can reduce the size of the ViewState and avoid exceeding the size limit, resolving the error in question.

This is a standard best practice for managing ViewState performance, especially when using custom controllers or controller extensions that store large collections or objects as public properties.

C. Ensure properties are marked as private.
Making a property private restricts its visibility within the class, but it does not affect whether the property is serialized into the ViewState. Any non-transient property—regardless of visibility—will still be included in the ViewState if it is serializable and accessible from the Visualforce page. Therefore, this option does not solve the problem.

D. Ensure profiles have access to the Visualforce page.
This is related to security and permissions, not ViewState management. If users lacked access to the page, they’d encounter authorization errors (such as “Page not found” or “Insufficient Privileges”), not ViewState errors. This choice is not relevant to the scenario described.

In conclusion, ViewState errors are caused by too much data being maintained in memory between requests. The best approach to mitigate these errors is to mark non-essential data as transient so that it is excluded from the ViewState. Therefore, the most appropriate and effective solution is: B.

Question 6

Given the Lightning component layout code that currently displays three rows on mobile devices, Universal Containers wants it to display in a single row on desktops or tablets. 

Which revised code version properly configures the layout to show all fields in one row on larger screens while maintaining mobile compatibility?

A.

<lightning:layout multipleRows="true">  

  <lightning:layoutItem size="12" mediumDeviceSize="6" largeDeviceSize="4">{!v.account.Name}</lightning:layoutItem>  

  <lightning:layoutItem size="12" mediumDeviceSize="6" largeDeviceSize="4">{!v.account.AccountNumber}</lightning:layoutItem>  

  <lightning:layoutItem size="12" mediumDeviceSize="6" largeDeviceSize="4">{!v.account.Industry}</lightning:layoutItem>  

</lightning:layout>


B.

<lightning:layout multipleRows="true">  

  <lightning:layoutItem size="12" largeDeviceSize="4">{!v.account.Name}</lightning:layoutItem>  

  <lightning:layoutItem size="12" largeDeviceSize="4">{!v.account.AccountNumber}</lightning:layoutItem>  

  <lightning:layoutItem size="12" largeDeviceSize="4">{!v.account.Industry}</lightning:layoutItem>  

</lightning:layout>


C.

<lightning:layout multipleRows="true">  

  <lightning:layoutItem size="12" mediumDeviceSize="4">{!v.account.Name}</lightning:layoutItem>  

  <lightning:layoutItem size="12" mediumDeviceSize="4">{!v.account.AccountNumber}</lightning:layoutItem>  

  <lightning:layoutItem size="12" mediumDeviceSize="4">{!v.account.Industry}</lightning:layoutItem>  

</lightning:layout>


D.

<lightning:layout multipleRows="true">  

  <lightning:layoutItem size="12" mediumDeviceSize="6">{!v.account.Name}</lightning:layoutItem>  

  <lightning:layoutItem size="12" mediumDeviceSize="6">{!v.account.AccountNumber}</lightning:layoutItem>  

  <lightning:layoutItem size="12" mediumDeviceSize="6">{!v.account.Industry}</lightning:layoutItem>  

</lightning:layout>


Correct Answer : A

Explanation:
The challenge here is to modify the Lightning layout code so that the component displays each field (Account Name, Account Number, and Industry) in one row on desktop or tablet screens, but in separate rows on mobile devices. To do this properly, you need to leverage responsive grid attributes such as mediumDeviceSize and largeDeviceSize in combination with size.

Let’s walk through each option in detail.

Option A defines the size as 12 (full-width) for mobile devices, which ensures each field occupies an entire row on smaller screens. On larger devices (tablet and desktop), it sets mediumDeviceSize="6" and largeDeviceSize="4". This means:

  • On tablets (medium), each field will take up 6 of 12 columns, so the first two items will appear side by side, and the third will wrap to the next row (not ideal).

  • On desktops (large), each field takes 4 columns (out of 12), so all three can be displayed in a single row, perfectly aligned across the screen.

Even though on tablets the layout may display two fields on the first row and one on the second, this option is the most comprehensive in handling different screen sizes. You get a responsive layout that adjusts appropriately for each device type, and it allows for a single-row layout on desktops.

Option B sets only largeDeviceSize="4", which means:

  • Mobile devices fall back to the default size="12" — fine for mobile.

  • Tablets (medium devices) are not explicitly defined, so they inherit size=12, which leads to each field being on its own row even on tablets — not ideal when the goal is a single row on larger screens.

  • On desktops, it behaves correctly.

But since it lacks mediumDeviceSize, this doesn’t fully meet the requirements.

Option C assigns mediumDeviceSize="4" but does not define largeDeviceSize. So:

  • On mobile (size=12), it’s fine.

  • On tablets (medium=4), all three items fit on a single row (3 x 4 = 12) — good.

  • On desktops, without a largeDeviceSize, it still inherits size=12, causing each item to take a full row — not what we want.

This approach misses the mark for desktops, making it only partially correct.

Option D uses mediumDeviceSize="6" for all three items. This means:

  • On mobile, it displays correctly (one per row).

  • On tablets, each item takes up half the row — so two will be on the first row and the third will wrap — not a single row.

  • There’s no largeDeviceSize, so desktops fall back to size=12, which puts each item in its own row — again, not acceptable.

In conclusion, Option A is the only one that uses all three levels of responsive sizing (mobile, tablet, desktop) in a way that satisfies the requirement: separate rows on mobile and a single row on desktop. While the tablet view might not always produce one row, it comes closest by showing two fields on the first line and one below, which is a reasonable tradeoff and closer to the requirement than any other option.

Therefore, the correct answer is A.

Question 7

According to a company's support process, whenever a Case is closed with the Status 'Could not fix', an Engineering Review record (a custom object) must be automatically created. This record should include data from the Case, the related Contact, and any associated Products. 

What is the correct way to implement this automation using an Apex trigger?

A. An after update trigger that creates the Engineering Review record and inserts it
B. A before update trigger that creates the Engineering Review record and inserts it
C. An after upsert trigger that creates the Engineering Review record and inserts it
D. A before upsert trigger that creates the Engineering Review record and inserts it

Correct Answer : A

Explanation:

To determine the correct trigger context for creating a related record (in this case, an Engineering Review), it's essential to understand the distinctions between before and after triggers in Apex and the purpose of insert, update, and upsert events.

The requirement states that when a Case is closed with a Status of 'Could not fix', an Engineering Review record must be created and populated with data from the Case, as well as related objects like the Contact and Products. This strongly suggests that the trigger must run after the Case is updated, because:

  1. The Case already exists, and we’re responding to a change in the Status field.

  2. We need access to related records, such as the Contact (via lookup) and Products (which might come from a junction object or related list). Accessing these related objects often requires that the parent (Case) record is already committed or at least fully constructed, which is true in an after trigger, not a before trigger.

  3. We're required to create and insert a new Engineering Review record. This is a DML operation. In a before trigger, inserting new unrelated records is discouraged and may not behave consistently.

Now let’s analyze the options in turn:

  • Option A: An after update trigger that creates the Engineering Review record and inserts it
    This is the correct choice. In an after update context:

    • The Case record is fully populated, including related fields like Contact.

    • It's safe to query child or related objects.

    • Creating and inserting related records like the Engineering Review object is appropriate and reliable.
      This allows you to check if the Case’s Status has changed to ‘Could not fix’, and then proceed to query any associated Products, and finally insert the Engineering Review record populated with all relevant data.

  • Option B: A before update trigger that creates the Engineering Review record and inserts it
    This is incorrect. In a before update trigger, the record hasn’t been committed yet. More importantly, inserting unrelated records (like Engineering Review) from a before trigger is bad practice. It may also run before relationship fields (like those to Contact or Products) are reliably available.

  • Option C: An after upsert trigger that creates the Engineering Review record and inserts it
    An upsert trigger combines insert and update contexts. However, this scenario is strictly about an update to a Case’s Status field—not a new record being created. Using an upsert trigger here is unnecessary and would introduce complexity with little benefit.

  • Option D: A before upsert trigger that creates the Engineering Review record and inserts it
    This shares the same flaws as Option B, with the added confusion of using upsert when only update is required. Again, DML operations like insert should not be done in before triggers.

In summary, to respond to a specific change in a Case (its closure with a specific status), and create a related record that draws from that Case and its relationships, the appropriate design pattern is to use an after update trigger. This ensures that all necessary data is available, and it’s safe to create and insert the Engineering Review record.
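A minimal sketch of that pattern follows; Engineering_Review__c and its fields are assumed API names used only for illustration:

```apex
// Hedged sketch of an after update trigger on Case that creates an
// Engineering Review record when the Status transitions to 'Could not fix'.
trigger CreateEngineeringReview on Case (after update) {
    List<Engineering_Review__c> reviews = new List<Engineering_Review__c>();
    for (Case c : Trigger.new) {
        Case oldCase = Trigger.oldMap.get(c.Id);
        // Fire only on the transition into 'Could not fix', not on
        // every subsequent edit of an already-closed Case.
        if (c.Status == 'Could not fix'
                && oldCase.Status != 'Could not fix') {
            reviews.add(new Engineering_Review__c(
                Case__c    = c.Id,
                Contact__c = c.ContactId
                // ...plus fields copied from the Case and from related
                // Products, typically gathered with additional queries
                // before building the records.
            ));
        }
    }
    if (!reviews.isEmpty()) {
        insert reviews;
    }
}
```

Comparing Trigger.oldMap against Trigger.new is what makes the trigger react to the specific Status change rather than to every update, which keeps the automation idempotent across unrelated edits.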

Therefore, the correct answer is A.

