Salesforce Certified Integration Architect Practice Test Questions and Exam Dumps


Question 1:

A company is developing a Lightning Web Component (LWC) to show transaction records that are aggregated from multiple systems. Their setup includes custom Salesforce objects, middleware with publish-subscribe and REST APIs, and periodic data replication to Salesforce. Since the custom object doesn't always hold all needed data between updates, what integration solution should the architect recommend to ensure the LWC displays complete transaction data?

A. Publish a Platform Event, have the middleware subscribe and update the custom object on receipt of Platform Event.
B. Call the Enterprise APIs directly from the LWC's JavaScript code and redisplay the LWC on receipt of the API response.
C. Use the Continuation class to call the Enterprise APIs and then process the response in a callback method.
D. Let the Lightning Data Service with an @wire adapter display new values when the custom object records change.

Correct Answer: C

Explanation:

This scenario highlights a common issue in integration design: the need for real-time, complete data visibility from disparate systems when the local data store (Salesforce custom object) is only partially synced. The LWC is expected to show all relevant transactions, but due to periodic updates, the custom object doesn't always reflect complete or current data. Therefore, relying solely on it (as in options A or D) won't solve the problem.

Let’s explore the options:

A. This involves using Platform Events to trigger middleware to update the custom object. While this improves data freshness in Salesforce, it still doesn’t guarantee that all transactions are visible at the moment the user views the LWC, because the update process is still asynchronous and indirect. It won’t solve the real-time completeness concern.

B. Directly calling external Enterprise APIs from the LWC JavaScript code might seem like a straightforward solution. However, this approach conflicts with Salesforce's security model: the LWC runs on the client side, so direct calls to external APIs would run into CORS restrictions, expose credentials or tokens in the browser, and compromise maintainability and security.

C. This is the correct approach. The Continuation class allows the LWC to call long-running external services asynchronously through an Apex controller, which makes the callouts to the middleware's REST APIs. The Apex controller manages the communication, ensuring security, scalability, and proper authentication. Once the response is received from the middleware, the Apex callback method returns the data to the LWC. This provides a secure and real-time integration, ensuring that the user sees complete transaction data without relying on potentially outdated Salesforce records.

D. Lightning Data Service with @wire decorators is ideal for responding to changes in Salesforce data. But again, it only reacts to updates in the custom object, which may be incomplete due to the periodic replication process. It doesn't address the usability concern of showing all necessary transactions in real-time.

To summarize, the key issue is how to present complete and up-to-date transaction data in the LWC when the local Salesforce custom object may be out-of-sync. Only option C (using the Continuation class for asynchronous Apex callouts to the middleware’s Enterprise APIs) offers a secure and efficient way to retrieve real-time data from external systems and display it in the LWC.
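
To illustrate the pattern, here is a minimal Apex sketch of a Continuation-based controller. The Named Credential (Middleware_API), the endpoint path, and the response handling are illustrative assumptions, not details from the scenario.

```apex
// Hypothetical controller for the LWC; endpoint and names are illustrative.
public with sharing class TransactionController {

    // Action method: Salesforce suspends the request until the middleware
    // responds or the continuation times out.
    @AuraEnabled(continuation=true cacheable=true)
    public static Object getTransactions(String accountId) {
        Continuation con = new Continuation(40); // timeout in seconds
        con.continuationMethod = 'processResponse';

        HttpRequest req = new HttpRequest();
        req.setMethod('GET');
        // Assumes a Named Credential called Middleware_API is configured.
        req.setEndpoint('callout:Middleware_API/transactions?accountId=' + accountId);
        con.addHttpRequest(req);

        return con;
    }

    // Callback method invoked when the middleware response arrives; the LWC
    // receives the body and renders the complete transaction list.
    @AuraEnabled(cacheable=true)
    public static Object processResponse(List<String> labels, Object state) {
        HttpResponse response = Continuation.getResponse(labels[0]);
        return response.getBody();
    }
}
```

The LWC would call getTransactions through @wire or an imperative Apex call; because the callout runs server-side, no credentials are ever exposed to the browser.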


Question 2:

A media company has implemented an Identity and Access Management (IAM) system that supports SAML and OpenID Connect to unify logins and enable self-service. They want new customers to register themselves and gain immediate access to Salesforce Community Cloud via Single Sign-On (SSO). What two capabilities must the Salesforce Community Cloud support to enable this integration and seamless onboarding experience? (Choose two.)

A. SAML SSO and Just-in-Time (JIT) provisioning
B. OpenID Connect Authentication Provider and Registration Handler
C. OpenID Connect Authentication Provider and JIT provisioning
D. SAML SSO and Registration Handler

Correct Answer: A, C

Explanation:

To support a seamless user experience that includes self-registration and single sign-on (SSO) into Salesforce Community Cloud, the integration must ensure that users are authenticated through the IAM system and then provisioned into Salesforce without manual intervention. This leads to two key requirements: SSO protocol compatibility and automated user provisioning.

Let’s analyze the correct options:

A. SAML SSO and Just-in-Time (JIT) provisioning

This is a valid and widely used combination. Salesforce supports SAML 2.0, which allows SSO from an external Identity Provider (IdP) such as an IAM system. With Just-in-Time (JIT) provisioning, a user can be automatically created in Salesforce at the time of their first login, provided the SAML assertion includes the required attributes.

JIT provisioning is triggered by the SAML login attempt—if the user doesn't already exist, Salesforce uses the SAML attributes to create the user in real time, ensuring a frictionless experience. This allows instant access to Community Cloud upon successful authentication, fulfilling both the SSO and self-registration goals.
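
Standard JIT provisioning can be configured declaratively from the SAML single sign-on settings; when custom logic is needed, Salesforce also lets you register an Apex class implementing Auth.SamlJitHandler. The sketch below is a minimal illustration, and the attribute keys, profile name, and locale defaults are assumptions that depend on the IdP configuration.

```apex
// Minimal sketch of a custom SAML JIT handler; attribute names and profile
// are assumptions. Community users also need an associated Contact/Account,
// which is omitted here for brevity.
global class CommunityJitHandler implements Auth.SamlJitHandler {

    global User createUser(Id samlSsoProviderId, Id communityId, Id portalId,
            String federationIdentifier, Map<String, String> attributes, String assertion) {
        Profile p = [SELECT Id FROM Profile WHERE Name = 'Customer Community User' LIMIT 1];
        // Returning the User without inserting it lets Salesforce create it
        // as part of the JIT login flow.
        return new User(
            FederationIdentifier = federationIdentifier,
            Username          = attributes.get('email'),
            Email             = attributes.get('email'),
            FirstName         = attributes.get('firstName'),
            LastName          = attributes.get('lastName'),
            Alias             = 'jituser',
            ProfileId         = p.Id,
            TimeZoneSidKey    = 'America/New_York',
            LocaleSidKey      = 'en_US',
            EmailEncodingKey  = 'UTF-8',
            LanguageLocaleKey = 'en_US'
        );
    }

    global void updateUser(Id userId, Id samlSsoProviderId, Id communityId, Id portalId,
            String federationIdentifier, Map<String, String> attributes, String assertion) {
        // Keep existing users in sync with the attributes sent by the IdP.
        User u = new User(Id = userId, Email = attributes.get('email'));
        update u;
    }
}
```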

C. OpenID Connect Authentication Provider and JIT provisioning

This is also a correct answer. Salesforce supports OpenID Connect (OIDC) through Authentication Providers, which allow users to authenticate using external identity services that support OIDC (like the IAM system mentioned).

When combined with JIT provisioning, Salesforce can automatically create users at the time of their first login. The Identity Provider passes identity attributes (like username, email, profile info) via the OIDC token, which Salesforce uses to provision the user dynamically. This again meets the requirement for seamless SSO and automatic user onboarding.

Why the other options are incorrect:

B. OpenID Connect Authentication Provider and Registration Handler
This combination is incorrect because Registration Handlers are used with Authentication Providers, typically for third-party logins such as social sign-on, and are generally associated with custom user-creation logic. They are not inherently tied to the standard OIDC-plus-JIT path that provides automated provisioning, and relying on a Registration Handler alone, without JIT provisioning, is not ideal for giving users instant access after self-registration.

D. SAML SSO and Registration Handler
SAML does not use Registration Handlers for user creation. Instead, JIT provisioning is the mechanism through which users are created at the time of SAML login. So this pairing does not work for the scenario described.

The company needs a solution that supports:

  • Authentication via the IAM system using standard SSO protocols (SAML or OpenID Connect).

  • Automatic user creation in Salesforce so that users gain access immediately after authentication.

Both A and C provide these capabilities using industry-standard authentication protocols and Just-in-Time provisioning.



Question 3:

A customer wants to understand the differences between using Platform Events and Outbound Messaging in Salesforce for real-time or near-real-time messaging needs. They plan to have around 3,000 customers receiving messages. What are three key factors they should consider when choosing between these two solutions? (Choose three.)

A. Number of concurrent subscribers to Platform Events is capped at 2,000. An Outbound Messaging configuration can pass only 100 notifications in a single message to a SOAP endpoint.
B. Both Platform Events and Outbound Messaging offer declarative means for asynchronous near-real time needs. They aren't best suited for real-time integrations.
C. In both Platform Events and Outbound Messaging, the event messages are retried by and delivered in sequence, and only once. Salesforce ensures there is no duplicate message delivery.
D. Message sequence is possible in Outbound Messaging, but not guaranteed with Platform Events. Both offer very high reliability. Fault handling and recovery are fully handled by Salesforce.
E. Both Platform Events and Outbound Messaging are highly scalable. However, unlike Outbound Messaging, only Platform Events have Event Delivery and Event Publishing limits to be considered.

Correct Answers: A, B, E

Explanation:

When evaluating Platform Events and Outbound Messaging as messaging solutions for near-real-time integrations in Salesforce, it's essential to understand their differences in architecture, reliability, delivery guarantees, scalability, and limits. Both serve asynchronous communication use cases but differ significantly in how they are consumed, monitored, and scaled.

Let’s review the correct answers:

A is correct. Platform Events support up to 2,000 concurrent CometD subscribers, which means that if your use case involves more than 2,000 simultaneously connected clients (such as 3,000 customers), this limit is a constraint. On the other hand, Outbound Messaging is limited in payload and batching: it can send up to 100 notifications per message, and only to SOAP endpoints. This highlights scalability and format constraints on both sides.

B is correct. Both Platform Events and Outbound Messaging are designed for asynchronous, near-real-time communication, not hard real-time interactions. They are declarative solutions: you can configure them without extensive custom code, using workflow rules (Outbound Messaging) or Process Builder/Flows (Platform Events). However, due to retry mechanisms, delivery delays, and batch processing, they are not suitable for true real-time requirements (like financial transactions or telemetry control).

E is also correct. Platform Events have limits on publishing and delivery, such as daily event publishing limits, event size, and subscriber consumption rates. These can become a bottleneck at scale. While Outbound Messaging is more limited in terms of endpoint flexibility (SOAP only), it doesn't have as many explicit event throughput limitations. Therefore, when scalability is a key concern, Platform Events require careful planning around these limits.
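
On the publishing side, Platform Events can also be raised from Apex, where each publish call counts against the Event Publishing limits mentioned above. A minimal sketch, assuming a hypothetical Transaction_Update__e event with External_Key__c and Payload__c fields:

```apex
// Hedged sketch: programmatic publishing with per-event error checking.
// The Transaction_Update__e event and its fields are assumptions.
public with sharing class TransactionEventPublisher {
    public static void publish(Map<String, String> payloadsByKey) {
        List<Transaction_Update__e> events = new List<Transaction_Update__e>();
        for (String key : payloadsByKey.keySet()) {
            events.add(new Transaction_Update__e(
                External_Key__c = key,
                Payload__c = payloadsByKey.get(key)
            ));
        }
        // EventBus.publish returns one SaveResult per event; failed publishes
        // should be logged or retried because delivery is not guaranteed.
        for (Database.SaveResult sr : EventBus.publish(events)) {
            if (!sr.isSuccess()) {
                System.debug('Event publish failed: ' + sr.getErrors());
            }
        }
    }
}
```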

Let’s now look at the incorrect choices:

C is incorrect. Neither Platform Events nor Outbound Messaging guarantees exactly-once delivery. In fact, duplicate delivery can occur in both mechanisms, so your integration design must be idempotent, able to handle duplicate events safely (a sketch of an idempotent subscriber appears at the end of this explanation). Also, ordering of events is not guaranteed in Platform Events, and delivery may not happen only once. So this statement is inaccurate.

D is incorrect. While Outbound Messaging does attempt to preserve message order, Platform Events explicitly do not guarantee sequencing. Also, Salesforce does not handle complete fault tolerance or recovery for external endpoints: if an endpoint is unavailable or slow, retries occur, but handling errors and deduplication remains the responsibility of the consuming system. The phrase “fully handled by Salesforce” is misleading.

In summary:

  • Platform Events are modern, scalable, and better suited to pub-sub architectures, but they come with strict delivery and publishing limits, and do not guarantee message order or exactly-once delivery.

  • Outbound Messaging is more traditional, SOAP-based, and limited in batch size and endpoint type, but simpler to configure in legacy workflows.

Understanding these constraints helps integration architects select the best-fit messaging mechanism based on subscriber scale, endpoint compatibility, and throughput needs.
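
Because neither mechanism guarantees exactly-once delivery, subscribers should be written idempotently. A minimal Apex sketch, assuming the same hypothetical Transaction_Update__e event and a Transaction_Log__c object whose External_Key__c field is a unique external ID:

```apex
// Hedged sketch of an idempotent subscriber: upserting on a unique external
// key means redelivered or duplicate events do not create duplicate records.
trigger TransactionUpdateSubscriber on Transaction_Update__e (after insert) {
    List<Transaction_Log__c> logs = new List<Transaction_Log__c>();
    for (Transaction_Update__e evt : Trigger.new) {
        logs.add(new Transaction_Log__c(
            External_Key__c = evt.External_Key__c,
            Payload__c      = evt.Payload__c
        ));
    }
    // Partial-success upsert keyed on the external ID field.
    Database.upsert(logs, Transaction_Log__c.External_Key__c, false);
}
```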

Therefore, the correct answers are A, B, and E.

Question 4:

Universal Containers has multiple cloud and on-premise applications. The on-premise systems are secured within the corporate network and have restricted external access. The company wants Salesforce to access data from these on-premise applications in real time to provide a unified interface. What are two actions that should be recommended to meet this requirement? (Choose two.)

A. Run a batch job with an extract, transform, load (ETL) tool from an on-premise server to move data to Salesforce.
B. Develop an application in Heroku that connects to the on-premise database via an Open Database Connectivity (ODBC) string and Virtual Private Cloud (VPC) connection.
C. Develop custom APIs on the company's network that are invokable by Salesforce.
D. Deploy MuleSoft to the on-premise network and design external facing APIs to expose the data.

Correct Answers: C, D

Explanation:

When integrating on-premise applications with Salesforce, especially in real-time, the solution must ensure secure, scalable, and responsive access to data that lives behind corporate firewalls. Since the requirement explicitly states real-time data access, the use of traditional ETL tools or batch processing is not sufficient.

Let’s analyze the correct and incorrect choices.

C is a correct option. Developing custom APIs on the on-premise systems allows Salesforce to interact with internal applications through these exposed services. However, since these systems are behind a secured corporate firewall, appropriate infrastructure (such as reverse proxies, secure tunnels, or API gateways) must be established to allow Salesforce to invoke these APIs. These APIs must also follow standard protocols (like REST or SOAP) and enforce strict authentication and authorization to maintain security.
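
As an illustration of how Salesforce could invoke such an API, the following Apex sketch assumes a Named Credential (OnPrem_API) that points at the gateway or reverse proxy in front of the corporate network; the resource path and response handling are hypothetical.

```apex
// Hedged sketch: Salesforce calling a custom on-premise API through a
// Named Credential, so no endpoint URL or credentials are hard-coded.
public with sharing class OnPremCustomerService {

    public class OnPremServiceException extends Exception {}

    public static String getCustomer(String customerId) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:OnPrem_API/customers/' +
                        EncodingUtil.urlEncode(customerId, 'UTF-8'));
        req.setMethod('GET');
        req.setTimeout(20000); // milliseconds

        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() != 200) {
            throw new OnPremServiceException('On-premise API returned ' + res.getStatus());
        }
        return res.getBody(); // JSON payload for the caller to parse
    }
}
```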

D is also correct. MuleSoft, which is part of the Salesforce ecosystem, is specifically designed for such hybrid integrations. By deploying Mule runtimes within the on-premise network, you can expose external-facing APIs that Salesforce can call securely. MuleSoft provides out-of-the-box support for managing APIs, applying policies (like throttling and rate limiting), and ensuring that sensitive data remains secure. It bridges cloud and on-premise environments and is well suited for real-time, scalable integration requirements.

Now, let’s examine the incorrect answers:

A is incorrect because it describes a batch processing approach using ETL tools. While this method is suitable for scheduled data syncs, it does not fulfill the real-time data access requirement. In fact, any delay caused by extract and load intervals would result in stale or outdated data in Salesforce, which contradicts the goal of a unified, real-time experience.

B is also incorrect because, although Heroku is a flexible platform that supports VPC peering and ODBC, the complexity and maintenance overhead of managing a custom Heroku application, along with its associated secure connections to on-premise databases, makes this approach less optimal. It introduces an additional layer that could affect latency and scalability. More importantly, it’s not a standard or recommended method for securely accessing on-premise data from Salesforce, especially when better-suited integration tools like MuleSoft exist.

In summary, to meet the goal of real-time data access from on-premise systems within a secure enterprise environment, the best recommendations are to either develop custom, secure APIs that Salesforce can invoke or to use MuleSoft, which is purpose-built for such hybrid integration patterns.

Therefore, the correct answers are C and D.


Question 5:

A global financial institution offers services such as bank accounts, loans, and insurance. It uses a modern core banking system as the primary source for storing customer information and handling approximately 10 million financial transactions daily. The company’s CTO wants to build a community portal in Salesforce that allows customers to view their bank account details, update information, and access their financial transactions. What should the integration architect recommend to allow community users to view these transactions?

A. Use Salesforce External Service to display financial transactions in a community Lightning page.
B. Use Salesforce Connect to display the financial transactions as an external object.
C. Migrate the financial transaction records to Salesforce custom object and use ETL tool to keep systems in sync.
D. Use Iframe to display core banking financial transactions data in the customer community.

Correct Answer: B

Explanation:

In this scenario, the company’s core banking system handles a very high volume of transactions (10 million per day) and is the system of record for financial data. The goal is to allow Salesforce community users to view this transactional data in real time or near-real time through the community portal without duplicating or overwhelming Salesforce’s storage limits.

Let’s analyze the options:

B is the most suitable and scalable solution. Salesforce Connect enables Salesforce to integrate external data sources by referencing data in real-time using external objects. These external objects behave like Salesforce objects but the data resides in the source system—in this case, the core banking system. Since the data is not stored in Salesforce, this method is optimal for high-volume scenarios such as financial transactions. With Salesforce Connect, customers can view their transactions directly within the community portal without the need to physically store the data in Salesforce, thereby avoiding performance issues and data storage limits.
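
Once the core banking data is exposed (for example via an OData service) and mapped to an external object, it can be queried like any other object. A minimal sketch, in which the Financial_Transaction__x object and its fields are illustrative assumptions:

```apex
// Hedged sketch: reading Salesforce Connect external-object data for a
// community page. Object and field names are assumptions.
public with sharing class TransactionViewerController {

    @AuraEnabled(cacheable=true)
    public static List<Financial_Transaction__x> getRecentTransactions(String accountNumber) {
        // Each query is resolved against the core banking system at request
        // time, so no transaction data is stored in Salesforce.
        return [
            SELECT ExternalId, Amount__c, TransactionDate__c, Description__c
            FROM Financial_Transaction__x
            WHERE AccountNumber__c = :accountNumber
            ORDER BY TransactionDate__c DESC
            LIMIT 50
        ];
    }
}
```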

A is not appropriate here. External Services in Salesforce are designed to invoke external APIs in a declarative way and are typically used for transactional operations (e.g., triggering a payment process). They are not well-suited for displaying large volumes of data or implementing virtual object models for browsing records like transaction histories. Also, integrating External Services into community Lightning pages for high-volume read operations is inefficient and not scalable.

C is the least desirable for performance and cost reasons. Importing all financial transaction records into Salesforce custom objects using an ETL tool would put tremendous pressure on Salesforce storage limits and governor limits. Considering that the core system handles 10 million transactions per day, this approach is neither scalable nor cost-effective. It also introduces data latency, because ETL processes are typically run in batches and cannot ensure real-time synchronization.

D is a fragile workaround. While using an iframe might allow data from the core banking system to be embedded into the Salesforce community UI, it lacks proper integration with Salesforce’s security, access controls, and data model. This method is not secure, difficult to maintain, and offers poor user experience. Moreover, it prevents Salesforce from using native tools to control or report on the data being displayed.

In conclusion, Salesforce Connect is the best practice for this use case. It allows real-time access to external data using a standard data model, without consuming Salesforce storage, and provides the flexibility to expose this data securely to community users.



Question 6:

Northern Trail Outfitters (NTO) operates in 34 countries and frequently changes the shipping services it uses to optimize delivery times and costs. Sales reps handle customers globally and must choose from valid shipping options per country and obtain shipping estimates from the selected service. What two solutions should an architect recommend? (Choose two.)

A. Invoke middleware service to retrieve valid shipping methods.
B. Store shipping services in a picklist that is dependent on a country picklist.
C. Use middleware to abstract the call to the specific shipping services.
D. Use Platform Events to construct and publish shipper-specific events.

Correct Answers: A, C

Explanation:

This use case requires a dynamic and scalable integration pattern that can adapt to frequent changes in third-party shipping services and serve a global customer base. The key functional needs are:

  1. Determining valid shipping options based on the customer's country.

  2. Obtaining real-time shipping estimates from the appropriate shipping provider.

  3. Supporting frequent changes in shipping providers without hard-coding logic in Salesforce.

Let’s evaluate each of the options:

A is correct. Invoking a middleware service to retrieve valid shipping methods is an ideal architectural choice. Instead of hardcoding the logic in Salesforce, middleware can manage the constantly changing list of services and business rules. The middleware layer can query its configuration or a backend system to return the country-specific shipping options, ensuring that sales reps always see up-to-date choices.

C is also correct. Using middleware as an abstraction layer means Salesforce does not need to know the internal details of each shipping provider. The middleware handles which service to call, how to format the request, and how to process the response. This abstraction is essential for a setup where services are frequently added or removed, allowing seamless integration changes without modifying Salesforce code. Middleware can also ensure consistency in error handling, logging, retries, and security.
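
A sketch of what that abstraction might look like from the Salesforce side follows; the Named Credential (Shipping_Middleware), the resource path, and the JSON shape are assumptions.

```apex
// Hedged sketch: Salesforce asks one carrier-agnostic middleware endpoint for
// valid options, and the middleware decides which shipping services to call.
public with sharing class ShippingService {

    public class ShippingOption {
        @AuraEnabled public String serviceName;
        @AuraEnabled public Decimal estimatedCost;
        @AuraEnabled public Integer estimatedDays;
    }

    @AuraEnabled(cacheable=true)
    public static List<ShippingOption> getOptions(String countryCode) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Shipping_Middleware/options?country=' +
                        EncodingUtil.urlEncode(countryCode, 'UTF-8'));
        req.setMethod('GET');

        HttpResponse res = new Http().send(req);
        // Adding or removing a shipping provider changes only the middleware,
        // not this Salesforce code.
        return (List<ShippingOption>) JSON.deserialize(
            res.getBody(), List<ShippingOption>.class);
    }
}
```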

Now let's look at the incorrect options:

B is incorrect. Storing shipping services in a dependent picklist is a rigid and static solution. Since NTO frequently adds or removes services, maintaining a picklist would be error-prone and require continuous administrative effort. Picklists are not ideal for dynamic data that changes based on external systems or business logic. Additionally, they cannot call out to external systems for real-time shipping estimates.

D is also incorrect. Platform Events are best used for event-driven architectures such as notifying systems about a transaction or a status update. While Platform Events can carry messages to external systems, they are not suitable for synchronous, real-time data retrieval—like when a sales rep wants to instantly get a list of valid shippers or a shipping cost. In this case, synchronous integration patterns using request-response via middleware are more appropriate.

To summarize, the best architectural solution involves using middleware to retrieve dynamic shipping options and abstract integration with external shippers, ensuring flexibility, scalability, and maintainability.

Therefore, the correct answers are A and C.



Question 7:

Universal Containers (UC) collaborates with third-party agencies on advertising banner designs. These design files are stored in an on-premise file system and can be accessed by both UC internal users and external agencies. Each conceptual design file is approximately 2.5 GB. UC wants to enable third-party agencies to view these files within their Salesforce community. What solution should the integration architect recommend?

A. Use Salesforce Files to link the files to Salesforce records and display the record and the files in the community.
B. Define an External Data Source and use Salesforce Connect to upload the files to an external object. Link the external object using Indirect lookup.
C. Create a custom object to store the file location URL: when a community user clicks on the file URL, redirect the user to the on-premise system file location.
D. Create a Lightning component with a Request and Reply integration pattern to allow the community users to download the design files.

Correct Answer: C

Explanation:

In this scenario, the integration architect must consider several constraints before recommending the best solution:

  • The design files are large (2.5 GB).

  • Files are stored on-premise and must be accessible to third-party agencies.

  • Community users must be able to view or download these files.

  • The solution must be efficient, secure, and avoid hitting Salesforce file storage limits or governor limits.

Let’s examine each option in detail:

A is not a good choice. Salesforce Files have a maximum file size limit of 2 GB, and although enhanced capabilities are available through Content Delivery and large file support, this option isn't practical or scalable for consistent 2.5 GB file uploads. Additionally, uploading such large files into Salesforce would consume expensive storage, and performance could suffer. Furthermore, transferring large files into Salesforce from an on-premise system would require complex middleware and scheduled syncing—complicating the architecture.

B is inappropriate for this use case. Salesforce Connect is intended for integrating structured data from external sources like databases or OData APIs using external objects—not large binary files. It doesn’t support uploading or referencing large files, especially those stored in file systems. Trying to treat binary files as records is technically unsound and not feasible via indirect lookup.

C is the most suitable and scalable approach. By creating a custom object in Salesforce that stores the URL of the file location in the on-premise system, UC can expose this object in the community. When a community user clicks the link, they are redirected to the file directly—avoiding the need to store or transfer the file through Salesforce. This keeps the file storage and download workload in the on-premise system, which is already set up for such use, and keeps Salesforce’s role lightweight and focused on orchestration. This method also scales easily with minimal storage cost.

D would require building a custom Lightning component and using a request-reply pattern to fetch files from the on-premise system. While technically possible, this approach would involve transmitting 2.5 GB files through Salesforce, which is inefficient, may exceed limits, and requires complex implementation. It also doesn't offer significant benefits over simply redirecting the user to the file location, as in Option C.

In conclusion, the best practice here is to store the file path or accessible URL in a custom object and let users securely access the file directly from the on-premise system. This avoids unnecessary data duplication and respects Salesforce platform limitations.



Question 8:

A business wants to automate the process of checking and updating the phone number type (mobile or landline) for all incoming sales calls. These are the conditions: the call center receives up to 100,000 calls per day, phone number classification is handled by an external API, and updates can be made every 6–12 hours using middleware hosted on-premise. Which component should an integration architect recommend to support Remote-Call-In and Batch Synchronization patterns?

A. Configure Remote Site Settings in Salesforce to authenticate the middleware.
B. Firewall and reverse proxy are required to protect internal APIs and resources being exposed.
C. ConnectedApp configured in Salesforce to authenticate the middleware.
D. An API Gateway that authenticates requests from Salesforce into the middleware (ETL/ESB).

Correct Answer: D

Explanation:

This use case revolves around a large-scale batch integration process where the middleware needs to push or pull data between Salesforce and external systems (specifically a number-type classification API). The volume is high (up to 100,000 records per day), but the operation doesn’t need real-time performance. This flexibility allows for using Batch Synchronization or Remote-Call-In patterns—both of which depend on middleware handling heavy data lifting securely.

Let’s analyze each option to determine the best architectural fit.

A is incorrect. Remote Site Settings in Salesforce are used when Salesforce initiates outbound calls to external systems (for example, via Apex callouts). But in this case, Salesforce is not calling out; rather, the middleware (hosted on-premise) is expected to connect into Salesforce—either to update records or retrieve them for processing. Remote Site Settings do not help authenticate or protect inbound integrations from external middleware.

B is not the best answer. Although firewalls and reverse proxies can provide security by shielding internal services and managing network access, they are infrastructure-level solutions, not integration components. The question specifically asks about a component that supports integration patterns like Remote-Call-In or ETL batch processing. Firewalls may be part of the network architecture, but they don’t handle authentication, routing, or API transformation, which are essential for secure and scalable integration.

C is partially correct but not ideal. A Connected App is primarily used when an external client (such as middleware) wants to connect to Salesforce, typically using OAuth 2.0 for authentication. While the Connected App is a necessary part of secure authentication, it’s not sufficient by itself for handling high-volume ETL jobs, request throttling, retries, or format transformations. It’s more of an authentication mechanism than a comprehensive integration gateway.

D is the best answer. An API Gateway sits between Salesforce and the middleware and provides a managed, secure interface for processing API calls. It can authenticate incoming requests using OAuth credentials from a Connected App, throttle or queue large volumes of requests, enforce policies, transform data formats, and log activity. This component is essential when implementing patterns like Remote-Call-In and Batch Synchronization, especially for enterprise-scale solutions involving frequent and large data updates. It abstracts internal APIs, allowing them to evolve without disrupting Salesforce or vice versa.

To summarize:

  • The business needs secure, scalable integration for batch processing and on-demand updates.

  • The middleware must access Salesforce securely and efficiently.

  • An API Gateway is purpose-built to support this use case—it handles authentication, traffic management, and routing, and integrates well with middleware platforms.

Therefore, the correct answer is D.
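
For completeness, the Salesforce-facing half of the Remote-Call-In pattern is often an Apex REST resource that the middleware (authenticated through a Connected App and routed via the gateway) calls to push classifications back in bulk. The URL mapping, request shape, and Phone_Type__c field below are assumptions.

```apex
// Hedged sketch of an inbound (Remote-Call-In) endpoint; names are assumptions.
@RestResource(urlMapping='/phoneTypeUpdates/*')
global with sharing class PhoneTypeUpdateResource {

    global class PhoneTypeUpdate {
        public String contactId;
        public String phoneType; // e.g. 'Mobile' or 'Landline'
    }

    @HttpPatch
    global static String applyUpdates() {
        List<PhoneTypeUpdate> updates = (List<PhoneTypeUpdate>) JSON.deserialize(
            RestContext.request.requestBody.toString(), List<PhoneTypeUpdate>.class);

        List<Contact> contacts = new List<Contact>();
        for (PhoneTypeUpdate u : updates) {
            contacts.add(new Contact(
                Id = Id.valueOf(u.contactId),
                Phone_Type__c = u.phoneType
            ));
        }
        // Partial success so one bad record does not fail the whole batch.
        Database.update(contacts, false);
        return contacts.size() + ' records processed';
    }
}
```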


Question 9:

Universal Containers (UC) is using a custom-built monolithic web service hosted on-premise to handle point-to-point integrations between Salesforce and several other systems, including a legacy billing system, a cloud-based ERP, and a data lake. The current system has tight interdependencies that lead to failures and performance issues. What should an architect suggest to improve integration performance and system decoupling?

A. Re-write and optimize the current web service to be more efficient.
B. Leverage modular design by breaking up the web service into smaller pieces for a microservice architecture.
C. Use the Salesforce Bulk API when integrating back into Salesforce.
D. Move the custom monolithic web service from on-premise to a cloud provider.

Correct Answer: B

Explanation:

The core issue in this scenario is not merely about performance or location, but about tight coupling and lack of modularity between integrated systems. These point-to-point connections make the integration brittle—when one part fails, the rest can easily follow. This is a classic problem in monolithic architectures that try to do too much as a single unit.

Let’s evaluate the options in the context of system decoupling and integration resiliency.

A is not the correct answer. While re-writing or optimizing a monolithic service might offer some performance improvement, it does not address the underlying architectural problem of tight coupling. If the service remains monolithic, it still acts as a single point of failure. Optimizations might make the service faster, but they won’t improve fault isolation, reusability, or maintainability.

B is the best answer. Microservice architecture promotes modularization by splitting the monolithic application into independent, loosely coupled services that can be developed, deployed, and scaled independently. This architecture aligns well with modern integration best practices. Each microservice can manage a single integration point—for instance, one microservice for Salesforce to ERP, another for Salesforce to billing, and so on. This decouples systems, reduces blast radius in the event of failure, improves fault tolerance, and makes it easier to test, deploy, and scale individual components. Microservices also align better with cloud-native patterns, containerization, CI/CD, and horizontal scaling.

C is a partial but not ideal answer. The Salesforce Bulk API is great for handling large volumes of data, particularly in asynchronous processes such as nightly syncs or ETL jobs. However, it addresses a tactical need (data volume) rather than the strategic need (architectural decoupling and integration robustness). Also, Bulk API is limited to operations into Salesforce, and the integration challenge here spans multiple systems.

D is also not correct. Lifting and shifting the monolithic service from on-premise to a cloud provider (e.g., AWS or Azure) might reduce infrastructure maintenance and improve scalability, but it does nothing to address the service’s design flaws. The tight coupling between systems will persist, just in a new hosting environment. You’ll have the same problems with a new IP address.

In summary, the optimal architectural strategy here is to decouple the integrations using a microservice approach, which isolates functionality, enhances resilience, and aligns with modern DevOps practices. By modularizing the service, UC can reduce failure impact, improve change agility, and gain better observability.

Therefore, the correct answer is B.



Question 10:

A new Salesforce program has a broad requirement: business processes in Salesforce must involve data updates between internal systems and Salesforce. Which three specific details should a Salesforce Integration Architect gather to properly define the integration architecture for this program? (Choose three.)

A. Integration Style - Process-based, Data-based, and Virtual integration.
B. Timing aspects, real-time/near real-time (synchronous or asynchronous), batch and update frequency.
C. Source and Target system, Directionality, and data volume & transformation complexity, along with any middleware that can be leveraged.
D. Integration skills, SME availability, and Program Governance details.
E. Core functional and non-functional requirements for User Experience design, Encryption needs, Community, and license choices.

Correct Answers: A, B, C

Explanation:

When planning for integration architecture in a Salesforce program, an architect must focus on technical, operational, and architectural parameters that define how systems interact. The goal is to ensure reliable, secure, and performant data flow between Salesforce and other systems.

Let’s break down the correct and incorrect choices.

A. Integration Style - Process-based, Data-based, and Virtual integration.

Correct.
Understanding the integration style is fundamental. For example:

  • Process-based means integrating based on business workflows (e.g., an order created in Salesforce triggers fulfillment in an ERP system).

  • Data-based focuses on synchronizing data (e.g., account data between Salesforce and an HR system).

  • Virtual integration allows real-time access to external data without storage in Salesforce (e.g., using Salesforce Connect).

These styles inform decisions on tools, APIs, and performance expectations.

B. Timing aspects, real-time/near real-time (synchronous or asynchronous), batch and update frequency.

Correct.
Timing and frequency dictate the integration method:

  • Real-time (synchronous): Used when immediate data consistency is critical.

  • Near real-time or asynchronous: Used when quick updates are required but can tolerate some delay.

  • Batch: Ideal for large-volume data that can be processed during off-peak hours.

These aspects impact API limits, latency, and user experience, and they help determine whether to use Platform Events, Apex callouts, ETL tools, or other mechanisms.

C. Source and Target system, Directionality, and data volume & transformation complexity, along with any middleware that can be leveraged.

Correct.
Identifying which systems send/receive data, data volume, and the nature of the transformations is vital for choosing the right integration architecture:

  • Directionality helps determine data flow (uni-directional vs. bi-directional).

  • Data volume affects performance, API limits, and job scheduling.

  • Transformation complexity indicates whether middleware or tools like MuleSoft are needed.

This information directly influences design decisions such as API usage, queuing models, and middleware orchestration.

D. Integration skills, SME availability, and Program Governance details.

Incorrect.
While these are important for project planning and execution, they are not directly related to integration architecture design. They are part of resourcing and delivery, not the technical foundation needed for integration pattern selection.

E. Core functional and non-functional requirements for User Experience design, Encryption needs, Community, and license choices.

Incorrect.
These are more relevant to platform architecture and UX design, not core integration architecture. Encryption needs may influence data handling but are typically addressed once integration patterns and channels are selected. Community and licensing details, though important, don’t shape integration methods.

The integration architect must deeply understand the style of integration, timing/latency expectations, system roles, data movement patterns, and technical constraints. This information enables the design of scalable, maintainable, and secure integration solutions.

Therefore, the correct answers are: A, B, C.

