MCPA MuleSoft Certified Platform Architect Level 1 – Implementing Effective APIs Part 3

  1. CloudHub Load Balancers

In this lecture, let us understand the CloudHub load balancers. One form of the CloudHub load balancer service is the CloudHub shared load balancer (SLB). Every Mule application deployed to CloudHub receives a DNS entry pointing to the CloudHub shared load balancer. Okay? So the CloudHub shared load balancer in a region is shared by all Mule applications in that region.

And the CloudHub shared load balancer builds on top of the AWS Elastic Load Balancer. Okay? If you are acquainted with AWS technology, you will know this ELB concept, the Elastic Load Balancer, which MuleSoft uses to provision the CloudHub shared load balancer.

So the HTTP and HTTPS requests that API clients send to the CloudHub shared load balancer on ports 80 and 443 respectively are forwarded by the shared load balancer to one of the Mule application's CloudHub workers. Okay? The distribution is approximately round-robin, and the requests reach the Mule application on ports 8081 and 8082 respectively.

If the request comes in on port 80 via HTTP, it goes to 8081; if it comes in on port 443 via the HTTPS protocol, it goes to 8082. We discussed this before, many times, with the firewall rules. The CloudHub shared load balancer therefore has to maintain the protocol state, HTTP or HTTPS, to correctly direct the traffic to the right application port permitted by the firewall rules.

So by exposing only an HTTP endpoint on port 8081 or an HTTPS endpoint on port 8082, the Mule application determines the protocol available to its API clients. Okay? So it has to be communicated that if it is HTTPS, port 8082 is used, or else it is 8081, and the firewall rules should be set accordingly.

So only Mule applications exposing an HTTP or HTTPS endpoint on port 8081 or 8082 can be used with the CloudHub shared load balancer. Okay? There is no other way; no other port can be used if you want to make use of the shared load balancer. Meaning, if you are on shared workers in CloudHub and using the SLB, then you cannot opt for any custom port in your API implementation.

In the API implementation, you cannot choose a port like 8085 or 8585, something different, and expect to reach that Mule worker application via the SLB or the direct worker URL. Okay? It is not possible. As we discussed, for the SLB the default firewall rules are compulsory: always 8081 for HTTP and 8082 for HTTPS, and you have to use one of these ports, based on the protocol, HTTP or HTTPS, for any request going through the shared load balancer. Also, the shared load balancer terminates the TLS connections and uses its own server certificate, so we cannot import our custom certificates into the SLB; this is also something we discussed before.
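To make this concrete, here is a minimal sketch of the application side (the config and flow names are made up for illustration). On CloudHub, a Mule 4 application conventionally binds its HTTP listener to host 0.0.0.0 and the reserved property http.port, which CloudHub sets to 8081 (https.port is set to 8082), and that is exactly what lets the shared load balancer reach the worker:

    <!-- Hypothetical config; CloudHub sets the reserved property http.port to 8081 -->
    <http:listener-config name="HTTP_Listener_config">
        <http:listener-connection host="0.0.0.0" port="${http.port}"/>
    </http:listener-config>

    <flow name="orders-xapi-main-flow">
        <!-- Reachable through the SLB on port 80, forwarded to 8081 on the worker -->
        <http:listener config-ref="HTTP_Listener_config" path="/orders"/>
        <set-payload value="OK" mimeType="text/plain"/>
    </flow>

If the listener were bound to, say, 8585 instead, neither the SLB nor the default firewall rules would route any traffic to it.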

Let us now discuss the CloudHub dedicated load balancers. Okay, so the other form of the CloudHub load balancer service is the CloudHub dedicated load balancer (DLB), which is available only to Mule applications deployed to an Anypoint VPC. Okay, so for the applications that are deployed to an Anypoint VPC, the CloudHub dedicated load balancer can be used. But at the same time, the CloudHub shared load balancer is also still available. So they get both the shared load balancer as well as the dedicated load balancer, unless the organization decides to drop the shared load balancer and not expose the applications through it.

So one or more CloudHub dedicated load balancers can be associated with an Anypoint VPC. It is not necessary that only one DLB exists; there can be many. Each CloudHub dedicated load balancer receives a private IP address from the address range of the VPC. We discussed this already, right? So a private address will always be assigned from the CIDR range used when creating the Anypoint VPC, as well as one public IP address, which is not under the control of the VPC administrators. Just like for the SLB, the HTTP and HTTPS requests that hit the DLB on ports 80 and 443 respectively are forwarded by the CloudHub dedicated load balancer to the Mule applications via ports 8091 and 8092 respectively, like we discussed before.

So because the CloudHub dedicated load balancer sits within the Anypoint VPC, the traffic between it and the CloudHub workers is internal to the VPC and can therefore use the customary internal server ports 8091 or 8092. Okay? It stays completely within the VPC. Flexible mapping rules can be defined for DLBs as well, for manipulating the request URLs and selecting the CloudHub Mule applications to whose workers the requests are forwarded. It is not necessary that a long worker URL is carried forward as-is; we can manipulate it and give a short URL path in the base URL of the DLB. Then, based on the resource in the API, the request can be forwarded to the correct worker by defining proper mapping rules, as in the sketch below.
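For instance, a hypothetical mapping rule on a DLB (the DLB name, app name, and paths are all made up) could look like this:

    Input path:   /orders/v1/
    Target app:   orders-papi-v1
    Upstream URI: /api/

With this rule, a client call to https://my-dlb.lb.anypointdns.net/orders/v1/status would be forwarded to the worker of the orders-papi-v1 application as a request to /api/status.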

The upstream protocol for the VPC-internal communication between the CloudHub dedicated load balancer and the CloudHub workers can be configured to be HTTPS or HTTP. Okay, so it is up to the setup. You can decide that the clients hit the CloudHub DLB with HTTPS and then terminate TLS there and proceed with just HTTP for the internal communication from the DLB to your workers, or you can keep HTTPS all the way. This is unlike the shared load balancer, where the protocol is maintained all the way until the request hits the workers. The IP addresses of permitted API clients can also be whitelisted on a CloudHub dedicated load balancer. Okay, so we have a feature where we can create, say, two DLBs: one DLB for the internal organization, where only the internal teams can use the DLB URL to access the APIs, and one for the public, which can be accessed by anyone over the internet.

So in these scenarios we can whitelist the DLBs with the particular IPs of API clients. If your organization has an IP range, then we can whitelist one of the DLBs, the internal one, with the IP range of your organization, so that only API clients in your organization can hit it, while the public one can be accessed without whitelisting. Okay? The CloudHub dedicated load balancers also perform TLS termination, just like the SLB. But the benefit here, the extra feature we get, is that dedicated load balancers must compulsorily be configured with server-side certificates, a public/private key pair, for the HTTPS endpoints that are exposed via the DLB. Okay? If you are exposing HTTPS via a DLB, then you have to import your server-side certificates. But optionally you can import client-side certificates as well if you want to enforce TLS mutual authentication. Okay? If a client wants to do two-way TLS mutual authentication, then client-side certificates can be added to the CloudHub dedicated load balancer so that it performs TLS mutual authentication.

What you are seeing in the picture in front of you is an API implementation, a Mule application, deployed to an Anypoint VPC under the management of the US Anypoint Platform control plane. Okay, so this particular app exposes HTTP and HTTPS endpoints on ports 8081 and 8082 for the public, as well as on ports 8091 and 8092 for the internal VPC. Okay, the blue arrows depict the default access routes for API clients inside and outside this VPC, both directly hitting the CloudHub workers.

And you can see as well that they are hitting the workers via the CloudHub shared load balancer and also via the CloudHub dedicated load balancer. Okay, but the DLB actually needs mapping rules. That is why, within the DLB icon, you see small boxes and arrows; they represent the mapping rules, but we are not explicitly showing the mapping rules here in this diagram. Okay, that is about the load balancers on CloudHub. Let us move on to the next lecture in the course. Happy learning.

  1. Fault-tolerant API invocations

Hi, in this lecture, let us see how we can design API clients in a fault-tolerant way. Before we see that, let us first understand what factors generally cause failures in application networks. We already know application networks are the connectivity, the mixture, of many small, reusable, modular APIs, right? And the connection of all such different APIs, built in an API-led connectivity manner, forms the application network. We already learned this in one of the lectures in the previous sections, correct?

Now, as the application network grows, there will be many APIs calling many different APIs, especially in the process layer, because it is the orchestration layer. That is where a lot of orchestration happens, so that is the place from which most of the APIs will be called to get the business outcome or functional behavior out of the experience API. Correct? So this is good. Okay? The more APIs we have, the more we are reusing, the more modular they are, and the more mature your application network is.

It is good in that aspect, no doubt. But there is one thumb rule in integration you always have to remember: the more integration points there are, the greater the chance, or the risk, of failures. Okay? Now apply that thumb rule to the application network. The process layer will have, or may have, complex orchestrations calling many APIs, and the smaller, more modular, and more reusable our APIs are, the more such APIs we have to call to achieve the desired behavior. That means we are increasing the number of integration points. Every API call, every call to an external application, is an integration point. There will be many such points, so the risk of failure is higher. So these are the main reasons why such failures occur. Okay? Even if your application network is mature, with lots of reusable APIs that are small and modular in nature, without mitigating these failures or handling them in a proper way, it is still not that beneficial.

Okay? So we will see the different ways we can try to mitigate these particular kinds of failures in the application network. They do not eliminate the failures, but they can mitigate them if you apply some of the principles we are going to discuss now. Okay, let us move on to that part, how we can apply them. So you already know how the API invocations happen. You know that there is an API client that calls the API implementation via the API interface. Correct? Technically speaking, the API client is like Postman calling the API implementation, which is the back-end system's behavior sitting in a process layer or similar, via API Manager. So Postman calls the Runtime Manager application via API Manager, all good.

But the thing is, this API implementation, at least in MuleSoft, is implemented via the three-layered architecture: Experience, Process, and System layers. API clients always call the Experience layer, good. That in turn calls the Process layer, which in turn calls the System layer. Now, like we discussed a moment back, the Process layer is the candidate with the higher chance of failures because of the orchestrations.

So what we have to do when we say we have to implement in a fault-tolerant way, or indirectly what a fault-tolerant API invocation means, is that during this call process, if any failure occurs in the API implementation, in the orchestration place, which is the Process layer, we have to try to mitigate that error so that it will not cause further cascading errors. Okay? Meaning those errors will not cascade or propagate all the way to the API clients and further up to whoever is calling that API client, okay? We have to mitigate in such a way that we handle it properly and return a proper response, instead of just broadcasting or propagating or cascading the actual error all the way back.

Okay? Indirectly, if the Process layer fails with an exception or error, then it should not propagate back and cause a failure in the Experience layer again. Okay? It should not cause a thrown error in the Experience layer. Suppose we handle it properly in the Process layer and return a proper response; it could be an error response for a business failure or a validation error, and that is fine. We are talking about failures here, not API functional errors or validation errors, okay? Please try to understand the difference. Failures are application failures or system failures. Those failures cause exceptions; in the JVM, you can think of something like java.io.IOException. You know that, right? So we are talking about those kinds of failures.

Okay? So those kinds of failures in the Process layer should not go back to the Experience layer and cause another failure there, okay? If we handle them and send a proper 400 Bad Request response back, or some 500 Internal Server Error response, then it will not fail in the Experience layer, okay? It just goes back as a response, and the Experience layer simply forwards the same response back to the API clients. Then there is no failure on the API client side or the Experience layer side, okay? It is just an error response going back smoothly; the actual failure happened only in the Process layer. But let us say you do not handle it; then this error may cause some 500 HTTP-level error in the Experience layer, which, if it is also not handled properly, may cause an error in Postman, say "could not parse the response" or something, okay? Which is not a proper way.
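As a minimal sketch of this idea (the flow name, config names, and JSON body here are hypothetical), a Process-layer flow in Mule 4 can catch a connectivity failure and turn it into a clean HTTP error response instead of letting the raw exception cascade upward:

    <flow name="orders-papi-main-flow">
        <http:listener config-ref="HTTP_Listener_config" path="/orders">
            <!-- Use the status code set by the error handler; default to 200 -->
            <http:response statusCode="#[vars.httpStatus default 200]"/>
        </http:listener>
        <try>
            <http:request config-ref="System_API_config" path="/orders" method="GET"/>
            <error-handler>
                <!-- Handle the failure here so it does not cascade to the Experience layer -->
                <on-error-continue type="HTTP:CONNECTIVITY, HTTP:TIMEOUT">
                    <set-variable variableName="httpStatus" value="500"/>
                    <set-payload value='{"error": "Upstream system unavailable"}' mimeType="application/json"/>
                </on-error-continue>
            </error-handler>
        </try>
    </flow>

The Experience layer then simply receives a well-formed 500 response it can pass along, rather than a failure of its own.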

And those were just fast failures. Suppose the failures are a bit more costly: say the Process layer is stuck in a five- or ten-minute connection-establishment call to the back-end system. Then it should not cause the Experience layer to also wait for those 5 to 10 minutes, or, further up, the API client to wait for that time. Okay? These kinds of things unnecessarily consume threads, cause memory exceptions, and bring the systems down. Okay? So that is why we have to build in a fault-tolerant way. When we say fault-tolerant, we mean to mitigate these kinds of failures (failures, not errors) so that they do not happen or cascade all the way through all the APIs in the network.

Okay? We have to try to mitigate at the source as much as possible. All right, so there are some well-established approaches for implementing these fault-tolerant invocations. If we take care of the API invocations by adopting these established approaches, or a combination of them, applying whichever of them fit a particular API, then, because these are established approaches, they help mitigate the failures as much as possible. Let us see what they are.

The first one is implementing timeouts. Number two, we can implement retries in the APIs before we just forward the error back. Number three is the circuit breaker pattern. Okay, this circuit breaker pattern is a very popular one; if you are in a Java project especially, or if you have read the book Release It! by Michael T. Nygard, then you will understand what a circuit breaker pattern is. It is a very popular pattern in the integration world. Number four is fallback API invocations, number five is parallel API invocations, number six is cached fallback results, and the last one is static fallback results. So let us see what these are and how we can implement or adopt these particular approaches in various scenarios, one by one; a small combined sketch follows below. Alright? Happy learning.
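As a quick preview, here is a minimal Mule 4 sketch combining three of these approaches: a timeout, retries, and a static fallback result (the flow name, config names, paths, and JSON are all made up for illustration; a cached fallback, approach six, would typically use the Object Store connector to return the last good response instead of a static one):

    <!-- Hypothetical flow; names and paths are illustrative -->
    <flow name="customers-papi-main-flow">
        <http:listener config-ref="HTTP_Listener_config" path="/customers"/>
        <try>
            <!-- Retries: attempt the call up to 3 times, 2 seconds apart -->
            <until-successful maxRetries="3" millisBetweenRetries="2000">
                <!-- Timeout: fail fast after 5 seconds instead of hanging for minutes -->
                <http:request config-ref="System_API_config" path="/customers"
                              method="GET" responseTimeout="5000"/>
            </until-successful>
            <error-handler>
                <!-- Static fallback result: returned once all retries are exhausted -->
                <on-error-continue>
                    <set-payload value='{"customers": [], "source": "static-fallback"}'
                                 mimeType="application/json"/>
                </on-error-continue>
            </error-handler>
        </try>
    </flow>

The circuit breaker is not sketched here; Mule 4 has no built-in circuit breaker scope, so it is usually implemented on top of these basics with an Object Store keeping failure counts, or with a library.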
