MCPA MuleSoft Certified Platform Architect Level 1 – Implementing Effective APIs Part 6

  1. Parallel API Invocation

Now let us see how we can improve the previous approach, which is calling a Fallback API, to make it a bit more performant. Because it is a fallback API, it comes into play only after all the retries are done, right? So it is a bit costly from the perspective of response times. To overcome that, we can do the API invocations in parallel. So what does that mean? Consider the case where the possibility of a failure in the API invocation is generally high, okay? That is, you know from your past history or statistics that the system you are going to interact with is prone to failures, and the API client performing this API invocation knows that risk.

So it is really important, like we discussed; maybe it is order acceptance, right? It is important for the organization, and you know that you have a Fallback API you can call. Then, instead of calling it only after the retries, depending on the nature of your API, you may choose to call it in parallel, at the same time you call your actual API. Okay? It is not compulsory that you do it in parallel to improve on the previous approach, you see? It depends on how you want to design it. Okay?

You can still keep the fallback as the last thing you do if you are not worried much about the response times. But let's say you think, "No, this is a very critical API, so let us do it in parallel." If you have the resources, say a bigger worker vCore, then the invocation of the primary API and the Fallback API may be performed in parallel. So if the primary API does not respond in time but the Fallback API gives you a response, then the result from the Fallback API can be used instead of the primary one.

So overall, only a little time is lost, because the two API invocations are performed in parallel; instead of doing them serially, you do them in parallel. This is a strategy that puts increased load on the application network because of the extra calls, essentially doubling the number of API invocations. But it keeps your response times close to the actual API's performance. Okay? That is why you should use this in exceptional cases only. Okay, like I said, if your API is really important and critical for your business, you can go with this level of implementation in your API. Okay, let us move on to the next lecture in this section. Happy learning.
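The parallel-invocation idea above can be sketched in generic terms. This is a minimal illustration, not a Mule implementation: the API names and the order payload are made up, and the primary call is simulated as failing so the fallback path is exercised.

```python
import concurrent.futures

# Hypothetical stand-ins for the primary and fallback APIs
# (names and payloads are illustrative, not from the course).
def call_primary_api(order):
    # Simulate a primary API that does not respond in time.
    raise TimeoutError("primary API did not respond in time")

def call_fallback_api(order):
    # The fallback API accepts the order for later processing.
    return {"status": "ACCEPTED_PENDING_VALIDATION", "order": order}

def invoke_with_parallel_fallback(order, timeout=2.0):
    """Fire the primary and the fallback API calls at the same time.

    If the primary call succeeds within the timeout, its result is used;
    otherwise the result of the parallel fallback call is used instead.
    Note this doubles the number of API invocations per request.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        primary = pool.submit(call_primary_api, order)
        fallback = pool.submit(call_fallback_api, order)
        try:
            return primary.result(timeout=timeout)
        except Exception:
            # The fallback has been running in parallel all along,
            # so only a little extra time is lost here.
            return fallback.result(timeout=timeout)

result = invoke_with_parallel_fallback({"orderId": 42})
```

Because both futures start together, the latency in the failure case is roughly the slower of the two calls, not their sum as in the serial retry-then-fallback approach.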

  2. Cached Fallback Results

The next approach in this series is using a previously cached result as a fallback, after the retries of an API invocation are exhausted and the Fallback API has also failed, which is a very, very rare scenario. But still, things happen, right? Bad things always happen, and say both of them failed: your retries failed and the fallback has failed. Then the last approach we can use is to take a previously cached API response and give it back as the response to your API clients. In other words, the API clients need to implement some kind of client-side caching, where the results of the API invocations are preserved. Okay? So you need to start caching while the APIs are actually working, because after they fail, you cannot get things ready in the cache, right?

So you have to keep caching the responses using some unique key that represents the particular request. Say, if it is again our validate external IDs example, then the customer ID can be the key for the customer-related responses, the item IDs can be the keys for the item-related validation responses, and the shipment location ID can be the key for the shipment-location responses. If the results were cached before, okay, then the API invocation can look up that cache in the time of need, when the actual APIs are failing, to check whether there was any validation done recently using the customer ID, shipment location ID, or item IDs as the key. If that validation says they are valid, then the same result can be used to proceed further. Okay? Client-side caching is again generally governed by some rules, okay? The same rules were discussed for HTTP caching before, during the NFR section. In the same way, it is better to implement this only for the safe HTTP methods that can be cached, okay? As per the HTTP rules we have to honor, those remain GET, HEAD, and OPTIONS, okay?

And the HTTP caching semantics must be honored again, to make sure you implement your API according to the meaning of these HTTP method verbs only. In practice, it is better to serve a slightly stale or slightly outdated cached HTTP response instead of completely failing the API invocation because the API is currently down, right? So again, it is not a straightforward rule or something you must implement. You have to weigh everything. These are possible options, in the order in which we can enhance our fault-tolerant API. If some are not applicable, you need not do them, but if they fit your API, you should implement them.
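The keyed, cached-fallback lookup described above can be sketched generically. This is an illustrative sketch only: the key format, the TTL value, and the helper names are assumptions, not from the course, and a real Mule application would use the Object Store rather than an in-memory dict.

```python
import time

# Client-side cache of successful validation responses, keyed by the ID
# in the request (customer ID, item ID, or shipment location ID).
_validation_cache = {}
CACHE_TTL_SECONDS = 300  # assumed TTL: how long a cached result stays usable

def validate_with_cached_fallback(key, call_api):
    """Try the live validation API first; on failure, fall back to a
    previously cached (possibly slightly stale) response for the same key."""
    try:
        response = call_api(key)
        _validation_cache[key] = (time.time(), response)  # cache while healthy
        return response
    except Exception:
        cached = _validation_cache.get(key)
        if cached is not None:
            stored_at, response = cached
            if time.time() - stored_at <= CACHE_TTL_SECONDS:
                return response  # slightly stale beats failing completely
        raise  # nothing usable in the cache; propagate the failure

def healthy_api(key):
    return {"id": key, "valid": True}

def failing_api(key):
    raise ConnectionError("validation API is down")

# While the API works, the response gets cached under the customer ID...
validate_with_cached_fallback("CUST-1001", healthy_api)
# ...so a later outage for the same key can still be served from the cache.
result = validate_with_cached_fallback("CUST-1001", failing_api)
```

The key point is the ordering: the cache is populated during normal operation, and consulted only after the live call (and, in the full strategy, the retries and the Fallback API) has failed.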

Okay? But one thing that has to be remembered is: don't blindly cache all the responses. You have to see what kind of API it is. If you know you will definitely get a unique request every time, then even if it is a safe HTTP method like GET, HEAD, or OPTIONS, it is not recommended to enable this caching. Why? Because if the keys are unique, then almost every request gets cached, and there will be a huge footprint on your API application's resources, like the memory, meaning the RAM, with no use. Because anyway, in the time of need you are not going to find anything in the cache, since the keys are unique.

At the same time, there will be a big memory footprint in your application as well. So this will add unnecessary processing overhead with no benefit. You have to just see which is the better way and implement it that way. Mule, as you already know, has options to implement this out of the box using the Cache scope or the Object Store connector. Okay, let us move on to the last strategy for fault-tolerant API implementation in this course. Happy learning.
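One way to see the unique-key problem concretely is with a size-bounded (LRU) cache; Mule's Cache scope and Object Store offer comparable eviction and TTL controls out of the box. This is a generic sketch with made-up keys and sizes, just to show why mostly-unique keys make caching pure overhead.

```python
from collections import OrderedDict

class BoundedCache:
    """A tiny LRU cache with a hard entry limit, so that mostly-unique
    keys cannot grow the memory footprint without bound. Sketch only."""

    def __init__(self, max_entries=1000):
        self.max_entries = max_entries
        self._entries = OrderedDict()

    def get(self, key):
        if key in self._entries:
            self._entries.move_to_end(key)  # mark as recently used
            return self._entries[key]
        return None  # cache miss

    def put(self, key, value):
        self._entries[key] = value
        self._entries.move_to_end(key)
        if len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)  # evict least recently used

cache = BoundedCache(max_entries=2)
cache.put("CUST-1", "valid")
cache.put("CUST-2", "valid")
cache.put("CUST-3", "valid")  # limit exceeded: "CUST-1" is evicted

# With unique keys, earlier entries are evicted before they are ever
# reused, so the cache never produces a hit in the time of need.
unique_keys_never_hit = cache.get("CUST-1") is None
```

If every request carries a fresh key, each `put` is wasted work plus wasted memory, which is exactly why caching should be skipped for APIs whose requests are known to be unique every time.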
